NeuroLens: Data‐Driven Camera Lens Simulation Using Neural Networks
dc.contributor.author | Zheng, Quan | en_US |
dc.contributor.author | Zheng, Changwen | en_US |
dc.contributor.editor | Chen, Min and Zhang, Hao (Richard) | en_US |
dc.date.accessioned | 2018-01-10T07:43:03Z | |
dc.date.available | 2018-01-10T07:43:03Z | |
dc.date.issued | 2017 | |
dc.description.abstract | Rendering with a full lens model can offer images with photorealistic lens effects, but it leads to high computational costs. This paper proposes a novel camera lens model, NeuroLens, to emulate the imaging of real camera lenses through a data‐driven approach. The mapping of image formation in a camera lens is formulated as imaging regression functions (IRFs), which map input rays to output rays. IRFs are approximated with neural networks, which compactly represent the imaging properties and support parallel evaluation on a graphics processing unit (GPU). To effectively represent the spatially varying imaging properties of a camera lens, the input space spanned by incident rays is subdivided into multiple subspaces, and each subspace is fitted with a separate IRF. To further raise the evaluation accuracy, a set of neural networks is trained for each IRF, and the output is calculated as the average output of the set. The effectiveness of NeuroLens is demonstrated by fitting a wide range of real camera lenses. Experimental results show that it provides higher imaging accuracy than state‐of‐the‐art camera lens models, while maintaining high efficiency for processing camera rays. | en_US |
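For illustration only, the sketch below shows one way the idea in the abstract could be organized in code: an IRF ensemble that regresses output rays from input rays, with the input space split into subspaces and each subspace served by several small networks whose predictions are averaged. The (x, y, dx, dy) ray parameterization, the radial subdivision, the network sizes, and the toy stand-in for lens ray tracing are all assumptions for this sketch, not details taken from the paper.

```python
# Minimal sketch (not the authors' implementation) of an IRF ensemble:
# input rays -> output rays, with input-space subdivision and per-subspace
# averaging over several small neural networks.
#
# Hypothetical choices: rays are (x, y, dx, dy) on an entrance plane, the
# subdivision is by radial distance from the optical axis, and training
# data comes from a toy mapping rather than a traced lens prescription.

import numpy as np
from sklearn.neural_network import MLPRegressor


def toy_lens_trace(rays):
    """Stand-in for tracing rays through a full lens model (training data source)."""
    x, y, dx, dy = rays.T
    r2 = x**2 + y**2
    # Simple nonlinear bending to mimic spatially varying refraction.
    out_dx = dx - 0.05 * x * (1.0 + 0.2 * r2)
    out_dy = dy - 0.05 * y * (1.0 + 0.2 * r2)
    return np.stack([x + out_dx, y + out_dy, out_dx, out_dy], axis=1)


class NeuroLensSketch:
    def __init__(self, n_subspaces=4, ensemble_size=3, radius=1.0):
        self.edges = np.linspace(0.0, radius, n_subspaces + 1)
        self.ensemble_size = ensemble_size
        self.models = [[] for _ in range(n_subspaces)]

    def _subspace(self, rays):
        # Assign each ray to a radial subspace of the input domain.
        r = np.hypot(rays[:, 0], rays[:, 1])
        return np.clip(np.digitize(r, self.edges) - 1, 0, len(self.models) - 1)

    def fit(self, rays, out_rays):
        idx = self._subspace(rays)
        for s, nets in enumerate(self.models):
            mask = idx == s
            for seed in range(self.ensemble_size):
                net = MLPRegressor(hidden_layer_sizes=(32, 32),
                                   max_iter=3000, random_state=seed)
                nets.append(net.fit(rays[mask], out_rays[mask]))
        return self

    def predict(self, rays):
        idx = self._subspace(rays)
        out = np.zeros((len(rays), 4))
        for s, nets in enumerate(self.models):
            mask = idx == s
            if mask.any():
                # Average the ensemble predictions for this subspace.
                out[mask] = np.mean([net.predict(rays[mask]) for net in nets],
                                    axis=0)
        return out


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    rays = rng.uniform(-0.7, 0.7, size=(4000, 4))
    lens = NeuroLensSketch().fit(rays, toy_lens_trace(rays))
    test = rng.uniform(-0.7, 0.7, size=(500, 4))
    err = np.abs(lens.predict(test) - toy_lens_trace(test)).mean()
    print(f"mean absolute error on held-out rays: {err:.4f}")
```

Averaging several independently trained networks per subspace follows the ensemble idea described in the abstract; it trades extra training time for lower prediction variance at evaluation.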
dc.description.number | 8 | |
dc.description.sectionheaders | Articles | |
dc.description.seriesinformation | Computer Graphics Forum | |
dc.description.volume | 36 | |
dc.identifier.doi | 10.1111/cgf.13087 | |
dc.identifier.issn | 1467-8659 | |
dc.identifier.pages | 390-401 | |
dc.identifier.uri | https://doi.org/10.1111/cgf.13087 | |
dc.identifier.uri | https://diglib.eg.org/handle/10.1111/cgf13087 | |
dc.publisher | © 2017 The Eurographics Association and John Wiley & Sons Ltd. | en_US |
dc.subject | camera lens simulation | |
dc.subject | neural networks | |
dc.subject | regression | |
dc.subject | lens effects | |
dc.subject | I.3.7 [Computer Graphics]: Three‐Dimensional Graphics and Realism - Raytracing | |
dc.title | NeuroLens: Data‐Driven Camera Lens Simulation Using Neural Networks | en_US |