Browsing by Author "Zheng, Changwen"
Item: NeuroLens: Data‐Driven Camera Lens Simulation Using Neural Networks
(© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Zheng, Quan; Zheng, Changwen; Chen, Min and Zhang, Hao (Richard)

Rendering with a full lens model can produce images with photorealistic lens effects, but at a high computational cost. This paper proposes a novel camera lens model, NeuroLens, which emulates the imaging of real camera lenses through a data‐driven approach. Image formation in a camera lens is formulated as imaging regression functions (IRFs), which map input rays to output rays. IRFs are approximated with neural networks, which compactly represent the imaging properties of a lens and support parallel evaluation on a graphics processing unit (GPU). To effectively represent the spatially varying imaging properties of a camera lens, the input space spanned by incident rays is subdivided into multiple subspaces, and each subspace is fitted with a separate IRF. To further raise evaluation accuracy, a set of neural networks is trained for each IRF, and the output is computed as the average of the set's outputs. The effectiveness of NeuroLens is demonstrated by fitting a wide range of real camera lenses. Experimental results show that it achieves higher imaging accuracy than state‐of‐the‐art camera lens models while maintaining high efficiency in processing camera rays.
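The core idea of the abstract — regressing a ray-to-ray mapping with a small neural network and averaging an ensemble of networks — can be sketched in a few lines. The following is a minimal illustration, not the paper's actual method: it fits a single global IRF (the paper subdivides the input space and fits one IRF per subspace) to a hypothetical 2D thin-lens mapping `(x, u) -> (x, u - x/f)` standing in for real lens ray tracing; the network sizes, learning rate, and `trace` function are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for tracing rays through a real lens (assumption, not the
# paper's data): a 2D thin lens with focal length f maps a ray with height x
# and slope u to (x, u - x/f).
f = 50.0
def trace(rays):
    x, u = rays[:, 0], rays[:, 1]
    return np.stack([x, u - x / f], axis=1)

class TinyMLP:
    """One-hidden-layer network approximating an imaging regression function."""
    def __init__(self, rng, hidden=16):
        self.W1 = rng.normal(0.0, 0.5, (2, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.5, (hidden, 2))
        self.b2 = np.zeros(2)

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)   # hidden activations
        return self.h @ self.W2 + self.b2

    def step(self, X, Y, lr=0.05):
        pred = self.forward(X)
        g = 2.0 * (pred - Y) / len(X)             # dMSE/dpred
        dW2 = self.h.T @ g
        dh = (g @ self.W2.T) * (1.0 - self.h**2)  # backprop through tanh
        dW1 = X.T @ dh
        self.W2 -= lr * dW2; self.b2 -= lr * g.sum(0)
        self.W1 -= lr * dW1; self.b1 -= lr * dh.sum(0)

# Training data: sampled incident rays and their traced output rays.
X = rng.uniform(-1, 1, (512, 2))
Y = trace(X)

# Ensemble for one IRF: the prediction is the average network output.
nets = [TinyMLP(rng) for _ in range(3)]
for net in nets:
    for _ in range(2000):
        net.step(X, Y)

def irf(rays):
    return np.mean([net.forward(rays) for net in nets], axis=0)

test_rays = rng.uniform(-1, 1, (64, 2))
err = np.abs(irf(test_rays) - trace(test_rays)).max()
print(f"max abs error on held-out rays: {err:.4f}")
```

Because every ray is evaluated by the same small matrix multiplies, the whole batch can be processed in parallel — the numpy batch dimension here plays the role of the GPU-parallel ray evaluation described in the abstract.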