Browsing by Author "Celikcan, Ufuk"
Item: Gaze-Contingent Perceptual Level of Detail Prediction (The Eurographics Association, 2023)
Surace, Luca; Tursun, Cara; Celikcan, Ufuk; Didyk, Piotr; Ritschel, Tobias; Weidlich, Andrea

New virtual reality headsets and wide field-of-view displays rely on foveated rendering techniques that lower the rendering quality for peripheral vision to increase performance without a perceptible quality loss. While the concept is simple, the practical realization and full exploitation of foveated rendering systems remain challenging. Existing techniques focus on modulating the spatial resolution of rendering or the shading rate according to the characteristics of human perception. However, most rendering systems also have a significant cost related to geometry processing. In this work, we investigate the problem of mesh simplification, also known as the level of detail (LOD) technique, for foveated rendering. We aim to maximize the amount of LOD simplification while keeping the visibility of changes to the object geometry under a selected threshold. We first propose two perceptually inspired visibility models for mesh simplification suitable for gaze-contingent rendering. The first model focuses on spatial distortions in the object silhouette and body. The second model accounts for the temporal visibility of switching between two LODs. We calibrate the two models using data from perceptual experiments and derive a computational method that predicts a suitable LOD for rendering an object at a specific eccentricity without objectionable quality loss. We apply the technique to the foveated rendering of static and dynamic objects and demonstrate the benefits in a validation experiment. Using our perceptually-driven gaze-contingent LOD selection, we achieve up to 33% extra speedup in the rendering performance of complex-geometry scenes when combined with the most recent industrial solutions, i.e., Nanite from Unreal Engine.

Item: NOVA: Rendering Virtual Worlds with Humans for Computer Vision Tasks (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021)
Kerim, Abdulrahman; Aslan, Cem; Celikcan, Ufuk; Erdem, Erkut; Erdem, Aykut; Benes, Bedrich and Hauser, Helwig

Today, the cutting edge of computer vision research greatly depends on the availability of large datasets, which are critical for effectively training and testing new methods. Manually annotating visual data, however, is not only a labor‐intensive process but also prone to errors. In this study, we present NOVA, a versatile framework to create realistic‐looking 3D rendered worlds containing procedurally generated humans with rich pixel‐level ground truth annotations. NOVA can simulate various environmental factors such as weather conditions or different times of day, and bring an exceptionally diverse set of humans to life, each having a distinct body shape, gender and age. To demonstrate NOVA's capabilities, we generate two synthetic datasets for person tracking. The first includes 108 sequences with varying levels of difficulty, such as tracking in crowded scenes or at nighttime, and aims to test the limits of current state‐of‐the‐art trackers. A second dataset of 97 sequences with normal weather conditions is used to show how our synthetic sequences can be utilized to train and boost the performance of deep‐learning based trackers. Our results indicate that the synthetic data generated by NOVA is a good proxy for the real world and can be exploited for computer vision tasks.
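To make the gaze-contingent LOD idea from the first item concrete, the following is a minimal, illustrative Python sketch that maps an object's angular eccentricity (its angle from the gaze direction) to a discrete LOD index. The threshold table, function names, and numeric values are hypothetical placeholders and are not the calibrated visibility models or the selection procedure from the paper.

```python
import math

# Illustrative only: hypothetical eccentricity thresholds, not the calibrated
# perceptual models from the paper. Each entry maps a minimum eccentricity
# (degrees from the gaze point) to an LOD index; higher indices mean coarser meshes.
ECCENTRICITY_TO_LOD = [
    (0.0, 0),    # foveal region: full-detail mesh
    (5.0, 1),
    (15.0, 2),
    (30.0, 3),   # far periphery: coarsest mesh
]

def eccentricity_deg(gaze_dir, object_dir):
    """Angle in degrees between the gaze direction and the direction to the object."""
    dot = sum(g * o for g, o in zip(gaze_dir, object_dir))
    norm = math.sqrt(sum(g * g for g in gaze_dir)) * math.sqrt(sum(o * o for o in object_dir))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def select_lod(gaze_dir, object_dir):
    """Pick the coarsest LOD whose eccentricity threshold the object exceeds."""
    ecc = eccentricity_deg(gaze_dir, object_dir)
    lod = 0
    for min_ecc, level in ECCENTRICITY_TO_LOD:
        if ecc >= min_ecc:
            lod = level
    return lod

# Example: an object roughly 20 degrees away from the gaze point gets LOD 2.
print(select_lod((0.0, 0.0, 1.0), (0.34, 0.0, 0.94)))
```

In a real renderer the thresholds would come from a perceptual visibility model rather than a fixed table, and the selection would also account for the visibility of switching between LODs over time, as the abstract describes.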
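The NOVA item describes procedurally generating scenes with varied environmental factors and diverse human attributes. The short Python sketch below illustrates only the general idea of sampling such scene configurations together with per-person ground truth; all parameter names, ranges, and categories are invented for illustration and do not reflect NOVA's actual configuration schema or rendering pipeline.

```python
import random
from dataclasses import dataclass, field

# Illustrative sketch only: hypothetical scene parameters, not NOVA's schema.
WEATHER = ["clear", "rain", "fog", "snow"]
TIME_OF_DAY = ["morning", "noon", "dusk", "night"]

@dataclass
class Person:
    person_id: int
    height_m: float
    age: int
    gender: str

@dataclass
class SceneConfig:
    weather: str
    time_of_day: str
    people: list = field(default_factory=list)

def sample_scene(num_people, seed=None):
    """Randomly sample one synthetic scene configuration with per-person attributes."""
    rng = random.Random(seed)
    people = [
        Person(
            person_id=i,
            height_m=round(rng.uniform(1.5, 1.95), 2),
            age=rng.randint(18, 80),
            gender=rng.choice(["female", "male"]),
        )
        for i in range(num_people)
    ]
    return SceneConfig(
        weather=rng.choice(WEATHER),
        time_of_day=rng.choice(TIME_OF_DAY),
        people=people,
    )

# Example: generate a few varied scene configurations for a synthetic sequence.
for s in range(3):
    cfg = sample_scene(num_people=20, seed=s)
    print(cfg.weather, cfg.time_of_day, len(cfg.people))
```

Because every person and environmental factor is generated programmatically, the sampled attributes double as error-free ground truth labels, which is the advantage over manual annotation highlighted in the abstract.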