Browsing by Author "Giunchi, Daniele"
Item: 3D Sketching for Interactive Model Retrieval in Virtual Reality (ACM, 2018)
Authors: Giunchi, Daniele; James, Stuart; Steed, Anthony
Editors: Aydın, Tunç; Sýkora, Daniel

We describe a novel method for searching 3D model collections using free-form sketches drawn within a virtual environment as queries. As opposed to traditional sketch retrieval, our queries are drawn directly onto an example model. Using immersive virtual reality, the user can express their query through a sketch that demonstrates the desired structure, color, and texture. Unlike previous sketch-based retrieval methods, users remain immersed within the environment, without relying on textual queries or 2D projections that can disconnect them from it. We evaluate the precision of several descriptors over a set of test queries and select the most accurate one. We show how a convolutional neural network (CNN) can create multi-view representations of colored 3D sketches. Using such a descriptor representation, our system retrieves models rapidly, giving the user an interactive way to navigate large object datasets (a minimal sketch of such a retrieval pipeline follows the listing). Through a user study we demonstrate that, using our VR 3D model retrieval system, users can perform searches more quickly and intuitively than with a naive linear browsing method. With our system, users can rapidly populate a virtual environment with specific models from a very large database, so the technique has the potential to be broadly applicable in immersive editing systems.

Item: Selecting Texture Resolution Using a Task-specific Visibility Metric (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Wolski, Krzysztof; Giunchi, Daniele; Kinuwaki, Shinichi; Didyk, Piotr; Myszkowski, Karol; Steed, Anthony; Mantiuk, Rafal K.
Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon

In real-time rendering, the appearance of scenes is greatly affected by the quality and resolution of the textures used for image synthesis. At the same time, the size of those textures determines the performance and memory requirements of rendering. Finding the optimal texture resolution is therefore critical, but also non-trivial, since the visibility of texture imperfections depends on the underlying geometry, illumination, interactions between several texture maps, and viewing positions. Ideally, we would like to automate the task with a visibility metric that predicts the optimal texture resolution. To maximize its performance, such a metric should be trained on the given task; this, however, requires sufficient user data, which is often difficult to obtain. To address this problem, we develop a procedure for training an image visibility metric for a specific task while reducing the effort required to collect new data. The procedure involves generating a large dataset using an existing visibility metric and then refining that dataset with the help of an efficient perceptual experiment. The refined dataset is used to retune the metric. In this way, we augment sparse perceptual data into a large number of per-pixel-annotated visibility maps, which serve as the training data for application-specific visibility metrics. While our approach is general and could potentially be applied to other image distortions, we demonstrate an application in a game engine, where we optimize the resolution of various textures, such as albedo and normal maps (sketches of the retuning loop and the resolution-selection step follow the listing).
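For the first item, a minimal sketch of a multi-view descriptor retrieval pipeline, assuming PyTorch/torchvision. The ResNet-18 backbone, mean pooling over views, and cosine similarity are illustrative stand-ins (the abstract does not specify them), and `build_encoder`, `multiview_descriptor`, `retrieve`, and the `(V, 3, H, W)` view layout are hypothetical names and conventions, not the paper's published code.

```python
# Minimal sketch, assuming PyTorch/torchvision; backbone, pooling, and
# similarity are illustrative choices, not the paper's published pipeline.
import torch
import torch.nn.functional as F
import torchvision.models as models

def build_encoder():
    # Off-the-shelf ResNet-18 with the classifier head removed, used as a
    # per-view feature extractor (assumption: any pretrained CNN would do).
    net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    return torch.nn.Sequential(*list(net.children())[:-1]).eval()

@torch.no_grad()
def multiview_descriptor(encoder, views):
    # views: (V, 3, H, W) renders of the colored 3D sketch from V viewpoints,
    # already normalised for the backbone. Mean-pool the per-view features
    # into a single L2-normalised descriptor.
    feats = encoder(views).flatten(1)            # (V, 512)
    return F.normalize(feats.mean(dim=0), dim=0)  # (512,)

@torch.no_grad()
def retrieve(encoder, query_views, db_descriptors, k=5):
    # db_descriptors: (N, 512) normalised descriptors precomputed for every
    # model in the collection; cosine similarity reduces to a dot product.
    q = multiview_descriptor(encoder, query_views)
    return torch.topk(db_descriptors @ q, k).indices
```

Precomputing `db_descriptors` offline for the whole collection keeps each interactive query to one forward pass plus a matrix-vector product, which is consistent with the abstract's claim of rapid retrieval over a very large database.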
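For the second item, a minimal sketch of the retuning step, again assuming PyTorch. `metric_net`, `retune`, the 6-channel input layout, and the binary cross-entropy loss are assumptions, not the paper's actual architecture or training setup; `refined_maps` stands for the pseudo-labels produced by the existing metric after a sparse subset has been corrected by the perceptual experiment.

```python
# Minimal sketch, assuming PyTorch; architecture, loss, and data layout are
# placeholders for the paper's actual metric and training procedure.
import torch
import torch.nn.functional as F

def retune(metric_net, pairs, refined_maps, epochs=10, lr=1e-4):
    # metric_net: per-pixel visibility CNN taking reference and distorted
    # frames concatenated along the channel axis, (B, 6, H, W) -> logits
    # (B, 1, H, W). refined_maps: matching visibility maps -- mostly
    # pseudo-labels from an existing metric, with a sparse subset corrected
    # by the perceptual experiment.
    opt = torch.optim.Adam(metric_net.parameters(), lr=lr)
    metric_net.train()
    for _ in range(epochs):
        for (ref, dist), target in zip(pairs, refined_maps):
            pred = metric_net(torch.cat([ref, dist], dim=1))
            loss = F.binary_cross_entropy_with_logits(pred, target)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return metric_net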
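And a hedged sketch of how such a retuned metric could drive the texture-resolution selection the abstract demonstrates in a game engine. The `render_at` callback, the mip-level enumeration, and the visibility threshold are all illustrative assumptions; the actual engine integration is not described in the listing.

```python
# Minimal sketch of resolution selection with the retuned metric; render_at,
# the level ordering, and the threshold are assumptions.
import torch

@torch.no_grad()
def select_resolution(metric_net, render_at, levels, threshold=0.05):
    # render_at(level) -> (1, 3, H, W) frame rendered with the texture at
    # that mip level; levels are ordered from full resolution downwards.
    # Returns the coarsest level whose predicted per-pixel visibility of
    # artefacts stays below the threshold everywhere.
    metric_net.eval()
    reference = render_at(levels[0])  # full-resolution render as reference
    best = levels[0]
    for level in levels[1:]:
        distorted = render_at(level)
        vis = torch.sigmoid(metric_net(torch.cat([reference, distorted], dim=1)))
        if vis.max().item() < threshold:
            best = level   # degradation still imperceptible: try coarser
        else:
            break          # visible artefacts: stop at the previous level
    return best
```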