Italian Chapter Conference
Browsing Italian Chapter Conference by Subject "Applied computing"
Now showing 1 - 13 of 13
Item Approximating Shapes with Standard and Custom 3D Printed LEGO Bricks (The Eurographics Association, 2021) Fanni, Filippo Andrea; Dal Bello, Alberto; Sbardellini, Simone; Giachetti, Andrea; Frosini, Patrizio and Giorgi, Daniela and Melzi, Simone and Rodolà, Emanuele
In this paper, we present a work-in-progress aimed at developing a pipeline for the fabrication of shapes reproducing digital models with a combination of standard LEGO bricks and 3D printed custom elements. The pipeline starts by searching for the ideal alignment of the 3D model with the brick grid. It then employs a novel approach for shape "legolization", using an outside-in heuristic to limit critical configurations, and separates an external shell from an internal part. Finally, it exploits shape booleans to create the external custom parts to be 3D printed.
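
The entry above only names its outside-in legolization heuristic; the paper's actual algorithm is not reproduced here. The following Python sketch shows one minimal, greedy, layer-by-layer brick placement over a voxelized shape, covering boundary cells before interior ones. The brick list and helper names are assumptions made for illustration, not the authors' method.

import numpy as np

# Candidate standard brick footprints (studs), largest first (illustrative subset).
BRICK_SIZES = [(2, 4), (2, 2), (1, 4), (1, 2), (1, 1)]

def _interior_score(layer, x, y):
    """0 for boundary cells (at least one empty 4-neighbour), 1 for interior cells."""
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if not (0 <= nx < layer.shape[0] and 0 <= ny < layer.shape[1]):
            return 0
        if not layer[nx, ny]:
            return 0
    return 1

def greedy_legolize(occupancy: np.ndarray):
    """Greedily cover a boolean voxel grid (x, y, z) with rectangular bricks.

    Returns a list of (x, y, z, w, d) placements, each one voxel layer thick.
    Toy outside-in pass: within each layer, boundary voxels are covered first.
    """
    occ = occupancy.copy()
    placements = []
    for z in range(occ.shape[2]):
        layer = occ[:, :, z]
        order = sorted(np.argwhere(layer).tolist(),
                       key=lambda c: _interior_score(layer, *c))
        for x, y in order:
            if not layer[x, y]:
                continue  # already covered by a previously placed brick
            for w, d in BRICK_SIZES:
                if (x + w <= layer.shape[0] and y + d <= layer.shape[1]
                        and layer[x:x + w, y:y + d].all()):
                    placements.append((x, y, z, w, d))
                    layer[x:x + w, y:y + d] = False  # mark cells as covered
                    break
    return placements
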
Item Gamification Mechanics for Playful Virtual Reality Authoring (The Eurographics Association, 2020) Naraghi-Taghi-Off, Ramtin; Horst, Robin; Dörner, Ralf; Biasotti, Silvia and Pintus, Ruggero and Berretti, Stefano
An increasing number of companies, businesses and educational institutions are becoming familiar with the term gamification, which is about integrating game elements into a non-playful context. Gamification is becoming more important in various fields, such as e-learning, where a person needs to be motivated to be productive. The use of Virtual Reality (VR) is also being researched in various application areas. Authoring of VR content is a complex task that traditionally requires programming or design skills. However, there are authoring applications that do not require such skills but are still complex to use. In this paper, we explore how gamification concepts can be applied to VR authoring to help authors create VR experiences. Using an existing authoring tool for the concept of VR nuggets as an example, we investigate appropriate gamification mechanics to familiarize authors with the tool and motivate them to use it. The proposed concepts were implemented in a prototype and used in a user study. The study report shows that our participants were able to successfully use the gamified authoring prototype and that they felt motivated by various gamification aspects, especially visual rewards and story elements.

Item A Gaze Detection System for Neuropsychiatric Disorders Remote Diagnosis Support (The Eurographics Association, 2023) Cangelosi, Antonio; Antola, Gabriele; Iacono, Alberto Lo; Santamaria, Alfonso; Clerico, Marinella; Al-Thani, Dena; Agus, Marco; Calì, Corrado; Banterle, Francesco; Caggianese, Giuseppe; Capece, Nicola; Erra, Ugo; Lupinetti, Katia; Manfredi, Gilda
Accurate and early diagnosis of neuropsychiatric disorders, such as Autism Spectrum Disorders (ASD), is a significant challenge in clinical practice. This study explores the use of real-time gaze tracking as a tool for unbiased and quantitative analysis of eye gaze. The results of this study could support the diagnosis of disorders and potentially be used as a tool in the field of rehabilitation. The proposed setup consists of an RGB-D camera embedded in the latest-generation smartphones and a set of processing components for the analysis of recorded data related to patient interactivity. The proposed system is easy to use, requires little expertise, and achieves a high level of accuracy; it can therefore be used remotely (telemedicine) to simplify diagnosis and rehabilitation processes. We present initial findings that show how real-time gaze tracking can be a valuable tool for doctors: it is non-invasive and provides unbiased quantitative data that can aid in early detection, monitoring, and treatment evaluation. This study's findings have significant implications for the advancement of ASD research. The innovative approach proposed in this study has the potential to enhance diagnostic accuracy and improve patient outcomes.

Item Guiding Lens-based Exploration using Annotation Graphs (The Eurographics Association, 2021) Ahsan, Moonisa; Marton, Fabio; Pintus, Ruggero; Gobbetti, Enrico; Frosini, Patrizio and Giorgi, Daniela and Melzi, Simone and Rodolà, Emanuele
We introduce a novel approach for guiding users in the exploration of annotated 2D models using interactive visualization lenses. Information on the interesting areas of the model is encoded in an annotation graph generated at authoring time. Each graph node contains an annotation, in the form of a visual markup of the area of interest, as well as the optimal lens parameters that should be used to explore the annotated area and a scalar representing the annotation importance. Graph edges are used, instead, to represent preferred ordering relations in the presentation of annotations. A scalar associated with each edge determines the strength of this prescription. At run-time, the graph is exploited to assist users in their navigation by determining the next best annotation in the database and moving the lens towards it when the user releases interactive control. The selection is based on the current view and lens parameters, the graph content and structure, and the navigation history. This approach supports the seamless blending of an automatic tour of the data with interactive lens-based exploration. The approach is tested and discussed in the context of the exploration of multi-layer relightable models.
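
As a rough illustration of the kind of data structure the annotation-graph guidance above implies (nodes holding lens parameters and an importance score, edges holding ordering strengths), here is a small Python sketch. The field names and the scoring rule are assumptions made for the example; they are not taken from the paper.

from dataclasses import dataclass, field

@dataclass
class AnnotationNode:
    label: str
    lens_center: tuple     # optimal lens position for this annotation
    lens_radius: float     # optimal lens size
    importance: float      # scalar annotation importance

@dataclass
class AnnotationGraph:
    nodes: dict = field(default_factory=dict)   # node id -> AnnotationNode
    edges: dict = field(default_factory=dict)   # (src, dst) -> ordering strength

    def next_best(self, current_id, visited, lens_center):
        """Pick the next annotation to visit when the user releases control.

        Toy scoring: node importance, plus a bonus for edges leaving the current
        node (preferred ordering), minus a penalty for distance from the current
        lens position; already-visited nodes are skipped.
        """
        best_id, best_score = None, float("-inf")
        for nid, node in self.nodes.items():
            if nid in visited or nid == current_id:
                continue
            ordering_bonus = self.edges.get((current_id, nid), 0.0)
            dist = sum((a - b) ** 2 for a, b in zip(lens_center, node.lens_center)) ** 0.5
            score = node.importance + ordering_bonus - 0.1 * dist
            if score > best_score:
                best_id, best_score = nid, score
        return best_id
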
Item Immersive Environment for Creating, Proofreading, and Exploring Skeletons of Nanometric Scale Neural Structures (The Eurographics Association, 2019) Boges, Daniya; Calì, Corrado; Magistretti, Pierre J.; Hadwiger, Markus; Sicat, Ronell; Agus, Marco; Agus, Marco and Corsini, Massimiliano and Pintus, Ruggero
We present a novel immersive environment for the exploratory analysis of nanoscale cellular reconstructions of rodent brain samples acquired through electron microscopy. The system is focused on medial axis representations (skeletons) of branched and tubular structures of brain cells, and it is specifically designed for: i) effective semi-automatic creation of skeletons from surface-based representations of cells and structures; ii) fast proofreading, i.e., correcting and editing of semi-automatically constructed skeleton representations; and iii) useful exploration, i.e., measuring, comparing, and analyzing geometric features related to cellular structures based on medial axis representations. The application runs in a standard PC-tethered virtual reality (VR) setup with a head-mounted display (HMD), controllers, and tracking sensors. The system is currently used by neuroscientists for performing morphology studies on sparse reconstructions of glial cells and neurons extracted from a sample of the somatosensory cortex of a juvenile rat.

Item JPEG Line-drawing Restoration With Masks (The Eurographics Association, 2023) Zhu, Yan; Yamaguchi, Yasushi; Banterle, Francesco; Caggianese, Giuseppe; Capece, Nicola; Erra, Ugo; Lupinetti, Katia; Manfredi, Gilda
Learning-based JPEG restoration methods usually take little account of the visual content of images. Even though these methods achieve satisfying results on photos, their direct application to line drawings, which consist of lines and white background, is not suitable. The large area of background in digital line drawings does not contain intensity information and should remain constantly white (the maximum brightness). Existing JPEG restoration networks consistently fail to output constant white pixels for the background area. Worse, training on the background can negatively impact the learning efficiency for areas where texture exists. To tackle these problems, we propose a line-drawing restoration framework that can be applied to existing state-of-the-art restoration networks. Our framework takes existing restoration networks as backbones and processes an input rasterized JPEG line drawing in two steps. First, a proposed mask-predicting network predicts a binary mask that indicates the location of lines and background in the potential undeteriorated line drawing. Then, the mask is concatenated with the input JPEG line drawing and fed into the backbone restoration network, where the conventional L1 loss is replaced by a masked Mean Square Error (MSE) loss. Besides learning-based mask generation, we also evaluate other direct mask generation methods. Experiments show that our framework with learnt binary masks achieves both better visual quality and better performance on quantitative metrics than the state-of-the-art methods in the task of JPEG line-drawing restoration.
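
The masked Mean Square Error loss mentioned above can be written compactly. The NumPy sketch below shows one standard way of restricting the loss to pixels selected by a binary line/background mask; it is an assumed formulation for illustration, not necessarily the exact loss used in the paper.

import numpy as np

def masked_mse(pred: np.ndarray, target: np.ndarray, mask: np.ndarray) -> float:
    """Mean squared error restricted to pixels selected by a binary mask.

    pred, target: restored and ground-truth images in [0, 1], same shape.
    mask: 1 where the loss should apply (e.g., line pixels), 0 elsewhere.
    """
    mask = mask.astype(pred.dtype)
    num = ((pred - target) ** 2 * mask).sum()
    den = mask.sum() + 1e-8   # avoid division by zero on empty masks
    return float(num / den)
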
Item Mixed Reality for Orthopedic Elbow Surgery Training and Operating Room Applications: A Preliminary Analysis (The Eurographics Association, 2023) Cangelosi, Antonio; Riberi, Giacomo; Salvi, Massimo; Molinari, Filippo; Titolo, Paolo; Agus, Marco; Calì, Corrado; Banterle, Francesco; Caggianese, Giuseppe; Capece, Nicola; Erra, Ugo; Lupinetti, Katia; Manfredi, Gilda
The use of Mixed Reality in medicine is widely documented as a candidate to revolutionize surgical interventions. In this paper we present a system to simulate k-wire placement, a common orthopedic procedure used to stabilize fractures, dislocations, and other traumatic injuries. With the described system, it is possible to leverage Mixed Reality (MR) and advanced visualization techniques applied to a surgical simulation phantom to enhance surgical training and critical orthopedic surgical procedures. This analysis is centered on evaluating the precision and proficiency of k-wire placement in an elbow surgical phantom, designed with 3D modeling software starting from a virtual 3D anatomical reference. By visually superimposing 3D reconstructions of internal structures and the target K-wire positioning on the physical model, it is expected not only to improve the learning curve but also to establish a foundation for potential real-time surgical guidance in challenging clinical scenarios. The performance is measured as the difference between the actual K-wire placement and the target position; the quantitative measurements are then used to compare the risk of iatrogenic injury to nerves and vascular structures in MR-guided versus non-MR-guided simulated interventions.
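
The entry above states that performance is measured as the difference between actual and target K-wire placement but does not spell out the metric. The sketch below shows one plausible pair of measures (entry-point offset and angular deviation between the two wire axes); it is an assumption for illustration, not the paper's measurement protocol.

import numpy as np

def kwire_deviation(real_entry, real_tip, target_entry, target_tip):
    """Compare a placed K-wire with its target trajectory.

    Each wire is given by two 3D points (entry and tip), assumed in millimetres.
    Returns (entry_offset_mm, angle_deg) between the two wire axes.
    """
    real_entry, real_tip = np.asarray(real_entry, float), np.asarray(real_tip, float)
    target_entry, target_tip = np.asarray(target_entry, float), np.asarray(target_tip, float)

    # Translational error at the entry point.
    entry_offset = np.linalg.norm(real_entry - target_entry)

    # Angular error between the wire directions.
    d_real = real_tip - real_entry
    d_target = target_tip - target_entry
    cos_a = np.dot(d_real, d_target) / (np.linalg.norm(d_real) * np.linalg.norm(d_target))
    angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return entry_offset, angle
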
Item MUSE: Modeling Uncertainty as a Support for Environment (The Eurographics Association, 2022) Miola, Marianna; Cabiddu, Daniela; Pittaluga, Simone; Vetuschi Zuccolini, Marino; Cabiddu, Daniela; Schneider, Teseo; Allegra, Dario; Catalano, Chiara Eva; Cherchi, Gianmarco; Scateni, Riccardo
To fully understand a natural system, representing an environmental variable's distribution in 3D space is a mandatory and complex task. The challenge derives from the scarcity of samples in the survey domain (e.g., logs in a reservoir, soil samples, fixed acquisition sampling stations) or from an implicit difficulty in the in-situ measurement of parameters. Field or lab measurements are generally considered error-free, although they are not. That aspect, combined with the conceptual and numerical approximations used to model phenomena, makes the results intrinsically less reliable and blurs their interpretation. In this context, we design a computational infrastructure to evaluate spatial uncertainty in multi-scenario applications in environmental survey and protection, such as environmental geochemistry, coastal oceanography, or infrastructure engineering. Our research aims to expand the operative knowledge by developing an open-source stochastic tool, named MUSE, the acronym for Modeling Uncertainty as a Support for Environment. At this stage, the methodology mainly includes the definition of a flexible environmental data format, a geometry processing module to discretize the space, and geostatistics tools to evaluate the spatial continuity of sampled parameters, predicting random variable distributions. The implementation of the uncertainty module and the development of a graphic interface for ad-hoc visualization will be integrated as the next step. The poster summarizes the research purposes and the MUSE computational code structure developed so far.

Item Semantic Segmentation of High-resolution Point Clouds Representing Urban Contexts (The Eurographics Association, 2023) Romanengo, Chiara; Cabiddu, Daniela; Pittaluga, Simone; Mortara, Michela; Banterle, Francesco; Caggianese, Giuseppe; Capece, Nicola; Erra, Ugo; Lupinetti, Katia; Manfredi, Gilda
Point clouds are becoming an increasingly common digital representation of real-world objects, and they are particularly efficient when dealing with large-scale objects and/or when extremely high resolution is required. The focus of our work is on the analysis, 3D feature extraction and semantic annotation of point clouds representing urban scenes, coming from various acquisition technologies, e.g., terrestrial (fixed or mobile) or aerial laser scanning or photogrammetry; the task is challenging due to data dimensionality and noise. In particular, we present a pipeline to segment high-resolution point clouds representing urban environments into geometric primitives; we focus on planes, cylinders and spheres, which are the main features of buildings (walls, roofs, arches, ...) and ground surfaces (streets, pavements, platforms), and identify the unique parameters of each instance. This paper focuses on the semantic segmentation of buildings, but the approach is currently being generalised to manage extended urban areas. Given a dense point cloud representing a specific building, we first apply a binary space partitioning method to obtain sub-clouds small enough to be processed. Then, a combination of the well-known RANSAC algorithm and a recognition method based on the Hough transform (HT) is applied to each sub-cloud to obtain a semantic segmentation into salient elements, like façades, walls and roofs. The parameters of primitive instances are saved as metadata to document the structural elements of buildings for further thematic analyses, e.g., energy efficiency. We present a case study on the city of Catania, Italy, where two buildings of historical and artistic value have been digitized at very high resolution. Our approach is able to semantically segment these huge point clouds and proves robust to uneven sampling density, input noise and outliers.
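
RANSAC plane detection, one of the two primitive-recognition components named above, can be sketched in a few lines of Python. This is a generic textbook version with an assumed inlier threshold, not the combined RANSAC/Hough procedure described in the paper.

import numpy as np

def ransac_plane(points: np.ndarray, n_iters: int = 500, threshold: float = 0.02):
    """Fit a dominant plane to an (N, 3) point cloud with plain RANSAC.

    Returns (normal, d, inlier_mask) for the plane n . x + d = 0.
    threshold is the inlier distance in the units of the cloud (assumed metres).
    """
    rng = np.random.default_rng(0)
    best_inliers, best_model = None, None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:
            continue  # degenerate (collinear) sample, try again
        normal /= norm
        d = -np.dot(normal, sample[0])
        dist = np.abs(points @ normal + d)   # point-to-plane distances
        inliers = dist < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model[0], best_model[1], best_inliers
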
Item Smart Modelling of Geologic Stratigraphy Concepts using Sketches (The Eurographics Association, 2020) Sousa, Mario Costa; Silva, Julio Daniel Machado; Silva, Clarissa Coda Marques Machado; Carvalho, Felipe Moura De; Judice, Sicilia; Rahman, Fazilatur; Jacquemyn, Carl; Pataki, Margaret E. H.; Hampson, Gary J.; Jackson, Matthew D.; Petrovskyy, Dmytro; Geiger, Sebastian; Biasotti, Silvia and Pintus, Ruggero and Berretti, Stefano
Several applications of Earth Science require geologically valid interpretation and visualization of complex physical structures in data-poor subsurface environments. Hand-drawn sketches and illustrations are standard practice used by domain experts for conceptualizing their observations and interpretations. These conceptual geo-sketches provide rich visual references for exploring uncertainties and helping users formulate ideas, suggest possible solutions, and make critical decisions affecting the various stages in geoscience studies and modelling workflows. In this paper, we present a sketch-based interfaces and modelling (SBIM) approach for the rapid conceptual construction of stratigraphic surfaces, which are common to most geologic modelling scales, studies, and workflows. Our SBIM approach mirrors the way domain users produce geo-sketches and uses them to construct 3D geologic models, enforcing algorithmic rules to ensure geologically sound stratigraphic relationships are generated, and supporting different scales of geology being observed and interpreted. Results are presented for two case studies demonstrating the flexibility and broad applicability of our rule-based SBIM approach for conceptual stratigraphy.

Item SPIDER: SPherical Indoor DEpth Renderer (The Eurographics Association, 2022) Tukur, Muhammad; Pintore, Giovanni; Gobbetti, Enrico; Schneider, Jens; Agus, Marco; Cabiddu, Daniela; Schneider, Teseo; Allegra, Dario; Catalano, Chiara Eva; Cherchi, Gianmarco; Scateni, Riccardo
Today's Extended Reality (XR) applications that call for specific Diminished Reality (DR) strategies to hide specific classes of objects are increasingly using 360° cameras, which can capture entire areas in a single picture. In this work, we present an interactive image editing and rendering system named SPIDER, which takes a spherical 360° indoor scene as input. The system incorporates the output of deep learning models to abstract the segmentation and depth images of full and empty rooms, allowing users to perform interactive exploration and basic editing operations on the reconstructed indoor scene, namely: i) rendering of the scene in various modalities (point cloud, polygonal, wireframe); ii) refurnishing (transferring portions of rooms); and iii) deferred shading through the use of precomputed normal maps. These kinds of scene editing and manipulation can be used to assess the inference of deep learning models and enable several Extended Reality (XR) applications in areas such as furniture retail, interior design, and real estate. Moreover, it can also be useful for data augmentation, art, design, and painting.
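
To give a concrete sense of the point-cloud rendering modality mentioned above, the following sketch back-projects an equirectangular (spherical 360°) depth image into 3D points. The coordinate convention is an assumption for the example and the code is independent of SPIDER's implementation.

import numpy as np

def spherical_depth_to_points(depth: np.ndarray) -> np.ndarray:
    """Back-project an equirectangular depth map (H, W) into an (H*W, 3) cloud.

    Convention assumed here: azimuth theta in [-pi, pi) along the width,
    elevation phi in [-pi/2, pi/2] along the height, depth = radial distance.
    """
    h, w = depth.shape
    theta = (np.arange(w) + 0.5) / w * 2.0 * np.pi - np.pi   # azimuth per column
    phi = np.pi / 2.0 - (np.arange(h) + 0.5) / h * np.pi     # elevation per row
    theta, phi = np.meshgrid(theta, phi)                     # both (H, W)

    x = depth * np.cos(phi) * np.sin(theta)
    y = depth * np.sin(phi)
    z = depth * np.cos(phi) * np.cos(theta)
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
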
Item VarIS: Variable Illumination Sphere for Facial Capture, Model Scanning, and Spatially Varying Appearance Acquisition (The Eurographics Association, 2023) Baron, Jessica; Li, Xiang; Joshi, Parisha; Itty, Nathaniel; Greene, Sarah; Dhillon, Daljit Singh J.; Patterson, Eric; Banterle, Francesco; Caggianese, Giuseppe; Capece, Nicola; Erra, Ugo; Lupinetti, Katia; Manfredi, Gilda
We introduce VarIS, our Variable Illumination Sphere, a multi-purpose system for acquiring and processing real-world geometric and appearance data for computer-graphics research and production. Its key applications among many are (1) human-face capture, (2) model scanning, and (3) spatially varying material acquisition. Facial capture requires high-resolution cameras at multiple viewpoints, photometric capabilities, and a swift process due to human movement. Acquiring a digital version of a physical model is somewhat similar but with different constraints for image processing and more allowable time. Each requires detailed estimations of geometry and physically based shading properties. Measuring spatially varying light-scattering properties requires spanning four dimensions of illumination and viewpoint with angular, spatial, and spectral accuracy, and this process can also be assisted using multiple simultaneous viewpoints or rapid switching of lights with no movement necessary. VarIS is a system of hardware and software for spherical illumination and imaging that has been custom designed and developed by our team. It has been inspired by Light Stages and goniophotometers, but costs less through the use of primarily off-the-shelf components, and additionally extends capabilities beyond these devices. In this paper we describe the unique system and contributions, including practical details that could assist other researchers and practitioners.

Item Visual Representation of Region Transitions in Multi-dimensional Parameter Spaces (The Eurographics Association, 2019) Fernandes, Oliver; Frey, Steffen; Reina, Guido; Ertl, Thomas; Agus, Marco and Corsini, Massimiliano and Pintus, Ruggero
We propose a novel visual representation of transitions between homogeneous regions in multi-dimensional parameter space. While our approach is generally applicable to the analysis of arbitrary continuous parameter spaces, we particularly focus on scientific applications, like physical variables in simulation ensembles. To generate our representation, we use unsupervised learning to cluster the ensemble members according to their mutual similarity. In doing this, clusters are sorted such that similar clusters are located next to each other. We then further partition the clusters into connected regions with respect to their location in parameter space. In the visualization, the resulting regions are represented as glyphs in a matrix, indicating parameter changes that induce a transition to another region. To unambiguously associate a change of data characteristics with a single parameter, we specifically isolate changes by dimension. With this, our representation provides an intuitive visualization of the parameter transitions that influence the outcome of the underlying simulation or measurement. We demonstrate the generality and utility of our approach on diverse types of data, namely simulations from the field of computational fluid dynamics and thermodynamics, as well as an ensemble of raycasting performance data.
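
The last step described above, isolating parameter changes by dimension, can be illustrated with a small example: given region labels on a regular parameter grid, count, for each dimension, how often stepping along only that dimension crosses a region boundary. This is a simplified illustration of the idea, not the paper's glyph construction.

import numpy as np

def transitions_per_dimension(labels: np.ndarray) -> list:
    """Count region transitions along each axis of a labelled parameter grid.

    labels: integer region id per grid cell, one axis per parameter.
    Returns, for each parameter dimension, the number of neighbouring cell
    pairs (differing only in that dimension) whose region label changes.
    """
    counts = []
    for axis in range(labels.ndim):
        a = np.take(labels, range(labels.shape[axis] - 1), axis=axis)
        b = np.take(labels, range(1, labels.shape[axis]), axis=axis)
        counts.append(int((a != b).sum()))
    return counts

# Example: a 2-parameter grid where only the first parameter drives transitions.
grid = np.array([[0, 0, 0],
                 [1, 1, 1],
                 [2, 2, 2]])
print(transitions_per_dimension(grid))   # -> [6, 0]
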