EGVE: Eurographics Workshop on Virtual Environments
Browsing EGVE: Eurographics Workshop on Virtual Environments by Subject "Artificial"
Now showing 1-20 of 37
Item: 3D User Interfaces Using Tracked Multi-touch Mobile Devices (The Eurographics Association, 2012)
Authors: Wilkes, Curtis B.; Tilden, Dan; Bowman, Doug A.
Editors: Ronan Boulic, Carolina Cruz-Neira, Kiyoshi Kiyokawa, David Roberts
Abstract: Multi-touch mobile devices are becoming ubiquitous due to the proliferation of smartphone platforms such as the iPhone and Android. Recent research has explored the use of multi-touch input for 3D user interfaces on displays including large touch screens, tablets, and mobile devices. This research explores the benefits of adding six-degree-of-freedom tracking to a multi-touch mobile device for 3D interaction. We analyze and propose benefits of using tracked multi-touch mobile devices (TMMDs), with the goal of developing effective interaction techniques for a variety of tasks within immersive 3D user interfaces. We developed several techniques using TMMDs for virtual object manipulation and compared them to existing best-practice techniques in a series of user studies. We did not, however, find performance advantages for TMMD-based techniques. We discuss our observations and propose alternative interaction techniques and tasks that may benefit from TMMDs.

Item: Analysis of Depth Perception with Virtual Mask in Stereoscopic AR (The Eurographics Association, 2015)
Authors: Otsuki, Mai; Kuzuoka, Hideaki; Milgram, Paul
Editors: Masataka Imura, Pablo Figueroa, Betty Mohler
Abstract: A practical application of Augmented Reality (AR) is see-through vision, a technique that enables a user to observe an inner object located behind a real object by superimposing the virtually visualized inner object onto the real object's surface (for example, pipes and cables behind a wall or under a floor). A challenge in such applications is to provide proper depth perception when an inner virtual object image is overlaid on a real object. To improve depth perception in stereoscopic AR, we propose a method that overlays a random-dot mask on the real object surface. This method conveys to observers the illusion of viewing the virtual object through many small holes. We named this perception ''stereoscopic pseudo-transparency.'' Our experiments investigated (1) the effectiveness of the proposed method in improving depth perception between the real object surface and the virtual object compared to existing methods, and (2) whether the proposed method can be used in an actual AR environment.

Item: An Augmented Reality and Virtual Reality Pillar for Exhibitions: A Subjective Exploration (The Eurographics Association, 2017)
Authors: See, Zi Siang; Sunar, Mohd Shahrizal; Billinghurst, Mark; Dey, Arindam; Santano, Delas; Esmaeili, Human; Thwaites, Harold
Editors: Robert W. Lindeman, Gerd Bruder, Daisuke Iwai
Abstract: This paper presents the development of an Augmented Reality (AR) and Virtual Reality (VR) pillar, a novel approach for showing AR and VR content in a public setting. A pillar in a public exhibition venue was converted into a four-sided AR and VR showcase, and a cultural heritage exhibit, ''Boatbuilders of Pangkor,'' was shown. Multimedia tablets and mobile AR head-mounted displays (HMDs) were provided for visitors to experience multisensory AR and VR content demonstrated on the pillar. The content included AR-based videos, maps, images, and text, as well as VR experiences that allowed visitors to view reconstructed 3D subjects and remote locations in a 360° virtual environment. In this paper, we describe the prototype system, a user evaluation study, and directions for future work.

Item: Background Motion, Clutter, and the Impact on Virtual Object Motion Perception in Augmented Reality (The Eurographics Association, 2013)
Authors: Ferrer, Vicente; Yang, Yifan; Perdomo, Alex; Quarles, John
Editors: Betty Mohler, Bruno Raffin, Hideo Saito, Oliver Staadt
Abstract: Background motion and visual clutter are present in almost all augmented reality applications.
However, there is minimal prior work investigating the effects that background motion and clutter (e.g., a busy city street) can have on the perception of virtual object motion in augmented reality. To investigate these issues, we conducted an experiment in which participants' perception of changes in overlaid virtual object velocity was tested under several levels of background motion, background clutter, virtual object motion, and virtual object clutter. Our experiment offers a novel approach to assessing virtual object motion perception and gives new insights into the impact that background clutter and motion have on perception in augmented reality.

Item: Comparing Auditory and Haptic Feedback for a Virtual Drilling Task (The Eurographics Association, 2012)
Authors: Rausch, Dominik; Aspöck, Lukas; Knott, Thomas; Pelzer, Sönke; Vorländer, Michael; Kuhlen, Torsten
Editors: Ronan Boulic, Carolina Cruz-Neira, Kiyoshi Kiyokawa, David Roberts
Abstract: While visual feedback is dominant in Virtual Environments, the use of other modalities such as haptics and acoustics can enhance believability, immersion, and interaction performance. Haptic feedback is especially helpful for many interaction tasks, such as working with medical or precision tools. However, unlike visual and auditory feedback, haptic reproduction is often difficult to achieve due to hardware limitations. This article describes a user study examining how auditory feedback can be used to substitute for haptic feedback when interacting with a vibrating tool. Participants remove target material with a round-headed drill while avoiding damage to the underlying surface. In the experiment, varying combinations of surface force feedback, vibration feedback, and auditory feedback are used. We describe the design of the user study and present the results, which show that auditory feedback can compensate for the lack of haptic feedback.

Item: Development of Encountered-type Haptic Interface that can Independently Control Volume and Rigidity of 3D Virtual Object (The Eurographics Association, 2015)
Authors: Takizawa, Naoki; Yano, Hiroaki; Iwata, Hiroo; Oshiro, Yukio; Ohkohchi, Nobuhiro
Editors: Masataka Imura, Pablo Figueroa, Betty Mohler
Abstract: This paper describes the development of an encountered-type haptic interface that can independently present the physical characteristics of 3D virtual objects, such as shape and rigidity, in the real world. The interface consists of non-expandable balloons, syringe pumps, pressure sensors, linear actuators, and a PC. To change the rigidity of a balloon, the volume of air in the balloon is controlled using a linear actuator and a pressure sensor based on Hooke's law. Furthermore, to change the volume of the balloon, the exposed surface area of the balloon is controlled using another linear actuator with a trumpet-shaped tube. Performance tests of the system were conducted, and the effectiveness of the proposed interface was verified.

Item: Development of Mutual Telexistence System using Virtual Projection of Operator's Egocentric Body Images (The Eurographics Association, 2015)
Authors: Saraiji, MHD Yamen; Fernando, Charith Lasantha; Minamizawa, Kouta; Tachi, Susumu
Editors: Masataka Imura, Pablo Figueroa, Betty Mohler
Abstract: In this paper, a mobile telexistence system that provides mutual embodiment of the user's body in a remote place is discussed. A fully mobile slave robot was designed and developed to deliver visual and motion mapping with the user's head and body. The user can access the robot remotely using a Head Mounted Display (HMD) and a set of head trackers.
This system addresses three main points: representing the user's body in the remote physical environment, preserving the user's sense of body ownership during teleoperation, and presenting the user's body interactions and visuals on the remote side. These points were addressed by virtually projecting the user's body into the egocentric local view and projecting the body visuals remotely. The system is intended for teleconferencing and remote social activities where no physical manipulation is required.

Item: Dynamic View Expansion for Improving Visual Search in Video See-through AR (The Eurographics Association, 2016)
Authors: Yano, Yuki; Orlosky, Jason; Kiyokawa, Kiyoshi; Takemura, Haruo
Editors: Dirk Reiners, Daisuke Iwai, Frank Steinicke
Abstract: The extension or expansion of human vision is often accomplished with video see-through head-mounted displays (HMDs) because of their clarity and ability to modulate background information. However, little is known about how we should control these augmentations, and continuous augmentation can have negative consequences such as distorted motion perception. To address these problems, we propose a dynamic view expansion system that modulates the vergence, translation, or scale of video see-through cameras to give users on-demand peripheral vision enhancement. Unlike other methods that modify a user's direct field of view, we take advantage of ultra-wide fisheye lenses to provide access to peripheral information that would not otherwise be available. In a series of experiments testing our prototype in real-world search, identification, and matching tasks, we evaluate these expansion methods in terms of both user performance and subjective measures such as fatigue and simulation sickness. Results show that less head movement is required with dynamic view expansion, but performance varies with the application.

Item: The Effects of Avatars on Presence in Virtual Environments for Persons with Mobility Impairments (The Eurographics Association, 2014)
Authors: Guo, Rongkai; Samaraweera, Gayani; Quarles, John
Editors: Takuya Nojima, Dirk Reiners, Oliver Staadt
Abstract: The main question we ask is: how do avatars affect presence specifically for Persons with Mobility Impairments (PMIs)? For example, PMIs' deficits in the proprioceptive sense could affect their body perception in immersive virtual reality, which could impact presence. To investigate this, we replicated the classic virtual pit experiment and included a responsive full-body avatar (or lack thereof) as a 3D user interface. We recruited from two different populations: 11 PMIs and 11 Persons without Mobility Impairments (PNMIs) as a control. Each PNMI was matched to a PMI based on age, weight, height, and prior VE exposure. Results of this study indicate that avatars elicit a higher sense of presence for PMIs than for PNMIs. In addition, results suggest that PMIs are easier to immerse in VEs than PNMIs, which may further motivate the future use of VE technology for PMIs.

Item: An Empiric Evaluation of Confirmation Methods for Optical See-Through Head-Mounted Display Calibration (The Eurographics Association, 2012)
Authors: Maier, Patrick; Dey, Arindam; Waechter, Christian A. L.; Sandor, Christian; Tönnis, Marcus; Klinker, Gudrun
Editors: Ronan Boulic, Carolina Cruz-Neira, Kiyoshi Kiyokawa, David Roberts
Abstract: The calibration of optical see-through head-mounted displays (OSTHMDs) is an important foundation for correct object alignment in augmented reality. Any calibration process for OSTHMDs requires users to align 2D points in screen space with 3D points and to confirm each alignment. In this paper, we investigate how different confirmation methods affect calibration quality.
In an empirical evaluation, we compared four confirmation methods: Keyboard, Hand-held, Voice, and Waiting. We had users calibrate with a video see-through head-mounted display, which allowed us to record videos of the alignments in parallel. Later image processing provided baseline alignments for comparison against the user-generated ones. Our results provide design constraints for future calibration procedures. The Waiting method, designed to reduce head motion during confirmation, showed significantly higher accuracy than all other methods. Averaging alignments over a time frame further improved the accuracy of all methods. We validated our results by numerically comparing the user-generated projection matrices with calculated ground-truth projection matrices. The findings were also observed in several calibration procedures performed with an OSTHMD.

Item: Evaluating the Effects of Hand-gesture-based Interaction with Virtual Content in a 360° Movie (The Eurographics Association, 2017)
Authors: Khan, Humayun; Lee, Gun A.; Hoermann, Simon; Clifford, Rory M. S.; Billinghurst, Mark; Lindeman, Robert W.
Editors: Robert W. Lindeman, Gerd Bruder, Daisuke Iwai
Abstract: Head-mounted displays are becoming increasingly popular as home entertainment devices for viewing 360° movies. This paper explores the effects of adding gesture interaction with virtual content, and of two different hand-visualisation modes, on the 360° movie watching experience. The system in the study comprises a Leap Motion sensor to track the user's hand and finger motions, in combination with a SoftKinetic RGB-D camera to capture the texture of the hands and arms. A 360° panoramic movie with embedded virtual objects was used as content. Four conditions, displaying either a point cloud of the real hand or a rigged computer-generated hand, with and without interaction, were evaluated. Presence, agency, embodiment, and ownership, as well as overall participant preference, were measured. Results showed that participants had a strong preference for the conditions with interactive virtual content, in which they also felt stronger embodiment and ownership. The comparison of the two hand visualisations showed that displaying the real hand elicited stronger ownership. There was no overall difference in presence between the four conditions. These findings suggest that adding interaction with virtual content can benefit the overall user experience, and that interaction should be performed using the real-hand visualisation rather than the virtual hand if higher ownership is desired.

Item: Exploring Distant Objects with Augmented Reality (The Eurographics Association, 2013)
Authors: Tatzgern, Markus; Grasset, Raphael; Veas, Eduardo; Kalkofen, Denis; Seichter, Hartmut; Schmalstieg, Dieter
Editors: Betty Mohler, Bruno Raffin, Hideo Saito, Oliver Staadt
Abstract: Augmented reality (AR) enables users to retrieve additional information about real-world objects and locations. Exploring such location-based information in AR requires physical movement to different viewpoints, which may be tiring and even infeasible when viewpoints are out of reach. In this paper, we present object-centric exploration techniques for handheld AR that allow users to access information freely, using a virtual copy metaphor to explore large real-world objects. We evaluated our interfaces under controlled conditions and collected first experiences in a real-world pilot study.
Based on our findings, we put forward design recommendations that should be considered by future generations of location-based AR browsers, 3D tourist guides, and situated urban planning applications.

Item: Global Landmarks Do Not Necessarily Improve Spatial Performance in Addition to Bodily Self-Movement Cues when Learning a Large-Scale Virtual Environment (The Eurographics Association, 2015)
Authors: Meilinger, Tobias; Schulte-Pelkum, Jörg; Frankenstein, Julia; Berger, Daniel; Bülthoff, Heinrich H.
Editors: Masataka Imura, Pablo Figueroa, Betty Mohler
Abstract: Comparing spatial performance across different virtual reality setups can indicate which cues are relevant for a realistic virtual experience. Bodily self-movement cues and global orientation information have been shown to increase spatial performance compared with local visual cues only. We tested the combined impact of bodily and global orientation cues by having participants learn a virtual multi-corridor environment either by only walking through it, with additional distant landmarks providing heading information, or with a surrounding hall relative to which participants could determine their orientation and location. Subsequent measures of spatial memory revealed only small and unreliable differences between the learning conditions. We conclude that additional global landmark information does not necessarily improve users' orientation within a virtual environment when bodily self-movement cues are available.

Item: An HMD-based Mixed Reality System for Avatar-Mediated Remote Collaboration with Bare-hand Interaction (The Eurographics Association, 2015)
Authors: Noh, Seung-Tak; Yeo, Hui-Shyong; Woo, Woontack
Editors: Masataka Imura, Pablo Figueroa, Betty Mohler
Abstract: We present a novel mixed-reality remote collaboration system that enables a local user to interact and collaborate with another user in a remote space using natural hand motion. Unlike conventional systems, where the remote user appears only on a screen, our system can summon the remote user into the local space, where they appear as a virtual avatar in the real-world view seen by the local user. To support our avatar-mediated remote collaboration concept, we derive a systematic framework design consisting of the hardware and software configuration across various devices. We explore novel techniques for calibrating and managing the coordinate systems in an asymmetric setup, for sensor fusion between devices, and for generating human-like motion for the avatar. To validate our proposal, we implemented a proof-of-concept prototype using off-the-shelf hardware and report the experimental results. We believe that our system not only overcomes several limitations of previous systems but also creates new possibilities in the remote collaboration domain.

Item: Indoor Tracking for Large Area Industrial Mixed Reality (The Eurographics Association, 2012)
Authors: Scheer, Fabian; Müller, Stefan
Editors: Ronan Boulic, Carolina Cruz-Neira, Kiyoshi Kiyokawa, David Roberts
Abstract: For mixed reality (MR) applications, tracking a video camera in a rapidly changing large environment of several hundred square meters still represents a challenging task. In contrast to a laboratory installation, industrial scenarios such as a running factory require minimal setup, calibration, and training times for a tracking system, and only minimal changes to the environment. This paper presents a tracking system that computes the pose of a video camera mounted on a mobile carriage-like device in very large indoor environments of several hundred square meters. The carriage is equipped with a touch-sensitive monitor to display a live augmentation. The tracking system is based on an infrared laser device that detects at least three of several retroreflective targets in the environment and compares the actual target measurements with a precalibrated 2D target map.
The device outputs a 2D position and orientation. To obtain a six-degree-of-freedom (DOF) pose, a coordinate system adjustment method is presented that determines the transformation between the 2D laser tracker and the image sensor of a camera. To analyse the different error sources contributing to the overall error, the accuracy of the system is evaluated in a controlled laboratory setup. Beyond that, an evaluation of the system in a large factory building is shown, as well as the application of the system to industrial MR discrepancy checks of complete factory buildings. Finally, the utility of the 2D scanning capabilities of the laser, in conjunction with a virtually generated 2D map of the 3D model of a factory, is demonstrated for MR discrepancy checks.

Item: Influence of Path Complexity on Spatial Overlap Perception in Virtual Environments (The Eurographics Association, 2015)
Authors: Vasylevska, Khrystyna; Kaufmann, Hannes
Editors: Masataka Imura, Pablo Figueroa, Betty Mohler
Abstract: Real walking through large virtual indoor environments within a limited real-world workspace requires effective spatial compression methods that go unnoticed by the user. Scene manipulation that creates overlapping spaces has been suggested in recent work. However, there is little research on users' perception of overlapping spaces depending on the layout of the environment. In this paper, we investigate how the complexity of a path influences the perception of the overlapping spaces it connects. We compare three spatial virtual layouts with paths that differ in complexity (length and number of turns). Our results suggest that increasing a path's length is less effective at decreasing overlap detection than a combination of length and additional turns. Furthermore, combining paths that differ in complexity influences distance perception within overlapping spaces.

Item: The Influence of Real Human Personality on Social Presence with a Virtual Human in Augmented Reality (The Eurographics Association, 2016)
Authors: Kim, Kangsoo; Bruder, Gerd; Maloney, Divine; Welch, Greg
Editors: Dirk Reiners, Daisuke Iwai, Frank Steinicke
Abstract: Human responses to an interaction with a Virtual Human (VH) can be influenced both by external factors, such as technology-related limitations, and by internal factors, such as individual differences in personality. While the impacts of external factors have been studied widely and are typically controlled for in application scenarios, less attention has been devoted to the impacts of internal factors. We present the results of a human-subject experiment investigating a particular internal factor: the effects of participants' extraversion-introversion traits on the sense of social presence with a VH in an Augmented Reality (AR) setting. Our results indicate a positive association between a condition in which the VH proactively requests help from the participant and participants reporting higher social presence with the VH, regardless of their personality. However, we also found that extraverted participants tended to report higher social presence with the VH than introverted participants. In addition, there were differences in how long participants looked at the VH during the interaction according to their extraversion-introversion traits. Our results suggest that a real human's personality plays a significant role in interactions with a VH and should therefore be considered when carrying out experiments that measure the effects of controlled manipulations on interactions with a VH.
We present the details of our experiment and discuss the findings and potential implications related to human perceptions of and behaviors with a VH.

Item: Interpretation of Tactile Sensation using an Anthropomorphic Finger Motion Interface to Operate a Virtual Avatar (The Eurographics Association, 2014)
Authors: Ujitoko, Yusuke; Hirota, Koichi
Editors: Takuya Nojima, Dirk Reiners, Oliver Staadt
Abstract: The objective of the system presented in this paper is to give users tactile feedback while walking in a virtual world through an anthropomorphic finger motion interface. We determined that the synchrony between the first-person perspective and proprioceptive information, together with the motor activity of the user's fingers, can induce an illusory feeling equivalent to a sense of ownership of the invisible avatar's legs. Under this condition, the ground under the virtual avatar's foot is perceived through the user's fingertip. The experiments indicated that, using our method, the scale of the tactile perception of texture roughness was extended, and that the enlargement ratio was proportional to the avatar's body (foot) size. To display a target tactile perception to users, we only have to control the virtual avatar's body (foot) size and the roughness of the tactile texture. Our results suggest that, in terms of tactile perception, fingers can be a replacement for legs in locomotion interfaces.

Item: Investigation of Dynamic View Expansion for Head-Mounted Displays with Head Tracking in Virtual Environments (The Eurographics Association, 2014)
Authors: Yano, Yuki; Kiyokawa, Kiyoshi; Sherstyuk, Andrei; Mashita, T.; Takemura, H.
Editors: Takuya Nojima, Dirk Reiners, Oliver Staadt
Abstract: Head-mounted displays (HMDs) are widely used for visual immersion in virtual reality (VR) systems. It is acknowledged that the narrow field of view (FOV) of most HMD models is a leading cause of insufficient immersion, resulting in suboptimal user performance in various VR tasks as well as early fatigue. Proposed solutions to this problem range from hardware-based approaches to software enhancements of the viewing process. There are three major view expansion techniques: minification, i.e., rendering graphics with a larger FOV than the display's FOV; motion amplification, i.e., amplifying the user's head rotation to provide accelerated access to peripheral vision during wide sweeping head movements; and diverging the left and right virtual cameras outwards to increase the combined binocular FOV. Static view expansion has been reported to increase user efficiency in search and navigation tasks; however, the effectiveness of dynamic view expansion is not yet well understood. When applied, view expansion techniques modify the natural viewing process and alter familiar reflex-response loops, which may result in motion sickness and poor user performance. It is therefore vital to evaluate dynamic view expansion techniques in terms of task effectiveness and user workload. This paper details three dynamic view expansion techniques, the experimental settings, and the findings of a user study in which the techniques were applied dynamically based on user behavior. We evaluate their effectiveness quantitatively by measuring and comparing user performance and workload in a target search task, and we also collect and compare qualitative feedback from the subjects.
Experimental results show that certain levels of minification and motion amplification increase performance by 8.2% and 6.0%, respectively, with comparable or even decreased subjective workload.

Item: Modifying an Identified Size of Objects Handled with Two Fingers Using Pseudo-Haptic Effects (The Eurographics Association, 2012)
Authors: Ban, Yuki; Narumi, Takuji; Tanikawa, Tomohiro; Hirose, Michitaka
Editors: Ronan Boulic, Carolina Cruz-Neira, Kiyoshi Kiyokawa, David Roberts
Abstract: In our research, we aim to construct a visuo-haptic system that employs pseudo-haptic effects to provide users with the sensation of touching virtual objects of varying shapes. Thus far, we have shown that it is possible to modify the identified shape of a curved surface or the angle of an edge by displacing the visual representation of the user's hand. However, this method cannot accommodate touching with two or more fingers simply by displacing the visual representation of the hand. To solve this problem, we need not only to displace the visual representation of the user's hand but also to deform it. Hence, in this paper, we focus on modifying the identified size of objects handled with two fingers by deforming the visual representation of the user's hand, in order to construct a novel visuo-haptic system. We devised a video see-through system that can change the perceived shape of an object the user is visually touching: the visual representation of the user's hand is deformed as if the user were handling the visual object, when in actuality the user is handling an object of a different size. Using this system, we performed an experiment to investigate the effects of visuo-haptic interaction and evaluated its effectiveness. The results showed that the perceived size of objects handled with the thumb and other finger(s) could be modified as long as the difference between the physical and visual stimuli was in the range of -40% to +35%. This indicates that our method can be applied to the visuo-haptic shape display system that we previously proposed.