ICAT-EGVE2017
Browsing ICAT-EGVE2017 by Issue Date
Edited by Robert W. Lindeman, Gerd Bruder, and Daisuke Iwai
Now showing 1 - 20 of 34
Item Real-time Ambient Fusion of Commodity Tracking Systems for Virtual Reality (The Eurographics Association, 2017)
Fountain, Jake; Smith, Shamus P.
Cross-compatibility of virtual reality devices is limited by the difficulty of aligning and fusing data between systems. In this paper, a plugin for ambiently aligning the reference frames of virtual reality tracking systems is presented. The core contribution is a procedure for ambient calibration, which describes ambient behaviors for data gathering, system calibration and fault detection. Data is ambiently collected from in-application self-directed movements, and calibration is automatically performed between dependent sensor systems (see the alignment sketch below). Sensor fusion is then performed by taking the most accurate data for a given body part amongst all systems. The procedure was applied to aligning a Kinect v2 with an HTC Vive and an Oculus Rift in a variety of common virtual reality scenarios. The results were compared to alignment performed with a gold-standard OptiTrack motion capture system. Typical results were 20cm and 4° of error compared to the ground truth, which compares favorably with the accepted accuracy of the Kinect v2. Data collection for full calibration took on average 13 seconds of in-application, self-directed movement. This work represents an essential development towards plug-and-play sensor fusion for virtual reality technology.

Item Facial Performance Capture by Embedded Photo Reflective Sensors on A Smart Eyewear (The Eurographics Association, 2017)
Asano, Nao; Masai, Katsutoshi; Sugiura, Yuta; Sugimoto, Maki
Facial performance capture is used in animation production to project a performer's facial expression onto a computer graphics model. Retro-reflective markers and cameras are widely used for performance capture. To capture expressions, markers must be placed on the performer's face and the intrinsic and extrinsic parameters of the cameras calibrated in advance, so the measurable space is limited to the calibrated area. In this paper, we propose a system to capture facial performance using smart eyewear with photo-reflective sensors and machine learning techniques.

Item Towards Precise, Fast and Comfortable Immersive Polygon Mesh Modelling: Capitalising the Results of Past Research and Analysing the Needs of Professionals (The Eurographics Association, 2017)
Ladwig, Philipp; Herder, Jens; Geiger, Christian
More than three decades of ongoing research in immersive modelling has revealed many advantages of creating objects in virtual environments. Despite these benefits, the potential of immersive modelling has only been partly exploited, due to unresolved issues such as ergonomic problems, numerous challenges with user interaction, and the inability to perform exact, fast and progressive refinements. This paper explores past research, shows alternative approaches and proposes novel interaction tools for pending problems. An immersive modelling application for polygon meshes is created from scratch and tested by professional users of desktop modelling tools, such as Autodesk Maya, in order to assess the efficiency, comfort and speed of the proposed application in direct comparison with professional desktop modelling tools.
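The reference-frame alignment underlying the Real-time Ambient Fusion entry above can be illustrated with a standard rigid-registration step: given synchronized position samples of the same tracked point from two systems, the least-squares rotation and translation between their frames can be recovered with the Kabsch algorithm. The following is a minimal sketch, not the paper's actual implementation; the function name and the use of raw position pairs are assumptions for illustration.

```python
import numpy as np

def align_frames(points_a: np.ndarray, points_b: np.ndarray):
    """Estimate rotation R and translation t mapping frame A into frame B,
    minimizing ||R @ a + t - b||^2 over corresponding samples (Kabsch).
    points_a, points_b: (N, 3) arrays of synchronized positions."""
    centroid_a = points_a.mean(axis=0)
    centroid_b = points_b.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (points_a - centroid_a).T @ (points_b - centroid_b)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force a proper rotation (det = +1)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = centroid_b - R @ centroid_a
    return R, t

# Usage with synthetic data standing in for samples gathered ambiently
# while the user moves in-application:
rng = np.random.default_rng(0)
a = rng.uniform(-1, 1, (100, 3))            # e.g. Kinect v2 positions
b = a + np.array([0.5, 0.0, 0.0])           # e.g. Vive positions, offset frame
R, t = align_frames(a, b)
print(np.allclose(a @ R.T + t, b))          # True
```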
Item A Mutual Motion Capture System for Face-to-face Collaboration (The Eurographics Association, 2017)
Nakamura, Atsuyuki; Kiyokawa, Kiyoshi; Ratsamee, Photchara; Mashita, Tomohiro; Uranishi, Yuki; Takemura, Haruo
In recent years, motion capture technology to measure the movement of the body has been used in many fields, and motion capture targeting multiple people is becoming necessary in multi-user virtual reality (VR) and augmented reality (AR) environments. Ideally, motion capture should require no wearable devices, so that natural motion can be captured easily. Some systems achieve this with an RGB-D camera fixed in the environment, but the user then has to stay in front of the fixed camera. In this research, we therefore propose a motion capture technique for multi-user VR/AR environments using head-mounted displays (HMDs) that neither limits the working range of the user nor requires any wearable devices. In the proposed technique, an RGB-D camera is attached to each HMD and motion capture is carried out mutually. The motion capture accuracy is improved by correcting the depth image. A prototype system has been implemented to evaluate the effectiveness of the proposed method, and motion capture accuracy has been compared between two conditions, with and without depth image correction, while rotating the RGB-D camera. The results confirmed that the proposed method could decrease the number of frames with erroneous motion capture by 49% to 100% compared with the case without depth image correction.

Item Development of Olfactory Display Using Solenoid Valves Controlled Atomization for High Concentration Scent Emission (The Eurographics Association, 2017)
Ariyakul, Yossiri
This paper reports on an olfactory display that presents smells using an atomization technique controlled by high-speed switching solenoid valves. Although atomization has been widely used to release smells in commercial aroma diffusers, the intensity of the released odor cannot normally be controlled. Here, high-speed ON/OFF switching of the solenoid valves makes it possible to control odor intensity precisely and rapidly, and atomization emits odors at higher concentrations than those generated by natural evaporation. The proposed olfactory display was evaluated using an odor sensing system composed of a quartz crystal microbalance (QCM) gas sensor. The results confirmed the reproducibility of the proposed display and its capability to present high-concentration odors with adjustable intensity.

Item Archives of Thrill: The V-Armchair Experience (The Eurographics Association, 2017)
Passmore, Peter J.; Tennent, Paul; Walker, Brendan; Philpot, Adam; Le, Ha; Markowski, Marianne; Karamanoglu, Mehmet
Technology for older people is typically concerned either with health care or with the accessibility of existing systems. In this paper we take a more 'entertainment-oriented' approach to developing experiences aimed at older users. We describe the design, development and a user study of the V-Armchair, a virtual reality and motion platform based roller coaster experience. The V-Armchair constitutes a blueprint for the digital archiving of physical ride experiences through the simultaneous capture of 360° video, sound and motion. It gives access to thrill experiences to those who may not be able to go on real thrill rides, such as older riders, and it can be considered a class of technology that could help to support 'active aging' as defined by the World Health Organisation. We discuss strategies for capturing and then 'toning down' motion experiences to make them accessible for older users. We present a study which explores the user experience of the V-Armchair with an older group (median age 63) using a DK2 headset and a younger group (median age 25) using a CV1 headset, via thematic analysis of semi-structured interviews and a modified version of the Game Experience Questionnaire, and discuss emergent themes such as the role of the presenter, reminiscence, presence and immersion.
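One plausible way to realize the 'toning down' that the V-Armchair entry describes is to attenuate and smooth the recorded platform motion before replay. The sketch below scales the recorded signal toward neutral and low-pass filters it; the gain, cutoff, sample rate, and array layout are illustrative assumptions, not details from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def tone_down(motion: np.ndarray, fs: float, gain: float = 0.5,
              cutoff_hz: float = 1.0) -> np.ndarray:
    """Attenuate recorded ride motion for replay to older users.
    motion: (N, k) array of platform angles/positions sampled at fs Hz."""
    b, a = butter(2, cutoff_hz / (fs / 2))      # 2nd-order low-pass filter
    smoothed = filtfilt(b, a, motion, axis=0)   # zero-phase smoothing
    return gain * smoothed                      # scale toward neutral pose

# Usage with a hypothetical 60 Hz recording of (roll, pitch) in degrees:
t = np.linspace(0, 20, 1200)
recorded = np.column_stack([10 * np.sin(t), 5 * np.cos(t)])
gentle = tone_down(recorded, fs=60.0)
```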
Item Collaborative View Configurations for Multi-user Interaction with a Wall-size Display (The Eurographics Association, 2017)
Kim, Hyungon; Kim, Yeongmi; Lee, Gun A.; Billinghurst, Mark; Bartneck, Christoph
This paper explores the effects of different collaborative view configurations on face-to-face collaboration using a wall-size display, and the relationship between view configuration and multi-user interaction. Three view configurations (shared view, split screen, and split screen with navigation information) for multi-user collaboration with a wall-size display were introduced and evaluated in a user study. From the experimental results, several insights for designing a virtual environment with a wall-size display are discussed. The shared view configuration does not disturb collaboration despite control conflicts and can support effective collaboration. The split screen configuration supports independent work, although it can divide users' attention. Navigation information can reduce the interaction required for the navigational task, while overall interaction performance may not increase.

Item Exploring Pupil Dilation in Emotional Virtual Reality Environments (The Eurographics Association, 2017)
Chen, Hao; Dey, Arindam; Billinghurst, Mark; Lindeman, Robert W.
Previous investigations have shown that pupil dilation can be affected by emotive pictures, audio clips, and videos. In this paper, we explore how emotive Virtual Reality (VR) content can also cause pupil dilation. VR has been shown to be able to evoke negative and positive arousal in users immersed in different virtual scenes. In our research, VR scenes were used as emotional triggers. Five emotional VR scenes were designed, each with five emotion segments: happiness, fear, anxiety, sadness, and disgust. While participants experienced the VR scenes, their pupil dilation and the brightness in the headset were captured. We found that both the negative and positive emotion segments produced pupil dilation in the VR environments. We also explored whether showing heart beat cues to users could cause differences in pupil dilation. Three different heart beat cues were shown to users using a combination of three channels: haptic, audio, and visual. The results showed that the haptic-visual cue caused the most significant pupil dilation change from the baseline.
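Since pupil size responds to luminance as well as emotion, the headset brightness recorded in the study above would typically need to be factored out before comparing segments against a baseline. The following is a minimal sketch of one such analysis step, using a plain least-squares fit and entirely hypothetical data; it is an assumed approach, not the authors' actual pipeline.

```python
import numpy as np

def brightness_corrected_dilation(pupil_mm: np.ndarray,
                                  brightness: np.ndarray,
                                  baseline: slice) -> np.ndarray:
    """Remove the linear brightness component from pupil diameter, then
    express dilation relative to the mean of a baseline period."""
    # Fit pupil ~ a * brightness + b on the baseline samples only
    A = np.column_stack([brightness[baseline],
                         np.ones(brightness[baseline].size)])
    coef, *_ = np.linalg.lstsq(A, pupil_mm[baseline], rcond=None)
    predicted = coef[0] * brightness + coef[1]    # luminance-driven part
    residual = pupil_mm - predicted               # emotion-related part
    return residual - residual[baseline].mean()   # baseline-relative change

# Hypothetical 120 s recording at 60 Hz; first 10 s is a neutral baseline.
t = np.arange(0, 120, 1 / 60)
brightness = 0.5 + 0.1 * np.sin(t / 7)
pupil = 3.0 - 2.0 * brightness + 0.2 * (t > 60)   # step = "emotional" response
dilation = brightness_corrected_dilation(pupil, brightness, slice(0, 600))
```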
Item Evaluating and Comparing Game-controller based Virtual Locomotion Techniques (The Eurographics Association, 2017)
Sarupuri, Bhuvaneswari; Hoermann, Simon; Whitton, Mary C.; Lindeman, Robert W.
The incremental hardware costs of virtual locomotion are minimized when the technique uses interaction capabilities available in controllers and devices that are already part of the VE system, e.g., gamepads, keyboards, and multi-function controllers. We used a different locomotion technique for each of three such devices: gamepad thumb-stick (joystick walking), a customized hybrid gaming keyboard (speedpad walking), and an innovative technique that uses the orientation and triggers of the HTC Vive controllers (TriggerWalking). We explored the efficacy of these techniques in a hide-and-seek task in an indoor environment, measuring task performance, simulator sickness, system usability, perceived workload, and preference. We found that users had a strong preference for TriggerWalking, which also had the least increase in simulator sickness, the highest performance score, and the highest perceived usability; however, participants using TriggerWalking also had the most object and wall collisions. Overall we found that TriggerWalking is an effective locomotion technique and that it has significant and important benefits. Future research will explore whether TriggerWalking offers equal benefits in other virtual environments, on different tasks, and with other types of movement.

Item Assessing the Relevance of Eye Gaze Patterns During Collision Avoidance in Virtual Reality (The Eurographics Association, 2017)
Varma, Kamala; Guy, Stephen J.; Interrante, Victoria
Increasing presence in virtual reality environments requires meticulous imitation of human behavior in virtual agents. In the specific case of collision avoidance, agents' interactions will feel more natural if they are able to both display and respond to non-verbal cues. This study informs agent behavior by analyzing participants' reactions to non-verbal cues. Its aim is to confirm previous work showing head orientation to be a primary factor in collision avoidance negotiation, and to extend this by investigating the additional contribution of eye gaze direction as a cue. Fifteen participants were directed to walk towards an oncoming agent in a virtual hallway, who exhibited various combinations of head orientation and eye gaze direction cues. Shortly before the potential collision the display turned black and the participant had to move to avoid the agent as if she were still present. Meanwhile, the participants' own eye gaze was tracked to identify where their focus was directed and how it related to their response. Results show that the natural tendency was to avoid the agent by moving right. However, participants showed a greater compulsion to move leftward if the agent cued her own movement to the participant's right, whether through head orientation cues (consistent with previous work) or through eye gaze direction cues (extending previous work). The implications of these findings are discussed.

Item 3D Reconstruction of Hand Postures by Measuring Skin Deformation on Back Hand (The Eurographics Association, 2017)
Kuno, Wakaba; Sugiura, Yuta; Asano, Nao; Kawai, Wataru; Sugimoto, Maki
In this research, we propose a method for reconstructing hand posture by measuring the deformation of the back of the hand with a wearable device. The skin deformation can be measured by several photo-reflective sensors attached to the device. In the learning phase, our method constructs a regression model from hand posture data captured by a depth camera and skin deformation data captured by the photo-reflective sensors. In the estimation phase, this regression model is used to reconstruct hand posture from the photo-reflective sensor data in real time. Finger posture can be estimated without hindering natural finger movement, since the deformation of the back of the hand is measured without directly measuring the position of the fingers. The method allows users to manipulate information in a virtual environment with their fingers. We conducted an experiment to evaluate the accuracy of hand posture reconstruction with the proposed system.
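The learning and estimation phases in the hand-posture entry above amount to fitting a multi-output regression from photo-reflective sensor readings to joint parameters. Below is a minimal sketch with scikit-learn; the sensor count, joint-angle representation, and choice of ridge regression are illustrative assumptions, since the paper does not tie the method to any particular library or model.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Learning phase: pair wearable sensor readings with depth-camera ground
# truth. Shapes are hypothetical: 8 photo-reflective sensors, 20 joint angles.
rng = np.random.default_rng(1)
sensors_train = rng.uniform(0, 1, (5000, 8))     # reflected-light intensities
true_map = rng.normal(size=(8, 20))              # stand-in for real physiology
angles_train = sensors_train @ true_map          # depth-camera joint angles

model = Ridge(alpha=1.0).fit(sensors_train, angles_train)

# Estimation phase: reconstruct posture from the sensors alone, in real time.
sensors_live = rng.uniform(0, 1, (1, 8))
estimated_angles = model.predict(sensors_live)   # (1, 20) joint angles
```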
Item A New Approach to Utilize Augmented Reality on Precision Livestock Farming (The Eurographics Association, 2017)
Zhao, Zongyuan; Yang, Wenli; Chinthammit, Winyu; Rawnsley, Richard; Neumeyer, Paul; Cahoon, Stephen
This paper proposes a new method that utilizes AR to help pasture-based dairy farmers identify and locate animals within large herds. Our method uses GPS collars on cows, together with the digital camera and on-board GPS of a mobile device, to locate a selected cow and show its behavioral and other associated key metrics in our mobile application (see the projection sketch below). The cow's information, augmented onto the real-scene video stream, will help users (farmers) manage their animals with respect to welfare, health, and management interventions. By integrating GPS data with computer vision (CV) and machine learning, our mobile AR application has two major functions: 1) searching for a cow by its unique ID, and 2) displaying information associated with a selected cow visible on screen. Our proof-of-concept application shows the potential of utilizing AR in precision livestock farming.

Item User Interface Agents for Guiding Interaction with Augmented Virtual Mirrors (The Eurographics Association, 2017)
Lee, Gun A.; Rudhru, Omprakash; Park, Hye Sun; Kim, Ho Won; Billinghurst, Mark
This research investigates using user interface (UI) agents for guiding gesture-based interaction with Augmented Virtual Mirrors. While prior work on gesture interaction uses graphical symbols to guide user interaction, we propose using UI agents, exploring two approaches: 1) using a UI agent as a delayed cursor, and 2) using a UI agent as an interactive button. We conducted two user studies to evaluate the proposed designs. The results show that UI agents are effective for guiding user interactions in a similar way to a traditional graphical user interface providing visual cues, while also being useful for engaging emotionally with users.

Item An Augmented Reality and Virtual Reality Pillar for Exhibitions: A Subjective Exploration (The Eurographics Association, 2017)
See, Zi Siang; Sunar, Mohd Shahrizal; Billinghurst, Mark; Dey, Arindam; Santano, Delas; Esmaeili, Human; Thwaites, Harold
This paper presents the development of an Augmented Reality (AR) and Virtual Reality (VR) pillar, a novel approach for showing AR and VR content in a public setting. A pillar in a public exhibition venue was converted into a four-sided AR and VR showcase presenting the cultural heritage exhibit "Boatbuilders of Pangkor". Multimedia tablets and mobile AR head-mounted displays (HMDs) were provided for visitors to experience multisensory AR and VR content demonstrated on the pillar. The content included AR-based videos, maps, images and text, and VR experiences that allowed visitors to view reconstructed 3D subjects and remote locations in a 360° virtual environment. In this paper, we describe the prototype system, a user evaluation study and directions for future work.
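Locating a GPS-collared cow on the phone's screen, as the precision-livestock entry above describes, comes down to converting the cow's GPS fix into the camera frame and projecting it with a pinhole model. Here is a compact sketch of that chain under a flat-earth (local east-north) approximation; the focal length, level-camera assumption, and function names are illustrative, not the authors' implementation.

```python
import numpy as np

EARTH_R = 6_371_000.0  # mean Earth radius in metres

def gps_to_en(lat, lon, ref_lat, ref_lon):
    """Flat-earth approximation: (east, north) offset in metres from a
    reference fix; adequate over paddock-scale distances."""
    east = np.radians(lon - ref_lon) * EARTH_R * np.cos(np.radians(ref_lat))
    north = np.radians(lat - ref_lat) * EARTH_R
    return east, north

def project_to_screen(cow_fix, device_fix, heading_deg, f_px, cx, cy):
    """Project a cow's GPS fix into pixel coordinates of a camera that is
    level with the ground and rotated heading_deg east of north."""
    e, n = gps_to_en(*cow_fix, *device_fix)
    h = np.radians(heading_deg)
    x = e * np.cos(h) - n * np.sin(h)   # camera-frame right
    z = e * np.sin(h) + n * np.cos(h)   # camera-frame forward
    if z <= 0:
        return None                     # cow is behind the camera
    return (cx + f_px * x / z, cy)      # pinhole projection, level camera

# Hypothetical fixes: device at origin facing north, cow ~30 m ahead,
# slightly to the east; intrinsics for a 1920x1080 camera.
px = project_to_screen((-41.99973, 147.00004), (-42.0, 147.0),
                       heading_deg=0.0, f_px=1500, cx=960, cy=540)
```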
Item Reference Framework on vSRT-method Benchmarking for MAR (The Eurographics Association, 2017)
Ichikari, Ryosuke; Kurata, Takeshi; Makita, Koji; Taketomi, Takafumi; Uchiyama, Hideaki; Kondo, Tomotsugu; Mori, Shohei; Shibata, Fumihisa
This paper presents a reference framework for benchmarking vision-based spatial registration and tracking (vSRT) methods for Mixed and Augmented Reality (MAR). The framework provides typical benchmarking processes, benchmark indicators, and trial set elements that are necessary to successfully identify, define, design, select, and apply benchmarking of vSRT methods for MAR. In addition, we summarize findings from benchmarking activities to share how to organize and conduct on-site and off-site competitions.

Item Effects of Personalized Avatar Texture Fidelity on Identity Recognition in Virtual Reality (The Eurographics Association, 2017)
Thomas, Jerald; Azmandian, Mahdi; Grunwald, Sonia; Le, Donna; Krum, David; Kang, Sin-Hwa; Rosenberg, Evan Suma
Recent advances in 3D scanning, reconstruction, and animation techniques have made it possible to rapidly create photorealistic avatars based on real people. While it is now possible to create personalized avatars automatically with consumer-level technology, their visual fidelity still falls far short of 3D avatars created with professional cameras and manual artist effort. To evaluate the importance of investing resources in the creation of high-quality personalized avatars, we conducted an experiment investigating the effects of varying their visual texture fidelity, focusing specifically on identity recognition of specific individuals. We designed two virtual reality experimental scenarios: (1) selecting a specific avatar from a virtual lineup and (2) searching for an avatar in a virtual crowd. Our results showed that visual fidelity had a significant impact on participants' ability to identify specific avatars from a lineup while wearing a head-mounted display. We also investigated gender effects for both the participants and the confederates from whom the avatars were created.

Item Dwarf or Giant: The Influence of Interpupillary Distance and Eye Height on Size Perception in Virtual Environments (The Eurographics Association, 2017)
Kim, Jangyoon; Interrante, Victoria
This paper addresses the question: to what extent can deliberate manipulations of interpupillary distance (IPD) and eye height be used in a virtual reality (VR) experience to influence a user's sense of their own scale with respect to their surrounding environment - evoking, for example, the illusion of being miniaturized, or of being a giant? In particular, we report the results of an experiment in which we separately study the effect of each of these body scale manipulations on users' perception of object size in a highly detailed, photorealistically rendered immersive virtual environment, using both absolute numeric measures and body-relative actions. Following a real-world training session, in which participants learn to accurately report the metric sizes of individual white cubes (3''-20'') presented one at a time on a table in front of them, we conduct two blocks of VR trials using nine different combinations of IPD and eye height. In the first block of trials, participants report the perceived metric size of a virtual white cube that sits on a virtual table, at the same distance used in the real-world training, within a realistic virtual living room filled with many objects capable of providing familiar size cues. In the second block of trials, participants use their hands to indicate the perceived size of the cube. We found that size judgments were moderately correlated (r = 0.4) between the two response methods, and that neither altered eye height (± 50cm) nor reduced (10mm) IPD had a significant effect on size judgments, but that a wider (150mm) IPD caused a significant (μ = 38%, p < 0.01) decrease in perceived cube size. These findings add new insights to our understanding of how eye height and IPD manipulations can affect people's perception of scale in highly realistic immersive VR scenarios.
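The IPD manipulation studied in the entry above is commonly implemented by scaling the horizontal offset between the left- and right-eye camera positions; widening the virtual baseline beyond the user's real IPD makes the world appear smaller. Below is a minimal sketch of that eye-pose computation with hypothetical names; real engines (e.g. Unity or OpenXR layers) expose this differently, so treat it as a conceptual illustration only.

```python
import numpy as np

def eye_positions(head_pos: np.ndarray, right_dir: np.ndarray,
                  virtual_ipd_m: float, eye_height_offset_m: float = 0.0):
    """Return (left, right) eye positions for stereo rendering.
    Widening virtual_ipd_m beyond a typical real IPD (~0.063 m) shrinks
    the apparent world; raising eye height makes the user feel taller."""
    up_shift = np.array([0.0, eye_height_offset_m, 0.0])
    half = 0.5 * virtual_ipd_m * right_dir
    center = head_pos + up_shift
    return center - half, center + half

# Giant-like condition from the study: 150 mm IPD, unchanged eye height.
head = np.array([0.0, 1.7, 0.0])        # tracked head position (metres)
right = np.array([1.0, 0.0, 0.0])       # head-frame right vector
left_eye, right_eye = eye_positions(head, right, virtual_ipd_m=0.150)
```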
Item Sharing Gaze for Remote Instruction (The Eurographics Association, 2017)
Barathan, Sathya; Lee, Gun A.; Billinghurst, Mark; Lindeman, Robert W.
In this paper, we report on how sharing gaze cues can assist remote instruction. A person wearing a head-mounted display and camera can share his or her view with a remote collaborator and get assistance with completing a real-world task. This configuration has been extensively studied in the past, but there has been little research on how the addition of shared gaze cues might affect the collaboration. This paper reports on a user study exploring how sharing the gaze of a remote expert affects the quality of collaboration over a head-worn video conferencing link. The results showed that local workers performed the task faster when they were aware of their remote collaborator's gaze, and the remote experts were in favour of shared gaze cues because of their ease of use and improved communication.

Item ICAT-EGVE 2017: Frontmatter (Eurographics Association, 2017)
Lindeman, Robert W.; Bruder, Gerd; Iwai, Daisuke

Item Asymmetric Bimanual Interaction for Mobile Virtual Reality (The Eurographics Association, 2017)
Bai, Huidong; Nassani, Alaeddin; Ens, Barrett; Billinghurst, Mark
In this paper, we explore asymmetric bimanual interaction with mobile Virtual Reality (VR). We have developed a novel two-handed interface for mobile VR which uses 6 degree-of-freedom (DoF) controller input for the dominant hand and full-hand gesture input for the non-dominant hand. We evaluated our method in a pilot study by comparing it to three other asymmetric bimanual interfaces: (1) 3D controller and 2D touchpad, (2) 3D gesture and 2D controller, and (3) 3D gesture and 2D touchpad, in a VR translation and rotation task. We observed that using our position-aware handheld controller with gesture input provided an easy and natural experience.