WICED 2020
Browsing WICED 2020 by Subject "centered computing"
Now showing 1 - 4 of 4
Item: Designing an Adaptive Assisting Interface for Learning Virtual Filmmaking (The Eurographics Association, 2020)
Authors: Wu, Qiu-Jie; Kuo, Chih-Hsuan; Wu, Hui-Yin; Li, Tsai-Yen
Editors: Christie, Marc; Wu, Hui-Yin; Li, Tsai-Yen; Gandhi, Vineet
In this paper, we present an adaptive assisting interface for learning virtual filmmaking. The design of the system is based on scaffolding theory: it provides timely guidance to the user in the form of visual and audio messages adapted to each person's skill level and performance. The system was developed on an existing virtual filmmaking setup. We conducted a study with 24 participants, who were asked to operate the film set with or without our adaptive assisting interface. Results suggest that our system can provide users with a better learning experience and positive knowledge harvest.

Item: Exploring the Impact of 360° Movie Cuts in Users' Attention (The Eurographics Association, 2020)
Authors: Marañes, Carlos; Gutierrez, Diego; Serrano, Ana
Editors: Christie, Marc; Wu, Hui-Yin; Li, Tsai-Yen; Gandhi, Vineet
Virtual Reality (VR) has become more relevant since the first devices for personal use became available on the market. New content has emerged for this new medium with different purposes, such as education, training, and entertainment. However, the production workflow of cinematic VR content is still in an experimental phase, mainly because content creators disagree on how to tell a story effectively in this medium. Unlike traditional filmmaking, which has been in development for more than 100 years, movie editing in VR has brought new challenges to be addressed. Viewers now have partial control of the camera and can watch every degree of the 360° environment that surrounds them, with the risk of missing aspects of the scene that are key to understanding the narrative of the movie. Directors can decide how to edit the film by combining the different shots.
Nevertheless, viewers' behavior may be influenced by the scenes before and after a cut. To address this issue, we analyze users' behavior through cuts in a professional movie, where the narrative plays an important role, and derive new insights that could potentially influence VR content creation, informing content creators about the impact of different cuts on viewers' behavior.

Item: GAZED - Gaze-guided Cinematic Editing of Wide-Angle Monocular Video Recordings (The Eurographics Association, 2020)
Authors: Moorthy, K. L. Bhanu; Kumar, Moneish; Subramanian, Ramanathan; Gandhi, Vineet
Editors: Christie, Marc; Wu, Hui-Yin; Li, Tsai-Yen; Gandhi, Vineet
We present GAZED, eye GAZe-guided EDiting for videos captured by a solitary, static, wide-angle, high-resolution camera. Eye gaze has been effectively employed in computational applications as a cue to capture interesting scene content; we employ gaze as a proxy to select shots for inclusion in the edited video. Given the original video, scene content and user eye-gaze tracks are combined to generate an edited video comprising cinematically valid actor shots and shot transitions, yielding an aesthetic and vivid representation of the original narrative. We model cinematic video editing as an energy minimization problem over shot selection, whose constraints capture cinematographic editing conventions. Gazed scene locations primarily determine the shots constituting the edited video. The effectiveness of GAZED against multiple competing methods is demonstrated via a psychophysical study involving 12 users and 12 performance videos. Professional video recordings of stage performances are typically created by employing skilled camera operators, who record the performance from multiple viewpoints. These multi-camera feeds, termed rushes, are then edited together to portray an eloquent story intended to maximize viewer engagement. Generating professional edits of stage performances is both difficult and costly.
Firstly, maneuvering cameras during a live performance is difficult even for experts, as there is no option of a retake upon error, and camera viewpoints are limited because the use of large supporting equipment (trolley, crane, etc.) is infeasible. Secondly, manual video editing is an extremely slow and tedious process that leverages the experience of skilled editors. Overall, the need for (i) a professional camera crew, (ii) multiple cameras and supporting equipment, and (iii) expert editors escalates the process complexity and costs. Consequently, most production houses employ a single large field-of-view static camera, placed far enough away to capture the entire stage. This approach is widespread because it is simple to implement and captures the entire scene. Such static visualizations are apt for archival purposes; however, they are often unsuccessful at captivating attention when presented to the target audience. While conveying the overall context, the distant camera feed fails to bring out vivid scene details such as close-up faces, character emotions and actions, and ensuing interactions, which are critical for cinematic storytelling. GAZED denotes an end-to-end pipeline to generate an aesthetically edited video from a single static, wide-angle stage recording. It is inspired by prior work [GRG14], which describes how a plural camera crew can be replaced by a single high-resolution static camera, with multiple virtual camera shots or rushes generated by simulating several virtual pan/tilt/zoom cameras that focus on actors and actions within the original recording. In this work, we demonstrate that the multiple rushes can be automatically edited by leveraging user eye-gaze information, modeling (virtual) shot selection as a discrete optimization problem. Eye gaze represents an inherent guiding factor for video editing, as eyes are sensitive to interesting scene events [RKH*09, SSSM14] that need to be vividly presented in the edited video.
The key contribution of our work, and the objective critical for video editing, is to decide which shot (or rush) should be selected for each frame of the edited video. The shot-selection problem is modeled as an optimization that incorporates gaze information along with other cost terms modeling cinematic editing principles. Gazed scene locations are used to define gaze potentials, which measure the importance of the different shots to choose from. Gaze potentials are then combined with other terms that model cinematic principles such as avoiding jump cuts (which produce jarring shot transitions), maintaining rhythm (the pace of shot transitions), and avoiding transient shots. The optimization is solved using dynamic programming. [MKSG20] refers to the detailed full article.

Item: How Good is Good Enough? The Challenge of Evaluating Subjective Quality of AI-Edited Video Coverage of Live Events (The Eurographics Association, 2020)
Authors: Radut, Miruna; Evans, Michael; To, Kristie; Nooney, Tamsin; Phillipson, Graeme
Editors: Christie, Marc; Wu, Hui-Yin; Li, Tsai-Yen; Gandhi, Vineet
This paper reports on recent and ongoing work to develop empirical methods for assessing the subjective quality of artificial intelligence (AI)-produced multi-camera video. We have developed a prototype software system that records panel performances, using a variety of didactic and machine-learning techniques to intelligently crop and cut between feeds from an array of static, unmanned cameras. Evaluating the subjective quality of the software's decisions regarding when and to what to cut represents an important and interesting challenge, due to the technical behaviour of the system, the large number of potential quality risks, and the need to control for content specificity.
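The GAZED abstract above frames editing as choosing one shot per frame by dynamic programming, trading a per-frame gaze potential against transition penalties such as jump-cut avoidance. A minimal sketch of that general scheme is given below; the cost structure, weights, and function name are illustrative assumptions, not the authors' actual formulation.

```python
def select_shots(gaze_cost, cut_cost):
    """Pick one shot per frame minimizing total cost (illustrative sketch).

    gaze_cost[t][s] -- assumed cost of showing shot s at frame t
                       (lower when the gaze potential favors that shot).
    cut_cost        -- assumed flat penalty per shot transition, standing in
                       for cinematic terms like jump-cut and rhythm penalties.
    """
    n_frames, n_shots = len(gaze_cost), len(gaze_cost[0])
    # dp[s] = minimum cost of an edit ending in shot s at the current frame
    dp = list(gaze_cost[0])
    back = []  # back-pointers to recover the optimal shot sequence
    for t in range(1, n_frames):
        new_dp, ptr = [], []
        for s in range(n_shots):
            # either stay in the same shot, or pay cut_cost to switch
            prev = min(range(n_shots),
                       key=lambda p: dp[p] + (0 if p == s else cut_cost))
            new_dp.append(dp[prev] + (0 if prev == s else cut_cost)
                          + gaze_cost[t][s])
            ptr.append(prev)
        dp, back = new_dp, back + [ptr]
    # backtrack from the cheapest final shot
    s = min(range(n_shots), key=lambda i: dp[i])
    path = [s]
    for ptr in reversed(back):
        s = ptr[s]
        path.append(s)
    return path[::-1]
```

For example, with two candidate shots where gaze favors shot 0 for the first two frames and shot 1 afterwards, and a small cut penalty, the program returns one clean transition rather than oscillating between shots.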