WICED 2020
Browsing WICED 2020 by Author "Wu, Hui-Yin"
Now showing 1 - 3 of 3
Item
Designing an Adaptive Assisting Interface for Learning Virtual Filmmaking (The Eurographics Association, 2020)
Wu, Qiu-Jie; Kuo, Chih-Hsuan; Wu, Hui-Yin; Li, Tsai-Yen; Christie, Marc and Wu, Hui-Yin and Li, Tsai-Yen and Gandhi, Vineet
In this paper, we present an adaptive assisting interface for learning virtual filmmaking. The design of the system is based on scaffolding theory, providing timely guidance to the user in the form of visual and audio messages adapted to each person's skill level and performance. The system was developed on an existing virtual filmmaking setup. We conducted a study with 24 participants, who were asked to operate the film set with or without our adaptive assisting interface. Results suggest that our system can provide users with a better learning experience and greater knowledge gains.

Item
Joint Attention for Automated Video Editing (The Eurographics Association, 2020)
Wu, Hui-Yin; Santarra, Trevor; Leece, Michael; Vargas, Rolando; Jhala, Arnav; Christie, Marc and Wu, Hui-Yin and Li, Tsai-Yen and Gandhi, Vineet
Joint attention refers to the shared focal points of attention for occupants in a space. In this work, we introduce a computational definition of joint attention for the automated editing of meetings in multi-camera environments from the AMI corpus. Using extracted head pose and individual headset amplitude as features, we developed three editing methods: (1) a naive audio-based method that selects the camera using only the headset input, (2) a rule-based edit that selects cameras at a fixed pacing using pose data, and (3) an editing algorithm using LSTM (long short-term memory) networks that learn joint attention from both pose and audio data, trained on expert edits. The methods are evaluated qualitatively against the human edit, and quantitatively in a user study with 22 participants. Results indicate that LSTM-trained joint attention produces edits that are comparable to the expert edit, offering a wider range of camera views than the audio-based method while being more generalizable than the rule-based method. (A minimal illustrative sketch of the audio-based selection appears after this listing.)

Item
WICED 2020: Frontmatter (The Eurographics Association, 2020)
Christie, Marc; Wu, Hui-Yin; Li, Tsai-Yen; Gandhi, Vineet; Christie, Marc and Wu, Hui-Yin and Li, Tsai-Yen and Gandhi, Vineet
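The sketch below is only a rough illustration of the naive audio-based camera selection described in the abstract of "Joint Attention for Automated Video Editing" above: pick, per time window, the camera of the participant with the loudest headset signal. It is not the authors' implementation; the function and parameter names (select_cameras, amplitudes, window_s, rate_hz) and the assumption that camera i frames participant i are inventions for this example.

```python
# Illustrative sketch only: a naive audio-based camera selector in the spirit of
# method (1) from "Joint Attention for Automated Video Editing". Names and the
# camera-to-participant mapping are assumptions, not taken from the paper.
import numpy as np

def select_cameras(amplitudes: np.ndarray, window_s: float = 1.0, rate_hz: float = 100.0) -> np.ndarray:
    """Return one camera index per window, choosing the loudest headset.

    amplitudes: array of shape (num_participants, num_samples) holding
        per-participant headset amplitude envelopes sampled at rate_hz.
    Assumes camera i frames participant i (a simplification for this sketch).
    """
    window = int(window_s * rate_hz)
    num_windows = amplitudes.shape[1] // window
    cuts = []
    for w in range(num_windows):
        segment = amplitudes[:, w * window:(w + 1) * window]
        # The loudest average headset in this window decides the shot.
        cuts.append(int(np.argmax(segment.mean(axis=1))))
    return np.array(cuts)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_amplitudes = np.abs(rng.normal(size=(4, 6000)))  # 4 participants, 60 s at 100 Hz
    print(select_cameras(fake_amplitudes))  # one camera index per 1-second window
```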