
    FaceDrive: Facial Expression Driven Operation to Control Virtual Supernumerary Robotic Arms

    View/Open
    017-024.pdf (14.93 MB)
    Date
    2019
    Author
    Fukuoka, Masaaki
    Verhulst, Adrien
    Nakamura, Fumihiko
    Takizawa, Ryo
    Masai, Katsutoshi
    Sugimoto, Maki

    Abstract
    Supernumerary Robotic Limbs (SRLs) can make physical activities easier, but they require cooperation with the operator. To improve cooperation between the SRLs and the operator, the SRLs can try to predict the operator's intentions. One way to predict the operator's intentions is to use his/her Facial Expressions (FEs). Here we investigate the mapping between FEs and Supernumerary Robotic Arm (SRA) commands (e.g. grab, release). To measure FEs, we used an optical sensor-based approach (here, sensors mounted inside an HMD). The sensor data are fed to an SVM that predicts FEs. The SRAs can then carry out commands by predicting the operator's FEs (and, arguably, the operator's intention). We ran a data collection study (N=10) to determine which FEs to assign to which robotic arm commands in a Virtual Environment (VE). We investigated the mapping patterns by (1) performing an object reaching - grasping - releasing task using "any" FEs; (2) analyzing the sensor data and a self-reported FE questionnaire to find the most common FEs used for a given command; (3) classifying the FEs into FE groups. We then ran another study (N=14) to find the most effective combinations of FE groups and SRA commands by recording task completion time. As a result, we found that the optimal combinations are: (i) Eyes + Mouth for grabbing / releasing; and (ii) Mouth for extending / contracting the arms (i.e. along the forward axis).
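    The pipeline the abstract describes (per-frame optical sensor readings from inside the HMD, classified into FEs by an SVM, then mapped to SRA commands) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the file names, feature layout, expression labels, and FE-to-command mapping are assumptions for the example, and scikit-learn's SVC stands in for whichever SVM the authors used.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Hypothetical data: one row per frame of optical sensor readings taken
    # inside the HMD, one FE label per frame (labels are illustrative).
    X = np.load("sensor_frames.npy")      # shape: (n_frames, n_sensors)
    y = np.load("expression_labels.npy")  # e.g. "neutral", "eyes_closed_mouth_open"

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0, stratify=y)

    # Scale each sensor channel, then fit an RBF-kernel SVM to predict FEs.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))

    # At run time, each predicted FE triggers an SRA command; this mapping
    # echoes the combinations the study found most effective
    # (Eyes + Mouth for grab/release, Mouth for extend/contract).
    FE_TO_COMMAND = {
        "eyes_closed_mouth_open": "grab",
        "eyes_open_mouth_open": "release",
        "mouth_stretched": "extend_arm",
        "mouth_puckered": "contract_arm",
    }
    predicted_fe = clf.predict(X_test[:1])[0]
    command = FE_TO_COMMAND.get(predicted_fe, "idle")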
    BibTeX
    @inproceedings {10.2312:egve.20191275,
    booktitle = {ICAT-EGVE 2019 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
    editor = {Kakehi, Yasuaki and Hiyama, Atsushi},
    title = {{FaceDrive: Facial Expression Driven Operation to Control Virtual Supernumerary Robotic Arms}},
    author = {Fukuoka, Masaaki and Verhulst, Adrien and Nakamura, Fumihiko and Takizawa, Ryo and Masai, Katsutoshi and Sugimoto, Maki},
    year = {2019},
    publisher = {The Eurographics Association},
    ISSN = {1727-530X},
    ISBN = {978-3-03868-083-3},
    DOI = {10.2312/egve.20191275}
    }
    URI
    https://doi.org/10.2312/egve.20191275
    https://diglib.eg.org:443/handle/10.2312/egve20191275
    Collections
    • ICAT-EGVE2019

    Eurographics Association copyright © 2013 - 2023