DeFT-Net: Dual-Window Extended Frequency Transformer for Rhythmic Motion Prediction
dc.contributor.author | Ademola, Adeyemi | en_US |
dc.contributor.author | Sinclair, David | en_US |
dc.contributor.author | Koniaris, Babis | en_US |
dc.contributor.author | Hannah, Samantha | en_US |
dc.contributor.author | Mitchell, Kenny | en_US |
dc.contributor.editor | Hunter, David | en_US |
dc.contributor.editor | Slingsby, Aidan | en_US |
dc.date.accessioned | 2024-09-09T05:44:59Z | |
dc.date.available | 2024-09-09T05:44:59Z | |
dc.date.issued | 2024 | |
dc.description.abstract | Enabling online virtual reality (VR) users to dance and move in a way that mirrors the real world necessitates improvements in the accuracy of predicting human motion sequences, paving the way for an immersive and connected experience. However, latency in networked motion tracking presents a critical detriment to creating a sense of complete engagement, requiring prediction for online synchronization of remote motions. To address this challenge, we propose a novel approach that leverages a synthetically generated dataset based on supervised foot anchor placement timings of rhythmic motions to ensure periodicity, resulting in reduced prediction error. Specifically, our model comprises a discrete cosine transform (DCT) to encode motion, refine high frequencies, and smooth motion sequences, preventing jittery motions. We introduce a feed-forward attention mechanism that learns from dual-window pairs of 3D keypoint pose histories to predict future motions. Quantitative and qualitative experiments validated on the Human3.6M dataset show improvements under the MPJPE evaluation protocol compared with the prior state of the art. | en_US |
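The abstract describes DCT encoding of pose histories to attenuate high frequencies and reduce jitter. The snippet below is a minimal sketch of that general idea, not the authors' implementation: the array shapes, the number of retained coefficients, and the function name dct_smooth are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): temporal DCT encoding of a 3D keypoint
# motion history, truncating high-frequency coefficients to smooth the sequence,
# then reconstructing with the inverse transform.
import numpy as np
from scipy.fft import dct, idct

def dct_smooth(motion, keep=10):
    """motion: (T, J*3) array of T frames of flattened 3D keypoints (assumed layout)."""
    coeffs = dct(motion, type=2, norm="ortho", axis=0)   # per-coordinate temporal DCT
    coeffs[keep:] = 0.0                                   # drop high frequencies to suppress jitter
    return idct(coeffs, type=2, norm="ortho", axis=0)     # back to the time domain

# Example: a 50-frame history of 32 joints (dimensions chosen only for illustration)
history = np.random.randn(50, 32 * 3)
smoothed = dct_smooth(history, keep=10)
```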
dc.description.sectionheaders | 3D Rendering and Virtual Reality (VR) | |
dc.description.seriesinformation | Computer Graphics and Visual Computing (CGVC) | |
dc.identifier.doi | 10.2312/cgvc.20241220 | |
dc.identifier.isbn | 978-3-03868-249-3 | |
dc.identifier.pages | 7 pages | |
dc.identifier.uri | https://doi.org/10.2312/cgvc.20241220 | |
dc.identifier.uri | https://diglib.eg.org/handle/10.2312/cgvc20241220 | |
dc.publisher | The Eurographics Association | en_US |
dc.rights | Attribution 4.0 International License | |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
dc.subject | CCS Concepts: Computing methodologies → Machine Learning; Motion Processing; Virtual Reality | |
dc.subject | Computing methodologies → Machine Learning | |
dc.subject | Motion Processing | |
dc.subject | Virtual Reality | |
dc.title | DeFT-Net: Dual-Window Extended Frequency Transformer for Rhythmic Motion Prediction | en_US |