VQ-Style: Disentangling Style and Content in Motion with Residual Quantized Representations

Date
2026
Publisher
The Eurographics Association and John Wiley & Sons Ltd.
Abstract
Human motion data is inherently rich and complex, containing both semantic content and subtle stylistic features that are challenging to model. We propose a novel method for effectively disentangling style and content in human motion data to facilitate style transfer. Our approach is guided by the insight that content corresponds to coarse motion attributes while style captures the finer, expressive details. To model this hierarchy, we employ Residual Vector Quantized Variational Autoencoders (RVQ-VAEs) to learn a coarse-to-fine representation of motion. We further enhance the disentanglement by integrating codebook learning with contrastive learning and a novel information leakage loss to organize content and style across different codebooks. We harness this disentangled representation through Quantized Code Swapping, a simple and effective inference-time technique that enables motion style transfer without requiring any fine-tuning for unseen styles. Our framework demonstrates strong versatility across multiple inference applications, including style transfer, style removal, and motion blending.
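The core idea of a residual quantizer, and of swapping codes across its levels, can be illustrated in a few lines. The sketch below is a toy approximation under stated assumptions, not the paper's implementation: codebook sizes, dimensions, and the assignment of level 0 to content and level 1 to style are illustrative, and the real method operates on learned motion latents with contrastive and leakage losses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy codebooks (sizes are illustrative): level 0 plays the role of the
# coarse "content" codebook, level 1 the fine residual "style" codebook.
codebooks = [rng.normal(size=(8, 4)), rng.normal(size=(8, 4))]

def rvq_encode(x, codebooks):
    """Residual VQ: quantize x at level 0, then quantize the leftover
    residual at level 1, returning one code index per level."""
    codes, residual = [], x
    for cb in codebooks:
        idx = int(np.argmin(np.linalg.norm(cb - residual, axis=1)))
        codes.append(idx)
        residual = residual - cb[idx]
    return codes

def rvq_decode(codes, codebooks):
    """Reconstruction is the sum of the selected codewords across levels."""
    return sum(cb[i] for cb, i in zip(codebooks, codes))

# Code swapping in the spirit of Quantized Code Swapping: keep the coarse
# (content) code of one motion, take the fine (style) code of another.
content_codes = rvq_encode(rng.normal(size=4), codebooks)
style_codes = rvq_encode(rng.normal(size=4), codebooks)
transferred = rvq_decode([content_codes[0], style_codes[1]], codebooks)
```

Because the decoder is just a sum over levels, swapping only the fine-level index changes the expressive residual while leaving the coarse component, and hence the content, untouched.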
Citation
@article{10.1111:cgf.70377,
  journal   = {Computer Graphics Forum},
  title     = {{VQ-Style: Disentangling Style and Content in Motion with Residual Quantized Representations}},
  author    = {Zargarbashi, Fatemeh and Agrawal, Dhruv and Buhmann, Jakob and Guay, Martin and Coros, Stelian and Sumner, Robert W.},
  year      = {2026},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.70377}
}