SPG: Style-Prompting Guidance for Style-Specific Content Creation

Date
2025
Journal Title
Computer Graphics Forum
Journal ISSN
1467-8659
Publisher
The Eurographics Association and John Wiley & Sons Ltd.
Abstract
Although recent text-to-image (T2I) diffusion models excel at aligning generated images with textual prompts, controlling the visual style of the output remains a challenging task. In this work, we propose Style-Prompting Guidance (SPG), a novel sampling strategy for style-specific image generation. SPG constructs a style noise vector and leverages its directional deviation from unconditional noise to guide the diffusion process toward the target style distribution. By integrating SPG with Classifier-Free Guidance (CFG), our method achieves both semantic fidelity and style consistency. SPG is simple, robust, and compatible with controllable frameworks like ControlNet and IPAdapter, making it practical and widely applicable. Extensive experiments demonstrate the effectiveness and generality of our approach compared to state-of-the-art methods. Code is available at https://github.com/Rumbling281441/SPG.
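The abstract describes the guidance rule only verbally. The snippet below is a minimal, hypothetical sketch of how a style-guidance term of this kind could be combined with Classifier-Free Guidance at a single sampling step; the function name `guided_noise`, the guidance scales, and the latent shape are illustrative assumptions rather than the paper's actual formulation (see the linked repository for the authors' implementation).

```python
import numpy as np

def guided_noise(eps_uncond, eps_text, eps_style, cfg_scale=7.5, spg_scale=3.0):
    """Combine unconditional, text-conditional, and style-conditional noise
    predictions into one guided prediction.

    Hypothetical formulation: the CFG term pushes the sample toward the text
    prompt, while the style term adds the directional deviation of the
    style-prompted noise from the unconditional noise. Scales are illustrative.
    """
    cfg_term = cfg_scale * (eps_text - eps_uncond)    # semantic guidance (CFG)
    spg_term = spg_scale * (eps_style - eps_uncond)   # style guidance (SPG-like)
    return eps_uncond + cfg_term + spg_term

# Toy usage with random arrays standing in for U-Net noise predictions.
shape = (4, 64, 64)  # illustrative latent shape
rng = np.random.default_rng(0)
eps_u, eps_t, eps_s = (rng.standard_normal(shape) for _ in range(3))
eps = guided_noise(eps_u, eps_t, eps_s)
print(eps.shape)
```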
CCS Concepts: Computing methodologies → Image processing

        
Citation
@article{10.1111:cgf.70251,
  journal   = {Computer Graphics Forum},
  title     = {{SPG: Style-Prompting Guidance for Style-Specific Content Creation}},
  author    = {Liang, Qian and Chen, Zichong and Zhou, Yang and Huang, Hui},
  year      = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.70251}
}