Browsing by Author "Wang, Zhaowen"
Now showing 1 - 2 of 2
Item
Brush Stroke Synthesis with a Generative Adversarial Network Driven by Physically Based Simulation (ACM, 2018)
Wu, Rundong; Chen, Zhili; Wang, Zhaowen; Yang, Jimei; Marschner, Steve; Aydın, Tunç and Sýkora, Daniel
We introduce a novel approach that uses a generative adversarial network (GAN) to synthesize realistic oil painting brush strokes, where the network is trained with data generated by a high-fidelity simulator. Among approaches to digitally synthesizing natural-media painting strokes, methods using physically based simulation produce by far the most realistic visual results and allow the most intuitive control of stroke variations. However, accurate physics simulations are computationally expensive and often cannot meet the performance requirements of painting applications. A few existing simulation-based methods have managed to reach real-time performance, but at the cost of lower visual quality resulting from simplified models or lower resolution. In our work, we propose to replace the expensive fluid simulation with a neural network generator. The network takes the existing canvas and new brush trajectory information as input and produces the height and color of the paint surface as output. We build a large training dataset of painting samples by feeding random strokes from artists' recordings into a high-quality offline simulator. The network produces visual quality comparable to the offline simulator with better performance than the existing real-time oil painting simulator. Finally, we implement a real-time painting system using the trained network with stroke splitting and patch blending, and show artworks created with the system by artists. Our neural network approach opens up new opportunities for real-time applications of sophisticated and expensive physically based simulation.

Item
STALP: Style Transfer with Auxiliary Limited Pairing (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Futschik, David; Kucera, Michal; Lukác, Mike; Wang, Zhaowen; Shechtman, Eli; Sýkora, Daniel; Mitra, Niloy and Viola, Ivan
We present an approach to example-based stylization of images that uses a single pair consisting of a source image and its stylized counterpart. We demonstrate how to train an image translation network that can perform real-time, semantically meaningful style transfer to a set of target images with content similar to the source image. A key added value of our approach is that it also considers the consistency of target images during training. Although these have no stylized counterparts, we constrain the translation to keep the statistics of neural responses compatible with those extracted from the stylized source. In contrast to concurrent techniques that use a similar input, our approach better preserves important visual characteristics of the source style and can deliver temporally stable results without the need to explicitly handle temporal consistency. We demonstrate its practical utility on various applications including video stylization and style transfer to panoramas, faces, and 3D models.
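The first abstract describes a generator that consumes the existing canvas plus brush trajectory information and emits the resulting paint height and color. The following is a minimal, hypothetical sketch of that input/output interface in PyTorch; it is not the authors' architecture, and the channel layout (height + RGB canvas, footprint + pressure trajectory), layer counts, and class name `StrokePatchGenerator` are illustrative assumptions only.

```python
# Hypothetical sketch of the canvas + trajectory -> height + color interface
# described in the abstract, NOT the paper's actual GAN architecture.
import torch
import torch.nn as nn

class StrokePatchGenerator(nn.Module):
    def __init__(self, canvas_ch=4, traj_ch=2, base=32):
        # canvas_ch: assumed layout, paint height map (1) + RGB color (3)
        # traj_ch:   assumed layout, brush footprint mask + pressure
        super().__init__()
        in_ch = canvas_ch + traj_ch
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, base, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, canvas_ch, 4, stride=2, padding=1),
        )

    def forward(self, canvas, trajectory):
        # Predict the updated paint patch as a residual over the input canvas,
        # so regions untouched by the stroke stay stable.
        x = torch.cat([canvas, trajectory], dim=1)
        return canvas + self.decoder(self.encoder(x))

# Usage: one 256x256 canvas patch and the stroke trajectory rasterized onto it.
gen = StrokePatchGenerator()
canvas = torch.zeros(1, 4, 256, 256)      # height + RGB of the existing paint
trajectory = torch.zeros(1, 2, 256, 256)  # footprint mask + pressure
updated = gen(canvas, trajectory)         # predicted height + RGB after the stroke
```

In the paper, such a generator is trained adversarially on samples produced by the offline simulator and deployed per stroke segment with stroke splitting and patch blending; the sketch above only illustrates the data flow the abstract describes.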
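The STALP abstract implies a two-part training objective: a paired term on the single (source, stylized source) exemplar, and an unpaired term that keeps the statistics of neural responses on translated target images compatible with the stylized source. The sketch below is a hedged reconstruction of that idea in PyTorch, not the paper's actual loss: Gram matrices over externally supplied features stand in for the unspecified statistics, and `translator`, `vgg_features`, and the weights are placeholders.

```python
# Hypothetical sketch of a STALP-like objective, assuming PyTorch.
# Gram matrices are used as a stand-in for the abstract's "statistics of
# neural responses"; the actual paper may use different statistics.
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    # feat: (B, C, H, W) -> per-image channel covariance, a simple summary
    # of neural-response statistics.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def stalp_like_loss(translator, vgg_features, source, stylized_source, targets,
                    w_pair=1.0, w_stats=1.0):
    # Paired term: the translated source should reproduce its stylized version.
    pair_loss = F.l1_loss(translator(source), stylized_source)

    # Unpaired term: target images have no stylized ground truth, so only the
    # statistics of their translated features are pulled toward those of the
    # stylized source.
    ref_grams = [gram_matrix(f) for f in vgg_features(stylized_source)]
    stats_loss = 0.0
    for t in targets:
        out_grams = [gram_matrix(f) for f in vgg_features(translator(t))]
        stats_loss = stats_loss + sum(
            F.mse_loss(g, r.expand_as(g)) for g, r in zip(out_grams, ref_grams))

    return w_pair * pair_loss + w_stats * stats_loss / max(len(targets), 1)
```

Here `translator` would be the image translation network being trained and `vgg_features` a frozen feature extractor returning a list of feature maps; both are assumptions introduced for illustration.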