Volume 43 (2024)
Browsing Volume 43 (2024) by Subject "Artificial intelligence"
Now showing 1 - 7 of 7
Item: Cinematographic Camera Diffusion Model (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Jiang, Hongda; Wang, Xi; Christie, Marc; Liu, Libin; Chen, Baoquan; Bermano, Amit H.; Kalogerakis, Evangelos
Designing effective camera trajectories in virtual 3D environments is a challenging task even for experienced animators. Despite an elaborate film grammar, forged through years of experience, that enables the specification of camera motions through cinematographic properties (framing, shot sizes, angles, motions), there are endless possibilities when deciding how to place and move cameras with characters. Dealing with these possibilities is part of the complexity of the problem. While numerous techniques have been proposed in the literature (optimization-based solving, encoding of empirical rules, learning from real examples, ...), the results either lack variety or ease of control. In this paper, we propose a cinematographic camera diffusion model that uses a transformer-based architecture to handle temporality and exploits the stochasticity of diffusion models to generate diverse, high-quality trajectories conditioned on high-level textual descriptions. We extend this work by integrating keyframing constraints and the ability to blend naturally between motions using latent interpolation, thereby increasing the degree of control available to designers. We demonstrate the strengths of this text-to-camera-motion approach through qualitative and quantitative experiments and gather feedback from professional artists. The code and data are available at https://github.com/jianghd1996/Camera-control.

Item: Diffusion-based Human Motion Style Transfer with Semantic Guidance (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Hu, Lei; Zhang, Zihao; Ye, Yongjing; Xu, Yiwen; Xia, Shihong; Skouras, Melina; Wang, He
3D human motion style transfer is a fundamental problem in computer graphics and animation processing. Existing AdaIN-based methods require datasets with a balanced style distribution and content/style labels to train a clustered latent space. In practical scenarios, however, we may encounter only a single unseen style example, which is not enough to constitute a style cluster for AdaIN-based methods. Therefore, in this paper, we propose a novel two-stage framework for few-shot style transfer learning based on the diffusion model. Specifically, in the first stage, we pre-train a diffusion-based text-to-motion model as a generative prior so that it can cope with various content motion inputs. In the second stage, based on the single style example, we fine-tune the pre-trained diffusion model in a few-shot manner to make it capable of style transfer. The key idea is to regard the reverse diffusion process as a motion style translation process, since motion styles can be viewed as special motion variations. During fine-tuning for style transfer, a simple yet effective semantic-guided style transfer loss, coordinated with a style example reconstruction loss, supervises the style transfer in the CLIP semantic space. Qualitative and quantitative evaluations demonstrate that our method achieves state-of-the-art performance and has practical applications. The source code is available at https://github.com/hlcdyy/diffusion-based-motion-style-transfer.
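As a rough, generic illustration of what a semantic-guided loss in a CLIP-style embedding space can look like for the style transfer item above (a hypothetical sketch, not the authors' implementation; the embeddings are assumed to come from a frozen encoder, and all names and weights are placeholders):

    import torch
    import torch.nn.functional as F

    def semantic_style_loss(stylized_emb, content_emb, style_text_emb, neutral_text_emb):
        """Directional loss in a CLIP-like embedding space (illustrative only).

        Encourages the change from the content embedding to the stylized embedding
        to align with the change from a neutral text prompt to a style text prompt.
        All inputs are (batch, dim) embeddings from a frozen encoder.
        """
        motion_dir = F.normalize(stylized_emb - content_emb, dim=-1)
        text_dir = F.normalize(style_text_emb - neutral_text_emb, dim=-1)
        return (1.0 - F.cosine_similarity(motion_dir, text_dir, dim=-1)).mean()

    def reconstruction_loss(pred_motion, style_example):
        """Simple reconstruction term on the single style example."""
        return F.l1_loss(pred_motion, style_example)

    # Example usage with random tensors standing in for real embeddings and motions.
    B, D, T, J = 4, 512, 60, 23
    style_term = semantic_style_loss(torch.randn(B, D), torch.randn(B, D),
                                     torch.randn(B, D), torch.randn(B, D))
    recon_term = reconstruction_loss(torch.randn(B, T, J, 3), torch.randn(B, T, J, 3))
    total = style_term + 0.5 * recon_term  # weighting is a placeholder

In the two-stage setup described in the abstract, a term of this kind would only be applied during the few-shot fine-tuning stage, alongside the reconstruction term on the single style example.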
Item: G-Style: Stylized Gaussian Splatting (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Kovács, Áron Samuel; Hermosilla, Pedro; Raidou, Renata Georgia; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
We introduce G-Style, a novel algorithm designed to transfer the style of an image onto a 3D scene represented using Gaussian Splatting. Gaussian Splatting is a powerful 3D representation for novel view synthesis: compared to other approaches based on Neural Radiance Fields, it provides fast scene renderings and user control over the scene. Recent preprints have demonstrated that the style of Gaussian Splatting scenes can be modified using an image exemplar. However, since the scene geometry remains fixed during the stylization process, current solutions fall short of producing satisfactory results. Our algorithm addresses these limitations with a three-step process: In a pre-processing step, we remove undesirable Gaussians with large projection areas or highly elongated shapes. Subsequently, we combine several losses carefully designed to preserve different scales of the style in the image while maintaining the integrity of the original scene content as much as possible. During the stylization process, and following the original design of Gaussian Splatting, we split Gaussians where additional detail is necessary within our scene by tracking the gradient of the stylized color. Our experiments demonstrate that G-Style generates high-quality stylizations within just a few minutes, outperforming existing methods both qualitatively and quantitatively.

Item: Learning to Rasterize Differentiably (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Wu, Chenghao; Mailee, Hamila; Montazeri, Zahra; Ritschel, Tobias; Garces, Elena; Haines, Eric
Differentiable rasterization changes the standard formulation of primitive rasterization, enabling gradient flow from a pixel to its underlying triangles, by using distribution functions in different stages of rendering, creating a "soft" version of the original rasterizer. However, choosing the optimal softening function that ensures the best performance and convergence to a desired goal requires trial and error. Previous work has analyzed and compared several combinations of softening. In this work, we take it a step further and, instead of making a combinatorial choice of softening operations, parameterize the continuous space of common softening operations. We study meta-learning of tunable softness functions over a set of inverse rendering tasks (2D and 3D shape, pose, and occlusion) so that the result generalizes to new and unseen differentiable rendering tasks with optimal softness.
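To make the idea of a tunable "soft" rasterization step more concrete, here is a minimal, hypothetical sketch of a sigmoid-based coverage function with a learnable softness (temperature) parameter; the family of softening operations actually parameterized in the paper may differ:

    import torch

    class SoftCoverage(torch.nn.Module):
        """Maps a signed distance from a pixel to a primitive edge into a soft
        coverage value in (0, 1). The softness (temperature) is a learnable
        parameter, so it can be tuned or meta-learned instead of hand-picked."""

        def __init__(self, init_softness: float = 0.01):
            super().__init__()
            # Parameterize in log space so the temperature stays positive.
            self.log_softness = torch.nn.Parameter(torch.log(torch.tensor(init_softness)))

        def forward(self, signed_distance: torch.Tensor) -> torch.Tensor:
            # Pixels inside the primitive (positive distance) approach coverage 1,
            # pixels outside (negative distance) approach 0, smoothly and differentiably.
            return torch.sigmoid(signed_distance / torch.exp(self.log_softness))

    coverage = SoftCoverage()
    alpha = coverage(torch.linspace(-0.05, 0.05, steps=11))  # soft per-pixel coverage

A hard rasterizer corresponds to the limit of zero softness; keeping the temperature learnable is one simple way to expose the softening choice to gradient-based or meta-learned optimization.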
Item: P-Hologen: An End-to-End Generative Framework for Phase-Only Holograms (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Park, JooHyun; Jeon, YuJin; Kim, HuiYong; Baek, SeungHwan; Kang, HyeongYeop; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Holography stands at the forefront of visual technology, offering immersive, three-dimensional visualizations through the manipulation of light wave amplitude and phase. Although generative models have been extensively explored in the image domain, their application to holograms remains relatively underexplored due to the inherent complexity of phase learning. Exploiting generative models for holograms offers exciting opportunities for advancing innovation and creativity, such as semantic-aware hologram generation and editing. Currently, the most viable approach for using generative models in the hologram domain is to integrate an image-based generative model with an image-to-hologram conversion model, which comes at the cost of increased computational complexity and inefficiency. To tackle this problem, we introduce P-Hologen, the first end-to-end generative framework designed for phase-only holograms (POHs). P-Hologen employs vector quantized variational autoencoders to capture the complex distributions of POHs. It also integrates the angular spectrum method into the training process, constructing latent spaces for complex phase data using strategies from the image processing domain. Extensive experiments demonstrate that P-Hologen achieves superior quality and computational efficiency compared to existing methods. Furthermore, our model generates high-quality, diverse, previously unseen holographic content from its learned latent space without requiring pre-existing images. Our work paves the way for new applications and methodologies in holographic content creation, opening a new era in the exploration of generative holographic content. The code for our paper is publicly available at https://github.com/james0223/P-Hologen.

Item: Strokes2Surface: Recovering Curve Networks From 4D Architectural Design Sketches (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Rasoulzadeh, Shervin; Wimmer, Michael; Stauss, Philipp; Kovacic, Iva; Bermano, Amit H.; Kalogerakis, Evangelos
We present Strokes2Surface, an offline geometry reconstruction pipeline that recovers well-connected curve networks from imprecise 4D sketches to bridge the concept design and digital modeling stages in architectural design. The input to our pipeline consists of 3D strokes' polyline vertices and their timestamps as the fourth dimension, along with additional metadata recorded throughout sketching. Inspired by architectural sketching practices, our pipeline combines a classifier and two clustering models to achieve its goal. First, using a set of hand-engineered features extracted from the sketch, the classifier distinguishes strokes depicting boundaries (Shape strokes) from strokes depicting enclosed areas (Scribble strokes). Next, the two clustering models parse strokes of each type into distinct groups, each representing an individual edge or face of the intended architectural object. Curve networks are then formed through topology recovery of consolidated Shape clusters and surfaced using Scribble clusters to guide cycle discovery. Our evaluation is threefold: we confirm the usability of the Strokes2Surface pipeline in architectural design use cases via a user study, we validate our choice of features via statistical analysis and ablation studies on our collected dataset, and we compare our outputs against a range of reconstructions computed using alternative methods.
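As a toy illustration of the Shape-versus-Scribble classification step described above, the sketch below trains a generic classifier on a few made-up per-stroke features; the actual hand-engineered features and classifier used in Strokes2Surface may differ:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def stroke_features(vertices: np.ndarray, timestamps: np.ndarray) -> np.ndarray:
        """Per-stroke features from 3D polyline vertices and timestamps
        (illustrative choices, not the paper's feature set)."""
        seg = np.diff(vertices, axis=0)                      # (N-1, 3) segment vectors
        seg_len = np.linalg.norm(seg, axis=1)
        length = seg_len.sum()                               # total arc length
        duration = timestamps[-1] - timestamps[0]            # drawing time
        speed = length / max(duration, 1e-6)                 # average drawing speed
        bbox_diag = np.linalg.norm(vertices.max(0) - vertices.min(0))
        straightness = np.linalg.norm(vertices[-1] - vertices[0]) / max(length, 1e-6)
        return np.array([length, duration, speed, bbox_diag, straightness])

    # Train on labeled strokes: 0 = Shape (boundary), 1 = Scribble (enclosed area).
    rng = np.random.default_rng(0)
    strokes = [(rng.normal(size=(50, 3)), np.sort(rng.uniform(0, 2, 50))) for _ in range(20)]
    X = np.stack([stroke_features(v, t) for v, t in strokes])
    y = rng.integers(0, 2, size=len(X))  # placeholder labels for the sketch
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

The intuition is that Scribble strokes (fast, dense in-filling of areas) and Shape strokes (slower, boundary-tracing) separate reasonably well on simple geometric and temporal statistics, which is what such a feature-based classifier exploits.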
Item: Stylized Face Sketch Extraction via Generative Prior with Limited Data (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Yun, Kwan; Seo, Kwanggyoon; Seo, Chang Wook; Yoon, Soyeon; Kim, Seongcheol; Ji, Soohyun; Ashtari, Amirsaman; Noh, Junyong; Bermano, Amit H.; Kalogerakis, Evangelos
Facial sketches are both a concise way of showing the identity of a person and a means to express artistic intention. While a few techniques have recently emerged that allow sketches to be extracted in different styles, they typically rely on a large amount of data that is difficult to obtain. Here, we propose StyleSketch, a method for extracting high-resolution stylized sketches from a face image. Using the rich semantics of the deep features from a pretrained StyleGAN, we are able to train a sketch generator with only 16 pairs of face images and their corresponding sketches. The sketch generator uses part-based losses and two-stage learning for fast convergence during training and high-quality sketch extraction. Through a set of comparisons, we show that StyleSketch outperforms existing state-of-the-art sketch extraction methods and few-shot image adaptation methods for the task of extracting high-resolution abstract face sketches. We further demonstrate the versatility of StyleSketch by extending its use to other domains and explore the possibility of semantic editing. The project page can be found at https://kwanyun.github.io/stylesketch_project.
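For the part-based losses mentioned in the StyleSketch abstract, a minimal sketch of what a weighted per-region loss over facial parts could look like is shown below; the region masks, weights, and overall formulation here are hypothetical and not StyleSketch's actual losses:

    import torch

    def part_based_loss(pred_sketch, gt_sketch, part_masks, part_weights=None):
        """Weighted per-region L1 loss between a predicted and a ground-truth sketch.

        pred_sketch, gt_sketch: (B, 1, H, W) grayscale sketches.
        part_masks: (B, P, H, W) binary masks for facial parts (eyes, nose, mouth, ...).
        part_weights: optional (P,) tensor emphasizing perceptually important regions.
        """
        B, P, H, W = part_masks.shape
        if part_weights is None:
            part_weights = torch.ones(P)
        loss = 0.0
        for p in range(P):
            mask = part_masks[:, p:p + 1]                     # (B, 1, H, W)
            diff = (pred_sketch - gt_sketch).abs() * mask     # error restricted to the part
            denom = mask.sum().clamp(min=1.0)                 # avoid division by zero
            loss = loss + part_weights[p] * diff.sum() / denom
        return loss / P

    # Example with random tensors standing in for real sketches and part masks.
    pred, gt = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
    masks = (torch.rand(2, 4, 64, 64) > 0.5).float()
    value = part_based_loss(pred, gt, masks)

Restricting the loss to semantic regions in this way lets small but identity-critical areas (such as the eyes) contribute as much as large, mostly empty regions, which is the general motivation for part-based supervision in few-shot sketch generation.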