Browsing by Author "Sridhar, Srinath"
Item: Learning a Generative Model for Multi-Step Human-Object Interactions from Videos (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Wang, He; Pirk, Sören; Yumer, Ersin; Kim, Vladimir; Sener, Ozan; Sridhar, Srinath; Guibas, Leonidas; Alliez, Pierre and Pellacini, Fabio

Creating dynamic virtual environments consisting of humans interacting with objects is a fundamental problem in computer graphics. While it is well accepted that agent interactions play an essential role in synthesizing such scenes, most extant techniques focus exclusively on static scenes and leave the dynamic component out. In this paper, we present a generative model to synthesize plausible multi-step dynamic human-object interactions. Generating multi-step interactions is challenging since the space of such interactions is exponential in the number of objects, activities, and time steps. We propose to handle this combinatorial complexity by learning a lower-dimensional space of plausible human-object interactions. We use action plots to represent interactions as a sequence of discrete actions along with the participating objects and their states. To build action plots, we present an automatic method that uses state-of-the-art computer vision techniques on RGB videos to detect individual objects and their states, extract the involved hands, and recognize the actions performed. The action plots are built from observing videos of everyday activities and are used to train a generative model based on a Recurrent Neural Network (RNN). The network learns the causal dependencies and constraints between individual actions and can be used to generate novel and diverse multi-step human-object interactions. Our representation and generative model enable new capabilities in a variety of applications such as interaction prediction, animation synthesis, and motion planning for a real robotic agent.

Item: Neural Fields in Visual Computing and Beyond (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Xie, Yiheng; Takikawa, Towaki; Saito, Shunsuke; Litany, Or; Yan, Shiqin; Khan, Numair; Tombari, Federico; Tompkin, James; Sitzmann, Vincent; Sridhar, Srinath; Meneveaux, Daniel; Patanè, Giuseppe

Recent advances in machine learning have led to increased interest in solving visual computing problems using methods that employ coordinate-based neural networks. These methods, which we call neural fields, parameterize physical properties of scenes or objects across space and time. They have seen widespread success in problems such as 3D shape and image synthesis, animation of human bodies, 3D reconstruction, and pose estimation. Rapid progress has led to numerous papers, but a consolidation of the discovered knowledge has not yet emerged. We provide context, mathematical grounding, and a review of over 250 papers in the literature on neural fields. In Part I, we focus on neural field techniques, identifying common components of neural field methods, including conditioning, representation, forward map, architecture, and manipulation methods. In Part II, we focus on applications of neural fields to different problems in visual computing and beyond (e.g., robotics, audio). Our review shows the breadth of topics already covered in visual computing, both historically and in current incarnations, and highlights the improved quality, flexibility, and capability brought by neural field methods. Finally, we present a companion website that acts as a living database that can be continually updated by the community.
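
For the first item above, a minimal sketch of the kind of autoregressive model the abstract describes: an RNN over action-plot tokens, where each token stands for a discrete action together with its participating objects and states. The toy vocabulary, the tokenization of (action, object, state) triples, and the layer sizes are illustrative assumptions written in PyTorch, not the paper's exact architecture.

import torch
import torch.nn as nn

class ActionPlotRNN(nn.Module):
    """Autoregressive model over a sequence of discrete action-plot tokens."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)   # next-token logits

    def forward(self, tokens):                  # tokens: (batch, steps)
        h, _ = self.rnn(self.embed(tokens))
        return self.head(h)                     # (batch, steps, vocab_size)

# Toy vocabulary of (action, object, state) tokens, for illustration only.
vocab = ["<start>", "open(fridge)", "take(milk)", "pour(milk, cup)", "close(fridge)"]
model = ActionPlotRNN(vocab_size=len(vocab))
seq = torch.tensor([[0, 1, 2]])                 # <start>, open(fridge), take(milk)
logits = model(seq)                             # scores for a plausible next action

Sampling from the logits step by step would yield a multi-step action plot; the paper's model additionally conditions on detected object states, which is omitted here.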
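
For the second item, a minimal sketch of a coordinate-based neural field: an MLP that maps spatial coordinates to a field value such as density, color, or signed distance. The sinusoidal positional encoding, layer sizes, and scalar output are illustrative assumptions written in PyTorch, not tied to any particular method covered by the survey.

import math
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    """Maps each coordinate to sinusoids of increasing frequency."""
    def __init__(self, num_freqs=6):
        super().__init__()
        self.register_buffer("freqs", 2.0 ** torch.arange(num_freqs).float() * math.pi)

    def forward(self, x):                       # x: (N, d) coordinates
        scaled = x[..., None] * self.freqs      # (N, d, num_freqs)
        enc = torch.cat([scaled.sin(), scaled.cos()], dim=-1)
        return enc.flatten(start_dim=-2)        # (N, d * 2 * num_freqs)

class NeuralField(nn.Module):
    """MLP mapping spatial coordinates to a field value (e.g., an SDF)."""
    def __init__(self, in_dim=3, hidden=128, out_dim=1, num_freqs=6):
        super().__init__()
        self.encode = PositionalEncoding(num_freqs)
        self.mlp = nn.Sequential(
            nn.Linear(in_dim * 2 * num_freqs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, coords):                  # coords: (N, 3) points in space
        return self.mlp(self.encode(coords))

field = NeuralField()
values = field(torch.rand(1024, 3))             # query the field at 1024 points

The survey's taxonomy (conditioning, representation, forward map, architecture, manipulation) describes how variants of this basic pattern are adapted to tasks such as 3D reconstruction or human body animation.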