39-Issue 6
Browsing 39-Issue 6 by Issue Date
Now showing 1 - 20 of 37
Item Curve Skeleton Extraction From 3D Point Clouds Through Hybrid Feature Point Shifting and Clustering (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Hu, Hailong; Li, Zhong; Jin, Xiaogang; Deng, Zhigang; Chen, Minhong; Shen, Yi; Benes, Bedrich and Hauser, Helwig
The curve skeleton is an important shape descriptor with many potential applications in computer graphics, visualization and machine intelligence. We present a curve skeleton expression based on the set of cross‐section centroids from a point cloud model and propose a corresponding extraction approach. We first substitute a distance field for the 3D point cloud model, and then combine it with curvatures to capture hybrid feature points. By introducing relevant facets and points, we shift these hybrid feature points along the skeleton‐guided normal directions to approach local centroids, simplify them through tensor‐based spectral clustering and finally connect them to form a primary connected curve skeleton. Furthermore, we refine the primary skeleton through pruning, trimming and smoothing. We compared our results with several state‐of‐the‐art algorithms, including the rotational symmetry axis (ROSA) and L₁‐medial methods, on incomplete point cloud data to evaluate the effectiveness and accuracy of our method.

Item Constructing Human Motion Manifold With Sequential Networks (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Jang, Deok‐Kyeong; Lee, Sung‐Hee; Benes, Bedrich and Hauser, Helwig
This paper presents a novel recurrent neural network‐based method to construct a latent motion manifold that can represent a wide range of human motions in a long sequence. We introduce several new components to increase the spatial and temporal coverage in motion space while retaining the details of motion capture data.
These include new regularization terms for the motion manifold, a combination of two complementary decoders for predicting joint rotations and joint velocities, and the addition of a forward kinematics layer to consider both joint rotation and position errors. In addition, we propose a set of loss terms that improve the overall quality of the motion manifold from various aspects, such as the capability of reconstructing not only the motion but also the latent manifold vector, and the naturalness of the motion through an adversarial loss. These components contribute to creating a compact and versatile motion manifold that allows for creating new motions by performing random sampling and algebraic operations, such as interpolation and analogy, in the latent motion manifold.

Item Physically Based Simulation and Rendering of Urban Thermography (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Aguerre, José Pedro; García‐Nevado, Elena; Acuña Paz y Miño, Jairo; Fernández, Eduardo; Beckers, Benoit; Benes, Bedrich and Hauser, Helwig
Urban thermography is a non‐invasive measurement technique commonly used for building diagnosis and energy‐efficiency evaluation. The physical interpretation of thermal images is a challenging task because they do not necessarily depict the real temperature of the surfaces, but one estimated from the measured incoming radiation. In this sense, the computational rendering of a thermal image can be useful for understanding the results captured in a measurement campaign. The computer graphics community has proposed techniques for light rendering that can be reused for their thermal counterpart. In this work, a physically based simulation methodology combining the finite element method (FEM) and ray tracing is presented. The proposed methods were tested using a highly detailed urban geometry.
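The gap between measured and true surface temperature noted in this abstract can be illustrated with a small Stefan–Boltzmann sketch. The emissivity and temperatures below are hypothetical illustration values, not figures from the paper: a grey surface emits only a fraction of black-body radiation and reflects some of the ambient radiation, so a camera that inverts the black-body law reads a slightly wrong temperature.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def apparent_temperature(t_surface, emissivity, t_ambient):
    """Temperature a thermal camera would report for an opaque grey surface:
    measured exitance = emitted part + reflected ambient part."""
    measured = (emissivity * SIGMA * t_surface ** 4
                + (1.0 - emissivity) * SIGMA * t_ambient ** 4)
    return (measured / SIGMA) ** 0.25  # invert assuming a perfect black body

# A 300 K wall with emissivity 0.9 under 280 K surroundings reads slightly cold.
t_app = apparent_temperature(300.0, 0.9, 280.0)
```

With these illustrative numbers the reported temperature is roughly 2 K below the true surface temperature, which is exactly the kind of discrepancy a simulated thermogram helps to interpret.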
Directional emissivity models, glossy reflectivity functions and importance sampling were used to render thermal images. The simulation results were compared with a set of measured thermograms, showing good agreement between them.

Item Modelling the Soft Robot Kyma Based on Real‐Time Finite Element Method (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Martin‐Barrio, A.; Terrile, S.; Diaz‐Carrasco, M.; del Cerro, J.; Barrientos, A.; Benes, Bedrich and Hauser, Helwig
Modelling soft robots is a non‐trivial task since their behaviours rely on their morphology, materials and surrounding elements. These robots can interact safely with their environment thanks to their inherent flexibility and adaptability. However, they are usually very hard to model because of their intrinsic non‐linearities. This fact presents a unique challenge in the fields of computer graphics and simulation, where current trends tend to narrow the gap between virtual and real environments. This work explains a challenging modelling process for a cable‐driven soft robot called Kyma. For this purpose, the real‐time finite element method (FEM) is applied, and two methods are implemented and compared to optimize the model efficiency: a heuristic one and a second approach. Both models are also validated against the real robot using a precise motion‐tracking system. Moreover, an analysis of robot–object interactions is proposed to test the compliance of the presented soft manipulator. As a result, the real‐time FEM emerges as a good solution to accurately and efficiently model the presented robot while also allowing the study of its interactions with the environment.

Item Making Sense of Scientific Simulation Ensembles With Semantic Interaction (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Dahshan, M.; Polys, N. F.; Jayne, R. S.; Pollyea, R. M.; Benes, Bedrich and Hauser, Helwig
In the study of complex physical systems, scientists use simulations to study the effects of different models and parameters. Seeking to understand the influence and relationships among multiple dimensions, they typically run many simulations and vary the initial conditions in what are known as ‘ensembles’. An ensemble is thus a collection of runs, each of which is multi‐dimensional and multi‐variate. In order to understand the connections between simulation parameters and patterns in the output data, we have been developing an approach to the visual analysis of scientific data that merges human expertise and intuition with machine learning and statistics. Our approach is manifested in a new visualization tool, GLEE (Graphically‐Linked Ensemble Explorer), that allows scientists to explore, search, filter and make sense of their ensembles. GLEE uses visualization and semantic interaction (SI) techniques to enable scientists to find similarities and differences between runs, find correlation(s) between different parameters and explore relations and correlations across and between different runs and parameters. Our approach supports scientists in selecting interesting subsets of runs in order to investigate and summarize the factors and statistics that show variations and consistencies across different runs. In this paper, we evaluate our tool with experts to understand its strengths and weaknesses for optimization and inverse problems.

Item Accelerating Liquid Simulation With an Improved Data‐Driven Method (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Gao, Yang; Zhang, Quancheng; Li, Shuai; Hao, Aimin; Qin, Hong; Benes, Bedrich and Hauser, Helwig
In physics‐based liquid simulation for graphics applications, pressure projection consumes a significant amount of computational time and is frequently the bottleneck of the computational efficiency.
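For context, the pressure projection just mentioned amounts to solving a Poisson equation for pressure every frame and then subtracting the pressure gradient from the velocity field. The toy sketch below is a standard damped-Jacobi projection on a small periodic staggered grid (all grid sizes and iteration counts are illustrative, and this is the classic solver, not the paper's learned one); the long inner loop is precisely the cost that data-driven methods aim to cut.

```python
import numpy as np

def divergence(u, v):
    """Per-cell divergence on a periodic staggered (MAC) grid, spacing h = 1."""
    return np.roll(u, -1, 0) - u + np.roll(v, -1, 1) - v

def pressure_project(u, v, iters=1500, omega=2.0 / 3.0):
    """Classic pressure projection: solve a Poisson equation for pressure,
    then subtract its gradient so the velocity field becomes divergence-free.
    The damped-Jacobi loop below is the expensive part."""
    div = divergence(u, v)
    p = np.zeros_like(div)
    for _ in range(iters):
        jac = 0.25 * (np.roll(p, 1, 0) + np.roll(p, -1, 0)
                      + np.roll(p, 1, 1) + np.roll(p, -1, 1) - div)
        p = (1.0 - omega) * p + omega * jac   # damped Jacobi sweep
    u2 = u - (p - np.roll(p, 1, 0))           # pressure gradient at x-faces
    v2 = v - (p - np.roll(p, 1, 1))           # pressure gradient at y-faces
    return u2, v2

rng = np.random.default_rng(0)
u, v = rng.normal(size=(16, 16)), rng.normal(size=(16, 16))
before = np.abs(divergence(u, v)).max()
u2, v2 = pressure_project(u, v)
after = np.abs(divergence(u2, v2)).max()   # orders of magnitude smaller
```

Even this 16 × 16 toy needs on the order of a thousand sweeps to drive the divergence to numerical zero, which is why replacing the solve with one network inference is attractive.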
How to rapidly perform the pressure projection while accurately capturing the liquid geometry remains one of the most active topics in liquid‐simulation research. In this paper, we incorporate an artificial neural network into the simulation pipeline for handling the tricky projection step for liquid animation. Compared with the previous neural‐network‐based works for gas flows, this paper advocates new advances in the composition of representative features as well as the loss functions in order to facilitate fluid simulation with a free‐surface boundary. Specifically, we choose both the velocity and the level‐set function as the additional representation of the fluid states, which allows not only the motion but also the boundary position to be considered in the neural network solver. Meanwhile, we use the divergence error in the loss function to further emulate the lifelike behaviours of liquid. With these arrangements, our method can greatly accelerate the pressure projection step in liquid simulation, while maintaining fairly convincing visual results. Additionally, our neural network performs well when applied to new scene synthesis, even with varied boundaries or scales.

Item Spherical Gaussian‐based Lightcuts for Glossy Interreflections (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Huo, Y.C.; Jin, S.H.; Liu, T.; Hua, W.; Wang, R.; Bao, H.J.; Benes, Bedrich and Hauser, Helwig
It is still challenging to render directional but non‐specular reflections in complex scenes. The SG‐based (Spherical Gaussian) many‐light framework provides a scalable solution but still requires a large number of glossy virtual lights to avoid spikes as well as to reduce clamping errors. Directly gathering contributions from these glossy virtual lights to each pixel in a pairwise way is very inefficient.
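As background on the spherical Gaussians this framework builds on: an SG is the lobe G(v) = a · exp(λ(v · p − 1)), and its integral over the sphere has the closed form 2πa(1 − e^(−2λ))/λ, which is one reason SG representations are convenient for many-light gathering. A small numeric check with hypothetical lobe parameters:

```python
import numpy as np

def sg_eval(v, axis, sharpness, amplitude):
    """Evaluate a spherical Gaussian lobe G(v) = a * exp(lambda * (v.p - 1))."""
    return amplitude * np.exp(sharpness * (np.dot(v, axis) - 1.0))

def sg_integral(sharpness, amplitude):
    """Closed-form integral of one SG over the whole sphere."""
    return 2.0 * np.pi * amplitude * (1.0 - np.exp(-2.0 * sharpness)) / sharpness

# Numeric check: integrate over the sphere with the polar angle measured from
# the lobe axis, dOmega = 2*pi*sin(theta) dtheta.
lam, amp = 8.0, 1.5
theta = np.linspace(0.0, np.pi, 20001)
integrand = amp * np.exp(lam * (np.cos(theta) - 1.0)) * 2.0 * np.pi * np.sin(theta)
numeric = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(theta))
```

The trapezoidal estimate agrees with the closed form to several digits, illustrating why per-light SG contributions can be bounded and clustered analytically instead of gathered pairwise.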
In this paper, we propose an adaptive algorithm with tighter error bounds to efficiently compute glossy interreflections from glossy virtual lights. This approach is an extension of Lightcuts that builds hierarchies on both lights and pixels, with new error bounds and new GPU‐based traversal methods between the light and pixel hierarchies. Results demonstrate that our method is able to faithfully and efficiently compute glossy interreflections in scenes with highly glossy and spatially varying reflectance. Compared with the conventional Lightcuts method, our approach generates lightcuts with only one‐fourth to one‐fifth as many light nodes and therefore exhibits better scalability. Additionally, when implemented on the GPU, our algorithms achieve an order of magnitude faster performance than the previous method.

Item Data‐Driven Facial Simulation (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Romeo, M.; Schvartzman, S. C.; Benes, Bedrich and Hauser, Helwig
In Visual Effects, the creation of realistic facial performances is still a challenge that the industry is trying to overcome. Blendshape deformation is used to reproduce the action of different groups of muscles, which produces realistic static results. However, this is not sufficient to generate believable and detailed facial performances of animated digital characters. To increase the realism of facial performances, it is possible to enhance standard facial rigs using physical simulation approaches. However, setting up a simulation rig and controlling material properties according to the performance is not an easy task and can take a lot of time and iterations to get right. We present a workflow that allows us to generate an activation map for the fibres of a set of superficial patches we call pseudo‐muscles.
The pseudo‐muscles are automatically identified using k‐means to cluster the data from the blendshape targets in the animation rig and to compute the direction of their contraction (the direction of the pseudo‐muscle fibres). We use an Extended Position‐Based Dynamics solver to add physical simulation to the facial animation, controlling the behaviour of the simulation through the activation map. We show the results achieved using the proposed solution on two digital humans and one fantasy cartoon character, demonstrating that the identified pseudo‐muscles approximate facial anatomy and that the simulation properties are properly controlled, increasing realism while preserving the work of animators.

Item ZerNet: Convolutional Neural Networks on Arbitrary Surfaces Via Zernike Local Tangent Space Estimation (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Sun, Zhiyu; Rooke, Ethan; Charton, Jerome; He, Yusen; Lu, Jia; Baek, Stephen; Benes, Bedrich and Hauser, Helwig
In this paper, we propose a novel formulation extending convolutional neural networks (CNN) to arbitrary two‐dimensional manifolds using orthogonal basis functions called Zernike polynomials. In many areas, geometric features play a key role in understanding scientific trends and phenomena, where accurate numerical quantification of geometric features is critical. Recently, CNNs have demonstrated a substantial improvement in extracting and codifying geometric features. However, the progress is mostly centred around computer vision and its applications, where an inherent grid‐like data representation is naturally present. In contrast, many geometry processing problems deal with curved surfaces, and the application of CNNs is not trivial due to the lack of a canonical grid‐like representation, the absence of a globally consistent orientation and incompatible local discretizations.
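The rotation property that this Zernike-based construction relies on can be checked numerically: rotating a function on the unit disk rotates each degree-m pair of Zernike coefficients by a 2 × 2 rotation matrix. The sketch below uses a single illustrative basis pair, R₂²(r)cos(2θ) and R₂²(r)sin(2θ), on a hypothetical polar grid (this is a toy verification, not ZerNet's implementation):

```python
import numpy as np

# Polar sampling of the unit disk; the area element contributes a weight r.
r = np.linspace(0.0, 1.0, 200)[:, None]
t = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)[None, :]
w = r  # integration weight (constant dr and dtheta factored out)

ze = r ** 2 * np.cos(2.0 * t)   # Zernike basis Z_2^2 (even part)
zo = r ** 2 * np.sin(2.0 * t)   # Zernike basis Z_2^-2 (odd part)

def coeffs(f):
    """Project f onto the (Z_2^2, Z_2^-2) pair by discrete inner products."""
    norm = np.sum(ze * ze * w)
    return np.array([np.sum(f * ze * w), np.sum(f * zo * w)]) / norm

phi = 0.7                                    # rotation angle of the function
f = 0.3 * ze + 1.2 * zo                      # some function in the span
f_rot = (0.3 * r ** 2 * np.cos(2.0 * (t - phi))
         + 1.2 * r ** 2 * np.sin(2.0 * (t - phi)))  # f rotated by phi

m = 2
R = np.array([[np.cos(m * phi), -np.sin(m * phi)],
              [np.sin(m * phi),  np.cos(m * phi)]])
# Rotating the function equals applying R to its coefficient pair.
```

The discrete coefficients of the rotated function match `R @ coeffs(f)` to machine precision, which is the 2 × 2-rotation-matrix behaviour the abstract's continuation describes.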
In this paper, we show that Zernike polynomials allow a rigorous yet practical mathematical generalization of CNNs to arbitrary surfaces. We prove that the convolution of two functions can be represented as a simple dot product between Zernike coefficients, and that the rotation of a convolution kernel is essentially a set of 2 × 2 rotation matrices applied to the coefficients. The key contribution of this work is in such a computationally efficient but rigorous generalization of the major CNN building blocks.

Item Realistic Buoyancy Model for Real‐Time Applications (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Bajo, J. M.; Patow, G.; Delrieux, C. A.; Benes, Bedrich and Hauser, Helwig
Following Archimedes' principle, any object immersed in a fluid is subject to an upward buoyancy force equal to the weight of the fluid displaced by the object. This simple description is the origin of a set of effects that are ubiquitous in nature and are becoming commonplace in games, simulators and interactive animations. Although there are solutions to the fluid‐to‐solid coupling problem in some particular cases, to the best of our knowledge, comprehensive and accurate computational buoyancy models adequate in general contexts are still lacking. We propose a real‐time Graphics Processing Unit (GPU) based algorithm for realistic computation of the fluid‐to‐solid coupling problem, which is adequate for a wide variety of cases (solid or hollow objects, with permeable or leak‐proof surfaces, and with variable masses). The method incorporates the behaviour of the fluid into which the object is immersed, and decouples the computation of the physical parameters involved in the buoyancy force of the empty object from the mass of contained liquid.
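Archimedes' principle as restated in this abstract reduces, for a single rigid body, to a one-line force balance. The sketch below uses illustrative densities only (none are values from the paper) to show the buoyancy force and the floating equilibrium it implies:

```python
G = 9.81            # gravitational acceleration, m/s^2
RHO_WATER = 1000.0  # density of water, kg/m^3

def buoyancy_force(volume_displaced, rho_fluid=RHO_WATER, g=G):
    """Upward force equal to the weight of the displaced fluid (Archimedes)."""
    return rho_fluid * volume_displaced * g

def submerged_fraction(rho_object, rho_fluid=RHO_WATER):
    """At floating equilibrium, buoyancy balances weight:
    rho_fluid * V_sub * g = rho_obj * V * g  =>  V_sub / V = rho_obj / rho_fluid."""
    return min(rho_object / rho_fluid, 1.0)

# Ice (~917 kg/m^3) floats with about 92% of its volume under water.
ice_fraction = submerged_fraction(917.0)
# Net vertical force on a fully submerged hollow 1 m^3 object of mass 400 kg:
net_up = buoyancy_force(1.0) - 400.0 * G   # positive => the object rises
```

Decoupling the empty hull's parameters from the mass of contained liquid, as the method does, amounts to letting the `400.0` kg term above vary over time while the displaced-volume term stays geometric.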
The dynamics of this mass of liquid are also computed, in such a way that the relation between the centre of mass of the object and the buoyancy force may vary, leading to complex, realistic behaviours such as the ones arising, for instance, with a sinking boat.

Item DockVis: Visual Analysis of Molecular Docking Trajectories (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Furmanová, Katarína; Vávra, Ondřej; Kozlíková, Barbora; Damborský, Jiří; Vonásek, Vojtěch; Bednář, David; Byška, Jan; Benes, Bedrich and Hauser, Helwig
Computation of trajectories for ligand binding and unbinding via protein tunnels and channels is important for predicting possible protein–ligand interactions. These highly complex processes can be simulated by several software tools, which provide biochemists with valuable information for drug design or protein engineering applications. This paper focuses on aiding this exploration process by introducing the DockVis visual analysis tool. DockVis operates on the multivariate output data from one of the latest available tools for the prediction of ligand transport, CaverDock. DockVis provides the users with several linked views, combining 2D abstracted depictions of ligands and their surroundings and properties with a 3D view. In this way, we enable the users to perceive the spatial configurations of a ligand passing through the protein tunnel. The users are initially visually directed to the most relevant parts of the ligand trajectories, which can then be explored in greater detail by follow‐up analyses. DockVis was designed in tight collaboration with the protein engineers developing the CaverDock tool. However, the concept of DockVis can be extended to any other tool predicting ligand pathways by molecular docking.
DockVis will be made available to the wide user community as part of the Caver Analyst 3.0 software package.

Item Adaptive Block Coordinate Descent for Distortion Optimization (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Naitsat, Alexander; Zhu, Yufeng; Zeevi, Yehoshua Y.; Benes, Bedrich and Hauser, Helwig
We present a new algorithm for optimizing geometric energies and computing positively oriented simplicial mappings. Our major improvements over the state‐of‐the‐art are: (i) the introduction of new energies for repairing inverted and collapsed simplices; (ii) adaptive partitioning of vertices into coordinate blocks with a blended local‐global strategy for more efficient optimization and (iii) the introduction of the displacement norm for improving convergence criteria and for controlling block partitioning. Together these improvements form the basis of the Adaptive Block Coordinate Descent (ABCD) algorithm, aimed at robust geometric optimization. ABCD achieves state‐of‐the‐art results in distortion minimization, even under hard positional constraints and highly distorted, invalid initializations that contain thousands of collapsed and inverted elements. Starting with an invalid non‐injective initial map, ABCD behaves as a modified block coordinate descent up to the point where the current mapping is cleared of invalid simplices; the algorithm then transitions into the chosen iterative solver and converges rapidly. Our method is very general, fast‐converging and easily parallelizable.
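For readers unfamiliar with the base technique, block coordinate descent minimizes an energy by exactly solving for one small block of variables at a time while the rest stay fixed. The toy 1D distortion energy below (a chain of vertices with unit rest length and pinned endpoints, an illustrative stand-in far simpler than ABCD's simplicial energies) shows the pattern:

```python
import numpy as np

def chain_energy(x, rest=1.0):
    """Toy distortion energy: sum of squared deviations from the rest length."""
    d = np.diff(x)
    return float(np.sum((d - rest) ** 2))

def block_coordinate_descent(x, sweeps=200):
    """Repeatedly solve exactly for one interior vertex (a 1-variable block)
    while all others stay fixed; the two endpoints act as hard positional
    constraints."""
    x = x.copy()
    for _ in range(sweeps):
        for i in range(1, len(x) - 1):
            # Closed-form minimizer of the two incident terms
            # (the rest length cancels out of the optimality condition).
            x[i] = 0.5 * (x[i - 1] + x[i + 1])
    return x

x0 = np.array([0.0, 3.0, 0.5, 4.0, 1.0, 5.0])  # badly distorted initialization
x1 = block_coordinate_descent(x0)               # relaxes to even spacing
```

Each block update is cheap and monotonically decreases the energy; ABCD's contribution is, roughly, choosing the blocks adaptively and handling invalid (inverted) elements that a plain scheme like this cannot.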
We show, over a wide range of 2D and 3D problems, that our algorithm is more robust than existing techniques for locally injective mapping.

Item Progressive Acquisition of SVBRDF and Shape in Motion (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Ha, Hyunho; Baek, Seung‐Hwan; Nam, Giljoo; Kim, Min H.; Benes, Bedrich and Hauser, Helwig
To estimate appearance parameters, traditional SVBRDF acquisition methods require multiple input images to be captured with various angles of light and camera, followed by a post‐processing step. For this reason, subjects have been limited to static scenes, or a multiview system is required to capture dynamic objects. In this paper, we propose a simultaneous acquisition method of SVBRDF and shape that allows us to capture the material appearance of deformable objects in motion using a single RGBD camera. To do so, we progressively integrate photometric samples of surfaces in motion into a volumetric data structure with a deformation graph. Then, building upon recent advances in fusion‐based methods, we estimate SVBRDF parameters in motion. We make use of a conventional RGBD camera that consists of colour and infrared cameras with active infrared illumination. The colour camera is used for capturing diffuse properties, and the infrared camera‐illumination module is employed for estimating specular properties by means of active illumination. Our joint optimization yields complete material appearance parameters. We demonstrate the effectiveness of our method with extensive evaluation on both synthetic and real data that include various deformable objects of specular and diffuse appearance.

Item Quantum Coin Method for Numerical Integration (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Shimada, N. H.; Hachisuka, T.; Benes, Bedrich and Hauser, Helwig
Light transport simulation in rendering is formulated as a numerical integration problem in each pixel, which is commonly estimated by Monte Carlo integration. Monte Carlo integration approximates an integral of a black‐box function by taking the average of many evaluations (i.e. samples) of the function (integrand). For N queries of the integrand, Monte Carlo integration achieves an estimation error of O(1/√N). Recently, Johnston [Joh16] introduced quantum super‐sampling (QSS) into rendering as a numerical integration method that can run on quantum computers. QSS breaks the fundamental limitation of the convergence rate of Monte Carlo integration and achieves the faster convergence rate of approximately O(1/N), which is the best possible bound of any quantum algorithm we know today [NW99]. We introduce yet another quantum numerical integration algorithm, quantum coin (QCoin) [AW99], and provide numerical experiments that are unprecedented in the fields of both quantum computing and rendering. We show that QCoin's convergence rate is equivalent to QSS's. We additionally show that QCoin is fundamentally more robust under the presence of noise in actual quantum computers, due to its simpler quantum circuit and its use of fewer qubits. Considering various aspects of quantum computers, we discuss how QCoin can be a more practical alternative to QSS if we were to run light transport simulations on quantum computers in the future.

Item Non‐Uniform Subdivision Surfaces with Sharp Features (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Tian, Yufeng; Li, Xin; Chen, Falai; Benes, Bedrich and Hauser, Helwig
Sharp features are important characteristics in surface modelling. However, it is still a significantly difficult task to create complex sharp features for Non‐Uniform Rational B‐Spline (NURBS)‐compatible subdivision surfaces.
Current non‐uniform subdivision methods generally produce sharp features by setting zero knot intervals, and these sharp features may have unpleasant visual effects. In this paper, we construct a non‐uniform subdivision scheme to create complex sharp features by extending the eigen‐polyhedron technique. The new scheme allows sharp edges to be specified arbitrarily in the initial mesh and generates non‐uniform cubic B‐spline curves to represent the sharp features. Experimental results demonstrate that the present method can generate more visually pleasing sharp features than other existing approaches.

Item Real‐Time Deformation with Coupled Cages and Skeletons (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Corda, F.; Thiery, J. M.; Livesu, M.; Puppo, E.; Boubekeur, T.; Scateni, R.; Benes, Bedrich and Hauser, Helwig
Skeleton‐based and cage‐based deformation techniques represent the two most popular approaches to control real‐time deformations of digital shapes and are, to a vast extent, complementary to one another. Despite their complementary roles, high‐end modelling packages do not allow for seamless integration of such control structures, thus placing a considerable burden on the user to keep them synchronized. In this paper, we propose a framework that seamlessly combines rigging skeletons and deformation cages, granting artists a real‐time deformation system that operates using any smooth combination of the two approaches. By coupling the deformation spaces of cages and skeletons, we access a much larger space, containing poses that are impossible to obtain by acting solely on a skeleton or a cage. Our method is oblivious to the specific techniques used to perform skinning and cage‐based deformation, making it compatible with pre‐existing tools.
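The kind of smooth coupling of deformation spaces described here can be illustrated with a deliberately tiny 2D sketch: one "skeleton" deformation (a single rotating bone) and one "cage" deformation (bilinear interpolation of a unit-square cage), blended by a scalar weight. All functions below are hypothetical stand-ins, not the paper's coupled solver:

```python
import numpy as np

def skeleton_deform(p, angle):
    """Toy skeleton control: a single bone rotating about the origin."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s], [s, c]]) @ p

def cage_deform(p, cage):
    """Toy cage control: bilinear interpolation of a deformed unit-square cage.
    cage rows = deformed positions of corners (0,0), (1,0), (0,1), (1,1)."""
    x, y = p
    return ((1 - x) * (1 - y) * cage[0] + x * (1 - y) * cage[1]
            + (1 - x) * y * cage[2] + x * y * cage[3])

def coupled_deform(p, angle, cage, a):
    """Blend the two deformation spaces with a smooth weight a in [0, 1]."""
    return (1.0 - a) * skeleton_deform(p, angle) + a * cage_deform(p, cage)

p = np.array([0.5, 0.5])
cage = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.4, 1.3]])  # pulled corner
pose = coupled_deform(p, 0.3, cage, 0.5)  # unreachable by either control alone
```

At `a = 0` the point follows the bone, at `a = 1` the cage, and intermediate weights produce poses neither control structure can reach on its own, which is the enlarged pose space the abstract refers to.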
We demonstrate the usefulness of our hybrid approach on a variety of examples.

Item Image Morphing With Perceptual Constraints and STN Alignment (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Fish, N.; Zhang, R.; Perry, L.; Cohen‐Or, D.; Shechtman, E.; Barnes, C.; Benes, Bedrich and Hauser, Helwig
In image morphing, a sequence of plausible frames is synthesized and composited together to form a smooth transformation between given instances. Intermediates must remain faithful to the input, stand on their own as members of the set and maintain a well‐paced visual transition from one frame to the next. In this paper, we propose a conditional generative adversarial network (GAN) morphing framework operating on a pair of input images. The network is trained to synthesize frames corresponding to temporal samples along the transformation, and it learns a proper shape prior that enhances the plausibility of intermediate frames. While individual frame plausibility is boosted by the adversarial setup, a special training protocol producing sequences of frames, combined with a perceptual similarity loss, promotes a smooth transformation over time. Explicit specification of correspondences is replaced with a grid‐based freeform‐deformation spatial transformer that predicts the geometric warp between the inputs, instituting the smooth geometric effect by bringing the shapes into an initial alignment.
We provide comparisons to classic as well as latent‐space morphing techniques, and demonstrate that, given a set of images for self‐supervision, our network learns to generate visually pleasing morphing effects featuring believable in‐betweens, with robustness to changes in shape and texture, requiring no correspondence annotation.

Item A Discriminative Multi‐Channel Facial Shape (MCFS) Representation and Feature Extraction for 3D Human Faces (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Gong, Xun; Li, Xin; Li, Tianrui; Liang, Yongqing; Benes, Bedrich and Hauser, Helwig
Building an effective representation of 3D face geometry is essential for face analysis tasks such as landmark detection, face recognition and reconstruction. This paper proposes a Multi‐Channel Facial Shape (MCFS) representation that consists of depth, hand‐engineered feature and attention maps to construct a 3D facial descriptor. In addition, a multi‐channel adjustment mechanism, named filtered squeeze and reversed excitation (FSRE), is proposed to re‐organize the MCFS data. To assign a suitable weight to each channel, FSRE learns the importance of each layer automatically during the training phase. The MCFS and FSRE blocks collaborate effectively to build a robust 3D facial shape representation with excellent discriminative ability. Extensive experimental results, on both high‐resolution and low‐resolution face datasets, show that facial features extracted by our framework outperform existing methods. The representation is stable against occlusions, data corruption, expressions and pose variations.
Also, unlike traditional methods of 3D face feature extraction, which always take minutes to create 3D features, our system can run in real time.

Item From 2.5D Bas‐relief to 3D Portrait Model (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Zhang, Yu‐Wei; Wang, Wenping; Chen, Yanzhao; Liu, Hui; Ji, Zhongping; Zhang, Caiming; Benes, Bedrich and Hauser, Helwig
In contrast to a 3D model, which can be freely observed, a portrait bas‐relief projects only slightly from the background and is limited to a fixed viewpoint. In this paper, we propose a novel method to reconstruct the underlying 3D shape from a single 2.5D bas‐relief, providing observers with wider viewing perspectives. Our goal is for the reconstructed portrait to have natural depth ordering and an appearance similar to the input. To achieve this, we first use a 3D template face to fit the portrait. Then, we optimize the face shape by normal transfer and Poisson surface reconstruction. The hair and body regions are finally reconstructed and combined with the 3D face. From the resulting 3D shape, one can generate new reliefs with varying poses and thicknesses, freeing the input from its fixed view. A number of experimental results verify the effectiveness of our method.

Item Real‐Time Glints Rendering With Pre‐Filtered Discrete Stochastic Microfacets (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Wang, Beibei; Deng, Hong; Holzschuch, Nicolas; Benes, Bedrich and Hauser, Helwig
Many real‐life materials have a sparkling appearance; examples include metallic paints, sparkling fabrics and snow. Simulating these sparkles is important for realistic rendering but expensive. As sparkles come from small shiny particles reflecting light into a specific direction, they are very challenging for illumination simulation.
Existing approaches use a four‐dimensional hierarchy, searching for light‐reflecting particles simultaneously in space and direction. This approach is accurate, but extremely expensive. A separable model is much faster, but still not suitable for real‐time applications. The performance problem is even worse when illumination comes from environment maps, as they require either a large sample count per pixel or pre‐filtering. Pre‐filtering is incompatible with the existing sparkle models, due to their discrete multi‐scale representation. In this paper, we present a GPU‐friendly, pre‐filtered model for real‐time simulation of sparkles and glints. Our method simulates glints under both environment maps and point light sources in real time, at an added cost of just 10 ms per frame at full high‐definition resolution. Editing material properties requires extra computation but remains real time, with an added cost of 10 ms per frame.
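The "small shiny particles" view in this last abstract can be made concrete with a toy counting model in the spirit of discrete stochastic microfacet approaches: a pixel footprint contains N particles, each of which glints only if its random normal falls inside a tiny reflection cone, so the visible glint count per frame is a binomial draw. All constants below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def expected_glints(n_particles, cone_solid_angle, total_solid_angle=2.0 * np.pi):
    """Each particle's normal is uniform over the hemisphere; the particle
    produces a glint only if its normal lies inside the reflection cone."""
    p = cone_solid_angle / total_solid_angle
    return n_particles * p, p

def sample_glints(rng, n_particles, p, frames):
    """Per-frame glint counts: one binomial draw per rendered frame."""
    return rng.binomial(n_particles, p, size=frames)

rng = np.random.default_rng(7)
mean_count, p = expected_glints(n_particles=500, cone_solid_angle=0.25)
counts = sample_glints(rng, 500, p, frames=10000)  # flickering sparkle counts
```

The frame-to-frame variance of these counts is what reads as sparkle; hierarchical methods spend their time locating exactly which particles reflect, which is the search that pre-filtered models aim to avoid at render time.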