Browsing by Author "Zeng, Wei"
Now showing 1 - 2 of 2
Item
Route-Aware Edge Bundling for Visualizing Origin-Destination Trails in Urban Traffic
(The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Zeng, Wei; Shen, Qiaomu; Jiang, Yuzhe; Telea, Alexandru
Editors: Gleicher, Michael; Viola, Ivan; Leitte, Heike
Abstract: Origin-destination (OD) trails describe movements across space. Typical visualizations thereof use either straight lines or plot the actual trajectories. To reduce the clutter inherent in visualizing large OD datasets, bundling methods can be used. Yet, bundling OD trails in urban traffic data remains challenging, for two specific reasons: the constraints implied by the underlying road network, and the difficulty of finding good bundling settings. To cope with these issues, we propose a new approach called Route-Aware Edge Bundling (RAEB). To handle road constraints, we first generate a hierarchical model of the road-and-trajectory data. Next, we derive optimal bundling parameters, including kernel size and number of iterations, for a user-selected level of detail of this model, thereby allowing users to explicitly trade off simplification vs. accuracy. We demonstrate the added value of RAEB compared to state-of-the-art trail bundling methods on both synthetic and real-world traffic data, for tasks that include the preservation of road network topology and the support of multiscale exploration.

Item
WYTIWYR: A User Intent-Aware Framework with Multi-modal Inputs for Visualization Retrieval
(The Eurographics Association and John Wiley & Sons Ltd., 2023)
Authors: Xiao, Shishi; Hou, Yihan; Jin, Cheng; Zeng, Wei
Editors: Bujack, Roxana; Archambault, Daniel; Schreck, Tobias
Abstract: Retrieving charts from a large corpus is a fundamental task that can benefit numerous applications, such as visualization recommendation. The retrieved results are expected to conform to both explicit visual attributes (e.g., chart type, colormap) and implicit user intents (e.g., design style, context information) that vary across application scenarios. However, existing example-based chart retrieval methods are built upon non-decoupled, low-level visual features that are hard to interpret, while definition-based ones are constrained to pre-defined attributes that are hard to extend. In this work, we propose a new framework, namely WYTIWYR (What-You-Think-Is-What-You-Retrieve), that integrates user intents into the chart retrieval process. The framework consists of two stages: first, the Annotation stage disentangles the visual attributes within the query chart; second, the Retrieval stage embeds the user's intent, given as a customized text prompt, together with the bitmap query chart to recall the targeted retrieval results. We develop a prototype WYTIWYR system that leverages a contrastive language-image pre-training (CLIP) model to achieve zero-shot classification as well as multi-modal input encoding, and test the prototype on a large corpus of charts crawled from the Internet. Quantitative experiments, case studies, and qualitative interviews demonstrate the usability and effectiveness of the proposed framework.
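The RAEB abstract names kernel size and iteration count as the key bundling parameters. As a generic illustration of kernel-density edge bundling only (a simplified sketch, not the authors' method — RAEB's road-network hierarchy and automatic parameter derivation are omitted, and all names below are illustrative), each interior polyline point can be iteratively attracted toward the density-weighted mean of nearby trail points while the kernel shrinks:

```python
import numpy as np

def bundle_trails(trails, kernel_size=0.2, n_iter=10, step=0.1):
    """Iteratively pull each interior trail point toward the Gaussian
    density-weighted mean of all trail points (endpoints stay fixed)."""
    pts = [t.astype(float).copy() for t in trails]
    h = kernel_size
    for _ in range(n_iter):
        all_pts = np.vstack(pts)
        moved = []
        for t in pts:
            m = t.copy()
            for i in range(1, len(t) - 1):  # keep origin/destination fixed
                d = all_pts - t[i]
                w = np.exp(-np.sum(d * d, axis=1) / (2 * h * h))
                mean = (w[:, None] * all_pts).sum(axis=0) / w.sum()
                m[i] = t[i] + step * (mean - t[i])
            moved.append(m)
        pts = moved
        h *= 0.9  # shrink the kernel for progressively tighter bundles
    return pts
```

Running two parallel trails through this loop draws their interiors together while keeping endpoints in place — the visual simplification that bundling trades against spatial accuracy, which is the trade-off RAEB exposes to the user.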
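The WYTIWYR abstract describes combining a text intent prompt with a bitmap query chart to drive retrieval. A minimal sketch of such a ranking step, assuming image and text embeddings already lie in a shared CLIP-style space (the blending weight `alpha` and all function names here are illustrative assumptions, not the paper's API):

```python
import numpy as np

def cosine(a, b):
    # cosine similarity between two embedding vectors
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_img_emb, intent_text_emb, corpus_embs, alpha=0.5, k=3):
    """Blend the query-chart embedding with the intent-prompt embedding,
    then rank corpus charts by cosine similarity to the blend."""
    q = alpha * query_img_emb + (1 - alpha) * intent_text_emb
    scores = [cosine(q, c) for c in corpus_embs]
    order = np.argsort(scores)[::-1][:k]
    return [(int(i), scores[int(i)]) for i in order]
```

With a real CLIP model, the image and text embeddings would come from the same pre-trained encoder pair, which is what makes the zero-shot, multi-modal combination possible.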