2023
149. Deepthi Raghunandan, Aayushi Roy, Shenzhi Shi, Niklas Elmqvist, Leilani Battle (2023): Code Code Evolution: Understanding How People Change Data Science Notebooks Over Time. Proceedings of the ACM Conference on Human Factors in Computing Systems, ACM, New York, NY, USA, 2023.

@inproceedings{Raghunandan2023,
  title     = {Code Code Evolution: Understanding How People Change Data Science Notebooks Over Time},
  author    = {Deepthi Raghunandan and Aayushi Roy and Shenzhi Shi and Niklas Elmqvist and Leilani Battle},
  url       = {https://users.umiacs.umd.edu/~elm/projects/cce/cce.pdf, PDF},
  year      = {2023},
  date      = {2023-04-24},
  booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems},
  publisher = {ACM},
  address   = {New York, NY, USA},
  abstract  = {Sensemaking is the iterative process of identifying, extracting, and explaining insights from data, where each iteration is referred to as the “sensemaking loop.” However, little is known about how sensemaking behavior evolves from exploration to explanation during this process. This gap limits our ability to understand the full scope of sensemaking, which in turn inhibits the design of tools that support the process. We contribute the first mixed-method study to characterize how sensemaking evolves within computational notebooks. We study 2,574 Jupyter notebooks mined from GitHub by identifying data science notebooks that have undergone significant iterations, presenting a regression model that automatically characterizes sensemaking activity, and using this regression model to calculate and analyze shifts in activity across GitHub versions. Our results show that notebook authors participate in various sensemaking tasks over time, such as annotation, branching analysis, and documentation. We use our insights to recommend extensions to current notebook environments.}
}
148. Md Naimul Hoque, Md Ehtesham-Ul-Haque, Niklas Elmqvist, Syed Masum Billah (2023): Accessible Data Representation with Natural Sound. Proceedings of the ACM Conference on Human Factors in Computing Systems, ACM, New York, NY, USA, 2023.

@inproceedings{Hoque2023,
  title     = {Accessible Data Representation with Natural Sound},
  author    = {Md Naimul Hoque and Md Ehtesham-Ul-Haque and Niklas Elmqvist and Syed Masum Billah},
  url       = {https://users.umiacs.umd.edu/~elm/projects/susurrus/susurrus.pdf, PDF},
  year      = {2023},
  date      = {2023-04-24},
  booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems},
  publisher = {ACM},
  address   = {New York, NY, USA},
  abstract  = {Sonification translates data into non-speech audio. Such auditory representations can make data visualization accessible to people who are blind or have low vision (BLV). This paper presents a sonification method for translating common data visualizations into a blend of natural sounds. We hypothesize that people's familiarity with sounds drawn from nature, such as birds singing in a forest, and their ability to listen to these sounds in parallel, will enable BLV users to perceive multiple data points being sonified at the same time. Informed by an extensive literature review and a preliminary study with 5 BLV participants, we designed an accessible data representation tool, Susurrus, that combines our sonification method with other accessibility features, such as keyboard interaction and text-to-speech feedback. Finally, we conducted a user study with 12 BLV participants and report the potential and application of natural sounds for sonification compared to existing sonification tools.}
}
147. Sungbok Shin, Sanghyun Hong, Niklas Elmqvist (2023): Perceptual Pat: A Virtual Human Visual System for Iterative Visualization Design. Proceedings of the ACM Conference on Human Factors in Computing Systems, ACM, New York, NY, USA, 2023.

@inproceedings{Shin2023,
  title     = {Perceptual Pat: A Virtual Human Visual System for Iterative Visualization Design},
  author    = {Sungbok Shin and Sanghyun Hong and Niklas Elmqvist},
  url       = {https://users.umiacs.umd.edu/~elm/projects/perceptual-pat/perceptual-pat.pdf, PDF},
  year      = {2023},
  date      = {2023-04-24},
  booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems},
  publisher = {ACM},
  address   = {New York, NY, USA},
  abstract  = {Designing a visualization is often a process of iterative refinement where the designer improves a chart over time by adding features, improving encodings, and fixing mistakes. However, effective design requires external critique and evaluation. Unfortunately, such critique is not always available on short notice and evaluation can be costly. To address this need, we present Perceptual Pat, an extensible suite of AI and computer vision techniques that forms a virtual human visual system for supporting iterative visualization design. The system analyzes snapshots of a visualization using an extensible set of filters—including gaze maps, text recognition, color analysis, etc.—and generates a report summarizing the findings. The web-based Pat Design Lab provides a version tracking system that enables the designer to track improvements over time. We validate Perceptual Pat using a longitudinal qualitative study involving 4 professional visualization designers that used the tool over a few days to design a new visualization.}
}
146. David Saffo, Andrea Batch, Cody Dunne, Niklas Elmqvist (2023): Through Their Eyes and In Their Shoes: Providing Group Awareness During Collaboration Across Virtual Reality and Desktop Platforms. Proceedings of the ACM Conference on Human Factors in Computing Systems, ACM, New York, NY, USA, 2023.

@inproceedings{Saffo2023,
  title     = {Through Their Eyes and In Their Shoes: Providing Group Awareness During Collaboration Across Virtual Reality and Desktop Platforms},
  author    = {David Saffo and Andrea Batch and Cody Dunne and Niklas Elmqvist},
  url       = {https://users.umiacs.umd.edu/~elm/projects/vrxd/vrxd.pdf, PDF
https://osf.io/wgprb/, OSF},
  year      = {2023},
  date      = {2023-04-24},
  booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems},
  publisher = {ACM},
  address   = {New York, NY, USA},
  abstract  = {Many collaborative data analysis situations benefit from collaborators utilizing different platforms. However, maintaining group awareness between team members using diverging devices is difficult, not least because common ground diminishes. A person using head-mounted VR cannot physically see a user on a desktop computer even while co-located, and the desktop user cannot easily relate to the VR user's 3D workspace. To address this, we propose the ``eyes-and-shoes'' principles for group awareness and abstract them into four levels of techniques. Furthermore, we evaluate these principles with a qualitative user study of 6 participant pairs synchronously collaborating across distributed desktop and VR head-mounted devices. In this study, we vary the group awareness techniques between participants and explore two visualization contexts within participants. The results of this study indicate that the more visual metaphors and views of participants diverge, the greater the level of group awareness is needed. A copy of this paper, the study preregistration, and all supplemental materials required to reproduce the study are available on https://osf.io/wgprb/.}
}
145. Andrea Batch, Yipeng Ji, Mingming Fan, Jian Zhao, Niklas Elmqvist (2023): uxSense: Supporting User Experience Analysis with Visualization and Computer Vision. IEEE Transactions on Visualization and Computer Graphics, 2023.

@article{Batch2023,
  title    = {uxSense: Supporting User Experience Analysis with Visualization and Computer Vision},
  author   = {Andrea Batch and Yipeng Ji and Mingming Fan and Jian Zhao and Niklas Elmqvist},
  url      = {https://users.umiacs.umd.edu/~elm/projects/uxsense/uxsense.pdf, PDF},
  year     = {2023},
  date     = {2023-02-01},
  journal  = {IEEE Transactions on Visualization and Computer Graphics},
  abstract = {Analyzing user behavior from usability evaluation can be a challenging and time-consuming task, especially as the number of participants and the scale and complexity of the evaluation grow. We propose uxSense, a visual analytics system using machine learning methods to extract user behavior from audio and video recordings as parallel time-stamped data streams. Our implementation draws on pattern recognition, computer vision, natural language processing, and machine learning to extract user sentiment, actions, posture, spoken words, and other features from such recordings. These streams are visualized as parallel timelines in a web-based front-end, enabling the researcher to search, filter, and annotate data across time and space. We present the results of a user study involving professional UX researchers evaluating user data using uxSense. In fact, we used uxSense itself to evaluate their sessions.}
}
144. Debanjan Datta, Nathan Self, John Simeone, Amelia Meadows, Willow Outhwaite, Linda Walker, Niklas Elmqvist, Naren Ramkrishnan (2023): TimberSleuth: Visual Anomaly Detection with Human Feedback for Mitigating the Illegal Timber Trade. Information Visualization, 2023.

@article{Datta2023,
  title    = {TimberSleuth: Visual Anomaly Detection with Human Feedback for Mitigating the Illegal Timber Trade},
  author   = {Debanjan Datta and Nathan Self and John Simeone and Amelia Meadows and Willow Outhwaite and Linda Walker and Niklas Elmqvist and Naren Ramkrishnan},
  url      = {https://users.umiacs.umd.edu/~elm/projects/timbersleuth/timbersleuth.pdf, PDF},
  year     = {2023},
  date     = {2023-02-01},
  journal  = {Information Visualization},
  abstract = {Detecting illegal shipments in the global timber trade poses a massive challenge to enforcement agencies. The massive volume and complexity of timber shipments and obfuscations within international trade data, intentional or not, necessitate an automated system to aid in detecting specific shipments that potentially contain illegally harvested wood. To address these requirements, we build a novel human-in-the-loop visual analytics system called TimberSleuth. TimberSleuth uses a novel scoring model reinforced through human feedback to improve the relevance of the system's results while using an off-the-shelf anomaly detection model. Detailed evaluation is performed using real data with synthetic anomalies to test the machine intelligence that drives the system. We design interactive visualizations to enable analysis of pertinent details of anomalous trade records so that analysts can determine if a record is relevant and provide iterative feedback. This feedback is utilized by the machine learning model to improve the precision of the output.}
}
2022
143. Tamara L. Clegg, Keaunna Cleveland, Erianne Weight, Daniel Greene, Niklas Elmqvist (2022): Data Everyday as Community Driven Science: Athletes’ Critical Data Literacy Practices in Collegiate Sports Contexts. Journal of Research in Science Teaching, 2022.

@article{Clegg2022,
  title    = {Data Everyday as Community Driven Science: Athletes’ Critical Data Literacy Practices in Collegiate Sports Contexts},
  author   = {Tamara L. Clegg and Keaunna Cleveland and Erianne Weight and Daniel Greene and Niklas Elmqvist},
  url      = {Data Everyday as Community Driven Science: Athletes’ Critical Data Literacy Practices in Collegiate Sports Contexts, Fulltext (HTML)},
  year     = {2022},
  date     = {2022-12-01},
  journal  = {Journal of Research in Science Teaching},
  abstract = {In this article, we investigate the community-driven science happening organically in elite athletics as a means of engaging a community of learners—collegiate athletes, many of whom come from underrepresented groups—in STEM. We aim to recognize the data literacy practices inherent in sports play and to explore the potential of critical data literacy practices for enabling athletes to leverage data science as a means of addressing systemic racial, equity, and justice issues inherent in sports institutions. We leverage research on critical data literacies as a lens to present case studies of three athletes at an NCAA Division 1 university spanning three different sports. We focus on athletes' experiences as they engage in critical data literacy practices and the ways they welcome, adapt, resist, and critique such engagements. Our findings indicate ways in which athletes (1) readily accept data practices espoused by their coaches and sport, (2) critique and intentionally disengage from such practices, and (3) develop their own new data productions. In order to support community-driven science, our findings point to the critical role of athletics' organizations in promoting athletes' access to, as well as engagement and agency with, data practices on their teams.}
}
142. Sungbok Shin, Sunghyo Chung, Sanghyun Hong, Niklas Elmqvist (2022): A Scanner Deeply: Predicting Gaze Heatmaps on Visualizations Using Crowdsourced Eye Movement Data. IEEE Transactions on Visualization and Computer Graphics, 2022.

@article{Shin2022b,
  title    = {A Scanner Deeply: Predicting Gaze Heatmaps on Visualizations Using Crowdsourced Eye Movement Data},
  author   = {Sungbok Shin and Sunghyo Chung and Sanghyun Hong and Niklas Elmqvist},
  url      = {http://users.umiacs.umd.edu/~elm/projects/scanner-deeply/scanner-deeply.pdf, PDF},
  year     = {2022},
  date     = {2022-10-20},
  journal  = {IEEE Transactions on Visualization and Computer Graphics},
  abstract = {Visual perception is a key component of data visualization. Much prior empirical work uses eye movement as a proxy to understand human visual perception. Diverse apparatus and techniques have been proposed to collect eye movements, but there is still no optimal approach. In this paper, we review 30 prior works for collecting eye movements based on three axes: (1) the tracker technology used to measure eye movements; (2) the image stimulus shown to participants; and (3) the collection methodology used to gather the data. Based on this taxonomy, we employ a webcam-based eyetracking approach using task-specific visualizations as the stimulus. The low technology requirement means that virtually anyone can participate, thus enabling us to collect data at large scale using crowdsourcing: approximately 12,000 samples in total. Choosing visualization images as stimulus means that the eye movements will be specific to perceptual tasks associated with visualization. We use these data to propose Scanner Deeply, a virtual eyetracker model that, given an image of a visualization, generates a gaze heatmap for that image. We employ a computationally efficient, yet powerful convolutional neural network for our model. We compare the results of our work with results from the DVS model and a neural network trained on the SALICON dataset. The analysis of our gaze patterns enables us to understand how users grasp the structure of visualized data. We also make our stimulus dataset of visualization images available as part of this paper’s contribution.}
}
141. Eric Newburger, Michael Correll, Niklas Elmqvist (2022): Fitting Bell Curves to Data Distributions using Visualization. IEEE Transactions on Visualization and Computer Graphics, 2022.

@article{Newburger2022,
  title    = {Fitting Bell Curves to Data Distributions using Visualization},
  author   = {Eric Newburger and Michael Correll and Niklas Elmqvist},
  url      = {https://users.umiacs.umd.edu/~elm/projects/fitting-bells/fitting-bells.pdf, PDF},
  year     = {2022},
  date     = {2022-10-01},
  journal  = {IEEE Transactions on Visualization and Computer Graphics},
  abstract = {Idealized probability distributions, such as normal or other curves, lie at the root of confirmatory statistical tests. But how well do people understand these idealized curves? In practical terms, does the human visual system allow us to match sample data distributions with hypothesized population distributions from which those samples might have been drawn? And how do different visualization techniques impact this capability? This paper shares the results of a crowdsourced experiment that tested the ability of respondents to fit normal curves to four different data distribution visualizations: bar histograms, dotplot histograms, strip plots, and boxplots. We find that the crowd can estimate the center (mean) of a distribution with some success and little bias. We also find that people generally overestimate the standard deviation—which we dub the “umbrella effect” because people tend to want to cover the whole distribution using the curve, as if sheltering it from the heavens above—and that strip plots yield the best accuracy.}
}
140. Pramod Chundury, M. Adil Yalcin, Jonathan Crabtree, Anup Mahurkar, Lisa M. Shulman, Niklas Elmqvist (2022): Contextual In-Situ Help for Visual Data Interfaces. Information Visualization, 2022.

@article{Chundury2022,
  title    = {Contextual In-Situ Help for Visual Data Interfaces},
  author   = {Pramod Chundury and M. Adil Yalcin and Jonathan Crabtree and Anup Mahurkar and Lisa M. Shulman and Niklas Elmqvist},
  url      = {https://users.umiacs.umd.edu/~elm/projects/contextual-help/contextual-help.pdf, PDF},
  year     = {2022},
  date     = {2022-09-09},
  journal  = {Information Visualization},
  abstract = {As the complexity of data analysis increases, even well-designed data interfaces must guide experts in transforming their theoretical knowledge into actual features supported by the tool. This challenge is even greater for casual users who are increasingly turning to data analysis to solve everyday problems. To address this challenge, we propose data-driven, contextual, in-situ help features that can be implemented in visual data interfaces. We introduce five modes of help-seeking: (1) contextual help on selected interface elements, (2) topic listing, (3) overview, (4) guided tour, and (5) notifications. The difference between our work and general user interface help systems is that data visualizations provide a unique environment for embedding context-dependent data inside on-screen messaging. We demonstrate the usefulness of such contextual help through case studies of two visual data interfaces: Keshif and POD-Vis. We implemented and evaluated the help modes with two sets of participants, and found that directly selecting user interface elements was the most useful.}
}
139. Biswaksen Patnaik, Huaishu Peng, Niklas Elmqvist (2022): Sensemaking Sans Power: Interactive Data Visualization Using Color-Changing Ink. IEEE Transactions on Visualization and Computer Graphics, 2022.

@article{Patnaik2022,
  title    = {Sensemaking Sans Power: Interactive Data Visualization Using Color-Changing Ink},
  author   = {Biswaksen Patnaik and Huaishu Peng and Niklas Elmqvist},
  url      = {https://users.umiacs.umd.edu/~elm/projects/sense-sans-power/sense-sans-power.pdf, PDF},
  year     = {2022},
  date     = {2022-09-01},
  journal  = {IEEE Transactions on Visualization and Computer Graphics},
  abstract = {We present an approach for interactively visualizing data using color-changing inks without the need for electronic displays or computers. Color-changing inks are a family of physical inks that change their color characteristics in response to an external stimulus such as heat, UV light, water, and pressure. Visualizations created using color-changing inks can embed interactivity in printed material without external computational media. In this paper, we survey current color-changing ink technology and then use these findings to derive a framework for how it can be used to construct interactive data representations. We also enumerate the interaction techniques possible using this technology. We then show some examples of how to use color-changing ink to create interactive visualizations on paper. While obviously limited in scope to situations where no power or computing is present, or as a complement to digital displays, our findings can be employed for paper, data physicalization, and embedded visualizations.}
}
138. | Sriram Karthik Badam, Senthil Chandrasegaran, Niklas Elmqvist (2022): Integrating Annotations into Multidimensional Visual Dashboards. Information Visualization, 21 (3), pp. 270–284, 2022. (Type: Article | Abstract | Links | BibTeX) @article{Badam2022, title = {Integrating Annotations into Multidimensional Visual Dashboards}, author = {Sriram Karthik Badam and Senthil Chandrasegaran and Niklas Elmqvist}, url = {https://users.umiacs.umd.edu/~elm/projects/facetnotes/facetnotes.pdf, PDF}, year = {2022}, date = {2022-05-10}, journal = {Information Visualization}, volume = {21}, number = {3}, pages = {270--284}, abstract = {Multidimensional data is often visualized using coordinated multiple views in an interactive dashboard. However, unlike in infographics where text is often a central part of the presentation, there is currently little knowledge of how to best integrate text and annotations in a visualization dashboard. In this paper, we explore a technique called FacetNotes for presenting these textual annotations on top of any visualization within a dashboard irrespective of the scale of data shown or the design of visual representation itself. FacetNotes does so by grouping and ordering the textual annotations based on properties of (1) the individual data points associated with the annotations, and (2) the target visual representation on which they should be shown. We present this technique along with a set of user interface features and guidelines to apply it to visualization interfaces. We also demonstrate FacetNotes in a custom visual dashboard interface. Finally, results from a user study of FacetNotes show that the technique improves the scope and complexity of insights developed during visual exploration.}, keywords = {} } |
137. | Minjeong Shin, Joohee Kim, Yunha Han, Lexing Xie, Mitchell Whitelaw, Bum Chul Kwon, Sungahn Ko, Niklas Elmqvist (2022): Roslingifier: Semi-Automated Storytelling for Animated Scatterplots. IEEE Transactions on Visualization and Computer Graphics, 2022. (Type: Article | Abstract | Links | BibTeX) @article{Shin2022, title = {Roslingifier: Semi-Automated Storytelling for Animated Scatterplots}, author = {Minjeong Shin and Joohee Kim and Yunha Han and Lexing Xie and Mitchell Whitelaw and Bum Chul Kwon and Sungahn Ko and Niklas Elmqvist}, url = {https://users.umiacs.umd.edu/~elm/projects/roslingifier/roslingifier.pdf, PDF}, year = {2022}, date = {2022-05-10}, journal = {IEEE Transactions on Visualization and Computer Graphics}, abstract = {We present Roslingifier, a data-driven storytelling method for animated scatterplots. Like its namesake, Hans Rosling (1948--2017), a professor of public health and a spellbinding public speaker, Roslingifier turns a sequence of entities changing over time---such as countries and continents with their demographic data---into an engaging narrative telling the story of the data. This data-driven storytelling method with an in-person presenter is a new genre of storytelling technique and has never been studied before. In this paper, we aim to define a design space for this new genre---data presentation---and provide a semi-automated authoring tool for helping presenters create quality presentations. From an in-depth analysis of video clips of presentations using interactive visualizations, we derive three specific techniques to achieve this: natural language narratives, visual effects that highlight events, and temporal branching that changes playback time of the animation. Our implementation of the Roslingifier method is capable of identifying and clustering significant movements, automatically generating visual highlighting and a narrative for playback, and enabling the user to customize. 
From two user studies, we show that Roslingifier allows users to effectively create engaging data stories and the system features help both presenters and viewers find diverse insights.}, keywords = {} } |
136. | Sebastian Hubenschmid, Jonathan Wieland, Daniel Immanuel Fink, Andrea Batch, Johannes Zagermann, Niklas Elmqvist, Harald Reiterer (2022): ReLive: Bridging In-Situ and Ex-Situ Visual Analytics for Analyzing Mixed Reality User Studies. Proceedings of the ACM Conference on Human Factors in Computing Systems,, pp. 24:1–24:20, ACM, New York, NY, USA, 2022. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Hubenschmid2022, title = {ReLive: Bridging In-Situ and Ex-Situ Visual Analytics for Analyzing Mixed Reality User Studies}, author = {Sebastian Hubenschmid and Jonathan Wieland and Daniel Immanuel Fink and Andrea Batch and Johannes Zagermann and Niklas Elmqvist and Harald Reiterer}, url = {https://users.umiacs.umd.edu/~elm/projects/relive/relive.pdf, PDF https://youtu.be/BaNZ02QkZ_k, Youtube}, year = {2022}, date = {2022-05-10}, booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems,}, pages = {24:1--24:20}, publisher = {ACM}, address = {New York, NY, USA}, abstract = {The nascent field of mixed reality is seeing an ever-increasing need for user studies and field evaluation, which are particularly challenging given device heterogeneity, diversity of use, and mobile deployment. Immersive analytics tools have recently emerged to support such analysis in situ, yet the complexity of the data also warrants an ex-situ analysis using more traditional non-immersive visual analytics setups. To bridge the gap between both approaches, we introduce ReLive: a mixed-immersion visual analytics framework for exploring and analyzing mixed reality user studies. ReLive combines an in-situ virtual reality view with a complementary ex-situ desktop view. While the virtual reality view allows users to relive interactive spatial recordings replicating the original study, the synchronized desktop view provides a familiar interface for analyzing aggregated data. 
We validated our concepts in a two-step evaluation consisting of a design walkthrough and an empirical expert user study.}, keywords = {} } |
135. | Md. Naimul Hoque, Bhavya Ghai, Niklas Elmqvist (2022): DramatVis Personae: Visual Text Analytics for Identifying Social Biases in Creative Writing. Proceedings of the ACM Conference on Designing Interactive Systems, ACM, New York, NY, USA, 2022. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Hoque2022, title = {DramatVis Personae: Visual Text Analytics for Identifying Social Biases in Creative Writing}, author = {Md. Naimul Hoque and Bhavya Ghai and Niklas Elmqvist}, url = {https://users.umiacs.umd.edu/~elm/projects/dvp/dvp.pdf, PDF}, year = {2022}, date = {2022-05-10}, booktitle = {Proceedings of the ACM Conference on Designing Interactive Systems}, publisher = {ACM}, address = {New York, NY, USA}, abstract = {Implicit biases and stereotypes are often pervasive in different forms of creative writing such as novels, screenplays, and children's books. To understand the kind of biases writers are concerned about and how they mitigate those in their writing, we conducted formative interviews with nine writers. The interviews suggested that despite a writer's best interest, tracking and managing implicit biases such as a lack of agency, supporting or submissive roles, or harmful language for characters representing marginalized groups is challenging as the story becomes longer and complicated. Based on the interviews, we developed DramatVis Personae (DVP), a visual analytics tool that allows writers to assign social identities to characters, and evaluate how characters and different intersectional social identities are represented in the story. To evaluate DVP, we first conducted think-aloud sessions with three writers and found that DVP is easy-to-use, naturally integrates into the writing process, and could potentially help writers in several critical bias identification tasks.
We then conducted a follow-up user study with 11 writers and found that participants could answer questions related to bias detection more efficiently using DVP in comparison to a simple text editor.}, keywords = {} } |
134. | Pramod Chundury, Biswaksen Patnaik, Yasmin Reyazuddin, Christine W. Tang, Jonathan Lazar, Niklas Elmqvist (2022): Towards Understanding Sensory Substitution for Accessible Visualization: An Interview Study. IEEE Transactions on Visualization & Computer Graphics, 28 (1), pp. 1084–1094, 2022. (Type: Article | Abstract | Links | BibTeX) @article{Chundury2021, title = {Towards Understanding Sensory Substitution for Accessible Visualization: An Interview Study}, author = {Pramod Chundury and Biswaksen Patnaik and Yasmin Reyazuddin and Christine W. Tang and Jonathan Lazar and Niklas Elmqvist}, url = {https://users.umiacs.umd.edu/~elm/projects/access-vis/access-vis.pdf, PDF}, year = {2022}, date = {2022-01-01}, journal = {IEEE Transactions on Visualization & Computer Graphics}, volume = {28}, number = {1}, pages = {1084--1094}, abstract = {For all its potential in supporting data analysis, particularly in exploratory situations, visualization also creates barriers: accessibility for blind and visually impaired individuals. Regardless of how effective a visualization is, providing equal access for blind users requires a paradigm shift for the visualization research community. To enact such a shift, it is not sufficient to treat visualization accessibility as merely another technical problem to overcome. Instead, supporting the millions of blind and visually impaired users around the world who have equally valid needs for data analysis as sighted individuals requires a respectful, equitable, and holistic approach that includes all users from the onset. In this paper, we draw on accessibility research methodologies to make inroads towards such an approach. We first identify the people who have specific insight into how blind people perceive the world: orientation and mobility (O&M) experts, who are instructors that teach blind individuals how to navigate the physical world using non-visual senses. 
We interview 10 O&M experts---all of them blind---to understand how best to use sensory substitution other than the visual sense for conveying spatial layouts. Finally, we investigate our qualitative findings using thematic analysis. While blind people in general tend to use both sound and touch to understand their surroundings, we focused on auditory affordances and how they can be used to make data visualizations accessible---using sonification and auralization. However, our experts recommended supporting a combination of senses---sound and touch---to make charts accessible as blind individuals may be more familiar with exploring tactile charts. We report results on both sound and touch affordances, and conclude by discussing implications for accessible visualization for blind individuals.}, keywords = {} } |
2021 | |
133. | Deepthi Raghunandan, Zhe Cui, Kartik Krishnan, Segen Tirfe, Shenzhi Shi, Tejaswi Darshan Shrestha, Leilani Battle, Niklas Elmqvist (2021): Lodestar: Supporting Independent Learning and Rapid Experimentation Through Data-Driven Analysis Recommendations. Proceedings of the Symposium on Visualization in Data Science, 2021. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Raghunandan2021, title = {Lodestar: Supporting Independent Learning and Rapid Experimentation Through Data-Driven Analysis Recommendations}, author = {Deepthi Raghunandan and Zhe Cui and Kartik Krishnan and Segen Tirfe and Shenzhi Shi and Tejaswi Darshan Shrestha and Leilani Battle and Niklas Elmqvist}, url = {https://users.umiacs.umd.edu/~elm/projects/lodestar/lodestar.pdf, PDF}, year = {2021}, date = {2021-10-01}, booktitle = {Proceedings of the Symposium on Visualization in Data Science}, abstract = {Keeping abreast of current trends, technologies, and best practices in visualization and data analysis is becoming increasingly difficult, especially for fledgling data scientists. In this paper, we propose Lodestar, an interactive computational notebook that allows users to quickly explore and construct new data science workflows by selecting from a list of automated analysis recommendations. We derive our recommendations from directed graphs of known analysis states, with two input sources: one manually curated from online data science tutorials, and another extracted through semi-automatic analysis of a corpus of over 6,000 Jupyter notebooks. We evaluate Lodestar in a formative study guiding our next set of improvements to the tool. Our results suggest that users find Lodestar useful for rapidly creating data science workflows.}, keywords = {} } |
132. | Weihang Wang, Sriram Karthik Badam, Niklas Elmqvist (2021): Topology-Aware Space Distortion for Structured Visualization Spaces. Information Visualization, 2021. (Type: Article | Abstract | Links | BibTeX) @article{Wang2021, title = {Topology-Aware Space Distortion for Structured Visualization Spaces}, author = {Weihang Wang and Sriram Karthik Badam and Niklas Elmqvist}, url = {https://users.umiacs.umd.edu/~elm/projects/zoomhalo/zoomhalo.pdf, PDF}, year = {2021}, date = {2021-09-29}, journal = {Information Visualization}, abstract = {We propose topology-aware space distortion (TASD), a family of interactive layout techniques for non-linearly distorting geometric space based on user attention and on the structure of the visual representation. TASD seamlessly adapts the visual substrate of any visualization to give more screen real estate to important regions of the representation at the expense of less important regions. In this paper, we present a concrete TASD technique that we call ZoomHalo for interactively distorting a two-dimensional space based on a degree-of-interest (DOI) function defined for the space. Using this DOI function, ZoomHalo derives several areas of interest, computes the available space around each area in relation to other areas and the current viewport extents, and then dynamically expands (or shrinks) each area given user input. We use our prototype to evaluate the technique in two user studies, as well as showcase examples of TASD for node-link diagrams, word clouds, and geographical maps.}, keywords = {} } |
131. | Sriram Karthik Badam, Niklas Elmqvist (2021): Effects of Screen-Responsive Visualization on Data Comprehension. Information Visualization, 20 (4), pp. 229–244, 2021. (Type: Article | Abstract | Links | BibTeX) @article{Badam2021, title = {Effects of Screen-Responsive Visualization on Data Comprehension}, author = {Sriram Karthik Badam and Niklas Elmqvist}, url = {https://users.umiacs.umd.edu/~elm/projects/touchinsight/touchinsight.pdf, PDF}, year = {2021}, date = {2021-09-01}, journal = {Information Visualization}, volume = {20}, number = {4}, pages = {229--244}, abstract = {Visualization interfaces designed for heterogeneous devices such as wall displays and mobile screens must be responsive to varying display dimensions, resolution, and interaction capabilities. In this paper, we report on two user studies of visual representations for large versus small displays. The goal of our experiments was to investigate differences between a large vertical display and a mobile hand-held display in terms of the data comprehension and the quality of resulting insights. To this end, we developed a visual interface with a coordinated multiple view layout for the large display and two alternative designs of the same interface---a space-saving boundary visualization layout and an overview layout---for the mobile condition. The first experiment was a controlled laboratory study designed to evaluate the effect of display size on the perception of changes in a visual representation, and yielded significant correctness differences even while completion time remained similar. The second evaluation was a qualitative study in a practical setting and showed that participants were able to easily associate and work with the responsive visualizations. 
Based on the results, we conclude the paper by providing new guidelines for screen-responsive visualization interfaces.}, keywords = {} } |
130. | Deokgun Park, Mohamed Suhail, Minsheng Zheng, Cody Dunn, Eric Ragan, Niklas Elmqvist (2021): StoryFacets: A Design Study on Storytelling with Visualizations for Collaborative Data Analysis. Information Visualization, 2021. (Type: Article | Abstract | Links | BibTeX) @article{Park2021, title = {StoryFacets: A Design Study on Storytelling with Visualizations for Collaborative Data Analysis}, author = {Deokgun Park and Mohamed Suhail and Minsheng Zheng and Cody Dunn and Eric Ragan and Niklas Elmqvist}, url = {https://users.umiacs.umd.edu/~elm/projects/storyfacets/storyfacets.pdf, PDF}, year = {2021}, date = {2021-08-01}, journal = {Information Visualization}, abstract = {Tracking the sensemaking process is a well-established practice in many data analysis tools, and many visualization tools facilitate overview and recall during and after exploration. However, the resulting communication materials such as presentations or infographics often omit provenance information for the sake of simplicity. This unfortunately limits later viewers from engaging in further collaborative sensemaking or discussion about the analysis. We present a design study where we introduced visual provenance and analytics to urban transportation planning. Maintaining the provenance of all analyses was critical to support collaborative sensemaking among the many and diverse stakeholders. Our system, StoryFacets, exposes several different views of the same analysis session, each view designed for a specific audience: (1) the trail view provides a data flow canvas that supports in-depth exploration+provenance (expert analysts); (2) the dashboard view organizes visualizations and other content into a space-filling layout to support high-level analysis (managers); and (3) the slideshow view supports linear storytelling via interactive step-by-step presentations (laypersons). Views are linked so that when one is changed, provenance is maintained. 
Visual provenance is available on demand to support iterative sensemaking for any team member.}, keywords = {} } |
129. | Arjun Choudhry, Mandar Sharma, Pramod Chundury, Thomas Kapler, Derek Gray, Naren Ramakrishnan, Niklas Elmqvist (2021): Once Upon A Time In Visualization: Understanding the Use of Textual Narratives for Causality. IEEE Transactions on Visualization & Computer Graphics, 28 (1), 2021. (Type: Article | Abstract | Links | BibTeX) @article{Choudhry2021, title = {Once Upon A Time In Visualization: Understanding the Use of Textual Narratives for Causality}, author = {Arjun Choudhry and Mandar Sharma and Pramod Chundury and Thomas Kapler and Derek Gray and Naren Ramakrishnan and Niklas Elmqvist}, url = {http://users.umiacs.umd.edu/~elm/projects/causality/onceuponatime.pdf, PDF}, year = {2021}, date = {2021-01-01}, journal = {IEEE Transactions on Visualization & Computer Graphics}, volume = {28}, number = {1}, abstract = {Causality visualization can help people understand temporal chains of events, such as messages sent in a distributed system, cause and effect in a historical conflict, or the interplay between political actors over time. However, as the scale and complexity of these event sequences grows, even these visualizations can become overwhelming to use. In this paper, we propose the use of textual narratives as a data-driven storytelling method to augment causality visualization. We first propose a design space for how textual narratives can be used to describe causal data. We then present results from a crowdsourced user study where participants were asked to recover causality information from two causality visualizations--causal graphs and Hasse diagrams--with and without an associated textual narrative. Finally, we describe CAUSEWORKS, a causality visualization system for understanding how specific interventions influence a causal model. The system incorporates an automatic textual narrative mechanism based on our design space. 
We validate CAUSEWORKS through interviews with experts who used the system for understanding complex events.}, keywords = {} } |
128. | Brian Ondov, Fumeng Yang, Matthew Kay, Niklas Elmqvist, Steven Franconeri (2021): Revealing Perceptual Proxies with Adversarial Examples. IEEE Transactions on Visualization & Computer Graphics, 28 (1), 2021.
@article{Ondov2021,
  title = {Revealing Perceptual Proxies with Adversarial Examples},
  author = {Brian Ondov and Fumeng Yang and Matthew Kay and Niklas Elmqvist and Steven Franconeri},
  url = {http://users.umiacs.umd.edu/~elm/projects/perceptual-proxies/revealing-proxies.pdf, PDF https://osf.io/2re7b/, OSF (materials)},
  year = {2021},
  date = {2021-01-01},
  journal = {IEEE Transactions on Visualization \& Computer Graphics},
  volume = {28},
  number = {1},
  abstract = {Data visualizations convert numbers into visual marks so that our visual system can extract data from an image instead of raw numbers. Clearly, the visual system does not compute these values as a computer would, as an arithmetic mean or a correlation. Instead, it extracts these patterns using perceptual proxies: heuristic shortcuts of the visual marks, such as a center of mass or a shape envelope. Understanding which proxies people use would lead to more effective visualizations. We present the results of a series of crowdsourced experiments that measure how powerfully a set of candidate proxies can explain human performance when comparing the mean and range of pairs of data series presented as bar charts. We generated datasets where the correct answer---the series with the larger arithmetic mean or range---was pitted against an "adversarial" series that should be seen as larger if the viewer uses a particular candidate proxy. We used both Bayesian logistic regression models and a robust Bayesian mixed-effects linear model to measure how strongly each adversarial proxy could drive viewers to answer incorrectly and whether different individuals may use different proxies. Finally, we attempt to construct adversarial datasets from scratch, using an iterative crowdsourcing procedure to perform black-box optimization.}
}
2020
127. | Andrea Batch, Biswaksen Patnaik, Moses Akazue, Niklas Elmqvist (2020): Scents and Sensibility: Evaluating Information Olfactation. Proceedings of the ACM Conference on Human Factors in Computing Systems, ACM, New York, NY, USA, 2020.
@inproceedings{Batch2019,
  title = {Scents and Sensibility: Evaluating Information Olfactation},
  author = {Andrea Batch and Biswaksen Patnaik and Moses Akazue and Niklas Elmqvist},
  url = {https://users.umiacs.umd.edu/~elm/projects/info-olfac/scents-sense.pdf, PDF},
  year = {2020},
  date = {2020-10-01},
  booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems},
  publisher = {ACM},
  address = {New York, NY, USA},
  abstract = {Olfaction---the sense of smell---is one of the least explored of the human senses for conveying abstract information. In this paper, we conduct a comprehensive perceptual experiment on information olfactation: the use of olfactory and cross-modal sensory marks and channels to convey data. More specifically, following the example from graphical perception studies, we design an experiment that studies the perceptual accuracy of four cross-modal sensory channels---scent type, scent intensity, airflow, and temperature---for conveying three different types of data---nominal, ordinal, and quantitative. We also present details of a 24-scent multi-sensory display and its software framework that we designed in order to run this experiment. Our results yield a ranking of olfactory and cross-modal sensory channels that follows similar principles as classic rankings for visual channels.}
}
126. | Ninger Zhou, Lorraine Kisselburgh, Senthil Chandrasegaran, Karthik Badam, Niklas Elmqvist, Karthik Ramani (2020): Using Social Interaction Trace Data and Context to Predict Collaboration Quality and Creative Fluency in Collaborative Design Learning Environments. International Journal of Human-Computer Studies, 136 (102378), 2020.
@article{Zhou2020,
  title = {Using Social Interaction Trace Data and Context to Predict Collaboration Quality and Creative Fluency in Collaborative Design Learning Environments},
  author = {Ninger Zhou and Lorraine Kisselburgh and Senthil Chandrasegaran and Karthik Badam and Niklas Elmqvist and Karthik Ramani},
  url = {https://www.sciencedirect.com/science/article/abs/pii/S1071581919301442, Website},
  year = {2020},
  date = {2020-04-01},
  journal = {International Journal of Human-Computer Studies},
  volume = {136},
  number = {102378},
  abstract = {Engineering design typically occurs as a collaborative process situated in specific contexts such as computer-supported environments; however, there is limited research examining the dynamics of design collaboration in specific contexts. In this study, drawing from situative learning theory, we developed two analytic lenses to broaden theoretical insights into collaborative design practices in computer-supported environments: (a) the role of spatial and material context, and (b) the role of social interactions. We randomly assigned participants to four conditions varying the material context (paper vs. tablet sketching tools) and spatial environment (private room vs. commons area) as they worked collaboratively to generate ideas for a toy design task. We used wearable sociometric badges to automatically and unobtrusively collect social interaction data. Using partial least squares regression, we generated two predictive models for collaboration quality and creative fluency. We found that context matters materially to perceptions of collaboration, where those using collaboration-support tools perceived higher quality collaboration. But context matters spatially to creativity, and those situated in private spaces are more fluent in generating ideas than those in commons areas. We also found that interaction dynamics differ: synchronous interaction is important to quality collaboration, but reciprocal interaction is important to creative fluency. These findings provide important insights into the processual factors in collaborative design in computer-supported environments, and the predictive role of context and conversation dynamics. We discuss the theoretical contributions to computer-supported collaborative design, the methodological contributions of wearable sensor tools, and the practical contributions to structuring computer-supported environments for engineering design practice.}
}
125. | Amira Chalbi, Jacob Ritchie, Deok Gun Park, Jungu Choi, Nicolas Roussel, Niklas Elmqvist, Fanny Chevalier (2020): Common Fate for Animated Transitions in Visualization. IEEE Transactions on Visualization & Computer Graphics, 26 (1), 2020.
@article{Chalbi2020,
  title = {Common Fate for Animated Transitions in Visualization},
  author = {Amira Chalbi and Jacob Ritchie and Deok Gun Park and Jungu Choi and Nicolas Roussel and Niklas Elmqvist and Fanny Chevalier},
  url = {http://users.umiacs.umd.edu/~elm/projects/common-fate/common-fate.pdf, PDF},
  year = {2020},
  date = {2020-01-01},
  journal = {IEEE Transactions on Visualization \& Computer Graphics},
  volume = {26},
  number = {1},
  abstract = {The Law of Common Fate from Gestalt psychology states that visual objects moving with the same velocity along parallel trajectories will be perceived by a human observer as grouped. However, the concept of common fate is much broader than mere velocity; in this paper we explore how common fate results from coordinated changes in luminance and size. We present results from a crowdsourced graphical perception study where we asked workers to make perceptual judgments on a series of trials involving four graphical objects under the influence of conflicting static and dynamic visual factors (position, size and luminance) used in conjunction. Our results yield the following rankings for visual grouping: motion > (dynamic luminance, size, luminance); dynamic size > (dynamic luminance, position); and dynamic luminance > size. We also conducted a follow-up experiment to evaluate the three dynamic visual factors in a more ecologically valid setting, using both a Gapminder-like animated scatterplot and a thematic map of election data. The results indicate that in practice the relative grouping strengths of these factors may depend on various parameters including the visualization characteristics and the underlying data. We discuss design implications for animated transitions in data visualization.}
}
124. | Andrea Batch, Andrew Cunningham, Maxime Cordeil, Niklas Elmqvist, Tim Dwyer, Bruce H. Thomas, Kim Marriott (2020): There Is No Spoon: Evaluating Performance, Space Use, and Presence with Expert Domain Users in Immersive Analytics. IEEE Transactions on Visualization & Computer Graphics, 26 (1), 2020.
@article{Batch2020,
  title = {There Is No Spoon: Evaluating Performance, Space Use, and Presence with Expert Domain Users in Immersive Analytics},
  author = {Andrea Batch and Andrew Cunningham and Maxime Cordeil and Niklas Elmqvist and Tim Dwyer and Bruce H. Thomas and Kim Marriott},
  url = {http://users.umiacs.umd.edu/~elm/projects/nospoon/nospoon.pdf, PDF},
  year = {2020},
  date = {2020-01-01},
  journal = {IEEE Transactions on Visualization \& Computer Graphics},
  volume = {26},
  number = {1},
  abstract = {Immersive analytics turns the very space surrounding the user into a canvas for data analysis, supporting human cognitive abilities in myriad ways. We present the results of a design study, contextual inquiry, and longitudinal evaluation involving professional economists using a Virtual Reality (VR) system for multidimensional visualization to explore actual economic data. Results from our preregistered evaluation highlight the varied use of space depending on context (exploration vs. presentation), the organization of space to support work, and the impact of immersion on navigation and orientation in the 3D analysis space.}
}
123. | Nicole Jardine, Brian Ondov, Niklas Elmqvist, Steven Franconeri (2020): The Perceptual Proxies of Visual Comparison. IEEE Transactions on Visualization & Computer Graphics, 26 (1), 2020.
@article{Jardine2020,
  title = {The Perceptual Proxies of Visual Comparison},
  author = {Nicole Jardine and Brian Ondov and Niklas Elmqvist and Steven Franconeri},
  url = {http://users.umiacs.umd.edu/~elm/projects/perceptual-proxies/perceptual-proxies.pdf, PDF},
  year = {2020},
  date = {2020-01-01},
  journal = {IEEE Transactions on Visualization \& Computer Graphics},
  volume = {26},
  number = {1},
  abstract = {Perceptual tasks in visualizations often involve comparisons. Of two sets of values depicted in two charts, which set had values that were the highest overall? Which had the widest range? Prior empirical work found that the performance on different visual comparison tasks (e.g., "biggest delta", "biggest correlation") varied widely across different combinations of marks and spatial arrangements. In this paper, we expand upon these combinations in an empirical evaluation of two new comparison tasks: the "biggest mean" and "biggest range" between two sets of values. We used a staircase procedure to titrate the difficulty of the data comparison to assess which arrangements produced the most precise comparisons for each task. We find visual comparisons of biggest mean and biggest range are supported by some chart arrangements more than others, and that this pattern is substantially different from the pattern for other tasks. To synthesize these dissonant findings, we argue that we must understand which features of a visualization are actually used by the human visual system to solve a given task. We call these perceptual proxies. For example, when comparing the means of two bar charts, the visual system might use a "Mean length" proxy that isolates the actual lengths of the bars and then constructs a true average across these lengths. Alternatively, it might use a "Hull Area" proxy that perceives an implied hull bounded by the bars of each chart and then compares the areas of these hulls. We propose a series of potential proxies across different tasks, marks, and spatial arrangements. Simple models of these proxies can be empirically evaluated for their explanatory power by matching their performance to human performance across these marks, arrangements, and tasks. We use this process to highlight candidates for perceptual proxies that might scale more broadly to explain performance in visual comparison.}
}
2019
122. | Zhe Cui, Jayaram Kancherla, Kyle W. Chang, Niklas Elmqvist, Héctor Corrada Bravo (2019): Proactive Visual and Statistical Analysis of Genomic Data in Epiviz. Bioinformatics, 36 (7), pp. 2195–2201, 2019.
@article{Cui2020,
  title = {Proactive Visual and Statistical Analysis of Genomic Data in Epiviz},
  author = {Zhe Cui and Jayaram Kancherla and Kyle W. Chang and Niklas Elmqvist and Héctor Corrada Bravo},
  url = {https://academic.oup.com/bioinformatics/article/36/7/2195/5646643, Fulltext (HTML)},
  year = {2019},
  date = {2019-11-29},
  journal = {Bioinformatics},
  volume = {36},
  number = {7},
  pages = {2195--2201},
  abstract = {In this article, we present Epiviz Feed, a proactive and automatic visual analytics system integrated with Epiviz that alleviates the burden of manually executing the data analysis required to test biologically meaningful hypotheses. Results of interest that are proactively identified by server-side computations are listed as notifications in a feed. The feed turns genomic data analysis into collaborative work between the analyst and the computational environment, which shortens the analysis time and allows the analyst to explore results efficiently. We discuss three ways in which the proposed system advances the field of genomic data analysis: (i) it takes the first step of proactive data analysis by utilizing available CPU power from the server to automate the analysis process; (ii) it summarizes hypothesis test results in a way that analysts can easily understand and investigate; (iii) it enables filtering and grouping of analysis results for quick search. This effort provides initial work on systems that substantially expand how computational and visualization frameworks can be tightly integrated to facilitate interactive genomic data analysis.}
}
121. | Zhe Cui, Jayaram Kancherla, Hector Corrada Bravo, Niklas Elmqvist (2019): Sherpa: Leveraging User Attention for Computational Steering in Visual Analytics. Proceedings of the IEEE Symposium on Visualization in Data Science, IEEE, 2019.
@inproceedings{Cui2019,
  title = {Sherpa: Leveraging User Attention for Computational Steering in Visual Analytics},
  author = {Zhe Cui and Jayaram Kancherla and Hector Corrada Bravo and Niklas Elmqvist},
  url = {http://users.umiacs.umd.edu/~elm/projects/sherpa/sherpa.pdf, PDF},
  year = {2019},
  date = {2019-10-20},
  booktitle = {Proceedings of the IEEE Symposium on Visualization in Data Science},
  publisher = {IEEE},
  abstract = {We present Sherpa, a computational steering mechanism for progressive visual analytics that automatically prioritizes computations based on the analyst's navigational behavior in the data. The intuition is that navigation in data space is an indication of the analyst's interest in the data. The Sherpa implementation provides computational modules, such as statistics for biological inferences about gene regulation, and uses the position of the navigation window on the genomic sequence over time to prioritize computations. In a study with genomic and visualization analysts, we found that Sherpa provided comparable accuracy to the offline condition, where computations were completed prior to analysis, with shorter completion times. We also provide a second example on stock market analysis.}
}
120. | Andreas Mathisen, Tom Horak, Clemens Nylandsted Klokmose, Kaj Grønbæk, Niklas Elmqvist (2019): InsideInsights: Integrating Data-Driven Reporting in Collaborative Visual Analytics. Computer Graphics Forum, 38 (3), pp. 649–661, 2019.
@article{Mathisen2019,
  title = {InsideInsights: Integrating Data-Driven Reporting in Collaborative Visual Analytics},
  author = {Andreas Mathisen and Tom Horak and Clemens Nylandsted Klokmose and Kaj Grønbæk and Niklas Elmqvist},
  url = {http://users.umiacs.umd.edu/~elm/projects/insideinsights/insideinsights.pdf, PDF},
  year = {2019},
  date = {2019-06-01},
  journal = {Computer Graphics Forum},
  volume = {38},
  number = {3},
  pages = {649--661},
  abstract = {Analyzing complex data is a non-linear process that alternates between identifying discrete facts and developing overall assessments and conclusions. In addition, data analysis rarely occurs in solitude; multiple collaborators can be engaged in the same analysis, or intermediate results can be reported to stakeholders. However, current data-driven communication tools are detached from the analysis process and promote linear stories that forego the hierarchical and branching nature of data analysis, which leads to either too much or too little detail in the final report. We propose a conceptual design for integrated data-driven reporting that allows for iterative structuring of insights into hierarchies linked to analytic provenance and chosen analysis views. The hierarchies become dynamic and interactive reports where collaborators can review and modify the analysis at a desired level of detail. Our web-based InsideInsights system provides interaction techniques to annotate states of analytic components, structure annotations, and link them to appropriate presentation views. We demonstrate the generality and usefulness of our system with two use cases and a qualitative expert review.}
}
119. | Jinho Choi, Sanghun Jung, Deok Gun Park, Jaegul Choo, Niklas Elmqvist (2019): Visualizing for the Non-Visual: Enabling the Visually Impaired to Use Visualization. Computer Graphics Forum, 38 (3), pp. 249–260, 2019.
@article{Choi2019,
  title = {Visualizing for the Non-Visual: Enabling the Visually Impaired to Use Visualization},
  author = {Jinho Choi and Sanghun Jung and Deok Gun Park and Jaegul Choo and Niklas Elmqvist},
  url = {http://users.umiacs.umd.edu/~elm/projects/vis4nonvisual/vis4nonvisual.pdf, PDF},
  year = {2019},
  date = {2019-06-01},
  journal = {Computer Graphics Forum},
  volume = {38},
  number = {3},
  pages = {249--260},
  abstract = {The majority of visualizations on the web are still stored as raster images, making them inaccessible to visually impaired users. We propose a deep-neural-network-based approach that automatically recognizes key elements in a visualization, including the visualization type, graphical elements, labels, legends, and, most importantly, the original data conveyed in the visualization. We leverage this extracted information to provide visually impaired people with a reading of the visualization's contents. Based on interviews with visually impaired users, we built a Google Chrome extension designed to work with screen reader software to automatically decode charts on a webpage using our pipeline. We compared the performance of the back-end algorithm with existing methods and evaluated the utility using qualitative feedback from visually impaired users.}
}
118. | Calvin Yau, Morteza Karimzadeh, Chittayong Surakitbanharn, Niklas Elmqvist, David S. Ebert (2019): Bridging the Data Analysis Communication Gap Utilizing a Three-Component Summarized Line Graph. Computer Graphics Forum, 38 (3), pp. 375–386, 2019. (Type: Article | Abstract | Links | BibTeX) @article{Yau2019, title = {Bridging the Data Analysis Communication Gap Utilizing a Three-Component Summarized Line Graph}, author = {Calvin Yau and Morteza Karimzadeh and Chittayong Surakitbanharn and Niklas Elmqvist and David S. Ebert}, url = {http://users.umiacs.umd.edu/~elm/projects/sumlinegraph/sumlinegraph.pdf, PDF}, year = {2019}, date = {2019-06-01}, journal = {Computer Graphics Forum}, volume = {38}, number = {3}, pages = {375--386}, abstract = {Communication‐minded visualizations are designed to provide their audience—managers, decision‐makers, and the public—with new knowledge. Authoring such visualizations effectively is challenging because the audience often lacks the expertise, context, and time that professional analysts have at their disposal to explore and understand datasets. We present a novel summarized line graph visualization technique designed specifically for data analysts to communicate data to decision‐makers more effectively and efficiently. Our summarized line graph reduces a large and detailed dataset of multiple quantitative time‐series into (1) representative data that provides a quick takeaway of the full dataset; (2) analytical highlights that distinguish specific insights of interest; and (3) a data envelope that summarizes the remaining aggregated data. Our summarized line graph achieved the best overall results when evaluated against line graphs, band graphs, stream graphs, and horizon graphs on four representative tasks.}, keywords = {} } Communication‐minded visualizations are designed to provide their audience—managers, decision‐makers, and the public—with new knowledge. 
Authoring such visualizations effectively is challenging because the audience often lacks the expertise, context, and time that professional analysts have at their disposal to explore and understand datasets. We present a novel summarized line graph visualization technique designed specifically for data analysts to communicate data to decision‐makers more effectively and efficiently. Our summarized line graph reduces a large and detailed dataset of multiple quantitative time‐series into (1) representative data that provides a quick takeaway of the full dataset; (2) analytical highlights that distinguish specific insights of interest; and (3) a data envelope that summarizes the remaining aggregated data. Our summarized line graph achieved the best overall results when evaluated against line graphs, band graphs, stream graphs, and horizon graphs on four representative tasks. |
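The three-component reduction described above (representative data, analytical highlights, data envelope) can be pictured concretely. The following is a hypothetical Python sketch; the specific component choices (medoid as representative, global-maximum series as highlight) are our assumptions for illustration, not the paper's algorithm:

```python
import numpy as np

def summarize_series(series):
    """Reduce a set of time series (rows of `series`) into three parts:
    a representative series (here: the medoid), a highlight series
    (here: the one reaching the global maximum), and a min/max envelope
    over the remaining series."""
    series = np.asarray(series, dtype=float)
    # Medoid: the series with the smallest total distance to all others.
    dists = np.linalg.norm(series[:, None, :] - series[None, :, :], axis=2)
    rep_idx = int(np.argmin(dists.sum(axis=1)))
    # Highlight: a stand-in for an analyst-chosen insight of interest.
    hi_idx = int(np.argmax(series.max(axis=1)))
    # Envelope: per-timestep min and max of everything else.
    rest = np.delete(series, [rep_idx, hi_idx], axis=0)
    envelope = (rest.min(axis=0), rest.max(axis=0)) if len(rest) else None
    return series[rep_idx], series[hi_idx], envelope
```

The envelope discards individual trajectories but keeps the aggregate spread, which is what gives the summarized line graph its compactness.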
117. | Subramanian Chidambaram, Yunbo Zhang, Venkatraghavan Sundararajan, Ana M. Villanueva, Niklas Elmqvist, Karthik Ramani (2019): Shape Structuralizer: Design, Fabrication and Exploring Structurally-Sound Scaffolded Constructions using 3D Mesh Models. Proceedings of the ACM Conference on Human Factors in Computing Systems, pp. 663:1–663:12, ACM, New York, NY, USA, 2019. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Chidambaram2019, title = {Shape Structuralizer: Design, Fabrication and Exploring Structurally-Sound Scaffolded Constructions using 3D Mesh Models}, author = {Subramanian Chidambaram and Yunbo Zhang and Venkatraghavan Sundararajan and Ana M. Villanueva and Niklas Elmqvist and Karthik Ramani}, url = {https://engineering.purdue.edu/cdesign/wp/wp-content/uploads/2019/02/Shape-Structuralizer-Design-Fabrication-and-User-driven-Iterative-Refinement-of-3D-Mesh-Models.pdf, PDF}, year = {2019}, date = {2019-05-01}, booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems}, pages = {663:1--663:12}, publisher = {ACM}, address = {New York, NY, USA}, abstract = {Current Computer-Aided Design (CAD) tools lack proper support for guiding novice users towards designs ready for fabrication. We propose Shape Structuralizer (SS), an interactive design support system that repurposes surface models into structural constructions using rods and custom 3D-printed joints. Shape Structuralizer embeds a recommendation system that computationally supports the user during design ideation by providing design suggestions on local refinements of the design. This strategy enables novice users to choose designs that both satisfy stress constraints as well as their personal design intent. The interactive guidance enables users to repurpose existing surface mesh models, analyze them in-situ for stress and displacement constraints, add movable joints to increase functionality, and attach a customized appearance.
This also empowers novices to fabricate even complex constructs while ensuring structural soundness. We validate the Shape Structuralizer tool with a qualitative user study where we observed that even novice users were able to generate a large number of structurally safe designs for fabrication.}, keywords = {} } Current Computer-Aided Design (CAD) tools lack proper support for guiding novice users towards designs ready for fabrication. We propose Shape Structuralizer (SS), an interactive design support system that repurposes surface models into structural constructions using rods and custom 3D-printed joints. Shape Structuralizer embeds a recommendation system that computationally supports the user during design ideation by providing design suggestions on local refinements of the design. This strategy enables novice users to choose designs that both satisfy stress constraints as well as their personal design intent. The interactive guidance enables users to repurpose existing surface mesh models, analyze them in-situ for stress and displacement constraints, add movable joints to increase functionality, and attach a customized appearance. This also empowers novices to fabricate even complex constructs while ensuring structural soundness. We validate the Shape Structuralizer tool with a qualitative user study where we observed that even novice users were able to generate a large number of structurally safe designs for fabrication. |
116. | Pranathi Mylavarapu, Adil Yalcin, Xan Gregg, Niklas Elmqvist (2019): Ranked-List Visualization: A Graphical Perception Study. Proceedings of the ACM Conference on Human Factors in Computing Systems, pp. 192:1–192:12, ACM, New York, NY, USA, 2019. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Mylvarapu2019, title = {Ranked-List Visualization: A Graphical Perception Study}, author = {Pranathi Mylavarapu and Adil Yalcin and Xan Gregg and Niklas Elmqvist}, url = {http://users.umiacs.umd.edu/~elm/projects/ranked-list/ranked-list.pdf, PDF}, year = {2019}, date = {2019-05-01}, booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems}, pages = {192:1--192:12}, publisher = {ACM}, address = {New York, NY, USA}, abstract = {Visualization of ranked lists is a common occurrence, but many in-the-wild solutions fly in the face of vision science and visualization wisdom. For example, treemaps and bubble charts are commonly used for this purpose, despite the fact that the data is not hierarchical and that length is easier to perceive than area. Furthermore, several new visual representations have recently been suggested in this area, including wrapped bars, packed bars, piled bars, and Zvinca plots. To quantify the differences and trade-offs for these ranked-list visualizations, we here report on a crowdsourced graphical perception study involving six such visual representations, including the ubiquitous scrolled barchart, in three tasks: ranking (assessing a single item), comparison (two items), and average (assessing global distribution). Results show that wrapped bars may be the best choice for visualizing ranked lists, and that treemaps are surprisingly accurate despite the use of area rather than length to represent value.}, keywords = {} } Visualization of ranked lists is a common occurrence, but many in-the-wild solutions fly in the face of vision science and visualization wisdom. 
For example, treemaps and bubble charts are commonly used for this purpose, despite the fact that the data is not hierarchical and that length is easier to perceive than area. Furthermore, several new visual representations have recently been suggested in this area, including wrapped bars, packed bars, piled bars, and Zvinca plots. To quantify the differences and trade-offs for these ranked-list visualizations, we here report on a crowdsourced graphical perception study involving six such visual representations, including the ubiquitous scrolled barchart, in three tasks: ranking (assessing a single item), comparison (two items), and average (assessing global distribution). Results show that wrapped bars may be the best choice for visualizing ranked lists, and that treemaps are surprisingly accurate despite the use of area rather than length to represent value. |
115. | Tom Horak, Andreas Mathisen, Clemens Nylandsted Klokmose, Raimund Dachselt, Niklas Elmqvist (2019): Vistribute: Distributing Interactive Visualizations in Dynamic Multi-Device Setups. Proceedings of the ACM Conference on Human Factors in Computing Systems, pp. 616:1–616:13, ACM, New York, NY, USA, 2019. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Horak2019, title = {Vistribute: Distributing Interactive Visualizations in Dynamic Multi-Device Setups}, author = {Tom Horak and Andreas Mathisen and Clemens Nylandsted Klokmose and Raimund Dachselt and Niklas Elmqvist}, url = {http://users.umiacs.umd.edu/~elm/projects/vistribute/vistribute.pdf, PDF}, year = {2019}, date = {2019-05-01}, booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems}, pages = {616:1--616:13}, publisher = {ACM}, address = {New York, NY, USA}, abstract = {We present Vistribute, a framework for the automatic distribution of visualizations and UI components across multiple heterogeneous devices. Our framework consists of three parts: (i) a design space considering properties and relationships of interactive visualizations, devices, and user preferences in multi-display environments; (ii) specific heuristics incorporating these dimensions for guiding the distribution for a given interface and device ensemble; and (iii) a web-based implementation instantiating these heuristics to automatically generate a distribution as well as providing interaction mechanisms for user-defined adaptations. In contrast to existing UI distribution systems, we are able to infer all required information by analyzing the visualizations and devices without relying on additional input provided by users or programmers. In a qualitative study, we let experts create their own distributions and rate both other manual distributions and our automatic ones. 
We found that all distributions provided comparable quality, hence validating our framework.}, keywords = {} } We present Vistribute, a framework for the automatic distribution of visualizations and UI components across multiple heterogeneous devices. Our framework consists of three parts: (i) a design space considering properties and relationships of interactive visualizations, devices, and user preferences in multi-display environments; (ii) specific heuristics incorporating these dimensions for guiding the distribution for a given interface and device ensemble; and (iii) a web-based implementation instantiating these heuristics to automatically generate a distribution as well as providing interaction mechanisms for user-defined adaptations. In contrast to existing UI distribution systems, we are able to infer all required information by analyzing the visualizations and devices without relying on additional input provided by users or programmers. In a qualitative study, we let experts create their own distributions and rate both other manual distributions and our automatic ones. We found that all distributions provided comparable quality, hence validating our framework. |
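To make the idea of a distribution heuristic tangible, here is a toy greedy assignment of views to devices. All attribute names, scores, and weights are invented for this sketch and do not come from the Vistribute paper:

```python
def distribute(views, devices):
    """Toy greedy distribution: assign each view to the device whose
    properties best fit it. The attributes (min_size, needs_touch) and
    the scoring function are illustrative assumptions only."""
    def score(view, device):
        # Penalize size mismatch; veto devices lacking required touch input.
        size_fit = 1.0 - abs(view["min_size"] - device["size"]) / max(view["min_size"], device["size"])
        return size_fit if (device["touch"] or not view["needs_touch"]) else 0.0
    assignment = {}
    for view in sorted(views, key=lambda v: -v["min_size"]):  # largest needs first
        best = max(devices, key=lambda d: score(view, d))
        assignment[view["name"]] = best["name"]
    return assignment
```

A real system would, as the abstract notes, infer such properties automatically from the visualizations and devices rather than take them as hand-written dictionaries.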
114. | Zhenpeng Zhao, Rachael Marr, Jason Shaffer, Niklas Elmqvist (2019): Understanding Partitioning and Sequence in Data-Driven Storytelling: The Case for Comic Strip Narration. Proceedings of the iConference, pp. 327–338, Springer, 2019. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Zhao2019, title = {Understanding Partitioning and Sequence in Data-Driven Storytelling: The Case for Comic Strip Narration}, author = {Zhenpeng Zhao and Rachael Marr and Jason Shaffer and Niklas Elmqvist}, url = {http://users.umiacs.umd.edu/~elm/projects/datacomics/datacomics.pdf, PDF}, year = {2019}, date = {2019-04-01}, booktitle = {Proceedings of the iConference}, volume = {11420}, pages = {327--338}, publisher = {Springer}, abstract = {The comic strip narrative style is an effective method for data-driven storytelling. However, surely it is not enough to just add some speech bubbles and clipart to your PowerPoint slideshow to turn it into a data comic? In this paper, we investigate aspects of partitioning and sequence as fundamental mechanisms for comic strip narration: chunking complex visuals into manageable pieces, and organizing them into a meaningful order, respectively. We do this by presenting results from a qualitative study designed to elicit differences in participant behavior when solving questions using a complex infographic compared to when the same visuals are organized into a data comic.}, keywords = {} } The comic strip narrative style is an effective method for data-driven storytelling. However, surely it is not enough to just add some speech bubbles and clipart to your PowerPoint slideshow to turn it into a data comic? In this paper, we investigate aspects of partitioning and sequence as fundamental mechanisms for comic strip narration: chunking complex visuals into manageable pieces, and organizing them into a meaningful order, respectively.
We do this by presenting results from a qualitative study designed to elicit differences in participant behavior when solving questions using a complex infographic compared to when the same visuals are organized into a data comic. |
113. | Biswaksen Patnaik, Andrea Batch, Niklas Elmqvist (2019): Information Olfactation: Harnessing Scent to Convey Data. IEEE Transactions on Visualization & Computer Graphics, 2019. (Type: Article | Abstract | Links | BibTeX) @article{Patnaik2019, title = {Information Olfactation: Harnessing Scent to Convey Data}, author = {Biswaksen Patnaik and Andrea Batch and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/info-olfac/info-olfac.pdf, PDF https://doi.org/10.1109/TVCG.2018.2865237, DOI}, year = {2019}, date = {2019-01-01}, journal = { IEEE Transactions on Visualization & Computer Graphics}, abstract = {Olfactory feedback for analytical tasks is a virtually unexplored area in spite of the advantages it offers for information recall, feature identification, and location detection. Here we introduce the concept of information olfactation as the fragrant sibling of information visualization, and discuss how scent can be used to convey data. Building on a review of the human olfactory system and mirroring common visualization practice, we propose olfactory marks, the substrate in which they exist, and their olfactory channels that are available to designers. To exemplify this idea, we present VISCENT: A six-scent stereo olfactory display capable of conveying olfactory glyphs of varying temperature and direction, as well as a corresponding software system that integrates the display with a traditional visualization display. Finally, we present three applications that make use of the viScent system: A 2D graph visualization, a 2D line and point chart, and an immersive analytics graph visualization in 3D virtual reality. 
We close the paper with a review of possible extensions of viScent and applications of information olfactation for general visualization beyond the examples in this paper.}, keywords = {} } Olfactory feedback for analytical tasks is a virtually unexplored area in spite of the advantages it offers for information recall, feature identification, and location detection. Here we introduce the concept of information olfactation as the fragrant sibling of information visualization, and discuss how scent can be used to convey data. Building on a review of the human olfactory system and mirroring common visualization practice, we propose olfactory marks, the substrate in which they exist, and their olfactory channels that are available to designers. To exemplify this idea, we present VISCENT: A six-scent stereo olfactory display capable of conveying olfactory glyphs of varying temperature and direction, as well as a corresponding software system that integrates the display with a traditional visualization display. Finally, we present three applications that make use of the viScent system: A 2D graph visualization, a 2D line and point chart, and an immersive analytics graph visualization in 3D virtual reality. We close the paper with a review of possible extensions of viScent and applications of information olfactation for general visualization beyond the examples in this paper. |
112. | Brian Ondov, Nicole Jardine, Niklas Elmqvist, Steven Franconeri (2019): Face to Face: Evaluating Visual Comparison. IEEE Transactions on Visualization & Computer Graphics, 2019. (Type: Article | Abstract | Links | BibTeX) @article{Ondov2019, title = {Face to Face: Evaluating Visual Comparison}, author = {Brian Ondov and Nicole Jardine and Niklas Elmqvist and Steven Franconeri}, url = {http://www.umiacs.umd.edu/~elm/projects/face2face/face2face.pdf, PDF https://doi.org/10.1109/TVCG.2018.2864884, DOI}, year = {2019}, date = {2019-01-01}, journal = { IEEE Transactions on Visualization & Computer Graphics}, abstract = {Data are often viewed as a single set of values, but those values frequently must be compared with another set. The existing evaluations of designs that facilitate these comparisons tend to be based on intuitive reasoning, rather than quantifiable measures. We build on this work with a series of crowdsourced experiments that use low-level perceptual comparison tasks that arise frequently in comparisons within data visualizations (e.g., which value changes the most between the two sets of data?). Participants completed these tasks across a variety of layouts: overlaid, two arrangements of juxtaposed small multiples, mirror-symmetric small multiples, and animated transitions. A staircase procedure sought the difficulty level (e.g., value change delta) that led to equivalent accuracy for each layout. Confirming prior intuition, we observe high levels of performance for overlaid versus standard small multiples. However, we also find performance improvements for both mirror symmetric small multiples and animated transitions. 
While some results are incongruent with common wisdom in data visualization, they align with previous work in perceptual psychology, and thus have potentially strong implications for visual comparison designs.}, keywords = {} } Data are often viewed as a single set of values, but those values frequently must be compared with another set. The existing evaluations of designs that facilitate these comparisons tend to be based on intuitive reasoning, rather than quantifiable measures. We build on this work with a series of crowdsourced experiments that use low-level perceptual comparison tasks that arise frequently in comparisons within data visualizations (e.g., which value changes the most between the two sets of data?). Participants completed these tasks across a variety of layouts: overlaid, two arrangements of juxtaposed small multiples, mirror-symmetric small multiples, and animated transitions. A staircase procedure sought the difficulty level (e.g., value change delta) that led to equivalent accuracy for each layout. Confirming prior intuition, we observe high levels of performance for overlaid versus standard small multiples. However, we also find performance improvements for both mirror symmetric small multiples and animated transitions. While some results are incongruent with common wisdom in data visualization, they align with previous work in perceptual psychology, and thus have potentially strong implications for visual comparison designs. |
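The staircase procedure mentioned in the abstract is a standard adaptive method from psychophysics. A generic 1-up/2-down variant (which converges near 70.7% accuracy; not necessarily the exact variant the authors used) can be sketched as:

```python
def staircase(respond, start, step, n_reversals=6):
    """Generic 1-up/2-down staircase: make the task harder after two
    consecutive correct answers, easier after each error, and stop after
    a fixed number of direction reversals. `respond(level)` returns True
    when the (simulated or human) participant answers correctly."""
    level, streak, direction, reversals = start, 0, 0, 0
    history = []
    while reversals < n_reversals:
        history.append(level)
        if respond(level):
            streak += 1
            if streak == 2:                 # two in a row correct: harder
                streak = 0
                if direction == +1:
                    reversals += 1
                direction = -1
                level -= step
        else:                               # any error: easier
            streak = 0
            if direction == -1:
                reversals += 1
            direction = +1
            level += step
    return sum(history[-4:]) / 4            # crude threshold estimate
```

Running this against a simulated observer with a hard threshold makes the level oscillate around that threshold, which is the equivalent-accuracy difficulty the experiment seeks.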
111. | Sriram Karthik Badam, Zhicheng Liu, Niklas Elmqvist (2019): Elastic Documents: Coupling Text and Tables through Contextual Visualizations for Enhanced Document Reading. IEEE Transactions on Visualization & Computer Graphics, 2019. (Type: Article | Abstract | Links | BibTeX) @article{Badam2019b, title = {Elastic Documents: Coupling Text and Tables through Contextual Visualizations for Enhanced Document Reading}, author = {Sriram Karthik Badam and Zhicheng Liu and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/elastic-documents/elastic-documents.pdf, PDF https://doi.org/10.1109/TVCG.2018.2865119, DOI}, year = {2019}, date = {2019-01-01}, journal = { IEEE Transactions on Visualization & Computer Graphics}, abstract = {Today\'s data-rich documents are often complex datasets in themselves, consisting of information in different formats such as text, figures, and data tables. These additional media augment the textual narrative in the document. However, the static layout of a traditional for-print document often impedes deep understanding of its content because of the need to navigate to access content scattered throughout the text. In this paper, we seek to facilitate enhanced comprehension of such documents through a contextual visualization technique that couples text content with data tables contained in the document. We parse the text content and data tables, cross-link the components using a keyword-based matching algorithm, and generate on-demand visualizations based on the reader\'s current focus within a document. We evaluate this technique in a user study comparing our approach to a traditional reading experience.
Results from our study show that (1) participants comprehend the content better with tighter coupling of text and data, (2) the contextual visualizations enable participants to develop better summaries that capture the main data-rich insights within the document, and (3) overall, our method enables participants to develop a more detailed understanding of the document content.}, keywords = {} } Today's data-rich documents are often complex datasets in themselves, consisting of information in different formats such as text, figures, and data tables. These additional media augment the textual narrative in the document. However, the static layout of a traditional for-print document often impedes deep understanding of its content because of the need to navigate to access content scattered throughout the text. In this paper, we seek to facilitate enhanced comprehension of such documents through a contextual visualization technique that couples text content with data tables contained in the document. We parse the text content and data tables, cross-link the components using a keyword-based matching algorithm, and generate on-demand visualizations based on the reader's current focus within a document. We evaluate this technique in a user study comparing our approach to a traditional reading experience. Results from our study show that (1) participants comprehend the content better with tighter coupling of text and data, (2) the contextual visualizations enable participants to develop better summaries that capture the main data-rich insights within the document, and (3) overall, our method enables participants to develop a more detailed understanding of the document content. |
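A minimal sketch of keyword-based cross-linking between running text and a data table, far simpler than the matching algorithm described above; the function and sample data are illustrative only:

```python
import re

def link_text_to_columns(paragraph, table):
    """Return the table columns whose header words appear in the paragraph.
    A toy stand-in for the paper's keyword-based matching algorithm."""
    words = set(re.findall(r"[a-z]+", paragraph.lower()))
    return [col for col in table
            if set(re.findall(r"[a-z]+", col.lower())) & words]

# Hypothetical table: column header -> values.
table = {"Population": [38, 39], "GDP growth": [2.1, 1.8], "Area": [357, 357]}
links = link_text_to_columns("The population grew while GDP stagnated.", table)
```

Once a paragraph is linked to columns this way, a reader-focus event can trigger an on-demand visualization of just those columns.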
110. | Sriram Karthik Badam, Andreas Mathisen, Roman Rädle, Clemens Nylandsted Klokmose, Niklas Elmqvist (2019): Vistrates: A Component Model for Ubiquitous Analytics. IEEE Transactions on Visualization & Computer Graphics, 2019. (Type: Article | Abstract | Links | BibTeX) @article{Badam2019a, title = {Vistrates: A Component Model for Ubiquitous Analytics}, author = {Sriram Karthik Badam and Andreas Mathisen and Roman Rädle and Clemens Nylandsted Klokmose and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/vistrates/vistrates.pdf, PDF https://doi.org/10.1109/TVCG.2018.2865144, DOI}, year = {2019}, date = {2019-01-01}, journal = { IEEE Transactions on Visualization & Computer Graphics}, abstract = {Visualization tools are often specialized for specific tasks, which turns the user\'s analytical workflow into a fragmented process performed across many tools. In this paper, we present a component model design for data visualization to promote modular designs of visualization tools that enhance their analytical scope. Rather than fragmenting tasks across tools, the component model supports unification, where components—the building blocks of this model—can be assembled to support a wide range of tasks. Furthermore, the model also provides additional key properties, such as support for collaboration, sharing across multiple devices, and adaptive usage depending on expertise, from creating visualizations using dropdown menus, through instantiating components, to actually modifying components or creating entirely new ones from scratch using JavaScript or Python source code. To realize our model, we introduce Vistrates, a literate computing platform for developing, assembling, and sharing visualization components. From a visualization perspective, Vistrates features cross-cutting components for visual representations, interaction, collaboration, and device responsiveness maintained in a component repository.
From a development perspective, Vistrates offers a collaborative programming environment where novices and experts alike can compose component pipelines for specific analytical activities. Finally, we present several Vistrates use cases that span the full range of the classic \"anytime\" and \"anywhere\" motto for ubiquitous analysis: from mobile and on-the-go usage, through office settings, to collaborative smart environments covering a variety of tasks and devices.}, keywords = {} } Visualization tools are often specialized for specific tasks, which turns the user's analytical workflow into a fragmented process performed across many tools. In this paper, we present a component model design for data visualization to promote modular designs of visualization tools that enhance their analytical scope. Rather than fragmenting tasks across tools, the component model supports unification, where components—the building blocks of this model—can be assembled to support a wide range of tasks. Furthermore, the model also provides additional key properties, such as support for collaboration, sharing across multiple devices, and adaptive usage depending on expertise, from creating visualizations using dropdown menus, through instantiating components, to actually modifying components or creating entirely new ones from scratch using JavaScript or Python source code. To realize our model, we introduce Vistrates, a literate computing platform for developing, assembling, and sharing visualization components. From a visualization perspective, Vistrates features cross-cutting components for visual representations, interaction, collaboration, and device responsiveness maintained in a component repository. From a development perspective, Vistrates offers a collaborative programming environment where novices and experts alike can compose component pipelines for specific analytical activities.
Finally, we present several Vistrates use cases that span the full range of the classic "anytime" and "anywhere" motto for ubiquitous analysis: from mobile and on-the-go usage, through office settings, to collaborative smart environments covering a variety of tasks and devices. |
109. | Zhe Cui, Sriram Karthik Badam, M. Adil Yalcin, Niklas Elmqvist (2019): DataSite: Proactive Visual Data Exploration with Computation of Insight-based Recommendations. Information Visualization, 18 (2), pp. 251–267, 2019. (Type: Article | Abstract | Links | BibTeX) @article{zcui2018, title = {DataSite: Proactive Visual Data Exploration with Computation of Insight-based Recommendations}, author = {Zhe Cui and Sriram Karthik Badam and M. Adil Yalcin and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/datasite/datasite.pdf, PDF https://youtu.be/EsK5uOOPO7o, Youtube}, year = {2019}, date = {2019-01-01}, journal = {Information Visualization}, volume = {18}, number = {2}, pages = {251--267}, abstract = {Effective data analysis ideally requires the analyst to have high expertise as well as high knowledge of the data. Even with such familiarity, manually pursuing all potential hypotheses and exploring all possible views is impractical. We present DataSite, a proactive visual analytics system where the burden of selecting and executing appropriate computations is shared by an automatic server-side computation engine. Salient features identified by these automatic background processes are surfaced as notifications in a feed timeline. DataSite effectively turns data analysis into a conversation between analyst and computer, thereby reducing the cognitive load and domain knowledge requirements. We validate the system with a user study comparing it to a recent visualization recommendation system, yielding significant improvement, particularly for complex analyses that existing analytics systems do not support well.}, keywords = {} } Effective data analysis ideally requires the analyst to have high expertise as well as high knowledge of the data. Even with such familiarity, manually pursuing all potential hypotheses and exploring all possible views is impractical. 
We present DataSite, a proactive visual analytics system where the burden of selecting and executing appropriate computations is shared by an automatic server-side computation engine. Salient features identified by these automatic background processes are surfaced as notifications in a feed timeline. DataSite effectively turns data analysis into a conversation between analyst and computer, thereby reducing the cognitive load and domain knowledge requirements. We validate the system with a user study comparing it to a recent visualization recommendation system, yielding significant improvement, particularly for complex analyses that existing analytics systems do not support well. |
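One way to picture such a server-side computation engine is a background job that scores column pairs and surfaces the strongest findings as feed notifications. This is a hypothetical sketch, not DataSite's actual implementation; the scoring rule and message wording are our own:

```python
import itertools
import statistics

def correlation_feed(columns, top=3):
    """Scan all pairs of numeric columns, score them by absolute Pearson
    correlation, and emit human-readable notifications for the strongest.
    Illustrative of one possible DataSite-style background computation."""
    def pearson(x, y):
        mx, my = statistics.fmean(x), statistics.fmean(y)
        num = sum((a - mx) * (b - my) for a, b in zip(x, y))
        den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
        return num / den if den else 0.0
    scored = sorted(((abs(pearson(x, y)), a, b)
                     for (a, x), (b, y) in itertools.combinations(columns.items(), 2)),
                    reverse=True)
    return [f"{a} and {b} are strongly related (|r| = {r:.2f})"
            for r, a, b in scored[:top] if r > 0.7]
```

In a full system, many such analyses would run concurrently and their notifications would be ranked and streamed into the analyst's feed timeline.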
2018 | |
108. | Andrea Batch, Hanuma Teja Maddali, Kyungjun Lee, Niklas Elmqvist (2018): Gesture and Action Discovery for Evaluating Virtual Environments with Semi-Supervised Segmentation of Telemetry Records. Proceedings of the IEEE Conference on Artificial Intelligence & Virtual Reality, pp. 1–10, IEEE, 2018. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Batch2018, title = {Gesture and Action Discovery for Evaluating Virtual Environments with Semi-Supervised Segmentation of Telemetry Records}, author = {Andrea Batch and Hanuma Teja Maddali and Kyungjun Lee and Niklas Elmqvist}, url = {http://users.umiacs.umd.edu/~elm/projects/hceye/vr-telemetry.pdf, PDF}, year = {2018}, date = {2018-12-10}, booktitle = {Proceedings of the IEEE Conference on Artificial Intelligence & Virtual Reality}, pages = {1--10}, publisher = {IEEE}, abstract = {In this paper, we propose a novel pipeline for semi-supervised behavioral coding of videos of users testing a device or interface, with an eye toward human-computer interaction evaluation for virtual reality. Our system applies existing statistical techniques for time-series classification, including e-divisive change point detection and \"Symbolic Aggregate approXimation\" (SAX) with agglomerative hierarchical clustering, to 3D pose telemetry data. These techniques create classes of short segments of single-person video data–short actions of potential interest called \"micro-gestures.\" A long short-term memory (LSTM) layer then learns these micro-gestures from pose features generated purely from video via a pretrained OpenPose convolutional neural network (CNN) to predict their occurrence in unlabeled test videos. We present and discuss the results from testing our system on the single user pose videos of the CMU Panoptic Dataset. 
}, keywords = {} } In this paper, we propose a novel pipeline for semi-supervised behavioral coding of videos of users testing a device or interface, with an eye toward human-computer interaction evaluation for virtual reality. Our system applies existing statistical techniques for time-series classification, including e-divisive change point detection and "Symbolic Aggregate approXimation" (SAX) with agglomerative hierarchical clustering, to 3D pose telemetry data. These techniques create classes of short segments of single-person video data–short actions of potential interest called "micro-gestures." A long short-term memory (LSTM) layer then learns these micro-gestures from pose features generated purely from video via a pretrained OpenPose convolutional neural network (CNN) to predict their occurrence in unlabeled test videos. We present and discuss the results from testing our system on the single user pose videos of the CMU Panoptic Dataset. |
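The SAX symbolization step used in this pipeline is a well-documented generic technique: z-normalize the series, reduce it with Piecewise Aggregate Approximation (PAA), and map segment means to letters via equiprobable breakpoints of the standard normal distribution. A minimal sketch (not the authors' code):

```python
from statistics import NormalDist
import numpy as np

def sax(series, n_segments, alphabet="abcd"):
    """Symbolic Aggregate approXimation: turn a numeric series into a
    short symbol string. Series length must divide evenly by n_segments."""
    x = np.asarray(series, dtype=float)
    std = x.std()
    x = (x - x.mean()) / (std if std else 1.0)     # z-normalize
    paa = x.reshape(n_segments, -1).mean(axis=1)   # PAA: per-segment means
    k = len(alphabet)
    # Breakpoints that cut N(0, 1) into k equal-probability bins.
    breakpoints = [NormalDist().inv_cdf(i / k) for i in range(1, k)]
    return "".join(alphabet[np.searchsorted(breakpoints, v, side="right")] for v in paa)
```

Clustering these symbol strings (e.g., with agglomerative hierarchical clustering on string distance) yields the candidate "micro-gesture" classes that the LSTM then learns to recognize.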
107. | Sigfried Gold, Andrea Batch, Robert McClure, Guoqian Jiang, Hadi Kharrazi, Rishi Saripalle, Vojtech Huser, Chunhua Weng, Nancy Roderer, Ana Szarfman, Niklas Elmqvist, David Gotz (2018): Clinical Concept Value Sets and Interoperability in Health Data Analytics. Proceedings of the Annual AMIA Symposium, 2018. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Gold2018, title = {Clinical Concept Value Sets and Interoperability in Health Data Analytics}, author = {Sigfried Gold and Andrea Batch and Robert McClure and Guoqian Jiang and Hadi Kharrazi and Rishi Saripalle and Vojtech Huser and Chunhua Weng and Nancy Roderer and Ana Szarfman and Niklas Elmqvist and David Gotz}, url = {https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6371254/, article}, year = {2018}, date = {2018-11-03}, booktitle = {Proceedings of the Annual AMIA Symposium}, abstract = {This paper focuses on value sets as an essential component in the health analytics ecosystem. We discuss shared repositories of reusable value sets and offer recommendations for their further development and adoption. In order to motivate these contributions, we explain how value sets fit into specific analytic tasks and the health analytics landscape more broadly; their growing importance and ubiquity with the advent of Common Data Models, Distributed Research Networks, and the availability of higher order, reusable analytic resources like electronic phenotypes and electronic clinical quality measures; the formidable barriers to value set reuse; and our introduction of a concept-agnostic orientation to vocabulary collections. The costs of ad hoc value set management and the benefits of value set reuse are described or implied throughout. 
Our standards, infrastructure, and design recommendations are not systematic or comprehensive but invite further work to support value set reuse for health analytics.}, keywords = {} } |
106. | Senthil Chandrasegaran, Devarajan Ramanujan, Niklas Elmqvist (2018): How Do Sketching and Non-Sketching Actions Convey Design Intent?. Proceedings of the ACM Conference on Designing Interactive Systems, pp. 373–385, 2018. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Chandrasegaran2018, title = {How Do Sketching and Non-Sketching Actions Convey Design Intent?}, author = {Senthil Chandrasegaran and Devarajan Ramanujan and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/design-intent/design-intent.pdf, PDF}, year = {2018}, date = {2018-06-01}, booktitle = {Proceedings of the ACM Conference on Designing Interactive Systems}, pages = {373--385}, abstract = {Sketches are much more than marks on paper; they play a key role for designers both in ideation and problem-solving as well as in communication with other designers. Thus, the act of sketching is often enriched with annotations, references, and physical actions, such as gestures or speech—all of which constitute meta-data about the designer’s reasoning. Conventional paper-based design notebooks cannot capture this rich metadata, but digital design notebooks can. To understand how and what data to capture, we conducted an observational study of design practitioners where they explore design solutions for a set of problems. We recorded and coded their sketching and non-sketching actions that reflect their exploration of the design space. We then categorized the captured meta-data and mapped observed physical actions to design intent. These findings inform the creation of future digital design notebooks that can better capture designers’ reasoning during sketching.}, keywords = {} } |
105. | Fanny Chevalier, Nathalie Henry Riche, Basak Alper, Catherine Plaisant, Jeremy Boy, Niklas Elmqvist (2018): Observations and Reflections on Visualization Literacy at the Elementary School Level. IEEE Computer Graphics & Applications, 38 (3), pp. 21–29, 2018. (Type: Article | Abstract | Links | BibTeX) @article{Chevalier2018, title = {Observations and Reflections on Visualization Literacy at the Elementary School Level}, author = {Fanny Chevalier and Nathalie Henry Riche and Basak Alper and Catherine Plaisant and Jeremy Boy and Niklas Elmqvist}, url = {http://www.cs.umd.edu/hcil/trs/2018-06/2018-06.pdf, PDF}, year = {2018}, date = {2018-05-01}, journal = {IEEE Computer Graphics & Applications}, volume = {38}, number = {3}, pages = {21--29}, abstract = {In this article, we share our reflections on visualization literacy and how it might be better developed in early education. We base this on lessons we learned while studying how teachers instruct, and how students acquire basic visualization principles and skills in elementary school. We use these findings to propose directions for future research on visualization literacy.}, keywords = {} } |
104. | Justin Wagner, Florin Chelaru, Jayaram Kancherla, Joseph N. Paulson, Alexander Zhang, Victor Felix, Anup Mahurkar, Niklas Elmqvist, Hector Corrada Bravo (2018): Metaviz: interactive statistical and visual analysis of metagenomic data. Nucleic Acids Research, 46 (6), pp. 2777–2787, 2018. (Type: Article | Abstract | Links | BibTeX) @article{Wagner2018, title = {Metaviz: interactive statistical and visual analysis of metagenomic data}, author = {Justin Wagner and Florin Chelaru and Jayaram Kancherla and Joseph N. Paulson and Alexander Zhang and Victor Felix and Anup Mahurkar and Niklas Elmqvist and Hector Corrada Bravo}, url = {https://academic.oup.com/nar/article/46/6/2777/4909991, Article}, year = {2018}, date = {2018-03-06}, journal = {Nucleic Acids Research}, volume = {46}, number = {6}, pages = {2777--2787}, abstract = {Large studies profiling microbial communities and their association with healthy or disease phenotypes are now commonplace. Processed data from many of these studies are publicly available but significant effort is required for users to effectively organize, explore and integrate it, limiting the utility of these rich data resources. Effective integrative and interactive visual and statistical tools to analyze many metagenomic samples can greatly increase the value of these data for researchers. We present Metaviz, a tool for interactive exploratory data analysis of annotated microbiome taxonomic community profiles derived from marker gene or whole metagenome shotgun sequencing. Metaviz is uniquely designed to address the challenge of browsing the hierarchical structure of metagenomic data features while rendering visualizations of data values that are dynamically updated in response to user navigation. We use Metaviz to provide the UMD Metagenome Browser web service, allowing users to browse and explore data for more than 7000 microbiomes from published studies. 
Users can also deploy Metaviz as a web service, or use it to analyze data through the metavizr package to interoperate with state-of-the-art analysis tools available through Bioconductor. Metaviz is free and open source with the code, documentation and tutorials publicly accessible.}, keywords = {} } |
103. | Zhe Cui, Shivalik Sen, Sriram Karthik Badam, Niklas Elmqvist (2018): VisHive: Supporting Web-based Visualization through Ad-hoc Computational Clusters of Mobile Devices. Information Visualization, 2018. (Type: Article | Abstract | Links | BibTeX) @article{Cui2018, title = {VisHive: Supporting Web-based Visualization through Ad-hoc Computational Clusters of Mobile Devices}, author = {Zhe Cui and Shivalik Sen and Sriram Karthik Badam and Niklas Elmqvist }, url = {http://www.umiacs.umd.edu/~elm/projects/vishive/vishive.pdf, PDF}, year = {2018}, date = {2018-01-01}, journal = {Information Visualization}, abstract = {Current web-based visualizations are designed for single computers and cannot make use of additional devices on the client side, even if today’s users often have access to several, such as a tablet, a smartphone, and a smartwatch. We present a framework for ad-hoc computational clusters that leverage these local devices for visualization computations. Furthermore, we present an instantiating JavaScript toolkit called VisHive for constructing web-based visualization applications that can transparently connect multiple devices---called cells---into such ad-hoc clusters---called a hive---for local computation. Hives are formed either using a matchmaking service or through manual configuration. Cells are organized into a master-slave architecture, where the master provides the visual interface to the user and controls the slaves, and the slaves perform computation. VisHive is built entirely using current web technologies, runs in the native browser of each cell, and requires no specific software to be downloaded on the involved devices. We demonstrate VisHive using four distributed examples: a text analytics visualization, a database query for exploratory visualization, a DBSCAN clustering running on multiple nodes, and a Principal Component Analysis implementation. 
}, keywords = {} } |
102. | Deok Gun Park, Steven M. Drucker, Roland Fernandez, Niklas Elmqvist (2018): ATOM: A Grammar for Unit Visualization. IEEE Transactions on Visualization & Computer Graphics, 2018. (Type: Article | Abstract | Links | BibTeX) @article{Park2018, title = {ATOM: A Grammar for Unit Visualization}, author = {Deok Gun Park and Steven M. Drucker and Roland Fernandez and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/atom/atom.pdf, PDF}, year = {2018}, date = {2018-01-01}, journal = {IEEE Transactions on Visualization & Computer Graphics}, abstract = {Unit visualizations are a family of visualizations where every data item is represented by a unique visual mark---a visual unit---during visual encoding. For certain datasets and tasks, unit visualizations can provide more information, better match the user's mental model, and enable novel interactions compared to traditional aggregated visualizations. Current visualization grammars cannot fully describe the unit visualization family. In this paper, we characterize the design space of unit visualizations to derive a grammar that can express them. The resulting grammar is called ATOM, and is based on passing data through a series of layout operations that divide the output of previous operations recursively until the size and position of every data point can be determined. We evaluate the expressive power of the grammar by both using it to describe existing unit visualizations, as well as to suggest new unit visualizations.}, keywords = {} } |
101. | Jiawei Zhang, Chittayong Surakitbanharn, Niklas Elmqvist, Ross Maciejewski, Zhenyu Quan, David Ebert (2018): TopoText: Context-Preserving Semantic Exploration Across Multiple Spatial Scales. Proceedings of the ACM Conference on Human Factors in Computing Systems, 2018. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Zhang2018, title = {TopoText: Context-Preserving Semantic Exploration Across Multiple Spatial Scales}, author = {Jiawei Zhang and Chittayong Surakitbanharn and Niklas Elmqvist and Ross Maciejewski and Zhenyu Quan and David Ebert}, url = {http://www.umiacs.umd.edu/~elm/projects/topotext/topotext.pdf, PDF}, year = {2018}, date = {2018-01-01}, booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems}, abstract = {TopoText is a context-preserving technique for visualizing semantic data for multi-scale spatial aggregates to gain insight into spatial phenomena. Conventional exploration requires users to navigate across multiple scales but only presents the information related to the current scale. This limitation potentially adds more steps of interaction and cognitive overload to the users. TopoText renders multi-scale aggregates into a single visual display combining novel text-based encoding and layout methods that draw labels along the boundary or filled within the aggregates. The text itself not only summarizes the semantics at each individual scale, but also indicates the spatial coverage of the aggregates and their underlying hierarchical relationships. We validate TopoText with both a user study as well as several application examples.}, keywords = {} } |
100. | Tom Horak, Sriram Karthik Badam, Niklas Elmqvist, Raimund Dachselt (2018): When David Meets Goliath: Combining Smartwatches with a Large Vertical Display for Visual Data Exploration. Proceedings of the ACM Conference on Human Factors in Computing Systems, 2018. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Horak2018, title = {When David Meets Goliath: Combining Smartwatches with a Large Vertical Display for Visual Data Exploration}, author = {Tom Horak and Sriram Karthik Badam and Niklas Elmqvist and Raimund Dachselt}, url = {http://www.umiacs.umd.edu/~elm/projects/david-goliath/david-goliath.pdf, PDF}, year = {2018}, date = {2018-01-01}, booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems}, abstract = {We explore the combination of smartwatches and a large interactive display to support visual data analysis. These two extremes of interactive surfaces are increasingly popular, but feature different characteristics—display and input modalities, personal/public use, performance, and portability. In this paper, we first identify possible roles for both devices and the interplay between them through an example scenario. We then propose a conceptual framework to enable analysts to explore data items, track interaction histories, and alter visualization configurations through mechanisms using both devices in combination. We validate an implementation of our framework through a formative evaluation and a user study. The results show that this device combination, compared to just a large display, allows users to develop complex insights more fluidly by leveraging the roles of the two devices. Finally, we report on the interaction patterns and interplay between the devices for visual exploration as observed during our study.}, keywords = {} } |
2017 | |
99. | Andrea Batch, Niklas Elmqvist (2017): The Interactive Visualization Gap in Initial Exploratory Data Analysis. IEEE Transactions on Visualization & Computer Graphics, 2017. (Type: Article | Abstract | Links | BibTeX) @article{Batch2017, title = {The Interactive Visualization Gap in Initial Exploratory Data Analysis}, author = {Andrea Batch and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/visgap/visgap.pdf, PDF}, year = {2017}, date = {2017-10-01}, journal = {IEEE Transactions on Visualization & Computer Graphics}, abstract = {Data scientists and other analytic professionals often use interactive visualization in the dissemination phase at the end of a workflow during which findings are communicated to a wider audience. Visualization scientists, however, hold that interactive representation of data can also be used during exploratory analysis itself. Since the use of interactive visualization is optional rather than mandatory, this leaves a “visualization gap” during initial exploratory analysis that is the onus of visualization researchers to fill. In this paper, we explore areas where visualization would be beneficial in applied research by conducting a design study using a novel variation on contextual inquiry conducted with professional data analysts. Based on these interviews and experiments, we propose a set of interactive initial exploratory visualization guidelines which we believe will promote adoption by this type of user.}, keywords = {} } |
98. | Deok Gun Park, Seungyeon Kim, Jurim Lee, Jaegul Choo, Nicholas Diakopoulos, Niklas Elmqvist (2017): ConceptVector: Text Visual Analytics via Interactive Lexicon Building using Word Embedding. IEEE Transactions on Visualization & Computer Graphics, 2017. (Type: Article | Abstract | Links | BibTeX) @article{Park2017, title = {ConceptVector: Text Visual Analytics via Interactive Lexicon Building using Word Embedding}, author = {Deok Gun Park and Seungyeon Kim and Jurim Lee and Jaegul Choo and Nicholas Diakopoulos and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/conceptvector/conceptvector.pdf, PDF}, year = {2017}, date = {2017-10-01}, journal = {IEEE Transactions on Visualization & Computer Graphics}, abstract = {Central to many text analysis methods is the notion of a concept: a set of semantically related keywords characterizing a specific object, phenomenon, or theme. Advances in word embedding allow building such concepts from a small set of seed terms. However, naive application of such techniques may result in false positive errors because of the polysemy of human language. To mitigate this problem, we present a visual analytics system called ConceptVector that guides the user in building such concepts and then using them to analyze documents. Document-analysis case studies with real-world datasets demonstrate the fine-grained analysis provided by ConceptVector. To support the elaborate modeling of concepts using user seed terms, we introduce a bipolar concept model and support for irrelevant words. We validate the interactive lexicon building interface via a user study and expert reviews. The quantitative evaluation shows that the bipolar lexicon generated with our methods is comparable to human-generated ones.}, keywords = {} } |
97. | Sriram Karthik Badam, Niklas Elmqvist (2017): Visfer: Camera-based Visual Data Transfer for Cross-Device Visualization. Information Visualization, 2017. (Type: Article | Abstract | Links | BibTeX) @article{Badam2017bb, title = {Visfer: Camera-based Visual Data Transfer for Cross-Device Visualization}, author = {Sriram Karthik Badam and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/qrvis/visfer.pdf, PDF}, year = {2017}, date = {2017-09-08}, journal = {Information Visualization}, abstract = {Going beyond the desktop to leverage novel devices—such as smartphones, tablets, or large displays—for visual sensemaking typically requires supporting extraneous operations for device discovery, interaction sharing, and view management. Such operations can be time-consuming and tedious, and distract the user from the actual analysis. Embodied interaction models in these multi-device environments can take advantage of the natural interaction and physicality afforded by multimodal devices and help effectively carry out these operations in visual sensemaking. In this paper, we present cross-device interaction models for visualization spaces, that are embodied in nature, by conducting a user study to elicit actions from participants that could trigger a portrayed effect of sharing visualizations (and therefore information) across devices. We then explore one common interaction style from this design elicitation called Visfer, a technique for effortlessly sharing visualizations across devices using the visual medium. More specifically, this technique involves taking pictures of visualizations, or rather the QR codes augmenting them, on a display using the built-in camera on a handheld device. 
Our contributions include a conceptual framework for cross-device interaction and the Visfer technique itself, as well as transformation guidelines to exploit the capabilities of each specific device and a web framework for encoding visualization components into animated QR codes, which capture multiple frames of QR codes to embed more information. Beyond this, we also present the results from a performance evaluation for the visual data transfer enabled by Visfer. We end the paper by presenting the application examples of our Visfer framework.}, keywords = {} } |
96. | M. Adil Yalcin, Niklas Elmqvist, Benjamin B. Bederson (2017): Keshif: Rapid and Expressive Tabular Data Exploration for Novices. IEEE Transactions on Visualization & Computer Graphics, 2017. (Type: Article | Abstract | Links | BibTeX) @article{Yalcin2017b, title = {Keshif: Rapid and Expressive Tabular Data Exploration for Novices}, author = {M. Adil Yalcin and Niklas Elmqvist and Benjamin B. Bederson}, url = {http://www.umiacs.umd.edu/~elm/projects/keshif/keshif.pdf, PDF}, year = {2017}, date = {2017-05-19}, journal = {IEEE Transactions on Visualization & Computer Graphics}, abstract = {General purpose graphical interfaces for data exploration are typically based on manual visualization and interaction specifications. While designing manual specification can be very expressive, it demands high efforts to make effective decisions, therefore reducing exploratory speed. Instead, principled automated designs can increase exploratory speed, decrease learning efforts, help avoid ineffective decisions, and therefore better support data analytics novices. Towards these goals, we present Keshif, a new systematic design for tabular data exploration. To summarize a given dataset, Keshif aggregates records by value within attribute summaries, and visualizes aggregate characteristics using a consistent design based on data types. To reveal data distribution details, Keshif features three complementary linked selections: highlighting, filtering, and comparison. Keshif further increases expressiveness through aggregate metrics, absolute/part-of scale modes, calculated attributes, and saved selections, all working in synchrony. Its automated design approach also simplifies authoring of dashboards composed of summaries and individual records from raw data using fluid interaction. We show examples selected from 160+ datasets from diverse domains. 
Our study with novices shows that after exploring raw data for 15 minutes, our participants reached close to 30 data insights on average, comparable to other studies with skilled users using more complex tools.}, keywords = {} } |
95. | M. Adil Yalcin, Niklas Elmqvist, Benjamin B. Bederson (2017): Raising the Bars: Evaluating Treemaps vs. Wrapped Bars for Dense Visualization of Sorted Numeric Data. Proceedings of Graphics Interface, 2017. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Yalcin2017, title = {Raising the Bars: Evaluating Treemaps vs. Wrapped Bars for Dense Visualization of Sorted Numeric Data}, author = {M. Adil Yalcin and Niklas Elmqvist and Benjamin B. Bederson}, url = {http://www.umiacs.umd.edu/~elm/projects/raising-bars/RaisingTheBars-GI2017.pdf, PDF}, year = {2017}, date = {2017-05-15}, booktitle = {Proceedings of Graphics Interface}, abstract = {A standard (single-column) bar chart can effectively visualize a sorted list of numeric records. However, the chart height limits the number of visible records. To show more records, the bars could be made thinner (which could hinder identifying records individually), and scrolling requires interaction to see the overview. Treemaps have been used in practice in non-hierarchical settings for dense visualization of numeric data. Alternatively, we consider wrapped bars, a multi-column bar chart that uses length instead of area to encode numeric values. We compare treemaps and wrapped bars based on their design characteristics, and graphical perception performance for comparison, ranking, and overview tasks using crowdsourced experiments. Our analysis found that wrapped bars perceptually outperform treemaps in all three tasks for dense visualization of non-hierarchical, sorted numeric data.}, keywords = {} } |
94. | Senthil Chandrasegaran, Sriram Karthik Badam, Ninger Zhou, Zhenpeng Zhao, Lorraine Kisselburgh, Kylie Peppler, Niklas Elmqvist, Karthik Ramani (2017): Merging Sketches for Creative Design Exploration: An Evaluation of Physical and Cognitive Operations. Proceedings of Graphics Interface, 2017. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Chandrasegaran2017, title = {Merging Sketches for Creative Design Exploration: An Evaluation of Physical and Cognitive Operations}, author = {Senthil Chandrasegaran and Sriram Karthik Badam and Ninger Zhou and Zhenpeng Zhao and Lorraine Kisselburgh and Kylie Peppler and Niklas Elmqvist and Karthik Ramani}, url = {http://www.umiacs.umd.edu/~elm/projects/merge-study/merge-study.pdf, PDF}, year = {2017}, date = {2017-05-15}, booktitle = {Proceedings of Graphics Interface}, abstract = {Despite its grounding in creativity techniques, merging multiple source sketches to create new ideas has received scant attention in design literature. In this paper, we identify the physical operations involved in merging sketch components. We also introduce the cognitive operations of reuse, repurpose, refactor, and reinterpret, and explore their relevance to creative design. To examine the relationship of cognitive operations, physical techniques, and creative sketch outcomes, we conducted a qualitative user study where student designers merged existing sketches to generate either an alternative design, or an unrelated new design. We compared two digital selection techniques: freeform selection, and a stroke-cluster-based \"object select\" technique. The resulting merged sketches were subjected to crowdsourced evaluation and manual coding for the use of cognitive operations. Our findings establish a firm connection between the proposed cognitive operations and the context and outcome of creative tasks.
Key findings indicate that reinterpret cognitive operations correlate strongly with creativity in merged sketches, while reuse operations correlate negatively with creativity. Furthermore, freeform selection techniques are significantly preferred by designers. We discuss the empirical contributions of understanding the use of cognitive operations during design exploration, and the practical implications for designing interfaces in digital tools that facilitate creativity in merging sketches.}, keywords = {} } |
93. | Sriram Karthik Badam, Zehua Zheng, Emily Wall, Alex Endert, Niklas Elmqvist (2017): Supporting Team-First Visual Analytics through Group Activity Representations. Proceedings of Graphics Interface, 2017. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Badam2017, title = {Supporting Team-First Visual Analytics through Group Activity Representations}, author = {Sriram Karthik Badam and Zehua Zheng and Emily Wall and Alex Endert and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/group-awareness/group-awareness.pdf, PDF}, year = {2017}, date = {2017-05-15}, booktitle = {Proceedings of Graphics Interface}, abstract = {Collaborative visual analytics (CVA) involves sensemaking activities within teams of analysts based on coordination of work across team members, awareness of team activity, and communication of hypotheses, observations, and insights. We introduce a new type of CVA tool based on the notion of \"team-first\" visual analytics, where supporting the analytical process and needs of the entire team is the primary focus of the graphical user interface before that of the individual analysts. To this end, we present the design space and guidelines for team-first tools in terms of conveying analyst presence, focus, and activity within the interface. We then introduce InsightsDrive, a CVA tool for multidimensional data that integrates team-first features into the interface through group activity visualizations. These include (1) in-situ representations that show the focus regions of all users integrated in the data visualizations themselves using color-coded selection shadows, as well as (2) ex-situ representations showing the data coverage of each analyst using multidimensional visual representations.
We conducted two user studies, one with individual analysts to identify the affordances of different visual representations to inform data coverage, and the other to evaluate the performance of our team-first design with ex-situ and in-situ awareness for visual analytic tasks. Our results give an understanding of the performance of our team-first features and unravel their advantages for team coordination.}, keywords = {} } |
92. | Sriram Karthik Badam, Niklas Elmqvist, Jean-Daniel Fekete (2017): Steering the Craft: UI Elements and Visualizations for Supporting Progressive Visual Analytics. Computer Graphics Forum, 36 2017. (Type: Article | Abstract | Links | BibTeX) @article{Badam2017b, title = {Steering the Craft: UI Elements and Visualizations for Supporting Progressive Visual Analytics}, author = {Sriram Karthik Badam and Niklas Elmqvist and Jean-Daniel Fekete}, url = {http://www.umiacs.umd.edu/~elm/projects/insightsfeed/insightsfeed.pdf, PDF}, year = {2017}, date = {2017-05-15}, journal = {Computer Graphics Forum}, volume = {36}, abstract = {Progressive visual analytics (PVA) has emerged in recent years to manage the latency of data analysis systems. When analysis is performed progressively, rough estimates of the results are generated quickly and are then improved over time. Analysts can therefore monitor the progression of the results, steer the analysis algorithms, and make early decisions if the estimates provide a convincing picture. In this article, we describe interface design guidelines for helping users understand progressively updating results and make early decisions based on progressive estimates. To illustrate our ideas, we present a prototype PVA tool called InsightsFeed for exploring Twitter data at scale. As validation, we investigate the tradeoffs of our tool when exploring a Twitter dataset in a user study. We report the usage patterns in making early decisions using the user interface, guiding computational methods, and exploring different subsets of the dataset, compared to sequential analysis without progression.}, keywords = {} } |
91. | Senthil Chandrasegaran, Sriram Karthik Badam, Lorraine Kisselburgh, Karthik Ramani (2017): Integrating Visual Analytics Support for Grounded Theory Practice in Qualitative Text Analysis. Computer Graphics Forum, 36 2017. (Type: Article | Abstract | Links | BibTeX) @article{Chandrasegaran2017c, title = {Integrating Visual Analytics Support for Grounded Theory Practice in Qualitative Text Analysis}, author = {Senthil Chandrasegaran and Sriram Karthik Badam and Lorraine Kisselburgh and Karthik Ramani}, url = {http://www.umiacs.umd.edu/~elm/projects/gthelper/gthelper.pdf, PDF}, year = {2017}, date = {2017-05-15}, journal = {Computer Graphics Forum}, volume = {36}, abstract = {We present an argument for using visual analytics to aid Grounded Theory methodologies in qualitative data analysis. Grounded theory methods involve the inductive analysis of data to generate novel insights and theoretical constructs. Making sense of unstructured text data is uniquely suited for visual analytics. Using natural language processing techniques such as parts-of-speech tagging, retrieving information content, and topic modeling, different parts of the data can be structured and semantically associated, and interactively explored, thereby providing conceptual depth to the guided discovery process. We review grounded theory methods and identify processes that can be enhanced through visual analytic techniques. Next, we develop an interface for qualitative text analysis, and evaluate our design with qualitative research practitioners who analyze texts with and without visual analytics support. The results of our study suggest how visual analytics can be incorporated into qualitative data analysis tools, and the analytic and interpretive benefits that can result.}, keywords = {} } |
90. | Jiawei Zhang, Abish Malik, Benjamin Ahlbrand, Niklas Elmqvist, Ross Maciejewski, David S. Ebert (2017): TopoGroups: Context-Preserving Visual Illustration of Multi-Scale Spatial Aggregates. Proceedings of the ACM Conference on Human Factors in Computing Systems, pp. 2940–2951, ACM, 2017. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Zhang2017, title = {TopoGroups: Context-Preserving Visual Illustration of Multi-Scale Spatial Aggregates}, author = {Jiawei Zhang and Abish Malik and Benjamin Ahlbrand and Niklas Elmqvist and Ross Maciejewski and David S. Ebert}, url = {http://www.umiacs.umd.edu/~elm/projects/topogroups/topogroups.pdf, PDF}, year = {2017}, date = {2017-05-08}, booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems}, pages = {2940--2951}, publisher = {ACM}, abstract = {Spatial datasets, such as tweets in a geographic area, often exhibit different distribution patterns at multiple levels of scale, such as live updates about events occurring in very specific locations on the map. Navigating in such multi-scale data-rich spaces is often inefficient, requires users to choose between overview or detail information, and does not support identifying spatial patterns at varying scales. In this paper, we propose TopoGroups, a novel context-preserving technique that aggregates spatial data into hierarchical clusters to improve exploration and navigation at multiple spatial scales. The technique uses a boundary distortion algorithm to minimize the visual clutter caused by overlapping aggregates. 
Our user study explores multiple visual encoding strategies for TopoGroups including color, transparency, shading, and shapes in order to convey the hierarchical and statistical information of the geographical aggregates at different scales.}, keywords = {} } |
89. | Cecil Piya, Vinayak, Senthil Chandrasegaran, Niklas Elmqvist, Karthik Ramani (2017): Co-3Deator: A Team-First Collaborative 3D Design Ideation Tool. Proceedings of the ACM Conference on Human Factors in Computing Systems, pp. 6581–6592, 2017. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Piya2017, title = {Co-3Deator: A Team-First Collaborative 3D Design Ideation Tool}, author = {Cecil Piya and Vinayak and Senthil Chandrasegaran and Niklas Elmqvist and Karthik Ramani}, url = {http://www.umiacs.umd.edu/~elm/projects/co3deator/co3deator.pdf, PDF}, year = {2017}, date = {2017-05-08}, booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems}, pages = {6581--6592}, abstract = {We present Co-3Deator, a sketch-based collaborative 3D modeling system based on the notion of “team-first” ideation tools, where the needs and processes of the entire design team come before that of an individual designer. Co-3Deator includes two specific team-first features: a concept component hierarchy which provides a design representation suitable for multi-level sharing and reusing of design information, and a collaborative design explorer for storing, viewing, and accessing hierarchical design data during collaborative design activities. We conduct two controlled user studies, one with individual designers to elicit the form and functionality of the collaborative design explorer, and the other with design teams to evaluate the utility of the concept component hierarchy and design explorer towards collaborative design ideation. Our results support our rationale for both of the proposed team-first collaboration mechanisms and suggest further ways to streamline collaborative design.}, keywords = {} } |
88. | Senthil Chandrasegaran, Sriram Karthik Badam, Lorraine Kisselburgh, Kylie Peppler, Niklas Elmqvist, Karthik Ramani (2017): VizScribe: A Visual Analytics Approach to Understand Designer Behavior. International Journal of Human-Computer Interaction, 100 pp. 66–80, 2017. (Type: Article | Abstract | Links | BibTeX) @article{Chandrasegaran2017b, title = {VizScribe: A Visual Analytics Approach to Understand Designer Behavior}, author = {Senthil Chandrasegaran and Sriram Karthik Badam and Lorraine Kisselburgh and Kylie Peppler and Niklas Elmqvist and Karthik Ramani}, url = {http://www.umiacs.umd.edu/~elm/projects/vizscribe/vizscribe.pdf, PDF}, year = {2017}, date = {2017-01-02}, journal = {International Journal of Human-Computer Interaction}, volume = {100}, pages = {66--80}, abstract = {Design protocol analysis is a technique to understand designers’ cognitive processes by analyzing sequences of observations on their behavior. These observations typically use audio, video, and transcript data in order to gain insights into the designer\'s behavior and the design process. The recent availability of sophisticated sensing technology has made such data highly multimodal, requiring more flexible protocol analysis tools. To address this need, we present VizScribe, a visual analytics framework that employs coordinated multiple views to enable the viewing of such data from different perspectives. VizScribe allows designers to create, customize, and extend interactive visualizations for design protocol data such as video, transcripts, sketches, sensor data, and user logs.
User studies where design researchers used VizScribe for protocol analysis indicated that the linked views and interactive navigation offered by VizScribe afforded the researchers multiple, useful ways to approach and interpret such multimodal data.}, keywords = {} } |
2016 | |
87. | Matthias Nielsen, Niklas Elmqvist, Kaj Grønbæk (2016): Scribble Query: Fluid Touch Brushing for Multivariate Data Visualization. Proceedings of the Australian Conference on Human-Computer Interaction, 2016. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Nielsen2016, title = {Scribble Query: Fluid Touch Brushing for Multivariate Data Visualization}, author = {Matthias Nielsen and Niklas Elmqvist and Kaj Grønbæk}, url = {http://www.umiacs.umd.edu/~elm/projects/scribble-query/scribble-query.pdf, PDF}, year = {2016}, date = {2016-12-01}, booktitle = {Proceedings of the Australian Conference on Human-Computer Interaction}, abstract = {The wide availability of touch-enabled devices is a unique opportunity for visualization research to invent novel techniques to fluently explore, analyse, and understand complex and large-scale data. In this paper, we introduce Scribble Query, a novel interaction technique for fluid freehand scribbling (casual drawing) on touch-enabled devices to support interactive querying in data visualizations. Inspired by the low-entry yet rich interaction of touch drawing applications, a Scribble Query can be created with a single touch stroke yet have the expressiveness of multiple brushes (a conventionally used interaction technique). We have applied the Scribble Query interaction technique in a multivariate visualization tool, deployed the tool with domain experts from five different domains, and conducted deployment studies with these domain experts on their utilization of multivariate visualization with Scribble Query. 
The studies suggest that Scribble Query has a low entry barrier facilitating easy adoption, casual and infrequent usage, and in one case, enabled live dissemination of findings by the domain expert to managers in the organization.}, keywords = {} } |
86. | Sriram Karthik Badam, Fereshteh Amini, Niklas Elmqvist, Pourang Irani (2016): Supporting Visual Exploration for Multiple Users in Large Display Environments. Proceedings of the IEEE Conference on Visual Analytics Science & Technology, 2016. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Badam2016b, title = {Supporting Visual Exploration for Multiple Users in Large Display Environments}, author = {Sriram Karthik Badam and Fereshteh Amini and Niklas Elmqvist and Pourang Irani}, url = {http://umiacs.umd.edu/~elm/projects/multiuser-vis/multiuser-vis.pdf, PDF https://www.youtube.com/watch?v=xd7G_q8nocc, Youtube}, year = {2016}, date = {2016-10-21}, booktitle = {Proceedings of the IEEE Conference on Visual Analytics Science & Technology}, abstract = {We present a design space exploration of interaction techniques for supporting multiple collaborators exploring data on a shared large display. Our proposed solution is based on users controlling individual lenses using both explicit gestures as well as proxemics: the spatial relations between people and physical artifacts such as their distance, orientation, and movement. We discuss different design considerations for implicit and explicit interactions through the lens, and evaluate the user experience to find a balance between the implicit and explicit interaction styles. Our findings indicate that users favor implicit interaction through proxemics for navigation and collaboration, but prefer using explicit mid-air gestures to perform actions that are perceived to be direct, such as terminating a lens composition. Based on these results, we propose a hybrid technique utilizing both proxemics and mid-air gestures, along with examples applying this technique to other datasets. Finally, we performed a usability evaluation of the hybrid technique and observed user performance improvements in the presence of both implicit and explicit interaction styles.
}, keywords = {} } |
85. Minjeong Kim, Kyeongpil Kang, Deokgun Park, Jaegul Choo, Niklas Elmqvist (2016): TopicLens: Efficient Multi-Level Visual Topic Exploration of Large-Scale Document Collections. IEEE Transactions on Visualization and Computer Graphics, 23 (1), pp. 151–160, 2016.

@article{Kim2017,
  title = {TopicLens: Efficient Multi-Level Visual Topic Exploration of Large-Scale Document Collections},
  author = {Minjeong Kim and Kyeongpil Kang and Deokgun Park and Jaegul Choo and Niklas Elmqvist},
  url = {http://www.umiacs.umd.edu/~elm/projects/topiclens/topiclens.pdf, PDF https://www.youtube.com/watch?v=RKC5w9dZmXQ, Youtube},
  year = {2016},
  date = {2016-08-10},
  journal = {IEEE Transactions on Visualization and Computer Graphics},
  volume = {23},
  number = {1},
  pages = {151--160},
  abstract = {Topic modeling, which reveals underlying topics of a document corpus, has been actively adopted in visual analytics for large-scale document collections. However, due to its significant processing time and non-interactive nature, topic modeling has so far not been tightly integrated into a visual analytics workflow. Instead, most such systems are limited to utilizing a fixed, initial set of topics. Motivated by this gap in the literature, we propose a novel interaction technique called TopicLens that allows a user to dynamically explore data through a lens interface where topic modeling and the corresponding 2D embedding are efficiently computed on the fly. To support this interaction in real time while maintaining view consistency, we propose a novel efficient topic modeling method and a semi-supervised 2D embedding algorithm. Our work is based on improving state-of-the-art methods such as nonnegative matrix factorization and t-distributed stochastic neighbor embedding. Furthermore, we have built a web-based visual analytics system integrated with TopicLens. We use this system to measure the performance and the visualization quality of our proposed methods. We provide several scenarios showcasing the capability of TopicLens using real-world datasets.}
}
84. Deok Gun Park, Simranjit Singh, Nicholas Diakopoulos, Niklas Elmqvist (2016): Supporting Comment Moderators in Identifying High Quality Online News Comments. Proceedings of the ACM Conference on Human Factors in Computing Systems, pp. 1114–1125, 2016.

@inproceedings{Park2016,
  title = {Supporting Comment Moderators in Identifying High Quality Online News Comments},
  author = {Deok Gun Park and Simranjit Singh and Nicholas Diakopoulos and Niklas Elmqvist},
  url = {http://www.umiacs.umd.edu/~elm/projects/commentiq/commentiq.pdf, PDF},
  year = {2016},
  date = {2016-05-05},
  booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems},
  pages = {1114--1125},
  abstract = {Online comments submitted by readers of news articles can provide valuable feedback and critique, personal views and perspectives, and opportunities for discussion. The varying quality of these comments necessitates that publishers remove the low quality ones, but there is also a growing awareness that identifying and highlighting high quality contributions can promote the general quality of the community. In this paper we take a user-centered design approach towards developing a system, CommentIQ, which supports comment moderators in interactively identifying high quality comments using a combination of comment analytic scores as well as visualizations and flexible UI components. We evaluated this system with professional comment moderators working at local and national news outlets and provide insights into the utility and appropriateness of features for journalistic tasks, as well as how the system may enable or transform journalistic practices around online comments.}
}
83. Sriram Karthik Badam, Jieqiong Zhao, Shivalik Sen, Niklas Elmqvist, David Ebert (2016): TimeFork: Interactive Prediction of Time Series. Proceedings of the ACM Conference on Human Factors in Computing Systems, pp. 5409–5420, 2016.

@inproceedings{Badam2016,
  title = {TimeFork: Interactive Prediction of Time Series},
  author = {Sriram Karthik Badam and Jieqiong Zhao and Shivalik Sen and Niklas Elmqvist and David Ebert},
  url = {http://www.umiacs.umd.edu/~elm/projects/timefork/timefork.pdf, PDF},
  year = {2016},
  date = {2016-05-05},
  booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems},
  pages = {5409--5420},
  abstract = {We present TimeFork, an interactive prediction technique to support users predicting the future of time-series data, such as in financial, scientific, or medical domains. TimeFork combines visual representations of multiple time series with prediction information generated by computational models. Using this method, analysts engage in a back-and-forth dialogue with the computational model by alternating between manually predicting future changes through interaction and letting the model automatically determine the most likely outcomes, to eventually come to a common prediction using the model. This computer-supported prediction approach allows for harnessing the user’s knowledge of factors influencing future behavior, as well as sophisticated computational models drawing on past performance. To validate the TimeFork technique, we conducted a user study in a stock market prediction game. We present evidence of improved performance for participants using TimeFork compared to fully manual or fully automatic predictions, and characterize qualitative usage patterns observed during the user study.}
}
82. Udayan Umapathi, Niklas Elmqvist (2016): Mushaca: A 3-Degrees-of-Freedom Mouse Supporting Rotation. International Journal of Human-Computer Interaction, 32 (6), pp. 481–492, 2016.

@article{Umapathi2016,
  title = {Mushaca: A 3-Degrees-of-Freedom Mouse Supporting Rotation},
  author = {Udayan Umapathi and Niklas Elmqvist},
  url = {http://www.umiacs.umd.edu/~elm/projects/mushaca/mushaca.pdf, PDF},
  year = {2016},
  date = {2016-03-09},
  journal = {International Journal of Human-Computer Interaction},
  volume = {32},
  number = {6},
  pages = {481--492},
  abstract = {Based on kinesiology research demonstrating that translation and rotation are inseparable actions in the physical world, we present Mushaca, a 3-degrees-of-freedom mouse that senses rotation in addition to traditional planar position. We present an optical realization of the Mushaca device based on two optical sensors and then evaluate the device through a series of controlled experiments. Our results show that rotation is indeed a useful input modality for a pointing device, and also give some insight into how users perceive the changing coordinate system of the rotating mouse and adapt to this change through kinesthetic learning.}
}
2015
81. William Z. Bernstein, Devarajan Ramanujan, Devadatta M. Kulkarni, Jeffrey Tew, Niklas Elmqvist, Fu Zhao, Karthik Ramani (2015): Mutually Coordinated Visualization of Product and Supply Chain Metadata for Sustainable Design. Journal of Mechanical Design, 137 (12), pp. 121101, 2015.

@article{Bernstein2015,
  title = {Mutually Coordinated Visualization of Product and Supply Chain Metadata for Sustainable Design},
  author = {William Z. Bernstein and Devarajan Ramanujan and Devadatta M. Kulkarni and Jeffrey Tew and Niklas Elmqvist and Fu Zhao and Karthik Ramani},
  url = {http://doi.org/10.1115/1.4031293, DOI},
  year = {2015},
  date = {2015-10-01},
  journal = {Journal of Mechanical Design},
  volume = {137},
  number = {12},
  pages = {121101},
  abstract = {In this paper, we present a novel visualization framework for product and supply chain metadata in the context of redesign-related decision scenarios. Our framework is based on the idea of overlaying product-related metadata onto the interactive graph representations of a supply chain and its associated product architecture. By coupling environmental data with graph-based visualizations of product architecture, our framework provides a novel decision platform for expert designers. Here, the user can balance the advantages of a redesign opportunity and manage the associated risk on the product and supply chain. For demonstration, we present ViSER, an interactive visualization tool that provides an interface consisting of different mutually coordinated views providing multiple perspectives on a particular supply chain presentation. To explore the utility of ViSER, we conduct a domain expert exploration using a case study of peripheral computer equipment. Results indicate that ViSER enables new affordances within the decision making process for supply chain redesign.}
}
80. Sujin Jang, Niklas Elmqvist, Karthik Ramani (2015): MotionFlow: Visual Abstraction and Aggregation of Sequential Patterns in Human Motion Tracking Data. IEEE Transactions on Visualization and Computer Graphics, 21 (1), pp. 21–30, 2015.

@article{Jang2015,
  title = {MotionFlow: Visual Abstraction and Aggregation of Sequential Patterns in Human Motion Tracking Data},
  author = {Sujin Jang and Niklas Elmqvist and Karthik Ramani},
  url = {http://www.umiacs.umd.edu/~elm/projects/motionflow/motionflow.pdf, PDF},
  year = {2015},
  date = {2015-08-14},
  journal = {IEEE Transactions on Visualization and Computer Graphics},
  volume = {21},
  number = {1},
  pages = {21--30},
  abstract = {Pattern analysis of human motions, which is useful in many research areas, requires understanding and comparison of different styles of motion patterns. However, working with human motion tracking data to support such analysis poses great challenges. In this paper, we propose MotionFlow, a visual analytics system that provides an effective overview of various motion patterns based on an interactive flow visualization. This visualization formulates a motion sequence as transitions between static poses, and aggregates these sequences into a tree diagram to construct a set of motion patterns. The system also allows the users to directly reflect the context of data and their perception of pose similarities in generating representative pose states. We provide local and global controls over the partition-based clustering process. To support the users in organizing unstructured motion data into pattern groups, we designed a set of interactions that enables searching for similar motion sequences from the data, detailed exploration of data subsets, and creating and modifying the group of motion patterns. To evaluate the usability of MotionFlow, we conducted a user study with six researchers with expertise in gesture-based interaction design. They used MotionFlow to explore and organize unstructured motion tracking data. Results show that the researchers were able to easily learn how to use MotionFlow, and the system effectively supported their pattern analysis activities, including leveraging their perception and domain knowledge.}
}
79. Mehmet Adil Yalcin, Niklas Elmqvist, Benjamin B. Bederson (2015): AggreSet: Rich and Scalable Set Exploration using Visualizations of Element Aggregations. IEEE Transactions on Visualization and Computer Graphics, 21 (1), pp. 688–697, 2015.

@article{Yalcin2015,
  title = {AggreSet: Rich and Scalable Set Exploration using Visualizations of Element Aggregations},
  author = {Mehmet Adil Yalcin and Niklas Elmqvist and Benjamin B. Bederson},
  url = {http://www.umiacs.umd.edu/~elm/projects/aggreset/aggreset.pdf, PDF},
  year = {2015},
  date = {2015-08-14},
  journal = {IEEE Transactions on Visualization and Computer Graphics},
  volume = {21},
  number = {1},
  pages = {688--697},
  abstract = {Datasets commonly include multi-value (set-typed) attributes that describe set memberships over elements, such as genres per movie or courses taken per student. Set-typed attributes describe rich relations across elements, sets, and the set intersections. Increasing the number of sets results in a combinatorial growth of relations and creates scalability challenges. Exploratory tasks (e.g. selection, comparison) have commonly been designed in separation for set-typed attributes, which reduces interface consistency. To improve on scalability and to support rich, contextual exploration of set-typed data, we present AggreSet. AggreSet creates aggregations for each data dimension: sets, set-degrees, set-pair intersections, and other attributes. It visualizes the element count per aggregate using a matrix plot for set-pair intersections, and histograms for set lists, set-degrees and other attributes. Its non-overlapping visual design is scalable to numerous and large sets. AggreSet supports selection, filtering, and comparison as core exploratory tasks. It allows analysis of set relations including subsets, disjoint sets and set intersection strength, and also features perceptual set ordering for detecting patterns in set matrices. Its interaction is designed for rich and rapid data exploration. We demonstrate results on a wide range of datasets from different domains with varying characteristics, and report on expert reviews and a case study using student enrollment and degree data with assistant deans at a major public university.}
}
78. Niklas Elmqvist, Ji Soo Yi (2015): Patterns for Visualization Evaluation. Information Visualization, 14 (3), pp. 250–269, 2015.

@article{Elmqvist2015,
  title = {Patterns for Visualization Evaluation},
  author = {Niklas Elmqvist and Ji Soo Yi},
  url = {http://www.umiacs.umd.edu/~elm/projects/eval-patterns/eval-patterns.pdf, Paper http://visevalpatterns.wikia.com/, Wiki},
  year = {2015},
  date = {2015-07-01},
  journal = {Information Visualization},
  volume = {14},
  number = {3},
  pages = {250--269},
  abstract = {We propose a pattern-based approach to evaluating data visualization: a set of general and reusable solutions to commonly occurring problems in evaluating visualization tools, techniques, and systems. Patterns have had significant impact in a wide array of disciplines, particularly software engineering, and we believe that they provide a powerful lens for characterizing visualization evaluation practices by offering practical, tried-and-tested tips and tricks that can be adopted immediately. The 20 patterns presented here have also been added to a freely editable Wiki repository. The motivation for creating this evaluation pattern language is to (a) capture and formalize ``dark'' practices for visualization evaluation not currently recorded in the literature, (b) disseminate these hard-won experiences to researchers and practitioners alike, (c) provide a standardized vocabulary for designing visualization evaluation, and (d) invite the community to add new evaluation patterns to a growing repository of patterns.}
}
77. Alexandru Dancu, Mickael Fourgeaud, Mohammad Obaid, Morten Fjeld, Niklas Elmqvist (2015): Map Navigation Using a Wearable Mid-air Display. Proceedings of the ACM Conference on Human-Computer Interaction with Mobile Devices and Services, pp. 71–76, 2015.

@inproceedings{Dancu2015,
  title = {Map Navigation Using a Wearable Mid-air Display},
  author = {Alexandru Dancu and Mickael Fourgeaud and Mohammad Obaid and Morten Fjeld and Niklas Elmqvist},
  url = {http://www.umiacs.umd.edu/~elm/projects/midairmap/midairmap.pdf, Paper https://www.youtube.com/watch?v=yswf1bJafp8, Talk},
  year = {2015},
  date = {2015-07-01},
  booktitle = {Proceedings of the ACM Conference on Human-Computer Interaction with Mobile Devices and Services},
  pages = {71--76},
  abstract = {Advances in display technologies will soon make wearable mid-air displays---devices that project dynamic images floating in mid-air relative to a mobile user---widely available. This kind of device will offer new input and output modalities compared to current mobile devices, and display information on the go. In this paper, we present a functional prototype for the purpose of understanding these modalities in more detail, including suitable applications and device placement. We first collected results from an online survey that identified map navigation as one of the most desirable applications and suggested placement preferences. Based on these rankings, we built a physical mid-air display prototype consisting of mobile phone, pico projector, and a holder frame, mountable in two different configurations: wrist and chest. We then designed a user study, asking participants to navigate different physical routes using map navigation displayed in mid-air. Participants considered the wrist mount to be three times safer in map navigation than the chest mount. The study results validate the use of a mid-air display for map navigation. Based on both our online survey and user study, we derive implications for the design of wearable mid-air displays.}
}
76. Zhenpeng Zhao, William Benjamin, Niklas Elmqvist, K. Ramani (2015): Sketcholution: Interaction Histories for Sketching. International Journal of Human-Computer Studies, 82, pp. 11–20, 2015.

@article{Zhao2015,
  title = {Sketcholution: Interaction Histories for Sketching},
  author = {Zhenpeng Zhao and William Benjamin and Niklas Elmqvist and K. Ramani},
  url = {http://www.umiacs.umd.edu/~elm/projects/sketcholution/sketcholution.pdf, Paper https://www.youtube.com/watch?v=SYvkIdJQtEk, Youtube video},
  year = {2015},
  date = {2015-05-16},
  journal = {International Journal of Human-Computer Studies},
  volume = {82},
  pages = {11--20},
  abstract = {We present Sketcholution, a method for automatically creating visual histories of hand-drawn sketches. Such visual histories are useful for a designer to reflect on a sketch, communicate ideas to others, and fork from or revert to an earlier point in the creative process. Our approach uses a bottom-up agglomerative clustering mechanism that groups adjacent frames based on their perceptual similarity while maintaining the causality of how a sketch was constructed. The resulting aggregation dendrogram can be cut at any level depending on available display space, and can be used to create a visual history consisting of either a comic strip of highlights, or a single annotated summary frame. We conducted a user study comparing the speed and accuracy of participants recovering causality in a sketch history using comic strips, summary frames, and simple animations. Although animations with interaction may seem better than static graphics, our results show that both comic strip and summary frame significantly outperform animation.}
}
75. Jungu Choi, Deok Gun Park, Yuetling Wong, Eli Fisher, Niklas Elmqvist (2015): VisDock: A Toolkit for Cross-Cutting Interactions in Visualization. IEEE Transactions on Visualization and Computer Graphics, 21 (9), pp. 1087–1100, 2015.

@article{Choi2015,
  title = {VisDock: A Toolkit for Cross-Cutting Interactions in Visualization},
  author = {Jungu Choi and Deok Gun Park and Yuetling Wong and Eli Fisher and Niklas Elmqvist},
  url = {http://www.umiacs.umd.edu/~elm/projects/visdock/visdock.pdf, Paper https://www.youtube.com/watch?v=LUC-nGR-fOk, Youtube video},
  year = {2015},
  date = {2015-03-21},
  journal = {IEEE Transactions on Visualization and Computer Graphics},
  volume = {21},
  number = {9},
  pages = {1087--1100},
  abstract = {Standard user applications provide a range of cross-cutting interaction techniques that are common to virtually all such tools: selection, filtering, navigation, layer management, and cut-and-paste. We present VisDock, a JavaScript mixin library that provides a core set of these cross-cutting interaction techniques for visualization, including selection (lasso, paths, shape selection, etc.), layer management (visibility, transparency, set operations, etc.), navigation (pan, zoom, overview, magnifying lenses, etc.), and annotation (point-based, region-based, data-space based, etc.). To showcase the utility of the library, we have released it as Open Source and integrated it with a large number of existing web-based visualizations. Furthermore, we have evaluated VisDock using qualitative studies with both developers utilizing the toolkit to build new web-based visualizations, as well as with end-users utilizing it to explore movie ratings data. Results from these studies highlight the usability and effectiveness of the toolkit from both developer and end-user perspectives.}
}
74. Yuetling Wong, Jieqiong Zhao, Niklas Elmqvist (2015): Evaluating Social Navigation Visualization in Online Geographic Maps. International Journal of Human-Computer Interaction, 31 (2), pp. 118–127, 2015.

@article{Wong2015,
  title = {Evaluating Social Navigation Visualization in Online Geographic Maps},
  author = {Yuetling Wong and Jieqiong Zhao and Niklas Elmqvist},
  url = {http://www.umiacs.umd.edu/~elm/projects/socnav-eval/socnav-eval.pdf, Paper},
  year = {2015},
  date = {2015-02-22},
  journal = {International Journal of Human-Computer Interaction},
  volume = {31},
  number = {2},
  pages = {118--127},
  abstract = {Social navigation enables emergent collaboration between independent collaborators by exposing the behavior of each individual. This is a powerful idea for web-based visualization, where the work of one user can inform other users interacting with the same visualization. We present results from a crowdsourced user study evaluating the value of such social navigation cues for a geographic map service. Our results show significantly improved performance for participants who interacted with the map when the visual footprints of previous users were visible.}
}
73. | Samah Gad, Waqas Javed, Sohaib Ghani, Niklas Elmqvist, Tom Ewing, Keith N. Hampton, Naren Ramakrishnan (2015): ThemeDelta: Dynamic Segmentations over Temporal Topic Models. IEEE Transactions on Visualization and Computer Graphics, 21 (5), pp. 672–685, 2015. (Type: Article | Abstract | Links | BibTeX) @article{Gad2015, title = {ThemeDelta: Dynamic Segmentations over Temporal Topic Models}, author = {Samah Gad and Waqas Javed and Sohaib Ghani and Niklas Elmqvist and Tom Ewing and Keith N. Hampton and Naren Ramakrishnan}, url = {http://www.umiacs.umd.edu/~elm/projects/theme-delta/theme-delta.pdf, Paper}, year = {2015}, date = {2015-02-17}, journal = {IEEE Transactions on Visualization and Computer Graphics}, volume = {21}, number = {5}, pages = {672--685}, abstract = {We present ThemeDelta, a visual analytics system for extracting and visualizing temporal trends, clustering, and reorganization in time-indexed textual datasets. ThemeDelta is supported by a dynamic temporal segmentation algorithm that integrates with topic modeling algorithms to identify change points where significant shifts in topics occur. This algorithm detects not only the clustering and associations of keywords in a time period, but also their convergence into topics (groups of keywords) that may later diverge into new groups. The visual representation of ThemeDelta uses sinuous, variable-width lines to show this evolution on a timeline, utilizing color for categories, and line width for keyword strength. We demonstrate how interaction with ThemeDelta helps capture the rise and fall of topics by analyzing archives of historical newspapers, of U.S. presidential campaign speeches, and of social messages collected through iNeighbors, a web-based social website. ThemeDelta was evaluated using a qualitative expert user study involving three researchers from rhetoric and history using the historical newspapers corpus.}, keywords = {} } |
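The ThemeDelta abstract describes detecting change points where keyword groupings shift over time. A toy sketch of that general idea (not the paper's algorithm, which integrates with topic modeling; here a simple Jaccard-similarity threshold over consecutive keyword sets, with all names hypothetical):

```python
# Toy change-point detection over time-indexed keyword sets. This only
# illustrates the general notion of a "change point"; ThemeDelta's actual
# segmentation algorithm works with topic models and is more sophisticated.

def jaccard(a, b):
    """Jaccard similarity between two keyword sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def change_points(snapshots, threshold=0.5):
    """Return indices where consecutive keyword sets differ sharply."""
    points = []
    for i in range(1, len(snapshots)):
        if jaccard(snapshots[i - 1], snapshots[i]) < threshold:
            points.append(i)
    return points

timeline = [
    {"election", "campaign", "votes"},
    {"election", "campaign", "polls"},   # small drift, no change point
    {"economy", "jobs", "inflation"},    # sharp shift -> change point
    {"economy", "jobs", "recovery"},
]
print(change_points(timeline))  # -> [2]
```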
72. | Sriram Karthik Badam, Eli Raymond Fisher, Niklas Elmqvist (2015): Munin: A Peer-to-Peer Middleware for Ubiquitous Analytics and Visualization Spaces. IEEE Transactions on Visualization & Computer Graphics, 21 (2), pp. 215–228, 2015. (Type: Article | Abstract | Links | BibTeX) @article{Badam2015, title = {Munin: A Peer-to-Peer Middleware for Ubiquitous Analytics and Visualization Spaces}, author = {Sriram Karthik Badam and Eli Raymond Fisher and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/munin/munin.pdf, Paper https://www.youtube.com/watch?v=ZKIXSdUm6-s, Video http://www.slideshare.net/NickElm/munin-a-peertopeer-middleware-forubiquitous-analytics-and-visualization-spaces, Slides}, year = {2015}, date = {2015-02-01}, journal = {IEEE Transactions on Visualization & Computer Graphics}, volume = {21}, number = {2}, pages = {215--228}, abstract = {We present Munin, a software framework for building ubiquitous analytics environments consisting of multiple input and output surfaces, such as tabletop displays, wall-mounted displays, and mobile devices. Munin utilizes a service-based model where each device provides one or more dynamically loaded services for input, display, or computation. Using a peer-to-peer model for communication, it leverages IP multicast to replicate the shared state among the peers. Input is handled through a shared event channel that lets input and output devices be fully decoupled. It also provides a data-driven scene graph to delegate rendering to peers, thus creating a robust, fault-tolerant, decentralized system. In this paper, we describe Munin's general design and architecture, provide several examples of how we are using the framework for ubiquitous analytics and visualization, and present a case study on building a Munin assembly for multidimensional visualization. We also present performance results and anecdotal user feedback for the framework that suggests that combining a service-oriented, data-driven model with middleware support for data sharing and event handling eases the design and execution of high performance distributed visualizations.}, keywords = {} } |
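The Munin abstract describes replicating shared state among peers over IP multicast. One common way such replication is made convergent is last-writer-wins resolution keyed by a (Lamport clock, peer id) stamp; the sketch below illustrates that technique in miniature. This is an assumed, generic approach, not Munin's documented protocol, and every name in it is hypothetical:

```python
# Illustrative last-writer-wins replicated dictionary: the kind of shared
# state a peer-to-peer middleware might replicate over IP multicast.
# A generic sketch of the technique, not Munin's actual protocol.

class ReplicatedState:
    def __init__(self, peer_id):
        self.peer_id = peer_id
        self.clock = 0                 # Lamport-style logical clock
        self.store = {}                # key -> (value, (timestamp, peer_id))

    def local_set(self, key, value):
        """Apply a local write; return the update message to multicast."""
        self.clock += 1
        stamp = (self.clock, self.peer_id)
        self.store[key] = (value, stamp)
        return (key, value, stamp)

    def apply_remote(self, update):
        """Apply an update received from another peer (idempotent)."""
        key, value, stamp = update
        self.clock = max(self.clock, stamp[0])     # merge clocks
        current = self.store.get(key)
        if current is None or stamp > current[1]:  # last writer wins
            self.store[key] = (value, stamp)

    def get(self, key):
        entry = self.store.get(key)
        return entry[0] if entry else None

# Two peers converge to the same value regardless of delivery order,
# because ties on the clock are broken deterministically by peer id:
a, b = ReplicatedState("peer-a"), ReplicatedState("peer-b")
ua = a.local_set("zoom", 1.5)
ub = b.local_set("zoom", 2.0)
a.apply_remote(ub)
b.apply_remote(ua)
assert a.get("zoom") == b.get("zoom")
```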
2014 | |
71. | Jonathan C. Roberts, Panagiotis D. Ritsos, Sriram Karthik Badam, Dominique Brodbeck, Jessie Kennedy, Niklas Elmqvist (2014): Visualization Beyond the Desktop --- The Next Big Thing. IEEE Computer Graphics & Applications, 34 (6), pp. 26–34, 2014. (Type: Article | Abstract | Links | BibTeX) @article{Roberts2014, title = {Visualization Beyond the Desktop --- The Next Big Thing}, author = {Jonathan C. Roberts and Panagiotis D. Ritsos and Sriram Karthik Badam and Dominique Brodbeck and Jessie Kennedy and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/beyond-desktop/beyond-desktop.pdf, Paper}, year = {2014}, date = {2014-12-02}, journal = {IEEE Computer Graphics & Applications}, volume = {34}, number = {6}, pages = {26--34}, abstract = {Visualization is coming of age: with visual depictions being seamlessly integrated into documents and data visualization techniques being used to understand datasets that are ever-growing in size and complexity, the term visualization is becoming used in everyday conversations. But we are on a cusp; visualization researchers need to develop and adapt to today's new devices and tomorrow's technology. Today, we are interacting with visual depictions through a mouse. Tomorrow, we will be touching, swiping, grasping, feeling, hearing, smelling, and even tasting our data. The next big thing is multi-sensory visualization that goes beyond the desktop.}, keywords = {} } |
70. | Sungahn Ko, Jieqiong Zhao, Jing Xia, Shehzad Afzal, Xiaoyu Wang, Greg Abram, Niklas Elmqvist, Len Kne, David Van Riper, Kelly Gaither, Shaun Kennedy, William Tolone, William Ribarsky, David S. Ebert (2014): VASA: Interactive Computational Steering of Large Asynchronous Simulation Pipelines for Societal Infrastructure. IEEE Transactions on Visualization & Computer Graphics, 20 (12), pp. 1853–1862, 2014. (Type: Article | Abstract | Links | BibTeX) @article{Ko2014, title = {VASA: Interactive Computational Steering of Large Asynchronous Simulation Pipelines for Societal Infrastructure}, author = {Sungahn Ko and Jieqiong Zhao and Jing Xia and Shehzad Afzal and Xiaoyu Wang and Greg Abram and Niklas Elmqvist and Len Kne and David Van Riper and Kelly Gaither and Shaun Kennedy and William Tolone and William Ribarsky and David S. Ebert}, url = {http://www.umiacs.umd.edu/~elm/projects/vasa/vasa.pdf}, year = {2014}, date = {2014-11-13}, journal = {IEEE Transactions on Visualization & Computer Graphics}, volume = {20}, number = {12}, pages = {1853--1862}, abstract = {We present VASA, a visual analytics platform consisting of a desktop application, a component model, and a suite of distributed simulation components for modeling the impact of societal threats such as weather, food contamination, and traffic on critical infrastructure such as supply chains, road networks, and power grids. Each component encapsulates a high-fidelity simulation model that together form an asynchronous simulation pipeline: a system of systems of individual simulations with a common data and parameter exchange format. At the heart of VASA is the Workbench, a visual analytics application providing three distinct features: (1) low-fidelity approximations of the distributed simulation components using local simulation proxies to enable analysts to interactively configure a simulation run; (2) computational steering mechanisms to manage the execution of individual simulation components; and (3) spatiotemporal and interactive methods to explore the combined results of a simulation run. We showcase the utility of the platform using examples involving supply chains during a hurricane as well as food contamination in a fast food restaurant chain.}, keywords = {} } |
69. | Krishna Madhavan, Niklas Elmqvist, Mihaela Vorvoreanu, Xin Chen, Yuetling Wong, Hanjun Xian, Zhihua Dong, Aditya Johri (2014): DIA2: Web-based Cyberinfrastructure for Visual Analytics of Funding Portfolios. IEEE Transactions on Visualization & Computer Graphics, 20 (12), pp. 1823–1832, 2014. (Type: Article | Abstract | Links | BibTeX) @article{Madhavan2014, title = {DIA2: Web-based Cyberinfrastructure for Visual Analytics of Funding Portfolios}, author = {Krishna Madhavan and Niklas Elmqvist and Mihaela Vorvoreanu and Xin Chen and Yuetling Wong and Hanjun Xian and Zhihua Dong and Aditya Johri}, url = {http://www.umiacs.umd.edu/~elm/projects/dia2/dia2-vast2014.pdf, Paper}, year = {2014}, date = {2014-11-13}, journal = {IEEE Transactions on Visualization & Computer Graphics}, volume = {20}, number = {12}, pages = {1823--1832}, abstract = {We present a design study of the Deep Insights Anywhere, Anytime (DIA2) platform, a web-based visual analytics system that allows program managers and academic staff at the U.S. National Science Foundation to search, view, and analyze their research funding portfolio. The goal of this system is to facilitate users' understanding of both past and currently active research awards in order to make more informed decisions of their future funding. This user group is characterized by high expertise yet not necessarily high literacy in visualization and visual analytics--they are essentially "casual experts"--and thus require careful visual and information design, including adhering to user experience standards, providing a self-instructive interface, and progressively refining visualizations to minimize complexity. We discuss the challenges of designing a system for "casual experts" and highlight how we addressed this issue by modeling the organizational structure and workflows of the NSF within our system. We discuss each stage of the design process, starting with formative interviews, participatory design, prototypes, and finally live deployments and evaluation with stakeholders.}, keywords = {} } |
68. | Sriram Karthik Badam, Senthil Chandrasegaran, Niklas Elmqvist, Karthik Ramani (2014): Tracing and Sketching Performance using Blunt-Tipped Styli on Direct-Touch Tablets. Proceedings of the ACM Conference on Advanced Visual Interfaces, pp. 193–200, 2014. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Badam2014a, title = {Tracing and Sketching Performance using Blunt-Tipped Styli on Direct-Touch Tablets}, author = {Sriram Karthik Badam and Senthil Chandrasegaran and Niklas Elmqvist and Karthik Ramani}, url = {http://www.umiacs.umd.edu/~elm/projects/sketch-media/sketch-media.pdf, Paper http://www.slideshare.net/NickElm/tracing-and-sketching-performance-using-blunttipped-styli-on-directtouch-tablets, Slides}, year = {2014}, date = {2014-07-01}, booktitle = {Proceedings of the ACM Conference on Advanced Visual Interfaces}, pages = {193--200}, abstract = {Direct-touch tablets are quickly replacing traditional pen-and-paper tools in many applications, but not in the case of the designer’s sketchbook. In this paper, we explore the tradeoffs inherent in replacing such paper sketchbooks with digital tablets in terms of two major tasks: tracing and free-hand sketching. Given the importance of the pen for sketching, we also study the impact of using a blunt-and-soft-tipped capacitive stylus in tablet settings. We thus conducted experiments to evaluate three sketch media: pen-paper, finger-tablet, and stylus-tablet based on the above tasks. We analyzed the tracing data with respect to speed and accuracy, and the quality of the free-hand sketches through a crowdsourced survey. The pen-paper and stylus-tablet media both performed significantly better than the finger-tablet medium in accuracy, while the pen-paper sketches were rated significantly higher in quality compared to both tablet interfaces. A follow-up study comparing the performance of this stylus with a sharp, hard-tip version showed no significant difference in tracing performance, though participants preferred the sharp tip for sketching.}, keywords = {} } |
67. | Sujin Jang, Niklas Elmqvist, Karthik Ramani (2014): GestureAnalyzer: Visual Analytics for Exploratory Analysis of Gesture Patterns. Proceedings of the ACM Symposium on Spatial User Interfaces, pp. 30–39, 2014. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Sujin2014, title = {GestureAnalyzer: Visual Analytics for Exploratory Analysis of Gesture Patterns}, author = {Sujin Jang and Niklas Elmqvist and Karthik Ramani}, url = {http://www.umiacs.umd.edu/~elm/projects/gesture-analyzer/gesture-analyzer.pdf, Paper}, year = {2014}, date = {2014-07-01}, booktitle = {Proceedings of the ACM Symposium on Spatial User Interfaces}, pages = {30--39}, abstract = {Understanding the intent behind human gestures is a critical problem in the design of gestural interactions. A common method to observe and understand how users express gestures is to use elicitation studies. However, these studies require time-consuming analysis of user data to identify gesture patterns. Also, analysis by humans cannot describe gestures in as much detail as data-based representations of motion features can. In this paper, we present GestureAnalyzer, a system that supports exploratory analysis of gesture patterns by applying interactive clustering and visualization techniques to motion tracking data. GestureAnalyzer enables rapid categorization of similar gestures, and visual investigation of various geometric and kinematic properties of user gestures. We describe the system components, and then demonstrate its utility through a case study on mid-air hand gestures obtained from elicitation studies.}, keywords = {} } |
66. | Sriram Karthik Badam, Niklas Elmqvist (2014): PolyChrome: A Cross-Device Framework for Collaborative Web Visualization. Proceedings of the ACM Conference on Interactive Tabletops and Surfaces, pp. 109–118, 2014. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Badam2014b, title = {PolyChrome: A Cross-Device Framework for Collaborative Web Visualization}, author = {Sriram Karthik Badam and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/polychrome/polychrome.pdf, Paper http://www.slideshare.net/NickElm/polychrome-a-crossdevice-framework-for-collaborative-web-visualization, Slides}, year = {2014}, date = {2014-07-01}, booktitle = {Proceedings of the ACM Conference on Interactive Tabletops and Surfaces}, pages = {109--118}, abstract = {We present PolyChrome, an application framework for creating web-based collaborative visualizations that can span multiple devices. The framework supports (1) co-browsing new web applications as well as legacy websites with no migration costs (i.e., a distributed web browser); (2) an API to develop new web applications that can synchronize the UI state on multiple devices to support synchronous and asynchronous collaboration; and (3) maintenance of state and input events on a server to handle common issues with distributed applications such as consistency management, conflict resolution, and undo operations. We describe PolyChrome's general design, architecture, and implementation followed by application examples showcasing collaborative web visualizations created using the framework. Finally, we present performance results that suggest that PolyChrome adds minimal overhead compared to single-device applications.}, keywords = {} } |
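The PolyChrome abstract mentions keeping state and input events on a server to support consistency management and undo. A generic way to get undo from a server-side event log is to rebuild state by replaying all but the most recent event; the miniature sketch below shows that idea. It is a hypothetical illustration of the general technique, not PolyChrome's API, and all names are invented:

```python
# Minimal event-log model: a server keeps an ordered log of UI events and
# derives (and undoes) state by replay. Illustrative only; not PolyChrome.

class EventLog:
    def __init__(self, initial_state):
        self.initial_state = dict(initial_state)
        self.log = []  # ordered list of (key, value) "set" events

    def record(self, key, value):
        """Append an input event as it arrives from some device."""
        self.log.append((key, value))

    def replay(self, upto=None):
        """Rebuild UI state by replaying the log (optionally a prefix)."""
        state = dict(self.initial_state)
        for key, value in self.log[:upto]:
            state[key] = value
        return state

    def undo(self):
        """Drop the most recent event; state is recovered by replay."""
        if self.log:
            self.log.pop()
        return self.replay()

log = EventLog({"selection": None})
log.record("selection", "node-7")
log.record("selection", "node-9")
assert log.replay()["selection"] == "node-9"
assert log.undo()["selection"] == "node-7"
```

Replay-based undo trades CPU time for simplicity; a real framework would more likely checkpoint state or store inverse operations to avoid replaying long logs.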
65. | William Benjamin, Senthil Chandrasegaran, Devarajan Ramanujan, Niklas Elmqvist, SVN Vishwanathan, Karthik Ramani (2014): Juxtapoze: supporting serendipity and creative expression in clipart compositions. Proceedings of the ACM Conference on Human Factors in Computing Systems, pp. 341–350, 2014. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Benjamin2014, title = {Juxtapoze: supporting serendipity and creative expression in clipart compositions}, author = {William Benjamin and Senthil Chandrasegaran and Devarajan Ramanujan and Niklas Elmqvist and SVN Vishwanathan and Karthik Ramani}, url = {http://www.umiacs.umd.edu/~elm/projects/juxtapoze/juxtapoze.pdf, Paper https://youtu.be/YkLFX16fSrA, Youtube video}, year = {2014}, date = {2014-01-01}, booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems}, pages = {341--350}, abstract = {Juxtapoze is a clipart composition workflow that supports creative expression and serendipitous discoveries in the shape domain. We achieve creative expression by supporting a workflow of searching, editing, and composing: the user queries the shape database using strokes, selects the desired search result, and finally modifies the selected image before composing it into the overall drawing. Serendipitous discovery of shapes is facilitated by allowing multiple exploration channels, such as doodles, shape filtering, and relaxed search. Results from a qualitative evaluation show that Juxtapoze makes the process of creating image compositions enjoyable and supports creative expression and serendipity.}, keywords = {} } |
64. | Ahmad M. M. Razip, Shehzad Afzal, Matthew Potrawski, Ross Maciejewski, Yun Jang, Niklas Elmqvist, David S. Ebert (2014): A Mobile Visual Analytics Approach for Law Enforcement Situation Awareness. Proceedings of the IEEE Pacific Symposium on Visualization, pp. 1235–1244, 2014. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Razip2014, title = {A Mobile Visual Analytics Approach for Law Enforcement Situation Awareness}, author = {Ahmad M. M. Razip and Shehzad Afzal and Matthew Potrawski and Ross Maciejewski and Yun Jang and Niklas Elmqvist and David S. Ebert}, url = {http://www.umiacs.umd.edu/~elm/projects/iVALET/iVALET.pdf, Paper}, year = {2014}, date = {2014-01-01}, booktitle = {Proceedings of the IEEE Pacific Symposium on Visualization}, pages = {1235--1244}, abstract = {The advent of modern smartphones and handheld devices has given analysts, decision-makers, and even the general public the ability to rapidly ingest data and translate it into actionable information on-the-go. In this paper, we explore the design and use of a mobile visual analytics toolkit for public safety data that equips law enforcement agencies with effective situation awareness and risk assessment tools. Our system provides users with a suite of interactive tools that allow them to perform analysis and detect trends, patterns and anomalies among criminal, traffic and civil (CTC) incidents. The system also provides interactive risk assessment tools that allow users to identify regions of potential high risk and determine the risk at any user-specified location and time. Our system has been designed for the iPhone/iPad environment and is currently being used and evaluated by a consortium of law enforcement agencies. We report their use of the system and some initial feedback.}, keywords = {} } |
63. | Zhenpeng Zhao, Sriram Karthik Badam, Senthil Chandrasegaran, Deok Gun Park, Niklas Elmqvist, Lorraine Kisselburgh, Karthik Ramani (2014): skWiki: A Multimedia Sketching System for Collaborative Creativity. Proceedings of the ACM Conference on Human Factors in Computing Systems, pp. 1235–1244, 2014. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Zhao2014, title = {skWiki: A Multimedia Sketching System for Collaborative Creativity}, author = {Zhenpeng Zhao and Sriram Karthik Badam and Senthil Chandrasegaran and Deok Gun Park and Niklas Elmqvist and Lorraine Kisselburgh and Karthik Ramani}, url = {http://www.umiacs.umd.edu/~elm/projects/skwiki/skwiki.pdf, Paper https://www.youtube.com/watch?v=QxtTR14EXFQ, Video http://www.slideshare.net/NickElm/skwiki-a-multimedia-sketching-system-for-collaborative-creativity, Slides}, year = {2014}, date = {2014-01-01}, booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems}, pages = {1235--1244}, abstract = {We present skWiki, a web application framework for collaborative creativity in digital multimedia projects, including text, hand-drawn sketches, and photographs. skWiki overcomes common drawbacks of existing wiki software by providing a rich viewer/editor architecture for all media types that is integrated into the web browser itself, thus avoiding dependence on client-side editors. Instead of files, skWiki uses the concept of paths as trajectories of persistent state over time. This model has intrinsic support for collaborative editing, including cloning, branching, and merging paths edited by multiple contributors. We demonstrate skWiki's utility using a qualitative, sketching-based user study.}, keywords = {} } |
62. | Eli Raymond Fisher, Sriram Karthik Badam, Niklas Elmqvist (2014): Designing Peer-to-Peer Distributed User Interfaces: Case Studies on Building Distributed Applications. International Journal of Human-Computer Studies, 72 (1), pp. 100–110, 2014. (Type: Article | Abstract | Links | BibTeX) @article{Fisher2014, title = {Designing Peer-to-Peer Distributed User Interfaces: Case Studies on Building Distributed Applications}, author = {Eli Raymond Fisher and Sriram Karthik Badam and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/dui-design/dui-design.pdf, Paper}, year = {2014}, date = {2014-01-01}, journal = {International Journal of Human-Computer Studies}, volume = {72}, number = {1}, pages = {100--110}, abstract = {Building a distributed user interface (DUI) application should ideally not require any additional effort beyond that necessary to build a non-distributed interface. In practice, however, DUI development is fraught with several technical challenges such as synchronization, resource management, and data transfer. In this paper, we present three case studies on building distributed user interface applications: a distributed media player for multiple displays and controls, a collaborative search system integrating a tabletop and mobile devices, and a multiplayer Tetris game for multi-surface use. While there exist several possible network architectures for such applications, our particular approach focuses on peer-to-peer (P2P) architectures. This focus leads to a number of challenges and opportunities. Drawing from these studies, we derive general challenges for P2P DUI development in terms of design, architecture, and implementation. We conclude with some general guidelines for practical DUI application development using peer-to-peer architectures.}, keywords = {} } |
2013 | |
61. | Stephen MacNeil, Niklas Elmqvist (2013): Visualization Mosaics for Multivariate Visual Exploration. Computer Graphics Forum, 32 (6), pp. 38–50, 2013. (Type: Article | Abstract | Links | BibTeX) @article{MacNeil2013, title = {Visualization Mosaics for Multivariate Visual Exploration}, author = {Stephen MacNeil and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/mosaics/mosaics.pdf, Paper}, year = {2013}, date = {2013-06-01}, journal = {Computer Graphics Forum}, volume = {32}, number = {6}, pages = {38--50}, abstract = {We present a new model for creating composite visualizations of multidimensional datasets using simple visual representations such as point charts, scatterplots, and parallel coordinates as components. Each visual representation is contained in a tile, and the tiles are arranged in a mosaic of views using a space-filling slice-and-dice layout. Tiles can be created, resized, split, or merged using a versatile set of interaction techniques, and the visual representation of individual tiles can also be dynamically changed to another representation. Because each tile is self-contained and independent, it can be implemented in any programming language, on any platform, and using any visual representation. We also propose a formalism for expressing visualization mosaics. A web-based implementation called MosaicJS supporting multidimensional visual exploration showcases the versatility of the concept and illustrates how it can be used to integrate visualization components provided by different toolkits.}, keywords = {} } |
60. | Niklas Elmqvist, Pourang Irani (2013): Ubiquitous Analytics: Interacting with Big Data Anywhere, Anytime. IEEE Computer, 46 (4), pp. 86–89, 2013. (Type: Article | Abstract | Links | BibTeX) @article{Elmqvist2013, title = {Ubiquitous Analytics: Interacting with Big Data Anywhere, Anytime}, author = {Niklas Elmqvist and Pourang Irani}, url = {http://www.umiacs.umd.edu/~elm/projects/ubilytics/ubilytics.pdf, Paper}, year = {2013}, date = {2013-01-01}, journal = {IEEE Computer}, volume = {46}, number = {4}, pages = {86--89}, abstract = {With more than 4 billion mobile devices in the world today, mobile computing is quickly becoming the universal computational platform of the world. Building on this new wave of mobile devices are personal computing activities such as microblogging, social networking, and photo sharing, which are intrinsically mobile phenomena that occur while on-the-go. Mobility is now propagating to more professional activities such as data analytics, which need no longer be restricted to the workplace. In fact, the rise of big data increasingly demands that we be able to access data resources anytime and anywhere, whether to support decisions and activities for travel, telecommuting, or distributed teamwork. In other words, it is high time to fully realize Mark Weiser’s vision of ubiquitous computing in the realm of data analytics.}, keywords = {} } |
59. | Sohaib Ghani, Bumchul Kwon, Seungyoon Lee, Ji-Soo Yi, Niklas Elmqvist (2013): Visual Analytics for Multimodal Social Network Analysis: A Design Study with Social Scientists. IEEE Transactions on Visualization and Computer Graphics, 19 (12), pp. 2032–2041, 2013. (Type: Article | Abstract | Links | BibTeX) @article{Ghani2013, title = {Visual Analytics for Multimodal Social Network Analysis: A Design Study with Social Scientists}, author = {Sohaib Ghani and Bumchul Kwon and Seungyoon Lee and Ji-Soo Yi and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/mmgraph/mmgraph.pdf}, year = {2013}, date = {2013-01-01}, journal = {IEEE Transactions on Visualization and Computer Graphics}, volume = {19}, number = {12}, pages = {2032--2041}, abstract = {Social network analysis (SNA) is becoming increasingly concerned not only with actors and their relations, but also with distinguishing between different types of such entities. For example, social scientists may want to investigate asymmetric relations in organizations with strict chains of command, or incorporate non-actors such as conferences and projects when analyzing co-authorship patterns. Multimodal social networks are those where actors and relations belong to different types, or modes, and multimodal social network analysis (mSNA) is accordingly SNA for such networks. In this paper, we present a design study that we conducted with several social scientist collaborators on how to support mSNA using visual analytics tools. Based on an open-ended, formative design process, we devised a visual representation called parallel node-link bands (PNLBs) that splits modes into separate bands and renders connections between adjacent ones, similar to the list view in Jigsaw. We then used the tool in a qualitative evaluation involving five social scientists whose feedback informed a second design phase that incorporated additional network metrics. Finally, we conducted a second qualitative evaluation with our social scientist collaborators that provided further insights on the utility of the PNLBs representation and the potential of visual analytics for mSNA.}, keywords = {} } |
58. | Waqas Javed, Niklas Elmqvist (2013): Stack Zooming for Multi-Focus Interaction in Skewed-Aspect Visual Spaces. IEEE Transactions on Visualization and Computer Graphics, 19 (8), pp. 1362–1374, 2013. (Type: Article | Abstract | Links | BibTeX) @article{Javed2013b, title = {Stack Zooming for Multi-Focus Interaction in Skewed-Aspect Visual Spaces}, author = {Waqas Javed and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/stackzoom/stackzoom-journal.pdf, Paper}, year = {2013}, date = {2013-01-01}, journal = {IEEE Transactions on Visualization and Computer Graphics}, volume = {19}, number = {8}, pages = {1362--1374}, abstract = {Many 2D visual spaces have a virtually one-dimensional nature with very high aspect ratio between the dimensions: examples include time-series data, multimedia data such as sound or video, text documents, and bipartite graphs. Common among these is that the space can become very large, e.g., temperature measurements could span a long time period, surveillance video could cover entire days or weeks, and documents can have thousands of pages. Many analysis tasks for such spaces require several foci while retaining context and distance awareness. In this extended version of our IEEE PacificVis 2010 paper, we introduce a method for supporting this kind of multi-focus interaction that we call stack zooming. The approach is based on building hierarchies of 1D strips stacked on top of each other, where each subsequent stack represents a higher zoom level, and sibling strips represent branches in the exploration. Correlation graphics show the relation between stacks and strips of different levels, providing context and distance awareness for the foci. The zoom hierarchies can also be used as graphical histories and for communicating insights to stakeholders, and can be further extended with annotation and integrated statistics.}, keywords = {} } |
57. | Waqas Javed, Niklas Elmqvist (2013): ExPlates: Spatializing Interactive Analysis to Scaffold Visual Exploration. Computer Graphics Forum, 32 (2), pp. 441–450, 2013. (Type: Article | Abstract | Links | BibTeX) @article{Javed2013, title = {ExPlates: Spatializing Interactive Analysis to Scaffold Visual Exploration}, author = {Waqas Javed and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/explates/explates.pdf, Paper http://www.slideshare.net/NickElm/ex-plates-online, Slides https://www.youtube.com/watch?v=UNhlhFUcDDo, Youtube Video}, year = {2013}, date = {2013-01-01}, journal = {Computer Graphics Forum}, volume = {32}, number = {2}, pages = {441--450}, abstract = {Visual exploration involves using visual representations to investigate data where the goals of the process are unclear and poorly defined. However, this often places unduly high cognitive load on the user, particularly in terms of keeping track of multiple investigative branches, remembering earlier results, and correlating between different views. We propose a new methodology for automatically spatializing the individual steps in visual exploration onto a large visual canvas, allowing users to easily recall, reflect, and assess their progress. We also present a web-based implementation of our methodology called ExPlatesJS where users can manipulate multidimensional data in their browsers, automatically building visual queries as they explore the data.}, keywords = {} } |
2012 | |
56. | Waqas Javed, Sohaib Ghani, Niklas Elmqvist (2012): GravNav: Using a Gravity Model for Multi-Scale Navigation. Proceedings of the ACM Conference on Advanced Visual Interfaces, pp. 217–224, 2012. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Javed2012c, title = {GravNav: Using a Gravity Model for Multi-Scale Navigation}, author = {Waqas Javed and Sohaib Ghani and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/gravnav/gravnav.pdf}, year = {2012}, date = {2012-01-01}, booktitle = {Proceedings of the ACM Conference on Advanced Visual Interfaces}, pages = {217--224}, abstract = {We present gravity navigation (GravNav), a family of multi-scale navigation techniques that use a gravity-inspired model for assisting navigation in large visual 2D spaces based on the interest and salience of visual objects in the space. GravNav is an instance of topology-aware navigation, which makes use of the structure of the visual space to aid navigation. We have performed a controlled study comparing GravNav to standard zoom and pan navigation, with and without variable-rate zoom control. Our results show a significant improvement for GravNav over standard navigation, particularly when coupled with variable-rate zoom. We also report findings on user behavior in multi-scale navigation.}, keywords = {} } |
55. | Waqas Javed, Sohaib Ghani, Niklas Elmqvist (2012): PolyZoom: Multiscale and Multifocus Exploration in 2D Visual Spaces. Proceedings of the ACM Conference on Human Factors in Computing Systems, pp. 287–296, 2012. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Javed2012b, title = {PolyZoom: Multiscale and Multifocus Exploration in 2D Visual Spaces}, author = {Waqas Javed and Sohaib Ghani and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/polyzoom/polyzoom.pdf}, year = {2012}, date = {2012-01-01}, booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems}, pages = {287--296}, abstract = {The most common techniques for navigating in multiscale visual spaces are pan, zoom, and bird’s eye views. However, these techniques are often tedious and cumbersome to use, especially when objects of interest are located far apart. We present the PolyZoom technique where users progressively build hierarchies of focus regions, stacked on each other such that each subsequent level shows a higher magnification. Correlation graphics show the relation between parent and child viewports in the hierarchy. To validate the new technique, we compare it to standard navigation techniques in two user studies, one on multiscale visual search and the other on multifocus interaction. Results show that PolyZoom performs better than current standard techniques.}, keywords = {} } |
54. | Waqas Javed, Niklas Elmqvist (2012): Exploring the Design Space of Composite Visualization. Proceedings of the IEEE Pacific Symposium on Visualization, pp. 1–8, 2012. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Javed2012a, title = {Exploring the Design Space of Composite Visualization}, author = {Waqas Javed and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/compvis/compvis.pdf}, year = {2012}, date = {2012-01-01}, booktitle = {Proceedings of the IEEE Pacific Symposium on Visualization}, pages = {1--8}, abstract = {We propose the notion of composite visualization views (CVVs) as a theoretical model that unifies the existing coordinated multiple views (CMV) paradigm with other strategies for combining visual representations in the same geometrical space. We identify five such strategies--called CVV design patterns--based on an extensive review of the literature in composite visualization. We go on to show how these design patterns can all be expressed in terms of a design space describing the correlation between two visualizations in terms of spatial mapping as well as the data relationships between items in the visualizations. We also discuss how to use this design space to suggest potential directions for future research.}, keywords = {} } |
53. | Abish Malik, Ross Maciejewski, Yun Jang, Whitney Huang, Niklas Elmqvist, David Ebert (2012): A Correlative Analysis Process in a Visual Analytics Environment. Proceedings of the IEEE Conference on Visual Analytics Science and Technology, pp. 33–42, 2012. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Malik2012, title = {A Correlative Analysis Process in a Visual Analytics Environment}, author = {Abish Malik and Ross Maciejewski and Yun Jang and Whitney Huang and Niklas Elmqvist and David Ebert}, url = {https://ieeexplore.ieee.org/document/6400491, IEEE Xplore}, year = {2012}, date = {2012-01-01}, booktitle = {Proceedings of the IEEE Conference on Visual Analytics Science and Technology}, pages = {33--42}, abstract = {Finding patterns and trends in spatial and temporal datasets has been a long-studied problem in statistics and different domains of science. This paper presents a visual analytics approach for the interactive exploration and analysis of spatiotemporal correlations among multivariate datasets. Our approach enables users to discover correlations and explore potentially causal or predictive links at different spatiotemporal aggregation levels among the datasets, and allows them to understand the underlying statistical foundations that precede the analysis. Our technique utilizes Pearson's product-moment correlation coefficient and factors in the lead or lag between different datasets to detect trends and periodic patterns amongst them.}, keywords = {} } |
52. | Will McGrath, Brian Bowman, David McCallum, Juan-David Hincapie-Ramos, Niklas Elmqvist, Pourang Irani (2012): Branch-Explore-Merge: Facilitating Real-Time Revision Control in Collaborative Visual Exploration. Proceedings of the ACM Conference on Interactive Tabletops and Surfaces, pp. 235–244, 2012. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{McGrath2012, title = {Branch-Explore-Merge: Facilitating Real-Time Revision Control in Collaborative Visual Exploration}, author = {Will McGrath and Brian Bowman and David McCallum and Juan-David Hincapie-Ramos and Niklas Elmqvist and Pourang Irani}, url = {http://www.umiacs.umd.edu/~elm/projects/bem/bem.pdf}, year = {2012}, date = {2012-01-01}, booktitle = {Proceedings of the ACM Conference on Interactive Tabletops and Surfaces}, pages = {235--244}, abstract = {Collaborative work is characterized by participants seamlessly transitioning from working together (coupled) to working alone (decoupled). Groupware should therefore facilitate smoothly varying coupling throughout the entire collaborative session. Towards achieving such transitions for collaborative exploration and search, we propose a protocol based on managing revisions for each collaborator exploring a dataset. The protocol allows participants to diverge from the shared analysis path (branch), study the data independently (explore), and then contribute back their findings onto the shared display (merge). We apply this concept to collaborative search in multidimensional data, and propose an implementation where the public view is a tabletop display and the private views are embedded in handheld tablets. We then use this implementation to perform a qualitative user study involving a real estate dataset. Results show that participants leverage the BEM protocol, spend significant time using their private views (40% to 80% of total task time), and apply public view changes for consultation with collaborators.}, keywords = {} } |
51. | Sundar Murugappan, Vinayak, Niklas Elmqvist, Karthik Ramani (2012): Extended Multitouch: Recovering Touch Posture and Differentiating Users using a Depth Camera. Proceedings of the ACM Symposium on User Interface Software and Technology, pp. 487–496, 2012. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Murugappan2012, title = {Extended Multitouch: Recovering Touch Posture and Differentiating Users using a Depth Camera}, author = {Sundar Murugappan and Vinayak and Niklas Elmqvist and Karthik Ramani}, url = {http://www.umiacs.umd.edu/~elm/projects/emtouch/emtouch.pdf}, year = {2012}, date = {2012-01-01}, booktitle = {Proceedings of the ACM Symposium on User Interface Software and Technology}, pages = {487--496}, abstract = {Multitouch surfaces are becoming prevalent, but most existing technologies are only capable of detecting the user’s actual points of contact on the surface and not the identity, posture, and handedness of the user. In this paper, we define the concept of extended multitouch interaction as a richer input modality that includes all of this information. We further present a practical solution to achieve this on tabletop displays based on mounting a single commodity depth camera above a horizontal surface. This will enable us to not only detect when the surface is being touched, but also recover the user’s exact finger and hand posture, as well as distinguish between different users and their handedness. We validate our approach using two user studies, and deploy the technique in a scratchpad tool and in a pen + touch sketch tool.}, keywords = {} } |
50. | Shehzad Afzal, Ross Maciejewski, Yun Jang, Niklas Elmqvist, David Ebert (2012): Spatial Text Visualization Using Automatic Typographic Maps. IEEE Transactions on Visualization and Computer Graphics (Proc. IEEE InfoVis 2012), 18 (12), pp. 2556–2564, 2012. (Type: Article | Abstract | Links | BibTeX) @article{Afzal2012, title = {Spatial Text Visualization Using Automatic Typographic Maps}, author = {Shehzad Afzal and Ross Maciejewski and Yun Jang and Niklas Elmqvist and David Ebert}, url = {http://www.umiacs.umd.edu/~elm/projects/typomapvis/typomapvis.pdf}, year = {2012}, date = {2012-01-01}, journal = {IEEE Transactions on Visualization and Computer Graphics (Proc. IEEE InfoVis 2012)}, volume = {18}, number = {12}, pages = {2556--2564}, abstract = {We present a method for automatically building typographic maps that merge text and spatial data into a visual representation where text alone forms the graphical features. We further show how to use this approach to visualize spatial data such as traffic density, crime rate, or demographic data. The technique accepts a vector representation of a geographic map and spatializes the textual labels in the space onto polylines and polygons based on user-defined visual attributes and constraints. Our sample implementation runs as a Web service, spatializing shape files from the OpenStreetMap project into typographic maps for any region.}, keywords = {} } |
49. | Brian Bowman, Niklas Elmqvist, T.J. Jankun-Kelly (2012): Toward Visualization for Games: Theory, Design Space, and Patterns. IEEE Transactions on Visualization and Computer Graphics, 18 (11), pp. 1956-1968, 2012. (Type: Article | Abstract | Links | BibTeX) @article{Bowman2012, title = {Toward Visualization for Games: Theory, Design Space, and Patterns}, author = {Brian Bowman and Niklas Elmqvist and T.J. Jankun-Kelly}, url = {http://www.umiacs.umd.edu/~elm/projects/visgames/visgames.pdf}, year = {2012}, date = {2012-01-01}, journal = {IEEE Transactions on Visualization and Computer Graphics}, volume = {18}, number = {11}, pages = {1956-1968}, abstract = {Electronic games are starting to incorporate in-game telemetry that collects data about player, team, and community performance on a massive scale, and as data begins to accumulate, so does the demand for effectively analyzing this data. In this paper, we use examples from both old and new games of different genres to explore the theory and design space of visualization for games. Drawing on these examples, we define a design space for this novel research topic and use it to formulate design patterns for how to best apply visualization technology to games. We then discuss the implications that this new framework will potentially have on the design and development of game and visualization technology in the future.}, keywords = {} } |
48. | Niklas Elmqvist, David Ebert (2012): Leveraging Multidisciplinarity in a Visual Analytics Graduate Course. IEEE Computer Graphics and Applications, 32 (3), pp. 84–87, 2012. (Type: Article | Abstract | Links | BibTeX) @article{Elmqvist2012, title = {Leveraging Multidisciplinarity in a Visual Analytics Graduate Course}, author = {Niklas Elmqvist and David Ebert}, url = {http://www.umiacs.umd.edu/~elm/projects/va-education/va-education.pdf}, year = {2012}, date = {2012-01-01}, journal = {IEEE Computer Graphics and Applications}, volume = {32}, number = {3}, pages = {84--87}, abstract = {There is a growing demand in engineering, business, science, research, and industry for students with visual analytics expertise, but teaching visual analytics is challenging due to the multidisciplinary nature of the topic matter, the diverse backgrounds of the students, and the corresponding requirements on the instructor. We report some best practices from our experience teaching several offerings of a visual analytics graduate course at Purdue University where we leveraged these multidisciplinary challenges to our advantage instead of attempting to mitigate them.}, keywords = {} } There is a growing demand in engineering, business, science, research, and industry for students with visual analytics expertise, but teaching visual analytics is challenging due to the multidisciplinary nature of the topic matter, the diverse backgrounds of the students, and the corresponding requirements on the instructor. We report some best practices from our experience teaching several offerings of a visual analytics graduate course at Purdue University where we leveraged these multidisciplinary challenges to our advantage instead of attempting to mitigate them. |
47. | Sohaib Ghani, Niklas Elmqvist, Ji-Soo Yi (2012): Perception of Animated Node-Link Diagrams for Dynamic Graphs. Computer Graphics Forum, 31 (3), pp. 1205–1214, 2012. (Type: Article | Abstract | Links | BibTeX) @article{Ghani2012, title = {Perception of Animated Node-Link Diagrams for Dynamic Graphs}, author = {Sohaib Ghani and Niklas Elmqvist and Ji-Soo Yi}, url = {http://www.umiacs.umd.edu/~elm/projects/dyngraph/dyngraph.pdf}, year = {2012}, date = {2012-01-01}, journal = {Computer Graphics Forum}, volume = {31}, number = {3}, pages = {1205--1214}, abstract = {Effective visualization of dynamic graphs remains an open research topic, and many state-of-the-art tools use animated node-link diagrams for this purpose. Despite its intuitiveness, the effectiveness of animation in node-link diagrams has been questioned, and several empirical studies have shown that animation is not necessarily superior to static visualizations. However, the exact mechanics of perceiving animated node-link diagrams are still unclear. In this paper, we study the impact of different dynamic graph metrics on user perception of the animation. After deriving candidate visual graph metrics, we perform an exploratory user study where participants are asked to reconstruct the event sequence in animated node-link diagrams. Based on these findings, we conduct a second user study where we investigate the most important visual metrics in depth. Our findings show that node speed and target separation are prominent visual metrics to predict the performance of event sequencing tasks.}, keywords = {} } Effective visualization of dynamic graphs remains an open research topic, and many state-of-the-art tools use animated node-link diagrams for this purpose. Despite its intuitiveness, the effectiveness of animation in node-link diagrams has been questioned, and several empirical studies have shown that animation is not necessarily superior to static visualizations.
However, the exact mechanics of perceiving animated node-link diagrams are still unclear. In this paper, we study the impact of different dynamic graph metrics on user perception of the animation. After deriving candidate visual graph metrics, we perform an exploratory user study where participants are asked to reconstruct the event sequence in animated node-link diagrams. Based on these findings, we conduct a second user study where we investigate the most important visual metrics in depth. Our findings show that node speed and target separation are prominent visual metrics to predict the performance of event sequencing tasks. |
46. | KyungTae Kim, Niklas Elmqvist (2012): Embodied Lenses for Collaborative Visual Queries on Tabletop Displays. Information Visualization, 11 (4), pp. 336–355, 2012. (Type: Article | Abstract | Links | BibTeX) @article{Kim2012, title = {Embodied Lenses for Collaborative Visual Queries on Tabletop Displays}, author = {KyungTae Kim and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/emblens/emblens.pdf}, year = {2012}, date = {2012-01-01}, journal = {Information Visualization}, volume = {11}, number = {4}, pages = {336--355}, abstract = {We introduce embodied lenses for visual queries on tabletop surfaces using physical interaction. The lenses are simply thin sheets of paper or transparent foil decorated with fiducial markers, allowing them to be tracked by a diffuse illumination tabletop display. The physical affordance of these embodied lenses allows them to be overlapped, causing composition in the underlying virtual space. We perform a formative evaluation to study users’ conceptual models for overlapping physical lenses. This is followed by a quantitative user study comparing performance for embodied versus purely virtual lenses. Results show that embodied lenses are equally efficient compared to purely virtual lenses, and also support tactile and eyes-free interaction. We then present several examples of the technique, including image layers, map layers, image manipulation, and multidimensional data visualization. The technique is simple, cheap, and can be integrated into many existing tabletop displays.}, keywords = {} } We introduce embodied lenses for visual queries on tabletop surfaces using physical interaction. The lenses are simply thin sheets of paper or transparent foil decorated with fiducial markers, allowing them to be tracked by a diffuse illumination tabletop display. The physical affordance of these embodied lenses allows them to be overlapped, causing composition in the underlying virtual space.
We perform a formative evaluation to study users’ conceptual models for overlapping physical lenses. This is followed by a quantitative user study comparing performance for embodied versus purely virtual lenses. Results show that embodied lenses are equally efficient compared to purely virtual lenses, and also support tactile and eyes-free interaction. We then present several examples of the technique, including image layers, map layers, image manipulation, and multidimensional data visualization. The technique is simple, cheap, and can be integrated into many existing tabletop displays. |
45. | Bumchul Kwon, Waqas Javed, Sohaib Ghani, Niklas Elmqvist, Ji-Soo Yi, David Ebert (2012): Evaluating the Role of Time in Investigative Analysis of Document Collections. IEEE Transactions on Visualization and Computer Graphics, 18 (11), pp. 1992–2004, 2012. (Type: Article | Abstract | Links | BibTeX) @article{Kwon2012, title = {Evaluating the Role of Time in Investigative Analysis of Document Collections}, author = {Bumchul Kwon and Waqas Javed and Sohaib Ghani and Niklas Elmqvist and Ji-Soo Yi and David Ebert}, url = {http://www.umiacs.umd.edu/~elm/projects/time-analysis/time-analysis.pdf}, year = {2012}, date = {2012-01-01}, journal = {IEEE Transactions on Visualization and Computer Graphics}, volume = {18}, number = {11}, pages = {1992--2004}, abstract = {Time is a universal and essential aspect of data in any investigative analysis. It helps analysts establish causality, build storylines from evidence, and reject infeasible hypotheses. For this reason, many investigative analysis tools provide visual representations designed for making sense of temporal data. However, the field of visual analytics still needs more evidence explaining how temporal visualization actually aids the analysis process, as well as design recommendations for how to build these visualizations. To fill this gap, we conducted an insight-based qualitative study to investigate the influence of temporal visualization on investigative analysis. We found that visualizing temporal information helped participants externalize chains of events. Another contribution of our work is the lightweight evaluation approach used to collect, visualize, and analyze insight.}, keywords = {} } Time is a universal and essential aspect of data in any investigative analysis. It helps analysts establish causality, build storylines from evidence, and reject infeasible hypotheses. For this reason, many investigative analysis tools provide visual representations designed for making sense of temporal data.
However, the field of visual analytics still needs more evidence explaining how temporal visualization actually aids the analysis process, as well as design recommendations for how to build these visualizations. To fill this gap, we conducted an insight-based qualitative study to investigate the influence of temporal visualization on investigative analysis. We found that visualizing temporal information helped participants externalize chains of events. Another contribution of our work is the lightweight evaluation approach used to collect, visualize, and analyze insight. |
44. | Krishna Madhavan, Mihaela Vorvoreanu, Niklas Elmqvist, Aditya Johri, Naren Ramakrishnan, G. Alan Wang, Ann McKenna (2012): Portfolio Mining. IEEE Computer, 45 (10), pp. 95–99, 2012. (Type: Article | Abstract | Links | BibTeX) @article{Madhavan2012, title = {Portfolio Mining}, author = {Krishna Madhavan and Mihaela Vorvoreanu and Niklas Elmqvist and Aditya Johri and Naren Ramakrishnan and G. Alan Wang and Ann McKenna}, url = {https://ieeexplore.ieee.org/document/6329888, IEEE Xplore}, year = {2012}, date = {2012-01-01}, journal = {IEEE Computer}, volume = {45}, number = {10}, pages = {95--99}, abstract = {Portfolio mining facilitates the creation of actionable knowledge, catalyzes innovations, and sustains research communities.}, keywords = {} } Portfolio mining facilitates the creation of actionable knowledge, catalyzes innovations, and sustains research communities. |
2011 | |
43. | Niklas Elmqvist, Pierre Dragicevic, Jean-Daniel Fekete (2011): Color Lens: Adaptive Color Scale Optimization for Visual Exploration. IEEE Transactions on Visualization and Computer Graphics, 17 (6), pp. 795-807, 2011. (Type: Article | Abstract | Links | BibTeX) @article{Elmqvist2011b, title = {Color Lens: Adaptive Color Scale Optimization for Visual Exploration}, author = {Niklas Elmqvist and Pierre Dragicevic and Jean-Daniel Fekete}, url = {http://www.umiacs.umd.edu/~elm/projects/colorlens/colorlens.pdf}, year = {2011}, date = {2011-06-01}, journal = {IEEE Transactions on Visualization and Computer Graphics}, volume = {17}, number = {6}, pages = {795-807}, abstract = {Visualization applications routinely map quantitative attributes to color using color scales. Although color is an effective visualization channel, it is limited by both display hardware and the human visual system. We propose a new interaction technique that overcomes these limitations by dynamically optimizing color scales based on a set of sampling lenses. The technique inspects the lens contents in data space, optimizes the initial color scale, and then renders the contents of the lens to the screen using the modified color scale. We present two prototype implementations of this pipeline and describe several case studies involving both information visualization and image inspection applications. We validate our approach with two mutually linked and complementary user studies comparing the Color Lens with explicit contrast control for visual search.}, keywords = {} } Visualization applications routinely map quantitative attributes to color using color scales. Although color is an effective visualization channel, it is limited by both display hardware and the human visual system. We propose a new interaction technique that overcomes these limitations by dynamically optimizing color scales based on a set of sampling lenses. 
The technique inspects the lens contents in data space, optimizes the initial color scale, and then renders the contents of the lens to the screen using the modified color scale. We present two prototype implementations of this pipeline and describe several case studies involving both information visualization and image inspection applications. We validate our approach with two mutually linked and complementary user studies comparing the Color Lens with explicit contrast control for visual search. |
42. | Sohaib Ghani, Nathalie Henry Riche, Niklas Elmqvist (2011): Dynamic Insets for Context-Aware Graph Navigation. Computer Graphics Forum, 30 (3), pp. 861-870, 2011. (Type: Article | Abstract | Links | BibTeX) @article{Ghani2011, title = {Dynamic Insets for Context-Aware Graph Navigation}, author = {Sohaib Ghani and Nathalie Henry Riche and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/dyninsets/dyninsets.pdf}, year = {2011}, date = {2011-06-01}, journal = {Computer Graphics Forum}, volume = {30}, number = {3}, pages = {861-870}, abstract = {Maintaining both overview and detail while navigating in graphs, such as road networks, airline route maps, or social networks, is difficult, especially when targets of interest are located far apart. We present a navigation technique called Dynamic Insets that provides context awareness for graph navigation. Dynamic insets utilize the topological structure of the network to draw a visual inset for off-screen nodes that shows a portion of the surrounding area for links leaving the edge of the screen. We implement dynamic insets for general graph navigation as well as geographical maps. We also present results from a set of user studies that show that our technique is more efficient than most of the existing techniques for graph navigation in different networks.}, keywords = {} } Maintaining both overview and detail while navigating in graphs, such as road networks, airline route maps, or social networks, is difficult, especially when targets of interest are located far apart. We present a navigation technique called Dynamic Insets that provides context awareness for graph navigation. Dynamic insets utilize the topological structure of the network to draw a visual inset for off-screen nodes that shows a portion of the surrounding area for links leaving the edge of the screen. We implement dynamic insets for general graph navigation as well as geographical maps. 
We also present results from a set of user studies that show that our technique is more efficient than most of the existing techniques for graph navigation in different networks. |
41. | Pierre Dragicevic, Anastasia Bezerianos, Waqas Javed, Niklas Elmqvist, Jean-Daniel Fekete (2011): Temporal Distortion for Animated Transitions. Proceedings of the ACM Conference on Human Factors in Computing Systems, pp. 2009-2018, 2011. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Dragicevic2011, title = {Temporal Distortion for Animated Transitions}, author = {Pierre Dragicevic and Anastasia Bezerianos and Waqas Javed and Niklas Elmqvist and Jean-Daniel Fekete}, url = {http://www.umiacs.umd.edu/~elm/projects/timedistort/timedistort.pdf}, year = {2011}, date = {2011-01-01}, booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems}, pages = {2009-2018}, abstract = {Animated transitions are popular in many visual applications but they can be difficult to follow, especially when many objects move at the same time. One informal design guideline for creating effective animated transitions has long been the use of slow-in/slow-out pacing, but no empirical data exist to support this practice. We remedy this by studying object tracking performance under different conditions of temporal distortion, i.e., constant speed transitions, slow-in/slow-out, fast-in/fast-out, and an adaptive technique that slows down the visually complex parts of the animation. Slow-in/slow-out outperformed other techniques, but we saw technique differences depending on the type of visual transition.}, keywords = {} } Animated transitions are popular in many visual applications but they can be difficult to follow, especially when many objects move at the same time. One informal design guideline for creating effective animated transitions has long been the use of slow-in/slow-out pacing, but no empirical data exist to support this practice. 
We remedy this by studying object tracking performance under different conditions of temporal distortion, i.e., constant speed transitions, slow-in/slow-out, fast-in/fast-out, and an adaptive technique that slows down the visually complex parts of the animation. Slow-in/slow-out outperformed other techniques, but we saw technique differences depending on the type of visual transition. |
40. | Sohaib Ghani, Niklas Elmqvist (2011): Improving Revisitation in Graphs through Static Spatial Features. Proceedings of Graphics Interface, pp. 175-182, 2011. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Ghani2011b, title = {Improving Revisitation in Graphs through Static Spatial Features}, author = {Sohaib Ghani and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/ssgf/ssgf.pdf}, year = {2011}, date = {2011-01-01}, booktitle = {Proceedings of Graphics Interface}, pages = {175-182}, abstract = {People generally remember locations in visual spaces with respect to spatial features and landmarks. Geographical maps provide many spatial features and hence are easy to remember. However, graphs are often visualized as node-link diagrams with few spatial features. We evaluate whether adding static spatial features to node-link diagrams will help in graph revisitation. We discuss three strategies for embellishing a graph and evaluate each in a user study. In our first study, we evaluate how to best add background features to a graph. In the second, we encode position using node size and color. In the third and final study, we take the best techniques from the first and second study, as well as shapes added to the graph as virtual landmarks, to find the best combination of spatial features for graph revisitation. We discuss the user study results and give our recommendations for design of graph visualization software.}, keywords = {} } People generally remember locations in visual spaces with respect to spatial features and landmarks. Geographical maps provide many spatial features and hence are easy to remember. However, graphs are often visualized as node-link diagrams with few spatial features. We evaluate whether adding static spatial features to node-link diagrams will help in graph revisitation. We discuss three strategies for embellishing a graph and evaluate each in a user study. 
In our first study, we evaluate how to best add background features to a graph. In the second, we encode position using node size and color. In the third and final study, we take the best techniques from the first and second study, as well as shapes added to the graph as virtual landmarks, to find the best combination of spatial features for graph revisitation. We discuss the user study results and give our recommendations for design of graph visualization software. |
39. | Waqas Javed, KyungTae Kim, Sohaib Ghani, Niklas Elmqvist (2011): Evaluating Physical/Virtual Occlusion Management Techniques for Horizontal Displays. Proceedings of INTERACT, pp. 391-408, 2011. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Javed2011, title = {Evaluating Physical/Virtual Occlusion Management Techniques for Horizontal Displays}, author = {Waqas Javed and KyungTae Kim and Sohaib Ghani and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/occtable/occtable.pdf}, year = {2011}, date = {2011-01-01}, booktitle = {Proceedings of INTERACT}, pages = {391-408}, abstract = {We evaluate unguided and guided visual search performance for a set of techniques that mitigate occlusion between physical and virtual objects on a tabletop display. The techniques are derived from a general model of hybrid physical/virtual occlusion, and take increasingly drastic measures to make the user aware of, identify, and access hidden objects---but with increasingly space-consuming and disruptive impact on the display. Performance is different depending on the visual display, suggesting a tradeoff between management strength and visual space deformation.}, keywords = {} } We evaluate unguided and guided visual search performance for a set of techniques that mitigate occlusion between physical and virtual objects on a tabletop display. The techniques are derived from a general model of hybrid physical/virtual occlusion, and take increasingly drastic measures to make the user aware of, identify, and access hidden objects---but with increasingly space-consuming and disruptive impact on the display. Performance is different depending on the visual display, suggesting a tradeoff between management strength and visual space deformation. |
38. | KyungTae Kim, Sungahn Ko, Niklas Elmqvist, David Ebert (2011): WordBridge: Using Composite Tag Clouds in Node-Link Diagrams for Visualizing Content and Relations in Text Corpora. Proceedings of the Hawaii International Conference on System Sciences, 2011. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Kim2011, title = {WordBridge: Using Composite Tag Clouds in Node-Link Diagrams for Visualizing Content and Relations in Text Corpora}, author = {KyungTae Kim and Sungahn Ko and Niklas Elmqvist and David Ebert}, url = {http://www.umiacs.umd.edu/~elm/projects/wordbridge/wordbridge.pdf}, year = {2011}, date = {2011-01-01}, booktitle = {Proceedings of the Hawaii International Conference on System Sciences}, abstract = {We introduce WordBridge, a novel graph-based visualization technique for showing relationships between entities in text corpora. The technique is a node-link visualization where both nodes and links are tag clouds. Using these tag clouds, WordBridge can reveal relationships by representing not only entities and their connections, but also the nature of their relationship using representative keywords for nodes and edges. In this paper, we apply the technique to an interactive web-based visual analytics environment---Apropos---where a user can explore a text corpus using WordBridge. We validate the technique using several case studies based on document collections such as intelligence reports, co-authorship networks, and works of fiction.}, keywords = {} } We introduce WordBridge, a novel graph-based visualization technique for showing relationships between entities in text corpora. The technique is a node-link visualization where both nodes and links are tag clouds. Using these tag clouds, WordBridge can reveal relationships by representing not only entities and their connections, but also the nature of their relationship using representative keywords for nodes and edges.
In this paper, we apply the technique to an interactive web-based visual analytics environment---Apropos---where a user can explore a text corpus using WordBridge. We validate the technique using several case studies based on document collections such as intelligence reports, co-authorship networks, and works of fiction. |
39. | Sungahn Ko, KyungTae Kim, Tejas Kulkarni, Niklas Elmqvist (2011): Applying Mobile Device Soft Keyboards to Collaborative Multitouch Tabletop Displays: Design and Evaluation. Proceedings of the ACM Conference on Interactive Tabletops and Surfaces, pp. 130-139, 2011. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Ko2011, title = {Applying Mobile Device Soft Keyboards to Collaborative Multitouch Tabletop Displays: Design and Evaluation}, author = {Sungahn Ko and KyungTae Kim and Tejas Kulkarni and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/table-text/table-text.pdf}, year = {2011}, date = {2011-01-01}, booktitle = {Proceedings of the ACM Conference on Interactive Tabletops and Surfaces}, pages = {130-139}, abstract = {We present an evaluation of text entry methods for tabletop displays given small display space allocations, an increasingly important design constraint as tabletops become collaborative platforms. Small space is already a requirement of mobile text entry methods, and these can often be easily ported to tabletop settings. The purpose of this work is to determine whether these mobile text entry methods are equally useful for tabletop displays, or whether there are unique aspects of text entry on large, horizontal surfaces that influence design. Our evaluation consists of two studies designed to elicit differences between the mobile and tabletop domains. Results show that standard soft keyboards perform best, even at small space allocations. Furthermore, occlusion-reduction methods like Shift do not yield significant improvements to text entry; we speculate that this is due to the low ratio of resolution per surface unit (i.e., DPI) for current tabletops.}, keywords = {} } We present an evaluation of text entry methods for tabletop displays given small display space allocations, an increasingly important design constraint as tabletops become collaborative platforms.
Small space is already a requirement of mobile text entry methods, and these can often be easily ported to tabletop settings. The purpose of this work is to determine whether these mobile text entry methods are equally useful for tabletop displays, or whether there are unique aspects of text entry on large, horizontal surfaces that influence design. Our evaluation consists of two studies designed to elicit differences between the mobile and tabletop domains. Results show that standard soft keyboards perform best, even at small space allocations. Furthermore, occlusion-reduction methods like Shift do not yield significant improvements to text entry; we speculate that this is due to the low ratio of resolution per surface unit (i.e., DPI) for current tabletops. |
36. | Bumchul Kwon, Waqas Javed, Niklas Elmqvist, Ji-Soo Yi (2011): Direct Manipulation Through Surrogate Objects. Proceedings of the ACM Conference on Human Factors in Computing Systems, pp. 627-636, 2011. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Kwon2011, title = {Direct Manipulation Through Surrogate Objects}, author = {Bumchul Kwon and Waqas Javed and Niklas Elmqvist and Ji-Soo Yi}, url = {http://www.umiacs.umd.edu/~elm/projects/surrogate/surrogate.pdf}, year = {2011}, date = {2011-01-01}, booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems}, pages = {627-636}, abstract = {Direct manipulation has had a major influence on interface design since it was proposed by Shneiderman in 1982. Although directness generally benefits users, direct manipulation also has weaknesses. In some cases, such as when a user needs to manipulate small, attribute-rich objects or multiple objects simultaneously, indirect manipulation may be more efficient at the cost of directness or intuitiveness of the interaction. Several techniques have been developed over the years to address these issues, but these are all isolated and limited efforts with no coherent underlying principle. We propose the notion of Surrogate Interaction that ties together a large subset of these techniques through the use of a surrogate object that allows users to interact with the surrogate instead of the domain object. We believe that formalizing this family of interaction techniques will provide an additional and powerful interface design alternative for interaction designers, as well as uncover opportunities for future research.}, keywords = {} } Direct manipulation has had a major influence on interface design since it was proposed by Shneiderman in 1982. Although directness generally benefits users, direct manipulation also has weaknesses.
In some cases, such as when a user needs to manipulate small, attribute-rich objects or multiple objects simultaneously, indirect manipulation may be more efficient at the cost of directness or intuitiveness of the interaction. Several techniques have been developed over the years to address these issues, but these are all isolated and limited efforts with no coherent underlying principle. We propose the notion of Surrogate Interaction that ties together a large subset of these techniques through the use of a surrogate object that allows users to interact with the surrogate instead of the domain object. We believe that formalizing this family of interaction techniques will provide an additional and powerful interface design alternative for interaction designers, as well as uncover opportunities for future research. |
35. | Niklas Elmqvist, Andrew Vande Moere, Hans-Christian Jetter, Daniel Cernea, Harald Reiterer, T.-J. Jankun-Kelly (2011): Fluid Interaction for Information Visualization. Information Visualization, 10 (4), pp. 327-340, 2011. (Type: Article | Abstract | Links | BibTeX) @article{Elmqvist2011, title = {Fluid Interaction for Information Visualization}, author = {Niklas Elmqvist and Andrew Vande Moere and Hans-Christian Jetter and Daniel Cernea and Harald Reiterer and T.-J. Jankun-Kelly}, url = {http://www.umiacs.umd.edu/~elm/projects/fluidity/fluidity.pdf}, year = {2011}, date = {2011-01-01}, journal = {Information Visualization}, volume = {10}, number = {4}, pages = {327-340}, abstract = {Despite typically receiving little emphasis in visualization research, interaction in visualization is the catalyst for the user's dialogue with the data, and, ultimately, the user’s actual understanding and insight into this data. There are many possible reasons for this skewed balance between the visual and interactive aspects of a visualization. One reason is that interaction is an intangible concept that is difficult to design, quantify, and evaluate. Unlike for visual design, there are few examples that show visualization practitioners and researchers how to best design the interaction for a new visualization. In this paper, we attempt to address this issue by collecting examples of visualizations with "best-in-class" interaction and using them to extract practical design guidelines for future designers and researchers.
We call this concept fluid interaction, and we propose an operational definition in terms of the direct manipulation and embodied interaction paradigms, the psychological concept of "flow", and Norman’s gulfs of execution and evaluation.}, keywords = {} } Despite typically receiving little emphasis in visualization research, interaction in visualization is the catalyst for the user's dialogue with the data, and, ultimately, the user’s actual understanding and insight into this data. There are many possible reasons for this skewed balance between the visual and interactive aspects of a visualization. One reason is that interaction is an intangible concept that is difficult to design, quantify, and evaluate. Unlike for visual design, there are few examples that show visualization practitioners and researchers how to best design the interaction for a new visualization. In this paper, we attempt to address this issue by collecting examples of visualizations with "best-in-class" interaction and using them to extract practical design guidelines for future designers and researchers. We call this concept fluid interaction, and we propose an operational definition in terms of the direct manipulation and embodied interaction paradigms, the psychological concept of "flow", and Norman’s gulfs of execution and evaluation. |
34. | Petra Isenberg, Niklas Elmqvist, Daniel Cernea, Jean Scholtz, Kwan-Liu Ma, Hans Hagen (2011): Collaborative Visualization: Definition, Challenges, and Research Agenda. Information Visualization, 10 (4), pp. 310-326, 2011. (Type: Article | Abstract | Links | BibTeX) @article{Isenberg2011, title = {Collaborative Visualization: Definition, Challenges, and Research Agenda}, author = {Petra Isenberg and Niklas Elmqvist and Daniel Cernea and Jean Scholtz and Kwan-Liu Ma and Hans Hagen}, url = {http://www.umiacs.umd.edu/~elm/projects/collabvis/collabvis.pdf}, year = {2011}, date = {2011-01-01}, journal = {Information Visualization}, volume = {10}, number = {4}, pages = {310-326}, abstract = {The conflux of two growing areas of technology---collaboration and visualization---into a new research direction, collaborative visualization, provides new research challenges. Technology now allows us to easily connect and collaborate with one another---in settings as diverse as over networked computers, across mobile devices, or using shared displays such as interactive walls and tabletop surfaces. Digital information is now regularly accessed by multiple people in order to share information, to view it together, to analyze it, or to form decisions. Visualizations are used to deal more effectively with large amounts of information while interactive visualizations allow users to explore the underlying data. While researchers face many challenges in collaboration and in visualization, the emergence of collaborative visualization poses additional challenges but is also an exciting opportunity to reach new audiences and applications for visualization tools and techniques. 
The purpose of this article is (1) to provide a definition, clear scope, and overview of the evolving field of collaborative visualization, (2) to help pinpoint the unique focus of collaborative visualization with its specific aspects, challenges, and requirements within the intersection of general computer-supported cooperative work (CSCW) and visualization research, and (3) to draw attention to important future research questions to be addressed by the community. We conclude by discussing a research agenda for future work on collaborative visualization and urge for a new generation of visualization tools that are designed with collaboration in mind from their very inception.}, keywords = {} } The conflux of two growing areas of technology---collaboration and visualization---into a new research direction, collaborative visualization, provides new research challenges. Technology now allows us to easily connect and collaborate with one another---in settings as diverse as over networked computers, across mobile devices, or using shared displays such as interactive walls and tabletop surfaces. Digital information is now regularly accessed by multiple people in order to share information, to view it together, to analyze it, or to form decisions. Visualizations are used to deal more effectively with large amounts of information while interactive visualizations allow users to explore the underlying data. While researchers face many challenges in collaboration and in visualization, the emergence of collaborative visualization poses additional challenges but is also an exciting opportunity to reach new audiences and applications for visualization tools and techniques. 
The purpose of this article is (1) to provide a definition, clear scope, and overview of the evolving field of collaborative visualization, (2) to help pinpoint the unique focus of collaborative visualization with its specific aspects, challenges, and requirements within the intersection of general computer-supported cooperative work (CSCW) and visualization research, and (3) to draw attention to important future research questions to be addressed by the community. We conclude by discussing a research agenda for future work on collaborative visualization and urge for a new generation of visualization tools that are designed with collaboration in mind from their very inception. |
2010 | |
33. | Waqas Javed, Niklas Elmqvist (2010): Stack Zooming for Multi-Focus Interaction in Time-Series Data Visualization. Proceedings of the IEEE Pacific Symposium on Visualization, pp. 33–40, 2010. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Javed2010, title = {Stack Zooming for Multi-Focus Interaction in Time-Series Data Visualization}, author = {Waqas Javed and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/stackzoom/stackzoom.pdf, Paper https://www.youtube.com/watch?v=dK0De4XPm5Y, Youtube video http://www.slideshare.net/NickElm/stack-zooming-for-multifocus-interaction-in-timeseries-data-visualization, Slides}, year = {2010}, date = {2010-01-01}, booktitle = {Proceedings of the IEEE Pacific Symposium on Visualization}, pages = {33--40}, abstract = {Information visualization shows tremendous potential for helping both expert and casual users alike make sense of temporal data, but current time series visualization tools provide poor support for comparing several foci in a temporal dataset while retaining context and distance awareness. We introduce a method for supporting this kind of multi-focus interaction that we call stack zooming. The approach is based on the user interactively building hierarchies of 1D strips stacked on top of each other, where each subsequent stack represents a higher zoom level, and sibling strips represent branches in the visual exploration. Correlation graphics show the relation between stacks and strips of different levels, providing context and distance awareness among the focus points. The zoom hierarchies can also be used as graphical histories and for communicating insights to stakeholders. 
We also discuss how visual spaces that support stack zooming can be extended with annotation and local statistics computations that fit the hierarchical stacking metaphor.}, keywords = {} } Information visualization shows tremendous potential for helping both expert and casual users alike make sense of temporal data, but current time series visualization tools provide poor support for comparing several foci in a temporal dataset while retaining context and distance awareness. We introduce a method for supporting this kind of multi-focus interaction that we call stack zooming. The approach is based on the user interactively building hierarchies of 1D strips stacked on top of each other, where each subsequent stack represents a higher zoom level, and sibling strips represent branches in the visual exploration. Correlation graphics show the relation between stacks and strips of different levels, providing context and distance awareness among the focus points. The zoom hierarchies can also be used as graphical histories and for communicating insights to stakeholders. We also discuss how visual spaces that support stack zooming can be extended with annotation and local statistics computations that fit the hierarchical stacking metaphor. |
32. | KyungTae Kim, Waqas Javed, Cary Williams, Niklas Elmqvist, Pourang Irani (2010): Hugin: A Framework for Awareness and Coordination in Mixed-Presence Collaborative Information Visualization. Proceedings of the ACM Conference on Interactive Tabletops and Surfaces, pp. 231–240, 2010. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Kim2010, title = {Hugin: A Framework for Awareness and Coordination in Mixed-Presence Collaborative Information Visualization}, author = {KyungTae Kim and Waqas Javed and Cary Williams and Niklas Elmqvist and Pourang Irani}, url = {http://www.umiacs.umd.edu/~elm/projects/hugin/hugin.pdf}, year = {2010}, date = {2010-01-01}, booktitle = {Proceedings of the ACM Conference on Interactive Tabletops and Surfaces}, pages = {231--240}, abstract = {Analysts are increasingly encountering datasets that are larger and more complex than ever before. Effectively exploring such datasets requires collaboration between multiple analysts, who more often than not are distributed in time or in space. Mixed-presence groupware provide a shared workspace medium that supports this combination of co-located and distributed collaboration. However, collaborative visualization systems for such distributed settings have their own cost and are still uncommon in the visualization community. We present Hugin, a novel layer-based graphical framework for this kind of mixed-presence synchronous collaborative visualization over digital tabletop displays. The design of the framework focuses on issues like awareness and access control, while using information visualization for the collaborative data exploration on network-connected tabletops. To validate the usefulness of the framework, we also present examples of how Hugin can be used to implement new visualizations supporting these collaborative mechanisms.}, keywords = {} } Analysts are increasingly encountering datasets that are larger and more complex than ever before. 
Effectively exploring such datasets requires collaboration between multiple analysts, who more often than not are distributed in time or in space. Mixed-presence groupware provide a shared workspace medium that supports this combination of co-located and distributed collaboration. However, collaborative visualization systems for such distributed settings have their own cost and are still uncommon in the visualization community. We present Hugin, a novel layer-based graphical framework for this kind of mixed-presence synchronous collaborative visualization over digital tabletop displays. The design of the framework focuses on issues like awareness and access control, while using information visualization for the collaborative data exploration on network-connected tabletops. To validate the usefulness of the framework, we also present examples of how Hugin can be used to implement new visualizations supporting these collaborative mechanisms. |
31. | Anastasia Bezerianos, Fanny Chevalier, Pierre Dragicevic, Niklas Elmqvist, Jean-Daniel Fekete (2010): GraphDice: A System for Exploring Multivariate Social Networks. Computer Graphics Forum, 29 (3), pp. 863–872, 2010. (Type: Article | Abstract | Links | BibTeX) @article{Bezerianos2010, title = {GraphDice: A System for Exploring Multivariate Social Networks}, author = {Anastasia Bezerianos and Fanny Chevalier and Pierre Dragicevic and Niklas Elmqvist and Jean-Daniel Fekete}, url = {http://www.umiacs.umd.edu/~elm/projects/graphdice/graphdice.pdf}, year = {2010}, date = {2010-01-01}, journal = {Computer Graphics Forum}, volume = {29}, number = {3}, pages = {863--872}, abstract = {Social networks collected by historians or sociologists typically have a large number of actors and edge attributes. Applying social network analysis (SNA) algorithms to these networks produces additional attributes such as degree, centrality, and clustering coefficients. Understanding the effects of this plethora of attributes is one of the main challenges of multivariate SNA. We present the design of GraphDice, a multivariate network visualization system for exploring the attribute space of edges and actors. GraphDice builds upon the ScatterDice system for its main multidimensional navigation paradigm, and extends it with novel mechanisms to support network exploration in general and SNA tasks in particular. Novel mechanisms include visualization of attributes of interval type and projection of numerical edge attributes to node attributes. We show how these extensions to the original ScatterDice system allow us to support complex visual analysis tasks on networks with hundreds of actors and up to 30 attributes, while providing a simple and consistent interface for interacting with network data.}, keywords = {} } Social networks collected by historians or sociologists typically have a large number of actors and edge attributes. 
Applying social network analysis (SNA) algorithms to these networks produces additional attributes such as degree, centrality, and clustering coefficients. Understanding the effects of this plethora of attributes is one of the main challenges of multivariate SNA. We present the design of GraphDice, a multivariate network visualization system for exploring the attribute space of edges and actors. GraphDice builds upon the ScatterDice system for its main multidimensional navigation paradigm, and extends it with novel mechanisms to support network exploration in general and SNA tasks in particular. Novel mechanisms include visualization of attributes of interval type and projection of numerical edge attributes to node attributes. We show how these extensions to the original ScatterDice system allow us to support complex visual analysis tasks on networks with hundreds of actors and up to 30 attributes, while providing a simple and consistent interface for interacting with network data. |
30. | Niklas Elmqvist, Nathalie Henry, Yann Riche, Jean-Daniel Fekete (2010): Mélange: Space Folding for Visual Exploration. IEEE Transactions on Visualization and Computer Graphics, 16 (3), pp. 468–483, 2010. (Type: Article | Abstract | Links | BibTeX) @article{Elmqvist2010a, title = {Mélange: Space Folding for Visual Exploration}, author = {Niklas Elmqvist and Nathalie Henry and Yann Riche and Jean-Daniel Fekete}, url = {http://www.umiacs.umd.edu/~elm/projects/melange/melange-journal.pdf}, year = {2010}, date = {2010-01-01}, journal = {IEEE Transactions on Visualization and Computer Graphics}, volume = {16}, number = {3}, pages = {468--483}, abstract = {Navigating in large geometric spaces---such as maps, social networks, or long documents---typically requires a sequence of pan and zoom actions. However, this strategy is often ineffective and cumbersome, especially when trying to study and compare several distant objects. We propose a new distortion technique that folds the intervening space to guarantee visibility of multiple focus regions. The folds themselves show contextual information and support unfolding and paging interactions. We conducted a study comparing the space-folding technique to existing approaches, and found that participants performed significantly better with the new technique. We also describe how to implement this distortion technique, and give an in-depth case study on how to apply it to the visualization of large-scale 1D time-series data.}, keywords = {} } Navigating in large geometric spaces---such as maps, social networks, or long documents---typically requires a sequence of pan and zoom actions. However, this strategy is often ineffective and cumbersome, especially when trying to study and compare several distant objects. We propose a new distortion technique that folds the intervening space to guarantee visibility of multiple focus regions. The folds themselves show contextual information and support unfolding and paging interactions. 
We conducted a study comparing the space-folding technique to existing approaches, and found that participants performed significantly better with the new technique. We also describe how to implement this distortion technique, and give an in-depth case study on how to apply it to the visualization of large-scale 1D time-series data. |
29. | Niklas Elmqvist, Jean-Daniel Fekete (2010): Hierarchical Aggregation for Information Visualization: Overview, Techniques and Design Guidelines. IEEE Transactions on Visualization and Computer Graphics, 16 (3), pp. 439–454, 2010. (Type: Article | Abstract | Links | BibTeX) @article{Elmqvist2010b, title = {Hierarchical Aggregation for Information Visualization: Overview, Techniques and Design Guidelines}, author = {Niklas Elmqvist and Jean-Daniel Fekete}, url = {http://www.umiacs.umd.edu/~elm/projects/hieragg/hieragg.pdf}, year = {2010}, date = {2010-01-01}, journal = {IEEE Transactions on Visualization and Computer Graphics}, volume = {16}, number = {3}, pages = {439--454}, abstract = {We present a model for building, visualizing, and interacting with multiscale representations of information visualization techniques using hierarchical aggregation. The motivation for this work is to make visual representations more visually scalable and less cluttered. The model allows for augmenting existing techniques with multiscale functionality, as well as for designing new visualization and interaction techniques that conform to this new class of visual representations. We give some examples of how to use the model for standard information visualization techniques such as scatterplots, parallel coordinates, and node-link diagrams, and discuss existing techniques that are based on hierarchical aggregation. This yields a set of design guidelines for aggregated visualizations. We also present a basic vocabulary of interaction techniques suitable for navigating these multiscale visualizations.}, keywords = {} } We present a model for building, visualizing, and interacting with multiscale representations of information visualization techniques using hierarchical aggregation. The motivation for this work is to make visual representations more visually scalable and less cluttered. 
The model allows for augmenting existing techniques with multiscale functionality, as well as for designing new visualization and interaction techniques that conform to this new class of visual representations. We give some examples of how to use the model for standard information visualization techniques such as scatterplots, parallel coordinates, and node-link diagrams, and discuss existing techniques that are based on hierarchical aggregation. This yields a set of design guidelines for aggregated visualizations. We also present a basic vocabulary of interaction techniques suitable for navigating these multiscale visualizations. |
28. | Waqas Javed, Bryan McDonnel, Niklas Elmqvist (2010): Graphical Perception of Multiple Time Series. IEEE Transactions on Visualization and Computer Graphics (Proc. IEEE InfoVis 2010), 16 (6), pp. 927–934, 2010. (Type: Article | Abstract | Links | BibTeX) @article{Javed2010b, title = {Graphical Perception of Multiple Time Series}, author = {Waqas Javed and Bryan McDonnel and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/multilinevis/multilinevis.pdf}, year = {2010}, date = {2010-01-01}, journal = {IEEE Transactions on Visualization and Computer Graphics (Proc. IEEE InfoVis 2010)}, volume = {16}, number = {6}, pages = {927--934}, abstract = {Line graphs have been the visualization of choice for temporal data ever since the days of William Playfair (1759–1823), but realistic temporal analysis tasks often include multiple simultaneous time series. In this work, we explore user performance for comparison, slope, and discrimination tasks for different line graph techniques involving multiple time series. Our results show that techniques that create separate charts for each time series---such as small multiples and horizon graphs---are generally more efficient for comparisons across time series with a large visual span. On the other hand, shared-space techniques---like standard line graphs---are typically more efficient for comparisons over smaller visual spans where the impact of overlap and clutter is reduced.}, keywords = {} } Line graphs have been the visualization of choice for temporal data ever since the days of William Playfair (1759–1823), but realistic temporal analysis tasks often include multiple simultaneous time series. In this work, we explore user performance for comparison, slope, and discrimination tasks for different line graph techniques involving multiple time series. 
Our results show that techniques that create separate charts for each time series---such as small multiples and horizon graphs---are generally more efficient for comparisons across time series with a large visual span. On the other hand, shared-space techniques---like standard line graphs---are typically more efficient for comparisons over smaller visual spans where the impact of overlap and clutter is reduced. |
27. | Ji-Soo Yi, Niklas Elmqvist, Seungyoon Lee (2010): TimeMatrix: Visualizing Temporal Social Networks Using Interactive Matrix-Based Visualizations. International Journal of Human-Computer Interaction, 26 (11-12), pp. 1031–1051, 2010. (Type: Article | Abstract | Links | BibTeX) @article{Yi2010, title = {TimeMatrix: Visualizing Temporal Social Networks Using Interactive Matrix-Based Visualizations}, author = {Ji-Soo Yi and Niklas Elmqvist and Seungyoon Lee}, url = {https://www.youtube.com/watch?v=PjJOPX_ezzc, Youtube video}, year = {2010}, date = {2010-01-01}, journal = {International Journal of Human-Computer Interaction}, volume = {26}, number = {11-12}, pages = {1031--1051}, abstract = {Visualization plays a crucial role in understanding dynamic social networks at many different levels (i.e., group, subgroup, and individual). Node-link-based visualization techniques are currently widely used for these tasks and have been demonstrated to be effective, but we found that they also have limitations in representing temporal changes, particularly at the individual and subgroup levels. To overcome these limitations, we present a new network visualization technique, called "TimeMatrix," based on a matrix representation. Interaction techniques, such as overlay controls, a temporal range slider, semantic zooming, and integrated network statistical measures, support analysts in studying temporal social networks. To validate our design, we present a user study involving three social scientists analyzing inter-organizational collaboration data. The study demonstrates how TimeMatrix may help analysts gain insights about the temporal aspects of network data that can be subsequently tested with network analytic methods.}, keywords = {} } Visualization plays a crucial role in understanding dynamic social networks at many different levels (i.e., group, subgroup, and individual). 
Node-link-based visualization techniques are currently widely used for these tasks and have been demonstrated to be effective, but we found that they also have limitations in representing temporal changes, particularly at the individual and subgroup levels. To overcome these limitations, we present a new network visualization technique, called "TimeMatrix," based on a matrix representation. Interaction techniques, such as overlay controls, a temporal range slider, semantic zooming, and integrated network statistical measures, support analysts in studying temporal social networks. To validate our design, we present a user study involving three social scientists analyzing inter-organizational collaboration data. The study demonstrates how TimeMatrix may help analysts gain insights about the temporal aspects of network data that can be subsequently tested with network analytic methods. |
2009 | |
26. | Jean-Daniel Fekete, Niklas Elmqvist, Yves Guiard (2009): Motion-Pointing: Target Selection using Elliptical Motions. Proceedings of the ACM Conference on Human Factors in Computing Systems, pp. 289–298, 2009. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Elmqvist2009a, title = {Motion-Pointing: Target Selection using Elliptical Motions}, author = {Jean-Daniel Fekete and Niklas Elmqvist and Yves Guiard}, url = {http://www.umiacs.umd.edu/~elm/projects/motionpointing/motionpointing.pdf, Paper}, year = {2009}, date = {2009-01-01}, booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems}, pages = {289--298}, abstract = {We present a novel method called motion-pointing for selecting a set of visual items, such as push-buttons or radio-buttons, without actually pointing to them. Instead, each potential target displays an animated point we call the driver. To select a specific item, the user only has to imitate the motion of its driver using the input device. Once the motion has been recognized by the system, the user can confirm the selection to trigger the action. We consider cyclic motions on an elliptic trajectory with a specific period, and study the most effective methods for real-time matching such a trajectory, as well as the range of parameters a human can reliably reproduce. We then show how to implement motion-pointing in real applications using an interaction technique we call move-and-stroke. Finally, we measure the input throughput and error rate of move-and-stroke in a controlled experiment. We show that the selection time is linearly proportional to the number of input bits conveyed up to 6 bits, confirming that motion-pointing is a practical input method.}, keywords = {} } We present a novel method called motion-pointing for selecting a set of visual items, such as push-buttons or radio-buttons, without actually pointing to them. 
Instead, each potential target displays an animated point we call the driver. To select a specific item, the user only has to imitate the motion of its driver using the input device. Once the motion has been recognized by the system, the user can confirm the selection to trigger the action. We consider cyclic motions on an elliptic trajectory with a specific period, and study the most effective methods for real-time matching such a trajectory, as well as the range of parameters a human can reliably reproduce. We then show how to implement motion-pointing in real applications using an interaction technique we call move-and-stroke. Finally, we measure the input throughput and error rate of move-and-stroke in a controlled experiment. We show that the selection time is linearly proportional to the number of input bits conveyed up to 6 bits, confirming that motion-pointing is a practical input method. |
25. | Niklas Elmqvist, Ulf Assarsson, Philippas Tsigas (2009): Dynamic Transparency for 3D Visualization: Design and Evaluation. International Journal of Virtual Reality, 8 (1), pp. 65–78, 2009. (Type: Article | Abstract | Links | BibTeX) @article{Elmqvist2009b, title = {Dynamic Transparency for 3D Visualization: Design and Evaluation}, author = {Niklas Elmqvist and Ulf Assarsson and Philippas Tsigas}, url = {http://www.umiacs.umd.edu/~elm/projects/dyntrans/dyntrans-journal.pdf, Paper https://www.youtube.com/watch?v=77N5KVbbEmQ, Youtube video http://www.slideshare.net/NickElm/employing-dynamic-transparency-for-3d-occlusion-management-design-issues-and-evaluation, Slides}, year = {2009}, date = {2009-01-01}, journal = {International Journal of Virtual Reality}, volume = {8}, number = {1}, pages = {65--78}, abstract = {Recent developments in occlusion management for 3D environments often involve the use of dynamic transparency, or "virtual X-ray vision", to promote target discovery and access in complex 3D worlds. However, there are many different approaches to achieving this effect and their actual utility for the user has yet to be evaluated. Furthermore, the introduction of semitransparent surfaces adds additional visual complexity that may actually have a negative impact on task performance. In this paper, we report on an empirical user study investigating these human aspects of dynamic transparency. Our implementation of the technique is an image-space algorithm built using modern programmable shaders to achieve real-time performance and visually pleasing results. Results from the user study indicate that dynamic transparency provides superior performance for perceptual tasks in terms of both efficiency and correctness. 
Subjective ratings are also firmly in favor of the method.}, keywords = {} } Recent developments in occlusion management for 3D environments often involve the use of dynamic transparency, or "virtual X-ray vision", to promote target discovery and access in complex 3D worlds. However, there are many different approaches to achieving this effect and their actual utility for the user has yet to be evaluated. Furthermore, the introduction of semitransparent surfaces adds additional visual complexity that may actually have a negative impact on task performance. In this paper, we report on an empirical user study investigating these human aspects of dynamic transparency. Our implementation of the technique is an image-space algorithm built using modern programmable shaders to achieve real-time performance and visually pleasing results. Results from the user study indicate that dynamic transparency provides superior performance for perceptual tasks in terms of both efficiency and correctness. Subjective ratings are also firmly in favor of the method. |
24. | Bryan McDonnel, Niklas Elmqvist (2009): Towards Utilizing GPUs in Information Visualization: A Model and Implementation of Image-Space Operations. IEEE Transactions on Visualization and Computer Graphics, 15 (6), pp. 1105–1112, 2009. (Type: Article | Abstract | Links | BibTeX) @article{McDonnel2009, title = {Towards Utilizing GPUs in Information Visualization: A Model and Implementation of Image-Space Operations}, author = {Bryan McDonnel and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/gpuvis/gpuvis.pdf, Paper http://www.slideshare.net/NickElm/towards-utilizing-gpus-in-information-visualization-a-model-and-implementation-of-imagespace-operations, Slides}, year = {2009}, date = {2009-01-01}, journal = {IEEE Transactions on Visualization and Computer Graphics}, volume = {15}, number = {6}, pages = {1105--1112}, abstract = {Modern programmable GPUs represent a vast potential in terms of performance and visual flexibility for information visualization research, but surprisingly few applications even begin to utilize this potential. In this paper, we conjecture that this may be due to the mismatch between the high-level abstract data types commonly visualized in our field, and the low-level floating-point model supported by current GPU shader languages. To help remedy this situation, we present a refinement of the traditional information visualization pipeline that is amenable to implementation using GPU shaders. The refinement consists of a final image-space step in the pipeline where the multivariate data of the visualization is sampled in the resolution of the current view. To concretize the theoretical aspects of this work, we also present a visual programming environment for constructing visualization shaders using a simple drag-and-drop interface. 
Finally, we give some examples of the use of shaders for well-known visualization techniques.}, keywords = {} } Modern programmable GPUs represent a vast potential in terms of performance and visual flexibility for information visualization research, but surprisingly few applications even begin to utilize this potential. In this paper, we conjecture that this may be due to the mismatch between the high-level abstract data types commonly visualized in our field, and the low-level floating-point model supported by current GPU shader languages. To help remedy this situation, we present a refinement of the traditional information visualization pipeline that is amenable to implementation using GPU shaders. The refinement consists of a final image-space step in the pipeline where the multivariate data of the visualization is sampled in the resolution of the current view. To concretize the theoretical aspects of this work, we also present a visual programming environment for constructing visualization shaders using a simple drag-and-drop interface. Finally, we give some examples of the use of shaders for well-known visualization techniques. |
2008 | |
23. | Niklas Elmqvist, Jean-Daniel Fekete (2008): Semantic Pointing for Object Picking in Complex 3D Environments. Proceedings of Graphics Interface, pp. 243–250, 2008. (Type: Inproceeding | Abstract | Links | BibTeX) @inproceedings{Elmqvist2008f, title = {Semantic Pointing for Object Picking in Complex 3D Environments}, author = {Niklas Elmqvist and Jean-Daniel Fekete}, url = {http://www.umiacs.umd.edu/~elm/projects/sempoint3d/sempoint3d.pdf, Paper https://www.youtube.com/watch?v=Ebv7QG0Z6lM, Youtube video}, year = {2008}, date = {2008-01-01}, booktitle = {Proceedings of Graphics Interface}, pages = {243--250}, abstract = {Today's large and high-resolution displays coupled with powerful graphics hardware offer the potential for highly realistic 3D virtual environments, but also cause increased target acquisition difficulty for users interacting with these environments. We present an adaptation of semantic pointing to object picking in 3D environments. Essentially, semantic picking shrinks empty space and expands potential targets on the screen by dynamically adjusting the ratio between movement in visual space and motor space for relative input devices such as the mouse. Our implementation operates in the image-space using a hierarchical representation of the standard stencil buffer to allow for real-time calculation of the closest targets for all positions on the screen. An informal user study indicates that subjects perform more accurate pointing with semantic 3D pointing than without.}, keywords = {} } Today's large and high-resolution displays coupled with powerful graphics hardware offer the potential for highly realistic 3D virtual environments, but also cause increased target acquisition difficulty for users interacting with these environments. We present an adaptation of semantic pointing to object picking in 3D environments. 
Essentially, semantic picking shrinks empty space and expands potential targets on the screen by dynamically adjusting the ratio between movement in visual space and motor space for relative input devices such as the mouse. Our implementation operates in the image-space using a hierarchical representation of the standard stencil buffer to allow for real-time calculation of the closest targets for all positions on the screen. An informal user study indicates that subjects perform more accurate pointing with semantic 3D pointing than without. |
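The core mechanism in the semantic pointing abstract above, dynamically adjusting the visual-to-motor-space ratio near targets, can be sketched in a few lines. This is a hypothetical 1D illustration, not the paper's image-space stencil-buffer implementation; the linear gain ramp and the constants (`boost`, `radius`) are assumptions:

```python
def cd_gain(distance_to_target, base_gain=1.0, boost=3.0, radius=50.0):
    """Control-display gain: higher near a target (the target 'grows'
    in motor space), lower in empty space. Constants are illustrative."""
    if distance_to_target >= radius:
        return base_gain
    # Linearly ramp the gain up as the cursor approaches the target.
    t = 1.0 - distance_to_target / radius
    return base_gain + (boost - base_gain) * t

def move_cursor(cursor, motor_delta, nearest_target):
    """Apply one relative mouse movement, scaled by the local gain.
    Dividing by a high gain slows visual movement over targets,
    effectively expanding them in motor space."""
    gain = cd_gain(abs(nearest_target - cursor))
    return cursor + motor_delta / gain
```

In empty space the cursor moves at normal speed; within `radius` of a target it decelerates, making the target easier to acquire.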
22. | Niklas Elmqvist, Nathalie Henry, Yann Riche, Jean-Daniel Fekete (2008): Mélange: Space Folding for Multi-Focus Interaction. Proceedings of the ACM Conference on Human Factors in Computing Systems, pp. 1333–1342, 2008. @inproceedings{Elmqvist2008d, title = {Mélange: Space Folding for Multi-Focus Interaction}, author = {Niklas Elmqvist and Nathalie Henry and Yann Riche and Jean-Daniel Fekete}, url = {http://www.umiacs.umd.edu/~elm/projects/melange/melange.pdf, Paper https://www.youtube.com/watch?v=I1KiO1iZ1DI, Youtube video http://www.slideshare.net/NickElm/melange-space-folding-for-multifocus-interaction, Slides}, year = {2008}, date = {2008-01-01}, booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems}, pages = {1333--1342}, abstract = {Interaction and navigation in large geometric spaces typically require a sequence of pan and zoom actions. This strategy is often ineffective and cumbersome, especially when trying to study several distant objects. We propose a new distortion technique that folds the intervening space to guarantee visibility of multiple focus regions. The folds themselves show contextual information and support unfolding and paging interactions. Compared to previous work, our method provides more context and distance awareness. We conducted a study comparing the space-folding technique to existing approaches, and found that participants performed significantly better with the new technique.}, keywords = {} } |
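The space-folding idea in the Mélange abstract, compressing the space between focus regions so that all foci fit on screen, can be sketched as a 1D coordinate mapping. This is only an illustration of the principle, not Mélange's actual 3D fold geometry; the interval representation and the compression factor are assumptions:

```python
def fold_map(x, folds, scale=0.25):
    """Map document coordinate x to screen space. `folds` is a list of
    (start, end) intervals that are compressed by `scale`; coordinates
    outside the folds (the focus regions) keep full scale."""
    pos = x
    for start, end in folds:
        if x <= start:
            continue
        covered = min(x, end) - start
        pos -= covered * (1.0 - scale)  # the folded portion shrinks
    return pos
```

With a single fold over (10, 90) at quarter scale, a 100-unit document maps onto 40 screen units: two full-scale 10-unit focus regions plus a 20-unit fold, while the mapping stays monotone so relative distances remain readable.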
21. | Niklas Elmqvist, Mihail Eduard Tudoreanu, Philippas Tsigas (2008): Evaluating Motion Constraints for 3D Wayfinding in Immersive and Desktop Virtual Environments. Proceedings of the ACM Conference on Human Factors in Computing Systems, pp. 1769–1778, 2008. @inproceedings{Elmqvist2008c, title = {Evaluating Motion Constraints for 3D Wayfinding in Immersive and Desktop Virtual Environments}, author = {Niklas Elmqvist and Mihail Eduard Tudoreanu and Philippas Tsigas}, url = {http://www.umiacs.umd.edu/~elm/projects/motcon/motcon.pdf, Paper https://www.youtube.com/watch?v=LRVTyoeuhpo, Youtube video http://www.slideshare.net/NickElm/evaluating-motion-constraints-for-3d-wayfinding-in-immersive-and-desktop-virtual-environments, Slides}, year = {2008}, date = {2008-01-01}, booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems}, pages = {1769--1778}, abstract = {Motion constraints providing guidance for 3D navigation have recently been suggested as a way of offloading some of the cognitive effort of traversing complex 3D environments on a computer. We present findings from an evaluation of the benefits of this practice where users achieved significantly better results in memory recall and performance when given access to such a guidance method. The study was conducted on both standard desktop computers with mouse and keyboard, as well as on an immersive CAVE system. Interestingly, our results also show that the improvements were more dramatic for desktop users than for CAVE users, with the former even outperforming the latter. Furthermore, the study indicates that allowing the users to retain local control over the navigation on the desktop platform helps them in familiarizing themselves with the 3D world.}, keywords = {} } |
20. | Niklas Elmqvist, Thanh-Nghi Do, Howard Goodell, Nathalie Henry, Jean-Daniel Fekete (2008): ZAME: Interactive Large-Scale Graph Visualization. Proceedings of the IEEE Pacific Symposium on Visualization, pp. 215–222, 2008. @inproceedings{Elmqvist2008b, title = {ZAME: Interactive Large-Scale Graph Visualization}, author = {Niklas Elmqvist and Thanh-Nghi Do and Howard Goodell and Nathalie Henry and Jean-Daniel Fekete}, url = {http://www.umiacs.umd.edu/~elm/projects/zame/zame.pdf, Paper https://www.youtube.com/watch?v=Zr25Lt_pmfw, Youtube video}, year = {2008}, date = {2008-01-01}, booktitle = {Proceedings of the IEEE Pacific Symposium on Visualization}, pages = {215--222}, abstract = {We present the Zoomable Adjacency Matrix Explorer (ZAME), a visualization tool for exploring graphs at a scale of millions of nodes and edges. ZAME is based on an adjacency matrix graph representation aggregated at multiple scales. It allows analysts to explore a graph at many levels, zooming and panning with interactive performance from an overview to the most detailed views. Several components work together in the ZAME tool to make this possible. Efficient matrix ordering algorithms group related elements. Individual data cases are aggregated into higher-order meta representations. Aggregates are arranged into a pyramid hierarchy that allows for on-demand paging to GPU shader programs to support smooth multiscale browsing. Using ZAME, we are able to explore the entire French Wikipedia---over 500,000 articles and 6,000,000 links---with interactive performance on standard consumer-level computer hardware.}, keywords = {} } |
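The pyramid hierarchy described in the ZAME abstract can be illustrated with a minimal sketch: each level halves the matrix resolution by aggregating 2×2 blocks of the adjacency matrix. This plain-Python version uses sum aggregation on a power-of-two-sized matrix as an assumption; ZAME itself supports richer aggregate representations and pages levels to GPU shaders:

```python
def aggregate_level(matrix):
    """Collapse each 2x2 block of a (2n x 2n) adjacency matrix into one
    cell holding the sum of its edges (one possible aggregate)."""
    n = len(matrix) // 2
    return [[matrix[2*i][2*j] + matrix[2*i][2*j+1] +
             matrix[2*i+1][2*j] + matrix[2*i+1][2*j+1]
             for j in range(n)] for i in range(n)]

def build_pyramid(matrix):
    """Build the multiscale pyramid, from full resolution down to 1x1.
    Zooming out selects a coarser level; zooming in a finer one."""
    levels = [matrix]
    while len(levels[-1]) > 1:
        levels.append(aggregate_level(levels[-1]))
    return levels
```

A viewer would then render whichever level matches the current zoom, fetching finer levels on demand as the user zooms in.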
19. | Niklas Elmqvist, Pierre Dragicevic, Jean-Daniel Fekete (2008): Rolling the Dice: Multidimensional Visual Exploration using Scatterplot Matrix Navigation. IEEE Transactions on Visualization and Computer Graphics, 14 (6), pp. 1141–1148, 2008. @article{Elmqvist2008g, title = {Rolling the Dice: Multidimensional Visual Exploration using Scatterplot Matrix Navigation}, author = {Niklas Elmqvist and Pierre Dragicevic and Jean-Daniel Fekete}, url = {http://www.umiacs.umd.edu/~elm/projects/scatterdice/scatterdice.pdf, Paper https://www.youtube.com/watch?v=E1birsp9iYk, Youtube video http://www.slideshare.net/NickElm/rolling-the-dice-multidimensional-visual-exploration-using-scatterplot-matrix-navigation, Slides}, year = {2008}, date = {2008-01-01}, journal = {IEEE Transactions on Visualization and Computer Graphics}, volume = {14}, number = {6}, pages = {1141--1148}, abstract = {Scatterplots remain one of the most popular and widely-used visual representations for multidimensional data due to their simplicity, familiarity and visual clarity, even if they lack some of the flexibility and visual expressiveness of newer multidimensional visualization techniques. This paper presents new interactive methods to explore multidimensional data using scatterplots. This exploration is performed using a matrix of scatterplots that gives an overview of the possible configurations, thumbnails of the scatterplots, and support for interactive navigation in the multidimensional space. Transitions between scatterplots are performed as animated rotations in 3D space, somewhat akin to rolling dice. Users can iteratively build queries using bounding volumes in the dataset, sculpting the query from different viewpoints to become more and more refined. Furthermore, the dimensions in the navigation space can be reordered, manually or automatically, to highlight salient correlations and differences among them. An example scenario presents the interaction techniques supporting smooth and effortless visual exploration of multidimensional datasets.}, keywords = {} } |
18. | Niklas Elmqvist, Philippas Tsigas (2008): A Taxonomy of 3D Occlusion Management for Visualization. IEEE Transactions on Visualization and Computer Graphics, 14 (5), pp. 1095–1109, 2008. @article{Elmqvist2008e, title = {A Taxonomy of 3D Occlusion Management for Visualization}, author = {Niklas Elmqvist and Philippas Tsigas}, url = {http://www.umiacs.umd.edu/~elm/projects/occmgt/occmgt-journal.pdf, Paper}, year = {2008}, date = {2008-01-01}, journal = {IEEE Transactions on Visualization and Computer Graphics}, volume = {14}, number = {5}, pages = {1095--1109}, abstract = {While an important factor in depth perception, the occlusion effect in 3D environments also has a detrimental impact on tasks involving discovery, access, and spatial relation of objects in a 3D visualization. A number of interactive techniques have been developed in recent years to directly or indirectly deal with this problem using a wide range of different approaches. In this paper, we build on previous work on mapping out the problem space of 3D occlusion by defining a taxonomy of the design space of occlusion management techniques in an effort to formalize a common terminology and theoretical framework for this class of interactions. We classify a total of 50 different techniques for occlusion management using our taxonomy and then go on to analyze the results, deriving a set of five orthogonal design patterns for effective reduction of 3D occlusion. We also discuss the "gaps" in the design space, areas of the taxonomy not yet populated with existing techniques, and use these to suggest future research directions into occlusion management.}, keywords = {} } |
17. | Niklas Elmqvist, John Stasko, Philippas Tsigas (2008): DataMeadow: A Visual Canvas for Analysis of Large-Scale Multivariate Data. Information Visualization, 7 (1), pp. 18–33, 2008. @article{Elmqvist2008a, title = {DataMeadow: A Visual Canvas for Analysis of Large-Scale Multivariate Data}, author = {Niklas Elmqvist and John Stasko and Philippas Tsigas}, url = {http://www.umiacs.umd.edu/~elm/projects/datameadow/datameadow-journal.pdf, Paper https://www.youtube.com/watch?v=FO2MsmtWX_4, Youtube video http://www.slideshare.net/NickElm/datameadow-a-visual-canvas-for-analysis-of-largescale-multivariate-data, Slides}, year = {2008}, date = {2008-01-01}, journal = {Information Visualization}, volume = {7}, number = {1}, pages = {18--33}, abstract = {Supporting visual analytics of multiple large-scale multidimensional datasets requires a high degree of interactivity and user control beyond the conventional challenges of visualizing such datasets. We present the DataMeadow, a visual canvas providing rich interaction for constructing visual queries using graphical set representations called DataRoses. A DataRose is essentially a starplot of selected columns in a dataset displayed as multivariate visualizations with dynamic query sliders integrated into each axis. The purpose of the DataMeadow is to allow users to create advanced visual queries by iteratively selecting and filtering into the multidimensional data. Furthermore, the canvas provides a clear history of the analysis that can be annotated to facilitate dissemination of analytical results to stakeholders. A powerful direct manipulation interface allows for selection, filtering, and creation of sets, subsets, and data dependencies. We have evaluated our system using a qualitative expert review involving two visualization researchers. Results from this review are favorable for the new method.}, keywords = {} } |
2007 | |
16. | Niklas Elmqvist, Mihail Eduard Tudoreanu, Philippas Tsigas (2007): Tour Generation for Exploration of 3D Virtual Environments. Proceedings of the ACM Symposium on Virtual Reality Software and Technology, pp. 207–210, 2007. @inproceedings{Elmqvist2007i, title = {Tour Generation for Exploration of 3D Virtual Environments}, author = {Niklas Elmqvist and Mihail Eduard Tudoreanu and Philippas Tsigas}, url = {http://www.umiacs.umd.edu/~elm/projects/tourgen/tourgen.pdf, Paper https://www.youtube.com/watch?v=LRVTyoeuhpo, Youtube video}, year = {2007}, date = {2007-01-01}, booktitle = {Proceedings of the ACM Symposium on Virtual Reality Software and Technology}, pages = {207--210}, abstract = {Navigation in complex and large-scale 3D virtual environments has been shown to be a difficult task, imposing a high cognitive load on the user. In this paper, we present a comprehensive method for assisting users in exploring and understanding such 3D worlds. The method consists of two distinct phases: an off-line computation step deriving a grand tour using the world geometry and any semantic target information as input, and an on-line interactive navigation step providing guided exploration and improved spatial perception for the user. The former phase is based on a voxelized version of the geometrical dataset that is used to compute a connectivity graph for use in a TSP-like formulation of the problem. The latter phase takes the output tour from the off-line step as input for guiding 3D navigation through the environment.}, keywords = {} } |
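The off-line phase in the tour generation abstract reduces the problem to a TSP-like formulation over a connectivity graph. As a minimal sketch of how such tours are commonly approximated, here is a greedy nearest-neighbor ordering over 3D waypoints; the paper does not specify this particular solver, so the Euclidean metric and greedy strategy are assumptions:

```python
import math

def nearest_neighbor_tour(waypoints, start=0):
    """Greedy TSP-style tour over 3D waypoints: always visit the
    closest unvisited waypoint next. Fast but not optimal."""
    unvisited = set(range(len(waypoints)))
    tour = [start]
    unvisited.remove(start)
    while unvisited:
        last = waypoints[tour[-1]]
        # Pick the unvisited waypoint with minimal Euclidean distance.
        nxt = min(unvisited, key=lambda i: math.dist(last, waypoints[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour
```

In the paper's setting the distance would come from the voxel connectivity graph (e.g. shortest collision-free paths) rather than straight-line distance.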
15. | Niklas Elmqvist, John Stasko, Philippas Tsigas (2007): DataMeadow: A Visual Canvas for Analysis of Large-Scale Multivariate Data. Proceedings of the IEEE Symposium on Visual Analytics Science and Technology, pp. 187–194, 2007. @inproceedings{Elmqvist2007h, title = {DataMeadow: A Visual Canvas for Analysis of Large-Scale Multivariate Data}, author = {Niklas Elmqvist and John Stasko and Philippas Tsigas}, url = {http://www.umiacs.umd.edu/~elm/projects/datameadow/datameadow.pdf, Paper https://www.youtube.com/watch?v=FO2MsmtWX_4, Youtube video http://www.slideshare.net/NickElm/datameadow-a-visual-canvas-for-analysis-of-largescale-multivariate-data, Slides}, year = {2007}, date = {2007-01-01}, booktitle = {Proceedings of the IEEE Symposium on Visual Analytics Science and Technology}, pages = {187--194}, abstract = {Supporting visual analytics of multiple large-scale multidimensional datasets requires a high degree of interactivity and user control beyond the conventional challenges of visualizing such datasets. We present the DataMeadow, a visual canvas providing rich interaction for constructing visual queries using graphical set representations called DataRoses. A DataRose is essentially a starplot of selected columns in a dataset displayed as multivariate visualizations with dynamic query sliders integrated into each axis. The purpose of the DataMeadow is to allow users to create advanced visual queries by iteratively selecting and filtering into the multidimensional data. Furthermore, the canvas provides a clear history of the analysis that can be annotated to facilitate dissemination of analytical results to outsiders. Towards this end, the DataMeadow has a direct manipulation interface for selection, filtering, and creation of sets, subsets, and data dependencies using both simple and complex mouse gestures. We have evaluated our system using a qualitative expert review involving two researchers working in the area. Results from this review are favorable for our new method.}, keywords = {} } |
14. | Niklas Elmqvist, Ulf Assarsson, Philippas Tsigas (2007): Employing Dynamic Transparency for 3D Occlusion Management: Design Issues and Evaluation. Proceedings of INTERACT, pp. 532–545, 2007. @inproceedings{Elmqvist2007b, title = {Employing Dynamic Transparency for 3D Occlusion Management: Design Issues and Evaluation}, author = {Niklas Elmqvist and Ulf Assarsson and Philippas Tsigas}, url = {http://www.umiacs.umd.edu/~elm/projects/dyntrans/dyntrans.pdf, Paper https://www.youtube.com/watch?v=77N5KVbbEmQ, Youtube video http://www.slideshare.net/NickElm/employing-dynamic-transparency-for-3d-occlusion-management-design-issues-and-evaluation, Slides}, year = {2007}, date = {2007-01-01}, booktitle = {Proceedings of INTERACT}, pages = {532--545}, abstract = {Recent developments in occlusion management for 3D environments often involve the use of dynamic transparency, or virtual "X-ray vision", to promote target discovery and access in complex 3D worlds. However, there are many different approaches to achieving this effect and their actual utility for the user has yet to be evaluated. Furthermore, the introduction of semi-transparent surfaces adds additional visual complexity that may actually have a negative impact on task performance. In this paper, we report on an empirical user study comparing dynamic transparency to standard viewpoint controls. Our implementation of the technique is an image-space algorithm built using modern programmable shaders to achieve real-time performance and visually pleasing results. Results from the user study indicate that dynamic transparency is superior for perceptual tasks in terms of both efficiency and correctness.}, keywords = {} } |
13. | Niklas Elmqvist, Philippas Tsigas (2007): TrustNeighborhoods: Visualizing Trust in Distributed File Systems. Proceedings of the Eurographics/IEEE VGTC Symposium on Visualization, pp. 107–114, 2007. @inproceedings{Elmqvist2007e, title = {TrustNeighborhoods: Visualizing Trust in Distributed File Systems}, author = {Niklas Elmqvist and Philippas Tsigas}, url = {http://www.umiacs.umd.edu/~elm/projects/trustvis/trustvis.pdf, Paper}, year = {2007}, date = {2007-01-01}, booktitle = {Proceedings of the Eurographics/IEEE VGTC Symposium on Visualization}, pages = {107--114}, abstract = {We present TrustNeighborhoods, a security trust visualization for situational awareness on the Internet aimed at novice and intermediate users of a distributed file sharing system. The TrustNeighborhoods technique uses the metaphor of a multi-layered city or fortress to intuitively represent trust as a simple geographic relation. The visualization uses a radial space-filling layout; there is a 2D mode for editing and configuration, as well as a 3D mode for exploration and overview. In addition, the 3D mode supports a simple animated "fly-to" command that is intended to show the user the context and trust of a particular document by zooming in on the document and its immediate neighborhood in the 3D city. The visualization is intended for integration into an existing desktop environment, connecting to the distributed file sharing mechanisms of the environment and non-obtrusively displaying a 3D orientation animation in the background for any file being accessed over the network. A formal user study shows that the technique supports significantly higher trust assignment accuracy than manual trust assignment at the cost of only a minor time investment.}, keywords = {} } |
12. | Niklas Elmqvist, Philippas Tsigas (2007): A Taxonomy of 3D Occlusion Management Techniques. Proceedings of the IEEE Conference on Virtual Reality, pp. 51–58, 2007. @inproceedings{Elmqvist2007f, title = {A Taxonomy of 3D Occlusion Management Techniques}, author = {Niklas Elmqvist and Philippas Tsigas}, url = {http://www.umiacs.umd.edu/~elm/projects/occmgt/occmgt.pdf, Paper}, year = {2007}, date = {2007-01-01}, booktitle = {Proceedings of the IEEE Conference on Virtual Reality}, pages = {51--58}, abstract = {While an important factor in depth perception, the occlusion effect in 3D environments also has a detrimental impact on tasks involving discovery, access, and spatial relation of objects in a 3D visualization. A number of interactive techniques have been developed in recent years to directly or indirectly deal with this problem using a wide range of different approaches. In this paper, we build on previous work on mapping out the problem space of 3D occlusion by defining a taxonomy of the design space of occlusion management techniques in an effort to formalize a common terminology and theoretical framework for this class of interactions. We classify a total of 25 different techniques for occlusion management using our taxonomy and then go on to analyze the results, deriving a set of five orthogonal design patterns for effective reduction of 3D occlusion. We also discuss the "gaps" in the design space, areas of the taxonomy not yet populated with existing techniques, and use these to suggest future research directions into occlusion management.}, keywords = {} } |
11. | Niklas Elmqvist, Philippas Tsigas (2007): View-Projection Animation for 3D Occlusion Management. Computers & Graphics, 31 (6), pp. 864–876, 2007. @article{Elmqvist2007j, title = {View-Projection Animation for 3D Occlusion Management}, author = {Niklas Elmqvist and Philippas Tsigas}, url = {http://www.umiacs.umd.edu/~elm/projects/pmorph/pmorph-journal.pdf, Paper}, year = {2007}, date = {2007-01-01}, journal = {Computers & Graphics}, volume = {31}, number = {6}, pages = {864--876}, abstract = {Inter-object occlusion is inherent to 3D environments and is one of the challenges of using 3D instead of 2D computer graphics for visualization. Based on an analysis of this effect, we present an interaction technique for view-projection animation that reduces inter-object occlusion in 3D environments without modifying the geometrical properties of the objects themselves. The technique allows for smooth on-demand animation between parallel and perspective projection modes as well as online manipulation of view parameters, enabling the user to quickly and easily adapt the view to reduce occlusion. A user study indicates that the technique provides many of the occlusion reduction benefits of traditional camera movement, but without the need to actually change the viewpoint. We have also implemented a prototype of the technique in the Blender 3D modeler.}, keywords = {} } |
10. Niklas Elmqvist, Philippas Tsigas (2007): CiteWiz: A Tool for the Visualization of Scientific Citation Networks. Information Visualization, 6 (3), pp. 215–232, 2007.

@article{Elmqvist2007c,
  title    = {CiteWiz: A Tool for the Visualization of Scientific Citation Networks},
  author   = {Niklas Elmqvist and Philippas Tsigas},
  url      = {http://www.umiacs.umd.edu/~elm/projects/citewiz/citewiz.pdf, Paper},
  year     = {2007},
  journal  = {Information Visualization},
  volume   = {6},
  number   = {3},
  pages    = {215--232},
  abstract = {We present CiteWiz, an extensible framework for visualization of scientific citation networks. The system is based on a taxonomy of citation database usage for researchers, and provides a timeline visualization for overviews and an influence visualization for detailed views. The timeline displays the general chronology and importance of authors and articles in a citation database, whereas the influence visualization is implemented using the Growing Polygons technique, suitably modified to the context of browsing citation data. Using the latter technique, hierarchies of articles with potentially very long citation chains can be graphically represented. The visualization is augmented with mechanisms for parent-child visualization and suitable interaction techniques for interacting with the view hierarchy and the individual articles in the dataset. We also provide an interactive concept map for keywords and co-authorship using a basic force-directed graph layout scheme. A formal user study indicates that CiteWiz is significantly more efficient than traditional database interfaces for high-level analysis tasks relating to influence and overviews, and equally efficient for low-level tasks such as finding a paper and correlating bibliographical data.}
}
9. Niklas Elmqvist, Mihail Eduard Tudoreanu (2007): Occlusion Management in Immersive and Desktop 3D Virtual Environments: Theory and Evaluation. International Journal of Virtual Reality, 6 (2), pp. 21–32, 2007.

@article{Elmqvist2007d,
  title    = {Occlusion Management in Immersive and Desktop 3D Virtual Environments: Theory and Evaluation},
  author   = {Niklas Elmqvist and Mihail Eduard Tudoreanu},
  url      = {http://www.umiacs.umd.edu/~elm/projects/balloonprobe/balloonprobe-journal.pdf},
  year     = {2007},
  journal  = {International Journal of Virtual Reality},
  volume   = {6},
  number   = {2},
  pages    = {21--32},
  abstract = {We present an empirical usability experiment studying the relative strengths and weaknesses of three different occlusion management techniques for discovering and accessing objects in information-rich 3D virtual environments. More specifically, the study compares standard 3D navigation, generalized fisheye techniques using object scaling and transparency, and the BalloonProbe interactive 3D space distortion technique. Subjects are asked to complete a number of representative tasks, including counting, pattern recognition, and object relation, in different kinds of environments and on both immersive and desktop-based VR systems. The environments include a free-space abstract 3D environment and a virtual 3D walkthrough application for a simple building floor. Our results confirm the general guideline that each task calls for a specialized interaction---no single technique performed best across all tasks and worlds. The results also indicate a clear trade-off between speed and accuracy: simple navigation was the fastest but also most error-prone technique, whereas spherical BalloonProbe and transparency-based fisheye proved the most accurate but required longer completion time, making it suitable for applications where mistakes incur a high cost.}
}
8. Nathalie Henry, Howard Goodell, Niklas Elmqvist, Jean-Daniel Fekete (2007): 20 Years of Four HCI Conferences: A Visual Exploration. International Journal of Human-Computer Interaction, 23 (3), pp. 239–285, 2007.

@article{Henry2007,
  title    = {20 Years of Four HCI Conferences: A Visual Exploration},
  author   = {Nathalie Henry and Howard Goodell and Niklas Elmqvist and Jean-Daniel Fekete},
  url      = {http://www.umiacs.umd.edu/~elm/projects/20yearshci/20yearshci.pdf, Paper},
  year     = {2007},
  journal  = {International Journal of Human-Computer Interaction},
  volume   = {23},
  number   = {3},
  pages    = {239--285},
  abstract = {We present a visual exploration of the field of human–computer interaction (HCI) through the author and article metadata of four of its major conferences: the ACM conferences on Computer-Human Interaction (CHI), User Interface Software and Technology, and Advanced Visual Interfaces and the IEEE Symposium on Information Visualization. This article describes many global and local patterns we discovered in this data set, together with the exploration process that produced them. Some expected patterns emerged, such as that---like most social networks---coauthorship and citation networks exhibit a power-law degree distribution, with a few widely collaborating authors and highly cited articles. Also, the prestigious and long-established CHI conference has the highest impact (citations by the others). Unexpected insights included that the years when a given conference was most selective are not correlated with those that produced its most highly referenced articles and that influential authors have distinct patterns of collaboration. An interesting sidelight is that methods from the HCI field---exploratory data analysis by information visualization and direct-manipulation interaction---proved useful for this analysis. They allowed us to take an open-ended, exploratory approach, guided by the data itself. As we answered our original questions, new ones arose; as we confirmed patterns we expected, we discovered refinements, exceptions, and fascinating new ones.}
}
2006
7. Niklas Elmqvist, Mihail Eduard Tudoreanu (2006): Evaluating the Effectiveness of Occlusion Reduction Techniques for 3D Virtual Environments. Proceedings of the ACM Symposium on Virtual Reality Software and Technology, pp. 9–18, 2006.

@inproceedings{Elmqvist2006a,
  title     = {Evaluating the Effectiveness of Occlusion Reduction Techniques for 3D Virtual Environments},
  author    = {Niklas Elmqvist and Mihail Eduard Tudoreanu},
  url       = {http://www.umiacs.umd.edu/~elm/projects/balloonprobe/balloonprobe-full.pdf, Paper https://www.youtube.com/watch?v=ynqG3JE6744, Youtube video},
  year      = {2006},
  booktitle = {Proceedings of the ACM Symposium on Virtual Reality Software and Technology},
  pages     = {9--18},
  abstract  = {We present an empirical usability experiment studying the relative strengths and weaknesses of three different occlusion reduction techniques for discovering and accessing objects in information-rich 3D virtual environments. More specifically, the study compares standard 3D navigation, generalized fisheye techniques using object scaling and transparency, and the BalloonProbe interactive 3D space distortion technique. Subjects are asked to complete a number of different tasks, including counting, pattern recognition, and object relation, in different kinds of environments with various properties. The environments include a free-space abstract 3D environment and a virtual 3D walkthrough application for a simple building floor. The study involved 16 subjects and was conducted in a three-sided CAVE environment. Our results confirm the general guideline that each task calls for a specialized interaction---no single technique performed best across all tasks and worlds. The results also indicate a clear trade-off between speed and accuracy; simple navigation was the fastest but also most error-prone technique, whereas spherical BalloonProbe proved the most accurate but required longer completion time, making it suitable for applications where mistakes incur a high cost.}
}
6. Niklas Elmqvist, Philippas Tsigas (2006): View Projection Animation for Occlusion Reduction. Proceedings of the ACM Conference on Advanced Visual Interfaces, pp. 471–475, 2006.

@inproceedings{Elmqvist2006b,
  title     = {View Projection Animation for Occlusion Reduction},
  author    = {Niklas Elmqvist and Philippas Tsigas},
  url       = {http://www.umiacs.umd.edu/~elm/projects/pmorph/pmorph.pdf, Paper},
  year      = {2006},
  booktitle = {Proceedings of the ACM Conference on Advanced Visual Interfaces},
  pages     = {471--475},
  abstract  = {Inter-object occlusion is inherent to 3D environments and is one of the challenges of using 3D instead of 2D computer graphics for information visualization. In this paper, we examine this occlusion problem by building a theoretical framework of its causes and components. As a result of this analysis, we present an interaction technique for view projection animation that reduces inter-object occlusion in 3D environments without modifying the geometrical properties of the objects themselves. The technique provides smooth on-demand animation between parallel and perspective projection modes as well as online manipulation of view parameters, allowing the user to quickly and easily adapt the view to avoid occlusion. A user study indicates that the technique significantly improves object discovery over normal perspective views. We have also implemented a prototype of the technique in the Blender 3D modeller.}
}
5. Samuel Sandberg, Calle Håkansson, Niklas Elmqvist, Philippas Tsigas, Fang Chen (2006): Using 3D Audio Guidance to Locate Indoor Static Objects. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, pp. 1581–1584, 2006.

@inproceedings{Sandberg2006a,
  title     = {Using 3D Audio Guidance to Locate Indoor Static Objects},
  author    = {Samuel Sandberg and Calle H{\aa}kansson and Niklas Elmqvist and Philippas Tsigas and Fang Chen},
  url       = {http://www.umiacs.umd.edu/~elm/projects/3daudio/3daudio.pdf, Paper},
  year      = {2006},
  booktitle = {Proceedings of the Human Factors and Ergonomics Society Annual Meeting},
  pages     = {1581--1584},
  abstract  = {Is 3D audio an interesting technology for displaying navigational information in an indoor environment? This study found no significant differences between map- and 3D audio navigation. The user tasks tested involved finding objects in a real office environment. In order to conduct the study, a custom-made 3D audio system was built based on a public-domain HRTF-library to playback 3D sound beacons through a pair of earphones. Our results indicate that 3D audio is indeed a qualified candidate for navigation systems, and may be especially suitable for environments or individuals where vision is obstructed, insufficient, or unavailable. The study also suggests that special cues should be added to the pure spatial information to emphasize important information.}
}
2005
4. Niklas Elmqvist (2005): BalloonProbe: Reducing Occlusion in 3D using Interactive Space Distortion. Proceedings of the ACM Symposium on Virtual Reality Software and Technology, pp. 134–137, 2005.

@inproceedings{Elmqvist2005a,
  title     = {BalloonProbe: Reducing Occlusion in 3D using Interactive Space Distortion},
  author    = {Niklas Elmqvist},
  url       = {http://www.umiacs.umd.edu/~elm/projects/balloonprobe/balloonprobe.pdf, Paper https://www.youtube.com/watch?v=ynqG3JE6744, Youtube video},
  year      = {2005},
  booktitle = {Proceedings of the ACM Symposium on Virtual Reality Software and Technology},
  pages     = {134--137},
  abstract  = {Using a 3D virtual environment for information visualization is a promising approach, but can in many cases be plagued by a phenomenon of literally not being able to see the forest for the trees. Some parts of the 3D visualization will inevitably occlude other parts, leading both to loss of efficiency and, more seriously, correctness; users may have to change their viewpoint in a non-trivial way to be able to access hidden objects, and, worse, they may not even discover some of the objects in the visualization due to this inter-object occlusion. In this paper, we present a space distortion interaction technique called the BalloonProbe which, on the user's command, inflates a spherical force field that repels objects around the 3D cursor to the surface of the sphere, separating occluding objects from each other. Inflating and deflating the sphere is performed through smooth animation, ghosted traces showing the displacement of each repelled object. Our prototype implementation uses a 3D cursor for positioning as well as for inflating and deflating the force field "balloon". Informal testing suggests that the BalloonProbe is a powerful way of giving users interactive control over occlusion in 3D visualizations.}
}
2004
3. Niklas Elmqvist, Philippas Tsigas (2004): Animated Visualization of Causal Relations Through Growing 2D Geometry. Information Visualization, 3 (3), pp. 154–172, 2004.

@article{Elmqvist2004a,
  title    = {Animated Visualization of Causal Relations Through Growing 2D Geometry},
  author   = {Niklas Elmqvist and Philippas Tsigas},
  url      = {http://www.umiacs.umd.edu/~elm/projects/causality/causality.pdf, Paper},
  year     = {2004},
  journal  = {Information Visualization},
  volume   = {3},
  number   = {3},
  pages    = {154--172},
  abstract = {Causality visualization is an important tool for many scientific domains that involve complex interactions between multiple entities (examples include parallel and distributed systems in computer science). However, traditional visualization techniques such as Hasse diagrams are not well-suited to large system executions, and users often have difficulties answering even basic questions using them, or have to spend inordinate amounts of time to do so. In this paper we present the Growing Squares and Growing Polygons methods, two sibling visualization techniques that were designed to solve this problem by providing efficient 2D causality visualization through the use of color, texture, and animation. Both techniques have abandoned the traditional linear timeline and instead map the time parameter to the size of geometrical primitives representing the processes; in the Growing Squares case, each process is a color-coded square that receives color influences from other process squares as messages reach it; in the Growing Polygons case, each process is instead an n-sided polygon consisting of triangular sectors showing color-coded influences from the other processes. We have performed user studies of both techniques, comparing them with Hasse diagrams, and they have been shown to be significantly more efficient than old techniques, both in terms of objective performance as well as the subjective opinion of the test subjects (the Growing Squares technique is, however, only significantly more efficient for small systems).}
}
2003
2. Niklas Elmqvist, Philippas Tsigas (2003): Causality Visualization Using Animated Growing Polygons. Proceedings of the IEEE Symposium on Information Visualization, pp. 189–196, 2003.

@inproceedings{Elmqvist2003b,
  title     = {Causality Visualization Using Animated Growing Polygons},
  author    = {Niklas Elmqvist and Philippas Tsigas},
  url       = {http://www.umiacs.umd.edu/~elm/projects/causality/growing-polys.pdf, Paper},
  year      = {2003},
  booktitle = {Proceedings of the IEEE Symposium on Information Visualization},
  pages     = {189--196},
  abstract  = {We present Growing Polygons, a novel visualization technique for the graphical representation of causal relations and information flow in a system of interacting processes. Using this method, individual processes are displayed as partitioned polygons with color-coded segments showing dependencies to other processes. The entire visualization is also animated to communicate the dynamic execution of the system to the user. The results from a comparative user study of the method show that the Growing Polygons technique is significantly more efficient than the traditional Hasse diagram visualization for analysis tasks related to deducing information flow in a system for both small and large executions. Furthermore, our findings indicate that the correctness when solving causality tasks is significantly improved using our method. In addition, the subjective ratings of the users rank the method as superior in all regards, including usability, efficiency, and enjoyability.}
}
1. Niklas Elmqvist, Philippas Tsigas (2003): Growing Squares: Animated Visualization of Causal Relations. Proceedings of the ACM Symposium on Software Visualization, pp. 17–26, 2003.

@inproceedings{Elmqvist2003a,
  title     = {Growing Squares: Animated Visualization of Causal Relations},
  author    = {Niklas Elmqvist and Philippas Tsigas},
  url       = {http://www.umiacs.umd.edu/~elm/projects/causality/causalviz.pdf, Paper},
  year      = {2003},
  booktitle = {Proceedings of the ACM Symposium on Software Visualization},
  pages     = {17--26},
  abstract  = {We present a novel information visualization technique for the graphical representation of causal relations, that is based on the metaphor of color pools spreading over time on a piece of paper. Messages between processes in the system affect the colors of their respective pool, making it possible to quickly see the influences each process has received. This technique, called Growing Squares, has been evaluated in a comparative user study and shown to be significantly faster and more efficient for sparse data sets than the traditional Hasse diagram visualization. Growing Squares were also more efficient for large data sets, but not significantly so. Test subjects clearly favored Growing Squares over old methods, naming the new technique easier, more efficient, and much more enjoyable to use.}
}
Theses
- N. Elmqvist. 3D Occlusion Management and Causality Visualization. Ph.D. thesis, Chalmers University of Technology (Göteborg), Technical Report No. 2550, ISBN/ISSN: 91-7291-869-1, 2006. [PDF]
- N. Elmqvist. Visualization of Causal Relations. Lic. thesis, Chalmers University of Technology (Göteborg), Technical Report No. 38L, ISBN/ISSN: 1651-4963, 2004. [PDF]
- N. Elmqvist. 3Dwm: Three-Dimensional User Interfaces Using Fast Constructive Solid Geometry. M.Sc. thesis, Chalmers University of Technology (Göteborg), 2001.