Journal Articles
2023
85. Andrea Batch, Yipeng Ji, Mingming Fan, Jian Zhao, Niklas Elmqvist (2023): uxSense: Supporting User Experience Analysis with Visualization and Computer Vision. IEEE Transactions on Visualization & Computer Graphics, 2023. PDF: https://users.umiacs.umd.edu/~elm/projects/uxsense/uxsense.pdf

Abstract: Analyzing user behavior from usability evaluation can be a challenging and time-consuming task, especially as the number of participants and the scale and complexity of the evaluation grow. We propose uxSense, a visual analytics system using machine learning methods to extract user behavior from audio and video recordings as parallel time-stamped data streams. Our implementation draws on pattern recognition, computer vision, natural language processing, and machine learning to extract user sentiment, actions, posture, spoken words, and other features from such recordings. These streams are visualized as parallel timelines in a web-based front-end, enabling the researcher to search, filter, and annotate data across time and space. We present the results of a user study involving professional UX researchers evaluating user data using uxSense. In fact, we used uxSense itself to evaluate their sessions.
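The architecture sketched in this abstract, independent per-channel extractors that each emit a time-stamped stream for a shared timeline view, can be outlined in a few lines. The following Python sketch uses placeholder extractor functions and an invented Event type; it illustrates the pipeline shape only, not the actual uxSense implementation:

```python
# Sketch of the pipeline shape the abstract describes: per-segment feature
# extractors over a session recording, each emitting a time-stamped stream.
# The extractor functions are stand-ins, not the models used in uxSense.
from dataclasses import dataclass

@dataclass
class Event:
    start: float   # seconds into the session
    end: float
    channel: str   # e.g., "sentiment", "action", "speech"
    value: str

def fake_sentiment(segment):  # placeholder for an audio sentiment model
    return "neutral"

def fake_action(segment):     # placeholder for a video action recognizer
    return "scrolling"

EXTRACTORS = {"sentiment": fake_sentiment, "action": fake_action}

def analyze(session_length_s, window_s=5.0):
    streams = []
    t = 0.0
    while t < session_length_s:
        segment = (t, min(t + window_s, session_length_s))  # stand-in for media
        for channel, extract in EXTRACTORS.items():
            streams.append(Event(segment[0], segment[1], channel, extract(segment)))
        t += window_s
    return streams   # rendered as parallel timelines in a web front-end

for e in analyze(12.0)[:4]:
    print(e)
```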
84. Debanjan Datta, Nathan Self, John Simeone, Amelia Meadows, Willow Outhwaite, Linda Walker, Niklas Elmqvist, Naren Ramakrishnan (2023): TimberSleuth: Visual Anomaly Detection with Human Feedback for Mitigating the Illegal Timber Trade. Information Visualization, 2023. PDF: https://users.umiacs.umd.edu/~elm/projects/timbersleuth/timbersleuth.pdf

Abstract: Detecting illegal shipments in the global timber trade poses a massive challenge to enforcement agencies. The volume and complexity of timber shipments, and obfuscations within international trade data, intentional or not, necessitate an automated system to aid in detecting specific shipments that potentially contain illegally harvested wood. To address these requirements, we build a novel human-in-the-loop visual analytics system called TimberSleuth. TimberSleuth uses a novel scoring model, reinforced through human feedback, to improve the relevance of the system's results while using an off-the-shelf anomaly detection model. Detailed evaluation is performed using real data with synthetic anomalies to test the machine intelligence that drives the system. We design interactive visualizations to enable analysis of pertinent details of anomalous trade records so that analysts can determine whether a record is relevant and provide iterative feedback. This feedback is utilized by the machine learning model to improve the precision of the output.
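The human-in-the-loop loop the abstract describes, an off-the-shelf anomaly detector whose output is steered by analyst feedback, follows a common pattern. Below is a minimal Python sketch of that pattern using scikit-learn's IsolationForest; the multiplicative feedback rule and synthetic data are illustrative assumptions, not TimberSleuth's actual scoring model:

```python
# Minimal human-in-the-loop anomaly scoring loop (illustrative only).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))            # stand-in for trade-record features

detector = IsolationForest(random_state=0).fit(X)
base_scores = -detector.score_samples(X)  # higher = more anomalous

weights = np.ones(X.shape[1])             # feedback-learned feature relevance

def present_top_k(scores, k=10):
    return np.argsort(scores)[-k:][::-1]  # indices of the top-k anomalies

def update_weights(record, relevant, lr=0.1):
    global weights
    # Nudge weights toward features that made a *relevant* record stand out,
    # and away from features behind irrelevant flags (multiplicative rule).
    contribution = np.abs(record) / (np.abs(record).sum() + 1e-9)
    weights *= np.exp(lr * (1 if relevant else -1) * contribution)

# One feedback round: an analyst labels each flagged record relevant or not.
for idx in present_top_k(base_scores):
    analyst_says_relevant = bool(rng.integers(0, 2))  # stand-in for real feedback
    update_weights(X[idx], analyst_says_relevant)

# Re-score with feedback-adjusted features for the next round.
adjusted_scores = -detector.score_samples(X * weights)
```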
2022
83. Tamara L. Clegg, Keaunna Cleveland, Erianne Weight, Daniel Greene, Niklas Elmqvist (2022): Data Everyday as Community Driven Science: Athletes' Critical Data Literacy Practices in Collegiate Sports Contexts. Journal of Research in Science Teaching, 2022.

Abstract: In this article, we investigate the community-driven science happening organically in elite athletics as a means of engaging a community of learners—collegiate athletes, many of whom come from underrepresented groups—in STEM. We aim to recognize the data literacy practices inherent in sports play and to explore the potential of critical data literacy practices for enabling athletes to leverage data science as a means of addressing systemic racial, equity, and justice issues inherent in sports institutions. We leverage research on critical data literacies as a lens to present case studies of three athletes at an NCAA Division I university spanning three different sports. We focus on athletes' experiences as they engage in critical data literacy practices and the ways they welcome, adapt, resist, and critique such engagements. Our findings indicate ways in which athletes (1) readily accept data practices espoused by their coaches and sport, (2) critique and intentionally disengage from such practices, and (3) develop their own new data productions. To support community-driven science, our findings point to the critical role of athletics organizations in promoting athletes' access to, as well as engagement and agency with, data practices on their teams.
82. Sungbok Shin, Sunghyo Chung, Sanghyun Hong, Niklas Elmqvist (2022): A Scanner Deeply: Predicting Gaze Heatmaps on Visualizations Using Crowdsourced Eye Movement Data. IEEE Transactions on Visualization & Computer Graphics, 2022. PDF: http://users.umiacs.umd.edu/~elm/projects/scanner-deeply/scanner-deeply.pdf

Abstract: Visual perception is a key component of data visualization. Much prior empirical work uses eye movement as a proxy to understand human visual perception. Diverse apparatuses and techniques have been proposed to collect eye movements, but there is still no optimal approach. In this paper, we review 30 prior works for collecting eye movements along three axes: (1) the tracker technology used to measure eye movements; (2) the image stimulus shown to participants; and (3) the collection methodology used to gather the data. Based on this taxonomy, we employ a webcam-based eyetracking approach using task-specific visualizations as the stimulus. The low technology requirement means that virtually anyone can participate, thus enabling us to collect data at large scale using crowdsourcing: approximately 12,000 samples in total. Choosing visualization images as stimulus means that the eye movements will be specific to perceptual tasks associated with visualization. We use these data to propose Scanner Deeply, a virtual eyetracker model that, given an image of a visualization, generates a gaze heatmap for that image. We employ a computationally efficient yet powerful convolutional neural network for our model. We compare the results of our work with results from the DVS model and a neural network trained on the SALICON dataset. The analysis of our gaze patterns enables us to understand how users grasp the structure of visualized data. We also make our stimulus dataset of visualization images available as part of this paper's contribution.
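A "virtual eyetracker" of this kind is, at its core, an image-to-image network trained to output gaze density maps. The following PyTorch sketch shows one plausible shape for such a model; the encoder-decoder layout, layer sizes, and softmax normalization are assumptions for illustration, not the architecture from the paper:

```python
# Minimal image-to-heatmap network: given a visualization screenshot,
# predict a gaze density map. Illustrative sketch only.
import torch
import torch.nn as nn

class GazeHeatmapNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        logits = self.decoder(self.encoder(x))
        # Normalize to a probability map over pixels, since gaze heatmaps
        # are usually compared with density-based losses (e.g., KL divergence).
        b, _, h, w = logits.shape
        return torch.softmax(logits.view(b, -1), dim=1).view(b, 1, h, w)

model = GazeHeatmapNet()
dummy = torch.randn(2, 3, 256, 256)  # batch of visualization screenshots
heatmaps = model(dummy)              # -> (2, 1, 256, 256), each summing to 1
```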
81. Eric Newburger, Michael Correll, Niklas Elmqvist (2022): Fitting Bell Curves to Data Distributions using Visualization. IEEE Transactions on Visualization & Computer Graphics, 2022. PDF: https://users.umiacs.umd.edu/~elm/projects/fitting-bells/fitting-bells.pdf

Abstract: Idealized probability distributions, such as normal or other curves, lie at the root of confirmatory statistical tests. But how well do people understand these idealized curves? In practical terms, does the human visual system allow us to match sample data distributions with hypothesized population distributions from which those samples might have been drawn? And how do different visualization techniques impact this capability? This paper shares the results of a crowdsourced experiment that tested the ability of respondents to fit normal curves to four different data distribution visualizations: bar histograms, dotplot histograms, strip plots, and boxplots. We find that the crowd can estimate the center (mean) of a distribution with some success and little bias. We also find that people generally overestimate the standard deviation—which we dub the “umbrella effect” because people tend to want to cover the whole distribution using the curve, as if sheltering it from the heavens above—and that strip plots yield the best accuracy.
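The task given to respondents, matching a normal curve to a plotted sample, has a direct computational analogue. The sketch below fits a Gaussian by maximum likelihood and contrasts it with a hypothetical "umbrella" respondent who stretches sigma to cover the whole sample; the span/4 heuristic is an invented stand-in for that behavior, not a result from the paper:

```python
# Fitting a normal curve to a sample, plus an illustration of the paper's
# "umbrella effect": reporting a larger standard deviation than the best fit.
import numpy as np
from scipy import stats

sample = np.random.default_rng(1).normal(loc=50, scale=10, size=200)

mu_hat, sigma_hat = stats.norm.fit(sample)  # maximum-likelihood fit

# A hypothetical respondent who "shelters" the whole distribution under
# the curve would overestimate sigma, e.g. by spanning the full range:
sigma_umbrella = (sample.max() - sample.min()) / 4  # invented span/4 heuristic

print(f"fit:      mu={mu_hat:.1f}, sigma={sigma_hat:.1f}")
print(f"umbrella: sigma={sigma_umbrella:.1f}  (overestimate)")
```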
80. Pramod Chundury, M. Adil Yalcin, Jonathan Crabtree, Anup Mahurkar, Lisa M. Shulman, Niklas Elmqvist (2022): Contextual In-Situ Help for Visual Data Interfaces. Information Visualization, 2022. PDF: https://users.umiacs.umd.edu/~elm/projects/contextual-help/contextual-help.pdf

Abstract: As the complexity of data analysis increases, even well-designed data interfaces must guide experts in transforming their theoretical knowledge into actual features supported by the tool. This challenge is even greater for casual users who are increasingly turning to data analysis to solve everyday problems. To address this challenge, we propose data-driven, contextual, in-situ help features that can be implemented in visual data interfaces. We introduce five modes of help-seeking: (1) contextual help on selected interface elements, (2) topic listing, (3) overview, (4) guided tour, and (5) notifications. The difference between our work and general user interface help systems is that data visualizations provide a unique environment for embedding context-dependent data inside on-screen messaging. We demonstrate the usefulness of such contextual help through case studies of two visual data interfaces: Keshif and POD-Vis. We implemented and evaluated the help modes with two sets of participants, and found that directly selecting user interface elements was the most useful.
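One way to read the first help mode, contextual help on selected interface elements, is as a registry that maps UI elements to data-driven message templates. The Python sketch below is a hypothetical miniature of such a mechanism; the class, enum, and element names are invented, not the Keshif or POD-Vis APIs:

```python
# Hypothetical sketch of the five help-seeking modes and a contextual-help
# registry that fills templates with live data values.
from dataclasses import dataclass, field
from enum import Enum, auto

class HelpMode(Enum):
    CONTEXTUAL = auto()    # (1) help on a selected interface element
    TOPIC_LIST = auto()    # (2) browsable list of help topics
    OVERVIEW = auto()      # (3) high-level summary of the interface
    GUIDED_TOUR = auto()   # (4) step-by-step walkthrough
    NOTIFICATION = auto()  # (5) proactive, data-driven messages

@dataclass
class HelpEntry:
    element_id: str        # UI element the help is attached to
    text: str              # template; may embed current data values
    modes: set = field(default_factory=lambda: {HelpMode.CONTEXTUAL})

class HelpRegistry:
    def __init__(self):
        self._entries: dict[str, HelpEntry] = {}

    def register(self, entry: HelpEntry):
        self._entries[entry.element_id] = entry

    def on_element_selected(self, element_id: str, data_context: dict) -> str:
        # Mode (1), contextual in-situ help: fill the entry's template with
        # values from the data currently on screen.
        entry = self._entries.get(element_id)
        if entry is None or HelpMode.CONTEXTUAL not in entry.modes:
            return "No help available."
        return entry.text.format(**data_context)

registry = HelpRegistry()
registry.register(HelpEntry("histogram-bin", "This bin covers {lo}-{hi} with {n} records."))
print(registry.on_element_selected("histogram-bin", {"lo": 0, "hi": 10, "n": 42}))
```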
79. Biswaksen Patnaik, Huaishu Peng, Niklas Elmqvist (2022): Sensemaking Sans Power: Interactive Data Visualization Using Color-Changing Ink. IEEE Transactions on Visualization & Computer Graphics, 2022. PDF: https://users.umiacs.umd.edu/~elm/projects/sense-sans-power/sense-sans-power.pdf

Abstract: We present an approach for interactively visualizing data using color-changing inks without the need for electronic displays or computers. Color-changing inks are a family of physical inks that change their color characteristics in response to an external stimulus such as heat, UV light, water, or pressure. Visualizations created using color-changing inks can embed interactivity in printed material without external computational media. In this paper, we survey current color-changing ink technology and then use these findings to derive a framework for how it can be used to construct interactive data representations. We also enumerate the interaction techniques possible using this technology. We then show some examples of how to use color-changing ink to create interactive visualizations on paper. While obviously limited in scope to situations where no power or computing is present, or as a complement to digital displays, our findings can be employed for paper, data physicalization, and embedded visualizations.
78. Sriram Karthik Badam, Senthil Chandrasegaran, Niklas Elmqvist (2022): Integrating Annotations into Multidimensional Visual Dashboards. Information Visualization, 21 (3), pp. 270–284, 2022. PDF: https://users.umiacs.umd.edu/~elm/projects/facetnotes/facetnotes.pdf

Abstract: Multidimensional data is often visualized using coordinated multiple views in an interactive dashboard. However, unlike in infographics where text is often a central part of the presentation, there is currently little knowledge of how to best integrate text and annotations in a visualization dashboard. In this paper, we explore a technique called FacetNotes for presenting these textual annotations on top of any visualization within a dashboard irrespective of the scale of data shown or the design of the visual representation itself. FacetNotes does so by grouping and ordering the textual annotations based on properties of (1) the individual data points associated with the annotations, and (2) the target visual representation on which they should be shown. We present this technique along with a set of user interface features and guidelines to apply it to visualization interfaces. We also demonstrate FacetNotes in a custom visual dashboard interface. Finally, results from a user study of FacetNotes show that the technique improves the scope and complexity of insights developed during visual exploration.
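The two grouping criteria named in the abstract can be made concrete with a small example: group annotations by a property of their data points, then order them within each group by another property, with both keys chosen per target visualization. A hypothetical sketch of that idea, not FacetNotes' actual interface:

```python
# Group and order textual annotations by (1) properties of their data points
# and (2) keys appropriate to the target view. All names are illustrative.
from collections import defaultdict

annotations = [
    {"point": {"category": "A", "value": 42}, "note": "Outlier confirmed"},
    {"point": {"category": "B", "value": 17}, "note": "Sensor glitch?"},
    {"point": {"category": "A", "value": 40}, "note": "Matches Q3 spike"},
]

def layout_annotations(annotations, group_key, order_key):
    groups = defaultdict(list)
    for a in annotations:
        groups[a["point"][group_key]].append(a)          # (1) group by data property
    for notes in groups.values():
        notes.sort(key=lambda a: a["point"][order_key])  # (2) order within group
    return dict(groups)

# A bar chart keyed by category might group by "category" and order by "value";
# a scatterplot might instead group by cluster and order by position.
for group, notes in layout_annotations(annotations, "category", "value").items():
    print(group, [a["note"] for a in notes])
```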
77. Minjeong Shin, Joohee Kim, Yunha Han, Lexing Xie, Mitchell Whitelaw, Bum Chul Kwon, Sungahn Ko, Niklas Elmqvist (2022): Roslingifier: Semi-Automated Storytelling for Animated Scatterplots. IEEE Transactions on Visualization & Computer Graphics, 2022. PDF: https://users.umiacs.umd.edu/~elm/projects/roslingifier/roslingifier.pdf

Abstract: We present Roslingifier, a data-driven storytelling method for animated scatterplots. Like its namesake, Hans Rosling (1948–2017), a professor of public health and a spellbinding public speaker, Roslingifier turns a sequence of entities changing over time, such as countries and continents with their demographic data, into an engaging narrative telling the story of the data. This data-driven storytelling method with an in-person presenter is a new genre of storytelling technique and has never been studied before. In this paper, we aim to define a design space for this new genre, data presentation, and provide a semi-automated authoring tool to help presenters create quality presentations. From an in-depth analysis of video clips of presentations using interactive visualizations, we derive three specific techniques to achieve this: natural language narratives, visual effects that highlight events, and temporal branching that changes the playback time of the animation. Our implementation of the Roslingifier method is capable of identifying and clustering significant movements, automatically generating visual highlighting and a narrative for playback, and letting the user customize the result. Two user studies show that Roslingifier allows users to effectively create engaging data stories and that its features help both presenters and viewers find diverse insights.
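Of the pipeline stages listed, the first, identifying significant movements between time steps, is the easiest to illustrate. The sketch below flags entities whose displacement exceeds a simple mean-plus-one-standard-deviation threshold and emits a narrative line for each; the threshold, data, and phrasing are invented, not Roslingifier's actual detection logic:

```python
# Flag entities whose movement between two time steps is large enough to
# narrate and highlight. Thresholds, data, and phrasing are illustrative.
import numpy as np

entities = ["China", "India", "Nigeria"]
pos_t0 = np.array([[3.0, 65.0], [2.5, 60.0], [2.0, 48.0]])  # (x=income, y=lifespan)
pos_t1 = np.array([[9.0, 75.0], [5.5, 68.0], [2.3, 50.0]])

deltas = np.linalg.norm(pos_t1 - pos_t0, axis=1)
threshold = deltas.mean() + deltas.std()  # flag unusually large movements

for name, d, moved in zip(entities, deltas, deltas >= threshold):
    if moved:
        print(f"Notice how {name} moves sharply (distance {d:.1f}) this period.")
```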
76. Pramod Chundury, Biswaksen Patnaik, Yasmin Reyazuddin, Christine W. Tang, Jonathan Lazar, Niklas Elmqvist (2022): Towards Understanding Sensory Substitution for Accessible Visualization: An Interview Study. IEEE Transactions on Visualization & Computer Graphics, 28 (1), pp. 1084–1094, 2022. PDF: https://users.umiacs.umd.edu/~elm/projects/access-vis/access-vis.pdf

Abstract: For all its potential in supporting data analysis, particularly in exploratory situations, visualization also creates accessibility barriers for blind and visually impaired individuals. Regardless of how effective a visualization is, providing equal access for blind users requires a paradigm shift for the visualization research community. To enact such a shift, it is not sufficient to treat visualization accessibility as merely another technical problem to overcome. Instead, supporting the millions of blind and visually impaired users around the world, who have equally valid needs for data analysis as sighted individuals, requires a respectful, equitable, and holistic approach that includes all users from the onset. In this paper, we draw on accessibility research methodologies to make inroads towards such an approach. We first identify the people who have specific insight into how blind people perceive the world: orientation and mobility (O&M) experts, instructors who teach blind individuals how to navigate the physical world using non-visual senses. We interview 10 O&M experts, all of them blind, to understand how best to use sensory substitution other than the visual sense for conveying spatial layouts. We then analyze our qualitative findings using thematic analysis. While blind people in general tend to use both sound and touch to understand their surroundings, we focused on auditory affordances and how they can be used to make data visualizations accessible through sonification and auralization. However, our experts recommended supporting a combination of senses, sound and touch, to make charts accessible, as blind individuals may be more familiar with exploring tactile charts. We report results on both sound and touch affordances, and conclude by discussing implications for accessible visualization for blind individuals.
2021
75. Weihang Wang, Sriram Karthik Badam, Niklas Elmqvist (2021): Topology-Aware Space Distortion for Structured Visualization Spaces. Information Visualization, 2021. PDF: https://users.umiacs.umd.edu/~elm/projects/zoomhalo/zoomhalo.pdf

Abstract: We propose topology-aware space distortion (TASD), a family of interactive layout techniques for non-linearly distorting geometric space based on user attention and on the structure of the visual representation. TASD seamlessly adapts the visual substrate of any visualization to give more screen real estate to important regions of the representation at the expense of less important regions. In this paper, we present a concrete TASD technique that we call ZoomHalo for interactively distorting a two-dimensional space based on a degree-of-interest (DOI) function defined for the space. Using this DOI function, ZoomHalo derives several areas of interest, computes the available space around each area in relation to other areas and the current viewport extents, and then dynamically expands (or shrinks) each area given user input. We use our prototype to evaluate the technique in two user studies, as well as showcase examples of TASD for node-link diagrams, word clouds, and geographical maps.
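The core mechanic, expanding high-DOI regions at the expense of low-DOI ones, can be shown in one dimension: integrate a DOI-weighted density and renormalize, so screen space accumulates faster near areas of interest. A minimal sketch assuming a Gaussian DOI function; this shows the general fisheye-style idea, not ZoomHalo's 2D algorithm:

```python
# DOI-driven space distortion in 1D: points near the focus get more screen
# space, the rest is compressed. Parameters are illustrative.
import numpy as np

def doi(x, focus=0.5, sigma=0.1):
    return np.exp(-((x - focus) ** 2) / (2 * sigma**2))  # Gaussian interest

def distort(x, magnification=3.0):
    # Integrate a DOI-weighted density so high-interest regions expand,
    # then renormalize to keep the layout within [0, 1].
    xs = np.linspace(0, 1, 1024)
    density = 1.0 + magnification * doi(xs)
    cdf = np.cumsum(density)
    cdf /= cdf[-1]
    return np.interp(x, xs, cdf)

points = np.linspace(0, 1, 11)
print(np.round(distort(points), 2))  # spacing widens near the focus at 0.5
```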
74. Sriram Karthik Badam, Niklas Elmqvist (2021): Effects of Screen-Responsive Visualization on Data Comprehension. Information Visualization, 20 (4), pp. 229–244, 2021. PDF: https://users.umiacs.umd.edu/~elm/projects/touchinsight/touchinsight.pdf

Abstract: Visualization interfaces designed for heterogeneous devices such as wall displays and mobile screens must be responsive to varying display dimensions, resolution, and interaction capabilities. In this paper, we report on two user studies of visual representations for large versus small displays. The goal of our experiments was to investigate differences between a large vertical display and a mobile hand-held display in terms of data comprehension and the quality of resulting insights. To this end, we developed a visual interface with a coordinated multiple view layout for the large display and two alternative designs of the same interface, a space-saving boundary visualization layout and an overview layout, for the mobile condition. The first experiment was a controlled laboratory study designed to evaluate the effect of display size on the perception of changes in a visual representation; it yielded significant correctness differences even while completion time remained similar. The second evaluation was a qualitative study in a practical setting and showed that participants were able to easily associate and work with the responsive visualizations. Based on the results, we conclude the paper by providing new guidelines for screen-responsive visualization interfaces.
73. Deokgun Park, Mohamed Suhail, Minsheng Zheng, Cody Dunn, Eric Ragan, Niklas Elmqvist (2021): StoryFacets: A Design Study on Storytelling with Visualizations for Collaborative Data Analysis. Information Visualization, 2021. PDF: https://users.umiacs.umd.edu/~elm/projects/storyfacets/storyfacets.pdf

Abstract: Tracking the sensemaking process is a well-established practice in many data analysis tools, and many visualization tools facilitate overview and recall during and after exploration. However, the resulting communication materials such as presentations or infographics often omit provenance information for the sake of simplicity. This unfortunately limits later viewers from engaging in further collaborative sensemaking or discussion about the analysis. We present a design study where we introduced visual provenance and analytics to urban transportation planning. Maintaining the provenance of all analyses was critical to support collaborative sensemaking among the many and diverse stakeholders. Our system, StoryFacets, exposes several different views of the same analysis session, each view designed for a specific audience: (1) the trail view provides a data flow canvas that supports in-depth exploration+provenance (expert analysts); (2) the dashboard view organizes visualizations and other content into a space-filling layout to support high-level analysis (managers); and (3) the slideshow view supports linear storytelling via interactive step-by-step presentations (laypersons). Views are linked so that when one is changed, provenance is maintained. Visual provenance is available on demand to support iterative sensemaking for any team member.
72. Arjun Choudhry, Mandar Sharma, Pramod Chundury, Thomas Kapler, Derek Gray, Naren Ramakrishnan, Niklas Elmqvist (2021): Once Upon A Time In Visualization: Understanding the Use of Textual Narratives for Causality. IEEE Transactions on Visualization & Computer Graphics, 28 (1), 2021. PDF: http://users.umiacs.umd.edu/~elm/projects/causality/onceuponatime.pdf

Abstract: Causality visualization can help people understand temporal chains of events, such as messages sent in a distributed system, cause and effect in a historical conflict, or the interplay between political actors over time. However, as the scale and complexity of these event sequences grows, even these visualizations can become overwhelming to use. In this paper, we propose the use of textual narratives as a data-driven storytelling method to augment causality visualization. We first propose a design space for how textual narratives can be used to describe causal data. We then present results from a crowdsourced user study where participants were asked to recover causality information from two causality visualizations (causal graphs and Hasse diagrams) with and without an associated textual narrative. Finally, we describe CAUSEWORKS, a causality visualization system for understanding how specific interventions influence a causal model. The system incorporates an automatic textual narrative mechanism based on our design space. We validate CAUSEWORKS through interviews with experts who used the system for understanding complex events.
71. Brian Ondov, Fumeng Yang, Matthew Kay, Niklas Elmqvist, Steven Franconeri (2021): Revealing Perceptual Proxies with Adversarial Examples. IEEE Transactions on Visualization & Computer Graphics, 28 (1), 2021. PDF: http://users.umiacs.umd.edu/~elm/projects/perceptual-proxies/revealing-proxies.pdf; OSF materials: https://osf.io/2re7b/

Abstract: Data visualizations convert numbers into visual marks so that our visual system can extract data from an image instead of raw numbers. Clearly, the visual system does not compute these values as a computer would, as an arithmetic mean or a correlation. Instead, it extracts these patterns using perceptual proxies: heuristic shortcuts of the visual marks, such as a center of mass or a shape envelope. Understanding which proxies people use would lead to more effective visualizations. We present the results of a series of crowdsourced experiments that measure how powerfully a set of candidate proxies can explain human performance when comparing the mean and range of pairs of data series presented as bar charts. We generated datasets where the correct answer—the series with the larger arithmetic mean or range—was pitted against an "adversarial" series that should be seen as larger if the viewer uses a particular candidate proxy. We used both Bayesian logistic regression models and a robust Bayesian mixed-effects linear model to measure how strongly each adversarial proxy could drive viewers to answer incorrectly and whether different individuals may use different proxies. Finally, we attempt to construct adversarial datasets from scratch, using an iterative crowdsourcing procedure to perform black-box optimization.
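A concrete miniature of such an adversarial pairing: construct one series whose arithmetic mean is larger and another that a viewer relying on, say, a maximum-height proxy would judge as larger. The values below are invented for illustration and are not stimuli from the paper:

```python
# An "adversarial" stimulus in the paper's sense: series A has the larger
# mean, but series B should *look* larger to a viewer using a max-height
# proxy. Values invented for illustration.
import numpy as np

series_a = np.array([6.0, 6.2, 5.8, 6.1, 5.9])  # high, even bars
series_b = np.array([2.0, 2.5, 9.5, 2.2, 2.4])  # one towering bar

assert series_a.mean() > series_b.mean()         # correct answer: A
assert series_b.max() > series_a.max()           # max-height proxy says: B

print(f"means: A={series_a.mean():.2f}  B={series_b.mean():.2f}")
print(f"maxes: A={series_a.max():.2f}  B={series_b.max():.2f}")
```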
2020
70. Ninger Zhou, Lorraine Kisselburgh, Senthil Chandrasegaran, Karthik Badam, Niklas Elmqvist, Karthik Ramani (2020): Using Social Interaction Trace Data and Context to Predict Collaboration Quality and Creative Fluency in Collaborative Design Learning Environments. International Journal of Human-Computer Studies, 136 (102378), 2020. Website: https://www.sciencedirect.com/science/article/abs/pii/S1071581919301442

Abstract: Engineering design typically occurs as a collaborative process situated in specific contexts such as computer-supported environments; however, there is limited research examining the dynamics of design collaboration in specific contexts. In this study, drawing from situative learning theory, we developed two analytic lenses to broaden theoretical insights into collaborative design practices in computer-supported environments: (a) the role of spatial and material context, and (b) the role of social interactions. We randomly assigned participants to four conditions varying the material context (paper vs. tablet sketching tools) and spatial environment (private room vs. commons area) as they worked collaboratively to generate ideas for a toy design task. We used wearable sociometric badges to automatically and unobtrusively collect social interaction data. Using partial least squares regression, we generated two predictive models for collaboration quality and creative fluency. We found that context matters materially to perceptions of collaboration: those using collaboration-support tools perceived higher-quality collaboration. But context matters spatially to creativity: those situated in private spaces were more fluent in generating ideas than those in commons areas. We also found that interaction dynamics differ: synchronous interaction is important to quality collaboration, but reciprocal interaction is important to creative fluency. These findings provide important insights into the processual factors in collaborative design in computer-supported environments, and the predictive role of context and conversation dynamics. We discuss the theoretical contributions to computer-supported collaborative design, the methodological contributions of wearable sensor tools, and the practical contributions to structuring computer-supported environments for engineering design practice.
69. Amira Chalbi, Jacob Ritchie, Deok Gun Park, Jungu Choi, Nicolas Roussel, Niklas Elmqvist, Fanny Chevalier (2020): Common Fate for Animated Transitions in Visualization. IEEE Transactions on Visualization & Computer Graphics, 26 (1), 2020. PDF: http://users.umiacs.umd.edu/~elm/projects/common-fate/common-fate.pdf

Abstract: The Law of Common Fate from Gestalt psychology states that visual objects moving with the same velocity along parallel trajectories will be perceived by a human observer as grouped. However, the concept of common fate is much broader than mere velocity; in this paper we explore how common fate results from coordinated changes in luminance and size. We present results from a crowdsourced graphical perception study where we asked workers to make perceptual judgments on a series of trials involving four graphical objects under the influence of conflicting static and dynamic visual factors (position, size, and luminance) used in conjunction. Our results yield the following rankings for visual grouping: motion > (dynamic luminance, size, luminance); dynamic size > (dynamic luminance, position); and dynamic luminance > size. We also conducted a follow-up experiment to evaluate the three dynamic visual factors in a more ecologically valid setting, using both a Gapminder-like animated scatterplot and a thematic map of election data. The results indicate that in practice the relative grouping strengths of these factors may depend on various parameters including the visualization characteristics and the underlying data. We discuss design implications for animated transitions in data visualization.
68. Andrea Batch, Andrew Cunningham, Maxime Cordeil, Niklas Elmqvist, Tim Dwyer, Bruce H. Thomas, Kim Marriott (2020): There Is No Spoon: Evaluating Performance, Space Use, and Presence with Expert Domain Users in Immersive Analytics. IEEE Transactions on Visualization & Computer Graphics, 26 (1), 2020. PDF: http://users.umiacs.umd.edu/~elm/projects/nospoon/nospoon.pdf

Abstract: Immersive analytics turns the very space surrounding the user into a canvas for data analysis, supporting human cognitive abilities in myriad ways. We present the results of a design study, contextual inquiry, and longitudinal evaluation involving professional economists using a Virtual Reality (VR) system for multidimensional visualization to explore actual economic data. Results from our preregistered evaluation highlight the varied use of space depending on context (exploration vs. presentation), the organization of space to support work, and the impact of immersion on navigation and orientation in the 3D analysis space.
67. | Nicole Jardine, Brian Ondov, Niklas Elmqvist, Steven Franconeri (2020): The Perceptual Proxies of Visual Comparison. IEEE Transactions on Visualization & Computer Graphics, 26 (1), 2020. (PDF: http://users.umiacs.umd.edu/~elm/projects/perceptual-proxies/perceptual-proxies.pdf) Perceptual tasks in visualizations often involve comparisons. Of two sets of values depicted in two charts, which set had values that were the highest overall? Which had the widest range? Prior empirical work found that the performance on different visual comparison tasks (e.g., "biggest delta", "biggest correlation") varied widely across different combinations of marks and spatial arrangements. In this paper, we expand upon these combinations in an empirical evaluation of two new comparison tasks: the "biggest mean" and "biggest range" between two sets of values. We used a staircase procedure to titrate the difficulty of the data comparison to assess which arrangements produced the most precise comparisons for each task. We find visual comparisons of biggest mean and biggest range are supported by some chart arrangements more than others, and that this pattern is substantially different from the pattern for other tasks. To synthesize these dissonant findings, we argue that we must understand which features of a visualization are actually used by the human visual system to solve a given task. We call these perceptual proxies. For example, when comparing the means of two bar charts, the visual system might use a "Mean length" proxy that isolates the actual lengths of the bars and then constructs a true average across these lengths. Alternatively, it might use a "Hull Area" proxy that perceives an implied hull bounded by the bars of each chart and then compares the areas of these hulls. We propose a series of potential proxies across different tasks, marks, and spatial arrangements. Simple models of these proxies can be empirically evaluated for their explanatory power by matching their performance to human performance across these marks, arrangements, and tasks. We use this process to highlight candidates for perceptual proxies that might scale more broadly to explain performance in visual comparison. |
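The staircase procedure mentioned in this abstract (and again in entry 61 below) is a standard adaptive psychophysics method. The Python sketch below illustrates the idea with a one-up/one-down rule and a simulated observer; all names, step sizes, and parameter values are illustrative assumptions, not the paper's actual protocol.

```python
import random

def staircase(run_trial, start_delta=0.30, step=0.02, n_trials=60):
    """Adaptive one-up/one-down staircase: shrink the stimulus
    difference after a correct answer, grow it after an error,
    so difficulty converges near the participant's threshold."""
    delta = start_delta
    history = []
    for _ in range(n_trials):
        correct = run_trial(delta)   # show two charts differing by `delta`
        history.append((delta, correct))
        delta = max(0.01, delta - step) if correct else delta + step
    # estimate the threshold from the second half of the run
    tail = [d for d, _ in history[len(history) // 2:]]
    return sum(tail) / len(tail)

# toy observer: answers correctly more often when the difference is large
threshold = staircase(lambda d: random.random() < 0.5 + 2 * d)
print(f"estimated threshold delta: {threshold:.3f}")
```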
2019 | |
66. | Zhe Cui, Jayaram Kancherla, Kyle W. Chang, Niklas Elmqvist, Héctor Corrada Bravo (2019): Proactive Visual and Statistical Analysis of Genomic Data in Epiviz. Bioinformatics, 36 (7), pp. 2195–2201, 2019. (Full text: https://academic.oup.com/bioinformatics/article/36/7/2195/5646643) In this article, we present Epiviz Feed, a proactive and automatic visual analytics system integrated with Epiviz that alleviates the burden of manually executing the data analysis required to test biologically meaningful hypotheses. Results of interest that are proactively identified by server-side computations are listed as notifications in a feed. The feed turns genomic data analysis into collaborative work between the analyst and the computational environment, which shortens the analysis time and allows the analyst to explore results efficiently. We discuss three ways in which the proposed system advances the field of genomic data analysis: (i) it takes a first step toward proactive data analysis by utilizing available CPU power from the server to automate the analysis process; (ii) it summarizes hypothesis test results in a way that analysts can easily understand and investigate; and (iii) it enables filtering and grouping of analysis results for quick search. This effort provides initial work on systems that substantially expand how computational and visualization frameworks can be tightly integrated to facilitate interactive genomic data analysis. |
65. | Andreas Mathisen, Tom Horak, Clemens Nylandsted Klokmose, Kaj Grønbæk, Niklas Elmqvist (2019): InsideInsights: Integrating Data-Driven Reporting in Collaborative Visual Analytics. Computer Graphics Forum, 38 (3), pp. 649–661, 2019. (PDF: http://users.umiacs.umd.edu/~elm/projects/insideinsights/insideinsights.pdf) Analyzing complex data is a non-linear process that alternates between identifying discrete facts and developing overall assessments and conclusions. In addition, data analysis rarely occurs in solitude; multiple collaborators can be engaged in the same analysis, or intermediate results can be reported to stakeholders. However, current data-driven communication tools are detached from the analysis process and promote linear stories that forego the hierarchical and branching nature of data analysis, which leads to either too much or too little detail in the final report. We propose a conceptual design for integrated data-driven reporting that allows for iterative structuring of insights into hierarchies linked to analytic provenance and chosen analysis views. The hierarchies become dynamic and interactive reports where collaborators can review and modify the analysis at a desired level of detail. Our web-based InsideInsights system provides interaction techniques to annotate states of analytic components, structure annotations, and link them to appropriate presentation views. We demonstrate the generality and usefulness of our system with two use cases and a qualitative expert review. |
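The hierarchical, provenance-linked reports described above lend themselves to a simple data structure. The Python sketch below is a hypothetical illustration of that idea only; the class name, fields, and rendering scheme are invented here and are not the InsideInsights implementation, which is web-based.

```python
from dataclasses import dataclass, field

@dataclass
class Insight:
    """An annotation linked to a saved analysis state (provenance),
    nested to form a hierarchical, non-linear report structure."""
    text: str
    state_id: str            # id of the analysis view/state it documents
    children: list = field(default_factory=list)

    def render(self, depth=0, max_depth=2):
        """Print the report down to a chosen level of detail."""
        if depth > max_depth:
            return
        print("  " * depth + f"- {self.text}  [view: {self.state_id}]")
        for child in self.children:
            child.render(depth + 1, max_depth)

report = Insight("Sales dipped in Q3", "state-17", [
    Insight("Dip isolated to EU region", "state-21", [
        Insight("Driven by one product line", "state-24"),
    ]),
])
report.render(max_depth=1)   # collaborators choose their level of detail
```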
64. | Jinho Choi, Sanghun Jung, Deok Gun Park, Jaegul Choo, Niklas Elmqvist (2019): Visualizing for the Non-Visual: Enabling the Visually Impaired to Use Visualization. Computer Graphics Forum, 38 (3), pp. 249–260, 2019. (PDF: http://users.umiacs.umd.edu/~elm/projects/vis4nonvisual/vis4nonvisual.pdf) The majority of visualizations on the web are still stored as raster images, making them inaccessible to visually impaired users. We propose a deep-neural-network-based approach that automatically recognizes key elements in a visualization, including the visualization type, graphical elements, labels, legends, and, most importantly, the original data conveyed in the visualization. We leverage this extracted information to present the chart's contents to visually impaired users. Based on interviews with visually impaired users, we built a Google Chrome extension designed to work with screen reader software to automatically decode charts on a webpage using our pipeline. We compared the performance of the back-end algorithm with existing methods and evaluated the utility using qualitative feedback from visually impaired users. |
63. | Calvin Yau, Morteza Karimzadeh, Chittayong Surakitbanharn, Niklas Elmqvist, David S. Ebert (2019): Bridging the Data Analysis Communication Gap Utilizing a Three-Component Summarized Line Graph. Computer Graphics Forum, 38 (3), pp. 375–386, 2019. (PDF: http://users.umiacs.umd.edu/~elm/projects/sumlinegraph/sumlinegraph.pdf) Communication-minded visualizations are designed to provide their audience—managers, decision-makers, and the public—with new knowledge. Authoring such visualizations effectively is challenging because the audience often lacks the expertise, context, and time that professional analysts have at their disposal to explore and understand datasets. We present a novel summarized line graph visualization technique designed specifically for data analysts to communicate data to decision-makers more effectively and efficiently. Our summarized line graph reduces a large and detailed dataset of multiple quantitative time-series into (1) representative data that provides a quick takeaway of the full dataset; (2) analytical highlights that distinguish specific insights of interest; and (3) a data envelope that summarizes the remaining aggregated data. Our summarized line graph achieved the best overall results when evaluated against line graphs, band graphs, stream graphs, and horizon graphs on four representative tasks. |
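As a rough illustration of the three-component reduction this abstract describes, here is a minimal Python sketch. The specific choices (median as the representative series, a min/max envelope, largest total deviation as the highlight) are assumptions made for illustration, not necessarily the paper's definitions.

```python
import statistics

def summarize(series):
    """Reduce many time series to three components: (1) a representative
    series, (2) an analytical highlight, and (3) a data envelope
    summarizing the remaining aggregated data."""
    n = len(series[0])
    representative = [statistics.median(s[t] for s in series) for t in range(n)]
    envelope = [(min(s[t] for s in series), max(s[t] for s in series))
                for t in range(n)]
    # highlight: the series deviating most from the representative one
    def deviation(s):
        return sum(abs(s[t] - representative[t]) for t in range(n))
    highlight = max(series, key=deviation)
    return representative, highlight, envelope

series = [[1, 2, 3, 4], [2, 2, 2, 2], [1, 3, 8, 9]]
rep, hi, env = summarize(series)
print("representative:", rep, "| highlight:", hi, "| envelope:", env)
```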
62. | Biswaksen Patnaik, Andrea Batch, Niklas Elmqvist (2019): Information Olfactation: Harnessing Scent to Convey Data. IEEE Transactions on Visualization & Computer Graphics, 2019. (PDF: http://www.umiacs.umd.edu/~elm/projects/info-olfac/info-olfac.pdf; DOI: https://doi.org/10.1109/TVCG.2018.2865237) Olfactory feedback for analytical tasks is a virtually unexplored area in spite of the advantages it offers for information recall, feature identification, and location detection. Here we introduce the concept of information olfactation as the fragrant sibling of information visualization, and discuss how scent can be used to convey data. Building on a review of the human olfactory system and mirroring common visualization practice, we propose olfactory marks, the substrate in which they exist, and the olfactory channels that are available to designers. To exemplify this idea, we present VISCENT: a six-scent stereo olfactory display capable of conveying olfactory glyphs of varying temperature and direction, as well as a corresponding software system that integrates the display with a traditional visualization display. Finally, we present three applications that make use of the viScent system: a 2D graph visualization, a 2D line and point chart, and an immersive analytics graph visualization in 3D virtual reality. We close the paper with a review of possible extensions of viScent and applications of information olfactation for general visualization beyond the examples in this paper. |
61. | Brian Ondov, Nicole Jardine, Niklas Elmqvist, Steven Franconeri (2019): Face to Face: Evaluating Visual Comparison. IEEE Transactions on Visualization & Computer Graphics, 2019. (PDF: http://www.umiacs.umd.edu/~elm/projects/face2face/face2face.pdf; DOI: https://doi.org/10.1109/TVCG.2018.2864884) Data are often viewed as a single set of values, but those values frequently must be compared with another set. The existing evaluations of designs that facilitate these comparisons tend to be based on intuitive reasoning, rather than quantifiable measures. We build on this work with a series of crowdsourced experiments that use low-level perceptual comparison tasks that arise frequently in comparisons within data visualizations (e.g., which value changes the most between the two sets of data?). Participants completed these tasks across a variety of layouts: overlaid, two arrangements of juxtaposed small multiples, mirror-symmetric small multiples, and animated transitions. A staircase procedure sought the difficulty level (e.g., value change delta) that led to equivalent accuracy for each layout. Confirming prior intuition, we observe high levels of performance for overlaid versus standard small multiples. However, we also find performance improvements for both mirror-symmetric small multiples and animated transitions. While some results are incongruent with common wisdom in data visualization, they align with previous work in perceptual psychology, and thus have potentially strong implications for visual comparison designs. |
60. | Sriram Karthik Badam, Zhicheng Liu, Niklas Elmqvist (2019): Elastic Documents: Coupling Text and Tables through Contextual Visualizations for Enhanced Document Reading. IEEE Transactions on Visualization & Computer Graphics, 2019. (PDF: http://www.umiacs.umd.edu/~elm/projects/elastic-documents/elastic-documents.pdf; DOI: https://doi.org/10.1109/TVCG.2018.2865119) Today's data-rich documents are often complex datasets in themselves, consisting of information in different formats such as text, figures, and data tables. These additional media augment the textual narrative in the document. However, the static layout of a traditional for-print document often impedes deep understanding of its content because of the need to navigate to access content scattered throughout the text. In this paper, we seek to facilitate enhanced comprehension of such documents through a contextual visualization technique that couples text content with data tables contained in the document. We parse the text content and data tables, cross-link the components using a keyword-based matching algorithm, and generate on-demand visualizations based on the reader's current focus within a document. We evaluate this technique in a user study comparing our approach to a traditional reading experience. Results from our study show that (1) participants comprehend the content better with tighter coupling of text and data, (2) the contextual visualizations enable participants to develop better summaries that capture the main data-rich insights within the document, and (3) overall, our method enables participants to develop a more detailed understanding of the document content. |
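The cross-linking step lends itself to a compact sketch. The Python fragment below illustrates keyword-overlap matching between a focused paragraph and table headers; the tokenizer, stop list, and scoring are simplified stand-ins for the paper's actual matching algorithm.

```python
import re

STOP = {"the", "a", "of", "and", "in", "for", "to"}

def keywords(text):
    """Lowercase word set minus a small stop list."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOP}

def link_paragraph(paragraph, tables):
    """Score each table by keyword overlap with the paragraph the reader
    is focused on; the best match drives the contextual visualization."""
    para_kw = keywords(paragraph)
    scores = {name: len(para_kw & keywords(" ".join(headers)))
              for name, headers in tables.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

tables = {
    "table1": ["year", "revenue", "profit"],
    "table2": ["country", "population", "growth"],
}
print(link_paragraph("Revenue and profit grew steadily each year.", tables))
```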
59. | Sriram Karthik Badam, Andreas Mathisen, Roman Rädle, Clemens Nylandsted Klokmose, Niklas Elmqvist (2019): Vistrates: A Component Model for Ubiquitous Analytics. IEEE Transactions on Visualization & Computer Graphics, 2019. (PDF: http://www.umiacs.umd.edu/~elm/projects/vistrates/vistrates.pdf; DOI: https://doi.org/10.1109/TVCG.2018.2865144) Visualization tools are often specialized for specific tasks, which turns the user's analytical workflow into a fragmented process performed across many tools. In this paper, we present a component model design for data visualization to promote modular designs of visualization tools that enhance their analytical scope. Rather than fragmenting tasks across tools, the component model supports unification, where components—the building blocks of this model—can be assembled to support a wide range of tasks. Furthermore, the model also provides additional key properties, such as support for collaboration, sharing across multiple devices, and adaptive usage depending on expertise, from creating visualizations using dropdown menus, through instantiating components, to actually modifying components or creating entirely new ones from scratch using JavaScript or Python source code. To realize our model, we introduce Vistrates, a literate computing platform for developing, assembling, and sharing visualization components. From a visualization perspective, Vistrates features cross-cutting components for visual representations, interaction, collaboration, and device responsiveness maintained in a component repository. From a development perspective, Vistrates offers a collaborative programming environment where novices and experts alike can compose component pipelines for specific analytical activities. Finally, we present several Vistrates use cases that span the full range of the classic "anytime" and "anywhere" motto for ubiquitous analysis: from mobile and on-the-go usage, through office settings, to collaborative smart environments covering a variety of tasks and devices. |
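As a schematic analogy for the component model (Vistrates itself is a web platform, not a Python library), the sketch below shows components composed into a push-based pipeline; the class, method, and component names are all hypothetical.

```python
class Component:
    """A building block with a single transform and downstream
    subscribers; assembling components into pipelines replaces
    fragmented, single-purpose tools."""
    def __init__(self, name, fn):
        self.name, self.fn, self.subscribers = name, fn, []

    def pipe(self, other):
        """Connect this component's output to another's input."""
        self.subscribers.append(other)
        return other

    def push(self, data):
        """Run the transform and propagate the result downstream."""
        out = self.fn(data)
        for sub in self.subscribers:
            sub.push(out)
        return out

source = Component("csv-source", lambda rows: rows)
clean = Component("dropna", lambda rows: [r for r in rows if None not in r])
view = Component("table-view", lambda rows: print(f"render {len(rows)} rows"))

source.pipe(clean).pipe(view)        # compose a pipeline
source.push([(1, 2), (3, None), (5, 6)])
```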
58. | Zhe Cui, Sriram Karthik Badam, M. Adil Yalcin, Niklas Elmqvist (2019): DataSite: Proactive Visual Data Exploration with Computation of Insight-based Recommendations. Information Visualization, 18 (2), pp. 251–267, 2019. (PDF: http://www.umiacs.umd.edu/~elm/projects/datasite/datasite.pdf; video: https://youtu.be/EsK5uOOPO7o) Effective data analysis ideally requires the analyst to have high expertise as well as high knowledge of the data. Even with such familiarity, manually pursuing all potential hypotheses and exploring all possible views is impractical. We present DataSite, a proactive visual analytics system where the burden of selecting and executing appropriate computations is shared by an automatic server-side computation engine. Salient features identified by these automatic background processes are surfaced as notifications in a feed timeline. DataSite effectively turns data analysis into a conversation between analyst and computer, thereby reducing the cognitive load and domain knowledge requirements. We validate the system with a user study comparing it to a recent visualization recommendation system, yielding significant improvement, particularly for complex analyses that existing analytics systems do not support well. |
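A toy version of such a proactive computation engine can be sketched in a few lines of Python (3.10+, for statistics.correlation). The correlation scan below stands in for DataSite's broader set of server-side computations; the threshold, data, and notification wording are illustrative assumptions.

```python
from itertools import combinations
from statistics import correlation  # requires Python >= 3.10

def proactive_feed(table, threshold=0.8):
    """Run candidate analyses in the background and surface only the
    salient ones as feed notifications."""
    feed = []
    for a, b in combinations(table, 2):
        r = correlation(table[a], table[b])
        if abs(r) >= threshold:
            feed.append(f"{a} and {b} are strongly correlated (r = {r:.2f})")
    return feed

table = {
    "price": [10, 20, 30, 40, 50],
    "sales": [95, 80, 62, 41, 20],
    "rain":  [3, 1, 4, 1, 5],
}
for note in proactive_feed(table):
    print("notification:", note)
```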
2018 | |
57. | Fanny Chevalier, Nathalie Henry Riche, Basak Alper, Catherine Plaisant, Jeremy Boy, Niklas Elmqvist (2018): Observations and Reflections on Visualization Literacy at the Elementary School Level. IEEE Computer Graphics & Applications, 38 (3), pp. 21–29, 2018. (PDF: http://www.cs.umd.edu/hcil/trs/2018-06/2018-06.pdf) In this article, we share our reflections on visualization literacy and how it might be better developed in early education. We base this on lessons we learned while studying how teachers instruct, and how students acquire, basic visualization principles and skills in elementary school. We use these findings to propose directions for future research on visualization literacy. |
56. | Justin Wagner, Florin Chelaru, Jayaram Kancherla, Joseph N. Paulson, Alexander Zhang, Victor Felix, Anup Mahurkar, Niklas Elmqvist, Hector Corrada Bravo (2018): Metaviz: Interactive Statistical and Visual Analysis of Metagenomic Data. Nucleic Acids Research, 46 (6), pp. 2777–2787, 2018. (Full text: https://academic.oup.com/nar/article/46/6/2777/4909991) Large studies profiling microbial communities and their association with healthy or disease phenotypes are now commonplace. Processed data from many of these studies are publicly available, but significant effort is required for users to effectively organize, explore, and integrate it, limiting the utility of these rich data resources. Effective integrative and interactive visual and statistical tools to analyze many metagenomic samples can greatly increase the value of these data for researchers. We present Metaviz, a tool for interactive exploratory data analysis of annotated microbiome taxonomic community profiles derived from marker gene or whole metagenome shotgun sequencing. Metaviz is uniquely designed to address the challenge of browsing the hierarchical structure of metagenomic data features while rendering visualizations of data values that are dynamically updated in response to user navigation. We use Metaviz to provide the UMD Metagenome Browser web service, allowing users to browse and explore data for more than 7000 microbiomes from published studies. Users can also deploy Metaviz as a web service, or use it to analyze data through the metavizr package to interoperate with state-of-the-art analysis tools available through Bioconductor. Metaviz is free and open source, with the code, documentation, and tutorials publicly accessible. |
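The core navigation-driven aggregation can be illustrated with a small Python sketch; the taxonomy, counts, and rollup function below are toy stand-ins, not Metaviz's actual data model.

```python
from collections import defaultdict

# leaf abundances keyed by a (phylum, class, order) path -- toy data
counts = {
    ("Bacteroidetes", "Bacteroidia", "Bacteroidales"): 120,
    ("Firmicutes", "Bacilli", "Lactobacillales"): 45,
    ("Firmicutes", "Clostridia", "Clostridiales"): 80,
}

def aggregate(counts, level):
    """Roll leaf abundances up to the taxonomy level the user has
    navigated to, so the chart updates as they browse the hierarchy."""
    rolled = defaultdict(int)
    for path, n in counts.items():
        rolled[path[:level + 1]] += n
    return dict(rolled)

# phylum level: {('Bacteroidetes',): 120, ('Firmicutes',): 125}
print(aggregate(counts, 0))
```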
55. | Zhe Cui, Shivalik Sen, Sriram Karthik Badam, Niklas Elmqvist (2018): VisHive: Supporting Web-based Visualization through Ad-hoc Computational Clusters of Mobile Devices. Information Visualization, 2018. (PDF: http://www.umiacs.umd.edu/~elm/projects/vishive/vishive.pdf) Current web-based visualizations are designed for single computers and cannot make use of additional devices on the client side, even if today's users often have access to several, such as a tablet, a smartphone, and a smartwatch. We present a framework for ad-hoc computational clusters that leverage these local devices for visualization computations. Furthermore, we present an instantiating JavaScript toolkit called VisHive for constructing web-based visualization applications that can transparently connect multiple devices (called cells) into such ad-hoc clusters (called hives) for local computation. Hives are formed either using a matchmaking service or through manual configuration. Cells are organized into a master-slave architecture, where the master provides the visual interface to the user and controls the slaves, and the slaves perform computation. VisHive is built entirely using current web technologies, runs in the native browser of each cell, and requires no specific software to be downloaded on the involved devices. We demonstrate VisHive using four distributed examples: a text analytics visualization, a database query for exploratory visualization, a DBSCAN clustering running on multiple nodes, and a Principal Component Analysis implementation. |
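To make the master-slave split concrete, here is a minimal Python sketch in which threads simulate cells; the real toolkit is JavaScript and distributes work across devices over the network, so hive_map and its signature are purely illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

def hive_map(task, data, cells=3):
    """Master side: split a computation across the available cells
    (simulated here with threads) and merge their partial results."""
    chunk = -(-len(data) // cells)                    # ceil division
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=cells) as pool:
        partials = list(pool.map(task, parts))
    return [x for part in partials for x in part]

# e.g., an expensive per-point transform for a visualization
print(hive_map(lambda pts: [p * p for p in pts], list(range(10))))
```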
54. | Deok Gun Park, Steven M. Drucker, Roland Fernandez, Niklas Elmqvist (2018): ATOM: A Grammar for Unit Visualization. IEEE Transactions on Visualization & Computer Graphics, 2018. (PDF: http://www.umiacs.umd.edu/~elm/projects/atom/atom.pdf) Unit visualizations are a family of visualizations where every data item is represented by a unique visual mark (a visual unit) during visual encoding. For certain datasets and tasks, unit visualizations can provide more information, better match the user's mental model, and enable novel interactions compared to traditional aggregated visualizations. Current visualization grammars cannot fully describe the unit visualization family. In this paper, we characterize the design space of unit visualizations to derive a grammar that can express them. The resulting grammar is called ATOM, and is based on passing data through a series of layout operations that divide the output of previous operations recursively until the size and position of every data point can be determined. We evaluate the expressive power of the grammar by both using it to describe existing unit visualizations, as well as to suggest new unit visualizations. |
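A flavor of such a layout-operator grammar, sketched in Python: a grid operator recursively subdivides the parent rectangle until each unit owns a position and size. The operator names and their composition below are loose illustrations, not ATOM's actual specification syntax.

```python
def gridxy(bounds, items, cols=4):
    """One grid-style layout operator: divide the parent rectangle
    into cells and assign each item a (x, y, w, h) rectangle."""
    x, y, w, h = bounds
    rows = -(-len(items) // cols)           # ceil division
    cw, ch = w / cols, h / rows
    return [(item, (x + (i % cols) * cw, y + (i // cols) * ch, cw, ch))
            for i, item in enumerate(items)]

def groupby_then_layout(bounds, records, key, cols=2):
    """Pass data through layout operations, as the grammar describes:
    first partition space by a category, then place units inside."""
    groups = {}
    for r in records:
        groups.setdefault(r[key], []).append(r)
    placed = []
    for (gname, members), (_, cell) in zip(
            groups.items(), gridxy(bounds, list(groups), cols)):
        placed += gridxy(cell, members)     # recurse into each group's cell
    return placed

records = [{"type": t} for t in "AABBBC"]
for unit, rect in groupby_then_layout((0, 0, 200, 100), records, "type"):
    print(unit, [round(v, 1) for v in rect])
```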
2017 | |
53. | Andrea Batch, Niklas Elmqvist (2017): The Interactive Visualization Gap in Initial Exploratory Data Analysis. IEEE Transactions on Visualization & Computer Graphics, 2017. (PDF: http://www.umiacs.umd.edu/~elm/projects/visgap/visgap.pdf) Data scientists and other analytic professionals often use interactive visualization in the dissemination phase at the end of a workflow, during which findings are communicated to a wider audience. Visualization scientists, however, hold that interactive representation of data can also be used during exploratory analysis itself. Since the use of interactive visualization is optional rather than mandatory, this leaves a "visualization gap" during initial exploratory analysis that is the onus of visualization researchers to fill. In this paper, we explore areas where visualization would be beneficial in applied research by conducting a design study using a novel variation on contextual inquiry conducted with professional data analysts. Based on these interviews and experiments, we propose a set of interactive initial exploratory visualization guidelines which we believe will promote adoption by this type of user. |
52. | Deok Gun Park, Seungyeon Kim, Jurim Lee, Jaegul Choo, Nicholas Diakopoulos, Niklas Elmqvist (2017): ConceptVector: Text Visual Analytics via Interactive Lexicon Building using Word Embedding. IEEE Transactions on Visualization & Computer Graphics, 2017. (PDF: http://www.umiacs.umd.edu/~elm/projects/conceptvector/conceptvector.pdf) Central to many text analysis methods is the notion of a concept: a set of semantically related keywords characterizing a specific object, phenomenon, or theme. Advances in word embedding allow building such concepts from a small set of seed terms. However, naive application of such techniques may result in false positive errors because of the polysemy of human language. To mitigate this problem, we present a visual analytics system called ConceptVector that guides the user in building such concepts and then using them to analyze documents. Document-analysis case studies with real-world datasets demonstrate the fine-grained analysis provided by ConceptVector. To support the elaborate modeling of concepts using user seed terms, we introduce a bipolar concept model and support for irrelevant words. We validate the interactive lexicon building interface via a user study and expert reviews. The quantitative evaluation shows that the bipolar lexicon generated with our methods is comparable to human-generated ones. |
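The seed-driven lexicon building can be sketched with cosine similarity over word vectors. Everything below (the 2-D toy embeddings, the pull/push scoring over positive seeds and irrelevant words) is an illustrative assumption; the paper's bipolar concept model is more elaborate.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def build_concept(seeds, irrelevant, embeddings, k=2):
    """Expand a few seed terms into a concept lexicon by embedding
    similarity, while pushing away user-marked irrelevant words."""
    def score(word):
        vec = embeddings[word]
        pull = max(cosine(vec, embeddings[s]) for s in seeds)
        push = max(cosine(vec, embeddings[s]) for s in irrelevant)
        return pull - push
    candidates = [w for w in embeddings if w not in seeds + irrelevant]
    return sorted(candidates, key=score, reverse=True)[:k]

# toy 2-D "embeddings"; a real system would load trained word vectors
emb = {"joy": (0.9, 0.1), "happy": (0.85, 0.2), "glad": (0.8, 0.15),
       "bank": (0.1, 0.9), "river": (0.05, 0.95)}
print(build_concept(["joy"], ["bank"], emb))   # -> ['happy', 'glad']
```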
51. | Sriram Karthik Badam, Niklas Elmqvist (2017): Visfer: Camera-based Visual Data Transfer for Cross-Device Visualization. Information Visualization, 2017. (PDF: http://www.umiacs.umd.edu/~elm/projects/qrvis/visfer.pdf) Going beyond the desktop to leverage novel devices—such as smartphones, tablets, or large displays—for visual sensemaking typically requires supporting extraneous operations for device discovery, interaction sharing, and view management. Such operations can be time-consuming and tedious, and distract the user from the actual analysis. Embodied interaction models in these multi-device environments can take advantage of the natural interaction and physicality afforded by multimodal devices and help effectively carry out these operations in visual sensemaking. In this paper, we present embodied cross-device interaction models for visualization spaces by conducting a user study to elicit actions from participants that could trigger a portrayed effect of sharing visualizations (and therefore information) across devices. We then explore one common interaction style from this design elicitation called Visfer, a technique for effortlessly sharing visualizations across devices using the visual medium. More specifically, this technique involves taking pictures of visualizations, or rather the QR codes augmenting them, on a display using the built-in camera on a handheld device. Our contributions include a conceptual framework for cross-device interaction and the Visfer technique itself, as well as transformation guidelines to exploit the capabilities of each specific device and a web framework for encoding visualization components into animated QR codes, which capture multiple frames of QR codes to embed more information. Beyond this, we also present the results from a performance evaluation of the visual data transfer enabled by Visfer. We end the paper by presenting application examples of our Visfer framework. |
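The animated-QR transfer reduces to chunking a serialized state into sequenced frames. The Python sketch below shows that framing and reassembly logic only; the JSON/base64 encoding and frame schema are assumptions for illustration, since Visfer's actual web implementation may differ.

```python
import base64, json

def to_frames(state, chunk_size=64):
    """Split a serialized visualization state into sequenced payloads;
    each payload would be rendered as one frame of an animated QR code."""
    blob = base64.b64encode(json.dumps(state).encode()).decode()
    chunks = [blob[i:i + chunk_size] for i in range(0, len(blob), chunk_size)]
    return [{"seq": i, "total": len(chunks), "data": c}
            for i, c in enumerate(chunks)]

def from_frames(frames):
    """Receiver side: reassemble once all frames have been captured."""
    assert len(frames) == frames[0]["total"], "still missing frames"
    blob = "".join(f["data"] for f in sorted(frames, key=lambda f: f["seq"]))
    return json.loads(base64.b64decode(blob))

state = {"chart": "scatterplot", "x": "gdp", "y": "life_exp", "year": 2007}
frames = to_frames(state)
print(len(frames), "frames; round-trip ok:", from_frames(frames) == state)
```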
50. | M. Adil Yalcin, Niklas Elmqvist, Benjamin B. Bederson (2017): Keshif: Rapid and Expressive Tabular Data Exploration for Novices. IEEE Transactions on Visualization & Computer Graphics, 2017. (PDF: http://www.umiacs.umd.edu/~elm/projects/keshif/keshif.pdf) General-purpose graphical interfaces for data exploration are typically based on manual visualization and interaction specifications. While manual specification can be very expressive, it demands considerable effort to make effective design decisions, thereby reducing exploratory speed. Instead, principled automated designs can increase exploratory speed, decrease learning efforts, help avoid ineffective decisions, and therefore better support data analytics novices. Towards these goals, we present Keshif, a new systematic design for tabular data exploration. To summarize a given dataset, Keshif aggregates records by value within attribute summaries, and visualizes aggregate characteristics using a consistent design based on data types. To reveal data distribution details, Keshif features three complementary linked selections: highlighting, filtering, and comparison. Keshif further increases expressiveness through aggregate metrics, absolute/part-of scale modes, calculated attributes, and saved selections, all working in synchrony. Its automated design approach also simplifies authoring of dashboards composed of summaries and individual records from raw data using fluid interaction. We show examples selected from 160+ datasets from diverse domains. Our study with novices shows that after exploring raw data for 15 minutes, our participants reached close to 30 data insights on average, comparable to other studies with skilled users using more complex tools. |
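Keshif's aggregation and linked selections have a simple computational core, sketched below in Python; the record schema and the way filter and highlight are expressed here are illustrative only.

```python
records = [
    {"genre": "drama", "year": 2001}, {"genre": "comedy", "year": 2001},
    {"genre": "drama", "year": 2005}, {"genre": "drama", "year": 2009},
]

def summary(rows, attribute):
    """Aggregate records by value within an attribute summary."""
    agg = {}
    for r in rows:
        agg[r[attribute]] = agg.get(r[attribute], 0) + 1
    return agg

# linked selections: filtering narrows the record set, while
# highlighting previews a subset against the filtered baseline
filtered = [r for r in records if r["year"] < 2006]          # filter
highlight = [r for r in filtered if r["genre"] == "drama"]   # highlight
print("filtered:", summary(filtered, "genre"),
      "| highlighted:", summary(highlight, "genre"))
```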
49. | Sriram Karthik Badam, Niklas Elmqvist, Jean-Daniel Fekete (2017): Steering the Craft: UI Elements and Visualizations for Supporting Progressive Visual Analytics. Computer Graphics Forum, 36, 2017. (PDF: http://www.umiacs.umd.edu/~elm/projects/insightsfeed/insightsfeed.pdf) Progressive visual analytics (PVA) has emerged in recent years to manage the latency of data analysis systems. When analysis is performed progressively, rough estimates of the results are generated quickly and are then improved over time. Analysts can therefore monitor the progression of the results, steer the analysis algorithms, and make early decisions if the estimates provide a convincing picture. In this article, we describe interface design guidelines for helping users understand progressively updating results and make early decisions based on progressive estimates. To illustrate our ideas, we present a prototype PVA tool called InsightsFeed for exploring Twitter data at scale. As validation, we investigate the tradeoffs of our tool when exploring a Twitter dataset in a user study. We report the usage patterns in making early decisions using the user interface, guiding computational methods, and exploring different subsets of the dataset, compared to sequential analysis without progression. |
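A minimal Python sketch of the progressive pattern: estimates are emitted per chunk together with an uncertainty cue, so an analyst could stop early once the picture is convincing. The chunk size, standard-error cue, and generator interface are assumptions for illustration, not InsightsFeed's design.

```python
import random, statistics

def progressive_mean(stream, chunk=1000):
    """Yield successively refined estimates with an uncertainty cue,
    letting the analyst decide early or steer the computation."""
    seen = []
    while True:
        batch = [next(stream, None) for _ in range(chunk)]
        batch = [b for b in batch if b is not None]
        if not batch:
            return
        seen.extend(batch)
        stderr = statistics.stdev(seen) / len(seen) ** 0.5
        yield len(seen), statistics.mean(seen), stderr

data = iter([random.gauss(100, 15) for _ in range(10_000)])
for n, est, err in progressive_mean(data):
    print(f"after {n:>6} items: mean ~ {est:.2f} +/- {err:.2f}")
```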
48. | Senthil Chandrasegaran, Sriram Karthik Badam, Lorraine Kisselburgh, Karthik Ramani (2017): Integrating Visual Analytics Support for Grounded Theory Practice in Qualitative Text Analysis. Computer Graphics Forum, 36 2017. (Type: Article | Abstract | Links | BibTeX) @article{Chandrasegaran2017c, title = {Integrating Visual Analytics Support for Grounded Theory Practice in Qualitative Text Analysis}, author = {Senthil Chandrasegaran and Sriram Karthik Badam and Lorraine Kisselburgh and Karthik Ramani}, url = {http://www.umiacs.umd.edu/~elm/projects/gthelper/gthelper.pdf, PDF}, year = {2017}, date = {2017-05-15}, journal = {Computer Graphics Forum}, volume = {36}, abstract = {We present an argument for using visual analytics to aid Grounded Theory methodologies in qualitative data analysis. Grounded theory methods involve the inductive analysis of data to generate novel insights and theoretical constructs. Making sense of unstructured text data is a task uniquely suited to visual analytics. Using natural language processing techniques such as part-of-speech tagging, information content retrieval, and topic modeling, different parts of the data can be structured and semantically associated, and interactively explored, thereby providing conceptual depth to the guided discovery process. We review grounded theory methods and identify processes that can be enhanced through visual analytic techniques. Next, we develop an interface for qualitative text analysis, and evaluate our design with qualitative research practitioners who analyze texts with and without visual analytics support. The results of our study suggest how visual analytics can be incorporated into qualitative data analysis tools, and the analytic and interpretive benefits that can result.}, keywords = {} } We present an argument for using visual analytics to aid Grounded Theory methodologies in qualitative data analysis. Grounded theory methods involve the inductive analysis of data to generate novel insights and theoretical constructs. Making sense of unstructured text data is a task uniquely suited to visual analytics. Using natural language processing techniques such as part-of-speech tagging, information content retrieval, and topic modeling, different parts of the data can be structured and semantically associated, and interactively explored, thereby providing conceptual depth to the guided discovery process. We review grounded theory methods and identify processes that can be enhanced through visual analytic techniques. Next, we develop an interface for qualitative text analysis, and evaluate our design with qualitative research practitioners who analyze texts with and without visual analytics support. The results of our study suggest how visual analytics can be incorporated into qualitative data analysis tools, and the analytic and interpretive benefits that can result. |
47. | Senthil Chandrasegaran, Sriram Karthik Badam, Lorraine Kisselburgh, Kylie Peppler, Niklas Elmqvist, Karthik Ramani (2017): VizScribe: A Visual Analytics Approach to Understand Designer Behavior. International Journal of Human-Computer Studies, 100, pp. 66–80, 2017. (Type: Article | Abstract | Links | BibTeX) @article{Chandrasegaran2017b, title = {VizScribe: A Visual Analytics Approach to Understand Designer Behavior}, author = {Senthil Chandrasegaran and Sriram Karthik Badam and Lorraine Kisselburgh and Kylie Peppler and Niklas Elmqvist and Karthik Ramani}, url = {http://www.umiacs.umd.edu/~elm/projects/vizscribe/vizscribe.pdf, PDF}, year = {2017}, date = {2017-01-02}, journal = {International Journal of Human-Computer Studies}, volume = {100}, pages = {66--80}, abstract = {Design protocol analysis is a technique to understand designers’ cognitive processes by analyzing sequences of observations on their behavior. These observations typically use audio, video, and transcript data in order to gain insights into the designer's behavior and the design process. The recent availability of sophisticated sensing technology has made such data highly multimodal, requiring more flexible protocol analysis tools. To address this need, we present VizScribe, a visual analytics framework that employs multiple coordinated views that enable the viewing of such data from different perspectives. VizScribe allows designers to create, customize, and extend interactive visualizations for design protocol data such as video, transcripts, sketches, sensor data, and user logs. User studies where design researchers used VizScribe for protocol analysis indicated that the linked views and interactive navigation offered by VizScribe afforded the researchers multiple, useful ways to approach and interpret such multimodal data.}, keywords = {} } Design protocol analysis is a technique to understand designers’ cognitive processes by analyzing sequences of observations on their behavior. These observations typically use audio, video, and transcript data in order to gain insights into the designer's behavior and the design process. The recent availability of sophisticated sensing technology has made such data highly multimodal, requiring more flexible protocol analysis tools. To address this need, we present VizScribe, a visual analytics framework that employs multiple coordinated views that enable the viewing of such data from different perspectives. VizScribe allows designers to create, customize, and extend interactive visualizations for design protocol data such as video, transcripts, sketches, sensor data, and user logs. User studies where design researchers used VizScribe for protocol analysis indicated that the linked views and interactive navigation offered by VizScribe afforded the researchers multiple, useful ways to approach and interpret such multimodal data. |
2016 | |
46. | Minjeong Kim, Kyeongpil Kang, Deokgun Park, Jaegul Choo, Niklas Elmqvist (2016): TopicLens: Efficient Multi-Level Visual Topic Exploration of Large-Scale Document Collections. IEEE Transactions on Visualization and Computer Graphics, 23 (1), pp. 151–160, 2016. (Type: Article | Abstract | Links | BibTeX) @article{Kim2017, title = {TopicLens: Efficient Multi-Level Visual Topic Exploration of Large-Scale Document Collections}, author = {Minjeong Kim and Kyeongpil Kang and Deokgun Park and Jaegul Choo and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/topiclens/topiclens.pdf, PDF https://www.youtube.com/watch?v=RKC5w9dZmXQ, Youtube}, year = {2016}, date = {2016-08-10}, journal = {IEEE Transactions on Visualization and Computer Graphics}, volume = {23}, number = {1}, pages = {151--160}, abstract = {Topic modeling, which reveals underlying topics of a document corpus, has been actively adopted in visual analytics for large-scale document collections. However, due to its significant processing time and non-interactive nature, topic modeling has so far not been tightly integrated into a visual analytics workflow. Instead, most such systems are limited to utilizing a fixed, initial set of topics. Motivated by this gap in the literature, we propose a novel interaction technique called TopicLens that allows a user to dynamically explore data through a lens interface where topic modeling and the corresponding 2D embedding are efficiently computed on the fly. To support this interaction in real time while maintaining view consistency, we propose a novel efficient topic modeling method and a semi-supervised 2D embedding algorithm. Our work is based on improving state-of-the-art methods such as nonnegative matrix factorization and t-distributed stochastic neighbor embedding. Furthermore, we have built a web-based visual analytics system integrated with TopicLens. We use this system to measure the performance and the visualization quality of our proposed methods. We provide several scenarios showcasing the capability of TopicLens using real-world datasets.}, keywords = {} } Topic modeling, which reveals underlying topics of a document corpus, has been actively adopted in visual analytics for large-scale document collections. However, due to its significant processing time and non-interactive nature, topic modeling has so far not been tightly integrated into a visual analytics workflow. Instead, most such systems are limited to utilizing a fixed, initial set of topics. Motivated by this gap in the literature, we propose a novel interaction technique called TopicLens that allows a user to dynamically explore data through a lens interface where topic modeling and the corresponding 2D embedding are efficiently computed on the fly. To support this interaction in real time while maintaining view consistency, we propose a novel efficient topic modeling method and a semi-supervised 2D embedding algorithm. Our work is based on improving state-of-the-art methods such as nonnegative matrix factorization and t-distributed stochastic neighbor embedding. Furthermore, we have built a web-based visual analytics system integrated with TopicLens. We use this system to measure the performance and the visualization quality of our proposed methods. We provide several scenarios showcasing the capability of TopicLens using real-world datasets. |
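The on-the-fly topic modeling TopicLens performs under its lens can be approximated with off-the-shelf nonnegative matrix factorization (the paper improves on such methods for interactive speed). A rough sketch assuming scikit-learn is available; `docs_under_lens` is a hypothetical list of document strings:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

def topics_under_lens(docs, n_topics=3, n_words=5):
    """Fit a small NMF topic model on just the documents currently
    under the lens and return the top keywords per topic."""
    tfidf = TfidfVectorizer(stop_words="english")
    doc_term = tfidf.fit_transform(docs)
    model = NMF(n_components=n_topics, init="nndsvd", random_state=0)
    model.fit(doc_term)
    vocab = tfidf.get_feature_names_out()
    return [[vocab[i] for i in topic.argsort()[::-1][:n_words]]
            for topic in model.components_]

# keywords = topics_under_lens(docs_under_lens)  # recompute as the lens moves
```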
45. | Udayan Umapathi, Niklas Elmqvist (2016): Mushaca: A 3-Degrees-of-Freedom Mouse Supporting Rotation. International Journal of Human-Computer Interaction, 32 (6), pp. 481–492, 2016. (Type: Article | Abstract | Links | BibTeX) @article{Umapathi2016, title = {Mushaca: A 3-Degrees-of-Freedom Mouse Supporting Rotation}, author = {Udayan Umapathi and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/mushaca/mushaca.pdf, PDF}, year = {2016}, date = {2016-03-09}, journal = {International Journal of Human-Computer Interaction}, volume = {32}, number = {6}, pages = {481--492}, abstract = {Based on kinesiology research demonstrating that translation and rotation are inseparable actions in the physical world, we present Mushaca, a 3-degrees-of-freedom mouse that senses rotation in addition to traditional planar position. We present an optical realization of the Mushaca device based on two optical sensors and then evaluate the device through a series of controlled experiments. Our results show that rotation is indeed a useful input modality for a pointing device, and also give some insight into how users perceive the changing coordinate system of the rotating mouse and adapt to this change through kinesthetic learning.}, keywords = {} } Based on kinesiology research demonstrating that translation and rotation are inseparable actions in the physical world, we present Mushaca, a 3-degrees-of-freedom mouse that senses rotation in addition to traditional planar position. We present an optical realization of the Mushaca device based on two optical sensors and then evaluate the device through a series of controlled experiments. Our results show that rotation is indeed a useful input modality for a pointing device, and also give some insight into how users perceive the changing coordinate system of the rotating mouse and adapt to this change through kinesthetic learning. |
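How two optical sensors yield a third degree of freedom follows from rigid-body kinematics: a sensor at body position p_i reports a displacement d_i that is approximately t + theta * perp(p_i), so the difference of the two displacements isolates the rotation. A small worked reconstruction of that geometry; the sensor layout and small-angle model are my assumptions, not necessarily the paper's exact formulation:

```python
import math

def perp(v):
    """Rotate a 2D vector 90 degrees counter-clockwise."""
    return (-v[1], v[0])

def mouse_motion(p1, p2, d1, d2):
    """Recover translation t and small-angle rotation theta of the mouse
    body from displacements d1, d2 of two sensors mounted at p1, p2."""
    b = (p2[0] - p1[0], p2[1] - p1[1])     # sensor baseline
    dd = (d2[0] - d1[0], d2[1] - d1[1])    # difference isolates rotation
    pb = perp(b)
    theta = (dd[0] * pb[0] + dd[1] * pb[1]) / (b[0] ** 2 + b[1] ** 2)
    q = perp(p1)
    t = (d1[0] - theta * q[0], d1[1] - theta * q[1])
    return t, theta

# pure 5-degree rotation about the mouse center, sensors 4 cm apart
theta_true = math.radians(5)
p1, p2 = (-0.02, 0.0), (0.02, 0.0)
d1 = tuple(theta_true * c for c in perp(p1))
d2 = tuple(theta_true * c for c in perp(p2))
print(mouse_motion(p1, p2, d1, d2))  # ((0.0, 0.0), ~0.0873 rad)
```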
2015 | |
44. | William Z. Bernstein, Devarajan Ramanujan, Devadatta M. Kulkarni, Jeffrey Tew, Niklas Elmqvist, Fu Zhao, Karthik Ramani (2015): Mutually Coordinated Visualization of Product and Supply Chain Metadata for Sustainable Design. Journal of Mechanical Design, 137 (12), pp. 121101, 2015. (Type: Article | Abstract | Links | BibTeX) @article{Bernstein2015, title = {Mutually Coordinated Visualization of Product and Supply Chain Metadata for Sustainable Design}, author = {William Z. Bernstein and Devarajan Ramanujan and Devadatta M. Kulkarni and Jeffrey Tew and Niklas Elmqvist and Fu Zhao and Karthik Ramani}, url = {http://doi.org/10.1115/1.4031293, DOI}, year = {2015}, date = {2015-10-01}, journal = {Journal of Mechanical Design}, volume = {137}, number = {12}, pages = {121101}, abstract = {In this paper, we present a novel visualization framework for product and supply chain metadata in the context of redesign-related decision scenarios. Our framework is based on the idea of overlaying product-related metadata onto the interactive graph representations of a supply chain and its associated product architecture. By coupling environmental data with graph-based visualizations of product architecture, our framework provides a novel decision platform for expert designers. Here, the user can balance the advantages of a redesign opportunity and manage the associated risk on the product and supply chain. For demonstration, we present ViSER, an interactive visualization tool that provides an interface consisting of different mutually coordinated views providing multiple perspectives on a particular supply chain presentation. To explore the utility of ViSER, we conduct a domain expert exploration using a case study of peripheral computer equipment. Results indicate that ViSER enables new affordances within the decision making process for supply chain redesign.}, keywords = {} } In this paper, we present a novel visualization framework for product and supply chain metadata in the context of redesign-related decision scenarios. Our framework is based on the idea of overlaying product-related metadata onto the interactive graph representations of a supply chain and its associated product architecture. By coupling environmental data with graph-based visualizations of product architecture, our framework provides a novel decision platform for expert designers. Here, the user can balance the advantages of a redesign opportunity and manage the associated risk on the product and supply chain. For demonstration, we present ViSER, an interactive visualization tool that provides an interface consisting of different mutually coordinated views providing multiple perspectives on a particular supply chain presentation. To explore the utility of ViSER, we conduct a domain expert exploration using a case study of peripheral computer equipment. Results indicate that ViSER enables new affordances within the decision making process for supply chain redesign. |
43. | Sujin Jang, Niklas Elmqvist, Karthik Ramani (2015): MotionFlow: Visual Abstraction and Aggregation of Sequential Patterns in Human Motion Tracking Data. IEEE Transactions on Visualization and Computer Graphics, 21 (1), pp. 21–30, 2015. (Type: Article | Abstract | Links | BibTeX) @article{Jang2015, title = {MotionFlow: Visual Abstraction and Aggregation of Sequential Patterns in Human Motion Tracking Data}, author = {Sujin Jang and Niklas Elmqvist and Karthik Ramani}, url = {http://www.umiacs.umd.edu/~elm/projects/motionflow/motionflow.pdf, PDF}, year = {2015}, date = {2015-08-14}, journal = {IEEE Transactions on Visualization and Computer Graphics}, volume = {21}, number = {1}, pages = {21--30}, abstract = {Pattern analysis of human motions, which is useful in many research areas, requires understanding and comparison of different styles of motion patterns. However, working with human motion tracking data to support such analysis poses great challenges. In this paper, we propose MotionFlow, a visual analytics system that provides an effective overview of various motion patterns based on an interactive flow visualization. This visualization formulates a motion sequence as transitions between static poses, and aggregates these sequences into a tree diagram to construct a set of motion patterns. The system also allows the users to directly reflect the context of data and their perception of pose similarities in generating representative pose states. We provide local and global controls over the partition-based clustering process. To support the users in organizing unstructured motion data into pattern groups, we designed a set of interactions that enables searching for similar motion sequences from the data, detailed exploration of data subsets, and creating and modifying the group of motion patterns. To evaluate the usability of MotionFlow, we conducted a user study with six researchers with expertise in gesture-based interaction design. They used MotionFlow to explore and organize unstructured motion tracking data. Results show that the researchers were able to easily learn how to use MotionFlow, and the system effectively supported their pattern analysis activities, including leveraging their perception and domain knowledge.}, keywords = {} } Pattern analysis of human motions, which is useful in many research areas, requires understanding and comparison of different styles of motion patterns. However, working with human motion tracking data to support such analysis poses great challenges. In this paper, we propose MotionFlow, a visual analytics system that provides an effective overview of various motion patterns based on an interactive flow visualization. This visualization formulates a motion sequence as transitions between static poses, and aggregates these sequences into a tree diagram to construct a set of motion patterns. The system also allows the users to directly reflect the context of data and their perception of pose similarities in generating representative pose states. We provide local and global controls over the partition-based clustering process. To support the users in organizing unstructured motion data into pattern groups, we designed a set of interactions that enables searching for similar motion sequences from the data, detailed exploration of data subsets, and creating and modifying the group of motion patterns. To evaluate the usability of MotionFlow, we conducted a user study with six researchers with expertise in gesture-based interaction design. They used MotionFlow to explore and organize unstructured motion tracking data. Results show that the researchers were able to easily learn how to use MotionFlow, and the system effectively supported their pattern analysis activities, including leveraging their perception and domain knowledge. |
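The aggregation MotionFlow describes, pose sequences merged into a tree of shared prefixes with transition weights, maps naturally onto a weighted trie. A minimal sketch under that reading; the pose labels and class names are illustrative, not the system's actual data model:

```python
from collections import defaultdict

class PoseNode:
    """One pose state in the aggregated pattern tree; `count` is how
    many tracked sequences pass through this transition."""
    def __init__(self):
        self.count = 0
        self.children = defaultdict(PoseNode)

def aggregate(sequences):
    """Merge pose-ID sequences into a prefix tree of motion patterns."""
    root = PoseNode()
    for seq in sequences:
        node = root
        for pose in seq:
            node = node.children[pose]
            node.count += 1
    return root

def dump(node, label="(start)", depth=0):
    print("  " * depth + f"{label} x{node.count}")
    for pose, child in sorted(node.children.items()):
        dump(child, pose, depth + 1)

dump(aggregate([["stand", "crouch", "jump"],
                ["stand", "crouch", "roll"],
                ["stand", "walk"]]))
# the shared prefix stand -> crouch carries weight 2, then branches
```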
42. | Mehmet Adil Yalcin, Niklas Elmqvist, Benjamin B. Bederson (2015): AggreSet: Rich and Scalable Set Exploration using Visualizations of Element Aggregations. IEEE Transactions on Visualization and Computer Graphics, 21 (1), pp. 688–697, 2015. (Type: Article | Abstract | Links | BibTeX) @article{Yalcin2015, title = {AggreSet: Rich and Scalable Set Exploration using Visualizations of Element Aggregations}, author = {Mehmet Adil Yalcin and Niklas Elmqvist and Benjamin B. Bederson}, url = {http://www.umiacs.umd.edu/~elm/projects/aggreset/aggreset.pdf, PDF}, year = {2015}, date = {2015-08-14}, journal = {IEEE Transactions on Visualization and Computer Graphics}, volume = {21}, number = {1}, pages = {688--697}, abstract = {Datasets commonly include multi-value (set-typed) attributes that describe set memberships over elements, such as genres per movie or courses taken per student. Set-typed attributes describe rich relations across elements, sets, and the set intersections. Increasing the number of sets results in a combinatorial growth of relations and creates scalability challenges. Exploratory tasks (e.g., selection, comparison) have commonly been designed separately for set-typed attributes, which reduces interface consistency. To improve on scalability and to support rich, contextual exploration of set-typed data, we present AggreSet. AggreSet creates aggregations for each data dimension: sets, set-degrees, set-pair intersections, and other attributes. It visualizes the element count per aggregate using a matrix plot for set-pair intersections, and histograms for set lists, set-degrees and other attributes. Its non-overlapping visual design is scalable to numerous and large sets. AggreSet supports selection, filtering, and comparison as core exploratory tasks. It allows analysis of set relations including subsets, disjoint sets and set intersection strength, and also features perceptual set ordering for detecting patterns in set matrices. Its interaction is designed for rich and rapid data exploration. We demonstrate results on a wide range of datasets from different domains with varying characteristics, and report on expert reviews and a case study using student enrollment and degree data with assistant deans at a major public university.}, keywords = {} } Datasets commonly include multi-value (set-typed) attributes that describe set memberships over elements, such as genres per movie or courses taken per student. Set-typed attributes describe rich relations across elements, sets, and the set intersections. Increasing the number of sets results in a combinatorial growth of relations and creates scalability challenges. Exploratory tasks (e.g., selection, comparison) have commonly been designed separately for set-typed attributes, which reduces interface consistency. To improve on scalability and to support rich, contextual exploration of set-typed data, we present AggreSet. AggreSet creates aggregations for each data dimension: sets, set-degrees, set-pair intersections, and other attributes. It visualizes the element count per aggregate using a matrix plot for set-pair intersections, and histograms for set lists, set-degrees and other attributes. Its non-overlapping visual design is scalable to numerous and large sets. AggreSet supports selection, filtering, and comparison as core exploratory tasks. It allows analysis of set relations including subsets, disjoint sets and set intersection strength, and also features perceptual set ordering for detecting patterns in set matrices. Its interaction is designed for rich and rapid data exploration. We demonstrate results on a wide range of datasets from different domains with varying characteristics, and report on expert reviews and a case study using student enrollment and degree data with assistant deans at a major public university. |
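The three aggregations named in the abstract (per-set element counts, the set-degree histogram, and pairwise intersection sizes for the matrix plot) can each be computed in one pass over the set-typed attribute. A compact sketch assuming each record's set-typed value is a Python set; names and toy data are illustrative:

```python
from collections import Counter
from itertools import combinations

def aggreset_summaries(set_values):
    """One pass computing per-set sizes, the set-degree histogram,
    and set-pair intersection sizes."""
    set_sizes, degrees, pair_sizes = Counter(), Counter(), Counter()
    for sets in set_values:              # e.g. the genres of one movie
        degrees[len(sets)] += 1          # how many sets this element joins
        set_sizes.update(sets)
        pair_sizes.update(frozenset(p) for p in combinations(sorted(sets), 2))
    return set_sizes, degrees, pair_sizes

genres = [{"Drama"}, {"Drama", "Crime"}, {"Comedy", "Crime", "Drama"}]
sizes, degrees, pairs = aggreset_summaries(genres)
print(sizes)    # Counter({'Drama': 3, 'Crime': 2, 'Comedy': 1})
print(degrees)  # Counter({1: 1, 2: 1, 3: 1})
print(pairs)    # intersection sizes that feed the matrix plot
```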
41. | Niklas Elmqvist, Ji Soo Yi (2015): Patterns for Visualization Evaluation. Information Visualization, 14 (3), pp. 250–269, 2015. (Type: Article | Abstract | Links | BibTeX) @article{Elmqvist2015, title = {Patterns for Visualization Evaluation}, author = {Niklas Elmqvist and Ji Soo Yi}, url = {http://www.umiacs.umd.edu/~elm/projects/eval-patterns/eval-patterns.pdf, Paper http://visevalpatterns.wikia.com/, Wiki}, year = {2015}, date = {2015-07-01}, journal = {Information Visualization}, volume = {14}, number = {3}, pages = {250--269}, abstract = {We propose a pattern-based approach to evaluating data visualization: a set of general and reusable solutions to commonly occurring problems in evaluating visualization tools, techniques, and systems. Patterns have had significant impact in a wide array of disciplines, particularly software engineering, and we believe that they provide a powerful lens for characterizing visualization evaluation practices by offering practical, tried-and-tested tips and tricks that can be adopted immediately. The 20 patterns presented here have also been added to a freely editable Wiki repository. The motivation for creating this evaluation pattern language is to (a) capture and formalize "dark" practices for visualization evaluation not currently recorded in the literature, (b) disseminate these hard-won experiences to researchers and practitioners alike, (c) provide a standardized vocabulary for designing visualization evaluation, and (d) invite the community to add new evaluation patterns to a growing repository of patterns.}, keywords = {} } We propose a pattern-based approach to evaluating data visualization: a set of general and reusable solutions to commonly occurring problems in evaluating visualization tools, techniques, and systems. Patterns have had significant impact in a wide array of disciplines, particularly software engineering, and we believe that they provide a powerful lens for characterizing visualization evaluation practices by offering practical, tried-and-tested tips and tricks that can be adopted immediately. The 20 patterns presented here have also been added to a freely editable Wiki repository. The motivation for creating this evaluation pattern language is to (a) capture and formalize "dark" practices for visualization evaluation not currently recorded in the literature, (b) disseminate these hard-won experiences to researchers and practitioners alike, (c) provide a standardized vocabulary for designing visualization evaluation, and (d) invite the community to add new evaluation patterns to a growing repository of patterns. |
40. | Zhenpeng Zhao, William Benjamin, Niklas Elmqvist, K. Ramani (2015): Sketcholution: Interaction Histories for Sketching. International Journal of Human-Computer Studies, 82 pp. 11–20, 2015. (Type: Article | Abstract | Links | BibTeX) @article{Zhao2015, title = {Sketcholution: Interaction Histories for Sketching}, author = {Zhenpeng Zhao and William Benjamin and Niklas Elmqvist and K. Ramani}, url = {http://www.umiacs.umd.edu/~elm/projects/sketcholution/sketcholution.pdf, Paper https://www.youtube.com/watch?v=SYvkIdJQtEk, Youtube video}, year = {2015}, date = {2015-05-16}, journal = {International Journal of Human-Computer Studies}, volume = {82}, pages = {11--20}, abstract = {We present Sketcholution, a method for automatically creating visual histories of hand-drawn sketches. Such visual histories are useful for a designer to reflect on a sketch, communicate ideas to others, and fork from or revert to an earlier point in the creative process. Our approach uses a bottom-up agglomerative clustering mechanism that groups adjacent frames based on their perceptual similarity while maintaining the causality of how a sketch was constructed. The resulting aggregation dendrogram can be cut at any level depending on available display space, and can be used to create a visual history consisting of either a comic strip of highlights, or a single annotated summary frame. We conducted a user study comparing the speed and accuracy of participants recovering causality in a sketch history using comic strips, summary frames, and simple animations. Although animations with interaction may seem better than static graphics, our results show that both comic strip and summary frame significantly outperform animation.}, keywords = {} } We present Sketcholution, a method for automatically creating visual histories of hand-drawn sketches. Such visual histories are useful for a designer to reflect on a sketch, communicate ideas to others, and fork from or revert to an earlier point in the creative process. Our approach uses a bottom-up agglomerative clustering mechanism that groups adjacent frames based on their perceptual similarity while maintaining the causality of how a sketch was constructed. The resulting aggregation dendrogram can be cut at any level depending on available display space, and can be used to create a visual history consisting of either a comic strip of highlights, or a single annotated summary frame. We conducted a user study comparing the speed and accuracy of participants recovering causality in a sketch history using comic strips, summary frames, and simple animations. Although animations with interaction may seem better than static graphics, our results show that both comic strip and summary frame significantly outperform animation. |
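The clustering described here differs from ordinary agglomerative clustering in one constraint: only temporally adjacent groups may merge, which is what preserves causality. A minimal sketch of that constraint; the similarity stand-in and display budget are placeholders for the paper's perceptual measure and dendrogram cut:

```python
def agglomerate(frames, similarity, budget):
    """Bottom-up clustering that only merges *adjacent* frame groups,
    preserving the causal order in which the sketch was drawn; stops
    once the history fits the display budget (the dendrogram cut)."""
    groups = [[f] for f in frames]
    while len(groups) > budget:
        i = max(range(len(groups) - 1),
                key=lambda k: similarity(groups[k], groups[k + 1]))
        groups[i:i + 2] = [groups[i] + groups[i + 1]]  # merge best pair
    return groups

# toy frames as stroke counts; close counts stand in for similar frames
frames = [1, 2, 2, 9, 9, 10, 25]
sim = lambda a, b: -abs(a[-1] - b[0])
print(agglomerate(frames, sim, budget=3))  # [[1, 2, 2], [9, 9, 10], [25]]
```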
39. | Jungu Choi, Deok Gun Park, Yuetling Wong, Eli Fisher, Niklas Elmqvist (2015): VisDock: A Toolkit for Cross-Cutting Interactions in Visualization. IEEE Transactions on Visualization & Computer Graphics, 21 (9), pp. 1087–1100, 2015. (Type: Article | Abstract | Links | BibTeX) @article{Choi2015, title = {VisDock: A Toolkit for Cross-Cutting Interactions in Visualization}, author = {Jungu Choi and Deok Gun Park and Yuetling Wong and Eli Fisher and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/visdock/visdock.pdf, Paper https://www.youtube.com/watch?v=LUC-nGR-fOk, Youtube video}, year = {2015}, date = {2015-03-21}, journal = {IEEE Transactions on Visualization & Computer Graphics}, volume = {21}, number = {9}, pages = {1087--1100}, abstract = {Standard user applications provide a range of cross-cutting interaction techniques that are common to virtually all such tools: selection, filtering, navigation, layer management, and cut-and-paste. We present VisDock, a JavaScript mixin library that provides a core set of these cross-cutting interaction techniques for visualization, including selection (lasso, paths, shape selection, etc.), layer management (visibility, transparency, set operations, etc.), navigation (pan, zoom, overview, magnifying lenses, etc.), and annotation (point-based, region-based, data-space based, etc.). To showcase the utility of the library, we have released it as Open Source and integrated it with a large number of existing web-based visualizations. Furthermore, we have evaluated VisDock using qualitative studies with both developers utilizing the toolkit to build new web-based visualizations, as well as with end-users utilizing it to explore movie ratings data. Results from these studies highlight the usability and effectiveness of the toolkit from both developer and end-user perspectives.}, keywords = {} } Standard user applications provide a range of cross-cutting interaction techniques that are common to virtually all such tools: selection, filtering, navigation, layer management, and cut-and-paste. We present VisDock, a JavaScript mixin library that provides a core set of these cross-cutting interaction techniques for visualization, including selection (lasso, paths, shape selection, etc.), layer management (visibility, transparency, set operations, etc.), navigation (pan, zoom, overview, magnifying lenses, etc.), and annotation (point-based, region-based, data-space based, etc.). To showcase the utility of the library, we have released it as Open Source and integrated it with a large number of existing web-based visualizations. Furthermore, we have evaluated VisDock using qualitative studies with both developers utilizing the toolkit to build new web-based visualizations, as well as with end-users utilizing it to explore movie ratings data. Results from these studies highlight the usability and effectiveness of the toolkit from both developer and end-user perspectives. |
38. | Yuetling Wong, Jieqiong Zhao, Niklas Elmqvist (2015): Evaluating Social Navigation Visualization in Online Geographic Maps. International Journal of Human-Computer Interaction, 31 (2), pp. 118–127, 2015. (Type: Article | Abstract | Links | BibTeX) @article{Wong2015, title = {Evaluating Social Navigation Visualization in Online Geographic Maps}, author = {Yuetling Wong and Jieqiong Zhao and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/socnav-eval/socnav-eval.pdf, Paper}, year = {2015}, date = {2015-02-22}, journal = {International Journal of Human-Computer Interaction}, volume = {31}, number = {2}, pages = {118--127}, abstract = {Social navigation enables emergent collaboration between independent collaborators by exposing the behavior of each individual. This is a powerful idea for web-based visualization, where the work of one user can inform other users interacting with the same visualization. We present results from a crowdsourced user study evaluating the value of such social navigation cues for a geographic map service. Our results show significantly improved performance for participants who interacted with the map when the visual footprints of previous users were visible.}, keywords = {} } Social navigation enables emergent collaboration between independent collaborators by exposing the behavior of each individual. This is a powerful idea for web-based visualization, where the work of one user can inform other users interacting with the same visualization. We present results from a crowdsourced user study evaluating the value of such social navigation cues for a geographic map service. Our results show significantly improved performance for participants who interacted with the map when the visual footprints of previous users were visible. |
37. | Samah Gad, Waqas Javed, Sohaib Ghani, Niklas Elmqvist, Tom Ewing, Keith N. Hampton, Naren Ramakrishnan (2015): ThemeDelta: Dynamic Segmentations over Temporal Topic Models. IEEE Transactions on Visualization and Computer Graphics, 21 (5), pp. 672–685, 2015. (Type: Article | Abstract | Links | BibTeX) @article{Gad2015, title = {ThemeDelta: Dynamic Segmentations over Temporal Topic Models}, author = {Samah Gad and Waqas Javed and Sohaib Ghani and Niklas Elmqvist and Tom Ewing and Keith N. Hampton and Naren Ramakrishnan}, url = {http://www.umiacs.umd.edu/~elm/projects/theme-delta/theme-delta.pdf, Paper}, year = {2015}, date = {2015-02-17}, journal = {IEEE Transactions on Visualization and Computer Graphics}, volume = {21}, number = {5}, pages = {672--685}, abstract = {We present ThemeDelta, a visual analytics system for extracting and visualizing temporal trends, clustering, and reorganization in time-indexed textual datasets. ThemeDelta is supported by a dynamic temporal segmentation algorithm that integrates with topic modeling algorithms to identify change points where significant shifts in topics occur. This algorithm detects not only the clustering and associations of keywords in a time period, but also their convergence into topics (groups of keywords) that may later diverge into new groups. The visual representation of ThemeDelta uses sinuous, variable-width lines to show this evolution on a timeline, utilizing color for categories, and line width for keyword strength. We demonstrate how interaction with ThemeDelta helps capture the rise and fall of topics by analyzing archives of historical newspapers, of U.S. presidential campaign speeches, and of social messages collected through iNeighbors, a web-based social website. ThemeDelta was evaluated using a qualitative expert user study involving three researchers from rhetoric and history using the historical newspapers corpus.}, keywords = {} } We present ThemeDelta, a visual analytics system for extracting and visualizing temporal trends, clustering, and reorganization in time-indexed textual datasets. ThemeDelta is supported by a dynamic temporal segmentation algorithm that integrates with topic modeling algorithms to identify change points where significant shifts in topics occur. This algorithm detects not only the clustering and associations of keywords in a time period, but also their convergence into topics (groups of keywords) that may later diverge into new groups. The visual representation of ThemeDelta uses sinuous, variable-width lines to show this evolution on a timeline, utilizing color for categories, and line width for keyword strength. We demonstrate how interaction with ThemeDelta helps capture the rise and fall of topics by analyzing archives of historical newspapers, of U.S. presidential campaign speeches, and of social messages collected through iNeighbors, a web-based social website. ThemeDelta was evaluated using a qualitative expert user study involving three researchers from rhetoric and history using the historical newspapers corpus. |
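A toy proxy for the segmentation idea: treat each time window's topic keywords as a set and place a segment boundary wherever consecutive windows overlap too little. ThemeDelta's actual algorithm integrates with the topic model itself; this Jaccard threshold is only a stand-in to make the change-point notion concrete:

```python
def jaccard(a, b):
    """Overlap between two keyword sets, in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 1.0

def change_points(windows, threshold=0.4):
    """Mark a boundary wherever topic keywords reorganize between
    consecutive time windows."""
    return [i for i in range(1, len(windows))
            if jaccard(windows[i - 1], windows[i]) < threshold]

windows = [{"election", "poll", "vote"},
           {"election", "vote", "ballot"},
           {"storm", "flood", "relief"},   # topics shift here
           {"storm", "relief", "aid"}]
print(change_points(windows))  # [2]
```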
36. | Sriram Karthik Badam, Eli Raymond Fisher, Niklas Elmqvist (2015): Munin: A Peer-to-Peer Middleware for Ubiquitous Analytics and Visualization Spaces. IEEE Transactions on Visualization & Computer Graphics, 21 (2), pp. 215–228, 2015. (Type: Article | Abstract | Links | BibTeX) @article{Badam2015, title = {Munin: A Peer-to-Peer Middleware for Ubiquitous Analytics and Visualization Spaces}, author = {Sriram Karthik Badam and Eli Raymond Fisher and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/munin/munin.pdf, Paper https://www.youtube.com/watch?v=ZKIXSdUm6-s, Video http://www.slideshare.net/NickElm/munin-a-peertopeer-middleware-forubiquitous-analytics-and-visualization-spaces, Slides}, year = {2015}, date = {2015-02-01}, journal = {IEEE Transactions on Visualization & Computer Graphics}, volume = {21}, number = {2}, pages = {215--228}, abstract = {We present Munin, a software framework for building ubiquitous analytics environments consisting of multiple input and output surfaces, such as tabletop displays, wall-mounted displays, and mobile devices. Munin utilizes a service-based model where each device provides one or more dynamically loaded services for input, display, or computation. Using a peer-to-peer model for communication, it leverages IP multicast to replicate the shared state among the peers. Input is handled through a shared event channel that lets input and output devices be fully decoupled. It also provides a data-driven scene graph to delegate rendering to peers, thus creating a robust, fault-tolerant, decentralized system. In this paper, we describe Munin's general design and architecture, provide several examples of how we are using the framework for ubiquitous analytics and visualization, and present a case study on building a Munin assembly for multidimensional visualization. We also present performance results and anecdotal user feedback for the framework that suggests that combining a service-oriented, data-driven model with middleware support for data sharing and event handling eases the design and execution of high performance distributed visualizations.}, keywords = {} } We present Munin, a software framework for building ubiquitous analytics environments consisting of multiple input and output surfaces, such as tabletop displays, wall-mounted displays, and mobile devices. Munin utilizes a service-based model where each device provides one or more dynamically loaded services for input, display, or computation. Using a peer-to-peer model for communication, it leverages IP multicast to replicate the shared state among the peers. Input is handled through a shared event channel that lets input and output devices be fully decoupled. It also provides a data-driven scene graph to delegate rendering to peers, thus creating a robust, fault-tolerant, decentralized system. In this paper, we describe Munin's general design and architecture, provide several examples of how we are using the framework for ubiquitous analytics and visualization, and present a case study on building a Munin assembly for multidimensional visualization. We also present performance results and anecdotal user feedback for the framework that suggests that combining a service-oriented, data-driven model with middleware support for data sharing and event handling eases the design and execution of high performance distributed visualizations. |
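The replication mechanism named in the abstract, shared state kept in sync over IP multicast, looks roughly like the following in raw sockets. A self-contained sketch, not Munin's actual middleware; the group address, port, and message format are invented for illustration:

```python
import json
import socket
import struct

GROUP, PORT = "239.0.0.42", 5007   # hypothetical multicast group

def open_peer():
    """Join the multicast group so every peer sees every update."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

def publish(sock, key, value):
    """Replicate one key/value of the shared state to all peers."""
    sock.sendto(json.dumps({"key": key, "value": value}).encode(),
                (GROUP, PORT))

def apply_update(sock, state):
    """Merge one incoming update into this peer's local replica."""
    data, _ = sock.recvfrom(65535)
    update = json.loads(data)
    state[update["key"]] = update["value"]
```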
2014 | |
35. | Jonathan C. Roberts, Panagiotis D. Ritsos, Sriram Karthik Badam, Dominique Brodbeck, Jessie Kennedy, Niklas Elmqvist (2014): Visualization Beyond the Desktop --- The Next Big Thing. IEEE Computer Graphics & Applications, 34 (6), pp. 26–34, 2014. (Type: Article | Abstract | Links | BibTeX) @article{Roberts2014, title = {Visualization Beyond the Desktop --- The Next Big Thing}, author = {Jonathan C. Roberts and Panagiotis D. Ritsos and Sriram Karthik Badam and Dominique Brodbeck and Jessie Kennedy and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/beyond-desktop/beyond-desktop.pdf, Paper}, year = {2014}, date = {2014-12-02}, journal = {IEEE Computer Graphics & Applications}, volume = {34}, number = {6}, pages = {26--34}, abstract = {Visualization is coming of age: with visual depictions being seamlessly integrated into documents and data visualization techniques being used to understand datasets that are ever-growing in size and complexity, the term visualization is increasingly used in everyday conversations. But we are on a cusp; visualization researchers need to develop and adapt to today's new devices and tomorrow's technology. Today, we are interacting with visual depictions through a mouse. Tomorrow, we will be touching, swiping, grasping, feeling, hearing, smelling and even tasting our data. The next big thing is multi-sensory visualization that goes beyond the desktop.}, keywords = {} } Visualization is coming of age: with visual depictions being seamlessly integrated into documents and data visualization techniques being used to understand datasets that are ever-growing in size and complexity, the term visualization is increasingly used in everyday conversations. But we are on a cusp; visualization researchers need to develop and adapt to today's new devices and tomorrow's technology. Today, we are interacting with visual depictions through a mouse. Tomorrow, we will be touching, swiping, grasping, feeling, hearing, smelling and even tasting our data. The next big thing is multi-sensory visualization that goes beyond the desktop. |
34. | Sungahn Ko, Jieqiong Zhao, Jing Xia, Shehzad Afzal, Xiaoyu Wang, Greg Abram, Niklas Elmqvist, Len Kne, David Van Riper, Kelly Gaither, Shaun Kennedy, William Tolone, William Ribarsky, David S. Ebert (2014): VASA: Interactive Computational Steering of Large Asynchronous Simulation Pipelines for Societal Infrastructure. IEEE Transactions on Visualization & Computer Graphics, 20 (12), pp. 1853–1862, 2014. (Type: Article | Abstract | Links | BibTeX) @article{Ko2014, title = {VASA: Interactive Computational Steering of Large Asynchronous Simulation Pipelines for Societal Infrastructure}, author = {Sungahn Ko and Jieqiong Zhao and Jing Xia and Shehzad Afzal and Xiaoyu Wang and Greg Abram and Niklas Elmqvist and Len Kne and David Van Riper and Kelly Gaither and Shaun Kennedy and William Tolone and William Ribarsky and David S. Ebert}, url = {http://www.umiacs.umd.edu/~elm/projects/vasa/vasa.pdf}, year = {2014}, date = {2014-11-13}, journal = {IEEE Transactions on Visualization & Computer Graphics}, volume = {20}, number = {12}, pages = {1853--1862}, abstract = {We present VASA, a visual analytics platform consisting of a desktop application, a component model, and a suite of distributed simulation components for modeling the impact of societal threats such as weather, food contamination, and traffic on critical infrastructure such as supply chains, road networks, and power grids. Each component encapsulates a high-fidelity simulation model that together form an asynchronous simulation pipeline: a system of systems of individual simulations with a common data and parameter exchange format. At the heart of VASA is the Workbench, a visual analytics application providing three distinct features: (1) low-fidelity approximations of the distributed simulation components using local simulation proxies to enable analysts to interactively configure a simulation run; (2) computational steering mechanisms to manage the execution of individual simulation components; and (3) spatiotemporal and interactive methods to explore the combined results of a simulation run. We showcase the utility of the platform using examples involving supply chains during a hurricane as well as food contamination in a fast food restaurant chain.}, keywords = {} } We present VASA, a visual analytics platform consisting of a desktop application, a component model, and a suite of distributed simulation components for modeling the impact of societal threats such as weather, food contamination, and traffic on critical infrastructure such as supply chains, road networks, and power grids. Each component encapsulates a high-fidelity simulation model that together form an asynchronous simulation pipeline: a system of systems of individual simulations with a common data and parameter exchange format. At the heart of VASA is the Workbench, a visual analytics application providing three distinct features: (1) low-fidelity approximations of the distributed simulation components using local simulation proxies to enable analysts to interactively configure a simulation run; (2) computational steering mechanisms to manage the execution of individual simulation components; and (3) spatiotemporal and interactive methods to explore the combined results of a simulation run. We showcase the utility of the platform using examples involving supply chains during a hurricane as well as food contamination in a fast food restaurant chain. |
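The "common data and parameter exchange format" that chains heterogeneous simulations into one steerable pipeline suggests a uniform component interface. A speculative sketch of what such an interface could look like, purely illustrative (the entry does not specify VASA's actual format or API):

```python
from abc import ABC, abstractmethod

class SimulationComponent(ABC):
    """Uniform wrapper so heterogeneous, asynchronous simulations can
    be chained into one pipeline and steered from a workbench."""

    @abstractmethod
    def configure(self, params: dict) -> None:
        """Accept steering parameters before or during a run."""

    @abstractmethod
    def step(self, inputs: dict) -> dict:
        """Consume upstream results; emit results in the shared format."""

def run_pipeline(components, params, inputs):
    """Push one configuration through the chained components."""
    for component in components:
        component.configure(params)   # computational steering hook
        inputs = component.step(inputs)
    return inputs
```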
33. | Krishna Madhavan, Niklas Elmqvist, Mihaela Vorvoreanu, Xin Chen, Yuetling Wong, Hanjun Xian, Zhihua Dong, Aditya Johri (2014): DIA2: Web-based Cyberinfrastructure for Visual Analytics of Funding Portfolios. IEEE Transactions on Visualization & Computer Graphics, 20 (12), pp. 1823–1832, 2014. (Type: Article | Abstract | Links | BibTeX) @article{Madhavan2014, title = {DIA2: Web-based Cyberinfrastructure for Visual Analytics of Funding Portfolios}, author = {Krishna Madhavan and Niklas Elmqvist and Mihaela Vorvoreanu and Xin Chen and Yuetling Wong and Hanjun Xian and Zhihua Dong and Aditya Johri}, url = {http://www.umiacs.umd.edu/~elm/projects/dia2/dia2-vast2014.pdf, Paper}, year = {2014}, date = {2014-11-13}, journal = {IEEE Transactions on Visualization & Computer Graphics}, volume = {20}, number = {12}, pages = {1823--1832}, abstract = {We present a design study of the Deep Insights Anywhere, Anytime (DIA2) platform, a web-based visual analytics system that allows program managers and academic staff at the U.S. National Science Foundation to search, view, and analyze their research funding portfolio. The goal of this system is to facilitate users' understanding of both past and currently active research awards in order to make more informed decisions about their future funding. This user group is characterized by high expertise yet not necessarily high literacy in visualization and visual analytics--they are essentially "casual experts"--and thus require careful visual and information design, including adhering to user experience standards, providing a self-instructive interface, and progressively refining visualizations to minimize complexity. We discuss the challenges of designing a system for "casual experts" and highlight how we addressed this issue by modeling the organizational structure and workflows of the NSF within our system. We discuss each stage of the design process, starting with formative interviews, participatory design, prototypes, and finally live deployments and evaluation with stakeholders.}, keywords = {} } We present a design study of the Deep Insights Anywhere, Anytime (DIA2) platform, a web-based visual analytics system that allows program managers and academic staff at the U.S. National Science Foundation to search, view, and analyze their research funding portfolio. The goal of this system is to facilitate users' understanding of both past and currently active research awards in order to make more informed decisions about their future funding. This user group is characterized by high expertise yet not necessarily high literacy in visualization and visual analytics--they are essentially "casual experts"--and thus require careful visual and information design, including adhering to user experience standards, providing a self-instructive interface, and progressively refining visualizations to minimize complexity. We discuss the challenges of designing a system for "casual experts" and highlight how we addressed this issue by modeling the organizational structure and workflows of the NSF within our system. We discuss each stage of the design process, starting with formative interviews, participatory design, prototypes, and finally live deployments and evaluation with stakeholders. |
32. | Eli Raymond Fisher, Sriram Karthik Badam, Niklas Elmqvist (2014): Designing Peer-to-Peer Distributed User Interfaces: Case Studies on Building Distributed Applications. International Journal of Human-Computer Studies, 72 (1), pp. 100–110, 2014. (Type: Article | Abstract | Links | BibTeX) @article{Fisher2014, title = {Designing Peer-to-Peer Distributed User Interfaces: Case Studies on Building Distributed Applications}, author = {Eli Raymond Fisher and Sriram Karthik Badam and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/dui-design/dui-design.pdf, Paper}, year = {2014}, date = {2014-01-01}, journal = {International Journal of Human-Computer Studies}, volume = {72}, number = {1}, pages = {100--110}, abstract = {Building a distributed user interface (DUI) application should ideally not require any additional effort beyond that necessary to build a non-distributed interface. In practice, however, DUI development is fraught with several technical challenges such as synchronization, resource management, and data transfer. In this paper, we present three case studies on building distributed user interface applications: a distributed media player for multiple displays and controls, a collaborative search system integrating a tabletop and mobile devices, and a multiplayer Tetris game for multi-surface use. While there exist several possible network architectures for such applications, our particular approach focuses on peer-to-peer (P2P) architectures. This focus leads to a number of challenges and opportunities. Drawing from these studies, we derive general challenges for P2P DUI development in terms of design, architecture, and implementation. We conclude with some general guidelines for practical DUI application development using peer-to-peer architectures.}, keywords = {} } Building a distributed user interface (DUI) application should ideally not require any additional effort beyond that necessary to build a non-distributed interface. In practice, however, DUI development is fraught with several technical challenges such as synchronization, resource management, and data transfer. In this paper, we present three case studies on building distributed user interface applications: a distributed media player for multiple displays and controls, a collaborative search system integrating a tabletop and mobile devices, and a multiplayer Tetris game for multi-surface use. While there exist several possible network architectures for such applications, our particular approach focuses on peer-to-peer (P2P) architectures. This focus leads to a number of challenges and opportunities. Drawing from these studies, we derive general challenges for P2P DUI development in terms of design, architecture, and implementation. We conclude with some general guidelines for practical DUI application development using peer-to-peer architectures. |
2013 | |
31. | Stephen MacNeil, Niklas Elmqvist (2013): Visualization Mosaics for Multivariate Visual Exploration. Computer Graphics Forum, 32 (6), pp. 38–50, 2013. (Type: Article | Abstract | Links | BibTeX) @article{MacNeil2013, title = {Visualization Mosaics for Multivariate Visual Exploration}, author = {Stephen MacNeil and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/mosaics/mosaics.pdf, Paper}, year = {2013}, date = {2013-06-01}, journal = {Computer Graphics Forum}, volume = {32}, number = {6}, pages = {38--50}, abstract = {We present a new model for creating composite visualizations of multidimensional datasets using simple visual representations such as point charts, scatterplots, and parallel coordinates as components. Each visual representation is contained in a tile, and the tiles are arranged in a mosaic of views using a space-filling slice-and-dice layout. Tiles can be created, resized, split, or merged using a versatile set of interaction techniques, and the visual representation of individual tiles can also be dynamically changed to another representation. Because each tile is self-contained and independent, it can be implemented in any programming language, on any platform, and using any visual representation. We also propose a formalism for expressing visualization mosaics. A web-based implementation called MosaicJS supporting multidimensional visual exploration showcases the versatility of the concept and illustrates how it can be used to integrate visualization components provided by different toolkits.}, keywords = {} } We present a new model for creating composite visualizations of multidimensional datasets using simple visual representations such as point charts, scatterplots, and parallel coordinates as components. Each visual representation is contained in a tile, and the tiles are arranged in a mosaic of views using a space-filling slice-and-dice layout. Tiles can be created, resized, split, or merged using a versatile set of interaction techniques, and the visual representation of individual tiles can also be dynamically changed to another representation. Because each tile is self-contained and independent, it can be implemented in any programming language, on any platform, and using any visual representation. We also propose a formalism for expressing visualization mosaics. A web-based implementation called MosaicJS supporting multidimensional visual exploration showcases the versatility of the concept and illustrates how it can be used to integrate visualization components provided by different toolkits. |
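The slice-and-dice layout of tiles is essentially a tree whose internal nodes split their rectangle among their children along one axis. A minimal sketch of that data structure; the equal-share split and tile names are simplifications (the paper's mosaics also support interactive resizing, splitting, and merging):

```python
class Tile:
    """A leaf holds one visual representation; an internal node splits
    its rectangle among its children along one axis."""
    def __init__(self, vis=None, axis=None, children=None):
        self.vis, self.axis, self.children = vis, axis, children or []

def layout(tile, x, y, w, h, out):
    """Assign each leaf a screen rectangle by recursive subdivision."""
    if not tile.children:
        out.append((tile.vis, (x, y, w, h)))
        return out
    share = 1 / len(tile.children)   # equal split; resizing would vary this
    for i, child in enumerate(tile.children):
        if tile.axis == "x":
            layout(child, x + i * w * share, y, w * share, h, out)
        else:
            layout(child, x, y + i * h * share, w, h * share, out)
    return out

mosaic = Tile(axis="x", children=[
    Tile(vis="scatterplot"),
    Tile(axis="y", children=[Tile(vis="parallel-coordinates"),
                             Tile(vis="point-chart")])])
for vis, rect in layout(mosaic, 0, 0, 1280, 800, []):
    print(vis, rect)   # scatterplot gets the left half, etc.
```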
30. | Niklas Elmqvist, Pourang Irani (2013): Ubiquitous Analytics: Interacting with Big Data Anywhere, Anytime. IEEE Computer, 46 (4), pp. 86–89, 2013. (Type: Article | Abstract | Links | BibTeX) @article{Elmqvist2013, title = {Ubiquitous Analytics: Interacting with Big Data Anywhere, Anytime}, author = {Niklas Elmqvist and Pourang Irani}, url = {http://www.umiacs.umd.edu/~elm/projects/ubilytics/ubilytics.pdf, Paper}, year = {2013}, date = {2013-01-01}, journal = {IEEE Computer}, volume = {46}, number = {4}, pages = {86--89}, abstract = {With more than 4 billion mobile devices in the world today, mobile computing is quickly becoming the universal computational platform of the world. Building on this new wave of mobile devices are personal computing activities such as microblogging, social networking, and photo sharing, which are intrinsically mobile phenomena that occur while on-the-go. Mobility is now propagating to more professional activities such as data analytics, which need no longer be restricted to the workplace. In fact, the rise of big data increasingly demands that we be able to access data resources anytime and anywhere, whether to support decisions and activities for travel, telecommuting, or distributed teamwork. In other words, it is high time to fully realize Mark Weiser’s vision of ubiquitous computing in the realm of data analytics.}, keywords = {} } With more than 4 billion mobile devices in the world today, mobile computing is quickly becoming the universal computational platform of the world. Building on this new wave of mobile devices are personal computing activities such as microblogging, social networking, and photo sharing, which are intrinsically mobile phenomena that occur while on-the-go. Mobility is now propagating to more professional activities such as data analytics, which need no longer be restricted to the workplace. In fact, the rise of big data increasingly demands that we be able to access data resources anytime and anywhere, whether to support decisions and activities for travel, telecommuting, or distributed teamwork. In other words, it is high time to fully realize Mark Weiser’s vision of ubiquitous computing in the realm of data analytics. |
29. | Sohaib Ghani, Bumchul Kwon, Seungyoon Lee, Ji-Soo Yi, Niklas Elmqvist (2013): Visual Analytics for Multimodal Social Network Analysis: A Design Study with Social Scientists. IEEE Transactions on Visualization and Computer Graphics, 19 (12), pp. 2032–2041, 2013. (Type: Article | Abstract | Links | BibTeX) @article{Ghani2013, title = {Visual Analytics for Multimodal Social Network Analysis: A Design Study with Social Scientists}, author = {Sohaib Ghani and Bumchul Kwon and Seungyoon Lee and Ji-Soo Yi and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/mmgraph/mmgraph.pdf}, year = {2013}, date = {2013-01-01}, journal = {IEEE Transactions on Visualization and Computer Graphics}, volume = {19}, number = {12}, pages = {2032--2041}, abstract = {Social network analysis (SNA) is becoming increasingly concerned not only with actors and their relations, but also with distinguishing between different types of such entities. For example, social scientists may want to investigate asymmetric relations in organizations with strict chains of command, or incorporate non-actors such as conferences and projects when analyzing co-authorship patterns. Multimodal social networks are those where actors and relations belong to different types, or modes, and multimodal social network analysis (mSNA) is accordingly SNA for such networks. In this paper, we present a design study that we conducted with several social scientist collaborators on how to support mSNA using visual analytics tools. Based on an open-ended, formative design process, we devised a visual representation called parallel node-link bands (PNLBs) that splits modes into separate bands and renders connections between adjacent ones, similar to the list view in Jigsaw. We then used the tool in a qualitative evaluation involving five social scientists whose feedback informed a second design phase that incorporated additional network metrics. Finally, we conducted a second qualitative evaluation with our social scientist collaborators that provided further insights on the utility of the PNLBs representation and the potential of visual analytics for mSNA.}, keywords = {} } Social network analysis (SNA) is becoming increasingly concerned not only with actors and their relations, but also with distinguishing between different types of such entities. For example, social scientists may want to investigate asymmetric relations in organizations with strict chains of command, or incorporate non-actors such as conferences and projects when analyzing co-authorship patterns. Multimodal social networks are those where actors and relations belong to different types, or modes, and multimodal social network analysis (mSNA) is accordingly SNA for such networks. In this paper, we present a design study that we conducted with several social scientist collaborators on how to support mSNA using visual analytics tools. Based on an open-ended, formative design process, we devised a visual representation called parallel node-link bands (PNLBs) that splits modes into separate bands and renders connections between adjacent ones, similar to the list view in Jigsaw. We then used the tool in a qualitative evaluation involving five social scientists whose feedback informed a second design phase that incorporated additional network metrics. Finally, we conducted a second qualitative evaluation with our social scientist collaborators that provided further insights on the utility of the PNLBs representation and the potential of visual analytics for mSNA. |
28. | Waqas Javed, Niklas Elmqvist (2013): Stack Zooming for Multi-Focus Interaction in Skewed-Aspect Visual Spaces. IEEE Transactions on Visualization and Computer Graphics, 19 (8), pp. 1362–1374, 2013. (Type: Article | Abstract | Links | BibTeX) @article{Javed2013b, title = {Stack Zooming for Multi-Focus Interaction in Skewed-Aspect Visual Spaces}, author = {Waqas Javed and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/stackzoom/stackzoom-journal.pdf, Paper}, year = {2013}, date = {2013-01-01}, journal = {IEEE Transactions on Visualization and Computer Graphics}, volume = {19}, number = {8}, pages = {1362--1374}, abstract = {Many 2D visual spaces have a virtually one-dimensional nature with very high aspect ratio between the dimensions: examples include time-series data, multimedia data such as sound or video, text documents, and bipartite graphs. Common among these is that the space can become very large, e.g., temperature measurements could span a long time period, surveillance video could cover entire days or weeks, and documents can have thousands of pages. Many analysis tasks for such spaces require several foci while retaining context and distance awareness. In this extended version of our IEEE PacificVis 2010 paper, we introduce a method for supporting this kind of multi-focus interaction that we call stack zooming. The approach is based on building hierarchies of 1D strips stacked on top of each other, where each subsequent stack represents a higher zoom level, and sibling strips represent branches in the exploration. Correlation graphics show the relation between stacks and strips of different levels, providing context and distance awareness for the foci. The zoom hierarchies can also be used as graphical histories and for communicating insights to stakeholders, and can be further extended with annotation and integrated statistics.}, keywords = {} } Many 2D visual spaces have a virtually one-dimensional nature with very high aspect ratio between the dimensions: examples include time-series data, multimedia data such as sound or video, text documents, and bipartite graphs. Common among these is that the space can become very large, e.g., temperature measurements could span a long time period, surveillance video could cover entire days or weeks, and documents can have thousands of pages. Many analysis tasks for such spaces require several foci while retaining context and distance awareness. In this extended version of our IEEE PacificVis 2010 paper, we introduce a method for supporting this kind of multi-focus interaction that we call stack zooming. The approach is based on building hierarchies of 1D strips stacked on top of each other, where each subsequent stack represents a higher zoom level, and sibling strips represent branches in the exploration. Correlation graphics show the relation between stacks and strips of different levels, providing context and distance awareness for the foci. The zoom hierarchies can also be used as graphical histories and for communicating insights to stakeholders, and can be further extended with annotation and integrated statistics. |
27. | Waqas Javed, Niklas Elmqvist (2013): ExPlates: Spatializing Interactive Analysis to Scaffold Visual Exploration. Computer Graphics Forum, 32 (2), pp. 441–450, 2013. (Type: Article | Abstract | Links | BibTeX) @article{Javed2013, title = {ExPlates: Spatializing Interactive Analysis to Scaffold Visual Exploration}, author = {Waqas Javed and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/explates/explates.pdf, Paper http://www.slideshare.net/NickElm/ex-plates-online, Slides https://www.youtube.com/watch?v=UNhlhFUcDDo, Youtube Video}, year = {2013}, date = {2013-01-01}, journal = {Computer Graphics Forum}, volume = {32}, number = {2}, pages = {441--450}, abstract = {Visual exploration involves using visual representations to investigate data where the goals of the process are unclear and poorly defined. However, this often places unduly high cognitive load on the user, particularly in terms of keeping track of multiple investigative branches, remembering earlier results, and correlating between different views. We propose a new methodology for automatically spatializing the individual steps in visual exploration onto a large visual canvas, allowing users to easily recall, reflect, and assess their progress. We also present a web-based implementation of our methodology called ExPlatesJS where users can manipulate multidimensional data in their browsers, automatically building visual queries as they explore the data.}, keywords = {} } Visual exploration involves using visual representations to investigate data where the goals of the process are unclear and poorly defined. However, this often places unduly high cognitive load on the user, particularly in terms of keeping track of multiple investigative branches, remembering earlier results, and correlating between different views. We propose a new methodology for automatically spatializing the individual steps in visual exploration onto a large visual canvas, allowing users to easily recall, reflect, and assess their progress. We also present a web-based implementation of our methodology called ExPlatesJS where users can manipulate multidimensional data in their browsers, automatically building visual queries as they explore the data. |
2012 | |
26. | Shehzad Afzal, Ross Maciejewski, Yun Jang, Niklas Elmqvist, David Ebert (2012): Spatial Text Visualization Using Automatic Typographic Maps. IEEE Transactions on Visualization and Computer Graphics (Proc. Vis/InfoVis 2012), 18 (12), pp. 2556-2564, 2012. (Type: Article | Abstract | Links | BibTeX) @article{Afzal2012, title = {Spatial Text Visualization Using Automatic Typographic Maps}, author = {Shehzad Afzal and Ross Maciejewski and Yun Jang and Niklas Elmqvist and David Ebert}, url = {http://www.umiacs.umd.edu/~elm/projects/typomapvis/typomapvis.pdf}, year = {2012}, date = {2012-01-01}, journal = {IEEE Transactions on Visualization and Computer Graphics (Proc. Vis/InfoVis 2012)}, volume = {18}, number = {12}, pages = {2556-2564}, abstract = {We present a method for automatically building typographic maps that merge text and spatial data into a visual representation where text alone forms the graphical features. We further show how to use this approach to visualize spatial data such as traffic density, crime rate, or demographic data. The technique accepts a vector representation of a geographic map and spatializes the textual labels in the space onto polylines and polygons based on user-defined visual attributes and constraints. Our sample implementation runs as a Web service, spatializing shape files from the OpenStreetMap project into typographic maps for any region.}, keywords = {} } We present a method for automatically building typographic maps that merge text and spatial data into a visual representation where text alone forms the graphical features. We further show how to use this approach to visualize spatial data such as traffic density, crime rate, or demographic data. The technique accepts a vector representation of a geographic map and spatializes the textual labels in the space onto polylines and polygons based on user-defined visual attributes and constraints. Our sample implementation runs as a Web service, spatializing shape files from the OpenStreetMap project into typographic maps for any region. |
25. | Brian Bowman, Niklas Elmqvist, T.J. Jankun-Kelly (2012): Toward Visualization for Games: Theory, Design Space, and Patterns. IEEE Transactions on Visualization and Computer Graphics, 18 (11), pp. 1956-1968, 2012. (Type: Article | Abstract | Links | BibTeX) @article{Bowman2012, title = {Toward Visualization for Games: Theory, Design Space, and Patterns}, author = {Brian Bowman and Niklas Elmqvist and T.J. Jankun-Kelly}, url = {http://www.umiacs.umd.edu/~elm/projects/visgames/visgames.pdf}, year = {2012}, date = {2012-01-01}, journal = {IEEE Transactions on Visualization and Computer Graphics}, volume = {18}, number = {11}, pages = {1956-1968}, abstract = {Electronic games are starting to incorporate in-game telemetry that collects data about player, team, and community performance on a massive scale, and as data begins to accumulate, so does the demand for effectively analyzing this data. In this paper, we use examples from both old and new games of different genres to explore the theory and design space of visualization for games. Drawing on these examples, we define a design space for this novel research topic and use it to formulate design patterns for how to best apply visualization technology to games. We then discuss the implications that this new framework will potentially have on the design and development of game and visualization technology in the future.}, keywords = {} } Electronic games are starting to incorporate in-game telemetry that collects data about player, team, and community performance on a massive scale, and as data begins to accumulate, so does the demand for effectively analyzing this data. In this paper, we use examples from both old and new games of different genres to explore the theory and design space of visualization for games. Drawing on these examples, we define a design space for this novel research topic and use it to formulate design patterns for how to best apply visualization technology to games. We then discuss the implications that this new framework will potentially have on the design and development of game and visualization technology in the future. |
24. | Niklas Elmqvist, David Ebert (2012): Leveraging Multidisciplinarity in a Visual Analytics Graduate Course. IEEE Computer Graphics and Applications, 32 (3), pp. 84–87, 2012. (Type: Article | Abstract | Links | BibTeX) @article{Elmqvist2012, title = {Leveraging Multidisciplinarity in a Visual Analytics Graduate Course}, author = {Niklas Elmqvist and David Ebert}, url = {http://www.umiacs.umd.edu/~elm/projects/va-education/va-education.pdf}, year = {2012}, date = {2012-01-01}, journal = {IEEE Computer Graphics and Applications}, volume = {32}, number = {3}, pages = {84--87}, abstract = {There is a growing demand in engineering, business, science, research, and industry for students with visual analytics expertise, but teaching visual analytics is challenging due to the multidisciplinary nature of the topic matter, the diverse backgrounds of the students, and the corresponding requirements on the instructor. We report some best practices from our experience teaching several offerings of a visual analytics graduate course at Purdue University where we leveraged these multidisciplinary challenges to our advantage instead of attempting to mitigate them.}, keywords = {} } There is a growing demand in engineering, business, science, research, and industry for students with visual analytics expertise, but teaching visual analytics is challenging due to the multidisciplinary nature of the topic matter, the diverse backgrounds of the students, and the corresponding requirements on the instructor. We report some best practices from our experience teaching several offerings of a visual analytics graduate course at Purdue University where we leveraged these multidisciplinary challenges to our advantage instead of attempting to mitigate them. |
23. | Sohaib Ghani, Niklas Elmqvist, Ji-Soo Yi (2012): Perception of Animated Node-Link Diagrams for Dynamic Graphs. Computer Graphics Forum, 31 (3), pp. 1205–1214, 2012. (Type: Article | Abstract | Links | BibTeX) @article{3Ghani2012, title = {Perception of Animated Node-Link Diagrams for Dynamic Graphs}, author = {Sohaib Ghani and Niklas Elmqvist and Ji-Soo Yi}, url = {http://www.umiacs.umd.edu/~elm/projects/dyngraph/dyngraph.pdf}, year = {2012}, date = {2012-01-01}, journal = {Computer Graphics Forum}, volume = {31}, number = {3}, pages = {1205--1214}, abstract = {Effective visualization of dynamic graphs remains an open research topic, and many state-of-the-art tools use animated node-link diagrams for this purpose. Despite its intuitiveness, the effectiveness of animation in node-link diagrams has been questioned, and several empirical studies have shown that animation is not necessarily superior to static visualizations. However, the exact mechanics of perceiving animated node-link diagrams are still unclear. In this paper, we study the impact of different dynamic graph metrics on user perception of the animation. After deriving candidate visual graph metrics, we perform an exploratory user study where participants are asked to reconstruct the event sequence in animated node-link diagrams. Based on these findings, we conduct a second user study where we investigate the most important visual metrics in depth. Our findings show that node speed and target separation are prominent visual metrics to predict the performance of event sequencing tasks.}, keywords = {} } Effective visualization of dynamic graphs remains an open research topic, and many state-of-the-art tools use animated node-link diagrams for this purpose. Despite its intuitiveness, the effectiveness of animation in node-link diagrams has been questioned, and several empirical studies have shown that animation is not necessarily superior to static visualizations. However, the exact mechanics of perceiving animated node-link diagrams are still unclear. In this paper, we study the impact of different dynamic graph metrics on user perception of the animation. After deriving candidate visual graph metrics, we perform an exploratory user study where participants are asked to reconstruct the event sequence in animated node-link diagrams. Based on these findings, we conduct a second user study where we investigate the most important visual metrics in depth. Our findings show that node speed and target separation are prominent visual metrics to predict the performance of event sequencing tasks. |
22. | KyungTae Kim, Niklas Elmqvist (2012): Embodied Lenses for Collaborative Visual Queries on Tabletop Displays. Information Visualization, 11 (4), pp. 336–355, 2012. (Type: Article | Abstract | Links | BibTeX) @article{Kim2012, title = {Embodied Lenses for Collaborative Visual Queries on Tabletop Displays}, author = {KyungTae Kim and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/emblens/emblens.pdf}, year = {2012}, date = {2012-01-01}, journal = {Information Visualization}, volume = {11}, number = {4}, pages = {336--355}, abstract = {We introduce embodied lenses for visual queries on tabletop surfaces using physical interaction. The lenses are simply thin sheets of paper or transparent foil decorated with fiducial markers, allowing them to be tracked by a diffuse illumination tabletop display. The physical affordance of these embodied lenses allow them to be overlapped, causing composition in the underlying virtual space. We perform a formative evaluation to study users’ conceptual models for overlapping physical lenses. This is followed by a quantitative user study comparing performance for embodied versus purely virtual lenses. Results show that embodied lenses are equally efficient compared to purely virtual lenses, and also support tactile and eyes-free interaction. We then present several examples of the technique, including image layers, map layers, image manipulation, and multidimensional data visualization. The technique is simple, cheap, and can be integrated into many existing tabletop displays.}, keywords = {} } We introduce embodied lenses for visual queries on tabletop surfaces using physical interaction. The lenses are simply thin sheets of paper or transparent foil decorated with fiducial markers, allowing them to be tracked by a diffuse illumination tabletop display. The physical affordance of these embodied lenses allow them to be overlapped, causing composition in the underlying virtual space. We perform a formative evaluation to study users’ conceptual models for overlapping physical lenses. This is followed by a quantitative user study comparing performance for embodied versus purely virtual lenses. Results show that embodied lenses are equally efficient compared to purely virtual lenses, and also support tactile and eyes-free interaction. We then present several examples of the technique, including image layers, map layers, image manipulation, and multidimensional data visualization. The technique is simple, cheap, and can be integrated into many existing tabletop displays. |
21. | Bumchul Kwon, Waqas Javed, Sohaib Ghani, Niklas Elmqvist, Ji-Soo Yi, David Ebert (2012): Evaluating the Role of Time in Investigative Analysis of Document Collections. IEEE Transactions on Visualization and Computer Graphics, 18 (11), pp. 1992–2004, 2012. (Type: Article | Abstract | Links | BibTeX) @article{Kwon2012, title = {Evaluating the Role of Time in Investigative Analysis of Document Collections}, author = {Bumchul Kwon and Waqas Javed and Sohaib Ghani and Niklas Elmqvist and Ji-Soo Yi and David Ebert}, url = {http://www.umiacs.umd.edu/~elm/projects/time-analysis/time-analysis.pdf}, year = {2012}, date = {2012-01-01}, journal = {IEEE Transactions on Visualization and Computer Graphics}, volume = {18}, number = {11}, pages = {1992--2004}, abstract = {Time is a universal and essential aspect of data in any investigative analysis. It helps analysts establish causality, build storylines from evidence, and reject infeasible hypotheses. For this reason, many investigative analysis tools provide visual representations designed for making sense of temporal data. However, the field of visual analytics still needs more evidence explaining how temporal visualization actually aids the analysis process, as well as design recommendations for how to build these visualizations. To fill this gap, we conducted an insight-based qualitative study to investigate the influence of temporal visualization on investigative analysis. We found that visualizing temporal information helped participants externalize chains of events. Another contribution of our work is the lightweight evaluation approach used to collect, visualize, and analyze insight.}, keywords = {} } Time is a universal and essential aspect of data in any investigative analysis. It helps analysts establish causality, build storylines from evidence, and reject infeasible hypotheses. For this reason, many investigative analysis tools provide visual representations designed for making sense of temporal data. However, the field of visual analytics still needs more evidence explaining how temporal visualization actually aids the analysis process, as well as design recommendations for how to build these visualizations. To fill this gap, we conducted an insight-based qualitative study to investigate the influence of temporal visualization on investigative analysis. We found that visualizing temporal information helped participants externalize chains of events. Another contribution of our work is the lightweight evaluation approach used to collect, visualize, and analyze insight. |
20. | Krishna Madhavan, Mihaela Vorvoreanu, Niklas Elmqvist, Aditya Johri, Naren Ramakrishnan, G. Alan Wang, Ann McKenna (2012): Portfolio Mining. IEEE Computer, 45 (10), pp. 95–99, 2012. (Type: Article | Abstract | Links | BibTeX) @article{Madhavan2012, title = {Portfolio Mining}, author = {Krishna Madhavan and Mihaela Vorvoreanu and Niklas Elmqvist and Aditya Johri and Naren Ramakrishnan and G. Alan Wang and Ann McKenna}, url = {https://ieeexplore.ieee.org/document/6329888, IEEE Xplore}, year = {2012}, date = {2012-01-01}, journal = {IEEE Computer}, volume = {45}, number = {10}, pages = {95--99}, abstract = {Portfolio mining facilitates the creation of actionable knowledge, catalyzes innovations, and sustains research communities.}, keywords = {} } Portfolio mining facilitates the creation of actionable knowledge, catalyzes innovations, and sustains research communities. |
2011 | |
19. | Niklas Elmqvist, Pierre Dragicevic, Jean-Daniel Fekete (2011): Color Lens: Adaptive Color Scale Optimization for Visual Exploration. IEEE Transactions on Visualization and Computer Graphics, 17 (6), pp. 795-807, 2011. (Type: Article | Abstract | Links | BibTeX) @article{Elmqvist2011b, title = {Color Lens: Adaptive Color Scale Optimization for Visual Exploration}, author = {Niklas Elmqvist and Pierre Dragicevic and Jean-Daniel Fekete}, url = {http://www.umiacs.umd.edu/~elm/projects/colorlens/colorlens.pdf}, year = {2011}, date = {2011-06-01}, journal = {IEEE Transactions on Visualization and Computer Graphics}, volume = {17}, number = {6}, pages = {795-807}, abstract = {Visualization applications routinely map quantitative attributes to color using color scales. Although color is an effective visualization channel, it is limited by both display hardware and the human visual system. We propose a new interaction technique that overcomes these limitations by dynamically optimizing color scales based on a set of sampling lenses. The technique inspects the lens contents in data space, optimizes the initial color scale, and then renders the contents of the lens to the screen using the modified color scale. We present two prototype implementations of this pipeline and describe several case studies involving both information visualization and image inspection applications. We validate our approach with two mutually linked and complementary user studies comparing the Color Lens with explicit contrast control for visual search.}, keywords = {} } Visualization applications routinely map quantitative attributes to color using color scales. Although color is an effective visualization channel, it is limited by both display hardware and the human visual system. We propose a new interaction technique that overcomes these limitations by dynamically optimizing color scales based on a set of sampling lenses. The technique inspects the lens contents in data space, optimizes the initial color scale, and then renders the contents of the lens to the screen using the modified color scale. We present two prototype implementations of this pipeline and describe several case studies involving both information visualization and image inspection applications. We validate our approach with two mutually linked and complementary user studies comparing the Color Lens with explicit contrast control for visual search. |
18. | Sohaib Ghani, Nathalie Henry Riche, Niklas Elmqvist (2011): Dynamic Insets for Context-Aware Graph Navigation. Computer Graphics Forum, 30 (3), pp. 861-870, 2011. (Type: Article | Abstract | Links | BibTeX) @article{Ghani2011, title = {Dynamic Insets for Context-Aware Graph Navigation}, author = {Sohaib Ghani and Nathalie Henry Riche and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/dyninsets/dyninsets.pdf}, year = {2011}, date = {2011-06-01}, journal = {Computer Graphics Forum}, volume = {30}, number = {3}, pages = {861-870}, abstract = {Maintaining both overview and detail while navigating in graphs, such as road networks, airline route maps, or social networks, is difficult, especially when targets of interest are located far apart. We present a navigation technique called Dynamic Insets that provides context awareness for graph navigation. Dynamic insets utilize the topological structure of the network to draw a visual inset for off-screen nodes that shows a portion of the surrounding area for links leaving the edge of the screen. We implement dynamic insets for general graph navigation as well as geographical maps. We also present results from a set of user studies that show that our technique is more efficient than most of the existing techniques for graph navigation in different networks.}, keywords = {} } Maintaining both overview and detail while navigating in graphs, such as road networks, airline route maps, or social networks, is difficult, especially when targets of interest are located far apart. We present a navigation technique called Dynamic Insets that provides context awareness for graph navigation. Dynamic insets utilize the topological structure of the network to draw a visual inset for off-screen nodes that shows a portion of the surrounding area for links leaving the edge of the screen. We implement dynamic insets for general graph navigation as well as geographical maps. We also present results from a set of user studies that show that our technique is more efficient than most of the existing techniques for graph navigation in different networks. |
17. | Niklas Elmqvist, Andrew Vande Moere, Hans-Christian Jetter, Daniel Cernea, Harald Reiterer, T.-J. Jankun-Kelly (2011): Fluid Interaction for Information Visualization. Information Visualization, 10 (4), pp. 327-340, 2011. (Type: Article | Abstract | Links | BibTeX) @article{Elmqvist2011, title = {Fluid Interaction for Information Visualization}, author = {Niklas Elmqvist and Andrew Vande Moere and Hans-Christian Jetter and Daniel Cernea and Harald Reiterer and T.-J. Jankun-Kelly}, url = {http://www.umiacs.umd.edu/~elm/projects/fluidity/fluidity.pdf}, year = {2011}, date = {2011-01-01}, journal = {Information Visualization}, volume = {10}, number = {4}, pages = {327-340}, abstract = {Despite typically receiving little emphasis in visualization research, interaction in visualization is the catalyst for the user's dialogue with the data, and, ultimately, the user’s actual understanding and insight into this data. There are many possible reasons for this skewed balance between the visual and interactive aspects of a visualization. One reason is that interaction is an intangible concept that is difficult to design, quantify, and evaluate. Unlike for visual design, there are few examples that show visualization practitioners and researchers how to best design the interaction for a new visualization. In this paper, we attempt to address this issue by collecting examples of visualizations with "best-in-class" interaction and using them to extract practical design guidelines for future designers and researchers. We call this concept fluid interaction, and we propose an operational definition in terms of the direct manipulation and embodied interaction paradigms, the psychological concept of "flow", and Norman’s gulfs of execution and evaluation.}, keywords = {} } Despite typically receiving little emphasis in visualization research, interaction in visualization is the catalyst for the user's dialogue with the data, and, ultimately, the user’s actual understanding and insight into this data. There are many possible reasons for this skewed balance between the visual and interactive aspects of a visualization. One reason is that interaction is an intangible concept that is difficult to design, quantify, and evaluate. Unlike for visual design, there are few examples that show visualization practitioners and researchers how to best design the interaction for a new visualization. In this paper, we attempt to address this issue by collecting examples of visualizations with "best-in-class" interaction and using them to extract practical design guidelines for future designers and researchers. We call this concept fluid interaction, and we propose an operational definition in terms of the direct manipulation and embodied interaction paradigms, the psychological concept of "flow", and Norman’s gulfs of execution and evaluation. |
16. | Petra Isenberg, Niklas Elmqvist, Daniel Cernea, Jean Scholtz, Kwan-Liu Ma, Hans Hagen (2011): Collaborative Visualization: Definition, Challenges, and Research Agenda. Information Visualization, 10 (4), pp. 310-326, 2011. (Type: Article | Abstract | Links | BibTeX) @article{Isenberg2011, title = {Collaborative Visualization: Definition, Challenges, and Research Agenda}, author = {Petra Isenberg and Niklas Elmqvist and Daniel Cernea and Jean Scholtz and Kwan-Liu Ma and Hans Hagen}, url = {http://www.umiacs.umd.edu/~elm/projects/collabvis/collabvis.pdf}, year = {2011}, date = {2011-01-01}, journal = {Information Visualization}, volume = {10}, number = {4}, pages = {310-326}, abstract = {The conflux of two growing areas of technology---collaboration and visualization---into a new research direction, collaborative visualization, provides new research challenges. Technology now allows us to easily connect and collaborate with one another---in settings as diverse as over networked computers, across mobile devices, or using shared displays such as interactive walls and tabletop surfaces. Digital information is now regularly accessed by multiple people in order to share information, to view it together, to analyze it, or to form decisions. Visualizations are used to deal more effectively with large amounts of information while interactive visualizations allow users to explore the underlying data. While researchers face many challenges in collaboration and in visualization, the emergence of collaborative visualization poses additional challenges but is also an exciting opportunity to reach new audiences and applications for visualization tools and techniques. The purpose of this article is (1) to provide a definition, clear scope, and overview of the evolving field of collaborative visualization, (2) to help pinpoint the unique focus of collaborative visualization with its specific aspects, challenges, and requirements within the intersection of general computer-supported cooperative work (CSCW) and visualization research, and (3) to draw attention to important future research questions to be addressed by the community. We conclude by discussing a research agenda for future work on collaborative visualization and urge for a new generation of visualization tools that are designed with collaboration in mind from their very inception.}, keywords = {} } The conflux of two growing areas of technology---collaboration and visualization---into a new research direction, collaborative visualization, provides new research challenges. Technology now allows us to easily connect and collaborate with one another---in settings as diverse as over networked computers, across mobile devices, or using shared displays such as interactive walls and tabletop surfaces. Digital information is now regularly accessed by multiple people in order to share information, to view it together, to analyze it, or to form decisions. Visualizations are used to deal more effectively with large amounts of information while interactive visualizations allow users to explore the underlying data. While researchers face many challenges in collaboration and in visualization, the emergence of collaborative visualization poses additional challenges but is also an exciting opportunity to reach new audiences and applications for visualization tools and techniques. 
The purpose of this article is (1) to provide a definition, clear scope, and overview of the evolving field of collaborative visualization, (2) to help pinpoint the unique focus of collaborative visualization with its specific aspects, challenges, and requirements within the intersection of general computer-supported cooperative work (CSCW) and visualization research, and (3) to draw attention to important future research questions to be addressed by the community. We conclude by discussing a research agenda for future work on collaborative visualization and urge for a new generation of visualization tools that are designed with collaboration in mind from their very inception. |
2010 | |
15. | Anastasia Bezerianos, Fanny Chevalier, Pierre Dragicevic, Niklas Elmqvist, Jean-Daniel Fekete (2010): GraphDice: A System for Exploring Multivariate Social Networks. Computer Graphics Forum, 29 (3), pp. 863–872, 2010. (Type: Article | Abstract | Links | BibTeX) @article{Bezerianos2010, title = {GraphDice: A System for Exploring Multivariate Social Networks}, author = {Anastasia Bezerianos and Fanny Chevalier and Pierre Dragicevic and Niklas Elmqvist and Jean-Daniel Fekete}, url = {http://www.umiacs.umd.edu/~elm/projects/graphdice/graphdice.pdf}, year = {2010}, date = {2010-01-01}, journal = {Computer Graphics Forum}, volume = {29}, number = {3}, pages = {863--872}, abstract = {Social networks collected by historians or sociologists typically have a large number of actors and edge attributes. Applying social network analysis (SNA) algorithms to these networks produces additional attributes such as degree, centrality, and clustering coefficients. Understanding the effects of this plethora of attributes is one of the main challenges of multivariate SNA. We present the design of GraphDice, a multivariate network visualization system for exploring the attribute space of edges and actors. GraphDice builds upon the ScatterDice system for its main multidimensional navigation paradigm, and extends it with novel mechanisms to support network exploration in general and SNA tasks in particular. Novel mechanisms include visualization of attributes of interval type and projection of numerical edge attributes to node attributes. We show how these extensions to the original ScatterDice system allow us to support complex visual analysis tasks on networks with hundreds of actors and up to 30 attributes, while providing a simple and consistent interface for interacting with network data.}, keywords = {} } Social networks collected by historians or sociologists typically have a large number of actors and edge attributes. Applying social network analysis (SNA) algorithms to these networks produces additional attributes such as degree, centrality, and clustering coefficients. Understanding the effects of this plethora of attributes is one of the main challenges of multivariate SNA. We present the design of GraphDice, a multivariate network visualization system for exploring the attribute space of edges and actors. GraphDice builds upon the ScatterDice system for its main multidimensional navigation paradigm, and extends it with novel mechanisms to support network exploration in general and SNA tasks in particular. Novel mechanisms include visualization of attributes of interval type and projection of numerical edge attributes to node attributes. We show how these extensions to the original ScatterDice system allow us to support complex visual analysis tasks on networks with hundreds of actors and up to 30 attributes, while providing a simple and consistent interface for interacting with network data. |
14. | Niklas Elmqvist, Nathalie Henry, Yann Riche, Jean-Daniel Fekete (2010): Mélange: Space Folding for Visual Exploration. IEEE Transactions on Visualization and Computer Graphics, 16 (3), pp. 468–483, 2010. (Type: Article | Abstract | Links | BibTeX) @article{Elmqvist2010a, title = {Mélange: Space Folding for Visual Exploration}, author = {Niklas Elmqvist and Nathalie Henry and Yann Riche and Jean-Daniel Fekete}, url = {http://www.umiacs.umd.edu/~elm/projects/melange/melange-journal.pdf}, year = {2010}, date = {2010-01-01}, journal = {IEEE Transactions on Visualization and Computer Graphics}, volume = {16}, number = {3}, pages = {468--483}, abstract = {Navigating in large geometric spaces---such as maps, social networks, or long documents---typically requires a sequence of pan and zoom actions. However, this strategy is often ineffective and cumbersome, especially when trying to study and compare several distant objects. We propose a new distortion technique that folds the intervening space to guarantee visibility of multiple focus regions. The folds themselves show contextual information and support unfolding and paging interactions. We conducted a study comparing the space-folding technique to existing approaches, and found that participants performed significantly better with the new technique. We also describe how to implement this distortion technique, and give an in-depth case study on how to apply it to the visualization of large-scale 1D time-series data.}, keywords = {} } Navigating in large geometric spaces---such as maps, social networks, or long documents---typically requires a sequence of pan and zoom actions. However, this strategy is often ineffective and cumbersome, especially when trying to study and compare several distant objects. We propose a new distortion technique that folds the intervening space to guarantee visibility of multiple focus regions. The folds themselves show contextual information and support unfolding and paging interactions. We conducted a study comparing the space-folding technique to existing approaches, and found that participants performed significantly better with the new technique. We also describe how to implement this distortion technique, and give an in-depth case study on how to apply it to the visualization of large-scale 1D time-series data. |
13. | Niklas Elmqvist, Jean-Daniel Fekete (2010): Hierarchical Aggregation for Information Visualization: Overview, Techniques and Design Guidelines. IEEE Transactions on Visualization and Computer Graphics, 16 (3), pp. 439–454, 2010. (Type: Article | Abstract | Links | BibTeX) @article{Elmqvist2010b, title = {Hierarchical Aggregation for Information Visualization: Overview, Techniques and Design Guidelines}, author = {Niklas Elmqvist and Jean-Daniel Fekete}, url = {http://www.umiacs.umd.edu/~elm/projects/hieragg/hieragg.pdf}, year = {2010}, date = {2010-01-01}, journal = {IEEE Transactions on Visualization and Computer Graphics}, volume = {16}, number = {3}, pages = {439--454}, abstract = {We present a model for building, visualizing, and interacting with multiscale representations of information visualization techniques using hierarchical aggregation. The motivation for this work is to make visual representations more visually scalable and less cluttered. The model allows for augmenting existing techniques with multiscale functionality, as well as for designing new visualization and interaction techniques that conform to this new class of visual representations. We give some examples of how to use the model for standard information visualization techniques such as scatterplots, parallel coordinates, and node-link diagrams, and discuss existing techniques that are based on hierarchical aggregation. This yields a set of design guidelines for aggregated visualizations. We also present a basic vocabulary of interaction techniques suitable for navigating these multiscale visualizations.}, keywords = {} } We present a model for building, visualizing, and interacting with multiscale representations of information visualization techniques using hierarchical aggregation. The motivation for this work is to make visual representations more visually scalable and less cluttered. The model allows for augmenting existing techniques with multiscale functionality, as well as for designing new visualization and interaction techniques that conform to this new class of visual representations. We give some examples of how to use the model for standard information visualization techniques such as scatterplots, parallel coordinates, and node-link diagrams, and discuss existing techniques that are based on hierarchical aggregation. This yields a set of design guidelines for aggregated visualizations. We also present a basic vocabulary of interaction techniques suitable for navigating these multiscale visualizations. |
12. | Waqas Javed, Bryan McDonnel, Niklas Elmqvist (2010): Graphical Perception of Multiple Time Series. IEEE Transactions on Visualization and Computer Graphics (Proc. IEEE InfoVis 2010), 16 (6), pp. 927–934, 2010. (Type: Article | Abstract | Links | BibTeX) @article{Javed2010b, title = {Graphical Perception of Multiple Time Series}, author = {Waqas Javed and Bryan McDonnel and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/multilinevis/multilinevis.pdf}, year = {2010}, date = {2010-01-01}, journal = {IEEE Transactions on Visualization and Computer Graphics (Proc. IEEE InfoVis 2010)}, volume = {16}, number = {6}, pages = {927--934}, abstract = {Line graphs have been the visualization of choice for temporal data ever since the days of William Playfair (1759–1823), but realistic temporal analysis tasks often include multiple simultaneous time series. In this work, we explore user performance for comparison, slope, and discrimination tasks for different line graph techniques involving multiple time series. Our results show that techniques that create separate charts for each time series---such as small multiples and horizon graphs---are generally more efficient for comparisons across time series with a large visual span. On the other hand, shared-space techniques---like standard line graphs---are typically more efficient for comparisons over smaller visual spans where the impact of overlap and clutter is reduced.}, keywords = {} } Line graphs have been the visualization of choice for temporal data ever since the days of William Playfair (1759–1823), but realistic temporal analysis tasks often include multiple simultaneous time series. In this work, we explore user performance for comparison, slope, and discrimination tasks for different line graph techniques involving multiple time series. Our results show that techniques that create separate charts for each time series---such as small multiples and horizon graphs---are generally more efficient for comparisons across time series with a large visual span. On the other hand, shared-space techniques---like standard line graphs---are typically more efficient for comparisons over smaller visual spans where the impact of overlap and clutter is reduced. |
11. | Ji-Soo Yi, Niklas Elmqvist, Seungyoon Lee (2010): TimeMatrix: Visualizing Temporal Social Networks Using Interactive Matrix-Based Visualizations. International Journal of Human-Computer Interaction, 26 (11-12), pp. 1031–1051, 2010. (Type: Article | Abstract | Links | BibTeX) @article{Yi2010, title = {TimeMatrix: Visualizing Temporal Social Networks Using Interactive Matrix-Based Visualizations}, author = {Ji-Soo Yi and Niklas Elmqvist and Seungyoon Lee}, url = {https://www.youtube.com/watch?v=PjJOPX_ezzc, Youtube video}, year = {2010}, date = {2010-01-01}, journal = {International Journal of Human-Computer Interaction}, volume = {26}, number = {11-12}, pages = {1031--1051}, abstract = {Visualization plays a crucial role in understanding dynamic social networks at many different levels (i.e., group, subgroup, and individual). Node-link-based visualization techniques are currently widely used for these tasks and have been demonstrated to be effective, but we found that they also have limitations in representing temporal changes, particularly at the individual and subgroup levels. To overcome these limitations, we present a new network visualization technique, called "TimeMatrix," based on a matrix representation. Interaction techniques, such as overlay controls, a temporal range slider, semantic zooming, and integrated network statistical measures, support analysts in studying temporal social networks. To validate our design, we present a user study involving three social scientists analyzing inter-organizational collaboration data. The study demonstrates how TimeMatrix may help analysts gain insights about the temporal aspects of network data that can be subsequently tested with network analytic methods.}, keywords = {} } Visualization plays a crucial role in understanding dynamic social networks at many different levels (i.e., group, subgroup, and individual). Node-link-based visualization techniques are currently widely used for these tasks and have been demonstrated to be effective, but we found that they also have limitations in representing temporal changes, particularly at the individual and subgroup levels. To overcome these limitations, we present a new network visualization technique, called "TimeMatrix," based on a matrix representation. Interaction techniques, such as overlay controls, a temporal range slider, semantic zooming, and integrated network statistical measures, support analysts in studying temporal social networks. To validate our design, we present a user study involving three social scientists analyzing inter-organizational collaboration data. The study demonstrates how TimeMatrix may help analysts gain insights about the temporal aspects of network data that can be subsequently tested with network analytic methods. |
2009 | |
10. | Niklas Elmqvist, Ulf Assarsson, Philippas Tsigas (2009): Dynamic Transparency for 3D Visualization: Design and Evaluation. International Journal of Virtual Reality, 8 (1), pp. 65–78, 2009. (Type: Article | Abstract | Links | BibTeX) @article{Elmqvist2009b, title = {Dynamic Transparency for 3D Visualization: Design and Evaluation}, author = {Niklas Elmqvist and Ulf Assarsson and Philippas Tsigas}, url = {http://www.umiacs.umd.edu/~elm/projects/dyntrans/dyntrans-journal.pdf, Paper https://www.youtube.com/watch?v=77N5KVbbEmQ, Youtube video http://www.slideshare.net/NickElm/employing-dynamic-transparency-for-3d-occlusion-management-design-issues-and-evaluation, Slides}, year = {2009}, date = {2009-01-01}, journal = {International Journal of Virtual Reality}, volume = {8}, number = {1}, pages = {65--78}, abstract = {Recent developments in occlusion management for 3D environments often involve the use of dynamic transparency, or "virtual X-ray vision", to promote target discovery and access in complex 3D worlds. However, there are many different approaches to achieving this effect and their actual utility for the user has yet to be evaluated. Furthermore, the introduction of semitransparent surfaces adds additional visual complexity that may actually have a negative impact on task performance. In this paper, we report on an empirical user study investigating these human aspects of dynamic transparency. Our implementation of the technique is an image-space algorithm built using modern programmable shaders to achieve real-time performance and visually pleasing results. Results from the user study indicate that dynamic transparency provides superior performance for perceptual tasks in terms of both efficiency and correctness. Subjective ratings are also firmly in favor of the method.}, keywords = {} } Recent developments in occlusion management for 3D environments often involve the use of dynamic transparency, or "virtual X-ray vision", to promote target discovery and access in complex 3D worlds. However, there are many different approaches to achieving this effect and their actual utility for the user has yet to be evaluated. Furthermore, the introduction of semitransparent surfaces adds additional visual complexity that may actually have a negative impact on task performance. In this paper, we report on an empirical user study investigating these human aspects of dynamic transparency. Our implementation of the technique is an image-space algorithm built using modern programmable shaders to achieve real-time performance and visually pleasing results. Results from the user study indicate that dynamic transparency provides superior performance for perceptual tasks in terms of both efficiency and correctness. Subjective ratings are also firmly in favor of the method. |
9. | Bryan McDonnel, Niklas Elmqvist (2009): Towards Utilizing GPUs in Information Visualization: A Model and Implementation of Image-Space Operations. IEEE Transactions on Visualization and Computer Graphics, 15 (6), pp. 1105–1112, 2009. (Type: Article | Abstract | Links | BibTeX) @article{McDonnel2009, title = {Towards Utilizing GPUs in Information Visualization: A Model and Implementation of Image-Space Operations}, author = {Bryan McDonnel and Niklas Elmqvist}, url = {http://www.umiacs.umd.edu/~elm/projects/gpuvis/gpuvis.pdf, Paper http://www.slideshare.net/NickElm/towards-utilizing-gpus-in-information-visualization-a-model-and-implementation-of-imagespace-operations, Slides}, year = {2009}, date = {2009-01-01}, journal = {IEEE Transactions on Visualization and Computer Graphics}, volume = {15}, number = {6}, pages = {1105--1112}, abstract = {Modern programmable GPUs represent a vast potential in terms of performance and visual flexibility for information visualization research, but surprisingly few applications even begin to utilize this potential. In this paper, we conjecture that this may be due to the mismatch between the high-level abstract data types commonly visualized in our field, and the low-level floating-point model supported by current GPU shader languages. To help remedy this situation, we present a refinement of the traditional information visualization pipeline that is amenable to implementation using GPU shaders. The refinement consists of a final image-space step in the pipeline where the multivariate data of the visualization is sampled in the resolution of the current view. To concretize the theoretical aspects of this work, we also present a visual programming environment for constructing visualization shaders using a simple drag-and-drop interface. Finally, we give some examples of the use of shaders for well-known visualization techniques.}, keywords = {} } Modern programmable GPUs represent a vast potential in terms of performance and visual flexibility for information visualization research, but surprisingly few applications even begin to utilize this potential. In this paper, we conjecture that this may be due to the mismatch between the high-level abstract data types commonly visualized in our field, and the low-level floating-point model supported by current GPU shader languages. To help remedy this situation, we present a refinement of the traditional information visualization pipeline that is amenable to implementation using GPU shaders. The refinement consists of a final image-space step in the pipeline where the multivariate data of the visualization is sampled in the resolution of the current view. To concretize the theoretical aspects of this work, we also present a visual programming environment for constructing visualization shaders using a simple drag-and-drop interface. Finally, we give some examples of the use of shaders for well-known visualization techniques. |
2008 | |
8. | Niklas Elmqvist, Pierre Dragicevic, Jean-Daniel Fekete (2008): Rolling the Dice: Multidimensional Visual Exploration using Scatterplot Matrix Navigation. IEEE Transactions on Visualization and Computer Graphics, 14 (6), pp. 1141–1148, 2008. (Type: Article | Abstract | Links | BibTeX) @article{Elmqvist2008g, title = {Rolling the Dice: Multidimensional Visual Exploration using Scatterplot Matrix Navigation}, author = {Niklas Elmqvist and Pierre Dragicevic and Jean-Daniel Fekete}, url = {http://www.umiacs.umd.edu/~elm/projects/scatterdice/scatterdice.pdf, Paper https://www.youtube.com/watch?v=E1birsp9iYk, Youtube video http://www.slideshare.net/NickElm/rolling-the-dice-multidimensional-visual-exploration-using-scatterplot-matrix-navigation, Slides}, year = {2008}, date = {2008-01-01}, journal = {IEEE Transactions on Visualization and Computer Graphics}, volume = {14}, number = {6}, pages = {1141--1148}, abstract = {Scatterplots remain one of the most popular and widely-used visual representations for multidimensional data due to their simplicity, familiarity and visual clarity, even if they lack some of the flexibility and visual expressiveness of newer multidimensional visualization techniques. This paper presents new interactive methods to explore multidimensional data using scatterplots. This exploration is performed using a matrix of scatterplots that gives an overview of the possible configurations, thumbnails of the scatterplots, and support for interactive navigation in the multidimensional space. Transitions between scatterplots are performed as animated rotations in 3D space, somewhat akin to rolling dice. Users can iteratively build queries using bounding volumes in the dataset, sculpting the query from different viewpoints to become more and more refined. Furthermore, the dimensions in the navigation space can be reordered, manually or automatically, to highlight salient correlations and differences among them. An example scenario presents the interaction techniques supporting smooth and effortless visual exploration of multidimensional datasets.}, keywords = {} } Scatterplots remain one of the most popular and widely-used visual representations for multidimensional data due to their simplicity, familiarity and visual clarity, even if they lack some of the flexibility and visual expressiveness of newer multidimensional visualization techniques. This paper presents new interactive methods to explore multidimensional data using scatterplots. This exploration is performed using a matrix of scatterplots that gives an overview of the possible configurations, thumbnails of the scatterplots, and support for interactive navigation in the multidimensional space. Transitions between scatterplots are performed as animated rotations in 3D space, somewhat akin to rolling dice. Users can iteratively build queries using bounding volumes in the dataset, sculpting the query from different viewpoints to become more and more refined. Furthermore, the dimensions in the navigation space can be reordered, manually or automatically, to highlight salient correlations and differences among them. An example scenario presents the interaction techniques supporting smooth and effortless visual exploration of multidimensional datasets. |
7. | Niklas Elmqvist, Philippas Tsigas (2008): A Taxonomy of 3D Occlusion Management for Visualization. IEEE Transactions on Visualization and Computer Graphics, 14 (5), pp. 1095–1109, 2008. (Type: Article | Abstract | Links | BibTeX) @article{Elmqvist2008e, title = {A Taxonomy of 3D Occlusion Management for Visualization}, author = {Niklas Elmqvist and Philippas Tsigas}, url = {http://www.umiacs.umd.edu/~elm/projects/occmgt/occmgt-journal.pdf, Paper}, year = {2008}, date = {2008-01-01}, journal = {IEEE Transactions on Visualization and Computer Graphics}, volume = {14}, number = {5}, pages = {1095--1109}, abstract = {While an important factor in depth perception, the occlusion effect in 3D environments also has a detrimental impact on tasks involving discovery, access, and spatial relation of objects in a 3D visualization. A number of interactive techniques have been developed in recent years to directly or indirectly deal with this problem using a wide range of different approaches. In this paper, we build on previous work on mapping out the problem space of 3D occlusion by defining a taxonomy of the design space of occlusion management techniques in an effort to formalize a common terminology and theoretical framework for this class of interactions. We classify a total of 50 different techniques for occlusion management using our taxonomy and then go on to analyze the results, deriving a set of five orthogonal design patterns for effective reduction of 3D occlusion. We also discuss the "gaps" in the design space, areas of the taxonomy not yet populated with existing techniques, and use these to suggest future research directions into occlusion management.}, keywords = {} } While an important factor in depth perception, the occlusion effect in 3D environments also has a detrimental impact on tasks involving discovery, access, and spatial relation of objects in a 3D visualization. A number of interactive techniques have been developed in recent years to directly or indirectly deal with this problem using a wide range of different approaches. In this paper, we build on previous work on mapping out the problem space of 3D occlusion by defining a taxonomy of the design space of occlusion management techniques in an effort to formalize a common terminology and theoretical framework for this class of interactions. We classify a total of 50 different techniques for occlusion management using our taxonomy and then go on to analyze the results, deriving a set of five orthogonal design patterns for effective reduction of 3D occlusion. We also discuss the "gaps" in the design space, areas of the taxonomy not yet populated with existing techniques, and use these to suggest future research directions into occlusion management. |
6. | Niklas Elmqvist, John Stasko, Philippas Tsigas (2008): DataMeadow: A Visual Canvas for Analysis of Large-Scale Multivariate Data. Information Visualization, 7 (1), pp. 18–33, 2008. (Type: Article | Abstract | Links | BibTeX) @article{Elmqvist2008a, title = {DataMeadow: A Visual Canvas for Analysis of Large-Scale Multivariate Data}, author = {Niklas Elmqvist and John Stasko and Philippas Tsigas}, url = {http://www.umiacs.umd.edu/~elm/projects/datameadow/datameadow-journal.pdf, Paper https://www.youtube.com/watch?v=FO2MsmtWX_4, Youtube video http://www.slideshare.net/NickElm/datameadow-a-visual-canvas-for-analysis-of-largescale-multivariate-data, Slides}, year = {2008}, date = {2008-01-01}, journal = {Information Visualization}, volume = {7}, number = {1}, pages = {18--33}, abstract = {Supporting visual analytics of multiple large-scale multidimensional datasets requires a high degree of interactivity and user control beyond the conventional challenges of visualizing such datasets. We present the DataMeadow, a visual canvas providing rich interaction for constructing visual queries using graphical set representations called DataRoses. A DataRose is essentially a starplot of selected columns in a dataset displayed as multivariate visualizations with dynamic query sliders integrated into each axis. The purpose of the DataMeadow is to allow users to create advanced visual queries by iteratively selecting and filtering into the multidimensional data. Furthermore, the canvas provides a clear history of the analysis that can be annotated to facilitate dissemination of analytical results to stakeholders. A powerful direct manipulation interface allows for selection, filtering, and creation of sets, subsets, and data dependencies. We have evaluated our system using a qualitative expert review involving two visualization researchers. Results from this review are favorable for the new method.}, keywords = {} } Supporting visual analytics of multiple large-scale multidimensional datasets requires a high degree of interactivity and user control beyond the conventional challenges of visualizing such datasets. We present the DataMeadow, a visual canvas providing rich interaction for constructing visual queries using graphical set representations called DataRoses. A DataRose is essentially a starplot of selected columns in a dataset displayed as multivariate visualizations with dynamic query sliders integrated into each axis. The purpose of the DataMeadow is to allow users to create advanced visual queries by iteratively selecting and filtering into the multidimensional data. Furthermore, the canvas provides a clear history of the analysis that can be annotated to facilitate dissemination of analytical results to stakeholders. A powerful direct manipulation interface allows for selection, filtering, and creation of sets, subsets, and data dependencies. We have evaluated our system using a qualitative expert review involving two visualization researchers. Results from this review are favorable for the new method. |
2007 | |
5. | Niklas Elmqvist, Philippas Tsigas (2007): View-Projection Animation for 3D Occlusion Management. Computers & Graphics, 31 (6), pp. 864–876, 2007. (Type: Article | Abstract | Links | BibTeX) @article{Elmqvist2007j, title = {View-Projection Animation for 3D Occlusion Management}, author = {Niklas Elmqvist and Philippas Tsigas}, url = {http://www.umiacs.umd.edu/~elm/projects/pmorph/pmorph-journal.pdf, Paper}, year = {2007}, date = {2007-01-01}, journal = {Computers & Graphics}, volume = {31}, number = {6}, pages = {864--876}, abstract = {Inter-object occlusion is inherent to 3D environments and is one of the challenges of using 3D instead of 2D computer graphics for visualization. Based on an analysis of this effect, we present an interaction technique for view-projection animation that reduces inter-object occlusion in 3D environments without modifying the geometrical properties of the objects themselves. The technique allows for smooth on-demand animation between parallel and perspective projection modes as well as online manipulation of view parameters, enabling the user to quickly and easily adapt the view to reduce occlusion. A user study indicates that the technique provides many of the occlusion reduction benefits of traditional camera movement, but without the need to actually change the viewpoint. We have also implemented a prototype of the technique in the Blender 3D modeler.}, keywords = {} } Inter-object occlusion is inherent to 3D environments and is one of the challenges of using 3D instead of 2D computer graphics for visualization. Based on an analysis of this effect, we present an interaction technique for view-projection animation that reduces inter-object occlusion in 3D environments without modifying the geometrical properties of the objects themselves. The technique allows for smooth on-demand animation between parallel and perspective projection modes as well as online manipulation of view parameters, enabling the user to quickly and easily adapt the view to reduce occlusion. A user study indicates that the technique provides many of the occlusion reduction benefits of traditional camera movement, but without the need to actually change the viewpoint. We have also implemented a prototype of the technique in the Blender 3D modeler. |
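The paper's central mechanism, animating between parallel and perspective projection, admits a compact approximation. The sketch below is not the published technique, which animates the view projection's own parameters; it simply blends two standard GL-style projection matrices, which conveys the visual idea under that stated simplification.

```python
# Hedged sketch: blend a perspective and an orthographic projection matrix.
# The paper animates the projection's parameters directly; naive matrix
# blending, shown here, is a simplification that approximates the effect.
import numpy as np

def perspective(fov_y: float, aspect: float, near: float, far: float) -> np.ndarray:
    f = 1.0 / np.tan(fov_y / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

def orthographic(half_h: float, aspect: float, near: float, far: float) -> np.ndarray:
    half_w = half_h * aspect
    return np.array([
        [1.0 / half_w, 0.0, 0.0, 0.0],
        [0.0, 1.0 / half_h, 0.0, 0.0],
        [0.0, 0.0, 2.0 / (near - far), (far + near) / (near - far)],
        [0.0, 0.0, 0.0, 1.0],
    ])

def blend_projection(t: float, persp: np.ndarray, ortho: np.ndarray) -> np.ndarray:
    """t = 0 yields perspective; t = 1 yields parallel (orthographic)."""
    return (1.0 - t) * persp + t * ortho

# Generate a 30-frame animation from perspective to parallel projection.
P = perspective(np.radians(60), 16 / 9, 0.1, 100.0)
O = orthographic(5.0, 16 / 9, 0.1, 100.0)
frames = [blend_projection(t, P, O) for t in np.linspace(0.0, 1.0, 30)]
```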
4. | Niklas Elmqvist, Philippas Tsigas (2007): CiteWiz: A Tool for the Visualization of Scientific Citation Networks. Information Visualization, 6 (3), pp. 215–232, 2007. (Type: Article | Abstract | Links | BibTeX) @article{Elmqvist2007c, title = {CiteWiz: A Tool for the Visualization of Scientific Citation Networks}, author = {Niklas Elmqvist and Philippas Tsigas}, url = {http://www.umiacs.umd.edu/~elm/projects/citewiz/citewiz.pdf, Paper}, year = {2007}, date = {2007-01-01}, journal = {Information Visualization}, volume = {6}, number = {3}, pages = {215--232}, abstract = {We present CiteWiz, an extensible framework for visualization of scientific citation networks. The system is based on a taxonomy of citation database usage for researchers, and provides a timeline visualization for overviews and an influence visualization for detailed views. The timeline displays the general chronology and importance of authors and articles in a citation database, whereas the influence visualization is implemented using the Growing Polygons technique, suitably modified to the context of browsing citation data. Using the latter technique, hierarchies of articles with potentially very long citation chains can be graphically represented. The visualization is augmented with mechanisms for parent-child visualization and suitable interaction techniques for interacting with the view hierarchy and the individual articles in the dataset. We also provide an interactive concept map for keywords and co-authorship using a basic force-directed graph layout scheme. A formal user study indicates that CiteWiz is significantly more efficient than traditional database interfaces for high-level analysis tasks relating to influence and overviews, and equally efficient for low-level tasks such as finding a paper and correlating bibliographical data.}, keywords = {} } We present CiteWiz, an extensible framework for visualization of scientific citation networks. The system is based on a taxonomy of citation database usage for researchers, and provides a timeline visualization for overviews and an influence visualization for detailed views. The timeline displays the general chronology and importance of authors and articles in a citation database, whereas the influence visualization is implemented using the Growing Polygons technique, suitably modified to the context of browsing citation data. Using the latter technique, hierarchies of articles with potentially very long citation chains can be graphically represented. The visualization is augmented with mechanisms for parent-child visualization and suitable interaction techniques for interacting with the view hierarchy and the individual articles in the dataset. We also provide an interactive concept map for keywords and co-authorship using a basic force-directed graph layout scheme. A formal user study indicates that CiteWiz is significantly more efficient than traditional database interfaces for high-level analysis tasks relating to influence and overviews, and equally efficient for low-level tasks such as finding a paper and correlating bibliographical data. |
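The "basic force-directed graph layout scheme" used for CiteWiz's concept map is a standard algorithm, so a generic sketch suffices to show the mechanics. The snippet below is a hypothetical Fruchterman-Reingold-style layout in Python, not CiteWiz's code: all node pairs repel, edges attract, and a shrinking displacement cap cools the system into a stable layout.

```python
# Hypothetical force-directed layout sketch (Fruchterman-Reingold style);
# illustrates the class of algorithm the abstract names, not CiteWiz itself.
import numpy as np

def force_layout(n, edges, iters=200, k=0.1):
    rng = np.random.default_rng(42)
    pos = rng.random((n, 2))                        # random initial positions
    for step in range(iters):
        disp = np.zeros_like(pos)
        for i in range(n):                          # repulsion between all pairs
            delta = pos[i] - pos
            dist = np.linalg.norm(delta, axis=1) + 1e-9
            disp[i] += (delta / dist[:, None] * (k * k / dist)[:, None]).sum(axis=0)
        for i, j in edges:                          # attraction along edges
            delta = pos[i] - pos[j]
            dist = np.linalg.norm(delta) + 1e-9
            force = delta / dist * (dist * dist / k)
            disp[i] -= force
            disp[j] += force
        limit = 0.1 * (1.0 - step / iters) + 1e-3   # cooling: shrinking step cap
        length = np.linalg.norm(disp, axis=1, keepdims=True) + 1e-9
        pos += disp / length * np.minimum(length, limit)
    return pos

# Lay out a small 5-node cycle.
coords = force_layout(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
```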
3. | Niklas Elmqvist, Mihail Eduard Tudoreanu (2007): Occlusion Management in Immersive and Desktop 3D Virtual Environments: Theory and Evaluation. International Journal of Virtual Reality, 6 (2), pp. 21–32, 2007. (Type: Article | Abstract | Links | BibTeX) @article{Elmqvist2007d, title = {Occlusion Management in Immersive and Desktop 3D Virtual Environments: Theory and Evaluation}, author = {Niklas Elmqvist and Mihail Eduard Tudoreanu}, url = {http://www.umiacs.umd.edu/~elm/projects/balloonprobe/balloonprobe-journal.pdf}, year = {2007}, date = {2007-01-01}, journal = {International Journal of Virtual Reality}, volume = {6}, number = {2}, pages = {21--32}, abstract = {We present an empirical usability experiment studying the relative strengths and weaknesses of three different occlusion management techniques for discovering and accessing objects in information-rich 3D virtual environments. More specifically, the study compares standard 3D navigation, generalized fisheye techniques using object scaling and transparency, and the BalloonProbe interactive 3D space distortion technique. Subjects are asked to complete a number of representative tasks, including counting, pattern recognition, and object relation, in different kinds of environments and on both immersive and desktop-based VR systems. The environments include a free-space abstract 3D environment and a virtual 3D walkthrough application for a simple building floor. Our results confirm the general guideline that each task calls for a specialized interaction---no single technique performed best across all tasks and worlds. The results also indicate a clear trade-off between speed and accuracy: simple navigation was the fastest but also most error-prone technique, whereas spherical BalloonProbe and transparency-based fisheye proved the most accurate but required longer completion time, making them suitable for applications where mistakes incur a high cost.}, keywords = {} } We present an empirical usability experiment studying the relative strengths and weaknesses of three different occlusion management techniques for discovering and accessing objects in information-rich 3D virtual environments. More specifically, the study compares standard 3D navigation, generalized fisheye techniques using object scaling and transparency, and the BalloonProbe interactive 3D space distortion technique. Subjects are asked to complete a number of representative tasks, including counting, pattern recognition, and object relation, in different kinds of environments and on both immersive and desktop-based VR systems. The environments include a free-space abstract 3D environment and a virtual 3D walkthrough application for a simple building floor. Our results confirm the general guideline that each task calls for a specialized interaction---no single technique performed best across all tasks and worlds. The results also indicate a clear trade-off between speed and accuracy: simple navigation was the fastest but also most error-prone technique, whereas spherical BalloonProbe and transparency-based fisheye proved the most accurate but required longer completion time, making them suitable for applications where mistakes incur a high cost. |
2. | Nathalie Henry, Howard Goodell, Niklas Elmqvist, Jean-Daniel Fekete (2007): 20 Years of Four HCI Conferences: A Visual Exploration. International Journal of Human-Computer Interaction, 23 (3), pp. 239–285, 2007. (Type: Article | Abstract | Links | BibTeX) @article{Henry2007, title = {20 Years of Four HCI Conferences: A Visual Exploration}, author = {Nathalie Henry and Howard Goodell and Niklas Elmqvist and Jean-Daniel Fekete}, url = {http://www.umiacs.umd.edu/~elm/projects/20yearshci/20yearshci.pdf, Paper}, year = {2007}, date = {2007-01-01}, journal = {International Journal of Human-Computer Interaction}, volume = {23}, number = {3}, pages = {239--285}, abstract = {We present a visual exploration of the field of human–computer interaction (HCI) through the author and article metadata of four of its major conferences: the ACM conferences on Computer-Human Interaction (CHI), User Interface Software and Technology, and Advanced Visual Interfaces and the IEEE Symposium on Information Visualization. This article describes many global and local patterns we discovered in this data set, together with the exploration process that produced them. Some expected patterns emerged, such as that---like most social networks---coauthorship and citation networks exhibit a power-law degree distribution, with a few widely collaborating authors and highly cited articles. Also, the prestigious and long-established CHI conference has the highest impact (citations by the others). Unexpected insights included that the years when a given conference was most selective are not correlated with those that produced its most highly referenced articles and that influential authors have distinct patterns of collaboration. An interesting sidelight is that methods from the HCI field---exploratory data analysis by information visualization and direct-manipulation interaction---proved useful for this analysis. They allowed us to take an open-ended, exploratory approach, guided by the data itself. As we answered our original questions, new ones arose; as we confirmed patterns we expected, we discovered refinements, exceptions, and fascinating new ones.}, keywords = {} } We present a visual exploration of the field of human–computer interaction (HCI) through the author and article metadata of four of its major conferences: the ACM conferences on Computer-Human Interaction (CHI), User Interface Software and Technology, and Advanced Visual Interfaces and the IEEE Symposium on Information Visualization. This article describes many global and local patterns we discovered in this data set, together with the exploration process that produced them. Some expected patterns emerged, such as that---like most social networks---coauthorship and citation networks exhibit a power-law degree distribution, with a few widely collaborating authors and highly cited articles. Also, the prestigious and long-established CHI conference has the highest impact (citations by the others). Unexpected insights included that the years when a given conference was most selective are not correlated with those that produced its most highly referenced articles and that influential authors have distinct patterns of collaboration. An interesting sidelight is that methods from the HCI field---exploratory data analysis by information visualization and direct-manipulation interaction---proved useful for this analysis. They allowed us to take an open-ended, exploratory approach, guided by the data itself. As we answered our original questions, new ones arose; as we confirmed patterns we expected, we discovered refinements, exceptions, and fascinating new ones. |
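The power-law degree distributions reported for these coauthorship and citation networks can be eyeballed with a few lines of code. The sketch below is illustrative only: it uses synthetic Zipf-distributed degrees and a least-squares fit in log-log space, whereas a rigorous analysis would use a maximum-likelihood estimator.

```python
# Illustrative check for a power-law degree distribution: fit a line in
# log-log space. Synthetic data; a real study would use an MLE-based test.
import numpy as np
from collections import Counter

def loglog_slope(degrees):
    counts = Counter(d for d in degrees if d > 0)
    k = np.array(sorted(counts))
    p = np.array([counts[d] for d in k], dtype=float)
    p /= p.sum()                                    # empirical degree probabilities
    slope, _intercept = np.polyfit(np.log(k), np.log(p), 1)
    return slope                                    # roughly -alpha for p(k) ~ k**(-alpha)

rng = np.random.default_rng(1)
degrees = rng.zipf(2.5, size=5000)                  # hypothetical heavy-tailed degrees
print(f"estimated log-log slope: {loglog_slope(degrees):.2f}")
```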
2004 | |
1. | Niklas Elmqvist, Philippas Tsigas (2004): Animated Visualization of Causal Relations Through Growing 2D Geometry. Information Visualization, 3 (3), pp. 154–172, 2004. (Type: Article | Abstract | Links | BibTeX) @article{Elmqvist2004a, title = {Animated Visualization of Causal Relations Through Growing 2D Geometry}, author = {Niklas Elmqvist and Philippas Tsigas}, url = {http://www.umiacs.umd.edu/~elm/projects/causality/causality.pdf, Paper}, year = {2004}, date = {2004-01-01}, journal = {Information Visualization}, volume = {3}, number = {3}, pages = {154--172}, abstract = {Causality visualization is an important tool for many scientific domains that involve complex interactions between multiple entities (examples include parallel and distributed systems in computer science). However, traditional visualization techniques such as Hasse diagrams are not well-suited to large system executions, and users often have difficulties answering even basic questions using them, or have to spend inordinate amounts of time to do so. In this paper we present the Growing Squares and Growing Polygons methods, two sibling visualization techniques that were designed to solve this problem by providing efficient 2D causality visualization through the use of color, texture, and animation. Both techniques have abandoned the traditional linear timeline and instead map the time parameter to the size of geometrical primitives representing the processes; in the Growing Squares case, each process is a color-coded square that receives color influences from other process squares as messages reach it; in the Growing Polygons case, each process is instead an n-sided polygon consisting of triangular sectors showing color-coded influences from the other processes. We have performed user studies of both techniques, comparing them with Hasse diagrams, and they have been shown to be significantly more efficient than old techniques, both in terms of objective performance as well as the subjective opinion of the test subjects (the Growing Squares technique is, however, only significantly more efficient for small systems).}, keywords = {} } Causality visualization is an important tool for many scientific domains that involve complex interactions between multiple entities (examples include parallel and distributed systems in computer science). However, traditional visualization techniques such as Hasse diagrams are not well-suited to large system executions, and users often have difficulties answering even basic questions using them, or have to spend inordinate amounts of time to do so. In this paper we present the Growing Squares and Growing Polygons methods, two sibling visualization techniques that were designed to solve this problem by providing efficient 2D causality visualization through the use of color, texture, and animation. Both techniques have abandoned the traditional linear timeline and instead map the time parameter to the size of geometrical primitives representing the processes; in the Growing Squares case, each process is a color-coded square that receives color influences from other process squares as messages reach it; in the Growing Polygons case, each process is instead an n-sided polygon consisting of triangular sectors showing color-coded influences from the other processes. We have performed user studies of both techniques, comparing them with Hasse diagrams, and they have been shown to be significantly more efficient than old techniques, both in terms of objective performance as well as the subjective opinion of the test subjects (the Growing Squares technique is, however, only significantly more efficient for small systems). |
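The central mapping in Growing Squares, logical time to primitive size plus accumulated color influences, is simple to state in code. The sketch below is hypothetical (the names, growth constants, and influence bookkeeping are invented for illustration, and rendering is omitted), not the authors' implementation.

```python
# Hypothetical sketch of the Growing Squares idea: a process is a square
# whose side grows with its logical clock, and receiving a message merges
# in the sender's color influences. Rendering is deliberately omitted.
from dataclasses import dataclass, field

@dataclass
class ProcessSquare:
    name: str
    color: str
    clock: int = 0                                  # latest event time seen
    influences: set = field(default_factory=set)    # colors received so far

    def local_event(self, t: int) -> None:
        self.clock = max(self.clock, t)

    def receive(self, sender: "ProcessSquare", t: int) -> None:
        # Causal influence: the sender's color, and everything that already
        # influenced the sender, now influences this process too.
        self.clock = max(self.clock, t)
        self.influences |= {sender.color} | sender.influences

    @property
    def side(self) -> float:
        return 10.0 + 2.0 * self.clock              # time mapped to square size

a = ProcessSquare("A", "red")
b = ProcessSquare("B", "blue")
a.local_event(3)
b.receive(a, 5)     # B's square grows and picks up A's red influence
print(b.side, b.influences)
```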