A quick review of the five papers we will be presenting at the IEEE VIS 2018 conference in October in Berlin, Germany.

Road to Berlin: The HCIL at the Upcoming IEEE VIS 2018

Final notifications for the upcoming IEEE VIS 2018 were sent out earlier this month (a list of accepted papers can be found here), and it is now confirmed that my students and I will be presenting a total of five papers at the conference: four papers at IEEE InfoVis 2018, and one journal paper in the TVCG track. This is certainly the most papers I have ever published at a single conference (my previous record at VIS was two papers in a single year, at VAST), and, as always, the credit for this lies with my awesome Ph.D. students at the UMD Human-Computer Interaction Laboratory (HCIL) and our amazing collaborators. In this brief post, I will give an overview of all five papers.

IEEE VIS is the premier data visualization event in the world, dating back to 1990 and accepting only the best contributions to this field. The event consists of three primary events—the IEEE conferences on visual analytics (VAST), information visualization (InfoVis), and scientific visualization (SciVis)—as well as a large number of smaller symposia, workshops, and meetings. Papers accepted to these three primary events are published in a special issue of IEEE TVCG, the premier visualization journal and one of the top journals in computer science as a whole (impact factor 3.078). This year, VIS will take place in Berlin, Germany, and I am looking forward to visiting that beautiful city in October.

Anyway, below are the five papers we will present, each followed by a brief description.

Elastic Documents: Linking Tables with Text

  • S. K. Badam, Z. Liu, N. Elmqvist. Elastic Documents: Coupling Text and Tables through Contextual Visualizations for Enhanced Document Reading. IEEE Transactions on Visualization & Computer Graphics (Proc. VAST/InfoVis/SciVis 2018), to appear, 2019. [PDF]

The outcome of Karthik Badam’s internship with Leo Liu in the summer of 2017, this work introduces a new model for viewing data-rich documents as databases of linked media, and proposes a specific technique for showing contextual visualizations tied to the running text. In practice, the Elastic Documents viewer uses simple natural language processing to automatically show relevant visualizations of data selected from a table in the margins of the document.
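To give a flavor of the kind of lightweight text-to-table linking involved, here is a hypothetical sketch (not the actual Elastic Documents code): match a sentence of running text against a table's column names by simple word overlap, which is enough to decide which columns a contextual margin visualization should draw from.

```python
# Hypothetical sketch: link a sentence to relevant table columns by
# keyword overlap. All names and data here are made up for illustration.

def relevant_columns(sentence, table):
    """Return the table column names mentioned (by word overlap) in the sentence."""
    words = {w.strip(".,;:").lower() for w in sentence.split()}
    hits = []
    for column in table:
        # a column is relevant if any word of its name appears in the sentence
        if any(part.lower() in words for part in column.split()):
            hits.append(column)
    return hits

table = {
    "population": [3.5, 8.9, 2.1],
    "median income": [54000, 61000, 48000],
    "region": ["North", "South", "East"],
}

sentence = "Median income rose fastest in the South, even as population declined."
print(relevant_columns(sentence, table))  # ['population', 'median income']
```

A real system would of course use richer NLP (entity matching, units, synonyms), but even this crude overlap illustrates how a viewer can decide, per passage, which table columns to visualize alongside the text.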

Vistrates: A Platform for Ubiquitous Analytics

  • S. K. Badam, A. Mathisen, R. Rädle, C. N. Klokmose, N. Elmqvist. Vistrates: A Component Model for Ubiquitous Analytics. IEEE Transactions on Visualization & Computer Graphics (Proc. VAST/InfoVis/SciVis 2018), to appear, 2019. [PDF]

Built as part of a collaboration with Clemens Nylandsted Klokmose (who visited UMD in Spring 2018), Roman Rädle, and Andreas Mathisen from Aarhus University in Denmark, Vistrates is our exciting new open source platform for building ubiquitous visual analytics tools that can be distributed and shared across any device. A powerful component model allows even a novice user to connect existing visualizations, transformations, and datasets into an interactive visualization using just drag-and-drop interaction. Vistrates is amazingly powerful, and I look forward to building a lot of our future ubiquitous analytics tools using the platform.

Face to Face: Evaluating Visual Comparison

  • B. Ondov, N. Jardine, N. Elmqvist, S. Franconeri. Face to Face: Evaluating Visual Comparison. IEEE Transactions on Visualization & Computer Graphics (Proc. VAST/InfoVis/SciVis 2018), to appear, 2019. [PDF]

Devised together with Nicole Jardine and Steve Franconeri at Northwestern, this work was all about trying to understand visual comparison at a perceptual level. My student Brian Ondov, who just started in the UMD CS Ph.D. program last fall, is the creator of Krona, a bioinformatics tool based on radial sunburst charts, and one of his most recent contributions to the tool was a bipartite comparison mode (VIS 2017 poster) where two separate datasets are shown side-by-side using half circles. One of the layouts used a mirroring metaphor, where the two sunbursts were arranged on top of each other as if the lower half were a mirror image of the top half, thus facilitating comparison between the two. This led us to ponder whether mirroring in itself is a useful way to facilitate comparison in other visualizations as well; one example is “population pyramids”, where charts representing the two sexes are shown on either side of a vertical axis. Anyway, long story short (more details in the paper and in a forthcoming blog post), we investigated these ideas for bar and line charts using a fairly low-level “biggest mover” task, and found that mirroring was actually not all that useful, but, surprisingly, that animation was.
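For readers unfamiliar with the task: "biggest mover" asks which item changed the most between two measurements of the same data. A minimal sketch of that judgment, with made-up data (this is an assumed framing, not the paper's actual stimulus code):

```python
# Hypothetical sketch of the "biggest mover" judgment: given the same
# items measured at two time points, find the one that changed most.

def biggest_mover(before, after):
    """Return the item whose value changed most between two measurements."""
    return max(before, key=lambda item: abs(after[item] - before[item]))

before = {"A": 10, "B": 40, "C": 25, "D": 30}
after  = {"A": 12, "B": 22, "C": 27, "D": 35}
print(biggest_mover(before, after))  # 'B' (changed by 18)
```

The study's question was then how chart layout — mirrored, side-by-side, or animated between the two states — affects how quickly and accurately people can make this judgment visually.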

Information Olfactation: Harnessing Scent to Convey Data

  • B. Patnaik, A. Batch, N. Elmqvist. Information Olfactation: Harnessing Scent to Convey Data. IEEE Transactions on Visualization & Computer Graphics (Proc. VAST/InfoVis/SciVis 2018), to appear, 2019. [PDF]

Easily our craziest idea to date, this paper was spearheaded by my Ph.D. student Andrea Batch, who has made immersive analytics her main research area. In this project, Andrea advised HCI master's student Biswaksen Patnaik in exploring how to use olfaction—the sense of smell—to convey data. As we delved deeper into this topic and built two separate olfactory displays, we realized that it may open up an entirely new area of information olfactation (cf. information visualization), where olfactory channels and glyphs can be used to convey information in a structured way. I don't expect this idea to have anywhere near the same penetration as visualization or even sonification (in other words, don't hold your breath for InfoSmell 2019), but it is still an interesting and mostly overlooked topic. The paper itself includes a review of a significant amount of prior literature in perceptual psychology, our model for information olfactation, and descriptions of our two prototype displays—a tabletop model and a wearable one—along with three applications using them. You can see the two prototypes in the featured image for this blog post.

Atom: A Grammar for Unit Visualization

  • D. Park, S. Drucker, R. Fernandez, N. Elmqvist. ATOM: A Grammar for Unit Visualization. IEEE Transactions on Visualization & Computer Graphics, to appear, 2018. [PDF]

This last paper is actually not an IEEE VIS 2018 paper, but rather a paper accepted to IEEE TVCG for which we took advantage of the opportunity to apply for a conference presentation. The work stemmed from an internship that my student (now newly minted Ph.D. and soon-to-be assistant professor!) Deok Gun Park did with Steve Drucker and Roland Fernandez at Microsoft Research in Summer 2016. The paper proposes the notion of unit visualizations: an emerging family of visual representations that maintain a strict one-to-one mapping between individual data points and unique visual marks (in contrast to aggregating visualizations such as histograms, bar charts, and many others). Beyond showcasing past and present examples of such unit visualizations, we also derive a new visual grammar called Atom that allows these visualizations to be specified in a declarative manner.
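The defining property — one visual mark per data point — can be illustrated with a toy text rendering (this is a made-up sketch of the concept, not the Atom grammar itself): each record gets its own mark, grouped by a category field, so every row of the dataset remains individually identifiable rather than being collapsed into an aggregate bar.

```python
# Hypothetical sketch of a unit visualization: one mark per record,
# grouped by a categorical field. Data and names are invented.

from collections import defaultdict

def unit_chart(records, key):
    """Render a text-based unit chart: one '▪' mark per record, grouped by `key`."""
    groups = defaultdict(list)
    for r in records:
        groups[r[key]].append(r)
    lines = []
    for category, items in groups.items():
        # one mark per data point -- the defining property of unit visualizations
        lines.append(f"{category:>8} | " + "▪ " * len(items))
    return "\n".join(lines)

data = [{"party": "A"}, {"party": "B"}, {"party": "A"},
        {"party": "A"}, {"party": "B"}]
print(unit_chart(data, "party"))
```

An aggregating chart would store only the counts (A: 3, B: 2); here each mark still corresponds to a specific record, which is what lets unit visualizations support per-point interaction and animation.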


As we’ve done in the past, we’ll try to write individual blog posts on each of these papers in detail as we get closer to the conference. For now, this is a good starting point for all of the HCIL work that will be on display at VIS in Berlin in October! Hope to see you there!