We are at IEEE VIS 2015 in Chicago this week, presenting five papers from my research group. UMIACS has posted a story about this here. Here is my original description of these five papers in popular terms:
Movies may belong to several genres, recipes may contain many different ingredients, and people may belong to several organizations and groups. Understanding such patterns becomes challenging as the number of sets and their combinations grows: for example, why are romantic comedies so successful, and why do certain ingredients, such as apple and cinnamon, go together so well? Our tool AggreSet tries to help people answer such questions using a visual and interactive approach.
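To make the idea concrete, here is a minimal sketch (not the AggreSet implementation itself) of the kind of aggregation such a tool rests on: counting how often pairs of sets, such as genres, co-occur across items. The movie titles and genres below are made-up examples.

```python
from itertools import combinations
from collections import Counter

# Each item (e.g., a movie) belongs to one or more sets (e.g., genres).
items = {
    "When Harry Met Sally": {"Romance", "Comedy"},
    "Notting Hill": {"Romance", "Comedy", "Drama"},
    "Alien": {"Horror", "Sci-Fi"},
    "The Thing": {"Horror", "Sci-Fi"},
}

# Count how often each pair of sets co-occurs across all items.
pair_counts = Counter()
for sets in items.values():
    for pair in combinations(sorted(sets), 2):
        pair_counts[pair] += 1

for pair, count in pair_counts.most_common():
    print(pair, count)
```

A visual tool like AggreSet essentially turns tables of such co-occurrence counts into linked, interactive views instead of printed numbers.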
In the future, computers will be increasingly controlled by our natural behavior, such as speech, gestures, and body language. Speech is an entire research area of its own, and has recently made significant progress. Gestures and body language are equally challenging because every person has their own individual movement patterns. With the MotionFlow system, we aim to help designers of gesture-based interfaces by visualizing movement patterns collected from a large number of test participants and grouping them together so that common patterns become visible.
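As an illustration of the grouping idea (a simplified sketch, not MotionFlow's actual algorithm), gestures can be treated as sequences of discrete poses; counting how many participants' sequences share each prefix reveals where common patterns run together and where they diverge. The pose names below are invented for the example.

```python
from collections import defaultdict

# Each participant's gesture, recorded as a sequence of discrete poses.
sequences = [
    ("raise_hand", "wave_left", "wave_right"),
    ("raise_hand", "wave_left", "lower_hand"),
    ("raise_hand", "wave_left", "wave_right"),
    ("point", "hold"),
]

def prefix_counts(seqs):
    """Count how many sequences pass through each shared prefix."""
    counts = defaultdict(int)
    for seq in seqs:
        for i in range(1, len(seq) + 1):
            counts[seq[:i]] += 1
    return counts

counts = prefix_counts(sequences)
# Three participants share the prefix ("raise_hand", "wave_left"),
# after which their paths diverge -- the branching a flow diagram shows.
```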
Much progress has recently been made in remedying the information overload facing our society through automatic methods called topic modeling, where documents are summarized by the significant words they contain. However, for document collections that change over time, such as Twitter feeds, daily newspaper issues, and speeches in an ongoing political campaign, these topics may also evolve: words appear, disappear, or move from one topic to another. The ThemeDelta project consists of two parts: a topic modeling component that automatically finds the natural time periods in the document collection, such as major stories appearing in the news cycle or the ebb and flow of discussions on social media, and a visualization component that shows how words move fluidly from one topic to another over time, similar to a river delta where meandering channels converge and diverge in response to flow and topography.
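A toy sketch of the underlying idea (word frequency per time window stands in here for a real topic model; the documents and window names are fabricated): summarizing each window by its most significant words makes it visible which terms persist, appear, or disappear between windows.

```python
from collections import Counter

# Toy document stream, bucketed into two time windows.
windows = {
    "week1": ["economy jobs growth", "jobs economy taxes"],
    "week2": ["healthcare jobs reform", "reform healthcare policy"],
}

def top_words(docs, k=3):
    """Return the k most frequent words across a window's documents."""
    counts = Counter(w for doc in docs for w in doc.split())
    return [w for w, _ in counts.most_common(k)]

topics = {window: top_words(docs) for window, docs in windows.items()}
# Comparing the lists across windows shows "jobs" persisting,
# "economy" disappearing, and "healthcare" appearing.
```

ThemeDelta's visualization draws exactly these transitions as word trails that converge into and diverge out of topics over time.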
One of the great successes of the last few decades of computer interfaces was the identification of copy-and-paste and drag-and-drop: common operations that transcend and connect individual applications. In the VisDock project, we try to do the same for data visualization: what are the common tasks that people perform on a stock market line chart, a housing bar chart, or a pie chart showing relative household expenses? We found four such tasks: selecting items, filtering them, labeling them, and zooming in and out of the chart. We also created a toolbar, similar to Photoshop's, but designed for data visualizations.
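The four tasks above can be sketched as one generic layer that works on any chart's data points, regardless of whether they are drawn as lines, bars, or pie slices. This is a hypothetical illustration of the concept, not VisDock's actual API (which is a JavaScript toolkit).

```python
class ChartOverlay:
    """Applies the four generic operations to any list of data points."""

    def __init__(self, points):
        self.points = list(points)   # (x, y) tuples
        self.selected = set()
        self.labels = {}
        self.viewport = None         # (xmin, xmax), or None for full view

    def select(self, predicate):
        """Select every point matching a predicate."""
        self.selected = {p for p in self.points if predicate(p)}

    def filter(self):
        """Keep only the currently selected points."""
        self.points = [p for p in self.points if p in self.selected]

    def label(self, point, text):
        """Attach a text label to one point."""
        self.labels[point] = text

    def zoom(self, xmin, xmax):
        """Restrict the view to an x-range."""
        self.viewport = (xmin, xmax)

    def visible(self):
        """Return the points inside the current viewport."""
        if self.viewport is None:
            return self.points
        lo, hi = self.viewport
        return [p for p in self.points if lo <= p[0] <= hi]

overlay = ChartOverlay([(1, 10), (2, 20), (3, 15)])
overlay.select(lambda p: p[1] >= 15)   # select the high values
overlay.zoom(2, 3)                     # zoom to the right half
```

Because nothing here depends on how the chart is rendered, the same operations can sit on top of very different visualizations, which is the key design idea.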
Building on the intuition that a picture is worth a thousand words, data visualization uses images to help people understand large or complex data: a picture of your spending habits is often more visceral and memorable than mere numbers. However, as data continues to grow in both size and complexity, we need new methods to help people come to grips with this flow of information. In this paper, we lay out a plan for how to take advantage of a new breed of computer technology to improve visualization: touch-based, pen-based, gestural, immersive, pervasive, and ubiquitous visualization. In other words, our vision paints a future where data can be viewed and analyzed anytime and anywhere.