• Full professor in the College of Information Studies (iSchool), University of Maryland, College Park (2019-)
  • Affiliate full professor in the Department of Computer Science, University of Maryland, College Park (2019-)
  • Director of the Human-Computer Interaction Laboratory (HCIL) (2016-2021)
  • Distinguished Scientist of the ACM (2018)
  • Member of University of Maryland Institute for Advanced Computer Studies (2014-)
  • IEEE InfoVis papers co-chair (2016, 2017, 2020)
  • Morgan & Claypool Synthesis Lectures on Visualization series editor (2014-)
  • Director of the Master of Science in Human-Computer Interaction program at the College of Information Studies (iSchool) at University of Maryland, College Park (2014-2018)

Also see my biography page.

Personal Statement

The text below is derived from my application for promotion to Full Professor in Summer 2018. The citations refer to publications in my CV. Full materials can be found here.

Steve Jobs once famously described computers as “a bicycle for the mind,” and this motto rings particularly true in my research area of data visualization, where we use interactive graphical representations of data to amplify cognition. Put differently, visualization scaffolds what essentially makes us human: our capacity for rational thought. Instead of endeavoring to remove people from the analytical process entirely, which is increasingly the approach taken by machine learning and artificial intelligence, visualization engages individuals as integral parts of a sensemaking loop where the computer and the human are separate, but often equal, partners. People have used technology to improve their capabilities and overcome their limitations since the dawn of time. Visualization is just the latest in a long line of such tools, but its potential for supporting the human in truly understanding vast oceans of data is unparalleled.

My approach to visualization research is grounded in the areas of human-computer interaction, cognitive science, and ubiquitous computing. My view is that interaction is a cognitive catalyst for sensemaking; essentially, that merely viewing data is insufficient and that manipulation is what truly enables insight. My goal is to leverage the new generation of hardware—touch, pens, gestures, mobile, and multimodal—to design, build, and evaluate new tools for making sense of data. My unique contribution, first proposed in 2013, is the vision of a ubiquitous form of data analytics (ubilytics) [J30], where ever-present networked devices can be harnessed for analysis and decision-making anytime and anywhere. Spurred by papers, workshops, and talks, this idea is now gaining momentum in the field and has contributed to the topic of immersive analytics.

Here I will not only describe this vision in more detail, but also outline my efforts in teaching as well as in service to the university and my professional community.

Scholarship and Research Methodology

My research spans data visualization, human-computer interaction, and visual analytics, with my primary scientific communities being the IEEE VIS (http://www.ieeevis.org/) and ACM CHI (https://sigchi.org/) conferences. I have been highly productive in these areas, particularly since joining UMD in 2014, with more than 110 academic papers in strictly peer-reviewed journals and conference proceedings (a count that has almost doubled since 2014). In particular, I have co-authored 28 papers in IEEE Transactions on Visualization & Computer Graphics (TVCG) and 14 papers in the ACM CHI conference, the top venues for visualization and HCI research, respectively. According to Google Scholar, my work has been cited thousands of times and I have an h-index of 35, putting me on equal footing with many full professors in my area.

My research methodology is a mix of theory, design, and evaluation. The problems I attack are real problems posed by real users, and I strive to involve these users in the design process in a user-centered, participatory fashion. Because the problems are real and require solutions, all of my work is characterized by a strong software engineering component. New HCI techniques must be empirically evaluated, and for this to be possible we need prototype implementations. After iterative design and development, my approach is to evaluate the new technique using a blend of qualitative and quantitative methods. I often deploy my tools in the field over extended periods of time.

Significantly, I study data visualization through the lenses of Norman’s Gulfs of Evaluation and Execution, which map naturally onto the field: here, evaluation refers to the capacity of the user to discern the state of the computer system, which in visualization corresponds to the visual representation itself, whereas execution refers to the mechanisms the computer system provides for changing its state, which corresponds to the interactivity of the visual representations. I find this a useful organizing framework for my work: its visual versus its interactive computing aspects.

Visual Computing: Making Sense of Big Data

Visualization creates graphical representations of data to offload computation, re-represent data, and constrain problem solving, thus allowing a user to view, analyze, and understand datasets far larger than non-visual formats permit. However, for truly big data, we invariably reach a point where there are simply not enough pixels to go around. Large or multiple displays do not generally help in this situation, as human perceptual limitations dominate. In my research, I have addressed this problem through aggregation: recursively combining data into a hierarchy of discrete cluster levels to create a multiscale representation of the dataset. I then manage the resulting hierarchy in two ways: (1) by visually representing aggregate entities that may consist of thousands of data cases [J10, C12, J12], and (2) by providing interaction techniques for navigating the multiscale space [J11, C17, C26, C27, J27].
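
To make the aggregation idea concrete, here is a minimal TypeScript sketch (my illustration, not the published algorithms from [J10, C12, J12]): it recursively merges the closest clusters of a one-dimensional dataset into a hierarchy of aggregates, and then cuts the tree at a given depth to obtain a drawable level of detail.

```typescript
// A hierarchy node stores aggregate statistics instead of raw data cases.
interface Cluster {
  count: number;       // number of underlying data cases
  mean: number;        // aggregate value shown at coarse zoom levels
  min: number;         // extent, e.g., for drawing an aggregate glyph
  max: number;
  children: Cluster[]; // empty for leaves (individual data cases)
}

function leaf(value: number): Cluster {
  return { count: 1, mean: value, min: value, max: value, children: [] };
}

function merge(a: Cluster, b: Cluster): Cluster {
  const count = a.count + b.count;
  return {
    count,
    mean: (a.mean * a.count + b.mean * b.count) / count,
    min: Math.min(a.min, b.min),
    max: Math.max(a.max, b.max),
    children: [a, b],
  };
}

// Build the hierarchy bottom-up by repeatedly merging the two clusters
// with the closest means until a single root remains.
function aggregate(values: number[]): Cluster {
  const clusters = values.map(leaf);
  while (clusters.length > 1) {
    clusters.sort((a, b) => a.mean - b.mean);
    let best = 0;
    for (let i = 1; i < clusters.length - 1; i++) {
      if (clusters[i + 1].mean - clusters[i].mean <
          clusters[best + 1].mean - clusters[best].mean) {
        best = i;
      }
    }
    clusters.splice(best, 2, merge(clusters[best], clusters[best + 1]));
  }
  return clusters[0];
}

// "Zooming" then means cutting the tree at a given depth: shallow cuts
// yield a few aggregate glyphs, deep cuts approach the raw data cases.
function cut(node: Cluster, depth: number): Cluster[] {
  if (depth === 0 || node.children.length === 0) return [node];
  return node.children.flatMap((c) => cut(c, depth - 1));
}

const root = aggregate([1, 2, 2.5, 8, 9, 9.5, 20]);
console.log(cut(root, 2).map((c) => `n=${c.count} [${c.min}, ${c.max}]`));
```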

Nevertheless, computing has today reached a point where we often must go beyond the confines of a single monitor and look at display spaces consisting of multiple, often heterogeneous, displays [J32]. My work investigates how we can leverage the ecosystem of digital devices (smartphones, tablets, music players, laptops, head-mounted displays, etc.) in our surroundings to form shared display spaces that allow the seamless transfer and viewing of big data. By being embedded into the real world to a much higher degree than was previously possible, these display spaces can support the visual fabric that ubiquitous analytics relies on.

To make this possible, we need open and standardized infrastructures that support meshing this sea of devices into a coherent whole. My work during the last few years has focused on prototyping and evaluating several such infrastructures. My first attempt was Munin [J37], a Java framework for distributed visualization based on a peer-to-peer (P2P) network infrastructure. However, the web’s rich and growing ecosystem of libraries, APIs, and standards is a better platform for device-independent visualizations than Java. This insight resulted in PolyChrome [C37], a JavaScript P2P framework. Most recently, in 2018, my research group partnered with collaborators at Aarhus University in Denmark to build a new open-source platform called Vistrates [J63] that allows for visual programming using reusable components in a truly distributed, shareable, and malleable web-based form. The platform makes it easy to build cross-device and distributed visualization applications using standard web technologies.
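
The idea underlying all three infrastructures is replicated state: every device holds a copy of the visualization state, and local changes are propagated so that all replicas converge. The TypeScript sketch below illustrates the pattern; it uses the browser's BroadcastChannel API as a stand-in for a real network transport, and its types and class are hypothetical rather than the actual PolyChrome or Vistrates APIs.

```typescript
// Each device holds a replica of the visualization state; local changes
// are broadcast so all replicas converge. BroadcastChannel stands in for
// a real transport such as WebSockets or WebRTC (hypothetical design).

type VisState = { dataset: string; zoomLevel: number; selection: number[] };

class SharedState {
  private channel = new BroadcastChannel("vis-sync");
  private listeners: Array<(s: VisState) => void> = [];

  constructor(private state: VisState) {
    // Apply remote changes to the local replica and re-render.
    this.channel.onmessage = (e: MessageEvent<Partial<VisState>>) => {
      this.state = { ...this.state, ...e.data };
      this.listeners.forEach((fn) => fn(this.state));
    };
  }

  // Local interaction: update the replica, notify local views, and
  // propagate the change to every other connected device.
  update(change: Partial<VisState>): void {
    this.state = { ...this.state, ...change };
    this.listeners.forEach((fn) => fn(this.state));
    this.channel.postMessage(change);
  }

  onChange(fn: (s: VisState) => void): void {
    this.listeners.push(fn);
  }
}

// A phone adjusting the zoom level immediately updates the same
// visualization rendered on a wall display running this code.
const replica = new SharedState({ dataset: "flights", zoomLevel: 1, selection: [] });
replica.onChange((s) => console.log("render at zoom level", s.zoomLevel));
replica.update({ zoomLevel: 3 });
```

Note that this last-writer-wins merging deliberately sidesteps conflict resolution, one of the hard problems a real infrastructure must address.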

Interactive Computing: Interaction as a Cognitive Catalyst

While it often receives scant attention in visualization research compared to visual encodings, interaction is much more than the interface used to control our visualization tools. Instead, interaction serves as a critical catalyst for understanding because it places direct control of the data into the hands of the user [J19]. This reduces the visual representation to a mere medium in which the interaction takes place. In fact, post-cognitivist frameworks such as distributed cognition model information flow in a cognitive system—such as an analyst using a computational device to view and understand data—as the transfer of internal and external representational states across different media—such as the device’s screen, the user’s mind, and a piece of paper used to take notes—through interactions between them. Designing seamless interactions into the analytical computing system amplifies this flow. Humans do not think in a vacuum; rather, we surround ourselves with surfaces, spaces, artifacts, and other people that support the cognitive task. Consider spreading financial reports on your desk when working on your stock portfolio; annotating, stacking, and organizing bills in your office when balancing your checkbook; or gathering your family around a dining table littered with catalogues, maps, and notepads when planning your vacation. Put simply, action is a catalyst for understanding.

A ubiquitous approach to interaction, then, would endeavor to reduce or eliminate the barriers between users and the data they interact with (i.e., reducing the Gulf of Execution). Such a fluid form [J19] of human-data interaction must scaffold people in managing large and complex data, serving as the interactive counterpart to world-embedded display environments. I scaffold interaction in two primary ways: through computational support and through novel devices.

Computational support for human-data interaction fits into the nascent research area of visual analytics, where computational methods such as machine learning and data mining are integrated into the sensemaking loop to support human users in their analysis process. My work on TimeFork [C41] combines automatic prediction with human insight for time-series data. ConceptVector [J55] aids rapid dictionary building through advanced word embeddings. TopicLens [J48] provides real-time topic modeling within a user-controlled lens.
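
These tools share a common human-in-the-loop pattern: the system scores and ranks candidates, the user accepts or rejects a few, and the ranking updates before the next round. The TypeScript sketch below illustrates this pattern in the spirit of ConceptVector's dictionary building; the two-dimensional vectors and all names are hypothetical stand-ins for real word embeddings.

```typescript
// Hypothetical sketch: rank candidate words by similarity to the words
// the user has accepted so far; each acceptance refines the next round.

type Vector = number[];

function cosine(a: Vector, b: Vector): number {
  const dot = a.reduce((sum, v, i) => sum + v * b[i], 0);
  const norm = (v: Vector) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Score each unlabeled candidate by its average similarity to the
// accepted seed vectors, then return candidates best-first.
function rank(candidates: Map<string, Vector>, accepted: Vector[]): string[] {
  return [...candidates.entries()]
    .map(([word, vec]) => ({
      word,
      score: accepted.reduce((s, a) => s + cosine(a, vec), 0) / accepted.length,
    }))
    .sort((a, b) => b.score - a.score)
    .map((r) => r.word);
}

// Toy 2-D "embeddings"; a real system would use learned word vectors.
const vocab = new Map<string, Vector>([
  ["happy", [0.9, 0.1]],
  ["joyful", [0.85, 0.2]],
  ["table", [0.1, 0.9]],
]);
console.log(rank(vocab, [[0.9, 0.15]])); // ["happy", "joyful", "table"]
```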

My work on novel devices for analytics goes beyond traditional computers equipped with mouse and keyboard and into the space of emerging platforms such as touch-based, gestural, and tangible computing, all to augment human abilities using the action-as-catalyst concept. Our extended multitouch concept [C28] uses a Kinect depth camera to infer touch and hand posture. Tracking people in a physical space allows us to use proxemics to understand how they employ their bodies to relate to each other and to large displays [C42]. In fact, smartwatches can serve as powerful companions for data visualization on such large displays, which we exploit in our David + Goliath framework [C49]. Finally, our most recent work studies conveying data through olfactory displays [J61], which resulted in my students building two physical display prototypes.

Future Research Outlook

The ultimate goal for my research is to enable real users to solve real tasks—such as understanding large-scale datasets, seeing the structure of huge hierarchies, or navigating large information spaces—that were previously beyond their reach. I am applying these ideas to furthering science, society, and democracy in an effort I call Visualization for Good (Vis4Good), which includes improving data and visualization literacy [J57], supporting medical science [C52], enabling scientific discovery [J59], and promoting public safety [C29, C32, J35].

Teaching Philosophy

Teaching is just another example of how we as scholars communicate our knowledge to a broader audience, be it our students, our colleagues, or the general public. Classrooms and one-on-one mentoring alike give me a stage where I can share my passion for computing, and I have found that the more enthusiastic I am, the more my students catch that enthusiasm.

In my four years at the UMD iSchool, I have developed three new visualization courses from scratch (two graduate, one undergraduate). All three are immensely popular: they fill to capacity the same day registration opens, and the waitlists rival the number of seats. My course evaluation scores typically rank well above the college average.

I lead a large research group, advising a total of nine Ph.D. students. Since joining UMD, I have graduated two Ph.D.s, with an additional three within a year of graduation. My advising method is inclusive, practical, and hands-on. I received the Purdue Graduate Student Mentoring Award in 2014, and was a runner-up for the UMD Graduate Faculty Mentor of the Year Award in 2017.

Service to the Scientific Community

Service to the scientific community is central to my personal mission, and I have so far served on more than 50 technical program committees. I am an associate editor of IEEE TVCG and the Information Visualization journal, and series co-editor of the Morgan & Claypool Synthesis Lectures on Visualization. The pinnacle of my professional service so far came in 2016 and 2017, when I was chosen to serve as papers co-chair for the IEEE InfoVis conference. Chairing was challenging but rewarding, and I introduced several innovations, including a revised call for papers, a reviewer scorecard, and multiple educational efforts to improve review quality.

I have also realized that my own experiences have given me insight into the academic enterprise, and I have begun to publish blog articles on this topic on my website. Blogs are an excellent complement to academic publications, and I will continue this practice as a service to the field. 

Service to the University

When I joined UMD in 2014, I also became the director of the iSchool’s Master of Science in Human-Computer Interaction (HCIM) program. That fall, the HCIM program was in trouble, with zero incoming students. The program was research-oriented, but was unable to compete effectively with older, better-known, and higher-ranked programs. My first action as director was thus to change the admission criteria to focus on students with a design background (rather than a research one) who were looking to earn a master’s degree and then join industry, essentially turning the program into a professional one. This decision has since been validated several times over: the program now has more than 40 students, and our incoming class for 2018 is 50!

In 2016, I was invited to become the eighth director of UMD’s Human-Computer Interaction Laboratory (HCIL), the oldest such lab in North America and one of the most reputable in the world. Serving as director has been the highest honor of my career so far, and going forward, I plan to provide leadership to the HCIL, protect its legacy, and introduce some ideas of my own.