My thoughts and guidelines on visualizing non-3D data using 3D graphics.

3D Visualization for Nonspatial Data: Guidelines and Challenges

By Niklas Elmqvist, University of Maryland, College Park

This year at the IEEE VIS 2017 conference, I was given the privilege of speaking about something that I never thought I’d be able to speak about at VIS: my “dark” past of trying to use 3D graphics to visualize non-spatial data. My keynote was called “Towards Ubiquitous Analytics: Reflections on Two Decades of Anytime, Anywhere Analytics”, and I received a lot of positive feedback about it, so I decided to turn it into a slightly longer form here.

Actually, it would be more accurate to say that I selfishly grabbed the opportunity to speak about this when I was asked by the organizers of the Immersive Analytics 2017 workshop to give a keynote, and I immediately turned it into a story about myself (sorry!).

Anyway, immersive analytics is loosely defined as the use of novel computing technology to enable situated and streamlined sensemaking, and as I thought about this invitation in more depth and started adding up the years, I realized I had been doing some form or another of this for almost 20 years (hey, I’m getting old!). In this blog post, I will tell a little of this story, and—more importantly—use my experience from all this work to derive a set of rules and cautions about how to do immersive and ubiquitous analytics correctly.

By the way, while this blog post elaborates on my talk a little more, the slides for my Immersive Analytics 2017 keynote can be found here.

The Story So Far

I grew up a computer gamer, and for me, the thing about games is that from the very beginning they enticed me not just to play them, but to build them. All the way from when I was around 7 years old up through my time as an undergraduate student, I was building computer games of increasing sophistication, and I was pretty sure that I’d join the game industry as a professional game developer as soon as I graduated. I even interviewed with some famous game development companies during my last semester. However, for some reason that I am still not sure about to this day (but which I am grateful for), I nixed these plans and instead continued in academia as a Ph.D. student working on visualization (in fact, I believe my Ph.D. advisor had a big hand in convincing me).

Of course, you can take the gamer out of the game development industry, but you can’t take gaming out of the gamer, so I spent most of my Ph.D. studies looking for ways to apply 3D graphics to data visualization. However, the data I was working on was inherently abstract and non-spatial in nature, so this was an uphill battle. My first forays into this area, which could conceivably be seen as early ideas toward immersive or ubiquitous analytics, included the 3Dwm open source project (from 1999), which was a 3DUI graphical shell for virtual reality, as well as the 3DVN augmented reality navigation project (from 2004).

Soon enough, however, my Ph.D. work became all about trying to find ways to mitigate or eliminate problems with using 3D graphics on 2D screens for visualization purposes. More specifically, I ended up focusing on one such problem: occlusion, or the phenomenon that nearby objects obscure more distant ones in a 3D environment. More on this below, but when I eventually graduated with my Ph.D. and moved on to a postdoc position with Jean-Daniel Fekete at Inria in France (2007), I was mostly disillusioned about the prospects of using 3D graphics for non-spatial and abstract data. Accordingly, I stopped all my research on this topic.

Of course, today, ten years later, VR and AR hardware has finally caught up and is now mature and available at a price point where even hobby consumers can buy it for their own homes. For that reason, you could argue that I did my research 10 years too early. With the technology finally here, I am now carefully and mindfully exploring ways to bring immersive 3D back into my research through projects I call “ubiquitous analytics” and “situated data streams”. Those are the topics of another blog post. Nevertheless, my time working on 3D graphics for visualization has earned me some hard-won experience that I wanted to share here.

Overview

The main feature of my keynote at Immersive Analytics 2017 was a list of rules of thumb for what to do and not to do when bringing abstract and non-spatial data to 3D in order to enable a situated, immersive, and ubiquitous workflow for analytics. This list draws both on my own experiences and on the writings, teachings, and discussions of people such as Tamara Munzner, Jean-Daniel Fekete, Ben Shneiderman, and many others. I don’t take special credit for any of these rules other than to say that I have run into, and tried to mitigate, every one of these problems firsthand.

Below is the list; in the following subsections I talk about each of them in more detail.

  1. Actual 3D immersion is not always necessary;
  2. No unnecessary/unmotivated 3D;
  3. 3D perspective is not your friend;
  4. Occlusion is a problem;
  5. 3D navigation is hard;
  6. Rotated 3D text is not legible; and
  7. Don’t replicate the real world (unless part of the data).

Rule #1 – Actual 3D Immersion is Not Always Necessary

I think the instinct for many people when they see the “immersive analytics” moniker is to assume that this is all about analytics in virtual and augmented reality. However, as Benjamin Bach mentioned during the opening talk for the Immersive Analytics 2017 workshop, the “immersive” part of the IA definition does not necessarily refer to 3D immersion, but can also refer to a situated or in-the-flow type of sensemaking. I would like to emphasize this as well: you can be situated without having to wear a head-mounted display; you can be immersed while analyzing data on your tablet; you can be in the flow while viewing scatterplots on your smartwatch. It’s all about how you design your tools rather than the specific hardware that is being used.

In other words, this is a cautionary argument: it’s not necessary to go to full-fledged 3D immersion using VR or AR hardware to enable an immersive analytics workflow. This is also one of the reasons why I think the “immersive” part of the IA concept is a little unfortunate; I much prefer the word “ubiquitous”—as in “ubiquitous computing” (which carries better connotations)—yielding “ubiquitous analytics” (UA, or ubilytics) to refer to anytime, anywhere sensemaking. Incidentally, this is also the name I chose for my own long-term research project on this topic, which I launched in 2012 together with Pourang Irani at the University of Manitoba.

Rule #2 – No Unnecessary/Unmotivated 3D

In the immortal words of Peter Parker’s Uncle Ben, “with great power comes great responsibility.” 3D graphics has the capacity to create photorealistic scenes that mimic the real world, but such great capacity comes at a cost (a lot of these costs are covered below). Most importantly, while you would be forgiven for instinctively thinking that three dimensions are better than two, the problem is that this third dimension brings with it a whole host of problems. Too many times have I reviewed papers for various visualization and HCI venues where the authors breathlessly build their work on the assumption that 3D > 2D simply because 3 > 2. Nothing could be further from the truth.

Rather than this simplistic reasoning, the argument should run as follows:

  • 3D data (volumes, 3D flows, 3D trajectories) => 3D representations
  • 2D data (space, areas, 2D flows, 2D trajectories) => 2D representations
  • 1D data (lists, rankings, sequences) => 1D representations
  • (Multidimensional or nonspatial data => 1D or 2D representations)

Basically, this means that if your data is three-dimensional in nature, by all means, go for a three-dimensional representation. However, in almost every other situation, you should be primarily looking at 2D (maps or areas) or even 1D representations (lists or paths). In fact—and this might be a little controversial—if you are dealing with multidimensional or nonspatial data, 1D or 2D representations are most likely your best bet.
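
To make this mapping concrete, here is the rule of thumb above as a tiny, deliberately opinionated Python sketch; the category labels are purely illustrative, not any standard taxonomy:

    def recommended_representation(data_kind: str) -> str:
        """Map the nature of the data to the dimensionality of its representation."""
        three_d = {"volume", "3d flow", "3d trajectory"}
        two_d = {"area", "map", "2d flow", "2d trajectory"}
        one_d = {"list", "ranking", "sequence"}
        if data_kind in three_d:
            return "3D representation"
        if data_kind in two_d:
            return "2D representation"
        if data_kind in one_d:
            return "1D representation"
        # Multidimensional or nonspatial data: project down, never up.
        return "1D or 2D representation"

    print(recommended_representation("volume"))      # 3D representation
    print(recommended_representation("ranking"))     # 1D representation
    print(recommended_representation("nonspatial"))  # 1D or 2D representation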

Rule #3 – 3D Perspective is Not Your Friend

I didn’t specify the costs of 3D representation in my blanket statement about unnecessary and unmotivated 3D above, so this is the place to go into a little more detail. One of these costs is that when your goal is to create precise and efficient spatial visual representations of data, 3D perspective is not your friend. To make a 3D environment look realistic, most software uses a perspective projection, which means that you get a foreshortening effect where distant objects are nonlinearly transformed, causing shapes to be distorted and objects to shrink in inverse proportion to their distance from the viewer. While this mimics the way we perceive the real world, it is counterproductive when you want to interpret distance information as accurately as possible. The problem is compounded by the fact that in an artificial data representation, depth cues such as familiar shapes, common reference points, and atmospheric coloring (where more distant objects tend towards blue) are typically non-existent.
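
To see just how quickly foreshortening eats into a visual encoding, here is a minimal pinhole-camera sketch; the focal length is an arbitrary illustrative constant, since only the ratios matter:

    # Apparent on-screen extent of an object under perspective projection:
    # it shrinks in inverse proportion to its distance from the camera.
    FOCAL_LENGTH = 1.0  # arbitrary; only the ratios below matter

    def projected_size(world_size: float, depth: float) -> float:
        return world_size * FOCAL_LENGTH / depth

    # The same unit-sized glyph at increasing depths:
    for depth in (1.0, 2.0, 4.0, 8.0):
        print(f"depth {depth:4.1f} -> on-screen size {projected_size(1.0, depth):.3f}")
    # 1.000, 0.500, 0.250, 0.125: identical data values no longer map
    # to identical screen sizes once depth carries information.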

This is a tricky problem to solve, particularly if a visualization is being integrated into a real environment in an augmented reality setting, because this precludes using axonometric projections, where perspective foreshortening is eliminated (at the cost of realism). Stereoscopic rendering, where each eye is given a slightly different viewpoint to mimic the real world, can improve depth perception, particularly when the user moves their head from side to side to generate motion parallax among the 3D objects. Nevertheless, the easiest solution is probably just to avoid using the depth dimension as a significant information-carrying visual channel in the visualization.
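
When realism is not required (i.e., outside AR), dropping perspective amounts to dropping the division by depth. A sketch, assuming a simple camera looking down the z axis:

    def project_perspective(x, y, z, focal=1.0):
        # Farther points (larger z) are squeezed toward the view center.
        return (focal * x / z, focal * y / z)

    def project_orthographic(x, y, z):
        # Depth simply does not participate: no foreshortening, no distortion.
        return (x, y)

    print(project_perspective(1.0, 1.0, 2.0), project_perspective(1.0, 1.0, 10.0))
    # (0.5, 0.5) (0.1, 0.1) -- the same point drifts as it recedes
    print(project_orthographic(1.0, 1.0, 2.0), project_orthographic(1.0, 1.0, 10.0))
    # (1.0, 1.0) (1.0, 1.0) -- screen position is independent of depth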

Rule #4 – Occlusion is a Problem

Oh boy, this is a big one for me, maybe particularly because I wrote my Ph.D. dissertation on this topic. Basically, occlusion refers to the highly obvious phenomenon that nearby objects can occlude distant objects, even if the nearby object is relatively small (just consider covering your eyes with your hand; it doesn’t matter how big the object in front of you is, you won’t see it). Occlusion is a major depth cue for humans when we perceive the real world, but it is also a major pain in the neck when we are trying to create 3D visual representations where perceiving all of the data is vital. In my Ph.D. work, I further classified this problem into several levels: (1) having no knowledge of an object that is entirely hidden; (2) having awareness of hidden objects; (3) being able to identify hidden objects; and (4) fully accessing hidden objects.
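
To make the first (and worst) level concrete, here is a hedged sketch that flags objects a viewer would have no knowledge of. Objects are approximated as bounding spheres, and a sphere counts as hidden if the single ray from the camera to its center is blocked by a nearer sphere; a real occlusion test would sample the whole silhouette, not just the center:

    import math

    def ray_hits_sphere(origin, direction, center, radius):
        """Distance along a unit ray to its closest approach inside the sphere, or None on a miss."""
        oc = [c - o for o, c in zip(origin, center)]
        t = sum(d * v for d, v in zip(direction, oc))  # projection onto the ray
        if t <= 0.0:
            return None  # sphere is behind the ray origin
        closest = [o + t * d for o, d in zip(origin, direction)]
        dist2 = sum((p - c) ** 2 for p, c in zip(closest, center))
        return t if dist2 <= radius ** 2 else None

    def hidden_objects(camera, spheres):
        """Indices of spheres whose center ray is blocked by another, nearer sphere."""
        hidden = []
        for i, (center, _radius) in enumerate(spheres):
            to_center = [c - o for o, c in zip(camera, center)]
            length = math.sqrt(sum(v * v for v in to_center))
            direction = [v / length for v in to_center]
            for j, (other_center, other_radius) in enumerate(spheres):
                if j != i:
                    t = ray_hits_sphere(camera, direction, other_center, other_radius)
                    if t is not None and t < length:
                        hidden.append(i)
                        break
        return hidden

    camera = (0.0, 0.0, 0.0)
    spheres = [((0.0, 0.0, 2.0), 0.5),   # a near sphere...
               ((0.0, 0.0, 5.0), 0.5)]   # ...directly in front of a far one
    print(hidden_objects(camera, spheres))  # [1]: the far sphere is invisible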

These levels are useful when thinking about how to manage occlusion because different techniques have different strengths. For example, in my work, I designed a “dynamic transparency” approach that essentially provided X-ray vision to the viewer, allowing them to see through nearby objects so that nothing was hidden (the caveat here is that this approach may yield a “reverse occlusion” effect, where distant objects hide nearby ones). Such an approach addresses all four levels, up to and including full access to hidden objects. Other techniques, such as displacing objects or highlighting their presence, have other capabilities. Nevertheless, this is a tricky problem to manage and important to be aware of.
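
As a hedged illustration of the idea (the types and names here are hypothetical stand-ins, not the actual implementation from my dissertation), dynamic transparency boils down to fading anything that intersects the sight line to an object of interest:

    import math
    from dataclasses import dataclass

    @dataclass
    class SceneObject:
        position: tuple   # (x, y, z) center of the bounding sphere
        radius: float
        opacity: float = 1.0

    def _segment_point_distance(a, b, p):
        """Distance from point p to the line segment from a to b."""
        ab = [bi - ai for ai, bi in zip(a, b)]
        ap = [pi - ai for ai, pi in zip(a, p)]
        t = sum(x * y for x, y in zip(ap, ab)) / sum(v * v for v in ab)
        t = max(0.0, min(1.0, t))
        closest = [ai + t * v for ai, v in zip(a, ab)]
        return math.dist(closest, p)

    def apply_dynamic_transparency(camera, target, objects, faded=0.15):
        """Fade every object whose bounding sphere blocks the camera-target sight line."""
        for obj in objects:
            blocks = _segment_point_distance(camera, target.position, obj.position) < obj.radius
            obj.opacity = faded if blocks and obj is not target else 1.0
            # Caveat from the text: at low opacity, distant objects can start
            # to visually dominate nearby ones ("reverse occlusion").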

Rule #5 – 3D Navigation is Hard

I’m a gamer, and I routinely play first-person games that require me to navigate complex 3D environments using a mouse and a keyboard, but even I have trouble when asked to fly around a 3D world in full six degrees of freedom (6DOF). (I am old enough to remember the game Descent, which asked players to fly around labyrinthine levels in full 6DOF. It was not easy.) Most people are not like me, and, accordingly, they will have an even harder time navigating if the data is presented in a 3D form that requires complex camera control.

The solution to this challenge is either to eliminate 3D navigation entirely (for example, in my dissertation research, I designed an algorithm that automatically computed an optimal tour through a 3D environment), or to constrain it to a familiar form (walking on a ground plane rather than flying) and, ideally, to let the viewpoint be controlled using physical navigation. In other words, instead of asking people to fly around in a 3D space using the mouse and keyboard, let them use normal walking in the real world to navigate around the virtual space. As it happens, most humans (though not all!) are rather proficient at such physical navigation, so this is really the best solution. Practically speaking, this means either using room-scale VR or fully tracked AR.
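
As a sketch of what “constraining to a familiar form” means in practice (the names are illustrative and assume no particular engine), a walking camera exposes only yaw and planar movement, never pitch, roll, or altitude:

    import math
    from dataclasses import dataclass

    @dataclass
    class GroundCamera:
        x: float = 0.0
        z: float = 0.0
        yaw: float = 0.0   # heading in radians; the only rotation allowed

        EYE_HEIGHT = 1.7   # meters; height above the ground plane is fixed

        def turn(self, delta: float):
            self.yaw = (self.yaw + delta) % (2.0 * math.pi)

        def walk(self, distance: float):
            # Movement always stays on the ground plane (y never changes).
            self.x += distance * math.sin(self.yaw)
            self.z += distance * math.cos(self.yaw)

        @property
        def position(self):
            return (self.x, self.EYE_HEIGHT, self.z)

    cam = GroundCamera()
    cam.turn(math.pi / 2)   # face along +x
    cam.walk(2.0)
    print(cam.position)     # approximately (2.0, 1.7, 0.0)

In room-scale VR or fully tracked AR, turn() and walk() would simply be driven by the headset’s tracking data rather than by mouse and keyboard input.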

Rule #6 – Rotated 3D Text is Not Legible

One of the first things I did when I was working on the 3Dwm project was to create 3D text labels that could be placed anywhere in the virtual environment, and when I added VNC support, I amused myself by placing 2D applications on the sides of a 3D cube. It turns out that even if this looks cool, it is a patently bad idea, because distorting text in this way makes reading it significantly more difficult. A 3D signpost may be a faithful replication of the real world (and, as it turns out, replicating the real world is a bad idea in itself; see the next rule), but it is hardly useful, because some signs will invariably be pointing in a direction where you can’t read them. We can do better than that.

How to do better? It’s really simple. In a 3D environment, if you want your labels to be legible, make sure that they are always facing the user (i.e. perpendicular to the viewing direction). In fact, this rule can be generalized to say that any 2D interface component, like a dialog or a window, should be displayed so that it is always oriented towards the user. Yes, the result may not look at all like it would in the real world, but if it’s efficiency and accuracy we’re after, we can do better than the real world. Let it go.
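
This always-face-the-viewer behavior is the classic billboarding technique, and the math is small. A sketch, assuming a label whose front faces +z when unrotated:

    import math

    def billboard_angles(label_pos, camera_pos):
        """Yaw and pitch (radians) that turn a label's front toward the camera."""
        dx = camera_pos[0] - label_pos[0]
        dy = camera_pos[1] - label_pos[1]
        dz = camera_pos[2] - label_pos[2]
        yaw = math.atan2(dx, dz)                    # spin about the up axis
        pitch = math.atan2(dy, math.hypot(dx, dz))  # then tilt up or down
        return yaw, pitch

    # Re-run each frame for every label so the text tracks the moving viewpoint:
    yaw, pitch = billboard_angles((5.0, 0.0, 5.0), (0.0, 2.0, 0.0))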

Rule #7 – Don’t Replicate the Real World

Many 3D user interface designers go through a phase where they try to replicate the real world in their interfaces, thinking that the familiarity will entice the user and make the transition easier. After all, many of the buttons, sliders, and windows in our 2D graphical environments draw inspiration from real-world physical objects, so why would it not make sense to continue this tradition when building our 3D user interfaces? I’ve seen quite a few 3D models of offices where the designers clearly thought this was a good idea, because isn’t the 2D workspace in Windows or macOS called the desktop anyway? Heck, I did this in the 3Dwm system myself.

Resist this temptation. Unless the real world is somehow part of the data that you’re trying to display, for example because you are previsualizing a new building in its locale, or showing a fluid simulation in situ, you should not try to generate faithful representations of the real world. Why? Because the whole purpose of a computer is to augment our abilities and eliminate our limitations, many of them imposed by the physical world. This is actually a generalization of the previous rule: yes, reading upside-down text is difficult in the real world, but that is a reason to simply eliminate upside-down text, not a reason to support it in the name of realism. We don’t need realism if it’s efficiency and accuracy we are trying to achieve. It doesn’t make sense to force the user to fly over to the 3D model of the printer to print a document on the physical printer. There are much more efficient ways to do this in a computer (the standard printing dialog comes to mind).

Again, I should emphasize that this rule only applies if you are designing an interface with efficiency, productivity, and accuracy in mind. If you’re moving around a historical site, playing a 3D game, or viewing airflow around the space shuttle, by all means, do replicate the real world. It’s part of the experience. But be sure to leave the physical limitations of the real world behind when you can. Your users will thank you for it.

Conclusion

In this slightly expanded version of my keynote on my experience with 3D visualization for nonspatial data, I’ve tried to list my hard-won experiences on how to do this correctly. As always, these are my own opinions, and they are by no means written in stone. Part of my message at the IA workshop was that these “rules” were intended to be a little provocative—after all, workshops are supposed to be venues for lively discussion and friendly disagreement. I am open to hearing your views and takes on these ideas.

My thoughts here are admittedly cautionary and somewhat negative in tone, and I have received some pushback on this. That criticism is fair, but I refer again to Uncle Ben. Just because our world is three-dimensional doesn’t mean that this is somehow a “natural” representation for data. Understanding our physical 3D surroundings is a complex challenge for our visual and cognitive systems, and there are many limitations that fetter our abilities. I understand that a lot of people are excited about the new VR and AR hardware that is becoming available, but don’t let the shininess of these new toys blind you. Visualizing data in 3D is certainly “cool” and can often promote visceral, affective, and immersive experiences that a more “boring” visual representation cannot. However, if your goal is efficient and accurate perception of data, then this “boring” representation is still probably your best bet, no matter how much cool hardware you now have on hand.