Ubiquitous Analytics
Computing is becoming increasingly embedded in our everyday lives: mobile devices are growing smaller yet more powerful, large displays are getting cheaper, and our physical environments are becoming intelligent, integrating an ever-growing number of digital processors. Meanwhile, data is everywhere, and people need to leverage all of this digital infrastructure to turn it into actionable information about their hobbies, health, and personal interests. Ubiquitous analytics—or ubilytics—is a new paradigm for interacting with data that stakes out a digital future of ever-present, always-on computing: one that supports manipulating, thinking about, and analyzing data anytime, anywhere.
The Ubilytics project was funded from February 2013 to January 2018 by a U.S. National Science Foundation CAREER grant, awards #1253863 and #1539534 (moved to UMD).
2023
David Saffo, Andrea Batch, Cody Dunne, Niklas Elmqvist (2023): Through Their Eyes and In Their Shoes: Providing Group Awareness During Collaboration Across Virtual Reality and Desktop Platforms. In: Proceedings of the ACM Conference on Human Factors in Computing Systems, ACM, New York, NY, USA, 2023. PDF: https://users.umiacs.umd.edu/~elm/projects/vrxd/vrxd.pdf | OSF: https://osf.io/wgprb/
Abstract: Many collaborative data analysis situations benefit from collaborators using different platforms. However, maintaining group awareness between team members using diverging devices is difficult, not least because common ground diminishes. A person using head-mounted VR cannot physically see a user on a desktop computer even while co-located, and the desktop user cannot easily relate to the VR user's 3D workspace. To address this, we propose the "eyes-and-shoes" principles for group awareness and abstract them into four levels of techniques. Furthermore, we evaluate these principles with a qualitative user study of 6 participant pairs synchronously collaborating across distributed desktop and VR head-mounted devices. In this study, we vary the group awareness techniques between participants and explore two visualization contexts within participants. The results indicate that the more the visual metaphors and views of participants diverge, the greater the level of group awareness needed. A copy of this paper, the study preregistration, and all supplemental materials required to reproduce the study are available at https://osf.io/wgprb/.
2022
Sebastian Hubenschmid, Jonathan Wieland, Daniel Immanuel Fink, Andrea Batch, Johannes Zagermann, Niklas Elmqvist, Harald Reiterer (2022): ReLive: Bridging In-Situ and Ex-Situ Visual Analytics for Analyzing Mixed Reality User Studies. In: Proceedings of the ACM Conference on Human Factors in Computing Systems, pp. 24:1–24:20, ACM, New York, NY, USA, 2022. PDF: https://users.umiacs.umd.edu/~elm/projects/relive/relive.pdf | Video: https://youtu.be/BaNZ02QkZ_k
Abstract: The nascent field of mixed reality is seeing an ever-increasing need for user studies and field evaluation, which are particularly challenging given device heterogeneity, diversity of use, and mobile deployment. Immersive analytics tools have recently emerged to support such analysis in situ, yet the complexity of the data also warrants an ex-situ analysis using more traditional non-immersive visual analytics setups. To bridge the gap between both approaches, we introduce ReLive: a mixed-immersion visual analytics framework for exploring and analyzing mixed reality user studies. ReLive combines an in-situ virtual reality view with a complementary ex-situ desktop view. While the virtual reality view allows users to relive interactive spatial recordings replicating the original study, the synchronized desktop view provides a familiar interface for analyzing aggregated data. We validated our concepts in a two-step evaluation consisting of a design walkthrough and an empirical expert user study.
2019
Sriram Karthik Badam, Andreas Mathisen, Roman Rädle, Clemens Nylandsted Klokmose, Niklas Elmqvist (2019): Vistrates: A Component Model for Ubiquitous Analytics. In: IEEE Transactions on Visualization & Computer Graphics, 2019. PDF: http://www.umiacs.umd.edu/~elm/projects/vistrates/vistrates.pdf | DOI: https://doi.org/10.1109/TVCG.2018.2865144
Abstract: Visualization tools are often specialized for specific tasks, which turns the user's analytical workflow into a fragmented process performed across many tools. In this paper, we present a component model design for data visualization to promote modular designs of visualization tools that enhance their analytical scope. Rather than fragmenting tasks across tools, the component model supports unification, where components—the building blocks of this model—can be assembled to support a wide range of tasks. Furthermore, the model also provides additional key properties, such as support for collaboration, sharing across multiple devices, and adaptive usage depending on expertise, from creating visualizations using dropdown menus, through instantiating components, to actually modifying components or creating entirely new ones from scratch using JavaScript or Python source code. To realize our model, we introduce Vistrates, a literate computing platform for developing, assembling, and sharing visualization components. From a visualization perspective, Vistrates features cross-cutting components for visual representations, interaction, collaboration, and device responsiveness maintained in a component repository. From a development perspective, Vistrates offers a collaborative programming environment where novices and experts alike can compose component pipelines for specific analytical activities. Finally, we present several Vistrates use cases that span the full range of the classic "anytime" and "anywhere" motto for ubiquitous analysis: from mobile and on-the-go usage, through office settings, to collaborative smart environments covering a variety of tasks and devices.
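To illustrate the component-model idea from the Vistrates abstract, here is a minimal Python sketch of components composed into an analysis pipeline. This is illustrative only, not the Vistrates API (Vistrates itself is a web-based literate-computing platform); the Component and Pipeline classes and the example data are invented for this sketch.

```python
# Minimal, hypothetical sketch of a component pipeline (illustrative only;
# not the Vistrates API).

class Component:
    """A building block with a single data-in/data-out interface."""
    def process(self, data):
        raise NotImplementedError

class FilterRows(Component):
    def __init__(self, predicate):
        self.predicate = predicate
    def process(self, data):
        return [row for row in data if self.predicate(row)]

class Aggregate(Component):
    def __init__(self, key):
        self.key = key
    def process(self, data):
        counts = {}
        for row in data:
            counts[row[self.key]] = counts.get(row[self.key], 0) + 1
        return counts

class Pipeline(Component):
    """Components compose: a pipeline is itself a component."""
    def __init__(self, *components):
        self.components = components
    def process(self, data):
        for component in self.components:
            data = component.process(data)
        return data

rows = [{"device": "phone"}, {"device": "tablet"}, {"device": "phone"}]
pipeline = Pipeline(FilterRows(lambda r: r["device"] != "desktop"),
                    Aggregate("device"))
print(pipeline.process(rows))  # {'phone': 2, 'tablet': 1}
```

The point of the sketch is that a pipeline is itself a component, so assemblies can be nested, shared, and reused across analytical activities.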
2017
Sriram Karthik Badam, Niklas Elmqvist (2017): Visfer: Camera-based Visual Data Transfer for Cross-Device Visualization. In: Information Visualization, 2017. PDF: http://www.umiacs.umd.edu/~elm/projects/qrvis/visfer.pdf
Abstract: Going beyond the desktop to leverage novel devices—such as smartphones, tablets, or large displays—for visual sensemaking typically requires supporting extraneous operations for device discovery, interaction sharing, and view management. Such operations can be time-consuming and tedious, and distract the user from the actual analysis. Embodied interaction models in these multi-device environments can take advantage of the natural interaction and physicality afforded by multimodal devices and help effectively carry out these operations in visual sensemaking. In this paper, we present cross-device interaction models for visualization spaces that are embodied in nature, derived from a user study eliciting actions from participants that could trigger a portrayed effect of sharing visualizations (and therefore information) across devices. We then explore one common interaction style from this design elicitation called Visfer, a technique for effortlessly sharing visualizations across devices using the visual medium. More specifically, this technique involves taking pictures of visualizations, or rather the QR codes augmenting them, on a display using the built-in camera on a handheld device. Our contributions include a conceptual framework for cross-device interaction and the Visfer technique itself, as well as transformation guidelines to exploit the capabilities of each specific device and a web framework for encoding visualization components into animated QR codes, which capture multiple frames of QR codes to embed more information. Beyond this, we also present the results from a performance evaluation of the visual data transfer enabled by Visfer. We end the paper by presenting application examples of our Visfer framework.
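The transfer mechanism behind the "animated QR codes" described above can be sketched roughly as follows. This is not the Visfer web framework; it is a hypothetical Python example using the third-party qrcode package, with the chunk size and the JSON frame format chosen arbitrarily for illustration.

```python
# Rough sketch of "animated" QR transfer: split a payload into numbered
# chunks and render one QR code image per chunk (frames would be cycled
# on the sending display and reassembled on the receiving camera side).
# Requires the third-party 'qrcode' package (pip install qrcode[pil]).
import json
import qrcode

def encode_frames(payload: str, chunk_size: int = 256):
    chunks = [payload[i:i + chunk_size] for i in range(0, len(payload), chunk_size)]
    frames = []
    for index, chunk in enumerate(chunks):
        # Each frame carries its index and the total count so the
        # receiver can reorder frames and detect missing ones.
        frame = json.dumps({"i": index, "n": len(chunks), "data": chunk})
        frames.append(qrcode.make(frame))
    return frames

def decode_frames(scanned):
    """Reassemble the payload from the decoded text of each scanned frame."""
    messages = sorted((json.loads(s) for s in scanned), key=lambda m: m["i"])
    assert len(messages) == messages[0]["n"], "missing frames"
    return "".join(m["data"] for m in messages)

frames = encode_frames(json.dumps({"view": "scatterplot", "data": list(range(100))}))
frames[0].save("frame0.png")  # one PNG per animation frame
```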
2015
Sujin Jang, Niklas Elmqvist, Karthik Ramani (2015): MotionFlow: Visual Abstraction and Aggregation of Sequential Patterns in Human Motion Tracking Data. In: IEEE Transactions on Visualization and Computer Graphics, 21 (1), pp. 21–30, 2015. PDF: http://www.umiacs.umd.edu/~elm/projects/motionflow/motionflow.pdf
Abstract: Pattern analysis of human motions, which is useful in many research areas, requires understanding and comparison of different styles of motion patterns. However, working with human motion tracking data to support such analysis poses great challenges. In this paper, we propose MotionFlow, a visual analytics system that provides an effective overview of various motion patterns based on an interactive flow visualization. This visualization formulates a motion sequence as transitions between static poses, and aggregates these sequences into a tree diagram to construct a set of motion patterns. The system also allows the users to directly reflect the context of data and their perception of pose similarities in generating representative pose states. We provide local and global controls over the partition-based clustering process. To support the users in organizing unstructured motion data into pattern groups, we designed a set of interactions that enables searching for similar motion sequences from the data, detailed exploration of data subsets, and creating and modifying the group of motion patterns. To evaluate the usability of MotionFlow, we conducted a user study with six researchers with expertise in gesture-based interaction design. They used MotionFlow to explore and organize unstructured motion tracking data. Results show that the researchers were able to easily learn how to use MotionFlow, and the system effectively supported their pattern analysis activities, including leveraging their perception and domain knowledge.
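The aggregation step described above (formulating motion as pose transitions and merging sequences into a tree of patterns) can be approximated with a simple prefix tree. The following Python sketch is illustrative only, not the MotionFlow implementation; the pose labels are invented.

```python
# Minimal sketch (not the MotionFlow implementation): aggregate pose
# sequences into a prefix tree, where each node counts how many motion
# sequences pass through that pose at that depth.

def build_pattern_tree(sequences):
    root = {"pose": None, "count": 0, "children": {}}
    for sequence in sequences:
        node = root
        node["count"] += 1
        for pose in sequence:
            child = node["children"].setdefault(
                pose, {"pose": pose, "count": 0, "children": {}})
            child["count"] += 1
            node = child
    return root

def print_tree(node, depth=0):
    if node["pose"] is not None:
        print("  " * depth + f"{node['pose']} ({node['count']})")
    for child in node["children"].values():
        print_tree(child, depth + 1)

# Each sequence is a list of (clustered) pose labels over time.
sequences = [["stand", "crouch", "jump"],
             ["stand", "crouch", "stand"],
             ["stand", "wave"]]
print_tree(build_pattern_tree(sequences))
```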
Alexandru Dancu, Mickael Fourgeaud, Mohammad Obaid, Morten Fjeld, Niklas Elmqvist (2015): Map Navigation Using a Wearable Mid-air Display. In: Proceedings of the ACM Conference on Human-Computer Interaction with Mobile Devices and Services, pp. 71–76, 2015. PDF: http://www.umiacs.umd.edu/~elm/projects/midairmap/midairmap.pdf | Talk: https://www.youtube.com/watch?v=yswf1bJafp8
Abstract: Advances in display technologies will soon make wearable mid-air displays—devices that project dynamic images floating in mid-air relative to a mobile user—widely available. This kind of device will offer new input and output modalities compared to current mobile devices, and display information on the go. In this paper, we present a functional prototype for the purpose of understanding these modalities in more detail, including suitable applications and device placement. We first collected results from an online survey that identified map navigation as one of the most desirable applications and suggested placement preferences. Based on these rankings, we built a physical mid-air display prototype consisting of a mobile phone, a pico projector, and a holder frame, mountable in two different configurations: wrist and chest. We then designed a user study asking participants to navigate different physical routes using map navigation displayed in mid-air. Participants considered the wrist mount to be three times safer for map navigation than the chest mount. The study results validate the use of a mid-air display for map navigation. Based on both our online survey and user study, we derive implications for the design of wearable mid-air displays.
Sriram Karthik Badam, Eli Raymond Fisher, Niklas Elmqvist (2015): Munin: A Peer-to-Peer Middleware for Ubiquitous Analytics and Visualization Spaces. In: IEEE Transactions on Visualization & Computer Graphics, 21 (2), pp. 215–228, 2015. PDF: http://www.umiacs.umd.edu/~elm/projects/munin/munin.pdf | Video: https://www.youtube.com/watch?v=ZKIXSdUm6-s | Slides: http://www.slideshare.net/NickElm/munin-a-peertopeer-middleware-forubiquitous-analytics-and-visualization-spaces
Abstract: We present Munin, a software framework for building ubiquitous analytics environments consisting of multiple input and output surfaces, such as tabletop displays, wall-mounted displays, and mobile devices. Munin utilizes a service-based model where each device provides one or more dynamically loaded services for input, display, or computation. Using a peer-to-peer model for communication, it leverages IP multicast to replicate the shared state among the peers. Input is handled through a shared event channel that lets input and output devices be fully decoupled. It also provides a data-driven scene graph to delegate rendering to peers, thus creating a robust, fault-tolerant, decentralized system. In this paper, we describe Munin's general design and architecture, provide several examples of how we are using the framework for ubiquitous analytics and visualization, and present a case study on building a Munin assembly for multidimensional visualization. We also present performance results and anecdotal user feedback suggesting that combining a service-oriented, data-driven model with middleware support for data sharing and event handling eases the design and execution of high-performance distributed visualizations.
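The core replication idea (peers sharing state over IP multicast) can be sketched with the Python standard library alone. This is not the actual Munin middleware; the multicast group, port, and JSON message format below are arbitrary choices made for illustration.

```python
# Minimal sketch of state replication over IP multicast (standard library
# only). Not the actual Munin middleware; group address, port, and the
# JSON message format are invented for this example.
import json
import socket
import struct

GROUP, PORT = "224.1.1.1", 5007

def publish(update: dict):
    """Broadcast a state update to all peers in the multicast group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
    sock.sendto(json.dumps(update).encode(), (GROUP, PORT))

def listen(shared_state: dict):
    """Join the group and merge incoming updates into the local replica."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:  # runs until interrupted
        data, _ = sock.recvfrom(65535)
        shared_state.update(json.loads(data))

# e.g. publish({"selection": [3, 7, 9], "viewport": {"x": 0.2, "zoom": 1.5}})
```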
2014
Jonathan C. Roberts, Panagiotis D. Ritsos, Sriram Karthik Badam, Dominique Brodbeck, Jessie Kennedy, Niklas Elmqvist (2014): Visualization Beyond the Desktop—The Next Big Thing. In: IEEE Computer Graphics & Applications, 34 (6), pp. 26–34, 2014. PDF: http://www.umiacs.umd.edu/~elm/projects/beyond-desktop/beyond-desktop.pdf
Abstract: Visualization is coming of age: with visual depictions being seamlessly integrated into documents and data visualization techniques being used to understand datasets that are ever-growing in size and complexity, the term visualization is entering everyday conversation. But we are on a cusp; visualization researchers need to develop and adapt to today's new devices and tomorrow's technology. Today, we interact with visual depictions through a mouse. Tomorrow, we will be touching, swiping, grasping, feeling, hearing, smelling, and even tasting our data. The next big thing is multi-sensory visualization that goes beyond the desktop.
Sujin Jang, Niklas Elmqvist, Karthik Ramani (2014): GestureAnalyzer: Visual Analytics for Exploratory Analysis of Gesture Patterns. In: Proceedings of the ACM Symposium on Spatial User Interfaces, pp. 30–39, 2014. PDF: http://www.umiacs.umd.edu/~elm/projects/gesture-analyzer/gesture-analyzer.pdf
Abstract: Understanding the intent behind human gestures is a critical problem in the design of gestural interactions. A common method to observe and understand how users express gestures is to use elicitation studies. However, these studies require time-consuming analysis of user data to identify gesture patterns. Also, analysis by humans cannot describe gestures in as much detail as data-based representations of motion features. In this paper, we present GestureAnalyzer, a system that supports exploratory analysis of gesture patterns by applying interactive clustering and visualization techniques to motion tracking data. GestureAnalyzer enables rapid categorization of similar gestures, and visual investigation of various geometric and kinematic properties of user gestures. We describe the system components, and then demonstrate its utility through a case study on mid-air hand gestures obtained from elicitation studies.
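As a rough illustration of the clustering step described above (not the GestureAnalyzer system itself), the following Python sketch groups gesture recordings by hierarchical clustering of simple motion features. The feature vectors and distance threshold are invented, and the sketch assumes NumPy and SciPy are available.

```python
# Illustrative only (not the GestureAnalyzer system): group gesture
# recordings by hierarchical clustering of simple motion features.
# Requires numpy and scipy.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical feature vector per gesture: [mean speed, path length,
# bounding-box volume] extracted from hand-tracking data.
features = np.array([
    [0.90, 1.20, 0.10],   # swipe
    [1.00, 1.30, 0.12],   # swipe
    [0.20, 0.40, 0.30],   # circle
    [0.25, 0.45, 0.28],   # circle
])

# Agglomerative clustering; cutting the dendrogram at a distance threshold
# yields candidate gesture categories the analyst can refine interactively.
labels = fcluster(linkage(features, method="average"), t=0.5, criterion="distance")
print(labels)  # e.g. [1 1 2 2]
```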
Sriram Karthik Badam, Niklas Elmqvist (2014): PolyChrome: A Cross-Device Framework for Collaborative Web Visualization. In: Proceedings of the ACM Conference on Interactive Tabletops and Surfaces, pp. 109–118, 2014. PDF: http://www.umiacs.umd.edu/~elm/projects/polychrome/polychrome.pdf | Slides: http://www.slideshare.net/NickElm/polychrome-a-crossdevice-framework-for-collaborative-web-visualization
Abstract: We present PolyChrome, an application framework for creating web-based collaborative visualizations that can span multiple devices. The framework supports (1) co-browsing new web applications as well as legacy websites with no migration costs (i.e., a distributed web browser); (2) an API to develop new web applications that can synchronize the UI state on multiple devices to support synchronous and asynchronous collaboration; and (3) maintenance of state and input events on a server to handle common issues with distributed applications such as consistency management, conflict resolution, and undo operations. We describe PolyChrome's general design, architecture, and implementation followed by application examples showcasing collaborative web visualizations created using the framework. Finally, we present performance results that suggest that PolyChrome adds minimal overhead compared to single-device applications.
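Point (3) of the abstract above, server-side maintenance of state and input events, can be illustrated with a minimal sketch. The following Python class is not the PolyChrome API; it simply shows how an ordered event log supports last-writer-wins conflict resolution, undo, and state replay for late-joining devices.

```python
# Minimal sketch (not the PolyChrome API) of server-side state maintenance:
# the server keeps an ordered event log plus the current UI state, so
# late-joining devices can fetch the state and undo simply pops the log.

class SessionState:
    def __init__(self):
        self.state = {}   # shared UI state, e.g. {"brush": [0.2, 0.6]}
        self.log = []     # ordered (device_id, key, old_value, new_value)

    def apply(self, device_id, key, value):
        """Apply an update from a device; last-writer-wins on conflicts."""
        self.log.append((device_id, key, self.state.get(key), value))
        self.state[key] = value
        return self.state

    def undo(self):
        """Revert the most recent update, regardless of which device sent it."""
        if not self.log:
            return self.state
        _, key, old_value, _ = self.log.pop()
        if old_value is None:
            self.state.pop(key, None)
        else:
            self.state[key] = old_value
        return self.state

    def replay(self):
        """A newly connected device receives the full current state."""
        return dict(self.state)

session = SessionState()
session.apply("tablet-1", "brush", [0.2, 0.6])
session.apply("wall-display", "brush", [0.3, 0.7])  # last writer wins
session.undo()
print(session.replay())  # {'brush': [0.2, 0.6]}
```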
Eli Raymond Fisher, Sriram Karthik Badam, Niklas Elmqvist (2014): Designing Peer-to-Peer Distributed User Interfaces: Case Studies on Building Distributed Applications. In: International Journal of Human-Computer Studies, 72 (1), pp. 100–110, 2014. PDF: http://www.umiacs.umd.edu/~elm/projects/dui-design/dui-design.pdf
Abstract: Building a distributed user interface (DUI) application should ideally not require any additional effort beyond that necessary to build a non-distributed interface. In practice, however, DUI development is fraught with several technical challenges such as synchronization, resource management, and data transfer. In this paper, we present three case studies on building distributed user interface applications: a distributed media player for multiple displays and controls, a collaborative search system integrating a tabletop and mobile devices, and a multiplayer Tetris game for multi-surface use. While there exist several possible network architectures for such applications, our particular approach focuses on peer-to-peer (P2P) architectures. This focus leads to a number of challenges and opportunities. Drawing from these studies, we derive general challenges for P2P DUI development in terms of design, architecture, and implementation. We conclude with some general guidelines for practical DUI application development using peer-to-peer architectures.
2013
Niklas Elmqvist, Pourang Irani (2013): Ubiquitous Analytics: Interacting with Big Data Anywhere, Anytime. In: IEEE Computer, 46 (4), pp. 86–89, 2013. PDF: http://www.umiacs.umd.edu/~elm/projects/ubilytics/ubilytics.pdf
Abstract: With more than 4 billion mobile devices in the world today, mobile computing is quickly becoming the universal computational platform of the world. Building on this new wave of mobile devices are personal computing activities such as microblogging, social networking, and photo sharing, which are intrinsically mobile phenomena that occur while on the go. Mobility is now propagating to more professional activities such as data analytics, which need no longer be restricted to the workplace. In fact, the rise of big data increasingly demands that we be able to access data resources anytime and anywhere, whether to support decisions and activities for travel, telecommuting, or distributed teamwork. In other words, it is high time to fully realize Mark Weiser's vision of ubiquitous computing in the realm of data analytics.
2012
Will McGrath, Brian Bowman, David McCallum, Juan-David Hincapie-Ramos, Niklas Elmqvist, Pourang Irani (2012): Branch-Explore-Merge: Facilitating Real-Time Revision Control in Collaborative Visual Exploration. In: Proceedings of the ACM Conference on Interactive Tabletops and Surfaces, pp. 235–244, 2012. PDF: http://www.umiacs.umd.edu/~elm/projects/bem/bem.pdf
Abstract: Collaborative work is characterized by participants seamlessly transitioning from working together (coupled) to working alone (decoupled). Groupware should therefore facilitate smoothly varying coupling throughout the entire collaborative session. Towards achieving such transitions for collaborative exploration and search, we propose a protocol based on managing revisions for each collaborator exploring a dataset. The protocol allows participants to diverge from the shared analysis path (branch), study the data independently (explore), and then contribute their findings back onto the shared display (merge). We apply this concept to collaborative search in multidimensional data, and propose an implementation where the public view is a tabletop display and the private views are embedded in handheld tablets. We then use this implementation to perform a qualitative user study involving a real estate dataset. Results show that participants leveraged the BEM protocol, spent significant time using their private views (40% to 80% of total task time), and applied public view changes for consultation with collaborators.
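The branch-explore-merge protocol above maps naturally onto a small state-copying example. The following Python sketch is hypothetical (not the study system); it only illustrates how a private branch diverges from the shared view and merges selected findings back, using an invented real-estate-style filter state.

```python
# Minimal sketch (not the study system) of the branch-explore-merge idea:
# a collaborator branches the shared analysis state, explores privately,
# and later merges selected changes back onto the public display.
import copy

class SharedView:
    def __init__(self, state):
        self.state = state                      # e.g. active filters on the tabletop

    def branch(self):
        return PrivateView(copy.deepcopy(self.state), parent=self)

class PrivateView:
    def __init__(self, state, parent):
        self.state = state                      # private copy on the tablet
        self.parent = parent

    def explore(self, key, value):
        self.state[key] = value                 # decoupled, local-only change

    def merge(self, keys=None):
        """Contribute findings back; merge everything or only selected keys."""
        keys = keys if keys is not None else self.state.keys()
        for key in keys:
            self.parent.state[key] = self.state[key]

public = SharedView({"price_max": 500_000, "bedrooms": 2})
private = public.branch()
private.explore("price_max", 350_000)           # explore alone
private.explore("neighborhood", "downtown")
private.merge(keys=["price_max"])               # merge only one finding
print(public.state)  # {'price_max': 350000, 'bedrooms': 2}
```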