Brian D. Hall, PhD


PROJECTS

Augmented Chironomia for Presenting Data to Remote Audiences

Received an Honorable Mention award at ACM UIST 2022 | PDF

To facilitate engaging and nuanced conversations around data, we contribute a touchless approach to interacting directly with visualization in remote presentations. We combine dynamic charts overlaid on a presenter’s webcam feed with continuous bimanual hand tracking, demonstrating interactions that highlight and manipulate chart elements appearing in the foreground. These interactions are simultaneously functional and deictic, and some allow for the addition of “rhetorical flourish”, or expressive movement used when speaking about quantities, categories, and time intervals. We evaluated our approach in two studies with professionals who routinely deliver and attend presentations about data. The first study considered the presenter perspective, where 12 participants delivered presentations to a remote audience using a presentation environment incorporating our approach. The second study considered the audience experience of 17 participants who attended presentations supported by our environment. Finally, we reflect on observations from these studies and discuss related implications for engaging remote audiences in conversations about data.
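
As a minimal sketch of the tracking side of this approach, the loop below follows two hands over a webcam feed and maps a pinch gesture to a chart-highlight callback. It assumes the MediaPipe Hands and OpenCV libraries; the highlight_bar hook and the pinch threshold are hypothetical stand-ins for the presentation environment's much richer gesture vocabulary.

    # Sketch: continuous bimanual hand tracking over a webcam feed, with a
    # pinch gesture driving a (hypothetical) chart-highlight callback.
    # Assumes the mediapipe and opencv-python packages are installed.
    import cv2
    import mediapipe as mp

    hands = mp.solutions.hands.Hands(max_num_hands=2, min_detection_confidence=0.7)

    def is_pinching(hand, threshold=0.05):
        # Treat a small thumb-tip (landmark 4) / index-tip (landmark 8)
        # distance as a pinch; the threshold is an assumption.
        thumb, index = hand.landmark[4], hand.landmark[8]
        return ((thumb.x - index.x) ** 2 + (thumb.y - index.y) ** 2) ** 0.5 < threshold

    def highlight_bar(x, y):
        # Hypothetical hook into the chart overlay: highlight the element under (x, y).
        print(f"highlight chart element near ({x:.2f}, {y:.2f})")

    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        for hand in results.multi_hand_landmarks or []:
            if is_pinching(hand):
                highlight_bar(hand.landmark[8].x, hand.landmark[8].y)
        cv2.imshow("presenter", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc exits
            break
    cap.release()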

A Survey of Tasks and Visualizations in Multiverse Analysis Reports

CGF 2022 | PDF

Analyzing data from experiments is a complex, multi-step process, often with multiple defensible choices available at each step. While analysts often report a single analysis without documenting how it was chosen, this can cause serious transparency and methodological issues. To make the sensitivity of analysis results to analytical choices transparent, some statisticians and methodologists advocate the use of ‘multiverse analysis’: reporting the full range of outcomes that result from all combinations of defensible analytic choices. Summarizing this combinatorial explosion of statistical results presents unique challenges; several approaches to visualizing the output of multiverse analyses have been proposed across a variety of fields (e.g. psychology, statistics, economics, neuroscience). In this article, we (1) introduce a consistent conceptual framework and terminology for multiverse analyses that can be applied across fields; (2) identify the tasks researchers try to accomplish when visualizing multiverse analyses; and (3) classify multiverse visualizations into ‘archetypes’, assessing how well each archetype supports each task. Our work sets a foundation for subsequent research on developing visualization tools and techniques to support multiverse analysis and its reporting.
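
The combinatorial structure at the heart of a multiverse analysis is easy to see in code. In the toy sketch below, each combination of defensible choices defines one "universe"; the choice names and the run_analysis stand-in are illustrative assumptions, not drawn from any particular study.

    # Toy multiverse: every combination of defensible analytic choices is one
    # "universe", and all outcomes are reported rather than a single analysis.
    from itertools import product

    exclusion_rules = ["none", "drop_outliers_3sd"]
    covariate_sets = [(), ("age",), ("age", "gender")]
    model_families = ["ols", "robust"]

    def run_analysis(exclusion, covariates, model):
        # Stand-in for a real analysis pipeline returning an effect estimate.
        ...

    multiverse = [
        {"exclusion": e, "covariates": c, "model": m,
         "estimate": run_analysis(e, c, m)}
        for e, c, m in product(exclusion_rules, covariate_sets, model_families)
    ]

    # 2 x 3 x 2 = 12 universes; the visualization archetypes surveyed above
    # are ways of summarizing grids like this one at realistic scales.
    print(len(multiverse))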

MRAT: The Mixed Reality Analytics Toolkit

Received a Best Paper award at ACM CHI 2020 | PDF

Significant tool support exists for the development of mixed reality (MR) applications; however, there is a lack of tools for analyzing MR experiences. We elicit requirements for future tools through interviews with 8 university research, instructional, and media teams using AR/VR in a variety of domains. While we found a common need for capturing how users perform tasks in MR, the primary differences were in the heuristics and metrics relevant to each project. Particularly in the early project stages, teams were uncertain about what data should, and even could, be collected with MR technologies. We designed the Mixed Reality Analytics Toolkit (MRAT) to instrument MR apps via visual editors, without programming, and to enable rapid data collection and filtering for visualizations of MR user sessions. With MRAT, we contribute flexible interaction tracking and task definition concepts, an extensible set of heuristic techniques and metrics to measure task success, and visual inspection tools with in-situ visualizations in MR. Focusing on a multi-user, cross-device MR crisis simulation and triage training app as a case study, we then show the benefits of using MRAT, not only for user testing of MR apps, but also for performance tuning throughout the design process.
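
To suggest the flavor of interaction tracking paired with a task-success heuristic, here is a small illustrative sketch; the event fields, the task definition, and the in-order metric are assumptions for exposition, not MRAT's actual schema or API.

    # Illustrative interaction log with a simple task-success heuristic,
    # in the spirit of MRAT; all names and fields are assumptions.
    from dataclasses import dataclass, field
    import time

    @dataclass
    class InteractionEvent:
        user_id: str
        event_type: str          # e.g. "gaze", "tap", "object_placed"
        target: str              # the scene object the event refers to
        timestamp: float = field(default_factory=time.time)

    @dataclass
    class TaskDefinition:
        name: str
        required_events: list    # event types that must occur, in order

    def task_success(task, events):
        # Heuristic: the task succeeds if its required event types appear
        # in order within the time-sorted log.
        remaining = iter(e.event_type for e in sorted(events, key=lambda e: e.timestamp))
        return all(req in remaining for req in task.required_events)

    # Example: a triage task succeeds once a patient is assessed and tagged.
    triage = TaskDefinition("triage_patient", ["patient_assessed", "tag_applied"])
    log = [InteractionEvent("u1", "patient_assessed", "patient_03"),
           InteractionEvent("u1", "tag_applied", "patient_03")]
    print(task_success(triage, log))  # True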

What is Mixed Reality?

Received an Honorable Mention award at ACM CHI 2019 | PDF

What is Mixed Reality (MR)? To revisit this question given the many recent developments, we conducted interviews with ten AR/VR experts from academia and industry, as well as a literature survey of 68 papers. We find that, while there are prominent examples, there is no universally agreed-upon, one-size-fits-all definition of MR. Rather, we identified six partially competing notions from the literature and the experts' responses. We then began to isolate the different aspects of reality relevant to MR experiences, going beyond the primarily visual notions and extending to audio, motion, haptics, taste, and smell. We distill our findings into a conceptual framework with seven dimensions to characterize MR applications in terms of the number of environments, number of users, level of immersion, level of virtuality, degree of interaction, input, and output. Our goal with this paper is to support classification and discussion of MR application design and to provide researchers a better means to contextualize their work within the increasingly fragmented MR landscape.
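
The seven dimensions translate directly into a small record type, which also shows how the framework supports classification. The dimension names come from the abstract above; the value ranges and the example application are assumptions for illustration.

    # The paper's seven-dimension MR framework as a record type; example
    # values and value ranges are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class MRApplication:
        environments: int    # number of environments
        users: int           # number of users
        immersion: str       # level of immersion, e.g. "partial", "full"
        virtuality: str      # level of virtuality
        interaction: str     # degree of interaction, e.g. "implicit", "explicit"
        input: list          # input modalities, e.g. ["gesture", "voice"]
        output: list         # output modalities, e.g. ["visual", "audio", "haptic"]

    # Classifying a hypothetical shared AR whiteboard along the dimensions:
    whiteboard = MRApplication(
        environments=1, users=2, immersion="partial",
        virtuality="augmented", interaction="explicit",
        input=["gesture", "touch"], output=["visual", "audio"])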

XD-AR: Challenges and Opportunities in Cross-Device Augmented Reality Application Development

Received the Best Paper award at EICS 2018. Proceedings of the ACM on Human-Computer Interaction 2018 | PDF

Augmented Reality (AR) developers face a proliferation of new platforms, devices, and frameworks. This often leads to applications being limited to a single platform and makes it hard to support collaborative AR scenarios involving multiple different devices. This paper presents XD-AR, a cross-device AR application development framework designed to unify input and output across hand-held, head-worn, and projective AR displays. XD-AR's design was informed by challenging scenarios for AR applications, a technical review of existing AR platforms, and a survey of 30 AR designers, developers, and users. Based on the results, we developed a taxonomy of AR system components and identified key challenges and opportunities in making them work together. We discuss how our taxonomy can guide the design of future AR platforms and applications and how cross-device interaction challenges could be addressed. We illustrate this by using XD-AR to implement two challenging AR applications from the literature in a device-agnostic way.
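
A device-agnostic layer of the kind XD-AR provides can be suggested with a minimal interface sketch: one application targets a common surface, and per-device adapters map it onto hand-held or head-worn displays. The class and method names below are hypothetical, not XD-AR's actual API.

    # Sketch of a device-agnostic AR abstraction in the spirit of XD-AR;
    # all names are hypothetical.
    from abc import ABC, abstractmethod

    class ARDisplay(ABC):
        # Common surface for output and input across AR device classes.
        @abstractmethod
        def place_object(self, obj_id: str, x: float, y: float, z: float): ...

        @abstractmethod
        def poll_input(self) -> list: ...

    class HandheldDisplay(ARDisplay):
        def place_object(self, obj_id, x, y, z):
            print(f"[tablet] anchor {obj_id} at ({x}, {y}, {z})")
        def poll_input(self):
            return []  # touch events, normalized to a shared event format

    class HeadWornDisplay(ARDisplay):
        def place_object(self, obj_id, x, y, z):
            print(f"[HMD] anchor {obj_id} at ({x}, {y}, {z})")
        def poll_input(self):
            return []  # air-tap/gaze events, normalized to the same format

    def run_app(displays):
        # The same application logic drives every connected device.
        for d in displays:
            d.place_object("shared_cube", 0.0, 1.2, -0.5)

    run_app([HandheldDisplay(), HeadWornDisplay()])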

Improving Human Interfaces for Commercial Camera Drone Systems

Awarded 2nd place in the Student Research Competition at ACM CHI 2017 | PDF | Poster

This project reports on a pilot study investigating both the opportunities for improvement and the dangers present in currently available human interfaces for operating commercial camera drones. Nine participants completed flight operation missions in six experimental configurations, using two different drones operated via line of sight, via a live camera feed on a tablet, and through a first-person-view headset designed for drone racers. Qualitative and quantitative methods were used to explore the risks of damage and injury posed by current system designs and to identify areas of potential improvement.

User Adaptability to System Delay

Awarded 3rd place in the Student Research Competition at ACM CHI 2016 | PDF | Poster

This double-blind, controlled, counterbalanced experiment examines the effects of system delay on users. Sixty-one participants completed 8 computerized tasks under 4 levels of time delay (from 500 ms to 2000 ms), with time and accuracy recorded. Participants rapidly adapted to system delay, committing neither errors of omission nor errors of commission more frequently as a result of slowed system response. Time to finish tasks increased with system delay, but users were slowed only by approximately as much time as the delay itself introduced. Implications for estimating the potential value of reduced system delay are discussed, along with study limitations and suggestions for future work.
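
The headline finding, that users absorb delay roughly one-for-one, supports a simple back-of-the-envelope estimate of the value of reducing delay: delay saved per interaction times interaction count. The numbers in the sketch below are assumptions, not figures from the study.

    # If users are slowed by roughly the delay itself, time saved by reducing
    # system delay scales linearly with interaction count. Numbers are assumed.
    def time_saved_per_user(delay_before_ms, delay_after_ms, interactions_per_day):
        # Daily time saved, in seconds.
        return (delay_before_ms - delay_after_ms) * interactions_per_day / 1000

    # Example: trimming delay from 2000 ms to 500 ms over 200 interactions/day
    # saves 300 s, about 5 minutes, per user per day.
    saved = time_saved_per_user(2000, 500, 200)
    print(f"{saved:.0f} s/day (~{saved / 60:.1f} min)")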