Fall in HCI@KAIST (2024 Fall Colloquium)

This year’s HCI@KAIST fall colloquium invited four speakers from diverse HCI domains.


Parastoo Abtahi from Princeton University

From Haptic Illusions to Beyond-Real Interaction in VR

Advances in audiovisual rendering have led to the commercialization of virtual reality (VR) hardware; however, haptic technology has not kept up with these advances. While haptic devices aim to bridge this gap by simulating the sensation of touch, many hardware limitations make realistic touch interactions in VR challenging. In my research, I explore how, by understanding human perception, we can design VR interactions that not only overcome the current limitations of VR hardware but also extend our abilities beyond what is possible in the real world. In this talk, I will present my work on redirection illusions that leverage the limits of human perception to improve the perceived performance of encountered-type haptic devices, such as improving the position accuracy of drones, the speed of tabletop robots, and the resolution of shape displays when used for haptics in VR. I will then present a framework I have developed through the lens of sensorimotor control theory to argue for the exploration and evaluation of VR interactions that go beyond mimicking reality.

Ken Pfeuffer from Aarhus University

Eye-Hand Symbiosis as a New Interaction Paradigm in HCI

Smartglasses and Extended Reality (XR) headsets are advancing at an incredible pace toward becoming the next personal computing platform for a variety of use cases. Eye-hand interfaces—like “gaze and pinch”—offer a new, powerful interaction paradigm for navigating the 3D interfaces these devices afford. Major tech players like Meta and Apple are starting to adopt these interfaces in their technology, which marks the beginning of a UI paradigm shift toward multimodal input. In this talk, I discuss our work on the scientific foundations of this paradigm, the design space across devices and use cases, as well as recent advances and their implications for future HCI.

William Odom from Simon Fraser University

How Could We Design Technology for Occasional, Indefinite, and Ongoing Use? Exploring Long-Term Human-Data and Human-Nature Relations with Capra

As the practice of hiking becomes increasingly captured through personal data, it is timely to consider what kinds of data encounters might support forms of noticing and connecting to nature as well as one’s self and life history over time. How might digital records of hiking be captured in ways that offer alternative perspectives on these experiences as they are explored and lived with? In what ways could human-nature relations change as they are considered through different vantage points? And, considering the ongoing engagement with hiking as a lifelong activity, how might the use of personal hiking data unfold, grow, and evolve as a person, their archive, and their memories age over time? To investigate these questions, we designed Capra — a system that brings together the capture, storage, and exploration of personal hiking data with an emphasis on longer-term, occasional yet indefinite use. Over several years, our team adopted a designer-researcher approach in which we progressively designed, built, refined, and tested Capra. This talk will detail how we negotiated conceptual and practical tensions in designing technology that opens opportunities for interpretive and reflective (and non-use) experiences of personal hiking data as one’s archive expands and evolves. I will conclude with the opportunities our work opens for exploring models of interaction for technologies that may be used some of the time, over a long time.

Ruofei Du from Google AR

Computational Interaction for a Universally Accessible Metaverse

The emerging revolution of generative AI and spatial computing will fundamentally change the way we work and live. However, it remains a challenge to make information universally accessible, and specifically, to make generative AI and spatial computing useful in our daily lives. In this talk, we will delve into a series of innovations in augmented programming, augmented interaction, and augmented communication that aim to make both the virtual metaverse and the physical world universally accessible.
With Visual Blocks and InstructPipe, we empower novice users to unleash their inner creativity, by rapidly building machine learning pipelines with visual programming and generative AI. With Depth Lab, Ad hoc UI, Artificial Object Intelligence, and Finger Switches, we present real-time 3D interactions with depth maps, objects, and micro-gestures. Finally, with CollaboVR, GazeChat, Visual Captions, ThingShare, and ChatDirector, we enrich communication with mid-air sketches, gaze-aware 3D photos, LLM-informed visuals, object-focused views, and co-presented avatars.
We conclude the talk with highlights of the Google I/O Keynote, offering a visionary glimpse into the future of a universally accessible metaverse.

Fall in HCI@KAIST (2023 Fall Colloquium)

This year’s HCI@KAIST fall colloquium invited four speakers from diverse HCI domains.


Sherry Tongshuang Wu from Carnegie Mellon University

Practical AI Systems and Effective Human-AI Collaboration

As AI systems (such as LLMs) rapidly advance, they can now perform tasks that were once exclusive to humans. This trend points toward extensive collaboration with LLMs, where humans delegate tasks to them while focusing on higher-level skills unique to humans. However, haphazard pairing of humans and AIs can lead to negative consequences, such as blind trust in incorrect AI outputs and decreased human productivity. In this talk, I will discuss our efforts to promote effective human-AI collaboration by ensuring competence in both humans and AIs for their respective responsibilities and enhancing their collaboration. I will cover three themes: (1) evaluating LLMs on specific usage scenarios; (2) building task-specific interactions that maximize LLM usability; and (3) training and guiding humans to optimize their collaboration skills with AI systems. In my final remarks, I will reflect on how AI advances can be viewed through the lens of their usefulness to actual human users.

Sang Won Lee from Virginia Tech

Toward Computer-Mediated Empathy

This talk discusses ways to design computational systems that facilitate empathic communication and collaboration in various domains. In contrast to using technologies to develop users’ empathy for targets, I emphasize the duality of empathy and highlight empowering targets to express, reveal, and reflect on themselves. I will introduce an ongoing framework and focus on recent projects that explore sharing perspectives, self-expression, and self-reflection as means to mediate empathy in interactive systems from the target’s perspective.

Janghee Cho from National University of Singapore

Design for Sustainable Life in the Work-From-Home Era

Navigating the complexities of the contemporary human experience is precarious, marked by latent but pervasive anxiety and uncertainty. In this talk, I draw on a reflective design approach that emphasizes the value of human agency and meaning-making processes to discuss design implications for technologies that could help people (re)establish a sense of normalcy in their everyday lives. Specifically, the focus centers on recent projects that investigate the role of data-driven technology in addressing well-being issues within remote and hybrid work settings, where individuals grapple with blurred boundaries between home and work.

Audrey Desjardins from the University of Washington

Data Imaginaries

0s and 1s on a screen. The Cloud. Fast moving. Clean. Efficient. Exponentially growing. Data Centers. Code on the black screen of a terminal window. Buzzing. Such images construct part of commonly shared imaginaries around data. As data increasingly become part of the most intimate parts of people’s lives (often at home), they remain a largely invisible phenomenon. In particular, one of the leading challenges currently facing the Internet of Things (IoT) is algorithmic transparency and accountability with regard to how IoT data are collected, what is inferred, and with whom they are shared. From a home dweller’s perspective, data may be available for review and reflection via graphs, spreadsheets, and dashboards (if at all available!). In this talk, I instead argue for other modes of encountering IoT data: ways that are creative, critical, subtle, performative, and at times analog or fictional. By translating data into ceramic artifacts, performance and interactive installation experiments, fiction stories, imagined sounds, faded fabric, and even data cookies, I show a diversity of approaches for engaging with data that might capture people’s attention and imagination. As a result, this work uncovers ways to make data more real, showing their messiness and complexities, and opens questions about how data might be interpreted, and by whom.