Fall in HCI@KAIST (2024 Fall Colloquium)

This year’s HCI@KAIST fall colloquium invited four speakers from diverse HCI domains.


Parastoo Abtahi from Princeton

From Haptic Illusions to Beyond-Real Interaction in VR

Advances in audiovisual rendering have led to the commercialization of virtual reality (VR) hardware; however, haptic technology has not kept up with these advances. While haptic devices aim to bridge this gap by simulating the sensation of touch, many hardware limitations make realistic touch interactions in VR challenging. In my research, I explore how, by understanding human perception, we can design VR interactions that not only overcome the current limitations of VR hardware but also extend our abilities beyond what is possible in the real world. In this talk, I will present my work on redirection illusions that leverage the limits of human perception to improve the perceived performance of encountered-type haptic devices, such as improving the position accuracy of drones, the speed of tabletop robots, and the resolution of shape displays when used for haptics in VR. I will then present a framework I have developed through the lens of sensorimotor control theory to argue for the exploration and evaluation of VR interactions that go beyond mimicking reality.

Ken Pfeuffer from Aarhus

Eye-Hand Symbiosis as a New Interaction Paradigm in HCI

Smartglasses and Extended Reality (XR) headsets are advancing at an incredible pace toward becoming the next personal computing platform for a wide range of use cases. Eye-hand interfaces, such as “gaze and pinch”, offer a powerful new interaction paradigm for navigating the 3D interfaces these devices afford. Major tech players like Meta and Apple are starting to adopt these interfaces in their technology, marking the beginning of a UI paradigm shift toward multimodal input. In this talk, I discuss our work on the scientific foundations of this paradigm, the design space across devices and use cases, as well as recent advances and their implications for future HCI.

William Odom from Simon Fraser

How Could We Design Technology for Occasional, Indefinite, and Ongoing Use? Exploring Long-Term Human-Data and Human-Nature Relations with Capra

As the practice of hiking becomes increasingly captured through personal data, it is timely to consider what kinds of data encounters might support forms of noticing and connecting to nature, as well as to one’s self and life history over time. How might digital records of hiking be captured in ways that offer alternative perspectives on these experiences as they are explored and lived-with? In what ways could human-nature relations change as they are considered through different vantage points? And, considering the ongoing engagement with hiking as a lifelong activity, how might the use of personal hiking data unfold, grow, and evolve as a person, their archive, and their memories age over time? To investigate these questions, we designed Capra, a system that brings together the capture, storage, and exploration of personal hiking data with an emphasis on longer-term, occasional yet indefinite use. Over several years, our team adopted a designer-researcher approach in which we progressively designed, built, refined, and tested Capra. This talk will detail how we negotiated conceptual and practical tensions in designing technology that opens opportunities for interpretive, reflective (and non-use) experiences of personal hiking data as one’s archive expands and evolves. I will conclude with the opportunities our work opens for exploring models of interaction for technologies that may be used some of the time, over a long time.

Ruofei Du from Google AR

Computational Interaction for a Universally Accessible Metaverse

The emergent revolution of generative AI and spatial computing will fundamentally change the way we work and live. However, it remains a challenge to make information universally accessible and, specifically, to make generative AI and spatial computing useful in our daily lives. In this talk, we will delve into a series of innovations in augmented programming, augmented interaction, and augmented communication that aim to make both the virtual metaverse and the physical world universally accessible.
With Visual Blocks and InstructPipe, we empower novice users to unleash their inner creativity, by rapidly building machine learning pipelines with visual programming and generative AI. With Depth Lab, Ad hoc UI, Artificial Object Intelligence, and Finger Switches, we present real-time 3D interactions with depth maps, objects, and micro-gestures. Finally, with CollaboVR, GazeChat, Visual Captions, ThingShare, and ChatDirector, we enrich communication with mid-air sketches, gaze-aware 3D photos, LLM-informed visuals, object-focused views, and co-presented avatars.
We conclude the talk with highlights of the Google I/O Keynote, offering a visionary glimpse into the future of a universally accessible metaverse.
