CHI 2025

Date: April 26 – May 1, 2025
Location: Yokohama, Japan

We’re excited to share KAIST’s strong presence at CHI 2025! Our researchers have contributed 46 Full Papers, 22 Late-Breaking Works, 2 Interactivities, 4 Workshops, 2 Video Showcases, 1 Special Interest Group, and 1 Doctoral Consortium.

This impressive showing reflects the quality of HCI research happening across labs and departments at KAIST. Browse the publication list below to learn more about our work and connect with our researchers at the conference.


KAIST Night at CHI25 ✨

We’re excited to announce our KAIST Night at CHI 2025! 

Event Details

  • Date & Time: Tuesday, April 29th, 8:00pm-10:00pm
  • Capacity: ~200 people
  • Detailed venue information will follow soon.
  🎟️ Ask your KAIST friend about a ticket!

Sponsors

Paper Publications

AACessTalk: Fostering Communication between Minimally Verbal Autistic Children and Parents with Contextual Guidance and Card Recommendation

CHI'25 Best Paper

Dasom Choi (KAIST), SoHyun Park (NAVER Cloud), Kyungah Lee (Daegu University), Hwajung Hong (KAIST), Young-Ho Kim (NAVER AI Lab)

As minimally verbal autistic (MVA) children communicate with parents through few words and nonverbal cues, parents often struggle to encourage their children to express subtle emotions and needs and to grasp their nuanced signals. We present AACessTalk, a tablet-based, AI-mediated communication system that facilitates meaningful exchanges between an MVA child and a parent. AACessTalk provides real-time guides to the parent to engage the child in conversation and, in turn, recommends contextual vocabulary cards to the child. Through a two-week deployment study with 11 MVA child-parent dyads, we examine how AACessTalk fosters everyday conversation practice and mutual engagement. Our findings show high engagement from all dyads, leading to increased frequency of conversation and turn-taking. AACessTalk also encouraged parents to explore their own interaction strategies and empowered the children to have more agency in communication. We discuss the implications of designing technologies for balanced communication dynamics in parent-MVA child interaction.

AMUSE: Human-AI Collaborative Songwriting with Multimodal Inspirations

CHI'25 Best Paper

Yewon Kim (KAIST), Sung-Ju Lee (KAIST), Chris Donahue (Carnegie Mellon University)

Songwriting is often driven by multimodal inspirations, such as imagery, narratives, or existing music, yet songwriters remain unsupported by current music AI systems in incorporating these multimodal inputs into their creative processes. We introduce Amuse, a songwriting assistant that transforms multimodal (image, text, or audio) inputs into chord progressions that can be seamlessly incorporated into songwriters’ creative process. A key feature of Amuse is its novel method for generating coherent chords that are relevant to music keywords in the absence of datasets with paired examples of multimodal inputs and chords. Specifically, we propose a method that leverages multimodal language models to convert multimodal inputs into noisy chord suggestions and uses a unimodal chord model to filter the suggestions. A user study with songwriters shows that Amuse effectively supports transforming multimodal ideas into coherent musical suggestions, enhancing users’ agency and creativity throughout the songwriting process.
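A minimal sketch of the generate-then-filter idea described above, assuming a toy chord-transition table in place of the paper’s trained unimodal chord model; the chord names, probabilities, and candidate lists are all illustrative, not Amuse’s implementation:

```python
# Toy unimodal chord model: log-probabilities of chord-to-chord transitions.
# Values are invented; Amuse trains a real chord model on symbolic music data.
TRANSITION_LOGP = {
    ("C", "G"): -0.4, ("G", "Am"): -0.7, ("Am", "F"): -0.5,
    ("F", "C"): -0.6, ("C", "Am"): -1.0, ("G", "C"): -0.5,
}
UNSEEN_LOGP = -4.0  # transitions the chord model finds implausible

def progression_logp(chords):
    """Score a chord progression under the toy transition model."""
    return sum(TRANSITION_LOGP.get(pair, UNSEEN_LOGP)
               for pair in zip(chords, chords[1:]))

def filter_suggestions(noisy_candidates, keep=2):
    """Keep the candidates the chord model finds most coherent."""
    return sorted(noisy_candidates, key=progression_logp, reverse=True)[:keep]

# Candidates, as if proposed by a multimodal LM from an image or lyric prompt.
candidates = [["C", "G", "Am", "F"], ["C", "F#", "B", "Eb"], ["Am", "F", "C", "G"]]
print(filter_suggestions(candidates))  # the two coherent pop progressions survive
```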

ChoreoCraft: In-situ Crafting of Choreography in Virtual Reality through Creativity Support Tool

CHI'25 Honorable Mention

Hyunyoung Han (KAIST)*, Kyungeun Jung (KAIST)*, Sang Ho Yoon (KAIST)
(* Both authors contributed equally to this research)

Choreographers face increasing pressure to create content rapidly, driven by growing demand in social media, entertainment, and commercial sectors, often compromising creativity. This study introduces ChoreoCraft, a novel in-situ virtual reality (VR) choreographic system designed to enhance the creation process of choreography. Through contextual inquiries with professional choreographers, we identified key challenges such as memory dependency, creative plateaus, and abstract feedback to formulate design implications. Then, we propose a VR choreography creation system embedded with a context-aware choreography suggestion system and a choreography analysis system, all grounded in choreographers’ creative processes and mental models. Our study results demonstrated that ChoreoCraft fosters creativity, reduces memory dependency, and improves efficiency in choreography creation. Participants reported high satisfaction with the system’s ability to overcome creative plateaus and provide objective feedback. Our work advances creativity support tools by providing digital assistance in dance composition that values artistic autonomy while fostering innovation and efficiency.

Juggling Extra Limbs: Identifying Control Strategies for Supernumerary Multi-Arms in Virtual Reality

CHI'25 Honorable Mention

Hongyu Zhou (The University of Sydney), Tom Kip (The University of Sydney), Yihao Dong (The University of Sydney), Andrea Bianchi (KAIST), Zhanna Sarsenbayeva (The University of Sydney), Anusha Withana (The University of Sydney)

Using supernumerary multi-limbs for complex tasks is a growing research focus in Virtual Reality (VR) and robotics. Understanding how users integrate extra limbs with their own to achieve shared goals is crucial for developing efficient supernumeraries. This paper presents an exploratory user study (N=14) investigating strategies for controlling virtual supernumerary limbs with varying autonomy levels in VR object manipulation tasks. Using a Wizard-of-Oz approach to simulate semi-autonomous limbs, we collected both qualitative and quantitative data. Results show participants adapted control strategies based on task complexity and system autonomy, affecting task delegation, coordination, and body ownership. Based on these findings, we propose guidelines—commands, demonstration, delegation, and labeling instructions—to improve multi-limb interaction design by adapting autonomy to user needs and fostering better context-aware experiences.

Private Yet Social: How LLM Chatbots Support and Challenge Eating Disorder Recovery

CHI'25 Honorable Mention

Ryuhaerang Choi (KAIST), Taehan Kim (KAIST), Subin Park (KAIST), Jennifer G Kim (Georgia Institute of Technology), Sung-Ju Lee (KAIST)

Eating disorders (ED) are complex mental health conditions that require long-term management and support. Recent advancements in large language model (LLM)-based chatbots offer the potential to assist individuals in receiving immediate support. Yet, concerns remain about their reliability and safety in sensitive contexts such as ED. We explore the opportunities and potential harms of using LLM-based chatbots for ED recovery. We observed the interactions between 26 participants with ED and an LLM-based chatbot, WellnessBot, designed to support ED recovery, over 10 days. We discovered that our participants felt empowered in recovery by discussing ED-related stories with the chatbot, which served as a personal yet social avenue. However, we also identified harmful chatbot responses, especially concerning for individuals with ED, that went unnoticed partly due to participants’ unquestioning trust in the chatbot’s reliability. Based on these findings, we provide design implications for safe and effective LLM-based interventions in ED management.

Reimagining Personal Data: Unlocking the Potential of AI-Generated Images in Personal Data Meaning-Making

CHI'25 Honorable Mention

Soobin Park (KAIST), Hankyung Kim (KAIST), Youn-kyung Lim (KAIST)

Image-generative AI provides new opportunities to transform personal data into alternative visual forms. In this paper, we illustrate the potential of AI-generated images in facilitating meaningful engagement with personal data. In a formative autobiographical design study, we explored the design and use of AI-generated images derived from personal data. Informed by this study, we designed a web-based application as a probe that represents personal data through generative images utilizing OpenAI’s GPT-4 model and DALL-E 3. We then conducted a 21-day diary study and interviews using the probe with 16 participants to investigate users’ in-depth experiences with AI-generated images in their everyday lives. Our findings reveal new qualities of experiences in users’ engagement with data, highlighting how participants constructed personal meaning from their data through imagination and speculation on AI-generated images. We conclude by discussing the potential and concerns of leveraging image-generative AI for personal data meaning-making.

ShamAIn: Designing Superior Conversational AI Inspired by Shamanism

CHI'25 Honorable Mention

Hyungjun Cho (KAIST), Jiyeon Amy Seo (University of Michigan), Jiwon Lee (KAIST), Chang-Min Kim (KAIST), Tek-Jin Nam (KAIST)

This paper presents the design process, outcomes, and installation of ShamAIn, a multi-modal embodiment of conversational AI inspired by the beliefs and symbols of Korean shamanism. Adopting a research-through-design approach, we offer an alternative perspective on conversational AI design, emphasizing perceived superiority. ShamAIn was developed based on strategies derived from investigating people’s experiences with shamanistic counseling and rituals. We deployed the system in an exhibition room for six weeks, during which 20 participants made multiple visits to engage with ShamAIn. Through subsequent in-depth interviews, we found that participants felt a sense of awe toward ShamAIn and engaged in interactions with humility and respect. Our participants disclosed personal and profound concerns, reflecting deeply on the responses they received. Consequently, they relied on ShamAIn and formed relationships in which they received support. In the discussion, we present the design implications of conversational AI perceived as superior to humans, along with the ethical considerations involved in designing such AI.

T2IRay: Design of Thumb-to-Index based Indirect Pointing for Continuous and Robust AR/VR Input

CHI'25 Honorable Mention

Jina Kim (KAIST), Yang Zhang (University of California, Los Angeles), Sang Ho Yoon (KAIST)

Free-hand interactions have been widely deployed for AR/VR interfaces to promote a natural and seamless interaction experience. Among various types of hand interactions, microgestures are still limited in supporting discrete inputs and in lacking a continuous interaction theme. To this end, we propose a new pointing technique, T2IRay, which enables continuous indirect pointing through microgestures for continuous spatial input. We employ our own local coordinate system based on the thumb-to-index finger relationship to map the computed raycasting direction for indirect pointing in a virtual environment. Furthermore, we examine various mapping methodologies and collect thumb-click behaviors to formulate thumb-to-index microgesture design guidelines to foster continuous, reliable input. We evaluate the design parameters for mapping indirect pointing with acceptable speed, depth, and range. We collect and analyze the characteristics of click behaviors for future implementation. Our research demonstrates the potential and practicality of free-hand micro-finger input methods for advancing future interaction paradigms.
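To picture the local coordinate system mentioned above, here is a small geometric sketch; the frame construction, the gain parameter, and the mapping are our own illustrative guesses, not T2IRay’s actual formulation:

```python
import numpy as np

# Illustrative geometry only: T2IRay's actual frame and mapping differ.
def local_frame(index_mcp, index_tip, palm_normal):
    """Build an orthonormal frame anchored at the index finger."""
    x = index_tip - index_mcp
    x = x / np.linalg.norm(x)
    z = np.cross(x, palm_normal)
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)
    return np.stack([x, y, z])  # rows are the frame axes

def thumb_to_ray(thumb_tip, index_mcp, frame, gain=2.0):
    """Map the thumb position (expressed in the index-local frame) to a ray direction."""
    local = frame @ (thumb_tip - index_mcp)  # thumb in index-finger coordinates
    # Small thumb displacements steer the ray; the gain amplifies them for reach.
    direction = frame.T @ np.array([1.0, gain * local[1], gain * local[2]])
    return direction / np.linalg.norm(direction)

frame = local_frame(np.zeros(3), np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))
print(thumb_to_ray(np.array([0.3, 0.1, -0.05]), np.zeros(3), frame))
```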

AReading with Smartphones: Understanding the Trade-offs between Enhanced Legibility and Display Switching Costs in Hybrid AR Interfaces

Sunyoung Bang (KAIST), Hyunjin Lee (KAIST), Seo Young Oh (KAIST), Woontack Woo (KAIST)

This research investigates the use of hybrid user interfaces to enhance text readability in augmented reality (AR) by combining optical see-through head-mounted displays with smartphones. While this integration can improve information legibility, it may also introduce display switching side effects. The extent to which these side effects hinder user experience and when the benefits outweigh drawbacks remain unclear. To address this gap, we conducted an empirical study (N=24) to evaluate how hybrid user interfaces affect AR reading tasks across different content distances, which induce varying levels of display switching. Our findings show that hybrid user interfaces offer significant readability benefits compared to using the HMD only, reducing mental and physical demands when reading text linked to content at closer distances. However, as the distance between displays increases, the compensatory behaviors users adopt to manage increased switching costs negate these benefits, making hybrid user interfaces less effective. Based on these findings, we suggest (1) using smartphones as supplementary displays for text in reading-intensive tasks, (2) implementing adaptive display positioning to minimize switching overhead in such scenarios, and (3) adjusting the smartphone’s role based on content distance for less intensive reading tasks. These insights provide guidance for optimizing smartphone integration in hybrid interfaces and enhancing AR systems for reading applications.

Back to the 1990s, BeeperRedux!: Revisiting Retro Technology to Reflect Communication Quality and Experience in the Digital Age

Jiyeon Amy Seo (University of Michigan), Hyungjun Cho (KAIST), Seolhee Lee (Seoul National University), EunJeong Cheon (Syracuse University)

As computer-mediated communication tools have evolved from beepers to 2G cell phones, and now to today’s smartphones, people have consistently embraced these technologies to maintain relationships and enhance the convenience of their daily lives. However, while contemporary communication technologies clearly diverge from their traditional roles, few studies have critically examined their effects, particularly in relation to communication quality and relationships. To address what contemporary technologies may have overlooked, our study revisits retro communication technologies—specifically, the beeper. We recreated the beeper experience through BeeperRedux, a mobile application, and conducted a two-week deployment study involving ten groups. Our findings highlight three valuable aspects of retro communication technologies: fostering sincerity, restoring recipients’ autonomy over their communication, and prioritizing offline engagement. In the discussion, we present design guidelines for improving technology-mediated communication and offer methodological reflections on recreating obsolete technology to empirically explore past experiences.

Birds of a Rhythm: The Effects of Haptic Pattern Similarity on People's Social Perceptions in Virtual Reality

Hyuckjin Jang (KAIST), Jeongmi Lee (KAIST)

Virtual reality (VR) expands opportunities for social interaction, yet its heavy reliance on visual cues can limit social engagement and hinder immersive experiences in visually overwhelming situations. To explore alternative social cues beyond the visual domain, we verified the potential of haptic cues for social identification in VR by examining the effects of haptic pattern similarity on social perceptions. Unique haptic patterns were assigned to participants and virtual agents for identification, while the similarity of haptic patterns was manipulated (same, similar, distinct). The results demonstrated that participants maintained closer interpersonal distances and reported higher senses of belonging, social connection, and comfort toward agents as the similarity of patterns increased. Our findings validate the potential of haptic patterns in social identification and provide scientific evidence that homophily extends beyond the visual domain to the haptic domain. We also suggest a novel haptic-based methodology for conveying relationship information and enhancing social VR experiences.

BudsID: Mobile-Ready and Expressive Finger Identification Input for Earbuds


Jiwan Kim (KAIST), Mingyu Han (UNIST), Ian Oakley (KAIST)

Wireless earbuds are an appealing platform for wearable computing on-the-go. However, their small size and out-of-view location mean they support only a limited range of inputs. We propose finger identification input on earbuds as a novel technique to resolve these problems. This technique involves associating touches by different fingers with different responses. To enable it on earbuds, we adapted prior work on smartwatches to develop a wireless earbud featuring a magnetometer that detects fields from a magnetic ring. A first study reveals participants achieve rapid, precise earbud touches with different fingers, even while mobile (time: 0.98s, errors: 5.6%). Furthermore, touching fingers can be accurately classified (96.9%). A second study shows strong performance with a more expressive technique involving multi-finger double-taps (inter-touch time: 0.39s, errors: 2.8%) while maintaining high accuracy (94.7%). We close by exploring and evaluating the design of earbud finger identification applications and demonstrating the feasibility of our system on low-resource devices.
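The classification step can be sketched simply: each touching finger places the magnetic ring at a distinct position relative to the earbud, so even a nearest-centroid classifier over the 3-axis magnetometer reading can separate fingers. All centroids and noise levels below are synthetic stand-ins for real training data:

```python
import numpy as np

# Synthetic magnetometer readings per finger; BudsID trains on real traces
# and reports ~96.9% classification accuracy with a learned model.
rng = np.random.default_rng(0)

def synth_touches(center, n=50):
    return np.asarray(center) + rng.normal(scale=5.0, size=(n, 3))

train = {"index": synth_touches([40, -10, 5]),
         "middle": synth_touches([25, 20, -8]),
         "ring": synth_touches([5, -30, 12])}

centroids = {finger: x.mean(axis=0) for finger, x in train.items()}

def classify(sample):
    """Nearest-centroid finger identification from one magnetometer reading."""
    return min(centroids, key=lambda f: np.linalg.norm(sample - centroids[f]))

print(classify(np.array([38, -12, 6])))  # -> 'index'
```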

CounterStress: Enhancing Stress Coping Planning through Counterfactual Explanations in Personal Informatics

Gyuwon Jung (KAIST), Uichin Lee (KAIST)

Personal informatics (PI) systems have been utilized to help individuals manage health issues such as stress by leveraging insights from self-tracking data. However, PI users may struggle to develop effective coping strategies because factors influencing stress are often difficult to change in practice, and multiple factors can contribute to stress simultaneously. In this study, we introduce CounterStress, a PI system designed to assist users in identifying contextual changes needed to address high-stress situations. CounterStress employs counterfactual explanations to identify and suggest alternative contextual changes, offering users actionable strategies to achieve a desired state. We conducted both lab-based and field user studies with 12 participants to evaluate the system’s usability and applicability, focusing on the benefits of counterfactual-based coping strategies, how users select viable strategies, and their real-world applications. Based on our findings, we discuss design implications for effectively leveraging counterfactuals in PI systems to support users’ stress-coping planning.
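A toy version of a counterfactual search conveys the core mechanism: given a stress predictor and a set of actionable context features, enumerate the smallest feature changes that flip the prediction to the desired state. The predictor, features, and target values here are invented for illustration and are not CounterStress’s model:

```python
from itertools import combinations

def stress_model(ctx):
    """Stand-in predictor: 1 = high stress, 0 = low stress."""
    score = (ctx["sleep_hours"] < 6) + (ctx["phone_use_min"] > 240) + (ctx["steps"] < 3000)
    return int(score >= 2)

# Actionable features and the target values a change would set them to.
ACTIONABLE = {"sleep_hours": 7.5, "phone_use_min": 120, "steps": 6000}

def counterfactuals(ctx, max_changes=2):
    """Enumerate minimal actionable changes that reach the low-stress state."""
    for k in range(1, max_changes + 1):
        found = []
        for feats in combinations(ACTIONABLE, k):
            candidate = {**ctx, **{f: ACTIONABLE[f] for f in feats}}
            if stress_model(candidate) == 0:
                found.append(feats)
        if found:  # prefer the smallest number of changes
            return found
    return []

today = {"sleep_hours": 5.0, "phone_use_min": 300, "steps": 2500}
print(counterfactuals(today))  # no single change flips the prediction here, so pairs are returned
```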

Cross, Dwell, or Pinch: Designing and Evaluating Around-Device Selection Methods for Unmodified Smartwatches

Jiwan Kim (KAIST), Jiwan Son (KAIST), Ian Oakley (KAIST)

Smartwatches offer powerful features, but their small touchscreens limit the expressiveness of the input that can be achieved. To address this issue, we present, and open-source, the first sonar-based around-device input on an unmodified consumer smartwatch. We achieve this using a fine-grained, one-dimensional sonar-based finger-tracking system. In addition, we use this system to investigate the fundamental issue of how to trigger selections during around-device smartwatch input through two studies. The first examines the methods of double-crossing, dwell, and finger tap in a binary task, while the second considers a subset of these designs in a multi-target task and in the presence and absence of haptic feedback. Results showed double-crossing was optimal for binary tasks, while dwell excelled in multi-target scenarios, and haptic feedback enhanced comfort but not performance. These findings offer design insights for future around-device smartwatch interfaces that can be directly deployed on today’s consumer hardware.
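Two of the selection triggers studied, dwell and double-crossing, reduce to simple decision rules over the one-dimensional finger-position stream that a sonar tracker produces. The thresholds, timings, and synthetic traces below are illustrative choices, not the study’s parameters:

```python
DWELL_S = 0.8        # hold time required inside the target zone (assumed)
TARGET = (4.0, 6.0)  # target zone along the tracked axis, in cm (assumed)

def dwell_select(stream, dt=0.1):
    """Fire when the finger stays inside the target zone for DWELL_S seconds."""
    held = 0.0
    for x in stream:
        held = held + dt if TARGET[0] <= x <= TARGET[1] else 0.0
        if held >= DWELL_S:
            return True
    return False

def double_crossing_select(stream):
    """Fire when the finger crosses the target's outer boundary twice (in, then out)."""
    crossings, prev = 0, stream[0]
    for x in stream[1:]:
        if (prev < TARGET[1]) != (x < TARGET[1]):
            crossings += 1
        prev = x
    return crossings >= 2

hover = [8, 7, 5.5, 5.4, 5.5, 5.6, 5.5, 5.4, 5.5, 5.6, 5.5]  # synthetic hold
swipe = [8, 7, 5.5, 6.5, 8]                                   # synthetic in-out swipe
print(dwell_select(hover), double_crossing_select(swipe))     # True True
```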

DataSentry: Building Missing Data Management System for In-the-Wild Mobile Sensor Data Collection through Multi-Year Iterative Design Approach

Yugyeong Jung (KAIST), Hei Yiu Law (KAIST), Hadong Lee (Seoul National University), Junmo Lee (KAIST), Bongshin Lee (Yonsei University), Uichin Lee (KAIST)

Mobile sensor data collection in people’s daily lives is essential for understanding fine-grained human behaviors. However, in-the-wild data collection often results in missing data due to participant and system-related issues. While existing monitoring systems in the mobile sensing field provide an opportunity to detect missing data, they fall short in monitoring data across many participants and sensors and diagnosing the root causes of missing data, accounting for heterogeneous sensing characteristics of mobile sensor data. To address these limitations, we undertook a multi-year iterative design process to develop a system for monitoring missing data in mobile sensor data collection. Our final prototype, DataSentry, enables the detection, diagnosis, and addressing of missing data issues across many participants and sensors, considering both within- and between-person variability. Based on the iterative design process, we share our experiences, lessons learned, and design implications for developing advanced missing data management systems.

ExploreSelf: Fostering User-driven Exploration and Reflection on Personal Challenges with Adaptive Guidance by Large Language Models

Inhwa Song (KAIST), SoHyun Park (NAVER Cloud), Sachin R Pendse (Georgia Institute of Technology), Jessica Lee Schleider (Northwestern University), Munmun De Choudhury (Georgia Institute of Technology), Young-Ho Kim (NAVER AI Lab)

Expressing stressful experiences in words has been shown to improve mental and physical health, but individuals often disengage from writing interventions as they struggle to organize their thoughts and emotions. Reflective prompts have been used to provide direction, and large language models (LLMs) have demonstrated the potential to provide tailored guidance. However, current systems often limit users’ flexibility to direct their reflections. We thus present ExploreSelf, an LLM-driven application designed to empower users to control their reflective journey, providing adaptive support through dynamically generated questions. Through an exploratory study with 19 participants, we examine how participants explore and reflect on personal challenges using ExploreSelf. Our findings demonstrate that participants valued the flexible navigation of adaptive guidance to control their reflective journey, leading to deeper engagement and insight. Building on our findings, we discuss the implications of designing LLM-driven tools that facilitate user-driven and effective reflection on personal challenges.

Exploring Design Spaces to Facilitate Household Collaboration for Cohabiting Couples

Gahyeon Bae (KAIST)*, Seo Kyoung Park (KAIST)*, Taewan Kim (Samsung Electronics), Hwajung Hong (KAIST)
(* Both authors contributed equally to this research)

Household collaboration among cohabiting couples presents unique challenges due to the intimate nature of the relationships and the lack of external rewards. Current efficiency-oriented technologies neglect these distinct dynamics. Our study aims to examine the real-world context and underlying needs of couples in their collaborative homemaking. We conducted a 10-day empirical investigation involving six Korean couples, supplemented by a probe approach to facilitate reflection on their current homemaking practices. We identified the requirement for ideal household collaboration as a ‘shared ritual for celebratory interaction’ and pinpointed the challenges in achieving this goal. We propose three design opportunities for domestic technology to address this gap: strengthening the meaning of housework around family values, supporting recognition of the partner’s efforts through visualization, and initiating negotiation through defamiliarization. These insights extend the design considerations for domestic technologies, advocating for a broader understanding of the values contributing to satisfactory homemaking activities within the household.

Exploring Modular Prompt Design for Emotion and Mental Health Recognition

Minseo Kim (Hankuk University of Foreign Studies), Taemin Kim (Hansung University), Thu Hoang Anh Vo (KAIST), Yugyeong Jung (KAIST), Uichin Lee (KAIST)

Recent advances in large language models (LLMs) offer human-like capabilities for comprehending emotion and mental states. Prior studies have explored diverse prompt engineering techniques for improving classification performance, but there is a lack of analysis of the prompt design space and the impact of each component. To bridge this gap, we conduct a qualitative thematic analysis of existing prompts for emotion and mental health classification tasks to define the key components of the prompt design space. We then evaluate the impact of major prompt components, such as persona and task instruction, on classification performance by using four LLMs and five datasets. Modular prompt design offers new insights into examining performance variability as well as promoting transparency and reproducibility in LLM-based tasks within health and well-being intervention systems.
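The idea of a modular prompt design space can be illustrated with a short sketch in which each component (persona, task instruction, output format) occupies an independent slot, so ablating a component simply drops its slot; the component texts are invented examples, not the paper’s prompts:

```python
# Each prompt component is an independent slot, so an ablation study can
# toggle components on and off. Component texts below are illustrative.
COMPONENTS = {
    "persona": "You are a clinical psychologist experienced in mood assessment.",
    "task": "Classify the emotion of the following diary entry as one of: joy, sadness, anger, fear.",
    "format": "Answer with the label only.",
}

def build_prompt(text, include=("persona", "task", "format")):
    """Assemble a prompt from the selected components plus the input text."""
    parts = [COMPONENTS[c] for c in include if c in COMPONENTS]
    return "\n".join(parts + [f"Entry: {text}"])

# Ablation: same input, with and without the persona component.
entry = "I finally finished the marathon I trained a year for."
print(build_prompt(entry))
print("---")
print(build_prompt(entry, include=("task", "format")))
```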

Generating Highlight Videos of a User-Specified Length using Most Replayed Data

Minsun Kim (KAIST), Dawon Lee (KAIST), Junyong Noh (KAIST)

A highlight is a short edit of the original video that includes the most engaging moments. Given the rigid timing of TV commercial slots and length limits of social media uploads, generating highlights of specific lengths is crucial. Previous research on automatic highlight generation often overlooked the control over the duration of the final video, producing highlights of arbitrary lengths. We propose a novel system that automatically generates highlights of any user-specified length. Our system leverages Most Replayed Data (MRD), which identifies how frequently a video has been watched over time, to gauge the most engaging parts. It then optimizes the final editing path by adjusting internal segment durations. We evaluated the quality of our system’s outputs through two user studies, including a comparison with highlights created by human editors. Results show that our system can automatically produce highlights that are indistinguishable from those created by humans in viewing experience.
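At its core, length-controlled highlight generation is a constrained selection problem. The sketch below is only an approximation of the paper’s approach: it picks whole segments that maximize a stand-in engagement score (playing the role of Most Replayed Data) while summing exactly to the requested duration, whereas the actual system also adjusts internal segment durations:

```python
# Knapsack-style selection of video segments under an exact duration budget.
# Engagement values stand in for Most Replayed Data; all numbers are toys.
def pick_highlights(segments, target_sec):
    """segments: list of (duration_sec, engagement). Returns chosen indices."""
    best = {0: (0.0, [])}  # best[t] = (score, chosen) for exact duration t
    for i, (dur, eng) in enumerate(segments):
        for t, (score, chosen) in sorted(best.items(), reverse=True):
            nt = t + dur
            if nt <= target_sec and (nt not in best or best[nt][0] < score + eng):
                best[nt] = (score + eng, chosen + [i])
    return best.get(target_sec, (0.0, []))[1]

segments = [(10, 0.9), (15, 0.4), (5, 0.7), (20, 0.8), (10, 0.3)]
print(pick_highlights(segments, target_sec=30))  # -> [0, 1, 2] for this toy input
```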

HapticGen: Generative Text-to-Vibration Model for Streamlining Haptic Design

Youjin Sung (KAIST), Kevin John (Arizona State University), Sang Ho Yoon (KAIST), Hasti Seifi (Arizona State University)

Designing haptic effects is a complex, time-consuming process requiring specialized skills and tools. To support haptic design, we introduce HapticGen, a generative model designed to create vibrotactile signals from text inputs. We conducted a formative workshop to identify requirements for an AI-driven haptic model. Given the limited size of existing haptic datasets, we trained HapticGen on a large, labeled dataset of 335k audio samples using an automated audio-to-haptic conversion method. Expert haptic designers then used HapticGen’s integrated interface to prompt and rate signals, creating a haptic-specific preference dataset for fine-tuning. We evaluated the fine-tuned HapticGen with 32 users, qualitatively and quantitatively, in an A/B comparison against a baseline text-to-audio model with audio-to-haptic conversion. Results show significant improvements in five haptic experience (e.g., realism) and system usability factors (e.g., future use). Qualitative feedback indicates HapticGen streamlines the ideation process for designers and helps generate diverse, nuanced vibrations.
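The automated audio-to-haptic conversion used to bootstrap training data can be approximated by a classic envelope-following scheme: track the audio’s loudness envelope and use it to modulate a carrier inside the skin’s sensitive frequency range. The carrier frequency and smoothing window below are our assumptions, not HapticGen’s pipeline:

```python
import numpy as np

FS = 8000  # Hz (assumed sampling rate for this sketch)

def audio_to_vibration(audio, carrier_hz=170, win=200):
    """Envelope-follow the audio and remap it onto a vibrotactile carrier."""
    # Moving-average of the rectified signal approximates the loudness envelope.
    envelope = np.convolve(np.abs(audio), np.ones(win) / win, mode="same")
    t = np.arange(len(audio)) / FS
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

# Toy input: a decaying 1 kHz tone (too high for the skin to feel directly).
t = np.arange(FS) / FS
audio = np.exp(-3 * t) * np.sin(2 * np.pi * 1000 * t)
vib = audio_to_vibration(audio)
print(vib.shape, float(vib.max()))
```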

I Was Told to Install the Antivirus App, but I'm Not Sure I Need It: Understanding Smartphone Antivirus Software Adoption and User Perceptions

Seyoung Jin (Sungkyunkwan University), Heewon Baek (Sungkyunkwan University), Uichin Lee (KAIST), Hyoungshick Kim (Sungkyunkwan University)

The rising threat of mobile malware has prompted security vendors to recommend antivirus software for smartphones, yet user misconceptions, regulatory requirements, and improper use undermine its effectiveness. Our mixed-method study, consisting of in-depth interviews with 23 participants and a survey of 250 participants, examines smartphone antivirus software adoption in South Korea, where mandatory installation for banking and other financial apps is common. Many users confuse antivirus software with general security tools and remain unaware of its limited scope. Adoption is significantly influenced by perceived vulnerability, response efficacy, self-efficacy, social norms, and awareness, while concerns about system performance and skepticism about necessity lead to discontinuation or non-use. Mandatory installations for financial apps in South Korea contribute to user misconceptions, negative perceptions, and a false sense of security. These findings highlight the need for targeted user education, clearer communication about mobile-specific threats, and efforts to promote informed and effective engagement with antivirus software.

Into the Unknown: Leveraging Conversational AI in Supporting Young Migrants' Journeys Towards Cultural Adaptation

Sunok Lee (Aalto University), Dasom Choi (KAIST), Lucy Truong (Aalto University), Nitin Sawhney (Aalto University), Henna Paakki (Aalto University)

Accelerated globalization has made migration commonplace, creating significant cultural adaptation challenges, particularly for young migrants. While HCI research has explored the role of technology in migrants’ cultural adaptation, there is a need to address the diverse cultural backgrounds and needs of young migrants specifically. Recognizing the potential of conversational AI to adapt to diverse cultural contexts, we investigate how young migrants could use this technology in their adaptation journey and explore its societal implementation. Through individual workshops with young migrants and stakeholder interviews—including AI practitioners, public sector workers, policy experts, and social scientists—we found that both groups of participants expect conversational AI to support young migrants in connecting with the host culture before migration, exploring the home culture, and aligning identities across home and host cultures. However, challenges such as expectation gaps and cultural bias may hinder cultural adaptation. We discuss design considerations for culturally sensitive AI that empowers young migrants and propose strategies to enhance societal readiness for AI-driven cultural adaptation.

Less Talk, More Trust: Understanding Players' In-game Assessment of Communication Processes in League of Legends

Juhoon Lee (KAIST), Seoyoung Kim (KAIST), Yeon Su Park (KAIST), Juho Kim (KAIST), Jeong-woo Jang (KAIST), Joseph Seering (KAIST)


Leveling Up Together: Fostering Positive Growth and Safe Online Spaces for Teen Roblox Developers

Yubin Choi (KAIST), Jeanne Choi (KAIST), Joseph Seering (KAIST)

Creating games together is both a playful and effective way to develop skills in computational thinking, collaboration, and more. However, game development can be challenging for younger developers who lack formal training. While teenage developers frequently turn to online communities for peer support, their experiences may vary. To better understand the benefits and challenges teens face within online developer communities, we conducted interviews with 18 teenagers who created games or elements in Roblox and received peer support from one or more online Roblox developer communities. Our findings show that developer communities provide teens with valuable resources for technical, social, and career growth. However, teenagers also struggle with inter-user conflicts and a lack of community structure, leading to difficulties in handling complex issues that may arise, such as financial scams. Based on these insights, we propose takeaways for creating positive and safe online spaces for teenage game creators.

Like Adding a Small Weight to a Scale About to Tip: Personalizing Micro-Financial Incentives for Digital Wellbeing

Sueun Jang (KAIST), Youngseok Seo (KAIST), Woohyeok Choi (Kangwon National University), Uichin Lee (KAIST)

Personalized behavior change interventions can be effective as they dynamically adapt to an individual’s context. Financial incentives, a commonly used intervention in commercial applications and policy-making, offer a mechanism for creating personalized micro-interventions that are both quantifiable and amenable to systematic evaluation. However, the effectiveness of such personalized micro-financial incentives in real-world settings remains largely unexplored. In this study, we propose a personalization strategy that dynamically adjusts the amount of micro-financial incentives to promote smartphone use regulation and explore its efficacy and user experience through a four-week, in-the-wild user study. The results demonstrate that the proposed method is highly cost-effective without compromising intervention effectiveness. Based on these findings, we discuss the role of micro-financial incentives in enhancing awareness, design considerations for personalized micro-financial incentive systems, and their potential benefits and limitations concerning motivation change.

Living Alongside Areca: Exploring Human Experiences with Things Expressing Thoughts and Emotions

Hyungjun Cho (KAIST), Tek-Jin Nam (KAIST)

Technological advancements such as LLMs have enabled everyday things to use language, fostering increased anthropomorphism during interactions. This study employs material speculation to investigate how people experience things that express their thoughts, emotions, and intentions. We utilized Areca, an air purifier capable of keeping a diary, and placed it in the everyday spaces of eight participants over three weeks. Weekly interviews were conducted to capture participants’ evolving interactions with Areca, concluding with a session collaboratively speculating on the future of everyday things. Our findings indicate that things expressing thoughts, emotions, and intentions can be perceived as possessing agency beyond mere functionality. While some participants exhibited emotional engagement with Areca over time, responses varied, including moments of detachment. We conclude with design implications for HCI designers, offering insights into how emerging technologies may shape human-thing relationships in complex ways.

Looking but Not Focusing: Defining Gaze-Based Indices of Attention Lapses and Classifying Attentional States

Eugene Hwang (KAIST), Jeongmi Lee (KAIST)

Identifying objective markers of attentional states is critical, particularly in real-world scenarios where attentional lapses have serious consequences. In this study, we identified gaze-based indices of attentional lapses and validated them by examining their impact on the performance of classification models. We designed a virtual reality visual search task that encouraged active eye movements to define dynamic gaze-based metrics of different attentional states (zone in/out). The results revealed significant differences in both reactive ocular features, such as first fixation and saccade onset latency, and global ocular features, such as saccade amplitude, depending on the attentional state. Moreover, the performance of the classification models improved significantly when trained only on the proven gaze-based and behavioral indices rather than all available features, with the highest prediction accuracy of 79.3%. We highlight the importance of preliminary studies before model training and provide generalizable gaze-based indices of attentional states for practical applications.

Modes of Interaction with Navigation Apps

Ju Yeon Jung (KAIST), Tom Steinberger (KAIST)

Despite many HCI studies of diverse factors shaping users’ navigation experiences, how to design navigation systems to be adaptable to all of these factors remains a challenge. To address this challenge, we study general variations in users’ intended navigation experiences. Based on 30 interviews, we find that interactions with navigation apps can be subsumed under three “modes”: follow, modify, and background. For each mode of interaction, we highlight users’ key motivations, interactions with apps, and challenges. We propose these modes as higher-level concepts for exploring how to enable the details of navigation support to be adaptable to users’ generally intended navigation experiences. We discuss broader implications for issues of efficiency and overreliance in how we experience physical environments through navigation apps.

OptiSub: Optimizing Video Subtitle Presentation for Varied Display and Font Sizes via Speech Pause-Driven Chunking

Dawon Lee (KAIST), Jongwoo Choi (KAIST), Junyong Noh (KAIST)

Viewers desire to watch video content with subtitles in various font sizes according to their viewing environment and personal preferences. Unfortunately, because a chunk of the subtitle—a segment of the text corpus displayed on the screen at once—is typically constructed based on one specific font size, text truncation or awkward line breaks can occur when different font sizes are utilized. While existing methods address this problem by reconstructing subtitle chunks based on maximum character counts, they overlook synchronization of the display time with the content, often causing misaligned text. We introduce OptiSub, a fully automated method that optimizes subtitle segmentation to fit any user-specified font size while ensuring synchronization with the content. Our method leverages the timing of speech pauses within the video for synchronization. Experimental results, including a user study comparing OptiSub with previous methods, demonstrate its effectiveness and practicality across diverse font sizes and input videos.
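A greedy simplification of pause-driven chunking illustrates the mechanism: accumulate words until a chunk would exceed the font-size-dependent character budget, then break at the longest speech pause seen so far. OptiSub formulates this as a global optimization; the word timings and budget here are illustrative:

```python
# Greedy pause-driven chunking sketch. OptiSub optimizes globally; this
# simplified version only breaks locally at the largest pause.
def chunk_subtitles(words, max_chars):
    """words: list of (text, start_sec, end_sec). Returns list of chunk strings."""
    chunks, current = [], []
    for word in words:
        tentative = current + [word]
        text = " ".join(w[0] for w in tentative)
        if current and len(text) > max_chars:
            # Budget exceeded: close the chunk at the biggest inter-word pause.
            pauses = [tentative[i + 1][1] - tentative[i][2]
                      for i in range(len(tentative) - 1)]
            cut = max(range(len(pauses)), key=pauses.__getitem__) + 1
            chunks.append(tentative[:cut])
            current = tentative[cut:]
        else:
            current = tentative
    if current:
        chunks.append(current)
    return [" ".join(w[0] for w in c) for c in chunks]

# Illustrative word timings; OptiSub derives these from the video's audio.
words = [("Hello", 0.0, 0.4), ("everyone", 0.5, 1.0), ("welcome", 1.8, 2.2),
         ("back", 2.3, 2.6), ("to", 2.65, 2.7), ("the", 2.75, 2.9), ("show", 2.95, 3.3)]
print(chunk_subtitles(words, max_chars=16))
```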

Over the Mouse: Navigating across the GUI with Finger-Lifting Operation Mouse

YoungIn Kim (KAIST), Yohan Yun (KAIST), Taejun Kim (KAIST), Geehyuk Lee (KAIST)

Modern GUIs often have a hierarchical structure, i.e., the z-axis of the GUI interaction space. However, conventional mice do not support effective navigation along the z-axis, leading to increased physical movements and cognitive load. To address this inefficiency, we present the OtMouse, a novel mouse that supports finger-lifting operations by detecting finger height through proximity sensors embedded beneath the mouse buttons, and the ‘Over the Mouse’ (OtM) interface, a set of interaction techniques along the z-axis of the GUI interaction space with the OtMouse. Initially, we evaluated the performance of finger-lifting operations (n = 8) with the OtMouse for two- and three-level lifting discrimination tasks. Subsequently, we conducted a user study (n = 16) to compare the usability of the OtM interface and the traditional mouse interface for three representative tasks: ‘Context Switch,’ ‘Video Preview,’ and ‘Map Zooming.’ The results showed that the OtM interface was both qualitatively and quantitatively superior to the traditional mouse interface in the Context Switch and Video Preview tasks. This research contributes to the ongoing efforts to enhance mouse-based GUI navigation experiences.

Peerspective: A Study on Reciprocal Tracking for Self-awareness and Relational Insight

Kwangyoung Lee (KAIST), Yeohyun Jung (KAIST), Gyuwon Jung (KAIST), Xi Lu (University at Buffalo, SUNY), Hwajung Hong (KAIST)

Personal informatics (PI) helps individuals understand themselves, but it often struggles to capture non-conscious behaviors such as stress responses, habitual actions, and communication styles. Incorporating social aspects into PI systems offers new perspectives on self-understanding, yet prior research has largely focused on unidirectional approaches that center benefits on the primary tracker. To address this gap, we introduce the Peerspective study, which explores reciprocal tracking—a bidirectional practice where two participants observe and provide feedback to each other, fostering mutual self-understanding and collaboration. In a week-long study with eight peer dyads, we explored how reciprocal observation and feedback influence self-awareness and interpersonal relationships. Our findings reveal that reciprocal tracking not only helps participants uncover blind spots and expand their self-concepts but also enhances empathy, deepens communication, and promotes sustained engagement. We discuss key facilitators and challenges of integrating reciprocity into personal informatics systems and offer design considerations for supporting collaborative tracking in everyday contexts.

PinchCatcher: Enabling Multi-selection for Gaze+Pinch

Jinwook Kim (KAIST), Sangmin Park (KAIST), Qiushi Zhou (Aarhus University), Mar Gonzalez-Franco (Google), Jeongmi Lee (KAIST), Ken Pfeuffer (Aarhus University)

This paper investigates multi-selection in XR interfaces based on eye and hand interaction. We propose enabling multi-selection using different variations of techniques that combine gaze with a semi-pinch gesture, allowing users to select multiple objects, while on the way to a full-pinch. While our exploration is based on the semi-pinch mode for activating a quasi-mode, we explore four methods for confirming subselections in multi-selection mode, varying in effort and complexity: dwell-time (SemiDwell), swipe (SemiSwipe), tilt (SemiTilt), and non-dominant hand input (SemiNDH), and compare them to a baseline technique. In the user study, we evaluate their effectiveness in reducing task completion time, errors, and effort. The results indicate the strengths and weaknesses of each technique, with SemiSwipe and SemiDwell as the most preferred methods by participants. We also demonstrate their utility in file managing and RTS gaming application scenarios. This study provides valuable insights to advance 3D input systems in XR.

PlanTogether: Facilitating AI Application Planning Using Information Graphs and Large Language Models

Dae Hyun Kim (Yonsei University), Daeheon Jeong (KAIST), Shakhnozakhon Yadgarova (KAIST), Hyungyu Shin (KAIST), Jinho Son (Algorithm Labs), Hariharan Subramonyam (Stanford University), Juho Kim (KAIST)

In client-AI expert collaborations, the planning stage of AI application development begins with the client; a client outlines their needs and expectations while assessing available resources (pre-collaboration planning). Despite the importance of pre-collaboration plans for discussions with AI experts for iteration and development, the client often fails to translate their needs and expectations into a concrete, actionable plan. To facilitate pre-collaboration planning, we introduce PlanTogether, a system that generates tailored client support using large language models and a Planning Information Graph, whose nodes and edges represent information in the plan and the information dependencies. Using the graph, the system links and presents information that guides the client’s reasoning; it provides tips and suggestions based on relevant information and displays an overview to help clients understand their progression through the plan. A user study validates the effectiveness of PlanTogether in helping clients navigate information dependencies and write actionable plans reflecting their domain expertise.

Proxona: Supporting Creators' Sensemaking and Ideation with LLM-Powered Audience Personas

Yoonseo Choi (KAIST), Eun Jeong Kang (Cornell University), Seulgi Choi (KAIST), Min Kyung Lee (University of Texas at Austin), Juho Kim (KAIST)

A content creator’s success depends on understanding their audience, but existing tools fail to provide in-depth insights and actionable feedback necessary for effectively targeting their audience. We present Proxona, an LLM-powered system that transforms static audience comments into interactive, multi-dimensional personas, allowing creators to engage with them to gain insights, gather simulated feedback, and refine content. Proxona distills audience traits from comments into dimensions (categories) and values (attributes), then clusters them into interactive personas representing audience segments. Technical evaluations show that Proxona generates diverse dimensions and values, enabling the creation of personas that sufficiently reflect the audience and support data-grounded conversation. User evaluation with 11 creators confirmed that Proxona helped creators discover hidden audiences, gain persona-informed insights on early-stage content, and confidently employ strategies when iteratively creating storylines. Proxona introduces a novel creator-audience interaction framework and fosters a persona-driven, co-creative process.

Quantifying Social Connection With Verbal and Non-Verbal Behaviors in Virtual Reality Conversations

Hyunchul Kim (KAIST), Jeongmi Lee (KAIST)

As virtual reality (VR) continues to evolve as a platform for gathering and collaboration, new forms of communication using voice and avatars are being actively studied. However, the objective and dynamic assessment of social experiences in VR remains a significant challenge, while obtrusive self-report methods prevail. This study aims to identify verbal and nonverbal behavioral indices of perceived social experience in the context of virtual conversations. In our experiment, 52 participants engaged in a ten-minute dyadic conversation in VR and rated the level of social experiences, while turn-taking patterns and behavioral (gaze, pose) data were recorded. The results indicated that rapid response time, longer speech duration, longer gaze duration during turn-taking gaps, and higher nodding frequency during turns predicted the dynamic changes in users’ social experience. By providing objective and unobtrusive measures of social interactions, this study contributes to enhancing the understanding and improvement of social VR experiences.

SpatIO: Spatial Physical Computing Toolkit Based on Extended Reality

Seung Hyeon Han (KAIST), Yeeun Han (KAIST), Kyeongho Park (KAIST), Sangjun Lee (KAIST), Woohun Lee (KAIST)

Proper placement of sensors and actuators is one of the key factors when designing spatial and proxemic interactions. However, current physical computing tools do not effectively support placing components in three-dimensional space, often forcing designers to build and test prototypes without precise spatial configuration. To address this, we propose the concept of spatial physical computing and present SpatIO, an XR-based physical computing toolkit that supports a continuous end-to-end workflow. SpatIO consists of three interconnected subsystems: SpatIO Environment for composing and testing prototypes with virtual sensors and actuators, SpatIO Module for converting virtually placed components into physical ones, and SpatIO Code for authoring interactions with spatial visualization of data flow. Through a comparative user study with 20 designers, we found that SpatIO significantly altered workflow order, encouraged broader exploration of component placement, enhanced spatial correlation between code and components, and promoted in-situ bodily testing.

Sprayable Sound: Exploring the Experiential and Design Potential of Physically Spraying Sound Interaction

Jongik Jeon (KAIST), Chang Hee Lee (KAIST)

Perfume and fragrance have captivated people for centuries across different cultures. Inspired by the ephemeral nature of sprayable olfactory interactions and experiences, we explore the potential of applying a similar interaction principle to the auditory modality. In this paper, we present SoundMist, a sonic interaction method that enables users to generate ephemeral auditory presences by physically dispersing a liquid into the air, much like the fading phenomenon of fragrance. We conducted a study to understand the experiential factors inherent in sprayable sound interaction and held an ideation workshop to identify potential design spaces or opportunities that this interaction could shape. Our findings, derived from thematic analysis, suggest that physically sprayable sound interaction can induce experiences related to four key factors—materiality of sound produced by dispersed liquid particles, different sounds entangled with each liquid, illusive perception of temporally floating sound, and enjoyment derived from blending different sounds—and can be applied to artistic practices, safety indications, multisensory approaches, and emotional interfaces.

TeachTune: Reviewing Pedagogical Agents Against Diverse Student Profiles with Simulated Students

Hyoungwook Jin (KAIST), Minju Yoo (Ewha Womans University), Jeongeon Park (University of California San Diego), Yokyung Lee (KAIST), Xu Wang (University of Michigan), Juho Kim (KAIST)

Large language models (LLMs) can empower teachers to build pedagogical conversational agents (PCAs) customized for their students. As students have different prior knowledge and motivation levels, teachers must review the adaptivity of their PCAs to diverse students. Existing chatbot reviewing methods (e.g., direct chat and benchmarks) are either manually intensive for multiple iterations or limited to testing only single-turn interactions. We present TeachTune, where teachers can create simulated students and review PCAs by observing automated chats between PCAs and simulated students. Our technical pipeline instructs an LLM-based student to simulate prescribed knowledge levels and traits, helping teachers explore diverse conversation patterns. Our pipeline could produce simulated students whose behaviors correlate highly with their input knowledge and motivation levels, within 5% and 10% accuracy gaps. Thirty science teachers designed PCAs in a between-subjects study, and using TeachTune resulted in a lower task load and higher student profile coverage over a baseline.
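A simulated student of the kind described can be sketched as a profile compiled into a system prompt, plus a loop that alternates PCA and student turns. The profile fields and prompt wording are illustrative, and the reply functions are placeholders for real chat-model calls rather than TeachTune’s pipeline:

```python
from dataclasses import dataclass

@dataclass
class StudentProfile:
    knowledge_level: str   # e.g., "novice", "intermediate" (illustrative fields)
    motivation: str        # e.g., "low", "high"
    misconception: str     # a seeded misconception for the PCA to address

def student_system_prompt(p: StudentProfile) -> str:
    """Compile a profile into a system prompt for the simulated student."""
    return (
        f"You are role-playing a {p.knowledge_level} science student with "
        f"{p.motivation} motivation. You hold this misconception and should "
        f"not drop it unless the tutor convincingly corrects it: {p.misconception} "
        "Reply in one or two short sentences, as a student would."
    )

def simulate_chat(pca_reply_fn, student_reply_fn, turns=3,
                  opening="Let's talk about photosynthesis."):
    """Alternate PCA and simulated-student turns and return the transcript."""
    transcript = [("pca", opening)]
    for _ in range(turns):
        transcript.append(("student", student_reply_fn(transcript)))
        transcript.append(("pca", pca_reply_fn(transcript)))
    return transcript

profile = StudentProfile("novice", "low", "Plants eat soil to get their food.")
print(student_system_prompt(profile))
# Canned replies stand in for actual LLM calls in this sketch.
print(simulate_chat(lambda t: "Good question! Plants make food from light, water, and CO2.",
                    lambda t: "But plants eat soil, right?", turns=1))
```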

The Design Space for Online Restorative Justice Tools: A Case Study with ApoloBot

Bich Ngoc (Rubi) Doan (KAIST), Joseph Seering (KAIST)

Volunteer moderators use various strategies to address online harms within their communities. Although punitive measures like content removal or account bans are common, recent research has explored the potential for restorative justice as an alternative framework to address the distinct needs of victims, offenders, and community members. In this study, we take steps toward identifying a more concrete design space for restorative justice-oriented tools by developing ApoloBot, a Discord bot designed to facilitate apologies when harm occurs in online communities. We present results from two rounds of interviews: first, with moderators giving feedback about the design of ApoloBot, and second, after a subset of these moderators have deployed ApoloBot in their communities. This study builds on prior work to yield more detailed insights regarding the potential of adopting online restorative justice tools, including opportunities, challenges, and implications for future designs.

The Effect of In-Car Agent Embodiment on Different Types of Information Delivery

Bonhee Ku (KAIST), Chang-Min Kim (KAIST), Hyungjun Cho (KAIST), Jisu Park (KAIST), Tek-Jin Nam (KAIST)

As vehicles become more advanced, in-car agents must manage increasingly complex interactions, heightening the need for effective information delivery. This paper investigates how different embodiments of in-car agents affect the delivery of various information types. We developed the ‘Drop-lit’ prototype to explore three embodiment features: physicality, characterization, and movement. In a user study with 20 participants, we compared three representative agent designs: abstraction, digital character, and mixed-media, across six categories of in-car information. Additionally, a co-design session allowed participants to self-customize and combine embodiment features for six specific driving scenarios. Results indicated that mixed-media agents were most effective for urgent warnings, digital characters for recommendations, and abstracted agents for simple reference information. The study also revealed how embodiment influenced experiential factors such as attention-grabbing, urgency, friendliness, trustworthiness, and playfulness, offering insights for optimizing agent design to enhance user engagement and information delivery in automotive contexts.

Think Together and Work Better: Combining Humans' and LLMs' Think-Aloud Outcomes for Effective Text Evaluation

SeongYeub Chu (KAIST), JongWoo Kim (KAIST), Mun Yong Yi (KAIST)

This study introduces InteractEval, a framework that integrates the outcomes of Think-Aloud (TA) sessions conducted by humans and LLMs to generate attributes for checklist-based text evaluation. By combining humans’ flexibility and high-level reasoning with LLMs’ consistency and extensive knowledge, InteractEval outperforms text evaluation baselines on a text summarization benchmark (SummEval) and an essay scoring benchmark (ELLIPSE). Furthermore, an in-depth analysis shows that it promotes divergent thinking in both humans and LLMs, leading to the generation of a wider range of relevant attributes and enhanced text evaluation performance. A subsequent comparative analysis reveals that humans excel at identifying attributes related to internal quality (Coherence and Fluency), while LLMs perform better at attributes related to external alignment (Consistency and Relevance). Consequently, leveraging humans and LLMs together produces the best evaluation outcomes, highlighting the necessity of effectively combining humans and LLMs in automated checklist-based text evaluation.
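Checklist-based evaluation of this kind can be sketched in a few lines: attributes sourced from human and LLM Think-Aloud sessions become yes/no questions, and a text’s per-dimension score is the fraction answered “yes.” The checklist items and the keyword-matching stand-in evaluator below are toys; the actual framework asks an LLM each question:

```python
# Checklist attributes grouped by dimension; items below are invented examples.
CHECKLIST = {
    "coherence": [  # internal-quality attributes (the kind humans excel at)
        "Do consecutive sentences follow logically?",
        "Is there a clear topic throughout?",
    ],
    "consistency": [  # external-alignment attributes (the kind LLMs excel at)
        "Does every claim appear in the source document?",
    ],
}

def evaluate(text, source, answer_fn):
    """Aggregate per-dimension scores from an answer function (LLM or human)."""
    return {dim: sum(answer_fn(q, text, source) for q in qs) / len(qs)
            for dim, qs in CHECKLIST.items()}

# Toy evaluator: a real system would pose each checklist question to an LLM.
toy = lambda q, text, source: all(w in source for w in text.split())
print(evaluate("cats sleep", "cats often sleep all day", toy))
```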

Understanding Practical Challenges and Enablers for Embedding Environmental Perspectives in Digital Product Design and Development

Minha Lee (KAIST), Soyeong Min (KAIST), Gahyeon Kim (KAIST), Sangsu Lee (KAIST)

Although awareness of and urgency around the environmental impact of energy consumption in digital infrastructures such as data centers are gradually increasing, many academic efforts still struggle to translate research into practical, real-world applications for reducing digital carbon footprints. Recent studies have highlighted that incorporating environmental interventions such as sustainable interaction design (SID) into digital product development practices holds significant potential to reduce their carbon footprint, but integrating sustainability perspectives into everyday design and development practices remains limited in the industry. In this study, we report on the results of in-depth interviews with eight practitioners who have attempted to embed environmental interventions into their practices, capturing experiences that highlight complex challenges and motivational enablers within the organizational context. Based on these findings, we propose implications for broader engagement in sustainability-centered design and development practices that resonate with organizational complexities.

Understanding and Improving User Adoption and Security Awareness in Password Checkup Services

Sanghak Oh (Sungkyunkwan University), Heewon Baek (Sungkyunkwan University), Jun Ho Huh (Samsung Research), Taeyoung Kim (Sungkyunkwan University), Woojin Jeon (Sungkyunkwan University), Ian Oakley (KAIST), Hyoungshick Kim (Sungkyunkwan University)

Password checkup services (PCS) identify compromised, reused, or weak passwords, helping users secure at-risk accounts. However, adoption rates are low. We investigated factors influencing PCS use and password change challenges via an online survey (N=238). Key adoption factors were “perceived usefulness,” “ease of use,” and “self-efficacy.” We also identified barriers to changing compromised passwords, including alert fatigue, low perceived urgency, and reliance on other security measures. We then designed interfaces mitigating these issues through clearer messaging and automation (e.g., simultaneous password changes and direct links to change pages). A user study (N=50) showed our designs significantly improved password change success rates, reaching 40% and 74% in runtime alert and PCS checkup reporting scenarios, respectively (compared to 16% and 60% with a baseline).

User Experience of LLM-based Recommendation Systems: A Case of Music Recommendation

Sojeong Yun (KAIST), Youn-kyung Lim (KAIST)

The advancement of large language models (LLMs) now allows users to actively interact with conversational recommendation systems (CRS) and build their own personalized recommendation services tailored to their unique needs and goals. This experience offers users a significantly higher level of controllability compared to traditional recommendation systems (RS), enabling an entirely new dimension of recommendation experiences. Building on this context, this study explored the unique experiences that LLM-powered CRS can provide compared to traditional RS. Through a three-week diary study with 12 participants using custom GPTs for music recommendations, we found that LLM-powered CRS can (1) help users clarify implicit needs, (2) support unique exploration, and (3) facilitate a deeper understanding of musical preferences. Based on these findings, we discuss the new design space enabled by LLM-powered CRS and highlight its potential to support more personalized, user-driven recommendation experiences.

VibWalk: Mapping Lower-limb Haptic Experiences of Everyday Walking

Shih Ying-Lei (HITSZ), Dongxu Tang (HITSZ), Weiming Hu (HITSZ), Sang Ho Yoon (KAIST), Yitian Shao (HITSZ)

Walking is among the most common human activities where the feet can gather rich tactile information from the ground. The dynamic contact between the feet and the ground generates vibration signals that can be sensed by the foot skin. While existing research focuses on foot pressure sensing and lower-limb interactions, methods of decoding tactile information from foot vibrations remain underexplored. Here, we propose a foot-equipped wearable system capable of recording wideband vibration signals during walking activities. By enabling location-based recording, our system generates maps of haptic data that encode information on ground materials, lower-limb activities, and road conditions. Its efficacy was demonstrated through studies involving 31 users walking over 18 different ground textures, achieving an overall identification accuracy exceeding 95% (cross-user accuracy of 87%). Our system allows pedestrians to map haptic information through their daily walking activities, which has potential applications in creating digitalized walking experiences and monitoring road conditions.
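As a rough illustration of the kind of texture identification the system performs, the sketch below extracts spectral band energies from synthetic vibration signals and cross-validates a random-forest classifier. The sampling rate, features, and model here are our assumptions, not the paper’s pipeline.

```python
# Illustrative sketch: classifying ground 'textures' from vibration signals
# via spectral band energies. Data is synthetic; the sampling rate, feature
# choice, and classifier are assumptions, not the authors' design.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
FS = 2000  # sampling rate in Hz (assumed)

def band_energies(signal: np.ndarray, n_bands: int = 16) -> np.ndarray:
    """Split the magnitude spectrum into bands and return log energies."""
    spectrum = np.abs(np.fft.rfft(signal))
    bands = np.array_split(spectrum, n_bands)
    return np.log1p(np.array([b.sum() for b in bands]))

# Synthetic stand-in data: three 'textures' with different dominant bands.
X, y = [], []
for label, freq in enumerate([50, 200, 600]):
    for _ in range(60):
        t = np.arange(FS) / FS
        sig = np.sin(2 * np.pi * freq * t) + 0.5 * rng.standard_normal(FS)
        X.append(band_energies(sig))
        y.append(label)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, np.array(X), np.array(y), cv=5).mean())
```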

“What If Smart Homes Could See Our Homes?”: Exploring DIY Smart Home Building Experiences with VLM-Based Camera Sensors

Sojeong Yun (KAIST), Youn-kyung Lim (KAIST)

The advancement of Vision-Language Model (VLM) camera sensors, which enable autonomous understanding of household situations without user intervention, has the potential to completely transform the DIY smart home building experience. Will this simplify or complicate the DIY smart home process? Additionally, what features do users want to create using these sensors? To explore this, we conducted a three-week diary-based experience prototyping study with 12 participants. Participants recorded their daily activities, used GPT to analyze the images, and manually customized and tested smart home features based on the analysis. The study revealed three key findings: (1) participants’ expectations for VLM camera-based smart homes, (2) the impact of VLM camera sensor characteristics on the DIY process, and (3) users’ concerns. Through the findings of this study, we propose design implications to support the DIY smart home building process with VLM camera sensors, and discuss living with intelligence.

Interactivity

Hyewon Lee (KAIST), Christopher Bannon (KAIST), Andrea Bianchi (KAIST)

Camera layout is a critical aspect of digital production, shaping the narrative and emotional tone of animation, CGI, and previsualization. However, managing camera movements in 3D space is a known challenge due to the complexity of controlling six degrees of freedom (DoF). Virtual camera systems aim to replicate the functionality of physical cameras, allowing users to manipulate a virtual camera within a digital scene. However, these systems often struggle with environmental constraints and can lead to spatial disorientation. To address these limitations, we present CamARa, an augmented camera layout system that leverages augmented reality (AR) for intuitive, spatially aware interaction. CamARa allows users to explore and create camera movements with a mobile device, utilizing spatial reference in the environment. It visualizes movement trajectories in AR, facilitating iterative and collaborative design. This work highlights the potential of AR as a versatile alternative to traditional and VR-based camera layout tools, bridging the gap between physical and virtual spaces to support digital cinematographic workflows.

Yeeun Shin (KAIST), Seung Hyeon Han (KAIST), Woohun Lee (KAIST)

Immersive authoring provides a powerful 3D content creation experience in virtual reality (VR) by freeing users from the tedious loop of desktop editing and VR validation. However, complex control panels required for creative tasks often disrupt immersion with awkward or unstable spatial interactions. To address this, we present Desk Console, an authoring interface that transforms 2D control panels into virtual 3D controls on a physical desk, enabling tangible spatial interactions similar to those in the real world. Desk Console transforms traditional control panels into 3D representations based on input types and provides passive haptic feedback through the desk’s physical surface. We demonstrate Desk Console’s capabilities through an interactive 3D scene design application.

Late-Breaking Work

Applying the Gricean Maxims to a Human-LLM Interaction Cycle: Design Insights from a Participatory Approach

Yoonsu Kim (KAIST), Brandon Chin (University of California Berkeley), Kihoon Son (KAIST), Seoyoung Kim (KAIST), Juho Kim (KAIST)

Traditional content warnings on film streaming services are limited to warnings in the form of text or pictograms that only offer broad categorizations at the start for a few seconds. This method does not provide details on the timing and intensity of sensitive scenes. To explore the potential for improving content warnings, we investigated users’ perceptions of the current system and their expectations for a new content warning system. This was achieved through participatory design workshops involving 11 participants. We found users’ expectations in three aspects: 1) develop a more nuanced understanding of their personal sensitivities beyond content sensitivities, 2) enable a trigger-centric film exploration process, and 3) allow for predictions regarding the timing of scenes and mitigating the intensity of sensitive content. Our study initiates a preliminary exploration of advanced content warnings, incorporating users’ specific expectations and creative ideas, with the goal of fostering safer viewing experiences.

"I know my data doesn't leave my phone, but still feel like being wiretapped": Understanding (Mis)Perceptions of On-Device AI Vishing Detection Apps

Subin Park (KAIST), Hyungjun Yoon (KAIST), Janu Kim (KAIST), Hyoungshick Kim (Sungkyunkwan University), Sung-Ju Lee (KAIST)

Vishing, or voice phishing, is a growing global threat exploiting calls to steal sensitive information or money. While on-device AI apps offer promising solutions for real-time vishing detection by analyzing the content of phone conversations, little is known about user perspectives on these tools. To address this gap, we conducted a study with 30 participants using a prototype app featuring on-device AI for speech recognition and vishing detection. We found negligible impacts of on-device AI vishing detection models on smartphone usage satisfaction, but user interviews revealed persistent privacy concerns. Despite the system’s use of on-device AI to ensure data security, some participants reported feeling as though they were “being wiretapped.” These findings highlight the need to design privacy-preserving on-device AI solutions and improve user understanding to encourage widespread adoption.

Ball20: An In-Hand Near-Spherical 20-Sided Tangible Controller for Diverse Gesture Interaction in AR/VR

Sunbum Kim (KAIST), Kyunghwan Kim (KAIST), Changsung Lim (KAIST), Geehyuk Lee (KAIST)

Spherical tangible devices have been explored in various studies to support effective object manipulation and enhance immersive experiences in augmented and virtual reality environments. However, because their spherical form makes it difficult to incorporate traditional input channels, their applicability and use as general-purpose input devices remain limited. In this paper, we present the Ball20, an in-hand near-spherical 20-sided tangible controller with independent force sensing on each face, designed to enable diverse gesture interactions. We developed the Ball20 hardware, designed a gesture set, and implemented a drawing application to demonstrate the Ball20 concept. In the first user study, we evaluated the feasibility of using the Ball20 for a drawing application and collected feedback. In the second user study, we further refined the Ball20 and conducted a quantitative usability evaluation.

Beyond the Badge: Designing Digital Mental Health Interventions for Korean Investigative Officers

Sieun Kim (Ulsan National Institute of Science and Technology (UNIST)), Seonmi Lee (UNIST), Insu Choe (UNIST), Hwajung Hong (KAIST), Dooyoung Jung (UNIST)

This study explores the unique mental health challenges faced by Korean investigative police officers and how these shape their help-seeking behaviors within cultural and occupational contexts. Based on interviews with 19 officers, four key themes emerged: hierarchical organizational stress, internalization of a responsible police image, routine exposure to negative events, and misalignment between internal growth needs and external acknowledgement. The findings provide actionable design implications for tailored digital mental health interventions, such as integrating micro-interventions into workflows to address hierarchical organizational stress and demanding workloads, recognizing emotional labor through positive rewards to support the moral and self-sacrificing police identity, and fostering mutual reliance via anonymous digital communication platforms to mitigate the effects of routine exposure to negative events. These strategies address officers’ specific challenges while aligning with their professional and cultural nuances. Although focused on Korean investigative police officers, the study offers valuable insights for designing role-specific and culturally informed solutions in other high-stress professions.

Bridging Bond Beyond Life: Designing VR Memorial Space with Stakeholder Collaboration via Research through Design

Heejae Bae (KAIST), Nayeong Kim (KAIST), Sehee Lee (KAIST), Tak Yeon Lee (KAIST)

The integration of digital technologies into memorialization practices offers opportunities to transcend physical and temporal limitations. However, designing personalized memorial spaces that address the diverse needs of the dying and the bereaved remains underexplored. Using a Research through Design (RtD) approach, we conducted a three-phase study: participatory design, VR memorial space development, and user testing. This study highlights three key aspects: 1) the value of VR memorial spaces as bonding mediums, 2) the role of a design process that engages users through co-design, development, and user testing in addressing the needs of the dying and the bereaved, and 3) design elements that enhance the VR memorial experience. This research lays the foundation for personalized VR memorialization practices, providing insights into how technology can enrich remembrance and relational experiences.

Can LLMs See What I See? A Focus on Five Prompt Engineering Techniques for Evaluating UX on a Shopping Site

Subin Shin (Yonsei University), Jeesun Oh (KAIST), Sangwon Lee (Yonsei University)

Usability testing is essential for improving digital user experiences but has practical limitations in terms of cost-effectiveness. Recent advancements in multimodal Large Language Models (LLMs), like ChatGPT-4, offer new possibilities for UX evaluations. This study investigated the most effective prompt engineering techniques for identifying UX issues in digital interfaces. To achieve this, five prompt engineering techniques were carefully selected based on previous research, and the outputs generated using these techniques were analyzed against severity assessment criteria. We discovered that Role Prompting and (Zero-Shot) Chain of Thought Prompting were highly effective. Further investigation revealed that a hybrid approach combining both techniques produced the best results. Our findings shed light on the possibility of using multimodal LLMs as UX evaluators, offering meaningful value for future advancements in LLM-based UX evaluations.
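A minimal sketch of what the winning hybrid prompt might look like, combining a role assignment with the zero-shot chain-of-thought trigger. The wording below is our assumption for illustration, not the authors’ exact prompt.

```python
# Sketch of a hybrid Role + (Zero-Shot) Chain-of-Thought prompt for UX
# evaluation. The phrasing is illustrative, not the study's actual prompt.

def build_ux_eval_prompt(interface_description: str) -> str:
    role = ("You are a senior UX researcher auditing a shopping "
            "website for usability issues.")                     # Role Prompting
    task = (f"Interface under review:\n{interface_description}\n\n"
            "List the UX issues you find and rate each one's severity "
            "from 1 (cosmetic) to 4 (usability catastrophe).")
    cot = "Let's think step by step."                            # zero-shot CoT trigger
    return "\n\n".join([role, task, cot])

print(build_ux_eval_prompt("Checkout page with a hidden coupon field..."))
```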

CausalCFF: Causal Analysis between User Stress Level and Contextually Filtered Features Extracted from Mobile Sensor Data

Panyu Zhang (KAIST), Gyuwon Jung (KAIST), Uzair Ahmed (KAIST), Uichin Lee (KAIST)

Mobile technologies now make it possible to deliver interventions that improve users’ mental and physical health. Causal analysis can help researchers identify potential causes of health issues and design appropriate interventions. However, previous studies have mainly conducted causal analysis between single sensor features, such as walking activity duration, and perceived stress; causal relationships between more complex behavioral features derived from multiple sensor streams and target well-being labels remain underexplored. To address this gap, we propose CausalCFF, a framework that investigates causal relationships between contextually filtered behavioral features (e.g., walking duration at workplace locations) and well-being outcomes (e.g., stress). Our analysis identifies frequent workplace visits during periods of reduced home time as the most salient cause of elevated stress levels, highlighting the framework’s ability to target context-specific behavioral biomarkers for human well-being. The code is also made available.
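To illustrate what a “contextually filtered feature” is, the sketch below derives daily walking duration restricted to workplace locations from toy mobile-sensing records. The column names and data are hypothetical, not the paper’s schema.

```python
# Sketch of a 'contextually filtered feature': a base sensor feature
# (walking minutes) filtered by a context (being at the workplace).
# Column names and the toy data are assumptions for illustration.

import pandas as pd

df = pd.DataFrame({
    "day":      ["mon", "mon", "tue", "tue"],
    "location": ["work", "home", "work", "home"],
    "walk_min": [12, 30, 5, 45],
})

# Contextually filtered feature: daily walking duration at work only.
walk_at_work = (
    df[df["location"] == "work"]
      .groupby("day")["walk_min"].sum()
      .rename("walk_min_at_work")
)
print(walk_at_work)

# Downstream, features like this would feed a causal-analysis method
# against daily well-being labels such as perceived stress.
```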

DocVoyager: Anticipating User’s Information Needs and Guiding Document Reading through Question Answering

Yoonjoo Lee (KAIST), Nedim Lipka (Adobe Systems), Zichao Wang (Adobe), Ryan Rossi (Adobe Research), Puneet Mathur (University of Maryland College Park), Tong Sun (Adobe Research), Alexa Siu (Adobe Research)

People often approach complex documents (e.g., academic papers and professional reports) with diverse goals and varying levels of prior knowledge. Even when reading the same document, the paths to consuming relevant information can differ significantly depending on the reader’s information needs. However, the linear format and dense content of these documents force users to manually sift through content, often leading to inefficient and narrow information consumption. To address this, we explore design principles that guide users along customized reading paths via question answering. Applying these principles to guide academic paper reading, we introduce DocVoyager, a novel document-reading interface that adapts to users’ goals by suggesting tailored, goal-based questions. DocVoyager leverages Large Language Models (LLMs) to anticipate users’ information needs and dynamically suggests questions based on prior interactions. Our study found that participants easily focused on information relevant to their goals and engaged effectively with the document content using DocVoyager.

Expandora: Broadening Design Exploration with Text-to-Image Model

DaEun Choi (KAIST), Kihoon Son (KAIST), HyunJoon Jung (Adobe), Juho Kim (KAIST)

Broad exploration of references is critical in the visual design process. While text-to-image (T2I) models offer efficiency and customization in exploration, they often provide limited support for divergence. We conducted a formative study (N=6) to investigate the limitations of current interaction with T2I models for broad exploration and found that designers struggle to articulate exploratory intentions and manage iterative, non-linear workflows. To address these challenges, we developed Expandora. Users can specify their exploratory intentions and desired diversity levels through structured input, and, using an LLM-based pipeline, Expandora generates tailored prompt variations. The results are displayed in a mindmap-like interface that encourages non-linear workflows. A user study (N=8) demonstrated that Expandora significantly increases prompt diversity, the number of prompts users tried within a given time, and user satisfaction compared to the baseline. Nonetheless, its limitations in supporting convergent thinking suggest opportunities for holistically improving creative processes.
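A sketch of the kind of prompt-expansion step an LLM-based pipeline like Expandora’s might perform. The `ask_llm` helper, the instruction wording, and the 1–5 diversity scale are assumptions for illustration, not the authors’ implementation.

```python
# Illustrative sketch of expanding one T2I prompt into tailored variations
# at a requested diversity level. `ask_llm` stands in for a real LLM call.

from typing import Callable, List

def expand_prompt(base_prompt: str, intention: str, diversity: int,
                  n: int, ask_llm: Callable[[str], str]) -> List[str]:
    """Request n variations of a text-to-image prompt."""
    variations = []
    for i in range(1, n + 1):
        instruction = (
            f"Base text-to-image prompt: '{base_prompt}'\n"
            f"Designer's exploratory intention: {intention}\n"
            f"Diversity level (1 = close variations, 5 = far-out): {diversity}\n"
            f"Write variation {i} of {n}; reply with the prompt only."
        )
        variations.append(ask_llm(instruction))
    return variations

# Toy stand-in model so the sketch runs end to end.
echo_llm = lambda p: "variation of: " + p.splitlines()[0]

for v in expand_prompt("a cozy reading nook", "explore lighting moods",
                       diversity=4, n=3, ask_llm=echo_llm):
    print(v)
```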

Exploring the Potential of Generative AI for Supporting Middle-Aged Individuals in Retirement Transitions

Minseo Park (KAIST), Youn-kyung Lim (KAIST)

The widespread adoption of Generative AI (GenAI) has fueled research exploring applications across diverse age groups and domains. However, its potential to support individuals in retirement transitions remains underexplored. This study aims to uncover the potential of GenAI in this context by examining its current and anticipated roles through semi-structured interviews with 10 middle-aged individuals navigating retirement transitions. The findings highlight three key roles of GenAI as an identity navigator, self-actualization facilitator, and connection catalyst. Building on the findings, the study identifies novel design opportunities for GenAI-based systems that assist an integrated journey from self-discovery to self-actualization, and support both direct and indirect connection to the world. The research aims to inspire the HCI community to further investigate these new design possibilities and attract attention to the unique context of retirement transition.

How to Better Translate Participant Quotes Using LLMs: Exploring Practices and Challenges of Non-Native English Researchers

Huisung Kwon (KAIST), Soyeong Min (KAIST), Sangsu Lee (KAIST)

Non-native English researchers face challenges in translating qualitative data, as risks of distortion and loss of nuance can affect research trustworthiness. While large language models (LLMs) provide advantages in translation, their limitations—such as inconsistent results and difficulties in capturing culturally and linguistically specific contexts—bring attention to how researchers can better integrate LLMs for accurate and contextual quote translation. To better understand this, we interviewed Korean HCI researchers about how they employ LLMs for quote translation. Our findings highlight five categories of practices researchers use to ensure precise and comprehensible quote translation, as well as remaining challenges, including cultural bias in global LLMs and potential data-leakage risks. We conclude by proposing four recommendations for researchers on using LLMs for better translation of participants’ quotes, including effective translation strategies and ethical considerations.

Immersive Prototyping for Robot Design with 3D Sketching and VR Acting in Reconstructed Workspace

Joon Hyub Lee (KAIST), Siripon Sutthiwanna (KAIST), Sungjae Shin (KAIST), Taegyu Jin (KAIST), Hyun Myung (KAIST), Seok-Hyung Bae (KAIST)

Rapid advances in robotics and AI technologies are opening the possibility of a wide range of commercial robot products with diverse shapes, sizes, and structures specialized for various environments, contexts, and roles. This trend calls for innovative tools and methods for designing robots as products. In this study, we propose a novel immersive prototyping system and workflow for quickly and easily creating desired robot shapes and structures through 3D sketching in target environments, realistically experiencing their movements and services through VR acting in the same environments, and iteratively improving their designs based on contextual insights. We conducted an extended robot design workshop with participants from backgrounds in robotics engineering and industrial design. The results show that the proposed system and workflow can help robot designers produce highly creative and compelling design outcomes, while also identifying areas for future improvement.

Introducing 3D Sketching to Overcome Challenges of View-Consistency and Progressive Development in 2D Generative AI-Based Car Exterior Design

Seung-Jun Lee (KAIST), Joon Hyub Lee (KAIST), Seok-Hyung Bae (KAIST)

Recent advances in 2D generative AI are beginning to find applications in highly specialized fields, such as car exterior design. However, the current 2D-centric approach has several limitations: each viewpoint requires a new sketch; maintaining consistency across different viewpoints is challenging; steering design development in the desired direction can be difficult. To address these limitations, we propose a novel design workflow that integrates 3D sketching with 2D generative AI for car exterior design. This workflow enables car designers to seamlessly transition between expressive 3D sketching, detailed 2D drawing, and realistic 2D generation, facilitating view-consistent and progressive design development. We conducted an in-depth user test with a professional car designer, who used our system to produce car exterior concepts for all major body types, demonstrating its potential usefulness during the early stages of car design.

LIGS: Developing an LLM-infused Game System for Emergent Narrative

Jin Jeong (KAIST), Tak Yeon Lee (KAIST)

Games with emergent narratives enable users to craft their own stories through simulation-based mechanics, but have shown various limitations due to the deterministic nature of traditional algorithms. Generative AI, including large language models (LLMs), has been proposed as a solution, automating tasks and dynamically generating content tailored to evolving gameplay contexts. While prior studies have explored applications of LLMs, there is still limited understanding of the challenges and user experiences that emerge when integrating these models into game systems. This paper introduces LIGS, an LLM-Infused Game System designed for emergent narratives, and presents a prototype game used to observe actual gameplay experiences and progression. Our findings indicate that participants find the freedom of action and the resulting narrative progression engaging. However, LLMs can cause various misunderstandings, posing potential risks to the overall experience of the game. Based on these findings, we propose design considerations to address these issues.

Press Start to Continue: A Thematic Analysis of the Iterative Process of Hardcore Players with Disabilities Adapting to Gameplay Difficulties

Eunbyul Park (KAIST), Jihun Chae (KAIST), Karam Eum (KAIST), Eunhye Choi (KAIST), Hyunyoung Oh (KAIST), Young Yim Doh (KAIST)

Playing video games can empower players with disabilities by providing them opportunities for connection, achievement, and cultural participation. However, as they continue playing, they need to devise alternative ways to access inaccessible game goals and manage social demands from multiplayer games. This study investigated how players with disabilities navigate these difficulties by analyzing interviews with five hardcore players with disabilities. The findings emphasize the critical role of available resources, including accessibility features, inclusive design supporting experimentation, and robust community support, in enabling players to continue playing: players adapt to gameplay difficulties through an iterative process of employing coping strategies built on these resources. The findings highlight the importance of game-environment, social, and cultural resources in supporting participants’ continued gameplay and provide related insights.

Securing Gesture Passwords Against Shoulder Surfing Using Behavioral Features

Eunyong Cheon (UNIST), Ian Oakley (KAIST)

Gestures drawn on touchscreens are an emerging smartphone authentication method with good usability and a theoretically large password space. However, they are highly susceptible to observation attacks. To address this, recent research has demonstrated the effectiveness of implicit authentication techniques and behavioral features in strengthening the security of knowledge-based passwords such as PINs or patterns. Building on these ideas, this late-breaking work explores the design of gesture password hardening techniques. We examine the impact of incorporating behavioral biometric features captured by commercial smartphone sensors to enhance existing gesture recognition techniques. We comprehensively evaluate our system in a controlled (N=20) study setting, showing it effectively resists video observation attacks (EER=3.56%). These results indicate that behavioral features can substantially improve the security of gesture passwords against this common threat.
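For context, the reported EER (equal error rate) is the operating point where false accepts and false rejects are equally likely. A minimal computation on synthetic similarity scores, not the study’s data, looks like this:

```python
# Sketch of computing an Equal Error Rate (EER) from authentication scores.
# The score distributions below are synthetic stand-ins, not the study's data.

import numpy as np

def equal_error_rate(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """EER: rate at the threshold where FAR and FRR are closest."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_gap, eer = np.inf, 0.5
    for t in thresholds:
        far = np.mean(impostor >= t)   # impostors wrongly accepted
        frr = np.mean(genuine < t)     # genuine users wrongly rejected
        gap = abs(far - frr)
        if gap < best_gap:
            best_gap, eer = gap, (far + frr) / 2
    return eer

rng = np.random.default_rng(1)
genuine = rng.normal(0.8, 0.10, 500)   # higher score = more similar
impostor = rng.normal(0.4, 0.15, 500)
print(f"EER = {equal_error_rate(genuine, impostor):.2%}")
```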

ShoeGenAI: A Creativity Support Tool for High-Feasible Shoe Product Design

Hui-Jun Kim (Dong-eui University), Jeongho Kim (KAIST), Sohyun Jeong (KAIST), Minbong Lee (kristincompany), Jaegul Choo (KAIST), Sung-Hee Kim (Dong-eui University)

Product designers often use generative AI for concept images, but these outputs often lack manufacturing feasibility, requiring repeated adjustments to meet design intentions. Focusing on sneaker design, this study introduces ShoeGenAI, an AI tool that enhances designers’ creativity while ensuring feasible outcomes and reducing the need for post-processing. A study with four shoe designers revealed challenges with both conventional methods and generative AI, leading to four key functions: model fine-tuning, template-based prompting, combinational creativity support, and targeted refinement. A follow-up study with 20 designers indicated that, with ShoeGenAI, they expected to easily express their intentions to the system, work efficiently with fewer post-processing adjustments, and be satisfied with the practical results. We also discuss differences between professionals and novices using creativity support tools, and between different types of design tasks, such as replicating a target image and designing from scratch.

The Effect of Target Depth on Performance of Multi-directional Tapping Task in Virtual Reality

Haejun Kim (KAIST), Yuhwa Hong (KAIST), Jihae Yu (Kangwon National University), Shuping Xiong (KAIST), Woojoo Kim (Kangwon National University)

While widely used to evaluate 2D pointing performance, adapting the multi-directional tapping task (ISO/TS 9241-411) to virtual reality (VR) poses challenges, particularly in addressing target depth. This study examines how depth affects user performance in the multi-directional tapping task in VR. We conducted a within-subject experiment with 20 participants, investigating the effect of various depths (0.5–100 m for Raycasting; 0.3–0.6 m for Virtual Hand) under consistent visual angles. Results showed that Raycasting performance remained stable beyond 2 m but degraded significantly at 0.5 m, while Virtual Hand performed best between 0.4 and 0.5 m and declined at closer and farther depths. These findings suggest that target depth strongly influences selection performance even when visual angles remain consistent, underscoring the need for considering standardized depth parameters in VR pointing protocols. We also provide evidence-based recommendations for implementing depth parameters in future VR studies using the multi-directional tapping task.
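For readers outside the pointing literature, performance in the ISO 9241-411 multi-directional tapping task is commonly summarized as effective throughput. A small sketch with illustrative numbers, not the paper’s data:

```python
# Sketch of the standard effective-throughput computation used with the
# ISO 9241-411 tapping task. The numbers below are illustrative only.

import math

def effective_throughput(amplitude_m: float, sd_endpoints_m: float,
                         movement_time_s: float) -> float:
    """Throughput (bits/s) from the effective index of difficulty ID_e."""
    w_e = 4.133 * sd_endpoints_m             # effective target width
    id_e = math.log2(amplitude_m / w_e + 1)  # Shannon formulation
    return id_e / movement_time_s

# e.g., 0.30 m reach, 1.2 cm endpoint spread, 0.6 s per selection:
print(f"{effective_throughput(0.30, 0.012, 0.6):.2f} bits/s")
```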

Visual Embedding of Screen Sequences for User-Flow Search in Example-driven Communication

Daeheon Jeong (KAIST), Hyehyun Chu (KAIST)

Effective communication of UX considerations to stakeholders (e.g., designers and developers) is a critical challenge for UX practitioners. To explore this problem, we interviewed four UX practitioners about their communication challenges and strategies. Our study identifies that providing an example user flow—a screen sequence representing a semantic task—as evidence reinforces communication, yet finding relevant examples remains challenging. To address this, we propose a method to systematically retrieve user flows using semantic embedding. Specifically, we design a model that learns to associate screens’ visual features with user flow descriptions through contrastive learning. A survey confirms that our approach retrieves user flows better aligned with human perceptions of relevance. We analyze the results and discuss implications for the computational representation of user flows.
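The training step resembles CLIP-style contrastive alignment. Below is a NumPy sketch of a symmetric InfoNCE loss over matched (screen-sequence, description) embedding pairs, with random vectors standing in for the actual encoders; it illustrates the objective, not the authors’ model.

```python
# Sketch of a symmetric InfoNCE (CLIP-style) contrastive loss aligning
# screen-sequence embeddings with user-flow description embeddings.
# Random vectors stand in for the real encoders.

import numpy as np

def info_nce(screen_emb: np.ndarray, text_emb: np.ndarray,
             temperature: float = 0.07) -> float:
    """Symmetric InfoNCE over a batch of matched (screen, text) pairs."""
    s = screen_emb / np.linalg.norm(screen_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = s @ t.T / temperature            # pairwise similarities
    labels = np.arange(len(s))                # i-th screen matches i-th text

    # Cross-entropy in both retrieval directions (screen->text, text->screen).
    log_sm_s2t = logits - np.log(np.exp(logits).sum(1, keepdims=True))
    log_sm_t2s = logits.T - np.log(np.exp(logits.T).sum(1, keepdims=True))
    loss_s2t = -log_sm_s2t[labels, labels].mean()
    loss_t2s = -log_sm_t2s[labels, labels].mean()
    return (loss_s2t + loss_t2s) / 2

rng = np.random.default_rng(0)
print(info_nce(rng.standard_normal((8, 64)), rng.standard_normal((8, 64))))
```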

WYSIMWYG: Simulation-based Decision Support for Data Distribution of Hyperlocal Services in Edge Cloud Environments

Sujeong Lim (KAIST), KyeongDeok Baek (KAIST), In-Young Ko (KAIST)

This study addresses effective decision-making for developers managing distributed storage resources in edge-cloud environments, particularly balancing data distribution for hyperlocal services. These neighborhood-focused services can leverage the edge-cloud environment to enhance performance and efficiency. Through a formative study, we identified key challenges, including developers’ difficulty specifying performance and cost requirements due to uncertainty about outcomes and the need for human involvement alongside programmatic processes. Consequently, we developed WYSIMWYG, a simulation-based decision-support tool for data distribution in hyperlocal services operating within edge-cloud environments. WYSIMWYG provides simulation results to address uncertainties, map-based visualizations, and region-specific strategy customizations. A user study with 16 practitioners demonstrated that WYSIMWYG significantly improves decision confidence and facilitates intuitive decision-making.

WrightHere: Supporting Children's Creative Writing with AI-Infused Interactive 3D Environment

Jaeryung Chung (KAIST), Seon Gyeom Kim (KAIST), Tak Yeon Lee (KAIST)

WrightHere is a generative AI-infused writing system that generates interactive 3D environments of the written story, where users can explore, interact with characters, and gather inspiration to facilitate their creative writing. While creative writing is crucial for child development, it poses a unique challenge and sets a high hurdle for children. Building upon past research on providing effective stimuli for new inspiration, we explore how AI-infused interactive 3D scenes of stories can spark creativity and help children maintain their writing momentum. Through user studies with the WrightHere system, we examined how this integration of AI-generated 3D environments with writing interfaces enhances engagement and writing output. This work presents WrightHere as a novel prototype exploring the potential of generative AI and interactive 3D environments in supporting children’s creative writing process.

Special Interest Group

Designing for Neurodiversity in Academia: Addressing Challenges and Opportunities in Human-Computer Interaction

Nathalie Alexandra “Alex” Penglin Tcherdakoff (University of Bristol), Grace Jane Stangroome (University of Bristol), Ashlee Milton (University of Minnesota), Catherine Holloway (University College London), Marta E. Cecchinato (Northumbria University), Antonella Nonnis (UAL), Tessa Eagle (University of California, Santa Cruz), Dena Al Thani (Hamad Bin Khalifa University), Hwajung Hong (KAIST), Rua Mae Williams (Purdue University)

Academia is primarily structured around neurotypical norms, posing significant challenges for neurodivergent academics, who often face additional barriers that hinder their success. This Special Interest Group (SIG) examines the experiences of neurodiverse researchers in Human-Computer Interaction and explores how HCI can contribute to more inclusive academic environments. By bringing together HCI researchers, neurodiverse academics, and allies, this SIG aims to develop strategies for a more neurodivergent-inclusive, affirming, and supportive academic landscape. Since enhanced well-being can boost productivity, addressing these challenges may unlock greater research output and contributions, particularly by harnessing the talent and creativity of neurodivergent individuals. We will focus on challenges faced across career stages and roles (from students to senior academics, research to teaching staff), and explore the role of technology in academia — assessing how it alleviates and exacerbates barriers. Additionally, we aim to critically examine how policies and governance within the HCI community impact neurodiversity inclusion.

Workshop

Beyond Glasses: Future Directions for XR Interactions within the Physical World

Sang Ho Yoon (KAIST), Andrea Bianchi (KAIST), Hasti Seifi (University of Copenhagen), Jin Ryong Kim (University of Texas at Dallas), Radu-Daniel Vatavu (Ștefan cel Mare University of Suceava), Jeongmi Lee (KAIST), Geehyuk Lee (KAIST)

Recent developments in XR-related technologies enable us to extend the use of XR beyond laboratory settings and, therefore, beyond the common paradigm of head-mounted displays (HMD) or AR glasses. As the industry is pushing XR glasses to become the next-generation computer interface and mobile phone replacement, we see an opportunity to reconsider the future of XR interfaces beyond just this form factor and explore whether new affordances can be leveraged. In fact, while glasses represent the most convenient and practical wearable interface, users remain limited to a specific set of displays, raising concerns about privacy, social acceptability, and overreliance on the visual channel. Conversely, we believe that there is an opportunity to leverage the physicality of the world, including the human body and the surrounding space, to create more engaging XR experiences. In this workshop, our goal is to gather fresh insights and perspectives from HCI researchers, practitioners, and professionals on strategies and techniques to enhance interactions in XR beyond the conventional glasses framework. We will bring together experienced academics and emerging researchers within the interdisciplinary field of HCI. We anticipate developing research pathways to leverage physicality to investigate possibilities and obstacles beyond XR glasses, ultimately shaping a new approach to engaging with XR.

Mobile Technology and Teens: Understanding the Changing Needs of Sociocultural and Technical Landscape

Janghee Cho (National University of Singapore), Inhwa Song (KAIST), Zainab Agha (San Francisco State University), Bengisu Cagiltay (University of Wisconsin – Madison), Veena Calambur (Stevens Institute of Technology), Minjin (MJ) Rheu (Loyola University Chicago), Jina Huh-Yoo (Stevens Institute of Technology)

Mobile technology can help teens connect with one another, but it also raises concerns around overuse, addiction, and exposure to harmful content. Traditional tools and methods for parental controls and guidance of children’s mobile technology use, such as screen time limits, often fail to address teens’ nuanced experiences of the benefits and harms of that use. This workshop brings together interdisciplinary researchers, practitioners, and teen advocates to examine how the CHI community can foster healthy teen mobile-technology relationships. Our goals are to: (1) co-design a research agenda, (2) foster cross-sociocultural collaboration, (3) generate guidance for stakeholders (e.g., the public, policymakers, parents, healthcare providers), and (4) plan actionable steps for ongoing impact. The workshop will explore themes like engaging broader stakeholders, embracing marginalized voices, and navigating the implications of emerging technologies through panel presentations and interactive sessions. By examining these themes, we aim to re-explore the HCI community’s discourse on teen mobile technology use and well-being, fostering a comprehensive understanding and inclusive approaches to navigating the multifaceted challenges of the modern digital landscape across diverse sociocultural contexts.

The Third Workshop on Building an Inclusive and Accessible Metaverse for All

Callum Parker (The University of Sydney), Soojeong Yoo (The University of Sydney), Joel Fredericks (The University of Sydney), Tram Thi Minh Tran (School of Architecture, Design and Planning, The University of Sydney), Mark Colley (UCL Interaction Centre), Youngho Lee (Mokpo National University), Khanh-Duy Le (University of Science, VNUHCM), Simon Stannus (SQUARE ENIX CO., LTD.), Woontack Woo (KAIST), Mark Billinghurst (University of South Australia)

The Metaverse is envisioned as a shared, persistent experience that encompasses both augmented and virtual reality, representing the convergence of a virtually enhanced physical reality and interconnected persistent virtual spaces. It has the potential to break down physical boundaries, connecting people from all walks of life through digital technology. As the Metaverse is still evolving, there is a unique opportunity to shape its development into an inclusive, all-encompassing space that is accessible to all. However, a key challenge lies in designing the Metaverse from the ground up to ensure inclusivity and accessibility. This workshop aims to explore how to build an open, inclusive Metaverse and develop methods for evaluating its success. Key outcomes will include identifying new opportunities to enhance inclusivity, establishing evaluation methodologies, and outlining considerations for designing accessible environments and interactions within the Metaverse.

Human-Centered Evaluation and Auditing of Language Models

Yu Lu Liu (Johns Hopkins University), Wesley Hanwen Deng (Carnegie Mellon University), Michelle S. Lam (Stanford University), Motahhare Eslami (Carnegie Mellon University), Juho Kim (KAIST), Q. Vera Liao (Microsoft Research), Wei Xu (Georgia Institute of Technology), Jekaterina Novikova (AI Risk and Vulnerability Alliance), Ziang Xiao (Johns Hopkins University)

The recent advancements in Large Language Models (LLMs) have significantly impacted numerous real-world applications, and will impact more. However, these models also pose significant risks to individuals and society. To mitigate these issues and guide future model development, responsible evaluation and auditing of LLMs are essential. This workshop aims to address the current “evaluation crisis” in LLM research and practice by bringing together HCI and AI researchers and practitioners to rethink LLM evaluation and auditing from a human-centered perspective. The workshop will explore topics around understanding stakeholders’ needs and goals in evaluating and auditing LLMs, establishing human-centered evaluation and auditing methods, developing tools and resources to support these methods, and building community and fostering collaboration. By soliciting papers, organizing an invited keynote and panel, and facilitating group discussions, this workshop aims to develop a future research agenda for addressing the challenges in LLM evaluation and auditing. Following a successful first iteration of this workshop at CHI 2024, we introduce the theme of “mind the context” for this second iteration, where participants will be encouraged to tackle the challenges and nuances of LLM evaluation and auditing in specific contexts.

Video Showcase

Exploration on Everyday Objects as an IoT Control Interface

Chang-Min Kim (KAIST), Tek-Jin Nam (KAIST)

Although IoT devices are becoming increasingly prevalent in our living spaces, their integration into daily life often remains rudimentary. This video explores a novel concept of transforming everyday objects into intuitive interfaces for controlling IoT devices in smart environments. Through a scenario movie titled ‘A Day in the Life of Peter’, the video illustrates a speculative user experience where various types of everyday objects enable seamless IoT coordination. Grounded in 11 representative cases derived from our prior research, the video showcases potential use cases and highlights both the opportunities and challenges of utilizing everyday objects for IoT interaction. By reimagining non-digital artifacts as meaningful components of future IoT ecosystems, this video aims to inspire fresh perspectives on IoT user experiences and foster discussions on creating more engaging, personalized, and diverse smart environments.

MIRAbot: A Rearview Mirror Driving Assistant for Semi-Autonomous Vehicles

Max Fischer (The University of Tokyo), Yena Kim (KAIST), Geumjin Lee (KAIST), Shota Kiuchi (The University of Tokyo), Chang Hee Lee (KAIST), Kentaro Honma (The University of Tokyo), Miles Pennington (The University of Tokyo), Hyunjung Kim (The University of Tokyo)

This video introduces MIRAbot, a reimagined rearview mirror designed as a driving assistant for SAE Level 3 semi-autonomous vehicles. While Level 3 vehicles allow hands-free, eyes-off driving in specific conditions, drivers must remain ready to take control when necessary [3]. Despite their convenience, these vehicles face challenges in earning drivers’ trust and achieving widespread adoption. MIRAbot seeks to enhance the semi-autonomous driving experience by making it more trustworthy and accessible to a diverse range of users. MIRAbot seamlessly switches between functioning as a standard rearview mirror and an interactive assistant, supporting manual driving, autonomous driving, and the transitions between them. Building on prior research into robotic rearview mirror ornaments for handover pre-alerts [1, 2], MIRAbot enriches the in-car experience with expressive eyes, anthropomorphic movements, and voice prompts. This video highlights MIRAbot’s design, interaction features, and functionality, showcasing its potential to make semi-autonomous driving more supportive, enjoyable, and accessible for all.

Panels

Bridging Gaps in HCI: Advancing Education, Research, and Careers in Asia

Dilrukshi Gamage (University of Colombo School of Computing), Shiwei Cheng (Zhejiang University of Technology), Preeti Mudliar (International Institute of Information Technology Bangalore), Zhicong Lu (City University of Hong Kong), Shengdong Zhao (City University of Hong Kong), Nova Ahmed (North South University), Xiaojuan Ma (Hong Kong University of Science and Technology), Uichin Lee (KAIST), Ding Wang (Google)

Asia’s Human-Computer Interaction (HCI) landscape is rapidly evolving, yet it faces distinct challenges in curriculum development, research establishment, and career navigation. This panel discussion, hosted by the Asia SIGCHI Committee (ASC) at CHI 2025, will bring together distinguished experts to address these challenges and explore strategies for fostering a robust and inclusive HCI community in Asia. Panelists will discuss the integration of culturally relevant content in HCI education, approaches to enhancing research visibility on international platforms, and initiatives to bridge gaps between academic training and industry expectations. Through an engaging discussion and audience interaction, the panel aims to identify actionable solutions, promote collaboration, and inspire future initiatives that strengthen HCI’s growth across the region. Participants will gain valuable insights into overcoming systemic barriers while building sustainable and impactful HCI programs tailored to Asia’s unique sociocultural dynamics. This panel seeks to advance the global relevance and contributions of HCI in Asia.

Technoskepticism or Justified Caution? The Future of Human-Centered AI in Mental Health Care

Nathaniel Swinger (Georgia Institute of Technology), Lauren Moran (Georgia Institute of Technology), Saeed Abdullah (Pennsylvania State University), Christopher W Wiese (Georgia Institute of Technology), Uichin Lee (KAIST), Yuan-Chi Tseng (National Tsing Hua University), Andrew M Sherrill (Emory University School of Medicine), Rosa I. Arriaga (Georgia Institute of Technology)

Recent advances in AI provide a unique opportunity to reshape mental health care systems and practices. However, there remains considerable skepticism that AI will positively impact the futures of patients, workers, and technologies. An interdisciplinary approach toward design and development of human-centered AI is necessary, yet discussions about the future of mental health work are often stratified by discipline (e.g., clinical vs. HCI research) or mental health domain (i.e., PTSD, depression, etc.). With this panel, we will bring together HCI, AI, organizational, and clinical researchers and practitioners to focus on the future of patients, workers, and AI-based technology in mental health care. We will discuss current challenges associated with mental health care AI across diverse clinical domains. This panel aims to move toward common ground for the future of human-centered AI in mental health work among those spanning perspectives from technoskepticism to justified caution.

Doctoral Consortium

Interacting with AI by Manipulating Intents

Tae Soo Kim (KAIST)

Advanced AI models allow users to perform diverse tasks by simply expressing their high-level intents, without performing low-level operations. However, users can struggle to fully form and effectively express their intents, and inspecting and evaluating model outputs to verify whether their intents have been satisfied incurs significant cognitive load. My PhD research introduces the concept of intent manipulation, where user intents are externalized as interactive objects, allowing for direct exploration and iteration on both intents and model outputs. I explore three forms of intent manipulation: intent curation, which disentangles intents into palettes from which users can curate their intent; intent assembly, which creates intent blocks that users can combine and experiment with; and intent framing, which helps users inspect outputs through the lens of their intents. This work contributes to human-AI interaction by suggesting how interfaces can be designed to support iterative exploration and sensemaking of one’s own intents and of AI models in parallel.