News

CHI 2024

DATE      11 May – 16 May 2024
LOCATION  Hawai’i, USA

We are excited to bring good news! At CHI 2024, KAIST recorded a total of 34 Full Paper publications, 9 Late-Breaking Works, 3 Interactivities, 2 Workshops, 1 Video Showcase, and 1 Special Interest Group. Congratulations on this outstanding achievement!

For more information and details about the publications featured at the conference, please refer to the publication list below.

Paper Publications

Big or Small, It’s All in Your Head: Visuo-Haptic Illusion of Size-Change Using Finger-Repositioning

CHI'24 Honorable Mention

Myung Jin Kim (KAIST); Eyal Ofek (Microsoft Research); Michel Pahud (Microsoft Research); Mike J Sinclair (University of Washington); Andrea Bianchi (KAIST)

Haptic perception of physical sizes increases the realism and immersion in Virtual Reality (VR). Prior work rendered sizes by exerting pressure on the user’s fingertips or employing tangible, shape-changing devices. These interfaces are constrained by the physical shapes they can assume, making it challenging to simulate objects growing larger or smaller than the perceived size of the interface. Motivated by literature on pseudo-haptics describing the strong influence of visuals over haptic perception, this work investigates modulating the perception of size beyond this range. We developed a fixed-sized VR controller leveraging finger-repositioning to create a visuo-haptic illusion of dynamic size-change of handheld virtual objects. Through two user studies, we found that with an accompanying size-changing visual context, users can perceive virtual object sizes ranging from 44.2% smaller to 160.4% larger than the perceived size of the device. Without the accompanying visuals, a constant size (141.4% of device size) was perceived.

Comfortable Mobility vs. Attractive Scenery: The Key to Augmenting Narrative Worlds in Outdoor Locative Augmented Reality Storytelling

CHI'24 Honorable Mention

Hyerim Park (KAIST); Aram Min (Technical Research Institute, Hanmac Engineering); Hyunjin Lee (KAIST); Maryam Shakeri (K.N. Toosi University of Technology); Ikbeom Jeon (KAIST); Woontack Woo (KAIST)

We investigate how path context, encompassing both comfort and attractiveness, shapes user experiences in outdoor locative storytelling using Augmented Reality (AR). Addressing a research gap that predominantly concentrates on indoor settings or narrative backdrops, our user-focused research delves into the interplay between perceived path context and locative AR storytelling on routes with diverse walkability levels. We examine the correlation and causation between narrative engagement, spatial presence, perceived workload, and perceived path context. Our findings show that on paths with reasonable path walkability, attractive elements positively influence the narrative experience. However, even in environments with assured narrative walkability, inappropriate safety elements can divert user attention to mobility, hindering the integration of real-world features into the narrative. These results carry significant implications for path creation in outdoor locative AR storytelling, underscoring the importance of ensuring comfort and maintaining a balance between comfort and attractiveness to enrich the outdoor AR storytelling experience.

Demystifying Tacit Knowledge in Graphic Design: Characteristics, Instances, Approaches, and Guidelines

CHI'24 Honorable Mention

Kihoon Son (KAIST); DaEun Choi (KAIST); Tae Soo Kim (KAIST); Juho Kim (KAIST)

Despite the growing demand for professional graphic design knowledge, the tacit nature of design inhibits knowledge sharing. However, there is limited understanding of the characteristics and instances of tacit knowledge in graphic design. In this work, we build a comprehensive set of tacit knowledge characteristics through a literature review. Through interviews with 10 professional graphic designers, we collected 123 tacit knowledge instances and labeled their characteristics. By qualitatively coding the instances, we identified the prominent elements, actions, and purposes of tacit knowledge. To identify which instances have been addressed the least, we conducted a systematic literature review of prior system support for graphic design. By understanding the reasons for the lack of support for these instances based on their characteristics, we propose design guidelines for capturing and applying tacit knowledge in design tools. This work takes a step towards understanding tacit knowledge and how this knowledge can be communicated.

FoodCensor: Promoting Mindful Digital Food Content Consumption for People with Eating Disorders

CHI'24 Honorable Mention

Ryuhaerang Choi (KAIST); Subin Park (KAIST); Sujin Han (KAIST); Sung-Ju Lee (KAIST)

Digital food content’s popularity is underscored by recent studies revealing its addictive nature and association with disordered eating. Notably, individuals with eating disorders exhibit a positive correlation between their digital food content consumption and disordered eating behaviors. Based on these findings, we introduce FoodCensor, an intervention designed to empower individuals with eating disorders to make informed, conscious, and health-oriented digital food content consumption decisions. FoodCensor (i) monitors and hides passively exposed food content on smartphones and personal computers, and (ii) prompts reflective questions for users when they spontaneously search for food content. We deployed FoodCensor to people with binge eating disorder or bulimia (n=22) for three weeks. Our user study reveals that FoodCensor fostered self-awareness and self-reflection about unconscious digital food content consumption habits, enabling participants to adopt healthier behaviors consciously. Furthermore, we discuss design implications for promoting healthier digital content consumption practices for populations vulnerable to specific content types.

Teach AI How to Code: Using Large Language Models as Teachable Agents for Programming Education

CHI'24 Honorable Mention

Hyoungwook Jin (KAIST); Seonghee Lee (Stanford University); Hyungyu Shin (KAIST); Juho Kim (KAIST)

This work investigates large language models (LLMs) as teachable agents for learning by teaching (LBT). LBT with teachable agents helps learners identify knowledge gaps and discover new knowledge. However, teachable agents require expensive programming of subject-specific knowledge. While LLMs as teachable agents can reduce the cost, LLMs’ expansive knowledge as tutees discourages learners from teaching. We propose a prompting pipeline that restrains LLMs’ knowledge and makes them initiate “why” and “how” questions for effective knowledge-building. We combined these techniques into TeachYou, an LBT environment for algorithm learning, and AlgoBo, an LLM-based tutee chatbot that can simulate misconceptions and unawareness prescribed in its knowledge state. Our technical evaluation confirmed that our prompting pipeline can effectively configure AlgoBo’s problem-solving performance. Through a between-subject study with 40 algorithm novices, we also observed that AlgoBo’s questions led to knowledge-dense conversations (effect size=0.71). Lastly, we discuss design implications, cost-efficiency, and personalization of LLM-based teachable agents.
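
As a rough illustration of the idea behind restraining a tutee’s knowledge, a minimal sketch in Python might look like the following. The `call_llm` helper, the prompt wording, and the knowledge-state fields are assumptions for illustration, not the authors’ implementation of TeachYou or AlgoBo.

```python
# Hedged sketch: configuring an LLM tutee with a restrained knowledge state,
# in the spirit of AlgoBo. `call_llm` is a hypothetical stand-in for any
# chat-completion API; the prompt text is illustrative, not the authors' code.

def call_llm(messages):
    """Placeholder for a chat-completion API call; wire up your own provider."""
    raise NotImplementedError

def make_tutee_system_prompt(known, misconceptions):
    # The knowledge state caps what the tutee may use and seeds misconceptions
    # that the learner must diagnose and correct while teaching.
    return (
        "You are a novice programming student being taught by the user.\n"
        f"You only know: {', '.join(known)}.\n"
        f"You hold these misconceptions until corrected: {', '.join(misconceptions)}.\n"
        "Never use knowledge outside this list. When the user explains something, "
        "respond with a 'why' or 'how' question that probes their explanation."
    )

messages = [
    {"role": "system", "content": make_tutee_system_prompt(
        known=["variables", "for loops"],
        misconceptions=["a function returns every variable it defines"],
    )},
    {"role": "user", "content": "Let me teach you how binary search works."},
]
# reply = call_llm(messages)  # the tutee answers within its restrained knowledge
```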

S-ADL: Exploring Smartphone-based Activities of Daily Living to Detect Blood Alcohol Concentration in a Controlled Environment

CHI'24 Honorable Mention

Hansoo Lee (Korea Advanced Institute of Science and Technology); Auk Kim (Kangwon National University); Sang Won Bae (Stevens Institute of Technology); Uichin Lee (KAIST)

In public health and safety, precise detection of blood alcohol concentration (BAC) plays a critical role in implementing responsive interventions that can save lives. While previous research has primarily focused on computer-based or neuropsychological tests for BAC identification, the potential use of daily smartphone activities for BAC detection in real-life scenarios remains largely unexplored. Drawing inspiration from Instrumental Activities of Daily Living (I-ADL), our hypothesis suggests that Smartphone-based Activities of Daily Living (S-ADL) can serve as a viable method for identifying BAC. In our proof-of-concept study, we propose, design, and assess the feasibility of using S-ADLs to detect BAC in a scenario-based controlled laboratory experiment involving 40 young adults. In this study, we identify key S-ADL metrics, such as delayed texting in SMS, site searching, and finance management, that significantly contribute to BAC detection (with an AUC-ROC and accuracy of 81%). We further discuss potential real-life applications of the proposed BAC model.

Reinforcing and Reclaiming The Home: Co-speculating Future Technologies to Support Remote and Hybrid Work

CHI'24 Honorable Mention

Janghee Cho (National University of Singapore); Dasom Choi (KAIST); Junnan Yu (The Hong Kong Polytechnic University); Stephen Voida (University of Colorado Boulder)

With the rise of remote and hybrid work after COVID-19, there is growing interest in understanding remote workers’ experiences and designing digital technology for the future of work within the field of HCI. To gain a holistic understanding of how remote workers navigate the blurred boundary between work and home and how designers can better support their boundary work, we employ humanistic geography as a lens. We engaged in co-speculative design practices with 11 remote workers in the US, exploring how future technologies might sustainably enhance participants’ work and home lives in remote/hybrid arrangements. We present the imagined technologies that resulted from this process, which both reinforce remote workers’ existing boundary work practices through everyday routines/rituals and reclaim the notion of home by fostering independence, joy, and healthy relationships. Our discussions with participants inform implications for designing digital technologies that promote sustainability in the future remote/hybrid work landscape.

CHI'24

Kongpyung (Justin) Moon (KAIST); Zofia Marciniak (Korea Advanced Institute of Science and Technology); Ryo Suzuki (University of Calgary); Andrea Bianchi (KAIST)

3D printed displays promise to create unique visual interfaces for physical objects. However, current methods for creating 3D printed displays either require specialized post-fabrication processes (e.g., electroluminescence spray and silicone casting) or function as passive elements that simply react to environmental factors (e.g., body and air temperature). These passive displays offer limited control over when, where, and how their colors change. In this paper, we introduce ThermoPixels, a method for designing and 3D printing actively controlled and visually rich thermochromic displays that can be embedded in arbitrary geometries. We investigate the color-changing and thermal properties of thermochromic and conductive filaments. Based on these insights, we designed ThermoPixels and an accompanying software tool that allows embedding ThermoPixels in arbitrary 3D geometries, creating displays of various shapes and sizes (flat, curved, or matrix displays), as well as displays that embed textures, feature multiple colors, or are flexible.

CHI'24

Jeesun Oh (KAIST); Wooseok Kim (KAIST); Sungbae Kim (KAIST); Hyeonjeong Im (KAIST); Sangsu Lee (KAIST)

Proactive voice assistants (VAs) in smart homes predict users’ needs and autonomously take action by controlling smart devices and initiating voice-based features to support users’ various activities. Previous studies on proactive systems have primarily focused on determining action based on contextual information, such as user activities, physiological state, or mobile usage. However, there is a lack of research that considers user agency in VAs’ proactive actions, which empowers users to express their dynamic needs and preferences and promotes a sense of control. Thus, our study aims to explore verbal communication through which VAs can proactively take action while respecting user agency. To delve into communication between a proactive VA and a user, we used the Wizard of Oz method to set up a smart home environment, allowing controllable devices and unrestrained communication. This paper proposes design implications for the communication strategies of proactive VAs that respect user agency.

CreativeConnect: Supporting Reference Recombination for Graphic Design Ideation with Generative AI

CHI'24

DaEun Choi (KAIST); Sumin Hong (Seoul National University of Science and Technology); Jeongeon Park (KAIST); John Joon Young Chung (SpaceCraft Inc.); Juho Kim (KAIST)

Graphic designers often get inspiration through the recombination of references. Our formative study (N=6) reveals that graphic designers focus on conceptual keywords during this process and want support for discovering those keywords, expanding them, and exploring diverse options for recombining them, while still having room for their own creativity. We propose CreativeConnect, a system with generative AI pipelines that helps users discover useful elements from a reference image using keywords, recommends relevant keywords, generates diverse recombination options with user-selected keywords, and shows recombinations as sketches with text descriptions. Our user study (N=16) showed that CreativeConnect helped users discover keywords from the reference and generate multiple ideas based on them, ultimately helping users produce more design ideas with higher self-reported creativity compared to the baseline system without generative pipelines. While CreativeConnect was shown to be effective in ideation, we discuss how it can be extended to support other types of tasks in creativity support.

DeepStress: Supporting Stressful Context Sensemaking in Personal Informatics Systems Using a Quasi-experimental Approach

CHI'24

Gyuwon Jung (KAIST); Sangjun Park (KAIST); Uichin Lee (KAIST)

Personal informatics (PI) systems are widely used in various domains such as mental health to provide insights from self-tracking data for behavior change. Users are highly interested in examining relationships from the self-tracking data, but identifying causality is still considered challenging. In this study, we design DeepStress, a PI system that helps users analyze contextual factors causally related to stress. DeepStress leverages a quasi-experimental approach to address potential biases related to confounding factors. To explore the user experience of DeepStress, we conducted a user study and a follow-up diary study using participants’ own self-tracking data collected for 6 weeks. Our results show that DeepStress helps users consider multiple contexts when investigating causalities and use the results to manage their stress in everyday life. We discuss design implications for causality support in PI systems.

DiaryMate: Understanding User Perceptions and Experience in Human-AI Collaboration for Personal Journaling

CHI'24

Taewan Kim (KAIST); Donghoon Shin (University of Washington); Young-Ho Kim (NAVER AI Lab); Hwajung Hong (KAIST)

With their generative capabilities, large language models (LLMs) have transformed the role of technological writing assistants from simple editors to writing collaborators. Such a transition emphasizes the need to understand user perception and experience, such as balancing user intent and the involvement of LLMs across various writing domains, in designing writing assistants. In this study, we delve into the less explored domain of personal writing, focusing on the use of LLMs in introspective activities. Specifically, we designed DiaryMate, a system that assists users in journal writing with an LLM. Through a 10-day field study (N=24), we observed that participants used the diverse sentences generated by the LLM to reflect on their past experiences from multiple perspectives. However, we also observed that participants tended to over-rely on the LLM, often prioritizing its emotional expressions over their own. Drawing from these findings, we discuss design considerations when leveraging LLMs in a personal writing practice.

EvalLM: Interactive Evaluation of Large Language Model Prompts on User-Defined Criteria

CHI'24

Tae Soo Kim (KAIST); Yoonjoo Lee (KAIST); Jamin Shin (NAVER AI Lab); Young-Ho Kim (NAVER AI Lab); Juho Kim (KAIST)

By simply composing prompts, developers can prototype novel generative applications with Large Language Models (LLMs). To refine prototypes into products, however, developers must iteratively revise prompts by evaluating outputs to diagnose weaknesses. Formative interviews (N=8) revealed that developers invest significant effort in manually evaluating outputs as they assess context-specific and subjective criteria. We present EvalLM, an interactive system for iteratively refining prompts by evaluating multiple outputs on user-defined criteria. By describing criteria in natural language, users can employ the system’s LLM-based evaluator to get an overview of where prompts excel or fail, and improve their prompts based on the evaluator’s feedback. A comparative study (N=12) showed that, compared to manual evaluation, EvalLM helped participants compose more diverse criteria, examine twice as many outputs, and reach satisfactory prompts with 59% fewer revisions. Beyond prompts, our work can be extended to augment model evaluation and alignment in specific application contexts.
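
To make the evaluation mechanism concrete, here is a minimal sketch of criterion-based comparison with an LLM judge. The prompt format, the `call_llm` helper, and the JSON schema are assumptions for illustration; they are not EvalLM’s actual interface.

```python
# Hedged sketch: comparing two prompt outputs on user-defined criteria with an
# LLM judge, in the spirit of EvalLM. `call_llm` is a hypothetical helper.
import json

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your LLM provider here")

def evaluate(output_a: str, output_b: str, criteria: dict) -> list:
    """Return per-criterion verdicts and feedback as parsed JSON."""
    rubric = "\n".join(f"- {name}: {desc}" for name, desc in criteria.items())
    prompt = (
        "Compare the two responses on each criterion below. Return a JSON list "
        'of objects: {"criterion": ..., "winner": "A" | "B" | "tie", "feedback": ...}.\n\n'
        f"Criteria:\n{rubric}\n\nResponse A:\n{output_a}\n\nResponse B:\n{output_b}"
    )
    return json.loads(call_llm(prompt))

criteria = {"specificity": "Gives concrete, actionable detail, not generic advice."}
# report = evaluate(draft_1, draft_2, criteria)  # feedback guides prompt revision
```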

Exploring Context-Aware Mental Health Self-Tracking Using Multimodal Smart Speakers in Home Environments

CHI'24

Jieun Lim (KAIST); Youngji Koh (KAIST); Auk Kim (Kangwon National University); Uichin Lee (KAIST)

People with mental health issues often stay indoors, reducing their outdoor activities. This situation emphasizes the need for self-tracking technology in homes for mental health research, offering insights into their daily lives and potentially improving care. This study leverages a multimodal smart speaker to design a proactive self-tracking research system that delivers mental health surveys using an experience sampling method (ESM). Our system determines ESM delivery timing by detecting user context transitions and allowing users to answer surveys through voice dialogues or touch interactions. Furthermore, we explored the user experience of a proactive self-tracking system by conducting a four-week field study (n=20). Our results show that context transition-based ESM delivery can increase user compliance. Participants preferred touch interactions to voice commands, and the modality selection varied depending on the user’s immediate activity context. We explored the design implications for home-based, context-aware self-tracking with multimodal speakers, focusing on practical applications.

FLUID-IoT: Flexible and Fine-Grained Access Control in Shared IoT Environments via Multi-user UI Distribution

CHI'24

Sunjae Lee (KAIST); Minwoo Jeong (KAIST); Daye Song (KAIST); Junyoung Choi (KAIST); Seoyun Son (KAIST); Jean Y Song (DGIST); Insik Shin (KAIST)

The rapid growth of the Internet of Things (IoT) in shared spaces has led to an increasing demand for sharing IoT devices among multiple users. Yet, existing IoT platforms often fall short by offering an all-or-nothing approach to access control, not only posing security risks but also inhibiting the growth of the shared IoT ecosystem. This paper introduces FLUID-IoT, a framework that enables flexible and granular multi-user access control, even down to the User Interface (UI) component level. Leveraging a multi-user UI distribution technique, FLUID-IoT transforms existing IoT apps into centralized hubs that selectively distribute UI components to users based on their permission levels. Our performance evaluation, encompassing coverage, latency, and memory consumption, affirms that FLUID-IoT can be seamlessly integrated with existing IoT platforms and offers adequate performance for daily IoT scenarios. An in-lab user study further shows that the framework is intuitive and user-friendly, requiring minimal training for efficient use.
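
A permission-to-component mapping of this kind can be sketched in a few lines; the component names, permission levels, and API below are illustrative assumptions, not FLUID-IoT’s actual framework code.

```python
# Hedged sketch: selecting which UI components to distribute to a user based on
# permission level, in the spirit of FLUID-IoT's component-level access control.
UI_COMPONENTS = {
    "power_toggle": "guest",         # minimum level required to receive the component
    "temperature_slider": "member",
    "schedule_editor": "owner",
}
LEVELS = ["guest", "member", "owner"]  # ordered from least to most privileged

def components_for(user_level: str) -> list:
    """Return the UI components a user at `user_level` is allowed to receive."""
    rank = LEVELS.index(user_level)
    return [name for name, lvl in UI_COMPONENTS.items() if LEVELS.index(lvl) <= rank]

print(components_for("member"))  # ['power_toggle', 'temperature_slider']
```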

GenQuery: Supporting Expressive Visual Search with Generative Models

CHI'24

Kihoon Son (KAIST); DaEun Choi (KAIST); Tae Soo Kim (KAIST); Young-Ho Kim (NAVER AI Lab); Juho Kim (KAIST)

Designers rely on visual search to explore and develop ideas in early design stages. However, designers can struggle to identify suitable text queries to initiate a search or to discover images for similarity-based search that can adequately express their intent. We propose GenQuery, a novel system that integrates generative models into the visual search process. GenQuery can automatically elaborate on users’ queries and surface concrete search directions when users only have abstract ideas. To support precise expression of search intents, the system enables users to generatively modify images and use these in similarity-based search. In a comparative user study (N=16), designers felt that they could more accurately express their intents and find more satisfactory outcomes with GenQuery compared to a tool without generative features. Furthermore, the unpredictability of generations allowed participants to uncover more diverse outcomes. By supporting both convergence and divergence, GenQuery led to a more creative experience.

Investigating the Design of Augmented Narrative Spaces Through Virtual-Real Connections: A Systematic Literature Review

CHI'24

Jae-eun Shin (KAIST); Hayun Kim (KAIST); Hyerim Park (KAIST); Woontack Woo (KAIST)

Augmented Reality (AR) is regarded as an innovative storytelling medium that presents novel experiences by layering a virtual narrative space over a real 3D space. However, understanding of how the virtual narrative space and the real space are connected with one another in the design of augmented narrative spaces has been limited. To address this, we conducted a systematic literature review of 64 articles featuring AR storytelling applications and systems in HCI, AR, and MR research. We investigated how virtual narrative spaces have been paired, functionalized, placed, and registered in relation to the real spaces they target. Based on these connections, we identified eight dominant types of augmented narrative spaces that are primarily categorized by whether they virtually narrativize reality or realize the virtual narrative. We discuss our findings to propose design recommendations on how virtual-real connections can be incorporated into a more structured approach to AR storytelling.

Investigating the Potential of Group Recommendation Systems As a Medium of Social Interactions: A Case of Spotify Blend Experiences between Two Users

CHI'24

Daehyun Kwak (KAIST); Soobin Park (KAIST); Inha Cha (Georgia Institute of Technology); Hankyung Kim (KAIST); Youn-kyung Lim (KAIST)

Designing user experiences for group recommendation systems (GRS) is challenging, requiring a nuanced understanding of the influence of social interactions between users. Using Spotify Blend as a real-world case of music GRS, we conducted empirical studies to investigate intricate social interactions among South Korean users in GRS. Through a preliminary survey about Blend experiences in general, we narrowed the focus for the main study to relationships between two users who are acquainted or close. Building on this, we conducted a 21-day diary study and interviews with 30 participants (15 pairs) to probe more in-depth interpersonal dynamics within Blend. Our findings reveal that users engaged in implicit social interactions, including tacit understanding of their companions and indirect communication. We conclude by discussing the newly discovered value of GRS as a social catalyst, along with design attributes and challenges for the social experiences it mediates.

MindfulDiary: Harnessing Large Language Model to Support Psychiatric Patients' Journaling

CHI'24

Taewan Kim (KAIST); Seolyeong Bae (Gwangju Institute of Science and Technology); Hyun AH Kim (NAVER Cloud); Su-woo Lee (Wonkwang University Hospital); Hwajung Hong (KAIST); Chanmo Yang (Wonkwang University Hospital, Wonkwang University); Young-Ho Kim (NAVER AI Lab)

Large Language Models (LLMs) offer promising opportunities in mental health domains, although their inherent complexity and low controllability elicit concern regarding their applicability in clinical settings. We present MindfulDiary, an LLM-driven journaling app that helps psychiatric patients document daily experiences through conversation. Designed in collaboration with mental health professionals, MindfulDiary takes a state-based approach to safely comply with the experts’ guidelines while carrying on free-form conversations. Through a four-week field study involving 28 patients with major depressive disorder and five psychiatrists, we examined how MindfulDiary facilitates patients’ journaling practice and clinical care. The study revealed that MindfulDiary supported patients in consistently enriching their daily records and helped clinicians better empathize with their patients through an understanding of their thoughts and daily contexts. Drawing on these findings, we discuss the implications of leveraging LLMs in the mental health domain, bridging the technical feasibility and their integration into clinical settings.

Natural Language Dataset Generation Framework for Visualizations Powered by Large Language Models

CHI'24

Kwon Ko (KAIST); Hyeon Jeon (Seoul National University); Gwanmo Park (Seoul National University); Dae Hyun Kim (KAIST); Nam Wook Kim (Boston College); Juho Kim (KAIST); Jinwook Seo (Seoul National University)

We introduce VL2NL, a Large Language Model (LLM) framework that generates rich and diverse NL datasets using Vega-Lite specifications as input, thereby streamlining the development of Natural Language Interfaces (NLIs) for data visualization. To synthesize relevant chart semantics accurately and enhance syntactic diversity in each NL dataset, we leverage 1) a guided discovery incorporated into prompting so that LLMs can steer themselves to create faithful NL datasets in a self-directed manner; 2) a score-based paraphrasing to augment NL syntax along four language axes. We also present a new collection of 1,981 real-world Vega-Lite specifications that offer greater diversity and complexity than existing chart collections. When tested on our chart collection, VL2NL extracted chart semantics and generated L1/L2 captions with 89.4% and 76.0% accuracy, respectively. It also demonstrated greater diversity in generating and paraphrasing utterances and questions compared to the benchmarks. Lastly, we discuss how our NL datasets and framework can be utilized in real-world scenarios. The code and chart collection are available at https://github.com/hyungkwonko/chart-llm.
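
The two-step "describe, then generate" prompting that the abstract calls guided discovery can be sketched as follows; the prompt wording and the `call_llm` helper are assumptions, and the authors’ actual pipeline lives in the linked repository.

```python
# Hedged sketch: guided-discovery prompting to turn a Vega-Lite spec into NL
# utterances, loosely following VL2NL's described approach.
import json

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your LLM provider here")

vega_lite_spec = {
    "mark": "bar",
    "encoding": {
        "x": {"field": "month", "type": "ordinal"},
        "y": {"field": "sales", "type": "quantitative"},
    },
}

prompt = (
    "Step 1: Describe what this Vega-Lite chart shows (mark type, axes, fields).\n"
    "Step 2: Using only your description, write three natural-language utterances "
    "a user might type to request this chart, each with different syntax.\n\n"
    f"Spec:\n{json.dumps(vega_lite_spec, indent=2)}"
)
# utterances = call_llm(prompt)  # the self-directed description grounds the output
```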

Navigating User-System Gaps: Understanding User-Interactions in User-Centric Context-Aware Systems for Digital Well-being Intervention

CHI'24

Inyeop Kim (KAIST); Uichin Lee (KAIST)

In this paper, we investigate the challenges users face with a user-centric context-aware intervention system. Users often face gaps when the system’s responses do not align with their goals and intentions. We explore these gaps through a prototype system that enables users to specify context-action intervention rules as they desire. We conducted a lab study to understand how users perceive and cope with gaps while translating their intentions into rules, revealing that users experience context-mapping and context-recognition uncertainties (an instant evaluation cycle). We also conducted a field study to explore how users perceive gaps and adapt their rules when observing the specified rules operate in real-world settings (a delayed evaluation cycle). This research highlights the dynamic nature of user interaction with context-aware systems and suggests the potential of such systems in supporting digital well-being. It provides insights into user adaptation processes and offers guidance for designing user-centric context-aware applications.
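
A context-action rule of the kind the prototype lets users specify can be represented very simply; the fields and matching logic below are illustrative assumptions, not the study’s implementation.

```python
# Hedged sketch: a user-specified "when <context>, do <action>" intervention
# rule, in the spirit of the prototype described above.
from dataclasses import dataclass
from datetime import time

@dataclass
class Rule:
    app: str        # target app
    location: str   # context: recognized place
    start: time     # context: time window start
    end: time       # context: time window end
    action: str     # intervention, e.g. "block" or "remind"

    def matches(self, current_app: str, current_location: str, now: time) -> bool:
        # Context-recognition errors (e.g., a mislabeled location) are exactly
        # where the user-system gaps described in the paper appear.
        return (current_app == self.app
                and current_location == self.location
                and self.start <= now <= self.end)

rule = Rule(app="YouTube", location="library", start=time(9), end=time(18), action="block")
print(rule.matches("YouTube", "library", time(14)))  # True -> trigger intervention
```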

PaperWeaver: Enriching Topical Paper Alerts by Contextualizing Recommended Papers with User-collected Papers

CHI'24

Yoonjoo Lee (KAIST); Hyeonsu B Kang (Carnegie Mellon University); Matt Latzke (Allen Institute for AI); Juho Kim (KAIST); Jonathan Bragg (Allen Institute for Artificial Intelligence); Joseph Chee Chang (Allen Institute for AI); Pao Siangliulue (Allen Institute for AI)

With the rapid growth of scholarly archives, researchers subscribe to “paper alert” systems that periodically provide them with recommendations of recently published papers that are similar to previously collected papers. However, researchers sometimes struggle to make sense of nuanced connections between recommended papers and their own research context, as existing systems only present paper titles and abstracts. To help researchers spot these connections, we present PaperWeaver, an enriched paper alerts system that provides contextualized text descriptions of recommended papers based on user-collected papers. PaperWeaver employs a computational method based on Large Language Models (LLMs) to infer users’ research interests from their collected papers, extract context-specific aspects of papers, and compare recommended and collected papers on these aspects. Our user study (N=15) showed that participants using PaperWeaver were able to better understand the relevance of recommended papers and triage them more confidently when compared to a baseline that presented the related work sections from recommended papers.

PriviAware: Exploring Data Visualization and Dynamic Privacy Control Support for Data Collection in Mobile Sensing Research

CHI'24

Hyunsoo Lee (KAIST); Yugyeong Jung (KAIST); Hei Yiu Law (Korea Advanced Institute of Science and Technology); Seolyeong Bae (Gwangju Institute of Science and Technology); Uichin Lee (KAIST)

With increased interest in leveraging personal data collected from 24/7 mobile sensing for digital healthcare research, supporting user-friendly consent to data collection for user privacy has also become important. This work proposes PriviAware, a mobile app that promotes flexible user consent to data collection with data exploration and contextual filters that enable users to turn off data collection based on times and places that are considered privacy-sensitive. We conducted a user study (N = 58) to explore how users leverage the data exploration and contextual filter functions to explore and manage their data, and whether our system design helped users mitigate their privacy concerns. Our findings indicate that offering fine-grained control is a promising approach to raising users’ privacy awareness under the dynamic nature of the pervasive sensing context. We provide practical privacy-by-design guidelines for mobile sensing research.
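
The contextual filter described above amounts to dropping samples whose time or place the user has marked sensitive; a minimal sketch follows, with field names and thresholds as illustrative assumptions rather than PriviAware’s code.

```python
# Hedged sketch: filtering sensor samples by user-defined sensitive times and
# places before they leave the device, in the spirit of PriviAware.
from datetime import datetime

SENSITIVE_PLACES = {"home", "hospital"}   # user-selected places to exclude
SENSITIVE_HOURS = range(22, 24)           # user-selected hours (10 pm to midnight)

def keep_sample(sample: dict) -> bool:
    """True if the sample falls outside all privacy-sensitive contexts."""
    ts = datetime.fromisoformat(sample["timestamp"])
    return sample["place"] not in SENSITIVE_PLACES and ts.hour not in SENSITIVE_HOURS

stream = [
    {"timestamp": "2024-05-11T14:00:00", "place": "campus", "heart_rate": 72},
    {"timestamp": "2024-05-11T23:10:00", "place": "home", "heart_rate": 64},
]
uploaded = [s for s in stream if keep_sample(s)]  # the second sample is withheld
```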

Time2Stop: Adaptive and Explainable Human-AI Loop for Smartphone Overuse Intervention

CHI'24

Adiba Orzikulova (KAIST); Han Xiao (Beijing University of Posts and Telecommunications); Zhipeng Li (Department of Computer Science and Technology, Tsinghua University); Yukang Yan (Carnegie Mellon University); Yuntao Wang (Tsinghua University); Yuanchun Shi (Tsinghua University); Marzyeh Ghassemi (MIT); Sung-Ju Lee (KAIST); Anind K Dey (University of Washington); Xuhai “Orson” Xu (Massachusetts Institute of Technology, University of Washington)

Despite a rich history of investigating smartphone overuse intervention techniques, AI-based just-in-time adaptive intervention (JITAI) methods for overuse reduction are lacking. We develop Time2Stop, an intelligent, adaptive, and explainable JITAI system that leverages machine learning to identify optimal intervention timings, introduces interventions with transparent AI explanations, and collects user feedback to establish a human-AI loop and adapt the intervention model over time. We conducted an 8-week field experiment (N=71) to evaluate the effectiveness of both the adaptation and explanation aspects of Time2Stop. Our results indicate that our adaptive models significantly outperform the baseline methods on intervention accuracy (>32.8% relatively) and receptivity (>8.0%). In addition, incorporating explanations further enhances the effectiveness by 53.8% and 11.4% on accuracy and receptivity, respectively. Moreover, Time2Stop significantly reduces overuse, decreasing app visit frequency by 7.0∼8.9%. Our subjective data also echoed these quantitative measures. Participants preferred the adaptive interventions and rated the system highly on intervention time accuracy, effectiveness, and level of trust. We envision our work can inspire future research on JITAI systems with a human-AI loop to evolve with users.
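
The human-AI loop the abstract describes — score a candidate moment, intervene, collect receptivity feedback, refit — can be sketched as below. The features, threshold, and choice of a logistic-regression model are illustrative assumptions, not Time2Stop’s actual model.

```python
# Hedged sketch: a feedback loop that adapts intervention timing from user
# receptivity labels, in the spirit of Time2Stop's human-AI loop.
from sklearn.linear_model import LogisticRegression

X, y = [], []  # features of past candidate moments; 1 = intervention well received
model = LogisticRegression()

def maybe_intervene(features, threshold=0.7) -> bool:
    if len(set(y)) < 2:
        return True  # cold start: intervene to gather feedback for both classes
    return model.predict_proba([features])[0][1] >= threshold

def record_feedback(features, was_receptive: bool):
    X.append(features)
    y.append(int(was_receptive))
    if len(set(y)) == 2:
        model.fit(X, y)  # the timing model adapts as feedback accumulates

moment = [32.0, 5, 1]  # e.g., minutes in app, visits today, late-evening flag
if maybe_intervene(moment):
    record_feedback(moment, was_receptive=False)
```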

Unlock Life with a Chat(GPT): Integrating Conversational AI with Large Language Models into Everyday Lives of Autistic Individuals

CHI'24

Dasom Choi (KAIST); Sunok Lee (KAIST); Sung-In Kim (Seoul National University Hospital); Kyungah Lee (Daegu University); Hee Jeong Yoo (Seoul National University Bundang Hospital); Sangsu Lee (KAIST); Hwajung Hong (KAIST)

Autistic individuals often draw on insights from their supportive networks to develop self-help life strategies ranging from everyday chores to social activities. However, human resources may not always be immediately available. Recently emerging conversational agents (CAs) that leverage large language models (LLMs) have the potential to serve as powerful information-seeking tools, facilitating autistic individuals to tackle daily concerns independently. This study explored the opportunities and challenges of LLM-driven CAs in empowering autistic individuals through focus group interviews and workshops (N=14). We found that autistic individuals expected LLM-driven CAs to offer a non-judgmental space, encouraging them to approach day-to-day issues proactively. However, they raised issues regarding critically digesting the CA responses and disclosing their autistic characteristics. Based on these findings, we propose approaches that place autistic individuals at the center of shaping the meaning and role of LLM-driven CAs in their lives, while preserving their unique needs and characteristics.

User Performance in Consecutive Temporal Pointing: An Exploratory Study

CHI'24

Dawon Lee (KAIST); Sunjun Kim (Daegu Gyeongbuk Institute of Science and Technology (DGIST)); Junyong Noh (KAIST); Byungjoo Lee (Yonsei University)

A significant amount of research has recently been conducted on user performance in so-called temporal pointing tasks, in which a user is required to perform a button input at the timing required by the system. Consecutive temporal pointing (CTP), in which two consecutive button inputs must be performed while satisfying temporal constraints, is common in modern interactions, yet little is understood about user performance on the task. Through a user study involving 100 participants, we broadly explore user performance in a variety of CTP scenarios. The key finding is that CTP is a unique task that cannot be considered as two ordinary temporal pointing processes. Significant effects of button input method, motor limitations, and different hand use were also observed.

VIVID: Human-AI Collaborative Authoring of Vicarious Dialogues from Lecture Videos

CHI'24

Seulgi Choi (KAIST); Hyewon Lee (KAIST); Yoonjoo Lee (KAIST); Juho Kim (KAIST)

Lengthy monologue-style online lectures cause learners to lose engagement easily. Designing lectures in a “vicarious dialogue” format can foster learners’ cognitive activities more than a monologue style can. However, designing online lectures in a dialogue style catered to the diverse needs of learners is laborious for instructors. We conducted a design workshop with eight educational experts and seven instructors to derive key guidelines and explore the potential use of large language models (LLMs) to transform a monologue lecture script into pedagogically meaningful dialogue. Applying these design guidelines, we created VIVID, which allows instructors to collaborate with LLMs to design, evaluate, and modify pedagogical dialogues. In a within-subjects study with instructors (N=12), we show that VIVID helped instructors select and revise dialogues efficiently, thereby supporting the authoring of quality dialogues. Our findings demonstrate the potential of LLMs to assist instructors with creating high-quality educational dialogues across various learning stages.

Viewer2Explorer: Designing a Map Interface for Spatial Navigation in Linear 360 Museum Exhibition Video

CHI'24

Chaeeun Lee (KAIST); Jinwook Kim (KAIST); HyeonBeom Yi (KAIST); Woohun Lee (KAIST)

The pandemic has contributed to the increased development of digital content for remote experiences. Notably, museums have begun creating virtual exhibitions using 360-videos, providing a sense of presence and a high level of immersion. However, 360-video content often uses a linear timeline interface that requires viewers to follow the path decided by the video creators. This format limits viewers’ ability to actively engage with and explore the virtual space independently. Therefore, we designed a map-based video interface, Viewer2Explorer, that enables the user to perceive and explore virtual spaces autonomously. We then conducted a study to compare the overall experience between the existing linear timeline and map interfaces. Viewer2Explorer enhanced users’ spatial controllability and enabled active exploration in virtual museum exhibition spaces. Additionally, based on our map interface, we discuss a new type of immersion and assisted autonomy that can be experienced through a 360-video interface and provide design insights for future content.

A Design Space for Intelligent and Interactive Writing Assistants

CHI'24

Mina Lee (Microsoft Research); Katy Ilonka Gero (Harvard University); John Joon Young Chung (Midjourney); Simon Buckingham Shum (University of Technology Sydney); Vipul Raheja (Grammarly); Hua Shen (University of Michigan); Subhashini Venugopalan (Google); Thiemo Wambsganss (Bern University of Applied Sciences); David Zhou (University of Illinois Urbana-Champaign); Emad A. Alghamdi (King Abdulaziz University); Tal August (University of Washington); Avinash Bhat (McGill University); Madiha Zahrah Choksi (Cornell Tech); Senjuti Dutta (University of Tennessee, Knoxville); Jin L.C. Guo (McGill University); Md Naimul Hoque (University of Maryland); Yewon Kim (KAIST); Simon Knight (University of Technology Sydney); Seyed Parsa Neshaei (EPFL); Antonette Shibani (University of Technology Sydney); Disha Shrivastava (Google DeepMind); Lila Shroff (Stanford University); Agnia Sergeyuk (JetBrains Research); Jessi Stark (University of Toronto); Sarah Sterman (University of Illinois, Urbana-Champaign); Sitong Wang (Columbia University); Antoine Bosselut (EPFL); Daniel Buschek (University of Bayreuth); Joseph Chee Chang (Allen Institute for AI); Sherol Chen (Google); Max Kreminski (Midjourney); Joonsuk Park (University of Richmond); Roy Pea (Stanford University); Eugenia H Rho (Virginia Tech); Zejiang Shen (Massachusetts Institute of Technology); Pao Siangliulue (B12)

In our era of rapid technological advancement, the research landscape for writing assistants has become increasingly fragmented across various research communities. We seek to address this challenge by proposing a design space as a structured way to examine and explore the multidimensional space of intelligent and interactive writing assistants. Through community collaboration, we explore five aspects of writing assistants: task, user, technology, interaction, and ecosystem. Within each aspect, we define dimensions and codes by systematically reviewing 115 papers while leveraging the expertise of researchers in various disciplines. Our design space aims to offer researchers and designers a practical tool to navigate, comprehend, and compare the various possibilities of writing assistants, and aid in the design of new writing assistants.

Reconfigurable Interfaces by Shape Change and Embedded Magnets

CHI'24

Himani Deshpande (Texas A&M University); Bo Han (National University of Singapore); Kongpyung (Justin) Moon (KAIST); Andrea Bianchi (KAIST); Clement Zheng (National University of Singapore); Jeeeun Kim (Texas A&M University)

Reconfigurable physical interfaces empower users to swiftly adapt to tailored design requirements or preferences. Shape-changing interfaces enable such reconfigurability, avoiding the cost of refabrication or part replacements. Nonetheless, reconfigurable interfaces are often bulky, expensive, or inaccessible. We propose a reversible shape-changing mechanism that enables reconfigurable 3D printed structures via translations and rotations of parts. We investigate fabrication techniques that enable reconfiguration using magnets and the thermoplasticity of heated polymer. The proposed interfaces achieve tunable haptic feedback and adjustable user affordances by reconfiguring input motions. The design space is demonstrated through applications in rehabilitation, embodied communication, accessibility, safety, and gaming.

Interrupting for Microlearning: Understanding Perceptions and Interruptibility of Proactive Conversational Microlearning Services

CHI'24

Minyeong Kim (Kangwon National University); Jiwook Lee (Kangwon National University); Youngji Koh (Korea Advanced Institute of Science and Technology); Chanhee Lee (KAIST); Uichin Lee (KAIST); Auk Kim (Kangwon National University)

Significant investment of time and effort for language learning has prompted growing interest in microlearning. While microlearning requires frequent participation in 3-to-10-minute learning sessions, the recent widespread adoption of smart speakers in homes presents an opportunity to expand learning opportunities by proactively providing microlearning in daily life. However, such proactive provision can distract users. Despite extensive research on proactive smart speakers and opportune moments for proactive interactions, our understanding of opportune moments for more-than-one-minute interactions remains limited. This study aims to understand user perceptions of and opportune moments for more-than-one-minute microlearning using proactive smart speakers at home. We first developed a proactive microlearning service through six pilot studies (n=29), and then conducted a three-week field study (n=28). We identified the key contextual factors relevant to opportune moments for microlearning of various durations, and we discuss the design implications for proactive conversational microlearning services at home.

Your Avatar Seems Hesitant to Share About Yourself: How People Perceive Others' Avatars in the Transparent System

CHI'24

Yeonju Jang (Cornell University); Taenyun Kim (Michigan State University); Huisung Kwon (KAIST); Hyemin Park (Sungkyunkwan University); Ki Joon Kim (City University of Hong Kong)

In avatar-mediated communication, users often cannot identify how others’ avatars were created, which is important information for evaluating others. Thus, we tested a social virtual world that is transparent about others’ avatar-creation methods and investigated how knowing about others’ avatar-creation methods shapes users’ perceptions of others and their self-disclosure. We conducted a 2×2 mixed-design experiment with system design (nontransparent vs. transparent system) as a between-subjects variable and avatar-creation method (customized vs. personalized avatar) as a within-subjects variable with 60 participants. The results revealed that personalized avatars in the transparent system were viewed less positively than customized avatars in the transparent system or avatars in the nontransparent system. These avatars appeared less comfortable and less honest in their self-disclosure, and less competent. Interestingly, avatars in the nontransparent system attracted more followers. Our results suggest being cautious when creating a social virtual world that discloses the avatar-creation process.

Interactivity

CHI'24

Sang Ho Yoon (KAIST); Youjin Sung (KAIST); Kun Woo Song (KAIST); Kyungeun Jung (KAIST); Kyungjin Seo (KAIST); Jina Kim (KAIST); Yi Hyung Il (KAIST); Nicha Vanichvoranun (Korea Advanced Institute of Science and Technology (KAIST)); Hanseok Jeong (Korea Advanced Institute of Science and Technology); Hojeong Lee (KAIST)

In this Interactivity, we present a lab demo on adaptive and immersive wearable interfaces that enhance extended reality (XR) interactions. Advances in wearable hardware with state-of-the-art software support have great potential to promote highly adaptive sensing and immersive haptic feedback for enhanced user experiences. Our research projects focus on novel sensing techniques, innovative hardware/devices, and realistic haptic rendering to achieve these goals. Ultimately, our work aims to improve the user experience in XR by overcoming the limitations of existing input control and haptic feedback. Our lab demo features three highly enhanced experiences with wearable interfaces. First, we present novel sensing techniques that enable a more precise understanding of user intent and status, enriched with a broader context. Then, we showcase innovative haptic devices and authoring toolkits that leverage the captured user intent and status. Lastly, we demonstrate immersive haptic rendering with body-based wearables that enhance the user experience.

STButton: Exploring Opportunities for Buttons with Spatio-Temporal Tactile Output

CHI'24

Yeonsu Kim (KAIST); Jisu Yim (KAIST); JaeHyun Kim (KAIST); Kyunghwan Kim (KAIST); Geehyuk Lee (School of Computing, KAIST)

We present STButton, a physical button with a high-resolution spatio-temporal tactile feedback surface. Its 5 × 8 pin-array tactile display, measuring 20 mm × 28 mm, enables the button to express various types of information, such as value with the number of raised pins, direction with the location of raised pins, and duration with a blinking animation. With a highly expressive tactile surface, the button can seamlessly deliver assistive feedforward and feedback during spontaneous button interaction, such as touching to locate the button or applying gradual pressure to press it. In the demonstration, attendees experience five scenarios of button interaction: the seat heater button in a car, the volume control button on a remote controller, the power button on a laptop, the menu button on a VR controller, and the play button on a game controller. In each scenario, the representative role of tactile feedback is configured differently, allowing attendees to experience the rich interaction space and potential benefits of STButton. Attendees with early access appreciated the unique opportunity to convey information with a highly expressive tactile surface and emphasized that STButton adds a tangible layer to the user experience, enhancing emotional and sensory engagement.

EMPop: Pin Based Electromagnetic Actuation for Projection Mapping

CHI'24

Sungbaek Kim (Graduate School of Culture Technology, KAIST); Doyo Choi (Graduate School of Culture Technology, KAIST); Jinjoon Lee (KAIST)

As interactive media arts evolve, there is a growing demand for technologies that offer multisensory experiences beyond audiovisual elements in large-scale projection mapping exhibitions. However, traditional methods of providing tactile feedback are impractical in expansive settings due to their bulk and complexity. EMPop addresses this with a straightforward design of electromagnets and permanent magnets that makes projection mapping more interactive and engaging. Our system controls three permanent magnets individually with a single electromagnet by adjusting the electromagnet’s current, making the design reliable and scalable. We assessed its ability to convey directions and the strength of feedback, finding that users correctly identified directions and differentiated feedback intensity levels. Participants enjoyed the realistic and engaging experience, suggesting EMPop’s potential for enriching interactive installations in museums and galleries.

Late-Breaking Work

Towards the Safety of Film Viewers from Sensitive Content: Advancing Traditional Content Warnings on Film Streaming Services

CHI'24

Soyeong Min (KAIST); Minha Lee (KAIST); Sangsu Lee (KAIST)

Traditional content warnings on film streaming services are limited to text or pictograms that offer only broad categorizations, shown for a few seconds at the start. This method does not provide details on the timing and intensity of sensitive scenes. To explore the potential for improving content warnings, we investigated users’ perceptions of the current system and their expectations for a new content warning system through participatory design workshops involving 11 participants. We found users’ expectations to span three aspects: 1) developing a more nuanced understanding of their personal sensitivities beyond content sensitivities, 2) enabling a trigger-centric film exploration process, and 3) allowing predictions of the timing of scenes and mitigation of the intensity of sensitive content. Our study initiates a preliminary exploration of advanced content warnings, incorporating users’ specific expectations and creative ideas, with the goal of fostering safer viewing experiences.

MOJI: Enhancing Emoji Search System with Query Expansions and Emoji Recommendations

CHI'24

Yoo Jin Hong (KAIST); Hye Soo Park (KAIST); Eunki Joung (KAIST); Jihyeong Hong (KAIST)

The text-based emoji search, despite its widespread use and the extensive variety of emojis, has received limited attention in terms of understanding user challenges and identifying ways to support users. In our formative study, we identified bottlenecks in text-based emoji search, focusing on the challenges of finding appropriate search keywords and on user modification strategies for unsatisfying searches. Building on these findings, we introduce MOJI, an emoji entry system supporting 1) query expansion with content-relevant, multi-dimensional keywords reflecting users’ modification strategies and 2) emoji recommendations for each search query. A comparison study demonstrated that our system reduced the time required to finalize search keywords compared to traditional text-based methods. Additionally, users achieved higher satisfaction with final emoji selections through easy attempts at and modifications of search queries, without increasing the overall selection time. We also present a comparison of emoji suggestion algorithms (GPT and iOS) to support query expansion.

Supporting Interpersonal Emotion Regulation of Call Center Workers via Customer Voice Modulation

CHI'24

Duri Lee (KAIST); Kyungmin Nam (Delft University of Technology (TU Delft)); Uichin Lee (KAIST)

Call center workers suffer from the aggressive voices of customers. In this study, we explore the possibility of proactive voice modulation or style transfer, in which a customer’s voice is modified in real time to mitigate emotional contagion. As a preliminary study, we interviewed call center workers and performed a scenario-based user study to evaluate the effects of voice modulation on perceived stress and emotion. We transformed the customer’s voice by modulating its pitch and found potential value in this approach for designing a user interface for proactive voice modulation. We provide new insights into interface design for proactively supporting call center workers during emotionally stressful conversations.

Understanding Visual, Integrated, and Flexible Workspace for Comprehensive Literature Reviews with SketchingRelatedWork

CHI'24

Donghyeok Ma (KAIST); Joon Hyub Lee (KAIST); Seok-Hyung Bae (KAIST)

Writing an academic paper requires significant time and effort to find, read, and organize many related papers, which are complex knowledge tasks. We present a novel interactive system that allows users to perform these tasks quickly and easily on the 2D canvas with pen and multitouch inputs, turning users’ sketches and handwriting into node-link diagrams of papers and citations that users can iteratively expand in situ toward constructing a coherent narrative when writing Related Work sections. Through a pilot study involving researchers experienced in publishing academic papers, we show that our system can serve as a visual, integrated, and flexible workspace for conducting comprehensive literature reviews.

Unveiling the Inherent Needs: GPT Builder as Participatory Design Tool for Exploring Needs and Expectation of AI with Middle-Aged Users

CHI'24

Huisung Kwon (KAIST); Yunjae Josephine Choi (KAIST); Sunok Lee (Aalto University); Sangsu Lee (KAIST)

A generative session that directly involves users in the design process is an effective way to design user-centered experiences by uncovering intrinsic needs. However, engaging users who lack coding knowledge in AI system design poses significant challenges. Recognizing this, the recently released GPT Builder, which allows users to customize ChatGPT through simple dialog interactions, is a promising solution. We aimed to explore the possibility of using this tool to uncover users’ intrinsic needs and expectations towards AI. We conducted individual participatory design workshops with generative sessions focusing on middle-aged individuals. This approach helped us delve into the latent needs and expectations of conversational AI among this demographic. We discovered a wide range of unexpressed needs and expectations for AI among them. Our research highlights the potential and value of using the GPT-creating tool as a design method, particularly for revealing users’ unexpressed needs and expectations.

DirActor: Creating Interaction Illustrations by Oneself through Directing and Acting Simultaneously in VR

CHI'24

Seung-Jun Lee (KAIST); Siripon Sutthiwanna (KAIST); Joon Hyub Lee (KAIST); Seok-Hyung Bae (KAIST)

In HCI research papers, interaction illustrations are essential to vividly expressing user scenarios arising from novel interactions. However, creating these illustrations through drawing or photography can be challenging, especially when they involve human figures. In this study, we propose the DirActor system that helps researchers create interaction illustrations in VR that can be used as-is or post-processed, by becoming both the director and the actor simultaneously. We reproduced interaction illustrations from past ACM CHI Best and Honorable Mention papers using the proposed system to showcase its usefulness and versatility.

Supporting Novice Researchers to Write Literature Review using Language Models

CHI'24

Kiroong Choe (Seoul National University); Seokhyeon Park (Seoul National University); Seokweon Jung (Seoul National University); Hyeok Kim (Northwestern University); Ji Won Yang (Seoul National University); Hwajung Hong (KAIST); Jinwook Seo (Seoul National University)

A literature review requires more than summarization. While language model-based services and systems increasingly assist in analyzing the content of papers accurately, their role in supporting novice researchers to develop independent perspectives on literature remains underexplored. We propose the design and evaluation of a system that supports the writing of argumentative narratives from literature. Based on the barriers faced by novice researchers before, during, and after writing, identified through semi-structured interviews, we propose a prototype of a language-model-assisted academic writing system that scaffolds the literature review writing process. A series of workshop studies revealed that novice researchers found the support valuable, as they could initiate writing, co-create satisfying content, and develop agency and confidence through a long-term dynamic partnership with the AI.

Bimanual Interactions for Surfacing Curve Networks in VR

CHI'24

Sang-Hyun Lee (KAIST); Joon Hyub Lee (KAIST); Seok-Hyung Bae (KAIST)

We propose an interactive system for authoring 3D curve and surface networks using bimanual interactions in virtual reality (VR) inspired by physical wire bending and film wrapping. In our system, the user can intuitively author 3D shapes by performing a rich vocabulary of interactions arising from a minimal gesture grammar based on hand poses and their firmness for constraint definition and object manipulation. Through a pilot test, we found that the user can quickly and easily learn and use our system and become immersed in 3D shape authoring.

VR-SSVEPeripheral: Designing Virtual Reality Friendly SSVEP Stimuli using Peripheral Vision Area for Immersive and Comfortable Experience

CHI'24

Jinwook Kim (KAIST); Taesu Kim (KAIST); Jeongmi Lee (KAIST)

Recent VR HMDs embed various bio-sensors (e.g., EEG, eye-tracker) to expand the interaction space. Steady-state visual evoked potential (SSVEP) is one of the most utilized methods in BCI, and recent studies are attempting to design novel VR interactions with it. However, most of them suffer from usability issues, as SSVEP uses flickering stimuli to detect target brain signals that could cause eye fatigue. Also, conventional SSVEP stimuli are not tailored to VR, taking the same form as in a 2D environment. Thus, we propose VR-friendly SSVEP stimuli that utilize the peripheral, instead of the central, vision area in HMD. We conducted an offline experiment to verify our design (n=20). The results indicated that VR-SSVEPeripheral was more comfortable than the conventional one (Central) and functional for augmenting synchronized brain signals for SSVEP detection. This study provides a foundation for designing a VR-suitable SSVEP system and guidelines for utilizing it.
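
For readers unfamiliar with SSVEP, the core detection idea is frequency tagging: each stimulus flickers at a distinct rate, and the attended target is inferred from EEG spectral power at those rates. Below is a minimal, self-contained Python sketch of this generic scheme on synthetic data; it illustrates the principle only, not the authors' pipeline, and all parameters (sampling rate, window, flicker frequencies) are hypothetical.

    import numpy as np

    fs, dur = 250, 4.0                      # assumed EEG sampling rate (Hz) and window (s)
    t = np.arange(0, dur, 1 / fs)
    rng = np.random.default_rng(2)
    # Synthetic EEG: a 12 Hz steady-state response buried in noise
    eeg = np.sin(2 * np.pi * 12 * t) + 0.8 * rng.normal(size=t.size)

    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    power = np.abs(np.fft.rfft(eeg)) ** 2

    stimulus_hz = [8, 10, 12, 15]           # hypothetical candidate flicker frequencies
    scores = {f: power[np.argmin(np.abs(freqs - f))] for f in stimulus_hz}
    print(max(scores, key=scores.get))      # -> 12, the attended stimulus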

Special Interest Group

A SIG on Understanding and Overcoming Barriers in Establishing HCI Degree Programs in Asia

CHI'24

Zhicong Lu (City University of Hong Kong); Ian Oakley (KAIST); Chat Wacharamanotham (Independent researcher)

Despite a high demand for HCI education, Asia’s academic landscape has a limited number of dedicated HCI programs. This situation leads to a brain drain and impedes the creation of regional HCI centers of excellence and local HCI knowledge. This SIG aims to gather stakeholders related to this problem to clarify and articulate its facets and explore potential solutions. The discussions and insights gained from this SIG will provide valuable input for the Asia SIGCHI Committee and other organizations in their endeavors to promote and expand HCI education across Asia. Furthermore, the findings and strategies identified can also serve as valuable insights for other communities in the Global South facing similar challenges. By fostering a comprehensive understanding of the barriers and brainstorming effective mitigating strategies, this SIG aims to catalyze the growth of HCI programs in Asia and beyond.

Workshop

Workshop on Building a Metaverse for All: Opportunities and Challenges for Future Inclusive and Accessible Virtual Environments

CHI'24

Callum Parker (University of Sydney), Soojeong Yoo (University College London), Joel Fredericks (The University of Sydney), Tram Thi Minh Tran (University of Sydney), Julie R. Williamson (University of Glasgow), Youngho Lee (Mokpo National University), Woontack Woo (KAIST)

Human-Centered Evaluation and Auditing of Language Models

CHI'24

Ziang Xiao (Johns Hopkins University, Microsoft Research); Wesley Hanwen Deng (Carnegie Mellon University); Michelle S. Lam (Stanford University); Motahhare Eslami (Carnegie Mellon University); Juho Kim (KAIST); Mina Lee (Microsoft Research); Q. Vera Liao (Microsoft Research)

The recent advancements in Large Language Models (LLMs) have significantly impacted numerous, and will impact more, real-world applications. However, these models also pose significant risks to individuals and society. To mitigate these issues and guide future model development, responsible evaluation and auditing of LLMs are essential. This workshop aims to address the current “evaluation crisis” in LLM research and practice by bringing together HCI and AI researchers and practitioners to rethink LLM evaluation and auditing from a human-centered perspective. The HEAL workshop will explore topics around understanding stakeholders’ needs and goals with evaluation and auditing LLMs, establishing human-centered evaluation and auditing methods, developing tools and resources to support these methods, building community and fostering collaboration. By soliciting papers, organizing invited keynote and panel, and facilitating group discussions, this workshop aims to develop a future research agenda for addressing the challenges in LLM evaluation and auditing.

Video Showcase

Design Exploration of Robotic In-Car Accessories for Semi-Autonomous Vehicles

CHI'24

Max Fischer (The University of Tokyo); Jongik Jeon (KAIST); Seunghwa Pyo (KAIST); Shota Kiuchi (The University of Tokyo); Kumi Oda (The University of Tokyo); Kentaro Honma (The University of Tokyo); Miles Pennington (The University of Tokyo); Hyunjung Kim (The University of Tokyo)

Workshop for CHI’24 – HCI@KAIST

Long Time No See!

HCI@KAIST successfully wrapped up the CHI’24 workshop. After several years of pandemic-related challenges, the much-anticipated workshop finally took place offline! As the HCI@KAISTIANS all met in person once again, the atmosphere was filled with enthusiasm and a warm sense of reconnection.

When: July 28th, 2023, 13:00 – 16:30
Where: KAIST N1 #102, 201, 106, 107, 110

Since there were some new changes in the CHI review process, the workshop held a CHI review process tutorial with three senior Ph.D. students (EunJi Park, Myungjin Kim, and Dasom Choi) who have been through the review cycle. They shared their experiences with the R&R process along with fruitful advice. Professor Hwajung Hong led the overall session, sharing her observations from serving as an AC.

CHI Review Process Tutorial (Chair: Prof. Hong)

This year’s CHI’24 workshop was full of participation from HCI@KAISTIANS. Committee organizers pre-grouped all the participants who wanted to share their projects and created groups for intensive, in-depth feedback.

Timetable of CHI’24 Workshop

We send our huge gratitude to all the professors in HCI@KAIST (Uichin Lee, Hwajung Hong, Youn-kyung Lim, Sang Ho Yoon, and Jeongmi Lee) and to all the student members of the organizing committee!

Thank you to all participants for your amazing presentations, and we wish to see you all at the upcoming CHI’24 conference as well as the CHI’25 workshop! 🙂

Fall in HCI@KAIST (2023 Fall Colloquium)

This year’s HCI@KAIST fall colloquium invited four speakers from diverse HCI domains.


Sherry Tongshuang Wu from Carnegie Mellon University

Practical AI Systems and Effective Human-AI Collaboration

As AI systems (such as LLMs) rapidly advance, they can now perform tasks that were once exclusive to humans. This trend indicates a shift towards extensive collaboration with LLMs, where humans delegate tasks to them while focusing on higher-level skills unique to their capabilities. However, haphazard pairing of humans and AIs can lead to negative consequences, such as blind trust in incorrect AI outputs and decreased human productivity. In this talk, I will discuss our efforts in promoting effective human-AI collaboration by ensuring competence in both humans and AIs for their respective responsibilities and enhancing their collaboration. I will cover three themes: (1) evaluating LLMs on specific usage scenarios; (2) building task-specific interactions that maximize LLM usability; and (3) training and guiding humans to optimize their collaboration skills with AI systems. In my final remarks, I will reflect on how AI advances can be viewed through the lens of their usefulness to actual human users.

Sang Won Lee from Virginia Tech

Toward Computer-mediated Empathy

This talk discusses ways to design computational systems that facilitate empathic communication and collaboration in various domains. In contrast to using technologies to develop users’ empathy for targets, I emphasize the duality of empathy and highlight empowering targets to express, reveal, and reflect on themselves. An ongoing framework will be introduced, and I will focus on recent projects that explore sharing perspectives, self-expression, and self-reflection as a means to mediate empathy in interactive systems from target perspectives.

Janghee Cho from National University of Singapore

Design for Sustainable Life in the Work-From-Home Era

Navigating the complexities of the contemporary human experience is precarious, marked by latent but pervasive anxiety and uncertainty. In this talk, I draw on a reflective design approach that emphasizes the value of human agency and meaning-making processes to discuss design implications for technologies that could help people (re)establish a sense of normalcy in their everyday lives. Specifically, the focus centers on recent projects that investigate the role of data-driven technology in addressing well-being issues within remote and hybrid work settings, where individuals grapple with blurred boundaries between home and work.

Audrey Desjardins from University of Washington

Data Imaginaries

0s and 1s on a screen. The Cloud. Fast moving. Clean. Efficient. Exponentially growing. Data Centers. Code on the black screen of a terminal window. Buzzing. Such images construct part of commonly shared imaginaries around data. As data increasingly become part of the most intimate parts of people’s lives (often at home), it remains a largely invisible phenomenon. In particular, one of the leading challenges currently facing the Internet of Things (IoT) is algorithmic transparency and accountability with regards to how IoT data are collected, what is inferred, and who they are shared with. From a home dweller’s perspective, data may be available for review and reflection via graphs, spreadsheets, and dashboards (if at all available!). In this talk, I instead argue for other modes of encountering IoT data: ways that are creative, critical, subtle, performative, and at times analog or fictional. By translating data into ceramic artifacts, performance and interactive installation experiments, fiction stories, imagined sounds, faded fabric, and even data cookies, I show a diversity of approaches for engaging data that might capture people’s attention and imagination. As a result, this work uncovers ways to make data more real, showing its messiness and complexities, and opens questions about how data might be interpreted, and by whom.

CHI 2023

CHI 2023
DATE
  23 April – 28 April 2023
LOCATION  Hamburg, Germany | Hybrid
 

We are excited to bring good news! At CHI 2023, KAIST records a total of 21 Full Paper publications, 6 Late-Breaking Works, 4 Student Game Competitions, 2 Interactivities, and 6 Workshops. Congratulations on the outstanding achievement!

For more information and details about the publications that feature in the conference, please refer to the publication list below.

Paper Publications

FlowAR: How Different Augmented Reality Visualizations of Online Fitness Videos Support Flow for At-Home Yoga Exercises

CHI'23

Hye-Young Jo, Laurenz Seidel, Michel Pahud, Mike Sinclair, Andrea Bianchi

Online fitness video tutorials are an increasingly popular way to stay fit at home without a personal trainer. However, to keep the screen playing the video in view, users typically disrupt their balance and break the motion flow — two main pillars for the correct execution of yoga poses. While past research partially addressed this problem, these approaches supported only a limited view of the instructor and simple movements. To enable the fluid execution of complex full-body yoga exercises, we propose FlowAR, an augmented reality system for home workouts that shows training video tutorials as always-present virtual static and dynamic overlays around the user. We tested different overlay layouts in a study with 16 participants, using motion capture equipment for baseline performance. Then, we iterated the prototype and tested it in a furnished lab simulating home settings with 12 users. Our results highlight the advantages of different visualizations and the system’s general applicability. 

Preview

AutomataStage: an AR-mediated Creativity Support Tool for Hands-on Multidisciplinary Learning

CHI'23

Yunwoo Jeong, Hyungjun Cho, Taewan Kim, Tek-Jin Nam

Creativity support tools can enhance the hands-on multidisciplinary learning experience by drawing interest to the process of creating the outcome. We present AutomataStage, an AR-mediated creativity support tool for hands-on multidisciplinary learning. AutomataStage utilizes a video see-through interface to support the creation of Interactive Automata. The combination of building blocks and low-cost materials increases expressiveness. The generative design method and one-to-one guide support the idea development process. It also provides a hardware see-through feature, with which inside parts and circuits can be seen, and an operational see-through feature that shows the operation in real-time. The visual programming method with a state transition diagram supports the iterative process during creation. A user study shows that AutomataStage enabled the students to create diverse Interactive Automata within 40-minute sessions. By creating Interactive Automata, the participants could learn the basic concepts of the components. See-through features allowed active exploration with interest while integrating the components. We discuss the implications of hands-on tools with interactive and kinetic content beyond multidisciplinary learning.

Preview

It is Okay to be Distracted: How Real-time Transcriptions Facilitate Online Meeting with Distraction

CHI'23

Seoyun Son, Junyoung Choi, Sunjae Lee, Jean Y Song, Insik Shin

Online meetings are indispensable in collaborative remote work environments, but they are vulnerable to distractions due to their distributed and location-agnostic nature. While distraction often leads to a decrease in online meeting quality due to loss of engagement and context, natural multitasking has positive tradeoff effects, such as increased productivity within a given time unit. In this study, we investigate the impact of real-time transcriptions (i.e., full-transcripts, summaries, and keywords) as a solution to help facilitate online meetings during distracting moments while still preserving multitasking behaviors. Through two rounds of controlled user studies, we qualitatively and quantitatively show that people can better catch up with the meeting flow and feel less interfered with when using real-time transcriptions. The benefits of real-time transcriptions were more pronounced after distracting activities. Furthermore, we reveal additional impacts of real-time transcriptions (e.g., supporting recalling contents) and suggest design implications for future online meeting platforms where these could be adaptively provided to users with different purposes.

Preview

RoutineAid: Externalizing Key Design Elements to Support Daily Routines of Individuals with Autism

CHI'23

Bogoan Kim, Sung-In Kim, Sangwon Park, Hee Jeong Yoo, Hwajung Hong, Kyungsik Han

Implementing structure into our daily lives is critical for maintaining health, productivity, and social and emotional well-being. New norms for routine management have emerged during the current pandemic, and in particular, individuals with autism find it difficult to adapt to those norms. While much research has focused on the use of computer technology to support individuals with autism, little is known about ways of helping them establish and maintain “self-directed” routine structures. In this paper, we identify design requirements for an app that supports four key routine components (i.e., physical activity, diet, mindfulness, and sleep) through a formative study and develop RoutineAid, a gamified smartphone app that reflects the design requirements. The results of a two-month field study on design feasibility highlight two affordances of RoutineAid – the establishment of daily routines by facilitating micro-planning and the maintenance of daily routines through celebratory interactions. We discuss salient design considerations for the future development of daily routine management tools for individuals with autism.

Preview

OmniSense: Exploring Novel Input Sensing and Interaction Techniques on Mobile Device with an Omni-Directional Camera

CHI'23

Hui-Shyong Yeo, Erwin Wu, Daewha Kim, Juyoung Lee, Hyung-il Kim, Seo Young Oh, Luna Takagi, Woontack Woo, Hideki Koike, Aaron J Quigley

An omni-directional (360°) camera captures the entire viewing sphere surrounding its optical center. Such cameras are growing in use to create highly immersive content and viewing experiences. When such a camera is held by a user, the view includes the user’s hand grip, fingers, body pose, face, and the surrounding environment, providing a complete understanding of the visual world and context around it. This capability opens up numerous possibilities for rich mobile input sensing. In OmniSense, we explore the broad input design space for mobile devices with a built-in omni-directional camera and broadly categorize it into three sensing pillars: i) near device, ii) around device, and iii) surrounding device. We also explore potential use cases and applications that leverage these sensing capabilities to solve user needs. Following this, we develop a working system that puts these concepts into action by leveraging these sensing capabilities to enable potential use cases and applications. We studied the system in a technical evaluation and a preliminary user study to gain initial feedback and insights. Collectively these techniques illustrate how a single, omni-purpose sensor on a mobile device affords many compelling ways to enable expressive input, while also affording a broad range of novel applications that improve user experience during mobile interaction.

Preview

DAPIE: Interactive Step-by-Step Explanatory Dialogues to Answer Children’s Why and How Questions

CHI'23

Yoonjoo Lee, Tae Soo Kim, Sungdong Kim, Yohan Yun, Juho Kim

Children acquire an understanding of the world by asking “why” and “how” questions. Conversational agents (CAs) like smart speakers or voice assistants can be promising respondents to children’s questions as they are more readily available than parents or teachers. However, CAs’ answers to “why” and “how” questions are not designed for children, as they can be difficult to understand and provide little interactivity to engage the child. In this work, we propose design guidelines for creating interactive dialogues that promote children’s engagement and help them understand explanations. Applying these guidelines, we propose DAPIE, a system that answers children’s questions through interactive dialogue by employing an AI-based pipeline that automatically transforms existing long-form answers from online sources into such dialogues. A user study (N=16) showed that, with DAPIE, children performed better in an immediate understanding assessment while also reporting higher enjoyment than when explanations were presented sentence-by-sentence.

Preview

ModSandbox: Facilitating Online Community Moderation Through Error Prediction and Improvement of Automated Rules

CHI'23

Jean Y Song, Sangwook Lee, Jisoo Lee, Mina Kim, Juho Kim

Despite the common use of rule-based tools for online content moderation, human moderators still spend a lot of time monitoring them to ensure they work as intended. Based on surveys and interviews with Reddit moderators who use AutoModerator, we identified the main challenges in reducing false positives and false negatives of automated rules: not being able to estimate the actual effect of a rule in advance and having difficulty figuring out how the rules should be updated. To address these issues, we built ModSandbox, a novel virtual sandbox system that detects possible false positives and false negatives of a rule and visualizes which part of the rule is causing issues. We conducted a comparative, between-subject study with online content moderators to evaluate the effect of ModSandbox in improving automated rules. Results show that ModSandbox can support quickly finding possible false positives and false negatives of automated rules and guide moderators to improve them to reduce future errors.

Preview
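
The core loop ModSandbox supports, per the abstract, is estimating a rule’s false positives and false negatives before deployment. As a rough illustration of that idea, the sketch below replays a hypothetical keyword rule over a small set of labeled posts; the rule, posts, and labels are invented for the example and are unrelated to Reddit’s actual AutoModerator syntax.

    import re

    # Hypothetical moderation rule: flag posts matching spammy phrases
    rule = re.compile(r"\bbuy now\b|\bfree money\b", re.IGNORECASE)

    # (post text, is_spam ground truth) pairs, invented for illustration
    labeled_posts = [
        ("Buy now and get rich quick!!!", True),     # spam, caught correctly
        ("If I buy now, is shipping free?", False),  # legitimate, wrongly flagged
        ("Fr33 m0ney, DM me", True),                 # spam the rule misses
        ("Great discussion, thanks!", False),        # legitimate, ignored
    ]

    false_positives = [p for p, spam in labeled_posts if rule.search(p) and not spam]
    false_negatives = [p for p, spam in labeled_posts if not rule.search(p) and spam]
    print("False positives:", false_positives)
    print("False negatives:", false_negatives)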

How Space is Told: Linking Trajectory, Narrative, and Intent in Augmented Reality Storytelling for Cultural Heritage Sites

CHI'23

Jae-eun Shin, Woontack Woo

We report on a qualitative study in which 22 participants created Augmented Reality (AR) stories for outdoor cultural heritage sites. As storytelling is a crucial strategy for AR content aimed at providing meaningful experiences, the emphasis has been on what storytelling does, rather than how it is done, the end user’s needs prioritized over the author’s. To address this imbalance, we identify how recurring patterns in the spatial trajectories and narrative compositions of AR stories for cultural heritage sites are linked to the author’s intent and creative process: While authors tend to bind story arcs tightly to confined trajectories for narrative delivery, the need for spatial exploration results in thematic content mapped loosely onto encompassing trajectories. Based on our analysis, we present design recommendations for site-specific AR storytelling tools that can support authors in delivering their intent while leveraging the placeness of cultural heritage sites as a creative resource.

Preview

AVscript: Accessible Video Editing with Audio-Visual Scripts

CHI'23

Mina Huh, Saelyne Yang, Yi-Hao Peng, Xiang ‘Anthony’ Chen, Young-Ho Kim, Amy Pavel

Sighted and blind and low vision (BLV) creators alike use videos to communicate with broad audiences. Yet, video editing remains inaccessible to BLV creators. Our formative study revealed that current video editing tools make it difficult to access the visual content, assess the visual quality, and efficiently navigate the timeline. We present AVscript, an accessible text-based video editor. AVscript enables users to edit their video using a script that embeds the video’s visual content, visual errors (e.g., dark or blurred footage), and speech. Users can also efficiently navigate between scenes and visual errors or locate objects in the frame or spoken words of interest. A comparison study (N=12) showed that AVscript significantly lowered BLV creators’ mental demands while increasing confidence and independence in video editing. We further demonstrate the potential of AVscript through an exploratory study (N=3) where BLV creators edited their own footage.

Preview

Blaming Humans and Machines: What Shapes People's Reactions to Algorithmic Harm

CHI'23

Gabriel Lima, Nina Grgić-Hlača, Meeyoung Cha

Artificial intelligence (AI) systems can cause harm to people. This research examines how individuals react to such harm through the lens of blame. Building upon research suggesting that people blame AI systems, we investigated how several factors influence people’s reactive attitudes towards machines, designers, and users. The results of three studies (N = 1,153) indicate differences in how blame is attributed to these actors. Whether AI systems were explainable did not impact blame directed at them, their developers, and their users. Considerations about fairness and harmfulness increased blame towards designers and users but had little to no effect on judgments of AI systems. Instead, what determined people’s reactive attitudes towards machines was whether people thought blaming them would be a suitable response to algorithmic harm. We discuss implications, such as how future decisions about including AI systems in the social and moral spheres will shape laypeople’s reactions to AI-caused harm.

"We Speak Visually" : User-generated Icons for Better Video-Mediated Mixed Group Communications Between Deaf and Hearing Participants

CHI'23

Yeon Soo Kim, Hyeonjeong Im, Sunok Lee, Haena Cho, Sangsu Lee

Since the outbreak of the COVID-19 pandemic, videoconferencing technology has been widely adopted as a convenient, powerful, and fundamental tool that has simplified many day-to-day tasks. However, video communication is dependent on audible conversation and can be strenuous for those who are Hard of Hearing. Communication methods used by the Deaf and Hard of Hearing community differ significantly from those used by the hearing community, and a distinct language gap is evident in workspaces that accommodate workers from both groups. Therefore, we integrated users in both groups to explore ways to alleviate obstacles in mixed-group videoconferencing by implementing user-generated icons. A participatory design methodology was employed to investigate how the users overcome language differences. We observed that individuals utilized icons within video-mediated meetings as a universal language to reinforce comprehension. Herein, we present design implications from these findings, along with recommendations for future icon systems to enhance and support mixed-group conversations.

Surch: Enabling Structural Search and Comparison for Surgical Videos

CHI'23

Jeongyeon Kim, Daeun Choi, Nicole Lee, Matt Beane, Juho Kim

Video is an effective medium for learning procedural knowledge, such as surgical techniques. However, learning procedural knowledge through videos remains difficult due to limited access to procedural structures of knowledge (e.g., compositions and ordering of steps) in a large-scale video dataset. We present Surch, a system that enables structural search and comparison of surgical procedures. Surch supports video search based on procedural graphs generated by our clustering workflow capturing latent patterns within surgical procedures. We used vectorization and weighting schemes that characterize the features of procedures, such as recursive structures and unique paths. Surch enhances cross-video comparison by providing video navigation synchronized by surgical steps. Evaluation of the workflow demonstrates the effectiveness and interpretability (Silhouette score = 0.82) of our clustering for surgical learning. A user study with 11 residents shows that our system significantly improves the learning experience and task efficiency of video search and comparison, especially benefiting junior residents.
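
The Silhouette score quoted above measures how well each item sits within its own cluster relative to the nearest other cluster (1.0 means perfectly separated). A minimal sketch of computing it over stand-in “procedure vectors” with scikit-learn follows; the synthetic feature matrix is a placeholder, not the paper’s vectorization or weighting scheme.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    rng = np.random.default_rng(0)
    # 4 groups of 15 hypothetical 16-D procedure vectors around distinct centers
    centers = rng.normal(scale=5.0, size=(4, 16))
    X = np.vstack([c + rng.normal(size=(15, 16)) for c in centers])

    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
    print("Silhouette score:", silhouette_score(X, labels))  # near 1 = well separated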

Love on the spectrum: Toward Inclusive online dating experience of autistic individuals

CHI'23

Dasom Choi, Sung-In Kim, Sunok Lee, Hyunseung Lim, Hee Jeong Yoo, Hwajung Hong

Online dating is a space where autistic individuals can find romantic partners with reduced social demands. Autistic individuals are often expected to adapt their behaviors to the social norms underlying the online dating platform to appear as desirable romantic partners. However, given that their autistic traits can lead them to different expectations of dating, it is uncertain whether conforming their behaviors to the norm will guide them to the person they truly want. In this paper, we explored the perceptions and expectations of autistic adults in online dating through interviews and workshops. We found that autistic people desired to know whether they behaved according to the platform’s norms. Still, they expected to keep their unique characteristics rather than unconditionally conform to the norm. We conclude by providing suggestions for designing inclusive online dating experiences that could foster self-guided decisions of autistic users and embrace their unique characteristics.

Fostering Youth’s Critical Thinking Competency about AI through Exhibition

CHI'23

Sunok Lee, Dasom Choi, Minha Lee, Jonghak Choi, Sangsu Lee

Today’s youth lives in a world deeply intertwined with AI, which has become an integral part of everyday life. For this reason, it is important for youth to critically think about and examine AI to become responsible users in the future. Although recent attempts have educated youth on AI with focus on delivering critical perspectives within a structured curriculum, opportunities to develop critical thinking competencies that can be reflected in their lives must be provided. With this background, we designed an informal learning experience through an AI-related exhibition to cultivate critical thinking competency. To explore changes before and after the exhibition, 23 participants were invited to experience the exhibition. We found that the exhibition can support the youth in relating AI to their lives through critical thinking processes. Our findings suggest implications for designing learning experiences to foster critical thinking competency for better coexistence with AI.

Creator-friendly Algorithms: Behaviors, Challenges, and Design Opportunities in Algorithmic Platforms

CHI'23

Yoonseo Choi, Eun Jeong Kang, Min Kyung Lee, Juho Kim

In many creator economy platforms, algorithms significantly impact creators’ practices and decisions about their creative expression and monetization. Emerging research suggests that the opacity of the algorithm and platform policies often distract creators from their creative endeavors. To study how algorithmic platforms can be more ‘creator-friendly,’ we conducted a mixed-methods study: interviews (N=14) and a participatory design workshop (N=12) with YouTube creators. Through the interviews, we found how creators’ folk theories of the curation algorithm impact their work strategies — whether they choose to work with or against the algorithm — and the associated challenges in the process. In the workshop, creators explored solution ideas to overcome the aforementioned challenges, such as fostering diverse and creative expressions, achieving success as a creator, and motivating creators to continue their job. Based on these findings, we discuss design opportunities for how algorithmic platforms can support and motivate creators to sustain their creative work.

Toward a Multilingual Conversational Agent: Challenges and Expectations of Code-Mixing Multilingual Users

CHI'23

Yunjae Josephine Choi, Minha Lee, Sangsu Lee

Multilingual speakers tend to interleave two or more languages when communicating. This communication strategy is called code-mixing, and it has surged with today’s ever-increasing linguistic and cultural diversity. Because of their communication style, multilinguals who use conversational agents have specific needs and expectations which are currently not being met by conversational systems. While research has been undertaken on code-mixing conversational systems, previous works have rarely focused on the code-mixing users themselves to discover their genuine needs. This work furthers our understanding of the challenges faced by code-mixing users in conversational agent interaction, unveils the key factors that users consider in code-mixing scenarios, and explores expectations that users have for future conversational agents capable of code-mixing. This study discusses the design implications of our findings and provides a guide on how to alleviate the challenges faced by multilingual users and how to improve the conversational agent user experience for multilingual users.

“I Won't Go Speechless”: Design Exploration on a Real-Time Text-To-Speech Speaking Tool for Videoconferencing

CHI'23

Wooseok Kim, Jian Jun, Minha Lee, Sangsu Lee

The COVID-19 pandemic has shifted many business activities to non-face-to-face activities, and videoconferencing has become a new paradigm. However, conference spaces isolated from surrounding interferences are not always readily available. People frequently participate in public places with unexpected crowds or acquaintances, such as cafés, living rooms, and shared offices. These environments have surrounding limitations that potentially cause challenges in speaking up during videoconferencing. To alleviate these issues and support the users in speaking-restrained spatial contexts, we propose a text-to-speech (TTS) speaking tool as a new speaking method to support active videoconferencing participation. We derived the possibility of a TTS speaking tool and investigated the empirical challenges and user expectations of a TTS speaking tool using a technology probe and participatory design methodology. Based on our findings, we discuss the need for a TTS speaking tool and suggest design considerations for its application in videoconferencing.

Charlie and the Semi-Automated Factory: Data-Driven Operator Behavior and Performance Modeling for Human-Machine Collaborative Systems

CHI'23

Eunji Park, Yugyeong Jung, Inyeop Kim, Uichin Lee

A semi-automated manufacturing system that entails human intervention in the middle of the process is a representative collaborative system that requires active interaction between humans and machines. User behavior induced by the operator’s decision-making process greatly impacts system operation and performance in such an environment that requires human-machine collaboration. Although multiple streams of data have been collected from manufacturing machines, there remains untapped room for utilizing machine-generated data to gain a fine-grained understanding of the relationship between the behavior and performance of operators in the industrial domain. In this study, we propose a large-scale data-analysis methodology that comprises data contextualization and performance modeling to understand the relationship between operator behavior and performance. For a case study, we collected machine-generated data over a 6-month period from a highly automated machine in a large tire manufacturing facility. We devised a set of metrics consisting of six human-machine interaction factors and four work environment factors as independent variables, and three performance factors as dependent variables. Our modeling results reveal that the performance variations can be explained by the interaction and work environment factors ($R^2$ = 0.502, 0.356, and 0.500 for the three performance factors, respectively). Finally, we discuss future research directions for the realization of context-aware computing in semi-automated systems by leveraging machine-generated data as a new modality in human-machine collaboration.
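
To make the reported $R^2$ figures concrete: each performance factor is regressed on the ten explanatory metrics (six interaction factors plus four work environment factors), and $R^2$ is the share of performance variance the model explains. The sketch below shows that setup in generic form with scikit-learn; the data are synthetic and the factor values hypothetical, since the paper’s features come from proprietary machine logs.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 10))     # 6 interaction + 4 environment factors (synthetic)
    true_w = rng.normal(size=10)
    y = X @ true_w + rng.normal(scale=1.0, size=200)   # one performance factor

    model = LinearRegression().fit(X, y)
    print("R^2:", r2_score(y, model.predict(X)))       # fraction of variance explained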

How Older Adults Use Online Videos for Learning

CHI'23

Seoyoung Kim, Donghoon Shin, Jeongyeon Kim, Soonwoo Kwon, Juho Kim

Online videos are a promising medium for older adults to learn. Yet, few studies have investigated what, how, and why they learn through online videos. In this study, we investigated older adults’ motivation, watching patterns, and difficulties in using online videos for learning by (1) running interviews with 13 older adults and (2) analyzing large-scale video event logs (N=41.8M) from a Korean Massive Online Open Course (MOOC) platform. Our results show that older adults (1) are motivated to learn practical topics, leading to less consumption of STEM domains than non-older adults, (2) watch videos with less interaction and watch a larger portion of a single video compared to non-older adults, and (3) face various difficulties (e.g., inconvenience arisen due to their unfamiliarity with technologies) that limit their learning through online videos. Based on the findings, we propose design guidelines for online videos and platforms targeted to support older adults’ learning.

Beyond Instructions: A Taxonomy of Information Types in How-to Videos

CHI'23

Saelyne Yang, Sangkyung Kwak, Juhoon Lee, Juho Kim

How-to videos are rich in information: they not only give instructions but also provide justifications or descriptions. People seek different information to meet their needs, and identifying different types of information present in the video can improve access to the desired knowledge. Thus, we present a taxonomy of information types in how-to videos. Through an iterative open coding of 4k sentences in 48 videos, 21 information types under 8 categories emerged. The taxonomy represents diverse information types that instructors provide beyond instructions. We first show how our taxonomy can serve as an analytical framework for video navigation systems. Then, we demonstrate through a user study (n=9) how type-based navigation helps participants locate the information they needed. Finally, we discuss how the taxonomy enables a wide range of video-related tasks, such as video authoring, viewing, and analysis. To allow researchers to build upon our taxonomy, we release a dataset of 120 videos containing 9.9k sentences labeled using the taxonomy.

Potential and Challenges of DIY Smart Homes with an ML-intensive Camera Sensor

CHI'23

Sojeong Yun, Youn-kyung Lim

Sensors and actuators are crucial components of a do-it-yourself (DIY) smart home system that enables users to construct smart home features successfully. In addition, machine learning (ML) (e.g., ML-intensive camera sensors) can be applied to sensor technology to increase its accuracy. Although camera sensors are often utilized in homes, research on user experiences with DIY smart home systems employing camera sensors is still in its infancy. This research investigates novel user experiences while constructing DIY smart home features using an ML-intensive camera sensor in contrast to commonly used internet-of-things (IoT) sensors. Thus, we conducted a seven-day field diary study with 12 families who were given a DIY smart home kit. Here, we assess the five characteristics of the camera sensor as well as the potential and challenges of utilizing the camera sensor in the DIY smart home and discuss the opportunities to address existing DIY smart home issues.

Interactivity

Explore the Future Earth with Wander 2.0: AI Chatbot Driven by Knowledge-base Story Generation and Text-to-image Model

CHI'23

Yuqian Sun, Ying Xu, Chenhang Cheng, Yihua Li, Chang Hee Lee, Ali Asadipour

People always envision the future of earth through science fiction (Sci-fi), so can we create a unique experience of “visiting the future earth” through the lens of artificial intelligence (AI)? We introduce Wander 2.0, an AI chatbot that co-creates sci-fi stories through knowledge-based story generation on daily communication platforms like WeChat and Discord. Using location information from Google Maps, Wander generates narrative travelogues about specific locations (e.g. Paris) through a large-scale language model (LLM). Additionally, using the large-scale text-to-image model (LTGM) Stable Diffusion, Wander transfers future scenes that match both the text description and location photo, facilitating future imagination. The project also includes a real-time visualization of the human-AI collaborations on a future map. Through journeys with visitors from all over the world, Wander demonstrates how AI can serve as a subjective interface linking fiction and reality. Our research shows that multi-modal AI systems have the potential to extend the artistic experience and creative world-building through adaptive and unique content generation for different people. Wander 2.0 is available at http://wander001.com/

Preview

AutomataStage: An Interactive Automata Creating Tool for Hands-on STEAM Learning

CHI'23

Yunwoo Jeong, Hyungjun Cho, Taewan Kim, Tek-Jin Nam

Hands-on STEAM learning requires scattered tools in the digital and physical environment and educational content that can draw attention, interest, and fun. We present AutomataStage, an interactive tool, and Interactive Automata, a learning content. AutomataStage utilizes a video see-through interface and building blocks to actively engage the entire creation process from ideation to visual programming, mechanism simulation, and making. It also provides a hardware see-through feature with which inside parts and circuits can be seen and an operational see-through feature that shows the operation in real-time. A user study shows that AutomataStage enabled the students to create diverse Interactive Automata within 40-minute sessions. See-through features enabled active exploration with interest, while visual programming with a state transition diagram supported the integration. The participants could rapidly learn sensors, motors, mechanisms, and programming by creating Interactive Automata. We discuss the implications of hands-on tools with interactive and kinetic content beyond STEAM education.

Late-Breaking Work

Virtual Trackball on VR Controller: Evaluation of 3D Rotation Methods in Virtual Reality

CHI'23

Sunbum Kim, Geehyuk Lee

Rotating 3D objects is an essential operation in virtual reality (VR). However, efficient rotation methods with the current VR controllers have not been considered extensively yet. Users must repeatedly move their arms and wrists to rotate an object with the current VR controller. We considered utilizing the trackpad available in most VR controllers as a virtual trackball for an efficient rotation method and implemented two types of virtual trackballs (Arcball and Two-axis Valuator) to enable additional rotation using the thumb while holding an object with a VR controller. In this study, we investigated whether a controller with a virtual trackball would be effective for 3D manipulation tasks. The results showed that participants could perform the tasks faster with Arcball but not faster with Two-axis Valuator than with the regular VR controller. Also, most participants preferred Arcball to Two-axis Valuator and felt Arcball more natural than Two-axis Valuator.

Preview
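
The two rotation mappings compared above are standard techniques from desktop 3D interaction. For reference, here is a minimal Python sketch of the Arcball mapping in Shoemake’s classic formulation: a thumb position on the trackpad is projected onto a virtual sphere, and the rotation is the arc between successive sphere points. This is the textbook algorithm, not the authors’ implementation, and the drag coordinates are invented.

    import numpy as np

    def to_sphere(x, y):
        """Project a trackpad point in [-1, 1]^2 onto the unit arcball sphere."""
        d2 = x * x + y * y
        if d2 <= 1.0:
            return np.array([x, y, np.sqrt(1.0 - d2)])  # lands on the sphere
        return np.array([x, y, 0.0]) / np.sqrt(d2)      # outside: clamp to equator

    def arcball_rotation(p0, p1):
        """Axis-angle rotation carrying sphere point p0 to p1."""
        axis = np.cross(p0, p1)
        angle = np.arccos(np.clip(np.dot(p0, p1), -1.0, 1.0))
        return axis, angle

    # Example: a small thumb drag from the pad center to (0.2, 0.1)
    axis, angle = arcball_rotation(to_sphere(0.0, 0.0), to_sphere(0.2, 0.1))
    print(axis, np.degrees(angle))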

QuickRef: Should I Read Cited Papers for Understanding This Paper?

CHI'23

Sangjun Park, Chanhee Lee, Uichin Lee

Researchers spend a lot of time reading scientific papers, as they need to stay updated with recent trends. However, navigating citations, which are indispensable elements of research papers, can act as a barrier for junior researchers who do not have enough background knowledge and experience. We conduct a formative user study to identify challenges in navigating cited papers. We then prototype QuickRef, an interactive reader that provides additional information about cited papers in a side panel. A preliminary user study documents the usability of QuickRef. Further, we present practical design implications for citation navigation support.

Preview

HapticPalmrest: Haptic Feedback through the Palm for the Laptop Keyboard

CHI'23

Jisu Yim, Sangyoon Lee, Geehyuk Lee

Programmable haptic feedback on touchscreen keyboards enriches user experiences but is hard to realize for physical keyboards, because it requires individually augmenting each key with an actuator. As an alternative approach, we propose HapticPalmrest, where haptic feedback for a physical keyboard is delivered to the palms. This is particularly feasible in a laptop environment, where users usually rest their palms while interacting with the keyboard. To verify the feasibility of the approach, we conducted two user studies. The first study showed that at least one palm was on the palmrest for more than 90% of key interaction time. The second study showed that a vibration power of 1.17 g (peak-to-peak) and a duration of 4 ms were sufficient for reliable perception of palmrest vibrations during keyboard interaction. We finally demonstrated the potential of the approach by designing Dwell+ Key, an application that extends the function of each key by enabling timed dwelling operations.

Preview

AEDLE: Designing Drama Therapy Interface for Improving Pragmatic Language Skills of Children with Autism Spectrum Disorder Using AR

CHI'23

Jungin Park, Gahyun Bae, Jueon Park, Seo Kyung Park, Yeon Soo Kim, Sangsu Lee

This research proposes AEDLE, a new interface combining AR with drama therapy — an approved method of improving pragmatic language skills — to offer effective, universal, and accessible language therapy for children with Autism Spectrum Disorder (ASD). People with ASD commonly have impaired pragmatic language and experience difficulty speaking. However, although therapy in childhood is necessary to prevent long-term social isolation due to such constraints, the limited number of therapists makes this difficult. Technology-based therapy can be a solution, but studies on utilizing digital therapy to improve pragmatic language are still insufficient. We conducted a preliminary user study with an ASD child and a therapist to investigate how children with ASD react to drama therapy using AEDLE. We observed that the child actively participated in AEDLE-mediated drama therapy; based on these observations, we recommend design suggestions for AR-based drama therapy and explore various ways to utilize AEDLE.

Preview

Tailoring Interactions: Exploring the Opportune Moment for Remote Computer-mediated Interactions with Home-alone Dogs

CHI'23

Yewon Kim, Taesik Gong, Sung-Ju Lee

We argue for research on identifying opportune moments for remote computer-mediated interactions with home-alone dogs. We analyze the behavior of home-alone pet dogs to find specific situations where positive interaction between the dog and toys is more likely and when the interaction might induce more stress. We highlight the importance of considering the timing of remote interactions with pet dogs and the potential benefits it brings to the effectiveness of the interaction, leading to greater satisfaction and engagement for both the pet and the pet owner.

Preview

Dis/Immersion in Mindfulness Meditation with a Wandering Voice Assistant

CHI'23

Bonhee Ku, Katie Seaborn

Mindfulness meditation is a validated means of helping people manage stress. Voice-based virtual assistants (VAs) in smart speakers, smartphones, and smart environments can assist people in carrying out mindfulness meditation through guided experiences. However, the common fixed location embodiment of VAs makes it difficult to provide intuitive support. In this work, we explored the novel embodiment of a “wandering voice” that is co-located with the user and “moves” with the task. We developed a multi-speaker VA embedded in a yoga mat that changes location along the body according to the meditation experience. We conducted a qualitative user study in two sessions, comparing a typical fixed smart speaker to the wandering VA embodiment. Thick descriptions from interviews with twelve people revealed sometimes simultaneous experiences of immersion and dis-immersion. We offer design implications for “wandering voices” and a new paradigm for VA embodiment that may extend to guidance tasks in other contexts.

Student Game Competition

Glow the Buzz: A VR Puzzle Adventure Game Mainly Played Through Haptic Feedback

CHI'23

Sihyun Jeong, Hyun Ho Yun, Yoonji Lee, Yeeun Han

Virtual Reality (VR) has become a more popular tool, leading to increased demand for various immersive VR games. In addition, haptic technology is gaining attention as it adds a sense of touch to the visually and auditorily dominant Human-Computer Interface (HCI), providing more extended VR experiences. However, most games, including VR games, use haptics as a supplement while depending mostly on visual elements as their main mode of transferring information, because haptic technology for accurately capturing and replicating touch is still in its infancy. To further investigate the potential of haptics, we propose Glow the Buzz, a VR game in which haptic feedback serves as a core element using wearable haptic devices. Our research explores whether haptic stimuli can be a primary form of interaction through iterative playtests of three haptic puzzle designs – rhythm, texture, and direction. The study concludes that haptic technology in VR has potential extendability by proposing a VR haptic puzzle game that cannot be played without haptics and enhances the player’s immersion. Moreover, the study suggests elements that enhance the discriminability of each haptic stimulus when designing haptic puzzles.

Preview

Spatial Chef: A Spatial Transforming VR Game with Full Body Interaction

CHI'23

Yeeun Shin, Yewon Lee, Sungbaek Kim, Soomin Park

How can we play with space? We present Spatial Chef, a spatial cooking game that focuses on interacting with space itself, shifting away from the conventional object interaction of virtual reality (VR) games. This allows players to generate and transform the virtual environment (VE) around them directly. To capture the ambiguity of space, we created a game interface with full-body movement based on the player’s perception of spatial interaction. This was evaluated as easy and intuitive, providing clues for the spatial interaction design. Our user study reveals that manipulating virtual space can lead to unique experiences: Being both a player and an absolute and Experiencing realized fantasy. This suggests the potential of interacting with space as an engaging gameplay mechanic. Spatial Chef proposes turning the VE, typically treated as a passive backdrop, into an active medium that responds to the player’s intentions, creating a fun and novel experience.

Preview

MindTerior: A Mental Healthcare Game with Metaphoric Gamespace and Effective Activities for Mitigating Mild Emotional Difficulties

CHI'23

Ain Lee, Juhyun Lee, Sooyeon Ahn, Youngik Lee

Contemporary people suffer from increasing stress and emotional difficulties, but developing practices that allow them to manage and become aware of their emotional states has been a challenge. MindTerior is a mental health care game developed for people who occasionally experience mild emotional difficulties. The game contains four mechanisms: measuring players’ emotional state, providing game activities that help mitigate certain negative emotions, visualizing players’ emotional state and letting players cultivate the game space with customizable items, and completing game events that educate players on how to cope with certain negative emotions. This set of gameplay can allow players to experience effective positive emotional relaxation and to perform gamified mental health care activities. Playtests showed that projecting players’ emotional state onto a virtual game space helps players be conscious of their emotional state, and that playing gamified activities is helpful for mental health care. Additionally, the game motivated players to practice the equivalent activities in real life.

Preview

Bean Academy: A Music Composition Game for Beginners with Vocal Query Transcription

CHI'23

Jaejun Lee, Hyeyoon Cho, Yonghyun Kim

Bean Academy is a music composition game designed for musically unskilled learners to lower entry barriers to music composition learning, such as music theory comprehension, literacy, and proficiency in utilizing music composition software. As a solution, Bean Academy’s Studio Mode was designed with the adaptation of an auditory-based ‘Vocal Query Transcription (VQT)’ model to enhance learners’ satisfaction and enjoyment in music composition learning. Through the VQT model, players can experience a simple and efficient music composition process by having their recorded voice input transcribed into an actual musical piece. Based on our playtest, a thematic analysis was conducted in two separate experiment groups. Here, we noticed that although Bean Academy does not outperform a current Digital Audio Workstation (DAW) in terms of performance or functionality, it can be highly suitable learning material for musically unskilled learners.

Preview

Workshop

Beyond prototyping boards: future paradigms for electronics toolkits

CHI'23

Andrea Bianchi, Steve Hodges, David Cuartielles, HyunJoo Oh, Mannu Lambrichts, Anne Roudaut

Electronics prototyping platforms such as Arduino enable a wide variety of creators with and without an engineering background to rapidly and inexpensively create interactive prototypes. By opening up the process of prototyping to more creators, and by making it cheaper and quicker, prototyping platforms and toolkits have undoubtedly shaped the HCI community. With this workshop, we aim to understand how recent trends in technology, from reprogrammable digital and analog arrays to printed electronics, and from metamaterials to neurally-inspired processors, might be leveraged in future prototyping platforms and toolkits. Our goal is to go beyond the well-established paradigm of mainstream microcontroller boards, leveraging the more diverse set of technologies that already exist but to date have remained relatively niche. What is the future of electronics prototyping toolkits? How will these tools fit in the current ecosystem? What are the new opportunities for research and commercialization?

Towards Explainable AI Writing Assistants for Non-native English Speakers

CHI'23

Yewon Kim, Mina Lee, Donghwi Kim, Sung-Ju Lee

We highlight the challenges faced by non-native speakers when using AI writing assistants to paraphrase text. Through an interview study with 15 non-native English speakers (NNESs) with varying levels of English proficiency, we observe that they face difficulties in assessing paraphrased texts generated by AI writing assistants, largely due to the lack of explanations accompanying the suggested paraphrases. Furthermore, we examine their strategies to assess AI-generated texts in the absence of such explanations. Drawing on the needs of NNESs identified in our interview, we propose four potential user interfaces to enhance the writing experience of NNESs using AI writing assistants. The proposed designs focus on incorporating explanations to better support NNESs in understanding and evaluating the AI-generated paraphrasing suggestions.

ChatGPT for Moderating Customer Inquiries and Responses to Alleviate Stress and Reduce Emotional Dissonance of Customer Service Representatives

CHI'23

Hyung-Kwon Ko, Kihoon Son, Hyoungwook Jin, Yoonseo Choi, Xiang ‘Anthony’ Chen

Customer service representatives (CSRs) face significant levels of stress as a result of handling disrespectful customer inquiries and the emotional dissonance that arises from concealing their true emotions to provide the best customer experience. To solve this issue, we propose ExGPTer that uses ChatGPT to moderate the tone and manner of a customer inquiry to be more gentle and appropriate, while ensuring that the content remains unchanged. ExGPTer also augments CSRs’ responses to answer customer inquiries, so they can conform to established company protocol while effectively conveying the essential information that customers seek.

LMCanvas: Object-Oriented Interaction to Personalize Large Language Model-Powered Writing Environments

CHI'23

Tae Soo Kim, Arghya Sarkar, Yoonjoo Lee, Minsuk Chang, Juho Kim

Large language models (LLMs) can enhance writing by automating or supporting specific tasks in writers’ workflows (e.g., paraphrasing, creating analogies). Leveraging this capability, a collection of interfaces have been developed that provide LLM-powered tools for specific writing tasks. However, these interfaces provide limited support for writers to create personal tools for their own unique tasks, and may not comprehensively fulfill a writer’s needs—requiring them to continuously switch between interfaces during writing. In this work, we envision LMCanvas, an interface that enables writers to create their own LLM-powered writing tools and arrange their personal writing environment by interacting with “blocks” in a canvas. In this interface, users can create text blocks to encapsulate writing and LLM prompts, model blocks for model parameter configurations, and connect these to create pipeline blocks that output generations. In this workshop paper, we discuss the design for LMCanvas and our plans to develop this concept.
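
As a rough sketch of how such blocks might be modeled (the names and fields below are invented for illustration; the paper describes the interface concept, not this code):

```python
# Invented object model for LMCanvas-style text, model, and pipeline blocks.
from dataclasses import dataclass

@dataclass
class TextBlock:
    text: str                   # encapsulates writing or an LLM prompt

@dataclass
class ModelBlock:
    model: str = "some-llm"     # hypothetical parameter names
    temperature: float = 0.7

@dataclass
class PipelineBlock:
    prompt: TextBlock
    config: ModelBlock

    def run(self, generate) -> str:
        # `generate` is any callable (prompt_text, config) -> generated text
        return generate(self.prompt.text, self.config)

# PipelineBlock(TextBlock("Paraphrase: ..."), ModelBlock()).run(my_generate)
```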

Look Upon Thyself: Understanding the Effect of Self-Reflection on Toxic Behavior in Online Gaming

CHI'23

Juhoon Lee, Jeong-woo Jang, Juho Kim

TBD

Towards an Experience-Centric Paradigm of Online Harassment: Responding to Calling out and Networked Harassment

CHI'23

Haesoo Kim, Juhoon Lee, Juho Kim, Jeong-woo Jang

TBD

The full schedule of presentations at CHI 2023 can also be seen here!

CHI 2022

CHI 2022
DATE
  30 April – 5 May 2022
LOCATION  Online (New Orleans, LA)
 

We are happy to bring good news! At CHI 2022, KAIST records a total of 19 Full Paper publications (with 1 Best Paper and 2 Honorable Mention Awards), 2 Interactivities, 7 Late-Breaking Works, and 4 Student Game Competition works, ranking 5th in the number of publications among all CHI 2022 participating institutions. Congratulations on the outstanding achievement!

KAIST CHI Statistics (2015-2022)
Year    Number of Publications    Rank
2015    9                         14
2016    15                        7
2017    7                         26
2018    21                        8
2019    13                        11
2020    15                        14
2021    22                        4
2022    19                        5

Nation-wide (Korea) CHI Statistics (2015-2022)
Year    Number of Publications    Rank
2015    17                        6
2016    20                        6
2017    16                        11
2018    30                        6
2019    23                        8
2020    29                        7
2021    35                        7
2022    33                        7

For more information and details about the publications that feature in the conference, please refer to the publication list below.

Paper Publications

Mobile-Friendly Content Design for MOOCs: Challenges, Requirements, and Design Opportunities

CHI'22, Best Paper

Jeongyeon Kim, Yubin Choi, Meng Xia, Juho Kim

Most video-based learning content is designed for desktops without considering mobile environments. We (1) investigate the gap between mobile learners’ challenges and video engineers’ considerations using mixed methods and (2) provide design guidelines for creating mobile-friendly MOOC videos. To uncover learners’ challenges, we conducted a survey (n=134) and interviews (n=21), and evaluated the mobile adequacy of current MOOCs by analyzing 41,722 video frames from 101 video lectures. Interview results revealed low readability and situationally-induced impairments as major challenges. The content analysis showed a low guideline compliance rate for key design factors. We then interviewed 11 video production engineers to investigate design factors they mainly consider. The engineers mainly focus on the size and amount of content while lacking consideration for color, complex images, and situationally-induced impairments. Finally, we present and validate guidelines for designing mobile-friendly MOOCs, such as providing adaptive and customizable visual design and context-aware accessibility support.

Stylette: Styling the Web with Natural Language

CHI'22, Honorable Mention

Tae Soo Kim, DaEun Choi, Yoonseo Choi, Juho Kim

End-users can potentially style and customize websites by editing them through in-browser developer tools. Unfortunately, end-users lack the knowledge needed to translate high-level styling goals into low-level code edits. We present Stylette, a browser extension that enables users to change the style of websites by expressing goals in natural language. By interpreting the user’s goal with a large language model and extracting suggestions from our dataset of 1.7 million web components, Stylette generates a palette of CSS properties and values that the user can apply to reach their goal. A comparative study (N=40) showed that Stylette lowered the learning curve, helping participants perform styling changes 35% faster than those using developer tools. By presenting various alternatives for a single goal, the tool helped participants familiarize themselves with CSS through experimentation. Beyond CSS, our work can be expanded to help novices quickly grasp complex software or programming languages.
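
As a toy illustration of the suggestion-retrieval step (the actual system interprets the goal with a large language model and draws on 1.7 million web components; the table and scoring below are invented):

```python
# Invented sketch: score stored CSS suggestions against goal keywords.
suggestions = {
    "bigger text":   [("font-size", "1.25rem"), ("font-weight", "600")],
    "darker header": [("background-color", "#222"), ("color", "#eee")],
}

def palette(goal: str):
    words = set(goal.lower().split())
    scored = [(len(set(key.split()) & words), props)
              for key, props in suggestions.items()]
    scored.sort(reverse=True)
    return scored[0][1] if scored and scored[0][0] > 0 else []

# palette("make the text bigger")  # -> [("font-size", "1.25rem"), ...]
```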

MyDJ: Sensing Food Intakes with an Attachable on Your Eyeglass Frame

CHI'22, Honorable Mention

Jaemin Shin, Seungjoo Lee, Taesik Gong, Hyungjun Yoon, Hyunchul Roh, Andrea Bianchi, Sung-Ju Lee 

Various automated eating detection wearables have been proposed to monitor food intake. While these systems overcome the forgetfulness of manual user journaling, they typically show low accuracy in outside-the-lab environments or have intrusive form factors (e.g., headgear). Eyeglasses are emerging as a socially acceptable eating detection wearable, but existing approaches require custom-built frames and consume considerable power. We propose MyDJ, an eating detection system that can be attached to any eyeglass frame. MyDJ achieves accurate and energy-efficient eating detection by capturing complementary chewing signals on a piezoelectric sensor and an accelerometer. We evaluated the accuracy and wearability of MyDJ with 30 subjects in uncontrolled environments, where six subjects attached MyDJ to their own eyeglasses for a week. Our study shows that MyDJ achieves a 0.919 F1-score in eating episode coverage, with 4.03× the battery time of state-of-the-art systems. In addition, participants reported wearing MyDJ was almost as comfortable (94.95%) as wearing regular eyeglasses.
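
A rough sketch of the sensor-fusion idea, assuming scikit-learn and invented window features (MyDJ’s actual features and classifier are described in the paper):

```python
# Toy chewing-detection features over complementary sensor windows.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(piezo: np.ndarray, accel: np.ndarray) -> np.ndarray:
    # Piezo captures chewing vibration energy; the accelerometer captures
    # rhythmic jaw and head motion. Both are summarized per window.
    return np.array([
        piezo.std(), np.abs(np.diff(piezo)).mean(),
        accel.std(), np.abs(np.diff(accel)).mean(),
    ])

# X = np.stack([window_features(p, a) for p, a in windows])  # y: 1 = eating
# clf = RandomForestClassifier().fit(X, y)
# clf.predict(window_features(piezo_win, accel_win).reshape(1, -1))
```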

Lattice Menu: A Low-Error Gaze-Based Marking Menu Utilizing Target-Assisted Gaze Gestures on a Lattice of Visual Anchors

CHI'22

Taejun Kim, Auejin Ham, Sunggeun Ahn, Geehyuk Lee

We present Lattice Menu, a gaze-based marking menu utilizing a lattice of visual anchors that helps perform accurate gaze pointing for menu item selection. Users who know the location of the desired item can leverage target-assisted gaze gestures for multilevel item selection by looking at visual anchors over the gaze trajectories. Our evaluation showed that Lattice Menu exhibits a considerably low error rate (~1%) and a quick menu selection time (1.3-1.6 s) for expert usage across various menu structures (4 × 4 × 4 and 6 × 6 × 6) and sizes (8, 10 and 12°). In comparison with a traditional gaze-based marking menu that does not utilize visual targets, Lattice Menu showed remarkably (~5 times) fewer menu selection errors for expert usage. In a post-interview, all 12 subjects preferred Lattice Menu, and most subjects (8 out of 12) commented that the provisioning of visual targets facilitated more stable menu selections with reduced eye fatigue.

SpinOcchio: Understanding Haptic-Visual Congruency of Skin-Slip in VR with a Dynamic Grip Controller

CHI'22

Myung Jin Kim, Neung Ryu, Wooje Chang, Michel Pahud, Mike Sinclair, Andrea Bianchi

This paper’s goal is to understand the haptic-visual congruency perception of skin-slip on the fingertips given visual cues in Virtual Reality (VR). We developed SpinOcchio (‘Spin’ for the spinning mechanism used, ‘Occhio’ for the Italian word “eye”), a handheld haptic controller capable of rendering the thickness and slipping of a virtual object pinched between two fingers. This is achieved using a mechanism with spinning and pivoting disks that apply a tangential skin-slip movement to the fingertips. With SpinOcchio, we determined the baseline haptic discrimination threshold for skin-slip, and, using these results, we tested how haptic realism of motion and thickness is perceived with varying visual cues in VR. Surprisingly, the results show that in all cases, visual cues dominate over haptic perception. Based on these results, we suggest applications that leverage skin-slip and grip interaction, contributing further to realistic experiences in VR.

Understanding Emotion Changes in Mobile Experience Sampling

CHI'22

Soowon Kang, Cheul Young Park, Narae Cha, Auk Kim, Uichin Lee

Mobile experience sampling methods (ESMs) are widely used to measure users’ affective states by randomly sending self-report requests. However, this random probing can interrupt users and adversely influence users’ emotional states by inducing disturbance and stress. This work aims to understand how ESMs themselves may compromise the validity of ESM responses and what contextual factors contribute to changes in emotions when users respond to ESMs. Towards this goal, we analyze 2,227 samples of the mobile ESM data collected from 78 participants. Our results show ESM interruptions positively or negatively affected users’ emotional states in at least 38% of ESMs, and the changes in emotions are closely related to the contexts users were in prior to ESMs. Finally, we discuss the implications of using the ESM and possible considerations for mitigating the variability in emotional responses in the context of mobile data collection for affective computing.

Cocomix: Utilizing Comments to Improve Non-Visual Webtoon Accessibility

CHI'22

Mina Huh, YunJung Lee, Dasom Choi, Haesoo Kim, Uran Oh, Juho Kim

Webtoon is a type of digital comic read online, where readers can leave comments to share their thoughts on the story. While it has experienced a surge in popularity internationally, people with visual impairments cannot enjoy webtoons due to the lack of an accessible format. While traditional image description practices can be adopted, the resulting descriptions cannot preserve webtoons’ unique values, such as control over the reading pace and social engagement through comments. To improve the webtoon reading experience for blind and low-vision (BLV) users, we propose Cocomix, an interactive webtoon reader that incorporates comments into the design of novel webtoon interactions. Since comments can identify story highlights and provide additional context, we designed a system that provides 1) comments-based adaptive descriptions with selective access to details and 2) panel-anchored comments for easy access to relevant descriptive comments. Our evaluation (N=12) showed that Cocomix users could adapt the descriptions for various needs and better utilize comments.

“It’s not wrong, but I’m quite disappointed”: Toward an Inclusive Algorithmic Experience for Content Creators with Disabilities

CHI'22

Dasom Choi, Uichin Lee, Hwajung Hong

YouTube is a space where people with disabilities can reach a wider online audience to present what it is like to have a disability. Thus, it is imperative to understand how content creators with disabilities strategically interact with algorithms to draw viewers around the world. However, considering that algorithms carry the risk of making less inclusive decisions for users with disabilities, whether current algorithmic experiences (AXs) on video platforms are inclusive for creators with disabilities is an open question. To address this, we conducted semi-structured interviews with eight YouTubers with disabilities. We found that they aimed to inform the public of diverse representations of disability, which led them to work with algorithms by strategically portraying their disability identities. However, they were disappointed that the way the algorithms work did not sufficiently support their goals. Based on these findings, we suggest implications for designing inclusive AXs that could embrace creators’ subtle needs.

AlgoSolve: Supporting Subgoal Learning in Algorithmic Problem-Solving with Learnersourced Microtasks

CHI'22

Kabdo Choi, Hyungyu Shin, Meng Xia, Juho Kim

Designing solution plans before writing code is critical for successful algorithmic problem-solving. Novices, however, often plan on-the-fly during implementation, resulting in unsuccessful problem-solving due to lack of mental organization of the solution. Research shows that subgoal learning helps learners develop more complete solution plans by enhancing their understanding of the high-level solution structure. However, expert-created materials such as subgoal labels are necessary to provide learning benefits from subgoal learning, which are a scarce resource in self-learning due to limited availability and high cost. We propose a learnersourcing workflow that collects high-quality subgoal labels from learners by helping them improve their label quality. We implemented the workflow into AlgoSolve, a prototype interface that supports subgoal learning for algorithmic problems. A between-subjects study with 63 problem-solving novices revealed that AlgoSolve helped learners create higher-quality labels and more complete solution plans, compared to a baseline method known to be effective in subgoal learning.

FitVid: Responsive and Flexible Video Content Adaptation

CHI'22

Jeongyeon Kim, Yubin Choi, Minsuk Kahng, Juho Kim

Mobile video-based learning attracts many learners with its mobility and ease of access. However, most lectures are designed for desktops. Our formative study reveals mobile learners’ two major needs: more readable content and customizable video design. To support mobile-optimized learning, we present FitVid, a system that provides responsive and customizable video content. Our system consists of (1) an adaptation pipeline that reverse-engineers pixels to retrieve design elements (e.g., text, images) from videos, leveraging deep learning with a custom dataset, which powers (2) a UI that enables resizing, repositioning, and toggling in-video elements. The content adaptation improves the guideline compliance rate by 24% and 8% for word count and font size. The content evaluation study (n=198) shows that the adaptation significantly increases readability and user satisfaction. The user study (n=31) indicates that FitVid significantly improves learning experience, interactivity, and concentration. We discuss design implications for responsive and customizable video adaptation.

Sad or just jealous? Using Experience Sampling to Understand and Detect Negative Affective Experiences on Instagram

CHI'22

Mintra Ruensuk, Taewon Kim, Hwajung Hong, Ian Oakley

Social Network Services (SNSs) evoke diverse affective experiences. While most are positive, many authors have documented both the negative emotions that can result from browsing SNS and their impact: Facebook depression is a common term for the more severe results. However, while the importance of the emotions experienced on SNSs is clear, methods to catalog them, and systems to detect them, are less well developed. Accordingly, this paper reports on two studies using a novel contextually triggered Experience Sampling Method to log surveys immediately after using Instagram, a popular image-based SNS, thus minimizing recall biases. The first study improves our understanding of the emotions experienced while using SNSs. It suggests that common negative experiences relate to appearance comparison and envy. The second study captures smartphone sensor data during Instagram sessions to detect these two emotions, ultimately achieving peak accuracies of 95.78% (binary appearance comparison) and 93.95% (binary envy).

Prediction for Retrospection: Integrating Algorithmic Stress Prediction into Personal Informatics Systems for College Students' Mental Health

CHI'22

Taewan Kim, Haesoo Kim, Ha Yeon Lee, Hwarang Goh, Shakhboz Abdigapporov, Mingon Jeong, Hyunsung Cho, Kyungsik Han, Youngtae Noh, Sung-Ju Lee, Hwajung Hong

Reflecting on stress-related data is critical in addressing one’s mental health. Personal Informatics (PI) systems augmented by algorithms and sensors have become popular ways to help users collect and reflect on data about stress. While prediction algorithms in the PI systems are mainly for diagnostic purposes, few studies examine how the explainability of algorithmic prediction can support user-driven self-insight. To this end, we developed MindScope, an algorithm-assisted stress management system that determines user stress levels and explains how the stress level was computed based on the user’s everyday activities captured by a smartphone. In a 25-day field study conducted with 36 college students, the prediction and explanation supported self-reflection, a process to re-establish preconceptions about stress by identifying stress patterns and recalling past stress levels and patterns that led to coping planning. We discuss the implications of exploiting prediction algorithms that facilitate user-driven retrospection in PI systems.

"It Feels Like Taking a Gamble": Exploring Perceptions, Practices, and Challenges of Using Makeup and Cosmetics for People with Visual Impairments

CHI'22

Franklin Mingzhe Li, Franchesca Spektor, Meng Xia, Mina Huh, Peter Cederberg, Yuqi Gong, Kristen Shinohara, Patrick Carrington

Makeup and cosmetics offer the potential for self-expression and the reshaping of social roles for visually impaired people. However, there exist barriers to conducting a beauty regime because of the reliance on visual information and color variances in makeup. We present a content analysis of 145 YouTube videos to demonstrate visually impaired individuals’ unique practices before, during, and after doing makeup. Based on the makeup practices, we then conducted semi-structured interviews with 12 visually impaired people to discuss their perceptions of and challenges with the makeup process in more depth. Overall, through our findings and discussion, we present novel perceptions of makeup from visually impaired individuals (e.g., broader representations of blindness and beauty). The existing challenges provide opportunities for future research to address learning barriers, insufficient feedback, and physical and environmental barriers, making the experience of doing makeup more accessible to people with visual impairments.

Quantifying Proactive and Reactive Button Input

CHI'22

Hyunchul Kim, Kasper Hornbaek, Byungjoo Lee

When giving input with a button, users follow one of two strategies: (1) react to the output from the computer or (2) proactively act in anticipation of the output from the computer. We propose a technique to quantify reactiveness and proactiveness to determine the degree and characteristics of each input strategy. The technique proposed in this study uses only screen recordings and does not require instrumentation beyond the input logs. The likelihood distribution of the time interval between the button inputs and system outputs, which is uniquely determined for each input strategy, is modeled. Then the probability that each observed input/output pair originates from a specific strategy is estimated along with the parameters of the corresponding likelihood distribution. In two empirical studies, we show how to use the technique to answer questions such as how to design animated transitions and how to predict a player’s score in real-time games.
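
A simplified sketch of the estimation idea: if each input strategy induces its own distribution of input-output intervals, the observed intervals can be fit as a two-component mixture, and each pair’s strategy probability follows from the component responsibilities. Gaussians and EM are used below purely for illustration; the paper derives strategy-specific likelihoods rather than assuming Gaussians.

```python
# Toy two-component mixture fit with EM over input-output intervals.
import numpy as np

def fit_two_strategies(intervals, iters=100):
    x = np.asarray(intervals, dtype=float)
    mu = np.array([x.min(), x.max()])      # crude initialization
    sigma = np.full(2, x.std() + 1e-6)
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each strategy for each interval
        lik = np.stack([
            pi[k] * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2) / sigma[k]
            for k in range(2)
        ])
        resp = lik / (lik.sum(axis=0, keepdims=True) + 1e-12)
        # M-step: re-estimate each strategy's parameters
        for k in range(2):
            w = resp[k]
            mu[k] = (w * x).sum() / w.sum()
            sigma[k] = np.sqrt((w * (x - mu[k]) ** 2).sum() / w.sum()) + 1e-6
            pi[k] = w.mean()
    return mu, sigma, pi, resp   # resp[k, i] = P(strategy k | interval i)
```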

Promptiverse: Scalable Generation of Scaffolding Prompts Through Human-AI Hybrid Knowledge Graph Annotation

CHI'22

Yoonjoo Lee, John Joon Young Chung, Tae Soo Kim, Jean Y Song, Juho Kim

Online learners are hugely diverse, with varying prior knowledge, but most instructional videos online are created to be one-size-fits-all. Thus, learners may struggle to understand the content by only watching the videos. Providing scaffolding prompts can help learners overcome these struggles through questions and hints that relate different concepts in the videos and elicit meaningful learning. However, serving diverse learners would require a spectrum of scaffolding prompts, which incurs high authoring effort. In this work, we introduce Promptiverse, an approach for generating diverse, multi-turn scaffolding prompts at scale, powered by numerous traversal paths over knowledge graphs. To facilitate the construction of the knowledge graphs, we propose a hybrid human-AI annotation tool, Grannotate. In our study (N=24), participants using Promptiverse and Grannotate produced 40 times more on-par-quality prompts, with higher diversity, than hand-designed prompts. Promptiverse presents a model for creating diverse and adaptive learning experiences online.
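
To make the traversal idea concrete, here is a toy sketch with an invented concept graph and prompt template; Promptiverse’s real knowledge graphs and multi-turn templates are far richer.

```python
# Toy traversal: enumerate short paths over a concept graph and turn
# each path into a scaffolding prompt via a template.
graph = {
    "variable": [("is-a", "concept"), ("used-in", "loop")],
    "loop": [("repeats", "statement")],
}

def paths(start, depth=2):
    if depth == 0 or start not in graph:
        yield [start]
        return
    for relation, nxt in graph[start]:
        for tail in paths(nxt, depth - 1):
            yield [start, relation] + tail

def to_prompt(path):
    # One template per relation type would yield more diverse prompts.
    return f"How does '{path[0]}' relate to '{path[2]}' ({path[1]})?"

for p in paths("variable"):
    if len(p) >= 3:
        print(to_prompt(p))
```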

CatchLive: Real-time Summarization of Live Streams with Stream Content and Interaction Data

CHI'22

Saelyne Yang, Jisu Yim, Juho Kim, Hijung Valentina Shin

Live streams usually last several hours with many viewers joining in the middle. Viewers who join in the middle often want to understand what has happened in the stream. However, catching up with the earlier parts is challenging because it is difficult to know which parts are important in the long, unedited stream while also keeping up with the ongoing stream. We present CatchLive, a system that provides a real-time summary of ongoing live streams by utilizing both the stream content and user interaction data. CatchLive provides viewers with an overview of the stream along with summaries of highlight moments with multiple levels of detail in a readable format. Results from deployments of three streams with 67 viewers show that CatchLive helps viewers grasp the overview of the stream, identify important moments, and stay engaged. Our findings provide insights into designing summarizations of live streams reflecting their characteristics.
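
One ingredient of such a summary, sketched under our own assumptions: bursts of viewer interaction (e.g., chat messages) can flag candidate highlight moments. CatchLive combines signals like this with analysis of the stream content itself.

```python
# Toy highlight detector: flag minutes whose chat-message rate spikes
# above the stream's mean by z standard deviations (threshold invented).
import numpy as np

def highlight_minutes(msg_counts_per_min, z: float = 1.5):
    x = np.asarray(msg_counts_per_min, dtype=float)
    scores = (x - x.mean()) / (x.std() + 1e-9)
    return [i for i, s in enumerate(scores) if s > z]

# highlight_minutes(counts)  # -> indices of unusually chatty minutes
```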

A Conversational Approach for Modifying Service Mashups in IoT Environments

CHI'22

Sanghoon Kim, In-Young Ko

Existing conversational approaches for Internet of Things (IoT) service mashup do not support modification because of the usability challenge, although it is common for users to modify the service mashups in IoT environments. To support the modification of IoT service mashups through conversational interfaces in a usable manner, we propose the conversational mashup modification agent (CoMMA). Users can modify IoT service mashups using CoMMA through natural language conversations. CoMMA has a two-step mashup modification interaction, an implicature-based localization step, and a modification step with a disambiguation strategy. The localization step allows users to easily search for a mashup by vocalizing their expressions in the environment. The modification step supports users to modify mashups by speaking simple modification commands. We conducted a user study and the results show that CoMMA is as effective as visual approaches in terms of task completion time and perceived task workload for modifying IoT service mashups.

TaleBrush: Sketching Stories with Generative Pretrained Language Models

CHI'22

John Joon Young Chung, Wooseok Kim, Kang Min Yoo, Hwaran Lee, Eytan Adar, Minsuk Chang

While advanced text generation algorithms (e.g., GPT-3) have enabled writers to co-create stories with an AI, guiding the narrative remains a challenge. Existing systems often leverage simple turn-taking between the writer and the AI in story development. However, writers remain unsupported in intuitively understanding the AI’s actions or steering the iterative generation. We introduce TaleBrush, a generative story ideation tool that uses line sketching interactions with a GPT-based language model for control and sensemaking of a protagonist’s fortune in co-created stories. Our empirical evaluation found our pipeline reliably controls story generation while maintaining the novelty of generated sentences. In a user study with 14 participants with diverse writing experiences, we found participants successfully leveraged sketching to iteratively explore and write stories according to their intentions about the character’s fortune while taking inspiration from generated stories. We conclude with a reflection on how sketching interactions can facilitate the iterative human-AI co-creation process.

Distracting Moments in Videoconferencing: A Look Back at the Pandemic Period

CHI'22

Minha Lee, Wonyoung Park, Sunok Lee, Sangsu Lee

The COVID-19 pandemic has forced workers around the world to switch their working paradigms from on-site to video-mediated communication. Despite the advantages of videoconferencing, diverse circumstances have prevented people from focusing on their work. One of the most typical problems they face is that various surrounding factors distract them during their meetings. This study focuses on conditions in which remote workers are distracted by factors that disturb, interrupt, or restrict them during their meetings. We aim to explore the various problem situations and user needs. To understand users’ pain points and needs, focus group interviews and participatory design workshops were conducted to learn about participants’ troubled working experiences over the past two years and the solutions they expected. Our study provides a unified framework of distracting factors by which to understand causes of poor user experience and reveals valuable implications to improve videoconferencing experiences.

Interactivity

QuadStretch: A Forearm-wearable Multi-dimensional Skin Stretch Display for Immersive VR Haptic Feedback

CHI'22

Youngbo Aram Shim, Taejun Kim, Geehyuk Lee

This demonstration presents QuadStretch, a multidimensional skin stretch display that is worn on the forearm for VR interaction. QuadStretch realizes a light and flexible form factor without a large frame that grounds the device on the arm and provides rich haptic feedback through high expressive performance of stretch modality and various stimulation sites around the forearm. In the demonstration, the presenter lets participants experience six VR interaction scenarios with QuadStretch feedback: Boxing, Pistol, Archery, Slingshot, Wings, and Climbing. In each scenario, the user’s actions are mapped to the skin stretch parameters and fed back, allowing users to experience QuadStretch’s large output space that enables an immersive VR experience.

TaleBrush: Visual Sketching of Story Generation with Pretrained Language Models

CHI'22

John Joon Young Chung, Wooseok Kim, Kang Min Yoo, Hwaran Lee, Eytan Adar, Minsuk Chang

Advancing text generation algorithms (e.g., GPT-3) have led to new kinds of human-AI story co-creation tools. However, it is difficult for authors to guide this generation and understand the relationship between input controls and generated output. In response, we introduce TaleBrush, a GPT-based tool that uses abstract visualizations and sketched inputs. The tool allows writers to draw out the protagonist’s fortune with a simple and expressive interaction. The visualization of the fortune serves both as input control and representation of what the algorithm generated (a story with varying fortune levels). We hope this demonstration leads the community to consider novel controls and sensemaking interactions for human-AI co-creation.

Late-Breaking Work

Effect of Contact Points Feedback on Two-Thumb Touch Typing in Virtual Reality

CHI'22

Jeongmin Son, Sunggeun Ahn, Sunbum Kim, Geehyuk Lee

Two-thumb touch typing (4T) is a touchpad-based text-entry technique also used in virtual reality (VR) systems. However, the performance of 4T in VR is far below that of 4T in a real environment, such as on a smartphone. Unlike “real 4T”, 4T in VR provides virtual cursors representing the thumb positions determined by a position tracker. The virtual cursor positions may differ from the thumb contact points on an input surface. Still, users may regard them as their thumb contact points. In this case, the virtual cursor movements may conflict with the thumb movements perceived by their proprioception and may contribute to typing errors. We hypothesized that virtual cursors accurately representing the contact points of the thumb can improve the performance of 4T in VR. We designed a method to provide accurate contact point feedback, and showed that accurate contact point feedback has a statistically significant positive effect on the speed of 4T in VR.

An Interactive Car Drawing System with Tick'n'Draw for Training Perceptual and Perspective Drawing Skills

CHI'22

Seung-Jun Lee, Joon Hyub Lee, Seok-Hyung Bae

Young children love to draw. However, at around age 10, they begin to feel that their drawings are unrealistic and give up drawing altogether. This study aims to help those who did not receive the proper training in drawing at the time and as a result remain at that level of drawing. First, through 12 drawing workshop sessions, we condensed 2 prominent art education books into 10 core drawing skills. Second, we designed and implemented a novel interactive system that helps the user repeatedly train these skills in the 5 stages of drawing a nice car in an accurate perspective. Our novel interactive technique, Tick’n’Draw, inspired by the drawing habits of experts, provides friendly guidance so that the user does not get lost in the many steps of perceptual and perspective drawing. Third, through a pilot test, we found that our system is quick to learn, easy to use, and can potentially improve real-world drawing abilities with continued use.

Mine Yourself!: A Role-playing Privacy Tutorial in Virtual Reality Environment

CHI'22

Junsu Lim, Hyeonggeun Yun, Auejin Ham, Sunjun Kim

Virtual Reality (VR) carries potential privacy risks, as it collects a wide range of data at high density. Various designs for presenting Privacy Policy (PP) information have improved awareness of and motivation towards privacy risks. However, most of them have focused on desktop environments and do not utilize the full potential of VR’s immersive interactivity. Therefore, we propose a role-playing mechanism that provides an immersive experience of interacting with a PP’s key entities. First, our formative study found that VR users had limited awareness of what data are collected and how to control them. Following this, we implemented a VR privacy tutorial based on our role-playing mechanism and the PPs of off-the-shelf VR devices. Our privacy tutorial raised privacy awareness by a similar amount to a conventional PP while achieving significantly higher satisfaction (p=0.007). The tutorial also showed marginally higher usability (p=0.11).

Exploring the Effects of AI-assisted Emotional Support Processes in Online Mental Health Community

CHI'22

Donghoon Shin, Subeen Park, Esther Hehsun Kim, Soomin Kim, Jinwook Seo, Hwajung Hong

Social support in online mental health communities (OMHCs) is an effective and accessible way of managing mental wellbeing. In this process, sharing emotional supports is considered crucial to the thriving social supports in OMHCs, yet often difficult for both seekers and providers. To support empathetic interactions, we design an AI-infused workflow that allows users to write emotional supporting messages to other users’ posts based on the elicitation of the seeker’s emotion and contextual keywords from writing. Based on a preliminary user study (N = 10), we identified that the system helped seekers to clarify emotion and describe text concretely while writing a post. Providers could also learn how to react empathetically to the post. Based on these results, we suggest design implications for our proposed system.

Virfie: Virtual Group Selfie Station for Remote Togetherness

CHI'22

Hyerin Im, Taewan Kim, Eunhee Jung, bonhee ku, Seungho Baek, Tak Yeon Lee

Selfies have become a prominent means of online communication. Group selfies, in particular, encourage people to represent their identity as part of the group and foster a sense of belonging. During the COVID-19 pandemic, video conferencing systems have been used as a tool for group selfies. However, conventional systems are not ideal for group selfies due to the rigidness of grid-based layouts, information overload, and lack of eye contact. To explore design opportunities and needs for a novel virtual group selfie platform, we conducted a participatory design study and identified three characteristics of virtual group selfie scenarios: “context with narratives”, “interactive group tasks”, and “capturing subtle moments.” We implemented Virfie, a web-based platform that enables users to take group selfies with embodied social interaction, and to create and customize selfie scenarios using a novel JSON specification. In order to validate our design concept and identify usability issues, we conducted a user study. Feedback from the study participants suggests that Virfie is effective at strengthening social interaction and remote togetherness.
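
The abstract does not reproduce the JSON specification itself; the snippet below is an invented example of what a selfie-scenario description in that spirit might contain, written as a Python dict mirroring JSON.

```python
# Invented scenario description; field names are assumptions, not Virfie's.
scenario = {
    "name": "birthday-toast",
    "narrative": "Friends gather to toast the birthday person.",
    "slots": [
        {"role": "host",  "position": [0.5, 0.6], "scale": 1.2},
        {"role": "guest", "position": [0.2, 0.7], "scale": 1.0},
    ],
    "tasks": ["raise your glass", "look at the host"],
    "capture": {"trigger": "all-slots-posed", "countdown_sec": 3},
}
```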

CareMouse: An Interactive Mouse System that Supports Wrist Stretching Exercises in the Workplace

CHI'22

Gyuwon Jung, Youwon Shin, Jieun Lim, Uichin Lee

Knowledge workers suffer from wrist pain due to their long-term mouse and keyboard use. In this study, we present CareMouse, an interactive mouse system that supports wrist stretching exercises in the workplace. When the stretch alarm is given, users hold CareMouse and do the exercises; the system collects wrist movement data and uses a machine learning algorithm to determine whether they follow the correct stretching motions, enabling real-time guidance. We conducted a preliminary user study to understand users’ perceptions and experience of the system. Our results showed the feasibility of CareMouse in guiding stretching exercises interactively. We provide design implications for augmenting existing tools with auxiliary functions.
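
A rough sketch, with scikit-learn and invented features, of how a window of wrist-motion samples might be checked against the guided stretch motions (CareMouse’s actual pipeline is not public):

```python
# Toy stretch-motion check over a window of wrist-motion samples.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def motion_features(samples: np.ndarray) -> np.ndarray:
    # samples: (N, 3) accelerometer-style readings during one stretch
    return np.concatenate([samples.mean(axis=0), samples.std(axis=0)])

# X = np.stack([motion_features(w) for w in windows])  # y: motion labels
# clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
# label = clf.predict(motion_features(new_window).reshape(1, -1))
```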

ThinkWrite: Design Interventions for Empowering User Deliberation in Online Petition

CHI'22

Jini Kim, Chorong Kim, Ki-Young Nam

Online petitions have served as an innovative means of citizen participation over the past decade. However, their original purpose has been waning due to inappropriate language, fabricated information, and the lack of evidence that supports petitions. The lack of deliberation in online petitions has influenced other users, deteriorating the platform to a degree that good petitions are seldom generated. Therefore, this study designs interventions that empower users to create deliberative petitions. We conducted user research to observe users’ writing behavior in online petitions and identified causes of non-deliberative petitions. Based on our findings, we propose ThinkWrite, a new interactive app promoting user deliberation. The app includes six main features: a gamified learning process, a writing recommendation system, a guiding interface for self-construction, tailored AI for self-revision, short-cuts for easy archiving of evidence, and a citizen-collaborative page. Finally, the efficacy of the app is demonstrated through user surveys and in-depth interviews.

Student Game Competition

Play With Your Emotions: Exploring Possibilities of Emotions as Game Input in NERO

CHI'22

Valérie Erb, Tatiana Chibisova, Haesoo Kim, Jeongmi Lee, Young Yim Doh

This work presents NERO, a game concept that uses the player’s active emotional input to map their emotional state to representative in-game characters. Emotional input in games has mainly been used as a passive measure to adjust game difficulty or other variables; the player has not been given the possibility to explore and play with their own emotions as an active feature. Given the high subjectivity of felt emotions, we focused on the player’s experience of emotional input rather than the objective accuracy of the input sensor. We therefore implemented a proof-of-concept game using heart rate as a proxy for emotion measurement, and the game mechanics were revised and evaluated through repeated player tests. Valuable insights for the design of entertainment-focused emotional-input games were gained, including emotional connection despite limited accuracy, the influence of the environment, and the importance of calibration. The players overall enjoyed the novel game experience, and their feedback carries useful implications for future games that include active emotional input.

The Melody of the Mysterious Stones: A VR Mindfulness Game Using Sound Spatialization

CHI'22

Haven Kim, Jaeran Choi, Young Yim Doh, Juhan Nam

The Melody of the Mysterious Stones is a VR meditation game that utilizes spatial audio technologies. One of the most common mindfulness exercises is to notice and observe the five senses, including the sense of sound. As a way of helping players focus on their sense of sound, The Melody of the Mysterious Stones uses spatialized sounds as game elements. Our play tests showed that players enjoyed playing missions with spatial audio elements. They also reported that spatial audio helped them focus on their sense of sound and therefore feel more engaged with the meditation materials.

Evoker: Narrative-based Facial Expression Game for Emotional Development of Adolescents

CHI'22

Seokhyeon Hong, Yeonsoo Choi, Youjin Sung, YURHEE JIN, Young Yim Doh, Jeongmi Lee

Evoker is a narrative-based facial expression game. Due to the COVID-19 pandemic, adolescents must wear masks in their daily lives. However, wearing masks disturbs emotional interaction through facial expressions, a critical component of emotional development, so a negative impact on adolescents’ emotional development is predicted. To address this problem, we designed Evoker, a narrative-based game that uses real-time facial expression recognition. In this game, players are asked to identify an emotion from the narrative context of a mission and make a facial expression appropriate for that context to clear the challenge. Our game provides an opportunity to practice reading emotional contexts and expressing appropriate emotions, and thus has high potential for promoting the emotional development of adolescents.

Classy Trash Monster: An Educational Game for Teaching Machine Learning to Non-major Students

CHI'22

Joonhyung Bae, Karam Eum, Haram Kwon, Seolhee Lee, Juhan Nam, Young Yim Doh

As machine learning (ML) becomes more relevant to our lives, ML education for college students without a technical background has grown in importance. However, few educational games are designed to suit the challenges these students experience. We introduce an educational game, Classy Trash Monster (CTM), designed to better teach ML and data dependency to non-major students learning ML for the first time. Players can easily learn to train a classification model and solve tasks by engaging in simple game activities designed according to an ML pipeline. Simple controls, positive rewards, and clear audiovisual feedback make the game easy to play even for novice players. The playtest results showed that players were able to learn basic ML concepts and how data can impact model results, and that the game made ML feel less difficult and more relevant. However, a proper debriefing session seems crucial to prevent misinterpretations that may occur in the learning process.

CHI 2021

CHI 2021
DATE
  7 May – 17 May 2021
LOCATION  Online (Yokohama, Japan)
 

We are happy to bring good news! At CHI 2021, KAIST records a total of 22 Full Paper publications (with 7 Honorable Mention Awards), ranking 4th in the number of publications among all CHI 2021 participating institutions. Congratulations on the outstanding achievement!

KAIST CHI Statistics (2015-2021)
Year    Number of Publications    Rank
2015    9                         14
2016    15                        7
2017    7                         26
2018    21                        8
2019    13                        11
2020    15                        14
2021    22                        4

Nation-wide (Korea) CHI Statistics (2015-2021)
Year    Number of Publications    Rank
2015    17                        6
2016    20                        6
2017    16                        11
2018    30                        6
2019    23                        8
2020    29                        7
2021    35                        7

For more information and details about the publications that feature in the conference, please refer to the publication list below.
 

CHI 2021 Publications

Designing Metamaterial Cells to Enrich Thermoforming 3D Printed Objects for Post-Print Modification

CHI'21, Honorable Mention

Donghyeon Ko, Jee Bin Yim, Yujin Lee, Jaehoon Pyun, Woohun Lee

In this paper, we present a metamaterial structure called thermoformable cells, TF-Cells, to enrich thermoforming for post-print modification. So far, thermoforming has been of limited use for modifying 3D printed objects due to their low thermal conductivity. TF-Cells consist of beam arrays that readily pass hot air and have high heat transference. By heating the TF-Cells embedded in a printed object, users can modify not only deeper areas of the object’s surface but also its form factor. With a series of technical experiments, we investigated TF-Cells’ thermoformability depending on their structural parameters, orientations, and heating conditions. Next, we present a series of compound cells consisting of TF-Cells and solid structures to adjust stiffness or reduce undesirable shape deformation. Adapting the results from the experiments, we built a simple tool for embedding TF-Cells into a 3D model. Using the tool, we implemented examples in the contexts of mechanical fitting, ergonomic fitting, and aesthetic tuning.

A User-Oriented Approach to Space-Adaptive Augmentation: The Effects of Spatial Affordance on Narrative Experience in an Augmented Reality Detective Game

CHI'21, Honorable Mention

Jae-eun Shin, Boram Yoon, Dooyoung Kim, Woontack Woo

Space-adaptive algorithms aim to effectively align the virtual with the real to provide immersive user experiences for Augmented Reality (AR) content across various physical spaces. While such measures are reliant on real spatial features, efforts to understand those features from the user’s perspective and reflect them in designing adaptive augmented spaces have been lacking. For this, we compared factors of narrative experience in six spatial conditions during the gameplay of Fragments, a space-adaptive AR detective game. Configured by size and furniture layout, each condition afforded disparate degrees of traversability and visibility. Results show that whereas centered furniture clusters are suitable for higher presence in sufficiently large rooms, the same layout leads to lower narrative engagement. Based on our findings, we suggest guidelines that can enhance the effects of space adaptivity by considering how users perceive and navigate augmented space generated from different physical environments.

GamesBond: Bimanual Haptic Illusion of Physically Connected Objects for Immersive VR Using Grip Deformation

CHI'21, Honorable Mention

Neung Ryu, Hye-Young Jo, Michel Pahud, Mike Sinclair, Andrea Bianchi

Virtual Reality experiences, such as games and simulations, typically support the usage of bimanual controllers to interact with virtual objects. To recreate the haptic sensation of holding objects of various shapes and behaviors with both hands, previous researchers have used mechanical linkages between the controllers that render adjustable stiffness. However, the linkage cannot quickly adapt to simulate dynamic objects, nor can it be removed to support free movements. This paper introduces GamesBond, a pair of 4-DoF controllers without a physical linkage but capable of creating the illusion of being connected as a single device, forming a virtual bond. The two controllers work together by dynamically displaying and physically rendering deformations of the hand grips, allowing users to perceive a single connected object between the hands, such as a jumping rope. With a user study and various applications, we show that GamesBond increases the realism, immersion, and enjoyment of bimanual interaction.

AtaTouch: Robust Finger Pinch Detection for a VR Controller Using RF Return Loss

CHI'21, Honorable Mention

Daehwa Kim, Keunwoo Park, Geehyuk Lee

Handheld controllers are an essential part of VR systems. Modern sensing techniques enable them to track users’ finger movements to support natural interaction using hands. The sensing techniques, however, often fail to precisely determine whether two fingertips touch each other, which is important for the robust detection of a pinch gesture. To address this problem, we propose AtaTouch, which is a novel, robust sensing technique for detecting the closure of a finger pinch. It utilizes a change in the coupled impedance of an antenna and human fingers when the thumb and finger form a loop. We implemented a prototype controller in which AtaTouch detects the finger pinch of the grabbing hand. A user test with the prototype showed a finger-touch detection accuracy of 96.4%. Another user test with the scenarios of moving virtual blocks demonstrated low object-drop rate (2.75%) and false-pinch rate (4.40%). The results and feedback from the participants support the robustness and sensitivity of AtaTouch.

ThroughHand: 2D Tactile Interaction to Simultaneously Recognize and Touch Multiple Objects

CHI'21, Honorable Mention

Jingun Jung, Sunmin Son, Sangyoon Lee, Yeonsu Kim, Geehyuk Lee

Users with visual impairments find it difficult to enjoy real-time 2D interactive applications on the touchscreen. Touchscreen applications such as sports games often require simultaneous recognition of and interaction with multiple moving targets through vision. To mitigate this issue, we propose ThroughHand, a novel tactile interaction that enables users with visual impairments to interact with multiple dynamic objects in real time. We designed the ThroughHand interaction to utilize the potential of the human tactile sense that spatially registers both sides of the hand with respect to each other. ThroughHand allows interaction with multiple objects by enabling users to perceive the objects using the palm while providing a touch input space on the back of the same hand. A user study verified that ThroughHand enables users to locate stimuli on the palm with a margin of error of approximately 13 mm and effectively provides a real-time 2D interaction experience for users with visual impairments.

Secrets of Gosu: Understanding Physical Combat Skills of Professional Players in First-Person Shooters

CHI'21, Honorable Mention

Eunji Park, Sangyoon Lee, Auejin Ham, Minyeop Choi, Sunjun Kim, Byungjoo Lee

In first-person shooters (FPS), professional players (a.k.a., Gosu) outperform amateur players. The secrets behind the performance of professional FPS players have been debated in online communities with many conjectures; however, attempts of scientific verification have been limited. We addressed this conundrum through a data-collection study of the gameplay of eight professional and eight amateur players in the commercial FPS Counter-Strike: Global Offensive. The collected data cover behavioral data from six sensors (motion capture, eye tracker, mouse, keyboard, electromyography armband, and pulse sensor) and in-game data (player data and event logs). We examined conjectures in four categories: aiming, character movement, physicality, and device and settings. Only 6 out of 13 conjectures were supported with statistically sufficient evidence.

A Simulation Model of Intermittently Controlled Point-and-Click Behaviour

CHI'21, Honorable Mention

Seungwon Do, Minsuk Chang, Byungjoo Lee

We present a novel simulation model of point-and-click behaviour that is applicable both when a target is stationary and when it is moving. To enable more realistic simulation than existing models, the model proposed in this study takes into account key features of the user and the external environment, such as intermittent motor control, click decision-making, visual perception, upper limb kinematics, and the effect of the input device. The simulated user’s point-and-click behaviour is formulated as a Markov decision process (MDP), and the user’s policy of action is optimised through deep reinforcement learning. As a result, our model successfully and accurately reproduced the trial completion time, distribution of click endpoints, and cursor trajectories of real users. Through an ablation study, we showed how the simulation results change when the model’s sub-modules are individually removed. The implemented model and dataset are publicly available.

Heterogeneous Stroke: Using Unique Vibration Cues to Improve the Wrist-Worn Spatiotemporal Tactile Display

CHI'21

Taejun Kim, Youngbo Aram Shim, Geehyuk Lee

Beyond a simple notification of incoming calls or messages, more complex information such as alphabets and digits can be delivered through spatiotemporal tactile patterns (STPs) on a wrist-worn tactile display (WTD) with multiple tactors. However, owing to the limited skin area and spatial acuity of the wrist, frequent confusions occur between closely located tactors, resulting in a low recognition accuracy. Furthermore, the accuracies reported in previous studies have mostly been measured for a specific posture and could further decrease with free arm postures in real life. Herein, we present Heterogeneous Stroke, a design concept for improving the recognition accuracy of STPs on a WTD. By assigning unique vibrotactile stimuli to each tactor, the confusion between tactors can be reduced. Through our implementation of Heterogeneous Stroke, the alphanumeric characters could be delivered with high accuracy (93.8% for 26 alphabets and 92.4% for 10 digits) across different arm postures.

RubySlippers: Supporting Content-based Voice Navigation for How-to Videos

CHI'21

Minsuk Chang, Mina Huh, Juho Kim

Directly manipulating the timeline, such as scrubbing for thumbnails, is the standard way of controlling how-to videos. However, when how-to videos involve physical activities, people inconveniently alternate between controlling the video and performing the tasks. Adopting a voice user interface allows people to control the video with voice while performing the tasks with their hands. However, naively translating timeline manipulation into a voice user interface (VUI) results in temporal referencing (e.g., “rewind 20 seconds”), which requires a different mental model for navigation and thereby limits users’ ability to peek into the content. We present RubySlippers, a system that supports efficient content-based voice navigation through keyword-based queries. Our computational pipeline automatically detects referenceable elements in the video, and finds the video segmentation that minimizes the number of needed navigational commands. Our evaluation (N=12) shows that participants could perform three representative navigation tasks with fewer commands and less frustration using RubySlippers than the conventional voice-enabled video interface.
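
A toy sketch of keyword-based navigation under our own assumptions: map a spoken query to the video segment whose elements overlap it most. The real pipeline detects referenceable elements automatically and chooses the segmentation that minimizes the number of commands; the segments below are hand-made.

```python
# Invented segments with keyword sets standing in for detected elements.
segments = [
    {"start": 0,   "keywords": {"bowl", "flour", "mix"}},
    {"start": 95,  "keywords": {"oven", "preheat"}},
    {"start": 180, "keywords": {"frosting", "spread"}},
]

def navigate(query: str) -> int:
    words = set(query.lower().split())
    best = max(segments, key=lambda s: len(s["keywords"] & words))
    return best["start"]   # seconds to jump to

# navigate("go to the oven part")  # -> 95
```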

Personalizing Ambience and Illusionary Presence: How People Use “Study with me” Videos to Create Effective Studying Environments

CHI'21

Yoonjoo Lee, John Joon Young Chung, Jean Y. Song, Minsuk Chang, Juho Kim

“Study with me” videos contain footage of people studying for hours, in which social components like conversations or informational content like instructions are absent. Recently, they became increasingly popular on video-sharing platforms. This paper provides the first broad look into what “study with me” videos are and how people use them. We analyzed 30 “study with me” videos and conducted 12 interviews with their viewers to understand their motivation and viewing practices. We identified a three-factor model that explains the mechanism for shaping a satisfactory studying experience in general. One of the factors, a well-suited ambience, was difficult to achieve because of two common challenges: external conditions that prevent studying in study-friendly places and extra cost needed to create a personally desired ambience. We found that the viewers used “study with me” videos to create a personalized ambience at a lower cost, to find controllable peer pressure, and to get emotional support. These findings suggest that the viewers self-regulate their learning through watching “study with me” videos to improve efficiency even when studying alone at home.

Sticky Goals: Understanding Goal Commitments for Behavioral Changes in the Wild

CHI'21

Hyunsoo Lee, Auk Kim, Hwajung Hong, Uichin Lee

A commitment device, an attempt to bind oneself for a successful goal achievement, has been used as an effective strategy to promote behavior change. However, little is known about how commitment devices are used in the wild, and what aspects of commitment devices are related to goal achievements. In this paper, we explore a large-scale dataset from stickK, an online behavior change support system that provides both financial and social commitments. We characterize the patterns of behavior change goals (e.g., topics and commitment setting) and then perform a series of multilevel regression analyses on goal achievements. Our results reveal that successful goal achievements are largely dependent on the configuration of financial and social commitment devices, and a mixed commitment setting is considered beneficial. We discuss how our findings could inform the design of effective commitment devices, and how large-scale data can be leveraged to support data-driven goal elicitation and customization. 

Winder: Linking Speech and Visual Objects to Support Communication in Asynchronous Collaboration

CHI'21

Tae Soo Kim, Seungsu Kim, Yoonseo Choi, Juho Kim

Team members commonly collaborate on visual documents remotely and asynchronously. Particularly, students are frequently restricted to this setting as they often do not share work schedules or physical workspaces. As communication in this setting has delays and limits the main modality to text, members exert more effort to reference document objects and understand others’ intentions. We propose Winder, a Figma plugin that addresses these challenges through linked tapes—multimodal comments of clicks and voice. Bidirectional links between the clicked-on objects and voice recordings facilitate understanding tapes: selecting objects retrieves relevant recordings, and playing recordings highlights related objects. By periodically prompting users to produce tapes, Winder preemptively obtains information to satisfy potential communication needs. Through a five-day study with eight teams of three, we evaluated the system’s impact on teams asynchronously designing graphical user interfaces. Our findings revealed that producing linked tapes could be as lightweight as face-to-face (F2F) interactions while transmitting intentions more precisely than text. Furthermore, with preempted tapes, teammates coordinated tasks and invited members to build on each others’ work.

"Good Enough!": Flexible Goal Achievement with Margin-based Outcome Evaluation

CHI'21

Gyuwon Jung, Jio Oh, Youjin Jung, Juho Sun, Ha-Kyung Kong, Uichin Lee

Traditional goal setting simply assumes a binary outcome for goal evaluation. This binary judgment does not consider a user’s effort, which may demotivate the user. This work explores the possibility of mitigating that negative impact with a slight modification to the goal evaluation criterion: introducing a ‘margin’, a concept widely used for quality control in manufacturing. A margin represents a range near the goal within which the user’s outcome is regarded as ‘good enough’ even if the goal itself is not reached. We explore users’ perceptions and behaviors through a large-scale survey study and a small-scale field experiment using a coaching system that promotes physical activity. Our results provide positive evidence for the margin, such as lowering the burden of goal achievement and increasing motivation to make attempts. We discuss practical design implications of margin-enabled goal setting and evaluation for behavioral change support systems.
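Concretely, the margin turns the usual binary judgment into a three-way one. A minimal sketch of margin-based evaluation follows, with names and numbers that are illustrative rather than taken from the paper:

```typescript
// Margin-based outcome evaluation: outcomes that fall short of the goal but
// land within the margin count as "good enough" instead of failure.
// Illustrative sketch; not the paper's implementation.
type Evaluation = "achieved" | "good enough" | "not achieved";

function evaluateOutcome(goal: number, margin: number, outcome: number): Evaluation {
  if (outcome >= goal) return "achieved";
  if (outcome >= goal - margin) return "good enough"; // within the margin
  return "not achieved";
}

// e.g., a 10,000-step goal with a 1,000-step margin:
console.log(evaluateOutcome(10_000, 1_000, 9_400)); // "good enough"
console.log(evaluateOutcome(10_000, 1_000, 8_200)); // "not achieved"
```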

GoldenTime: Exploring System-Driven Timeboxing and Micro-Financial Incentives for Self-Regulated Phone Use

CHI'21

Joonyoung Park, Hyunsoo Lee, Sangkeun Park, Kyong-Mee Chung, Uichin Lee

User-driven intervention tools such as self-tracking help users self-regulate problematic smartphone use. These tools assume active user engagement, but prior studies warn that such engagement tends to decline over time. This paper proposes GoldenTime, a mobile app that promotes self-regulated usage behavior via system-driven proactive timeboxing and micro-financial incentives framed as either gain or loss for behavioral reinforcement. We conducted a large-scale user study (n = 210) to explore how proactive timeboxing and micro-financial incentives influence users’ smartphone usage behaviors. Our findings show that GoldenTime’s timeboxing-based micro-financial incentives are effective for self-regulating smartphone use, and that incentive framing has a significant impact on user behavior. We provide practical guidelines for designing persuasive technologies that promote digital wellbeing.
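Gain and loss framing pay out the same amount for the same behavior; only the reference point presented to the user differs. A minimal sketch of the two framings, with amounts and names that are illustrative assumptions rather than GoldenTime's actual parameters:

```typescript
// Two framings of the same micro-financial incentive. Illustrative sketch only.
interface TimeboxResult {
  abstained: boolean; // did the user stay off the phone during this timebox?
}

const CENTS_PER_BOX = 5;

// Gain framing: start from zero and earn a reward per successful timebox.
function gainFramedPayout(results: TimeboxResult[]): number {
  return results.filter((r) => r.abstained).length * CENTS_PER_BOX;
}

// Loss framing: start from the full budget and lose the reward per failed timebox.
function lossFramedPayout(results: TimeboxResult[]): number {
  const budget = results.length * CENTS_PER_BOX;
  return budget - results.filter((r) => !r.abstained).length * CENTS_PER_BOX;
}
// Both functions return the same number; only the presentation to the user differs.
```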

Exploring the Use of a Voice-based Conversational Agent to Empower Adolescents with Autism Spectrum Disorder

CHI'21

Inha Cha, Sung-In Kim, Hwajung Hong, Heejeong Yoo, Youn-kyung Lim

Voice-based Conversational Agents (VCAs) have served as personal assistants that support individuals with special needs. Adolescents with Autism Spectrum Disorder (ASD) may also benefit from VCAs in dealing with their everyday needs and challenges, ranging from self-care to social communication. In this study, we explored how VCAs could support adolescents with ASD in navigating various aspects of their daily lives through two weeks of VCA use and a series of participatory design workshops. Our findings demonstrate that VCAs can be an engaging, empowering, and emancipating tool that helps adolescents with ASD address their needs, personalities, and expectations, such as promoting self-care skills, regulating negative emotions, and practicing conversational skills. We propose implications for assistive technology design that uses off-the-shelf technologies as personal assistants for users with ASD, and we suggest design implications for promoting positive opportunities while mitigating the remaining challenges of VCAs for adolescents with ASD.

MomentMeld: AI-augmented Mobile Photographic Memento towards Mutually Stimulatory Inter-generational Interaction

CHI'21

Bumsoo Kang, Seungwoo Kang, Inseok Hwang

ToonNote: Improving Communication in Computational Notebooks Using Interactive Data Comics

CHI'21

DaYe Kang, Tony Ho, Nicolai Marquardt, Bilge Mutlu, Andrea Bianchi

Elevate: A Walkable Pin-Array for Large Shape-Changing Terrains

CHI'21

Seungwoo Je, Hyunseung Lim, Kongpyung Moon, Shan-Yuan Teng, Jas Brooks, Pedro Lopes, Andrea Bianchi

Human Perceptions on Moral Responsibility of AI: A Case Study in AI-Assisted Bail Decision-Making

CHI'21

Gabriel Lima, Nina Grgić-Hlača, Meeyoung Cha

Virtual Camera Layout Generation using a Reference Video

CHI'21

Jung Eun Yoo, Kwanggyoon Seo, Sanghun Park, Jaedong Kim, Dawon Lee, Junyong Noh

Late-Breaking Works

Post-Post-it: A Spatial Ideation System in VR for Overcoming Limitations of Physical Post-it Notes

Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (LBW)

Joon Hyub Lee, Donghyeok Ma, Haena Cho, Seok-Hyung Bae

Post-it notes are great problem-solving tools. However, physical Post-it notes have limitations: surfaces for attaching them can run out, rearranging them can be labor-intensive, and documenting and storing them can be cumbersome. We present Post-Post-it, a novel VR interaction system that overcomes these physical limitations. We derived design requirements from a formative study involving a problem-solving meeting using Post-it notes. Then, through prototyping with physical materials such as Post-it notes, transparent acrylic panels, and masking tape, we designed a set of lifelike VR interactions based on hand gestures that the user can perform easily and intuitively. With our system, the user can create and place Post-it notes in an immersive space large enough to ideate freely; quickly move, copy, or delete many Post-it notes at once; and easily manage the results.

I Can't Talk Now: Speaking with Voice Output Communication Aid Using Text-to-Speech Synthesis During Multiparty Video Conference

Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (LBW)

Wooseok Kim, Sangsu Lee

I want more than 👍: User-generated icons for Better Video-mediated Communications on the Collaborative Design Process

Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (LBW)

Haena Cho, Hyeonjeong Im, Sunok Lee, Sangsu Lee

How the Death-themed Game Spiritfarer Can Help Players Cope with the Loss of a Loved One

Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (LBW)

Karam Eum, Valérie Erb, Subin Lin, Sungpil Wang, Young Yim Doh

“I Don’t Know Exactly but I Know a Little”: Exploring Better Responses of Conversational Agents with Insufficient Information

Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (LBW)

Minha Lee, Sangsu Lee

Bubble Coloring to Visualize the Speech Emotion

Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (LBW)

Qinyue Chen, Yuchun Yan, Hyeon-Jeong Suk

Guideline-Based Evaluation and Design Opportunities for Mobile Video-based Learning

Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (LBW)

Jeongyeon Kim, Juho Kim

Workshops & Symposia

Challenges in Devising Resources for Ethics: What Should We Consider When Designing Toolkits to Tackle AI Ethical Issues for Practitioners?

CHI 2021 (The ACM CHI Conference on Human Factors in Computing Systems 2021) Workshop on Co-designing Ethics.

Inha Cha and Youn-kyung Lim

Artificial Intelligence (AI) technologies have become interwoven with the services and products of our daily lives, and discussions of AI’s social impact are actively taking place. As awareness of AI’s social impact has grown, studies of algorithmic bias and its harms have gained attention, as have efforts to mitigate social bias. One way to address this problem is to support and guide the practitioners who design these technologies. Accordingly, various toolkits and methods have been devised to support practitioners, including checklists, open-source software for detecting algorithmic bias, and game- and activity-based approaches. This paper lays out the pros and cons of these toolkits according to their characteristics. By examining the existing toolkits, we discuss what should be considered when designing toolkits to tackle AI ethical issues.