CHI 2023

DATE      23 April – 28 April 2023
LOCATION  Hamburg, Germany | Hybrid

We are excited to bring good news! At CHI 2023, KAIST recorded a total of 21 Full Paper publications, 6 Late-Breaking Works, 4 Student Game Competition entries, 2 Interactivity demos, and 6 Workshop papers. Congratulations on this outstanding achievement!

For more information and details about the publications featured at the conference, please refer to the publication list below.

Paper Publications

FlowAR: How Different Augmented Reality Visualizations of Online Fitness Videos Support Flow for At-Home Yoga Exercises

CHI'23

Hye-Young Jo, Laurenz Seidel, Michel Pahud, Mike Sinclair, Andrea Bianchi

Online fitness video tutorials are an increasingly popular way to stay fit at home without a personal trainer. However, to keep the screen playing the video in view, users typically disrupt their balance and break the motion flow — two main pillars for the correct execution of yoga poses. While past research partially addressed this problem, these approaches supported only a limited view of the instructor and simple movements. To enable the fluid execution of complex full-body yoga exercises, we propose FlowAR, an augmented reality system for home workouts that shows training video tutorials as always-present virtual static and dynamic overlays around the user. We tested different overlay layouts in a study with 16 participants, using motion capture equipment for baseline performance. Then, we iterated the prototype and tested it in a furnished lab simulating home settings with 12 users. Our results highlight the advantages of different visualizations and the system’s general applicability. 

AutomataStage: An AR-mediated Creativity Support Tool for Hands-on Multidisciplinary Learning

CHI'23

Yunwoo Jeong, Hyungjun Cho, Taewan Kim, Tek-Jin Nam

Creativity support tools can enhance hands-on multidisciplinary learning by drawing interest to the process of creating an outcome. We present AutomataStage, an AR-mediated creativity support tool for hands-on multidisciplinary learning. AutomataStage utilizes a video see-through interface to support the creation of Interactive Automata. The combination of building blocks and low-cost materials increases expressiveness, while the generative design method and one-to-one guide support the idea development process. It also provides a hardware see-through feature, with which internal parts and circuits can be seen, and an operational see-through feature that shows the operation in real time. The visual programming method with a state transition diagram supports iteration throughout the creation process. A user study shows that AutomataStage enabled students to create diverse Interactive Automata within 40-minute sessions. By creating Interactive Automata, participants could learn the basic concepts of the components. See-through features allowed active exploration with interest while integrating the components. We discuss the implications of hands-on tools with interactive and kinetic content beyond multidisciplinary learning.

It is Okay to be Distracted: How Real-time Transcriptions Facilitate Online Meeting with Distraction

CHI'23

Seoyun Son, Junyoung Choi, Sunjae Lee, Jean Y Song, Insik Shin

Online meetings are indispensable in collaborative remote work environments, but they are vulnerable to distractions due to their distributed and location-agnostic nature. While distraction often leads to a decrease in online meeting quality due to loss of engagement and context, natural multitasking has positive tradeoff effects, such as increased productivity within a given time unit. In this study, we investigate the impact of real-time transcriptions (i.e., full-transcripts, summaries, and keywords) as a solution to help facilitate online meetings during distracting moments while still preserving multitasking behaviors. Through two rounds of controlled user studies, we qualitatively and quantitatively show that people can better catch up with the meeting flow and feel less interfered with when using real-time transcriptions. The benefits of real-time transcriptions were more pronounced after distracting activities. Furthermore, we reveal additional impacts of real-time transcriptions (e.g., supporting recalling contents) and suggest design implications for future online meeting platforms where these could be adaptively provided to users with different purposes.

RoutineAid: Externalizing Key Design Elements to Support Daily Routines of Individuals with Autism

CHI'23

Bogoan Kim, Sung-In Kim, Sangwon Park, Hee Jeong Yoo, Hwajung Hong, Kyungsik Han

Implementing structure into our daily lives is critical for maintaining health, productivity, and social and emotional well-being. New norms for routine management have emerged during the current pandemic, and in particular, individuals with autism find it difficult to adapt to those norms. While much research has focused on the use of computer technology to support individuals with autism, little is known about ways of helping them establish and maintain “self-directed” routine structures. In this paper, we identify design requirements for an app that supports four key routine components (i.e., physical activity, diet, mindfulness, and sleep) through a formative study and develop RoutineAid, a gamified smartphone app that reflects these design requirements. The results of a two-month field study on design feasibility highlight two affordances of RoutineAid – the establishment of daily routines by facilitating micro-planning and the maintenance of daily routines through celebratory interactions. We discuss salient design considerations for the future development of daily routine management tools for individuals with autism.

OmniSense: Exploring Novel Input Sensing and Interaction Techniques on Mobile Device with an Omni-Directional Camera

CHI'23

Hui-Shyong Yeo, Erwin Wu, Daewha Kim, Juyoung Lee, Hyung-il Kim, Seo Young Oh, Luna Takagi, Woontack Woo, Hideki Koike, Aaron J Quigley

An omni-directional (360°) camera captures the entire viewing sphere surrounding its optical center. Such cameras are growing in use to create highly immersive content and viewing experiences. When such a camera is held by a user, the view includes the user’s hand grip, finger, body pose, face, and the surrounding environment, providing a complete understanding of the visual world and context around it. This capability opens up numerous possibilities for rich mobile input sensing. In OmniSense, we explore the broad input design space for mobile devices with a built-in omni-directional camera and broadly categorize it into three sensing pillars: i) near device, ii) around device, and iii) surrounding device. We also explore potential use cases and applications that leverage these sensing capabilities to solve user needs. Following this, we develop a working system that puts these concepts into action. We studied the system in a technical evaluation and a preliminary user study to gain initial feedback and insights. Collectively these techniques illustrate how a single, omni-purpose sensor on a mobile device affords many compelling ways to enable expressive input, while also affording a broad range of novel applications that improve user experience during mobile interaction.

DAPIE: Interactive Step-by-Step Explanatory Dialogues to Answer Children’s Why and How Questions

CHI'23

Yoonjoo Lee, Tae Soo Kim, Sungdong Kim, Yohan Yun, Juho Kim

Children acquire an understanding of the world by asking “why” and “how” questions. Conversational agents (CAs) like smart speakers or voice assistants can be promising respondents to children’s questions as they are more readily available than parents or teachers. However, CAs’ answers to “why” and “how” questions are not designed for children, as they can be difficult to understand and provide little interactivity to engage the child. In this work, we propose design guidelines for creating interactive dialogues that promote children’s engagement and help them understand explanations. Applying these guidelines, we propose DAPIE, a system that answers children’s questions through interactive dialogue by employing an AI-based pipeline that automatically transforms existing long-form answers from online sources into such dialogues. A user study (N=16) showed that, with DAPIE, children performed better in an immediate understanding assessment while also reporting higher enjoyment than when explanations were presented sentence-by-sentence.
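
The abstract does not publish DAPIE's pipeline details, so here is a minimal sketch, assuming a generic LLM endpoint, of how a long-form answer might be turned into a step-by-step dialogue. The prompt wording and the `call_llm` stub are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: turn a long-form answer into a child-friendly, step-by-step
# dialogue with an LLM. `call_llm` is a hypothetical stub; swap in a real client.

PROMPT_TEMPLATE = """Rewrite the following answer to a child's question as a \
step-by-step dialogue. After each step, ask the child a short check question. \
Use words a 7-year-old understands.

Question: {question}
Answer: {answer}
Dialogue:"""

def call_llm(prompt: str) -> str:
    """Stub standing in for a real LLM API call."""
    return "Step 1: Plants catch sunlight with their leaves.\nCan you guess what they do with it?"

def answer_to_dialogue(question: str, long_answer: str) -> list[str]:
    """Return the dialogue as a list of turns the agent can speak one by one."""
    raw = call_llm(PROMPT_TEMPLATE.format(question=question, answer=long_answer))
    return [turn.strip() for turn in raw.split("\n") if turn.strip()]

print(answer_to_dialogue("Why are plants green?", "Plants are green because..."))
```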

ModSandbox: Facilitating Online Community Moderation Through Error Prediction and Improvement of Automated Rules

CHI'23

Jean Y Song, Sangwook Lee, Jisoo Lee, Mina Kim, Juho Kim

Despite the common use of rule-based tools for online content moderation, human moderators still spend a lot of time monitoring them to ensure they work as intended. Based on surveys and interviews with Reddit moderators who use AutoModerator, we identified the main challenges in reducing false positives and false negatives of automated rules: not being able to estimate the actual effect of a rule in advance and having difficulty figuring out how the rules should be updated. To address these issues, we built ModSandbox, a novel virtual sandbox system that detects possible false positives and false negatives of a rule and visualizes which part of the rule is causing issues. We conducted a comparative, between-subject study with online content moderators to evaluate the effect of ModSandbox in improving automated rules. Results show that ModSandbox can support quickly finding possible false positives and false negatives of automated rules and guide moderators to improve them to reduce future errors.
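
As a concrete illustration of the core sandbox idea (a sketch under our own assumptions, not the authors' code), the snippet below runs a candidate AutoModerator-style regex rule over posts with known moderation labels and surfaces its false positives and false negatives; the rule and posts are made up.

```python
# Sketch: evaluate a moderation rule against labeled posts to find its errors.
import re

def evaluate_rule(pattern: str, posts: list[tuple[str, bool]]):
    """posts: (text, should_be_removed). Returns (false_positives, false_negatives)."""
    rule = re.compile(pattern, re.IGNORECASE)
    false_positives = [t for t, bad in posts if rule.search(t) and not bad]
    false_negatives = [t for t, bad in posts if not rule.search(t) and bad]
    return false_positives, false_negatives

posts = [
    ("Selling two tickets, DM me", True),
    ("Anyone selling a spare charger?", False),  # rule wrongly removes this
    ("tix for tonight, cash only", True),        # rule misses this
]
fp, fn = evaluate_rule(r"\bselling\b", posts)
print(f"{len(fp)} false positive(s), {len(fn)} false negative(s)")
```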

How Space is Told: Linking Trajectory, Narrative, and Intent in Augmented Reality Storytelling for Cultural Heritage Sites

CHI'23

Jae-eun Shin, Woontack Woo

We report on a qualitative study in which 22 participants created Augmented Reality (AR) stories for outdoor cultural heritage sites. As storytelling is a crucial strategy for AR content aimed at providing meaningful experiences, the emphasis has been on what storytelling does, rather than how it is done, with the end user’s needs prioritized over the author’s. To address this imbalance, we identify how recurring patterns in the spatial trajectories and narrative compositions of AR stories for cultural heritage sites are linked to the author’s intent and creative process: While authors tend to bind story arcs tightly to confined trajectories for narrative delivery, the need for spatial exploration results in thematic content mapped loosely onto encompassing trajectories. Based on our analysis, we present design recommendations for site-specific AR storytelling tools that can support authors in delivering their intent while leveraging the placeness of cultural heritage sites as a creative resource.

AVscript: Accessible Video Editing with Audio-Visual Scripts

CHI'23

Mina Huh, Saelyne Yang, Yi-Hao Peng, Xiang ‘Anthony’ Chen, Young-Ho Kim, Amy Pavel

Sighted and blind and low vision (BLV) creators alike use videos to communicate with broad audiences. Yet, video editing remains inaccessible to BLV creators. Our formative study revealed that current video editing tools make it difficult to access the visual content, assess the visual quality, and efficiently navigate the timeline. We present AVscript, an accessible text-based video editor. AVscript enables users to edit their video using a script that embeds the video’s visual content, visual errors (e.g., dark or blurred footage), and speech. Users can also efficiently navigate between scenes and visual errors or locate objects in the frame or spoken words of interest. A comparison study (N=12) showed that AVscript significantly lowered BLV creators’ mental demands while increasing confidence and independence in video editing. We further demonstrate the potential of AVscript through an exploratory study (N=3) where BLV creators edited their own footage.
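
For a sense of how visual errors like dark or blurred footage can be flagged per frame, here is a minimal OpenCV sketch; the thresholds and the `footage.mp4` filename are illustrative assumptions, and the paper's actual pipeline is not published in this abstract.

```python
# Sketch: flag dark or blurry frames in a video with OpenCV.
import cv2

DARK_THRESHOLD = 40.0    # mean gray level below this => "dark" (assumed value)
BLUR_THRESHOLD = 100.0   # variance of Laplacian below this => "blurry" (assumed)

def frame_issues(frame) -> list[str]:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    issues = []
    if gray.mean() < DARK_THRESHOLD:
        issues.append("dark")
    if cv2.Laplacian(gray, cv2.CV_64F).var() < BLUR_THRESHOLD:
        issues.append("blurry")
    return issues

cap = cv2.VideoCapture("footage.mp4")  # placeholder path
ok, frame = cap.read()
while ok:
    if issues := frame_issues(frame):
        print(f"frame at {cap.get(cv2.CAP_PROP_POS_MSEC):.0f} ms: {issues}")
    ok, frame = cap.read()
cap.release()
```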

Blaming Humans and Machines: What Shapes People's Reactions to Algorithmic Harm

CHI'23

Gabriel Lima, Nina Grgić-Hlača, Meeyoung Cha

Artificial intelligence (AI) systems can cause harm to people. This research examines how individuals react to such harm through the lens of blame. Building upon research suggesting that people blame AI systems, we investigated how several factors influence people’s reactive attitudes towards machines, designers, and users. The results of three studies (N = 1,153) indicate differences in how blame is attributed to these actors. Whether AI systems were explainable did not impact blame directed at them, their developers, and their users. Considerations about fairness and harmfulness increased blame towards designers and users but had little to no effect on judgments of AI systems. Instead, what determined people’s reactive attitudes towards machines was whether people thought blaming them would be a suitable response to algorithmic harm. We discuss implications, such as how future decisions about including AI systems in the social and moral spheres will shape laypeople’s reactions to AI-caused harm.

"We Speak Visually" : User-generated Icons for Better Video-Mediated Mixed Group Communications Between Deaf and Hearing Participants

CHI'23

Yeon Soo Kim, Hyeonjeong Im, Sunok Lee, Haena Cho, Sangsu Lee

Since the outbreak of the COVID-19 pandemic, videoconferencing technology has been widely adopted as a convenient, powerful, and fundamental tool that has simplified many day-to-day tasks. However, video communication is dependent on audible conversation and can be strenuous for those who are Hard of Hearing. Communication methods used by the Deaf and Hard of Hearing community differ significantly from those used by the hearing community, and a distinct language gap is evident in workspaces that accommodate workers from both groups. Therefore, we brought together users from both groups to explore ways to alleviate obstacles in mixed-group videoconferencing by implementing user-generated icons. A participatory design methodology was employed to investigate how the users overcome language differences. We observed that individuals utilized icons within video-mediated meetings as a universal language to reinforce comprehension. Herein, we present design implications from these findings, along with recommendations for future icon systems to enhance and support mixed-group conversations.

Surch: Enabling Structural Search and Comparison for Surgical Videos

CHI'23

Jeongyeon Kim, Daeun Choi, Nicole Lee, Matt Beane, Juho Kim

Video is an effective medium for learning procedural knowledge, such as surgical techniques. However, learning procedural knowledge through videos remains difficult due to limited access to procedural structures of knowledge (e.g., compositions and ordering of steps) in a large-scale video dataset. We present Surch, a system that enables structural search and comparison of surgical procedures. Surch supports video search based on procedural graphs generated by our clustering workflow capturing latent patterns within surgical procedures. We used vectorization and weighting schemes that characterize the features of procedures, such as recursive structures and unique paths. Surch enhances cross-video comparison by providing video navigation synchronized by surgical steps. Evaluation of the workflow demonstrates the effectiveness and interpretability (Silhouette score = 0.82) of our clustering for surgical learning. A user study with 11 residents shows that our system significantly improves the learning experience and task efficiency of video search and comparison, especially benefiting junior residents.
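
The abstract reports clustering interpretability as a Silhouette score of 0.82; the sketch below shows how such a score is computed once procedures are vectorized. The random vectors are stand-ins for the paper's vectorization and weighting schemes, which are not given here.

```python
# Sketch: cluster procedure vectors and report a Silhouette score.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
procedure_vectors = rng.normal(size=(60, 16))  # stand-in for per-video step features

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(procedure_vectors)
print(f"Silhouette score: {silhouette_score(procedure_vectors, labels):.2f}")
```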

Love on the Spectrum: Toward Inclusive Online Dating Experience of Autistic Individuals

CHI'23

Dasom Choi, Sung-In Kim, Sunok Lee, Hyunseung Lim, Hee Jeong Yoo, Hwajung Hong

Online dating is a space where autistic individuals can find romantic partners with reduced social demands. Autistic individuals are often expected to adapt their behaviors to the social norms underlying the online dating platform to appear as desirable romantic partners. However, given that their autistic traits can lead them to different expectations of dating, it is uncertain whether conforming their behaviors to the norm will guide them to the person they truly want. In this paper, we explored the perceptions and expectations of autistic adults in online dating through interviews and workshops. We found that autistic people desired to know whether they behaved according to the platform’s norms. Still, they expected to keep their unique characteristics rather than unconditionally conform to the norm. We conclude by providing suggestions for designing inclusive online dating experiences that could foster self-guided decisions of autistic users and embrace their unique characteristics.

Fostering Youth’s Critical Thinking Competency about AI through Exhibition

CHI'23

Sunok Lee, Dasom Choi, Minha Lee, Jonghak Choi, Sangsu Lee

Today’s youth lives in a world deeply intertwined with AI, which has become an integral part of everyday life. For this reason, it is important for youth to critically think about and examine AI to become responsible users in the future. Although recent attempts have educated youth on AI with focus on delivering critical perspectives within a structured curriculum, opportunities to develop critical thinking competencies that can be reflected in their lives must be provided. With this background, we designed an informal learning experience through an AI-related exhibition to cultivate critical thinking competency. To explore changes before and after the exhibition, 23 participants were invited to experience the exhibition. We found that the exhibition can support the youth in relating AI to their lives through critical thinking processes. Our findings suggest implications for designing learning experiences to foster critical thinking competency for better coexistence with AI.

Creator-friendly Algorithms: Behaviors, Challenges, and Design Opportunities in Algorithmic Platforms

CHI'23

Yeonseo Choi, Eun Jeong Kang, Min Kyung Lee, Juho Kim

In many creator economy platforms, algorithms significantly impact creators’ practices and decisions about their creative expression and monetization. Emerging research suggests that the opacity of the algorithm and platform policies often distract creators from their creative endeavors. To study how algorithmic platforms can be more ‘creator-friendly,’ we conducted a mixed-methods study: interviews (N=14) and a participatory design workshop (N=12) with YouTube creators. Through the interviews, we found how creators’ folk theories of the curation algorithm impact their work strategies — whether they choose to work with or against the algorithm — and the associated challenges in the process. In the workshop, creators explored solution ideas to overcome the aforementioned challenges, such as fostering diverse and creative expressions, achieving success as a creator, and motivating creators to continue their job. Based on these findings, we discuss design opportunities for how algorithmic platforms can support and motivate creators to sustain their creative work.

Toward a Multilingual Conversational Agent: Challenges and Expectations of Code-Mixing Multilingual Users

CHI'23

Yunjae Josephine Choi, Minha Lee, Sangsu Lee

Multilingual speakers tend to interleave two or more languages when communicating. This communication strategy is called code-mixing, and it has surged with today’s ever-increasing linguistic and cultural diversity. Because of their communication style, multilinguals who use conversational agents have specific needs and expectations which are currently not being met by conversational systems. While research has been undertaken on code-mixing conversational systems, previous works have rarely focused on the code-mixing users themselves to discover their genuine needs. This work furthers our understanding of the challenges faced by code-mixing users in conversational agent interaction, unveils the key factors that users consider in code-mixing scenarios, and explores expectations that users have for future conversational agents capable of code-mixing. This study discusses the design implications of our findings and provides a guide on how to alleviate the challenges faced by multilingual users and how to improve the conversational agent user experience for multilingual users.

“I Won't Go Speechless”: Design Exploration on a Real-Time Text-To-Speech Speaking Tool for Videoconferencing

CHI'23

Wooseok Kim, Jian Jun, Minha Lee, Sangsu Lee

The COVID-19 pandemic has shifted many business activities to non-face-to-face activities, and videoconferencing has become a new paradigm. However, conference spaces isolated from surrounding interferences are not always readily available. People frequently participate from public places with unexpected crowds or acquaintances, such as cafés, living rooms, and shared offices. These environments have surrounding limitations that potentially cause challenges in speaking up during videoconferencing. To alleviate these issues and support users in speaking-restrained spatial contexts, we propose a text-to-speech (TTS) speaking tool as a new speaking method to support active videoconferencing participation. We explored the potential of a TTS speaking tool and investigated the empirical challenges and user expectations around it using a technology probe and participatory design methodology. Based on our findings, we discuss the need for a TTS speaking tool and suggest design considerations for its application in videoconferencing.

Charlie and the Semi-Automated Factory: Data-Driven Operator Behavior and Performance Modeling for Human-Machine Collaborative Systems

CHI'23

Eunji Park, Yugyeong Jung, Inyeop Kim, Uichin Lee

A semi-automated manufacturing system that entails human intervention in the middle of the process is a representative collaborative system that requires active interaction between humans and machines. User behavior induced by the operator’s decision-making process greatly impacts system operation and performance in such an environment. While multiple streams of data have been collected from manufacturing machines, there has been untapped potential in utilizing machine-generated data for a fine-grained understanding of the relationship between the behavior and performance of operators in the industrial domain. In this study, we propose a large-scale data-analysis methodology that comprises data contextualization and performance modeling to understand the relationship between operator behavior and performance. For a case study, we collected machine-generated data over a six-month period from a highly automated machine in a large tire manufacturing facility. We devised a set of metrics consisting of six human-machine interaction factors and four work environment factors as independent variables, and three performance factors as dependent variables. Our modeling results reveal that the performance variations can be explained by the interaction and work environment factors (R² = 0.502, 0.356, and 0.500 for the three performance factors, respectively). Finally, we discuss future research directions for the realization of context-aware computing in semi-automated systems by leveraging machine-generated data as a new modality in human-machine collaboration.
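
To make the modeling setup concrete, here is a minimal sketch of regressing one performance factor on the ten interaction and work-environment factors and reporting R², as the abstract does; the data and column split are simulated placeholders, not the paper's metrics.

```python
# Sketch: linear model of one performance factor from behavior/environment factors.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))  # 6 interaction + 4 work-environment factors
y = X @ rng.normal(size=10) + rng.normal(scale=1.0, size=500)  # one performance factor

model = LinearRegression().fit(X, y)
print(f"R^2 = {model.score(X, y):.3f}")  # the paper reports 0.502, 0.356, 0.500
```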

How Older Adults Use Online Videos for Learning

CHI'23

Seoyoung Kim, Donghoon Shin, Jeongyeon Kim, Soonwoo Kwon, Juho Kim

Online videos are a promising medium for older adults to learn. Yet, few studies have investigated what, how, and why they learn through online videos. In this study, we investigated older adults’ motivation, watching patterns, and difficulties in using online videos for learning by (1) running interviews with 13 older adults and (2) analyzing large-scale video event logs (N=41.8M) from a Korean Massive Open Online Course (MOOC) platform. Our results show that older adults (1) are motivated to learn practical topics, leading to less consumption of STEM domains than non-older adults, (2) watch videos with less interaction and watch a larger portion of a single video compared to non-older adults, and (3) face various difficulties (e.g., inconvenience arising from their unfamiliarity with technologies) that limit their learning through online videos. Based on the findings, we propose design guidelines for online videos and platforms targeted to support older adults’ learning.

Beyond Instructions: A Taxonomy of Information Types in How-to Videos

CHI'23

Saelyne Yang, Sangkyung Kwak, Juhoon Lee, Juho Kim

How-to videos are rich in information: they not only give instructions but also provide justifications or descriptions. People seek different information to meet their needs, and identifying different types of information present in the video can improve access to the desired knowledge. Thus, we present a taxonomy of information types in how-to videos. Through an iterative open coding of 4k sentences in 48 videos, 21 information types under 8 categories emerged. The taxonomy represents diverse information types that instructors provide beyond instructions. We first show how our taxonomy can serve as an analytical framework for video navigation systems. Then, we demonstrate through a user study (n=9) how type-based navigation helps participants locate the information they needed. Finally, we discuss how the taxonomy enables a wide range of video-related tasks, such as video authoring, viewing, and analysis. To allow researchers to build upon our taxonomy, we release a dataset of 120 videos containing 9.9k sentences labeled using the taxonomy.

Potential and Challenges of DIY Smart Homes with an ML-intensive Camera Sensor

CHI'23

Sojeong Yun, Youn-kyung Lim

Sensors and actuators are crucial components of a do-it-yourself (DIY) smart home system that enables users to construct smart home features successfully. In addition, machine learning (ML) (e.g., ML-intensive camera sensors) can be applied to sensor technology to increase its accuracy. Although camera sensors are often utilized in homes, research on user experiences with DIY smart home systems employing camera sensors is still in its infancy. This research investigates novel user experiences in constructing DIY smart home features with an ML-intensive camera sensor, in contrast to commonly used internet-of-things (IoT) sensors. We conducted a seven-day field diary study with 12 families who were given a DIY smart home kit. We identify five characteristics of the camera sensor, assess the potential and challenges of utilizing it in DIY smart homes, and discuss opportunities to address existing DIY smart home issues.

Interactivity

Explore the Future Earth with Wander 2.0: AI Chatbot Driven by Knowledge-base Story Generation and Text-to-image Model

CHI'23

Yuqian Sun, Ying Xu, Chenhang Cheng, Yihua Li, Chang Hee Lee, Ali Asadipour

People have always envisioned the future of Earth through science fiction (sci-fi), so can we create a unique experience of “visiting the future Earth” through the lens of artificial intelligence (AI)? We introduce Wander 2.0, an AI chatbot that co-creates sci-fi stories through knowledge-based story generation on daily communication platforms like WeChat and Discord. Using location information from Google Maps, Wander generates narrative travelogues about specific locations (e.g., Paris) through a large-scale language model (LLM). Additionally, using the large-scale text-to-image model (LTGM) Stable Diffusion, Wander generates future scenes that match both the text description and the location photo, facilitating future imagination. The project also includes a real-time visualization of the human-AI collaborations on a future map. Through journeys with visitors from all over the world, Wander demonstrates how AI can serve as a subjective interface linking fiction and reality. Our research shows that multi-modal AI systems have the potential to extend the artistic experience and creative world-building through adaptive and unique content generation for different people. Wander 2.0 is available at http://wander001.com/
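
Below is a minimal sketch of the two-model pipeline the abstract outlines, assuming a generic LLM stub for the travelogue and the standard diffusers text-to-image API for the scene; the model choice, prompts, and output path are illustrative, not the project's actual code.

```python
# Sketch: LLM writes a sci-fi travelogue for a location; Stable Diffusion
# renders a matching future scene. `call_llm` is a hypothetical stub.
from diffusers import StableDiffusionPipeline

def call_llm(prompt: str) -> str:
    """Stub standing in for a real LLM API call."""
    return "Neon gardens float above the Seine, tended by quiet machines."

location = "Paris"
travelogue = call_llm(f"Write a short sci-fi travelogue about future {location}.")

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
image = pipe(f"{location} in the far future: {travelogue}").images[0]
image.save("future_paris.png")
```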

AutomataStage: An Interactive Automata Creating Tool for Hands-on STEAM Learning

CHI'23

Yunwoo Jeong, Hyungjun Cho, Taewan Kim, Tek-Jin Nam

Hands-on STEAM learning requires scattered tools in the digital and physical environment and educational content that can draw attention, interest, and fun. We present AutomataStage, an interactive tool, and Interactive Automata, a learning content. AutomataStage utilizes a video see-through interface and building blocks to actively engage users in the entire creation process, from ideation to visual programming, mechanism simulation, and making. It also provides a hardware see-through feature, with which internal parts and circuits can be seen, and an operational see-through feature that shows the operation in real time. A user study shows that AutomataStage enabled students to create diverse Interactive Automata within 40-minute sessions. See-through features enabled active exploration with interest, while visual programming with a state transition diagram supported the integration. The participants could rapidly learn sensors, motors, mechanisms, and programming by creating Interactive Automata. We discuss the implications of hands-on tools with interactive and kinetic content beyond STEAM education.

Late-Breaking Work

Virtual Trackball on VR Controller: Evaluation of 3D Rotation Methods in Virtual Reality

CHI'23

Sunbum Kim, Geehyuk Lee

Rotating 3D objects is an essential operation in virtual reality (VR). However, efficient rotation methods with current VR controllers have not yet been considered extensively. Users must repeatedly move their arms and wrists to rotate an object with a current VR controller. We considered utilizing the trackpad available in most VR controllers as a virtual trackball for an efficient rotation method and implemented two types of virtual trackballs (Arcball and Two-axis Valuator) to enable additional rotation using the thumb while holding an object with a VR controller. In this study, we investigated whether a controller with a virtual trackball would be effective for 3D manipulation tasks. The results showed that participants could perform the tasks faster with Arcball, but not with Two-axis Valuator, than with the regular VR controller. Also, most participants preferred Arcball to Two-axis Valuator and felt that Arcball was more natural.
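
For reference, the classic Arcball mapping that such a virtual trackball builds on lifts a 2D trackpad point onto a unit sphere and turns a drag into a 3D rotation; the sketch below is a generic textbook implementation (Shoemake's arcball), not the authors' code.

```python
# Sketch: map 2D trackpad drags to 3D rotations via the Arcball technique.
import numpy as np

def to_sphere(x: float, y: float) -> np.ndarray:
    """Lift a trackpad point in [-1, 1]^2 onto the unit sphere."""
    d2 = x * x + y * y
    if d2 <= 1.0:
        return np.array([x, y, np.sqrt(1.0 - d2)])
    n = np.sqrt(d2)
    return np.array([x / n, y / n, 0.0])  # outside the ball: project to the rim

def drag_rotation(p0, p1):
    """Axis-angle rotation taking sphere point p0 to p1."""
    a, b = to_sphere(*p0), to_sphere(*p1)
    axis = np.cross(a, b)
    angle = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    return axis, angle

axis, angle = drag_rotation((0.0, 0.0), (0.5, 0.2))
print(f"rotate {np.degrees(angle):.1f} deg about axis {axis.round(3)}")
```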

QuickRef: Should I Read Cited Papers for Understanding This Paper?

CHI'23

Sangjun Park, Chanhee Lee, Uichin Lee

Researchers spend a lot of time reading scientific papers to stay updated with recent trends. However, navigating citations, which are indispensable elements of research papers, can be a barrier for junior researchers who do not have enough background knowledge and experience. We conducted a formative user study to identify challenges in navigating cited papers. We then prototyped QuickRef, an interactive reader that provides additional information about cited papers in a side panel. A preliminary user study documents the usability of QuickRef. Further, we present practical design implications for citation navigation support.

HapticPalmrest: Haptic Feedback through the Palm for the Laptop Keyboard

CHI'23

Jisu Yim, Sangyoon Lee, Geehyuk Lee

Programmable haptic feedback on touchscreen keyboards enriches user experiences but is hard to realize for physical keyboards, because this requires individually augmenting each key with an actuator. As an alternative approach, we propose HapticPalmrest, where haptic feedback for a physical keyboard is provided to the palms. This is particularly feasible in a laptop environment, where users usually rest their palms while interacting with the keyboard. To verify the feasibility of the approach, we conducted two user studies. The first study showed that at least one palm was on the palmrest for more than 90% of key interaction time. The second study showed that a vibration power of 1.17 g (peak-to-peak) and a duration of 4 ms were sufficient for reliable perception of palmrest vibrations during keyboard interaction. We finally demonstrated the potential of such an approach by designing Dwell+ Key, an application that extends the function of each key by enabling timed dwelling operations.
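
To make the reported pulse parameters concrete, the sketch below synthesizes a 4 ms vibration waveform; the 250 Hz carrier frequency and sample rate are assumptions for illustration, since the abstract specifies only amplitude (1.17 g peak-to-peak) and duration.

```python
# Sketch: synthesize a short vibration pulse of the reported duration.
import numpy as np

SAMPLE_RATE = 48_000  # Hz (assumed)
DURATION = 0.004      # 4 ms, as in the study
FREQ = 250            # Hz, assumed carrier frequency

t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE
pulse = np.sin(2 * np.pi * FREQ * t)  # normalized; scale to drive the actuator
print(f"{len(pulse)} samples, {DURATION * 1000:.0f} ms pulse")
```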

AEDLE: Designing Drama Therapy Interface for Improving Pragmatic Language Skills of Children with Autism Spectrum Disorder Using AR

CHI'23

Jungin Park, Gahyun Bae, Jueon Park, Seo Kyung Park, Yeon Soo Kim, Sangsu Lee

This research proposes AEDLE, a new interface combining AR with drama therapy, an approved method of improving pragmatic language skills, to offer effective, universal, and accessible language therapy for children with Autism Spectrum Disorder (ASD). People with ASD commonly have impairments in pragmatic language and experience difficulty speaking. However, although therapy in childhood is necessary to prevent long-term social isolation due to such constraints, the limited number of therapists makes this difficult. Technology-based therapy can be a solution, but studies on utilizing digital therapy to improve pragmatic language are still insufficient. We conducted a preliminary user study with an ASD child and a therapist to investigate how the child reacts to drama therapy using AEDLE. We observed that the child actively participated in AEDLE-mediated drama therapy; based on these insights, we recommend design suggestions for AR-based drama therapy and explore various ways to utilize AEDLE.

Tailoring Interactions: Exploring the Opportune Moment for Remote Computer-mediated Interactions with Home-alone Dogs

CHI'23

Yewon Kim, Taesik Gong, Sung-Ju Lee

We argue for research on identifying opportune moments for remote computer-mediated interactions with home-alone dogs. We analyze the behavior of home-alone pet dogs to find specific situations where positive interaction between the dog and toys is more likely and when the interaction might induce more stress. We highlight the importance of considering the timing of remote interactions with pet dogs and the potential benefits it brings to the effectiveness of the interaction, leading to greater satisfaction and engagement for both the pet and the pet owner.

Dis/Immersion in Mindfulness Meditation with a Wandering Voice Assistant

CHI'23

Bonhee Ku, Katie Seaborn

Mindfulness meditation is a validated means of helping people manage stress. Voice-based virtual assistants (VAs) in smart speakers, smartphones, and smart environments can assist people in carrying out mindfulness meditation through guided experiences. However, the common fixed location embodiment of VAs makes it difficult to provide intuitive support. In this work, we explored the novel embodiment of a “wandering voice” that is co-located with the user and “moves” with the task. We developed a multi-speaker VA embedded in a yoga mat that changes location along the body according to the meditation experience. We conducted a qualitative user study in two sessions, comparing a typical fixed smart speaker to the wandering VA embodiment. Thick descriptions from interviews with twelve people revealed sometimes simultaneous experiences of immersion and dis-immersion. We offer design implications for “wandering voices” and a new paradigm for VA embodiment that may extend to guidance tasks in other contexts.

Student Game Competition

Glow the Buzz: A VR Puzzle Adventure Game Mainly Played Through Haptic Feedback

CHI'23

Sihyun Jeong, Hyun Ho Yun, Yoonji Lee, Yeeun Han

Virtual Reality (VR) has become a popular tool, leading to increased demand for immersive VR games. In addition, haptic technology is gaining attention as it adds a sense of touch to the visually and auditorily dominant human-computer interfaces, providing more extended VR experiences. However, most games, including VR games, use haptics as a supplement while depending mostly on visual elements as their main mode of transferring information, because haptic technology for accurately capturing and replicating touch is still in its infancy. To further investigate the potential of haptics, we propose Glow the Buzz, a VR game in which haptic feedback serves as a core element, using wearable haptic devices. Our research explores whether haptic stimuli can be a primary form of interaction through iterative playtests of three haptic puzzle designs: rhythm, texture, and direction. The study concludes that haptic technology in VR has potential extendability by proposing a VR haptic puzzle game that cannot be played without haptics and enhances the player’s immersion. Moreover, the study suggests elements that enhance each haptic stimulus’s discriminability when designing haptic puzzles.

Spatial Chef: A Spatial Transforming VR Game with Full Body Interaction

CHI'23

Yeeun Shin, Yewon Lee, Sungbaek Kim, Soomin Park

How can we play with space? We present Spatial Chef, a spatial cooking game that focuses on interacting with space itself, shifting away from the conventional object interaction of virtual reality (VR) games. This allows players to generate and transform the virtual environment (VE) around them directly. To capture the ambiguity of space, we created a game interface with full-body movement based on the player’s perception of spatial interaction. This was evaluated as easy and intuitive, providing clues for the spatial interaction design. Our user study reveals that manipulating virtual space can lead to unique experiences: “being both a player and an absolute” and “experiencing realized fantasy.” This suggests the potential of interacting with space as an engaging gameplay mechanic. Spatial Chef proposes turning the VE, typically treated as a passive backdrop, into an active medium that responds to the player’s intentions, creating a fun and novel experience.

MindTerior: A Mental Healthcare Game with Metaphoric Gamespace and Effective Activities for Mitigating Mild Emotional Difficulties

CHI'23

Ain Lee, Juhyun Lee, Sooyeon Ahn, Youngik Lee

People today suffer from more stress and emotional difficulties than ever, yet developing practices that allow them to manage and become aware of their emotional states remains a challenge. MindTerior is a mental health care game developed for people who occasionally experience mild emotional difficulties. The game contains four mechanisms: measuring players’ emotional state, providing game activities that help mitigate certain negative emotions, visualizing players’ emotional state and letting players cultivate the game space with customizable items, and completing game events that educate players on how to cope with certain negative emotions. Together, these allow players to experience effective positive emotional relaxation and to perform gamified mental health care activities. Playtests showed that projecting players’ emotional state onto a virtual game space helps players be conscious of their emotional state, and that playing gamified activities is helpful for mental health care. Additionally, the game motivated players to practice the equivalent activities in real life.

Bean Academy: A Music Composition Game for Beginners with Vocal Query Transcription

CHI'23

Jaejun Lee, Hyeyoon Cho, Yonghyun Kim

Bean Academy is a music composition game designed for musically unskilled learners to lower entry barriers to music composition learning, such as music theory comprehension and literacy and proficiency in utilizing music composition software. As a solution, Bean Academy’s Studio Mode was designed around an auditory-based Vocal Query Transcription (VQT) model to enhance learners’ satisfaction and enjoyment in music composition learning. Through the VQT model, players can experience a simple and efficient music composition process in which their recorded voice input is transcribed into an actual musical piece. Based on our playtest, a thematic analysis was conducted across two separate experiment groups. We observed that although Bean Academy does not outperform a current Digital Audio Workstation (DAW) in terms of performance or functionality, it can be considered suitable learning material for musically unskilled learners.

Workshop

Beyond prototyping boards: future paradigms for electronics toolkits

CHI'23

Andrea Bianchi, Steve Hodges, David J. Cuartielles, HyunJoo Oh, Mannu Lambrichts, Anne Roudaut

Electronics prototyping platforms such as Arduino enable a wide variety of creators with and without an engineering background to rapidly and inexpensively create interactive prototypes. By opening up the process of prototyping to more creators, and by making it cheaper and quicker, prototyping platforms and toolkits have undoubtedly shaped the HCI community. With this workshop, we aim to understand how recent trends in technology, from reprogrammable digital and analog arrays to printed electronics, and from metamaterials to neurally-inspired processors, might be leveraged in future prototyping platforms and toolkits. Our goal is to go beyond the well-established paradigm of mainstream microcontroller boards, leveraging the more diverse set of technologies that already exist but to date have remained relatively niche. What is the future of electronics prototyping toolkits? How will these tools fit in the current ecosystem? What are the new opportunities for research and commercialization?

Towards Explainable AI Writing Assistants for Non-native English Speakers

CHI'23

Yewon Kim, Mina Lee, Donghwi Kim, Sung-Ju Lee

We highlight the challenges faced by non-native speakers when using AI writing assistants to paraphrase text. Through an interview study with 15 non-native English speakers (NNESs) with varying levels of English proficiency, we observe that they face difficulties in assessing paraphrased texts generated by AI writing assistants, largely due to the lack of explanations accompanying the suggested paraphrases. Furthermore, we examine their strategies to assess AI-generated texts in the absence of such explanations. Drawing on the needs of NNESs identified in our interview, we propose four potential user interfaces to enhance the writing experience of NNESs using AI writing assistants. The proposed designs focus on incorporating explanations to better support NNESs in understanding and evaluating the AI-generated paraphrasing suggestions.

ChatGPT for Moderating Customer Inquiries and Responses to Alleviate Stress and Reduce Emotional Dissonance of Customer Service Representatives

CHI'23

Hyung-Kwon Ko, Kihoon Son, Hyoungwook Jin, Yoonseo Choi, Xiang ‘Anthony’ Chen

Customer service representatives (CSRs) face significant levels of stress as a result of handling disrespectful customer inquiries and the emotional dissonance that arises from concealing their true emotions to provide the best customer experience. To address this issue, we propose ExGPTer, which uses ChatGPT to moderate the tone and manner of a customer inquiry to be gentler and more appropriate, while ensuring that the content remains unchanged. ExGPTer also augments CSRs’ responses to customer inquiries, so they can conform to established company protocol while effectively conveying the essential information that customers seek.

LMCanvas: Object-Oriented Interaction to Personalize Large Language Model-Powered Writing Environments

CHI'23

Tae Soo Kim, Arghya Sarkar, Yoonjoo Lee, Minsuk Chang, Juho Kim

Large language models (LLMs) can enhance writing by automating or supporting specific tasks in writers’ workflows (e.g., paraphrasing, creating analogies). Leveraging this capability, a collection of interfaces have been developed that provide LLM-powered tools for specific writing tasks. However, these interfaces provide limited support for writers to create personal tools for their own unique tasks, and may not comprehensively fulfill a writer’s needs—requiring them to continuously switch between interfaces during writing. In this work, we envision LMCanvas, an interface that enables writers to create their own LLM-powered writing tools and arrange their personal writing environment by interacting with “blocks” in a canvas. In this interface, users can create text blocks to encapsulate writing and LLM prompts, model blocks for model parameter configurations, and connect these to create pipeline blocks that output generations. In this workshop paper, we discuss the design for LMCanvas and our plans to develop this concept.
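
One plausible realization of the block model described above, sketched under our own assumptions (the paper's interface is a visual canvas, not this API): text blocks hold prompts, a model block holds parameters, and a pipeline block wires them into a generation.

```python
# Sketch: LMCanvas-style blocks wired into a generation pipeline.
class TextBlock:
    def __init__(self, text: str):
        self.text = text

class ModelBlock:
    def __init__(self, temperature: float = 0.7):
        self.temperature = temperature

    def generate(self, prompt: str) -> str:
        # Stub standing in for a real LLM API call.
        return f"[generation for {prompt!r} at temperature {self.temperature}]"

class PipelineBlock:
    """Concatenates connected text blocks and runs them through a model block."""
    def __init__(self, inputs: list[TextBlock], model: ModelBlock):
        self.inputs, self.model = inputs, model

    def run(self) -> str:
        return self.model.generate("\n".join(b.text for b in self.inputs))

pipeline = PipelineBlock(
    [TextBlock("Paraphrase formally:"), TextBlock("the results look kinda off")],
    ModelBlock(temperature=0.3),
)
print(pipeline.run())
```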

Look Upon Thyself: Understanding the Effect of Self-Reflection on Toxic Behavior in Online Gaming

CHI'23

Juhoon Lee, Jeong-woo Jang, Juho Kim

TBD

Towards an Experience-Centric Paradigm of Online Harassment: Responding to Calling out and Networked Harassment

CHI'23

Haesoo Kim, Juhoon Lee, Juho Kim, Jeong-woo Jang

TBD

The full schedule of presentations at CHI 2023 can also be seen here!

CHI 2022

DATE      30 April – 5 May 2022
LOCATION  Online (New Orleans, LA)

We are happy to bring good news! At CHI 2022, KAIST recorded a total of 19 Full Paper publications (with 1 Best Paper and 2 Honorable Mention Awards), 2 Interactivity demos, 7 Late-Breaking Works, and 4 Student Game Competition entries, ranking 5th in the number of publications among all CHI 2022 participating institutions. Congratulations on this outstanding achievement!

KAIST CHI Statistics (2015-2022)
Year    Publications    Rank
2015       9             14
2016      15              7
2017       7             26
2018      21              8
2019      13             11
2020      15             14
2021      22              4
2022      19              5

Nation-wide (Korea) CHI Statistics (2015-2022)
Year    Publications    Rank
2015      17              6
2016      20              6
2017      16             11
2018      30              6
2019      23              8
2020      29              7
2021      35              7
2022      33              7

For more information and details about the publications featured at the conference, please refer to the publication list below.

Paper Publications

Mobile-Friendly Content Design for MOOCs: Challenges, Requirements, and Design Opportunities

CHI'22, Best Paper

Jeongyeon Kim, Yubin Choi, Meng Xia, Juho Kim

Most video-based learning content is designed for desktops without considering mobile environments. We (1) investigate the gap between mobile learners’ challenges and video engineers’ considerations using mixed methods and (2) provide design guidelines for creating mobile-friendly MOOC videos. To uncover learners’ challenges, we conducted a survey (n=134) and interviews (n=21), and evaluated the mobile adequacy of current MOOCs by analyzing 41,722 video frames from 101 video lectures. Interview results revealed low readability and situationally-induced impairments as major challenges. The content analysis showed a low guideline compliance rate for key design factors. We then interviewed 11 video production engineers to investigate design factors they mainly consider. The engineers mainly focus on the size and amount of content while lacking consideration for color, complex images, and situationally-induced impairments. Finally, we present and validate guidelines for designing mobile-friendly MOOCs, such as providing adaptive and customizable visual design and context-aware accessibility support.

Stylette: Styling the Web with Natural Language

CHI'22, Honorable Mention

Tae Soo Kim, DaEun Choi, Yoonseo Choi, Juho Kim

End-users can potentially style and customize websites by editing them through in-browser developer tools. Unfortunately, end-users lack the knowledge needed to translate high-level styling goals into low-level code edits. We present Stylette, a browser extension that enables users to change the style of websites by expressing goals in natural language. By interpreting the user’s goal with a large language model and extracting suggestions from our dataset of 1.7 million web components, Stylette generates a palette of CSS properties and values that the user can apply to reach their goal. A comparative study (N=40) showed that Stylette lowered the learning curve, helping participants perform styling changes 35% faster than those using developer tools. By presenting various alternatives for a single goal, the tool helped participants familiarize themselves with CSS through experimentation. Beyond CSS, our work can be expanded to help novices quickly grasp complex software or programming languages.
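
The core loop the abstract describes, sketched with a hypothetical `call_llm` stub: a natural-language goal is turned into candidate CSS property/value pairs. The prompt and JSON format are assumptions, and the real system also mines its dataset of 1.7 million web components for suggestions.

```python
# Sketch: natural-language styling goal -> candidate CSS properties as JSON.
import json

def call_llm(prompt: str) -> str:
    """Stub standing in for a real LLM API call."""
    return '{"font-family": "cursive", "color": "#ff6f61"}'

def suggest_css(goal: str, element_html: str) -> dict[str, str]:
    prompt = (
        "Suggest CSS property/value pairs as JSON to achieve this styling goal.\n"
        f"Goal: {goal}\nElement: {element_html}\nJSON:"
    )
    return json.loads(call_llm(prompt))

print(suggest_css("make the header feel more playful", "<h1 class='title'>News</h1>"))
```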

MyDJ: Sensing Food Intakes with an Attachable on Your Eyeglass Frame

CHI'22, Honorable Mention

Jaemin Shin, Seungjoo Lee, Taesik Gong, Hyungjun Yoon, Hyunchul Roh, Andrea Bianchi, Sung-Ju Lee 

Various automated eating detection wearables have been proposed to monitor food intake. While these systems overcome the forgetfulness of manual user journaling, they typically show low accuracy in outside-the-lab environments or have intrusive form factors (e.g., headgear). Eyeglasses are emerging as a socially acceptable eating detection wearable, but existing approaches require custom-built frames and consume large amounts of power. We propose MyDJ, an eating detection system that can be attached to any eyeglass frame. MyDJ achieves accurate and energy-efficient eating detection by capturing complementary chewing signals on a piezoelectric sensor and an accelerometer. We evaluated the accuracy and wearability of MyDJ with 30 subjects in uncontrolled environments, where six subjects attached MyDJ to their own eyeglasses for a week. Our study shows that MyDJ achieves a 0.919 F1-score in eating episode coverage, with 4.03× the battery life of state-of-the-art systems. In addition, participants reported wearing MyDJ was almost as comfortable (94.95%) as wearing regular eyeglasses.

Lattice Menu: A Low-Error Gaze-Based Marking Menu Utilizing Target-Assisted Gaze Gestures on a Lattice of Visual Anchors

CHI'22

Taejun Kim, Auejin Ham, Sunggeun Ahn, Geehyuk Lee

We present Lattice Menu, a gaze-based marking menu utilizing a lattice of visual anchors that helps perform accurate gaze pointing for menu item selection. Users who know the location of the desired item can leverage target-assisted gaze gestures for multilevel item selection by looking at visual anchors over the gaze trajectories. Our evaluation showed that Lattice Menu exhibits a considerably low error rate (~1%) and a quick menu selection time (1.3-1.6 s) for expert usage across various menu structures (4 × 4 × 4 and 6 × 6 × 6) and sizes (8, 10 and 12°). In comparison with a traditional gaze-based marking menu that does not utilize visual targets, Lattice Menu showed remarkably (~5 times) fewer menu selection errors for expert usage. In a post-interview, all 12 subjects preferred Lattice Menu, and most subjects (8 out of 12) commented that the provisioning of visual targets facilitated more stable menu selections with reduced eye fatigue.

SpinOcchio: Understanding Haptic-Visual Congruency of Skin-Slip in VR with a Dynamic Grip Controller

CHI'22

Myung Jin Kim, Neung Ryu, Wooje Chang, Michel Pahud, Mike Sinclair, Andrea Bianchi

This paper’s goal is to understand the haptic-visual congruency perception of skin-slip on the fingertips given visual cues in Virtual Reality (VR). We developed SpinOcchio (‘Spin’ for the spinning mechanism used, ‘Occhio’ for the Italian word “eye”), a handheld haptic controller capable of rendering the thickness and slipping of a virtual object pinched between two fingers. This is achieved using a mechanism with spinning and pivoting disks that apply a tangential skin-slip movement to the fingertips. With SpinOcchio, we determined the baseline haptic discrimination threshold for skin-slip, and, using these results, we tested how haptic realism of motion and thickness is perceived with varying visual cues in VR. Surprisingly, the results show that in all cases, visual cues dominate over haptic perception. Based on these results, we suggest applications that leverage skin-slip and grip interaction, contributing further to realistic experiences in VR.

Understanding Emotion Changes in Mobile Experience Sampling

CHI'22

Soowon Kang, Cheul Young Park, Narae Cha, Auk Kim, Uichin Lee

Mobile experience sampling methods (ESMs) are widely used to measure users’ affective states by randomly sending self-report requests. However, this random probing can interrupt users and adversely influence users’ emotional states by inducing disturbance and stress. This work aims to understand how ESMs themselves may compromise the validity of ESM responses and what contextual factors contribute to changes in emotions when users respond to ESMs. Towards this goal, we analyze 2,227 samples of mobile ESM data collected from 78 participants. Our results show that ESM interruptions positively or negatively affected users’ emotional states in at least 38% of ESMs, and that the changes in emotions are closely related to the contexts users were in prior to ESMs. Finally, we discuss the implications of using the ESM and possible considerations for mitigating the variability in emotional responses in the context of mobile data collection for affective computing.

Cocomix: Utilizing Comments to Improve Non-Visual Webtoon Accessibility

CHI'22

Mina Huh, YunJung Lee, Dasom Choi, Haesoo Kim, Uran Oh, Juho Kim

Webtoon is a type of digital comic read online, where readers can leave comments to share their thoughts on the story. While webtoons have experienced a surge in popularity internationally, people with visual impairments cannot enjoy them due to the lack of an accessible format. While traditional image description practices can be adopted, the resulting descriptions cannot preserve webtoons’ unique values, such as control over the reading pace and social engagement through comments. To improve the webtoon reading experience for blind and low vision (BLV) users, we propose Cocomix, an interactive webtoon reader that leverages comments in the design of novel webtoon interactions. Since comments can identify story highlights and provide additional context, we designed a system that provides 1) comments-based adaptive descriptions with selective access to details and 2) panel-anchored comments for easy access to relevant descriptive comments. Our evaluation (N=12) showed that Cocomix users could adapt the descriptions for various needs and better utilize comments.

“It’s not wrong, but I’m quite disappointed”: Toward an Inclusive Algorithmic Experience for Content Creators with Disabilities

CHI'22

Dasom Choi, Uichin Lee, Hwajung Hong

YouTube is a space where people with disabilities can reach a wider online audience to present what it is like to have disabilities. Thus, it is imperative to understand how content creators with disabilities strategically interact with algorithms to draw viewers around the world. However, considering that the algorithm carries the risk of making less inclusive decisions for users with disabilities, whether the current algorithmic experiences (AXs) on video platforms are inclusive for creators with disabilities is an open question. To address this, we conducted semi-structured interviews with eight YouTubers with disabilities. We found that they aimed to inform the public of diverse representations of disabilities, which led them to work with algorithms by strategically portraying disability identities. However, they were disappointed that the way the algorithms work did not sufficiently support their goals. Based on our findings, we suggest implications for designing inclusive AXs that could embrace creators’ subtle needs.

AlgoSolve: Supporting Subgoal Learning in Algorithmic Problem-Solving with Learnersourced Microtasks

CHI'22

Kabdo Choi, Hyungyu Shin, Meng Xia, Juho Kim

Designing solution plans before writing code is critical for successful algorithmic problem-solving. Novices, however, often plan on the fly during implementation, resulting in unsuccessful problem-solving due to a lack of mental organization of the solution. Research shows that subgoal learning helps learners develop more complete solution plans by enhancing their understanding of the high-level solution structure. However, subgoal learning requires expert-created materials such as subgoal labels, which are scarce in self-learning settings due to limited availability and high cost. We propose a learnersourcing workflow that collects high-quality subgoal labels from learners by helping them improve their label quality. We implemented the workflow in AlgoSolve, a prototype interface that supports subgoal learning for algorithmic problems. A between-subjects study with 63 problem-solving novices revealed that AlgoSolve helped learners create higher-quality labels and more complete solution plans, compared to a baseline method known to be effective in subgoal learning.

FitVid: Responsive and Flexible Video Content Adaptation

CHI'22

Jeongyeon Kim, Yubin Choi, Minsuk Kahng, Juho Kim

Mobile video-based learning attracts many learners with its mobility and ease of access. However, most lectures are designed for desktops. Our formative study reveals mobile learners’ two major needs: more readable content and customizable video design. To support mobile-optimized learning, we present FitVid, a system that provides responsive and customizable video content. Our system consists of (1) an adaptation pipeline that reverse-engineers pixels to retrieve design elements (e.g., text, images) from videos, leveraging deep learning with a custom dataset, which powers (2) a UI that enables resizing, repositioning, and toggling of in-video elements. The content adaptation improves the guideline compliance rate by 24% for word count and 8% for font size. The content evaluation study (n=198) shows that the adaptation significantly increases readability and user satisfaction. The user study (n=31) indicates that FitVid significantly improves learning experience, interactivity, and concentration. We discuss design implications for responsive and customizable video adaptation.

Sad or just jealous? Using Experience Sampling to Understand and Detect Negative Affective Experiences on Instagram

CHI'22

Mintra Ruensuk, Taewon Kim, Hwajung Hong, Ian Oakley

Social Network Services (SNSs) evoke diverse affective experiences. While most are positive, many authors have documented both the negative emotions that can result from browsing SNSs and their impact: Facebook depression is a common term for the more severe outcomes. However, while the importance of the emotions experienced on SNSs is clear, methods to catalog them and systems to detect them are less well developed. Accordingly, this paper reports on two studies using a novel contextually triggered Experience Sampling Method to log surveys immediately after using Instagram, a popular image-based SNS, thus minimizing recall biases. The first study improves our understanding of the emotions experienced while using SNSs; it suggests that common negative experiences relate to appearance comparison and envy. The second study captures smartphone sensor data during Instagram sessions to detect these two emotions, ultimately achieving peak accuracies of 95.78% (binary appearance comparison) and 93.95% (binary envy).

Prediction for Retrospection: Integrating Algorithmic Stress Prediction into Personal Informatics Systems for College Students' Mental Health

CHI'22

Taewan Kim, Haesoo Kim, Ha Yeon Lee, Hwarang Goh, Shakhboz Abdigapporov, Mingon Jeong, Hyunsung Cho, Kyungsik Han, Youngtae Noh, Sung-Ju Lee, Hwajung Hong

Reflecting on stress-related data is critical to addressing one’s mental health. Personal Informatics (PI) systems augmented by algorithms and sensors have become popular ways to help users collect and reflect on data about stress. While prediction algorithms in PI systems are mainly used for diagnostic purposes, few studies have examined how the explainability of algorithmic predictions can support user-driven self-insight. To this end, we developed MindScope, an algorithm-assisted stress management system that determines a user’s stress level and explains how that level was computed from the user’s everyday activities captured by a smartphone. In a 25-day field study with 36 college students, the prediction and explanation supported self-reflection, a process of re-establishing preconceptions about stress by identifying stress patterns and recalling past stress levels and patterns, which led to coping planning. We discuss the implications of exploiting prediction algorithms to facilitate user-driven retrospection in PI systems.

"It Feels Like Taking a Gamble": Exploring Perceptions, Practices, and Challenges of Using Makeup and Cosmetics for People with Visual Impairments

CHI'22

Franklin Mingzhe Li, Franchesca Spektor, Meng Xia, Mina Huh, Peter Cederberg, Yuqi Gong, Kristen Shinohara, Patrick Carrington

Makeup and cosmetics offer the potential for self-expression and the reshaping of social roles for visually impaired people. However, barriers to conducting a beauty regimen exist because of the reliance on visual information and color variances in makeup. We present a content analysis of 145 YouTube videos demonstrating visually impaired individuals’ unique practices before, during, and after doing makeup. Based on these makeup practices, we then conducted semi-structured interviews with 12 visually impaired people to discuss their perceptions of and challenges with the makeup process in more depth. Overall, through our findings and discussion, we present novel perceptions of makeup from visually impaired individuals (e.g., broader representations of blindness and beauty). The existing challenges provide opportunities for future research to address learning barriers, insufficient feedback, and physical and environmental barriers, making the experience of doing makeup more accessible to people with visual impairments.

Quantifying Proactive and Reactive Button Input

CHI'22

Hyunchul Kim, Kasper Hornbaek, Byungjoo Lee

When giving input with a button, users follow one of two strategies: (1) react to the output from the computer, or (2) proactively act in anticipation of the output from the computer. We propose a technique to quantify reactiveness and proactiveness in order to determine the degree and characteristics of each input strategy. The proposed technique requires only input logs and screen recordings, with no further instrumentation. The likelihood distribution of the time interval between button inputs and system outputs, which is uniquely determined for each input strategy, is modeled. Then the probability that each observed input/output pair originates from a specific strategy is estimated, along with the parameters of the corresponding likelihood distribution. In two empirical studies, we show how to use the technique to answer questions such as how to design animated transitions and how to predict a player’s score in real-time games.
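
The abstract describes modeling strategy-specific likelihood distributions of input-output intervals and estimating per-pair strategy probabilities. Purely as a rough sketch of that estimation idea (the paper's actual likelihood models are not specified here), a two-component Gaussian mixture fitted with EM assigns each interval a probability of being proactive or reactive:

    # Rough sketch only; distributions and parameters are illustrative.
    import numpy as np
    from scipy.stats import norm

    def fit_two_strategy_mixture(intervals, n_iter=200):
        mu = np.array([0.0, 0.25])    # proactive near zero lag; reactive ~250 ms
        sigma = np.array([0.05, 0.08])
        pi = np.array([0.5, 0.5])
        x = intervals[:, None]
        for _ in range(n_iter):
            lik = pi * norm.pdf(x, mu, sigma)            # E-step
            resp = lik / lik.sum(axis=1, keepdims=True)  # strategy responsibilities
            nk = resp.sum(axis=0)                        # M-step
            pi, mu = nk / len(intervals), (resp * x).sum(axis=0) / nk
            sigma = np.sqrt((resp * (x - mu) ** 2).sum(axis=0) / nk)
        return pi, mu, sigma, resp

    rng = np.random.default_rng(0)
    data = np.concatenate([rng.normal(0.0, 0.05, 300),    # proactive pairs
                           rng.normal(0.25, 0.08, 700)])  # reactive pairs
    pi, mu, sigma, resp = fit_two_strategy_mixture(data)
    print(f"estimated reactive share: {pi[1]:.2f}")       # ~0.70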

Promptiverse: Scalable Generation of Scaffolding Prompts Through Human-AI Hybrid Knowledge Graph Annotation

CHI'22

Yoonjoo Lee, John Joon Young Chung, Tae Soo Kim, Jean Y Song, Juho Kim

Online learners are hugely diverse, with varying prior knowledge, but most instructional videos online are created to be one-size-fits-all. Thus, learners may struggle to understand the content by only watching the videos. Providing scaffolding prompts can help learners overcome these struggles through questions and hints that relate different concepts in the videos and elicit meaningful learning. However, serving diverse learners would require a spectrum of scaffolding prompts, which incurs high authoring effort. In this work, we introduce Promptiverse, an approach for generating diverse, multi-turn scaffolding prompts at scale, powered by numerous traversal paths over knowledge graphs. To facilitate the construction of the knowledge graphs, we propose a hybrid human-AI annotation tool, Grannotate. In our study (N=24), participants using Promptiverse and Grannotate produced 40 times more prompts, of quality on par with and diversity higher than hand-designed prompts. Promptiverse presents a model for creating diverse and adaptive learning experiences online.

CatchLive: Real-time Summarization of Live Streams with Stream Content and Interaction Data

CHI'22

Saelyne Yang, Jisu Yim, Juho Kim, Hijung Valentina Shin

Live streams usually last several hours with many viewers joining in the middle. Viewers who join in the middle often want to understand what has happened in the stream. However, catching up with the earlier parts is challenging because it is difficult to know which parts are important in the long, unedited stream while also keeping up with the ongoing stream. We present CatchLive, a system that provides a real-time summary of ongoing live streams by utilizing both the stream content and user interaction data. CatchLive provides viewers with an overview of the stream along with summaries of highlight moments with multiple levels of detail in a readable format. Results from deployments of three streams with 67 viewers show that CatchLive helps viewers grasp the overview of the stream, identify important moments, and stay engaged. Our findings provide insights into designing summarizations of live streams reflecting their characteristics.

A Conversational Approach for Modifying Service Mashups in IoT Environments

CHI'22

Sanghoon Kim, In-Young Ko

Existing conversational approaches for Internet of Things (IoT) service mashups do not support modification because of usability challenges, even though it is common for users to modify service mashups in IoT environments. To support the modification of IoT service mashups through conversational interfaces in a usable manner, we propose the conversational mashup modification agent (CoMMA). Users can modify IoT service mashups with CoMMA through natural-language conversations. CoMMA has a two-step mashup modification interaction: an implicature-based localization step and a modification step with a disambiguation strategy. The localization step allows users to easily search for a mashup by vocalizing their expressions in the environment. The modification step supports users in modifying mashups by speaking simple modification commands. We conducted a user study, and the results show that CoMMA is as effective as visual approaches in terms of task completion time and perceived task workload for modifying IoT service mashups.

TaleBrush: Sketching Stories with Generative Pretrained Language Models

CHI'22

John Joon Young Chung, Wooseok Kim, Kang Min Yoo, Hwaran Lee, Eytan Adar, Minsuk Chang

While advanced text generation algorithms (e.g., GPT-3) have enabled writers to co-create stories with an AI, guiding the narrative remains a challenge. Existing systems often leverage simple turn-taking between the writer and the AI in story development. However, writers remain unsupported in intuitively understanding the AI’s actions or steering the iterative generation. We introduce TaleBrush, a generative story ideation tool that uses line sketching interactions with a GPT-based language model for control and sensemaking of a protagonist’s fortune in co-created stories. Our empirical evaluation found our pipeline reliably controls story generation while maintaining the novelty of generated sentences. In a user study with 14 participants with diverse writing experiences, we found participants successfully leveraged sketching to iteratively explore and write stories according to their intentions about the character’s fortune while taking inspiration from generated stories. We conclude with a reflection on how sketching interactions can facilitate the iterative human-AI co-creation process.

Distracting Moments in Videoconferencing: A Look Back at the Pandemic Period

CHI'22

Minha Lee, Wonyoung Park, Sunok Lee, Sangsu Lee

The COVID-19 pandemic has forced workers around the world to switch their working paradigms from on-site to video-mediated communication. Despite the advantages of videoconferencing, diverse circumstances have prevented people from focusing on their work. One of the most typical problems they face is that various surrounding factors distract them during their meetings. This study focuses on conditions in which remote workers are distracted by factors that disturb, interrupt, or restrict them during their meetings. We aim to explore the various problem situations and user needs. To understand users’ pain points and needs, focus group interviews and participatory design workshops were conducted to learn about participants’ troubled working experiences over the past two years and the solutions they expected. Our study provides a unified framework of distracting factors by which to understand causes of poor user experience and reveals valuable implications to improve videoconferencing experiences.

Interactivity

QuadStretch: A Forearm-wearable Multi-dimensional Skin Stretch Display for Immersive VR Haptic Feedback

CHI'22

Youngbo Aram Shim, Taejun Kim, Geehyuk Lee

This demonstration presents QuadStretch, a multidimensional skin stretch display worn on the forearm for VR interaction. QuadStretch realizes a light and flexible form factor without a large frame grounding the device on the arm, and provides rich haptic feedback through the high expressive performance of the stretch modality and the various stimulation sites around the forearm. In the demonstration, the presenter lets participants experience six VR interaction scenarios with QuadStretch feedback: Boxing, Pistol, Archery, Slingshot, Wings, and Climbing. In each scenario, the user’s actions are mapped to skin stretch parameters and fed back, allowing users to experience QuadStretch’s large output space for an immersive VR experience.

TaleBrush: Visual Sketching of Story Generation with Pretrained Language Models

CHI'22

John Joon Young Chung, Wooseok Kim, Kang Min Yoo, Hwaran Lee, Eytan Adar, Minsuk Chang

Advances in text generation algorithms (e.g., GPT-3) have led to new kinds of human-AI story co-creation tools. However, it is difficult for authors to guide this generation and to understand the relationship between input controls and generated output. In response, we introduce TaleBrush, a GPT-based tool that uses abstract visualizations and sketched inputs. The tool allows writers to draw out the protagonist’s fortune with a simple and expressive interaction. The visualization of the fortune serves both as an input control and as a representation of what the algorithm generated (a story with varying fortune levels). We hope this demonstration leads the community to consider novel controls and sensemaking interactions for human-AI co-creation.

Late-Breaking Work

Effect of Contact Points Feedback on Two-Thumb Touch Typing in Virtual Reality

CHI'22

Jeongmin Son, Sunggeun Ahn, Sunbum Kim, Geehyuk Lee

Two-thumb touch typing (4T) is a touchpad-based text-entry technique also used in virtual reality (VR) systems. However, the performance of 4T in VR is far below that of 4T in a real environment, such as on a smartphone. Unlike “real 4T”, 4T in VR provides virtual cursors representing the thumb positions determined by a position tracker. The virtual cursor positions may differ from the thumb contact points on an input surface. Still, users may regard them as their thumb contact points. In this case, the virtual cursor movements may conflict with the thumb movements perceived by their proprioception and may contribute to typing errors. We hypothesized that virtual cursors accurately representing the contact points of the thumb can improve the performance of 4T in VR. We designed a method to provide accurate contact point feedback, and showed that accurate contact point feedback has a statistically significant positive effect on the speed of 4T in VR.

An Interactive Car Drawing System with Tick'n'Draw for Training Perceptual and Perspective Drawing Skills

CHI'22

Seung-Jun Lee, Joon Hyub Lee, Seok-Hyung Bae

Young children love to draw. However, at around age 10, they begin to feel that their drawings are unrealistic and give up drawing altogether. This study aims to help those who did not receive proper drawing training at that age and, as a result, still draw at that level. First, through 12 drawing workshop sessions, we condensed 2 prominent art education books into 10 core drawing skills. Second, we designed and implemented a novel interactive system that helps the user repeatedly train these skills through the 5 stages of drawing a nice car in accurate perspective. Our novel interactive technique, Tick’n’Draw, inspired by the drawing habits of experts, provides friendly guidance so that the user does not get lost in the many steps of perceptual and perspective drawing. Third, through a pilot test, we found that our system is quick to learn, easy to use, and can potentially improve real-world drawing abilities with continued use.

Mine Yourself!: A Role-playing Privacy Tutorial in Virtual Reality Environment

CHI'22

Junsu Lim, Hyeonggeun Yun, Auejin Ham, Sunjun Kim

Virtual Reality (VR) has potential privacy vulnerabilities, as it collects a wide range of data at high density. Various designs for providing Privacy Policy (PP) information have improved awareness of and motivation toward privacy risks. However, most of them have focused on desktop environments and do not utilize the full potential of VR’s immersive interactivity. We therefore propose a role-playing mechanism that provides an immersive experience of interacting with a PP’s key entities. First, our formative study revealed that VR users had limited awareness of what data are collected and how to control them. Following this, we implemented a VR privacy tutorial based on our role-playing mechanism and the PPs of off-the-shelf VR devices. Our privacy tutorial increased privacy awareness by an amount similar to conventional PPs, with significantly higher satisfaction (p=0.007). Our tutorial also showed marginally higher usability (p=0.11).

Exploring the Effects of AI-assisted Emotional Support Processes in Online Mental Health Community

CHI'22

Donghoon Shin, Subeen Park, Esther Hehsun Kim, Soomin Kim, Jinwook Seo, Hwajung Hong

Social support in online mental health communities (OMHCs) is an effective and accessible way of managing mental wellbeing. In this process, sharing emotional support is crucial to thriving OMHCs, yet often difficult for both seekers and providers. To support empathetic interactions, we designed an AI-infused workflow that helps users write emotionally supportive messages in response to other users’ posts, based on eliciting the seeker’s emotion and contextual keywords from their writing. In a preliminary user study (N = 10), we found that the system helped seekers clarify their emotions and describe their situations concretely while writing a post. Providers could also learn how to react empathetically to the post. Based on these results, we suggest design implications for our proposed system.

Virfie: Virtual Group Selfie Station for Remote Togetherness

CHI'22

Hyerin Im, Taewan Kim, Eunhee Jung, Bonhee Ku, Seungho Baek, Tak Yeon Lee

Selfies have become a prominent means of online communication. Group selfies, in particular, encourage people to represent their identity as part of a group and foster a sense of belonging. During the COVID-19 pandemic, video conferencing systems have been used as a tool for group selfies. However, conventional systems are not ideal for group selfies due to the rigidity of grid-based layouts, information overload, and lack of eye contact. To explore design opportunities and needs for a novel virtual group selfie platform, we conducted a participatory design study and identified three characteristics of virtual group selfie scenarios: “context with narratives,” “interactive group tasks,” and “capturing subtle moments.” We implemented Virfie, a web-based platform that enables users to take group selfies with embodied social interaction and to create and customize selfie scenarios using a novel JSON specification. To validate our design concept and identify usability issues, we conducted a user study. Feedback from participants suggests that Virfie is effective at strengthening social interaction and remote togetherness.
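
The JSON specification's actual schema is not given in the abstract; purely as a hypothetical illustration, a scenario description might bundle a narrative, per-member slots, and a capture trigger:

    # Hypothetical scenario spec; all field names are invented for illustration.
    import json

    scenario = {
        "title": "Birthday toast",
        "narrative": "Everyone raises a glass toward the camera",
        "slots": [
            {"role": "host",  "position": [0.5, 0.6], "scale": 1.2},
            {"role": "guest", "position": [0.2, 0.5], "scale": 1.0},
        ],
        "capture": {"trigger": "countdown", "seconds": 3},
    }
    print(json.dumps(scenario, indent=2))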

CareMouse: An Interactive Mouse System that Supports Wrist Stretching Exercises in the Workplace

CHI'22

Gyuwon Jung, Youwon Shin, Jieun Lim, Uichin Lee

Knowledge workers suffer from wrist pain due to long-term mouse and keyboard use. In this study, we present CareMouse, an interactive mouse system that supports wrist stretching exercises in the workplace. When a stretch alarm is given, users hold CareMouse and do the exercises; the system collects wrist movement data and determines whether users follow the accurate stretching motions based on a machine learning algorithm, enabling real-time guidance. We conducted a preliminary user study to understand users’ perceptions and experience of the system. Our results showed the feasibility of CareMouse in guiding stretching exercises interactively. We provide design implications for augmenting existing tools with auxiliary functions.
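
The abstract does not detail the model; as a minimal sketch under assumed inputs (windowed wrist-motion samples with toy labels, not CareMouse's actual data or classifier), a standard classifier over simple summary features could play the same role:

    # Minimal sketch: classify whether a motion window matches the prompted
    # stretch. Data, labels, and features here are synthetic placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def window_features(w: np.ndarray) -> np.ndarray:
        """w: (samples, axes) motion window; simple per-axis statistics."""
        return np.concatenate([w.mean(axis=0), w.std(axis=0),
                               np.abs(np.diff(w, axis=0)).mean(axis=0)])

    rng = np.random.default_rng(1)
    X = np.stack([window_features(rng.normal(size=(100, 6)))
                  for _ in range(200)])
    y = rng.integers(0, 2, 200)  # 1 = correct stretch motion (toy labels)

    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X[:150], y[:150])
    print("held-out accuracy:", clf.score(X[150:], y[150:]))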

ThinkWrite: Design Interventions for Empowering User Deliberation in Online Petition

CHI'22

Jini Kim, Chorong Kim, Ki-Young Nam

Online petitions have served as an innovative means of citizen participation over the past decade. However, their original purpose has been waning due to inappropriate language, fabricated information, and a lack of evidence supporting the petitions. This lack of deliberation has influenced other users, deteriorating the platforms to the degree that good petitions are seldom generated. Therefore, this study designs interventions that empower users to create deliberative petitions. We conducted user research to observe users’ writing behavior in online petitions and identified causes of non-deliberative petitions. Based on our findings, we propose ThinkWrite, a new interactive app promoting user deliberation. The app includes six main features: a gamified learning process, a writing recommendation system, a guiding interface for self-construction, tailored AI for self-revision, shortcuts for easy archiving of evidence, and a citizen-collaborative page. Finally, the efficacy of the app is demonstrated through user surveys and in-depth interviews.

Student Game Competition

Play With Your Emotions: Exploring Possibilities of Emotions as Game Input in NERO

CHI'22

Valérie Erb, Tatiana Chibisova, Haesoo Kim, Jeongmi Lee, Young Yim Doh

This work presents NERO, a game concept that uses the player’s active emotional input to map the player’s emotional state to representative in-game characters. Emotional input in games has mainly been used as a passive measure to adjust game difficulty or other variables; players have not been given the possibility to explore and play with their emotions as an active feature. Given the high subjectivity of felt emotions, we focused on the player’s experience of emotional input rather than the objective accuracy of the input sensor. We therefore implemented a proof-of-concept game using heart rate as a proxy for emotion measurement, and through repeated player tests the game mechanics were revised and evaluated. We gained valuable insights for the design of entertainment-focused emotional-input games, including emotional connection despite limited accuracy, the influence of the environment, and the importance of calibration. The players overall enjoyed the novel game experience, and their feedback carries useful implications for future games with active emotional input.

The Melody of the Mysterious Stones: A VR Mindfulness Game Using Sound Spatialization

CHI'22

Haven Kim, Jaeran Choi, Young Yim Doh, Juhan Nam

The Melody of the Mysterious Stones is a VR meditation game that utilizes spatial audio technologies. One of the most common mindfulness exercises is to notice and observe the five senses, including the sense of sound. To help players focus on their sense of sound, The Melody of the Mysterious Stones uses spatialized sounds as game elements. Our play tests showed that players enjoyed missions with spatial audio elements. They also reported that spatial audio helped them focus on their sense of sound, making them feel more engaged with the meditation materials.

Evoker: Narrative-based Facial Expression Game for Emotional Development of Adolescents

CHI'22

Seokhyeon Hong, Yeonsoo Choi, Youjin Sung, Yurhee Jin, Young Yim Doh, Jeongmi Lee

Evoker is a narrative-based facial expression game. Due to the COVID-19 pandemic, adolescents must wear masks in their daily lives. However, wearing masks disturbs emotional interaction through facial expressions, a critical component of emotional development, so a negative impact on adolescents’ emotional development is predicted. To address this problem, we designed Evoker, a narrative-based game that uses real-time facial expression recognition. In this game, players are asked to identify an emotion from the narrative context of a mission and make a facial expression appropriate for that context to clear its challenges. Our game provides an opportunity to practice reading emotional contexts and expressing appropriate emotions, and thus has high potential for promoting the emotional development of adolescents.

Classy Trash Monster: An Educational Game for Teaching Machine Learning to Non-major Students

CHI'22

Joonhyung Bae, Karam Eum, Haram Kwon, Seolhee Lee, Juhan Nam, Young Yim Doh

As machine learning (ML) becomes more relevant to our lives, ML education for college students without a technical background has become important. However, few educational games are designed to suit the challenges these students experience. We introduce an educational game, Classy Trash Monster (CTM), designed to teach ML and data dependency to non-major students learning ML for the first time. Players learn to train a classification model and solve tasks by engaging in simple game activities designed according to an ML pipeline. Simple controls, positive rewards, and clear audiovisual feedback make the game easy to play even for novice players. The playtest results showed that players were able to learn basic ML concepts and how data can impact model results, and that the game made ML feel less difficult and more relevant. However, a proper debriefing session seems crucial to prevent misinterpretations that may occur in the learning process.

CHI 2021

CHI 2021
DATE
  7 May – 17 May 2021
LOCATION  Online (Yokohama, Japan)
 

We are happy to bring good news! At CHI 2021, KAIST records a total of 22 Full Paper publications (with 7 Honorable Mention Awards), ranking 4th in number of publications among all CHI 2021 participating institutions. Congratulations on the outstanding achievement!

KAIST CHI Statistics (2015-2021)
Year    Number of Publications    Rank
2015    9                         14
2016    15                        7
2017    7                         26
2018    21                        8
2019    13                        11
2020    15                        14
2021    22                        4

Nation-wide (Korea) CHI Statistics (2015-2021)
Year    Number of Publications    Rank
2015    17                        6
2016    20                        6
2017    16                        11
2018    30                        6
2019    23                        8
2020    29                        7
2021    35                        7

For more information and details about the publications that feature in the conference, please refer to the publication list below.
 

CHI 2021 Publications

Designing Metamaterial Cells to Enrich Thermoforming 3D Printed Objects for Post-Print Modification
CHI'21, Honorable Mention

Donghyeon Ko, Jee Bin Yim, Yujin Lee, Jaehoon Pyun, Woohun Lee

In this paper, we present a metamaterial structure called thermoformable cells (TF-Cells) to enrich thermoforming for post-print modification. Thermoforming has so far seen limited use for modifying 3D printed objects due to their low thermal conductivity. TF-Cells consist of beam arrays that readily pass hot air and have high heat transference. By heating the embedded TF-Cells of a printed object, users can modify not only the deeper areas of the object’s surface but also its form factor. Through a series of technical experiments, we investigated TF-Cells’ thermoformability depending on the structure’s parameters, orientations, and heating conditions. Next, we present a series of compound cells consisting of TF-Cells and solid structures to adjust stiffness or reduce undesirable shape deformation. Applying the results of the experiments, we built a simple tool for embedding TF-Cells into a 3D model. Using the tool, we implemented examples in the contexts of mechanical fitting, ergonomic fitting, and aesthetic tuning.

A User-Oriented Approach to Space-Adaptive Augmentation: The Effects of Spatial Affordance on Narrative Experience in an Augmented Reality Detective Game
CHI'21, Honorable Mention

Jae-eun Shin, Boram Yoon, Dooyoung Kim, Woontack Woo

Space-adaptive algorithms aim to effectively align the virtual with the real to provide immersive user experiences for Augmented Reality (AR) content across various physical spaces. While such measures rely on real spatial features, efforts to understand those features from the user’s perspective and reflect them in designing adaptive augmented spaces have been lacking. To this end, we compared factors of narrative experience across six spatial conditions during gameplay of Fragments, a space-adaptive AR detective game. Configured by size and furniture layout, each condition afforded disparate degrees of traversability and visibility. Results show that whereas centered furniture clusters are suitable for higher presence in sufficiently large rooms, the same layout leads to lower narrative engagement. Based on our findings, we suggest guidelines that can enhance the effects of space adaptivity by considering how users perceive and navigate augmented space generated from different physical environments.

GamesBond: Bimanual Haptic Illusion of Physically Connected Objects for Immersive VR Using Grip Deformation
CHI'21, Honorable Mention

Neung Ryu, Hye-Young Jo, Michel Pahud, Mike Sinclair, Andrea Bianchi

Virtual Reality experiences, such as games and simulations, typically support the usage of bimanual controllers to interact with virtual objects. To recreate the haptic sensation of holding objects of various shapes and behaviors with both hands, previous researchers have used mechanical linkages between the controllers that render adjustable stiffness. However, the linkage cannot quickly adapt to simulate dynamic objects, nor can it be removed to support free movements. This paper introduces GamesBond, a pair of 4-DoF controllers without a physical linkage but capable of creating the illusion of being connected as a single device, forming a virtual bond. The two controllers work together by dynamically displaying and physically rendering deformations of the hand grips, allowing users to perceive a single connected object between the hands, such as a jumping rope. With a user study and various applications, we show that GamesBond increases the realism, immersion, and enjoyment of bimanual interaction.

AtaTouch: Robust Finger Pinch Detection for a VR Controller Using RF Return Loss
CHI'21, Honorable Mention

Daehwa Kim, Keunwoo Park, Geehyuk Lee

Handheld controllers are an essential part of VR systems. Modern sensing techniques enable them to track users’ finger movements to support natural interaction using hands. These sensing techniques, however, often fail to precisely determine whether two fingertips touch each other, which is important for the robust detection of a pinch gesture. To address this problem, we propose AtaTouch, a novel, robust sensing technique for detecting the closure of a finger pinch. It utilizes the change in the coupled impedance of an antenna and human fingers when the thumb and finger form a loop. We implemented a prototype controller in which AtaTouch detects the finger pinch of the grabbing hand. A user test with the prototype showed a finger-touch detection accuracy of 96.4%. Another user test, with scenarios of moving virtual blocks, demonstrated a low object-drop rate (2.75%) and a low false-pinch rate (4.40%). The results and feedback from the participants support the robustness and sensitivity of AtaTouch.

ThroughHand: 2D Tactile Interaction to Simultaneously Recognize and Touch Multiple Objects
CHI'21, Honorable Mention

Jingun Jung, Sunmin Son, Sangyoon Lee, Yeonsu Kim, Geehyuk Lee

Users with visual impairments find it difficult to enjoy real-time 2D interactive applications on the touchscreen. Touchscreen applications such as sports games often require simultaneous recognition of and interaction with multiple moving targets through vision. To mitigate this issue, we propose ThroughHand, a novel tactile interaction that enables users with visual impairments to interact with multiple dynamic objects in real time. We designed the ThroughHand interaction to utilize the potential of the human tactile sense that spatially registers both sides of the hand with respect to each other. ThroughHand allows interaction with multiple objects by enabling users to perceive the objects using the palm while providing a touch input space on the back of the same hand. A user study verified that ThroughHand enables users to locate stimuli on the palm with a margin of error of approximately 13 mm and effectively provides a real-time 2D interaction experience for users with visual impairments.

Secrets of Gosu: Understanding Physical Combat Skills of Professional Players in First-Person Shooters
CHI'21, Honorable Mention

Eunji Park, Sangyoon Lee, Auejin Ham, Minyeop Choi, Sunjun Kim, Byungjoo Lee

In first-person shooters (FPS), professional players (a.k.a. Gosu) outperform amateur players. The secrets behind professional FPS players’ performance have been debated in online communities with many conjectures; however, attempts at scientific verification have been limited. We addressed this conundrum through a data-collection study of the gameplay of eight professional and eight amateur players in the commercial FPS Counter-Strike: Global Offensive. The collected data cover behavioral data from six sensors (motion capture, eye tracker, mouse, keyboard, electromyography armband, and pulse sensor) and in-game data (player data and event logs). We examined conjectures in four categories: aiming, character movement, physicality, and device and settings. Only 6 out of 13 conjectures were supported with statistically sufficient evidence.

A Simulation Model of Intermittently Controlled Point-and-Click Behaviour
CHI'21, Honorable Mention

Seungwon Do, Minsuk Chang, Byungjoo Lee

We present a novel simulation model of point-and-click behaviour that is applicable whether a target is stationary or moving. To enable more realistic simulation than existing models, the proposed model takes into account key features of the user and the external environment, such as intermittent motor control, click decision-making, visual perception, upper-limb kinematics, and the effect of the input device. The simulated user’s point-and-click behaviour is formulated as a Markov decision process (MDP), and the user’s action policy is optimised through deep reinforcement learning. As a result, our model successfully and accurately reproduced the trial completion times, distribution of click endpoints, and cursor trajectories of real users. Through an ablation study, we showed how the simulation results change when the model’s sub-modules are individually removed. The implemented model and dataset are publicly available.
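
As a structural sketch only (the published model's perception, motor-control, and kinematics modules are far richer than this), the MDP formulation can be pictured as a small environment whose action policy would then be optimized with deep reinforcement learning:

    # Toy MDP skeleton; dynamics, rewards, and thresholds are illustrative.
    import numpy as np

    class PointAndClickEnv:
        """State: cursor and target positions. Action: a 2D motor
        adjustment plus a binary click decision."""
        def reset(self):
            self.cursor = np.zeros(2)
            self.target = np.random.uniform(-1, 1, 2)
            return np.concatenate([self.cursor, self.target])

        def step(self, action):
            move, click = action[:2], action[2] > 0
            self.cursor = self.cursor + np.clip(move, -0.2, 0.2)
            dist = np.linalg.norm(self.cursor - self.target)
            state = np.concatenate([self.cursor, self.target])
            if click:  # episode ends on a click: reward a hit, penalize a miss
                return state, (1.0 if dist < 0.1 else -1.0), True
            return state, -0.01, False  # small per-step time cost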

Heterogeneous Stroke: Using Unique Vibration Cues to Improve the Wrist-Worn Spatiotemporal Tactile Display

CHI'21

Taejun Kim, Youngbo Aram Shim, Geehyuk Lee

Beyond simple notification of incoming calls or messages, more complex information such as letters and digits can be delivered through spatiotemporal tactile patterns (STPs) on a wrist-worn tactile display (WTD) with multiple tactors. However, owing to the limited skin area and spatial acuity of the wrist, frequent confusions occur between closely located tactors, resulting in low recognition accuracy. Furthermore, the accuracies reported in previous studies have mostly been measured for a specific posture and could decrease further with free arm postures in real life. Herein, we present Heterogeneous Stroke, a design concept for improving the recognition accuracy of STPs on a WTD. By assigning a unique vibrotactile stimulus to each tactor, confusion between tactors can be reduced. Through our implementation of Heterogeneous Stroke, alphanumeric characters could be delivered with high accuracy (93.8% for the 26 letters and 92.4% for the 10 digits) across different arm postures.

RubySlippers: Supporting Content-based Voice Navigation for How-to Videos

CHI'21

Minsuk Chang, Mina Huh, Juho Kim

Directly manipulating the timeline, such as scrubbing for thumbnails, is the standard way of controlling how-to videos. However, when how-to videos involve physical activities, people inconveniently alternate between controlling the video and performing the tasks. Adopting a voice user interface allows people to control the video with voice while performing the tasks with their hands. However, naively translating timeline manipulation into a voice user interface (VUI) results in temporal referencing (e.g., "rewind 20 seconds"), which requires a different mental model for navigation and thereby limits users’ ability to peek into the content. We present RubySlippers, a system that supports efficient content-based voice navigation through keyword-based queries. Our computational pipeline automatically detects referenceable elements in the video and finds the video segmentation that minimizes the number of navigational commands needed. Our evaluation (N=12) shows that participants could perform three representative navigation tasks with fewer commands and less frustration using RubySlippers than with a conventional voice-enabled video interface.
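
As a toy sketch of content-based navigation (the segment boundaries and elements below are invented; the real pipeline detects them automatically and optimizes the segmentation), a keyword query can be resolved to the segment whose referenceable elements it mentions:

    # Toy resolver: map a spoken query to the start time of the best-matching
    # segment. Segments and their elements are made-up examples.
    segments = [
        {"start": 0,   "elements": {"whisk", "eggs"}},
        {"start": 95,  "elements": {"flour", "sift"}},
        {"start": 210, "elements": {"oven", "bake"}},
    ]

    def resolve_query(query: str):
        words = set(query.lower().split())
        best = max(segments, key=lambda s: len(words & s["elements"]))
        return best["start"] if words & best["elements"] else None

    print(resolve_query("go to the part about the oven"))  # -> 210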

Personalizing Ambience and Illusionary Presence: How People Use “Study with me” Videos to Create Effective Studying Environments

CHI'21

Yoonjoo Lee, John Joon Young Chung, Jean Y. Song, Minsuk Chang, Juho Kim

“Study with me” videos contain footage of people studying for hours, in which social components like conversations or informational content like instructions are absent. Recently, they became increasingly popular on video-sharing platforms. This paper provides the first broad look into what “study with me” videos are and how people use them. We analyzed 30 “study with me” videos and conducted 12 interviews with their viewers to understand their motivation and viewing practices. We identified a three-factor model that explains the mechanism for shaping a satisfactory studying experience in general. One of the factors, a well-suited ambience, was difficult to achieve because of two common challenges: external conditions that prevent studying in study-friendly places and extra cost needed to create a personally desired ambience. We found that the viewers used “study with me” videos to create a personalized ambience at a lower cost, to find controllable peer pressure, and to get emotional support. These findings suggest that the viewers self-regulate their learning through watching “study with me” videos to improve efficiency even when studying alone at home.

Sticky Goals: Understanding Goal Commitments for Behavioral Changes in the Wild

CHI'21

Hyunsoo Lee, Auk Kim, Hwajung Hong, Uichin Lee

A commitment device, an attempt to bind oneself for a successful goal achievement, has been used as an effective strategy to promote behavior change. However, little is known about how commitment devices are used in the wild, and what aspects of commitment devices are related to goal achievements. In this paper, we explore a large-scale dataset from stickK, an online behavior change support system that provides both financial and social commitments. We characterize the patterns of behavior change goals (e.g., topics and commitment setting) and then perform a series of multilevel regression analyses on goal achievements. Our results reveal that successful goal achievements are largely dependent on the configuration of financial and social commitment devices, and a mixed commitment setting is considered beneficial. We discuss how our findings could inform the design of effective commitment devices, and how large-scale data can be leveraged to support data-driven goal elicitation and customization. 
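
As a hedged sketch of this style of analysis (variable names are hypothetical, not stickK's actual schema), a multilevel model with goals nested within users could look like the following, here as a linear-probability simplification over a binary achievement outcome:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical extract: one row per goal, with columns achieved (0/1),
    # stake_usd, has_referee (0/1), has_supporters (0/1), and user_id.
    df = pd.read_csv("goals.csv")

    # Random intercepts per user capture that goals are nested within users.
    model = smf.mixedlm("achieved ~ stake_usd + has_referee + has_supporters",
                        data=df, groups=df["user_id"])
    print(model.fit().summary())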

Winder: Linking Speech and Visual Objects to Support Communication in Asynchronous Collaboration

CHI'21

Tae Soo Kim, Seungsu Kim, Yoonseo Choi, Juho Kim

Team members commonly collaborate on visual documents remotely and asynchronously. Particularly, students are frequently restricted to this setting as they often do not share work schedules or physical workspaces. As communication in this setting has delays and limits the main modality to text, members exert more effort to reference document objects and understand others’ intentions. We propose Winder, a Figma plugin that addresses these challenges through linked tapes—multimodal comments of clicks and voice. Bidirectional links between the clicked-on objects and voice recordings facilitate understanding tapes: selecting objects retrieves relevant recordings, and playing recordings highlights related objects. By periodically prompting users to produce tapes, Winder preemptively obtains information to satisfy potential communication needs. Through a five-day study with eight teams of three, we evaluated the system’s impact on teams asynchronously designing graphical user interfaces. Our findings revealed that producing linked tapes could be as lightweight as face-to-face (F2F) interactions while transmitting intentions more precisely than text. Furthermore, with preempted tapes, teammates coordinated tasks and invited members to build on each others’ work.

"Good Enough!": Flexible Goal Achievement with Margin-based Outcome Evaluation

CHI'21

Gyuwon Jung, Jio Oh, Youjin Jung, Juho Sun, Ha-Kyung Kong, Uichin Lee

Traditional goal setting simply assumes a binary outcome for goal evaluation. This binary judgment does not consider a user’s effort, which may demotivate the user. This work explores the possibility of mitigating this negative impact with a slight modification on the goal evaluation criterion, by introducing a ‘margin’ that is widely used for quality control in the manufacturing fields. A margin represents a range near the goal where the user’s outcome will be regarded as ‘good enough’ even if the user fails to reach it. We explore users’ perceptions and behaviors through a large-scale survey study and a small-scale field experiment using a coaching system to promote physical activity. Our results provide positive evidence on the margin, such as lowering the burden of goal achievement and increasing motivation to make attempts. We discuss practical design implications on margin-enabled goal setting and evaluation for behavioral change support systems.
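
The margin mechanism reduces to a three-way evaluation; a minimal sketch, with a step-count example assuming a hypothetical margin of 1,000 steps:

    def evaluate(outcome: float, goal: float, margin: float) -> str:
        """Outcomes at or above the goal succeed; outcomes short of the
        goal but within the margin are judged 'good enough'."""
        if outcome >= goal:
            return "success"
        if outcome >= goal - margin:
            return "good enough"
        return "failure"

    print(evaluate(9200, goal=10000, margin=1000))  # -> "good enough"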

GoldenTime: Exploring System-Driven Timeboxing and Micro-Financial Incentives for Self-Regulated Phone Use

CHI'21

Joonyoung Park, Hyunsoo Lee, Sangkeun Park, Kyong-Mee Chung, Uichin Lee

User-driven intervention tools such as self-tracking help users self-regulate problematic smartphone usage. These tools generally assume active user engagement, but prior studies have warned of declining engagement over time. This paper proposes GoldenTime, a mobile app that promotes self-regulated usage behavior via system-driven proactive timeboxing and micro-financial incentives framed as gains or losses for behavioral reinforcement. We conducted a large-scale user study (n = 210) to explore how our proactive timeboxing and micro-financial incentives influence users’ smartphone usage behaviors. Our findings show that GoldenTime’s timeboxing-based micro-financial incentives are effective in self-regulating smartphone usage, and that incentive framing has a significant impact on user behavior. We provide practical design guidelines for persuasive technology aimed at promoting digital wellbeing.
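
The exact incentive rules and amounts are not spelled out in the abstract; purely as a hypothetical sketch of how gain- versus loss-framed micro-incentives per timebox might differ in bookkeeping while matching in net outcome:

    def update_balance(balance: float, compliant: bool, framing: str,
                       unit: float = 0.01) -> float:
        """Gain framing: start from zero and earn `unit` per compliant
        timebox. Loss framing: start from an endowment and lose `unit`
        per violation. Amounts and rules here are invented."""
        if framing == "gain":
            return balance + (unit if compliant else 0.0)
        return balance - (0.0 if compliant else unit)

    balance = 1.0  # loss-framed endowment (hypothetical amount)
    for compliant in [True, False, True]:
        balance = update_balance(balance, compliant, framing="loss")
    print(round(balance, 2))  # -> 0.99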

Exploring the Use of a Voice-based Conversational Agent to Empower Adolescents with Autism Spectrum Disorder

CHI'21

Inha Cha, Sung-In Kim, Hwajung Hong, Heejeong Yoo, Youn-kyung Lim

Voice-based Conversational Agents (VCAs) have served as personal assistants that support individuals with special needs. Adolescents with Autism Spectrum Disorder (ASD) may also benefit from VCAs in dealing with their everyday needs and challenges, ranging from self-care to social communication. In this study, we explored how VCAs could encourage adolescents with ASD in navigating various aspects of their daily lives through two weeks of VCA use and a series of participatory design workshops. Our findings demonstrate that VCAs can be an engaging, empowering, and emancipating tool that supports adolescents with ASD in addressing their needs, personalities, and expectations, such as promoting self-care skills, regulating negative emotions, and practicing conversational skills. We propose implications for using off-the-shelf technologies as personal assistants for users with ASD in assistive technology design, and suggest design implications for promoting positive opportunities while mitigating the remaining challenges of VCAs for adolescents with ASD.

MomentMeld: AI-augmented Mobile Photographic Memento towards Mutually Stimulatory Inter-generational Interaction

CHI'21

Bumsoo Kang, Seungwoo Kang, Inseok Hwang

ToonNote: Improving Communication in Computational Notebooks Using Interactive Data Comics

CHI'21

DaYe Kang, Tony Ho, Nicolai Marquardt, Bilge Mutlu, Andrea Bianchi

Elevate: A Walkable Pin-Array for Large Shape-Changing Terrains

CHI'21

Seungwoo Je, Hyunseung Lim, Kongpyung Moon, Shan-Yuan Teng, Jas Brooks, Pedro Lopes, Andrea Bianchi

Human Perceptions on Moral Responsibility of AI: A Case Study in AI-Assisted Bail Decision-Making

CHI'21

Gabriel Lima, Nina Grgić-Hlača, Meeyoung Cha

Virtual Camera Layout Generation using a Reference Video

CHI'21

Jung Eun Yoo, Kwanggyoon Seo, Sanghun Park, Jaedong Kim, Dawon Lee, Junyong Noh

Late-Breaking Work

Post-Post-it: A Spatial Ideation System in VR for Overcoming Limitations of Physical Post-it Notes

Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (LBW)

Joon Hyub Lee, Donghyeok Ma, Haena Cho, Seok-Hyung Bae

Post-it notes are great problem-solving tools. However, physical Post-it notes have limitations: surfaces for attaching them can run out; rearranging them can be labor-intensive; documenting and storing them can be cumbersome. We present Post-Post-it, a novel VR interaction system that overcomes these physical limitations. We derived design requirements from a formative study involving a problem-solving meeting using Post-it notes. Then, through physical prototyping with materials such as Post-it notes, transparent acrylic panels, and masking tape, we designed a set of lifelike VR interactions based on hand gestures that the user can perform easily and intuitively. With our system, the user can create and place Post-it notes in an immersive space that is large enough to ideate freely; quickly move, copy, or delete many Post-it notes at once; and easily manage the results.

I Can't Talk Now: Speaking with Voice Output Communication Aid Using Text-to-Speech Synthesis During Multiparty Video Conference

Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (LBW)

Wooseok Kim, Sangsu Lee

I want more than 👍: User-generated icons for Better Video-mediated Communications on the Collaborative Design Process

Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (LBW)

Haena Cho, Hyeonjeong Im, Sunok Lee, Sangsu Lee

How the Death-themed Game Spiritfarer Can Help Players Cope with the Loss of a Loved One

Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (LBW)

Karam Eum, Valérie Erb, Subin Lin, Sungpil Wang, Young Yim Doh

“I Don’t Know Exactly but I Know a Little”: Exploring Better Responses of Conversational Agents with Insufficient Information

Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (LBW)

Minha Lee, Sangsu Lee

Bubble Coloring to Visualize the Speech Emotion

Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (LBW)

Qinyue Chen, Yuchun Yan, Hyeon-Jeong Suk

Guideline-Based Evaluation and Design Opportunities for Mobile Video-based Learning

Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (LBW)

Jeongyeon Kim, Juho Kim

Workshops & Symposia

Challenges in Devising Resources for Ethics: What Should We Consider When Designing Toolkits to Tackle AI Ethical Issues for Practitioners?

CHI 2021 (The ACM CHI Conference on Human Factors in Computing Systems 2021) Workshop on Co-designing Ethics.

Inha Cha and Youn-kyung Lim

Artificial Intelligence (AI) technologies have become interwoven into our daily contexts through various services and products, and discussions on AI’s social impact are actively being held. As awareness of the social impact of AI technology has increased, studies focusing on algorithmic bias and its harms have gained attention, as have efforts to mitigate social bias. One way to address this problem is to support and guide the practitioners who design these technologies. Accordingly, various toolkits and methods have been devised to support practitioners, including checklists, open-source software for detecting algorithmic bias, and game- and activity-based approaches. This paper examines the pros and cons of existing toolkits according to their characteristics, and discusses what we should consider before designing toolkits that tackle ethical issues in AI.

CHI 2019

At CHI 2019, HCI@KAIST presents 14 papers, 6 late-breaking works, and 3 workshops, including one Best Paper and one Honorable Mention. These works come from 11 labs across 4 different schools and departments at KAIST. We thank our outstanding colleagues and collaborators from industry, research centers, and universities around the world.

We will be hosting the KAIST HOSPITALITY NIGHT event on May 7th (Tue).
Join us at HCI@KAIST HOSPITALITY NIGHT!

DATE   May 7th (Tue) 2019
TIME  20:00 – 23:00
LOCATION  The Strip Joint Glasgow – pizza place & drink monger (956 Argyle street, Glasgow G3 8LU)

 

Paper & Notes

PicMe: Interactive Visual Guidance for Taking Requested Photo Composition

Monday 11:00 -12:20 | Session of On the Streets | Room: Boisdale 1

  • Minju Kim, Graduate School of Culture Technology, KAIST
  • Jungjin Lee, KAI Inc., Daejeon

PicMe is a mobile application that provides interactive on-screen guidance to help the user take pictures with the composition that another person requests. Once the requester captures a picture of the desired composition and delivers it to the user (the photographer), a 2.5D guidance system called the virtual frame guides the user in real time by showing a three-dimensional composition of the target image (i.e., its size and shape). In addition, according to the matching accuracy rate, we provide a small-sized target image in an inset window as feedback, along with edge visualization for further alignment of detail elements. We implemented PicMe to work fully in mobile environments. We then conducted a preliminary user study to evaluate the effectiveness of PicMe compared to traditional 2D guidance methods. The results show that PicMe helps users reach their target images more accurately and quickly by giving them more confidence in their tasks.

Co-Performing Agent: Design for Building User-Agent Partnership in Learning and Adaptive Services

Wednesday 16:00-17:20 | Session of The One with Bots | Room: Boisdale 1

  • Da-jung Kim, Department of Industrial Design, KAIST
  • Youn-kyung Lim, Department of Industrial Design, KAIST

Intelligent agents have become prevalent in everyday IT products and services. To improve an agent’s knowledge of a user and the quality of personalized service experience, it is important for the agent to cooperate with the user (e.g., asking users to provide their information and feedback). However, few works inform how to support such user-agent co-performance from a human-centered perspective. To fill this gap, we devised Co-Performing Agent, a Wizard-of-Oz-based research probe of an agent that cooperates with a user to learn by helping users to have a partnership mindset. By incorporating the probe, we conducted a two-month exploratory study, aiming to understand how users experience co-performing with their agent over time. Based on the findings, this paper presents the factors that affected users’ co-performing behaviors and discusses design implications for supporting constructive co-performance and building a resilient user–agent partnership over time.

Ten-Minute Silence: A New Notification UX of Mobile Instant Messenger

Wednesday 12:00-12:20 | Session of UX Methods | Room: Forth

  • In-geon Shin, Department of Industrial Design, KAIST
  • Jin-min Seok, Department of Industrial Design, KAIST
  • Youn-kyung Lim, Department of Industrial Design, KAIST

People receive a tremendous number of messages through mobile instant messaging (MIM), which generates crowded notifications. This study highlights our attempt to create a new notification rule to reduce this crowdedness, which can be recognized by both senders and recipients. We developed an MIM app that provides only one notification per conversation session, which is a group of consecutive messages distinguished based on a ten-minute silence period. Through the two-week field study, 20,957 message logs and interview data from 17 participants revealed that MIM notifications affect not only the recipients’ experiences before opening the app but also the entire conversation experience, including that of the senders. The new notification rule created new social norms for the participants’ use of MIM. We report themes about the changes in the MIM experience, which will expand the role of notifications for future MIM apps.
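
The rule itself is simple to state; a minimal sketch of session-based notification grouping as described (one notification per session, where sessions are split by ten minutes of silence):

    from datetime import datetime, timedelta

    SILENCE = timedelta(minutes=10)
    last_message_at = {}  # conversation id -> time of most recent message

    def should_notify(conversation: str, now: datetime) -> bool:
        """Notify only for the first message of a session, i.e., a message
        arriving after at least ten minutes of silence."""
        prev = last_message_at.get(conversation)
        last_message_at[conversation] = now
        return prev is None or now - prev >= SILENCE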

Like A Second Skin: Understanding How Epidermal Devices Affect Human Tactile Perception

Thursday 14:00 - 15:20 | Paper Session: Skin and Textiles | Hall 2

  • Aditya Shekhar Nittala, Saarland University
  • Klaus Kruttwig, INM-Leibniz Institute
  • Jaeyeon Lee, School of Computing, KAIST
  • Roland Bennewitz, INM-Leibniz Institute
  • Eduard Arzt, INM-Leibniz Institute
  • Jürgen Steimle, Saarland University

The emerging class of epidermal devices opens up new opportunities for skin-based sensing, computing, and interaction. Future design of these devices requires an understanding of how skin-worn devices affect the natural tactile perception. In this study, we approach this research challenge by proposing a novel classification system for epidermal devices based on flexural rigidity and by testing advanced adhesive materials, including tattoo paper and thin films of poly(dimethylsiloxane) (PDMS). We report on the results of three psychophysical experiments that investigated the effect of epidermal devices of different rigidity on passive and active tactile perception. We analyzed human tactile sensitivity thresholds, two-point discrimination thresholds, and roughness discrimination abilities on three different body locations (fingertip, hand, forearm). Generally, a correlation was found between device rigidity and tactile sensitivity thresholds as well as roughness discrimination ability. Surprisingly, thin epidermal devices based on PDMS with a hundred times the rigidity of commonly used tattoo paper resulted in comparable levels of tactile acuity. The material offers the benefit of increased robustness against wear and the option to re-use the device. Based on our findings, we derive design recommendations for epidermal devices that combine tactile perception with device robustness.

VirtualComponent: A Mixed-Reality Tool for Designing and Tuning Breadboarded Circuits

Wednesday 10:00 - 10:20 | Session of Fabricating Electronics | Room: Hall 1

  • Yoonji Kim, Department of Industrial Design, KAIST
  • Youngkyung Choi, Department of Industrial Design, KAIST
  • Hyein Lee, Department of Industrial Design, KAIST
  • Geehyuk Lee, School of Computing, KAIST
  • Andrea Bianchi, Department of Industrial Design, KAIST

Prototyping electronic circuits is an increasingly popular activity, supported by the work of researchers who have developed toolkits to improve the design, debugging, and fabrication of electronics. While past work mainly dealt with circuit topology, in this paper we propose a system for determining and tuning the values of circuit components. Based on the results of a formative study with seventeen makers, we designed VirtualComponent, a mixed-reality tool that allows users to digitally place electronic components on a real breadboard, tune their values in software, and see these changes applied to the physical circuit in real time. VirtualComponent is composed of a set of plug-and-play modules containing banks of components, and a custom breadboard managing the connections and the components’ values. Through example usages and the results of an informal study with twelve makers, we demonstrate that VirtualComponent is easy to use and encourages users to test component value configurations with little effort.

LocknType: Lockout Task Intervention for Discouraging Smartphone App Use

Monday 14:00-15:20 | Paper Session: Human-Smartphone Interaction | Hall 2

  • Jaejeung Kim, Graduate School of Knowledge Service Engineering, KAIST
  • Joonyoung Park, Graduate School of Knowledge Service Engineering, KAIST
  • Hyunsoo Lee, Graduate School of Knowledge Service Engineering, KAIST
  • Minsam Ko, College of Computing, Hanyang University
  • Uichin Lee, Graduate School of Knowledge Service Engineering, KAIST

Instant access and gratification make it difficult for us to self-limit our use of smartphone apps. We hypothesize that a slight increase in the interaction cost of accessing an app can successfully discourage app use. We propose a proactive intervention that requests users to perform a simple lockout task (e.g., typing a fixed-length number) whenever a target app is launched. We investigate how lockout tasks with varying workloads (i.e., pause only without number input, 10-digit input, and 30-digit input) influence a user’s decision making, through a 3-week in-situ experiment with 40 participants. Our findings show that even the pause-only task, which requires a user to press a button to proceed, discouraged an average of 13.1% of app use, and the 30-digit-input task discouraged 47.5%. We derived determinants of app use and non-use decision making for a given lockout task. We further provide implications for persuasive technology design for discouraging undesired behaviors.
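
As a rough sketch of the intervention logic (our illustration only; the function names and console interaction are hypothetical stand-ins for an app-launch hook, not the authors’ implementation):

```python
import random

def lockout_task(digits: int) -> bool:
    """Gate an app launch behind a brief lockout task.

    digits=0 reproduces the pause-only condition (a single press to
    proceed); 10 and 30 digits reproduce the heavier workloads.
    Returns False when the user gives up, i.e. app use was discouraged.
    """
    if digits == 0:
        return input("Pause: press Enter to open the app, or type q to quit: ") != "q"
    code = "".join(random.choice("0123456789") for _ in range(digits))
    typed = input(f"Type {code} to open the app (press Enter to give up): ")
    return typed == code

# e.g. an app-launch interceptor would open the target app
# only if lockout_task(30) returns True
```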

Evaluating the Combination of Visual Communication Cues for HMD-based Mixed Reality Remote Collaboration

Tuesday 09:00 - 10:20 | Session of X Reality Evaluations | Room Dochart 2

  • Seungwon Kim, School of ITMS, University of South Australia
  • Gun Lee, School of ITMS, University of South Australia
  • Weidong Huang, Swinburne University of Technology
  • Hayun Kim, Graduate School of Culture Technology, KAIST
  • Woontack Woo, Graduate School of Culture Technology, KAIST
  • Mark Billinghurst, School of ITMS, University of South Australia

Many researchers have studied various visual communication cues (e.g., pointer, sketching, and hand gesture) in Mixed Reality remote collaboration systems for real-world tasks. However, the effect of combining them has not been well explored. We studied the effect of these cues in four combinations: hand only, hand + pointer, hand + sketch, and hand + pointer + sketch, with three problem tasks: Lego, Tangram, and Origami. The study results showed that participants completed the task significantly faster and felt a significantly higher level of usability when the sketch cue was added to the hand gesture cue, but not when the pointer cue was added. Participants also preferred the combinations including hand and sketch cues over the other combinations. However, using additional cues (pointer or sketch) increased the perceived mental effort and did not improve the feeling of co-presence. We discuss the implications of these results and future research directions.

The Effects of Interruption Timing on Autonomous Height-Adjustable Desks that Respond to Task Changes

Tuesday 11:00 - 12:20 | Session of In the Office | Room Alsh 1

  • Bokyung Lee, Department of Industrial Design, KAIST
  • Sindy Wu, School of Computer Science and Technology, KAIST
  • Maria Jose Reyes, Department of Industrial Design, KAIST
  • Daniel Saakes, Department of Industrial Design, KAIST

Actuated furniture, such as electric adjustable sit-stand desks, helps users vary their posture and contributes to comfort and health. However, studies have found that users rarely initiate height changes. Therefore, in this paper, we look into furniture that adjusts itself to the user’s needs. A situated interview study indicated task-changing as an opportune moment for automatic height adjustment. We then performed a Wizard of Oz study to find the best timing for changing desk height to minimize interruption and discomfort. The results are in line with prior work on task interruption in graphical user interfaces and show that the table should change height during a task change. However, the results also indicate that until users build trust in the system, they prefer actuation after a task change to experience the impact of the adjustment.

SmartManikin: Virtual Humans with Agency for Design Tools

Tuesday 16:00 - 17:20 | Session of Design Tools | Room Alsh 1

  • Bokyung Lee, Department of Industrial Design, KAIST
  • Taeil Jin, Graduate School of Culture Technology, KAIST
  • Sung-Hee Lee, Faculty of Art and Design, University of Tsukuba
  • Daniel Saakes, Department of Industrial Design, KAIST

When designing comfort and usability in products, designers need to evaluate aspects ranging from anthropometrics to use scenarios. Therefore, virtual and poseable mannequins are employed as a reference in early-stage tools and for evaluation in the later stages. However, tools to intuitively interact with virtual humans are lacking. In this paper, we introduce SmartManikin, a mannequin with agency that responds to high-level commands and to real-time design changes. We first captured human poses with respect to desk configurations, identified key features of the pose and trained regression functions to estimate the optimal features at a given desk setup. The SmartManikin’s pose is generated by the predicted features as well as by using forward and inverse kinematics. We present our design, implementation, and an evaluation with expert designers. The results revealed that SmartManikin enhances the design experience by providing feedback concerning comfort and health in real time.

Slow Robots for Unobtrusive Posture Correction

Tuesday 09:00 - 10:20 | Session of Weighty Interactions | Room Dochart 1

  • Joon-Gi Shin, Department of Industrial Design, KAIST
  • Eiji Onchi, Graduate School of Comprehensive Human Science, University of Tsukuba
  • Maria Jose Reyes, Department of Industrial Design, KAIST
  • Junbong Song, TeamVoid
  • Uichin Lee, Graduate School of Knowledge Service Engineering, KAIST
  • Seung-Hee Lee, Faculty of Art and Design, University of Tsukuba
  • Daniel Saakes, Department of Industrial Design, KAIST

Prolonged static and unbalanced sitting postures during computer usage contribute to musculoskeletal discomfort. In this paper, we investigated the use of a very slowly moving monitor for unobtrusive posture correction. In a first study, we identified display velocities below the perception threshold and observed how users (without being aware) responded by gradually following the monitor’s motion. From the results, we designed a robotic monitor that moves imperceptibly to counterbalance unbalanced sitting postures and induce posture correction. In an evaluation study (n=12), we had participants work for four hours each without and with our prototype (eight hours in total). Results showed that actuation significantly increased the frequency of swift, non-disruptive posture corrections.

How to Design Voice Based Navigation for How-To Videos

Monday 11:00 - 12:20 | Interacting with Videos

  • Minsuk Chang, School of Computing, KAIST
  • Anh Truong, Adobe Research, Stanford University
  • Oliver Wang, Adobe Research
  • Maneesh Agrawala, Stanford University
  • Juho Kim, School of Computing, KAIST

When watching how-to videos related to physical tasks, users’ hands are often occupied by the task, making voice input a natural fit. To better understand the design space of voice interactions for how-to video navigation, we conducted three think-aloud studies using: 1) a traditional video interface, 2) a research probe providing a voice-controlled video interface, and 3) a wizard-of-oz interface. From the studies, we distill seven navigation objectives and their underlying intents: pace control pause, content alignment pause, video control pause, reference jump, replay jump, skip jump, and peek jump. Our analysis found that users’ navigation objectives and intents affect the choice of referent type and referencing approach in command utterances. Based on our findings, we recommend 1) supporting conversational strategies like sequence expansions and command queues, 2) allowing users to identify and refine their navigation objectives explicitly, and 3) supporting the seven interaction intents.

Diagnosing and Coping with Mode Errors in Korean-English Dual-language Keyboard

Monday 14:00 - 15:20 | Paper Session: Human-Smartphone Interaction | Hall 2

  • Sangyoon Lee, School of Computing, KAIST
  • Jaeyeon Lee, School of Computing, KAIST
  • Geehyuk Lee, School of Computing, KAIST

In countries where languages with non-Latin characters are prevalent, people use a keyboard with two language modes, namely the native language and English, and often experience mode errors. To diagnose the mode error problem, we conducted a field study and observed that 78% of the mode errors occurred immediately after application switching. We implemented four methods (Auto-switch, Preview, Smart-toggle, and Preview & Smart-toggle) based on three strategies to deal with the mode error problem and conducted field studies to verify their effectiveness. In the studies considering Korean-English dual input, Auto-switch was ineffective. On the contrary, Preview significantly reduced the mode errors from 75.1% to 41.3%, and Smart-toggle saved typing cost for recovering from mode errors. In Preview & Smart-toggle, Preview reduced mode errors and Smart-toggle handled 86.2% of the mode errors that slipped past Preview. These results suggest that Preview & Smart-toggle is a promising method for preventing mode errors in Korean-English dual-input environments.
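
The Smart-toggle strategy can be sketched roughly as follows (our illustration, not the authors’ implementation: the key-map excerpt and plausibility checks are toy stand-ins, and real Hangul input would additionally require jamo-to-syllable composition):

```python
# Tiny excerpt of the QWERTY-to-Hangul jamo mapping used by Korean keyboards.
KO_OF_EN = {"r": "ㄱ", "k": "ㅏ", "s": "ㄴ", "e": "ㄷ", "g": "ㅎ"}

def reinterpret_as_korean(keystrokes: str) -> str:
    return "".join(KO_OF_EN.get(c, c) for c in keystrokes)

def smart_toggle(keystrokes, plausible_en, plausible_ko):
    """If the keystrokes look implausible in the current (English) mode but
    plausible when reinterpreted under the Korean mode, convert them
    retroactively instead of making the user erase and retype."""
    if not plausible_en(keystrokes):
        candidate = reinterpret_as_korean(keystrokes)
        if plausible_ko(candidate):
            return candidate, "KO"
    return keystrokes, "EN"

is_en = lambda s: s in {"hello", "the"}                   # toy lexicon check
is_ko = lambda s: any("ㄱ" <= ch <= "ㅣ" for ch in s)      # toy jamo check
print(smart_toggle("rks", is_en, is_ko))                  # ('ㄱㅏㄴ', 'KO')
```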

TORC: A Virtual Reality Controller for In-Hand High-Dexterity Finger Interaction

Thursday 11:00 - 12:20 | Paper Session: Unexpected interactions | Clyde Auditorium

  • Jaeyeon Lee, School of Computing, KAIST
  • Mike Sinclair, Microsoft Research
  • Mar Gonzalez-Franco, Microsoft Research
  • Eyal Ofek, Microsoft Research
  • Christian Holz, Microsoft Research

Recent hand-held controllers have explored a variety of haptic feedback sensations for users in virtual reality by producing both kinesthetic and cutaneous feedback from virtual objects. These controllers are grounded to the user’s hand and can only manipulate objects through arm and wrist motions, not using the dexterity of their fingers as they would in real life. In this paper, we present TORC, a rigid haptic controller that renders virtual object characteristics and behaviors such as texture and compliance. Users hold and squeeze TORC using their thumb and two fingers and interact with virtual objects by sliding their thumb on TORC’s trackpad. During the interaction, vibrotactile motors produce sensations to each finger that represent the haptic feel of squeezing, shearing or turning an object. Our evaluation showed that using TORC, participants could manipulate virtual objects more precisely (e.g., position and rotate objects in 3D) than when using a conventional VR controller.

Geometrically Compensating Effect of End-to-End Latency in Moving-Target Selection Games

Wednesday 16:00-17:20 | Session of Gameplay Analysis and Latency | Room Hall2

  • Injung Lee, Graduate School of Culture Technology, KAIST
  • Sunjun Kim, Aalto University
  • Byungjoo Lee, Graduate School of Culture Technology, KAIST

Effects of unintended latency on gamer performance have been reported. End-to-end latency can be corrected by post-input manipulation of activation times, but this gives the player an unnatural gameplay experience. For moving-target selection games such as Flappy Bird, this paper presents a predictive model of the effect of latency on error rate and a novel method that compensates for latency effects by adjusting the game’s geometry design, e.g., by modifying the size of the selection region. Without manipulation of the game clock, this can keep the user’s error rate constant even if the end-to-end latency of the system changes. The approach extends the current model of moving-target selection with two additional assumptions about the effects of latency: (1) latency reduces players’ cue-viewing time, and (2) latency pushes the mean of the input distribution backward. The model and method were validated through precise experiments.
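
The compensation idea can be illustrated with a simple Gaussian timing model (an assumption made here purely for illustration; the paper’s model is more detailed): latency shifts the mean of the input distribution, so the selection window is widened until the predicted error rate returns to its zero-latency baseline:

```python
import math

def error_rate(width, mu, sigma):
    # P(a Gaussian input with mean mu and s.d. sigma misses a selection
    # window of the given width centered on the target)
    phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    return 1 - (phi((width / 2 - mu) / sigma) - phi((-width / 2 - mu) / sigma))

def compensated_width(base_width, latency_shift, sigma, tol=1e-4):
    """Grow the selection region until the predicted error rate under
    latency matches the zero-latency baseline."""
    target = error_rate(base_width, 0.0, sigma)
    w = base_width
    while error_rate(w, latency_shift, sigma) > target + tol:
        w *= 1.01
    return w

print(compensated_width(base_width=40.0, latency_shift=12.0, sigma=15.0))
```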

Late Breaking Work

“What does your Agent look like?” A Drawing Study to Understand Users’ Perceived Persona of Conversational Agent

Tuesday 10:20 - 11:00 | Session of Des | Room: Hall 4

  • Sunok Lee, Department of Industrial Design, KAIST
  • Sungbae Kim, Department of Industrial Design, KAIST
  • Sangsu Lee, Department of Industrial Design, KAIST

Conversational agents (CAs) are becoming more popular and useful at home. Creating the persona is an important part of designing a conversational user interface (CUI). Since the CUI is a voice-mediated interface, users naturally form an image of the CA’s persona through its voice. Because that image affects users’ interactions with CAs, we sought to understand users’ perceptions through a drawing method. We asked 31 users to draw an image of the CA that communicates with them. Through a qualitative analysis of the collected drawings and interviews, we identified various types of CA personas perceived by users and found design factors that influenced users’ perceptions. Our findings help us understand persona perception and provide designers with design implications for creating an appropriate persona.

Designing Personalities of Conversational Agents

Tuesday 10:20 - 11:00 | Session of Des | Room: Hall 4

  • Hankyung Kim, Department of Industrial Design, KAIST
  • Dong Yoon Koh, Department of Industrial Design, KAIST
  • Gaeun Lee, Samsung Research
  • Jung-Mi Park, Samsung Research
  • Youn-kyung Lim, Department of Industrial Design, KAIST


SolveDeep: A System for Supporting Subgoal Learning in Online Math Problem Solving

Tuesday 10:20 - 11:00 | Session of Des | Room: Hall 4

  • Hyoungwook Jin, School of Computing, KAIST
  • Minsuk Chang, School of Computing, KAIST
  • Juho Kim, School of Computing, KAIST

Learner-driven subgoal labeling helps learners form a hierarchical structure of solutions with subgoals, which are conceptual units of procedural problem solving. While learning with such a hierarchical structure of a solution in mind is effective for learning problem-solving strategies, developing an interactive feedback system to support subgoal labeling tasks at scale requires significant expert effort, making learner-driven subgoal labeling difficult to apply in online learning environments. We propose SolveDeep, a system that provides feedback on learner solutions with peer-generated subgoals. SolveDeep utilizes a learnersourcing workflow to generate the hierarchical representation of possible solutions, and uses a graph-alignment algorithm to generate a solution graph by merging the populated solution structures, which are then used to generate feedback on future learners’ solutions. We conducted a user study with 7 participants to evaluate the efficacy of our system. Participants performed subgoal learning with two math problems and rated the usefulness of the system’s feedback. The average rating was 4.86 out of 7 (1: not useful, 7: useful), and the system could successfully construct a hierarchical structure of solutions with learnersourced subgoal labels.
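
A much-simplified sketch of the merging step, assuming solutions arrive as linear lists of subgoal labels (the paper’s graph-alignment algorithm is more sophisticated; this only aggregates observed step orderings):

```python
from collections import defaultdict

def merge_solutions(solutions):
    """Merge learnersourced solutions into a weighted solution graph:
    nodes are subgoal labels, edge weights count observed orderings."""
    edges = defaultdict(int)
    for steps in solutions:
        for a, b in zip(steps, steps[1:]):
            edges[(a, b)] += 1
    return edges

graph = merge_solutions([
    ["isolate x", "factor", "check roots"],
    ["isolate x", "quadratic formula", "check roots"],
])
print(graph[("isolate x", "factor")])  # 1
```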

Crowdsourcing Perspectives on Public Policy from Stakeholders

Tuesday 10:20 - 11:00 | Session of Des | Room: Hall 4

  • Hyunwoo Kim, School of Computing, KAIST
  • Eun-Young Ko, School of Computing, KAIST
  • Donghoon Han, School of Computing, KAIST
  • Sung-Chul Lee, School of Computing, KAIST
  • Simon T. Perrault, Singapore University of Technology and Design, Singapore
  • Jihee Kim, School of Business and Technology Management, KAIST
  • Juho Kim, School of Computing, KAIST

Personal deliberation, the process through which people form an informed opinion on social issues, serves an important role in helping citizens construct rational arguments in public deliberation. However, existing information channels for public policies deliver only a few stakeholders’ voices, thus failing to provide a diverse knowledge base for personal deliberation. This paper presents an initial design of PolicyScape, an online system that supports personal deliberation on public policies by helping citizens explore diverse stakeholders and their perspectives on a policy’s effects. Building on the literature on crowdsourced policymaking and policy stakeholders, we present several design choices for crowdsourcing stakeholder perspectives. We introduce perspective-taking as an approach for personal deliberation by helping users consider stakeholder perspectives on policy issues. Our initial results suggest that PolicyScape could collect diverse sets of perspectives from the stakeholders of public policies, and help participants discover unexpected viewpoints of various stakeholder groups.

Improving Two-Thumb Touchpad Typing in Virtual Reality

Wednesday 15:20 - 16:00 | Late-breaking Work: Poster Rotation 2 | Hall 4

  • Jeongmin Son, School of Computing, KAIST
  • Sunggeun Ahn, School of Computing, KAIST
  • Sunbum Kim, School of Computing, KAIST
  • Geehyuk Lee, School of Computing, KAIST

Two-Thumb Touchpad Typing (4T) using hand-held controllers is one of the common text entry techniques in Virtual Reality (VR). However, its performance is far below that of two-thumb typing on a smartphone. We explored the possibility of improving its performance, focusing on two factors: the visual feedback of hovering thumbs and the grip stability of the controllers. We examined the effects of these two factors on 4T performance in user experiments. The results show that hover feedback had a significant main effect on 4T performance, but grip stability did not. We then investigated the achievable performance of the final 4T design in a longitudinal study, whose results show that users could achieve a typing speed of over 30 words per minute after two hours of practice.

FingMag: Finger Identification Method for Smartwatch

Wednesday 15:20 - 16:00 | Late-breaking Work: Poster Rotation 2 | Hall 4

  • Keunwoo Park, School of Computing, KAIST
  • Geehyuk Lee, School of Computing, KAIST

Interacting with a smartwatch is difficult owing to its small touchscreen. A general strategy to overcome the limitations of the small screen is to increase the input vocabulary. A popular approach to do this is to distinguish fingers and assign different functions to them. As a finger identification method for smartwatches, we propose FingMag, a machine-learning-based method that identifies the finger on the screen with the help of a ring. For this identification, the finger’s touch position and the magnetic field from a magnet embedded in the ring are used. In an offline evaluation using data collected from 12 participants, we show that FingMag can identify the finger with an accuracy of 96.21% in stationary geomagnetic conditions.
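
In spirit, the classifier maps a touch position plus a three-axis magnetometer reading to a finger label. A minimal scikit-learn sketch, in which the feature layout, model choice, and data are our assumptions rather than the paper’s exact pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier  # stand-in model choice

# Hypothetical labeled samples: (touch_x, touch_y, mag_x, mag_y, mag_z),
# with the magnetometer read at touch-down and a magnet ring worn on the
# index finger.
X = np.array([
    [0.31, 0.62, 12.0, -3.5, 48.2],
    [0.35, 0.58, -6.1, 9.4, 51.0],
])
y = np.array(["index", "thumb"])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def identify_finger(touch_xy, mag_xyz):
    """Predict which finger touched the screen."""
    return clf.predict([[*touch_xy, *mag_xyz]])[0]

print(identify_finger((0.32, 0.60), (11.0, -2.9, 47.5)))  # likely "index"
```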

Workshops & Symposia

User-Centered Graphical Models of Interaction

Workshop on Computational Modeling in Human-Computer Interaction

  • Minsuk Chang, School of Computing, KAIST
  • Juho Kim, School of Computing, KAIST

In this position paper, I present a set of data-driven techniques for modeling the learning material, learner workflow, and learning task as graphical representations, which at scale can create and support learning opportunities in the wild. I propose that the graphical models resulting from this bottom-up approach can further serve as proxies for representing the learnability bounds of an interface. I also propose an alternative approach that directly aims to “learn” the interaction bounds by modeling the interface as an agent’s sequential decision-making problem. Then I illustrate how data-driven modeling techniques and algorithmic modeling techniques can create a mutually beneficial bridge for advancing the design of interfaces.

Readersourcing an Accurate and Comprehensive Understanding of Health-related Information Represented by Media

Workshop on HCI for Accurate, Impartial and Transparent Journalism: Challenges and Solutions

  • Eun-Young Ko, School of Computing, KAIST
  • Ching Liu, National Tsing Hua University
  • Hyuntak Cha, Seoul National University
  • Juho Kim, School of Computing, KAIST

Health news delivers findings from health-related research to the public. As the delivered information may affect the public’s everyday decisions and behavior, readers should get an accurate and comprehensive understanding of the research from the articles they read. However, this is rarely achieved, due to incomplete information delivered by news stories and a lack of critical evaluation by readers. In this position paper, we propose a readersourcing approach: engaging readers in a critical reading activity while collecting valuable artifacts that help future readers acquire a more accurate and comprehensive understanding of health-related information. We discuss challenges, opportunities, and design considerations for the readersourcing approach. We then present the initial design of a web-based news reading application that connects health news readers via questioning and answering tasks.

PlayMaker: A Participatory Design Method for Creating Entertainment Application Concepts Using Activity Data

Workshop on HCI for Accurate, Impartial and Transparent Journalism: Challenges and Solutions

  • Dong Yoon Koh, Department of Industrial Design, KAIST
  • Ju Yeon Kim, Department of Industrial Design, KAIST
  • Donghyeok Yun, Department of Industrial Design, KAIST
  • Youn-kyung Lim, Department of Industrial Design, KAIST

The public’s ever-growing interest in health has led the well-being industry to explosive growth over the years. This has propelled activity trackers to become one of the trendiest items among today’s wearable devices. Seeking new opportunities for effective data utilization, we present a participatory design method that explores the fusion of activity data with entertainment applications. In this method, we spur participants to design by mixing and matching activity tracker data attributes with existing entertainment application features to produce new concepts. We report two cases of method implementation and further discuss the opportunities of activity tracker data as a means for entertainment application design.

Let’s take KAIST HCI Friends photos together! Take a photo with the HCI KAIST pose as shown in the example and upload it to Instagram with the hashtags #HCIKAIST@CHI2019 or #KAISTNIGHT.

CHI 2018

At CHI 2018, HCI@KAIST presents 19 papers, 11 late breaking works, 5 demonstrations and 1 video showcase.
These works are from 20 labs of 4 different schools and departments at KAIST. We thank our outstanding colleagues and collaborators from industry, research centers and universities around the world. 

Please come to the KAIST Night on April 25, 2018. Let’s take KAIST HCI Friends photo together!

HCI KAIST Reception
DATE  April 25, 2018
TIME  20:00 – 22:00
LOCATION  Joverse, Montreal

Paper & Notes

Mechanism Perfboard: An Augmented Reality Environment for Linkage Mechanism Design and Fabrication

Monday 11:30-11:50 | Session of Craft, Fabrication, Making | Room 516C

  • Yunwoo Jeong, Department of Industrial Design, KAIST
  • Han-Jong Kim, Department of Industrial Design, KAIST
  • Tek-Jin Nam, Department of Industrial Design, KAIST

Prototyping devices with kinetic mechanisms, such as automata and robots, has become common in physical computing projects. However, mechanism design in the early-concept exploration phase is challenging due to the dynamic and unpredictable characteristics of mechanisms. We present Mechanism Perfboard, an augmented reality environment that supports linkage mechanism design and fabrication. It supports the concretization of ideas by generating the initial desired linkage mechanism from a real-world movement. The projection of simulated movement within the environment enables iterative tests and modifications at real scale. Augmented information and accompanying tangible parts help users fabricate mechanisms. Through a user study with 10 participants, we found that Mechanism Perfboard helped participants achieve their desired movements. The augmented environment enabled intuitive modification and fabrication with an understanding of mechanical movement. Based on the tool development and the user study, we discuss implications for mechanism prototyping with augmented reality and computational support.

c.light: A Tool for Exploring Light Properties in Early Design Stage

Monday 15:10-15:30 | Session of Tools for Designing | Room 514C

  • Kyeong Ah Jeong, Department of Industrial Design, KAIST
  • Eunjin Kim, Department of Industrial Design, KAIST
  • Taesu Kim, Department of Industrial Design, KAIST
  • Hyeon-Jeong Suk, Department of Industrial Design, KAIST

Although light has become an important design element, few techniques are available to explore shapes and light effects in early design stages. We present c.light, a design tool that consists of a set of modules and a mobile application for visualizing light in the physical world. It allows designers to easily fabricate both the tangible and intangible properties of a light without a technical barrier. We analyzed how c.light contributes to the ideation process of light design through a workshop. The results showed that c.light greatly expands designers’ capability to manipulate the intangible properties of light and, by doing so, facilitates a collaborative and inverted ideation process in early design stages. We expect that these results can enhance our understanding of how designers manipulate light in the physical world in early design stages and serve as a stepping stone for future tool development.

Projective Windows: Bringing Windows in Space to the Fingertip

Monday 15:10-15:30 | Session of Modelling AR & VR | Room 517A

  • Joon Hyub Lee, Department of Industrial Design, KAIST
  • Sang-Gyun An, Department of Industrial Design, KAIST
  • Yongkwan Kim, Department of Industrial Design, KAIST
  • Seok-Hyung Bae, Department of Industrial Design, KAIST

In augmented and virtual reality (AR and VR), there may be many 3D planar windows with 2D texts, images, and videos on them. However, managing the position, orientation, and scale of such a window in an immersive 3D workspace can be difficult. Projective Windows strategically uses the absolute and apparent sizes of the window at various stages of the interaction to enable the grabbing, moving, scaling, and releasing of the window in one continuous hand gesture. With it, the user can quickly and intuitively manage and interact with windows in space without any controller hardware or dedicated widget. Through an evaluation, we demonstrate that our technique is performant and preferable, and that projective geometry plays an important role in the design of spatial user interfaces.
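
One projective relationship such a design can lean on, in toy form (our illustration, not the authors’ code): to keep a window’s apparent size constant as it moves in depth, its scale must grow in proportion to its distance from the viewpoint:

```python
def scale_for_constant_apparent_size(base_scale, base_distance, new_distance):
    """A window twice as far away must be twice as large to subtend the
    same visual angle (small-angle approximation)."""
    return base_scale * (new_distance / base_distance)

print(scale_for_constant_apparent_size(1.0, 0.5, 2.0))  # 4.0
```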

Impact Activation Improves Rapid Button Pressing

Monday 17:10-17:30 | Session of Interaction Under Pressure | Room 516E

  • Sunjun Kim, Aalto University, Finland; School of Culture Technology, KAIST
  • Byungjoo Lee, School of Culture Technology, KAIST
  • Antti Oulasvirta, Aalto University, Finland

The activation point of a button is defined as the depth at which it invokes a make signal. Regular buttons are activated during the downward stroke, which occurs within the first 20 ms of a press. The remaining portion, which can be as long as 80 ms, has not been exploited for button activation owing to mechanical limitations. This paper presents a technique, and empirical evidence for it, called Impact Activation, in which the button is activated at its maximal impact point. We argue that this technique is particularly advantageous in rapid, repetitive button pressing, which is common in gaming and music applications. We report on a study of rapid button pressing wherein users’ timing accuracy improved significantly with the use of Impact Activation. The technique can be implemented for modern push-buttons and capacitive sensors that generate a continuous signal.
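
A minimal sketch of the idea on a continuous button signal, assuming normalized depth samples and illustrative thresholds (the authors’ implementation details may differ):

```python
def impact_activation(depth_stream, press_min=0.2, noise=0.02, release=0.05):
    """Yield one activation event per press, fired at the maximal-impact
    point: the deepest sample of the stroke, detected as soon as the
    depth starts to rebound. depth_stream yields samples in [0, 1]."""
    peak, armed = 0.0, True
    for t, depth in enumerate(depth_stream):
        if depth > peak:
            peak = depth
        elif armed and peak > press_min and depth < peak - noise:
            yield t - 1, peak      # previous sample was the impact peak
            armed = False          # one event per press
        if depth < release:        # button released: re-arm
            peak, armed = 0.0, True

press = [0.0, 0.2, 0.5, 0.8, 0.75, 0.4, 0.0]
print(list(impact_activation(press)))  # [(3, 0.8)]
```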

Agile 3D Sketching with Air Scaffolding

Monday 16:10-16:30 | Session of Creativity, Sketching & Animation | Room 517D

  • Yongkwan Kim, Department of Industrial Design, KAIST
  • Sang-Gyun An, Department of Industrial Design, KAIST
  • Joon Hyub Lee, Department of Industrial Design, KAIST
  • Seok-Hyung Bae, Department of Industrial Design, KAIST

Hand motion and pen drawing can be intuitive and expressive inputs for professional digital 3D authoring. However, their inherent limitations have hampered wider adoption. 3D sketching using hand motion is rapid but rough, and 3D sketching using pen drawing is delicate but tedious. Our new 3D sketching workflow combines these two in a complementary manner. The user makes quick hand motions in the air to generate approximate 3D shapes, and uses them as scaffolds on which to add details via pen-based 3D sketching on a tablet device. Our air scaffolding technique and corresponding algorithm extract only the intended shapes from unconstrained hand motions. Then, the user sketches 3D ideas by defining sketching planes on these scaffolds while appending new scaffolds, as needed. A user study shows that our progressive and iterative workflow enables more agile 3D sketching compared to ones using either hand motion or pen drawing alone.

Understanding the Effect of In-Video Prompting on Learners and Instructors

Tuesday 9:00-9:20 | Session of Learning and training | Room 517D

  • Hyungyu Shin, School of Computing, KAIST
  • Eun-Young Ko, School of Computing, KAIST
  • Joseph Jay Williams, National University of Singapore, Singapore
  • Juho Kim, School of Computing, KAIST

Online instructional videos are ubiquitous, but it is difficult for instructors to gauge learners’ experience and their level of comprehension or confusion regarding the lecture video. Moreover, learners watching the videos may become disengaged or fail to reflect and construct their own understanding. This paper explores instructor and learner perceptions of in-video prompting where learners answer reflective questions while watching videos. We conducted two studies with crowd workers to understand the effect of prompting in general, and the effect of different prompting strategies on both learners and instructors. Results show that some learners found prompts to be useful checkpoints for reflection, while others found them distracting. Instructors reported the collected responses to be generally more specific than what they have usually collected. Also, different prompting strategies had different effects on the learning experience and the usefulness of responses as feedback.

Moving Target Selection: A Cue Integration Model

Tuesday 11:00-11:20 | Session of Buttons, Targets, Sliders | Room 518AB

  • Byungjoo Lee, School of Culture Technology, KAIST
  • Sunjun Kim, School of Culture Technology, KAIST; Aalto University, Finland
  • Antti Oulasvirta, Aalto University, Finland
  • Jong-In Lee, School of Culture Technology, KAIST; Aalto University, Finland
  • Eunji Park, School of Culture Technology, KAIST

This paper investigates a common task requiring temporal precision: the selection of a rapidly moving target on display by invoking an input event when it is within some selection window. Previous work has explored the relationship between accuracy and precision in this task, but the role of visual cues available to users has remained unexplained. To expand modeling of timing performance to multimodal settings, common in gaming and music, our model builds on the principle of probabilistic cue integration. Maximum likelihood estimation (MLE) is used to model how different types of cues are integrated into a reliable estimate of the temporal task. The model deals with temporal structure (repetition, rhythm) and the perceivable movement of the target on display. It accurately predicts error rate in a range of realistic tasks. Applications include the optimization of difficulty in game-level design.
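
The integration principle, in its standard form for independent Gaussian cues, weights each cue by its reliability (inverse variance); a small sketch with purely illustrative numbers:

```python
def mle_integrate(cues):
    """Maximum-likelihood integration of independent Gaussian cues.
    cues: (estimate, variance) pairs, e.g. a visual estimate of when the
    target crosses the selection window and a rhythm-based estimate from
    the task's temporal structure."""
    weights = [1.0 / var for _, var in cues]
    total = sum(weights)
    estimate = sum(w * est for (est, _), w in zip(cues, weights)) / total
    variance = 1.0 / total  # never worse than the most reliable single cue
    return estimate, variance

# e.g. visual cue (0.52 s, var 0.004) + rhythmic cue (0.50 s, var 0.001)
print(mle_integrate([(0.52, 0.004), (0.50, 0.001)]))  # (0.504, 0.0008)
```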

Neuromechanics of a Button Press

Tuesday 11:20-11:40 | Session of Buttons, Targets, Sliders | Room 518AB

  • Antti Oulasvirta, Aalto University, Finland
  • Sunjun Kim, Aalto University, Finland
  • Byungjoo Lee, School of Culture Technology, KAIST

To press a button, a finger must push down and pull up with the right force and timing. How the motor system succeeds in button-pressing, in spite of neural noise and without direct access to the button’s mechanism, is poorly understood. This paper investigates a unifying account based on neuromechanics. Mechanics is used to model the muscles controlling the finger that contacts the button. Neurocognitive principles are used to model how the motor system learns appropriate muscle activations over repeated strokes while relying on degraded sensory feedback. Neuromechanical simulations yield a rich set of predictions for kinematics, dynamics, and user performance and may aid in understanding and improving input devices. We present a computational implementation and evaluate predictions for common button types.

Enhancing Online Problems Through Instructor-Centered Tools for Randomized Experiments

Tuesday 11:00-11:20 | Session of Automated and Crowd Supports for Learning | Room 518C

  • Joseph Jay Williams, National University of Singapore, Singapore
  • Anna N. Rafferty, Carleton College, USA
  • Dustin Tingley, Harvard, USA
  • Andrew Ang, Harvard, USA
  • Walter S. Lasecki, University of Michigan, USA
  • Juho Kim, School of Computing, KAIST

Digital educational resources could enable the use of randomized experiments to answer pedagogical questions that instructors care about, taking academic research out of the laboratory and into the classroom. We take an instructor-centered approach to designing tools for experimentation that lower the barriers for instructors to conduct experiments. We explore this approach through DynamicProblem, a proof-of-concept system for experimentation on components of digital problems, which provides interfaces for authoring experiments on explanations, hints, feedback messages, and learning tips. To rapidly turn data from experiments into practical improvements, the system uses an interpretable machine learning algorithm to analyze students’ ratings of which conditions are helpful, and presents conditions to future students in proportion to the evidence that they are rated more highly. We evaluated the system by collaboratively deploying experiments in the courses of three mathematics instructors. They reported benefits in reflecting on their pedagogy and in having a new method for improving online problems for future students.
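
The “present in proportion to the evidence” step can be sketched in a Thompson-sampling style (our stand-in illustration; the paper’s interpretable algorithm may differ):

```python
import random

def choose_condition(ratings):
    """Pick which condition (e.g. which hint) to show the next student:
    bootstrap-sample a mean rating per condition and present the one whose
    sample is highest, so better-rated conditions are shown in proportion
    to the evidence that they are best."""
    def sampled_mean(rs):
        boot = [random.choice(rs) for _ in rs]  # bootstrap resample
        return sum(boot) / len(boot)
    return max(ratings, key=lambda c: sampled_mean(ratings[c]))

ratings = {"hint_A": [4, 5, 3, 5], "hint_B": [2, 3, 3, 2]}
print(choose_condition(ratings))  # usually "hint_A", occasionally "hint_B"
```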

Collaborative Dynamic Queries: Supporting Distributed Small Group Decision-making

Tuesday 14:00-14:20 | Session of Distributed Work | Room 516C

  • Sungsoo (Ray) Hong, University of Washington, USA
  • Minhyang (Mia) Suh, University of Washington, USA
  • Nathalie Henry Riche, Microsoft Research, USA
  • Jooyoung Lee, School of Computing, KAIST
  • Juho Kim, School of Computing, KAIST
  • Mark Zachry, University of Washington, USA

Communication is critical in small-group decision-making processes, during which each member must be able to express preferences to reach consensus. Finding consensus can be difficult when each member in a group has a perspective that potentially conflicts with those of others. To support groups attempting to harmonize diverse preferences, we propose Collaborative Dynamic Queries (C-DQ), a UI component that enables a group to filter queries over decision criteria while being aware of others’ preferences. To understand how C-DQ affects a group’s behavior and perception in the decision-making process, we conducted two studies with groups who were prompted to make decisions together on mobile devices in a dispersed, synchronous setting. In Study 1, we found that showing group preferences with C-DQ helped groups communicate more efficiently and effectively. In Study 2, we found that filtering candidates based on each member’s own filter range further improved a group’s communication efficiency and effectiveness.

BebeCode: Collaborative Child Development Tracking System

Wednesday 9:20-9:40 | Session of Children, Well-being, and Play | Room 518AB

  • Seokwoo Song, School of Computing, KAIST
  • Juho Kim, School of Computing, KAIST
  • Bumsoo Kang, School of Computing, KAIST
  • Wonjeong Park, Ewha Womans University
  • John Kim, KAIST

Continuously tracking young children’s development is important for parents because early detection of developmental delay can lead to better treatment through early intervention. Screening tests, often based on questions answered by a parent, are used to assess children’s development, but responses from only one parent can be subjective and even inaccurate due to limited memory and observations. In this work, we propose a collaborative child development tracking system, where screening test responses are collected through collaboration between parents or caregivers. We implement BebeCODE, a mobile system that encourages parents to independently answer all developmental questions for a given age and resolve disagreements through chatting, image/video sharing, or asking a third person. A 4-week deployment study of BebeCODE with 12 families found that parents disagreed on approximately 22% of questions regarding their children’s development and that BebeCODE helped them reach a consensus. Parents also reported that their awareness of their child’s development increased with BebeCODE.

Too Close and Crowded: Understanding Stress on Mobile Instant Messengers based on Proxemics

Wednesday 11:40-12:00 | Session of Roads and Crowds | Room 516AB

  • In-geon Shin, Department of Industrial Design, KAIST
  • Jin-min Seok, Department of Industrial Design, KAIST
  • Youn-kyung Lim, Department of Industrial Design, KAIST

Nowadays, mobile instant messaging (MIM) is a necessity for our private and public lives, but it has also been the cause of stress. In South Korea, MIM stress has become a serious social problem. To understand this stress, we conducted four focus groups with 20 participants under MIM stress. We initially discovered that MIM stress relates to how people perceive the territory in MIM. We then applied proxemics—the theory of human use of space—to the thematic analysis as the rationale. The data revealed two main themes: too close and too crowded. The participants were stressed due to design features that let strangers or crowds into their MIM applications and forced them to interact and share their status with them. Based on this finding, we propose a set of implications for designing anti-stress MIM applications.

ConceptScape: Collaborative Concept Mapping for Video Learning

Wednesday 14:00-14:20 | Session of Learning 2 | Room 517D

  • Ching Liu, National Tsing Hua University, Taiwan
  • Juho Kim, School of Computing, KAIST
  • Hao-Chuan Wang, National Tsing Hua University, Taiwan; University of California, USA

While video has become a widely adopted medium for online learning, existing video players provide limited support for navigation and learning. It is difficult to locate parts of the video that are linked to specific concepts. Also, most video players afford passive watching, thus making it difficult for learners with limited metacognitive skills to deeply engage with the content and reflect on their understanding. To support concept-driven navigation and comprehension of lecture videos, we present ConceptScape, a system that generates and presents a concept map for lecture videos. ConceptScape engages crowd workers to collaboratively generate a concept map by prompting them to externalize reflections on the video. We present two studies to show that (1) interactive concept maps can be useful tools for concept-based video navigation and comprehension, and (2) with ConceptScape, novice crowd workers can collaboratively generate complex concept maps that match the quality of those by experts.

HapCube: A Wearable Tactile Device to Provide Tangential and Normal Pseudo-Force Feedback on a Fingertip

Wednesday 16:00-16:20 | Session of Haptic Wearables | Room 517C

  • Hwan Kim, Department of Industrial Design, KAIST
  • HyeonBeom Yi, Department of Industrial Design, KAIST
  • Hyein Lee, Department of Industrial Design, KAIST
  • Woohun Lee, Department of Industrial Design, KAIST

Haptic devices allow a more immersive experience in Virtual and Augmented Reality. However, for a wider range of usage, they need to be miniaturized while maintaining the quality of haptic feedback. In this study, we used two kinds of human sensory illusions of vibration. The first illusion involves creating a virtual force (a pulling sensation) using asymmetric vibration, and the second involves imparting compliances with complex stress-strain curves (i.e., the force-displacement curves of mechanical keyboards) to a rigid object by changing the frequency and amplitude of vibration. Using these two illusions, we developed a wearable tactile device named HapCube, consisting of three orthogonal voice-coil actuators. Four measurement tests and four user tests confirmed that 1) a combination of two orthogonal asymmetric vibrations can provide a 2D virtual force in any tangential direction on a finger pad, and 2) a single voice-coil actuator can produce pseudo-force feedback of complex compliance curves in the normal direction.

Exploring Multimodal Watch-back Tactile Display using Wind and Vibration

Wednesday 16:20-16:40 | Session of Haptic Wearables | Room 517C

  • Youngbo Aram Shim, School of Computing, KAIST
  • Jaeyeon Lee, School of Computing, KAIST
  • Geehyuk Lee, School of Computing, KAIST

A tactile display on the back of a smartwatch is an attractive output option; however, its channel capacity is limited owing to the small contact area. To expand the channel capacity, we considered using two perceptually distinct types of stimuli, wind and vibration, together on the same skin area. The result is a multimodal tactile display that combines wind and vibration to create “colored” tactile sensations on the wrist. As a first step toward this goal, we conducted four user experiments with a wind-vibration tactile display to examine different ways of combining wind and vibration: individual, sequential, and simultaneous. The results revealed that the sequential combination of wind and vibration exhibits the highest potential, with an information transfer capacity of 3.29 bits. In particular, the transition of tactile modality was perceived with an accuracy of 98.52%. These results confirm the feasibility and potential of a multimodal tactile display combining wind and vibration.
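
Information transfer figures such as the 3.29 bits above are conventionally estimated as the mutual information of a stimulus-response confusion matrix; a small sketch with a made-up matrix:

```python
import math

def information_transfer(confusion):
    """Mutual information (bits) of a stimulus-response confusion matrix:
    IT = sum over (s, r) of p(s, r) * log2(p(s, r) / (p(s) * p(r)))."""
    total = sum(sum(row) for row in confusion)
    col_sums = [sum(row[j] for row in confusion) for j in range(len(confusion[0]))]
    it = 0.0
    for row in confusion:
        p_s = sum(row) / total
        for j, count in enumerate(row):
            if count:
                p_sr, p_r = count / total, col_sums[j] / total
                it += p_sr * math.log2(p_sr / (p_s * p_r))
    return it

print(information_transfer([[9, 1], [2, 8]]))  # ≈ 0.40 bits for 2 stimuli
```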

PokeRing: Notifications by Poking Around the Finger

Wednesday 16:40-17:00 | Session of Haptic Wearables | Room 517C

  • Seungwoo Je, Department of Industrial Design, KAIST
  • Minkyeong Lee, Department of Industrial Design, KAIST
  • Yoonji Kim, Department of Industrial Design, KAIST
  • Liwei Chan, National Chiao Tung University, Taiwan
  • Xing-Dong Yang, Dartmouth College, USA
  • Andrea Bianchi, Department of Industrial Design, KAIST

Smart rings are ideal for subtle and always-available haptic notifications due to their direct contact with the skin. Previous researchers have highlighted the feasibility of haptic technology in smart rings and their promise in delivering noticeable stimulations by poking a limited set of planar locations on the finger. However, the full potential of poking as a mechanism to deliver richer and more expressive information on the finger has been overlooked. Through three studies with a total of 76 participants, we informed the design of PokeRing, a smart ring capable of delivering information via stimulating eight different locations around the index finger’s proximal phalanx. We report our evaluation of the performance of PokeRing in semi-realistic wearable conditions (standing and walking), and its effective usage for information transfer with twenty-one spatio-temporal patterns designed by six interaction designers in a workshop. Finally, we present three applications that exploit PokeRing’s notification usages.

RecipeScape: An Interactive Tool for Analyzing Cooking Instructions at Scale

Thursday 9:00-9:20 | Session of Crowdsourcing, data mining, dealing with information | Room 516E

  • Minsuk Chang, School of Computing, KAIST
  • Leonore V. Guillain, École Polytechnique Fédérale de Lausanne, Switzerland
  • Hyeungshik Jung, School of Computing, KAIST
  • Vivian M. Hare, Stanford University, USA; Chan Zuckerberg Initiative, USA
  • Juho Kim, School of Computing, KAIST
  • Maneesh Agrawala, Stanford University, USA

For cooking professionals and culinary students, understanding cooking instructions is an essential yet demanding task. Common tasks include categorizing different approaches to cooking a dish and identifying usage patterns of particular ingredients or cooking methods, all of which require extensive browsing and comparison of multiple recipes. However, no existing system provides support for such in-depth and at-scale analysis. We present RecipeScape, an interactive system for browsing and analyzing the hundreds of recipes of a single dish available online. We also introduce a computational pipeline that extracts cooking processes from recipe text and calculates a procedural similarity between them. To evaluate how RecipeScape supports culinary analysis at scale, we conducted a user study with cooking professionals and culinary students with 500 recipes for two different dishes. Results show that RecipeScape clusters recipes into distinct approaches, and captures notable usage patterns of ingredients and cooking actions.
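
One way to make procedural similarity concrete (a stand-in for the paper’s comparison of parsed cooking structures, which we do not reproduce here): treat each recipe as a sequence of (action, ingredient) steps and compute a normalized edit distance:

```python
def procedural_similarity(a, b):
    """1 - normalized Levenshtein distance between two step sequences."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,       # delete a step
                          d[i][j - 1] + 1,       # insert a step
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitute
    return 1 - d[m][n] / max(m, n)

a = [("boil", "water"), ("add", "pasta"), ("drain", "pasta")]
b = [("boil", "water"), ("add", "pasta"), ("fry", "garlic")]
print(procedural_similarity(a, b))  # ≈ 0.67
```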

Thor’s Hammer: An Ungrounded Force Feedback Device Utilizing Propeller-Induced Propulsive Force

Thursday 12:00-12:20 | Session of Force Feedback in VR | Room 516D

  • Seongkook Heo, University of Toronto, Canada
  • Christina Chung, University of Toronto, Canada
  • Geehyuk Lee, School of Computing, KAIST
  • Daniel Wigdor, University of Toronto, Canada

We present a new handheld haptic device, Thor’s Hammer, which uses propeller propulsion to generate ungrounded, 3-DOF force feedback. Thor’s Hammer has six motors and propellers that generate strong thrusts of air without the need for physical grounding or heavy air compressors. With its location and orientation tracked by an optical tracking system, the system can exert forces in arbitrary directions regardless of the device’s orientation. Our technical evaluation shows that Thor’s Hammer can apply up to 4 N of force in arbitrary directions with less than 0.11 N and 3.9° of average magnitude and orientation error, respectively. We also present virtual reality applications that can benefit from the force feedback provided by Thor’s Hammer. Using these applications, we conducted a preliminary user study, and participants felt the experience was more realistic and immersive with the force feedback.

To Distort or Not to Distort: Distance Cartograms in the Wild

Thursday 14:00-14:20 | Session of Visualization of Space and Shape | Room 518C

  • Sungsoo (Ray) Hong, University of Washington, USA
  • Min-Joon Yoo, New York University, USA
  • Bonnie Chinh, University of Washington, USA
  • Amy Han, Swarthmore, USA
  • Sarah Battersby, Tableau Software, USA
  • Juho Kim, School of Computing, KAIST

Distance Cartograms (DC) distort geographical features so that the measured distance between a single location and any other location on a map indicates absolute travel time. Although studies show that users can efficiently assess travel time with DC, the distortion applied in DC may confuse users, and its usefulness “in the wild” has been unknown. To understand how real-world users perceive DC’s benefits and drawbacks, we devised techniques that improve DC’s presentation (preserving topological relationships among map features while aiming to retain shapes) and scalability (presenting accurate live travel times). We developed a DC-enabled system with these techniques and deployed it to 20 participants for 4 weeks. During this period, participants spent, on average, more than 50% of their time with DC as opposed to a standard map. Participants felt DC to be intuitive and useful for assessing travel time, and they indicated an intention to adopt DC in their real-life scenarios.

Demonstrations

HapCube: A Wearable Tactile Device to Provide Tangential and Normal Pseudo-Force Feedback on a Fingertip

Monday 18:00-21:00 | D304 | Room 220BC

  • Hwan Kim, Department of Industrial Design, KAIST
  • HyeonBeom Yi, Department of Industrial Design, KAIST
  • Hyein Lee, Department of Industrial Design, KAIST
  • Woohun Lee, Department of Industrial Design, KAIST
Exploring Multimodal Watch-back Tactile Display using Wind and Vibration

Monday 18:00-21:00 | D305 | Room 220BC

  • Youngbo Aram Shim, School of Computing, KAIST
  • Jaeyeon Lee, School of Computing, KAIST
  • Geehyuk Lee, School of Computing, KAIST
Thor’s Hammer: An Ungrounded Force Feedback Device Utilizing Propeller-Induced Propulsive Force

Monday 18:00-21:00 | D110 | Room 220BC

  • Seongkook Heo, University of Toronto, Canada
  • Christina Chung, University of Toronto, Canada
  • Geehyuk Lee, School of Computing, KAIST
  • Daniel Wigdor, University of Toronto, Canada
Agile 3D Sketching with Air Scaffolding

Monday 18:00-21:00 | D411 | Room 220BC

  • Yongkwan Kim, Department of Industrial Design, KAIST
  • Sang-Gyun An, Department of Industrial Design, KAIST
  • Joon Hyub Lee, Department of Industrial Design, KAIST
  • Seok-Hyung Bae, Department of Industrial Design, KAIST
Projective Windows: Bringing Windows in Space to the Fingertip

Monday 18:00-21:00 | D412 | Room 220BC

  • Joon Hyub Lee, Department of Industrial Design, KAIST
  • Sang-Gyun An, Department of Industrial Design, KAIST
  • Yongkwan Kim, Department of Industrial Design, KAIST
  • Seok-Hyung Bae, Department of Industrial Design, KAIST

Late Breaking Work

TNT: Exploring Pseudo Social Reminding for Effective Task Management

Tuesday 10:20-11:00 / 15:20-16:00 | LBW035 | Room 220BC

  • Wonyoung Shin, Graduate School of Knowledge Service Engineering, KAIST
  • Soowon Kang, Graduate School of Knowledge Service Engineering, KAIST
  • Inyeop Kim, Graduate School of Knowledge Service Engineering, KAIST
  • Mun Yong Yi, Department of Industrial&Systems Engineering, KAIST
  • Uichin Lee, Graduate School of Knowledge Service Engineering, KAIST
Exprgram: A Language Learning Interface for Mastering Pragmatic Competence

Tuesday 10:20-11:00 / 15:20-16:00 | LBW057 | Room 220BC

  • Kyung Je Jo, School of Computing, KAIST
  • John Joon Young Chung, School of Computing, KAIST
  • Juho Kim, School of Computing, KAIST

Mastering pragmatic competence, the ability to use language in a contextually appropriate way, is one of the most challenging parts of foreign language learning. Despite its importance, existing language learning systems often focus on linguistic components such as grammar, vocabulary, or pronunciation. Consequently, foreign language learners may generate grammatically flawless speech that is contextually inappropriate. With the diverse socio-cultural contexts captured in real-life settings, videos at scale can serve as great material for learners to acquire pragmatic competence. We introduce Exprgram, a web-based video learning interface that helps learners master pragmatic competence. With Exprgram, learners can raise their context-awareness, practice generating an alternative expression, and learn alternative expressions for the given context. Our user study with 12 advanced English learners shows potential in our learnersourcing approach to collecting descriptive context annotations and diverse alternative expressions.

Enhancing Storytelling Experience with Story-Aware Interactive Puppet

Tuesday 10:20-11:00 / 15:20-16:00 | LBW076 | Room 220BC

  • Bogyeong Kim, Department of Industrial Design, KAIST
  • Jaehoon Pyun, Department of Industrial Design, KAIST
  • Woohun Lee, Department of Industrial Design, KAIST

Puppets are often used in storytelling, but few studies have examined how puppets shape the storytelling experience. In this paper, we introduce the concept of an ideal puppet for storytelling and discuss directions for puppet development. The ideal puppet can automatically animate itself in line with a story plot and positively influence the interactions in the storytelling dynamic. To see how children and parents would accept the concept, we created a preliminary prototype and conducted a user study using the Wizard-of-Oz method. Participants experienced enhanced immersion and increased communication with each other through the automatic movement of the puppet. They expected various roles from the puppet, such as actor, support tool, and friend, which made various usage scenarios possible. The puppet should be developed to strengthen these advantages and support various usage scenarios, especially by balancing the needs for both automation and manual manipulation.

Designing Health-Promoting Technologies with IoT at Home

Tuesday 10:20-11:00 / 15:20-16:00 | LBW083 | Room 220BC

  • Eulim Sull, Department of Industrial Design, KAIST
  • Youn-kyung Lim, Department of Industrial Design, KAIST

Health-related IT products (e.g., Fitbit) employ persuasive technologies to reinforce an individual’s desired behaviors. While these products are dedicated to certain health behaviors, such as walking or specific types of sports, IoT at home can be integrated more broadly throughout one’s daily life. To address this opportunity, this paper sheds light on the use of domestic IoT to foster changes toward healthy behaviors through a 3-week explorative field trial. We report two major goals of health-promoting technologies using IoT, as well as the different persuasive techniques suited to the temporal phases before, during, and after the health behaviors.

Button++: Designing Risk-aware Smart Buttons

Tuesday 10:20-11:00 / 15:20-16:00 | LBW116 | Room 220BC

  • Eunji Park, School of Culture Technology, KAIST
  • Hyunju Kim, School of Culture Technology, KAIST
  • Byungjoo Lee, School of Culture Technology, KAIST

Buttons are the most commonly used input devices. So far, the goal of designers has been to provide a passive button that accepts user input as easily as possible; accordingly, based on Fitts’ law, they maximize the button’s size and minimize the distance to it. This paper proposes Button++, a novel method for designing smart buttons that actively judge the user’s movement risk and selectively trigger input. Based on the latest model of moving target selection, Button++ tracks the user’s submovement just before the click and infers the expected error rate that would occur if the user repeatedly clicked with the same movement. This allows designers to make buttons that actively respond to the amount of risk in the user’s input movement.
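As an illustration of the triggering logic, the sketch below gates a click on a predicted miss rate. It substitutes a plain Gaussian fit over recent click endpoints for the moving-target-selection model the paper actually builds on, and all names (`predicted_error_rate`, `risk_aware_trigger`, the 5% threshold) are assumptions for this example.

```python
import math
import statistics

def predicted_error_rate(endpoints, button_center, button_width):
    """Probability that a click drawn from the user's recent endpoint
    distribution misses a 1-D button (Gaussian stand-in model)."""
    mu = statistics.mean(endpoints)
    sigma = statistics.stdev(endpoints)
    cdf = lambda x: 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2))))
    left = button_center - button_width / 2
    right = button_center + button_width / 2
    return 1.0 - (cdf(right) - cdf(left))   # probability mass off the button

def risk_aware_trigger(endpoints, center, width, max_error=0.05):
    """Fire the button only when the expected miss rate is acceptable."""
    return predicted_error_rate(endpoints, center, width) <= max_error

# A jittery approach toward a 10-unit-wide button centered at 0:
print(risk_aware_trigger([-4.0, 3.5, 1.2, -2.8, 4.1], 0.0, 10.0))
```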

Detecting Personality Unobtrusively from Users' Online and Offline Workplace Behaviors

Wednesday 10:20-11:00 / 15:20-16:00 | LBW515 | Room 220BC

  • Seoyoung Kim, School of Computing, KAIST
  • Jiyoun Ha, School of Computing, KAIST 
  • Juho Kim, School of Computing, KAIST

Personality affects various social behaviors of an individual, such as collaboration, group dynamics, and social relationships within the workplace. However, existing methods for assessing personality have shortcomings: self-assessment is cumbersome due to repeated questionnaires and error-prone due to self-report bias, while automatic, data-driven personality detection raises privacy concerns because it requires excessive personal data. We present an unobtrusive method for detecting personality within the workplace that combines a user’s online and offline behaviors. We report insights from analyzing data collected from four different workplaces with 37 participants, which show that combining online and offline data allows a more complete reflection of an individual’s personality. We also present possible applications of unobtrusive personality detection in the workplace.

Micro-NGO: Tackling Wicked Social Problems with Problem Solving and Action Planning Support in Chat

Wednesday 10:20-11:00 / 15:20-16:00 | LBW559 | Room 220BC

  • Sung-Chul Lee, College of Business, KAIST
  • Jihee Kim, College of Business, KAIST
  • Juho Kim, School of Computing, KAIST

When a group of citizens wants to tackle a social problem online, they need to discuss the problem, possible solutions, and concrete actions. Instant messengers are a common tool in this setting, as they support free and unstructured discussion, but tackling complex social problems often calls for structured discussion. In this paper, we present Micro-NGO, a chat-based online discussion platform with built-in support for (1) the problem-solving process and (2) the action planning process. To scaffold the process, Micro-NGO adopts a question prompting strategy that asks users relevant questions at each stage of the problem-solving process. Users can answer the questions and vote for the best answer while they discuss freely in the chat room. As an informal evaluation, we conducted a pilot study with two groups (n=7). The participants held a discussion while collectively answering the question prompts and reached a consensus to send a petition letter about campus issues to the relevant personnel.

Interaction Restraint: Enforcing Adaptive Cognitive Tasks to Restrain Problematic User Interaction

Wednesday 10:20-11:00 / 15:20-16:00 | LBW553 | Room 220BC

  • Joonyoung Park, Graduate School of Knowledge Service Engineering, KAIST
  • Jin Yong Sim, Graduate School of Knowledge Service Engineering, KAIST
  • Jaejeung Kim, Graduate School of Knowledge Service Engineering, KAIST
  • Mun Yong Yi, Graduate School of Knowledge Service Engineering, KAIST
  • Uichin Lee, Graduate School of Knowledge Service Engineering, KAIST

Identifying Everyday Objects with a Smartphone Knock

Wednesday 10:20-11:00 / 15:20-16:00 | LBW606 | Room 220BC

  • Taesik Gong, School of Computing, KAIST
  • Hyunsung Cho, School of Computing, KAIST
  • Bowon Lee, Inha University
  • Sung-Ju Lee, School of Computing, KAIST

We use smartphones and their apps for almost every daily activity. For instance, to purchase a bottle of water online, a user has to unlock the smartphone, find the right e-commerce app, search for the water product by name, and finally place an order. This procedure requires manual, often cumbersome, user input, but it could be significantly simplified if the smartphone could identify an object and automatically process this routine. We present Knocker, an object identification technique that only uses commercial off-the-shelf smartphones. The basic idea of Knocker is to leverage the unique set of responses that occur when a user knocks on an object with a smartphone, which consists of the sound generated by the knock and the changes in accelerometer and gyroscope values. Knocker employs a machine learning classifier to identify an object from the knock responses. A user study was conducted to evaluate the feasibility of Knocker with 14 objects in both quiet and noisy environments. The result shows that Knocker identifies objects with up to 99.7% accuracy.
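The described pipeline — summarize the knock’s audio and inertial response, then classify — can be sketched as follows. The feature choices, the random-forest classifier, and the synthetic training data here are assumptions for illustration; the paper’s actual feature set and classifier may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def knock_features(audio, accel, gyro):
    """Simple summary statistics over the knock response signals."""
    def stats(sig):
        return [sig.mean(), sig.std(), sig.min(), sig.max()]
    return np.array(stats(audio) + stats(accel) + stats(gyro))

# Hypothetical training set: one knock per row, labeled by object.
rng = np.random.default_rng(0)
X = np.stack([knock_features(rng.standard_normal(4410),  # ~0.1 s of audio
                             rng.standard_normal(50),    # accelerometer
                             rng.standard_normal(50))    # gyroscope
              for _ in range(20)])
y = ["bottle"] * 10 + ["book"] * 10

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print(clf.predict(X[:1]))  # identify the object behind a new knock
```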

Actuating a Monitor for Posture Changes

Wednesday 10:20-11:00 / 15:20-16:00 | LBW606 | Room 220BC

  • Joongi Shin, Department of Industrial Design, KAIST
  • Woohyeok Choi
  • Uichin Lee, Graduate School of Knowledge Service Engineering, KAIST
  • Daniel Saakes, Department of Industrial Design, KAIST

The position and orientation of a monitor affect users’ behavior at their desk. In this study, we explored and designed six types of interactions between an actuated monitor and a user to induce posture changes. We built a virtual monitor that simulates the motions of an actuated monitor and slowly moves in the opposite direction of unbalanced sitting postures. We conducted an explorative study with eight participants, which showed participants’ responses and step-by-step posture changes toward balanced sitting postures. As a contribution, we share considerations for designing monitor actuations that induce posture intervention.
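A minimal sketch of one such interaction — drifting the virtual monitor opposite to the user’s lean so the user re-centers — assuming a simple proportional rule; `update_monitor_yaw` and its gains are hypothetical stand-ins, not the study’s six designed interactions.

```python
def update_monitor_yaw(current_yaw, lean_deg, gain=0.3, step=0.05):
    """Nudge the virtual monitor's yaw opposite to the user's lean,
    capped per frame so the motion stays slow and unobtrusive."""
    target = -gain * lean_deg            # oppose the unbalanced posture
    delta = max(-step, min(step, target - current_yaw))
    return current_yaw + delta

# A user leaning 10 degrees right: the monitor drifts slowly toward -3.
yaw = 0.0
for _ in range(100):
    yaw = update_monitor_yaw(yaw, lean_deg=10.0)
print(round(yaw, 2))
```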

Thermal Interaction with a Voice-based Intelligent Agent

Wednesday 10:20-11:00 / 15:20-16:00 | LBW631 | Room 220BC

  • Seyeong Kim, Department of Industrial Design, KAIST
  • Yea-kyung Row, Department of Industrial Design, KAIST
  • Tek-Jin Nam, Department of Industrial Design, KAIST

Recently, voice-based intelligent agents (VIAs), such as Alexa, Siri, and Bixby, have become popular. One of the interaction challenges with VIAs is that it is difficult to deliver rich information, experience, and meaning via a voice-only communication channel. We propose interactive thermal augmentation to address this challenge. We developed a prototype system and conducted a user study to investigate the effects of thermal interaction in a VIA interaction context. The preliminary study results revealed that: 1) the thermal interface helped the participants understand the information better; 2) the integration of heat and sound provided an immersive and engaging experience; and 3) a thermal stimulus worked as an additional feedback channel that supplements the voice interface. We discuss the potential of and considerations for adopting thermal interaction to enrich people’s experiences with a VIA.

Video Showcase

Rolling Graphics: Create Graphics on the Cross Section of a Roll Cake

Wednesday 17:30-18:30 | Video Showcase | Room 517D

  • Joongi Shin, Department of Industrial Design, KAIST
  • Maria Jose, Department of Industrial Design, KAIST
  • Su Ah Han, Department of Industrial Design, KAIST
  • Moojin Joh, Department of Industrial Design, KAIST
  • Daniel Saakes, Department of Industrial Design, KAIST

Let’s take photos with KAIST HCI friends! Take a photo striking the HCI KAIST pose as shown in the example and upload it to Instagram with the hashtags #HCIKAIST@CHI2019 or #KAISTNIGHT.