CHI 2023

DATE      23 April – 28 April 2023
LOCATION  Hamburg, Germany | Hybrid

We are excited to share good news! At CHI 2023, KAIST recorded a total of 21 Full Paper publications, 6 Late-Breaking Works, 4 Student Game Competition entries, 2 Interactivity demos, and 6 Workshop papers. Congratulations on this outstanding achievement!

For details about the publications featured at the conference, please refer to the list below.

Paper Publications

FlowAR: How Different Augmented Reality Visualizations of Online Fitness Videos Support Flexible Exercise Routines

CHI'23

Hye-Young Jo, Laurenz Seidel, Michel Pahud, Mike Sinclair, Andrea Bianchi

Online fitness video tutorials are an increasingly popular way to stay fit at home without a personal trainer. However, to keep the screen playing the video in view, users typically disrupt their balance and break the motion flow — two main pillars for the correct execution of yoga poses. While past research partially addressed this problem, these approaches supported only a limited view of the instructor and simple movements. To enable the fluid execution of complex full-body yoga exercises, we propose FlowAR, an augmented reality system for home workouts that shows training video tutorials as always-present virtual static and dynamic overlays around the user. We tested different overlay layouts in a study with 16 participants, using motion capture equipment for baseline performance. Then, we iterated the prototype and tested it in a furnished lab simulating home settings with 12 users. Our results highlight the advantages of different visualizations and the system’s general applicability. 


AutomataStage: An AR-mediated Creativity Support Tool for Hands-on Multidisciplinary Learning

CHI'23

Yunwoo Jeong, Hyungjun Cho, Taewan Kim, Tek-Jin Nam

Creativity support tools can enhance hands-on multidisciplinary learning by drawing interest to the process of creating an outcome. We present AutomataStage, an AR-mediated creativity support tool for hands-on multidisciplinary learning. AutomataStage utilizes a video see-through interface to support the creation of Interactive Automata. The combination of building blocks and low-cost materials increases expressiveness. A generative design method and a one-to-one guide support the idea development process. It also provides a hardware see-through feature, with which internal parts and circuits can be seen, and an operational see-through feature that shows the operation in real time. A visual programming method based on a state transition diagram supports iteration throughout the creation process. A user study shows that AutomataStage enabled students to create diverse Interactive Automata within 40-minute sessions. By creating Interactive Automata, the participants could learn the basic concepts of the components. See-through features allowed active exploration with interest while integrating the components. We discuss the implications of hands-on tools with interactive and kinetic content beyond multidisciplinary learning.


It is Okay to be Distracted: How Real-time Transcriptions Facilitate Online Meeting with Distraction

CHI'23

Seoyun Son, Junyoung Choi, Sunjae Lee, Jean Y Song, Insik Shin

Online meetings are indispensable in collaborative remote work environments, but they are vulnerable to distractions due to their distributed and location-agnostic nature. While distraction often leads to a decrease in online meeting quality due to loss of engagement and context, natural multitasking has positive tradeoff effects, such as increased productivity within a given time unit. In this study, we investigate the impact of real-time transcriptions (i.e., full-transcripts, summaries, and keywords) as a solution to help facilitate online meetings during distracting moments while still preserving multitasking behaviors. Through two rounds of controlled user studies, we qualitatively and quantitatively show that people can better catch up with the meeting flow and feel less interfered with when using real-time transcriptions. The benefits of real-time transcriptions were more pronounced after distracting activities. Furthermore, we reveal additional impacts of real-time transcriptions (e.g., supporting recalling contents) and suggest design implications for future online meeting platforms where these could be adaptively provided to users with different purposes.


RoutineAid: Externalizing Key Design Elements to Support Daily Routines of Individuals with Autism

CHI'23

Bogoan Kim, Sung-In Kim, Sangwon Park, Hee Jeong Yoo, Hwajung Hong, Kyungsik Han

Implementing structure into our daily lives is critical for maintaining health, productivity, and social and emotional well-being. New norms for routine management have emerged during the current pandemic, and individuals with autism in particular find it difficult to adapt to those norms. While much research has focused on the use of computer technology to support individuals with autism, little is known about ways of helping them establish and maintain “self-directed” routine structures. In this paper, we identify design requirements for an app that supports four key routine components (i.e., physical activity, diet, mindfulness, and sleep) through a formative study and develop RoutineAid, a gamified smartphone app that reflects these design requirements. The results of a two-month field study on design feasibility highlight two affordances of RoutineAid – the establishment of daily routines by facilitating micro-planning and the maintenance of daily routines through celebratory interactions. We discuss salient design considerations for the future development of daily routine management tools for individuals with autism.


OmniSense: Exploring Novel Input Sensing and Interaction Techniques on Mobile Device with an Omni-Directional Camera

CHI'23

Hui-Shyong Yeo, Erwin Wu, Daewha Kim, Juyoung Lee, Hyung-il Kim, Seo Young Oh, Luna Takagi, Woontack Woo, Hideki Koike, Aaron J Quigley

An omni-directional (360°) camera captures the entire viewing sphere surrounding its optical center. Such cameras are growing in use to create highly immersive content and viewing experiences. When such a camera is held by a user, the view includes the user’s hand grip, fingers, body pose, and face, as well as the surrounding environment, providing a complete understanding of the visual world and context around it. This capability opens up numerous possibilities for rich mobile input sensing. In OmniSense, we explore the broad input design space for mobile devices with a built-in omni-directional camera and categorize it into three sensing pillars: i) near device, ii) around device, and iii) surrounding device. We also explore potential use cases and applications that leverage these sensing capabilities to solve user needs. Following this, we develop a working system that puts these concepts into action. We studied the system in a technical evaluation and a preliminary user study to gain initial feedback and insights. Collectively these techniques illustrate how a single, omni-purpose sensor on a mobile device affords many compelling ways to enable expressive input, while also affording a broad range of novel applications that improve user experience during mobile interaction.
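For readers unfamiliar with 360° imagery, the sketch below shows the standard mapping from a 3D viewing direction to pixel coordinates in an equirectangular frame, the geometry that any sensing pipeline built on an omni-directional camera rests on. This is generic textbook geometry, not code from the OmniSense system.

```python
import numpy as np

def direction_to_equirect(d, width, height):
    """Map a 3D viewing direction to pixel coordinates in an
    equirectangular (360-degree) frame. Generic geometry, not
    the OmniSense implementation."""
    x, y, z = d / np.linalg.norm(d)
    lon = np.arctan2(x, z)          # longitude in [-pi, pi]
    lat = np.arcsin(y)              # latitude in [-pi/2, pi/2]
    u = (lon / (2 * np.pi) + 0.5) * width
    v = (0.5 - lat / np.pi) * height   # v = 0 at the top of the frame
    return u, v

# Example: a direction slightly above the horizon, to the right
print(direction_to_equirect(np.array([0.5, 0.2, 1.0]), 3840, 1920))
```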


DAPIE: Interactive Step-by-Step Explanatory Dialogues to Answer Children’s Why and How Questions

CHI'23

Yoonjoo Lee, Tae Soo Kim, Sungdong Kim, Yohan Yun, Juho Kim

Children acquire an understanding of the world by asking “why” and “how” questions. Conversational agents (CAs) like smart speakers or voice assistants can be promising respondents to children’s questions as they are more readily available than parents or teachers. However, CAs’ answers to “why” and “how” questions are not designed for children, as they can be difficult to understand and provide little interactivity to engage the child. In this work, we propose design guidelines for creating interactive dialogues that promote children’s engagement and help them understand explanations. Applying these guidelines, we propose DAPIE, a system that answers children’s questions through interactive dialogue by employing an AI-based pipeline that automatically transforms existing long-form answers from online sources into such dialogues. A user study (N=16) showed that, with DAPIE, children performed better in an immediate understanding assessment while also reporting higher enjoyment than when explanations were presented sentence-by-sentence.


ModSandbox: Facilitating Online Community Moderation Through Error Prediction and Improvement of Automated Rules

CHI'23

Jean Y Song, Sangwook Lee, Jisoo Lee, Mina Kim, Juho Kim

Despite the common use of rule-based tools for online content moderation, human moderators still spend a lot of time monitoring them to ensure they work as intended. Based on surveys and interviews with Reddit moderators who use AutoModerator, we identified the main challenges in reducing false positives and false negatives of automated rules: not being able to estimate the actual effect of a rule in advance and having difficulty figuring out how the rules should be updated. To address these issues, we built ModSandbox, a novel virtual sandbox system that detects possible false positives and false negatives of a rule and visualizes which part of the rule is causing issues. We conducted a comparative, between-subject study with online content moderators to evaluate the effect of ModSandbox in improving automated rules. Results show that ModSandbox can support quickly finding possible false positives and false negatives of automated rules and guide moderators to improve them to reduce future errors.
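ModSandbox itself is not reproduced here, but its core idea, scoring a candidate automated rule against already-moderated history to surface likely false positives and false negatives, can be illustrated in a few lines. The rule and the comment data below are entirely made up:

```python
import re

# Hypothetical AutoModerator-style rule: remove comments matching a pattern
rule = re.compile(r"\b(buy now|free money|click here)\b", re.IGNORECASE)

# Hypothetical moderation history: (comment, was_actually_removed_by_mods)
history = [
    ("Click here for FREE MONEY!!!", True),
    ("Where can I buy now-discontinued parts?", False),  # likely false positive
    ("Totally normal comment", False),
    ("DM me for fr ee mo ney", True),                    # likely false negative
]

false_positives = [c for c, removed in history if rule.search(c) and not removed]
false_negatives = [c for c, removed in history if not rule.search(c) and removed]

print("Would be wrongly removed:", false_positives)
print("Would be wrongly kept:  ", false_negatives)
```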


How Space is Told: Linking Trajectory, Narrative, and Intent in Augmented Reality Storytelling for Cultural Heritage Sites

CHI'23

Jae-eun Shin, Woontack Woo

We report on a qualitative study in which 22 participants created Augmented Reality (AR) stories for outdoor cultural heritage sites. Although storytelling is a crucial strategy for AR content aimed at providing meaningful experiences, the emphasis has been on what storytelling does rather than how it is done, with the end user’s needs prioritized over the author’s. To address this imbalance, we identify how recurring patterns in the spatial trajectories and narrative compositions of AR stories for cultural heritage sites are linked to the author’s intent and creative process: while authors tend to bind story arcs tightly to confined trajectories for narrative delivery, the need for spatial exploration results in thematic content mapped loosely onto encompassing trajectories. Based on our analysis, we present design recommendations for site-specific AR storytelling tools that can support authors in delivering their intent while leveraging the placeness of cultural heritage sites as a creative resource.


AVscript: Accessible Video Editing with Audio-Visual Scripts

CHI'23

Mina Huh, Saelyne Yang, Yi-Hao Peng, Xiang ‘Anthony’ Chen, Young-Ho Kim, Amy Pavel

Sighted and blind and low vision (BLV) creators alike use videos to communicate with broad audiences. Yet, video editing remains inaccessible to BLV creators. Our formative study revealed that current video editing tools make it difficult to access the visual content, assess the visual quality, and efficiently navigate the timeline. We present AVscript, an accessible text-based video editor. AVscript enables users to edit their video using a script that embeds the video’s visual content, visual errors (e.g., dark or blurred footage), and speech. Users can also efficiently navigate between scenes and visual errors or locate objects in the frame or spoken words of interest. A comparison study (N=12) showed that AVscript significantly lowered BLV creators’ mental demands while increasing confidence and independence in video editing. We further demonstrate the potential of AVscript through an exploratory study (N=3) where BLV creators edited their own footage.
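AVscript's own error-detection pipeline is not published in this excerpt, but flagging dark or blurred footage, the visual errors the abstract mentions, is commonly done with simple image statistics. A rough illustration follows; the thresholds are arbitrary, not AVscript's actual values:

```python
import cv2

def frame_issues(frame, dark_thresh=40.0, blur_thresh=100.0):
    """Flag a video frame as dark and/or blurry using common heuristics.
    Thresholds are illustrative, not AVscript's."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    issues = []
    if gray.mean() < dark_thresh:                            # low average brightness
        issues.append("dark")
    if cv2.Laplacian(gray, cv2.CV_64F).var() < blur_thresh:  # low edge energy
        issues.append("blurred")
    return issues

cap = cv2.VideoCapture("footage.mp4")
ok, frame = cap.read()
if ok:
    print(frame_issues(frame))
cap.release()
```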


Blaming Humans and Machines: What Shapes People's Reactions to Algorithmic Harm

CHI'23

Gabriel Lima, Nina Grgić-Hlača, Meeyoung Cha

Artificial intelligence (AI) systems can cause harm to people. This research examines how individuals react to such harm through the lens of blame. Building upon research suggesting that people blame AI systems, we investigated how several factors influence people’s reactive attitudes towards machines, designers, and users. The results of three studies (N = 1,153) indicate differences in how blame is attributed to these actors. Whether AI systems were explainable did not impact blame directed at them, their developers, and their users. Considerations about fairness and harmfulness increased blame towards designers and users but had little to no effect on judgments of AI systems. Instead, what determined people’s reactive attitudes towards machines was whether people thought blaming them would be a suitable response to algorithmic harm. We discuss implications, such as how future decisions about including AI systems in the social and moral spheres will shape laypeople’s reactions to AI-caused harm.

"We Speak Visually" : User-generated Icons for Better Video-Mediated Mixed Group Communications Between Deaf and Hearing Participants

CHI'23

Yeon Soo Kim, Hyeonjeong Im, Sunok Lee, Haena Cho, Sangsu Lee

Since the outbreak of the COVID-19 pandemic, videoconferencing technology has been widely adopted as a convenient, powerful, and fundamental tool that has simplified many day-to-day tasks. However, video communication depends on audible conversation and can be strenuous for those who are Hard of Hearing. Communication methods used by the Deaf and Hard of Hearing community differ significantly from those used by the hearing community, and a distinct language gap is evident in workspaces that accommodate workers from both groups. Therefore, we brought together users from both groups to explore ways to alleviate obstacles in mixed-group videoconferencing by implementing user-generated icons. A participatory design methodology was employed to investigate how users overcome language differences. We observed that individuals utilized icons within video-mediated meetings as a universal language to reinforce comprehension. Herein, we present design implications from these findings, along with recommendations for future icon systems to enhance and support mixed-group conversations.

Surch: Enabling Structural Search and Comparison for Surgical Videos

CHI'23

Jeongyeon Kim, Daeun Choi, Nicole Lee, Matt Beane, Juho Kim

Video is an effective medium for learning procedural knowledge, such as surgical techniques. However, learning procedural knowledge through videos remains difficult due to limited access to procedural structures of knowledge (e.g., compositions and ordering of steps) in a large-scale video dataset. We present Surch, a system that enables structural search and comparison of surgical procedures. Surch supports video search based on procedural graphs generated by our clustering workflow capturing latent patterns within surgical procedures. We used vectorization and weighting schemes that characterize the features of procedures, such as recursive structures and unique paths. Surch enhances cross-video comparison by providing video navigation synchronized by surgical steps. Evaluation of the workflow demonstrates the effectiveness and interpretability (Silhouette score = 0.82) of our clustering for surgical learning. A user study with 11 residents shows that our system significantly improves the learning experience and task efficiency of video search and comparison, especially benefiting junior residents.
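The Silhouette score reported above measures how well each procedure vector sits within its assigned cluster. For reference, this is how such a score is typically computed; the feature vectors below are synthetic stand-ins, not the paper's data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Stand-in for procedure feature vectors (e.g., step-frequency features)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (30, 8)), rng.normal(2, 0.3, (30, 8))])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(f"Silhouette score: {silhouette_score(X, labels):.2f}")  # near 1.0 = well separated
```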

Love on the Spectrum: Toward Inclusive Online Dating Experience of Autistic Individuals

CHI'23

Dasom Choi, Sung-In Kim, Sunok Lee, Hyunseung Lim, Hee Jeong Yoo, Hwajung Hong

Online dating is a space where autistic individuals can find romantic partners with reduced social demands. Autistic individuals are often expected to adapt their behaviors to the social norms underlying the online dating platform to appear as desirable romantic partners. However, given that their autistic traits can lead them to different expectations of dating, it is uncertain whether conforming their behaviors to the norm will guide them to the person they truly want. In this paper, we explored the perceptions and expectations of autistic adults in online dating through interviews and workshops. We found that autistic people desired to know whether they behaved according to the platform’s norms. Still, they expected to keep their unique characteristics rather than unconditionally conform to the norm. We conclude by providing suggestions for designing inclusive online dating experiences that could foster self-guided decisions of autistic users and embrace their unique characteristics.

Fostering Youth’s Critical Thinking Competency about AI through Exhibition

CHI'23

Sunok Lee, Dasom Choi, Minha Lee, Jonghak Choi, Sangsu Lee

Today’s youth live in a world deeply intertwined with AI, which has become an integral part of everyday life. For this reason, it is important for youth to think critically about and examine AI so they can become responsible users in the future. Although recent attempts have educated youth on AI with a focus on delivering critical perspectives within a structured curriculum, they must also be given opportunities to develop critical thinking competencies that can be reflected in their lives. Against this background, we designed an informal learning experience through an AI-related exhibition to cultivate critical thinking competency. We invited 23 participants to experience the exhibition and explored changes in their thinking before and after it. We found that the exhibition can support youth in relating AI to their lives through critical thinking processes. Our findings suggest implications for designing learning experiences that foster critical thinking competency for better coexistence with AI.

Creator-friendly Algorithms: Behaviors, Challenges, and Design Opportunities in Algorithmic Platforms

CHI'23

Yoonseo Choi, Eun Jeong Kang, Min Kyung Lee, Juho Kim

In many creator economy platforms, algorithms significantly impact creators’ practices and decisions about their creative expression and monetization. Emerging research suggests that the opacity of the algorithm and platform policies often distract creators from their creative endeavors. To study how algorithmic platforms can be more ‘creator-friendly,’ we conducted a mixed-methods study: interviews (N=14) and a participatory design workshop (N=12) with YouTube creators. Through the interviews, we found how creators’ folk theories of the curation algorithm impact their work strategies — whether they choose to work with or against the algorithm — and the associated challenges in the process. In the workshop, creators explored solution ideas to overcome the aforementioned challenges, such as fostering diverse and creative expressions, achieving success as a creator, and motivating creators to continue their job. Based on these findings, we discuss design opportunities for how algorithmic platforms can support and motivate creators to sustain their creative work.

Toward a Multilingual Conversational Agent: Challenges and Expectations of Code-Mixing Multilingual Users

CHI'23

Yunjae Josephine Choi, Minha Lee, Sangsu Lee

Multilingual speakers tend to interleave two or more languages when communicating. This communication strategy is called code-mixing, and it has surged with today’s ever-increasing linguistic and cultural diversity. Because of their communication style, multilinguals who use conversational agents have specific needs and expectations which are currently not being met by conversational systems. While research has been undertaken on code-mixing conversational systems, previous works have rarely focused on the code-mixing users themselves to discover their genuine needs. This work furthers our understanding of the challenges faced by code-mixing users in conversational agent interaction, unveils the key factors that users consider in code-mixing scenarios, and explores expectations that users have for future conversational agents capable of code-mixing. This study discusses the design implications of our findings and provides a guide on how to alleviate the challenges faced by multilingual users and how to improve the conversational agent user experience for multilingual users.

“I Won't Go Speechless”: Design Exploration on a Real-Time Text-To-Speech Speaking Tool for Videoconferencing

CHI'23

Wooseok Kim, Jian Jun, Minha Lee, Sangsu Lee

The COVID-19 pandemic has shifted many business activities to non-face-to-face activities, and videoconferencing has become a new paradigm. However, conference spaces isolated from surrounding interferences are not always readily available. People frequently participate in public places with unexpected crowds or acquaintances, such as cafés, living rooms, and shared offices. These environments have surrounding limitations that potentially cause challenges in speaking up during videoconferencing. To alleviate these issues and support the users in speaking-restrained spatial contexts, we propose a text-to-speech (TTS) speaking tool as a new speaking method to support active videoconferencing participation. We derived the possibility of a TTS speaking tool and investigated the empirical challenges and user expectations of a TTS speaking tool using a technology probe and participatory design methodology. Based on our findings, we discuss the need for a TTS speaking tool and suggest design considerations for its application in videoconferencing.

Charlie and the Semi-Automated Factory: Data-Driven Operator Behavior and Performance Modeling for Human-Machine Collaborative Systems

CHI'23

Eunji Park, Yugyeong Jung, Inyeop Kim, Uichin Lee

A semi-automated manufacturing system that entails human intervention in the middle of the process is a representative collaborative system requiring active interaction between humans and machines. User behavior induced by the operator’s decision-making process greatly impacts system operation and performance in such an environment. Although multiple streams of data are collected from manufacturing machines, there has been little work utilizing this machine-generated data for a fine-grained understanding of the relationship between operator behavior and performance in the industrial domain. In this study, we propose a large-scale data-analysis methodology comprising data contextualization and performance modeling to understand this relationship. For a case study, we collected machine-generated data over a 6-month period from a highly automated machine in a large tire manufacturing facility. We devised a set of metrics consisting of six human-machine interaction factors and four work environment factors as independent variables, and three performance factors as dependent variables. Our modeling results reveal that the performance variations can be explained by the interaction and work environment factors ($R^2$ = 0.502, 0.356, and 0.500 for the three performance factors, respectively). Finally, we discuss future research directions for the realization of context-aware computing in semi-automated systems by leveraging machine-generated data as a new modality in human-machine collaboration.
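As a concrete anchor for the reported $R^2$ values: a performance model of this kind reduces to regressing one performance factor on the ten behavioral and environmental factors and reading the explained variance off the fit. A minimal sketch with synthetic stand-in data, not the paper's dataset:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 10))        # 6 interaction + 4 work-environment factors
true_w = rng.normal(size=10)
y = X @ true_w + rng.normal(size=n)  # one performance factor, with noise

model = LinearRegression().fit(X, y)
print(f"R^2 = {model.score(X, y):.3f}")  # share of performance variance explained
```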

How Older Adults Use Online Videos for Learning

CHI'23

Seoyoung Kim, Donghoon Shin, Jeongyeon Kim, Soonwoo Kwon, Juho Kim

Online videos are a promising medium for older adults to learn. Yet, few studies have investigated what, how, and why they learn through online videos. In this study, we investigated older adults’ motivation, watching patterns, and difficulties in using online videos for learning by (1) running interviews with 13 older adults and (2) analyzing large-scale video event logs (N=41.8M) from a Korean Massive Open Online Course (MOOC) platform. Our results show that older adults (1) are motivated to learn practical topics, leading to less consumption of STEM domains than non-older adults, (2) watch videos with less interaction and watch a larger portion of a single video compared to non-older adults, and (3) face various difficulties (e.g., inconvenience arising from their unfamiliarity with technology) that limit their learning through online videos. Based on the findings, we propose design guidelines for online videos and platforms targeted to support older adults’ learning.
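As an illustration of the kind of log analysis behind finding (2), the snippet below computes the average watched portion of a video per user group. The schema is hypothetical; the paper's actual log fields are not shown here.

```python
import pandas as pd

# Hypothetical event-log schema, for illustration only
logs = pd.DataFrame({
    "user_group":   ["older", "older", "non-older", "non-older"],
    "video_id":     ["v1", "v1", "v1", "v2"],
    "watched_sec":  [540, 600, 180, 240],
    "video_len_sec": [600, 600, 600, 600],
})

logs["portion"] = logs["watched_sec"] / logs["video_len_sec"]
print(logs.groupby("user_group")["portion"].mean())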

Beyond Instructions: A Taxonomy of Information Types in How-to Videos

CHI'23

Saelyne Yang, Sangkyung Kwak, Juhoon Lee, Juho Kim

How-to videos are rich in information: they not only give instructions but also provide justifications and descriptions. People seek different information to meet their needs, and identifying the different types of information present in a video can improve access to the desired knowledge. Thus, we present a taxonomy of information types in how-to videos. Through an iterative open coding of 4k sentences in 48 videos, 21 information types under 8 categories emerged. The taxonomy represents the diverse information types that instructors provide beyond instructions. We first show how our taxonomy can serve as an analytical framework for video navigation systems. Then, we demonstrate through a user study (n=9) how type-based navigation helps participants locate the information they need. Finally, we discuss how the taxonomy enables a wide range of video-related tasks, such as video authoring, viewing, and analysis. To allow researchers to build upon our taxonomy, we release a dataset of 120 videos containing 9.9k sentences labeled using the taxonomy.

Potential and Challenges of DIY Smart Homes with an ML-intensive Camera Sensor

CHI'23

Sojeong Yun, Youn-kyung Lim

Sensors and actuators are crucial components of a do-it-yourself (DIY) smart home system that enables users to construct smart home features successfully. In addition, machine learning (ML) can be applied to sensor technology (e.g., ML-intensive camera sensors) to increase its accuracy. Although camera sensors are often utilized in homes, research on user experiences with DIY smart home systems employing camera sensors is still in its infancy. This research investigates novel user experiences in constructing DIY smart home features using an ML-intensive camera sensor, in contrast to commonly used internet-of-things (IoT) sensors. To this end, we conducted a seven-day field diary study with 12 families who were given a DIY smart home kit. We describe five characteristics of the camera sensor as well as the potential and challenges of utilizing it in the DIY smart home, and we discuss opportunities to address existing DIY smart home issues.

Interactivity

Explore the Future Earth with Wander 2.0: AI Chatbot Driven by Knowledge-base Story Generation and Text-to-image Model

CHI'23

Yuqian Sun, Ying Xu, Chenhang Cheng, Yihua Li, Chang Hee Lee, Ali Asadipour

People have long envisioned the future of the earth through science fiction (sci-fi). Can we, then, create a unique experience of “visiting the future earth” through the lens of artificial intelligence (AI)? We introduce Wander 2.0, an AI chatbot that co-creates sci-fi stories through knowledge-based story generation on daily communication platforms like WeChat and Discord. Using location information from Google Maps, Wander generates narrative travelogues about specific locations (e.g., Paris) through a large-scale language model (LLM). Additionally, using the large-scale text-to-image model (LTGM) Stable Diffusion, Wander generates future scenes that match both the text description and the location photo, facilitating imagination of the future. The project also includes a real-time visualization of the human-AI collaborations on a future map. Through journeys with visitors from all over the world, Wander demonstrates how AI can serve as a subjective interface linking fiction and reality. Our research shows that multi-modal AI systems have the potential to extend the artistic experience and creative world-building through adaptive and unique content generation for different people. Wander 2.0 is available at http://wander001.com/
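The released system's code is not shown here, but the described photo-to-future-scene step matches the standard image-to-image use of Stable Diffusion. A sketch with the diffusers library follows; the model ID and prompt are illustrative, not Wander 2.0's actual configuration:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Illustrative only; requires a CUDA GPU and downloads model weights
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

location_photo = Image.open("paris.jpg").convert("RGB").resize((512, 512))
prompt = "Paris in the year 2300, overgrown bio-architecture, sci-fi concept art"

# strength controls how far the output departs from the original photo
future_scene = pipe(prompt=prompt, image=location_photo, strength=0.6).images[0]
future_scene.save("paris_2300.png")
```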


AutomataStage: An Interactive Automata Creating Tool for Hands-on STEAM Learning

CHI'23

Yunwoo Jeong, Hyungjun Cho, Taewan Kim, Tek-Jin Nam

Hands-on STEAM learning requires tools scattered across digital and physical environments, as well as educational content that can draw attention, interest, and fun. We present AutomataStage, an interactive tool, and Interactive Automata, its accompanying learning content. AutomataStage utilizes a video see-through interface and building blocks to actively engage the entire creation process, from ideation to visual programming, mechanism simulation, and making. It also provides a hardware see-through feature, with which internal parts and circuits can be seen, and an operational see-through feature that shows the operation in real time. A user study shows that AutomataStage enabled students to create diverse Interactive Automata within 40-minute sessions. See-through features enabled active exploration with interest, while visual programming with a state transition diagram supported the integration. The participants could rapidly learn about sensors, motors, mechanisms, and programming by creating Interactive Automata. We discuss the implications of hands-on tools with interactive and kinetic content beyond STEAM education.

Late-Breaking Work

Virtual Trackball on VR Controller: Evaluation of 3D Rotation Methods in Virtual Reality

CHI'23

Sunbum Kim, Geehyuk Lee

Rotating 3D objects is an essential operation in virtual reality (VR). However, efficient rotation methods for current VR controllers have not yet been explored extensively. Users must repeatedly move their arms and wrists to rotate an object with a current VR controller. We considered utilizing the trackpad available in most VR controllers as a virtual trackball for efficient rotation and implemented two types of virtual trackballs (Arcball and Two-axis Valuator) to enable additional rotation using the thumb while holding an object with a VR controller. In this study, we investigated whether a controller with a virtual trackball would be effective for 3D manipulation tasks. The results showed that participants could perform the tasks faster with Arcball, but not with Two-axis Valuator, than with the regular VR controller. Also, most participants preferred Arcball to Two-axis Valuator and found Arcball more natural.
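For context, Arcball (Shoemake's classic formulation) maps each 2D trackpad point onto a virtual hemisphere and rotates the object by the arc between successive points. A minimal sketch of that mapping, not the study's implementation:

```python
import numpy as np

def to_sphere(p):
    """Project a 2D trackpad point in [-1, 1]^2 onto a unit hemisphere
    (Shoemake's arcball mapping)."""
    x, y = p
    r2 = x * x + y * y
    if r2 <= 1.0:
        return np.array([x, y, np.sqrt(1.0 - r2)])
    s = 1.0 / np.sqrt(r2)            # outside the ball: clamp to the rim
    return np.array([x * s, y * s, 0.0])

def arcball_quaternion(p0, p1):
    """Quaternion (w, x, y, z) for the drag from p0 to p1. As in the
    original arcball, this rotates by twice the arc angle."""
    v0, v1 = to_sphere(p0), to_sphere(p1)
    q = np.array([float(np.dot(v0, v1)), *np.cross(v0, v1)])
    return q / np.linalg.norm(q)

print(arcball_quaternion((0.0, 0.0), (0.3, 0.1)))  # small thumb drag
```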


QuickRef: Should I Read Cited Papers for Understanding This Paper?

CHI'23

Sangjun Park, Chanhee Lee, Uichin Lee

Researchers spend a lot of time reading scientific papers to stay updated with recent trends. However, navigating citations, which are indispensable elements of research papers, can act as a barrier for junior researchers who do not yet have enough background knowledge and experience. We conducted a formative user study to identify challenges in navigating cited papers. We then prototyped QuickRef, an interactive reader that provides additional information about cited papers in a side panel. A preliminary user study documents the usability of QuickRef. Further, we present practical design implications for citation navigation support.


HapticPalmrest: Haptic Feedback through the Palm for the Laptop Keyboard

CHI'23

Jisu Yim, Sangyoon Lee, Geehyuk Lee

Programmable haptic feedback on touchscreen keyboards enriches user experiences but is hard to realize for physical keyboards, because it would require individually augmenting each key with an actuator. As an alternative approach, we propose HapticPalmrest, where haptic feedback for a physical keyboard is provided to the palms. This is particularly feasible in a laptop environment, where users usually rest their palms while interacting with the keyboard. To verify the feasibility of the approach, we conducted two user studies. The first study showed that at least one palm was on the palmrest for more than 90% of key interaction time. The second study showed that a vibration of 1.17 g (peak-to-peak) amplitude and 4 ms duration was sufficient for reliable perception of palmrest vibrations during keyboard interaction. We finally demonstrated the potential of such an approach by designing Dwell+ Key, an application that extends the function of each key by enabling timed dwelling operations.
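The 4 ms / 1.17 g figures pin the stimulus down fairly concretely. For intuition, here is how such a pulse could be synthesized as a waveform; the carrier frequency is an assumption (a typical actuator resonance), not a parameter from the paper:

```python
import numpy as np

SAMPLE_RATE = 48_000    # Hz
DURATION = 0.004        # 4 ms, the duration found sufficient in the study
PEAK_TO_PEAK_G = 1.17   # acceleration amplitude from the study
CARRIER_HZ = 250        # assumption: common actuator resonance, not from the paper

t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE
amplitude = PEAK_TO_PEAK_G / 2                 # peak-to-peak -> peak
pulse = amplitude * np.sin(2 * np.pi * CARRIER_HZ * t)

print(f"{len(t)} samples, peak-to-peak = {pulse.max() - pulse.min():.2f} g")
```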


AEDLE: Designing Drama Therapy Interface for Improving Pragmatic Language Skills of Children with Autism Spectrum Disorder Using AR

CHI'23

Jungin Park, Gahyun Bae, Jueon Park, Seo Kyung Park, Yeon Soo Kim, Sangsu Lee

This research proposes AEDLE, a new interface combining AR with drama therapy, an approved method of improving pragmatic language skills, to offer effective, universal, and accessible language therapy for children with Autism Spectrum Disorder (ASD). People with ASD commonly have impaired pragmatic language and experience difficulty speaking. Although therapy in childhood is necessary to prevent the long-term social isolation caused by such constraints, the limited number of therapists makes this difficult. Technology-based therapy can be a solution, but studies on utilizing digital therapy to improve pragmatic language are still insufficient. We conducted a preliminary user study with an autistic child and a therapist to investigate how the child reacts to drama therapy using AEDLE. We observed that the child actively participated in AEDLE-mediated drama therapy; based on these insights, we recommend design suggestions for AR-based drama therapy and explore various ways to utilize AEDLE.


Tailoring Interactions: Exploring the Opportune Moment for Remote Computer-mediated Interactions with Home-alone Dogs

CHI'23

Yewon Kim, Taesik Gong, Sung-Ju Lee

We argue for research on identifying opportune moments for remote computer-mediated interactions with home-alone dogs. We analyze the behavior of home-alone pet dogs to find specific situations where positive interaction between the dog and toys is more likely and when the interaction might induce more stress. We highlight the importance of considering the timing of remote interactions with pet dogs and the potential benefits it brings to the effectiveness of the interaction, leading to greater satisfaction and engagement for both the pet and the pet owner.


Dis/Immersion in Mindfulness Meditation with a Wandering Voice Assistant

CHI'23

Bonhee Ku, Katie Seaborn

Mindfulness meditation is a validated means of helping people manage stress. Voice-based virtual assistants (VAs) in smart speakers, smartphones, and smart environments can assist people in carrying out mindfulness meditation through guided experiences. However, the common fixed location embodiment of VAs makes it difficult to provide intuitive support. In this work, we explored the novel embodiment of a “wandering voice” that is co-located with the user and “moves” with the task. We developed a multi-speaker VA embedded in a yoga mat that changes location along the body according to the meditation experience. We conducted a qualitative user study in two sessions, comparing a typical fixed smart speaker to the wandering VA embodiment. Thick descriptions from interviews with twelve people revealed sometimes simultaneous experiences of immersion and dis-immersion. We offer design implications for “wandering voices” and a new paradigm for VA embodiment that may extend to guidance tasks in other contexts.

Student Game Competition

Glow the Buzz: A VR Puzzle Adventure Game Mainly Played Through Haptic Feedback

CHI'23

Sihyun Jeong, Hyun Ho Yun, Yoonji Lee, Yeeun Han

Virtual Reality (VR) has become an increasingly popular medium, leading to growing demand for various immersive VR games. In addition, haptic technology is gaining attention as it adds a sense of touch to the visually and auditorily dominant human-computer interface, extending VR experiences. However, most games, including VR games, use haptics as a supplement while depending mostly on visual elements as their main mode of transferring information, because haptic technology for accurately capturing and replicating touch is still in its infancy. To further investigate the potential of haptics, we propose Glow the Buzz, a VR game in which haptic feedback serves as a core element, delivered through wearable haptic devices. Our research explores whether haptic stimuli can be a primary form of interaction through iterative playtests of three haptic puzzle designs – rhythm, texture, and direction. The study demonstrates the extendability of haptic technology in VR by proposing a VR haptic puzzle game that cannot be played without haptics and that enhances the player’s immersion. Moreover, the study suggests elements that enhance each haptic stimulus’s discriminability when designing haptic puzzles.


Spatial Chef: A Spatial Transforming VR Game with Full Body Interaction

CHI'23

Yeeun Shin, Yewon Lee, Sungbaek Kim, Soomin Park

How can we play with space? We present Spatial Chef, a spatial cooking game that focuses on interacting with space itself, shifting away from the conventional object interaction of virtual reality (VR) games. This allows players to generate and transform the virtual environment (VE) around them directly. To capture the ambiguity of space, we created a game interface with full-body movement based on the player’s perception of spatial interaction. This was evaluated as easy and intuitive, providing clues for spatial interaction design. Our user study reveals that manipulating virtual space can lead to unique experiences: “being both a player and an absolute” and “experiencing realized fantasy.” This suggests the potential of interacting with space as an engaging gameplay mechanic. Spatial Chef proposes turning the VE, typically treated as a passive backdrop, into an active medium that responds to the player’s intentions, creating a fun and novel experience.


MindTerior: A Mental Healthcare Game with Metaphoric Gamespace and Effective Activities for Mitigating Mild Emotional Difficulties

CHI'23

Ain Lee, Juhyun Lee, Sooyeon Ahn, Youngik Lee

People today suffer from increasing stress and emotional difficulties, yet developing practices that allow them to become aware of and manage their emotional states remains a challenge. MindTerior is a mental health care game developed for people who occasionally experience mild emotional difficulties. The game contains four mechanisms: measuring players’ emotional state, providing game activities that help mitigate certain negative emotions, visualizing players’ emotional state and letting players cultivate the game space with customizable items, and completing game events that educate players on how to cope with certain negative emotions. Together, these gameplay elements allow players to experience effective, positive emotional relaxation and to perform gamified mental health care activities. Playtests showed that projecting players’ emotional states onto a virtual game space helps players stay conscious of those states, and that playing gamified activities supports mental health care. Additionally, the game motivated players to practice the equivalent activities in real life.


Bean Academy: A Music Composition Game for Beginners with Vocal Query Transcription

CHI'23

Jaejun Lee, Hyeyoon Cho, Yonghyun Kim

Bean Academy is a music composition game designed for musically unskilled learners, lowering entry barriers to music composition learning such as music theory comprehension, literacy, and proficiency with music composition software. Bean Academy’s Studio Mode adapts an auditory-based Vocal Query Transcription (VQT) model to enhance learners’ satisfaction and enjoyment of music composition learning. Through the VQT model, players experience a simple and efficient composition process in which their recorded voice input is transcribed into an actual musical piece. Based on our playtests, we conducted a thematic analysis across two separate experiment groups. We observed that although Bean Academy does not outperform current digital audio workstations (DAWs) in performance or functionality, it is well suited as learning material for musically unskilled learners.
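The paper's VQT model itself is not available here, but the first stage of any vocal query transcription, tracking the pitch of a hummed or sung query and quantizing it to notes, can be sketched with off-the-shelf tools. An illustrative front end, not Bean Academy's model:

```python
import librosa
import numpy as np

# Illustrative pitch-tracking front end for vocal query transcription
y, sr = librosa.load("hummed_melody.wav", sr=None)
f0, voiced, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)

midi = np.round(librosa.hz_to_midi(f0[voiced]))   # quantize pitch to semitones
notes = [librosa.midi_to_note(m) for m in midi]
print(notes[:16])  # e.g., ['C4', 'C4', 'D4', ...]
```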


Workshop

Beyond prototyping boards: future paradigms for electronics toolkits

CHI'23

Andrea Bianchi, Steve Hodges, David J. Cuartielles, HyunJoo Oh, Mannu Lambrichts, Anne Roudaut

Electronics prototyping platforms such as Arduino enable a wide variety of creators with and without an engineering background to rapidly and inexpensively create interactive prototypes. By opening up the process of prototyping to more creators, and by making it cheaper and quicker, prototyping platforms and toolkits have undoubtedly shaped the HCI community. With this workshop, we aim to understand how recent trends in technology, from reprogrammable digital and analog arrays to printed electronics, and from metamaterials to neurally-inspired processors, might be leveraged in future prototyping platforms and toolkits. Our goal is to go beyond the well-established paradigm of mainstream microcontroller boards, leveraging the more diverse set of technologies that already exist but to date have remained relatively niche. What is the future of electronics prototyping toolkits? How will these tools fit in the current ecosystem? What are the new opportunities for research and commercialization?

Towards Explainable AI Writing Assistants for Non-native English Speakers

CHI'23

Yewon Kim, Mina Lee, Donghwi Kim, Sung-Ju Lee

We highlight the challenges faced by non-native speakers when using AI writing assistants to paraphrase text. Through an interview study with 15 non-native English speakers (NNESs) with varying levels of English proficiency, we observe that they face difficulties in assessing paraphrased texts generated by AI writing assistants, largely due to the lack of explanations accompanying the suggested paraphrases. Furthermore, we examine their strategies to assess AI-generated texts in the absence of such explanations. Drawing on the needs of NNESs identified in our interview, we propose four potential user interfaces to enhance the writing experience of NNESs using AI writing assistants. The proposed designs focus on incorporating explanations to better support NNESs in understanding and evaluating the AI-generated paraphrasing suggestions.

ChatGPT for Moderating Customer Inquiries and Responses to Alleviate Stress and Reduce Emotional Dissonance of Customer Service Representatives

CHI'23

Hyung-Kwon Ko, Kihoon Son, Hyoungwook Jin, Yoonseo Choi, Xiang ‘Anthony’ Chen

Customer service representatives (CSRs) face significant levels of stress as a result of handling disrespectful customer inquiries and the emotional dissonance that arises from concealing their true emotions to provide the best customer experience. To address this issue, we propose ExGPTer, which uses ChatGPT to moderate the tone and manner of a customer inquiry to be gentler and more appropriate, while ensuring that the content remains unchanged. ExGPTer also augments CSRs’ responses to customer inquiries, so they can conform to established company protocol while effectively conveying the essential information that customers seek.
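A minimal sketch of the inquiry-moderation step using the OpenAI Python client follows; the prompt wording and model name are illustrative, not ExGPTer's actual configuration:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def soften_inquiry(inquiry: str) -> str:
    """Rewrite a customer inquiry in a gentler tone while keeping its
    content unchanged. Prompt is illustrative, not ExGPTer's."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Rewrite the customer's message politely and calmly. "
                        "Keep every factual detail and request unchanged."},
            {"role": "user", "content": inquiry},
        ],
    )
    return response.choices[0].message.content

print(soften_inquiry("This is the THIRD time your product broke. Fix it NOW!"))
```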

LMCanvas: Object-Oriented Interaction to Personalize Large Language Model-Powered Writing Environments

CHI'23

Tae Soo Kim, Arghya Sarkar, Yoonjoo Lee, Minsuk Chang, Juho Kim

Large language models (LLMs) can enhance writing by automating or supporting specific tasks in writers’ workflows (e.g., paraphrasing, creating analogies). Leveraging this capability, a collection of interfaces have been developed that provide LLM-powered tools for specific writing tasks. However, these interfaces provide limited support for writers to create personal tools for their own unique tasks, and may not comprehensively fulfill a writer’s needs—requiring them to continuously switch between interfaces during writing. In this work, we envision LMCanvas, an interface that enables writers to create their own LLM-powered writing tools and arrange their personal writing environment by interacting with “blocks” in a canvas. In this interface, users can create text blocks to encapsulate writing and LLM prompts, model blocks for model parameter configurations, and connect these to create pipeline blocks that output generations. In this workshop paper, we discuss the design for LMCanvas and our plans to develop this concept.
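The envisioned block/canvas model can be made concrete with a small object-oriented sketch. The names and structure below are one illustration of the concept, not the authors' implementation:

```python
from dataclasses import dataclass, field

@dataclass
class TextBlock:
    content: str                    # encapsulates writing or an LLM prompt

@dataclass
class ModelBlock:
    model: str = "some-llm"         # hypothetical model identifier
    temperature: float = 0.7        # model parameter configuration

@dataclass
class PipelineBlock:
    prompt: TextBlock
    model: ModelBlock
    outputs: list = field(default_factory=list)

    def run(self, llm_call):
        """llm_call: any function (prompt, model, temperature) -> str."""
        generation = llm_call(self.prompt.content, self.model.model,
                              self.model.temperature)
        self.outputs.append(TextBlock(generation))
        return self.outputs[-1]

# Wiring blocks on the "canvas": a paraphrasing tool built by the writer
tool = PipelineBlock(TextBlock("Paraphrase: <draft sentence>"), ModelBlock())
print(tool.run(lambda p, m, t: f"[{m}@{t}] rewrite of: {p}"))
```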

Look Upon Thyself: Understanding the Effect of Self-Reflection on Toxic Behavior in Online Gaming

CHI'23

Juhoon Lee, Jeong-woo Jang, Juho Kim

TBD

Towards an Experience-Centric Paradigm of Online Harassment: Responding to Calling out and Networked Harassment

CHI'23

Haesoo Kim, Juhoon Lee, Juho Kim, Jeong-woo Jang

TBD

The full schedule of presentations at CHI 2023 can also be seen here!
