CHI 2022
DATE      30 April – 5 May 2022
LOCATION  Online (New Orleans, LA)

We are happy to bring good news! At CHI 2022, KAIST recorded a total of 19 full paper publications (including 1 Best Paper and 2 Honorable Mention Awards), 2 Interactivity demos, 7 late-breaking works, and 4 Student Game Competition entries, ranking 5th in the number of publications among all participating institutions. Congratulations on this outstanding achievement!

KAIST CHI Statistics (2015-2022)
Year    Number of Publications    Rank
2015     9                        14
2016    15                         7
2017     7                        26
2018    21                         8
2019    13                        11
2020    15                        14
2021    22                         4
2022    19                         5

Nation-wide (Korea) CHI Statistics (2015-2022)
Year    Number of Publications    Rank
2015    17                         6
2016    20                         6
2017    16                        11
2018    30                         6
2019    23                         8
2020    29                         7
2021    35                         7
2022    33                         7

For more information about the publications featured in the conference, please refer to the publication list below.

Paper Publications

Mobile-Friendly Content Design for MOOCs: Challenges, Requirements, and Design Opportunities

CHI'22, Best Paper

Jeongyeon Kim, Yubin Choi, Meng Xia, Juho Kim

Most video-based learning content is designed for desktops without considering mobile environments. We (1) investigate the gap between mobile learners’ challenges and video engineers’ considerations using mixed methods and (2) provide design guidelines for creating mobile-friendly MOOC videos. To uncover learners’ challenges, we conducted a survey (n=134) and interviews (n=21), and evaluated the mobile adequacy of current MOOCs by analyzing 41,722 video frames from 101 video lectures. Interview results revealed low readability and situationally-induced impairments as major challenges. The content analysis showed a low guideline compliance rate for key design factors. We then interviewed 11 video production engineers to investigate design factors they mainly consider. The engineers mainly focus on the size and amount of content while lacking consideration for color, complex images, and situationally-induced impairments. Finally, we present and validate guidelines for designing mobile-friendly MOOCs, such as providing adaptive and customizable visual design and context-aware accessibility support.

Stylette: Styling the Web with Natural Language

CHI'22, Honorable Mention

Tae Soo Kim, DaEun Choi, Yoonseo Choi, Juho Kim

End-users can potentially style and customize websites by editing them through in-browser developer tools. Unfortunately, end-users lack the knowledge needed to translate high-level styling goals into low-level code edits. We present Stylette, a browser extension that enables users to change the style of websites by expressing goals in natural language. By interpreting the user’s goal with a large language model and extracting suggestions from our dataset of 1.7 million web components, Stylette generates a palette of CSS properties and values that the user can apply to reach their goal. A comparative study (N=40) showed that Stylette lowered the learning curve, helping participants perform styling changes 35% faster than those using developer tools. By presenting various alternatives for a single goal, the tool helped participants familiarize themselves with CSS through experimentation. Beyond CSS, our work can be expanded to help novices quickly grasp complex software or programming languages.

MyDJ: Sensing Food Intakes with an Attachable on Your Eyeglass Frame

CHI'22, Honorable Mention

Jaemin Shin, Seungjoo Lee, Taesik Gong, Hyungjun Yoon, Hyunchul Roh, Andrea Bianchi, Sung-Ju Lee 

Various automated eating detection wearables have been proposed to monitor food intake. While these systems overcome the forgetfulness of manual user journaling, they typically show low accuracy in outside-the-lab environments or have intrusive form factors (e.g., headgear). Eyeglasses are emerging as a socially acceptable eating detection wearable, but existing approaches require custom-built frames and consume substantial power. We propose MyDJ, an eating detection system that can be attached to any eyeglass frame. MyDJ achieves accurate and energy-efficient eating detection by capturing complementary chewing signals with a piezoelectric sensor and an accelerometer. We evaluated the accuracy and wearability of MyDJ with 30 subjects in uncontrolled environments, where six subjects attached MyDJ to their own eyeglasses for a week. Our study shows that MyDJ achieves a 0.919 F1-score in eating episode coverage, with 4.03× the battery time of state-of-the-art systems. In addition, participants reported that wearing MyDJ was almost as comfortable (94.95%) as wearing regular eyeglasses.

Lattice Menu: A Low-Error Gaze-Based Marking Menu Utilizing Target-Assisted Gaze Gestures on a Lattice of Visual Anchors

CHI'22

Taejun Kim, Auejin Ham, Sunggeun Ahn, Geehyuk Lee

We present Lattice Menu, a gaze-based marking menu utilizing a lattice of visual anchors that helps perform accurate gaze pointing for menu item selection. Users who know the location of the desired item can leverage target-assisted gaze gestures for multilevel item selection by looking at visual anchors over the gaze trajectories. Our evaluation showed that Lattice Menu exhibits a considerably low error rate (~1%) and a quick menu selection time (1.3-1.6 s) for expert usage across various menu structures (4 × 4 × 4 and 6 × 6 × 6) and sizes (8, 10 and 12°). In comparison with a traditional gaze-based marking menu that does not utilize visual targets, Lattice Menu showed remarkably (~5 times) fewer menu selection errors for expert usage. In a post-interview, all 12 subjects preferred Lattice Menu, and most subjects (8 out of 12) commented that the provisioning of visual targets facilitated more stable menu selections with reduced eye fatigue.

SpinOcchio: Understanding Haptic-Visual Congruency of Skin-Slip in VR with a Dynamic Grip Controller

CHI'22

Myung Jin Kim, Neung Ryu, Wooje Chang, Michel Pahud, Mike Sinclair, Andrea Bianchi

This paper’s goal is to understand the haptic-visual congruency perception of skin-slip on the fingertips given visual cues in Virtual Reality (VR). We developed SpinOcchio (‘Spin’ for the spinning mechanism used, ‘Occhio’ for the Italian word “eye”), a handheld haptic controller capable of rendering the thickness and slipping of a virtual object pinched between two fingers. This is achieved using a mechanism with spinning and pivoting disks that apply a tangential skin-slip movement to the fingertips. With SpinOcchio, we determined the baseline haptic discrimination threshold for skin-slip, and, using these results, we tested how haptic realism of motion and thickness is perceived with varying visual cues in VR. Surprisingly, the results show that in all cases, visual cues dominate over haptic perception. Based on these results, we suggest applications that leverage skin-slip and grip interaction, contributing further to realistic experiences in VR.

Understanding Emotion Changes in Mobile Experience Sampling

CHI'22

Soowon Kang, Cheul Young Park, Narae Cha, Auk Kim, Uichin Lee

Mobile experience sampling methods (ESMs) are widely used to measure users’ affective states by randomly sending self-report requests. However, this random probing can interrupt users and adversely influence users’ emotional states by inducing disturbance and stress. This work aims to understand how ESMs themselves may compromise the validity of ESM responses and what contextual factors contribute to changes in emotions when users respond to ESMs. Towards this goal, we analyze 2,227 samples of mobile ESM data collected from 78 participants. Our results show that ESM interruptions positively or negatively affected users’ emotional states in at least 38% of ESMs, and that the changes in emotions are closely related to the contexts users were in prior to ESMs. Finally, we discuss the implications of using the ESM and possible considerations for mitigating the variability in emotional responses in the context of mobile data collection for affective computing.

Cocomix: Utilizing Comments to Improve Non-Visual Webtoon Accessibility

CHI'22

Mina Huh, YunJung Lee, Dasom Choi, Haesoo Kim, Uran Oh, Juho Kim

Webtoons are a type of digital comics read online, where readers can leave comments to share their thoughts on the story. While webtoons have experienced a surge in popularity internationally, people with visual impairments cannot enjoy them due to the lack of an accessible format. While traditional image description practices can be adopted, the resulting descriptions cannot preserve webtoons’ unique values, such as control over the reading pace and social engagement through comments. To improve the webtoon reading experience for blind and low-vision (BLV) users, we propose Cocomix, an interactive webtoon reader that leverages comments in the design of novel webtoon interactions. Since comments can identify story highlights and provide additional context, we designed a system that provides 1) comment-based adaptive descriptions with selective access to details and 2) panel-anchored comments for easy access to relevant descriptive comments. Our evaluation (N=12) showed that Cocomix users could adapt descriptions for various needs and better utilize comments.

“It’s not wrong, but I’m quite disappointed”: Toward an Inclusive Algorithmic Experience for Content Creators with Disabilities

CHI'22

Dasom Choi, Uichin Lee, Hwajung Hong

YouTube is a space where people with disabilities can reach a wider online audience to present what it is like to have a disability. Thus, it is imperative to understand how content creators with disabilities strategically interact with algorithms to draw viewers around the world. However, considering that algorithms carry the risk of making less inclusive decisions for users with disabilities, whether current algorithmic experiences (AXs) on video platforms are inclusive for creators with disabilities is an open question. To address this, we conducted semi-structured interviews with eight YouTubers with disabilities. We found that they aimed to inform the public of diverse representations of disabilities, which led them to work with algorithms by strategically portraying their disability identities. However, they were disappointed that the way the algorithms work did not sufficiently support their goals. Based on our findings, we suggest implications for designing inclusive AXs that could embrace creators’ subtle needs.

AlgoSolve: Supporting Subgoal Learning in Algorithmic Problem-Solving with Learnersourced Microtasks

CHI'22

Kabdo Choi, Hyungyu Shin, Meng Xia, Juho Kim

Designing solution plans before writing code is critical for successful algorithmic problem-solving. Novices, however, often plan on-the-fly during implementation, resulting in unsuccessful problem-solving due to lack of mental organization of the solution. Research shows that subgoal learning helps learners develop more complete solution plans by enhancing their understanding of the high-level solution structure. However, expert-created materials such as subgoal labels are necessary to provide learning benefits from subgoal learning, which are a scarce resource in self-learning due to limited availability and high cost. We propose a learnersourcing workflow that collects high-quality subgoal labels from learners by helping them improve their label quality. We implemented the workflow into AlgoSolve, a prototype interface that supports subgoal learning for algorithmic problems. A between-subjects study with 63 problem-solving novices revealed that AlgoSolve helped learners create higher-quality labels and more complete solution plans, compared to a baseline method known to be effective in subgoal learning.

FitVid: Responsive and Flexible Video Content Adaptation

CHI'22

Jeongyeon Kim, Yubin Choi, Minsuk Kahng, Juho Kim

Mobile video-based learning attracts many learners with its mobility and ease of access. However, most lectures are designed for desktops. Our formative study reveals mobile learners’ two major needs: more readable content and customizable video design. To support mobile-optimized learning, we present FitVid, a system that provides responsive and customizable video content. Our system consists of (1) an adaptation pipeline that reverse-engineers pixels to retrieve design elements (e.g., text, images) from videos, leveraging deep learning with a custom dataset, which powers (2) a UI that enables resizing, repositioning, and toggling of in-video elements. The content adaptation improves the guideline compliance rate by 24% and 8% for word count and font size, respectively. A content evaluation study (n=198) shows that the adaptation significantly increases readability and user satisfaction. A user study (n=31) indicates that FitVid significantly improves learning experience, interactivity, and concentration. We discuss design implications for responsive and customizable video adaptation.

Sad or just jealous? Using Experience Sampling to Understand and Detect Negative Affective Experiences on Instagram

CHI'22

Mintra Ruensuk, Taewon Kim, Hwajung Hong, Ian Oakley

Social Network Services (SNSs) evoke diverse affective experiences. While most are positive, many authors have documented both the negative emotions that can result from browsing SNS and their impact: Facebook depression is a common term for the more severe results. However, while the importance of the emotions experienced on SNSs is clear, methods to catalog them, and systems to detect them, are less well developed. Accordingly, this paper reports on two studies using a novel contextually triggered Experience Sampling Method to log surveys immediately after using Instagram, a popular image-based SNS, thus minimizing recall biases. The first study improves our understanding of the emotions experienced while using SNSs. It suggests that common negative experiences relate to appearance comparison and envy. The second study captures smartphone sensor data during Instagram sessions to detect these two emotions, ultimately achieving peak accuracies of 95.78% (binary appearance comparison) and 93.95% (binary envy).

Prediction for Retrospection: Integrating Algorithmic Stress Prediction into Personal Informatics Systems for College Students' Mental Health

CHI'22

Taewan Kim, Haesoo Kim, Ha Yeon Lee, Hwarang Goh, Shakhboz Abdigapporov, Mingon Jeong, Hyunsung Cho, Kyungsik Han, Youngtae Noh, Sung-Ju Lee, Hwajung Hong

Reflecting on stress-related data is critical in addressing one’s mental health. Personal Informatics (PI) systems augmented by algorithms and sensors have become popular ways to help users collect and reflect on data about stress. While prediction algorithms in the PI systems are mainly for diagnostic purposes, few studies examine how the explainability of algorithmic prediction can support user-driven self-insight. To this end, we developed MindScope, an algorithm-assisted stress management system that determines user stress levels and explains how the stress level was computed based on the user’s everyday activities captured by a smartphone. In a 25-day field study conducted with 36 college students, the prediction and explanation supported self-reflection, a process to re-establish preconceptions about stress by identifying stress patterns and recalling past stress levels and patterns that led to coping planning. We discuss the implications of exploiting prediction algorithms that facilitate user-driven retrospection in PI systems.

"It Feels Like Taking a Gamble": Exploring Perceptions, Practices, and Challenges of Using Makeup and Cosmetics for People with Visual Impairments

CHI'22

Franklin Mingzhe Li, Franchesca Spektor, Meng Xia, Mina Huh, Peter Cederberg, Yuqi Gong, Kristen Shinohara, Patrick Carrington

Makeup and cosmetics offer the potential for self-expression and the reshaping of social roles for visually impaired people. However, there exist barriers to conducting a beauty regime because of the reliance on visual information and color variances in makeup. We present a content analysis of 145 YouTube videos to demonstrate visually impaired individuals’ unique practices before, during, and after doing makeup. Based on the makeup practices, we then conducted semi-structured interviews with 12 visually impaired people to discuss their perceptions of and challenges with the makeup process in more depth. Overall, through our findings and discussion, we present novel perceptions of makeup from visually impaired individuals (e.g., broader representations of blindness and beauty). The existing challenges provide opportunities for future research to address learning barriers, insufficient feedback, and physical and environmental barriers, making the experience of doing makeup more accessible to people with visual impairments.

Quantifying Proactive and Reactive Button Input

CHI'22

Hyunchul Kim, Kasper Hornbæk, Byungjoo Lee

When giving input with a button, users follow one of two strategies: (1) react to the output from the computer or (2) proactively act in anticipation of the output from the computer. We propose a technique to quantify reactiveness and proactiveness to determine the degree and characteristics of each input strategy. The technique proposed in this study uses only screen recordings and does not require instrumentation beyond the input logs. The likelihood distribution of the time interval between the button inputs and system outputs, which is uniquely determined for each input strategy, is modeled. Then the probability that each observed input/output pair originates from a specific strategy is estimated along with the parameters of the corresponding likelihood distribution. In two empirical studies, we show how to use the technique to answer questions such as how to design animated transitions and how to predict a player’s score in real-time games.

Promptiverse: Scalable Generation of Scaffolding Prompts Through Human-AI Hybrid Knowledge Graph Annotation

CHI'22

Yoonjoo Lee, John Joon Young Chung, Tae Soo Kim, Jean Y Song, Juho Kim

Online learners are hugely diverse with varying prior knowledge, but most instructional videos online are created to be one-size-fits-all. Thus, learners may struggle to understand the content by only watching the videos. Providing scaffolding prompts can help learners overcome these struggles through questions and hints that relate different concepts in the videos and elicit meaningful learning. However, serving diverse learners would require a spectrum of scaffolding prompts, which incurs high authoring effort. In this work, we introduce Promptiverse, an approach for generating diverse, multi-turn scaffolding prompts at scale, powered by numerous traversal paths over knowledge graphs. To facilitate the construction of the knowledge graphs, we propose a hybrid human-AI annotation tool, Grannotate. In our study (N=24), participants produced 40 times more on-par quality prompts with higher diversity, through Promptiverse and Grannotate, compared to hand-designed prompts. Promptiverse presents a model for creating diverse and adaptive learning experiences online.

CatchLive: Real-time Summarization of Live Streams with Stream Content and Interaction Data

CHI'22

Saelyne Yang, Jisu Yim, Juho Kim, Hijung Valentina Shin

Live streams usually last several hours with many viewers joining in the middle. Viewers who join in the middle often want to understand what has happened in the stream. However, catching up with the earlier parts is challenging because it is difficult to know which parts are important in the long, unedited stream while also keeping up with the ongoing stream. We present CatchLive, a system that provides a real-time summary of ongoing live streams by utilizing both the stream content and user interaction data. CatchLive provides viewers with an overview of the stream along with summaries of highlight moments with multiple levels of detail in a readable format. Results from deployments of three streams with 67 viewers show that CatchLive helps viewers grasp the overview of the stream, identify important moments, and stay engaged. Our findings provide insights into designing summarizations of live streams reflecting their characteristics.

A Conversational Approach for Modifying Service Mashups in IoT Environments

CHI'22

Sanghoon Kim, In-Young Ko

Although it is common for users to modify their service mashups in IoT environments, existing conversational approaches for Internet of Things (IoT) service mashups do not support modification because of usability challenges. To support the modification of IoT service mashups through conversational interfaces in a usable manner, we propose the conversational mashup modification agent (CoMMA). Users can modify IoT service mashups using CoMMA through natural language conversations. CoMMA has a two-step mashup modification interaction: an implicature-based localization step, and a modification step with a disambiguation strategy. The localization step allows users to easily search for a mashup by vocalizing their expressions in the environment. The modification step supports users in modifying mashups by speaking simple modification commands. We conducted a user study, and the results show that CoMMA is as effective as visual approaches in terms of task completion time and perceived task workload for modifying IoT service mashups.

TaleBrush: Sketching Stories with Generative Pretrained Language Models

CHI'22

John Joon Young Chung, Wooseok Kim, Kang Min Yoo, Hwaran Lee, Eytan Adar, Minsuk Chang

While advanced text generation algorithms (e.g., GPT-3) have enabled writers to co-create stories with an AI, guiding the narrative remains a challenge. Existing systems often leverage simple turn-taking between the writer and the AI in story development. However, writers remain unsupported in intuitively understanding the AI’s actions or steering the iterative generation. We introduce TaleBrush, a generative story ideation tool that uses line sketching interactions with a GPT-based language model for control and sensemaking of a protagonist’s fortune in co-created stories. Our empirical evaluation found our pipeline reliably controls story generation while maintaining the novelty of generated sentences. In a user study with 14 participants with diverse writing experiences, we found participants successfully leveraged sketching to iteratively explore and write stories according to their intentions about the character’s fortune while taking inspiration from generated stories. We conclude with a reflection on how sketching interactions can facilitate the iterative human-AI co-creation process.

Distracting Moments in Videoconferencing: A Look Back at the Pandemic Period

CHI'22

Minha Lee, Wonyoung Park, Sunok Lee, Sangsu Lee

The COVID-19 pandemic has forced workers around the world to switch their working paradigms from on-site to video-mediated communication. Despite the advantages of videoconferencing, diverse circumstances have prevented people from focusing on their work. One of the most typical problems they face is that various surrounding factors distract them during their meetings. This study focuses on conditions in which remote workers are distracted by factors that disturb, interrupt, or restrict them during their meetings. We aim to explore the various problem situations and user needs. To understand users’ pain points and needs, focus group interviews and participatory design workshops were conducted to learn about participants’ troubled working experiences over the past two years and the solutions they expected. Our study provides a unified framework of distracting factors by which to understand causes of poor user experience and reveals valuable implications to improve videoconferencing experiences.

Interactivity

QuadStretch: A Forearm-wearable Multi-dimensional Skin Stretch Display for Immersive VR Haptic Feedback

CHI'22

Youngbo Aram Shim, Taejun Kim, Geehyuk Lee

This demonstration presents QuadStretch, a multidimensional skin stretch display worn on the forearm for VR interaction. QuadStretch realizes a light and flexible form factor without a large frame grounding the device on the arm, and provides rich haptic feedback through the high expressive performance of the stretch modality and various stimulation sites around the forearm. In the demonstration, the presenter lets participants experience six VR interaction scenarios with QuadStretch feedback: Boxing, Pistol, Archery, Slingshot, Wings, and Climbing. In each scenario, the user’s actions are mapped to skin stretch parameters and fed back, allowing users to experience QuadStretch’s large output space, which enables an immersive VR experience.

TaleBrush: Visual Sketching of Story Generation with Pretrained Language Models

CHI'22

John Joon Young Chung, Wooseok Kim, Kang Min Yoo, Hwaran Lee, Eytan Adar, Minsuk Chang

Advances in text generation algorithms (e.g., GPT-3) have led to new kinds of human-AI story co-creation tools. However, it is difficult for authors to guide this generation and to understand the relationship between input controls and generated output. In response, we introduce TaleBrush, a GPT-based tool that uses abstract visualizations and sketched inputs. The tool allows writers to draw out the protagonist’s fortune with a simple and expressive interaction. The visualization of the fortune serves both as input control and as a representation of what the algorithm generated (a story with varying fortune levels). We hope this demonstration leads the community to consider novel controls and sensemaking interactions for human-AI co-creation.

Late-Breaking Work

Effect of Contact Points Feedback on Two-Thumb Touch Typing in Virtual Reality

CHI'22

Jeongmin Son, Sunggeun Ahn, Sunbum Kim, Geehyuk Lee

Two-thumb touch typing (4T) is a touchpad-based text-entry technique also used in virtual reality (VR) systems. However, the performance of 4T in VR is far below that of 4T in a real environment, such as on a smartphone. Unlike “real 4T”, 4T in VR provides virtual cursors representing the thumb positions determined by a position tracker. The virtual cursor positions may differ from the thumb contact points on an input surface. Still, users may regard them as their thumb contact points. In this case, the virtual cursor movements may conflict with the thumb movements perceived by their proprioception and may contribute to typing errors. We hypothesized that virtual cursors accurately representing the contact points of the thumb can improve the performance of 4T in VR. We designed a method to provide accurate contact point feedback, and showed that accurate contact point feedback has a statistically significant positive effect on the speed of 4T in VR.

An Interactive Car Drawing System with Tick'n'Draw for Training Perceptual and Perspective Drawing Skills

CHI'22

Seung-Jun Lee, Joon Hyub Lee, Seok-Hyung Bae

Young children love to draw. However, at around age 10, they begin to feel that their drawings are unrealistic and give up drawing altogether. This study aims to help those who did not receive proper training in drawing at that age and, as a result, remain at that level of drawing ability. First, through 12 drawing workshop sessions, we condensed 2 prominent art education books into 10 core drawing skills. Second, we designed and implemented a novel interactive system that helps the user repeatedly train these skills in the 5 stages of drawing a nice car in accurate perspective. Our novel interactive technique, Tick’n’Draw, inspired by the drawing habits of experts, provides friendly guidance so that the user does not get lost in the many steps of perceptual and perspective drawing. Third, through a pilot test, we found that our system is quick to learn, easy to use, and can potentially improve real-world drawing abilities with continued use.

Mine Yourself!: A Role-playing Privacy Tutorial in Virtual Reality Environment

CHI'22

Junsu Lim, Hyeonggeun Yun, Auejin Ham, Sunjun Kim

Virtual Reality (VR) carries potential privacy risks, as it collects a wide range of data at high density. Various designs for providing information on a Privacy Policy (PP) have improved awareness of and motivation toward privacy risks. However, most of them have focused on desktop environments, not utilizing the full potential of VR’s immersive interactivity. Therefore, we propose a role-playing mechanism that provides an immersive experience of interacting with a PP’s key entities. First, our formative study found that VR users had limited awareness of what data are collected and how to control them. Following this, we implemented a VR privacy tutorial based on our role-playing mechanism and the PPs of off-the-shelf VR devices. Our privacy tutorial increased privacy awareness by an amount similar to a conventional PP, with significantly higher satisfaction (p=0.007). Our tutorial also showed marginally higher usability (p=0.11).

Exploring the Effects of AI-assisted Emotional Support Processes in Online Mental Health Community

CHI'22

Donghoon Shin, Subeen Park, Esther Hehsun Kim, Soomin Kim, Jinwook Seo, Hwajung Hong

Social support in online mental health communities (OMHCs) is an effective and accessible way of managing mental wellbeing. In this process, sharing emotional support is considered crucial to thriving social support in OMHCs, yet it is often difficult for both seekers and providers. To support empathetic interactions, we designed an AI-infused workflow that allows users to write emotionally supportive messages on other users’ posts, based on the elicitation of the seeker’s emotion and contextual keywords from their writing. In a preliminary user study (N=10), we found that the system helped seekers clarify their emotions and write their posts concretely. Providers could also learn how to react empathetically to the post. Based on these results, we suggest design implications for our proposed system.

Virfie: Virtual Group Selfie Station for Remote Togetherness

CHI'22

Hyerin Im, Taewan Kim, Eunhee Jung, bonhee ku, Seungho Baek, Tak Yeon Lee

Selfies have become a prominent means of online communication. Group selfies, in particular, encourage people to represent their identity as part of a group and foster a sense of belonging. During the COVID-19 pandemic, video conferencing systems have been used as a tool for group selfies. However, conventional systems are not ideal for group selfies due to the rigidness of grid-based layouts, information overload, and lack of eye contact. To explore design opportunities and needs for a novel virtual group selfie platform, we conducted a participatory design study and identified three characteristics of virtual group selfie scenarios: “context with narratives”, “interactive group tasks”, and “capturing subtle moments.” We implemented Virfie, a web-based platform that enables users to take group selfies with embodied social interaction, and to create and customize selfie scenarios using a novel JSON specification. To validate our design concept and identify usability issues, we conducted a user study. Feedback from the study participants suggests that Virfie is effective at strengthening social interaction and remote togetherness.

CareMouse: An Interactive Mouse System that Supports Wrist Stretching Exercises in the Workplace

CHI'22

Gyuwon Jung, Youwon Shin, Jieun Lim, Uichin Lee

Knowledge workers suffer from wrist pain due to long-term mouse and keyboard use. In this study, we present CareMouse, an interactive mouse system that supports wrist stretching exercises in the workplace. When a stretch alarm is given, users hold CareMouse and do exercises; the system collects wrist movement data and determines whether they follow the accurate stretching motions based on a machine learning algorithm, enabling real-time guidance. We conducted a preliminary user study to understand users’ perception and experience of the system. Our results showed the feasibility of CareMouse in guiding stretching exercises interactively. We provide design implications for augmenting existing tools with auxiliary functions.

ThinkWrite: Design Interventions for Empowering User Deliberation in Online Petition

CHI'22

Jini Kim, Chorong Kim, Ki-Young Nam

Online petitions have served as an innovative means of citizen participation over the past decade. However, their original purpose has been waning due to inappropriate language, fabricated information, and the lack of evidence supporting petitions. This lack of deliberation in online petitions has influenced other users, deteriorating the platform to such a degree that good petitions are seldom generated. Therefore, this study designs interventions that empower users to create deliberative petitions. We conducted user research to observe users’ writing behavior in online petitions and identified causes of non-deliberative petitions. Based on our findings, we propose ThinkWrite, a new interactive app promoting user deliberation. The app includes six main features: a gamified learning process, a writing recommendation system, a guiding interface for self-construction, tailored AI for self-revision, shortcuts for easy archiving of evidence, and a citizen-collaborative page. Finally, the efficacy of the app is demonstrated through user surveys and in-depth interviews.

Student Game Competition

Play With Your Emotions: Exploring Possibilities of Emotions as Game Input in NERO

CHI'22

Valérie Erb, Tatiana Chibisova, Haesoo Kim, Jeongmi Lee, Young Yim Doh

This work presents NERO, a game concept that uses the player’s active emotional input to map the player’s emotional state to representative in-game characters. Emotional input in games has mainly been used as a passive measure to adjust game difficulty or other variables; the player has not been given the possibility to explore and play with their emotions as an active feature. Given the high subjectivity of felt emotions, we focused on the player’s experience of emotional input rather than the objective accuracy of the input sensor. We therefore implemented a proof-of-concept game using heart rate as a proxy for emotion measurement, and through repeated player tests the game mechanics were revised and evaluated. We gained valuable insights for the design of entertainment-focused emotional input games, including emotional connection despite limited accuracy, the influence of the environment, and the importance of calibration. Players overall enjoyed the novel game experience, and their feedback carries useful implications for future games that include active emotional input.

The Melody of the Mysterious Stones: A VR Mindfulness Game Using Sound Spatialization

CHI'22

Haven Kim, Jaeran Choi, Young Yim Doh, Juhan Nam

The Melody of the Mysterious Stones is a VR meditation game that utilizes spatial audio technologies. One of the most common mindfulness exercises is to notice and observe the five senses, including the sense of sound. As a way of helping players focus on their sense of sound, The Melody of the Mysterious Stones makes use of spatialized sounds as game elements. Our play tests showed that players enjoyed playing missions with spatial audio elements. They also reported that spatial audio helped them focus on their sense of sound, and they therefore felt more engaged with the meditation materials.

Evoker: Narrative-based Facial Expression Game for Emotional Development of Adolescents

CHI'22

Seokhyeon Hong, Yeonsoo Choi, Youjin Sung, Yurhee Jin, Young Yim Doh, Jeongmi Lee

Evoker is a narrative-based facial expression game. Due to the COVID-19 pandemic, adolescents must wear masks in their daily lives. However, wearing masks disturbs emotional interaction through facial expressions, a critical component of emotional development, so a negative impact on adolescent emotional development is predicted. To address this problem, we designed Evoker, a narrative-based game that uses real-time facial expression recognition. In this game, players are asked to identify an emotion from the narrative context of a mission and make a facial expression appropriate for that context to clear the challenge. Our game provides an opportunity to practice reading emotional contexts and expressing appropriate emotions, which has high potential for promoting the emotional development of adolescents.

Classy Trash Monster: An Educational Game for Teaching Machine Learning to Non-major Students

CHI'22

Joonhyung Bae, Karam Eum, Haram Kwon, Seolhee Lee, Juhan Nam, Young Yim Doh

As machine learning (ML) becomes more relevant to our lives, ML education for college students without a technical background has become important. However, few educational games are designed to suit the challenges they experience. We introduce an educational game, Classy Trash Monster (CTM), designed to teach ML and data dependency to non-major students learning ML for the first time. The player can easily learn to train a classification model and solve tasks by engaging in simple game activities designed according to an ML pipeline. Simple controls, positive rewards, and clear audiovisual feedback make the game easy to play even for novice players. Playtest results showed that players were able to learn basic ML concepts and how data can impact model results, and that the game made ML feel less difficult and more relevant. However, a proper debriefing session seems crucial to prevent misinterpretations that may occur in the learning process.

