News

CHI 2021

DATE  7 May – 17 May 2021
LOCATION  Online (Yokohama, Japan)
 

We are happy to bring good news! At CHI 2021, KAIST recorded a total of 22 full paper publications (including 7 Honorable Mention Awards), ranking 4th in the number of publications among all CHI 2021 participating institutions. Congratulations on this outstanding achievement!

KAIST CHI Statistics (2015-2021)
Year    Number of Publications    Rank
2015    9                         14
2016    15                        7
2017    7                         26
2018    21                        8
2019    13                        11
2020    15                        14
2021    22                        4

Nation-wide (Korea) CHI Statistics (2015-2021)
Year    Number of Publications    Rank
2015    17                        6
2016    20                        6
2017    16                        11
2018    30                        6
2019    23                        8
2020    29                        7
2021    35                        7

For more information and details about the publications featured at the conference, please refer to the publication list below.
 

CHI 2021 Publications

Designing Metamaterial Cells to Enrich Thermoforming 3D Printed Objects for Post-Print Modification
CHI'21, Honorable Mention

Donghyeon Ko, Jee Bin Yim, Yujin Lee, Jaehoon Pyun, Woohun Lee

In this paper, we present a metamaterial structure called thermoformable cells (TF-Cells) to enrich thermoforming for post-print modification. To date, thermoforming has seen limited use in modifying 3D printed objects due to their low thermal conductivity. TF-Cells consist of beam arrays that readily pass hot air and have high heat transfer. By heating the embedded TF-Cells of a printed object, users can modify not only the deeper areas of the object's surface but also its form factor. With a series of technical experiments, we investigated TF-Cells' thermoformability depending on their structural parameters, orientations, and heating conditions. Next, we present a series of compound cells consisting of TF-Cells and solid structures to adjust stiffness or reduce undesirable shape deformation. Applying the results of the experiments, we built a simple tool for embedding TF-Cells into a 3D model. Using the tool, we implemented examples in the contexts of mechanical fitting, ergonomic fitting, and aesthetic tuning.

A User-Oriented Approach to Space-Adaptive Augmentation: The Effects of Spatial Affordance on Narrative Experience in an Augmented Reality Detective Game
CHI'21, Honorable Mention

Jae-eun Shin, Boram Yoon, Dooyoung Kim, Woontack Woo

Space-adaptive algorithms aim to effectively align the virtual with the real to provide immersive user experiences for Augmented Reality (AR) content across various physical spaces. While such measures rely on real spatial features, efforts to understand those features from the user’s perspective and reflect them in designing adaptive augmented spaces have been lacking. To this end, we compared factors of narrative experience in six spatial conditions during the gameplay of Fragments, a space-adaptive AR detective game. Configured by size and furniture layout, each condition afforded disparate degrees of traversability and visibility. Results show that whereas centered furniture clusters are suitable for higher presence in sufficiently large rooms, the same layout leads to lower narrative engagement. Based on our findings, we suggest guidelines that can enhance the effects of space adaptivity by considering how users perceive and navigate augmented space generated from different physical environments.

GamesBond: Bimanual Haptic Illusion of Physically Connected Objects for Immersive VR Using Grip Deformation
CHI'21, Honorable Mention

Neung Ryu, Hye-Young Jo, Michel Pahud, Mike Sinclair, Andrea Bianchi

Virtual Reality experiences, such as games and simulations, typically support the use of bimanual controllers to interact with virtual objects. To recreate the haptic sensation of holding objects of various shapes and behaviors with both hands, previous researchers have used mechanical linkages between the controllers that render adjustable stiffness. However, the linkage cannot quickly adapt to simulate dynamic objects, nor can it be removed to support free movement. This paper introduces GamesBond, a pair of 4-DoF controllers without a physical linkage but capable of creating the illusion of being connected as a single device, forming a virtual bond. The two controllers work together by dynamically displaying and physically rendering deformations of the hand grips, thus allowing users to perceive a single connected object between the hands, such as a jumping rope. With a user study and various applications, we show that GamesBond increases the realism, immersion, and enjoyment of bimanual interaction.

AtaTouch: Robust Finger Pinch Detection for a VR Controller Using RF Return Loss
CHI'21, Honorable Mention

Daehwa Kim, Keunwoo Park, Geehyuk Lee

Handheld controllers are an essential part of VR systems. Modern sensing techniques enable them to track users’ finger movements to support natural interaction using hands. These sensing techniques, however, often fail to precisely determine whether two fingertips touch each other, which is important for the robust detection of a pinch gesture. To address this problem, we propose AtaTouch, a novel, robust sensing technique for detecting the closure of a finger pinch. It utilizes a change in the coupled impedance of an antenna and human fingers when the thumb and finger form a loop. We implemented a prototype controller in which AtaTouch detects the finger pinch of the grabbing hand. A user test with the prototype showed a finger-touch detection accuracy of 96.4%. Another user test with scenarios of moving virtual blocks demonstrated a low object-drop rate (2.75%) and false-pinch rate (4.40%). The results and feedback from the participants support the robustness and sensitivity of AtaTouch.

ThroughHand: 2D Tactile Interaction to Simultaneously Recognize and Touch Multiple Objects
CHI'21, Honorable Mention

Jingun Jung, Sunmin Son, Sangyoon Lee, Yeonsu Kim, Geehyuk Lee

Users with visual impairments find it difficult to enjoy real-time 2D interactive applications on the touchscreen. Touchscreen applications such as sports games often require simultaneous recognition of and interaction with multiple moving targets through vision. To mitigate this issue, we propose ThroughHand, a novel tactile interaction that enables users with visual impairments to interact with multiple dynamic objects in real time. We designed the ThroughHand interaction to utilize the potential of the human tactile sense that spatially registers both sides of the hand with respect to each other. ThroughHand allows interaction with multiple objects by enabling users to perceive the objects using the palm while providing a touch input space on the back of the same hand. A user study verified that ThroughHand enables users to locate stimuli on the palm with a margin of error of approximately 13 mm and effectively provides a real-time 2D interaction experience for users with visual impairments.

Secrets of Gosu: Understanding Physical Combat Skills of Professional Players in First-Person Shooters
CHI'21, Honorable Mention

Eunji Park, Sangyoon Lee, Auejin Ham, Minyeop Choi, Sunjun Kim, Byungjoo Lee

In first-person shooters (FPS), professional players (a.k.a. Gosu) outperform amateur players. The secrets behind the performance of professional FPS players have been debated in online communities with many conjectures; however, attempts at scientific verification have been limited. We addressed this conundrum through a data-collection study of the gameplay of eight professional and eight amateur players in the commercial FPS Counter-Strike: Global Offensive. The collected data cover behavioral data from six sensors (motion capture, eye tracker, mouse, keyboard, electromyography armband, and pulse sensor) and in-game data (player data and event logs). We examined conjectures in four categories: aiming, character movement, physicality, and device and settings. Only 6 out of 13 conjectures were supported with statistically sufficient evidence.

A Simulation Model of Intermittently Controlled Point-and-Click Behaviour
CHI'21, Honorable Mention

Seungwon Do, Minsuk Chang, Byungjoo Lee

We present a novel simulation model of point-and-click behaviour that is applicable whether a target is stationary or moving. To enable more realistic simulation than existing models, the model proposed in this study takes into account key features of the user and the external environment, such as intermittent motor control, click decision-making, visual perception, upper limb kinematics, and the effect of the input device. The simulated user’s point-and-click behaviour is formulated as a Markov decision process (MDP), and the user’s action policy is optimised through deep reinforcement learning. As a result, our model successfully and accurately reproduced the trial completion times, distributions of click endpoints, and cursor trajectories of real users. Through an ablation study, we showed how the simulation results change when the model’s sub-modules are individually removed. The implemented model and dataset are publicly available.

Heterogeneous Stroke: Using Unique Vibration Cues to Improve the Wrist-Worn Spatiotemporal Tactile Display

CHI'21

Taejun Kim, Youngbo Aram Shim, Geehyuk Lee

Beyond simple notifications of incoming calls or messages, more complex information such as letters and digits can be delivered through spatiotemporal tactile patterns (STPs) on a wrist-worn tactile display (WTD) with multiple tactors. However, owing to the limited skin area and spatial acuity of the wrist, frequent confusions occur between closely located tactors, resulting in low recognition accuracy. Furthermore, the accuracies reported in previous studies have mostly been measured for a specific posture and could decrease further with free arm postures in real life. Herein, we present Heterogeneous Stroke, a design concept for improving the recognition accuracy of STPs on a WTD. By assigning unique vibrotactile stimuli to each tactor, the confusion between tactors can be reduced. Through our implementation of Heterogeneous Stroke, alphanumeric characters could be delivered with high accuracy (93.8% for the 26 letters and 92.4% for the 10 digits) across different arm postures.

RubySlippers: Supporting Content-based Voice Navigation for How-to Videos

CHI'21

Minsuk Chang, Mina Huh, Juho Kim

Directly manipulating the timeline, such as scrubbing for thumbnails, is the standard way of controlling how-to videos. However, when how-to videos involve physical activities, people inconveniently alternate between controlling the video and performing the tasks. Adopting a voice user interface allows people to control the video with voice while performing the tasks with their hands. However, naively translating timeline manipulation into a voice user interface (VUI) results in temporal referencing (e.g., “rewind 20 seconds”), which requires a different mental model for navigation and thereby limits users’ ability to peek into the content. We present RubySlippers, a system that supports efficient content-based voice navigation through keyword-based queries. Our computational pipeline automatically detects referenceable elements in the video, and finds the video segmentation that minimizes the number of needed navigational commands. Our evaluation (N=12) shows that participants could perform three representative navigation tasks with fewer commands and less frustration using RubySlippers than the conventional voice-enabled video interface.

Personalizing Ambience and Illusionary Presence: How People Use “Study with me” Videos to Create Effective Studying Environments

CHI'21

Yoonjoo Lee, John Joon Young Chung, Jean Y. Song, Minsuk Chang, Juho Kim

“Study with me” videos contain footage of people studying for hours, in which social components like conversations or informational content like instructions are absent. Recently, they have become increasingly popular on video-sharing platforms. This paper provides the first broad look into what “study with me” videos are and how people use them. We analyzed 30 “study with me” videos and conducted 12 interviews with their viewers to understand their motivation and viewing practices. We identified a three-factor model that explains the mechanism for shaping a satisfactory studying experience in general. One of the factors, a well-suited ambience, was difficult to achieve because of two common challenges: external conditions that prevent studying in study-friendly places and the extra cost needed to create a personally desired ambience. We found that the viewers used “study with me” videos to create a personalized ambience at a lower cost, to find controllable peer pressure, and to get emotional support. These findings suggest that the viewers self-regulate their learning through watching “study with me” videos to improve efficiency even when studying alone at home.

Sticky Goals: Understanding Goal Commitments for Behavioral Changes in the Wild

CHI'21

Hyunsoo Lee, Auk Kim, Hwajung Hong, Uichin Lee

A commitment device, an attempt to bind oneself for a successful goal achievement, has been used as an effective strategy to promote behavior change. However, little is known about how commitment devices are used in the wild, and what aspects of commitment devices are related to goal achievements. In this paper, we explore a large-scale dataset from stickK, an online behavior change support system that provides both financial and social commitments. We characterize the patterns of behavior change goals (e.g., topics and commitment setting) and then perform a series of multilevel regression analyses on goal achievements. Our results reveal that successful goal achievements are largely dependent on the configuration of financial and social commitment devices, and a mixed commitment setting is considered beneficial. We discuss how our findings could inform the design of effective commitment devices, and how large-scale data can be leveraged to support data-driven goal elicitation and customization. 

Winder: Linking Speech and Visual Objects to Support Communication in Asynchronous Collaboration

CHI'21

Tae Soo Kim, Seungsu Kim, Yoonseo Choi, Juho Kim

Team members commonly collaborate on visual documents remotely and asynchronously. Particularly, students are frequently restricted to this setting as they often do not share work schedules or physical workspaces. As communication in this setting has delays and limits the main modality to text, members exert more effort to reference document objects and understand others’ intentions. We propose Winder, a Figma plugin that addresses these challenges through linked tapes—multimodal comments of clicks and voice. Bidirectional links between the clicked-on objects and voice recordings facilitate understanding tapes: selecting objects retrieves relevant recordings, and playing recordings highlights related objects. By periodically prompting users to produce tapes, Winder preemptively obtains information to satisfy potential communication needs. Through a five-day study with eight teams of three, we evaluated the system’s impact on teams asynchronously designing graphical user interfaces. Our findings revealed that producing linked tapes could be as lightweight as face-to-face (F2F) interactions while transmitting intentions more precisely than text. Furthermore, with preempted tapes, teammates coordinated tasks and invited members to build on each other’s work.

"Good Enough!": Flexible Goal Achievement with Margin-based Outcome Evaluation

CHI'21

Gyuwon Jung, Jio Oh, Youjin Jung, Juho Sun, Ha-Kyung Kong, Uichin Lee

Traditional goal setting simply assumes a binary outcome for goal evaluation. This binary judgment does not consider a user’s effort, which may demotivate the user. This work explores the possibility of mitigating this negative impact with a slight modification on the goal evaluation criterion, by introducing a ‘margin’ that is widely used for quality control in the manufacturing fields. A margin represents a range near the goal where the user’s outcome will be regarded as ‘good enough’ even if the user fails to reach it. We explore users’ perceptions and behaviors through a large-scale survey study and a small-scale field experiment using a coaching system to promote physical activity. Our results provide positive evidence on the margin, such as lowering the burden of goal achievement and increasing motivation to make attempts. We discuss practical design implications on margin-enabled goal setting and evaluation for behavioral change support systems.

GoldenTime: Exploring System-Driven Timeboxing and Micro-Financial Incentives for Self-Regulated Phone Use

CHI'21

Joonyoung Park, Hyunsoo Lee, Sangkeun Park, Kyong-Mee Chung, Uichin Lee

User-driven intervention tools such as self-tracking help users self-regulate problematic smartphone use. These tools basically assume active user engagement, but prior studies have warned of a lack of user engagement over time. This paper proposes GoldenTime, a mobile app that promotes self-regulated usage behavior via system-driven proactive timeboxing and micro-financial incentives framed as gain or loss for behavioral reinforcement. We conducted a large-scale user study (n = 210) to explore how our proactive timeboxing and micro-financial incentives influence users’ smartphone usage behaviors. Our findings show that GoldenTime’s timeboxing-based micro-financial incentives are effective in self-regulating smartphone use, and that incentive framing has a significant impact on user behavior. We provide practical design guidelines for persuasive technology design aimed at promoting digital wellbeing.

Exploring the Use of a Voice-based Conversational Agent to Empower Adolescents with Autism Spectrum Disorder

CHI'21

Inha Cha, Sung-In Kim, Hwajung Hong, Heejeong Yoo, Youn-kyung Lim

Voice-based Conversational Agents (VCAs) have served as personal assistants that support individuals with special needs. Adolescents with Autism Spectrum Disorder (ASD) may also benefit from VCAs in dealing with their everyday needs and challenges, ranging from self-care to social communication. In this study, we explored how VCAs could encourage adolescents with ASD in navigating various aspects of their daily lives through the two-week use of VCAs and a series of participatory design workshops. Our findings demonstrated that VCAs could be an engaging, empowering, emancipating tool that supports adolescents with ASD in addressing their needs, personalities, and expectations, such as promoting self-care skills, regulating negative emotions, and practicing conversational skills. We propose implications for using off-the-shelf technologies as personal assistants for users with ASD in assistive technology design. We suggest design implications for promoting positive opportunities while mitigating the remaining challenges of VCAs for adolescents with ASD.

MomentMeld: AI-augmented Mobile Photographic Memento towards Mutually Stimulatory Inter-generational Interaction

CHI'21

Bumsoo Kang, Seungwoo Kang, Inseok Hwang

ToonNote: Improving Communication in Computational Notebooks Using Interactive Data Comics

CHI'21

DaYe Kang, Tony Ho, Nicolai Marquardt, Bilge Mutlu, Andrea Bianchi

Elevate: A Walkable Pin-Array for Large Shape-Changing Terrains

CHI'21

Seungwoo Je, Hyunseung Lim, Kongpyung Moon, Shan-Yuan Teng, Jas Brooks, Pedro Lopes, Andrea Bianchi

Human Perceptions on Moral Responsibility of AI: A Case Study in AI-Assisted Bail Decision-Making

CHI'21

Gabriel Lima, Nina Grgić-Hlača, Meeyoung Cha

Virtual Camera Layout Generation using a Reference Video

CHI'21

Jung Eun Yoo, Kwanggyoon Seo, Sanghun Park, Jaedong Kim, Dawon Lee, Junyong Noh

Late-Breaking Work

Post-Post-it: A Spatial Ideation System in VR for Overcoming Limitations of Physical Post-it Notes

Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (LBW)

Joon Hyub Lee, Donghyeok Ma, Haena Cho, Seok-Hyung Bae

Post-it notes are great problem-solving tools. However, physical Post-it notes have limitations: surfaces for attaching them can run out; rearranging them can be labor-intensive; documenting and storing them can be cumbersome. We present Post-Post-it, a novel VR interaction system that overcomes these physical limitations. We derived design requirements from a formative study involving a problem-solving meeting using Post-it notes. Then, through physical prototyping, using physical materials such as Post-it notes, transparent acrylic panels, and masking tape, we designed a set of lifelike VR interactions based on hand gestures that the user can perform easily and intuitively. With our system, the user can create and place Post-it notes in an immersive space that is large enough to ideate freely, quickly move, copy, or delete many Post-it notes at once, and easily manage the results.

I Can't Talk Now: Speaking with Voice Output Communication Aid Using Text-to-Speech Synthesis During Multiparty Video Conference

Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (LBW)

Wooseok Kim, Sangsu Lee

I Want More than 👍: User-generated Icons for Better Video-mediated Communications on the Collaborative Design Process

Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (LBW)

Haena Cho, Hyeonjeong Im, Sunok Lee, Sangsu Lee

How the Death-themed Game Spiritfarer Can Help Players Cope with the Loss of a Loved One

Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (LBW)

Karam Eum, Valérie Erb, Subin Lin, Sungpil Wang, Young Yim Doh

“I Don’t Know Exactly but I Know a Little”: Exploring Better Responses of Conversational Agents with Insufficient Information

Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (LBW)

Minha Lee, Sangsu Lee

Bubble Coloring to Visualize the Speech Emotion

Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (LBW)

Qinyue Chen, Yuchun Yan, Hyeon-Jeong Suk

Guideline-Based Evaluation and Design Opportunities for Mobile Video-based Learning

Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (LBW)

Jeongyeon Kim, Juho Kim

Workshops & Symposia

Challenges in Devising Resources for Ethics: What Should We Consider When Designing Toolkits to Tackle AI Ethical Issues for Practitioners?

CHI 2021 (The ACM CHI Conference on Human Factors in Computing Systems 2021) Workshop on Co-designing Ethics.

Inha Cha and Youn-kyung Lim

Artificial Intelligence (AI) technologies have become interwoven into our daily contexts through various services and products, and discussions on AI’s social impact are actively underway. As awareness of the social impact of AI technology has increased, studies focusing on algorithmic bias and its harms have gained attention, as have efforts to mitigate social bias. One way to address this problem is to support and guide the practitioners who design these technologies. Accordingly, various toolkits and methods have been devised to support practitioners, including checklists, open-source software for detecting algorithmic bias, games, and activity-based approaches. This paper discusses the pros and cons of existing toolkits according to their characteristics. By examining the existing toolkits, we want to discuss what we should consider before designing toolkits to tackle AI ethics.

CHI 2019

At CHI 2019, HCI@KAIST presents 14 papers, 6 late-breaking works, and 3 workshops, including one Best Paper and one Honorable Mention. These works come from 11 labs across 4 different schools and departments at KAIST. We thank our outstanding colleagues and collaborators from industry, research centers, and universities around the world.

We will be hosting the KAIST HOSPITALITY NIGHT event on May 7th (Tue).
Join us at HCI@KAIST HOSPITALITY NIGHT!

DATE   May 7th (Tue) 2019
TIME  20:00 – 23:00
LOCATION  The Strip Joint Glasgow – pizza place & drink monger (956 Argyle street, Glasgow G3 8LU)

 

Paper & Notes

PicMe: Interactive Visual Guidance for Taking Requested Photo Composition

Monday 11:00–12:20 | Session: On the Streets | Room: Boisdale 1

  • Minju Kim, Graduate School of Culture Technology, KAIST
  • Jungjin Lee, KAI Inc., Daejeon

PicMe is a mobile application that provides interactive onscreen guidance to help the user take pictures of a composition that another person requests. Once the requester captures a picture of the desired composition and delivers it to the user (photographer), a 2.5D guidance system, called the virtual frame, guides the user in real time by showing a three-dimensional composition of the target image (i.e., size and shape). In addition, we provide feedback according to the matching accuracy rate via a small target image in an inset window, along with edge visualization for finer alignment of detail elements. We implemented PicMe to work fully in mobile environments. We then conducted a preliminary user study to evaluate the effectiveness of PicMe compared to traditional 2D guidance methods. The results show that PicMe helps users reach their target images more accurately and quickly by giving participants more confidence in their tasks.

Co-Performing Agent: Design for Building User-Agent Partnership in Learning and Adaptive Services

Wednesday 16:00–17:20 | Session: The One with Bots | Room: Boisdale 1

  • Da-jung Kim, Department of Industrial Design, KAIST
  • Youn-kyung Lim, Department of Industrial Design, KAIST

Intelligent agents have become prevalent in everyday IT products and services. To improve an agent’s knowledge of a user and the quality of the personalized service experience, it is important for the agent to cooperate with the user (e.g., asking users to provide their information and feedback). However, few works have examined how to support such user-agent co-performance from a human-centered perspective. To fill this gap, we devised Co-Performing Agent, a Wizard-of-Oz-based research probe of an agent that cooperates with a user to learn, helping users develop a partnership mindset. Incorporating the probe, we conducted a two-month exploratory study, aiming to understand how users experience co-performing with their agent over time. Based on the findings, this paper presents the factors that affected users’ co-performing behaviors and discusses design implications for supporting constructive co-performance and building a resilient user–agent partnership over time.

Ten-Minute Silence: A New Notification UX of Mobile Instant Messenger

Wednesday 12:00–12:20 | Session: UX Methods | Room: Forth

  • In-geon Shin, Department of Industrial Design, KAIST
  • Jin-min Seok, Department of Industrial Design, KAIST
  • Youn-kyung Lim, Department of Industrial Design, KAIST

People receive a tremendous number of messages through mobile instant messaging (MIM), which generates crowded notifications. This study highlights our attempt to create a new notification rule to reduce this crowdedness, one that can be recognized by both senders and recipients. We developed an MIM app that provides only one notification per conversation session, which is a group of consecutive messages distinguished by a ten-minute silence period. Through a two-week field study, 20,957 message logs and interview data from 17 participants revealed that MIM notifications affect not only the recipients’ experiences before opening the app but also the entire conversation experience, including that of the senders. The new notification rule created new social norms for the participants’ use of MIM. We report themes about the changes in the MIM experience, which will expand the role of notifications for future MIM apps.

Like A Second Skin: Understanding How Epidermal Devices Affect Human Tactile Perception

Thursday 14:00–15:20 | Paper Session: Skin and Textiles | Hall 2

  • Aditya Shekhar Nittala, Saarland University
  • Klaus Kruttwig, INM-Leibniz Institute
  • Jaeyeon Lee, School of Computing, KAIST
  • Roland Bennewitz, INM-Leibniz Institute
  • Eduard Arzt, INM-Leibniz Institute
  • Jürgen Steimle, Saarland University

The emerging class of epidermal devices opens up new opportunities for skin-based sensing, computing, and interaction. Future design of these devices requires an understanding of how skin-worn devices affect natural tactile perception. In this study, we approach this research challenge by proposing a novel classification system for epidermal devices based on flexural rigidity and by testing advanced adhesive materials, including tattoo paper and thin films of poly(dimethylsiloxane) (PDMS). We report on the results of three psychophysical experiments that investigated the effect of epidermal devices of different rigidity on passive and active tactile perception. We analyzed human tactile sensitivity thresholds, two-point discrimination thresholds, and roughness discrimination abilities on three different body locations (fingertip, hand, forearm). Generally, a correlation was found between device rigidity and tactile sensitivity thresholds as well as roughness discrimination ability. Surprisingly, thin epidermal devices based on PDMS with a hundred times the rigidity of commonly used tattoo paper resulted in comparable levels of tactile acuity. The material offers the benefit of increased robustness against wear and the option to re-use the device. Based on our findings, we derive design recommendations for epidermal devices that combine tactile perception with device robustness.

VirtualComponent: A Mixed-Reality Tool for Designing and Tuning Breadboarded Circuits

Wednesday 10:00–10:20 | Session: Fabricating Electronics | Room: Hall 1

  • Yoonji Kim, Department of Industrial Design, KAIST
  • Youngkyung Choi, Department of Industrial Design, KAIST
  • Hyein Lee, Department of Industrial Design, KAIST
  • Geehyuk Lee, School of Computing, KAIST
  • Andrea Bianchi, Department of Industrial Design, KAIST

Prototyping electronic circuits is an increasingly popular activity, supported by the work of researchers who have developed toolkits to improve the design, debugging, and fabrication of electronics. While past work mainly dealt with circuit topology, in this paper we propose a system for determining or tuning the values of the circuit components. Based on the results of a formative study with seventeen makers, we designed VirtualComponent, a mixed-reality tool that allows users to digitally place electronic components on a real breadboard, tune their values in software, and see these changes applied to the physical circuit in real time. VirtualComponent is composed of a set of plug-and-play modules containing banks of components, and a custom breadboard managing the connections and components’ values. Through example usages and the results of an informal study with twelve makers, we demonstrate that VirtualComponent is easy to use and encourages users to test components’ value configurations with little effort.

LocknType: Lockout Task Intervention for Discouraging Smartphone App Use

Monday 14:00-15:20 | Paper Session: Human-Smartphone Interaction | Hall 2

  • Jaejeung Kim, Graduate School of Knowledge Service Engineering, KAIST
  • Joonyoung Park, Graduate School of Knowledge Service Engineering, KAIST
  • Hyunsoo Lee, Graduate School of Knowledge Service Engineering, KAIST
  • Minsam Ko, College of Computing, Hanyang University
  • Uichin Lee, Graduate School of Knowledge Service Engineering, KAIST

Instant access and gratification make it difficult for us to self-limit the use of smartphone apps. We hypothesize that a slight increase in the interaction cost of accessing an app could successfully discourage app use. We propose a proactive intervention that requests users to perform a simple lockout task (e.g., typing a fixed-length number) whenever a target app is launched. We investigate how lockout tasks with varying workloads (i.e., pause only without number input, 10-digit input, and 30-digit input) influence a user’s decision making, through a 3-week in-situ experiment with 40 participants. Our findings show that even the pause-only task, which requires a user to press a button to proceed, discouraged an average of 13.1% of app use, and the 30-digit-input task discouraged 47.5%. We derived determinants of app use and non-use decision making for a given lockout task. We further provide implications for persuasive technology design for discouraging undesired behaviors.

Evaluating the Combination of Visual Communication Cues for HMD-based Mixed Reality Remote Collaboration

Tuesday 09:00 - 10:20 | Session of X Reality Evaluations | Room Dochart 2

  • Seungwon Kim, School of ITMS, University of South Australia
  • Gun Lee, School of ITMS, University of South Australia
  • Weidong Huang, Swinburne University of Technology
  • Hayun Kim, Graduate School of Culture Technology, KAIST
  • Woontack Woo, Graduate School of Culture Technology, KAIST
  • Mark Billinghurst, School of ITMS, University of South Australia

Many researchers have studied various visual communication cues (e.g. pointer, sketching, and hand gesture) in Mixed Reality remote collaboration systems for real-world tasks. However, the effect of combining them has not been well explored. We studied the effect of these cues in four combinations: hand only, hand + pointer, hand + sketch, and hand + pointer + sketch, with three problem tasks: Lego, Tangram, and Origami. The study results showed that participants completed the task significantly faster and reported a significantly higher level of usability when the sketch cue was added to the hand gesture cue, but not when the pointer cue was added. Participants also preferred the combinations including hand and sketch cues over the other combinations. However, using additional cues (pointer or sketch) increased the perceived mental effort and did not improve the feeling of co-presence. We discuss the implications of these results and future research directions.

The Effects of Interruption Timing on Autonomous Height-Adjustable Desks that Respond to Task Changes

Tuesday 11:00 - 12:20 | Session of In the Office | Room Alsh 1

  • Bokyung Lee, Department of Industrial Design, KAIST
  • Sindy Wu, School of Computer Science and Technology, KAIST
  • Maria Jose Reyes, Department of Industrial Design, KAIST
  • Daniel Saakes, Department of Industrial Design, KAIST

Actuated furniture, such as electric adjustable sit-stand desks, helps users vary their posture and contributes to comfort and health. However, studies have found that users rarely initiate height changes. Therefore, in this paper, we look into furniture that adjusts itself to the user’s needs. A situated interview study indicated task-changing as an opportune moment for automatic height adjustment. We then performed a Wizard of Oz study to find the best timing for changing desk height to minimize interruption and discomfort. The results are in line with prior work on task interruption in graphical user interfaces and show that the table should change height during a task change. However, the results also indicate that until users build trust in the system, they prefer actuation after a task change to experience the impact of the adjustment.

SmartManikin: Virtual Humans with Agency for Design Tools

Tuesday 16:00 - 17:20 | Session of Design Tools | Room Alsh 1

  • Bokyung Lee, Department of Industrial Design, KAIST
  • Taeil Jin, Graduate School of Culture Technology, KAIST
  • Sung-Hee Lee, Faculty of Art and Design, University of Tsukuba
  • Daniel Saakes, Department of Industrial Design, KAIST

When designing comfort and usability in products, designers need to evaluate aspects ranging from anthropometrics to use scenarios. Therefore, virtual and poseable mannequins are employed as a reference in early-stage tools and for evaluation in the later stages. However, tools to intuitively interact with virtual humans are lacking. In this paper, we introduce SmartManikin, a mannequin with agency that responds to high-level commands and to real-time design changes. We first captured human poses with respect to desk configurations, identified key features of the pose and trained regression functions to estimate the optimal features at a given desk setup. The SmartManikin’s pose is generated by the predicted features as well as by using forward and inverse kinematics. We present our design, implementation, and an evaluation with expert designers. The results revealed that SmartManikin enhances the design experience by providing feedback concerning comfort and health in real time.

Slow Robots for Unobtrusive Posture Correction

Tuesday 09:00 - 10:20 | Session of Weighty Interactions | Room Dochart 1

  • Joon-Gi Shin, Department of Industrial Design, KAIST
  • Eiji Onchi, Graduate School of Comprehensive Human Sciences, University of Tsukuba
  • Maria Jose Reyes, Department of Industrial Design, KAIST
  • Junbong Song, TeamVoid
  • Uichin Lee, Graduate School of Knowledge Service Engineering, KAIST
  • Seung-Hee Lee, Faculty of Art and Design, University of Tsukuba
  • Daniel Saakes, Department of Industrial Design, KAIST

Prolonged static and unbalanced sitting postures during computer usage contribute to musculoskeletal discomfort. In this paper, we investigated the use of a very slowly moving monitor for unobtrusive posture correction. In a first study, we identified display velocities below the perception threshold and observed how users (without being aware of it) responded by gradually following the monitor’s motion. From the results, we designed a robotic monitor that moves imperceptibly to counterbalance unbalanced sitting postures and induce posture correction. In an evaluation study (n=12), we had participants work for four hours each without and with our prototype (8 hours in total). Results showed that actuation significantly increased the frequency of swift, non-disruptive posture corrections.

How to Design Voice Based Navigation for How-To Videos

Monday 11:00 - 12:20 | Session of Interacting with Videos

  • Minsuk Chang, School of Computing, KAIST
  • Anh Truong, Adobe Research, Stanford University
  • Oliver Wang, Adobe Research
  • Maneesh Agrawala, Stanford University
  • Juho Kim, School of Computing, KAIST

When watching how-to videos related to physical tasks, users’ hands are often occupied by the task, making voice input a natural fit. To better understand the design space of voice interactions for how-to video navigation, we conducted three think-aloud studies using: 1) a traditional video interface, 2) a research probe providing a voice-controlled video interface, and 3) a wizard-of-oz interface. From the studies, we distill seven navigation objectives and their underlying intents: pace control pause, content alignment pause, video control pause, reference jump, replay jump, skip jump, and peek jump. Our analysis found that users’ navigation objectives and intents affect the choice of referent type and referencing approach in command utterances. Based on our findings, we recommend 1) supporting conversational strategies like sequence expansions and command queues, 2) allowing users to identify and refine their navigation objectives explicitly, and 3) supporting the seven interaction intents.


Diagnosing and Coping with Mode Errors in Korean-English Dual-language Keyboard

Monday 14:00 - 15:20 | Paper Session: Human-Smartphone Interaction | Hall 2

  • Sangyoon Lee, School of Computing, KAIST
  • Jaeyeon Lee, School of Computing, KAIST
  • Geehyuk Lee, School of Computing, KAIST

In countries where languages with non-Latin characters are prevalent, people use a keyboard with two language modes, namely the native language and English, and often experience mode errors. To diagnose the mode error problem, we conducted a field study and observed that 78% of the mode errors occurred immediately after application switching. We implemented four methods (Auto-switch, Preview, Smart-toggle, and Preview & Smart-toggle) based on three strategies to deal with the mode error problem and conducted field studies to verify their effectiveness. In the studies considering Korean-English dual input, Auto-switch was ineffective. On the contrary, Preview significantly reduced the mode errors from 75.1% to 41.3%, and Smart-toggle saved typing cost for recovering from mode errors. In Preview & Smart-toggle, Preview reduced mode errors and Smart-toggle handled 86.2% of the mode errors that slipped past Preview. These results suggest that Preview & Smart-toggle is a promising method for preventing mode errors in the Korean-English dual-input environment.

TORC: A Virtual Reality Controller for In-Hand High-Dexterity Finger Interaction

Thursday 11:00 - 12:20 | Paper Session: Unexpected interactions | Clyde Auditorium

  • Jaeyeon Lee, School of Computing, KAIST
  • Mike Sinclair, Microsoft Research
  • Mar Gonzalez-Franco, Microsoft Research
  • Eyal Ofek, Microsoft Research
  • Christian Holz, Microsoft Research

Recent hand-held controllers have explored a variety of haptic feedback sensations for users in virtual reality by producing both kinesthetic and cutaneous feedback from virtual objects. These controllers are grounded to the user’s hand and can only manipulate objects through arm and wrist motions, not using the dexterity of their fingers as they would in real life. In this paper, we present TORC, a rigid haptic controller that renders virtual object characteristics and behaviors such as texture and compliance. Users hold and squeeze TORC using their thumb and two fingers and interact with virtual objects by sliding their thumb on TORC’s trackpad. During the interaction, vibrotactile motors produce sensations to each finger that represent the haptic feel of squeezing, shearing or turning an object. Our evaluation showed that using TORC, participants could manipulate virtual objects more precisely (e.g., position and rotate objects in 3D) than when using a conventional VR controller.

Geometrically Compensating Effect of End-to-End Latency in Moving-Target Selection Games

Wednesday 16:00-17:20 | Session of Gameplay Analysis and Latency | Room Hall2

  • Injung Lee, Graduate School of Culture Technology, KAIST
  • Sunjun Kim, Aalto University
  • Byungjoo Lee, Graduate School of Culture Technology, KAIST

Effects of unintended latency on gamer performance have been reported. End-to-end latency can be corrected by post-input manipulation of activation times, but this gives the player an unnatural gameplay experience. For moving-target selection games such as Flappy Bird, this paper presents a predictive model of the effect of latency on error rate and a novel method for compensating for latency effects by adjusting the game’s geometry design, e.g., by modifying the size of the selection region. Without manipulation of the game clock, this can keep the user’s error rate constant even if the end-to-end latency of the system changes. The approach extends the current model of moving-target selection with two additional assumptions about the effects of latency: (1) latency reduces players’ cue-viewing time and (2) latency pushes the mean of the input distribution backward. The model and method proposed have been validated through precise experiments.

Late Breaking Work

“What does your Agent look like?” A Drawing Study to Understand Users’ Perceived Persona of Conversational Agent

Tuesday - 10:20 - 11:00 | Session of Des | Room: Hall 4

  • Sunok Lee, Department of Industrial Design, KAIST
  • Sungbae Kim, Department of Industrial Design, KAIST
  • Sangsu Lee, Department of Industrial Design, KAIST

Conversational agents (CAs) are becoming more popular and useful at home. Creating a persona is an important part of designing a conversational user interface (CUI). Since the CUI is a voice-mediated interface, users naturally form an image of the CA’s persona through its voice. Because that image affects users’ interaction with CAs while using a CUI, we sought to understand users’ perceptions via a drawing method. We asked 31 users to draw an image of the CA that communicates with them. Through a qualitative analysis of the collected drawings and interviews, we identified various types of CA personas perceived by users and found design factors that influenced users’ perceptions. Our findings help us understand persona perception and provide designers with design implications for creating an appropriate persona.

Designing Personalities of Conversational Agents

Tuesday - 10:20 - 11:00 | Session of Des | Room: Hall 4

  • Hankyung Kim, Department of Industrial Design, KAIST
  • Dong Yoon Koh, Department of Industrial Design, KAIST
  • Gaeun Lee, Samsung Research
  • Jung-Mi Park, Samsung Research
  • Youn-kyung Lim, Department of Industrial Design, KAIST


SolveDeep: A System for Supporting Subgoal Learning in Online Math Problem Solving

Tuesday - 10:20 - 11:00 | Session of Des | Room: Hall 4

  • Hyoungwook Jin, School of Computing, KAIST
  • Minsuk Chang, School of Computing, KAIST
  • Juho Kim, School of Computing, KAIST

Learner-driven subgoal labeling helps learners form a hierarchical structure of solutions with subgoals, which are conceptual units of procedural problem solving. While learning with such a hierarchical structure of a solution in mind is effective in learning problem-solving strategies, developing an interactive feedback system to support subgoal labeling tasks at scale requires significant expert effort, making learner-driven subgoal labeling difficult to apply in online learning environments. We propose SolveDeep, a system that provides feedback on learner solutions with peer-generated subgoals. SolveDeep utilizes a learnersourcing workflow to generate the hierarchical representation of possible solutions, and uses a graph-alignment algorithm to generate a solution graph by merging the populated solution structures, which are then used to generate feedback on future learners’ solutions. We conducted a user study with 7 participants to evaluate the efficacy of our system. Participants performed subgoal learning with two math problems and rated the usefulness of the system’s feedback. The average rating was 4.86 out of 7 (1: not useful, 7: useful), and the system successfully constructed a hierarchical structure of solutions with learnersourced subgoal labels.

Crowdsourcing Perspectives on Public Policy from Stakeholders

Tuesday - 10:20 - 11:00 | Session of Des | Room: Hall 4

  • Hyunwoo Kim, School of Computing, KAIST
  • Eun-Young Ko, School of Computing, KAIST
  • Donghoon Han, School of Computing, KAIST
  • Sung-Chul Lee, School of Computing, KAIST
  • Simon T. Perrault, Singapore University of Technology and Design, Singapore
  • Jihee Kim, School of Business and Technology Management, KAIST
  • Juho Kim, School of Computing, KAIST

Personal deliberation, the process through which people form an informed opinion on social issues, plays an important role in helping citizens construct rational arguments in public deliberation. However, existing information channels for public policies deliver only a few stakeholders’ voices, thus failing to provide a diverse knowledge base for personal deliberation. This paper presents an initial design of PolicyScape, an online system that supports personal deliberation on public policies by helping citizens explore diverse stakeholders and their perspectives on a policy’s effects. Building on the literature on crowdsourced policymaking and policy stakeholders, we present several design choices for crowdsourcing stakeholder perspectives. We introduce perspective-taking as an approach for personal deliberation by helping users consider stakeholder perspectives on policy issues. Our initial results suggest that PolicyScape can collect diverse sets of perspectives from the stakeholders of public policies and help participants discover unexpected viewpoints of various stakeholder groups.

Improving Two-Thumb Touchpad Typing in Virtual Reality

Wednesday 15:20 - 16:00 | Late-breaking Work: Poster Rotation 2 | Hall 4

  • Jeongmin Son, School of Computing, KAIST
  • Sunggeun Ahn, School of Computing, KAIST
  • Sunbum Kim, School of Computing, KAIST
  • Geehyuk Lee, School of Computing, KAIST

Two-Thumb Touchpad Typing (4T) using hand-held controllers is one of the common text entry techniques in Virtual Reality (VR). However, its performance is far below that of two-thumb typing on a smartphone. We explored the possibility of improving its performance, focusing on two factors: the visual feedback of hovering thumbs and the grip stability of the controllers. We examined the effects of these two factors on the performance of 4T in VR in user experiments. The results show that hover feedback had a significant main effect on 4T performance, but grip stability did not. We then investigated the achievable performance of the final 4T design in a longitudinal study, and its results show that users could achieve a typing speed of over 30 words per minute after two hours of practice.

FingMag: Finger Identification Method for Smartwatch

Wednesday 15:20 - 16:00 | Late-breaking Work: Poster Rotation 2 | Hall 4

  • Keunwoo Park, School of Computing, KAIST
  • Geehyuk Lee, School of Computing, KAIST

Interacting with a smartwatch is difficult owing to its small touchscreen. A general strategy to overcome the limitations of the small screen is to increase the input vocabulary. A popular approach to do this is to distinguish fingers and assign different functions to them. As a finger identification method for smartwatches, we propose FingMag, a machine-learning-based method that identifies the finger on the screen with the help of a ring. For this identification, the finger’s touch position and the magnetic field from a magnet embedded in the ring are used. In an offline evaluation using data collected from 12 participants, we show that FingMag can identify the finger with an accuracy of 96.21% in stationary geomagnetic conditions.

Workshops & Symposia

User-Centered Graphical Models of Interaction

Workshop on Computational Modeling in Human-Computer Interaction

  • Minsuk Chang, School of Computing, KAIST
  • Juho Kim, School of Computing, KAIST

In this position paper, I present a set of data-driven techniques for modeling the learning material, learner workflow, and learning task as graphical representations, which at scale can create and support learning opportunities in the wild. I propose that the graphical models resulting from this bottom-up approach can further serve as proxies for representing the learnability bounds of an interface. I also propose an alternative approach that directly aims to “learn” the interaction bounds by modeling the interface as an agent’s sequential decision-making problem. I then illustrate how these data-driven modeling techniques and algorithmic modeling techniques can create a mutually beneficial bridge for advancing the design of interfaces.

Readersourcing an Accurate and Comprehensive Understanding of Health-related Information Represented by Media

Workshop on HCI for Accurate, Impartial and Transparent Journalism: Challenges and Solutions

  • Eun-Young Ko, School of Computing, KAIST
  • Ching Liu, National Tsing Hua University
  • Hyuntak Cha, Seoul National University
  • Juho Kim, School of Computing, KAIST

Health news delivers findings from health-related research to the public. As the delivered information may affect the public’s everyday decisions or behavior, readers should gain an accurate and comprehensive understanding of the research from the articles they read. However, this is rarely achieved, due to incomplete information delivered by news stories and a lack of critical evaluation by readers. In this position paper, we propose a readersourcing approach: engaging readers in a critical reading activity while collecting valuable artifacts that help future readers acquire a more accurate and comprehensive understanding of health-related information. We discuss challenges, opportunities, and design considerations in the readersourcing approach. We then present the initial design of a web-based news reading application that connects health news readers via questioning and answering tasks.

PlayMaker: A Participatory Design Method for Creating Entertainment Application Concepts Using Activity Data

Workshop on HCI for Accurate, Impartial and Transparent Journalism: Challenges and Solutions

  • Dong Yoon Koh, Department of Industrial Design, KAIST
  • Ju Yeon Kim, Department of Industrial Design, KAIST
  • Donghyeok Yun, Department of Industrial Design, KAIST
  • Youn-kyung Lim, Department of Industrial Design, KAIST

The public’s ever-growing interest in health has led the well-being industry to explosive growth over the years. This has propelled activity trackers to become one of the trendiest wearable devices today. Seeking new opportunities for effective data utilization, we present a participatory design method that explores the fusion of activity data with entertainment applications. In this method, we spur participants to design by mixing and matching activity tracker data attributes with existing entertainment application features to produce new concepts. We report two cases of method implementation and further discuss the opportunities of activity tracker data as a means for entertainment application design.

Let’s take photos with our KAIST HCI friends! Take a photo striking the HCI KAIST pose shown in the example and upload it to Instagram with the hashtags #HCIKAIST@CHI2019 or #KAISTNIGHT.