MobileHCI ’19: Proceedings of the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services

SESSION: Full Paper
SeeingHaptics: Visualizations for Communicating Haptic Designs
Rendering haptic feedback in virtual reality is a common approach to enhancing the immersion of virtual reality content. However, current editing tools allow developers to access the haptic feedback only through physical contact with the actuators, making it difficult to iterate quickly on haptic interaction designs. This paper introduces SeeingHaptics, an authoring tool which visualizes haptic properties in 3D scenes. The active area of a given feedback effect is simulated with mesh shapes, while 2D icons indicate the type of haptic sensation. Our evaluation showed that SeeingHaptics helps developers rapidly create haptic feedback after a short training session.
Investigating Smartphone-based Pan and Zoom in 3D Data Spaces in Augmented Reality
In this paper, we investigate mobile devices as interactive controllers to support the exploration of 3D data spaces in head-mounted Augmented Reality (AR). In future mobile contexts, applications such as immersive analysis or ubiquitous information retrieval will involve large 3D data sets, which must be visualized in limited physical space. This necessitates efficient interaction techniques for 3D panning and zooming. Smartphones as additional input devices are promising because they are familiar and widely available in mobile usage contexts. They also allow more casual and discreet interaction compared to free-hand gestures or voice input. We introduce smartphone-based pan & zoom techniques for 3D data spaces and present a user study comparing five techniques. Our results show that spatial device gestures can outperform both touch-based techniques and hand gestures in terms of task completion times and user preference. We discuss our findings in detail and suggest suitable techniques for specific AR navigation tasks.
Understanding Emoji Interpretation through User Personality and Message Context
Emojis are commonly used as non-verbal cues in texting, yet may also lead to misunderstandings due to their often ambiguous meaning. User personality has been linked to understanding of emojis isolated from context, or via indirect personality assessment through text analysis. This paper presents the first study on the influence of personality (measured with the BFI-2) on the understanding of emojis presented in concrete mobile messaging contexts: four recipients (parents, friend, colleague, partner) and four situations (information, arrangement, salutation, romantic). In particular, we presented short text chat scenarios in an online survey (N=646) and asked participants to add appropriate emojis. Our results show that personality factors influence the choice of emojis. In a further open task, participants compared emojis that related work had found to be semantically similar. Here, participants provided rich and varying emoji interpretations, even in defined contexts. We discuss implications for research and design of mobile texting interfaces.
The Influence of Hand Size on Touch Accuracy
Touch accuracy is not just dependent on the performance of the touch sensor itself. Instead, aspects like phone grip or occlusion of the screen have been shown to also have an influence on accuracy. Yet, these are all dependent on one underlying factor: the size and proportions of the user’s hand. To better understand touch input, we investigate how 11 hand features influence accuracy. We find that thumb length in particular correlates significantly with touch accuracy and accounts for about 12% of touch error variance. Furthermore, we show that measures of some higher level interactions also correlate with hand size.
Mapping Perceptions of Humanness in Intelligent Personal Assistant Interaction
Humanness is core to speech interface design. Yet little is known about how users conceptualise perceptions of humanness and how people define their interaction with speech interfaces through this. To map these perceptions, n=21 participants held dialogues with a human and with two speech-interface-based intelligent personal assistants, and then reflected on and compared their experiences using the repertory grid technique. Analysis of the constructs shows that perceptions of humanness are multidimensional, focusing on eight key themes: partner knowledge set, interpersonal connection, linguistic content, partner performance and capabilities, conversational interaction, partner identity and role, vocal qualities, and behavioral affordances. Through these themes, it is clear that users define the capabilities of speech interfaces differently from those of humans, seeing them as more formal, fact based, impersonal and less authentic. Based on the findings, we discuss how the themes help to scaffold, categorise and target research and design efforts, considering the appropriateness of emulating humanness.
Tiger: Wearable Glasses for the 20-20-20 Rule to Alleviate Computer Vision Syndrome
We propose Tiger, an eyewear system for helping users follow the 20-20-20 rule to alleviate Computer Vision Syndrome symptoms. It monitors users’ screen-viewing activities and provides real-time feedback to help users follow the rule. For accurate screen-viewing detection, we devise a lightweight multi-sensory fusion approach with three sensing modalities: color, IMU, and lidar. We also design the real-time feedback to effectively lead users to follow the rule. Our evaluation shows that Tiger accurately detects screen viewing events, and is robust to differences in screen types, contents, and ambient light. Our user study shows positive perception of Tiger regarding its usefulness, acceptance, and real-time feedback.
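The abstract does not detail the fusion pipeline; the sketch below only illustrates one plausible form of such a lightweight fusion, with hypothetical per-window features (colour, head pitch from the IMU, lidar distance) and an off-the-shelf classifier standing in for Tiger's actual approach.

```python
# Hedged sketch, not Tiger's implementation: fuse colour, IMU and lidar
# features per sensing window and classify "screen viewing" vs "not".
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(color_rgb, imu_pitch_deg, lidar_distance_m):
    """color_rgb: (n, 3) samples; imu_pitch_deg, lidar_distance_m: 1-D arrays."""
    return np.concatenate([
        color_rgb.mean(axis=0),                           # average colour of the scene
        [np.mean(imu_pitch_deg), np.std(imu_pitch_deg)],  # typical head pitch and its stability
        [np.median(lidar_distance_m)],                    # typical distance to whatever is in front
    ])

clf = RandomForestClassifier(n_estimators=50)
# clf.fit(X_windows, y_labels)  # trained offline on labelled sensing windows
```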
From Design to Development to Evaluation of a Pregnancy App for Low-Income Women in a Community-Based Setting
Due to the increasing rates of mobile phone adoption in low-income communities in the United States, mobile apps can be used to increase access to prenatal care in this population. But, design and evaluation studies in this area are rare. Using existing guidelines and needs assessment results, we developed MomLink, an app aligned with the needs of the target population and their community health workers. Nine women from a low-income community evaluated the app and provided suggestions to improve its design. After making the suggested changes, we deployed MomLink and its corresponding provider app for a pre-pilot evaluation. The evaluation met an unexpected barrier resulting in low usage of the system by both women and providers. Based on our findings, we discuss opportunities to improve mHealth apps for the target population and better ways to collaborate with community-based partners.
Exploring Cross-Modal Training via Touch to Learn a Mid-Air Marking Menu Gesture Set
While mid-air gestures are an attractive modality with an extensive research history, one challenge with their usage is that the gestures are not self-revealing. Scaffolding techniques to teach these gestures are difficult to implement since the input device, e.g. a hand, wand or arm, cannot present the gestures to the user. In contrast, for touch gestures, feedforward mechanisms (such as Marking Menus or OctoPocus) have been shown to effectively support user awareness and learning. In this paper, we explore whether touch gesture input can be leveraged to teach users to perform mid-air gestures. We show that marking menu touch gestures transfer directly to knowledge of mid-air gestures, allowing performance of these gestures without intervention. We argue that cross-modal learning can be an effective mechanism for introducing users to mid-air gestural input.
How do People Type on Mobile Devices?: Observations from a Study with 37,000 Volunteers
This paper presents a large-scale dataset on mobile text entry collected via a web-based transcription task performed by 37,370 volunteers. The average typing speed was 36.2 WPM with 2.3% uncorrected errors. The scale of the data enables powerful statistical analyses on the correlation between typing performance and various factors, such as demographics, finger usage, and use of intelligent text entry techniques. We report effects of age and finger usage on performance that correspond to previous studies. We also find evidence of relationships between performance and use of intelligent text entry techniques: auto-correct usage correlates positively with entry rates, whereas word prediction usage has a negative correlation. To aid further work on modeling, machine learning and design improvements in mobile text entry, we make the code and dataset openly available.
Help!: I’m Stuck, and there’s no F1 Key on My Tablet!
Older adults are often considered to be less frequent adopters of new technologies, in part due to the increased effort required to learn new interaction paradigms, especially if these need to overcome long-established mental models of technology use. Many current interfaces, such as those of mobile devices, often do not incorporate elements that align with older adults’ models of use: explicit help menus, user manuals, and navigation affordances. The lack of such reassuring elements may cause anxiety to those trying to learn interaction paradigms that are new to them. This paper details the help and support paradigms behind the design of a contextual support interface for tablet devices and describes the results of a usability evaluation with older adult participants.
WatchPen: Using Cross-Device Interaction Concepts to Augment Pen-Based Interaction
Pen-based input is often treated as auxiliary to mobile devices. We posit that cross-device interactions can inspire and extend the design space of pen-based interactions into new, expressive directions. We realize this through WatchPen, a smartwatch mounted on a passive, capacitive stylus that: (1) senses the usage context and leverages it for expression (e.g., changing colour), (2) contains tools and parameters within the display, and (3) acts as an on-demand output. As a result, it provides users with a dynamic relationship between inputs and outputs, awareness of current tool selection and parameters, and increased expressive match (e.g., added ability to mimic physical tools, showing clipboard contents). We discuss and reflect upon a series of interaction techniques that demonstrate WatchPen within a drawing application. We highlight the expressive power of leveraging multiple sensing and output capabilities across both the watch-augmented stylus and the tablet surface.
Cake Cam: Take Your Photo and Be in it Too
Tourists often turn to strangers when they need a photographer while traveling; however, they do so at a cost. Strangers are not typically trained photographers, nor are they telepathically intuiting what composition the tourist wants. Existing smartphone camera interfaces do not communicate the desired framing to the stranger, and prior work in mobile photography guidance does not manage the 3D movement required when composing the tourist’s ideal photo. We offer a new kind of mobile interaction for communicating the intended photo to a stranger without instructions. In our methodology, the tourist first composes a photo with the desired framing. The app, Cake Cam, then stores the camera position and orientation. Finally, 3D augmented reality markers guide the stranger to retake the photo with the tourist now standing in the frame. In our study (n=40), Cake Cam produced more accurate camera placements and required fewer additional instructions than the traditional tourist photography method.
Efficient Speech-Recognition Error-Correction Interface for Japanese Text Entry on Smartwatches
We propose an efficient speech-recognition error-correction interface for Japanese text entry on smartwatches. Although the accuracy of automatic speech recognition (ASR) has significantly improved, an interface for text modification is still essential. Considering the strict limitation of a narrow display area and the practical demand for text modification, the proposed interface arranges the N-best results of ASR and a list of the morphemes composing the 1-best result to enable quick access to any word to be modified. Specifically, multiple screens of the N-best results are switched by horizontal flicks, and another extended screen listing the morpheme sequence of the 1-best result is scrolled by vertical flicks. The proposed interface was compared with a software keyboard and a speech-input-enabled input method editor (IME), which was a simple combination of speech input and software keyboard. The proposed interface outperformed the other two interfaces in terms of time required to complete specified sentences, subjective scores on the System Usability Scale (SUS), and perceived workload quantified using the NASA Task Load Index (NASA-TLX).
I Think It’s Her: Investigating Smartphone Users’ Speculation about Phone Notifications and Its Influence on Attendance
Smartphone users’ decisions about whether to attend to a notification after sensing it are under-researched. We therefore studied 33 Android users, and found that they speculated extensively about notifications’ sources—i.e., which apps and which senders were responsible for them—before attending to them. The participants’ speculation about apps was both more common and more accurate than that about senders. They were also more likely to 1) perceive notifications as important, 2) attend to them, and 3) consider them beneficial if they speculated about them than if they did not or could not. Participants’ speculations were based on the alert’s inherent characteristics, context, and temporality. Inaccurate speculations were mainly caused by unclear signals, insufficient clues, and a multiplicity of possible sources. Ringer mode affected the accuracy of user speculation, but not its frequency or the frequency of attending to notifications.
The Limits of Expert Text Entry Speed on Mobile Keyboards with Autocorrect
Improving mobile keyboard typing speed increases in value as more tasks move to a mobile setting. Autocorrect reduces the time it takes to manually fix typing errors, which increases typing speed. However, recent user studies uncovered an unexplored side-effect: participants’ aversion to typing errors despite autocorrect. We present a computational model of typing on keyboards with autocorrect, which enables precise study of expert typists’ aversion to typing errors on such keyboards. Unlike empirical typing studies that last days, our model evaluates this phenomenon for any autocorrect accuracy in seconds. We show that typists’ aversion to typing errors imposes a limit on upper bound typing speeds, even for highly accurate autocorrect. Our findings motivate future keyboard designs that reduce typists’ aversion to typing errors to increase typing speeds.
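The paper's own model is not reproduced here; the toy computation below merely illustrates the shape of the argument, with all coefficients and the cost structure chosen for illustration only.

```python
# Illustrative toy model (not the authors' model): even with accurate
# autocorrect, an error-averse typist who slows down caps the attainable WPM.
def expected_wpm(base_wpm, error_rate, autocorrect_accuracy, manual_fix_s=2.0):
    t_word = 60.0 / base_wpm                        # seconds per word when typing fluently
    p_manual_fix = error_rate * (1 - autocorrect_accuracy)  # errors autocorrect misses
    return 60.0 / (t_word + p_manual_fix * manual_fix_s)

# Fast but error-prone typing, relying on autocorrect:
print(expected_wpm(base_wpm=60, error_rate=0.15, autocorrect_accuracy=0.95))
# Error-averse typing: fewer errors, but a lower base rate limits the upper bound:
print(expected_wpm(base_wpm=45, error_rate=0.03, autocorrect_accuracy=0.95))
```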
An Evaluation of Discrete and Continuous Mid-Air Loop and Marking Menu Selection in Optical See-Through HMDs
This paper investigates discrete and continuous hand-drawn loops and marks in mid-air as a selection input for gesture-based menu systems on optical see-through head-mounted displays (OST HMDs). We explore two fundamental methods of providing menu selection: the marking menu and the loop menu, and a hybrid method which combines the two. The loop menu design uses a selection mechanism with loops to approximate directional selections in a menu system. We evaluate the merits of loop and marking menu selection in an experiment with two phases and report that 1) the loop-based selection mechanism provides smooth and effective interaction; 2) users prioritize accuracy and comfort over speed for mid-air gestures; 3) users can exploit the flexibility of a final hybrid marking/loop menu design; and, finally, 4) users tend to chunk gestures depending on the selection task and their level of familiarity with the menu layout.
Visualising Location Uncertainty to Support Navigation under Degraded GPS Signals: a Comparison Study
Degraded GPS signals can negatively affect users of mobile Pedestrian Navigation Applications. Visualization of location uncertainty has emerged as a solution to this problem that has proven beneficial to users. However, there are only a small number of different visualizations developed for this purpose. In addition, their actual impact on facilitating navigation in GPS degraded situations has not been studied well. We designed two new visualizations of location uncertainty and compared them to existing ones in terms of efficiency and user acceptance. A field-based user study (N=18) showed that the two new visualizations significantly reduced the number of wrong turns. Users preferred the landmark-based visualization most and ranked it as the most helpful visualization for judging their true location in the environment when faced with GPS degradations. Despite participants being unfamiliar with the new visualizations, the task completion time, subjective task load and user experience for them were not significantly different from the more familiar state-of-the-art visualization.
Private Reader: Using Eye Tracking to Improve Reading Privacy in Public Spaces
Reading in public spaces can often be tricky if we wish to keep the contents away from prying eyes. We propose Private Reader, an eye-tracking approach to maintaining privacy while reading by rendering only the portion of text the reader is gazing at. We conducted a user study evaluating both the reader and the observer in terms of privacy, reading comfort, and reading speed for three reading modes: normal, underscored, and scrambled text. “Scrambled” performs best in terms of the perceived effort and frustration of the shoulder surfer. Our contribution is threefold: we developed a system that preserves privacy by rendering only the text at the reader’s gaze point, we conducted a user study to evaluate user preferences and subjective task load, and we suggest several scenarios where Private Reader is useful in public spaces.
WRIST: Watch-Ring Interaction and Sensing Technique for Wrist Gestures and Macro-Micro Pointing
To better explore the incorporation of pointing and gesturing into ubiquitous computing, we introduce WRIST, an interaction and sensing technique that leverages the dexterity of human wrist motion. WRIST employs a sensor fusion approach which combines inertial measurement unit (IMU) data from a smartwatch and a smart ring. The relative orientation difference of the two devices is measured as the wrist rotation, which is independent of arm rotation and is also position and orientation invariant. Employing our test hardware, we demonstrate that WRIST affords and enables a number of novel yet simple interaction techniques, such as (i) macro-micro pointing without explicit mode switching and (ii) wrist gesture recognition when the hand is held in different orientations (e.g., raised or lowered). We report on two studies to evaluate the proposed techniques and we present a set of applications that demonstrate the benefits of WRIST. We conclude with a discussion of the limitations and highlight possible future pathways for research in pointing and gesturing with wearable devices.
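As an illustration of the underlying idea (not the authors' implementation), the sketch below derives wrist rotation as the relative orientation between two world-referenced unit quaternions from the watch and ring IMUs; the variable names and the angle extraction are assumptions.

```python
# Minimal sketch: wrist rotation as the relative orientation of ring vs. watch.
# Quaternions are assumed to be unit quaternions in (w, x, y, z) order,
# both expressed in the same world frame, so arm motion cancels out.
import numpy as np

def quat_conjugate(q):
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quat_multiply(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    ])

def wrist_rotation(q_watch, q_ring):
    """Relative rotation of the ring (hand) with respect to the watch (forearm)."""
    q_rel = quat_multiply(quat_conjugate(q_watch), q_ring)
    angle_deg = 2.0 * np.degrees(np.arccos(np.clip(abs(q_rel[0]), -1.0, 1.0)))
    return q_rel, angle_deg
```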
Touch Pointing Performance for Uncertain Touchable Sizes of 1D Targets
When users operate smartphones and desktop interfaces with their fingers, there are differences between the motor and visual widths. For example, when a user selects an item from a vertical menu, the area that is physically touchable by the user is often larger than the visual width (e.g., of the label for the item selected). Therefore, the user aims for the label assuming that the label width (the visual width) corresponds to the motor width. Consequently, the user performs operations more carefully than necessary. We conducted an experiment to investigate the effect of the motor and visual widths on finger pointing. After exploring the motor width, participants performed an experimental task. Our experiment shows that the users’ movement time depends on the motor width and can be predicted. We also analyze existing interfaces and discuss the implications.
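The paper's fitted model is not given in the abstract; the sketch below only illustrates, under the standard Fitts'-law formulation, how a prediction could use the motor width in place of the visual width. The coefficients are hypothetical.

```python
# Hedged illustration: Fitts'-law-style movement-time prediction where the
# motor width (physically touchable extent) replaces the visual width.
import math

def predicted_movement_time(distance_mm, motor_width_mm, a=0.2, b=0.15):
    """MT = a + b * log2(D / W_motor + 1), in seconds; a and b are placeholders."""
    index_of_difficulty = math.log2(distance_mm / motor_width_mm + 1)
    return a + b * index_of_difficulty

# Same visual label, larger motor width -> lower predicted movement time.
print(predicted_movement_time(80, 4))   # narrow motor width
print(predicted_movement_time(80, 10))  # wider motor width
```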
HasAnswers: Development of a Mobile Service to Support Young People to Find and Keep a Home
Many young people experience difficulty finding and keeping an independent home, which can lead to homelessness or risk of homelessness. To help address this challenge, a young people’s service in Scotland (Calman Trust) is developing a mobile service called HasAnswers. This paper provides: a description of HasAnswers; the results of iterative testing with 59 young people (36 male, 23 female) using paper and digital prototypes; and feedback from other services with a responsibility for supporting young people to achieve an independent adulthood, as a potential target market for the future scaling up of HasAnswers. While preliminary, the findings indicate the usefulness and acceptability of HasAnswers. The research contributes to HCI work on design for independent living and homelessness. In particular, the paper contributes new insight into the challenge that some users may experience navigating to the information they need, and an approach to address the problem that has been embedded in HasAnswers and preliminarily tested with positive results.
Challenges of Parkinson’s Disease: User Experiences with STOP
Parkinson’s disease (PD) is the second most common neurodegenerative disorder, impacting an estimated seven to ten million people worldwide. The symptoms and progression of the disease, and the effectiveness of medication, are currently measured using subjective measures and visual estimation. We developed and evaluated a mobile application, STOP, for tracking hand motor symptoms, and a medication journal for recording medication intake. We followed 13 PD patients from two countries in a one-month real-world deployment. We found that PD patients are willing to use digital tools, such as STOP, to track their medication intake and symptoms, and are also willing to share such data with their caregivers and medical personnel to improve their own care.
Using Poke Stimuli to Improve a 3×3 Watch-back Tactile Display
A watch-back tactile display (WBTD) is an attractive output option due to its always-available nature. However, employing the commonly used vibration modality on a WBTD may result in low efficiency, since its stimulation area is relatively wide compared with the small contact area of a watch back. We considered using a more localized tactile stimulus, a poke, to improve the efficiency of a WBTD. We built a WBTD consisting of overlapping 3×3 poke and vibrotactile tactor arrays so that it may be used either as a poke display or as a vibrotactile display. An experiment was conducted to optimize the parameters of poke stimuli, and its results revealed that four directional patterns were best recognized when the poking depth was deepest (3 mm) and sensory saltation was exploited. In the next two experiments, we compared the information transfer capacities of the poke and vibrotactile displays. The information transfer capacity of the poke display (1.55 bits) was shown to be higher than that of the vibrotactile display (1.32 bits) in a simulated environment with the mental load of a primary task. This result confirmed our expectation that using a more localized tactile stimulus would improve the efficiency of a WBTD.
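For readers unfamiliar with the metric, information transfer in bits is conventionally estimated as the mutual information of a stimulus–response confusion matrix; the sketch below shows that standard computation (it is not the authors' code, and assumes the usual confusion-matrix formulation).

```python
# Estimating information transfer (IT) in bits from an identification
# experiment's confusion matrix.
import numpy as np

def information_transfer(confusion):
    """confusion[i, j] = number of trials where stimulus i elicited response j."""
    n = confusion.sum()
    p_ij = confusion / n                    # joint probabilities
    p_i = p_ij.sum(axis=1, keepdims=True)   # stimulus marginals
    p_j = p_ij.sum(axis=0, keepdims=True)   # response marginals
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p_ij * np.log2(p_ij / (p_i * p_j))
    return np.nansum(terms)                 # zero-probability cells contribute nothing

# A perfectly identified 4-pattern set would transfer log2(4) = 2 bits.
```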
Exploring the Use of Fingerprint Sensor Gestures for Unlock Journaling: A Comparison With Slide-to-X
Experience Sampling Methods (ESM) allow the timely collection of subjective self-reports that would otherwise be impossible to measure accurately in ecologically valid scenarios. Recent work suggests that unlock journaling allowed the collection of more data points per day, was faster and perceived as being less intrusive by participants than notification-based ESM. This work extends the unlock journaling field by introducing a novel lockscreen data collection mechanism harnessing an increasingly popular authentication mechanism: the fingerprint sensor. Results collected during a twelve-day user study with fingerprint sensor users show that fingerprint sensor gesture reporting compares favorably to Slide-to-X approaches. The proposed gestural interface was subjectively perceived as being the fastest, least intrusive, and overall most preferred interface, in addition to offering the highest response compliance. By offering a reporting mechanism better aligned with modern smartphone unlocking habits, this work encourages the deployment of unlock journaling in the wild.
Gesture-Based Auto-Completion of Handwritten Text
Auto-completion is a major feature of all keyboard-based mobile devices. However, little work has been done to extend this feature to handwritten text input. Generally, auto-complete options are ranked on the basis of previous context and word probabilities. However, this leads to missing the user-intended word simply because there are more frequently used words with the same prefix in the same context. In this paper, we propose a gesture-based solution to recognize and complete partially written words, where the user indicates the missing text length to the system with a stroke gesture, thus ruling out a huge subset of possibilities. We also propose a length-dependent hybrid score ranking system to improve prediction accuracy and speed. Our results show that using a gesture-stroke-length-based searching method not only reduced the processing time by 27% but also increased accuracy by 10% when the top 3 candidates are considered.
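A minimal sketch of the idea follows (not the authors' implementation): the stroke gesture tells the system how many characters are missing, so candidates are filtered by total length before ranking. The hybrid score shown, blending word frequency with a length-match term, is a hypothetical stand-in for the paper's length-dependent ranking.

```python
# Length-constrained prefix completion with a hypothetical hybrid score.
def complete(prefix, missing_len, lexicon_freq, tolerance=1, top_k=3):
    target_len = len(prefix) + missing_len
    candidates = [
        (word, freq) for word, freq in lexicon_freq.items()
        if word.startswith(prefix) and abs(len(word) - target_len) <= tolerance
    ]
    def hybrid_score(item):
        word, freq = item
        length_match = 1.0 / (1 + abs(len(word) - target_len))
        return 0.7 * freq + 0.3 * length_match
    return [w for w, _ in sorted(candidates, key=hybrid_score, reverse=True)][:top_k]

lexicon = {"interact": 0.4, "interaction": 0.9, "interactive": 0.7, "interface": 0.8}
print(complete("inter", 6, lexicon))  # favours 11-character completions
```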
Tangible Around-Device Interaction Using Rotatory Gestures with a Magnetic Ring
The majority of mobile applications use built-in touchscreens and/or accelerometers to provide direct ways for user input. Yet, the need to manipulate the device itself (e.g. touch, tilt) poses usability issues such as occlusion and inaccuracy. To address these issues, research has proposed using the built-in magnetometer and magnets to facilitate around-device interaction. However, there is little evaluation of how this technique impacts performance and user experience beyond simple docking tasks. To fill this gap, we explored the mobile gameplay context by implementing an interface that uses rotatory gestures from a magnetic ring as input, and compared two control mappings (angular and linear) with touch and tilt in a usability study using a mobile game. We found that rotatory gestures with the ring, when mapped to angular controls, were on par with touch and superior to tilt, and engendered greater gameplay experience and sense of mapping. Based on our findings, we discuss implications of using this technique for gameplay, as well as other applications.
Investigating the Influence of External Car Displays on Pedestrians’ Crossing Behavior in Virtual Reality
Focusing on pedestrian safety in the era of automated vehicles, we investigate the interaction between pedestrians and automated cars. In particular, we investigate the influence of external car displays (ECDs) on pedestrians’ crossing behavior, and the time needed to make a crossing decision. We present a study in a high-immersion VR environment comparing three alternative car-situated visualizations: a smiling grille, a traffic light style indicator, and a gesturing robotic driver. Crossing at non-designated crossing points on a straight road and at a junction, where vehicles turn towards the pedestrian, are explored. We report that ECDs significantly reduce pedestrians’ decision time, and argue that ECDs support comfort, trust and acceptance in automated vehicles. We believe ECDs might become a valuable addition for future vehicles.
EEG-based Measures of Auditory Saliency in a Complex Context
Auditory saliency is an important mechanism that helps humans extract relevant information from environments. Audio notifications of mobile devices with high saliency can increase users’ receptivity, yet overly high saliency could cause annoyance. Accurately measuring the auditory saliency of a notification is critical for evaluating its usability. Previous studies adopted behavioral methods. However, their results may not accurately reflect auditory saliency, as humans’ perception of auditory saliency often involves complicated cognitive processes. Thus, we propose an electroencephalography (EEG)-based approach that can complement behavioral studies to provide a more nuanced analysis of auditory saliency. We evaluated our method by conducting an EEG experiment that measured the mismatch negativity and P3a of sounds in realistic scenarios. We also conducted a behavioral experiment to link the EEG-based method with the behavioral method. The results suggest that EEG can provide detailed information about how humans perceive auditory saliency and can complement behavioral measures.
Explicit Disaster Response Features in Social Media: Safety Check and Community Help Usage on Facebook during Typhoon Mangkhut
Although social media has been widely used as disaster infrastructure for information and communication, there are insufficient systematic tools to actively serve users’ needs. This paper establishes the concept of EDR (Explicit Disaster Response) features as a solution and describes their potential and future directions. Facebook’s Safety Check and Community Help were chosen as the objects of study since they are currently the most developed and representative EDR features. This study investigates usage of, and user experience with, Safety Check and Community Help through in-depth interviews with 15 Facebook users who encountered these features when they were activated during Typhoon Mangkhut, which struck Hong Kong in September 2018. This research reveals that the interviewees regarded Safety Check as being convenient and useful. However, they also felt that there are barriers to using both EDR features in terms of lack of detailed settings, loss of privacy, and trust of information. This study discusses implications of these findings, and offers design directions for future EDR features.
Interacting with Autostereograms
Autostereograms are 2D images that can reveal 3D content when viewed with a specific eye convergence, without extra apparatus. We contribute to autostereogram studies from an HCI perspective. We explore touch input and output design options when interacting with autostereograms on smartphones. We found that interactive help (i.e., controlling the autostereogram stereo-separation), color-based feedback (i.e., highlighting the screen), and direct touch input support faster and more accurate interaction than static help (i.e., static dots indicating the stereo-separation), animated feedback (i.e., a ‘pressed’ effect), and indirect input. In addition, results reveal that participants learn to perceive smaller and smaller autostereogram content faster with practice. This learning effect transfers across display devices (smartphone to desktop screen).
Exploring the Potential of Augmented Reality in Domestic Environments
While Augmented Reality (AR) technologies are becoming increasingly available, our understanding of AR is primarily limited to controlled experiments which address use at work or for entertainment. Little is known about how it could enhance everyday interaction from a user’s perspective. Personal use of AR at home may improve how users interface with information on a daily basis. Through an online survey, we investigated attitudes towards domestic AR. We further explored the opportunities for AR at home in a technology probe. We first introduced the users to AR by offering an AR experience presented through mixed reality smart glasses. We then used a tailor-made tablet application to elicit photos illustrating how users imagine future AR experiences. Finally, we conducted semi-structured interviews based on the elicited photos. Our results show that users are eager to benefit from on-demand information, assistance, enhanced sensory perception, and play offered by AR across many locations at home. We contribute insights for future AR systems designed for domestic environments.
Hand-Over-Face Input Sensing for Interaction with Smartphones through the Built-in Camera
This paper proposes using the face as a touch surface and employing hand-over-face (HOF) gestures as a novel input modality for interaction with smartphones, especially when touch input is limited. We contribute InterFace, a general system framework that enables the HOF input modality using advanced computer vision techniques. As an exemplar of the usage of this framework, we demonstrate the feasibility and usefulness of HOF with an Android application for improving the single-user and group selfie-taking experience by providing appearance customization in real-time. In a within-subjects study comparing HOF against touch input for single-user interaction, we found that HOF input led to significant improvements in accuracy and perceived workload, and was preferred by the participants. Qualitative results of an observational study also demonstrated the potential of the HOF input modality to improve the user experience in multi-user interactions. Based on the lessons learned from our studies, we propose a set of potential applications of HOF to support smartphone interaction. We envision that the affordances provided by this modality can expand the mobile interaction vocabulary and facilitate scenarios where touch input is limited or even not possible.
Understanding How Digital Gifting Influences Social Interaction on Live Streams
Digital gifting in live streaming, in which viewers buy digital gifts to reward streamers, was worth over $200 million in China in 2018, and its growth has been accelerating. This paper explores what motivates people to tip and how tipping impacts interactions between viewers and streamers. Through a survey, we identified the main categories of viewers’ tipping motivations. We found that viewers were motivated by the reciprocal acts of streamers, who would engage in various types of social interaction with tippers during live streams. The styles of interaction and the live-stream content that arise from tipping are influenced differently by the motivations of viewers and streamers. For example, viewers often tip large amounts to attract attention from the crowd or to promote preferred live-streaming content. These findings add to our knowledge of social interaction on live streaming platforms.
Investigating Unintended Inputs for One-Handed Touch Interaction Beyond the Touchscreen
Additional input controls such as fingerprint scanners, physical buttons, and Back-of-Device (BoD) touch panels improve the input capabilities of smartphones. While previous work showed the benefits of input beyond the touchscreen, unfavorably designed input controls force detrimental grip changes and increase the likelihood of unintended inputs. Researchers investigated all fingers’ comfortable areas to avoid grip changes. However, there is no understanding of unintended BoD inputs, which frustrate users and lead to embarrassing mistakes. In this paper, we study the BoD areas in which unintended inputs occur during interaction with the touchscreen. Participants performed common tasks on four smartphones which they held in the prevalent single-handed grip while sitting and walking. We recorded finger movements with a motion capture system and analyzed the unintended inputs. We identified comfortable areas on the back in which no unintended inputs occur and found that the fewest unintended inputs occurred on 5″ devices. We derive three design implications for BoD input to help designers consider reachability and unintended input.
A Shortcut for Caret Positioning on Touch-Screen Phones
Moving a caret to the desired position in a text field is challenging and inefficient, especially when the user has only one hand available to hold and interact with a mobile phone. We propose a shortcut on keyboards that enables precise and efficient caret positioning in a text field on mobile devices. The proposed method uses a long press on a key to determine the desired position for the caret in the text field. A lab experiment was conducted to compare the shortcut with the traditional handle method and an existing method provided by the Google Keyboard. Even though participants were using the proposed shortcut for the first time, the technique achieved a higher task completion speed compared to the Google method and was as quick as the traditional handle method. It also showed advantages in accuracy and user perceptions over the other two methods for one-handed interaction.
AudioTouch: Minimally Invasive Sensing of Micro-Gestures via Active Bio-Acoustic Sensing
We present AudioTouch, a minimally invasive approach for sensing micro-gestures using active bio-acoustic sensing. It only requires attaching two piezo-electric elements, acting as a surface-mounted speaker and microphone, to the back of the hand. It does not require any instrumentation on the palm or fingers; therefore, it does not encumber interactions with physical objects. The signal is rich enough to detect small differences in micro-gestures with standard machine-learning classifiers. This approach also allows for the discrimination of different levels of touch force, further expanding the interaction vocabulary. We conducted four experiments to evaluate the performance of AudioTouch: a user study measuring gesture recognition accuracy, a follow-up study investigating the ability to discriminate different levels of touch force, an experiment assessing cross-session robustness, and a systematic evaluation assessing the effect of sensor placement on the back of the hand.
POSTER SESSION: Poster
Enabling Personal Alcohol Tracking using Transdermal Sensing Wristbands: Benefits and Challenges
Our current project involves the development of a wristband-mounted sensor intended to function as an alcohol-use monitoring system. This paper focuses on the degree to which physical activity influences ethanol concentrations in the vapor secreted from the skin, which could presumably affect the accuracy of detection results; we collected data from seven recruited participants while each conducted one designated activity. We propose a preliminary design for a personal alcohol tracking system that improves the reliability and affordability of current transdermal ethanol tracking devices, accommodates potential interferences present in daily life, and is intuitive to use so as to raise awareness of alcohol use.
WithDorm: Dormitory Solution for Linking Roommates
Experiences in universities are important for emotional maturation and offer an opportunity to develop the individual characteristics and skills needed for social life. There are diverse issues affecting the quality of dormitory life and roommate relationships, which can influence one’s psychosocial development. In this paper, we propose WithDorm, a mobile application to help communication with roommates and tighten their connections, thereby supporting users’ emotional health and psychosocial development. We analyzed dormitory roommate issues from a human-centered perspective and narrowed them down to three design implications after modeling dormitory life. Furthermore, we implemented the design implications in a prototype and performed a usability test to evaluate and improve the design. The final design, WithDorm, is aware of dormitory-specific concerns, collects and adapts to users’ lifestyles, and initiates human-human interaction among roommates.
CryptoAR Wallet: A Blockchain Cryptocurrency Wallet Application that Uses Augmented Reality for On-chain User Data Display
Blockchain technology has recently become popular and its use in business and industry has increased, especially in finance and technology. A blockchain wallet plays a vital role in the blockchain industry, but it is difficult to understand, get started with, and learn to use. From an interaction design perspective, we propose a blockchain cryptocurrency wallet that combines augmented reality and crypto technology: the CryptoAR Wallet. We expect that browsing and viewing virtual information (on-chain user data) through augmented reality will shorten the distance between users and blockchain wallet services. Through our preliminary design, development and user testing, this in-development application shows potential for increasing trust and satisfaction, along with a more comprehensive user experience.
Predicting Smartphone Users’ General Responsiveness to IM Contacts Based on IM Behavior
The history of conversations through instant messaging (IM) contains abundant information about the communication patterns of a dyad, including conversation partners’ mutual responsiveness to messages. We have, however, not seen many examinations of using such information to model mobile users’ responsiveness in IM communication. In this paper, we present an in-the-wild study in which we leverage participants’ IM messaging logs to build models predicting their general responsiveness. Our models, based on data from 33 IM users, achieved an accuracy of up to 71% (AUROC). In particular, we show that 90-day IM-communication patterns generally outperformed their 14-day equivalents in our prediction models, indicating better coherence between long-term IM patterns and users’ general communication experience.
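For illustration only, the sketch below trains a simple classifier on aggregate IM-log features and reports AUROC, the metric behind the study's ~71% figure; the feature names and toy data are hypothetical and not taken from the paper.

```python
# Toy example: predict general responsiveness from IM-log features, score with AUROC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Each row: [median reply delay (min), messages per day, share replied within 5 min]
X = np.array([[2.0, 40, 0.80], [30.0, 5, 0.20], [5.0, 25, 0.60], [60.0, 3, 0.10],
              [1.5, 55, 0.90], [45.0, 8, 0.15], [4.0, 30, 0.70], [90.0, 2, 0.05]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = generally responsive

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print("AUROC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```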
Indoor human localization based on the corneal reflection of illumination
Corneal imaging has much potential for the development of eye-based interactions. However, it can only provide information on the object being focused on. We therefore propose a localization method based on corneal imaging that exploits the reflections of illumination features from the cornea. A virtual corneal image can be generated from an illumination map, and its similarity to the input eye image can be computed. Global and local localizations are then achieved based on this similarity and a particle filter. The x, y coordinates and θ angle of a participant in a room can thus be estimated practically, as demonstrated experimentally.
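The following is a conceptual sketch of the similarity-weighted particle filter step only; the rendering of virtual corneal images from the illumination map and the similarity measure are passed in as placeholder callables and are not reproduced from the paper.

```python
# Particle filter over (x, y, theta): re-weight particles by how well their
# predicted corneal reflection matches the observed eye image.
import numpy as np

def update_particles(particles, weights, eye_image, render_virtual_corneal_image,
                     similarity, motion_noise=(0.05, 0.05, 0.02)):
    # 1. Diffuse particles with simple Gaussian motion noise.
    particles = particles + np.random.normal(0.0, motion_noise, particles.shape)
    # 2. Re-weight each particle by image similarity.
    for i, (x, y, theta) in enumerate(particles):
        virtual = render_virtual_corneal_image(x, y, theta)  # placeholder renderer
        weights[i] *= similarity(virtual, eye_image)          # placeholder similarity
    weights /= weights.sum()
    # 3. Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < len(particles) / 2:
        idx = np.random.choice(len(particles), len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```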
Force Touch Detection on Capacitive Sensors using Deep Neural Networks
As the touchscreen is the most successful input method of current mobile devices, the importance of transmitting more information per touch is rising. A wide range of approaches has been presented to enhance the richness of a single touch. With 3D Touch, Apple successfully introduced pressure as a new input dimension in consumer devices. However, it relies on an additional sensing layer, which increases production cost and hardware complexity. Moreover, users have to upgrade their phones to use the new feature. In contrast, with this work, we introduce a strategy to acquire pressure measurements from the mutual-capacitance sensor used in the majority of today’s touch devices. We present a data collection study in which we collect capacitive images while participants apply different pressure levels. We then train a Deep Neural Network (DNN) to estimate the pressure, allowing for force touch detection. As a result, we present a model that estimates pressure with a mean error of 369.0 g.
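A minimal sketch of the general idea follows: regressing applied force from a low-resolution capacitive image with a small convolutional network. The 27×15 input size, network depth, and loss choice are assumptions for illustration, not the authors' architecture.

```python
# Hedged PyTorch sketch: CNN regression from a mutual-capacitance image to force.
import torch
import torch.nn as nn

class ForceNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 4 * 4, 64), nn.ReLU(), nn.Linear(64, 1),
        )

    def forward(self, x):                       # x: (batch, 1, 27, 15) capacitive image
        return self.regressor(self.features(x))  # predicted force (e.g., in grams)

model = ForceNet()
loss_fn = nn.L1Loss()                            # mean absolute error, comparable to a "mean error in grams"
pred = model(torch.randn(8, 1, 27, 15))          # dummy batch of capacitive images
loss = loss_fn(pred, torch.rand(8, 1) * 1000)    # dummy force labels
loss.backward()
```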
Designing an Urban Support for Autism
This paper describes the preliminary results of a project aimed to support people with autism in finding city places that match their “sensorial” preferences and aversions. Through a participatory design approach, we designed an interactive map that collects sensorial data about the urban environment exploiting crowdsourcing mechanisms.
An Information Behaviour-Based Approach to Virtual Doctor Design
Information behaviour models have been used extensively to explain people’s interactions with information, such as in information search and user behaviour in libraries. However, we do not yet know the connection between components of information models and the interface design of digital systems, particularly when these are designed to support marginalized users such as older adults (OAs). Yet, this connection may relate to users’ perceptions and subsequent adoption of emerging technologies, such as the autonomous virtual agents (VAs) functioning as advice-dispensing chatbots (increasingly present on mobile devices). We explore here the feasibility of information models in informing our understanding of how OAs may use and perceive a VA. For this, we use the information search process (ISP) model to explain the results of a case study with health information VAs and speculate on the implications of the ISP on the design of mobile-based VAs, chatbots, and voice-based interfaces.
Evaluating User Experience under Location Quality Variations: A Framework for in-the-wild Studies
While lab-based approaches to evaluation of location-based services (LBSs) allow faster, cost-effective and less complicated evaluation of many aspects, they are limited in evaluating certain aspects, especially the ones related to usability and user experience (UX). Variations in the location information quality can negatively affect location-based UX. Lab-based evaluations are limited in factoring users’ actual context, which includes location quality variations into the evaluation process. Therefore, to evaluate actual location-based UX, we need to evaluate LBSs in-the-wild with users under variations in the quality of location information. Currently, there is a lack of standard tools, methods, and frameworks for this purpose. Motivated by this factor, in this paper, we describe the Location Uncertainty Injection Framework (LUIF) for evaluating mobile LBSs in-the-wild with users under location quality variations. We also present the results of a study conducted to gain initial feedback on the potential use cases of the framework.
Revisiting Facebook: A Study on Changes in Social Network Usage
Facebook has more active users than any other social network. Despite its popularity, in recent years Facebook has been criticized for failing to continually develop its user experience (UX). This failure has resulted in a decrease in active users. In this study, 20 students and workers were interviewed to explore how Facebook users currently use the app and the changes they have experienced since their initial usage. The data generated by these interviews revealed not only challenges but also opportunities related to the user experience of social networks. This study provides insights into the latest UX problems associated with the use of social networks, as well as design guidelines for UX designers to overcome these problems.
Designing for Task Resumption Support in Mobile Learning
Distractions and interruptions often disrupt mobile learners. Luckily, task resumption (memory) cues can support users in resuming a learning task. These cues can have multiple forms and designs, but their effectiveness depends heavily on their adaptation to the specific learning use case. This work explores the causes of interruptions during mobile learning and outlines designs for task resumption support. We report findings from two focus groups with HCI experts (N = 4) and users of mobile learning applications (N = 3). Finally, we discuss these findings by drawing on literature, and we derive a research agenda of currently unexplored concepts. We state limitations and open questions in the domain of task resumption support for mobile learning.
Fostering Virtual Guide in Exhibitions
Museums are essential to make culture accessible to the mass audience. Human museum guides are important to explain the presented artifacts to the visitors. Recently, museums started to experiment with enhancing exhibitions through mixed reality. It enables cultural exhibitors to provide each visitor with an individualized virtual guide that adapts to the visitor’s interests. The effect of the presence and appearance of a virtual museum guide is, however, unclear. In this paper, we compare a real-world guide with a realistic, an abstract, and an audio-only representation of the virtual guide. Participants followed four multimodal presentations while we investigated the effect on comprehension and perceived co-presence. We found that a realistic representation of a virtual guide increases the perceived co-presence and does not adversely affect the comprehension of learning content in mixed reality exhibitions. Insights from our study inform the design of virtual guides for real-world exhibitions.
Semantic 3D gaze mapping for estimating focused objects
Eye-trackers are expected to be used in portable daily-use devices. However, such systems must register object information and define a unified coordinate system in advance for human-computer interaction and quantitative analysis. Therefore, we propose semantic 3D gaze mapping to collect gaze information from multiple people on a unified map and detect focused objects automatically. The semantic 3D map can be reconstructed using keyframe-based semantic segmentation and structure-from-motion, and the 3D point-of-gaze can also be computed on the map. We confirmed through an experiment that the fixation time on a focused object can be calculated without prior information.
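The sketch below is a hedged illustration of one step only: locating a 3D point-of-gaze by casting the gaze ray into a reconstructed point cloud and taking the first map point close to the ray. Inputs, thresholds, and the use of per-point semantic labels are illustrative; the paper's full pipeline (keyframe-based semantic segmentation, SfM) is omitted.

```python
# Find the 3D point-of-gaze as the nearest map point along the gaze ray.
import numpy as np

def point_of_gaze(eye_position, gaze_direction, cloud_points, cloud_labels,
                  max_ray_distance=0.05):
    d = gaze_direction / np.linalg.norm(gaze_direction)
    v = cloud_points - eye_position                 # vectors from eye to map points
    t = v @ d                                       # projection onto the gaze ray
    ahead = t > 0                                   # keep only points in front of the eye
    perp = np.linalg.norm(v[ahead] - np.outer(t[ahead], d), axis=1)
    candidates = np.where(perp < max_ray_distance)[0]
    if candidates.size == 0:
        return None, None
    nearest = candidates[np.argmin(t[ahead][candidates])]  # first hit along the ray
    idx = np.flatnonzero(ahead)[nearest]
    return cloud_points[idx], cloud_labels[idx]     # 3D gaze point and its semantic label
```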
Alexa, I’m in Need!: Investigating the Potential and Barriers of Voice Assistance Services for Social Work
Due to the counselling activities involved, the profession of social work might provide a promising, yet so far uninvestigated, application field for voice assistants. To explore requirements and barriers for voice-based services in the field of social work, we conducted focus group sessions with 41 professionals. During the group discussions and through a questionnaire, we collected requirements for applying voice assistants in this professional setting and corresponding application ideas. Overall, the participants expected a positive impact of voice assistants on their profession. While the protection of the clients’ sensitive data and the risk of their reduced social interactions need to be considered, the participants saw high potential in documentation services and the cooperative guidance of clients by human professionals and voice assistants. Future research should investigate the design of this interplay and study appropriate handover strategies between human experts and virtual assistants for specific social work services.
Towards Graphical User Interface Redefinition without Source Code Access: System Design and Evaluation
Nowadays, several interactive computing systems (ICSs) still have Graphical User Interfaces (GUIs) that are inadequate in terms of usability and user experience. Numerous improvements have been made in the development of better GUIs; however, little has been done to improve existing ones. This might be explained by the fact that most ICSs do not provide source code access. In most cases, this means that only people with source code access can (easily) enhance the respective GUI.
This paper presents a tool using computer vision (CV) algorithms to semi-automatically redefine existing GUIs without accessing their source code. The evaluation of a new GUI obtained from the redefinition of an existing GUI using the tool is described. Results show statistically significant improvements in usability (reduction of interaction mistakes), improved task completion success rate and improved user satisfaction.
UI Design Pattern-driven Rapid Prototyping for Agile Development of Mobile Applications
In agile development, lean UX designers perform rapid prototyping and quick evaluation of prototypes to ensure fast releases. To understand designers’ workflow during rapid prototyping, we interviewed 15 lean UX designers. We identified the following pain points in the workflow: 1) compromise on the quality of UI design due to time constraints, 2) UI design knowledge being scattered among numerous sources such as websites and books, and 3) inability of developers to reproduce the same quality of UI design due to lack of UI design knowledge. To address these issues, we propose a UI design pattern-driven approach for rapid prototyping. To realize this approach, we introduce Kiwi, a library of UI design patterns and guidelines that aims to consolidate UI design knowledge for mobile applications. Each UI design pattern consists of a problem statement, context, rationale, and a proposed solution. Additionally, Kiwi provides downloadable and customizable GUI examples, layout blueprints and front-end code for each pattern. A usability evaluation (SUS) of Kiwi with 21 lean UX designers indicates good usability and high learnability.
Sense-able Lunch Recommendations
An ideal mobile user interface provides users with just the information they want, when they want it. We believe that sensors in the ambient environment can help automatically showcase this information. In this paper, we describe how we inferred users’ favorite lunch stations using indoor location trajectories. We had 109 users participate in our study over an eight month period and we were able to predict their lunch station choices with 85% accuracy using a heuristic algorithm. We describe our system, the data we collected and our post-hoc user assessment.
A Mobile Robot Generating Video Summaries of Seniors’ Indoor Activities
We develop a system which generates summaries from seniors’ indoor-activity videos captured by a social robot, to help remote family members know their seniors’ daily activities at home. Unlike videos in traditional video summarization datasets, indoor videos captured from a moving robot pose additional challenges: (i) the video sequences are very long, (ii) a significant number of video frames contain no subject or contain subjects at ill-posed locations and scales, and (iii) most of the well-posed frames contain highly redundant information. To address this problem, we propose to exploit pose estimation for detecting people in frames. This guides the robot to follow the user and capture effective videos. We use person identification to distinguish a target senior from other people. We also make use of action recognition to analyze seniors’ major activities at different moments, and develop a video summarization method to select diverse and representative keyframes as summaries.
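The greedy selection below is only an illustration of the kind of "diverse and representative" keyframe selection described, not the authors' method; it assumes the pose/person-ID filtering and feature extraction have already run upstream.

```python
# Farthest-point-style keyframe selection over frame descriptors.
import numpy as np

def select_keyframes(features, k=10):
    """features: (num_well_posed_frames, dim) array of frame descriptors."""
    # Start from the most "representative" frame: the one closest to the mean.
    chosen = [int(np.argmin(np.linalg.norm(features - features.mean(axis=0), axis=1)))]
    while len(chosen) < min(k, len(features)):
        # Distance of every frame to its nearest already-chosen keyframe.
        dists = np.min(
            np.linalg.norm(features[:, None, :] - features[chosen][None, :, :], axis=2),
            axis=1,
        )
        chosen.append(int(np.argmax(dists)))  # add the most novel frame so far
    return chosen
```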
A Formative Study for Record-time Manual Annotation of First-person Videos
To efficiently edit first-person videos, manually highlighting important scenes while recording is helpful. However, little study has been performed on how such annotation contributes to video editing and affects user behavior during recording. To elicit fundamental requirements for designing useful record-time annotation techniques, we conducted a study using a prototype wearable camera system and a video editing interface that enables users to annotate scenes during recording. We asked participants to perform video recording and editing tasks with two different interface settings. We observed that the participants edited videos more efficiently with detailed annotation techniques, whereas focusing on annotating scenes affected their record-time behavior. We conclude the paper with design guidelines developed from the findings.
Effects of WER on ASR Correction Interfaces for Mobile Text Entry
Speech is increasingly being used as a method for text entry, especially on commercial mobile devices such as smartphones. While automatic speech recognition has seen great advances, factors like acoustic noise, differences in language or accents can affect the accuracy of speech dictation for mobile text entry. There has been some research on interfaces that enable users to intervene in the process, by correcting speech recognition errors. However, there is currently little research that investigates the effect of Automatic Speech Recognition (ASR) metrics, such as word error rate, on human performance and usability of speech recognition correction interfaces for mobile devices. This research explores how word error rates affect the usability and usefulness of touch-based speech recognition correction interfaces in the context of mobile device text entry.
Engaging Seniors through Automatically-Generated Photo Digests from their Families’ Social Media
Seniors are increasingly using the Internet. However, their adoption of available services such as social media is often restricted by their limited experience with new technologies. At the same time, there is significant interest in designing communication applications, especially mobile, that improve seniors’ social connectedness. These are mostly implemented as dedicated social networking tools for seniors and their families. A barrier to the full adoption of such tools is the requirement for younger family members to actively manage a platform parallel to the social media tools they already use (e.g., Facebook). We propose PhotoDigest — a user-centred application that allows seniors to passively engage in their families’ social media activities. PhotoDigest automatically harvests families’ Facebook photo posts and delivers them to seniors as weekly digests. We conducted a preliminary deployment study and show that PhotoDigest is easily adopted by seniors, does not interfere with younger generations’ life routines, and enhances the entire family’s social connectedness.
She is in a Bad Mood Now: Leveraging Peers to Increase Data Quantity via a Chatbot-Based ESM
The experience sampling method (ESM) is widely used for collecting in situ experiences in various domains. One known limitation, however, is its reliance on participants being receptive to ESM questionnaires at the sampled moments. When participants cannot notice or respond to an ESM questionnaire, researchers cannot obtain a response. In this research, we explored the feasibility of inviting peers to provide information about participants in an ESM study. Results from a two-week experiment with a total of 27 participants and 82 peers showed that including peers’ ESM responses increased ESM data quantity. Furthermore, the agreement between the peers’ and the participants’ responses could be maintained by asking peers to report their confidence, and even considering only high-confidence data still increased data quantity. Moreover, inviting peers had a positive impact on participants’ compliance in responding. These results suggest that using peer-ESM to obtain more in situ data about participants is promising.
When There is No Progress with a Task-Oriented Chatbot: A Conversation Analysis
Task-oriented chatbots are increasingly prevalent in our daily life. Research effort has been devoted to advancing our understanding of users’ interaction with conversational agents, including conversation breakdowns. However, most research has been limited to observations from relatively short durations of user interaction with chatbots, where users were aware of being studied. In this study, we conducted a conversation analysis on a three-month conversation log of users conversing with a chatbot of a banking institution. The log consisted of 1,837 users’ conversations with this chatbot, comprising 19,449 message exchanges. From this analysis, we show that users more often failed to make progress in a conversation when they requested information than when they provided information. Furthermore, we uncovered five kinds of intention gaps the chatbot did not anticipate, and five major behaviors users adopted to cope with non-progress.
Chat with Smart Conversational Agents: How to Evaluate Chat Experience in Smart Home
More and more smart devices are equipped with smart conversational agents that can engage in chat or free conversation with humans. However, human-machine chat is still at an early stage of development, and there is a lack of effective methods to evaluate chat experience. In this study, we proposed an approach to evaluating chat experience with smart conversational agents in the smart home. We collected evaluation metrics and applied them in user testing, then refined the metrics and constructed an evaluation system. We applied the evaluation system to compare the chat experience of five different smart conversational agents.
Exploring the Design of Availability Status in Mobile IM Messaging with User Enactments
Current mobile instant messaging (IM) applications offer limited information on the availability status of IM users, particularly their availability for reading and responding to IM messages. Research suggests a gap between what IM recipients want to disclose and what IM senders want to see to determine when to initiate a conversation. The advancement of IM users’ receptivity prediction makes it possible to present IM users’ predicted availability status. In this research, we conducted user enactment, a design approach for researchers to let participants experience and reflect on possible designs of future technologies, to explore designs of IM availability status in 72 IM conversation scenarios. We explore how IM users interpret different presentations of an uncertain IM status from both the senders’ and recipients’ perspectives, what they need from these presentations, and how they would act upon them.
DEMONSTRATION SESSION: Demo
Watch Spaces: A Spatial User Interface for Smart Watches
We present a platform to prototype spatial user interfaces for smart watches relative to the user’s body. We show the general feasibility and present two applications. The first application scenario enables users to “pin” applications in the air around them and return to them by moving the smart watch display to the same position again. The second application shows a zoom use case for maps or other larger displays/visualizations. The smart watch acts like a digital “magnifying glass”, enabling users to see the details they care about.
WatchPen: Using Cross-Device Interaction Concepts to Augment Pen-Based Interaction
WatchPen illustrates how cross-device interactions can inspire and extend the design space of pen-based interactions into new, expressive directions. We demonstrate WatchPen, a smartwatch mounted on a passive, capacitive stylus that: (1) senses the usage context and leverages it for expression (e.g., changing colour), (2) contains tools and parameters within the display, and (3) acts as an on-demand output. As a result, it provides users with a dynamic relationship between inputs and outputs, awareness of current tool selection and parameters, and increased expressive match (e.g., added ability to mimic physical tools, showing clipboard contents). Our demonstration includes a series of interaction techniques within a drawing application.
R2S: A Public Augmented Printed Media System to Promote Care Home Residents’ Social Interaction
Institutional care settings are often described as places where residents suffer from social isolation. In the past decades, conventional social interventions have changed very little and have many limitations. Our research explores the potential of integrating technologies into public caring environments. In this paper, we present “R2S”, a flexible platform for care homes that supports residents’ social interaction by augmenting public print media products. It aims to provide caregivers with a convenient way to transform any print media into interactive surfaces, which can enhance residents’ public reading experience and trigger communication. This paper describes the design motivation, the design and prototype implementation, preliminary user feedback, and future plans.
An Application for Wrist Rehabilitation Using Smartphones
In this paper, we propose a wrist rehabilitation support system using a smartphone app. There are several issues with conventional wrist rehabilitation: there is no method for quantitatively evaluating whether rehabilitation has been carried out appropriately, it is difficult for doctors to observe the condition of patients at home, and the content is often boring. In our system, using a smartphone means that we can easily introduce it into homes and have medical doctors observe the condition of patients by accessing their data in the cloud. We also aim to help patients maintain their motivation for rehabilitation through a game played using their wrist and a smartphone.
SCAN: Indoor Navigation Interface on a User-Scanned Indoor Map
We present an indoor navigation system, SCAN, which displays the user’s current location on a user-scanned indoor map. Smartphones use the global positioning system (GPS) to determine their position on the earth, but GPS does not work in indoor environments. SCAN uses indoor map images scanned by a smartphone camera and displays the user’s position on the indoor map while they move around a floor, tracking the user’s position with the smartphone’s inertial measurement unit (IMU). Camera calibration is normally required for precise navigation, but our system estimates the user’s position from a single landscape image. Results of our preliminary user study suggest that participants’ experiences were similar to using outdoor GPS navigation systems.
EOG Glasses: an Eyewear Platform for Cognitive and Social Interaction Assessments in the Wild
In this work, we present a smart eyewear demo setup consisting of a software platform for cognitive and social interaction assessments in the wild, with several application cases and a demonstration of real-time activity recognition. The platform is designed to work with Jins MEME, smart EOG-enabled glasses. The user software is capable of data logging, posture tracking, and recognition of several activities, such as talking, reading, and blinking. We also present several applications and studies in which the platform has been used.
Communicating Multimodal Wayfinding Messages for Visually Impaired People via Wearables
People with a visual impairment (PVI) often experience difficulties with wayfinding. Current navigation applications have limited communication channels and do not provide detailed enough information to support PVI. By transmitting wayfinding information via multimodal channels and combining these with wearables, we can provide tailored information for wayfinding and reduce cognitive load. This study presents a framework for multimodal wayfinding communication via smartwatch. The framework consists of four modalities: audio, voice, tactile, and visual. Audio and voice messages are transmitted using a bone conduction headphone, keeping the ears free to focus on the environment. With a smartwatch, vibrations are directed to a sensitive part of the body (i.e., the wrist), making them easier to sense. Icons and short textual feedback are viewed on the display of the watch, allowing for hands-free navigation.
SESSION: Doctoral Consortium
Deformable Interactions to Improve the Usability of Handheld Mobile Devices
Handheld mobile device use is dominated by touch interfaces; keyboards and other physical inputs are disappearing. In parallel, there is a trend towards more sophisticated devices supporting complex use. While touch devices are usable, the user experience is not optimal for all people or across all tasks. Augmenting touch devices with deformable interactions can support usability. I identify open research questions: how to pair deformation and touch, what design questions arise for augmented devices, and what benefits they can provide to people. I outline work to date, highlighting a deformable mobile phone case.
Methods and Interfaces for Closed Loop Smartphone Notifications
Our smartphones are constantly fighting to capture our attention, oftentimes causing significant disruption in professional and social contexts. In contrast with prior smart notification systems work focused on external contextual information (e.g., environment, user activity, etc.), my research explores how the notification experience could be enhanced by providing smartphones with a better awareness of their user’s psycho-physiological state both prior to and, more importantly, immediately after the presentation of alerts. This paper first summarizes findings from the evaluation of a novel notification perception classification technique based on wearable physiological sensing, and a non-intrusive mobile journaling mechanism adapted to modern smartphone usage. From there, a tentative sequence of studies is presented, aiming to answer the project’s remaining research questions.
Interactive Voice Technologies and the Digital Marginalization of Older Adults
The sociotechnical approach to the design of interactive systems has been seen as a means of acknowledging and understanding the benefits and risks of emerging and innovative technologies. However, implementing such an approach in practice is easier said than done, especially for technology that is as ubiquitous as mobile technologies. Yet, this is a worthwhile challenge as this approach may strengthen our understanding of users’ perceptions and subsequent adoption of such mobile technologies. My previous research has indicated that there may be a link between information studies theory and the digital interface design practice surrounding mobile technologies. I plan to build upon this work by further developing and evaluating methods, practices, and approaches to bridge information theory and design practice. I am seeking advice and feedback on my research in terms of the development and evaluation of new practices of mobile design work. The implications of the sociotechnical approach to mobile voice interface design are discussed.
Tools to Support Voice User Interface Design
As Voice User Interfaces (VUIs) gain popularity and appear in many devices in the commercial market (such as Google Home, Amazon Alexa, and Siri), designers must address the new usability challenges that come with using voice as a primary form of interaction. There is currently a lack of tools and resources that guide both current and new usability experts in building usable VUIs. My research seeks to identify and fill the gaps in VUI design resources in HCI. To date, I have conducted a series of meta-analyses and studies with both VUI and non-VUI usability experts that explore the gaps in design knowledge for Voice User Interfaces. I describe this completed research and how it guides my primary research goal of developing tools and resources to aid usability experts and designers in filling these design knowledge gaps.
WORKSHOP SESSION: Workshop
Measuring Holistic User Experience: Keeping an Eye on What Matters Most to Users
When it comes to how we define success metrics for our products, teams often leave out the user. Daily active users, conversion rates, % uptime, CSAT – these are all important metrics to keep track of from a product and business perspective, but none of these fully capture the user’s perspective. They don’t give insight into what users care about and what they’re trying to achieve. With qualitative research, we gain a deep understanding of what matters to users, but these insights are often quickly forgotten by product teams. In this workshop, we’ll introduce Critical User Journeys (CUJs: important tasks your user needs to be able to complete) and Experience Outcomes (XOs: your user’s fundamental emotional needs) as tools that will enable product teams to prioritize based on what matters to users.
Scrappy User Research: How to Get Feedback in 24 Hours or Less
Demands of the fast-paced tech industry can leave little time for rigorous UX research. Some teams may not even have dedicated UX researchers or access to users. This workshop will focus on teaching various research methods that can be applied in 24 hours or less, at any phase of the product life cycle. We will demonstrate how to apply four methods: heuristic evaluations, cafe studies, surveys, and remote user testing. These methods have been successfully used to provide immediately actionable results for our teams.
Designing Mobile Technologies for Neurodiversity: Challenges and Opportunities
Mobile applications have great potential for making everyday environments more accessible from a cognitive point of view, allowing neurodiverse people, such as individuals with autism, dementia, or ADHD, to gain independence and find continuous support. This workshop will discuss the main technological, methodological, theoretical, and design issues that researchers and practitioners face when designing mobile devices and services for neurodiversity, exploring novel strategies to address them. In doing so, we want to focus on neurodiverse people’s idiosyncratic needs, also exploring ways of directly involving them in the design process.
Bounded & Nuanced: Designing Mobile Technology for Children and Parents
In this workshop, we intend to bring together scholars and practitioners of the MobileHCI community who are interested in designing technology use for children and their parents. Parents are concerned about technology addiction among their children but generally feel a loss of control given the ubiquity of mobile devices. We believe the mobile design community can contribute meaningful ideas in this space. The goal of this workshop is for participants to co-develop a shared vocabulary and framework for “designing bounded and nuanced technology” as a strategy for conceiving and implementing mobile technologies for family use.
SESSION: Industry Perspectives
Hacks for Cost-Justifying Usability: “Fear-Setting” vs. “Goal-Setting”
This paper is designed to guide user-centered design professionals in negotiating usability cost justification with product owners. The paper introduces hacks that usability professionals can use to convince entrepreneurs and other decision makers to invest in usability improvements. The traditional cost-justifying usability approach focuses on emphasizing the potential profits that can be expected. This “goal-setting” approach commonly has a strong influence on the investment decisions of young, nascent entrepreneurs. The new approach we suggest is about avoiding the losses and costs that stem from not taking any action. This “fear-setting” approach tends to influence the investment decisions of older and more experienced entrepreneurs. Usability professionals can thus identify and classify their audience according to age and years of experience in order to choose the most effective way of presenting the usability cost justification.
Integrating HCI Perspective into a Mobile Software Development Team: Strategies and lessons from the field
Software engineers are traditionally concerned with the performance and capabilities of mobile applications, while Human-Computer Interaction (HCI) approaches remain underused in much of everyday industry practice, especially in smaller companies running on tight schedules.
This paper reports a set of strategies empirically outlined by HCI and mobile development teams working together across different stages of the development lifecycle while keeping project constraints in mind, with the aim of providing practical ways to incorporate academic knowledge about human factors into the development of mobile applications. We also discuss some lessons learned, which we believe may favor a more significant application of HCI in the mobile industry.
Crowdsourcing Images for Global Diversity
Crowdsourcing enables human workers to perform designated tasks unbounded by time and location. As mobile devices and embedded cameras have become widely available, we deployed an image capture task globally to collect more geographically diverse images. Via our micro-crowdsourcing mobile application, users capture images of surrounding subjects, tag them with keywords, and can choose to open-source their work. We open-sourced 478,000 images collected from worldwide users as the “Open Images Extended” dataset, which aims to add global diversity to imagery training data. We describe our approach and workers’ feedback through survey responses from 171 global contributors to this task.
Behind the Scenes of Researching Android 9 Pie’s System Navigation
The Android operating system runs on more than 2 billion smartphones, tablets, and other devices. In this case study, we go behind the scenes of the user experience research process at Google that led to the design of Android 9 Pie’s system navigation. We describe the type of research we conducted over a 9-month period and what we learned about smartphone system navigation and gestures.