MobileHCI ’18: Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services
SESSION: Notifications and attention
Forecasting user attention during everyday mobile interactions using device-integrated and wearable sensors
Visual attention is highly fragmented during mobile interactions, but the erratic nature of attention shifts currently limits attentive user interfaces to adapting after the fact, i.e. after shifts have already happened. We instead study attention forecasting – the challenging task of predicting users’ gaze behaviour (overt visual attention) in the near future. We present a novel long-term dataset of everyday mobile phone interactions, continuously recorded from 20 participants engaged in common activities on a university campus over 4.5 hours each (more than 90 hours in total). We propose a proof-of-concept method that uses device-integrated sensors and body-worn cameras to encode rich information on device usage and users’ visual scene. We demonstrate that our method can forecast bidirectional attention shifts and predict whether the primary attentional focus is on the handheld mobile device. We study the impact of different feature sets on performance and discuss the significant potential but also remaining challenges of forecasting user attention during mobile interactions.
Notifications on mobile devices are a prominent source of interruptions. Previous work suggests using opportune moments to deliver notifications to reduce negative effects. In this paper, we instead explore the manual deferral of notifications. We developed an Android app that allows users to “snooze” mobile notifications for a user-defined amount of time or to a user-defined point in time. Using this app, we conducted a year-long in-the-wild study with 295 active users. To complement the findings, we recruited 16 further participants who used the app for one week and subsequently interviewed them. In both studies, snoozing was mainly used to defer notifications related to people and events. The reasons for deferral were manifold, from not being able to attend to notifications immediately to not wanting to. Daily routines played an important role in the deferral of notifications. Most notifications were deferred to the same day or the next morning, and a deferral of more than two days was an exception. Based on our findings, we derive design implications that can inform the design of future smart notification systems.
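The two deferral modes described above (snooze for a duration vs. snooze to a point in time) reduce to a small scheduling rule. The following is an illustrative sketch, not the authors' implementation; the function name `snooze_until` and the day-rollover rule for past clock times are our own assumptions.

```python
from datetime import datetime, timedelta
from typing import Optional

def snooze_until(now: datetime, minutes: Optional[int] = None,
                 at: Optional[datetime] = None) -> datetime:
    """Return the time at which a snoozed notification should reappear.

    Exactly one of `minutes` (snooze for a duration) or `at` (snooze to a
    point in time) must be given, mirroring the app's two deferral modes.
    """
    if (minutes is None) == (at is None):
        raise ValueError("specify exactly one of minutes= or at=")
    if minutes is not None:
        return now + timedelta(minutes=minutes)
    if at <= now:  # assumption: "8:00" requested at 21:30 means tomorrow morning
        at += timedelta(days=1)
    return at
```

For example, snoozing until 8:00 when it is already 21:30 would defer the notification to 8:00 the next morning, matching the "next morning" pattern the study observed.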
We analyzed 794,525 notifications from 278 mobile phone users and how they were handled. Our study advances prior analyses in two ways: first, we systematically split notifications into five categories, including a novel separation of messages into individual and group messages. Second, we conduct a comprehensive analysis of the behaviors involved in attending to the notifications. Our participants received a median of 56 notifications per day, which suggests that the number of notifications has not increased over the past years. We further show that messaging apps create most of the notifications, and that other types of notifications rarely lead to a conversion (rates between ca. 15% and 25%). A surprisingly large fraction of notifications is received while the phone is unlocked or the corresponding app is in the foreground, hinting at the possibility of optimizing for this scenario. Finally, we show that the main difference in handling notifications is how long users leave them unattended if they will ultimately not consume them.
Digital overuse on mobile devices is a growing problem in everyday life. This paper describes a generalizable mobile intervention that combines nudge theory and negative reinforcement to create a subtle, repeating phone vibration that nudges a user to reduce their digital consumption. For example, if a user has a daily Facebook limit of 30 minutes but opens Facebook past this limit, the user’s phone will issue gentle vibrations every five seconds, but the vibration stops once the user navigates away from Facebook. We evaluated the intervention through a three-week controlled experiment with 50 participants on Amazon’s Mechanical Turk platform, finding that daily digital consumption was successfully reduced by over 20%. Although the reduction did not persist after the intervention was removed, insights from qualitative feedback suggest that the intervention made participants more aware of their app usage habits. We discuss design implications of episodically applying our intervention in specific everyday contexts such as education, sleep, and work. Taken together, our findings advance the HCI community’s understanding of how to curb digital overload.
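The nudge described in the example (vibrate periodically only while the over-limit app is in the foreground, and stop as soon as the user navigates away) reduces to a small predicate. This is a hedged sketch of that logic only, not the study's code; the function name, the inputs, and the constants are illustrative assumptions.

```python
DAILY_LIMIT_S = 30 * 60   # e.g. a 30-minute daily Facebook budget (assumption)
NUDGE_PERIOD_S = 5        # a gentle pulse every five seconds past the limit

def should_vibrate(foreground_app: str, limited_app: str,
                   usage_today_s: float, seconds_since_last_pulse: float) -> bool:
    """Negative-reinforcement nudge: pulse only while the over-budget app
    is in the foreground; navigating away silences it immediately."""
    if foreground_app != limited_app:
        return False  # user navigated away: stop vibrating
    if usage_today_s <= DAILY_LIMIT_S:
        return False  # still within the daily budget
    return seconds_since_last_pulse >= NUDGE_PERIOD_S
```

On a real device this predicate would be polled from a background service with access to usage statistics and the vibration actuator.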
SESSION: Displays, input, touch
Most interfaces of our interactive devices such as phones and laptops are flat and are built as external devices in our environment, disconnected from our bodies. Therefore, we need to carry them with us in a pocket or a bag and accommodate our bodies to their design by sitting at a desk or holding the device in our hand. We propose Hand Range Interface, an input surface that is always at our fingertips. This body-centric interface is a semi-sphere attached to a user’s wrist, with a radius equal to the distance from the wrist to the index finger. We prototyped the concept in virtual reality and conducted a user study with a pointing task. The input surface can be designed to rotate with the wrist or to stay fixed relative to the wrist. We evaluated and compared participants’ subjective physical comfort level, pointing speed, and pointing accuracy on the interface, which was divided into 64 regions. We found that the interface with a fixed orientation performed much better, with a 41.2% higher average comfort score, a 40.6% shorter average pointing time, and a 34.5% lower average error. Our results revealed interesting insights into user performance and preferences for different regions of the interface. We conclude with a set of guidelines for future designers and developers on how to develop this type of new body-centric input surface.
We present MagicScroll, a rollable tablet with 2 concatenated flexible multitouch displays, actuated scrollwheels and gestural input. When rolled up, MagicScroll can be used as a rolodex, smartphone, expressive messaging interface or gestural controller. When extended, it provides full access to its 7.5″ high-resolution multitouch display, providing the display functionality of a tablet device. We believe that the cylindrical shape in the rolled-up configuration facilitates gestural interaction, while its shape changing and input capabilities allow the navigation of continuous information streams and provide focus plus context functionality. We investigated the gestural affordances of MagicScroll in its rolled-up configuration by means of an elicitation study.
Navigation and mobility mechanics for virtual environments aim to be realistic or fun, but rarely prioritize the accuracy of movement. We propose PinchMove, a highly accurate navigation mechanic for confined environments that utilizes pinch gestures and manipulation of the viewport to favor accurate movement. We ran a pilot study to first determine the degree of simulator sickness caused by this mechanic, and a comprehensive user study to evaluate its accuracy in a virtual environment. We found that utilizing an 80° tunneling effect at a maximum speed of 15.18° per second was suitable for PinchMove in reducing motion sickness. We also found our system to be, on average, more accurate in enclosed virtual environments than conventional methods. This paper makes the following three contributions: 1) we propose a navigation solution for accurate movement in near-field virtual environments, 2) we determine the appropriate tunneling effect for our method to minimize motion sickness, and 3) we validate our proposed solution by comparing it with conventional navigation solutions in terms of accuracy of movement. We also propose several use-case scenarios where accuracy of movement is desirable and further discuss the effectiveness of PinchMove.
Digital painting is an increasingly popular medium of expression for many artists, yet when compared to its traditional equivalents of physical brushes and viscous paint it lacks a dimension of tangibility. We conducted observations and interviews with physical and digital artists, which gave us a strong understanding of the types of interactions used to create both physical and digital art, and the important role tangibility plays within these experiences. From this, we developed a unique liquid-like tangible display for mobile, digital colour mixing. Using a chemical hydrogel that changes its viscosity depending on temperature, we are able to create some resemblances to the feeling of mixing paint with a finger. This paper documents the information gathered from working with artists, how this process informed the development of a mobile painting attachment, and an exploration of its capabilities. After returning with our prototype, we found that it provided artists with sensations of oil and acrylic paint mixing and also successfully mimicked how paints are laid out on a paint palette.
SESSION: Analyzing/large scale use
Logging mobile application usage on smartphones is limited to rather general system events unless one has access to the operating system’s or applications’ source code. In this paper, we present a method for analyzing mobile application usage in detail by generating log files based on mobile screen output. We combine long-term log file analysis and short-term screen recording analysis by utilizing existing computer vision and machine learning methods. To validate the log results of our approach and implementation, we collect 118 sample screen recordings of phone usage sessions and evaluate the resulting log files manually. In addition, we explore the performance of our approach under different video quality parameters: frame rate and bit rate. We show that our method provides detailed data about application use and can work with low-quality video under certain circumstances.
While mobile apps have become an integral part of everyday life, little is known about the factors that govern their usage. In particular, the role of geographic and cultural factors has been understudied. This article contributes by carrying out a large-scale analysis of geographic, cultural, and demographic factors in mobile usage. We consider app usage gathered from 25,323 Android users from 44 countries and 54,776 apps in 55 categories, as well as demographic information collected through a user survey. Our analysis reveals significant differences in app category usage across countries, and we show that these differences, to a large degree, reflect geographic boundaries. We also demonstrate that country gives more information about application usage than any demographic factor, but that there are also geographic and socio-economic subgroups in the data. Finally, we demonstrate that app usage correlates with cultural values, using the Value Survey Model of Hofstede as a reference for cross-cultural differences.
Not all smartphone owners use their device in the same way. In this work, we uncover broad, latent patterns of mobile phone use behavior. We conducted a study in which, via a dedicated logging app, we collected daily mobile phone activity data from a sample of 340 participants over a period of four weeks. Through an unsupervised learning approach and a methodologically rigorous analysis, we reveal five generic phone use profiles, each describing at least 10% of the participants: limited use, business use, power use, and personality-induced and externally induced problematic use. We provide evidence that intense mobile phone use alone does not predict negative well-being. Instead, our approach automatically revealed two groups with tendencies for lower well-being, which are characterized by nightly phone use sessions.
Natural emotional experiences happen “in the wild” as people are mobile, living their daily lives. To capture these experiences, emotion researchers often give participants smartphone applications with various graphical user interfaces (GUIs) to record how they are feeling; however, few empirical tests assess the comparative benefits and drawbacks of different GUI designs. This paper presents two empirical evaluations of three types of GUI designs for capturing emotion, using both a 10-participant in-lab trial and a 100-participant AMT trial. We define GUI scoring metrics and report on participants’ ability to rate real-world scenarios and evocative images, respectively, in ways that are consistent with population norms and with their own emotion word choices. We additionally report on users’ preferences for the different designs, their perceived ease of use, and the average time taken to complete an assessment with each design.
Location awareness of people inside commercial establishments can help with occupancy-based dynamic energy management and indoor navigation. In this paper, we propose MobiCeil, a novel phone-based indoor localization technique. The proposed technique is offline, automated, and uses images captured by the phone’s camera to identify the unique ceiling structure of any particular location in an office building. The proposed method is based on two assumptions: (a) in the office, employees tend to keep their phones lying on the table, and (b) the layout of ceiling landmarks in a portion of the ceiling structure (as captured by the phone’s camera on the table) is unique. We validated these assumptions by checking the phone placement of 47 employees at random at their cubicles or in meeting rooms, and by collecting ceiling layout data from 18 meeting rooms and 6 cubicles in an IT office building. To evaluate the performance of MobiCeil, we collected images of the ceiling as seen by the phone’s (front and back) cameras in three different rotations of the phone placed on the table, capturing a total of 960 ceiling images. Our approach achieved an accuracy of 88.2% for identifying locations, with a low computation time of 2.8 s per image.
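As a toy illustration of the localization step, a nearest-neighbor lookup over per-room ceiling descriptors captures the idea of matching a captured ceiling patch against known layouts. This is our own simplification, not MobiCeil's actual image pipeline, which operates on camera images rather than precomputed numeric feature vectors.

```python
from typing import Dict, List

def locate_by_ceiling(query: List[float],
                      fingerprints: Dict[str, List[float]]) -> str:
    """Return the room whose stored ceiling descriptor is closest to the
    query descriptor (Euclidean distance). Descriptors are assumed to be
    fixed-length numeric summaries of the visible ceiling layout."""
    def dist(a: List[float], b: List[float]) -> float:
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(fingerprints, key=lambda room: dist(query, fingerprints[room]))
```

A real system would extract such descriptors from ceiling images (e.g., landmark positions) before the lookup, and would need to handle the phone's rotation on the table.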
SESSION: Driving and bicycling
Highly automated vehicles occasionally require users to resume vehicle control from non-driving related tasks (NDRTs) by issuing cues called take-over requests (TORs). Being engaged in NDRTs, users have a decreased level of situational awareness of the driving context. Therefore, user interface designs for TORs should ensure smooth transitions from NDRTs to vehicle control. In this paper, we investigated the role of decision priming cues as TORs across different levels of NDRT engagement. In a driving simulator, users performed a reading span task while driving in automated mode. They received audio-visual TORs which primed them with an appropriate maneuver (steering vs. braking), depending on the traffic situation. Our results showed that priming users with upcoming maneuvers results in faster responses and longer times to collision with obstacles. However, the level of engagement in the NDRT does not affect user responses to TORs.
Child cyclists are often at greater risk for traffic accidents. This is in part due to the development of children’s motor and perceptual-motor abilities. To facilitate road safety for children, we explore the use of multimodal warning signals to increase their awareness and prime action in critical situations. We developed a bicycle simulator instrumented with these signals and conducted two controlled experiments. We found that participants spent significantly more time perceiving visual than auditory or vibrotactile cues. Unimodal signals were the easiest to recognize and suitable for encoding directional cues. However, when priming stop actions, reaction time was shorter when all three modalities were used simultaneously. We discuss the implications of these outcomes with regard to design of safety systems for children and their perceptual-motor learning.
Automated driving eliminates the permanent need for vehicle control and allows drivers to engage in non-driving related tasks. As the literature identifies office work as one potential activity, we expect that advanced input devices will shortly appear in automated vehicles. To address this matter, we mounted a keyboard on the steering wheel, aiming to provide an exemplary safe and productive working environment. In a driving simulator study (n=20), we evaluated two feedback mechanisms (heads-up augmentation on a windshield vs. a conventional heads-down display) and assessed both typing effort and driving performance in handover situations. Results indicate that the windshield alternative positively influences handovers, while heads-down feedback results in better typing performance. Text difficulty (two levels) showed no significant impact on handover time. We conclude that, for a widespread acceptance of specialized interfaces for automated vehicles, a balance between safety aspects and productivity must be found in order to attract customers while retaining driving safety.
SESSION: Digital memories and emotions
We developed a prototype which overlays local and remote participants in a video call and enables them to take group pictures together. These pictures serve as keepsakes of the event. The application uses real-time chroma key background removal to composite the remote person into the scene with the local group. We tested the prototype in a museum setting, and compared it to a more standard picture-in-picture (PiP) model. Users rated the composite mode as being significantly more fun, creating a greater sense of copresence and involvement than the PiP mode. Composite snapshots were also strongly preferred over picture-in-picture. Based on results from the study, we added a pinch-zoom and positioning interface to make it easier to frame remote people together into the snapshot, and conducted a second study. We conclude that combining composite video calls and picture-taking on a mobile device enables people to socially construct a shared activity with a remote person.
New form factors and user interfaces for computer-mediated communication are emerging. The possibilities to use these systems for emotional communication are interesting, and recent years have witnessed the appearance of a versatile range of prototypes. In this paper, we present the results of a systematic literature review on research addressing the design of systems with unconventional user interfaces for emotional communication, focusing on the use case of facilitating long-distance relationships. We reviewed a body of 150 papers resulting from a systematic search, which further analysis scoped to 47 papers containing altogether 52 prototypes relevant to our focus. We then analysed the characteristics affecting the interaction mediated by these systems and their user interfaces. We present the results related to the design attributes, e.g., form factors, modalities, and message types of the systems, as well as to the evaluation approaches. As salient findings, touch input and visual output are the most common interaction modalities in these systems, and their evaluations lack in-the-wild studies, especially on long-term usage.
We took an ethnographic approach to explore the continuum between excessive smartphone use and healthy disconnection. We conducted a qualitative mixed-methods study in Switzerland and the United States to understand the nature of the problem, how it evolves, the workarounds that users employ to disconnect, and their experience of smartphone disconnection. We discuss two negative behavioral cycles: an internal experience of habit and excessive use, and an externally reinforced cycle of social obligation. We present a taxonomy of non-use based on the dimensions of time and user level of control. We highlight three potential areas for solutions around short-term voluntary disconnection and describe recommendations for how the mobile industry and app developers can address this issue.
Today’s sensor-rich mobile and wearable devices allow us to seamlessly capture an increasing amount of our daily experiences in digital format. This process can support human memory by producing “memory cues”, e.g., an image or a sound that can help trigger our memories of a past event. However, first-person captures such as those coming from wearable cameras are not always ideal for triggering remembrance. One interesting option is thus to combine our own capture streams with those coming from co-located peers, or even from infrastructure sensors (e.g., a surveillance camera), in order to create more powerful memory cues. Given the significant privacy and security concerns of a system that shares personal experience streams with co-located peers, we developed a tangible user interface (TUI) that allows users to control in situ the capture and sharing of their experience streams through a set of five physical gestures. We report on the design of the device, as well as the results of a user study with 20 participants that evaluated its usability and efficiency in the context of a meeting capture. Our results show that our TUI outperforms a comparable smartphone application, but they also uncover user concerns regarding the need for additional control devices.
SESSION: Pointing and gestures
Targets on touchscreens should be large enough so that they can be tapped by fingers. In addition to the size of a target, properties of unintended targets around the intended target (e.g., margins) can affect user performance. In this study, we investigate the negative effects of such unintended targets (or distractors), which, when tapped, impose a penalty time that users have to wait out. Our participants sometimes purposely tapped an empty space on the opposite side of the distractor to avoid tapping it, and this behavior was affected by (1) the size of the intended target, (2) the gap between the intended target and distractors, and (3) the dimensionality of the pointing task (1D or 2D). We also found that we could not estimate user performance using Fitts’ and FFitts’ laws, probably because tap positions tended to shift away from distractors.
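For context, Fitts' law, which the authors found insufficient here, predicts movement time from target distance and width. A minimal sketch of the Shannon formulation, with illustrative regression constants `a` and `b` that are not values from this study:

```python
import math

def fitts_mt(distance: float, width: float,
             a: float = 0.1, b: float = 0.2) -> float:
    """Predicted movement time (s) under the Shannon formulation of
    Fitts' law: MT = a + b * log2(D/W + 1).

    a, b are regression constants normally fitted to empirical data;
    the defaults here are placeholders for illustration only."""
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty
```

The study's finding is that such models underpredict the cost of distractors: observed tap distributions shifted away from penalty-laden neighbors, a factor the D and W terms do not capture.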
The capacity of spatial multi-touch menus such as FastTap is limited by device screen size. We explore the idea of using multiple tabs to increase capacity – multiplexing the tablet’s screen space so each location holds multiple items. Earlier work has shown potential of this idea for smartwatches, but no evaluations have considered larger devices. To assess issues with interference-based errors and spatial memory development, we built two FastTap systems with multiple tabs and conducted two studies. We first tested user learning of 16 targets with a training game, and found that participants easily adapted to the multi-tab model, were able to perform memory-based shortcuts, and made few interference-based errors. The second study used realistic drawing tasks and showed that people successfully used the multi-tab FastTap system, with 88% of selections made using shortcuts by the study’s end. Our work demonstrates that spatial memory can successfully be multiplexed, and that tabs are a promising way to increase command set sizes for spatial interfaces.
We introduce $Q, a super-quick, articulation-invariant point-cloud stroke-gesture recognizer for mobile, wearable, and embedded devices with low computing resources. $Q ran up to 142X faster than its predecessor $P in our benchmark evaluations on several mobile CPUs, and executed in less than 3% of $P’s computations without any accuracy loss. In our most extreme evaluation demanding over 99% user-independent recognition accuracy, $P required 9.4s to run a single classification, while $Q completed in just 191ms (a 49X speed-up) on a Cortex-A7, one of the most widespread CPUs on the mobile market. $Q was even faster on a low-end 600-MHz processor, on which it executed in only 0.7% of $P’s computations (a 142X speed-up), reducing classification time from two minutes to less than one second. $Q is the next major step for the “$-family” of gesture recognizers: articulation-invariant, extremely fast, accurate, and implementable on top of $P with just 30 extra lines of code.
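For readers unfamiliar with the $-family, the core of $P (which $Q accelerates) is a greedy point-cloud matching distance between two resampled gestures. The following is a simplified sketch of that idea only, without $P's confidence weighting and multiple start indices, and without $Q's lower-bounding and lookup-table optimizations:

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def greedy_cloud_distance(pts1: List[Point], pts2: List[Point]) -> float:
    """Greedily match each point of pts1 to its nearest unmatched point
    of pts2 and sum the Euclidean distances. Both clouds are assumed to
    be already resampled and normalized, as in the $-family pipeline."""
    matched = [False] * len(pts2)
    total = 0.0
    for x1, y1 in pts1:
        best_j, best_d = -1, float("inf")
        for j, (x2, y2) in enumerate(pts2):
            if matched[j]:
                continue
            d = math.hypot(x1 - x2, y1 - y2)
            if d < best_d:
                best_j, best_d = j, d
        matched[best_j] = True
        total += best_d
    return total
```

Classification then picks the template with the smallest distance to the candidate; $Q's speed-ups come from pruning this search, not from changing the underlying distance.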
One potential method of improving the efficiency of human-computer interaction is to display information subliminally. Such information cannot be recalled consciously, but has some impact on the perceiver. However, it is not yet clear whether people can extract meaning from subliminal presentation of information in mobile contexts. We therefore explored subliminal semantic priming on smartphones. This builds on mixed evidence for subliminal priming across HCI in general, and mixed evidence for the effect of subliminal affective priming on smartphones. Our semi-controlled experiment (n=103) investigated subliminal processing of numerical information on smartphones. We found evidence that concealed transfer of information is possible to a very limited extent, but little evidence of a semantic effect. Overall, the impact is effectively negligible for practical applications. We discuss the implications of our results for real-world deployments and outline future research themes as HCI moves beyond mobile.
SESSION: Design work
The potential for mobile technology to support bespoke learning activities seamlessly across learning contexts has not been fully realized. We contribute insights gained from four months of field studies of place-based mobile learning in two different contexts: formal education with a primary school and informal, community-led learning with volunteers in a nearby park. For these studies we introduced ParkLearn: a platform for creating, sharing and engaging with place-based mobile learning activities through seamless learning experiences. The platform enables the creation of easily configurable learning activities that leverage the targeted learning environment and mobile devices’ hardware to support situated learning. Learners’ uploaded responses to activities can be viewed and shared via a website, supporting seamless follow-up classroom activities. By supporting creativity and independence for both learners and activity designers, ParkLearn promoted a sense of ownership, increased engagement in follow-up activities and supported the leveraging of physical and social communal learning resources.
This paper describes the lessons learned when designing an empathy-oriented image-exchange app for fifth-grade pupils. The aim was to evoke curiosity and empathy towards someone living elsewhere or under different socio-economic circumstances. In addition, we strived to apply design ethics (e.g., protecting users from insults, humiliation, inappropriate content, etc.) and take users’ privacy into account. By setting up these boundaries for this user group, we found ourselves confronted with a set of conflicting design decisions, which ultimately led to a lesser and different user experience than we had expected. Here, we discuss the interplay between our design decisions and the consequences thereof, and evaluate the mistakes we made. Moreover, we discuss how to balance anonymity and curiosity, and comment on the benefits of making a pre-analysis of potential clashes between the intended UX and other core design decisions.
Software developers typically use multiple large screens in their office setup. However, they often work away from the office where such a setup is not available, instead only working with a laptop computer and drastically reduced screen real estate. We explore how developers can be better supported in ad-hoc scenarios, for example when they work in a cafe, an airport, or at a client’s site. We present insights into current work practices and challenges when working away from the usual office desk sourced from a survey of professional software developers. Based on these insights, we introduce an IDE that makes use of additional personal devices, such as a phone or a tablet. Parts of the IDE can be offloaded to these mobile devices, for example the application that is being developed, a debugging console or navigational elements. A qualitative evaluation with professional software developers showed that they appreciate the increased screen real estate.
Although gait/balance analysis methods have proven effective for assessing falls risk (FR), they are mostly confined to the laboratory and rely on expensive specialist equipment. Recent sensor technologies have made it possible to capture FR data accurately; however, no exploration has been done on how to effectively communicate these data to seniors in both healthcare and free-living settings. We describe IDA (Insole Device for Assessment of Falls Risk), comprising a relatively inexpensive insole and prototype application that provides feedback to stakeholders. To explore what level of FR data should best be communicated to different stakeholders, we conducted workshops with 26 seniors and interviewed 7 healthcare workers in the UK. We highlight stakeholder preferences on viewing FR data to foster greater understanding of outcomes and enhance communication between stakeholders. Finally, we identify opportunities for design on enhancing understanding of gait/balance outcomes; these have potential applications in other areas of physical rehabilitation.
SESSION: Touch, gestures and strokes
A large number of today’s systems use interactive touch surfaces as the main input channel. Current devices reduce the richness of touch input to two-dimensional positions on the screen. A growing body of work develops methods that enrich touch input to provide additional degrees of freedom for touch interaction. In particular, previous work proposed to use the finger’s orientation as additional input. To efficiently implement new input techniques which make use of these new input dimensions, we need to understand the limitations of the input. We therefore conducted a study to derive the ergonomic constraints for using finger orientation as additional input in a two-handed smartphone scenario. We show that for both hands, the comfort and non-comfort zones depend on how the user interacts with a touch surface. For two-handed smartphone scenarios, the range is 33.3% larger than for tabletop scenarios. We further show that the phone orientation correlates with the finger orientation. Finger orientations which are harder to perform result in phone orientations where the screen does not directly face the user.
We present two new alternative interfaces for zooming out on a mobile device: Bounce Back and Force Zoom. These interfaces are designed to be used with a single hand. They use a pressure-sensitive multitouch technology in which the pressure itself is used to zoom. Bounce Back senses the intensity of pressure while the user is pressing down on the display. When the user releases his or her finger, the view bounces back to zoom out. Force Zoom also senses the intensity of pressure, and the zoom level is associated with this intensity. When the user presses down on the display, the view is scaled back according to the intensity of the pressure. We conducted a user study to investigate the efficiency and usability of our interfaces by comparing them with a previous pressure-sensitive zooming interface and the Google Maps zooming interface as a baseline. Results showed that Bounce Back and Force Zoom were evaluated as significantly superior to the interface from previous research, and the number of operations was significantly lower than with the default mobile Google Maps interface and the previous interface.
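Force Zoom's core mapping (a harder press scales the view further back, i.e. zooms further out) can be sketched as a clamped linear function. The linear form, the bounds, and the normalized pressure input are illustrative assumptions, not details from the paper:

```python
def force_zoom_level(pressure: float, min_zoom: float = 1.0,
                     max_zoom: float = 10.0) -> float:
    """Map normalized pressure in [0, 1] to a zoom level.

    No pressure keeps the view at max_zoom (fully zoomed in); full
    pressure scales the view back to min_zoom (fully zoomed out)."""
    p = min(max(pressure, 0.0), 1.0)             # clamp sensor noise
    return max_zoom - p * (max_zoom - min_zoom)  # harder press, more zoomed out
```

A production implementation would likely apply a non-linear transfer function and hysteresis, since raw pressure readings are noisy.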
Knocking is a way of interacting with everyday objects. We introduce BeatIt, a novel technique that allows users to use passive, everyday objects to control a smart environment by recognizing the sounds generated from knocking on the objects. BeatIt uses a BeatSet, a series of percussive sound samples, to represent the sound signature of knocking on an object. A user associates a BeatSet with an event. For example, a user can associate the BeatSet of knocking on a door with the event of turning on the lights. Decoder, a signal-processing module, classifies the sound signals into one of the recorded BeatSets, and then triggers the associated event. Unlike prior work, BeatIt can be implemented on microphone-enabled commodity devices. Our user studies with 12 participants showed that our proof-of-concept implementation based on a smartwatch could accurately classify eight BeatSets using a user-independent classifier.
We introduce GATO, a human performance analysis technique grounded in the Kinematic Theory that delivers accurate predictions for the expected user production time of stroke gestures of all kinds: unistrokes, multistrokes, multitouch, or combinations thereof. Our experimental results obtained on several public datasets (82 distinct gesture types, 123 participants, ≈36k gesture samples) show that GATO predicts user-independent gesture production times that correlate rs > .9 with groundtruth, while delivering an average relative error of less than 10% with respect to actual measured times. With its accurate estimations of users’ a priori time performance with stroke gesture input, GATO will help researchers to understand better users’ gesture articulation patterns on touchscreen devices of all kinds. GATO will also benefit practitioners to inform highly effective gesture set designs.
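The two evaluation metrics reported above (rank correlation with ground truth and average relative error) can be computed as follows. This is a generic sketch of the metrics, not GATO itself; the Spearman implementation below ignores tie correction for brevity.

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation (no tie correction, for illustration)."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def mean_relative_error(predicted, actual):
    """Average |prediction - measurement| / measurement."""
    return sum(abs(p - a) / a for p, a in zip(predicted, actual)) / len(actual)
```

In the paper's terms, predicted production times achieving rs > .9 and a mean relative error below 10% would correspond to `spearman_rho(...) > 0.9` and `mean_relative_error(...) < 0.1` on the measured times.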
This paper presents Brassau, a graphical virtual assistant that converts natural language commands into GUIs. A virtual assistant with a GUI has the following benefits compared to text- or speech-based virtual assistants: users can monitor multiple queries simultaneously, it is easy to re-run complex commands, and users can adjust settings using multiple modes of interaction. Brassau introduces a novel template-based approach that leverages a large corpus of images to make GUIs visually diverse and interesting. Brassau matches a command from the user to an image to create a GUI. This approach decouples the commands from GUIs and allows for reuse of GUIs across multiple commands. In our evaluation, users prefer the widgets produced by Brassau over plain GUIs.
SESSION: Understanding mobile use
I don’t want to seem trashy: exploring context and self-presentation through young gay and bisexual males’ attitudes toward shirtless selfies on Instagram
Mobile devices and social media have made it possible to share photos, often selfies, nearly instantaneously with potentially large networks of contacts and followers. Selfies have become a frequent component of young people’s online self-presentations, and shirtless male selfies, a common trope among some gay Instagram users, present an interesting self-presentation dilemma. Images of shirtless males, normatively appropriate, attractive and innocuous in some contexts, can also be vulnerable to misinterpretation or unintended sexualization in ways that can negatively impact others’ impressions. This paper reports on an interview study of 15- to 24-year-old gay and bisexual Instagram users’ attitudes toward and experiences with shirtless selfies. Results suggest that they see a clear tension between these images conveying attractiveness and possible negative connotations such as promiscuity, and that they have different strategies for navigating this tension. The results have implications for consideration of the contexts in which mobile social media content is produced and consumed.
Since the emergence of video computer games in the early 1970s, the concept of “cheating” has been a hot issue in video gaming research. Adding mobility and location-based capabilities to computer games introduces a whole new set of behaviours, motivations and justifications that challenge gaming communities to reconsider what constitutes “cheating”, and what is simply an acceptable extension of game play. Using the specific case of Pokémon GO, we investigate players’ perceptions of cheating in this mobile location-based game. In our research, we identified 10 ways that players circumvent the rules of Pokémon GO. Through analysis of online forums, field observations, interviews, and a focus group with local players, we found that players’ attitudes vary as to what constitutes “cheating”, and whether playing outside the rules is acceptable. We found players “cheat” to enhance the game experience, to compensate for limitations in the game’s design, or to keep up with other cheaters. While this has been observed in online gaming before, our study contributes to research by relating these behaviours specifically to the game’s mobile location-based nature. We offer implications for the design of location-based games.
Watching online videos on mobile devices has become a pervasive part of people’s daily activities. However, different users can watch the same video for different purposes, and hence develop different expectations for their experience. Understanding people’s motivations for watching videos on mobile devices can give designers the information needed to craft the whole watching journey so that it better adapts to users’ expectations. To obtain this understanding, a comprehensive framework of viewer motivations is necessary. We present research that provides several contributions to understanding mobile video watchers: a thorough framework of user motivations to watch videos on mobile devices, a detailed procedure for collecting and categorizing these motivations, a set of challenges that viewers experience in addressing each motivation, insights on usage of mobile and non-mobile devices, and design recommendations for video sharing systems.
The emergence of low-cost thermographic cameras for mobile devices provides users with new practical and creative prospects. While recent work has investigated how novices use thermal cameras for energy auditing tasks in structured activities, open questions remain about “in the wild” use and the challenges or opportunities therein. To study these issues, we analyzed 1,000 YouTube videos depicting everyday uses of thermal cameras by non-professional, novice users. We coded the videos by content area, identified whether common misconceptions regarding thermography were present, and analyzed questions within the comment threads. To complement this analysis, we conducted an online survey of the YouTube content creators to better understand user behaviors and motivations. Our findings characterize common thermographic use cases, extend discussions surrounding the challenges novices encounter, and have implications for the design of future thermographic systems and tools.
SESSION: Gaze, HMD and AR
While first-generation mobile gaze interfaces required special-purpose hardware, recent advances in computational gaze estimation and the availability of sensor-rich and powerful devices are finally fulfilling the promise of pervasive eye tracking and eye-based interaction on off-the-shelf mobile devices. This work provides the first holistic view on the past, present, and future of eye tracking on handheld mobile devices. To this end, we discuss how research developed from building hardware prototypes to accurate gaze estimation on unmodified smartphones and tablets. We then discuss implications by laying out 1) novel opportunities, including pervasive advertising and conducting in-the-wild eye tracking studies on handhelds, and 2) new challenges that require further research, such as visibility of the user’s eyes, lighting conditions, and privacy implications. We discuss how these developments will shape MobileHCI research in the future, possibly the next 20 years.
Current head-mounted displays (HMDs) for Virtual Reality (VR) and Augmented Reality (AR) have a limited field-of-view (FOV). This limited FOV further decreases the already restricted human visual range and amplifies the problem of objects going out of view. Therefore, we explore the utility of augmenting HMDs with RadialLight, a peripheral light display implemented as 18 radially positioned LEDs around each eye to cue direction towards out-of-view objects. We first investigated direction estimation accuracy of multi-colored cues presented on one versus two eyes. We then evaluated direction estimation accuracy and search time performance for locating out-of-view objects in two representative 360° video VR scenarios. Key findings show that participants could not distinguish between LED cues presented to one or both eyes simultaneously, participants estimated LED cue direction within a maximum 11.8° average deviation, and out-of-view objects in less distracting scenarios were selected faster. Furthermore, we provide implications for building peripheral HMDs.
Beyond Halo and Wedge: visualizing out-of-view objects on head-mounted virtual and augmented reality devices
Head-mounted devices (HMDs) for Virtual and Augmented Reality (VR/AR) enable us to alter our visual perception of the world. However, current devices suffer from a limited field of view (FOV), which becomes problematic when users need to locate out-of-view objects (e.g., locating points-of-interest during sightseeing). To address this, we developed and evaluated in two studies HaloVR, WedgeVR, HaloAR and WedgeAR, which are inspired by usable 2D off-screen object visualization techniques (Halo, Wedge). While our techniques resulted in overall high usability, we found the choice of AR or VR impacts mean search time (VR: 2.25s, AR: 3.92s) and mean direction estimation error (VR: 21.85°, AR: 32.91°). Moreover, while adding more out-of-view objects significantly affects search time across VR and AR, direction estimation performance remains unaffected. We provide implications and discuss the challenges of designing for VR and AR HMDs.
Alertness is a crucial component of our cognitive performance. Reduced alertness can negatively impact memory consolidation, productivity and safety. As a result, there has been an increasing focus on continuous assessment of alertness. Existing methods usually require users to wear sensors, fill out questionnaires, or periodically perform response time tests in order to track their alertness. These methods may be obtrusive to some users, which limits their applicability. In this work, we propose AlertnessScanner, a computer-vision-based system that collects in-situ pupil information to model alertness in the wild. We conducted two in-the-wild studies to evaluate the effectiveness of our solution, and found that AlertnessScanner can assess alertness passively and unobtrusively. We discuss the implications of our findings and present opportunities for mobile applications that measure and act upon changes in alertness.
Drones offer camera angles that are not possible with traditional cameras and are becoming increasingly popular for videography. However, flying a drone and controlling its camera simultaneously involves manipulating 5-6 degrees of freedom (DOF), which requires significant training. We present ARPilot, a direct-manipulation interface that lets users plan an aerial video by physically moving their mobile devices around a miniature 3D model of the scene, shown via Augmented Reality (AR). The mobile device acts as the viewfinder, making it intuitive to explore and frame the shots. We leveraged AR technology to explore three 6-DOF video-shooting interfaces on mobile devices: AR keyframe, AR continuous, and AR hybrid, and compared them against a traditional touch interface in a user study. The results show that AR hybrid is the most preferred by the participants and requires the least effort of all the techniques, while the users’ feedback suggests that AR continuous empowers more creative shots. We discuss several distinct usage patterns and report insights for further design.
SESSION: Smart watches
MyoTilt: a target selection method for smartwatches using the tilting operation and electromyography
We present the MyoTilt target selection method for smartwatches, which employs a combination of a tilt operation and electromyography (EMG). First, a user tilts his/her arm to indicate the direction of cursor movement on the smartwatch; then s/he applies force with the arm. EMG senses the force and moves the cursor in the direction in which the user is tilting the arm. In this way, the user can manipulate the cursor on the smartwatch with minimal effort, simply by tilting the arm and applying force to it. We conducted an experiment to investigate its performance and to understand its usability. Results showed that participants selected small targets with an accuracy greater than 93.89%. In addition, performance significantly improved compared to previous tilting operation methods. Likewise, accuracy remained stable as targets became smaller, indicating that the method is unaffected by the “fat finger problem”.
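The interaction described above, tilt sets the cursor direction and muscle force gates and drives the movement, can be sketched as a single update step. This is an illustrative controller under assumed inputs (a unit tilt direction vector and a normalized EMG activation), not MyoTilt's actual implementation; the function name, threshold, and gain are hypothetical.

```python
def cursor_step(cursor, tilt, emg_force,
                force_threshold=0.2, gain=5.0):
    """One update step of a hypothetical tilt+EMG cursor controller.

    `tilt` is a unit direction vector (x, y) derived from the wrist's
    tilt; `emg_force` is normalized muscle activation in [0, 1].  The
    cursor moves in the tilt direction only while the applied force
    exceeds the threshold, with speed proportional to the excess force.
    """
    x, y = cursor
    if emg_force < force_threshold:
        return (x, y)  # no force applied: cursor stays put
    speed = gain * (emg_force - force_threshold)
    return (x + speed * tilt[0], y + speed * tilt[1])
```

Gating movement on the EMG signal is what lets the user tilt the arm freely to aim without the cursor drifting until force is deliberately applied.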
We propose BubbleFlick, an effective interface for Japanese text entry on smartwatches. While various ideas have been proposed to provide easy and fast text entry for the Latin alphabet, Japanese text entry poses additional challenges, such as having more than fifty syllabary characters, or kana, to enter, and the subsequent kana-kanji conversion, which translates a sequence of syllabary characters into a standard expression with a mixture of kanji and kana characters. This paper focuses on interfaces for entering kana syllabary characters. We designed and prototyped three interfaces: 1) a Japanese kana syllabary keyboard, 2) a Dial&Flick interface, and 3) a DualBelts interface. Through a comparative pilot study of the prototypes, we refined the most promising Dial&Flick interface into BubbleFlick. BubbleFlick provides the widest possible area for easy flick operations while also leaving an area for editing text. We conducted a 30-day user study comparing BubbleFlick with Google’s latest Japanese text-entry method based on a numeric keypad. After thirty days, BubbleFlick showed a text-entry speed of over 35 characters per minute, which was comparable to Google’s numeric-keypad-based method for novice participants. Throughout the user study, BubbleFlick showed a lower error rate and gave us informative hints for further improvement.
As smartwatch functions expand, target selection among many items will probably become a common task. List search interfaces (LSIs) for a smartwatch use a prefix-matching query to search for items, and need two modes because of the small screen size: a query input mode and a list navigation mode. Despite the modes, LSIs may be more efficient than list interfaces (LIs), which involve no text querying, for large pools. Indeed, we show that the LSI outperformed the LI for pool sizes over 60. However, we also found that LSI users experience overhead when deciding whether or not to switch modes. To reduce this overhead, we designed two auto-switching LSIs: an input-length-based auto-switching LSI (IA-LSI) and a list-size-based auto-switching LSI (LA-LSI). We show that both auto-switching LSIs outperformed the conventional LSI for pool sizes over 60. We also conducted experiments with the auto-switching LSIs for various pool sizes, and provide the results and guidelines for the optimal switching criteria for the LSIs.
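The two auto-switching criteria described above can be sketched as simple predicates over the current query and its prefix-matched results. This is a minimal illustration, assuming arbitrary thresholds; the function names and threshold values are not from the paper.

```python
def filter_items(items, query):
    """Prefix-matching query over the item pool, as in an LSI."""
    return [it for it in items if it.startswith(query)]

def should_switch_to_list(query, matches,
                          max_query_len=3, max_list_size=5,
                          policy="list-size"):
    """Hypothetical auto-switch criteria for a smartwatch list search UI.

    'input-length' mimics an IA-LSI: leave query input mode once the
    query reaches a fixed length.  'list-size' mimics an LA-LSI: leave
    query input mode once the filtered list is short enough to scroll.
    (Both thresholds are illustrative, not from the paper.)
    """
    if policy == "input-length":
        return len(query) >= max_query_len
    return len(matches) <= max_list_size
```

The design question the paper studies, which criterion and threshold minimize selection time for a given pool size, corresponds here to choosing `policy` and the thresholds.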
SESSION: Accessibility and mobile health
Platforms like Google Maps or Bing Maps are used by a large number of users to find the shortest path to their destinations. While these services mainly focus on supporting drivers and pedestrians, the first services supporting wheelchair users have emerged. Routing algorithms for wheelchair users try to avoid obstacles like stairs or bollards and optimize on criteria like surface properties and slope of the route. In this study, we undertake the first controlled examination of wheelchair routing approaches. By analyzing three routing platforms, covering two wheelchair routing algorithms and three pedestrian routing algorithms, across fifteen major cities in Germany, our results highlight that the routes for wheelchair users are significantly longer and partially also more complex than those for pedestrians. In addition, we show that today’s pedestrian routing algorithms also output very diverse routes.
Mobile location-based games to support orientation & mobility training for visually impaired students
Orientation and Mobility (O&M) training is an important aspect of the education of visually impaired students. In this work we present a scavenger hunt-like location-based game to support O&M training. In two comparative studies with blind and partially sighted students and interviews with teachers, we investigate whether a mobile game played in the real world is a suitable approach to support O&M training, and whether a mobile location-based O&M training game is preferred over a game played in a virtual world. Our results show that a mobile location-based game is a fruitful approach to support O&M training for visually impaired students, and that a mobile location-based game is preferred over playing in a virtual world. Based on the gathered insights, we discuss implications for using mobile location-based games in O&M training.
Previous studies have suggested that people with learning disabilities (LD) face significant communication barriers when interacting with health professionals. Such obstacles may be considered preventable; however, there is a surprising lack of research-based technologies available that aim to promote this communication. We address this issue by investigating the potential use of mobile technologies to support adults with mild LDs during clinical consultations. To achieve this, we interviewed 10 domain experts, including government advisors, academics, support workers and General Practitioners. The extracted information was used to develop an initial technology probe, which was evaluated by a subset of the aforementioned experts. The overall contribution of this research is a set of design guidelines for the development of Augmentative and Alternative Communication technologies that target the clinical needs of adults with mild LDs.
Communication between nurses and other healthcare members is essential during the bedside medication administration stage to provide effective patient care and prevent medication errors. The nurse provides information to the physician and pharmacist when consultation regarding medication errors or concerns is needed. This information is critical, as it affects situation assessment and clinical judgment; insufficient information can lead to failure in treatment and jeopardize patient health. Research to date has focused on improving tools for general communication between healthcare members. However, none of these tools have been customized to effectively fit the medication administration stage, nor have they considered the content of the communication. Therefore, in this paper, we propose a novel idea of customized communication that applies precisely to the medication administration stage. We developed the Medication Administration Communication (MAC) application, which generates the essential content of communication between the nurse and other healthcare members. We evaluated the application by testing its usability from the nurses’ perspective.
SpokeIt is a mobile serious game for health designed to support speech articulation therapy. Here, we present SpokeIt as well as 2 preceding speech therapy prototypes we built, all of which use a novel offline critical speech recognition system capable of providing feedback in real time. We detail the key design motivations behind each of them and report on their potential to help adults with speech impairment co-occurring with developmental disabilities. We conducted a qualitative within-subject comparative study with 5 adults from this target group, who played all 3 prototypes. This study yielded refined functional requirements based on user feedback, relevant reward systems to implement based on user interest, and insights on the preferred hybrid game structure, which can be useful to others designing mobile games for speech articulation therapy for a similar target group.