Welcome to this issue of the Proceedings of the ACM on Human-Computer Interaction, which brings together contributions from the Mobile Human-Computer Interaction (MHCI) community. This issue showcases innovations in research focused on mobile, wearable, and personal devices. Research in MHCI encompasses both technical innovations and social considerations. Mobile technologies accompany us throughout our daily lives: at home, at work, in traffic, and out in the wild. However, while mobile technologies provide the capacity for constant connectivity, they also pose risks of harm and unintended consequences. There has never been a more pressing time to debate and explore the meaning of digital culture, and to understand how mobile technologies can and should be used to connect us, our data, and mobile applications and services in meaningful ways - true to the conference theme of 'Connecting Cultures'.
Smartphone overuse around family and friends has increased in recent years and often limits one-on-one interaction between co-located individuals. Smartphone-based virtual agents have been shown to be effective for behavior intervention and mediation, such as promoting physical activity, yet little is known about leveraging such agents to facilitate conversation between co-located individuals. In this paper, we explore strengthening conversations between co-located couples by introducing a smartphone-based agent that acts as a conversation facilitator between them. We contrast the results with a text-based alternative. Our findings suggest that virtual agents can serve as a valuable social entity that mediates and supports couples' communication and relationship dynamics. Based on this, we suggest design considerations for this context that leverage the unique qualities of virtual agents.
Rendering realistic tactile sensations of virtual objects remains a challenge in VR. While haptic interfaces have advanced, particularly with phased arrays, their ability to create realistic object properties like state and temperature remains unclear. This study investigates the potential of Ultrasound Mid-air Haptics (UMH) for enhancing the perceived congruency of virtual objects. In a user study with 30 participants, we assessed how UMH impacts the perceived material state and temperature of virtual objects. We also analyzed EEG data to understand how participants integrate UMH information physiologically. Our results reveal that UMH significantly enhances the perceived congruency of virtual objects, particularly for solid objects, reducing the feeling of mismatch between visual and tactile feedback. Additionally, UMH consistently increases the perceived temperature of virtual objects. These findings offer valuable insights for haptic designers, demonstrating UMH's potential for creating more immersive tactile experiences in VR by addressing key limitations in current haptic technologies.
Guiding users through limb exercises can assist in muscle training or physical recovery. However, traditional vision-based methods often require multiple camera angles to help users understand the motions and require users to stay within range of the screen. We therefore propose AudioMove, a non-visual system that guides users through multi-directional limb motions using spatial audio on commercial off-the-shelf (COTS) devices (i.e., smartphones and earphones). The proposed system addresses the challenge of conveying directional information across multiple planes in real time. We conducted a mixed-method user study to evaluate the effectiveness of the system with three methods combining motion data with spatial audio perception. Additionally, we built a user interface to collect users' comments. The results indicate that spatial audio guidance can provide a natural, pervasive, and non-visual exercise training solution for daily life.
Due to the continuous improvement of Augmented Reality (AR) head-mounted displays (HMDs), these devices are bound to become increasingly integrated into our daily routines. So far, a major focus of AR research has been on indoor usage and deployment. However, since seamlessly supporting users in their activities while on the move in various outdoor contexts is becoming increasingly important, there is a need to investigate the current state of the art of AR technologies for people in motion outdoors. Therefore, we conducted a systematic literature review of pertinent HCI publications, specifically looking into applications concerning vulnerable road users. We identify the contexts in which such technologies have been researched, prevailing challenges in the field, and the methodological approaches applied. Our findings show that most contributions address pedestrians, that the field is shifting towards HMDs, and that lab studies prevail due to technology limitations. Based on our findings, we discuss trends, existing gaps, and opportunities for future research.
Engaging with our devices while we engage with each other is problematic, as it distracts us and diminishes our social interactions. Subtle interaction has been proposed as an approach to reconcile personal and computing interactions through less disruptive technology. Along those lines, we investigate showing information directly on and next to the people we are engaging with. Body-based data visualization allows us to maintain our attention on others while receiving information at the same time. We explore potential designs of such body-based, and especially on-face, visualizations and create a set of five prototype visualizations in a Snapchat lens. We use these prototypes in a video call study with 16 participants to evaluate how body-based visualizations affect actual conversations.
To enable people with visual impairments (PVI) to explore shopping malls, it is important to provide information both for selecting destinations and tailored to the individual's interests. We achieved this through conversational interaction by integrating a large language model (LLM) with a navigation system. ChitChatGuide allows users to plan a tour through contextual conversations, receive personalized descriptions of their surroundings based on transit time, and make inquiries during navigation. We conducted a study in a shopping mall with 11 PVI, and the results reveal that the system allowed them to explore the facility with increased enjoyment. By understanding vague and context-based questions, the LLM-based conversational interaction enabled the participants to explore unfamiliar environments effectively. The personalized, in-situ information generated by the LLM was both useful and enjoyable. Considering the limitations we identified, we discuss criteria for integrating LLMs into navigation systems to enhance the exploration experiences of PVI.
Research in affective robotics has used emotions to improve human-robot interaction. One important aspect has been designing recognizable and believable emotions for robots. Recent work argued that externally displayed emotions on robots may or may not be appropriate for a given situation. However, selecting emotions as appropriate or inappropriate is not trivial. Here, we examine the practicality of an established model for crafting emotion appropriateness based on the situation of interaction. To do so, we explored the use of the Ortony, Clore, and Collins (OCC) model, which provides a psychological framework of appraisal in which the characteristics of situations are defined and connected to emotions, to identify emotion categories and create contrasting perceptions of emotion appropriateness. We then mapped these categories to four recognizable emotions on aerial robots and designed two video clips (3 min 35 s each) of appropriate and inappropriate emotions, respectively. The clips were evaluated in an online study (N=100), in which significant differences were found in attitudes toward the robot's emotions. This paper contributes initial findings on designing for emotion appropriateness.
We developed an Android phone unlock mechanism that uses facial recognition combined with specific facial expressions to access a specially secured portion of the device, designed for plausible deniability. The widespread adoption of biometric authentication methods, such as fingerprint and facial recognition, has revolutionized mobile device security, offering enhanced protection against shoulder-surfing attacks and improving user convenience compared to traditional passwords. However, a downside is the potential for third-party coercion to unlock the device. While text-based authentication allows users to reveal a hidden system by entering a special password, this is challenging with face authentication. We evaluated our approach in a role-playing user study involving 50 participants, paired so that one acted as the attacker and the other as the suspect. Suspects successfully accessed the secured area, mostly without detection, and expressed interest in having this feature on their personal phones. We also discuss open challenges and opportunities in implementing such authentication mechanisms.
We explore the design of a watch that can deliver notifications through shape changes, with a specific focus on changes in curvature at the back of the watch face. We explain our design choices and the challenges we faced while creating such a watch. We conducted an experimental study to determine the absolute detection threshold (ADT) of this novel form of feedback. We compared the ADT of two different watches, both of which have a back face that can change its curvature and make contact with the wearer's wrist to notify them. These two watches exhibit different shapes when inflated with high air pressure. To determine the ADT, we conducted a standard two-down, one-up adaptive staircase procedure. Our findings show that an ADT of 3.86 psi is required to inflate the back surface for detection by participants. Overall, our qualitative findings indicate that participants enjoyed this novel type of feedback and could feel different sensations with each watch.
Early prediction of children's literacy skills is crucial for successful literacy development. However, standardized screenings for pre-readers are mainly paper-based and designed for one-on-one sessions, demanding significant resources. We present the development and feasibility evaluation of a digital, game-based literacy screening for German pre-readers that supports group sessions. The screening comprises five tasks that do not rely on written language skills. We detail critical design decisions and guidelines for the effective implementation of this group-based screening. We evaluated the feasibility and user experience with 34 German second- and third-graders. Results revealed that the screening is suitable for use in group settings and that it was positively perceived by the children. Children found the tasks engaging and straightforward, often perceiving them as games. This study demonstrates that digital game-based screenings can be used effectively in group settings with young children with minimal adult guidance, offering a motivating and engaging assessment method.
Whether it is sleep, diet, or procrastination, changing behaviours can be challenging. Individuals could design and build their own personalised digital interventions to help them reach their goals, but little is known about this process. Building upon previous research, we propose the Behaviour Change with Trigger-Action Programming (BC-TAP) model, which describes how individuals could bridge the gap between their current and desired behaviour through the creation of 'Do-It-Yourself' (DIY) digital interventions. We conducted a two-day participatory workshop based on the BC-TAP model with 28 participants. Participants articulated plans to change a behaviour of their choice and represented these plans in mobile device automations. After using their interventions for up to three weeks, participants reflected on their experience. Our findings report opportunities and challenges at each stage of the process. While formulating a digital proxy for certain behaviours was challenging, both failures and successes facilitated participants' awareness of their behaviour, and their ability to change it.
Road safety remains a critical global concern, with millions of crashes reported annually. Understanding the safety of individual road junctions is vital, especially in areas prone to road rage and reckless driving. However, current navigation systems lack detailed safety information, increasing risk for drivers and pedestrians. Recognizing this need, this paper introduces a method that automatically annotates road segments with a driving safety level to aid cautious maneuvering and safe driving practices. By leveraging onboard sensors, our method identifies causal chains behind poor driving maneuvers, enabling the modeling of safety levels for various road segments. We perform a thorough evaluation of our method over publicly available and self-collected datasets from multiple countries and observe >80% accuracy (in terms of F1-score) in correctly annotating safety concerns. In addition, a thorough user study indicates the generalizability and usability of the proposed approach for practical deployment.
Hearing loss affects 20% of the global population, a rate that is increasing dramatically as the world's population ages. Early prevention and identification of ear diseases can significantly reduce the risk of disabling hearing impairment. We propose EarMonitor, an interactive, vision-based ear health monitoring system that enables users to examine their ear conditions with a low-cost hand-held endoscope. EarMonitor can detect six ear health conditions suitable for self-assessment. In particular, it can recognize complications from ear diseases, helping users better understand the results. In the wild, our computer vision algorithm achieves a detection sensitivity of 0.949 for earwax buildup and blockage in 100 external auditory canal photos; our deep learning model achieves an average detection sensitivity of 0.861 for the other five conditions, considering complications, in 350 tympanic membrane photos. We validated EarMonitor's effectiveness through a user study involving 17 participants and two experts, leading to valuable insights regarding the design and interpretation of non-clinical assessment devices.
Mobile phones have enabled users to browse information in varying mobility contexts. For high-mobility settings such as walking, however, phones pose several usability challenges, particularly safety and limited screen sizes. While Augmented Reality (AR) has been proposed to address these issues, prior work has yet to investigate AR interface design in real-world walking conditions beyond text readability and notification design. This paper presents the first exploration of AR browsing interface design and extended usage while walking in the wild. We first conducted design sessions with 12 UI designers while walking in varied environments to design the window size, distance, opacity, anchor type, and placement for three categories of apps: text, video, and mixed content. Results show that traffic level significantly affects the designed window size, whereas content type significantly affects window size, distance, opacity, and vertical placement. To gain further insights from real-world usage, we conducted a multi-day observational study with 5 participants and observed that participants on average switched among window layouts every 3.3 minutes, for reasons such as safety and the level of extended visual attention.
Self-tracking has grown in popularity over recent years, leading to a rise in users who monitor data to improve their health. However, decreasing self-motivation, driven by differing user circumstances such as priorities, lifestyles, and habits, affects how users conduct self-tracking in apps and hampers their progress towards health goals. We investigated factors that contribute to increasing and maintaining self-motivation while self-tracking for health goals, as well as the potential applications of these factors in health apps. We conducted semi-structured interviews with 15 participants to gain insights into their self-tracking habits and health app usage. Our thematic analysis suggests that users value convenient self-tracking methods and seeing notable changes in progress toward a health goal. These findings reveal design opportunities for streamlining app features, reframing progress indicators in data visualizations, and accommodating user priorities.
The proliferation of mobile Virtual Reality (VR) headsets shifts our interaction with virtual worlds beyond our living rooms into shared spaces. Consequently, we are entrusting more and more personal data to these devices, calling for strong security measures and authentication. However, the standard authentication method of such devices - entering PINs via virtual keyboards - is vulnerable to shoulder-surfing, as movements to enter keys can be monitored by an unnoticed observer. To address this, we evaluated masking techniques to obscure VR users' input during PIN authentication by diverting their hand movements. Through two experimental studies, we demonstrate that these methods increase users' security against shoulder-surfing attacks from observers without excessively impacting their experience and performance. With these discoveries, we aim to enhance the security of future VR authentication without disrupting the virtual experience or necessitating additional hardware or training of users.
Hearables are highly functional earphone-type wearables; however, existing input methods for stand-alone hearables support only a limited number of commands, creating a need to extend device operation through hand gestures. Previous research on hand input for hearables has addressed both user understanding and gesture recognition systems. However, on the user-understanding side, the investigation of hand input with hearables remains incomplete, and existing recognition systems have not demonstrated proficiency in recognizing user-defined gestures. In this study, we conducted a gesture elicitation study (GES) assuming hand input using hearables under six conditions (three interaction areas × two device shapes). We then extracted, from the user-defined gestures, ear-level gestures that the device's built-in IMU sensor could recognize and investigated their recognition performance. In seated experiments, the gesture recognition rate was 91.0% for in-ear devices and 74.7% for ear-hook devices.
Users frequently use their smartphones in combination with other smart devices, for example, when streaming music to smart speakers or controlling smart appliances. During these interconnected interactions, user data gets handled and processed by several entities that employ different data protection practices or are subject to different regulations. Users need to understand these processes to inform themselves in the right places and make informed privacy decisions. We conducted an online survey (N=120) to investigate whether users have accurate mental models about interconnected interactions. We found that users consider scenarios more privacy-concerning when multiple devices are involved. Yet, we also found that most users do not fully comprehend the privacy-relevant processes in interconnected interactions. Our results show that current privacy information methods are insufficient and that users must be better educated to make informed privacy decisions. Finally, we advocate for restricting data processing to the app layer and better encryption to reduce users' data protection responsibilities.
Depression, a prevalent and complex mental health issue affecting millions worldwide, presents significant challenges for detection and monitoring. While facial expressions have shown promise in laboratory settings for identifying depression, their potential in real-world applications remains largely unexplored due to the difficulty of developing efficient mobile systems. In this study, we introduce FacePsy, an open-source mobile sensing system designed to capture affective inferences by analyzing sophisticated features and generating real-time data on facial behavior landmarks, eye movements, and head gestures, all within the naturalistic context of smartphone usage; we deployed it in a study with 25 participants. Through rigorous development, testing, and optimization, we identified eye-open states, head gestures, smile expressions, and specific Action Units (2, 6, 7, 12, 15, and 17) as significant indicators of depressive episodes (AUROC=81%). Our regression model predicting PHQ-9 scores achieved moderate accuracy, with a Mean Absolute Error of 3.08. Our findings offer valuable insights and implications for researchers and developers in healthcare building deployable and usable mobile affective sensing systems, ultimately improving mental health monitoring, prediction, and just-in-time adaptive interventions.
Password sharing is a convenient means to access shared resources, save on subscription costs, provide emergency access, and avoid forgetting vital account details. However, it also raises significant privacy concerns, especially in digital communication contexts where content may be inadvertently exposed to unintended recipients. In this paper, we investigate this duality through a survey of 86 Egyptian women to understand their sharing behavior, and through the design and evaluation of a chat application used by 60 participants. This application issues warnings based on content sensitivity, raising user awareness of privacy risks. Our findings indicate that, while many participants initially shared passwords, they were surprised to discover others doing the same. Furthermore, our application effectively reduced password sharing, reflecting improved awareness of the associated risks. This research acknowledges the cultural aspects of password sharing while striving to enhance the sharing experience, enabling participants to make informed choices that improve their control over their information.
Integrating Artificial Intelligence (AI) into mobile and wearable devices offers numerous benefits at individual, societal, and environmental levels. Yet, it also spotlights concerns over emerging risks. Traditional assessments of risks and benefits have been sporadic and often require costly expert analysis. We developed a semi-automatic method that leverages Large Language Models (LLMs) to identify AI uses in mobile and wearable devices, classify their risks based on the EU AI Act, and determine their benefits in terms of alignment with globally recognized long-term sustainable development goals. A manual validation of our method by two experts in mobile and wearable technologies, a legal and compliance expert, and a cohort of nine individuals with legal backgrounds recruited from Prolific confirmed its accuracy to be over 85%. We found that specific applications of mobile computing hold significant potential for improving well-being, safety, and social equality. However, these promising uses are linked to risks involving sensitive data, vulnerable groups, and automated decision-making. To avoid rejecting these risky yet impactful mobile and wearable uses, we propose a risk assessment checklist for the Mobile HCI community.
This paper investigates the relationship between menu design and hand position, focusing on usability, user preference, and potential adaptations to different hand positions. Sixteen (N=16) participants first took part in a co-design workshop in which they proposed menu designs for different hand grips. Based on the design proposals, a selection of menu designs was derived and implemented in a mobile app prototype, on which we conducted a menu selection study to investigate the performance and perceived usability of the menus in one-handed and two-handed interaction. The results include user ratings and performance measures, which highlight the need for mobile menus to adapt to different hand positions. Based on these results, we derive design recommendations for more adaptive, user-centric, and ergonomic mobile menu designs that match users' natural interactions.
Two-factor authentication (also known as 2FA or two-step verification) is an authentication method that provides an extra layer of protection for online account security. 2FA methods are used alongside primary authentication methods such as PINs and passwords to verify that the person trying to access a digital account is who they claim to be. However, 2FA methods can be inaccessible to blind and low vision (BLV) users because they require multiple steps, apps, and/or devices for authentication. In addition, they can pose a security risk, as screen readers may read verification codes aloud to bystanders. To address this, we present Haptic2FA, a haptic-based authentication method that improves 2FA accessibility for BLV users. As part of the 2FA process, users are sent a 'haptic pattern' (similar to a one-time passcode in traditional 2FA methods) that they are required to enter or select for verification. Through a usability study with 10 BLV participants, we evaluated the haptic patterns and the input methods for entering them. Based on the findings, we discuss the accessibility and usability of Haptic2FA.
Distractions caused by digital devices are increasingly creating dangerous situations on the road, particularly for more vulnerable road users such as cyclists. While researchers have been exploring ways to enable richer interaction scenarios on the bike, safety concerns are frequently neglected and compromised. In this work, we propose Head 'n Shoulder, a gesture-driven approach to bike interaction that does not affect bike control, based on a wearable garment that allows hands- and eyes-free interaction with digital devices through integrated capacitive sensors. It achieves an average accuracy of 97% in the final iteration, evaluated with 14 participants. Head 'n Shoulder does not rely on direct pressure sensing, allowing users to wear their everyday garments on top or underneath without affecting recognition accuracy. Our work introduces a promising research direction: easily deployable smart garments with a minimal set of gestures suited to most bike interaction scenarios, sustaining the rider's comfort and safety.
Commercial pregnancy apps are becoming popular in mobile health and integral to individuals' health management ecosystems. As such, they can complement medical advice and be conveniently used for ubiquitous tracking of pregnancy. Beyond their functional and medical purpose, they may elicit subjective, personal, and intimate experiences that are equally relevant to users. Yet, these qualitative aspects of experiencing pregnancy apps remain under-researched. An inquiry into these qualitative aspects may help advance the design of pregnancy apps for improved user embodiment, engagement, and experience. Here, we qualitatively inquire into experiences with six popular pregnancy apps through 4,000+ online reviews. Our findings reveal that pregnancy apps are more than mere trackers and can impact pregnancy experiences, either positively or negatively, depending on their design features. Further, reviews pointed to a neglect of family, friends, and relatives in the apps' design, which users often found problematic. To counter these shortcomings, we outline avenues for improving the design of pregnancy apps beyond usability and medical outcomes and call for enhancing their user experience through more sensitive, user-centered, and inclusive design.
Research indicates that smartphone users often speculate about notifications upon sensing their arrival, aiding their decision to attend to them. This speculation, however, relies on the presence of sufficient clues to associate with the notification, which are not always available. To address this challenge, through an experience sampling study, we investigated the effectiveness of delivering user-assigned alerts in influencing users' speculation accuracy, attendance effectiveness, and perceived disturbance. Our findings suggest that while user-assigned alerts enhanced the accuracy of speculation and improved participants' decisions to attend to notifications, the increased notification awareness sometimes led participants to view their decision to ignore notifications as less favorable. Moreover, we found that sporadic alert delivery disrupted the association between the alert and the notification, leading to no reduction in perceived disturbance nor improvement in speculation accuracy. In assigning alerts to notifications, participants considered five strategies: familiarity, distinctiveness, disturbance, emotional resonance, and dimension representation.
Body-Focused Repetitive Behaviors (BFRBs), such as nail biting, impact a wide demographic, and can negatively affect physical, psychological, and social well-being. Although pharmacological and behavioral therapies are common treatments, many avoid seeking help, and not everyone responds fully to treatment. Recent advances in wearable sensing enable new digital solutions that can detect BFRB episodes and intervene to mitigate them. While BFRBs have been extensively studied in medical research, translating this knowledge into effective digital intervention solutions may not be straightforward, and the end user's perspective may be overlooked. We report a user study with 12 frequent nail biters, who shared their experiences about nail biting and expectations of intervention solutions in semi-structured qualitative interviews. We describe the progression of a nail biting episode from a nail biter's perspective and present a taxonomy of intervention strategies to mitigate nail biting. Our results inform the design of future digital BFRB intervention solutions.
In this exploratory experience sampling method (ESM) research, we examined the perceptions of 74 smartphone users regarding the opportuneness of moments for proceeding through a four-stage notification-response process: the phone generating an alert (Alert), the user roughly glancing at the notification (Glance), engaging with it (Engage), and acting on it (Act). We investigated how the moments perceived as opportune for each of the four stages related to users' self-reported values of 20 contextual factors, and how these factors influenced users' perceived opportuneness of the moments for each stage. Our results reveal that Alert and Glance stages were perceived as more distinct, with Alert being influenced by social-environmental related factors and Glance characterized by a lower threshold for what constitutes an opportune moment. The final two stages - Engage and Act - were the most similar to each other. The findings also indicated how the influence of contextual factors on perceived opportuneness of the moments varied across factors, notification types, stages, and how such variation was manifested in the likelihood, valence, and magnitude of their overall influence.
The remarkable growth of Virtual Reality (VR) in recent years has extended its applications beyond entertainment to sectors including education, e-commerce, and remote communication. Since VR devices contain users' private information, user authentication is becoming increasingly important. Current authentication systems in VR, such as password-based or static biometric-based methods, are either cumbersome to use or vulnerable to attacks such as shoulder surfing. To address these limitations, we propose Medusa3D, a challenge-response authentication system for VR based on reflexive eye responses. Unlike existing credentials, reflexive eye responses are involuntary and effortless, offering a secure and user-friendly basis for authentication. We implement Medusa3D on an off-the-shelf VR headset and conduct evaluations with 25 participants. The results show that Medusa3D achieves a 0.21% FAR and a 0.13% FRR, demonstrating high security under various ocular conditions and resilience against zero-effort, replay, and mimicry attacks. A user study indicates that Medusa3D is user-friendly and well received among participants.
Research on the activity of tasting, examined through the lens of conversation analysis, reveals that in face-to-face contexts, tasting is an interactive and sequential process that combines individual sensory experience with a public, witnessable, accountable, and intersubjective dimension. This perspective can be extended to the realm of live-streamed tasting, where streamers demonstrate and communicate the taste of food products to online audiences in real time. By adopting multimodal conversation analysis to scrutinize the unfolding sequence moment by moment, my study aims to demonstrate: 1) three practices for achieving the configuration of "tasting heads," wherein the current taster's face is displayed on-screen; 2) patterns of gaze withdrawal from the screen and subsequent return of gaze to the screen during the tasting process; 3) the performative and animated facial expressions produced while tasting; and 4) the structure of responses from both streamers and viewers after the tasting.
Today, users are constrained by binary choices when configuring permissions. These binary choices contrast with the complexity of the data collected, limiting user control and transparency. For instance, weather applications do not need exact user locations to answer queries about local weather conditions. We envision sliders that empower users to fine-tune permissions. First, we ran two online surveys (N=123 & N=109) and a workshop (N=5) to develop the initial design of Privacy Slider. After the implementation phase, we evaluated our functional prototype in a lab study (N=32). The results show that our slider design for permission control outperforms today's binary system on all measures, including control and transparency.
Ovarian cancer presents significant well-being challenges for middle-aged and older women. Recent research underscores the vital role of recovery identity in predicting well-being. However, a research gap exists regarding the influence of online support platforms (OSPs) on identity synthesis for middle-aged and older cancer patients. This study introduces "Rehab-Diary," a mobile age-friendly OSP grounded in the Social Identity Model of Identity Change, aimed at helping ovarian cancer patients foster recovery identity. A four-week randomized controlled trial involving 68 participants assessed the OSP's impact. The interface was tailored for ease of use by older individuals. The findings demonstrate the feasibility of using Rehab-Diary among older individuals, and the intervention effectively enhanced recovery identity. This study offers evidence-based insights for developing future age-friendly online support interventions, ultimately enhancing ovarian cancer patients' quality of care.
Cycling navigation is a complex and stressful task, as the cyclist needs to focus simultaneously on the navigation, the road, and other road users. We propose directional electrotactile feedback at the wrist to reduce the auditory and visual load during navigation-aided cycling. We designed a custom electrotactile grid with 9 electrodes that clips under a smartwatch. In a preliminary study, we identified suitable calibration settings and gained initial insights into a suitable electrode layout. In a subsequent laboratory study, we showed that a direction can be encoded with a mean error of 19.28° (σ = 42.77°) by combining 2 adjacent electrodes. Additionally, by interpolating with 3 electrodes, a direction can be conveyed with a similar mean error of 22.54° (σ = 43.57°). We evaluated our concept of directional electrotactile feedback for cyclists in an outdoor study, in which 98.8% of all junctions were taken correctly by eight study participants. Only one participant deviated substantially from the optimal path, but was successfully navigated back to the original route by our system.
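To make the interpolation idea concrete, a target direction can be mapped to intensities on neighbouring electrodes. The sketch below is a minimal illustration only, assuming a hypothetical layout of 8 electrodes evenly spaced on a circle (the paper's grid has 9 electrodes, and its actual arrangement and encoding are not described in the abstract); the function name and layout are ours, not the authors'.

```python
# Hypothetical layout: 8 electrodes evenly spaced on a circle, 45° apart,
# indexed 0..7 starting at 0°. (The paper's 9th electrode, e.g. a centre
# contact, is not modelled in this sketch.)
N_ELECTRODES = 8
STEP = 360.0 / N_ELECTRODES


def encode_direction(angle_deg):
    """Return per-electrode intensities (0..1) encoding a direction by
    linearly interpolating between the two adjacent electrodes."""
    a = angle_deg % 360.0
    lower = int(a // STEP) % N_ELECTRODES        # electrode at or below angle
    upper = (lower + 1) % N_ELECTRODES           # next electrode clockwise
    frac = (a - lower * STEP) / STEP             # position between the two
    intensities = [0.0] * N_ELECTRODES
    intensities[lower] = 1.0 - frac
    intensities[upper] = frac
    return intensities
```

For example, `encode_direction(67.5)` activates two neighbouring electrodes at half intensity each, producing an in-between directional cue of the kind the paper's 2-electrode condition exploits.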
We present Snap&Nav, a navigation system for blind people in unfamiliar buildings that requires no prebuilt digital maps. Instead, the system uses the floor map as its primary information source for route guidance. A sighted assistant captures an image of the floor map, which is analyzed to create a node map containing intersections, destinations, and the current position on the floor. The system provides turn-by-turn navigation instructions while tracking the user's position on the node map by detecting intersections. Additionally, the system estimates the scale of the node map to provide distance information. We validated Snap&Nav through two user studies with 20 sighted and 12 blind participants. Results showed that sighted participants could process floor map images without prior familiarity with the system, while blind participants navigated with increased confidence and lower cognitive load compared to using only a cane, and appreciated the system's potential for use in various buildings.
Hand microgestures are promising for mobile interaction with wearable devices. However, they will not be adopted if practitioners cannot communicate to users the microgestures associated with the commands of their applications. This requires unambiguous representations that simultaneously show the multiple microgestures available to control an application. Using a systematic approach, we evaluate how these representations should be designed and contrast 4 conditions depending on the microgestures (tap-swipe and tap-hold) and fingers (index and index-middle) considered. Based on the results, we design a simultaneous representation of microgestures for a given set of 14 application commands. We then evaluate the usability of the representation for novice users and the suitability of the representation for small screens compared with a baseline. Finally, we formulate 8 recommendations based on the results of all the experiments. In particular, redundant graphical and textual representations of microgestures should only be displayed for novice users.
Despite significant progress in the capabilities of mobile devices and applications, most apps remain oblivious to their users' abilities. To enable apps to respond to users' situated abilities, we created the Ability-Based Design Mobile Toolkit (ABD-MT). ABD-MT integrates with an app's user input and sensors to observe a user's touches, gestures, physical activities, and attention at runtime, to measure and model these abilities, and to adapt interfaces accordingly. Conceptually, ABD-MT enables developers to engage with a user's "ability profile," which is built up over time and inspectable through our API. As validation, we created example apps to demonstrate ABD-MT, enabling ability-aware functionality in 91.5% fewer lines of code compared to not using our toolkit. Further, in a study with 11 Android developers, we showed that ABD-MT is easy to learn and use, is welcomed for future use, and is applicable to a variety of end-user scenarios.
Rich user information gained through user tracking powers mobile smartphone applications. Apps thereby become aware of the user and their context, enabling intelligent and adaptive applications. However, such data poses severe privacy risks. Although users are only partially aware of these risks, awareness is increasing with the proliferation of privacy-enhancing technologies. How privacy literacy and rising privacy concerns affect app adoption is unclear; however, we hypothesize that they lead to a lower adoption rate of data-heavy smartphone apps, as non-usage is often the user's only option for self-protection. We conducted a survey (N=100) to investigate the relationship between privacy-relevant app and publisher characteristics and users' intention to install and use an app. We found that users are especially critical of content-rich data types and of apps with rights to perform actions on their behalf. On the other hand, the expectation of a productivity benefit from the app can increase the intention to adopt it. Our findings show which aspects designers of privacy-enhancing technologies should focus on to meet the demand for more user-centered privacy.
The increasing interest in thermal haptic feedback devices, particularly for virtual reality (VR) applications, highlights the need for more immersive user experiences. However, replicating precise thermal sensations on the fingers remains challenging due to the complexity of finger joints and movements. In this paper, we introduce ThermoGrasp, a novel thermal display designed to enhance VR experiences by providing realistic thermal feedback during precision object grasping. ThermoGrasp is a modular wearable device that delivers controlled thermal feedback to the distal phalanges. We assessed the implications for designing its VR applications through two experimental studies. The first study focused on the device's ability to accurately convey thermal sensations across different fingers during various precision grasps. The second study investigated the overall haptic experience in VR, examining the impact of thermal feedback on user immersion and realism during interactions with objects of varying temperatures. Participants' subjective responses were analyzed based on factors such as autotelicity, expressiveness, immersion, realism, and harmony. The findings indicate that precise, localized thermal feedback significantly enhances the VR experience, offering a marked improvement over traditional haptic feedback methods.
Neurofeedback refers to the process of feeding a sensory representation of brain activity back to users in real time to improve a particular brain function, e.g., their focus and/or attention on a particular task. This study addressed the notable lack of research on methods for visualizing EEG data and their effects on the immersive quality of VR. We developed an algorithm to quantify focus, yielding a focus score. A pre-study with twenty participants confirmed its effectiveness in distinguishing between focused and relaxed mental states. Subsequently, we used this focus score to prototype a VR experience system that visualizes the score in several preconfigured ways, which we employed in an exploratory study to assess the impact of different neurofeedback visualization methods on user engagement and focus in VR. Among all the visualization methods evaluated, the environmental scheme stood out due to its superior usability during task execution, its ability to evoke positive emotions through the visualization of objects or scenes, and its minimal deviation from user expectations. Additionally, we derived design guidelines from the collected results to help future research refine the visualization scheme, ensuring effective integration of the focus score within the VR environment. These enhancements are crucial for designing neurofeedback visualization schemes that aim to boost participant focus in VR settings, offering significant insights into the optimization of such technologies.
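The abstract does not specify the focus-scoring algorithm itself. As a hedged sketch of one common approach from the EEG literature (not the authors' method), the ratio of beta-band power to combined alpha and theta power is often used as a focus index; the function below illustrates only that heuristic, with band powers assumed to be computed elsewhere.

```python
def focus_score(theta_power, alpha_power, beta_power):
    """Map EEG band powers to a focus score in (0, 1).

    Heuristic sketch only: higher beta power relative to alpha+theta is
    commonly read as a more focused state, higher alpha/theta as a more
    relaxed one. The small epsilon guards against division by zero.
    """
    ratio = beta_power / (alpha_power + theta_power + 1e-9)
    return ratio / (1.0 + ratio)  # squash the unbounded ratio into (0, 1)
```

A score near 0.5 then means beta power roughly equals alpha+theta power, which makes the value easy to map onto a visualization, e.g. the brightness of an environmental element in the VR scene.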
While Mixed Reality allows the seamless blending of digital content into users' surroundings, it is unclear whether its fusion with physical information impacts users' perceptual and cognitive resources differently. While the fusion of digital and physical objects provides numerous opportunities to present additional information, it also introduces undesirable side effects, such as split attention and increased visual complexity. We conducted a visual search study in three manifestations of mixed reality (Augmented Reality, Augmented Virtuality, Virtual Reality) to understand the effects of the environment on visual search behavior. Our multimodal evaluation measured Fixation-Related Potentials (FRPs) alongside eye tracking to assess search efficiency, attention allocation, and behavioral measures. Our findings indicate distinct patterns in FRPs and eye-tracking data that reflect varying cognitive demands across environments. Specifically, AR environments were associated with increased workload, as indicated by decreased P3 amplitudes in FRPs and more scattered eye movement patterns, impairing users' ability to identify target information efficiently. Participants reported AR as the most demanding and distracting environment. These insights inform design implications for MR adaptive systems, emphasizing the need for interfaces that dynamically respond to user cognitive load based on physiological inputs.
As the number of applications installed on smartphones continues to grow, the task of effectively managing location privacy has become increasingly complex. In this paper, we explore the factors that influence users' privacy-preserving intentions and contrast them with their actual behaviours. In addition, we compare location privacy concerns across different apps, investigating the impact of app-specific features on the willingness to disclose location information. Our findings highlight significant challenges in privacy management due to privacy fatigue and perceived usability. Furthermore, participants emphasized the importance of more uniform standards for location privacy settings across applications, calling for more detailed and interactive informed-consent processes that highlight the risks instead of the benefits of disclosing location information. This research contributes important insights towards the development of more effective privacy settings that can foster increased user engagement in managing location privacy on smartphones.
Femtech, a growing sector in mobile healthcare technology, caters to women's needs across various life stages with digital solutions like period tracking and pregnancy management apps. Maintaining robust data privacy is crucial due to the sensitive nature of the information involved, such as menstrual cycles and pregnancy status. Our research analyzes user feedback from platforms like the Apple App Store and Google Play Store to understand perceptions of femtech apps, covering accessibility, interface, features, and privacy concerns. Understanding user perspectives helps developers enhance usability and trust, driving further adoption. Prioritizing privacy also fosters industry advancement. This paper stresses the importance of dialogue among developers, users, and policymakers in femtech. Our findings aim to facilitate positive change within the femtech sector, leading to more inclusive, user-centric, and ethically driven advancements, benefiting both the industry and its users.
Rapid Serial Visual Presentation (RSVP) improves reading speed, optimizing the user's information processing capabilities on Virtual Reality (VR) devices. Yet, the user's RSVP reading performance changes over time while the reading speed remains static. In this paper, we evaluate pupil dilation as a physiological metric to assess readers' mental workload in real time. We assessed mental workload under different background lighting conditions and RSVP presentation speeds to identify the background color that best discriminates pupil diameter across RSVP presentation speeds. We found that a gray background provides the best contrast for reading at various presentation speeds. We then conducted a second study to evaluate the classification accuracy of mental workload at different presentation speeds. We find that pupil dilation relates to mental workload when reading with RSVP. We discuss how pupil dilation can be used to adapt the RSVP speed in future VR applications to optimize information intake.
Previous studies on the Finger-Fitts law (FFitts law) lack sufficient experiments to verify its inherent potential. Since the FFitts law is originally a modified version of the effective width method, which normalizes speed-accuracy biases, its model fit should improve when multiple biases are mixed together, and its throughputs should be more stable than those computed with the nominal target width. In this study, we conducted an experiment in which participants tapped 1D bar and 2D circular targets under three subjective biases: balancing speed and accuracy, emphasizing speed, and emphasizing accuracy. The results showed that applying the effective width to Ko et al.'s refined FFitts law, which represents touch ambiguity with a free parameter, was the most successful in normalizing biases. Reanalyzing another dataset on ray-casting pointing led to the same conclusion. We thus recommend using Ko et al.'s model with effective width when researchers compare experimental conditions such as devices and user groups.
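For context, the effective width method replaces the nominal target width $W$ with a width derived from the observed endpoint spread, so that the index of difficulty and throughput become comparable across speed-accuracy biases. A brief sketch from general knowledge of the method, not from this paper ($A$ is the target amplitude, $\sigma$ the standard deviation of tap endpoints, $MT$ the movement time; the exact form of Ko et al.'s refined model, with its free parameter for touch ambiguity, is not given in the abstract):

```latex
W_e = \sqrt{2\pi e}\,\sigma \approx 4.133\,\sigma, \qquad
ID_e = \log_2\!\left(\frac{A}{W_e} + 1\right), \qquad
TP = \frac{ID_e}{MT}
```

FFitts-style models additionally discount an absolute finger-precision variance $\sigma_a^2$ from the observed spread, using $\sqrt{\sigma^2 - \sigma_a^2}$ in place of $\sigma$, so that the touch ambiguity of the fat finger does not inflate the effective width.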
It is challenging for older drivers to transition to manual control after a Take-Over Request (TOR) has been issued by a Level 3 car. This study investigated if the presentation source of the TOR affects driver performance when resuming control. We measured take-over performance, hazard perception, and user acceptance when the TOR was presented on (1) a smartphone displaying a Non-Driving Related Task (NDRT) simultaneously with the In-Vehicle Information System (IVIS), or (2) presented on the IVIS only. Two NDRTs that varied in cognitive demand were tested with older drivers aged 60-69 and 70+. For the lower cognitive demand NDRT, presenting the TOR on the smartphone+IVIS improved takeover performance, hazard perception, and user acceptance, with greater benefits observed in the 70+ group. For the cognitively demanding NDRT, the smartphone+IVIS presentation did not benefit either group of drivers. TOR designers can apply these findings to enhance TORs and assist older drivers in managing control transitions considering the NDRT cognitive demand.
Chinese families usually place a strong emphasis on maintaining a sense of family togetherness even after the children have grown up. However, this can pose challenges for remote communication. For instance, overly frequent remote communication may disturb each other's lives, and because of Chinese people's reserved style of expression, pervasive communication tools, such as video or voice calls, cannot convey each other's emotions conveniently. To enhance remote communication between Chinese parents and their adult children, we employed a three-stage design process to identify the target users' needs in remote communication and presented WhisperCup. Our study indicated that WhisperCup could unobtrusively increase daily communication between Chinese parents and adult children. The awareness system embedded in the WhisperCup prototype could provide a glimpse of each other's daily life, helping family members virtually engage in each other's lives. Additionally, we found that WhisperCup could facilitate a better understanding of each other's lives while addressing privacy concerns.