Interactive Tutorials

Invited Interactive Tutorial

Mobile-based Tangible Interaction Design for Shared Displays
Date: Tuesday, September 23
Location: Studio E
Time: 9:00-12:00
Organizers: Ali Mazalek, Ryerson University, Toronto, Ontario, Canada
Ahmed Sabbir Arif, Ryerson University, Toronto, Ontario, Canada
Prerequisite/requirements: This tutorial is intended for mobile HCI enthusiasts interested in exploring mobile-based tangible interactions. Participants must have experience developing with the Android SDK, HTML5, and JavaScript. In addition, a basic understanding of gesture recognition and touch-based interaction approaches is useful. Participants are expected to bring their own laptops for development. Full details can be found on the dedicated tutorial webpage.

Abstract

Multi-touch has become the dominant interaction technique on shared displays, such as interactive tabletop surfaces. Alternative techniques include in-air gestures, interactive pens, and conventional pointing devices such as a mouse. A theoretically appealing but less explored approach is tangible interaction. Tangibles are physical objects that can act as both control and representation for the underlying system, allowing users to create, access, and manipulate digital information. Tangibles can offer a comparatively richer interaction experience by providing additional sensory information, such as pressure and friction, and by extending the interaction and display space, for instance through off-screen content control or feedback. One limitation, which often discourages researchers and designers from using such techniques, is the need for additional hardware or devices (as tangibles). Since touchscreen smartphones are gradually becoming ubiquitous, the possibility of using such devices as tangibles may encourage researchers to explore the matter further. In this hands-on tutorial, we will discuss and explore how touchscreen-based smartphones can be used as tangibles to interact with shared displays, and participants will be guided through the process of designing and prototyping their own mobile-based tangible interactions on an interactive tabletop surface.

Accepted Interactive Tutorials

Mobile Health – Beyond Consumer Apps
Date: Tuesday, September 23
Location: Studio B
Time: 9:00-12:00
Organizers: Jill Freyne, CSIRO, Sydney, Australia

Prerequisite/requirements: None. Participants with an interest in the topic are welcome, regardless of their domain or background.
Abstract

The explosion in the number of applications (apps) designed for the medical and wellness sectors has been widely noted. Recently, in addition to consumer health and wellbeing apps, we have seen an increased presence of truly medical apps, designed for clinical professionals and patients with medical conditions.
Consumer-based mHealth apps typically allow people to do old things in new ways, such as recording health measures digitally rather than on paper. We see this also with medical apps, where increases in the quality and efficiency of existing health care models provide clinical staff with digital tools that replace paper-based documentation. In rare and exciting cases, we are also seeing mHealth applications that do things in entirely new ways, driving real innovation in health care delivery through mobile devices.
The aim of the tutorial is to highlight real-world, high-impact mobile research that is relevant to the key discipline of Mobile HCI; thus, the tutorial will be application-focused rather than academic. It will highlight the wide range of available mHealth applications that go far beyond trackers and behavior change tools, and encourage researchers to look beyond consumer applications in their research. Four key areas of mHealth applications will be covered: Apps for the Healthy Well, mHealth in Hospitals, Practice and Clinical Apps, and Patient Apps. These will span applications for health assessment, treatment and triage, behavior change, chronic illness, mental health, adolescent health, rehabilitation, and aged care, with a focus on the need for rigorous evaluation and efficacy analysis.
The interactive component of the tutorial will focus on innovation in mobile apps for health services. Groups will be given case studies from real clinicians and hospitals, gathered by CSIRO, and will be required to design and pitch apps, together with evaluation studies to validate their ideas.

Wearable Computing: A Human-centered View of Key Concepts, Application Domains, and Quality Factors
Date: Tuesday, September 23
Location: Studio E
Time: 14:00-17:00
Organizers: Vivian Genaro Motti, Spencer Kohn and Kelly Caine, Clemson University
Prerequisite/requirements: None. Participants with an interest in the topic are welcome, regardless of their domain or background.

Abstract

The solutions provided by wearable computing have already proven beneficial in various application domains, ranging from entertainment to safety-critical systems. By integrating computational capabilities into clothing and accessories, wearable devices offer great potential to support many human activities, including monitoring patients' vital signs, augmenting human capabilities, replacing or improving sensory organs, tracking daily activities, and even signaling medical emergencies. Although wearable computing has already proven successful and promising in a variety of scenarios, its problem space is broad and its design solutions remain largely unexplored. Relevant information is scattered across sources, making it difficult and time-consuming for interested parties to find unified support that guides them towards the best design decisions. This tutorial provides a comprehensive view of the state of the art of wearable computing from a human-centered perspective. We present background information (key concepts and theoretical definitions), illustrate application scenarios, form factors, and their use cases, and conclude by presenting the advantages and disadvantages of existing approaches, as well as principles, guidelines, and quality factors relevant to improving the design process. During the tutorial, interactive activities will enable participants to reflect on the contents presented (through brainstorming sessions) and to apply them in practical case studies (through focus group sessions).

Speech-based Interaction: Myths, Challenges, and Opportunities
Date: Tuesday, September 23
Location: Studio B
Time: 14:00-17:00
Organizers: Cosmin Munteanu, National Research Council Canada & Gerald Penn, University of Toronto
Prerequisite/requirements: Participants are asked to bring their mobile phones with them. Participants with an interest in the topic are welcome, regardless of their domain or background.

Abstract

HCI research has long been dedicated to facilitating better and more natural information transfer between humans and machines. Unfortunately, humans’ most natural form of communication, speech, is also one of the most difficult modalities for machines to understand – despite, and perhaps because, it is the highest-bandwidth communication channel we possess. While significant research efforts, from engineering to linguistics and the cognitive sciences, have been spent on improving machines’ ability to understand speech, the MobileHCI community has been relatively timid in embracing this modality as a central focus of research. This can be attributed in part to the relatively discouraging levels of accuracy in understanding speech, in contrast with often-unfounded claims of success from industry, but also to the intrinsic difficulty of designing, and especially evaluating, speech and natural language interfaces.
The goal of this course is to inform the MobileHCI community of the current state of speech and natural language research, to dispel some of the myths surrounding speech-based interaction, and to provide an opportunity for researchers and practitioners to learn how speech recognition and speech synthesis work, what their limitations are, and how they could be used to enhance current interaction paradigms. This highly interactive tutorial will blend the introduction of theoretical concepts with the illustration of design challenges through audio and video examples, as well as two hands-on activities (there are no technical prerequisites for these, although bringing an iPhone/iPad/Android device is recommended). Through this, we hope that MobileHCI researchers and practitioners will learn how to combine recent advances in speech processing with user-centred principles to design more usable and useful speech-based interactive systems.
The tutorial will be beneficial to all MobileHCI attendees without strong expertise in ASR or TTS who still believe in HCI’s goal of developing methods and systems that allow humans to interact naturally with increasingly ubiquitous mobile technology, but who are disappointed with the lack of success in using speech and natural language to achieve this goal.