Accepted Interactive Tutorials

Speech-based Interaction: Myths, Challenges, and Opportunities
Date: Tuesday, September 23
Location: Studio B
Time: 14:00-17:00
Organizers: Cosmin Munteanu, National Research Council Canada & Gerald Penn, University of Toronto
Prerequisites/requirements: Participants are asked to bring their mobile phones with them. Participants with an interest in the topic are welcome, regardless of their domain or background.

Abstract
HCI research has long been dedicated to facilitating better, more natural information transfer between humans and machines. Unfortunately, humans’ most natural form of communication, speech, is also one of the most difficult modalities for machines to understand – despite, and perhaps because, it is the highest-bandwidth communication channel we possess. While significant research efforts, from engineering to linguistics to cognitive science, have been devoted to improving machines’ ability to understand speech, the MobileHCI community has been relatively timid in embracing this modality as a central focus of research. This can be attributed in part to the still-discouraging accuracy of automatic speech recognition, which contrasts with often-unfounded claims of success from industry, but also to the intrinsic difficulty of designing, and especially evaluating, speech and natural language interfaces.
The goal of this course is to inform the MobileHCI community of the current state of speech and natural language research, to dispel some of the myths surrounding speech-based interaction, and to provide an opportunity for researchers and practitioners to learn more about how automatic speech recognition (ASR) and speech synthesis (TTS) work, what their limitations are, and how they can be used to enhance current interaction paradigms. This highly interactive tutorial will blend the introduction of theoretical concepts with illustrations of design challenges through audio and video examples, as well as two hands-on activities (there are no technical prerequisites for these, although bringing an iPhone/iPad/Android device is recommended). Through this, we hope that MobileHCI researchers and practitioners will learn how to combine recent advances in speech processing with user-centred design principles to build more usable and useful speech-based interactive systems.
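To give a flavour of the hands-on activities, the sketch below shows a minimal speech-based interaction loop in Python. The choice of the third-party SpeechRecognition and pyttsx3 packages is our illustrative assumption, not a tool prescribed by the tutorial; the error-handling branches hint at exactly the accuracy limitations the tutorial examines.

```python
# A minimal sketch of a speech-based interaction loop. Library choices
# (SpeechRecognition, pyttsx3) are illustrative, not tutorial-mandated.
import speech_recognition as sr
import pyttsx3

recognizer = sr.Recognizer()
tts = pyttsx3.init()  # offline text-to-speech engine

with sr.Microphone() as source:
    # Calibrate against ambient noise: a practical limitation, since
    # recognition accuracy degrades sharply in noisy mobile contexts.
    recognizer.adjust_for_ambient_noise(source)
    print("Say something...")
    audio = recognizer.listen(source)

try:
    # Cloud-backed ASR; failures here are why speech interfaces need
    # graceful fallback strategies rather than assumed accuracy.
    text = recognizer.recognize_google(audio)
    print(f"Recognized: {text}")
    tts.say(f"You said: {text}")
    tts.runAndWait()
except sr.UnknownValueError:
    print("Speech was not understood -- a common ASR failure mode.")
except sr.RequestError as err:
    print(f"Recognition service unavailable: {err}")
```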
The tutorial will benefit all MobileHCI attendees without strong expertise in ASR or TTS who still believe in HCI’s goal of developing methods and systems that allow humans to interact naturally with increasingly ubiquitous mobile technology, but who have been disappointed by the lack of success in using speech and natural language to achieve this goal.