- February 28, 2014: Interactive Tutorial Proposals Deadline
- March 28, 2014: Proposal Notification
- September 23, 2014: Interactive Tutorials Date
MobileHCI 2014 continues the tradition of previous conferences with a high-quality tutorial program, but this year we’ve introduced a slight twist!
For the 2014 edition of Mobile HCI we want to focus on interactive, engaging tutorials where the scope of the tutorial goes beyond just theory so that attendees also have the opportunity to put what they learn into practice.
To support this interactive component, we invite proposals for 3-hour tutorials on emerging and established areas of research and practice. As in past years, tutorials will be held on the first day of the conference, prior to the main conference program, and are expected to provide participants with new insights and skills relevant to the area in question.
An interactive MobileHCI tutorial is an in-depth presentation of one or more state-of-the-art topics presented by researchers or practitioners within the field of Mobile HCI. The scope for tutorials is broad and includes topics such as new technologies, research approaches and methodologies, design practices, user/consumer insights, investigations into new services/applications/interfaces, and much more.
An interactive tutorial should cover its topic in detail and include references to the “must read” papers or materials within its domain. However, for the 2014 edition we want every tutorial to include a participatory component in which the tutorial participants actively engage in practical exercises, interactive group work, or hands-on work where the outcome is a working prototype.
The expected audience will vary in terms of prior knowledge, but will largely consist of researchers, Ph.D. students, practitioners, and educators. Note that, given the 3-hour timeframe for tutorials, and in order to avoid turning away great interactive tutorials, we may run a couple of tutorials in parallel on the tutorial day.
We encourage you to review the scope and nature of tutorial programs from previous Mobile HCI conferences to inform your proposal, but please remember that the 2014 edition is all about engagement and interactivity.
- We may invite a small number of interactive tutorials from experts that we think will be particularly interesting to attendees. In order to avoid overlaps with those tutorials we suggest reviewing the 2014 Interactive Tutorials page (which we will update to reflect invited tutorials) before submitting.
- Interactive tutorials should last 3 hours, divided into a theory component and an interactive/hands-on component in which attendees collaborate and immediately put what they have learned into practice.
- Submissions are encouraged from multiple presenters with different backgrounds or from different research institutes/organizations.
- Your proposal needs to include:
- A brief biography of the presenter(s), the title of the tutorial, and a detailed description of the tutorial that conveys what you expect attendees to have learned by the end of it.
- An overview of the intended tutorial topics and the depth to which you will cover them.
- The activities/group work that attendees will engage in, and details regarding how these activities will be structured and delivered.
- Details regarding any prerequisites for attendees. For example, if attendees are required to have prior knowledge or experience of a particular programming language, please specify that. Likewise, if attendees need to bring a laptop or a mobile phone for the hands-on component, please specify that too.
- Details regarding any technical requirements for the tutorial in general.
- Send a PDF version of your tutorial proposal directly to the Interactive Tutorial Chairs at firstname.lastname@example.org
- The Interactive Tutorials Chairs will evaluate all proposals and communicate acceptance decisions to the proposers.
Mobile-based Tangible Interaction Design for Shared Displays
Ali Mazalek, Ryerson University, Toronto, Ontario, Canada
Ahmed Sabbir Arif, Ryerson University, Toronto, Ontario, Canada
Multi-touch has become the dominant interaction technique on shared displays, such as interactive tabletop surfaces. Alternative techniques include in-air gestures, interactive pens, and conventional pointing devices such as a mouse. A theoretically appealing but less explored approach is tangible interaction. Tangibles are physical objects that can act as both control and representation for the underlying system, allowing users to create, access, and manipulate digital information. Tangibles can offer a comparatively richer interaction experience by providing additional sensory information, such as pressure and friction, and by extending the interaction and display space, for instance through off-screen content control or feedback. One limitation, which often discourages researchers and designers from using such techniques, is the need for additional hardware or devices (as tangibles). Since touchscreen smartphones are gradually becoming ubiquitous, the possibility of using such devices as tangibles may encourage researchers to explore the matter further. In this hands-on tutorial, we will discuss and explore how touchscreen-based smartphones can be used as tangibles to interact with shared displays, and participants will be guided through the process of designing and prototyping their own mobile-based tangible interactions on an interactive tabletop surface.
Mobile Health – Beyond Consumer Apps
Organizer: Jill Freyne, CSIRO, Sydney, Australia
The explosion in the number of applications (apps) designed for the medical and wellness sectors has been noted by many. Recently we have seen increased presence of truly medical apps, in addition to consumer health and wellbeing apps, designed for clinical professionals and patients with medical conditions.
Consumer-based mHealth apps typically allow people to do old things in new ways, such as recording health measures digitally rather than on paper. We see this also with medical apps, where increases in the quality and efficiency of existing health care models provide clinical staff with digital tools that replace paper-based documentation. In rare and exciting cases we are also seeing mHealth applications that do things in entirely new ways, driving real innovation in health care delivery through mobile devices.
The aim of the tutorial is to highlight real-world, high-impact mobile research that is relevant to the key discipline of Mobile HCI. Thus, the tutorial will be application-focused rather than academically focused. It will highlight the wide range of mHealth applications that go far beyond trackers and behavior-change tools, and encourage researchers to look beyond consumer applications in their research. Four key areas of mHealth applications will be covered: apps for the healthy and well, mHealth in hospitals, practice and clinical apps, and patient apps. These span applications for health assessment, treatment and triage, behavior change, chronic illness, mental health, adolescent health, rehabilitation, and aged care, with a focus on the need for rigorous evaluation and efficacy analysis.
The interactive component of the tutorial will focus on innovation in mobile apps for health services. Groups will be given case studies from real clinicians and hospitals, gathered at CSIRO, and will be required to design and pitch apps and evaluation studies to validate their ideas.
Wearable Computing: A Human-centered View of Key Concepts, Application Domains, and Quality Factors
Organizers: Vivian Genaro Motti, Spencer Kohn and Kelly Caine, Clemson University
The solutions provided by wearable computing have already proven beneficial for various application domains, ranging from entertainment to safety-critical systems. By integrating computational capabilities into clothing and accessories, wearable devices offer great potential to support many human activities, including monitoring patients’ vital signs, augmenting human capabilities, replacing or improving sensory organs, tracking daily activities, and even signaling medical emergencies. Although wearable computing has already proven successful and promising in a variety of scenarios, its problem space is broad and its design solutions are largely unexplored. Relevant information is scattered across sources, making it difficult and time-consuming for interested parties to find unified support that guides them towards the best design decisions. This tutorial provides a comprehensive view of the state of the art of wearable computing from a human-centered perspective. We present background information (key concepts and theoretical definitions), illustrate application scenarios, form factors, and their use cases, and conclude by presenting the advantages and disadvantages of existing approaches, as well as principles, guidelines, and quality factors that are relevant for improving the design process. During the tutorial, interactive activities will enable participants to reflect on the contents presented (through brainstorming sessions) and to apply them in practical case studies (through focus group sessions).
Speech-based Interaction: Myths, Challenges, and Opportunities
Organizers: Cosmin Munteanu, National Research Council Canada & Gerald Penn, University of Toronto
HCI research has long been dedicated to facilitating information transfer between humans and machines in better and more natural ways. Unfortunately, humans’ most natural form of communication, speech, is also one of the most difficult modalities for machines to understand, despite, and perhaps because, it is the highest-bandwidth communication channel we possess. While significant research efforts, from engineering to linguistics to the cognitive sciences, have been spent on improving machines’ ability to understand speech, the MobileHCI community has been relatively timid in embracing this modality as a central focus of research. This can be attributed in part to the relatively discouraging levels of accuracy in understanding speech, in contrast with often-unfounded claims of success from industry, but also to the intrinsic difficulty of designing, and especially evaluating, speech and natural language interfaces.
The goal of this course is to inform the MobileHCI community of the current state of speech and natural language research, to dispel some of the myths surrounding speech-based interaction, and to provide an opportunity for researchers and practitioners to learn more about how speech recognition and speech synthesis work, what their limitations are, and how they could be used to enhance current interaction paradigms. This highly interactive tutorial will blend the introduction of theoretical concepts with the illustration of design challenges through audio and video examples, as well as two hands-on activities (there are no technical prerequisites for these, although bringing an iPhone/iPad/Android device is recommended). Through this, we hope that Mobile HCI researchers and practitioners will learn how to combine recent advances in speech processing with user-centred principles in designing more usable and useful speech-based interactive systems.
The tutorial will be beneficial to all MobileHCI attendees without strong expertise in automatic speech recognition (ASR) or text-to-speech (TTS) synthesis who still believe in fulfilling HCI’s goal of developing methods and systems that allow humans to interact naturally with increasingly ubiquitous mobile technology, but who are disappointed with the lack of success in using speech and natural language to achieve this goal.
Interactive Tutorial Chairs
If you have questions about the interactive tutorials track at MobileHCI 2014, please contact the Interactive Tutorial Chairs:
Karen Church and Andrés Lucero at email@example.com