Tutorials

Tutorials will take place on Monday, September 3. To register for a tutorial, select it in the relevant section when completing your registration. Tutorial registration includes coffee breaks and lunch on the day of the tutorial.

Tutorial 1: Machine Learning For Intelligent Mobile User Interfaces using Keras
Tutorial 2: Augmenting Augmented Reality
Tutorial 3: Speech and Hands-free Interaction: Myths, Challenges, and Opportunities
Tutorial 4: Mobile Eye Tracking: Technical Principles
Tutorial 5: Embodied Thinking as a Driver for Mobile and Wearable Ideas

Machine Learning For Intelligent Mobile User Interfaces using Keras


Instructors: Huy Viet Le, Sven Mayer, Abdallah El Ali, Niels Henze
Description: In this tutorial, we teach attendees the three basic steps for running neural networks on a mobile phone: developing neural network architectures and training them with Keras, porting and running the trained model on mobile phones, and demonstrating how to perform human activity recognition using existing mobile device sensor datasets.
Learning Goals: The three basic steps for running neural networks on a mobile phone, illustrated with handwritten digit recognition and human activity recognition.
Requirements: Prior knowledge of Python and a basic understanding of machine learning are helpful.
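
As an informal taste of the first step (this is not official tutorial material), a minimal sketch of training a handwritten-digit classifier with Keras might look like the following; the network size, file name, and the mention of TensorFlow Lite for the porting step are illustrative assumptions.

  # Minimal sketch (not official tutorial code): train a small digit
  # classifier with Keras and save it for later conversion to a
  # mobile-friendly format (e.g., with a converter such as TensorFlow Lite).
  import keras
  from keras import layers
  from keras.datasets import mnist

  # Load the MNIST handwritten-digit dataset and scale pixels to [0, 1].
  (x_train, y_train), (x_test, y_test) = mnist.load_data()
  x_train, x_test = x_train / 255.0, x_test / 255.0

  # A small fully connected network is enough for a first experiment.
  model = keras.models.Sequential([
      layers.Flatten(input_shape=(28, 28)),
      layers.Dense(128, activation="relu"),
      layers.Dense(10, activation="softmax"),
  ])
  model.compile(optimizer="adam",
                loss="sparse_categorical_crossentropy",
                metrics=["accuracy"])
  model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))

  # Save the trained model; a converter can then turn this file into a
  # format that runs on the phone.
  model.save("mnist.h5")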

Augmenting Augmented Reality


Instructors: Uwe Gruenefeld, Tim Claudius Stratmann, Jonas Auda, Marion Koelle, Stefan Schneegass, Wilko Heuten
Description: Today’s Augmented Reality (AR) devices enable users to interact almost naturally with their surroundings, e.g., by pinning digital content onto real-world objects. However, current AR displays are mostly limited to optical and video see-through technologies. Extending AR beyond screens by accommodating additional modalities (e.g., smell or haptics) or additional visuals (e.g., peripheral light) has therefore recently become a trend in HCI. During this tutorial, we provide beginner-level, hands-on instructions for augmenting an Augmented Reality application with peripheral hardware that generates multi-sensory stimuli.
Learning Goals: The tutorial will comprise two parts. First, we will provide an overview of Augmented Reality (AR) and of AR applications beyond the visual. Second, we will guide a hands-on activity in which participants receive step-by-step instructions for interfacing a state-of-the-art head-mounted display (Microsoft HoloLens) with a peripheral hardware platform (NodeMCU).
Requirements: The tutorial targets beginners and does not require any prior knowledge of developing Augmented Reality applications or of hardware prototyping. It is designed to engage both researchers developing for AR without prior hardware-prototyping experience and researchers experienced in hardware prototyping but new to AR.
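
To give a feel for the kind of peripheral-hardware glue code involved (not part of the tutorial's step-by-step material), here is a minimal sketch that assumes the NodeMCU runs MicroPython and the HoloLens application sends plain-text UDP messages; the tutorial's actual firmware, protocol, port, and pin assignments may differ.

  # Minimal sketch, assuming the NodeMCU runs MicroPython and the AR
  # application on the HoloLens sends plain-text UDP messages such as
  # b"ON" / b"OFF"; protocol, port, and pin are illustrative choices.
  # (Connecting the board to Wi-Fi via the `network` module is omitted.)
  import socket
  from machine import Pin

  led = Pin(2, Pin.OUT)            # on-board LED on many NodeMCU boards

  sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  sock.bind(("0.0.0.0", 9000))     # listen for messages from the AR app

  while True:
      msg, addr = sock.recvfrom(64)
      # Drive a peripheral (here just an LED) based on what the AR app sends.
      if msg.strip() == b"ON":
          led.value(1)
      elif msg.strip() == b"OFF":
          led.value(0)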

Speech and Hands-free Interaction: Myths, Challenges, and Opportunities


Instructors: Cosmin Munteanu, Gerald Penn.
Description: The goal of this course is to inform the HCI community of the current state of speech and natural language research, to dispel some of the myths surrounding speech-based interaction, and to provide an opportunity for researchers and practitioners to learn more about how speech recognition and speech synthesis work, what their limitations are, and how they can be used to enhance current interaction paradigms. Through this, we hope that HCI researchers and practitioners will learn how to combine recent advances in speech processing with user-centred principles to design more usable and useful speech-based interactive systems.
Learning Goals:

  • How Automatic Speech Recognition (ASR) and Speech Synthesis (or Text-To-Speech, TTS) work, and why these are such computationally difficult problems
  • Where ASR and TTS are used in current commercial interactive applications
  • The usability issues surrounding speech-based interaction systems, particularly in mobile and pervasive computing
  • The challenges in enabling speech as a modality for mobile interaction
  • The current state of the art in ASR and TTS research
  • The differences between commercial ASR systems’ accuracy claims and the needs of mobile interactive applications
  • The difficulties in evaluating the quality of TTS systems, particularly from a usability and user perspective
  • The opportunities that exist for HCI researchers to enhance systems’ interactivity by enabling speech

Requirements: Open to all attendees; no prior technical experience is required.
Tutorial Website: http://www.speech-interaction.org/mobilehci2018course/
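
For attendees curious what invoking ASR from code can look like (an illustration, not course material), a small Python example using the open-source speech_recognition package follows; the audio file name is hypothetical, and the error paths hint at the usability limitations discussed in the course.

  # Small illustration (not course material): transcribe a short audio
  # clip with the open-source `speech_recognition` package, which wraps
  # several ASR engines. The file name is hypothetical.
  import speech_recognition as sr

  recognizer = sr.Recognizer()
  with sr.AudioFile("utterance.wav") as source:
      audio = recognizer.record(source)

  try:
      # Uses a free web API; offline engines (e.g., CMU Sphinx) are also supported.
      print(recognizer.recognize_google(audio))
  except sr.UnknownValueError:
      print("The recognizer could not make sense of the speech.")
  except sr.RequestError as err:
      print("ASR service unavailable:", err)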

Mobile Eye Tracking: Technical Principles


Instructors: Andrew T. Duchowski, Krzysztof Krejtz.
Description:
This tutorial presents a Python-based gaze analytics pipeline developed and used by the authors for the analysis of video-based eye-tracking data and stimuli, e.g., as obtained with a mobile eye tracker. The pipeline consists of extraction of raw gaze data, event detection via velocity-based filtering, collation for statistical evaluation, and analysis and visualization using R. Examples of technical challenges include gaze data and video synchronization, dynamic Areas Of Interest (AOIs), and marker registration. Demonstrations include a driving study, a pilot’s checklist during engine startup, a children’s walk through a museum, and a live jam session in which band members wore head-mounted eye trackers.
Learning Goals:

  • Discussing availability of eye-tracking options.
  • Stressing importance of experimental design.
  • Listing technical considerations.
  • Performing hands-on data extraction and processing.
  • Developing statistical analyses, including classical eye tracking metrics and modern methods.

Requirements: Prior knowledge of Python, R, OpenGL, and OpenCV is a plus.
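
As a rough illustration of the velocity-based event detection step described above (not the authors' actual pipeline), the following sketch labels gaze samples as fixation or saccade; it assumes positions are already expressed in degrees of visual angle, and the 30 deg/s threshold is an arbitrary example value.

  # Rough sketch of velocity-based event detection (I-VT style); not the
  # authors' pipeline. Assumes gaze positions are in degrees of visual
  # angle; the 30 deg/s threshold is an illustrative choice.
  import numpy as np

  def detect_fixations(t, x, y, velocity_threshold=30.0):
      """Label each gaze sample as fixation (True) or saccade (False)."""
      dt = np.diff(t)
      # Point-to-point velocity in degrees of visual angle per second.
      velocity = np.hypot(np.diff(x), np.diff(y)) / dt
      # Repeat the first value so the label array matches the sample count.
      velocity = np.concatenate(([velocity[0]], velocity))
      return velocity < velocity_threshold

  # Tiny synthetic example: five samples at 100 Hz with one large jump.
  t = np.array([0.00, 0.01, 0.02, 0.03, 0.04])
  x = np.array([1.00, 1.02, 5.00, 5.01, 5.02])
  y = np.array([1.00, 1.01, 1.00, 1.02, 1.01])
  print(detect_fixations(t, x, y))   # the jump is labelled as a saccade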

Embodied Thinking as a Driver for Mobile and Wearable Ideas


Instructors: Kristina Andersen, Annika Hupfeld, Oscar Tomico.

Description: This tutorial will consist of an introduction followed by a highly interactive making session on embodied wearable devices, in which we will build no-tech prototypes based on methods of ideation and movement that are well established in the adjacent fields of performance and design. The outcomes will take the form of conceptual ideas manifested in a selection of low-complexity materials, addressing complex technological concerns.
Learning Goals: Ideation, embodied design, wearable fashion.