Tutorials and the Master Class will take place on Monday, September 3; see the program for the schedule.
Tutorial 1: Machine Learning For Intelligent Mobile User Interfaces using Keras
Tutorial 2: Augmenting Augmented Reality
Tutorial 3: Speech and Hands-free Interaction: Myths, Challenges, and Opportunities
To register for a tutorial, check it in the relevant section when completing your registration. Tutorial registration includes coffee breaks and lunch on the day of the tutorial.
Master Class: Tracking and Using Gaze: Eye-Tracking, Monday, September 3, 13:30 – 15:00
To attend the free Master Class, participants must be registered for the conference and must add the Master Class to their registration by logging in. Registration for the Master Class does not include lunch or coffee breaks on the day of the Master Class.
—
Machine Learning For Intelligent Mobile User Interfaces using Keras
Instructors: Huy Viet Le, Sven Mayer, Abdallah El Ali, Niels Henze
Description: In this tutorial, we teach attendees three basic steps for running neural networks on a mobile phone: developing neural network architectures and training them with Keras, porting and running the trained models on mobile phones, and performing human activity recognition using existing mobile device sensor datasets.
Learning Goals: Three basic steps to run neural networks on a mobile phone, based on the examples of handwritten digit recognition and human activity recognition.
Requirements: Prior knowledge of Python and a basic understanding of machine learning are helpful.
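As a taste of the first two steps, here is a minimal sketch (not the tutorial’s own code) that defines and trains a small Keras digit classifier and then exports it for on-device inference. The TensorFlow Lite conversion at the end is one common path to a phone-deployable model and is an assumption on our part; the tutorial’s toolchain may differ.

```python
# Minimal sketch: train a small digit classifier with Keras, then convert it
# for on-device inference. Not the tutorial's exact code.
import numpy as np
import tensorflow as tf
from tensorflow import keras

# 1) Load the handwritten-digit data (MNIST) and normalize to [0, 1].
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train[..., np.newaxis].astype("float32") / 255.0
x_test = x_test[..., np.newaxis].astype("float32") / 255.0

# 2) Define and train a compact convolutional network.
model = keras.Sequential([
    keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))

# 3) Export for mobile use. TensorFlow Lite is one common option (assumption);
#    the tutorial may use a different conversion path.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("digits.tflite", "wb") as f:
    f.write(converter.convert())
```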
—
Augmenting Augmented Reality
Instructors: Uwe Gruenefeld, Tim Claudius Stratmann, Jonas Auda, Marion Koelle, Stefan Schneegass, Wilko Heuten
Description: Today’s Augmented Reality (AR) devices enable users to interact almost naturally with their surroundings, e.g., by pinning digital content onto real-world objects. However, current AR displays are mostly limited to optical and video see-through technologies. Extending AR beyond screens by accommodating additional modalities (e.g., smell or haptics) or additional visuals (e.g., peripheral light) has therefore recently become a trend in HCI. During this tutorial, we provide beginner-level, hands-on instructions for augmenting an AR application with peripheral hardware that generates multisensory stimuli.
Learning Goals: The tutorial comprises two parts. First, we will provide an overview of Augmented Reality (AR) and AR applications beyond the visual. Second, we will guide a hands-on activity in which participants receive step-by-step instructions for interfacing a state-of-the-art head-mounted display (Microsoft HoloLens) with a peripheral hardware platform (NodeMCU).
Requirements: The tutorial targets beginners and does not require any prior knowledge of developing Augmented Reality applications or of hardware prototyping. However, the tutorial is designed to engage both researchers developing for AR without prior hardware prototyping experience and researchers experienced in hardware prototyping but without AR experience.
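To illustrate the general pattern the hands-on part builds on, the following sketch triggers a peripheral stimulus on a NodeMCU that exposes a simple HTTP interface. This is not the tutorial’s material: the endpoint, host address, and the use of Python are assumptions for illustration; in the tutorial itself the request would typically originate from the HoloLens application.

```python
# Minimal sketch (hypothetical endpoint, not the tutorial's code): trigger a
# peripheral stimulus on a NodeMCU that exposes a simple HTTP interface.
import urllib.request

NODEMCU_HOST = "http://192.168.4.1"   # assumed address of the NodeMCU

def trigger_stimulus(channel: str, value: int) -> str:
    """Send e.g. /led?value=255 to switch a peripheral LED to full brightness."""
    url = f"{NODEMCU_HOST}/{channel}?value={value}"
    with urllib.request.urlopen(url, timeout=2) as response:
        return response.read().decode("utf-8")

if __name__ == "__main__":
    # Hypothetical usage: light a peripheral LED when a virtual object is selected.
    print(trigger_stimulus("led", 255))
```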
—
Speech and Hands-free Interaction: Myths, Challenges, and Opportunities
Instructors: Cosmin Munteanu, Gerald Penn.
Description: The goal of this course is to inform the HCI community of the current state of speech and natural language research, to dispel some of the myths surrounding speech-based interaction, and to provide an opportunity for researchers and practitioners to learn more about how speech recognition and speech synthesis work, what their limitations are, and how they can be used to enhance current interaction paradigms. Through this, we hope that HCI researchers and practitioners will learn how to combine recent advances in speech processing with user-centred principles to design more usable and useful speech-based interactive systems.
Learning Goals:
- How Automatic Speech Recognition (ASR) and Speech Synthesis (or Text-To-Speech, TTS) work, and why these are such computationally difficult problems
- Where ASR and TTS are used in current commercial interactive applications
- The usability issues surrounding speech-based interaction systems, particularly in mobile and pervasive computing
- The challenges in enabling speech as a modality for mobile interaction
- The current state of the art in ASR and TTS research
- The differences between commercial ASR systems’ accuracy claims and the needs of mobile interactive applications
- The difficulties in evaluating the quality of TTS systems, particularly from a usability and user perspective
- The opportunities for HCI researchers to enhance systems’ interactivity by enabling speech
Requirements: Open to all attendees; no prior technical experience is required.
Tutorial Website: http://www.speech-interaction.org/mobilehci2018course/
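For readers who would like a concrete starting point before the course, the following sketch wires off-the-shelf ASR and TTS into a toy hands-free loop. It is not course material: it assumes the third-party Python packages SpeechRecognition and pyttsx3 (plus a working microphone), and it uses a cloud recognizer, which already hints at the accuracy and latency trade-offs discussed in the course.

```python
# Minimal sketch (not course material): a toy hands-free loop combining
# off-the-shelf ASR and TTS. Assumes the SpeechRecognition and pyttsx3
# packages are installed and a microphone is available.
import speech_recognition as sr
import pyttsx3

recognizer = sr.Recognizer()
tts = pyttsx3.init()

with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate for background noise
    print("Say something...")
    audio = recognizer.listen(source)

try:
    # ASR: a cloud recognizer; accuracy and latency depend on the service.
    text = recognizer.recognize_google(audio)
    print("Recognized:", text)
    # TTS: echo the recognized utterance back to the user.
    tts.say(f"You said {text}")
    tts.runAndWait()
except sr.UnknownValueError:
    print("Speech was unintelligible.")  # a core usability issue in practice
except sr.RequestError as err:
    print("ASR service unavailable:", err)
```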
—
Tracking and Using Gaze: Eye-Tracking Master Class
Instructor: Andrew Duchowski
Abstract: In this talk I will provide an overview of eye-tracking applications, distinguishing eye movement analysis from synthesis in virtual reality, games, and other venues including mobile eye tracking. My focus is on four forms of applications: diagnostic (off-line measurement), active (selection, look to shoot), passive (foveated rendering, a.k.a. gaze-contingent displays), and expressive (gaze synthesis). Diagnostic applications include training or assessment of expertise in a multitude of environments, e.g., mobile, desktop, etc. Active gaze interaction is rooted in the desire to use the eyes to point and click, with gaze gestures growing in popularity. Passive gaze interaction is the manipulation of scene elements in response to gaze direction, with an example goal of improvement of frame rate. Expressive eye movement centers on synthesis, which involves the development of a procedural (stochastic) model of microsaccadic jitter, embedded within a directed gaze model, given goal-oriented tasks such as reading. In covering the field, I will briefly review classic works and recent advancements, highlighting outstanding research problems.
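To make the synthesis side more concrete, the following sketch generates microsaccadic jitter around a fixation point from 1/f (“pink”) noise. It is only an illustration of what a procedural, stochastic jitter model can look like, under our own assumptions; it is not Dr. Duchowski’s model.

```python
# Minimal sketch (an assumption, not the speaker's model): synthesize
# microsaccadic jitter around a fixation point using 1/f ("pink") noise.
import numpy as np

def pink_noise(n: int, rng: np.random.Generator) -> np.ndarray:
    """Generate n samples of approximately 1/f noise via spectral shaping."""
    white = rng.standard_normal(n)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]                     # avoid division by zero at DC
    spectrum /= np.sqrt(freqs)              # impose a 1/f power spectrum
    signal = np.fft.irfft(spectrum, n)
    return signal / np.max(np.abs(signal))  # normalize to [-1, 1]

def synthesize_fixation(center, duration_s=0.5, rate_hz=250,
                        amplitude_deg=0.05, seed=0):
    """Return gaze samples (in degrees) jittering around a fixation center."""
    rng = np.random.default_rng(seed)
    n = int(duration_s * rate_hz)
    x = center[0] + amplitude_deg * pink_noise(n, rng)
    y = center[1] + amplitude_deg * pink_noise(n, rng)
    return np.column_stack([x, y])

samples = synthesize_fixation(center=(10.0, 5.0))
print(samples[:5])
```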
Bio: Dr. Duchowski is a professor of Computer Science at Clemson University. He received his baccalaureate (1990) from Simon Fraser University, Burnaby, Canada, and his doctorate (1997) from Texas A&M University, College Station, TX, both in Computer Science. His research and teaching interests include visual attention and perception, eye tracking, computer vision, and computer graphics. He joined the School of Computing faculty at Clemson in January 1998. He has since produced a corpus of publications and a textbook on eye tracking research, and has delivered courses and seminars on the subject at international conferences. He maintains Clemson’s eye tracking laboratory and teaches a regular course on eye tracking methodology that attracts students from a variety of disciplines across campus.