Tuesday, 09:30-10:30, Room: Harlekin & Columbine
Seeing and Feeling Air: Exploring Mid-air Displays and Haptics
Sriram Subramanian (University of Sussex, UK)
One of the visions of my research is to deliver visual and tactile experiences to users without instrumenting them with wearable or head-mounted displays. My team has been exploring various technical solutions to create displays that use air as the diffuser surface to project visual content, and to use air pressure to create tactile feedback. For example, we generate fog-filled bubbles that can be tracked and projected onto to provide ambient notifications. Similarly, we create a curtain of mist to act as a reach-through personal space between the user and an interactive tabletop. In this talk, I will present some of our recent projects on this topic and conclude with Ultrahaptics. Ultrahaptics is our haptic feedback system that uses acoustic radiation forces to create tactile stimulations at multiple locations on the hand. This feedback is created in mid-air — so users don't have to touch or hold any device to experience it. I will also touch on Ultrahaptics' journey from a research project to a spin-out, and how this has been facilitated by funding for blue-sky research.
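The acoustic radiation forces behind Ultrahaptics come from focusing an array of ultrasound transducers onto a point in mid-air. As an illustrative sketch only — the transducer layout, frequency, and function names here are assumptions, not the Ultrahaptics implementation — the core idea is to delay each transducer's phase by its distance to the focal point, so all waves arrive in phase and sum to a perceivable pressure peak:

```python
import numpy as np

SPEED_OF_SOUND = 343.0       # m/s in air
FREQ = 40_000.0              # Hz, a common choice for airborne ultrasound
WAVELENGTH = SPEED_OF_SOUND / FREQ

def focus_phases(transducer_positions, focal_point):
    """Return the emission phase (radians) for each transducer so that
    all waves arrive at focal_point in phase (hypothetical helper)."""
    distances = np.linalg.norm(transducer_positions - focal_point, axis=1)
    k = 2 * np.pi / WAVELENGTH
    # A wave emitted with phase -k*d has phase 0 on arrival at the focus.
    return (-k * distances) % (2 * np.pi)

# Example: a 4x4 grid of transducers at 1 cm pitch, focus 15 cm above centre.
xs, ys = np.meshgrid(np.arange(4) * 0.01, np.arange(4) * 0.01)
positions = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(16)])
phases = focus_phases(positions, np.array([0.015, 0.015, 0.15]))
```

Modulating the focal point's amplitude or position over time is what makes the pressure changes detectable by the skin's mechanoreceptors.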
Sriram Subramanian is a Professor of Informatics at the University of Sussex, where he leads a research group on novel interactive systems. Before joining Sussex, he was a Professor of Human-Computer Interaction at the University of Bristol (until July 2015) and, prior to this, a senior scientist at Philips Research Netherlands. He holds an ERC Starting Grant and has received funding from the EU FET-Open call. In 2014 he was one of 30 young scientists invited by the WEF to attend their Summer Davos. Subramanian is also the co-founder of Ultrahaptics, a spin-out company that aims to commercialise the mid-air haptics enabled by his ERC grant. In 2015, Ultrahaptics won the CES 2015 Top Pick award for Best Enabling Technology. Prof. Subramanian's research has been featured in several news media outlets around the world, including CNN, BBC and Fox News.
Thursday, 14:00-15:00, Room: Harlekin & Columbine
Interactive Projected Augmented Reality
Andrew D. Wilson (Microsoft Research, Redmond)
For the last several years we have explored ways to use depth sensing cameras in combination with projectors to create a variety of augmented and mixed realities.
While large scale projection mapping installations are by now familiar to most, the unique capabilities of depth cameras and today's GPUs can be used to solve the projection mapping problem at video rate, thus enabling truly interactive systems.
A progression of projected augmented reality prototypes will be presented, beginning with early pre-Kinect experiments and culminating in IllumiRoom and RoomAlive.
In this talk, I will also discuss some interesting diversions we have taken along the way, such as considering physics simulation as an organizing principle for interaction, and releasing the open source RoomAlive Toolkit (https://github.com/Kinect/RoomAliveToolkit/).
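Solving the projection mapping problem at video rate rests on a standard camera-model computation: each 3D point recovered by the depth camera is reprojected into the projector's image so rendered content lands on the physical surface it belongs to. The sketch below illustrates that single step; the function name and matrices are assumptions for illustration, not the RoomAlive Toolkit API (in practice this runs per-pixel on the GPU, with extrinsics and intrinsics obtained by projector-camera calibration):

```python
import numpy as np

def project_point(point_camera, R, t, K_proj):
    """Map a 3D point in depth-camera coordinates to projector pixels.

    R, t   : rotation and translation from camera to projector space
             (found offline by projector-camera calibration).
    K_proj : 3x3 projector intrinsics (focal lengths, principal point).
    """
    p = R @ point_camera + t          # into the projector's coordinate frame
    u = K_proj @ p                    # perspective projection
    return u[:2] / u[2]               # homogeneous divide -> pixel (x, y)

# Example with identity extrinsics and plausible intrinsics for a
# 1280x720 projector (values are illustrative).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
pixel = project_point(np.array([0.1, 0.0, 2.0]), np.eye(3), np.zeros(3), K)
# A point 10 cm right of the optical axis at 2 m depth maps right of centre.
```

Running this mapping over the full depth image every frame, rather than once for a static model, is what turns projection mapping from a fixed installation into an interactive system.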
Andy Wilson is a Principal Researcher and Research Manager at Microsoft Research. There he has been applying sensing technologies to enable new modes of human-computer interaction. His interests include gesture-based interfaces, inertial sensing and display technologies. He helped found the Surface Computing group at Microsoft, and pioneered early efforts to commercialize depth cameras at Microsoft. Before joining Microsoft, Andy obtained his BA at Cornell University, and MS and PhD at the MIT Media Laboratory. He currently manages the Natural Interaction Research group at Microsoft Research.