Session 20: Sensors
Flexible "Roll-up" Voice-Separation and Gesture-Sensing Human-Machine Interface with All-Flexible Sensors
Thursday, June 22, 2017
9:25 AM - 9:45 AM
As a model of a “natural” human-machine interface, we present a hybrid system that can separate the speech of individual speakers when multiple people are talking at once and sense human gestures, such as hand swipes, at distances well over 50 cm, implemented via a single “roll-up” sensing sheet on the scale of meters. The user does not need “artificial” aids such as clickers, lapel microphones, and so forth. The voice separation relies on a 2-meter array of microphones feeding into a modified “inverse beam-forming” algorithm, and the gesture sensing relies on ultra-sensitive capacitive sensing with flexible sensors. After a system review, this talk will focus on the performance of flexible passive microphones compared with conventional state-of-the-art MEMS microphones with a built-in preamplifier, at both the component and system level. Fundamentally, the microphones respond to the mechanical deformation of the substrate onto which they are mounted, rather than to a built-in diaphragm as in conventional microphones. Therefore, the response of the microphones to sound can be one to two orders of magnitude larger when they are mounted on a flexible platform rather than a rigid one, and the sensors can be placed on the back side of the flexible platform (opposite the speaker). Since the gesture-sensing electrodes can also be mounted on the back side, the system is aesthetically pleasing. Multiple simultaneous speakers were successfully separated with the flexible microphones, and system performance metrics vs. conventional microphones will be presented. A system demonstration will accompany the talk.
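The abstract does not detail the modified "inverse beam-forming" algorithm, but the underlying principle of steering a linear microphone array toward one speaker can be illustrated with a classic delay-and-sum beamformer. The sketch below is purely illustrative and is not the authors' method; the function name, array geometry, and parameters are assumptions for the example.

```python
import numpy as np

def delay_and_sum(signals, mic_positions, angle_deg, fs, c=343.0):
    """Steer a linear microphone array toward a source at angle_deg.

    Illustrative sketch only (not the talk's modified algorithm).
    signals: (n_mics, n_samples) time-domain microphone signals.
    mic_positions: (n_mics,) mic positions along the array axis, in meters.
    fs: sample rate in Hz; c: speed of sound in m/s.
    """
    n_mics, n_samples = signals.shape
    # Per-mic arrival-time delay for a plane wave from angle_deg
    # (0 degrees = broadside, directly in front of the array).
    delays = mic_positions * np.sin(np.deg2rad(angle_deg)) / c
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    spectra = np.fft.rfft(signals, axis=1)
    # A frequency-domain phase ramp implements each fractional delay.
    phase = np.exp(-2j * np.pi * freqs[None, :] * delays[:, None])
    # Aligning and averaging reinforces the steered direction and
    # attenuates sources arriving from other angles.
    return np.fft.irfft((spectra * phase).mean(axis=0), n=n_samples)
```

For a source at broadside, the per-mic delays are zero and the beamformer simply averages the (already aligned) signals; off-axis interferers accumulate mismatched phases and partially cancel, which is the basic mechanism any array-based voice-separation scheme exploits.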