SEMICON West Preview: MEMS
As Demand for Sensors Soars with IoT, Opportunities Increase with Smart Functionality
By Paula Doe, SEMI
Autonomous automobiles, smart manufacturing, smart buildings, mobile human health monitoring, and 4G+ communications hardware for connecting all these devices will drive strong 24 percent growth in units and 14 percent in value for the MEMS sector, according to Yole Développement. “These emerging markets will give a noticeable boost to MEMS growth going forward,” says Yole Founder and President Jean-Christophe Eloy, who will discuss the changes coming to the sector at SEMICON West 2017, on July 11.
These emerging applications are changing what’s required from MEMS suppliers: bigger building blocks with higher value, more functions and more processing power integrated in the package, and greater demand for software intelligence to turn sensor data into useful information, Eloy notes. This probably also means a shake-up among the players, since it’s not clear who will capture the value of this growth opportunity as the key skills shift further toward integration and software to enable functions.
Emerging smart autos, manufacturing, healthcare and increasingly complex high speed communications will boost MEMS market to more than $25 billion in the next six years. Source: Yole Développement.
Demand for smart audio, smart visual and more RF
The demand for RF filters required by the increasing complexity of communicating all this data with high-speed 4G/4G+ mobile technology will make RF MEMS BAW filters the fastest-growing segment of the MEMS business, likely seeing some 35 percent compound annual growth, jumping from $2.2 billion in 2017 to a $10.2 billion market in 2022, according to Yole analysts.
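The quoted growth rate can be checked directly from the two market figures. A quick calculation of the compound annual growth rate implied by Yole's numbers (this arithmetic is mine, not part of the Yole report):

```python
# Compound annual growth rate implied by the quoted figures:
# $2.2B in 2017 growing to $10.2B in 2022, i.e. five annual steps.
start, end, years = 2.2, 10.2, 5
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # → 35.9%
```

which rounds to the roughly 35 percent figure the analysts cite.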
Demand for audio processing will also be particularly strong, with 11 percent unit growth for MEMS microphones, increasingly in sophisticated always-listening applications that continually sense what is happening around them in the home, the car, or the factory. That means more processing power and software are needed to separate key sounds from the background noise, and even to recognize what they mean.
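The core of that always-listening task can be sketched in a few lines. The following is a minimal illustration of the idea (not any vendor's actual pipeline): a node tracks an adaptive noise-floor estimate and reports only the frames that stand out from the background, leaving sound recognition to downstream software.

```python
# Minimal sketch of "always-listening" key-sound detection: keep a
# running estimate of the background noise level and flag only frames
# that exceed it by a decibel margin.
import math

def rms(frame):
    """Root-mean-square energy of one audio frame."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))

def detect_events(frames, threshold_db=10.0, alpha=0.05):
    """Return indices of frames whose energy exceeds the adaptive
    noise floor by at least threshold_db decibels."""
    noise_floor = None
    events = []
    for i, frame in enumerate(frames):
        e = rms(frame) + 1e-12              # avoid log(0)
        if noise_floor is None:
            noise_floor = e                 # seed from the first frame
            continue
        if 20 * math.log10(e / noise_floor) > threshold_db:
            events.append(i)                # loud enough to report
        else:
            # quiet frame: slowly adapt the background estimate
            noise_floor = (1 - alpha) * noise_floor + alpha * e
    return events

quiet = [0.01, -0.01] * 80
loud = [0.5, -0.5] * 80
print(detect_events([quiet, quiet, loud, quiet]))  # → [2]
```

A real microphone pipeline would run spectral features and a trained classifier rather than raw energy, but the power-saving principle is the same: wake the heavier processing only when something stands out.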
Another coming change: MEMS micro speakers will finally hit the market. STMicroelectronics is currently making wafers for USound for qualification. “Micro speakers will happen next year,” says Eloy, noting that this will enable a proliferation of small and diffuse audio applications, and will increase demand for ever more sophisticated audio ICs for processing, as audio becomes an increasingly mainstream human-machine interface.
Growing opportunity for adding audio value based on MEMS means interest by a host of competing players. Source: Yole Développement.
Smarter image sensing will also make its way into more applications, while various types of 3D imaging, such as ultrasonics, radar, and LIDAR, are starting to gain traction not only in automotive applications, but also in smartphones for autofocus and for facial recognition for security.
Adding intelligence at the edge
The next generation of sensor technology will also clearly integrate more intelligence. IoT applications are generating immense amounts of data, which needs to be intelligently processed into useful information for local action. However, sending all that data to the cloud and back for processing is often not practical. “Now that we have so much sensor data available ─ not just motion, but also sound, imaging, IR, UV, and other spectra ─ the next opportunity is to add artificial intelligence (AI) or machine learning at the edge, so the sensors report only the selective information required to signal problems that need action,” says Pete Beckman, co-director, Northwestern/Argonne Institute for Science and Engineering, Argonne National Laboratory. Beckman will talk at SEMICON West (July 11-13) about his lab’s open platform that allows researchers to experiment with adding machine learning to sensor nodes.
The Argonne Waggle platform includes a Linux-based single board computer to handle encrypted networking and data caching. It also pulls sensor data from customized boards or off-the-shelf sensor devices. The Waggle management (wagman) board controls power and diagnostics. The third key component is a single board computer focused completely on edge computing, supporting AI and machine learning. With eight CPU cores and a GPU, the edge processor can be trained to recognize sounds and images or other patterns, using open source software like UC Berkeley’s Caffe deep learning software and the OpenCV computer vision package. “We isolated this part on a separate board to run the newest software available, and out on the leading edge of development, all of this AI software can still be a little buggy,” Beckman notes.
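The edge-processing idea Beckman describes can be illustrated with a generic sketch (this is not the actual Waggle codebase, and the classifier here is a hypothetical stand-in for a trained Caffe or OpenCV model): the node runs inference locally and transmits only compact event summaries upstream, instead of streaming raw sensor data to the cloud.

```python
# Generic sketch of edge filtering: classify readings locally and
# send upstream only confident, non-background events.
from dataclasses import dataclass

@dataclass
class Event:
    kind: str
    confidence: float

def toy_classifier(reading):
    """Stand-in for a trained model (a real node might run a Caffe
    network on images or audio). Maps a raw reading to an event."""
    if reading > 0.8:
        return Event("siren", min(reading, 1.0))
    return Event("background", 1.0 - reading)

def edge_filter(readings, min_confidence=0.9):
    """Keep raw data local; report only events worth sending upstream."""
    uplink = []
    for r in readings:
        ev = toy_classifier(r)
        if ev.kind != "background" and ev.confidence >= min_confidence:
            uplink.append(ev)
    return uplink

stream = [0.1, 0.2, 0.95, 0.3, 0.85, 0.99]
sent = edge_filter(stream)
print([(e.kind, round(e.confidence, 2)) for e in sent])
# → [('siren', 0.95), ('siren', 0.99)]
```

Six readings come in, but only two confident events go out; that reduction in uplink traffic is the point of putting the intelligence at the edge.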
The group is working with the city of Chicago on a network of these smart nodes to monitor things like traffic incidents, air pollution, ice on roads, or potential flooding. Other researchers are using the platform to measure pollen and particulates in the air to predict asthma outbreaks, or to monitor water-flow patterns across a prairie site.
Adding intelligence to development
“If the MEMS industry is going to innovate more smartly, we can’t keep doing things the same old way we always have, and the foundries have to do their part to do things differently as well,” notes Tomas Bauer, Silex Microsystems' SVP of Sales & Business Development, who will discuss Silex’s efforts to use tailored IT systems to speed the development of MEMS devices. Since most innovative MEMS devices depend on developing a whole new wafer process, ramping to stable volume production has often taken years. Silex has therefore developed information systems to track wafers through development, with a cockpit view giving easy access to all the statistics on runs and risk items, immediate notification of potential issues, and more sophisticated queuing and optimization of the pathways of development batches to speed throughput in the high-mix fab. Silex also uses optical inspection tools during processing so its engineers can roll back the images to see what went wrong. “Instead of trying to standardize the process, we need to find ways to speed the development of the custom process,” Bauer suggests.
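The batch-queuing idea can be illustrated abstractly. The sketch below is a hypothetical illustration of priority dispatch in a high-mix fab, not Silex's actual system; the risk scores and lot IDs are invented for the example.

```python
# Hypothetical sketch: dispatch development lots so that the
# highest-risk batches (most likely to surface problems) run first,
# shortening the feedback loop during process development.
import heapq

def dispatch_order(lots):
    """lots: list of (risk_score, lot_id); higher risk runs sooner.
    Returns lot IDs in dispatch order."""
    heap = [(-risk, lot_id) for risk, lot_id in lots]  # max-heap via negation
    heapq.heapify(heap)
    order = []
    while heap:
        _, lot_id = heapq.heappop(heap)
        order.append(lot_id)
    return order

print(dispatch_order([(2, "A17"), (9, "B03"), (5, "C42")]))
# → ['B03', 'C42', 'A17']
```

Real fab schedulers weigh many more factors (tool availability, setup changes, due dates), but the principle of surfacing at-risk work early is the same.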
At SEMICON West 2017 (July 11-13), the MEMS and Sensors session also features David Horsley from the University of California, Davis, on piezoelectric MEMS opportunities, and Thin Film’s Arvind Kamoth and Princeton’s James Sturm on new technologies for systems integrating sensors and CMOS on flexible substrates.
May 16, 2017