Kevin Morris
EE Journal
On April 19, 1965, Electronics magazine ran an article called “Cramming More Components Onto Integrated Circuits.” It was written by an engineer from Fairchild Semiconductor, and it contained a simple prediction that turned out to describe the trend that changed the world. Gordon Moore’s article is the reference point for the explosive growth in semiconductor capability that has continued for almost fifty years now.
That same year, the same magazine ran another article describing a device invented by Harvey Nathanson of Westinghouse Labs, which suspended a tungsten rod over a transistor to form a “microscopic frequency selective device” - the very first MEMS device. It was later patented as the “Resonant Gate Transistor.”
So - MEMS and logic transistors have both been around for almost fifty years. And, since MEMS and logic transistors are fabricated in the same factories, using the same techniques, and used in the same systems, there is a natural temptation to draw parallels between them. Indeed, as I attended the annual MEMS Executive Congress last week, I had the distinct sense of déjà vu - that I was back in 1980s semiconductor land. The tight-knit community of highly motivated people exploring a vast universe of possibilities with an exciting emerging technology whose time had come had all the ingredients of that Moore’s Law magic that captured our imaginations and transformed our culture - back before semiconductor production became the exclusive purview of entities with the wealth of nations.
Everyone seems to be silently waiting in anticipation of the same thing. When will MEMS have a Moore’s-Law-like explosion that will catapult companies with Intel-like velocity from shaky startups to stalwart supercorporations? With MEMS in every mobile device, and predictions that the world will contain a trillion MEMS sensors within just a few years, the excitement is palpable. After all, a trillion is a very big number - it works out to roughly 140 sensors for every man, woman, and child on Earth.
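(For the skeptical, a quick back-of-the-envelope check. The population figure here is an assumption - roughly 7.1 billion, about where it stands as those forecasts circulate.)

```python
# Sanity check: how many sensors per person does a trillion imply?
# The world population figure is an assumption (~7.1 billion, circa 2013).
forecast_sensors = 10**12          # one trillion MEMS sensors
world_population = 7.1 * 10**9     # approximate world population

print(round(forecast_sensors / world_population))  # -> 141
```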
There will be no Moore’s Law for MEMS
While 140-odd MEMS devices for every human being in existence may sound like a lot, to paraphrase Douglas Adams, that’s just peanuts to transistors. With transistor counts at the latest process nodes running into the billions per device, there will be many individuals who own transistors in the trillions. And, while this comparison may seem silly, it does highlight an important fact: Moore’s Law was not about “electronics” or “components” in general. It was about one single type of device - the CMOS logic transistor.
Of course, lithography has made quantum leaps over the decades, and we can now make smaller, better versions of all kinds of components - including MEMS - as a result. But the component driving that explosion was the only one we knew how to use productively in almost unlimited quantities - the logic transistor. A smartphone or tablet today can put several billion logic transistors to work without missing a beat. If we offered smartphone designers a billion more for free, they’d take them. But it’s hard to figure out what we’d do with more than a few dozen MEMS sensors in a phone. With nine motion sensors and a GPS, your phone already knows where it is, which way it’s oriented, and how it’s moving.
Doubling up on those sensors offers no practical value. We could throw in a few variometers, hygrometers, thermometers, barometers, heck - even a spectrometer or two - and our device would be a sensory bad-ass with only a double-digit MEMS tab. And, behind each one of those sensors, we’d still need a massive number of transistors to do the processing required to make use of the data those sensors are dumping out. In fact, the irony of the situation is that the presence of MEMS in our systems is creating renewed demand for much more non-MEMS technology - like FPGAs.
There is most certainly a MEMS-driven revolution occurring in our systems. And the proliferation of those sensors - which most likely will fulfill the “trillion sensor” forecasts being tossed around by MEMS industry experts - will absolutely transform the electronics landscape again, just not with a Moore’s Law explosion in MEMS itself.
Consider today’s primary technology driver, the smartphone. There is considerable speculation as to the utility of quad-core, 64-bit processors in smartphones. Why? There just hasn’t been that much processing to do. Once we had devices that could deliver outstanding video gaming performance, there weren’t many application mountains left to climb that required giant, in-phone, heavy-iron processing power. And those big ol’ processors impose a power penalty that’s very hard to ignore given our incredibly tight battery budgets.
But throwing a passel of MEMS sensors into the mix brings on a whole new processing challenge. Now we need to perform sophisticated analyses on massive amounts of data coming from those sensors - often continuously and in real time - in order to achieve the end goal for our system, which is referred to as “context.”
“Context” is simply an understanding of what is going on, extrapolated from a pile of diverse data. Context usually involves answering a simple question reliably - what is the device (or the user of the device) doing right now, and in what environment? After a bunch of algorithms are applied to a crazy stream of data, our system may conclude that the user is now “walking.” Bonus points if it knows other details like where that walking is taking place, how fast the user is going, and what environment the user is walking through.
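To make that concrete, here is a minimal sketch of one classic context-inference trick: labeling a window of accelerometer samples as “still,” “walking,” or just “moving” based on its motion energy and step cadence. The thresholds, window length, and cadence band below are illustrative assumptions - not values from any shipping sensor hub.

```python
import math

def activity_from_accel(samples, sample_rate_hz=50):
    """Rough context inference: label a window of 3-axis accelerometer
    samples (in g) as 'still', 'walking', or 'moving'.
    All thresholds are illustrative assumptions only."""
    # Reduce each (x, y, z) sample to its magnitude.
    mags = [math.sqrt(x*x + y*y + z*z) for (x, y, z) in samples]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)

    if var < 0.005:          # almost no motion energy
        return "still"

    # Count zero crossings of the de-meaned signal to estimate cadence.
    crossings = sum(
        1 for a, b in zip(mags, mags[1:])
        if (a - mean) * (b - mean) < 0
    )
    steps_hz = crossings / 2 / (len(mags) / sample_rate_hz)
    if 1.0 <= steps_hz <= 3.0:   # typical walking cadence ~1.5-2.5 Hz
        return "walking"
    return "moving"
```

A real system would cross-check a guess like this against the gyroscope, barometer, and GPS before committing to an answer - which is exactly where the processing burden starts to pile up.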
Making a system that can reliably infer context by cross-correlating a lot of sensor data requires a few good MEMS sensors - and a gigantic amount of ultra-low-power processing prowess. That challenge won’t be addressed by more or better sensors, and it likely won’t get much help from that quad-core, 64-bit ARM monstrosity either. Just powering such a processor up for more than a quick after-the-fact analysis breaks the power budget of most battery-powered systems - and of pretty much every potential wearable device.
Solving those processing challenges will most likely require hardware architectures similar to FPGAs - currently the only devices that can deliver the combination of ultra-high performance, on-the-fly algorithm reconfigurability, and super-low power consumption needed to tackle the sensor-data tsunami. In fact, at least two FPGA companies (QuickLogic and Lattice Semiconductor) have gone after this challenge specifically, producing programmable logic devices suitable for running complex sensor fusion algorithms in battery-operated systems with tight constraints on power, cost, and form factor.
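For a feel of what the simplest of those sensor fusion algorithms looks like, here is a sketch of a complementary filter that fuses a drift-prone (but smooth) gyroscope with a noisy (but drift-free) accelerometer to estimate pitch. The 0.98 blend factor is a common illustrative choice, not anything specified by the vendors above.

```python
import math

def complementary_filter(pitch_deg, gyro_rate_dps, accel, dt, alpha=0.98):
    """One update step of a complementary filter for pitch.

    pitch_deg:     previous pitch estimate (degrees)
    gyro_rate_dps: gyro angular rate about the pitch axis (deg/s)
    accel:         (ax, ay, az) accelerometer sample (g)
    dt:            time step (s)
    alpha:         blend factor - illustrative assumption, tune per system
    """
    ax, ay, az = accel
    # The accelerometer gives an absolute (but noisy) pitch from gravity.
    accel_pitch = math.degrees(math.atan2(ax, math.sqrt(ay*ay + az*az)))
    # Gyro integration is smooth short-term but drifts long-term.
    gyro_pitch = pitch_deg + gyro_rate_dps * dt
    # Blend: trust the gyro short-term, the accelerometer long-term.
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch
```

The whole update is a handful of multiplies and adds per sample - a small, fixed dataflow that maps naturally onto low-power programmable logic.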
But sensor fusion is just the tip of the proverbial iceberg. When there are a trillion sensors out there in the world deluging us with data, our only hope of extracting high-quality, real-world, actionable information is a meta-scale, heterogeneous client-and-server computing system that runs the gamut from tiny, efficient, local sensor fusion devices to enormous cloud-based, big-data server farm analysis. Each layer of that meta-machine will need to correlate, consolidate, and reduce the data available to it, and then pass the results upstream for higher-level analysis.
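Here is a sketch of what one layer of that reduction might look like - the structure is hypothetical, not any particular product’s architecture. A local node boils each window of raw samples down to a compact summary, and each layer above merges and re-reduces the summaries it receives before passing them on.

```python
def reduce_window(samples):
    """Boil a window of raw sensor readings down to a compact summary
    for transmission upstream. Purely illustrative."""
    n = len(samples)
    return {"count": n, "mean": sum(samples) / n,
            "min": min(samples), "max": max(samples)}

def merge_summaries(summaries):
    """Re-reduce the summaries arriving from the layer below, so data
    volume shrinks at every hop toward the server farm."""
    total = sum(s["count"] for s in summaries)
    return {"count": total,
            # weighted mean preserves the exact mean of all raw samples
            "mean": sum(s["mean"] * s["count"] for s in summaries) / total,
            "min": min(s["min"] for s in summaries),
            "max": max(s["max"] for s in summaries)}
```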
So, even though those sensors won’t have a Moore’s Law of their own, they are likely to be the driving factor in a formidable category of applications that will fuel the need for the same old Moore’s Law to continue for a few more cycles.