EE Times: PIMIC Adds Tiny AI to Microphones, Eyes Big AI Chips
By Sally Ward-Foxton January 13, 2025
AI chip startup PIMIC has taped out ultra-low power keyword spotting and environmental noise cancellation chips based on its Jetstreme AI accelerator IP. The company also has plans to develop silicon for LLMs and high-performance computer vision in the next couple of years.
PIMIC’s Jetstreme AI accelerator is an SRAM-based multiply-accumulate (MAC) engine, David Hu, chief strategy officer at PIMIC, told EE Times.
“Matrix operations take up 90-95% of computation in neural networks, so we focus on this type of operation, and how to make it much more efficient—that’s why we created in-memory computing,” he said. “We’re using existing SRAM-based cells. Model weights sit inside the memory, then once the input comes in, the multiply-accumulate happens inside the memory, so there’s no back and forth to the compute unit to move data multiple times.”
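PIMIC has not published Jetstreme’s microarchitecture, but the weight-stationary idea Hu describes can be sketched in a few lines of Python. The sketch below is purely illustrative: the array sizes, data types and function names are assumptions, and the “in-memory” version simply stands in for a MAC that happens where the weights are stored rather than in a separate compute unit.

    import numpy as np

    # Illustrative only: PIMIC has not published Jetstreme internals.
    # Weights stay resident in the (simulated) SRAM array; the input vector is
    # broadcast in, and the multiply-accumulate happens where the weights live,
    # so only inputs and final sums cross the array boundary.
    rng = np.random.default_rng(0)
    sram_weights = rng.integers(-8, 8, size=(64, 32))   # resident model weights
    x = rng.integers(-8, 8, size=32)                    # incoming activations

    def near_memory_mac(weights, x):
        # Conventional flow: each weight is read out of memory and shuttled to a
        # separate MAC unit before it is used, one value at a time.
        acc = np.zeros(weights.shape[0], dtype=np.int64)
        for row in range(weights.shape[0]):
            for col in range(weights.shape[1]):
                w = weights[row, col]      # data movement: memory -> compute unit
                acc[row] += int(w) * int(x[col])
        return acc

    def in_memory_mac(weights, x):
        # In-memory flow (conceptually): the whole matrix-vector product is formed
        # inside the array, standing in for the 90-95% of work that is matrix math.
        return weights @ x

    assert np.array_equal(near_memory_mac(sram_weights, x), in_memory_mac(sram_weights, x))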
Jetstreme is a digital accelerator without any need to convert to and from the analog domain, according to Hu.
“Internally we call it a hybrid, but the functionality is digital,” he said. “Right now we can offer 10× the performance of near-memory compute, and we expect future generations to be even more efficient.”
The company’s first two products are in TSMC 22-nm ultra-low leakage (ULL), but the technology is compatible with any standard CMOS process technology, Hu said.
Both of PIMIC’s chips are based on Jetstreme IP, running a tiny pre-loaded AI model in the same package as, or very close to, a MEMS microphone. Neither device is customer programmable; each runs only PIMIC’s pre-loaded model.
The keyword spotting chip has two power domains: an analog voice activity detector designed for always-on operation, using 20 µA, and a Jetstreme-based AI inference engine, triggered when a voice is heard, which uses 30 µA. This chip can be used to wake up a bigger system when keywords are heard.
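Those two figures also show why the split matters for battery life. A back-of-the-envelope sketch using the 20 µA and 30 µA numbers quoted above; the speech duty cycle and supply voltage are illustrative assumptions, not PIMIC figures.

    # Average draw of the keyword-spotting chip, assuming the voice activity
    # detector stays on continuously and the Jetstreme engine only adds its
    # 30 uA while speech is present. Duty cycle and supply voltage are assumed.
    VAD_UA = 20.0        # always-on analog voice activity detector
    INFER_UA = 30.0      # AI inference engine, active only on detected voice
    SPEECH_DUTY = 0.05   # assumption: voice present 5% of the time
    SUPPLY_V = 1.8       # assumption: nominal supply voltage

    avg_ua = VAD_UA + SPEECH_DUTY * INFER_UA
    print(f"average current: {avg_ua:.1f} uA")             # 21.5 uA
    print(f"average power:   {avg_ua * SUPPLY_V:.1f} uW")  # ~38.7 uW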
The environmental noise cancellation (ENC) chip runs AI inference using around 150 µA, adding 4-30 ms of latency. It does a good job of noise cancellation with a single microphone source, Hu said. At less than a square millimeter of silicon, it can be integrated into MEMS microphone packaging.
“Many customers are interested in this, especially for smart glasses, because space is very constrained,” Hu said. “This is a game changer for sensor makers, so it’s been very well-received.”
This chip supports analog microphones, which operate in the 40-80 µA range. Being low power, it can sit close to a MEMS die without introducing noise. Microphone makers are very interested in upgrading the performance of their microphones with a chip that takes no extra space, so that the upgraded microphone can simply be swapped into existing system designs, he added.
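Hu’s comparison with the microphone’s own draw can be put in rough numbers, using the figures quoted above; treating the two currents as simply additive inside the package is an assumption made for illustration.

    # Rough comparison of current draw: analog MEMS microphones at 40-80 uA
    # versus the ENC chip at ~150 uA. Assumes the currents simply add inside
    # the upgraded microphone package.
    MIC_UA_RANGE = (40, 80)
    ENC_UA = 150

    for mic_ua in MIC_UA_RANGE:
        total = mic_ua + ENC_UA
        print(f"{mic_ua} uA mic + ENC -> {total} uA total ({total / mic_ua:.1f}x the bare mic)")
    # Still in the sub-milliamp range either way.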
PIMIC has already partnered with Taiwanese microphone maker Zilltek to incorporate its ENC chip into Zilltek microphone packages. This allows PIMIC to use Zilltek’s sales channels, Hu said.
While integrating into microphone packaging is one of PIMIC’s key routes to market, the discretely packaged PIMIC ENC chip can also simply be placed on a PCB, Hu said. The company will also consider IP licensing and custom ASIC design for high-volume customers, he added.
Three-year plan
“Our three-year plan is to start the company based on smaller AI models, build that experience and build a team, then move to larger models,” Hu said. “Our approach is more organic than moonshot. We saw an opportunity in speech, because it needs a lot of AI and very low power…after the first phase, we will move on to larger models, transformers and high performance imaging.”
Future PIMIC chips would be based on a scaled-up version of Jetstreme, and would be software-programmable for customer neural networks. The company has in mind separate product lines for high-performance imaging models and LLMs.
Edge chips have problems keeping up with the evolution of LLMs, Hu said, noting the main challenges are latency, memory bandwidth and power efficiency—especially as applications move towards reasoning models and agentic AI, which require multiple inferences.
Hu said Jetstreme can offer 40-50 TB/s memory bandwidth, since computation is performed inside memory cells, but noted that future LLM-oriented products would require additional DRAM for models that do not fit on the chip.
“We expect we can have a local chip supporting easily three to four billion parameter networks, and bigger chips can go to thousands of TOPS per chip,” Hu said.
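Those two figures give a feel for the decode-rate ceiling such a part could target. The sketch below uses the 40 TB/s bandwidth and 3-4 billion parameter numbers from Hu’s comments; the 8-bit weights and the memory-bound assumption that every generated token touches every weight once are simplifications for illustration, not PIMIC specifications.

    # Memory-bound ceiling for LLM decoding: tokens/s <= bandwidth / model size.
    # 40 TB/s and 3-4B parameters come from Hu's comments; INT8 weights and the
    # one-full-weight-read-per-token model are simplifying assumptions.
    def token_rate_ceiling(params_billion, bandwidth_tb_s, bytes_per_weight=1):
        weight_bytes = params_billion * 1e9 * bytes_per_weight
        return (bandwidth_tb_s * 1e12) / weight_bytes

    for params in (3, 4):
        print(f"{params}B params @ 40 TB/s: ~{token_rate_ceiling(params, 40):,.0f} tokens/s ceiling")
    # ~13,333 and ~10,000 tokens/s respectively; weights spilled to external DRAM
    # would instead be limited by that DRAM's bandwidth.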
Among the applications the company is looking at are robotics, edge servers and industrial imaging.
“We’re talking with a lot of partners to identify the first target markets, but the underlying technical approach and our manufacturing supply chain are already established within the company,” Hu said, noting the company’s partnerships in the Taiwanese manufacturing ecosystem—especially with TSMC.
Productizing Jetstreme as a larger-scale chip would be driven entirely by customer demand, Hu said. PIMIC is talking to both defense contractors and edge server makers, he said.
PIMIC’s keyword spotting and ENC products will start shipping in volume before the end of 2025.
Sally Ward-Foxton covers AI for EETimes.com and EETimes Europe magazine. Sally has spent the last 18 years writing about the electronics industry from London. She has written for Electronic Design, ECN, Electronic Specifier: Design, Components in Electronics, and many more news publications. She holds a Master's degree in Electrical and Electronic Engineering from the University of Cambridge.