Radically more efficient deep learning to enable inference on low-power hardware.
We are working toward a future in which an air conditioner turns off when we leave the room, a warehouse directs us to the missing box, and a home lets us know when an elderly relative needs help. It’s a future that is safer and more thoughtful thanks to intelligent sensors inside all the things around us.
Yesterday, you had to sit at your PC. Today, you take your smartphone with you. Tomorrow, intelligent sensors turn computing ambient.
Making computing ambient requires a large number of intelligent chips, so they have to be small, cheap, and low-power. Our technology works on $1 chips that consume less than 10 milliwatts. They are so efficient that they can be powered by a coin cell battery or a small solar cell.
Processing data on the device is inherently more reliable than relying on a connection to the cloud. Intelligence shouldn’t have to depend on weak Wi-Fi.
Sending data from the sensor to the cloud, processing it there, and sending the result back takes time, sometimes whole seconds. This latency is a problem for products that need to respond to sensor input in real time.
Running AI in the cloud comes with significant recurring compute costs. Executing the AI on the device instead saves several dollars per month in cloud fees.
Sending sensor data such as audio and video to the cloud increases privacy and security risks. To reduce abuse and give people confidence to let intelligent sensors into their lives, the data should not leave the device.
Ubiquitous connected sensors would overwhelm the network. Plumerai software only uses the network when it has something to report. This keeps bandwidth and mobile data costs low.
The farther we move data, the more energy we use. Sending data to the cloud uses a lot of energy. Processing data on-chip is more efficient by orders of magnitude. If a device needs a battery life of months or years, data needs to be processed locally.
Plumerai has developed a complete software solution for camera-based people detection and familiar face identification. Trained on over 30 million images, our software delivers highly accurate results under a wide variety of conditions. These AI models are so small that they even run on Arm Cortex-M microcontrollers. On Arm Cortex-A CPUs, processor load is so low that there’s plenty of compute left for additional applications running on the same device.
Plumerai’s inference engine is the fastest and most memory-efficient in the world, as confirmed by MLPerf benchmark results. It accelerates any neural network. So whether you’re developing speech recognition for your microwave, glass-break detection for your alarm, or activity recognition with an IMU sensor, our inference engine speeds it up. On average it delivers a 2.6x speedup, a 2.0x RAM reduction, and a 3.6x code size reduction without changing the accuracy.
We combine our optimized inference engine with our collection of tiny AI models to provide turnkey software solutions. These are highly accurate and so efficient that they run on nearly every off-the-shelf chip.
We develop our neural networks from scratch and optimize them for small embedded devices with customized model architectures and training strategies, built on our world-class research on model quantization. The result is tiny but highly accurate AI models.
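As a rough illustration of why quantization shrinks models, the sketch below maps float32 weights to int8 with a single per-tensor scale. This is a generic, minimal example of symmetric post-training quantization; it is not Plumerai’s actual training pipeline, and the function and variable names are illustrative.

```cpp
// Minimal sketch of symmetric int8 weight quantization, a standard technique
// behind small embedded models; illustrative only, not Plumerai's pipeline.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

// Map float weights to int8 with one per-tensor scale:
//   q = round(w / scale), scale = max(|w|) / 127, so w ~= q * scale.
std::vector<int8_t> quantize(const std::vector<float>& weights, float& scale) {
  float max_abs = 0.0f;
  for (float w : weights) max_abs = std::max(max_abs, std::fabs(w));
  scale = max_abs / 127.0f;
  if (scale == 0.0f) scale = 1.0f;  // guard against an all-zero tensor
  std::vector<int8_t> q(weights.size());
  for (size_t i = 0; i < weights.size(); ++i) {
    q[i] = static_cast<int8_t>(std::lround(weights[i] / scale));
  }
  return q;
}

int main() {
  std::vector<float> weights = {0.42f, -1.30f, 0.07f, 0.95f};
  float scale = 0.0f;
  std::vector<int8_t> q = quantize(weights, scale);
  // int8 storage is 4x smaller than float32, and integer arithmetic is what
  // microcontroller DSP/SIMD instructions accelerate.
  std::printf("scale=%f, first weight %.3f -> %d (dequantized %.3f)\n",
              scale, weights[0], q[0], q[0] * scale);
  return 0;
}
```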
AI developers use the Plumerai Inference Engine to make AI models run efficiently on microcontrollers. Our inference engine is optimized for Arm Cortex-M, Arm Cortex-A, and RISC-V processors.
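For a sense of what running a model on a microcontroller looks like in practice, here is a minimal sketch using the open-source TensorFlow Lite for Microcontrollers C++ API, whose model format is widely used in this space. Constructor signatures vary between TFLM releases, the model data and arena size are placeholders, and this is not the Plumerai Inference Engine’s own API.

```cpp
// Minimal sketch of on-device inference with TensorFlow Lite for
// Microcontrollers (not the Plumerai API); model data and arena size are
// placeholders, and exact signatures vary between TFLM releases.
#include <cstdint>

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Model compiled into flash (e.g. generated with `xxd -i model.tflite`).
extern const unsigned char g_model_data[];

// Static scratch memory for activations; the required size is model-dependent.
constexpr int kTensorArenaSize = 64 * 1024;
static uint8_t tensor_arena[kTensorArenaSize];

int run_inference(const int8_t* sensor_frame, int frame_len) {
  const tflite::Model* model = tflite::GetModel(g_model_data);

  // Register only the operators the model needs to keep code size small.
  static tflite::MicroMutableOpResolver<4> resolver;
  resolver.AddConv2D();
  resolver.AddDepthwiseConv2D();
  resolver.AddFullyConnected();
  resolver.AddSoftmax();

  static tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                              kTensorArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) return -1;

  // Copy the quantized sensor frame into the input tensor and run the model.
  TfLiteTensor* input = interpreter.input(0);
  for (int i = 0; i < frame_len; ++i) input->data.int8[i] = sensor_frame[i];
  if (interpreter.Invoke() != kTfLiteOk) return -1;

  // Return the index of the highest-scoring class.
  TfLiteTensor* output = interpreter.output(0);
  int best = 0;
  for (int i = 1; i < output->dims->data[output->dims->size - 1]; ++i) {
    if (output->data.int8[i] > output->data.int8[best]) best = i;
  }
  return best;
}
```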
We collect, label and build our own datasets. Our data pipeline identifies failure cases to ensure that our models are highly reliable and accurate.
We work closely with semiconductor companies that offer high-performance, low-power microcontrollers and SoCs. Using these chips, our software powers security cameras, consumer electronics, smart home cameras, and intelligent sensors.
We’re partnering with industry experts