Let’s make your AI tiny and fast.
Try out our inference engine for Arm Cortex-M, the fastest and smallest in the world. On average it delivers a 2.6x speedup, a 2.0x RAM reduction, and a 3.6x code size reduction. Accuracy is unchanged: no binarization, no pruning.
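To make those average factors concrete, here is a minimal sketch in C (the usual language for Cortex-M firmware) that applies the quoted numbers to a hypothetical baseline model. The baseline latency, RAM, and code size below are illustrative assumptions, not measurements of the engine.

```c
#include <stdio.h>

/* Hypothetical baseline footprint of a quantized model on a Cortex-M device.
 * These values are illustrative assumptions, not measured results. */
#define BASELINE_LATENCY_MS 100.0
#define BASELINE_RAM_KB      64.0
#define BASELINE_CODE_KB    128.0

/* Average improvement factors quoted above. */
#define SPEEDUP        2.6
#define RAM_REDUCTION  2.0
#define CODE_REDUCTION 3.6

int main(void) {
    /* Print the baseline next to the footprint implied by the average factors. */
    printf("latency: %.1f ms -> %.1f ms\n", BASELINE_LATENCY_MS, BASELINE_LATENCY_MS / SPEEDUP);
    printf("RAM:     %.1f KB -> %.1f KB\n", BASELINE_RAM_KB, BASELINE_RAM_KB / RAM_REDUCTION);
    printf("code:    %.1f KB -> %.1f KB\n", BASELINE_CODE_KB, BASELINE_CODE_KB / CODE_REDUCTION);
    return 0;
}
```

For this assumed baseline, the averages work out to roughly 38 ms of latency, 32 KB of RAM, and 36 KB of code; your own numbers will depend on the model and device.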