A development board to quickly prototype on-device ML products. Scale from prototype to production with a removable system-on-module (SoM).
Performs high-speed ML inferencing: the on-board Edge TPU coprocessor is capable of performing 4 trillion operations (tera-operations) per second (TOPS), using 0.5 watts for each TOPS (2 TOPS per watt). For example, it can execute state-of-the-art mobile vision models such as MobileNet v2 at 400 FPS, in a power-efficient manner.
Provides a complete system: a single-board computer with SoC plus ML plus wireless connectivity, all on a board running Mendel, a derivative of Debian Linux, so you can run your favorite Linux tools with this board.
Supports TensorFlow Lite: no need to build models from the ground up. TensorFlow Lite models can be compiled to run on the Edge TPU.
Supports AutoML Vision Edge: easily build and deploy fast, high-accuracy custom image classification models to your device with AutoML Vision Edge.
Scales from prototype to production: considers your manufacturing needs. The SoM can be removed from the baseboard, ordered in bulk, and integrated into your own hardware.
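As a sketch of what "TensorFlow Lite models compiled to run on the Edge TPU" looks like in practice, the snippet below loads a compiled model with the TensorFlow Lite Python runtime and the Edge TPU delegate. The `edgetpu_model_name` helper and the CPU fallback are illustrative assumptions, not part of the board's documentation; the `_edgetpu` filename suffix follows the Edge TPU Compiler's output convention.

```python
# Minimal sketch: run one inference on the Coral Dev Board's Edge TPU
# using the tflite_runtime package. Assumes a model already compiled
# for the Edge TPU; names below are illustrative.

def edgetpu_model_name(path):
    """Conventional filename the Edge TPU Compiler produces
    (e.g. model.tflite -> model_edgetpu.tflite)."""
    base, ext = path.rsplit(".", 1)
    return f"{base}_edgetpu.{ext}"

def run_inference(model_path, input_array):
    """Run one inference, preferring the Edge TPU delegate."""
    from tflite_runtime.interpreter import Interpreter, load_delegate
    try:
        # libedgetpu.so.1 is the Edge TPU runtime shared library
        delegates = [load_delegate("libedgetpu.so.1")]
    except (OSError, ValueError):
        delegates = []  # no Edge TPU available: fall back to CPU
    interpreter = Interpreter(model_path=model_path,
                              experimental_delegates=delegates)
    interpreter.allocate_tensors()
    input_detail = interpreter.get_input_details()[0]
    interpreter.set_tensor(input_detail["index"], input_array)
    interpreter.invoke()
    output_detail = interpreter.get_output_details()[0]
    return interpreter.get_tensor(output_detail["index"])
```

On the board itself, `tflite_runtime` and the Edge TPU runtime ship with Mendel Linux, so the same script runs unchanged whether or not the delegate loads.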
CPU: NXP i.MX 8M SoC (quad Cortex-A53, Cortex-M4F)
GPU: Integrated GC7000 Lite Graphics
ML accelerator: Google Edge TPU coprocessor
RAM: 1 GB LPDDR4
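As a quick sanity check, the Edge TPU throughput and efficiency figures quoted above (4 TOPS at 0.5 watts per TOPS) are mutually consistent and imply roughly 2 watts at full load:

```python
# Sanity check on the quoted Edge TPU figures:
# 4 TOPS at 0.5 W per TOPS implies ~2 W at full load,
# which is exactly 2 TOPS per watt.
tops = 4.0              # trillion operations per second
watts_per_tops = 0.5    # quoted power cost per TOPS
total_watts = tops * watts_per_tops
tops_per_watt = tops / total_watts
print(total_watts, tops_per_watt)   # → 2.0 2.0
```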