Artificial intelligence and machine learning (AI/ML) offer an unparalleled ability to recognize complex patterns and make fast decisions. Consequently, companies are racing to add AI/ML inference capabilities to a wide range of products, and many chip vendors in the market are ready to help.

These vendors offer AI/ML capabilities either integrated into a system-on-chip (SoC) or as stand-alone hardware AI/ML accelerators. The market for these chips is increasingly crowded, especially for embedded products.

When selecting an AI/ML implementation technology, you’re faced with a wide array of choices, which can look overwhelming. You can run AI/ML models on un-augmented microprocessors or microcontrollers. This alternative delivers low performance and poor efficiency, but most processor vendors provide software libraries that let their processors run models exported from the standard AI/ML development tools.

You can also get AI/ML tools developed specifically for un-augmented processor ISAs. For example, TensorFlow Lite for Microcontrollers was originally developed for microcontrollers and SoCs that integrate Arm Cortex-M processor cores. The tool, which is written in C++, has since been ported to other processor architectures; the sketch below shows its basic inference flow.
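To make that concrete, here is a minimal C++ sketch of the TensorFlow Lite for Microcontrollers inference flow. The model array name (g_model_data), the arena size, and the two registered operators are assumptions for illustration, and the MicroInterpreter constructor arguments vary slightly across TFLM releases, so treat this as a sketch rather than copy-paste code.

    // Minimal TFLM inference sketch. Assumes the trained model has been
    // converted to a C array (hypothetically named g_model_data) with a
    // tool such as `xxd -i`.
    #include <cstdint>
    #include "tensorflow/lite/micro/micro_interpreter.h"
    #include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
    #include "tensorflow/lite/schema/schema_generated.h"

    extern const unsigned char g_model_data[];   // flatbuffer model in flash

    constexpr int kTensorArenaSize = 10 * 1024;  // assumed; sized per model
    alignas(16) static uint8_t tensor_arena[kTensorArenaSize];

    float run_inference(const float* features, int n) {
      const tflite::Model* model = tflite::GetModel(g_model_data);

      // Register only the operators the model actually uses to save flash.
      // (Assumes a small dense classifier here.)
      tflite::MicroMutableOpResolver<2> resolver;
      resolver.AddFullyConnected();
      resolver.AddSoftmax();

      tflite::MicroInterpreter interpreter(model, resolver,
                                           tensor_arena, kTensorArenaSize);
      interpreter.AllocateTensors();             // carve tensors out of the arena

      TfLiteTensor* input = interpreter.input(0);
      for (int i = 0; i < n; ++i) input->data.f[i] = features[i];

      interpreter.Invoke();                      // run the model
      return interpreter.output(0)->data.f[0];   // first output score
    }

Note the design point this illustrates: TFLM does no dynamic allocation; all working memory comes from the caller-supplied tensor arena, which is what makes it practical on RAM-constrained microcontrollers.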

However, processors that lack hardware support for AI/ML tasks are slow and inefficient at these workloads, because running AI/ML models is dominated by multiply-accumulate arithmetic. So, you generally need vector or tensor hardware to get good performance: a scalar core executes one multiply-accumulate at a time, while vector or tensor units compute many per cycle, as the sketch below illustrates. Many microcontroller vendors, including STMicroelectronics, Renesas, NXP and XMOS, have added such hardware to their processors to accelerate AI/ML model execution.
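To see where the cycles go, consider a single fully connected layer: every output element is a long chain of multiply-accumulate (MAC) operations, and a model can run millions of them per inference. This C++ sketch uses hypothetical layer sizes and is illustrative only, not how any vendor library implements it:

    // Inner loop of one fully connected layer. On a scalar CPU, each
    // iteration of the inner loop is one multiply plus one add; vector
    // or tensor hardware retires many of these MACs per cycle.
    #include <cstddef>

    void dense_layer(const float* in, const float* weights,
                     const float* bias, float* out,
                     std::size_t n_in, std::size_t n_out) {
      for (std::size_t o = 0; o < n_out; ++o) {
        float acc = bias[o];
        for (std::size_t i = 0; i < n_in; ++i) {
          acc += in[i] * weights[o * n_in + i];  // one MAC operation
        }
        out[o] = acc;
      }
    }

A layer with 256 inputs and 128 outputs already costs 32,768 MACs, which is why dedicated MAC, vector, or tensor hardware pays off so quickly.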
