A Vision Processing Unit (VPU) is a specialized processor optimized for computer vision and AI inference workloads. Because VPUs deliver this performance within tight power budgets, they help move computer vision and deep learning from the laboratory into real-world applications.

The Deep Learning Megatrend

Recent advances in deep learning methods, particularly convolutional neural networks (CNNs), have greatly expanded the role of machine learning across a wide range of computer vision tasks.

With deep learning, accuracy in object classification and object detection has improved dramatically. Inference error rates have dropped to the point where machine learning models already surpass human performance in certain scenarios, such as face recognition.

Optimized Deep Learning Hardware

With the deep learning trend comes the need for new hardware architectures that enable higher performance for machine learning tasks, during both training and inference.

General-purpose processors are of limited use for machine learning applications, mainly because irregular memory access patterns cause long memory stalls and demand high bandwidth. As a side effect, power consumption and thermal dissipation requirements increase significantly.

Innovations at the software level introduced tensor-based data formats. A tensor is a generalization of vectors and matrices and is easily understood as a multidimensional array. Because tensor operations are dense and regular, they map well onto parallel hardware, which yields advantages in both performance and power consumption.
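To make the tensor idea concrete, here is a minimal sketch in Python with NumPy (the article does not prescribe either): scalars, vectors, matrices, and image batches are all just n-dimensional arrays, and a typical network layer is a dense operation over them.

```python
import numpy as np

# Tensors of increasing rank: all of them are just n-dimensional arrays.
scalar = np.array(3.0)              # rank 0: a single number
vector = np.array([1.0, 2.0, 3.0])  # rank 1: shape (3,)
matrix = np.ones((3, 4))            # rank 2: shape (3, 4)

# A batch of small RGB images is naturally a rank-4 tensor:
# (batch, height, width, channels).
images = np.random.rand(8, 32, 32, 3)

# A fully connected layer is a dense tensor operation: flatten each image
# and multiply by a weight matrix. The arithmetic is regular and the memory
# access is contiguous, which is exactly what specialized accelerators exploit.
weights = np.random.rand(32 * 32 * 3, 10)
activations = images.reshape(8, -1) @ weights

print(scalar.ndim, vector.ndim, matrix.ndim, images.ndim)  # 0 1 2 4
print(activations.shape)                                   # (8, 10)
```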

The industry is therefore shifting toward processor designs in which cost, power, and thermal dissipation are key concerns. Specialized co-processors have emerged that reduce energy consumption while improving overall computing performance for deep learning tasks.

As a result, the adoption of power-efficient AI accelerators for computer vision and machine learning on the “edge”, or Edge AI, is an important field in robotics and the Internet of Things (IoT).
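As an illustration of such edge deployment, the following is a minimal sketch that assumes Intel's OpenVINO toolkit and a Myriad-class VPU exposed through the "MYRIAD" device plugin (shipped in OpenVINO releases up to 2022.3); neither the toolkit nor the device is specified above, and the model file name is a placeholder.

```python
import numpy as np
from openvino.runtime import Core  # OpenVINO's Python inference API

# Load a pre-trained model converted to OpenVINO IR (placeholder file name).
core = Core()
model = core.read_model("face_detection.xml")

# Compile the model for the VPU; "MYRIAD" is the plugin name used for
# Myriad-class VPUs. Fall back to the CPU if no accelerator is attached,
# so the script stays runnable on a plain workstation.
device = "MYRIAD" if "MYRIAD" in core.available_devices else "CPU"
compiled_model = core.compile_model(model, device)

# Run inference on a dummy input shaped like the model's first input
# (assumes the model has a static input shape).
input_port = compiled_model.input(0)
dummy = np.random.rand(*[int(d) for d in input_port.shape]).astype(np.float32)
result = compiled_model([dummy])[compiled_model.output(0)]
print(device, result.shape)
```

The point of the sketch is only that, once a network is trained and converted, the same model can be offloaded to a low-power co-processor at the edge instead of a server-class CPU or GPU.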
