Neurxcore, a provider of cutting-edge Artificial Intelligence (AI) solutions, has launched its Neural Processor Unit (NPU) product line for AI inference applications.
The SNVDLA IP series is built on an enhanced and extended version of NVIDIA's open-source Deep Learning Accelerator (Open NVDLA) technology, combined with patented in-house architectures. It sets a new standard for energy efficiency, performance, and capability, with a primary focus on image processing, including classification and object detection. SNVDLA also offers versatility for generative AI applications; it has already been silicon-proven on a 22nm TSMC process and showcased on a demonstration board running a variety of applications.
The IP package also includes the Heracium SDK (Software Development Kit), built by Neurxcore on the open-source Apache TVM (Tensor Virtual Machine) framework, to configure, optimize, and compile neural network applications for SNVDLA products. Neurxcore's product line caters to a wide range of industries and applications, from ultra-low-power to high-performance scenarios: sensors and IoT, wearables, smartphones, smart homes, surveillance, Set-Top Box and Digital TV (STB/DTV), smart TV, robotics, edge computing, AR/VR, ADAS, servers, and more.
In addition to this product, Neurxcore offers a complete package for developing customized NPU solutions, including new operators, AI-enabled optimized subsystem design, and optimized model development covering training and quantization.