Neuromorphic AI chip developer BrainChip believes better metrics are needed to address the limitations of current industry-standard performance benchmarks in edge AI.
The company’s findings, published in a new whitepaper, suggest that metrics must be continually refined to measure performance and efficiency in real-world edge deployments.
BrainChip also posits that its neuromorphic processors increase performance at low power while reducing latency. It sees current industry benchmarks for AI performance, with their focus on TOPS (tera operations per second) metrics, as ill-suited to real-world applications.
Anil Mankar, chief development officer at BrainChip, said: “While there’s been a good start, current methods of benchmarking for edge AI don’t accurately account for the factors that affect devices in industries such as automotive, smart homes and Industry 4.0.”
BrainChip says that new standards alone will not correct the industry’s focus on TOPS metrics, as the challenge of proving real-world performance and power usage remains.
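To see why a headline TOPS figure can mislead on its own, consider a back-of-the-envelope sketch. All numbers below are hypothetical and chosen purely for illustration, not taken from BrainChip or any real chip: effective throughput depends on how well the hardware is utilised, and edge deployments ultimately care about energy per inference, which a peak TOPS rating does not capture.

```python
# Illustrative only: hypothetical figures showing why peak TOPS can mislead.
# Effective throughput depends on utilisation, and edge deployments care
# about energy per inference, which a headline TOPS number does not capture.

def effective_tops(peak_tops: float, utilisation: float) -> float:
    """Usable throughput after accounting for how busy the compute units stay."""
    return peak_tops * utilisation

def energy_per_inference_mj(power_watts: float, ops_per_inference: float,
                            sustained_tops: float) -> float:
    """Millijoules consumed per inference at a given sustained throughput."""
    seconds_per_inference = ops_per_inference / (sustained_tops * 1e12)
    return power_watts * seconds_per_inference * 1e3

# Hypothetical chip A: big headline number, poor utilisation, higher power.
a_eff = effective_tops(10.0, 0.30)   # 3.0 effective TOPS
# Hypothetical chip B: smaller headline number, high utilisation, low power.
b_eff = effective_tops(4.0, 0.80)    # 3.2 effective TOPS

ops = 2e9  # assumed 2 GOPs per inference for a small edge model
a_energy = energy_per_inference_mj(5.0, ops, a_eff)  # ~3.33 mJ
b_energy = energy_per_inference_mj(1.0, ops, b_eff)  # 0.625 mJ

print(a_eff, b_eff)        # chip B delivers more usable throughput
print(a_energy, b_energy)  # and far less energy per inference
```

On the spec sheet, chip A advertises 2.5× the TOPS of chip B; under these assumed conditions, chip B both sustains more usable throughput and uses roughly a fifth of the energy per inference, which is the kind of gap application-level benchmarks are meant to expose.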
The company argues that MLPerf, widely regarded as the standard benchmark for measuring the performance of AI workloads, should be expanded to include application-based parameters and the ability to emulate sensor inputs for a more realistic picture of performance.
“We believe that as a community, we should evolve benchmarks to continuously incorporate factors such as on-chip, in-memory computation, and model sizes to complement the latency and power metrics that are measured today,” Mankar said.
BrainChip says that with application-specific parameters and open-loop/closed-loop datasets, companies will be better equipped to leverage data and optimise AI algorithms across industries such as smart homes, automotive, and Industry 4.0.
“Targeted Industry 4.0 inference benchmarks focused on balancing efficiency and power will enable system designers to architect a new generation of energy-efficient robots that optimally process data-heavy input from multiple sensors,” the whitepaper states.