USC researchers have announced a breakthrough in memristive technology that could shrink edge computing for AI to smartphone-sized devices.

As artificial intelligence continues to permeate our daily lives, the computing demands placed on the underlying hardware are becoming more stringent. Today, the most sophisticated, large-scale AI models, such as those behind ChatGPT, run in the cloud. To fully unlock the potential of AI, many believe it will be necessary to bring these models to the edge. Achieving this will require a combination of highly optimized, lightweight AI models and dense, powerful computing hardware.

This week, researchers from USC made news in the industry with the publication of a new paper claiming to achieve “the best memory of any chip for edge AI.” In this article, we’ll discuss the need for improved memory for edge AI and the new memristive technology from USC.

AI’s Need for Better Memory

As AI algorithms become more complex, the need for faster and more efficient memory technologies grows increasingly important. However, traditional memory technologies struggle to meet the demands of edge AI: dynamic random-access memory (DRAM) is volatile and consumes significant power to refresh its contents, while Flash memory, though non-volatile, suffers from slow write speeds and limited write endurance.
