Transformers are making their way from generative AI and large language models down to embedded chips.

Avi Baum, chief technology officer and co-founder of Hailo in Israel, talks to Nick Flaherty about the different use cases for transformer AI driving its third generation of chip design.

“There’s a huge span of misuse of terms in this domain,” says Baum. “Transformers are the major building blocks that arrived from natural language processing (NLP), but now they are being used across many fields: imaging, audio, text, whatever. And this is critical, as they are complementary to CNNs,” he tells eeNews Europe.
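What makes the block modality-agnostic is that self-attention operates on generic token sequences, whether the tokens are word embeddings or image patches. A minimal sketch using PyTorch's nn.TransformerEncoderLayer illustrates the point; this is not Hailo's implementation, and the dimensions are arbitrary:

```python
import torch
import torch.nn as nn

# One transformer encoder layer: multi-head self-attention plus a
# feed-forward network. The block does not know what its tokens represent.
block = nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True)

# Text-style input: a batch of 2 sequences of 16 token embeddings.
text_tokens = torch.randn(2, 16, 256)

# Image-style input: 16 patch embeddings per image (e.g. a 32x32 image
# cut into 8x8 patches), projected into the same 256-dim space.
image_patches = torch.randn(2, 16, 256)

# The identical layer processes both; only the tokenisation step differs.
out_text = block(text_tokens)     # shape: (2, 16, 256)
out_image = block(image_patches)  # shape: (2, 16, 256)
```

This is the sense in which transformers complement CNNs: a convolution bakes in spatial locality, while the attention block learns which tokens relate to which, whatever the data type.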

The technology is already being used to reduce the bandwidth of 5G radio networks by de-noising the data, and even to re-create digital avatars for video conferencing with the first-generation Hailo-8 chip. “At Hailo our support for transformers is growing over time, driven by our customers,” he said.

The company is one of the best-funded AI chip designers, having raised $224m according to Crunchbase. The second-generation chips, launched in March and currently sampling, are being tweaked to add support for transformer AI, and this is a key area for the third-generation architecture under development.

While transformers are key to cloud-based generative AI systems such as DALL-E and ChatGPT, they are also being used for image recognition at the edge on embedded chips such as the Hailo-8 processor. The key is the balance of cloud and local processing for individual use cases, he says.

“People are using generative AI for two different things: for generating content such as new images from a text prompt. But I see many cases where people are extending GenAI to anything with LLMs or multiple modalities to build AI for the new age. That creates a lot of confusion,” he said.
