Adoption of generative AI on edge devices will progress faster than most people imagine. LLMs can help users complete complex tasks directly on mobile devices, while LVMs open up new possibilities for entertainment and creation.
Qualcomm’s senior vice president of product management, Ziad Asghar, believes that the adoption of generative AI on edge devices will progress faster than most people imagine. He made the remarks in an interview on the sidelines of the recently concluded Snapdragon Summit 2023.
Asghar noted that many AI PCs will be released in 2024, and that mobile processors released in late 2023 already support generative AI. He said that all stakeholders in the ecosystem, including OEM/ODM customers, developers of AI models and software, and large platform operators, are proactively developing new solutions.
Asghar continued that large language models (LLMs) and large vision models (LVMs) have ushered in a wave of generative AI that is likely to radically change the way humans interact with machines.
LLMs can help users complete complex tasks directly on mobile devices, while LVMs open up new possibilities for entertainment and creation. On PCs, generative AI can automate a variety of tasks, such as recording, translation, writing, coding, and presentation generation, greatly improving productivity.
Moving AI to edge devices brings many benefits, chief among them personalization. In addition to reducing the load on cloud infrastructure and avoiding the latency and connectivity limitations of cloud-based AI applications, on-device AI models can tune themselves to better meet each individual's needs, according to Asghar.
In addition, by keeping sensitive personal information within AI models running on the device, users avoid the risk of data theft that comes with uploading that data to the cloud. This is also valuable for enterprise applications; for example, intellectual property can be processed by AI applications with greater confidence.