Qualcomm Technologies is betting big on Physical AI. Durga Malladi, EVP and GM, technology planning, edge solutions & data center, talks about the company’s partnership with Sarvam AI, its bets on AI edge use cases and how the nature of devices is changing due to the proliferation of AI agents, in an interview with Poulomi Chatterjee on the sidelines of the India AI Impact Summit. Excerpts:
AI is rapidly moving beyond the cloud to edge use cases. How do you see this phase of AI adoption unfolding?
There are two things happening at the same time. One is a gradual understanding that, with the capabilities of the devices around you along with servers located on-prem, enterprises are quite capable of hosting sophisticated AI models that complement workloads in the data centre as well. The true usage of AI now relies not on the training of models but on inference. There’s a clear understanding that we need AI inference distributed across the entire network.
This is called hybrid AI. Secondly, in terms of AI usage, we can run AI models on devices and on servers, but the key is to make sure that you can build the right kind of use cases on top of them. This is where we see a lot of innovation. There’s a new AI phone launched by ByteDance where the user interface is dominated by an AI agent. The apps are not even visible to you. You just talk to the agent and it performs tasks behind the scenes. That’s an example of how AI is changing the user interface while also bringing new kinds of devices into the market.
How important are partnerships for scaling AI globally? Sarvam AI and Qualcomm just announced a collaboration to deploy AI models into a range of devices.
AI has become a national conversation globally. Every country has its own perspective on how it wants to use AI for its citizens, and local language models form a big part of this. When Sarvam AI emerged more than a year ago, we decided to work with them because they had a large number of models. Some models are very well suited for smartphones, some might be better suited for PCs, some for glasses. So they have diversity in use cases.
Physical AI and robotics have been gaining momentum. How is Qualcomm looking at the opportunity in this space?
Our work in Physical AI goes beyond robotics and spans from consumer IoT to smartphones to industrial IoT to data centres and the automotive sector. We first unveiled our robotics platform at the Consumer Electronics Show in Las Vegas. It is our firm belief that AI inference needs to run where the data is locally generated. Secondly, there are many instances where you can run AI inference in the data centre or in some cloud, outsourcing the inference. However, there are specific use cases where you have no choice but to run inference locally on the device. Robotics is a prime example of such a use case. Qualcomm is a big believer in Physical AI, and we believe the investments we have made in this space position us well.
Do you think real-world use cases for humanoids are feasible in the world’s current state?
The pace of innovation in Physical AI is so fast that I would not be surprised if we start seeing some really compelling use cases. As Qualcomm, we build the platforms and work on them behind the scenes, but we see only a glimpse of the use cases. We rely on our partners, who define the use cases… In some sense, we are building and investing a little ahead of time, and what you see today are still the early days. But it’s evolving very rapidly.
Do you foresee the market for AI devices becoming significant soon?
Absolutely, we are very bullish on that. Especially because we don’t just see the potential; we are beginning to see new kinds of devices emerge that simply didn’t exist before. AR glasses are probably the easiest example.
How else do you think AI agents are changing the interface and how users interact?
Most of the premium-tier smartphones today have the ability to run very large AI models. They are already integrated and continue to evolve. Every smartphone is coming out with its own AI agent. We work very closely with Google’s Gemini, where you can ask even a very sophisticated question and the agent understands what it is you’re asking, then tries to break it down and give you an answer.
Let’s say you want to buy something on Flipkart or Amazon, and you have a question about the product, but you also want to check your bank balance or your credit card. Today, you have to open one app and then a separate one; instead, if you ask the agent, it can do all the tasks on its own. Such things are already emerging. The same will apply to PCs as well. Instead of trying to remember exactly what you stored and in which file you stored it, you can just instruct an AI agent to find what you’re looking for.
