OpenAI CEO Sam Altman emphasized the importance of mastering new AI tools during his conversation with Nikhil Kamath on the WTF podcast. According to Altman, the key to staying ahead in today’s rapidly evolving tech landscape is to focus on becoming highly proficient in using these advanced AI technologies.
“I think the most important thing to study is just getting really good at using the new AI tools,” he noted.
“Right now, learning how to use AI tools is probably the most important specific hard skill to learn, and the difference between people who are really good at it, who are really AI native and think of everything in terms of those tools, and those who don’t is huge.”
He stressed meta-learning over the choice of any particular subject: “If you get good at learning things, you can learn new things quickly.” He added, “There’s other general skills, learning kind of like to be adaptable and resilient, which I think is something really learnable that’s quite valuable in a world that’s changing so fast.”
India’s Potential Role in Shaping the Future of AI:
When Nikhil Kamath asked about India’s potential role in shaping the future of AI, Sam Altman said that India is OpenAI’s second-largest market in the world and may soon become its largest. He emphasized how feedback from Indian users, spanning language support, affordability, and access, has directly shaped product development at OpenAI.
Reflecting on the country’s AI potential, Altman said, “If there is one large society in the world that seems most enthusiastic to transform with AI right now, it’s India. The excitement, the embrace of AI…the energy is incredible.” For India, Altman said, the real opportunity lies in moving from consumption to global creation: building the tools, platforms, and companies the rest of the world will use.
On the Difference Between AGI and Human Intelligence:
When asked about the difference between AGI and human intelligence today, and what that gap looks like in the future, Sam explained: “With GPT-5 you have something that is incredibly smart in a lot of domains at tasks that take, you know, seconds to a few minutes.” He added, “It’s very superhuman at knowledge, at pattern recognition, at recall on these shorter-term tasks.” But he acknowledged key limitations: “In terms of figuring out what questions to ask or to work on something over a very long period of time, we are definitely not close to human performance.”