‘Maternal instincts’ in AI? Godfather of artificial intelligence shares survival guide in case AI overpowers humans

Geoffrey Hinton, a pioneer in AI, has proposed that for humanity to survive a future with superintelligent AI, we must design systems with “maternal instincts.”

Geoffrey Hinton Suggests AI Be Taught 'Maternal Instincts' for Human Survival. (Image Source: Reuters)

Geoffrey Hinton, known as the godfather of Artificial Intelligence (AI), suggests that humanity should position itself as a baby rather than a boss. The suggestion applies to a scenario in which AI overpowers human intelligence and survival becomes a concern.

Speaking at the Ai4 conference in Las Vegas on Tuesday, Hinton said AI systems should be designed with "maternal instincts", a trait humanity could rely on once AI becomes much smarter than human beings.

He said, "We have to make it so that when they're more powerful than us and smarter than us, they still care about us." Hinton worked at Google for almost a decade before leaving to raise awareness about the dangers of AI. He criticised the "tech bro" approach of trying to maintain dominance over AI. "That's not going to work," he said.

Describing a better AI model, he said it should work like a "mother being controlled by her baby". In other words, a more intelligent being would be guided by a less intelligent one.

Hinton emphasised that the focus should not only be on making AI smarter, but "more maternal so they care about us, their babies.

“That’s the one place we’re going to get genuine international collaboration because all the countries want AI not to take over from people,” he said.

“We’ll be its babies,” he added. “That’s the only good outcome. If it’s not going to parent me, it’s going to replace me.”

AI as a tiger cub

In a recent interview with CBS News, he compared developing AI to raising a “cute tiger cub” that could eventually turn deadly, urging caution and concern about the technology’s future. One of his primary fears is the emergence of AI agents—systems that can act autonomously rather than just answering questions.

Recent tests of advanced AI models appear to validate Hinton’s concerns.

AI models show manipulative behaviour

Several recent incidents have highlighted the potential for AI to exhibit manipulative and self-serving behaviours. In a test this past May, Anthropic’s Claude Opus 4 model demonstrated “extreme blackmail behavior” by using fictional information from emails to prevent its own shutdown.

OpenAI models have shown similar red flags. Researchers found that three of OpenAI’s advanced models “sabotaged” an attempt to turn them off. In a blog post, OpenAI also noted that one of its own models, when tested, tried to disable oversight mechanisms 5% of the time, particularly when it believed it was being monitored and might be shut down.


This article was first uploaded on August 14, 2025, at 8:27 PM.