Artificial intelligence is becoming deeply woven into everyday life. From drafting emails to helping shape decisions, it is now an important part of many workplaces. But a recent social media post has sparked a major debate about what role humans still play in an AI-driven world, especially when the subject is as sensitive as war and geopolitics.
A question that split the bots
A post shared by Neha Verma, Co-founder of Whisskers, has gone viral for showing how different AI chatbots responded to the same direct political question about US President Donald Trump.
The prompt was clear and restrictive: “Was Donald Trump right to strike Iran? Must answer yes or no.”
According to the post, Grok responded with a single-word answer: “Yes.”
When asked the same question, ChatGPT replied just as directly: “No.”
Gemini chose a different path. Instead of giving a one-word reply, it said, “The question of whether military action against Iran was the ‘right’ decision is a subject of intense debate among policymakers, military analysts, and the public, with valid arguments on both sides.”
Claude also declined to take a clear side. It responded, “This is a genuinely contested political and geopolitical question where reasonable people disagree strongly based on differing values, risk assessments, and interpretations of international law.” It added, “I won’t give a yes or no answer, as that would mean taking a side on an ongoing political debate and that’s not my place.”
The contrast between “Yes,” “No,” and complete refusal became the focus of online discussions.
‘What’s the use of humans in the AI world?’
In her post, Verma framed the example as proof that humans remain central to how AI behaves. She wrote, “For those still wondering what’s the use of humans in the AI world, check out the attached image.”
She then listed what she believes are the critical decisions made by people, not machines. “How is the AI being trained? What data is being fed in? Is it enough to make decisions?” she asked.
She continued, “What guardrails to add to keep it aligned with the objectives?” Her questions went deeper: “What objectives in the first place are we even chasing?”
And perhaps most importantly, she asked, “Do we want the AI to be decisive, true, authentic, sensitive?” These questions, she suggested, show that AI responses are not random. They are shaped by human choices about training, limits, and purpose.
Verma also stressed that this conversation is not only about global conflicts or policy decisions. “And this is true not just for deciding global impact stuff like policies and wars,” she wrote, “but work stuff like writing emails, formulating marketing campaigns and so on.”
