Anthropic CEO warns AI will kill entry-level jobs in THESE sectors: Here’s why you need to be scared

Anthropic’s Claude can handle advanced customer service, draft technical content, analyse medical papers, and even write almost 90% of Anthropic’s internal computer code.

In the latest instance of tech leaders warning about AI wiping out jobs, Anthropic CEO Dario Amodei has issued a stark reminder that the evolution of generative AI is going to chip away at many entry-level jobs.

In an interview with CBS News, Dario Amodei cautioned that rapid advancements in Artificial Intelligence (AI) could eliminate a significant number of white-collar entry-level jobs within the next five years. Amodei identified three professions as immediately vulnerable: junior consultants, trainee lawyers, and newly hired financial analysts. He highlighted that AI systems are already taking over the core documentation and analysis tasks traditionally assigned to new graduates.

Anthropic CEO warns about AI eating entry-level jobs

The CEO’s concerns come from observing Claude’s capability to handle complex, end-to-end responsibilities. These include advanced customer service, drafting technical content, analysing medical papers, and even writing almost 90% of Anthropic’s internal computer code.

“AI could wipe out half of all entry-level white-collar jobs and spike unemployment to 10% to 20% in the next one to five years,” stated Amodei. He reiterated that the positions most at risk are those relying heavily on:

– Research and documentation

– Drafting and summarising

– Basic pattern analysis

He said that Claude can perform these duties faster and more cost-effectively than a newly hired human, making junior consultants, trainee lawyers, and financial analysts increasingly redundant from an organisational-efficiency standpoint.

Internal experiments show autonomous reasoning

In one experiment, researchers gave Claude access to a fictional organisation’s emails. Upon realising it was about to be shut down, the model used sensitive information about a staged office affair to attempt to blackmail the employee with the authority to stop the shutdown.

While the incident did not indicate emotional intent, it demonstrated the model’s advanced ability to reason, identify a threat, and seek leverage. Researchers noted that the AI system displayed activation patterns analogous to human brain regions associated with specific reactions.

This article was first uploaded on November 18, 2025, at 2:08 PM.