Three months ago, Howard Marks, co-chairman of Oaktree Capital Management, wrote a memo asking whether artificial intelligence was a financial bubble. He has come back with a follow-up, and the answer is no longer tentative. In a memo titled ‘AI Hurtles Ahead,’ released February 26, 2026, Marks and Oaktree lay out a striking case: AI has crossed from productivity tool to autonomous agent, and the world has not yet caught up with what that means.

The memo, written partly in collaboration with Anthropic’s AI model Claude, covers AI’s technical foundations, recent capability jumps, what it means for investing, and why the social consequences could be more destabilising than anything technology has done before.

1. AI is not a search engine: it synthesises and reasons

Oaktree opens its memo by clearing up the most common misconception about how AI works. Most people picture AI as a very fast Google. The memo argues that framing misses the point entirely.

“[AI is] a computer system that’s capable of synthesizing data and reasoning from it,” the memo states.

The memo explains that an AI model goes through two phases: training and inference. During training, the model does not merely memorise facts. It learns how to think. It absorbs patterns from vast amounts of text, then uses those patterns to reason through new problems. Oaktree compares this to how a baby develops intelligence by absorbing stimuli from the world, not by being pre-loaded with answers.

2. AI moved from chat to autonomous agent in under two years

Oaktree draws a clear line between three levels of AI capability, and its conclusion about where AI currently sits is the most consequential part of the memo.

“Level 3 is autonomous agents. At this level, the user doesn’t tell AI what to do. The user gives it a goal as well as the parameters of the desired output. The agent does the work, checks it, and submits a finished product. This is labour replacement at the task level. Not assistance replacement,” the memo states.

The memo notes that AI was at Level 1 in 2023 and Level 2 in 2024. By early 2026, Oaktree says it has reached Level 3. The firm is blunt about what this distinction means financially: “The distinction between Level 2 and Level 3 might sound subtle. It isn’t. It’s the difference that determines whether AI is a productivity tool or a labour substitute. And that difference is what separates a $50 billion market from a multi-trillion dollar one.”

3. The speed of AI’s growth has no historical precedent

Oaktree puts AI’s pace of adoption in historical context, and the comparison is jarring. The first computer, ENIAC, was completed in 1945. It took nearly 40 years before IBM began selling personal computers for general business and home use. AI, by contrast, went from invisible infrastructure to being used by roughly 400 million individuals and 75-80% of companies in under two years after it was framed as a general-purpose technology.

“Nothing has ever taken hold at the pace AI has. It’s able to change the world at a speed that approaches instantaneous, outpacing the ability of most observers to anticipate or even comprehend,” the memo says.

The memo also cites a blog post from Matt Shumer, CEO of OthersideAI, which was viewed by more than 50 million people in less than a month. Shumer wrote that he is “no longer needed for the actual technical work” of his job, describing how AI now writes, tests, and ships finished software without his intervention.

4. AI helped build itself, and that changes everything

Among the most striking details in the Oaktree memo is what OpenAI disclosed about its February 5, 2026 release of GPT-5.3 Codex. The documentation stated plainly that the model was instrumental in creating itself, having been used to debug its own training, manage its own deployment, and evaluate its own test results.

“Read that again. The AI helped build itself,” the memo says.

Oaktree cites Dario Amodei, CEO of Anthropic, who said AI is now writing much of the code at his company, and that the feedback loop between current AI and next-generation AI is “gathering steam month by month.” Amodei has suggested the industry may be only one to two years away from a point where the current generation of AI autonomously builds the next.

5. AI has real limits, and Oaktree is not pretending otherwise

The firm lists several concrete limitations it says investors and users should keep in mind. AI still experiences hallucinations, where it presents wrong answers with confidence. It works within a context window, meaning it cannot hold unlimited information in working memory at once. Its reliability has improved sharply but it still makes mistakes. And its brilliance can lead people to trust it more than they should.

“‘Claude can make mistakes. Please double-check responses.’ That warning appears at the bottom of my Claude screen every time I use it,” Marks noted.

The firm also raises the question of AI takeover in a broader sense, referencing the 1968 film 2001: A Space Odyssey and the HAL 9000 computer. Marks asks plainly whether AI will eventually develop its own motivations and refuse human instructions. He does not answer the question. He says it is worth asking.

6. For investing, AI possesses what humans need but lacks what they have

Oaktree’s analysis of AI’s role in investing is nuanced in a way that much of the commentary on this topic is not. The firm says AI can absorb more data than any individual, is less susceptible to emotional biases like fear and greed, and can process quantitative information better than virtually everyone.

“AI possesses a lot of the qualities one needs to be a good investor,” the memo notes.

But Oaktree argues AI is weakest precisely where the best investors are strongest: in situations with no historical precedent, where pattern-matching from past data is not enough. The firm also points out that AI does not have skin in the game. It does not feel the weight of a wrong call. Marks believes there will continue to be human investors who are superior to AI, but that those individuals will need to be genuinely exceptional. Indexation already pushed out the mediocre. AI, Oaktree warns, will push the bar higher still.

7. On the bubble question, Oaktree’s answer: Proceed with caution, not fear

Marks returns to the bubble question he posed in December 2025, and he breaks it into five separate questions: Is AI real? Is it being applied? Are infrastructure builders being reckless? Will the investment produce adequate returns? And are the valuations irrational? His answers vary by question.

“Since no one can say definitively whether this is a bubble, I’d advise that no one should go all-in without acknowledging that they face the risk of ruin if things go badly. But by the same token, no one should stay all-out and risk missing out on one of the great technological steps forward,” noted Marks.

On valuations, the firm is more careful. Large companies like Microsoft, Amazon, and Google may be appropriately priced. But some AI startups with multi-billion dollar valuations and no announced products are, in Marks’s words, lottery tickets.

Conclusion

Oaktree’s point is that AI is doing something no prior technology has done: it is not just replacing tasks humans already performed, it is beginning to take on work humans had not imagined handing over.

It says with conviction that the pace of change already exceeds society’s ability to process it, that the social consequences around employment are serious and underappreciated, and that the right posture for anyone is to stay engaged without betting everything on any single outcome.

“The bottom line for me is that AI is very real, capable of doing a lot of work that heretofore has been done by knowledge workers, and growing extremely rapidly in terms of applications. What we see today is only the beginning,” Marks writes in ‘AI Hurtles Ahead.’