If you can’t create it, buy it. That’s the strategy Facebook parent Meta’s CEO Mark Zuckerberg has followed in a bid to stay ahead in the Silicon Valley AI race. After rumours of a superintelligence team emerged, Zuckerberg has confirmed its existence. In an internal memo to his employees, sourced by Bloomberg, Zuckerberg confirmed the creation of Meta Superintelligence Labs (MSL).
Reportedly backed by a staggering $14.3 billion investment, this new venture within Meta aims to put the company ahead of OpenAI and all its other rivals in the AI race. “We’re going to call our overall organization Meta Superintelligence Labs (MSL). This includes all of our foundations, product, and FAIR teams, as well as a new lab focused on developing the next generation of our models,” said Zuckerberg in the memo.
The highly anticipated venture features a stacked roster of talent drawn from the top tier of AI innovation, including experts from OpenAI, Google DeepMind, Google Research, and Anthropic. Their collective work has shaped critical advancements in large language models, multimodal AI, complex reasoning, and cutting-edge image generation.
To head the new division, Zuckerberg has appointed Alexandr Wang, former CEO of Scale AI, as Meta’s Chief AI Officer. Nat Friedman, previously CEO of GitHub, will serve as co-lead, focusing on AI product strategy and applied research. Beyond these two, Meta has poached a number of high-profile AI researchers from its rivals to form MSL.
Key AI talent Meta poached from its rivals

Trapit Bansal: Known for pioneering Reinforcement Learning (RL) on Chain of Thought and being a co-creator of OpenAI’s o-series models.
Shuchao Bi: A key figure behind GPT-4o’s voice mode and o4-mini, who previously led multimodal post-training at OpenAI.
Huiwen Chang: Co-creator of GPT-4o’s image generation and the inventor of the MaskGIT and Muse text-to-image architectures during her tenure at Google Research.
Ji Lin: A prolific contributor to numerous foundational models, including o3/o4-mini, GPT-4o, GPT-4.1, GPT-4.5, 4o-imagegen, and the Operator reasoning stack.
Joel Pobar: Brings extensive experience in inference from Anthropic, following an 11-year career at Meta where he worked on HHVM, Hack, Flow, Redex, performance tooling, and machine learning.
Jack Rae: Formerly the pre-training tech lead for Gemini and reasoning lead for Gemini 2.5, he previously led early LLM efforts like Gopher and Chinchilla at DeepMind.
Hongyu Ren: A co-creator of several OpenAI models, including GPT-4o, 4o-mini, o1-mini, o3-mini, o3, and o4-mini, having previously led a post-training group at OpenAI.
Johan Schalkwyk: A former Google Fellow and an early contributor to Sesame, also serving as the technical lead for Maya.
Pei Sun: Focused on post-training, coding, and reasoning for Gemini at Google DeepMind, and was instrumental in creating the last two generations of Waymo’s perception models.
Jiahui Yu: A co-creator of o3, o4-mini, GPT-4.1, and GPT-4o, who previously led the perception team at OpenAI and co-led multimodal efforts at Gemini.
Shengjia Zhao: Credited as a co-creator of ChatGPT, GPT-4, all mini models, 4.1, and o3, with prior leadership in synthetic data at OpenAI.
Zuckerberg details future plans for Meta AI
“I’m excited about the progress we have planned for Llama 4.1 and 4.2. These models power Meta AI, which is used by more than 1 billion monthly actives across our apps and an increasing number of agents across Meta that help improve our products and technology. We’re committed to continuing to build out these models,” says Zuckerberg in his memo.
He also confirms plans to begin research on the next generation of AI models within a year, and is keen on assembling a founding group for a small, talent-dense effort. “I’ve spent the past few months meeting top folks across Meta, other AI labs, and promising startups to put together the founding group for this small talent-dense effort. We’re still forming this group and we’ll ask several people across the AI org to join this lab as well,” he added.