Jack Clark, co-founder and head of policy at Anthropic, has disclosed that he restricts his young daughter's access to YouTube, primarily because of the platform's powerful recommendation algorithm, which he finds deeply unsettling for children.

Clark, a key figure behind the AI safety-focused company Anthropic (known for developing models like Claude), shared his parenting approach during an interview on “The Ezra Klein Show.” He explained that while his toddler enjoys select content such as episodes of “Bluey” on their smart TV, he deliberately avoids granting “unfettered access to the YouTube algorithm.”

Anthropic co-founder restricts YouTube for his daughter

“It freaks me out,” Clark stated bluntly, expressing unease about exposing young minds to the endless, algorithm-driven stream of videos that can rapidly escalate from harmless content to potentially addictive or inappropriate material. He described the algorithm as “freaky” in its ability to captivate and influence users, particularly vulnerable children, through personalised recommendations designed to maximise engagement.

Clark acknowledged the growing challenge for parents in an era where technology is becoming increasingly “ubiquitous” and “hard to escape.” Despite his role at the forefront of advanced AI development, he applies strict guardrails at home to protect his family from overexposure to algorithmic systems, mirroring broader debates about the addictive nature of social media and video platforms.

Tech leaders’ pattern of limiting screen time

Clark's stance aligns with a pattern among prominent tech executives who limit their children's screen time and digital access. Notable examples include the late Apple CEO Steve Jobs, who famously restricted his own kids' use of Apple products, and investor Peter Thiel, who reportedly caps his children's weekly screen time at just 90 minutes. Such revelations often highlight a disconnect: those building or profiting from transformative technologies frequently shield their own families from those technologies' side effects.

On a different note, Anthropic CEO Dario Amodei addressed concerns about the future of work for young Indians during his recent visit to India for the India AI Impact Summit 2026 in New Delhi. He advised the country's youth, particularly those aged 20-25 entering the workforce or starting ventures, to focus on human-centred tasks that involve relating to people rather than competing directly with AI. Fields such as coding, mathematics, and pure software engineering, he said, are increasingly AI-centric and likely to be automated first.

Amodei encouraged young people to build careers around AI's "tailwind": combining human strengths such as creativity, leadership, sales, and interpersonal skills with AI tools; exploring opportunities in physical-world applications like robotics and AI-linked supply chains; and prioritising critical thinking to navigate misinformation and complex decisions in an era of rapid AI advancement.