In the US, research shows that the number of teenagers and young adults with clinical depression more than doubled between 2011 and 2021. The suicide rate for teenagers nearly doubled from 2007 to 2019, and nearly tripled for 10-14-year-olds in particular. Nearly 25% of teenage girls made a suicide plan in 2021. Jean Twenge, a research psychologist, has spent years studying the data and is unequivocal that America’s teenage mental health crisis is the direct product of the rise of smartphones and social media. Twenge confirms that these numbers are not outliers, at least in the English-speaking world: kids in the UK, Canada and Australia suffer similarly high rates of mental disorders.
In another study described by Gillian Tett in the Financial Times, the research group Sapien Labs polled 28,000 young people (Gen Z, 18-24-year-olds), who represent the first generation to go through adolescence with social media as part of their lives. Sapien found, unsurprisingly, that mental health in this cohort was far worse than that of earlier generations.
On the other hand, in China, the government has compelled social media companies to implement blackout hours, built-in breaks and time limits for young people. Kids using Douyin, the Chinese version of TikTok, can use it for only 40 minutes per day. If kids under 14 try to use Douyin between 10PM and 6AM, the app simply won’t work. Last month, the company said it would add five-second pauses between some videos, during which the app will show messages like “put down the phone”, “go to bed” or “work tomorrow”. The breaks might shock some users out of mindless, endless-scroll rabbit holes. About two years ago, China restricted video gaming for kids to three hours on weekends.
Clearly a win, in my view, for the Chinese approach. While I am not nearly naïve enough to see China as some haven of citizens’ rights, they certainly seem to understand that the dangers of contemporary technology need to be managed with a big stick. This stick will never be available to lily-livered democrats, who are structurally unable to take hard decisions, in large part because of the symbiotic relationship between capital and politics.
The capitalist democracies have been struggling for at least a couple of decades to “control” or “regulate” social media companies. Recall the continuing roll call of social media entrepreneurs testifying in platitudes before the US Congress while the cash registers keep ringing a spectacular symphony. And let us recognise that it is not just the mental health of people, young and old, that is a critical issue, but also the many other ills that unbridled social media propagate in the world—hate speech and fake news, for instance.
MIT economics professor Daron Acemoglu (who has won the John Bates Clark Medal, often a precursor to the Nobel Prize) points out that capital takes what it can in the absence of constraints and, while “technological progress is the most important driver of human flourishing…the process is not automatic. Major technological disruptions … can flatten wages for an entire class of working people…[and while] you got progress, … you also had costs that were huge and very long-lasting.” After the Industrial Revolution, for instance, working people suffered over a hundred years of much harsher conditions, lower real wages, much worse health and living conditions, less autonomy, and, of course, greater inequality.
All seminal technologies carry these potential threats, even more so in the new age of ‘winner takes all’, as we have seen over the past couple of decades with social media companies. And with AI billed as social media on steroids, these divergences—in capitalist economies—are certain to become even worse. As Geoffrey Hinton, considered one of the godfathers of AI, put it, “My worry is that it will [make] the rich richer and the poor poorer. As you do that . . . society gets more violent. This technology which ought to be wonderful . . . is being developed in a society that is not designed to use it for everybody’s good.” More philosophically, he pointed out that AI may be opening a Pandora’s box that shows up humanity as just a passing phase in the evolution of intelligence.
Some prominent researchers and practitioners also fear that AI systems could pose existential threats to humans if the technology were given too much autonomy. Stuart Russell, a professor of computer science at the University of California, Berkeley, constructed an example of the UN asking an AGI (artificial general intelligence) to help de-acidify the oceans, specifying that any by-products be non-toxic and not harm fish. In response, the AI system comes up with a self-multiplying catalyst that achieves all stated aims, but the ensuing chemical reaction uses a quarter of all the oxygen in the atmosphere. “We all die slowly and painfully,” Russell concluded. “If we put the wrong objective into a super-intelligent machine, we create a conflict that we are bound to lose.”
While this sort of doomsday scenario may never pan out, the cards suggest that AI, in the final reckoning, may simply be an evolutionary tool that, at the very least, brings capitalism to its knees.
(The author is CEO, Mecklai Financial)