TikTok is set to lay off hundreds of UK employees from its content moderation and security team amid growing efforts to automate much of the work using artificial intelligence. The move comes weeks after the Online Safety Act took effect across the country, setting tough new requirements to protect children and remove illegal content.
According to a Financial Times report, all UK-based employees within the trust and safety department received an email on Friday broaching the possibility of layoffs. The missive stated that TikTok was “considering that moderation and quality assurance work would no longer be carried out at our London site”. The company currently employs more than 2,500 staffers in the UK. TikTok was also slated to hold a town-hall meeting with affected staff on Friday morning.
The video-sharing app owned by ByteDance has indicated that several hundred jobs are likely to be affected across the UK as well as south and south-east Asia. The ‘collective consultation process’ is being pursued as part of a global reorganisation of content moderation efforts at TikTok.
What is the Online Safety Act?
The Online Safety Act was passed in 2023 with rollout beginning earlier this year. It sets out tough new requirements for social media platforms such as Facebook, YouTube, TikTok and X, as well as sites hosting pornography, to protect children and remove illegal content.
The law has attracted criticism from politicians, free-speech campaigners and content creators, with more than 468,000 people signing an online petition calling for its repeal. They contend that the rules have been implemented too broadly and have led to censorship of legal content. Users have also complained about age checks that require personal data to be uploaded in order to access sites that show pornography.
The UK government has, however, pushed forward with the Act, and is now working with regulator Ofcom to implement it as quickly as possible.
Is TikTok’s AI pivot cause for concern?
The decision is part of a broader global restructuring in which TikTok is shifting much of its content moderation work from humans to artificial intelligence systems. According to details viewed by the Financial Times, the short-video platform said in its email that “technological advances, such as the enhancement of large language models, are reshaping our approach”. The message shared with employees added that the proposed changes were “intended to concentrate operation expertise in specific locations”.
TikTok introduced new “age assurance” controls last month to limit minors’ exposure to harmful content. It plans to use machine-learning technology to infer the age of users on the basis of their activity, a move that has yet to be endorsed by Ofcom. The layoffs are likely to generate fresh concern about dwindling human oversight in content moderation. Critics argue that such AI alternatives are still at a nascent stage and may fail to maintain the same quality of user safety as human moderators. There are also worries about the accuracy and cultural sensitivity of AI systems, especially in diverse linguistic and social contexts.