TikTok cuts threaten hundreds of UK content moderator jobs amid AI shift

TikTok, the popular short-form video app, has announced plans to move its UK content moderation roles to Europe as it relies increasingly on artificial intelligence (AI) to monitor and filter content. The move puts hundreds of jobs at risk, even as regulatory pressure grows under the newly introduced Online Safety Act.

The decision comes as little surprise: TikTok has faced mounting criticism over its handling of harmful and inappropriate content on its platform. With more than 100 million monthly active users in Europe alone, the company is under intense pressure to ensure the safety and well-being of its users, particularly young people.

In a statement, TikTok said the shift is part of its ongoing effort to improve its content moderation policies and practices. The company believes the change will allow it to align more closely with local laws and regulations and strengthen its ability to identify and remove harmful content.

However, the decision has raised concerns about job losses among UK-based content moderators. TikTok has not disclosed exactly how many roles will be affected, but the figure is estimated to run into the hundreds, leaving the affected employees anxious and uncertain about their livelihoods.

Relying more heavily on AI for content moderation is not a new trend in the tech industry. Many social media platforms increasingly use AI to filter out harmful and inappropriate content, citing the need for efficiency and scalability. The shift has, however, raised concerns about how accurately and effectively AI can identify and remove such content.

TikTok has said that AI will not completely replace human moderators, who will continue to play a crucial role in the content moderation process. The company believes that combining AI with human review will result in a more effective and efficient moderation system.
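
As a rough illustration of how such a hybrid pipeline can work (a generic sketch only, not a description of TikTok's actual system; the thresholds and names below are hypothetical), an AI classifier typically assigns each item a violation score, removes the clearest violations automatically, and routes borderline cases to a human review queue:

```python
from dataclasses import dataclass

# Illustrative thresholds; a real system would tune these per policy category.
AUTO_REMOVE_THRESHOLD = 0.95   # model is highly confident the content violates policy
HUMAN_REVIEW_THRESHOLD = 0.60  # model is unsure; escalate to a human moderator


@dataclass
class ModerationDecision:
    action: str      # "remove", "human_review", or "allow"
    ai_score: float  # model's estimated probability of a policy violation


def triage(ai_score: float) -> ModerationDecision:
    """Route a piece of content based on the AI model's violation score."""
    if ai_score >= AUTO_REMOVE_THRESHOLD:
        return ModerationDecision("remove", ai_score)
    if ai_score >= HUMAN_REVIEW_THRESHOLD:
        return ModerationDecision("human_review", ai_score)
    return ModerationDecision("allow", ai_score)


if __name__ == "__main__":
    # Example scores: a clear violation, a borderline case, and benign content.
    for score in (0.99, 0.75, 0.20):
        print(f"score={score:.2f} -> {triage(score).action}")
```

In practice, decisions made by human reviewers on escalated cases are often fed back into model training, which is one reason companies argue that the two approaches complement each other.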

Despite the potential job losses, many have welcomed the move as a step in the right direction. The Online Safety Act, which recently came into force in the UK, places a legal duty on social media platforms to protect their users from harmful content. By shifting content moderation roles to Europe, TikTok is signalling its commitment to complying with the new legislation and ensuring the safety of its users.

The move also underscores the company's dedication to continuously improving its content moderation policies and practices. TikTok has been working closely with regulators and experts to develop and implement effective measures against harmful content, including investing in technology and resources to improve its AI capabilities and providing support and training for its human moderators.

TikTok has also introduced features and tools that let users control their own experience, including the ability to filter out specific types of content, report and block other users, and limit screen time. These efforts demonstrate the company's commitment to creating a safe and positive environment for its users.

In conclusion, while the decision to move UK content moderation roles to Europe puts hundreds of jobs at risk, it is a necessary step for TikTok to improve its content moderation practices and comply with the Online Safety Act. The company's commitment to improving its policies and practices, together with its efforts to empower users, deserves recognition. With this move, TikTok sets a positive example for other social media platforms in ensuring the safety and well-being of their users.
