According to the BBC, multiple whistleblowers and insiders have revealed that the social media giants Meta and TikTok altered their algorithms to promote more 'borderline' harmful content, such as misogyny and conspiracy theories, in a bid to retain users amid intense rivalry. A Meta engineer stated that senior management made the decision after the company's stock price fell, specifically to compete with TikTok, underscoring how financial pressure drove the changes.
A TikTok employee, referred to as 'Nick' in the BBC documentary, provided access to internal complaint dashboards showing that cases involving politicians were prioritized over reports of harm to children. Nick alleged that these decisions were made to maintain 'strong relationships' with political figures and to avoid regulatory threats or bans, rather than out of concern for user safety, and he criticized the company for allegedly prioritizing political appeasement over children's safety.
Former senior Meta researcher Matt Motyl disclosed that Instagram Reels, Meta's competitor to TikTok, was launched in 2020 without adequate safeguards. Internal research shared with the BBC indicated that comments on Reels contained markedly more bullying and harassment (75% higher), hate speech (19% higher), and violence or incitement (7% higher) than comments on the main Instagram feed. Motyl described a 'power imbalance': safety teams needed approval from the Reels teams to implement safety features, and because toxic content drives more engagement, those teams had little incentive to grant it.
Another former Meta engineer, referred to as 'Tim', reported that the company stopped limiting borderline harmful content in order to compete with TikTok, a move allegedly sanctioned by top executives after the stock declined. Internal documents revealed that Meta was aware its algorithms amplified content that angered users, and could even incite harm, because outrage-driven interactions generate higher engagement.
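To make the alleged mechanism concrete, here is a minimal, entirely hypothetical sketch of engagement-weighted ranking. The signal names and weights are invented for illustration and do not come from the BBC's reporting or from Meta; the point is only that when every interaction, including an angry reaction, raises a post's score, outrage-provoking content is structurally rewarded.

```python
# Hypothetical sketch of engagement-weighted feed ranking. All
# names, signals, and weights are invented; they illustrate why
# outrage-heavy content can dominate an engagement-optimized ranker.
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    p_click: float    # predicted probability of a click
    p_comment: float  # predicted probability of a comment
    p_angry: float    # predicted probability of an "angry" reaction

def engagement_score(post: Post) -> float:
    # Every interaction, including an angry reaction, adds to the
    # score, so content that provokes outrage is rewarded rather
    # than penalized.
    return 1.0 * post.p_click + 4.0 * post.p_comment + 5.0 * post.p_angry

posts = [
    Post("calm-news", p_click=0.30, p_comment=0.02, p_angry=0.01),
    Post("outrage-bait", p_click=0.25, p_comment=0.12, p_angry=0.20),
]

# The outrage-bait post outranks the calm one despite a lower click
# probability, because comments and angry reactions dominate the score.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(post.id, round(engagement_score(post), 2))
```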
Former TikTok machine-learning engineer Ruofan Ding characterized the recommendation algorithm as a 'black box' that is difficult to scrutinize even internally. Engineers, he noted, often treat content as mere data points and rely on safety teams to filter out harmful posts; yet he observed an increase in 'borderline' content as the algorithm was optimized for market share, raising concerns about systemic safety gaps.
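Ding's account suggests a rank-then-filter pipeline: a safety filter removes only content scored above an explicit harm threshold, while the ranker optimizes engagement over everything that remains, so 'borderline' posts sitting just below the threshold are both retained and promoted. The sketch below uses invented thresholds, scores, and field names; it is not TikTok's actual system.

```python
# Hypothetical sketch of a rank-then-filter recommendation pipeline.
# Thresholds, scores, and field names are invented for illustration.
REMOVAL_THRESHOLD = 0.9  # only clearly harmful content is dropped

videos = [
    {"id": "v1", "harm_score": 0.95, "engagement": 0.40},  # removed
    {"id": "v2", "harm_score": 0.85, "engagement": 0.90},  # borderline
    {"id": "v3", "harm_score": 0.10, "engagement": 0.50},  # benign
]

def safety_filter(items):
    # The filter sees content only as a scalar harm score ("mere data
    # points"); anything under the threshold passes, borderline or not.
    return [v for v in items if v["harm_score"] < REMOVAL_THRESHOLD]

def recommend(items):
    # The ranker optimizes engagement alone, so the borderline video
    # (v2) is both kept by the filter and ranked first.
    return sorted(safety_filter(items), key=lambda v: v["engagement"], reverse=True)

for v in recommend(videos):
    print(v["id"], v["engagement"], v["harm_score"])
```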
The BBC interviewed teenagers who reported being recommended violent and hateful content on major platforms, with complaint systems failing to act effectively. One teen, Calum (now 19), claimed he was 'radicalized by algorithm' from the age of 14, leading him to racist and misogynistic views. UK counter-terror police specialists noted a 'normalization' of antisemitic, racist, and far-right posts in recent months, linking it to desensitization and the increased sharing of extreme content.
Meta and TikTok denied the allegations. A Meta spokesperson asserted that any suggestion the company deliberately amplifies harmful content for profit is false, citing investments in safety features such as Teen Accounts. TikTok dismissed the claims as 'fabricated', pointing to technology that prevents harmful content from being viewed and to dedicated child-safety teams, though whistleblowers contested the effectiveness of these measures.
Source: www.bbc.com