Meta has come under fire for its delayed response to hateful and violent posts shared during last summer’s UK riots, according to a report by its own independent oversight board.
The review raises significant concerns over the social media giant's moderation practices in the wake of the violent unrest that erupted after the killing of three young girls at a dance class in Southport on 29 July 2024. In the aftermath, false narratives spread online suggesting the suspect was a Muslim asylum seeker, claims the report notes were entirely unsubstantiated. Calls to violence and widespread misinformation circulated quickly, contributing to nine days of civil disorder from 30 July to 7 August that led to more than 1,280 arrests nationwide.
The oversight board stated it had “strong concerns” about Meta’s ability to effectively manage harmful content in real time. It criticized the company’s delay in activating its emergency crisis moderation protocols and expressed unease with CEO Mark Zuckerberg’s January decision to reduce reliance on third-party fact-checkers.
“We don’t know enough,” said oversight board co-chair Paolo Carozza. He stressed the importance of evaluating the effectiveness of Meta’s newer community moderation tools, warning that such systems need thorough validation before being relied upon during crises.
As part of its assessment, the board examined three posts that were flagged during the riots but left online by Meta's automated moderation systems. It found that all three violated the company's policies.
One of the posts explicitly encouraged violence against mosques and migrant communities. The board deemed this a clear incitement to hate and violence, stating, “There is no way to interpret this post as a casual or non-serious statement.”
Another shared an AI-generated image of a man in a Union Jack shirt chasing caricatured Muslim figures, alongside details of a planned protest and the hashtag “#EnoughisEnough.” The board described it as a direct call to discriminatory violence and dismissed Meta’s rationale for keeping it online as “not credible.”
A third image showed four Muslim men chasing a crying child near Westminster, one wielding a knife, with a plane flying toward Big Ben. Meta left the post up on the grounds that it referenced a specific individual who had been wrongly accused of the murders. The board rejected that rationale, arguing that the image depicted no identifiable individual and bore no direct connection to the Southport events, and concluded it should have been removed.
In response, a Meta spokesperson defended the company’s actions, saying a dedicated task force had been deployed to take down thousands of posts violating platform rules during the riots. The company added that it would comply with the oversight board’s recommendations.
The board’s findings were part of a broader review of Meta’s content moderation efforts. While it ruled that the three Southport-related posts should have been taken down, it upheld Meta’s decisions to leave up two separate posts that, although potentially offensive to the trans community, it did not consider incitement to violence.
“Free speech protections don’t cover calls to immediate violence,” Carozza said. “But they do extend to controversial and even offensive viewpoints. The challenge is finding where to draw the line—and deciding which side of that line we’re more willing to tolerate mistakes.”
The report raises questions about how Meta balances freedom of expression with public safety and highlights the ongoing difficulties tech platforms face in moderating content during fast-moving crises.