The U.K.’s internet regulator, Ofcom, has published an open letter to social media platforms raising concerns about the use of their tools to incite violence. The development follows days of violent civil unrest and rioting in towns and cities around the United Kingdom after the slaying of three young girls in a knife attack in Southport on July 30.
Ofcom has powers to sanction video platforms for failing to protect their users from content that’s likely to incite violence or hatred. Under the U.K.’s newer Online Safety Act (OSA), Ofcom’s powers to enforce content moderation standards online have been further expanded to cover all sorts of platforms, including social media services.
Penalties under the OSA can reach up to 10% of global annual turnover — so, on paper, the regulator’s toolbox contains hefty new powers to clamp down on serious content moderation failures.
However, Ofcom is still in the process of implementing the regime. Enforcement on social media platforms is not expected to kick in before 2025, as the regulator continues to consult on guidance for how firms should comply.
Parliament will also need to approve these rules before enforcement starts. Currently, there is no clear legal route for Ofcom to compel social media firms to tackle hateful conduct that may be whipping up violent social unrest.
Nonetheless, in recent days there have been calls for Ofcom's enforcement timeline to be sped up in light of the civil unrest and for the regulator to be more proactive in dealing with social media giants.
Speaking to BBC Radio 4's World at One program on Tuesday, former minister Damian Collins urged Ofcom to “put the tech companies on notice.”
“Communications on social media platforms that incite violence, create genuine fear people have of being the victim of violent acts, that incite racial hatred, these are already regulatory offences under the Act,” Collins told the BBC. “What Ofcom needs to be doing now is putting the tech companies on notice to say they will be audited using the powers Ofcom has to look at what they did to try and dampen down the spread of extremist content and disinformation related to that extremist content on their platforms.
“[The tech companies] have the power to do that… and my concern is, it’s not just they’re not doing that, they are actively amplifying this content and making the problem worse.”
Concern over the role of social media platforms, including Elon Musk’s X (formerly Twitter), was sparked almost immediately by the swift spread of disinformation about the identity of the minor responsible for killing the three girls.
U.K. media outlets were initially restricted from reporting the identity of the suspect who police had arrested because he is under the age of 18. A judge later lifted the restriction, naming the teen as a British-born citizen called Axel Rudakubana, but not before the information vacuum had been exploited by far right activists using platforms like X to spread false claims that the killer was a Muslim asylum seeker.
Activists also used social media sites and messaging apps such as Telegram to organize fresh unrest. The first violent disturbance took place in Southport the day after the killings. Since then, unrest has spread to a number of towns and cities in England and Northern Ireland, with incidents including looting, arson and racist attacks. Several police officers were injured in the clashes.
Musk personally waded into the fray, engaging with content posted on X by far-right influencers intent on using the tragedy to further a divisive political agenda. That includes X user Tommy Robinson (also known as Stephen Yaxley-Lennon), whose account X reinstated last year, lifting a 2018 Twitter ban that had been imposed for breaching the platform’s hateful conduct policies in posts targeting Muslims.
In one of his own posts remarking on the unrest in the U.K., Musk suggested “civil war is inevitable.” In another, he attacked U.K. Prime Minister Keir Starmer, insinuating that his government is responsible for so-called two-tier policing, a right-wing conspiracy theory claiming that police treat right-wing offenders more harshly than others.
Ministers have rubbished Musk’s claim and disputed the framing of the violent public disturbances as protests, instead branding the individuals involved “thugs” who are engaged in “criminal acts.”
The government has also vowed to bring the full force of the law to bear on anyone involved. But that still leaves the tricky question of how to handle major tech platforms which are being used to spread content intended to whip up violence and to organize fresh unrest. That specifically includes X, where the platform's owner is himself amplifying the divisive dogwhistling.
Ofcom’s public letter, which is attributed to Gill Whitehead, its group director for online safety, represents the weakest level of regulatory intervention possible, lacking any forceful call for platforms to act. It offers only a suggestion to platforms that “you can act now.”
But it may be all Ofcom feels able to do at this point.
“When we publish our final codes of practice and guidance, later this year, regulated services will have three months to assess the risk of illegal content on their platforms, and will then be required to take appropriate steps to stop it appearing, and act quickly to remove it when they become aware of it,” writes Whitehead, underscoring the OSA enforcement gap the regulator is saddled with, barring new action by the government to speed up the implementation timeline.
“Some of the most widely-used online sites and apps will in due course need to go even further — by consistently applying their terms of service, which often include banning things like hate speech, inciting violence, and harmful disinformation,” the Ofcom letter continues, pointing to some of the incoming duties social media firms will be expected to comply with once the OSA is fully up and running.
The regulator goes on to say it expects “continued engagement” with companies during the OSA implementation period.
“[W]e welcome the proactive approaches that have been deployed by some services in relation to these acts of violence across the UK,” Ofcom adds, ending with a suggestion that platforms shouldn’t wait for the “new safety duties” to kick in but can instead “act now” to ensure their services are “safer for users.”
But without a fully implemented regime to force platforms to clean up their act, Ofcom’s letter may be all too easy for certain chaos-peddlers to ignore.