For days, xenophobic riots have raged in several British cities, fueled by a false news story circulating online. The BBC has traced the fake news back to its origin.
It all began with a false report. In Southport, a 17-year-old attacked several people at a children's dance class, killing three children. Claims quickly spread online that the attacker was a young asylum seeker, a Muslim who had arrived by boat across the English Channel last year. This was false: the suspect was born in the UK to immigrant parents. Yet the claim kept spreading, and for days it drove protesters into the streets of various cities.

How did this happen? A BBC investigation traced the claim back to its first publication: a news portal called Channel3Now, which specializes in crime stories. A member of the portal's management admitted to the BBC that the report about the Southport attacker was an unintentional mistake. An employee said the venture is commercial and aims to publish as many stories as possible. The BBC could not verify any connection to Russia.
Online Disinformation with Offline Consequences
It was a false story that fit the worldview of many of the people who spread it online. On Telegram and X (formerly Twitter), it had concrete consequences: "What we experienced here is a fake news story spreading online and inciting violence and unrest offline," says Sander van der Linden, a psychology professor at Cambridge University and an expert on fake news. False information spreads quickly because anyone can publish it online. "The information is unverified and lacks evidence, yet these posts can go viral," says van der Linden. "Media companies have fact-checkers, editors, producers, all of whom verify content. The platforms lack such control mechanisms."
Media Power Shifts to Internet Giants
Last week, the British newspaper The Sun also took up the issue. The headline read "Anti-Social Media," a play on the term social media; alongside it was a picture of a stone-thrower, under the subheading "How Facebook and Others Incite Violence in Our Streets." The headline is striking because it points to a shift: it is no longer the tabloids and sensationalist papers setting the tone in the UK, but the large internet corporations. Elon Musk, owner of the platform X, attacked British Prime Minister Keir Starmer last week, writing that civil war is inevitable.
Musk accused Starmer, among other things, of treating "white protesters" entirely differently from violent offenders with migration backgrounds, offering no evidence. Far-right British politicians have made the same argument, likewise without proof. Musk has also allowed the far-right extremist activist Tommy Robinson, who has 900,000 followers and spreads right-wing content and false information, to remain on X, where he has been inciting the violent protesters.
In the UK, government attempts to regulate the platforms are still in their early stages. Hannah Rose, an extremism expert at the Institute for Strategic Dialogue, sees the riots as a test for the Online Safety Act: "The law is supposed to require platforms to remove illegal content. Those that do not comply can be fined. But the regulation is still in its early stages." How effective the law will be remains unclear and will depend on the approach the regulator takes.