In the eighteen months since a single AI-generated video falsely showed a sitting prime minister announcing a military strike — and briefly triggered a stock market crash before being debunked — governments around the world have scrambled to respond to a threat that is evolving faster than the laws designed to contain it.
The problem is not merely that disinformation exists. It has always existed. The problem is the combination of hyper-realistic synthetic media, algorithmic amplification and social fragmentation that now allows a single false claim to reach tens of millions of people within hours — often before any correction is possible.
"We are playing a game of whack-a-mole where the moles are faster than the hammers," said Renée DiResta, a researcher at Stanford Internet Observatory who has studied online manipulation campaigns for a decade. "The tools for creating convincing synthetic media are improving at a rate that far outpaces our detection capabilities."
The legislative response has been fragmented and slow. The EU's Digital Services Act requires platforms to remove "illegal content" swiftly, but what constitutes illegal disinformation varies widely across member states. The United States has no equivalent federal law, and Section 230 of the Communications Decency Act continues to shield platforms from liability for user-generated content.
Some democracies have taken more aggressive steps. Finland has embedded media literacy education in its national curriculum for a decade and consistently ranks among the most resilient populations to disinformation in European surveys. Taiwan's digital ministry runs real-time fact-checking operations that publish corrections within hours of false claims going viral.
The technology industry argues that AI can be part of the solution as well as the problem. Google, Meta and Microsoft have invested heavily in automated detection systems, and major platforms now add labels to content identified as synthetic. Critics point out, however, that labelling arrives too late for most content and is easily circumvented.
"The fundamental issue is that disinformation is profitable," said Professor Katharina Kerr of Oxford's Internet Institute. "Outrage and fear drive engagement. Platforms are structurally incentivised to amplify it. Until that economic model changes, no amount of fact-checking will be sufficient."
The next major test will come this autumn, when four of the world's largest democracies hold national elections within an eight-week period. Intelligence agencies in all four countries have already issued warnings about foreign interference operations. Whether their democracies pass that test may determine whether the current institutional responses are adequate — or whether a more fundamental rethink is required.