
How to fight online misinformation: beyond laws and algorithms, try vaccination
An online game that puts the player in the shoes of a master manipulator strengthens people’s defences against fake news and trolling.

Climate denial, Donald Trump’s presidency, anti-vaxxers, Brexit: you name it, online misinformation has been blamed for it. What, though, can be done about it? That’s the question we have been trying to answer.
We approach the problem of misinformation by treating it like a disease. But we’re not proposing a cure: instead, we are trying to create ‘vaccines’ that will strengthen people’s resistance to fake news and other online deceptions.
But – to continue the medical metaphor – why not simply find ways to eradicate misinformation, or at least put it in quarantine? There are certainly people who believe we can do just that. Some propose using algorithms to detect online misinformation automatically and delete, downrank or disincentivise manipulative content. Others want laws to punish those who produce, spread or fail to delete harmful content.
These approaches have their advantages, but also come with significant downsides. Algorithms can’t always identify misinformation correctly, and wrongfully deleting or flagging content can easily backfire. Using the law to fight misinformation, meanwhile, raises serious questions about freedom of speech and expression.
In contrast, our approach aims to strengthen everyday human judgement to prevent misinformation taking hold. We’re not the only ones working with that broad idea: there are many fact-checking initiatives that promise to dispel pervasive myths and false stories, and educational tools to improve people’s ability to spot manipulative information. However, these too have limitations.
Fact-checks, while useful, are often slow, and research has shown that false news spreads faster, farther and deeper than true news, meaning that fact-checkers are bound to be behind the curve.
Educational tools, meanwhile, tend to target only one part of the population, usually the young – who tend to be better at spotting misleading content anyway. This leaves many people who are vulnerable to misinformation, particularly older people, without protection.
So we have turned to behavioural science and psychology to create our ‘vaccines’ against misinformation. We based our work on ‘inoculation theory’, which states that it is possible to confer psychological resistance against manipulation attempts by pre-emptively exposing people to a weakened version of a deceptive argument – much like a real vaccine confers resistance against a pathogen by exposing the body to a severely weakened version of it.
We aren’t the first to use inoculation theory to combat misinformation, but we think our approach could offer a broader spectrum of protection. Previous work has focused mostly on inoculating people against specific deceptive arguments. Given the sheer volume and variety of misinformation on the internet, inoculating against one argument at a time is inefficient.
We therefore decided to see if it was possible to inoculate people against the techniques used in spreading misinformation. If successful, this would go a long way towards conferring resistance against misinformation in general, rather than continuing to rely on post-hoc solutions.
The tricky part was the how: what kind of intervention would be effective while also attracting a large enough audience – one beyond the sphere of formal education alone? The point of vaccinations, after all, is to aim for a critical coverage rate in order to achieve ‘herd immunity’.
We therefore decided to deliver our vaccine in the form of an online game. Together with DROG, a Netherlands-based anti-misinformation platform, we built ‘Bad News’, a free browser game in which players walk a mile in the shoes of a misinformation producer.

Starting out as an anonymous Twitter troll, each player gradually gains followers and credibility to form their own fake news empire. In doing so, they learn about six common misinformation techniques: impersonating people (or groups of people) online, using emotional language, polarising audiences, spreading conspiracy theories, discrediting opponents and deflecting criticism, and internet trolling.
Our idea was that prompting players to actively think about how misinformation works from the perspective of the ‘villain’ would be significantly more effective than conventional media literacy interventions that tend to focus on passive exercises such as reading or watching a video.
The game was a big success: it has so far been played over 900,000 times (a huge step towards ‘herd immunity’), and translated into thirteen languages other than English (including German, Czech, Polish, Greek, Esperanto, Swedish and Serbian).
Did it work? To find out, we crafted a number of tweets that each used one of the six techniques taught in the game. For example, one stated: “The myth of ‘equal IQ’ between left-wing and right-wing people exposed!” As experimental controls, we also designed a few tweets that used no misinformation technique. We then asked people to rate the reliability of these tweets both before and after playing the game.
We found that after playing the game, people gave the deceptive headlines much lower reliability ratings than they had before playing. The ratings of the control tweets – those that did not employ misinformation techniques – were much the same before and after. In other words, the game did not simply make players more sceptical of everything they read: it specifically sharpened their awareness of misinformation techniques.
The participants were not entirely naïve to start off with, however: even before playing the game, they rated the control tweets as far more reliable than the misinformation tweets.
There was no meaningful difference between people of different ages, genders, education levels and political persuasions. We also found that the inoculation effect remains significant for up to five weeks. Subsequent testing has replicated this result many times over.
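For readers who want a concrete sense of how such a pre/post design can be analysed, here is a minimal sketch in Python. The numbers are invented purely for illustration – they are not data from our study – and the paired t-test shown is just one standard way to test whether the same participants’ ratings changed after playing.

```python
# Illustrative sketch only: hypothetical ratings, NOT data from the study.
# Each (simulated) participant rates tweets on reliability before and after
# playing; we compare the rating change for deceptive items against controls.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200  # hypothetical number of participants

# Hypothetical reliability ratings on a 1-7 scale
deceptive_pre = rng.normal(3.2, 1.0, n)
deceptive_post = deceptive_pre - rng.normal(0.6, 0.5, n)  # drop after playing
control_pre = rng.normal(5.5, 1.0, n)
control_post = control_pre + rng.normal(0.0, 0.5, n)      # controls barely move

# Paired t-tests: did ratings change within the same participants?
t_dec, p_dec = stats.ttest_rel(deceptive_pre, deceptive_post)
t_ctl, p_ctl = stats.ttest_rel(control_pre, control_post)

print(f"deceptive: mean change {np.mean(deceptive_post - deceptive_pre):+.2f}, p={p_dec:.3g}")
print(f"control:   mean change {np.mean(control_post - control_pre):+.2f}, p={p_ctl:.3g}")
```

The key feature of the design is the control comparison: a drop in ratings for deceptive tweets only, with control tweets unchanged, is what distinguishes sharpened technique-spotting from blanket scepticism.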
So what are the implications of all this? First of all, our research shows that while misinformation is extremely difficult to tackle at the point of production, it is possible to render it much less effective at the point of consumption.
Second, other domains where misinformation is a threat could also benefit from this approach. For example, we are working with WhatsApp to develop a new game to combat misinformation on direct messaging apps: this is a growing problem, especially in countries such as India, Brazil and Mexico.
We have also embarked on projects to inoculate people against misinformation about medical vaccines and against online extremist recruitment. The preliminary results of this research are again showing significant learning effects, and we are hopeful that this approach can go some way towards building a sustainable set of solutions to the pervasive problem of online misinformation.