How can we better regulate elections in the digital age?

Our politicians need to empower our electoral and information regulators to tackle the challenges ahead. Sam Jeffers sets out some starting principles and some radical suggestions.

Sam Jeffers
19 March 2018


The Cambridge Analytica revelations this weekend wreck the credibility of Alexander Nix’s company and of Facebook. Cambridge Analytica will probably close down. Facebook will struggle to rebuild trust in its ability to carry political communication. As a user, it’s a reminder that you really have no idea who’s behind a message or how it found its way to you. It’s not hard to imagine Facebook now deciding that politics isn’t worth the trouble it’s causing the company.

Because Facebook can’t be trusted to police itself, the company, platforms like it, and the wider universe of those who sell data and technology solutions to campaigns should have their role in political communication regulated. These revelations, among other things, prove they can’t oversee themselves. Facebook itself probably knows this by now.

These aren’t the acts of ‘bad apples’ - they’re a result of their business models, where selling access to voters in ever more specific ways has led to a race to the bottom of marketing ethics (note also the responsibility of their clients).

But the need for oversight should put fear in the heart of every election, media and data regulator in the world. The US Federal Election Commission failed this specific test – but others would have done the same.

In the UK, Ofcom, the Electoral Commission and the ICO all have a role to play in ensuring elections are trustworthy. But they’re worried about being seen as ‘political’, and this, ironically, tends to give political campaigns free rein. Many of the challenges posed by modern elections fall between them, or through them, because they aren’t set up to co-ordinate their work, and they can’t imagine a policy response because they aren’t trying to understand the near future.

We need some initial guiding principles for new forms of regulation. These principles must do several things. They need to recognise the risk of fragmentation of the electorate. Fragmentation makes it easier to drive a wedge between people, which is bad for getting millions of them to live together.

They need to ensure that the tools being used can be explained to voters in simple language. They need to support appropriate monitoring and data gathering during campaigns. We should avoid a future where we’re worrying about things from a couple of years ago, with little data to understand how a previous election was fought.

And the principles need to keep things simple for the regulators themselves, and to slow things down. Presume you don’t understand, and that new techniques may have a big impact, for good or bad. Work to understand them, as medicines regulators do.

These are just proposals. A debate about the unintended consequences of these principles would be very useful as a counterbalance, and to help refine them. But working from the principles helps you generate ideas like:

“Drug testing” campaigns data. A regulator should be able to demand instant access to a campaign’s systems and check that data was legally acquired and is being legitimately used.

Limiting microtargeting. An example might be to restrict segment sizes to a certain proportion of the electorate in a given campaign - for example, no smaller than the average parliamentary constituency. It’s unproven whether hyper-targeted messaging that preys on our psychology and emotions is an effective campaign technique. But do we want to find out? I’m not persuaded it’s something we need in order to have a better democracy.

Demanding radical transparency. Campaigns should lodge their plans and the techniques they intend to use with regulators before a campaign begins, and update them promptly as it proceeds.

Reserving the right to outlaw or delay the use of specific techniques for the purposes of political communication. Bots, AI-assisted advertising products, almost anything that impersonates human communication but isn’t a human. These are the things that tempt campaigns and consultants because they (usually) promise unparalleled accuracy for less money. Regulators should make campaigns wait to use them until their effects are known.
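To illustrate, the segment-size floor suggested above is simple enough to check mechanically. A minimal sketch, assuming a hypothetical floor of 70,000 voters (an illustrative figure, roughly the order of an average UK parliamentary constituency electorate; the actual threshold would be for a regulator to set):

```python
# Hypothetical check for a minimum-segment-size rule on targeted political ads.
# The floor value is illustrative, not an official figure.
MIN_SEGMENT_SIZE = 70_000

def segment_allowed(segment_size: int, floor: int = MIN_SEGMENT_SIZE) -> bool:
    """Return True if a targeting segment meets the minimum-size floor."""
    return segment_size >= floor

print(segment_allowed(250_000))  # broad regional segment -> True
print(segment_allowed(1_500))    # hyper-targeted micro-segment -> False
```

The point of such a rule is that it is trivially auditable: a regulator inspecting a campaign’s ad accounts needs only the audience size per ad, not the underlying personal data.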

Currently, the regulators’ policy work on new campaigning techniques is inadequate to ensure the integrity of campaigns. They lack guidance. Their staff lack the resources and skills, and their will to do much about these issues is weak, as they try to avoid the appearance of being politicised.

All of that must change. The recent past creates a worrying precedent for what’s wrong with democratic oversight in the internet era. But the near future holds much worse. Politicians, who ultimately give regulators their orders and resources, must start to act.


