What the Tories want to do with our health data, and why we need to stop them
Boris Johnson looks set on using our health data as a bargaining chip to keep his Brexit/US trade dream alive – but what does that mean for us?
On his trip to the US this week, Boris Johnson was forced to acknowledge that an all-encompassing, bilateral trade deal with the US might be hard to achieve. It’s unsurprising, given antipathy to aspects of such deals from Joe Biden’s trade union supporters.
So how will Johnson keep his dream of a US deal – on which he staked much of his credibility – alive? And what does the US – now firmly with the “whip hand”, as one trade expert told the Financial Times today – want from the UK in return?
Trade negotiations are about much more than tariffs on beef or cars, though these tend to be the things that grab the headlines and create sticking points. They’re increasingly about services, now largely digitally enabled – and about the rules (‘non-tariff barriers’, in trade parlance) that protect us from poor or exploitative business operations. And there’s no service like the National Health Service.
What does big US business want from Brexit? The answer, more than anything else, is unrestricted access to the UK’s health data. And a deal that could help fix a new global norm for tech-friendly, privacy-busting rules, into the bargain.
US negotiators in ongoing secretive UK-US trade talks have already made clear that the “free flow of data is a top priority”, and the UK’s health data is seen as one of the most promising areas for transatlantic business.
Johnson promises we’ll continue to work “as fast as we can” for a giant deal with the US alone. Otherwise, we’re now told, the UK could join the US’s existing deal with Canada and Mexico. This recent deal (the USMCA) has a regulation- and privacy-busting “digital chapter” that “allows data to be transferred cross-border”, as well as extensively protecting the tech industry from liability, citizen redress, or having to reveal how its black-box algorithms work.
Or if that doesn’t come to pass either, the other, perhaps most likely option is – as today’s Times reports – “to strike a series of smaller deals with the US, such as on aligning data and digital standards”. In other words, exactly what the US corporations most want.
Meanwhile, we’re drip-fed slogans.
Liz Truss, as minister in charge of trade, told us last week her aim was to “turbo-charge trade, particularly in digital” to secure “our future as a tech trade superpower” through “a network of next generation trade deals in services and digital” that would take us “from Silk Road to Silicon Road”.
Her colleague Oliver Dowden was given the chairmanship of the Conservative Party in last week’s cabinet reshuffle. As secretary of state at the department for digital, culture, media and sport, last month he told us that a data-sharing deal with the US was a key priority for him.
Watering down data protections doesn’t make sense for UK-based businesses
So much so that he’s happy to rip up the data-protection rules that the UK inherited from its EU membership to achieve it, judging by the proposals his department floated this month. The 12-week consultation contrasted its planned “ambitious, pro-growth and innovation-friendly data protection regime” with what it described as the EU’s “box-ticking” rules – rules that the then US commerce secretary Wilbur Ross explicitly called a “barrier to trade”.
As so often in the post-Brexit age, watering down protections doesn’t actually make much sense for UK-based businesses – which risk losing the data adequacy agreement struck with the EU in June, and the £85bn of value that goes with it.
Nor does it make sense for the UK public, already alarmed by plans to grab the GP data of everyone in England and share it with third parties including private companies. Those plans have been put on pause – for now – after attracting opposition from across the political spectrum, including a threatened legal challenge from openDemocracy, legal campaigners Foxglove and citizen groups.
But it makes sense for Johnson’s big dream.
Focusing ‘relentlessly’ on ‘genomics and health data’
In his foreword to July’s ‘Life Sciences Vision’ policy paper, the prime minister fulminated about “uniting… the power of our capital markets and the amazing data resource of our NHS… utilis[ing] the full breadth of our regulatory freedoms from Brexit”.
That ‘vision’ set out the government’s plans to “focus relentlessly” on genomics and health data, adding that “governance of, and access to, this data must be radically simplified…”.
It urged, as a first priority, “the NHS operating as a data-driven test bed for new technologies”.
Now the government’s consultation tells us, in our “hyper-connected world”, “international flows of personal data… underpin… the delivery of public services”.
These breathless words will be music to the ears of Google and Amazon, which have expanding healthcare divisions and are already involved in tie-ups that give access to some NHS data. Johnson sat down with Amazon boss Jeff Bezos during his trip to New York this week. Given the mutual interest in UK data rules, it stretches credulity that the prime minister spent the whole time telling the would-be astronaut off about his tax bill and talking about forests, as Johnson briefed the media yesterday.
The NHS’s door is already wide open to the profit-driven tech sector. The UK’s most popular institution is increasingly reliant on data flowing to firms such as Optum, which is already “planting seeds” in the NHS, stratifying you by how unhealthy (and costly) you are and helping shape decisions about who gets what treatments. The NHS is reliant, too, on spy-tech firm Palantir, which we were told could provide the “single source of truth” during the COVID pandemic, and whose profits soared on the back of its NHS contracts last year.
Then there’s Big Pharma, which wants granular data about how much you are costing the NHS. It can use this data to bolster its arguments that the UK should pay higher prices for its drugs, and approve them more quickly.
In recent years, pharma has shifted away from justifying its high prices on the basis of research and development costs towards highlighting the cost to health services of not buying, or sufficiently prescribing, its drugs. So information on the overall healthcare costs of individual diabetes patients – as one example of the kind of thing released to pharma companies – can be immensely useful for marketing purposes.
American drugs lobbyists “visibly bristle” at any mention of the NHS, according to the Financial Times’ Alan Beattie, detesting the way it uses its “own assessments of value for money” to hold down drug prices to reasonable levels – not just in the UK, incidentally, but also in the quarter of countries worldwide that use NHS prices as a reference.
These kinds of data-sharing aren’t what the public has in mind when they’re told that “data saves lives” and that their health data will be shared to “help us care for people in the best possible way”, as another government policy paper tells us. Greater data flows are tied to, and enable, greater influence of private companies over NHS decision-making.
Meanwhile, trust in the NHS is likely to decline as more decision-making and planning is handed over to people who know a lot about turning data into market insights and profits, but little about patients. NHS Digital boasts, for example, of how its data-sharing enabled management consultants McKinsey to come up with plans to reconfigure local hospitals.
Remember, under forthcoming NHS legislation, data-sharing no longer has to be for “the promotion of” health – only for “purposes connected with” health. As health privacy campaigners medConfidential point out, “Closing an accident and emergency unit in Chorley may not promote health, but it is definitely connected with it.”
Power and algorithms
Loss of trust, however, may be a price that US tech, pharma and private health giants are happy for the NHS to pay. They surely want to train and test data-driven management systems, treatment protocols and algorithms on the largest and most comprehensive set of healthcare data anywhere in the world. They want to develop products to sell back to the NHS, with its £140bn annual budget in England alone, and even more importantly, across the globe. These giants will be ecstatic if a UK government, desperate to give Brexiteers something to crow about, waters down rules and helps spread a new global norm for deregulated data flows.
And our government is duly proposing a raft of measures that will make it far easier to use – and abuse – more of our personal data, for profit.
One of the most alarming proposals in the new data consultation is the suggestion that the government could permit algorithmic “automated decision-making” without humans having to check those decisions or having any right of appeal.
That such systems need human oversight has been shown time and again in both the UK – where the government was forced to back down over relying on algorithms both in A-level marking and visa scoring – and in the US, where their gruesome impacts are widespread, from immigration and policing to health and social care.
Algorithms that assess people’s needs for home care have been found in US courts to be causing “irreparable harm”. An algorithm widely used in US hospitals required Black people to be sicker than white people before they were referred for extra help. Another, widely used in US hospitals and pharmacies, ingests a wide array of health and criminal justice data and assigns everyone an ‘overdose risk score’ for opioid addiction, resulting in severely ill patients being denied the pain-killing treatment they need.
And during the pandemic, of the hundreds of algorithmic tools built to catch COVID, “none of them made a real difference, and some were potentially harmful”, according to a damning analysis in MIT Technology Review.
The current safeguards need strengthening, not watering down. Already, professional health and welfare workers live in fear of making decisions that contradict algorithms that in theory are supposed to merely aid their decision-making, as Virginia Eubanks describes vividly in her book ‘Automating Inequality’. I heard the same thing from nurses in San Francisco two years ago. They told me how the human side of healthcare was being factored out of algorithms supposedly promoting efficient ways of working.
Here in the UK, the government is heavily promoting the use of AI to triage patients to decide who gets to see both GPs and consultants, despite concerns about the safety and reliability of some of these systems.
Other countries – including both China and the US – are moving towards greater, not less, human oversight of algorithmic decision-making. At this rate, the UK could end up pouring energy, money and our data into products that are so unethical that none but the most dubious regimes will ever want to buy them.
In whose interest?
Whatever happens to the proposals about human oversight of algorithms, the government has a bunch of other ideas up its sleeve to open up our data to industry demands, too.
Currently, one of the ways our health data is permitted to be shared without our consent is if it is considered to be in the “substantial public interest”. The consultation at the Department for Digital, Culture, Media and Sport notes that industry has complained it doesn’t feel confident enough using this ground, and asks industry to suggest what else could be defined as being in the “substantial public interest”.
The consultation also proposes giving companies much stronger grounds to crunch our personal data without consent in the name of research (including “technological development”), which will be given “a new, separate lawful ground” in its own right. The danger here is that ‘research’ and ‘science’ end up being defined as ‘what Big Tech does’.
The proposals also give both public and private researchers extensive rights to scoop, hoard and reuse data for purposes they haven’t told us about, and maybe haven’t even thought of yet.
Currently, in some circumstances, if health data is to be shared, a health professional must oversee the process – and health professionals are bound by a duty of confidentiality. But that requirement is to be scrapped, too. It’s suggested this change would apply only to “public health or other emergencies” – but what else might count as an emergency or public health matter in the coming years? The NHS’s annual winter crisis? Brexit fallout? COVID fallout? It’s not hard to suspect mission creep – after all, that has been the government’s modus operandi with regard to health data.
Even public-sector, academic researchers find it hard to navigate existing rules around research and data. But there are ways that could be improved for public research institutions – indeed, one of the proposals (not all of them are bad, to be fair) does just that.
But the proposals explicitly cover “commercial” as well as public entities. They appear to be designed to give a company confidence that it can train its algorithms on the personal data of UK citizens.
And just to be absolutely clear that private companies such as, say, US ‘spy-tech’ giant Palantir are protected when the government hands our data over to them, the proposals also suggest that companies carrying out public data tasks for the government “need not identify a separate lawful ground”. And they ask for further suggestions on how they could give “data intermediaries” including “industrial data platforms” (again, like Palantir) greater certainty that they do not need “recourse to consent”.
During the pandemic, most people have accepted that rules have had to be bent. But that shouldn’t be allowed to reset our expectations for the future. The government’s own medical-confidentiality advisers recently criticised the government’s “overreliance on examples from the pandemic response” to justify permanently watering down the rules that protect our health data.
More trade, less transparency
It’s clear the government is working to harmonise UK data rules with those of the US, to pave the way for some kind of trade deal with the US.
Not just that: any data-focused deal signed with the US can then act as a ratchet, driving protections down still further and trumping the safeguards that remain.
Jean Blaylock of Global Justice Now explained further: “As well as directly tampering with our data standards, it’s also possible to sign up to things in trade deals and then later say we have to change our domestic policy in order to conform with the trade deal… in effect trade agreements do tend to override things.”
Trade deals also increasingly include ‘investor courts’ in which companies, as well as nations, can sue governments if their laws hinder their profitability.
Even before a trade deal is signed, our legal protections are looking vulnerable. Many of the rules that companies using our data must follow are to be swept away and replaced with self-regulation, whilst the privacy regulator itself, the Information Commissioner’s Office, comes under increased pressure to take a business-friendly line.
Outsourced management of our data will make it far harder to find out about and challenge any abuses, too. As Mariano delli Santi of the Open Rights Group told openDemocracy: “The more data transactions there are, the more difficult it is to hold someone accountable.”
Alarming as it is, the government’s consultation is just one of a number of ways in which rules around health data are being loosened, creating more potential points where our data can leak out.
More points of leakage
According to medConfidential, the new NHS bill currently before Parliament gives the health secretary powers to waive the NHS’s duty in regard to information standards, as well as to centralise and move data around. This includes data that’s previously been firewalled in NHS Digital, the part of the NHS that is most tightly regulated and monitored as a “safe haven” for data.
The bill also gives the NHS a new “duty to share” data with all ‘partners’ in the ‘system’. What system is this, though?
The partners include private firms – which the bill also allows to take seats on new local health boards (“Integrated Care Systems”) overseeing how the NHS’s money is spent.
It’s not just private business we should be worried about. The partners also include local authorities who provide social care, whose integration with the NHS is to be put on statutory footing as part of the bill. But as Phil Booth of medConfidential points out, there’s a “dramatic difference” between having our health data in the hands of medical professionals, who can be struck off by their professional regulators if they breach a duty of confidentiality, and having it on the screens of council officials.
There are already widespread concerns over how much detail about our lives councils and government departments are able to glean by matching datasets held by various government departments with those provided by commercial providers; how they use these to make highly sensitive predictions about us and our children; and how much this has expanded during the pandemic. One COVID product purchased by councils offered to predict who might break isolation rules on the basis of classifications including “unfaithful and unsafe sex”, being “potentially aggressive”, or having dangerous pets. And this week, The Guardian exposed how the NHS App is sharing facial log-in scans with police and security agencies.
The government seems to think we should trust them to use our data, and build ever more detailed pictures of us, only in our own interests. NHSX, the government’s new health-tech unit, set out in its recent data strategy its “ongoing” work to “improve appropriate data linkage” between health data and data held by local government, the education and justice departments, and the Department for Work and Pensions. And the ‘Life Sciences Vision’ policy paper told us: “Routinely, we must ensure that data from multiple sources can be linked to create a consolidated ‘picture’ of the whole person…”
NHSX’s strategy also sets out how it will use “secondary legislation” – which is not subject to full parliamentary scrutiny and debate – to ensure its data-sharing plans don’t breach “the common law duty of confidentiality”. Some privacy campaigners see this as a hint that it is the duty of confidentiality that will be restricted, rather than the data-sharing, particularly given moves in that direction proposed in the Department for Digital, Culture, Media and Sport consultation.
So with data-protection laws, NHS laws and common-law duties all pushing in the same, deregulatory direction, protection of our health data could ultimately depend on our protections from privacy-busting and undue interference, which are rights under the Human Rights Act. Except, we know this government isn’t exactly keen on keeping that act.
The data problem we really need to fix
Maybe we do need to improve data-sharing for the benefit of our health and wellbeing. But it’s questionable whether any of the business- and trade-friendly proposals mooted by this government will address the real problem: the lack of data flowing in the other direction, from Big Tech down to those who really need it.
During the Test and Trace debacle, we saw how systems developed and overseen by the likes of Deloitte and Serco were not set up to effectively share information with the local public health professionals who needed it.
Then there’s IQVIA (formerly IMS Health), which for years has been collecting and selling information on hospital prescribing to support pharmaceutical marketing, but shares only very limited amounts of the collated data back to the NHS and severely restricts any public use of it, as doctor and writer Ben Goldacre explained in a recent BMJ piece. Despite this history, IQVIA has been the key beneficiary of increased flows of NHS data during the pandemic, amid looser information-sharing rules, according to a recent report in the Financial Times.
NHS England staff themselves have complained that they are struggling to access the data in Palantir’s COVID Data Store, whilst also telling The Register that Palantir did little that couldn’t be done with open-source tools.
The vast network of both public- and private-sector organisations and IT suppliers involved in collecting data and shaping NHS decision-making has created a massive problem of lack of interoperability – which paves the way for companies like Palantir, who offer ways to connect disparate data sources together, to profit.
Imagine a world where data was controlled in the public interest. What might that look like, in healthcare? What might we know that we don’t currently? Exactly how much they’ve spent on tech, for a start. Who’s lobbying for what changes, and in whose interests. Now that would be a transparent world. One with data – which means not just information, but knowledge and power, too – flowing down to us, not up to them.
What price trust?
The overall effect of the government’s business- and trade-friendly approach to our health data is a massive hit to trust.
Consumer watchdog Which? found that 20 million people in the UK were unaware of the government’s plans to grab their GP data – and that when they found out about it, trust fell.
As Ross Anderson, professor of security engineering in the Computer Laboratory at the University of Cambridge, has explained: “In the run-up to the creation of the [US privacy law HIPAA], it is estimated that privacy concerns led 586,000 Americans to delay seeking cancer treatment, and over two million to delay seeking mental health treatment. Meanwhile, over one million simply did not seek treatment for sexually transmitted infections.”
In the UK, the risk of delayed contact with health services as a result of concerns about data-sharing is particularly strong amongst ethnic-minority communities who have already had experience of intrusive government data collection and sharing. The low uptake of vaccinations and worse COVID mortality amongst such groups have both been attributed in part to their low levels of trust.
But there are risks for other groups, too, as the increasing joining-up of our data – across government departments and commercially available datasets – raises the prospect of discrimination by government agencies, employers, insurers and medical providers themselves. Some NHS areas have already begun to exclude people with ‘unhealthy’ lifestyles from free treatment, and to screen out some mental health service users from accessing emergency services.
Trust, especially in the UK, isn’t just in individual doctors, but in the NHS as an institution. This isn’t just about who gets to find out that you’re depressed – and bombard you with ads, or blacklist you for jobs or insurance, or even medical care. It’s also about who gets to plan and provide your healthcare, and in whose interests.