With stock markets plunging, central banks stitching together economic rescue plans, and the globalised world breaking up, you don’t have to look far to find parallels between the coronavirus pandemic and the global financial crisis of 2008.
Another less noted similarity is the extraordinary influence of complex mathematical models on the highest levels of decision-making. In 2008, they were at the root of the financial crash. This time, they have delayed action on tackling the pandemic.
The role of modelling burst into the open last Monday, when the UK government switched its strategy on the virus. Gone was the idea to allow it to pass through the population in a managed way (and build up ‘herd immunity’), and in came complete suppression. It soon became clear why.
A shocking new analysis from disease modellers at Imperial College suggested that 250,000 people would die under the old strategy. Some have reported the U-turn as a triumph for the modelling team, but that’s not the full story. Buried in the report was the admission that only “in the last few days” did the modellers update an assumption about the demand for intensive care beds. The demand had been assumed, based on pneumonia data, to be half the actual level observed elsewhere. Earlier versions of the Imperial College model, containing the errant assumption, had been informing UK and US government policy on the virus for “weeks”.
The Health Secretary Matt Hancock, who days before had boasted that the abandoned strategy was built on “the bedrock of the science”, must have felt the earth shake.
Richard Horton, a doctor and the editor of the medical journal The Lancet, is one of many experts who are angry and looking for answers. The ‘new data’ was not new. Research from Chinese scientists in late January established the percentage of coronavirus patients needing intensive care. “We have lost valuable time,” Horton wrote in The Guardian. “There will be deaths that were preventable. The system failed. I don’t know why.”
The ‘computerised crystal ball’
Another field with a history of abstract mathematical models seeding crises is economics. Since the 1960s, economics has aspired to be a ‘real’ predictive science, attracting physics PhDs in droves to Wall Street and the City of London. The global financial crisis exposed the elegant equations propping up the complex derivatives markets as a sham.
The story of Long-Term Capital Management (LTCM), an American hedge fund backed by two Nobel laureates in economics, is another example. The fund used ‘rocket science economics’ to calculate risks to decimal point precision. Then, in 1998, it lost billions, and had to be bailed out before it crashed the financial system.
Models can have a mesmerising hold on minds. They can cut through the messy real world and give precise numeric answers. Like a “computerised crystal ball”, they appear to see what will happen in future. But it’s often only an illusion. There’s a hidden fragility in the many assumptions at the base of the calculations. Worse, fixating on the model can lead to overconfidence and the downplaying of other sources of knowledge and expertise, like the general who mistakes the map for the territory. There are clues that a similar dynamic may have played out in the British coronavirus debacle.
Rory Stewart, a critic of the government’s coronavirus complacency, blames the Prime Minister and his advisers for being “obsessed with the idea that they can precisely, scientifically model the way that this disease is going to go”. It fooled them into thinking they were in charge as the situation spun out of control.
Reading the Imperial College report – headed by the well-respected physicist turned modeller, Professor Neil Ferguson – you get the impression that the impact of the virus can be predicted and managed. It is all about doing ‘the right thing at the right time’. There’s a surprising lack of urgency. Their strategy focuses on the weekly number of patients in critical care testing positive for coronavirus, as “testing is most complete for the most severely ill patients” – even though the focus on hospital testing leaves a “two to three week lag” before the impact of any intervention can be known. Think of where Italy was three weeks ago.
Questions also need to be asked about whether the desire for more robust data from hospital testing contributed to bizarre, and widely criticised, decisions such as abandoning testing in the community and the later-reversed decision to give weekly, rather than daily, updates on the geographical spread of the coronavirus.
Just like the financial modelling which helped precipitate the financial crisis, the coronavirus modelling rests on assumptions. Some of these are based on research; others are educated guesses. The twenty-page Imperial College report includes the words ‘assume’ or ‘assumption’ 26 times. The modellers “assume 70% of households comply with the policy” of isolating at home for seven days if someone has symptoms. The report also states that “on recovery from infection, individuals are assumed to be immune to re-infection in the short term”. If any of these assumptions are wrong, as was the case for the need for intensive care beds, then the seemingly solid predictions crumble.
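To see how fragile such projections can be, consider a deliberately simplified toy simulation – a basic SIR (susceptible–infected–recovered) model, not the Imperial College model, with every number below invented for illustration. A small change in one assumed input, the reproduction number R0, noticeably shifts the projected peak of simultaneous infections:

```python
def sir_peak(r0, population=1_000_000, gamma=1/7, days=365):
    """Peak number of simultaneous infections in a toy SIR model.

    Illustrative sketch only. gamma is the recovery rate
    (one over an assumed 7-day infectious period), and the
    transmission rate beta is derived from the assumed r0.
    """
    beta = r0 * gamma
    s, i = population - 1, 1.0   # start with a single infected person
    peak = i
    for _ in range(days):        # simple one-day time steps
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        peak = max(peak, i)
    return peak

# Nudging the assumed R0 from 2.2 to 2.6 markedly raises the projected peak.
print(f"peak at R0=2.2: {sir_peak(2.2):,.0f}")
print(f"peak at R0=2.6: {sir_peak(2.6):,.0f}")
```

The point of the sketch is not the numbers themselves but their sensitivity: a real model stacks dozens of such assumed inputs, and an error in any one of them propagates through every headline figure.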
The sidelining of other scientists and experts
It is not anti-science to point out these weaknesses. The father of the scientific method, Francis Bacon, wrote of the need for “two kinds of thinking… both penetrating and comprehensive”. The Imperial College report was penetrating, but it was far from comprehensive. As Trevor Bedford, a professor of epidemiology at the University of Washington, wrote on Twitter: the modelling doesn’t consider reducing transmission by a huge rollout of testing as South Korea has done, the use of mobile phone location data, or tests to identify individuals who have had the virus and recovered. It doesn’t take into account higher death rates when health systems are overwhelmed.
The story of Britain’s coronavirus denial is not just about one influential study. It’s about a failure of political judgement: scientists advise, and politicians decide. At crucial moments, political decisions prioritised the economy, placed enormous faith in a single scientific model, and sidelined other experts and scientists.
Now is the time to listen to traditional ‘shoe leather’ epidemiologists, doctors, and the World Health Organization. We must also learn from what other countries have done. When the storm has passed, we must have answers on why we delayed bold action for so long.