Comment

New approach needed to avoid Covid data disputes and modelling misunderstanding

The reaction to SAGE modelling has again shown up flaws in how science has been used in the pandemic, says Sam Freedman

The looming prospect of further Covid restrictions has again put a spotlight on the scientific modelling underpinning government decision making. Several journalists, from publications that have opposed many restrictions throughout the pandemic, pounced on an “admission” from SAGE member Graham Medley that his colleagues don’t model every scenario. This, they argue, shows the modellers are deliberately emphasising the worst case so as to scare politicians and the public into compliance with potentially unnecessary rules.

This is an unfortunate misunderstanding, deliberate or not, though Medley certainly created plenty of potential for confusion in the way he addressed the issue on Twitter. In reality the model under discussion, published by SAGE’s modelling group last Wednesday, did not present a worst case scenario: it showed a wide range of outcomes. For instance, under a scenario with no further restrictions it suggested deaths could peak anywhere between 600 and 6,000 a day. Critically, the authors made clear that the outcome is highly uncertain and will depend on a range of factors about which they had to make holding assumptions, not least the severity of Omicron compared with Delta, and the behavioural impact of non-mandated guidance, as seen in the ‘soft lockdown’ we are currently witnessing.

SAGE’s paper is not the doomsaying script its critics portray

It is reasonable to ask whether scientists and officials could have communicated this nuance better, with more information about the limitations of this type of modelling, given that most people will not read all the details. But there is no way you could read the actual paper, in good faith, and think the authors are arguing that reality will definitely match even the very wide ranges given. They are trying to offer a reasonable indication of what might happen, but the primary purpose is simply to give a sense of what impact different restrictions would have.

If I were a politician or adviser my main takeaway would be that restrictions, if imposed now, would make a significant difference to the numbers. But not that the numbers will unquestionably be high enough to require restrictions. Where you’d go from there depends on how wedded you are to the precautionary principle and your assessment of the costs of restrictions (which have, unhelpfully, not been modelled at any point during the pandemic). As we can see, these are, reasonably enough, the questions that the cabinet are asking.

So it is wrong to accuse the modellers of deliberately trying to exaggerate the threat from Omicron. But the critics do have a point about the way these models have been used by other parts of the media and, sometimes, by government spinners. There’s no question that SAGE modelling has often been misrepresented, with worst-case scenarios read as straightforward predictions, as this weekend’s headlines like “Scientists warn infections could hit 2 million a day” attest. Yes, that figure is the top of the range given in the paper, assuming no further restrictions, but it is definitely not a central assumption or a prediction.

The reaction to SAGE modelling shows up problems with how the UK approaches data

Covid has highlighted how difficult we find it, as a society, to talk about uncertainty. Because the experience of illness and lockdowns has had such a direct, personal impact on all of us, it’s hardly surprising that people are desperate for some sense of certainty, one way or the other. It’s inevitable that, with a complex, fast-moving news story, numbers get shorn of their caveats and context.

But the problem goes much wider than Covid. How many people, for instance, realise that the Office for Budget Responsibility’s three-year projections of public debt and GDP have an average error rate that dwarfs the sorts of tax rises and spending decisions that dominate budget conversations? Or have any sense of the huge uncertainty and complexity that sit behind a climate change headline such as “World on 'catastrophic' path to 2.7C warming, warns UN chief”?

The astonishing increase in computing capacity over the past few decades has transformed the ability of experts to create ever more detailed and useful models for thinking about complex systems. These can be, and often are, of great assistance to policy makers. But they can also create an illusion of knowledge when misrepresented or oversold. The increasing importance of statistical analysis to politics has not been matched by an equivalent increase in numeracy amongst politicians, journalists or the wider public, which makes this risk a growing and ever more dangerous one.

The use of data in government – and wider society – needs refreshing

There are no easy solutions, but I would encourage serious media outlets to develop an intentional language of uncertainty that helps people think probabilistically without requiring any formal maths. For example, rather than saying “scientists warn X could happen”, say “scientists think that X is highly likely”, “possible but unlikely” or, where possible, “that there is a 10% chance”.
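To make the suggestion concrete, here is a minimal sketch, in Python, of what such a calibrated vocabulary might look like, loosely modelled on the IPCC’s likelihood scale. The exact bands and phrases are illustrative assumptions for the sake of example, not a proposed standard.

    # Illustrative only: a calibrated "language of uncertainty",
    # loosely modelled on the IPCC likelihood scale. The bands and
    # phrases are assumptions for this sketch, not a proposed standard.
    LIKELIHOOD_BANDS = [
        (0.99, "virtually certain"),
        (0.90, "very likely"),
        (0.66, "likely"),
        (0.33, "about as likely as not"),
        (0.10, "unlikely"),
        (0.01, "very unlikely"),
        (0.00, "exceptionally unlikely"),
    ]

    def describe(probability: float) -> str:
        """Translate a probability into a standard verbal phrase."""
        if not 0.0 <= probability <= 1.0:
            raise ValueError("probability must be between 0 and 1")
        # Bands are ordered from most to least likely, so the first
        # threshold the probability clears gives the right phrase.
        for threshold, phrase in LIKELIHOOD_BANDS:
            if probability >= threshold:
                return phrase
        return "exceptionally unlikely"

    # e.g. rather than "scientists warn X could happen":
    print(f"Scientists think X is {describe(0.1)} (roughly a 10% chance).")
    # -> Scientists think X is unlikely (roughly a 10% chance).

A shared lookup of this kind is how the IPCC keeps its reports consistent: authors pick the phrase that matches the band, rather than improvising their own hedges each time.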

Thinking longer term, it is also worth considering a “maths for life” course for the substantial majority of 16- to 18-year-olds who do not take maths or statistics at A-level. The UK is unusual in ending mandatory maths education at 16 and has low levels of adult numeracy compared with other developed countries.

But we also need humility from those who are comfortable with numbers and can throw them around effortlessly. It’s important, for instance, that modellers publish post-mortems explaining why numbers turned out differently, as the OBR does with its forecast evaluation report. It’s all too easy to use numerical expertise to batter an opponent who may have entirely valid, unconsidered points about whatever topic is under discussion. Numerical elitism will only lead to disengagement and the belief, as we’ve seen this past weekend, that “experts” are using mathematical witchcraft to get their way. Everyone involved in communicating complexity could learn a lot from watching Chris Whitty, whose press conferences are a masterclass in presenting abstract ideas in a humble and clear way. If only we could clone him.

 
