
In-person event

Is evidence enough? The limits of evidence-based policy making

A conversation with Jeremy Hardie and Nancy Cartwright, chaired by Jill Rutter.

Speakers:

  • Nancy Cartwright, Professor of Philosophy at UC-San Diego and London School of Economics
  • Jeremy Hardie, Fellow of King's College London and Vice President of the Royal Economic Society
  • Chair: Jill Rutter, Institute for Government

Opening the event, Jill Rutter noted that over the past year the Institute has run a number of events on the use of evidence and evaluation in policy making. Nancy Cartwright and Jeremy Hardie’s book “Evidence-Based Policy: A Practical Guide to Doing It Better” provides some cautions on the usefulness of “evidence” in policy making.

Nancy Cartwright opened the discussion by emphasising the need to make the newly established ‘what works centres’ for social policy as effective as possible, given the scale of resources going into them. She also noted the optimism that has been invested in the centres and expressed concern about what it would mean should they fail to live up to expectations.

Both speakers were careful to point out that they were not against evidence-based policy, or particular research designs such as randomised controlled trials (RCTs), per se. Cartwright emphasised the improvements made in recent years around:

  • vetting studies for internal validity (how much confidence we can have in the causal claims they make)
  • synthesising existing research
  • publishing these reviews.

Nevertheless, Cartwright went on to outline some of the limitations of evidence-based policy, focusing particularly on the external validity of policy evaluations (whether the same intervention would achieve the same results in a different population). Jeremy Hardie gave the example of an RCT in the USA that found positive results from a reduction in class sizes. The policy was rapidly scaled up but failed to replicate the effects found in the initial study, because less experienced teachers had to be employed to staff the additional classes and suitable buildings were not available.

Cartwright argued that while social scientists often stress that the two populations must be “sufficiently similar” to ensure external validity, this condition is too vague to be useful. She went on to give a more precise pair of conditions that are jointly necessary and sufficient for external validity:

  • there must be the same underlying causal structure in the two populations
  • there must be the same distribution of ‘supporting’ or ‘helping’ factors (what social scientists might call mediating variables, or economists call interaction variables).

Both speakers argued that even these conditions remain of little use in practice, since policy makers will find it difficult to determine whether they hold in a particular instance. Jeremy Hardie drew a comparison between the Education Endowment Foundation’s warnings about the external validity of its evidence base and a road sign reading ‘beware avalanches’: although both are accurate warnings, it is unclear how the reader is supposed to alter their behaviour in response. Cartwright called for systematic study to aid better understanding in this area.

Hardie emphasised that there are no hard and fast rules (or algorithms) for understanding the causal process that underlies a particular policy intervention, and hence for judging external validity. Randomised controlled trials, for example, do not tell you anything about how the intervention and the outcome of interest are linked. Rather, this has to be based on judgement, which he acknowledged is a somewhat “mysterious notion”.

Hardie proposed a series of tools designed to make room for challenge and debate over the external validity of research findings. One example is a pre-mortem exercise, in which policy makers adopt a structured approach to thinking through, before implementation, why a policy might not have the intended effects.

Lastly, Hardie cautioned against making exaggerated claims for research-based rules that professionals can follow to achieve particular outcomes, on the grounds that this might prompt a backlash from public service professionals who feel their own role is being undervalued.

Questions from the floor largely focused on a series of difficulties in using policy evaluations to inform policy, including: programmes being implemented differently in different institutions (e.g., project-based learning in schools); whether measurable dependent variables will remain linked to the underlying variable of interest (e.g., exam results and better education); and whether it will be politically feasible to implement evidence-based policies (e.g., if the tabloid press is against them). Audience members also challenged Hardie and Cartwright on whether the problem of external validity would be better approached through empirical rather than philosophical methods, and how one could go about researching the actionable or concrete conditions for external validity.

Sam Sims

 
