
In-person event

Making Policy Better: The Randomisation Revolution

The Institute for Government is organising a series of seminars on improving the supply of and demand for evaluation, evidence and learning in government.

The Institute for Government, in partnership with the National Institute of Economic and Social Research and NESTA, is organising a series of seminars on improving the supply of and demand for evaluation, evidence and learning in government. This follows up key recommendations in our report Making Policy Better. In the first event of the series, we looked at how randomised controlled trials could help in the policy process.

Speaker:

Dr. Rachel Glennerster - Executive Director of the Abdul Latif Jameel Poverty Action Lab (J-PAL) at MIT. With Esther Duflo and Michael Kremer, she is one of the prime movers behind the "randomisation revolution" that has transformed both the theory and practice of development economics over the last decade.
Discussants:

Jonathan Portes - Director, National Institute of Economic and Social Research, and former Chief Economist, Cabinet Office
Hasan Bakhshi - Director, Creative Industries, Policy and Research, NESTA.
Chair: Jill Rutter - Institute for Government
 

More information
   
•   Watch the video
•   Read Rachel’s presentation

Event Report

Dr. Rachel Glennerster explained that, as a policy maker, she had always been frustrated by the lack of rigorous evidence that academic research could provide. But the “randomisation revolution” had yielded specific lessons about how to improve health, education and empowerment. The key was to establish directly comparable treatment and control groups – often by pairing like groups and tossing a coin.
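
As an illustration of the pairing-and-coin-toss approach Glennerster described, the sketch below shows pair-matched random assignment in Python. It is a minimal illustration only: the units, field names and pairing measure are hypothetical, not details from the talk.

import random

def pair_matched_assignment(units, similarity_key, seed=None):
    """Assign units to treatment or control by pairing similar units
    and 'tossing a coin' within each pair."""
    rng = random.Random(seed)
    # Sort so that adjacent units are most alike on the chosen measure
    ordered = sorted(units, key=similarity_key)
    assignment = {}
    # Walk through the sorted list two at a time, forming matched pairs
    for i in range(0, len(ordered) - 1, 2):
        a, b = ordered[i], ordered[i + 1]
        # A coin toss decides which member of the pair is treated
        if rng.random() < 0.5:
            treated, control = a, b
        else:
            treated, control = b, a
        assignment[treated["name"]] = "treatment"
        assignment[control["name"]] = "control"
    # Any odd unit left over is assigned at random
    if len(ordered) % 2 == 1:
        assignment[ordered[-1]["name"]] = rng.choice(["treatment", "control"])
    return assignment

# Hypothetical example: schools paired by baseline test score
schools = [
    {"name": "School A", "baseline_score": 41},
    {"name": "School B", "baseline_score": 43},
    {"name": "School C", "baseline_score": 58},
    {"name": "School D", "baseline_score": 60},
]
print(pair_matched_assignment(schools, lambda s: s["baseline_score"], seed=1))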

Its value was to show what worked – and what didn’t – often contrary to prior assumptions. It could address a wide range of questions, was flexible and adaptable to constraints, and could be used to test theoretical insights. For example, RCTs had allowed researchers to test the claims of micro-credit by removing the selection bias of those who volunteered – and found that it worked, but had less of an effect than its proponents claimed. They had also allowed researchers to design cost-effective interventions to overcome “present bias” and increase the uptake of fertiliser among farmers in Kenya, who underinvested despite a 70% return.

Randomisation could take many forms. One option was to randomise the order in which a programme was phased in – but that risked losing the ability to measure long-term effects, since the comparison group would eventually be treated. Another option was to randomise among marginal recipients; a third was to randomise the intensity of the intervention; and a final option, for programmes from which no one could be excluded, was to use an “encouragement design”.
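
To make the phase-in option concrete, the sketch below randomises the order in which units receive a programme across successive roll-out waves, so that later waves act as a temporary comparison group. It is a hypothetical illustration; the district names and number of waves are assumptions, not details from the seminar.

import random

def randomised_phase_in(units, n_waves, seed=None):
    """Randomly assign units to roll-out waves.

    Units in later waves serve as the comparison group for earlier
    waves until the programme reaches them."""
    rng = random.Random(seed)
    shuffled = units[:]
    rng.shuffle(shuffled)
    # Deal the shuffled units into waves of roughly equal size
    waves = {w: [] for w in range(1, n_waves + 1)}
    for i, unit in enumerate(shuffled):
        waves[(i % n_waves) + 1].append(unit)
    return waves

# Hypothetical example: six districts rolled out over three waves
districts = ["District 1", "District 2", "District 3",
             "District 4", "District 5", "District 6"]
print(randomised_phase_in(districts, n_waves=3, seed=7))
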
Trials took a long time to design and forced policy makers to “chunk up” problems into testable propositions. The approach allowed a range of options to be tested at low cost, so that a solution could be designed and then scaled. Both top-down commitment and bottom-up innovation with rigorous testing were needed. But in the UK at the moment there was too much national experimentation without proper evidence.

Jonathan Portes lamented the fact that evaluation in social policy was failing to keep pace with the slow but steady advances being achieved in development economics. The STAR programme in America, which used randomisation to allocate pupils to different-sized classes, was proof that it could be done. One possible explanation for the difference was that it was much harder to isolate the impacts of social policy interventions. The new Education Endowment Foundation was a welcome sign of commitment to rigorous evaluation.

Hasan Bakhshi explained how NESTA’s Creative Credits experiment was designed to test the value added of business support for innovation. SMEs that applied were allocated free consultancy vouchers by lottery. Initial results suggested a positive impact – but it was not sustained. There was reluctance among many policy makers to subject their policies to RCTs. But RCTs should be regarded not as a one-shot “attempt to discover the optimal intervention” but as stages in a continuing process of testing, adjustment and re-testing.

Discussion focussed on the incentives both Ministers and civil servants faced around evidence and evaluation, and the potential role of the Treasury in promoting good use of evidence. This topic will be addressed in the next seminar in our series, Good Policy, Bad Politics, on 13 March.
 

Publisher
Institute for Government
