One of the big contrasts between making policy and, say, making chocolate bars is that the latter are trialled and tested before they move into full scale production. Most innovations fail to make it from concept to supermarket shelf.
Before we allow any new medicines on the market, we subject them to a battery of tests to show their efficacy and to test for side-effects. Then the National Institute for Health and Clinical Excellence makes an assessment of their value for money and finally they are adopted for use.
We rarely apply this approach to new policies. Anything short of moving straight to full-scale implementation is seen as indecisive. And yet, as Lady Hamwee, the Lib Dem Home Affairs spokesman, points out, we can’t know how things will work out in practice until we try them:
"There are concerns about things like where the boundaries are between the new commissioners and chief constables and seeing something working as a pilot ought to give a better basis for assessing the way the whole thing will operate."
This clearly plays into Coalition Agreement politics. But, taken at face value, piloting seems a sensible precaution before making a major change. The lack of appetite for trying things out and learning from those attempts is a theme of our Making Policy Better report, published earlier this week.
Indeed, the ministers we interviewed as part of the research explained how policies obtained an unstoppable momentum. One said:
"you can get into the situation where you end up defending a policy, not because it’s a particularly good policy but because it’s what you’ve got... the momentum of events, you know, it carries you along and suddenly you’re locked in and you haven’t got any options, you’ve just got to do whatever it is."
We also found that, when policies were evaluated, those evaluations were often not used or learnt from. Ministers saw them as a potential source of political embarrassment, or they were focused on the next thing rather than a policy introduced by their predecessor. Too often the timetable for evaluation meant that by the time there were results, the policy had already changed.
That is why we are recommending a new Head of Policy Effectiveness to oversee government evaluation efforts – to make sure they are independent, timely and learnt from.
These are all elements of a more general issue: how to make better use of research evidence on what works to inform policy and implementation decisions. This was the subject of a roundtable we held jointly with NESTA last month (a report is available here - PDF, 82KB).
Chaired by Universities and Science Minister, David Willetts MP, it looked at why research on what works translates so poorly into public services. The headline message was that there are examples of good practice (which NESTA has published here). But policy has no equivalent of medicine's clinician who also does research, and we do not have the same institutions for testing the effects of policy interventions that are starting to emerge in the US.
If we are going to make policy better we need to be more willing to test ideas before moving to nationwide roll-out – and, as we argue in System Stewardship, we need to be able to build in scope for adaptation as we see how they land on the ground.
Above all, we need to change a culture which sees changing course in the light of evidence as a sign of weakness.