20 December 2013

The NAO’s new report on use of evaluation in government shows how patchy performance on evaluation is across government. Policy making will only improve if it is made more systematic and independent.

We have been waiting a long time for the NAO to produce its verdict on evaluation in government, but it is welcome nonetheless.

What’s wrong
At a time when budgets are under sustained pressure, it seems bizarre that government has adopted such an ad hoc approach to knowing whether its policies are working or not. The NAO points to:
• Limited references to evaluation evidence in spending bids
• Little systematic evidence on how evaluations have been used to inform policy decisions
• Ad hoc models for commissioning evaluations
• Cancellations of evaluations
• Poor access for external researchers to government datasets.

The NAO points to a range of “barriers to the production and use of evaluation evidence, on both the demand and supply sides. Chief analysts and their evaluation staff consider evaluation timescales and a lack of demand from their policy colleagues as key issues. We believe a key factor is the lack of incentives for departments to generate and use evaluation evidence, with few adverse consequences of failing to do so”. As such, it echoes the findings of our 2012 report, Evidence and Evaluation in Policy Making: A Problem of Supply or Demand.

What’s getting better
But there have been some positive developments while the report has been under preparation:

• First, as the NAO report points out, the Civil Service Reform Plan makes accounting officers (usually the permanent secretary) responsible for policy quality: permanent secretaries must “be accountable for the quality of the policy advice in their department”, and must also be “prepared to challenge policies which do not have a sound base in evidence or practice”. It is hard to see how they can do this without robust evidence on the impact of past policies. The Department for Education also commissioned Ben Goldacre, of Bad Science fame, to review the way it uses evidence.
• Second, the government launched its What Works Centres in March – including one on local economic growth, an area the NAO singles out as particularly weak on past evaluation.
• Third, if a bit belatedly, the Treasury and Cabinet Office have started to make clear the links between evidence of cost-effectiveness and value for money.

The missing actor
Those are all useful starts – and the NAO report itself contains some useful prescriptions for making use of evaluations by government more rigorous and more systematically linked into the policy process.

But it ends with a strangely passive recommendation: “the government should consider how evaluation evidence can be used to support greater scrutiny by and accountability to Parliament, with a view to enhancing the robustness, credibility and impact of its evaluation activity”. This should not be for government to decide: it should be for Parliament – select committees and the PAC – to demand higher standards of evaluation evidence from government in order for it to be able to scrutinise new proposals and hold government to account for past decisions. That scrutiny failure is part of the reason why government has been able to treat evaluation so cavalierly up to now.