Making Policy Better Series: Good Policy, Bad Politics

Tuesday 13 March 2012, 12:30

Event with:
Michael Kell, Chief Economist, National Audit Office
Steve Webb MP, Minister for Pensions
Sharon White, Director General, Public Spending, HM Treasury
Kitty Ussher, Smith Institute

This is the second seminar in our Making Policy Better series, organised jointly with NIESR and NESTA and supported by the Alliance for Useful Evidence. In the first event, we looked at new ways in which evidence could help policy makers make better decisions. Many in the audience felt that there was a problem with the lack of demand for evidence and evaluation from Ministers and civil servants. This was the subject of the second seminar, which took place under the Chatham House rule.

Much of the focus in the evidence debate in the past had been on the supply of evidence: the Performance and Innovation Unit's 2001 report on analysis and evidence in government devoted only one of its 11 chapters to the demand for evidence. There is more analysis that could be done on demand-side factors, such as the use of VfM Direction Letters from Accounting Officers.

Regarding spending departments, it is difficult to distil specific information about the amounts spent on evaluation, but there were suggestions that expenditure varied a lot between departments, without obvious reasons why that should be the case. Departmental culture also seems very important: the Department for Transport has long been renowned for the quality of its policy appraisal yet is less well regarded for its evaluation, while the Department for Work and Pensions has a long-standing reputation for being strong on evaluation. Some of the analytically weaker departments needed to be clear and transparent about their policy logic, and about how they would quantify impacts, so that proper cost-effectiveness evaluation could underpin better-informed decision making. Analysts had an important role to play in ensuring decisions were made in the light of evidence; a "name and shame" approach along the lines of capability reviews might influence senior management, but its impact on Ministerial incentives was likely to be smaller. The more independent "what works institute" now being explored by the Cabinet Office could also have a role.

There were some real conflicts between timescales on the demand and supply sides, notably the difficulty of reconciling the timing of new research with the political cycle: answers could come well after the Minister who needed them had moved on. But there was usually some evidence available, from what had been tried in the past, to international experience which might be applicable, to early findings from ongoing studies. The Pensions Commission provided a model of a robust evidence-based process which led to a set of policy conclusions around which a consensus could form, and which allowed previously unthinkable options to be accepted (see the Institute's case study on the Pensions Commission in our January 2012 'S Factors' report).

The increasing availability of big data sets opened up new possibilities for more evidence-driven decisions. On some issues Ministers' minds were genuinely open to persuasion by the available evidence; in those cases it was important to surface the evidence and then make a decision. Most Ministers wanted to make good policy choices and do things that worked, to leave a legacy. But they were also under pressure to make decisions and not to risk things going wrong, and that often determined where they and their advisers focused. Values and politics, in particular where the public stood on an issue, mattered too. "Evidence" would never be the sole determinant of policy choices.

Departments could and should make more use of academic links. It is important to identify academics who are capable of interacting with policy makers; they are also often a lot cheaper than consultants. There is scope for evidence to influence policy making while parties are in opposition, although it can then be difficult to generate resources to support future policy development.

Excessive turnover in the civil service, and consequent low levels of expertise among officials, could also depress the demand for evidence. The spending teams in the Treasury had a potentially important part to play, but turnover levels in the Treasury were very high, which could reduce their ability to scrutinise policy effectively; there was also some debate about whether the Treasury had lost some of its challenge function under the last government, when it was a more activist policy-making department. Part of the answer might lie in changing civil service accountabilities and incentives to make sure policy was based on robust evidence.

In the next event on 23 April we will look at the issue from the other end: the role 'armchair evaluators' can play in pushing for better policy.

Jill Rutter and Benoit Guerin

Download the Making Policy Better report.