There are several ways of looking at government performance. Broadly speaking, the objective of governments is to maximise their citizens’ welfare. In pursuit of that goal, they create the conditions for economic growth, maintain the rule of law, provide public goods such as policing and defence, try to ensure their populations are healthy and well educated, share out the proceeds of the country’s growth, and regulate the behaviour of people and companies to ensure that no group is able to exercise its power at the expense of others. Different governments will have different priorities.

The ideal way to assess government performance would be to measure all the outputs that government produces or outcomes that it achieves, and compare these with the money it spends and resources it uses to assess its efficiency and productivity. This isn’t possible, given how difficult it is to define and measure many of the outputs of government (although we look at specific outputs throughout this report). A proxy for performance is whether departments are using technologies and working practices which are believed to be productivity-enhancing.[1] Again, we cover a number of these throughout the report, such as turnover, diversity, engagement and transparency.

In this chapter, we mainly use government’s own performance measures and polling on trust and satisfaction. Using government’s own performance measures – currently Single Departmental Plans – is also difficult given the quality of some of those measures, but in only three cases is performance against specific departmental objectives getting worse. Using public trust as a proxy reflects well on the civil service but badly on politicians, while satisfaction with the Government is currently at its lowest since May 2010 – but still higher than in the later periods of the Callaghan, Thatcher, Major and Brown premierships.


Government uses Single Departmental Plans to measure performance

Since 1979, UK governments have measured their performance in different ways, from efficiency and financial management initiatives in the early 1980s, to Public Service Agreements under Labour in the late 1990s, to Departmental Business Plans under the Coalition. The current framework, introduced in 2016, requires each department to have a Single Departmental Plan (SDP), which sets out activities and monitors progress.

SDPs are updated annually, stating each department’s priorities and indicating how these will be achieved. They are intended to help departments use their resources as efficiently as possible, and to serve as a performance monitoring framework. The full SDPs are for internal use and not published, so our analysis is limited to the public versions. Our understanding is that these are less detailed, particularly when it comes to information on spending and staff allocation.

The Institute for Government, the National Audit Office (NAO) and the Public Administration and Constitutional Affairs Committee all criticised early versions of SDPs for being ill-equipped to track performance, but more recent analysis by the Institute has shown that there have been some improvements.[2]

Structure of a Single Departmental Plan

All departments’ SDPs are structured in a similar way. Departments have a set of objectives (ranging from three for the Cabinet Office to seven for the Home Office). Each of these is broken down into between one and 13 sub-objectives, which provide greater detail. Departments also list the activities that they will undertake to achieve these.

At the end of each objective, SDPs list (where possible) the quantitative measures that departments consider appropriate for evaluating performance against each objective. There are valid reasons why some departments may find it harder to measure success. For instance, the achievements of the Foreign Office in promoting British interests internationally are harder to quantify than those of the Department of Health and Social Care (DHSC) and others responsible for delivering services.

To assess departments’ progress against their own objectives, we reviewed every measure specified in the SDPs and judged whether they had moved in the direction that government intended. Where the data allowed, we looked at change over the past five years.

We then applied a modified RAG rating:

  • red – declining performance
  • amber – broadly flat performance, or no discernible trend
  • green – improving performance
  • grey – no data, or data not suitable for judging performance.

As an example, one of the measures at the Department for Work and Pensions (DWP) is ‘UK employment rate’ which, given its objective to ‘support people into work’, we assume the department would want to see increase. The Office for National Statistics’ Labour Force Survey shows that between the second quarters of 2013 and 2018 the employment rate in the UK rose from 71% to 76%. We therefore considered this measure to represent improving performance and classified it as ‘green’. We then combined ratings for individual measures and applied a RAG rating to indicate performance for each objective as a whole.
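The classification step described above can be sketched as a simple function. This is an illustrative reconstruction, not the report’s actual methodology: the threshold for treating a change as ‘broadly flat’ is our own assumption.

```python
def rag_rating(start, end, want_increase, flat_threshold=0.01):
    """Classify a measure's trend as red/amber/green/grey.

    start, end: measure values at the start and end of the period.
    want_increase: True if the department wants the measure to rise.
    flat_threshold: relative change below which the trend counts as
        'broadly flat' (an illustrative assumption, not the report's rule).
    """
    if start is None or end is None:
        return "grey"  # no data, or data unsuitable for judging performance
    change = (end - start) / abs(start)
    if abs(change) < flat_threshold:
        return "amber"  # broadly flat performance
    improving = (change > 0) == want_increase
    return "green" if improving else "red"

# DWP's 'UK employment rate' rose from 71% to 76% (2013 Q2 to 2018 Q2);
# the department wants it to increase, so the measure rates green.
print(rag_rating(71, 76, want_increase=True))  # green
```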

Percentage of SDP objectives with performance measures, May 2018

Altogether, the 18 central government departments have 87 objectives. Of these, 51 (or 59%) have two or more associated measures. There is significant disparity between departments in the number of measures each has identified: DHSC, DWP, the Ministry of Defence (MoD) and Her Majesty’s Revenue and Customs (HMRC) have at least two measures for every objective, while the Foreign Office has only one measure in its entire SDP, despite having four objectives.

We did not analyse all the measures listed by departments because – in our view – around a third do not sufficiently capture performance against their objectives. There are two reasons for this:

  • Some measures capture things outside of a department’s influence. For example, the Department for Digital, Culture, Media and Sport (DCMS) lists data on ‘Subjective wellbeing’ as a measure of its success, even though wellbeing is influenced by factors that predominantly lie outside of that department’s control.
  • Other measures did not relate closely to their objective. For instance, it is not clear how the number of visits undertaken by ministers in the Department for International Trade (DIT) captures how well they are delivering on their objective to “use trade and investment to underpin the Government’s agenda for a global Britain and its ambitions for prosperity, stability and security worldwide”.

We excluded from our analysis any objective that had fewer than two sufficient measures, as defined by the criteria outlined above. We did this because we found that a single measure generally cannot capture a whole objective. For instance, the Home Office has an objective to ‘Reduce terrorism’, but its only performance measure is the number of people arrested for terrorism-related offences in a year. While this gives a partial indication of success against that objective, it tells us little about the department’s performance across its other anti-terrorism activities.

However, more measures do not always enable more comprehensive performance monitoring. Even when an objective has multiple performance measures, these do not always cover all elements of the objective. For example, one of the objectives for HMRC is to ‘Transform tax and payments for our customers’, which is divided into three sub-objectives – ‘Ensure a smooth and orderly EU exit’, ‘Support welfare and pension reform’, and ‘Transform for our customers’. The objective has seven performance measures, but all of them relate to customer experience and none evaluates success in relation to Brexit or welfare and pension reform. Since our assessment of performance is based on the measures supplied by departments, our overall ratings of objectives only reflect the aspects of an objective for which there are measures.

We were left with just 31 objectives (out of 87) that we judged to have two or more good enough measures to assess government’s performance. For those objectives that remained, we then combined the ratings for individual measures to calculate an overall performance rating for the objective.
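One plausible way to roll up individual measure ratings into an objective-level rating might look like the sketch below. The report does not specify the exact combination rule, so the rule here – unanimous measures take their shared colour, mixed or flat measures yield amber – is an assumption, chosen to be consistent with the mixed-trend examples discussed later.

```python
def combine_ratings(measure_ratings):
    """Roll up measure-level RAG ratings into one objective-level rating.

    Rule (an assumption for illustration): ignore grey measures; if the
    remaining measures all agree, the objective takes that colour; if they
    mix green and red, or are mostly amber, there is no clear trend (amber).
    """
    rated = [r for r in measure_ratings if r != "grey"]
    if not rated:
        return "grey"
    if all(r == "green" for r in rated):
        return "green"
    if all(r == "red" for r in rated):
        return "red"
    return "amber"  # mixed or flat: no clear positive or negative trend

# An objective with one improving and one declining measure rates amber.
print(combine_ratings(["green", "red"]))  # amber
```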

Performance of departmental objectives according to Single Departmental Plan measures

Out of the 31 SDP objectives that we were able to use to track performance, 12 (39%) indicate that the Government is making progress. On a further 16 (52%), departments have made no clear progress.

In some cases this is because some measures show improvement while others show decline. For instance, the objective of the Department for International Development (DfID) to ‘Strengthen resilience and response to crisis’ is evaluated by two measures which show different trends: the department is reaching more people through humanitarian assistance, but spending less on climate adaptation.

However, the picture is often more complicated – for example, DCMS’s objective to ‘maximise social action, and participation in culture, sport and physical activity’ has four measures. The percentage of adults physically active has shown little change, while the percentage of adults engaging in arts, heritage, libraries, museums and galleries is a composite measure where there is no discernible trend across the four individual sub-measures. Finally, awareness of First World War centenary activities has decreased, while visitors to DCMS-sponsored museums and galleries are increasing. In cases such as this we have given an amber rating since there is no clear positive or negative trend in performance.

It is encouraging that only three objectives are getting worse. However, two of them – ‘Make home-ownership easier and reduce homelessness’ at the Ministry of Housing, Communities and Local Government (MHCLG) and ‘Support the NHS’ at DHSC – are areas in which declining performance could have significant repercussions for people’s health, safety and quality of life.

The final point the chart illustrates is that at present SDPs do not provide a comprehensive overview of government performance. Too many objectives have one or no measure, and out of the measures they do include, we found that many were not good indicators of performance. This may explain why neither individual departments nor the Treasury are consistently using SDPs to track performance. In a recent NAO survey of staff involved in business planning across all departments, 40% said that their department’s use of “performance information/data for decision-making” was “neither strong nor weak”, and 15% said it was “weak”. The same report also found that Treasury spending teams “do not routinely refer to measures set out in SDPs when assessing departments’ performance”.[3] SDPs are meant to be the main government framework for tracking performance, but they are currently underused. Departments should improve measures so that both they and spending teams have better tools to evaluate success and identify potential problem areas.

SDPs don’t just tell government how well it is operating – they are also meant to “enable the public to see how government is delivering on its commitments”.[4] Improvements to the SDPs as performance management tools would therefore allow departments to demonstrate to the public that they were delivering on their promises. What the public think about government is important. The amount of trust that people place in the government can be a proxy for transparency and honesty. Similarly, levels of public satisfaction can tell us whether government is actually serving the needs of the electorate. Of course, the public view of a government’s effectiveness is influenced by other factors such as individual political beliefs. But asking the people whether they are happy with how the country is being run provides a further way of understanding performance.


Trust in the civil service has increased, but trust in politicians remains low

Trust in professions to tell the truth, 1983–2018

Ipsos MORI’s Veracity Index presents the public with a broad list of professions and asks them, “For each, please tell me if you would generally trust them to tell the truth or not?”

The trend for civil servants is encouraging: when the survey began in 1983 only 25% of people trusted them, but this figure now stands at 62%. There is still room for improvement – they are not seen as any more trustworthy than the ‘ordinary man/woman on the street’ – but confidence in the civil service has more than doubled over the past few decades.

The Veracity Index paints a bleak picture of public perceptions of politicians. Since the survey began in 1983, ‘Politicians generally’ and ‘Government ministers’ have consistently been seen as the country’s least trusted professions (with occasional competition from journalists). Regular polling, which started in 1997, showed that trust in both was lowest in 2009 – the year of the MPs’ expenses scandal. However, the 2017 revelations of bullying and sexual harassment in Parliament do not appear to have negatively influenced the public view of politicians – Ipsos MORI conducted a second wave of polling following this scandal and found that trust had not decreased.[5]

Net satisfaction with ‘the way the Government is running the country’, 1977–2018

How satisfied people are with the Government is another proxy for performance. If they see it as competent and as fulfilling its promises then they should be more satisfied. There are few regular polls asking the public explicitly what they think about this, and respondents may be answering based on their political views rather than an assessment of administrative effectiveness. Out of the different opinion polls, Ipsos MORI’s Political Monitor series provides the best insight into changing perceptions of government. It has been conducted regularly (and now roughly monthly) since 1977, and asks the public a question that captures views on performance: “Are you satisfied or dissatisfied with the way the Government is running the country?”

In September 2018 net public satisfaction with the Government dropped to minus 52%. This is the lowest level recorded under the Coalition and Conservative governments – the last time the public were that dissatisfied was under Gordon Brown in 2009.

The poll doesn’t tell us why satisfaction with the Government is low. Several factors relating to Brexit may have driven negative perceptions: lack of consensus within the Conservative Party, public frustration with the slow pace of negotiations, and increasing concern over the potential impact of a no-deal Brexit.

Minus 52% net satisfaction may be a record low for governments since 2010, but it is by no means the lowest ever recorded. Current public satisfaction with the May Government remains higher than for Labour under Gordon Brown in 2009, much of John Major’s second term, and the end of the Callaghan and Thatcher governments.

In this chapter we evaluated performance in two ways – government’s own SDPs and public opinion. Our analysis of SDPs shows that performance for the majority of objectives is not getting clearly better or worse. However, the availability and quality of measures was an issue – we found that many objectives have fewer than two measures and that where they did have measures, these were not always good indicators of how well a department is performing. Public assessment of performance is also mixed – trust in civil servants continues to grow, but politicians and ministers are among the least trusted professions. Finally, the public have become more dissatisfied with government in the past year, taking satisfaction to its lowest point since 2009.