20 July 2012

The Civil Service Reform plan announced the end of Capability Reviews. What did the final assessments tell us and what should come next?

“It is the right time to change the current arrangements” according to the Civil Service Reform Plan. Capability Reviews, which started in 2005 and have just completed a third and final phase, are to be scrapped in favour of a new ‘departmental improvement model’ (a DIM) and ‘departmental improvement plans’ (DIPs).

Is the Civil Service any more capable?

Before consigning Capability Reviews to history, we have taken a look at the aggregated data from the last two phases (2008/09 and 2011/12) to see what story it tells.

Plenty of caveats are required here. Comparison is fraught with difficulties, not least that the results were not intended to be directly comparable across departments (though this hasn't stopped departments doing so informally in the past). The latest round was also based on self-assessments by departments, whereas the previous rounds were externally conducted and run from the Cabinet Office. (It is therefore ironic that the Cabinet Office is the only department not to have completed a Review in this final phase.)

The table below gives all the RAG ratings from phases 2 and 3, scored from 1 (red / serious concerns) to 5 (green / strong). The final column shows the change in each department's aggregate score, and the final row shows the change in the aggregate score for each theme.

[Table: Capability Review RAG ratings, phases 2 and 3]
Source data (.xlsx)

Several intriguing findings emerge:

- Overall, the score across all the major departments has increased only very marginally between phases 2 and 3 – from 505 to 509 on our scoring (an increase of under 1%)
- Seven departments had lower scores in phase 3 than in phase 2, while six had higher scores. The Department of Health and the Department for Work and Pensions fell furthest, while the Ministry of Justice and HM Revenue and Customs increased most
- The four most capable departments in phase 3 were the Department for International Development and the Department for Education (most and second most capable in phase 2) but also the Ministry of Justice and HM Revenue and Customs (least and second least capable in phase 2)
- Looking at this by theme, the Civil Service has got markedly worse at setting direction (-9 points), but better at planning, resourcing and prioritising (+5 points) and building capability / developing people (+4 points).
- Interestingly, no reds (serious concerns) were registered for any department across any theme in either phase 2 or phase 3.
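The headline arithmetic above can be checked with a short sketch. The scoring convention (RAG ratings mapped to 1–5) and the two phase totals come from the article; everything else here is just illustrative calculation.

```python
# RAG ratings are mapped to a 1-5 scale and summed across all
# departments and themes, per the article's stated convention:
# 1 = red / serious concerns ... 5 = green / strong.

phase2_total = 505  # aggregate score, phase 2 (from the article)
phase3_total = 509  # aggregate score, phase 3 (from the article)

change = phase3_total - phase2_total
pct_change = 100 * change / phase2_total

print(f"Change: +{change} points ({pct_change:.2f}%)")
```

The percentage change works out at roughly 0.79%, consistent with the article's "under 1%" characterisation.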

What prospect for DIMs and DIPs?

DIMs and DIPs are still at the drawing-board stage. The Civil Service Reform Plan makes two things clear. First, while the capability assessment from Capability Reviews will be carried forward, it will be supplemented with measures of performance, efficiency and innovation, and strategic risk and leadership of change. Second, these will continue to be owned by departmental boards (rather than the Cabinet Office) and tailored to each department's context.

Making a success of the new approach will require careful planning and design around both of these areas. First, what (and by extension who) are they really for? Saying they will be owned by departmental boards is fine in principle, but the centre is likely to take a close interest in the results and, given freedom of information and the transparency agenda, departments would be naive to think these assessments won’t also be made public. That creates a potential dilemma in making these meaningful and honest.

Second, how do they fit with other departmental processes? If DIMs and DIPs are to have real value, they will need to become an integral part of departmental planning processes. In an ideal world departments would have two things: a more recognisable business planning and performance cycle (a far cry from the current departmental business plans) and an improvement strategy which falls out of it. The danger is DIMs and DIPs simply add to an already cluttered set of processes required of departments and are completed because they have to be, not because they really support departments to become more capable.

Capability Reviews have been a hugely powerful force in opening up the Civil Service to assess and challenge its performance. If DIMs and DIPs are to prove worthy successors they will need to square both of these circles.


One further observation -

DECC had the lowest score of any department in the most recent (phase 3) review. It may be completely unrelated, but my understanding is that DECC was also the only department to invite an external assessment (rather than self-assessing) in this round.

Perhaps the Capability Review results indicate that the appreciation of the need for development has remained broadly stable, at (on average) more than one degree below urgent.

The Departmental Improvement Model points to the right areas to provide a powerful tool for departments with brilliant leadership and ambitions matching the boldness of the reform plan. In well-intentioned but merely adequate departments it is likely to be as effective as existing internal challenge (and such peer review as the department chooses to invite).

What if this proves insufficient to deliver "profound change"? The Reform Plan rightly talks of stronger, sharper accountability. Respect for the inquiry of the House of Lords Constitutional Committee appears to have inhibited action beyond the requirement for "explicit" sign off of documents for which one might have thought an Accounting Officer would in any case feel some responsibility.

Who is managing the risks and with what resources?

So let me get this straight... instead of using external assessors, the third round asked the senior management of government departments to assess whether they were capable of doing their own job? And surprise, surprise, it turns out everything was all fine???

Wendy - the first part is right, but I'm not sure the second is. Some departments increased their capability fairly dramatically on a self-assessment, but more departments found themselves to be less capable this time. And, overall, there was virtually no improvement.

Not having any red ratings anywhere in the last two phases also suggests that external reviewers didn't find it easy to be deeply critical either.
