The good news is that the Government commissioned a rigorous independent evaluation of the Troubled Families programme from the respected National Institute for Economic and Social Research. That report, initially due at the end of 2015, was subject to delays and accusations that inconvenient results were being suppressed.
The final publication confirms the view that the programme, which aimed to turn around the lives of 120,000 of the most ‘troubled’ families in the UK, did not produce statistically significant changes in quantitative outcomes. Simply put, the data contradicted ministerial claims (even from the Prime Minister at the time) that all the families had been ‘turned around’. That means the promised savings did not materialise.
The programme did, however, appear to have more positive results, according to the families’ own subjective reporting. As the evaluation notes: ‘Families in the Troubled Families group were more likely to report managing well financially; knowing how to keep on the right track; being confident that their worst problems were behind them, and feeling positive about the future, when compared with a matched comparison group. The impact of the programme on these outcomes was statistically significant.’ (p.49)
Unfortunately it will take a long time to see whether those feelings of ‘greater control’ translate into fewer demands on public services or benefits – and those were not the successes ministers were trumpeting.
As many people have pointed out, the programme was always built on shaky foundations: it hooked onto a headline figure for the number of poor families, who were then deemed to be dysfunctional and in need of the sort of joined-up service intervention the Troubled Families programme could provide.
As one evaluation author, Jonathan Portes, has pointed out, bad statistics make bad policy. Proper problem definition is the key first step in any policy; in this case, the number came to drive the policy.
Even if the right families had been identified as those who really place huge burdens on public services, a phased roll-out to see if this type of targeted intervention worked would have made more sense. If the last couple of years had been spent experimenting, rather than assuming, ‘waste’ would have been much lower – and there would have been useful insights to plough back into future policymaking.
So the important lessons are: take time and effort to define the problem, rather than announcing a solution and working the policy back from there; and experiment before a national roll-out.
The danger is that the more obvious media management takeaway is that ministers could have got away with claiming success for the programme if they had never commissioned an independent evaluation. That would have saved them the embarrassment of apparent suppression followed by eventual publication. But it would have left policymakers, taxpayers and troubled families all worse off, as ministers continued to throw good money after bad.
And that in turn raises the issue of whether evaluations are too important to leave to the whim of the department to commission and ministers to publish. Time for an Independent Evaluation Office?