What can be done to improve the current situation around data and policy-making? Back in 2010, for our report Policy Making in the Real World, we asked ministers and civil servants to what extent policy making showed the ‘qualities’ laid out a decade earlier in Modernising Government. ‘Evidence Based’ scored well – at least in this self-assessment – but ‘Evaluation, Review and Learning’, which is critical to whether the government understands and uses the lessons of previous policy interventions, scored much less well.

Our 2012 report Evidence and Evaluation in Policy Making then looked at some of the specific issues that could explain why use of evidence was patchy. We found supply factors (lack of timeliness, data not suited to rigorous testing, data not being available) and demand factors (ethical issues around experimentation, political risk, lack of culture and skills, lack of feedback).

So, if we want to make progress on better use of data and evidence, here are four themes we could think about:

Better sharing of expertise and knowledge across government

As my colleague Jen Gold has written, What Works Centres are already emerging as a powerful brand for bringing together and sharing evidence on effective interventions. They have also had added advantages: by including 4,500 schools in evaluations (many of them randomised controlled trials), the Education Endowment Foundation has actively involved teachers on the front line in the collection and use of evidence.

Separately, the Civil Service Reform Plan contained proposals for departments to improve the quality of their policy advice, and a report by the Policy Profession Board set out proposals to drive improvement across government. One of the areas self-designated policy makers wanted to improve was the use of data and evidence.
Adapting to the age of ‘Big Data’

According to its enthusiasts, ‘big data’ isn’t just about data now being available on a massive scale, but about the ‘datafication’ of everything from location (through GPS) to friendship (through Facebook likes). It could lead to profound changes in how we work with data: moving from samples to datasets approaching the totality of n=all; from pristine datasets to messy ones; and from caring about causation – ‘why?’ – to caring only about correlation – ‘what?’

‘Big data’ presents real opportunities – for example, looking at evidence in real time. But we also need to guard against the hype, question some of the underlying assumptions (not least the idea that anyone, especially those in the policy-making world, should stop asking why things happen) and address some of the challenges. At a recent event on big data and official statistics, the UK’s National Statistician John Pullinger – who described ‘big data’ as ‘a wake-up call’ to official statisticians – outlined five key challenges in adapting to it:

• Statistical – for example, how do we deal with bias in these new sources?
• Technical – there is a lot of legacy technology in government.
• Ethical – how do we create rules to ensure our work is trustworthy and good?
• Commercial – there has to be a new business model of public value and commercial benefit.
• Skills – a blend of statistics, data mining, computing and community is needed to unleash the potential.

We could probably add ‘Political’ to that list, but it is a helpful reminder that big data in and of itself doesn’t solve all of our problems. Policy-making is a good example: the sudden availability of lots of data does not automatically mean better policy. We still need to think about how we use that data to help us make policy, and how we use it to understand how well we are implementing policy – and whether it is delivering the intended outcomes.
Adapting to the age of Open Data

Open data is a success story for the UK, which tops both the Open Knowledge Global Open Data Index and the World Wide Web Foundation’s Open Data Barometer. The number of datasets published on data.gov.uk has more than doubled over the last year and a half. Publication in itself is a good thing: the more data is published, the more people can use it, and the more the quality of that data should improve. But as we know from our Whitehall Monitor project, there are still issues of comparability, consistency and usability.

Data is good – information is even better. In Whitehall Monitor 2014, we looked at departments’ impact indicators – the data supposed to show the effect of policies and reforms in the real world. In many cases, turning this data into information was much more difficult than it should have been, and it wasn’t always clear what the data actually meant in simple terms. It’s not just government that can turn data into information – we do it on Whitehall Monitor, and various data journalism sites and others (like the Royal Statistical Society and Full Fact) do so as part of a wider ecosystem. This is especially important given that there is little evidence the army of armchair auditors the government hoped to enlist has actually materialised. It is right that data is published in a format anyone can use – but it still often requires those with resources and expertise to make something of it, and other bodies to ensure proper auditing of government.

Even better than information is data as evidence. We saw an example of that when we visited Maryland in December 2014: Martin O’Malley made his name, as Mayor of Baltimore and then as Governor of Maryland, by using data to drive improvements in how city and state agencies operated. Others across the US are using this data-driven leadership model, known as the ‘stat’ or ‘performance stat’ model, to run cities, counties, states and even federal programmes.
The wider Moneyball for Government movement has also highlighted how this approach can drive further improvements in evidence, with funding for policies reflecting the quality of the evidence behind them.

Making the case to politicians

Political incentives – including funding and scrutiny – could also drive better data and better policy. But perhaps the ultimate political incentive is votes. In polling for our Programme for Effective Government, we found that more than three-quarters of respondents wanted politicians to demonstrate that any difficult infrastructure decisions they had to make were based on objective evidence. (Though, as David Walker points out, some of their wishes may be incompatible.) Members of the public also said they would be more likely to vote for parties that could demonstrate how they would implement their policies in government.

None of this is about replacing politicians with data or ‘rule by experts’. Instead, as our director Peter Riddell told the BBC, it’s about getting policymakers ‘to explain why they have taken a decision and measure themselves against real evidence’.