Lessons from Royal Bank of Scotland's IT meltdown
First, relax – it's not only government that experiences IT problems and, indeed, the fact that government keeps big, complex, legacy systems running in areas such as taxes and benefits is in itself quite an achievement.
Second, IT matters. When things go wrong with big systems, people's lives are affected; and when things go well, time and money are saved and lives improved. As technology has become more central to our lives, we have gained greatly – but we have also become more vulnerable. Misha Glenny's new book DarkMarket: How Hackers Became the New Mafia provides examples of the risks, from routine online fraud to an allegedly Russian-backed assault on Estonia's technological infrastructure.
But it's the third lesson for government that is perhaps most relevant to those working in public services trying to improve ICT: that, while flashy new projects often win the plaudits, much of what IT professionals do is maintain existing services, rarely gaining much credit for doing so.
The Institute's review of government ICT, System Upgrade?, published yesterday raises exactly this point. The report, which I co-authored, argues that the Government is focusing on many of the right things in its IT strategy. By sharing infrastructure across departments, ensuring interoperability between different systems (which helps data sharing, among other things) and managing projects in new ways to avoid failure, money should be saved and public services improved.
But, among many other findings, the report shows that the ICT strategy focuses predominantly on improving the performance of new projects and expenditure. There is considerably less focus on 'business as usual'. For new projects, we heard many examples of how things were working differently. The Cabinet Office approvals process, which affects all projects with a major ICT component valued at over £5 million, has identified improvements (and savings) for several proposed projects (while admittedly delaying some perfectly good ones). There has been a positive push – albeit in the Institute's view not yet sufficient – towards using modular, iterative and user-focused project management techniques. And there has been central funding for a number of sensible pan-government projects, most notably the creation of a common public service network (PSN).
Such progress is clearly commendable. But what of 'the basics'? Well, we found that it's surprisingly hard to tell what's happening. Unfortunately, government does not publish clear, reliable data on the IT performance of different departments – not because it is unwilling to publish but because, generally, it does not have the data to release. Currently, it is impossible, for example, for the new head of government IT (the CIO, Andy Nelson) to say whether policymakers in one department are happier with their IT than those in another. Similarly, it's hard to know whether customers using online tax services are happier than those trying to get a driving licence. What's more, the few cost benchmarks that have been published (for example, the cost per desktop in each department) are hugely unreliable, as departments appear to be using different definitions.
Producing better IT performance benchmarks has to be a priority, not just for the IT profession but also for departmental leaders. How, after all, can progress and the performance of government IT leaders be judged without them? How can areas of good practice be identified, and lessons learned across government about what works? And how can IT leaders and procurement professionals know whether they are getting good value for money from their suppliers?
Collecting the data that is needed should not be excessively difficult or expensive, despite protests from those asked to report it. Indeed, both the US and, in particular, Australia already publish far more, and higher quality, IT performance data than the UK does. Small steps can achieve a lot – and a good start would simply be to compare whether public servants in different departments think they have the IT they need to do their jobs effectively, by adding a question on IT to the annual civil service surveys. In fact, this would arguably save time and money, as several departments already collect user feedback data, albeit in different ways through different surveys. The government could then add end-user satisfaction data, which again is often collected already but rarely in an easily comparable format. Finally, more detailed metrics on overall costs, and on costs for specific service offerings, could be added. Currently, many departments pay for this data – and for private sector benchmarks – from research companies such as Gartner.
Those who already collect private sector benchmarks are often pleasantly surprised – as the RBS case suggests. And this brings us to another reason for better information on government's business-as-usual IT performance. How else but with reliable data will ministers and civil servants be able to reassure commentators and voters that government IT is performing well and improving?