RBS meltdown: IT lessons for us all

26 Jun 12
Tom Gash

The Royal Bank of Scotland's problems highlight important issues for the public as well as the private sector. Not least, how well many public sector IT systems actually perform

The Royal Bank of Scotland meltdown this weekend should raise three obvious lessons for government. First, relax – it’s not only government that experiences IT problems and, indeed, the fact that government keeps big, complex, legacy systems running in areas such as taxes and benefits is in itself quite an achievement.

Second, IT matters. When things go wrong with big systems, people’s lives are affected; and when things go well, time and money are saved and lives improved. As technology has become more central to our lives, we have gained greatly – but we have also become more vulnerable.

Misha Glenny’s new book DarkMarket: How Hackers Became the New Mafia provides examples of the risks, from routine online fraud to an allegedly Russian-backed assault on Estonia’s technological infrastructure.

But it’s the third lesson for government that is perhaps most relevant to those working in public services trying to improve ICT: that, while flashy new projects often win the plaudits, much of what IT professionals do is maintain existing services, rarely gaining much credit for doing so.

The Institute for Government’s review of government ICT, System Upgrade?, published yesterday raises exactly this point. The report, which I co-authored, argues that the government is focusing on many of the right things in its IT strategy. By sharing infrastructure across departments, ensuring interoperability between different systems (which helps data sharing, among other things) and managing projects in new ways to avoid failure, money should be saved and public services improved.

But, among many other findings, the report shows that the ICT strategy focuses predominantly on improving the performance of new projects and expenditure. There is considerably less focus on ‘business as usual’.

For new projects, we heard many examples of how things were working differently. The Cabinet Office approvals process, which affects all projects with a major ICT component valued at over £5 million, has identified improvements (and savings) for several proposed projects (while admittedly delaying some perfectly good ones). There has been a positive push – albeit in the Institute’s view not yet sufficient – towards using modular, iterative and user-focused project management techniques. And there has been central funding for a number of sensible pan-government projects, most notably the creation of a common public service network (PSN).

Such progress is clearly commendable. But what of ‘the basics’? Well, we found that it’s surprisingly hard to tell what’s happening. Unfortunately, government does not publish clear, reliable data on the IT performance of different departments – not because it is unwilling to publish but because, generally, it does not have the data to release.

Currently, it’s impossible, for example, for the new head of government IT (CIO Andy Nelson) to say whether policymakers in one department are happier with their IT than those in another. Similarly, it’s hard to know if customers using online tax services are happier than those trying to get a driving licence. What’s more, the few cost benchmarks that have been published (for example, the cost per desktop in each department) are hugely unreliable, as departments appear to be using different definitions.

Producing better IT performance benchmarks has to be a priority, not just for the IT profession but also for departmental leaders. How can progress and the performance of government IT leaders be judged, after all, without it? And how can areas of good practice be identified and lessons learned across government about what works? How can IT leaders and procurement professionals know whether they are getting good value for money from their suppliers?

Collecting the data that is needed should not be excessively difficult or expensive, despite protests from those asked to report it. Indeed, both the US and, in particular, Australia already publish far more, and higher-quality, IT performance data than the UK does. Small steps can achieve a lot – and a good start would simply be to compare whether public servants in different departments think they have the IT they need to do their jobs effectively, by adding a question on IT to the annual civil service surveys.

In fact, this would arguably save time and money, as several departments already collect user feedback data, albeit in different ways through different surveys. The government could add end-user satisfaction data, which again is often collected already but rarely in an easily comparable format. Finally, more detailed metrics on overall costs and on costs for specific service offerings could be added. Currently, many departments pay for this data – and for private sector benchmarks – from research companies such as Gartner.

Those who already collect private sector benchmarks are often pleasantly surprised – as the RBS case suggests. And this brings us to another reason for better information on government’s business as usual IT performance. How else but with reliable data will ministers and civil servants be able to reassure commentators and voters that government IT is performing well and improving?

Tom Gash is programme director at the Institute for Government. The report on government IT is available on the Institute's website
