07 October 2005
At first sight, the plan to cut down the public sector inspectorates from 11 to four seems sensible. But a closer look shows there is method in the seeming madness of the current messy, overlapping system.
At the time of this year's Budget, the chancellor made great play of a decision to simplify audit and inspection in the public sector as part of his crusade against red tape and bureaucracy. In parallel with his efficiency drive to save £21.5bn, he promised to reduce the 'regulatory burden' in the public sector by cutting the number of inspectorates from 11 to four.
As usual with Gordon Brown's announcements, the detail turns out to be slightly less clear than the headlines. The best way of teasing this out is to look at the actual proposals for amalgamations. The biggest is to reduce from five to one the inspectors of the criminal justice system, merging those for the police, probation, prisons, courts and Crown Prosecution Service into a single super-inspectorate.
It seems sensible enough, you might think - why have all these separate inspectorates? But hang on a minute. These aren't separate inspectors inspecting the same thing; they are separate inspectors inspecting different things. While prisons and probation might be (sort of) merging through the National Offender Management Service, the other three are all distinct functions with different roles and responsibilities.
There might be economies of scale from putting all five inspectorates under one roof, where they could share back-office services, but it is hard to see the advantage in merging their core task of inspecting.
Checking up on prisons, police, probation, courts and prosecutors requires some in-depth understanding of each of these functions, and how they might be able to hide anything inconvenient. As we know from some of the spectacular failures of private sector audit, hiding bad stuff can be ridiculously easy. Although there are some generic detection and evidential skills that apply to all inspections - whether hospitals, schools or prisons - there is also an important element of specialised knowledge that usually only comes from years of experience of a particular sector.
That is why we had specialist inspectorates - often drawing heavily on experienced practitioners in the field for their inspectors - in the first place. While this approach has dangers, it clearly has the advantage of having 'poachers turned gamekeepers' doing the inspecting.
It is true that some of the other amalgamations reflect the closer integration of the services they are inspecting - health and adult social services, for example, and education, children's services and skills. But these still contain an element of specialised knowledge needed to inspect specific services, knowledge that could get lost in the new 'super' inspectorates.
There is another issue here though - is coherent, joined-up scrutiny of a single public service axiomatically a 'good thing'? This is at least debatable, although there seems very little appetite from politicians, service leaders or inspectors to discuss it. The counter-arguments are, however, quite powerful.
Most public services - especially those delivering what the Americans call 'human' services, such as health, education and social services - are complex. The 'production process' is rarely simple and the products and outcomes difficult to pin down.
In these circumstances, having unequivocal, objective evidence about what is 'good' and 'bad' performance is problematic. We can gather plenty of evidence but it rarely 'proves' anything incontestably. It can be very useful fuel for discussion, but it often raises more questions than it answers.
This is why having multiple forms of inspection - not necessarily coherently co-ordinated and amassing comparable evidence - might actually be a 'good thing'. In that hackneyed phrase, it might just bring some checks and balances into the system.
This is what Professor Christopher Hood, director of the Public Services Programme, calls 'contrived randomness' in inspection. The best analogy is with educational examinations. Candidates have acquired (we hope) large and complex bodies of knowledge and we can't possibly hope to test for all of it. So instead we ask them a few, seemingly random, questions about bits of it in the hope that this will tell us how much they have learnt.
Maybe a bit of chaos in inspection might indeed be the very best thing for trying to pin down the complex and elusive performance of some public services. There are likely to be fewer places to hide bad practices than in a well-ordered, and hence predictable, inspection regime.
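For readers who like numbers, the logic of 'contrived randomness' can be sketched as a simple probability exercise (the figures and function below are purely illustrative assumptions, not drawn from any inspectorate): if an organisation knows in advance which areas will be inspected, it can keep its problems elsewhere and nothing is found; if the areas are picked at random, even a small sample gives a real chance of catching something.

```python
from math import comb

def detection_probability(total, bad, sampled, predictable):
    """Chance that inspecting `sampled` of `total` practice areas
    uncovers at least one of `bad` hidden problem areas.

    A predictable regime lets the inspected body move its problems
    into areas it knows will not be checked, so detection fails
    whenever there is room to hide them."""
    if predictable:
        # Problems are parked in the uninspected areas if they fit there.
        return 0.0 if total - sampled >= bad else 1.0
    # Random sampling: P(at least one bad area drawn)
    # = 1 - C(total - bad, sampled) / C(total, sampled)
    return 1 - comb(total - bad, sampled) / comb(total, sampled)

# Hypothetical example: 50 practice areas, 5 with hidden problems,
# inspectors have time to examine 10.
print(detection_probability(50, 5, 10, predictable=True))              # 0.0
print(round(detection_probability(50, 5, 10, predictable=False), 2))   # 0.69
```

On these illustrative numbers, a perfectly predictable inspection finds nothing at all, while a randomly sampled one catches a problem roughly two times in three - which is the column's point about predictable regimes offering more places to hide.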
Colin Talbot is professor of public policy at Nottingham University and professor of public policy and management (designate), Manchester Business School