I don't like Mondays, by Colin Talbot, Carole Johnson and Jay Wiggan

20 Oct 05
Sunday is a day of rest – and dread – if you run a public body. For, come Monday, you know you have to run a gauntlet of inspectors, auditors, MPs and others who all have something to say about your service's performance. Colin Talbot, Carole Johnson and Jay Wiggan take pity

Imagine, if you will, a fictional head of a fictional prison service somewhere on an island country located, oh, let's say in north-west Europe. He's called Will. For Will, it's going to be the week from hell. As he sits in his garden on a quiet Sunday afternoon, he ponders why he ever took this job.

On Monday, the Commission for Racial Equality is going to issue a report about discrimination and violence against black prisoners. Will has seen the draft, and the commission is threatening to use its quasi-judicial powers to issue a 'non-discrimination notice', which is legally binding. This would be a disaster for the service, far worse than the outcome of the Stephen Lawrence inquiry, which found the Met Police 'institutionally racist'.

And that's not his only quasi-judicial headache – an employment tribunal has just found in favour of a 'whistle-blower' and suggested Will get his act together on personnel policies too.

On Tuesday morning, Will has a meeting at the Home Office, where officials and the minister are going to give him a hard time over his annual performance targets. The results are actually not too bad, but that won't stop them. And that afternoon he will get an extra dose from the Treasury when he has to discuss his budget. With prison numbers soaring, they have to give him more money – but there are always strings attached, and these days that usually means some sort of commitment about the service's contribution to Home Office 'Public Service Agreement' targets and other performance measures.

And they'll want to know about Will's progress in meeting the 'Gershon' efficiency targets (and so will the Office of Government Commerce, or the Gershon Police as they are fondly known, who are coming to visit next week).

On Wednesday, the Commons' home affairs select committee is holding its annual review of the prison service targets – so at least Will will have 'warmed up' the day before. Last year they berated him for having the wrong targets, but it's officially the minister who sets them, not Will.

On Thursday, it's the unions – the prison officers and the civil service unions both have their own ideas about what prison staff should be doing. These inevitably do not quite match the priorities of the service, or any of the other people who seem to think they have the right to tell Will how it should be performing.

Finally it's Friday, but Will isn't thanking any deity for this end-of-the-week day. The prison inspectorate is issuing its annual report – with yet more recommendations on performance priorities. And in between dashes to various media studios to defend the service (again), Will's got to sort out the new 'service delivery agreement' with other criminal justice agencies. Yet more performance measures to be sorted, and if possible meshed with all the others.

And just to round the week off he has two other meetings – one with the lobby groups representing prisoners (the Howard League, Prison Reform Trust, etc) and one with those representing victims (Victim Support, etc) and he knows they are both going to want completely different things from him.

The hypothetical week from hell? Well, maybe, but this scenario is all too familiar to managers across most of the public services. The details change from service to service, but the main external actors are usually – to some degree at least – present.

This is what we call the 'performance regime' for each service – the whole array of outside players who can, quite legitimately, use either formal authority or money or both to tell public services how they ought to be performing.

The range of actors includes central ministries (Treasury, Cabinet Office and Number 10), line ministries, Parliament, judicial and quasi-judicial bodies, audit, inspection and regulatory bodies, professional groups and trade unions, users and their representatives, and partner organisations. They don't all have the same power, and some don't even try to use the power they do have – Parliament has been pretty poor at scrutinising performance. They don't all use their power in the same way or for the same ends. But they all can and do try to 'steer' the performance of public services.

The national 'performance regime' in the UK has changed rapidly over the past 20 years. In the early 1980s, few in central government were interested in performance. The health ministry was an exception. However, in other areas of public services – local government and local health services – there was a fair amount of activity.

The 'centre' only started to get really involved in the late 1980s, with the launch of 'executive agencies' in the civil service – all of which had to have 'key performance indicators'. Using performance measures became increasingly popular, especially when the 'Citizen's Charter' initiative (remember that?) led to government asking the Audit Commission to set performance measures for local government in the early 1990s. At this stage in the 'centre', only the Cabinet Office was showing any real interest, but the Treasury started to get active from the mid-1990s, and in 1998 it really started to throw its weight about with 'Public Service Agreements'.

The role of the Audit Commission in setting around 200 local government performance measures throughout the 1990s is interesting. In the Republic of Ireland, for example, the 42 measures there were negotiated directly between central and local government – no third party audit body was involved. The commission's role is symbolic of the massive growth in audit, inspection and regulation (standard setting) that took place throughout the 1990s. The public service world became a litany of acronyms for these bodies – Nice, Ofsted, Chi, etc. Some of these were old, well established bodies given new teeth but there were many new ones too.

Other actors have been slower to get involved, and mostly less influential. Parliament has paid little attention to the deluge of performance reporting from public services bodies, from ministries to local schools. Contrast this with the US, where Congress initiated the Government Performance and Results Act, under which federal agencies have to report annually on their performance to congressional committees. Closer to home, the Welsh Assembly has played a much more active role in shaping performance priorities in Wales than Parliament has in London. Some parliamentary committees are waking up to the possibilities of using performance for scrutiny and leverage – the home affairs select committee already conducts an annual review of Home Office performance.

Judicial review has been a small, but growing, area of intervention by the courts into public services. And quasi-judicial bodies such as the Commission for Racial Equality and employment tribunals also have an impact. Even public inquiries occasionally change things – the recommendations from the Bichard Inquiry (post-Soham) are nearly all about performance.

We could go on, but we think you get the picture. There are several questions that immediately spring to mind.

First, is all this performance 'steering' by a range of external bodies 'joined up', or are they all after different, often incompatible, things? The answer is probably more towards the 'chaos' than the 'rational' end of this spectrum, although it varies from service to service. Part of the problem is that although some aspects of some performance regimes have been studied, most have not. So we don't really know and we also don't have systematic comparisons across countries, services and time.

Second, does it matter? Well, seemingly it does. One of the most common complaints from service managers – for example, in evidence to the Commons' public administration select committee's inquiry, On Target – was about too much and too fragmented audit and scrutiny of performance. The government itself has partially recognised this in its drive to rationalise audit and inspection bodies. In Wales and Scotland the local and central government auditors have already merged.

But is this the right policy? Because we do not know enough about performance regimes and even less about the way they affect different services, we simply don't know whether more rationalised, joined-up, performance regimes are a good or bad thing. Some theorists – notably in some of the 'new science' areas such as 'complexity theory' – suggest that a bit of chaos might be a good thing in external steering and scrutiny, especially for very complex and hard-to-pin-down services.

Third, how do managers cope? Well, we know some of them cheat – they play off one set of imperatives against others or weigh up which ones really matter. We have heard managers talk about the 'P45 targets' – ie, the ones you get sacked for failing to meet. Obviously, if there are P45 targets, there are also others you can safely ignore. But cheating and game-playing are more widely reported than actually practised. In our experience, most managers make serious efforts to get to grips with these, often contradictory, pressures on them for 'better' performance. Some still ignore the pressures and hope they will go away, but most are trying out various ways of managing them.

Chief among these has probably been the growth of various types of 'balanced scorecards'. The 'scorecard' idea – borrowed for once usefully from the private sector – has been adopted and usually adapted to public services as a way of integrating all these conflicting demands. It is early days but some 'early adopters' say they are getting good results.

Finally, assuming the range of actors in any public services performance regime is likely to remain complex and often contradictory, how can services cope? A useful analogy might be drawn from some management theory about the contradictory and paradoxical pressures all managers experience (public and private). The US academic Bob Quinn developed the idea of 'paradoxical management' to meet just this problem. Most fairly good managers, says Quinn, get along by picking between conflicting pressures and just sticking to satisfying one set of them and ignoring the others. Really excellent managers somehow balance the conflicts and make the right moves, in the right place and at the right time, to satisfy them. The really appallingly bad managers do the same as the excellent ones – changing their actions all the time – but do the wrong things, in the wrong places and at the wrong times.

Maybe further research will show that that is what the failing public services tend to do in reaction to complex and contradictory performance regimes, while their successful colleagues get it right.

Colin Talbot, Carole Johnson and Jay Wiggan are respectively professor, research fellow and research associate in public policy and management at Manchester Business School. They are authors of the reports Exploring performance regimes – a report for the National Audit Office and Exploring performance regimes: comparing Wales and Westminster – a report for the Wales Audit Office
