What a performance

11 May 12
Barry Quirk

With more than 20 years of management experience, Lewisham Council’s chief executive has seen it all when it comes to justifications for poor service performance. Here he offers his top ten explanations and excuses

Senior public managers tend to spend hours in meetings examining the previous month’s performance of the agency they work for.  This happens in hospitals, police forces, fire & rescue services as well as in local government.

Anyone who has participated in performance management discussions knows that there is a tendency to want to explain away inconvenient facts, patterns or trends so as to avoid accountability (or blame) for them.  It seems that the more managers meet to appraise performance the better they get at explaining why performance has worsened.

In simple terms, a ‘reason’ is an underlying cause as to why something happened.  People seek causes, they want to know ‘why’ something happened.  If event A preceded event B, we often infer that A ‘caused’ B.  But we also know that this inference is usually wrong. In human affairs, events (such as B) often arise from an interplay between probabilities, multiple factors and the conflicting outcomes of various acts of human agency.

For example, the grass in our parks could be longer than it should be because of unusual weather conditions; because the grounds maintenance staff have not paid sufficient attention to arranging staff rotas this month; because there is a dispute of some sort with the supplier of the service; because their plant and equipment is old and failing; or because of some complex interplay between each of these four factors.

Monitoring service performance is important because it informs us: (1) whether we are meeting our goals and objectives; (2) how well we are doing compared to other public agencies who are themselves engaged in delivering like services; and also (3) whether our service performance is improving or worsening compared to our past standards.

But while performance monitoring is an aid to performance management they are not one and the same thing.  Performance monitoring indicates what is happening relative to goals, comparisons and previous performance.  Performance management is about seeing beyond the statistics and getting involved in the detail of operational management so as to make it more effective and efficient in achieving improved service outcomes.

For ease of reference, and some minor amusement, I have classified the reasons conventionally offered when performance has worsened. These have been distilled from 20 years of management experience and (what seems like) thousands of hours of performance management meetings.  In general, there are ten most-used explanations or excuses for poor performance.

Each meeting can take hours to appraise and evaluate service performance.  Use of my list of ten can help to shorten discussion and enable managers to focus on what needs to be done to improve things.

To this end I have devised a simple scoring system.  I suggest that in your future management discussions you award points based upon the reason given as to why performance is below expectations.  In my schema, the relevant manager gets one point for using any of the first nine reasons; five points for constructing any plausible reason not mentioned below; and 20 points for choosing reason ten.

1: the explanation based upon ‘simple data errors’: this reason is used when it is believed that there is an error in the reporting of performance data.  These errors can arise from many sources, but few senior managers know enough detail about how the performance reports are completed, and hence errors tend to be described in highly general terms.  The most common explanation involves errors that stem from transcription from a manual system to a computer-based system.  Transpositional errors (staff recording, say, 57% instead of 75%) tend to be the most common type of error.  Clever versions of this reason include reference to a formulaic error in a computer algorithm that generates the data, or to 'interfacing’ problems, where the core system fails to inter-relate to the performance reporting system.

2: the explanation based upon ‘trend errors’: this reason is used when a specific figure reported is portrayed as a negative snapshot that runs counter to a positive underlying trend. Examples of this reason include arguments such as, ‘although this month's performance is down, the trend over a three- (five- or seven-) month period is upward’. Complex variations on this line of reasoning will refer to ‘smoothing problems in time-series analysis’.

3: the explanation based upon ‘seasonal’ variations: this is a commonly referenced reason for poor performance.  It suggests that underlying demands, needs or pressures in the particular service area are cyclical or seasonal and that this reporting period is the 'natural low point' of the cycle. Sometimes this reason is called the ‘August phenomenon’, as all service users disappear to Clacton, Benidorm or Skegness, while the senior staff are off in Tuscany, Ibiza or Cornwall. However, it is feasible for this reason to be used at Christmas, Easter, during Bank Holiday weekends and almost any weekend involving lots of football or celebrations involving the Royal family.

4: the ‘exceptional indicator’ reason: the argument underlying this particular reason is that the chosen performance indicator is one of several used in a service area, but is the only indicator showing this downward trend. Many services have several indicators and not all will point in the same direction at once. Crafty managers will argue that only by ‘triangulating’ performance indicators can a true and rounded picture be drawn - the problem is that each month they can explain away the declining indicator by reference to improvements in the other indicators.  In this way, complex variations on this reason will include reference to how a variety of other multi-factor composite indicators point upward and not downward.

5: the ‘redundant indicator’ argument: in this case it is suggested that the indicator has ceased to be a reliable measure of performance. It was probably devised, somewhat hurriedly, some years ago and has never really captured the full tapestry of performance in the service area concerned. The second line of this argument is that the officers concerned are in the process of devising a new indicator or (less usually) that they have engaged clever but expensive consultants to advise on a more consistent, reliable and/or valid indicator.

6: the ‘freakish volumes’ argument: in this case it is argued that wholly unexpected (even freakish) volume changes in demand for the service resulted in poor performance. The argument tends to be that the usual flow of demand for the service has been temporarily interrupted by external forces beyond our control (these are usually divided into natural hazards or man-made hazards).  For example, variations in weather patterns are often cited - it was much sunnier or rainier than normal.  This can explain all manner of variances from expected performance, as no one knows the relationship between the weather and the demand for services (other than for the purchase of ice-creams or umbrellas). If natural hazards are not plausible, managers fall back on explanations based on ‘moral panics’, where people simply developed over-exuberant or exaggerated demands for services.  It is then argued that these unforeseeable volume changes momentarily affected performance for the month in question.

7: the ‘novice member of staff’ argument: this is used in those exceptional instances when the designated member of staff who usually completes the relevant forms that feed into the performance reports was off sick (usually this is accompanied by explanations to engender empathy, such as that they were attending a funeral or an important hospital appointment about a medical condition too embarrassing to disclose). As a result of this trusted person being off, a 'novice' compiled this month's data. And guess what, he or she got it wrong.  A variant of this argument is the generic staff sickness reason: in this case it is argued that there was an unusually concentrated outbreak of sickness in a service area (such as food poisoning, a hyper-concentrated flu outbreak or some other illness that narrowly affected this one occupational group). The argument is then made that this sickness resulted in one-off, abnormally poor performance that will be remedied when everyone returns to full health.

8: the argument based on the idea that the relevant manager ‘chose the wrong target’: this argument is well articulated by performance management specialists. They argue that choosing an ambitious target is crucial to establish stretching performance goals and to motivate managers and staff alike.  However, the basis for choosing the target is often more mystical than rational ('the bottom of the top quartile for every organisation with an "R" in its name' is not usually a basis for selecting a sensible target, but on examination the actual target chosen often rests on equally capricious foundations). The advantage of using this argument is that the focus falls not on the service performance being achieved but on technical discussions about what target should be adopted. This line of reasoning can persist fruitfully for several months until everyone has forgotten what is being measured and why.

9: the argument of pragmatism - this is usually based upon the idea that management attention was diverted to something more urgent in the period in question: in this case some managers plausibly suggest that, in the period for which performance monitoring information is reported, other critical events and incidents happened that drew away the attention of key managers. This argument is most often used by free-wheeling, flexible and adaptive managers who can always find something more interesting to do than simply deliver ‘business as usual’ services and report on variances in the performance of those services for which they are personally accountable.

10: the Mea Culpa argument: in this case no attempt is made to manufacture some plausible explanation or excuse for poor service performance.  Instead, managers simply state that they took their eye off the ball; that the operational managers directly responsible were temporarily asleep at the wheel; and that, as the senior managers in charge, they will give a sincere and fresh assurance that everyone will be smarter next month and raise their game.

This last argument must be used sparingly; honesty about failings is refreshing and can be disarming.  But it can only be used once or twice for the same indicator, and usually only once in every three or so meetings.  Continual reference to this argument leads others to believe that while senior managers occupy top jobs, they are not actually doing anything in them.

Management should not be approached as akin to weather forecasting.  You are not supposed to be describing what is happening or what has happened.  You are supposed to be creating the positive climate (the weather) in which things positively happen.  So while ‘fessing up’ can be appealing, it’s better to have appropriate managerial grip in the first place.

Barry Quirk is the chief executive of the London Borough of Lewisham. He blogs at barryquirk.com
