Many public bodies now compare their performance with that of similar organisations. But there are a number of dangers that must be avoided if the practice is not to be positively damaging.
Anecdotally, there would seem to be growing interest across the public sector in benchmarking. At one level this is understandable – it is always helpful to know how you are doing compared to others. When done well, benchmarking can be a powerful management tool, and it can be of great value to political leaders too.
However, there is a serious risk of benchmarking being undertaken inappropriately, in ways that undermine its potential contribution, or even as ‘cover’ for doing nothing else. There is a concern that public money is simply wasted on consultants and in-house teams undertaking wrong or badly-defined tasks.
The notion that simply benchmarking costs and/or expenditure for what is allegedly the same service or activity will, on its own, be of any benefit is absurd. And yet I am aware of organisations paying for benchmarking of expenditure/costs across a range of bodies that are only vaguely similar, without taking into account factors such as: the relative size of these organisations; the cost per capita of staff, or of the local population; or the quality of the output from this expenditure. In such circumstances, the information produced is almost valueless.
When deciding to benchmark in the public, social and business sectors, certain conditions to ensure ‘best practice’ are essential:
- The organisation, at a senior level – and possibly also politically – has to be clear about why it wishes to undertake benchmarking, what it will do with the results, and whether the costs of the exercise are proportionate to the benefits.
- It needs to be clear about the data that is to be benchmarked, and be confident that such information is available in comparable and verifiable form. In this regard, with the demise of the Audit Commission, such independent verification becomes even more critical.
- It has to include the right set of variables to measure and compare. These could include cost/expenditure measured in absolute terms, but preferably normalised – say, per head of population. Even here, there may be a need to adjust for the profile of the local population or, in the case of human resources services, to measure expenditure per employee.
- The choice of comparator organisations should be carefully assessed to ensure either that there is a ‘like with like’ comparison or, preferably, that the comparison is with ‘industry-best’ performers. In the case of many support activities and some other public sector activities, this must include comparisons with the social and business sectors.
- ‘All’ factors, such as the level of investment made in the service, must be taken into account – not simply the annual revenue costs – and the costs of such investment must be included. I am thinking here, for example, of the capital costs, interest and repayments on investment in new systems that have led to lower running costs.
- The impact of the expenditure itself must be benchmarked and used to analyse the comparative levels of expenditure – for many public services, this could usefully include some user surveys.
- Finally, and most importantly, there must be an informed ‘challenge’ process built into the exercise, so that those responsible for policy, commissioning, procuring, operations and budgets both account for their comparative performance and are supported in identifying options for improving it. This should not be a harshly critical ‘blame’ exercise (though very significant differences undoubtedly need to be explained rather than ignored in the hope that nobody notices), but rather a constructive means of securing improvements in resource use and service outcomes.
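The normalisation and challenge steps above can be illustrated with a minimal sketch. The council names and figures below are invented purely for illustration, and the 20% variance threshold for triggering a ‘challenge’ is an assumption, not a recommended standard:

```python
# Hypothetical data: raw expenditure only becomes comparable once
# normalised for scale, e.g. per employee for an HR service.
councils = [
    # (name, annual HR expenditure in GBP, number of employees)
    ("Council A", 2_400_000, 3_000),
    ("Council B", 1_100_000, 1_000),
    ("Council C", 5_200_000, 6_500),
]

# Normalise: expenditure per employee, the comparable unit here
per_employee = {name: spend / staff for name, spend, staff in councils}

# Median cost per employee across the comparator group
median = sorted(per_employee.values())[len(per_employee) // 2]

# Flag councils deviating from the median by more than 20% (assumed
# threshold) as candidates for the informed 'challenge' process
for name, cost in per_employee.items():
    flag = "  <- challenge" if abs(cost - median) / median > 0.2 else ""
    print(f"{name}: £{cost:,.0f} per employee{flag}")
```

The point of the sketch is simply that the comparison is made on the normalised figure, not on absolute expenditure: Council C spends the most in total, yet on a per-employee basis it matches Council A, while Council B is the outlier.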
Given the current public expenditure cuts and pressures across the public sector, there is unquestionably a need to find ways of doing things differently – and so understanding what others are doing is vital. As is being aware of and addressing one’s own relative performance.
Add to that the prevailing emphasis on transparency, the government’s appetite for ‘armchair auditors’, and the public demand (and rightful expectation) for greater accountability – all of these add to the case for effective benchmarking and understandable, comprehensible, non-jargon comparisons between organisations.
However, to deliver this requires honesty and professionalism from: public sector managers and politicians; the consultants they may engage to undertake benchmarking; and, where services are contracted, their business and social sector providers.
Consultants engaged to do benchmarking exercises need to be clear what good practice looks like, and push back if asked to do anything else. Ideally the process will involve stakeholders – including staff, user representatives and, in the case of local government, scrutiny panels – in the setting of the parameters, the analysis of the data and the use of the outputs to drive change and improvement. And there will obviously be trade-offs between elements such as cost versus quality, which usually and ultimately require political judgements.
If policy decisions are to be based on robust comparative data, and citizens are to be reassured that decisions on service provision at a local level rest on accurate and fair comparisons, then we have to move away from perfunctory, low-level and overly simplistic cost comparison towards a new, comprehensive and strategic form of benchmarking. In this, as in so many matters, CIPFA, Solace and the other professional bodies really need to lead the way.