The tricks, and trials, of measuring performance

I don’t know anyone who enjoys performance assessment, either having it done or doing it. At one stage in the public sector, many agencies offered performance pay: a salary top-up for those who were considered to have performed well. Although this practice seems to have died away, having one’s performance assessed is still important. Payment of increments, for example, rather than being automatic as it once was, now depends on at least satisfactory performance. (Only senior people in the financial sector, it seems, are paid bonuses even when their company does badly.)

The real conundrum, though, is a familiar one: what can be measured is often not what is intrinsically valuable. This applies to every job, however mundane or esoteric it may be. Compare the bus driver who seems to waft you home with one who jams on the accelerator, stomps on the brakes and careers around curves and corners as if practising for Mt Panorama. Both may be equally efficient, but the first makes bus travel a pleasant interlude, while the second turns it into a most uncomfortable, not to say anxiety-inducing, experience.

Moves are afoot to reward teachers for good performance, as measured by their students’ test results. Make no mistake: if preferment depends on a particular metric, ambitious people, and even those who are simply conscientious about their jobs, will try to perform well in relation to it. If student ratings are used to assess teaching performance, teachers will concentrate on pleasing their classes.

But even if this measure could be adjusted correctly for all of the confounding variables, I am unsure many people would say that their “best” teacher was necessarily the one from whom they received the best grades. An effective teacher can shine like a good deed in a naughty world, but still might not show up well in the performance-measurement stakes. Moreover, whether customer satisfaction is the best kind of relationship to encourage between teachers and students is open to question. Power shifts from instructor to student, from provider to customer. Failure to learn is, increasingly, construed as the teacher’s fault. But learning is always co-produced, in the sense that the ultimate work must be done by the learner. I am unsure that the messages we send when we define performance in this way are always the right ones.

We ought to find that people who are really good at their jobs are most in favour of performance pay. But I doubt this is the case. Where performance targets are negotiated between employee and supervisor, the overachievers tend to put up tougher targets than the underachievers. And I have never known a performance system that addresses the constant problem plaguing the public sector: the sheer unevenness of workload and the turbulence of workflow. It is possible to find, within the one agency, people who are overworked to the point of insanity, while others twiddle their thumbs.

Of course, good performance is meaningless if we are measuring things that, in the performance of them, have no bearing on the agency’s overall purpose. This is always a huge problem for the public sector and, indeed, for any organisation with complex goals (and that, I believe, is most of them). But for budget-funded agencies, where money is supplied for the performance of particular programs, the problems become exigent indeed.

Remember “let the managers manage”? Measurement of program performance was supposed to liberate management from the direct, day-to-day controls that are the bane of bureaucratic life. But this has not proved to be the case. Indeed, financial management has gone backwards in the past decade. Rather than being “paid” for the outcomes they achieve (which would imply a flexible use of resources and a strategic time frame), agencies are, more than ever, locked into a world of centralised controls and one-year spending horizons.

At least government agencies know what their resources for the budget year will be, and can go back for top-ups if they need more. In universities, no one knows for sure how much is going to be available, because the amount depends on enrolments, and enrolments are not known until well into the calendar year.

Of course, the question “how well are we doing?” is always an important one to try to answer. Benchmarking used to be popular, although the problem of comparing like with like proved a difficult one to resolve. One public agency I know of spent years trying to benchmark itself against a group of companies that included banks and oil refineries, before giving it up as a bad job.

For universities, the question “how well are we doing?” has a particular bite to it. Every institution wants to be excellent at as many things as possible because, while no one knows the degree to which students’ choices are determined by rankings, university managers (and governments) are mesmerised by them. In these circumstances, it is not performance itself that is at stake, but reputation – the perception of performance.

Unfortunately, reputational comparisons have invidious consequences for those at both the top and the bottom. Those at the top rarely understand that their pre-eminence comes not from the supposed brilliance of those who are there at the moment, but from the hard work of those who preceded them, just as those further down the pecking order are equally the products of history and of circumstance. Whether organisational or individual, if the business of performance is to have any meaning, it is surely improvement that matters, and remaining true to our values, rather than trying to beat others at their own game.

First appeared in the Canberra Times: http://www.canberratimes.com.au/opinion/editorial/the-tricks-and-trials-of-measuring–performance-20111205-1ug2p.html
