Objective Versus Subjective Measures In Human Resources Evaluation

The desirability of objective/formulaic evaluation measures versus subjective/impressionistic measures hinges largely on considerations of strategy, technology, and culture. But either of these alternatives involves a number of complex considerations when it comes time to devise and implement a particular scheme.

Foremost among these complex considerations are perceptions of justice: An evaluation system that is purely subjective – the evaluator simply announces whether she thinks the employee’s performance is excellent, good, fair, or poor – is apt to score low on procedural justice, being too susceptible to caprice and bias on the part of the evaluator. Some basis for the evaluation should be offered. But highly formulaic systems, applied in a non-formulaic environment – where different individuals face different challenges, have access to different resources, and so on – are equally apt to be seen as unjust, because they miss all the distinctive factors applying to the individual being evaluated. A compromise scheme that uses objective measures, but tailors the “formula” to the individual situation, invites corruption or at least politicking in the formula-setting process, and as a result can lead to perceptions of procedural injustice.

Schemes that rely on unsupported subjective judgments tend to have negligible administrative costs, but they can impose substantial emotional costs on the evaluator. Schemes that are formulaic, especially when the formula involves data that are easily obtained, are cheap both administratively and in terms of the evaluator, who can throw up her hands and tell her “evaluatees” (in quotes because she isn’t really evaluating anyone): “It’s the system.”

Formulaic schemes tend to score well on reliability. However, depending on the environment, they can score poorly on validity. On the other hand, schemes that rely on subjective judgments that must be documented and supported are perhaps the most costly to maintain, but they do provide evaluators with some cover when dealing with employees who are unhappy with the evaluations they received.

In trying to strike the right balance here, many organizations have begun experimenting with having evaluations done by multiple sources. This can take a variety of forms: a literal committee-based evaluation process, where the committee typically includes the immediate superior of the person being evaluated; gathering input from multiple constituencies, such as subordinates, peers, and clients; or aggregating assessments obtained from multiple independent persons, all representing the same constituency. The hope is that the greater number and diversity of evaluation inputs can produce overall assessments that are not only more reliable and valid in a statistical sense, but also more legitimate and informative from the vantage point of the person being evaluated.
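The statistical claim about aggregation can be made concrete: pooling independent raters shrinks the noise in the summary score roughly in proportion to the square root of the number of raters. A minimal sketch in Python (the ratings, and the 1–5 scale, are invented purely for illustration):

```python
from statistics import mean, stdev

# Hypothetical 1-5 ratings of one employee from five independent raters
# drawn from the same constituency (all numbers are illustrative).
ratings = [4, 3, 5, 4, 3]

pooled = mean(ratings)                  # the aggregate score
spread = stdev(ratings)                 # disagreement among individual raters
stderr = spread / len(ratings) ** 0.5   # noise remaining in the pooled score

print(f"pooled score: {pooled:.2f}")
print(f"rater spread: {spread:.2f}, noise in pooled score: {stderr:.2f}")
```

The pooled score is noticeably less noisy than any single rater’s judgment, which is the “more reliable in a statistical sense” point above; it says nothing, of course, about legitimacy or about whether the raters share a common standard.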

In some organizations, performance evaluations are literally produced by a group or committee (e.g., all the managers at a given level will collectively evaluate and rank the subordinates whom they manage), a practice that is quite common in both public- and private-sector organizations. Both from the perspective of evaluators and, in many cases, from the perspective of management in general, evaluations produced in this fashion can have significant advantages. This scheme enables other managers who have had contact with a given employee to provide input into the evaluation, yielding a richer assessment than one based solely on a single superior’s appraisal. It enhances knowledge about others in the workforce, so that placements, rotations, and transfers can be arranged more efficiently.

It can give a more uniform message as to what the organization desires; in contrast, when one group is evaluated by one manager’s set of criteria and a second group by a second manager’s, and when the two groups interact sufficiently to see that there are differences, the validity of the entire scheme is called into question. Evaluation by a group can give the individual supervisor of employee X some ability to lay off blame for a “poor” or mediocre evaluation of X, attributing the bad outcome to the group, and it can separate personal feedback (conducted by X’s supervisor, in most cases) from summary rankings (produced in committee) that are used for compensation administration. Finally, it can enhance the quality of the performance evaluations that are done, because it encourages evaluators to take the process seriously; if managers must justify their rankings in front of others (including, perhaps, their own superiors), then presumably they will take those rankings more seriously.

This kind of process has its potential pitfalls: It can encourage gaming (log-rolling, coalition formation) on the part of the evaluators; it can result in the systematic under-valuation of those who work for a less forceful, inarticulate, soft-spoken, or disrespected supervisor; it can help perpetuate patterns of discrimination that have a history of “social acceptability” within the organization. Furthermore, this process will work better in organizations with relatively low turnover among the managers participating in the collective evaluation process, so that they develop a shared vocabulary and body of experience with which to calibrate one another’s assessments. And (of course) producing evaluations through a committee process can be terribly costly in terms of the time needed to do it right.

If it is useful to broaden the evaluation inputs for a given employee to include the perspectives of other managers, it is not too big a leap to consider broadening things even further to include input from other constituencies with whom the employee interacts, including peers, subordinates, and clients (both inside and outside the organization). In addition to the potential increases in validity, reliability, and legitimacy that such “360-degree feedback” systems can provide, they can be a useful symbol and tool of cultural change in organizations seeking to promote more internal cooperation and communication. But there are substantial problems to confront: If evaluation by a committee sounds time-consuming, 360-degree feedback systems are in another league. And eventually all the disparate inputs received have to be aggregated or summarized into a form that can be communicated to the employee (and perhaps used as part of the formal evaluation process), which can be a very challenging task for the person to whom it is assigned (usually, the employee’s immediate supervisor). What do you do, for instance, if you have solicited evaluations of one of your direct reports from two of your own superiors, say, or from two valued clients, and you receive back two diametrically opposite reports?

But our impression is that the trickiest issue raised by 360-degree-type schemes has to do with the tension between performance feedback and performance evaluation. For obvious reasons, organizations implementing schemes that solicit performance information from peers, subordinates, or clients will generally want to be extremely cautious about using that information as the basis for high-stakes reward decisions (bonuses, promotions, etc.). The potential for abuse, dysfunctional competition, politicking, and all the other pathologies that can accompany performance evaluation is simply too great. Hence, organizations generally adopt these types of systems with the intention of using them to provide performance feedback, not as the basis for formal performance reviews. For several reasons, however, things often don’t work out as the architects intended.

People tend to find it difficult and time-consuming to provide detailed performance feedback, and they often will be more inclined to do so to the extent that they believe their input will have consequences. Of course, the flip side of this coin is that we have also noted a tendency for people to be reluctant to be critical or harsh in their assessments when they know this may have severe consequences for the person being reviewed. Therefore, it is conceivable that reassuring people that their input will be used purely for developmental and feedback purposes can induce them to be more candid, especially peers and subordinates who may be fearful about bringing harm to a colleague or a superior. The difficulty, however, is that if the firm is operating a separate performance evaluation process that is used for purposes of compensation, promotion, and the like, there is the risk that the 360-degree system comes to be perceived either as duplicative or, even worse, as a sham, thereby undercutting its symbolic and cultural benefits and possibly producing a variety of negative effects.

A second reason why inputs solicited for “feedback” purposes often end up being used for “evaluation” purposes is simply that once information has been collected, it is difficult for decision-makers not to attend to it. This is particularly true given the aversion that most decision-makers have to performance evaluation: If managers wish to economize on the time they devote to performance evaluation, and they already have a large stack of data gleaned through the 360-degree process, we think it is fanciful to expect that they will disregard this information and carry out a thorough and independent evaluation for purposes of formal review.

Source by Artur Victoria
