Chapter 5: Performance Measurement

Factors Explaining Why Supervisors Avoid Evaluating Subordinates

(1) 3 Most Important Factors: a. Length of time that the subordinate has reported to the superior (less time led to more reluctance). b. Amount of experience the subordinate had (less experience, which results in more corrective feedback, led to more reluctance). c. Amount of trust between the supervisor and subordinate (lower levels of trust on the part of the subordinate led to more challenges of the evaluation and greater reluctance on the part of the supervisor to evaluate). (2) Subordinate's confidence in the operation of the performance evaluation system (the lower the subordinate's confidence in the system, the more likely he or she is to feel unfairly evaluated and to challenge the system, and the greater the supervisor's reluctance to address it). (3) Logistical and procedural reasons, specifically that evaluation takes time the manager would prefer to spend on other work-related tasks. (4) Desire to avoid giving negative feedback out of fear: a. Fear of creating hostility in the workplace. b. Fear of experiencing the dissatisfaction the process can cause incumbents. c. Fear that they may be challenged on their evaluations. d. Fear that they may become party to a lawsuit brought by an incumbent charging unfairness.

3 Types of Rater Training

(1) Administrative Training (2) Psychometric Training (3) Frame-of-Reference Training

Organizational Goals in Performance Appraisals

(1) Between-Person Uses- salary administration, promotion, retention/termination, layoffs, and identification of poor performers. (2) Within-Person Uses- identification of training needs, performance feedback, transfers/assignments, identification of individual strengths and weaknesses. (3) System-Maintenance Uses- manpower planning, organizational development, evaluation of the personnel system, identification of organizational training needs.

Common Uses for Performance Measurements

(1) Criterion Data- correlating an individual's performance data with test data to determine whether the test predicts successful performance. (2) Employee Development- profiling employee strengths and weaknesses to develop a plan to improve weaknesses and build upon strengths. (3) Motivation/Satisfaction- setting appropriate performance standards, evaluating employees' success in meeting those standards while giving employees feedback, and determining how an organization can increase the motivation and satisfaction of those employees meeting or exceeding the standards. (4) Rewards- comparing workers to one another to determine how to distribute rewards such as salary increases and pay bonuses. (5) Transfer- determining which employees are best suited for transfer from one job family/title to another based on a profile of performance capabilities. (6) Promotion- the use of performance information as part of an assessment procedure to determine promotions. (7) Layoffs- using performance as a guide in selecting those to be laid off in the event of a downsizing, with the employees who have the lowest performance being the most likely candidates for layoff.

Relationships among Performance Measures

(1) Each type of performance measure gives us a different perspective on performance. (2) We cannot simply substitute an objective measure for a performance rating, or vice versa. (3) Despite the intuitive appeal of objective measures, they are not necessarily more reliable.

Ratee Goals In Performance Appraisals

(1) Information Gathering- to determine the ratee's relative standing in the work group, future performance directions, and organizational performance standards or expectations. (2) Information Dissemination- to convey information to the rater regarding constraints on performance and convey a willingness to improve performance.

4 Steps of FOR Training

(1) Providing information on the multidimensional nature of performance. (2) Ensuring that the raters understand the meaning of anchors on the scale. (3) Engaging in practice rating exercises of a standard performance. (4) Providing feedback on that practice exercise.

3 Performance Appraisal Stakeholders

(1) Rater (2) Ratee (3) Organization

Why Halo Errors Occur

(1) Simple laziness of the rater. (2) The rater believes that one particular dimension is key and all other dimensions or performance areas flow from that one area. (3) The rater might subscribe to a "unitary view" of performance. (4) The rater considers a performance area not included in the rating form as the key to successful performance, allowing an "invisible" area to influence all of the other ratings.

What are the rating sources of a 360-Degree System?

(1) Supervisors (2) Peers (3) Self-Ratings (4) Subordinate Ratings (5) Customer & Supplier Ratings

3 Factors Influencing Overall Performance Ratings

(1) Task Performance- proficiency with which job incumbents perform activities that are formally recognized as part of their job. (2) Organizational Citizenship Behavior (OCB)- behavior that goes beyond what is expected. (3) Counterproductive Work Behavior (CWB)- voluntary behavior that violates significant organizational norms and threatens the well-being of the organization, its members, or both.

Rater Goals in Performance Appraisals

(1) Task Performance- using appraisal to maintain or improve the ratee's performance levels. (2) Interpersonal- using appraisal to maintain or improve interpersonal relations with the ratee. (3) Strategic- using appraisal to enhance the standing of the supervisor or work group in the organization. (4) Internalized- using appraisal to confirm the rater's view of himself or herself as a person of high standards.

3 Components of Performance Management

(1) The definition of performance, including organizational objectives and strategies. (2) The actual measurement process itself. (3) The communication between supervisor and subordinate about the extent to which individual behavior fits with organizational expectations.

3 Fundamental Characteristics of Performance Rating Scales

(1) The extent to which the duty or characteristic being rated is behaviorally defined. (2) The extent to which the meaning of the response categories is defined. (3) The degree to which a person interpreting or reviewing the ratings can understand what response the rater intended. The characteristics of the scale you use can affect the validity of the resulting ratings, regardless of the technique used to gather performance information.

Task-Based Ratings

A performance rating system that is usually a direct extension of job analysis: the rater is asked to indicate the effectiveness of an employee either on individual critical tasks or on groups of similar tasks (duties). These ratings tend to be the most easily defended in court and the most easily accepted by incumbents because of the clear and direct relationship between the duties rated and the job in question.

Weighted Checklist

A checklist that includes items that have values or weights assigned to them that are derived from the expert judgements of incumbents and supervisors of the position in question. These weighted values correspond to the level of performance represented by those items.
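
A minimal scoring sketch for a weighted checklist is below. The items and weights are hypothetical, standing in for the SME-derived values described above; the point is simply that the rater checks items and the score is the sum of the checked items' weights.

```python
# Minimal sketch of weighted-checklist scoring (illustrative items and weights only).
# Each weight stands in for the level of performance that incumbents and supervisors
# judged the item to represent.

ITEM_WEIGHTS = {
    "Resolves customer complaints without escalation": 4.5,   # hypothetical weight
    "Greets customers promptly": 3.2,                          # hypothetical weight
    "Completes shift paperwork accurately": 2.8,               # hypothetical weight
    "Arrives late without notifying anyone": -2.0,             # hypothetical weight
}

def weighted_checklist_score(checked_items):
    """Sum the SME-derived weights of the items the rater checked."""
    return sum(ITEM_WEIGHTS[item] for item in checked_items if item in ITEM_WEIGHTS)

checked = ["Greets customers promptly",
           "Resolves customer complaints without escalation"]
print(weighted_checklist_score(checked))  # 7.7
```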

Computer Adaptive Rating Scales (CARS)

A computer-based technology that eliminates the two drawbacks of the paired comparison method: that it is time consuming and that it does not provide a clear performance standard. Instead of pairing two employees in a better-than/worse-than format, it presents two statements that might characterize a given ratee, and the rater is asked to choose the statement that is more descriptive of the individual. The computer helps to identify the probable performance range of the employee, then presents pairs of performance statements that further narrow that range. The value of CARS depends on the ability of the computer to quickly narrow the range of probable performance, dramatically reducing the number of comparisons that need to be made.
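
The adaptive narrowing idea can be illustrated with a short sketch. This is not the actual CARS scoring model, just an illustration under the assumption that statements have been pre-scaled for effectiveness and that each rater choice halves the probable performance range.

```python
# Illustrative sketch of adaptive narrowing (not the actual CARS scoring model).
# Assumes statements have been pre-scaled on a 1-7 effectiveness continuum; each
# rater choice halves the range in which the ratee's performance probably falls.

def adaptive_rating(rater_picks_higher, low=1.0, high=7.0, rounds=4):
    """Estimate a ratee's standing on a 1-7 effectiveness scale.

    rater_picks_higher(lo_value, hi_value) stands in for the rater choosing which
    of two scaled statements is more descriptive of the ratee.
    """
    for _ in range(rounds):
        mid = (low + high) / 2.0
        lo_value = (low + mid) / 2.0    # statement scaled in the lower half of the range
        hi_value = (mid + high) / 2.0   # statement scaled in the upper half of the range
        if rater_picks_higher(lo_value, hi_value):
            low = mid                   # ratee probably falls in the upper half
        else:
            high = mid                  # ratee probably falls in the lower half
    return (low + high) / 2.0           # midpoint of the narrowed range

# Example: a ratee whose "true" standing is about 5.2 on the scale.
estimate = adaptive_rating(lambda lo, hi: abs(hi - 5.2) < abs(lo - 5.2))
print(round(estimate, 2))  # 5.31, close to the assumed true standing
```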

Behavioral Observation Scales (BOS)

A format that asks the rater to consider how frequently an employee has been seen to act in a particular way. This format grew out of the idea that it would be more accurate to have raters evaluate what someone actually did rather than what he or she might do.

Checklist

A method for collecting judgmental performance information that presents a list of behaviors to a rater, who places a check next to each of the items that best or least describe the ratee. The checklist items may have been taken directly from job analysis or a critical incident analysis.

Trait Ratings

A rating format, now largely abandoned, in which supervisors evaluate subordinates on traits, which are defined as habits or tendencies displayed by an individual. Traits can be used as predictors of performance but not as measures of performance. Performance measurement systems based on behaviors are much more legally defensible than those based on traits.

Walk-Through Testing

A type of measurement that requires an employee to describe to an interviewer in detail how to complete a task or job-related behavior. The employee may literally walk through the facility, such as a nuclear power plant, answering questions as they actually see the display or controls in question. The interviewer then scores the employee on the basis of the correct and incorrect aspects of the description.

Central Tendency Error

A type of rating error in which raters choose a middle point on the scale to describe performance, even though a more extreme point might better describe the employee.

Paired Comparison

A variation of ranking technique in which each employee in a work group or a collection of individuals with the same job title is compared with every other individual in the group on the various dimensions being considered.
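
A small sketch of the method on a single dimension follows; the employees and the rater's preferences are hypothetical, and the win count from all pairwise judgments orders the group.

```python
# Minimal sketch of the paired-comparison method on one performance dimension:
# every employee is compared with every other, and win counts order the group.
from collections import Counter
from itertools import combinations

employees = ["Ana", "Ben", "Cal", "Dee"]   # hypothetical work group

def rater_prefers(a, b):
    """Stand-in for the rater's better-than/worse-than judgment on this dimension."""
    order = {"Ben": 1, "Ana": 2, "Dee": 3, "Cal": 4}   # fixed, illustrative ordering
    return a if order[a] < order[b] else b

wins = Counter(rater_prefers(a, b) for a, b in combinations(employees, 2))
ranking = sorted(employees, key=lambda e: -wins[e])
print(ranking)  # ['Ben', 'Ana', 'Dee', 'Cal'] after 6 comparisons (4 * 3 / 2)
```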

Adaptive Performance Ratings

Ratings of the adaptability behaviors exhibited by an employee on the adaptive performance dimensions. Virtually every job requires one or more adaptability behaviors at times; however, some jobs simply require more adaptability than others, regardless of the person who occupies the particular position.

Advantages & Disadvantages of Electronic Performance Monitoring

Advantages: ∙ Can be very cost-effective. ∙ Has potential for providing detailed and accurate work logs. ∙ Clearly objective, with some claiming it is more "fair" than other forms of performance measurement. Disadvantages: ∙ Opponents argue it is an invasion of privacy and disregards human rights. ∙ Can be seen to undermine trust. ∙ Potentially reduces autonomy and emphasizes quantity to the exclusion of quality. ∙ Can cause stress, leading to a decline in employee morale and productivity.

Advantages & Disadvantages of Paired Comparison

Advantages: ∙ Employee comparisons can be useful in certain situations, such as when deciding who to lay off when downsizing is required. Disadvantages: ∙ Time consuming. ∙ Does not provide any clear standard for judging performance; instead, it simply indicates that one individual is better or worse than another on some particular dimension.

Advantages & Disadvantages of Peers as Performance Information Sources

Advantages: ∙ More likely to know more about typical performance. ∙ Excellent source because of their immediate interactions with other workers. ∙ Can see how the worker interacts with others, including supervisors, subordinates, and customers. ∙ Valuable for nonadministrative purposes. ∙ More in tune with OCB. Disadvantages: ∙ Peers may be geographically separated from one another, particularly in telecommuting situations. ∙ Problems are likely to arise when peer ratings are used for administrative purposes, such as promotions and raises, because a conflict of interest is likely when peers are competing for fixed resources.

Advantages & Disadvantages of BARS

Advantages: ∙ Requires a great deal of SME interaction, which enhances perceptions of fairness and tends to promote a more strategic focus on performance improvement. Disadvantages: ∙ Very time consuming, can potentially take months to develop an effective set of scales.

Advantages & Disadvantages of Self-Rating

Advantages: ∙ The very act of soliciting information from the worker is likely to increase their perceptions of procedural justice. ∙ When employees know self-ratings will be discussed with their supervisor, distortions are often minimized. ∙ Useful for nonadministrative purposes and in the context of a performance management system, where self-ratings may play an important role in understanding performance. Disadvantages: ∙ Potential for distortion and inaccuracy, because individuals commonly have higher opinions of their own work than their supervisors do. ∙ Conflict of interest if used for administrative purposes.

Simple Ranking

An employee comparison method that ranks employees from top to bottom according to their assessed proficiency on some dimension, duty area, or standard. It is better to get individual ranks on independent aspects of performance and average or sum them than to ask for an overall rank of an individual.
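
A brief sketch of combining independent ranks is shown below, using hypothetical employees and dimensions; each person's mean rank across dimensions (lower is better) gives the overall ordering.

```python
# Minimal sketch: combine per-dimension ranks rather than asking for one overall rank.
# Ranks are hypothetical (1 = best); a lower mean rank means a higher overall standing.

ranks_by_dimension = {
    "quality":    {"Ana": 1, "Ben": 2, "Cal": 3},
    "timeliness": {"Ana": 2, "Ben": 1, "Cal": 3},
    "teamwork":   {"Ana": 1, "Ben": 3, "Cal": 2},
}

def mean_ranks(ranks):
    """Average each employee's rank across all performance dimensions."""
    employees = next(iter(ranks.values())).keys()
    return {e: round(sum(dim[e] for dim in ranks.values()) / len(ranks), 2)
            for e in employees}

overall = sorted(mean_ranks(ranks_by_dimension).items(), key=lambda kv: kv[1])
print(overall)  # [('Ana', 1.33), ('Ben', 2.0), ('Cal', 2.67)]
```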

BARS vs. BOS Formats

BARS: ∙ The rater picks a point on the rating scale based on how the individual has behaved in the past or might be expected to behave. ∙ Asks the rater to describe what a worker might do in a hypothetical situation. ∙ More difficult to develop and opposed by raters. BOS: ∙ The rater evaluates what the individual actually did rather than what they might do. ∙ Asks the rater to consider how frequently an employee or manager has been seen to act in a particular way. ∙ Considerably easier to develop than BARS format because it is often developed directly from a job analysis. ∙ BOS is somewhat easier to defend than some other techniques. ∙ The format favored by raters because of its clarity and tendency to focus raters and ratees on the frequency of particular aspects of performance.

Halo Errors

Error that occurs when a rater assigns the same rating to an employee on a series of dimensions, creating a halo or aura that surrounds all of the ratings, causing them to be similar. A downside of this type of error is that it can have the effect of not identifying the employee's strengths and weaknesses, defeating the purpose of feedback.

Judgmental Performance Measure

Evaluation made of the effectiveness of an individual's work behavior, most often by supervisors in the context of a yearly performance evaluation.

Critical Incidents

Examples of behavior that appear "critical" in determining whether performance would be good, average, or poor in specific performance areas. Using these incidents, it is possible to develop rating scales with critical incidents serving as defining points or benchmarks along the length of the scale. In examining the rating scale, the rater gets a sense of both what is being rated and the levels of performance.

(1) Supervisors

First- and second-level managers and supervisors are the most common sources of performance information. ∙ Supervisors can closely observe the behavior of the incumbent, and they are in a good position to evaluate the extent to which that behavior contributes to departmental and organizational success. ∙ They are expected to provide feedback to the individual worker, both informally on a frequent basis and formally in periodic structured performance evaluations; however, many supervisors actively avoid evaluation and feedback. ∙ More likely to be familiar with maximum performance than with typical performance.

Employee Comparison Methods

Forms of evaluation that involve the direct comparison of one person with another.

Generic vs. Forced-Choice Checklists

Generic Checklist: ∙ The number of statements checked is left to the rater, resulting in inconsistent responses that can be influenced by social desirability. ∙ Should be used as an additional procedure to foster feedback. Forced-Choice Checklist: ∙ Managers do not like forced-choice methods because it is difficult for them to see exactly what will yield a high or low performance score for the person they are rating. ∙ This format produced validity coefficients that were 50% higher than ratings from the more traditional form. ∙ Should be employed for generating criterion scores. Both checklists and forced-choice formats represent easy ways to generate a performance score for an individual, but they are not particularly conducive to providing feedback to the employee. They are also often seen as dated, representing the "old" view of performance assessment rather than the newer view of performance management.

Graphic Rating Scales

Graphic display of performance scores that run from high on one end to low on the other end. ∙ These were the first type of scales used for evaluating performance. ∙ Graphic rating scales are associated with trait ratings, as those were the attributes originally rated. ∙ Critics of graphic rating scales cite flaws such as poor dimension definitions or poorly described scale anchor points. However, if a graphic rating scale has well-defined dimensions, understandable and appropriately placed anchors, and an unambiguous method for assigning ratings to individuals, it can be just as effective as any other format.

Duties

Groups of similar tasks described in a task-based performance rating system. Each duty involves a segment of work directed at one of the general goals of a job.

(2) Peers

In theory, peers should be a good source of performance information. ∙ More likely than supervisors to interact with a worker on a daily basis; therefore, they are more likely to know more about typical performance. ∙ Valuable for nonadministrative purposes such as performance improvement or new skill development, as well as in the context of work group or team performance. ∙ Peers may be more in tune with the presence or absence of OCB.

Rating Errors

Inaccuracies in rating that may be actual errors or intentional or systematic distortions.

Reliability of Performance Ratings

Inter-rater reliability of performance ratings may be in the range of .50 to .60. Usually the inter-rater reliabilities are calculated between a first- and a second-level supervisor. ∙ The first-level supervisor has the most frequent interactions with the ratee and the most awareness of day-to-day behavior. ∙ The second-level supervisor is more likely to be aware of the results of behavior or of extreme behavior on the part of the employee. The more information gathered, the more comprehensive and accurate the complete performance estimate will be.
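
A minimal sketch of estimating inter-rater reliability as the correlation between two supervisors' ratings is shown below; the ratings are hypothetical, while published estimates for real supervisory ratings tend to fall around .50 to .60.

```python
# Minimal sketch: inter-rater reliability as the Pearson correlation between a
# first-level and a second-level supervisor rating the same employees (1-5 scale).
# The ratings below are hypothetical.
from math import sqrt

first_level  = [4, 3, 5, 2, 4, 3, 5, 2]
second_level = [3, 3, 4, 2, 5, 2, 4, 3]

def pearson(x, y):
    """Pearson correlation between two equal-length lists of ratings."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

print(round(pearson(first_level, second_level), 2))  # 0.69 for these illustrative ratings
```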

Leniency Errors & Severity Errors

Leniency Errors are errors that occur with raters who are unusually easy in their ratings; they give ratings higher than an employee deserves. Raters high on the personality factor of agreeableness tend to give high ratings and are more likely to commit leniency errors. Severity Errors are errors that occur with raters who are unusually harsh in their ratings; they give ratings lower than employees deserve. Raters lower on the personality factor of agreeableness tend to give lower ratings and are more likely to commit severity errors. ∙ One safeguard against these errors is to use well-defined behavioral anchors for the rating scales, which helps to avoid this type of distortion.

Electronic Performance Monitoring

Monitoring work processes with electronic devices.

Destructive Criticism

Negative feedback that is cruel, sarcastic, and offensive; usually general rather than specific, and often directed towards personal characteristics of the employees rather than job-relevant behaviors.

Nonproductive vs. Counterproductive Purposes

Nonproductive Purposes (e.g. online banking, shopping or selling goods, downloading songs). ∙ Some amount of nonproductive computer work is inevitable. ∙ An organization's most creative and curious employees are the ones most likely to engage in nonproductive computer use at work. Counterproductive Purposes (e.g. viewing pornography, harassing co-workers).

Performance Management vs. Performance Appraisal

Performance Appraisals: ∙ Occurs once a year and is initiated by a request from HR. ∙ Developed by HR and handed to managers to use in the evaluation of subordinates. ∙ Feedback occurs once each year and follows the appraisal process. ∙ The appraiser's role is to reach agreement with the employee appraised about the level of effectiveness displayed and to identify any areas for improvement. ∙ The appraiser simply clarifies the meaning of a nonstrategic performance area and the definitions of effectiveness in that area. ∙ The appraisee's role is to accept or reject the evaluation and acknowledge areas that need improvement. Performance Management: ∙ Occurs at much more frequent intervals and can be initiated by a supervisor or a subordinate. ∙ Jointly developed by managers and the employees who report to them. ∙ Feedback occurs whenever a supervisor or subordinate feels the need for a discussion about expectations and performance. ∙ The appraiser's role is to understand the performance criteria and help the employee understand how his or her behavior fits with those criteria, as well as to look for areas of potential improvement. ∙ The supervisor and the employee attempt to come to some shared meaning about expectations and the strategic value of those expectations. ∙ The role of the appraisee is identical to the role of the appraiser: to understand the performance criteria and understand how his or her behavior fits with those criteria.

Why is performance measurement important?

Performance measurement is important as it helps managers to make evidence-based decisions about important issues such as promotions and raises.

360-Degree Feedback

Process of collecting and providing a manager or executive with feedback from many sources, including supervisors, peers, subordinates, customers, and suppliers.

Psychometric Training

Rater training that makes the raters aware of common rating errors, such as central tendency, leniency/severity, and halo, in the hope that this will reduce the likelihood of errors. When this training occurs, the resulting performance ratings are actually less accurate because the raters become less concerned with accurately describing performance than with avoiding distortions.

(3) Self-Ratings

Self-ratings have often been part of the traditional performance appraisal system and typically occur as follows: (1) An individual is asked to complete a rating form on themselves. (2) They bring the form to a meeting with the supervisor, who has filled out the identical form on the subordinate. (3) The supervisor and subordinate discuss agreements and disagreements in the ratings they've assigned. (4) A final rating evolves from the discussion and is placed by the supervisor in the employee's personnel file as a consensus form.

Behaviorally Anchored Rating Scales (BARS)

Sometimes called Behavioral Expectation Scales, this rating format includes behavioral anchors describing what a worker has done, or might be expected to do, in a particular performance area.

(4) Subordinate Ratings

Subordinates may be in the best position to evaluate leadership behaviors and the effect of those behaviors on subordinates. ∙ As with peer and self-ratings, it is important that this source of information not be used for administrative decisions but rather for feedback and employee development. ∙ A positive outcome of subordinate ratings is that they encourage the subordinate to consider the challenges and performance demands faced by the supervisor, gaining a better appreciation of the duties of a supervisor.

Performance Management

System that emphasizes the link between individual behavior and organizational strategies and goals by defining performance in the context of those goals. It is jointly developed by managers and the people who report to them.

Anchors

The benchmarks on a performance rating scale that define the scale points.

Final Rating

The final rating for an individual is the sum or average of the items checked.

Organizational Citizenship Behavior (OCB) Dimensions

The identification of OCB dimensions suggests that some factors appear to play a role in virtually all jobs and in all aspects of performance, including task performance.

(5) Customer & Supplier Ratings

The perspective of outside parties (including customers as well as suppliers and vendors who provide materials and services to the organization) is unique and provides the opportunity to fill out the employee's performance profile. ∙ Organizations with a customer-driven focus should pay particular attention to customer-oriented behavior. ∙ These ratings should be limited to areas of performance that a customer sees, which will represent a subset of all employee duties, including interpersonal, communication, and motivational issues. ∙ Supplier ratings can provide valuable information about the more technical aspects of performance, such as product specifications, costs, and delivery schedules.

Unitary View of Performance

The rater assumes that there is really one general performance factor and that people are either good or poor performers. They further assume that this level of performance appears in every aspect of the job.

What are the beliefs of an employee who is more likely to accept negative feedback?

The supervisor... (1) Has a sufficient "sample" of the subordinate's actual behavior. (2) And the subordinate agree on the subordinate's job duties. (3) And the subordinate agree on the definition of good and poor performance. (4) Focuses on ways to improve performance rather than simply documenting poor performance.

Rater

The individual who provides the performance evaluation; sources of ratings can include the supervisor, peers, the incumbent, subordinates of the incumbent, clients, suppliers, and others.

Validity of Performance Ratings

The validity of performance ratings depends on the manner by which the rating scales were conceived and developed. ∙ It is important to consider the meaning of performance in the organization when developing effective scales. ∙ The scales should represent important aspects of work behavior and have appropriate structural characteristics to support valid inferences about performance level. ∙ Valid inferences are supported by knowledgeable raters.

Frame-of-Reference (FOR) Training

Training based on the assumption that a rater needs a context or "frame" for providing a rating. This type of training has proven to be an effective method of improving rating accuracy.

Hands-On Performance Measures

Type of measurement that requires an employee to engage in work-related tasks. It usually includes carefully constructed simulations of central or critical pieces of work that involve a single worker, eliminating the effect of inadequate equipment, production demands, or day-to-day work environment differences, which are some of the contaminating influences in objective measures.

Objective Performance Measure

Usually a quantitative count of the results of work such as sales, volume, complaint letters, and output.

Forced-Choice Format

Variation of a checklist format that requires the rater to choose two statements out of four that could describe the ratee. These statements have been chosen based on their social desirability value, as well as their value in distinguishing between effective and ineffective performance.

Paired Comparison Formula

Number of comparisons = n(n - 1) / 2, where n = the number of individuals to be compared.
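
A quick check of how fast the comparison count grows (a simple application of the formula above):

```python
# Number of pairwise comparisons required for a group of n employees: n(n - 1) / 2.

def num_comparisons(n):
    """Comparisons needed to pair every employee with every other employee."""
    return n * (n - 1) // 2

for n in (5, 10, 20, 50):
    print(n, num_comparisons(n))   # 5->10, 10->45, 20->190, 50->1225
```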

Administrative Training

∙ For simple, straightforward, and well-designed graphic rating systems, little administrative training is required. ∙ If the system is an uncommon one, such as BARS or BOS, the raters may need some training to understand how this system differs from the more traditional rating systems. ∙ If one or more structural characteristics are deficient, administrative training becomes more important. ∙ Training would be directed toward developing a consensus among raters about the meaning of dimension headings or anchors.

Objective vs. Judgmental Performance Measures

∙ Objective measures are not necessarily more reliable than judgmental measures. ∙ Objective measures tend to be narrower in scope than judgmental measures. ∙ Judgmental measures are more likely to capture the nuances and complexity of work performance than objective measures.

