MGT 352 Exam #2 (SDSU Dean)

Systems approach to training

1) needs assessment phase 2) training design phase 3) training implementation phase 4) training evaluation phase

Big Five personality trait

(1) Agreeableness (trust) — teams, customer service jobs
(2) Conscientiousness (detail) — good across all jobs; the strongest predictor (rxy = .31)
(3) Extroversion (adaptability) — sales, management jobs
(4) Emotional Stability (self-confidence) — public safety jobs, all jobs
(5) Openness to Experience (independence) — jobs requiring innovative thinking and creativity; expatriate assignments (sample item: "I tend to work on projects alone, even if others volunteer to help me")

internal recruitment sources

(1) Inventorying Management Talent through human resource information systems (2) Job Postings (3) Performance Appraisals (4) 9-Box Grids (5) Assessment Centers

Validity: Content

*Does the test adequately measure KSAs needed on the job? The extent to which a selection instrument, such as a test, adequately samples the knowledge and skills needed to perform a particular job. Focus on description rather than prediction Performed on tests under development (SME judgment) Example: typing tests, driver's license examinations

sources of error in measurement

- Environmental factors (room temperature, lighting, noise)
- Personal factors (anxiety, mood, fatigue, illness)
- Interviewer/recruiter behavior (e.g., smiling at one applicant, then scowling at another)
- Test item difficulty level
- Item response format (multiple choice vs. true/false)
- Length of the test
- Homogeneity of items (do all items measure the same KSA?)
- Homogeneity of the group who took the test (if applicants are more homogeneous, does this tend to increase or decrease reliability — make the test look better or worse?)

rating scale types

- Graphic rating scales - behavioral observation (BOS) - behaviorally anchored rating scales (BARS)

external recruitment sources

- Internet external recruitment - Employee referrals. Two instances when an organization will use external recruiting: 1. For filling entry-level jobs 2. For filling upper-level jobs with no available internal applicants

recruiter characteristics

- Knowledge of the recruited job's requirements and of the organization - Training as an interviewer - Personable and competent to represent the organization

court rulings/legal issues in performance appraisal

- Performance appraisals are "tests" under Title VII - Performance ratings must be job-related - Employees must be given a written copy of their job standards in advance of appraisals - Managers who conduct the appraisal must be able to observe the behavior they are rating - Supervisors must be trained to use the appraisal form correctly - An appeals procedure should be established to enable employees to express disagreement with the appraisal

Alternative Sources of Performance Appraisal Ratings

- Self - Subordinate - supervisor - peer - team - customers

relative judgements v. absolute ratings

Ranking employees against each other vs. rating each employee against a performance standard baseline. Problems with relative judgments: the size of the difference between employees is unclear; they force differences where none exist; they are not useful for feedback; and rankings from different departments are not comparable. Advantages of absolute ratings: employees get more specific feedback, the standard isn't a moving target, and they are more legally defensible.

rater errors

- recency - central tendency - leniency or strictness

Performance Appraisal Methods

- trait - behavioral - results methods

criteria for developing performance standards

1. Measures aspect of the job that is truly important (5-7 essential functions) 2. relates to strategic goals 3. objectively measured 4. difficult yet achievable (goal setting theory) 5. Get employee input when setting standards (procedural justice) 6. the measure is free of contamination 7. Standards capture entire range of job (the measure is free from deficiency)

Four criteria for measuring training program effectiveness:

1. Reactions 2. Learning 3. Behavior 4. Results

Inappropriate application questions qualified privilege law

Age? Race? Religion? Date graduated from high school? Arrest record? Convictions? — depends on state law. Maiden name? Disabilities? Are you a US citizen? What language do you commonly use? Title (Mr., Mrs., Miss)? Type of military discharge? Credit history — only if job related (see later slide on 2012 CA law)

RJP is most effective when they are ________ and at the __________ of the recruitment process.

verbal; end

Application forms - background checks

Background checks are absolutely critical in order to avoid a negligent hiring lawsuit. Types of information sought in background checks include: educational credentials, verification of former employment, criminal records, and driving records.

Behaviorally Anchored Rating Scale (BARS)

A PERFORMANCE appraisal that consists of a series of vertical scales (usually 5-10), one for each dimension of job performance. More information on the rating scale than simply "excellent, average, poor ratings"

Graphic Rating-Scale Method

A TRAIT approach to performance appraisal whereby each employee is rated according to a scale of individual characteristics Rate this employee on his/her level of dependability. 1 = Excellent 2 = Above average 3 = Average 4 = Below average 5 = Poor

Person analysis (needs assessment phase)

A determination of which individuals need training

Central Tendency error

A rating error in which all employees are rated about average (there is no differentiation).

Recency error

A rating error in which appraisal is based largely on an employee's most recent behavior rather than on behavior throughout the appraisal period

Leniency or Strictness Error

A rating error in which the appraiser tends to give all employees either unusually high or unusually low ratings (no differentiation)

Organizational analysis (needs assessment phase)

An examination of the environment, strategies, and resources of the organization to determine where training emphasis should be placed

Criteria 4: results (training evaluation phase - measuring training effectiveness)

Determining the bottom line impact of training Calculating the benefits derived from training: - How much has it contributed to profits? - What reduction in turnover did the company get after training? - How much has productivity increased? - How much cost savings have been realized? *Return on Investment (ROI)?

Peer Appraisal

Appraisal by fellow employees, compiled into a single profile for use in an interview conducted by the employee's manager. *Developmental

Manager and/or Supervisor Performance Appraisal

Appraisal done by an employee's manager *Administrative and developmental

Subordinate Appraisal

Appraisal of a superior by an employee *Developmental and Administrative

Relative Judgments

Asking the rater to compare an employee's performance relative to other employees in the same job Ranking method - rank order the employees from "best" to "worst" Forced distribution - for example - place ¼ of ees in top rating category, ½ in middle rating category, ¼ in lowest category Reasons why an organization might use either of these two methods: - to address rater biases - to make administrative decisions (pay raises, layoffs, etc.)

Absolute ratings

Asking the rater to judge an employee's performance against a performance standard or baseline *On a 1 - 10 point scale, rate the employee's ability to meet deadlines. Advantages of absolute ratings: 1. Employees get more specific feedback 2. The standard isn't a moving target 3. More legally defensible

situational interviews v. behavioral descriptive interviews (6)

Behavioral descriptive (higher validity): an interview in which an applicant is asked questions about what he or she actually did in a given situation. "Tell me about a time when you had to lead a group..." "Tell me about a time when you had to discipline an employee..."
Situational: an interview in which an applicant is given a hypothetical incident and asked how he or she would respond to it. "How would you lead a group if given the opportunity?" "How would you go about disciplining an employee?"

closed v. open internal recruitment

Closed - Employees are not made aware of openings, the organization contacts the employees they are interested in Pros: Makes the process more efficient by narrowing the applicant pool Cons: Limited applicant pool and fairness issues Open - Employees are made aware of the position opening (job posting) Pros: It is more fair (open to everyone) as well as expands the applicant pool. Cons: They have a lot of people to reject which results in the need to help them understand why they were not chosen. It's also not as efficient

Primary difference between criterion-related validity CONCURRENT v. PREDICTIVE validation

Concurrent validation: examining the extent to which test scores correlate with job performance data obtained from current employees.
- Low test-taking motivation
- Quickly done
- Unequal job tenure among incumbents
- Sample may not generalize demographically to our applicant pool
Predictive validation: examining the extent to which applicants' test scores match criterion data obtained from those applicants/employees after they have been on the job for some period (6 months to 1 year).
- High motivation
- Takes a lot of time
- No sample generalizability problem (the applicant pool is the sample)
- Equal job tenure

cost per hire

Cost of Recruitment/ # ees hired You can calculate recruitment costs in terms of: 1) Cost per hire per recruitment source 2) Cost per hire across all recruitment sources Example: Employee referrals = Total cost:$3,000, 3 hires = $1000 / hire
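The cost-per-hire arithmetic above can be sketched in Python. The source names and dollar figures below (beyond the $3,000 / 3 hires referral example from the card) are illustrative, not from the course:

```python
# Hypothetical recruitment data; the job-board figures are made up for illustration.
sources = {
    "employee_referrals": {"cost": 3000, "hires": 3},
    "job_board": {"cost": 8000, "hires": 4},
}

def cost_per_hire(cost, hires):
    """Cost per hire = cost of recruitment / number of employees hired."""
    return cost / hires

# 1) Cost per hire per recruitment source
per_source = {name: cost_per_hire(d["cost"], d["hires"])
              for name, d in sources.items()}

# 2) Cost per hire across all recruitment sources
total_cost = sum(d["cost"] for d in sources.values())
total_hires = sum(d["hires"] for d in sources.values())
overall = cost_per_hire(total_cost, total_hires)

print(per_source["employee_referrals"])  # 1000.0, matching the $3,000 / 3 hires example
```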

3) training implementation phase

Deciding what training method should be used On-the-job OR off-the-job training methods

Validity

Degree to which the inferences we make from test scores are appropriate

interpreting reliability coefficients: How high is high enough?

Depends... rxx = .80 or higher is one rule of thumb

Criteria 1: reaction (training evaluation phase - measuring training effectiveness)

Determining trainee reactions to the training program. The simplest and most common approach to training evaluation is assessing trainees' reactions. Potential questions to assess reaction: - Did you like this program? - Would you recommend it to others who have similar learning goals? - What suggestions do you have for improving the program? Favorable reactions are good, but there is no empirical relationship between reactions and on-the-job performance; just because you liked it doesn't mean you learned it.

1) needs assessment phase

Determining whether training is needed - organizational analysis - task analysis - person analysis

4) training evaluation phase

Determining whether training worked Evaluation is the most overlooked phase of training Four criteria for measuring training program effectiveness: 1. Reactions 2. Learning 3. Behavior 4. Results

Criteria 3: behavior (training evaluation phase - measuring training effectiveness)

Determining whether what was learned in training actually got incorporated on the job Is transfer of training occurring? - Increase likelihood of transfer of training via instructional design - Incorporate desired behaviors learned into the performance evaluation process

2) training design phase

Developing a training strategy and preparing instructional plans

Developing instructional objectives (training design phase)

Developing a training strategy and preparing instructional plans - developing instructional objectives - learning principles - transfer of training

Primary Purposes of performance appraisal (8)

Developmental (a coach) - feedback - strengths/weaknesses - goals Administrative (a judge) - promotion candidates - identify poor performance - reward

Reliability: Parallel Forms

Examines the consistency of measurement of an attribute across forms. Similar to test-retest but controls the effect of memory by using 2 different versions of a test and seeing how well the scores correlate Example: SAT, ACT

Criteria 2: learning (training evaluation phase - measuring training effectiveness)

Examining whether trainees actually learned anything. Designs for measuring learning: *Post-test design: Train -> Test *Pre-test Post-test design: Test -> Train -> Test Ideally, in addition to testing trainees, organizations should test other similar employees who did not attend the training (i.e., a control group) to estimate the differential effect of the training across people.
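The pre-test/post-test design with a control group boils down to a gain comparison. A minimal sketch, with made-up test means:

```python
# Hypothetical mean knowledge-test scores (0-100 scale); all numbers illustrative.
trained_pre, trained_post = 55, 80   # employees who attended the training
control_pre, control_post = 56, 60   # similar employees who did not (control group)

trained_gain = trained_post - trained_pre   # 25 points
control_gain = control_post - control_pre   # 4 points

# Differential effect of training = gain beyond what untrained peers gained
training_effect = trained_gain - control_gain  # 21 points
```

Without the control group, all 25 points of the trained group's gain might be credited to training, even though untrained peers improved too.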

Self-Appraisal Performance

Generally on an appraisal form completed by the employee prior to the performance interview. *Developmental

360 degree feedback

Getting performance information from multiple sources Advantages: - The system is more comprehensive because feedback is gathered from multiple perspectives - It may lessen bias/prejudice that could be specific to one individual rater - Feedback from peers and others may increase employee self-development Disadvantages: - Feedback can be intimidating - There may be conflicting opinions, though they may all be accurate from the respective standpoints - Appraisers may not be accountable if their evaluations are anonymous

global v. dimensional ratings

Global rating: a single overall rating of job performance (1-10); useful for administrative purposes. Dimensional ratings: ratings on specific aspects of performance ("How well does this employee meet deadlines?" "...follow directions?" "...work with other team members?"); useful for developmental purposes.

Advantages of Predictive validity

High motivation No sample generalizability problem (applicant pool is the sample) Equal job tenure

Benefits of Realistic Job Previews (5)

Improved employee job satisfaction Reduced voluntary turnover Enhanced communication through honesty and openness Realistic job expectations

assessment centers

Individuals are evaluated as they participate in a series of situations that resemble what they might have to do on the job. -In-basket exercises -Leaderless group discussions -Role playing -Behavioral interviews

realistic job previews (RJP)

Informing applicants about all aspects of the job, including both its desirable and undesirable facets. RJPs are most effective when they are verbal and given at the end of the recruitment process. You may scare away some high-quality applicants who have options elsewhere.

Inter-rater Reliability

Inter-rater - Measures the degree to which multiple raters give similar ratings to applicants Are some raters biased? Too Lenient? Too strict? Need multiple raters so we can know how reliable any one rater is Inter-rater reliability estimate assesses the degree of objectivity in ratings
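One simple way to quantify how similarly multiple raters score the same applicants is percentage agreement; correlating the raters' scores is another common estimate. A sketch with made-up ratings:

```python
# Two hypothetical raters scoring the same five applicants on a 1-5 scale.
rater_a = [4, 3, 5, 2, 4]
rater_b = [4, 4, 5, 2, 3]

# Simple agreement index: proportion of applicants given identical ratings.
# (Correlating the two raters' score lists is another common reliability estimate.)
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

print(agreement)  # 0.6 -> the raters agree on 3 of 5 applicants
```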

Reliability Estimates: How consistent are scores across raters?

Inter-rater reliability

Reliability Estimates: How consistent are the items on a test in measuring a construct?

Internal Consistency

internet external recruitment

Internet recruiting (e-cruiting): - Can reach a large audience - Target your audience by where you place your message - Relatively low cost - Use to locate passive job seekers - Use to show videos of a typical day on the job - Helps create an employment "brand". Methods: - Posting jobs on job websites - Posting jobs on the organization's website - Social media (LinkedIn, Facebook, blogs). Cisco Systems' creative recruiting on their website: - "Make Friends @ Cisco" program - Online resume builder - "Oh no! My Boss is Coming" button - Virtual tour of the Cisco campus

Advantages of Concurrent validity

It can be done quickly because you can collect all pieces of data simultaneously

problems with performance appraisal

Managers dislike the face-to-face confrontation of appraisal interviews Managers are not sufficiently adept in providing appraisal feedback (need training) Being both Judge & Coach difficult - the judgmental role of appraisal conflicts with the helping role of developing employees Manager may not be able to observe performance or have all the information Inflated ratings because managers do not want to deal with "bad news"

Behavior Observation Scale (BOS)

Measures the frequency of observed behavior Assessment of frequency rather than judgment Rater is asked to take the role of "observer" rather than "judge" Observing if someone did something is thought to be easier than evaluating how well they did something

Off-the-job training methods (training implementation phase) Pros? Cons?

Off-the-job training - Classroom training - Simulation - E-learning Pros: Cons:

On-the-job training methods (training implementation phase) Pros? Cons?

On the job training - Mentoring - Peer training - Job rotation Pros: Cons:

Reliability Estimate: How consistent are scores across different forms of a test?

Parallel Forms

Application forms - integrity tests

Passage of the Polygraph Protection Act led to the development of paper-and-pencil integrity tests. These are general personality questions that ask indirect questions about theft, on the premise that personality traits predict future deviant behavior. Overt (clear-purpose) integrity tests are more likely to be faked: there is much more social stigma attached to failing an overt test than a general personality test. Used as weed-out devices.

yield ratios

Percentage of applicants from a recruitment source that make it to ANY later stage of the selection or employment process. Example yield ratios:
- # above-average ees / # qualified applicants
- # job offers / # qualified applicants
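The two example ratios above are just stage counts divided by the qualified-applicant count. A sketch with a made-up funnel for one recruitment source:

```python
def yield_ratio(later_stage, earlier_stage):
    """Fraction of applicants from a source surviving to a later stage."""
    return later_stage / earlier_stage

# Hypothetical funnel for a single recruitment source; all counts illustrative.
qualified_applicants = 50
job_offers = 10
above_average_hires = 4

offer_ratio = yield_ratio(job_offers, qualified_applicants)             # 0.2
quality_ratio = yield_ratio(above_average_hires, qualified_applicants)  # 0.08
```

Comparing these ratios across sources shows which source delivers the most applicants who survive to later stages per qualified applicant.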

Application forms - reference check

Personal references - no validity. Best source of reference: a former supervisor, who has witnessed your job-related behaviors.

_____________similarity between equipment/environment used in training and on the job

Physical fidelity

______________similarity between behaviors required in training and behaviors required on the job

Psychological fidelity

quality of hire

Quality of Hire = (PR + HP + HR) / N PR = Average job performance rating of new hires HP = % of new hires reaching acceptable productivity within an acceptable time frame HR = % of new hires retained after one year N = number of indicators
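The Quality of Hire formula above averages its indicators. A sketch with made-up figures (each indicator on a 0-100 scale):

```python
def quality_of_hire(pr, hp, hr):
    """Quality of Hire = (PR + HP + HR) / N, where N = number of indicators."""
    indicators = [pr, hp, hr]
    return sum(indicators) / len(indicators)

# Hypothetical cohort: average performance rating 80, 70% productive in time, 90% retained.
qoh = quality_of_hire(pr=80, hp=70, hr=90)
print(qoh)  # 80.0
```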

interpreting reliability coefficients

Reliability coefficient (rxx): shows the % of score variance that is thought to be due to true differences on the attribute being measured. rxx = 0 -- no reliability. rxx = 1.0 -- perfect reliability. rxx = .90 -- 90% of the variance between individuals' scores is due to true differences (10% due to error). FOR THE EXAM, we will be using an rxx of 0.80 or higher rule of thumb. This means 80% of the variance between individual scores is due to true differences while 20% is due to error.
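The variance split and the course's rxx >= .80 rule of thumb can be expressed directly:

```python
def interpret_reliability(rxx):
    """Split score variance into true vs. error components and apply
    the rxx >= .80 rule of thumb used in the course."""
    true_pct = rxx * 100          # % attributed to true differences
    error_pct = 100 - true_pct    # % attributed to measurement error
    acceptable = rxx >= 0.80
    return true_pct, error_pct, acceptable

true_pct, error_pct, acceptable = interpret_reliability(0.90)
print(true_pct, error_pct, acceptable)  # 90.0 10.0 True
```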

cognitive ability tests

Research suggests it is one of the best predictors of job performance across a wide range of jobs High criterion-related validity (rxy = .45-.50) Low cost (paper and pencil tests) High adverse impact (relative to other selection devices)

Selection ratio

Selection ratio = # of positions/# of qualified applicants
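The formula above in code, with made-up counts; lower values mean a more selective hiring situation:

```python
def selection_ratio(positions, qualified_applicants):
    """Selection ratio = # of positions / # of qualified applicants."""
    return positions / qualified_applicants

# Hypothetical: 5 openings, 100 qualified applicants.
sr = selection_ratio(5, 100)
print(sr)  # 0.05 -> only 5 openings per 100 qualified applicants
```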

Practical vs. statistical significance (interpretation of validity coefficients)

Statistical significance = generalizability to other samples; driven primarily by sample size Practical significance = size of the correlation—is it large enough to be useful?

subjective v. objective ratings

Subjective rating: Rate this employee on his/her level of dependability. Objective rating: How often is this employee absent? How well does this employee meet deadlines? Pros: - Less ambiguous - More legally defensible

Reliability Estimate: How consistent are scores on a test over time?

Test-Retest

application forms—appropriate v. inappropriate questions qualified privilege law

The EEOC cautions employers not to ask non-job related questions. Though some states strictly prohibit some questions on an application form, federal law does not prohibit any questions.

Reliability (6)

The degree to which interviews, tests, and other selection procedures yield consistent data

Validity: Construct validity

The extent to which a selection tool measures a theoretical construct or trait. Most difficult validation method - have to prove that your test really measures the concept it says it measures How do you know your intelligence test measures intelligence? Look at how your test compares to other similar and different tests: honesty tests, intelligence tests, personality tests

Criterion-related validity

The extent to which a selection tool predicts, or significantly correlates with, important elements of work behavior. If a test has criterion-related validity, a high test score indicates high job performance potential; a low test score is predictive of low job performance. Two options for determining the criterion-related validity of a selection test: 1. Concurrent validation 2. Predictive validation

P-value or Statistical Significance (interpretation of validity coefficients)

The p-value is the probability that your result occurred by random chance alone. Generally, we want this probability to be less than 5% (that is, p < .05). This component is the statistical significance of validity: its generalizability to other samples, driven primarily by sample size.
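One standard way to test whether a validity coefficient differs from zero (not spelled out in the course notes) is a t test on r with df = n - 2; the sample size below is made up:

```python
import math

def t_statistic(r, n):
    """t statistic for testing H0: correlation = 0, with df = n - 2."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

# Hypothetical validation study: rxy = .31 observed across n = 100 employees.
t = t_statistic(0.31, 100)

# For df = 98 and alpha = .05 (two-tailed), the critical value is about 1.98,
# so |t| > 1.98 implies p < .05.
significant = abs(t) > 1.98
```

Here t is roughly 3.2, so an rxy of .31 in a sample of 100 would be statistically significant; the same rxy in a sample of 20 would not be.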

Task analysis (needs assessment phase)

The process of determining what the content of a training program should be on the basis of a study of the tasks and duties involved in the job

recommendation letters

These also have low validity. That said, letters that focus more on a candidate's intelligence and letters that were longer were more predictive of who ended up performing well on the job.

Training vs. development (7)

Training - effort initiated by an organization to foster learning among its members Focus: current job Time frame: provide skills to benefit the organization quickly Goal: fix a current skill deficit Development - effort to provide employees with KSAs needed by the organization in the future Focus: current and future jobs Time frame: benefits to organization in long run Goal: prepare for future work demands

learning principles (training design phase)

Training will be more effective to the degree that you take learning principles into account:
- Goal setting
- Feedback and reinforcement
- Modeling
- Practice and repetition
- Massed vs. distributed learning: concerns the spacing of training - which would be better, five two-hour sessions or one ten-hour session?
- Whole vs. part learning: teach the whole task or break it down into parts?

interpretation of validity coefficients

Validity coefficients (expressed as rxy) can range from -1.0 to +1.0. When rxy = 0, there is no validity. When rxy = 1.0, there is a perfect positive correlation. When rxy = -1.0, there is a perfect negative correlation.
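A validity coefficient is just the Pearson correlation between test scores and a criterion (e.g., performance ratings). A self-contained sketch with made-up data for five employees:

```python
def pearson_r(x, y):
    """Correlation between predictor scores x and criterion scores y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: five employees' test scores vs. performance ratings.
scores = [10, 20, 30, 40, 50]
ratings = [2, 3, 5, 4, 6]
rxy = pearson_r(scores, ratings)
print(round(rxy, 2))  # 0.9 -> a strong positive validity coefficient
```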

Return on investment (ROI) (7)

Viewing training in terms of the extent to which it provides knowledge and skills that create a competitive advantage and a culture that is ready for continuous change ROI = Training Results/Training Costs *If ROI is > 1, the benefits of the training exceed the cost of the program *If ROI is < 1, the costs of the training exceed the benefits
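The ROI rule above in code, with made-up dollar figures:

```python
def training_roi(results, costs):
    """ROI = training results / training costs.
    ROI > 1: benefits exceed cost; ROI < 1: costs exceed benefits."""
    return results / costs

# Hypothetical program: $150,000 in measured benefits, $100,000 in costs.
roi = training_roi(150_000, 100_000)
print(roi)  # 1.5 -> benefits exceed the cost of the program
```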

Types of validity

criterion-related, content, and construct validity

transfer of training issues (training design phase)

Do employees transfer what they learned in training to the job? Ways to facilitate transfer of training: 1. Demonstration of concepts being taught 2. Training practice or simulation 3. Practice back on the job 4. Fidelity of the training situation: physical fidelity - similarity between equipment/environment used in training and on the job; psychological fidelity - similarity between behaviors required in training and behaviors required on the job

Reliability: Test-Retest

Estimates the degree to which individuals' test scores vary over time. Two factors that affect test-retest reliability: 1) Memory - will memory inflate or deflate the reliability of a test? 2) Learning - will learning inflate or deflate the reliability of a test?

Behavioral methods (Performance appraisal)

focus on rating employee behaviors (not simply work traits) Advantages Use specific, concrete performance criteria Are acceptable to employees and superiors Are useful for providing feedback Legally defensible Behaviors are closer to performance than traits Disadvantages Can be time-consuming to develop/use Can be costly to develop

Trait Methods (performance appraisal)

focus on rating employee personal traits Advantages Are inexpensive to develop Are easy to use Disadvantages Have high potential for rating errors Too ambiguous Not legally defensible Focus is on the person not performance

Results Methods (Performance appraisal)

focus on rating the employee's results Raters assess the results achieved by the ee (not their personal traits nor how they achieved these results) Advantages Clear and unambiguous criteria Are acceptable to employees and superiors Link individual to organizational performance Disadvantages May encourage short-term perspective May encourage a "results at any cost" mentality Quality v. quantity concern Extraneous factors outside ee's control might impact performance

Internal Consistency Reliability

have a group of people take a test once and then examine how the items hang together (see the conscientiousness test example slide). Most commonly used reliability estimate in HR research Two types of internal consistency estimates: 1. Split half reliability estimate 2. Cronbach's alpha
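Cronbach's alpha, the most common internal-consistency estimate, can be computed by hand. A sketch with a made-up 3-item scale answered by 4 respondents (the conscientiousness-scale framing echoes the slide the card mentions; the numbers are invented):

```python
def variance(xs):
    """Population variance of a list of scores."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: one inner list of respondent scores per test item."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total score
    item_var = sum(variance(it) for it in items)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Hypothetical 3-item conscientiousness scale, 4 respondents.
items = [
    [4, 5, 3, 2],
    [4, 4, 3, 1],
    [5, 5, 2, 2],
]
alpha = cronbach_alpha(items)
print(round(alpha, 2))  # 0.94 -> well above the rxx >= .80 rule of thumb
```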

Employee Referral Recruiting

One of the most effective ways of recruiting, since employees are generally hesitant to recommend individuals who might not perform well. Cons: corporate inbreeding; risk of violating EEO regulations by inadvertently screening out protected classes.

Types of reliability

test-retest, parallel forms, internal consistency, and inter-rater reliability

Direction (interpretation of validity coefficients)

The sign of the correlation (-/+). A test can be just as valid with rxy = -0.40 as with rxy = +0.40; a negative sign simply means there is a high negative correlation between the test and performance.

Magnitude (interpretation of validity coefficients)

the size of the correlation Is it large enough to be useful? low validity --> rxy = 0.00 to +/- 0.15 moderate validity --> rxy = +/- 0.16 to +/- 0.30 high validity --> rxy = +/- 0.31 or higher

methods for improving recruitment effectiveness

yield ratios, cost per hire, quality of hire, recruiter characteristics, realistic job previews

