PSYCH 333: FINAL EXAM


What are factors that will affect the size of your validity coefficient?

1) Reliability of the criterion and predictor: low reliability attenuates the coefficient, though it can be corrected for 2) Violation of statistical assumptions 3) Restriction of range: including only one extreme group will put everyone in the same range, hence weakening the coefficient 4) Criterion contamination: when the criterion measure of job performance is influenced by factors other than our predictor

What are the main types of selection interview questions?

1) Situational: future oriented ("What would you do?") 2) Behavior description: past oriented ("Describe a time when you...") 3) Social interaction questions: relate to types of WRCs/KSAOs 4) Personality/motivation questions: organizational citizenship, handling stress, difficulty, impositions, teamwork/cooperation --Motivation: "Are you able to attend Saturday night functions every 2 months?"

Chap. 4: Legal Issues in Selection What are the 4 major employment laws?

1) Title VII of the Civil Rights Act of 1964 (CRA) 2) Civil Rights Act of 1991 3) Age Discrimination in Employment Act (ADEA) 4) Americans with Disabilities Act of 1990 (ADA)

What are the weaknesses of concurrent validity?

1. Can't be done with a small sample size because it is more subject to statistical sampling error 2. If job experience is related to performance on the job, then any predictor-criterion relationship may be contaminated by an irrelevant variable: some things people need to learn on the job to really be good at it 3. If the relationship between the predictor and criterion is inflated by such job experience, the validity estimate is contaminated 4. Excludes certain groups of people, such as rejectees and employees who have left or been promoted out of the job; this restricts the range 5. Participants may differ in motivation: incumbents already have their jobs, so the test doesn't affect them and they may not care (job applicants would likely be more motivated)

What are the weaknesses of Predictive Validity?

1. Takes a lot of time because data must be collected over time 2. Difficult to get a sufficiently large sample to conduct a predictive validation study 3. Hard to convince managers of the importance of waiting before using the predictor data for HR selection purposes

What are legal issues regarding reference checks?

ADA: employers are prohibited from asking references any questions that they may not ask applicants directly -Employer's liability for negligent hiring: if injury happens because the employer did not properly check references and the employee was unfit, legal action can follow -Defamation of character (DOC): libel (written) or slander (spoken) false statements -Some job situations require background checks

Chapter 11-13-14: Ability tests, personality tests, simulation tests, integrity, and CWB tests for selection Be able to demonstrate your understanding of the class material covered regarding different types of selection test methods, including cognitive ability tests, mechanical and physical ability tests, personality tests, performance/work sample tests, trainability testing, assessment centers, SJTs, integrity tests, and CWB tests like drug tests

Ability tests are standardized measures of knowledge and abilities gained from formal learning experience.
-History: Par Lahy (1908) tested Paris streetcar operators; the Army Alpha was a WWI mental ability/intelligence test; EEO laws initially decreased the use of some ability tests, but use of tests in selection has increased recently.
-General tests: yield an overall mental ability score (g).
-Mental ability tests: developed by Binet and Simon (for French school children); the Otis Self-Administering Test of Mental Ability was the first group-administered mental ability test to have widespread use in industry.
-Mechanical ability tests: characteristics that tend to make for success in work with machines and equipment; manual performance or written problems; measure spatial visualization, perceptual speed/accuracy, and mechanical information. Bennett Mechanical Comprehension Test: operation and repair of complex devices, using pictures and scenes for logical questions; measures aptitude for learning.
-Physical ability tests: --Reasons: more female applicants for male-dominated jobs; reducing the incidence of work-related injuries; determining the physical status of job applicants. --Legal issues: adverse impact (tests must reflect critical job tasks), usually against female, older, and disabled applicants; cannot use surrogate measures such as height/weight, assumptions, or stereotypes; must measure specific JOB-RELATED physical abilities; pass-score cutoffs must be carefully set. --Fleishman's taxonomy: static strength, dynamic strength, explosive strength, trunk strength, extent flexibility, dynamic flexibility, gross body coordination, stamina. --Hogan's 3 components: muscular strength, cardiovascular endurance, movement quality. --Other types: manual dexterity tests, sensorimotor tests, sensory abilities.
-Personality tests: a trait is a continuous dimension on which consistent individual differences in reactions to the same situation may be measured or explained (agreeableness, conscientiousness, sociability, independence, need for achievement). --Uses/advantages: define personality traits in terms of job behaviors; little to no adverse impact. --Methods: inventories/self-report questionnaires (multiple-choice items about thoughts, emotions, and past experiences; validity .17-.33 depending on the predictor); projective techniques (responses to intentionally ambiguous material that can provide insight); Personality Characteristics Inventory (PCI): 150 multiple-choice 3-scale items measuring the Big 5 (OCEAN); others: emotional intelligence, proactive personality, core self-evaluations; interviews can be used if done carefully and specifically. --Legal issues: compliance with the ADA; privacy rights of individuals. --Faking happens but doesn't really impact validity coefficients.
-Simulation tests (performance/work samples): --Limitations: typically more expensive; assume applicants already have the necessary KSAs/WRCs; difficult to construct simulations that represent job activities. --Types: content (motor or verbal); fidelity: the degree to which the simulation matches or replicates the demands and activities of the job. Work samples are high-fidelity simulations. --Development steps: perform a job analysis; identify important job tasks to be tested; develop testing procedures; develop scoring procedures; train judges. --Validity of work samples is .30-.33, and they show much smaller differences between black and white applicants.

How valid is Biodata?

Advantages... -Collects information usually obtained in the selection interview -Empirical scoring procedures ensure that only job-related questions are posed -Has generally been shown to be one of the best predictors of performance, because it is DEVELOPED based on empirical scoring -Especially effective for predicting performance in entry-level jobs

Where should you seek out info about existing measures?

Annual Review of Psychology, Journal of Applied Psychology (JAP), Personnel Psychology

How is the scoring system for biodata/WAB developed?

Applicant (or current employee) responses are compared to their score on a job success criterion Response weights are based on the percent of those with a given response who are 'successful' on criterion Numerical scores are obtained for each applicant by summing the appropriate weights Employers use the resulting scores to make hiring decisions Horizontal percentage method
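The horizontal percentage method described above can be sketched in a few lines of Python. The response options and success counts below are hypothetical, purely for illustration:

```python
def response_weights(records):
    """records: list of (response_option, was_successful) pairs from past
    applicants/employees. Weight for each option = percent of people giving
    that response who were 'successful' on the criterion."""
    totals, successes = {}, {}
    for option, success in records:
        totals[option] = totals.get(option, 0) + 1
        successes[option] = successes.get(option, 0) + (1 if success else 0)
    return {opt: 100.0 * successes[opt] / totals[opt] for opt in totals}

# Hypothetical criterion outcomes for one biodata item
history = ([("A", True)] * 8 + [("A", False)] * 2 +   # 8 of 10 "A"s succeeded
           [("B", True)] * 3 + [("B", False)] * 7)    # 3 of 10 "B"s succeeded

weights = response_weights(history)
print(weights)   # {'A': 80.0, 'B': 30.0}

# An applicant's score = sum of the weights for their responses across items
applicant_responses = ["A"]   # one item in this toy example
score = sum(weights[r] for r in applicant_responses)
print(score)     # 80.0
```

With more items, the per-item weights are simply summed, which is why biodata scores end up empirically keyed to the success criterion.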

What are findings and implications from Griggs v. Duke Power?

Griggs v. Duke Power: -The case began in 1967, when 13 black employees filed a class-action suit against Duke Power charging discriminatory employment practices. The suit centered on recently adopted selection requirements for the company's operations units. -Black applicants were being screened out by requirements for a high school diploma, a passing score on an aptitude test, and a general intelligence test. -The requirements applied only prospectively (they were not retroactive), so there were people working there who did not meet the requirements but still performed their jobs well. -The Court ruled that such requirements did not relate to job performance.

How does multiple regression differ from single regression?

Multiple regression assumes 2 or more predictors (y = a + b1x1 + b2x2 + ...), whereas simple regression uses a single predictor (y = a + bx)

What is the O*Net, and what type of information/resources are on the O*Net?

ONET = Occupational Information Network: Free online database that contains hundreds of occupational definitions to help students, job seekers, businesses and workforce development professionals to understand today's world of work in the United States. Resources = 1.) The O*net Content Model 2.) O*Net Questionnaires 3.) O*Net Occupation Data 4.) O*Net Toolkit for Business -important information about occupations, worker characteristics, work skills and training requirements.

What makes someone display OCB's?

Organizational commitment; perceptions of fairness and of leader supportiveness; personality traits of conscientiousness, emotional maturity, and agreeableness -Indifference to rewards: negatively related to OCB -Gender is not related

Chap. 6: Measurement in Selection Be very familiar with the concepts of predictor and criteria, and common examples used in personnel psychology. What are x and y?

PREDICTORS: Measures used to decide whether to accept or reject applicants for a specific job. -Background info = application forms, reference checks, biographical data questionnaires -Interviews -Tests --Aptitude: how well you can perform a job or parts of the job --Achievement: employed to test proficiency at the time of testing --Personality: identifies candidates who will work harder, cooperate with others, and cope better at work, which should also relate to their success on the job CRITERIA: Measures of behavior or performance on the job. Used to evaluate the predictors used to forecast performance (as well as to evaluate employee job performance).

How does the 4) Americans with Disabilities Act of 1990 (ADA) influence personnel psychology and human resources practices?

Prohibits discrimination against qualified disabled people in areas of employment. A disability is a physical or mental impairment that limits 1+ major life activities. Reasonable accommodation (RA) must be provided for qualified applicants. Avoid physical standards that aren't validated against WRCs. Limits pre-employment inquiries about health and medical disabilities. Medical examinations may be administered ONLY after an offer of employment.

What are selection ratio & base rate; how do you use these to determine the utility of a selection test?

Selection ratio: # hires / # applicants Base rate: proportion of employees who would be successful on the job if chosen randomly You use these (along with the validity coefficient) in the Taylor-Russell tables to find the expected proportion of hires who will be successful

What are the strengths of Predictive Validity?

Since the sample is actual applicants, there is probably more motivation, and the results more accurately depict how the selection procedure will work in practice

What entails Judgmental Data?

Measures task performance. Rating scale with numerical values, completed by others (subordinates, peers, or customers). Types of judgmental instruments = 1) Trait rating scales 2) Simple behavioral scales 3) BARS or BES (behaviorally anchored rating scales / behavioral expectation scales)

Chapter 15: strategies for selection decision making What are false positive, false negative, true positive, and true negative selection decisions

-False positives (erroneous acceptances): appear acceptable but, once hired, perform poorly. These are worse because they waste resources and can lead to disaster (imagine this for an airplane pilot!) -False negatives (erroneous rejections): appeared unacceptable but would have performed successfully if hired -True positives: successful hires -True negatives: not hired, and wouldn't have done well -PR issues can arise from false negatives, but they aren't as dangerous as false positives

How does the 1) Title VII Civil Rights Act of 1964 (CRA) influence personnel psychology and human resources practices?

-Covers employers with 15+ employees -Does NOT cover private clubs, religious organizations, Congress, or Native American reservations -Prohibits discrimination on the basis of sex, religion, color, race, or national origin -Amended in 1978 to prohibit discrimination based on pregnancy/childbirth -Enforcing agency = Equal Employment Opportunity Commission (EEOC)

What are the characteristics of useful criteria?

-Individualization -Relevance -Measurability -Variance: there should be differences in the performance levels of individuals. If there isn't variance, it is because of standardization in output due to the work process or inappropriate use of measurement devices

3) BARS of BES-Behaviorally anchored rating scales

-Issues with judgmental scores: intentional and inadvertent bias by the individual making the judgment -Inadvertent bias = rater error: halo, severity, leniency, and central tendency -Relates to production data

What should be included in a job analysis? What are the components of a well written task statement?

-It should include information about the critical job tasks, duties or critical work behaviors and WRC's -What represents a successful work performance -Identification of critical job tasks and information on what represents successful work performance helps produce two important products that underpin a selection program --Performance evaluations, job tenure, and counterproductive work behaviors (prediction of future work behavior) --Selection procedures and predictors (tests, application forms and employment interviews)

What does reliability tell us?

-It tells us the expected degree of consistency in scores, or the "errors of measurement" -Errors of measurement show how prone the test is to inaccurately assessing important job-related skills

Chap 10: selection interview (purposes: recruitment and selection) What are recommendations for designing an effective and legally sound selection interview procedure?

-Make sure you're not discriminating! Questions have to be job related and appropriate!!! Your interviewer must also avoid disparate treatment or a pattern of disparate impact. Structured interview: rules; each applicant gets the same questions; a rating scale records the response to each question; uses a job analysis to develop questions that assess the KSAOs that differentiate applicants; much higher reliability and validity based on 50+ years of research. Unstructured: flexible; the hiring manager uses the questions they think best; questions are tailored to the candidate based on their application/resume.

What happens when you don't have a full range of scores on the predictor (or criterion)?

-Only including one extreme group will put everyone in the same area, weakening the coefficient -Direct restriction: when an employer uses the test being validated as a basis for selection decisions, so individuals who scored low are not hired -Indirect restriction: the test being validated is correlated with the existing measures used for job selection

what are various remedies that may be used?

-Training programs, internships to enhance applicant success. -Train managers about bias, fair processes, value of diversity. -Supportive organizational culture. -Preferential selection-tie breaker.

What is 2) Parallel or Equivalent Forms Reliability Estimate?

to control the effect of memory on test-retest reliability, one strategy is to avoid the reuse of a measure and to use equivalent versions of the measure instead. A Pearson correlation would be computed between 2 sets of scores to develop a reliability estimate (Estimates computed = Parallel or Equivalent Forms Reliability Estimates).

What are major concepts underlying test 'reliability'? True score? Error score?

-True Score: the actual amount of the attribute measured that a person really possesses -Error Score: the amount that a person's score was influenced by factors present at the time of measurement that are unrelated to the attribute measured. The errors are assumed to represent random fluctuations or chance factors

Chap. 9: Application Forms and Biodata What are key things to be aware of in developing employment application forms?

-What application info will help predict performance criteria of interest? -Are we aware of any discriminatory content on application form? -Shall we empirically determine whether/how well specific items predict performance? And use this info to 'score' apps?

What is validity generalization? What is controversial about using VG for cognitive ability tests?

-When you are able to use a validation study for other jobs because validity generalizes from one situation to the next -It is then not necessary to conduct validity studies within each organization for every job -The controversy: mental ability tests are claimed to predict job performance for pretty much everything (all jobs and settings)

Chap. 3: Job Analysis What specifically is job analysis, why is it important, and what are the many ways it is used?

-A systematic process for collecting information on the important work-related aspects of a job -Used in HR areas such as compensation, training, and performance appraisal -In HR selection, job analysis helps to identify employee specifications, or WRCs, for success on a job -Select or develop selection procedures that assess these important applicant WRCs to forecast which job candidates are likely to succeed on the job -Develop criteria or standards of job performance that represent employee job success

2) Simple Behavioral Scales

-based on information about tasks determined from the job analysis -Supervisor rated the subordinates on each major task of the job -Issue = supervisors often disagree about how important a task is

What are examples of OCB's?

Examples: Teaching new workers Assisting other workers Putting in extra time and effort

Be familiar with the basic steps that would be involved in constructing a new predictor measure

"Utility" depends on... -Size of validity coefficient (rxy). -Selection Ratio (SR) = percentage of applicants to hire. -Base Rate (BR) = the current success ratio (what are your previous experiences when living with a roommate, have you ever had a roommate?) -Cost of using new predictor. -The value to the organization of hiring to performers vs. medium performer vs. poor performer. →Selection Ratio: # of hires to applicants. *EX. = Which is a better selection ratio? 9/10 = 0.90 5/10 = 0.50 1/10 = 0.10 ANSWER = 1/10 because you have a wide variety of choices.

What are OCBs?

-2nd form of job performance -Not part of job tasks, but done in order to assist coworkers -Related to prosocial behavior and contextual performance -Came about when work switched from being individualistic to team-based What makes someone display OCBs??? Organizational commitment; perceptions of fairness and of leader supportiveness; personality traits of conscientiousness, emotional maturity, and agreeableness. Indifference to rewards is negatively related to OCB; gender is not related

What is the difference between Adverse (Disparate) Impact and Disparate Treatment?

-ADVERSE (DISPARATE) IMPACT: differences in the outcomes of a selection program across demographic groups. --*Note = adverse impact by itself is LEGAL and happens all the time. -Discrimination? = when there is no job-related explanation for the adverse impact. -Selection standards? = applied to all groups of applicants, but the result is to produce differences in the selection of various groups. -Central issue? = there are differences in the percentages of selected applicants from different demographic groups. A seemingly neutral requirement (HS diploma, math test, height requirement) may disqualify many even though it isn't necessary for job performance. -DISPARATE TREATMENT: different standards applied to different groups, even when no hostile prejudice is intended. *EX. = not hiring women for certain jobs, not hiring older applicants due to age, not hiring a person with an accent, not hiring someone because they're of a religion you have negative associations (stereotypes) about, etc.

What makes an application (or selection) question "inadvisable"?

-Questions with adverse impact -Disparate treatment questions -EEOC pre-employment guidelines must be followed, so do NOT include questions that: --Disproportionately screen out minority group members or members of one sex --Do not predict successful performance on the job --Cannot be justified as a business necessity (evidence will be required, not just opinion) --Invade privacy

How does the 2) Civil Rights Act of 1991 influence personnel psychology and human resources practices?

-Allows compensatory and punitive damages for intentional discrimination and sexual harassment. -Prohibits adjustment of scores on selection tests based on race ("race norming"). -Returned the burden of proof to the employer, but plaintiffs must now specify which part of the selection process led to the "adverse impact."

How does the 3) Age Discrimination in Employment Act (ADEA) influence personnel psychology and human resources practices?

-Applies to private industry, state government, employment agencies, and labor organizations. -Prohibits employment discrimination against those who are 40+ years old. *Note: age is NOT a valid indicator of ability to perform.

1) Trait Rating Scales (judgmental instrument)

-BAD -Measures personality traits such as dependability, ambition, positive attitudes, initiative -NO CORRELATION TO PERFORMANCE AT ALL

What info should you seek about measures you're considering?

-Be sure you completely understand the attribute or construct you are trying to measure -Search for and read critical reviews and evaluations of the measure -Order a specimen test and examine the reliability, validity, fairness etc

When is the best time to use an existing measure, and when is it best to construct your own?

-Benefits: a. Less expensive and less time consuming than creating new ones b. The information about the reliability, validity and other characteristics of the measure is usually available c. Generally of higher quality than anything that could be developed in-house -Disadvantages: Could be testing the wrong thing

Why is the content validation strategy important?

-Good for cases in which only a small number of applicants actually are being hired. (small sample size) -Use when adequate amounts of predictor or criterion data are not available -Use when there are not suitable measures of job success criteria readily available (reduces the need for quantitative measures of employee performance) -Can lead to selection procedures that will increase the applicant's favorable perceptions of an organization's selection system *Note that face validity is not the same as content validity!!! Face validity is just the appearance of whether a measure is measuring what is intended

What are the different 'types' of affirmative action, and what are various remedies that may be used?

1) Affirmative Action Programs (AAP) 2) Voluntary Affirmative Action Programs 3) Affirmative Action Remedies

Chap. 1: Intro to Selection What happens in each step of the process of developing a selection program? (Big Picture)

1) Job Analysis 2) IDENTIFICATION of Relevant Job Performance Dimensions, Identification of Work Related Characteristics (WRCs) Necessary for the Job 3) DEVELOPMENT of Assessment Devices to Measure WRCs 4) VALIDATION of Assessment Devices (Content or Criterion) 5) USE of Assessment Devices in the Processing of Applicants

What are the key issues of Griggs v. Duke Power?

1) Lack of discriminatory intent is not sufficient defense. 2) Selection test must be job related if adverse impact results. 3) Employer bears burden of proof in face of apparent adverse impact.

What are the various types of production/objective and judgmental criteria?

1) Production Data 2) Judgmental Data

If your valid selection test has adverse impact, what is the organization required to do in order to justify continued use of the test? How is that accomplished, specifically? (Note: know what BFOQs are, but that isn't the answer!)

BFOQs (bona fide occupational qualifications): apply when members of a specific demographic group are the only ones who could adequately perform the job. *EX. = A company may require that only women serve as attendants in women's restrooms. The company could use a BFOQ defense (no man could appropriately serve as such an attendant) if a male applicant brought a discrimination charge against it. →It's impossible to frame a BFOQ defense for race or color. →To justify continued use of a test with adverse impact, the organization must show that the test is job related (valid) and consistent with business necessity. →If the company is successful in defending the adverse impact, the plaintiff has a chance to present another argument to refute the defense. →The nature of this rebuttal is to establish whether another selection procedure could be used that would have less adverse impact.

What is biodata?

Biodata is biographical data: background, experiences, interests, attitudes, and values

Chap. 8: Validity of Selection Measures How is content validity established?

Content validity (also called local or indirect validity) is established by identifying key job behaviors (physical, psychological) and the associated KSAOs that are necessary for effective work performance. Content MUST be representative of the requirements for successful performance. This method emphasizes the role of expert judgment in determining the validity of a measure rather than relying on statistical methods

Criterion related validity (predictive and concurrent). What are the key steps to validate a selection test using predictive and concurrent validity? When/why is this important?

Criterion-related validity: focuses on the statistical relationship between scores on a predictor and scores on a criterion measure of job performance; inferences about performance on the criterion are made from scores on the predictor. -Couched in terms of quantitative indices. Concurrent validity: information is obtained on both a predictor and a criterion for a current group of employees (job incumbents); can the predictors being tested predict job success as measured by the criterion data? -Predictor and criterion information is collected at the same time and then statistically correlated. Predictive validity (future-employee or follow-up method): data are collected over time; job applicants are tested instead of incumbents, and criterion data are gathered later.

What entails Production Data?

Data resulting from work that can be counted, seen, and compared directly from one worker to another (output, objective, non-judgmental performance measures). Measured in both quantity and quality: --Quantity = number of units produced within a specific time period --Quality = goodness of the product. They are the easiest to use because they are collected routinely and can be seen and counted. Cons: become subjective if measured against a supervisor's standards.

Simple linear regression-how would you calculate this and why? What is standard error of estimate?

Determines how changes in criterion scores are functionally related to changes in predictor scores (assumes only 1 predictor). Formula: y = a + bx, where a = intercept of the regression line (where it hits the y axis), b = slope of the regression line, and x = the observed score on the predictor. Standard error of estimate: our estimates of job performance scores from the regression equation will not be perfect; the SE of estimate is the standard deviation of the errors made in predicting criterion scores (y) from predictor scores (x).
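A minimal Python sketch of fitting the regression line and computing the standard error of estimate; the five (x, y) score pairs are made-up data, and the SEE here divides by n - 2 (some texts divide by n):

```python
import math

# Least-squares fit of y = a + b*x on made-up predictor/criterion scores
x = [1, 2, 3, 4, 5]
y = [1, 3, 2, 5, 4]
n = len(x)
mx, my = sum(x) / n, sum(y) / n

b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
    sum((xi - mx) ** 2 for xi in x)          # slope
a = my - b * mx                              # intercept

pred = [a + b * xi for xi in x]
# Standard error of estimate: SD of the errors in predicting y from x
see = math.sqrt(sum((yi - pi) ** 2 for yi, pi in zip(y, pred)) / (n - 2))

print(round(b, 2), round(a, 2), round(see, 2))   # 0.8 0.6 1.1
```

A smaller SEE means the regression line predicts criterion scores more precisely.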

How useful and valid are letters of recommendation? Why?

Disadvantages: job relevance varies; letter quality depends on effort and ability of the writer; tend to be overly positive and lack specificity; you won't get the same info for all the applicants; relevant info may be omitted; scoring is subjective

For each of these selection test methods, what are examples of widely used measures? Strengths and weaknesses? Key things to be aware of? How is the method used/implemented? Which have more/less adverse impact? Which measures are more/less valid for predicting job performance? What would you need to be aware of if you were implementing this (each) testing method in your organization?

Effects of coaching and practice... -Coaching: training has minimal effect on test scores -Practice: repetition improves test scores due to better understanding of the test format and methods of responding, reduction of test anxiety, and learning the specific skills tested; these gains did not translate into increased job or training performance

What are 3) Affirmative Action Remedies?

Enhanced recruitment of underrepresented groups (ads, college fairs).

What is the coefficient of determination, and what does it tell us?

The coefficient of determination is the squared validity coefficient (r²). It tells you the amount of variability in one factor that can be accounted for by its relationship to another factor. It is relied on heavily in trend analysis and is represented as a value between 0 and 1; the closer the value is to 1, the better the fit, or relationship, between the two factors.
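As a quick numeric illustration (the validity coefficient of .40 below is a hypothetical value, not from the text):

```python
# Coefficient of determination: square the validity coefficient.
r_xy = 0.40                 # hypothetical validity coefficient
r_squared = r_xy ** 2

print(round(r_squared, 2))  # 0.16
print(f"{r_squared:.0%}")   # 16% of criterion variance accounted for
```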

How valid is the selection interview at predicting job performance or other relevant criteria?

-Increased with a standardized process for gathering, recording, and interpreting applicant information -Increased by standardizing the interview and/or by relying on multiple interviewers arriving at independent evaluations for each candidate -Affected by the complexity of the job: using hypothetical questions is not as appropriate as using questions about what the candidate has done in actual situations

What are the strengths of concurrent validity?

Investigator has almost immediate information on the appropriateness of selection procedures as employment tools

What is 4) Inter rater Reliability Estimates (Split-half reliability estimates)?

Inter-rater reliability reflects the degree of agreement between the scores assigned by two or more raters. Split-half estimates involve a single administration of a selection measure: the measure is divided or split into 2 halves, so that scores on each half can be obtained for each test taker and correlated.

How are OCB's measured?

Judgmental scales. Self-report scales: the worker responds to questions (we usually rate ourselves very high).

What would you DO (basic steps) to establish content validity of a selection test?

Steps: 1) Conducting a Comprehensive Job Analysis -Describing the task performed on the job -Measuring criticality and/or importance of the tasks -Specifying WRCs required to perform these critical tasks -Measuring the criticality and/or importance of WRCs which include --An operational definition of each WRC --Description of the relationship between each WRC and each job task --Description of the complexity/difficulty of obtaining each WRC --A specification as to whether an employee is expected to possess each WRC before being placed on the job or being trained --An indication of whether each WRC is necessary for successful performance on the job -Linking important tasks to important WRCs 2) Selecting experts participating in a content validity study 3) Specifying selection procedure content -Selection procedure as a whole -Item-by-item analysis -Supplementary indications of content validity 4) Assessing selection procedure and job content relevance -SMEs judge each item on relatedness to job performance -Lawshe's content validity ratio (CVR)
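Lawshe's content validity ratio mentioned in the last step has a simple formula, CVR = (n_e − N/2) / (N/2), where N is the number of SME panelists and n_e is how many rated the item "essential". A sketch with hypothetical panel numbers:

```python
# Lawshe's content validity ratio for one item.
# N = number of SME panelists; n_essential = how many rated it "essential".
def cvr(n_essential, n_panelists):
    half = n_panelists / 2
    return (n_essential - half) / half

print(cvr(8, 10))   # 0.6  (CVR ranges from -1 to +1; higher = more job-related)
print(cvr(5, 10))   # 0.0  (exactly half the panel says "essential")
```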

How do you calculate Adverse Impact?

Take the % of minority applicants hired. Divide that by the % of non-minority applicants hired. If the result is 0.80 or higher, you DON'T have adverse impact. If the result is below 0.80, you DO have adverse impact.
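The four-fifths (80%) rule above can be sketched as a small function; the hiring counts in the examples are hypothetical:

```python
# Four-fifths rule: ratio of minority to majority hiring rates.
def adverse_impact(minority_hired, minority_applied,
                   majority_hired, majority_applied):
    ratio = (minority_hired / minority_applied) / (majority_hired / majority_applied)
    return ratio, ratio < 0.80   # True = evidence of adverse impact

ratio, flagged = adverse_impact(30, 60, 48, 80)   # rates: 0.50 vs 0.60
print(round(ratio, 3), flagged)   # 0.833 False -> no adverse impact

ratio, flagged = adverse_impact(10, 50, 30, 60)   # rates: 0.20 vs 0.50
print(round(ratio, 3), flagged)   # 0.4 True -> adverse impact
```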

What are 1) Affirmative Action Programs (AAP)?

Taking affirmative steps to recruit people.

What recommendations would you make to an organization that was interested in improving the selection interview's use as a predictor of employee performance?

Use a structured interview format Train the interviewer! --Accurately receive information and accurately evaluate the information received --Regulate behavior in delivering questions --Results of interviewer training are generally positive

Chapter 7: Reliability of Selection Measures How do we determine if a test is reliable?

Use the 4 forms of reliability estimates... 1) Test-Retest Reliability Estimates 2) Parallel or Equivalent Forms Reliability Estimates 3) Internal Consistency Reliability Estimates 4) Inter rater Reliability Estimates (Split-Half reliability estimates)

How is Biodata used?

Used through differentiating response type and behavior type... Response type: the kind of response options (in the form of a scale) offered a respondent by an item Behavior type: the specific behavioral content (dimension) of an item

What are 2) Voluntary Affirmative Action Programs?

Voluntary programs conflict with Title VII's prohibition on race-based employment decisions; reverse discrimination claims.

How does one compute a validity coefficient? (no need to memorize formula but know how it's done)

Compute Pearson's r correlation coefficient between scores on the predictor (x) and scores on the criterion (y)
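A self-contained sketch of the Pearson r computation; the five predictor/criterion score pairs are made up for illustration:

```python
import math

# Pearson's r between predictor scores (x) and criterion scores (y)
x = [1, 2, 3, 4, 5]   # e.g., test scores
y = [1, 3, 2, 5, 4]   # e.g., job performance ratings
n = len(x)
mx, my = sum(x) / n, sum(y) / n

cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
r = cov / math.sqrt(sum((xi - mx) ** 2 for xi in x) *
                    sum((yi - my) ** 2 for yi in y))
print(r)   # 0.8
```

This r is the validity coefficient; squaring it gives the coefficient of determination.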

Why is reliability important?

a. Reliability is the degree of dependability, consistency, or stability of scores on a measure, determined by the degree of consistency between two sets of scores on the same measure b. It is important because it tells us the expected degree of consistency in scores, or the "errors of measurement" c. We use these scores as a basis for our selection decisions, so they must be dependable

What are T&E evaluations?

a. Training and Experience evaluations: assess previous training, experience, and education. Used for: -Screening for minimal qualifications -Rank-ordering individuals from high to low based on T&E score -Prescreening applicants prior to administering more expensive, time-consuming predictors (like an interview) -In combination with other predictors for making employment decisions b. Reliability and validity: -High inter-rater reliability (.80s) -Criterion-related validity

What are the dimensions of OCB's?

helping behavior, sportsmanship, organizational loyalty, organizational compliance, individual initiative, civic virtue, self-development

What is restriction of range and how does it impact your validity coefficient?

Restricting the range means limiting the data to a subset of the population (e.g., only those above a selection cutoff) when determining whether two variables are correlated. Because the restricted group varies less on the predictor, the observed validity coefficient is attenuated (smaller than it would be with the full range of scores).
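The attenuation can be demonstrated with made-up data: compute r on all ten "applicants", then again on only the high scorers who would have been hired under a cutoff.

```python
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x) *
                           sum((b - my) ** 2 for b in y))

# Made-up predictor (x) and criterion (y) scores for 10 applicants
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9]

r_full = pearson_r(x, y)
# "Hire" only high scorers (x >= 6), as a selection cutoff would
kept = [(a, b) for a, b in zip(x, y) if a >= 6]
r_restricted = pearson_r([a for a, _ in kept], [b for _, b in kept])

print(round(r_full, 2), round(r_restricted, 2))   # 0.94 0.82
```

The same underlying relationship looks weaker once the low scorers are removed, which is why validity studies on incumbents tend to understate a test's true validity.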

What is 1) Test-Retest Reliability Estimates?

The same measure is used to collect data from the same respondents at two different points in time. The higher the test-retest reliability coefficient, the greater the true-score component and the less error present.

What is 3) Internal Consistency Reliability Estimates?

Shows the extent to which all parts of a measure are similar in what they measure. A selection measure is internally consistent (homogeneous) when an individual's responses on one part of the measure are related to their responses on other parts.

