Selection
What would be a better way of writing the following task statement: "Assists with inspection of construction projects."
"Inspects/construction operations (erosion control, Portland cement concrete paving, asphaltic concrete paving, painting, fencing, sign placement) in order to ensure compliance with construction specifications, by comparing visual observations with construction specifications and plans, and by following verbal instructions; while under daily review by the supervisor."
What is the Civil Rights Act (1991)?
• Restores disparate impact law to its state prior to Wards Cove Packing Co. v. Atonio (1989) • Prohibits subgroup norming of test scores • Allows jury trials • Applies to U.S. companies operating overseas
What are predictor variables?
A measure of an employee attribute identified through a job analysis as being important for job success
What are the steps in Concurrent Validation studies?
1. Conduct analyses of the job. 2. Determine relevant KSAs and other characteristics required to perform the job successfully. 3. Choose or develop the experimental predictors of these KSAs. 4. Select criteria of job success. 5. Administer predictors to current employees and collect criterion data. 6. Analyze predictor and criterion data relationships.
What are the steps in Predictive Validation studies?
1. Conduct analyses of the job. 2. Determine relevant KSAs and other characteristics required to perform the job successfully. 3. Choose or develop the experimental predictors of these KSAs. 4. Select criteria of job success. 5. Administer predictors to job applicants and file results. 6. After passage of a suitable period of time, collect criterion data. 7. Analyze predictor and criterion data relationships.
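Both the concurrent and predictive designs above end with the same analysis step: relating predictor scores to criterion scores, usually as a validity coefficient. A minimal sketch of that final step, using hypothetical predictor scores and supervisor ratings (all values are illustrative assumptions, not real data):

```python
# Minimal sketch of the final analysis step in a concurrent or predictive
# validation study: correlate predictor scores with criterion scores.
# The scores below are hypothetical illustrations.
from statistics import correlation  # available in Python 3.10+

predictor_scores = [52, 61, 47, 70, 58, 65, 43, 74, 60, 55]               # e.g., test scores
criterion_scores = [3.1, 3.8, 2.9, 4.2, 3.5, 3.9, 2.7, 4.5, 3.6, 3.2]     # e.g., performance ratings

validity = correlation(predictor_scores, criterion_scores)
print(f"Validity coefficient (Pearson r): {validity:.2f}")
```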
What are the steps in Content Validation?
1. Conduct of a Comprehensive Job Analysis • A description of the tasks performed on the job • Measures of the criticality and/or importance of the tasks • A specification of KSAs required to perform these critical tasks • Measures of the criticality and/or importance of KSAs • Linkage of important job tasks to important KSAs 2. Selection of Experts Participating in Study 3. Specification of Selection Measure Content • Selection Procedure as a Whole • Item-by-Item Analysis • Supplementary Indications of Content Validity 4. Assessment of Selection Measure and Job Content Relevance
What are the steps in the selection process?
1. Job analysis 2. Identification of job performance dimensions 3. Determination of necessary KSAs 4. Selection of relevant assessment techniques to measure KSAs 5. Selection of subjects for validation study 6. Validation 7. Utility of selection tool 8. Process applicants
What are the steps for implementing a Job Analysis Program?
1. Organizing the job analysis 2. Choosing the jobs to be analyzed 3. Reviewing available literature 4. Selecting job agents -- Job analyst -- Subject Matter Experts: Incumbent, Supervisor 5. Collecting job information -- Direct Observation and Job Performance -- Interview -- Questionnaires -- Critical Incidents -- Self-report
What are the steps for developing content for task inventories?
1. Technical manuals, previous job analyses, and other job-related reports are reviewed for possible task-item content 2. Technical job experts (consultants, selected incumbents/supervisors) prepare lists of job tasks known to be performed 3. Interviews are held with job incumbents and supervisors in order to identify additional tasks 4. Tasks identified are reviewed for duplication, edited, and incorporated into an initial version of the inventory 5. First draft is prepared and submitted to a panel of experts (or incumbents and/or supervisors) for review 6. Panel of reviewers adds, deletes, or modifies tasks for the development of another draft of the inventory 7. Steps 5 and 6 are repeated, using the same or a similar panel, until an acceptable draft has been developed 8. Task inventory is then pilot-tested on a sample of respondents to whom the final version will be given 9. Appropriate modifications are made as needed. 10. Steps 8 and 9 are repeated until a final, acceptable version is developed.
What is selection and what are the objectives?
A process of collecting and evaluating relevant information in order to choose a subset of available applicants for employment Objectives: • Maximize the probability of making correct selection decisions • Minimize the chances of discrimination
What is Job Component Validity?
A process of inferring the validity of a predictor, based on existing evidence, for a particular dimension or component of job performance
What are the advantages and disadvantages of the Critical Incidents Technique?
Advantages • Creates a large amount of specific, job-related behavioral (not trait-based) information • Identifies "critical" incidents that are important aspects of the job Disadvantages • Incidents may not represent the full scope of the job • Analysts' judgments affect the stability of dimensions • Developmental process is labor intensive • Results are situation specific
What are the advantages and disadvantages of task inventories?
Advantages • Offer an efficient means for collecting data from large numbers of incumbents in geographically dispersed locations • Provide data that is readily quantifiable Disadvantages • Development can be expensive and time-consuming • Length and complexity can create respondent motivation problems • Ambiguities and respondent questions are not readily addressable during administration
What were the results of Spurlock v. United Airlines (1972)?
• College degree and experience requirements are shown to be job related • Company's burden of proof diminishes as human risks increase
What are the assumptions, advantages, disadvantages, and best uses of multiple cutoff?
Assumptions • A nonlinear relationship exists among the predictors and the criterion -- a minimum amount of each predictor attribute is necessary. • Predictors are not compensatory -- lack or deficiency in any one predictor attribute cannot be compensated for by having more of another Advantages • Candidates in pool are all minimally qualified • Easy to explain to managers Disadvantages • Requires assessing all applicants on all predictors • Identifies only the minimally-qualified candidates Best Uses • When physical abilities are essential for performance • When a minimum level of performance on each predictor is necessary to perform job safely (e.g., aircraft takeoffs and landings)
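A minimal sketch of a multiple-cutoff decision rule follows; the predictor names and cutoff values are hypothetical assumptions for illustration only:

```python
# Multiple cutoff (sketch): an applicant is acceptable only if every
# predictor score meets or exceeds its minimum cutoff.
cutoffs = {"strength_test": 40, "work_sample": 60, "interview": 3}   # hypothetical cutoffs

applicants = {
    "A": {"strength_test": 55, "work_sample": 72, "interview": 4},
    "B": {"strength_test": 38, "work_sample": 90, "interview": 5},   # fails strength cutoff
}

for name, scores in applicants.items():
    qualified = all(scores[p] >= c for p, c in cutoffs.items())
    print(name, "minimally qualified" if qualified else "rejected")
```

Note that the rule is non-compensatory: applicant B's very high work-sample score cannot offset the below-cutoff strength score.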
What are the assumptions, advantages, disadvantages, and best uses of multiple hurdle?
Assumptions • Each applicant must sequentially meet the minimum cutoff or hurdle for each predictor before going on to the next predictor • Failure to pass a hurdle eliminates the applicant from further consideration. • An alternative procedure is to calculate a composite multiple regression score for each applicant at each successive hurdle -- whenever the probability of success for an applicant drops below some set value, the applicant is rejected. Advantages • All candidates are minimally qualified on predictors • Do not have to apply all predictors to all candidates Disadvantages • Establishing validity for each predictor • Time required to apply predictors sequentially Best Uses • When extensive post-hiring training is required • When an essential KSA must be present at hiring • With numerous applicants and expensive procedures
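A minimal sketch of the sequential hurdle logic; the hurdle order, predictor names, and cutoffs are hypothetical:

```python
# Multiple hurdle (sketch): predictors are applied in sequence; an applicant
# who fails a hurdle is dropped and never takes the later (often more
# expensive) predictors.
hurdles = [("application_blank", 50), ("ability_test", 60), ("assessment_center", 70)]

def passes_hurdles(scores: dict) -> bool:
    for predictor, cutoff in hurdles:
        if scores.get(predictor, 0) < cutoff:
            return False          # eliminated; remaining predictors not administered
    return True

applicant = {"application_blank": 80, "ability_test": 55}   # fails the second hurdle
print(passes_hurdles(applicant))  # False
```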
What are the assumptions and advantages of multiple regression?
Assumptions • The predictors are linearly related to the criterion. • The predictors are additive and can compensate for one another. Advantages • Minimizes errors in prediction and combines the predictors to yield the best estimate of criterion status • Is a very flexible method that can be modified to handle nominal data, nonlinear relationships, and both linear and nonlinear interactions
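A minimal sketch of the compensatory, additive combination implied by multiple regression; the predictor and criterion data are hypothetical, and the weights would normally come from a validation sample:

```python
# Multiple regression (sketch): predictors are combined additively, so a high
# score on one predictor can compensate for a low score on another.
import numpy as np

# columns: cognitive test score, structured interview rating; rows: current employees
X = np.array([[55, 3], [70, 4], [48, 2], [62, 5], [75, 4], [50, 3]], dtype=float)
y = np.array([3.2, 4.1, 2.8, 4.0, 4.4, 3.0])     # criterion (performance ratings)

X1 = np.column_stack([np.ones(len(X)), X])       # add intercept column
b, *_ = np.linalg.lstsq(X1, y, rcond=None)       # least-squares regression weights

applicant = np.array([1, 65, 4])                 # intercept term + predictor scores
print("Predicted criterion score:", round(float(applicant @ b), 2))
```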
What are the assumptions, advantages, disadvantages, and best uses of the combination method?
Assumptions • There are minimum cutoffs for each predictor • Multiple regression creates an overall score Advantage • Identifies and rank orders the acceptable candidates Disadvantage • All predictors must be applied to all applicants Best Use • When multiple cutoffs are acceptable, predictors can compensate, the applicant pool is small, and predictors have about equal costs
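A minimal sketch of the combination method, screening on cutoffs and then rank-ordering survivors on a regression composite; cutoffs and regression weights are hypothetical:

```python
# Combination method (sketch): screen out applicants who miss any minimum
# cutoff, then rank the remaining applicants on a regression-based composite.
cutoffs = {"test": 50, "interview": 3}
weights = {"test": 0.04, "interview": 0.60}      # hypothetical regression weights
intercept = 0.5

applicants = {
    "A": {"test": 72, "interview": 4},
    "B": {"test": 49, "interview": 5},           # screened out: misses test cutoff
    "C": {"test": 80, "interview": 3},
}

eligible = {n: s for n, s in applicants.items()
            if all(s[p] >= c for p, c in cutoffs.items())}
composite = {n: intercept + sum(weights[p] * s[p] for p in weights)
             for n, s in eligible.items()}
ranking = sorted(composite, key=composite.get, reverse=True)
print("Rank order of acceptable candidates:", ranking)
```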
What is content validity in selection and why conduct content validation?
Content validity • When the content (items, questions, etc.) of a selection measure representatively samples the content of the job for which the measure will be used Why use? • Is applicable to hiring situations involving a small number of applicants for a position • Focuses on job content, not job success criteria • Increases applicant perception of the fairness of selection procedures
Regarding disparate impact, what is the Burden-of-Proof Defenses for Employers?
Defendant must demonstrate at least one of the following: • Business necessity -- application is limited to the safety of workers and customers • Bona fide occupational qualification (BFOQ) -- disqualification of a demographic group from employment because all or substantially all members of that group could not adequately or appropriately perform the job • Validity -- a plausible business reason: demonstrated job-relatedness of the selection procedure
What are some legal cases in which discrimination in interviewing was not found?
Harless v. Duck (1977) • Structured questionnaire • Questions based on job analysis • Relationship between interview performance and training Maine Human Rights Commission v. Department of Corrections (1984) • Measurement of personality-related variables permitted • KSAs listed • Formal scoring system used Minneapolis Commission on Civil Rights v. Minnesota Chemical Dependency Assoc. (1981) • Permissible to use subjective measures of certain applicant characteristics that cannot be fully measured with objective tests • Additional questions asked only of this applicant were appropriate • Qualifications for job were posted • A set of formal questions was asked of all applicants
What is the Immigration Reform and Control Act (1986)?
Makes it illegal for an employer to discriminate with respect to hiring, firing, or recruitment or referral for a fee, based upon an individual's citizenship or immigration status.
What are the three phases of the interview model?
Pre-interview Phase • Interviewer differences in ability, motivation, expectations, beliefs, needs • Prior information about candidate, job, labor market • Candidate differences in ability, motivations, etc. Interview Phase • Social interaction between interviewer and applicant Post-interview Phase • Gathering/processing of information • Interviewer assessment of candidate: valid, reliable, fair? • Candidate assessment of organization
What is recruitment and what are the objectives?
The process of making applicants available for hire at minimal cost Objectives: • Develop an appropriate number of applicants while keeping costs reasonable • Meet the organization's legal and social obligations regarding the demographic composition of its workforce • Help increase the success rate of the selection process by reducing the percentage of applicants who are either poorly qualified or have the wrong skills
What is Title VII of the Civil Rights Act (1964)?
Prohibits discrimination on the basis of race, sex, religion, color, national origin
What is the purpose, characteristics, application, and implementation of the Critical Incidents Technique?
Purpose • To generate a work-oriented list of observed good and poor (critical) job performance behaviors (incidents) of job incumbents to be grouped into job dimensions Characteristics • Focuses on a specific (single) observable behavior that has been, or could be, exhibited on the job • Briefly describes the context in which the behavior occurred • Indicates the consequences of the behavior Application • To generate a list of job-related behaviors on which to base inferences regarding worker specifications • To determine how to measure worker specifications that are consistent with what occurs on the job Implementation 1. Selecting the method for critical-incidents collection 2. Selecting a panel of job experts 3. Gathering critical incidents 4. Rating and classifying critical incidents into job dimensions
What are Job Analysis questionnaires?
Respondents are asked to make judgments (e.g., via rating scales) to indicate the degree to which various aspects of job information listed on the questionnaire apply to their jobs (e.g. activities, tasks, tools/equipment, working conditions, KSAs)
What are some legal cases in which discrimination in interviewing was found?
Stamps v. Detroit Edison (1973) • All interviewers were white • Interviewers made subjective judgments about applicant's personality • No structured or written interview format • No objective criteria for employment decisions Weiner v. County of Oakland (1976) • All interviewers were male • Interview questions suggested bias against females • Selection decision rule not clearly specified King v. TWA (1984) • Female applicant did not receive same questions as males • History of interviewer's gender bias Robbins v. White-Wilson Medical Clinic (1981) • No guidelines for conducting or scoring interview • Interviewer's evaluation seemed racially biased based on own comments Gilbert v. City of Little Rock, AR (1986) • Content validity inappropriate defense for measurement of mental processes • Failure to operationally define KSAs • Dissimilarity between exam questions and actual work situations Bailey et al. v. Southeastern Area Joint Apprenticeship (1983) • Content of questions discriminatory toward women • Defense did not conform with EEOC Uniform Guidelines • Unclear instructions for rating applicant performance Jones v. Mississippi Department of Corrections (1985) • Little evidence of specific questions used • No scoring standards • No cutoff score for selection
What are some guidelines for identifying task statements?
Task statements should: • Characterize activities, not skills or knowledge • Have an identifiable beginning and ending • Represent activities performed by an individual worker, not activities performed by different individuals • Have an identifiable output or consequence • Avoid extremes in the phrasing of activities; statements should not be too broad or too specific • Be developed by full-time inventory writers (preferably); supervisors/incumbents should serve as technical advisers
What are some guidelines for writing task statements?
Task statements should: • Mean the same thing to all respondents • Be stated so that the rating scale to be used makes sense • Be stated so that the incumbent is understood to be the subject of the statement. The pronoun "I" should be implied. For example "(I) number all card boxes." • Be stated so that an action verb is in the present tense • Be stated so that the action verb has an object • Use terms that are specific, familiar, and unambiguous • Specify what the worker does, to whom or what he or she does it, what is produced (output of action), and what materials, tools, procedures, or equipment are used
What are the requirements for predictors?
They must be: • Relevant to the job • Appropriate ways to measure the employee attributes identified as critical to job success
What is Validity Generalization?
Uses evidence accumulated from multiple validation studies to show the extent to which a predictor that is valid in one setting is valid in other, similar settings
What is face validity?
• Concerns the appearance to job applicants of whether a measure is measuring what is intended • The appearance to applicants taking the test that the test is related to the job • Increases acceptability of a measure
What are factors influencing applicant attraction to an organization/position?
• Inducements offered by the firm (salary, benefits, flexibility, career path, etc.) • Firm's targeted applicant pool (education level, experience, applicant demographics, etc.) • External labor market conditions • Job and organizational characteristics
With regard to disparate treatment, what is the McDonnell Douglas Rule?
• A guideline for establishing a prima facie case • The plaintiff must show that the following conditions exist: (a) he or she belongs to a protected class, (b) he or she applied and was qualified for a job for which the company was seeking applicants, (c) despite these qualifications, he or she was rejected, (d) after this rejection, the position remained open and the employer continued to seek applicants from persons with the complainant's qualifications
What is a job analysis?
• A purposeful, systematic process for collecting information on the important work-related aspects of a job • Data is collected from incumbents or supervisors by a trained analyst asking questions about the duties and responsibilities, KSAs required, and equipment and/or conditions of employment for a job or class of jobs Purposes • To collect job information on job tasks that will serve as a basis for developing other job analysis measures, such as a job analysis questionnaire • To serve as a means for clarifying or verifying information collected previously through other job analysis methods • To serve as a method for collecting relevant job data for developing a selection system
What is a Task Analysis Inventory and its purpose?
• A questionnaire composed of a listing of tasks for which respondents make judgments using a task rating scale, such as frequency of task performance • Aids in content validation of selection measures
What were the results of Ricci v. DeStefano (2009)?
• Adverse impact present as blacks scored lower on tests than whites and Hispanics • Discrimination directed toward whites and Hispanics • Threat of a lawsuit is not a defense for disregarding job-related selection tests • Adverse impact can be defended by the job-relatedness of selection tests
What are examples of recruitment sources?
• Advertisements • Agencies and organizations (e.g. search firms and job fairs) • Professional organizations • Inside organizational resources (e.g. company website, employee referrals) • Other (e.g. job boards, walk-ins/unsolicited resumes)
What is the Age Discrimination in Employment Act (ADEA) (1967)?
• Age discrimination involves treating someone (an applicant or employee) less favorably because of his or her age. • Only forbids age discrimination against people who are age 40 or older.
What were the results of Watson v. Ft. Worth Bank & Trust (1988)?
• Cases focusing on subjective selection devices, such as interviews and judgments, could be heard as disparate impact • Organization may need to validate interview in same manner as objective test
What were the results of Gross v. FBL Financial Services (2009)?
• Central question - how much evidence must plaintiff produce in age discrimination claim to force defendant to provide evidence that it did not use age in decision • Plaintiff must provide clear evidence that age was "but-for" reason in decision • Ruling significantly increases the amount of evidence that plaintiff must provide to obtain judgment that age discrimination occurred
What is the process for Behavior Description Interviews?
• Conduct a job analysis using the Critical Incidents Technique to identify examples of good and poor job performance • Sort incidents into groups of similar behaviors (behavioral dimensions) • Identify each dimension as describing either maximum or typical performance of the individual • Develop questions and probes (follow-up questions) for both experienced and inexperienced applicants • Score applicants by rank-ordering them on each dimension, then total individual scores.
What is the process for Situational Interviews?
• Conduct a job analysis using the Critical Incidents Technique to identify examples of good and poor job performance • Sort incidents into groups of similar behaviors (behavioral dimensions) • Select the most appropriate incidents and write related interview questions • Develop response scales for each question • Applicant scores are derived by summing their ratings on each scale
What is the process for conducting a Job Component Validity study?
• Conduct an analysis of the job using the Position Analysis Questionnaire (PAQ) • Identify the major components of work required on the job • Identify the attributes required for performing the major components of the job • Choose tests that measure the most important attributes identified from the PAQ analysis
What is construct validity and construct validation?
• Construct: a postulated concept, attribute, characteristic, or quality thought to be assessed by a measure • Construct validity: the degree to which a measure actually assesses the construct it is intended to measure • Construct validation: a research process involving the collection of evidence used to test hypotheses about relationships between measures and their constructs
What were the results of Rudder v. District of Columbia (1995)?
• Content validity is acceptable defense for adverse impact • Job analysis, ensuring adequate representation of minority groups in data collection, is essential • Clear links must be shown between job analysis information, test questions, and correct answers • Attention to test security and administration are important
What factors are involved in the standardization of selection measures?
• Content: all persons assessed are measured by the same information or content, including the same format • Administration: information is collected the same way in all locations and across all administrators, each time the selection measure is applied • Scoring: rules for scoring are specified before administering the measure and are applied the same way with each application
What are the limitations of work samples/performance tests?
• Difficulty of creating work samples that are representative of job activities • Reliance on the assumption that applicants already possess the KSAs needed to complete the job behavior • Costs of the time, materials, and equipment required to develop and administer performance tests
What are the major types of validity relevant to selection?
• Criterion-related: inference about performance on a future job (predictive and concurrent designs) • Content: inference regarding the extent to which the candidate possesses job-related proficiency or knowledge • Construct: inference regarding the degree to which a candidate possesses a trait or other characteristic
What are the types of choices that need to be made in conducting a work analysis?
• Descriptor type • Methods • Rating scales • Sources of data
What are the forms of discrimination?
• Disparate treatment: Situations in which different standards are applied to various groups of individuals even though there may not be an explicit statement of intentional prejudice. • Disparate impact: Organizational selection standards are applied uniformly to all groups of applicants, but the net result of these standards is to produce differences in the selection of various groups.
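A common screen for whether uniformly applied standards produce disparate impact is the four-fifths (80%) rule from the EEOC Uniform Guidelines, which compares group selection rates. A minimal sketch with hypothetical applicant and hire counts:

```python
# Four-fifths (80%) rule sketch: adverse impact is suggested when a group's
# selection rate is less than 80% of the highest group's selection rate.
# Applicant and hire counts are hypothetical.
groups = {"group_1": (120, 48), "group_2": (80, 20)}   # (applicants, hired)

rates = {g: hired / applicants for g, (applicants, hired) in groups.items()}
highest = max(rates.values())

for g, rate in rates.items():
    ratio = rate / highest
    flag = "possible adverse impact" if ratio < 0.80 else "no adverse-impact flag"
    print(f"{g}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {flag}")
```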
What are the broad types of work samples?
• Evaluations of prior training and past behaviors • Evaluations of demonstrable present capabilities
What are some important questions in making selection decisions?
• For a specific selection situation, what are the best methods for collecting predictor information on job applicants? • How should scores be combined or added together from multiple predictors to give applicants an overall or total score for selection decision-making purposes? • Once a total score on two or more predictors is obtained, how should this overall score be used to make selection decisions?
What are the types of rating scales used in work analyses?
• Frequency • Importance • Criticality (consequence of error) • Task difficulty • Required on entry • Level of attribute required (i.e. what level of ability/skill is needed)
What task-rating categories should be considered in content validation studies?
• Frequency of task performance • Task importance or criticality • Task difficulty • Whether the task can be learned on the job relatively quickly
What behavioral dimensions are frequently measured in structured interviews?
• General intelligence • Job knowledge and skills • Personality (disposition, Big Five) • Applied social skills • Interests and preferences (preference for work environments, work styles, profession, topics) • Organizational fit (values, goals, culture/climate) • Physical attributes
What are some factors that affect the choices that need to be made in conducting a work analysis?
• How the information will be used • Cost • Quality • Legal defensibility
What are the uses of job analysis data?
• Identify employee specifications (KSAs) necessary for success on a job • Select or develop predictors that assess important KSAs and can be administered to job applicants and used to forecast those employees who are likely to be successful on the job • Develop criteria or standards of performance that employees must meet in order to be considered successful on a job
What were the results of OFCCP v. Ozark Air Lines (1986)?
• In disability cases, organization must prove that individual cannot perform job • Reasonable accommodation must be given to disabled individual
What are some sources of error in measurement?
• Individual responding to a selection measure (e.g. mood, motivation, stress, understanding of instructions, content of items) • Individual administering selection measure • Individual scoring selection measure (i.e. judgment, subjectivity and bias) • Physical conditions under which selection measure is administered (e.g. heat, lighting, noise, etc.)
What are the six aspects of work analysis data quality, according to Morgeson and Campion (1997)?
• Interrater reliability • Interrater agreement • Discriminability between jobs (between-job variance) • Dimensionality of factor structures (i.e. extent to which factor structures are complex and multidimensional) • Mean ratings (could reflect inappropriately elevated or depressed ratings) • Completeness (lack of criterion contamination/deficiency)
What is the Genetic Information Nondiscrimination Act of 2008?
• It is illegal to discriminate against employees or applicants because of genetic information. • Prohibits the use of genetic information in making employment decisions • Restricts employers and other entities covered by Title II (employment agencies, labor organizations and joint labor-management training and apprenticeship programs - referred to as "covered entities") from requesting, requiring or purchasing genetic information, and strictly limits the disclosure of genetic information
What is the relationship between reliability and validity?
• It is possible for a measure to be reliable but not valid • It is not possible for a measure to be valid but not reliable
What were the results of Griggs v. Duke Power (1971)?
• Lack of discriminatory intent not sufficient defense • Selection test must be job related if adverse impact results • Employer bears burden of proof in face of apparent adverse impact
What are the limitations of the Job Analysis Interview?
• Lack of standardization in collection process • Limited possibilities for covering large numbers of respondents • Documentation for each individual interview • Time requirements to conduct individual interviews • Costs of individually interviewing respondents • Variations in interviewer skills • Distortions in the information collected from interviewees
What are the general requirements of sound validity inferences?
• Measurement reliability and validity • Representative samples • Appropriate analysis techniques • Control over plausible confounding factors
When is content validation not appropriate?
• When the selection device measures mental processes, psychological constructs, or personality traits that are not directly observable but must be inferred • When the selection procedure involves KSAs an employee is expected to learn on the job • When the content of the selection device does not resemble a work behavior, or when the setting and administration of the selection procedure does not resemble the work setting
What are some mechanical methods for combining predictor information?
• Multiple regression • Multiple cutoffs • Multiple hurdles • Combination method
What are the most common methods of data collection in a work analysis?
• Observation • Individual interviews w/ multiple types of respondents (e.g. workers, supervisors) • Group meetings (w/ subject-matter experts, including workers, supervisors, technical experts) • Questionnaires
What are four approaches to increasing minority representation (ranging from slow to fast)?
• Passive non-discrimination: all applicants are treated equally in selection decisions, but no special effort is made to recruit members of target groups • Active recruiting: focused recruitment or training directed toward target groups, with selection decisions themselves remaining non-preferential • Minority preference: members of target groups are given the advantage only in selection situations in which applicants are tied • Proportionate hiring: preference is given to target group members even if they have inferior qualifications
What are work samples?
• Performance tests: Applicants complete some task under structured testing conditions. • Applicant completes a representative part of the job for which he or she is being evaluated • Provides direct evidence of the applicant's ability and skill to work on the job. Less susceptible to faking.
What applicant characteristics are assessed by interviews?
• Personality • Applied social skills • Mental ability • Knowledge and skills (if jobs require verbal communication of knowledge)
Interviews can vary in...
• Position in the employment process • Bandwidth and depth • Structure • Role of interviewer • Job content vs. worker trait focused
What is the Americans with Disabilities Act (1990)?
• Prohibits discrimination against a qualified person with a disability • A disabled individual (a) has a physical or mental impairment that substantially limits one or more major life activities, (b) has a record of such an impairment, or (c) is regarded as having such an impairment. • 900 disabilities covered -- must limit major life activity or have an association with a person with a disability • Excluded groups include homosexuals and bisexuals, transvestites, transsexuals, pedophiles, exhibitionists, voyeurs, and those with other sexual behavior disorders; compulsive gamblers, kleptomaniacs, pyromaniacs, and those currently using illegal drugs. • Qualified -- with or without reasonable accommodation, can perform the essential functions of the job (not marginal functions) • Reasonable Accommodations -- it is the responsibility of the person with a disability to ask for reasonable accommodation • An organization is required to make changes in the work process for an otherwise qualified individual with a disability unless it would pose "undue hardship" • Undue Hardship Exception: (a) nature and cost of the accommodation, and (b) ability of the parent employer to bear the costs • Testing should reflect the skill/aptitude being tested, not the disability (e.g., use of interpreters) • Medical examinations must be job-related and consistent with business necessity, conducted only after a conditional job offer, and required of all employees; no pre-employment inquiries about disability; drug testing is not a medical examination; physical (non-medical) testing is permitted
What are some of the sources of variance in work analysis data?
• Rater characteristics (e.g. general cognitive ability, personality, work experience, level in organization) • Social influences: group (e.g. conformity pressures, extremity shifts/group polarization, motivation loss in groups) and self-presentation (e.g. impression management, social desirability, demand characteristics) • Cognitive sources: limitations in information-processing (e.g. information overload, heuristics, categorization) and biases (e.g. carelessness, extraneous information, inadequate information, primacy/recency effects, contrast effects, halo effects, leniency/severity) • Contextual influences (e.g. features of the role, tasks, the organization, etc.)
What are the three major types of descriptors used in work analysis?
• Requirements of the work itself (tasks performed and general work responsibilities) • Worker requirements (KSA's and other characteristics such as personality, education, experience, licensure, certification, etc.) • Work context
What are some of the purposes for conducting work analyses?
• Selection system development • Job and team design • Performance management systems • Training system development • Compensation system development • Career management systems
What is utility analysis?
• Shows the degree to which use of a selection measure improves the quality of individuals selected versus what would have happened if the measure had not been used • Uses dollars-and-cents terms as well as other measures such as percentage increases in output, to translate the results of a validation study into terms that are important to and understandable by managers • Expressed in terms of money saved
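One widely cited way to translate a validation result into dollar terms is the Brogden-Cronbach-Gleser utility model. A minimal sketch, with every input a hypothetical illustration:

```python
# Brogden-Cronbach-Gleser utility sketch (one common formulation):
#   gain = n_selected * tenure_years * validity * sd_dollars * mean_z_selected
#          - n_applicants * cost_per_applicant
# All input values are hypothetical.
n_selected = 20          # hires made using the new predictor
tenure_years = 2.0       # expected average tenure of those hires
validity = 0.35          # predictor-criterion correlation
sd_dollars = 12_000      # SD of job performance expressed in dollars (SDy)
mean_z_selected = 0.9    # average standardized predictor score of those hired
n_applicants = 200       # applicants assessed with the predictor
cost_per_applicant = 50  # cost of administering the predictor per applicant

gain = (n_selected * tenure_years * validity * sd_dollars * mean_z_selected
        - n_applicants * cost_per_applicant)
print(f"Estimated utility gain: ${gain:,.0f}")
```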
What are some methods for assessing a measure's internal consistency?
• Split-half reliability • Kuder-Richardson reliability • Cronbach's coefficient alpha (α) reliability
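A minimal sketch of coefficient (Cronbach's) alpha computed from an item-by-respondent score matrix; the 5-item, 6-respondent response matrix is hypothetical:

```python
# Coefficient (Cronbach's) alpha sketch:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
import numpy as np

items = np.array([            # rows = respondents, columns = items (hypothetical)
    [4, 3, 4, 5, 4],
    [2, 2, 3, 2, 3],
    [5, 4, 5, 5, 4],
    [3, 3, 2, 3, 3],
    [4, 4, 4, 4, 5],
    [1, 2, 2, 1, 2],
], dtype=float)

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)       # variance of each item
total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")
```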
What are the types of Job Analysis questionnaires?
• Tailored Questionnaires: measures developed by an organization (or its consultants) for a specific purpose or for application to a specific job • Prefabricated or Existing Questionnaires: generic measures developed for use with a variety of jobs that usually consist of a preestablished set of items describing aspects of a job that respondents judge using a rating scale (e.g. PAQ)
What are the three types of work contexts examined in work analyses?
• Task context: structural and informational conditions, such as task clarity, autonomy, available resources, etc. • Social context: forms of communication, interdependence, interpersonal relations/conflict • Physical context: elements of material space/environment, such as noise, lighting, temperature, hazardous work conditions (e.g. radiation), physiological demands (e.g. lifting, frequent standing, climbing, etc.)
What are some methods for assessing the reliability of a selection measure?
• Test-retest: correlation between scores on two administrations of the same measure • Parallel or equivalent forms: administering two equivalent versions (forms with different items but assessing the same measure) of a measure to the same respondent group • Internal consistency: an index of a measure that shows the extent to which all parts of a measure are similar in what they measure • Interrater reliability estimates
What is reliability in selection?
• The degree of dependability, consistency, or stability of scores on a measure (either predictors or criteria) used in selection research. • A perfectly reliable test is free from unsystematic errors of measurement. • Xobtained = Xtrue + Xerror -- Xobtained = obtained score for a person on a measure -- Xtrue = true score on the measure, that is, actual amount of the attribute measured that a person really possesses -- Xerror = error score on the measure that is assumed to represent random fluctuations or chance factors.
What is standard error, and how is it used for measurement in selection?
• The estimated error in a particular individual's score on the measure • Shows that scores are an approximation represented by a band or range of scores on a measure • Aids decision making in which only one individual is involved • Can determine whether scores for individuals differ significantly from one another • Helps establish confidence in scores obtained from different groups of respondents.
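The standard error of measurement is commonly estimated as SEM = SD × sqrt(1 − reliability). A minimal sketch with hypothetical values, including the score band it implies around an individual's obtained score:

```python
# Standard error of measurement (SEM) sketch:
#   SEM = SD_x * sqrt(1 - reliability)
# The SD, reliability estimate, and observed score are hypothetical.
import math

sd_x = 10          # standard deviation of test scores
reliability = 0.91 # e.g., coefficient alpha or test-retest estimate
observed = 74      # one applicant's obtained score

sem = sd_x * math.sqrt(1 - reliability)
band = (observed - sem, observed + sem)   # approximate band around the obtained score
print(f"SEM = {sem:.1f}; score band: {band[0]:.1f} to {band[1]:.1f}")
```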
What are the guidelines for using norms?
• The norm group should be relevant for the purpose for which it is being used • Use local norms as opposed to norms based on national data • Norms are transitory • In using normative information, statistical methods are employed to aid interpretation of what a test score means
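A minimal sketch of interpreting a raw score against local norms by converting it to a standard (z) score and a percentile rank; the norm-group scores are hypothetical:

```python
# Interpreting a raw score against local norms (sketch): express the score as
# a z score and a percentile rank within the local norm group.
from statistics import mean, stdev

local_norms = [48, 52, 55, 57, 60, 61, 63, 66, 70, 75]   # hypothetical local norm group
raw = 66

z = (raw - mean(local_norms)) / stdev(local_norms)
percentile = 100 * sum(s < raw for s in local_norms) / len(local_norms)
print(f"z = {z:.2f}, percentile rank = {percentile:.0f}")
```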
What are some reasons an organization may adopt an affirmative action plan?
• The organization is a government contractor • Court order (losing a court discrimination case) or signing a consent decree: legally required to balance internal workforce with the relevant labor market; may require preferential treatment to minority groups to achieve goals or quotas within specific timetables • Voluntarily attempting to implement EEO principles
What is the definition of a work analysis?
• The systematic investigation of (a) work role requirements (of both the work and the worker) and (b) the broader context in which work roles are enacted • Work requirements include the tasks performed and the work activities of those performing the work • Worker requirements include KSA's and other characteristics that are needed to perform the work
What is cross-validation?
• The testing of regression equations for any reduction (shrinkage) in their predictive accuracy prior to implementation in selection decision making • If the regression equation developed on one sample can predict scores in the other sample, then the regression equation is "cross-validated"
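A minimal sketch of cross-validation: fit a regression equation on a development sample, then check how well it predicts criterion scores in an independent holdout sample. All data are hypothetical.

```python
# Cross-validation sketch: develop a regression equation on one sample and
# evaluate its predictive accuracy (and any shrinkage) on a second sample.
import numpy as np

def fit(X, y):
    X1 = np.column_stack([np.ones(len(X)), X])
    b, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return b

def predict(X, b):
    return np.column_stack([np.ones(len(X)), X]) @ b

X_dev = np.array([[55, 3], [70, 4], [48, 2], [62, 5], [75, 4], [50, 3]], float)
y_dev = np.array([3.2, 4.1, 2.8, 4.0, 4.4, 3.0])
X_hold = np.array([[60, 4], [45, 2], [68, 5], [52, 3]], float)
y_hold = np.array([3.6, 2.6, 4.2, 3.1])

b = fit(X_dev, y_dev)
r_holdout = np.corrcoef(predict(X_hold, b), y_hold)[0, 1]
print(f"Cross-validity (holdout r): {r_holdout:.2f}")   # compare with the development-sample r
```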
List the major Equal Employment Opportunity (EEO) laws and executive orders regarding selection.
• Title VII Civil Rights Act of 1964 • Civil Rights Act of 1991 • Executive Order No. 11246 • Age Discrimination in Employment Act (ADEA) of 1967 • The Rehabilitation Act of 1973 • Americans with Disabilities Act (ADA) of 1990 • ADA Amendments Act of 2008 • Immigration Reform and Control Act of 1986 • Genetic Information Nondiscrimination Act of 2008 • U.S. Constitution 5th Amendment • U.S. Constitution 14th Amendment • Civil Rights Act of 1866 • Civil Rights Act of 1871
What are some strategies for generalizing validation evidence?
• Transportability: the use of a specific selection procedure in a new situation based on results of a validation research study conducted elsewhere • Synthetic validity/job component validity: the justification of the use of a selection procedure based upon the demonstrated validity of inferences from scores on the selection procedure with respect to one or more domains of work (job components) • Meta-analytic validity generalization: the accumulation of findings from a number of validity studies to determine the best estimates of the predictor-criterion relationship for the kinds of work domains and settings included in the studies
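A bare-bones sketch of the meta-analytic step, in the spirit of a Hunter-Schmidt "bare bones" analysis: compute a sample-size-weighted mean validity and compare the observed variance of validities with the variance expected from sampling error alone. The study validities and sample sizes are hypothetical, and corrections for artifacts such as unreliability and range restriction are omitted.

```python
# Bare-bones validity generalization sketch (no artifact corrections).
studies = [(0.28, 80), (0.35, 120), (0.22, 60), (0.40, 150)]   # hypothetical (r, N) pairs

total_n = sum(n for _, n in studies)
mean_r = sum(r * n for r, n in studies) / total_n               # weighted mean validity
obs_var = sum(n * (r - mean_r) ** 2 for r, n in studies) / total_n
mean_n = total_n / len(studies)
sampling_var = (1 - mean_r ** 2) ** 2 / (mean_n - 1)            # expected sampling-error variance
residual_var = max(obs_var - sampling_var, 0.0)

print(f"Weighted mean validity: {mean_r:.2f}")
print(f"Observed variance: {obs_var:.4f}; expected from sampling error: {sampling_var:.4f}")
print(f"Residual variance (possible true variation across settings): {residual_var:.4f}")
```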
What are some selection decision errors?
• True positives/negatives • False positives (erroneous acceptances): appear acceptable but, once hired, perform poorly • False negatives (erroneous rejections): initially appear unacceptable but would have performed successfully if hired
What are methods for developing interview questions?
• Use job analysis information to identify general or fundamental KSAs that an applicant must possess and for which the organization does not provide training. • Use "job experts" to identify the most important of these characteristics. • Use a modified Critical-Incidents Technique to identify questions.
What were the results of U.S. v. Georgia Power (1973)?
• Validation strategy must comply with EEOC guidelines • Validation must include affected groups • Validation must reflect selection decision practices • Testing must occur under standardized conditions
What is validity and validation, and why is it important for selection?
• Validity: the degree to which available evidence supports inferences made from scores on selection measures. • Validation: the research process of discovering what and how well a selection procedure measures • Importance: shows how well a predictor (such as a test) is related to important job performance criteria; can indicate what types of inferences may be made from scores obtained on the measurement device
What are realistic job previews?
• When applicants are given both the negative and positive aspects of the job • Previews should be credible, accurate, specific, broad, and important
What are the aspects of a recruitment strategy?
• Which recruitment activities the organization will use • When and how these will be done • Whom to use as recruiters • What theme or message to convey
What are the common sources of work analysis data?
• Written documentation (e.g. job descriptions, previous work analyses, training manuals, operating guides) • Job incumbents • Supervisors (immediate or higher-level) • Clients • Job analysts (e.g. HR professionals)
What is affirmative action and affirmative action plans?
• Affirmative action: specific actions taken by an organization to actively seek out and remove unintended barriers to fair treatment in order to achieve equal opportunity. • Affirmative action plan: (a) a written document that explicitly states steps to be taken, information to be gathered, and the general baseline for decision making for each area of HRM; (b) a guideline for actions to ensure that EEO principles are implemented within the organization.
What are some guidelines for interpreting individual scores on a selection measure?
• The difference between two individuals' scores should not be considered significant unless the difference is at least twice the standard error of measurement of the measure. • Before the difference between scores of the same individual on two different measures should be treated as significant, the difference should be greater than twice the standard error of measurement of either measure.
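As a worked example with hypothetical figures: if a test's standard error of measurement is 3 points, two applicants scoring 74 and 78 differ by only 4 points, which is less than 2 × 3 = 6, so that difference should not be treated as significant.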