Personnel Selection

Organizational citizenship behaviors (Borman & Motowidlo's contextual performance; Organ, 1997, on OCBs)

"Extra-role"/discretionary behaviors performed by an individual that are not part of their job requirement and often go unrewarded by the formal system. E.g. volunteering to help a coworker.

360 degree feedback

360-degree feedback gathers evaluations of an employee from multiple sources (e.g., supervisors, subordinates, and peers), typically via a questionnaire requesting a number of ratings on work behaviors and work results. Because this type of feedback is based on the opinions of others, it is often considered judgmental data in selection. Several research questions concerning 360-degree feedback still need to be addressed, such as whether scores should be averaged across or within sources.

Validity generalization

A meta-analytic technique that evaluates past predictor-criterion relationships to assess the extent to which a predictor that is valid in one setting is valid in other, similar settings. It tests two hypotheses: the situational specificity hypothesis and the validity generalization hypothesis. The situational specificity hypothesis suggests that the validity of a test is specific to the job situation in which validity has been established. The validity generalization hypothesis suggests that the validity of a test generalizes from one situation to another that is very similar (i.e., the same type of test and job on which validity evidence has been accumulated). From this perspective, validity differences across situations can be largely explained by methodological deficiencies (e.g., sampling error, predictor unreliability, criterion unreliability, restriction of range, criterion contamination and deficiency, and computational/typographical errors).

Situational judgment test

A situational judgment test is a low-fidelity simulation of important work tasks presented in a multiple-choice format. The test usually depicts situational dilemmas related to the competencies required for the job; individuals are presented with relevant scenarios and possible responses to each scenario. However, SJTs tend to be job specific and thus not very appropriate for training and development. SJTs can be either knowledge-based (technical information is required) or behavior-based (asking participants what they should or would do); faking may be more common on behavior-based SJTs. For instance, a job applicant for a sales position might answer questions about scenarios depicting angry customers, with plausible options as responses, and would have to choose the option that represents good judgment in that situation.

Utility analysis

A utility analysis attempts to translate the results of a validation study into terms that managers find important and easy to understand. It summarizes the overall usefulness of a selection measure or selection system, using metrics such as dollars and cents to show the degree to which a selection measure improves the quality of the individuals selected compared to what would have happened had the measure not been used. If an I/O psychologist wanted to convince an organization to use a validated test in its selection practices, she or he could use a utility analysis to show the expected net gain per year, in monetary units, from selecting employees with this measure compared to the current method of selection.
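
One widely cited way to make this concrete (not stated on this card) is the Brogden-Cronbach-Gleser utility formula, ΔU = N × T × r × SDy × z̄ − cost of testing. A minimal Python sketch, with all figures hypothetical:

```python
# Hypothetical illustration of the Brogden-Cronbach-Gleser utility formula:
# delta_U = N * T * r_xy * SD_y * z_x - (applicants tested * cost per applicant)

n_hired = 20            # employees hired per year (hypothetical)
tenure_years = 3        # average tenure of those hired (T)
validity = 0.40         # validity coefficient of the new predictor (r_xy)
sd_y = 15000.0          # SD of job performance in dollars (SD_y, estimated)
mean_z_hired = 0.80     # average predictor z-score of those hired (z_x)
n_applicants = 200      # applicants tested
cost_per_applicant = 50.0

gain = n_hired * tenure_years * validity * sd_y * mean_z_hired
cost = n_applicants * cost_per_applicant
print(f"Expected net utility gain: ${gain - cost:,.2f}")
```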

Acquiescence

Acquiescence is the tendency to agree with items regardless of content. Responses to positively and negatively worded forms of an item should logically be negatively correlated; with this bias, however, people tend to respond the same way no matter which direction an item is worded in. Acquiescence is a type of response bias. How best to address acquiescence remains unclear, with some researchers arguing for reverse-coding some scale items and others arguing for scales designed to assess levels of acquiescence.

Adverse impact

Adverse impact describes an effect of a selection test or procedure. Specifically, adverse impact exists when a selection procedure or test differentially impacts different applicant groups (e.g., makes it less likely that members of certain groups, such as protected groups, will be hired). The four-fifths rule is commonly used as evidence of adverse impact: adverse impact is present if the selection ratio of a protected group is less than 80% of the selection ratio of the advantaged group (the non-protected, typically majority, group). Adverse impact does not necessarily indicate test bias. For instance, there may be mean differences between Black applicants and White applicants on a cognitive ability test that make Black applicants less likely to be hired if the test is used for selection. However, the test may be an equally valid predictor of job performance for both groups. This would suggest adverse impact without differential bias (a form of test bias).
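
A minimal sketch of the four-fifths computation in Python, using hypothetical applicant counts:

```python
# Hypothetical four-fifths rule check: selection ratio = hires / applicants per group.
applicants = {"protected": 50, "majority": 100}
hired = {"protected": 10, "majority": 40}

ratios = {g: hired[g] / applicants[g] for g in applicants}
impact_ratio = ratios["protected"] / ratios["majority"]  # 0.20 / 0.40 = 0.50
print(f"Impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.80:
    print("Evidence of adverse impact under the four-fifths rule.")
```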

Prediction equation

An algebraic equation that mathematically describes how changes in criterion scores are functionally related to changes in predictor scores. Once developed, it can be used to predict criterion scores from predictor information. In a situation with one predictor (conscientiousness) and one criterion (performance rating), a simple regression formula can be developed: Ŷ = a + bX, where a is the intercept of the regression line, b is its slope (the regression weight), X is the score on the predictor variable (conscientiousness), and Ŷ is the predicted criterion score. Say our prediction equation is Ŷ = 3.00 + .5(X); a conscientiousness score of 2 would yield a predicted job performance score of 4.
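
The same worked example in a short Python sketch:

```python
def predict_performance(conscientiousness, intercept=3.00, slope=0.5):
    """Simple regression prediction: Y-hat = a + b*X."""
    return intercept + slope * conscientiousness

print(predict_performance(2))  # 4.0, matching the worked example above
```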

Expectancy tables and charts

An expectancy table is a table of numbers showing the probability that a person with a particular predictor score will achieve a defined level of success. An expectancy chart presents essentially the same data but provides a visual summary of the relationship between predictor and criterion. Expectancy tables and charts are useful for communicating the meaning of a validity coefficient.
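
A minimal sketch of how an expectancy table could be tallied from a validation sample; the scores and success indicators below are invented for illustration:

```python
# Hypothetical expectancy table: probability of "success" by predictor-score band.
scores =  [12, 25, 31, 44, 47, 55, 62, 68, 74, 81, 83, 90]
success = [ 0,  0,  1,  0,  1,  1,  0,  1,  1,  1,  1,  1]  # 1 = met criterion

bands = [(0, 25), (26, 50), (51, 75), (76, 100)]
for low, high in bands:
    in_band = [s for sc, s in zip(scores, success) if low <= sc <= high]
    if in_band:
        print(f"Scores {low:>2}-{high:>3}: P(success) = {sum(in_band)/len(in_band):.2f}")
```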

Organizational needs analysis

An organizational needs analysis involves assessing an organizational problem and generating hypotheses about how to solve it. Conference methods and/or organizational survey methods can be used to conduct a needs analysis. In conference methods, key stakeholders discuss contrasting views of the problem, and the opinions of subject matter experts (e.g., job incumbents, supervisors) are incorporated into the discussion. Organizational survey methods assess the issue quantitatively by administering surveys to key organizational members (e.g., employees, supervisors, customers). An organizational needs analysis is the first step that should be taken when trying to determine whether changes to selection procedures are needed; for instance, the analysis may reveal that the organizational problem is not one of selection but one of training.

Assessment centers

Assessment centers evaluate small groups of people at more or less the same time, with a group of observers working together to form a consensus about assessees. They consist of a standardized evaluation of behavior based on multiple inputs. Multiple trained observers and techniques are used, and judgments about behavior are made, in part, from specially developed assessment simulations. The assessors then pool these judgments at an evaluation meeting and agree on a rating for each specified dimension or on an overall evaluation. The method emphasizes structure and deemphasizes the role of assessor expertise.

Base rate

Base rate refers to the quality of the applicant pool: the proportion of individuals who would succeed on the job versus fail if hiring decisions were made without the use of predictors. Predictors are most useful for selection when 50% of applicants are likely to succeed on the job and 50% are likely to fail. If 100% of applicants are likely to succeed, hiring decisions can be made at random with no added benefit from predictors; likewise, if 0% of applicants are likely to succeed, predictors will not help in selecting employees who are likely to succeed on the job.

Critical incident technique

CIT involves the development of a series of behavioral statements, written by supervisors and other SMEs, that describe competent, average, and incompetent job behavior. Critical incidents are specific descriptions of observable behaviors that have been, or could be, exhibited on the job; each briefly describes the context in which the behavior occurred and indicates its consequences. CIT can provide valuable information about the important components of the job that can then serve as a basis for developing descriptive information about the job.

Competency modeling

CM aims to identify the relevant competencies for a job and focuses on how the job is completed. It contains dimensions that are broader than KSAs and organizes behavior into categories applicable across many jobs, thus helping to identify which dimensions are most crucial for organizational success (e.g., leadership, ability to work collaboratively).

Counterproductive work behaviors (Sackett & Devore, 2001)

CWBs refer to behaviors that harm, or are intended to harm, the organization or specific people in it. The hierarchical model of CWBs suggests they form a multidimensional construct consisting of interpersonal deviance and organizational deviance; organizational deviance is further broken down into property deviance and production deviance. CWBs can be an emotion-based response to stressful organizational conditions or a cognition-based response to experienced injustice. E.g., absenteeism, theft, gossip.

Campbell's model of job performance (1990)

Campbell's model of job performance suggests there are two broad types of job performance: in-role performance, which covers the technical aspects of the job (i.e., the tasks within the job description), and extra-role performance, which covers tasks not necessarily in the job description but that contribute to the overall functioning of the organization (e.g., organizational citizenship behaviors such as helping others or volunteering). Campbell suggests there are three essential features of job performance: technical performance, extra-role performance, and personal discipline, analogous to task performance, organizational citizenship behaviors, and counterproductive work behaviors, respectively. According to Campbell, three basic factors determine performance: declarative knowledge, procedural knowledge and skill, and motivation.

Reliability and errors of measurement

Classical test theory suggests that an obtained score on a measure consists of the true score (the amount of the attribute the person actually possesses) plus error. Errors of measurement are factors that affect the obtained score but are unrelated to the characteristic, trait, or attribute being measured. Reliability is the extent to which a selection measure is free from error; more broadly, it refers to the degree of dependability, consistency, or stability of scores on a measure used in selection research. One reason doctors have trouble reliably measuring blood pressure is that subjects often experience elevated blood pressure simply from being tested at the doctor's office (white coat hypertension). Anxiety about having one's blood pressure checked contributes to measurement error, leading to unreliable results.

Individual psychological assessment

Commonly used for assessing the suitability of candidates for executive positions or specialized assignments (e.g., law enforcement agents). Individual psychological assessment is a tool used to help organizations make decisions about hiring, promotion, and development. A typical individual psychological assessment consists of professionally developed and validated measures of personality, leadership style, and cognitive abilities, among other things, and the process often includes an interview. It deemphasizes structure and emphasizes the expertise of the assessor. It is "a process of measuring a person's knowledge, skills, abilities, and personal style to evaluate the characteristics and behavior that are relevant to (predictive of) successful job performance" (Jeanneret & Silzer, 1998b). Core characteristics include "an individual," "direct contact between the assessor and the individual," and a multimethod approach (Silzer & Jeanneret, 2011).

Computerized adaptive tests

Computerized adaptive tests adapt to test takers as they answer questions. They tend to begin with an item of moderate difficulty; based on the answer to the first question, a harder, easier, or similarly moderate item is presented next. Once the ability estimate has stabilized, the ability level has been determined and the test stops. These tests are advantageous because results are immediately available and they place less of a burden on test takers (who don't have to answer items far from their ability level). However, large samples are required to develop the item pool. These tests are often used for individually administered ability tests; their sophistication requires computers and item response theory. An example of this type of test is the GRE.

Content validation

Content validation relies on expert judgment to determine the degree to which the content of a criterion (or predictor) represents an adequate sample of the important work behaviors (or KSAOs) defined by job analysis. The more closely the content of a selection procedure can be linked to actual job content (the KSAOs necessary to perform the job), the more content valid it can be said to be. Content validation is amenable to situations in which only a small number of applicants are being selected to fill a position and the statistical procedures used in other validation strategies are seriously curtailed. Suppose one is applying to be a bomb defuser for Bomb Co., and the selection process involves a 3-minute realistic bomb-defusing simulation. A content validation strategy may be appropriate here to determine, through expert judgment, the degree to which this measure represents what it intends to assess: bomb-defusing skill.

Criterion contamination and deficiency

Criterion contamination occurs when criterion scores are influenced by variables other than the predictor, altering the magnitude of the validity coefficient (e.g., students' ratings biased by the instructor's presence during the evaluations). Criterion deficiency is the degree to which the actual criterion incompletely represents the conceptual criterion (e.g., the number of publications alone would not be a good indication of their quality or contribution).

Criterion-related validation

Criterion-related validation assesses the relationship between a predictor and a criterion. In selection, it is specific to the relationship between a selection assessment and a performance measure. The relationship is expressed as a validity coefficient ranging from -1 (perfect negative relationship) to +1 (perfect positive relationship).

Differential validity

Differential validity describes the hypothesis that employment tests are less valid for minority, protected-group members than for non-minorities. Differential validity exists when the validities of the same selection test in two groups are statistically significant but unequal; for example, the test may be significantly more valid for White applicants than for Black applicants. This is related to test bias. However, many studies have found that the majority of selection tests do not demonstrate evidence of differential validity.

Emotional intelligence

Emotional intelligence is a multidimensional construct characterized by an individual's ability to perceive, reason with, and regulate emotions. Prominent models of emotional intelligence suggest emotional intelligence includes four facets: being able to decode social and emotional cues, identification and perception of emotions, using emotions to facilitate thoughts, and emotion regulation. Emotional intelligence can be measured via self-report, personality tests (e.g., tests assessing related personality constructs such as self-regulation, self-monitoring, emotional stability) or standardized tests similar to a cognitive test (e.g., the MSCEIT). There is substantial debate surrounding the definition of emotional intelligence and also surrounding the utility of emotional intelligence measures in selection programs. Gatewood, Feild, and Barrick (2011) suggest that research is still in nascent stages and that further research needs to be conducted before a recommendation is made to include emotional intelligence in selection programs. There is some evidence to suggest emotional intelligence is a valid predictor of organizational citizenship behaviors (Mayer et al., 2008).

Levels of measurement

Four levels of measurement exist: nominal, ordinal, interval, and ratio. The degree of precision (and amount of detail) with which we can measure individual differences increases as we go from nominal to ratio.
• Nominal scales comprise two or more mutually exclusive categories represented by numbers (1 = male applicants / 0 = female applicants); the numbers carry no numerical rating.
• Ordinal scales rank-order objects from high to low on a variable of interest. If a supervisor uses an ordinal scale to rank employees on performance, the employee ranked 1 is highest in performance relative to the others. Percentiles are also ordinal rankings (76th percentile = the applicant scored higher than 76 percent of others).
• With interval scales, differences between numbers are meaningful: the difference between 0 and 40 on an ability test equals the difference between 40 and 80. Since 0 on these scales does not represent a complete absence of the variable (like ability), we cannot conclude that someone who scores twice as high as someone else actually has twice as much of that variable.
• Ratio scales are like interval scales but have an absolute zero point, so we can make statements about the ratio of one score to another (John sold 10x as many baskets as Timmy!). Examples include amount of pay, height, weight, and number of absences from work.

Graphology and Polygraphy

Graphology is the analysis of an individual's handwriting in order to infer personality traits. There is no evidence that graphology works for selection; the available evidence suggests it is not a valid selection measure. Polygraphy is a type of integrity testing that assumes deception can be detected by measuring physiological reactions, on the logic that these reactions may be associated with the stress of lying. Polygraphs produce a high rate of false positives, which led to public outcry and the Employee Polygraph Protection Act of 1988, which prohibits the use of polygraphs in selection for most employers. In short: it is illegal for most employers to use these, so don't.

Incremental predictive validity

In personnel selection, this refers to the ability of a measure of a trait to predict job performance above and beyond already established predictors. For example, the value in measuring a particular personality trait for selection purposes would depend on whether it adds to the predictability of job performance beyond what is already predicted by, for example, biodata, interview data, and cognitive ability. The incremental predictive validity of a particular measure can be tested using multiple regression analysis. In a selection context, this is used to determine if subsequent measures should be added to the selection process (i.e., if they add more value to the predictions than they cost to measure).
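
A minimal sketch of the ΔR² logic with ordinary least squares on simulated (hypothetical) data; a real analysis would also test the increment for statistical significance:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
cognitive = rng.normal(size=n)
new_trait = rng.normal(size=n)
# Simulated performance influenced by both predictors (hypothetical data).
performance = 0.5 * cognitive + 0.3 * new_trait + rng.normal(size=n)

def r_squared(X, y):
    X = np.column_stack([np.ones(len(y)), X])   # add intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_base = r_squared(cognitive.reshape(-1, 1), performance)
r2_full = r_squared(np.column_stack([cognitive, new_trait]), performance)
print(f"Incremental validity (delta R^2): {r2_full - r2_base:.3f}")
```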

"g" model of intelligence

Intelligence refers to the processes of acquiring, storing, retrieving, combining, comparing, and using information and conceptual skills in new contexts. Many researchers have tried to develop a model of intelligence (e.g., Spearman, Cattell, Humphreys). Spearman conceptualized intelligence as general intellectual ability and coined the symbol g. This model of intelligence emerged from psychometric studies of cognitive ability; g is thought of as the common factor underlying intelligence constructs, and this operationalization refers to general mental processing speed. While g is often used in research, the topic of intelligence (and how to measure it) is contentious at best.

Frame of reference training

Involves identifying raters whose ratings are idiosyncratic and helping them develop a common understanding of the dimensions to be rated and of the observations that support different levels of ratings. It is intended to get all raters on the same metric, mitigating idiosyncrasy among raters whose ideas about what matters in performance differ from the organization's standards. A rater is high in idiosyncrasy if the dimensions he or she considers important to performance do not match the organization's standards, and low in idiosyncrasy if they do.

Employee comparison ratings

It compares ratees to others, either on overall performance or on multiple dimensions. The usual result is a ranking of employees, achieved in the following ways:
• Rank ordering - the best and worst performers are usually identified first; the task gets progressively harder toward the center of the distribution.
• Forced distribution - used when fine distinctions are not needed and gross rankings suffice.
• Paired comparisons - each ratee is compared to each of the others in a set.

Behavioral observation scale

It is a technique for evaluating the performance of an employee that can be used as part of the appraisal process. Like behaviorally anchored rating scales, this technique involves identifying the key tasks for a particular job, but the difference is that employees are evaluated according to how frequently they exhibit the behaviors required for effective performance. The scores for the observed behaviors can then be totaled to produce an overall performance score; the various behavior measures are normally weighted to reflect their relative importance to the overall job. It takes less time to develop than BARS, since prior item scaling isn't required, and it clearly communicates to the ratee which behaviors should be engaged in frequently and which should be avoided. The rater is merely an observer and reporter, not an evaluator.

Functional job analysis

It is an information-gathering tool that studies a particular job to determine the 'essence' of the collection of tasks within that job title. It assesses what is done, what should be known, what resources are used, and the conditions/context in which the job is done. It can be used for job descriptions/recruiting, selection, training, compensation, job redesign, promotion, and workforce reduction. The more information gathered from a variety of sources, the better we can understand the job.

KSAOs

KSAOs refer to the knowledge, skills, abilities, and other characteristics that are necessary to perform job tasks. In personnel selection, KSAOs are predictor variables. Some common KSAOs assessed include cognitive ability, personality variables (such as the Big Five), and job knowledge.

Mechanical ability tests

Mechanical ability refers to characteristics that tend to make for success in work with machines and equipment. Tests of this construct include the Minnesota Assembly Test, which emphasizes actual mechanical assembly or manipulation. The main factors that make up mechanical ability are spatial visualization, perceptual speed and accuracy, and mechanical information. Often these tests involve two components: manual performance and written performance. However, the cost and time of administering and scoring them for large numbers of individuals can make their use prohibitive.

Mental ability tests

Mental ability tests assess a variety of cognitive abilities (e.g., verbal and quantitative ability); some target specific mental abilities, while others produce a single score representing overall mental ability (e.g., the Wonderlic Personnel Test). Mental ability tests have commonly been validated using educational achievement as a criterion measure. They are not interchangeable; they can differ in the abilities measured because the items of the tests differ in content.

Non-compensatory selection approaches

Non-compensatory selection approaches refer to selection procedures in which an applicant's strength in one phase of the process CANNOT compensate for relative weakness in other phases. An example is the use of multiple cutoffs, in which a predetermined cut score is assigned to each test; the applicant must score at or above the cut score on every test to be considered for selection. For example, if tests A, B, and C had cut scores of 10, 20, and 30, respectively, the applicant would, at a minimum, need scores of 10 on test A, 20 on test B, and 30 on test C to be considered for employment.
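
A minimal Python sketch of the multiple-cutoff rule from the example above:

```python
# Hypothetical multiple-cutoff screen matching the example above.
cut_scores = {"A": 10, "B": 20, "C": 30}

def passes_all_cutoffs(applicant_scores):
    """Non-compensatory rule: every test must meet its cut score."""
    return all(applicant_scores[t] >= c for t, c in cut_scores.items())

print(passes_all_cutoffs({"A": 12, "B": 25, "C": 31}))  # True
print(passes_all_cutoffs({"A": 40, "B": 19, "C": 50}))  # False: high A can't offset low B
```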

Speed versus power tests

Performance tests can be thought of as either speed or power tests. Speed tests have participants answer as many questions as they can answer correctly in an allotted amount of time. Conversely, power tests present participants with a smaller number of more complex questions. Power tests may be more appropriate where higher levels of cognitive ability are expected. Speed tests increase candidate variability, whereas power tests assess the candidate's maximal performance.

Physical ability tests

Physical ability tests are used to test the physical abilities of applicants for placement into manual labor and physically demanding jobs. Because the Americans with Disabilities Act prohibits pre-employment medical examinations, the most feasible way to collect data about applicants' physical status is through physical ability tests that measure the worker characteristics required by the job. Most physical ability tests require demonstrations of strength, oxygen intake, and coordination; for example, strength and aerobic capacity are positive job predictors for firefighters.

Projective techniques

Projective techniques allow respondents to expose central ways of organizing experience and structuring life, because meanings are imposed on a stimulus having relatively little structure and cultural patterning. These self-report personality techniques are intentionally ambiguous. For example, an applicant may be shown a picture and asked to construct a story about the pictured characters, or given the beginnings of sentences (e.g., "My mother...") and asked to complete them; how the participant answers is thought to be a projection of his or her personality, and responses are interpreted by a scorer. These tests have low test-retest reliability, scoring is difficult, and few HR specialists are trained to score and interpret them.

Psychometric rating errors

Psychometric rating errors exist when raters have a general tendency to select average scores to avoid rating applicants/employees as particularly good or poor (the central tendency error) and/or a tendency to rate applicants generally leniently or harshly (the leniency/severity error). These two types are "between-applicant" errors (i.e., across all applicants, raters adjust scores in a particular way). A within-applicant rating error is the halo error, in which raters tend to view a particular applicant as a strong or weak performer across multiple performance domains regardless of actual performance. Another error arises from prior impressions: when raters previously rated an applicant and agreed with the decision to select or not select that person, those impressions can bias subsequent ratings.

Methods of estimating reliability

Reliability can be assessed across measure sources, such as items (internal consistency) and raters/people (inter-rater reliability). An internal consistency reliability estimate shows the extent to which all parts of a measure (such as items) are similar in what they measure; the procedures applied most often are split-half reliability, Kuder-Richardson reliability, and Cronbach's alpha. An inter-rater reliability estimate shows the extent to which raters are consistent in their ratings of an applicant; the procedures applied most often are percentage of agreement between raters, Kendall's W, Cohen's kappa, interclass correlation, and intraclass correlation. Reliability can also be assessed across "measure time," such as across administrations (test-retest reliability) and across forms (parallel/equivalent forms). Test-retest reliability assesses the consistency of results on retesting. Parallel or equivalent forms reliability estimates make use of two equivalent versions of a measure; the Pearson correlation is used to determine the extent of the relationship between the two sets of test scores.
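
As one concrete example, Cronbach's alpha can be computed directly from a respondents-by-items matrix; a minimal sketch with invented ratings:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum(item variances) / total variance)."""
    items = np.asarray(items, dtype=float)      # shape: (respondents, items)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum() # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Five respondents answering a three-item scale (hypothetical ratings).
ratings = [[4, 5, 4], [2, 3, 2], [5, 5, 4], [3, 3, 3], [1, 2, 2]]
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```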

Construct validation

Research process involving collection of evidence used to test hypotheses about relationships between measures and their constructs. The purpose of construct validation is to have more assurance that we are actually measuring what we claim to be measuring. It is assessed by correlating test scores of a predictor or a criterion measure with other similar measures (convergent validity) or dissimilar measures (discriminant validity).

Selection decision errors

Selection decision errors broadly refer to the undesirable yet inevitable outcomes associated with selection decisions. There are two types: false positives and false negatives. False positives are erroneous acceptances, which occur when applicants pass the selection process and are hired but prove unsuccessful on the job. False negatives are erroneous rejections, which occur when applicants who would have succeeded on the job are not selected (e.g., for failing one or more phases of the selection process). Selection decision errors can be minimized by using validated selection procedures and proven selection decision strategies.

Standard error of measurement

Standard error of measurement is a way to express the reliability of a measure. It refers to the dispersion of measurement error around a true score and is a function of test reliability and the variability of test scores. The higher the standard error, the more error present in the measure and the lower its reliability. The equation is σ_meas = σ_x √(1 − r_xx). If a math ability test has a reliability of .90 and a standard deviation of 10, then σ_meas = 10√(1 − .90) = 3.16. We can use the standard error of measurement to estimate how much an applicant's score might change on retesting: if someone scored 50 on this test, we add and subtract the standard error of measurement from this score (50 − 3.16 and 50 + 3.16) and can say the chances are two to one that the applicant's true score lies somewhere between 46.84 and 53.16.
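
The same computation as a short Python sketch:

```python
import math

def standard_error_of_measurement(sd, reliability):
    """SEM = SD * sqrt(1 - r_xx)."""
    return sd * math.sqrt(1 - reliability)

sem = standard_error_of_measurement(sd=10, reliability=0.90)
print(f"SEM = {sem:.2f}")                                    # 3.16, as in the example
print(f"Band around a score of 50: {50 - sem:.2f} to {50 + sem:.2f}")
```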

Test bias

Test bias exists when selection tests "behave" differently for different groups of individuals (e.g., selection tests predict behavior better for men than for women). There are two types of test bias. Differential bias exists when the validity coefficients are stronger for one group than another; for instance, if cognitive ability is a significantly stronger predictor of job performance for men than for women, there is evidence of differential bias. This can be assessed by comparing the validity coefficients (the correlations between the predictor and criterion) for each group. Predictive bias exists when tests differentially predict the criterion for different groups, such that regression slopes or intercepts differ significantly across groups. This can be tested using moderated multiple regression; a significant moderation effect (e.g., with gender or race as the moderator) would suggest test bias.
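
A minimal sketch of the moderated-regression logic on simulated (hypothetical) data; a real analysis would test the interaction term for statistical significance:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
group = rng.integers(0, 2, size=n)          # 0/1 group membership (hypothetical)
test = rng.normal(size=n)
# Simulate a slope difference between groups (i.e., built-in predictive bias).
performance = 0.6 * test + 0.3 * group * test + rng.normal(size=n)

# Moderated regression: performance ~ test + group + test*group.
X = np.column_stack([np.ones(n), test, group, test * group])
beta, *_ = np.linalg.lstsq(X, performance, rcond=None)
print(f"Interaction (slope-difference) coefficient: {beta[3]:.2f}")
# A non-trivial interaction suggests the test predicts differently by group.
```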

Americans with Disabilities Act (ADA)

The ADA of 1990 prohibits discrimination against qualified individuals with a mental or physical disability. It applies to employers with 25 or more employees in all areas of employment. An individual qualifies as disabled if their impairment substantially limits one or more major life activities, they have a record of such an impairment, or they are regarded as having such an impairment.

Age Discrimination in Employment Act (ADEA)

The ADEA of 1967 was designed to promote the employment of older persons based on their ability rather than their age. It prohibits discrimination against individuals who are 40 years of age or older and covers government agencies, private employers, unions, and employment agencies. The EEOC is charged with enforcing the ADEA.

Civil Rights Act of 1964 (CRA) and Title VII

The CRA of 1964 is a piece of civil rights legislation that prohibits discrimination against various protected groups across several domains. Title VII is the CRA provision that prohibits employment discrimination on the basis of race, color, sex, religion, and national origin. Title VII covers government agencies, employers receiving federal funds, unions, employment agencies, and private employers with 15 or more employees. This legislation was a catalyst for the field of selection research aimed at measuring, controlling, and reducing discrimination.

Dunning-Kruger effect (Kruger & Dunning, 1999)

The Dunning-Kruger effect is a cognitive bias in which unskilled individuals overestimate their skills and, conversely, skilled individuals underestimate their abilities. It is often considered a self-insight issue. For example, it might occur when individuals perceive themselves as having mastered material (at least the 80th percentile) and as having performed well on a test (again, at least the 80th percentile) when in actuality they performed poorly (around the 10th percentile).

Cross-validation

The process of testing whether a regression equation developed on one group will lose predictive accuracy (shrinkage) when applied to a new group. Two general methods of cross-validation are used: empirical and formula estimation. Empirical cross-validation applies a regression equation developed on one sample to another sample; if the equation predicts scores for the new sample, it is considered cross-validated. The formula method uses a single sample but applies special formulas to estimate the shrinkage that would occur if the regression equation were applied to a similar sample of people. The formula method is more efficient, simpler to use, and, with a large enough sample, no less accurate than empirical cross-validation.
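
A minimal sketch of empirical cross-validation on simulated (hypothetical) data, showing shrinkage from a development sample to a new sample:

```python
import numpy as np

rng = np.random.default_rng(2)

def fit(X, y):
    """Fit OLS with an intercept; return the coefficient vector."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

def r2(X, y, beta):
    """R-squared of fixed coefficients applied to a sample."""
    pred = np.column_stack([np.ones(len(y)), X]) @ beta
    return 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()

def sample(n):
    X = rng.normal(size=(n, 2))
    y = 0.5 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(size=n)
    return X, y

X_dev, y_dev = sample(100)      # development sample (hypothetical data)
X_new, y_new = sample(100)      # holdout sample

beta = fit(X_dev, y_dev)
print(f"R^2 in development sample: {r2(X_dev, y_dev, beta):.3f}")
print(f"R^2 in new sample:         {r2(X_new, y_new, beta):.3f}  (shrinkage)")
```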

Reliability coefficient

The reliability coefficient can be interpreted as the extent to which individual differences in scores on a measure are due to true differences in the attribute measured versus chance errors. It ranges from 0 (a measure composed entirely of error) to 1 (no error present in the measure). A reliability coefficient is specific to the estimation method and group on which it is calculated, is based on responses from a group of individuals, is expressed by degree, and is ultimately a matter of judgment. If a measure is not reliable, it cannot be valid: reliability is a necessary but not sufficient condition for validity.

Selection ratio

The selection ratio is an index ranging from 0 to 1 that reflects the ratio of available job positions to the number of job applicants; it is computed by dividing the number of hires by the number of applicants. The smaller the proportion of applicants hired, the better the chance of choosing top performers. For example, 50 people applying for 5 positions yields a selection ratio of .10 (SR = 5/50). Selection ratios closer to 0 indicate bigger applicant pools, which makes assessment tools more valuable for predicting performance; there is little value in knowing whether one applicant is better than another if most people who apply are hired (i.e., selection ratios closer to 1).

Validity coefficient

The validity coefficient is an index ranging from -1 to +1 that indicates the magnitude of the relationship between a predictor and a criterion. In a selection context, the criterion of interest is performance related. A significant validity coefficient helps show that, for a group of people, a test is related to job success. By squaring the validity coefficient, one can assess the amount of variance in the criterion accounted for by the predictor. This process should be guided by theory.

Types of performance measures

There are four main types of performance measures:
• Production data - the quality or quantity of output, generally a physical measure of work. E.g., number of widgets produced.
• HR personnel data - personnel records and files that contain information about workers. E.g., absenteeism, tardiness, voluntary turnover.
• Training proficiency - how quickly and how well employees learn during job training activities (trainability). E.g., error rates during a training period, scores on training performance tests.
• Judgmental data - performance appraisals or ratings, most often a supervisor's rating of a subordinate on a series of behaviors or outcomes found to be important to job success. E.g., task performance, citizenship behavior, counterproductive behavior.

Uniform Guidelines on Employee Selection Procedures

These are a set of principles intended to assist employers and other relevant agencies in the use of tests and other selection procedures in a legal manner. The guidelines specifically address issues of adverse impact, test validation, and record-keeping in selection. The guidelines are not legally binding themselves, but are the joint effort of several federal agencies including the EEOC. They serve as a primary reference for court decisions, and thus are a guide for HR and selection experts who aim to be legally defensible in their procedures.

Fixed and sliding bands

These are two methods of banding—a selection decision-making approach that treats all scores within one or more standard error units of the top score as equal. The range of scores in fixed and sliding bands is determined using the top score as a reference point. With fixed bands, all applicants in the top band must be selected before a new band is calculated. Sliding bands are recalculated once an applicant is selected from the top band, using the highest remaining score as the new reference point.
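
A minimal sketch of the sliding-band logic; the scores, SEM, and random within-band choice are all hypothetical simplifications (in practice, secondary criteria guide the choice within a band):

```python
import random

def sliding_band_selection(scores, sem, n_to_hire, rng=random.Random(0)):
    """Sliding bands: scores within 1 SEM of the top remaining score are treated
    as equivalent; one is picked (here at random for illustration), then the
    band is recomputed from the new top remaining score (the "slide")."""
    remaining = sorted(scores, reverse=True)
    hired = []
    while remaining and len(hired) < n_to_hire:
        band = [s for s in remaining if s >= remaining[0] - sem]
        pick = rng.choice(band)
        hired.append(pick)
        remaining.remove(pick)
    return hired

print(sliding_band_selection([95, 93, 92, 88, 85, 80], sem=4, n_to_hire=3))
```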

Forced-choice inventories

These inventories use tetrads of descriptive statements, two about equally favorable and two about equally unfavorable, forcing test takers to choose among items of similar desirability. They are advantageous in that they often reduce the likelihood of faking; however, they are more demanding on participants, requiring deeper cognitive processing. While forced-choice formats can present psychometric challenges, some studies have shown that they yield good predictive validities.

Norm-referenced versus criterion-referenced assessment

These are two types of procedures for interpreting a test score's meaning. Norm-referenced assessment interprets a score relative to a comparison group (e.g., GRE, ACT); good or bad is relative and depends on the performance of the norm group. Criterion-referenced assessment interprets a score as poor or good against a predetermined standard (e.g., GED, bar exam, course exams); good or bad is determined relative to the content domain being tested. In short, criterion-referenced tests show how well participants perform against a standard, while norm-referenced assessments compare a score to those of other test takers.

Work sample tests

These tests require applicants to perform tasks similar to those they would be expected to perform on the job. While work sample tests often have high validity and generate positive perceptions from applicants, they may be costly to administer; further, because only certain tasks are assessed, they may not generalize well. An example is a flight test for a pilot's license, in which an evaluator counts the items or behaviors the candidate performs during the flight - here, the process is what matters.

Affirmative Action programs

a set of specific actions taken by an organization to actively seek out and remove unintended barriers to equal employment opportunity (EEO). An AAP is a written document that serves as a guideline for ensuring that EEO principles are implemented. Adoption of AAPs typically arises in three situations: the company is a government contractor, the company lost a court discrimination case, or the company has voluntarily adopted EEO principles.

Behaviorally anchored rating scales

They are a type of rating scale used to measure judgmental job performance data, aiming to combine the benefits of narratives, critical incidents, and quantified ratings by anchoring a quantified scale with specific descriptions of good, moderate, and poor performance on specified dimensions. For each job dimension, a set of critical incidents is selected to represent various levels of performance on the dimension (usually 1-7). For example, a scale could range from 1 ("employee can be expected to remain silent until the customer waves money or yells loudly") to 7 ("employee can be expected to smile, greet a regular customer by name as she approaches, and ask how specific family members are doing").

Negligent hiring

This is a legal suit brought against an organization by a third party who was injured by an employee of the charged organization. Such a charge requires that the plaintiff was actually injured by the employee, that the employee was unfit for the job, and that the employer had or should have had knowledge of this. Further, the injury must have been a foreseeable result of the employee's actions. For example, if a company hires a bus driver who has a record of driving under the influence and the driver crashes the bus, injuring passengers, the company could be charged with negligent hiring.

Cut-off scores (Truxillo, 1996)

This is a selection decision-making strategy in which a score on a predictor (or combination of predictors) is determined, and applicants who fall below this score are rejected. A variety of methods exist for setting cutoff scores, broadly categorized under two approaches: the empirical approach, in which cutoff scores are determined using a comparison group, and the judgmental approach, in which cutoff scores are set using the knowledge of job experts or incumbents.

Top-down selection

This is a selection decision-making strategy in which applicants' overall scores are rank-ordered from highest to lowest. Applicants are selected starting with the highest scoring applicant and moving down to the next highest and so on until all positions are filled. This approach assumes that higher scoring applicants will be better performers than lower scoring applicants (i.e., a linear relationship between scores and future performance is assumed).
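
A minimal Python sketch with hypothetical applicants:

```python
def top_down_select(applicants, n_positions):
    """Rank applicants by overall score and fill positions from the top."""
    ranked = sorted(applicants.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, score in ranked[:n_positions]]

scores = {"Ana": 88, "Ben": 94, "Cam": 79, "Dee": 91}   # hypothetical applicants
print(top_down_select(scores, 2))  # ['Ben', 'Dee']
```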

Diversity-validity dilemma (Murphy, Cronin, & Tam, 2003)

This is a situation in personnel selection highlighting the conflict associated with choosing valid selection procedures and maintaining a diverse workforce. On one hand, employers want to use the most valid procedures to optimize job performance prediction. On the other hand, some of the most valid predictors result in subgroup differences and, thus, adverse impact. The dilemma can be addressed by adjusting selection procedures to reduce adverse impact or implementing affirmative action policies, both of which have costs and benefits.

Fleishman's taxonomy of ability

This taxonomy of abilities taps into the nonphysical and physical abilities deemed necessary for completing work activities. Specifically, Fleishman identified 9 physical abilities that have been used to select employees for physically demanding jobs: static strength, dynamic strength, explosive strength, trunk strength, extent flexibility, dynamic flexibility, gross body coordination, gross body equilibrium, and stamina. Oftentimes this taxonomy is provided to SMEs so they can select the abilities they think are most important for particular jobs. It has been used for a variety of specific jobs, including pipeline workers, correctional officers, and enlisted army personnel; for instance, composites of two to four physical abilities correlated highly (.87) with job performance among enlisted army personnel.

Unproctored internet testing (UIT)

allows applicants to complete assessments anywhere, anytime, and without the supervision of a human proctor. UIT is cost-efficient for organizations because it is unproctored and internet-based. Several issues exist with its use, such as the appropriateness of different kinds of tests (e.g., personality vs. cognitive assessment) for internet testing, the potential for cheating, and the costs and feasibility of measures used to reduce cheating.

Conditional reasoning tests

are a new approach to personality assessment in which faking is reduced by indirectly measuring unconscious cognitive biases that people rely on to justify or rationalize their behavior. These unconscious biases are assumed to relate to motives or traits. For instance, questions can be framed as though they are measuring logic when they can really be assessing aggression. This method is still fairly new, so it is uncertain whether it will become a permanent fixture of selection testing.

Maximal versus typical performance (Deadrick & Gardner, 2008)

describe maximal performance as the "can do" aspect of performance, which is what an individual can achieve when highly motivated, and typical performance as the "will do" aspect, which is what an individual generally achieves. The difference between maximal and typical performance centers on the impact of motivational factors on performance: typical performance is shaped by ability and personality characteristics, whereas personality variables, like motivation and drive, may play a limited role in maximal performance.

Drug testing

includes any test assessing applicant or employee drug use. Drug testing in organizations has become more common with findings that drug use is associated with lower job performance, more accidents, more injuries, and more involuntary turnover on the job. Tests can take the form of paper-and-pencil measures, urine tests, hair analysis, fitness-for-duty tests, and oral fluid tests. Several legal arguments against drug testing have been presented, including that it represents an invasion of privacy, an unreasonable search, and a violation of due process; additionally, drug testing may violate the Civil Rights Act and the National Labor Relations Act, and drug users may be protected under the Americans with Disabilities Act.

Integrity testing

includes overt and covert tests used to predict the likelihood that applicants or employees will engage in counterproductive work behaviors. Overt tests include questionnaire items assessing attitudes and beliefs about counterproductive work behaviors (e.g., theft) and may also request admissions of CWBs. Covert tests include personality-oriented tests or conditional reasoning tests in which integrity is inferred from personality traits related to CWBs. Integrity testing is sometimes used in selection in an effort to reduce employee theft, a major concern: an estimated 2-5% of each sales dollar is added to prices to offset the costs of internal theft.

Structured interviews

interviews that rely on objective evaluation procedures and make use of standardization in gathering, recording, and evaluating information. Important characteristics of a structured interview include using job analysis as a basis for questions, asking the same questions of all applicants, posing only behavior-based questions, scoring each answer, having multiple KSA scoring scales, using scoring scales with behavioral examples, and training interviewers.

Five-factor model (Costa & McCrae, 1995; Goldberg, 1981)

is a prominent personality trait taxonomy that includes five broad traits - conscientiousness, openness to experience, agreeableness, neuroticism, and extraversion. Conscientiousness refers to self-regulation and achievement striving tendencies. Openness to experience refers to an individual's openness towards new ideas and propensity towards arts and culture. Agreeableness refers to an individual's level of cooperativeness and kindness. Neuroticism refers to an individual's tendency to experience negative emotions and abilities to regulate emotions. Extraversion refers to one's tendency to enjoy social stimulation and tendencies to experience positive emotions. Each broad trait includes more specific trait facets (e.g., achievement striving and organization are subsumed within conscientiousness). The five factor model is based on the lexical hypothesis that suggests all trait terms have been encoded in the English language. Factor analytic procedures were used to identify the five broad traits named above. Personality assessments used in selection procedures often assess the traits within the five-factor model. Of the five traits, conscientiousness consistently demonstrates the strongest validity coefficients in predicting job performance and is a valid predictor of job performance beyond general cognitive ability. Other traits such as agreeableness and neuroticism can validly predict counterproductive work behaviors. There is evidence that extraversion predicts success in sales positions.

Core self-evaluations

is a term coined by Judge and colleagues that refers to a set of personality traits shown to be important to work performance. These traits include generalized self-efficacy (beliefs about one's ability to be successful), self-esteem (general evaluations about the self), emotional stability (tendencies to experience negative emotions and the ability to regulate them), and locus of control (one's perception of responsibility or control over events in one's life). Research has found that high core self-evaluations are related to more satisfaction and interest in work, and a meta-analysis found a moderate correlation between core self-evaluations and job performance.

Social desirability

is a type of response set or style. Specifically, social desirability is the tendency for people to try to present themselves in a favorable light or say things they think others want to hear when responding to stimuli such as inventory items. Social desirability becomes "faking" when somebody deliberately attempts to look good on a test or inventory. Acquiescence is another type of response set.

Proactive personality

is an "action-oriented" trait characterized by taking initiative at work and effecting environmental changes. Proactive personality predicts important job outcomes such as salary, rate of promotions, job performance, and transformational leadership. This personality trait predicts behavior beyond conscientiousness and extraversion and it is anticipated that employers will continue to value proactive personality.

Weighted application blanks

items on an application form that are weighted to reflect their degree of importance in differentiating between good and poor performers. A total score is determined by summing the collective weights for responses to the items. Although use of WABs has waned, they can be effective in selection, particularly for lower-skilled jobs.

Bona Fide Occupational Qualification (BFOQ)

legal defense that allows employers to exclude members of protected groups from employment when the qualification is essential to the job or vital to the business's operation. For example, an airline might enforce mandatory retirement at a certain age for safety reasons. A BFOQ defense cannot be used for exclusion based on race or color.

Organizational branding

one part of an organization's recruitment strategy in which companies create a favorable and unique organizational image. Organizational branding affects the kind of inferences that potential job applicants make and influences their attraction to potential employers. Thus, organizational branding can make a position more or less desirable and the strategic use of brand can give an organization an edge over their competitors. It may also be important for organizations to ensure that they remain consistent with their brand when implementing selection practices.

Realistic-job previews (RJPS) and ELPs

provide applicants with information regarding both the positive and negative aspects of a job. This information can be disseminated via multiple means, including talks with recruiters, company visits, brochures, or discussions with employees. RJPs' relationships with turnover and job satisfaction are small. Expectation-lowering procedures (ELPs) are a generalized realistic recruitment tool that lacks organization- and job-specific details and instead focuses on helping the applicant understand the realities of entering a new organization. ELPs are meant to be used as a replacement for, or an addition to, the RJP and are based on the idea that most employees enter organizations with unrealistically high expectations. These previews allow applicants to adjust their expectations and possibly select out if they become uninterested in the job; the applicants who remain in the selection pool are left with more accurate expectations.

Biodata

self-report data provided by the applicant that predict relevant work outcomes and reflect the applicant's past behaviors and experiences in work contexts, educational settings, the family, and community activities. Biodata questions offer an alternative method for collecting data usually gathered during interviews. Research evidence has supported the validity of biodata; however, it is important to note that some types of biodata can be biased against certain groups of applicants.

Simulation fidelity

simulation fidelity is the amount of realism a simulation provides in relation to actual performance on a job (imitation of the complex work). Fidelity can be low or high; high-fidelity simulations imitate the actual, complex work applicants would be expected to perform on the job. Two types can be differentiated: psychological fidelity, in which a task requires the particular KSAOs needed to perform the job, and physical fidelity, in which a task mirrors the physical aspects of the job. An example of a high-fidelity (i.e., highly realistic) simulation is the use of computerized manikins in nursing school: the manikins imitate real-life situations nurses may encounter, and the nurses must react to them as they would to a patient.

Mechanical/Clinical and judgmental decision making

these are methods of combining applicant test scores/information in order to make a selection decision. Mechanical (statistical) decision making refers to decisions made independently of human judgment, such as by entering applicant data into a statistical equation developed to predict job performance (e.g., a regression equation). Clinical (judgmental) decision making refers to decisions arrived at through intuition or gut instinct, such as when an applicant's information and test scores are reviewed and the selection decision is based on the impression they make on the decision maker. Mechanical approaches have consistently demonstrated reliability and validity as good as or better than clinical approaches in predicting job performance; however, clinical decision making is preferred by applicants and is seen as more face valid.

Impression management tactics

verbal and non-verbal strategies for controlling how individuals (e.g., applicants) present themselves in order to influence others' (e.g., employers) perceptions. Two main verbal IM tactics are self-promotion (i.e., speaking about one's qualifications, talents, and positive attributes) and ingratiation which includes opinion conformity (expressing beliefs in common with the interviewer/organization) and other-enhancement (praising the interviewer or organization). The research is mixed on how these impression management techniques influence hiring decisions.

