CH. 5 BOOK INFO: METHODS FOR ASSESSING AND SELECTING EMPLOYEES

Faking pg. 118

purposely DISTORTING one's responses to a test to try to "beat" the test

Content validity

the ABILITY of the ITEMS in a measurement instrument to MEASURE ADEQUATELY the various characteristics needed to PERFORM a JOB

Reliability

the CONSISTENCY of a MEASUREMENT instrument or its stability over time

Test utility

the VALUE of a screening test in DETERMINING important OUTCOMES, such as DOLLARS GAINED by the company through its use

Validity generalization PG.117

the ability of a screening instrument to PREDICT PERFORMANCE in a JOB or SETTING DIFFERENT from the one in which the test was validated

Snap judgment PG. 123

arriving at a PREMATURE, early overall EVALUATION of an applicant in a hiring interview

Situational exercise PG. 119

an assessment tool that requires the performance of tasks that APPROXIMATE ACTUAL WORK TASKS

Weighted application forms PG. 98

forms that ASSIGN DIFFERENT WEIGHTS to the various pieces of information provided on a job application

Polygraphs PG. 114

instruments that MEASURE physiological reactions presumed to accompany DECEPTION; also known as lie detectors

Personality tests pg. 112

instruments that MEASURE psychological CHARACTERISTICS of individuals

integrity tests

measures of HONEST or DISHONEST ATTITUDES and/or behaviors

CONSIDERATIONS IN THE DEVELOPMENT AND USE OF PERSONNEL SCREENING AND TESTING METHODS CONT. PG. 102

A SECOND METHOD of ESTIMATING the RELIABILITY of an employment screening measure is the PARALLEL FORMS METHOD. Here TWO EQUIVALENT TESTS are CONSTRUCTED, each of which presumably MEASURES the SAME CONSTRUCT but using DIFFERENT ITEMS or QUESTIONS. TEST-TAKERS are administered BOTH FORMS of the INSTRUMENT. RELIABILITY is EMPIRICALLY ESTABLISHED if the CORRELATION between the TWO SCORES is HIGH. Of course, the major DRAWBACKS to this method are the TIME and DIFFICULTY involved in creating two equivalent tests. Another way to ESTIMATE the reliability of a test instrument is by ESTIMATING its INTERNAL CONSISTENCY. If a test is RELIABLE, each item should MEASURE the SAME GENERAL CONSTRUCT, and thus PERFORMANCE on ONE item should be CONSISTENT with PERFORMANCE on all other items. TWO SPECIFIC METHODS are used to DETERMINE INTERNAL CONSISTENCY. The FIRST is to DIVIDE the test items into TWO EQUAL PARTS and CORRELATE the SUMMED SCORE on the FIRST HALF of the items with that on the SECOND HALF. This is REFERRED to as SPLIT-HALF RELIABILITY. A SECOND METHOD, which involves NUMEROUS CALCULATIONS (and which is more commonly used), is to determine the AVERAGE INTERCORRELATION among all items of the test. The resulting COEFFICIENT, referred to as CRONBACH'S ALPHA, is an ESTIMATE of the TEST'S INTERNAL CONSISTENCY. In summary, RELIABILITY REFERS to whether we can "DEPEND" on a set of MEASUREMENTS to be STABLE and CONSISTENT, and several types of EMPIRICAL EVIDENCE (e.g., test-retest, equivalent forms, and internal consistency) reflect different aspects of this stability. VALIDITY refers to the ACCURACY of inferences or projections we draw from measurements; that is, whether a set of MEASUREMENTS allows ACCURATE INFERENCES or PROJECTIONS about "something else." That "SOMETHING ELSE" can be a job applicant's standing on some characteristic or ability, it can be future job success, or it can be whether an employee is meeting performance standards. In the context of employee screening, the term VALIDITY most often REFERS to whether SCORES on a particular test or screening procedure ACCURATELY PROJECT future JOB PERFORMANCE. Validity refers to the QUALITY of SPECIFIC INFERENCES or PROJECTIONS; therefore, validity for a specific measurement process (e.g., a specific employment test) can vary depending on what criterion is being predicted. For example, an EMPLOYMENT TEST might be a VALID PREDICTOR of job performance, but NOT a valid predictor of ANOTHER criterion such as rate of absenteeism. CONTENT VALIDITY is ESTABLISHED by having EXPERTS such as job incumbents or supervisors JUDGE the APPROPRIATENESS of the test items, taking into account information from the job analysis.
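
A minimal numerical sketch of the two internal-consistency calculations just described, using a small hypothetical people-by-items score matrix (the data are invented for illustration; the alpha calculation uses the standard variance-based formula, which is closely related to the average inter-item correlation described above):

    import numpy as np

    # Hypothetical people-x-items score matrix: 6 test-takers, 4 items scored 1-5.
    scores = np.array([
        [4, 5, 4, 5],
        [2, 3, 2, 2],
        [5, 4, 5, 5],
        [3, 3, 4, 3],
        [1, 2, 2, 1],
        [4, 4, 3, 4],
    ])

    # Split-half reliability: correlate summed scores on the two halves of the test.
    first_half = scores[:, :2].sum(axis=1)
    second_half = scores[:, 2:].sum(axis=1)
    split_half_r = np.corrcoef(first_half, second_half)[0, 1]

    # Cronbach's alpha: compares item variances to the variance of the total score.
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    print(f"split-half r = {split_half_r:.2f}, Cronbach's alpha = {alpha:.2f}")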

MOTOR AND SENSORY ABILITY TESTS PG. 111

A number of tests MEASURE SPECIFIC MOTOR SKILLS or SENSORY ABILITIES. Tests such as the CRAWFORD SMALL PARTS DEXTERITY TEST are TIMED performance instruments (speed tests) that REQUIRE the MANIPULATION of SMALL PARTS to MEASURE the FINE MOTOR DEXTERITY in HANDS and FINGERS required in jobs such as assembling computer components and soldering electrical equipment. For example, the CRAWFORD TEST uses BOARDS with SMALL HOLES into which tiny PINS must be PLACED using a pair of TWEEZERS. SENSORY ABILITY TESTS include tests of HEARING, VISUAL ACUITY, and PERCEPTUAL DISCRIMINATION. The most COMMON TEST of VISUAL ACUITY is the SNELLEN EYE CHART, which consists of ROWS of LETTERS that become increasingly SMALLER. Various electronic instruments are used to measure hearing acuity.

PERSONALITY TESTS CONT. PG. 113-114

A relatively NEW CONSTRUCT that has begun to capture the attention of I/O psychologists interested in the selection of employees is that of EMOTIONAL INTELLIGENCE. Emotional intelligence INVOLVES knowledge, understanding, and REGULATION of EMOTIONS, the ability to communicate emotionally, and the use of emotions to facilitate thinking. Emotional intelligence is PARTLY PERSONALITY, partly an ABILITY, and partly a form of intelligence, so it does NOT FIT neatly into ANY of our CATEGORIES of tests. This construct might be related to performance as a supervisor or workplace leader who needs to inspire followers and be aware of their feelings. In addition, the ability to regulate emotions in a positive way might be beneficial for any worker, particularly when facing interpersonal problems or conflicts with other employees, or when under stress.

EMPLOYMENT TESTING PG. 101

AFTER the evaluation of the BIOGRAPHICAL INFO available from resumes, application forms, or other sources, the NEXT STEP in COMPREHENSIVE employee SCREENING programs is EMPLOYMENT TESTING. Today, the use of tests for employment screening and placement has expanded greatly. A considerable percentage of large companies and most government agencies routinely use some form of employment testing to measure a wide range of characteristics that are predictive of successful job performance.

CONSIDERATIONS IN THE DEVELOPMENT AND USE OF PERSONNEL SCREENING AND TESTING METHODS PG. 101

Any type of measurement instrument used in industrial/organizational psychology, including those used in employee screening and selection, must MEET certain measurement STANDARDS. TWO critically IMPORTANT CONCEPTS in measurement are RELIABILITY and VALIDITY. RELIABILITY refers to the STABILITY of a MEASURE over time or the consistency of the measure. RELIABILITY also REFERS to the AGREEMENT between TWO or MORE ASSESSMENTS made of the SAME EVENT or behavior, such as when two interviewers independently evaluate the appropriateness of a job candidate for a particular position. In other words, a measurement process is said to possess "RELIABILITY" if we can "RELY" on the SCORES or measurements to be stable, consistent, and free of RANDOM ERROR. A variety of METHODS are used for ESTIMATING the RELIABILITY of a screening instrument. ONE METHOD is called TEST-RETEST RELIABILITY. Here, a particular test or other measurement instrument is ADMINISTERED to the SAME PERSON at TWO DIFFERENT times, usually involving a one- to two-week interval between testing sessions. SCORES on the FIRST test are then CORRELATED with those on the SECOND TEST. If the CORRELATION is HIGH (a correlation coefficient approaching +1.0), evidence of RELIABILITY (at least stability over time) is empirically ESTABLISHED.
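
A minimal sketch of the test-retest calculation, assuming two sets of hypothetical scores from the same eight people (illustration only, not data from the text):

    import numpy as np

    # Hypothetical scores for the same eight people tested twice, about two weeks apart.
    time1 = np.array([82, 75, 91, 68, 88, 79, 95, 73])
    time2 = np.array([80, 78, 89, 70, 85, 81, 93, 75])

    # Test-retest reliability is simply the correlation between the two administrations.
    r = np.corrcoef(time1, time2)[0, 1]
    print(f"test-retest reliability estimate: r = {r:.2f}")  # close to +1.0 = stable over time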

Biodata PG. 107

BACKGROUND INFO and personal characteristics that can be USED in employee SELECTION

BIODATA INSTRUMENTS PG. 107-108

BIODATA refers to BACKGROUND INFO and PERSONAL CHARACTERISTICS that can be used in a SYSTEMATIC FASHION to select employees. DEVELOPING BIODATA INSTRUMENTS typically involves TAKING INFO that would APPEAR on APPLICATION FORMS and other items about background, personal interests, and behavior and using that information to develop a form of forced-choice employment test. A BIODATA INSTRUMENT might also INVOLVE PERSONAL QUESTIONS, probing the applicant's attitudes, values, likes, and dislikes. Biodata instruments are unlike other test instruments because there are NO STANDARDIZED BIODATA INSTRUMENTS. Instead, biodata instruments take a great deal of RESEARCH to DEVELOP and VALIDATE. Because biodata instruments are typically DESIGNED to SCREEN applicants for ONE SPECIFIC JOB, they are most LIKELY to be USED ONLY FOR HIGHER-LEVEL POSITIONS, where they can be effective SCREENING and PLACEMENT tools. COMPREHENSIVE biodata INSTRUMENTS can give a very DETAILED DESCRIPTION and CLASSIFICATION of an applicant's BEHAVIORAL HISTORY—a very GOOD PREDICTOR of FUTURE BEHAVIOR. One potential PROBLEM in the use of biodata instruments CONCERNS the PERSONAL NATURE of many of the questions and the POSSIBILITY of unintentional DISCRIMINATION against minority groups because of items regarding age, financial circumstances, and the like. Thus, biodata instruments should ONLY BE DEVELOPED and administered by PROFESSIONALS trained in test use and validation.

JOB SKILLS AND KNOWLEDGE TESTS PG. 111-112

EXAMPLES of job skill tests for CLERICAL WORKERS would include a STANDARDIZED TYPING test or tests of other specific clerical skills SUCH AS proofreading, alphabetical filing, or correction of spelling or grammatical errors, as well as the use of software. For example, the JUDD TESTS are a SERIES of TESTS designed to ASSESS COMPETENCY in several areas of COMPUTER use, including word processing, spreadsheet programs, and database management. A SPECIAL sort of job skill test INVOLVES the USE of WORK SAMPLE TESTS, which MEASURE APPLICANTS' abilities to PERFORM brief EXAMPLES of some of the critical tasks that the job requires. The SAMPLE TASKS are constructed as tests, ADMINISTERED under STANDARD TESTING CONDITIONS, and scored on some predetermined scale. Their obvious ADVANTAGE is that they are CLEARLY JOB RELATED. In fact, WORK SAMPLE TESTS can SERVE as a REALISTIC JOB PREVIEW, allowing applicants to determine their OWN SUITABILITY for performing a job. A DRAWBACK is that work samples are usually rather EXPENSIVE to DEVELOP and take a great deal of TIME to administer. One EXAMPLE of a work sample test was developed for applicants for the job of concession stand attendant at a city park's snack bar. JOB KNOWLEDGE TESTS are INSTRUMENTS that ASSESS specific types of KNOWLEDGE REQUIRED to PERFORM certain jobs.

OTHER EMPLOYEE SCREENING TESTS PG. 115

In addition to the categories of employee tests we have discussed, there are OTHER TYPES OF TESTS that do NOT FIT neatly into any of the categories. For example, many employers concerned about both safety issues and poor work performance SCREEN APPLICANTS FOR DRUG USE, usually through analysis of urine, hair, or saliva samples. Unfortunately, current LAB TESTS are not 100% ACCURATE. Interestingly, the PROBLEM with drug testing is unlike the problem with polygraphs because drug-testing inaccuracies are MORE LIKELY to be FALSE NEGATIVES (failing to detect the presence of drugs) rather than false positives. Unlike the polygraph, however, today there are FEW RESTRICTIONS on drug testing in WORK SETTINGS. In ADDITION to testing for the PRESENCE of DRUGS in applicants, PENCIL-AND-PAPER tests have been developed to SCREEN EMPLOYEES who have attitudes that are RELATED to DRUG USE. A very QUESTIONABLE screening "test" is HANDWRITING ANALYSIS, or GRAPHOLOGY. In GRAPHOLOGY, a person TRAINED in HANDWRITING analysis makes JUDGMENTS about an applicant's job POTENTIAL by EXAMINING the personality characteristics that are supposedly revealed in the shape, size, and slant of the letters in a sample of HANDWRITING. Although used by some companies to screen applicants, the VALIDITY of HANDWRITING ANALYSIS in assessing performance potential is highly questionable.

HONESTY AND INTEGRITY TESTS PG. 114-115

In the past, POLYGRAPHS, or LIE DETECTORS—instruments designed to MEASURE physiological REACTIONS presumably associated with LYING, such as respiration, blood pressure, or perspiration—were USED in EMPLOYEE SELECTION. Most often polygraphs were used to SCREEN OUT "DISHONEST" applicants for POSITIONS in which they would have to HANDLE CASH or expensive merchandise, although they had also been used by a wide number of organizations to screen and select employees for almost any position. RESEARCH has QUESTIONED the VALIDITY of polygraphs. A major PROBLEM concerned the RATE of "FALSE POSITIVE" ERRORS, or innocent persons who are incorrectly scored as lying. Because of this questionable validity and the potential HARM that INVALID RESULTS could cause innocent people, the federal government passed legislation in 1988 that severely restricted the use of polygraphs in general employment screening. However, polygraphs are STILL ALLOWED for the TESTING of employees about SPECIFIC INCIDENTS. Since the establishment of RESTRICTIONS on the use of polygraphs, many employers have TURNED to using PAPER-AND-PENCIL measures of honesty, referred to as INTEGRITY TESTS. Typically, these TESTS ASK about PAST HONEST/DISHONEST BEHAVIOR or about attitudes condoning dishonest behavior. A typical QUESTION might be, "What is the total value of cash and merchandise you have taken from your employer in the past year?" Like POLYGRAPHS, these TESTS also RAISE the important ISSUE of "FALSE POSITIVES," or honest persons who are judged to be dishonest by the instruments. On the other hand, META-ANALYSES of VALIDITY studies of INTEGRITY tests indicate that they are SOMEWHAT VALID PREDICTORS of employee DISHONESTY and "counterproductive behaviors," such as chronic tardiness, taking extended work breaks, and "goldbricking" (ignoring or passing off assigned work tasks), but are LESS RELATED to employee PRODUCTIVITY. It has also been SUGGESTED that integrity tests might PREDICT productive employee BEHAVIORS because INTEGRITY OVERLAPS with work-related PERSONALITY constructs such as conscientiousness and emotional stability, although they should never be the sole basis for a hiring decision.

The Future of Employment Testing: "Smart" Tests and Performance-based Simulations PG. 116

Most companies use COMPUTER-BASED TESTING (CBT) or Web-based programs to administer what were traditionally pencil-and-paper employment tests. In CBT, applicants COMPLETE the TEST INSTRUMENTS on a PC or online. COMPUTERS can then immediately SCORE the tests, RECORD the results in databanks, and PROVIDE the test-taker with feedback if appropriate. Besides being COST EFFECTIVE, CBT yields comparable results: meta-analytic RESEARCH has shown that for most uses, there are NO significant DIFFERENCES in test results between tests administered in computerized versus pencil-and-paper format. A more SOPHISTICATED development is the use of COMPUTER ADAPTIVE TESTING (CAT). Despite its prevalent USAGE in educational and governmental institutions, organizations have only relatively recently started to adopt CAT for preemployment testing purposes. In computer-adaptive tests (often referred to as "SMART" tests), the computer program "ADJUSTS" the difficulty of test items to the level of the person being tested, as sketched below.
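
A deliberately simplified sketch of the adaptive idea (illustration only; the item pool, the simulated examinee, and the crude up/down update rule are all assumptions, and real CAT systems rely on item response theory rather than this shortcut):

    import math
    import random

    # Hypothetical item pool: 11 items with difficulties from -5 (easy) to +5 (hard).
    item_pool = {f"item_{i}": difficulty for i, difficulty in enumerate(range(-5, 6))}

    def simulated_answer(item_difficulty, true_ability):
        """Simulated examinee: more likely to answer correctly when ability exceeds difficulty."""
        p_correct = 1 / (1 + math.exp(item_difficulty - true_ability))
        return random.random() < p_correct

    def adaptive_test(true_ability, n_items=8):
        estimate, administered = 0.0, set()
        for _ in range(n_items):
            # Pick the unused item whose difficulty is closest to the current ability estimate.
            item = min((name for name in item_pool if name not in administered),
                       key=lambda name: abs(item_pool[name] - estimate))
            administered.add(item)
            correct = simulated_answer(item_pool[item], true_ability)
            estimate += 1.0 if correct else -1.0  # crude update; real CAT uses IRT estimation
        return estimate

    print("final ability estimate:", adaptive_test(true_ability=2.0))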

ASSESSMENT CENTERS PG. 118-120

One of the MOST DETAILED FORMS of employment screening and selection takes place in an assessment center, which offers a DETAILED, STRUCTURED evaluation of applicants on a WIDE RANGE of JOB-RELATED knowledge, skills, and abilities. Specific managerial skills and characteristics an ASSESSMENT CENTER ATTEMPTS to measure INCLUDE oral and written communication skills, behavioral flexibility, creativity, tolerance of uncertainty, and skills in organization, planning, and decision making. Because a VARIETY of INSTRUMENTS are used to ASSESS PARTICIPANTS, the assessment center often MAKES USE of LARGE TEST BATTERIES. As we saw in Chapter 1, the ASSESSMENT CENTER APPROACH was DEVELOPED during WWII by the U.S. OFFICE OF STRATEGIC SERVICES (the forerunner of the CIA) for the selection of spies. Today, assessment centers are USED primarily to SELECT MANAGERS, but they are also being USED extensively for MANAGERIAL DEVELOPMENT purposes—to provide FEEDBACK to MANAGERS concerning their job performance-related strengths and weaknesses. In assessment centers, applicants are evaluated on a number of job-related variables using a variety of techniques, such as personality and ability tests that are considered to be valid predictors of managerial success. APPLICANTS also TAKE part in a NUMBER of SITUATIONAL EXERCISES, which are ATTEMPTS to APPROXIMATE certain ASPECTS of the MANAGERIAL JOB. These exercises are RELATED to WORK SAMPLES, except that they are approximations rather than actual examples of work tasks. Sometimes, these situational exercises are used INDEPENDENTLY in employment screening as a SITUATIONAL TEST. Situational tests can be WRITTEN tests, LIVE exercises, or tests PRESENTED via VIDEO. One POPULAR SITUATIONAL EXERCISE is the IN-BASKET TEST, which requires the APPLICANT to DEAL with a stack of MEMOS, LETTERS, and other materials that have supposedly COLLECTED in the "in-basket" of a MANAGER. The APPLICANT is given some BACKGROUND INFO about the job and then must actually TAKE CARE of the work in the IN-BASKET by answering correspondence, preparing agendas for meetings, making decisions, and the like. A group of OBSERVERS CONSIDERS how each APPLICANT deals with the various tasks and ASSIGNS a performance score. Despite the obvious "FACE VALIDITY" of the in-basket exercise, some research has been CRITICAL of it as a SELECTION TOOL. Much of the criticism, however, DEALS with the fact that in-basket exercises are DIFFICULT TO SCORE and INTERPRET because they are ATTEMPTING to ASSESS a variety of complex SKILLS and KNOWLEDGE. Another SITUATIONAL EXERCISE is the LEADERLESS GROUP DISCUSSION. Here, APPLICANTS are put together in a SMALL GROUP to DISCUSS some WORK-RELATED topic. The GOAL is to see how each applicant HANDLES the SITUATION and who emerges as a discussion leader. OTHER assessment center EXERCISES might REQUIRE the ASSESSEE to make a PRESENTATION, ROLE-PLAY an ENCOUNTER with a supervisee, or ENGAGE in a team exercise with other assessees. TRAINED OBSERVERS RATE each applicant's PERFORMANCE on each EXERCISE. Because EVALUATION of ASSESSMENT CENTER EXERCISES is MADE by HUMAN OBSERVERS/assessors, TRAINING of assessors is CRITICAL in order to AVOID systematic biases and to ensure that assessors are in agreement on ratings of assessees (in other words, that there is reliability in the ratings). The RESULT of TESTING at the ASSESSMENT CENTER is a very DETAILED PROFILE of each applicant, as well as some INDEX of how a particular applicant rated in comparison to others.
Although research has indicated that assessment centers are relatively GOOD PREDICTORS of MANAGERIAL SUCCESS, the REASONS why assessment centers work are LESS CLEAR. Of course, the MAJOR DRAWBACK is the huge investment of time and resources they require, which is the MAJOR REASON that assessment centers are usually ONLY USED by LARGER ORGANIZATIONS and for the SELECTION of CANDIDATES for HIGHER-LEVEL management positions. However, innovations using VIDEO and COMPUTERIZED assessment of participants have led to a recent RENEWAL of INTEREST in assessment centers, both in managerial selection and in other forms of evaluation.

PERSONALITY TESTS PG. 112-113

PERSONALITY TESTS are designed to MEASURE certain psychological CHARACTERISTICS of WORKERS. A wide variety of these tests are USED in EMPLOYEE SCREENING AND SELECTION in an attempt to MATCH the PERSONALITY characteristics of job applicants with those of WORKERS who have PERFORMED the JOB SUCCESSFULLY in the past. During the 1960s and 1970s, there was some CONTROVERSY over the use of such tests because of EVIDENCE that the connection between general personality dimensions and the performance of specific work tasks was NOT very STRONG or DIRECT, BUT 1990s meta-analytic reviews of research suggested that certain work-related personality characteristics can be quite GOOD PREDICTORS of job performance, particularly when the personality dimensions assessed are DERIVED from a thorough ANALYSIS of the requirements for the job. General personality INVENTORIES, such as the MINNESOTA MULTIPHASIC PERSONALITY INVENTORY, or MMPI, are also used to SCREEN OUT applicants who POSSESS some PSYCHOPATHOLOGY that might HINDER the PERFORMANCE of sensitive jobs, such as police officer, airline pilot, or nuclear power plant operator. HOWEVER, most of the time, personality tests are used to ASSESS the "NORMAL" characteristics that are deemed to be important for the performance of certain jobs. In the past several decades, there has been a TREND toward DEVELOPING personality tests that more SPECIFICALLY measure JOB RELEVANT aspects of PERSONALITY. For example, GOUGH (1984, 1985) has derived WORK ORIENTATION and MANAGERIAL POTENTIAL SCALES from the CALIFORNIA PSYCHOLOGICAL INVENTORY (CPI), a general personality inventory that MEASURES 20 PERSONALITY DIMENSIONS. The work orientation scale of the CPI is a PREDICTOR of employee performance across positions, whereas the managerial potential scale is used in screening and selecting candidates for management and supervisory positions. HOGAN and HOGAN and others have DEVELOPED a series of personality SCALES to MEASURE personality characteristics predictive of employee success in general job categories such as sales, management, and clerical work. The USE of personality tests in employee screening and selection is on the RISE. It is critically IMPORTANT, however, that the personality tests be CAREFULLY SELECTED to MATCH the requirements of the job. Research examining the use of personality tests in employee screening has found that certain personality characteristics, such as "CONSCIENTIOUSNESS" and "DEPENDABILITY," are good PREDICTORS of both job PERFORMANCE and WORK ATTENDANCE, but may NOT be PREDICTIVE of managerial success. The PERSONALITY traits of "DOMINANCE" and "EXTRAVERSION" are good PREDICTORS of success as a manager and of career success.

How to Conduct More Effective Hiring Interviews PG.125-126

RESEARCH indicates that typical hiring interviews, although widely used, are not always effective predictors of job performance. There are, however, ways to IMPROVE their RELIABILITY and VALIDITY, some of which are outlined here (a small scoring sketch follows at the end of this section):
USE STRUCTURED INTERVIEWS. Structured interviewing, in which the same basic questions are asked of all applicants, is nearly always more effective than unstructured interviewing because it allows for comparisons among applicants. The use of structured questions also helps prevent the interview from wandering off course and assists in keeping interview lengths consistent.
MAKE SURE THAT INTERVIEW QUESTIONS ARE JOB RELATED. Interview questions must be developed from a detailed job analysis to ensure that they are job related. Some researchers have developed situational interview questions, which are derived from critical incidents job analysis techniques and ask applicants how they would behave in a given job situation. Evidence indicates that situational interviews predict job success more accurately than the traditional interview format.
PROVIDE FOR SOME RATING OR SCORING OF APPLICANT RESPONSES. To interpret applicant responses objectively, it is important to develop some scoring system. Experts could determine beforehand what would characterize good and poor answers. Another approach is to develop a scale for rating the quality of the responses. It may also be beneficial to make some record of responses to review later and to substantiate employment decisions, rather than relying on memory. Huffcutt and Arthur (1994) emphasized that it is important that interviewers have both structured interview questions and structured criteria (e.g., rating scales) for evaluating applicants.
LIMIT PROMPTING AND FOLLOW-UP QUESTIONING. These are prone to bias: the interviewer can lead the applicant to the "right" (or "wrong") response through follow-up questions (Campion et al., 1998).
USE TRAINED INTERVIEWERS. Interviewer training improves the quality of hiring interview decisions, and there is also some evidence that interviewers may get better with experience. Interviewers can be instructed in proper procedures and techniques and trained to try to avoid systematic biases. Training is also important because of the public relations function of hiring interviews (e.g., the interviewer is representing the organization to a segment of the public; Stevens, 1998).
CONSIDER USING PANEL OR MULTIPLE INTERVIEWS. Because of personal idiosyncrasies, any one interviewer's judgment of an applicant may be inaccurate. One way to increase interview reliability is to have a group of evaluators assembled in a panel. Although panel interviews may improve reliability, they may still have validity problems if all interviewers are incorrect in their interpretations or share some biases or stereotypes. Also, the use of panel interviews is costly. Using multiple (separate) interviews is another way to increase the reliability of judgments made in hiring interviews; however, there is evidence that different interviewers may not share information adequately to come up with a good hiring decision.
USE THE INTERVIEW TIME EFFICIENTLY. Interviewers often waste much of the time asking for information that was already obtained from the application form and resume. One study found that previewing the applicant's written materials yielded more information in the hiring interview. However, information obtained from the written materials should not be allowed to bias the processing of information received during the interview.
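
A small sketch of the scoring and panel ideas above, assuming hypothetical 1-5 ratings from three panel interviewers (applicants and numbers are invented): applicants are compared on their average rating, and inter-rater agreement is checked with a simple average correlation.

    import numpy as np

    # Hypothetical structured-interview ratings (1-5): rows = applicants, columns = panel members.
    ratings = np.array([
        [4, 5, 4],
        [2, 3, 2],
        [5, 4, 4],
        [3, 3, 2],
    ])
    applicants = ["Applicant A", "Applicant B", "Applicant C", "Applicant D"]

    # Compare applicants on their mean rating across the panel.
    for name, mean_rating in sorted(zip(applicants, ratings.mean(axis=1)), key=lambda pair: -pair[1]):
        print(f"{name}: {mean_rating:.2f}")

    # Rough check on inter-rater agreement: the average pairwise correlation between raters.
    pairs = [(0, 1), (0, 2), (1, 2)]
    avg_r = np.mean([np.corrcoef(ratings[:, a], ratings[:, b])[0, 1] for a, b in pairs])
    print(f"average inter-rater correlation: {avg_r:.2f}")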

Construct validity PG. 103

Refers to whether an EMPLOYMENT TEST MEASURES what it is SUPPOSED to measure

MECHANICAL ABILITY TESTS PG. 109, 111

STANDARDIZED TESTS have ALSO been developed to MEASURE ABILITIES in IDENTIFYING, RECOGNIZING, and APPLYING MECHANICAL PRINCIPLES. These tests are particularly EFFECTIVE in screening applicants for POSITIONS that REQUIRE OPERATING OR REPAIRING MACHINERY, for CONSTRUCTION jobs, and for certain ENGINEERING positions. The BENNETT MECHANICAL COMPREHENSION TEST, or BMCT, is one such COMMONLY used instrument. The BMCT CONSISTS of 68 items, each of which REQUIRES the APPLICATION of a PHYSICAL LAW or a MECHANICAL OPERATION. One study using the BMCT and several other instruments DETERMINED that the BMCT was the BEST SINGLE PREDICTOR of job performance for a group of employees MANUFACTURING ELECTROMECHANICAL COMPONENTS. A U.K. military study also FOUND that a MECHANICAL COMPREHENSION TEST PREDICTED recruits' abilities to HANDLE WEAPONS.

CONSIDERATIONS IN THE DEVELOPMENT AND USE OF PERSONNEL SCREENING AND TESTING METHODS CONT PG. 103-104

Similar to our discussion of reliability, VALIDITY is a UNITARY CONCEPT, but there are THREE important FACETS of, or types of evidence for, DETERMINING the VALIDITY of a PREDICTOR used in employee selection. A PREDICTOR can be said to YIELD VALID INFERENCES about future performance based on a careful scrutiny of its content. This is referred to as CONTENT VALIDITY. Content validity refers to whether a predictor measurement process (e.g., test items or interview questions) ADEQUATELY SAMPLES important job behaviors and elements involved in performing a job. CONTENT VALIDITY is ESTABLISHED by having EXPERTS such as job incumbents or supervisors judge the appropriateness of the test items, taking into account information from the job analysis. Ideally, the experts should determine that the test DOES indeed SAMPLE the job content in a representative way. It is COMMON for ORGANIZATIONS constructing their own screening tests for specific jobs to RELY heavily on this CONTENT-BASED EVIDENCE of VALIDITY. Content validity is closely LINKED to JOB ANALYSIS. A SECOND TYPE of VALIDITY evidence is called CONSTRUCT VALIDITY, which refers to whether a PREDICTOR TEST, such as a PENCIL-AND-PAPER test of MECHANICAL ABILITY used to SCREEN school bus mechanics, actually measures what it is SUPPOSED to measure—(a) the ABSTRACT construct of "MECHANICAL ABILITY" and (b) whether these measurements yield ACCURATE predictions of job performance. There are TWO COMMON FORMS of EMPIRICAL EVIDENCE about construct validity. WELL-VALIDATED instruments, such as the SAT and STANDARDIZED EMPLOYMENT tests, have ESTABLISHED CONSTRUCT VALIDITY by demonstrating that these tests CORRELATE POSITIVELY with the RESULTS of other tests of the same construct. This is REFERRED to as CONVERGENT VALIDITY. In other words, a test of mechanical ability should correlate (CONVERGE) with another, DIFFERENT test of mechanical ability. In addition, a PENCIL-AND-PAPER test of mechanical ability should CORRELATE with a PERFORMANCE-BASED test of mechanical ability. In ESTABLISHING a test's construct validity, researchers are also concerned with DIVERGENT, or DISCRIMINANT, VALIDITY—the test should NOT CORRELATE with tests or measures of constructs that are totally unrelated to mechanical ability. As with content validity, CREDIBLE judgments about a test's construct validity require sound PROFESSIONAL judgments about patterns of convergent and discriminant validity. CRITERION-RELATED VALIDITY is a THIRD type of validity evidence and is EMPIRICALLY demonstrated by the RELATIONSHIP between test scores and some measurable criterion of job success, such as a MEASURE of WORK OUTPUT or QUALITY. There are TWO COMMON WAYS that predictor-criterion correlations can be EMPIRICALLY GENERATED. The FIRST is the FOLLOW-UP METHOD (often referred to as PREDICTIVE VALIDITY). Here, the screening test is administered to applicants WITHOUT INTERPRETING the SCORES and WITHOUT using them to SELECT among applicants. Once the APPLICANTS become EMPLOYEES, criterion measures such as job performance assessments are COLLECTED. If the test instrument is VALID, the test scores should CORRELATE with the CRITERION MEASURE. The obvious ADVANTAGE of the PREDICTIVE VALIDITY method is that it DEMONSTRATES how SCORES on the SCREENING INSTRUMENT actually RELATE to future job performance. The major DRAWBACK to this approach is the TIME that it TAKES to ESTABLISH validity. During this validation period, applicants are TESTED, but are NOT HIRED BASED on their test scores.
In the SECOND APPROACH, known as the PRESENT-EMPLOYEE method (also termed CONCURRENT VALIDITY), the test is GIVEN to CURRENT employees, and their SCORES are CORRELATED with some CRITERION of their CURRENT performance. Again, a RELATIONSHIP between test scores and criterion scores SUPPORTS the MEASURE'S VALIDITY. Once there is evidence of CONCURRENT validity, a COMPARISON of applicants' test SCORES with the incumbents' scores is POSSIBLE. Although the concurrent validity method LEADS to a QUICKER ESTIMATE of validity, it may NOT be as ACCURATE an assessment of CRITERION-RELATED validity as the PREDICTIVE method, because the job INCUMBENTS represent a SELECT GROUP, and their test PERFORMANCE is likely to be HIGH, with a RESTRICTED range of scores (a brief simulation sketch at the end of this section illustrates this range-restriction effect). In other words, there are NO TEST SCORES for the "POOR" job performers, such as workers who were fired or quit their jobs, or applicants who were not chosen for jobs. ALL predictors used in employee selection must be both RELIABLE and VALID. STANDARDIZED and COMMERCIALLY available psychological tests have typically DEMONSTRATED evidence of RELIABILITY and VALIDITY for use in certain circumstances. However, even with widely used standardized tests, it is CRITICAL that their ability to predict job success be ESTABLISHED for the particular positions in question and for the SPECIFIC criterion. It is especially necessary to assure the reliability and validity of NONSTANDARDIZED screening methods, such as a WEIGHTED application form or a TEST constructed for a specific job.
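
A hedged simulation sketch of the two criterion-related designs and the range-restriction problem just described (synthetic data; treating "incumbents" as the top half of test scorers is an assumption made only for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    test = rng.normal(size=n)                             # screening-test scores for all applicants
    performance = 0.5 * test + 0.87 * rng.normal(size=n)  # later job performance ("true" r is about .5)

    # Predictive (follow-up) design: everyone is tested and later measured on the criterion.
    r_predictive = np.corrcoef(test, performance)[0, 1]

    # Concurrent (present-employee) design: only incumbents are available, assumed here
    # to be the top half of test scorers, so the range of test scores is restricted.
    incumbents = test > np.median(test)
    r_concurrent = np.corrcoef(test[incumbents], performance[incumbents])[0, 1]

    print(f"predictive r  = {r_predictive:.2f}")   # close to the true validity
    print(f"concurrent r  = {r_concurrent:.2f}")   # attenuated by range restriction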

The Use of Assessment Center Methodology for Assessing Employability of College Graduates PG. 121

Since the 1990s, the use of assessment centers and assessment center methods has GROWN. There has been an increase in the use of assessment centers in managerial selection, and assessment centers are also being used as a means of training and "brushing up" managers' skills. Assessment center methods are also being EXPANDED to facilitate screening and orientation of entry-level employees. In colleges and universities, assessment center methodologies are being used to evaluate new students or in outcome evaluation—measuring the managerial skills and potential "employability" of students as they graduate. For instance, in one university's master's-level program in industrial/organizational psychology, first-year master's students are put through an assessment center evaluation, with second-year master's students serving as evaluators. The goal was to evaluate the "managerial potential" of business school graduates and to track them during their early careers as a way of determining whether the knowledge and skills measured in the assessment center are indeed predictive of future career success. A follow-up study did indeed demonstrate that college assessment center ratings of leadership potential correlated with later ratings of leadership made by the former college students' work supervisors. In another student assessment center, assessment center ratings were related to early career progress of the alumni. Why the SURGE of interest in assessment CENTERS? There are SEVERAL REASONS. FIRST, the assessment center methodology makes sense. It offers a detailed, multimodal assessment of a wide range of knowledge, skills, abilities, and psychological characteristics. This is the test battery approach we discussed earlier. SECOND, much of the measurement in assessment centers is "performance based," and there is a trend in assessment away from pencil-and-paper assessment and toward more behavior- or performance-based assessment. THIRD, assessment centers are easier to conduct today. With computer and video technology, it is easy to conduct an assessment center and store the participants' performance data for later, more convenient, evaluation (Lievens, 2001). FINALLY, evidence indicates that assessment centers serve a dual purpose by assessing participants and also helping them to develop managerial skills through the assessment center exercises.

COGNITIVE ABILITY TESTS PG. 108-109

TESTS of cognitive ability RANGE from TESTS of GENERAL INTELLECTUAL ABILITY to tests of SPECIFIC COGNITIVE SKILLS. Group-administered, pencil-and-paper tests of general intelligence have been used in employee screening for some time. TWO such WIDELY USED OLDER instruments are the OTIS SELF-ADMINISTERING TEST OF MENTAL ABILITY and the Wonderlic Personnel Test (now called the WONDERLIC COGNITIVE ABILITY TEST). BOTH are fairly SHORT and ASSESS BASIC VERBAL and NUMERICAL ABILITIES. Designed to MEASURE the ABILITY to LEARN simple jobs, to follow instructions, and to solve work-related problems and difficulties, these tests are USED to SCREEN APPLICANTS for positions as office clerks, assembly workers, machine operators, and certain frontline supervisors. One CRITICISM of using GENERAL INTELLIGENCE TESTS for employee selection is that they MEASURE COGNITIVE ABILITIES that are TOO GENERAL to be EFFECTIVE PREDICTORS of specific job-related cognitive skills. However, research indicates that such general tests are REASONABLY GOOD PREDICTORS of job performance; in fact, general INTELLIGENCE is the most CONSISTENT PREDICTOR of PERFORMANCE across jobs. There has been some RELUCTANCE on the part of employers to USE GENERAL INTELLIGENCE TESTS for screening job applicants. Because there is some EVIDENCE that SCORES on some GENERAL INTELLIGENCE tests may FAVOR the ECONOMICALLY and EDUCATIONALLY advantaged, there are FEARS that general intelligence tests might DISCRIMINATE against certain ETHNIC MINORITIES, who tend to be overrepresented among the economically disadvantaged. It has been argued that general intelligence tests may UNDERESTIMATE the INTELLECTUAL abilities and potentials of members of certain ethnic minorities. However, a SERIES of META-ANALYSES CONCLUDED that cognitive ability tests are VALID for employee screening, that they are PREDICTIVE of job performance, and that they do NOT UNDERPREDICT the job performance of minority group members.

REFERENCES AND LETTERS OF RECOMMENDATION PG. 100

TWO other SOURCES of INFO used in employee screening and selection are REFERENCES and LETTERS OF RECOMMENDATION. Historically, very little research has examined their validity as selection tools. Typically, REFERENCE CHECKS and LETTERS OF RECOMMENDATION can provide FOUR TYPES OF INFO: (1) employment and educational history, (2) evaluations of the applicant's character, (3) evaluations of the applicant's job performance, and (4) the recommender's willingness to rehire the applicant. There are important REASONS that REFERENCES and LETTERS of RECOMMENDATION may have LIMITED IMPORTANCE in employee selection. FIRST, because APPLICANTS can usually CHOOSE their own sources for references and recommendations, it is UNLIKELY that they will SUPPLY the names of persons who will give bad recommendations. Therefore, letters of recommendation tend to be DISTORTED in a very POSITIVE direction, so positive that they may be useless in distinguishing among applicants. In addition, because of increased LITIGATION against individuals and former employers who provide negative recommendations, many companies are REFUSING to provide any kind of reference for former employees except for job title and dates of employment. Letters of recommendation are still widely USED, however, in applications to graduate schools and in certain professional positions. In many GRADUATE programs, steps have been taken to IMPROVE the EFFECTIVENESS of these letters as a SCREENING and SELECTION tool by including forms that ask the recommender to rate the applicant on a variety of dimensions, such as academic ability, motivation/drive, oral and written communication skills, and initiative. The USE of BACKGROUND CHECKS for past criminal activity has been on the RISE and has FUELED an industry of companies providing this service. Many companies routinely CONDUCT BACKGROUND CHECKS on most or all candidates for jobs before hire, in an attempt to protect employers from LITIGATION. Although background checks are becoming commonplace, there has been very LITTLE research examining their impact on organizations.

HIRING INTERVIEWS PG. 121-123

To obtain almost any job in the United States, an applicant must go through at least ONE hiring interview, which is the most widely used employee screening and selection device. Despite its widespread use, if NOT CONDUCTED PROPERLY, the hiring interview can be a very POOR PREDICTOR of future JOB PERFORMANCE. I/O psychologists have CONTRIBUTED greatly to our understanding of the effectiveness of interviews as a hiring tool. CARE must be taken to ENSURE the RELIABILITY and VALIDITY of judgments of applicants made in hiring interviews. Part of the PROBLEM with the VALIDITY of interviews is that many interviews are CONDUCTED HAPHAZARDLY, with LITTLE STRUCTURE to them. You may have experienced one of these poor interviews that seemed to be nothing more than a casual conversation, or you may have been involved in a job interview in which the interviewer did nearly all of the talking. In these cases it is obvious that little concern has been given to the fact that, just like a psychological test, the hiring interview is actually a MEASUREMENT TOOL, and employment DECISIONS derived from interviews should be held to the same standards of reliability, validity, and predictability as tests. A number of VARIATIONS on the traditional interview format have been DEVELOPED to try to IMPROVE the EFFECTIVENESS of interviews as a selection tool. One VARIATION is the SITUATIONAL INTERVIEW, which asks INTERVIEWEES how they would DEAL with SPECIFIC JOB-RELATED, hypothetical SITUATIONS. Another variation has been referred to as the behavior description interview (Janz, 1982) or STRUCTURED BEHAVIOR INTERVIEW, which asks interviewees to draw on PAST job incidents and behaviors to deal with hypothetical FUTURE WORK situations. A meta-analysis suggests that ASKING about PAST behaviors is BETTER than asking about HYPOTHETICAL SITUATIONS, although the ADDITIONAL STRUCTURE and FOCUS PROVIDED by these variations on traditional interviews are EFFECTIVE in IMPROVING the SUCCESS of hiring interviews as selection devices. There has been increased use of videoconference technology to conduct hiring interviews. One interesting finding is that interviewers tend to make more FAVORABLE EVALUATIONS of VIDEOCONFERENCE applicants than of face-to-face applicants, likely because some nonverbal cues, particularly cues that reveal anxiety and discomfort, are absent in videoconference interviews. When USED CORRECTLY as part of an employee screening and selection program, the hiring interview should have THREE MAJOR OBJECTIVES. FIRST, the interview should be USED to HELP FILL IN GAPS in the information OBTAINED from the applicant's RESUME and APPLICATION FORM and from employment tests, and to MEASURE the kinds of factors that are only available in a face-to-face encounter, such as poise and oral communication skills. SECOND, the hiring interview should provide applicants with REALISTIC JOB PREVIEWS, which help them decide whether they really WANT the JOB and OFFER an INITIAL ORIENTATION to the organization. FINALLY, because the hiring interview is ONE WAY that an ORGANIZATION INTERACTS DIRECTLY with a portion of the general public, it can SERVE an important P.R. FUNCTION for the COMPANY.

Work sample tests pg. 111

Used in job skill tests to MEASURE applicants' ABILITIES to PERFORM BRIEF examples of important job tasks

TEST FORMATS PG. 104-106

Test formats, or the WAYS in which TESTS are ADMINISTERED, can vary greatly. SEVERAL DISTINCTIONS are important when CATEGORIZING employment tests.
INDIVIDUAL VS. GROUP TESTS—Individual tests are administered to only ONE person at a time. In individual tests, the test administrator is usually more INVOLVED than in GROUP tests. Typically, tests that require some kind of sophisticated apparatus, such as a driving simulator, or tests that require constant supervision are administered individually, as are certain intelligence and personality tests. GROUP TESTS are designed to be given simultaneously to MORE than ONE PERSON, with the administrator usually serving as only a TEST MONITOR. The obvious ADVANTAGE of GROUP TESTS is the REDUCED COST for ADMINISTRATOR TIME. More and more, tests of all types are being administered online, so the distinction between individual and group testing is becoming blurred, as many applicants can complete screening instruments online simultaneously.
SPEED VS. POWER TESTS—Speed tests have a FIXED TIME LIMIT. An important FOCUS of a speed test is the NUMBER OF ITEMS COMPLETED in the time period provided. A TYPING TEST and many of the SCHOLASTIC achievement tests are examples of speed tests. A POWER TEST allows the test-taker sufficient time to COMPLETE ALL ITEMS. Typically, power tests have DIFFICULT ITEMS, with a focus on the percentage of items answered correctly.
PAPER-AND-PENCIL VS. PERFORMANCE TESTS—"Paper-and-pencil tests" refers to both paper versions of tests and online tests, which require some form of written reply, in either a forced-choice or an open-ended, "essay" format. Many employee screening tests, and nearly all tests in schools, are of this format. PERFORMANCE TESTS, such as typing tests and tests of manual dexterity or grip strength, usually INVOLVE the MANIPULATION OF PHYSICAL OBJECTS.

Criterion-related validity

The ACCURACY of a measurement instrument in DETERMINING the relationship between scores on the INSTRUMENT and some criterion of JOB SUCCESS

THE EFFECTIVENESS OF EMPLOYEE SCREENING TESTS PG. 115-118

The EFFECTIVENESS of using standardized tests for screening potential employees remains a CONTROVERSIAL issue. CRITICS of testing cite the LOW VALIDITY COEFFICIENTS (approximately 0.20) of certain employment tests. The VALIDITY COEFFICIENT is the CORRELATION COEFFICIENT between the PREDICTOR, or the TEST SCORE, and the CRITERION, usually a MEASURE of SUBSEQUENT JOB PERFORMANCE. However, SUPPORTERS believe that a COMPARISON of all SCREENING METHODS—tests, biographical information, and hiring interviews—across the full spectrum of jobs REVEALS that EMPLOYMENT TESTS are the best predictors of job performance. Obviously, the ABILITY of a TEST to PREDICT performance in a specific job DEPENDS on how well it can CAPTURE and measure the particular skills, knowledge, or abilities required. The MOST EFFECTIVE USE of screening tests OCCURS when a NUMBER of INSTRUMENTS are USED in COMBINATION to PREDICT effective job performance. Because most jobs are complex, involving a wide range of tasks, it is UNLIKELY that SUCCESSFUL PERFORMANCE is DUE to just ONE PARTICULAR TYPE of knowledge or skill. Therefore, ANY SINGLE TEST will only be able to PREDICT ONE ASPECT of a total job. Employment screening tests are usually GROUPED TOGETHER into a test battery. SCORES on the various tests in the BATTERY are used in combination to help SELECT the best possible CANDIDATES for the job. We have seen that standardized tests can be reliable and valid screening devices for many jobs. However, TWO IMPORTANT ISSUES regarding this use of tests must be considered: VALIDITY GENERALIZATION and TEST UTILITY. The VALIDITY GENERALIZATION of a screening test REFERS to its VALIDITY in PREDICTING PERFORMANCE in a job or setting DIFFERENT from the one in which the test was validated. For example, suppose a test was validated for selecting managers in one type of organization. If the test is also HELPFUL in CHOOSING MANAGERS in a service organization, its VALIDITY has GENERALIZED from ONE ORGANIZATION to ANOTHER. Similarly, VALIDITY GENERALIZATION would EXIST if a test of clerical abilities is SUCCESSFUL in selecting APPLICANTS for both secretarial and receptionist positions. HIGH VALIDITY GENERALIZATION of a standardized test will GREATLY INCREASE its usefulness—and REDUCE the WORKLOAD of I/O psychologists—because the INSTRUMENT may NOT NEED to be VALIDATED for use with each and every position and organization. Some I/O psychologists, such as Schmidt and his colleagues, have argued that the VALIDITY GENERALIZATION of most STANDARDIZED employee screening procedures is quite high, which means that they can be used successfully in a variety of employment settings and job classifications. At the OTHER EXTREME is the VIEW that the ABILITY of TESTS to PREDICT future job success is SITUATION SPECIFIC, and VALIDITY should be ESTABLISHED for each USE of a SCREENING INSTRUMENT. From an INTERNATIONAL perspective, some types of tests may GENERALIZE better across COUNTRIES and cultures than others. For example, tests of cognitive abilities should be important for many jobs throughout the world, and evidence suggests they are less prone to cultural effects, whereas personality tests may be more susceptible to cultural effects. TEST UTILITY is the VALUE of a SCREENING TEST in helping to AFFECT important ORGANIZATIONAL OUTCOMES. In other words, test utility DETERMINES the SUCCESS of a test in terms of DOLLARS GAINED by the company through the INCREASED PERFORMANCE and productivity of workers selected BASED on test scores. All in all, UTILITY ANALYSES of standardized employee testing programs INDICATE that such tests are USUALLY COST EFFECTIVE.
Hunter and Schmidt (1982) went so far as to ESTIMATE that the U.S. gross national product would be INCREASED by tens of billions of dollars per year if improved employee screening and selection procedures, including screening tests, were routinely implemented. UTILITY ANALYSES ALLOW the employer to determine the FINANCIAL GAINS of a TESTING PROGRAM and then compare them to the costs of developing and implementing the program (a simple dollar-value sketch follows at the end of this section). Another important ISSUE in TESTING is ETHICS in the ADMINISTRATION and USE of EMPLOYMENT TESTING, including the PROTECTION of the privacy of persons being tested. I/O psychologists are very CONCERNED about ETHICAL ISSUES in testing. In fact, the Society for Industrial and Organizational Psychology (SIOP) published a fourth edition of its Principles for the Validation and Use of Personnel Selection Procedures (SIOP, 2003). This publication OUTLINES important ethical concerns for employment testing. A FINAL ISSUE concerning testing is the issue of FAKING. Faking is trying to "BEAT" the TEST by DISTORTING RESPONSES to the test in an effort to PRESENT oneself in a POSITIVE, socially desirable way. Faking is a particular CONCERN for personality and integrity TESTS. Laypersons tend to believe that employment tests are easily faked, but this is not the case: FIRST, many tests have SUBSCALES designed to DETERMINE if a TEST-TAKER is trying to FAKE the test. SECOND, it is often DIFFICULT for the test-taker to DETERMINE exactly which RESPONSES are the CORRECT (desired) responses. FINALLY, there is EVIDENCE that PERSONALITY and INTEGRITY tests are quite ROBUST, still validly measuring their intended constructs even when test-takers are trying to fake them.
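
Returning to utility analysis, a sketch of the kind of dollar-value calculation involved, using the widely cited Brogden-Cronbach-Gleser formula (the choice of formula and every number below are illustrative assumptions, not figures from the text):

    # All figures below are hypothetical assumptions for illustration.
    n_hired = 50              # people hired per year using the test
    tenure_years = 2          # average years the new hires stay
    validity = 0.35           # validity coefficient of the screening test
    sd_performance_dollars = 12_000   # dollar value (SDy) of one SD of job performance
    mean_z_of_hired = 0.8     # average standardized test score of those selected
    n_applicants = 400
    cost_per_applicant = 25   # cost to administer and score the test

    # Brogden-Cronbach-Gleser utility: gain from better selection minus testing costs.
    gain = n_hired * tenure_years * validity * sd_performance_dollars * mean_z_of_hired
    cost = n_applicants * cost_per_applicant
    print(f"estimated utility of the testing program: ${gain - cost:,.0f}")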

EVALUATION OF WRITTEN MATERIALS PG. 97-98

The FIRST STEP in the SCREENING PROCESS involves the EVALUATION of WRITTEN MATERIALS, such as APPLICATIONS and RESUMES. Usually, STANDARD APPLICATION forms are used for screening LOWER-LEVEL positions in an organization, with resumes used to provide biographical data and other background information for higher-level jobs, although many companies require all applicants to complete an application form. The MAIN PURPOSE of the APPLICATION and RESUME is to COLLECT BIOGRAPHICAL INFO such as education, work experience, and outstanding work or school accomplishments; applications are often submitted ONLINE. Researchers have suggested that work experience can be MEASURED in both QUANTITATIVE and QUALITATIVE terms. It is also important to mention, however, that first impressions play a big role in selection decisions. Because WRITTEN MATERIALS are usually the FIRST contact a POTENTIAL EMPLOYER has with a job candidate, the IMPRESSIONS of an applicant's credentials received from a resume or application are very IMPORTANT. In fact, research has shown that IMPRESSIONS of qualifications from written applications INFLUENCED impressions of applicants in their subsequent interviews. Most companies USE a STANDARD APPLICATION form, completed ONLINE or as a HARD COPY. As with all employment screening devices, the application form should COLLECT only INFO that has been determined to be JOB RELATED; questions that are NOT JOB RELATED, and especially those that may lead to job DISCRIMINATION, should be avoided. From the EMPLOYER'S PERSPECTIVE, the DIFFICULTY with APPLICATION forms is in EVALUATING and INTERPRETING the INFO OBTAINED to determine the most qualified applicants. There have been ATTEMPTS to QUANTIFY the BIOGRAPHICAL INFO obtained from application forms through the use of either WEIGHTED APPLICATION forms or BIOGRAPHICAL INFORMATION BLANKS (BIBs). WEIGHTED APPLICATION forms ASSIGN different weights to each piece of information on the form (a brief scoring sketch follows below). The WEIGHTS are DETERMINED through detailed research, conducted by the organization, to determine the relationship between specific bits of biographical data, often referred to as BIODATA, and criteria of success on the job. ANOTHER TYPE of INFO from job applicants is a WORK SAMPLE. Often a WORK SAMPLE consists of a WRITTEN SAMPLE (e.g., a report or document) and can be developed into standardized tests.
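
A minimal sketch of how a weighted application form turns answers into a single score (the items, response options, and weights are hypothetical; in practice the weights come from the organization's own validation research relating each item to job success):

    # Hypothetical weighted application form: each response option carries an
    # empirically derived weight; the score is the sum of the weights for an applicant's answers.
    weights = {
        "years_related_experience": {"0-1": 0, "2-4": 2, "5+": 4},
        "education": {"high school": 1, "associate": 2, "bachelor or higher": 3},
        "typing_certificate": {"no": 0, "yes": 2},
    }

    def score_application(responses):
        """Sum the weight assigned to each of the applicant's answers."""
        return sum(weights[item][answer] for item, answer in responses.items())

    applicant = {
        "years_related_experience": "2-4",
        "education": "bachelor or higher",
        "typing_certificate": "yes",
    }
    print("weighted application score:", score_application(applicant))  # 2 + 3 + 2 = 7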

TYPES OF EMPLOYEE SCREENING TESTS PG. 104

The MAJORITY of employee screening and selection instruments are STANDARDIZED tests that have been SUBJECTED to RESEARCH aimed at demonstrating their VALIDITY and RELIABILITY. While many of these TESTS are PUBLISHED in the research literature, there has been quite a bit of GROWTH in CONSULTING ORGS. that ASSIST companies in testing and screening. These organizations EMPLOY I/O psychologists to create SCREENING TESTS and other assessments that are PROPRIETARY and used in their consulting work. More and more, companies are OUTSOURCING their personnel testing work to these consulting firms.

HIRING INTERVIEWS CONT. PG. 123-125

There are serious CONCERNS about the ACCURACY of JUDGMENTS made from HIRING INTERVIEWS, because UNLIKE screening tests or application forms, which ask for specific, quantifiable information, HIRING INTERVIEWS are typically more FREEWHEELING AFFAIRS. Interviewers may ASK completely DIFFERENT QUESTIONS of different applicants, which makes it very DIFFICULT to COMPARE RESPONSES. Although HIRING INTERVIEWS are supposed to be OPPORTUNITIES for GATHERING INFO about the APPLICANT, at times the INTERVIEWER may do the majority of the TALKING. These interviews certainly YIELD very LITTLE INFO about the applicant and probably no valid assessment of the person's qualifications. The RELIABILITY of INTERVIEWER JUDGMENTS is also PROBLEMATIC. Different interviewers may arrive at completely DIFFERENT EVALUATIONS of the same applicant, even when evaluating the same interview. Also, because of nervousness, fatigue, or some other reason, the same APPLICANT might NOT PERFORM AS WELL in one interview as in another, which further CONTRIBUTES to LOW RELIABILITY. Perhaps the greatest SOURCE of PROBLEMS AFFECTING hiring interview validity is interviewer biases. Interviewers MAY ALLOW FACTORS such as an applicant's gender, race, physical disability, physical attractiveness, appearance, or assertiveness to INFLUENCE their JUDGMENTS. There may also be a TENDENCY for an INTERVIEWER to make a snap JUDGMENT, arriving at an OVERALL EVALUATION of the applicant in the first few moments of the interview. The INTERVIEWER may then SPEND the remainder of the time trying to CONFIRM that first IMPRESSION, selectively attending to only the information that is consistent with the initial evaluation. Another potential SOURCE of BIAS is the CONTRAST EFFECT, which can OCCUR AFTER the interview of a particularly good or bad applicant. All subsequent applicants may THEN be EVALUATED either very NEGATIVELY or very POSITIVELY in contrast to this person. In general, the hiring interview may FAIL to PREDICT JOB SUCCESS ACCURATELY because of a MISMATCH between the SELECTION INSTRUMENT and the INFO it obtains, and the requirements of most jobs. Receiving a POSITIVE EVALUATION in an interview is related to APPLICANTS' ABILITIES to PRESENT THEMSELVES in a positive manner and to carry on a one-on-one conversation. In other words, EVALUATIONS of INTERVIEWEES may be STRONGLY AFFECTED by their level of communication or SOCIAL SKILLS. Therefore, for some jobs, such as those that involve primarily technical skills, PERFORMANCE in the INTERVIEW is NOT RELATED to performance on the job, because the TYPES of SKILLS REQUIRED to do well in the interview are NOT the SAME as those required in the job. Researchers have also found a relationship between general cognitive ability and interview performance, suggesting that MORE INTELLECTUALLY GIFTED PEOPLE receive more POSITIVE INTERVIEW EVALUATIONS. DESPITE THIS relationship, research suggests that INTERVIEW PERFORMANCE from a WELL-CONDUCTED, structured interview can PREDICT job PERFORMANCE ABOVE and beyond the effects of cognitive ability.

Test battery PG.116

a COMBO of employment TESTS used to INCREASE the ability to predict future job performance

Internal consistency

a common method of ESTIMATING a measurement instrument's RELIABILITY by EXAMINING how the VARIOUS ITEMS of the INSTRUMENT INTERCORRELATE

Validity

a concept REFERRING to the ACCURACY of a MEASUREMENT instrument and its ABILITY to make ACCURATE INFERENCES about a CRITERION

Assessment center PG.118

a detailed, STRUCTURED EVALUATION of job APPLICANTS using a variety of instruments and techniques

Parallel forms PG. 102

a method of ESTIMATING the RELIABILITY of a measurement instrument by CORRELATING SCORES on TWO DIFFERENT but EQUIVALENT versions of the SAME INSTRUMENT

test-retest reliability

a method of determining the STABILITY of a MEASUREMENT instrument by ADMINISTERING the SAME MEASURE to the SAME PEOPLE at TWO different TIMES and then correlating the scores

Emotional Intelligence (EI) PG. 113

ability to understand, REGULATE, and communicate EMOTIONS and to use them to INFORM thinking

