Personnel Psychology 333 Exam 1 Study Guide

I would correlate subjects' scores on a test with their scores on an independent criterion (another measure) of the trait assessed by the test.

How would you go about establishing the criterion-related validity of a selection measure?

Types of Selection Methods

Resumes and references, background checks, physical and cognitive ability tests, personality inventories, interviews, work samples, drug tests, and integrity tests

predictive validity benefits

Simple, accurate, and direct measure of validity; samples the population of interest

In practice, one rarely sees a validity coefficient larger than .60, and validity coefficients in the range of .30 to .40 are commonly considered high. A coefficient is statistically significant if the chances of obtaining its value by chance alone are quite small: usually less than 5 in 100.

How are validity coefficients interpreted?
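
A minimal sketch (with made-up predictor and criterion scores, not data from the guide) of computing a validity coefficient and checking its statistical significance using SciPy's pearsonr:

```python
# Hypothetical test (predictor) and performance (criterion) scores for 10 employees
from scipy.stats import pearsonr

test_scores = [52, 61, 47, 70, 58, 65, 49, 73, 55, 68]
performance = [3.1, 3.8, 2.9, 4.2, 3.5, 4.0, 3.0, 4.4, 3.3, 4.1]

r, p_value = pearsonr(test_scores, performance)
print(f"validity coefficient r = {r:.2f}, p = {p_value:.3f}")

# By the usual convention, the coefficient is treated as statistically
# significant when p < .05 (fewer than 5 chances in 100 by chance alone).
```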

Correction for attenuation

How do we 'correct for' unreliability of our predictor?

Have raters judge whether each item belongs on the test and calculate inter-rater reliability.

How would you go about establishing the content validity of a selection measure?

Types of judgmental performance appraisal measures

Written narratives (brief essays), comparison methods, and rating methods (numerical rating procedures); these judgmental measures are more prone to personal bias than objective measures

Uniform Guidelines

Guidelines issued by federal agencies charged with ensuring compliance with federal equal employment legislation; they explain recommended employer procedures in detail

Reliability analyses

Help us to determine the dependability of data we will use in selection decision making

Race norming

when different groups of people have different scores designated as "passing" grades on a test for employment

internal consistency

when the items or parts within a single measure are consistent with one another (i.e., assess the same attribute)

Restriction of range

when variance in scores on selection measures has been reduced (for example, because only those who were hired are available for study), which lowers observed validity coefficients.

Criterion

A rule, test, or standard for judgment or evaluation

CHARACTERISTICS OF JOB PERFORMANCE MEASURES

Individualization, relevance, measurability, and variance

Enhancing Recruitment

1. Carefully consider the sometimes-competing objectives of attracting versus selecting job seekers; the ratio of qualified applicants applying per position opening is one measure. Selection procedure norms (preferably, local norms—see Chapter 3) and performance evaluations are examples of measures used for monitoring changes in applicant quality.
2. Identify specific objectives of a recruitment program—the number of positions to be filled, the WRCs required for those positions, and when positions should be filled.
3. Formulate a specific strategy for accomplishing recruitment program objectives.
4. Ensure that recruitment screens (including self-selection screens) reflect the desired job-related characteristics of job candidates. Change the focus of these screens as you progress through the recruitment stages: early on, when attracting and generating applicant interest, focus on organizational reputation; after applicants apply, focus on maintaining their interest by providing information about person-job (PJ) fit; as you approach the actual job offer, focus on person-organization (PO) fit.
5. Identify who represents the potential applicant population; the relevant labor market likely mirrors that population.
6. Use targeted recruitment to reach protected groups underrepresented in the organization. Search recruitment Web sites targeting such underrepresented groups and contact specific organizations representing those groups to find possible sources of applicants.
7. Develop a recruitment Web site that is attractive, easy to navigate, and easy to use for submitting application materials. Enhance the aesthetic quality of the site (including the use of video, podcasts, and employee testimonials); all narrative content on the site should be well written in an engaging style. Be sure all materials (including photos, videos, and themes presented) accurately reflect the true nature of the organization.
8. For organizations using recruitment Web sites and receiving large numbers of resumes, consider using resume-screening and applicant-tracking software to manage the number of resumes that must be individually reviewed and the applicant contacts that must be made. As a form of prescreening, consider incorporating applicant assessments (team orientation; personal fit with the recruiting organization) as part of the recruitment program.
9. Evaluate the organization's image being communicated by recruitment and related media. Ensure that image is the desired organizational "brand."
10. Consider using recruitment Web sites when searching for employees. If a recruitment Web site is used, establish links to it on popular social networking sites, particularly www.LinkedIn.com.
11. Encourage referrals of potential applicants from employees; particularly valuable are individuals recommended by high-performing employees.
12. Select recruiters on characteristics associated with dealing with people; they play an important role in attracting initial interest in the organization. They should be enthusiastic, personable, and extraverted. At a minimum, train them in interviewing, in managing social interactions during recruitment, and on knowledge of work operations for the jobs being recruited.
13. Use technology to aid in recruiting; however, use more personal means of communication (for example, telephone calls) to contact applicants who are particularly desirable, and stay in touch with desired applicants throughout the recruitment process.
14. Use RJPs. Positive results are most likely to occur when (a) applicants have not formed an opinion of what a job is like, (b) applicants can predict how they will react to certain job characteristics presented in the RJP (for example, standing for an eight-hour day), and (c) applicants are aware of the WRCs they possess and their interests.
15. Use metrics and other measures to evaluate recruitment programs and associated activities to identify "what works."

nominal

Two or more mutually exclusive categories; the numbers assigned have no quantitative meaning

Criterion-related validity, content validity, construct validity

3 Types of recognized validity

Split-half reliability

A measure of reliability in which a test is split into two parts and an individual's scores on both halves are compared.
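
A minimal sketch of split-half reliability with made-up dichotomous item data: split the items into odd and even halves, correlate the two half scores, and apply the Spearman-Brown correction to estimate full-length reliability.

```python
import numpy as np

# Rows = examinees, columns = items (1 = correct, 0 = incorrect); illustrative only
items = np.array([
    [1, 1, 0, 1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1, 1, 1, 0],
    [0, 0, 0, 1, 0, 0, 1, 0],
    [1, 0, 1, 1, 1, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 0, 0],
])

odd_half = items[:, 0::2].sum(axis=1)    # score on odd-numbered items
even_half = items[:, 1::2].sum(axis=1)   # score on even-numbered items

r_half = np.corrcoef(odd_half, even_half)[0, 1]
r_full = (2 * r_half) / (1 + r_half)     # Spearman-Brown correction to full length
print(f"half-test r = {r_half:.2f}, split-half reliability = {r_full:.2f}")
```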

Synthetic validity

A method of testing the validity of a selection procedure by combining jobs that require similar abilities and by separately validating the specific predictors intended to measure those abilities.

Job component validity

A process of inferring the validity of a predictor, based on existing evidence, for a particular dimension or component of job performance

Job Analysis

A purposeful, systematic process for collecting information on the important work-related aspects of a job.

Uniform Guidelines on Employee Selection Procedures

Advocate that job analysis be part of the development, application, and validation of selection procedures. Emphasize the role of job analysis in HR selection. Do not permit content validation to be used with abstract personality characteristics

ADEA

Age Discrimination in Employment Act—forbids discrimination against any person aged 40 or older in hiring, firing, promotion, or other aspects of employment. Private employers, governments, unions, and employment agencies are covered

Targeted recruiting

Recruitment aimed at specific types of individuals who are likely to apply or be interested; targets certain demographic groups and/or KSAOs

Expectancy tables

Allow a test user to interpret examinees' predictor scores in terms of potential scores on a criterion. They are a form of criterion-referenced interpretation.

ADA

Americans with Disabilities Act—forbids discrimination on the basis of a physical or mental disability if the individual can perform the "essential functions" of the job. Private employers, labor unions, and employment agencies are covered

Internal consistency reliability estimate

An index of a measure's similarity of content. Shows the extent to which all parts of a measure (such as items or questions) are similar in what they measure.

True score

An individual's actual score on a variable being measured, as opposed to the score the individual obtained on the measure itself.

Constructing New Selection Measures?

Analyzing the job for which a measure is being developed - Selecting the method of measurement to be used - Planning and developing the measure - Administering, analyzing, and revising the preliminary measure - Determining the reliability and validity of the revised measure for the jobs studied - Implementing and monitoring the measure in the human resource selection system

Background information

Application forms, training and experience evaluations, reference checks, and biographical data generally are used to collect information on job applicants and their backgrounds. Application forms typically ask job applicants to describe themselves, including contact information, educational background, relevant licenses, and the like. Training and experience often are assessed based on information on the application form. Prospective employers make reference checks by contacting individuals who can accurately comment on applicants' past work behaviors and associated characteristics as well as performance. These checks often are used to verify information stated on the application form as well as to provide additional data on the applicant. Biographical data questionnaires consist of questions about applicants' past life and work histories. Certain past life and work experiences are used based on the assumption that the best predictor of individuals' future behavior is their past behavior.

Predictors or Selection Procedures

Background information, interviews, and tests

Organizational Citizenship Behaviors

Behaviors that are not required of organizational members but that contribute to and are necessary for organizational efficiency, effectiveness, and competitive advantage

predictive validity (criterion-related validity) advantages

Better info for predicting future outcomes

Spurlock v. United Airlines (1972)

A Black applicant was rejected as a flight officer because he lacked a college degree and 500 hours of flight time; the court held these requirements were job related. The company's burden of proof in defending against adverse impact diminishes as the human risk involved in the job increases.

Classical test theory

Charles Spearman; the reliability of an assessment is the ratio of true score variance to total score variance, which allows us to figure out how much of the assessment score is due to the true score.
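
In equation form (standard classical test theory notation, added here for reference rather than taken from the guide):

```latex
X = T + E, \qquad
\sigma^2_X = \sigma^2_T + \sigma^2_E, \qquad
r_{XX} = \frac{\sigma^2_T}{\sigma^2_X}
```

where X is the observed score, T the true score, E random measurement error, and r_XX the reliability coefficient.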

job analysis questionnaires (including task analysis inventories)

Checklists used to collect information about jobs in a uniform manner.

Title VII

Civil Rights Act of 1964—forbids discrimination based on sex, race, color, national origin, or religion. Private employers with at least 15 employees, governments, unions, employment agencies, employers receiving federal assistance are covered

Correction for attenuation: divide the observed validity coefficient by the square root of the reliability of the predictor and/or the criterion measure

Correction of the reduction, caused by low reliability, in the estimated correlation between a test and another measure. The correction for attenuation formula is used to estimate what the correlation would have been if the variables had been perfectly reliable.
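
The standard correction-for-attenuation formulas, in common textbook notation (added here for reference):

```latex
\hat{r}_{xy} = \frac{r_{xy}}{\sqrt{r_{xx}\, r_{yy}}}
\qquad \text{or, correcting for criterion unreliability only,} \qquad
\hat{r}_{xy} = \frac{r_{xy}}{\sqrt{r_{yy}}}
```

For example, with an observed validity of r_xy = .30 and criterion reliability r_yy = .64, correcting for criterion unreliability gives .30 / √.64 = .30 / .80 ≈ .38.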

parallel forms

Different versions of a test used to assess test reliability; the change of forms reduces effects of direct practice, memory, or the desire of an individual to appear consistent on the same items.

Interviews

Employment interviews are used to collect additional information about job applicants. The employment interview principally consists of questions asked by job interviewers. Responses often are used for assessing applicants' job suitability and their "fit" with an organization.

Price Waterhouse v Hopkins 1989 Supreme Court Case

Established that gender stereotyping is actionable as sex discrimination, and established the mixed-motive framework, which enables employees to prove discrimination even when other, lawful reasons for the adverse employment action exist alongside discriminatory motivations.

.8

Evidence of discrimination could be found if the ratio of the hiring rate of the minority group to the hiring rate of the majority group is less than _____.
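
A minimal sketch of the four-fifths (.8) rule using made-up applicant and hire counts:

```python
# Hypothetical counts; illustrative only
minority_applicants, minority_hired = 50, 10
majority_applicants, majority_hired = 100, 40

minority_rate = minority_hired / minority_applicants   # 0.20
majority_rate = majority_hired / majority_applicants   # 0.40

impact_ratio = minority_rate / majority_rate            # 0.50
print(f"impact ratio = {impact_ratio:.2f}")
if impact_ratio < 0.8:
    print("Below .80 -- evidence of adverse impact under the four-fifths rule")
```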

Total sales, budget expenditure, and the number of claims processed

Examples of objective performance appraisal criteria

Errors of measurement

Factors that affect obtained scores but are not related to the characteristic, trait, or attribute being measured

concurrent validity benefits

Fast, and the validation study is less risky because it is conducted over the short term

Employers realized that if tests no longer require expensive validation procedures for jobs at the entry level, then they could improve their selection programs by including tests, while saving money and time.

How would you use validity generalization in an organization?

Concurrent Validity Steps

Identify a predictor (such as a test); administer the test to present employees; correlate test scores with present employees' performance on the job. If an acceptable correlation exists, the test can be used for selection of future employees.

then criterion scores may be contaminated.

If scores on a criterion are influenced by variables other than the predictor

Legal issues in selection

Important tasks become "essential job functions" under the Americans with Disabilities Act. Companies can refuse to hire disabled applicants if they cannot perform these functions (vs. "marginal job functions"), with some exceptions.

Factors Affecting Attraction of Applicants

Inducements offered by the firm (salary, benefits, flexibility, career path, etc.); the firm's targeted applicant pool (education level, experience, applicant demographics, etc.); external labor market conditions; job and organizational characteristics

objective performance appraisal criteria

Internet hits, total sales, and budget accuracy are all examples of

RJP's: Realistic Job Previews

Involve the presentation of both positive and potentially negative information to job candidates

Linear Regression

Involves the determination of how changes in criterion scores are related to changes in predictor scores. In general, you are likely to come across two common types of linear regression: simple and multiple regression.

job analysis interviews

Job incumbents tend to give the most accurate information about what is actually done on the job. Speak to job holder, colleagues, and supervisors to verify the information is accurate.

Tests

Literally hundreds of tests have been used for selection purposes. The types of tests available can be classified in many different ways; the descriptive labels indicated here will give you a feeling for the range of options available. Aptitude, or ability, tests, for example, measure how well individuals can perform a job or specific aspects of a job. Abilities measured by aptitude tests include intellectual, mechanical, spatial, perceptual, and motor. Achievement tests are employed to assess individuals' proficiency at the time of testing (for example, job proficiency or knowledge).

Regression Equation

Mathematically describes the functional relationship between the predictor and criterion. If the validity coefficient is statistically significant, and the regression equation is known, criterion scores can be predicted from predictor information
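
A minimal sketch (with made-up scores) of fitting a simple regression equation and using it to predict a criterion score from a new predictor score:

```python
import numpy as np

predictor = np.array([52, 61, 47, 70, 58, 65, 49, 73])           # e.g., test scores
criterion = np.array([3.1, 3.8, 2.9, 4.2, 3.5, 4.0, 3.0, 4.4])   # e.g., performance ratings

b, a = np.polyfit(predictor, criterion, deg=1)   # slope (b) and intercept (a)
print(f"regression equation: Y_hat = {a:.2f} + {b:.3f} * X")
print(f"predicted criterion for X = 60: {a + b * 60:.2f}")
```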

Criteria or Measures of Job Success

Objective Production Data, Personnel Data, Judgmental Data, Job or Work Sample Data, or Training Proficiency Data

Predictive Validity Steps

Obtain scores from applicants and performance measures for the people who were hired and correlate these measures with the original test scores to obtain the predictive validity coefficient

ONET

Occupational Information Network compiled by the United States Department of Labor

coefficient of determination (r^2)

Percentage of variation in the criterion that we can expect to know in advance because of our knowledge of the predictor; calculated by squaring the correlation coefficient
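
A quick worked example (illustrative numbers):

```latex
r = .40 \;\Rightarrow\; r^2 = (.40)^2 = .16
```

so a validity coefficient of .40 means about 16% of the variance in the criterion is accounted for by the predictor.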

Judgmental Data

Performance appraisals or ratings frequently serve as criteria in selection research. They most often involve a supervisor's rating of subordinates on a series of behaviors or outcomes important to job success, including task performance, citizenship behaviors, and counterproductive work behaviors (CWBs). Supervisor or rater judgments play a predominant role in defining this type of criterion data.

Subject-matter expert (SME)

Person who is well versed in the content of a human resource development program.

Personnel Data

Personnel records and files frequently contain information on employees that can serve as important criterion measures. Absenteeism, tardiness, voluntary turnover, accident rates, salary history, promotions, and special awards are examples of such measures.

Price Waterhouse v Hopkins

Plaintiff, Ann Hopkins, claimed she was denied partnership at the firm for two years in a row based on her lack of conformity to stereotypes about how women should act and what they should look like. A written comment from one partner stated that Hopkins needed a "course in charm school." Her head supervisor stated that she needed to "walk more femininely, talk more femininely, dress more femininely, wear make-up, have her hair styled, and wear jewelry." Hopkins was well qualified and frequently outperformed her male co-workers.

First you obtain test scores (job application), then correlate with subject's performance (criterion)

Predictive Validity Steps

Consider using recruitment Web sites; encourage referrals of potential applicants from employees; select recruiters on characteristics associated with dealing with people; use technology to aid in recruiting, but use more personal means of communication with applicants who are particularly desirable and stay in touch with them throughout the recruitment process; use realistic job previews (RJPs); use metrics and other measures to evaluate recruitment programs and associated activities to identify "what works"

Recommendations for recruitment

developing an appropriate number of job candidates (e.g., ten for each open position), keeping costs reasonable, meeting legal obligations and business needs, and reducing the percentage of applicants who are poorly qualified or have the wrong skills.

Recruitment goals include

Affirmative action remedies

Recruitment; training programs; training managers about bias, fair processes, and the value of diversity; a supportive organizational culture; preferential selection as a tie-breaker; and preferential selection among qualified applicants.

Griggs v. Duke Power Company

Struck down employment requirements (a high school diploma and aptitude tests) that were not job related and had an adverse impact on Black applicants

Uniform Guidelines on Employee Selection

Supersede previous regulations (a previous version referred to as the Guidelines) and represent a joint agreement among the EEOC, Department of Justice, Department of Labor, and Civil Service Commission.

HRIS (Human Resource Information System)

Systematic tool for gathering, storing, maintaining, retrieving, and revising HR data.

Organizational branding

Organizations tailor strategies and methods to communicate the image they wish to portray; this positively influences applicants' early perceptions of the organization

Kuder-Richardson reliability

Takes the average of the reliability coefficients that would result from all possible ways of subdividing a measure; it helps solve the problem of how best to divide a measure for calculating reliability.
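
The KR-20 form of this index, as commonly written for dichotomously scored items (added here for reference):

```latex
KR_{20} = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i q_i}{\sigma^2_X}\right)
```

where k is the number of items, p_i the proportion answering item i correctly, q_i = 1 - p_i, and σ²_X the variance of total scores.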

Job content domain

The behaviors and work-related characteristics (WRCs) that represent the content of the job to be assessed

Content validity

The degree to which the content of a test is representative of the domain it's supposed to cover.

Test reliability

The extent to which a test gives the same result for an individual over time and situation.

Construct validity

The extent to which there is evidence that a test measures a particular hypothetical construct.

scorer reliability or inter-rater reliability and agreement

The extent to which two people scoring a test agree on the test score, or the extent to which a test is scored correctly

In a specific situation, a detailed job analysis may not be necessary technically, but failure to have evidence of an "adequate" job analysis, in the view of the trial judge, may result in an adverse decision in court.

The level of detail desired may also be determined by the likelihood of litigation. Prudent personnel researchers attend not only to the best wisdom of their profession but to the realities of the courtroom.

predictors

The pieces of information obtained from a job candidate that are used to predict how successfully the candidate will perform in a future job.

Selection

The process of collecting and evaluating information about an individual in order to extend an offer of employment.

Obtained score

The score actually calculated in the assessment process; used to estimate the true score.

Predictive validity

The success with which a test predicts the behavior it is designed to predict; it is assessed by computing the correlation between test scores and the criterion behavior.

Job or Work Sample Data

These data are obtained from a measure developed to resemble the job in miniature or sample of specific aspects of the work process or outcomes (for example, a typing test for a secretary). Measurements (for example, quantity and error rate) are taken on individual performance of these job tasks, and these measures serve as criteria. In practice, however, such measures are likely to be used as selection procedures rather than as criterion measures.

Objective Production Data

These data tend to be physical measures of work. Number of goods produced, amount of scrap left, and dollar sales are examples of objective production data.

Training Proficiency Data

This type of criterion focuses on how quickly and how well employees learn during job training activities. Often, such criteria are labeled trainability measures. Error rates during a training period and scores on training performance tests administered during training sessions are examples of training proficiency data.

Factors that affect the size of validity coefficients

True association, measurement error, and restricted range

Griggs v Duke Power

U.S. case that set the standard for determining whether discrimination based on disparate impact exists.

permitted where reasonably necessary to carry out a particular job function in the normal operations of the business

Under the BFOQ defense, intentional discrimination is

Cross-validation

Verifying the results obtained from a validation study by administering a test or test battery to a different sample (drawn from the same population)

True score + error score

What are components of your observed score on a measure?

Test-Retest, internal consistency (using split-half or coefficient alpha), scorer reliability, and parallel forms

What are different strategies for estimating the reliability of a test?

Required of most companies that have contracts to sell goods or services to the federal government, as a remedy for discrimination under Title VII, or if the employer refuses to change and continues with the discrimination

What are factors considered in whether to have an affirmative action program?

conducting a thorough job analysis and HR action plan in order to target the adequate quality and quantity of candidates

What are key aspects of a recruitment website that increase its effectiveness?

Organizational branding and Targeted recruiting

What are key concepts in the recruitment field that will help your organization recruit a diverse pool of qualified applicants?

Job analysis, identification of relevant job performance dimensions, identification of work related characteristics necessary for job, development of assessment devices to measure WRCs, validation of assessment devices (content, criterion), and the use of assessment devices to measure WRCs

What are key steps in the development of a selection program?

Objective production data, personnel data, judgmental data, job or work sample data, and training proficiency data

What are major categories of criteria used to measure job success?

Clinical, Statistical

What are several approaches to demonstrating the validity (job relatedness) of a selection test?

Use of existing measures is usually less expensive and less time-consuming than developing new ones. If previous research has been conducted, we will have some idea about the reliability, validity, and other characteristics of the measures. Existing measures may be superior to what could be developed in-house.

What are some benefits of using existing selection measures versus constructing a new one?

Test-retest reliability (stability over time) and split-half reliability

What are the two types of test reliability?

Legitimate, nondiscriminatory reason for the employment action (this has been interpreted to include reasons that are illogical, not business-related, and even illegal); business necessity; undue hardship; bona fide occupational qualification (BFOQ); standard deviation rule; job relatedness

What are the three options for defending a prima facie case?

True and error components

What are the two components of every observed test score?

how much error there is in prediction

What does standard error of estimate tell us?

the limitations of the correlation coefficient

What does the coefficient of determination tell us?

They are inversely related.

What is the general relationship between the standard error of measurement and test reliability?
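
The usual formula makes the inverse relationship visible (standard notation, added here for reference):

```latex
SEM = SD_X \sqrt{1 - r_{XX}}
```

For example, with SD_X = 10, a reliability of .91 gives SEM = 10√.09 = 3.0, while a reliability of .75 gives SEM = 10√.25 = 5.0; higher reliability means a smaller standard error of measurement.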

Number of predictors used

What is the major difference between simple regression and multiple regression?

When adverse impact occurs, the burden of proof is on the employer, who must demonstrate that the employment practice is job related or exempt

What is the organization required to do if its selection process results in adverse impact?

HRIS, ONET, or e-HRM

Where would you seek information about existing measures?

bona fide occupational qualification

Which method of defending a prima facie case is most likely to succeed?

It is very flexible with respect to data type

Which of the following is an advantage of the multiple regression selection decision making model?

The predictors are linearly related to the criterion.

Which of the following is an assumption of the multiple regression selection decision making model?

Multiple regression

Which selection decision making model is most appropriate to use when there is a tradeoff among predictors that does not affect overall job performance and there is a large sample size?

It indicates the reproducibility (consistency) of the scores a measure produces; a measure that is not reliable cannot be a valid predictor.

Why is test reliability important?

Counterproductive Work Behavior

a broad range of employee behaviors that are harmful to other employees or the organization

Cronbach's coefficient alpha (α)

a measure of internal consistency, that is, how closely related a set of items are as a group. It is considered to be a measure of scale reliability
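
A minimal sketch of computing coefficient alpha from an item-score matrix (made-up Likert-type ratings, illustrative only):

```python
import numpy as np

# Rows = respondents, columns = items on a 1-5 scale
items = np.array([
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 3, 3, 2],
    [4, 4, 5, 4],
], dtype=float)

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)        # variance of each item
total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```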

Coefficient of determination

a measure of the amount of variation in the dependent variable about its mean that is explained by the regression equation

Criterion-related validity

a measure of validity based on showing a substantial correlation between test scores and job performance scores

Test-Retest Reliability

a method for determining the reliability of a test by comparing a test taker's scores on the same test taken on separate occasions

criteria

a principle or standard by which something may be judged or decided

Ratio scale

a quantitative scale of measurement in which the numerals have equal intervals and the value of zero truly means "nothing"

Validation study

a research study that attempts to show that the predictor relates to the criterion

Interval scale

a scale of measurement in which the intervals between numbers on the scale are all equal in size

Ordinal scale

a scale of measurement in which the measurement categories form a rank order along a continuum

Criterion contamination

a situation that occurs when an actual criterion includes information unrelated to the behavior one is trying to measure

Multiple regression

a statistical technique that computes the relationship between a predictor variable and a criterion variable, controlling for other predictor variables. Uses multiple predictor variables
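
A minimal sketch (with made-up scores) of a two-predictor regression fit by least squares in NumPy; the predictor names here are hypothetical examples:

```python
import numpy as np

ability   = np.array([52, 61, 47, 70, 58, 65, 49, 73])          # predictor 1
interview = np.array([3.0, 4.0, 2.5, 4.5, 3.5, 4.0, 3.0, 5.0])  # predictor 2
criterion = np.array([3.1, 3.8, 2.9, 4.2, 3.5, 4.0, 3.0, 4.4])  # job performance

# Design matrix with an intercept column
X = np.column_stack([np.ones(len(ability)), ability, interview])
coefs, *_ = np.linalg.lstsq(X, criterion, rcond=None)
a, b1, b2 = coefs
print(f"Y_hat = {a:.2f} + {b1:.3f}*ability + {b2:.3f}*interview")
```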

Critical Incidents

a way of evaluating the behaviors that are key in making the difference between executing a job effectively and executing it ineffectively

graphic rating scale

absolute rating; evaluates employees against a defined standard

Civil Rights Act of 1991

amended the original civil rights act, making it easier to bring discrimination lawsuits while also limiting punitive damages that can be awarded in those lawsuits. Forbids discrimination based on sex, race, color, national origin, or religion. Private employers with at least 15 employees, governments, unions, employment agencies, employers receiving federal assistance are covered

Measurement error

an error that occurs when there is a difference between the information desired by the researcher and the information provided by the measurement process

BFOQ (Bona Fide Occupational Qualification)

an exception in employment law that permits sex, age, religion, and the like to be used when making employment decisions, but only if they are "reasonably necessary to the normal operation of that particular business."

Reliability coefficient

an index of reliability, a proportion that indicates the ratio between the true score variance on a test and the total variance

Validity coefficients

correlations between scores on a predictor and scores on a criterion measure; their size indicates the strength of the predictor-criterion relationship

Uniform Guidelines

describe what evidence will be considered in judging discrimination and how an employer may defend a selection program.

Interrater reliability estimates

determine whether multiple raters are consistent in their judgments.

Clinical approach to selection

different evaluators assign different weights to an applicant's strengths and weaknesses.

benefits of using objective performance appraisal criteria

directly define the goals of the organization

concurrent validity disadvantages

Direction and causal element are unknown; range restriction; relationships could change over time; and it does not predict future performance

Task Performance

employee behaviors that are directly involved in the transformation of organizational resources into the goods or services that the organization produces

Griggs v. Duke Power Company Guidelines

The employer does not have to be shown to have intentionally discriminated, only that discrimination took place; all employment practices must be job related (everything required to perform a job must make sense); the burden of proof is on the employer to prove that a stated requirement is truly needed for the job

Simple regression

estimates the relationship between the dependent variable and one independent variable

job relatedness

exists when a test for employment is a legitimate measure of an individual's ability to do the essential functions of a job

standard error of estimate

gives a measure of the standard distance between the predicted Y values on the regression line and the actual Y values in the data
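
In formula form (standard notation, added here for reference):

```latex
SEE = SD_Y \sqrt{1 - r_{xy}^2}
```

For example, with SD_Y = 5 and r_xy = .40, SEE = 5√(1 − .16) ≈ 4.6, so predicted criterion scores typically miss actual scores by about 4.6 points.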

Disparate Treatment

intentional discrimination that occurs when people are purposely not given the same hiring, promotion, or membership opportunities because of their race, color, sex, age, ethnic group, national origin, or religious beliefs

Job Analysis Methods

interviews, questionnaires (including task analysis inventories), and Critical Incidents

measurement

involves the systematic application of rules for assigning numbers to objects (usually, people) to represent the quantities of a person's attributes or traits.

Parallel or equivalent forms

like alternate forms but even more "equal" by being administered under the same conditions

Nominal scale

measurement in which numbers are assigned to objects or classes of objects solely for the purpose of identification

the 4 scales of measurement are

nominal, ordinal, interval, ratio

ratio

numbers have actual quantity, absolute zero

Adaptive Performance

performance component that includes flexibility and the ability to adapt to changing circumstances

May lose test takers over the long term, and the researcher must be patient because it is a slow process

predictive validity (criterion-related validity) disadvantages

types of objective performance appraisal criteria

production data (dollar volume of sales, units produced, number of errors, amount of scrap) as well as employment data (accidents, turnover, absences, tardiness)

error component

Random error due to real-life conditions; the difference between the observed and true score; unreliable measures introduce error

ordinal

rank order numbering according to quantity

interval

rank order numbering plus equal intervals

Additive model (statistical approach to selection)

rating the attributes of each alternative and selecting the one which has the highest sum
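
A minimal sketch of the additive (summed-score) approach; the applicant names and standardized scores below are made up for illustration:

```python
# Each applicant's standardized predictor scores are summed; the highest total is selected.
applicants = {
    "Applicant A": {"ability": 0.8, "interview": 0.2, "work_sample": 0.5},
    "Applicant B": {"ability": 0.1, "interview": 1.1, "work_sample": 0.3},
    "Applicant C": {"ability": 0.6, "interview": 0.4, "work_sample": 0.9},
}
totals = {name: sum(scores.values()) for name, scores in applicants.items()}
best = max(totals, key=totals.get)
print(totals)
print("select:", best)
```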

Employee comparison

relative ratings; rank order, paired comparison, forced distribution

Human Resource Systems

Responsible for recruitment, planning, payroll, skill and performance management, and compensation; these systems help attract, retain, compensate, and motivate employees

Norms

In selection and testing, distributions of scores from a relevant reference group (e.g., local norms) used as standards for interpreting an individual's score

Utility analysis

technique that assesses the economic return on investment of human resource interventions such as staffing and training
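
One widely cited utility formulation is the Brogden-Cronbach-Gleser model (added here for reference; it is not reproduced in the guide):

```latex
\Delta U = N_s \, r_{xy} \, SD_y \, \bar{Z}_x \; - \; N \, C
```

where N_s is the number of applicants selected, r_xy the validity of the predictor, SD_y the standard deviation of job performance in dollars, Z̄_x the mean standardized predictor score of those selected, N the number of applicants assessed, and C the cost of assessing one applicant.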

Validity generalization

the ability of a screening instrument to predict performance in a job or setting different from the one in which the test was validated

Interrater agreement

the amount of agreement between two raters who use the same measure to rate the same person or behavior

Concurrent validity

the degree to which the measures gathered from one tool agree with the measures gathered from other assessment techniques

Face validity

the extent to which a test item appears to fit the particular trait it is measuring

Validity

the extent to which a test measures or predicts what it is supposed to

Reliability

the extent to which a test yields consistent results, as assessed by the consistency of scores on two halves of the test, on alternate forms of the test, or on retesting

validity generalization

the extent to which validity coefficients can be generalized across situations

Error score

the part of a test score that is random and contributes to the unreliability of a test

Validation

the process of determining how well a selection test or procedure predicts future job performance; the better or more accurate the prediction of future job performance, the more valid a test is said to be

e-HRM (electronic human resource management)

the processing and transmission of digitized HR information, especially using computer networking and the internet

Predictor

the test chosen or developed to assess attributes identified as important for successful job performance

adverse impact

unintentional discrimination that occurs when members of a particular race, sex, or ethnic group are unintentionally harmed or disadvantaged because they are hired, promoted, or trained (or any other employment decision) at substantially lower rates than others

Behavioral checklist

use of critical incidents (i.e. specific behavior that is indicative of good or poor performance); behaviorally anchored rating scales (BARS), behavioral observation scales (BOS)

