Chapter 6

Gottfredson and group differences on tests

According to Gottfredson, the answer to group differences on tests will not come from measurement-related research, because differences in scores on many of the tests in question arise principally from real differences in job-related abilities.

Evidence of homogeneity

How uniform a test is in measuring a single concept
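
Homogeneity is often summarized with an internal-consistency statistic such as Cronbach's coefficient alpha. A minimal Python sketch using made-up item responses (the data and function are illustrative only, not from the source):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency estimate; rows = respondents, columns = items."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-item scale answered by 6 respondents (1-5 ratings)
responses = np.array([
    [4, 5, 4, 4, 5],
    [2, 2, 3, 2, 2],
    [5, 5, 5, 4, 5],
    [3, 3, 2, 3, 3],
    [4, 4, 4, 5, 4],
    [1, 2, 1, 2, 1],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")  # high alpha suggests a homogeneous scale
```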

Incremental validity

The degree to which an additional predictor explains something about the criterion measure that is not explained by predictors already in use
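
One common way to quantify incremental validity is the increase in squared multiple correlation (R²) with the criterion when the new predictor is added to the predictors already in use. A minimal sketch with simulated scores (all names and data are hypothetical):

```python
import numpy as np

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """Proportion of criterion variance explained by a set of predictors."""
    X1 = np.column_stack([np.ones(len(y)), X])       # add an intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(0)
n = 200
existing = rng.normal(size=n)                        # predictor already in use
new_pred = rng.normal(size=n)                        # candidate additional predictor
criterion = 0.5 * existing + 0.3 * new_pred + rng.normal(scale=0.8, size=n)

r2_old = r_squared(existing[:, None], criterion)
r2_new = r_squared(np.column_stack([existing, new_pred]), criterion)
print(f"incremental validity (delta R^2) = {r2_new - r2_old:.3f}")
```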

Test fairness

The extent to which a test is used in an impartial, just, and equitable way

Evidence of changes with age

Some constructs are expected to change over time (e.g. reading rate)

Minimize test bias

- *Prevention during test development* is the best cure for test bias, though a procedure called estimated true score transformations represents one of many available post hoc remedies
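
Assuming the post hoc remedy referred to here is the classical estimated true score (Kelley's equation), it regresses each observed score toward the mean in proportion to the test's reliability:

```latex
\hat{T} = \bar{X} + r_{xx}\,(X - \bar{X})
```

where X is the observed score, X̄ is the mean of the reference group, and r_xx is the test's reliability coefficient.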

Factor analysis

- A class of mathematical procedures, frequently employed as data reduction methods, designed to identify the factors (underlying variables) on which people may differ
- A new test should load on a common factor with other tests of the same construct (see the sketch below)
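
A minimal sketch of the "loads on a common factor" idea, using scikit-learn's FactorAnalysis on simulated scores (the variable names and data are hypothetical):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 500
construct = rng.normal(size=n)                        # latent trait shared by the tests
established_a = construct + rng.normal(scale=0.5, size=n)
established_b = construct + rng.normal(scale=0.5, size=n)
new_test      = construct + rng.normal(scale=0.5, size=n)
unrelated     = rng.normal(size=n)                    # measures something else

scores = np.column_stack([established_a, established_b, new_test, unrelated])
fa = FactorAnalysis(n_components=1).fit(scores)
# The new test should show a loading comparable to the established tests;
# the unrelated measure should load near zero.
print("loadings:", np.round(fa.components_[0], 2))
```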

Validity coefficient

- A correlation coefficient that provides a measure of the relationship between test scores and scores on the criterion measure
- Validity coefficients are affected by restriction or inflation of the range of scores (illustrated in the sketch below)
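
A minimal sketch of a validity coefficient, plus the effect of range restriction, using simulated predictor and criterion scores (all values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
test = rng.normal(size=n)                                  # predictor (test scores)
criterion = 0.6 * test + rng.normal(scale=0.8, size=n)     # criterion measure

r_full = np.corrcoef(test, criterion)[0, 1]                # validity coefficient

selected = test > np.percentile(test, 70)                  # only high scorers retained
r_restricted = np.corrcoef(test[selected], criterion[selected])[0, 1]

print(f"full-range validity coefficient: {r_full:.2f}")
print(f"restricted-range coefficient:    {r_restricted:.2f}")  # typically smaller
```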

Bias

- A factor inherent in a test that systematically prevents accurate, impartial measurement
- Bias implies systematic variation in test scores
- Prevention during test development is the best cure for test bias

Face Validity

- A judgment concerning how relevant the test items appear to be
- If a test appears to measure what it purports to measure "on the face of it," it could be said to be high in face validity
- Many self-report personality tests are high in face validity, whereas projective tests, such as the Rorschach, tend to be low in face validity (i.e. it is not apparent what is being measured)
- A perceived lack of face validity may lead to a lack of confidence that the test measures what it purports to measure

Rating error

- A judgment resulting from the intentional or unintentional misuse of a rating scale
- Raters may be too lenient, too severe, or reluctant to give ratings at the extremes (*central tendency error*)

Rating

- A numerical or verbal judgment that places a person or attribute along a continuum identified by a scale of numerical or word descriptors called a rating scale

Halo effect

- A tendency to give a particular person a higher rating than he or she objectively deserves because of a favorable overall impression

Central tendency error

- A type of rating error wherein the rater exhibits a general reluctance to issue ratings at either the positive or negative extreme
- Consequently, all or most of the rater's ratings tend to cluster in the middle of the rating continuum

False positive

- An error in measurement characterized by a tool of assessment indicating that the test taker possesses or exhibits a particular trait, ability, behavior, or attribute when in fact the test taker does not

Expectancy data

- An expectancy table shows the percentage of people within specified test-score intervals who subsequently were placed in various categories of the criterion (e.g. placed in a "passed" category or a "failed" category)
- In a corporate setting, test scores may be divided into intervals (e.g. poor, adequate, excellent) and examined in relation to job performance (e.g. satisfactory or unsatisfactory); expectancy tables, or charts, may show that the higher the initial rating, the greater the probability of job success (see the sketch below)
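
A minimal sketch of building an expectancy table from simulated data: scores are grouped into intervals, and the percentage reaching the "passed" criterion category is reported per interval (the intervals, labels, and data are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
scores = rng.normal(50, 10, size=n)                            # hypothetical test scores
passed = rng.random(n) < 1 / (1 + np.exp(-(scores - 50) / 8))  # pass probability rises with score

bins = [0, 40, 50, 60, np.inf]
labels = ["poor", "adequate", "good", "excellent"]
print(f"{'interval':<10}{'% passed':>10}")
for lo, hi, label in zip(bins[:-1], bins[1:], labels):
    in_bin = (scores >= lo) & (scores < hi)
    pct = 100 * passed[in_bin].mean() if in_bin.any() else float("nan")
    print(f"{label:<10}{pct:>9.1f}%")   # higher intervals should show higher pass rates
```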

Concurrent validity

- An index of the degree to which a test score is related to some criterion measure obtained at the same time (concurrently)

Predictive validity

- An index of the degree to which a test score predicts some criterion, or outcome, measure in the future
- Test scores are obtained first and criterion measures are obtained later, so the test is evaluated on how well it forecasts standing on the criterion

Lawshe (1975)

- Developed a method whereby raters judge each item as to whether it is essential, useful but not essential, or not necessary for job performance
- If more than half the raters indicate that an item is essential, the item has at least some content validity
- The *content validity* of a test varies across cultures and time, and political considerations may also play a role
- *Developed the content validity ratio (CVR)*, computed as shown below
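
Lawshe's content validity ratio for an item is computed from the panel's "essential" judgments:

```latex
\mathrm{CVR} = \frac{n_e - N/2}{N/2}
```

where n_e is the number of panelists rating the item "essential" and N is the total number of panelists; CVR is 0 when exactly half of the panel rates the item essential and positive when more than half do.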

Expectancy chart

- Graphic representation of an expectancy table
- May show, for example, that the higher the initial rating, the greater the probability of job success

Expectancy table

- Shows the *percentage* of people within specified test-score intervals who subsequently were placed in various categories of the criterion (for example, placed in a "passed" category or "failed" category)

Validation

- The process of gathering and evaluating evidence about validity
- Both test developers and test users may play a role in the validation of a test
- Test users may validate a test with their own group of test takers; this is known as *local validation*

Hit rate

- The proportion of people who are accurately identified as possessing or not possessing a particular trait, behavior, characteristic, or attribute based on test scores
- For example, the proportion of job applicants a screening test correctly identifies as able or unable to perform the job (see the sketch below)
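
A minimal sketch of computing a hit rate, along with the false positive and false negative proportions, from a cutoff score applied to simulated data (the cutoff and data are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
has_trait = rng.random(n) < 0.4                       # "true" status (hypothetical)
scores = rng.normal(55, 8, size=n) + 6 * has_trait    # test scores, higher if trait present

cutoff = 60
flagged = scores >= cutoff                            # test says the trait is present

hits = (flagged == has_trait).mean()                  # correct decisions (hit rate)
false_pos = (flagged & ~has_trait).mean()             # flagged but trait absent
false_neg = (~flagged & has_trait).mean()             # not flagged but trait present
print(f"hit rate: {hits:.2f}, false positives: {false_pos:.2f}, false negatives: {false_neg:.2f}")
```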

Criterion

- The standard against which a test or a test score is evaluated
- An adequate criterion is relevant for the matter at hand, valid for the purpose for which it is being used, and uncontaminated, meaning it is not based, even in part, on the predictor

Content validity

- This is a measure of validity based on an evaluation of the subjects, topics, or content covered by the items in the test
- A judgment of how adequately a test samples behavior representative of the universe of behavior that the test was designed to sample
- Do the test items adequately represent the content that should be included in the test?

Criterion-related validity

- This is a measure of validity obtained by evaluating the relationship of scores obtained on the test to scores on other tests or measures
- A judgment of how adequately a test score can be used to infer an individual's most probable standing on some measure of interest (i.e. the criterion)

Validity

A judgment or estimate of how well a test measures what it purports to measure in a particular context

Test blueprint

A plan regarding the types of information to be covered by the items, the number of items tapping each area of coverage, the organization of the items in the test, etc.

Evidence from distinct groups

Scores on a test vary in a predictable way as a function of membership in some group (e.g. scores on the Psychopathy Checklist for prisoners vs. civilians)

Convergent evidence

Scores on the test undergoing construct validation tend to correlate highly in the predicted direction with scores on older, more established, tests designed to measure the same (or a similar) construct

Evidence of pretest/posttest changes

Test scores change as a result of some experience between a pretest and a posttest (e.g. therapy)

Construct validity

- This is a measure of validity that is arrived at by executing a comprehensive analysis of *a.* how scores on the test relate to other test scores and measures, and *b.* how scores on the test can be understood within some theoretical framework for understanding the construct that the test was designed to measure
- The ability of a test to measure a theorized construct (e.g. intelligence, aggression, personality, etc.) that it purports to measure
- If a test is a valid measure of a construct, high scorers and low scorers should behave as theorized
- All types of validity evidence, including evidence from the content- and criterion-related varieties of validity, come under the umbrella of construct validity

Discriminant evidence

Validity coefficient showing little relationship between test scores and other variables with which scores on the test should not theoretically be correlated

