Chapter 6
What is the importance of incremental validity?
Incremental validity refers to the degree to which an additional predictor explains something about the criterion measure that is not already explained by the established predictors. If adding the predictor improves prediction of the criterion beyond what the existing predictors provide, it adds value to the test.
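A minimal sketch in Python, using simulated data and hypothetical variable names, of how incremental validity is often quantified: the gain in explained criterion variance (R squared) when a new predictor is added to a model that already contains the established predictor.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated data (purely illustrative): an established predictor,
# a candidate new predictor, and a criterion influenced by both.
established = rng.normal(size=n)
new_predictor = rng.normal(size=n)
criterion = 0.6 * established + 0.3 * new_predictor + rng.normal(size=n)

def r_squared(X, y):
    """Proportion of criterion variance explained by a least-squares fit."""
    X = np.column_stack([np.ones(len(y)), X])      # add intercept column
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ coefs
    return 1 - residuals.var() / y.var()

r2_old = r_squared(established.reshape(-1, 1), criterion)
r2_new = r_squared(np.column_stack([established, new_predictor]), criterion)

# Incremental validity: the gain in explained criterion variance
print(f"R^2 with established predictor only: {r2_old:.3f}")
print(f"R^2 after adding the new predictor:  {r2_new:.3f}")
print(f"Incremental validity (Delta R^2):    {r2_new - r2_old:.3f}")
```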
Fairness
the extent to which a test is used in an impartial, just, and equitable way.
miss rate
Miss rate: the proportion of cases a test fails to identify accurately. Evaluating validity involves assessing whether the test has an acceptable hit rate relative to its miss rate. Misses take two forms: false positives (the test indicates the attribute is present when it is not) and false negatives (the test indicates the attribute is absent when it is actually present).
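A small worked example (the counts are invented for illustration) showing how hit and miss rates are computed from true and false positives and negatives:

```python
# Outcomes of a hypothetical screening test on 200 people.
true_positives = 40    # test says "has condition", person has it
true_negatives = 130   # test says "no condition", person does not
false_positives = 20   # test says "has condition", person does not
false_negatives = 10   # test says "no condition", person has it

total = true_positives + true_negatives + false_positives + false_negatives

hit_rate = (true_positives + true_negatives) / total     # accurate identifications
miss_rate = (false_positives + false_negatives) / total  # inaccurate identifications

print(f"Hit rate:  {hit_rate:.2f}")   # 0.85
print(f"Miss rate: {miss_rate:.2f}")  # 0.15
```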
content validity
The degree to which the content of a test is representative of the domain it is supposed to cover; established by evaluating the subjects, topics, or content covered by the items in the test.
What is the difference between concurrent and predictive validity?
Concurrent validity is the degree to which a test score is related to a criterion measure obtained at the same time the test is administered; predictive validity is the degree to which a test score predicts a criterion measure obtained at a later point in time.
Bias
a factor inherent in a test that systematically prevents accurate, impartial measurement
Validity
a judgement or estimate of how well a test measures what it is supposed to measure within a particular context.
Know the types of rater error and be able to think of examples
A rating error is a judgment resulting from the intentional or unintentional misuse of a rating scale. Raters may be too lenient (leniency or generosity error, e.g. rating every employee near the top of the scale), too severe (severity error, e.g. rating every essay harshly), or reluctant to give ratings at the extremes (central tendency error, e.g. rating everyone "average"). The halo effect, defined below, is another example.
base rate
the extent to which a particular phenomenon (trait, behavior, or attribute) exists in the population, typically expressed as a proportion
concurrent validity
degree which a test score is related to some criterion measure obtained at the same time
How do bias and fairness relate? Can you have an unbiased, yet unfair test?
Bias is a technical property of the test itself (a factor that systematically prevents accurate measurement), whereas fairness is a value judgment about how the test is used. Eliminating bias supports fairness, but yes, a test can be unbiased yet still be used unfairly, for example when a technically sound test is administered to people who have not had an equal opportunity to learn the material it covers.
evidence of homogeneity
how uniform a test is in measuring a single concept
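Homogeneity is commonly examined with internal-consistency statistics such as coefficient alpha or item-total correlations. Below is a minimal numpy sketch of coefficient alpha on simulated item responses; the data and the number of items are purely illustrative.

```python
import numpy as np

# Simulated item responses (rows = test takers, columns = items),
# all driven by a single underlying trait so the items are homogeneous.
rng = np.random.default_rng(1)
latent = rng.normal(size=(300, 1))
items = latent + rng.normal(scale=0.8, size=(300, 6))   # 6 items, one trait

def cronbach_alpha(scores):
    """Coefficient alpha: (k/(k-1)) * (1 - sum of item variances / total variance)."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

print(f"Coefficient alpha: {cronbach_alpha(items):.2f}")  # high alpha -> homogeneous items
```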
Evidence of pretest/posttest changes
test scores change as a result of some experience between a pretest and a posttest (e.g. therapy).
Criterion
A criterion is the standard against which a test or a test score is evaluated. An adequate criterion is relevant to the matter at hand, valid for the purpose for which it is being used, and uncontaminated, meaning it is not based on the predictor itself.
Be familiar with the different types of evidence for construct validity
Evidence of homogeneity; evidence of changes; evidence of pretest/posttest changes; evidence from distinct groups.
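A sketch of what "evidence from distinct groups" (the method of contrasted groups) looks like in practice, using invented numbers: scores should differ between groups already known to differ on the construct.

```python
import numpy as np

# Simulated scores on a hypothetical depression inventory for a
# clinical group and a control group; a large group difference is
# one piece of evidence that the test measures the intended construct.
rng = np.random.default_rng(4)
clinical = rng.normal(loc=28, scale=6, size=60)
control = rng.normal(loc=14, scale=6, size=60)

pooled_sd = np.sqrt((clinical.var(ddof=1) + control.var(ddof=1)) / 2)
cohens_d = (clinical.mean() - control.mean()) / pooled_sd

print(f"Mean (clinical): {clinical.mean():.1f}")
print(f"Mean (control):  {control.mean():.1f}")
print(f"Cohen's d:       {cohens_d:.2f}")  # large difference supports construct validity
```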
Understand what constitutes good face validity and what happens if it is lacking/why we might not want the test to be face valid
A test has good face validity when, from the perspective of the test taker, it appears relevant and appropriate for its stated purpose. If face validity is lacking, test takers (and other stakeholders) may not take the test seriously, may lose motivation to respond carefully, or may distrust the results even if the test is technically valid. In some situations a developer may deliberately avoid face validity, for example so that test takers cannot easily see what is being measured and fake their responses.
If a test has high construct validity, what does this tell you about the test?
It tells you that scores on the test relate to other measures and behave as the theory of the underlying construct predicts, i.e., the test actually measures the construct it was designed to measure. Because construct validity is the umbrella under which other sources of validity evidence fall, high construct validity indicates the test is a sound measure overall.
face validity
The extent to which a test appears, from the perspective of the test taker, to measure what it is supposed to measure.
predictive validity
The success with which a test predicts the behavior it is designed to predict; it is assessed by computing the correlation between test scores and the criterion behavior.
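A minimal sketch with simulated data of how a predictive validity coefficient is typically obtained: the Pearson correlation between test scores and a criterion measured later (the variable names and numbers are illustrative only).

```python
import numpy as np

# Hypothetical admissions-style test scores and a criterion measured
# afterward (e.g., later GPA). The validity coefficient is simply the
# Pearson correlation between the two.
rng = np.random.default_rng(2)
test_scores = rng.normal(loc=100, scale=15, size=150)
later_criterion = 0.05 * test_scores + rng.normal(scale=0.7, size=150)

validity_coefficient = np.corrcoef(test_scores, later_criterion)[0, 1]
print(f"Predictive validity coefficient: {validity_coefficient:.2f}")
```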
construct validity
A measure of validity arrived at through a comprehensive analysis of (a) how scores on the test relate to scores on other tests and measures, and (b) how scores on the test can be understood within a theoretical framework for the construct the test was designed to measure. Construct validity serves as the umbrella under which other forms of validity evidence fall.
What happens to the validity coefficient when you restrict or inflate the range of scores?
Restricting the range of scores (for example, by studying only people who were already selected on the basis of the test) attenuates the validity coefficient: the correlation computed on a restricted sample is usually lower than the one computed on the full range of scores. Using the full range of test scores therefore yields a more accurate, and usually higher, validity coefficient; inflating the range has the opposite effect and artificially raises the coefficient.
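A demonstration with simulated data of how restriction of range attenuates the validity coefficient; the selection rule (keeping only the top quarter of scorers) is an illustrative assumption.

```python
import numpy as np

# Restricting the range of test scores (keeping only high scorers,
# as happens when only selected people can be followed up) lowers
# the observed correlation with the criterion.
rng = np.random.default_rng(3)
test = rng.normal(size=1000)
criterion = 0.6 * test + rng.normal(scale=0.8, size=1000)

full_r = np.corrcoef(test, criterion)[0, 1]

selected = test > np.percentile(test, 75)          # only the top quarter of scorers
restricted_r = np.corrcoef(test[selected], criterion[selected])[0, 1]

print(f"Validity coefficient, full range:       {full_r:.2f}")
print(f"Validity coefficient, restricted range: {restricted_r:.2f}")  # noticeably lower
```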
Know the definition of validity and how it differs from reliability
Validity: a judgement or estimate of how well a test measures what it is supposed to measure within a particular context. It differs from reliability, which refers to the degree to which measurement produces consistent outcomes; a test can be reliable without being valid, but it cannot be valid unless it is reliable.
What is the Halo effect?
a tendency to give a particular person a higher rating than he or she objectively deserves because of a favorable overall impression
hit rate
The proportion of cases a test identifies accurately; the sum of true positives and true negatives relative to all decisions made.
criterion-related validity
The validity of a test as measured by comparing test scores with independent measures (criteria) of what the test is designed to assess; established by evaluating the relationship of scores obtained on the test to scores on those criterion measures.