Test Construction Domain Quiz

Lowering the selection test cutoff will

increase the number of false positives and decrease the number of false negatives

The item discrimination index (D) ranges in value from:

-1.0 to +1.0 The item discrimination index is calculated by subtracting the percentage of examinees in the lower-scoring group who answered the item correctly from the percentage of examinees in the upper-scoring group who answered it correctly, and it ranges in value from -1.0 to +1.0. The index indicates the extent to which a test item discriminates between examinees who obtain high versus low scores on the entire test or on an external criterion.
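
A minimal sketch of the calculation in Python; the proportions are illustrative and not from this card:

```python
# Item discrimination index: D = p_upper - p_lower, where each p is the
# proportion of examinees in that scoring group who answered the item
# correctly. The proportions below are hypothetical.

def discrimination_index(p_upper: float, p_lower: float) -> float:
    """Return D, which ranges from -1.0 to +1.0."""
    return p_upper - p_lower

# 80% of the upper group and 30% of the lower group answer correctly:
print(round(discrimination_index(0.80, 0.30), 2))  # 0.5
```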

To maximize the ability of a test to discriminate among test takers, a test developer will want to include test items that vary in terms of difficulty. If the test developer wants to add more difficult items to her test, she will include items that have an item difficulty index of:

.10 An item difficulty level of .10 indicates a difficult item (only 10% of examinees in the sample answered it correctly) and is the best answer of those given.

The optimal item difficulty level (p) for a true/false test is:

.75 The optimal item difficulty level depends on several factors, including the probability that an examinee can select the correct answer by chance alone. When that probability is taken into account, the optimal difficulty level is halfway between 100% of examinees answering the item correctly and the probability of answering the item correctly by chance alone. For a true/false item, the latter is 50%, so the optimal item difficulty is 75% (.75), which is halfway between 100% and 50%.
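
A short sketch of that midpoint reasoning, assuming the chance probability is known; the four-option example is an added illustration:

```python
# Optimal item difficulty: halfway between 1.0 (all examinees correct)
# and the probability of answering correctly by chance alone.

def optimal_difficulty(chance: float) -> float:
    return chance + (1.0 - chance) / 2.0

print(optimal_difficulty(0.50))  # true/false item: 0.75
print(optimal_difficulty(0.25))  # hypothetical 4-option item: 0.625
```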

The minimum and maximum values of the standard error of estimate are:

0 and the standard deviation of the criterion The standard error of estimate equals the standard deviation of the criterion scores times the square root of one minus the validity coefficient squared. This formula indicates that the standard error of estimate ranges from 0 (which occurs when the validity coefficient is 1.0) to the standard deviation of the criterion scores (which occurs when the validity coefficient is 0).
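
A minimal Python sketch of that formula and its two extremes, using a hypothetical criterion standard deviation of 10:

```python
import math

# Standard error of estimate: SE_est = SD_y * sqrt(1 - r_xy**2), where SD_y
# is the criterion standard deviation and r_xy is the validity coefficient.

def std_error_of_estimate(sd_criterion: float, validity: float) -> float:
    return sd_criterion * math.sqrt(1.0 - validity ** 2)

print(std_error_of_estimate(10.0, 1.0))  # 0.0: perfect validity
print(std_error_of_estimate(10.0, 0.0))  # 10.0: equals the criterion SD
```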

The item characteristic curve provides three pieces of information about an item...

1. difficulty, 2. ability to discriminate between those who are high and low on the characteristic being measured, and 3. the probability of answering the item correctly by guessing.

Stella S. obtains a score of 50 on a test that has a standard deviation of 10 and a standard error of measurement of 5. The 95% confidence interval for Stella's score is approximately:

40 to 60 The 95% confidence interval for an obtained test score is constructed by multiplying the standard error of measurement by 1.96 and adding and subtracting the result to and from the examinee's obtained score. An interval of 40 to 60 is closest to the 95% confidence interval and was obtained by multiplying the standard error by 2.0 (instead of 1.96) and then adding and subtracting the result (10) to and from Stella's score of 50.
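
The arithmetic as a quick Python sketch, using the values given in the item:

```python
# 95% confidence interval: obtained score +/- 1.96 * SEM.
score, sem = 50, 5
margin = 1.96 * sem
print(round(score - margin, 1), round(score + margin, 1))  # 40.2 59.8
```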

A reliability coefficient of .60 indicates that ___ of variability in test scores is true score variability.

60% A reliability coefficient is interpreted directly as a measure of true score variability. A reliability coefficient of .60 indicates that 60% of the variability in scores is true score variability, while the remaining 40% of the variability is due to measurement (random) error.

Kuder-Richardson formula 20

A formula for computing the internal consistency reliability of a test whose items are scored dichotomously (e.g., right/wrong); it is equivalent to the average of all possible split-half coefficients.

multitrait-multimethod matrix

A matrix that includes information on correlations between the measure and traits that it should be related to and traits that it should not theoretically be related to. The matrix also includes correlations between the measure of interest and other same-methods measures and measures that use different assessment methods. When a measure correlates highly with other measures of the same trait, the measure has convergent validity; when it has low correlations with measures of different traits, it has discriminant (divergent) validity. Convergent and discriminant validity are used as evidence of construct validity, and the multitrait-multimethod matrix contains correlation coefficients that provide information about a measure's convergent and discriminant validity.

coefficient alpha

A measure of internal-consistency reliability that is the average of all possible split-half coefficients resulting from different splittings of the scale items. Coefficient alpha will yield a low reliability coefficient if items are not internally consistent (i.e., if items do not measure the same content domain).
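
A minimal sketch of the computation using the common item-variance formula rather than literally averaging split-half coefficients; the toy data and the use of sample variances (ddof=1) are assumptions for illustration:

```python
import numpy as np

# Coefficient alpha = (k / (k - 1)) * (1 - sum of item variances / variance
# of total scores), for a matrix of examinees x items. Toy data below.

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

scores = np.array([[3, 4, 3],
                   [2, 2, 1],
                   [5, 4, 5],
                   [1, 2, 2]])
print(round(cronbach_alpha(scores), 2))  # ~0.93 for this toy matrix
```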

discriminant validity

A successful evaluation of discriminant validity shows that a test of a concept is not highly correlated with other tests designed to measure theoretically different concepts.

Item Response Theory (IRT)

A theory that relates the performance of each item to a statistical estimate of the test taker's ability on the construct being measured. Item response theory (IRT) was first proposed in the field of psychometrics for the purpose of ability assessment. It is widely used in education to calibrate and evaluate items in tests, questionnaires, and other instruments and to score subjects on their abilities, attitudes, or other latent traits.

Which of the following best describes the relationship between validity and reliability? Select one: A. A valid test is also a reliable test. B. A valid test may or may not be a reliable test. C. A reliable test is also a valid test. D. An invalid test is not a reliable test.

A valid test is also a reliable test. Reliability sets an upper limit on validity, which means that a valid test must also be a reliable test. However, high reliability does not guarantee validity - i.e., a test can be free from the effects of measurement error but not measure the attribute it was designed to measure.

A 200-item test that has been administered to 100 college students has a normal distribution, a mean of 145, and a standard deviation of 12. When the students' raw scores have been converted to percentile ranks, Alex obtains a percentile rank of 49, while his twin sister Alicia obtains a percentile rank of 90. The teacher realizes that she made a mistake in scoring Alex's and Alicia's tests: Both should have received a raw score that was five points higher. In terms of their percentile ranks, when the teacher adds the five points to Alex's and Alicia's scores, she can expect that: Select one: A. Alicia's percentile rank will increase more than Alex's. B. Alex's percentile rank will increase more than Alicia's. C. Alicia's and Alex's percentile ranks will increase by the same amount. D. Alicia's and Alex's percentile ranks will not change.

Alex's percentile rank will increase more than Alicia's. A problem with percentile ranks is that, when the raw scores are normally distributed, raw score differences near the center of the distribution are exaggerated when they are converted to percentile ranks, while raw score differences at the extremes are reduced. (A useful mnemonic for remembering this is "more change in the middle.") Because of this phenomenon, Alex's percentile rank will increase more than Alicia's. This makes sense if you think about the normal distribution: Since most of the scores are "piled up" near the center of the distribution, the 5-point increase in Alex's score will position him above a larger number of examinees than the 5-point increase will for Alicia. This difference will be reflected in their percentile ranks.

Differential Validity

A test shows differential validity when it has different validity coefficients for different groups. In employee selection, ruling out differential validity means confirming that the selection tool accurately predicts the performance of all possible employee subgroups, including white males, women, visible minorities, persons with disabilities, and Aboriginal people.

Criterion contamination has which of the following effects?

It artificially increases the predictor's criterion-related validity coefficient Criterion contamination has the effect of artificially inflating the correlation between the predictor and the criterion. Criterion contamination occurs when a rater's knowledge of a person's predictor performance biases how he/she rates the person on the criterion.

What is the difference between norm referenced and criterion referenced?

Norm-referenced tests compare an examinee's performance to that of other examinees and may measure the acquisition of skills and knowledge from multiple sources such as notes, texts, and syllabi. Criterion-referenced tests measure performance on specific concepts against a predetermined standard and are often used in a pre-test/post-test format.

What is an example of reliability and validity?

Reliability implies consistency: if you take the ACT five times, you should get roughly the same results every time. A test is valid if it measures what it's supposed to. Tests that are valid are also reliable. The ACT is valid (and reliable) because it measures what a student learned in high school.

Which of the following is used to estimate the effects of shortening or lengthening a test on the test's reliability coefficient?

Spearman-Brown formula Although the Spearman-Brown formula is probably most often used in conjunction with split-half reliability, it can actually be used whenever a test developer wants to estimate the effects of increasing or decreasing the number of test items on the test's reliability coefficient.
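
A minimal sketch of the formula, where n is the factor by which the test is lengthened or shortened; the reliability values are hypothetical:

```python
# Spearman-Brown prophecy formula: r_new = (n * r) / (1 + (n - 1) * r).

def spearman_brown(reliability: float, n: float) -> float:
    return (n * reliability) / (1 + (n - 1) * reliability)

print(round(spearman_brown(0.60, 2.0), 2))  # doubling the test: 0.75
print(round(spearman_brown(0.60, 0.5), 2))  # halving the test: 0.43
```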

In a normal distribution, which of the following represents the lowest score? A. percentile rank of 20 B. z-score of -1.0 C. T score of 25 D. Wechsler IQ score of 70

T score of 25 A T score is a standardized score with a mean of 50 and a standard deviation of 10. Therefore, a T score of 25 is two and one-half standard deviations below the mean and represents the lowest score of those given in the answers. A. Incorrect: a percentile rank of 20 is slightly less than one standard deviation below the mean. B. Incorrect: a z-score of -1.0 is one standard deviation below the mean. C. CORRECT: see above. D. Incorrect: Wechsler IQ scores have a mean of 100 and a standard deviation of 15, so an IQ score of 70 is two standard deviations below the mean. You must be familiar with the relationship between percentile ranks, z-scores, T scores, and IQ scores in a normal distribution.

Standard Scores or Standardized Scores

These scores express a person's distance from the mean in terms of the standard deviation of the distribution. They are continuous, have equality of units, and allow for comparisons between individuals.

Differential Validity

When a test yields significantly different validity coefficients for subgroups. Differential validity refers to a situation where a test is predictive for all groups but to different degrees.

When the heterotrait-monomethod coefficient is large, this indicates:

a lack of discriminant validity The heterotrait-monomethod coefficient represents the correlation between different traits being measured with the same kind of method. If you are validating a test, you want the heterotrait-monomethod coefficient to be low so that you have evidence of discriminant (divergent) validity. When this coefficient is large, this indicates a lack of discriminant validity.

The assumption underlying convergent validity is that:

a measure of a characteristic should correlate highly with a different type of measure that is already known to assess the same characteristic. One way to establish a test's construct validity is to determine that it correlates highly with other measures that are already known to assess the same trait. When it does, the measure is said to have convergent validity.

standard error of estimate

a measure of variability around the regression line - its standard deviation The standard error of estimate equals the standard deviation of the criterion scores times the square root of one minus the validity coefficient squared. This formula indicates that the standard error of estimate ranges from 0 (which occurs when the validity coefficient is 1.0) to the standard deviation of the criterion scores (which occurs when the validity coefficient is 0).

All other things being equal, which of the following tests is likely to have the largest reliability coefficient? Select one: A. a multiple-choice test that consists of items that each have five answer options B. a multiple-choice test that consists of items that each have four answer options C. a multiple-choice test that consists of items that each have three answer options D. a true-false test

a multiple-choice test that consists of items that each have five answer options All other things being equal, tests containing items that have a low probability of being answered correctly by guessing alone are more reliable than tests containing items that have a high probability of being answered correctly by guessing alone. Of the types of items listed, multiple-choice items with five answer options have the lowest probability of being answered correctly by guessing alone.

alternate forms reliability

a procedure for testing the reliability of responses to survey questions in which subjects' answers are compared after the subjects have been asked slightly different versions of the questions or when randomly selected halves of the sample have been administered slightly different versions of the questions

In factor analysis, a factor loading indicates the correlation between:

a test and an identified factor In factor analysis, a factor loading is a correlation coefficient that indicates the correlation between a test and an identified factor. A factor loading provides information about a test's factorial validity.

The correction for attenuation formula is used to measure the impact of increasing:

a test's reliability on its validity. The correction for attenuation formula is used to determine the impact of increasing the reliability of the predictor (test) and/or the criterion on the predictor's validity.
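
A minimal sketch of the classic disattenuation form of the formula, with hypothetical reliability and validity values:

```python
import math

# Correction for attenuation: r_corrected = r_xy / sqrt(r_xx * r_yy)
# estimates what the validity coefficient would be if the predictor and
# the criterion were perfectly reliable.

def disattenuate(r_xy: float, r_xx: float, r_yy: float) -> float:
    return r_xy / math.sqrt(r_xx * r_yy)

# Observed validity .40, predictor reliability .70, criterion reliability .80:
print(round(disattenuate(0.40, 0.70, 0.80), 2))  # ~0.53
```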

In terms of item response theory, the slope (steepness) of the item characteristic curve indicates the item's:

ability to discriminate between examinees. The steeper the slope of the item characteristic curve, the better its ability to discriminate between examinees who are high and low on the characteristic being measured. The item characteristic curve provides three pieces of information about an item: 1. its difficulty, 2. its ability to discriminate between those who are high and low on the characteristic being measured, and 3. the probability of answering the item correctly by guessing.

To obtain a "coefficient of stability," you would:

administer the same test twice to the same group of examinees on two separate occasions and correlate the two sets of scores. To obtain a coefficient of stability, the same measure is administered to the same group of examinees on two separate occasions and the scores obtained by the examinees are correlated. The result indicates the consistency (stability) of scores over time. The coefficient of stability is another name for the test-retest reliability coefficient.

To evaluate the concurrent validity of a new selection test for clerical workers, you would:

administer the test to a sample of current clerical workers and correlate their scores on the test with their recently assigned performance ratings To evaluate a test's criterion-related validity, scores on the predictor (in this case, the selection test) are correlated with scores on a criterion (measure of job performance). When scores on both measures are obtained at about the same time, they provide information on the test's concurrent validity. Concurrent validity is a type of criterion-related validity.

Cronbach's alpha is an appropriate method for evaluating reliability when:

all test items are designed to measure the same underlying characteristic Cronbach's alpha is an appropriate method for evaluating reliability when the test is expected to be internally consistent - i.e., when all test items measure the same or related characteristics. Cronbach's alpha is another name for coefficient alpha and is used to assess internal consistency reliability.

Which type of reliability would be most appropriate for estimating the reliability of a multiple-choice speeded test?

alternate forms Alternate-forms reliability is an appropriate method for establishing the reliability of speeded tests. Speeded tests are designed so that all items answered by an examinee are answered correctly, and the examinee's total score depends primarily on his/her speed of responding. Because of the nature of these tests, a measure of internal consistency will provide a spuriously high estimate of the test's reliability.

The best way to control consensual observer drift is to: Select one: A. use the correction for attenuation formula. B. use a true experimental research design. C. videotape the observers. D. alternate raters.

alternate raters. Of the actions described in the answers to this question, this one is the best way to alleviate consensual observer drift, which occurs when raters who are working together influence each other's ratings so that they assign ratings in increasingly similar (and idiosyncratic) ways. Consensual observer drift occurs when observers' ratings become increasingly less accurate over time in a systematic way.

criterion-referenced score

are easy to interpret because they make it possible to predict which criterion group an examinee is likely to belong to.

The primary advantage in using a percentile rank, z-score, or T-score is that these scores: Select one: A. are easy to interpret because they reference an individual's test performance to an absolute standard of performance. B. are easy to interpret because they reference an individual's test performance to the performance of other examinees. C. are easy to interpret because they make it possible to predict which criterion group an examinee is likely to belong to. D. normalize the raw score distribution so that parametric tests can be used to analyze test scores.

are easy to interpret because they reference an individual's test performance to the performance of other examinees. Because it is usually difficult to "make sense" of raw scores, they are often transformed into scores that are easier to interpret. The advantage of norm-referenced scores (which are a type of transformed score) is that they make it possible to determine how well an examinee did in comparison to other examinees. The scores listed in this question are all norm-referenced scores that indicate how well an examinee did in comparison to others in the norm group.

To maximize the inter-rater reliability of a behavioral observation scale, you should make sure that coding categories:

are mutually exclusive To maximize the reliability of a behavior observation scale, coding categories must be discrete and mutually exclusive. For example, if the behavioral categories for aggressiveness were "aggressive acts" and "emotional displays," the same behavior might be recorded twice, and an unreliable picture of a child's behavior would be obtained. When a person's behavior is to be observed and recorded, that behavior must be operationalized in order for the observations to be meaningful. For example, a psychologist interested in obtaining data about aggressiveness in children might record data using categories such as "hits others" or "destroys property."

A personnel director uses a mechanical aptitude test to hire machine shop workers. Several of the people hired using the test turn out to be less than adequate performers. These individuals are:

false positives False positives are individuals who are predicted to perform satisfactorily by the predictor but, in fact, perform poorly on the criterion. In other words, these individuals have been "falsely identified as positives."

Cronbach's coefficient alpha

A formula used to find the average degree of inter-item consistency. If items are scored dichotomously (right/wrong), the Kuder-Richardson Formula 20 (KR-20) is used instead.

Which of the following types of validity would you be most interested in when designing a selection test that will be used to predict the future job performance ratings of job applicants? Select one: A. discriminant B. content C. construct D. criterion-related

criterion-related When a test is being used to predict performance on a criterion, you would be most interested in the test's criterion-related validity (e.g., in its correlation with the criterion measure).

A college freshman obtains a score of 150 on his English final exam, a score of 100 on his math exam, a score of 55 on his chemistry exam, and a score of 30 on his history exam. The means and standard deviations for these tests are, respectively, 125 and 20 for the English exam, 90 and 10 for the math exam, 45 and 5 for the chemistry exam, and 30 and 5 for the history exam. Based on this information, you can conclude that the young man's test performance was best on which exam?

chemistry In this case, the student's English score is equivalent to a z-score of +1.25, his math score is equivalent to a z-score of +1.0, his chemistry score is equivalent to a z-score of +2.0, and his history score is equivalent to a z-score of 0. Therefore, the student obtained the highest score on the chemistry test. The raw scores on different tests may be compared by converting the scores to z-scores, which is done by subtracting the mean from the examinee's raw score to calculate a deviation score and then dividing the deviation score by the standard deviation.
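
The same arithmetic as a short Python sketch, using the means and standard deviations given in the item:

```python
# z = (raw score - mean) / standard deviation
exams = {
    "English":   (150, 125, 20),
    "math":      (100, 90, 10),
    "chemistry": (55, 45, 5),
    "history":   (30, 30, 5),
}

for name, (raw, mean, sd) in exams.items():
    print(name, (raw - mean) / sd)
# English 1.25, math 1.0, chemistry 2.0, history 0.0 -> chemistry is best
```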

heterotrait-monomethod coefficient

represents the correlation between different traits being measured with the same kind of method.

A reliability coefficient is best defined as a measure of:

consistency A test is reliable when it provides consistent results, with inconsistency in test scores being the result of random factors that are present at the time of testing. A reliability coefficient indicates the proportion of variance in test scores that is consistent (i.e., is due to true score variability rather than to measurement error).

You ask a group of experienced salespeople to review the test items included in a test you have developed to help select new sales applicants. You are apparently interested in determining the test's ______ validity.

content A test's content validity refers to the extent to which test items represent the domain of knowledge, skills, and/or abilities the test was designed to measure. Content validity is established primarily by having subject matter experts evaluate items in terms of their representativeness.

A final exam is developed to evaluate students' comprehension of information presented in a high school history class. When the exam is administered to three classes of students at the end of the semester, all students obtain failing scores. This suggests that the exam may have poor ________ validity. Select one: A. concurrent B. incremental C. content D. divergent

content If all students do poorly on a test designed to assess their mastery of the course content, one possible reason is that the test questions do not represent that content; i.e., the test does not have adequate content validity.

An advantage of using the kappa statistic rather than percent agreement when assessing a test's inter-rater reliability is that the former: Select one: A. is easier to calculate. B. corrects for chance agreement. C. corrects for small sample size. D. takes into account the effects of multicollinearity.

corrects for chance agreement. The kappa statistic (which is also known as Cohen's kappa and the kappa coefficient) provides a more accurate estimate of reliability than percent agreement because its calculation includes removing the effects of chance agreement. The problem with percent agreement as a measure of inter-rater reliability is that it is inflated by chance agreement.
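
A minimal sketch of the kappa computation from a two-rater agreement table; the counts are hypothetical:

```python
import numpy as np

# Cohen's kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
# proportion of agreement and p_e is the agreement expected by chance.

def cohens_kappa(table: np.ndarray) -> float:
    n = table.sum()
    p_o = np.trace(table) / n
    p_e = (table.sum(axis=0) * table.sum(axis=1)).sum() / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Two raters classifying 50 cases into two categories:
ratings = np.array([[20, 5],
                    [10, 15]])
print(round(cohens_kappa(ratings), 2))  # 0.4
```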

Incremental validity is associated with...

criterion-related validity and refers to the increase in decision-making accuracy that results from use of a predictor.

In factor analysis, the original factor matrix is usually rotated in order to:

facilitate interpretation of the identified factors The rotation of factors provides a clearer pattern of factor loadings - i.e., in the rotated matrix, some tests correlate most highly with one factor, while other tests correlate more highly with a different factor. This makes it easier to identify the factors (dimensions) that account for the intercorrelations between the tests. One characteristic of the original factor matrix is that it is usually difficult to interpret because it does not provide a clear pattern of factor loadings.

When the kappa statistic for a measure is .90, this indicates that the measure:

has adequate inter-rater reliability Reliability coefficients range from 0 to +1.0, so a coefficient of .90 indicates good reliability. The kappa statistic (also known as the kappa coefficient) is a measure of inter-rater reliability.

Incremental validity is a measure of:

decision-making accuracy Incremental validity refers to the increase ("increment") in decision-making accuracy that results from the use of a new predictor (e.g., the increase in accurate hiring decisions).

You would use a "multitrait-multimethod matrix" in order to: Select one: A. compare a test's predictive and concurrent validity. B. determine if a test has adequate convergent and discriminant validity. C. identify the common factors underlying a set of related constructs. D. test hypotheses about the causal relationships among variables.

determine if a test has adequate convergent and discriminant validity. When a measure correlates highly with other measures of the same trait, the measure has convergent validity; when it has low correlations with measures of different traits, it has discriminant (divergent) validity. Convergent and discriminant validity are used as evidence of construct validity, and the multitrait-multimethod matrix contains correlation coefficients that provide information about a measure's convergent and discriminant validity.

most item characteristic curves provide information on three parameters

difficulty level, discrimination, and probability of guessing correctly

In a distribution of percentile ranks, the number of examinees receiving percentile ranks between 20 and 30 is:

equal to the number of examinees receiving percentile ranks between 50 and 60 A distribution of percentile ranks is flat (rectangular), which indicates that scores are evenly distributed throughout the full range of the distribution. In other words, at least theoretically, the same number of examinees fall at each percentile rank. Consequently, the same number of examinees obtain percentile ranks between the ranks of 20 and 30, 30 and 40, etc.

Assuming no constraints in terms of time, money, or other resources, the best (most thorough) way to demonstrate that a test has adequate reliability is by using which of the following techniques?

equivalent (alternate) forms Because equivalent forms reliability takes into account error due to both time and content sampling, it is the most thorough method for establishing reliability and, consequently, is considered by some experts to be the best method. The most thorough method for assessing reliability is the one that takes into account the greatest number of potential sources of measurement error.

A test developer would use the Kuder-Richardson Formula (KR-20) in order to:

evaluate a test's internal consistency reliability KR-20 is used to determine a test's internal consistency reliability when test items are scored dichotomously.
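
A minimal sketch of the KR-20 computation on toy 0/1 data; using the sample variance (ddof=1) here is an assumption, as conventions vary:

```python
import numpy as np

# KR-20 = (k / (k - 1)) * (1 - sum(p_i * q_i) / total score variance), where
# p_i is the proportion answering item i correctly and q_i = 1 - p_i.

def kr20(items: np.ndarray) -> float:
    k = items.shape[1]
    p = items.mean(axis=0)
    pq = (p * (1 - p)).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - pq / total_var)

scores = np.array([[1, 1, 1, 0],
                   [1, 0, 1, 1],
                   [0, 0, 1, 0],
                   [1, 1, 0, 1],
                   [0, 0, 0, 0]])
print(round(kr20(scores), 2))  # ~0.69 for this toy matrix
```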

incremental validity

The extent to which a test contributes information beyond other, more easily collected measures. Incremental validity is used to determine if a new psychological measure will provide more information than measures that are already in use.

The applicants for sales positions at the Acme Company complain that the selection test they are required to take is unfair because it doesn't "look like" it measures the knowledge and skills that are important for successful job performance. Their complaint suggests that the selection test is lacking which of the following? Select one: A. incremental validity B. differential validity C. construct validity D. face validity

face validity Face validity refers to the extent that a test appears to be valid to test-takers - i.e., to the extent that the test "looks like" it is measuring what it is supposed to be measuring. In this situation, the selection test doesn't appear to be measuring the skills and knowledge that are important for success as a salesperson.

A test developer would construct an expectancy table to: Select one: A. facilitate norm-referenced interpretation of test scores. B. facilitate criterion-referenced interpretation of test scores. C. correct obtained scores for the effects of guessing. D. correct obtained test scores for the effects of measurement error.

facilitate criterion-referenced interpretation of test scores An expectancy table provides the information needed to interpret an examinee's score in terms of expected performance on an external criterion and, consequently, is a method of criterion-referenced interpretation.

standard error of measurement

hypothetical estimate of variation in scores if testing were repeated The standard error of measurement equals the standard deviation of the test scores multiplied by the square root of one minus the reliability coefficient.
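
A minimal sketch of that formula with illustrative values; note that an SD of 10 and a reliability of .75 reproduce the SEM of 5 used in the earlier Stella S. item:

```python
import math

# Standard error of measurement: SEM = SD * sqrt(1 - r_xx), where SD is the
# standard deviation of the test scores and r_xx is the reliability.

def std_error_of_measurement(sd: float, reliability: float) -> float:
    return sd * math.sqrt(1.0 - reliability)

print(std_error_of_measurement(10.0, 0.75))  # 5.0
print(std_error_of_measurement(10.0, 0.0))   # 10.0: SEM is never greater than the SD
```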

After reviewing the data collected on a new selection test during the course of a criterion-related validity study, a psychologist decides to lower the selection test cutoff score. Apparently the psychologist is hoping to do which of the following?

increase the number of true positives By lowering the selection test (predictor) cutoff score, the psychologist will increase the number of people who are accepted on the basis of their selection test score -- i.e., doing so will increase the number of positives, including the number of true positives, who are individuals who will be selected on the basis of their test scores and will be successful on the criterion.

norm-referenced scores

indicate how well an examinee did in comparison to others in the norm group and allow for comparisons between the students taking the tests and a national average. Examples: percentile rank, z-score, T score.

Classical Test Theory (CTT)

is a body of related psychometric theory that predicts outcomes of psychological testing such as the difficulty of items or the ability of test-takers. It is a theory of testing based on the idea that a person's observed or obtained score on a test is the sum of a true score and an error score

Cohen's Kappa Statistic

is a statistic that is used to measure inter-rater reliability (and also intra-rater reliability) for qualitative (categorical) items

alternate forms reliability

is better for speeded tests because it eliminates the problem of practice effects.

Content validity is of concern for tests designed to

measure a specific content or behavior domain

Construct validity is of concern for tests designed to

measure hypothetical traits

It would be most important to assess the test-retest reliability of a measure that:

measures a stable trait To evaluate test-retest reliability, the same test is administered to the same group of examinees on two different occasions. The two sets of scores are then correlated. If a test is designed to measure a stable trait, you would want to make sure that scores are stable over time. Therefore, test-retest reliability would be important for this kind of test.

In the multitrait-multimethod matrix, which of the following coefficients provides information about a test's convergent validity? Select one: A. heterotrait-heteromethod B. heterotrait-monomethod C. monotrait-heteromethod D. monotrait-monomethod

monotrait-heteromethod The monotrait-heteromethod coefficient is a measure of convergent validity. It indicates the correlation between the test that is being validated and another measure of the same trait (monotrait) that uses a different method of measurement (heteromethod).

In terms of magnitude, the standard error of measurement can be:

no greater than the standard deviation of the test scores The maximum value for the standard error of measurement is the value of the standard deviation of the test scores. The standard error is equal to the standard deviation when the reliability coefficient is zero.

Which of the following is NOT an example of a standard score? Select one: A. WAIS IQ score B. percentage score C. z score D. T score

percentage score Percentages are not standard scores. A standard score is a norm-referenced score that indicates an examinee's performance in terms of standard deviation units (e.g., a z score of 1.0 indicates a raw score that is one standard deviation above the mean).

False negatives are individuals who are

predicted to perform poorly by the predictor but, in fact, do well on the criterion.

True positives are individuals who are

predicted to perform satisfactorily by the predictor and, in fact, do well on the criterion.

False positives are individuals who are

predicted to perform satisfactorily by the predictor but, in fact, perform poorly on the criterion. In other words, these individuals have been "falsely identified as positives."

To evaluate the validity of a newly developed selection test for clerical workers, a test developer will correlate scores obtained on the test by newly hired clerical workers with the job performance ratings they receive after being on-the-job for six months. The resulting correlation coefficient will provide information on the test's: Select one: A. discriminant validity. B. predictive validity. C. construct validity. D. concurrent validity.

predictive validity. There are two types of criterion-related validity -- predictive and concurrent. As its name implies, predictive validity involves correlating predictor scores with criterion scores that are obtained at a later time to determine how well the predictor predicts future performance on the criterion. The test developer is correlating predictor (selection test) scores with future criterion (job performance) scores and, therefore, is conducting a criterion-related validity study.

Specificity refers to...

probability that a test will correctly identify people without the disease from the pool of people without the disease. It is calculated with the following formula: true negatives/(true negatives + false positives).
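
A minimal sketch of both screening formulas (sensitivity is defined in a later card); the counts are hypothetical:

```python
# Specificity = TN / (TN + FP); sensitivity = TP / (TP + FN).

def specificity(tn: int, fp: int) -> float:
    return tn / (tn + fp)

def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)

print(specificity(tn=80, fp=20))  # 0.8
print(sensitivity(tp=45, fn=5))   # 0.9
```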

A personnel director hires all job applicants who obtain a high score on a job selection test but, after using the test for six months, realizes that many of the new employees are obtaining low performance ratings. Assuming that the selection test has adequate criterion-related validity, the personnel director can reduce the number of unsatisfactory workers that she hires using the test by: Select one: A. lowering the selection test cutoff score and the job performance rating cutoff score. B. raising the selection test cutoff score and the job performance rating cutoff score. C. lowering the selection test cutoff score. D. raising the selection test cutoff score.

raising the selection test cutoff score Applicants who are hired on the basis of their selection test scores but who perform poorly on the job are false positives. Raising the cutoff score on the selection test (predictor) should reduce the number of individuals who do poorly on the job - i.e., it will reduce the number of positives, including the number of false positives. Note that lowering the job performance rating (criterion) cutoff score would also reduce the number of false positives but that, in many work situations, an employer would not want to do this.

Z-scores (standard scores)

raw scores stated in standard deviation terms; a measure of how many standard deviations a given raw score is from the mean. A negative number (e.g., -1) means the score falls below the mean, and 0 equals the mean. The shape of the distribution does not change when scores are converted to z-scores (a linear transformation).

The distribution of percentile ranks is always:

rectangular (flat) regardless of the shape of the distribution of raw scores A distinguishing characteristic of percentile ranks is that their distribution is always rectangular (flat) regardless of the shape of the distribution of raw scores. You want to be familiar with the shape of the normal distribution (bell-shaped) and the shape of a distribution of percentile ranks (rectangular).

percentage scores are a type of criterion-referenced score that

reference an examinee's score to the content of the exam and indicate how much of the content an examinee has mastered

incremental validity

refers to the degree to which use of a test increases decision-making accuracy.

discriminant validity

refers to the extent that scores on a test do not correlate with scores on tests measuring different traits. Discriminant validity is of concern for tests designed to measure hypothetical traits (constructs).

A psychologist develops a diagnostic test to identify people who have injection phobia. In this situation, the test's ________ refers to how good the test is at identifying people who have injection phobia from the pool of people who actually have injection phobia. Select one: A. specificity B. sensitivity C. positive predictive value D. negative predictive value

sensitivity Sensitivity refers to the probability that a test will correctly identify people with the disease from the pool of people with the disease. It is calculated using the following formula: true positives/(true positives + false negatives).

In the context of test construction, cross-validation is associated with which of the following? Select one: A. shrinkage B. criterion deficiency C. criterion contamination D. banding

shrinkage Cross-validation refers to re-assessing a test's criterion-related validity with a new sample. Because the chance factors operating in the original sample are not identical to those operating in the cross-validation sample, the validity coefficient usually "shrinks" (is smaller) for the new sample.

Norm-referenced scores that permit an examinee's score to be compared to the scores of others who are taking or have taken the same test.

stanine scores, z-scores, percentile ranks

When a test has been constructed on the basis of item response theory, an examinee's total test score provides information about his/her:

status on a latent trait or ability Scores on tests developed on the basis of item response theory are reported in terms of the examinee's level on the trait or ability measured by the test rather than in terms of a total score. An advantage of this method of score reporting is that it makes it possible to compare scores from different sets of items and from different tests. Item response theory is an alternative to classical test theory for the development of tests and interpretation of test scores.

Content sampling is not a potential source of measurement error for which of the following methods for evaluating a test's reliability? A. coefficient alpha and alternate forms B. alternate forms and test-retest C. split-half only D. test-retest only

test-retest only Because test-retest reliability involves administering the same test (i.e., the same content) twice, content sampling is not a source of error. Content sampling refers to the extent to which test scores depend on factors specific to the particular items included in the test (i.e., to its content). Note that this question is asking about the type of reliability that is not affected by content sampling.

Concurrent validity (a type of criterion-related validity) refers to...

the extent to which test scores correlate with scores on an external criterion.

concurrent validity

the extent to which two measures of the same trait or ability agree

In factor analysis, when two factors are "orthogonal," this means that:

the factors are uncorrelated In factor analysis, orthogonal factors are uncorrelated (independent) and oblique factors are correlated (dependent).

When using principal component analysis:

the first principal component represents the largest share of the total variance. A characteristic of principal components analysis is that the components (factors) are extracted so that the first component reflects the greatest amount of variability, the second component the second greatest amount of variability, etc.

The point at which an item characteristic curve intercepts the vertical (Y) axis provides information on which of the following?

the probability of answering the item correctly by guessing The vertical axis indicates the probability of choosing a correct response as a function of an examinee's ability level. The point at which the item characteristic curve intercepts the vertical axis indicates the probability of choosing the correct response by chance alone.

A test designed to measure knowledge of clinical psychology is likely to have the highest reliability coefficient when:

the test consists of 80 items and the tryout sample consisted of individuals who are heterogeneous in terms of knowledge of clinical psychology. All other things being equal, longer tests are more reliable than shorter tests. In addition, the reliability coefficient (like any other correlation coefficient) is larger when there is an unrestricted range of scores - i.e., when the tryout sample contains examinees who are heterogeneous with regard to the attribute(s) measured by the test. The reliability of a test is affected by several factors including the length of the test and the heterogeneity of the sample in terms of the abilities or other attributes measured by the test items.

When using criterion-referenced interpretation of scores obtained on a job knowledge test, you would most likely be interested in which of the following? Select one: A. the total number of test items answered correctly by an examinee B. an examinee's performance relative to that of other examinees C. an examinee's standing on two or more measures designed to assess the same characteristic D. ensuring that test items are based on a systematic job evaluation

the total number of test items answered correctly by an examinee One criterion that is used to interpret a person's test score is the total number of correct items. This criterion is probably most associated with "mastery testing." A person is believed to have mastered a content area when he/she obtains a predetermined minimum score on the test that is designed to assess knowledge of that area.

Split-half reliability would yield a low reliability coefficient if...

the two halves of the test do not assess the same content.

A measure has divergent validity when...

scores on the measure do not correlate with scores on measures of unrelated traits.

True negatives are individuals who are

predicted to perform poorly by the predictor and, in fact, perform poorly on the criterion.

Lowering the predictor cutoff

will decrease the number of false negatives

