Test Construction Prepjet Exams #1, 2, 4, 7


In a multitrait-multimethod matrix, a large monotrait-heteromethod coefficient provides evidence of a test's: A. reliability. B. divergent validity. C. convergent validity. D. factorial validity.

C. convergent validity. A multitrait-multimethod matrix contains correlation coefficients that provide information about a measure's reliability and convergent and divergent validity. The monotrait-heteromethod coefficient is the correlation between two different measures (heteromethod) that assess the same trait (monotrait). When this correlation is large, it provides evidence of a measure's convergent validity.

Which of the following is not a norm-referenced score? A. z-scores B. T-scores C. percentile ranks D. percentage scores

D. percentage scores Percentage scores are a type of criterion-referenced score that indicates the percent of test items an examinee answered correctly. All of the other scores listed in the answers are norm-referenced scores that compare an examinee's score to the scores obtained by examinees in the norm (reference) group.

You would use which of the following to construct a confidence interval around an examinee's predicted criterion score? A. regression equation B. multiple regression equation C. standard error of measurement D. standard error of estimate

D. standard error of estimate The standard error of estimate indicates the amount of error that can be expected when an examinee's predictor score is used to predict his or her score on a criterion, and it is used to construct a confidence interval around the predicted criterion score. The standard error of measurement (answer C) indicates the amount of error that can be expected in an examinee's obtained (rather than predicted) score and is used to construct a confidence interval around the obtained score.

When a predictor has a criterion-related validity coefficient of _____, this means that 64% of variability in scores on the criterion is explained by variability in scores on the predictor. A. .80 B. .64 C. .40 D. .36

A. .80 A criterion-related validity coefficient, like other correlation coefficients for two different variables, can be interpreted by squaring it to obtain a measure of shared variability. This question gives you the squared number, so you have to take its square root to get the validity coefficient: The square root of .64 is .80. (Note: If there are any questions on the exam that require you to calculate a square root, the numbers will be easy ones like the one in this question - e.g., .81, .49, .36.)

When a test has a standard deviation of 10, the test's standard error of measurement will fall between: A. 0 and 10 B. 10 and 1.0 C. 0 and 1.0 D. -1.0 and +1.0

A. 0 and 10 A test's standard error of measurement equals its standard deviation times the square root of 1 minus the reliability coefficient. A test's reliability coefficient can range from 0 to 1.0, so the standard error of measurement for a test that has a standard deviation of 10 ranges from 0 when the reliability coefficient is 1.0 (10 times the square root of 1 minus 1 equals 0) to 10 when the reliability coefficient is 0 (10 times the square root of 1 minus 0 equals 10).
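
To make the arithmetic concrete, here is a minimal Python sketch of the formula (the function name is ours, not from any testing package):

```python
import math

def standard_error_of_measurement(sd: float, reliability: float) -> float:
    # SEM = standard deviation times the square root of (1 - reliability)
    return sd * math.sqrt(1 - reliability)

# The two extremes described above, for a test with SD = 10:
print(standard_error_of_measurement(10, 1.0))   # 0.0  (perfectly reliable)
print(standard_error_of_measurement(10, 0.0))   # 10.0 (completely unreliable)
print(standard_error_of_measurement(10, 0.75))  # 5.0  (a more typical value)
```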

The test manual for an academic achievement test indicates that it has an alternate forms reliability coefficient of .80. This means that _____ of variability in test scores is true score variability. A. 80% B. 64% C. 36% D. 20%

A. 80% Reliability coefficients are interpreted directly as the percent of variability in test scores that is due to true score variability. When the reliability coefficient is .80, this means that 80% of variability in scores is due to true score variability and 20% is due to measurement error.

To evaluate the test-retest reliability of a newly developed measure of intelligence, a test developer administers the test to the same sample of examinees on two separate occasions. When he correlates the two sets of scores, he obtains a reliability coefficient of .60. To increase this reliability coefficient, the test developer should: A. increase the number of test items and make sure the new sample of examinees is heterogeneous with regard to level of intelligence. B. increase the number of test items and make sure the new sample of examinees is homogeneous with regard to level of intelligence. C. decrease the number of test items and make sure the new sample of examinees is heterogeneous with regard to level of intelligence. D. decrease the number of test items and make sure the new sample of examinees is homogeneous with regard to level of intelligence.

A. increase the number of test items and make sure the new sample of examinees is heterogeneous with regard to level of intelligence. A test's reliability coefficient is affected by several factors, including the length of the test and the degree of similarity of examinees with regard to the attribute(s) measured by the test: In general, longer tests are more reliable than shorter tests, and reliability coefficients are larger when they're derived from a sample that has an unrestricted range of scores - i.e., when examinees in the sample are heterogeneous with regard to the attribute(s) measured by the test.

Before adding a new selection test to the procedure that's currently being used to make hiring decisions, you would want to make sure that adding the test will increase decision-making accuracy. In other words, you'd want to make sure the new selection test has adequate: A. incremental validity. B. convergent validity. C. differential validity. D. external validity.

A. incremental validity. Incremental validity refers to the increase in decision-making accuracy that will occur when a new predictor (e.g., a selection test) is added to the current procedure for making hiring or other types of decisions.

Consensual observer drift __________ a measure's inter-rater reliability. A. tends to artificially increase B. tends to artificially decrease C. either artificially increases or decreases D. neither artificially increases nor decreases

A. tends to artificially increase Consensual observer drift occurs when two or more raters communicate with each other while they're assigning ratings. It causes increased consistency (but often decreased accuracy) of their ratings and overestimates a measure's actual inter-rater reliability.

The correction for attenuation formula is used to estimate the effects of increasing: A. the reliability of a predictor and/or criterion on the criterion-related validity coefficient. B. the reliability of a predictor and/or criterion on the predictor's incremental validity. C. the number of items included in the predictor on its criterion-related validity coefficient. D. the base rate on the predictor's incremental validity.

A. the reliability of a predictor and/or criterion on the criterion-related validity coefficient. The correction for attenuation formula is used to estimate what the maximum criterion-related validity coefficient would be if the predictor and/or criterion had a reliability coefficient of 1.0.
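
A minimal Python sketch of the standard correction-for-attenuation formula (the example numbers are hypothetical):

```python
import math

def correct_for_attenuation(r_xy: float, r_xx: float, r_yy: float) -> float:
    # Estimated validity if the predictor (reliability r_xx) and the
    # criterion (reliability r_yy) were both perfectly reliable (1.0)
    return r_xy / math.sqrt(r_xx * r_yy)

# e.g., an observed validity of .42 with predictor reliability .70
# and criterion reliability .60:
print(round(correct_for_attenuation(0.42, 0.70, 0.60), 2))  # 0.65
```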

According to classical test theory, variability in test scores is due to a combination of: A. true score variability and random error. B. true score variability and systematic error. C. observed variability and divergent error. D. observed variability and convergent error.

A. true score variability and random error. Classical test theory describes observed variability in test scores as being the result of a combination of true score variability (variability in what the test is measuring) and measurement error (variability due to random error).
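
A one-line numerical illustration (the variance components are hypothetical): because observed variance is the sum of true score variance and error variance, the reliability coefficient is simply the true score share of the total.

```python
# Classical test theory: observed variance = true variance + error variance
var_true, var_error = 16.0, 4.0          # hypothetical variance components
var_observed = var_true + var_error      # 20.0
reliability = var_true / var_observed    # true-score share of observed variance
print(reliability)                       # 0.8
```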

Which of the following is likely to produce the largest reliability coefficient for a newly developed achievement test? A. unrestricted range of scores and homogeneous content of test items B. unrestricted range of scores and heterogeneous content of test items C. restricted range of scores and homogeneous content of test items D. restricted range of scores and heterogeneous content of test items

A. unrestricted range of scores and homogeneous content of test items Two factors that affect the size of a test's reliability coefficient are the range of test scores and the homogeneity of the test's content: All other things being equal, a test with an unrestricted range of test scores and homogeneous items will produce a larger reliability coefficient than will a test with a restricted range of scores and heterogeneous items. For example, a 50-item test that measures knowledge of neuropsychology and contains items that range from easy to very difficult can be expected to have a higher reliability coefficient than will a 50-item test that measures knowledge of neuropsychology, psychopathology, and clinical psychology and contains items that range only from difficult to very difficult.

For a test that consists of 45 true/false questions, the optimal average item difficulty level (p) is which of the following? A. 1.0 B. .75 C. .50 D. .25

B. .75 The optimal difficulty level for test questions depends on several factors, including the chance that examinees can choose correct answers just by guessing. With regard to this factor, the optimal difficulty level falls halfway between 1.0 (100%) and the probability of choosing the correct answer by guessing: For true/false questions, the probability of guessing correctly is .50, so the optimal difficulty level is halfway between 1.0 and .50, which is .75.
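
The same rule, sketched in Python for different item formats (a hypothetical helper, not from any testing library):

```python
def optimal_difficulty(chance_level: float) -> float:
    # Halfway between 1.0 and the probability of guessing correctly
    return (1.0 + chance_level) / 2

print(optimal_difficulty(0.50))  # 0.75  for true/false items
print(optimal_difficulty(0.25))  # 0.625 for four-option multiple-choice items
```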

In a normal distribution, a T-score of 40 is equivalent to a percentile rank of: A. 3 B. 16 C. 84 D. 98

B. 16 For the EPPP, you want to be familiar with the relationship between percentile ranks, T-scores, and z-scores in a normal distribution. To identify the correct answer to this question, you need to know that, in a normal distribution, a T-score of 40 and a percentile rank of 16 are both one standard deviation below the mean.

Which of the following describes the relationship between a test's reliability coefficient and its criterion-related validity coefficient? A. A test's criterion-related validity coefficient can be no greater than its reliability coefficient. B. A test's criterion-related validity coefficient can be no greater than the square root of its reliability coefficient. C. A test's criterion-related validity coefficient can be no greater than the square root of one minus its reliability coefficient. D. A test's criterion-related validity coefficient can be no greater than the square of its reliability coefficient.

B. A test's criterion-related validity coefficient can be no greater than the square root of its reliability coefficient. For example, if a test has a reliability coefficient of .81, its criterion-related validity coefficient can be no greater than the square root of .81, which is .90.

Job applicants complain that the items included in a selection test "don't look like they have anything to do with job performance." As described by these applicants, this test lacks ________ validity. A. content B. face C. convergent D. discriminant

B. face Face validity refers to the degree to which test items "look like" they measure what the test purports to measure. Face validity can affect test performance (e.g., by affecting a test taker's motivation to respond to items accurately), but it does not provide information on whether or not the test accurately measures what it was designed to measure.

The Kuder-Richardson Formula 20 (KR-20) can be used to estimate a test's ____________ reliability when test items are scored dichotomously. A. alternate forms B. internal consistency C. test-retest D. inter-rater

B. internal consistency KR-20 is a variation of coefficient alpha that can be used to evaluate a test's internal consistency reliability when test items are scored dichotomously (e.g., as correct or incorrect).
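
For reference, a minimal Python sketch of the KR-20 computation (using the population variance of total scores; the function name is ours):

```python
def kr20(item_responses):
    # item_responses: one list of 0/1 item scores per examinee
    n = len(item_responses)      # number of examinees
    k = len(item_responses[0])   # number of items
    # p = proportion answering each item correctly; q = 1 - p
    p = [sum(row[i] for row in item_responses) / n for i in range(k)]
    sum_pq = sum(p_i * (1 - p_i) for p_i in p)
    # variance of examinees' total scores
    totals = [sum(row) for row in item_responses]
    mean = sum(totals) / n
    variance = sum((t - mean) ** 2 for t in totals) / n
    return (k / (k - 1)) * (1 - sum_pq / variance)
```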

To evaluate the inter-rater reliability of a test when scores or ratings on the test represent a nominal scale of measurement, you would use which of the following? A. coefficient alpha B. kappa coefficient C. KR-20 D. Spearman-Brown

B. kappa coefficient The kappa coefficient is also known as Cohen's kappa statistic and is used to measure inter-rater reliability when scores or ratings represent a nominal scale of measurement. An advantage of the kappa coefficient as a measure of inter-rater reliability is that, unlike percent agreement, the kappa coefficient corrects for chance agreement between the raters.
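
A minimal Python sketch of Cohen's kappa, showing how chance agreement is removed (names are ours):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    # rater_a, rater_b: parallel lists of nominal category labels
    n = len(rater_a)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # expected chance agreement, from each rater's marginal proportions
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    # kappa = (observed - chance) / (1 - chance)
    return (p_observed - p_expected) / (1 - p_expected)
```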

Before using a selection test to estimate how well job applicants will do on a measure of job performance on their first few days of work, you would want to make sure the selection test has adequate: A. concurrent validity. B. predictive validity. C. differential validity. D. construct validity.

B. predictive validity. Evaluating a predictor's criterion-related validity is important when scores on the predictor will be used to estimate scores on a criterion. There are two types of criterion-related validity: Concurrent validity is most important when predictor scores will be used to estimate current scores on the criterion. Predictive validity is most important when predictor scores will be used to estimate future scores on the criterion, which is the situation described in this question.

A test's __________ refers to its ability to correctly identify individuals who are true negatives. A. sensitivity B. specificity C. positive predictive value D. negative predictive value

B. specificity Determining a test's sensitivity, specificity, positive predictive value, and negative predictive value is one way of evaluating its validity. A test's specificity refers to its ability to accurately identify people who do not have the disorder or other attribute the test was designed to identify. It's calculated by dividing the number of true negatives by the number of true negatives plus false positives.
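
Those two calculations, as a minimal Python sketch (the counts are hypothetical):

```python
def specificity(tn: int, fp: int) -> float:
    # true negatives divided by all actual negatives (TN + FP)
    return tn / (tn + fp)

def sensitivity(tp: int, fn: int) -> float:
    # true positives divided by all actual positives (TP + FN)
    return tp / (tp + fn)

print(specificity(tn=80, fp=20))  # 0.8
print(sensitivity(tp=45, fn=5))   # 0.9
```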

The incremental validity of a new selection test (predictor) is calculated by subtracting: A. the positive hit rate from the base rate. B. the base rate from the positive hit rate. C. the negative hit rate from the positive hit rate. D. the positive hit rate from the negative hit rate.

B. the base rate from the positive hit rate. Incremental validity refers to the increase in the accuracy of predictions about criterion performance that occurs by adding a new predictor to the current method used to make predictions. It can be calculated by subtracting the base rate from the positive hit rate using data collected in a criterion-related validity study: The base rate is the proportion of employees who were hired without the new selection test (predictor) and who obtained high scores on the measure of job performance (criterion). The positive hit rate is the proportion of employees who would have been hired using their scores on the new selection test and who obtained high scores on the measure of job performance.
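
In code form (the rates are hypothetical, rounded to avoid floating-point noise):

```python
def incremental_validity(positive_hit_rate: float, base_rate: float) -> float:
    # Increase in decision-making accuracy from adding the new predictor
    return positive_hit_rate - base_rate

# e.g., 60% of employees hired without the test were successful (base rate),
# but 75% of those who would have been hired with it were successful:
print(round(incremental_validity(0.75, 0.60), 2))  # 0.15
```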

The item discrimination index (D) ranges from: A. 0 to +1.0. B. 0 to 50. C. -1.00 to +1.00. D. -50 to +50.

C. -1.00 to +1.00. The value of D ranges from -1.0 to +1.0. When D is +1.0, this indicates that all examinees in the high-scoring group answered the item correctly and all examinees in the low-scoring group answered it incorrectly. Conversely, when D is -1.0, this indicates that all examinees in the low-scoring group answered the item correctly and all examinees in the high-scoring group answered it incorrectly.
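
A minimal sketch of the calculation behind those extremes (equal-sized groups assumed):

```python
def discrimination_index(upper_correct: int, lower_correct: int,
                         group_size: int) -> float:
    # D = proportion correct in the high-scoring group minus
    #     proportion correct in the low-scoring group
    return upper_correct / group_size - lower_correct / group_size

print(discrimination_index(20, 0, 20))  #  1.0 (only high scorers correct)
print(discrimination_index(0, 20, 20))  # -1.0 (only low scorers correct)
```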

The results of a factor analysis indicate that a test has a correlation coefficient of .20 with Factor I, .35 with Factor II, and .60 with Factor III. The correlation of .60 indicates that ____% of variability in test scores is explained by Factor III. A. 60 B. 40 C. 36 D. 64

C. 36 Factor loadings are interpreted like other correlation coefficients for two different measures and are squared to obtain a measure of shared variability. When the correlation between a test and a factor is .60, this means that 36% (.60 squared) of variability in test scores is explained by variability in the factor.

A T-score distribution has a mean of _____ and standard deviation of _____. A. 100; 10 B. 100; 15 C. 50; 10 D. 50; 15

C. 50; 10 A T-score is a type of transformed score that expresses an examinee's score in terms of its relation to the mean and standard deviation of the scores obtained by examinees in the normative (standardization) sample. The T-score distribution has a mean of 50 and standard deviation of 10. Therefore, a T-score of 50 means that an examinee's score is equal to the mean score achieved by the normative sample, a T-score of 60 means that the examinee's score is one standard deviation above the mean score achieved by the normative sample, a T-score of 40 means that the examinee's score is one standard deviation below the mean score achieved by the normative sample, etc.

In a normal distribution of scores, a T-score of _____ is equivalent to a z-score of _____ and a percentile rank of 84. A. 50; 0 B. 50; 1.0 C. 60; 1.0 D. 70; 2.0

C. 60; 1.0 In a normal distribution, a percentile rank of 84 is one standard deviation above the mean. The T-score distribution has a mean of 50 and standard deviation of 10, so a T-score of 60 is one standard deviation above the mean. And the z-score distribution has a mean of 0 and standard deviation of 1.0, so a z-score of 1.0 is one standard deviation above the mean.
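
The conversions behind the last two questions, sketched in Python (NormalDist is from the standard library; the function names are ours):

```python
from statistics import NormalDist

def z_to_t(z: float) -> float:
    # T-scores rescale z-scores to mean 50, SD 10: T = 50 + 10z
    return 50 + 10 * z

print(z_to_t(1.0))   # 60.0 -> one SD above the mean
print(z_to_t(-1.0))  # 40.0 -> one SD below the mean

# Percentile rank of a z-score in a normal distribution:
print(round(NormalDist().cdf(1.0) * 100))   # 84 (T = 60)
print(round(NormalDist().cdf(-1.0) * 100))  # 16 (T = 40)
```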

Which of the following best describes classical test theory (CTT) and item response theory (IRT)? A. CTT and IRT are both test based. B. CTT and IRT are both item based. C. CTT is test based and IRT is item based. D. CTT is item based and IRT is test based.

C. CTT is test based and IRT is item based. One difference between CTT and IRT is that CTT is best described as "test based" while IRT is best described as "item based." CTT focuses on total test scores, and tests based on CTT do not provide a basis for predicting how an examinee or group of examinees will respond to a particular test item. In contrast, IRT focuses on responses to individual test items and provides the information needed to determine the probability that a particular examinee or group of examinees will correctly answer any specific item.

To estimate the effect of shortening or lengthening a test on the test's reliability coefficient, you would use which of the following? A. coefficient of determination B. coefficient alpha C. Spearman-Brown formula D. Kuder-Richardson formula 20

C. Spearman-Brown formula The Spearman-Brown formula is also known as the Spearman-Brown prophecy formula and is used to estimate the effect of adding or subtracting items to a test on the test's reliability coefficient.
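
A minimal Python sketch of the prophecy formula, where the length factor is the ratio of new to old test length (the example values are hypothetical):

```python
def spearman_brown(reliability: float, length_factor: float) -> float:
    # Predicted reliability when test length is multiplied by length_factor
    return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)

print(round(spearman_brown(0.60, 2.0), 2))  # 0.75 (doubling the test)
print(round(spearman_brown(0.60, 0.5), 2))  # 0.43 (halving the test)
```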

The standard error of measurement is used to: A. estimate the degree to which variability in test scores is due to true score variability. B. estimate the degree to which variability in test scores is due to random error. C. construct a confidence interval around an examinee's obtained score. D. construct a confidence interval around an examinee's predicted score.

C. construct a confidence interval around an examinee's obtained score. The standard error of measurement is used to construct a confidence interval around an obtained score and indicates the range within which an examinee's true score is likely to fall given his/her obtained score. The standard error of estimate is used to construct a confidence interval around a predicted score (answer D) - i.e., a criterion score that's predicted from an obtained predictor score.

A test developer would use the multitrait-multimethod matrix to evaluate a test's: A. incremental validity. B. criterion-related validity. C. construct validity. D. differential validity.

C. construct validity. The multitrait-multimethod matrix is one method for evaluating a test's construct validity and is important for tests that are designed to assess a hypothetical trait (construct). When using the multitrait-multimethod matrix, the test being validated is administered to a sample of examinees along with tests known to measure the same or a related trait and tests known to measure unrelated traits. When scores on the test being validated have high correlations with scores on tests that measure the same or a related trait, this provides evidence of the test's convergent validity. And, when scores on the test have low correlations with scores on tests that measure unrelated traits, this provides evidence of the test's divergent validity. Adequate convergent and divergent validity provide evidence of the test's construct validity.

Job applicants who are hired on the basis of their scores on a job selection test but then obtain unsatisfactory scores on a measure of job performance six months later are: A. false negatives. B. true negatives. C. false positives. D. true positives.

C. false positives. To identify the correct answer to this question, you have to remember that a person's score on the predictor (in this case, the job selection test) determines whether he/she is a "positive" or "negative," and that the person's score on the criterion (the measure of job performance) determines whether he/she is a "true" or "false" positive or negative. Therefore, for this question, a "true positive" is an applicant who scored above the cutoff on the job selection test and receives satisfactory scores on the job performance measure, while a "false positive" (the correct answer) is an applicant who scored above the cutoff on the job selection test but receives unsatisfactory scores on the job performance measure. A "true negative" is an applicant who scored below the cutoff on the job selection test and would have received unsatisfactory scores on the job performance measure if he/she had been hired, while a "false negative" is an applicant who scored below the cutoff on the job selection test but would have received satisfactory scores on the job performance measure if he/she had been hired.

When conducting a factor analysis, a researcher would rotate the initial factor matrix to: A. reduce measurement error. B. increase the size of the communality. C. obtain a factor matrix that is easier to interpret. D. minimize the effects of missing data.

C. obtain a factor matrix that is easier to interpret. Rotation of the initial factor matrix simplifies the factor structure, thereby creating a matrix that is easier to interpret. In a rotated factor matrix, each test included in the factor analysis will have a high correlation (factor loading) with one of the factors and low correlations with the remaining factors. Consequently, the interpretation of and name given to each factor involves considering the tests that correlate highly with each factor. For example, if Tests A, B, and C all have high correlations with Factor 1 and low correlations with Factor 2, the content of these three tests will be considered to determine what they have in common, and that information will be used to name Factor 1. If the opposite pattern is true for Tests D, E, and F, the same procedure will be used to name Factor 2. Note that answer B is not the correct answer because the communality (the amount of variability in each test that is explained by all of the factors) is not affected by rotation.

In the context of diagnostic efficiency, prevalence refers to how common a disorder is in a particular population at a particular point in time, and its magnitude affects a test's positive and negative predictive values. When the prevalence increases: A. the positive and negative predictive values both increase. B. the positive and negative predictive values both decrease. C. the positive predictive value increases and the negative predictive value decreases. D. the positive predictive value decreases and the negative predictive value increases.

C. the positive predictive value increases and the negative predictive value decreases. A test's positive predictive value (PPV) is the probability that a person who tests positive for a disorder actually has the disorder, while the negative predictive value (NPV) is the probability that a person who tests negative for a disorder does not actually have the disorder. Both values are affected by the prevalence of the disorder, which can vary in different locations and at different times. When the prevalence of the disorder increases, the PPV increases and the NPV decreases, and vice versa.
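
A minimal Python sketch (via Bayes' theorem) showing the effect: the same test, applied where the disorder is more prevalent, has a higher PPV and a lower NPV. The sensitivity, specificity, and prevalence values are hypothetical.

```python
def predictive_values(sensitivity, specificity, prevalence):
    # Expected cell proportions for a population screened with this test
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    tn = specificity * (1 - prevalence)        # true negatives
    fn = (1 - sensitivity) * prevalence        # false negatives
    return tp / (tp + fp), tn / (tn + fn)      # (PPV, NPV)

for prevalence in (0.10, 0.30):
    ppv, npv = predictive_values(0.90, 0.90, prevalence)
    print(prevalence, round(ppv, 2), round(npv, 2))
# 0.1 0.5 0.99   <- lower prevalence: lower PPV, higher NPV
# 0.3 0.79 0.95  <- higher prevalence: higher PPV, lower NPV
```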

Dr. Haar is concerned that the statistics tests she uses for her introductory statistics class are too difficult since so few students pass them. To make her tests a little easier, she will want to remove some items that have an item difficulty index (p) of ________ and add some items that have an item difficulty index of ________. A. +1.0 and higher; -1.0 and lower B. +.50 and higher; -.50 and lower C. .85 and higher; .15 and lower D. .15 and lower; .85 and higher

D. .15 and lower; .85 and higher The item difficulty index (p) ranges from 0 to 1.0, with 0 indicating a very difficult item (none of the examinees answered it correctly) and 1.0 indicating a very easy item (all examinees answered it correctly). Therefore, to make the statistics tests easier, Dr. Haar will want to remove some of the very difficult items (e.g., those with a p value of .15 and lower) and add some easy items (e.g., those with a p value of .85 and higher).

A job applicant's score on a selection test is used to predict what her future score on a measure of job performance will be if she's hired. If the applicant's predicted job performance score is 80 and the measure of job performance has a standard deviation of 7 and standard error of estimate of 3, the 99% confidence interval for the applicant's predicted score of 80 is: A. 73 to 87. B. 66 to 94. C. 74 to 86. D. 71 to 89.

D. 71 to 89. The 99% confidence interval for a predicted score is calculated by adding and subtracting three standard errors of estimate to and from the predicted score. In this situation, the applicant's predicted score is 80 and the standard error of estimate is 3, so the 99% confidence interval is 80 plus and minus 9 (three standard errors of estimate), which is 71 to 89.
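
The same arithmetic in Python (following this material's convention of 1, 2, and 3 standard errors for the 68%, 95%, and 99% levels):

```python
def confidence_interval(predicted: float, se_estimate: float, n_se: int):
    # CI = predicted score plus and minus n_se standard errors of estimate
    return predicted - n_se * se_estimate, predicted + n_se * se_estimate

print(confidence_interval(80, 3, n_se=3))  # (71, 89) -> the 99% interval
```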

A factor matrix indicates that one of the tests included in the factor analysis has a factor loading of .30 for Factor I. This means that ____ of variability in test scores is explained by Factor I. A. 81% B. 70% C. 30% D. 9%

D. 9% A factor loading is the correlation between a test and an identified factor and can be interpreted by squaring it to obtain a measure of shared variability. When a factor loading is .30, this means that 9% (.30 squared) of variability in the test is shared with (explained by) variability in the factor.

A manager and assistant manager were asked to rate 30 employees in terms of readiness for promotion. After reviewing each employee's file, the manager and assistant manager independently categorized employees as being ready or not ready for promotion. Which of the following is the appropriate technique for determining the inter-rater reliability of the ratings made by the manager and assistant manager? A. coefficient of determination B. coefficient alpha C. Kuder-Richardson 20 D. Cohen's kappa coefficient

D. Cohen's kappa coefficient Of the methods for assessing reliability listed in the answers, only Cohen's kappa coefficient is used to measure inter-rater reliability. It assesses the consistency of ratings assigned by two raters when the ratings represent a nominal scale (e.g., when two raters classify employees as either ready or not ready for promotion).

Before using a newly developed 10-item screening test to identify people who are depressed, you administer the test to a sample of clinic patients along with an established (validated) 50-item measure of depression and correlate the two sets of scores. In this situation, you are evaluating the screening test's: A. content validity. B. divergent validity. C. differential validity. D. concurrent validity.

D. concurrent validity. Concurrent validity is a type of criterion-related validity that involves determining how well a new predictor (e.g., a screening test) estimates current scores on a criterion (e.g., a validated measure of depression).

When using the multitrait-multimethod matrix to evaluate a test's validity, the matrix provides evidence of the test's __________ validity when scores on the test have high correlations with scores on other tests that measure the same or a related construct. A. differential B. incremental C. divergent D. convergent

D. convergent The multitrait-multimethod matrix provides information on a test's convergent and divergent validity which, in turn, provide information on the test's construct validity. A test has convergent validity when it has high correlations with tests that measure the same or a related construct, and it has divergent validity when it has low correlations with tests that measure an unrelated construct.

In the context of factor analysis, "oblique" means: A. statistically significant. B. statistically insignificant. C. uncorrelated. D. correlated.

D. correlated. The factors extracted (identified) in a factor analysis can be either orthogonal or oblique. Orthogonal factors are uncorrelated, while oblique factors are correlated.

In the context of test construction, "shrinkage" is associated with: A. inter-rater reliability. B. factor analysis. C. incremental validity. D. cross-validation.

D. cross-validation. Shrinkage is associated with cross-validation and refers to the fact that a validity coefficient is likely to be smaller than the original coefficient when the predictor(s) and criterion are administered to another (cross-validation) sample. Shrinkage occurs because the chance factors that contributed to the relationship between the predictor(s) and criterion in the original sample are not present in the cross-validation sample.

When using the multitrait-multimethod matrix to assess a test's construct validity, a large heterotrait-monomethod coefficient indicates which of the following? A. adequate convergent validity B. inadequate convergent validity C. adequate divergent validity D. inadequate divergent validity

D. inadequate divergent validity The heterotrait-monomethod coefficient indicates the correlation between the test being evaluated and a measure of a different trait (heterotrait) using the same method of measurement (monomethod). For example, if the multitrait-multimethod matrix is being used to assess the construct validity of a self-report measure of assertiveness, a heterotrait-monomethod coefficient might indicate the correlation between the self-report measure of assertiveness and a self-report measure of seriousness. When this coefficient is small, it provides evidence of the test's divergent validity; when it's large, it indicates that the test has inadequate divergent validity. (A measure's construct validity is demonstrated when it has adequate levels of both divergent and convergent validity.)

A problem with using percent agreement as a measure of inter-rater reliability is that it may: A. underestimate reliability because it's susceptible to rater biases. B. overestimate reliability because it's susceptible to rater biases. C. underestimate reliability because it's affected by chance agreement. D. overestimate reliability because it's affected by chance agreement.

D. overestimate reliability because it's affected by chance agreement. A certain amount of chance agreement between two or more raters is possible, especially for behavior observation scales when the behavior occurs frequently. Percent agreement is easy to calculate but, because it's affected by chance agreement, it may overestimate a measure's inter-rater reliability.

The use of banding to assist with hiring decisions is based on the assumption that: A. the standard error of estimate is not the same magnitude throughout the distribution of selection test scores. B. small differences in criterion-related validity coefficients are not necessarily associated with meaningful differences in the accuracy of predictions of job performance. C. adding more predictors to a selection procedure will not necessarily lead to more accuracy in hiring decisions. D. small differences in selection test scores are not necessarily associated with meaningful differences in job performance.

D. small differences in selection test scores are not necessarily associated with meaningful differences in job performance. Banding is also known as statistical banding and test-score banding. When using banding, score bands (intervals) are created, usually using the test's standard error of measurement. Examinees whose test scores fall within the same band are considered to be equal in terms of the attribute(s) measured by the test. Banding is based on the assumption that the differences in selection test scores within a band are not associated with significant differences in job performance.

Which aspect of an item characteristic curve (ICC) indicates the probability of choosing the correct answer to the item by guessing alone? A. the position of the curve B. the slope of the curve C. the point at which the curve intercepts the x-axis D. the point at which the curve intercepts the y-axis

D. the point at which the curve intercepts the y-axis The various item response theory models produce item characteristic curves that provide information on one, two, or three parameters, with the three parameters being item difficulty, item discrimination, and the probability of guessing correctly. The probability of guessing correctly is indicated by the point at which the curve intercepts the y-axis.
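
As an illustration, here is the three-parameter logistic (3PL) model in Python: at very low ability levels the curve flattens out at the guessing parameter c, which is where it meets the y-axis of the plot (the a, b, and c values are hypothetical):

```python
import math

def three_pl(theta: float, a: float, b: float, c: float) -> float:
    # a = discrimination (slope), b = difficulty (position),
    # c = guessing parameter (lower asymptote / y-intercept)
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

# Probability of a correct answer at very low ability approaches c:
print(round(three_pl(theta=-10, a=1.0, b=0.0, c=0.25), 2))  # 0.25
```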

