Ch.7

As the coefficient approaches 1.00, the set of measures is viewed as: a. equivalent. b. identical. c. very different. d. unrelated.

A

In order to calculate an interclass correlation, how many raters are necessary? a. 2 b. 2 or more c. more than 2 d. more than 3

A

Which of the following is not a method for estimating internal consistency reliability? a. parallel or equivalent forms reliability b. Kuder-Richardson reliability c. Cronbach's coefficient alpha reliability d. split-half reliability

A

An obtained score consists of which two components? a. controllable and uncontrollable b. true and error c. systematic and unsystematic d. true and predictive

B

How many test administrations do you need in order to calculate a split-half reliability estimate? a. 1/2 b. 1 c. 2 d. 1/4

B

What is a true score? a. the score obtained for a person under normal conditions b. the score obtained because of the presence of external factors c. the mean/average score made by a person on many different administrations of tests d. the standard deviation on many different administrations of the same test on the same individual

C

What is the difference between interclass and intraclass correlations (reliability estimates)? a. minimum number of targets being rated b. minimum number of equivalent forms being used c. minimum number of raters needed for calculation d. minimum number of attributes measured

C

A good rule of thumb is that reliability must be .90 or higher. T/F

F

A selection measure is internally consistent or homogeneous when individuals' responses on one part of the measure are unrelated to their responses on other parts. T/F

F

A split-half reliability overestimates actual reliability. Therefore, a special formula, the Spearman-Brown prophecy formula, is used to make the correction. T/F

F
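The Spearman-Brown prophecy formula mentioned above is standard: a half-test correlation is projected up to full length via r_n = n·r / (1 + (n − 1)·r). A minimal sketch in Python (the source contains no code; the function name is ours):

```python
def spearman_brown(r_half, n=2):
    """Spearman-Brown prophecy formula: estimated reliability of a
    measure lengthened by a factor n, given reliability r_half."""
    return n * r_half / (1 + (n - 1) * r_half)

# A half-test correlation of .70 projects to a full-test
# reliability of 2(.70)/(1 + .70) ~= .82
print(round(spearman_brown(0.70), 2))  # 0.82
```

With n = 2 this corrects the split-half estimate, which is based on a test only half as long as the full measure.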

The standard error of measurement is another approach for estimating reliability. T/F

F

To achieve a parallel forms reliability estimate, at least two equal versions of a measure must exist. T/F

F

To control the effects of memory on test-retest reliability estimates, the same measure should be used the second time. T/F

F

A split-half reliability estimate is NOT a pure measure of internal consistency. T/F

T

Although interrater agreement indices have their limitations, they are still widely used in selection research. T/F

T

For which of the following selection measures is it most appropriate to use equivalent forms for reliability estimation? a. vocabulary b. personality inventory c. biographical inventory d. physical fitness

A

If rxx = .85, and the standard deviation of x is 10, then the standard error of measure- ment for measure x is a. 3.873 b. 3.16 c. 3.50 d. 3.30

A
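The arithmetic behind answer (a) uses the standard formula SEM = SD · √(1 − r_xx). A quick check in Python (function name is ours):

```python
import math

def standard_error_of_measurement(sd, rxx):
    """SEM = standard deviation times sqrt(1 - reliability)."""
    return sd * math.sqrt(1 - rxx)

# rxx = .85, SD = 10  ->  10 * sqrt(.15) ~= 3.873 (answer a)
print(round(standard_error_of_measurement(10, 0.85), 3))  # 3.873
```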

Test-retest reliability estimation is most appropriate for which of the following? a. mental ability b. attitudes c. self-esteem d. self-concept

A

What is a correlation coefficient calculated between two sets of scores over time called? a. coefficient of stability b. coefficient of equivalence c. coefficient alpha d. coefficient of dependability

A

Calculation of reliability estimates results in a coefficient ranging from _____ to a. 0, 1.96 b. 0.00; 1.00 c. -1.00; 1.00 d. -1.00; 0.00

B

For a test with a time limit (i.e., a speed test), which reliability estimation procedure is not appropriate? a. test-retest b. split-half c. parallel or equivalent forms (immediate administration) d. parallel or equivalent forms (long-term administration)

B

In order to calculate an intraclass correlation, how many raters are necessary? a. 2 b. more than 2 c. 2 or more d. any number will do

B

The difference between two individuals' scores should not be considered significant unless the difference is at least ___________ the standard error of measurement of the measure. a. equal to b. twice c. three times d. four times

B
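The rule of thumb in answer (b) can be sketched as a simple check (illustrative values and function name are ours):

```python
def meaningful_difference(score_a, score_b, sem):
    """Rule of thumb: treat a score difference as significant only if
    it is at least twice the standard error of measurement."""
    return abs(score_a - score_b) >= 2 * sem

# With SEM = 3.16, a 3-point gap could easily be due to chance:
print(meaningful_difference(50, 53, 3.16))  # False
```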

If we have a test called "x" and rxx = .80, this means a. 80% of the differences in test scores is due to error and only 20% is due to true variance. b. 20% of the test scores were used to obtain the reliability estimate. c. 20% of the differences in test scores is due to error and 80% is due to true variance. d. the test average is in the low 'B' range.

C
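Answer (c) rests on the variance decomposition behind the reliability coefficient: r_xx is the proportion of obtained-score variance that is true-score variance, so 1 − r_xx is the proportion due to error. A one-line illustration:

```python
rxx = 0.80                 # reliability of test "x"
true_var_share = rxx       # proportion of score variance that is true variance
error_var_share = 1 - rxx  # proportion attributable to measurement error
print(f"{true_var_share:.0%} true, {error_var_share:.0%} error")  # 80% true, 20% error
```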

Among the most popular internal consistency methods are all of these EXCEPT: a. Cronbach's coefficient alpha reliability b. Kuder-Richardson reliability c. split-half reliability d. Guion's measurement

D

Research has shown that the reliability of rating scales can be improved by offering from ____ to _____ rating categories: a. 1, 4 b. 1, 5 c. 3, 7 d. 5, 9

D

What impact does memory have on a test-retest reliability estimate? a. It is not possible to determine its effect on reliability. b. It has no effect on test-retest reliability. c. It will underestimate the true reliability of obtained scores. d. It will overestimate the true reliability of obtained scores.

D

What reliability estimate consists of administering the same selection measure twice and correlating the two sets of scores? a. parallel forms b. split-half c. internal consistency d. test-retest

D

Which of the following is NOT one of the categories of statistical procedures for estimating interrater reliability? a. interclass correlation b. intraclass correlation c. interrater agreement d. underclass correlation

D

Which of the following would NOT be a likely cause of interrater disagreement? a. Raters view the same behavior differently. b. Raters interpret the same behavior differently. c. Error in rating or recording each impression. d. Length of time behavior is displayed.

D

With a long time interval between administrations of a measure (test-retest), what could cause scores to change resulting in an underestimate of the reliability? a. reasoning b. thinking c. memory d. learning

D

If all respondents on a selection measure remember their previous answers to an initial administration of a measure and then on the retest respond according to their memory, the reliability coefficient will decrease. T/F

F

In general, the amount of measurement error has little effect on how high the reliability of measurement will be. T/F

F

Interrater agreement indices are generally restricted to interval or ratio data. T/F

F

Kuder-Richardson reliability procedures are rarely used. T/F

F

Selection measures involving traits of personality, attitudes, or interests are usually considered to be fairly static, yielding high reliability coefficients. T/F

F

Selection measures that are designed to assess job-related characteristics are more precise than measures of physical characteristics. T/F

F

Split-half reliability procedures tend to produce a conservative estimate of reliability. T/F

F

Surprisingly, increasing the length of time between administrations does not reduce the impact of memory effects on reliability. T/F

F

Tests with many items that are very difficult are more reliable than tests containing many items of moderate difficulty. T/F

F

The standard error of measurement is affected by variability within the group of respondents to whom a measure has been administered. T/F

F

As the number of response options or categories on a measure increases, reliability also increases. T/F

T

Because of the way it is calculated, a higher reliability coefficient is desirable. T/F

T

An error score represents errors of measurement. T/F

T

Generally speaking, the greater the variability or standard deviation of scores on the characteristic measured, the higher the reliability of the measure of that characteristic. T/F

T

If coefficient alpha reliability is unacceptably low, then the items on the selection measure may be assessing more than one characteristic. T/F

T

If our standard error is 3.16 and the difference between two applicants' scores is 3, then it is possible that the difference in scores is due to chance. T/F

T

If variability or individual differences increase among respondents while variation within individuals remains the same, reliability will increase. T/F

T

In general, as the length of a measure increases, its reliability increases. T/F

T

In the context of personnel selection, the reliability of criterion measures need not be as high as that of predictor measures. T/F

T

Interrater reliability estimates test the hypothesis that ratings are determined by characteristics of the rater rather than by what is being rated. T/F

T

Kuder-Richardson reliability estimates are usually lower than those obtained from split-half estimates. T/F

T

Reliability coefficients computed between parallel forms tend to be conservative estimates. T/F

T

Reliability is a group-based statistic. T/F

T

Reliability is a necessary but not sufficient condition for validity. T/F

T

Reliability is generally determined by examining the relationship between two sets of measures measuring the same thing. T/F

T

Reliability of measurement in selection is synonymous with dependability, consistency, or stability of measurement. T/F

T

Selection measures are not simply "reliable" or "not reliable"; there are degrees of reliability. T/F

T

The higher the test-retest reliability coefficient, the greater the true score component and the less the error. T/F

T

The higher the value of a reliability coefficient, the less measurement error. T/F

T

Unreliable performance by a respondent on a reliable measure is possible, but reliable performance on an unreliable measure is impossible. T/F

T

When a measure is perfectly reliable, its obtained score is equal to its true score. T/F

T

With a long time interval between administrations of a measure, test-retest reliability may underestimate reliability. T/F

T

With increasing time intervals, test-retest reliability coefficients will generally decrease T/F

T

