Chapter 11

The type of validity that employs primarily judgment in its assessment is which of the following? A) Content B) Concurrent C) Predictive D) Construct

Ans: A Feedback: Content validity concerns the sampling adequacy of the content being measured and is based on judgment. Expert ratings on the relevance of items can be used to compute a content validity index (CVI). Criterion-related validity (which includes both predictive validity and concurrent validity) focuses on the correlation between an instrument and an outside criterion. Construct validity, an instrument's adequacy in measuring the targeted construct, is a hypothesis-testing endeavor.

Which of the following terms does not belong with the other three? A) Face validity B) Criterion-related validity C) Predictive validity D) Concurrent validity

Ans: A Feedback: Face validity refers to whether an instrument looks as though it is measuring the appropriate construct. Criterion-related validity (which includes both predictive validity and concurrent validity) focuses on the correlation between an instrument and an outside criterion.

If the coefficient alpha for a stress scale was computed to be .80, the scale would be which of the following? A) More reliable than a scale with an alpha of .50 B) A valid indicator of stress C) Of indeterminate reliability until the scale's test-retest reliability was assessed D) Of unacceptably low reliability

Ans: A Feedback: Internal consistency is evaluated by calculating coefficient alpha (or Cronbach's alpha). The normal range of values for this reliability index is from .00 to +1.00. The higher the coefficient, the more accurate (internally consistent) the measure.
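
As a rough illustration of how such a coefficient is computed (this sketch is not from the text; the item scores, respondent data, and function name are all invented), the snippet below calculates Cronbach's alpha for a hypothetical 4-item stress scale:

```python
# Hypothetical example: Cronbach's alpha for a 4-item scale, 5 respondents.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: rows are respondents, columns are scale items."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scale scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

scores = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 5, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
])
print(round(cronbach_alpha(scores), 2))  # about 0.95 for these invented scores
```

The closer the result is to +1.00, the more internally consistent the scale; a coefficient of .80 therefore indicates better internal consistency than .50.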

Cronbach's alpha is used to assess which of the following attributes of an instrument? A) Internal consistency B) Stability C) Equivalence D) Sensitivity

Ans: A Feedback: Internal consistency reliability, which refers to the extent to which all of the instrument's items are measuring the same attribute, is usually assessed with Cronbach's alpha. The stability aspect of reliability, which concerns the extent to which an instrument yields similar results on two administrations, is evaluated by test-retest procedures. Sensitivity is the instrument's ability to identify a case correctly (i.e., its rate of yielding true positives). Equivalence, in reliability assessment, primarily concerns the degree to which two or more independent observers or coders agree about scoring on an instrument.

Which of the following is the term representing the "difference between a true and obtained score"? A) Error of measurement B) An observed score C) Response-set bias D) Situational contaminant

Ans: A Feedback: An obtained (observed) score consists of two parts: a true component and an error component. Error of measurement is the difference between true and obtained scores. Measurement procedures and the people being measured are susceptible to influences such as situational contaminants, response-set biases, transitory personal factors, and item sampling, all of which contribute to measurement error.
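
A minimal sketch of this idea (all numbers below are invented for illustration): in classical test theory the obtained score is modeled as the true score plus random error, so reliability can be viewed as the proportion of observed-score variance that is true-score variance.

```python
# Hypothetical simulation of obtained score = true score + measurement error.
import numpy as np

rng = np.random.default_rng(0)
true_scores = rng.normal(50, 10, size=1000)   # unobservable true scores
error = rng.normal(0, 5, size=1000)           # random measurement error
obtained = true_scores + error                # what the instrument actually records

# Reliability viewed as true-score variance over observed-score variance.
reliability = true_scores.var() / obtained.var()
print(round(reliability, 2))                  # roughly 10**2 / (10**2 + 5**2) = 0.80
```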

One source of measurement error in social-psychological scales is which of the following? A) Response-set bias B) Nonresponse bias C) Attrition bias D) Selection bias

Ans: A Feedback: Response-set biases are enduring characteristics of respondents that can interfere with accurate measurement. Sources of measurement error include situational contaminants, response-set biases, item sampling, and transitory personal factors. Nonresponse bias, attrition bias, and selection bias relate to sampling and study participation; they are not sources of measurement error.

With screening or diagnostic instruments, the concept indicating the instrument's ability to correctly identify a "case" (i.e., to screen in or diagnose a condition correctly) is which of the following? A) Sensitivity B) Stability C) Specificity D) Sensibility

Ans: A Feedback: Sensitivity is the instrument's ability to identify a case correctly (i.e., its rate of yielding true positives). Specificity is the instrument's ability to identify non-cases correctly (i.e., its rate of yielding true negatives). The stability aspect of reliability, which concerns the extent to which an instrument yields similar results on two administrations, is evaluated by test-retest procedures. There is no research term known as sensibility.
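
For a concrete illustration of these two indexes (the counts below are hypothetical, not from the text), suppose a screening instrument is compared against a gold-standard diagnosis:

```python
# Hypothetical 2x2 screening results compared with a gold-standard diagnosis.
true_positives = 45    # cases the instrument correctly screened in
false_negatives = 5    # cases the instrument missed
true_negatives = 90    # non-cases correctly screened out
false_positives = 10   # non-cases incorrectly flagged as cases

sensitivity = true_positives / (true_positives + false_negatives)   # 45/50 = 0.90
specificity = true_negatives / (true_negatives + false_positives)   # 90/100 = 0.90
print(sensitivity, specificity)
```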

The purpose of a study was to determine the effectiveness of two different analgesic agents in controlling pain during menstruation in females ages 18 to 21 years. Researchers administered a self-reported pain scale on days 2 and 3 of the menses. You infer that this procedure is an example of which of the following? A) Test-retest reliability B) Random sampling procedures C) Internal validity of the study D) Construct validation

Ans: A Feedback: The stability of an instrument is the degree to which similar results are obtained on separate occasions. Stability is assessed through test-retest reliability procedures. Researchers administer the measure to the same sample twice and then compare the scores. The self-reported pain scale administered on two different occasions is an example of test-retest reliability, not of random sampling procedures or construct validation. The internal validity of the study is not being measured.
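
As a sketch of how such a test-retest assessment is usually summarized (the pain ratings below are invented, not data from the study described), the two administrations are typically correlated:

```python
# Hypothetical pain ratings from two administrations of the same scale.
import numpy as np

day2 = np.array([7, 5, 8, 4, 6, 9, 3])   # ratings on the first administration
day3 = np.array([6, 5, 7, 4, 6, 8, 4])   # the same respondents' ratings on the second

r = np.corrcoef(day2, day3)[0, 1]        # reliability coefficient (Pearson r)
print(round(r, 2))                       # values near 1.0 suggest a stable measure
```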

Blood type is measured on which of the following? A) Nominal scale B) Ordinal scale C) Interval scale D) Ratio scale

Ans: A Feedback: There are four levels of measurement: (1) nominal measurement, the classification of attributes into mutually exclusive categories such as blood type; (2) ordinal measurement, the ranking of people based on their relative standing on an attribute; (3) interval measurement, indicating not only people's rank order but the distance between them; and (4) ratio measurement, distinguished from interval measurement by having a rational zero point and thus providing information about the absolute magnitude of the attribute.

A group of nurse researchers specializing in the care of pediatric oncology patients decides to interview nurses caring for pediatric oncology patients to determine patterns of nurse caring. After deciding on fifteen interview questions, they submit their draft to five pediatric oncology nurse practitioners for input. This practice illustrates obtaining which of the following? A) Internal consistency B) Content validity C) Face validity D) Equivalency

Ans: B Feedback: An instrument's content validity is based on the judgment of experts evaluating the instrument, which is the case here. There is no totally objective method for ensuring adequate content coverage, but a panel of content experts is often asked to evaluate the content validity of new instruments. Researchers can calculate a content validity index (CVI) that indicates the extent of expert agreement. Internal consistency is a measure of the extent to which all of an instrument's items measure the same thing. Equivalence, in reliability assessment, primarily concerns the degree to which two or more independent observers or coders agree about scoring on an instrument. Face validity refers to whether an instrument looks as though it is measuring the appropriate construct.
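
One common way to compute a CVI (the ratings and item names below are hypothetical, not the panel described above) is to have each expert rate every item's relevance on a 4-point scale; an item-level CVI is the proportion of experts rating the item 3 or 4, and a scale-level CVI is often reported as the average of the item-level values:

```python
# Hypothetical relevance ratings (1-4) from five experts for three draft items.
ratings = {
    "item_1": [4, 4, 3, 4, 4],
    "item_2": [3, 4, 4, 2, 4],
    "item_3": [4, 3, 4, 4, 3],
}

# Item-level CVI: proportion of experts rating the item as relevant (3 or 4).
item_cvi = {item: sum(r >= 3 for r in scores) / len(scores)
            for item, scores in ratings.items()}

# Scale-level CVI reported as the average of the item-level CVIs.
scale_cvi = sum(item_cvi.values()) / len(item_cvi)
print(item_cvi, round(scale_cvi, 2))
```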

Which of the following is an example of a ratio measurement? A) Fahrenheit scale for measuring temperature B) Body mass index C) Quality of life scale score D) Levels of education

Ans: B Feedback: Body mass index is an example of ratio measurement because it has a meaningful zero and provides information about the absolute magnitude of an attribute (body mass). Ratio measurement is the highest level of measurement. The Fahrenheit scale for measuring temperature (interval measurement) has an arbitrary zero point. A quality of life scale score is an example of interval measurement, and level of education is an example of ordinal measurement.

Which of the following is indicated by the "content validity index (CVI)"? A) Internal consistency of the measured items within one measure B) The extent of expert agreement ensuring adequate content coverage C) Equivalence of two separate forms of a measure D) Criterion-related assessment

Ans: B Feedback: Content validity refers to the degree to which an instrument has an appropriate sample of items for the construct being measured. An instrument's content validity is based on the judgment of experts evaluating the instrument. There is no totally objective method for ensuring adequate content coverage, but a panel of content experts is often asked to evaluate the content validity of new instruments. An instrument is internally consistent to the extent that its items measure the same trait. Equivalence, in reliability assessment, primarily concerns the degree to which two or more independent observers or coders agree about scoring on an instrument. In criterion-related validity assessments, researchers examine the relationship between scores on an instrument and an external criterion.

Which of the following is an example of a nominal measurement? A) Grams of carbohydrate intake B) Hand dominance (right or left) C) Emotional intelligence quotients D) Age in years

Ans: B Feedback: Hand dominance (right or left) is an example of nominal measurement. Nominal measurement involves using numbers simply to categorize attributes. Numbers in nominal measurement do not have quantitative meaning. Nominal measurement provides information only about categorical equivalence; therefore, numbers used in nominal measurement can only be treated categorically. Grams of carbohydrate intake and age in years are examples of ratio measurement, and emotional intelligence quotient is an example of interval measurement.

A group of 150 seniors with type II diabetes consented to a study examining the relationship between self-care and quality of life. The seniors received didactic classes on proper diet, exercise, stress, and medication adherence, along with 30-minute low-impact exercise sessions once monthly for a period of 6 months. Ordinal-level data collected during the study would include which of the following? A) Gender, ethnicity B) Education level, Heart Association classification C) Age, body mass index D) Scores on a self-care index

Ans: B Feedback: Ordinal measurement ranks people on their relative standing on an attribute. Levels of education signify incremental attainment according to the degree completed, and Heart Association classification indicates level of heart health. Gender and ethnicity are measured at the nominal level. Age and body mass index would be associated with ratio measurement. Scores on a self-care index would be an example of interval measurement.

Which of the following is an example of ratio measurement? A) Likert scale response to questions B) Twenty-four-hour oral cc intake C) Eye color (blue, brown, hazel, green) D) Ability to perform activities of daily living

Ans: B Feedback: Ratio measurement is the highest level of measurement. Ratio scales, unlike interval scales, have a meaningful zero and thus provide information about the absolute magnitude of the attribute. A person's twenty-four-hour oral cc intake has a meaningful zero; that is, the absolute amount of oral intake may be measured. Likert scale responses and ability to perform activities of daily living are examples of ordinal measurement, which ranks people based on relative standing on an attribute but cannot indicate the distance between them. Eye color is an example of nominal measurement, which involves using numbers simply to categorize attributes.

A measure of which of the following traits would be a particularly good candidate for a test-retest reliability assessment? A) Anxiety B) Fear of heights C) Mood D) Fatigue

Ans: B Feedback: The stability aspect of reliability, which concerns the extent to which an instrument yields similar results on two administrations, is evaluated by test-retest procedures. Attitudes, mood, and so forth can be changed by experiences between two measurements. Thus, stability indexes are most appropriate for fairly enduring characteristics, such as temperament or fear of heights. Anxiety, mood, and fatigue are not fairly enduring characteristics.

Type of college degree (associate's, bachelor's, master's, doctorate) is measured on which of the following scales? A) Nominal B) Ordinal C) Interval D) Ratio

Ans: B Feedback: There are four levels of measurement: (1) nominal measurement, the classification of attributes into mutually exclusive categories such as blood type; (2) ordinal measurement, the ranking of people based on their relative standing on an attribute such as type of college degree; (3) interval measurement, indicating not only people's rank order but the distance between them; and (4) ratio measurement, distinguished from interval measurement by having a rational zero point. College degrees are ordinal measures because they indicate a person's relative standing in terms of education but do not indicate the distance between the categories.

The level of measurement that classifies and ranks people in terms of the degree to which they possess the attribute of interest is which of the following? A) Nominal B) Ordinal C) Interval D) Ratio

Ans: B Feedback: There are four levels of measurement: (1) nominal measurement, the classification of attributes into mutually exclusive categories; (2) ordinal measurement, the ranking of people based on their relative standing on an attribute; (3) interval measurement, indicating not only people's rank order but the distance between them; and (4) ratio measurement, distinguished from interval measurement by having a rational zero point and thus providing information about the absolute magnitude of the attribute.

Which aspect of reliability does "Cronbach's alpha" indicate? A) Measurement stability over time B) Equivalence of two separate forms of a measure C) Internal consistency of the measured items within one measure D) The extent of expert agreement ensuring adequate content coverage

Ans: C Feedback: An instrument is internally consistent to the extent that its items measure the same trait. Internal consistency reliability is the best way to assess an important source of measurement error in scales, the sampling of items. Internal consistency is evaluated by calculating coefficient alpha (Cronbach's alpha). The higher the coefficient, the more accurate (internally consistent) the measure. Cronbach's alpha does not indicate stability, equivalence, or content validity (extent of expert agreement ensuring adequate content coverage).

A study's purpose was to note maternal responses to infant cues within the first 48 hours after birth. The investigator and research assistant simultaneously but independently observed and scored the new mothers' behaviors while holding their infants en face. The agreement between the two raters can be described as which of the following? A) Content validity of the scoring instrument B) Internal validity of the research design C) Reliability of the scoring instrument D) External validity of the research design

Ans: C Feedback: Equivalence, in reliability assessment, primarily concerns the degree to which two or more independent observers or coders agree about scoring on an instrument. If there is a high level of agreement, then the assumption is that measurement errors have been minimized. The degree of error can be assessed through interrater or interobserver reliability procedures, which involve having two or more observers or coders make independent observations.
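
A minimal sketch of how such agreement might be summarized (the observation codes below are invented, not data from the study above): the simplest index is the proportion of observations on which the two raters assign the same code; Cohen's kappa is a common alternative that also corrects for chance agreement.

```python
# Hypothetical codes assigned independently by two observers to 8 episodes.
rater_a = ["cue", "no_cue", "cue", "cue", "no_cue", "cue", "cue", "no_cue"]
rater_b = ["cue", "no_cue", "cue", "no_cue", "no_cue", "cue", "cue", "no_cue"]

agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = agreements / len(rater_a)
print(round(percent_agreement, 2))   # 7 of the 8 codes match here (0.88)
```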

The aspect of reliability for which an interobserver reliability assessment is appropriate is which of the following? A) Stability B) Internal consistency C) Equivalence D) Specificity

Ans: C Feedback: Equivalence, in reliability assessment, primarily concerns the degree to which two or more independent observers or coders agree about scoring on an instrument. Internal consistency reliability, which refers to the extent to which all of the instrument's items are measuring the same attribute, is usually assessed with Cronbach's alpha. The stability aspect of reliability, which concerns the extent to which an instrument yields similar results on two administrations, is evaluated by test-retest procedures. Specificity is the instrument's ability to identify non-cases correctly (i.e., its rate of yielding true negatives).

Which of the following is an example of an ordinal measurement? A) Gender (male or female) B) Milligrams of a medication dosage C) Levels of education (associate's degree, bachelor's degree, master's degree) D) Score on the HESI preadmission or first year examination for nursing

Ans: C Feedback: Ordinal measurement ranks people on their relative standing on an attribute. Levels of education signify incremental attainment according to the degree completed. Ordinal measurement does not, however, differentiate how much greater one level is than another, so the mathematical operations permissible with ordinal-level data are restricted. Gender is an example of nominal measurement. Milligrams of a medication dosage is an example of ratio measurement, as it has a meaningful zero and can provide information about the absolute magnitude of the attribute. A score on a test is an example of interval measurement, as it can rank people on an attribute and specify the distance between them but does not have a meaningful zero.

Which of the following most accurately describes the relationship between reliability and validity? A) If a measure is reliable, it will be valid. B) As reliability increases, validity decreases. C) If a measure is not reliable, it cannot be valid. D) There is no direct relationship between the two.

Ans: C Feedback: Reliability and validity are not independent qualities of an instrument. A measuring device that is unreliable cannot be valid. An instrument cannot validly measure an attribute if it is erratic and inaccurate. An instrument can be reliable without being valid. An instrument's high reliability provides no evidence of its validity, but low reliability of a measure is evidence of low validity.

The Beck Depression Inventory (BDI) consists of 20 items describing depressive symptoms. Subjects respond on a Likert-type scale, rating each item (0 = no symptom to 3 = persistent or severe symptom presence). Previous research indicates that test-retest reliability coefficients for the BDI ranged from 0.60 to 0.90. Critiquing the above statements, you conclude which of the following? A) Evidence of the stability aspect of validity of the BDI is supported B) Conceptual and operational definitions of depression are not consistent C) Evidence of the stability aspect of reliability is supported D) The BDI will yield high amounts of error with obtained scores

Ans: C Feedback: The stability of an instrument is the degree to which similar results are obtained on separate occasions and is an aspect of reliability, not validity. Stability is assessed through test-retest reliability procedures. Researchers administer the measure to the same sample twice and then compare the scores. The test-retest results do not indicate that conceptual and operational definitions of depression are inconsistent, nor do they indicate that the BDI will yield high amounts of error in obtained scores.

It is not meaningful to calculate an arithmetic average with data from which of the following? A) Nominal measures B) Ordinal measures C) Nominal and ordinal measures D) All measures can be meaningfully averaged

Ans: C Feedback: There are four levels of measurement: (1) nominal measurement, the classification of attributes into mutually exclusive categories such as blood type; (2) ordinal measurement, the ranking of people based on their relative standing on an attribute; (3) interval measurement, indicating not only people's rank order but the distance between them; and (4) ratio measurement, distinguished from interval measurement by having a rational zero point and thus providing information about the absolute magnitude of the attribute. It is not meaningful to calculate an arithmetic average with data from nominal or ordinal measures. Interval and ratio measurements can be meaningfully averaged, whereas nominal and ordinal measurements cannot.

A nurse researcher is evaluating a revised self-esteem questionnaire to determine whether all of the items on the questionnaire actually effectively measure self-esteem. Which aspect of reliability is she evaluating? A) Equivalence B) Validity C) Stability D) Internal consistency

Ans: D Feedback: An instrument is internally consistent to the extent that its items measure the same trait. Internal consistency reliability is the best way to assess an important source of measurement error in scales, the sampling of items. Internal consistency is evaluated by calculating coefficient alpha (Cronbach's alpha). The higher the coefficient, the more accurate (internally consistent) the measure.

Suppose a researcher were interested in assessing the adequacy of an instrument to measure the theoretical concept of hopefulness. The most appropriate type of validation procedure would be which of the following? A) Content B) Concurrent C) Predictive D) Construct

Ans: D Feedback: Construct validity, an instrument's adequacy in measuring the targeted construct, is a hypothesis-testing endeavor. Content validity concerns the sampling adequacy of the content being measured. Expert ratings on the relevance of items can be used to compute a content validity index (CVI). Criterion-related validity (which includes both predictive validity and concurrent validity) focuses on the correlation between an instrument and an outside criterion.

A researcher at a school of nursing decides to investigate the correlation between a pre-admission HESI examination, high school GPA, and SAT scores as predictors of success in completing first-year study. These admission variables will be reviewed again with grades achieved after the first year is completed. This use of data is known as which of the following? A) Construct validation B) Known-groups technique C) Concurrent validity D) Predictive validity

Ans: D Feedback: Predictive validity is an instrument's ability to differentiate between people's performances on a future criterion. When a researcher correlates applicants' pre-admission test scores, high school grades, and SAT scores with subsequent grade point averages, predictive validity is being evaluated. Construct validity, a key criterion for assessing research quality, has most often been linked to measurement; it concerns the questions, What is this instrument really measuring? and Does it validly measure the abstract concept of interest? One approach to construct validation is the known-groups technique, in which groups expected to differ on the target attribute are administered the instrument and their scores are compared. Concurrent validity is an instrument's ability to distinguish among people who differ presently on a criterion.
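
As a sketch of how such predictive validity coefficients are estimated (the applicant records below are hypothetical, not data from the scenario above), each admission variable is correlated with the later criterion, first-year GPA:

```python
# Hypothetical admission data and first-year GPA for seven students.
import numpy as np

hesi = np.array([78, 85, 90, 70, 88, 95, 82])                   # pre-admission HESI scores
hs_gpa = np.array([3.1, 3.5, 3.8, 2.9, 3.6, 3.9, 3.3])          # high school GPAs
first_year_gpa = np.array([2.9, 3.4, 3.7, 2.7, 3.5, 3.8, 3.2])  # future criterion

for name, predictor in {"HESI": hesi, "High school GPA": hs_gpa}.items():
    r = np.corrcoef(predictor, first_year_gpa)[0, 1]
    print(name, round(r, 2))   # larger r = stronger evidence of predictive validity
```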

The difference between a true score and an obtained score is referred to as which of the following? A) Internal inconsistency B) Non-equivalence C) Interobserver disagreement D) Error of measurement

Ans: D Feedback: The obtained (or observed) score is the value yielded by a measurement. The true score is the value that would be obtained if it were possible to have an infallible measure; it is hypothetical and cannot be known because measures are not infallible. The error of measurement is the difference between true and obtained scores. When the reliability assessment focuses on equivalence between observers or coders assigning scores, estimates of interrater (or interobserver) reliability are obtained. Internal consistency reliability, which refers to the extent to which all of the instrument's items are measuring the same attribute, is usually assessed with Cronbach's alpha.

