NR ch15


What is the most important thing a research consumer should consider when evaluating research studies?

- Could something else explain the results?
- The most important question to ask yourself as you read experimental studies is "What else could have happened to explain the findings?"

What type of research study is needed for instrument development?

A psychometric study; psychometrics focuses on the theory and development of measurement instruments.

What critiquing questions should be asked when assessing the reliability and validity of a research instrument?

- If the sample from the developmental stage of the tool was different from the current sample, were the reliability and validity recalculated to determine whether or not the tool is appropriate for use in a different population?
- Have the strengths and weaknesses related to the reliability and validity of each instrument been presented?
- What kinds of threats to internal or external validity are presented by weaknesses in reliability and/or validity?
- Are strengths and weaknesses of the reliability and validity appropriately addressed in the "discussion," "limitations," or "recommendations" sections of the report?
- How do the reliability and validity affect the strength and quality of the evidence provided by the study findings?

What critiquing questions should be asked when assessing reliability?

- Was an appropriate method used to test the reliability of the tool?
- Is the reliability of the tool adequate?

What critiquing questions should be asked when assessing validity?

- Was an appropriate method used to test the validity of the instrument?
- Is the validity of the measurement tool adequate?

factor analysis

- Assesses construct validity.
- A procedure that gives the researcher information about the extent to which a set of items measures the same underlying concept (variable) of a construct.
- Assesses the degree to which the individual items on a scale truly cluster around one or more concepts.
- Items designed to measure the same concept should load on the same factor; those designed to measure different concepts should load on different factors.
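As a study aid only, here is a minimal sketch of checking item loadings with scikit-learn's FactorAnalysis. The library choice and the simulated item responses are assumptions for illustration, not part of the card above.

```python
# Sketch: do scale items cluster on the expected factors?
# Assumes numpy and scikit-learn are installed; the data are simulated.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents = 200

# Simulate two underlying concepts; items 1-3 tap concept A, items 4-6 tap concept B.
concept_a = rng.normal(size=n_respondents)
concept_b = rng.normal(size=n_respondents)
items = np.column_stack([
    concept_a + rng.normal(scale=0.5, size=n_respondents),  # item 1
    concept_a + rng.normal(scale=0.5, size=n_respondents),  # item 2
    concept_a + rng.normal(scale=0.5, size=n_respondents),  # item 3
    concept_b + rng.normal(scale=0.5, size=n_respondents),  # item 4
    concept_b + rng.normal(scale=0.5, size=n_respondents),  # item 5
    concept_b + rng.normal(scale=0.5, size=n_respondents),  # item 6
])

fa = FactorAnalysis(n_components=2).fit(items)
# Each row of components_ holds one factor's loadings; items measuring the
# same concept should show their large loadings on the same factor.
print(np.round(fa.components_, 2))
```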

construct validity

- Based on the extent to which a test measures a theoretical construct, attribute, or trait.
- Attempts to validate the theory underlying the measurement by testing the hypothesized relationships.

reliability error

- Concerned with random error.
- Chance/random errors are difficult to control.
- Unsystematic in nature.
- Example: a person's anxiety level at the time of testing.

validity error

- Concerned with systematic error.
- Measurement error that is attributable to relatively stable characteristics of the study sample that may bias their behavior and/or cause incorrect instrument calibration.
- Example: a person answers in a socially desirable way rather than reflecting how they actually feel.

Kuder-Richardson

- Measures reliability.
- The estimate of homogeneity used for instruments that have a dichotomous response format.
- A dichotomous response format is one in which the question asks for a "yes/no" or "true/false" response.
- The minimum acceptable KR-20 score is r = 0.70.
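A small computational sketch of KR-20, assuming an invented yes/no response matrix; conventions for the variance terms vary slightly across textbooks, so treat this as illustrative only.

```python
# Sketch of the KR-20 computation for dichotomous (yes/no) items.
# Rows are respondents, columns are items; 1 = "yes"/"true", 0 = "no"/"false".
import numpy as np

responses = np.array([
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 1],
    [0, 1, 1, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
])

k = responses.shape[1]                          # number of items
p = responses.mean(axis=0)                      # proportion answering "yes" per item
q = 1 - p
total_var = responses.sum(axis=1).var(ddof=1)   # variance of total scores

kr20 = (k / (k - 1)) * (1 - (p * q).sum() / total_var)
print(round(kr20, 2))  # values of 0.70 or higher are considered acceptable
```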

Cronbach's alpha

- Measures reliability.
- The most commonly used test of internal consistency.
- Used when a measurement instrument uses a Likert scale.
- A Likert scale format asks the subject to respond to a question on a scale of varying degrees of intensity between two extremes, anchored by responses ranging from "strongly agree" to "strongly disagree" or "most like me" to "least like me."
- Simultaneously compares each item in the scale with the others.
- Values above 0.70 are sufficient evidence for supporting the internal consistency of the instrument.
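A minimal sketch of the Cronbach's alpha formula on invented Likert-type data; real studies would typically use a statistics package rather than hand-rolled code.

```python
# Sketch of Cronbach's alpha for Likert-type items (data invented for illustration).
import numpy as np

# Rows = respondents, columns = items rated 1 ("strongly disagree") to 5 ("strongly agree").
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 1, 2, 2],
    [4, 4, 3, 4],
])

k = scores.shape[1]
item_vars = scores.var(axis=0, ddof=1)        # variance of each item
total_var = scores.sum(axis=1).var(ddof=1)    # variance of the summed scale

alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(round(alpha, 2))  # 0.70 or above supports internal consistency
```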

split-half reliability

- Measures reliability.
- Provides a measure of consistency.
- Involves dividing a scale into two halves and making a comparison.
- The two halves of the test, or the contents in both halves, are assumed to be comparable, and a reliability coefficient is calculated.
- If the scores for the two halves are approximately equal, the test may be considered reliable.
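A minimal split-half sketch on invented data: the odd/even split is an arbitrary but common choice, and some texts apply a Spearman-Brown adjustment afterward, which the card above does not mention.

```python
# Sketch of split-half reliability: correlate scores from two halves of the scale.
import numpy as np

scores = np.array([
    [4, 5, 4, 4, 5, 4],
    [3, 3, 2, 3, 2, 3],
    [5, 5, 5, 4, 5, 5],
    [2, 1, 2, 2, 1, 2],
    [4, 4, 3, 4, 4, 3],
])

half_a = scores[:, 0::2].sum(axis=1)   # items 1, 3, 5
half_b = scores[:, 1::2].sum(axis=1)   # items 2, 4, 6

r_halves = np.corrcoef(half_a, half_b)[0, 1]
print(round(r_halves, 2))  # roughly equal halves -> high correlation -> reliable
```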

Know what a given reliability coefficient value means. E.G. 0.80 or 0.5

- Ranges from 0 to 1 and expresses the relationship between the error variance, the true (score) variance, and the observed score.
- A zero correlation indicates no relationship.
- The closer the coefficient is to 1, the more reliable the tool.
- When the error variance in a measurement instrument is low, the reliability coefficient will be closer to 1.
- 0.80: error variance is small, so the instrument has little measurement error and is more reliable.
- 0.50: error variance is high, so the instrument has more measurement error and is less reliable.
- For a research instrument to be considered reliable, a coefficient of 0.70 or above is necessary.
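A tiny worked example of how error variance drives the coefficient, assuming the classical definition reliability = true-score variance / (true-score variance + error variance); the specific variance values are invented.

```python
# Worked arithmetic: reliability = true variance / observed variance.
def reliability(true_var, error_var):
    return true_var / (true_var + error_var)

print(reliability(true_var=8.0, error_var=2.0))   # 0.8 -> little measurement error
print(reliability(true_var=5.0, error_var=5.0))   # 0.5 -> substantial measurement error
```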

A nurse researcher is conducting a study that requires collection of blood pressure readings. In this scenario, what is being verified when the emphasis is on using the same technique each time a blood pressure is obtained? A. Interrater reliability B. Homogeneity C. Test-retest reliability D. Construct validity

ANSWER: A RATIONALE: Interrater reliability. To accomplish interrater reliability, two or more individuals should make an observation or one observer should examine the behavior on several occasions. The observers should be trained or oriented to the definition and operationalization of the behavior to be observed. In the method of direct observation of behavior, the consistency or reliability of the observations between observers is extremely important. In the instance of interrater reliability, the reliability or consistency of the observer is tested rather than the reliability of the instrument. Interrater reliability is expressed as a percentage of agreement between scorers or as a correlation coefficient of the scores assigned to the observed behaviors.
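The rationale mentions expressing interrater reliability as a percentage of agreement; here is a minimal sketch of that calculation with invented observation codes.

```python
# Sketch: interrater reliability as percentage agreement between two observers
# scoring the same observed behaviors (codes invented for illustration).
observer_1 = ["correct", "correct", "incorrect", "correct", "correct",
              "incorrect", "correct", "correct", "correct", "correct"]
observer_2 = ["correct", "correct", "incorrect", "correct", "incorrect",
              "incorrect", "correct", "correct", "correct", "correct"]

agreements = sum(a == b for a, b in zip(observer_1, observer_2))
percent_agreement = 100 * agreements / len(observer_1)
print(f"{percent_agreement:.0f}% agreement")  # 90% agreement in this example
```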

What is the most common test for internal consistency? A. Cronbach's alpha B. Split-half reliability C. Kuder-Richardson (KR-20) coefficient D. Equivalence

ANSWER: A RATIONALE: The most commonly used test of internal consistency is Cronbach's alpha. The split-half method provides a measure of consistency in terms of sampling the content. The two halves of the test or the contents in both halves are assumed to be comparable, and a reliability coefficient is calculated. The Kuder-Richardson (KR-20) coefficient is the estimate of homogeneity used for instruments that have a dichotomous response format. A dichotomous response format is one in which the question asks for a "yes/no" or "true/false" response. Equivalence either is the consistency or agreement among observers using the same measurement instrument or is the consistency or agreement between alternate forms of an instrument. An instrument is thought to demonstrate equivalence when two or more observers have a high percentage of agreement of an observed behavior or when alternate forms of a test yield a high correlation.

When reviewing a research article, information is provided about reliability of the measurement, but the type of reliability is not specified. The reader may: A. Assume the test is valid but not reliable. B. Search and review the original source. C. Assume Cronbach's alpha was used. D. Seriously question the merit and use of the tool and question the results.

ANSWER: C RATIONALE: If a research article provides information about the reliability of a measurement instrument but does not specify the type of reliability, it is probably safe to assume that internal consistency reliability was assessed using Cronbach's alpha.

Validity

The degree to which an instrument actually measures exactly what it is intended to measure (accuracy). Example: a thermometer actually measuring temperature.

face validity

- The appearance of measuring the construct/concept.
- A type of content validity that uses an expert's opinion to judge the accuracy of an instrument. (Some would say that face validity verifies that the instrument gives the subject or expert the appearance of measuring the concept.)

content validity

The degree to which the content of the measure represents the universe of content or the domain of a given behavior.

Reliability

A measure of the consistency (stability) of test or research results. Example: an employee who shows up to work on time every day is reliable.

