Reliability, Validity, and Objectivity
Theory: any measurement on a continuous scale contains measurement error
- X (observed score) = true score (t) + error of measurement (e)
- Total variability for a set of scores can also be measured; analysis of variance (ANOVA) can partition this variance into parts (see the sketch below)
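A minimal sketch of this model in Python (all numbers below are invented for illustration): it simulates X = t + e and shows that observed-score variance splits into true-score variance plus error variance, with reliability as their ratio.

```python
# Classical test theory illustration: X (observed) = t (true) + e (error).
# The distributions and sample size are arbitrary choices for demonstration.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

true_scores = rng.normal(loc=50, scale=10, size=n)  # t: stable ability
errors = rng.normal(loc=0, scale=4, size=n)         # e: random measurement error
observed = true_scores + errors                     # X = t + e

# With t and e independent, total variance splits into parts (ANOVA-style):
print(observed.var(ddof=1))                              # ~ var(t) + var(e)
print(true_scores.var(ddof=1) + errors.var(ddof=1))

# Reliability = true-score variance / observed-score variance (~0.86 here)
print(true_scores.var(ddof=1) / observed.var(ddof=1))
```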
**Objectives**
- Define and differentiate among reliability, objectivity, and validity, and outline the methods used to estimate these values.
- Describe the influence of test reliability on test validity.
- Identify the factors that influence reliability, objectivity, and validity.
- Select a reliable, valid criterion score based on sound measurement theory.

**Validity**
- Whether a test measures what it is supposed to measure
- Most important criterion to consider when evaluating a test
- Coefficient ranges from -1.0 to 1.0
Stability Versus Internal Consistency
- Not a fair comparison: the internal consistency coefficient is always higher than the stability reliability coefficient.
- Stability assumes true ability has not changed from day to day.
- Both allow us to examine true-score variance versus error-score variance.
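A hedged illustration of the two coefficients (the scores below are hypothetical, not real data): stability is estimated with a day 1 vs. day 2 correlation, and internal consistency with Cronbach's alpha across trials collected on a single day.

```python
# Hypothetical scores: 3 trials on day 1 (internal consistency) and a
# repeat test on day 2 (stability/test-retest reliability).
import numpy as np

day1_trials = np.array([
    [12, 13, 12],
    [18, 17, 19],
    [15, 15, 16],
    [20, 21, 20],
    [10, 11, 10],
], dtype=float)                                  # rows = people, columns = trials
day1_score = day1_trials.mean(axis=1)
day2_score = np.array([13, 18, 14, 22, 11], dtype=float)

# Stability: Pearson r between day 1 and day 2 scores (test/retest)
stability_r = np.corrcoef(day1_score, day2_score)[0, 1]

# Internal consistency: Cronbach's alpha across the day-1 trials
k = day1_trials.shape[1]
trial_vars = day1_trials.var(axis=0, ddof=1).sum()
total_var = day1_trials.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - trial_vars / total_var)

print(f"stability r = {stability_r:.2f}, internal consistency alpha = {alpha:.2f}")
```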
What is acceptable objectivity?
...
What is acceptable reliability?
...
Factors Affecting Validity
1) Student characteristics
B. Instrumentation
1. Description of instruments, technology, and equipment used
2. Specific brand names, models, etc.
3. Any evidence that this method is accurate (validity, reliability, etc.)
C. Procedures
1. Order in which procedures were administered
2. "How" things were done
3. Details on the intervention, dosage, or treatment
4. Timing of when things occurred
Participants
1. Who is your target population?
2. How were they selected?
3. Demographics: age, gender, grade level, socioeconomic status, ethnicity, fitness or health status
4. How were participants assigned/selected?
5. Inclusion/exclusion criteria
**Implications/Recommendations**
1. Considered part of the discussion section of an article
2. Important because it provides the practical implications of your findings
3. Can provide the big-picture message
4. Provides the rationale for why this should be studied further
Reliability/objectivity
A set of measurements can be reliable without being valid, but cannot be valid without being reliable.
Standard error of measurement (SEM): uses the standard deviation and reliability coefficient for the test scores.
Acts like a test score's standard deviation; specifies the limits within which we can expect scores to vary due to measurement error.
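A small worked example, assuming an SD of 6.0, a reliability coefficient of .90, and an observed score of 75 (all invented): SEM = SD × √(1 − r), and an observed score plus or minus one SEM gives a rough 68% band for the true score.

```python
# Standard error of measurement from the test SD and reliability coefficient.
# SD, reliability, and the observed score below are assumed values.
import math

sd = 6.0        # standard deviation of the test scores
r_xx = 0.90     # reliability coefficient

sem = sd * math.sqrt(1 - r_xx)            # ≈ 1.90
observed_score = 75
low, high = observed_score - sem, observed_score + sem
print(f"SEM = {sem:.2f}; ~68% band for the true score: {low:.1f} to {high:.1f}")
```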
Internal consistency
All measures are collected on a single day; reflects a consistent rate of scoring by the individuals being tested across the trials of the test (at least 2 trials).
Concurrent validity: "for the present" validity
Ex. Strong relationship (r) between heart rate and exercise intensity (as one increases, so does the other)
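A toy illustration of this kind of correlation; the intensity and heart-rate values below are invented.

```python
# Hypothetical paired observations used to illustrate a concurrent-validity
# style correlation between exercise intensity and heart rate.
import numpy as np

intensity = np.array([40, 50, 60, 70, 80, 90])          # % of max effort (assumed)
heart_rate = np.array([105, 118, 129, 142, 156, 168])   # beats per minute (assumed)

r = np.corrcoef(intensity, heart_rate)[0, 1]
print(f"validity coefficient r = {r:.2f}")               # strong positive relationship
```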
Criterion measurement selected
Mean score (most reliable), Best score (easiest)
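A brief sketch contrasting the two criterion-score choices on hypothetical trial data (assuming higher scores are better; for timed events where lower is better, the "best" trial would be the minimum).

```python
# Two common criterion scores from multiple trials: the mean of all trials
# (most reliable) and the best single trial (easiest). Data are invented.
import numpy as np

trials = np.array([[14.2, 14.0, 13.8],    # rows = people, columns = trials
                   [16.5, 16.9, 16.7],
                   [12.1, 12.4, 12.0]])

mean_score = trials.mean(axis=1)   # criterion score = mean of the trials
best_score = trials.max(axis=1)    # criterion score = best trial (higher is better)
print(mean_score, best_score)
```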
Administrative procedures/length of tests
Reliability increases as number of test trials increases...
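The notes do not name a formula, but the standard way to quantify how reliability grows with added trials is the Spearman-Brown prophecy formula; a short sketch:

```python
# Spearman-Brown prophecy formula: predicted reliability when a test is
# lengthened by a factor of k, given single-trial reliability r1.
def spearman_brown(r1: float, k: int) -> float:
    return k * r1 / (1 + (k - 1) * r1)

# Starting from an assumed single-trial reliability of .60:
for k in (1, 2, 4, 8):
    print(k, round(spearman_brown(0.60, k), 2))   # 0.6 -> 0.75 -> 0.86 -> 0.92
```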
**Reliability**
Reproducibility: a reliable test should obtain approximately the same results regardless of the number of times it is given.
Data Analysis
Software used, nature of statistics used (descriptive, inferential, etc.), power analysis
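A minimal a priori power-analysis sketch using statsmodels; the effect size, alpha, and power values below are assumptions, not from the notes.

```python
# Sample-size estimate for an independent-samples t-test (requires statsmodels).
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,   # assumed medium effect (Cohen's d)
    alpha=0.05,        # assumed significance level
    power=0.80,        # assumed desired power
)
print(round(n_per_group))   # ~64 participants per group
```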
**OBJECTIVITY**
The degree to which different examiners obtain the same score for a given performance
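A toy sketch estimating objectivity as the correlation between two judges' scores for the same performances (scores invented for illustration).

```python
# Inter-rater agreement illustrated with a Pearson correlation between
# two judges scoring the same five performances (hypothetical data).
import numpy as np

judge_a = np.array([8.5, 7.0, 9.0, 6.5, 8.0])
judge_b = np.array([8.0, 7.5, 9.0, 6.0, 8.5])

objectivity = np.corrcoef(judge_a, judge_b)[0, 1]
print(f"objectivity coefficient = {objectivity:.2f}")
```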
Criterion validity
The degree to which scores on a test correlate with scores on a specified criterion. Uses things like expert ratings or predetermined criteria. How strongly is the test related to the criterion?
Reliability depends on:
a. Reducing variation attributable to measurement error (error variance)
b. Power to discriminate among different levels of ability within the group measured, i.e., detecting individual differences (true-score variance)
Stability reliability: correlation between two sets of scores, e.g., Day 1 vs. Day 2 (test/retest method)
a. What contributes to a low stability reliability coefficient?
i. People tested may perform differently
ii. The measuring instrument may operate or be applied differently
iii. The person administering the measurement may change
Factors Affecting Reliability
a. Group tested is heterogeneous in ability, motivation, readiness to be tested, and understanding of directions
b. Test discriminates among ability groups
c. Testing environment and organization are favorable
d. Person administering the test is competent
Issues: scores can be inconsistent when
a. Scorers cannot agree (lack of objectivity)
b. Performers lack consistency
c. The measuring instrument fails
d. The tester fails to use standardized procedures
Reliability
Can a test consistently measure what it is supposed to?
Factors Affecting Objectivity
Clarity of the scoring system; degree to which the judge can assign scores accurately
Logical (content) validity
The test must be relevant and reliable as a measurement of that trait.
1. Criticism: it relies on subjective decision making
2. Example
Objectivity
The degree of agreement (how close the scores are) when more than one person measures the same test performance
Validity
when a test measures what it is supposed to