Validity
Instrumentation
A difference in the way 2 comparable variables were measured
Operationalization
AKA ecological validity: the DV must have some relevance in the 'real world'--variables used in the study are similar to those outside the laboratory
Statistical Regression
AKA regression to the mean ◦ An initial extreme score is likely to be followed by less extreme subsequent scores
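A small simulation can make this concrete. The sketch below (Python; the ability mean of 100, the noise level, and the sample size are invented for illustration, not taken from these notes) selects the most extreme pre-test scorers and shows their average falling back toward the overall mean on a second test, with no intervention at all.

```python
import random

# Hypothetical simulation: each person has a stable "true" ability plus random
# noise on every testing occasion. Selecting the most extreme pre-test scorers
# and re-testing them shows their average drifting back toward the mean.
random.seed(1)

true_ability = [random.gauss(100, 10) for _ in range(10_000)]
pretest  = [t + random.gauss(0, 10) for t in true_ability]
posttest = [t + random.gauss(0, 10) for t in true_ability]

# Take the top 5% of pre-test scorers (an "extreme" group).
cutoff = sorted(pretest)[int(0.95 * len(pretest))]
extreme = [i for i, s in enumerate(pretest) if s >= cutoff]

mean_pre  = sum(pretest[i]  for i in extreme) / len(extreme)
mean_post = sum(posttest[i] for i in extreme) / len(extreme)

print(f"extreme group, pre-test mean:  {mean_pre:.1f}")
print(f"extreme group, post-test mean: {mean_post:.1f}  (closer to 100)")
```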
Maturation
Changes in the DV over time irrespective of the IV ◦ Possible Solution: pre-test/post-test randomized group design
Logical process where connections between test items and the construct are established
Content validity
All types of validity except
Convoluted
Hawthorne Effect--Reactive effects of experimental arrangements
DV is influenced by the fact that it is being recorded
Pre-testing
Interactive effects due to the pre-test
What do you do in order to establish content validity
Locate a content/subject matter expert
Reactive Effects of Experimental Arrangements
Participants detect the purpose of the study and behave accordingly
external validity threats
Threats to external validity compromise our confidence in stating whether the study's results are applicable to other groups.
internal validity threats
Threats to internal validity compromise our confidence in saying that a relationship exists between the independent and dependent variables.
History
Unplanned events between measurements
without sufficient validity
all of the above
Selection Bias
Groups are not equivalent ◦ Possible Solutions: randomize group assignment, pre-test and post-test difference, repeated measures design
Test Validity
The correlation between examinees' known status as either masters or non-masters and their classification as masters or non-masters (i.e., pass or fail) based on the test. This type of validity provides evidence that the test is classifying examinees correctly. The stronger the correlation, the greater the concurrent validity of the test.
predictive validity takes place in the
future
MRS SMITH
maturation, regression to the mean, selection of subjects, selection x maturation, mortality, instrumentation, testing, history
______ assesses the relationship between performance on the test or measure and actual future performance
predictive validity
What is an assessment tool
validity
Validity:
• Degree to which a test or instrument measures what it purports to measure
Construct validity
• Infers not only that the test is measuring what it is supposed to, but also that it is capable of detecting what should exist, theoretically • Relates to hypothetical or intangible constructs • Makes assessment difficult
Face validity
• Infers that a test is valid by definition • It is clear that the test measures what it is supposed to - face validity is determined by a review of the items and not through the use of statistical analyses. Unlike content validity, face validity is not investigated through formal procedures and is not determined by subject matter experts. Instead, anyone who looks over the test, including examinees and other stakeholders, may develop an informal opinion as to whether or not the test is measuring what it is supposed to measure.
Content validity
• Infers that the test measures all aspects contributing to the variable of interest • Subjective process - Content validity is a logical process where connections between the test items and the job-related tasks are established. If a thorough test development process was followed, a job analysis was properly conducted, an appropriate set of test specifications was developed, and item-writing guidelines were carefully followed, then the content validity of the test is likely to be very high. Content validity is typically estimated by gathering a group of subject matter experts (SMEs) together to review the test items.
Concurrent validity
• Infers that the test produces similar results to a previously validated test -Concurrent validity is a statistical method using correlation, rather than a logical method. Examinees who are known to be either masters or non-masters on the content measured by the test are identified, and the test is administered to them under realistic exam conditions.
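To illustrate that correlation step, the sketch below (Python, with invented scores and master/non-master labels; nothing here comes from an actual exam) correlates examinees' test scores with their known status. A stronger correlation would indicate greater concurrent validity.

```python
# Hedged illustration of concurrent validity: all numbers below are made up.
masters_scores     = [88, 92, 79, 85, 94, 81]   # examinees known to be masters
non_masters_scores = [62, 70, 58, 73, 66, 69]   # examinees known to be non-masters

scores = masters_scores + non_masters_scores
status = [1] * len(masters_scores) + [0] * len(non_masters_scores)  # 1 = master

def pearson_r(x, y):
    """Plain Pearson correlation; with a 0/1 variable this is the point-biserial r."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# The stronger this correlation, the greater the concurrent validity of the test.
print(f"test score vs. known master status: r = {pearson_r(scores, status):.2f}")
```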
Predictive validity
• Infers that the test provides a valid reflection of future performance using a similar test -Another statistical approach to validity is predictive validity. This approach is similar to concurrent validity, in that it measures the relationship between examinees' performances on the test and their actual status as masters or non-masters. However, with predictive validity, it is the relationship of test scores to an examinee's future performance as a master or non-master that is estimated.
Objectivity
• The degree to which different observers agree on measurements
this validity is the most difficult to establish; it is based on underlying constructs or ideas
• construct validity
which validity? You want to know whether a test measures some underlying psychological construct
• construct validity
concurrent validity uses the statistical method of
• correlation
experts cannot help you determine if you have content validity
• false
your test will not have great external validity if you don't have great
• internal validity
you need these people to help test or evaluate the content validity of a test/measure
• subject matter experts
if you can't seem to find construct validity you should
• take a close look at your theoretical rationale • critically rethink your theory of intelligence • decide if your definitional model of aggression is wrong
if you don't have the validity you want, your test isn't doing what it should
• true
Reliability
• The degree to which a test or measure produces the same scores when applied repeatedly
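One common way to quantify this is test-retest reliability: administer the same measure twice and correlate the two sets of scores. The sketch below (Python) is an assumed illustration with made-up scores, not a procedure from these notes.

```python
# Hypothetical test-retest reliability check: both score lists are invented.
time1 = [23, 31, 28, 40, 35, 27, 33, 38]
time2 = [25, 30, 29, 41, 33, 28, 34, 37]

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# A correlation near 1 means the measure produces nearly the same scores
# on repeated administrations, i.e., it is highly reliable.
print(f"test-retest reliability: r = {pearson_r(time1, time2):.2f}")
```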
Experimental mortality
◦ Missing data due to subject drop-out ◦ Reduced n = reduced statistical power ◦ Not only challenges the quality of data gathered (internal validity) but also our ability to generalize (external validity)