Research Ch 11: Causal Inference and Experimental Designs
Inference
A conclusion that can be logically drawn in light of our research design and its findings.
3) Maturation
Effects due to the passage of time (grow out of problem on their own)
5) Testing
Effects due to the subjects' reaction to being tested (measured)
Best design maximizes 3 criteria
1) Strength of inference (IV caused DV) 2) Feasibility 3) Cost effectiveness
Measurement Bias
Ego involvement and a vested interest in wanting the experimental group participants to show more improvement limit the credibility of the findings.
1) Exploratory/Formulative Design
Gain familiarity with a problem. Often qualitative.
IN ORDER TO REDUCE OR ELIMINATE THE THREATS TO BOTH INTERNAL AND EXTERNAL VALIDITY
IT IS BEST TO USE AN EXPERIMENTAL OR QUASI-EXPERIMENTAL RESEARCH DESIGN
7) Mortality
In group designs where groups are compared over time, effects due to different rates or types of dropouts (attrition)
**External Validity**
Refers to the extent to which we can generalize the findings of a study to settings and populations beyond the study conditions.
Posttest-Only Design with Nonequivalent Groups (Static-Group Comparison Design)
X O ....O Assesses the dependent variable after the intervention is introduced for one group while also assessing the dependent variable for a second group that may not be comparable to the first group and that was not exposed to the intervention. Preexisting differences between the groups, rather than the intervention, could account for any observed difference.
Threats to External Validity
-Primarily concerned with the representativeness of the study sample, the setting, and the procedure -Researchers should report attributes of clients and practitioners, as well as the population the study is intended to represent -Conditions of the study should reflect the 'real world' practice/life conditions to be externally valid
Causal Relationship
-Relationship between two variables in which the cause precedes the effect in time. -The two variables must be empirically correlated with one another -The observed empirical correlation between the two variables cannot be explained away as the result of the influence of some third variable that causes both of the variables under consideration.
**Steps in Establishing Causality**
1) Association 2) Time Priority 3) Ruling out alternative explanations 4) Theoretical explanations
8 Main threats to Internal Validity
1) History 2) Statistical regression 3) Maturation 4) Instrumentation 5) Testing 6) Selection 7) Mortality 8) Interactions: various combinations of 1-7
Three Basic Elements of a Good Experiment (for 9 & 10)
1) Random assignment of subjects to experimental and control groups 2) Manipulation of independent variables 3) Control over extraneous variables
3) Action Research Design
A direct assessment of new approaches. Typically done in agencies to try a new method of practice.
10) Quasi-Experimental Designs
ALMOST certainly establishes a cause and effect relationship (attempts to control for threats to internal validity). The investigator OR someone else manipulates the independent variable, and subjects CANNOT be randomly assigned to groups.
Research Design
All the decisions made in planning and conducting research, including decisions about measurement, sampling, how to collect data, and logical arrangements designed to permit certain kinds of inferences.
3) Ruling out Alternative Explanations
Can we show that some third variable (X) did NOT cause the association between the IV and DV? Rule out alternatives with: sampling, random assignment, and statistical analysis.
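As a rough illustration of the statistical-analysis route, here is a minimal sketch (the data, variable names, and the single "third variable" are hypothetical assumptions, not part of the chapter): regress the DV on the IV while also including a candidate third variable, so the IV's coefficient reflects its association net of that variable.

```python
import numpy as np

# Hypothetical data: an IV, a candidate third variable, and a DV
iv        = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
third_var = np.array([2, 1, 4, 3, 6, 5, 8, 7], dtype=float)
dv        = np.array([3, 4, 7, 6, 10, 9, 13, 12], dtype=float)

# Design matrix: intercept, IV, and the third variable
X = np.column_stack([np.ones_like(iv), iv, third_var])
coeffs, *_ = np.linalg.lstsq(X, dv, rcond=None)

# coeffs[1] is the IV's association with the DV holding the third variable constant
print(f"IV coefficient controlling for the third variable: {coeffs[1]:.2f}")
```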
Research Reactivity
Changes in outcome data that are caused by researchers or research procedures rather than the independent variable.
One-Shot Case Study
Doesn't establish correlation. Shorthand: X O. A single group of research participants is measured on a dependent variable after the introduction of an intervention, without comparing the obtained results to anything else.
9) Experimental Design
Establish a cause and effect relationship with virtual certainty. Investigator manipulates the independent variable and selects subjects and randomly assigns them to experimental or control groups. **CONTROLS FOR VIRTUALLY ALL 8 THREATS TO VALIDITY**
6) Selection
In group designs, unintended systematic differences or biases between groups, differences that affect the dependent variable (ex: self-selected people might be more motivated)
Establishing Causality
From a scientific perspective, we can NEVER EVER definitively establish causality. We can still build a case for it by following four steps in order.
Novelty and Disruption Effects
Introducing an innovation in a setting where little innovation has occurred can stimulate excitement, energy, and enthusiasm among recipients of the intervention.
6) Developmental Design
Investigate change over time (can be cross-sectional or longitudinal).
One-Group Pretest-Posttest Design
O1 X O2 Assesses the dependent variable before and after the intervention is introduced. Does not account for other variables affecting outcome.
Causal Inference
One derived from a research design and findings that logically imply that the independent variable has a causal impact on the dependent variable
Matching
Pairs of participants are matched on the basis of their similarities on one or more variables, and one member of each pair is then randomly assigned to the experimental group and the other to the control group.
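A minimal sketch of how matched-pair assignment might be coded, assuming a hypothetical list of pairs already matched on similar pretest scores (the participant IDs, pairing, and function name are illustrative only):

```python
import random

def assign_matched_pairs(pairs, seed=None):
    """For each matched pair, randomly send one member to the experimental
    group and the other to the control group."""
    rng = random.Random(seed)
    experimental, control = [], []
    for a, b in pairs:
        if rng.random() < 0.5:
            experimental.append(a)
            control.append(b)
        else:
            experimental.append(b)
            control.append(a)
    return experimental, control

# Hypothetical pairs matched on similar pretest scores
pairs = [("P01", "P07"), ("P02", "P05"), ("P03", "P08")]
experimental, control = assign_matched_pairs(pairs, seed=1)
```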
4) Theoretical Explanations
Plausible explanation for results, preferably tied to theory. Gives us more confidence in its accuracy to describe and explain phenomena.
Classic Experimental design or Pretest-Posttest Control Group Design
R O1 X O2 R O1.....O2 Controls for all but one threat to internal validity: it controls for History, Statistical regression, Maturation, Instrumentation, Selection, and Mortality, but does not control for the possible effects of testing and retesting.
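As a hedged illustration (the scores are made up, not the book's data), results from this design are often compared by contrasting the experimental group's average pretest-to-posttest gain with the control group's:

```python
def average_gain(pre, post):
    """Mean change from pretest (O1) to posttest (O2) for one group."""
    return sum(o2 - o1 for o1, o2 in zip(pre, post)) / len(pre)

# Hypothetical scores (higher = better functioning)
exp_pre,  exp_post  = [40, 45, 50, 55], [55, 60, 62, 70]
ctrl_pre, ctrl_post = [42, 44, 51, 53], [44, 47, 52, 56]

# Difference in average gains is a simple estimate of the intervention effect
effect = average_gain(exp_pre, exp_post) - average_gain(ctrl_pre, ctrl_post)
print(f"Gain difference between groups: {effect:.1f}")
```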
Solomon Four-Group Design
R O1 X O2 R O1.....O2 R X......O2 R..........O2 Combines classical with posttest-only. Controls for everything. Very expensive. Not often used, but well respected.
Alternative treatment design with pretest
R O1 Xa O2 R O1 Xb O2 R O1......O2 Used to compare the effectiveness of two alternative treatments. (does not control for testing effects)
Dismantling Studies
R O1 Xab O2 R O1 Xa O2 R O1 Xb O2 R O1.......O2 See if intervention is effective and what components of it may or may not be necessary.
Posttest-Only Control Group
R X O R....O Pretesting may not be possible or practical (ex: studying child abuse). Controls for testing in this instance.
Compensatory Rivalry
When control group staff or participants try to compete with the experimental group, for example by reading more, attending more workshops, and increasing contact with clients.
Experimental Demand Characteristics/Experimenter Expectancies
Research participants learn what experimenters want them to say or do, and then they cooperate with those "demands" or expectations.
Randomization
Research participants are randomly assigned to groups. They usually voluntarily agree to participate. Randomization increases internal validity.
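A minimal sketch of simple random assignment, assuming a hypothetical roster of consenting participants (the IDs and function name are illustrative, not from the chapter):

```python
import random

def randomly_assign(participants, seed=None):
    """Randomly split consenting participants into experimental and control groups."""
    rng = random.Random(seed)
    shuffled = participants[:]      # copy so the original roster is untouched
    rng.shuffle(shuffled)           # random order removes systematic selection bias
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]   # (experimental, control)

# Hypothetical roster of volunteers
experimental, control = randomly_assign(["P01", "P02", "P03", "P04", "P05", "P06"], seed=42)
```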
5) Single System Design
Sample size = one (individual, family, or group)
1) Association
Showing that the independent and dependent variables "go together" or vary systematically in relation to each other (ex: when one is present, the other tends to be present as well; association fails if they have no relationship).
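A small illustration of checking association with made-up scores (assumes Python 3.10+ for statistics.correlation; the variables and values are hypothetical):

```python
from statistics import correlation  # available in Python 3.10+

# Hypothetical scores: hours of intervention (IV) and symptom improvement (DV)
iv = [1, 2, 3, 4, 5, 6]
dv = [2, 3, 5, 4, 6, 8]

r = correlation(iv, dv)   # Pearson's r; values near +/-1 indicate a strong association
print(f"Pearson r = {r:.2f}")
```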
2) Time Priority
Showing that the independent variable (A) preceded dependent variable (B). Methods for establishing this include: Logic, independent variable is attribute variable, collect data at several points in time, manipulate independent variable (experiment)
1) History
Specific environmental events that may influence the dependent variable during the course of the study (they happen at the same time as the intervention, ex: a change in environment)
Compensatory Equalization
Staff may seek to offset what they perceive as an inequality in service provision and may compensate for it by providing enhanced services.
4) Instrumentation
Systematic biases introduced by the measuring instruments (questionnaires) or by the way data is collected
2) Statistical Regression
Tendency of extreme scores to move toward the middle (ex: examining someone on an unusually good or bad day vs. an average day; address this by taking multiple measures)
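A toy simulation of regression toward the mean (all numbers are invented assumptions): participants selected for extreme first-measurement scores tend to score closer to their own average on a second measurement, even with no intervention at all.

```python
import random

random.seed(0)

# Hypothetical participants: a stable "true" level plus day-to-day noise
def observe(true_level):
    return true_level + random.gauss(0, 10)

true_levels = [random.gauss(50, 5) for _ in range(1000)]
first = [observe(t) for t in true_levels]
second = [observe(t) for t in true_levels]

# Select only participants whose FIRST score was extreme (an unusually high reading)
extreme = [(f, s) for f, s in zip(first, second) if f >= 70]
mean_first = sum(f for f, _ in extreme) / len(extreme)
mean_second = sum(s for _, s in extreme) / len(extreme)

# The second measurement drifts back toward the overall mean with no intervention
print(f"first: {mean_first:.1f}, second: {mean_second:.1f}")
```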
**Internal Validity**
The confidence we have that the results of a study accurately depict whether one variable is or is not a cause of another.
The Main Reason for Conducting an Experiment
To determine the potential effect of one variable (IV) on another variable (DV) while ELIMINATING or CONTROLLING all other variables which may confound such a relationship and thereby DETERMINE A CAUSAL RELATIONSHIP
2) Historical Design
Used to reconstruct the past. Recall problems.
4) Descriptive Design
Usually termed "survey", examines the distribution of one variable.
Resentful Demoralization
When staff or clients become resentful because they did not receive special training or intervention. Their confidence or motivation may decline and may explain their inferior performance in outcome measures.
Obtrusive Observations
When participants are keenly aware of being observed and may behave in ways the experimenter expects
Threats to Internal Validity
Whenever anything other than the independent variable can affect the dependent variable.
8) Causal Comparative/Ex Post Facto
a) Search for POSSIBLE cause and effect relationship b) may be retrospective (searching for cause in the past) or prospective (looking for the results of a cause in the future)
7) Correlational Design
a) extent to which change in one factor is associated with change in another factor b) typically (not always) involves correlation between 2 continuous variables.