PSYC 300 Exam 3 (8-14)
descriptive stats
#'s that describe and summarize the data we collected; they describe our sample's data, e.g., mean, median, mode
power
(1 - β). Depends primarily on *effect size* and *sample size*. More power if: bigger difference between means; smaller population standard deviation; more people in the study (N). Also affected by significance level, one- vs. two-tailed tests, and the type of hypothesis-testing procedure used. Two distributions may have little overlap (and the study high power) because the two means are very different or the variance is very small.
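A minimal sketch of how power moves with effect size and N, using Python's statsmodels (all values hypothetical; assumes an independent-samples t test at alpha = .05):

```python
from statsmodels.stats.power import TTestIndPower

# Power rises with a bigger difference between means (effect size d)
# and with more people in the study (n per group)
analysis = TTestIndPower()
for d in (0.2, 0.5, 0.8):
    for n in (20, 50, 100):
        power = analysis.power(effect_size=d, nobs1=n, alpha=0.05)
        print(f"d={d}, n={n}: power={power:.2f}")
```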
main effect
*differences on the dependent measures across the levels of any one factor, controlling for all other factors in the experiment*; the effect of each IV separately. E.g., *What are the effects of DIET and EXERCISE on weight loss?* If exercise in general reduced weight (regardless of diet), there would be a main effect of exercise. If each diet condition also had an effect (regardless of exercise), there would be a main effect of diet.
type 2 error
*fail to reject the null (accept the null) when it is false*. Probability = beta (β). Correct decision = 1 - β (rejecting the null when it is false).
type 1 errors
*reject the null when it is true*. Probability of making this error = alpha (α); more dangerous (so set alpha lower). Correct decision = 1 - α (failing to reject the null when it is true).
stat sig equation
*stat sig = effect size x sample size*
dealing with error variance
1. Reduce Error Variance: treat all participants the same; match participants on variables that have an effect on the DV.
2. Increase Effectiveness of IV: use levels of the IV that are very different (e.g., effect of alcohol on memory: sober vs. .10 BAC is a stronger manipulation than sober vs. .05 BAC).
3. Randomize Error Variance Across Groups: participants have an equal chance of being in any group, so error variance between groups will tend to balance out; but not always, which is why we need...
4. Statistical Analysis: we've run an experiment and average scores differ across treatments (participants treated for depression have a lower score on a depression index compared to untreated participants). How do we know that this difference wasn't there to begin with (e.g., due to error variance)?
complex comparison
more than 2 means are compared at the same time (e.g., the average of two condition means compared against a third mean)
chi square
2 nominal variables --> e.g., ethnicity and attitude. Contingency table: the number of individuals in each combination of the 2 variables.
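A quick sketch in Python/scipy with made-up counts (the variables and numbers here are hypothetical):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Contingency table: number of individuals in each combination
# of two nominal variables (rows = group, columns = attitude)
observed = np.array([[30, 10],
                     [20, 25]])

chi2, p, dof, expected = chi2_contingency(observed)
print(chi2, p, dof)  # compare p to alpha
```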
contrast test
a means comparison that tests a specific weighted combination (contrast) of condition means
anova
ANOVA - used to detect differences between three or more means
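A minimal one-way ANOVA sketch with hypothetical scores, using scipy's f_oneway:

```python
from scipy.stats import f_oneway

# Hypothetical scores from three treatment groups
group1 = [4, 5, 6, 5, 4]
group2 = [7, 8, 6, 7, 8]
group3 = [9, 8, 10, 9, 8]

F, p = f_oneway(group1, group2, group3)
# A significant F says the means differ somewhere, but not which
# means differ; that takes a follow-up means comparison
print(F, p)
```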
advantages within part
Advantages: reduces error variance due to individual differences among subjects across treatment groups; reduced error variance results in a more powerful design (effects of the independent variable are more likely to be detected); requires fewer participants.
adv/disadv factorial design
Advantages: more efficient than doing separate experiments to test the effect of each IV; enhances external validity (by testing the effect of each IV under multiple conditions); allows for testing of main effects AND interactions. Disadvantages: design is more complex; more participants required (compared to studying just 1 IV); higher-order interactions are difficult to interpret.
in your factorial design...
All IVs (factors) may be between-participant factors; all IVs may be within-participant factors; or some factors may be between-participant and others within-participant (e.g., a training study in which different participants are given different strategies: training time = within-participant factor, training strategy = between-participant factor). *These are called MIXED designs (on test!)*
within participants design
All participants are exposed to all levels of the independent variable; subjects are not randomly assigned to treatment conditions (the same subjects are used in all conditions). Closely related to the matched-groups design.
stat analysis of main effects, interactions
An ANOVA is typically used for this type of analysis: it will give you a p value for the main effect of each factor separately, and a p value for each interaction between factors.
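A sketch of a factorial (2 x 2) ANOVA using statsmodels' formula API, with invented diet/exercise numbers (purely illustrative):

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical 2x2 weight-loss data: DIET and EXERCISE as factors
df = pd.DataFrame({
    "diet":     ["low", "low", "low", "low", "high", "high", "high", "high"] * 3,
    "exercise": ["yes", "yes", "no", "no"] * 6,
    "loss":     [5, 6, 3, 2, 7, 8, 4, 3] * 3,
})

# "diet * exercise" expands to both main effects plus the interaction
model = smf.ols("loss ~ diet * exercise", data=df).fit()
print(anova_lm(model))  # one p value per main effect, one for the interaction
```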
Quasi
At least one IV is not manipulated (a subject variable). CATEGORICAL: factorial design; CONTINUOUS: correlational/covariate design (treat the covariate as an IV/predictor; assess, don't control). Causation cannot be inferred from main effects or interactions with subject variables (threats to internal validity). Advantages of using quasi-IVs: allows you to test the generalizability of your results (is the treatment just as effective for men vs. women? different races and ethnicities? young vs. old?); may allow you to find an effect of your IV that is specific (at first glance your IV may appear ineffective, but on closer inspection it may have been effective for some participants but not others). Disadvantage of using quasi-IVs: causation cannot be inferred. Often misinterpreted! They look just like an IV and are analyzed just like one, but they're not; you cannot draw causal conclusions from quasi-IVs. However, even though you cannot make causal claims, it may be important to know that a relationship exists.
HOW TO MAINTAIN INTERNAL VALIDITY
By carefully designing our experiments to reduce the influence of extraneous variables when possible: well-designed experimental and control conditions; when extraneous variables are unavoidable, trying as much as possible to ensure they affect each group of participants equally; random selection (unbiased error).
latin squares
Controls for ordinal position of each treatment Each condition precedes or follows each other condition once
how to deal with carry over effects
Counterbalancing; trying to minimize carryover effects; using treatment order as an IV.
between participants design
Different groups of participants are exposed to different levels of the independent variable. Simple: different groups get different treatments (i.e., levels of the IV); we measure and record the DV after treatment, then use statistics to determine whether the IV had a statistically reliable effect.
Independent groups: two-level --> t test for independent samples; multilevel --> ANOVA.
Matched groups: two-level --> t test for related/paired samples; multilevel --> repeated-measures ANOVA.
Nonequivalent groups: two-level --> t test for independent samples; multilevel --> ANOVA.
disadvantages within partic
Disadvantages: more demands put on participants (longer, more complicated study; may increase drop-out rate); when participants do drop out, all their data are unusable; mistakes are more costly (again, because you lose a large amount of data); *CARRYOVER EFFECTS*.
demonstrating internal validity
Experimental design: show that variation in the dependent variable could only be accounted for by variation in the independent variable (a good control condition is crucial!). Correlational design: show that changes in the value of the criterion variable are solely related to changes in the value of the predictor variable; think about "third variable" effects and test these relationships as well.
external validity threat
External validity is threatened by: a highly controlled laboratory setting (artificial experimental manipulations, or the subject's knowledge that he or she is a research subject, may affect results); restricted populations (results may apply only to subjects representing a unique group); pretests (a pretest may affect reactions to an experimental variable); demand characteristics; experimenter bias; subject selection bias; multiple treatments (exposure to early treatments may affect responses to later treatments).
6 threats to internal validity
1. History: something can happen (other than the independent variable) between when you take a measurement and when you take it again, and this "something" can account for changes in your dependent variable. E.g., you are studying changes in anxiety that result from a treatment, and between two tests a natural disaster or terrorist attack occurs.
2. Maturation: change in your participants between tests; participants get older, or they get fatigued. E.g., instead of getting better after being given a useful problem-solving strategy, participants get worse, because by the time you give them the strategy instruction they may be bored or tired.
3. Testing: being tested once may change how participants behave when they are tested again; participants may remember the test, or, based on pre-testing, may form an idea of what the study is about and change their behavior.
4. Instrumentation: measurement instruments change over time.
5. Statistical regression: people tend to get more average over time (regression toward the mean). If you observe a participant at a moment of extreme emotional distress, this distress will likely not be there at the next observation; a problem when you are selecting participants based on extreme scores (e.g., poor readers).
6. Biased selection: if the participants in each group of your experiment are not equivalent, this may influence your DV scores (self-selection effects).
NOTE: participants drop out (also called attrition); if the attrition rate is different between groups, a major problem! Survival of the fittest?
mixed designs:
Includes a between-participant factor, and a within-participant factor Example: Does training strategy influence learning rate of a complex video game? Within-participant factor: Training Time (everyone gets multiple sessions of training) Between-participant factor: Assigned Strategy (different groups get different training strategies)
difference between internal and external validity
Internal validity may be more important in basic research; external validity, in applied research
minimize carry over
Introduce breaks into the study (as Smit and Rogers did) to allow the effect of the previous treatment to wear off; use practice trials (to let participants get used to the new treatment and forget about the old treatment).
sources of carry over effects
Learning: learning a task in the first treatment may affect performance in the second (irrespective of the IV).
Fatigue: earlier treatments may affect performance in later treatments because participants get tired.
Habituation: repeated exposure to a stimulus may lead to unresponsiveness to that stimulus.
Sensitization: exposure to a stimulus may make a subject respond more strongly to another.
Contrast: participants compare the current treatment to treatments that came before.
Adaptation: physiological changes over time based on the experimental setting.
disadv matched
More difficult to implement: need to measure all participants before the study; if you have many groups, it may be difficult to find enough people who match; requires slightly different statistics (paired-samples t test, repeated-measures ANOVA). So make sure the thing you are matching on really does influence the DV.
multigroup
Multi-Group designs permit comparing two or more treatments to one or more control groups Multiple control groups may be necessary to rule out alternative explanations
how to increase power
Power can be increased by: increasing the mean difference (use a more intense experimental procedure); decreasing the population SD (use a population with little variation; use standardized measures and conditions of testing); using a less stringent significance level; using a one-tailed test instead of a two-tailed test; using a more sensitive hypothesis-testing procedure.
limitations of quasi
Researcher doesn't control quasi-IVs --> strong causal conclusions cannot be drawn; weak internal validity. Campbell & Stanley (1963): to improve internal validity, add a control group of some kind.
type 1 and type 2 error trade off
Setting a strict significance level (e.g., p < .001) decreases the possibility of committing a Type I error but increases the possibility of committing a Type II error. Setting a lenient significance level (e.g., p < .10) increases the possibility of committing a Type I error but decreases the possibility of committing a Type II error.
limitations counter balancing
Sometimes treatments produce irreversible effects (e.g., experiments that study surprise events; lesion studies in animals). Differential carryover effects: A then B produces a different effect compared to B then A, creating variation in the DV not specific to the IV (order).
Ttest
T-test - used to detect whether there is a significant difference between means of two groups or two conditions
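A minimal sketch of both t-test flavors in scipy, with hypothetical scores:

```python
from scipy.stats import ttest_ind, ttest_rel

# Independent groups (between-participants design)
treatment = [14, 17, 18, 12, 21]
control = [10, 9, 15, 13, 8]
t, p = ttest_ind(treatment, control)

# Related/paired samples (matched or within-participants design)
before = [10, 12, 9, 14, 11]
after = [13, 15, 9, 17, 12]
t_paired, p_paired = ttest_rel(before, after)
print(p, p_paired)  # compare each p to alpha
```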
matched group
To maximize our ability to detect an effect, we may want to make sure all groups are equal on characteristics that might influence the DV. Matched-group designs help to *control for error variance*: they control participant-related variability by matching groups on characteristics that influence performance (measure, match, randomize). Systematic variance is easier to observe.***
error variance
Variability in scores caused by extraneous variables or participant variability
counterbalancing
Varying the order of treatments to minimize carryover effects.
Complete --> Advantage: all possible treatment orders are represented. Disadvantage: the minimum number of subjects can get large fast, K! (K factorial). 3 conditions: 3 X 2 X 1 = 6 subjects needed; 5 conditions: 5 X 4 X 3 X 2 X 1 = 120 subjects needed!
Partial --> includes only some of the possible treatment orders (e.g., reverse order: 1,2,3 then 3,2,1; Latin squares).
Random order --> assign a random order of treatments to each participant. Carryover effects may not completely balance, but they are randomly distributed across treatments; the likelihood of treatment effects being the result of carryover effects is small (but possible).
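A sketch of complete counterbalancing and a simple cyclic Latin square in Python (note: the cyclic square controls ordinal position only; a balanced square, where each condition also precedes/follows every other condition once, takes a more careful construction):

```python
from itertools import permutations

conditions = ["A", "B", "C"]
k = len(conditions)

# Complete counterbalancing: all K! possible treatment orders
orders = list(permutations(conditions))
print(len(orders))  # 3! = 6, so you need subjects in multiples of 6

# Cyclic Latin square: each condition appears once in each
# ordinal position (each row = one subject's treatment order)
latin = [[conditions[(row + col) % k] for col in range(k)] for row in range(k)]
for row in latin:
    print(row)
```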
Why is mean difference not enough
Was there an effect of our independent variable? Not necessarily: the difference may be the result of error variance (noise). Even with no effect of the IV, it's unlikely that the means will be *exactly* the same.
covariates- Combining Correlational and Experimental Methods
We've talked about holding participant-related variables constant to reduce error variance: if we believe IQ has an effect on our DV, we might only test participants with an IQ in a certain range (reduces error variance by holding an extraneous variable constant). Problem: by only studying participants within a certain range of IQ, we lose our ability to generalize our result. Solution: include a COVARIATE in your experimental design. Example: in your experiment, measure your DV AND a covariate such as IQ. By including the covariate in your data analysis, you can "subtract out" variation in the data based on IQ (or whatever your covariate is), shrinking error variance by accounting for some of it. This analysis will tell you: Correlational: is there a correlation between your DV and your covariate? Experimental: is there an effect of your IV (after subtracting out variance associated with the covariate)? Advantages --> more powerful, because it accounts for some of the variance in your data that is independent of your IV; can be used to correct initial baseline differences between groups.
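A hedged sketch of a covariate analysis with invented data, using statsmodels (IQ as the covariate; all names and numbers hypothetical):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: two groups, IQ measured as a covariate
df = pd.DataFrame({
    "group": ["treatment"] * 5 + ["control"] * 5,
    "iq":    [98, 105, 110, 92, 120, 101, 97, 115, 108, 90],
    "dv":    [14, 17, 18, 12, 21, 10, 9, 15, 13, 8],
})

# Adding the covariate "subtracts out" DV variance associated with IQ,
# shrinking error variance before testing the effect of group
model = smf.ols("dv ~ group + iq", data=df).fit()
print(model.summary())
```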
interaction
When an INTERACTION is present, the effect of one IV changes based on the level of another IV; an interaction is present when the answer to a question about the effect of an IV is "it depends." 3 factors (A, B, C) --> four potential interactions: two-way A x B; two-way B x C; two-way A x C; AND three-way A x B x C (the interaction between factors A and B changes depending on the level of factor C). **Interactions are often discovered by plotting the data and observing non-parallel lines (statistics will then be used to confirm that these lines are indeed not parallel).**
when to use within subjects
When participant variability is correlated with DV scores; when you're trying to minimize the number of participants; when you're studying dose-related effects. Typically not when you're studying training, and typically not when you can reasonably expect differential carryover effects (a matched design would be better in these cases).
simple
adv: relatively few participants required; data and stats are easy to run and interpret; no pre-testing required to ensure equality of groups (we rely on randomization). disadv: doesn't yield a large amount of info (shows differences, but not the shape of the function relating IV and DV); may be insensitive to effects when participants differ greatly in their performance.
prevent type 1 errors
alpha should be set as small as possible (but finding significant results is harder when alpha is smaller)
threat to external validity
although it is claimed that results are more general, the observed effects may actually only be found under LIMITED CONDITIONS or for SPECIFIC groups of people
ANOVA
analysis of variance; compares the *means* of the dependent variable across the levels of an experimental research design (IV) by comparing the variance of the DV means between the different levels to the variance of the DV within each condition.
Random Error
any pattern of data that might have been caused by chance rather than by a true relationship btwn variables (i.e., research never proves a hypothesis or theory)
experimenter bias
an artifact due to the fact that the experimenter usually knows the research hypothesis --> may treat people in different conditions differently (external validity threat)
sample size
as N increases, the likelihood of the researcher finding a stat sig relationship btwn indep & dep variables also increases, and POWER of the test increases.
artifact
aspects of research methodology that may go unnoticed and that produce confounding effects (threat to internal validity)
demand characteristics
aspects of research that allow participants to guess the research hypothesis (external validity threat) --> due to implicit demands placed on participants
blocked random assignment
assigning participants to conditions in blocks: each condition occurs once, in random order, within each block before any condition repeats, keeping group sizes equal as data collection proceeds
means comparison
b/c a significant F value does not tell you *which* means differ, use this test to determine which group means are significantly different from each other
when alpha is set lower
beta is always higher (it is harder to find significant results, so you may miss the presence of weak relationships)
df
between-groups df = number of levels - 1; within-groups df = number of participants - number of conditions
F stat
F = between-groups variance / within-groups variance; F has an associated p value, which is compared to alpha
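A from-scratch sketch of the F stat and both df formulas, with made-up scores for three conditions:

```python
import numpy as np
from scipy import stats

# Hypothetical scores for k = 3 conditions, N = 12 participants
groups = [np.array([4, 5, 6, 5]),
          np.array([7, 8, 6, 7]),
          np.array([9, 8, 10, 9])]

k = len(groups)
N = sum(len(g) for g in groups)
grand_mean = np.concatenate(groups).mean()

ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

df_between = k - 1   # number of levels - 1
df_within = N - k    # participants - conditions

F = (ss_between / df_between) / (ss_within / df_within)
p = stats.f.sf(F, df_between, df_within)  # compare p to alpha
print(F, p)
```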
pilot testing
participants are brought to the lab, administered the manipulation, given manipulation checks, and given a post-experimental interview (could they guess the hypothesis or the manipulation?)
disadvantages of experimental research
can't manipulate some behavioral variables; the lab setting may oversimplify
condition
certain combination of levels
single participant research designs
changes in behavior of a single person
planned comparison
compare only means in which specific differences were predicted by the research hypothesis
comparison group before-aftr
more than one group is studied (a control group is included) and the DV is assessed in both groups; because the comparison group has experienced the same changes, shared threats are controlled, but regression to the mean is still a problem, as is the potential for differential attrition
internal analysis
computing the correlation between scores on the manipulation check measure and scores on the DV as an *alternative test of the research hypothesis*
valid
conclusions drawn by the researcher are legitimate (does it work?)
threat to stat conclusion validity
conclusions regarding the research may be incorrect b/c a type 1 or type 2 error was made
participant replication
conduct replications using new types of participants
confounding variables
created unintentionally by experimental manipulations: another variable is mixed up with the IV, making it impossible to determine which caused the changes in the DV; affects internal validity
equivalence
created via between-participants designs (different but similar people in each condition) or repeated-measures designs (the same people in each of the experimental conditions); random assignment to conditions
threat to internal validity
changes in the dependent variable may have been caused by a confounding variable
within participants design
differences across the different levels are assessed within the same participants ("repeated measures"). Advantages: increased stat power (same people); economy of participants (fewer participants required). Disadvantages: carryover (the effects of one level of the manipulation are still present when the dependent measure is assessed for another level); practice and fatigue. How to decrease the disadvantages: counterbalancing --> arranging the order in which the conditions of a repeated-measures design are experienced so that each condition occurs equally often in each position (one half of the kids view the violent cartoon first, one half view the nonviolent cartoon first); Latin squares design --> uses a subset of all the possible orders while ensuring that each condition appears equally often in each position and follows equally often after each of the other conditions.
binomial distribution
distribution for events that have 2 equally likely possibilities; becomes *narrower* as the sample size gets bigger (extreme values are less likely to be observed)
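A small sketch of that narrowing, using scipy's binomial distribution (p = .5 for two equally likely outcomes):

```python
from scipy import stats

# The spread of the *proportion* of successes shrinks as n grows,
# so extreme proportions become less likely in bigger samples
for n in (10, 100, 1000):
    sd_of_proportion = stats.binom.std(n, 0.5) / n
    print(n, round(sd_of_proportion, 3))
```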
factor
each of the manipulated variables; 2X3 = 2 IVs, one with 2 levels and one with 3 levels
effect size of F test
eta
placebo effect
expectations--->*threat to internal validity*
factorial experimental design
experimental design with more than one IV
field experiments
experimental research designs that are conducted in a natural environment
naive experimenters
experimenters who do not know the research hypothesis
cover story
a false or misleading statement about what is being studied, used to prevent participants from guessing the hypothesis
time series designs
longitudinal designs in which the dependent measure is assessed for one or more groups more than 2x, at regular intervals, both before and after the experience of interest occurs; reveals trends in the data
quasi exp
independent variable is measured, not manipulated (a correlational design with grouping); data are analyzed with ANOVA (the experimental component), but there is no random assignment
program evaluation research
involves the use of existing groups, designed to study intervention programs such as after school programs, clinical therapies, prenatal care clinics, to determine whether the programs are effective in helping the people who use them
moderator variables
a variable that produces an interaction of the relationship btwn two other variables, such that the relationship btwn them is different at different levels of the moderator variable (e.g., gender, age)
how to rule out reverse causation
longitudinal research design --> the same individuals are measured more than once, over a period long enough that changes in the variables of interest could occur; analyzed via path analysis --> displayed in a path diagram --> represents associations among a set of variables; paths represent regression coefficients
2 way design
manipulating 2 factors in the same experiment
non-linear relationships
variables may be independent, or have relationships that change in direction --> curvilinear relationships (U-shaped or inverted-U)
cross sectional design-
measure people from different age groups at the same time--> limited in ability to rule out reverse causation
threat to construct validity
the measured variables or experimental manipulations do not relate to the conceptual variables of interest
manipulation checks
measures used to determine whether the experimental manipulation has had the intended impact on the conceptual variable of interest (hoped for impact on participants)
confound check
measures used to determine whether the manipulation has *unwittingly caused differences in the confounding variable*.
multiple regression
more than 2 measures taken at the same time: more than 1 predictor variable is used to predict an outcome variable. A stat technique based on the Pearson correlation coefficient, using the correlations both btwn each of the predictor variables and the outcome variable and among the predictor variables themselves. Adv: the researcher can consider how all predictor variables taken together predict the outcome (conducted on a computer). Multiple correlation coefficient --> R, tested with the F stat; R^2 is the effect-size stat for multiple regression.
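A minimal multiple-regression sketch with simulated data (two predictors; the variable names are invented), using statsmodels:

```python
import numpy as np
import statsmodels.api as sm

# Simulated data: two predictor variables, one outcome variable
rng = np.random.default_rng(1)
study_hours = rng.uniform(0, 10, 50)
sleep_hours = rng.uniform(4, 9, 50)
exam_score = 50 + 3 * study_hours + 2 * sleep_hours + rng.normal(0, 5, 50)

X = sm.add_constant(np.column_stack([study_hours, sleep_hours]))
model = sm.OLS(exam_score, X).fit()
print(model.rsquared)  # R^2, the effect-size stat
print(model.fvalue)    # the multiple correlation R is tested with F
print(model.params)    # one regression coefficient per predictor
```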
pairwise comparison
most common means comparison in which any one condition mean is compared with any other condition mean (there can be a lot of them)
experiments differ
in the number of levels (experimental, control) and the type of manipulation; may add an additional control, such as a group that watched no cartoons at all, to really control for any effect
null hypothesis
observed data reflect only what would be expected under the sampling distribution H0
experimental control
occurs to the extent that the experimenter is able to eliminate effects on the dependent variable other than the effects of the independent variable.
manipulation
of the IV (guarantees that the IV occurs before the DV), e.g., cartoon type; rules out common causal variables, which are controlled or eliminated
one way exp designs
one IV (a set of experimental conditions), e.g., do kids who view a violent cartoon behave more aggressively than children who did not view violent cartoons?
causal relationships
probabilistic
power of stat test
the probability that the researcher will, on the basis of the observed data, be able to reject the null hypothesis, given that the null is actually false. The bigger the relationship, the easier it is to detect. *power = 1 - beta* (can only be estimated); should be about .80 (so that beta = .20). Basically, this is how likely you are to find the effect when it really is there! Effects that are harder to find need more power to detect; if an effect is fairly robust, you can decrease the power a little bit.
pvalue
probability value (each stat has one), the likelihood of an observed stat occurring on the basis of the sampling distribution. How extreme is the data?
alternative explanation
produced by confounding variables: differences in the confounding variable, rather than the IV, caused the changes in the DV; decreases internal validity
correlation coefficient
r; strength or effect size = distance from zero (i.e., .54 > .30); indexes a linear relationship
coefficient of determination
r^2--> the proportion of variance accounted for
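A quick sketch tying r to r^2, with hypothetical paired scores:

```python
from scipy.stats import pearsonr

# Hypothetical scores on a predictor and an outcome
predictor = [2, 4, 5, 7, 9, 10]
outcome = [1, 3, 4, 6, 8, 11]

r, p = pearsonr(predictor, outcome)
print(r)       # correlation coefficient (distance from zero = strength)
print(r ** 2)  # coefficient of determination: proportion of variance accounted for
```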
extraneous variables
random error --> causes type 2 errors (failing to find a relationship that is really there), reduces power, and increases within-groups variability (harder to find differences between experimental conditions); the main *threat to internal validity*. Also called confounds or confounding variables. Internal validity must be considered during the design phase of research.
individual, multiple regression
regression coefficients (beta weights): the contribution of each individual predictor, controlling for the other predictors
temporal priority
relation btwn associated variables: order A --> B (B does not cause A b/c it occurs after A)
replication
repeating previous research which forms the basis of all scientific inquiry
Hypothesis Testing Flow Chart
research hypothesis --> set alpha (.05) --> use power to determine the sample size that is needed --> collect data --> calculate statistic and p value --> compare p value to alpha (.05): if p < .05, reject the null; if p > .05, fail to reject the null
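The whole flow chart as one hedged end-to-end sketch (all numbers hypothetical; assumes a two-group design analyzed with an independent-samples t test):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

rng = np.random.default_rng(0)
alpha = 0.05  # 1. research hypothesis, set alpha

# 2. use power (.80) to determine the sample size needed per group
n = int(np.ceil(TTestIndPower().solve_power(effect_size=0.5,
                                            alpha=alpha, power=0.80)))

# 3. collect data (simulated here)
control = rng.normal(0.0, 1.0, n)
treatment = rng.normal(0.5, 1.0, n)

# 4. calculate the statistic and p value, then compare p to alpha
t, p = stats.ttest_ind(treatment, control)
print("reject null" if p < alpha else "fail to reject null")
```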
single group design
research that uses a single group of participants who are measured after they have had the experience of interest (no control group = limitation); selection threat
review papers
the results of a research program are summarized here; discusses the research in a given area with the goals of summarizing the existing findings, drawing conclusions, etc.
inferential stat
sample data are used to draw inferences about the true state of affairs; we can estimate the probability that our results were due to chance (we're happy when p is small). Helps you infer how the results from your sample might generalize to the population (is there really a difference?) and helps you draw conclusions about your data (sample --> population). Here's where inferential statistics come in: they help us decide whether the difference we observe between groups is larger than the difference we'd expect due to chance or error variance.
control of extraneous variables
select from a limited population (e.g., college students); before-after designs: the DV (e.g., memory) is assessed both before and after the experimental manipulation (adv: individual differences influence both measures, only one condition needed; disadv: retesting effects); matched-group design: participants are measured on the variable of interest before the experiment begins, then assigned to conditions based on their scores on that variable; standardized conditions: all participants at all levels of the IV are treated the same (use of an experimental script; automated experiments --> video/audio)
single before after
selection is not a problem b/c the same participants are measured before and after, but attrition, maturation, history, and retesting are problems (same people)
levels
the specific situations that are created within a manipulation (e.g., violent or nonviolent)
alpha, significance level
the standard that the observed data must meet --> we may reject the null only if the observed data are so unusual that they would have occurred by chance at most 5% of the time. A significant difference basically means there is at most a 5% probability of a result that extreme if the null hypothesis were true: a 5% chance that there really wasn't an effect of our IV
structural equation analysis
stat procedure that tests whether the observed relationships among a set of variables conform to a theoretical prediction about how those variables should be causally related. represents both the conceptual variable (latent variables) and the measured variable in the stat analysis
meta-analysis
stat technique that uses the results of existing studies to integrate and draw conclusions about those studies; objective b/c it specifies inclusion criteria --> rules that indicate exactly which studies will or will not be included in the analysis; reports effect sizes; limited by the data that have been published; a form of archival research
Scatterplot
standard coordinate system in which the horizontal axis indicates scores on the predictor variable and the vertical axis scores on the outcome variable; the regression line is the straight line drawn through the points on a scatterplot that minimizes the squared distances of the points from the line
the proportion of explained variability
strength of relationship --> the proportion of variability in the dependent variable explained by the independent variable, as opposed to random error; indicated by the square of the effect-size stat (effect size x effect size)
research programs
study a topic of interest through conceptual and constructive replications over a period of time
2 sided p values
take into consideration that unusual outcomes may occur in more than one direction; 2x as big as the 1-sided p value (i.e., multiply the one-sided p by 2)
constructive replication
tests the same hypothesis as the original experiment but also adds new conditions to assess the specific variables that might change the previous relationship. (did not view any cartoons)
generalization
the degree to which relationships among conceptual variables can be demonstrated among a wide variety of people and a wide variety of manipulated or measured variables
sampling distribution
the distribution of all possible values of a statistic --> each stat has an associated sampling distribution
experimental realism
the extent to which the experimental manipulation involves the participants in the research
experimentwise alpha
the probability of a type 1 error in at least one comparison; it increases as more comparisons are made
eta squared (η²)
the proportion of variance in the dependent variable accounted for by the experimental manipulation
experimental manipulations
the researcher can rule out the possibility that the relationship btwn the IV and DV is spurious
conceptual replication
the scientist investigates the relationship between the same conceptual variables that were studied in previous research, but tests the hypothesis using different operational definitions of IV or measured DV (cartoons--> film clips)
restriction of range
the size of r may be reduced if there is restriction of range --> occurs when most participants have similar scores on one of the variables being correlated (e.g., SAT --> college performance, when only kids with high SATs get admitted to college)
effect size
the size or magnitude of the relationship (larger = stronger relationship), e.g., small (.10), medium (.30), large (.50); the amount that two populations do not overlap; the extent to which the experimental procedure had the effect of separating the two groups. *Calculated by dividing the difference between the two population means by the population standard deviation*
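A minimal sketch of that calculation (Cohen's d with a pooled SD, on hypothetical scores):

```python
import numpy as np

def cohens_d(group1, group2):
    # Difference between the two means divided by the pooled SD
    n1, n2 = len(group1), len(group2)
    pooled_sd = np.sqrt(((n1 - 1) * np.var(group1, ddof=1) +
                         (n2 - 1) * np.var(group2, ddof=1)) / (n1 + n2 - 2))
    return (np.mean(group1) - np.mean(group2)) / pooled_sd

print(cohens_d([14, 17, 18, 12, 21], [10, 9, 15, 13, 8]))
```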
common causal variable
a third variable, not part of the research hypothesis, that causes both the predictor and the outcome variable and thus produces the observed correlation btwn them; if it influences both outcome and predictor, the relationship is SPURIOUS --> the common causal variable produces and explains why the relationship btwn the predictor and outcome variable exists
comparison group design
threat to internal validity b/c differences in the DV may be due to differences between the groups that existed before the program --> selection threats (e.g., students interested in studying abroad); the comparison group helps only to the extent that it is similar to the experimental group
unrelated experiments technique
participants are told they are participating in 2 separate studies / 2 separate experiments (but it's really one study)
cells
the total number of conditions, found by multiplying the number of levels of each factor, i.e., 2X2 = 4 cells (conditions)
variance equation
total variance = systematic variance + error variance; observed differences = effect of IV + effect of all other things
A-B-A design, reversal design
type of repeated measures experimental design in which behavior is initially measured during a baseline period, measured again after the intervention begins, measured once more after intervention removed.
correlation matrix
used when there are many correlations to be reported at the same time, IBM SPSS (the diagonal is r=1.00)
mediating variable
variable that is caused by the predictor variable and that in turn causes the outcome variable violent TV--> arousal--> aggressive play explain why a relationship btwn 2 variables occurs
extraneous variables
variables other than the predictor variable that influence the outcome variable but that DO NOT cause the predictor variable
between groups variance
variance among the condition means (across levels)
within groups variance
variance within the conditions
reverse causation, reciprocal causation
reverse: we observe violent TV --> aggressive play, but actually violent TV <-- aggressive play; reciprocal causation runs both ways
marginal means
when means are combined across the levels of another factor in this way, they control for the effects of that other factor
post hoc comparison
mean comparisons that are made when specific comparisons have not been planned ahead of time (many comparisons are made)
participant variable design
when the grouping variable involves preexisting characteristics of the participants, the variable that differs across participants is known as a participant variable
impact
when the manipulation creates the hoped-for changes in the conceptual variable
regression to the mean
whenever the same variable is measured more than once, to the extent that the correlation btwn the 2 measures is less than 1 in absolute value, individuals tend to score closer to the group average on the 2nd measure than on the 1st
state dependent memory
you perform better when the testing context matches the study context