RMDA2 Exam Final

*Quiz 9 Question:* Assume that you have conducted a replication with extension experiment. Assume, also, that your replication condition succeeded in producing the same pattern of results as previous work. There are two possibilities for the extension condition: (a) you get the same pattern as the replication condition or (b) you get a different pattern from the replication condition. What are the one-word labels that are used for these two possible findings? (Hint: both labels end with -tion.)

*Answer:* (a) (Same Results) = Generalization (b) (Different Results) = Moderation

*Quiz 8 Question:* Assume that you are testing the effectiveness of an intervention (or new teaching method) using a non-equivalent groups design, with one group getting the intervention and the other not. You plan to use a pre-test/post-test design, to control for (pre-existing) overall differences between the two groups. What should you add to this basic design in order to control for the possible effects of differential history (i.e., different external events occurring for the two groups between the pre- and post-test)? [long question that is best answered in two or three words]

*Answer:* A Control Measure

*Quiz 8 Question:* In order for a "classic" quasi-experiment (on its own) to provide strong evidence of a causal relationship, the quasi-IV (e.g., age or sex) must be determined or set by ___________ . [fill in the blank with two or three words]

*Answer:* A random process

*Quiz 9 Question:* What is the key difference between a replication and an extension? (If it helps, you may assume that the replication is a direct or exact replication.)

*Answer:* A replication doesn't change anything that is theoretically important, while an extension repeats the experiment under theoretically different conditions.

*Quiz 7 Question:* Assume that you have conducted a two-factor experiment (e.g., Factor 1 was Stroop Condition [3 levels] and Factor 2 was the Language of the words [2 levels]) and the initial analysis produced a significant interaction. What is the next step to the analysis?

*Answer:* Choose a "parsing" (if you haven't already) and then conduct tests of the simple main effects

*Quiz 9 Question:* What is the key difference between a direct or exact replication and a conceptual replication?

*Answer:* Direct replication is the same in all ways (except the subjects), while a conceptual replication asks the same question (under the same conditions) using different methods (e.g., using a different operational definition of the IV or DV)

*Multiple-Choice Question:* Assume that you've run a one-way experiment with four levels and the initial analysis produced a p-value greater than .05 for the main effect of the four-level factor. How many pairwise comparisons do you need? A. 0 B. 1 C. 3 D. 4 E. 6

*Answer:* Option A

*Multiple-Choice Question:* When should you conduct a two-factor experiment, instead of two separate one-way experiments? A. When you want to save time. B. When you're interested in whether one factor moderates the effect of other factor. C. When you're interested in whether one factor mediates the effect of other factor. D. When you want to run more subjects. E. When you want to run fewer subjects.

*Answer:* Option B

*Multiple-Choice Question:* In order to qualify as a "classic" quasi-experiment, _______. A. The quasi-IV should be more stable than the DV. B. The quasi-IV should be "older" (set earlier) than the DV. C. The quasi-IV should be set by a random process. D. All of the above E. None of the above

*Answer:* Option D

*Quiz 8 Question:* In order for a "classic" quasi-experiment (on its own) to provide strong evidence of a causal relationship, the quasi-IV (e.g., age or sex) must be ___________ than the DV. [fill in the blank with four or five words]

*Answer:* Set earlier (older) and more stable

*Quiz 6 Question:* Assuming a two-factor design, what is a "simple main effect"? [If it helps, a good definition of "overall main effect" for a two factor design is: the effect of one factor (on the data) when you average or collapse across the other factor.]

*Answer:* The effect of one factor at a particular level of another factor

*Quiz 9 Question:* According to the basic tenets (i.e., rules) of all empirical sciences, including psychology, everyone should feel free (even encouraged) to argue about _______ but we must agree on the _______ if we hope to get anywhere. (One word each.)

*Answer:* Theory (or interpretation) ... Data

*Quiz 6 Question:* Assume that you have conducted an experiment with two factors, each with three levels. In other words, you have a 3x3 design. When it comes time to conduct the main analysis, how many null hypotheses will you be testing?

*Answer:* Three (Because all two way designs have three null hypotheses, regardless of the number of levels per factor)

*Quiz 7 Question:* How many different *null hypotheses* are tested by the initial ANOVA of a two-way (two factors) within-subjects analysis? (Just the initial ANOVA.)

*Answer:* Three (because all 2-way designs have 3 null hypos, regardless of number of levels or design type)

*Quiz 7 Question:* How many different *error terms* are created and used to conduct the initial ANOVA of a two-way (two factors) within-subjects analysis? (Just the initial ANOVA.)

*Answer:* Three (because every test gets its own error-term when it's a within-subjects design)

*Quiz 7 Question:* Assume that you have conducted a two-factor experiment (e.g., Factor 1 was what the subject Drank [3 levels] and Factor 2 was Time-of-Day [also 3 levels]) and the initial analysis found no interaction. What is the next step to the analysis?

*Answer:* Treat the experiment as if it were two (separate) one-ways ... look at each main effect and conduct pairwise comparisons if (and only if) it is significant with three or more levels

*Short-Answer Question:* In some situations, it is best to conduct a mixed-factor design, instead of a (pure) between-subjects design or a (pure) within-subjects design. Assuming that you have already decided to use a mixed-factor design, what two issues should you focus on most when deciding which factor should be manipulated between-subjects and which factor should be manipulated within-subjects.

*Answer:* When choosing which factor should be the between and which should be the within, I would focus on the two types of validity that trade off when we decide whether a single factor should be manipulated between or within subjects. Betweens have better construct validity because they have fewer demand characteristics, so if one of the manipulations is more susceptible to reactivity, then it should be the between. Conversely, withins have better statistical conclusion validity, and I'm probably hoping for an interaction, so I'll probably be doing simple main effects, and I want the factor that I'll be examining (for the SMEs) to be the within for more power.

*Short-Answer Question:* Repeating an experiment using subjects from a different culture is complicated because it isn't always clear whether it should be treated as a replication or an extension. Why is this important and how does setting a "domain" for your theory avoid this complication?

*Answer:* Whether the repetition of an experiment is a replication or an extension depends on whether the changes are minor or theoretically important. Knowing if the new experiment is a replication or an extension is important because the consequences of "failure" (to get the same data) are very different. A "failure" to replicate is always a failure and a serious problem for any empirical science. A "failure" to extend is not a failure; it's the discovery of moderation, instead, which is great. If it isn't clear whether switching cultures would count as a minor change or a theoretically-important change, then you can set a "domain" for the theory by listing the set of conditions or contexts that would count as a replication.

*Quiz 8 Question:* Are you allowed to also include "true" IVs (i.e., manipulated variables, such as Stroop Condition) in a quasi-experiment, creating a multi-way design? If you say NO, then also say why not. If you say YES, then also say whether this is done very often.

*Answer:* Yes! This is done all the time (more often than not, in fact)

Generalization vs. Moderation

- (Assuming exact replication success), the two possible patterns from a replication-plus-extension are: 1. A different pattern for the extension condition(s) (cf. the rep) = a significant interaction (obeying Rule 2) = moderation. 2. The same pattern for the extension condition(s) (cf. the rep) = no interaction = generalization. - You really can't lose... as long as the difference that defines the extension is theoretically interesting. - This is good, as it encourages replication.

Partial Eta-Squared Formula

- η²_partial = SS between conditions / (SS total − SS between subjects)
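As a sketch, the card's formula (for a within-subjects effect) translates directly into Python; the sums of squares below are hypothetical numbers chosen for illustration, not values from any real analysis:

```python
def partial_eta_squared(ss_conditions, ss_total, ss_between_subjects):
    """Partial eta-squared for a within-subjects effect, per the card:
    SS between conditions / (SS total - SS between subjects)."""
    return ss_conditions / (ss_total - ss_between_subjects)

# Hypothetical sums of squares, for illustration only:
print(partial_eta_squared(ss_conditions=40.0,
                          ss_total=140.0,
                          ss_between_subjects=40.0))  # → 0.4
```

Removing SS between subjects from the denominator is what makes it "partial": the stable subject-to-subject differences don't count against the effect.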

Construct Validity

- *Are you actually studying what you are supposed to be studying?* -"The whole truth and nothing but the truth." - The extent to which a set of empirical measures provides a selective and exhaustive estimate of the target theoretical variables.

What is a Null Hypothesis

- *Definition*: The hypothesis that there is no significant difference between specified populations, any observed difference being due to sampling or experimental error. - *Null Hypothesis for one-way ANOVA*: H_0: µ_1 = µ_2 *or* µ_1 − µ_2 = 0 - (An "_" after a letter means that what follows is a subscript, e.g., µ_1 reads "mu sub 1".)

t-Test & t-Ratio

- *t-Test*: t = Mean Violation of the Null Hypothesis (H_0) / Standard Error of the Mean Violation - *t-Ratio*: t = (X̄_1 − X̄_2) / s_(X̄_1−X̄_2) [note how the mean violation (of H_0) can be expressed as a single number (after subtraction)]
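A minimal sketch of the t-ratio for two independent samples, using the pooled-variance form (which assumes equal variances); the reaction-time numbers are invented for illustration:

```python
import math
from statistics import mean, stdev

def t_ratio(x1, x2):
    """Independent-samples t: the mean violation of H_0 (the difference
    between the two means) divided by the standard error of that
    difference (pooled-variance form, assuming equal variances)."""
    n1, n2 = len(x1), len(x2)
    pooled_var = ((n1 - 1) * stdev(x1) ** 2
                  + (n2 - 1) * stdev(x2) ** 2) / (n1 + n2 - 2)
    se_diff = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    return (mean(x1) - mean(x2)) / se_diff

# Made-up reaction times (ms) for two Stroop conditions:
congruent = [510, 525, 498, 517, 502]
incongruent = [560, 548, 575, 552, 569]
print(round(t_ratio(congruent, incongruent), 2))  # → -7.16
```

The subtraction in the numerator is exactly the "single number" the card mentions: the whole violation of H_0 is compressed into one mean difference.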

(Replication with) Extension Part #1

- A *replication with extension* is when you retest an IV-DV relationship under both the same and a new set of conditions/contexts. - The difference between a conceptual replication and an extension is subtle but very important (because of the very different consequences of "failure"). - When one conducts a conceptual replication, nothing that is theoretically important is changed. - When one conducts an extension, the new vs. old conditions are theoretically different (e.g., extending "diffusion of responsibility" to cross-race).

History Effects

- A change in behavior due to an external event (from the passage of time)

Testing Effects

- A change in behavior of subject due to previous testing, i.e. the participant may get a higher score just from practice instead of from the intervention (from repeated measures). Similar to reactivity.

Maturation Effects

- A change in behavior that emerges spontaneously over time from getting older (from the passage of time)

Instrumentation Effects

- A change in measurement due to previous use (from repeated measures)

Observer Bias

- A confound created when the beliefs of the experimenter (coder or observer) influence the results because they change their behavior at the time of collecting data.

Mixed-Factor Design (What & Why)

- A design with at least one between-subjects factor plus at least one within-subjects factor. - *Why not a fully within?*: Demand of AM vs PM. - *Why not a fully between?*: Need the power of a within. - The test for an interaction will have the power of a within if at least one of the factors is within-subjects.

Between-Subjects Design

- A different group of subjects is used for each different condition. - Comparisons are made between subjects (across the groups). - Better construct validity but worse statistical (conclusion) validity.

Mixed-Factor Design (Analysis)

- All of the issues from both types of design apply. 1. Equality of variance (~Levene's) 2. Sphericity (Mauchly's) 3. All subjects in all error-terms 4. Removing between-subject differences (when possible)

What is an Analysis of Variance

- Also referred to as "*ANOVA*": a statistical method in which the variation in a set of observations is divided into distinct components.

Independent Variables (IV)

- Always at least one; having an IV is a defining attribute of an experiment. - It is a variable that is manipulated by the experimenters.

Dependent Variables (DV)

- Always at least one. - It is a variable that is affected as a result of the experiment and measured.

Extension

- An *extension* is when you retest an IV-DV relationship under a new set of conditions or in a new context. - It is standard (good) practice to include an exact replication condition in any experiment designed to extend a result.

What is a Quasi-Experiment Design?

- An experiment in which the subjects were not randomly assigned to a between subjects factor.

extraneous variable

- Any and every variable besides the IV and DV; each should be held constant in the experiment to prevent it from becoming a confound. - There are two types of extraneous variables: control & controlled.

Demand Characteristics

- Any aspect of the experiment that provides the subjects with info as to what is being studied (which isn't a problem on its own, but often causes reactivity)

Consequences of "Failure" Part #2

- As before, no matter how you finally explain the "failure" to replicate, it remains a "failure" of some sort. - But when an extension "fails" (assuming that the paired exact replication succeeded), this is not often seen as "failure"; instead: you have found evidence of moderation! - So, the line between conceptual replication and extension is subtle (and theory-dependent), but very important.

If you are concerned that one IV is likely to cause demand, then should you run it as a between or a within?

- Between!

If you might add more levels to a factor then that one should be the between or within?

- Between!

Reactivity

- Changing behavior due to being studied, which is a huge threat to CONSTRUCT validity (big threat to within-subject groups)

So what is the order in which you look at your one way ANOVA?

- Check to see if there is a significant main effect. If it is significant, conduct pairwise. If not, you are done.

How do you decide whether to use an IV as a within or a between subject in a mixed design?

- Demand, simple main effects, expandability, extension, construct.

Design Confounds vs. Subject Confounds (Selection Effects)

- Design confounds are aspects of the experiment that vary systematically with the independent variable, but they are built into the experiment, such as a buzzing noise in dim lighting condition only. - Subject confound or selection effects can occur when the participants in the different conditions of an experiment are systematically different due to failure of random assignment.

Within-Subjects Design

- Design where all subjects take part in every condition. - Comparisons are made within every subject. - Main threats to internal validity are order effects (deal with this by counter-balancing the order of conditions across the subjects if possible), main trade-off is that within has better stats and worse construct due to demand characteristics/reactivity.

What is being excluded from any particular error-term, both for between-subjects and within-subjects tests?

- Difference between subjects!!!!

So what is the order in which you look at your two way ANOVA data?

- First, look to see if the interaction is significant. - If it is, choose a parsing and test the simple main effects; conduct pairwise comparisons for any significant simple main effect with three or more levels. - If the interaction is not significant, analyze the design as two separate one-ways: examine each main effect and conduct pairwise comparisons only where a main effect is significant with three or more levels.

F-Ratio

- H_0: µ_1 = µ_2 = µ_3 - F = Observed Variance across Group Means / Variance Expected across Group Means under the Null - *In other words*: Between-Group Variance / Within-Group Variance (or Explained vs. Unexplained Variance)
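A minimal one-way F computation following this definition, with mean squares standing in for the two variances; the three groups of scores are invented:

```python
from statistics import mean

def f_ratio(groups):
    """One-way F: between-group variance (explained) over
    within-group variance (unexplained), via mean squares."""
    k = len(groups)                              # number of groups
    n = sum(len(g) for g in groups)              # total N
    grand = mean(x for g in groups for x in g)   # grand mean
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Three made-up groups of scores:
print(round(f_ratio([[1, 2, 3], [2, 3, 4], [5, 6, 7]]), 2))  # → 13.0
```

A large F means the group means differ far more than the null (sampling error alone) would predict.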

When & Why is an overall/common/global error-term used for every test? When are separate error terms used for each test? Which of the four "rules" plays a key role in this?

- In a between-subjects two-way analysis, one overall error-term is used for every test, to obey Rule 4 (every error term should include every subject). In a within-subjects two-way analysis, three separate error-terms are used (one for each test).

What part of the output from a two-way (or larger) analysis should you look at first and why?

- Look at the interaction first: if it is significant, the main effects may be misleading (Rule 3) and the next step is simple main effects; if it is not significant, treat the design as separate one-ways.

counter balancing

- Only in Within-Subjects Design. - Switching the order of conditions for different subjects and groups. - Ex: Giving Group A Test #1 first, then giving them Test #2, while giving Group B Test #2 first and then Test #1.
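Full counterbalancing just enumerates every possible condition order; the condition names below are placeholders. A sketch with itertools:

```python
from itertools import permutations

# Every possible order of three conditions; spreading subjects evenly
# across these orders counterbalances position (order) effects.
conditions = ["A", "B", "C"]
orders = list(permutations(conditions))
for i, order in enumerate(orders, start=1):
    print(f"Group {i}: {' -> '.join(order)}")
```

With k conditions there are k! orders (here 3! = 6), which is why full counterbalancing gets expensive fast and partial schemes (e.g., Latin squares) are often used instead.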

What is our preferred measure of "standardized effect size" and how do you get this from SPSS?

- Partial eta-squared, because it is not as dependent on sample size when gauging how different the means are from each other; in SPSS, request "Estimates of effect size" in the GLM Options dialog.

What happens if there is an interaction (over-additive or under-additive lines of data)?

- Reporting the main effects is required, but they may be misleading. - Look to see if the interaction is significant, and if so, you need to run the simple main effects to see how the effect is moderated.

Why use a Quasi-Experiment?

- Sometimes you are interested in the effect of the quasi-IV itself... like: are clinically depressed people more anxious? - More often, you are interested in whether a quasi-IV is a moderator of some other IV's effect.

Subject Confounds

- Systematic pre-existing differences for subjects assigned to each condition. - This is only an issue for between subject designs. - Caused by a failure of random assignment.

Bias

- Systematically affects participant performance. - *Ex:* Having the two groups perform at very different times of the day, or having the two groups play in very different temperatures.

Statistical (Conclusion) Validity

- The extent to which conclusions about the relationship among variables based on the data are correct and/or reasonable. - Also the extent to which inferences based on a sample provide accurate estimates of the sampling population.

Internal Validity

- The extent to which the observed pattern of results is due to the causal relationships between the variables and not caused by external forces or confounds ("third variables"). - Objective, never theoretical. - All threats to internal validity are confounds: a confound is any variable that varies systematically with the IV. - In the context of experiments, internal validity is the extent to which the pattern observed in the DV is due to the causal effect of the IV.

External Validity

- The extent to which the results of the experiment can be generalized to other people, places, and times, rather than being attributed to one specific case.

Central Limit Theorem

- The prediction that, regardless of the parent population distribution from which the samples were drawn: 1) the mean of the sample means will equal the population mean, 2) the SD of the sample means (the standard error) will equal the population SD/√N, and 3) the shape of the distribution of sample means can be assumed to be normal (which is what permits parametric stats). - If N is greater than or equal to thirty, we can assume normality.
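A quick simulation illustrates the first two predictions; the uniform parent population is deliberately non-normal, and the seed and sample sizes are arbitrary choices:

```python
import math
import random
from statistics import mean, stdev

random.seed(1)

N, reps = 30, 5000
pop_mean = 0.5                       # mean of Uniform(0, 1)
pop_sd = math.sqrt(1 / 12)           # SD of Uniform(0, 1), about 0.289

# Distribution of sample means across many samples of size N:
sample_means = [mean(random.random() for _ in range(N)) for _ in range(reps)]

print(round(mean(sample_means), 2))   # close to pop_mean (0.5)
print(round(stdev(sample_means), 2))  # close to pop_sd / sqrt(N), about 0.05
```

Plotting `sample_means` as a histogram would also show the third prediction: the distribution of means looks normal even though the parent population is flat.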

"Classic" Quasi-Experiment

- The quasi-IV is very stable (compared to the DV), making it unlikely that changes in the DV could cause changes in the quasi-IV; - was set long before the DV, again making it unlikely that causation is DV→quasi-IV; - and was determined by some random process (or, at least, by a process that is unrelated to any other variable), making it unlikely that some confound caused both. - Note how the criteria for a good quasi-experiment depend on both the quasi-IV and the DV.

Important Info on the previous term/MC Question

- This is a case where reading the question carefully was very important. The main effect was not significant ("p-value greater than .05"), so no pairwise comparisons are needed or even allowed. - Only when the main effect is significant do we do any pairwise comparisons. - If you're wondering how many there would have been if the main effect had been significant: when there are 4 conditions (A, B, C, & D), there are 6 distinct pairs (A v B, A v C, A v D, B v C, B v D, & C v D).
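The pair count is just "k choose 2", which itertools can enumerate directly; the condition labels match the card's A-D:

```python
from itertools import combinations

conditions = ["A", "B", "C", "D"]
pairs = list(combinations(conditions, 2))  # all distinct pairs
print(len(pairs))   # → 6
print(pairs[0])     # → ('A', 'B')
```

The same count falls out of the formula k(k − 1)/2: with k = 4, that is 4 × 3 / 2 = 6.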

How do you deal with the inequality of the groups in a Non-Equivalent Group Design?

- Use a pre-test/post-test design and look at the change scores and then you have to add a control measure

How can a Quasi-Experiment establish causation?

- We must be sure that the quasi-IV preceded the DV and that the quasi-IV is the only difference.

(Replication with) Extension Part #2

- When a conceptual replication "fails" (to get the same data), you first focus on the operational definitions, then the stats, and then conclude: *the effect doesn't generalize.* - No matter how you finally explain the "failure", it remains a "failure" of some sort. - When an extension "fails" (assuming the exact replication succeeded), you have found evidence of moderation! (So, not a fail.) - Note: The line between conceptual replication and extension is not only subtle & important, but also theory-dependent. [Recall opening tenet: as long as we agree on the data, we can argue about theory as much as we wish.]

Consequences of "Failure" Part #1

- When an exact replication "fails" to get the same data (as this is a serious problem for any empirical science), you first focus on the details, then the stats, then (maybe) the possibility of bias or worse. - No matter how you finally explain the "failure" to replicate, it remains a "failure" of some sort. - When a conceptual replication "fails" (to get the same data), you first focus on the operational definitions, then the stats, and then conclude: *the effect is highly specific*

What does it mean when an interaction is sig? Not sig?

- When it's significant, it means that the effect of one factor depends on the level of the other factor. - If it's not significant, the effect is not moderated.

What is a Non-Equivalent Group Design?

- When pre-existing groups are assigned to the different levels of a between subjects IV. - These are said to be a type of quasi experiment because random assignment is not used, but the critical difference is that in a quasi, the attribute that defines the groups is the same as the IV of interest (age or sex). - In a non-eq group design, the attribute that defines the groups is not the IV (groups by class, lighting, etc.)

Exact/Direct Replication

- When the *same experiment* is repeated using the same methods, procedures, etc.; the only difference should be the specific subjects.

Conceptual Replication

- When the *same question* is asked (i.e., the same constructs are manipulated and measured) but some or many of the details are changed (e.g., repeating the "diffusion of responsibility" study in a new location and/or with a new reason for helping).

Experimenter Bias

- When the beliefs of the experimenter influence the results at the time of the manipulation because of their behavior ("sorry I know it's dark in here" in dim lights condition).

If comparisons within one factor equal measure, then it should be between or within?

- Within!

If you expect an interaction, your simple main effects will have more power if the examined factor is run as a between or a within?

- Within!

Do the main effects have to be significant in order to look at the pairwise comparisons?

- Yes!

Do you need pairwise comparisons when the null is rejected?

- Yes! ONLY when it is rejected

When and why do you need to conduct pair-wise comparisons?

- You need to conduct pairwise comparisons when your simple main effects are significant or, for a one-way ANOVA, when the main effect is significant, because you need to see which particular pairs of conditions differ from each other.

(Sum of Squares) SS total within subjects =

- ss between conditions + ss within

(Sum of Squares) SS total between subjects =

- ss between groups + ss within groups
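The between-subjects partition can be checked numerically; the three groups of scores below are invented for the demonstration:

```python
from statistics import mean

# Invented scores for three groups of three subjects:
groups = [[1, 2, 3], [2, 3, 4], [5, 6, 7]]
scores = [x for g in groups for x in g]
grand = mean(scores)

ss_total = sum((x - grand) ** 2 for x in scores)
ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)

# The partition holds exactly: SS total = SS between + SS within
print(round(ss_total, 2), "=", round(ss_between, 2), "+", round(ss_within, 2))
```

This additivity is the "analysis" in analysis of variance: total variation splits cleanly into explained and unexplained pieces.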

Mediated vs. Spurious

-A mediated relationship is when X causes Y through Z, the mediator (because Z is a mediator, it is not considered a 3rd variable/confound). Ex: X -> Z -> Y - A spurious relationship is when Z causes X and Z causes Y (Z is a 3rd variable/confound in spurious relationships). Ex: X <- Z -> Y

Hierarchical regression

-A way to show whether the variables of interest explain a statistically significant amount of variance in your Dependent Variable (DV) after accounting for all other variables

additive variance rule

-Variances add in squared (x²) form, i.e., you add s², not s -Accept that variances, and nothing else, are additive

t-test

0. Get the "basic inferentials" (i.e., best guess ± how wrong). 1. Assume that H_0 is true. 2. Calculate the probability of observing a "violation" of H_0 as large as (or larger than) that which we have found. 3. Reject H_0 if the probability is very low. - Decision rule (for Step 3): p < .05
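These steps can be sketched for a one-sample case. The scores are invented, and the decision rule is applied via the tabled two-tailed critical value for df = 9 (|t| > t_crit is equivalent to p < .05; in practice scipy.stats.ttest_1samp would give the exact p):

```python
import math
from statistics import mean, stdev

def one_sample_t(data, mu0):
    """Steps 0-2: best guess ± how wrong, then the violation of H_0
    expressed in standard-error units."""
    se = stdev(data) / math.sqrt(len(data))   # how wrong, on average
    return (mean(data) - mu0) / se, se

scores = [104, 98, 110, 95, 102, 107, 99, 105, 101, 96]  # invented data
t, se = one_sample_t(scores, mu0=100)

# Step 3: compare |t| to the tabled value (df = 9, two-tailed, alpha = .05)
T_CRIT = 2.262
print("reject H0" if abs(t) > T_CRIT else "retain H0")  # → retain H0
```

Here the observed mean (101.7) violates H_0 by only about one standard error, so the violation is unremarkable under the null and H_0 is retained.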

Design Confounds

1) Unintended consequences of manipulation (like dim lighting that causes a buzzing noise and disturbs subjects). 2) Lack of control (bright light and dim light conditions run at different times of the day).


Most worrisome threats to the validity of a Quasi-Experiment?

1. "Quasi selection effects" (due to lack of random assignment): the quasi-groups are almost always different in many ways (is a given difference a mediator or a confound?). 2. "Unequal construct validity": the DV and other IVs used might not work the same for all quasi-groups (one minute of life may mean different things to the young vs. the old).

Mixed-Factor Design (Planning)

1. *Demand*: if one IV is very likely to cause demand, then that IV should be between-subjects (note: if both IVs cause demand, run fully between) 2. *Simple Main Effects*: (SME) if you expect an interaction, your SMEs will have more power if the examined factor is within-subjects (i.e., the moderator should be the between-subjects IV) 3. *Expandability*: if you might add more levels to a factor, then that one should be the between. 4. *Extension*: if one factor has been used many times as a within, keep it as the within. 5. *Construct*: if comparisons within one factor = measures, then it must be within-subjects.

How do you choose a parsing?

1. The best option would be to go along with your theory. 2. Look at the graph. 3. Design type. 4. Design size.

What are the Four General "Rules" for ANOVA

1. You may not conduct any follow-up analysis unless it is justified by a previous *significant* result! (Such as no pairwise comparisons without a main effect.) 2. A difference in significance (p>.05 and p<.05) is not a significant difference! [We only "reject" the idea that two things are the same when a test of this idea produced a p-value < .05.] 3. Main effects are ignored when there's an interaction (because they can be highly misleading) and interactions can only be "parsed" in one way. 4. Every error term should always include every subject.

3 criteria for causality

1. co-variation 2. temporal precedence 3. internal validity

Three Uses of "Paradigm (Shift)"

1. loose / non-technical term - label for "current best theory" -"paradigm shift" = change in dominant theory 2. (upper-case) Paradigm - a set of assumptions that defines a sub-field of science - "paradigm shift" = change in assumptions 3. (lower-case) paradigm - a standard method for studying a theoretical construct - "paradigm shift" = change in standard method

point estimation steps

1. Get the sample mean & standard deviation. 2. Convert to best guess and estimate of error: se = s_X̄ = s/√N. 3. Express the findings in standard format. - Option 1: best guess ± how wrong on average, i.e., mean ± standard error (of the mean), e.g., 100.00 ± 5.00. - Option 2: provide the range of values that has a 95% chance of containing the population mean, the "95% confidence interval (for the mean)": lower bound = X̄ − (se × t_crit), upper bound = X̄ + (se × t_crit), e.g., 90 < pop. mean < 110.
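The steps above can be sketched directly; the sample is invented, and the t_crit value (2.776) is the tabled two-tailed value for df = 4 at alpha = .05:

```python
import math
from statistics import mean, stdev

def point_estimate(data, t_crit):
    """Best guess ± how wrong on average, plus the 95% CI.
    t_crit must be looked up for df = N - 1 (e.g., from a t-table)."""
    m = mean(data)                             # step 1: best guess
    se = stdev(data) / math.sqrt(len(data))    # step 2: how wrong on average
    return m, se, (m - se * t_crit, m + se * t_crit)

sample = [95, 105, 110, 90, 100]               # invented sample (N = 5)
m, se, (lo, hi) = point_estimate(sample, t_crit=2.776)
print(f"{m:.2f} ± {se:.2f}")                               # option 1
print(f"95% CI: {lo:.2f} < pop. mean < {hi:.2f}")          # option 2
```

Both output formats express the same estimate; the CI just converts "± one standard error" into the full range with 95% coverage.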

manipulation check

A measure used to determine whether the manipulation of the independent variable has had its intended effect on a subject

panel study

A type of longitudinal study, in which data are collected from the same set of people (the sample or panel) at several points in time.

one-group pretest-posttest design

An experiment in which a researcher recruits one group of participants; measures them on a pretest; exposes them to a treatment, intervention, or change; and then measures them on a posttest. - Ex: does playing video games help with math? Participants (children) → pretest DV (numerical processing) → intervention (action video game play) → posttest DV (numerical processing)

paired samples t-test

H0: μA = μB, for within-subject designs. - Does not involve any simplifying assumptions. - Does not really concern the separate conditions; it really is a test of whether the mean (within-S) difference is zero.
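The card's point, that the paired t is really a one-sample test on the within-subject differences, can be shown in a few lines; the pre/post scores are invented:

```python
import math
from statistics import mean, stdev

def paired_t(a, b):
    """Paired t is a one-sample t on the within-subject differences,
    testing whether their mean is zero."""
    diffs = [x - y for x, y in zip(a, b)]
    se = stdev(diffs) / math.sqrt(len(diffs))
    return mean(diffs) / se

pre  = [12, 15, 11, 14, 13]   # invented within-subject scores
post = [14, 18, 12, 17, 15]
print(round(paired_t(pre, post), 2))  # → -5.88
```

Note that only `diffs` ever enters the computation: the separate condition means and SDs never appear, which is exactly why between-subject differences drop out of the error term.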

directionality

In correlational research, the situation in which it is known that two variables are related although it is not known which is the cause and which is the effect.

cross lagged analysis steps and prelims

- Prelim #1, cross-sectional correlation: the cause and the effect should be correlated with each other, at least at the early time-point(s). - Prelim #2, auto-correlation: the cause and the effect should each be correlated with itself across time-points. - Critical test: the cause should be correlated with the future values of the effect... and this correlation should be stronger than that between the effect and future values of the cause.
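A dependency-free sketch of the prelims and the critical test, using a hand-written Pearson correlation; the two-wave panel data are invented so that X plausibly drives Y:

```python
import math

def pearson(x, y):
    """Plain Pearson correlation, written out to stay dependency-free."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented panel data: X and Y each measured at time 1 and time 2.
x1, x2 = [1, 2, 3, 4, 5], [2, 2, 4, 4, 6]
y1, y2 = [1, 3, 2, 5, 4], [2, 3, 4, 5, 7]

print(round(pearson(x1, y1), 2))  # Prelim #1: cross-sectional correlation
print(round(pearson(x1, x2), 2))  # Prelim #2: auto-correlation of X
# Critical test: X1 with future Y vs. Y1 with future X
print(round(pearson(x1, y2), 2), "vs.", round(pearson(y1, x2), 2))
```

In this made-up data the X1-Y2 correlation comes out noticeably stronger than Y1-X2, the pattern the critical test treats as evidence that X causes Y rather than the reverse.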

alternative hypothesis

The hypothesis that states there is a difference between two or more sets of data.

dependent variable

The outcome factor; the variable that may change in response to manipulations of the independent variable.

2 threats to a one-group, pre-test/post-test design that are not addressed by adding a control group

Threat #5: regression effects - Threat #6: biased attrition

one-group pretest-posttest design threats

Threat #1: testing effects - Threat #2: instrumentation effects - Threat #3: maturation effects - Threat #4: history effects

situation noise

Unrelated events or distractions in the external environment that create unsystematic variability within groups in an experiment

*Quiz 6 Question:* Over-Additive Interaction

When the lines on the graph move farther apart (diverge) as they go up

*Quiz 6 Question:* Under-Additive Interaction

When the lines on the graph move closer together (converge) as they go up

Mediator

When the effect of one variable on another goes through an intervening variable x->z->y

Never provide a best guess (point estimation)

Without an estimate of how wrong we might be (standard error)

moderation

X and Y always have a relationship, but Z affects how big or small that relationship is (Z acts on the X → Y arrow). - Ex: violent TV leads to more aggression in boys than girls

Autocorrelation

X1 and X2 are correlated from time 1 to time 2; Y1 and Y2 are correlated from time 1 to time 2 - i.e., the variable is correlated with itself from time 1 to time 2

cross sectional correlation

X1 and Y1 are correlated at time 1

spurious relationship

a (significant and often robust and replicable) relationship that is not causal in either direction (no causal relationship between X and Y; both are caused by Z) - caused by a third variable

Threat #6: biased attrition effects

a change in the data due to systematic (unequal) loss of participants - ex: more subjects getting the intervention drop out - threatens Internal Validity - solutions: (do what you can to) avoid attrition; omit all of the lost participants' data

between-subjects design

a different group of subjects is tested under each condition - faces a major threat to internal validity (subject confounds) - use random assignment

Paradigm (lower-case)

a method for studying or measuring a theoretical construct - relates to Construct Validity - many constructs of interest are defined in terms of their effects; by definition, an "effect" is a change in one variable when some other variable is manipulated, so we'll need an IV as well as a DV

within-subjects design

a research design that uses each participant as his or her own control; for example, the behavior of an experimental participant before receiving treatment might be compared to his or her behavior after receiving treatment - subject is exposed to all forms of treatment - causes reactivity

Paradigm (upper-case)

a set of assumptions that defines a sub-field of science - each of these assumptions does one or both of two things: - determine what questions should be asked - determine how those questions should be answered

Type 2 error (beta)

accepting the null hypothesis when it is false - saying there is no effect when there actually is one - reflects a lack of (statistical) power

controlled variables

accounted for and subtracted before the experiment begins

control variables

accounted for and subtracted from the data after the experiment ends

ceiling effect

all the scores are squeezed together at the high end, so the independent variable (the variable being manipulated) no longer appears to affect the dependent variable

floor effect

all the scores cluster at the low end - the dependent variable produces very low scores on the measurement scale

internal validity causal criteria

alternative explanations of the co-variation must be ruled out - ex: the co-variation must still be found when other variables are "controlled" (either experimentally or statistically)

measurement error

an error that occurs when there is a difference between the information desired by the researcher and the information provided by the measurement process

two group posttest only design

an experimental design in which subjects are randomly assigned to an experimental or control group and measured only after the intervention - logic: without a pretest you can't measure pre-existing differences and then remove their effects (by subtraction), so you rely on random assignment to equate the groups

demand characteristics

any aspect of the experiment that provides subjects with information about what is being studied and/or what is expected - not a problem on its own, but often causes reactivity (a change in behavior due to being studied), which is a huge threat to Construct Validity - demand characteristics can also be stronger in one condition than another, causing more reactivity in one condition than the other, which is a threat to Internal Validity

Reactivity

any change in the behavior of the subjects due to their being studied

Subject Confounds

any pre-existing difference between subjects assigned to different conditions

confound (for experiment)

anything that covaries (in any way) with the (manipulated or measured) putative cause

association claim

argues that one level of a variable is likely to be associated with a particular level of another variable, does not say that one caused the other

random assignment

assigning participants to experimental and control conditions by chance, thus minimizing preexisting differences between those assigned to the different groups
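The idea on this card can be sketched in a few lines of Python; the participant IDs, seed, and function name below are invented for illustration.

```python
import random

def randomly_assign(participants, seed=None):
    """Shuffle the participant list, then split it in half so each
    person has an equal chance of landing in either condition."""
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]  # (experimental, control)

experimental, control = randomly_assign(range(20), seed=1)
```

Because assignment depends only on chance, any pre-existing subject differences are (on average) spread evenly across the two groups.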

observer bias

beliefs of the researcher can affect the data -it's a confound when the researcher records or analyzes the data from different conditions in different ways -this occurs WHILE or AFTER the data are collected

causal relationship

cause and effect

Threat #4: History effects

change in behavior due to external event (occurs during experiment) - threatens internal validity

Threat #1: testing effects

change in behavior due to previous testing - threatens internal validity

Threat #3: Maturation effects

change in behavior that emerges over time (regardless of details of experiment) threatens internal validity

Threat #2: Instrumentation effects

change in measurement due to previous use - threatens internal validity

Pretest-posttest design two groups

compares the change that occurs within two different groups on some dependent variable (the outcome) by measuring that variable at two time periods, before and after introducing/changing an independent variable - BEST SOLUTION FOR FIXING THE FOUR THREATS IS TO ADD A CONTROL GROUP

95% confidence interval

the confidence interval in which, roughly speaking, there is a 95% chance that the population mean falls - lower bound = mean − (standard error × t_crit) - upper bound = mean + (standard error × t_crit)
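A minimal Python sketch of the two bounds on this card; the sample values are invented, and t_crit is fixed at the rough 2.00 value from the T-CRITICAL card rather than looked up from the t distribution.

```python
from math import sqrt
from statistics import mean, stdev

def ci95(sample, t_crit=2.0):
    # t_crit ~ 2.00 is the rough value from the T-CRITICAL card;
    # the exact value depends on the degrees of freedom (n - 1)
    se = stdev(sample) / sqrt(len(sample))  # standard error of the mean
    m = mean(sample)
    return m - se * t_crit, m + se * t_crit  # (lower bound, upper bound)

lower, upper = ci95([4, 5, 6, 5, 4, 6, 5, 5])
```

The interval is symmetric around the sample mean, and widening it via t_crit compensates for the standard error itself being only an estimate.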

bivariate correlation

correlation between two variables X and Y

standard error of the difference

difference between means - between subjects: the variances add together, so the standard error of the difference is larger - within subjects: pre-subtract (removing individual differences), which also leaves only one set of values, so the standard error of the difference is smaller

what is a tradeoff?

each option you can choose will raise [at least] one type of validity while lowering [at least] one other type of validity

point estimation

estimate of the mean of the sampling population -this is a "point estimate" because it's the best guess concerning a single value - never provide (nor accept) a best guess without an associated estimate of how wrong it might be

placebo effect

experimental results caused by expectations alone

internal validity

extent to which we can draw cause-and-effect inferences from a study

external validity

extent to which we can generalize findings to real-world settings

Correlational studies have what type of validity?

external validity

Threat #5: regression effects

extreme values are not likely to repeat - ex: when only low-scoring subjects are given the intervention - threatens Internal Validity - solution: ensure equivalent groups at pre-test

Type 1 error (alpha)

false positive - saying there is an effect when there isn't one - this is a false alarm - rejecting the null when you should retain it - a "risk"

equivalent groups

have same mean value on all attributes that could influence the dependent variable

Between-subjects design: which validity is high and which is low?

high construct validity, low statistical conclusion validity

Within-subjects design: which validity is high and which is low?

high statistical conclusion validity, low construct validity

how to deal with confound?

holding the variable constant (on average) or removing its effect via multiple regression

Experiments have more of what type of validity?

internal validity

Matching (in a between-subjects design)

pair subjects on a pretest measure so the groups are equivalent, then look at pretest and posttest

degrees of freedom (df)

n - 1, determine the number of scores in the sample that are independent and free to vary

experimenter bias

one way in which the beliefs of the researcher can affect the data -it's a confound when the researcher behaves differently when running the different conditions -this occurs BEFORE the data are collected

What is a p-value?

reject H0 when the p-value is very low (the "p" in "p-value" is short for "probability") - but a p-value is NOT the probability that H0 is true; it is the probability of getting the (sample) data on the assumption that H0 is true - a p-value less than .05 implies that the finding is "significant"

longitudinal design

research design in which one participant or group of participants is studied over a long period of time

individual differences (between subject variances)

some subjects may score drastically differently from each other in both conditions, and this can cause the data to have a lot of variability

standard error

standard error = σ/√N, where σ = standard deviation and N = sample size
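A quick simulation (with invented numbers) that checks this card's formula: the standard deviation of many sample means should come out close to σ/√N.

```python
import random
from math import sqrt
from statistics import mean, pstdev

rng = random.Random(0)
N, sigma = 25, 10.0

# draw 5000 samples of size N from a population with SD sigma,
# recording each sample's mean
sample_means = [mean(rng.gauss(50, sigma) for _ in range(N))
                for _ in range(5000)]

empirical_se = pstdev(sample_means)  # SD of the sampling distribution
predicted_se = sigma / sqrt(N)       # the formula's prediction: 2.0
```

The two values agree closely, which is why the standard error of the mean is defined as the standard deviation of the sampling distribution.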

The Subtraction Method

the target theoretical construct is the durations of mental "stages" - the original version worked, but new subtractions often produced negative values - so we can only estimate the total duration of all "central" processes (combined) using the Subtraction Method

statistical conclusion validity

the extent to which inferences based on a sample provide accurate estimates of the sampling population

construct validity

the extent to which variables measure what they are supposed to measure

Co-variation

the putative cause and effect must show some relationship ex- must have significant (linear) correlation

temporal precedence

the putative cause must come before the effect - use an experiment or cross lag study

standard error of the mean

the standard deviation of a sampling distribution

Null Hypothesis (H0)

the statistical hypothesis tested by the statistical procedure; usually a hypothesis of no difference or no relationship

Semi-partial correlation

the unique relationship between X and Y while controlling for (taking out) the potential influence of a third variable - these are measures of unique relationships, controlling for all previous predictors - e.g., to test the effect of X on Y controlling for Z, set the outcome to Y and have Z enter before X - "X -> Y controlling for Z" uses the label sr(yx·z) - when everyone knows what the outcome (Y) is, just say "the semi-partial for X, controlling for Z"
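A pure-Python sketch of the idea (the helper names and data are made up): residualize X on Z, removing Z's influence from X only, then correlate those residuals with Y.

```python
from math import sqrt

def corr(a, b):
    """Pearson correlation between two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / sqrt(sum((x - ma) ** 2 for x in a)
                      * sum((y - mb) ** 2 for y in b))

def residualize(x, z):
    """Residuals of a simple regression of x on z (x minus Z's influence)."""
    n = len(x)
    mx, mz = sum(x) / n, sum(z) / n
    b = (sum((zi - mz) * (xi - mx) for xi, zi in zip(x, z))
         / sum((zi - mz) ** 2 for zi in z))
    a = mx - b * mz
    return [xi - (a + b * zi) for xi, zi in zip(x, z)]

def semipartial(y, x, z):
    """sr for X predicting Y, controlling for Z (Z removed from X only)."""
    return corr(y, residualize(x, z))
```

When X is unrelated to Z, removing Z changes nothing and the semi-partial equals the ordinary correlation; when Z fully accounts for the X-Y relationship, the semi-partial drops toward zero.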

independent samples t-test

used to compare two means for a between-groups design, a situation in which each participant is assigned to only one condition - tests H0: μ1 = μ2 for between-subjects designs - usually simplified by assuming "equality of variance", i.e., σ²1 = σ²2, which allows us to "pool" the two estimates of variance and maximize the number of degrees of freedom - but we should check that this assumption is safe, which is (automatically) done using Levene's Test - if we have evidence against the assumption (i.e., the p-value for Levene's is < .05), then we must switch to the equal-variance-not-assumed version of the test
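Assuming SciPy is available, the workflow on this card looks roughly like the sketch below (the two groups are invented data): run Levene's test first, and fall back to the equal-variance-not-assumed (Welch) test if Levene's p-value is below .05.

```python
from scipy import stats

group1 = [5.1, 4.8, 5.5, 5.0, 4.9, 5.3, 5.2, 4.7]
group2 = [6.0, 6.4, 5.8, 6.2, 6.1, 5.9, 6.3, 6.5]

# Levene's test checks the equality-of-variance assumption
lev_stat, lev_p = stats.levene(group1, group2)

# pool the variances only if Levene's gives no evidence against equality
equal_var = lev_p >= .05
t_stat, p_value = stats.ttest_ind(group1, group2, equal_var=equal_var)
```

With `equal_var=False`, `ttest_ind` runs Welch's version of the test, which does not pool the two variance estimates.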

What is T-CRITICAL?

the value used to adjust the width of confidence intervals, taking into account that the standard error is only a guess as to how wrong we might be and could be too low - approximately 2.00

independent variable

variable that is manipulated

Methods (upper-case)

when a (lower-case) paradigm requires an extra assumption beyond those included in the (upper-case) Paradigm - Methods are best thought of as a special type of paradigm, because they measure a specific theoretical construct - but they are also a bit like a Paradigm, in that they involve (and rely on) one or more assumptions

third variable (for correlation)

when variable 1 does not cause variable 2 and variable 2 does not cause variable 1, but rather some other variable exerts a causal influence on both (e.g., hot weather as the third variable Z, with Z -> X and Z -> Y) - the relationship between X and Y is spurious

zero semi partial correlation

if controlling for Z gives a semi-partial correlation of 0, that might mean the relationship between X and Y is mediated by Z

cross-lag correlations

you measure the two variables of interest - the cause and the effect - at two (or more) points in time
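A toy sketch of the cross-lag comparison (all four measurement waves are invented): compute r(X1, Y2) and r(Y1, X2); if X causes Y, the first cross-lag correlation should be the larger one.

```python
from math import sqrt

def corr(a, b):
    """Pearson correlation between two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / sqrt(sum((x - ma) ** 2 for x in a)
                      * sum((y - mb) ** 2 for y in b))

# invented data: X at Time 1 strongly predicts Y at Time 2,
# while Y at Time 1 barely predicts X at Time 2
x1 = [1, 2, 3, 4, 5, 6]
y1 = [3, 1, 4, 1, 5, 9]
x2 = [2, 7, 1, 8, 2, 8]
y2 = [1, 2, 3, 4, 5, 6]

x_leads_y = corr(x1, y2)  # cross-lag r(X1, Y2)
y_leads_x = corr(y1, x2)  # cross-lag r(Y1, X2)
```

The asymmetry between the two cross-lag correlations is what supplies the temporal-precedence evidence a plain bivariate correlation lacks.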

