Research Methods Test 3 Study Guide


Explain interaction effects in words.

"short sleep was associated with increased risk for obesity, particularly for children from low SES backgrounds." = sleep (IV 1) has a significant effect on obesity risk (DV) but only for children with low SES backgrounds (IV 2).

Explain main effects in words.

"short sleep was associated with increased risk for obesity, regardless of one's level of socioeconomic status." = sleep (IV 1) has a significant effect on obesity risk (DV) when averaged across levels of SES (IV 2).

Demand characteristics

A cue that can lead participants to guess an experiment's hypothesis.

What is a design confound, is it systematic or unsystematic variance, and how is it related to internal validity; identify possible design confounds in research scenarios.

A design confound occurs when experimenters make a mistake in designing or manipulating the IV such that a second variable changes systematically along with the intended IV; this second variable then serves as an alternative explanation for the results, that is, for why the main IV and DV appear related. Because the confound covaries predictably with group membership, it is systematic (not unsystematic) variance. If a study has internal validity, then you can be confident that changes in the dependent variable were caused by the independent variable and not some other variable. The major threat to internal validity is a confounding variable: a variable other than the independent variable that (1) covaries with the independent variable and (2) may be an alternative cause of the dependent variable.

Explain the one-group pretest/posttest design and why this is a problematic/flawed research design and why it's a threat to internal validity

A one-group pretest/posttest design is when a researcher recruits one group of participants; measures them on a pretest; exposes them to a treatment, intervention, or change; and then measures them on a posttest. This design differs from the true pretest/posttest design because it has only one group, not two. There is no comparison group. It is a flawed design and a threat to internal validity because it can result in history, maturation, regression, attrition, testing, instrumentation, and combined threats, and it includes no comparison group.

Practice effects

Also called fatigue effects: over a long sequence of trials, participants may get better at the task (practice) or get tired or bored toward the end (fatigue).

Why might researchers want to use more than one independent variable in an experimental design? Define the terms: Testing limits, Testing external validity, Moderation/moderating variables and testing interaction effects, and Testing theory

Adding an additional independent variable allows researchers to look for an interaction effect (or interaction)—whether the effect of the original independent variable depends on the level of another independent variable. Testing limits: Researchers conduct studies with factorial designs to test whether an independent variable affects different kinds of people, or people in different situations, in the same way. Testing external validity: When researchers test an independent variable in more than one group at once, they are testing whether the effect generalizes. Moderating variables/testing interaction effects: In factorial design language, a moderator is an independent variable that changes the relationship between another independent variable and a dependent variable. In other words, a moderator results in an interaction; the effect of one independent variable depends on (is moderated by) the level of another independent variable. Testing theory: The best way to study how variables interact is to combine them in a factorial design and measure whether the results are consistent with the theory.

Name and describe 3-4 specific advantages AND disadvantages to within-group designs.

Advantages: (1) they ensure the participants in the two conditions are equivalent, because individual or personal variables are kept constant and participants act as their own controls; (2) fewer participants have to be recruited; (3) they reduce noise and increase the ability to detect statistically significant effects of the IV on the DV. Disadvantages: threats to internal validity in the form of order effects, practice/fatigue effects, and carryover effects (see definitions above).

Double-blind study

An experiment in which neither the participant nor the researcher knows whether the participant has received the treatment or the placebo

What are control variables, how are they used in an experiment, and why are they used?

Any variable that an experimenter holds constant on purpose in order to allow researchers to separate one potential cause from another and thus eliminate alternative explanations for results. Control variables are therefore important for establishing internal validity. Researchers use control variables by holding all other factors constant between the levels of the independent variable.

What types of study designs are more likely to produce null results?

Between-groups posttest-only designs, within-groups repeated-measures designs, and correlational designs.

How can we reduce these threats or maintain high internal validity (from threats listed above)?, including: Blind experimental design Double-blind experimental design Double-blind placebo-control design

Blind experimental design: a study design in which the observers are unaware of the experimental conditions to which participants have been assigned. Double-blind experimental design: a design in which neither the participants nor the researchers know which treatment or intervention participants are receiving until the study is over. Double-blind placebo-control design: to determine whether an effect is caused by a therapeutic treatment or by placebo effects, the standard approach is to include a special kind of comparison group. As usual, one group receives the real drug or real therapy, and the second group receives the placebo drug or placebo therapy; neither the people treating the patients nor the patients themselves know who is in the real group and who is in the placebo group.

Be able to explain and describe different types of replication approaches and be able to identify examples of each: Conceptual replication Direct replication Replication-plus-extension

Conceptual replication: researchers explore the same research question but use different procedures. The conceptual variables in the study are the same, but the procedures for operationalizing the variables are different. Direct replication: researchers repeat an original study as closely as they can to see whether the effect is the same in the newly collected data. Replication-plus-extension: researchers replicate their original experiment and add variables to test additional questions.

Why can't studies (compared to experiments) support a causal claim as it relates to temporal precedence?

Correlational studies test for a relationship between two variables. However, seeing two variables move together does not tell us which variable changed first; without knowing the order in which the changes occurred, temporal precedence cannot be established, so a causal claim cannot be supported.

What is covariance, and what are some of the main questions we ask in relation to this?

Covariance is one of the three criteria for establishing a causal claim, and variables in the study must be associated with one another in order to establish covariance. Some questions we ask in relation to covariance are: do the results show that the causal variable is related to the outcome variable, and are distinct levels of the independent variable associated with different levels of the dependent variable?

Know what purpose cultural psychology has and how it's related to generalization mode

Cultural psychology studies how cultural contexts shape the way people think, feel, and behave. It is a special case of generalization mode: cultural psychologists test whether findings obtained in one cultural setting generalize to others.

Define and give an example of a repeated-measures within groups design.

Define: a type of within-groups design in which participants are measured on a dependent variable more than once, after exposure to each level of the independent variable. Example: Researchers used a repeated-measures design to investigate whether a shared experience would be intensified even when people do not interact with the other person. They recruited 23 college women to come to a laboratory. Each participant was joined by a female confederate. The two sat side-by-side, facing forward, and never spoke to each other. The experimenter explained that each person in the pair would do a variety of activities, including tasting some dark chocolates and viewing some paintings. During the experiment, the order of activities was determined by drawing cards, but the drawings were rigged so that the real participant's first two activities were always tasting chocolates. In addition, the real participant tasted one chocolate at the same time the confederate was also tasting it, but she tasted the other chocolate while the confederate was viewing a painting. The participant was told that the two chocolates were different, but in fact they were exactly the same. After tasting each chocolate, participants rated how much they liked it. The results showed that people liked the chocolate more when the confederate was also tasting it. In this study, the independent variable had two levels: sharing and not sharing an experience. Participants experienced both levels, making it a within-groups design. The dependent variable was participants' rating of the chocolate. It was a repeated-measures design because each participant rated the chocolate twice.

Define and give an example of a concurrent-measures within groups design.

Define: participants are exposed to all the levels of an independent variable at roughly the same time, and a single attitudinal or behavioral preference is the dependent variable. Example: A study investigating infant cognition, in which infants were shown a male face and a female face at the same time, and an experimenter recorded which face they looked at the longest. The independent variable was the gender of the face, and babies experienced both levels (male and female) at the same time. The baby's looking preference was the dependent variable. This study found that babies show a preference for looking at female faces, unless their primary caretaker is male.

Define and give an example of a pre-test/posttest independent group design.

Define: participants are randomly assigned to at least two groups and are tested on the key dependent variable twice—once before and once after exposure to the independent variable Example: A study on the effects of mindfulness training: 48 students were randomly assigned to participate in either a 2-week mindfulness class or a 2-week nutrition class. One week before starting classes, students completed a verbal-reasoning section of a GRE test. One week after their classes ended, students completed another verbal-reasoning GRE test of the same difficulty. The results revealed that while the nutrition group did not improve significantly from pretest to posttest, the mindfulness group scored significantly higher at posttest than at pretest.

Define and give an example of a posttest only independent-group design

Define: participants are randomly assigned to independent variable groups and are tested on the dependent variable once. Example: The note-taking study is an example of a posttest-only design, with two independent variable levels. Participants were randomly assigned to a laptop condition or a longhand condition and they were tested only once on the video they watched.

Explain the following threats to internal validity: Design confounds Selection effects Order effects Observer bias Demand characteristics (or participant reactivity) Placebo effect

Design confounds: an experimenter's mistake in designing the independent variable; it occurs when a second variable happens to vary systematically along with the intended independent variable. Selection effects: when the kinds of participants in one level of the independent variable are systematically different from those in the other, selection effects can result. Order effects: when exposure to one level of the independent variable influences responses to the next level. Observer bias: observers' expectations influence their interpretation of participant behavior; relatedly, observer effects occur when observers inadvertently change the behavior of those they are observing, such that participant behavior shifts to match observer expectations. Demand characteristics: a cue that can lead participants to guess an experiment's hypothesis. Placebo effect: when people receive a treatment and really improve—but only because the recipients believe they are receiving a valid treatment.

Be able to explain specific disadvantages to using small N designs

Disadvantages of small-N designs include internal and external validity problems; in particular, it is difficult to generalize study results to a whole population when the study was conducted on a single person or a small group of people.

Covariance

Do the results show that the causal variable is related to the outcome variable? Are distinct levels of the independent variable associated with different levels of the dependent variable?

Temporal precedence

Does the study design ensure that the causal variable comes before the outcome variable in time?

Internal Validity

Does the study design rule out alternative explanations for the results?

Know the meaning of factorial notation and how to read it (e.g., 2x2) and how this tells us the number of IVs and levels of IVs

Factorial notation: The notation for factorial designs follows a simple pattern. Factorials are notated in the form "— × —." The quantity of numbers indicates the number of independent variables (a 2 × 3 design is represented with two numbers, 2 and 3). The value of each of the numbers indicates how many levels there are for each independent variable (two levels for one and three levels for the other). When you multiply the two numbers, you get the total number of cells in the design.
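The cell-count arithmetic described above can be sketched as a short, hypothetical Python helper (the function name and return format are illustrative, not from the text):

```python
def factorial_design(levels):
    """Summarize a factorial design from its notation, e.g. (2, 3) for a 2 x 3.

    The count of numbers gives the number of IVs; the product of the
    numbers gives the total number of cells in the design.
    """
    n_ivs = len(levels)
    n_cells = 1
    for k in levels:
        n_cells *= k  # multiply the levels together to count cells
    return {"independent_variables": n_ivs, "cells": n_cells}

print(factorial_design((2, 3)))     # a 2 x 3 design: 2 IVs, 6 cells
print(factorial_design((2, 2, 2)))  # a 2 x 2 x 2 design: 3 IVs, 8 cells
```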

Main effects

In a factorial design, researchers test each independent variable to look for a main effect—the overall effect of one independent variable on the dependent variable, averaging over the levels of the other independent variable.

Marginal means

In a factorial design, the arithmetic means for each level of an independent variable, averaging over levels of the other independent variable.
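The averaging can be illustrated with a minimal sketch for a hypothetical 2 x 2 design (the cell means are made-up numbers, assuming equal cell sizes):

```python
# Rows = levels of IV A, columns = levels of IV B; entries are cell means.
cell_means = [
    [4.0, 6.0],  # A1 at B1, B2
    [5.0, 9.0],  # A2 at B1, B2
]

# Marginal mean for each level of IV A: average across the levels of IV B.
marginal_a = [sum(row) / len(row) for row in cell_means]

# Marginal mean for each level of IV B: average across the levels of IV A.
marginal_b = [sum(col) / len(col) for col in zip(*cell_means)]

print(marginal_a)  # [5.0, 7.0]
print(marginal_b)  # [4.5, 7.5]
```

Comparing the marginal means for one IV (averaging over the other) is how a main effect is examined.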

Participant variable/quasi-independent variable

In a quasi-experiment, researchers may not be able to randomly assign participants to one level of the independent variable or the other; participants are assigned by teachers, political regulations, acts of nature, or even by their own choice. Because the researchers do not have full experimental control over the independent variable, it is usually called a quasi-independent (or participant) variable.

Systematic Variability

In an experiment, a description of when the levels of a variable coincide in some predictable way with experimental group membership, creating a potential confound. See also unsystematic variability.

Unsystematic Variability

In an experiment, a description of when the levels of a variable fluctuate independently of experimental group membership, contributing to variability within groups.

Selection Effects

In an experiment, when the kinds of participants in one level of the independent variable are systematically different from those in the other

Compare and contrast independent-group (between-group) and within-group (within-person) experimental designs.

In independent-group designs, separate groups of participants are placed into different levels of the independent variable, (examples include posttest-only designs and pre-test/post-test designs), whereas in within-group designs, each person is presented with all levels of the independent variable. Examples of within-group designs include repeated measures design and concurrent measures design.

Crossover effect/interaction

Consider judging two foods, ice cream and pancakes, served either cold or hot. There are two independent variables: the food you are judging (ice cream or pancakes) and the temperature of the food (cold or hot). The dependent variable is how much you like the food. A graph of the interaction shows lines that cross each other; this kind of interaction is sometimes called a crossover interaction, and the results can be described with the phrase "it depends."

Be familiar with factorial design variations and how they differ from one another (i.e., particularly in the number of participants required for each cell and overall), including: Independent-groups factorial design Within-groups factorial design Mixed-groups factorial design

Independent-groups design: In an independent-groups factorial design (also known as a between-subjects factorial), both independent variables are studied as independent-groups. Therefore, if the design is a 2 × 2, there are four different groups of participants in the experiment. If the researchers decided to use 50 participants in each cell of the design, they would have needed a full 200 participants: 50 in each of the four groups. Within-groups factorial design: In a within-groups factorial design (also called a repeated-measures factorial), both independent variables are manipulated as within-groups. If the design is 2 × 2, there is only one group of participants but they participate in all four combinations, or cells, of the design. If researchers wanted to use 50 people in each cell of their study, they would need a total of only 50 people because every person participates in each of the four cells. Mixed-groups factorial design: In a mixed factorial design, one independent variable is manipulated as independent-groups and the other is manipulated as within-groups. If a researcher wanted 50 people in each cell of their 2 × 2 mixed design, they would have needed a total of 100 people: 50 at one level and 50 at the other, each participating at both levels of the study.
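The participant counts above follow simple arithmetic, sketched here as a hypothetical helper (the function and its parameters are illustrative):

```python
def participants_needed(levels, per_cell, design):
    """Total participants for a factorial design.

    levels: tuple of levels per IV, e.g. (2, 2) for a 2 x 2
    per_cell: desired participants per cell
    design: "between", "within", or (for a mixed design) the index of
            the IV that is manipulated between subjects
    """
    cells = 1
    for k in levels:
        cells *= k
    if design == "between":
        return cells * per_cell       # every cell is a separate group
    if design == "within":
        return per_cell               # everyone participates in every cell
    # mixed: separate groups only for the between-subjects IV
    return levels[design] * per_cell

print(participants_needed((2, 2), 50, "between"))  # 200
print(participants_needed((2, 2), 50, "within"))   # 50
print(participants_needed((2, 2), 50, 0))          # mixed: 100
```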

What is internal validity, and what are some of the main questions we ask in relation to this?

Internal validity is established when alternative explanations for the relationship between two or more variables can be ruled out. The main question we ask in relation to internal validity is: does the study design rule out alternative explanations for why the IV and DV may be related?

What is the difference between main effects and interaction effects in factorial design, and what is the main question that each of these effects helps answer; also be able to tell the number of main effects and interaction effects we would expect if you are given a research scenario or example

Main effects: the overall effect of one independent variable on the dependent variable, averaging over the levels of the other independent variable. Answers the question: does one IV alone influence scores on the DV on average? Interaction effects: Whereas the main effects are simple differences, the interaction effect is the difference in differences. To determine this, start by computing two differences, beginning with one level of the first independent variable, then go to the second level of the first independent variable. If the differences are different, you can conclude that there is an interaction effect in the factorial study. Main question: Does the effect of one IV on the DV depend on the level of the other IV? In a factorial design with a 2x2x2 design, there are three IVs with 2 levels each. There are three main effects, three two-way (2x2) interactions, and one 3-way (2x2x2) interaction.
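Both pieces of arithmetic above—counting the effects a design implies, and the "difference in differences" check—can be sketched in Python (the function name and the cell means are hypothetical illustrations):

```python
from math import comb

def effect_counts(n_ivs):
    """Effects implied by a factorial design with n_ivs independent variables:
    one main effect per IV, plus every possible interaction among 2+ IVs."""
    mains = n_ivs
    interactions = {k: comb(n_ivs, k) for k in range(2, n_ivs + 1)}
    return mains, interactions

mains, inters = effect_counts(3)  # e.g., a 2 x 2 x 2 design has 3 IVs
print(mains)   # 3 main effects
print(inters)  # {2: 3, 3: 1} -> three two-way interactions, one three-way

# "Difference in differences" check for a hypothetical 2 x 2 (made-up means):
cell = {("A1", "B1"): 4.0, ("A1", "B2"): 6.0,
        ("A2", "B1"): 5.0, ("A2", "B2"): 9.0}
diff_at_a1 = cell[("A1", "B2")] - cell[("A1", "B1")]  # effect of B at A1
diff_at_a2 = cell[("A2", "B2")] - cell[("A2", "B1")]  # effect of B at A2
print(diff_at_a2 - diff_at_a1)  # nonzero -> the differences differ: interaction
```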

Know what manipulation checks are and how they are used in relation to ceiling and floor effects

Manipulation checks are a separate dependent variable that experimenters include in a study, specifically to make sure the manipulation worked. For example, in the anxiety study, after telling people they were going to receive a 10-volt, 50-volt, or 100-volt shock, the researchers might have asked: How anxious are you right now, on a scale of 1 to 10? If the manipulation check showed that participants in all three groups felt nearly the same level of anxiety, you'd know the researchers did not effectively manipulate what they intended to manipulate. If the manipulation check showed that the independent variable levels differed in an expected way—participants in the high-anxiety group really felt more anxious than those in the other two groups, then you'd know the researchers did effectively manipulate anxiety. If the manipulation check worked, the researchers could look for another reason for the null effect of anxiety on logical reasoning. Perhaps the dependent measure has a floor effect; that is, the logical reasoning test might be too difficult, so everyone scores low. Or perhaps the effect of anxiety on logical reasoning is truly negligible.

For each type of internal validity threat for one-group pretest/posttest design, be able to tell one or two main ways in which researchers would be able to change the design or fix the study design to reduce or eliminate this threat to internal validity (e.g., adding comparison groups, counterbalancing, changing to within-group design)

Maturation threats can be reduced/eliminated by adding comparison groups. History threats can also be reduced/eliminated by adding comparison groups. Regression threats can be reduced/eliminated by adding comparison groups, along with a careful inspection of the pattern of results. Attrition threats can be reduced/eliminated by removing dropouts' scores from the pretest average as well, so that researchers look only at the scores of those who completed both parts of the study; another approach is to check the pretest scores of the dropouts—if they have extreme scores on the pretest, their attrition is more of a threat to internal validity than if their scores are close to the group average. Testing threats can be reduced/eliminated by abandoning the pretest altogether and using a posttest-only design. Instrumentation threats can be reduced/eliminated by switching to a posttest-only design, or researchers can take steps to ensure that the pretest and posttest measures are equivalent.

Explain the threats to internal validity in a one-group pretest/posttest design.

Maturation threats: a change in behavior that emerges more or less spontaneously over time. History threats: result from a "historical" or external factor that systematically affects most members of the treatment group at the same time as the treatment itself, making it unclear whether the change is caused by the treatment received. Regression threats: a statistical phenomenon in which a group whose average (mean) is unusually extreme at Time 1 is likely, when measured again at Time 2, to be less extreme—closer to its typical or average performance. Attrition threats: a reduction in participant numbers that occurs when people drop out before the end. Attrition can happen when a pretest and posttest are administered on separate days and some participants are not available on the second day. An attrition threat becomes a problem for internal validity when attrition is systematic; that is, when only a certain kind of participant drops out. Testing threats: a change in the participants as a result of taking a test (dependent measure) more than once. People might become more practiced at taking the test, leading to improved scores, or they may become fatigued or bored, which could lead to worse scores over time. Instrumentation threats: occur when a measuring instrument changes over time. In observational research, the people who are coding behaviors are the measuring instrument, and over a period of time, they might change their standards for judging behavior by becoming stricter or more lenient.

Be able to explain and tell about the three main problems that may occur with too much within-group variability and why we might get noise or overlap between groups/conditions: Measurement error Naturally-occurring, irrelevant individual differences Situation (environmental) noise

Measurement error: a human or instrument factor that can randomly inflate or deflate a person's true score on the dependent variable. Naturally-occurring, irrelevant individual differences: In the experiment on money and mood, for example, the normal mood of the participants must have varied. Some people are naturally more cheerful than others, and these individual differences have the effect of spreading out the scores of the students within each group. Situation (environmental) noise: external distractions are a third factor that could cause variability within groups and obscure true group differences. Suppose the money and mood researchers had conducted their study in the middle of the student union on campus. The sheer number of distractions in this setting would make a mess of the data. The smell of the nearby coffee shop might make some participants feel cozy, seeing friends at the next table might make some feel extra happy, and seeing the cute person from sociology class might make some feel nervous or self-conscious. The kind and amount of distractions in the student union would vary from participant to participant and from moment to moment. The result, once again, would be unsystematic variability within each group.

Spreading interaction

Most people know that either bacon or avocado will make a sandwich taste better. However, if you use both ingredients, the sandwich becomes particularly delicious. A graph of this interaction shows lines that are not parallel but do not cross over each other. This kind of interaction is sometimes called a spreading interaction, and the pattern can be described with the phrase "especially."

Know and be able to explain what noise is in relation to too much within-group variability that may explain null results

Noise is unsystematic within-group variability—for example, from external distractions—that can obscure true group differences and produce null results.

Be able to explain each main type of quasi-experimental design; be able to tell whether it applies to an independent-group or within-group study design, what this study might look like (be able to identify from an example), and tell why each main type of quasi-experimental design not a true experiment: Nonequivalent control group posttest only designs Nonequivalent control group pretest/posttest designs Repeated-measures interrupted time-series design Nonequivalent control group interrupted time-series design

Nonequivalent control group posttest-only designs: the participants were not randomly assigned to groups and were tested only once, after exposure to one level of the independent variable or the other. Applies to independent-group design. Nonequivalent control group pretest/posttest designs: the participants were not randomly assigned to groups and were tested both before and after some intervention. Applies to independent-group design. Repeated-measures interrupted time-series design: measures a variable repeatedly (for example, suicide rates in the United States)—before, during, and after the "interruption" caused by some event. Applies to within-groups design. Nonequivalent control group interrupted time-series design: combines two of the previous designs (the nonequivalent control group design and the interrupted time-series design). Applies to within-groups design.

What do "null effects" or null results mean and why don't we see these often in research studies and journal articles?

Null effects/null results occur when the independent variable does not make much difference in the dependent variable, meaning that the 95% CI for the effect includes zero, such as a 95% CI of (-.21, .18). We don't see these often in research studies and journal articles because researchers like to report significant results, and most readers wouldn't be interested to hear about a null result.
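The "CI includes zero" criterion is simple to check mechanically; here is a minimal sketch (the function name is an illustration, not a standard API):

```python
def includes_zero(ci):
    """Return True if a confidence interval (low, high) contains zero,
    which is how a null effect is identified in the passage above."""
    low, high = ci
    return low <= 0 <= high

print(includes_zero((-0.21, 0.18)))  # True  -> consistent with a null effect
print(includes_zero((0.05, 0.40)))   # False -> estimate excludes zero
```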

What are three main threats to internal validity specifically for within-group experimental designs AS WELL AS the main way researchers reduce these problems and effects: Order effects Practice (fatigue) effects Carryover effects

Order effects: when exposure to one level of the independent variable influences responses to the next level. Practice (fatigue) effects: a long sequence might lead participants to get better at the task or to get tired or bored toward the end. Carryover effects: some form of contamination carries over from one condition to the next. The main way researchers can reduce these effects is by counterbalancing, in which they present the levels of the independent variable to participants in different sequences.
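Full counterbalancing enumerates every possible presentation order; a minimal sketch for a hypothetical three-level within-groups IV:

```python
from itertools import permutations

# Every possible order of the three conditions; participants are split
# evenly among the orders so order effects cancel out across the sample.
conditions = ["A", "B", "C"]
orders = list(permutations(conditions))

print(len(orders))  # 3! = 6 possible orders
for order in orders:
    print(order)
```

With many conditions, full counterbalancing becomes impractical (k conditions need k! orders), which is why researchers also use partial counterbalancing schemes such as Latin squares.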

Be able to explain what publication bias and how this is related to replication and null effects

Publication bias is the tendency of scientific journals not to publish nonsignificant or null results. It is related to replication and null effects because null findings rarely appear in the literature, and only about 40% of published study results have been successfully replicated.

Know what a quasi-experimental design is (broadly) and why it is not a true experiment

Quasi-experiments differ from true experiments in that the researchers do not have full experimental control. They start by selecting an independent variable and a dependent variable. Then they study participants who are exposed to each level of the independent variable. However, in a quasi-experiment, the researchers might not be able to randomly assign participants to one level or the other; they are assigned by teachers, political regulations, acts of nature—or even by their own choice.

Understand and be able to describe the four main reasons why researchers might still use a quasi-experimental design: Real-world considerations External validity Ethical reasons Construct and statistical validity

Real-world considerations: they present real-world opportunities for studying interesting phenomena and important events. External validity: the real-world settings of many quasi-experiments can enhance external validity because of the likelihood that the patterns observed will generalize to other circumstances and other individuals. Ethical reasons: many questions of interest to researchers would be unethical to study in a true experiment. It would not be ethical to assign a potentially harmful television show to young adults, nor would it be ethical to assign elective cosmetic surgery to people who don't want it. Quasi-experiments can be an ethical option for studying these interesting questions. Construct and statistical validity: to interrogate the construct validity of a quasi-experiment, you would ask how successfully the study manipulated or measured its variables. Usually, quasi-experiments show excellent construct validity for the quasi-independent variable. To assess a quasi-experimental study's statistical validity, you could ask how large the group differences were estimated to be (the effect size). You can also evaluate the confidence interval (precision) of that estimate.

What does replication mean?

Replication means conducting a study again to see whether the same results are obtained—testing whether the original results were a fluke or will hold up in newly collected data.

Be able to explain and identify different threats to internal validity (all covered in chapter 11 for general experiments and one-group pretest/posttest design) and how they might occur in the four main quasi-experimental designs listed above.

Selection effects: relevant only for independent-groups designs, not for repeated-measures designs. A selection threat to internal validity applies when the kinds of participants at one level of the independent variable are systematically different from those at the other level.
Design confounds: some outside variable accidentally and systematically varies with the levels of the targeted independent variable.
Maturation threat: in an experimental or quasi-experimental design with a pretest and posttest, an observed change could have emerged more or less spontaneously over time.
History threat: an external, historical event happens for everyone in a study at the same time as the treatment, so it is unclear whether the outcome is caused by the treatment or by the external event.
Regression to the mean: an extreme outcome is caused by a combination of random factors that are unlikely to happen in the same combination again, so the extreme outcome gets less extreme over time.
Attrition threat: in designs with pretests and posttests, attrition occurs when people drop out of a study over time; it becomes an internal validity threat when systematic kinds of people drop out.
Testing threat: a kind of order effect in which participants change as a result of having been tested before. Repeated testing might cause people to improve regardless of the treatment they received, or might cause performance to decline because of fatigue or boredom.
Instrumentation threat: a measuring instrument changes over repeated uses, and this change threatens internal validity when participants are tested or observed twice. If a study uses two versions of a test with different standards (e.g., one test is more difficult), or uses coders who change their standards over time, then participants might appear to change when in reality there is no change between one observation and the next.

Be able to tell what small N designs are and give some examples of common small N designs

Small N designs are a type of experimental design that uses a small sample size. Common examples include stable baseline designs, multiple baseline designs, and reversal designs.

Be able to explain and describe well-designed small N studies, including what aspects of these designs make them advantageous and "good": stable-baseline design, multiple-baseline design, reversal design

Stable-baseline design: a study in which a practitioner or researcher observes behavior for an extended baseline period before beginning a treatment or other intervention. If behavior during the baseline is stable, the researcher can be more certain of the treatment's effectiveness.
Multiple-baseline design: researchers stagger their introduction of an intervention across a variety of individuals, times, or situations to rule out alternative explanations.
Reversal design: researchers observe a problem behavior both with and without treatment, but take the treatment away for a while (the reversal period) to see whether the problem behavior returns (reverses); they subsequently reintroduce the treatment to see if the behavior improves again. By observing how the behavior changes as the treatment is removed and reintroduced, the researchers can support internal validity and make a causal statement: if the treatment is really working, behavior should improve only when the treatment is applied.

Be able to list or describe advantages of using quasi-experimental designs or studies like field studies or those with experimental realism (i.e., those that have high ecological validity)

Studies with high ecological validity use tasks and manipulations that are similar to situations people might actually encounter in everyday life. Field studies take place in the real world, which increases ecological validity and supports generalizing results to other settings. Experimental realism refers to a lab study that creates a situation in which people experience authentic emotions, motivations, or behaviors.

What is the difference between systematic and unsystematic variability, and which of these two indicate a design confound?

Systematic variability is a description of when the levels of a variable coincide in some predictable way with experimental group membership, creating a potential confound in an experiment. Unsystematic variability is a description of when the levels of a variable fluctuate independently of experimental group membership, contributing to variability within groups in an experiment. Systematic variability indicates a design confound, while unsystematic variability does not.

Matched group study design

A technique in research design in which each participant in the experimental group (who is exposed to the manipulation) is compared on the outcome variable to a specific participant in the control group who is similar on some important characteristic but did not receive the manipulation.

What are the two main reasons why we might get null results in an experimental or research study, and what are the two possible reasons for confounding/alternative explanations?

The first main reason we might get null results in an experimental study is that there were not enough between-groups differences, which may result from weak manipulations, insensitive measures, ceiling/floor effects, or a design confound acting in reverse. The second main reason is too much within-groups variability, which may result from noise such as measurement error and individual differences.

What are the two main types of combined threats to internal validity? Be able to describe them and identify an example.

The first type of combined threat is known as a selection-history threat. In a selection-history threat, an outside event or factor affects only those at one level of the independent variable. The second type of combined threat to internal validity is known as a selection-attrition threat, in which only one of the experimental groups experiences attrition.

Factorial notation

The notation for factorial designs follows a simple pattern. Factorials are notated in the form "— × —." The quantity of numbers indicates the number of independent variables (a 2 × 3 design is represented with two numbers, 2 and 3). The value of each of the numbers indicates how many levels there are for each independent variable (two levels for one and three levels for the other). When you multiply the two numbers, you get the total number of cells in the design.
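The arithmetic behind factorial notation can be sketched in a few lines of Python (the function name `cells` is just an illustration, not standard terminology):

```python
# Compute the number of cells in a factorial design from its notation.
# Each number in the notation is the number of levels of one independent
# variable; multiplying them gives the total number of cells.
from math import prod

def cells(*levels_per_iv):
    """Total number of conditions (cells) in a factorial design."""
    return prod(levels_per_iv)

print(cells(2, 3))     # a 2 x 3 design has 6 cells
print(cells(2, 2, 2))  # a 2 x 2 x 2 design has 8 cells
```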

Know what the replication debate is and how it's related to null effects, as well as various issues that arise possible when a study fails to replicate

The replication debate concerns the finding that many published studies fail to replicate. Because replication studies are difficult to get published, several issues arise: publication bias (null results and replications rarely appear in print), too few replication attempts, problems with the original study, and questionable statistical analyses or research practices. A failed replication is itself a null effect, so the usual explanations for null results also apply.

Random Assignment

The use of a random method (e.g., flipping a coin) to assign participants into different experimental groups.
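A minimal sketch of random assignment in Python, using the standard library's random number generator in place of a coin flip (the group names are hypothetical):

```python
# Randomly assign participants to conditions. Shuffling first and then
# dealing participants out like cards keeps the group sizes as equal as
# possible while leaving group membership up to chance.
import random

def randomly_assign(participants, conditions, seed=None):
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    return {cond: shuffled[i::len(conditions)]
            for i, cond in enumerate(conditions)}

groups = randomly_assign(range(20), ["treatment", "control"], seed=1)
print({name: len(members) for name, members in groups.items()})
```

Because pre-existing participant characteristics end up scattered unsystematically across the groups, random assignment is one of the main ways to rule out selection effects.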

Know the difference between Theory-testing mode and generalization mode, including examples of each

Theory-testing mode: a study designed specifically to test support for a theory (e.g., a lab experiment testing a prediction of cognitive dissonance theory). Generalization mode: researchers want to generalize findings from a sample to a larger population (e.g., a national survey using a representative sample to estimate how common an attitude is).

What is one way we can design experiments to help support or establish a causal claim as it relates to covariance?

We can establish comparison groups: a control group that receives no treatment (a neutral condition), a treatment group that receives the treatment, change, or manipulation, and possibly a placebo group that receives an inactive treatment without the participants' knowledge.

What is one way we can design experiments to help support or establish a causal claim as it relates to internal validity?

We can help reduce alternative explanations for the results (confounds) by holding other variables constant as control variables.

What is one way we can design experiments to help support or establish a causal claim as it relates to temporal precedence?

We can make sure that the study design ensures that the IV (causal variable) is manipulated before the DV is measured.

Be able to explain the FIVE main problems involving not enough between-group differences that may explain null results: weak manipulations, insensitive measures, design confounds in reverse, ceiling effects, floor effects

Weak manipulations: the manipulation was not strong enough to produce a difference. Why did the study show that money had little effect on people's moods? Ask how much money the researchers gave each group. If the amounts were $0.00, $0.25, and $1.00, it might be no surprise that the manipulation didn't have a strong effect: a dollar doesn't seem like enough money to affect most people's mood. Like the difference between two shakes and four shakes of hot sauce, it's not enough of an increase to matter.
Insensitive measures: the researchers have not used an operationalization of the dependent variable with enough sensitivity to detect the difference.
Design confounds in reverse: confounds are usually considered internal validity threats (alternative explanations for some observed difference), but they can apply to null effects too. A study might be designed in such a way that a design confound actually counteracts, or reverses, some true effect of an independent variable.
Ceiling effects: all the scores are squeezed together at the high end.
Floor effects: all the scores cluster at the low end. Either one can cause the independent variable groups to score almost the same on the dependent variable.

Interaction effect

When adding an additional independent variable, researchers want to determine whether the effect of the original independent variable depends on the level of another independent variable.

Know what considerations need to be made when trying to generalize results to a) other people and b) other settings

When generalizing to other places/settings, ecological validity, field setting, and experimental realism need to be considered. When generalizing to other people, it's important to think about the population of interest. For external validity, it's about who's in the sample/how we sample them, not how many people. The best studies use random sampling and diverse samples.

Counterbalancing

When researchers present the levels of the independent variable to participants in different sequences in order to cancel out any order effects that may have occurred.
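Full counterbalancing presents every possible sequence of the levels; a short sketch using Python's `itertools` shows how the sequences are enumerated (the level names "A", "B", "C" are placeholders):

```python
# Enumerate every order in which three levels of an independent variable
# could be presented. With k levels there are k! possible sequences, and
# participants are spread across them to cancel out order effects.
from itertools import permutations

levels = ["A", "B", "C"]
orders = list(permutations(levels))
print(len(orders))  # 3! = 6 sequences
```

Because the number of sequences grows factorially, researchers often use partial counterbalancing (e.g., a Latin square) instead of presenting all of them.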

Moderating variable

When the relationship between two variables changes depending on the level of another variable, that other variable is called a moderating variable.

What are selection effects and what type of variability (systematic or unsystematic) are they associated with; What are some scenarios under which we see selection effects and what are the two main ways we can eliminate or get rid of selection effects?

With a selection effect, a confound exists because the different independent variable groups have systematically different types of participants. Selection effects are associated with systematic variability. They can happen when we let participants select which independent variable group they want to be in, or when we assign all individuals with a certain characteristic, or all individuals arriving at a certain time, to one level of the IV. The two main ways to get rid of selection effects are random assignment and matched groups.

Know how three-way interactions differ from two-way interactions and how to examine/test whether there is a significant three-way interaction in a factorial design

A significant three-way interaction means that the two-way interaction between two of the independent variables depends on the level of the third independent variable. On a graph, you would find a three-way interaction whenever there is a two-way interaction at one level of the third independent variable but not at the other, or when the two levels of the third independent variable show different patterns of two-way interaction. If the same kind of two-way interaction appears at both levels of the third independent variable, there is no three-way interaction.

Maturation threat

a change in behavior that emerges more or less spontaneously over time. People adapt to changed environments; children get better at walking and talking; plants grow taller—but not because of any outside intervention. It just happens.

What is a factorial design and what are some other names for this?

A factorial design is a type of study design in which there are two or more independent variables (also referred to as factors). In the most common factorial design, researchers cross the independent variables; that is, they study each possible combination of their levels. Other names for a factorial design include an experimental design with more than one IV; it is sometimes loosely called a multivariate design, though strictly that term refers to designs with multiple dependent variables.

Null results

an experimental outcome in which the dependent variable does not differ across levels of the independent variable. A null result may reflect a true absence of effect, or it may reflect a poorly designed study that missed a real effect (a Type II error).

Design Confound

an experimenter's mistake in designing the independent variable; it occurs when a second variable happens to vary systematically along with the intended independent variable.

Manipulation checks

an extra dependent variable that researchers insert into an experiment to verify that the experimental manipulation worked.

Testing threat

a change in participants as a result of taking a test more than once (e.g., taking the same test as both a pretest and a posttest).

Be able to visually or in word distinguish between crossover interactions/effects and spreading interactions or effects

Crossover interactions: the lines on a graph cross over each other; the results are described with the phrase "it depends." Spreading interactions: the lines spread apart rather than cross; the effect of one factor is present at both levels of the other variable but is especially strong at one level; the pattern is described with the phrase "especially."

Placebo effect

experimental results caused by expectations alone: any effect on behavior caused by the administration of an inert substance or condition that the recipient assumes is an active agent. Placebo effects can be ruled out with a double-blind placebo control design.

History threat

historical/external factor occurs that systematically affects most members of the participant group in some way.

Measured v. Manipulated variables

Measured variable: the dependent variable, usually continuous (interval/ratio scale). Manipulated variable: the independent variable; associated terms include conditions, levels, and groups.

Instrumentation threat

occurs when a measure in the study changes over time, producing changes in the DV scores rather than just the treatment/intervention.

Attrition threat

participants systematically drop out of the study over time (or from pretest to posttest).

Regression to the mean/regression threat

random variables/events combine to produce an extreme score, and because that combination is unlikely to recur, the score tends to be less extreme (closer to the mean) at the next measurement.
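Regression to the mean can be demonstrated with a short simulation, a sketch under assumed score distributions (scores are true ability plus random luck; selecting people for an extreme first score selects for lucky draws that do not repeat):

```python
# Simulate two test administrations where each score is true ability
# plus independent random luck. People selected for extreme scores on
# test 1 tend to score closer to the mean on test 2.
import random

rng = random.Random(42)
n = 10_000
ability = [rng.gauss(100, 10) for _ in range(n)]
test1 = [a + rng.gauss(0, 10) for a in ability]
test2 = [a + rng.gauss(0, 10) for a in ability]

# Take the top 5% on test 1 (an extreme group) and compare means.
cutoff = sorted(test1)[int(0.95 * n)]
extreme = [i for i in range(n) if test1[i] >= cutoff]
mean1 = sum(test1[i] for i in extreme) / len(extreme)
mean2 = sum(test2[i] for i in extreme) / len(extreme)
print(mean1 > mean2)  # the extreme group regresses toward the mean
```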

Carryover effects

some form of contamination carries over from one condition to the next.

What is temporal precedence, and what are some of the main questions we ask in relation to this?

Temporal precedence is established when one variable precedes the other in time. We may ask: does the study design ensure that the causal variable comes before the outcome variable in time?

Order effects

when exposure to one level of the independent variable influences responses to the next level.

