Research Methods-Final
Describe matching, explain its role in establishing internal validity, and explain situations in which matching may be preferred to random assignment.
(aka matched groups) Used when the researcher has multiple groups (say, three) and measures each participant on a variable that might matter to the DV. The researcher takes the top 3 scorers on that measure & randomly assigns one of them to each of the 3 groups. The next top 3 are then randomly assigned to the 3 groups, & so on until all the participants have been assigned -Keeps the advantage of randomness b/c each member of a matched set is randomly assigned -Prevents selection effects -Ensures that the groups are equal on some important variable before the manipulation of the IV -May be preferred when a third variable is important to the study, such as IQ when measuring how well a group performs on an exam.
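The matching procedure above can be sketched in Python. This is a minimal illustration, assuming a made-up matching variable (IQ-like scores) and three groups:

```python
import random

def matched_assignment(participants, scores, n_groups, seed=0):
    """Assign participants to n_groups using matching.

    Participants are ranked on the matching variable, chunked into
    matched sets of size n_groups, and each set is randomly spread
    across the groups.
    """
    rng = random.Random(seed)
    # Rank participants from highest to lowest on the matched variable
    ranked = sorted(participants, key=lambda p: scores[p], reverse=True)
    groups = [[] for _ in range(n_groups)]
    # Take each consecutive set of n_groups participants and randomly
    # assign one member of the set to each group
    for i in range(0, len(ranked), n_groups):
        matched_set = ranked[i:i + n_groups]
        rng.shuffle(matched_set)
        for group, person in zip(groups, matched_set):
            group.append(person)
    return groups

# Hypothetical participants and scores, invented for illustration
scores = {"A": 130, "B": 128, "C": 125, "D": 110, "E": 108, "F": 104}
groups = matched_assignment(list(scores), scores, n_groups=3)
# Each group ends up with one member of the top trio and one of the next trio,
# so the groups are roughly equal on the matched variable before the manipulation
```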
Articulate how a factorial design works.
-A design in which there are 2+ independent variables (factors). Researchers cross the two independent variables, studying each possible combination of the IVs.
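Crossing the IVs is just a Cartesian product of their levels. A small sketch, using invented IV names and levels (cell phone use and traffic, echoing the driving example later in these notes):

```python
from itertools import product

# Two hypothetical IVs, each with 2 levels
cell_phone = ["phone", "no phone"]
traffic = ["light", "heavy"]

# A 2x2 factorial design studies every combination of the IVs;
# each combination is one "cell" of the design
cells = list(product(cell_phone, traffic))
print(len(cells))  # a 2x2 design has 4 cells
```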
Explain the value of pattern and parsimony in research.
-Combine results from a variety of research questions -Lots of experiments done to answer the same question
Use the three causal criteria to analyze an experiment's ability to support a causal claim.
-Covariance: Is the causal variable related to the effect variable? Are distinct levels of the IV associated w/ different levels of the DV? -Temporal Precedence: Does the causal variable come before the effect variable? -Internal validity: Are there alternative explanations for the results?
Describe how the procedures for independent-groups and within-groups experiments are different. Explain the pros and cons of each type of design.
-Independent-groups: Different groups of participants are placed into different levels of the IV -Pros: No order effects, each participant's session takes less time -Cons: Requires more participants, can't use each person as their own control -Within-groups: There is only one group of participants and each person is presented w/ all levels of the IV -Pros: Ensures participants in the conditions will be equivalent, researchers have more power to notice differences b/t conditions, requires fewer participants -Cons: Order effects are possible, takes more of each participant's time
Describe interactions in terms of "it depends."
-Does the effect of the original independent variable depend on the level of another independent variable?
Describe random assignment and explain its role in establishing internal validity.
-Everyone has an equal chance of being put into any group, which spreads out individual differences -Desystematizes the types of participants who end up in each level of the IV
Interrogate two aspects of external validity for an experiment (generalization to other populations and to other settings).
-Generalization to other populations: Ask how the experiments recruited their participants. Ask about random sampling to know if the results can be generalized to other populations -Generalizing to other settings: Does the situation that was used in the experiment generalize to other situations in the real world?
Explain the basic logic of three-way factorial designs.
-If there is a 3-way interaction, it means the 2-way interactions are different, depending on the level of a third IV. *Ex: braking onset time & cell phone use; the 2-way interaction is affected by the level of traffic
Determine, from a graph, whether a study shows a three-way interaction.
-If a graph shows the 2-way interaction between two variables, a three-way interaction means that 2-way pattern changes across the levels of a third variable (e.g., across the panels of the graph)
Interpret key words that indicate factorial design language in a journal article.
-If they use a 2x2 design, 2x2x2 design, etc. Indicates the # of IVs in the study, as well as how many levels there were -In the Results section may use the term "significant" and have the notation of "p< .05" with an asterisk to indicate that a main effect or interaction is statistically significant.
Explain why experimenters usually prioritize internal validity over external validity when it is difficult to achieve both.
-Internal validity means ruling out alternative explanations: confounds, third variables, what's going on inside of the study. -External validity is about generalizing, & you can't generalize until you understand what's going on inside the study.
Interpret key words in popular press articles that indicate a factorial design.
-It depends -This variable depends on another variable
Consider why journalists might prefer to report single studies, rather than parsimonious patterns of data.
-It's easier to understand a single study and easier to write about one study instead of many studies
Describe at least two ways that a study might show inadequate variance between groups, and indicate how researchers can identify such problems.
-Measurement error *solns: use reliable & precise measurements & measure more instances -Individual differences obscuring the effect *solns: change the design (e.g., to within-groups), add more participants
Explain why large within-group variance can obscure a between-group difference.
-Not enough between-group difference, which results from weak manipulations, insensitive measures, ceiling or floor effects, or a design confound acting in reverse -Too much within-group variability, caused by measurement error, individual differences, or situation noise
Articulate the reasons that a study might result in null effects: not enough variance between groups, too much variance within groups, or a true null effect.
-Not enough between-group difference: ineffective manipulation, insensitive measures, ceiling or floor effects -Too much variability within groups: measurement error, indiv. differences, situation variability/noise -The IV really doesn't affect the DV: true null effect
Define measurement error
-Any factor that can inflate or deflate a person's true score on a dependent measure
Describe the difference between generalization mode, in which external validity is essential, and theory-testing mode, in which external validity is less important than internal validity and may not be important at all.
-Generalization mode: Researchers are careful to use probability samples with appropriate diversity of age, gender, etc. They focus on whether their samples are representative, whether the data from their sample apply to the population of interest, and even whether the data might apply to a new population of interest. Applied research tends to be done in generalization mode; frequency claims are always in generalization mode -Theory-testing mode: Researchers design studies that test a theory, leaving the generalization step for later studies, which will test whether the theory holds in a sample that is representative of another population
Identify interaction effects two ways: in a table and in a graph.
-Subtraction in a table: compute the differences down the columns (or across the rows); if the differences differ, there is an interaction -On a graph, the lines are not parallel (they cross or converge)
Explain why experiments are superior to multiple-regression designs for controlling for third variables.
-Experiments allow the experimenter to randomly assign people to groups, which increases internal validity -Multiple regression only controls for third variables you thought to measure
Explain two reasons to conduct a factorial study.
-They test whether an IV affects different kinds of people, or people in different situations, in the same way -Can be used to test theories. The best way to study how variables interact is to combine them in a factorial design & measure whether the results are consistent w/ the theory
Describe an interaction as a "difference in differences."
-Whether the effect of the original independent variable depends on the level of another independent variable -Subtraction on the table
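The "subtraction on the table" can be made concrete with made-up 2x2 cell means (the braking/cell-phone/traffic numbers below are invented for illustration):

```python
# Hypothetical 2x2 cell means: braking onset time (ms) by
# cell phone use (rows) and traffic level (columns)
cell_means = {
    ("no phone", "light"): 450, ("no phone", "heavy"): 460,
    ("phone",    "light"): 500, ("phone",    "heavy"): 580,
}

# Effect of cell phone use at each level of traffic (subtraction down a column)
diff_light = cell_means[("phone", "light")] - cell_means[("no phone", "light")]
diff_heavy = cell_means[("phone", "heavy")] - cell_means[("no phone", "heavy")]

# If the two differences differ, there is an interaction:
# the phone effect depends on the level of traffic
interaction = diff_heavy - diff_light
print(diff_light, diff_heavy, interaction)
```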
Identify effect size d and statistical significance and explain what they mean for an experiment.
-d takes into account the difference b/t means and the spread of scores within each group -When d is larger, it means the IV changed the DV for more of the participants in the study -When d is smaller, it means the scores of participants in the two experimental groups overlap more -.20 = small, .50 = medium, .80 = large effect
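A minimal sketch of Cohen's d with the Python stdlib, using invented scores for two groups (the pooled-SD formula is the standard one; the data are made up):

```python
from statistics import mean, stdev
from math import sqrt

def cohens_d(group1, group2):
    """Cohen's d: the difference between means divided by the pooled SD,
    so it reflects both the mean difference and the spread within groups."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = mean(group1), mean(group2)
    s1, s2 = stdev(group1), stdev(group2)
    pooled_sd = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Made-up scores for a treatment and a control group
treatment = [12, 14, 15, 13, 16]
control = [10, 11, 12, 10, 12]
d = cohens_d(treatment, control)
# Here d is well above .80, i.e., a large effect: little overlap between groups
```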
Interrogate the statistical validity of an association claim, asking about features of the data that might distort the meaning of the correlation coefficient, such as outliers in the scatterplot, effect size, and the possibility of restricted range (for a lower-than-expected correlation). When the correlation coefficient is zero, inspect the scatterplot to see if the relationship is curvilinear.
1. All associations are not equal; some are stronger than others. As you'll recall, the effect size describes the strength of the association. Larger effect sizes give more accurate predictions and are usually more important. 2. Statistical significance refers to the conclusion a researcher reaches regarding how likely it is they'd get a correlation of that size just by chance, assuming there is no correlation in the real world. Statistical inference is being able to infer that what happened in one sample can happen in the population. A statistically significant result is associated with a small probability (p) value, usually p < .05, meaning the result is unlikely to be due to chance alone. A nonsignificant result means the probability of getting that result by chance is high. Usually, the stronger the correlation, the larger the effect size. 3. Depending on where the outlier is in relation to the sample, it can have a strong effect on the correlation coefficient r. Outliers can be problematic for association claims because even one or two points can disproportionately affect the results. Like a seesaw: a point in the middle does not carry much weight, but a point at the extreme ends can determine the direction of the line. In a bivariate correlation, outliers are most problematic when they have extreme scores on both variables. Outliers matter the most when a sample is small, because then they can change the results more easily. 4. Restriction of range: if there is not a full range of scores on one of the variables, it can make the correlation appear smaller than it really is. 5. Curvilinear association: the relationship between two variables is not a straight line. For example, the relationship may be positive up to a point and then suddenly become negative.
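The outlier point above can be demonstrated numerically. A sketch with invented data, writing Pearson's r out with the stdlib only:

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson's r: sample covariance divided by the product of the SDs."""
    mx, my = mean(x), mean(y)
    n = len(x)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    return cov / (stdev(x) * stdev(y))

# Essentially unrelated scores (invented)...
x = [1, 2, 3, 4, 5]
y = [3, 1, 4, 2, 3]
r_before = pearson_r(x, y)  # close to zero

# ...plus one point that is extreme on BOTH variables
r_after = pearson_r(x + [20], y + [20])
# The single outlier drags r toward a strong positive correlation,
# especially because the sample is small
print(r_before, r_after)
```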
Review three threats to internal validity: design confounds, selection effects, and order effects.
1. Design confound: An alternative explanation for the results; a threat to internal validity that results from poor experimental design 2. Selection effect: Occurs in an independent-groups design when the kinds of participants at one level of the IV are systematically different from those at the other level 3. Order effect: An alternative explanation b/c of the order in which the levels of the variable are presented
Distinguish an association claim, which requires that a study meet only one of the three rules for causation (covariance), from a causal claim, which requires that the study also establish temporal precedence and internal validity.
1. Covariance: one variable is related to change in another variable; an association claim needs to establish only this. 2. Temporal precedence: determining that the cause happened before the effect, so A caused B. 3. Internal validity: the degree to which a study can rule out alternative explanations and assert causality; a simple association claim cannot establish it.
Identify three types of correlations in a longitudinal correlational design: cross-sectional correlations, autocorrelations, and cross-lag correlations.
1. Cross-sectional correlations test whether two variables measured at the same point in time are correlated. 2. Autocorrelations determine the correlation of one variable with itself, measured on two different occasions. 3. Cross-lag correlations show whether the earlier measure of one variable is associated with the later measure of the other variable.
Identify the following nine threats to internal validity: history, maturation, regression, attrition, testing, instrumentation, observer bias, demand characteristics, and placebo effects.
1. History- A historical or external event that affects most members of the treatment group at the same time as the treatment, making it unclear whether the change in the experimental group is caused by the treatment received or by the historical factor. 2. Maturation-A change in behavior that emerges more or less spontaneously over time. -Ex: Participants adapting to a situation 3. Regression-When a performance is extreme at Time 1, the next time that performance is measured (Time 2), it is likely to be less extreme-- closer to a typical range/average performance 4. Attrition-When only a certain kind of participant drops out of the experiment. 5. Testing-A specific kind of order effect; refers to a change in the participants as a result of taking a test more than once 6. Instrumentation-When a measuring instrument changes over time 7. Observer Bias-Occurs when a researcher's expectations influence their interpretation of the results. 8. Demand Characteristics-Participants guess what the study is supposed to be about and change their behavior in the expected direction 9. Placebo Effects-People receive a treatment & improve, but only because they believe they are receiving a valid treatment
Describe three techniques of nonrandom sampling: purposive, convenience, and snowball sampling.
1. Purposive sampling- If researcher want to study only certain kinds of people, they recruit only those particular participants. 2. Convenience sampling- using a sample of people who are readily available to participate. 3. Snowball sampling- in which participants are asked to recommend a few acquaintances for the study.
Explain five techniques for random sampling: simple random, multistage, cluster, stratified random sampling, and oversampling.
1. Simple random sampling- people are chosen randomly, e.g., names on tickets are placed into a hat and drawn. 2. Multistage sampling- two random sampling methods are used: first a random sample of clusters of people is chosen, then a random sample of people from each chosen cluster is picked. 3. Cluster sampling- clusters of participants are selected at random, then data are collected from all individuals in each cluster. 4. Stratified random sampling- another multistage method in which the researcher selects particular demographic categories on purpose and then randomly selects individuals within each of the categories. 5. Oversampling- a variation of the stratified method in which the researcher intentionally overrepresents one or more groups.
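Stratified random sampling can be sketched with the stdlib. The strata, population, and target counts below are invented; oversampling is just a larger-than-proportional target for one stratum:

```python
import random

def stratified_sample(population_by_stratum, n_per_stratum, seed=0):
    """Randomly sample n people from each demographic category (stratum)."""
    rng = random.Random(seed)
    sample = []
    for stratum, members in population_by_stratum.items():
        # Random selection without replacement within each category
        sample.extend(rng.sample(members, n_per_stratum[stratum]))
    return sample

# Hypothetical population split into two age strata
population = {
    "18-29": [f"P{i}" for i in range(100)],
    "30-49": [f"P{i}" for i in range(100, 250)],
}
# Oversampling a small group of interest would simply mean giving it a
# larger target than its share of the population
sample = stratified_sample(population, {"18-29": 10, "30-49": 15})
```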
Define three sampling problems that lead to biased samples.
1. Some members of the population are systematically left out, so the sample cannot be generalized to the population 2. Only people who volunteer to participate are included (self-selection) 3. Only people who are easy to access are sampled (convenience sampling)
observer bias
A bias that occurs when observers' expectations influence their interpretation of the participants' behaviors or the outcome of the study.
quota sampling
A biased sampling technique in which a researcher identifies subsets of the population of interest, sets a target number for each category in the sample, and nonrandomly selects individuals within each category until the quotas are filled.
purposive sampling
A biased sampling technique in which only certain kinds of people are included in a sample.
spurious association
A bivariate association that is attributable only to systematic mean differences on subgroups within the sample; the original association is not present within the subgroups.
statistical significance
A conclusion that a result from a sample (such as an association or a difference between groups) is so extreme that the sample is unlikely to have come from a population in which there is no association or no difference.
cell
A condition in an experiment; in a simple experiment, it can represent the level of one independent variable; in a factorial design it represents one of the possible combinations of two independent variables.
placebo group
A control group that is exposed to an inert treatment (e.g., a sugar pill). Also called placebo control group.
null effect
A finding that an independent variable did not make a difference in the dependent variable; there is no significant covariance between the two. Also called null result.
stratified random sampling
A form of probability sampling; a random sampling technique in which the researcher identifies particular demographic categories of interest and then randomly selects individuals within each category.
oversampling
A form of probability sampling; a variation of stratified random sampling in which the researcher intentionally overrepresents one or more groups.
self-selection
A form of sampling bias that occurs when a sample contains only people who volunteer to participate.
Latin square
A formal system of partial counterbalancing that ensures that each condition in a within-groups design appears in each position at least once.
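A simple cyclic Latin square (a sketch of the basic idea, not the balanced variant some texts describe) can be built by rotating the condition list one step per row:

```python
def latin_square(conditions):
    """Each row is one condition order for a subset of participants.
    With cyclic rotation, each condition appears in each serial
    position exactly once across the rows."""
    n = len(conditions)
    return [conditions[i:] + conditions[:i] for i in range(n)]

# Four hypothetical within-groups conditions
orders = latin_square(["A", "B", "C", "D"])
# orders[0] == ["A", "B", "C", "D"], orders[1] == ["B", "C", "D", "A"], ...
```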
confound
A general term for a potential alternative explanation for a research finding (a threat to internal validity)
comparison group
A group in an experiment whose levels on the independent variable differ from those of the treatment group in some intended and meaningful way. Also called comparison condition.
population
A larger group from which a sample is drawn; the group to which a study's conclusions are intended to be applied. Also called population of interest.
control group
A level of an independent variable that is intended to represent "no treatment" or a neutral condition.
Explain how longitudinal designs are conducted.
A longitudinal study can provide evidence for temporal precedence by measuring the same variables in the same people at several points in time.
full counterbalancing
A method of counterbalancing in which all possible condition orders are represented. See also counterbalancing, partial counterbalancing.
partial counterbalancing
A method of counterbalancing in which some, but not all, of the possible condition orders are represented. See also counterbalancing, full counterbalancing.
control variable
A potential variable that an experimenter holds constant on purpose.
cluster sampling
A probability sampling technique in which clusters of participants within the population of interest are selected at random, followed by data collection from all individuals in each cluster
systematic sampling
A probability sampling technique in which the researcher counts off members of a population to achieve a sample, using a randomly chosen interval (e.g., every nth person, where n is a randomly selected number).
multistage sampling
A probability sampling technique involving at least two stages: a random sample of clusters followed by a random sample of people within the selected clusters
field setting
A real-world setting for a research study.
conceptual replication
A replication study in which researchers examine the same research question (the same conceptual variables) but use different procedures for operationalizing the variables. See also direct replication, replication-plus-extension.
direct replication
A replication study in which researchers repeat the original study as closely as possible to see whether the original effect shows up in the newly collected data. Also called exact replication. See also conceptual replication, replication-plus-extension.
replication-plus-extension
A replication study in which researchers replicate their original study but add variables or conditions that test additional questions. See also conceptual replication, direct replication.
theory-testing mode
A researcher's intent for a study, testing association claims or causal claims to investigate support for a theory. See also generalization mode.
placebo effect
A response or effect that occurs when people receiving an experimental treatment experience a change only because they believe they are receiving a valid treatment.
interaction effect
A result from a factorial design, in which the difference in the levels of one independent variable changes, depending on the level of the other independent variable; a difference in differences. Also called interaction.
representative sample
A sample in which all members of the population of interest are equally likely to be included (usually through some random method), and therefore the results can generalize to the population of interest. Also called unbiased sample.
biased sample
A sample in which some members of the population of interest are systematically left out, and as a consequence, the results from the sample cannot generalize to the population of interest. Also called unrepresentative sample.
outlier
A score that stands out as either much higher or much lower than most of the other scores in a sample.
scientific literature
A series of related studies, conducted by various researchers, that have tested similar variables. Also called literature.
census
A set of observations that contains all members of the population of interest.
third-variable problem
A situation in which a plausible alternative explanation exists for the association between two variables. See also internal validity.
directionality problem
A situation in which it is unclear which variable in an association came first.
restriction of range
A situation involving a bivariate correlation, in which there is not a full range of possible scores on one of the variables in the association, so the relationship from the sample underestimates the true correlation.
multiple regression
A statistical technique that computes the relationship between a predictor variable and a criterion variable, controlling for other predictor variables. Also called multivariate regression.
t test
A statistical test used to evaluate the size and significance of the difference between two means.
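A sketch of the independent-samples t statistic (pooled-variance form) with the stdlib; the group scores are invented for illustration:

```python
from statistics import mean, stdev
from math import sqrt

def t_statistic(group1, group2):
    """Independent-samples t: the mean difference divided by its
    standard error, using the pooled variance of the two groups."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = mean(group1), mean(group2)
    v1, v2 = stdev(group1)**2, stdev(group2)**2
    pooled_var = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    se = sqrt(pooled_var * (1 / n1 + 1 / n2))
    return (m1 - m2) / se

# Made-up scores for two experimental groups
t = t_statistic([12, 14, 15, 13, 16], [10, 11, 12, 10, 12])
# The t value is then compared to a critical value (or converted to a
# p value) to judge whether the difference is statistically significant
```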
pilot study
A study completed before (or sometimes after) the study of primary interest, usually to test the effectiveness or characteristics of the manipulations.
masked design
A study design in which the observers are unaware of the experimental conditions to which participants have been assigned. Also called blind design.
multivariate design
A study designed to test an association involving more than two measured variables
double-blind study
A study in which neither the participants nor the researchers who evaluate them know who is in the treatment group and who is in the comparison group.
experiment
A study in which one variable is manipulated and the other is measured.
longitudinal design
A study in which the same variables are measured in the same people at different points in time
factorial design
A study in which there are two or more independent variables, or factors.
double-blind placebo control study
A study that uses a treatment group and a placebo group and in which neither the research staff nor the participants know who is in which group.
cultural psychology
A subdiscipline of psychology concerned with how cultural settings shape a person's thoughts, feelings, and behavior, and how these in turn shape cultural settings.
moderator
A third variable that, depending on its level, changes the relationship between two other variables
design confound
A threat to internal validity in an experiment in which a second variable happens to vary systematically along with the independent variable and therefore is an alternative explanation for the results.
selection-history threat
A threat to internal validity in which a historical or seasonal event systematically affects only the subjects in the treatment group or only those in the comparison group, not both.
selection-attrition threat
A threat to internal validity in which members are likely to drop out of either the treatment group or the comparison group, not both.
regression threat
A threat to internal validity related to regression to the mean, a phenomenon in which any extreme finding is likely to be closer to its own typical, or mean, level the next time it is measured (with or without the experimental treatment or intervention).
selection effect
A threat to internal validity that occurs in an independent-groups design when the kinds of participants at one level of the independent variable are systematically different from those at the other level.
instrumentation threat
A threat to internal validity that occurs when a measuring instrument changes over time from having been used before. Also called instrument decay.
maturation threat
A threat to internal validity that occurs when an observed change in an experimental group could have emerged more or less spontaneously over time.
history threat
A threat to internal validity that occurs when it is unclear whether a change in the treatment group is caused by the treatment or by a historical factor or event that affects everyone or almost everyone in the group.
demand characteristic
A threat to internal validity that occurs when some cue leads participants to guess a study's hypotheses or goals. Also called experimental demand.
practice effect
A type of order effect in which people's performance improves over time because they become practiced at the dependent measure (not because of the manipulation or treatment). See also fatigue effect, order effect, testing threat.
carryover effect
A type of order effect, in which some form of contamination carries over from one condition to the next.
manipulated variable
A variable in an experiment that a researcher controls, such as by assigning participants to its different levels (values). See also measured variable.
measured variable
A variable in an experiment whose levels (values) are observed and recorded. See also manipulated variable.
predictor variable
A variable in multiple-regression analysis that is used to explain variance in the criterion variable. Also called independent variable.
participant variable
A variable such as age, gender, or ethnicity whose levels are selected (i.e., measured), not manipulated.
mediator
A variable that helps explain the relationship between two other variables. Also called mediating variable.
Distinguish measured from manipulated variables in a study.
-Manipulated variable: a variable that is controlled, such as when the researchers assign participants to a particular level of the variable (the IV) -Measured variable: takes the form of records of behavior or attitudes (the DV)
independent variable
A variable that is manipulated in an experiment. In a multiple-regression analysis, a predictor variable used to explain variance in the criterion variable. See also dependent variable.
snowball sampling
A variation on purposive sampling, a biased sampling technique in which participants are asked to recommend acquaintances for the study.
meta-analysis
A way of mathematically averaging the effect sizes of all the studies that have tested the same variables to see what conclusion that whole body of evidence supports.
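A heavily simplified sketch of the averaging step: weight each study's d by its sample size (invented numbers; real meta-analyses typically use inverse-variance weights and check for heterogeneity):

```python
def meta_analytic_d(studies):
    """studies: list of (effect_size_d, n) pairs from studies that tested
    the same variables. Returns the sample-size-weighted mean d."""
    total_n = sum(n for _, n in studies)
    return sum(d * n for d, n in studies) / total_n

# Made-up literature on a single research question
studies = [(0.50, 40), (0.30, 100), (0.80, 20)]
overall_d = meta_analytic_d(studies)
# Larger studies pull the overall estimate toward their effect sizes
```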
Explain why control variables can help an experimenter eliminate design confounds.
Control variables allow researchers to separate one potential cause from another and eliminate alternative explanations for results; they are important for establishing internal validity
mean
An arithmetic average; a measure of central tendency computed from the sum of all the scores in a set of data, divided by the total number of scores.
curvilinear association
An association between two variables which is not a straight line; instead, as one variable increases, the level of the other variable increases and then decreases (or vice versa). Also called curvilinear correlation. See also positive association, negative association, zero association.
Interrogate the construct validity of an association claim, asking whether the measurement of each variable was reliable and valid.
An association claim describes the relationship between two measured variables, so ask how well each of the two variables was measured.
bivariate correlation
An association that involves exactly two variables. Also called bivariate association.
one-group, pretest/posttest design
An experiment in which a researcher recruits one group of participants; measures them on a pretest; exposes them to a treatment, intervention, or change; and then measures them on a posttest.
concurrent-measures design
An experiment using a within-groups design in which participants are exposed to all the levels of an independent variable at roughly the same time, and a single attitudinal or behavioral preference is the dependent variable.
repeated-measures design
An experiment using a within-groups design in which participants respond to a dependent variable more than once, after exposure to each level of the independent variable.
posttest-only design
An experiment using an independent-groups design in which participants are tested on the dependent variable only once. Also called equivalent groups, posttest-only design.
pretest/posttest design
An experiment using an independent-groups design in which participants are tested on the key dependent variable twice: once before and once after exposure to the independent variable.
independent-groups design
An experimental design in which different groups of participants are exposed to different levels of the independent variable, such that each participant experiences only one level of the independent variable. Also called between-subjects design or between-groups design.
within-groups design
An experimental design in which each participant is presented with all levels of the independent variable. Also called within-subjects design.
ceiling effect
An experimental design problem in which independent variable groups score almost the same on a dependent variable, such that all scores fall at the high end of their possible distribution. See also floor effect.
floor effect
An experimental design problem in which independent variable groups score almost the same on a dependent variable, such that all scores fall at the low end of their possible distribution. See also ceiling effect.
matched groups
An experimental design technique in which participants who are similar on some measured variable are grouped into sets; the members of each matched set are then randomly assigned to different experimental conditions. Also called matching.
control variables
Any variable that an experimenter holds constant on purpose
Explain why representative samples may be especially important for many frequency claims.
Because frequency claims are about how often something happens in a population, a representative sample is needed so the results can generalize to that population (external validity).
convenience sampling
Choosing a sample based on those who are easiest to access and readily available; a biased sampling technique.
Understand how the correlation coefficient, r, represents strength and direction of a relationship between two quantitative variables.
Pearson product-moment correlation coefficient, also known as r or Pearson's r: a measure of the strength and direction of the linear relationship between two variables, defined as the (sample) covariance of the variables divided by the product of their (sample) standard deviations.
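The definition can be instantiated directly with the stdlib; the paired scores below are invented:

```python
from statistics import mean, stdev

# Invented paired scores on two quantitative variables
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 7]

n = len(x)
mx, my = mean(x), mean(y)
# Sample covariance of x and y
cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
# r = covariance / (sd_x * sd_y): the sign gives the direction of the
# relationship, the magnitude (0 to 1) gives its strength
r = cov / (stdev(x) * stdev(y))
print(r)
```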
Define dependent variables and predictor variables in the context of multiple-regression data.
Dependent variable (or criterion variable): the variable one wants to measure or understand; usually specified in the top row of a regression table. Predictor variables (or independent variables): the variables used to explain variance in the criterion variable; usually found listed below the criterion variable.
Interrogate the construct validity of a manipulated variable in an experiment, and explain the role of manipulation checks and theory testing in establishing construct validity.
-A manipulation check is an extra DV inserted to help the researchers quantify how well an experimental manipulation worked -Used to collect empirical data on the construct validity of the IVs -Theory testing also supports construct validity: when results come out as the theory predicts, it suggests the manipulation captured the intended construct
control for
Holding a potential third variable at a constant level while investigating the association between two other variables.
Give examples of how external validity applies both to other participants and to other settings.
If a study is intended to generalize to some population, the researchers must draw a probability sample from that population. If a study uses a convenience sample, you cannot be sure of the study's generalizability to the population the researcher intends. Conceptual replications illustrate this aspect of external validity: when researchers extended the studies of serving container size to chips, popcorn, and soup, as well as pasta, it showed that the effect of a large serving container generalizes from one setting to another.
marginal means
In a factorial design, the arithmetic means for each level of an independent variable, averaging over the levels of another independent variable.
main effect
In a factorial design, the overall effect of one independent variable on the dependent variable, averaging over the levels of the other independent variable.
cross-lag correlation
In a longitudinal design, a correlation between an earlier measure of one variable and a later measure of another variable.
cross-sectional correlation
In a longitudinal design, a correlation between two variables that are measured at the same time.
autocorrelation
In a longitudinal design, the correlation of one variable with itself, measured at two different times.
attrition threat
In a repeated-measures design or quasi-experiment, a threat to internal validity that occurs when a systematic type of participant drops out of a study before it ends.
testing threat
In a repeated-measures experiment or quasi-experiment, a kind of order effect in which scores change over time just because participants have taken the test more than once; includes practice effects and fatigue effects.
order effect
In a within-groups design, a threat to internal validity in which exposure to one condition changes participants' responses to a later condition. See also carryover effect, practice effect, fatigue effect, testing threat.
manipulation check
In an experiment, an extra dependent variable researchers can include to determine how well an experimental manipulation worked.
counterbalancing
In an experiment, presenting the levels of the independent variable to participants in different sequences to control for order effects. See also full counterbalancing, partial counterbalancing.
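As a minimal sketch (condition names invented), full counterbalancing enumerates every possible presentation order, and participants are split evenly across those sequences:

```python
from itertools import permutations

# Three hypothetical conditions of a within-groups experiment.
conditions = ["A", "B", "C"]

# Full counterbalancing: all 3! = 6 possible presentation orders.
orders = list(permutations(conditions))
for order in orders:
    print(order)
```

With more conditions, the number of orders grows factorially, which is why researchers often fall back on partial counterbalancing (a subset of orders, such as a Latin square).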
systematic variability
In an experiment, the levels of a variable coinciding in some predictable way with experimental group membership, creating a potential confound. See also unsystematic variability.
dependent variable
In an experiment, the variable that is measured. In a multiple-regression analysis, the single outcome, or criterion variable, the researchers are most interested in understanding or predicting. Also called outcome variable. See also independent variable.
unsystematic variability
In an experiment, when levels of a variable fluctuate independently of experimental group membership, contributing to variability within groups. See also systematic variability.
Interrogate the external validity of an association claim by asking to whom the association can generalize.
Interrogating the external validity of an association claim involves asking whether the sample can generalize to some population. If a correlational study does not use a random sample of people or contexts, the results may not generalize to the population from which the sample was taken.
Articulate the difference between mediators, third variables, and moderating variables.
Moderators: variables, such as age or gender, that define subgroups for which the strength of an association differs. Third variables: variables external to the study that provide alternative explanations for a researcher's findings. Mediators: variables in a study that drive the correlation. -Ex: For the correlation b/t FB use and GPA, a mediator would be less time spent studying.
condition
One of the levels of the independent variable in an experiment.
Explain the difference between concurrent-measures and repeated-measures designs.
Concurrent-measures design: participants are exposed to all the levels of an IV at roughly the same time, and a single attitudinal or behavioral preference is the DV (think of the example where babies were shown pictures of two different people at the same time). Repeated-measures design: a type of within-groups design in which participants are measured on the DV more than once, after exposure to each level of the IV.
Identify posttest-only and pretest/posttest designs, and explain when researchers might use each one.
Pretest/posttest design: participants are randomly assigned to at least 2 groups and are tested on the DV twice, once before and once after exposure to the IV. -Might be used when researchers want to evaluate whether random assignment made the groups equal. Posttest-only design: participants are randomly assigned to independent-variable groups and are tested on the DV once. -May be used when researchers are testing for covariance by detecting differences in the dependent variable.
replicable
Pertaining to a study whose results have been obtained again when the study was repeated.
Explain why it is more important to ask how a sample was collected rather than how large the sample is.
Random samples are crucial when researchers are estimating the frequency of a particular opinion, condition, or behavior in a population. Nonrandom samples can occasionally be appropriate when the source of bias is not relevant to the survey topic. Representative samples may be less important for association and causal claims.
Understand why a random sample is more likely to be a representative sample and why representative samples have external validity to a particular population.
A random sample is more likely to be representative because everyone in the population has an equal chance of being selected for the study. It has stronger external validity because generalizations to the population are justified when the sample was selected by a random method, e.g., drawing names out of a hat or using a random-digit phone dialer to pick individuals.
Interpret different possible outcomes in cross-lag correlations, and make a causal inference from each pattern.
Cross-lag correlations show whether the earlier measure of one variable is associated with the later measure of the other variable; the two cross-lag correlations thus address the directionality problem and help establish temporal precedence. EX: Is aggression in 3rd grade associated with TV violence in 13th grade? And vice versa, is TV violence in 3rd grade associated with aggression in 13th grade? Possible patterns: -Both cross-lag correlations are significant (the variables influence each other) -Only the first is significant (the first variable led to the second) -Only the second is significant (the second variable led to the first)
Estimate marginal means in a factorial design to look at main effects.
The arithmetic means for each level of an IV, averaging over the levels of the other IV. -If there is a difference between the marginal means of an IV, that IV has a main effect.
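The estimation above can be sketched with invented cell means for a 2x2 factorial design:

```python
# Invented cell means: rows are levels of IV1, columns are levels of IV2.
cell_means = [
    [10.0, 14.0],  # IV1 level 1
    [12.0, 16.0],  # IV1 level 2
]

# Marginal means for IV1: average each row across the levels of IV2.
iv1_marginals = [sum(row) / len(row) for row in cell_means]
# Marginal means for IV2: average each column across the levels of IV1.
iv2_marginals = [sum(col) / len(col) for col in zip(*cell_means)]

print(iv1_marginals)  # [12.0, 14.0] -> difference suggests a main effect of IV1
print(iv2_marginals)  # [11.0, 15.0] -> difference suggests a main effect of IV2
```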
Consider times when an unrepresentative sample may be appropriate for a frequency claim.
An unrepresentative sample may be acceptable when the likely sources of sampling bias are not relevant to the variable being measured, or when generalizing to a population is not the researcher's goal. Otherwise, interrogate the sampling technique by asking how the researchers obtained the sample; only a random (probability) sampling technique justifies confidence in the external validity of the results.
parsimony
The degree to which a theory provides the simplest explanation of some phenomenon. In the context of investigating a claim, the simplest explanation of a pattern of data; the best explanation that requires making the fewest exceptions or qualifications.
measurement error
The degree to which the recorded measure for a participant on some variable differs from the true value of the variable for that participant. Measurement errors may be random, inflating or deflating true scores haphazardly so that they cancel out over a sample, or they may be systematic, in which case they result in biased measurement.
experimental realism
The extent to which a laboratory experiment is designed so that participants experience authentic emotions, motivations, and behaviors.
ecological validity
The extent to which the tasks and manipulations of a study are similar to real-world contexts. Also called mundane realism.
sample
The group of people, animals, or cases used in a study; a subset of the population of interest.
file drawer problem
The idea that reviews and meta-analyses of published literature might overestimate the support for a theory, because studies finding null effects are less likely to be published than studies finding significant results, and are thus less likely to be included in such reviews.
generalization mode
The intent of researchers to generalize the findings from the samples and procedures in their study to other populations or contexts. See also theory-testing mode.
power
The likelihood that a study will show a statistically significant result when some effect is truly present in the population; the probability of not making a Type II error when the null hypothesis is false.
effect size
The magnitude of a relationship between two or more variables.
simple random sampling
The most basic form of probability sampling, in which the sample is chosen completely at random from the population of interest (e.g., drawing names out of a hat).
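A minimal sketch of the "names out of a hat" idea (hypothetical population of 100 numbered members):

```python
import random

random.seed(0)  # fixed seed just to make this sketch reproducible

# Hypothetical population: 100 members identified by number.
population = list(range(100))

# Simple random sampling: every member has an equal chance of selection,
# and no member can be drawn twice.
sample = random.sample(population, k=10)
print(sample)
```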
treatment group
The participants in an experiment who are exposed to the level of the independent variable that involves a medication, therapy, or intervention.
probability sampling
The process of drawing a sample from a population of interest in such a way that each member of the population has an equal chance of being included in the sample, usually via random selection. Also called random sampling.
noise
The unsystematic variability among the members of a group in an experiment. Also called error variance, unsystematic variance.
random assignment
The use of a random method (e.g., flipping a coin) to assign participants into different experimental groups.
criterion variable
The variable in a multiple-regression analysis that the researchers are most interested in understanding or predicting. Also called dependent variable.
Analyze a correlational study in which at least one variable is categorical by looking at a bar graph and computing the difference between the two means.
To analyze a correlational study in which one variable is categorical, look at a bar graph: each person is not represented by one data point; instead, the graph shows the mean of the quantitative variable for each category. Then compute the difference between the group means to judge how the categories vary from one another.
situation noise
Unrelated events, sounds, or distractions in the external environment that create unsystematic variability within groups in an experiment.
Describe counterbalancing, and explain its role in the internal validity of a within-groups design.
When researchers present the levels of the IV to participants in different orders. -Split participants into groups and each group receives one of the condition sequences -Helps get rid of order effects, which is a part of internal validity.
Explain how multiple-regression designs are conducted.
A statistical technique that can help rule out some third variables and thus address questions of internal validity. Multiple regression is the method of choice when the researcher believes several predictor variables together predict the value of a criterion (dependent) variable; the analysis measures the degree to which each predictor contributes to the prediction. -Enter all the predictor variables that might have an effect -Measure all the variables -Use each predictor's beta, which reflects its relationship with the criterion when the other predictors are statistically controlled
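A rough sketch of the idea, using simulated (invented) data echoing the FB/GPA example from this guide: entering both predictors together lets each beta reflect one predictor with the other statistically controlled.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated data: study time drives GPA; Facebook time is correlated
# with study time but has no direct effect on GPA (a third-variable setup).
study_hours = rng.normal(10, 2, n)
facebook_hours = 15 - 0.8 * study_hours + rng.normal(0, 1, n)
gpa = 2.0 + 0.1 * study_hours + rng.normal(0, 0.1, n)

# Design matrix: intercept column plus both predictors entered together.
X = np.column_stack([np.ones(n), facebook_hours, study_hours])
betas, *_ = np.linalg.lstsq(X, gpa, rcond=None)

# betas[1] (Facebook) should be near 0 once study time is controlled;
# betas[2] (study time) should be near its true value of 0.1.
print(np.round(betas, 3))
```

Even though Facebook time correlates with GPA on its own here, its beta shrinks toward zero once study time is in the model, which is how regression helps rule out a third variable.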
Explain what a meta-analysis does and what it has in common with direct and conceptual replication.
A way of mathematically averaging the results of all the studies that have tested the same variables to see what conclusion the whole body of evidence supports. Like direct and conceptual replications, a meta-analysis focuses on the same variables across studies: it combines direct replications and conceptual replications that examine the same constructs.
Describe the differences among direct replication studies, conceptual replication studies, and replication-plus-extension studies.
direct replication- researchers repeat an original study as closely as they can, to see whether the original effect shows up in the newly collected data. conceptual replication- Researchers study the same research question but use different procedures. At the abstract level, the variables in the study are the same, but the procedures for operationalizing the variables are different. replication-plus-extension- Researchers replicate their original study but add variables to test additional questions.
Estimate results from a correlational study with two quantitative variables by looking at a scatterplot.
An association involving exactly two quantitative variables can be estimated from a scatterplot: plot the first measured variable on one axis and the second on the other, then look at the trend of the points to judge whether the relationship is positive, negative, or absent.
Articulate the mission of cultural psychology: to encourage researchers to test their theories in other cultural contexts (that is, to generalize to other cultures).
The mission of cultural psychology is to encourage researchers to test their theories in other cultural contexts. Research has shown that many theories are supported in some cultural contexts but not in others, so it is important to test theories in different cultures.