Research Methods

Mode

The score that occurred most often; the most frequent score. For example, 2 is the mode of the following data set: 2, 2, 2, 6, 10, 50.
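
A minimal Python sketch of this idea, using the example data set from the definition (the variable name is just for illustration):

```python
from statistics import mode

scores = [2, 2, 2, 6, 10, 50]   # example data set from the definition
print(mode(scores))             # -> 2, the most frequent score
```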

Functional relationship

The shape of a relationship. Depending on the functional relationship between the independent and dependent variable, a graph of the relationship might look like a straight line or might look like a U, an S, or some other shape.

AB design

The simplest single-n design, consisting of measuring the participant's behavior at baseline (A) and then measuring the participant after the participant has received the treatment (B).

Pretest-posttest design

A before-after design in which each participant is given the pretest, administered the treatment, then given the posttest.

Normal curve

A bell-shaped, symmetrical frequency distribution that has its center at the mean.

Social desirability bias

A bias resulting from participants giving responses that make them look good rather than giving honest responses.

Order (trial) effects

A big problem with within-subjects designs. The order in which the participant receives a treatment (first, second, etc.) will affect how participants behave.

IRB

A committee of at least five members, one of whom must be a nonscientist, that reviews proposed research in an effort to protect research participants.

experimental design

A design in which (a) a treatment manipulation is administered and (b) that manipulation is the only variable that systematically varies between treatment conditions.

Blocked design

A factorial design in which, to boost power, participants are first divided into groups (blocks) on a subject variable (e.g., low-IQ block and high-IQ block). Then, participants from each block are randomly assigned to an experimental condition. Ideally, a blocked design will be more powerful than a simple, between-subjects design.

Placebo treatment

A fake treatment that we know has no effect, except through the power of suggestion. It allows experimenters to see if the treatment has an effect beyond that of suggestion. For example, in medical experiments, participants who are given placebos (pills that do not contain a drug) may be compared to participants who are given pills that contain the new drug.

operational definition

A publicly observable way to measure or manipulate a variable; a recipe for how you are going to measure or manipulate your factors.

Time-series design

A quasi-experimental design in which a series of observations are taken from a group of participants before and after they receive treatment. Because it uses many times of measurement, it is an improvement over the pretest-posttest design. However, it is still extremely vulnerable to history effects.

Nonequivalent control-group design

A quasi-experimental design that, like a simple experiment, has a treatment group and a no-treatment comparison group. However, unlike the simple experiment, random assignment does not determine which participants get the treatment and which do not.

Manipulation check

A question or set of questions designed to determine whether participants perceived the manipulation in the way that the researcher intended.

Nominal-dichotomous item

A question that presents participants with only two, usually very different, options (e.g., "Are you for or against animal research?"). Such questions are often yes/no questions and often ask the participant to classify herself or himself into one of two different categories.

Self-administered questionnaire

A questionnaire filled out in the absence of an investigator.

95% confidence interval

A range in which the parameter you are estimating (usually the population mean) falls 95% of the time.
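
A rough Python sketch of one common way to build such an interval, assuming the sample is large enough that the normal approximation (mean ± 1.96 standard errors) is good enough; with small samples a t-based multiplier would be used instead. The scores are invented:

```python
from statistics import mean, stdev
from math import sqrt

scores = [72, 85, 90, 68, 77, 95, 81, 74, 88, 79]   # made-up sample
m = mean(scores)
se = stdev(scores) / sqrt(len(scores))              # standard error of the mean
print(f"95% CI: [{m - 1.96 * se:.1f}, {m + 1.96 * se:.1f}]")
```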

Linear relationship

A relationship between an independent and dependent variable that is graphically represented by a straight line.

Positive correlation

A relationship between two variables in which the two variables tend to vary together: when one increases, the other tends to increase. (For example, height and weight have a positive correlation: The taller one is, the more one tends to weigh; the shorter one is, the less one tends to weigh.)

Quadratic relationship

A relationship on a graph shaped like a U or an upside down U.

Dependent groups t test

A statistical test used with interval or ratio data to test differences between two conditions on a single dependent variable. Differs from the between-groups t test in that it is to be used only when you are getting two scores from each participant (within-subjects design) or when you are using a matched-pairs design.

Chi-square (χ²) test

A statistical test you can use to determine whether two or more variables are related. Best used when you have nominal data.
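
A hand-rolled sketch of the chi-square test of independence for a made-up 2 × 2 table of counts; the resulting statistic would be compared to a critical value for the appropriate degrees of freedom:

```python
observed = [[30, 20],    # e.g., men:   yes / no
            [10, 40]]    # e.g., women: yes / no

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

chi_square = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi_square += (obs - expected) ** 2 / expected

print(f"chi-square = {chi_square:.2f} with df = 1")
```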

Double-blind technique

A strategy for improving construct validity that involves making sure that neither the participants nor the people who have direct contact with the participants know what type of treatment the participants have received.

Blind (also called masked)

A strategy of making the participant or researcher unaware of which experimental condition the participant is in.

Simple experiment

A study in which participants are independently and randomly assigned to one of two groups, usually to either a treatment group or to a no-treatment group. It is the easiest way to establish that a treatment causes an effect.

Exploratory study

A study investigating (exploring) a new area of research. Unlike replications, an exploratory study does not follow directly from an existing study.

experiment

A study that allows researchers to disentangle treatment effects from natural differences between groups, usually by randomly assigning participants to treatment groups. In medicine, such studies may be called controlled clinical trials or randomized clinical trials.

Conceptual replication

A study that is based on the original study but uses different methods to assess the true relationships between the treatment and dependent variables better. In a conceptual replication, you might use a different manipulation or a different measure.

Quasi-experiment

A study that resembles an experiment except that random assignment played no role in determining which participants got which level of treatment. Usually, quasi-experiments have less internal validity than experiments.

Systematic replication

A study that varies from the original study only in some minor aspect. For example, a systematic replication may use more participants, more standardized procedures, more levels of the independent variable, or a more realistic setting than the original study.

Interview

A survey in which the researcher orally asks questions.

Naturalistic observation

A technique of observing events as they occur in their natural setting.

Laboratory observation

A technique of observing participants in a laboratory setting.

hypothesis

A testable prediction about the relationship between two or more variables.

p < .05 level

A traditional significance level; if the variables are unrelated, results significant at this level would occur less than 5 times out of 100. Traditionally, results that are significant at the p < .05 level are considered statistically reliable and thus replicable.

Post hoc trend analysis

A type of post hoc test designed to determine whether a linear or curvilinear relationship is statistically significant (reliable).

Trend analysis

A type of post hoc test designed to determine whether a linear or curvilinear relationship is statistically significant (reliable). See post hoc trend analysis.

Test-retest reliability

A way of assessing the amount of random error in a measure by administering the measure to participants at two different times and then correlating their results. If the measure is free of random error, scores on the retest should be highly correlated with scores on the original test.

known-groups technique

A way of making the case for your measure's convergent validity that involves seeing whether groups known to differ on the characteristic you are trying to measure also differ on your measure (e.g., ministers should differ from atheists on an alleged measure of religiosity).

Questionnaire

A written survey instrument.

Coefficient of determination (r² or η²)

The square of the correlation coefficient; tells the degree to which knowing one variable helps to know another. This measure of effect size can range from 0 (knowing a participant's score on one variable tells you absolutely nothing about the participant's score on the second variable) to 1.00 (knowing a participant's score on one variable tells you the participant's exact score on the second variable). A coefficient of determination of .09 is considered medium, and a coefficient of determination of .25 is considered large.

Sensitization

After getting several different treatments and performing the dependent variable task several times, participants may realize (become sensitive to) what the hypothesis is. Sensitization is a problem in within-subjects designs.

Mean

An average calculated by adding up all the scores and then dividing by the number of scores.

Eta squared (η²)

An estimate of effect size that ranges from 0 to 1 and is comparable to r-squared.

Within-groups variance (mean square within, mean square error, error variance)

An estimate of the amount of random error in your data. The bottom half of the F ratio in a between-subjects analysis of variance.

Factorial experiment

An experiment that examines two or more independent variables (factors) at a time.

Within-subjects design (repeated-measures design)

An experimental design in which each participant is tested under more than one level of the independent variable. The sequence in which the participants receive the treatments is usually randomly determined. See also randomized within-subjects design and counterbalanced within-subjects designs.

Repeated-measures design

An experimental design in which each participant is tested under more than one level of the independent variable. The sequence in which the participants receive the treatments is usually randomly determined. See within-subjects design.

Matched-pairs design

An experimental design in which the participants are paired off by matching them on some variable assumed to be correlated with the dependent variable. Then, for each matched pair, one member is randomly assigned to one treatment condition, and the other gets the other treatment condition. This design usually has more power than a simple, between-groups experiment.

Mixed design

An experimental design that has at least one within-subjects factor and one between-subjects factor.

Archival data

Data from existing records and public archives.

interval scale data

Data that give you numbers that can be meaningfully ordered along a scale (from lowest to highest) and in which equal numerical intervals represent equal psychological intervals. That is, the difference between scoring a 2 and a 1 and the difference between scoring a 7 and a 6 are the same not only in terms of scores (both are a difference of 1), but also in terms of the actual psychological characteristic being measured. Interval scale measures allow us to compare participants in terms of how much of a quality participants have, and in terms of how much more of a quality one group may have than another.

Fatigue effect

Decreased participant performance on a task due to participants being tired or less enthusiastic as a study continues. In a within-subjects design, this decrease in performance might be incorrectly attributed to a treatment.

Counterbalanced within-subjects design

Design that gives participants the treatments in different sequences. These designs balance out routine order effects.
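
A small sketch of how one might hand out counterbalanced sequences: each possible order of three hypothetical treatments (A, B, C) is used equally often across participants:

```python
from itertools import permutations, cycle

treatments = ["A", "B", "C"]
orders = list(permutations(treatments))          # all 6 possible sequences
participants = [f"Participant {i}" for i in range(1, 13)]

for person, order in zip(participants, cycle(orders)):
    print(person, "->", " then ".join(order))    # each sequence used twice
```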

Single-subject design

Design that tries to establish causality by studying a single participant and arguing that the covariation between treatment and changes in behavior could not be due to anything other than the treatment. A key to this approach is to prevent factors other than the treatment from varying. Single-n designs are common in operant conditioning and psychophysical research. See also AB design, ABA reversal design, multiple-baseline design.

Single-n designs

Design that tries to establish causality by studying a single participant and arguing that the covariation between treatment and changes in behavior could not be due to anything other than the treatment. A key to this approach is to prevent factors other than the treatment from varying. Single-n designs are common in operant conditioning and psychophysical research. See single-subject design.

History

Events in the environment, other than the treatment, that have changed. Differences between conditions that may seem to be due to the treatment may really be due to history.

experimenter bias

Experimenters being more attentive to participants in the treatment group or giving different nonverbal cues to treatment group participants than to other participants are examples of experimenter bias. When experimenter bias is present, differences between groups' results may be due to the experimenter treating the two groups differently rather than to the treatment.

Extraneous factor

Factor other than the treatment. If we cannot control or account for extraneous variables, we can't conclude that the treatment had an effect. That is, we will not have internal validity.

Selection (or selection bias)

Apparent treatment effects being due to comparing groups that differed even before the treatment was administered (comparing apples with oranges).

Randomized within-subjects design

As in all within-subjects designs, all participants receive more than one level or type of treatment. However, to make sure that not every participant receives the series of treatments in the same sequence, the researcher randomly determines which treatment comes first, which comes second, and so on. In other words, participants all get the same treatments, but they receive different sequences of treatments.

Observer bias

Bias created by the observer seeing what he or she wants or expects to see.

Moderator variable

Variable that can intensify, weaken, or reverse the effects of another variable. For example, the effect of wearing perfume may be moderated by gender: If you are a woman, wearing perfume may make you more liked; if you are a man, wearing perfume may make you less liked.

mediating variable

Variables inside the individual (such as thoughts, feelings, or physiological responses) that come between a stimulus and a response. In other words, the stimulus has its effect because it causes changes in mediating variables, which, in turn, cause changes in behavior.

Hawthorne effect

When members of the treatment group change their behavior not because of the treatment itself, but because they are getting special treatment.

Maturation

Changes in participants due to natural growth or development. A researcher may think that the treatment had an effect when the difference in behavior is really due to maturation.

Covariation

Changes in the treatment are accompanied by changes in the behavior. To establish causality, you must establish covariation.

Demographics

Characteristics of a group, such as gender, age, and social class.

Demand characteristics

Characteristics of the study that suggest to the participant how the researcher might want the participant to behave.

Matching

Choosing your groups so that they are similar (they match) on certain characteristics. Matching reduces, but does not eliminate, the threat of selection bias.

Stooge

Confederate who pretends to be a participant but is actually a researcher's assistant. The use of stooges raises ethical questions.

ethical

Conforming to the American Psychological Association's principles of what is morally correct behavior. To learn more about these guidelines and standards, see Appendix D.

Hypothesis-guessing

When participants alter their behavior to conform to their guess as to what the research hypothesis is. Hypothesis-guessing can be a serious threat to construct validity, especially if participants guess correctly.

Testing effect

When participants score differently on the posttest as a result of what they learned from taking the pretest. Occasionally, people may think the participants' behavior changed because of the treatment when it really changed due to experience with the test.

Sequence effect

When participants who receive one sequence of treatments score differently (i.e., significantly lower or higher) than those participants who receive the same treatments in a different sequence.

Temporal precedence

When the causal factor comes before the change in behavior. Because the cause must come before the effect, researchers trying to establish causality must establish that the factor alleged to be the cause was introduced before the behavior changed.

Independence

Factors are independent when they are not causally or correlationally linked. Independence is a key assumption of most statistical tests. In the simple experiment, observations must be independent. That is, what one participant does should have no influence on what another participant does and what happens to one participant should not influence what happens to another participant. Individually assigning participants to the treatment or no-treatment condition and individually testing each participant are ways to achieve independence.

Type 2 error

Failure to reject the null hypothesis when it is in fact false. In other words, failing to find a relationship between your variables when there really is a relationship between them.

Random-digit dialing

Finding participants for telephone interviews by taking the area code and the 3-digit prefixes that you are interested in and then adding random digits to the end to create 10-digit phone numbers. You may use this technique when (a) you cannot afford to buy a list of phone numbers and then randomly select numbers from that list or (b) you want to contact people with unlisted numbers.
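
A rough sketch of the idea in Python; the area code and prefix shown are placeholders, not real numbers:

```python
import random

def random_digit_number(area_code="412", prefix="555"):
    # keep the area code and 3-digit prefix of interest,
    # then append 4 random digits to complete the 10-digit number
    suffix = "".join(random.choice("0123456789") for _ in range(4))
    return f"{area_code}-{prefix}-{suffix}"

print([random_digit_number() for _ in range(3)])
```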

debriefing

Giving participants the details of a study at the end of their participation. Proper debriefing is one of the researcher's most serious obligations.

Response set

Habitual way of responding on a test or survey that is independent of a particular test item (for instance, a participant might always check "agree" no matter what the statement is).

Researcher effect

Ideally, you hope that the results from a study would be the same no matter who was conducting it. However, it is possible that the results may be affected by the researcher. If the researcher is affecting the results, there is a researcher effect.

Central limit theorem

If numerous large samples (30 or more scores) from the same population are taken, and you plot the mean for each of these samples, your plot would resemble a normal curve, even if the population from which you took those samples was not normally distributed.
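
A quick simulation sketch of this claim: scores drawn from a flat (uniform) population are clearly not normal, yet the means of many 30-score samples pile up around the population mean. All numbers are illustrative:

```python
import random
from statistics import mean, stdev

population = [random.uniform(0, 100) for _ in range(100_000)]   # flat, not normal
sample_means = [mean(random.sample(population, 30)) for _ in range(1_000)]

print(round(mean(population), 1))       # population mean, around 50
print(round(mean(sample_means), 1))     # mean of the sample means, also around 50
print(round(stdev(sample_means), 1))    # sample means cluster tightly around it
```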

Control group

Participants who are randomly assigned to not receive the experimental treatment. These participants are compared to the treatment group to determine whether the treatment had an effect.

Experimental group

Participants who are randomly assigned to receive the treatment.

Inferential statistics

Procedures for determining the reliability and generalizability of a particular research finding.

Leading question

Question structured to lead respondents to the answer the researcher wants (such as, "You like this book, don't you?").

Open-ended question

Question that does not ask participants to choose between the responses provided by the researcher (e.g., choosing a, b, or c on a multiple-choice question or choosing a number between 1 and 5 on a rating scale measure) but instead asks the participant to generate a response. Essay and fill-in-the-blank questions are open-ended questions.

Dichotomous questions

Questions that allow only two responses (usually yes or no).

Independent random assignment

Randomly determining for each individual participant which condition he will be in. For example, you might flip a coin for each participant to determine to what group he will be assigned.
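
A minimal sketch of the coin-flip idea, with made-up participant names:

```python
import random

participants = ["Ana", "Ben", "Carla", "Dev", "Elena", "Finn"]
for person in participants:
    condition = "treatment" if random.random() < 0.5 else "no treatment"
    print(person, "->", condition)     # each assignment is an independent flip
```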

Law of parsimony

The assumption that the explanation that is simplest, most straightforward, and makes the fewest assumptions is the most likely.

Parsimony

The assumption that the explanation that is simplest, most straightforward, and makes the fewest assumptions is the most likely. See law of parsimony.

Probability value (p value)

The chances of obtaining a certain pattern of results if there really is no relationship between the variables.

Significance level

The chances of obtaining a certain pattern of results if there really is no relationship between the variables. See probability value.

Practice effect

The change in a score on a test (usually a gain) resulting from previous practice with the test. In a within-subjects design, this improvement might be incorrectly attributed to participants having received a treatment.

sensitivity

The degree to which a measure is capable of distinguishing between participants who differ on a variable (e.g., have different amounts of a construct or who do more of a certain behavior).

Internal validity

The degree to which a study establishes that a factor causes a difference in behavior. If a study lacks internal validity, the researcher may falsely believe that a factor causes an effect when it really doesn't.

construct validity

The degree to which a study, test, or manipulation measures and/or manipulates what the researcher claims it does. For example, a test claiming to measure aggressiveness would not have construct validity if what it actually measured was assertiveness.

internal consistency

The degree to which each question on a scale correlates with the other questions. Internal consistency is high if answers to each item correlate highly with answers to all other items.

external validity

The degree to which the results of a study can be generalized to other participants, settings, and times.

ceiling effect

The effect of treatment(s) is underestimated because the dependent measure is not sensitive to psychological states above a certain level. The measure puts an artificially low ceiling on how high a participant may score.

Simple main effect

The effects of one independent variable at a specific level of a second independent variable. The simple main effect could have been obtained merely by doing a simple experiment.

floor effect

The effects of treatment(s) are underestimated because the dependent measure artificially restricts how low scores can be.

Population

The entire group that you are interested in. You can estimate the characteristics of a population by taking large random samples from that population.

face validity

The extent to which a measure looks, on the face of it, to be valid. Face validity has nothing to do with actual, scientific validity. That is, a test could have face validity and not real validity or could have real validity, but not face validity. However, for practical/political reasons, you may decide to consider face validity when comparing measures.

content validity

The extent to which a measure represents a balanced and adequate sampling of relevant dimensions, knowledge, and skills. In many measures and tests, participants are asked a few questions from a large body of knowledge. A test has content validity if its content is a fair sample of the larger body of knowledge. Students hope that their psychology tests have content validity.

Dependent variable (dependent measure)

The factor that the experimenter predicts is affected by the independent variable; the participant's response that the experimenter is measuring.

ratio scale data

The highest form of measurement. With ratio scale numbers, the difference between any two consecutive numbers is the same (see interval scale). But in addition to having interval scale properties, in ratio scale measurement, a zero score means the total absence of a quality. (Thus, Fahrenheit is not a ratio scale measure of temperature because 0 degrees Fahrenheit does not mean there is no temperature.) If you have ratio scale numbers, you can meaningfully form ratios between scores. If IQ scores were ratio (they are not; very few measurements in psychology are), you could say that someone with a 60 IQ was twice as smart as someone with a 30 IQ (a ratio of 2 to 1). Furthermore, you could say that someone with a 0 IQ had absolutely no intelligence whatsoever.

Null hypothesis

The hypothesis that there is no relationship between two or more variables. The null hypothesis can be disproven, but it cannot be proven.

Spurious

When the covariation observed between two variables is not due to the variables influencing each other, but is because both are being influenced by some third variable. For example, the relationship between ice cream sales and assaults in New York is spurious, not because it does not exist (it does!), but because ice cream does not cause assaults, and assaults do not cause ice cream sales. Instead, high temperatures probably cause both increased assaults and ice cream sales. Beware of spuriousness whenever you look at research that does not use an experimental design.

Unstructured interview

When the interviewer has no standard set of questions that he or she asks each participant; a virtually worthless approach for collecting scientifically valid data.

Stable baseline

When the participant's behavior, prior to receiving the treatment, is consistent. Single-n experimenters try to establish a stable baseline.

Subject bias (subject effects)

When the participants bias the results by guessing the hypothesis and playing along or by giving the socially correct response.

Levels of an independent variable

When the treatment variable is given in different kinds or amounts, these different values are called levels. In the simple experiment, you only have two levels of the independent variable.

Zero correlation

When there doesn't appear to be a linear relationship between two variables. For practical purposes, any correlation between −.10 and +.10 may be considered so small as to be nonexistent.

Illusory correlation

When there is in fact no relationship (a zero correlation) between two variables, but people perceive that the variables are related.

Selection by maturation interaction

When treatment and no-treatment groups, although similar at one point, would have grown apart (developed differently) even if no treatment had been administered.

Summated score

When you have several Likert-type questions that all tap the same dimension (such as attitude toward democracy), you can add up each participant's responses to those questions to get an overall, total (summated) score.
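
A small sketch: each participant's answers to four hypothetical Likert-type items (1 = strongly disagree to 5 = strongly agree) are summed into one score:

```python
responses = {
    "P1": [5, 4, 5, 4],   # invented answers to four attitude items
    "P2": [2, 1, 2, 3],
    "P3": [3, 3, 4, 3],
}
for participant, answers in responses.items():
    print(participant, "summated score:", sum(answers))
```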

Experimental hypothesis

A prediction that the treatment will cause an effect.

reliability

A general term, often referring to the degree to which a participant would get the same score if retested (test-retest reliability). Reliability can, however, refer to the degree to which scores are free from random error. A measure can be reliable, but not valid. However, a measure cannot be valid if it is not also reliable.

Scatterplot

A graph made by plotting the scores of individuals on two variables (e.g., each participant's height and weight). By looking at this graph, you should get an idea of what kind of relationship (positive, negative, zero) exists between the two variables.

Frequency distribution

A graph on which the frequencies of the scores are plotted. Thus, the highest point on the graph will be over the most commonly occurring score. Often, frequency distributions will look like the normal curve.

Empty control group

A group that does not get any kind of treatment. The group gets nothing, not even a placebo. Usually, because of participant and experimenter biases that may result from such a group, you will want to avoid using an empty control group.

Descriptive hypothesis

A hypothesis about a group's characteristics or about the correlations between variables; a hypothesis that does not involve a cause-effect statement.

environmental manipulation

A manipulation that involves changing the participant's environment rather than giving the participant different instructions.

Cohen's d

A measure of effect size that tells you how different two groups are in terms of standard deviations. Traditionally, a Cohen's d of .2 is considered small, .5 is considered moderate, and .8 is considered large.
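
A sketch of one common way to compute d, dividing the difference between group means by a pooled standard deviation; both groups' scores are invented:

```python
from statistics import mean, stdev
from math import sqrt

treatment = [14, 16, 15, 18, 17, 16]
control   = [12, 13, 14, 12, 15, 13]

n1, n2 = len(treatment), len(control)
pooled_sd = sqrt(((n1 - 1) * stdev(treatment) ** 2 +
                  (n2 - 1) * stdev(control) ** 2) / (n1 + n2 - 2))
d = (mean(treatment) - mean(control)) / pooled_sd
print(f"d = {d:.2f}")   # judge against the .2 / .5 / .8 benchmarks above
```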

Cronbach's alpha

A measure of internal consistency. To be considered internally consistent, a measure's Cronbach's alpha should be at least above .70 (most researchers would like to see it above .80).
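
A sketch of the standard formula, alpha = k/(k − 1) × (1 − sum of item variances / variance of total scores), applied to an invented 3-item scale:

```python
from statistics import pvariance

# rows = participants, columns = items (made-up 1-to-5 ratings)
ratings = [
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 3],
    [1, 2, 2],
]
k = len(ratings[0])
item_variances = [pvariance(item) for item in zip(*ratings)]
total_variance = pvariance([sum(person) for person in ratings])
alpha = k / (k - 1) * (1 - sum(item_variances) / total_variance)
print(f"alpha = {alpha:.2f}")   # compare to the .70 / .80 guidelines above
```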

Standard deviation

A measure of the extent to which individual scores deviate from the population mean. The more scores vary from each other, the larger the standard deviation will tend to be. If, on the other hand, all the scores are the same as the mean, the standard deviation would be zero.
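
A quick sketch of those two claims with invented data sets: the standard deviation is zero when every score equals the mean and grows as scores spread out:

```python
from statistics import pstdev   # population standard deviation

print(pstdev([5, 5, 5, 5]))     # 0.0 -- every score equals the mean
print(pstdev([4, 5, 5, 6]))     # small spread, small SD
print(pstdev([1, 3, 7, 9]))     # larger spread, larger SD
```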

construct

A mental state such as love, intelligence, hunger, and aggression that cannot be directly observed or manipulated with our present technology.

Content analysis

A method used to categorize a wide range of open-ended (unrestricted) responses. Content analysis schemes have been used to code the frequency of violence on certain television shows and are often used to code archival data.

Survey

A non-experimental design useful for describing how people think, feel, or behave. The key is to design a valid questionnaire, test, or interview and administer it to a representative sample of the group you are interested in.

Correlation coefficient

A number that can vary from −1.00 to +1.00 and indicates the kind of relationship that exists between two variables (positive or negative as indicated by the sign of the correlation coefficient) and the strength of the relationship (indicated by the extent to which the coefficient differs from 0). Positive correlations indicate that the variables tend to go in the same direction (if a participant is low on one variable, the participant will tend to be low on the other). Negative correlations indicate that the variables tend to head in opposite directions (if a participant is low on one, the participant will tend to be high on the other).
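
A minimal sketch of computing a Pearson correlation coefficient by hand for two invented variables (say, height in inches and weight in pounds):

```python
from statistics import mean
from math import sqrt

height = [60, 62, 65, 70, 72]
weight = [115, 120, 140, 165, 180]

mx, my = mean(height), mean(weight)
covariation = sum((x - mx) * (y - my) for x, y in zip(height, weight))
r = covariation / sqrt(sum((x - mx) ** 2 for x in height) *
                       sum((y - my) ** 2 for y in weight))
print(f"r = {r:.2f}")    # positive and near +1.00: a strong positive correlation
```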

Baseline

A participant's behavior on the task before receiving the treatment. A measure of the dependent variable as it occurs without the experimental manipulation. Used as a standard of comparison in single-subject and small-n designs.

Research journal

A relatively informal notebook in which you jot down your research ideas and observations. The research journal can be a useful resource when it comes time to write the research proposal. Note: Despite sounding similar, the term research journal is not the same as the term scientific journal. The term scientific journal is used to distinguish journals from magazines. In contrast to magazines, scientific journals tend (1) not to have ads for popular products, (2) not to have full-page color pictures, (3) to have articles that follow APA format (having abstract, introduction, method, results, discussion, and reference sections), and (4) to have articles that have been peer-reviewed.

Random sampling

A sample that has been randomly selected from a population. If you randomly select enough participants, those participants will usually be fairly representative of the entire population. That is, your random sample will reflect its population. Often, random sampling is used to maximize a study's external validity. Note that random sampling, unlike random assignment, does not promote internal validity.

theory

A set of principles that explain existing research findings and that can be used to make new predictions, which in turn can lead to new research findings.

Abstract

A short (fewer than 120 words), one-page summary of a research proposal or an article.

Reversal design (ABA design, ABA reversal design)

A single-subject or small-n design in which baseline measurements are made of the target behavior (A), then an experimental treatment is given (B), and the target behavior is measured again (A). The ABA design makes a more convincing case for the treatment's effect than the AB design.

ABA reversal design

A single-subject or small-n design in which baseline measurements are made of the target behavior (A), then an experimental treatment is given (B), and the target behavior is measured again (A). The ABA design makes a more convincing case for the treatment's effect than the AB design. See reversal design.

Multiple-baseline design

A single-subject or small-n design in which different behaviors receive baseline periods of varying lengths prior to the introduction of the treatment variable. Often, the goal is to show that the behavior being rewarded changes, whereas the other behaviors stay the same until they too are reinforced.

file drawer problem

A situation in which research that is not affected by Type 1 errors languishes in researchers' file cabinets, whereas the Type 1 errors are published.

Double-barreled question

A statement that contains more than one question. Responses to a double-barreled question are difficult to interpret. For example, if someone responds "No" to the question "Are you hungry and thirsty?" we do not know whether he is hungry, but not thirsty; not hungry, but thirsty; or neither hungry nor thirsty.

Factor analysis

A statistical technique designed to explain the variability in several questions in terms of a smaller number of underlying hypothetical factors.

Multiple regression

A statistical technique that can take data from several predictors and an outcome variable to create a formula that weights the predictors in such a way as to make the best possible estimates of the outcome variable given those predictors. In linear multiple regression, this equation is for the straight line that best predicts the outcome data. Often, with multiple regression, you not only are able to predict your outcome variable with accuracy but you are also able to tell which predictors are most important for making accurate predictions. For more information on multiple regression, see Appendix E.

Analysis of variance (ANOVA)

A statistical test for analyzing data from experiments that is especially useful when the experiment has more than one independent variable or more than two levels of an independent variable.

Degrees of freedom (df)

An index of sample size. In the simple experiment, the df for your error term will always be two less than the number of participants.

Interobserver reliability

An index of the degree to which different raters give the same behavior similar ratings.

Between-groups variance (mean square treatment, mean square between)

An index of the degree to which group means differ; an index of the combined effects of random error and treatment. This quantity is compared to the within-groups variance in ANOVA. It is the top half of the F ratio. If the treatment has no effect, the between-groups variance should be roughly the same as the within-groups variance. If the treatment has an effect, the between-groups variance should be larger than the within-groups variance.

Variability between group means

An index of the degree to which group means differ; an index of the combined effects of random error and treatment. This quantity is compared to the within-groups variance in ANOVA. It is the top half of the F ratio. If the treatment has no effect, the between-groups variance should be roughly the same as the within-groups variance. If the treatment has an effect, the between-groups variance should be larger than the within-groups variance. See between-groups variance.

Standard error of the mean

An index of the degree to which random error may cause the sample mean to be an inaccurate estimate of the population mean. The standard error will be small when the standard deviation is small, and the sample mean is based on many scores.

Standard error of the difference

An index of the degree to which random sampling error may cause two sample means representing the same populations to differ. In the simple experiment, if we are to find a treatment effect, the difference between our experimental-group mean and control-group mean will usually be at least twice as big as the standard error of the difference. To find out the exact ratio between our observed difference and the standard error of the difference, we conduct a t test.

Interaction

An interaction occurs when a relationship between two variables (e.g., X and Y) is affected by (is moderated by, depends on) the amount of a third variable (Z). You are probably most familiar with interactions involving drugs (e.g., two drugs may both be helpful but the combination of the two drugs is harmful or a drug is helpful, except for people with certain conditions). If you need to know how much of one variable participants have received to say what the effect of another variable is, you have an interaction between those two variables. If you graph the results from an experiment that has two or more independent variables, and the lines you draw between your points are not parallel, you may have an interaction. See also moderator variable.

Semistructured interview

An interview constructed around a core of standard questions; however, the interviewer may expand on any question in order to explore a given response in greater depth.

Structured interview

An interview in which all respondents are asked a standard list of questions in a standard order.

Negative correlation

An inverse relationship between two variables (such as number of suicide attempts and happiness).

Participant observation

An observation procedure in which the observer participates with those being observed. The observer becomes one of them.

F ratio

Analysis of variance (ANOVA) yields an F ratio for each main effect and interaction. In between-subjects experiments, the F ratio is a ratio of between-groups variance to within-groups variance. If the treatment has no effect, F will tend to be close to 1.0.
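
A hand-computed sketch of a one-way, between-subjects F ratio (mean square between divided by mean square within) for three invented groups:

```python
from statistics import mean

groups = {
    "no treatment": [3, 4, 5, 4],
    "low dose":     [5, 6, 6, 7],
    "high dose":    [8, 7, 9, 8],
}
grand_mean = mean(score for scores in groups.values() for score in scores)

ss_between = sum(len(scores) * (mean(scores) - grand_mean) ** 2
                 for scores in groups.values())
ss_within = sum((score - mean(scores)) ** 2
                for scores in groups.values() for score in scores)

df_between = len(groups) - 1
df_within = sum(len(scores) for scores in groups.values()) - len(groups)

f_ratio = (ss_between / df_between) / (ss_within / df_within)
print(f"F({df_between}, {df_within}) = {f_ratio:.2f}")
```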

Normal distribution

If the way the scores are distributed follows the normal curve, scores are said to be normally distributed. For example, a population is said to be normally distributed if 68% of the scores are within 1 standard deviation of the mean, 95% are within 2 standard deviations of the mean, and 99% of the scores are within 3 standard deviations of the mean. Many statistical tests, including the t test, assume that sample means are normally distributed.

Median

If you arrange all the scores from lowest to highest, the middle score will be the median.
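
A tiny sketch using the same example data set given under Mode:

```python
from statistics import median

print(median([2, 2, 2, 6, 10, 50]))   # 4.0 -- average of the two middle scores
print(median([1, 9, 5]))              # 5  -- the middle score after sorting
```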

Convenience sampling

Including people in your sample simply because they are easy (convenient) to survey. It is hard to generalize the results accurately from a study that used convenience sampling.

Fixed-alternative question

Item on a test or questionnaire in which a person must choose an answer from among a few specified alternatives. Multiple-choice, true-false, and rating-scale questions are all fixed-alternative questions.

Likert-type item

Item that typically asks participants whether they strongly agree, agree, are neutral, disagree, or strongly disagree with a certain statement. These items are assumed to yield interval data.

instructional manipulation

Manipulating the treatment by giving written or oral instructions.

Nonreactive measure

Measurement that is taken without changing the participant's behavior; also referred to as unobtrusive measure.

Parameters

Measurements describing populations; often inferred from statistics, which are measurements describing a sample.

ordinal scale numbers

Numbers that can be meaningfully ordered from lowest to highest. Ranks (e.g., class rank, order in which participants finished a task) are ordinal scale numbers.

nominal scale numbers

Numbers that do not represent different amounts of a characteristic but instead represent different kinds of characteristics (qualities, types, or categories); numbers that substitute for names.

Mortality (attrition)

Participants dropping out of a study before the study is completed. Sometimes, differences between conditions may be due to participants dropping out of the study rather than the treatment.

Retrospective self-report

Participants telling you what they said, did, or believed in the past. In addition to problems with ordinary self-report (response sets, giving the answer that a leading question suggests, etc.), retrospective self-report is vulnerable to memory biases. Thus, retrospective self-reports should not be accepted at face value.

Participant bias

Participants trying to behave in a way that they believe will support the researcher's hypothesis.

power

The ability to find statistically significant differences when differences truly exist; the ability to avoid making Type 2 errors.

Unobtrusive measurement

Recording a particular behavior without the participant knowing you are measuring that behavior. Unobtrusive measurement reduces subject biases such as social desirability bias and obeying demand characteristics.

Type 1 error

Rejecting the null hypothesis when it is in fact true. In other words, declaring a difference statistically significant when the difference is really due to chance.

replicate

Repeat, or duplicate, an original study.

replicable

Repeatable. A researcher should be able to repeat another researcher's study and obtain the same pattern of results.

Direct (exact) replication

Repeating a study as exactly as possible, usually to determine whether or not the same results will be obtained. Direct replications are useful for establishing that the findings of the original study are reliable.

Null results (nonsignificant results)

Results that fail to disconfirm the null hypothesis; results that fail to provide convincing evidence that the factors are related. Null results are inconclusive because the failure to find a relationship could be due to your design lacking the power to find the relationship. In other words, many null results are Type 2 errors.

Nonsignificant results

Results that fail to disconfirm the null hypothesis; results that fail to provide convincing evidence that the factors are related. Null results are inconclusive because the failure to find a relationship could be due to your design lacking the power to find the relationship. In other words, many null results are Type 2 errors. See null results.

bias

Systematic errors that can push the scores in a given direction. Bias may lead to finding the results that the researcher wanted.

Proportionate stratified random sampling

Technique in which you ensure that the sample is similar to the population in certain respects (for instance, percentage of men and women) by dividing the population into groups (strata) and then randomly sampling from those groups, thereby keeping all the advantages of random sampling but with even greater accuracy.

Stratified sampling

Technique in which you ensure that the sample is similar to the population in certain respects (for instance, percentage of men and women) by dividing the population into groups (strata) and then randomly sampling from those groups, thereby keeping all the advantages of random sampling but with even greater accuracy. See proportionate stratified sampling.

Quota sampling

Technique ensuring that you get the desired number of (meet your quotas for) certain types of people (certain age groups, minorities, etc.). This method does not involve random sampling and usually gives you a less representative sample than random sampling would. However, it may be an improvement over convenience sampling.

t test

The most common way of analyzing data from a simple experiment. It involves computing a ratio between two things: (1) the difference between your group means and (2) the standard error of the difference (an index of the degree to which group means could differ by chance alone). If the difference you observe is more than three times bigger than the difference that could be expected by chance, then your results are probably statistically significant. We can only say probably because the exact ratio that you need for statistical significance depends on your level of significance and on how many participants you have.
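
A rough sketch of that ratio for two equal-sized, invented groups; the resulting t would be compared against a critical value from a t table:

```python
from statistics import mean, stdev
from math import sqrt

treatment = [15, 18, 14, 17, 16, 19]
control   = [12, 14, 13, 11, 15, 13]

# standard error of the difference (equal-n groups)
se_diff = sqrt(stdev(treatment) ** 2 / len(treatment) +
               stdev(control) ** 2 / len(control))
t = (mean(treatment) - mean(control)) / se_diff
df = len(treatment) + len(control) - 2    # two less than the number of participants
print(f"t({df}) = {t:.2f}")
```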

Overall main effect

The overall or average effect of an independent variable.

Main effect

The overall or average effect of an independent variable. See overall main effect.

Results section

The part of an article, immediately following the method section, that reports statistical results and relates those results to the hypotheses. From reading this section, you should know whether the results supported the hypotheses.

Method section

The part of the article immediately following the introduction. Whereas the introduction explains why the study was done, the method section describes what was done. For example, it will tell you what design was used, what the researchers said to the participants, what measures and equipment were used, how many participants were studied, and how participants were selected. The method section could also be viewed as a "how we did it" section. The method section is usually subdivided into at least two subsections: participants and procedure.

Introduction

The part of the article that occurs right after the abstract. In the introduction, the authors tell you what their hypothesis is, why their hypothesis makes sense, how their study fits in with previous research, and why their study was worth doing.

Discussion

The part of the article, immediately following the results section, that discusses the research findings and the study in a broader context and suggests research projects that could be done to follow up on the study.

Stimulus set

The particular stimulus materials that are shown to two or more groups of participants. Researchers may use more than one stimulus set in a study so that they can see whether the treatment effect replicates across different stimulus sets. In those cases, stimulus sets would be a replication factor.

Interobserver (judge) agreement

The percentage of times the raters agree.
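
A minimal sketch with invented ratings from two observers:

```python
rater_1 = ["hit", "no hit", "no hit", "hit", "no hit"]
rater_2 = ["hit", "no hit", "hit",    "hit", "no hit"]

agreements = sum(a == b for a, b in zip(rater_1, rater_2))
print(f"{100 * agreements / len(rater_1):.0f}% agreement")   # 80%
```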

Order

The place in a sequence (first, second, third, etc.) when a treatment occurs.

Nonresponse bias

The problem caused by the refusal of people who were in your sample to participate in your study. Nonresponse bias is one of the most serious threats to a survey design's external validity.

Median split

The procedure of dividing participants into two groups (highs and lows) based on whether they score above or below the median.
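
A quick sketch of the procedure with invented scores:

```python
from statistics import median

scores = {"P1": 3, "P2": 9, "P3": 5, "P4": 7, "P5": 4, "P6": 8}
cut = median(scores.values())
highs = [p for p, s in scores.items() if s > cut]
lows  = [p for p, s in scores.items() if s <= cut]
print("median =", cut, "| highs:", highs, "| lows:", lows)
```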

Regression (toward the mean)

The tendency for scores that are extremely unusual to revert back to more normal levels on the retest. If participants are chosen because their scores were extreme, these extreme scores may be loaded with extreme amounts of random measurement error. On retesting, participants are bound to get more normal scores as random measurement error abates to more normal levels. This regression effect could be mistaken for a treatment effect.
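
A simulation sketch of this effect: each observed score is a stable true score plus random measurement error, and participants picked for extreme first-test scores come out closer to average on the retest even though nothing else changed. All numbers are invented:

```python
import random
from statistics import mean

random.seed(1)
true_scores = [random.gauss(100, 10) for _ in range(1_000)]
test   = [t + random.gauss(0, 10) for t in true_scores]   # score = true + error
retest = [t + random.gauss(0, 10) for t in true_scores]

extreme = [i for i, score in enumerate(test) if score > 125]   # picked for extremity
print(round(mean(test[i] for i in extreme), 1))     # well above 125 on the first test
print(round(mean(retest[i] for i in extreme), 1))   # noticeably closer to 100 on retest
```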

Statistical regression

The tendency for scores that are extremely unusual to revert back to more normal levels on the retest. If participants are chosen because their scores were extreme, these extreme scores may be loaded with extreme amounts of random measurement error. On retesting, participants are bound to get more normal scores as random measurement error abates to more normal levels. This regression effect could be mistaken for a treatment effect. See regression (toward the mean).

Hypothesis testing

The use of inferential statistics to determine if the relationship found between two or more variables in a particular sample holds true in the population.

Parameter estimation

The use of inferential statistics to estimate certain characteristics of the population (parameters) from a sample of that population.

Independent variable

The variable being manipulated by the experimenter. Participants are assigned to a level of the independent variable by independent random assignment.

Instrumentation bias

The way participants were measured changed from pretest to posttest. In instrumentation bias, the actual measuring instrument changes or the way it is administered changes. Sometimes people may think they have a treatment effect when they really have an instrumentation effect.

Restriction of range

To observe a sizable correlation between two variables, both must be allowed to vary widely (if one variable does not vary, the variables cannot vary together). Occasionally, investigators fail to find a relationship between variables because they study only one or both variables over a highly restricted range. Example: comparing NFL offensive linemen and saying that weight has nothing to do with playing offensive line in the NFL on the basis of your finding that great offensive tackles do not weigh much more than poor offensive tackles. Problem: You compared only people who ranged in weight from 315 to 330 pounds.

Single blind

To reduce either subject biases or researcher biases, you might use a single-blind experiment in which either the participant (if you are most concerned about subject bias) or the person running participants (if you are more concerned about researcher bias) is unaware of who is receiving what level of the treatment. If you are concerned about both subject and researcher bias, then you should probably use a double-blind study.

standardization

Treating each participant in the same (standard) way. Standardization can reduce both bias and random error.

Plagiarism

Using someone else's words, thoughts, or work without giving proper credit.

Post hoc test

Usually refers to a statistical test that has been performed after an ANOVA has obtained a significant effect for a factor. Because the ANOVA says only that at least two of the groups differ from one another, post hoc tests are performed to find out which groups differ from one another.

valid

Usually, a reference to whether a conclusion or claim is justified. A measure is considered valid when it measures what it claims to measure. See also construct validity, internal validity, and external validity.

convergent validity

Validity demonstrated by showing that the measure correlates with other measures of the construct.

Confounding variables

Variables, other than the independent variable, that may be responsible for the differences between your conditions. There are two types of confounding variables: ones that are manipulation irrelevant and ones that are the result of the manipulation. Confounding variables that are irrelevant to the treatment manipulation threaten internal validity. For example, the difference between groups may be due to one group being older than the other rather than to the treatment. Random assignment can control for the effects of those confounding variables. Confounding variables that are produced by the treatment manipulation hurt the construct validity of the study because even though we may know that the treatment manipulation had an effect, we don't know what it was about the treatment manipulation that had the effect. For example, we may know that an exercise manipulation increases happiness (internal validity), but not know whether the exercise manipulation worked because people exercised more, got more encouragement, had a more structured routine, practiced setting and achieving goals, or met new friends. In such a case, construct validity is questionable because it would be questionable to label the manipulation an exercise manipulation.

random error

Variations in scores due to unsystematic, chance factors.

discriminant validity

When a measure does not correlate highly with a measure of a different construct. Example: A violence measure might have a degree of discriminant validity if it does not correlate with the measures of assertiveness, social desirability, and independence.

Ex post facto research

When a researcher goes back, after the research has been completed, looking to test hypotheses that were not formulated prior to the beginning of the study. The researcher is trying to take advantage of hindsight. Often an attempt to salvage something out of a study that did not turn out as planned.

Researcher-expectancy effect

When a researcher's expectations affect the results. This is a type of researcher bias.

Statistical significance

When a statistical test says that the relationship we have observed is probably not due to chance alone, we say that the results are statistically significant. In other words, because the relationship is probably not due to chance, we conclude that there probably is a real relationship between our variables.

Carryover (treatment carryover) effect

When a treatment administered earlier in the experiment affects participants when those participants are receiving additional treatments. Carryover effects may make it hard to interpret the results of single-subject and within-subjects designs because they may make it hard to know whether the participant's change in behavior is a reaction to the treatment just administered or a delayed reaction to a treatment administered some time ago.

Crossover (disordinal) interaction

When an independent variable has one kind of effect in the presence of one level of a second independent variable, but a different kind of effect in the presence of a different level of the second independent variable. Examples: Getting closer to people may increase their attraction to you if you have just complimented them, but may decrease their attraction to you if you have just insulted them. Called a crossover interaction because the lines in a graph will cross. Called disordinal interaction because it cannot be explained by having ordinal rather than interval data.

Disordinal interaction

When an independent variable has one kind of effect in the presence of one level of a second independent variable, but a different kind of effect in the presence of a different level of the second independent variable. Examples: Getting closer to people may increase their attraction to you if you have just complimented them, but may decrease their attraction to you if you have just insulted them. Called a crossover interaction because the lines in a graph will cross. Called disordinal interaction because it cannot be explained by having ordinal rather than interval data. See crossover (disordinal) interaction.

Interviewer bias

When an interviewer influences a participant's responses. For example, the interviewer might, consciously or unconsciously, verbally or nonverbally reward the participant for giving responses that support the research hypothesis.

