Research Methods, Kazdin: Research in Clinical Psychology

Significance Level (Alpha)

A criterion for decision making in statistical data evaluation. Conventional alphas of p < .05 and .01 are used for decision making. Groups will never have identical means on the outcome measures, due simply to normal fluctuations and sampling differences.

Multimodal distribution

A distribution that has multiple modes (thus two or more "peaks"). An indication that the distribution is not normal.

Sampling Frame

A list of an entire population. Ex: voter registration lists, or the telephone numbers of all households in a given area, may be used as a sampling frame.

Z-Scores

A number that represents the distance of a score from the mean of the variable (the mean deviation) expressed in standard deviation units.
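
A minimal sketch of the computation, assuming a small made-up list of scores (Python); z = (X - mean) / SD is the standard formula, not anything specific to this course.

    scores = [62, 70, 74, 78, 86]             # hypothetical exam scores
    mean = sum(scores) / len(scores)          # mean = 74.0
    sd = (sum((x - mean) ** 2 for x in scores) / len(scores)) ** 0.5   # population SD = 8.0
    z_scores = [(x - mean) / sd for x in scores]
    print(z_scores)                           # the score of 86 comes out as +1.5 SDs above the mean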

What does a p-value mean?

A p-value of .001 indicates that such an outcome is extremely unlikely to have occurred as a result of chance; in fact, only about once in 1,000 times.

Institutional Review Board

A panel of at least five individuals, including at least one whose primary interest is in nonscientific domains, that determines the ethics of proposed research

Example of Common-causal variables

A potential common-causal variable is the discipline style of the children's parents. For instance, parents who use a harsh and punitive discipline style may produce children who both like to watch violent TV and behave aggressively.

Stratified Sampling

A probability sampling technique that involves dividing the population into subgroups (or strata) and then selecting samples from each of these groups using random sampling techniques.

Systematic Random Sampling

A probability sampling technique that involves selecting every nth person from the sampling frame; easier to implement than simple random sampling.
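
A minimal sketch of the procedure, assuming a hypothetical sampling frame of 1,000 names (Python); the random starting point is what keeps this a probability sample.

    import random

    frame = [f"person_{i}" for i in range(1, 1001)]   # hypothetical sampling frame
    n = 50                                            # desired sample size
    k = len(frame) // n                               # sampling interval: every kth person (k = 20)
    start = random.randrange(k)                       # random starting point
    sample = frame[start::k]                          # take every kth person from that start
    print(len(sample), sample[:3])                    # 50 people drawn from the frame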

Confidence interval

A range of scores within which a population parameter is likely to fall; its width is frequently reported as the margin of error of the sample.
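
A minimal sketch of a 95% confidence interval around a sample mean, assuming scipy is available and using made-up scores; the margin of error is half the width of the interval.

    import numpy as np
    from scipy import stats

    sample = np.array([4.1, 5.0, 5.5, 4.7, 6.2, 5.8, 4.9, 5.3])   # hypothetical scores
    mean = sample.mean()
    sem = stats.sem(sample)                          # standard error of the mean
    tcrit = stats.t.ppf(0.975, df=len(sample) - 1)   # critical t value for a 95% CI
    margin = tcrit * sem                             # margin of error
    print(f"95% CI: [{mean - margin:.2f}, {mean + margin:.2f}]")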

Spurious Relationship

A relationship between two variables that is produced by a common-causal variable

Survey/ Interview

A series of self-report measures administered through either an interview or a written questionnaire

Questionnaire

A set of fixed-format, self-report items that is completed by a respondent at their own pace, often without supervision; cheaper than interviews and less likely to be influenced by the characteristics of the experimenter.

Hypothesis

A specific and falsifiable prediction regarding the relationship between or among two or more variables. States the existence of a relationship and the specific direction of that relationship.

Discriminant Validity

A type of construct validity; the extent to which a measured variable is found to be unrelated to other measured variables designed to measure other conceptual variables.

Null Hypothesis

A type of hypothesis used in statistics that proposes that no statistical significance exists in a set of given observations. It is presumed to be true until statistical evidence nullifies it in favor of an alternative hypothesis.

Parsimony (Occam's Razor)

Accept the simplest answer until evidence merits a different one.

Validity

Accuracy; how faithful your measurement is. Does this measure assess what it is supposed to assess?

Valid logical conclusion

Affirming the Antecedent: given "if A (antecedent), then B (consequent)" and A, conclude B. Denying the Consequent: given "if A, then B" and not B (consequent), conclude not A (antecedent).

Invalid logical conclusion

Affirming the Consequent: given "if A, then B" and B (consequent), incorrectly concluding A (antecedent). Denying the Antecedent: given "if A, then B" and not A (antecedent), incorrectly concluding not B (consequent).

Modus ponens

Affirming the antecedent: given "if A (antecedent), then B (consequent)" and A, conclude B. If I have a dog, I have a mammal; I have a dog, therefore I have a mammal.

Sum of Squares

Each deviation from the mean is squared, (X - X̄)², and the squared deviations are then summed to give SS.

Populations

Age/setting: adolescents, outpatient, inpatient, high risk, school

Holding constant

All individuals could be observed in the same room, at the same time, by the same researcher; by standardizing the environment and procedures, most environmental variables can be held constant.

Ratio Scale

All measures of central tendency and dispersion can be used; a ratio scale also has a true zero point.

Which kinds of designs can be combined?

All of them

Experimental Advantage

Allows drawing of conclusions about the causal relationships among variables.

Benefits for using a Representative Sample

Allows the researcher to make inferences from the sample to the population, because it tries to capture the true characteristics of the population.

Naturalistic Methods Dis

Although the data can be rich and colorful, naturalistic research often does not provide much information about why behavior occurs or what would have happened to the same people in different situations.

Achievement Tests

An evaluation of an individual's level of mastery. Ex: this test, driving tests.

Analogue v. Clinical Studies

An analogue study induces circumstances that relate to the phenomenon of interest (e.g., inducing anger to measure cortisol).

Common-causal variables

Another possible explanation for an observed correlation is that it has been produced by the presence of a common-causal variable (third variable): a variable that causes both the predictor and the outcome variable.

Basic Research

Answers fundamental questions about behavior and people and lays the foundation for applied research. Driven by scientific curiosity or interest; the goal is to expand knowledge, not to create or invent something.

Variable

Any attribute that can assume different values among different people or across different times or places

Participants at Minimal Risk

Participants who are not likely to experience harmful effects through taking part in the research project.

Participants at Risk

Are participants who, by virtue of their participation in the research project, are placed under some emotional or physical risk

Aptitude Test

Assesses an individual's ability or skill. Ex: SAT, career tests, tests of future performance.

Purpose

Assessment, prediction, intervention, prevention, review

Nonreactive Behavioral measures

Behavioral measures that are designed to avoid reactivity because the respondent is not aware that the measurement is occurring, does not realize what the measure is designed to assess, or cannot change his or her responses. Ex: studying prejudice.

Absence of Reversal

Behavior does not revert toward baseline levels. Possible reasons:
1. Threats to internal validity such as history and maturation
2. Behavior comes under additional sources of stimulus control, which could involve sources of reinforcement maintaining the behavior that did not exist prior to the intervention
3. Treatment continues due to the behavior of those implementing the intervention
4. The behavior change observed in the initial treatment phase was dramatic and permanent

Example of Falsifiability

Benjamin Rush: if recovery of patients meant confirmation of his treatment, death should have meant disconfirmation, but it didn't, because he said those patients would have died anyway. This made it impossible to falsify his claims.

Bimodal Distribution

A type of continuous distribution that has two peaks; each peak has a different mean and standard deviation. Not a normal distribution.

Nominal Scale

Can only calculate mode or frequency distribution, no mean or st dev

Interval Scale

Can use all measures of central tendency and st dev

Ordinal Scale

Can use mode, frequency distribution and median and percentiles, but no mean or standard dev

Intervention

Can we control or alter an outcome of interest?

Disadvantages of Experimental

Cannot experimentally manipulate many important variables

Non Probability Sampling

Used in cases where no sampling frame exists, like all homeless people in NYC. Types: 1) Systematic, 2) Convenience, 3) Quota, 4) Snowball. Does not allow the study's findings to be generalized from the sample to the population.

Random Error

Chance fluctuations in measured variables (error) -is self-canceling -affects reliability

Ad and Dis of Unacknowledged participants

Chance to get intimate information from workers, but the researcher may change the situation or be biased by living so close; poses ethical questions.

Instrumentation and Response shift

Changes in measuring instruments/methods over time. Changes to a person's internal response to standards of measurement.

Reactivity

Changes in responding that occur as a result of measurement

Psychological Ways

Characteristics such as hostility, anxiety, introversion or extroversion can also have an influence on participants' responses

Self-promotion

Common type of reactivity, occurs when research participants respond in ways that they think will make them look good

Example of Behavioral Measure

Conceptual variable: personality style. Behavioral measure: observation of the objects in, and the state of, people's bedrooms.

Reason 4 of Correlation

Could be a spurious relationship: it's just a coincidence that these variables are related in the sample under these conditions.

Reason 2 of Correlation

Could be the opposite direction of the prediction: being aggressive (B) could attract people to violent media (A).

Baseline Assessment

Critical functions of baseline:
1. Descriptive function: the data collected during the baseline phase describe the existing level of performance, or the extent to which the client engages in the behavior or domain that is to be altered.
2. Predictive function: baseline data serve as the basis for predicting the level of performance for the immediate future if the intervention is not provided. This prediction is achieved by projecting or extrapolating a continuation of baseline performance into the future.

Empirical Study

Data were collected on a sample; descriptive and/or inferential data analyses were conducted; results were interpreted.

Changing Criterion Design

Description and underlying rationale:
i. Begins with a baseline phase and then proceeds through subphases
1. The subphases all fall within the intervention phase
2. The number of subphases can vary
ii. The effect of the intervention is demonstrated by showing that behavior changes gradually over the course of the intervention phase
iii. As performance matches or meets the criterion with some consistency, the criterion is shifted to a new level
iv. The effects of the intervention are shown when performance repeatedly changes to meet the criterion

Multiple Baseline

Description and underlying rationale:
i. Inferences are based on examining performance across several different baselines
ii. The multiple-baseline design demonstrates the effect of an intervention by showing that behavior changes when, and only when, the intervention is applied
iii. The repeated demonstration that behavior changes in response to staggered applications of the intervention usually makes the influence of extraneous factors implausible
iv. The differing levels of a multiple-baseline design serve as control conditions to evaluate what changes can be expected without the application of treatment
v. Ideally, one will see the first set of data change while the others still in baseline continue without showing a change

Case Study

Descriptive records of one or more individual's experiences and behavior

Quantitative research

Descriptive research in which the collected data are subjected to formal statistical analysis ex- questionnaires and systematic observations of behavior

Empir. Study Results

Descriptive stats (range, means, SDs, frequencies) that characterize the sample and provide orientation to the metrics; general descriptive associations among variables; primary analyses, e.g., higher-order inferential stats or modeling; exploratory analyses and examination of alternative accounts for findings; summary statements regarding the nature of relations among variables, but usually no interpretation.

Independent Variable

Directly manipulated in experiments; independent because it is assumed not to be influenced by other variables; the cause in a cause-and-effect relationship, aka the predictor.

Content Validity

Does the measure effectively sample the universe of situations/factors it is supposed to measure? The extent to which the measured variable appears to have adequately covered the full domain of the conceptual variable.

Conceptual Variable Example

Employee satisfaction, aggression, attraction, depression, decision-making skills.

Ad Dis of Acknowledged participant

Ethically appropriate but might have been biased by friendships, potential for reactivity

Maturation

Events and processes within subjects: boredom, sickness, wiser, stronger, tired, older... Only a problem to internal validity if the effect cannot be accounted for. p25

Unacknowledged Participant

Ex: Roy's observations in the raincoat factory, or cult studies. The participants do not know they are being watched or that the person is a researcher.

Split-half Reliability Example

Ex: take the 1st half of the items and correlate them with the 2nd half, or use odd vs. even items.

Extraneous Variable or Confounding

Factors other than the IV that can influence the DV; may or may not be controlled. Ex: the Pepsi challenge.

Type 2 Error (Beta)

Failing to find an association that does exist (failing to reject the null hypothesis when it is actually false).

Cluster Sampling Example

First divide the US into regions (for instance, East, Midwest, South, Southwest). Then randomly select states from each region, counties from each state, and colleges or universities from each county. Then use the sampling frame of the matriculated students at each college.

Alternate (Equivalent) forms

Give two similar, but not identical, measures at different times (to same people) and compare scores ex: GRE, SAT

Construct Validity: Experiment

Given that the intervention was responsible for change, what specific aspect of the intervention or arrangement was the causal agent; that is, what is the conceptual basis (construct) underlying the effect? p23

Variable Levels

High/low, normal/moderate/severe

Impact Factors

How often work is cited

Interpreting the results of a study

How to discuss results. Data interpretation can be tricky because the meaning of the quantitative results of a study can easily be misinterpreted or overinterpreted.

Example of Null Hypothesis

Hypothesis: the loss of my socks is due to alien burglary. Null hypothesis: the loss of my socks has nothing to do with alien burglary.

When it's okay to deceive participants

If our results are to be unbiased and uncontaminated by knowledge of the experiment and the expectancies that such knowledge may bring.

Convergent Validity Example

If strength is being measured, it should correlate with existing measures that test strength; a test of basic algebra should primarily measure algebra-related constructs.

Stratified Sampling Example

If you expected that volunteering rates would be different for students from different majors, you could first make separate lists of the students in each of the majors at your school and then randomly sample from each list
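
A minimal sketch of the same idea, with hypothetical per-major lists standing in for the separate sampling frames; 10% of each stratum is drawn at random (Python).

    import random

    # Hypothetical sampling frames: one list of students per major (the strata)
    majors = {
        "psychology": [f"psy_{i}" for i in range(300)],
        "biology":    [f"bio_{i}" for i in range(200)],
        "history":    [f"his_{i}" for i in range(100)],
    }

    sample = []
    for major, students in majors.items():
        # Random sample within each stratum, proportional to its size (10% here)
        sample += random.sample(students, k=len(students) // 10)

    print(len(sample))   # 30 + 20 + 10 = 60 students, with every major represented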

Quota Sampling Example

If you want only 5 9th graders, 5 10th graders, 5 11th graders, and 5 12th graders, you stop sampling each group when you reach those quotas.

Negative results or no-difference findings

In most investigations the presence of an effect is decided on the basis of whether the null hypothesis is rejected. The null hypothesis states that the experimental conditions will not differ, that is, that the independent variable will have no effect. Rejection of this hypothesis is regarded as a "positive" result, whereas failure to reject it is regarded as a "negative" result. Published research results usually involve rejecting the null hypothesis, i.e., positive results.

Individual Sampling

In systematic observation, the act of choosing which individuals will be observed. Ex: randomly selecting one child to be the focus.

Time Sampling

In systematic observation, the act of observing individuals for certain amounts of time, ex- focusing on one kid for 4 mins and then moving to the next

Method

Includes: participants, design, procedures. The design is likely to include two or more groups that are treated in a particular fashion.

Denying the antecedent

Incorrect. If A, then B: if I have a dog, it is a mammal. If not A, then not B: I don't have a dog, therefore I don't have a mammal.

Affirming the Consequent

Incorrect. If A, then B: if I have a dog, it is a mammal. Given B, conclude A: I have a mammal, therefore it is a dog.

Which validity is more important?

Internal Validity is more important than external validity

Split-half reliability

Internal consistency calculated by correlating a person's score on one half of the items with her or his score on the other half of the items, uses only some of the available correlations among items

Lab v. Applied/Field Research

Internal v. External Validity

Quota Sampling

A convenience sample with an effort made to ensure that a certain distribution of demographic variables is included.

Multiple Regression

A statistical technique based on Pearson correlation coefficients, both between each predictor variable and the outcome variable and among the predictor variables themselves. Advantage: it allows the researcher to consider how all of the predictor variables, taken together, relate to the outcome variable.
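
A minimal numpy-only sketch with simulated data (the predictor names and coefficients below are made up for illustration); it fits ordinary least squares and reports R as the correlation between predicted and observed outcomes.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    # Hypothetical standardized predictors, e.g., social support, study hours, SAT
    X = rng.normal(size=(n, 3))
    # Hypothetical outcome (e.g., college GPA) built from the predictors plus noise
    y = 0.1 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(size=n)

    # Ordinary least squares: add an intercept column and solve for the coefficients
    design = np.column_stack([np.ones(n), X])
    coefs, *_ = np.linalg.lstsq(design, y, rcond=None)
    y_hat = design @ coefs

    # Multiple correlation R: correlation between predicted and observed outcome
    R = np.corrcoef(y_hat, y)[0, 1]
    print("regression coefficients:", coefs[1:].round(2))
    print("R =", round(R, 2), " R^2 =", round(R ** 2, 2))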

Reactivity of assessment

Is treatment effective w/o being linked to testing. If subjects are aware their performance is being assessed, the measure is said to be *obtrusive.* Awareness to assessment may lead someone to respond differently than w/o assessment. This is a threat to external validity. p45

Construct validity

Interprets the basis of the causal relation demonstrated in the investigation.

Falsifiability

Karl Popper; a claim is confirmable when it is capable of being tested (verified or falsified) by experiment or observation; helps avoid confirmation bias.

Example of Ratio Scale

Kelvin temperature scale, where zero represents absolute zero; GPA; inches, millimeters, centimeters, meters (measures of length).

Discriminant Validity Example

Knowing how strong you are should have nothing to do with how happy you are; a test of basic algebra should not measure reading constructs.

Acknowledged participant

Lets people know you are a researcher, but you go out and live in the community.

LEVEL OF POWER

Level of confidence (ALPHA), the decision is based on convention about the margin of protection one should have against accepting the null hypothesis when, in fact, it is false (BETA).

Ad Dis of Unacknowledged Observer

Limits reactivity problems, but poses ethical questions

Attrition

Loss of subjects in an experiment

Threats to Statistical Conclusions Validity

Low statistical power; subject heterogeneity; unreliability of the measures; multiple comparisons.

Variance (s^2)

Measure of dispersion equal to the sum of squares divided by the sample size: s² = SS/N.
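
A minimal sketch of SS and s² = SS/N with made-up scores (Python). Note that the card's formula divides by N; many texts divide by N - 1 when estimating a population variance from a sample.

    scores = [2, 4, 4, 4, 5, 5, 7, 9]          # hypothetical scores
    n = len(scores)
    mean = sum(scores) / n                     # mean = 5.0
    ss = sum((x - mean) ** 2 for x in scores)  # sum of squares: SS = 32.0
    variance = ss / n                          # s^2 = SS / N = 4.0
    sd = variance ** 0.5                       # standard deviation = 2.0
    print(ss, variance, sd)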

Beneficence

Minimize the risks of harm and maximize the potential benefits; must do good science, have a plan for risk, and not do the study if the risks outweigh the benefits.

Positive Linear relationship

The more you study, the better your grade; e.g., r = +.82.

Simple Random Sampling

Most basic probability sample, goal: to ensure that each person in the population has an equal chance of being selected in sample

Social Desirability

Most common type of reactivity, the natural tendency for research participants to present themselves in a positive or socially acceptable way to the researcher

Multiple Treatments Designs

Multiple-treatment designs allow the comparison of two or more treatments, usually within the same intervention phase. They include:
1. Multi-element design
2. Alternating-treatments or simultaneous-treatment design
3. Other multiple-treatment design options:
i. Simultaneous availability of all conditions
ii. Randomization design
iii. Combining components
Design variations:
a. Conditions included in the design: one of the conditions included in the treatment phase can be a continuation of baseline

Example of Basic Research

Neurologist studying brain to learn about its general workings

Snowballing Sampling

A non-probability technique that can be used when a population of interest is rare. Ex: one or more individuals from the population are contacted, and these individuals are used to lead the researchers to other members of that population; e.g., the homeless population.

Systematic Selection

Non-probability: picking every 20th person in a line. Ex: taking everyone in the 4th and 8th rows.

Convenience Sampling

Non-probability: grab a handy population, like college students; used when researchers sample whatever individuals are readily available without any attempt to make the sample representative of a population.

Inferential Statistics

Numbers, such as a p-value, that are used to infer the characteristics of a population on the basis of the data in a sample; used to look for relationships between variables; statistics that allow you to draw conclusions.

Descriptive Statistics

Numbers such as the mean, median, mode, standard deviation, and variance that summarize the distribution of a measured variable; they describe our variables.

hypothesis Example

Observing violent television will cause increased aggressive behavior

Moderator

On what factors do relations depend?

Reverse Causation

One possibility for a correlation is that the causal direction is exactly opposite from what has been hypothesized. So instead of violent TV making children aggressive, children who behave aggressively develop a residual excitement that leads them to want to watch violent TV shows.

Reciprocal causation

One possibility for a correlation: both causal directions are operating, and the two variables cause each other.

Cohort effect

Differences between groups of participants that are attributable to their belonging to different cohorts (e.g., generations) rather than to the variable of interest.

Example of Matching

Participants could be assigned so that the average age is the same for all of the different treatment conditions. In this case, age is balanced across treatments, and therefore cannot be a confounding variable

Empir. Study Method

Participants, Recruitment source/approach, demographics, enrollment & eligibility (exclusion/inclusion); Measures, description, psychometric properties; Procedures, Details of assessment, manipulation, and schedule.

Test sensitization

Pre-/Post-Test effects.

Alpha 2

Probability of making Type 1 error (rejecting the null hypothesis when the null hypothesis is true)

Probability Sampling

Procedures used to ensure that each person in the population has a known chance of being selected to be a part of the sample; allows inferences about the population to be made. Types: 1) Simple random sampling, 2) Systematic random, 3) Stratified, 4) Cluster.

Advantages of Descriptive

Provides a relatively complete picture of what is occurring at a given time

Interquartile Range (IQR)

IQR = Q3 (75th percentile) - Q1 (25th percentile). This can be weak because two very different sets of data can have the same IQR.
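
A minimal numpy sketch using two made-up data sets chosen so that, despite looking quite different, both come out with the same IQR of 4 (illustrating the weakness noted above).

    import numpy as np

    a = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9])
    b = np.array([1, 3, 3, 4, 5, 6, 7, 7, 20])     # different shape, outlier at 20

    for data in (a, b):
        q1, q3 = np.percentile(data, [25, 75])
        print("Q1 =", q1, "Q3 =", q3, "IQR =", q3 - q1)   # IQR = 4.0 for both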

True Experiment

Randomized, controlled clinical trials.

Ordinal Scale

Rank order, numbers do not indicate the exact interval between the individuals on the conceptual variable

Testing

Refers to a threat to internal validity wherein performance changes as a subject repeats a test. a.k.a. practice effects.

Sampling

Refers to the selection of people to participate in research project, usually with the goal of being able to use these people to make inferences about a larger group of people

Type 1 Error (Alpha)

Finding an association that does not exist (rejecting the null hypothesis when it is actually true).

Acknowledged Observer

The researcher does not participate in the group, but the participants know they are being watched, like coming in to do a study. Ex: Pomerant's study of children's social comparison.

Unacknowledged Observer

The researcher does not participate in the group, and the participants do not know they are being watched. Ex: watching kids on a playground.

Experimental research

Research used to demonstrate a cause and effect relationship between 2 variables, something (IV) is manipulated by experimenter

Ad and Dis of Acknowledged Observer

Researchers are able to spend the entire session coding behavior, but there is potential for reactivity since the children knew they were being watched.

Example of Reverse-scored

Rosenberg scale, the reversed items are changed so that 1 becomes 4, 2 becomes 3, 3 becomes 2, and 4 becomes 1
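
A minimal sketch of the arithmetic for a 1-4 response scale like the one described: subtracting each answer from 5 maps 1 to 4, 2 to 3, 3 to 2, and 4 to 1 (the item responses below are made up).

    responses = [1, 4, 2, 3, 4]                   # hypothetical answers on a 1-4 scale
    reversed_items = [5 - r for r in responses]   # 1->4, 2->3, 3->2, 4->1
    print(reversed_items)                         # [4, 1, 3, 2, 1]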

Statistical significance is a direct function of and depends heavily on?

SAMPLE SIZE. The larger the sample size the smaller the group differences needed for statistical significance for a given level of confidence

Parsimony

SIMPLE *selecting the least complex interpretation that can explain a particular finding.

Example of Case Study

Sigmund Freud's cases, Jean Piaget with his own children, removing part of an animal's brain.

Empir. Study Introduction

Significance of problem, Relevant background, All focused on setting up the present study and hypotheses, Primary hypotheses (secondary and exploratory questions). *HOUR GLASS*

Reliability and Validity

Similar: both are assessed through examination of the correlations among measured variables. Different: reliability refers to correlations among the different variables that the researcher plans to combine into the same measure of a single conceptual variable, whereas validity refers to correlations of a measure with measures of other conceptual variables.

Test-retest Reliability

Simply give the test at two different times to the same people and compare the scores from the two instances; the greater the similarity, the higher the positive correlation and the higher the reliability.

Novelty effects

Something works b/c it is new, not better.

Confidential

Sometimes data can't be anonymous because the researcher must keep track of which respondent said what. Ex: use a unique identifying number.

Operationalization

Specification of the operations or procedures used to measure some variable; relevant to both the IV and the DV.

Ad of using Systematic Observation

Specificity about the behaviors of interest has the advantage of both focusing the observers' attention on these specific behaviors and reducing the mass of data collected.

Effect Size and Statistical Significance

Statistical significance = effect size X sample size

Empir. Study Discussion

Summary of primary findings; Findings of interest; Implications; Sometimes highlight study strengths; Limitations; Future directions; General conclusion; *HOURGLASS*

Interview

Survey usually administered in the form of an interview -in which questions are read to the respondent in person or over the telephone

Multiple correlation coefficient

Symbolized by the letter R; indicates the extent to which the predictor variables as a group predict the outcome variable. Thus it is the effect size for the multiple regression analysis.

Selection Biases

Systematic differences between groups before an experimental manipulation or intervention on the basis of the selection or assignment of subjects to groups.

Meta Analysis

Systematically finds articles about specific issues and summarizes it.

INFORMED CONSENT

TELLING THEM EVERYTHING BEFORE THE STUDY IS CONDUCTED * EXCEPTION- archival records

Statistical Regression

Tendency for extreme scores on any measure to regress/revert toward the mean of a distribution over time.

Example of Multiple correlation coefficient

The ability to predict the outcome measure using all three predictor variables at the same time (R = .34) is better than any of the zero-order correlations, which are: social support .14, study hours .19, SAT .21.

Statistically Non significant

The conclusion to not reject the null hypothesis, made when the p-value is greater than alpha p>0.05

Multiple-treatment interference

The effects obtained in the experiment may be due in part to the context or series of conditions in which it was presented. Threatens external

Rosenthal effect

The experimenter's preconceived idea of appropriate responding influences the treatment of participants and their behavior

Construct Validity

The extent to which a measured variable actually measures the conceptual variable that it is designed to measure, a measure only has construct validity if it measures what we want it to 1.) Convergent Validity 2.) Discriminant Validity

Criterion Validity

The extent to which a self-report measure correlated with a behavioral measured variable -extent to which one can infer from an individual's score how well she will perform some other external task or activity that is supposedly measured by the test in question 1.) Predictive validity 2.) Concurrent Validity

Statistical Power

The likelihood of finding differences between conditions when in fact the conditions are truly different in their effect.

ABAB Designs

The logic of the ABAB design and its variations consist of making and testing predictions about performance under different conditions The data in the separate phases provides three types of information: 1. Present performance (descriptive) 2. Prediction of the probability of future performance (predictive) 3. Test the extent to which predictions were accurate (replication)

Negative linear relationship

The more you study, the less time you have to watch TV; e.g., r = -.70.

Tension b/t Internal and External Validity

The more you tighten internal validity (the priority), you lose control of external validity.

Relationship between power and beta

The power of a statistical test is the probability that the researcher will be able to reject the null hypothesis given that the null hypothesis is actually false and should be rejected. Power and beta are redundant concepts because power can be written in terms of beta: Power = 1 - beta.
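
A minimal Monte Carlo sketch of "power = 1 - beta", assuming scipy is available; it simulates many two-group experiments in which the null hypothesis is actually false and counts how often it is rejected at alpha = .05 (all design values below are made up).

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_per_group, true_effect, alpha = 30, 0.5, 0.05   # hypothetical design values
    n_sims = 2000

    rejections = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(true_effect, 1.0, n_per_group)   # the null is actually false
        if stats.ttest_ind(treated, control).pvalue < alpha:
            rejections += 1

    power = rejections / n_sims    # estimated power
    beta = 1 - power               # estimated Type 2 error rate
    print(round(power, 2), round(beta, 2))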

Deception

The practice of not completely and fully informing research participants about the nature of a research project before they participate in it; sometimes used when the research could not be conducted if participants knew what was really being studied.

Regression Coefficient Example

The regression coefficient of .19 indicates the relationship between study hours and college GPA, controlling for both social support and SAT, so can conclude that study hours predicts GPA even when the influence of social support and SAT is controlled

Sample

The smaller group of people who actually participate in the research

Reading Multiple Correlation Coefficient

The statistical significance of R is tested with a statistic known as F. Because R is the effect size statistic for multiple regression analysis, and R2 is the proportion of variance measure, R and R2 can be directly compared to r and r2

Reason 3 of Correlation

Third variable- viewing media violence (A) and aggressiveness (B) both caused by third variable such as hostile personality (C)

Sampling Bias

This occurs when the sample is not actually representative of the population because the probability with which members of the population have been selected for participation is not known.

History

This threat to internal validity refers to any event--other than the independent variable--occurring in the experiment or outside of the experiment, that may account for the results.

Reactivity of experimental arrangements

Threatens external validity through participants' awareness of the fact that they are being studied. p42

Internal Validity

To what extent can the intervention, rather than extraneous influences, be considered to account for the results, changes, or group differences? p23

External Validity

To what extent can the results be generalized or extended to people, settings, times, measures, and characteristics other than those in this particular experimental arrangement? p23

Statistical Conclusions Validity

To what extent is a relation shown, demonstrated, or evident, and how well can the investigation detect effects if they exist?

Concurrent Validity

Type of Criterion validity, the extent to which a self-report measure correlates with a behavior measured at the same time

Predictive Validity

Type of criterion validity; the extent to which a self-report measure correlates with (predicts) a future behavior. A test score is useful in predicting some future performance.

Debriefing

Usually the last step in a research project; information given to a participant immediately after an experiment has ended that is designed both to explain the purposes and procedures of the research and to remove any harmful aftereffects of participation. Its last goal is to eliminate longer-term consequences of having participated in the research.

Major Constructs

Variations: suicide v. suicidal, depression v. depressive symptoms, delinquency v. conduct problems, externalizing v. antisocial behavior

Reason 1 for Correlation

Viewing media violence (A) does, in fact cause aggressiveness (B)

Stratified Sampling Example 2

Want to determine the average income in the U.S., so first stratify the sample by geographic region (North, East, Midwest) and/or stratify by urban vs. rural.

Data analyses can enhance data interpretation

We have to understand subgroups. If two groups are being compared, then the researchers are looking for a main effect (overall effect) of two or more conditions.

Mediator

What is the mechanism by which the variables are related?

Abstract

What is/isn't in the abstract.

Alpha, Beta, Type 1, Type 2

When the alpha level is set lower, beta will always be higher; so although setting a lower alpha protects us from Type 1 errors, doing so may lead us to miss the presence of a weak relationship and make a Type 2 error.

Cluster Sampling

When no sampling frame is available, a probability sampling technique that breaks the population into a set of smaller groups (called clusters) for which there are sampling frames, and then randomly chooses some of the clusters for inclusion in the sample.

When to use different types of sampling methods

When non-probability sampling techniques are used, either because they are convenient or because probability methods are not feasible, the results are subject to sampling bias and cannot be generalized from the sample to the population.

Correlation matrix

When there are many correlations to be reported at the same time, they are presented in a correlation matrix, which is a table showing the correlations of many variables with each other. Ex: correlating SAT, social support, study hours, and college GPA.

Reify, Reification

When we think that what we defined is a real thing because we gave it a name.

Study Population

While the target population is theoretical, the study population is the people who could actually be in your study. Ex: all Americans with a phone, because you are doing a phone study.

Timing of Measurement

Would the same results be seen if the timing had been different, say if the measure were recorded several months later? A different season or even time of day? p47

Response bias

Yea-sayers, participants who tend to answer yes to all questions Nay-sayers, participants who tend to answer no to all questions

"latent" variable

a construct represented by several measures

Replication and negative results are related because

a finding is replicated if the first (original) study and the next study both show a significant effect

Likert scale

A fixed-format self-report scale that consists of a series of items indicating agreement or disagreement with the issue that is to be measured; good for measuring opinions, beliefs, and people's feelings about topics.

Standard normal distribution

a hypothetical population distribution of standard scores, the mean=0 and SD=1, it allows us to calculate the proportion of scores that will fall at each point in the distribution
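
A minimal sketch of using the standard normal distribution to get proportions, assuming scipy is available; the specific z values are just examples.

    from scipy import stats

    below = stats.norm.cdf(1.0)                            # proportion of scores below z = 1.0 (about .84)
    between = stats.norm.cdf(1.0) - stats.norm.cdf(-1.0)   # proportion within one SD of the mean (about .68)
    print(round(below, 2), round(between, 2))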

Random Assignment

a method of ensuring that the participants in the different levels of the independent variable are equivalent before the experimental manipulation occurs

Observable variable

a non-manipulated aspect of observational research

antecedent

a preceding circumstance, event, object, style, phenomenon

Operational Definition

a precise statement of how a conceptual variable is measured or manipulated

Example of Demand Characteristics

a psychologist may unintentionally put an emphasis on an idea or make a statement that leads the person to conform to a certain idea

Linear Relationships

A relationship between two quantitative variables that can be approximated with a straight line.

P-value or Probability value

A statistic that shows the likelihood of an observed result occurring on the basis of chance alone. If the p-value is 0.03, that means there is a 3% chance of observing a difference as large as the one you observed even if the two population means are identical.

Survey

A series of self-report measures administered either through an interview or a written questionnaire; the most widely used method for gathering descriptive information (a type of descriptive research).

Convergent Validity

a type of construct validity, The extent to which a measured variable is found to be related to other measured variables designed to measure the same conceptual variable

Histogram

a visual display of a grouped frequency distribution that uses bars to indicate the frequencies, for quantitative, different from a bar graph because the bars touch each other and indicates the original variable is quantitative

Bar Chart

a visual display of frequency distributions

CCD: Problems and Limitations

a. Gradual Improvement Not Clearly Connected to Shifts in the Criterion b. Rapid Changes in Performance c. Correspondence of the Criterion and Behavior d. Magnitude of Criterion Shifts

Multiple Tx Design; Problems and Limitations

a. Omitting the initial baseline: if the two interventions are not different in their effects, without a baseline to compare them to you do not know whether both treatments were equally effective or equally ineffective.
b. Type of intervention and behaviors: 1. Interventions suitable for multiple-treatment designs may need to show rapid effects initially and to have little or no carryover effects when terminated. 2. The behaviors or outcomes of interest studied in multiple-treatment designs.
c. Discriminability of the interventions: 1. The client must be able to make this discrimination across different stimulus conditions. 2. Ease of making these discriminations often rests on a. the similarity of the interventions and b. the frequency of exposure to the interventions.
d. Number of interventions and stimulus conditions: i. Theoretically, any number of interventions can be evaluated. 1. Practically, as the number of interventions increases, the number of days or sessions needed to counterbalance their presentation increases as well. 2. A general rule is that 2-3 interventions are optimal to compare. ii. Counterbalancing: in general, most alternating-treatment designs balance the interventions across two levels of a particular dimension.
e. Multiple-intervention interference: i. Multiple-treatment interference refers to the effect of one treatment being influenced by the effects of the other. 1. Can be directly influenced by the intervention. 2. Can be influenced by the sequence in which the treatments are presented. 3. Occasionally investigators include a reversal phase in ABAB designs with multiple treatments in the belief that recovery of baseline levels of performance removes the possibility of multiple-treatment interference; however, an intervening reversal phase does not rule out the possibility of sequence effects. 4. The results of a particular intervention in a multiple-treatment design may be determined in part by the other intervention to which it is compared.

Design Additions

a. Probes: a strategic use of noncontinuous assessment to answer questions about generalization of effects across situations as well as behaviors.
b. Graduated withdrawal of the intervention: gradual withdrawal of interventions is a strategy that allows for a return to baseline with the expectation that behavior will be maintained.

Theory of Knocks

"Female, 30 years old, 5 feet 2 inches tall, carrying a book and a purse in the left hand and knocking with the right" The less likely the observation, ie the riskier the prediction, the better test of a theory

Regression line

"line of best fit" because it is the line that minimizes the squared distance of the points from the line

what question do we ask to assess reliability?

"to what extent do observers looking at the client record in a consistent matter?"

Effect Size

Cohen's d = (M1 - M2) / SD. A large SD gives a small effect size; a large numerator (mean difference) gives a large effect size.
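
A minimal worked sketch of the formula with made-up group means and a made-up (pooled) standard deviation; by Cohen's conventions, d around 0.5 is a medium-sized effect.

    m1, m2 = 78.0, 72.0      # hypothetical group means
    sd = 12.0                # hypothetical pooled standard deviation
    d = (m1 - m2) / sd       # Cohen's d = 6 / 12 = 0.5
    print(d)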

Regression Coefficient

(b) indicates the extent to which any one independent variable predicts the dependent variable, taking account of, or controlling for, the effects of all the other independent variables.

advantages of changing-criterion design

* does not require withholding treatment *convincing demonstration of effect when the performance matches the criterion as it shifts

bidirectional changes

* including both increases and decreases in the criterion/behavior change *making the criterion less stringent for a short period of time

Intervention Research Issues

* informing clients about treatment *withholding the intervention *control groups and treatments of questionable efficacy * consent and threats to validity

Deception

* misrepresenting nature of experiment *being ambiguous about experiment (not being told all of the details of investigation) *has major ethical issues * but sometimes is okay to be done if will be good for investigation

limitations of changing criterion

*arise when behavior changes don't precisely correspond with criterion shifts *gradual improvement that is not clearly connected to criterion shifts *rapid changes in performance that exceed the criterion

underlying rationale/logic

*baselines describe current performance and predict future performance *intervention phases test the prediction to see whether performance departs from what would be expected if the baseline phase had continued and the intervention had not been implemented

demonstration of effect

*behavior changes incrementally in response to changes in the set criterion *enhanced through use of bi-directional changes

advantages of single-case designs

*demonstrating the impact of interventions on individuals *making decisions to improve the intervention while it is still in place *permit the study of low-frequency phenomena

advantages of multiple treatment designs

*does not depend on reversal of conditions *interventions can be implemented and evaluated even when baseline data show initial trends

advantages of multiple baseline

*does not require reversal to show experimental design

Fraud in science

*explicit efforts to deceive and misrepresent *deliberate efforts of researchers to deceive others *tend to often be in the area of health

limitations of single-case research evaluations

*lack of concrete design rules *only very marked effects may be noticed *particular patterns of data required (ex:mean, slope)

limitations of multiple treatment designs

*omitting the initial baseline *if two interventions are not different in their effects, without a baseline to compare it you do not know whether both treatments were equally effective

limitations of changing criterion

*requires careful selection of criterion change *loss of control

Debrief

*review about what was deceived about *explain those details lied about *minimize negative effects of deception *make sure subject is feeling comfortable and not duress * make sure subject does not think this is another deception

options for criteria

*specific points- a determined number of something *range- minimum and maximum values available for reinforcement

circumstances under which combined designs are used

*to address anticipated problems *to address problems that observed over the course of investigation

Why is this design used less than others?

*unclear guidelines for use compared to others *developing behavior gradually does not have clear guidelines either

Internal Validity

*the extent to which the observed effect can be attributed to the intervention rather than to flaws or extraneous influences *internal validity makes sure the effect of the independent variable is free of issues and threats *priority for research

Face Validity and Construct Validity

- Not all measures that appear face valid are actually found to have construct validity, ex: when items ask about racial prejudice (they have high face validity) but people are unlikely to answer them honestly

How Cronbach coefficient alpha is interpreted

- Ranges from a= 0.00 (indicating that the measure is entirely error) to a= +1.00 (indicating that the measure has no error)
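
A minimal numpy sketch of the usual computation, alpha = k/(k-1) * (1 - sum of item variances / variance of total score); the item responses below are made up.

    import numpy as np

    # Hypothetical item responses: rows = respondents, columns = items
    items = np.array([
        [3, 4, 3, 4],
        [2, 2, 3, 2],
        [4, 5, 4, 5],
        [3, 3, 3, 4],
        [1, 2, 1, 2],
    ])

    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the total scale score
    alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
    print(round(alpha, 2))                       # closer to 1.00 means less random error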

Reading Regression Coefficient

Each regression coefficient can be tested for statistical significance via its p-value, and because the coefficients represent the effect of each IV holding constant (controlling for) the effects of the other IVs, they can be used to indicate the relative contribution of each predictor variable.

Threats to external validity

Sample characteristics: a central question is the extent to which the results can be generalized to others who vary in age, race, etc.
Stimulus characteristics and settings: whether the results extend across the stimulus characteristics of the investigation; differences between controlled settings and clinical settings.
Reactivity of experimental arrangements: the influence of subjects' awareness that they are participating in an investigation.

Why More Items is More Reliable

Because random error is self-canceling, the random error components of each measured variable will not be correlated with each other; as a result, when they are combined by summing or averaging, the use of many measured variables will produce a more reliable estimate of the conceptual variable (90).

Why Convergent and D are best for Construct validity

-Construct validity refers to the degree to which a test or other measure assesses the underlying theoretical construct it is supposed to measure, So in order to determine the CV of an algebra test, one would need to demonstrate that the correlations of scores on that test with scores on other algebra tests are higher (CV) than the correlations of scores on reading tests (DV).

Systematic Error Example

Ex: individuals with higher self-esteem may score systematically lower on the anxiety measure than those with low self-esteem, and more optimistic individuals may score consistently higher. So the measured variable assesses self-esteem and optimism instead of the conceptual variable, anxiety. Nay-sayers and yea-sayers are other examples.

Defining Characteristic of an experimental design

Instead of just measuring the IV and DV as in correlational research, experimental research manipulates the IV to see its effect on the DV, which also allows inferences about causation.

Example of Operational Definition

Number of days per month that an employee shows up to work on time, or rating of job satisfaction from 1 to 9 (employee satisfaction); number of presses of a button that administers shock to another student (aggression).

Limited Population Designs

One way of controlling variability among participants is to select from a limited population, ex: college students. Disadvantage: there is no way to know whether the findings are specific to college students or would hold up for other groups of people.

Retesting Effects

-Reactivity that occurs when the responses on the second administration are influenced by respondents having been given the same or similar measures before

Role of the IRB

Scientists submit a written application to the IRB requesting permission to conduct research. The goal of the IRB is to determine, on the basis of the research description, the cost-benefit ratio of the study.

Item-to-Total Correlations

A strategy commonly used in the initial development of a scale: you calculate the correlations between the score on each individual item and the total scale score excluding the item itself; items that do not correlate highly with the total score can then be deleted from the scale.

Determining Sample size

You have to have enough participants to identify an effect or difference when it exists. Larger samples will produce a more accurate picture and thus have a lower margin of error.

Content Validity Example

-an intelligence test that contains only geometry questions lacks content validity because there are other types of questions that measure intelligence that were not included

Safeguard for vulnerable population

-assessing the decision-making capacity of potential participants -ensuring incentives are not coercive -allowing adequate time to consider participation

Make Up of IRB

-at least five members including, in addition to scientist, at least one individual whose primary interest is in nonscientific domains, and at least one member who is not affiliated with the institution at which the research is being conducted

Less Reactive

Behavioral measures are less reactive because they do not involve direct questioning of people.

Example of One Way Research Design

-children who view the violent cartoons will play more aggressively than those who had viewed the nonviolent cartoons

Two-way experimental design

-design with 2 IV

Experimental Manipulation

Done to guarantee that the IV occurs prior to the DV; the manipulation becomes the IV. Allows the researcher to rule out the possibility that the relationship between the IV and DV is spurious.

One-Way Experimental Design

Has one independent variable.

Concurrent Validity Example

If you have a self-report measure and a behavioral measure assessing the same thing at the same time. Ex: ask someone if they are racist, then watch the person interact with African Americans; the two measures should yield similar scores.

Retesting Effect Example

If people remember how they answered the questions the first time and believe the experimenter wants them to express different opinions on the second administration, or if people try to duplicate their previous answers exactly.

Example of Longitudinal designs

Measuring violent television viewing and aggression in children when they were eight and again when they were 18.

Issues involved in reporting research results

Mistakes can be made because the scientist is not careful about how he or she collects and analyzes data, ex: errors made in keypunching the data or in the process of conducting the statistical analyses. Scientific fraud occurs when scientists intentionally alter or fabricate data.

Relationship between Validity and Reliability

Something can be reliable but not valid; something cannot be valid without being reliable.

Between-groups variance

The variance among the condition means; reflects the influence of the manipulation. If this is higher than the within-groups variance, we conclude that the manipulation has influenced the DV.

within-participants (repeated measures)

Uses the same people in each level of the IV; use a dependent (paired-samples) t-test.

Between-Participants Designs

Using different but equivalent participants in each level of the IV. Ex: one group of people assigned to A, the other group assigned to B; use an independent t-test.

Mediating variable example

-we might expect that the level of arousal of the child might mediate the relationship between viewing violent material and displaying aggressive behavior, so we can say viewing violent material increases aggression because it increases arousal

Predictive Validity Example

When an industrial psychologist uses a measure of job aptitude to predict how well a prospective employee will perform on the job; when an educational psychologist predicts school performance from SAT or GRE scores.

Factorial Experimental Designs

When you have more than one IV (manipulated variable). Ex: viewing violent vs. nonviolent cartoons, and the state of the child before viewing (frustrated versus nonfrustrated).

Accelerated multicohort longitudinal design (Cohort sequential design)

...

Bidirectional path

...

Case-control design

...

Combination of selection and other threats

...

Community

...

Construct Validity: Measurement

...

Cost of High 'd'

...

Cross-sectional

...

Design Strengths

...

Diffusion or imitation of treatment

...

Edited Chapter

...

Empiricism

...

Experiment

...

Exposure therapy

...

Falsifiability/Refutability

...

Inferences regarding sequencing and causality

...

Latent Variables

...

Longitudinal

...

Maximize conclusions

...

Methodology

...

Multi-group Cohort Design

...

Multi-informant

...

Observed Variables

...

Operational Definitions

...

Operationalism

...

Plausible rival hypotheses

...

Power (1-beta)

...

Prospective

...

Quasi-Experiment

...

Retrospective

...

Review Article

...

Sample characteristics

...

Single-Group Cohort Design

...

Single-Group Cohort Design (Birth Cohort)

...

Special Treatment or reactions of controls

...

Stimulus Characteristics and Settings

...

Table 5.1 pp.112-13

...

Threats to Construct Validity

...

Tiers

...

Unidirectional path

...

Strong Relationship

.54

Problems and Limitations with Reversal Design

1. Absence of Reversal 2. Undesirability of Reversal

3 areas of conflict on interest:

1. Actual conflict of interest 2. perception of a conflict 3. lack of communicating information

common concerns of stats significance testing

1. All-or-none decision making 2. significance is a function of N 3. says nothing about strength or importance of effects

Research designs include what 3 components?

1. Assessment 2. experimental design 3. data evaluation

social validation is a guide for...

1. Assessment 2. intervention

Threats to construct validity

1. Attention and contact with clients 2. Single operations and narrow stimulus sampling 3. experimenter expectancies

3 components of informed consent

1. COMPETENCE- making a well-reasoned decision 2. KNOWLEDGE- understanding the study 3. VOLITION- giving consent without being coerced.

General Requirements of Single-case Designs

1. Continuous Assessment 2. Baseline Assessment 3. Stability of Performance

4 types of Validity

1. External 2. Internal 3. Construct 4. Statistical

4 broad characteristics of behavior modification

1. Focus on behavior 2. Focus on current determinants of behavior 3. Focus on learning experiences to promote change 4. Assessment and evaluation

Common leaps in language and conceptualization in findings

1. Highly significant effects- has no significant meaning in null hypothesis testing 2. one variable predicts another 3. implications of my findings

Negative results are interpretable (informative )

1. In the context of a program of research, negative results can be very informative 2. can be informative when negative results are replicated across several different investigators 3. when the study shows the conditions under which the results are and are not obtained.

MBL: Problems and Limitations

1. Interdependence of Baselines 2. Inconsistent Effects of the Intervention 3. Prolonged Baseline

Threats to statistical conclusion validity

1. Low-statistical power 2. variability in procedures 3. subject heterogeneity

misconceptions of stats significance testing

1. p reflects the likelihood that the null hypothesis is true 2. a more extreme p value (e.g., p < .0001) means a stronger effect 3. no difference means that there is no real effect

Combined Designs: Weaknesses

1. Practical issues can arise when the weaknesses of designs have an additive effect when brought together a. This can be the case when a reversal phase is included within a MB design, as this both delays the introduction of treatment and suffers from the disadvantages of the reversal phase as well

3 other conventions are also suggested to improve the aesthetics and readability of behavior analysis graphs:

1. Ratio of ordinate to abscissa- graphs are easiest to read if the data points are not too close together on the axis. 2. connecting the data points- a line should be drawn between data points that almost touches each point 3. amount of data presented per graph- researchers invariably take more than one measure of behavior.

types of research

1. TRUE EXPERIMENTS- random assignments of subjects 2. QUASI-EXPERIMENTS- cannot randomly assign subjects 3. CASE-CONTROL- variable of interest of subject

limitations within reversal design

1. absence of reversal- behavior does not revert toward baseline 2. undesirability of reversal- ethical considerations in "making the behavior worse" especially when the behavior that is changed is dangerous

4 possible factors make drawing any conclusion difficult in analyzing graphic data:

1. amount of variability 2. trending 3. replication 4. analysis of apparent treatment effects

2 ways to protect invasion of privacy

1. anonymity- the identity of subject and their performance is not revealed 2. confidentiality- information will not be disclosed to a third party without awareness and consent

what 3 questions does social validation ask?

1. Are the goals relevant to everyday life? 2. Are the intervention procedures acceptable to consumers? 3. Are the outcomes of the intervention important?

Highly significant effects

Be cautious pairing the words "highly" and "significant" because doing so can cause issues; it may reflect a misunderstanding about hypothesis testing and lead to misinterpretation and overinterpretation.

data analysis of treatments trials when some subjects drop out of the study:

1. completer analysis- (the most commonly used in psychological research) the investigator merely analyzes the data for only those subjects who completed the study and who completed the measures on each occasion 2. intent-to-treat analysis- (quite commonly used in other fields, such as medicine) is designed to preserve randomization of the groups by following one rule: "The data for any subject ought to be analyzed according to the group to which he or she was assigned, whether or not the intended treatment was given"

General requirements of single-case experiments?

1. continuous assessment 2. baseline assessment 3. stability of performance

time frame of research

1. cross-sectional studies- comparison between groups at one given time 2. longitudinal studies- comparisons over extended period of time (usually years)

Baseline assessment

1. descriptive function- data collected during the baseline phase describe the existing level of performance 2. predictive function- baseline data serve as a basis for predicting the level of performance for the immediate future if the intervention is not provided

types of replication

1. direct replication- an attempt to repeat an experiment exactly as it was conducted originally 2. systematic replication - repetition of the experiment by systematically allowing features to vary

types of variables to study?

1. environmental- what happens around the subject 2. instructional- verbal or written statements about the experiment and their participation 3. subject differences- characteristics of subjects

guidelines for the design of graphs in ABA

1. historical precedents- regarding the visual, graphic representation of data, which should be aesthetically pleasing and clear ** graphs should have quality features and essential structure

ways to increase power

1. increasing sample size- most important 2. use directional tests for significance testing- 3. use pretests/repeated measures

limitations of multiple baseline

1. interdependence of baselines 2. inconsistent effects of intervention 3. prolonged baseline

conditions of experiments

1. lab vs. applied- highly controlled vs. everyday life 2. efficacy vs. effectiveness- intervention or program outcome research under controlled conditions vs. target outcomes obtained in clinic settings where the usual control procedures are not implemented

alternatives to significance testing

1. magnitude or strength of effect- effect size but there are many others 2. confidence intervals- provide range of values and the likelihood that the ES in the population falls within a particular range 3. Meta-analysis- extends the use of ES across many studies

multiple treatment designs

1. multi-element design- CONTROL IS DIFFERENT CONDITIONS 2. alternating treatment

Several features of the study that warrant

1. multicenter 2. double-blind 3. placebo controlled trial

2 advantages of changing criterion

1. no reversal 2. no additional baselines are required to show experimental control

7 control groups

1. no-treatment 2. waiting list 3. no contact 4. nonspecific treatment or "Attention-placebo" 5. routine or standard treatment 6. yoked 7. non randomly or non equivalent

operational definition should meet the criteria ...

1. objectivity 2. completeness 3. clarity

continuous assessment

1. reliance on repeated measures over time 2. basic requirement of single-subject design 3. several observations are obtained for one or a few persons

4 different concepts of statistical inference

1. statistical significance (Alpha) 2. Effect Size (ES) 3. Sample Size 4. Power

reviewers examine general issues as whether

1. the question is important for the field 2. the design and methodology are appropriate to the question 3. the results are suitably analyzed

what makes a behavior of functioning worthy or in need of an intervention?

1. the setting may dictate the focus of goals 2. dysfunction, maladaptive behavior 3. preventing problems from developing

some obvious reasons for no-difference findings

1. there are no differences in the population 2. power was low and probably too weak to detect a difference 3. levels of the independent variable were not optimal

Main sections of the journal article:

1. title 2. abstract 3. introduction 4. method 5. results 6. discussion

What to look for when analyzing your data

1. too short baseline 2. lack of stability before changing conditions 3. delayed onset of treatment effect 4. failure to replicate conditions

stability of performance

1. trend in the data 2. variability in the data

3 guidelines to determine the changes in criterion

1. use gradual changes to maximize likelihood of meeting the criteria early on 2. must be able to detect correspondence between criterion and behavior 3. changes don't have to be equal across the course of intervention

data evaluation in single-case research

1. visual inspection 2. changes in mean 3. changes in level- shifts from one phase to the next 4. latency of change

Naturalistic Method Ad

1.) A large amount of data can be collected very quickly, and this info can provide basic knowledge about the phenomena of interest and ideas for future research 2.) Data have high ecological validity because they involve people in their everyday lives

Statistical power is influenced by

1.) Alpha level- the lower the alpha level, the higher the beta level, which leads to more type 2 errors and lower power 2.) Effect Size- power depends on how big the relationship being searched for actually is 3.) Sample Size- as sample size increases, so does the power of the test
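Not from the text: a minimal Python sketch (made-up numbers, hypothetical helper name estimated_power) that illustrates how effect size and sample size influence power by simulating many two-group experiments and counting how often a t-test rejects the null at alpha = .05.

    import numpy as np
    from scipy import stats

    def estimated_power(effect_size, n_per_group, alpha=0.05, sims=2000, seed=0):
        rng = np.random.default_rng(seed)
        rejections = 0
        for _ in range(sims):
            control = rng.normal(0.0, 1.0, n_per_group)
            treatment = rng.normal(effect_size, 1.0, n_per_group)  # a real difference exists
            _, p = stats.ttest_ind(treatment, control)
            rejections += p < alpha                                # count correct rejections
        return rejections / sims

    # Larger samples (and larger effects) yield higher estimated power.
    print(estimated_power(effect_size=0.5, n_per_group=20))
    print(estimated_power(effect_size=0.5, n_per_group=80))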

3 Factors in determining causation

1.) Association 2.) Temporal Priority 3.) Control of Common Causal Variables

Identify various ways of knowing

1.) Authority 2.) Personal Experience/Casual Observation 3.) Faith 4.) Common Sense/Tenacity 5.) Intuition/logical/reason 6.) Systematic Observation/ Scientific Method

Dis of WIthin Subjects

1.) Carryover effects 2.) Practice and Fatigue effects

Strategies for Improving Reliability and Validity

1.) Conduct a pilot test 2.) Use multiple measures, the more items a test has, the more reliable it will be 3.) Ensure variability within your measures 4.) Write good items 5.) Attempt to get your respondent to take your questions seriously 6.) Attempt to make your items nonreactive 7.) Be certain to consider face and content validity by choosing items that seem reasonable and that represent a broad range of questions concerning the topic of interest 8.) when possible, use existing measures, rather than creating your own

Types of Validity

1.) Construct Validity 2.) Content Validity 3.) Convergent Validity 4.) Discriminant Validity 5.) Face Validity 6.) Criterion validity 7.) Predictive Validity 8.) Concurrent Validity

Participants as extraneous Variables

1.) Demand characteristics 2.) Good participant Effect 3.) Response Bias 4.) Response set

Naturalistic method

1.) Descriptive research 2.) Uses Behavioral measures 3.) Observation Research 4.) Case Study

Why People Sample

1.) Feasibility- the group of people we want to learn about is so large that measuring each person is not practical 2.) Economy 3.) Accuracy- a sample tends to be more accurate than a census (e.g., the Census Bureau) because more resources can be devoted to each measurement

Elements of Informed consent

1.) Give a general description of the project in which they are going to participate 2.) Inform the participants that no penalties will be invoked if they choose not to participate 3.) clearly state that participants have the right to withdraw their participation at any time 4.) state risks to participants, whether any compensation is to be made for more than minimal risk, and the confidentiality of records

Three principles that guide interpretations of results

1.) Increasing the sample size will increase the statistical significance of a relationship whenever the effect size is greater than zero 2.) because the p-value is influenced by sample size, as a measure of statistical significance the p-value is not itself a good indicator of the size of a relationship 3.) the effect size is an index of the strength of a relationship that is not influenced by sample size

Types of Risk

1.) Invasion of Privacy- ex naturalistic observation 2.) Breach of Confidentiality- most common, informed consent, unauthorized access to the data 3.) Study Procedures

Designs likely to violate free choice

1.) Naturalistic Observational Studies- individuals are observed without their knowledge 2.) Institutional Settings- schools, psych hospitals, corporations, prisons, when individuals are required by the institution to take certain tests 3.) Authority- employees are assigned to or asked by their supervisors to participate in research

Ways Experimenter function as extraneous variable

1.) Physiological 2.) Psychological 3.) Experimenter expectancies 4.) Rosenthal Effect

Four Goals of Ethical Research

1.) Protecting participants from physical and psychological harm 2) Providing freedom of choice about participating in the research 3.) Maintaining awareness of the power differentials 4.) Honestly describing the nature and use of the research

Ways of controlling extraneous variables

1.) Random Assignment 2.) Elimination 3.) Constancy/Standardization 4.) Balancing/Matching 5.) Counterbalancing 6.) Washout period/ Distractor Task

Controlling Experimenter Effect

1.) Standardized methods 2.) Careful training to a set standard 3.) Standardized appearance, attitude 4.) Experimenter blind to conditions

Ad of Within Subjects Design

1.) Statistical power is greater than in between-subjects designs because responses are compared directly to each other 2.) more efficient, fewer participants

Tests of Reliability

1.) Test-Retest 2.) Alternate (Equivalent) Forms 3.) Internal consistency a.) Split half b.) Cronbach's alpha

Waiver of Consent

1.) The research involves no more than minimal risk to subjects 2.) Will not adversely affect the rights and welfare of subjects 3.) the research could not practicably be carried out without the waiver

Random Error Example

1.) misreading or misunderstanding of the question 2.) measurement of the individuals on different days or in different places 3.) the experimenter misprints the question or misrecords the answers

Respect for persons Things

1.) obtain and document informed consent 2) respect the privacy interests of research subjects 3.) consider additional protection when conducting research on individuals with limited autonomy

Justice Things

1.) select subjects equitably 2.) avoid exploitation of vulnerable populations or populations of convenience; make sure vulnerable populations don't solely carry the burden of research

Beneficence Things

1.) use procedures with the least risk 2.) risks should be reasonable in relation to benefits and the importance of the knowledge expected to result 3.) maintain confidentiality

Demands of Scientific Method

1.) Science must be empirical- based on observation or measurement of relevant info 2.) Procedures must be objective 3.) Science must be based on what has come before it (be able to be replicated) 4.) Results in an accumulation of scientific knowledge

The research process is composed of:

design, execution, analysis of the results, and preparation of the report

Statistical power

=1-beta, probability of concluding there was an effect when there was one, probability of detecting a real effect, probability of correctly rejecting the null hypo when it is false

Experimental Condition

=the level of the IV in which the situation of interest was created -violent tv

MBL- Design Variations

a. Multiple-baseline Design Across Behaviors i. Different baselines refer to several different behaviors of a particular person or group of persons b. Multiple-baseline Design Across Individuals i. Baseline data are collected for a particular behavior performed by two or more persons, and the multiple baselines refer to the number of persons whose behavior is observed c. Multiple-baseline Design Across Situations, Settings, or Time i. Baseline data are gathered for a particular behavior performed by one or more persons, and the multiple baselines refer to the different situations, settings, or time periods of the day in which the observations were obtained d. Number of Baselines i. Other things being equal, the demonstration that the intervention was responsible for change is clearer the larger the number of baselines that show the predicted pattern of performance. Clearer here describes the extent to which the change in performance can be attributed to the intervention rather than to extraneous influences and various threats to validity e. Partial Applications of Treatment i. In some circumstances, the intervention may be applied to the first behavior/individual/situation and produce little or no change; in these cases a second intervention may be introduced in an ABC manner, and if this second intervention leads to change it will be applied to the remaining behaviors/individuals/situations in the usual fashion of a multiple baseline ii. Another variation of partial application is a case in which one or more individuals/behaviors/situations never receive the treatment

multiple baseline - design variations

across behaviors across participants across settings or activities

most common variation/addition

adding a reversal

Counterbalancing

administering each experimental treatment condition to all groups of participants, but in different orders for different groups

use of pretests

advantages include the information that pretests provide

routine or standard treatment

all persons receive an active treatment

Process Debriefing

an active attempt to undo any changes that might have occurred EX- if the experiment created a negative mood, a positive mood induction procedure might be given to all participants before they leave

Behavior modification

an approach to the assessment, evaluation, and alteration of behavior

controlling alpha levels

needed because an investigator is likely to include multiple groups and to compare some or all of them with each other, which inflates the error rate

comparison groups

any group included in the design other than primary groups of interest

Conflict of interest

any situation in which an investigator may have an interest that can bias a research project

Mean Deviation

are calculated for each individual as the person's score (X) minus the mean (Xbar); if the score is above the mean, the mean deviation is positive, and if the score is below the mean, the mean deviation is negative -The sum of all mean deviation scores is always zero
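A quick check of the last point, with arbitrary made-up scores (not from the text):

    scores = [4, 7, 9, 2, 8]
    mean = sum(scores) / len(scores)          # Xbar
    mean_deviations = [x - mean for x in scores]
    print(mean_deviations)                    # positive above the mean, negative below
    print(round(sum(mean_deviations), 10))    # always 0.0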

Descriptive statistics

are numbers that summarize the pattern of scores observed on a measured variable, includes central tendency and dispersion or spread

These 3 tasks ..

are used to convey the story, that is, the problem or the question of interest and what is known about the problem

Counterbalancing

arranged so that one half of the children viewed the violent cartoon first and the other half viewed the nonviolent cartoon first, with the order of viewing determined randomly

no treatment control group

assessed, but given no intervention *focuses on internal threats

single-case methodology

assessment, experimental design, and data evaluation

Archival research

based on an analysis of any type of existing records of public behavior, ex newspaper articles, speeches, letters, tv,

Empirical observation

based on systematic collection and analysis of data

Basic Research vs Applied

basic provides underlying principles that can be used to solve specific problem, applied gives ideas for the kinds of topics that basic research can study

P value in Relation to Alpha level

because alpha sets the standard for how extreme the data must be before we can reject the null hypothesis, and the p-value indicates how extreme the data are, we simply compare the p-value to alpha
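A minimal sketch of that comparison, with a hypothetical p-value:

    alpha = 0.05
    p_value = 0.012        # hypothetical result from a statistical test
    if p_value < alpha:
        print("Reject the null hypothesis (statistically significant)")
    else:
        print("Fail to reject the null hypothesis")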

Why Beta (power) can only be Estimated

because power depends in part on how big the relationship being searched for actually is- the bigger the relationship, the easier it is to detect. The problem is that because the researcher can never know ahead of time the exact effect size of the relationship being searched for, he cannot exactly calculate the power of the statistical test

Rejecting the Null Hypothesis

because the null hypothesis specifies the least interesting possible outcome, the researcher hopes to reject the null hypothesis, that is, to conclude that the observed data were caused by something other than chance alone

Major reason why replications are not usually viewed with great excitement is

because they are partly repetitions of experiments that already have been done

when to use a multiple probe design

before, during and after a treatment and to test for generalization

changing criterion design

begins with baseline phase and then proceeds through all the intervention phases *the effect of intervention is demonstrated by showing that behavior changes gradually over the course of intervention phase ***CONTROL IS BEHAVIOR CHANGES WHEN PERFORMANCE CRITERION IS ALTERED AND THROUGH A MINI-REVERSAL PHASE

Overt behavior

behavior one can see, identify, detect or observe in some way

Example of Rosenthal Effect

being labeled slow can change how you view yourself.

Controlling Demand Characteristic

both experimenter and participants are unaware of which treatment is being administered to the participant, known as double-blind experiments

Research Hypothesis

can be defined as a specific and falsifiable prediction regarding the relationship between or among two or more variables -states the existence of a relationship and specific direction of that relationship ex- participating in psychotherapy will REDUCE anxiety

computerized data collection

can be graphed automatically

Advantage of Correlation Research

can be used to assess behavior as it occurs in people's everyday lives

Limitation of Correlation Research

cannot be used to identify causal relationships among the variables IT IS NOT CAUSATION

Invasion of Privacy

can occur if personal information is accessed or collected without the subjects' knowledge or consent ex- e-mail communication with a subject about recovering from sexual assault might be read by family members

Response Shift

changes in person's internal standards of measurement

Reactivity

changes in responding that occur when individuals know they are being measured, occurs more in self-report measures

Physiological Ways

characteristics such as age, sex, and race can have an influence on participants' responses

Example of Measured variables

conceptual variable study time- measured variable seconds of study

discussion

consists of conclusions and interpretations of the study and hence is the final resting place of all issues and concerns

ABAB designs

consists of making and testing predictions about performance under different conditions *separate phases provides 3 types of information: 1. descriptive - present performance 2. predictive- prediction of probability of future performance 3. replication- test the extent to which predictions were accurate **** CONTROL IS REVERSING BACK TO BASELINE***

validity considers the...

content of the measure and whether the measurement assesses the domain of interest

no-contact group

controls for effect of participating in a study *participants do not know they are in the study ethical issue? informed consent

what is the relationship between the variables of interest?

correlate, risk factor, cause

Publication process: The Investigator does?

decides whether to publish the paper, what to publish, where and when to publish

wait-list group

delaying treatment * helps with ethical situation and attrition (internal threats)

changing criterion design

demonstrates control of an intervention through behavior changes that correspond to shifts in a criterion for reinforcement over the course of the study

Modus tollens

denying the consequent: Given if A then B (if I have a dog, it is a mammal), then if not B, therefore not A (if I don't have a mammal, I don't have a dog). Science wants to live here

Dispersion

describes how spread out the scores are; uses the range, interquartile range, variance, and the standard deviation

3 interrelated tasks for preparing manuscripts for the publication of the report:

description (mainly focused on), explanation, contextualization

Qualitative research

descriptive research that is focused on observing and describing events as they occur, with the goal of capturing all of the richness of everyday behavior ex- field notes and audio or video recording

how do you measure target behaviors?

direct measurement of overt behavior

Contingency table

displays the number of individuals in each of the combinations of the two nominal variables

Skewed distributions

distributions that are not symmetrical

control group

does NOT get intervention

Disadvantages of Descriptive

does not assess relationships among variables

what is assessment a precondition for?

drawing inferences- needs to be sure there was a change of outcome

Independent Variable Examples

drugs, experience, word list, age, height, IQ

Interval Scale

equal intervals/gaps, the distance between any two units of measurement is the same but the zero point is arbitrary

Event Duration

ex: the amount of time that a child was attending to the work of others

Event frequencies

ex- the number of verbal statements that indicated social comparison My picture is the best, How many did you get wrong

Statistical Evaluations

examines whether groups that differ on a particular independent variable can be distinguished statistically on the dependent variable

the Pepsi Challenge

example of an extraneous variable; people liked the cup with an M on it instead of a Q

Experimenter expectancies

expectations that cause experimenter to behave in such a manner that the expected response is more likely shown by participants

response maintenance

extending behavior change over time after the intervention is withdrawn

transfer of training

extending behavior changes to additional settings and situations

Internal Validity

extent to which we can trust the conclusions that have been drawn about the causal relationship between IV and DV -ensured only when there are no confounding variables

F

F = between-groups variance / within-group variance -if the F is statistically significant then we can reject the null hypothesis and conclude that there is a difference among the levels
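A small worked sketch of that ratio, using made-up data for three groups (not from the text):

    import numpy as np

    groups = [np.array([3., 4., 5.]), np.array([6., 7., 8.]), np.array([4., 5., 6.])]  # hypothetical data
    k = len(groups)                               # number of levels of the IV
    n_total = sum(len(g) for g in groups)
    grand_mean = np.concatenate(groups).mean()

    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    ms_between = ss_between / (k - 1)             # between-groups variance
    ms_within = ss_within / (n_total - k)         # within-group variance
    F = ms_between / ms_within
    print(F)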

Statistical conclusion validity

facets of the quantitative evaluation that influence the conclusions we reach about experimental condition (this type of validity is usually neglected when studies are planned and executed)

Threats to internal validity

factors or influences other than the independent variable that could explain the results. ---HISTORY- events outside the experiment ---MATURATION- growing older, stronger, wiser ---TESTING- effects that taking a test may have on subsequent performance on the test ---INSTRUMENTATION- changes in the measuring devices or how it is being used ---ATTRITION- loss of subjects

Controlled

factors such as: -time of day -weather - how independent variable is implemented *CONTROL IS ACHIEVED BY DISPERSING THESE FACTORS EQUALLY ACROSS GROUPS BY ASSIGNING SUBJECTS RANDOMLY TO GROUPS

demand Characteristics

features of the experiment that inadvertently lead participants to respond in a particular manner, finding out the hypothesis

Conducting Simple Random Sampling

first you must have a sampling frame, then investigator randomly selects from the frames a sample of a given number of people by using random number table

Event Sampling

focusing in on specific behaviors that are theoretically related to the topic of interest, ex: kids in a classroom- watch them only for a few days, or just look at kids during reading time

Case Study Dis

generalizing is poor because they are based on the experiences of only a very limited number of normally quite unusual individuals

Counterbalancing Ensure

gets rid of the carry-over effect, that is, when participants' performance in a later treatment differs because of the treatment that occurred prior to it

abstract

has 2 critical features: 1. the abstract is likely to be read by many more people than is the article 2. for reviewers of the manuscript and readers of the journal, the abstract is the only information that most will have about the study

Scatterplot

horizontal axis indicates the scores on the predictor variable and the vertical axis represents the scores on the outcome variable, provide a visual image of the relationship between the variables

Hindsight bias

i knew it all along phenomenon -the tendency to think that we could have predicted something that we probably could not have predicted

Combined Designs

i. Combination ABAB and Multiple Baseline designs are most common 1. Reversal does not need to be extended across each participant, behavior or setting 2. Reversal phases can be added to other types of designs 3. Most combined designs consist of adding a reversal or return to baseline phase to another type of design "Within a single demonstration, combined designs provide different opportunities for showing that the intervention is responsible for the change"

Multiple Treatment Designs: Advantages

i. Does not depend on reversal of conditions ii. Interventions can be implemented and evaluated even when baseline data show initial trends (because difference between interventions can still be detected when superimposed on any existing trend) iii. Efficient design for comparing two or more interventions for a single individual iv. May be used to determine relative effectiveness of intervention and other relevant stimulus conditions (such as staff or classroom) simply by the way the data is plotted.

Undesirability of "Reversing" Behavior

i. Ethical considerations in "making the behavior worse", especially when the behavior that is changed is potentially dangerous ii. Similar considerations for behavior change agents iii. Design issues need to be weighed against considerations that make designs acceptable to the various persons involved iv. There is value in knowing what is responsible for client change and knowing that it was the intervention rather than other influences

MBL: Advantages

i. No need to withdraw treatment ii. The gradual application of the intervention across different behaviors has the practical benefit of increasing treatment fidelity among behavior change agents iii. Gradual application also allows for an initial test of effectiveness prior to widespread implementation iv. The gradual nature of the intervention is perhaps analogous to shaping in its approach to behavior change for clients

Continuous Assessment

i. Reliance on repeated measures over time ii. Basic requirement of single subject design because single-subject designs examine the effects of interventions on performance over time iii. Instead of one or two observations of several persons (as in group design), several observations are obtained for one or a few persons.

Multiple Treatment Designs: Weakness

i. Subject to multiple treatment interference (as is the case with all multiple treatment designs)

Stability of Performance

i. Trend in the data ii. Variability in the Data

failure to replicate

identical findings can yield contradictory results when statistical significance testing is the basis of drawing inferences

Type 1 and alpha

if the alpha level is .05, we know we will make a type 1 error no more than five times out of one hundred; at .01, no more than once out of one hundred

When median is good

if the data are skewed by an outlier, because the median is not affected by outliers

Margin of Error

if the margin of error of the survey is listed as plus or minus three percentage points, it means that the true value of the population will fall between the listed value minus three points and the listed value plus three points 95% of the time
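A tiny arithmetic sketch, with a hypothetical poll estimate:

    # Hypothetical poll result: 47% with a margin of error of +/- 3 percentage points.
    estimate, margin = 47, 3
    print((estimate - margin, estimate + margin))   # (44, 50): interval expected to contain the true value 95% of the time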

How Split-Half is interpreted

if the scale is reliable, then the correlation between the two halves will approach r= 1.00, indicating that both halves measure the same thing

How test-retest is interpreted

if the test is perfectly reliable, the correlation between individuals' scores should be r = 1.00 -however if the measured variable contains random error, the two scores will not be as highly correlated

Matching

if you have two groups you could attempt to find someone like each person in group one on the matching variable and place these individuals into group two

Example of holding constant

if you want to control for gender, you would only include females in your research study

importance of replication

the importance of replication in scientific research cannot be overemphasized 1. by the nature of null hypothesis testing and statistical evaluation, it is possible that any finding may be a result of chance 2. it is important because many influences other than the independent variables we have studied might operate in a psychology experiment to lead to the pattern of results

Null Hypothesis Role

in order to prove that your results are not due to random chance you need to compare the results against the opposite situation (null hypothesis)

Error variability

includes all sorts of influences in a study. holding constant variables that may increase error variation in the study or analyzing variables that might be included in an error term can be used to decrease error variation

combined designs

includes features from more than one design with the purpose of strengthening the experimental demonstration

Example of Quantitative Variable

indicate such things as how attractive a person is, how quickly she or he can complete a task, how many siblings she or he has

Self-report measures

individuals are asked to respond to questions posed by an interviewer or a questionnaire ex directly asking someone about his or her thoughts feeling, or behavior ex:free-format, fixed format

multiple baseline

inferences are based on examining performance across several different baselines. *one will see the first set of data change and the others in baseline continue without showing a change *demonstrates the effect of an intervention by showing behavior changes when and only when the intervention is applied ****CONTROL IS STAGGERING BASELINE***

Simple Random Sampling Example

interested in studying volunteering behavior in college students and want to collect random sample of 100 students, find a list of all currently enrolled students and then use a random number table or generator to produce 100 numbers that fall between 1 and 7,000 and select those 100 students
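A minimal sketch of that example in Python (the frame of 7,000 students and the seed are assumptions for illustration):

    import random
    random.seed(0)                                   # for a reproducible illustration
    sample = random.sample(range(1, 7001), k=100)    # 100 distinct numbers between 1 and 7,000
    print(sample[:10])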

controlling or altering the outcome of interest?

intervention

Construct Validity

the intervention was responsible for change; what specific aspect of the intervention was the causal agent?

Applied Research

investigates issues that have direct implications and provides solutions to problems; goal- to improve the human condition

Observational research

involves making observations of behavior and recording those observations in an objective manner

Systematic observation

involves specifying ahead of time exactly which observations are to be made on which people and in which times and places,

Ratio Scale

is an interval scale that has a true zero point; equal changes between the points on the scale (centimeters, for instance) correspond to equal changes in the conceptual variable (length)

Random Sampling

is a probability sampling technique; it ensures that the data from the sample can be generalized to the larger population

Visual analysis

is a time-tested method. the advantages are: 1. the data being reviewed is raw data. what you see is what you get when it comes to behavioral data. 2. graphing data and analyzing it visually appears to be a conservative way of making judgements about trending, variability, immediacy, size of effect and consistency of data across phases

Mediating variables

is a variable that is caused by the predictor variable and that in turn causes the outcome variable; mediating variables are important because they explain why a relationship between two variables occurs

Increased precision

is achieved by holding constant

Bonferroni Adjustment

is based on dividing alpha by the numbers of comparisons. it controls the overall error-rate.
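A minimal sketch, assuming five planned comparisons:

    alpha, n_comparisons = 0.05, 5
    per_test_alpha = alpha / n_comparisons
    print(per_test_alpha)    # 0.01; each test uses the stricter criterion to keep the overall error rate near .05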

Naturalistic research

is designed to describe and measure the behavior of people or animals as it occurs in their everyday lives

Weak or insufficient power

is not a minor nuisance or merely a worry for misinterpreting a particular study.

Representative Sample

is one that is approximately the same as the population in every important respect ex: a rep sample of the population of students at a college or university would contain about the same proportion of men, sophomores, and engineering majors as are in the college itself

explanation

is slightly more complex, as this refers to presenting the rationale of several facets of the study; the justification, decision-making process, and the connections between the decisions and the goals of the study move well beyond description

Analysis of Variance (ANOVA)

is specifically designed to compare the means of the DV across the different levels of the IV -does this by analyzing the variability of the DV

Major concerns of statistical significance testing is...

is the entire matter of the null hypothesis and statistical significance testing. A difficulty with statistical testing is that, in its current use, it requires us to make a binary decision about the null hypothesis. We set a level of alpha (p<.05) and decide whether or not to reject the null hypothesis that there is no difference

Decreasing variability (Error) in the study

is the final method of increasing power

Mean

is the most commonly used measure of central tendency, sum all scores then divide by the number of participants

description

is the most straightforward task and includes providing details of the study

X-bar

is the sample mean

Median

is the score in the center of the distribution, meaning that 50 percent of the scores are greater than the median and 50% are lower,

The standard deviation

is the single number that represents the amount by which a score typically differs from the mean in a distribution

implications of my findings

it is critically important to discuss the implications of results in a study. Implications of a study often receive little attention, and they should receive more

If too much variability in either the baseline or treatment...

it may be difficult to determine if there was a cause-effect relationship especially if there is a significant overlap of data across conditions

When mean is helpful

it is normally good unless the data are skewed, because the mean is highly influenced by the presence of outliers

Pro of Deception

it would be impossible to study altruism, aggression, or stereotyping without using deception, because if participants were informed about what the study involved, this knowledge would change their behavior

External Validity

the extent to which the research is valid enough to generalize to other places, whether it be a school, home, or work setting

degree of criterion shift

larger shifts in criterion and immediate changes in performance- clear demonstration of effect

Quasi-experiment

like an experiment but lacks random assignment ex- gender cause you can't randomly assign gender

reject-resubmit decision

may be used if several issues emerged that raise questions about the research and the design

Disadvantages of informed consent

may influence the participant responses -if our results are to be unbiased or uncontaminated by knowledge of the experiment and the expectancies that such knowledge may bring

Alternative Hypoth

mean violent > mean nonviolent -states that there is a difference among the conditions and normally states the direction of those differences

Null Hypothesis

mean violent= mean nonviolent -is that the mean score on the DV is the same at all levels of the IV

"No evidence of harm"

means no difference!! A no difference finding can be very important when the problem or focus is of critical interest

NEGATIVE RESULTS

means that there was no statistically significant differences between groups that received different conditions or that the result did not come out the way the researcher hoped

Dependent Variable

measured (not manipulated) Values on this variable are thought to depend on exposure to IV the effect in a cause and effect relationship aka criterion or outcome

Behavioral measures

measured variable designed to directly measure an individual's actions ex: projective measures, associative lists, think-aloud protocol

through what relation or process does one lead to another?

mediator

which factors influence the relationship between variables?

moderator

Random Assignment

most common method of creating equivalence among participants in between-participant designs -uses a random process like flipping a coin to put each participant in each level of the IV -can be sure that in the different levels of the IV the participants are equivalent in every respect except for differences due to chance
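A minimal sketch of random assignment by shuffling, with hypothetical participant IDs and two conditions:

    import random
    participants = ["P1", "P2", "P3", "P4", "P5", "P6"]
    random.shuffle(participants)                          # random process replaces the coin flip
    condition_a, condition_b = participants[:3], participants[3:]
    print(condition_a, condition_b)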

of the three letters

most commonly received is the reject letter

Normal distributions

most quantitative variables. most of the data points are located in the center and the distribution is symmetrical and bell-shaped

contextualization

moves one step further away from description of the details of the study and addresses how the study fits into the context of other studies

Chi square (x^2) statistic

must be used to assess the relationship between two nominal variables, first need to construct a contingency table
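A minimal sketch, assuming a made-up 2x2 contingency table and using scipy's chi-square test of independence:

    from scipy.stats import chi2_contingency

    observed = [[30, 10],    # hypothetical counts: group A, yes / no
                [20, 40]]    # hypothetical counts: group B, yes / no
    chi2, p, dof, expected = chi2_contingency(observed)
    print(chi2, p)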

Negative correlation

negative values, means that as one variable goes up, other variable goes down study time and memory errors

Example of Applied Research

a neurologist who is searching for the origins of Alzheimer's; improving agricultural crop production

Alpha

normally set to .05; we may reject the null hypothesis only if the observed data are so unusual that they would have occurred by chance at most 5 percent of the time

experimental precision

not all problems that can interfere with valid inferences can be predicted or controlled in advance

Invasion of Privacy

obtaining private information * personal matter *politics, income, sexual and religious beliefs

Correlation Vs Causation

occurrences can cause one another (smoking causes lung cancer) or correlate with one another (smoking is correlated with alcoholism); if one action causes another, they are most certainly correlated, but just because two things co-occur does not mean that they cause one another

Type one error

occurs when we reject the null hypothesis when it is in fact true. the probability of making this error is equal to alpha, so if want to prevent these errors alpha should be set as small as possible

Positive result

often is a major criterion for deciding whether a study has merit and warrants the publication

Ecological Validity

one advantage of naturalistic research, refers to the extent to which the research is conducted in situations that are similar to the everyday life experiences of the participants

Outliers

one or more extreme scores that make the distribution not symmetrical and skewed

control groups

one type of comparison group * usually used to control for threats to internal validity

Voluntary Participation

only legally competent adults can give consent; minors and incompetent adults (e.g. developmentally delayed or inebriated adults) cannot give consent

Belmont Report

outlined three basic ethical principles 1.) Respect for persons 2.) Beneficence 3.) Justice

Positive correlations

positive values., means that as one variable goes up, the other variable goes up ex- height and weight

plausibility

possible threats to internal and external validity should be PLAUSIBLE

what should a graphical display or data show?

present the data clearly, precisely and efficiently

Breach of Confidentiality

primary source- information obtained by researchers could harm subjects if disclosed outside the research setting ex- unintended disclosure of subject's HIV status resulting in loss of health insurance coverage

Vulnerable Populations

prisoners, dying patients, pregnant women, mentally retarded children, the mentally ill, the demented, and children -amplified by a coincidence of negatively valued statuses in an individual

graphing data

provides a graphic display of the results of your hard work

non specific or attention placebo

provides some form of pseudo-intervention ethical issues? needs to be credible

Example of Type 1 Error

a psychologist concluded that his therapy reduced anxiety when it did not, or thinking your friend has ESP when she doesn't

Case Study

qualitative research designs, descriptive records of one or more individual's experiences and behavior, usually conducted with rare things like abnormalities

Postexperimental interview

questions asked of participants after research has ended to probe for the effectiveness of the experimental manipulation and for suspicion

Pearson product-moment correlation coefficient

r, a statistic used to assess the direction and the size of the relationship between two variables
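A minimal sketch with two made-up quantitative variables:

    import numpy as np
    study_hours = np.array([1, 2, 3, 4, 5], dtype=float)     # hypothetical predictor
    exam_score = np.array([55, 60, 68, 71, 80], dtype=float)  # hypothetical outcome
    r = np.corrcoef(study_hours, exam_score)[0, 1]             # ranges from -1.00 to +1.00
    print(round(r, 2))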

Zero

r=0 can't predict one variable from other

r

ranges from -1.00 to 1.00

Example of Ordinal Scale

if you rated your friends on friendliness, the scores tell us the ordering of the people (you believe Malik, 7, is friendlier than Guillermo, 2), but the measure does not tell us how big the difference between M and G is

Case Study Ad

really rich data set and very interesting

Function of Reverse- coded items

reduce the impact of acquiescent responding; rewrite some items so that a negative response represents agreement (control for yea-saying) or a positive response represents disagreement (control for nay-saying)
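A minimal sketch of reverse-scoring a 4-point item (the helper name reverse_score is hypothetical):

    def reverse_score(response, scale_min=1, scale_max=4):
        # Flip a Likert response so agreement points the same direction on every item.
        return (scale_max + scale_min) - response

    print([reverse_score(r) for r in [1, 2, 3, 4]])   # [4, 3, 2, 1]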

Operational definition

refers to a precise statement of how a conceptual variable is turned into a measured variable

Measurements

refers to the assignment of numbers to objects or events according to specific rules ex- rate a movie nine out of ten

Reliability

refers to the extent to which a measure is free from random error -How consistent is the measurement?

Face Validity

refers to the extent to which the measured variable appears to be an adequate measure of the conceptual variable, it is NOT a form of validity, focuses on untrained observes looking if it is accurate

External validity

refers to the extent to which the results of a research design can be generalized beyond the specific setting and participants used in the experiment to other places, people, and times

conceptualization

refers to the importance of the question, the theoretical underpinnings and how well thought out the question is as described in the report

Type 2 error

refers to the mistake of failing to reject the null hypothesis when the null hypothesis is really false; concluding there was not an effect when there was

Levels

refers to the specific situations that are created within the manipulation, either violent or nonviolent cartoons

Confirmation bias

refers to the tendency to selectively search for and consider information that confirms one's beliefs ex- a reporter writing an article may only interview experts that support her or his views on the issue

Nonlinear Relationships

relationship between two quantitative variables that cannot be approximated with a straight line

Replication

repetition of an experiment

probes

represent a strategic use of non-continuous assessment to answer questions

mediators

represent a deeper level of understanding beyond the relations

Mode

represents the value that occurs most frequently in the distribution, not frequently used

Intervention research...

requires that subjects complete measures, usually before and after the intervention- characteristic of the pretest-posttest design

Descriptive research

research designed to answer questions about the current state of affairs ex- case study, surveys, naturalistic observation; describes behaviors

naturalistic observation

research designed to study the behavior of people or animals in their everyday lives

Quasi-experimental research design (book)

research designs in which the independent variable involves a grouping but in which equivalence has not been created between the groups

Correlational research

research that involves the measurement of two or more relevant variables and an assessment of the relationship between or among those variables goal- uncover variables that show systematic relationships with each other

Cross-sectional research designs

research in which comparisons are made across different age groups, but all groups are measured at the same time; very limited in their ability to rule out reverse causation

Exploring and predicting treatment moderators

results of experiments could be accepted or rejected more easily if a given variable were always shown to have either an effect or no effect *** if a variable has no effect, it is always possible that it would have an effect if some other condition of the experiment were altered

Controlling Yea-saying

rewrite some items so that a negative response represents agreement (control for yea-saying) or a positive response represents disagreement (control for nay-saying)

Validity Examples

saying shoe size measures intelligence; a measure might be reliable, but not valid

Dependent Variables Examples

scale on a survey, accuracy, mood

Meta-analysis

secondary analysis procedure. Has been used extensively for evaluating research.

confounds

several features within the experiment can interfere with the interpretation of results

one variable predicts another

several variables are studied at a given point in time and the investigator uses a statistical analysis in which the word "prediction" comes up *** any time someone sees the word prediction in a discussion of results, it is important to be mindful of whether the design warrants a use in which a time line is implied

Example of Nominal Variable

sex, because it identifies whether a person is male or female and the numbers are arbitrary; religion, United States, sport team

Range

simple measure of dispersion; find the largest (maximum) and the smallest (minimum) values; looks at extreme values and says nothing about the middle Max-Min=Range

Study Procedures

simply participating in the research can cause social or psychological harm ex- subject who experienced abuse as children may experience emotional or psychological distress by participating in study

Naturalistic methods advantages

so many data are available to be studied

subjective evaluation

soliciting the opinions of others who by expertise or familiarity with the client are in position to judge the behaviors in need of treatment

Software for graphing time-series data

some readers may prefer to use software for preparing their graphs for presentation

observed measures

specific measures

Null hypothesis

specifies that there are "no differences" between groups

Null Hypothesis

specifies that there are "no differences" in between groups

Falsifiability 2

states that for a theory to be useful the prediction drawn from it must be specific- it should go out on a limb in telling us what should happen and must also imply that certain things will not happen

weakness of multiple treatment designs

subject to multiple treatment interference

statistical validity

takes into account many factors about how well the study was conducted

Predictor Variable

term for Independent variable in correlational designs

Outcome Variable

term for dependent variable in correlational designs

one Benefit of behavior analysis research is...

that it is designed in such a way that the experimenter is directly in touch with the data that is being collected

Reject decision

that the reviewers and/or editor considered the paper to include flaws in conception, design or execution or that the research problem focus did not address a very important issue

Example of Likert Scale

the Rosenberg self-esteem scale, this scale contains ten items, each of which is responded to on a four-point response format ranging from strongly disagree to strongly agree

Ambiguity of negative results

the absence of group differences in an experiment is not usually met with enthusiasm by the investigator or by the reviewers who may be considering the manuscript for possible publication. This reaction derives from the ambiguity of negative results: the reason for a "no-difference" finding usually cannot be identified in the experiment

Variance Def

the average of the squared differences from the mean

Controlling for response set

the best safeguard against response set is to review all questions that are asked or items to be completed to see if a socially desired response is implied in any manner; the response should reflect the participant's own feelings, attitudes, or motives rather than an attempt to appear intelligent, well-adjusted, or normal

Statistically significant

the conclusion to reject the null hypothesis, made when the p-value is smaller than alpha p<.05

Zero-order correlations

the correlations which serve as the input to a multiple regression analysis

Population ( Target Population)

the entire group of people that the researcher desires to learn about ex: All Americans

Procedures to ensure human subjects are conducted ethically

the ethics of a given research project are determined through a cost-benefit analysis, in which the costs are compared to the benefits.

Response set

the experimental context or testing situation influences the participants responses

Experimental control

the extent that the experimenter is able to eliminate effects on the DV other than the effects of the IV, -the greater the experimental control, the more confident we are with results

Power

the extent to which an investigation can detect a difference when one exists. ---If we are going to use tests of statistical significance to evaluate results, it is critical to ensure that there is a good chance to show a difference when one in fact exists ---THE LEVEL OF POWER THAT IS "ADEQUATE" IS NOT DERIVED MATHEMATICALLY

Internal Consistency

the extent to which the scores on the items correlate with each other and thus are all measuring the true score rather than random error -the extent that all of the items are measuring true score, rather than random error is measured by the average correlation among the items approaching r=1.00 -split-half, cronbach coefficient alpha

Covariance

the extent to which the variance of the two variables move in the same direction

Correlation

the extent to which two or more variables are associated with each other

consequent

the following as an effect or result, resulting

How Alternate Forms is Interpreted

the greater the similarity (i.e., the stronger the correlation), the higher the reliability

Conceptual variables

the ideas that form the basis of a research hypothesis ex- self-esteem, parenting style, depression and cognitive development, liking

Scientific fraud

the intentional alteration or fabrication of scientific data

Control condition

the level of the IV in which the situation of interest was not created -nonviolent tv

Normal Distribution-Central Tendency

the mean, median and the mode all fall at the same point on the distribution in a normal distribution, in the middle

Explanation of Theory of Knocks

the more specific and precise the prediction was, the more potential observations there were that could have falsified it. Good theories make predictions that expose themselves to falsification, not general statements

Cronbach's coefficient alpha

the most common and best index of internal consistency -an estimate of the average correlation among all of the items on the scale -Average of all possible split-half correlations
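A minimal sketch of the computation from made-up item scores (rows are respondents, columns are items):

    import numpy as np

    items = np.array([[3, 4, 3, 4],
                      [2, 2, 3, 2],
                      [4, 4, 4, 3],
                      [1, 2, 1, 2],
                      [3, 3, 4, 4]], dtype=float)      # hypothetical 4-item scale, 5 respondents
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
    print(round(alpha, 2))                              # values near 1.00 indicate high internal consistency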

Degrees of Freedom

the number of levels of the independent variable as well as the number of research participants in the entire study ex F(1,1000)

True Score

the part of the scale score that is not random error - Actual score= True score + Random error -Reliability = true score/actual score

Central Tendency

the point in the distribution around which the data are centered, depicts a typical score in the distribution includes mean median and mode

Informed Consent

the practice of providing research participants with information about the nature of the research project before they make a decision about whether to participate

Power

the probability of rejecting the null hypothesis when that hypothesis is false

Beta

the probability of the scientist making a type 2 error

Coefficient of determination

the proportion of variance measure for r is r2; a nonsignificant r2 means no relationship

Anonymous

the respondent does not put any identifying information onto the questionnaire ex- individuals can seal their questionnaire in an envelope

Reverse-scored

the responses to the reversed items must themselves be reverse-scored, so that the direction is the same for every item, before the sum or average is taken

Longitudinal designs

the same individuals are measured more than one time and the time period between the measurements is long enough that changes in the variables of interest could occur

Variance

the single number that represents the total amount of variability in a distribution

Effect Size

the size of a relationship; indicates the magnitude of a relationship; zero indicates that there is no relationship between the variables and larger (positive) effect sizes indicate stronger relationships

Behavioral categories

the specific set of observations that are recorded in systematic observational research, defined before the project begins, based on theoretical predictions

Standard deviation (s)

the square root of the variance; the larger the standard deviation, the more spread; dispersion is measured by calculating the distance of each of the scores from a measure of central tendency, such as the mean
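A minimal sketch with arbitrary scores, computing the variance and its square root:

    import numpy as np
    scores = np.array([4, 7, 9, 2, 8], dtype=float)    # made-up data
    variance = ((scores - scores.mean()) ** 2).mean()  # average squared distance from the mean
    sd = variance ** 0.5                               # standard deviation
    print(round(variance, 2), round(sd, 2))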

Significance Level or alpha

the standard that the observed data must meet,

Good participant effect

the tendency of participants to behave as they perceive the experimenter wants them to behave

Timing of measurement

the time when follow-up measures are taken is taken into consideration

Curvilinear relationships

there is an association, but one not described by a single straight line; relationships that change in direction ex- anxiety and performance, r=.00

Nominal scale

things fall into a category, giving it names, ex: United States, on a sport team the kid who wears the number 6; not all that helpful

How to draw valid inferences

through experimentation that examines the direct influence of the independent variable upon the dependent variable.

Methodology and design play a major role in?

throughout the processes of planning, conducting, and communicating research results

Example of Interval Scale

the time interval between the years 1981 and 1982 is the same as between 1983 and 1984 (365 days), but the zero point (year 1 AD) is arbitrary; degrees Fahrenheit, GPA

methodological goal

to obtain valid inferences, ensuring that there are no biases

The goal of statistical evaluation is...

to provide an objective, or at least agreed upon criterion, to decide whether the results we obtained are sufficiently compelling to reject this no-difference hypothesis

Purpose of research

to reach a well-founded (valid) conclusion about the effects of a given intervention and conditions under which it operates

Statistical conclusion validity

to what extent is a relation shown, and how well can an investigation detect effects if they exist.

Respect for persons (Autonomy)

treat individual as autonomous human beings, capable of making their own decisions and choices, and do not use people as a means to an end

Justice

treat people fairly and design research so that its burdens and benefits are shared equitably ex- Tuskegee

Con of Deception

trust is broken, highly unethical -participants might have decided not to participate in the research had they been fully informed

dismantling therapy study

two or more treatment groups that vary in the components of treatment provided

nominal variable

type of measured variable, used to name or identify a particular characteristic, categorical measurement

goal of research

understand phenomena of interest

understanding the phenomena

understanding why something does not generalize (external)

Plagiarism

using someone else's work without giving credit to the original author

yoked group

used to assess or rule out factors that arise during implementation *when intervention time is variable, one participant is matched (yoked) to another participant's experiences

Pearson Correlation coefficient

used to summarize the strength and direction of the association between two quantitative variables, designated by r

Quantitative Variable

uses numbers to indicate the extent to which a person possesses a characteristic of interest

The accept decision

usually means that the overall study provides important information and was well done

Frequency distribution

values and their frequencies; for nominal variables, a table that indicates how many, and in most cases what percentage of, individuals in the sample fall into each of a set of categories
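
A minimal illustration of tallying a frequency distribution for a nominal variable (the responses below are hypothetical):

```python
from collections import Counter

# hypothetical nominal responses (e.g., college major)
majors = ["psych", "bio", "psych", "chem", "psych", "bio"]

counts = Counter(majors)                 # how many fall into each category
n = len(majors)
for category, freq in counts.items():
    print(category, freq, f"{100 * freq / n:.1f}%")   # count and percentage
```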

VARIABILITY IN THE DATA

variability is inherent in the nature of the subject performance in any investigation

Matching variable

a variable you wish to control, e.g., gender, income, intelligence

Extraneous variables

variables other than the predictor variable that cause the outcome variable but that do not cause the predictor variable; how aggressively a child plays at school is probably caused to some extent by the disciplining style of the child's teacher, but TV watching at home is probably not

moderators

variables that influence the direction, nature and magnitude of relation

Within-groups variance

variance within the conditions. random fluctuations among individuals within the levels

Use of directional tests

varying the alpha level suggests a related way to increase power: using a directional (one-tailed) test. In significance testing, alpha is used to decide whether a difference between groups is reliable

Confounding Variables

variables other than the IV that affect the DV

Condition

what the levels of the IV are frequently called in a one-way design

functional relationship

when changes in dependent variable are due to changes in the independent variable

diffusion or imitation treatment

when the control group inadvertently receives the treatment, or learns of it and imitates it

Suspicion check

when deception has been used- questioning the participants to determine whether they believed the experimental manipulation or guessed the research hypotheses

Carryover

when effects of one level of the manipulation are still present when the dependent measure is assessed for another level of the manipulation; a within-subjects problem

Passive Deception

when participants are not told about the hypothesis being studied or the potential use of the data being collected, ex: research studying eyewitness testimony might stage a fake crime and later test participants on their memory of it

non randomly assigned

used when randomization is not possible

Active Deception

when researcher tells the participant that he or she is studying learning when in fact the experiment really concerns obedience to authority

Example of Type 2

when a scientist concludes that the psychotherapy program is not working even though it really is, or concludes a friend does not have ESP when she does

specific treatment or reaction of controls

when services are provided to the control group, thus creating another intervention

negative results are important

negative results matter in the context of possible harm, side effects, or costs of alternative interventions. Ex: cell phones -- if society thinks cell phones are harmful because they can cause cancer, a finding of no effect is informative

Systematic Error

when the measured variable is influenced by other conceptual variables that are not part of the conceptual variable of interest -Errors that vary with measurement in some methodical way (bias) -These variables systematically increase or decrease scores on the measured variable

Validity

when the measurement or intervention is true

Negative Skew- Central Tendency

when the outliers are on the left side of the distribution so it goes mean, median, mode with the hump closer to the right side

Positive Skew- Central Tendency

when the outliers are on the right side of the distribution so it goes mode, median, mean and the hump is closer to the left side

Independent

when the two variables cannot predict each other; random, r = .00

multivariate

when there are multiple outcome measures, the data are considered multivariate. Multivariate analysis includes several measures in a single data analysis. We do not use multivariate analyses merely because we have several dependent measures ***evaluates the composite of variables, based on their interrelations

combo of selection

when threats to internal validity combine with selection and apply differently to the different groups

concurrent schedule

when treatments are simultaneously presented in the same day, or session

univariate

whereas univariate analyses examine one measure at a time. ***the best way to analyze multiple outcome measures is with a multivariate analysis or with several univariate analyses ***** separate univariate tests might be appropriate if the investigator does not view the measures as conceptually related, if the measures are uncorrelated, or if the primary interest is in the individual measures themselves.

Measured Variables

numbers that represent the conceptual variables -frequently referred to as measures of the conceptual variable; two types: nominal and quantitative, ex: if study time is the conceptual variable, seconds of studying is the measured variable

Systematic Random Sampling Example

wish to draw a sample of 100 students from a population of 7,000: first draw a random number between 1 and 70 (7,000/100) and then sample the person on the list with that random number; then create the rest of the sample by taking every seventieth person on the list after the initial person
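
A sketch of the procedure described above (the sampling frame here is just a list of hypothetical ID numbers for illustration):

```python
import random

frame = list(range(1, 7001))     # sampling frame: ID numbers for 7,000 students
n = 100
interval = len(frame) // n       # 7,000 / 100 = 70

start = random.randint(1, interval)      # random starting point between 1 and 70
sample = frame[start - 1::interval]      # then every 70th person on the list
print(len(sample))                       # 100
```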

effect size formula

without repeated measures: ES = (m1 - m2) / s; with repeated measures: ES = (m1 - m2) / (s * sqrt(1 - r^2))
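
A small sketch of the between-groups (no repeated measures) version, using hypothetical group means and a hypothetical pooled standard deviation:

```python
# hypothetical group means and pooled standard deviation (not from the text)
m1, m2, s = 24.0, 20.0, 8.0

es = (m1 - m2) / s      # standardized mean difference (no repeated measures)
print(es)               # 0.5: the groups differ by half a standard deviation
```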

acquiescent responding

yea-saying bias; a form of reactivity in which people tend to agree with whatever questions they are asked

Reliability Example

you can test the reliability of a bathroom scale by weighing yourself on it twice in a row -- highly reliable; a stick as a measurement for height is not reliable because you won't get the same measurement every time

