PSY 301- University of Oregon

Internal Validity

In a relationship between one variable (A) and another (B), the extent to which A, rather than some other variable (C), is responsible for the effect on B.

double-barreled questions

ask two questions in one; have poor construct validity because people might be responding only to the first half of the question, only to the second half, or to both

observer effects

observers change the behavior of those they are observing, such that participants' behavior changes to match observer expectations; can occur even in seemingly objective observations

observer bias

occurs when an observer's expectations influence their interpretation of the participants' behaviors or the outcome of the study; observers rate behaviors according to their expectations or hypotheses

reactivity

occurs when people change their behavior (react) in some way when they know another person is watching; do not display their typical behavior

leading questions

avoid them! one question's wording affects the way respondents answer the next; ex: one question has more negative context while the other has more positive context, changing the responses

Why is precision important to scientists?

scientists use theories to guide hypotheses; they state these hypotheses using operational definitions

Internal reliability

the extent to which multiple measures or items are all answered the same by the same set of people

What is the Empirical Method?

The use of verifiable evidence as the basis for conclusions; collecting data systematically and using it to develop, support, or challenge a theory. Base one's conclusions on systematic observations.

Parsimony

"All other things being equal, the simplest solution is the best." If two theories explain the data equally well but one is simpler, most scientists will opt for the simpler theory.

Type I Error

"false positive"; mistakenly concluding there is an effect or association when there really is none

What was Milgram's obedience study?

"teacher" vs. "learner"; as the teacher you must punish the learner when they respond incorrectly, and with each error the level of shock increases; the experimenter tells you to continue administering shocks even though the learner is screaming; participants did not know the learner wasn't actually being shocked; caused a lot of stress for the "teacher"

What is the typical anatomy of an empirical article, and what goes into each section?

1. Abstract: summary 2. Introduction: introduction to the research and the hypothesis 3. Methods: how it was tested; who participated 4. Results: main findings 5. Discussion: why does it matter?

Theory Data Cycle

1. Theory: how variables relate to each other 2. Research questions: specific to the theory 3. Research design: the plan for testing the question 4. Hypothesis: a prediction; a way of stating the specific outcome the researcher expects to see 5. Data: a set of observations 6. Supporting data strengthens the theory; nonsupporting data leads to revising the theory or making a new design

What was the Tuskegee syphilis study?

600 African American men with syphilis were lied to about treatment for their illness; some were not even aware they had the STD, and when an appropriate cure became available, it was not provided; VERY UNETHICAL

correlation method

does your measure correlate with the behavior or outcome of interest?

What makes a sample representative? how can it be achieved? when does it matter most or least?

A sample is representative when all members of the population have an equal chance of being chosen. It can be achieved through simple random sampling, cluster sampling, multistage sampling, stratified random sampling, oversampling, systematic sampling, etc. It matters most when you want to generalize to the population.

Falsifiability

A theory must lead to hypotheses that, when tested, could actually fail to support the theory.

Interrogating Association Claims

Construct Validity: How well did the researchers measure each variable? External Validity: Can it generalize to other populations, as well as to other contexts, times, or places? How representative is the sample? To what other problems might the association be generalized? Statistical Validity: What is the effect size? How strong is the association? Is the association statistically significant? Could it be a false positive? Internal Validity: not relevant

Interrogating Frequency Claims

Construct Validity: how well did the researchers measure their variables? How accurately did they operationalize the variable? External Validity: How did the researcher choose the study's participants and how well do those participants represent the intended population? Statistical Validity: How well do the numbers support the claim? (margin of error) Internal Validity: not relevant

Interrogating Causal Claims

Covariance; Temporal Precedence; Internal Validity: Was the study an experiment? Does the study achieve temporal precedence? Does the study control for alternative explanations by randomly assigning participants to groups? Does the study avoid the threats to internal validity? Construct Validity: How well did the researcher measure or manipulate the variables in the study? Statistical Validity: What is the effect size? Is there a difference between groups, and how large is it? Is the difference statistically significant? External Validity: To what populations, settings, and times can we generalize this causal claim? How representative is the sample? How representative are the manipulations and measures?

Why is systematic research usually preferable to simple experience?

Experience has no comparison group; research asks the critical question "compared to what?", which allows researchers to compare what happens with and without the thing they are interested in

Semantic Differential

Fats are: Unhealthy 1 2 3 4 5 Healthy (a numeric scale anchored with adjectives)

Why is skepticism important to scientists?

Must treat conclusions with caution

Population vs Sample vs. Census

Population: the entire set of people or products in which you are interested. Sample: a smaller set taken from that population. Census: testing the whole population.

Representative vs. Biased sample

Representative: all members of a population have an equal chance of being included in the sample; allows us to make inferences about the population. Biased: some members of the population have a greater chance of being included than others, so inferences about the population are not safe.

Theory vs. Hypothesis

Theory: a statement or set of statements that describes general principles about how variables relate to one another. Hypothesis: a statement of the specific relationship between a study's variables that the researcher expects to observe if a theory is accurate. Different from the everyday definition of a theory because scientific theories spark questions that are actually tested; the everyday definition is more of a guess.

Covariance

X and Y are correlated

Temporal Precedence

X comes before Y

Internal Validity

no third variable (Z) causes both X and Y; no alternative explanations

What is an Institutional Review Board?

a committee responsible for interpreting ethical principles and ensuring that research using human participants is conducted ethically

reliability vs measurement validity

a measure can be reliable without being valid for its intended use; a measure can be less valid than it is reliable, but it cannot be more valid than it is reliable

Content validity

a measure must capture all parts of a defined construct; must assess all components of a variable, such as intelligence

Surveys as self report measures

a way of posing questions to people on the phone, in personal interviews, on written questionnaires or online.

open-ended questions

allow respondents to answer any way they like; provide researchers with spontaneous, rich information. The drawback is that responses must be coded and categorized, which is difficult and time-consuming

Confounds

alternative explanations; occur, essentially, when you think one thing caused an outcome but other things changed too, so you are confused about what the cause really was

Cronbach's alpha

an average of all the possible inter-item correlations; the closer to 1, the better the internal reliability. If internal reliability is good, researchers can average all the items together
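
The standardized form of Cronbach's alpha can be computed directly from the average inter-item correlation. A minimal sketch in Python; the survey responses below are made up purely for illustration:

```python
import numpy as np

# Hypothetical data: 5 respondents answering 4 survey items on a 1-5 scale.
items = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
], dtype=float)

k = items.shape[1]                       # number of items
corr = np.corrcoef(items, rowvar=False)  # item-by-item correlation matrix

# Average of the k*(k-1)/2 unique inter-item correlations
mean_r = corr[np.triu_indices(k, k=1)].mean()

# Standardized Cronbach's alpha from the average inter-item correlation
alpha = k * mean_r / (1 + (k - 1) * mean_r)
print(round(alpha, 2))
```

Because the made-up items here track each other closely, alpha comes out near 1, so a researcher could justify averaging the items into a single scale score.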

fence-sitting

answering in the middle of the scale; to prevent it, take away the neutral option or use forced-choice questions with only two options

Ordinal scale

applies when the numerals of a quantitative variable represent a ranked order (ex: top 10 best-selling books)

Association Claim

argues that one level of a variable is likely to be associated with a particular level of another variable; the variables are related; must involve at least two variables that are measured, not manipulated; the association can be positive, negative, or zero

Causal Claim

argues that one of the variables is responsible for changing the other; has three criteria: covariance, temporal precedence, and internal validity

order of survey questions

can affect responses to a survey; the earlier questions can change the way respondents understand and answer the later questions

negatively worded questions

can cause confusion, thereby harming the construct validity of a survey; make questions unnecessarily complicated; ex: "abortion should never be restricted"

blind (or masked) design

can prevent observer bias/effects; observers are blind to which conditions participants have been assigned to and are unaware of what the study is about

Categorical variables

categories (sex, species, etc.)

Quantitative variables

coded with meaningful numbers (height and weight)

Meta-analysis

combines the results of many studies and gives a number that summarizes the magnitude of a relationship

Belmont Report (Respect for Persons, Beneficence, Justice)

a short document, produced at a conference, that outlines three main principles for guiding ethical decision making. -Respect for Persons: individuals participating in research should be treated as autonomous agents (free to make up their own minds about whether they wish to participate) entitled to informed consent; people who have less autonomy are entitled to special protection (ex: children, disabled people, prisoners) -Beneficence: researchers must take precautions to protect research participants from harm and ensure their well-being; must carefully assess risks and benefits; must consider who could benefit and who could be harmed -Justice: calls for a fair balance between the kinds of people who participate in research and the kinds of people who benefit from it; participants must be representative of the people who would benefit from the results

Frequency Claim

describes a particular rate or degree of a single variable; claims how frequent or common something is; mentions a percentage of a variable, the number of people who engage in a certain activity, or a certain group's level on a variable

Empirical article vs. Review article

Empirical: reports the results of an empirical research study and contains details on the study's method, the statistical tests used, and the numerical results. Review: provides a summary of all the published studies that have been done in one research area.

Comparison groups

enable us to compare what would happen with and without the thing we are interested in

Criterion validity

evaluates whether the measure under consideration is related to a concrete outcome, such as a behavior, that it should be related to according to the theory being tested. Ex: a company previously used an IQ test to measure sales aptitude but wants to develop a better test; do the new test scores actually correlate with the key behavior (good sales)?

What constitutes data to scientists?

good data is large in quantity and variety; a theory must change to accommodate data

External Validity

how well the results of a study generalize to, or represent, people or contexts besides those in the study itself

What is the cherry-picking problem?

ignoring inconvenient data; only acknowledge certain data that aligns with what we want to see

Statistical Validity

the extent to which a study's statistical conclusions are accurate and reasonable

Why is openness important to scientists?

it allows for replication, for checks and balances, and for findings to exist in the published literature

Survey question wording

it is crucial that each question be clear and straightforward to answer

What are the components of measurement validity?

it is established with subjective judgments or with empirical data; it establishes construct validity; includes face, content, criterion, convergent, and discriminant validity

Probabilistic

behavioral research findings are not expected to explain all cases all of the time; instead, the conclusions of research are meant to explain a certain proportion of the possible cases

Independent variable

manipulated variable

Dependent variable

measured variable

Type II Error

mistakenly concluding there is no association when there really is one; a "miss" (false negative)

Response set

non-differentiation; a shortcut when answering questions; people answer the same way to all questions

ratio scale

applies when the numerals of a quantitative variable have equal intervals and the value 0 truly means "nothing"

interval scale

numerals that represent equal intervals (distances) between levels, with no "true zero" (a person can get a score of 0, but 0 doesn't mean "nothing")

unobtrusive observations and data

Observations: make yourself less noticeable (to avoid observer effects). Data: avoid reactivity; measure the traces that behavior leaves behind instead of measuring the behavior itself; ex: counting empty liquor bottles to see how much alcohol is being consumed rather than observing how much someone drinks

Experiment

one variable is manipulated and the other is measured

Physiological measures

operationalizes a variable by recording biological data such as brain activity, hormone levels, or heart rate.

Self-report measure

operationalizes a variable by recording people's answers to questions about themselves in a questionnaire or interview

Confirmation bias

scrutinizing research that contradicts your own ideas with extra skepticism while being less critical of work that fits with your ideas

Likert Scale

people are presented with a statement and asked to answer using a rating scale (strongly agree, agree, neither agree nor disagree, disagree, strongly disagree); a Likert-like scale uses a similar numeric format (e.g., 1-5)

What is the problem with "common sense" stories?

people do not generally step forward with non-examples

forced-choice questions

people give their opinion by picking the best of two or more options; the researcher adds up the number of times people chose each response

Construct Validity

refers to how well a conceptual variable is operationalized

Validity

refers to the appropriateness of a conclusion or decision; in general a valid claim is reasonable, accurate, and justifiable

self-reporting more than you know or remember

researchers cannot assume the reasons people give for their behavior are their actual reasons; people may not be able to accurately explain why they acted

known groups paradigm

researchers see whether scores on the measure can discriminate among a set of groups whose behavior is already well understood. Ex: using salivary levels to measure stress; measure levels for a speaker and for the audience, since public speaking is a known stressor

Is deception ever used?

Deception through omission: researchers withhold some details of the study from participants. Deception through commission: researchers actively lie to participants. Deception is used in some cases, but the principle of beneficence must be applied: what are the ethical costs and benefits of doing the study with deception?

nominal scale

same as categorical

Confirmatory Hypothesis Testing

asking selected questions that would lead to a particular, expected answer; conducted in a way that is decidedly not scientific

Discriminant validity

a measure should correlate less strongly with measures of different constructs

Correlation Coefficient (r)

Slope direction: positive = up; negative = down; zero = none. Strength of relationship: r falls between -1.0 and 1.0, and the closer r is to zero, the weaker the relationship (the sign indicates direction, not strength)
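
Pearson's r can be computed in a few lines of Python; the sleep and mood numbers below are hypothetical, chosen only to illustrate a strong positive association:

```python
import numpy as np

# Hypothetical data: hours of sleep (X) and mood rating (Y) for 6 people.
sleep = np.array([5, 6, 7, 8, 8, 9], dtype=float)
mood = np.array([3, 4, 4, 6, 7, 8], dtype=float)

# Pearson's r: the covariance of X and Y divided by the product of their
# standard deviations; np.corrcoef returns the full correlation matrix,
# and the off-diagonal entry [0, 1] is r for the two variables.
r = np.corrcoef(sleep, mood)[0, 1]
print(round(r, 2))
```

For these values r is positive and close to 1 (an upward slope and a strong association); flipping one variable's order would flip the sign but not the strength.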

waiting it out

a solution for reactivity; let participants get used to the observer's presence so they will display their typical behavior

Observational measures

sometimes called a behavioral measure; operationalizes a variable by recording observable behaviors or physical traces of behavior. ex a researcher could operationalize happiness by observing how many times a person smiles

inter-rater reliability

the degree to which two or more coders or observers give consistent rating of a set of targets

Face validity

the extent to which it appears, to experts, to be a plausible measure of the variable in question; a subjective judgment; if it looks good, it has face validity

Effect size

the magnitude of a relationship between two or more variables
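
One common effect-size measure for a difference between two groups is Cohen's d: the mean difference divided by the pooled standard deviation. A minimal Python sketch with hypothetical scores (the groups and values below are invented for illustration):

```python
import statistics

# Hypothetical scores for two groups in an experiment
group_a = [10, 12, 13, 11, 14]
group_b = [8, 9, 10, 9, 9]

mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
n_a, n_b = len(group_a), len(group_b)

# Pooled standard deviation, then d = mean difference / pooled SD
pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
d = (mean_a - mean_b) / pooled_sd
print(round(d, 2))
```

Unlike a p-value, d stays interpretable across studies, which is one reason meta-analyses summarize relationships with effect sizes.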

Convergent Validity

the measure should correlate more strongly with other measures of the same construct

Test-retest reliability

the researcher gets consistent scores every time they use the measure; applies whether the operationalization is self-report, observational, or physiological; primarily relevant when researchers are measuring constructs they expect to be relatively stable in most people

informed consent

the researcher's obligation to explain the study to potential participants in everyday language and give them a chance to decide whether to participate; in most settings it is obtained by providing a written document that outlines the procedures, risks, and benefits; sometimes not necessary when there is no potential harm (as in an educational setting)

What is Random Assignment?

the use of a random method (e.g. flipping a coin) to assign participants into different experimental groups
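
Coin flips scale poorly past a handful of participants, so random assignment is often done by shuffling a list and splitting it. A minimal Python sketch (the participant labels and group names are hypothetical):

```python
import random

# Hypothetical roster: 8 participants to be split into two equal groups
participants = ["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8"]

random.shuffle(participants)      # randomize the order in place
half = len(participants) // 2
treatment = participants[:half]   # first half -> treatment group
control = participants[half:]     # second half -> control group
print(treatment, control)
```

Because every ordering of the list is equally likely, each participant has the same chance of landing in either group, which is what lets the experiment rule out preexisting group differences as an alternative explanation.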

Give an example of an availability heuristic?

things that pop up easily in our minds tend to guide our thinking; can lead us to overestimate frequency. Example: death by fire or by falling? Falling occurs more frequently, but we say fire because it is more memorable and imagined to be more fatal

inter-rater reliability

two or more independent observers will come up with consistent (or very similar) findings; most relevant for observational measures

Give an example of Bias Blind Spot

we are blind to our own bias; most of us think we are less biased than others. Example: people who like the president may think anybody who disagrees with him must be unintelligent, failing to recognize their own bias in the president's favor

Present bias

we often fail to look for absences because it is easy to notice what is present; related to the need for comparison groups

Observational data

when a researcher watches people or animals and systematically records how they behave or what they are doing; a basis for frequency claims; usually more trustworthy than self-report

acquiescence ("yea-saying")

when people say yes or strongly agree to everything; to prevent or catch it, include reverse-worded items

Debriefing

when researchers use deception, they must spend time after the study debriefing each participant in a structured conversation; they describe the nature of the deception, explain why it was necessary, and restore an honest relationship with the participant

Socially desirable responding (faking good)

when survey respondents give answers that make them look better than they really are, because they are embarrassed, shy, or worried about giving an unpopular opinion; solution: anonymity

