Research Methods in Psychology Exam 2
known-groups paradigm
in which researchers see whether scores on the measure can discriminate among a set of groups whose behavior is already well understood
masked design (blind design)
observers are unaware of the conditions to which participants have been assigned and are unaware of what the study is about
criterion validity
evaluates whether the measure under consideration is related to a concrete outcome, such as behavior, that it should be related to according to the theory being tested; in short, does it correlate with key behaviors? Especially important for self-report measures, which should predict actual behavior
negatively worded question example
"Does it seem possible or does it seem impossible to you that the Nazi extermination of the Jews never happened?" "People who do not drive should never be punished."
leading question example
"Is this the gun you saw being used in the robbery?"
name at least two ways to ensure that survey questions are answered accurately
- Make self-reports anonymous.
- Include reverse-worded items; they slow people down so they answer more carefully.
- For fence sitting: take away the neutral option.
- For faking good/bad: include special survey items that identify socially desirable responders, such as "I never hesitate to go out of my way to help someone," and discard those respondents' data.
name the three common ways in which researchers operationalize their variables
1. Self-Report 2. Observational 3. Physiological
solutions to reactivity
1. blend in 2. wait it out 3. measure the behavior's results
well-being (happiness) possible operational definition
10-point Ladder of Life scale; another possible option: Diener's five-item subjective well-being scale
all variables must have at least how many levels
2
Reactivity
A change in behavior of study participants (such as acting less spontaneously) because they are aware they are being watched.
unobtrusive observation
A technique in which researchers observe the activities of people without intruding or participating in the activities.
Fence sitting (not an accurate response)
Always choosing the response in the middle of the scale for all items (staying mostly neutral).
many researchers believe criterion validity is more important than convergent and discriminant validity. can you see why?
Because only criterion validity establishes how well a measure correlates with a behavioral outcome, not simply with other self-report measures.
slope direction
Can be positive, negative, or zero; that is, sloping up, sloping down, or not sloping at all.
describe the difference between categorical and quantitative variables. come up with new examples of variables that would fit the definition of ordinal, interval, and ratio scales
Categorical variables are categories (nominal variables). An example is sex: male or female. Researchers might assign numbers to these categories, like 1 = male, 2 = female, but there is no significance to these numbers. Quantitative variables are coded with meaningful numbers; examples are height and weight. New examples: ordinal, finishing place in a race (1st, 2nd, 3rd); interval, temperature in degrees Fahrenheit (equal intervals, but zero does not mean "no temperature"); ratio, number of siblings (zero truly means none).
categorical variables (nominal variables)
Categorical variables take on values that are names or labels: the color of a ball (e.g., red, green, blue), gender (male or female), year in school (freshman, sophomore, junior, senior). These data cannot be averaged or represented by a scatterplot, as the numbers have no numerical meaning.
codebooks
Clear rating scales that allow observers to make reliable judgments with less bias
Classify each operational variable below as categorical or quantitative. If the variable is quantitative, further classify it as ordinal, interval, or ratio. A) Degree of pupil dilation in a person's eyes in a study of romantic couples (measured in millimeters). B) Number of books a person owns. C) A book's sales rank on Amazon.com. D) The language a person speaks at home. E) Nationality of the participants in a cross-cultural study of Canadian, Ghanaian, and French students. F) A student's grade in school.
A) Degree of pupil dilation: quantitative, ratio. B) Number of books: quantitative, ratio. C) A book's sales rank: quantitative, ordinal. D) The language a person speaks at home: categorical. E) Nationality of participants: categorical. F) Student's grade in school: quantitative, interval.
self report measure examples
Diener's five-item scale and the Ladder of Life question are good examples of self-report measures of life satisfaction. Other examples: asking people how much they appreciate their partner, or asking them to report their gender identity. Parents may report on behalf of their children.
faking bad
Giving answers on a survey (or other self-report measure) that make one look worse than one really is.
face validity
The extent to which a measure looks, on its face, like it tests what it is supposed to test.
observer effects (expectancy effects)
Participants' behavior changes to match the observer's expectations (e.g., the maze-"bright" and maze-"dull" rats study).
population of interest examples
Population of interest: Democrats in Texas. Biased sampling technique: recruiting people sitting in the front row at the Texas state Democratic convention. Unbiased sampling technique: obtaining a list of all registered Texas Democrats from public records and calling a sample of them through random digit dialing.
For each of the three common types of operationalizations—self-report, observational, and physiological—indicate which type(s) of reliability would be relevant.
Self-report: test-retest and internal reliability are relevant. Observational: interrater reliability is relevant. Physiological: interrater reliability may be relevant (when readings require human judgment).
what are three potential problems related to the wording of survey questions? can they be avoided?
double-barreled questions, negative wording, and leading questions (acquiescence is a related response problem); all can be avoided, at least to an extent, through careful wording
reliability is about consistency. define three kinds of reliability, using the word consistent in each of your definitions
Test-retest: the researcher gets consistent results every time they use the measure. Interrater: consistent results are obtained no matter who measures or observes. Internal: a study participant gives a consistent pattern of answers, no matter how the researcher has phrased the questions.
Explain why a variable will usually have only one conceptual definition but can have multiple operational definitions
The conceptual definition, or construct, is the researcher's definition of the variable in question at an abstract level (like happiness). The operational definition of a variable represents a researcher's specific decision about how to measure or manipulate the conceptual variable (such as giving people a scale to say how happy they are). Because the construct is abstract, there are usually several reasonable ways to measure or manipulate it, so one conceptual definition can support multiple operational definitions.
content validity
a measure must capture all parts of a defined construct
Average inter-item correlation
a measure of internal reliability for a set of items; it is the mean of all possible correlations computed between each item and the others. An AIC between .15 and .50 means the items go reasonably well together
survey and poll
a method of posing questions to people on the phone, in personal interviews, on written questionnaires, or online
Likert Scale
a numerical scale used to assess attitudes; includes a set of possible answers with labeled anchors on each extreme: strongly agree, agree, neither agree nor disagree, disagree, strongly disagree. If it doesn't follow this exact format, it's a Likert-type scale
ratio scale
a quantitative scale of measurement in which the numerals have equal intervals and the value of zero truly means "nothing"
negatively worded questions
a question in a survey or poll that contains negatively phrased statements, making its wording complicated or confusing and potentially weakening its construct validity
observational measure example
a researcher could operationalize happiness by observing how many times a person smiles
nonprobability sampling
a sampling technique in which there is no way to calculate the likelihood that a specific element of the population being studied will be chosen; results in a biased sample
sample
a smaller set, taken from the population
test-retest reliability
a study participant will get the same score each time they are measured with it
response sets (nondifferentiation)
a type of shortcut respondents can take when answering survey questions; response sets weaken construct validity because these respondents are not saying what they really think. They tend to happen at the end of a long questionnaire, or when people are uncomfortable giving an accurate answer
unbiased sample (representative sample)
all members of the population have an equal chance of being included in the sample; unbiased samples allow us to make inferences about the population of interest
intelligence possible operational definition
an IQ test that includes problem-solving items, memory and vocabulary questions, and puzzles; another possible option: recording brain activity while people solve difficult problems
Acquiescence (yea-saying)
answering "yes" or "strongly agree" to every item in a survey or interview
interval scale
applies to the numerals of a quantitative variable that meet two conditions: 1. the numerals represent equal intervals (distances) between levels; 2. there is no true zero (zero does not mean "nothing")
gratitude toward one's relationship partner, possible operational definition
asking people if they agree with the statement "I appreciate my partner." Another possible option: watching couples interact and counting how many times they thank each other
gender identity, possible operational definition
asking people to report on a survey whether they identify as male, female, or nonbinary. Another possible option: in phone interviews, a researcher guesses gender from the sound of a person's voice
wealth possible operational definition
asking people to report their income within various ranges. Another possible option: coding the value of a person's car from 1 to 5
double-barreled questions
asks two questions in one. Example: "Do you enjoy swimming and wearing sunscreen?"
in some cases, self reports might be the only option
researchers can detect that someone is dreaming (e.g., with physiological measures), but they need a self-report to learn what the person was dreaming about
operational variables are primarily classified as
categorical or quantitative
the levels of categorical variables are
categories
Cronbach's alpha
combines the AIC and the number of items in the scale; the closer it is to 1.0, the better the scale's internal reliability
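How the AIC and Cronbach's alpha fit together can be sketched in Python. This is a minimal illustration with made-up Likert scores, using the standardized-alpha formula (which combines the AIC with the number of items k); the function names are mine, not from the course.

```python
import numpy as np

def average_interitem_correlation(items: np.ndarray) -> float:
    """Mean of all pairwise correlations between items.
    `items` is an (n_respondents, n_items) array of scores."""
    r = np.corrcoef(items, rowvar=False)     # item-by-item correlation matrix
    upper = r[np.triu_indices_from(r, k=1)]  # each pair counted once
    return float(upper.mean())

def cronbach_alpha_standardized(items: np.ndarray) -> float:
    """Standardized alpha = (k * AIC) / (1 + (k - 1) * AIC)."""
    k = items.shape[1]
    aic = average_interitem_correlation(items)
    return (k * aic) / (1 + (k - 1) * aic)

# Made-up data: 5 respondents answering 3 Likert items
scores = np.array([
    [5, 4, 5],
    [2, 2, 3],
    [4, 4, 4],
    [1, 2, 1],
    [3, 3, 4],
])
aic = average_interitem_correlation(scores)
alpha = cronbach_alpha_standardized(scores)
```

Because these hypothetical items correlate strongly with each other, alpha comes out close to 1.0; note that adding more items with the same AIC would also push alpha upward.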
interrater reliability
consistent scores are obtained no matter who measures or observes
ways a sample may be biased
convenience sampling, self-selection
population of interest
describes the group about which the observations and claims are made
double-barreled question example
"Do you agree that the Second Amendment guarantees your individual right to own a gun?"
Population
entire set of people or products in which you are interested
simple random sampling
every member of the population has an equal probability of being selected for the sample; the most basic form of probability sampling
probability sampling (random sampling)
every member of the population of interest has an equal chance of being selected for the sample, regardless of whether they are close by, easy to contact, or motivated to respond
example of operationalizing conceptual variables, the association between wealth and happiness
first, researchers need to measure happiness and wealth. They might operationally define wealth by asking about salary in dollars, asking about bank account balances, or observing the car a person drives
observational research
gathering primary data by observing relevant people, actions, and situations; can be the basis for frequency claims
socially desirable responding (faking good)
giving answers on a survey (or other self-report measure) that make one look better than one really is
reliability
how consistent the results of a measure are
correlation coefficient (r)
indicates how close the dots, or points, on a scatterplot are to a line drawn through them; a single number ranging from -1.0 to 1.0 used to indicate the strength and direction of an association. Often printed below the scatterplot
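The idea can be sketched in Python with NumPy. The data here are hypothetical (hours studied vs. exam score, invented for illustration):

```python
import numpy as np

# Hypothetical scatterplot data: hours studied vs. exam score
hours = np.array([1, 2, 3, 4, 5, 6])
score = np.array([55, 60, 58, 70, 72, 80])

# np.corrcoef returns the 2x2 correlation matrix;
# r is the off-diagonal entry
r = np.corrcoef(hours, score)[0, 1]

# r is always between -1.0 and 1.0: the sign gives the direction,
# the absolute value gives the strength of the linear association
assert -1.0 <= r <= 1.0
```

For these invented points, r is strongly positive (the dots rise and stay close to a line); flipping the scores would flip the sign of r without changing its strength.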
When is a sample biased?
when it contains too many "unusual" people, so it does not represent the population of interest
question wording matters
leading questions, double-barreled questions, negatively worded questions, question order
leading questions
leads people to a particular response
categorical characteristics and example
levels are categories. Examples: nationality, type of music, type of phone people use
quantitative characteristics
levels are coded with meaningful numbers
physiological measure example
moment-to-moment happiness has been measured using facial electromyography
is there an operationalization that's best?
no
Ordinal scale
numerals of a quantitative variable represent a rank order
survey question formats include
open-ended, forced-choice, Likert scale, semantic differential
physiological measure
operationalizes a variable by recording biological data such as brain activity, hormone levels, or heart rate; usually requires equipment to amplify, record, and analyze the data
observational measure (behavioral measure)
operationalizes a variable by recording observable behaviors or physical traces of behaviors
self report measure
operationalizes a variable by recording people's answers to questions about themselves in a questionnaire or interview
forced-choice questions
people give their opinion by picking the best of two or more options. Example: "Do you like this class so far? Yes or No."
open-ended questions
questions that allow respondents to answer however they want "what do you think of this class?"
which of the following correlations is the strongest: r=.25, r=-.65, r=-.01, or r=.43
r=-.65
construct validity of a measure has two aspects
reliability and validity
how to avoid acquiescence response bias
include reverse-worded items, e.g., "If I had my life to live over, I'd change almost everything."
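The arithmetic of re-coding a reverse-worded item can be sketched in Python. This is a minimal illustration for a hypothetical 5-point Likert item; the function name and example answers are mine:

```python
def reverse_code(score: int, scale_min: int = 1, scale_max: int = 5) -> int:
    """Re-code a reverse-worded Likert item: (max + min) - score,
    so 'strongly agree' (5) on the reversed item becomes 1, etc."""
    return (scale_max + scale_min) - score

# A yea-sayer who answers "strongly agree" (5) to everything:
answers = {
    "I am satisfied with my life": 5,
    "If I had my life to live over, I'd change almost everything": 5,
}

recoded = reverse_code(
    answers["If I had my life to live over, I'd change almost everything"]
)
# After re-coding, the yea-sayer's two answers disagree (5 vs. 1),
# exposing an inconsistent (acquiescent) response pattern.
```

The midpoint of the scale (3) is unchanged by the re-coding, which is why this transformation preserves genuinely neutral answers.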
convergent validity
scores on the measure are related to other measures of the same construct; does the pattern make sense?
three common types of measurement
self-report measure, observational measure, physiological measure
For which topics, and in what situations, are people most likely to answer accurately to survey questions?
self-reports are most accurate for nonsensitive topics and when surveys are anonymous, since that reduces socially desirable responding
biased sample (unrepresentative sample)
some members of the population of interest have a much higher probability of being included in the sample compared to other members
process of studying conceptual variables
start by stating a definition of the construct (the conceptual definition), then create an operational definition
internal reliability
study participant gives consistent pattern of answers, no matter how the researcher phrases the question
How to avoid socially desirable responding
tell participants surveys are anonymous
discriminant validity (divergent validity)
tests whether concepts or measurements that are not supposed to be related are actually unrelated; does the pattern make sense?
Census
the official count of a population; a census tries to include 100% of the population
strength
the spread of the dots in a scatterplot corresponds to the strength of the association: less spread means a stronger relationship
writing well-worded questions
the way a question is worded can determine how people answer
What do face validity and content validity have in common?
they both require an expert's judgment; both are subjective ways to assess validity
how many types of quantitative variables are there
three: ordinal scale, interval scale, ratio scale
how many types of reliability are there
three: test-retest reliability, interrater reliability, internal reliability
Convenience sampling
using a sample of people who are easy to contact and readily available to participate; the most common sampling technique
quantitative variables
variables that can be counted or measured; their levels are coded with meaningful numbers. Examples: height and weight
self-selection
when a sample is known to contain only people who volunteered to participate; can cause serious problems for external validity
observer bias
when observations may be skewed to align with observer expectations
validity
whether the operationalization is measuring what it is supposed to measure