Chapters 5 and 6 - Psych 301

ordinal scale

-applies when the numerals of a quantitative variable represent a ranked order -example: order of finishers in a swimming race, ranking of 10 movies from most to least favorite

self-reporting memories of events

-asking people what they remember is probably not the best operationalization for studying what really happened to them

double-barreled question

-asks two questions in one, thereby weakening its construct validity

slope direction

-can be positive, negative, or zero; that is, sloping up, sloping down, or not sloping at all

strength

-corresponds to how spread out the dots are around the line -in general, the relationship is strong when the dots are close to the line and weak when the dots are spread out

leading questions

-a type of question in a survey or poll that is problematic because its wording encourages only one response, thereby weakening its construct validity.

response sets

-also known as nondifferentiation -a shortcut respondents may use to answer items in a long survey, rather than responding to the content of each item -weaken construct validity because these survey respondents are not saying what they really think

discriminant validity (or divergent validity)

-an empirical test of the extent to which a measure does not associate strongly with measures of other, theoretically different constructs.

known-groups paradigm

-another way to gather evidence for criterion validity -researchers see whether scores on the measure can discriminate among a set of groups whose behavior is already well understood
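
A minimal Python sketch of the idea (every score and group label below is invented for illustration): if a measure has criterion validity, groups already known to differ should score differently on it, and a simple group comparison such as an independent-samples t-test is one way to check that.

```python
# Hypothetical sketch of a known-groups check; every number here is invented.
# A measure with criterion validity should separate groups whose behavior is
# already well understood (e.g., a depression measure and diagnosed vs.
# non-diagnosed groups).
from scipy import stats

diagnosed = [22, 27, 19, 31, 25, 28, 24]        # invented scale scores
not_diagnosed = [8, 12, 10, 6, 14, 9, 11]       # invented scale scores

t, p = stats.ttest_ind(diagnosed, not_diagnosed)
print(f"t = {t:.2f}, p = {p:.4f}")  # a clear difference supports criterion validity
```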

interval scale

-applies to the numerals of a quantitative variable that meet two conditions -1st: the numerals represent equal intervals (distances) between levels -2nd: there is no "true zero" (a person can get a score of 0, but the 0 does not really mean "nothing") - example: IQ test, shoe size

ratio scale

-applies when the numerals of a quantitative variable have equal intervals and when the value of 0 truly means "nothing". -example: number of exam questions answered correctly or height in cm

convergent validity

- an empirical test of the extent to which a measure is associated with other measures of a theoretically similar construct -a measurement should correlate more strongly with similar traits (convergent validity) and less strongly with dissimilar traits (discriminant validity)
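
Illustrative Python sketch (all scores are invented): the same correlation logic can show a measure correlating strongly with a similar construct and only weakly with a dissimilar one.

```python
# Illustrative sketch; all scores are invented. A new measure should correlate
# more strongly with a theoretically similar construct (convergent validity)
# than with a theoretically different one (discriminant validity).
import numpy as np

new_measure     = np.array([10, 14, 9, 16, 12, 18, 11, 15])
similar_trait   = np.array([11, 15, 8, 17, 13, 19, 10, 14])  # similar construct
different_trait = np.array([5, 8, 7, 5, 8, 7, 4, 4])         # unrelated construct

r_convergent   = np.corrcoef(new_measure, similar_trait)[0, 1]
r_discriminant = np.corrcoef(new_measure, different_trait)[0, 1]
print(f"convergent r = {r_convergent:.2f}, discriminant r = {r_discriminant:.2f}")
# here the convergent r is much larger in magnitude than the discriminant r
```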

well-worded questions

- if the intention of a survey is to capture respondents' true opinions, the survey writers might attempt to word every question as neutrally as possible -wording definitely matters

observer effects

-a change in behavior of study participants in the direction of an observer's expectation -also known as expectancy effects -examples: bright and dull rats, Clever Hans

survey and poll

-a method of posing questions to people on the telephone, in personal interviews, on written questionnaires, or via the internet -these terms are interchangeable in this book

negatively worded questions

-a question in a survey or poll that contains negatively phrased statements, making its wording complicated or confusing and thereby potentially weakening its construct validity

semantic differential format

-a response scale whose numbers are anchored with contrasting adjectives -example: ratemyprofessors.com uses the following adjective pairs: easiness (easy to hard), helpfulness (useless to very helpful), clarity (confusing to crystal clear)

forced-choice format

-a specific way to ask survey questions in which people give their opinion by picking the best of two or more options -frequently used in political polls

open-ended questions

-a survey question format that allows respondents to answer any way they like -drawback is responses must be coded and categorized, a process that can often be difficult and time-consuming

Likert scale

-a survey question format; a rating scale containing multiple response options anchored by the terms strongly agree, agree, neither agree nor disagree, disagree, and strongly disagree -called a Likert-type scale if it does not follow this format exactly
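
A small illustrative sketch (invented item responses), assuming the usual 1-5 numeric coding of the five anchors, showing how Likert responses become scores:

```python
# Illustrative sketch (invented responses), assuming the usual 1-5 coding of
# the five Likert anchors.
likert_codes = {
    "strongly disagree": 1,
    "disagree": 2,
    "neither agree nor disagree": 3,
    "agree": 4,
    "strongly agree": 5,
}

responses = ["agree", "strongly agree", "neither agree nor disagree", "agree"]
scores = [likert_codes[r] for r in responses]
print(sum(scores) / len(scores))  # mean rating for this item (4.0 here)
```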

categorical variables (also called nominal variables)

-variables whose levels are categories -examples: sex, whose levels are male or female; and species, whose levels might be rhesus macaque, chimpanzee, or bonobo -a researcher might decide to assign numbers to the levels of a categorical variable; however, these numbers do not have numerical meaning -coding 1 = female and 2 = male does not mean one sex is quantitatively "higher" than the other
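
A short illustrative sketch (invented species codes and sample): the numbers assigned to a nominal variable are only labels, so counting categories makes sense but doing arithmetic on the codes does not.

```python
# Illustrative sketch (invented codes and sample): numeric codes for a nominal
# variable are only labels. Counting how often each category occurs is fine,
# but treating the codes as quantities (e.g., averaging them) is not meaningful.
species_codes = {"rhesus macaque": 1, "chimpanzee": 2, "bonobo": 3}

sample = ["bonobo", "chimpanzee", "bonobo", "rhesus macaque"]
counts = {name: sample.count(name) for name in species_codes}
print(counts)  # {'rhesus macaque': 1, 'chimpanzee': 1, 'bonobo': 2}
```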

quantitative variables

-coded with meaningful numbers -height and weight are quantitative because they are measured in numbers

Cronbach's alpha (or coefficient alpha)

-a correlation-based statistic used to check whether a researcher's measurement scale has internal reliability -the closer Cronbach's alpha is to 1, the better the scale's reliability -an estimate of the average correlation among all of the items on the scale
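
A minimal sketch of the standard Cronbach's alpha formula, using a small invented respondents-by-items matrix of scale scores (the function name is just for illustration):

```python
# A minimal sketch of Cronbach's alpha: alpha = k/(k-1) * (1 - sum of item
# variances / variance of total scores). The data matrix below is invented
# (rows = respondents, columns = items on the scale).
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                              # number of items
    item_vars = items.var(axis=0, ddof=1)           # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)       # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

scores = [[4, 5, 4], [2, 2, 3], [5, 5, 4], [3, 4, 3], [1, 2, 2]]
print(round(cronbach_alpha(scores), 2))  # closer to 1.0 = better internal reliability
```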

criterion validity

-evaluates whether the measure under consideration is related to a concrete outcome, such as behavior, that it should be related to according to the theory being tested -an item might look good at the face validity level, but do the test scores correlate with a key behavior? -often represented with correlation coefficients

face validity

-the extent to which a measure appears to experts to be a plausible measure of the variable in question -based on subjective judgment

how to prevent fence sitting

-it is hard to distinguish respondents who genuinely feel neutral from those who are simply playing it safe -researchers may try to jostle people out of this tendency by taking away the neutral option, which makes the person choose one side or the other -researchers can also use a forced-choice format in which people must pick one of two answers

how to prevent yea-saying

-include reverse-worded items
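
Illustrative sketch (invented items, assuming a 1-5 scale) of how reverse-worded items are scored so that a yea-sayer's inconsistent answers become visible:

```python
# Illustrative sketch (invented items, assuming a 1-5 scale): a reverse-worded
# item is scored in the opposite direction, so an acquiescent respondent who
# agrees with everything no longer gets a uniformly high total.
SCALE_MIN, SCALE_MAX = 1, 5

def reverse_score(score):
    return SCALE_MIN + SCALE_MAX - score   # 5 -> 1, 4 -> 2, 3 -> 3, ...

raw = {"I enjoy my classes": 5,
       "I do NOT enjoy my classes (reverse-worded)": 5}   # a likely yea-sayer

scored = [raw["I enjoy my classes"],
          reverse_score(raw["I do NOT enjoy my classes (reverse-worded)"])]
print(scored)  # [5, 1] -- the inconsistency now stands out
```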

how to minimize anonymity issues

-include special survey items, mixed in with the target items, that identify socially desirable responders -researchers can also ask people's friends to rate them -researchers increasingly use special, computerized measures to evaluate people's implicit opinions about sensitive topics (e.g., the Implicit Association Test)

self report vs observational

-many researchers prefer to observe behavior directly, rather than rely on self reports.

self-reporting "more than they can know"

-most people willingly provide an explanation or an opinion to a researcher, but sometimes they unintentionally give inaccurate responses -researchers cannot assume the reasons people give for their own behavior are their actual reasons -people may not be able to accurately explain why they acted as they did

observer bias

-occurs when observers' expectations influence their interpretation of the participants' behaviors or the outcome of the study -instead of rating behaviors objectively, observers rate behaviors according to their own expectations or hypotheses

reactivity

-occurs when people change their behavior in some way when they know another person is watching -occurs in both human and animal participants

acquiescence

-or "yea-saying" occurs when people say "yes" or "strongly agree" to every item instead of thinking carefully about each one -people have bias to agree(say yes to) any item-no matter what it states -also threatens construct validity

masked research design

-or blind design -observers are unaware of the conditions to which participants have been assigned and are unaware of what the study is about -helps minimize observer bias and observer effects

socially desirable responding

-or faking good -giving answers on a survey that make one look better than one really is -because respondents are embarrassed, shy, or worried about giving an unpopular opinion, they will not tell the truth on a survey or other self-report measure

correlation coefficient

-or r; a single number used to indicate how close the dots on a scatterplot are to a line drawn through them -a more common and efficient way to evaluate reliability relationships is to use the correlation coefficient
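
A minimal Python sketch (invented ratings from two hypothetical coders) of computing r to evaluate a reliability relationship:

```python
# A minimal sketch (invented ratings from two hypothetical coders): r
# summarizes how closely the paired ratings would hug a straight line.
from scipy.stats import pearsonr

coder_a = [3, 5, 2, 4, 5, 1]
coder_b = [3, 4, 2, 5, 5, 2]

r, p = pearsonr(coder_a, coder_b)
print(f"r = {r:.2f}")   # a high positive r suggests good interrater reliability
```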

question order

-the order in which questions are asked can affect responses to a survey -earlier questions can change the way respondents understand and answer later questions -researchers can prepare different versions of a survey, with the questions in different sequences, to control for order effects

fence sitting

-playing it safe by answering in the middle of the scale, especially when survey items are controversial -people might also answer in the middle (or say "I don't know") when a question is confusing or unclear -can weaken construct validity

observational research

-process of watching people or animals and systematically recording how they behave or what they are doing

r

-r indicates both the direction and the strength of the relationship -when the slope is positive, r is positive; when the slope is negative, r is negative -the value of r can fall only between -1.0 and 1.0 -when the relationship is strong, r is close to either 1.0 or -1.0; when the relationship is weak, r is closer to zero
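
Illustrative sketch with invented data showing how r's sign tracks the slope's direction and its magnitude tracks strength:

```python
# Illustrative sketch with invented data: r's sign follows the slope's
# direction, and its magnitude reflects how tightly the dots cluster.
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6])
strong_positive = np.array([2, 3, 5, 6, 8, 9])   # slopes up, tight: r near +1
strong_negative = np.array([9, 8, 6, 5, 3, 2])   # slopes down, tight: r near -1
weak            = np.array([5, 2, 7, 3, 6, 4])   # scattered: r near 0

for label, y in [("positive", strong_positive),
                 ("negative", strong_negative),
                 ("weak", weak)]:
    print(label, round(np.corrcoef(x, y)[0, 1], 2))
```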

how to avoid socially desirable responding

-a researcher might ensure that participants know their responses are anonymous (not a perfect solution) -anonymity can cause respondents to treat surveys less seriously

scatterplots to evaluate reliability

-scatterplots can be a helpful tool for assessing the agreement between two administrations of the same measurement (test-retest reliability) or between two coders (interrater reliability) -using a scatterplot you can see whether the two ratings agree or whether they disagree
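
A minimal sketch, reusing the invented coder ratings from the correlation example above, of plotting one coder against the other to eyeball interrater agreement:

```python
# A minimal sketch reusing the invented coder ratings from the correlation
# example: each dot is one participant, plotted as (Coder A, Coder B).
import matplotlib.pyplot as plt

coder_a = [3, 5, 2, 4, 5, 1]
coder_b = [3, 4, 2, 5, 5, 2]

plt.scatter(coder_a, coder_b)
plt.plot([1, 5], [1, 5], linestyle="--")   # perfect-agreement reference line
plt.xlabel("Coder A rating")
plt.ylabel("Coder B rating")
plt.title("Interrater reliability check")
plt.show()
```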

faking bad

-a phenomenon similar to socially desirable responding but less common -giving answers on a survey that make one look worse than one really is

what should researcher do to reduce reactivity?

-solution 1: blend in -make unobtrusive observations; make yourself less noticeable -solution 2: wait it out -allow subjects to get used to your presence -solution 3: measure the behavior's results -use unobtrusive data; instead of measuring the behavior itself, researchers measure the traces it leaves behind

content validity

-the extent to which a measure captures all parts of a defined construct -based on subjective judgment

what 3 problems threaten the construct validity of observations?

1. observer bias 2. observer effects 3. reactivity

3 types of quantitative variables

1. ordinal scale 2. interval scale 3. ratio scale

3 types of reliability

1. test-retest reliability 2. interrater reliability 3. internal reliability

internal reliability (also called internal consistency)

a study participant gives a consistent pattern of answers, no matter how the researcher has phrased the question

validity

concerns whether the operationalization is measuring what it is supposed to measure

interrater reliability

consistent scores are obtained no matter who measures or observes

which operationalization is best?

it is best if self report, observational, and physiological measures show similar patterns of results

physiological measures

operationalizes a variable by recording biological data such as brain activity, hormone levels, or heart rate.

self report measures

operationalizes a variable by recording people's answers to questions about themselves in a questionnaire or interview.

reliability

refers to how consistent the results of a measure are

test-retest

researcher gets consistent scores every time he or she uses the measure

3 common types of measures

self report, observational, and physiological

observational measure

sometimes called a behavioral measure, operationalizes a variable by recording observable behaviors or physical traces of behaviors.

