Research Methods 2

Concurrent validity

How well a measure correlates with a previously validated measure

What can also affect the responses to a survey?

Question order

What is the cut off for strong internal reliability?

.7

Which internal reliability (alpha) value is the strongest? .10 .92 .70 -.98

.92

Classifications of operational variables

1) categorical 2) quantitative

Three types of quantitative variables

1. Ordinal 2. Interval 3. Ratio

Three types of operationalization

1. Self-report 2. Physiological 3. Observational

Three types of reliability

1. Test-retest 2. Interrater 3. Internal

Saying yes or agreeing with every item; people say "yes" or "strongly agree" instead of thinking carefully about the answer

Acquiescence (yea-saying)

Internal reliability is concerned with a group of people giving consistent answers to

All of the items on a scale

Surveys and polls

Asking people questions

Which operationalization is best?

Best if self-report, observational, and physiological measures show similar results

Convenience

Biased

Purposive sample

Biased

Self-selected sample

Biased

Eye color

Categorical

Parent's marital status

Categorical

A variable whose levels are categories, ex: male/female

Categorical variable

Cronbach's alpha

Coefficient (alpha) researchers use to see if their measurement scales have internal reliability
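
For concreteness, here is a minimal Python sketch of how coefficient alpha can be computed from a small respondents-by-items score matrix. The data and the helper name cronbach_alpha are made up for illustration; only the standard alpha formula is assumed.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = scale items."""
    k = items.shape[1]                          # number of items on the scale
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point ratings from five respondents on a four-item scale
scores = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
])
print(round(cronbach_alpha(scores), 2))  # values of .70 or higher indicate strong internal reliability
```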

A researcher's definition of a variable at the theoretical level- the construct -variable of interest, stated as an abstract

Conceptual definition/variable

Also involves subjective judgment about a measure- must capture all parts of a defined construct

Content validity

An empirical test of the extent to which a measure is associated with other measures of a theoretically similar construct

Convergent validity

A single number, ranging from -1.0 to 1.0, that indicates the strength and direction of an association between two variables

Correlation coefficient r
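
As a quick illustration (made-up numbers, assuming a Python/NumPy workflow), r can be computed directly from two measured variables:

```python
import numpy as np

# Hypothetical scores on two measured variables
hours_slept = np.array([5, 6, 7, 8, 9, 6, 7])
mood_rating = np.array([2, 3, 4, 4, 5, 3, 4])

r = np.corrcoef(hours_slept, mood_rating)[0, 1]  # Pearson's r
print(round(r, 2))  # falls between -1.0 and 1.0; sign = direction, magnitude = strength
```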

Empirical form of measurement validity that establishes the extent to which a measure is correlated with a behavior or concrete outcome that it should be related to

Criterion validity

Evaluates whether the measure under consideration is related to a concrete outcome, such as behavior, that it should be related to according to the theory being tested ***Relevant outcome

Criterion validity

An empirical test of the extent to which a measure does not associate strongly with measures of other, theoretically different constructs

Discriminant validity

It is better if participants ______ know the condition of the experiment

Do not

Is your cell phone new and does it have all the latest features? What is the biggest problem with this wording?

Double barreled question

Type of question in a survey or poll that is problematic because it asks two questions in one, thereby weakening construct validity

Double barreled question

Random sampling is associated with

External validity- generalizing

It appears to experts to be a possible measure of the variable in question ***SUBJECTIVE -not empirical -Extent to which a measure is subjectively considered a plausible operationalization of the conceptual variable in question

Face validity

Which of the following does NOT test validity empirically?

Face validity- subjective

Adding more people to the sample will always make the sample more externally valid

False

Playing it safe by answering in the middle of the scale

Fence sitting

People give their opinion by picking the best of two or more options

Forced-choice questions

A study participant gives a consistent pattern of answers, no matter how the researcher has phrased the questions meant to measure the same construct: consistent across the entire scale

Internal reliability

Random assignment is associated with

Internal validity- experimental

Consistent scores are obtained no matter who measures or observes- two or more independent observers will come up with consistent (or similar) findings

Interrater reliability
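
A minimal sketch of checking interrater reliability with made-up data: two observers rate the same participants, and their ratings are correlated (for quantitative ratings, a correlation coefficient is one common index).

```python
import numpy as np

# Hypothetical ratings of the same seven participants by two independent observers
observer_a = np.array([3, 5, 2, 4, 4, 1, 5])
observer_b = np.array([3, 4, 2, 4, 5, 1, 5])

r = np.corrcoef(observer_a, observer_b)[0, 1]
print(round(r, 2))  # a high positive r means the two observers' ratings are consistent
```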

A scale of measurement that applies when the numerals of a quantitative variable meet two conditions 1. Equal intervals 2. NO absolute zero

Interval

Seating order in a gymnasium

Interval- equal spacing

Which of the following is NOT TRUE of a measure?

It can be valid but not reliable

You can tell when a study is correlational because?

It has two measured variables

Using two known groups of people to validate a certain measure

Known-groups method

Whether scores on the measure can discriminate among a set of groups whose behavior is already well understood

Known-groups paradigm
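
A minimal sketch of the known-groups idea with made-up numbers: scores on a new measure should clearly separate groups whose standing on the construct is already well understood.

```python
import numpy as np

# Hypothetical scores on a new anxiety measure for two well-understood groups
known_anxious = np.array([18, 22, 19, 21, 20])
known_calm = np.array([8, 11, 9, 10, 12])

print(known_anxious.mean(), known_calm.mean())
# A clear separation between the group means is evidence for criterion validity.
```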

type of question in a survey or poll that is problematic because its wording encourages only one response, thereby weakening its construct validity

Leading question

A survey question that has you rate a response from "strongly agree" to "strongly disagree" is an example of?

Likert scale

When a scale contains more than one item and is anchored by the terms agree, strongly agree, disagree, strongly disagree

Likert scale

A clear codebook makes it _________ likely that behavioral observations will have good interrater reliability

MORE

When conducting a poll, adding more people to the sample will......

Make the margin of error of the estimate smaller
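
A minimal sketch of why this happens, using the common 95% margin-of-error formula for a proportion, ME = 1.96 * sqrt(p*(1-p)/n), with a hypothetical worst-case p = .5:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 400, 1600):
    print(n, round(margin_of_error(n), 3))
# Roughly .098, .049, and .025: quadrupling the sample size cuts the margin of error in half.
```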

Self-reporting (a potential limitation)

Memories of events: accuracy vs. confidence

Self-reporting (a potential limitation)

People may report more than they can know

Do you use scatterplots for internal reliability?

No; use the correlation coefficient r, which captures the direction (slope) and strength of the association

A question in a survey or poll that contains negatively phrased statements, making its wording complicated or confusing and potentially weakening its construct validity

Negatively worded questions

Can a measure be valid and not reliable?

No- You need consistency to have a valid measure

A bias that occurs when observers' expectations influence their interpretation of the participants' behaviors or the outcome of the study- according to their own hypothesis, etc.

Observer bias

Method of measuring a variable by recording observable behaviors or physical traces of behaviors

Observational measure

Process of watching people or animals and systematically recording how they behave or what they are doing

Observational research

A change in behavior of study participants in the direction of an observer's expectations to MATCH them- subconsciously

Observer effects

Reactivity

Occurs when people change their behavior when they know someone else is watching -Observers have to blend in, etc.

The specific way in which a concept of interest is measured or manipulated as a variable in a study

Operational definition

Content validity is concerned with

Operationalization

How are you going to measure the variable of interest?

Operationalized variable

A scale of measurement that applies when the numerals of a quantitative variable represent a ranked order

Ordinal

Order of finishers in a 5K race

Ordinal

Karen is studying the effect of popularity on academic success for her research methods project. To do this, she has elementary school students rate how popular each member of their class is. She then uses this information to rank students on popularity (e.g., John is the most popular, Vanessa is the second most popular, ...). Which of the following best describes this variable?

Ordinal- ranked order

Difference between open-ended and forced-choice format?

Participants can say whatever they would like in open-ended format

Method of measuring a variable by recording biological data

Physiological measure

A(n) _____ measure operationalizes a variable by recording a participant's _____. -self-report; observable behaviors -behavioral; intrapersonal thoughts -physiological; biological data -observational; questionnaire answers

Physiological; biological data

Men with dementia

Population of interest

Professors at this university

Population of interest

South Asian Canadians

Population of interest

Rating of well-being on a 5 point scale

Quantitative- ordinal

Which of the following is correctly matched? a-Random assignment - internal validity b-random sampling - internal validity c-random assignment - external validity

Random assignment and internal validity- For causal relationships and experiments to rule out third variables

Blood alcohol content

Ratio

Reaction time at a computer task

Ratio

Scale of measurement that applies when the numerals of a quantitative variable have equal intervals and when the value of 0 truly means nothing

Ratio

How consistent the results of a measure are

Reliability

Stratified random sample

Representative

Shortcuts respondents take when answering survey questions

Response sets

For her research methods class, Serena plans to interview several teachers about their attitude toward teaching children who have ADHD. This is an example of what type of measurement?

Self-report

Instead of degree of agreement, respondents might be asked to rate a target object on a numeric scale that is anchored with adjectives

Semantic differential format

Sloping up, sloping down, or not sloping

Slope direction

Upward, downward or neutral slope of the cluster of data points in a scatterplot

Slope direction

Positive, negative, or zero

Slope direction in a scatterplot

Can a measure be reliable and not valid?

Yes- You can still have a consistent measure that does not measure what it is supposed to

Giving answers on a survey that make one look better than one really is ("faking good")

Socially desirable responding

A description of an association indicating how closely the data points in a scatterplot cluster along a line of best fit drawn through them

Strength

When people are using an acquiescent response set they are:

Tending to agree with every item, no matter what it says

Which of the following is a means of controlling for the observer bias?

The observer does NOT know the study's hypothesis

Concerns whether the operationalized variable measures what it is supposed to

Validity

Which is a way of preventing reactivity?

Waiting for participants to become used to the observer

When do people most accurately answer survey questions?

When they are describing their subjective experience; how they personally feel about something

Predictive validity

Whether the measure predicts what it is supposed to or not

What are two examples of response sets?

Yea-saying and fence sitting

A response set can be in the form of? -leading questions -Yea-saying answers -A Likert-type scale

Yea-saying answers (a shortcut)

Which of the following is an example of observer bias in a study on arm strength and mood?

A research assistant records the participant as stronger in the happy condition than the sad condition, because that fits the hypothesis

Quantitative variables

Coded with meaningful numbers, e.g., height and weight

A statistic based in part on sample size

Margin of error of the estimate

A study design in which observers are unaware of the experimental conditions to which participants have been assigned

Masked design

Method of measuring a variable in which people answer questions about themselves in a questionnaire or interview

Self-report measure

The researcher gets consistent scores every time he or she uses the measure

Test-retest reliability
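
A minimal sketch with made-up data: the same participants are measured at two time points, and the two sets of scores are correlated.

```python
import numpy as np

# Hypothetical scores for six participants measured on two occasions
time_1 = np.array([10, 14, 9, 12, 15, 11])
time_2 = np.array([11, 13, 9, 12, 14, 12])

r = np.corrcoef(time_1, time_2)[0, 1]
print(round(r, 2))  # a high r means scores stay consistent from one occasion to the next
```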

