comm 201 exam 2
conduct experiments
1. introduce & obtain consent 2. randomly assign 3. manipulate the independent variable 4. measure the dependent variable 5. debrief the participants
experimental research key components
1. random assignment 2. independent variable (cause) 3. dependent variable (effect) 4. control (comparison)
solomon 4-group design
RA Pr T O1; RA Pr X O2; RA T O3; RA X O4 (all four groups have RA). ex. Pr = questions about school, T = review, X = no review, O = grade
post-test nonequivalent groups design
T O1, X O2; ex. one class gets the review, the other class doesn't
true experimental designs
RA T O, RA X O; RA gives everyone an equal chance of being in either group
experimental notation
T = treatment; O = outcome; X = control; Pr = pretest; RA = random assignment
validity
accuracy: are we measuring what we think we're measuring?; threatened by systematic errors
telephone surveys
advantages: can call anyone, so samples can be representative. disadvantages: hang-ups; overrepresentation of older people; excludes people without phones
online surveys
advantages: cheap; data is already entered disadvantages: lower quality data
mailed surveys
advantages: can send anything anywhere. disadvantages: very low response rate (around 7%)
subjective validity
an opinion
convergent validity
associate/relate positively to other things that are real
cluster
break people into groups, but only select from some groups
stratified random
breaking people into groups and randomly pick from each group; why? to have equal numbers; better for sub populations
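The stratified procedure on this card can be sketched in plain Python; the population, names, and group labels below are made up for illustration:

```python
import random

# Hypothetical population: (name, group) pairs, 25 people per group.
population = [(f"person{i}-{group}", group)
              for group in ("freshman", "sophomore", "junior", "senior")
              for i in range(25)]

def stratified_sample(pop, key, n_per_group, seed=0):
    """Break people into groups by `key`, then randomly pick n from each group."""
    rng = random.Random(seed)
    groups = {}
    for item in pop:
        groups.setdefault(key(item), []).append(item)
    return [pick for members in groups.values()
            for pick in rng.sample(members, n_per_group)]

sample = stratified_sample(population, key=lambda p: p[1], n_per_group=5)
print(len(sample))  # 20: equal numbers (5) from each of the 4 groups
```

Because every group contributes the same number of people, small subpopulations are guaranteed representation, which is the "why" on the card.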
concurrent validity
who has the skill/ability right now? ex. clout
reliability
consistency and stability: can you consistently get the same result?; threatened by random errors
intercoder reliability
do multiple coders see the same thing?
perceptual questions
do you know the answer? yes or no; ex. how many planets are in the solar system?
construct validity
does my construct even exist? is clout real?
systematic
every Nth person
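"Every Nth person" can be sketched as a tiny Python function; the numbered list of people below is hypothetical:

```python
def systematic_sample(pop, n, start=0):
    """Pick every Nth person: the interval k is len(pop) // n,
    beginning at a (usually random) starting position."""
    k = len(pop) // n
    return [pop[(start + i * k) % len(pop)] for i in range(n)]

people = list(range(100))          # stand-ins for 100 people on a list
picked = systematic_sample(people, n=10, start=3)
print(picked)  # [3, 13, 23, 33, 43, 53, 63, 73, 83, 93]
```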
experimenter expectancy effects
experimenter has a bias (they see what they want to see)
regression to the mean
extreme scores have a tendency to move toward the average
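A quick simulation shows why this happens: if each observed score is stable ability plus random luck, the most extreme scorers at time 1 got lucky, and their luck does not repeat at time 2 (all numbers below are simulated, not real data):

```python
import random

rng = random.Random(0)
ability = [rng.gauss(0, 1) for _ in range(10000)]   # stable part
test1 = [a + rng.gauss(0, 1) for a in ability]      # ability + noise
test2 = [a + rng.gauss(0, 1) for a in ability]      # new noise, same ability

# Take the top 5% of time-1 scorers and compare their group averages.
cut = sorted(test1)[-500]
extreme = [i for i, t in enumerate(test1) if t >= cut]
avg1 = sum(test1[i] for i in extreme) / len(extreme)
avg2 = sum(test2[i] for i in extreme) / len(extreme)
print(avg1 > avg2)  # True: the extreme group moves back toward the average
```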
evaluation apprehension
fear of being tested for something; nervous being watched
types of behavioral questions
frequency?; duration?; timing?
likert scales
a full sentence; respondents indicate their level of agreement
test-retest reliability
give the same test twice: test at time 1, test at time 2; ex. IQ tests
semantic differential scales
has a partial sentence followed by bipolar adjective pairs; ex. "I feel that wvu is..." good 1 2 3 bad; fun 1 2 3 not fun; expensive 1 2 3 inexpensive
types of indicators for attitudes
how do I ask survey questions to measure people's attitudes?
attitude questions
how you feel about something; an evaluative judgment (good vs. bad)
internal consistency reliability
if you know your answer to Q1 you can predict their answer to Q2
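One common statistic that formalizes this idea (Cronbach's alpha, not named on the card) can be sketched in plain Python; the example answers are made up:

```python
def cronbach_alpha(scores):
    """Internal consistency. `scores`: one list of item answers per respondent."""
    k = len(scores[0])                     # number of items (questions)
    def var(xs):                           # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    items = list(zip(*scores))             # one tuple of answers per item
    totals = [sum(row) for row in scores]  # each respondent's total score
    return (k / (k - 1)) * (1 - sum(var(i) for i in items) / var(totals))

# Perfectly consistent respondents (Q1 fully predicts Q2) give alpha = 1.0.
alpha = cronbach_alpha([[1, 1], [2, 2], [3, 3]])
print(alpha)  # 1.0
```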
face validity
look at a measure and judge whether it looks right: on its face, does it appear to measure what it should?
discriminant validity
look for negative relationship to something that is real
criterion-related validity
measuring skills or abilities
pre-experimental designs (quasi)
no RA
quota
a non-random stratified sample; break people into groups and pick whoever you want from each group
assumption
not everyone has equal chance of getting picked
experiments
the only research design that allows us to make causal statements
maturation
people change over time
convenience
pick people who are easy to find
simple random
picking one person at a time at random
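Python's standard library can sketch this directly: `random.sample` draws people without repeats, each with an equal chance (the sampling frame below is hypothetical):

```python
import random

# Hypothetical sampling frame of 1,000 people.
frame = [f"person{i}" for i in range(1000)]

rng = random.Random(42)
sample = rng.sample(frame, 50)  # 50 distinct people, each equally likely
print(len(sample))  # 50
```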
overcoming disadvantages of survey research
pretest the survey (have a small group take it before everyone else); keep it short (max ~4 pages / 120 questions); include a cover letter
threatening behavioral questions
questions that ask people about behaviors they don't want to talk about
problems with random sampling
sampling error: random, uncontrollable stuff happens even with random sampling
sampling error
random chance ("shit happens"): the sample differs from the population just by luck
face-to-face interviewing
sit down and ask questions. advantages: can see nonverbals; get more than yes/no (descriptive data). disadvantages: lying; can only interview one person at a time
history
something outside the study affects something in the study
behaviors
something you do
reasons for using non probability samples
studying something brand new
cross sectional designs
a survey at one point in time; a snapshot, like taking a picture of the moment
longitudinal designs
surveys over time
sampling bias
a systematic difference between the sample and the population; the sample is biased
split-half reliability
take a measure, split it in half, compare the halves
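A minimal Python sketch of this, assuming the two halves are the odd- and even-numbered items and applying the standard Spearman-Brown correction (a detail not on the card); the answer data is made up:

```python
def pearson_r(xs, ys):
    """Pearson correlation between two lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(scores):
    """Sum the odd items and the even items for each respondent,
    correlate the halves, then correct for full test length."""
    odd = [sum(row[0::2]) for row in scores]
    even = [sum(row[1::2]) for row in scores]
    r = pearson_r(odd, even)
    return 2 * r / (1 + r)          # Spearman-Brown correction

scores = [[5, 4, 5, 4], [2, 2, 3, 2], [4, 5, 4, 5], [1, 2, 1, 1]]
rel = split_half_reliability(scores)
print(round(rel, 2))
```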
demand characteristics
the participant figures out what the experimenter is focusing on
content validity
we make sure we measure all aspects of a variable
knowledge
what do people know
attrition
when people drop out of your study
network
when we use other people (e.g., our participants' contacts) to reach respondents and collect data
purposive
when you choose people who meet a specific characteristic
predictive validity
who has skill/ability in the future?