Communication Research Exam 2

Sampling

How and why we select participants for a study

Does correlation imply the direction of the relationship?

No (it's the chicken-or-the-egg question). A and B are related to one another, but we don't know whether A causes B or vice versa.

Pre and Post Tests

Pre-test: measurement of the DV BEFORE exposure to the IV (serves as a baseline—how the sample fares before any manipulation). A pre-test can also compromise internal validity (= testing threat). Post-test: measurement of the DV AFTER exposure to the IV (allows us to measure the effects of the IV on the DV).

Margin of Error and confidence

(when we see it presented along with a statistic from a random sample, what does it tell us about the population parameter?) "43% of American adults approve of the job the president is doing" *with a margin of error of 3 percentage points. MOE = how confident we can be that the true value of the population parameter is close to 43%; it is the amount of uncertainty in an estimate. Pertains only to sampling error (not systematic error). Random samples are as likely to over- as underestimate the parameter of a population. CONFIDENCE INTERVAL: a range of values within which our population parameter is estimated to fall (ex. between 75-85% of registered Democrats will vote for Hillary Clinton; we want these to be small). CONFIDENCE LEVEL: how sure we are that our confidence interval is accurate. Ex. I'm 95% sure that 75-85% of registered Democrats will vote for Hillary Clinton in the 2016 presidential election.
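
A minimal sketch (not from the course materials; the sample size of 1,000 is a hypothetical value) of where a roughly 3-point margin of error comes from for a proportion like 43%, assuming a simple random sample and a 95% confidence level:

```python
# Margin of error for a sample proportion, assuming simple random sampling
# and a 95% confidence level. Sample size n is hypothetical.
import math

p_hat = 0.43   # sample estimate: 43% approve
n = 1000       # hypothetical number of respondents

z = 1.96       # z-score for a 95% confidence level
moe = z * math.sqrt(p_hat * (1 - p_hat) / n)

ci_low, ci_high = p_hat - moe, p_hat + moe
print(f"Margin of error: +/- {moe:.1%}")                  # roughly +/- 3 points
print(f"95% confidence interval: {ci_low:.1%} to {ci_high:.1%}")
```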

Summary of moving parts of Experiments

*Pre- and post-testing -Take measurements of the dependent variable before and after the treatment. *At least two conditions -Two levels or amounts of the independent variable (one level can be zero). *Random assignment to those conditions -Treatment/Intervention: researchers manipulate the IV -Control: researchers do not manipulate the IV *Two possible explanations for the result -Our hypothesis (what we predict is going to happen) -Our null hypothesis (the opposite of what we predict: that there is no effect) -The null hypothesis is often left unstated (researchers don't spell out the null).

True Experiments-decision tree

*True experiments have random assignment. *Is random assignment used? -Yes = true experiment -No ---> Is there a control group? -Yes = quasi-experiment -No = non-experimental. Notation: R: random assignment O: observation/measurement X: independent variable (treatment) EG: experimental group CG: control group

What three things must we have if we want to establish causation?

-Correlations that are strong! -Temporal precedence (one thing came before the other) -No spurious variables (confound variables)

What are the three threats to internal validity (researchers)?

1. Personal Attribute Effects-ie the appearance of the researcher may change how subjects answer. 2. Expectancy Effects-self-fulfilling prophecy 3. Observational Bias-when personal biases cause researchers to misinterpret what they observe and draw incorrect conclusions

POLLING as an example of sampling (Lapinski interview we read for class) • What are challenges or problems with polling that relate to sampling (e.g. public misunderstanding of estimates, non-response bias, sampling frame errors)

2016 presidential election polling was perceived as a failure. Lapinski found the polls weren't that far off from reality: Clinton ended up with a 2% popular vote margin over Trump, and most nat'l polls were within that margin of error. Problems? The public doesn't understand what an estimate is or how much uncertainty (= margin of error) exists, i.e. misunderstanding about what a poll result is actually telling us. The public isn't participating in polls (non-response bias); Dem/Rep trends in participation may vary according to political climate (and introduce bias). Sampling frame errors: polling people who may be unlikely to vote.

Correlation and Causation - Remember to establish causality we must meet 3 criteria How do experiments help researchers determine causality? Positive and negative correlations

At a minimum, we want the variables in our study to correlate with one another. Correlation does not imply a direction for the relationship/causation -A and B are related to one another, but we don't know whether A causes B or vice versa. Correlation does establish: -That a relationship exists. -The strength of that relationship. Positive correlations: when changes in one variable are associated with similar changes in another variable; both variables move in the same direction (up or down) (ex. student grades and course evaluations). Negative correlations: when changes in one variable are associated with opposite changes in another variable (when one goes up, the other goes down; ex. tuition and satisfaction). If we want TO ESTABLISH CAUSATION, we must have: A. Correlations that are strong B. Temporal precedence (time order) C. No spurious variables. Experiments are often designed to establish causation. -Allow us to control how variables are experienced and when -Allow us to control for other variables that may influence results
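
A minimal sketch (made-up data, not from the course) of what positive and negative correlations look like numerically, using Pearson's r:

```python
# Positive vs. negative correlation, using Pearson's r on made-up data.
from scipy.stats import pearsonr

study_hours  = [2, 4, 6, 8, 10]
grades       = [65, 72, 80, 86, 93]   # rises with study hours -> positive r
tuition      = [8, 9, 10, 11, 12]     # in $1,000s
satisfaction = [9, 8, 6, 5, 3]        # falls as tuition rises -> negative r

r_pos, p_pos = pearsonr(study_hours, grades)
r_neg, p_neg = pearsonr(tuition, satisfaction)
print(f"hours vs. grades:         r = {r_pos:.2f}")   # close to +1
print(f"tuition vs. satisfaction: r = {r_neg:.2f}")   # close to -1
```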

Designing response items/categories

Avoid overlapping response categories. Ex. How many times did your family eat dinner together in the last week? A: None B: 1-2 Times C: 2-4 Times (B and C overlap at 2). Ensure all response categories are exhaustive (so questions aren't left blank). Ex. What is your family income after taxes? A. $0-$29,999 B. $30,000-$49,999 C. $50,000-$69,999 D. $70,000-$89,999 Exhaustiveness is often ensured by including an "Other" option.

Conditions (In Experiments)

Experimental condition: a condition where the IV is manipulated. Ex. Masculinity threat condition (masc threat was the IV that was manipulated for this group of men). Control condition: a condition where the IV is NOT manipulated. Ex. Gender-affirming condition (masc threat was the IV and was NOT manipulated for this group of men). By manipulating the feedback the two groups of men received, researchers created two conditions: a "masculinity threat" condition (the experimental condition) and a "gender-affirming" condition (the control condition—the condition in which the IV was NOT manipulated). Having two conditions (one where the IV was manipulated and one where it wasn't) allows for comparison of the two groups and thus allows us to understand how the IV impacted the DV.

Controlling other factors - matters for validity

Experimental control of other factors that could influence the outcome of the experiment ---------------------------------------------------- Example: RQ: Do people judge (same-quality) resumes of men more favorably than resumes of women? Condition 1: read & rate resume with a man's name given Condition 2: read & rate resume with a woman's name given Researchers compare ratings of men's vs. women's resumes By using identical resumes, we are controlling other factors that might affect participants' ratings Which helps establish that it is gender that causes any effect Other factors like handwriting, sentence structure, aesthetics are "controlled for" or held constant ----------------------------------------------------- Presence of an uncontrolled confound variable raises questions about the study's validity. INTERNAL VALIDITY: Have I ruled out or controlled for other reasons that could be giving me these results? -Emphasizes confounding variables -Accuracy of conclusions drawn from a study -does the study lead to accurate results? EXTERNAL VALIDITY: Do the findings of my study generalize to the "real world"? -generalization of conclusions -sampling, replication, ecological validity -can the results be applied to other cases -------------------------------------------------------- Why have a control group? Helps protect and establish our internal validity. Placebo effect: when merely believing one has received a treatment shapes one's behavior. Hawthorne effect: when being observed shapes one's behavior. Double-blind experiments help address placebo and expectancy effects. Why pre-test? Gives us a point of comparison and confidence to know that something changed because of our treatment. Why post-test? Without this measure, we would have no idea if our treatment worked or not.

Validity and its threats

Good experiments should: -Provide a logical structure that allows us to pinpoint the IV's effect on the DV and help us address our hypotheses and/or research questions. -Help us to rule out alternative explanations for our results (or confound variables). -Issues of internal validity -Apply to contexts outside of this one experiment. -Issue of external validity. -Apply to real-world contexts - ecological validity -------------------------------------------------------- Threats to Internal validity: HISTORY: Events that take place between measurements in an experiment that are NOT related to the treatment effect -Studying attitudes towards war when a war is declared -More of a problem when experiments continue over long periods of time SENSITIZATION: (a good example is the influence of pre-testing participants when the post-test uses the same measure that the pre-test used). When an initial measurement influences measurements that follow. When participants are given the same questions over time, they can adapt to that measurement. Ex. Being asked a set of questions after each doctor's visit -------------------------------------------------------- Threats to Validity (Participants): *Selection (of participants) -Self-selection of participants -A more common bias when we use nonrandom sampling techniques. -Remember: random sampling techniques help us limit this bias. *Maturation -Internal changes that occur within participants but have nothing to do with the study -Ex. Effects of eating breakfast on reading scores *Mortality-Loss of research participants *Hawthorne Effects-Behavioral changes due to the fact that participants know they are being studied *Inter-subject bias-People in experiments talk to each other and share details about the experiment with each other (that they should not know). -------------------------------------------------------- Threats to Validity (Researchers) -Researchers may (accidentally and unknowingly) influence responses and observations in experiments... *Personal attribute effects-Ex. Social support and sensitivity—influence of male vs. female researcher *Expectancy effects-Self-fulfilling prophecy -Ex. Elementary students and educational achievement -Use a double-blind experiment to solve this issue: neither researcher nor participant knows whether the participant is in the treatment or control condition. *Observational bias-Drawing incorrect conclusions because of "seeing" certain things and not seeing others due to personal biases -------------------------------------------------------- Threats to External validity: Ecological validity-can the findings be generalized to real life?

History of Sampling-Literary Digest

Literary Digest poll in 1936 -Sent out 10 million ballots -Polled voters: Alf Landon or FDR? -Chose participants from phone directories and automobile registries. Results: Poll favored Landon (57%). Reality: Roosevelt won—by a lot! The poll predicted Landon would win 31 states; he won 2. What went wrong with their prediction? Sampling frame - telephone subscribers and automobile owners in 1936. They selected a wealthy sample and generalized to the larger voting population.

Experimental designs

Quasi-experimental designs have no random assignment, but have a control group. Symbols: R: random assignment O: observation/measurement X: independent variable (treatment) EG: Experimental group CG: Control group •2-GROUP (or Post-test only) design: [EG R X O] [CG R___O] Two-group experimental design: no pre-test; has post-test, treatment/control conditions, and random assignment. Possible issue: unable to say with certainty that our treatment resulted in a change. •PRE-TEST/POST-TEST design [EG R O X O] [CG R O___O] -Classic (pre-test/post-test) design: all of the required elements -Possible issue: sensitization -If you ask participants about a particular topic, you may prime them to focus on that topic -Delayed treatment to control—would look like an extra X at the end for the CG. If it is found that the EG benefitted from the treatment, researchers find it ethical to then give the control group that treatment too at the end of the experiment. •QUASI-EXPERIMENTAL design [EG O X O] [CG O___O] No random assignment; all other conditions met. Possible issue: self-selection bias—can't be sure groups aren't different from one another in confounding ways. •SOLOMON 4-GROUP design [EG R O X O] [CG R O___O] [EG R ___ X O] [CG R___ ___O] All of the required elements divided over four groups so that we have every possible combination of elements. -Allows us to control for potential measurement effects. -Most effective at establishing internal validity.

Two types of sampling and differences

RANDOM (Probability) Sampling: each person in the population of interest has an equal chance of being selected (random selection based on assumptions of probability theory) vs NONRANDOM SAMPLING: any sampling approach that does not adhere to the principles of probability theory

What is the best way to eliminate threats to experiments?

RANDOMIZE Step 1: random sampling (probability sampling) Step 2: random assignment (experimental vs. control) Randomization controls for both known and unknown potentially confounding variables. WHY? Because randomization randomizes everything—including error and bias!

Random Assignment

Randomly assign participants to experimental and control conditions. Works to ensure that the only difference between the experimental and control group is the IV, by ensuring that individual characteristics such as biological traits, family background, attitudes, and education are randomly distributed across conditions. Historically interesting - researchers used to use "matching". In our masculinity threat study, we would want to ensure that the men in both the control and experimental conditions don't differ from each other in systematic ways (ways that might explain the results that aren't due to the exposure to the IV) ---------------------------------------------------- Ex: RQ: Do negative gender stereotypes cause female students to perform poorly on aptitude tests? IV and DV? IV = negative gender stereotypes (the cause; the variable I'm going to manipulate) DV = test performance (the effect; the variable I'm going to measure) 2 Conditions? Experimental: I expose female students to negative gender stereotypes Control: I don't expose female students to negative gender stereotypes By subjecting participants to different conditions, we can determine whether exposure to the IV caused changes in the DV (by eliminating individual differences)
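
A minimal sketch (hypothetical participant IDs) of random assignment: shuffle the participant list and split it so that chance, not the researcher, decides who lands in each condition:

```python
# Randomly assign participants to experimental vs. control conditions.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical participants

random.shuffle(participants)                 # chance decides the ordering
half = len(participants) // 2
experimental_group = participants[:half]     # will receive the manipulated IV
control_group = participants[half:]          # IV not manipulated

print("Experimental:", experimental_group)
print("Control:     ", control_group)
```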

Double-Blind Experiments

neither the participants nor the researchers know which participants belong to the control group and which belong to the treatment group.

p <.05

the probability of getting results like ours if no relationship actually exists (i.e., if the null hypothesis is true) is less than 5%

Sampling error (random sampling) vs. Systematic error (nonrandom sampling)

SAMPLING ERROR (RANDOM SAMPLING): the difference btw the estimates and the true parameter is due to chance; the degree to which a sample's characteristics differ from the population's characteristics. High sampling error = low representativeness. SYSTEMATIC/NONRANDOM ERROR: when the difference is due to a 'flaw' in the design; we can't know the size of the bias introduced. Ex. Network news quick polls

Self-report vs. other-report - strengths and weaknesses of both

SELF-REPORT surveys: asking participants to report about their own attitudes, behaviors, opinions, etc. -Most common type of survey! -Benefits: -Useful for measuring one's psychological characteristics -Useful for assessing one's own behavior -Some things only the individual him/herself can answer. -Drawbacks: -Sometimes, individuals are unable to answer accurately -Recall issues or carelessness in response -Prone to social desirability bias OTHER-REPORTS: ask an individual to respond about someone else's behavior, opinions, etc. -Benefits: -Useful when evaluating one's performance/skills -Can provide more accurate, objective information -Drawbacks: -Limited observations -Lack of motivation to report -Bias toward a person and social desirability

Clustering vs Stratification

Similarities: sampling is done in stages; the population is divided into groups (clusters or strata). Differences: clusters are selected at random and the sample is drawn only from the selected clusters; for stratified samples, the sample includes some individuals from every stratum, but researchers decide what proportion of the sample should be drawn from each stratum.

Composite measures: Indexes vs. Scales

Some variables can be measured w/ 1 survey item: Current smoking... use in the past 30 days? Select Y or N. With constructs, we often design surveys that cover all aspects (content validity). The more complex the construct = more indicators/questions. Ie physical activity vs. social justice. COMPOSITE MEASURES: combining multiple items to create a single value/score that captures a multifaceted construct. Often there is no single indicator of a complex construct. Composite measures allow a single value to represent a complex construct (we compose multiple items into one score) and allow for more precision in measurement. Indexes: -Employ multiple observations or items of measurement -Usually combine items w/o concern about their intercorrelations -Composite scores that allow you to rank more or less of your construct. Scales: -Employ multiple observations or items of measurement -Usually evaluate item intercorrelations before selecting items for inclusion -Composite scores that allow you to rank more or less of your construct; also allow for INTENSITY. An INDEX SUMS responses to survey items to capture key elements of a concept being measured. Index items may not be statistically related to each other. Ex. ACE index: divorced parents not related to sexual abuse. Indexes allow for ranking of data but fail to account for the fact that certain indicators are stronger/better measures of specific constructs. A SCALE AVERAGES responses to a series of related survey items that capture a single concept or trait. Scales combine items (a composite of items) of different degrees (or intensity). Scales allow for ranking of data and inclusion of intensity. Contain more information than indexes. More difficult to build. Built to ensure that items that "hang together" measure a single construct.
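
A minimal sketch (hypothetical responses to the "attitude toward apples" items) of the scoring difference: an index sums yes/no indicators, while a scale averages graded (intensity) responses:

```python
# Index vs. scale scoring on hypothetical "attitude toward apples" items.

# Index: six yes/no indicators, scored 1/0 and summed.
index_responses = [1, 0, 1, 1, 0, 1]
index_score = sum(index_responses)            # 0-6; higher = more of the construct

# Scale: Likert-type items (1 = strongly disagree ... 5 = strongly agree), averaged.
scale_responses = [2, 4, 5, 5, 4, 3]
scale_score = sum(scale_responses) / len(scale_responses)  # captures intensity too

print("Index score:", index_score)              # e.g., 4 of 6 indicators endorsed
print("Scale score:", round(scale_score, 2))    # e.g., 3.83 on a 1-5 scale
```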

What can SD tell us?

Standard Deviation can give us information about the shape of our data.

What are the 2 frequently used quantitative methods in comm research

Surveys and Experiments

Surveys vs Polls

Surveys vary in their breadth and depth but usually contain several questions. A poll is a brief, single-topic survey, usually with a question or two, ie Do you agree with the President's decision to withdraw from the Paris Climate Accord? On a scale from 1 (highly agree) to 5 (highly disagree), how do you feel about the university's decision to raise tuition?

Main Issues related to research implementation (Wilkins)

once variables are conceptualized and operationalized, how do researchers acquire this data?

Surveys-uses, strengths, weaknesses

Surveys: collect attitudes, opinions, behaviors from a sample to make observations and generalize about the aggregate. -Describe patterns, test hypotheses, explore differences across groups of people, document patterns of stability and change -Relies on RESPONDENTS -Often feature the use of QUESTIONNAIRES Time in Surveys: Cross-sectional, Longitudinal -------------------------------------------------------- Strengths: -Breadth of topics can be covered -Comparisons across groups (and over time) -Most surveys include demographic info (age, race, sex, marital status, maybe education) -All respondents answer the same questions with the exact same wording and same response categories, allowing easy comparison -Ex. Are college graduates more politically liberal than people who did not go to college? (comparing subgroups) -Ex. Have political attitudes become more liberal or conservative? (tracking change over time) - capability for generalizing--surveys can be useful for describing characteristics of a larger population. (makes larger sampling possible, can survey a lot of people at little cost) -best accomplished through probability sampling - flexible: can ask multiple questions on a subject (not possible with other methods like experiments) -standardization: constructs stay consistent from respondent to respondent because explanations are written down. Fixed Qs & response categories allow us to compare people across categories: Ex. What is the highest educational degree that you have completed? Graduate degree or higher 4-year college degree 2-year college degree High school diploma Less than a high school diploma Compare to a non-fixed Q: How much schooling do you have? Fixed categories also allow us to compare subjective phenomena: Consider a 7-point scale on which political views people might hold are arranged from extremely liberal (1) to extremely conservative (7). Where would you place yourself on this scale? -------------------------------------------------------- Weaknesses: - Square peg, round hole problems: sometimes the problems we have don't fit neatly into a survey - Is your construct valid? (it's important to know about the questions you're asking. are they measuring what they're supposed to) - question responses lack context--issues of artificiality in responses (desirability bias can be a big issue) - inflexibility: once the instrument is finalized, it's difficult to adjust or change. -**in general, reliable, but not necessarily valid

Sampling Example: How Latinos Connect with Health Information

The same study conducted over the phone (through random digit dialing) and online yielded vastly different results. Why? BECAUSE OF THE LANGUAGE BARRIER. (the phone survey was available in English and Spanish; the online survey was English only)

Sampling Representativeness and generalizability

Understand how sampling matters for determining whether our sample is representative of the larger population we are interested in, and thus whether we can generalize our findings to more than our sample. Representativeness: how closely a sample matches its population in terms of the characteristics we want to study. Representativeness --> generalizability ex: -I want to study resilience among the homeless but I have access to a sample of 'couch surfers' ---Can I generalize to my population of interest? -I test 200 volunteer students on geography ---Can I generalize to the larger population of students in my district? Representativeness --> validity ex: -I test 200 MIT students on math ability to understand Americans' math skill compared to other countries -The validity of my results is compromised because the results may be due to the unusual place I did the study-ie MIT students are likely to have better math skills than the average American -I can't say much about Americans' math skill from my sampling approach

Variables

Variables: characteristics, numbers, or quantities that take different values in different situations. In experiments we either measure or manipulate variables. Characteristics such as age, gender, education, income Attributes such as extroversion, introversion, generosity Behaviors such as hostile behaviors, affiliative behaviors or problem solving behaviors Performance such as grades, GPA, employer evaluation scores Attitudes or orientations such as conservative/liberal, spontaneous/ •Independent variables (manipulated; the 'cause'): Independent (X) Believed to influence another variable. Manipulated by the researchers in the experiment. -------------------------------------------------- •Dependent variables (measured; the 'effect'): Dependent (Y) Believed to be changed by another variable. Y depends on X. Measured by the researchers in the experiment. Both must be operationalized. The IV and DV change from study to study: manipulate the independent variable, measure the dependent. Experiments manipulate the independent variables, meaning researchers actively change the level of the IV and observe the effects of that change. With other methods, researchers may observe the natural variation of the IV (rather than actively control or change it) --------------------------------------------------- •Confound variables/spurious variables: variables not being controlled by the researchers that can also vary systematically with the dependent variable. -leads us to think that a relationship exists between our IV and DV when really something else is causing the effect. Presence of an uncontrolled confound variable raises questions about the study's validity. Example: A recent study found that heavy drinkers don't live as long as people who abstain from drinking completely or who drink in moderation. -IV = amount of drinking (the cause) -DV = longevity (the effect) -Possible confounding variables = other variables that could impact longevity (diet, social support, pollution/environment)

"Chocolate milk" survey (discussed in the course reading) - what does the "chocolate milk" survey demonstrate about survey question design and how they can be manipulated? How does it reveal the importance of audiences understanding potential conflicts of interest?

Washington Post headline: "7% of American adults think that chocolate milk comes from brown cows" The survey was commissioned by a dairy advocacy group to gauge what Americans know about ag/food production, intended to get "fun facts" about Americans' knowledge. The survey asked 1,000 adult Americans "Where does chocolate milk come from?" A. Brown cows (7%) B. Black and white cows (93%) C. I don't know "We've become accustomed to seeing these kinds of poorly phrased survey questions pop up and go viral because of some bonkers statistic they claim to support" -The headline implies that 7% of adult Americans think chocolate milk comes from brown cows when it was 7% of the 1,000 adult Americans surveyed. -Overall it was a poorly formatted question that set people up for failure

random error

unpredictable, chance-based error—e.g. a possible typo, a stressed participant, or a participant who misread the question.

measurement reliability

a reliable measure is stable, consistent, and has little measurement error

Sample

a representative subset of a population that we use to make generalizations

Drawing Samples: Population

a complete list of persons/objects we want to study. (ex. census, class roster). Every possible item of a pre-defined aggregate that could be studied Ex. Communication students, university faculty, all clients served in 2016, deployed military officers

Statistical Significance

a way to understand whether our results are due to chance or bias, or whether they are due to the relationship between our IV and DV

Analysis of Variance (ANOVA)

allows us to compare differences between three or more groups (ex. differences in student satisfaction across Big Ten schools)
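
A minimal sketch (made-up satisfaction ratings for three hypothetical schools) of a one-way ANOVA comparing the means of three groups:

```python
# One-way ANOVA: do mean satisfaction ratings differ across three schools?
from scipy.stats import f_oneway

school_a = [7, 8, 6, 7, 9]
school_b = [5, 6, 5, 7, 6]
school_c = [8, 9, 9, 7, 8]

f_stat, p_value = f_oneway(school_a, school_b, school_c)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# If p < .05, at least one school's mean satisfaction differs from the others.
```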

Nonrandom (Nonprobability) Sampling

any sampling approach that does not adhere to the principles of probability theory. Ex. To study binge drinking among first-yr students, I survey 200 first-year students in Psychology 101 (convenience sample). Would this sample accurately represent binge drinking among all first-yr students? Probably not. - when might we use these different nonrandom sampling techniques and why? • Convenience sampling-use whoever is most conveniently available. ie 450 undergraduate students in Com 101 are asked to participate in a survey about balancing work and school • Volunteer sampling-asking individuals to volunteer as participants. ie A researcher posts a flyer asking 18-24 year old women diagnosed with depression to contact her if interested in participating in an interview. • Purposive/Quota sampling-segmenting the sample into different kinds of people, and then selecting individuals who fit those characteristics. -Common quota categories: gender, race, age, religious groups, political affiliation, etc. • Snowball sampling-start with one participant, and snowball/branch out to include their references/acquaintances. -Useful for 'invisible' or difficult-to-reach populations. You ask a subject if they know anyone else who could get involved with the study. -Participants are not randomly chosen -So: not all participants have an equal likelihood of being selected for participation -More prone to bias and higher error (due to possible systematic biases introduced into the sample) -Sample is thus NOT easily generalizable to the larger population Eg. -- journalists asking passersby on the street *Can't assume that results gathered from these samples generalize to larger populations because they are not random. Limitations and benefits of nonrandom sampling: advantages: -more convenient -can be better for initial hypothesis testing -Diversity of representative samples can make detecting cause-and-effect relationships more difficult (explanatory studies are hard!) -Can often gather more/better info on nonrepresentative samples (e.g. "The Nurses Health Study" got high participation rates when collecting blood and urine samples) limitations: -Nonrandom sampling methods make it hard to establish representativeness and generalizability, b/c our choices of who to sample may be biased and so not accurately represent the population -Often used to test out survey or interview questions or used when you can't construct a sampling frame -not unbiased like random sampling

Median

are best when you have some extremely high or extremely low values that might affect your mean (income)

Mean

are reported most often; when in doubt, report the mean.

what is the main issue with the mean?

are subject to bias from dispersion—extreme values can pull the mean up or down—so they are not always the best measure.

Hawthorne Effect in Experiments

behavioral changes due to the fact that participants know they are being studied.

Systematic Sampling

choosing every Nth person in a population to create your sample.

Cluster Sampling

clusters (locations) are randomly sampled first, with individuals then randomly sampled from within the selected clusters. Allows us to get better results at lower cost. Requires weighting the sample so that some people 'count' more than others. ex: I want to sample American university students (no list of all American university students exists). Student lists from 3 randomly selected universities are easily accessible; randomly sample from each list (much cheaper to do). Another reason to use cluster sampling is when we want to sample a population across a large geographic region. If we take a random sample of NY state, we'd have to travel pretty far to interview each person, so we could do a cluster sampling of 5 counties. More efficient and cost effective.

Weaknesses of Natural Experiments

depends on the actual event, trade-off between internal and external validity

Range

distance between the highest and lowest scores in a distribution

measurement error

error introduced by the measurement itself—e.g. is it a leading question or a double-barreled question?

Stratified Random Sampling

dividing the population along a chosen characteristic and then randomly sampling from each group. ie dividing students by race and seeing who they are more likely to vote for. ex: Kalev et al. (2006) studied the effectiveness of diversity training using a sample of 700 businesses. To identify the population of businesses from which to sample: US employers w/ 100+ employees must file reports with the EEOC; the report = sampling frame. A simple random sample of the roster would be dominated by small companies (even though more employees work for large companies in the US). Divided employers into strata: 1) employing >500 and 2) employing <500. Selected 2x as many large companies to better reflect the proportions of Americans who work for them. Advantages of stratified sampling: assures that you will be able to represent not only the overall population, but also key subgroups of the population. Allows one to oversample - particularly smaller groups. If you want to discuss subgroups, this is a way to assure you'll be able to get enough participants -A simple random sample may not include enough minority participants (b/c each individual has an equal chance of getting selected), so stratification allows us to oversample. Why do this? Sampling error is impacted by size of sample (larger samples = lower sampling error) and homogeneity of sample (higher homogeneity = lower sampling error)

Variance

do these questions achieve a variety of responses?

Observation Bias

drawing incorrect conclusions because "seeing" certain things and not seeing others due to personal biases

Equal Probability of Selection

every individual in the population has an equal chance of being selected for the study.

Chi-Square Example

gender and church attendance. (there is no relationship between gender and church attendance)

Mode

generally reported with nominal data (less meaningful; ex. race, university major)

Why have control groups?

helps protect and establish internal validity

High Standard Deviation

high dispersion; the distribution is spread out further and the curve is flatter.

Standard Deviation

how much individual scores vary (or deviate) from the mean score of the group.
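
A minimal sketch (made-up scores) showing how the standard deviation summarizes how far individual scores fall from the group mean:

```python
# Standard deviation: how far scores typically deviate from the mean.
import statistics

tight_scores  = [78, 80, 79, 81, 80]   # clustered around the mean -> low SD
spread_scores = [60, 95, 70, 88, 87]   # widely dispersed -> high SD

print("Mean (tight): ", statistics.mean(tight_scores))
print("SD (tight):   ", round(statistics.stdev(tight_scores), 2))   # small
print("Mean (spread):", statistics.mean(spread_scores))
print("SD (spread):  ", round(statistics.stdev(spread_scores), 2))  # large
```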

Confidence Level

how sure we are that our confidence interval is ACCURATE. (I am 95% sure that 75-85% of registered Democrats will vote for Hillary Clinton. If I say 50%, my confidence level is low. We want these to be high.)

When is research traditionally significant?

if the p < .05

Low Standard Deviation

low dispersion; scores are clustered closely around the mean (the curve is taller)

Measures of Dispersion and what they tell us about our results?

measures that report how far a set of scores is spread around the center point of the data and across the distribution of the data.

Probability/Random Sampling

methods of identifying study samples that adhere to the assumptions of probability theory: each person in the population of interest has an equal chance of being selected = random selection. Based on probability theory = procedures assure that different units in your population have equal probabilities of being chosen (e.g. picking a name out of a hat; choosing the short straw; lottery drawings) Ex. Want to study binge drinking of a university's 2,000 first-yr students. Take a random sample of 200. Sampling frame = freshman enrollment roster. Use a computer program to assign each student a random number and then survey the lowest (or highest) 200 numbers. Equal probability of selection: every individual in the population has an equal chance of being selected for the study. Random selection: individuals chosen by chance. Helps researchers to avoid sampling biases. •SIMPLE random sampling-Sampling units randomly selected from a population. -Each unit has an equal chance of being selected -Assumes the sampling frame contains all members of a population, and that they are numbered. -Relies on random numbers to identify the sample -Usually used w/ smaller populations whose members can be identified individually (ie. you have a complete sampling frame) •SYSTEMATIC sampling-Choose every Nth person in a list to create your sample. -Sampling interval: the standard distance between elements (population size/desired sample size). -Ex. 240 students / 12 (sample size) = 20 Choose every 20th person in the population -Randomness improved by choosing a random starting point. *The difference btw systematic and simple random sampling: every member of the population in a systematic sample has an equal prob of being selected, BUT NOT ALL PAIRS ARE EQUALLY LIKELY TO BE SELECTED. •CLUSTER sampling-clusters (locations) are randomly sampled first, with individuals then randomly sampled from within the clusters. -Advantages: -Does not assume a complete sampling frame, i.e. you can't easily access a list of your sample—that's why we use the clusters first -Much cheaper to conduct than simple random sampling •STRATIFIED random sampling (allows oversampling to ensure that enough individuals of smaller groups are present in the sample in large enough numbers). Dividing the population along a chosen characteristic and then randomly sampling from each group. Ex. Students by race -Estimates based on random samples are unbiased. -To whatever extent estimates differ from the true population parameter, they are equally likely to over- as underestimate it. (This is not true of convenience samples) -In a random sample, the difference btw the estimates and the true parameter is due to chance. -it helps researchers avoid sampling bias - increases the likelihood of representation and generalizability; - reduces the chance of sampling error.
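
A minimal sketch (hypothetical 2,000-student roster) of how simple random, systematic, and stratified selection differ in practice:

```python
# Simple random, systematic, and stratified sampling from a hypothetical roster.
import random

roster = [f"student_{i:04d}" for i in range(1, 2001)]   # sampling frame (N = 2,000)

# Simple random sample: every student has an equal chance of selection.
simple_sample = random.sample(roster, 200)

# Systematic sample: every Nth student (interval = N / n = 2000 / 200 = 10),
# starting from a random point within the first interval.
interval = len(roster) // 200
start = random.randrange(interval)
systematic_sample = roster[start::interval]

# Stratified sample: divide into strata, then randomly sample within each one.
strata = {"first_year": roster[:500], "upper_class": roster[500:]}
stratified_sample = (random.sample(strata["first_year"], 100) +
                     random.sample(strata["upper_class"], 100))

print(len(simple_sample), len(systematic_sample), len(stratified_sample))
```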

Advantage of "Natural Experiments"

naturally occurring events (not controlled by the researcher); they let researchers study treatments or events that would otherwise be unfeasible or unethical to manipulate (natural disasters, etc.)

Inter-Subject Bias

people in experiments talk to each other and share details about the experiment with each other (that they should not know)

Central Tendency Bias

people's tendency to pick the neutral option in a survey/poll, etc

Index

ranking of data (how rich you are, how close you are with someone) EXAMPLES: TV Violence: # of violent events in an episode Political Preference: # of conservative politicians voted for in 2016 Health status: # of health symptoms one has Religiosity: # of religious beliefs one endorses CONSTRUCTING AN INDEX: 1.) Construct: "attitude toward apples" 2.) Identify indicators: Buy apples every time when grocery shopping Eat apples every day Enjoy the taste of apples Enjoy the crunch of apples Enjoy the smell of apples Enjoy the color of apples 3.) Convert indicators into survey items: Do you buy apples every time you grocery shop? Y/N Do you eat apples every day? Y/N Do you enjoy the taste of apples? Y/N Do you enjoy the crunching sound when eating apples? Y/N Do you enjoy the smell of apples? Y/N Do you enjoy the color of apples? Y/N 4.) Assign each item a score; for indexes each item is scored the same (Apple attitude index: score 0-6) Composite score is total sum of 'Yes' responses -------------------------------------------------------- Indexes Need/Have: FACE VALIDITY: Do these questions logically make sense? UNIDIMENSIONALITY: Do these questions only measure one construct? SPECIFICITY: Are these questions general/specific enough for my purposes? Ex. General religiosity or religious participation? (Even a single construct has many nuances) VARIANCE: Do these questions get a variety of responses? If a question identifies all participants in a random sample as the same, you might question your items.

Scale Construction

ranking of data AND intensity. Scales contain more information and are more difficult to build; they must be built on analysis to ensure that the items "hang together" and measure one single construct. EXAMPLES: TV Violence: # of violent events in an episode AND their severity Political Preference: # of conservative politicians voted for in 2016 AND how conservative they are CONSTRUCTING A SCALE: 1.) Take our construct: "attitude toward apples" 2.) Our indicators: Buy apples every time when grocery shopping Eat apples every day Enjoy the taste of apples Enjoy the crunch of apples Enjoy the smell of apples Enjoy the color of apples 3.) Convert indicators into scale items: I buy apples every time I go to the grocery store I eat apples every day I enjoy the taste of apples I enjoy the crunch of apples I enjoy the smell of apples I enjoy the color of apples 4.) Score each response option differently: 5 - 4 - 3 - 2 - 1 strongly agree (5) - somewhat agree (4) - neutral (3) - somewhat disagree (2) - strongly disagree (1) I buy apples every time I go to the grocery store (2) I eat apples every day (4) I enjoy the taste of apples (5) I enjoy the crunch of apples (5) I enjoy the smell of apples (4) I enjoy the color of apples (3) Highest score possible: 30 points (strongly agree to every item) Composite score: 23 This is called a Likert scale (lick-ert)

Simple Random Sampling

sampling units randomly selected from a population. Example: 1.) You work at a company that wants to research clients' views of quality of service over the last year. How do we select a simple random sample for this study? Prepare the sampling frame - sort company records, identify every client over the last year to get a list of clients (N=1000). Sample size - decide on the number of clients in the sample (say you decide to sample 10% of the 1000 clients from last year, s=100). Draw the sample - put each client name in a hat and select 100; or use a computerized random # generator: give all members of the sampling frame a number btw 1 and 1000 and randomly choose 100 numbers up to 1000 (from a random # table). - it assumes that the sampling frame contains all members of a population - relies on random numbers to identify the sample - the most straightforward, but not necessarily the most accurate. 2.) Weitzer & Kubrin (2009) studied misogynistic song lyrics. To prevent bias in song selection, they made a list of every song on a platinum-selling album btw 1992-2000 (2,000 songs). They used a computer to choose 400 songs at random from their sampling frame. Each song on the list was as likely to be selected into the sample.

Guttman Scales

scale measuring a SINGLE trait where different items are more/less intense than others. It assumes that individuals answering a certain way on more intense items will respond similarly to less intense items Constructing a Guttman Scale: Take our construct again: "attitude toward apples" Each item in a Guttman scale has yes/no option Each item is equal in score Have you ever eaten an apple? Have you eaten one or more apples in the past year? Have you eaten one or more apples in the past month? Have you eaten one or more apples in the past week? Have you eaten one or more apples in the past day? Have you eaten one or more apples today? Goal - to be perfectly logical so that individual who answers 'Y' to Item 6 will have also answered 'Y' to all others If participant answers 'Y' to 1-3 and 'N' to 4-6, they score a 3 on our Guttman scale
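
A minimal sketch (hypothetical yes/no responses) of scoring the cumulative apple items described above — the score is simply how many increasingly intense items the respondent endorses:

```python
# Score a Guttman (cumulative) scale: count 'yes' answers to items
# ordered from least to most intense.
items = [
    "Have you ever eaten an apple?",
    "Eaten one or more apples in the past year?",
    "Eaten one or more apples in the past month?",
    "Eaten one or more apples in the past week?",
    "Eaten one or more apples in the past day?",
    "Eaten one or more apples today?",
]
responses = ["Y", "Y", "Y", "N", "N", "N"]      # hypothetical respondent

score = sum(1 for r in responses if r == "Y")   # scores 3 on a 0-6 scale
print("Guttman score:", score)

# A perfectly scalable respondent answers 'Y' to every item before their first 'N'.
first_no = responses.index("N") if "N" in responses else len(responses)
print("Response pattern is cumulative:", "Y" not in responses[first_no:])
```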

Purposive/Quota Sampling

segmenting sample into different kinds of people and then selecting individuals who fit those characteristics. ie gender, race, age, religious groups, political affiliation, etc.

T-Test

statistic used to find differences between two groupings of the independent variable on the continuous level dependent variable.
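
A minimal sketch (made-up post-test scores) of an independent-samples t-test comparing a treatment group and a control group on a continuous DV:

```python
# Independent-samples t-test: treatment vs. control on a continuous DV.
from scipy.stats import ttest_ind

treatment_scores = [78, 85, 90, 72, 88, 84]
control_scores   = [70, 75, 80, 68, 74, 77]

t_stat, p_value = ttest_ind(treatment_scores, control_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# p < .05 would suggest the group means differ by more than chance alone.
```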

Chi-Square or X^2

tells us when two variables are associated with one another. Determined by comparing the expected distribution of our variable with the actual (observed) distribution.
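
A minimal sketch (made-up counts) of a chi-square test on the gender and church attendance example — comparing observed counts with the counts expected if the two variables were unrelated:

```python
# Chi-square test of association: gender x church attendance (made-up counts).
from scipy.stats import chi2_contingency

#                attends, does not attend
observed = [[40, 60],    # women
            [25, 75]]    # men

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
print("Expected counts if no relationship:", expected.round(1).tolist())
```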

Sampling Frame

the "list" from which all members of a population are sampled (those you can actually identify and access) Ex. Local telephone book to represent all residents of a county (practical but imperfect) Ex. Roster of enrolled university students Ex. Faculty directory

Parameter

the characteristic you're interested in studying (e.g. binge drinking, voting preferences)

Sampling Intervals

the standard distance between elements in our population (population size/desired sample size)

when do you use percentages?

to see who gives a higher percentage of their GNI per year.

Social Desirability Bias

answering in a way that we think will make us look good rather than in a way that is truthful

When do you use absolute numbers?

when reporting on just the raw numbers of the research; use absolute numbers to see who gives more money per year.

Acquiescence Bias

when someone answers the way they THINK the researcher wants them to (a tendency to agree with all questions, which can produce artificial positive correlations)

Margin of error vs. other kinds of sampling errors

• Sampling frame error: ie polling/sampling the wrong people. Sampling frame error occurs when there is a mismatch between the people who could possibly be included in the poll (the sampling frame) and the true target population. • Nonresponse error or nonresponse bias: the public isn't participating in polls/sampling; occurs when the likelihood of responding to a survey is systematically related to how one would have answered the survey. • Trends in how people respond to surveys: trends may vary according to the social/political climate. *To understand these, relate them to the Lapinski example.

Rules for designing survey questions

•ASK RELEVANT QUESTIONS: Know your audience! There is no point in asking for people's ideas, opinions, or behaviors if people don't care or don't know about those issues. Ex. Are you familiar with the politician Tom Sakumoto? 9% said yes; half of that 9% said they had read about him. •DON'T ASK INFO YOU DON'T NEED to answer your RQ: Don't ask for information you don't need—especially if it's sensitive information. Ex. Do you drink alcohol on a regular basis? vs. Please tell me the exact number of alcoholic drinks you have consumed during the last 7 days. •Pay attention to ORDER EFFECTS: Pay attention to question ordering. Early questions can affect how participants answer later questions; this occurrence is known as order effects. Ex. When asking about someone's charitable giving, it might be a bad idea to ask them to report their generosity level before asking about their monetary donations. Ex. 1. How religious are you? Not at all (1) Extremely Religious (6) 2. How many times have you attended religious services in the last month? _____. •Use MULTIPLE QUESTIONS to assess the SAME CONSTRUCT: Write multiple questions to assess the same construct. Ex. Rosenberg's self-esteem scale asks 10 Qs and averages the scores together to get a single self-esteem score. Q1: I feel that I have a number of good qualities (circle one number) Q2: I feel that I am a person of worth, at least on an equal basis with others (circle one number) 0 1 2 3 4 5 6 7 (Not at all - Extremely)

Principles for designing the actual questions in surveys (language/wording is important)

•AVOID LOADED QUESTIONS: questions that presume or suggest what the "right" or "reasonable" response is. Ex. Don't you think that obesity is a serious problem? Ex. Many believe that politicians are untrustworthy. How untrustworthy do you believe politicians to be? Ex. Can you do your job well? (asked to nurses on a staff satisfaction survey) •AVOID TRIGGER TERMS OR TERMS W/ BAGGAGE: specific words or phrases that carry larger cultural or political meanings. Ex. Do you think people should provide assistance to the poor vs. Do you think people should provide welfare to the poor? Ex. To what extent do you trust the police who protect and serve our city's residents? •AVOID DOUBLE BARREL QUESTIONS: combining two different ideas into one question when the respondent is forced to give only one answer. Ex. Are you comfortable talking to strangers and giving public speeches? Ex. Do you agree that women should have free access to birth control and a right to choose? •AVOID NEGATION: more likely to create confusion. Ex. Do you agree or disagree that the U.S. should not allow refugees to enter the country? •ASK SHORT, CLEAR, AND CLOSE-ENDED QUESTIONS: Avoid overly complex language, jargon, or technical terminology. Open-ended vs. close-ended: Are a set of answers provided? Ex. In your opinion, which of the following causes is the most important? Recycling Organic farming Reclaimed water systems Animal rights Ex. In your opinion, which causes are the most important? •CONTEXTUALIZE SENSITIVE QUESTIONS: Helps to soften the awkwardness of responding to the question. Also prepares the respondent for the questions to come. Ex. Everyone needs help from their friends and family sometimes. I'd like to ask you about times that you might need someone to help you with situations that require speaking, writing, reading, or understanding English. How often does a family member, a friend, or a child help you... Q1. Understand English-language television or radio? Q2. Understand newspapers written in English? Q3. Understand mail that comes to your house (bills, newsletters, etc.)?

Organizing survey questions into formats (or series of questions) - what are they are why might you use one over another?

•FUNNEL FORMAT: -Begins with broad questions, moves to more specific ones -Assists respondent in recall of detailed information -May ease into a sensitive topic Ex: Tell me about your childhood-> ->Best childhood memory? worst? ->How much did you get along with mom? Dad? ->How was discipline dealt with ->specific questions about abuse •INVERTED FUNNEL FORMAT: -Begins with specific questions, moves to more general questions -Initial questions set a frame of reference for following ones -May be used w/ topics that don't evoke strong feelings (direct Qs are often easier to answer first) Ex: Did you prefer fluoride toothpaste A or B? ->Which features/benefits appealed most to you? ->Why do you think it will appeal to other customers? •CONTINGENCY QUESTION FORMAT: -Use certain questions to determine which questions will be relevant for the participant -Can steer participants toward relevant questions and away from those that don't concern them Ie Have you ever smoked marijuana? ---> If yes, how many times? -Once -2-5 -6-10 -etc

Survey modes of administration - types, advantages and limitations

•Face-to-face/interview: Strengths -Interviewer can help make sure the respondent understands the questions (researcher control) and does not skip any questions or answer "don't know" -Tend to have higher response rates -Interviewer can observe and ask questions Limitations -Interviewer may lead respondents to answer questions in a way that may not accurately reflect their beliefs (ie by appearance or other means) -Ex. People have been found to report more open-minded attitudes toward gender issues when interviewed by a woman --SOCIAL DESIRABILITY BIAS: report socially desirable behaviors -People may over-report volunteer work or attending religious services (Brenner's self-report vs. daily diary study) -------------------------------------------------------- •Telephone: Strengths -Interview control similar to face-to-face -In addition: lower cost to conduct, can be monitored by a supervisor who can ensure quality (researcher control), and requires little advance planning -Respondents may be more honest than face-to-face Limitations -Respondents may be reluctant to talk about some issues over the phone -Increase in 'robocalls' is annoying so response rates are moderate -Workarounds (e.g. sending advance letters) add additional costs -Respondent fatigue may lead to incomplete data -Lack the "paradata" that face-to-face interviews provide -Limited to people who have telephones (social class bias, excludes poor people)-not really much of an issue today -------------------------------------------------------- •Mail: Strengths -Less susceptible to researcher effects -Respondents more likely to report undesirable behaviors and attitudes -Exceptions: people are more likely to report weight and smoker status in face-to-face interviews -Cheap Limitations -Less researcher control: more likely to respond w/ "I don't know" or N/A, missing data -Low response rates: 20-40% (but may be as low as 3%) -------------------------------------------------------- •Email or web-based: Strengths -Cheap -Easy to design with software programs (SurveyMonkey) -Easy for respondents to navigate questions (b/c preprogrammed) -Can reach large geographic areas (global research) -Fewer researcher effects Limitations -Response rates only slightly better than mail surveys -Less researcher control -May be biased toward younger and resource-rich populations

5 Types of Scales

•LIKERT scales: Posing a series of questions with ordered responses that demonstrate intensity through their ordering. -Response format is very common in survey research -Can be anchored with numbers or not. Example: 1. All social science students should be required to take a course in research methods. Strongly Disagree Disagree Neutral Agree Strongly Agree 2. All social science students should be required to take a course in research methods. 1-Strongly Dis 2-Dis 3-Neutral 4-Agree 5-Strongly Agree Other potential Likert scale anchors: -Frequency: Very infrequently - Very frequently -Truth: Very untrue - Very true -Likelihood: Very unlikely - Very likely -Quality: Poor - Excellent Issues with Likert scales: -CENTRAL TENDENCY BIAS-people will gravitate to the neutral/center of the scale -SOCIAL DESIRABILITY BIAS-people will pick the choice they think is the most socially desirable/appropriate -------------------------------------------------------- •GUTTMAN scales: contain items of increasing order of intensity -Assume that individuals answering a certain way on more 'intense' items will respond similarly to less intense items. Example: 1. I like listening to music at times. 2. I like listening to music most of the time. 3. Listening to music is very important to me. I make time for it. 4. Music is my life. I don't know what I'd do without it. A person who responds "yes" to #4 likely responded "yes" to #1-3—clearly a more intense music fan than others. ---------------------------------------------------- •BOGARDUS' SOCIAL DISTANCE scale - a special kind of Guttman scale used to measure a person's willingness to participate in social relationships. Are you willing to permit immigrants to live in your country? Are you willing to permit immigrants to live in your community? Are you willing to permit immigrants to live in your neighborhood? Are you willing to permit immigrants to live next door to you? Would you permit your child to marry an immigrant? -------------------------------------------------------- •THURSTONE scales - don't worry about knowing this one; it's rarely used but helps show broadly that there are many kinds of scales and different ways to develop them. Create a list of potential questions and use 10-15 judges to decide which indicators represent the greatest intensity of being an indicator of something (ie in this case, liking apples). Seldom used because of the difficulty of attaining judges and compiling their perspectives. -------------------------------------------------------- •SEMANTIC DIFFERENTIAL scale: Participants are presented with two opposite adjectives and asked to rate something between those two words along the established continuum. Similar to a Likert scale. Example: Please think of journalists in the US today. Check a space between each of the adjectives below to indicate how you describe journalists in general. Educated ___ ___ ___ ___ ___ Uneducated Skilled ___ ___ ___ ___ ___ Unskilled Biased ___ ___ ___ ___ ___ Objective Dedicated ___ ___ ___ ___ ___ Uncommitted

