soc 310


Sampling frame

1. Locate a "list"(*) of all units in the population (ex.: a collection of everyone's phone numbers, email addresses, mailing addresses, ...)
◦ This is called constructing the sampling frame.
◦ (*) It doesn't have to literally be a "list": if we wrote the names of everyone in class on pieces of paper and put them in a hat, that would be a sampling frame for the class.
◦ Phone books are the classic old-fashioned example. Random digit dialing on the phone = drawing from the sampling frame of phone numbers.

Representative samples (fully vs partially)

A true or fully representative sample has the same proportions of all respondent characteristics as the population.
• This includes:
• race, sex, gender, religion, hair color, height, ...
• ... language spoken at home, GPA, left- vs. right-handedness, ...
• ... car ownership, batting average, ice-cream preference, ...
• ... zodiac sign, ticklishness, knowledge of Klingon, ...
• really, every single conceivable (and not conceivable) characteristic
A partially or demographically representative sample is representative along only the specified demographic(s).
Simple random samples are just one kind of probability sample:
◦ a sample where every population member has a known, non-zero chance of being sampled
◦ These can all generate representative samples (but may require statistics to do so)

Variables

A variable: a single measurable property of a population member
◦ ("population member" might be a person/organization/document/...)
To make a concept or dimension into a variable:
◦ identify what possible values ("levels") it can take
◦ If applicable, also identify the units.
◦ Ex.: Concept: age. Variable: age 0+ in years.
Collecting data = measuring our variables
◦ Variables take different values for different observations in our data:
◦ E.g., the ages of respondents 1 through 5 might equal 21, 20, 21, 19, 20
We'll discuss two main types: numeric and categorical.

Biased samples

Biased sample
◦ A sample that does not match population proportions (that is, it under- or over-samples some groups)
Biased sampling example: the 1936 Literary Digest poll
• Assembled a sampling frame of 10 million American adults from automobile registration lists and telephone books & contacted everyone on the list

Concept / Dimension

Concept: a clearly defined version of an idea; usually a lot more specific.
Dimensions: basically narrower/more specific concepts.
• "Dimensions" are just smaller concepts; individual dimensions can themselves serve as concepts.
• By calling one a "dimension", we just mean that it is more specific/narrow than another concept we are investigating.
• So if we decide "binge drinking" is a dimension of "drinking excessively"...
• ... we just mean "binge drinking" is one part of "drinking excessively".

Counter-explanations

Confounders offer a potential counter-explanation for the observed association between X and Y.
• If confounders are suspected and cannot be ruled out, then causality cannot be established with certainty.
• This is the most common form of counter-explanation:
• "I don't believe the results of your study because the association you observe between X and Y is confounded by Z, which offers a more plausible explanation."
If we argue that, for men, fatherhood causes higher wages, then which of the following is a possible confounder/counter-explanation? Age.
◦ The fathers in this study are all older than their non-father counterparts, and older men have higher earnings(*).

Confounders

Confounding Variables
The relationship between the independent (X) and dependent (Y) variables is confounded by Z if:
◦ Z is correlated with X and Y
◦ Z is a possible cause of Y
◦ Z is not a mediator for X->Y (so not X->Z->Y)
◦ In other words, X does not cause Z
• Confounders offer a potential counter-explanation for the observed association between X and Y
• If confounders are suspected and cannot be ruled out, then causality cannot be established with certainty.
• This is the most common form of counter-explanation:
• "I don't believe the results of your study because the association you observe between X and Y is confounded by Z, which offers a more plausible explanation."
Q: If TVs per capita do not cause an increase in life expectancy, why might countries with more televisions per capita have a higher life expectancy?
A: It may be the case that per-capita wealth increases both per-capita TV ownership and life expectancy.
• This is an example of confounding.

Construct Validity; biased measures

Construct validity:
• Are we measuring what we say we measure? Four criteria:
1. Fit? (Good:) Conceptually, does what is measured match the definition of the concept?
2. Coverage: broad enough? (Good:) Does it capture all the dimensions of the concept?
3. Coverage: too broad? (Bad:) Does it measure things that aren't part of the concept?
4. Bias? (Bad:) Is there measurement bias, that is, does it systematically over- or under-estimate the true value? (* Bias could be for the whole sample or for a specific subgroup.)
Re: "Jimmy's measurement": what are its problems with construct validity?
• Reminder: he operationalized:
• "personal trauma" with the Q: "Have you lost a pet or a loved one recently?"
• "spirituality" with the Q: "How many times have you been to church in the past year?"
1. Variable "losing a loved one or a pet" <--> concept "personal trauma":
• There are many other kinds of trauma (missing dimensions -> coverage too narrow)
• Will systematically underestimate trauma for respondents with no living family or pets (-> biased measurement)
2. Variable "going to church" <--> concept "spirituality":
• Most world religions do not have "churches". The term applies to Christians and will systematically underestimate for others (-> biased measurement)
• This operationalization confuses two concepts:
• "spirituality" refers to peoples' beliefs (ex., belief in God, soul, afterlife, mystical connectedness, etc.)
• "religiosity": participating in an organized religion or established spiritual tradition
• So "going to church" measures a different concept (-> problem with fit)
• Put differently, it's an unintentional (and unnecessary) proxy
• (* Note: I don't expect you guys to know the technical sociological definitions of spirituality/religiosity :)

"Random isn't the same as asystematic"

Convenience sample = a sample selected solely based on who is easiest to sample
• (for example, surveying/interviewing your friends or housemates)
• This is not "random": just asystematic.

Convenience samples

Convenience sample = a sample selected solely based on who is easiest to sample
• (for example, surveying/interviewing your friends or housemates)
• This is not "random": just asystematic.
• Example: if we wanted to know the % of Americans who want Trump to be president in 2024, could we interview:
• The students in this classroom?
• People we met in a supermarket in Idaho?
Convenience sampling is a bad strategy.
• Serious problems with homophily (and sometimes lack of perspective.)
If you don't know how to find subjects, avoid sampling your friends/housemates/Soc 310 students/etc.!
◦ This is convenience sampling, which is the worst sampling approach.
◦ Even within the constraints of this class, it is usually possible to do better than this.

Belmont report

The Belmont Report sets out general ethical principles in scientific research: ethical principles and guidelines for the protection of human subjects in research.

Fit of measure to concept

Indicators (a.k.a. measures): the specific things you will use to measure a variable.
When phrasing your own survey/interview Q's, make sure that:
1. ... the item closely fits the concept/dimension you aim to measure.
For each concept (or dimension), operationalization asks:
◦ The range of possible values relevant to your RQ
◦ The level of detail that is relevant to the RQ and that you can reasonably expect to measure
◦ This often means simplifying and removing nuance
◦ Measures that try to capture unnecessary levels can increase the chance of error and create burden for respondents

Mediators; Moderators

Mediators
If you ask "how does X cause Z?" or "by what means does X cause Z?"
◦ The answer often takes the form: "X first causes Y, and Y in turn causes Z"
◦ = "Y mediates the relationship between X and Z"
◦ Mediators are intermediate variables that explain by what means the initial cause (IV) affects the final outcome (DV)
◦ If X->Y->Z explains the entirety of X's effects on Z, then X->Z is fully mediated by Y
Ex.: How does education make people less prejudiced towards marginalized outgroups?
◦ Education -> Learning about social/historical conditions -> Prejudice lowered
◦ Education -> Intergroup contact -> Prejudice lowered
Moderators
◦ Questions like "when does X cause Z?" or "for whom does X cause Z?" ask about factors that moderate the relationship X->Z
◦ A moderator Y is a variable that alters the relationship between X and Z:
◦ a factor that can make X->Z stronger or weaker, make it reverse, or make it disappear
◦ Moderators affect the relationship between other variables (not the variables themselves)
◦ Ex.: the effect of parental status on earnings is different for men and women
◦ This means the effect of parental status on earnings is moderated by employee gender
◦ (Diagram: Parental status -> Earnings, with Gender pointing at the middle of that arrow)

Sample size (N)

N = sample size = number of units in sample

Numeric vs Categorical Variables

Numerical variables: their levels are numbers ... which can be ordered and can be added/subtracted.
Categorical variables are usually verbal
◦ It is not possible to do arithmetic with them
◦ They have a limited set of possible levels
◦ Examples:
◦ Political ideology is often operationalized as a categorical variable with 5 levels:
◦ Very Liberal / Liberal / Moderate / Conservative / Very Conservative
◦ Self-reported race can be operationalized as a categorical variable with 6 levels:
◦ Black / White / Hispanic or Latino / Asian / Native American / Other
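The distinction above can be made concrete in code. A minimal sketch (the respondents, ages, and ideology values are hypothetical, chosen only to illustrate the two variable types):

```python
# Toy data illustrating numeric vs. categorical variables
# (hypothetical respondents; all values are illustrative).
respondents = [
    {"age": 21, "ideology": "Liberal"},
    {"age": 20, "ideology": "Moderate"},
    {"age": 19, "ideology": "Conservative"},
]

# Numeric variable: the levels are numbers, so arithmetic
# (ordering, adding, averaging) is meaningful.
mean_age = sum(r["age"] for r in respondents) / len(respondents)

# Categorical variable: a limited set of verbal levels; we can count
# how often each level occurs, but "Liberal" + "Moderate" has no meaning.
LEVELS = ["Very Liberal", "Liberal", "Moderate",
          "Conservative", "Very Conservative"]
counts = {level: sum(r["ideology"] == level for r in respondents)
          for level in LEVELS}

print(mean_age)  # 20.0
print(counts)
```

Note that the categorical variable is summarized with counts per level, never with arithmetic on the levels themselves.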

Operationalization

Operationalization
• Deciding how your concepts will be measured:
• 1. Defining variables
• 2. Selecting their measures/indicators
Operationalization involves decisions. For each concept (or dimension), operationalization asks:
◦ The range of possible values relevant to your RQ
◦ The level of detail that is relevant to the RQ and that you can reasonably expect to measure
◦ This often means simplifying and removing nuance
◦ Measures that try to capture unnecessary levels can increase the chance of error and create burden for respondents
For example:
◦ If you are asking respondents their weight:
◦ Reasonable to create the variable: weight (in lbs)
◦ Not reasonable to create the variable: weight (in oz.)
Operationalization consists of two steps:
1. Take each dimension (or 1-dimensional concept) and make it into a variable
2. Take each variable and attach it to a concrete empirical measure (= indicator)

Ways to reduce SDR

Reducing SDR: anonymity
Social approval & fear of getting in trouble only apply if R thinks the interviewer (or someone with access to the data) would know/care about R's answer. This gives several ways of reducing SDR.
1. Anonymity and confidentiality
◦ When dealing with sensitive topics, you should always practice anonymity/confidentiality.
◦ To reduce SDB, it is also useful to repeatedly remind the subject of these practices, or to do the survey in a way that makes the anonymity apparent. For example:
◦ Using a confidential online form
◦ Having respondents put their completed paper survey in a box with a slot
Reducing SDR: not signaling the interviewer's preferences
SDR often happens because R's want the interviewer to like them, so avoid signaling your preferences:
◦ First, avoid actually telling the R's your own point of view.
◦ Usually, the less the R's know about you, the better.
◦ Avoid signaling your preferred answer through question phrasing:
◦ Do not use biased phrasings that may appear judgmental of one side of an issue
◦ Avoid loaded language that is associated with one side of a political debate
◦ Phrase your questions in the most neutral way possible
◦ Avoid signaling approval or disapproval of R's answers (in words/gesture/tone of voice)
◦ This is tricky because you also cannot appear neutral/emotionless
◦ You should seem interested in learning about the respondent's world.
◦ It's a skill to develop with practice.
Reducing SDR: normalizing sensitive responses
◦ Convey to R the sense that you would not find the sensitive behavior objectionable.
◦ One part of this is using matter-of-fact language/tone, and otherwise acting like any answer to the question is no big deal. A well-known (joking) example from Barton (1967) in POQ:
◦ "Do you happen to have murdered your wife?"
◦ Another approach: placing the potentially sensitive behavior/attitude in the middle of a list of far less sensitive behaviors
◦ Or highlighting that different people answer the Q in different ways.
◦ Ex.: "There has been lots of talk about X on the news these days. As you may know, some people support X, while others oppose it. What about you? What are your views on X?"
◦ If one category gets under-reported but never over-reported, you could be even more direct in normalizing that response. Another Barton (1967) example:
◦ "As you know, many people have been killing their wives these days. Do you happen to have killed yours?"
◦ But this is a risky strategy: you don't want to bias respondents towards over-reporting the behavior.
Reducing SDR: changing interviewers
Even if the interviewer is careful, R's will make assumptions from the interviewer's (I's) speech and appearance.
◦ That means I's gender, race, manner of dress, etc. can increase/decrease SDR for some Qs.
For example, both white R's and black R's may be more likely to report support for affirmative action in hiring to a black interviewer than to a white interviewer.
◦ Which one do you think is the true attitude? Why?
◦ Hard to know for certain without more data!
◦ R's may believe that black I's would view them better if they supported affirmative action, or may believe white I's would view them better if they opposed it.
◦ The true rate of support is probably somewhere between the ones reported to the two interviewers.
◦ One solution: use interviewers from both groups, and randomly assign R's to I's.
◦ This doesn't eliminate the bias, but it lets you get an upper and a lower estimate.
Reducing SDR: method choice
People are willing to say things online that they would rarely say face-to-face.
◦ This is often a bad thing; but for SDB, it's actually good. For example:
◦ Internet/paper surveys have less SDB than telephone surveys; telephone has less SDB than face-to-face.
◦ But going to impersonal internet surveys/paper ballots involves a big tradeoff:
◦ It is impossible to do in-depth interviews.
◦ Internet & paper surveys have substantially lower reliability than telephone or face-to-face.
◦ In most situations, face-to-face yields the highest quality data.
◦ Not a coincidence: both are effects of R's putting more thought into their answers because another human being is observing them.
◦ Can use both methods & compare to estimate how much SDR takes place.

Reliability

Reliability is how dependable a measurement procedure is.
◦ = does it always produce the same result?
◦ (assuming the true quantity doesn't change)
Important catch:
• Highly reliable procedures can be completely invalid
• (ex., they always produce the same wrong result)
What is the most intuitive way of testing this?
◦ Repeat the same measurement procedure
◦ Same measurement procedure = ask the same questions in the same way; or observe the same location; or examine the same objects; etc.
◦ All the types of reliability we study come down to this
◦ We will talk about two kinds: test-retest and inter-observer
High reliability can come from:
◦ Standardized procedures: the measurement can actually be reproduced exactly
◦ Clear, unambiguous instructions
◦ Control of the study environment to remove unexpected factors
◦ Automatization (using software algorithms instead of humans)
Low reliability means the procedure should be revised.
◦ Possible solution(s): make the procedure simpler. Standardize more steps. Make the instructions for it more detailed. Make sure phrasing is unambiguous.
◦ ... so another answer for where high reliability comes from is "effort"

What can and cannot be fixed by increasing N

Representativeness is often more important than N.
◦ A sample of 10,000 can produce a far more accurate result than a sample of 2.3 million.
◦ The Literary Digest poll had a non-probability sample and high non-response bias; as a result, its sample was not representative of the population and produced biased estimates of voting intentions.
◦ The Gallup poll was demographically representative and (at N=10,000) sufficiently reliable to produce the correct estimate.
◦ Take-away: Don't be swayed by huge sample numbers! Think about where the sample came from.
◦ Especially be cautious with huge online samples: they can be highly biased!
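A toy simulation can show why a small random sample beats a huge biased one. This is a sketch under invented assumptions (a 1,000,000-voter population with 60% support, and a biased frame that reaches only half of the supporters), not a reconstruction of the actual 1936 polls:

```python
import random

random.seed(0)

# Hypothetical population: 1,000,000 voters, 60% support the candidate
# (coded 1 = supporter, 0 = opponent).
population = [1] * 600_000 + [0] * 400_000
true_share = sum(population) / len(population)  # 0.6

# Small but unbiased: a simple random sample of 10,000 voters.
small_random = random.sample(population, 10_000)
est_random = sum(small_random) / len(small_random)

# Huge but biased frame: all opponents are reachable, but each
# supporter has only a 50% chance of being in the frame
# (loosely analogous to the Literary Digest's skewed coverage).
biased = [v for v in population if v == 0 or random.random() < 0.5]
est_biased = sum(biased) / len(biased)

print(round(true_share, 3), round(est_random, 3), round(est_biased, 3))
```

The biased "sample" contains roughly 700,000 voters yet estimates support near 43% instead of 60%, while the 10,000-voter random sample lands very close to the truth. More data from a bad frame does not fix the bias.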

Respect for persons, beneficence, justice

Respect for persons
◦ (1) Informed consent: subjects are...
◦ ... able to decide whether to participate with a full understanding of the risks & benefits of participation.
◦ ... able to withdraw consent at any time
◦ (2) Protection for persons with "diminished autonomy"
Beneficence
◦ Research should "maximize possible benefits and minimize possible harms." (Belmont Report.)
◦ For research subjects and for humanity in general
Justice
◦ The risks and benefits of research should be distributed equally
How the infamous studies violated these principles:
◦ Respect for persons (informed consent and protection for diminished autonomy)
◦ Subjects were forced to participate: given no choice, or lied to in order to get participation
◦ Or were not mentally capable of consent
◦ Beneficence (maximize benefits and minimize harms)
◦ Subjects suffered enormous physical and mental harm
◦ In some cases, the knowledge gained was minor
◦ Harm to subjects far outweighed the benefit to knowledge
◦ Justice (risks and benefits distributed equally)
◦ Some of the studies did not benefit knowledge
◦ Even when they did, the beneficiaries were not the same people as the subjects
◦ Oppressed social groups were made to suffer the harms of research while dominant social groups reaped the benefits

Decreasing/increasing response quality

Response error in a nutshell
Main reasons and principles for averting it:
◦ Reason: R refuses to answer(*)
◦ Principle: Don't exhaust or upset your respondent
◦ (* Not technically an 'error', but still a reason for lack of data)
◦ Reason: R misunderstands the question
◦ Principle: Don't confuse your respondent
◦ Reason: R does not know the answer / has difficulty answering correctly
◦ Principle: Don't ask R's things they don't know or can't answer
◦ Reason: R may intentionally deceive/mislead
◦ Principle: Don't make R feel bad about their responses
◦ We'll focus on this one now

Sample

Sample: a limited number of units a researcher selects to examine in their study.
Sampling: selecting units for your sample.

Sampling until saturation

Sampling until saturation
◦ Continue sampling and gathering data as long as you are learning something new.
◦ When it feels like you are not getting any new major insights from adding more cases, stop (or switch recruitment criteria).

How can you draw a random sample?

Simple random sample: every member of the population has an equal chance of being selected into the sample.
A simple random sample will, on average, have the same proportions of women, undergraduates, Republicans, flute-players, left-handed people, ..., as the population as a whole; but if the sample size is small, it can be quite far off.
• What can we conclude about its validity and reliability?
• Validity:
• Simple random samples provide unbiased estimates of features of the population.
• They are thus always(*) externally valid.
• (* assuming you successfully gather data on every unit in your sample; more about this on Thurs.)
• Reliability:
• A small random sample is likely to underestimate or overestimate any population parameter.
• But as sample size increases, this becomes exceedingly unlikely.
• Thus, small random samples have low reliability,
• but (very) large random samples have (very) high reliability.
Constructing a random sample:
1. Locate a "list"(*) of all units in the population (ex.: a collection of everyone's phone numbers, email addresses, mailing addresses, ...)
◦ This is called constructing the sampling frame.
◦ (*) It doesn't have to literally be a "list": if we wrote the names of everyone in class on pieces of paper and put them in a hat, that would be a sampling frame for the class.
◦ Phone books are the classic old-fashioned example.
2. Then draw a random sample of the desired size by randomly selecting units.
◦ For example:
◦ We could draw 10 names from the hat for an n=10 sample.
◦ Random digit dialing on the phone = drawing from the sampling frame of phone numbers.
◦ Academic research usually uses software to randomly choose addresses/phone numbers from a database.
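The two steps above can be sketched in a few lines of code. A minimal example (the 30-student roster is hypothetical, standing in for any sampling frame):

```python
import random

# Step 1: construct the sampling frame -- a "list" of every unit in the
# population (here, a hypothetical 30-student class roster).
sampling_frame = [f"student_{i:02d}" for i in range(1, 31)]

# Step 2: draw a simple random sample of the desired size.
# random.sample selects without replacement and gives every unit an
# equal chance -- the software equivalent of drawing names from a hat.
n = 10
sample = random.sample(sampling_frame, n)

print(sample)
```

Drawing without replacement means no unit can appear twice, just as a name pulled from the hat is not put back in.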

Random samples; probability samples

Simple random samples are just one kind of probability sample:
◦ a sample where every population member has a known, non-zero chance of being sampled
◦ These can all generate representative samples (but may require statistics to do so)
For example:
◦ Cluster sampling: divide the population into clusters (ex.: households in the U.S. divided by zip code). First sample a set of zip codes. Then sample households within each zip code in your sample.
◦ You don't need to know cluster sampling for this class; just know that other types of probability sampling exist.
◦ For purposes of this class, if you hear "probability sample", think "simple random".
• An interval sample is an approximation of a probability sample in which researchers choose every kth member of their sampling frame.
• Not a true probability sample, but it can still produce roughly representative samples.
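The "every kth member" rule is easy to express with list slicing. A sketch using a hypothetical frame of 100 households and k=10:

```python
import random

# Hypothetical sampling frame of 100 household addresses.
frame = [f"household_{i}" for i in range(100)]

# Interval sampling: pick a random start within the first interval,
# then take every k-th unit from there to the end of the frame.
k = 10
start = random.randrange(k)        # random position in 0..k-1
interval_sample = frame[start::k]  # every k-th member thereafter

print(len(interval_sample))
```

With a frame of 100 and k=10, any starting position yields exactly 10 units; the random start is what makes the procedure approximate a probability sample rather than always beginning at the top of the list.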

Contrast surveys vs interviews: how they differ

Surveys
- Structure: structured; all R's presented with a pre-determined set of questions.
- Question format: most Qs close-ended (= respondents choose from a list of pre-set responses); possibly a few open-ended Qs.
- Mode: can be face-to-face (read out by a live interviewer), over the internet, over the telephone, or via physical ballot.
- Samples: for research, ~ N = 1,000 to 2,000; for gov't, up to ~ N = 50,000 to 100,000. Usually probability samples.
- Analyses: usually quantitative.
In-depth interviews
- Structure: semi-structured; "top-level" Qs are pre-written, follow-up questions figured out on the spot.
- Question format: the bulk of the Qs are open-ended; possibly a few close-ended (demographics, etc.).
- Mode: usually in person (but increasingly over video chat; can also be on the phone).
- Samples: smaller (N = 30 to N = 150). Usually purposive sampling.
- Analyses: usually qualitative.

Research Questions

Types of Research Question:
Explanatory: What is the effect of factor X on factor Y in population A? (or similarly) What is the association between factor X and factor Y in pop. A?
Ex.: "What is the effect of educational attainment on the lifetime earnings of contemporary Americans?"
• These are called explanatory questions because they seek to explain factor Y in terms of factor X.
• Ex.: "About half of the variation in Americans' lifetime earnings can be explained by differences in their educational attainment."
Exploratory: How does characteristic X differ between members of population A based on whether they have characteristic Y?
Ex.: "Among black students at an elite public university, how do educational experiences differ based on first-gen. status?"
• These questions are called exploratory because they are suited for initial explorations of a little-understood subject.
• The format is more open-ended than the explanatory questions:
• "How do things differ" leaves a lot of room for possible answers.
Anatomy of an RQ:
- What is the effect of factor X on factor Y in pop. A?
- How does characteristic X differ between members of population A based on whether they have characteristic Y?
These RQs ask a question (what, how, why, ...) ...
• ... about a relationship between concept X and concept Y
• Concepts: ideas or phrases that
• (i) identify a factor, characteristic, process, or other feature of reality,
• (ii) can be defined, and (iii) can eventually be measured or observed
• A relationship is some type of connection or contrast:
• "effect", "difference", "association", etc.
• ... within a population:
• the collection of people/groups/organizations/... that you are going to study
Characteristics of a good RQ:
• Clear and unambiguous conceptualization
• It's an empirical question
• It should be possible to gather data that, when analyzed, would suggest that one answer to it is more likely to be true than the others.
• It is feasible
• It's possible for you to gather/analyze these data this semester
• It's a social question (because this is a sociology class)
• It is a question you find interesting or important
• The answer to the question is not already obvious -- different answers are reasonably possible
• It is not overly narrow or overly simple

Units of Analysis

Unit of analysis: the generic type of entity being studied.

Validity

Validity: are we measuring what we say we measure?
• We will talk about 3 kinds of validity:
• construct, external, internal
How to design a study with high validity?
• No single answer, but in general, a lot comes down to:
• Choosing a research method that fits the RQ:
• 1. It can operationalize the concepts fully and without requiring weak proxies
• 2. It can test the relationship between variables (more on this in two weeks)
• Choosing a sampling procedure that gathers a representative sample
• (More on what this means next week)
• Meticulously matching your measures to the exact letter of your concept/RQ
• Are you measuring what you intended to examine, for the population you intended to examine?
• Making sure your measurement procedures do not create biases
• Do the measures apply to everyone / all situations equally well?
• Does your measure systematically miss some cases? Or pay too much attention to other cases?
• Can you think of situations where your measures will usually underestimate or overestimate the result?
• And most importantly:
• Before gathering actual data, pre-test your measures to see if they work the way you thought they would!
• Hint: on your first try, the answer is usually "no"!

"Variables have to vary"

Variables: common mistake to watch out for
Variables have to vary (that is, take on different values).
◦ Otherwise, they are constants, not variables.
◦ The same concept may serve as a variable in some settings but not in others.
This can cause a problem when a variable is mistaken for the population.
◦ For example, if your RQ is:
◦ "How does being a woman affect one's travel experiences?" ...
◦ ... and if you survey women (and only women),
◦ then respondent gender is not a variable in your analysis, since it always equals "woman".
◦ This means you cannot analyze the effect of being a woman
◦ (because this requires comparison to non-women).

Why use scientific approach to knowledge?

We are innately able to know things by:
1. Observing through our senses (seeing, hearing, feeling, tasting, smelling)
2. Generalizing (from what we have observed to broader patterns; enables us to apply knowledge to new cases)
3. Reasoning (based on our existing knowledge, to infer new things about the world or reevaluate prior conclusions)
4. Learning/teaching (intentionally transferring knowledge)
When learning from observation:
• We're good at generalizing from tiny numbers of observations (even N=1)
• We kind of have to: often, tiny N's are all we can observe in daily life!
• But this often leads to overgeneralization:
• incorrect generalization stemming from the assumption that what we have observed in a few cases is true for all or most cases.
• In social life, this overgeneralization is made worse by homophily: the tendency of people to make social ties to those who are similar to themselves.
• So, e.g., people often assume "everybody loves Trump" or "everybody hates Trump" because everyone in their social group loves/hates him.
When reasoning:
• We're good at not getting bogged down by complex problems...
• ... we automatically replace them with "similar" simpler ones
• (again, often without realizing this)
• The problem is that we are not great at making use of better data or longer available time for thinking
• ... we usually still just use the same heuristics
• ... and this causes many systematic biases (persistent errors)
So why do we need science?
• Innate learning is good enough for most common daily human activities
• (especially if you are a hunter-gatherer)
• But this serves different purposes from scientific knowledge
• We are not so great at intuitively producing systematic, generalizable, cumulative, unbiased knowledge

Independent variables (IV); dependent variables (DV)

We often explain a social phenomenon via causal statements that say variable X causes variable Y.
Example: frequent exposure to a style of music in early childhood causes individuals to enjoy that style of music later in life.
◦ X (the cause) is the independent variable (IV)
◦ Here: exposure to a music style
◦ Y (the consequence) is the dependent variable (DV)
◦ Here: enjoyment of a music style
◦ The terms IV/DV can be confusing. Useful mnemonic: if X causes Y, then the value of Y depends on the value of X.
"IV causes (or contributes to, etc.) DV" (we often write this as IV -> DV)

sociological vs. non-sociological explanations

What does it mean to examine or explain an issue "sociologically"?
- Examples of sociological approaches:
• in terms of social relationships, social groups/categories, organizations, networks, cultural conventions/institutions, etc.
- Examples of non-sociological approaches:
• in terms of the innate qualities of people or objects (biology, personality);
• explanations focusing on unique or idiosyncratic aspects of peoples' background (biography; idiographic explanation);
• explanations based around normative judgements about what's right or wrong / good or bad / etc.
Ex.: Why do different people like different kinds of music?
- Non-sociological explanations:
• Biological: because of inherited differences in peoples' ears or brain structures
• Normative and personality-based: because some people have bad taste and some have good taste (or vice versa, depending on your perspective)
• Biographical: "Well, let me tell you about why Bill and Ned have different musical tastes."
- Sociological explanations:
• Because of differences in exposure and access during childhood
• Because people use cultural tastes to draw ingroup boundaries
• Because of direct social influence: people enjoy things more simply because they see others enjoy them

Purposive sampling

When are non-probability samples used?1. Studies where the N is very small◦ [This includes most interview-based/ethnographic/historical projects]◦ With small Ns, the reliability of random samples is unacceptably low◦ In practical terms this means your sample proportions can differ greatly from population's◦ You may also end up not having enough variation on some key variables to carry out study◦ (Partial) Solution: purposive sampling: ◦ intentionally selecting cases that would be most informative for your research question Purposive sampling• Purposive sampling: cases selected based on features that would make them the most promising for insights• These features are called inclusion (or exclusion) criteria • (rules you use to decide who can be in your sample)• These can be used to guarantee that you have variation on your key variables• Ex., If you are interested in gender differences, criteria may be to interview 3 men and 3 women. Purposive sampling: coverage or range That is, trying to get subjects with the widest range of roles / perspectives / experiences in the phenomenon being studied For example, who might you recruit for:◦ a study of student experiences in college◦ students from widest range of demographic categories (e.g., gender, race, class, age, ...)◦ students with different types of college enrollment (full-time, part-time, living on vs off-campus campus, work-study...)◦ students from different kinds of colleges (private, public, community college, ...) ◦ a study of the neighborhood consequences of streetcorner drug dealing◦ Drug dealers, buyers, police, neighbors, nearby shopkeepers, ...◦ To avoid selection bias, could also study:◦ people who left neighborhood; people who are now incarcerated typicality - • Selected case: Munice, Indiana. • Nothing particularly interesting about Muncie: it resembled a lot of towns of its size. • That's what makes it a typical case. 
Purposive sampling for extremity: selecting unusually strong or vivid cases of a target phenomenon
Benefits:
◦ Lets you observe particularly vivid examples of a phenomenon
◦ Good for subtle phenomena where typical cases may be hard to observe in detail
◦ May be useful for understanding the general outline of a previously unstudied phenomenon
Cost:
◦ Especially poor external validity with regard to the broader population of cases (which are usually non-extreme)

Purposive sampling for deviant cases
◦ Selecting cases that are unexpected or hard to explain given the current theoretical understanding of a topic.
◦ Does some well-known process sometimes not lead to the expected outcome?
◦ The hope is to extend/amend the theory
◦ Ex.: people who live very long, healthy lives despite smoking two packs of cigarettes a day
◦ Same problems with generalizability as sampling for extremity
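The inclusion-criteria idea above ("interview 3 men and 3 women") can be sketched as a simple quota filter. The candidate pool, names, and function below are invented for illustration, not from the course:

```python
# Hypothetical sketch: purposive sampling with an inclusion criterion
# applied to a pool of volunteers (invented data).

volunteers = [
    ("Ana", "woman"), ("Ben", "man"), ("Cai", "man"),
    ("Dee", "woman"), ("Eli", "man"), ("Fay", "woman"), ("Gus", "man"),
]

def purposive_sample(pool, quotas):
    """Select cases until each group's quota is filled, guaranteeing
    variation on the key variable (here, gender)."""
    remaining = dict(quotas)              # e.g., {"man": 3, "woman": 3}
    selected = []
    for name, gender in pool:
        if remaining.get(gender, 0) > 0:
            selected.append((name, gender))
            remaining[gender] -= 1
    return selected

sample = purposive_sample(volunteers, {"man": 3, "woman": 3})
# sample now holds 3 men and 3 women; Gus is skipped once the quota is met
```

Unlike a random sample, this selection is driven entirely by the researcher's criteria, which is exactly why it guarantees variation but does not support generalization.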

Social desirability bias (SDB) & socially desirable responding (SDR)

Why do respondents deceive/mislead?
◦ Lying on surveys usually involves a deliberate, intentional choice (Holtgraves 2004).
◦ Not something R's do automatically/unconsciously
◦ That means it's fairly easy to understand and prevent

What makes R's choose to lie? It's generally social desirability bias (SDB).
◦ SDB = the tendency to falsely report views/behaviors viewed favorably by others
◦ When respondents lie in response to SDB, we call it socially desirable responding (SDR)

SDR usually happens for one of two reasons (Krumpal 2013):
1. Desire for social approval:
◦ R's want to be liked or admired by the interviewer;
◦ being liked and admired can be pleasant, and can also lead to other rewards
◦ Or they want to avoid being embarrassed;
◦ experiencing guilt and shame can be painful/stressful
2. Fear of getting in trouble for admitting to a prohibited behavior/attitude:
◦ fear of being punished formally (arrest/fines)
◦ or informally (being ostracized)
It's usually possible to guess which Q's / response categories elicit these & adjust accordingly

Causation vs Correlation (= Association)

X and Y are correlated (or associated) if the values of X tend to vary along with the values of Y (and vice versa)
• Positive correlation: if X goes up, then Y goes up.
• Negative correlation: if X goes up, then Y goes down.
• Common phrasings: "X and Y are related"; "X predicts Y"; "changes in X and Y parallel each other"; "an increase in X is associated with a decrease in Y"

X is causally associated with Y (or simply "X causes Y") if changing the value of X would result in changes to the value of Y
• Some phrasings: "X and Y are causally related"; "X affects Y"; "X changes Y"; "X results in Y"; "Y is a consequence of X"

Correlation is related to causation...
◦ If X causes Y, this generally produces a correlation between X and Y.
◦ So correlation can be used to test for the possibility of causation
◦ If X and Y are not correlated:
◦ then X likely doesn't cause Y & Y likely doesn't cause X
◦ Establishing correlation is usually an important step towards establishing causation...

... but correlation doesn't necessarily imply causation
◦ Correlations can also arise via other means
◦ If there is correlation without causation, we can say "X and Y are associated, but the association between X and Y is not causal"
◦ One possible reason for a correlation without causation: mere chance
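The positive/negative correlation described above can be made concrete with a short numeric sketch using the Pearson correlation coefficient r (near +1 = positive, near -1 = negative, near 0 = none). The data and variable names here are invented for illustration:

```python
# Minimal Pearson correlation, written out by hand for clarity.

def pearson_r(xs, ys):
    """Pearson r: covariance of X and Y divided by the product
    of their standard deviations (computed on the same scale)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

hours_studied = [1, 2, 3, 4, 5]       # hypothetical X
exam_score    = [55, 62, 70, 74, 85]  # hypothetical Y: rises with X
print(pearson_r(hours_studied, exam_score))  # close to +1: positive correlation

hours_gaming = [5, 4, 3, 2, 1]        # falls as exam_score rises
print(pearson_r(hours_gaming, exam_score))   # close to -1: negative correlation
```

Note that a high r here says nothing by itself about *why* the variables move together: causation, reverse causation, a third variable, or mere chance could all produce it.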

Population

A population: the collection of people / groups / organizations / ... that you are going to study; the full set of entities that you want to understand and make inferences about.

A population census: the researcher examines every single unit inside the entire population
• (and not just a subset)

Populations and RQs
• RQ: What is the effect of educational attainment on Americans' lifetime earnings?
• Population? Americans.
• RQs are always asked about a population

Levels

To make a concept or dimension into a variable:
◦ identify what possible values ("levels") it can take
Ex.:
• List (some of) the variable's levels
• For clarity, ask: under this definition, what would most likely be the variable level (race) of children whose parents have different races?
• (One possible) variable: Self-identified gender
• Type: Categorical (3 categories)
• Levels: Female / Male / Transgender or other
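As an illustration (the names and data here are invented, not from the course), a categorical variable can be sketched as a name plus a fixed set of allowed levels, where measurement means recording one level per respondent:

```python
# Hypothetical sketch: a variable = a name plus its defined levels.

GENDER_LEVELS = {"Female", "Male", "Transgender or other"}  # categorical, 3 levels

def measure_gender(response: str) -> str:
    """Record a respondent's self-identified gender, rejecting any
    value outside the variable's defined levels."""
    if response not in GENDER_LEVELS:
        raise ValueError(f"{response!r} is not a defined level")
    return response

# A numeric variable takes numeric values with units instead:
ages = [21, 20, 21, 19, 20]  # levels: integers 0+, units: years
```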

Conceptualization

the process of developing and defining concepts
- Start with your initial ideas/notions and precisely specify them
End goal:
- Clear, easily understood terms w/ precise definitions
- Often also a more clearly phrased RQ
Why do careful conceptualization?
- It's how you figure out what you are actually asking
- Different conceptualizations might require different data to answer
- It's a necessary step from ideas to measurement or operationalization (next week)
- It makes it possible for others to understand / evaluate your argument quickly
- It helps make sure that researchers are not talking past each other

"Sometimes a method with a higher reliability might have lower validity, and vice versa"

• Highly reliable = darts always hit the same spot
• ... even if it's the wrong spot!
• Highly valid = darts centered around the correct spot
• ... even if they never actually hit the correct spot

Peer Review: Pros and Cons

• Main goal is to catch errors in the work
• Usually successfully catches major/obvious errors
• (Can at times catch less obvious ones, if reviewers are good)
• Reviewers also make sure that the authors ...
• ... have sufficient evidence for original claims (or citations for existing ones)
• ... follow acceptable research practices
• ... don't claim to invent something that's already been invented
• ... define concepts in conventional ways
• ... cite other relevant work
• ... are sufficiently clear & open about their research process

Benefits:
• Published work was vetted by other experts in the field
• ... and thus represents more than just the views of the authors
• Researchers pre-emptively "fix" problems that peer reviewers are likely to catch
• Forces researchers to be more careful
• (perhaps too much so: a conservative force)
• Overall, makes academic publications more trustworthy & academic knowledge more solid

Problems with peer review:
1. Reviewers often don't know your topic as well as you do
2. Peer review is volunteer work done on reviewers' own time
• Often a mismatch: your research was painstakingly, slowly done, and now it's being hastily reviewed
3. Your closest peers are often also your competitors
• You compete with them for funding or jobs
• Your work may be showing that they are wrong
• They may be trying to be the first to publish the same finding
• This often creates subtle, hard-to-avoid conflicts of interest
• The expectation is that peer reviewers will be fair in their reviews, but it is just an honor system

Scientific Principles

• Scientific principles & method: conventions, practices, and values that came about in order to solve these kinds of problems with reasoning
• Emerged gradually throughout the 16th and 17th centuries
• Perhaps the most central figure in its development is Robert Boyle, founder of modern chemistry

Boyle's approach
• Intended his work to serve as a model of how trustworthy scientific knowledge should be established
• Knowledge should be secured empirically
• ... by systematic, well-documented, impartial procedures
• ... openly demonstrated to a community of scientists
• ... who can replicate the procedures
• Results written in modest, clear, simple, impartial language
• Another key part of the approach: the "Invisible College", later renamed the Royal Society of London
• Oldest national scientific institution in the world
• A community of scientists to observe / verify each other's work

• Hobbes noted that empirical observation is fallible
• Machinery can fail; it is easy to introduce mistakes into procedures
• There are different ways of interpreting the same results (based on your assumptions)
• Generalization from specific results to logical principles requires a logical leap
• ... and the opinion of observers is a poor guarantor of truth
• People are too easily misled and too motivated by self-interest (the same reasons he didn't love democracy)
• Truly "open" empirical demonstrations are impossible or impractical
• E.g., Boyle's Royal Society was still only open to a few highly educated people. Most people would not be able to either access or interpret the experiments.
• This could create a knowledge elite
• ... which could manipulate people through selective interpretation of science.

What about the problems Hobbes pointed out?
• Many have indeed been persistent and serious problems for science
• Science is never perfectly open and never perfectly disinterested
• And it is often very non-disinterested, especially when there is lots of money involved (e.g., medicine)
• Scientific observation is indeed often fallible
• It creates only tentative facts that are subject to future amendment.
• Individual studies and their interpretations often turn out to eventually be wrong.
• So single studies may not be a good enough reason to change well-justified beliefs.
• People have indeed used the language of science to mislead and manipulate others
• So in a way, Hobbes correctly predicted some important problems with science
• ... but drew too extreme a conclusion (and proposed a bad solution).
• Science isn't perfect: it's just better than the alternatives (a lot like democracy).

Surveys vs interviews: which is better for what purpose

• Surveys are best for explanatory questions
- You need to preformulate the questions beforehand
• In-depth interviews are a better fit for exploratory questions

Surveys
• In a structured survey, researchers select a sample of individuals and ask them a series of questions
• Surveys are closely tied to sampling:
• Best if based on a probability (random) sample of the population.
• Why? (remember your Soc 210?) It creates an unbiased sample, which allows generalization to the whole population.
• Surveys are highly structured and standardized:
• The questions are all pre-written. They are (mostly) closed-ended questions with fixed response options. Every respondent is (usually) asked the same questions. Precise wordings really matter. Response options have to be designed carefully to avoid biasing respondents.

Finding subjects for a survey
1. If you want to sample U.M. undergrads:
◦ often the best option is interval sampling
2. If you are interested in another sizeable population with a clear sampling frame (for example, you can get a list of emails for many or most population group members):
◦ put it in a spreadsheet and draw a simple random sample
3. If you are interested in a relatively small specific subpopulation (e.g., residential advisors), and this population is associated with a club or other student group:
◦ ask whoever runs it if they would let you recruit in person at their meeting
◦ failing that, ask to use their email list to send invites for the survey
◦ Note that this will not produce a probability sample, so think about what bias this may create

Interviews
• In-depth interviewing is a research method in which the researcher first asks open-ended questions, and then probes further with many follow-up questions.
• The idea is to dig down and really understand the respondent's case deeply.
• Each interview is usually quite lengthy.
• Process:
• Ask the same key questions in every interview...
• ... but adapt specific follow-up questions to the responses.
• Part of the challenge is coming up with the right follow-up questions on the spot
• The goal is to keep the respondent talking

Trouble finding subjects for interviews?
◦ Try recruiting from organizations
◦ Try snowball sampling:
◦ for your first subject, ask friends to introduce you to their friends who match your criteria
◦ if you have a lot of options, first ask your least close friends (this is a way of reducing bias from homophily)

Trouble convincing interview subjects to participate?
1. Remember that people love to talk about themselves. Just communicate to them that you are genuinely interested in learning about them!
2. You could also offer to pay respondents for the interview ($5?). This is common/accepted, but often unnecessary.
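The two survey-sampling procedures mentioned above (a simple random sample drawn from an email list, and interval sampling) can be sketched in a few lines of Python. The email list here is invented for illustration:

```python
import random

# Hypothetical sampling frame: a list of 1,000 student emails (invented).
frame = [f"student{i}@example.edu" for i in range(1000)]

# 1. Simple random sample: every unit has an equal, known chance of selection.
srs = random.sample(frame, k=50)

# 2. Interval (systematic) sample: pick a random starting point, then take
#    every k-th unit, where k = population size // desired sample size.
k = len(frame) // 50                  # sampling interval (here, 20)
start = random.randrange(k)           # random start within the first interval
interval_sample = frame[start::k]     # every 20th email from the start

print(len(srs), len(interval_sample))  # both yield 50 respondents
```

Both procedures give every unit in the frame a known, non-zero chance of selection, which is what makes them probability samples; the in-person or email-list recruitment in option 3 above does not.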

