SRM: EXAM 2


random digit dialing

when computers are used to select random telephone numbers for interviewing

George Gallup & quota sampling

-G.G. & his American Institute of Public Opinion used quota sampling & correctly predicted the presidential winner in 1936, 1940, & 1944 -1948: picked Thomas Dewey as the winner over Harry Truman: they were wrong WHY WRONG? -most pollsters stopped polling in early October, despite a steady trend toward Truman -many voters were undecided, & when they decided, they went disproportionately for Truman -unrepresentative sample: quota sampling had worked well for Gallup until then: he relied on 1940 census data, however, WWII caused a massive movement from the country to cities, & city dwellers tended to vote more Democratic SO the underrepresentation of city dwellers underestimated the number of Democratic votes

is social research often conducted in situations that do not permit the kinds of probability samples used in large-scale social surveys?

-YES -ex) if you want to study homelessness, there is no list of all homeless individuals from which to draw a probability sample

random selection

-a sampling method in which each element has an equal chance of selection independent of any other event in the selection process -ex) flipping a coin - the 'selection' of heads or tails is independent of previous selections of heads or tails -ex) rolling a die -social research isn't this easy though, so typically, social researchers use tables of random numbers or computer programs that provide a random selection of sampling units
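The tables-of-random-numbers step mentioned above can be sketched in a few lines of Python (the sampling frame of 1,000 student IDs is hypothetical; `random.sample` draws without replacement, so each element has an equal chance of selection):

```python
import random

# Hypothetical sampling frame: 1,000 student ID numbers.
sampling_frame = list(range(1, 1001))

# Seed the generator so the draw is reproducible; a real study
# would document the seed used.
rng = random.Random(42)

# Each element has an equal chance of selection; sampling is
# without replacement, so no element is drawn twice.
sample = rng.sample(sampling_frame, k=100)

print(len(sample))       # 100 elements drawn
print(len(set(sample)))  # 100 distinct elements (no repeats)
```

This stands in for the physical table of random numbers: the computer supplies the random digits, and the researcher supplies the frame.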

nonprobability sampling

-any technique where samples are selected in a way not suggested by probability theory 1. reliance on available subjects 2. purposive (judgmental) sampling 3. quota sampling 4. snowball sampling

quota sampling

-based on a knowledge of the characteristics of the population being sampled: what proportion are men, women, of various incomes, ages, etc. -selects people to match a set of these characteristics -the quotas are based on the variables most relevant to the study

marginality of informants

-because they're willing to work with outside investigators, informants are almost always somewhat "marginal" or atypical within their group, sometimes this is obvious, sometimes this is not -ex) the county agent in Jeffrey Johnson's study identified one fisherman who seemed squarely in the mainstream of the community, he was cooperative & helpful to Johnson's research BUT, the more Johnson worked with the fisherman, the more he found the man to be a marginal member of the fishing community -informant's marginality might bias the view you get & limit their access (and hence yours) to the different sectors of the community you want to study

Theoretical Sampling (found under the "Purposive/Judgmental Sampling" section)

-called theoretical sampling because the evolving theoretical understanding of the subject directs the sampling in certain directions -In qualitative research projects, the sampling of subjects may evolve as the structure of the situation being studied becomes clearer & certain subjects seem more central to understanding than others. -ex) You're conducting an interview study among members of a radical political group on campus. You may initially focus on friendship networks as a vehicle for the spread of group membership & participation. As you conduct interviews, you find several references to interactions with faculty members in one of the social science departments. As a consequence, you may expand your sample to include faculty in that department and the other students that they interact with

negative case testing/analytic induction

-coined by Bruce Berg -a type of qualitative hypothesis testing -inductive bc it begins primarily with observations -analytic because it goes beyond descriptions to find patterns -see notes for example

content analysis

-communications (oral, written, etc.) are coded or classified according to some conceptual framework -ex) newspaper editorials may be coded as liberal or conservative -ex) radio broadcasts may be coded as propagandistic or not -ex) novels as romantic or not -ex) paintings as representational or not -ex) political speeches as containing character assassinations or not

relational analysis

-goes beyond observing the frequency of a particular concept in a sample of texts -ALSO examines the relationship among concepts -ex) you might look for references to "discrimination" in letters to the editor & also note the kind of discrimination being discussed (ex: racial, religious, gender, etc.)
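The distinction between counting a concept and examining what it relates to can be sketched in Python (the letters below are invented for illustration; the keyword list is an assumption): conceptual analysis counts occurrences of "discrimination," while relational analysis also records which kind of discrimination accompanies each mention.

```python
import re
from collections import Counter

# Toy corpus of letters to the editor (invented text).
letters = [
    "Racial discrimination in hiring remains a serious problem.",
    "The new policy amounts to religious discrimination, plain and simple.",
    "Gender discrimination persists despite decades of reform.",
    "Racial discrimination was the subject of last week's editorial.",
]

# Conceptual analysis: how often does the concept appear?
frequency = sum(len(re.findall(r"\bdiscrimination\b", l, re.I)) for l in letters)

# Relational analysis: which kind of discrimination co-occurs with it?
kinds = Counter()
for letter in letters:
    match = re.search(r"\b(racial|religious|gender)\b(?=\s+discrimination)",
                      letter, re.I)
    if match:
        kinds[match.group(1).lower()] += 1

print(frequency)  # 4
print(kinds)      # Counter({'racial': 2, 'religious': 1, 'gender': 1})
```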

a brief history of sampling

-has developed with political polling because it's one of the few opportunities where social researchers can discover the accuracy of their estimates -on Election Day, they find out how well or poorly they did

representativeness

-if the aggregate characteristics of the sample closely approximate those same aggregate characteristics in the population -ex) if a population contains 50% women, then the sample must contain "close to" 50% women -samples don't have to be representative in ALL respects, just in the characteristics relevant to the study's interests -a sample will be representative of the population it is selected from if all members of the population have an equal chance of being selected in the sample -the size of the sample selected also affects the degree of representativeness -samples, even carefully selected EPSEM samples, rarely ever perfectly represent the population from which they are drawn

the end product of your coding must: CLEARLY DISTINGUISH BETWEEN UNITS OF ANALYSIS & UNITS OF OBSERVATION

-initial coding must relate to the units of observation -ex) if novelists are the units of analysis & you wish to characterize them through a content analysis of their novels, your primary records will represent novels as the units of observation -you may then combine your scoring of individual novels to characterize each novelist, the unit of analysis

what does coding in content analysis involve?

-involves the logic of conceptualization & operationalization -you have to refine your conceptual framework & develop specific methods for observing in relation to that framework

the end product of your coding must: RECORD THE BASE FROM WHICH YOUR COUNTING IS DONE

-it would be useless to know the # of realistic paintings produced by a given artist without knowing the # they've painted in total; the painter would be regarded as realistic if a high percentage of their paintings were of that genre -it would tell us little if the word 'love' appeared 87 times in a novel if we didn't know how many words were in the novel in total -this issue of observational base is most easily fixed if every observation is coded in terms of one of the attributes making up a variable -ex) rather than simply counting the # of liberal editorials in a given collection, code each editorial by its political orientation, even if it must be coded "no apparent orientation"
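A minimal Python sketch of coding with a recorded base (the editorial codes below are hypothetical): because every editorial is coded, including the residual "no apparent orientation" category, the base of the count is preserved and percentages are meaningful.

```python
from collections import Counter

# Hypothetical codes for every editorial in a collection; note the
# residual category, so nothing falls outside the count.
editorials = ["liberal", "conservative", "liberal",
              "no apparent orientation", "liberal", "conservative"]

counts = Counter(editorials)
base = len(editorials)  # the observational base

# A raw count of 3 liberal editorials means little by itself;
# 3 out of a base of 6 is interpretable.
pct_liberal = counts["liberal"] / base * 100
print(pct_liberal)  # 50.0
```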

probability theory

-lets researchers produce representative samples, analyze the results of their sampling statistically, & estimate how accurate those results are -provides the basis for estimating the parameters of a population -P.T. tells us about the distribution of estimates that would be produced by a large number of such samples -ex) P.T. allows pollsters to infer from a sample of 2,000 voters how a population of 100 million voters is likely to vote & to specify exactly what the probable margin of error is
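The pollster example can be made concrete with the standard formula for the margin of error of a sample proportion, z·sqrt(p(1-p)/n). This is a textbook sketch, not tied to any particular poll; z = 1.96 corresponds to 95% confidence, and p = 0.5 is the worst case.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of error for a sample proportion (z = 1.96 for 95% confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 2,000 voters, worst-case split p = 0.5:
moe = margin_of_error(0.5, 2000)
print(round(moe * 100, 1))  # 2.2 (percentage points)
```

Note that the margin depends on the sample size (n = 2,000), not on the size of the 100-million-voter population, which is why modest samples can describe very large populations.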

nonprobability sampling: snowball sampling/chain referral

-often employed in field research -where each person interviewed may be asked to suggest additional people for interviewing -some consider a form of accidental sampling -appropriate when the members of a special population are difficult to locate (ex: homeless, migrant workers, undocumented immigrants) -because this results in samples with questionable representativeness, it's used primarily for exploratory purposes -can be revealing to show if the people you're interviewing know similar people & if they're willing to identify them -ex) to learn a community organization's pattern of recruitment over time, you interview fairly recent recruits, asking who introduced them to the group, then you interview those people, repeat the process. -ex) studying a loosely structured political group, you ask participants who they believe to be the most influential members of the group, you interview those people, repeat.

What do all large-scale surveys use?

-probability-sampling methods -it's the primary method of selecting large, representative samples

latent content

-related to content analysis -the underlying meaning of communications

what kind of sampling?: university researchers frequently conduct surveys among the students enrolled in large lecture classes

-reliance on available subjects -the ease & frugality of this method explains its popularity -seldom produces data of any general value -may be useful for pretesting a questionnaire, but such a sampling method should not be used for a study describing students as a whole

survey research

-represents the extreme of total specificity

depth vs. specificity

-researchers have a choice between depth (validity) & specificity (reliability) -usually opt for depth, wanting to base their judgments on a broad range of observations, even at the risk that another observer might reach a different judgment of the same situation

informant

-someone who is well versed in the social phenomenon that you want to study & who is willing to tell you what they know about it -a member of the group who can talk directly about the group -NOT TO BE CONFUSED WITH A 'RESPONDENT' -you want informants typical of the groups you're studying, and you want a well-rounded group of people -ex) to understand how a medical clinic is working, only interviewing physicians will not give you a well-rounded view

nonprobability sampling: reliance on available subjects

-sometimes called "convenience" or "haphazard" sampling -ex: stopping people at a street corner, or some other location -common method of journalists in their "person on the street" interviews -extremely risky sampling method -does not permit control over the representativeness of a sample -only justified if the researcher wants to study the characteristics of people passing the sampling point at specific times or if less-risky methods are not feasible -researchers have to be careful about generalizing their data -researchers should alert readers to the risks associated with this method

Jeffrey Johnson (1990)

-studied a salmon fishing community in North Carolina -used several criteria to evaluate potential informants & how useful they could be -ex) Did their positions allow them to interact regularly with other members of the camp, or were they isolated? -ex) Was their information about the camp limited to their specific jobs, or did it cover many aspects of the operation?

study population

-that aggregation of elements from which the sample is actually selected -as a practical matter, researchers are rarely able to guarantee that every element meeting the theoretical definitions laid down actually has a chance of being selected in the sample -ex) some students are always inadvertently omitted from student rosters, some telephone subscribers request that their names & numbers be unlisted -researchers often decide to limit their study populations more severely than indicated in previous examples -ex) national polling firms may limit their national samples to the 48 adjacent states, omitting Alaska & Hawaii for practical reasons -ex) a researcher wishing to sample psychology professors may limit the study population to those in psychology departments, omitting those in other departments -whenever the population is altered, you must make the revisions clear to your readers

element

-that unit about which information is collected & that provides the basis of analysis -distinguished from 'units of analysis,' which are used in data analysis -usually people or certain types of people -other kinds of units can serve as elements too: families, social clubs, corporations

probability sampling

-the key to generalizing from a sample to a larger population -the probability sampling methods used in 1948 were more accurate than quota-sampling techniques -the general term for samples selected in accord with probability theory, typically involving some random-selection mechanism -specific types of probability sampling include EPSEM, PPS, simple random sampling, & systematic sampling -used when researchers want precise, statistical descriptions of large populations ---ex) the percentage of the population that is unemployed, that plans to vote for Candidate X, or that feel a rape victim should have the right to abortion -***to provide useful descriptions of the total population, a sample of individuals from a population must contain essentially the same variations that exist in the population***

parameter

-the summary description of a given variable in a population -ex) the mean income of all families in a city -ex) the age distribution of the city's population -when researchers generalize from a sample, they're using sample observations to estimate population parameters

population

-the theoretically specified aggregation of study elements -ex) the vague term 'Americans' might be the target for study, the delineation of the population would include the definition of the element 'Americans' (ex: citizenship, residence) & the time referent for the study (Americans as of when?) -putting the abstract 'adult New Yorkers' into a workable population would require specification of the age defining 'adult' and the boundaries of NY -specifying the term 'college student' would include consideration of full & part time students, degree candidates & non degree candidates, undergrad & grad students, etc.

manifest content

-the visible, surface content -related to content analysis -the concrete terms contained in a communication

bias

-those selected are not typical nor representative of the larger populations they have been chosen from -does not have to be intentional -ex) you interview the first 100 students you find walking around campus -researcher's personal leanings could affect the sample to the point where it does not truly represent the student ---ex) you might consciously or subconsciously avoid interviewing "cool" students because you're afraid they'll make fun of your research efforts

the ultimate purpose of sampling

-to select a set of elements from a population in such a way that descriptions of those elements accurately portray the total population from which the elements are selected -probability sampling enhances the likelihood of accomplishing this & also provides methods for estimating the degree of probable success

nonprobability sampling: quota sampling

-units are selected into a sample on the basis of prespecified characteristics, so that the total sample will have the same distribution of characteristics assumed to exist in the population being studied -begins with a matrix/table describing the characteristics of the target population -depending on your research, you might need to know what proportion of the population is male or female, & its distribution of age, education, ethnicity, etc. -once you've created a matrix & assigned a relative proportion to each cell in the matrix, you collect data from people who have all the wanted characteristics & assign each person a weight appropriate to their portion of the total population
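The matrix-and-weights procedure can be sketched in Python (the cell proportions, sample size, and achieved counts below are all invented for illustration):

```python
# Hypothetical quota matrix: assumed population proportions for the
# cells of a two-variable matrix (gender x age group).
quota_frame = {
    ("female", "18-34"): 0.15,
    ("female", "35+"):   0.35,
    ("male",   "18-34"): 0.15,
    ("male",   "35+"):   0.35,
}

sample_size = 200

# Recruitment target per cell, so the sample mirrors the assumed
# population distribution.
targets = {cell: round(prop * sample_size) for cell, prop in quota_frame.items()}
print(targets[("female", "35+")])  # 70

# If a cell ends up over- or under-filled, each respondent in it can be
# weighted by (assumed population proportion) / (achieved sample proportion).
achieved = {("female", "18-34"): 40, ("female", "35+"): 60,
            ("male", "18-34"): 30, ("male", "35+"): 70}
weights = {cell: quota_frame[cell] / (n / sample_size)
           for cell, n in achieved.items()}
```

The sketch also shows the method's weak point noted below: the weights are only as good as the quota frame, which is assumed rather than derived from probability theory.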

Kath Browne (2005)

-used snowball sampling through social networks to develop a sample of non heterosexual women in a small town in the UK -says that her own membership in such networks greatly facilitated this type of sampling bc potential subjects were more likely to trust her than to trust heterosexual researchers

nonprobability sampling: purposive/judgmental sampling

-when the units of observation are picked on the basis of the researcher's judgment about which ones will be the most useful or representative for the purpose of the study -ex) in an initial questionnaire, you might select the widest variety of respondents to test the broad applicability of questions: although the study findings wouldn't represent any meaningful population, the test run might effectively uncover any peculiar defects in your questionnaire (this is a pretest, not a final study) -might want to study a small subset of a larger population where many members of the subset are easily identified, but the enumeration of them all would be near impossible -ex) to study the leadership of a student protest movement, many of the leaders are visible, but it would not be feasible to define & sample all of the leaders; in studying all or a sample of the most visible leaders, you can collect data sufficient for your study -you can't enumerate & sample all left-wing & right-wing students, so you might sample from left- & right-leaning groups, such as the Green Party & the Tea Party: although this sample design wouldn't provide a good description of left- or right-wing students as a whole, it could suffice for general comparative purposes -selecting deviant cases for study -ex) you could gain insight into school spirit as exhibited at a pep rally by interviewing people who did NOT seem emotional or did NOT attend at all

2 advantages of probability sampling

1. although never perfectly representative, probability samples are typically more representative than other types of samples bc biases are avoided 2. probability theory permits us to estimate the accuracy or representativeness of the sample

the end product of your coding must _________

1. be numerical 2. clearly distinguish between units of analysis & units of observation 3. record the base from which the counting is done

techniques to avoid misclassifying observations to support a hypothesis in analytic induction

1. if there are sufficient cases, select some at random from each category in order to avoid merely picking those that best support the hypothesis 2. give at least 3 examples in support of every assertion you make about that data 3. have your analytic interpretations carefully reviewed by others uninvolved in the research project to see if they agree 4. report all cases that don't fit your hypothesis, realize that few social patterns are 100% consistent, so you still may have discovered something important

PROS of content analysis

1. its economy in terms of time & money -no requirement for a large research staff -no special equipment is needed -only need access to the material to be coded 2. allows the correction of errors/easier to repeat -if you botch something, you can do it all over again... if you botch field research, it may be impossible to redo the project because the event under study may no longer exist 3. allows the study of processes occurring over a very long time -can analyze data that covers a large time span 4. unobtrusive measures -the content analyst seldom has any effect on the subject being studied -novels have already been written, paintings already painted, etc. 5. the concreteness of materials studied strengthens the likelihood of reliability -you can always code your data & then recode the original documents from scratch -you can repeat as many times as you want (can't do this in field research because there is no way to return to the original events that were observed, recorded, & categorized)

CONS of content analysis

1. limited to the examination of recorded communications (oral, written, or graphic) but they must be RECORDED somehow in order to be analyzed 2. problems of validity are likely

EPSEM Samples

EPSEM: equal probability of selection method

problems with quota sampling

1. the quota frame (the proportions that different cells represent) must be accurate, & it is often difficult to get up-to-date information for this purpose ---the Gallup failure to predict Truman as the presidential victor in 1948 was partly due to this problem 2. the selection of sample elements within a given cell may be biased even though its proportion of the population is accurately estimated ---ex) instructed to interview 5 people who meet a given set of characteristics, an interviewer might still avoid interviewing people who live at the top of seven-story walk-ups, who live in run-down homes, or who own vicious dogs

2 reasons for using random-selection methods

1. this procedure serves as a check on conscious or unconscious bias on the part of the researcher -the researcher who selected cases on an intuitive basis might select cases that would support their hypothesis -random selection erases this danger 2. random selection offers access to the body of probability theory, which provides the basis for estimating the characteristics of the population as well as estimating the accuracy of samples

computer programs for content analysis

MAXQDA, T-LAB: programs for qualitative analysis -ex) mapping word associations in a political speech

Can nonprobability-sampling methods guarantee that the sample we observed is representative of the whole population?

NO! However appropriate to your research purposes, nonprobability-sampling methods cannot guarantee that the sample we observed is representative of the whole population

Chaim Noy

argues that the process of selecting a snowball sample reveals important aspects of the populations being sampled, uncovering "the dynamics of natural & organic social networks"

the end product of your coding must: BE NUMERICAL

ex) if you're coding latent content on the basis of overall judgment: 1 = very liberal 2 = moderately liberal 3 = moderately conservative ...and so on

How can the logic of quota sampling sometimes be applied usefully to a field research project?

ex) in the study of a formal group, you might want to interview both leaders & nonleaders ex) in studying a student political organization, you might want to interview radical, moderate, & conservative members of that group you might be able to achieve sufficient sampling to ensure that you interview both men & women, younger & older people, etc.

CONS of analytic induction

misclassifying observations so as to support an emerging hypothesis

conceptual analysis

observes the frequency of a particular concept in a sample of texts

respondent

people who provide information about themselves, allowing the researcher to construct a composite picture of the group those respondents represent

sampling unit

that element or set of elements considered for selection in some stage of sampling

coding

the process whereby raw data are transformed into standardized form suitable for machine processing & analysis

Karen Farquaharson (2005)

used snowball sampling to discover a network of tobacco policy makers in Australia, both those on the periphery & those at the core of the network

