Sociology 361 - Exam 2
Impact evaluation (or analysis)
-a type of question focused on in evaluation research -analysis of the extent to which a treatment or other service has an effect; also known as summative evaluation -compares what happened after a program with what would have happened had there been no program -in a complex intervention, it is important to measure program components carefully in order to determine whether specific components were responsible for any impacts identified -evaluates whether the policy had an effect on outcomes. In other words, does the policy cause a change? -ex: we compare patient outcomes (e.g., health) and doctors' job satisfaction before and after the implementation of electronic health records -if outcomes are better, then the policy works; if not, then get rid of the policy or evaluate whether there are possible changes to implementation that can improve outcomes (see process evaluation) -an experimental design is the preferred method for maximizing internal validity - for making sure your causal claims about program impact are justified; cases are assigned randomly to one or more experimental treatment groups and to a control group so that there is no systematic difference between the groups at the outset; goal is to achieve a fair, unbiased test of the program itself, so that differences between the types of people who are in the different groups do not influence judgment about the program's impact; difficult to achieve because the usual practice in social programs is to let people decide for themselves whether they want to enter a program or not and to establish eligibility criteria that ensure that people who enter the program are different from those who do not; selection bias is introduced -impact analyses that do not use experimental design can still provide useful information and may be all that is affordable, conceptually feasible, or ethically permissible in many circumstances (ex: SCHIP evaluation) -programs can be evaluated with quasi-experimental designs, survey, or
field research methods, but if current participants who are already in a program are compared with nonparticipants, it is unlikely that the treatment group will be comparable with the control group; program participants will probably be a selected group, different at the outset from nonparticipants; as a result, causal conclusions about program impact will be on much shakier ground -rigorous evaluations often lead to the conclusion that a program does not have the desired effect -a program depends on political support for achieving its goals, and such evaluations may result in efforts to redesign the program or reduction or termination of program funding -concern when program participants know that a special intervention is being tested and not everyone is receiving it; staff delivering the control condition could adopt some elements of the intervention being tested to try to help their clients, while program clients could become demoralized because they are not receiving the intervention (important to design research carefully and consider all possible influences on program impact before concluding that a program should be terminated because of poor results) -p. 499-502; Mod. 9 PP
Graphs
-can be easy to read, and they highlight a distribution's shape -particularly useful for exploring data because they show the full range of variation and identify data anomalies that might be in need of further study -can be produced relatively easily with software on personal computers -have two axes: vertical axis (the y-axis), and the horizontal axis (the x-axis) Bar chart - a graphic for qualitative variables in which the variable's distribution is displayed with solid bars separated by spaces -good tool for displaying the distribution of variables measured at the nominal level because there is, in effect, a gap between each of the categories -ex: chart showing marital status percentages (married, widowed, divorced, separated, never married); central tendency is married because it is the most common value; moderate amount of variability because the half that are not married are spread across the categories of widowed, divorced, separated, and never married; because marital status is not a quantitative variable, the order in which the categories are presented is arbitrary Histogram - a graphic for quantitative variables in which the variable's distribution is displayed with adjacent bars -no necessary gaps between bars -ex: years of education (0-20) and percentages; clump of cases at 12 years, and the distribution is negatively skewed, with the percentage of cases trailing off in a long tail toward the low end Frequency polygon - a graphic for quantitative variables in which a continuous line connects data points representing the variable's distribution -an alternative to the histogram and is particularly useful when the variable has a wide range of values -ex: years of education (0-20) and percentage; most common value is 12 years and is the center of the distribution; moderate variability in the distribution with many cases having more than 12 years of education; highly negatively skewed, with few reporting fewer than 10 years of education If graphs
are misused, they can distort, rather than display, the shape of the distribution -ex: taking a small portion of a graph and altering the vertical axis numbers (ex: starting at 15 rather than 0) can make a small difference look large, when the difference is really just in how the graphs are drawn -adherence to several guidelines will help you to spot these problems and avoid them in your own work: -1. The difference between bars can be exaggerated by cutting off the bottom of the vertical axis and displaying less than the full height of the bars; instead, begin the variable at 0 on both axes -2. Bars of unequal width can make particular values look as if they carry more weight than their frequency warrants; always use bars of equal width -3. Either shortening or lengthening the vertical axis will obscure or accentuate the differences in the number of cases between values; the two axes should be of approximately equal length -4. Avoid chart junk that can confuse the reader and obscure the distribution's shape -Bar or pie chart: you have a nominal level variable, meaning the categories cannot be rank-ordered; ideally one or a few bars/slices will be much taller/wider than the rest (if they're all the same height/width = boring) -Histogram & Frequency Polygon: you have a variable where responses can be rank-ordered; ideally the peaks are much taller than the valleys because otherwise = boring. -p. 318-321; Mod. 10 PP
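The guidelines above can be illustrated with a short plotting sketch. This assumes matplotlib is available; the marital-status percentages are invented for illustration. Note the equal-width bars with gaps (a nominal variable gets a bar chart) and the y-axis anchored at 0 per guideline 1:

```python
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

# hypothetical marital-status percentages (nominal variable -> bar chart)
statuses = ["Married", "Widowed", "Divorced", "Separated", "Never married"]
pcts = [48, 9, 16, 4, 23]

fig, ax = plt.subplots()
ax.bar(statuses, pcts, width=0.6)  # equal-width bars, gaps between categories
ax.set_ylim(bottom=0)              # guideline 1: begin the vertical axis at 0
ax.set_ylabel("Percent")
fig.savefig("marital_status.png")
```

Starting `ylim` at 15 instead of 0 would visually exaggerate the gap between, say, "Divorced" and "Never married" - exactly the distortion guideline 1 warns against.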
In-person interview
-one of five main types of surveys -a survey in which an interviewer questions respondents face-to-face and records their answers -if money is no object, this is often the best survey design -advantages: -response rates are higher than with any other survey design -questionnaires can be much longer than with mailed or phone surveys and can be complex, with both open-ended and closed-ended questions -the order in which questions are read and answered can be controlled by the interviewer -respondents' interpretations of questions can be probed and clarified -hazards: -respondents should experience the interview process as a personalized interaction with an interviewer who is interested in their experiences and opinions, but every respondent should have the same interview experience, asked the same questions in the same way by the same type of person who reacts similarly to the answers -so, the interview must be personal and engaging, yet consistent and nonreactive -careful training and supervision are essential because small differences in intonation or emphasis on words could alter respondents' interpretations of questions' meaning -without a personalized touch, the rate of response will be lower and answers will be less thoughtful and potentially less valid; without a consistent approach, information obtained from different respondents will not be comparable and thus less reliable and valid -although in-person interview procedures are typically designed with the expectation that the interview will involve only the interviewer and respondent, one or more other household members are often within earshot -factors that influence response rates: contact rates are lower in central cities and for single-person households -Balancing Rapport and Control -adherence to some basic guidelines for interacting with respondents can help interviewers maintain an appropriate balance
between personalization and standardization: -project a professional image in the interview; someone who is sympathetic but has a job to do -establish rapport at the outset by explaining what the interview is about and how it will work and by reading the consent form; ask the respondent if they have any questions and respond to these honestly and fully; emphasize confidentiality -ask questions from a distance that is close but not intimate; stay focused and keep an interested posture; maintain eye contact, respond with appropriate facial expressions, and speak in a conversational tone of voice -maintain a consistent approach; deliver each question as written in the same tone of voice; avoid self-expression or loaded reactions -repeat questions if the respondent is confused; use nondirective probes for open-ended questions (ex: can you tell me more about that?) -rapport may be difficult when demographic characteristics like race or gender differ between interviewer and respondent (ex: a white respondent may not disclose racial prejudice to an African American interviewer) -Computer-assisted personal interview (CAPI) - a personal interview in which a laptop computer is used to display interview questions and to process responses that the interviewer types in, as well as to check that these responses fall within allowed ranges -interviewers seem to like it and data are comparable in quality to data obtained in a noncomputerized interview -makes it easier for the researcher to develop skip patterns and experiment with different types of questions for different respondents without increasing the risk of interviewer mistakes -allows the respondent to answer directly on the computer without the interviewer knowing what their response is; helpful for socially undesirable behaviors like drug use, sexual activity, not voting, etc.
(or these sensitive answers can be handed in separately in a sealed envelope) -Maximizing Response to Interviews -because of the difficulties finding all members of a sample, response rates may suffer -several factors affect the response rate in interview studies: -contact rates tend to be lower in central cities partly because of difficulties finding people at home, gaining access to high-rise apartments, and interviewer reluctance to visit some areas at night -single-person households also are more difficult to reach, whereas households with young children or elderly adults tend to be easier to contact -refusal rates vary with some respondents' characteristics -people with less education participate somewhat less in surveys of political issues and are more likely to give 'don't know' responses -high income persons tend to participate less in surveys about income and economic behavior -unusual strains and disillusionment in a society can also undermine the general credibility of research efforts and make it harder to achieve an acceptable response rate -solutions? -send an advance letter introducing the survey project, make multiple contact attempts throughout the day, and make small talk to increase rapport -p. 293-295; Mod. 7
Evaluation research - ethical issues
-evaluation research can make a difference in people's lives while the research is being conducted as well as after the results are reported -job opportunities, welfare benefits, housing options, treatment for substance abuse, and training programs are each potentially important benefits, and an evaluation research project can change both the type and the availability of such benefits -this direct impact on research participants, and potentially their families, heightens the attention that evaluation researchers have to give to human subject concerns -when program impact is the focus, human subject considerations multiply -what about assigning persons randomly to receive some social program or benefit?; one justification for this has to do with the scarcity of resources; if not everyone in the population who is eligible for the program can receive it, because of resource limitations, what could be a fairer way to distribute the program benefits than through a lottery?; random assignment also seems like a reasonable way to allocate potential program benefits when a new program is being tested with only some members of the target population; random assignment to an alternative version seems particularly reasonable -other ethical challenges: -1. How can confidentiality be preserved when the data are owned by a government agency or are subject to discovery in a legal proceeding? -2. Who decides what level of burden an evaluation project may tolerably impose on participants? -3. Is it legitimate for research decisions to be shaped by political considerations? -4. Must evaluation findings be shared with stakeholders rather than only with policy makers? -5. Is the effectiveness of the proposed program improvements really uncertain? -6. Will a randomized experiment yield more defensible evidence than the alternatives? -7. Will the results actually be used? -the Department of Health and Human Services also mandates: -1. Are risks minimized? -2.
Are risks reasonable in relation to benefits? -3. Is the selection of individuals equitable? (randomization) -4. Is informed consent given? -5. Are privacy and confidentiality assured? -these various criteria must be considered long before the design of the study is finalized -important to inform stakeholders about the evaluation before it begins and to consider with them the advantages and disadvantages for potential participants and the type of protections needed -ethical concerns must also be given special attention when evaluation research projects involve members of vulnerable populations as subjects; ex: children must have parental consent (active vs. passive) -when it appears that it will be difficult to meet the ethical standards in an evaluation project, modifications should be considered to reduce the possibly detrimental program impact: -1. Alter the group allocation ratios to minimize the number in the untreated control group -2. Use the minimum sample size required to be able to adequately test the results -3. Test just parts of new programs rather than the entire programs -4. Compare treatments that vary in intensity (rather than presence or absence) -5. Vary treatments between settings rather than between individuals within a setting -p. 514-516
Phone survey
-one of five main types of surveys -a survey in which interviewers question respondents over the phone and then record their answers -became a popular method in the late 1900s -two matters may undermine the validity of these: not reaching the proper sampling units and not getting enough complete responses to make the results generalizable -these should no longer be considered the best method to use for general-purpose surveys; dramatic decline in response rates to phone surveys -many contact attempts can help -response rates can be much lower in populations that are young, less educated, and poor; those who do respond are more likely to be engaged in civic issues, so estimates of attitudes and behaviors can be quite biased -can be problems with reaching the right people; response rates are higher than in mail surveys, but have been dropping over time. -factors that influence response rate: number of callbacks (as many as 20 may be needed); short, clear instructions and questions, trained interviewers who can deal with distractions, etc. -Reaching Sample Units -three ways to obtain a sampling frame for telephone exchanges or numbers: -1. Phone directories provide a useful frame for local studies -2. A nationwide list of area code or exchange numbers from a commercial firm -3.
Commercial firms can provide files based on local directories from around the nation -most telephone surveys use random digit dialing at some point in the sampling process -machine calls random phone numbers within the designated exchanges, whether or not the numbers are published -when the machine reaches an inappropriate household (like a business), the number is simply replaced with another -most survey research organizations use special methods to identify sets of phone numbers that are likely to include working numbers and so make the random digit dialing more efficient -Maximizing Response to Phone Surveys -four issues require special attention in phone surveys: -1. Multiple callbacks will be needed for many sample members -those with more money and education are more likely to be away from home; such persons are more likely to vote Republican, so the results of political polls can be seriously biased if few callback attempts are made -response rates in phone surveys have dropped/the number of callbacks needed to reach respondents by telephone has increased greatly in the last 20 years, with increasing single-person households, dual-earner families, and out-of-home activities -because of telemarketing, people are more accustomed to just saying no to calls from unknown individuals, using the answering machine, or using caller ID to screen out unwanted calls -cell phone users are also harder (and more costly) to contact because their numbers are not published in directories -households with a cell phone but no landline tend to be younger -2. Difficulties with the impersonal nature of phone contact -visual aids cannot be used, the interviewer must convey verbally all information about response choices and skip patterns, response choices must be short, and there must be clear instructions for the interviewer to follow -3.
Interviewers must be prepared for distractions -interruptions by other household members, loss of interest, respondents on cell phones doing other activities like driving, etc. -rapport between the interviewer and the respondent is likely to be lower with phone surveys than with in-person interviews, and so respondents may tire and refuse to answer all the questions -4. Careful interviewer training is essential -ex: may use several training sessions for procedures and techniques Computer-assisted telephone interview (CATI) - a telephone interview in which a questionnaire is programmed into a computer, along with relevant skip patterns, and only valid entries are allowed; incorporates the tasks of interviewing, data entry, and some data cleaning -procedures in phone surveys can be standardized more effectively, quality control maintained, and processing speed maximized when they use this Interactive voice response (IVR) - a survey in which respondents receive automated calls and answer questions by pressing numbers on their touch-tone phones or speaking numbers that are interpreted by computerized voice recognition software -allows even greater control over interviewer-respondent interaction in phone surveys than computer-assisted telephone interviews -can also record verbal responses to open-ended questions for later transcription -have been successful with short questionnaires and when respondents are motivated to participate, but people may be put off by the impersonal nature of the survey -p. 286-293; Mod. 7 PP
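The random digit dialing procedure described above can be sketched in a few lines. This is a minimal illustration, not a real survey system (a real one would also screen out business and nonworking numbers); the area code and exchanges are hypothetical:

```python
import random

def random_digit_dial(area_code, exchanges, n, seed=None):
    """Draw n phone numbers within the designated exchanges by randomizing
    the last four digits, so unpublished numbers are just as likely to be
    reached as published ones."""
    rng = random.Random(seed)  # seed allows a reproducible sample
    return [
        f"({area_code}) {rng.choice(exchanges)}-{rng.randrange(10000):04d}"
        for _ in range(n)
    ]

# hypothetical area code 312 with two designated exchanges
sample = random_digit_dial("312", ["555", "867"], n=5, seed=42)
print(sample)
```

When a drawn number turns out to reach a business or a nonworking line, it is simply replaced with another draw, which is why the generator does not need to know which numbers are valid in advance.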
Response rate
-the number of people who participated in a study out of the number of people selected to participate. -ex: suppose you draw a simple random sample of 2500 workers to participate in a study. Of these 2500, only 1800 workers actually complete a survey. What is the response rate? 1800/2500 x 100 = 72% response rate -Mod. 7 PP
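The worked example above is just a ratio times 100, which can be written as a one-line helper:

```python
def response_rate(completed, selected):
    """Percent of those selected for the sample who actually participated."""
    return 100 * completed / selected

# the example from the notes: 1800 of 2500 sampled workers completed the survey
print(response_rate(1800, 2500))  # 72.0
```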
Index
-the idea is that idiosyncratic variation in response to particular questions will average out, so that the main influence on the combined measure will be the concept upon which all the questions focus; this can be considered a more complete measure of the concept than any one of the component questions -the only way to know that a given set of questions does in fact form this is to administer the questions to people like those you plan to study; if a common concept is being measured, people's responses to the different questions should display some consistency (they should be correlated); special statistics called reliability measures help researchers decide whether responses are consistent; the most common of these is Cronbach's alpha, which varies from 0 to 1, with zero indicating that answers to different questions are completely unrelated, while one indicates the same response is given to every question in the index; an index is not considered sufficiently reliable unless its alpha is at least .7 -because of the popularity of survey research, indexes already have been developed to measure many concepts, and some have proven to be reliable in a range of studies; it is usually much better to use these indexes to measure a concept than to try to devise questions to form a new index; use of a preexisting one simplifies the work in the design and facilitates comparison of findings to those in other studies; ex: the Center for Epidemiologic Studies Depression Index (CES-D); people may have idiosyncratic reasons for having a particular symptom of depression (ex: poor appetite due to something else) but by combining the answers to questions about several symptoms, the index score reduces the impact of this idiosyncratic variation -three cautions: -1.
Our presupposition that each component question is indeed measuring the same concept may be mistaken; although we may include multiple questions in a survey to measure one concept, we may find that answers to the questions are not related to one another, and so the index cannot be created; or we might find that answers to just a few questions are not related to the others and decide to discard those -2. Combining responses to specific questions can obscure important differences in meaning among the questions; ex: AIDS research -3. The questions in an index may cluster together in subsets; all the questions may be measuring the intended concept, but we may conclude that this concept actually has several different aspects; a multidimensional index has been obtained; can create subscales of an index -usually calculated as an arithmetic average or sum of responses to the questions, so that every question counts equally -the interitem reliability of an index (Cronbach's alpha) will increase with the number of items included in the index, even when the association between the individual items stays the same -another approach to creating an index score is to give different weights to the responses to different questions before summing or averaging the responses; also termed a scale; most often, the weight applied to each question is determined through empirical testing; ex: an abortion attitudes scale; giving a 1 to agreement that abortion should be allowed when the pregnancy resulted from rape or incest and a 4 to agreement that abortion should be allowed whenever a woman decides she wants one; agreeing that abortion is allowable under any circumstance is much stronger support for abortion rights than is agreeing in the first case -p. 269-272
Interview schedule
-the survey instrument containing the questions asked by the interviewer in an in-person or phone survey -p. 273
Questionnaire
-the survey instrument containing the questions in a self-administered survey -p. 273
Surveys - ethical issues
-usually pose fewer ethical dilemmas than do experimental or field research designs -potential respondents can easily decline to participate, and a cover letter identifies the sponsors and purposes of the survey -only in group-administered surveys might the respondents be a captive audience, and so these designs require special attention to ensure that participation is truly voluntary -the newly revised proposed federal regulations to protect human subjects allow most survey research to be exempted from formal review; surveys fall within exemption criteria, which stipulate that research is exempt from review if respondents cannot readily be identified or if disclosure of their responses would not place them at risk in terms of legal action, financial standing, employability, educational advancement, or reputation -web surveys create some unique ethical issues; easy for respondents to skip through consent forms, screening for eligibility is a problem (ex: cannot ensure that surveys intended for adults are not taken by children) Confidentiality - a provision of research, in which identifying information that could be used to link respondents to their responses is available only to designated research personnel for specific research needs -is most often the primary focus of ethical concern in survey research -no one but research personnel should have access to information that could be used to link respondents to their responses, and even that access should be limited; respondents should be identified on the questionnaires only with numbers, and the names that correspond to these numbers should be kept in a safe location -follow-up mailings or contact attempts that require linking ID numbers with names and addresses should be carried out by trustworthy assistants under close supervision -encryption technology should be used for web surveys -many surveys include some questions (like income or health information) that most respondents would not like revealed in a public setting.
-other survey questions may not seem embarrassing, but still pose confidentiality issues if they can be used to identify people. Ex: questions asking people their home address so that they can receive a follow-up survey. ex: electronic surveys > people may feel more comfortable completing survey questions about sexuality over the Internet than in an in-person survey; but, you now have access to the person's IP address, meaning you know exactly where this person was when they completed the survey; you're collecting and storing information that could identify respondents Anonymity - a provision of research, in which no identifying information is recorded that could be used to link respondents to their responses -not many surveys can provide this -this does not allow follow-up attempts to encourage participation by nonrespondents and prevents panel designs that measure change through repeated surveys of the same people -in-person surveys can rarely be anonymous, but phone surveys meant to collect information at one point in time (like political polls) could be completely anonymous -when no future follow-up is desired, group-administered surveys can be anonymous -to provide this in mail surveys, the researcher should omit identifying codes from the questionnaire and could include a self-addressed stamped postcard -p. 303-304; Mod. 7 PP
Stakeholders
-a basic element of the evaluation research model -individuals and groups who have some basis of concern with the program -ex: clients, staff, managers, funders, the public, the board of a program or agency, the parents or spouses of clients, the foundations that award program grants, the auditors who monitor program spending, Congress, etc. have an interest in the outcome of the program evaluation -some may fund the evaluation, some may provide research data, and some may review or approve the research report -who the program stakeholders are and what role they play in the program evaluation will have tremendous consequences for the research -p. 491-492
Curvilinear
-discussed in relation to cross-tabulation, patterns -any pattern of association between two quantitative variables that does not involve a regular increase or decrease -p. 342
Outlier
-discussed in relation to range, a measure of variation (it can be drastically altered by just one value) -an exceptionally high or low value in a distribution -p. 333
Qualitative data analysis - techniques
-five different techniques are shared by most approaches to qualitative data analysis: -bring the "art" closer to science, meaning it tries to link our subjective thoughts to objective reality; are all attempts to display validity and reliability of qualitative data analysis -1. Documentation of the data and the process of data collection -the data for a qualitative study most often are notes jotted down in the field or during an interview - from which the original comments, observations, and feelings are reconstructed - or text transcribed from audio recordings -new researchers may become overwhelmed by the quantity of information that has been collected (1 hour of interview can generate 20-25 pages of single-spaced text) -analysis is less daunting if the researcher maintains a disciplined transcription schedule -the various contacts, interviews, written documents, and whatever it is that preserves a record of what happened all need to be saved and listed -documentation is essential for keeping track of what will be a rapidly growing volume of notes, audio and perhaps video recordings, and documents; provides a way of developing and outlining the analytic process and it encourages ongoing conceptualizing and strategizing about text -by keeping track of what happened while you're collecting data and immediately after, it preserves a record of the circumstances shaping your interpretations. -2.
Organization, categorization, and condensation of the data into concepts -identifying and refining important concepts so that they can be organized and categorized -sometimes, conceptual organization begins with a simple observation that is interpreted directly, pulled apart, and then put back together more meaningfully -analytic insights are often tested against new observations, the initial statement of problems and concepts is refined, the researcher then collects more data, interacts with the data again, and the process continues -the researcher is first alerted to a concept in observations/surveys, then refines his understanding of this concept by investigating its meaning -matrix - a form on which particular features of multiple cases or instances can be recorded systematically so that a qualitative data analyst can examine them later; a well-designed chart that can facilitate the coding and categorization process in qualitative data analysis; condenses data into simple categories, reflects further analysis of the data to identify degree of support, and provides a multidimensional summary that will facilitate subsequent, more intensive analysis -you begin to summarize a set of observations. Then, you test these summaries against new observations; a good, "scientific" summary should hold against new observations -3.
Examination and display of relationships between concepts -examining relationships is the centerpiece of the analytic process because it allows the researcher to move from simple descriptions of the people and settings to explanations of why things happened as they did with those people in those settings -the simple relationships that are identified with a matrix can be examined and then extended to create a more complex causal model, which represents the multiple relationships between the constructs identified in a qualitative analysis as important for explaining some outcome -you're building connections across your organization/categorization/condensation to develop ideas about what is causing what; as you build these, you are revisiting your interpretations once more, in a sense "testing" them against the model you're building -4. Corroboration and legitimization of conclusions, by evaluating alternative explanations, disconfirming evidence, and searching for negative cases -no set standards exist for evaluating the validity, or authenticity, of conclusions in a qualitative study, but the need to carefully consider the evidence and methods on which conclusions are based is just as great as with other types of research; individual items of information can be assessed in terms of: -How credible was the informant? > Were statements made by someone with whom the researcher had a relationship of trust or someone the researcher had just met? Did the informant have reason to lie? If the statements do not seem trustworthy, can they at least be used to help understand the informant's perspective? -Were statements made in response to the researcher's questions, or were they spontaneous? > Spontaneous statements are more likely to indicate what would have been said had the researcher not been present -How does the presence or absence of the researcher or informant influence the actions and statements of other group members?
> Reactivity to being observed can never be ruled out as a possible explanation for some directly observed social phenomenon; however, if the researcher carefully compares what the informant says goes on when the researcher is not present, what the researcher observed directly, and what other group members say about their normal practices, the extent of reactivity can be assessed to some extent -a qualitative researcher's conclusions should also be assessed by his or her ability to provide a credible explanation of some aspect of social life -that explanation should capture group members' tacit knowledge - in field research, a credible sense of understanding of social processes that reflects the researcher's awareness of participants' actions as well as their words, and of what they fail to state, feel deeply, and take for granted; the social processes that were observed, not just their verbal statements about these processes; the largely unarticulated, contextual understanding that is often manifested in nods, silences, humor, and naughty nuances -you step back and examine potential threats to the validity of your model; revisit to see if there may have been biases in what your respondents relayed (interviews) or did in front of you (observation); then, compare your interpretations against published studies similar to yours -5. 
Reflection on the researcher's role -confidence in the conclusions from a field research study is also strengthened by an honest and informative account about how the researcher interacted with subjects in the field, what problems he or she encountered, and how these problems were or were not resolved -such a natural history of the development of evidence enables others to evaluate the findings -qualitative data analysts display real sensitivity to how a social situation or process is interpreted from a particular background and set of values and not simply based on the situation itself -researchers are only human and must rely on their own senses and process all information through their own minds; by reporting how and why they think they did what they did, they can help others determine whether or how the researchers' perspectives influenced their conclusions -explain how the analysis of the data resulted from the roles of the researcher and participants, not just from the setting itself -now you evaluate yourself and recognize your own biases and how they may have shaped your interpretations; you should state these to others so that they are aware of what is shaping your analysis -qualitative researchers strive for an emic focus in their analysis, meaning they want to represent a setting in the participants' terms and from their viewpoint. They want to basically become an "insider" -here, you want to think about what prevented or helped you to become an insider. Were you already an insider before you began the study? If not, how much time did you spend trying to become one? If you didn't spend much time, did you try to have an insider present to help you interpret the data? -p. 419-425
Big data
-massive data sets accessible in computer-readable form that are produced by people, available to social scientists, and manageable with today's computers -are used to predict the spread of flu, the price of airline tickets, the behavior of consumers, etc.; to investigate the social world -the quantity of data generated by the interconnected web is astounding (examples) and the sources of Big Data are increasing rapidly -are also generated by GPS users, social media, smartphones, wrist health monitors, student postings, and even student activity in online programs -public utilities, government agencies, and private companies can all learn about their customers from analyzing patterns revealed in their records -everyone is already sharing their lives/opinions on technology platforms, so instead of constructing a survey, researchers can simply collect their Tweets/posts/videos/photos/likes/etc. -this is the type of data *many* employers are looking for people to analyze -ex: follow the patterns of tweets and re-tweets through Twitter for news articles covering security breaches in medical records -a special type of secondary data is big data. -what is so "big"? The number of observations, which can reach millions/billions/trillions/etc. -examples include social media data: photos, videos, posts, tweets, likes, private messages sent on OkCupid.... 
-besides social media, social scientists are interested in really anything that computers can track about individuals: -what you purchase on Amazon (multiplied by every Amazon customer = "big") -the number of kilowatt hours of energy you used per day over the last year (multiplied by every household = "big") -the number of times you parked in a public space downtown last month (multiplied by every Milwaukean = "big") -the phone numbers you called or texted using your smartphone (multiplied by every smartphone user = "big") -the latest thing for employers (all of them > for-profit, non-profit, and government organizations) -some may be using technology to acquire big data from consumers (e.g., the electricity company that installs "smart" meters) -even if they don't, it's pretty much expected that they have a social media presence. What better way to find out about consumer needs than tracking social media activity on their accounts? There's a reason politicians are analyzing it. -examples of social science research using this: -ex: to understand how popular sociology is, you could see how frequently the name of the discipline has appeared in all books ever written in the world; Ngrams - frequency graphs produced by Google's database of all words printed in more than one third of the world's books over time (with coverage still expanding); in this example, it is limited to books in English, but that includes about 30 million books; the height of the graph represents the percentage that the term represents of all words in books published in that year, so a rising line means greater relative interest in the word, not simply more books being published -ex: used geographic Big Data to investigate the addresses of emergency and non-emergency calls to the city police of Boston -ex: investigation of mood fluctuations throughout the day and across the globe by studying all tweets -ex: examining the relationship between friendship patterns and access to social capital by 
analyzing Facebook posts -p. 538-543; Mod. 9 PP
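The Ngram idea above can be sketched in a few lines. This is a toy illustration with a made-up two-year "corpus" (not Google's actual pipeline): for each year, it computes the share of all words that the search term represents, which is what the height of an Ngram graph plots.

```python
from collections import Counter

def relative_frequency(term, corpus_by_year):
    """For each year, the share of all words matching `term` --
    the quantity an Ngram-style graph plots on its y-axis."""
    shares = {}
    for year, text in corpus_by_year.items():
        words = text.lower().split()
        counts = Counter(words)
        shares[year] = counts[term] / len(words) if words else 0.0
    return shares

# Hypothetical mini-corpus standing in for "all books published that year"
corpus = {
    1990: "the study of society sociology emerged the field grew",
    2000: "sociology and sociology departments expanded across the country",
}
print(relative_frequency("sociology", corpus))
```

A rising share across years indicates growing relative interest in the word, even if the total number of books published also grew.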
Mailed survey
-one of five main types of surveys -a survey involving a mailed questionnaire to be completed by the respondent -central concern is maximizing the response rate -extra steps must be taken to ensure that response rates increase -need an acceptable 70% response rate or higher -sending follow-up mailings to nonrespondents is the single most important requirement for obtaining an adequate response rate -an attractive questionnaire full of clear questions will typically get about a 30% response rate -factors that influence response rates: quality of the cover letter, number and quality of follow-ups, ease of survey completion; quality of incentives (provide them to everyone before survey completion) -Steps: -a few days before the questionnaire is to be mailed, send a brief letter to respondents that notifies them of the importance of the survey they are to receive -send the questionnaire with a well-designed personalized cover letter, a self-addressed and stamped return envelope, and if possible a token monetary reward -send a reminder postcard to all sample members 2 weeks after the initial mailing; include a phone number for those who have not received a questionnaire -send a replacement questionnaire with a new cover letter to nonrespondents, 2 to 4 weeks after the initial questionnaire mailing; cover letter should be a bit shorter and more insistent and stress the survey's importance -6 to 8 weeks after the initial survey mailing, use a different mode of delivery or different survey design (usually over the phone) that stresses the importance of the survey and their response -cover letter - the letter sent with a mailed questionnaire that explains the survey's purpose and auspices (sponsors) and encourages the respondent to participate -critical for success because it sets the tone for the questionnaire -a carefully prepared one should increase the response rate and result in more honest/complete answers -must be: -Credible > letter should establish that the 
research is being conducted by a researcher or organization that the respondent is likely to accept as a credible, unbiased authority; government sponsors tend to elicit high rates of response, and research conducted by well-known universities and recognized research organizations (ex: Gallup) is also usually credible, but publishing firms, students, and private associations elicit the lowest response rates -Personalized > cover letter should include a personalized salutation with the respondent's name and the researcher's signature -Interesting > statement should interest the respondent in the contents of the questionnaire -Responsible > reassure the respondent that information obtained will be treated confidentially and include a phone number should they have questions; point out that participation is voluntary -other steps are necessary to maximize the response rate: -it is particularly important in self-administered surveys that individual questions be clear and understandable to all respondents because no one will be there to clarify them -use no more than a few open-ended questions because respondents are likely to be put off by a lot of writing -use an ID number on the questionnaire to identify nonrespondents; explain this in the cover letter -enclose a token incentive with the survey; typically $2 or $5 seems to be the best incentive, serving as a reward and an indication of trust that participants will complete the survey -include a stamped, self-addressed return envelope with each copy of the questionnaire (reduces cost of responding, personalizes the exchange, indicator of trust in the respondent) -consider presurvey efforts (ex: a vigorous advertising campaign for the US Census helped a lot) -average response rate for first mailing is about 24%, up to 42% by the postcard follow-up, up to 50% after the first replacement questionnaire, and to 72% after a second replacement questionnaire -response rate may differ by interest in the topic -when a survey has many nonrespondents, getting some 
ideas about their characteristics, by comparing late respondents with early respondents, can help determine the likelihood of bias resulting from the low rate of response -if resources do not permit phone calls to all nonrespondents, a random sample of nonrespondents can be selected and contacted by phone or in person; with appropriate weighting, these new respondents can then be added to the sample of respondents to the initial mailed questionnaire, resulting in a more representative total sample -incomplete response is also a problem; some respondents may skip questions or stop answering at some point; this does not occur often with well-designed questionnaires -p. 282-286; Mod 7 PP
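The early-versus-late comparison described above can be sketched as follows (the ages are hypothetical). Late respondents stand in as a rough proxy for nonrespondents, so a large gap on a key characteristic suggests the low response rate may be biasing results.

```python
from statistics import mean

# Hypothetical key characteristic (age) for respondents by response wave
early = [34, 41, 29, 52, 47]   # returned after the first mailing
late  = [23, 27, 31, 25, 30]   # returned only after follow-up contacts

# Late respondents serve as a rough proxy for nonrespondents: a large gap
# on key variables hints that nonresponse is biasing the sample.
gap = mean(early) - mean(late)
print(f"early mean age: {mean(early):.1f}, late mean age: {mean(late):.1f}, gap: {gap:.1f}")
```

Here the gap suggests younger people responded only after prodding, so nonrespondents are probably younger still.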
Focus Groups
-one of three distinctive qualitative research techniques -a qualitative method that involves unstructured group interviews in which the focus group leader actively encourages discussion among participants on the topics of interest -combines some elements of participant observation and intensive (in-depth) interviewing into a unique data collection strategy -groups of unrelated individuals that are formed by a researcher and then led into group discussion of a topic for 1-2 hours -the researcher asks specific questions and guides the discussion to ensure that group members address these questions, but the resulting information is qualitative and relatively unstructured -do not involve representative samples; instead, a few individuals are recruited who have the time to participate, have some knowledge pertinent to the focus group topic, and share key characteristics with the target population -usually involves several discussions involving similar participants -marketing researchers and political campaigns often use this method -used to collect qualitative data, using open-ended questions posed by the researcher or group leader -the discussion mimics the process of forming and expressing opinions -the researcher/group moderator uses an interview guide, but the dynamics of group discussion often require changes in the order and manner in which different topics are addressed -no formal procedures exist for determining the generalizability of focus group answers, but the careful researcher should conduct at least several focus groups on the same topic and check for consistency in findings; some suggest doing enough focus groups to reach the saturation point -most groups involve 5-10 people, participants usually do not know one another, opinions differ on the value of using homogeneous (more welcoming) vs. 
heterogeneous (more ideas) groups -should not have a group where some participants have supervisory/authority roles or ones that involve emotional or sensitive information (confidentiality cannot be ensured) -moderators begin the discussion by generating interest in the topic, creating the expectation that all will participate, and making it clear that the researcher does not favor any participant or perspective -all questions should be clear, simple, and straightforward; should begin with easy-to-answer general questions and then, about a quarter of the way into the time, shift to specific issues -moderator should spend more time listening and guiding than asking questions -may conclude by asking respondents for policy recommendations or further thoughts they didn't have a chance to express -emphasize discovering unanticipated findings and exploring hidden meanings -can be an indispensable aid for developing hypotheses and survey questions, for investigating the meaning of survey results, and for quickly assessing the range of opinion about an issue -important to consider how recruitment procedures have shaped the generalizability of focus group findings, as well as the impact of interviewer style and questioning -interviews with a group of respondents about a topic of interest, which could be led by the researcher; like intensive interviews, there is an agenda for discussion but the researcher is open to changing it based on what comments arise -when deciding when to use and when to avoid, consider what makes you comfortable and uncomfortable in a group setting: avoid using for discussing sensitive or embarrassing issues, avoid using when some group members have authority over other members, do use when you want to assess opinions, concerns, or experiences you believe are shared by many in the setting -as with intensive interviews, you're looking for saturation -p. 376, 397-399; Mod. 8 PP
Output
-a basic element of the evaluation research model -the services delivered or new products produced by the program process -ex: clients served, case managers trained, food parcels delivered, or arrests made -may be desirable in themselves, but they primarily indicate that the program is operating -p. 491
Web survey
-one of five main types of surveys -a survey that is accessed and responded to on the World Wide Web -has become an increasingly useful survey method because of growth in the fraction of the population using the Internet and technological advances that make web survey design relatively easy and often superior to printed layouts -many populations, such as professionals, middle-class communities, members of organizations, and college students, have very high rates of internet use, so this design can be a good option for them -because of the internet's global reach, web surveys can also reach dispersed populations, even in different countries, but coverage remains a major problem with many populations -the extent to which the population of interest is connected to the web is the most important consideration when deciding whether to conduct a survey through the web -other considerations that may increase the attractiveness of a web survey include the need for a large sample, rapid turnaround, or collecting sensitive information that respondents might be reluctant to discuss in person -many web surveys begin with an e-mail message to potential respondents with a direct link to the survey website -particularly useful when a defined population with known email addresses is to be surveyed; the researcher can then send email invitations to a representative sample without difficulty; however, lists of unique email addresses for the members of defined populations generally do not exist outside of organizational settings; as a result, there is no available method for drawing a random sample of email addresses for people from any general population; instead, web survey participants should be recruited through mailings or phone calls to their home addresses, with the web survey link sent to them after they have agreed to participate -individuals who do not have internet access should be provided with a computer and internet connection; this increases the cost of the survey considerably, 
but it can be used as part of creating the panel of respondents who agree to be contacted for multiple surveys over time -coverage bias can also be a problem with web surveys that are designed for a population with high levels of internet use - if the topic of the survey leads some people to be more likely to respond on the web (ex: if you have a more extreme view on something, you might be more willing to participate) -biggest issue is who has Internet access; very good response rates (> 70%) can be achieved when web surveys are used with suitable samples -factors that influence rate of response: the single biggest problem here is the large number of individuals who do not have Internet access or are not comfortable in computer-based environments -when web surveys are administered to volunteer samples that are recruited on particular websites or through social media, the result can be a very large but very biased sample of the larger population; this bias can be reduced somewhat by requiring participants to meet certain inclusion criteria or weighting respondents based on key characteristics so that the resulting sample is more comparable to the general population in terms of certain demographics (weighting appears to reduce coverage bias 30-60%); coverage bias is not as important when a convenience sample will suffice for an exploratory survey about some topic -web surveys have some unique advantages for increasing measurement validity; questionnaires completed on the web can elicit more honest reports about socially undesirable behavior or experiences, including illegal behavior and victimization, when compared with phone interviews; are relatively easy to complete because respondents simply click on response boxes -the survey can be programmed to move respondents easily through sets of questions, not even displaying ones that do not apply to the respondent, thus increasing rates of completion; use of visuals like pictures, sounds, and animation, can be used as a 
focus of particular questions to enhance visual survey appeal and thus understanding/completion -web surveys almost completely eliminate data entry errors -coverage bias is the single biggest problem with web surveys of the general population and of segments of the population without a high level of internet access, and none of the web survey methods fully overcome this problem -weighting does not by itself result in responses similar to those obtained from a mailed survey on many questions -many respondents do not agree to participate in web surveys, and few of those contacted via phone agreed to provide an email address -some research has found that people choose the mailed survey over the web one and that phone surveys continue to elicit higher rates of response -visuals should be used with caution to avoid unintended effects on interpretation of questions and response choices -p. 296-300; Mod. 7
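The weighting idea mentioned above can be sketched as simple post-stratification. All figures here are hypothetical: a web sample that skews male is reweighted so its gender mix matches known population shares.

```python
# Post-stratification sketch with hypothetical figures: reweight a web
# sample so its gender mix matches known population shares.
population_share = {"women": 0.51, "men": 0.49}
sample_counts    = {"women": 300,  "men": 700}   # web sample skews male

n = sum(sample_counts.values())
# weight = (population share) / (sample share); >1 upweights, <1 downweights
weights = {g: population_share[g] / (sample_counts[g] / n) for g in sample_counts}
print(weights)
```

After weighting, the sample's effective gender composition matches the population's, though weighting cannot fix bias on characteristics that were never measured.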
Jottings
-discussed in relation to participant observation notes -brief notes written in the field about highlights of an observation period -used because it is too difficult to write extensive notes while in the field -serve as memory joggers when the researcher is later writing the field notes -p. 387
Evaluation research - black box vs. program theory
-a design decision in evaluation research -Black box evaluation - the type of evaluation that occurs when an evaluation of program outcomes ignores, and does not identify, the process by which the program produced the effect -the focus of the evaluation researcher is on whether cases seem to have changed as a result of their exposure to the program, between the time they entered the program as inputs and when they exited the program as outputs -assumption is that program evaluation requires only the test of a simple input-output model; there is no attempt to open the black box of the program process -Program theory - a descriptive or prescriptive model of how a program operates and produces effects -ex: research on welfare-to-work; found that children of welfare-to-work parents did worse in school than those whose parents were just on welfare; must open the black box to find out why -specifies how the program is expected to operate and identifies which program elements are operational; may identify the resources needed by the program and the activities that are essential in its operations, as well as how the program is to produce its effects -thus improves understanding of the relationship between the independent variable (the program) and the dependent variable (the outcome or outcomes) -theory-driven evaluation - a program evaluation that is guided by a theory that specifies the process by which the program has an effect; when a researcher has sufficient knowledge of the relationship between the independent and dependent variables before the investigation begins, outlining a program theory can help guide the investigation of program process in the most productive directions -program theory can be either descriptive or prescriptive: -1. Descriptive > specifies what impacts are generated and how they occur; it suggests a causal mechanism, including intervening factors, and the necessary context for the effects; generally empirically based -2. 
Prescriptive > specifies how to design or implement the treatment, what outcomes should be expected, and how performance should be judged -p. 505-506
Regression analysis
-a statistical technique for characterizing the pattern of a relationship between two quantitative variables in terms of a linear equation and for summarizing the strength of this relationship in terms of its deviation from that linear pattern -Example: -scatterplot showing the relationship between years of education and family income -the data points tend to run from the lower left to the upper right of the chart, indicating a positive relationship (the more years of education, the higher the family income) -the line drawn through the points is the regression line, which is the best-fitting straight line for this relationship (the line that lies closest to all the points in the chart) -correlation coefficient - a summary statistic that varies from 0 to 1 or -1, with 0 indicating the absence of a linear relationship between two quantitative variables and 1 or -1 indicating that the relationship is completely described by the line representing the regression of the dependent variable on the independent variable -aka Pearson's r, or just r -tells us the strength of the relationship between two variables -closer to 1 or -1, the stronger the relationship -can also be used to study the association between three or more variables in a multiple regression analysis -p. 351-353
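The best-fitting regression line described above can be computed with ordinary least squares. The data below are made up (education in years, income in $1,000s), and the formulas use only the sums of deviations from the means.

```python
from statistics import mean

# Toy data (hypothetical): years of education (x) and family income in $1,000s (y)
education = [10, 12, 14, 16, 18]
income    = [30, 38, 45, 55, 62]

# Least-squares slope and intercept for the best-fitting (regression) line
x_bar, y_bar = mean(education), mean(income)
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(education, income))
         / sum((x - x_bar) ** 2 for x in education))
intercept = y_bar - slope * x_bar
print(f"income ≈ {slope:.2f} * education + {intercept:.2f}")
```

A positive slope matches the scatterplot pattern in the example: each additional year of education is associated with roughly $4,000 more income in this toy data.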
Correlational analysis
-a statistical technique that summarizes the strength of a relationship between two quantitative variables in terms of its adherence to a linear pattern -Example: -scatterplot showing the relationship between years of education and family income -the data points tend to run from the lower left to the upper right of the chart, indicating a positive relationship (the more years of education, the higher the family income) -the line drawn through the points is the regression line, which is the best-fitting straight line for this relationship (the line that lies closest to all the points in the chart) -correlation coefficient - a summary statistic that varies from 0 to 1 or -1, with 0 indicating the absence of a linear relationship between two quantitative variables and 1 or -1 indicating that the relationship is completely described by the line representing the regression of the dependent variable on the independent variable -aka Pearson's r, or just r -tells us the strength of the relationship between two variables -closer to 1 or -1, the stronger the relationship -can also be used to study the association between three or more variables in a multiple regression analysis -p. 351-353
Correlation coefficient
-a summary statistic that varies from 0 to 1 or -1, with 0 indicating the absence of a linear relationship between two quantitative variables and 1 or -1 indicating that the relationship is completely described by the line representing the regression of the dependent variable on the independent variable -aka Pearson's r, or just r -tells us the strength of the relationship between two variables -closer to 1 or -1, the stronger the relationship -can also be used to study the association between three or more variables in a multiple regression analysis -p. 352-353
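Pearson's r can be computed directly from its definition: the sum of cross-products of deviations, divided by the square root of the product of the summed squared deviations. The education/income numbers below are hypothetical.

```python
from math import sqrt
from statistics import mean

# Toy data (hypothetical): years of education (x) and family income in $1,000s (y)
x = [10, 12, 14, 16, 18]
y = [30, 38, 45, 55, 62]

x_bar, y_bar = mean(x), mean(y)
sxy = sum((a - x_bar) * (b - y_bar) for a, b in zip(x, y))
sxx = sum((a - x_bar) ** 2 for a in x)
syy = sum((b - y_bar) ** 2 for b in y)
r = sxy / sqrt(sxx * syy)   # Pearson's r: ranges -1 to 1; 0 = no linear relationship
print(f"Pearson's r = {r:.3f}")
```

Here r is close to 1, meaning the points lie almost exactly on the regression line; an r near 0 would mean the line explains almost nothing.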
Skewness
-a type of distribution shape -the extent to which cases are clustered more at one or the other end of the distribution of a quantitative variable rather than in a symmetric pattern around its center; skew can be positive (a right skew), with the number of cases tapering off in the positive direction, or negative (a left skew), with the number of cases tapering off in the negative direction -can be represented in a graph or frequency distribution -we cannot talk about the skewness of a variable at the nominal level, because the ordering of values is arbitrary; we cannot say that the distribution is not symmetric because we could just reorder the values to make the distribution more or less symmetric -p. 316-318
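Skewness can be quantified as the average cubed z-score: positive values indicate a right (positive) skew, negative values a left skew, and roughly zero a symmetric distribution. The income figures below are made up to show a classic right skew.

```python
from statistics import mean, pstdev

def skewness(data):
    """Population skewness: the average cubed z-score.
    Positive -> right tail; negative -> left tail; ~0 -> symmetric."""
    m, s = mean(data), pstdev(data)
    return sum(((x - m) / s) ** 3 for x in data) / len(data)

incomes = [20, 22, 25, 27, 30, 95]   # one very high value creates a right tail
print(skewness(incomes))              # positive -> right (positive) skew
```

Income distributions typically look like this: most cases cluster low while a few very high values pull the tail (and the mean) to the right.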
Variability
-a type of distribution shape -the extent to which cases are spread out through the distribution or clustered in just one location -can be represented in a graph or frequency distribution -four popular measures of variation are: range, interquartile range, variance, and standard deviation (the most popular) -to calculate each of these measures, the variable must be at the interval or ratio level (some argue ordinal too) -measures of variability only capture part of what we need to know about the distribution of a variable; they do not tell us about the extent to which a distribution is skewed (researchers usually just eyeball skewness) -p. 316, 331-332
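The four measures of variation listed above can be computed on a small made-up set of scores; the `statistics` module's `quantiles` gives the quartile cut points needed for the interquartile range.

```python
from statistics import variance, stdev, quantiles

scores = [2, 4, 4, 4, 5, 5, 7, 9]   # hypothetical interval-level scores

value_range = max(scores) - min(scores)   # range: highest minus lowest
q1, q2, q3 = quantiles(scores, n=4)       # quartile cut points
iqr = q3 - q1                             # interquartile range: middle 50% of cases
print(value_range, iqr, variance(scores), stdev(scores))
```

The range is sensitive to a single extreme case, which is why the interquartile range and standard deviation are usually preferred summaries of spread.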
Central tendency
-a type of distribution shape -the most common value (for variables measured at the nominal level) or the value around which cases tend to center (for a quantitative variable) -can be represented in a graph or frequency distribution -usually summarized with one of three statistics: mode, median, or mean; the analyst must consider the level of measurement, skewness of the distribution, and the purpose for which the statistic is used when choosing an appropriate measure -p. 316, 327
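The three central-tendency statistics can be computed side by side on a small hypothetical set of scores; the comments note the level-of-measurement considerations mentioned above.

```python
from statistics import mode, median, mean

scores = [2, 4, 4, 4, 5, 5, 7, 9]   # hypothetical scores

# mode: usable at any level of measurement (the most common value)
# median: ordinal level and up; resistant to skewed distributions
# mean: interval/ratio level; gets pulled toward a skewed tail
print(mode(scores), median(scores), mean(scores))
```

When the three disagree noticeably (here the mean exceeds the median), that is itself a hint the distribution is skewed.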
Specification
-a type of relationship involving three or more variables in which the association between the independent and dependent variables varies across the categories of one or more other control variables -this is what was found in the example; there is almost no association between income and voting for those with a high school education or less, but there is a moderate association for the higher educational categories -p. 348
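Specification can be illustrated by computing the income-voting association separately within each category of the control variable. All numbers below are hypothetical, constructed so the association differs across education groups.

```python
from math import sqrt
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists."""
    xb, yb = mean(x), mean(y)
    sxy = sum((a - xb) * (b - yb) for a, b in zip(x, y))
    return sxy / sqrt(sum((a - xb) ** 2 for a in x) * sum((b - yb) ** 2 for b in y))

# Hypothetical cases: income (in $1,000s) and a 1-10 likelihood-of-voting score,
# split by the control variable, education
income = [20, 30, 40, 50]
vote_low_ed  = [4, 5, 5, 4]   # high school or less
vote_high_ed = [4, 5, 5, 7]   # some college or more

r_low, r_high = pearson_r(income, vote_low_ed), pearson_r(income, vote_high_ed)
print(f"r (high school or less) = {r_low:.2f}; r (college+) = {r_high:.2f}")
```

Because the association is near zero in one subgroup but sizable in the other, the income-voting relationship is "specified" by education.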
Secondary data analysis - advantages
-allows analyses of social processes in otherwise inaccessible settings -saves time and money -allows the researcher to avoid data collection problems -facilitates comparison with other samples -may allow inclusion of many more variables and a more diverse sample than otherwise would be feasible -may allow data from multiple studies to be combined -p. 534-535
Chi-square
-an inferential statistic used to test hypotheses about relationships between two or more variables in a cross-tabulation -used in most cross-tabular analyses to estimate the probability that an association is not due to chance -probability is reported like p < .05 (the probability that the association was due to chance is less than 5 out of 100, or 5%) -is calculated with a formula that combines the residuals (expected count vs. actual count) for each cell in the cross-tab -inferential statistics are used in deciding whether it is likely that an association exists in the larger population from which the sample was drawn -even when the association between two variables is consistent with the researcher's hypothesis, it is possible that the association was just caused by random sampling error; should avoid concluding that an association exists in the population from which the sample was drawn unless the probability that the association was due to chance is less than 5% (95% confident that the association was not due to chance) -the larger the deviations of the expected from the observed counts in the various table cells, the less likely it is that the association is due only to chance -statistical significance - the mathematical likelihood that an association is due to chance, judged by a criterion set by the analyst (often that the probability is less than 5 out of 100 or p < .05); when the analyst feels reasonably confident (at least 95%) that an association was not due to chance, it is said that the association is statistically significant; an association is less likely to appear on the basis of chance in a larger sample than in a smaller one; can have a statistically significant finding, but the association is too weak to be substantively significant -p. 343-345
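The chi-square statistic can be computed by hand from a cross-tab: for each cell, compare the observed count with the count expected if there were no association, square the residual, divide by the expected count, and sum. The 2x2 counts below are hypothetical.

```python
# Chi-square by hand for a 2x2 cross-tab (hypothetical counts):
#                 voted  did not vote
# college            40            10
# no college         25            25
observed = [[40, 10], [25, 25]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / n   # count if no association
        chi2 += (obs - expected) ** 2 / expected       # squared residual term

print(f"chi-square = {chi2:.2f}")   # df = 1 here; values above 3.84 mean p < .05
```

Here chi-square is well above the df = 1 critical value of 3.84, so the association would be judged statistically significant at the .05 level.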
Algorithm
-detects patterns in data, which can include text, images, or sound -ex: could sift through millions of images searching for this pattern (one person, arm extended, head slightly tilted) and determine which images are likely selfies and which are not -ex: sifting through the replies to millions of tweets and noticing that several replies to the same tweet said "congrats!" and "good job," you'd probably guess that the original tweet was by someone who recently won an award or achievement. -are useful for identifying patterns like these, and then using the patterns to predict what will happen next -ex: Facebook uses an algorithm to detect patterns and predict suicide -as social scientists, we have two roles: -1. Help make sense of patterns the algorithm discovers. -this is referred to as unsupervised machine learning; the computer goes through the data and identifies the most commonly occurring patterns, with little knowledge of what the pattern means -this is useful in instances where you have little to no idea about the human behavior producing the data. Imagine trying to discover Facebook norms way back when it was first invented. -2. Help design the algorithms to search for specific patterns we already know exist. -this is referred to as supervised machine learning. We tell the computer the patterns to search for by giving it sample data we coded ourselves -for example, we could code several images ourselves as either selfies or not. The computer would then "learn" that selfies tend to contain only a single person, whose face is tilted to the side. It uses this information to classify a new set of images as either selfies or not -Mod. 9 PP
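The supervised-learning idea above can be sketched with a deliberately tiny word-counting classifier. The "tweets" and labels are hypothetical hand-coded training data, and the scoring rule is a bare-bones stand-in for real classifiers, not any specific library's method.

```python
from collections import Counter

# Hypothetical hand-coded training data: (text, label) pairs
training = [
    ("congrats on the award", "congrats"),
    ("good job well deserved", "congrats"),
    ("what a great achievement congrats", "congrats"),
    ("traffic is terrible today", "other"),
    ("anyone seen my keys", "other"),
    ("the game starts at noon", "other"),
]

# "Training": count how often each word appears under each label
word_counts = {"congrats": Counter(), "other": Counter()}
for text, label in training:
    word_counts[label].update(text.split())

def classify(text):
    """Label new text by which class its words appeared under more often."""
    scores = {label: sum(counts[w] for w in text.split())
              for label, counts in word_counts.items()}
    return max(scores, key=scores.get)

print(classify("congrats you did a good job"))
```

The hand-coding step is exactly what makes this *supervised*: we tell the computer the patterns by labeling examples ourselves; an unsupervised approach would instead cluster the texts without any labels.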
Theory-driven evaluation
-discussed as a subset of program theory, which is discussed in relation to a design decision in evaluation (black box vs. program theory) -a program evaluation that is guided by a theory that specifies the process by which the program has an effect; when a researcher has sufficient knowledge of the relationship between the independent and dependent variables before the investigation begins, outlining a program theory can help guide the investigation of program process in the most productive directions -p. 505
Matrix
-discussed in relation to qualitative data analysis - organization, categorization, and condensation of the data into concepts -a form on which particular features of multiple cases or instances can be recorded systematically so that a qualitative data analyst can examine them later -a well-designed chart that can facilitate the coding and categorization process in qualitative data analysis -condenses data into simple categories, reflects further analysis of the data to identify degree of support, and provides a multidimensional summary that will facilitate subsequent, more intensive analysis -p. 423
Black box evaluation
-discussed in relation to a design decision in evaluation research; this vs. program theory -the type of evaluation that occurs when an evaluation of program outcomes ignores, and does not identify, the process by which the program produced the effect -the focus of the evaluation researcher is on whether cases seem to have changed as a result of their exposure to the program, between the time they entered the program as inputs and when they exited the program as outputs -assumption is that program evaluation requires only the test of a simple input-output model; there is no attempt to open the black box of the program process -p. 505
Social science (or researcher) approaches
-discussed in relation to a design decision in evaluation research; this vs. stakeholder orientation -an orientation to evaluation research that expects researchers to emphasize the importance of researcher expertise and maintenance of autonomy from program stakeholders -emphasize the importance of researcher expertise and maintenance of some autonomy to develop the most trustworthy, unbiased program evaluation -it is assumed that evaluators cannot passively accept the values and views of the other stakeholders -evaluators who adopt this approach derive a program theory from information they obtain on how the program operates and existing social science theory and knowledge, not from the views of stakeholders -extreme form is goal-free evaluation = researchers do not even permit themselves to learn what goals the program stakeholders have for the program; instead, the researcher assesses and then compares the needs of participants with a wide array of program outcomes; wants to see the unanticipated outcomes and to remove any biases caused by knowing the program goals in advance -p. 507
Ngrams
-discussed in relation to big data -frequency graphs produced by Google's database of all words printed in more than one third of the world's books over time (with coverage still expanding) -ex: to understand how popular sociology is, you could see how frequently the name of the discipline has appeared in all books ever written in the world; in this example, it is limited to books in English, but that includes about 30 million books -the height of the graph represents the percentage that the term represents of all words in books published in that year, so a rising line means greater relative interest in the word, not simply more books being published -p. 541
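The "height" described above is just the term's share of all words printed that year, which is why it controls for the growing number of books. A sketch with invented counts:

```python
# Hypothetical counts: total words printed per year vs. hits for "sociology".
totals = {1950: 2_000_000, 1980: 5_000_000, 2010: 10_000_000}
hits = {1950: 20, 1980: 150, 2010: 500}

# Ngram height = term frequency / total words that year, as a percentage
share = {year: hits[year] / totals[year] * 100 for year in totals}
print(share)
```

A rising `share` means relative interest grew; raw `hits` alone could rise simply because more books were published.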
Street level bureaucrats
-discussed in relation to challenges involved in secondary data analysis -officials who serve clients and have a high degree of discretion -when officials make decisions and record the bases for their decisions without much supervision, records may diverge considerably from the decisions they are supposed to reflect -there is value in using multiple methods, particularly when the primary method of data collection is the analysis of records generated by these people -p. 535
Percentages
-discussed in relation to cross tabulation (cross tab provides number of cases and this) -relative frequencies, computed by dividing the frequency of cases in a particular category by the total number of cases and then multiplying by 100 -p. 338
Marginal distributions
-discussed in relation to cross-tabulation -the summary distributions in the margins of a cross-tabulation that correspond to the frequency distribution of the row variable and of the column variable; ex: family income, distribution of voting (the labels) -p. 338
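Percentages and marginal distributions come straight from the cell counts of a cross-tab. A sketch with invented counts, percentaging within rows:

```python
# Toy cross-tab (hypothetical counts): rows = income, columns = voting
table = {"low income": {"voted": 40, "did not": 60},
         "high income": {"voted": 70, "did not": 30}}

# Marginal distributions: the row and column totals in the table's margins
row_marginals = {row: sum(cells.values()) for row, cells in table.items()}
col_marginals = {}
for cells in table.values():
    for col, count in cells.items():
        col_marginals[col] = col_marginals.get(col, 0) + count

# Percentages: cell frequency / row total * 100 (relative frequencies)
percents = {row: {col: count / row_marginals[row] * 100
                  for col, count in cells.items()}
            for row, cells in table.items()}
print(row_marginals, col_marginals, percents["low income"]["voted"])
```

Percentaging within categories of the independent variable (here, rows) makes the comparison across groups meaningful even when group sizes differ.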
Monotonic
-discussed in relation to cross-tabulations, patterns -a pattern of association in which the value of cases on one variable increases or decreases fairly regularly across the categories of another variable -p. 342
Cost-benefit analysis
-discussed in relation to efficiency analysis, which is a type of question focused on in evaluation research -a type of evaluation research that compares program costs with the economic value of program benefits -requires that the analyst identify whose perspective will be used to determine what can be considered a benefit rather than a cost (clients will have a different perspective than taxpayers or program staff) -some anticipated impacts of the program are considered a cost to one group and a benefit to another, whereas some are not relevant to one group -must be able to make some type of estimation of how clients benefited from the program; normally, this will involve a comparison of some indicators of client status before and after clients received program services or between those who received services and a comparable group that did not -compares program costs to economic benefits; ex: does the switch from paper-based to electronic health records lower health care expenditures? -ex: therapeutic communities (TC) cost-benefit analysis of homeless and mentally ill individuals assigned to a TC or treatment as usual; found that the TC program has a substantial benefit relative to its costs -p. 503; Mod. 9 PP
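The comparison of program costs with the economic value of benefits reduces to simple arithmetic once the dollar values are estimated. The figures below are invented, not from the TC study:

```python
# Hypothetical per-client cost-benefit comparison (all numbers illustrative)
program_cost = 20_000  # e.g., cost of a therapeutic community per client
benefits = {           # estimated economic value of program outcomes per client
    "reduced hospital use": 15_000,
    "increased earnings": 12_000,
}

total_benefit = sum(benefits.values())
net_benefit = total_benefit - program_cost
benefit_cost_ratio = total_benefit / program_cost  # > 1 means benefits exceed costs

print(net_benefit, round(benefit_cost_ratio, 2))
```

Note that which items count as benefits vs. costs depends on whose perspective the analyst adopts (clients, taxpayers, or program staff).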
Statistical significance
-discussed in relation to inferential statistics -the mathematical likelihood that an association is due to chance, judged by a criterion set by the analyst (often that the probability is less than 5 out of 100 or p < .05) -when the analyst feels reasonably confident (at least 95%) that an association was not due to chance, it is said that the association is statistically significant -an association is less likely to appear on the basis of chance in a larger sample than in a smaller one; can have a statistically significant finding, but the association is too weak to be substantively significant -p. 344-345
Quartiles
-discussed in relation to interquartile range, a measure of variation -the points in a distribution corresponding to the first 25% of the cases, the first 50% of the cases, and the first 75% of the cases -p. 333
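The three quartile points, and the interquartile range built from them, can be computed directly with the standard library (the data values here are arbitrary):

```python
import statistics

# Quartiles split an ordered distribution at 25%, 50%, and 75% of the cases
data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]

q1, q2, q3 = statistics.quantiles(data, n=4, method="inclusive")
iqr = q3 - q1  # interquartile range: the spread of the middle 50% of cases

print(q1, q2, q3, iqr)
```

The second quartile is the median; the IQR is a measure of variation that, unlike the range, is not distorted by extreme values.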
Subtables
-discussed in relation to intervening and extraneous variables in quantitative data analysis -tables describing the relationship between two variables within the discrete categories of one or more other control variables -ex: the first sub-table is the income-voting crosstab for only those respondents who believe that people can be trusted, while the second is for respondents who believe that people cannot be trusted; if trust in ordinary people intervened in the income-voting relationship, then the effect of controlling for this third variable would be to eliminate, or at least substantially reduce, this relationship - the distribution of voting would be the same for every income category in both subtables; this was not the case -p. 346
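Building subtables amounts to splitting the cases by the control variable and cross-tabbing within each category. A sketch with a handful of invented respondents:

```python
# Toy three-variable elaboration: income vs. voting, controlling for trust.
# Respondent data are hypothetical: (income, voted, trusts_people)
respondents = [
    ("low", True, True), ("low", False, True), ("high", True, True),
    ("low", False, False), ("high", True, False), ("high", False, False),
]

subtables = {}
for income, voted, trusts in respondents:
    sub = subtables.setdefault(trusts, {})               # one subtable per trust category
    cell = sub.setdefault(income, {True: 0, False: 0})   # income-voting cells
    cell[voted] += 1

# If trust intervened fully in the income-voting relationship, voting would
# no longer vary by income *within* each subtable.
print(subtables[True])
print(subtables[False])
```

Comparing the income-voting association inside each subtable against the original bivariate table is what distinguishes an intervening variable from a spurious relationship.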
Secondary data
-previously collected data that are used in a new analysis -the most common sources are social science surveys and data collected by government agencies, often with survey research methods -also possible to reanalyze data that have been collected in experimental studies or with qualitative methods -even a researcher's reanalysis of data that he or she collected previously qualifies as secondary analysis if it is employed for a new purpose or in response to a methodological critique -why consider this data?: -1. Data collected in previous investigations are available for use by other social researchers on a wide range of topics -2. Available data sets often include many more measures and cases and reflect more rigorous research procedures than another researcher will have the time or resources to obtain in a new investigation -3. Much of the groundwork involved in creating and testing measures with the data set has already been done -4. Most important, most funded social science research projects collect data that can be used to investigate new research questions that the primary researchers who collected the data did not consider -source examples include: Bureau of Labor Statistics, US Census Bureau, Inter-university Consortium for Political and Social Research -use of pre-existing data to answer a question, which could be evaluation research -ex: Health Information National Trends Survey, which you saw for Activity #4 and read articles using it for Activity #3 -p. 522; Mod. 9 PP
Qualitative vs. quantitative data analysis
-qualitative (as compared to quantitative) -a focus on meanings rather than on quantifiable phenomena -collection of many data on a few cases rather than few data on many cases -study in depth and detail without predetermined categories or directions, rather than emphasis on analyses and categories determined in advance -conception of the researcher as an instrument, rather than as the designer of objective instruments to measure particular variables -sensitivity to context rather than seeking universal generalizations -attention to the impact of the researchers' and others' values on the course of the analysis rather than presuming the possibility of value-free inquiry -a goal of rich descriptions of the world rather than measurement of specific variables -p. 418-419
Secondary data analysis
-the method of using preexisting data in a different way or to answer a different research question than intended by those who collected the data -secondary data - previously collected data that are used in a new analysis -the most common sources are social science surveys and data collected by government agencies, often with survey research methods -also possible to reanalyze data that have been collected in experimental studies or with qualitative methods -even a researcher's reanalysis of data that he or she collected previously qualifies as secondary analysis if it is employed for a new purpose or in response to a methodological critique -why consider this data?: -1. Data collected in previous investigations are available for use by other social researchers on a wide range of topics -2. Available data sets often include many more measures and cases and reflect more rigorous research procedures than another researcher will have the time or resources to obtain in a new investigation -3. Much of the groundwork involved in creating and testing measures with the data set has already been done -4.
Most important, most funded social science research projects collect data that can be used to investigate new research questions that the primary researchers who collected the data did not consider -has become the research method used by many contemporary social scientists to investigate important research questions -source examples include: Bureau of Labor Statistics, US Census Bureau, Inter-university Consortium for Political and Social Research -uses secondary/pre-existing data to answer a research question -most common sources of secondary data are social science surveys and data collected by government agencies -many datasets are posted on the web, some universities maintain data archives -if you receive federal funding for your research study, you will be required to share your data publicly (after you remove any information that could be used to identify respondents) -p. 522; Mod. 9 PP
Secondary data - ethical issues
-subject confidentiality is a concern; whenever possible, all information that could identify individuals should be removed from the records to be analyzed so that no link is possible to the identities of subjects (if information cannot totally be removed, then the ICPSR restricts access to that data and requires agreement to confidentiality) -IRB has the responsibility to decide whether they need to review and approve proposals for secondary data -data quality is always a concern; trade-off between ability to use particular data and the specific hypotheses that can be tested; if data is not measured adequately in a secondary source, the study might have to be abandoned until a more adequate source of data is found -political concerns (ex: how are race and ethnicity coded in a census?) -concerns with research across national boundaries (different data collection and definitions) -ethical obligation to cite the original, principal investigators, as well as the data source, such as ICPSR -p. 544-547
Elaboration analysis
-the process of introducing a third variable into an analysis to better understand - to elaborate - the bivariate (two-variable) relationship under consideration; additional control variables also can be introduced -all three uses for three-variable cross-tabulation (identifying an intervening variable, testing a relationship for spuriousness, and specifying the conditions for a relationship) are aspects of this -p. 345
Inter-University Consortium for Political and Social Research (ICPSR)
-the academic consortium that archives data sets online from major surveys and other social science research and makes them available for analysis by others -at the University of Michigan -the premier source of secondary data useful to social science researchers -founded in 1962 and now includes more than 640 colleges and universities and other institutions throughout the world -archives the most extensive collection of social science data sets in the US outside the federal government: more than 7,990 studies in over 500,000 files from 130 countries from various sources -survey data sets obtained in the US and other countries that are stored in ICPSR provide data on a wide range of topics; ex: attitudes, consumer expectations, military interventions -census data from other nations are also available; UN data, international population data -data sets are available for downloading directly from the website -data sets obtained from government sources are available directly to the general public, but many other sets are available only to individuals at the colleges and universities around the world that have paid the required fees -availability of some data sets is restricted because of confidentiality issues and to use them, researchers must sign a contract -can search data archives for a specific topic, studies, investigators, related literature, specific variables, etc.
-can "analyze online"; allows you to immediately inspect the distributions of responses to each question in a survey and examine the relation between variables, without having any special statistical programs -also catalogs reports and publications containing analyses that have used ICSPR data sets; provides an excellent starting point for the literature search that should precede a secondary analysis -you shouldn't stop your review of the literature with the sources listed on the ICSPR site; conduct a search in SocINDEX or another bibliographic database to learn about related studies that used different databases -p. 525, 529-533
Qualitative data analysis - an art
-the process of qualitative data analysis is described by some as involving as much "art" as science - as a "dance" -three different modes of reading the text within the dance of qualitative data analysis: -1. When the researcher reads the text literally, she is focused on its literal content and form, so the text "leads" the dance -2. When the researcher reads the text reflexively, she focuses on how her own orientation shapes her interpretations and focus; the researcher leads the dance -3. When the researcher reads the text interpretively, she tries to construct her own interpretation of what the text means -the qualitative data analyst reports on her notes from observing or interviewing, interprets those notes, and considers how she reacts to the notes; these processes emerge from reading the notes and continue while she is editing the notes and deciding how to organize them, in an ongoing cycle -greater leeway in analyzing qualitative data than in quantitative data because qualitative data analysis is viewed as an "art"; you pick up a paintbrush, and what you draw is likely very different from what someone else draws; how you code data could be very different from how someone else codes it -say you interviewed a doctor and asked: "What do you think patients should do to protect their personal health information?" and the doctor responded: "Patients shouldn't worry too much. Their doctors are here to help and keep their information private." -How would you analyze the doctor's response? Three main modes to analyze the doctor's response: -1. Literally - You asked for a solution to a problem, and you got one. You code this as an example of a solution to a problem. Apply a code to the text, let's say it's called "solution." -2. Interpretively - You wonder, why would the doctor say this? That's what you code the text as. Say you thought the doctor said this because s/he believes doctors are infallible. Apply a code to the text, let's say "doctor as hero." -3.
Reflexively - You ask yourself, "why am I (the researcher) interpreting this response in this way?" Why am I jumping to the conclusion that this doctor believes doctors are infallible, even though the doctor doesn't literally say this? That's what you code the text as. Say I believe I jumped to this conclusion because it reminded me of my own need to view doctors as "heroes." I, a patient, want my doctor to protect my data. Then I code the text as "reassuring patient." -these codes are just one of many options; for example, maybe instead of interpreting the doctor's response as indicating doctors are infallible, you may have interpreted it as an instance where the doctor was patronizing patients. You would then apply a code, say "patronizing," to the text; you also don't have to stick to one mode, but could use two or all three; the result is several different analyses of the same data -techniques that are common in most qualitative data analyses can move us away from the "art" and closer to what we'd expect to see in science; these include: documentation; organization/categorization/condensation of the data into concepts; examination and display of relationships; corroboration/legitimization of conclusions; reflection on the researcher's role -p. 416-417; Mod. 10 PP
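One way to keep the three readings of the same passage side by side is a simple code map. The passage and codes follow the doctor example above; the data structure itself is just a sketch, not a prescribed tool.

```python
# Storing multiple codings of one passage, one entry per reading mode
# (codes follow the doctor example; the structure is illustrative)
passage = ("Patients shouldn't worry too much. Their doctors are here to "
           "help and keep their information private.")

codings = {
    "literal": ["solution"],              # you asked for a solution, you got one
    "interpretive": ["doctor as hero"],   # why would the doctor say this?
    "reflexive": ["reassuring patient"],  # why did *I* read it that way?
}

# The same data can carry several analyses at once
all_codes = [code for codes in codings.values() for code in codes]
print(all_codes)
```

Keeping every mode's codes attached to the same passage makes it easy to compare analyses later, e.g. in a matrix.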
Field researcher
-a researcher who uses qualitative methods to conduct research in the field -most adopt a role that involves some active participation in the setting -p. 378
Anonymity
-discussed in relation to ethical issues in surveys -a provision of research, in which no identifying information is recorded that could be used to link respondents to their responses -not many surveys can provide this -this does not allow follow-up attempts to encourage participation by nonrespondents and prevents panel designs that measure change through repeated surveys of the same people -in-person surveys can rarely be anonymous, but phone surveys meant to collect information at one point in time (like political polls) could be completely anonymous -when no future follow-up is desired, group-administered surveys can be anonymous -to provide this in mail surveys, the researcher should omit identifying codes from the questionnaire and could include a self-addressed stamped postcard -p. 304
Feedback
-a basic of the evaluation research model -information about service delivery system outputs, outcomes, or operations that can guide program input -variation in outputs and outcomes, in turn, influences the inputs to the program -ex: if too many negative side effects result from a trial medication, the trials may be limited or terminated; if a program does not appear to lead to improved outcomes, clients may go elsewhere -p. 491
Inputs
-a basic of the evaluation research model -the resources, raw materials, clients, and staff that go into a program -clients, customers, students, or some other persons or units - cases - enter the program as this -ex: students begin a new school program, welfare recipients may enroll in a new job training program, or crime victims may be sent to a victim advocate -the resources and staff a program requires are also program inputs -p. 491
Evaluation research
-a type of research that addresses the question: do policies actually change society (and society could be SOCIETY or little "societies" like your school or workplace)? -ex: Does the use of electronic health records improve or worsen patient-physician interaction? -five types of projects: -1. Needs Assessment -2. Evaluability Assessment -3. Process Evaluation -4. Impact Analysis -5. Efficiency Analysis -Mod. 9 PP
Adaptive research design
-discussed in relation to a main feature of qualitative research -a research design that develops as the research progresses -each component of the design may need to be reconsidered or modified in response to new developments or to changes in some other component; the activities of collecting and analyzing data, developing and modifying theory, elaborating or refocusing the research questions, and identifying and eliminating validity threats are usually all going on more or less simultaneously, each influencing all of the others -p. 370
Confidentiality
-discussed in relation to ethical issues in surveys -a provision of research, in which identifying information that could be used to link respondents to their responses is available only to designated research personnel for specific research needs -is most often the primary focus of ethical concern in survey research -no one but research personnel should have access to information that could be used to link respondents to their responses; even then, respondents should be identified on the questionnaires only with numbers, and the names that correspond to these numbers should be kept in a safe location -follow-up mailings or contact attempts that require linking ID numbers with names and addresses should be carried out by trustworthy assistants under close supervision -encryption technology should be used for web surveys -p. 304
Contingent question
-discussed in relation to survey questions, avoid confusing phrasing -a question that is asked of only a subset of survey respondents -ex: if you answered yes to this question, go on to this one; ex: if you answered yes to current employment, then how satisfied are you with your current job? -p. 262
Skip patterns
-discussed in relation to survey questions, avoid confusing phrasing -the unique combination of questions created in a survey by filter questions and contingent questions -ex: if you answered no to this question, please skip to this # -should be indicated clearly with an arrow or mark in the questionnaire -p. 262
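A skip pattern is essentially branching logic: the filter question determines whether the contingent question is asked at all. The question wording below is hypothetical, modeled on the employment example above.

```python
# Sketch of a skip pattern: a filter question gates a contingent question
# (question texts are hypothetical)
def questions_to_ask(employed):
    asked = ["Are you currently employed? (yes/no)"]  # filter question
    if employed == "yes":
        # contingent question, asked only of the employed subset
        asked.append("How satisfied are you with your current job?")
    # respondents answering "no" skip ahead, following the arrow on the form
    asked.append("What is your age?")  # everyone answers this
    return asked

print(len(questions_to_ask("yes")), len(questions_to_ask("no")))  # 3 vs 2
```

On a paper questionnaire, the same branch is communicated with a clearly marked arrow or skip instruction; in a computer-assisted survey, the software enforces it automatically.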
Thick description
-discussed in relation to case studies, a basic of qualitative research; central to much of the case study research is the goal of creating this -a rich description that conveys a sense of what it is like to experience that setting from the standpoint of the natural actors in that setting -p. 373
Bipolar response options
-discussed in relation to survey questions - maximize the utility of response categories -response choices to a survey question that include a middle category and parallel responses with positive and negative valence (can be labeled or unlabeled) -ex: very comfortable, mostly comfortable, slightly comfortable, feel neither comfortable nor uncomfortable, slightly uncomfortable, mostly uncomfortable, very uncomfortable -seven categories will capture most variation on bipolar ratings -p. 265
Double negative
-discussed in relation to survey questions, avoid confusing phrasing -a question or statement that contains two negatives, which can muddy the meaning of a question; ex: do you disagree that there should not be a tax increase? -ex: does it seem possible or does it seem impossible to you that the Nazi extermination of the Jews never happened? -do not use this -p. 261
Intensive (in-depth) interviewing
-one of three distinctive qualitative research techniques -a qualitative method that involves open-ended, relatively unstructured questioning in which the interviewer seeks in-depth information on the interviewee's feelings, experiences, and perceptions -often used in conjunction with participant observation -a qualitative method of finding out about people's experiences, thoughts, and feelings -commitment to learning about people in depth and on their own terms and in the context of their situations -relies on open-ended questions and may allow the specific content and order of questions to vary from one interviewee to another -expects respondents to answer questions in their own words -goal is to develop a comprehensive picture of the interviewee's background, attitudes, and actions, in his or her own terms; to listen to people as they describe how they understand the worlds in which they live and work -similar to participant observation, it engages researchers more actively with subjects than standard research does -researchers must listen to lengthy explanations, ask follow-up questions tailored to the preceding answers, and seek to learn about interrelated belief systems or personal approaches to things rather than measure a limited set of variables -are often much longer than standardized interviews, sometimes as long as 15 hours, conducted in several different sessions -becomes more like a conversation between partners than an interview between a researcher and a subject; a conversation with a purpose -actively try to probe understandings and engage interviewees in a dialogue about what they mean by their comments -to prepare for the interview, the interviewer should learn in advance about the setting to be studied; discussing with key informants, inspecting written documents, and reviewing your own feelings about the setting can help -follows a preplanned outline of topics; may begin with a few simple questions that gather background information while building
rapport; these are often followed by a few general grand tour questions - a broad question at the start of an interview that seeks to engage the respondent in the topic of interest (meant to elicit lengthy narratives) -some projects may use relatively structured interviews, particularly when the focus is on developing knowledge about prior events or some narrowly defined topic -more exploratory projects, particularly those aiming to learn about interviewees' interpretations of the world, may let each interview flow in a unique direction in response to the interviewees' experiences and interests -interviewers must adapt nimbly throughout the interview, paying attention to nonverbal cues, expressions with symbolic value, and the ebb and flow of the interviewees' feelings and interests; has to be free to follow the data where they lead -random selection is rarely used to select respondents for these, but the selection method still must be considered carefully -if interviewees are selected in a haphazard manner, as by speaking just to those who happen to be available at that time and place, the interviews are likely to be of less value than when a more purposive selection strategy is used -researchers should try to select interviewees who are knowledgeable about the subject of the interview, who are open to talking, and who represent the range of perspectives -selection of new interviewees should continue, if possible, at least until the saturation point - the point at which subject selection is ended in intensive interviewing, when new interviews seem to yield little additional information -as new issues are uncovered, additional interviewees may be selected to represent different opinions about these issues -you ask several open-ended questions and often come up with follow-up questions you didn't think about until the respondent said something. -can be in-person, over the phone, email, Skype, messenger, etc. -when do you stop? 
When you reach the saturation point, meaning you keep hearing the same types of responses and are learning nothing new. -saturation usually occurs after 20-30 interviews*** -why so few for saturation? Remember, sociology (and other social sciences) wouldn't exist if people weren't predictable. -p. 376, 390-392; Mod. 8 PP
Survey research
-research in which information is obtained from a sample of individuals through their responses to questions about themselves or others -owes its popularity to three features: versatility, efficiency, and generalizability -1. Versatility -a well-designed survey can enhance our understanding of just about any social issue -there is hardly any topic of interest to social scientists that has not been studied at some time with survey methods -computer technology has made surveys even more versatile; computers can be programmed so that different types of respondents are asked different questions; short videos or pictures can be presented to respondents on a computer screen; respondents may be given laptops to record their answers to sensitive personal questions, such as about illegal activities, so that the interviewer will not know what they said -2. Efficiency -are also popular because data can be collected from many people at relatively low cost and, depending on the research design, relatively quickly -cost; one-shot telephone interviews can cost as little as $30 per subject; large mailed surveys cost even less, about $10 to $15 per potential respondent, although this can increase when intensive follow-up efforts are made; surveys of the general population using personal interviews are much more expensive, with costs ranging from about $100 per potential respondent for studies in a limited geographic area to $300 or more when lengthy travel or repeat visits are needed to connect with respondents -efficient because many variables can be measured without substantially increasing the time or cost; mailed questionnaires can include as many as 10 pages of questions before respondents begin to balk; in-person interviews can be much longer; the upper limit for phone surveys seems to be about 45 minutes -these efficiencies can be attained only in a place with a reliable communications infrastructure; need a reliable postal service for mail surveys (not always possible in certain areas of
the US); phone surveys have been very effective in the US where 96% of households have phones -modern technology has been a mixed blessing for survey efficiency; the internet makes it easier to survey some populations, but it can leave out important segments; ex: caller ID and answering machines make it easy to screen out unwanted calls, but make it harder to reach people in phone surveys; a growing number of people only use cell phones/no longer use landlines which results in researchers needing to spend more time and money in reaching potential respondents -3. Generalizability -survey methods lend themselves to probability sampling from large populations and thus it is appealing when sample generalizability is a central goal -survey research is often the only means available for developing a representative picture of the attitudes and characteristics of a large population -surveys are also the method of choice when cross-population generalizability is a key concern, because they allow a range of social contexts and subgroups to be sampled; consistency of relationships can be examined across subgroups -new technologies that are lowering the overall rate of response to phone surveys are also making it more difficult to obtain generalizable samples; ex: 13% of US households do not use the internet at home or work, but these people tend to be elderly, poor, rural, and have no more than a high school education compared to those who do use the internet; ex: about 90% of US households have at least one cell phone, but among the half of US households that have only cell phone service, the adults are likely to be younger, renters, Hispanic, and poor, while those in households with landline phones only are more likely to be elderly; as a result, although surveys of the general population can include only cell phone numbers, those that target particular subgroups may need to include landlines -another challenge in survey research is the growing foreign-born population in the US (13% in
2014); these individuals often require foreign-language versions of survey forms; otherwise, survey findings may not generalize to the entire population -p. 255-257
Qualitative research - ethical issues
-when a participant observer becomes an accepted part of a community or other group, natural social relations and sentiments will develop over time despite initial disclosure of the researcher's role -when a qualitative interviewer succeeds in gaining rapport, interviewees are likely to feel they are sharing information with a friend rather than a researcher -when a focus group leaves participants feeling they are in a natural conversation with acquaintances, spontaneous comments are likely to be made without consideration of their being recorded -there is a certain unavoidable deception that comes from trying to have both the researcher and informant forget that what is going on is not a normal, natural exchange but research -the natural, evolving quality of much interaction between qualitative researchers and participants can make it difficult in many situations to stipulate in advance the procedures that will be followed to ensure ethical practice -1. Voluntary Participation -the first step in ensuring that subjects are participating in a study voluntarily is to provide clear information about the research and the opportunity to decline to participate -even when such information is provided, the ongoing interaction between the researcher and the participant can blur the meaning of voluntary participation; this problem can be reduced when interviews are conducted in multiple sessions and participants are told each time about their opportunity to withdraw consent -difficult in participant observation studies, particularly covert participation, because it offers no way to ensure that participation by the subjects is voluntary -interpreting the standard of voluntary participation can be difficult even when the researcher's role is more open; much field research would be impossible if participant observers were required to request permission of every group or setting, no matter how minimal their involvement -some researchers recommend adherence to a process consent -
an interpretation of the ethical standard of voluntary consent that allows participants to change their decision about participating at any point by requiring that the researcher check with participants at each stage of the project about their willingness to continue the project; ex: reading excerpts from a book about the participants to them to check on their consent at that point -information about the research that is provided to participants must be tailored to their capacities, interests, and needs if they are to be able to give truly informed consent -possible to determine if participation is voluntary in interview or focus group studies, but this can be a problem in observational studies -2. Subject well-being -every field researcher should consider carefully before beginning a project how to avoid harm to subjects -direct harm to the reputations or feelings of particular individuals is a primary concern; can be avoided somewhat by maintaining the confidentiality of research subjects -the well-being of the group or community studied as a whole should also be considered in relation to publication and reporting -problems are less likely in intensive interviewing and focus groups, but researchers using these methods should try to identify negative feelings both after interviews and after reports are released and then help distressed subjects cope with them via debriefing or referrals -3.
Identity disclosure -current ethical standards require informed consent of research subjects, and most would argue that this standard cannot be met in any meaningful way if researchers do not disclose fully their identity -less-educated subjects may not readily comprehend what a research project is or be able to weigh the possible consequences of the research for themselves -the intimacy of the researcher-participant relationship in much qualitative research makes it difficult to inject reminders about the research into ongoing social interaction -a problem with disclosing identity is that it can then result in changes in attitude toward the researcher -digital ethnographers have concerns with identity disclosure by participants in online communities, who can easily misrepresent their own identity -internet-based researchers can also violate the principle of identity disclosure when recording and analyzing text without identifying themselves as researchers -4. Confidentiality -field researchers normally use fictitious names for the people in their reports, but this does not guarantee confidentiality; other elements in the research may help identify the participant, so researchers should make every effort to expunge possible identifying material and alter unimportant aspects when necessary -no field research project should begin if some participants clearly will suffer serious harm by being identified in project publications -focus groups present a particular challenge because the researcher cannot guarantee that participants will not disclose information that others would like to be confidential; can be reduced by reviewing do's and don'ts with participants, but focus groups should otherwise not be used for very sensitive topics -digital ethnography research that includes distinctive quotes based on online text can be searched to locate that original text; distinctive online names that are changed could still be identified through other factors -challenge is how to maintain confidentiality
and still report the relevant information from the study -5. Appropriate boundaries -maintaining appropriate boundaries between the researcher and participants is a uniquely important issue in qualitative research projects that creates challenges for identity disclosure, subject well-being, and voluntary participation -must reduce the distinction between the researcher and participants by building rapport with those they plan to interview by expressing interest in their concerns and conveying empathy for their situation -involvements can make it difficult to avoid becoming an advocate for the research participants rather than a sympathetic observer -value of maintaining ethical professional boundaries is a two-way street -6. Researcher safety -research "in the field" should not begin until any potential risks to researcher safety have been evaluated -safety needs to be considered at the time of designing the research, not as an afterthought on arriving in the research site -being realistic about evaluating risk does not mean simply accepting misleading assumptions about unfamiliar situations or communities -p. 400-406; Mod. 8 PP
Qualitative research - features
-1. Collection primarily of qualitative rather than quantitative data; qualitative methods emphasize observations about natural behavior and artifacts that capture social life as participants experience it, rather than in categories the researcher predetermines -2. A focus on previously unstudied processes and unanticipated phenomena; previously unstudied attitudes and actions cannot be adequately understood with a structured set of questions or within a highly controlled experiment; qualitative methods have their greatest appeal when we need to explore new issues, investigate hard-to-study groups, or determine the meaning people give to their lives and actions -3. Exploratory research questions, with a commitment to inductive reasoning; qualitative researchers typically begin their projects seeking not to test preformulated hypotheses but to discover what people think, how they act, and why, in some social setting; only after many observations do qualitative researchers try to develop general principles to account for their observations; however, researchers do not begin research with a blank slate (tabula rasa), but recognize that researchers bring prior understandings and perspectives to their research that cannot be fully erased; must come into research with open eyes and a critical awareness of our own expectations -4. Sensitivity to the subjective role of the researcher (reflexivity); reflexivity = sensitivity of and adaptation by the researcher to his or her own influence in the research setting; qualitative researchers recognize that their perspective on social phenomena will reflect in part their own background and current situation; where the researcher is coming from can affect what they find; some researchers believe that the goal of developing a purely objective view of the social world is impossible, but they discuss in their publications their own feelings about what they have studied so that others can consider how these feelings affected their
findings -5. An orientation to social context, to the interconnections between social phenomena rather than to their discrete features; the context of concern may be a program or organization, community, or a broader social context -6. A focus on human subjectivity, on the meanings that participants attach to events and that people give to their lives; through life stories, people account for their lives; the themes people create are the means by which they interpret and evaluate their life experiences and attempt to integrate these experiences to form a self-concept -7. Use of idiographic rather than nomothetic causal explanation; with its focus on particular actors and situations and the processes that connect them, qualitative research tends to identify causes as particular events embedded within an unfolding, interconnected action sequence; the language of variables and hypotheses appears only rarely in the qualitative literature -8. Acceptance - by some qualitative researchers - of a constructivist philosophy; constructivism = a methodology based on questioning belief in an external reality; emphasizes the importance of exploring the way in which different stakeholders in a social setting construct their beliefs; believe that social reality is socially constructed and that the goal of social scientists is to understand what meanings people give to reality, not to determine how reality works apart from these constructions; rejects the positivist belief that there is a concrete, objective reality that scientific methods help us understand; belief that people construct an image of reality based on their own preferences and prejudices and their interactions with others and that this is as true of scientists as it is of everyone else in the social world; this means that we can never be sure that we have understood reality properly, that objects and events are understood by different people differently, and those perceptions are the reality that social science
should focus on; emphasizes that different stakeholders in a social setting construct different beliefs; give particular attention to the different goals of researchers and other participants and may seek to develop a consensus among participants about how to understand the focus of inquiry; may use an interactive research process, in which a researcher begins an evaluation in some social setting by identifying the different interest groups in that setting; hermeneutic circle = a representation of the dialectical process in which the researcher obtains information from multiple stakeholders in a setting, refines his or her understanding of the setting, and then tests that understanding with successive respondents; researcher interviews each respondent to learn how they construct their thoughts and feelings about the topic of concern and then gradually tries to develop a shared perspective on the problem being evaluated -9. Adaptive research design, in which the design develops as the research progresses; adaptive research design - a research design that develops as the research progresses; each component of the design may need to be reconsidered or modified in response to new developments or to changes in some other component; the activities of collecting and analyzing data, developing and modifying theory, elaborating or refocusing the research questions, and identifying and eliminating validity threats are usually all going on more or less simultaneously, each influencing all of the others -Reflexive research design = the design is changed or modified as the project progresses (definitely not happening in experiments and surveys after piloting is complete) -focus on text = "data" tends to consist of text, rather than numbers (could have text in quantitative research, but text is the emphasis in qualitative research) -different question design = questions tend to be open-ended, require a detailed response from the respondent that cannot be anticipated, and are
often complex (can be difficult to compare responses, which quantitative research emphasizes) -a focus on social context = the fact that phenomena are mixed together in complex ways is often treated as a problem in quantitative research (think back to confounders and statistical control) but is embraced in qualitative research -p. 367-370; Mod. 8 PP
Outcomes
-a basic of the evaluation research model -the impact of the program process on the cases processed -can range from improved test scores or higher rates of job retention to fewer criminal offenses and lower rates of poverty -any social program is likely to have multiple of these, some intended and some unintended, some positive and some negative -p. 491
Forced-choice questions
-discussed in relation to survey questions - minimize fence-sitting and floating -closed-ended survey questions that do not include 'don't know' as an explicit response; ex: most political pollsters use this -despite the prevalence of floating, people often have an opinion but are reluctant to express it -forcing them to choose a response could overlook the existence of an actual neutral zone in the population -p. 268; Mod. 7 PP
Reactive effects
-discussed in relation to an overt/complete observer, which is one of four roles researchers can adopt in participant observation -the changes in individual or group behavior that result from being observed or otherwise studied -it is not natural in most social situations for someone to be present who will record his or her observations for research purposes, so individuals may alter their behavior -the researcher is even more likely to have an impact when the social setting involves few people or if observing is unlike the usual activities in the setting -differences between the observer and those being observed also increase the likelihood of this; ex: a researcher observing children on a school playground; the children viewed her as an adult authority figure, and she was pressured to participate in their activities -however, in most situations, even overt observers find that their presence seems to be ignored by participants after a while and has no apparent impact on social processes -p. 378
Saturation point
-discussed in relation to intensive/in-depth interviewing -the point at which subject selection is ended in intensive interviewing, when new interviews seem to yield little additional information -selection of new interviewees should continue, if possible, at least until this point is reached -p. 392
Gatekeeper
-discussed in relation to participant observation, entering the field -a person in a field who can grant researchers access to the setting -p. 382
Experience sampling method (ESM)
-discussed in relation to participant observation, sampling -a technique for drawing a representative sample of everyday activities, thoughts, and experiences; participants carry a pager and are beeped at random times over several days or weeks; on hearing the beep, participants complete a report designed by the researcher -when field studies do not require ongoing, intensive involvement by researchers in the setting, this can be used -the experiences, thoughts, and feelings of a number of people are sampled randomly as they go about their daily activities -carry a pager and fill out reports when they are beeped -although a powerful tool for field research, it is still limited by the need to recruit people to carry pagers -the generalizability of these findings relies on the representativeness, and reliability, of the persons who cooperate in the research -p. 386
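The random-signaling idea behind ESM can be sketched in a few lines of code; this is only an illustration (the waking-hours window, number of beeps per day, and function name are made-up assumptions, not from the text):

```python
import random
from datetime import datetime, timedelta

# Draw an ESM-style signaling schedule: a few "beeps" at random times
# per day, within a waking-hours window, over the study period.
def beep_schedule(start_date, days, beeps_per_day, seed=0):
    rng = random.Random(seed)  # fixed seed so the schedule is reproducible
    schedule = []
    for day in range(days):
        for _ in range(beeps_per_day):
            # Random minute within a 12-hour window starting at 9:00.
            minute = rng.randrange(12 * 60)
            schedule.append(start_date + timedelta(days=day, hours=9, minutes=minute))
    return sorted(schedule)

beeps = beep_schedule(datetime(2024, 3, 1), days=7, beeps_per_day=5)
print(len(beeps))  # 35 signals over the week
```

Because the beep times are drawn at random rather than chosen by the researcher or the participant, the reports approximate a probability sample of each participant's everyday moments.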
Interactive voice response (IVR)
-discussed in relation to phone surveys -a survey in which respondents receive automated calls and answer questions by pressing numbers on their touch-tone phones or speaking numbers that are interpreted by computerized voice recognition software -allows even greater control over interviewer-respondent interaction in phone surveys than computer-assisted telephone interviews -can also record verbal responses to open-ended questions for later transcription -have been successful with short questionnaires and when respondents are motivated to participate, but people may be put off by the impersonal nature of the survey -p. 292-293
Computer-assisted telephone interviews (CATI)
-discussed in relation to phone surveys -a telephone interview in which a questionnaire is programmed into a computer, along with relevant skip patterns, and only valid entries are allowed; incorporates the tasks of interviewing, data entry, and some data cleaning -procedures in phone surveys can be standardized more effectively, quality control maintained, and processing speed maximized when they use this -p. 291
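The two CATI features above - allowing only valid entries and programming skip patterns - can be sketched as follows; this is a hypothetical illustration, not actual CATI software (the questions and the `run_interview` function are invented for the example):

```python
# A CATI-style script: each question accepts only valid entries,
# and a filter question routes respondents past follow-ups
# that do not apply to them.
def run_interview(answers):
    """answers: dict mapping question id to the respondent's keyed entry."""
    record = {}

    # Only valid entries are allowed (here: 1 = yes, 2 = no).
    employed = answers["employed"]
    if employed not in (1, 2):
        raise ValueError("invalid entry; re-ask the question")
    record["employed"] = employed

    # Skip pattern: hours worked is asked only of the employed subset.
    if employed == 1:
        record["hours_per_week"] = answers["hours_per_week"]
    return record

print(run_interview({"employed": 1, "hours_per_week": 40}))
print(run_interview({"employed": 2}))
```

Because the program enforces the routing and rejects out-of-range entries at the moment of interviewing, data entry and some data cleaning happen as part of the interview itself.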
Filter questions
-discussed in relation to survey questions, avoid confusing phrasing -a survey question used to identify a subset of respondents who then are asked other questions -ex: are you currently employed? -p. 262
Matrix questions
-discussed in relation to surveys, order of questions (some questions may be presented in this way) -a series of survey questions that concern a common theme and that have the same response choices -questions are written so that a common initial phrase applies to each one, which shortens the questionnaire by reducing the number of words that must be used for each question and emphasizes a common theme among the questions and so invites answering each question in relation to other questions in the matrix -ex: How much difficulty do you have...-going up and down stairs? kneeling? hearing? -p. 279
Cognitive interview
-discussed in relation to surveys, refining and testing questions -a technique for evaluating questions in which researchers ask people test questions and then probe with follow-up questions to learn how they understood the question and what their answers mean -ask people to describe what they are thinking when they answer questions, how the respondent understood one or more words in the question, how confusing it was, etc. -no single approach to this is considered most effective -p. 274
Overt/complete observer
-one of four roles researchers can adopt in participant observation -a role in participant observation in which the researcher does not participate in group activities and is publicly defined as a researcher -does not participate, but is defined publicly as a researcher -researcher announces her role as a research observer -reactive effects - the changes in individual or group behavior that result from being observed or otherwise studied; it is not natural in most social situations for someone to be present who will record his or her observations for research purposes, so individuals may alter their behavior; the researcher is even more likely to have an impact when the social setting involves few people or if observing is unlike the usual activities in the setting; differences between the observer and those being observed also increase the likelihood of this; ex: a researcher observing children on a school playground; the children viewed her as an adult authority figure, and she was pressured to participate in their activities -however, in most situations, even overt observers find that their presence seems to be ignored by participants after a while and has no apparent impact on social processes -p. 377-378
Surveys - organization
-there are five basic designs: mailed, group-administered, phone, in-person, and electronic; these can also be combined in mixed-mode surveys -manner of administration: mailed, group, and electronic surveys (a survey that is sent and answered by computer, either through e-mail or on the web) are completed by the respondents themselves; phone and in-person interviews involve the researcher or a staff person asking the questions and recording answers -setting: most mail, electronic, phone, and in-person surveys are intended for completion by one respondent; however, group-administered surveys are the exception, in which a group of respondents complete the survey while the researcher waits -questionnaire structure: most mailed, group, phone, and electronic surveys are highly structured, fixing in advance the content and order of questions and response choices; some types, such as mail, may include open-ended questions; in-person interviews are often highly structured, but they may include many questions without fixed response choices, or the interview may be unstructured -cost: in-person interviews are the most expensive type of survey; phone interviews are much less expensive, and mail is cheaper yet; electronic can be the least expensive -p. 281-282
Survey questions - minimize fence-sitting and floating
-two related problems also stem from people's desire to choose an acceptable answer; there is no uniformly correct solution to these problems, so researchers have to weigh the alternatives in light of the concept to be measured and whatever they know about the respondents: -1. Fence-sitters - survey respondents who see themselves as being neutral on an issue and choose a middle (neutral) response that is offered; may skew the results if you force them to choose between opposites; in most cases, about 10-20% of such respondents will choose an explicit middle, neutral alternative; having the middle option is generally good because it identifies fence-sitters and tends to increase measurement reliability; forcing them to choose a response could overlook the existence of an actual neutral zone in the population. -2. Floaters - survey respondents who provide an opinion on a topic in response to a closed-ended question that does not include a 'don't know' option, but who will choose 'don't know' if it is available; choose a substantive answer when they really don't know or have no opinion; ex: a third of the public will provide an opinion on a proposed law that they know nothing about if they are asked for their opinion on a closed-ended survey question that does not include 'don't know' as an explicit response choice, but 90% will choose the 'don't know' option if it is given; on average, offering an explicit 'don't know' option increases the 'don't know' responses by about a fifth; important to have a 'don't know' option, but this may result in some people taking the easy way out and choosing this answer, so some recommend using both a 'don't know' and a 'no opinion' option, or an open-ended option for respondents to explain their opinions; those doing phone or in-person surveys may get around this issue by recording a noncommittal response if it is offered; people who "don't know" could actually be interesting to capture, especially if the question is about knowledge of a topic -Forced-choice
questions - closed-ended survey questions that do not include 'don't know' as an explicit response; ex: most political pollsters use this; despite the prevalence of floating, people often have an opinion but are reluctant to express it -p. 267-268; Mod. 7 PP
Range
-a measure of variation -the true upper limit in a distribution minus the true lower limit (or the highest rounded value minus the lowest rounded value, plus one) -is equal to the highest value minus the lowest value -often important to report this to identify the whole range of possible values that might be encountered -however, because it can be drastically altered by just one value (outlier - an exceptionally high or low value in a distribution), it does not do a good job of summarizing the extent of variability in the distribution -p. 332-333
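The outlier sensitivity described above is easy to demonstrate with a quick calculation (the numbers and the `value_range` function name are made up for illustration):

```python
# Range = highest value minus lowest value.
def value_range(scores):
    return max(scores) - min(scores)

ages = [19, 21, 22, 23, 25]
print(value_range(ages))  # 25 - 19 = 6

# A single outlier drastically alters the range without changing
# how most of the distribution actually varies.
ages_with_outlier = ages + [78]
print(value_range(ages_with_outlier))  # 78 - 19 = 59
```

One added case multiplies the reported range almost tenfold, which is why the range identifies the span of possible values but summarizes overall variability poorly.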
Case study
-a basic of qualitative research -a setting or group that the analyst treats as an integrated social unit that must be studied holistically and in its particularity -not so much a single method as it is a way of thinking about what a qualitative research project can, or should, focus on -the case may be an organization, a community, a social group, a family, or even an individual -the idea is that the social world functions as an integrated whole; social researchers therefore need to develop deep understanding of particular instances of phenomena (by contrast, quantitative research mistakenly slices and dices reality in a way that obscures how the social world functions) -is the study of the particularity and complexity of a single case, coming to understand its activity within important circumstances; the researcher emphasizes episodes of nuance, the sequentiality of happenings in context, and the wholeness of the individual -central to much of this is the goal of creating a thick description = a rich description that conveys a sense of what it is like to experience that setting from the standpoint of the natural actors in that setting Case Study vs. Ethnography -a study could be one, but not the other. A study could also be both. -1. Case study that isn't an ethnography: -you study the Texas hospital that dealt with the Ebola outbreak. You conduct in-depth interviews with health care professionals and review administrative records. Your research is complete after just a month. -2. Case study that is an ethnography: -you study the Texas hospital that dealt with the Ebola outbreak. You become a patient care technician. You work and eat with others working in the hospital. You take down notes and maybe do in-depth interviews. Your research is complete after a year. -3.
An ethnography that isn't a case study: -you deploy research assistants at multiple hospitals in Texas, one of which happens to be the one that dealt with the Ebola outbreak. Each research assistant becomes a patient care technician. They eat and work with the other workers in the hospital. They take notes and conduct in-depth interviews. Their research is complete after a year. -a netnography (aka digital ethnography) is a type of ethnography. We can go through the same motions we just did and come up with examples of when a case study is a netnography, when it isn't, and when a netnography isn't a case study. -p. 372-373; Mod. 8 PP
Ethnography
-a basic of qualitative research -the study of a culture or cultures that some group of people shares, using participant observation over an extended period -many qualitative researchers are guided by the tradition of this -usually meant to refer to the process by which a single investigator immerses himself or herself in a group for a long time (often one or more years), gradually establishing trust and experiencing the social world as do the participants -can be called naturalistic, because it seeks to describe and understand the natural social world as it is, in all its richness and detail; this goal is best achieved when an ethnographer is fluent in the local language and spends enough time immersed in the setting to know how people live, what they say about themselves and what they actually do, and what they value -there are no particular methodological techniques associated with this, other than just being there; the analytic process relies on the thoroughness and insight of the researcher to tell it like it is in the setting, as he or she experienced it -concern with being as objective as possible -a good ethnography is possible only when the ethnographer learns the subtleties of expression used in a group and the multiple meanings that can be given to statements or acts; also includes some reflection by the researcher on the influence his or her own background has had on research plans and the research setting Case Study vs. Ethnography -a study could be one, but not the other. A study could also be both. -1. Case study that isn't an ethnography: -you study the Texas hospital that dealt with the Ebola outbreak. You conduct in-depth interviews with health care professionals and review administrative records. Your research is complete after just a month. -2. Case study that is an ethnography: -you study the Texas hospital that dealt with the Ebola outbreak. You become a patient care technician. You work and eat with others working in the hospital.
You take down notes and maybe do in-depth interviews. Your research is complete after a year. -3. An ethnography that isn't a case study: -you deploy research assistants at multiple hospitals in Texas, one of which happens to be the one that dealt with the Ebola outbreak. Each research assistant becomes a patient care technician. They eat and work with the other workers in the hospital. They take notes and conduct in-depth interviews. Their research is complete after a year. -a netnography (aka digital ethnography) is a type of ethnography. We can go through the same motions we just did and come up with examples of when a case study is a netnography, when it isn't, and when a netnography isn't a case study -p. 373-374; Mod. 8 PP
Evaluation research - obstacles
-1. Because social programs and the people who use them are complex, evaluation research designs can easily miss important outcomes or aspects of the program process -2. Because the many program stakeholders all have an interest in particular results from the evaluation, researchers can be subject to an unusual level of cross-pressures and demands -3. Because the need to include program stakeholders in research decisions may undermine adherence to scientific standards, research designs can be weakened -4. Because some program administrators want to believe that their programs really work well, researchers may be pressured to avoid null findings or, if they are not responsive, may find their research reports ignored -5. Because the primary audience for evaluation research reports is program administrators, politicians, or members of the public, evaluation findings may need to be overly simplified, distorting the findings -p. 517
Digital ethnography
-a basic of qualitative research -the use of ethnographic methods to study online communities; also termed netnography, cyberethnography, and virtual ethnography -online communities may be formed by persons with similar interests or backgrounds, perhaps to create new social relationships that location or schedules do not permit, or to supplement relationships that emerge in the course of work or school or other ongoing social activities -like physical communities, researchers can study online communities through immersion in the group for an extended period -in some respects, is similar to traditional ethnography; the researcher prepares to enter the field by becoming familiar with online communities and their language and customs, formulating an exploratory research question about social processes or orientations in that setting, and selecting an appropriate community to study -unlike in-person ethnographies, this can focus on communities whose members are physically distant and dispersed -the selected community should be relevant to the research question, involve frequent communication among actively engaged members, and have a number of participants who generate a rich body of textual data -the digital ethnographer's self-introduction should be clear and friendly -must keep both observational and reflective notes, but unlike a traditional ethnographer, can return to review the original data long after it was produced; the data can then be coded, annotated with the researcher's interpretations, checked against new data to evaluate the persistence of social patterns, and used to develop a theory that is grounded in the data -although contact mediated by digital technology is just not the same as direct contact, much of the social world now happens online and as social researchers, we need to investigate it with the full array of methods that have proven useful in investigating other areas of the social world; however, we need to consider the limitations we face if
we try to understand the people who created the digital records only through their digital footprints -p. 375-376
Program process
-a basic of the evaluation research model -the complete treatment or service delivered by the program -some service or treatment is provided to the cases -ex: attendance in a class, assistance with a health problem, residence in new housing, or receipt of special cash benefits -may be simple or complicated, short or long, but it is designed to have some impact on the cases -p. 491
Participant observation - entering the field
-a critical stage in participant observation because it can shape many subsequent experiences -some background work is necessary before entering the field, at least enough to develop a clear understanding of what the research questions are likely to be and to review one's personal stance toward the people and problems likely to be encountered; need to have a sense of social boundaries around the setting you will study; must learn in advance how participants dress and what their typical activities are to avoid being caught completely unaware -finding a participant who can make introductions is often critical, and formal permissions may be needed in an organizational setting; this may take weeks or months to accomplish -many field researchers avoid systematic and extensive reading about a setting for fear that it will bias their impressions, but entering without any sense of the social norms can lead to a disaster -gatekeeper = a person in a field who can grant researchers access to the setting -when participant observing involves public figures who are used to reporters and researchers, a more direct approach may secure entry into the field; motivations for these people to participate may include a change in daily routine, commitment to making themselves available, desire for more publicity, flattery of scholarly attention, and interest in helping teach others about politics -in essence, field researchers must be very sensitive to the impression they make and to the ties they establish when entering the field; this stage lays the groundwork for collecting data from people who have different perspectives and for developing relationships that the researcher can use to surmount the problems in data collection; the researcher should be ready with a rationale for his or her participation; discussion about these issues with key participants or gatekeepers should be honest and identify what the participants can expect from the research, without going into detail -p. 381-382
Evaluation research - simple or complex outcomes
-a design decision in evaluation research -How many outcomes are anticipated? How many might be unintended? What are direct or indirect consequences of program action? -the decision to focus on one outcome rather than another, or on several, can have enormous implications -single-purpose programs (such as the Minneapolis domestic violence study, which focused on the outcome of recidivism) can turn out to be not quite so simple to evaluate; in the case of Minneapolis, there was no adequate single source for records of recidivism in domestic violence cases, so the researchers had to hunt for evidence from court and police records, follow-up interviews with victims, and family members' reports; more easily measured variables like partners' ratings of the accused's subsequent behavior eventually received more attention; similar example regarding military boot camps -despite the additional difficulties introduced by measuring multiple outcomes, most evaluation researchers attempt to do so; measuring multiple outcomes may lead to identification of different program impacts for different groups; in general, multiple outcomes give a better picture of program impact -downside to collecting multiple outcomes is that policy makers may choose to publicize only those outcomes that support their own policy preferences and ignore the rest; often, evaluation researchers themselves have little ability to publicize a more complete story -p. 510-511
Evaluation research - quantitative or qualitative
-a design decision in evaluation research -evaluation research that attempts to identify the effects of a social program typically is quantitative -ex: did the response time of emergency personnel tend to decrease? Did the students' test scores increase? Did housing retention improve? Did substance abuse decline? -favored when comparing outcomes between an experimental and control group or tracking change over time in a systematic manner -qualitative methods can add much to quantitative evaluation research studies, including more depth, detail, nuance, and exemplary case studies -can help investigate the program process (what is inside the black box) -tracking service delivery by finding out what is happening to clients and how clients experience the program can often best be accomplished by observing program activities and interviewing staff and clients intensively -important for learning how different individuals react to the treatment (ex: adult basic skills program for immigrants; quantitative evaluation of student reactions to the program relied heavily on the students' initial statements of their goals, whereas qualitative interviews revealed that most new immigrants lacked sufficient experience in the US to set meaningful goals; their initial goal statements simply reflected their eagerness to agree with their counselors' suggestions) -can help reveal how social programs actually operate; complex social programs have many different features, and it is not always clear whether the combination of those features or some particular features are responsible for the program's effect -a skilled qualitative researcher will be flexible and creative in choosing methods for program evaluation and will often use mixed methods, so that the evaluation benefits from the advantages of both qualitative and quantitative techniques -p. 509-510
Evaluation research - groups or individuals
-a design decision in evaluation research -evaluation researchers should consider randomizing groups rather than individuals to alternative programs in an impact analysis -ex: randomly assigning classes to a new program or alternative one rather than randomly assigning individual students -makes it easier to determine whether some characteristics of different sites influence program impact -however, it requires a large number of participants and often cooperation across many governmental or organizational units -p. 512-513
Evaluation research - researcher vs. stakeholder orientation
-a design decision in evaluation research -whose prescriptions specify how the program should operate, what outcomes it should try to achieve, or whom it should serve? -in program evaluation, the research question is often set by the program sponsors or the government agency responsible for reviewing the program -in consulting projects for businesses, the client (a manager, division president, etc.) decides which question researchers will study -most often, these authorities also specify the outcomes to be investigated; the first evaluator of the evaluation research is the funding agency rather than the professional social science community -evaluation research is research for a client, and its results may directly affect the services, treatments, or even punishments (ex: in the case of prison studies) that program users receive -Social science (or researcher) approaches - an orientation to evaluation research that expects researchers to emphasize the importance of researcher expertise and maintenance of autonomy from program stakeholders -emphasize the importance of researcher expertise and maintenance of some autonomy to develop the most trustworthy, unbiased program evaluation -it is assumed that evaluators cannot passively accept the values and views of the other stakeholders -evaluators who adopt this approach derive a program theory from information they obtain on how the program operates and existing social science theory and knowledge, not from the views of stakeholders -extreme form is goal-free evaluation = researchers do not even permit themselves to learn what goals the program stakeholders have for the program; instead, the researcher assesses and then compares the needs of participants with a wide array of program outcomes; wants to see the unanticipated outcomes and to remove any biases caused by knowing the program goals in advance -Stakeholder approaches - an orientation to evaluation research that expects researchers to be responsive primarily to 
the people involved with the program; also termed responsive evaluation -encourages researchers to be responsive to program stakeholders -issues to study are to be based on the views of people involved with the program, and reports are to be made to program participants -the program theory is developed by the researcher to clarify and develop the key stakeholders' theory of the program -types: -1. Utilization-focused evaluation = the evaluator forms a task force of program stakeholders, who help to shape the evaluation project so that they are most likely to use its results -2. Action research/participatory research = program participants are engaged with the researchers as coresearchers and help to design, conduct, and report the research -3. Appreciative inquiry = eliminates the professional researcher altogether in favor of a structured dialogue about needed changes among program participants themselves -because different stakeholders may differ in their reports about or assessment of the program, there is not likely to be one conclusion about program impact; the evaluators are primarily concerned with helping participants understand the views of other stakeholders and with generating productive dialogue -disadvantages in both types -if stakeholders are ignored, researchers may find that participants are uncooperative, that their reports are unused, and that the next project remains unfunded -however, if social science procedures are neglected, standards of evidence will be compromised, conclusions about program effects will likely be invalid, and results are unlikely to be generalizable to other settings -Integrative approaches - an orientation to evaluation research that expects researchers to respond to the concerns of people involved with the program - stakeholders - as well as to the standards and goals of the social scientific community -attempts to cover issues of concern to both stakeholders and evaluators and to include stakeholders in the group from 
which guidance is routinely sought -seek to balance the goal of carrying out a project that is responsive to stakeholder concerns with the goal of objective, scientifically trustworthy, and generalizable results -when the research is planned, evaluators are expected to communicate and negotiate regularly with key stakeholders and to consider stakeholder concerns; findings from preliminary inquiries are reported back to program decision makers so that they can make improvements in the program before it is formally evaluated; when the actual evaluation is conducted, the evaluation research team is expected to operate more autonomously, minimizing intrusions from program stakeholders -many evaluation researchers now recognize the need to account for both sides -p. 506-509
Bar chart
-a graphic for qualitative variables in which the variable's distribution is displayed with solid bars separated by spaces -good tool for displaying the distribution of variables measured at the nominal level because there is, in effect, a gap between each of the categories -ex: chart showing marital status percentages (married, widowed, divorced, separated, never married); central tendency is married because it is the most common value; moderate amount of variability because the half that are not married are spread across the categories of widowed, divorced, separated, and never married; because marital status is not a quantitative variable, the order in which the categories are presented is arbitrary -p. 318
Frequency polygon
-a graphic for quantitative variables in which a continuous line connects data points representing the variable's distribution -an alternative to the histogram and is particularly useful when there is a wide range of values -ex: years of education (0-20) and percentage; most common value is 12 years and is the center of the distribution; moderate variability in the distribution with many cases having more than 12 years of education; highly negatively skewed, with few reporting fewer than 10 years of education -p. 318
Histogram
-a graphic for quantitative variables in which the variable's distribution is displayed with adjacent bars -no necessary gaps between bars -ex: years of education (0-20) and percentages; clump of cases at 12 years and the distribution is negatively skewed because the percentage of cases tapers off to the low end much more quickly with a long tail -p. 318
Gamma
-a measure of association -a measure of association sometimes used in cross-tabular analysis, and a popular choice when the variables are measured at the ordinal level -varies from -1 (perfect association in an inverse direction) to 0 (no association) to +1 (perfect positive association) -the closer to -1 or +1, the stronger the association -p. 343
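A minimal sketch of the gamma computation, with illustrative data (the function name and scores are not from the notes). Gamma is conventionally computed as (C - D) / (C + D), where C is the number of concordant pairs of cases and D is the number of discordant pairs:

```python
# Goodman-Kruskal gamma for two ordinal variables: a minimal sketch.
from itertools import combinations

def gamma(x, y):
    """Compute gamma = (C - D) / (C + D) from paired ordinal scores."""
    concordant = discordant = 0
    for (x1, y1), (x2, y2) in combinations(zip(x, y), 2):
        product = (x1 - x2) * (y1 - y2)
        if product > 0:
            concordant += 1   # pair ordered the same way on both variables
        elif product < 0:
            discordant += 1   # pair ordered in opposite directions
        # pairs tied on either variable are ignored by gamma
    return (concordant - discordant) / (concordant + discordant)

# A perfect positive association yields +1; a perfect inverse one yields -1.
print(gamma([1, 2, 3, 4], [1, 2, 3, 4]))   # 1.0
print(gamma([1, 2, 3, 4], [4, 3, 2, 1]))   # -1.0
```

Values between the extremes (e.g., 0.3 or -0.6) indicate weaker associations, matching the "closer to -1 or +1, the stronger" rule above.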
Mean
-a measure of central tendency -the arithmetic, or weighted, average, computed by adding the values of all the cases and dividing by the total number of cases -the sum of the values of all cases / the number of cases -an algebraic equation is provided -makes sense to compute a mean only if the values of the cases can be treated as actual quantities - that is, if they reflect an interval or ratio level of measurement, or if they are ordinal and are assumed to be able to be treated as interval; it makes no sense to calculate the mean for a nominal variable (ex: religion; impossible to calculate the mean of Protestant, Catholic, and Jew) -p. 328-329
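The sum-divided-by-count definition can be sketched with hypothetical ages:

```python
# Mean = sum of the values of all cases / number of cases (hypothetical data).
ages = [18, 21, 22, 25, 34]
mean = sum(ages) / len(ages)
print(mean)  # 24.0
```

Note this would be meaningless for a nominal variable such as religion, since its values are categories rather than quantities.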
Mode
-a measure of central tendency -the most frequent value in a distribution; also termed probability average -being the most frequent value, it is also the most probable value in the distribution -used much less often than the mean and median, because it can so easily give misleading impressions of a distribution's central tendency -one problem with the mode occurs when a distribution is bimodal (a distribution that has two nonadjacent categories with about the same number of cases, and these categories have more cases than any others) in contrast to being unimodal (a distribution of a variable in which there is only one value that is the most frequent); bimodal distribution has two or more categories with an equal number of cases, so there is no single mode; in a situation in which two categories have roughly the same number of cases, the one with slightly more will be the mode, even though the other one is only slightly less -another problem is that it might happen to fall far from the main clustering of cases in a distribution -it would be misleading in most circumstances to say simply that the variable's central tendency was whatever the modal value was -nevertheless, there are occasions where it is very appropriate; it is the only measure of central tendency that can be used to characterize the central tendency of variables measured at the nominal level (ex: marital status); provides the answer when seeking the most probable value (ex: which ethnic group is most common in a school?) -p. 327-328
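A quick sketch of finding the mode of a nominal variable, using hypothetical marital status data:

```python
# Mode = the most frequent (and therefore most probable) value in a distribution.
from collections import Counter

marital_status = ["married", "married", "divorced", "never married",
                  "married", "widowed"]
mode, count = Counter(marital_status).most_common(1)[0]
print(mode, count)  # married 3
```

As noted above, this is the only measure of central tendency usable at the nominal level; for a bimodal distribution, `most_common` would still return only one of the two peaks, illustrating why the mode can mislead.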
Median
-a measure of central tendency -the position average, or the point that divides the distribution in half (the 50th percentile) -inappropriate for variables measured at the nominal level because their values cannot be put in order, and so there is no meaningful middle position -to determine this, we simply place all values in numerical order and find the value of the case that has an equal number of cases above and below it; if the median falls between two cases (when there is an even number of cases), the median is defined as the average of the two middle values (add them and divide by 2) -in a frequency distribution, this is determined by identifying the value corresponding to a cumulative percentage of 50; count the percentages of the categories until you reach the category that encompasses the 50th percentile -with most variables, it is preferable to compute the median from ungrouped data because that method results in an exact value for the median (such as age 49), rather than an interval; in grouped data, the median is only determined as a certain interval (such as people aged 40-50) -p. 328
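The order-then-take-the-middle procedure can be sketched as follows (illustrative values):

```python
def median(values):
    """Position average: sort, then take the middle case
    (or the average of the two middle values when n is even)."""
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

print(median([3, 1, 7]))      # 3 - odd n: the single middle value
print(median([3, 1, 7, 5]))   # 4.0 - even n: average of 3 and 5
```

This is the ungrouped-data computation the notes recommend, which yields an exact value rather than an interval.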
Variance
-a measure of variation -a statistic that measures the variability of a distribution as the average squared deviation of each case from the mean -accounts for the amount by which each case differs from the mean -formula provided -more conventional to use the closely related standard deviation than this -p. 333
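Since the formula itself is only referenced above ("formula provided"), here is a sketch of the average-squared-deviation computation with hypothetical scores (population form, dividing by N):

```python
# Variance = average squared deviation of each case from the mean.
scores = [2, 4, 4, 4, 5, 5, 7, 9]          # hypothetical data
mean = sum(scores) / len(scores)            # 5.0
variance = sum((x - mean) ** 2 for x in scores) / len(scores)
print(variance)  # 4.0
```

Squaring the deviations keeps cases above and below the mean from canceling out; taking the square root of this value gives the more conventional standard deviation.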
Interquartile range
-a measure of variation -the range in a distribution between the end of the first quartile and the beginning of the third quartile -quartiles - the points in a distribution corresponding to the first 25% of the cases, the first 50% of the cases, and the first 75% of the cases -third quartile minus the first quartile is equal to this -p. 333
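A sketch of the third-quartile-minus-first-quartile computation with illustrative values. Note that texts and software use several slightly different quartile conventions; this median-of-halves version is just one common choice:

```python
def quartiles(values):
    """First, second, and third quartiles via the median-of-halves
    convention (one of several common definitions)."""
    ordered = sorted(values)
    n = len(ordered)
    def med(v):
        m = len(v) // 2
        return v[m] if len(v) % 2 else (v[m - 1] + v[m]) / 2
    lower = ordered[:n // 2]         # cases below the median
    upper = ordered[(n + 1) // 2:]   # cases above the median
    return med(lower), med(ordered), med(upper)

q1, q2, q3 = quartiles([1, 2, 3, 4, 5, 6, 7, 8])
print(q3 - q1)  # interquartile range: 6.5 - 2.5 = 4.0
```

Because it ignores the bottom and top quarters of cases, the interquartile range is less sensitive to extreme values than the full range.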
Standard deviation
-a measure of variation -the square root of the average squared deviation of each case from the mean -the square root of the variance -formula provided -normal distribution - a symmetric, bell-shaped distribution that results from chance variation around a central value; is symmetric and tapers off in a characteristic shape from its mean; 68% of cases will lie between plus and minus one standard deviation from the mean; 95% of cases will lie between plus and minus two (1.96) standard deviations from the mean -the correspondence of this to the normal distribution enables us to infer how confident we can be that the mean (or other stat) of a population sampled randomly is within a certain range of the sample mean; confidence limits indicate how confident we can be, based on our random sample, that the value of some statistic in the population falls within a particular range -directions on how to compute the confidence limits provided -p. 333-334
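A sketch tying the standard deviation to the 95% confidence limits mentioned above. The notes' "directions on how to compute the confidence limits" are not reproduced here, so this uses the standard mean ± 1.96 standard errors form, with hypothetical scores:

```python
import math

scores = [2, 4, 4, 4, 5, 5, 7, 9]           # hypothetical data
n = len(scores)
mean = sum(scores) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in scores) / n)  # sqrt of variance
print(sd)  # 2.0

# 95% confidence limits for the population mean: mean +/- 1.96 standard
# errors, where the standard error of the mean is sd / sqrt(n).
se = sd / math.sqrt(n)
lower, upper = mean - 1.96 * se, mean + 1.96 * se
print(round(lower, 2), round(upper, 2))
```

The 1.96 comes from the normal distribution: 95% of cases fall within 1.96 standard deviations of the mean.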
Policy research
-a process in which research results are used to provide policy actors with recommendations for action that are based on empirical evidence and careful reasoning -a process rather than a method -a process that attempts to support and persuade actors by providing them with well-reasoned, evidence-based, and responsible recommendations for decision making and action -often draws on the findings of evaluation research projects and involves working for a client, so these researchers often confront many of the same challenges as do evaluation researchers -must summarize and weigh evidence from a wide range of sources, and so must be familiar with each of the methods presented in this book -goal is to inform those who make policy about the possible alternative courses of action in response to some identified problem, their strengths and weaknesses, and their likely positive and negative effects -reviewing available evidence may result in concluding that enough is known about the issues to develop recommendations without further research, but it is more likely that additional research will be needed using primary or secondary sources -must ensure that plans are: -1.Credible: informed by evidence and unbiased to the extent possible about the pros, cons, and risks of problems and potential interventions -2.Meaningful: engaging of representatives of stakeholder groups, including policy makers and regulators, and those affected by policy actions such as customers, suppliers, or service recipients -3.Responsible: consider a broad spectrum of potential negative consequences of policy change -4.Creative: recognize needs for new or different solutions -5.Manageable: doable within the available time and resources -begins with identification of a research question that may focus on the causes of or solutions to the policy problem -encouraged to make a clear distinction between the problem they hope to solve and the aspects of the problem they cannot deal with, to review carefully the context in which the problem occurs, to specify clearly why policy change is needed and what possible risks might be incurred by making a change, as well as to develop a causal model of how the policy problem occurs -challenge is the longer time frame required for producing a good product that involves primary data collection, compared with the short time frame of most politically driven policy decisions -is becoming an expected element in policy making -p. 513-514
Part-whole question effect
-a subset of context effects, which is discussed in relation to surveys and order of questions -when responses to a general or summary survey question about a topic are influenced by responses to an earlier, more specific question about that topic -ex: married people tend to report that they are happier in general if the general happiness question is preceded by a question about their happiness with their marriage -prior questions can influence how questions are comprehended, what beliefs shape responses, and whether comparative judgments are made; the potential for context effects is greatest when two or more questions concern the same issue or closely related ones; the impact of question order also tends to be greater for general, summary type questions -can be identified empirically if the question order is reversed on a subset of the questionnaires (split-ballot design) and the results compared; however, knowing that a context effect occurs does not tell us which order is best; could randomize the order in which key questions are presented, so that any effects of question order cancel each other out -p. 278
Omnibus survey
-a survey that covers a range of topics of interest to different social scientists -shows just how versatile, efficient, and generalizable a survey can be -covers a range of topics of interest to different social scientists, in contrast to the typical survey that is directed at a specific research question -has multiple sponsors or is designed to generate data useful to a broad segment of the social science community rather than to answer a particular research question -is usually directed to a sample of some general population so that the questions about a range of issues are appropriate to at least some sample members -ex: the GSS of the National Opinion Research Center at the University of Chicago is one of sociology's most successful examples; Split-ballot design = unique questions or other modifications in a survey administered to randomly selected subsets of the total survey sample, so that more questions can be included in the entire survey or so that responses to different question versions can be compared; allows more questions without increasing the survey's cost; also facilitates experiments on the effect of question wording, as different forms of the same question are included in the subsets; has allowed the investigation of many social research questions and has provided data for 25,000 publications, presentations, and reports -p. 258
Mixed-mode surveys
-a survey that is conducted by more than one method, allowing the strengths of one survey design to compensate for the weaknesses of another and maximizing the likelihood of securing data from different types of respondents; for example, nonrespondents in a mailed survey may be interviewed in person or over the phone -ex: survey is sent electronically to those who have email addresses and mailed to those who do not, phone reminder used to encourage responses to web or paper surveys, nonrespondents in a mailed survey interviewed in person or over the phone, etc. -not a perfect solution: -the need to choose between two modes of the survey may lead to some respondents not bothering with it -respondents may give different answers because of the survey mode, rather than because they actually have different opinions (use of the same question structures, response choices, and skip instructions across modes substantially reduces the likelihood of mode effects) -p. 300-301
Measure of association
-a type of descriptive statistics that summarizes the strength of an association -there are many of these measures, some of which are appropriate for variables measured at particular levels -gamma - a measure of association that is sometimes used in cross-tabular analysis; a popular measure of association in cross-tabular analyses with variables measured at the ordinal level; varies from -1 (perfectly associated in an inverse direction) to 0 (no association) to +1 (perfect positive association); closer to -1 or 1, the stronger the association -p. 343
Evaluability assessment
-a type of question focused on in evaluation research -a type of evaluation research conducted to determine whether it is feasible to evaluate a program's effects within the available time and resources -evaluation research will be pointless if the program itself cannot be evaluated -helps to learn in advance whether the program can be evaluated, rather than expend time and effort on a fruitless project -why might a program not be evaluable? -1. Management only wants to have its superior performance confirmed and does not really care whether the program is having its intended effects; very common -2. Staff are so alienated from the agency that they don't trust any attempt sponsored by management to check on their performance -3. Program personnel are just "helping people" or "putting in time" without any clear sense of what the program is trying to achieve -4. The program is not clearly distinct from other services delivered by the agency and so cannot be evaluated by itself -documents whether it is actually possible to evaluate a program. -ex: are doctors even in the mood to switch from paper-based to computerized medical records?; yes, you must consider the "mood" of stakeholders, like doctors in this example. How can you evaluate a policy that no one wants passed? For example, doctors who aren't in the "mood" for change might refuse to respond to your study or offer only negative evaluations, both of which will add bias to your findings. 
-because they are preliminary studies to check things out, they often rely on qualitative methods; program managers and key staff may be interviewed in depth, or program sponsors may be asked about the importance they attach to different goals -evaluators may then suggest changes in program operations to improve evaluability and may use the evaluability assessment to sell the evaluation to participants and sensitize them to the importance of clarifying their goals and objectives -if the program is judged to be evaluable, knowledge gleaned through the evaluability assessment is used to shape evaluation plans -complex community initiatives can be particularly difficult to evaluate due to the evolving nature of the intervention as it is adapted to community conditions, an often broad range of outcomes, and many times, the absence of a control or comparison community (ex: FJC Initiative and Abt evaluators) -p. 494-495; Mod. 9 PP
Needs assessment
-a type of question focused on in evaluation research -a type of evaluation research that attempts to determine the needs of some population that might be met with a social program -may be assessed by social indicators, such as the poverty rate or the level of home ownership; by interviews of local experts, such as school board members or team captains; by surveys of populations in need; or by focus groups composed of community residents -not as easy as it sounds, because whose definitions or perceptions should be used to shape our description of the level of need? -is a research project documenting that particular services are required (or not required) within a community or other setting. -ex: We survey or interview doctors to understand whether they see a need to move away from paper-based medical records. Why bother introducing electronic health records if paper-based records do just fine? "If it ain't broke, don't fix it." -ex: Boston McKinney Project on homeless mentally ill persons and group or individual housing: -attempted to answer the question: what type of housing do these persons need? -asked the homeless individuals what type of housing they wanted and asked two clinicians their recommendation for the type of housing for each individual -individuals mostly chose individual housing, while clinicians mostly chose group housing -individuals assigned to group housing were somewhat more successful in retaining their housing than were those who were assigned to individual housing -which need is best then? -the lesson here is that in needs assessment, it is a good idea to use multiple indicators and there is no absolute definition of need, nor is there likely to be in any but the most simplistic evaluation projects -a good evaluation researcher will do his or her best to capture different perspectives on need and then help others make sense of the results -p. 492-494; Mod. 9 PP
Efficiency analysis
-a type of question focused on in evaluation research -a type of evaluation research that compares program costs with program effects; it can be either a cost-benefit analysis or a cost-effectiveness analysis -answers the question: whatever the program's benefits, are they sufficient to offset the costs of the program? Are the taxpayers getting their money's worth? What resources are required by the program? -efficiency questions can be the primary reason why funders require evaluation of the programs they fund -is often a necessary component of an evaluation research project -1.cost-benefit analysis - a type of evaluation research that compares program costs with the economic value of program benefits -requires that the analyst identify whose perspective will be used to determine what can be considered a benefit rather than a cost (clients will have a different perspective than taxpayers or program staff) -some anticipated impacts of the program are considered a cost to one group and a benefit to another, whereas some are not relevant to one group -must be able to make some type of estimation of how clients benefited from the program; normally, this will involve a comparison of some indicators of client status before and after clients received program services or between those who received services and a comparable group that did not -ex: therapeutic communities (TC) cost-benefit analysis of homeless and mentally ill individuals assigned to a TC or treatment as usual; found that the TC program had a substantial benefit relative to its costs -2.cost-effectiveness analysis - a type of evaluation research that compares program costs with actual program outcomes -costs are compared with outcomes, such as the number of jobs obtained by a program participant, the extent of improvement in reading scores, or the degree of decline in crimes committed -compares what you gain with what you spend implementing the policy. 
-1.Cost-Benefit Analysis: compares program costs to economic benefits; ex: does the switch from paper-based to electronic health records lower health care expenditures? -2.Cost-Effectiveness: compares program costs to actual outcome; ex: does the switch from paper-based to electronic health records improve patients' health?; but improvements in actual outcomes (e.g. better health) translate to economic benefits (e.g., lower health care expenditures)...right? I'm sorry to have to be the one to tell you: no, not always -p. 502-503; Mod. 9 PP
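The cost-benefit vs. cost-effectiveness distinction can be made concrete with toy numbers (all figures here are hypothetical, not from the text):

```python
# Cost-benefit: both sides of the comparison are in dollars.
program_cost = 500_000        # hypothetical annual program cost ($)
dollar_benefits = 650_000     # hypothetical economic value of benefits ($)
net_benefit = dollar_benefits - program_cost
print(net_benefit)  # 150000 -> benefits exceed costs

# Cost-effectiveness: costs are compared with an actual outcome instead.
jobs_obtained = 125           # hypothetical program outcome
cost_per_outcome = program_cost / jobs_obtained
print(cost_per_outcome)  # 4000.0 -> dollars spent per job obtained
```

The two analyses can point in different directions, which is the notes' warning: a program can improve actual outcomes without its benefits translating into a favorable dollar comparison.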
Process evaluation
-a type of question focused on in evaluation research -evaluation research that investigates the process of service delivery -answers the question: what actually happens in a social program? Did what was intended in the design occur? -is even more important when more complex programs are evaluated; many social programs comprise multiple elements and are delivered over an extended period, often by different providers in different areas; because of this complexity, it is quite possible that the program is not delivered the same for all program participants or consistent with the formal program design -examines how a policy/program is actually implemented. -ex: how do doctors actually use the electronic health record with patients during the clinic visit? -sure, we could evaluate changes in patient outcomes (e.g., health) before and after implementing electronic health records (see impact analysis), but we could also study how doctors actually did the implementing. What did they do that worked / didn't work? Do they hate using the computers and, if so, why? 
-ex: DARE process evaluation found that it was operating as designed and running relatively smoothly -can also be used to identify the specific aspects of the service delivery process that have an impact, which in turn will help explain why the program has an effect and which conditions are required for these effects -formative evaluation - process evaluation that is used to shape and refine program operations; can specify the treatment process and lead to changes in recruitment procedures, program delivery, or measurement tools; can provide a strong foundation for managers as they develop the program -can employ a wide range of indicators; program coverage can be monitored through program records, participant surveys, community surveys, or users versus dropouts and ineligibles; service delivery can be monitored through service records completed by program staff, a management information system maintained by program administrators, or reports by program participants -qualitative methods are often a key component of process evaluation studies because they can be used to understand internal program dynamics, even those that were not anticipated; qualitative researchers may develop detailed descriptions of how program participants engage with each other, how the program experience varies for different people, and how the program changes or evolves over time -goal is to develop a bottom-up view of the process, rather than the top-down view that emerges from official program records -p. 495-498; Mod. 9 PP
Participant observation - personal dimensions
-because field researchers become a part of the social situation they are studying, they cannot help but be affected on a personal, emotional level -at the same time, those being studied react to researchers not just as researchers but as personal acquaintances - often as friends, sometimes as personal rivals -the impact of personal issues varies with the depth of the researchers' involvement in the setting; the more involved researchers are in the multiple aspects of the ongoing situation, the more important personal issues become and the greater the risk of "going native"; even when researchers acknowledge their role, increased contact brings sympathy, and sympathy in turn dulls the edge of criticism; can become desensitized to some issues -there is no formula for successfully managing the personal dimension of a field research project; it is much more an art than a science and flows more from the researcher's own personality and natural approach to other people than from formal training; shared similarities on certain demographics may help create mutual feelings of comfort, but such social similarities may mask more important differences -Guidelines: -1.take the time to consider how you want to relate to your potential subjects as people -2.speculate about what personal problems might arise and how you will respond to them -3.keep in touch with other researchers and personal friends outside the research setting -4.maintain standards of conduct that make you comfortable as a person and that respect the integrity of your subjects -p. 389-390
Coding
-discussed in relation to data analysis, preparation -the process of assigning a unique numerical code to each response to survey questions; coding of all responses should be done before data entry by assigning each a unique numerical value -once the computer database software is programmed to recognize the response codes, the forms can be fed through a scanner and the data will then be entered directly into the database -if responses or other forms of data have been entered on non-scannable paper forms, a computer data entry program should be used that will allow the data to be entered into the database by clicking on boxes corresponding to the response codes -p. 316
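The idea of a coding scheme can be sketched as a simple lookup from response labels to numeric codes (the labels and code values below are hypothetical examples, not from the text):

```python
# Sketch of a coding scheme: each response choice gets a unique numeric code
# before data entry. These labels and codes are made-up examples.
CODES = {"strongly agree": 1, "agree": 2, "disagree": 3, "strongly disagree": 4}

raw_responses = ["agree", "strongly disagree", "agree"]
coded = [CODES[r] for r in raw_responses]
print(coded)  # [2, 4, 2]
```

Entering the short numeric codes (rather than the full response text) is what makes scanning or typed data entry fast and consistent.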
Intensive interviews - partnerships
-because intensive interviewing does not engage researchers as participants in subjects' daily affairs, the problems of entering the field are much reduced -however, the social process of arranging long periods for personal interviews can still be pretty complicated -important to establish rapport with subjects by considering in advance how they will react to the interview arrangements and by developing an approach that does not violate their standards for social behavior -interviewees should be treated with respect, as knowledgeable partners whose time is valued -a commitment to confidentiality should be stated and honored -in the first few minutes of the interview, the goal is to show interest in the interviewee and to explain clearly the purpose of the interview -during the interview: -the interviewer should maintain an appropriate distance from the interviewee, one that doesn't violate cultural norms -the interviewer should maintain eye contact unless cultural factors discourage it -there should not be distractions -an appropriate pace is also important -when covering emotional or stressful topics, the interviewer should provide the interviewee an opportunity to unwind at the interview's end -must be sensitive to the broader social context of their interaction with the interviewee and to the implications of their relationship in the way they ask questions and interpret answers -p. 392-393
Online interviews
-can facilitate interviews with others who are separated by physical distance and is a means to conduct research with those who are known only through online connections -can be either synchronous or asynchronous: -1. Synchronous -the interviewer and interviewee exchange messages as in online chatting or with text messages -provides an experience more similar to an in-person interview, thus giving more of a sense of obtaining spontaneous reactions -requires careful attention to arrangements and is prone to interruptions -can include video, such as real-time conferencing -2. Asynchronous -interviewee can respond to the interviewer's questions whenever it is convenient, usually through email, but perhaps through a blog, wiki, or online forum -allows interviewees to provide more thoughtful and developed answers, but it may be difficult to maintain interest and engagement if the exchanges continue over many days -should plan how to build rapport and how to terminate the relationship after the interview is concluded -can include video, such as sending video clips or podcasts -can facilitate the research process by creating a written record of the entire interaction without the need for typed transcripts -the relative anonymity of online communications can also encourage interviewees to be more open and honest about their feelings than they would be in in-person interviews -lack some of the appealing factors of qualitative methods: facial expression, intonation, and body language (unless videos are used) and intimate rapport -also, those online can present an identity that is completely removed from themselves (ex: changing characteristics of age, gender, physical location, etc.); but if people are creating personas online to connect with others, that too becomes an important part of the social world to investigate -p. 395-396
Floaters
-discussed in relation to survey questions - minimize fence-sitting and this -survey respondents who provide an opinion on a topic in response to a closed-ended question that does not include a 'don't know' option, but who will choose 'don't know' if it is available -choose a substantive answer when they really don't know or have no opinion; ex: a third of the public will provide an opinion on a proposed law that they know nothing about if they are asked for their opinion on a closed-ended survey question that does not include 'don't know' as an explicit response choice, but 90% will choose the 'don't know' option if it is given -on average, offering an explicit response option increases the 'don't know' responses by about a fifth -important to have a don't know option, but this may result in some people taking the easy way and choosing this answer, so some recommend using both a 'don't know' and a 'no opinion' option, or an open-ended option for respondents to discuss their opinions; those doing phone or in-person surveys may get around this issue by recording a noncommittal response if it is offered -people who "don't know" could actually be interesting to capture, especially if the question is about knowledge on a topic -p. 267; Mod. 7 PP
Fence-sitters
-discussed in relation to survey questions - minimize this and floaters -survey respondents who see themselves as being neutral on an issue and choose a middle (neutral) response that is offered -may skew the results if you force them to choose between opposites; in most cases, about 10-20% of such respondents will choose an explicit middle, neutral alternative -having the middle option is generally good because it identifies fence-sitters and tends to increase measurement reliability; forcing them to choose a response could overlook the existence of an actual neutral zone in the population. -p. 267; Mod. 7 PP
Double-barreled questions
-discussed in relation to survey questions, avoid confusing phrasing -a single survey question that actually asks two questions but allows only one answer; guaranteed to produce uninterpretable results -ex: do you think that President Nixon should be impeached and compelled to leave the presidency, or not? -asks about both impeachment and being compelled to leave office at once, when people could think one is true but not the other -do not use this -p. 262
Integrative approaches
-discussed in relation to a design decision in evaluation research; a combination of the researcher and stakeholder orientations -an orientation to evaluation research that expects researchers to respond to the concerns of people involved with the program - stakeholders - as well as to the standards and goals of the social scientific community -attempts to cover issues of concern to both stakeholders and evaluators and to include stakeholders in the group from which guidance is routinely sought -seek to balance the goal of carrying out a project that is responsive to stakeholder concerns with the goal of objective, scientifically trustworthy, and generalizable results -when the research is planned, evaluators are expected to communicate and negotiate regularly with key stakeholders and to consider stakeholder concerns; findings from preliminary inquiries are reported back to program decision makers so that they can make improvements in the program before it is formally evaluated; when the actual evaluation is conducted, the evaluation research team is expected to operate more autonomously, minimizing intrusions from program stakeholders -many evaluation researchers now recognize the need to account for both sides -p. 508
Program theory
-discussed in relation to a design decision in evaluation research; black box evaluation vs. this -a descriptive or prescriptive model of how a program operates and produces effects -ex: research on welfare-to-work; found that children of welfare-to-work parents did worse in school than children whose parents were just on welfare; must open the black box to find out why -specifies how the program is expected to operate and identifies which program elements are operational; may identify the resources needed by the program and the activities that are essential in its operations, as well as how the program is to produce its effects -thus improves understanding of the relationship between the independent variable (the program) and the dependent variable (the outcome or outcomes) -theory-driven evaluation - a program evaluation that is guided by a theory that specifies the process by which the program has an effect; when a researcher has sufficient knowledge of the relationship between the independent and dependent variables before the investigation begins, outlining a program theory can help guide the investigation of program process in the most productive directions -program theory can be either descriptive or prescriptive: -1. Descriptive > specifies what impacts are generated and how they occur; it suggests a causal mechanism, including intervening factors, and the necessary context for the effects; generally empirically based -2. Prescriptive > specifies how to design or implement the treatment, what outcomes should be expected, and how performance should be judged -p. 505-506
Stakeholder approaches
-discussed in relation to a design decision in evaluation research; researcher orientation vs. this -an orientation to evaluation research that expects researchers to be responsive primarily to the people involved with the program; also termed responsive evaluation -encourages researchers to be responsive to program stakeholders -issues to study are to be based on the views of people involved with the program, and reports are to be made to program participants -the program theory is developed by the researcher to clarify and develop the key stakeholder's theory of the program -types: -1. Utilization-focused evaluation = the evaluator forms a task force of program stakeholders, who help to shape the evaluation project so that they are most likely to use its results -2. Action research/participatory research = program participants are engaged with the researchers as coresearchers and help to design, conduct, and report the research -3. Appreciative inquiry = eliminates the professional researcher altogether in favor of a structured dialogue about needed changes among program participants themselves -because different stakeholders may differ in their reports about or assessment of the program, there is not likely to be one conclusion about program impact; the evaluators are primarily concerned with helping participants understand the views of other stakeholders and with generating productive dialogue -p. 507-508
Constructivism
-discussed in relation to a main feature of qualitative research -a methodology based on questioning belief in an external reality; emphasizes the importance of exploring the way in which different stakeholders in a social setting construct their beliefs -constructivists believe that social reality is socially constructed and that the goal of social scientists is to understand what meanings people give to reality, not to determine how reality works apart from these constructions -rejects the positivist belief that there is a concrete, objective reality that scientific methods help us understand -holds that people construct an image of reality based on their own preferences and prejudices and their interactions with others and that this is as true of scientists as it is of everyone else in the social world; this means that we can never be sure that we have understood reality properly, that objects and events are understood differently by different people, and that those perceptions are the reality that social science should focus on -emphasizes that different stakeholders in a social setting construct different beliefs -gives particular attention to the different goals of researchers and other participants and may seek to develop a consensus among participants about how to understand the focus of inquiry -may use an interactive research process, in which a researcher begins an evaluation in some social setting by identifying the different interest groups in that setting -Hermeneutic circle = a representation of the dialectical process in which the researcher obtains information from multiple stakeholders in a setting, refines his or her understanding of the setting, and then tests that understanding with successive respondents; the researcher interviews each respondent to learn how they construct their thoughts and feelings about the topic of concern and then gradually tries to develop a shared perspective on the problem being evaluated -p. 370
Reflexivity
-discussed in relation to a main feature of qualitative research -the researcher's sensitivity and adaptation to his or her own influence in the research setting -qualitative researchers recognize that their perspective on social phenomena will reflect in part their own background and current situation; where the researcher is coming from can affect what they find -some researchers believe that the goal of developing a purely objective view of the social world is impossible, so they discuss in their publications their own feelings about what they have studied so that others can consider how these feelings affected their findings -p. 369
Hermeneutic circle
-discussed in relation to constructivism, which is a main feature of some qualitative research; constructivism may use this interactive research process, in which a researcher begins an evaluation in some social setting by identifying the different interest groups in that setting -a representation of the dialectical process in which the researcher obtains information from multiple stakeholders in a setting, refines his or her understanding of the setting, and then tests that understanding with successive respondents -researcher interviews each respondent to learn how they construct their thoughts and feelings about the topic of concern and then gradually tries to develop a shared perspective on the problem being evaluated -p. 370
Split-ballot design
-discussed in relation to the GSS of the National Opinion Research Center, an example of an omnibus survey -unique questions or other modifications in a survey administered to randomly selected subsets of the total survey sample, so that more questions can be included in the entire survey or so that responses to different question versions can be compared -allows more questions without increasing the survey's cost -also facilitates experiments on the effect of question wording, as different forms of the same question are included in the subsets -p. 258
Data cleaning
-discussed in relation to data analysis, preparation -the process of checking data for errors after the data have been entered in a computer file -first step is to check responses before they are entered into the database to make sure that one and only one valid answer code has been clearly circled or checked for each question (unless multiple responses are allowed or a skip pattern was specified) -the next step is to make sure that no invalid codes have been entered; these involve codes that fall outside the range of allowable values for a given variable and those that represent impossible combinations of responses to two or more questions -whatever data entry method is used, the data must be checked for errors -Paper-based: -while entering data, human error can occur; ideally, especially for lengthy surveys, you'll have two people enter in each survey you received and then compare -data cleaning means checking for errors once data have been entered (ex: scanning to see if someone entered a value outside the range, like 9, since it's right next to 0 on the keyboard) -Computer-assisted: -should still look over the data closely and clean them just as you would for paper-based data collection; sometimes, the software or website you used wasn't programmed correctly (e.g., saved a value of 7 instead of 6); this will unfortunately likely affect every response you receive; it's not the end of the world and you just have to notice it and make sure you correct everything. -p. 316; Mod. 10 PP
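The out-of-range check described above can be sketched in a few lines (the variable names, respondent IDs, and 0-6 range are assumptions for illustration, not from the text):

```python
# Sketch of one data-cleaning pass: flag codes outside the allowable range
# for a single item. Names and the 0-6 range are made-up examples.
responses = {"id01": 3, "id02": 9, "id03": 0, "id04": 6}  # 9 is a plausible typo (next to 0 on the keyboard)
ALLOWED = range(0, 7)  # valid codes: 0 through 6

invalid = {rid: val for rid, val in responses.items() if val not in ALLOWED}
print(invalid)  # {'id02': 9} -> go back to the original form and correct the entry
```

Checking impossible combinations across two or more questions works the same way, just with a condition that looks at more than one variable per respondent.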
Data entry
-discussed in relation to data analysis, preparation -the process of typing (word processing) or otherwise transferring data on survey or other instruments into a computer file; forms can be designed for scanning or direct computer entry -Paper-based: -sometimes you have to use pen and paper to collect your measures, for example: you mailed a paper survey, used a paper survey while administering it yourself in-person, handed out the paper survey to respondents in person, and then they self-administered it -whatever your reason, you now have to do data entry, where you enter responses into a computer program -the computer program you use to enter data could be: a statistical package like SPSS, Excel or any other spreadsheet software, Word or any other word processing software (usually, responses are separated by commas or tabs) -Computer-assisted: -sometimes you're able to use a computer to collect your measures, for example: you administered an electronic survey using a website like Survey Monkey, or you used a computer to enter in responses you received either in-person or over the telephone. -whatever the reason, the software or website you use will likely create for you an Excel file containing responses for you to analyze -you won't have to worry about human error during this -p. 315; Mod. 10 PP
Precoding
-discussed in relation to data analysis, preparation -the process through which a questionnaire or other survey form is prepared so that a number represents every response choice, and respondents are instructed to indicate their response to a question by checking a number; easier to type in the strings of numbers than to type in the responses themselves -if a data entry program is not used, responses can be typed directly into a computer database; if data entry is to be done this way, the questionnaires or other forms should involve this -p. 316
Cost-effectiveness analysis
-discussed in relation to efficiency analysis, which is a type of question focused on in evaluation research -a type of evaluation research that compares program costs with actual program outcomes -costs are compared with outcomes, such as the number of jobs obtained by a program participant, the extent of improvement in reading scores, or the degree of decline in crimes committed -compares program costs to actual outcomes; ex: does the switch from paper-based to electronic health records improve patients' health?; but improvements in actual outcomes (e.g., better health) translate to economic benefits (e.g., lower health care expenditures)...right? I'm sorry to have to be the one to tell you: no, not always -p. 503; Mod. 9 PP
Computer-assisted personal interview (CAPI)
-discussed in relation to in-person interviews/surveys -a personal interview in which a laptop computer is used to display interview questions and to process responses that the interviewer types in, as well as to check that these responses fall within allowed ranges -interviewers seem to like it and data are comparable in quality to data obtained in a noncomputerized interview -makes it easier for the researcher to develop skip patterns and experiment with different types of questions for different respondents without increasing the risk of interviewer mistakes -allows respondents to answer directly on the computer without the interviewer knowing what their response is; helpful for socially undesirable behaviors like drug use, sexual activity, not voting, etc. (or these sensitive answers can be handed in separately in a sealed envelope) -p. 294
Grand tour questions
-discussed in relation to intensive/in-depth interviews -a broad question at the start of an interview that seeks to engage the respondent in the topic of interest -meant to elicit lengthy narratives -used after the simple questions that gather background info and build rapport -p. 392
Cover letter
-discussed in relation to mailed surveys -the letter sent with a mailed questionnaire that explains the survey's purpose and auspices (sponsors) and encourages the respondent to participate -critical for success because it sets the tone for the questionnaire -a carefully prepared one should increase the response rate and result in more honest/complete answers -must be: -Credible > letter should establish that the research is being conducted by a researcher or organization that the respondent is likely to accept as a credible, unbiased authority; government sponsors tend to elicit high rates of response, and research conducted by well-known universities and recognized research organizations (ex: Gallup) is also usually credible, but publishing firms, students, and private associations elicit the lowest response rates -Personalized > cover letter should include a personalized salutation with the respondent's name and the researcher's signature -Interesting > statement should interest the respondent in the contents of the questionnaire -Responsible > reassure the respondent that information obtained will be treated confidentially and include a phone number should they have questions; point out that participation is voluntary -p. 283-284
Unimodal
-discussed in relation to mode, a measure of central tendency -a distribution of a variable in which there is only one value that is the most frequent -p. 328
Bimodal
-discussed in relation to mode, a measure of central tendency (a problem occurs when a distribution is this) -a distribution that has two nonadjacent categories with about the same number of cases, and these categories have more cases than any others -has two or more categories with an equal number of cases, so there is no single mode -in a situation in which two categories have roughly the same number of cases, the one with slightly more will be the mode, even though the other one is only slightly less -p. 328
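Finding the mode(s) of a coded variable, and spotting a bimodal tie, can be sketched like this (the response codes below are made-up examples):

```python
from collections import Counter

# Sketch: find the mode(s) of a coded variable and flag a two-way tie.
values = [1, 2, 2, 3, 3, 4]  # hypothetical coded responses
counts = Counter(values)
top = max(counts.values())
modes = sorted(v for v, c in counts.items() if c == top)

print(modes)  # [2, 3] -> two categories tie, so there is no single mode
print("bimodal" if len(modes) == 2 else "unimodal")  # bimodal
```

This also shows the fragility the card mentions: add one more case coded 2 and the tie vanishes, making 2 the single mode even though 3 is only slightly less frequent.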
Field notes
-discussed in relation to participant observation, notes -notes that describe what has been observed, heard, or otherwise experienced in a participant observation study; these notes usually are written after the observational session -should be written using jottings either immediately afterward or at least within the next 24 hours -usually, writing up notes takes much longer (about 3x longer) than the observing -field notes must be as complete, detailed, and true as possible to what was observed and heard -direct quotes should be distinguished clearly from paraphrased quotes, and both should be set off from the researcher's observations and reflections -pauses and interruptions should be indicated -the surrounding context should receive as much attention as possible, and a map of the setting should always be included with indications of where the individuals were at different times -careful notetaking yields a big payoff; field notes will suggest new concepts, causal connections, and theoretical propositions; social processes and settings can be described -notes should also include descriptions of the methodology: where researchers were standing or sitting while they observed, how they chose people for conversation or observation, what counts of people or events they made and why -sprinkled throughout the notes should also be a record of the researchers' feelings and thoughts while observing: when they were disgusted by some statement or act, when they felt threatened or intimidated, why their attention shifted from one group to another, and what ethical concerns arose -notes may be, in some situations, supplemented by still pictures, videotapes, and printed material; these can call attention to some features of the social situation and actors within it that were missed in the notes -p. 387-389
Key informant
-discussed in relation to participant observation, relationships -an insider who is willing and able to provide a field researcher with superior access and information, including answers to questions that arise in the course of the research -a knowledgeable insider who knew the group's culture and was willing to share access and insights with the researcher -p. 383
Theoretical sampling
-discussed in relation to participant observation, sampling -a sampling method recommended for field researchers by Glaser and Strauss; a theoretical sample is drawn in a sequential fashion, with settings or individuals selected for study as earlier observations or interviews indicate that these settings or individuals are influential -a systematic approach to sampling in participant observation studies -when field researchers discover in an investigation that particular processes seem to be important, inferring that certain comparisons should be made or that similar instances should be checked, the researchers then choose new settings or individuals that permit these comparisons or checks -p. 385
Formative evaluation
-discussed in relation to process evaluation, which is a type of question focused on in evaluation research -process evaluation that is used to shape and refine program operations -can specify the treatment process and lead to changes in recruitment procedures, program delivery, or measurement tools -can provide a strong foundation for managers as they develop the program -p. 498
Tacit knowledge
-discussed in relation to qualitative data analysis - corroboration and legitimization of conclusions -in field research, a credible sense of understanding of social processes that reflects the researcher's awareness of participants' actions as well as their words, and of what they fail to state, feel deeply, and take for granted -a qualitative researcher's conclusions should also be assessed by his or her ability to provide a credible explanation of some aspect of social life -that explanation should capture group members' tacit knowledge of the social processes that were observed, not just their verbal statements about these processes -the largely unarticulated, contextual understanding that is often manifested in nods, silences, humor, and naughty nuances -p. 425
Normal distribution
-discussed in relation to standard deviation, a measure of variation -a symmetric, bell-shaped distribution that results from chance variation around a central value; is symmetric and tapers off in a characteristic shape from its mean -68% of cases will lie between plus and minus one standard deviation from the mean -95% of cases will lie between plus and minus 1.96 (roughly two) standard deviations from the mean -the correspondence of the standard deviation to this distribution enables us to infer how confident we can be that the mean (or other statistic) of a population sampled randomly is within a certain range of the sample mean; confidence limits indicate how confident we can be, based on our random sample, that the value of some statistic in the population falls within a particular range; directions for computing the confidence limits are provided in the text -p. 334
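The 1.96 rule for confidence limits can be sketched with made-up data (the sample values below are hypothetical, not from the text, and this uses the simple large-sample 1.96 approximation rather than an exact method):

```python
import math
import statistics

# Sketch of 95% confidence limits for a sample mean using the 1.96 rule.
sample = [52, 48, 50, 55, 47, 51, 49, 53, 50, 45]        # made-up values; mean is 50
mean = statistics.mean(sample)
se = statistics.stdev(sample) / math.sqrt(len(sample))   # standard error of the mean

lower, upper = mean - 1.96 * se, mean + 1.96 * se
print(f"95% CI for the mean: ({lower:.1f}, {upper:.1f})")
```

The interpretation matches the card: based on this random sample, we can be about 95% confident that the population mean falls between the lower and upper limits.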
Behavior coding
-discussed in relation to survey pretests; this can enhance the value of survey pretests -an observation in which the researcher categorizes, according to strict rules, the number of times certain behaviors occur -the researcher observes the interviews or listens to recorded interviews and codes, according to strict rules, the number of times that difficulties occur with questions (asking for clarification, rephrasing questions, etc.), which is then used to improve question wording and instructions for interviewers about reading the questions -p. 274
Likert item
-discussed in relation to survey questions - maximize the utility of response categories -a statement followed by response choices ranging from strongly agree to strongly disagree -a common approach for measures of attitude intensity; covers the full range of possible agreement -p. 265
Unlabeled unipolar response options
-discussed in relation to survey questions - maximize the utility of response categories -response choices for a survey question that use numbers to identify categories ranging from low to high (or high to low) -ex: How comfortable do you feel disagreeing with the person who supervises your work? With a range from 1 (meaning not at all) to 10 (meaning extremely) -five categories work well -responses are more reliable when categories are labeled than when they are unlabeled like this -p. 265
Labeled unipolar response options
-discussed in relation to survey questions - maximize the utility of response categories -response choices for a survey question that use words to identify categories ranging from low to high (or high to low) -ex: extremely comfortable, very comfortable, quite comfortable, somewhat comfortable, not at all comfortable -five categories work well -responses are more reliable when these categories are labeled than when unlabeled/only identified by numbers -p. 265
Social desirability bias
-discussed in relation to survey questions, avoid making either disagreement or agreement problematic -the tendency to agree with a statement just to avoid seeming disagreeable -aka agreement bias or acquiescence effect; ex: a question stating that individuals were more to blame for crime and lawlessness than social conditions elicited a higher rate of agreement that individuals were more to blame; numerous studies of agreement bias suggest that about 10% of respondents will agree just to be agreeable, without regard to what they really think -steps to reduce the likelihood of this: -present both sides of attitude scales in the question itself; ex: in general, do you believe that individuals or social conditions are more to blame for crime and lawlessness in the US? -response choices should be phrased to make each one seem as socially approved/agreeable as the others -don't use a question such as: to what extent do you agree or disagree with the statement: the new healthcare plan is worthy of support? Instead use a question like: to what extent do you support or oppose the new healthcare plan? -p. 265
Interpretive questions
-discussed in relation to surveys -questions included in a questionnaire or interview schedule to help explain answers to other important questions -helps the researcher understand what the respondent meant by his or her responses to particular questions -ex: when asked whether their emotional state affected their driving at all, respondents would reply that their emotions had very little effect on their habits; when asked to describe the circumstances surrounding their last traffic violation, respondents replied with things like: I was mad at my girlfriend, we had a family quarrel, etc.; unlikely that respondents were lying in the first question; more likely, they simply did not interpret their own behavior in terms of general concepts such as emotional state and their responses to the first question were likely to be misinterpreted without further detail provided by answers to the second -consider 5 issues when developing these: -1. What do the respondents know?; answers to many questions about current events and government policies are almost uninterpretable without also learning what the respondents know -2. What relevant experiences do the respondents have?; experiences undoubtedly color responses -3. How consistent are the respondents' attitudes, and do they express some larger perspective or ideology? -4. Are respondents' actions consistent with their expressed attitudes?; ex: interpret differently the support for gender equality among married men who help with chores and those who do not -5. 
How strongly are the attitudes held?; attitudes that are held strongly are more likely to be translated into action than are attitudes that are held less strongly; ex: strong beliefs on abortion or guns are more likely to result in that person protesting -the qualitative insights produced by open-ended questions can be essential for interpreting the meaning of fixed responses; ex: asking administrators, case managers, clients, and family members in mental health systems whether their programs are effective -when asked in closed-ended questions they usually responded that the programs were effective, but when asked open-ended questions they pointed to many program failings -p. 275-276
Survey - order of questions
-discussed in relation to surveys -the order in which questions are presented will influence how respondents react to the questionnaire as a whole and how they may answer some questions -the individual questions should be sorted into broad thematic categories, which then become separate sections in the questionnaire -both the sections and the questions within the sections should be organized in a logical order that would make sense in a conversation -the first question deserves special attention, particularly if the questionnaire is to be self-administered; this question signals to the respondent what the survey is about, whether it will be interesting, and how easy it will be to complete; therefore, it should connect to the purpose of the study, be interesting and easy, and apply to everyone in the sample -one or more filter or screening questions may also appear to identify respondents for whom a question is not intended -question order can lead to context effects - when one or more survey questions influence how subsequent questions are interpreted -ex: first asking the question "do you think it should be possible for a pregnant woman to obtain a legal abortion if she is married and does not want more children?" had 58% agreement; when asked afterward, "do you think abortion should be allowed for a defective fetus?"
only 40% agreed; the second question altered the frame of reference, making the first question seem frivolous in comparison -part-whole question effect - when responses to a general or summary survey question about a topic are influenced by responses to an earlier, more specific question about that topic; ex: married people tend to report that they are happier in general if the general happiness question is preceded by a question about their happiness with their marriage -prior questions can influence how questions are comprehended, what beliefs shape responses, and whether comparative judgments are made; the potential for context effects is greatest when two or more questions concern the same issue or closely related ones; the impact of question order also tends to be greater for general, summary type questions -can be identified empirically if the question order is reversed on a subset of the questionnaires (split-ballot design) and the results compared; however, knowing that a context effect occurs does not tell us which order is best; could randomize the order in which key questions are presented, so that any effects of question order cancel each other out -some questions may be presented as matrix questions - a series of survey questions that concern a common theme and that have the same response choices -questions are written so that a common initial phrase applies to each one, which shortens the questionnaire by reducing the number of words that must be used for each question and emphasizes a common theme among the questions and so invites answering each question in relation to other questions in the matrix -ex: How much difficulty do you have...-going up and down stairs? kneeling? hearing? -p. 277-279
Context effects
-discussed in relation to surveys, order of questions can lead to this -when one or more survey questions influence how subsequent questions are interpreted -ex: first asking the question "do you think it should be possible for a pregnant woman to obtain a legal abortion if she is married and does not want more children?" had 58% agreement; when asked afterward, "do you think abortion should be allowed for a defective fetus?" only 40% agreed; the second question altered the frame of reference, making the first question seem frivolous in comparison -part-whole question effect - when responses to a general or summary survey question about a topic are influenced by responses to an earlier, more specific question about that topic -ex: married people tend to report that they are happier in general if the general happiness question is preceded by a question about their happiness with their marriage -prior questions can influence how questions are comprehended, what beliefs shape responses, and whether comparative judgments are made; the potential for context effects is greatest when two or more questions concern the same issue or closely related ones; the impact of question order also tends to be greater for general, summary type questions -can be identified empirically if the question order is reversed on a subset of the questionnaires (split-ballot design) and the results compared; however, knowing that a context effect occurs does not tell us which order is best; could randomize the order in which key questions are presented, so that any effects of question order cancel each other out -p. 278
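The randomization remedy mentioned above - presenting key questions in a random order so that context effects cancel each other out - can be sketched in Python. This is a minimal illustration, not a textbook procedure; the shortened question texts and the function name are invented for the example:

```python
import random

# Hypothetical key questions whose relative order may produce context effects.
key_questions = [
    "Should abortion be legal for a married woman who wants no more children?",
    "Should abortion be allowed for a defective fetus?",
]

def randomized_order(questions, seed=None):
    """Return an independently shuffled copy of the question list."""
    rng = random.Random(seed)
    order = questions[:]   # copy, so the master question list is untouched
    rng.shuffle(order)
    return order

# Each respondent receives his or her own randomized question order, so any
# effect of one ordering is offset, on average, by the reverse ordering.
for respondent_id in range(3):
    print(respondent_id, randomized_order(key_questions, seed=respondent_id))
```

A split-ballot design is the fixed-allocation version of the same idea: half the questionnaires get one order and half the reverse, so the two orders can be compared directly.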
Survey pretest
-discussed in relation to surveys, refining and testing questions -a method of evaluating survey questions and procedures by testing them on a small sample of individuals like those to be included in the actual survey and then reviewing responses to the questions and reactions to the survey procedures -perhaps 15-25 respondents who are similar to those who will be sampled; try to identify questions that caused problems -can do this with mailings and actual interviews -on the pretest for a written questionnaire, you may include some space for individuals to comment on a key question, or, for in-person interviews, audio record them for later review -value of this can be enhanced with behavior coding - observation in which the researcher categorizes, according to strict rules, the number of times certain behaviors occur; the researcher observes the interviews or listens to recorded interviews and codes, according to strict rules, the number of times that difficulties occur with questions (asking for clarification, rephrasing questions, etc.), which is then used to improve question wording and instructions for interviewers about reading the questions -p. 274
Big Data - ethical issues
-ethical concern because it can reflect the behavior of many individuals who did not consent to participate in research -subject confidentiality is a concern; whenever possible, all information that could identify individuals should be removed from the records to be analyzed so that no link is possible to the identities of subjects (if information cannot totally be removed, then the ICPSR restricts access to that data and requires agreement to confidentiality) -data quality is always a concern -when enormous amounts of data are available for analysis, the usual procedures for making data anonymous may no longer ensure that it stays that way -makes possible surveillance and prediction of behavior on a large scale; without strict rules and close monitoring, potential invasions of privacy and unwarranted suspicions are enormous -social experiments with Big Data can literally change the social environment, and so this too raises ethical concerns (ex: Facebook feed and voting) -we can ask the same questions of other sources of big data where individuals provide the data: -did these Twitter users consent to be studied by you? -can we ensure the same level of confidentiality we do with other data sources when Twitter can tell us where a person lives and works, and who is in their family/friendship network? -currently, most Institutional Review Boards, including UWM's, say it's fine to study big data like public tweets without the users' consent -in fact, these Institutional Review Boards treat big data just like any other publicly available secondary dataset in that they waive review (they have no meeting to weigh the risks and benefits) -many debates to come in the next few years; YOU (the human) need to decide on your own what you think is ethical -p. 544-547; Mod. 9 PP
Secondary data analysis - challenges
-greatest challenge results from the researcher's inability to design data collection methods that are best suited to answer his or her research question -the analyst cannot test and refine the methods to be used on the basis of preliminary feedback from the population or processes to be studied -not possible for the analyst to engage in the iterative process of making observations, developing concepts, making more observations, and refining the concepts in qualitative research -these limitations may mean that the analyst will not be able to focus on the specific research question of original interest or to use the most appropriate sampling or measurement approach for studying that research question; if the primary study was not designed to measure adequately a concept that is critical to the secondary analyst's hypothesis, the study may have to be abandoned until a more adequate source of data can be found; hypotheses, or even the research question itself, may be modified to match the analytic possibilities presented by the available data -data quality is always a concern, even when collected by an official government agency; government records result, at least in part, from a political process that may not have as its first priority the design or maintenance of high-quality data for social scientific analysis; research based on official records can only be as good as the records themselves; there is little guarantee that the officials' acts and decisions were recorded in a careful and unbiased manner; the same is true for data collected by employees of private and nonprofit organizations -there is value in using multiple methods, particularly when the primary method of data collection is analysis of records generated by street-level bureaucrats (officials who serve clients and have a high degree of discretion); when officials make decisions and record the bases for their decisions without much supervision, records may diverge considerably from the
decisions they are supposed to reflect -concern regarding research across national boundaries because different data collection systems and definitions of key variables may have been used -secondary analysts of qualitative data must seek opportunities for carrying on a dialogue with the original researchers -may not be able to match your research question to a suitable data source -data quality may still be a concern, even with government data -documentation problems may leave you with unanswered questions about how the data were generated -p. 535-536; Mod. 9 PP
Data analysis - preparation
-if you conduct your own survey or experiment, your quantitative data must be prepared in a format suitable for computer entry; several options are available: -questionnaires or other data entry forms (data entry - the process of typing (word processing) or otherwise transferring data on survey or other instruments into a computer file) can be designed for scanning or direct computer entry -coding (the process of assigning a unique numerical code to each response to survey questions) of all responses should be done before data entry; once the computer database software is programmed to recognize the response codes, the forms can be fed through a scanner and the data will then be entered directly into the database; if responses or other forms of data have been entered on non-scannable paper forms, a computer data entry program should be used that will allow the data to be entered into the database by clicking on boxes corresponding to the response codes -if a data entry program is not used, responses can be typed directly into a computer database; if data entry is to be done this way, the questionnaires or other forms should be pre-coded; precoding - the process through which a questionnaire or other survey form is prepared so that a number represents every response choice, and respondents are instructed to indicate their response to a question by checking a number; easier to type in the strings of numbers than to type in the responses themselves -whatever data entry method is used, the data must be checked for errors; data cleaning - the process of checking data for errors after the data have been entered in a computer file; the first step is to check responses before they are entered into the database to make sure that one and only one valid answer code has been clearly circled or checked for each question (unless multiple responses are allowed or a skip pattern was specified); the next step is to make sure that no invalid codes have
been entered; these involve codes that fall outside the range of allowable values for a given variable and those that represent impossible combinations of responses to two or more questions -most survey research organizations now use a database management program to control data entry; the program prompts the data entry clerk for each response code, checks the code to ensure that it represents a valid response for that variable, and saves the response code in the data file; this sharply reduces the possibility of data entry errors -if data are typed into a text file or entered directly through the data sheet of a statistics program, a computer program must be written to "define the data"; a data definition program identifies the variables that are coded in each column, attaches meaningful labels to the codes, and distinguishes values representing missing data -p. 315-316
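The invalid-code check described above can be sketched in a few lines of Python. This is a minimal illustration rather than a production data-cleaning routine; the variable names and allowable code ranges in the codebook are invented for the example:

```python
# Minimal sketch of a data-cleaning range check against a hypothetical codebook.
# Each variable maps to its set of allowable response codes; any entered code
# outside that set is flagged for correction before analysis.

CODEBOOK = {
    "sex":     {1, 2, 9},           # 1 = male, 2 = female, 9 = missing
    "satisfy": {1, 2, 3, 4, 5, 9},  # five-point scale plus a missing-data code
}

def find_invalid_codes(records):
    """Return (row index, variable, code) for every out-of-range entry."""
    problems = []
    for i, record in enumerate(records):
        for variable, code in record.items():
            # Variables absent from the codebook are skipped, not flagged.
            if code not in CODEBOOK.get(variable, {code}):
                problems.append((i, variable, code))
    return problems

records = [
    {"sex": 1, "satisfy": 4},   # all codes valid
    {"sex": 3, "satisfy": 7},   # both codes fall outside the allowable ranges
]
print(find_invalid_codes(records))  # → [(1, 'sex', 3), (1, 'satisfy', 7)]
```

A database management program performs the same check at entry time, prompting the clerk to re-enter any code that fails validation; checking for impossible combinations across two or more questions would need additional cross-variable rules.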
Cross-tabulation (crosstab; or contingency table)
-in the simplest case, a bivariate (two-variable) distribution, showing the distribution of one variable for each category of another variable; can be elaborated using three or more variables -most data analyses focus on relationships between variables to test hypotheses or just to describe or explore relationships; for each of these purposes, we must examine the association between two or more variables -aka bivariate distribution -also provides a simple tool for controlling one or more variables while examining the association between others -converting data from case records into a crosstab: -1. Record each respondent's answers to both questions -2. Create the categories for the table -3. Tally the number of respondents whose answers fall in each table category -4. Convert the tallies to frequencies and add up the row and column totals -useful method for examining the relationship between variables only when they have just a few categories -for most analyses, 10 categories is a reasonable upper limit, but even 10 is too many unless you have a pretty large number of cases (more than 100) -if you wish to include in a crosstab a variable with many categories, or one that varies along a continuum with many values, you should first recode the values of that variable to a smaller number (ex: recoding a five-point index to just high, medium, and low) -marginal distributions - the summary distributions in the margins of a cross-tabulation that correspond to the frequency distribution of the row variable and of the column variable; ex: family income, distribution of voting (the labels) -the independent variable is usually the column variable, while the dependent variable then is the row variable -gives the number of cases and the percentages (relative frequencies, computed by dividing the frequency of cases in a particular category by the total number of cases and then multiplying by 100) -follow these rules when creating and reading a percentage table: -1.
Make the independent variable the column variable and the dependent variable the row variable -2. Percentage the table column by column; columns should add to 100% -3. Compare the distributions of the dependent variable (row variable) across each column -the independent variable does not HAVE to be the column variable, but you must be consistent -reveals four aspects of the association between variables: -1. Existence > do the percentage distributions vary at all between categories of the independent variable? -2. Strength > how much do the percentage distributions vary between categories of the independent variable? In most analyses, the analyst would not pay much attention to differences of less than 10 percentage points between categories of the independent variable -3. Direction > for quantitative variables, do values on the dependent variable tend to increase or decrease with an increase in value on the independent variable? -4. Pattern > for quantitative variables, are changes in the percentage distribution of the dependent variable fairly regular (increasing or decreasing) or do they vary (ex: increasing, then decreasing)? -monotonic - a pattern of association in which the value of cases on one variable increases or decreases fairly regularly across the categories of another variable -curvilinear - any pattern of association between two quantitative variables that does not involve a regular increase or decrease -p. 337-338
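The four conversion steps and the percentaging rules above can be sketched in Python using only the standard library. A minimal illustration with invented data (the category labels and case records are hypothetical):

```python
from collections import Counter

# Hypothetical case records: (independent variable, dependent variable).
cases = [
    ("low", "yes"), ("low", "yes"), ("low", "no"),
    ("high", "yes"), ("high", "no"), ("high", "no"), ("high", "no"),
]

cols = ["low", "high"]  # independent variable = column variable
rows = ["yes", "no"]    # dependent variable = row variable

# Steps 1-3: tally the number of respondents falling in each table cell.
tallies = Counter(cases)

# Step 4 plus percentaging: divide each cell frequency by its column total
# and multiply by 100, so every column sums to 100%.
col_totals = {c: sum(tallies[(c, r)] for r in rows) for c in cols}
percents = {(c, r): 100 * tallies[(c, r)] / col_totals[c]
            for c in cols for r in rows}

# Compare the distribution of the dependent variable across columns:
for r in rows:
    print(r, [round(percents[(c, r)], 1) for c in cols])
# "yes" is 66.7% in the low column vs. 25.0% in the high column - a
# difference of more than 10 percentage points, suggesting an association.
```

The same table could be produced with pandas (`pd.crosstab(..., normalize="columns")`), but the hand-built version makes the tally-then-percentage logic explicit.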
Intensive interviews - questions
-intensive interviewers must plan their main questions around an outline of the interview topic -the questions should generally be short and to the point -more details can then be elicited through nondirective probes (ex: can you tell me more about that? Uh-huh, etc.) -follow-up questions can be tailored to answers to the main questions -this flexible questioning does a better job of clarifying concepts than a fixed set of questions would -important to learn how to "get out of the way" as much as possible during the interview -audio recorders commonly are used to record intensive interviews: -most researchers who have recorded interviews feel that they do not inhibit most interviewees and, in fact, are routinely ignored -the occasional respondent may "speak for the audio recorder" -constant note taking during an interview prevents adequate displays of interest and appreciation by the interviewer and hinders the degree of concentration that results in the best interviews, so recorders are used -recording may be avoided if the interviewee is concerned about public image, like those in politics for example -p. 393-395
Group-administered survey
-one of five main types of surveys -a survey that is completed by individual respondents who are assembled in a group -response rate is usually not a major concern because most group members will participate -seldom feasible because it requires what might be called a captive audience; most populations (except for students, employees, military personnel, or members of an organization) cannot be sampled in such a setting -the administering individual must be careful to minimize comments that might bias answers or that could vary between groups -a standard introductory statement should be read to the group that expresses appreciation for their participation, describes the steps of the survey, and emphasizes that the survey is not the same as a test -a cover letter like the one used in mailed surveys also should be distributed with the questionnaires -to emphasize confidentiality, respondents should be given an envelope in which to seal their questionnaires after they are completed -another issue is the possibility that respondents will feel coerced to participate and as a result will be less likely to answer questions honestly, or, if done in an organization, the respondents may feel that the researcher is not independent; no good solution to this, but an introductory statement about independence and allowing for questions from participants is good -p. 286
Overt participant observer
-one of four roles researchers can adopt in participant observation -a researcher who gathers data through participating and observing in a setting where he or she develops a sustained relationship with people while they go about their normal activities; used to refer to a continuum of possible roles, from complete observation to complete participation -specifically refers to a researcher who acknowledges her research role and participates in group activities -inform the individuals of their role and participate -participating and observing have two clear ethical advantages: because group members know the researcher's real role, they can choose to keep some information hidden; the researcher can decline to participate in unethical or dangerous activities without fear of exposing his or her identity -most of these researchers get the feeling that, after they have become known and at least somewhat trusted figures in the group, their presence does not have any real effect on members' actions -however, it can be difficult to maintain a fully open research role in a setting in which new people come and go, often without providing appropriate occasions during which the researcher can disclose his or her identity -the argument that the researcher's role can be disclosed without affecting the social process under investigation is less persuasive when the behavior to be observed is illegal or stigmatized, so that participants have reasons to fear the consequences of disclosure to an outsider -even when researchers maintain a public identity as researchers, ethical dilemmas arising from participation in group activities do not go away; researchers may have to prove themselves to the group members by joining in some of their questionable activities -experienced participant observers try to lessen some of the problems of identity disclosure by evaluating both their effect on others and the effect others have on them; they are also sure to preserve physical space and
regular time to concentrate on their research -p. 377-379
Covert/complete participant
-one of four roles researchers can adopt in participant observation -a role in field research in which the researcher does not reveal his or her identity as a researcher to those who are observed while participating; researcher acts just like other group members and does not disclose her research role -used to lessen reactive effects and gain entry to otherwise inaccessible settings -do their best to act similar to other participants in a social setting or group (ex: Laud Humphreys and the tearoom trade) -problems associated with this: -1. Cannot take notes openly or use any obvious recording devices; must write up notes based solely on memory and must do so at times when it is natural for them to be away from group members -2. Cannot ask questions that will arouse suspicion; therefore, they often have trouble clarifying the meaning of other participants' attitudes or actions -3. The role is difficult to play; researchers will not know how the regular participants would act in every situation in which the researchers find themselves; researchers' spontaneous reactions to every event are unlikely to be consistent with those of the regular participants; suspicion that researchers are not "one of us" may then have reactive effects -4.
Need to keep up the act at all times while in the setting under study; researchers may experience enormous psychological strain, particularly in situations where they are expected to choose sides in intragroup conflict or to participate in criminal or other acts; some researchers may become so wrapped up in the role they are playing that they adopt not only the mannerisms but also the perspectives and goals of the regular participants - they "go native" and abandon research goals and cease to evaluate critically what they are observing -ethical issues with this: -covert researchers cannot anticipate the unintended consequences of their actions for research subjects -if other people suspect the identity of the researcher or if the researcher contributes to or impedes group action, the consequences can be adverse -social scientists may be harmed when covert research is disclosed during the research or on its publication because distrust of social scientists increases and access to research opportunities decreases -however, a total ban on covert participation would prevent many studies; studies of unusual religious or sexual practices, institutional malpractice, etc. would not be possible -p. 377, 379-381
Covert observer
-one of four roles researchers can adopt in participant observation -a role in participant observation in which the researcher does not participate in group activities and is not publicly defined as a researcher -researcher observes others without participating in social interaction and does not self-identify as a researcher -in social settings involving many people, in which observing while standing or sitting does not attract attention, this is possible and is unlikely to have much effect on social processes -take notes, systematically check out the different areas of a public space or different people in a crowd, arrive and leave at particular times to do your observing, write up what you have observed and possibly publish it -should always remember to evaluate how your actions in the setting and your purposes for being there may affect the actions of others and your own interpretations -p. 377-378
Participant Observation
-one of three distinctive qualitative research techniques -a qualitative method for gathering data that involves developing a sustained relationship with people while they go about their normal activities -often used in conjunction with intensive interviewing -represents the core method of ethnographic research -a qualitative method in which natural social processes are studied as they happen in the field and left relatively undisturbed -this is the classic field research method - a means for seeing the social world as the research subjects see it, in its totality, and for understanding subjects' interpretations of that world -by observing people and interacting with them during their normal activities, participant observers seek to avoid the artificiality of experimental design and the unnatural structured questioning of survey research -encourages consideration of the context in which social interaction occurs, of the complex and interconnected nature of social relations, and of the sequencing of events -the term participant observer actually refers to several different specific roles that the researcher can adopt: -1. Covert observer - a role in participant observation in which the researcher does not participate in group activities and is not publicly defined as a researcher; researcher observes others without participating in social interaction and does not self-identify as a researcher -2.
Complete (or overt) observer - a role in participant observation in which the researcher does not participate in group activities and is publicly defined as a researcher; does not participate, but is defined publicly as a researcher -3. Complete (covert) participant - a role in field research in which the researcher does not reveal his or her identity as a researcher to those who are observed while participating; researcher acts just like other group members and does not disclose her research role -4. Participant (overt) observer - a researcher who gathers data through participating and observing in a setting where he or she develops a sustained relationship with people while they go about their normal activities; used to refer to a continuum of possible roles, from complete observation to complete participation; specifically refers to a researcher who acknowledges her research role and participates in group activities -ex: the researcher became a patient care technician, hung out with other workers, and took notes; did these workers know the patient care technician was a researcher? If covert observation = no; if overt observation = yes; ethical issue: a problem with covert is, how would you feel if you found out your co-worker buddy was taking down notes about your lunches together without you knowing? -p. 376-377; Mod. 8 PP
Qualitative research - generalizability
-qualitative research often focuses on populations that are hard to locate or are very limited in size; therefore, nonprobability sampling methods such as availability sampling and snowball sampling are often used -can increase generalizability in qualitative investigations by: -1. studying the typical > choosing sites on the basis of their fit with a typical situation is far preferable to choosing on the basis of convenience -2. performing multisite studies > a finding emerging repeatedly from numerous sites would be better than from just one or two sites -3. investigating social processes in a situation that differs from the norm will improve understanding of how social processes work in typical situations -some reject the value of generalizability as most understand it, instead arguing that understanding the particulars of a situation in depth is an important object of inquiry in itself -p. 399-400
Field research
-research in which natural social processes are studied as they happen and left relatively undisturbed -many early researchers doing this were former social workers and reformers who studied various locations like cities, small towns, etc. -p. 367
Participant observation - relationships
-researchers must be careful to manage their relationships in the research setting so that they can continue to observe and interview diverse members of the social setting throughout the long period typical of participant observation -every action the researcher takes can develop or undermine this relationship -interaction early in the research process is particularly sensitive because participants don't know the researcher and the researcher doesn't know them -key informant - an insider who is willing and able to provide a field researcher with superior access and information, including answers to questions that arise in the course of the research; a knowledgeable insider who knows the group's culture and is willing to share access and insights with the researcher -advice from experienced participant observers for maintaining relationships in the field: -1. develop a plausible (and honest) explanation for yourself and your study -2. maintain the support of key individuals in groups or organizations under study -3. be unobtrusive and unassuming; don't show off your expertise -4. don't be too aggressive in questioning others -5. ask very sensitive questions only of informants with whom your relationship is good -6. be self-revealing, but only to a point -7. don't fake your social similarity with your subjects -8. avoid giving or receiving monetary or other tangible gifts, but do not violate norms of reciprocity -9. be prepared for difficulties and tensions if multiple groups are involved -p. 383-384
Survey questions - maximize the Utility of response categories
-response choices should be considered carefully because they help respondents to understand what the question is about and what types of responses are viewed as relevant -questions with fixed response choices must provide one and only one possible response for everyone asked the question (exhaustive and mutually exclusive); ranges should not overlap and should provide a response option for all respondents -two exceptions to this (these should be kept to a minimum): -1. Filter questions may tell some respondents to skip over a question (the response choices do not have to be exhaustive) -2. Respondents may be asked to check all that apply (response choices are not mutually exclusive) -vagueness in response choices should also be avoided; questions about thoughts and feelings will be more reliable if they refer to specific times or events, but do not make unreasonable demands on people's memories; unless your focus is on major or routine events that are unlikely to be forgotten, limit questions about specific past experiences to the past month -avoid making fine distinctions at one end of a set of response choices, while using broader categories on the other end -sometimes problems with response choices can be corrected by adding questions; ex: redo the question how many years of schooling have you completed? to what is the highest degree you have received? 
-adding questions can also improve memory about specific past events; a series of questions with a specific focus on an issue may increase the ability to recall it -response choices should be mutually exclusive and exhaustive -response choices should reflect meaningful distinctions; ex: the distinction between very satisfied and somewhat satisfied is meaningful, while the distinction between moderately satisfied and somewhat satisfied is not -Likert item - a statement followed by response choices ranging from strongly agree to strongly disagree; a common approach for measures of attitude intensity; covers the full range of possible agreement -Bipolar response options - response choices to a survey question that include a middle category and parallel responses with positive and negative valence (can be labeled or unlabeled); ex: very comfortable, mostly comfortable, slightly comfortable, feel neither comfortable nor uncomfortable, slightly uncomfortable, mostly uncomfortable, very uncomfortable; seven categories will capture most variation on bipolar ratings -Labeled unipolar response options - response choices for a survey question that use words to identify categories ranging from low to high (or high to low); ex: extremely comfortable, very comfortable, quite comfortable, somewhat comfortable, not at all comfortable; five categories work well; responses are more reliable when the categories are labeled than when they are unlabeled/identified only by numbers -Unlabeled unipolar response options - response choices for a survey question that use numbers to identify categories ranging from low to high (or high to low); ex: How comfortable do you feel disagreeing with the person who supervises your work? with a range from 1 (meaning not at all) to 10 (meaning extremely); five categories work well; responses are less reliable with these unlabeled numeric categories than with labeled ones -p. 264-265
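As a sketch of the exhaustive/mutually exclusive requirement above, a small check can flag overlapping or gapped numeric response brackets (the helper name and bracket values are invented for illustration, not from the text):

```python
# Hypothetical check that numeric response brackets are mutually
# exclusive (no overlaps) and exhaustive (no gaps) over an expected range.

def check_brackets(brackets, low, high):
    """brackets: list of inclusive (min, max) tuples."""
    brackets = sorted(brackets)
    # exhaustive at the ends: first bracket starts at low, last ends at high
    if brackets[0][0] != low or brackets[-1][1] != high:
        return False
    for (a_lo, a_hi), (b_lo, b_hi) in zip(brackets, brackets[1:]):
        if b_lo <= a_hi:      # overlap -> not mutually exclusive
            return False
        if b_lo != a_hi + 1:  # gap -> not exhaustive
            return False
    return True

# "How many years of schooling...?" with boundary values appearing twice fails:
bad = [(0, 8), (8, 12), (12, 16), (16, 25)]
good = [(0, 8), (9, 12), (13, 16), (17, 25)]
print(check_brackets(bad, 0, 25))   # False
print(check_brackets(good, 0, 25))  # True
```

A respondent with exactly 8 years of schooling fits two of the `bad` brackets, which is precisely the overlap the rule forbids.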
Survey questions - avoid confusing phrasing
-several ways to avoid this: -in most cases, a simple, direct approach to asking a question minimizes confusion; use shorter rather than longer words and sentences: ex: job concerns instead of work-related employment issues; try to keep the total number of words to 20 or fewer and the number of commas to 3 or fewer -sometimes, when sensitive issues or past behaviors are the topic, a longer question can provide cues that make the respondent feel comfortable or aid memory -breaking up complex issues into simple parts also reduces confusion (several shorter questions instead of one long question/paragraph) -do not use a double negative - a question or statement that contains two negatives, which can muddy the meaning of a question; ex: do you disagree that there should not be a tax increase?; ex: does it seem possible or does it seem impossible to you that the Nazi extermination of the Jews never happened? -do not use double-barreled questions - a single survey question that actually asks two questions but allows only one answer; guaranteed to produce uninterpretable results; ex: do you think that President Nixon should be impeached and compelled to leave the presidency, or not?; asks about both impeachment and being compelled to leave office at once, while people could think one is warranted but not the other -important to identify clearly what kind of information each question is to obtain (attitudes, beliefs, behavior, attributes, etc.; rarely can a single question address more than one of these at a time) -be sure to ask only respondents who may have the information needed to answer the question; ex: if you include a question about job satisfaction in a survey of the general population, first ask respondents whether they have a job, because you will only annoy them if the question does not apply to them; filter questions - a survey question used to identify a subset of respondents who then are asked other questions (ex: are you currently employed?); Skip patterns - the unique combination of questions
created in a survey by filter questions and contingent questions (ex: if you answered no to this question, please skip to this #; should be indicated clearly with an arrow or mark in the questionnaire); Contingent question - a question that is asked of only a subset of survey respondents; ex: if you answered yes to this question, go on to this one (ex: if you answered yes to current employment, then how satisfied are you with your current job?) -p. 261-262
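The filter/skip/contingent logic above can be sketched as simple branching (the question wording follows the employment example in the notes; the `administer`/`ask` names are invented for illustration):

```python
# Minimal sketch of a filter question with a skip pattern:
# only respondents who pass the filter see the contingent question.

def administer(ask):
    """ask: a function that takes a question string and returns the answer."""
    answers = {}
    answers["employed"] = ask("Are you currently employed? (yes/no)")
    if answers["employed"] == "yes":
        # contingent question, asked only of the employed subset
        answers["job_satisfaction"] = ask(
            "How satisfied are you with your current job? (1-5)")
    # respondents answering "no" skip past the job question entirely
    return answers

# a simulated respondent who is not employed never sees the job item:
print(administer(lambda q: "no"))   # {'employed': 'no'}
```

The point of the skip pattern is visible in the returned dictionary: the contingent key simply never exists for respondents filtered out.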
Survey - refine and test questions
-simply asking what appear to you to be clear questions does not ensure that people have a consistent understanding of what you are asking; you need external feedback via some type of pretest, which is an essential step in preparing a survey -one important form of feedback results from simply discussing the questionnaire content with others; should consult expert researchers, key figures in the area of study, and individuals from the population to be sampled -review relevant literature to find results obtained with similar surveys and comparable questions -forming a panel of experts (psychologists, questionnaire design experts, methodologists, etc.) to review questions can help -focus groups could be used to check for consistent understanding of terms and to identify the range of events or experiences about which people will be asked to report -cognitive interview - a technique for evaluating questions in which researchers ask people test questions and then probe with follow-up questions to learn how they understood the question and what their answers mean; ask people to describe what they are thinking when they answer questions, how they understood one or more words in the question, how confusing it was, etc.; no single approach to this is considered most effective -survey pretest - a method of evaluating survey questions and procedures by testing them on a small sample of individuals like those to be included in the actual survey and then reviewing responses to the questions and reactions to the survey procedures; perhaps 15-25 respondents who are similar to those who will be sampled; try to identify questions that caused problems; can do this with mailings and actual interviews; on the pretest for a written questionnaire, you may include some space for individuals to comment on a key question, or, in in-person interviews, audio record them for later review; the value of this can be enhanced with behavior coding - an observation method in which the researcher
categorizes, according to strict rules, the number of times certain behaviors occur; the researcher observes the interviews or listens to recorded interviews and codes, according to strict rules, the number of times that difficulties occur with questions (asking for clarification, rephrasing questions, etc.), which is then used to improve question wording and instructions for interviewers about reading the questions -which method of improving questions is best?: -behavior coding is the most reliable method, whereas simple pretesting is the least reliable -focus groups or cognitive interviews are better for understanding the bases of problems with particular questions -review of questions via expert panel is the least expensive and identifies the greatest number of problems with questions -p. 273-274
Survey questions - avoid making either disagreement or agreement problematic
-social desirability bias = the tendency to agree with a statement just to avoid seeming disagreeable; aka agreement bias or acquiescence effect; ex: a question stating that individuals were more to blame for crime and lawlessness than social conditions produced a higher rate of responses agreeing that individuals were more to blame -numerous studies of agreement bias suggest that about 10% of respondents will agree just to be agreeable, without regard to what they really think -steps to reduce the likelihood of this: present both sides of attitude scales in the question itself (ex: in general, do you believe that individuals or social conditions are more to blame for crime and lawlessness in the US?); response choices should be phrased to make each one seem as socially approved/agreeable as the others; don't use a question such as: to what extent do you agree or disagree with the statement: the new healthcare plan is worthy of support? Instead use a question like: to what extent do you support or oppose the new healthcare plan? -may gain a more realistic assessment of respondents' sentiment by adding to a question a counterargument in favor of one side to balance an argument in favor of the other side; ex: don't just ask whether employees should be required to join the union; instead ask whether employees should be required to join the union or should be able to make their own decision about joining -when an illegal or disapproved behavior or attitude is the focus, we have to be concerned that some respondents will be reluctant to agree that they have done or thought that thing; should write a question and responses that make agreement seem more acceptable; ex: instead of asking have you ever shoplifted something from a store?
ask instead: have you ever taken anything from a store without paying for it?; also, asking about a variety of behaviors or attitudes that range from socially desirable to socially unacceptable will soften the impact of agreeing with the socially unacceptable ones -p. 265-267
Survey questions - minimize the risk of bias
-specific words in survey questions should not trigger biases, unless that is the researcher's conscious intent -biased or loaded words and phrases tend to produce misleading answers; ex: the difference between support for sending troops if a "situation like Vietnam were to occur somewhere in the world" vs. to "stop a communist takeover" -answers can also be biased by more subtle problems in phrasing that make certain responses more or less attractive to particular groups -to minimize biased responses, researchers have to test reactions to the phrasing of a question -responses can also be biased when response alternatives do not reflect the full range of possible sentiment on the issue -if the response alternatives fall on a continuum from positive to negative, the number of positive and negative categories should be balanced so that one end of the continuum doesn't seem more attractive than the other; ex: if one end of the continuum is completely satisfied, the other end should be completely dissatisfied; it is also better to state both sides of the scale in the question itself; ex: how satisfied or dissatisfied are you with the intramural sports program here? -the advice to minimize the risk of bias is intentionally ignored by those who conduct surveys to elicit bias; push polling = a technique that has been used in some political campaigns in which negative information about the opposing candidate is conveyed; just a propaganda effort and not a survey -p. 262-263
Mean, Median, or Mode?
-the suitability of the median and mean for a particular application must be carefully assessed -the key issues to be considered are the variable's level of measurement, the shape of the distribution, and the purpose of the statistical summary -1. Level of measurement: -a key concern because to calculate the mean, we must add the values of the cases - a procedure that assumes the variable is measured at the interval or ratio level -the mean is simply an inappropriate statistic for variables measured at the ordinal and nominal levels (but some researchers do use it for ordinal-level variables) -because calculation of the median requires only that we order the values of cases, level of measurement is not as much of an issue (except for nominal variables) -2. Shape of the distribution: -when a distribution is perfectly symmetrical, the mean and median will be the same, but the mean and median are affected differently by skewness; because the median accounts for only the number of cases above and below the median point, not the value of these cases, it is not affected in any way by extreme values; because the mean is based on adding the values of all the cases, it will be pulled in the direction of exceptionally high or low values -when the value of the mean is larger than the median, we know that the distribution is skewed in a positive direction, with proportionately more cases with higher than lower values -when the mean is smaller than the median, the distribution is skewed in a negative direction -the median corresponds to the middle case, but the mean is pulled toward the value of the cases with extremely high or low values -3.
Purpose of the statistical summary (the single most important influence on the choice) -if the purpose is to report the middle position of a distribution, the median is appropriate, whether or not the distribution is skewed (ex: half the population is below age 49 and half is above) -it is not appropriate to use either the median or the mean for variables measured at the nominal level because their values cannot be ordered as higher or lower; the mode should be used instead -the median is best suited to measure the central tendency of variables at the ordinal level -the mean is only suited to measure the central tendency of variables measured at the interval and ratio levels -it is not entirely legitimate to represent the central tendency of a variable measured at the ordinal level with the mean because doing so requires summing the values of all cases; at the ordinal level, these values indicate only order, not actual quantities; nonetheless, researchers do use the mean with these variables -p. 329-331
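The effect of skew on the mean versus the median can be illustrated with Python's standard statistics module (the numbers are invented for illustration):

```python
# One extreme value pulls the mean but barely moves the median.
import statistics

symmetric = [30, 40, 50, 60, 70]
print(statistics.mean(symmetric), statistics.median(symmetric))  # 50 50

# add one extreme high value -> positive skew
skewed = [30, 40, 50, 60, 70, 500]
print(statistics.mean(skewed), statistics.median(skewed))  # 125 55.0
# mean > median, so the distribution is skewed in a positive direction
```

This matches the rule in the notes: the mean is pulled toward the extreme value, while the median depends only on the order of cases, so mean > median signals positive skew.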
Participant observation - sampling
-theoretical sampling - a sampling method recommended for field researchers by Glaser and Strauss; a theoretical sample is drawn in a sequential fashion, with settings or individuals selected for study as earlier observations or interviews indicate that these settings or individuals are influential -a systematic approach to sampling in participant observation studies -when field researchers discover in an investigation that particular processes seem to be important, inferring that certain comparisons should be made or that similar instances should be checked, the researchers then choose new settings or individuals that permit these comparisons or checks -experience sampling method (ESM) - a technique for drawing a representative sample of everyday activities, thoughts, and experiences; participants carry a pager and are beeped at random times over several days or weeks; on hearing the beep, participants complete a report designed by the researcher -when field studies do not require ongoing, intensive involvement by researchers in the setting, this can be used -the experiences, thoughts, and feelings of a number of people are sampled randomly as they go about their daily activities -although a powerful tool for field research, it is still limited by the need to recruit people to carry pagers -the generalizability of these findings relies on the representativeness, and reliability, of the persons who cooperate in the research -p. 384-386
Data analysis - ethical issues
-using statistics ethically means first and foremost being honest and open -findings should be reported honestly, and the researcher should be open about the thinking that guided the decision to use particular statistics -it is possible to distort social reality with statistics, and it is unethical to do so knowingly, even when the error is due more to carelessness than deceptive intent -summary statistics can easily be used unethically, knowingly or not; when we summarize a distribution in one or two numbers, we are losing much information; neither central tendency nor variation describes a distribution's overall shape, and taken separately, neither measure tells us about the other characteristics of the distribution -we should inspect the shape of any distribution for which we report summary statistics to ensure that the summary statistic does not mislead us or others because of an unusual degree of skewness -it is possible to mislead those who read statistical reports by choosing summary statistics that accentuate a particular feature of a distribution (ex: property values) -whenever you need to group data in a frequency distribution or graph, you can reduce the potential for problems by inspecting the ungrouped distribution and then using a grouping procedure that does not distort the distribution's basic shape -when you create graphs, be sure to consider how the axes you choose may change the distribution's apparent shape -p. 335-336
Survey research - errors
-without careful attention to sampling, measurement, and overall survey design, the survey effort is likely to flop; for a survey to succeed, it must minimize four types of error: poor measurement, nonresponse, inadequate coverage of the population, and sampling error -1. Poor Measurement -the theory of satisficing can help us understand the problem -it takes effort to answer survey questions carefully: respondents have to figure out what each question means, then recall relevant information, and finally decide which answer is most appropriate -survey respondents satisfice (= accept an available option as satisfactory) when they reduce the effort required to answer a question by interpreting questions superficially and giving what they think will be an acceptable answer -primacy effect = the tendency to choose responses appearing earlier in a list of responses -errors in measurement also arise when respondents are unwilling to disclose their feelings and behaviors, unable to remember past events, or misunderstand survey questions -what people say they can do is not necessarily what they are able to do, and what people report that they have done is not necessarily what they have actually done -acquiescent response bias = a natural desire to say what the interviewer wants to hear or to answer questions in a way respondents believe is more socially desirable -could also be because your survey questions are good, but people feel discomfort responding to them; ex: you ask a set of questions that are validated to measure sexuality, but you ask them in person -means that you won't be able to generalize, and it limits the ability to develop nomothetic causal explanations -presenting clear and interesting questions in a well-organized questionnaire will help reduce measurement error by encouraging respondents to answer questions carefully and to take seriously the request to participate in a survey -tailoring questions to the specific population surveyed is also important; ex: people with
less education are more likely to satisfice (accept an available option as satisfactory) in response to more challenging questions -2. Nonresponse -a major and growing problem in survey research -social exchange theory can help us understand why nonresponse is growing: a well-designed survey effort will maximize the social rewards for survey participation and minimize its costs, as well as establish trust that the rewards will outweigh the costs -the perceived benefits of survey participation have declined with decreasing levels of civic engagement and with longer work hours -perceived costs have increased with the widespread use of telemarketing and the ability of many people to screen out calls from unknown parties with answering machines and caller ID; recipients pay for time on cell phones too, which increases the costs of phone surveys -if certain types of people are less likely to respond, this produces nonresponse bias (ex: this plagued 2016 election polling; Trump supporters who are anti-establishment are also less likely to respond to election polls) -nomothetic causal explanations are about finding common influences on variables; an inadequate sample limits the ability to find commonality -3. Inadequate Coverage of the Population -a poor sampling frame can invalidate the results of an otherwise well-designed survey -the design of the survey can also lead to inadequate coverage of the population -an inadequate sample limits the ability to find commonality, and hence the ability to develop nomothetic causal explanations -4. Sampling Error -the process of random sampling can result in differences between the characteristics of the sample members and the population simply on the basis of chance -p. 258-259; Mod. 7 PP
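Sampling error can be demonstrated with a quick simulation (entirely synthetic data; the population parameters are arbitrary): repeated random samples from the same population yield slightly different means purely by chance.

```python
# Each random sample of 100 cases has a mean that differs from the
# population mean simply on the basis of chance -- that is sampling error.
import random
import statistics

random.seed(1)
population = [random.gauss(50, 10) for _ in range(100_000)]
pop_mean = statistics.mean(population)

sample_means = [
    statistics.mean(random.sample(population, 100)) for _ in range(5)
]
print(round(pop_mean, 1))
print([round(m, 1) for m in sample_means])  # each differs slightly from pop_mean
```

None of the five sample means is exactly the population mean, and no survey design eliminates this chance variation; it can only be quantified (e.g., with a margin of error) and reduced by larger samples.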
Idiosyncratic variation
-writing single survey questions that yield usable answers is always a challenge; single questions are prone to error because of idiosyncratic variation -variation in responses to questions that is caused by individuals' reactions to particular words or ideas in the question instead of by variation in the concept the question is intended to measure -differences in respondents' backgrounds, knowledge, and beliefs almost guarantee that some will understand the same question differently -in some cases, the effect of this can be dramatic; ex: a large difference in agreement when saying forbid vs. not allow in regard to public speeches against democracy; ex: did you see a broken headlight? vs. did you see the broken headlight? -the best option is often to develop multiple questions about a concept and then to average the responses to those questions in a composite measure termed an index -the idea is that idiosyncratic variation in response to particular questions will average out, so that the main influence on the combined measure will be the concept upon which all the questions focus; the index can be considered a more complete measure of the concept than any one of the component questions -p. 268-269
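Index construction as described above amounts to averaging the component items; a minimal sketch (item names, the concept, and the responses are invented for illustration):

```python
# Averaging several Likert items (1 = strongly disagree ... 5 = strongly agree)
# that all target the same concept into one index score.
import statistics

# one respondent's answers to three items intended to tap, say, job satisfaction
respondent = {"q1": 4, "q2": 5, "q3": 3}

index = statistics.mean(respondent.values())
print(index)  # 4 -- reactions to any single item's wording average out
```

The index score (4) sits between the item scores, so a quirky reaction to any one question's wording moves the composite only a third as far as it would move a single-item measure.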
Participant observation - notes
-written notes are the primary means of recording this data -can be handwritten, but many researchers jot down partial notes while observing and then retreat to their computers to write up more complete ones; this computerized text can then be inspected and organized after it is printed out, or it can be marked up and organized for analysis using one of several computer programs -helpful to maintain a daily log in which each day's activities are recorded -jottings - brief notes written in the field about highlights of an observation period; used because it is too difficult to write extensive notes while in the field; serve as memory joggers when the researcher is later writing the field notes -field notes - notes that describe what has been observed, heard, or otherwise experienced in a participant observation study; these notes usually are written after the observational session -should be written up using the jottings either immediately afterward or at least within the next 24 hours -usually, writing up notes takes much longer (about three times longer) than the observing itself -field notes must be as complete, detailed, and true as possible to what was observed and heard -direct quotes should be distinguished clearly from paraphrased quotes, and both should be set off from the researcher's observations and reflections -pauses and interruptions should be indicated -the surrounding context should receive as much attention as possible, and a map of the setting should always be included, with indications of where the individuals were at different times -careful notetaking yields a big payoff; field notes will suggest new concepts, causal connections, and theoretical propositions; social processes and settings can be described -notes should also include descriptions of the methodology: where researchers were standing or sitting while they observed, how they chose people for conversation or observation, what counts of people or events they made and why -sprinkled throughout the notes should
also be a record of the researchers' feelings and thoughts while observing: when they were disgusted by some statement or act, when they felt threatened or intimidated, why their attention shifted from one group to another, and what ethical concerns arose -notes may be, in some situations, supplemented by still pictures, videotapes, and printed material; these can call attention to features of the social situation and actors within it that were missed in the notes -p. 387-389