Research Part 2


***According to the text, describe the principal advantages of research using available data.

Sources of Available Data

The sources of available data may be placed in five broad categories: (1) public documents and official records, including the extensive archives of the Census Bureau; (2) private documents; (3) mass media; (4) physical, nonverbal materials; and (5) social science data archives. These categories provide a useful summary of data sources, although they do not constitute a mutually exclusive typology. Any data source may be placed in one or more of these categories. Also, analysts may draw on more than one data source in any given study.

Advantages of Research Using Available Data

The foregoing studies suggest several advantages as well as some problems with research using available data. Here, we discuss the principal advantages, and in the following section we address some general methodological problems. The first two advantages listed pertain to sources other than survey data archives. The remaining five benefits originally were outlined by Herbert Hyman (1972) with reference to the secondary analysis of survey data but also apply to most other available-data research.

Nonreactive Measurement

A major problem in much social research is reactive measurement: changes in behavior that occur because of subjects' awareness that they are being studied or observed. Research with available data also encounters this problem to the extent that the data sources are surveys or documents like autobiographies in which the author is clearly aware that what is said will be in the public domain. Still, many available-data sources are nonreactive. With physical evidence and many other available-data sources, there is simply no reasonable connection between a researcher's use of the material and the producer's knowledge of such use. Because it relied on census data, Schwartz and Mare's analysis of educational homogamy was nonreactive. Imagine, however, the kind of self-censorship that might have occurred if they had asked respondents in an interview survey whether they preferred to marry someone with the same educational level. If married, respondents' answers would likely be dictated by their spouse's educational level; if not, their answers might reflect normative expectations and, in any event, would measure preferences rather than actual choices.

The risk of reactivity is so high in some areas of study that available data may provide the only credible evidence. Consider, for example, studies of illegal activities such as consumption of illegal drugs. Survey evidence is likely to be contaminated by concealment and underreporting; police records such as the number of arrests for controlled substances may be distorted by differential efforts at law enforcement. An ingenious use of available data, however, can provide nonreactive evidence on drug use. Noting that the federal government imposes a tax and keeps a record of taxes collected on cigarette papers and tubes, Marcus Felson (1983) observed that federal taxes collected on these items changed little during the 1950s and 1960s but jumped about 70 percent over 1960s levels in 1972. Meanwhile, loose tobacco sales declined during this same period. The conclusion Felson reached is that a new market had been created for cigarette papers in the production of marijuana "joints." That market, it would appear, opened up in 1972.

Analyzing Social Structure

Despite the avowed focus of the social sciences on properties of and changes in social structure, much social research focuses on individual attitudes and behavior.
Surveys are of individuals, and very few surveys utilize contextual or social network designs, which provide direct measures of social relations; experiments rarely study the group as the unit of analysis; and field studies are based on the observation of individual behavior. Available data, however, often enable the researcher to analyze larger social units. In many of the studies reviewed above, the unit of analysis was not the individual, nor was the focus on individual behavior. For Durkheim as well as Pearson and Hendrix, the unit was the whole society; for Erikson, the community; for Farley and Frey, the metropolitan area; and for Schwartz and Mare, the married couple. Even the studies by Sales and Bailey on individual propensities toward authoritarianism and homicidal behavior, respectively, investigated these phenomena with societal-level data in terms of large-scale social processes.

Studying and Understanding the Past

Available data provide the social researcher with the best and often the only opportunity to study the past. To study some aspect of American society 50 or more years ago, it might be possible to conduct a survey of people who were alive at the time. But to do so presents several methodological problems, from the inaccuracy of respondents' memories to survivor bias in the sample. To study periods before the twentieth century necessitates the search for available data. Documentary records and other archival evidence have therefore been a favorite source of data for historians, as we saw in Erikson's and Stannard's studies of the Puritans. More recent events also can be investigated with the aid of survey data archives. In fact, many social scientists see surveys from the past as a primary data source for historians of the future (see Hyman, 1972). But studies of the past are not limited merely to understanding the past. They also can be done to test general propositions about social life, as we saw in Erikson's study, Bailey's research on the deterrent effect of capital punishment, and Sales's research on threat and authoritarianism.

Understanding Social Change

Both experiments and field research have limited time spans, and longitudinal surveys rarely were undertaken until the last quarter of the twentieth century. The analysis of available data, however, is well suited to studies of social and cultural change. Trend studies, such as Logan, Stults, and Farley's analysis of racial segregation and Schwartz and Mare's study of educational homogamy, have a long tradition among social demographers who rely on the census and other demographic data. Lindner's analysis of gender stereotyping in advertisements and Stannard's analysis of the carvings on gravestones illustrate other sources of evidence on social change. Moreover, the establishment of data archives has resulted in a marked increase in the number of studies that trace relatively recent changes in various attitudes, opinions, and behaviors (see Glenn and Frisbie, 1977). The GSS was designed partly to measure trends in social conditions. And as we noted, Felson has suggested a similar use for other available-data sources such as the Census of Manufacturers.

Studying Problems Cross-Culturally

In 1984, the International Social Survey Program was formed to provide cross-national survey data similar to that from the GSS (Smith, 1990).
By 2008, forty-three collaborating nations were supplementing regular national surveys with a common core of questions, with these data pooled and made available to the social science community. This is an important development because until recently there have been few cross-cultural surveys. In fact, Hyman (1972:17) estimated that "about ten documented examples of comparable large-scale multination surveys of the general population [existed] as of 1970." In spite of this development, however, other sources of available data—for example, national censuses and vital statistics as well as ethnographies—will continue to provide the primary data for cross-national studies. Pearson and Hendrix's investigation of the relationship between divorce and the status of women is but one of numerous examples.

Improving Knowledge through Replication and Increased Sample Size

Experiments and field studies use samples of very limited size, and most surveys of local populations are relatively small. Similarly, historical document analysis often focuses on a small number of cases. The use of available data, however, may afford the opportunity to generate unusually large samples. Drawing on two data sources, Schwartz and Mare analyzed data on nearly 2 million marriages; Lindner coded a total of 1374 magazine advertisements. Sample size is important for two reasons. First, large samples generally enhance our confidence in study results; with random sampling, increases in sample size increase the reliability of findings, as we saw in chapter 6. Second, by increasing sample size we may gain access to specialized problems and smaller populations that otherwise could not be studied. One reason for using census data, including sample-based data, is that the huge samples provide reliable estimates for small segments of the population. Consider intermarriage—marriage between partners of a different race or ethnicity. Because of generally low rates of intermarriage (less than 5 percent of whites in 2000), only a data set as large as PUMS would enable researchers to examine intermarriage across various racial/ethnic groups.

Increasing sample size in effect replicates observations. Although replications are relatively rare in the social sciences, they often may be carried out easily with available data. A good example is Sales's research on threat and authoritarianism. Sales used diverse sources. We mentioned his use of municipal budgets, listings of books on astrology and psychotherapy, and comic strips. But he used several other sources that we did not mention and analyzed changes in similar indicators for two later periods in the United States: 1959-64 and 1967-70. And that is not all. Other researchers, using comparable data sources, have replicated Sales's work for Germany in the 1920s and 1930s (Padgett and Jorgenson, 1982) and for later periods in the United States (Doty, Peterson, and Winter, 1991; Peterson and Gerstein, 2005). One study (Perrin, 2005) used letters to the editors of seventeen newspapers to examine increases in authoritarianism in the wake of the September 11, 2001, terrorist attacks, a major national threat.

Savings on Research Costs

Insofar as research using available data bypasses the stage of data collection, it can economize greatly on cost, time, and personnel. Whereas this is especially true of the secondary analysis of surveys, other sources of available data also tend to be less costly than experiments, surveys, and field studies.
These costs vary depending on the nature of the data source and the time, money, and personnel required to obtain and to analyze the data. The tasks of the researcher using available data, such as searching for and coding relevant information, often are tedious and time-consuming. Imagine, for example, the efforts of Lindner and her assistants in acquiring sixty periodicals, identifying eligible advertisements, and coding nine categories in 1374 ads. Yet, the cost per case in such studies is generally quite small compared with the cost of interviewing a respondent or running a single subject through an experiment.
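To see why huge samples permit reliable estimates of small population segments, note that under random sampling the standard error of an estimated proportion shrinks with the square root of the sample size. A minimal Python sketch, using the roughly 5 percent intermarriage figure mentioned above; the sample sizes are hypothetical:

```python
import math

def se_proportion(p, n):
    """Standard error of a sample proportion p estimated from n cases."""
    return math.sqrt(p * (1 - p) / n)

# A characteristic held by roughly 5 percent of the population,
# echoing the intermarriage example; sample sizes are hypothetical.
p = 0.05
for n in (1_000, 100_000, 1_000_000):
    se = se_proportion(p, n)
    low, high = p - 1.96 * se, p + 1.96 * se  # rough 95% confidence interval
    print(f"n={n:>9,}: SE={se:.5f}, 95% CI ~ ({low:.4f}, {high:.4f})")
```

At a sample size of 1,000 the interval spans more than a percentage point on either side of 5 percent, far too wide to compare intermarriage rates across specific racial/ethnic groups; at PUMS-like sizes the estimate becomes precise enough for such comparisons.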

***How can a funnel sequence or inverted-funnel sequence solve the survey researcher's frame-of-reference problem?

The "Frame of Reference" Problem Often, the questions we ask people seem clear in meaning to us but can be answered from several perspectives or frames of reference. For example, suppose a survey of second-semester college students asked, "Generally speaking, how satisfied are you with your decision to attend State University?" Students giving the same response, such as "relatively satisfied," could have very different reasons for doing so. There are a number of ways to determine or to control the respondent's frame of reference. A straightforward way to determine the frame of reference is to follow the question with a probe such as "Can you tell me why you feel that way?" "What things specifically do you feel are satisfactory (or unsatisfactory) about State University?" An- other simple means of controlling the frame of reference for individual questions is to specify the frame of reference within the question, such as "Compared with other uni- versities in the state system, how do you feel about the intellectual life at Caufield State?" By particular arrangements of questions, the researcher also can direct the re- spondent to the investigator's frame of reference. A funnel sequence (Kahn and Cannell, 1957:158-60) moves from a very general question to progressively more specific questions. Suppose one wanted to study the impact of inflation on people's attitudes about the performance of the president. Research on question-order effects, reported earlier, suggests that asking specific questions about inflation first might impose this frame of reference on respondents so that later general questions about the president's performance are judged with reference to the president's inflation efforts. Instead, a funnel sequence could start out with gen- eral questions about achievements ("What do you think about the president's per- formance in office?" "Why do you feel this way?"), which will likely disclose the respondents' frame of reference. These questions then could be followed by ques- tions on inflation ("Do you think we have a serious inflation problem?" "Has it had much effect on you?") and, finally, specific questions about the president's ac- tivity in this area ("Do you believe the president is doing a good job of fighting inflation?"). The previous example illustrates the effectiveness of a funnel sequence in avoid- ing the possibility that asking more specific questions first will influence responses to more general questions. This sequence also offers the advantage of beginning with the respondent's ideas and perspectives, which may increase interest and mo- tivation. Funnel sequences may consist entirely of open-ended questions or of an open-ended question (or questions) followed by closed-ended questions. A common frame of reference also may be established through an inverted- funnel sequence of related questions (Kahn and Cannell, 1957:160). Here, one be- gins with the most specific questions and ends with the most general. While this ap- proach lacks the advantages of the funnel sequence, it is useful in some situations. First, it may be used to ensure that all respondents are considering the same points or circumstances before expressing their general opinions. For example, if we wanted to make sure that respondents were judging the president's performance in office on similar bases, an inverted-funnel sequence would enable us to bring up and question performance in specific areas (inflation, unemployment, foreign pol- icy) before asking for a general evaluation. 
A second advantage of the inverted-funnel sequence is that, whether or not respondents have previously formed an opinion regarding the final question in the sequence, all will have time to think through certain aspects of a complex issue before giving their opinion. Instead of asking respondents to express immediately their attitude toward liberalizing laws on abortion, for example, one might ask about approval of abortion in various specific circumstances (if there is a strong chance of a birth deformity, if the woman became pregnant as a result of rape, if the woman's own health is seriously endangered, if the woman is married and does not want the child, if the family cannot afford any more children), at the end of which the respondents' general opinion would be sought. (See Box 10.2 for additional examples of a funnel sequence and inverted-funnel sequence.)

Will an inverted-funnel sequence be susceptible to order effects, since respondents may avoid being redundant by excluding previously given information from the final general question? There is limited evidence (Schwarz, Strack, and Mai, 1991) that another conversational norm, a request for a summary judgment, may operate when a general question follows a block of specific questions.

Here is a sample funnel sequence from a study of union printers (Lipset, Trow, and Coleman, 1956:493-94):

7. (a) All things considered, how do you like printing as an occupation? Do you dislike it? Are you indifferent? Do you like it fairly well? Do you like it very much? (b) Why do you feel this way?

8. (a) Is there any occupation you would like to have other than the one you now have—either in or outside the printing trade? (If so) Which one?

9. Let's look at it another way: If you were starting all over again, what occupation would you want to get into?

10. (a) How would you rate printing as an occupation? For example: (1) Would you rate the pay as excellent, good, fair, or poor? (2) How about job security in the printing trade? Would you rate it as excellent, good, fair, or poor? (3) How about the prestige printing receives from people outside the trade?

A brief inverted-funnel sequence used to measure perceptions of well-being (Andrews and Withey, 1976:376) begins as follows:

13. Here are some faces expressing various feelings. Below each is a letter.

The funnel sequence illustrates the benefits of using a sequence of questions to explore complex issues. Similarly, a well-devised series of questions is invariably more effective than the simple question "Why?" in finding out the reasons for people's behavior.10 Suppose we ask undergraduates why they decided to attend UCLA, and we receive the following responses: Mary—"My parents convinced me"; Sam—"Because I live in LA"; Pascual—"For a scholarship"; Irma—"To be with my boyfriend"; Reuben—"It's a fun school and I wasn't accepted at the other schools I applied to." Not only are these reasons brief and quite diverse, they seem incomplete; surely these students selected UCLA for more than one reason (only Reuben mentioned two reasons). Perhaps Mary and Sam also were influenced by scholarships; maybe Irma was not alone in having a close friend at UCLA, or Reuben was not the only one turned down by other schools. Also, other determinants of the respondents' choice of UCLA may have been overlooked, such as the recommendations of high school teachers, the academic reputation of UCLA, and the climate of southern California. The simple "why?"
question implies that a simple, prompt answer is desired. Consequently, respondents may expend minimum effort (satisficing) to generate quickly a plausible answer that is easy to verbalize. How, then, can we go about discovering the main factors that influenced our respondents to attend UCLA? Fortunately, Hans Zeisel (1968) systematized the process of asking "why?" The key idea in Zeisel's reason analysis is the development of an "accounting scheme" outlining the general categories of reasons, or dimensions of the decision, which in turn provides a model or structure for formulating a comprehensive series of questions.

***What does it mean to pretest a survey instrument? What purposes does pretesting serve?

Throughout the chapter, we repeatedly have emphasized the importance of pretesting—of evaluating survey questions to determine if respondents clearly understand and are able to answer them. Failure to conduct adequate pretesting can result in a meaningless study. Once the study has been conducted, it is too late to benefit from the information, for example, that on one item 99 percent of the respondents chose the same option or that a large number of respondents misunderstood the meaning of a question. Experience has shown that the amount of effort expended on study planning and pretesting is related directly to the ease with which data may be analyzed and to the quality of results.

Pretesting should begin as soon as the survey instrument, or portions of it, have been drafted. Traditionally, pretesting has been done solely "in the field," that is, in the homes of respondents drawn from the target population. However, sparked by the recent interest in cognitive aspects of surveys, researchers have begun to test questions in the laboratory. We first review new procedures developed in the laboratory, which have proven to be very effective in identifying potential wording, ordering, and formatting problems. The information gained through these procedures gives direction to further revision efforts. Often, several pretests and revisions of a questionnaire may be necessary to arrive at a good semifinal draft. Once the questionnaire is in this form, it is routinely tested in the field; therefore, we also discuss various techniques used in field pretesting.

Field pretesting a survey instrument consists of trying it out on a small sample of persons (usually twenty-five to fifty) having characteristics similar to those of the target group of respondents. The pretest group is normally not a probability sample since you are not planning to publish your pretest findings. However, it should be as heterogeneous as the target population. For example, if your target group is a national sample of college and university students, the pretest group should include college students at all levels (first-year through graduate students) and from different types of institutions (large, small, religious, secular, liberal arts, technical, etc.).

Field pretesting should provide answers to questions such as these:

- Does the level of language match the sophistication of the target population?
- Are instructions to respondents and to interviewers clear?
- Are transitions smooth and informative?
- Are the questions and question format varied enough to retain respondents' interest and attention?
- Are responses to open questions so diverse as to be impossible to analyze?
- Are the choice options to closed questions clear and exhaustive?
- Are interviewing aids such as cards or photographs effective and practical?
- Are there questions that respondents resist answering?
- How long, generally, does the interview take to complete?

For some time, field pretesting to identify such problems has consisted of having either experienced interviewers or interviewers in training both administer the draft questionnaire and observe the process. The interviewers might take notes during the interviews or file reports afterward, but their observations generally are conveyed in a group oral debriefing (Converse and Presser, 1986). However, this process has several limitations (Fowler, 1995; Fowler and Cannell, 1996).
Playing the role of interviewer may interfere with the task of observing the process; each interviewer's observations are based on a small number of interviews, which may not be adequate for reliably assessing question problems; the standards for evaluation may not be well articulated or may be applied inconsistently, resulting in a lack of agreement about problem questions; and the recognition of question-comprehension problems is limited to items in which respondents ask for clarification or give inappropriate answers. Some of these problems may be addressed by cognitive laboratory interviewing. Recently, however, several strategies have been applied to make field pretesting more systematic and reliable. These include behavior coding, respondent debriefings, interviewer ratings, split-ballot tests, and response analysis.

***How popular is survey research compared with other methodological approaches, and what questions can be asked?

Surveys are the most widely used method of collecting data in the social sciences, especially in sociology and political science; the U.S. Census is a familiar example. Survey research typically has three characteristics:

1. A large number of respondents are chosen through probability sampling procedures to represent the population of interest.
2. Systematic questionnaires or interview procedures are used to ask prescribed questions of respondents and record their answers.
3. Answers are numerically coded and analyzed.

Surveys obtain information through interviews and/or self-administered questionnaires. Among all approaches to social research, in fact, surveys offer the most effective means of social description; they can provide extraordinarily detailed and precise information about large, heterogeneous populations. By using probability sampling, one can determine, within known limits of sampling error, whether the responses to a sample survey accurately describe the larger target population.

Furthermore, the topics covered and the questions that may be included in surveys are wide-ranging. Topics of the studies we have cited ranged from academic achievement to alcohol consumption and from sexual activity to attitudes toward the police. Survey questions may ask about:

1. Social background information (e.g., What is your religious preference? What is your date of birth?)
2. Reports of past behavior (e.g., Did you vote in the last presidential election? Have you ever been the victim of a crime? On an average day, about how many hours do you personally watch television?)
3. Attitudes, beliefs, and values (e.g., Do you believe that there is a life after death? Do you think homosexual couples should have the right to marry one another?)
4. Behavior intentions (e.g., If the presidential election were held today, whom would you vote for? Would you yourself have an abortion if there were a strong chance of serious defect in the baby?)
5. Sensitive questions (e.g., Have you ever been arrested for a crime? Have you used cocaine in the past month?)

As this listing suggests, surveys can address a much broader range of research topics than experiments can. Ethical considerations preclude studying some topics experimentally—for example, the effect of emotional traumas on mental health—while practical considerations rule out many others; for instance, one normally cannot experimentally manipulate organizations or nations. Besides this flexibility, surveys can be a very efficient data-gathering technique. While an experiment usually will address only one research hypothesis, numerous research questions can be packed into a single large-scale survey. Furthermore, the wealth of data typically contained in a completed survey may yield unanticipated findings or lead to new hypotheses.

The secondary analysis of surveys also affords many unique advantages. The cost of obtaining the data for analysis is usually a small fraction of the cost of collecting and coding the data. Survey data made available for secondary analysis tend to come from professional polling and research centers with the resources to obtain high-quality information from large, national samples. In addition, secondary analysis may enable one to (1) assess social trends by examining questions repeated over time and (2) increase sample size by combining data from several surveys.

***What are the strengths and weaknesses of Internet surveys?

By automating many tasks and simplifying others, these methods reduce interviewer errors and facilitate the interview process. However, they do not displace the interviewer; they simply make his or her job easier. More recently, researchers have developed a variety of computer-mediated surveys that are self-administered. These include e-mailed questionnaires, interactive voice response (IVR) surveys, computerized self-administered questionnaires, and Internet (Web) surveys (Figure 9.2). E-mail and Web surveys are conducted over the Internet. Both involve computer-to-computer transmission of a questionnaire; in e-mail surveys the questions are sent as the text of an e-mail message or in an attached file, whereas in Web surveys the questionnaire is on specially designed Web pages. IVR surveys are conducted by telephone as respondents listen to prerecorded, voice-read questions and then use Touch-Tone data entry or give verbal answers, which are recorded. In computer-assisted self-administered interviewing (CASI), the questionnaire is transmitted on a program that may be sent to respondents or on a laptop provided by the researcher.

Of these innovations, Web surveys have had the broadest application. IVR surveys, which may not be suitable for long or complex questionnaires, have been used extensively in consumer marketing research but have had limited application in general social surveys. CASI often has been used in conjunction with interview-administered surveys. We focus here on the advantages and disadvantages of Web surveys.

One of the greatest advantages of Web surveys is reduced cost. Compared to self-administered questionnaires, the cheapest of the traditional modes, Internet surveys eliminate the costs of paper, postage, assembly of the mailout package, and data entry (Dillman, 2007). The principal costs are computer equipment and programming support, questionnaire development and testing, and Internet service provider fees. A related advantage is time savings. Web surveys require much less time to implement than other survey modes; compared to mail surveys, which may take weeks or months for questionnaires to be delivered and returned, Web surveys may be completed in only a few days (Kwak and Radler, 2002). Finally, Web surveys can substantially reduce the cost of increasing sample size because once the electronic questionnaire has been developed, the cost of surveying each additional person is far less than in an interview or mail survey (Dillman, 2007).

Another advantage of Web surveys, one they share with other computer-mediated methods, is flexibility in the questionnaire design. As Don Dillman (2007:354) points out, the questionnaire can be designed "to provide a more dynamic interaction between respondent and questionnaire" than is possible in a paper-and-pencil survey. Web questionnaires can incorporate extensive and difficult skip patterns, pop-up instructions for individual questions, drop-down boxes with lists of answer choices, feedback on possibly incorrect answers (e.g., birth date "1839"), pop-up word definition screens, and automatic fill-ins for later answers. They can use a great variety of shapes and colors and can add pictures, animation, video clips, and sound (Dillman, 2007:458). When designed carefully, Web survey options and features may be used to motivate and assist respondents and otherwise substitute for the role that an interviewer plays (Couper, Traugott, and Lamias, 2001; Manfreda and Vehovar, 2008:276-81).
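To make the idea of a programmed skip pattern concrete, here is a minimal Python sketch of how a self-administered questionnaire can route respondents past inapplicable items; the questions, answer choices, and function names are illustrative assumptions, not drawn from any survey system named above:

```python
def ask(prompt, choices):
    """Repeat the prompt until the respondent gives one of the allowed answers."""
    answer = ""
    while answer not in choices:
        answer = input(f"{prompt} {choices}: ").strip().lower()
    return answer

def run_questionnaire():
    responses = {}
    responses["employed"] = ask("Are you currently employed?", ("yes", "no"))
    if responses["employed"] == "yes":
        # Skip pattern: this follow-up appears only for employed respondents;
        # everyone else is routed straight to the next item.
        responses["hours"] = ask("Do you work full time or part time?", ("full", "part"))
    responses["satisfied"] = ask(
        "How satisfied are you with your daily activities?", ("high", "medium", "low")
    )
    return responses

if __name__ == "__main__":
    print(run_questionnaire())
```

In an actual Web survey the same branching logic would be implemented in the page scripts rather than a console loop, but the routing principle is identical: the respondent never sees items made irrelevant by earlier answers.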
At this point, the great practical advantages and enormous design potential of Web surveys for social research are offset by some major weaknesses. Perhaps the greatest of these is coverage error. This error derives from two related problems: the proportion of the general population who are Internet users and the lack of a sampling frame for sampling users. By 2007, 71 percent of U.S. households were using the Internet at home or elsewhere. However, a "digital divide" remains, with nonusers being more likely to be black or Hispanic, poorly educated, and older and with less income than those with Internet access (National Telecommunications and Information Administration [NTIA], 2007). The second problem, the absence of a good frame for sampling Internet users, is currently handled by limiting professional Web surveys to special populations having membership lists and Web access, such as college students or employees of an organization. In the absence of a well-defined probability sampling plan, consumers should be wary of most Web polls and surveys, as they are likely to entail self-selected samples that merely reflect the views of those who choose to respond.

Nonresponse error poses another problem, as studies comparing response rates for mail, telephone, and Web surveys generally have found the lowest response rates for the Internet mode (Fricker et al., 2005; de Leeuw, 2008:129). Furthermore, not only is the response rate low (one meta-analysis of fifty-six Internet surveys found an average response rate of 35 percent [Cook, Heath, and Thompson, 2000]), but Internet surveys face problems similar to those underlying declining telephone response rates, including privacy issues, spam and "phishing" scams, and the proliferation of nonprofessional Web surveys. On the other hand, studies have shown that Web response rates comparable to mail rates can be achieved with special populations (college students with free e-mail access) and motivational efforts (an advance cover letter explaining the study's purpose) (Kaplowitz, Hadlock, and Levine, 2004). Also, Web respondents are less likely than mail or telephone respondents to leave specific questions unanswered and tend to write longer answers to open-ended questions.

Despite these coverage and nonresponse problems, Web surveys have developed so rapidly in recent years that some have argued that they eventually will replace mail and interview survey modes. Solutions to some of the problems are being explored, but it will take some time before the advantages of the Internet can be harnessed for use in large-scale, national surveys.

***What is content analysis? Describe the steps involved in doing a content analysis.

Content Analysis

William Chambliss and Kai Erikson have rather divergent perspectives on the functions of crime in society. So it is not surprising that they would arrive at different interpretations of the events in Salem Village. This difference points to one of the problems with the mere reading of written documents—the lack of agreement or reliability. One way to overcome this problem is to be explicit about how one should read the text. In fact, it is possible to develop systematic and objective criteria for transforming written text into highly reliable quantitative data. That is the goal of content analysis.

More than just a single technique, content analysis is really a set of methods for analyzing the symbolic content of any communication. The basic idea is to reduce the total content of a communication (e.g., all of the words or all of the visual imagery) to a set of categories that represent some characteristic of research interest. Thus, content analysis may involve the systematic description of either verbal or nonverbal materials.

Sales's analysis (1973) of comic strips and Lindner's (2004) analysis of gender stereotyping in magazine advertisements are examples of content analysis. However, Goffman's study (1979) of the meaning of gender roles as represented in magazine advertisements is not; he neither specified his content categories before the analysis nor systematically selected and described advertisements in terms of these categories. By contrast, Lindner carefully selected a sample of advertisements in two magazines and then proceeded to (1) identify the categories into which the ads were to be coded (e.g., relative size of men and women present in the ads, feminine touch, ritualization of subordination, and so forth), (2) define the categories according to objective criteria that could be applied by anyone, (3) code the advertisements in terms of these objective criteria, and (4) report the frequency of the categories into which the ads had been coded.

The process just described is exactly the same as that found in systematic observation studies. It is also the same process that one would use in analyzing responses to open-ended questions (see Box 15.1). Sociologists have used content analysis to analyze unstructured interviews, and psychologists have applied it to verbal responses that are designed to assess the psychological states of persons. So, as you can see, its application is not limited to the analysis of existing data. Still, its most common application is to the available printed or spoken word. Content analysis has been applied to written documents with varied and complex content, including newspaper editorials (Namenwirth, 1969), political party platforms (Weber, 1990), novels (Griswold, 1981), and recorded speeches (Seider, 1974).

Let us take a closer look at the steps in carrying out such an analysis: selecting and defining content categories, defining the unit of analysis, deciding on a system of enumeration, and carrying out the analysis (Holsti, 1969).

Selecting and Defining Content Categories

To the extent that human coders are used, selecting and defining the categories for content analysis is analogous to deciding on a set of closed-ended questions in survey research. Instead of giving the questions to respondents who provide the answers, the content analyst applies them to a document and codes the appropriate category. The "questions" applied to the document should be adequate for the research purpose, and the categories should be clearly defined, exhaustive, and mutually exclusive.

Recall that Sales asked one question of the comic strips he analyzed: Is the central character strong and powerful? Wendy Griswold (1981), who analyzed a random sample of 130 novels published in the late nineteenth and early twentieth centuries, was interested in how the American novel might reflect unique properties of American character and experience. Accordingly, she asked several questions pertaining to characteristics of the protagonist (e.g., gender? age? social class at the beginning of the novel? social class at the end?) and to the plot (e.g., What is the setting of the main action? What is the time period? Is adult heterosexual love important to the plot? Is money important in the novel?).

Regardless of whether one uses a human coder or a computer, the reliability and overall value of the content analysis depend on the clear formulation of content categories and of definitions or rules for assigning units to categories. It is common for content analysts to report tests for intercoder reliability. For example, Lindner and an assistant separately coded seventy advertisements in two issues randomly selected from her sample of magazines. When their codes were compared, the percentage agreements for the nine coding categories ranged from 86 to 97 percent, with a mean of 91.7 percent.
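A percentage-agreement check like Lindner's is simple to compute: it is the share of units on which two coders assigned the same category. A minimal Python sketch, using made-up codes for ten hypothetical ads rather than Lindner's actual data:

```python
def percent_agreement(codes_a, codes_b):
    """Percentage of units on which two coders assigned the same category."""
    if len(codes_a) != len(codes_b):
        raise ValueError("Both coders must code the same units")
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return 100 * matches / len(codes_a)

# Hypothetical codes for one category (1 = present, 0 = absent) on ten ads.
coder_1 = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
coder_2 = [1, 0, 1, 1, 1, 0, 1, 0, 0, 1]
print(f"Agreement: {percent_agreement(coder_1, coder_2):.1f}%")  # -> 90.0%
```

In practice this check would be repeated for each coding category, as Lindner did for her nine categories.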
The "questions" applied to the document should be adequate for the re- search purpose, and the categories should be clearly defined, exhaustive, and mutually exclusive. Recall that Sales asked one question of the comic strips he analyzed: Is the cen- tral character strong and powerful? Wendy Griswold (1981), who analyzed a ran- dom sample of 130 novels published in the late nineteenth and early twentieth centuries, was interested in how the American novel might reflect unique proper- ties of American character and experience. Accordingly, she asked several questions pertaining to characteristics of the protagonist (e.g., gender? age? social class at the beginning of the novel? social class at the end?) and to the plot (e.g., What is the setting of the main action? What is the time period? Is adult heterosexual love important to the plot? Is money important in the novel?). Regardless of whether one uses a human coder or a computer, the reliability and overall value of the content analysis depend on the clear formulation of content categories and of definitions or rules for assigning units to categories. It is common for content analysts to report tests for intercoder reliability. For example, Lindner and an assistant separately coded seventy advertisements in two issues randomly selected from her sample of magazines. When their codes were compared, the per- centage agreements for the nine coding categories ranged from 86 to 97 percent, with a mean of 91.7 percent Defining the Unit of Analysis Content analysts refer to their units of analysis as recording units. The recording unit is that element of the text that is described by the content categories. It could be the single word or symbol; the sentence, paragraph, or other grammatical unit; the whole text; or some other aspect of the text such as the character or plot. Na- menwirth's recording unit was the word, whereas Griswold used three different units—character, plot, and whole novel. Lindner's unit was the whole advertise- ment; Sales's unit, on the other hand, was the character within the comic strip. In general, smaller units may be coded more reliably than larger units because they contain less information (Weber, 1990). On the other hand, smaller units such as words may not be sufficient to extract the meaning of the message, and there may be too many such units for the researcher to manage. Imagine, for example, using the word as the recording unit in Griswold's analysis of 130 novels! These limita- tions apply to the use of computers in content analysis because, at this time, the only units programmable for computer analysis are words, word senses, and phrases such as idioms and proper nouns. Because it may not be possible to place the recording unit in a particular cate- gory without considering the context in which it appears, content analysts also dis- tinguish context units (Holsti, 1969). One of Namenwirth's findings was that British elite newspapers were more concerned about relations with Europe and less concerned about the Cold War than mass newspapers. Concern with Cold War is- sues was indicated by a large number of references to the word categories "Soviet," "American," and "Atlantic." From a simple analysis of words, however, one cannot infer the extent to which editorial positions on the Cold War generally were pro- or anti-American. To make this inference, the coder would need to consider the larger context unit—the sentence, paragraph, or whole editorial—in which the words are embedded. 
Deciding on a System of Enumeration

There are many ways of quantifying the data in content analysis. The most basic systems of quantification are listed here.

1. Time-space measures. Early content analysts of newspapers often measured the space devoted to certain topics. Using column inches as their measure, for example, Janet Lever and Stan Wheeler (1984) found that sports coverage in the Chicago Tribune increased from 9 percent of the newsprint in 1900 to 17 percent in 1975. Analogously, television content has been measured in time (e.g., the number of hours of televised violence).

2. Appearance. Sometimes it is sufficient simply to record whether a given category appears in a recording unit. Sales's measurement consisted of classifying the central character in a given comic strip as powerful or not. Many of Griswold's categories were measured in this way: Is the main character a male? Is religion important to the plot?

3. Frequency. The most common method of measuring content is in terms of the frequency with which a given category appears in the contextual unit. Namenwirth counted the number of times categories appeared in each newspaper editorial. In an analysis of the Democratic and Republican party platforms, Robert Philip Weber (1990) calculated the proportion of words in the category "wealth" (e.g., "capital," "inflation," "unemployment"), as sketched after this list.5

4. Intensity. When attitudes and values are the objects of the research, the content analyst may resort to measures of intensity. For example, rather than ask whether money is important to the novel's plot, one might ask how important it is. Devising mechanisms for making judgments of intensity is essentially the same as in the construction of indexes and scales, which we discuss in chapter 13.

How the researcher decides to enumerate the data depends on the requirements of the problem under investigation. However, the choice of a system of enumeration carries with it certain assumptions regarding the nature of the data and the inferences that one can draw from the data (Holsti, 1969). Space-time measures may appropriately describe certain gross characteristics of the mass media, but they are too imprecise to serve as indicators of most verbal content. Appearance measures also tend to be rather imprecise, although they are more flexible and can be applied to a larger range of content than space-time measures. Frequency measures are better still but involve two crucial assumptions that should be examined: first, they assume that the frequency of a word or category is a valid indicator of its importance, value, or intensity; second, they assume that each individual count is of equal importance, value, or intensity. It may be that some categories or some recording units should be weighted more heavily than others. It has been suggested, for example, that front-page articles might be more important and therefore weighted more than articles appearing elsewhere in a newspaper (Holsti, 1969). (Box 12.2 also discusses the problem of making inferences from frequencies of the "manifest" content of materials.)
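A frequency measure such as Weber's proportion of words in the "wealth" category can be sketched in a few lines of Python; the category word list and sample sentence below are illustrative assumptions, not taken from his study:

```python
from collections import Counter
import re

# Toy stand-in for a category dictionary like Weber's "wealth" category;
# the word list here is assumed, not taken from his study.
WEALTH = {"capital", "inflation", "unemployment", "wages", "taxes"}

def category_proportion(text, category):
    """Proportion of all words in the text that belong to the category."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    counts = Counter(words)
    return sum(counts[w] for w in category) / len(words)

platform = "Rising inflation and unemployment demand new policies on wages and taxes."
print(f"Proportion in wealth category: {category_proportion(platform, WEALTH):.3f}")
```

Note that such a count treats every occurrence as equally important, which is precisely the assumption of frequency measures questioned above.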
Carrying Out the Analysis

To carry out the analysis, one first obtains a sample of material. As in survey sampling, the researcher should always be mindful of the population to which inferences are to be made. Three populations are relevant to content analysis: communication sources (e.g., types of newspapers, novels, speeches), documents (e.g., specific newspaper issues), and text within documents (e.g., pages). Often, a sample of documents is drawn from a single source; for example, Griswold took a random sample of all novels published in the United States between 1876 and 1910. Lindner and Namenwirth first purposefully sampled communication sources. Lindner chose one general interest magazine (Time) and one women's fashion magazine (Vogue); then she selected issues from two months (January and June) for six different years between 1955 and 2002. Namenwirth chose three British prestige newspapers and three mass papers; then he randomly selected twenty-four documents—newspaper editorials—from each of the papers. Although researchers also have sampled text, Weber (1990:43) recommends that the entire text be analyzed when possible because this preserves its semantic coherence. If it is necessary to sample text, then meanings are best preserved by sampling paragraphs rather than sentences.

Having selected the sample, one proceeds to code the material according to the coding categories and system of enumeration. This gives the analyst a description of the communication content. Finally, the content analyst truly engages in analysis by relating content categories to one another or by relating the characteristics of the content to some other variable. Griswold compared the content of novels written by American and foreign authors, finding many similarities but also some interesting differences. For instance, American authors were likely to place protagonists in the middle class; foreign authors favored the upper class. American authors also were more likely to set their novels in small towns and less likely to set the action in the home.

KEY POINT: The content analyst must decide what communications to code and how to code them and then must sample and analyze the content.

***How do users of historical documents go about determining the credibility of testimony?

Historical Analysis

The analysis of available data takes as many forms as the data themselves. In part, the type of analysis is a function of research purposes and research design. Descriptive accounts of a single event or historical period differ from tests of general hypotheses, which differ from trend studies. The analysis also depends on data sources. Researchers use very different techniques for analyzing population statistics, mass media communications, and historical documents. In this section and the next, we briefly discuss aspects of two sharply different approaches to the analysis of available data: historical analysis and content analysis.

Descriptive and Analytical History

The word "history" has several meanings. It refers, for example, to (1) actual events or happenings of the past, from the recent past, such as the 2008 presidential election, to the remote past, such as the assassination of President Lincoln; (2) a record or account of what has or might have happened; and (3) a discipline or field of study (Shafer, 1980:2). One type of historical analysis refers to the set of methods that historians apply when they gather and evaluate evidence in order to describe specific moments of the past. This form of analysis stresses the accuracy and completeness of the description of unique, complex events. Outside the discipline of history, however, historical analysis moves beyond description to the use of historical events and evidence to develop a generalized understanding of the social world. Although this characterizes much of the field of historical sociology, we prefer the generic term analytical history to denote this type of historical analysis.

Surprisingly, analytical history is a relatively recent development. Despite the historical orientation of such founding fathers of modern-day social science as Karl Marx, Emile Durkheim, and Max Weber, much of social research during the past half-century has lacked a historical focus. Only in the past 30 years has there been a revived interest in the historical perspective. Erikson's study of deviance in Puritan New England is representative of this work.

Erikson was careful to reconstruct events with documents of the time, much as a historian might do. But the reconstruction was not an end in itself. Rather, he attempted to use a particular historical case as a way of demonstrating and elaborating Durkheim's general theory of deviant behavior—that deviance provides a mechanism for defining community boundaries and demonstrating shared values and norms. This sort of analysis, Theda Skocpol (1984:365) points out, is valuable "because it prompts the theorist to specify and operationalize . . . abstract concepts and theoretical propositions." Thus, Erikson started with a general theory and used the specific case to explicate the theory.

Another strategy for integrating social theory and history is to start with a particular historical event or pattern and then develop and test one or more explanations to account for it. For example, John Sutton (1991) attempted to explain the rapid growth of asylums in the United States between 1880 and 1920. To many observers of this period, Sutton notes, "asylum expansion was a sign that America was undergoing an epidemic of madness," which they attributed to a range of evils, including rapid urbanization and uncontrolled immigration.
Using quantitative data from the states (e.g., the number of persons living in urban areas, the number of persons over age 65, the number of asylum and almshouse inmates), Sutton tested several explanations of asylum expansion: (1) as reformers succeeded in shutting down almshouses, asylums were forced to absorb the aged poor who were expelled; (2) urbanization enhanced the development of specialized and formally organized means of treatment, such as asylums; (3) asylum expansion depended on the revenues of state governments; (4) the need for asylum placements was inversely related to the distribution of direct benefits (e.g., pensions) to dependent groups; and (5) patronage politics may have supported expansion insofar as asylums were sources of jobs, contracts, and services that parties could use to reward supporters. Sutton's findings showed that all these factors, among others, contributed to asylum expansion.

Still another strategy is to search for general causal explanations of well-defined historical outcomes or patterns (Skocpol, 1984). In this case, the investigator does not focus on a particular historical event but rather on two or more similar events or cases, which are then compared systematically to identify causal regularities. For example, sociologist Theda Skocpol (1979) analyzed the causes of social revolutions by comparing the revolutions of France in 1789, Russia in 1917, and China from 1911 to 1949. Given the broad scope of her study, she did not consult original documents; instead, she drew upon the work of historians of each period and place to identify patterns of political conflict and development. Among the common factors that Skocpol identified as precipitating revolution were that each society (1) had strong peasant communities and (2) faced foreign pressures and entanglements that made it difficult to meet the needs of economic development.

Finally, historical analysts may also treat history itself as an independent variable in their analyses. That is, they may examine sequences of past events as a way of understanding the present. Used in this way, history represents the temporal dimension of social life rather than a particular outcome to be explained (as in Sutton's research) or a manifestation of large-scale social change (as in Skocpol's work).3

Representative of this type of analytical history is economist Paul David's (1985) analysis of the establishment of the "QWERTY" keyboard layout as a standard of the typewriter industry. David showed that the influence of temporally remote events accounts for the persistence of this awkward layout on current typewriters and computer keyboards. The QWERTY format first appeared in 1873, as a result of an early effort to find an arrangement that would reduce the frequency of typebar jamming. The format was then modified into a sales gimmick: E. Remington and Sons "assembled into one row all the letters that their salesmen would need" to rapidly type the brand name TYPE WRITER without lifting their fingers from the keyboard. The future of QWERTY was not protected by technological necessities, as competitive designs introduced in the 1880s eliminated the jamming problem and a keyboard arrangement patented by Dvorak and Dealey in 1932 was demonstrably more efficient. Rather, a key event occurred late in the 1880s that locked in the QWERTY standard: the advent of "touch" typing was adapted to Remington's keyboard, so that typists began learning this design rather than others.
Employers then found it less expensive to buy machines with the QWERTY arrangement than to retrain typists. Finally, non-QWERTY typewriter manufacturers adapted their machines to the QWERTY-trained typists.

Historical analysis thus consists of

1. reconstructions of past events, which emphasize the accurate description of what happened;
2. applications of a general theory to a particular historical case(s), which focus on how the theory applies;
3. tests of explanations of historical events, which examine why a specific event occurred;
4. the development of causal explanations of historical patterns, which also analyzes why events occurred but seeks a more general understanding of social phenomena; and
5. the use of history to understand the present or explain how and why particular phenomena came to be.

Each of these genres of historical research represents a slightly different level of abstraction and analysis. Descriptive historians (1) are interested in presenting sequences of specific, concrete events, whereas analytical historians (2), especially those applying abstract theories, may apply highly general concepts and propositions. Quantitatively oriented analysts engaged in testing hypotheses about a particular historical instance (3) tend to follow the traditional scientific model of investigation and are more explicit about operationalizing concepts. Comparative historians (4), on the other hand, typically take an inductive approach similar to field researchers.4 Finally, those who examine long-term temporal sequences and connections among events (5) may combine the historian's narrative approach with the quantitative analyses of the sociologist. Regardless of these differences, however, all historical research involves, first, the use of written residues of the past to describe the past and, second, an interpretation of past events.

Handling Documentary Evidence

Although historical researchers may use any source of available data, they tend to rely mostly on documents. Historian Vernon Dibble (1963) classifies documents into two main categories: testimony and social bookkeeping. Historians traditionally have been especially fond of direct testimony by major actors as contained in autobiographies, depositions, private letters, and the like (Tilly, 1981). Through the testimony of witnesses, historians attempt to reconstruct where, when, and what happened. Testimony, however, tends to focus the analysis on the activities and motivations of individuals, especially "major actors."

Social bookkeeping refers to documents containing recorded information produced by groups or organizations, such as bankbooks, court records, transcripts of congressional debates, vital statistics, and the list of graduates of Yale University. As the product of social systems, social bookkeeping is more likely than testimony to be used to draw inferences about social structural variables. Charles Tilly (1981:32) also points out the analytic gains made possible by the numbers and abstractions that social scientists glean from such evidence.

When drawing inferences from documents to events of the past, the historian is primarily concerned with the authenticity and credibility of the evidence. Judgments of authenticity, as we mentioned earlier, involve highly technical techniques that are best left to the professional historian or archivist. Once the evidence is authenticated, the researcher must evaluate how credible the evidence is.
The best checks on the credibility of testimony are corroboration and the absence of contradiction. Consistent independent sources of testimony enhance the probability that a particular account is accurate. However, because corroboration is often impossible, historians use a variety of other checks to assess credibility. Robert Shafer's (1980:166-67) checklist includes the following suggestions:

1. Is the real meaning of the statement different from its literal meaning? Are words used in senses not employed today?

2. How well could the author observe the thing he [or she] reports? Were his [or her] senses equal to the observation? Was his [or her] physical location suitable to sight, hearing, touch? Did he [or she] have the proper social ability to observe: did he [or she] understand the language . . . ?

3. . . . Regarding the author's ability to report, was he [or she] biased? Did he [or she] have proper time for reporting? Proper place for reporting? Adequate recording instrumentation? . . . When did he [or she] report in relation to [the] observation? Sooner? Much later? [Reports written soon after an event are more likely to be accurate than reports recorded long afterward; disinterested, incidental, or casual testimony is more likely to be accurate than testimony that is ideologically relevant or intended for a particular audience.]

4. Are there inner contradictions in the document?

With their emphasis on credible testimony and the accurate description of past events, historians put much stock in the use of primary as opposed to secondary sources. Primary sources are eyewitness accounts of the events described, whereas secondary sources consist of indirect evidence obtained from primary sources. Kai Erikson used both types of evidence in his study of deviance in Puritan New England: (1) court records and the journals of those witnessing the events of the time (primary) and (2) the writings of numerous historians (secondary). Theda Skocpol relied on secondary sources for her study of revolutions. "As a general rule," Louis Gottschalk (1969:116) claims, "the careful historian will be suspicious of secondary works in history, even the best ones." Gottschalk therefore recommends that these should be used for very limited purposes, such as to get general information about the setting of the historical period under investigation, to obtain bibliographic leads, and to derive tentative interpretations and hypotheses. However, it is difficult to imagine how broad-based historical analyses such as Skocpol's could ever be undertaken if she first had to reconstruct past events with primary sources.

Social bookkeeping requires a different kind of evaluation than does testimony. Because the documents are produced by groups or organizations, they must be read in the light of the social systems that produced them (Dibble, 1963:207). One might ask and try to discern, for example, the following: What processes intervened between the observing and recording? Was the record subject to editing (recall our example of selective deposit in the Congressional Record)? For whom was the record intended? For whom might it have been valuable, and who might have been hurt by it?

In general, then, when the evidence is secondhand and the subject matter remote, the investigator must be all the more thoughtful about the evidence and skeptical of his or her relationship to it (Erikson, 1970:335).
Even when the researcher is confident of the authenticity and credibility of the documents, he or she must also wonder how they came to be preserved. Erikson (1970:335) makes just this point with regard to the rich and varied documents available for the study of seventeenth-century Massachusetts.

Beyond providing more or less direct evidence of historic events, documents are an important source of indicators or measures of large-scale social structural variables and processes. In his study of asylum expansion, for example, Sutton (1991) measured urbanization, the aging of the population, and changes in the number of inmates of asylums and almshouses with data from various U.S. census publications; he used reports to the U.S. Congress by the Commissioner of Pensions to determine the number of pensioners in each state; and from gubernatorial voting data published in the Congressional Quarterly's Guide to U.S. Elections, he measured party patronage by the closeness of the votes for Republican and Democratic gubernatorial candidates. The quality of such indicators depends not only on the credibility of the bookkeeping sources but also on the reliability and validity of the data as measures of particular variables. Since multiple indicators and independent sources of validation are rarely available in historical research, validity assessment is largely a matter of face validation. This was not a problem for most variables in Sutton's study because of the directness of the measures; for example, data on the number of persons over age 65 have obvious validity as a measure of the age of the population. However, for less direct measures, such as the difference in votes for party gubernatorial candidates as an indicator of patronage, face validity is less than satisfactory, albeit often the only means of validation.

Historical Interpretation

The historical analyst is interested in understanding the past. For the descriptive historian, this implies establishing what happened in a factual way. During the Salem witchcraft hysteria, for example, who was accused by whom? Who was executed? But analysis never stops here. To arrive at some understanding of what happened, even if the goal is merely to describe a sequence of events as accurately as possible, the researcher must order the facts according to some interpretation of the materials. As we repeatedly have noted, a tenet of social research is that facts do not speak for themselves. The search for evidence itself, however haphazard or rigorous, is always guided by a broad theory or interpretation relevant to the researcher's interest. Tilly (1981:10), for example, notes that "the American historian who examines the treatment of slaves by undertaking a detailed study of slaveholders' diaries, while neglecting the records of slave auctions, makes an implicit choice favoring a theory in which slaveholders' attitudes are significant determinants of slave experience." To examine the role of historical interpretation of particular past events and the importance of entertaining alternative explanations, we discuss different studies of the Salem witchcraft episode, including Kai Erikson's aforementioned study of deviance in Puritan New England.

All of this points to a major difficulty and an important caveat regarding historical analysis. Historical events invariably are subject to a variety of interpretations.
It is possible for more than one interpretation to be valid, especially if the interpretations represent different levels (e.g., psychological versus sociological) or focus on different aspects of an event. For example, in explaining the witchcraft mania, one may account not only for why it took place at this point in time in the Massachusetts colony (which is what Erikson was attempting to explain) but also for why it was focused in the community of Salem Village (which Boyer and Nissenbaum explained), why it began among these particular girls, why the citizens of the community actually could believe that there were witches in their midst (which historian Chadwick Hansen [1969] has attempted to explain), and so forth. On the other hand, if the researcher assumes that some explanations may be valid and others are not, it becomes important to entertain plausible rival interpretations and to evaluate these critically in light of the evidence. How well does a given interpretation account for the evidence? What does the interpretation assume, and what consequences follow from it? Chambliss assumes a singularly valid explanation in raising such questions about Erikson's hypothesis: if Erikson is right, then it follows that . . .

KEY POINT: Researchers who interpret historical events should consider alternative interpretations and the theoretical assumptions upon which each is based.

***What are some of the factors that influence topic selection in social research?

Origins of Research Topics

The starting point for research is the selection of a topic. Once a topic is chosen and the research question is set, we can discuss rules and guidelines for conducting research that will generate the most valid data and the most definitive answers. But there are no rules for selecting a topic. Given that anything that is "social" and "empirical" could be the subject of social research, there is a nearly endless variety of potential topics. So, how are specific topics likely to emerge in the social sciences? We have identified five factors that explain the origin of most topics.

1. The structure and state of the scientific discipline. With the scientific goal of advancing knowledge, most researchers select topics suggested by the ongoing development of theory and research in their particular fields of study. The organization of disciplines casts the framework for topic selection. Social psychologists, for example, divide their discipline with respect to various forms of social behavior, such as aggression, altruism, interpersonal attraction, and conformity, which act as organizing themes or areas of research interest. Similarly, sociologists frequently study aspects of various institutions, like religion, politics, education, and the family, around which the discipline is organized. As knowledge in an area develops, inconsistencies and gaps in information are revealed that researchers attempt to resolve.

2. Social problems. The focus and development of the social sciences are intimately related to interest in basic problems of the "human condition." Historically, this has been a major source of research topics, especially in sociology. The most eminent sociologists of the nineteenth and early twentieth centuries—people like Emile Durkheim, Karl Marx, Max Weber, and Robert Park—concerned themselves with problems emanating from great social upheavals of their day, such as the French and Industrial revolutions and massive foreign immigration to the United States. The problems wrought by these events—alienation, deviance, urban crowding, racism, and many others—have remained a major focus of the discipline ever since. Indeed, many people are attracted to the social sciences because of their perceived relevance to social problems.

3. Personal values of the researcher. Carrying out a research project, with its inevitable complications, obstacles, and demands for time and money, requires considerable interest and commitment on the part of the investigator. What sustains this interest more than anything else are highly personal motivations for doing research on a particular topic. Thus, an investigator may choose a topic not only because it is considered theoretically important, novel, or researchable but also because it stimulates his or her interest. According to social psychologist Zick Rubin (1976:508-9), one reason for his embarking on the study of romantic love was that he was "by temperament and avocation, a songwriter. Songwriters traditionally put love into measures." And so he set out to find a way of measuring love scientifically. In a similar fashion, it is not surprising that members of particular groups often have pioneered research on those groups; for example, women have led the way in research on women and African Americans in research on blacks (King, Keohane, and Verba, 1994:14).

4. Social premiums. There are also powerful social determinants of topic selection.
Through the availability of supporting funds, the prestige and popularity of the research area, and pressures within the discipline and within society, social premiums are placed on different topics at different times. Typically, these premiums reinforce one another, with the social climate affecting funding, which in turn affects prestige. This was certainly true of the space program in the 1960s. Today, in the social sciences, the aging of the population as a whole has raised interest in and support for research on the elderly, just as it has caused a dramatic increase in federal expenditures on and services available to older people. Similarly, in the 1970s the women's movement spurred a dramatic increase in research on gender issues that has continued to this day.

5. Practical considerations. An overriding concern in any research project is cost. Research requires time, money, and personnel. Limitations on these resources, as well as other practical considerations such as the skill of the researcher and the availability of relevant data (see chapter 12), will shape both the nature and scope of the problem that the researcher can pursue.

The choice of any given research topic may be affected by any or all of the factors mentioned. Consider a study by Beckett Broh (2002) that examined the effects of participation in extracurricular activities on high school academic achievement. Using data from a national survey of high school students, Broh's study was designed to find out who benefits from participating in sports and other school activities and why. The study continued a line of inquiry on the impact of the extracurriculum that is of theoretical interest to sociologists of education and sport. Social scientists, school officials, and the general public have long debated whether sport, in particular, builds character and has positive educational benefits. Given the costs of extracurricular programming, especially school sports, and the public concern about boosting academic achievement, Broh's study also had important practical implications. Finally, the topic was of special personal interest to Broh. She herself was a high school athlete, and after her collegiate athletic career was cut short by an injury, she turned to coaching middle and high school basketball in Michigan and Ohio. These experiences naturally sparked her interest in sport and education as a PhD student in sociology. And when her mentor suggested that she look at the questions in the National Educational Longitudinal Study, she found the means of testing some of her ideas about the impact of athletes' networks of relationships on their academic achievement.

Once a general topic has been chosen, it must be stated in researchable terms. This involves translating the topic into one or more clearly defined, specific questions or hypotheses. We'll discuss the process of moving from general topics to specific questions later. First, it is important to understand that the formulation of a researchable problem or question boils down to deciding what relationships among what variables of what units are to be studied. We will now turn our attention to these important terms.


***Why look at the way variables "relate" to one another? What three factors for establishing causality do we need to be concerned with most? Describe what these refer to.

Characteristics of units that vary, taking on different values, categories, or attributes for different observations, are called variables. Variables may vary over cases, over time, or over both cases and time. For example, among individuals, any set of characteristics that may differ for different people, such as age (range of years), gender (male and female), and marital status (single, married, divorced, widowed, etc.), is a variable. And for an individual, any characteristic that may vary from one time period to the next, such as age, level of education (first grade, second grade, etc.), and income (dollars earned per year), is a variable.

It is not unusual to see some confusion between variables and the attributes or categories of which they consist. "Gender" is a variable consisting of the categories male and female; "male" and "female" by themselves are not variables but simply categories that distinguish persons of different gender. Likewise, "divorced" and "Republican" are not variables but categories of the variables "marital status" and "political party affiliation," respectively. To keep this distinction clear, note that any term you would use to describe yourself or someone else (e.g., sophomore, sociology major) is an attribute or category of a variable (academic class, major).

Social scientists find it necessary to classify variables in several ways. One type of classification is necessitated by the complexity of social phenomena. For any given research problem, the researcher can observe and measure only a few of the many potentially relevant properties. Those variables that are the object of study—part of some specified relationship—are called explanatory variables, and all other variables are extraneous (Kish, 1959).

There are two types of explanatory variables: dependent and independent. The dependent variable is the one the researcher is interested in explaining and predicting. Variation in the dependent variable is thought to depend on or to be influenced by certain other variables. The explanatory variables that do the influencing and explaining are called independent. If we think in terms of cause and effect, the independent variable is the presumed cause and the dependent variable the presumed effect. Independent variables are also called "predictor variables" because their values or categories may be used to predict the values or categories of dependent variables. For example, when Broh studied the impact of extracurricular involvement on academic achievement, her independent variable was whether students participated in specific school activities such as interscholastic sports, and her dependent variable was level of academic achievement. One research question was whether sport participation explained (or predicted) differences in academic achievement.

Research studies in the social sciences often involve several independent variables and sometimes more than one dependent variable. Also, a variable is not intrinsically independent or dependent. An independent variable in one study may well be a dependent variable in another, depending on what the researcher is trying to explain. Finally, it is conventional in mathematics and science for the letter X to symbolize the independent variable and for the letter Y to represent the dependent variable. This is a practice we shall follow in the remainder of the book.
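To make these distinctions concrete, here is a minimal sketch in Python. The records, field names, and values are entirely made up for illustration; they are not data from Broh's study, only a rough analogue of her variables.

```python
# Each case (a student) takes one attribute on each variable.
# Hypothetical records for illustration only.
cases = [
    {"id": 1, "gender": "female", "plays_sports": True,  "gpa": 3.6},
    {"id": 2, "gender": "male",   "plays_sports": False, "gpa": 2.9},
    {"id": 3, "gender": "female", "plays_sports": True,  "gpa": 3.4},
]

# "female" and "male" are categories (attributes) of the variable "gender";
# they are not variables themselves.
X = [c["plays_sports"] for c in cases]  # independent variable (presumed cause)
Y = [c["gpa"] for c in cases]           # dependent variable (presumed effect)
Z = [c["gender"] for c in cases]        # an extraneous (here, antecedent) variable
```

Note how the X/Y convention maps directly onto the data: the same column could serve as X in one study and Y in another, depending on what the researcher is trying to explain.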
Examples of variable pairs (independent variable, dependent variable): age and fear of crime; growth of air traffic and economic growth; proportion of employees who are female and average wage; level of economic development and birth rate; length of engagement and marriage duration; racial composition of the team and average attendance; whether main characters in a strip were powerful and when comic strips were introduced (1920s or 1930s); gender of athlete and whether the athlete is identified by first name.

Extraneous variables, which are not part of the explanatory set, may be classified in two important ways. First, in relation to specific independent and dependent variables, they may be antecedent or intervening. An antecedent variable occurs prior in time to both the independent and dependent variables; a variable is intervening if it is an effect of the independent variable and a cause of the dependent variable. Antecedent variables in Broh's study were parents' income and a student's race and gender; each of these variables may affect both extracurricular involvement and academic achievement. An intervening variable was students' self-esteem. Extracurricular involvement could affect self-esteem, which in turn could affect a student's academic performance. Figure 4.1 depicts these examples of antecedent and intervening variables. Each arrow in the figure represents causal direction. Thus "Parents' income → Extracurricular involvement" means that parents' income influences or causes extracurricular involvement, and the absence of an arrow means that one variable does not cause another.

Extraneous variables also may be categorized as controlled or uncontrolled. Controlled or, more commonly, control variables are held constant, or prevented from varying, during the course of observation or analysis. This may be done to limit the focus of the research or to test hypotheses pertaining to specific subgroups—for example, all males or all males under 18 years of age. Basically, the value or category of a control variable remains the same for a given set of observations. Several techniques for holding variables constant are discussed at length in the following chapters. Some examples would be selecting only individuals of the same age and gender, observing groups of the same size, creating uniform laboratory conditions or social settings in which to observe people or groups, and statistically controlling for specific attributes. Whenever a variable is held constant in research, that variable cannot account for (or explain) any of the variation in the explanatory variables. Suppose, for example, that you wanted to explain differences (i.e., variation) between people in their level of aggression. If you controlled for gender by studying only males, then the variable "gender" could not account for any of the observed variation in aggression. Holding variables constant thus simplifies complex social situations. It is a means of ruling out variables that are not of immediate interest but that might otherwise explain part of the phenomenon that the investigator wishes to understand.

Relationships Among Variables

The kinds of relationships with which social scientists are concerned are relationships among variables: two or more variables are related, or form a relationship, to the extent that changes in one are accompanied by systematic changes in the other(s).
Since the manner in which the variables change or vary together will depend on whether the variables are qualitative or quantitative, we will consider the nature of relationships separately for each of these types of variables. Relationships may be described by their form (how changes in one variable vary with changes in the other), strength (how accurately values of one variable predict values of the other), and statistical significance (the likelihood that the relationship occurred by chance or random processes).

It seems obvious that a rock thrown against a window will cause the glass to shatter. And the fact that drinking too much soda causes me to get a stomachache is a causal relationship that you can comprehend even if you fortunately have not had the same experience.

Three Factors for Establishing Causality

What kind of evidence supports the belief that a causal relationship exists? Social scientists generally require at least three kinds of evidence to establish causality: association, direction of influence, and nonspuriousness.

Association

For one variable to be a cause of another, the variables must be statistically associated. If the pattern of changes in one variable is not related to changes in another, then the one cannot be producing, or causing, changes in the other. Thus, for instance, if intelligence is unrelated to delinquency—that is, if adolescents of high and low intelligence are equally likely to be delinquent—then intelligence cannot be a cause of delinquency.

Associations, of course, are almost never perfect; so a perfect association between variables is not a criterion of causality. According to logicians, in fact, the very idea of causation implies imperfect associations between events. Causes can have invariable effects only in "closed systems" that exclude all other factors that might influence the relationship under investigation. Many of the laws of physics, for instance, are said to apply exactly only in a vacuum. However, vacuums are not found in nature; neither is it possible in real social situations to eliminate completely the influence of extraneous factors. Perfect associations may be expected, therefore, only under the theoretical condition that "all other things are equal" but not in the "real world" of observations.

Barring "perfect" associations, then, the application of this first criterion necessarily involves a judgment about whether an association implies a meaningful causal relationship. In the social sciences, causal relationships often are inferred from comparatively "weak" associations. One reason for this is that many measurements in the social sciences are relatively imprecise. The primary reason, though, is that in explaining human action, multiple causes may independently or jointly produce the same or similar effects. A weak association may mean that only one of several causes has been identified, or it may mean that a causal relationship exists but not under the conditions or for the particular segment of the population in which the weak association was observed. Rather than strength of association, therefore, social scientists rely on tests of statistical significance to determine whether a meaningful (nonchance) association exists.

Direction of Influence

A second criterion needed to establish causality is that a cause must precede its effect, or at least the direction of influence should be from cause to effect.
In other words, changes in the causal factor, or independent variable, must influence changes in the effect, or dependent variable, but not vice versa. For many relationships in social research the direction of influence between variables can be conceived in only one way. For example, characteristics fixed at birth, such as a person's race and gender, come before characteristics developed later in life, such as a person's education or political party affiliation; and it is hard to imagine how changes in the latter could influence changes in the former.

Direction of influence is not always so easy to determine. Suppose you found a correlation between racial prejudice and interracial contact showing that the more contact a person has with members of other races, the less prejudiced he or she is apt to be. One possible interpretation is that racial contact increases familiarity and contradicts stereotypes, thereby reducing prejudice. An equally plausible interpretation is that prejudiced people will avoid contact while tolerant people will readily interact with other races, so that racial prejudice influences racial contact. Without any information about the direction of influence between these variables, there is no basis for deciding which variable is the cause and which is the effect. To take another example, a correlation between grades and class attendance may mean that greater attendance "increases the amount learned and thus causes higher grades," or it may mean that "good grades lead students who obtain them to attend class more frequently."

Nonspuriousness (Elimination of Rival Hypotheses)

If two variables happen to be related to a common extraneous variable, then a statistical association can exist even if there is no inherent link between the variables. Therefore, to infer a causal relationship from an observed correlation, there should be good reason to believe that there are no "hidden" factors that could have created an accidental or spurious relationship between the variables. When an association or correlation between variables cannot be explained by an extraneous variable, the relationship is said to be nonspurious. When a correlation has been produced by an extraneous third factor and neither of the variables involved in the correlation has influenced the other, it is called a spurious relationship.

The idea of spuriousness is obvious when we consider two popular examples in the social sciences. The first is a reported positive correlation in Europe between the number of storks in an area and the number of births in that area (Wallis and Roberts, 1956:79). This correlation might explain how the legend that storks bring babies got started, but it hardly warrants a causal inference. Rather, the correlation is produced by the size of the population. Storks like to nest in the crannies and chimneys of buildings; so as the population and thus the number of buildings increases, the number of places for storks to nest increases. And as the population increases, so does the number of babies. We also would expect to find a positive correlation between the number of firefighters at a fire and the amount of damage done. But this does not imply that firefighters did the damage. The reason for the correlation is the size of the fire: Bigger fires cause more firefighters to be summoned and cause more damage. In these examples the original relationship is an incidental consequence of a common cause: an antecedent extraneous variable.
In the first example, the size of the population accounts for both the number of storks and the number of births; in the second, the severity of the fire determines both the number of firefighters and the amount of damage. These relationships are depicted in Figure 4.4. The examples are intuitively obvious, and the third factor is fairly easy to identify.

In actual research, spurious relationships are much less apparent, and the possibility often exists that an unknown variable may have produced an observed association. For many years, numerous studies have shown that children who are breast-fed tend to have higher IQ scores than those who are not. Proponents of breast-feeding (which does have many other benefits for mother and child) have inferred a causal connection, contending that the effect may be due to a component of breast milk or perhaps to the physical interaction between mother and baby. Recent research suggests, however, that this association is spurious. Both breast-feeding and child intelligence are influenced by the mother's intelligence: Mothers with higher IQs are more likely to breast-feed and also more likely to have children with higher IQs.
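The logic of association, statistical significance, and nonspuriousness can be made concrete with a small simulation. The sketch below (Python, assuming numpy and scipy are available; all numbers are invented for illustration) generates stork counts and birth counts that both depend on district population. The two are strongly and "significantly" correlated, yet the correlation essentially vanishes once population is held constant by partialling it out.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 500

# Antecedent extraneous variable: population size of each district
population = rng.normal(50_000, 15_000, n)

# Both variables depend on population, not on each other
storks = 0.001 * population + rng.normal(0, 10, n)   # nesting sites grow with buildings
births = 0.02 * population + rng.normal(0, 150, n)   # more people, more babies

# Association and significance: storks and births are strongly correlated
r, p = stats.pearsonr(storks, births)
print(f"zero-order r = {r:.2f}, p = {p:.2e}")

# Nonspuriousness check: hold population constant by partialling it out.
# Regress each variable on population and correlate the residuals.
def residuals(y, x):
    slope, intercept, *_ = stats.linregress(x, y)
    return y - (intercept + slope * x)

r_partial, p_partial = stats.pearsonr(residuals(storks, population),
                                      residuals(births, population))
print(f"partial r (population held constant) = {r_partial:.2f}, p = {p_partial:.2f}")
```

Partialling out a control variable in this way is the statistical analogue of "holding it constant" as described above: once the common cause is removed, the storks-births association is revealed as spurious.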

***Describe some sources of measurement error in surveys attributable to the (a) interviewer and (b) respondent

Although there are no universally agreed-on criteria for the selection of interviewers, experience and common sense suggest that certain qualities are desirable. These would include articulateness; a pleasant personality that inspires cooperation and trust; a neat, businesslike appearance; freedom from prejudices or stereotypes toward the population being interviewed; interest in the survey topic; familiarity with computers; and the ability to listen, use neutral probes when needed, and record responses accurately. We suspect that the presence of these qualities is evaluated to some degree during the selection and training processes. This should create reasonably well-qualified interview staffs and would explain why research has found no consistent correlates between interviewer characteristics and the quality of interviewing (Fowler, 1991; Groves et al., 2004).

Unless the survey is being done by a large research organization that has a permanent staff, the researcher must recruit all the interviewers for a given survey. The process of recruiting interviewers is basically the same as hiring for any job (Weinberg, 1983). That is, positions are advertised and applicants are screened and selected. Beyond minimum reading and writing skills, availability and readiness to meet job requirements are the principal selection criteria. Other interviewer attributes are largely dependent on market forces. The majority of interviewers, according to data from a sample of interviewers at U.S. government statistical agencies, do not regard their job as a primary source of income or a career, perhaps because of the intermittent nature of the work (Groves and Couper, 1998:198). This work feature as well as job requisites also may account for the composition of the interviewer workforce, which is predominantly female and young to middle-aged, with above-average education. Since such attributes are largely beyond the control of the researcher, it is fortunate that they appear to be much less important in determining the quality of a survey than the interviewer's ability achieved through careful training and experience.

Interviewer Training

Interviewers receive training in general interviewing skills and techniques as well as in specific procedures required for a particular survey project. In practice, these two aspects of training often are combined, with survey-specific materials (e.g., questionnaire or sampling procedures) used for practical application (Weinberg, 1983; Lessler, Eyeman, and Wang, 2008). More specifically, the training process must accomplish several goals:

1. Provide interviewers with information regarding the study's general purpose, sponsor, sampling plan, and uses or publication plans.
2. Train interviewers in locating households and eligible respondents as prescribed by the sampling design.
3. Teach basic interviewing techniques and rules, such as how to gain a respondent's cooperation, establish rapport without becoming overly friendly, ask questions and probe in a manner that will not bias the response, deal with interruptions and digressions, and so forth.
4. Acquaint interviewers with the survey instrument and instructions for CAPI or CATI.
5. Provide demonstrations and supervised practice with the interview schedule.
6. Weed out those trainees who do not possess the motivation and ability to do an acceptable job.
Studies of the effects of training on interviewer errors have shown that too little training (less than a day) is inadequate and that better-quality data are associated with more extensive training (Groves et al., 2004:294). A training program for FTF interviewing may combine home study with a series of classroom sessions. The first session might begin with a general introduction to the study, followed by a presentation and instruction in basic interviewing skills and responsibilities. In the second session, the researcher would thoroughly familiarize interviewers with the survey questionnaire, going over the entire instrument item by item, explaining the importance of each item, and giving instructions for recording responses and examples of problematic responses and ways to deal with them. Next, the researcher would conduct a demonstration interview and then divide interviewers into pairs for supervised practice interviewing. Third and subsequent sessions would involve further practice, possibly including field experience, and further evaluation. Experienced interviewers generally receive survey-specific training through home study of the project's special interviewing procedures and survey instrument followed by discussions or mock interviews with a field manager. For an excellent discussion of what trainees should expect in telephone survey training, see Patricia Gwartney's The Telephone Interviewer's Handbook (2007).

Pretesting

A pretest consists of trying out the survey instrument on a small sample of persons having characteristics similar to those of the target group of respondents. The basic reason for conducting a pretest is to determine whether the instrument serves the purposes for which it was designed or whether further revision is needed. Pretesting may be carried out before, at the same time as, or after the interviewers are trained. An advantage of completing the pretest before interviewer training is that the final instrument may be used during training. An advantage of delaying pretesting is that the interviewers can assist with this step, either during the field practice part of interviewer training or after the formal training is completed. A disadvantage of delaying pretesting is that there may be a time gap between the completion of training and the start of the "real" interviews while the instrument is being revised. The subject of pretesting is discussed in more depth in chapter 10.

Gaining Access

Gaining access to respondents involves three steps: getting "official" permission or endorsement when needed or useful, mailing a cover letter introducing the study to persons or households in the sample, and securing the cooperation of the respondent.

When doing a community interview survey, it is usually a good idea to write a letter to or visit a local official to describe the general purpose of the study, its importance, the organization sponsoring it, the uses to which the data will be put, the time frame, and so forth. In addition, endorsement letters from relevant local organizations may be sought, such as the county medical society if doctors will be interviewed, or the chamber of commerce if businesses are being sampled.

Respondent cooperation also will be enhanced by a good cover letter. In interview surveys, the cover letter is usually mailed a few days before the interviewer is to call on the respondent. In surveys using mailed questionnaires, the cover letter is sent with the questionnaire, either as a separate sheet or attached to the questionnaire.
Dillman (2007) also recommends mailing a brief prenotice letter just prior to a questionnaire mailing. Occasionally, cover letters are mailed out in advance of phone surveys when mailing addresses are known. In an analysis of twenty-nine research studies, sending a letter in advance improved telephone response rates by an average of about 8 percent (de Leeuw et al., 2007). The objective of the cover letter, to persuade the respondent to cooperate with the survey, may be met by (1) identifying the researcher and survey sponsor and the phone number of a contact person, (2) communicating the general purpose and importance of the study, (3) showing how the findings may benefit the individual or others (e.g., the results will be used to improve health care or to increase understanding of marriage relationships), (4) explaining how the sample was drawn and the importance of each respondent's cooperation to the study, (5) assuring individuals that they will not be identified and that their responses will be kept confidential and will be combined with those of others for data-analysis purposes, (6) explaining that the questionnaire will take only a few minutes to fill out or that the interview will be enjoyable and will be held at the respondent's convenience, and (7) promising to send respondents a summary of the study's findings. (See Box 9.2 for an example of a cover letter.) Also, response rates may be increased by including a token incentive, such as $1 or $2, with the cover letter (Dillman, 2007).

A third step in interview surveys is gaining the cooperation of the respondent. Interviewers must contact or reach the sample person and then persuade him or her to cooperate by completing the survey. Making contact with designated households is primarily a matter of persistence and overcoming barriers. Interviewers vary the time of attempted contacts and may make repeated call-backs, typically at least six for FTF surveys and ten or more for telephone surveys (Fowler, 1993). We will discuss follow-up efforts further as a final phase of fieldwork.

Avoiding refusals is a more difficult problem that requires special interviewer skills. According to Robert Groves and Mick Couper's (1996, 1998) theory of survey participation, which applies mainly to in-person interviews, in the initial moments of the survey encounter the sample person is actively trying to comprehend the purpose of the interviewer's visit. He or she uses cues from the words, behavior, and physical appearance of the interviewer to arrive at an explanation (or identify a "script") and then evaluates the costs of continuing the conversation. Whether the person eventually agrees or refuses to participate depends on the interviewer's ability to quickly and accurately judge the particular script reflected in the householder's initial response and to react accordingly. The theory is consistent with analyses of interviewer-householder interactions. For example, Robert Groves, Robert Cialdini, and Mick Couper (1992) found that experienced interviewers use two related strategies to convince respondents to participate. First, they tailor their approach to the sample unit, adjusting their dress, mannerisms, language, and arguments according to their observations of the neighborhood, housing unit, and immediate reactions of the householder.
Second, they maintain interaction, which maximizes the possibility of identifying relevant cues. In contrast, tailoring is difficult in telephone interviews due to the absence of visual cues and inadequate time—often a few seconds—to maintain interaction and build up rapport (for an exploratory study, see Couper and Groves, 2002). Nevertheless, phone interviewers are trained to instantly make and maintain a positive impression upon the respondent through their voice (tone, pitch, speed, and enunciation) and pre-memorized scripts for introducing the survey and addressing respondents' reluctance to participate (e.g., "I'm not good [at] answering surveys," "I'm too busy," "You are invading my privacy") or other concerns (Gwartney, 2007).

Standardized interviewing is governed by rules such as the following:

1. Read the questions exactly as written.
2. If a respondent does not answer a question fully, use nondirective follow-up probes to elicit a better answer. Standard probes include repeating the question and asking "Anything else?" "Tell me more," and "How do you mean that?"
3. Record answers to questions without interpretation or editing. When a question is open-ended, this means recording the answer verbatim.
4. Maintain a professional, neutral relationship with the respondent. Do not give personal information, express opinions about the subject matter of the interview, or give feedback that implies a judgment about the content of an answer.

Considerable empirical evidence and sound theoretical arguments justify standardization principles (Fowler, 1991; Fowler and Mangione, 1990). For example, if questions are not asked as worded, one cannot know what question was posed; and numerous experiments have shown that small changes in the wording of questions can alter the distribution of answers (Schuman and Presser, 1981). Also, experiments have demonstrated that suggestive questioning (presenting only a subset of answer alternatives that are presumed to be relevant) and suggestive probing can affect response distributions and relationships with other variables (Smit, Dijkstra, and van der Zouwen, 1997).

While there is little doubt that standardization reduces the interviewer's contribution to measurement error, a more serious, long-standing concern is that presenting a standard stimulus in and of itself can produce measurement error. According to this view, standardized interviewing stifles interviewer-respondent communication in two ways: (1) it inhibits the ability to establish rapport, which motivates respondents to cooperate and give complete and accurate answers, and (2) it ignores the detection and correction of communication problems.

Regarding motivation, some respondents feel irritated by the unilateral nature of a structured survey: They cannot converse with the researcher or interviewer, they cannot qualify or expand answers, and they may be forced to choose among alternative answers that they find unsatisfactory. Fowler (1995:99-102) believes that respondents' resistance to standardized survey interviewing can be overcome by initially orienting or training them to play the role of a respondent. Since interviewer-respondent interaction in a highly structured survey is quite different from everyday conversation, respondents should be given an explanation of the rationale and the rules of standardized surveys to prepare and motivate them for the interview task. Standardization also may reduce validity if respondents' misinterpretations of questions are ignored or uncorrected.
In a widely cited article, Lucy Suchman and Brigitte Jordan (1990) argued that standardization suppresses elements of ordinary conversation that are crucial for establishing the relevance and meaning of questions. Interviewers who are trained to read questions as written and to discourage elaboration are not prepared to listen carefully for misunderstandings and correct them. From videotapes of standardized interviews, Suchman and Jordan gave several examples of miscommunication, such as an interviewer failing to correct a respondent who interprets "alcoholic beverages" to include hard liquor but exclude wine, which led to invalid responses. Such problems could be resolved, they claimed, if interviewers were granted the freedom and responsibility to negotiate the intended meaning of questions through ordinary conversational conventions.

While acknowledging the communication problems identified by Suchman and Jordan, advocates of standardized interviewing tend to disagree with them about causes and remedies. Some advocates contend that problems arise chiefly because of poorly worded questions and that whether respondents interpret questions consistently and accurately depends on adequate question pretesting. The real issue, for these advocates, is how researchers can solve communication problems while harnessing the full benefits of standardization. One means is the development of better questions; another may involve adapting the role of the interviewer. For certain types of questions, such as requests for factual information, permitting interviewers to stray from standardized scripts to correct respondents' misunderstandings may improve data quality; but allowing interviewers too much flexibility will greatly increase interviewer error as well as interview length.

This tension is especially problematic in interviews, where biases may be produced not only by the wording, order, and format of the questions but also by the interaction between interviewer and respondent. Like the subject in an experiment, the respondent often is chiefly concerned with gaining the interviewer's social approval, or at least with avoiding his or her disapproval.

One source of response effects is the interviewer's physical characteristics. For example, the race of the interviewer has been shown to have a considerable impact on certain types of responses. In general, blacks express fewer antiwhite sentiments to white than to black interviewers, and whites give fewer antiblack answers to black than to white interviewers.

In a fashion similar to experimenter effects, interviewers also may inadvertently communicate their expectations to respondents about how they should respond. To illustrate, if an interviewer believes a respondent to be of limited intelligence and inarticulate, he or she may expect shorter, less articulate responses and may communicate this indirectly by short pauses. Since the respondent is looking to the interviewer for clues to the appropriateness of his or her behavior, he or she will likely provide short responses, thus fulfilling the interviewer's expectations.

On the other side, the respondent's reports to an interviewer may easily be distorted by such things as poor memory, desire to impress the interviewer, dislike for the interviewer, or embarrassment. Similarly, a respondent's feelings about the topic of the study or toward the organization sponsoring it may also affect the quality of data obtained. Finally, settings for interviews may present problems.
A housewife who is being interviewed while supervising children may not be able to focus on the tasks of the interview sufficiently to provide as full and accurate responses as she might in another situation. In the British Household Survey, the presence of a spouse during an interview led to greater agreement between husbands and wives on several attitudinal and behavioral items (Zipp and Toth, 2002). A study by Nancy Brener and associates (2006) showed that high school students were more likely to report drinking alcohol, smoking marijuana, sexual activity, and other risk behaviors when questioned at school than when questioned at home.

When initial calls fail to reach anyone at home, interviewers may be instructed "to try several follow-up procedures, such as (a) calling back at different times and days of the week, (b) asking neighbors when people are usually at home or how they might be contacted, (c) leaving notes, and (d) using a reverse directory to get a telephone number for the household" (Davis and Smith, 1992:51). Follow-up efforts help to raise response rates. Besides follow-ups, response rates may be improved (1) by appropriate efforts to gain access, discussed earlier; (2) in interview surveys, by proper interviewer training and supervision; and (3) for mailed questionnaires, by inclusion of a stamped return envelope and a token prepayment, as well as by attention to the length, difficulty, and appearance of the questionnaire (see chapter 10).

The particular follow-up activities depend on the survey mode. For telephone and FTF surveys, the major problem is dealing with refusals. In many surveys, more experienced interviewers or supervisors are used to try to gain the respondent's cooperation on the second try. The National Health and Social Life Survey (NHSLS) dealt with some initial refusals by employing follow-up interviewers who specialize in converting reluctant respondents and by sending out "conversion letters" that answered special concerns that potential respondents were raising (Laumann et al., 1994). In response to a clear refusal, however, one follow-up call or visit should be the limit, to avoid respondent feelings of harassment. Because the NHSLS investigators were worried about reaching their target response rate (75 percent), they started offering incentive completion fees to reluctant respondents on a selective basis in low-response areas (Laumann et al., 1994:56-57). Although incentive fees ranging from $10 to $100 were offered for interviews, the fees were viewed as cost-efficient given the high cost of interviewer wages and travel (the cost of completed interviews averaged $450).

Since response rates are typically lower for mailed questionnaires, follow-up efforts are especially important with this mode. Typically, three follow-up mailings are used. For example, in Henry Wechsler and associates' (1994) initial college alcohol survey, respondents received four separate mailings, approximately 10 days apart: the initial mailing of the questionnaire, a postcard thanking those who had completed the questionnaire and urging those who had not to do so, a mailing with another copy of the questionnaire again appealing for its return, and a second reminder postcard. Given that this survey was truly anonymous, all persons in the sample had to be sent the subsequent mailings. If questionnaires have been coded so that the researcher knows who has responded, there can be a savings in postage and paper, as only nonrespondents need receive the follow-up mailings.
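As a small illustration of the bookkeeping this involves, the sketch below (Python; the IDs, dates, and schedule are hypothetical, loosely patterned on the four-mailing design described above) sends later mailings only to nonrespondents, which is possible only when questionnaires are coded.

```python
from datetime import date, timedelta

# Hypothetical coded sample: which questionnaire IDs have been returned so far
returned = {"Q017", "Q122"}
sample_ids = {"Q017", "Q058", "Q122", "Q240"}

mailings = ["initial questionnaire", "thank-you/reminder postcard",
            "replacement questionnaire", "second reminder postcard"]
start = date(2024, 3, 1)  # hypothetical field start

for i, piece in enumerate(mailings):
    send_date = start + timedelta(days=10 * i)        # roughly 10 days apart
    # First mailing goes to everyone; follow-ups only to nonrespondents.
    targets = sample_ids if i == 0 else sample_ids - returned
    print(send_date, piece, "->", sorted(targets))
```

In a truly anonymous survey, by contrast, the `returned` set is unknowable to the researcher, so every mailing must go to the full sample, as in the Wechsler study.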

***Compare face-to-face interviews and self-administered questionnaires with respect to (a) response rates and sampling quality, (b) time and cost, and (c) complexity and sensitivity of questions asked

A critical aspect of survey instrumentation is deciding on the mode of asking questions—interviewer-administered (face-to-face or telephone surveys), self-administered (paper-and-pencil or computer-assisted questionnaires), or some combination of these modes. This choice depends partly on other planning decisions such as the research objectives, units of analysis, and sampling plan. For example, in the College Alcohol Study, the sampling plan required contact with a very large number of respondents at over 100 geographically dispersed colleges and universities. These requirements ruled out both face-to-face and telephone interviews as too expensive, time-consuming, and impractical; and questionnaires were used. In the study of young people's attitudes toward law enforcement, personal interviews were necessitated by the sensitivity and complexity of the topic and by the research objectives.

The most expensive and time-consuming mode of survey research is face-to-face interviewing, the major costs of which are incurred from direct interviewing time and travel to reach respondents. If respondents are widely dispersed geographically, this method also will require an efficient sampling procedure for locating respondents. Under these circumstances, the most cost-efficient procedure usually is multistage cluster sampling. Almost all large-scale surveys are multistage, with stratification at one or more stages. If respondents are reached easily by mail or phone, there is no reason to use clustering. Simple random or systematic sampling may be implemented, with or without stratification, provided that an adequate sampling frame can be obtained.

In comparing modes, researchers must also weigh four broad sources of survey error:

1. Coverage error: differences between the target population and the sampling frame; this is produced when the sampling frame does not include all members of the population, as when a telephone survey omits people without telephones.
2. Sampling error: the difference between a population value and a sample estimate of the value that occurs because a sample rather than a complete census of the population is surveyed.
3. Nonresponse error: differences in the characteristics of those who respond to a survey and those who refuse or cannot be contacted because of an insufficient address or wrong telephone number, or because they are never at home or are away.
4. Measurement error: inaccurate responses associated with the respondent, the interviewer, the survey instrument, and the postsurvey data processing.

Interviewers and an interview schedule permit a great deal more flexibility than is possible with a self-administered questionnaire. For example, when research objectives necessitate the use of open-ended questions, which require respondents to answer in their own words, in contrast to closed-ended questions, for which specific response options are provided, an interviewer usually will be able to elicit a fuller, more complete response than will a questionnaire requiring respondents to write out answers. This is particularly true with respondents whose writing skills are weak or who are less motivated to make the effort to respond fully. In addition, interviewers can easily utilize question formats in which certain questions are skipped when they do not apply to a particular respondent, while such a format may be confusing for respondents completing a questionnaire.
Furthermore, in cases where it is important that questions be considered in a certain order, the self-administered questionnaire presents problems because the respondent may look over the entire form before beginning to answer. Other advantages of interviewing include the ability of an interviewer to clarify or restate questions that the respondent does not at first understand. An interviewer may also help respondents clarify their answers by using probes, such as "I'm not sure exactly what you mean" or "Can you tell me more about that?" Interviewers help to ensure that every relevant item is answered; tedious or sensitive items cannot be passed over easily as in self-administered questionnaires. Even when a respondent initially balks at answering an item, a tactful explanation by the interviewer of the item's meaning or purpose frequently results in an adequate response. In addition to these characteristics that both modes of interviewing share, each has its own set of advantages and disadvantages.

Face-to-Face Interviewing

The oldest and most highly regarded method of survey research, face-to-face (FTF) or in-person interviewing has a number of advantages in addition to the ones already mentioned. The response rate, the proportion of people in the sample from whom completed interviews (or questionnaires) are obtained, is typically higher than in comparable telephone or mail surveys. This survey mode is appropriate when long interviews are necessary. FTF interviews of 1 hour's length are common, and they sometimes go much longer. (GSS interviews take about 90 minutes for completion of some 400 questions.) In FTF interviewing, one can use visual aids such as photographs and drawings in presenting the questions, as well as cards that show response options. The cards may be useful when response options are difficult to remember or when it is face-saving for respondents to select the option or category on the card rather than to say the answer aloud. Finally, FTF interviewing permits unobtrusive observations that may be of interest to the researcher. For example, the interviewer may note the ethnic composition of the neighborhood and the quality of housing.

There are some disadvantages to this method, the greatest of which is cost. The budget for an FTF survey must provide for recruiting, training, and supervising personnel and for interviewer wages and travel expenses, plus lodging and meals in some cases. The difficulty of locating respondents who are not at home when the interviewer first calls is another disadvantage of this survey mode. In more and more households, no adult is at home during the day. The response rate for heterogeneous samples in metropolitan areas has been declining for several years, although this is not true for rural areas or among specialized target groups.

Telephone Interviewing

Like FTF interviewing, telephone interviewing has its advantages and disadvantages. Substantial savings in time and money are two of the reasons survey researchers choose to use this method. Large survey research organizations that have a permanent staff can complete a telephone survey very rapidly, and even those researchers who must hire and train interviewers can complete a telephone survey faster than one requiring FTF interviews or mailed questionnaires. The costs for sampling and data collection in telephone surveys have been estimated to be 10 to 15 percent of those for FTF interview surveys (Groves et al., 2004:161).
However, telephone survey costs will exceed those for mailed questionnaires, even with several follow-up mailings included.

Another major advantage of telephone interviewing is the opportunity for centralized quality control over all aspects of data collection (Lavrakas, 1993), including question development and pretesting, interviewer training and supervision, sampling and call-backs, and data coding and entry. Administration and staff supervision for a telephone survey are much simpler than for a personal interview survey. No field staff is necessary; in fact, it is possible to have the researcher, interviewers, and coders working in the same office. This arrangement permits supervisors to monitor ongoing interviews, allowing immediate feedback on performance and helping to minimize interviewer error or bias. Coders may be eliminated, and the interviewers can enter numbers corresponding to respondent answers directly into a computer terminal. If they are used, coders may provide immediate feedback to interviewers and their supervisors.

In terms of sampling quality, the telephone survey mode falls between the FTF interview and the mailed questionnaire. In the past, lists of telephone subscribers were used in the sampling process, creating a problem of coverage error by excluding those who had unlisted telephone numbers or had moved too recently to be directory listed (Steeh, 2008:226). The problem of missing those with unlisted numbers was resolved through random-digit dialing (RDD) procedures in which telephone numbers are chosen randomly. In the simplest RDD design, telephone prefixes (exchanges) within the target geographic area are sampled, and then the last four digits of the telephone number are randomly selected. In actual practice, more complex "list-assisted" RDD procedures are used to screen out nonresidential and other ineligible numbers, and "dual-frame" procedures are being developed to sample both mobile and landline phones.

Still, telephone surveys have their limitations, most notably declining response rates. Other limitations of telephone surveys are interview duration and the complexity of the questions asked. Conducting a telephone interview longer than 20 to 30 minutes increases the risk of nonresponse and mid-interview termination (de Leeuw, 2008). While the interviewer may repeat a question, it is desirable to develop questions simple enough to be understood and retained by respondents while they formulate an answer. Research also has shown that open-ended questions yield shorter, less complete answers in telephone interviews than in FTF interviews (Groves and Kahn, 1979). Furthermore, questions with multiple response options may present difficulties in that the interviewer cannot present the options on cards but must read and, if necessary, repeat them to respondents at the risk of boring them. For these reasons, the telephone survey mode lacks the advantages of the FTF mode in regard to the types of questions that can be used.

Another disadvantage of the telephone interview is that it is more difficult for interviewers to establish trust and rapport with respondents than it is in FTF interviews; this may lead to higher rates of nonresponse for some questions and underreporting of sensitive or socially undesirable behavior. Robert Groves (1979) compared the results of two identical telephone surveys based on separate samples with the results of an FTF survey asking the same questions.
Still, telephone surveys have their limitations. One is declining response rates, discussed further below. Other limitations of telephone surveys are interview duration and the complexity of the questions asked. Conducting a telephone interview longer than 20 to 30 minutes increases the risk of nonresponse and mid-interview termination (de Leeuw, 2008). While the interviewer may repeat a question, it is desirable to develop questions simple enough to be understood and retained by respondents while they formulate an answer. Research also has shown that open-ended questions yield shorter, less complete answers in telephone interviews than in FTF interviews (Groves and Kahn, 1979). Furthermore, questions with multiple response options may present difficulties in that the interviewer cannot present the options on cards but must read and, if necessary, repeat them to respondents at the risk of boring them. For these reasons, the telephone survey mode lacks the advantages of the FTF mode in regard to the types of questions that can be used.

Another disadvantage of the telephone interview is that it is more difficult for interviewers to establish trust and rapport with respondents than it is in FTF interviews; this may lead to higher rates of nonresponse for some questions and underreporting of sensitive or socially undesirable behavior. Robert Groves (1979) compared the results of two identical telephone surveys based on separate samples with the results of an FTF survey asking the same questions. At the end of the questionnaire were items about respondents' reactions to the interview. Among other questions, respondents were asked if they felt uncomfortable talking about certain topics, such as income, their income tax refund, political opinions, or racial attitudes. For each of the sensitive topics, more telephone respondents felt uncomfortable, with the largest differences for the income and income tax questions. Not surprisingly, the telephone surveys showed lower response rates to the income questions. Other studies confirm that respondents are less likely to divulge illegal or socially undesirable behavior to an interviewer by telephone than FTF.

Despite these disadvantages, telephone surveys became the most popular survey method in the United States and Western Europe in the last quarter of the twentieth century. Reduced time and cost are major advantages. Furthermore, closer supervision, developments in RDD and CATI, and the high percentage of households with telephones made the quality of telephone surveys only slightly inferior to that of FTF interviewing. Now, however, unless a solution can be found for rapidly declining telephone response rates, some survey researchers offer a grim prognosis for the long-term future of this survey mode. Perhaps the biggest challenge is the rapid proliferation of mobile telephones and the growth of the cell phone-only population: John Ehlen and Patrick Ehlen (2007) forecast that 40 percent of U.S. adults under age 30 will have a cell phone-only lifestyle by the end of 2009. Complex methodological, statistical, legal, and ethical issues must be resolved to develop effective dual-frame sampling of mobile and landline telephones.

Paper-and-Pencil Questionnaires: Occasionally, the site of a paper-and-pencil questionnaire is a school or organization, where the questionnaire may be hand-delivered and filled out in a group or individually. Most often, however, the setting is the home or a workplace. To get to this setting, almost all self-administered questionnaires are mailed to respondents; therefore, we will discuss the pros and cons of this method as a mail survey. The mail survey is less expensive than FTF or telephone interviews, even though the budget for printing and postage must be sufficiently high to permit follow-up mailings. No interviewers or interviewer supervisors are needed, there are no travel or telephone expenses, and very little office space is required. In some surveys, the staff may consist of just one or two persons in addition to the researcher. Groves et al. (2004:161) estimate that mail surveys generally cost 59 to 83 percent less than telephone surveys. The time required to complete the data-collection phase of the survey is greater than that for telephone surveys but usually less than that for FTF surveys. The sample size may be very large, and geographic dispersion is not a problem. Furthermore, there is greater accessibility to respondents with this method, since those who cannot be reached by telephone or who are infrequently at home usually receive mail.

Compared with interviews, self-administered questionnaires offer several advantages to motivated respondents (Mangione, 1995). Respondents are free to select a convenient time to respond and to spend sufficient time to think about each answer.
The absence of an interviewer also assures privacy, which may explain why respondents are more willing to reveal illegal or socially undesirable behaviors and other sensitive information in a self-administered questionnaire than to an interviewer (Tourangeau and Smith, 1996:277-79).

On the other hand, coverage and nonresponse errors may be magnified with this survey mode. The researcher must sample from a mailing list, which may have some incorrect or out-of-date addresses and which may omit some eligible respondents.3 Also, the response rate tends to be much lower with mailed questionnaires than with other survey modes. A meta-analysis of forty-five research studies in which two or three data-collection modes were compared found that, on the average, FTF surveys have the highest response rate (70.3 percent), telephone surveys the next highest (67.2 percent), and mail surveys the lowest (61.3 percent) (Hox and de Leeuw, 1994). Still, even though rates of 50 percent or lower are fairly common in mail surveys, it is possible to obtain higher response rates by following the highly detailed "tailored design" procedures developed by Don Dillman (2007). The most important factors in generating high return rates are reducing the costs for the respondent and increasing the perceived importance and rewards of survey participation (Heberlein and Baumgartner, 1978; Yammarino, Skinner, and Childers, 1991). Costs are reduced by including postpaid return envelopes, enclosing small cash prepayments (rather than monetary incentives conditional upon survey completion), and making the questionnaire shorter, visually appealing, and easier to complete (Dillman, Sinclair, and Clark, 1993; Warriner et al., 1996). The importance of the survey is impressed on respondents by using stamped rather than metered mail, by making special appeals within the cover letter, by personalizing correspondence (e.g., with real signatures and salutations with respondents' first names), and by making repeated contacts in the form of preliminary notification and follow-ups emphasizing different appeals (Dillman, 2007; Mangione, 1995).

In addition to a generally lower overall response rate, self-administered questionnaires may introduce nonresponse bias due to response selectivity. Certain groups of persons, such as those with little writing ability and those not interested in the topic, are less likely to respond to a mailed questionnaire than to a personal interview request. Also, more questions are left unanswered with self-administered questionnaires than with interview methods. The problem of item nonresponse may be alleviated to some extent by instructions explaining the need for every item to be answered, by assurances of confidentiality, and by making items easy to understand.

While interviewer bias is eliminated, so are the advantages of an interviewer: there is no opportunity to clarify questions, probe for more adequate answers, or control the conditions under which the questionnaire is completed, or even who completes it. A mailed questionnaire usually yields the most reliable information when closed questions are used, when the order in which questions are answered is unimportant, and when the questions and format are simple and straightforward.
The mail questionnaire may serve the research purposes well under the following conditions: with specialized target groups who are likely to have high response rates,4 when very large samples are desired, when costs must be kept low, when ease of administration is necessary, and when moderate response rates are considered satisfactory.
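As a minimal illustration of the response rates cited above, and assuming the simple definition given earlier (completed interviews or questionnaires divided by the number of people sampled), here is a short Python sketch; the counts are hypothetical, back-calculated from the Hox and de Leeuw (1994) averages:

    def response_rate(completed, sampled):
        # Proportion of sampled people from whom completed
        # interviews or questionnaires are obtained.
        return completed / sampled

    # Hypothetical counts per 1,000 sampled, matching the cited averages.
    for mode, completed in [("FTF", 703), ("telephone", 672), ("mail", 613)]:
        print(mode, response_rate(completed, 1000))  # 0.703, 0.672, 0.613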

***What is the difference between trend, panel, and cohort studies? Which of these study designs permits the assessment of individual change?

Because cross-sectional designs call for collection of data at one point in time, they do not always show clearly the direction of causal relationships, and they are not well suited to the study of process and change. Of course, investigators can make inferences about the logical relations among variables, and respondents can be asked about both past and present events; but both of these sources of evidence are highly fallible. To provide stronger inferences about causal direction and more accurate studies of patterns of change, survey researchers have developed longitudinal designs, in which the same questions are asked at two or more points in time. The questions may be asked repeatedly either of independently selected samples of the same general population or of the same individuals. This results in two main types of longitudinal designs: trend studies and panel studies.

TREND: A trend study consists of a repeated cross-sectional design in which each survey collects data on the same items or variables with a new, independent sample of the same target population. This allows for the study of trends or changes in the population as a whole. Trend studies may be illustrated by the monthly government surveys used to estimate unemployment in the United States (target population) as well as by repeated public-opinion polls of candidate preferences among registered voters (target population) as an election approaches. Ideally, all trend information would be obtained through measures repeated frequently at regular intervals, as in the GSS; however, much of our trend-survey data come from infrequent replications of studies. Most trend studies measure changes in the general population over time. To study the developmental effects of aging as well as chronological changes, it is also possible to focus on a specific cohort of persons.

COHORT: A cohort consists of persons (or other units such as organizations and neighborhoods) who experience the same significant life event within a specified period of time. Most often the life event that defines a cohort is birth, but it also might be marriage, completion of high school, entry into medical school, and so forth. Demographers long have analyzed birth cohorts from census data to predict population trends. In the past half-century, social researchers also have begun to do cohort studies of various attitudes and behaviors by tracing changes across cohorts in repeated cross-sectional surveys. Cohorts are identified (or tracked) by their birth date; for example, the 1972 birth cohort would be 18 years old in a 1990 survey, 28 years old in a 2000 survey, and 38 years old in a 2010 survey.

Cohort trend studies enable one to study three different influences associated with the passage of time. To get a sense of these influences and of the difficulty of studying them at a single point in time, consider a cross-sectional survey containing measures of age and attitudes toward premarital sex. The GSS, for example, includes the following question: "There's been a lot of discussion about the way morals and attitudes about sex are changing in this country. If a man and woman have sex relations before marriage, do you think it is always wrong, almost always wrong, wrong only sometimes, or not wrong at all?"
If we found that responses to this question became more conservative (premarital sex is always or almost always wrong) with age or time, this could be due to one of three kinds of influences: life course (as people grow older, they become more conservative), cohort (older generations are more conservative than younger generations), or historical period (the prevailing culture has become more conservative over time, making it less socially acceptable to hold liberal views toward premarital sex). The problem with cross-sectional data is that one cannot begin to disentangle these various effects. Longitudinal data in general, and cohort analyses in particular, are better suited for this purpose, although the reader should be aware that these techniques seldom provide clear causal inferences (Glenn, 1977). An example of a cohort study is David Harding and Christopher Jencks' (2003) analysis of age, cohort, and period effects on changing attitudes toward premarital sex.

PANEL: Whereas trend studies identify which variables are changing over time, panel studies can reveal which individuals are changing over time, because the same respondents are surveyed again and again. By repeatedly surveying persons over an extended period of time, a panel study can record life histories (schooling experiences, labor force participation, income changes, health conditions, and so forth) much better than retrospective questions from a cross-sectional survey can.

Some of the early panel studies analyzed data from small, highly selective samples over very long periods of time. Glen Elder's (1999) 30-year follow-up study of the impact of the Great Depression on the life course analyzed data from a panel study that began in 1932 with the selection of 167 5th-grade children from five elementary schools in Oakland, California. These individuals were observed, tested, and questioned more than a hundred times before they graduated from high school in 1939. They then completed a short questionnaire in 1941, were interviewed extensively in 1953-54 and 1957-58, and completed a final follow-up mailed questionnaire in 1964. In a study of attitude change among Bennington College students, women who were surveyed initially in the late 1930s either were interviewed or completed a mailed questionnaire in 1960 and then again in 1984 (Alwin, Cohen, and Newcomb, 1991). A major finding of this study was the persistence of political attitudes over the adult years: the Bennington women, who were liberalized in the college years, remained politically liberal in later life.

Panel studies of any duration were a rarity in the social sciences until the late 1960s, when the federal government began conducting large-scale longitudinal studies for secondary data analysis. Two drawbacks to studies of this magnitude are that they are very expensive and that they take considerable time. In addition, panel surveys have two problems not found in cross-sectional designs: respondent attrition and reactivity stemming from repeated measurement (Duncan, 2001). Panel attrition occurs when respondents interviewed in the initial wave do not respond in later waves. The more respondents who drop out, the less likely it is that follow-up samples will be representative of the original population from which the initial sample was drawn. Therefore, researchers typically make great efforts and devote sizeable resources, including incentive payments, to retain a high percentage of the initial sample in follow-up surveys.
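A note on why age, cohort, and period effects are so hard to disentangle: in any survey they are linked by the identity age = period - cohort (a respondent's age equals the survey year minus the birth year), so the three variables are linearly dependent. A minimal Python sketch using the birth-cohort example from the text:

    # Identity underlying the age-period-cohort problem: age = period - cohort.
    birth_cohort = 1972
    for period in (1990, 2000, 2010):        # survey waves from the example above
        age = period - birth_cohort
        print(period, age)                   # 1990 18, 2000 28, 2010 38
    # Any two of (age, period, cohort) determine the third, so no analysis,
    # however clever, can freely estimate all three linear effects at once.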

Discuss the advantages and disadvantages of surveys.

Disadvantages: The major disadvantage of surveys relates to their use in explanatory research. Beyond association between variables, the criteria for inferring cause-and-effect relationships cannot be established as easily in surveys as in experiments. For example, the criterion of directionality (that a cause must influence its effect) is predetermined in experiments by first manipulating the independent (or causal) variable and then observing variation in the dependent (or effect) variable; in most surveys, by contrast, directionality is a matter of interpretation, because variables are measured at a single point in time. Consider also the criterion of eliminating plausible rival explanations. Experiments do this effectively through randomization and other direct control procedures that hold extraneous variables constant. In contrast, surveys must first anticipate and measure relevant extraneous variables in the interviews or questionnaires and then exercise statistical control over these variables in the data analysis. Thus, the causal inferences from survey research generally are made with less confidence than inferences from experimental research.

Although surveys are quite flexible with respect to the topics and purposes of research, they also tend to be highly standardized. This makes them less adaptable than experiments and other approaches in the sense that it is difficult to change the course of research after the study has begun: once the survey instrument is in the field, it is too late to make changes. The experimenter, in contrast, can modify the research design after running a few subjects with the loss of only those subjects.

A more serious weakness of surveys is one they share with laboratory experiments: they are susceptible to reactivity, which introduces systematic measurement error. A good example of this, noted in chapter 5, is the tendency of respondents to give socially desirable answers to sensitive questions. Another inherent weakness is that surveys rely almost exclusively on reports of behavior rather than observations of behavior. As a consequence, measurement error may be produced by respondents' lack of truthfulness, misunderstanding of questions, inability to recall past events accurately, and instability of opinions and attitudes. Finally, a brief encounter for the purpose of administering a survey does not provide a very good understanding of the context within which behavior may be interpreted over an extended period of time. For this kind of understanding, the best approach is field research.
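To make "statistical control" concrete, here is a minimal sketch (invented data, not from the text) in which an extraneous variable z drives both x and y: the zero-order association between x and y is spurious, and regressing y on both x and z recovers the absence of a direct x effect.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000
    z = rng.normal(size=n)               # measured extraneous variable
    x = 0.8 * z + rng.normal(size=n)     # "causal" variable, driven partly by z
    y = 0.8 * z + rng.normal(size=n)     # outcome, driven by z; no direct x effect

    print(np.corrcoef(x, y)[0, 1])       # zero-order association: clearly positive

    # Statistical control: include z in the model; the x coefficient is near zero.
    X = np.column_stack([np.ones(n), x, z])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(beta)                          # approximately [0, 0, 0.8]

Unlike an experiment, this adjustment works only for extraneous variables the survey anticipated and measured, which is exactly the limitation noted above.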

Describe some characteristics of a good opening question in an interview or questionnaire.

It is best to have an interesting and nonthreatening topic at the beginning that will get respondents involved and motivate them to cooperate in completing the interview or questionnaire. The first question should be congruent with respondents' expectations: it should be a question they might reasonably expect to be asked, on the basis of what they have been told by the interviewer about the study. This sometimes involves using a question that has no research purpose other than motivating respondents by conforming to their preconceptions about what should occur in a competent survey. The first question also should be relatively easy to answer, thus preventing respondents from becoming discouraged or feeling inadequate to fulfill their role as respondents.

If both open-ended and closed-ended questions are used, the beginning is a good place to have an open-ended question.6 Most people like to express their views and to have someone listen and take them seriously. An interesting opening question is a good way to meet this need of respondents and to get them to open up and warm to the respondent role. Here are two examples:

As far as you're concerned, what are the advantages of living in this neighborhood? What do you like about living here?

Let's talk first about medical care. What would you say are the main differences between the services provided by doctors and hospitals nowadays compared with what they were like when you were a child?

It would be prudent to avoid both boring, routine questions and sensitive, personal questions in the beginning; build up interest, trust, and rapport before risking these. Uninteresting routine questions such as background information (e.g., age, gender, marital status) are often placed toward the end of the survey instrument. Asking personal questions (e.g., about racial prejudices, income, sexual activity, alcohol or drug use, religious beliefs) prematurely may embarrass or otherwise upset respondents and possibly cause them to terminate the interview or question the researcher's motives.

Some researchers place sensitive or personal topics at the end of an interview, arguing that, if the respondent fails to cooperate at this point, not much information will be lost. However, this may leave respondents with a bad taste in their mouths and may promote negative feelings toward survey research. Probably it is best to introduce such questions after the interview is well under way, because by then the respondent will have invested time and effort and possibly will have developed trust toward the research and/or interviewer. In addition, sensitive questions should fit into the question sequence logically; they should be preceded when possible by related but less sensitive questions or topics so that the relationship of the personal questions to the topic and to the research is clear to the respondent. It also may be helpful to precede the most sensitive questions with a direct explanation of their importance to the research and to repeat an assurance of confidentiality.

***How can one minimize the tendency to give socially desirable responses?

One frequent response bias is the tendency to answer in the direction of social desirability (see DeMaio, 1984). We all have a private self-image to maintain; in addition, many respondents will want to make a good impression on the researcher by appearing sensible, healthy, happy, mentally sound, free of racial prejudice, and the like. Some individuals and groups demonstrate this tendency more than others. Indeed, Derek Phillips (1971:87-88) suggests that the consistent finding of greater happiness, better mental health, and lower racial prejudice among middle-class compared with lower-class respondents may not reflect true class differences in these variables but instead a greater concern among the middle class to give socially desirable responses.

Some common techniques for minimizing social desirability bias have been mentioned previously: use of indirect questions; careful placement and wording of sensitive questions; assurances of anonymity and scientific importance; statements sanctioning less socially desirable responses; building of rapport between interviewer and respondent; and collection of sensitive information with self-administered forms (even within face-to-face interviews) or other modes that provide privacy to respondents.

Compare the advantages and disadvantages of open versus closed questions. When is it advisable to use open rather than closed questions? Why should open questions be used sparingly in self-administered questionnaires?

Open-Ended and Closed-Ended Questions: A major choice among "materials" concerns open-ended and closed-ended questions. The open-ended (also called "free-response") question requires respondents to answer in their own words (in written form on a questionnaire or aloud to an interviewer). The closed-ended (or "fixed-choice") question requires the respondent to choose a response from those provided.

The greatest advantage of the open question is the freedom the respondent has in answering. The resulting material may be a veritable gold mine of information, revealing respondents' logic or thought processes, the amount of information they possess, and the strength of their opinions or feelings. Frequently, the researcher's understanding of the topic is clarified and even completely changed by unexpected responses to open questions. But this very quality of open questions, the wealth of information, has a drawback: the "coding" problem of summarizing and analyzing rich and varied (and often irrelevant and vague) responses. Coding such material is a time-consuming and costly process that invariably results in some degree of error. Open questions also require interviewers skilled in "recognizing ambiguities of response and in probing and drawing respondents out . . . to make sure they give codable answers" (Sudman and Bradburn, 1982:151); probing gives respondents encouragement and time to think and to clarify their responses.

Other problems with the open question include (1) the varying length of responses (some people are unbelievably verbose; others, exceedingly reticent), (2) the difficulty with inarticulate or semiliterate respondents, (3) the difficulty interviewers have in getting it all down accurately, and (4) the reluctance of many persons to reveal detailed information or socially unacceptable opinions or behavior. Open-ended questions also entail more work, not only for the researcher but also for the respondent. Indeed, open questions should be used sparingly, if at all, in self-administered questionnaires or Web surveys, where respondents must write or type rather than speak.

Closed-ended questions are easier on the respondent because they require less effort and less facility with words. The presence of response options also enhances standardization by creating the same frame of reference for all respondents. When used in an interview, closed questions require less work and training to administer, and the interview may be shortened considerably. On the other hand, good closed questions are difficult to develop. It is easy to omit important responses, leading respondents to choose among alternatives that do not correspond to their true feelings or attitudes. Research shows that respondents tend to confine themselves to the alternatives offered, even if they are explicitly given a choice such as "Other _____ (please explain)" (Schuman and Presser, 1981). This is consistent with Grice's relevance principle, because respondents are likely to view the list of response options as indicative of the researcher's interests.
To provide a list of response options that are meaningful to the respondent, the recommended procedure is to use open questions in preliminary interviews or pretests to determine what members of the study population say spontaneously; this information then may be used to construct meaningful closed alternatives for the final instrument. Unfortunately, this procedure is not always followed; time and financial limitations may prevent pretesting of sufficient scope to yield adequate information on the population's responses.

Even when the range of possible responses is known, they may be too numerous to list in a closed question, especially in a telephone interview. It is better, obviously, to ask the open question "In what state or foreign country were you born?" than to list all states and foreign countries. Similarly, questions about occupations, medical conditions, favorite television shows, and the like are best asked open-ended (Fowler, 1995).

Given these advantages and disadvantages, when should one choose open or closed questions? Robert Kahn and Charles Cannell (1957) suggest five considerations: (1) the objectives of the survey, (2) the level of information possessed by respondents in regard to the topic, (3) how well respondents' opinions are thought out or structured, (4) the motivation of respondents to communicate, and (5) the extent of the researcher's knowledge of respondents' characteristics.

1. Consider first the study's objectives. If you simply want to classify respondents with respect to some well-understood attitude or behavior, the closed question would probably be appropriate and most efficient. However, the open question is usually preferable when the survey objectives are broader and you are seeking such information as the basis on which opinions are founded, the depth of respondent knowledge, or the intensity with which respondents hold opinions.

2. A second consideration is the amount of information that respondents already have on the topics of interest. If you believe that the vast majority will have sufficient information regarding the survey's topics, the closed question may be acceptable. On the other hand, if you are uncertain as to respondents' level of information or if you anticipate a wide range in the amount of knowledge, the open question is more appropriate. With closed questions, uninformed respondents may conceal their ignorance by making arbitrary choices, yielding invalid reports. And even adding the response option "don't know" may not resolve the problem, since this option is unlikely to be popular with respondents who are sensitive about appearing ill-informed.1 It is easier, for example, to respond "approve" or "disapprove" of the Supreme Court decision Roe v. Wade than to admit not knowing what it is.

3. A related consideration is the structuring of respondent thought or opinion. Are respondents likely to have thought about the issue before? Can they take a position or express a definite attitude? If respondents are likely to have given previous thought to the matter and the range of typical responses is known to the researcher, the closed question may be satisfactory. This might be the case, for example, with a survey designed to measure the attitudes of registered voters toward the performance of the president. However, if respondents' ideas are less likely to be structured, open questions may be preferable.
Suppose you wanted to ascertain why college students chose XYZ University or why couples desired a certain number of children; for such questions, the reasons may be numerous and not always immediately accessible to respondents. A series of open questions would allow respondents time to recall, think through, and talk about various aspects of their decisions, rather than hastily selecting a possibly incomplete or inappropriate response provided by a closed question.

4. The motivation of respondents to communicate their experiences and thoughts is a further consideration. In general, the open question will be successful only when the respondent is highly motivated, because this question type is more demanding in terms of effort, time, and self-disclosure. Therefore, with less motivated respondents, closed questions may lead to better-quality data. On the other hand, closed questions sometimes dampen respondent motivation, in that some people prefer to express their views in their own words and find being forced to choose among limited fixed-choice responses very irritating.

5. A fifth important consideration in choosing between open and closed questions is the extent of the researcher's previous knowledge of respondent characteristics. That is, how well does the researcher understand the vocabulary and amount of information possessed by the respondents, the degree of structure of respondents' views, and their level of motivation? Unless the researcher has done similar studies previously or has done extensive preliminary interviewing, the most likely answer is "not very well." If this is the case, open questions should yield more valid (albeit more difficult to summarize and analyze) data.

Turning to response options for closed-ended questions: the simplest is a simple "yes" or "no." This would be appropriate for such questions as "Do you belong to a labor union?" and "Have you ever been threatened with a gun?" However, even though many types of information form natural dichotomies, this kind of question appears less frequently than you might think; with many apparently dichotomous items, respondents may prefer to answer "don't know," "uncertain," or "both." When measuring feelings or subjective states, on the other hand, respondents who are ambivalent, indifferent, or even lazy are likely to choose an explicit "don't know" option rather than attempt to express their feelings (a satisficing strategy). In such situations, it may be more appropriate to omit "don't know" response categories, thus counting only the "don't know" responses that respondents volunteer. In interviewer surveys, "don't know" options may be omitted from the question and left to the interviewer's discretion. If they are omitted from the set of responses in a Web survey, the respondent should be allowed to skip the question rather than be forced to choose an answer to continue the survey.4

***What are the four units of analysis that can be studied in social science research? Give an example to differentiate among the four types.

The entities (objects or events) under study are referred to in social research as units of analysis. These include individual people; social roles, positions, and relationships; a wide range of social groupings such as families, organizations, and cities; and various social artifacts such as books, periodicals, documents, and even buildings. Ordinarily, the unit of analysis is easily identified: the unit is simply what or who is to be described or analyzed.

Social groupings example: A researcher wanting to determine if larger organizations (in terms of the number of employees) have more bureaucratic rules and regulations than smaller ones would treat the organization as the relevant unit and would gather information on the size and bureaucratic complexity of different organizations.

Individual people example: In Broh's study of the impact of extracurricular activities on academic achievement, the unit of analysis was individuals or, more precisely, individual high school students.

Social relationships/systems example: Social systems are aspects of communities or nations, not individuals; Markowitz thus chose to analyze cities. Examining the number of psychiatric beds in hospitals and the crime and arrest rates in 81 U.S. cities, he found that the lower a city's capacity to place the mentally ill in public psychiatric hospitals, the higher the city's crime and arrest rates.

Social artifacts example: Riesman's The Lonely Crowd theorized a general trend toward "other-directedness": because of changes wrought by the Industrial Revolution, such as the expansion of white-collar and service jobs, increasing material abundance, and the development of mass media of communication, people's actions were becoming less motivated by intrinsic values and more influenced by the actions of others. But how can one study long-range trends in individual motivation when it is impossible to analyze individuals from the past? One social scientific solution is to rely on various social artifacts and to assume that such artifacts reflect the individual values and behavior of direct interest. To test Riesman's theory, Sanford Dornbusch and Lauren Hickman (1959) chose as their units of analysis advertisements in a mass-circulation women's magazine for the period 1890-1956, to see if the advertising had appealed increasingly over time to the standards of others (other-directedness). Indeed, it had.

***What is an ecological fallacy? Why do researchers need to be aware of this when analyzing study findings? Provide an example.

The most common fallacy involving the mismatching of units is the ecological fallacy (Robinson, 1950). This occurs when relationships between properties of groups or geographic areas are used to make inferences about the individual behaviors of the people within those groups or areas. It is quite similar to what logicians call the "fallacy of division": assuming that what holds true of a group also is true of individuals within the group. Knowing that Sally attended a college whose students had relatively low average SAT scores, you would commit this fallacy if you assumed that Sally herself had low SAT scores.

It is not always wrong to draw conclusions about individual-level processes from aggregate or group-level data. Social scientists have identified conditions under which it is reasonable to make such inferences (e.g., Firebaugh, 1978), but it is often difficult to determine if these conditions are met. The implications of the ecological fallacy are clear: carefully determine the units about which you wish to draw conclusions and then make sure that your data pertain to those units. If you are interested in individuals but only aggregate data are available, then draw conclusions very tentatively, recognizing the possibility of an ecological fallacy.

KEY POINT: If data describe social units (e.g., schools), one must be cautious in drawing conclusions about individuals (students within the schools).

EXAMPLE: At one time social scientists frequently performed ecological analyses of this kind. For example, criminologists analyzed crime and delinquency rates in relation to other characteristics of census tracts in order to draw conclusions about characteristics of individual criminals and delinquents. A typical erroneous conclusion might be that foreign-born persons commit more crimes than native-born persons because the crime rate is higher in areas with greater proportions of foreigners. Such a conclusion is unwarranted because we do not know who actually committed the crimes, foreign-born or native-born persons. Similarly, Durkheim's classic study of suicide risked the ecological fallacy by inferring that Protestants commit more suicides than Catholics from the observation that suicide rates were higher in predominantly Protestant nations than in predominantly Catholic ones.1
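A minimal simulation (invented data, not from the text) shows how the fallacy arises: the area-level ("ecological") correlation can be strongly positive even when the relationship among individuals within every area is negative.

    import numpy as np

    rng = np.random.default_rng(1)
    x_all, y_all, gx, gy = [], [], [], []
    for m in rng.normal(size=20):            # 20 areas, each with mean trait level m
        x = m + 3 * rng.normal(size=200)     # individuals vary widely within areas
        y = 2 * m - 0.5 * (x - m) + rng.normal(size=200)  # within-area slope is negative
        x_all.append(x); y_all.append(y)
        gx.append(x.mean()); gy.append(y.mean())

    print(np.corrcoef(gx, gy)[0, 1])         # area-level r: strongly positive
    print(np.corrcoef(np.concatenate(x_all),
                      np.concatenate(y_all))[0, 1])  # individual-level r: negative

An analyst who saw only the 20 area-level points would infer exactly the wrong individual-level relationship, which is the error in the crime-rate and suicide examples above.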

