Soc. 200 Ch. 8

measurement issues (strengths & weaknesses of surveys)

*another weakness of surveys* - surveys rely almost exclusively on self-reports of behavior rather than direct observation of behavior; as a consequence, validity and reliability may be undermined by respondents' lack of truthfulness, misunderstanding of questions, inability to recall past events accurately, and instability of opinions & attitudes. Like experiments, surveys also are susceptible to reactive measurement effects produced by participants' awareness of being studied. A good example, noted in Chapter 5, is the tendency of respondents to give socially desirable answers to sensitive questions; this is particularly likely to occur in interview surveys. - a brief encounter for the purpose of administering a survey does not provide a very good understanding of the impact of context on behavior; for that understanding, social researchers turn to field research (CH. 9)

versatility (strengths & weaknesses of surveys)

*versatility of surveys* - the topics covered and the questions that may be included in surveys are wide-ranging. Topics of studies cited in this chapter range from what counts as family to the consequences of alcohol consumption to multitasking and academic performance. The following questions from the GSS suggest the broad scope of possible survey questions: pg. 236

unstructured interview

A type of interview guided by broad objectives in which questions are developed (spontaneously) as the interview proceeds.

telephone interview def.

A type of interview in which interviewers interact with respondents by telephone.

semi-structured interview

A type of interview that, while having specific objectives, permits the interviewer some freedom in meeting them

structured interviews

A type of interview with highly specific objectives in which all questions are written beforehand and asked in the same order for all respondents, and the interviewer's remarks are standardized.

field pretesting

An evaluation of a survey instrument that involves trying it out on a small sample of persons (similar to those of the target group of respondents). Powell and associates field pretested the CFS by administering a draft of the CFS to 21 randomly selected respondents, using the same sampling methods applied in the main survey

coding def.

The sorting of data into numbered or textual categories.

choose sampling frame/ design and select sample

a sampling frame denotes the set of all cases from which the sample is actually selected. for most surveys, the sampling frame consists of an available listing of all units in the target population: face-to-face interviews require a listing of residences, mail surveys need mailing addresses, telephone surveys need telephone numbers, and Web surveys need e-mail addresses. If a sampling frame fails to provide adequate coverage of the target population, a researcher may switch modes or resort to a mixed-mode strategy. the target population in the CFS consisted of adults (age 18 or older) living in US households - to draw a probability sample of this population, the researchers used *random-digit dialing (RDD)*

explanatory survey

a survey that investigates relationships b/w two or more variables, often attempting to explain them in cause-and-effect terms (more sophisticated)

descriptive survey

a survey undertaken to provide estimates of the characteristics of a population (simpler; the aim is to 'describe'). refer to table on pg. 207

first step in process of planning and conducting a survey:

choose mode of data collection; consider the strengths and limitations of each - the choice depends on the goals of the research and the resources available. the principal goals of the CFS were to determine how Americans define family and to examine how definitions of family are related to social background and other variables. In addition, the investigators wished to explore more deeply the thought processes behind people's views of family by asking several open-ended questions. (an FTF survey would have been the best option, but was cost prohibitive; therefore, a telephone survey was the best option, especially given the resources of the Center for Survey Research) - The Center had conducted many telephone surveys, had a CATI system in place, and had a well-trained staff of interviewers.

random-digit dialing (RDD)

def. A sampling-frame technique in which dialable telephone numbers are generated (sampled) randomly. in the simplest RDD design, telephone prefixes (exchanges) within the target geographic area are sampled and then the last 4 digits of the telephone # are randomly selected. the CFS used a more complex *list-assisted* RDD procedure, which provides more efficient coverage in RDD sampling by eliminating nonresidential numbers and by including unlisted numbers in the sampling frame; this frame is created from a national database containing telephone #'s for all households w/ listed numbers - this info is then used to "assist" in creating a frame that enables researchers to select random telephone numbers including both directory-listed and unlisted numbers [once someone answers the phone in a randomly selected household, the interviewer must randomly select an eligible person to interview] - there are a variety of ways to do this....
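
the "simplest RDD design" above can be sketched in a few lines of Python. this is an illustrative toy, not the CFS procedure: the prefixes are made up, and real list-assisted RDD works from a national database of listed numbers.

```python
import random

def simple_rdd_sample(prefixes, n, seed=None):
    """Simplest RDD design: sample a telephone prefix (area code +
    exchange) from the target geographic area, then append a random
    4-digit suffix. Prefixes here are hypothetical examples."""
    rng = random.Random(seed)
    numbers = []
    for _ in range(n):
        prefix = rng.choice(prefixes)   # sampled exchange, e.g. "812-339"
        suffix = rng.randrange(10000)   # random last 4 digits
        numbers.append(f"{prefix}-{suffix:04d}")
    return numbers

sample = simple_rdd_sample(["812-339", "812-855"], 5, seed=1)
```

note how this frame reaches unlisted households too: any 4-digit suffix can come up, whether or not the number appears in a directory.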

computer assisted personal interviewing (CAPI)

interviewers carry out the survey through a laptop computer, which they bring to the respondent's home. The computer screen prompts the interviewer with instructions and questions; the interviewer reads the questions and then enters the respondent's answers. makes the interviewer's job easier, reduces mistakes, and saves time and cost, since the data are entered directly into a computer file and do not have to be entered manually = has become the standard for large-scale survey research in the US. def. A software program, usually on a portable computer, that aids interviewers by providing appropriate instructions, question wording, and data-entry supervision.

variations in survey designs and modes

like experiments, surveys vary in research design and context. the research design specifies the overall structure or plan by which a study will address the research question(s); the context refers to the setting in which the data are collected. The *basic* idea of a survey (compare to experiments in CH 7) is to measure variables by asking people questions and then to describe the distribution of responses to single questions or examine the relationships among responses to multiple items. The major design decision is whether to ask the questions just once or to repeat the questions over time.

secondary analysis

analysis of survey or other data originally collected by another researcher, ordinarily for a diff. purpose

essential elements of a survey

asking a sample of people a predetermined set of questions in order to estimate characteristics of the population from which the sample was drawn (estimates are based on RANDOM sampling) At their best, surveys produce accurate *quantitative data* that not only describe the attitudes, opinions, and behavior of a target population but also reveal how social characteristics, such as gender, race/ethnicity, class, and age, are related to people's responses. the primary features of surveys are relatively large probability samples, structured questioning, and quantitative analysis.

longitudinal designs

provides stronger inferences about causal direction and more accurate studies of patterns of change. the same q's are asked at two or more points in time; the q's may be asked repeatedly either of independently selected samples of the same general population or of the same individuals - this results in 2 main types of longitudinal designs: trend studies & panel studies. def. Survey design in which data are collected at more than one point in time.

computer-assisted self-interviews (CASI)

researchers have developed a variety of computer-mediated surveys that are self-administered. the questionnaire is transmitted by a computer program that may be sent to respondents (e.g., as a link in an e-mail) or provided by the researcher on a laptop. Whereas computer-assisted personal and telephone interviewing (CAPI and CATI) make the interviewer's job easier, CASI replaces the interviewer. Examples of CASI include e-mailed questionnaires, interactive voice response (IVR) surveys, computerized self-administered questionnaires, and Internet (Web) surveys. E-mail and Web surveys involve computer-to-computer transmission of a questionnaire; in e-mail surveys the q's are sent as the text of an e-mail message or in an attached file, whereas in Web surveys the questionnaire is accessed on specially designed Web pages. IVR surveys are conducted by telephone as respondents listen to prerecorded, voice-read questions and then use touch-tone data entry or give verbal answers, which are recorded. Web surveys have the broadest application & have increased dramatically over the years. def. An electronic survey in which a questionnaire is transmitted on a computer disk mailed to the respondent or on a laptop computer provided by the researcher. the greatest advantage of Web surveys is reduced cost - they eliminate the costs of paper, postage, assembly of the mailout package, and data entry; the principal costs are computer equipment & programming support, questionnaire dev. and testing, and Internet service provider fees. *time savings* - Web surveys require much less time to implement than other survey modes; compared to mail surveys, which may take weeks or months for questionnaires to be delivered and returned, Web surveys may be completed in only a few days. Web surveys can also substantially reduce the cost of increasing sample size: once the electronic questionnaire has been developed, the cost of surveying each additional person is far less than in an interview or mail survey. there is also flexibility in questionnaire design (pop-up instructions, drop-down boxes w/ lists of answer choices, feedback on possibly incorrect answers, pop-up word-definition screens, pictures/shapes/video clips, etc.) - these can motivate and assist respondents. weaknesses: response rates are lower compared to other modes; coverage error (the error produced when the sampling frame does not include all members of the population) - this error derives from 2 related problems: the proportion of the general population who are Internet users (the digital divide - pg. 220) and the lack of a sampling frame to sample users - there is no list available and no means of generating a list of all U.S. households with Internet service. Researchers often address this problem by limiting their Web surveys to special populations having membership lists and Web access, such as college students, certain professionals, or employees of an organization.

closed-ended question

survey q's that require respondents to choose a response from those provided; these produce the quantitative data that drive survey research

data- collection modes (pg. 214)

surveys also differ in how the data are collected. A critical aspect of survey research is the mode of asking questions: interviewer-administered (face-to-face or telephone surveys), self-administered (paper-and-pencil or computer-assisted questionnaires), or some combination of these modes. At one end of the continuum, involving all channels of communication, is the face-to-face interview; this is followed, in turn, by telephone interviews, various computer-assisted self-interviews, and autonomous self-administered questionnaires. (4 basic data-collection modes along w/ typical computer-based variations) Each mode has its distinctive advantages and disadvantages; the choice depends on many considerations, including research objectives, quality of measurement, and available resources such as time and money.

establishing causal relationships (strengths & weaknesses of surveys)

*major disadvantage of surveys* = explanatory research. Beyond association between variables, the criteria for inferring cause-and-effect relationships cannot be established as easily in surveys as in experiments. ex. the criterion of directionality—that a cause must influence its effect—is predetermined in experiments by first manipulating the independent (or causal) variable and then observing variation in the dependent (or effect) variable. But in most surveys (i.e., cross-sectional designs), direction is often a matter of interpretation, since variables are measured at a single point in time. though a longitudinal design may address this issue, surveys are also problematic in meeting the criterion of eliminating plausible rival explanations - whether using cross-sectional or longitudinal designs, surveys must first anticipate and measure relevant extraneous variables in the interviews or questionnaires and then exercise statistical control over these variables in the data analysis. Thus, *causal inferences from survey research generally are made with less confidence than inferences from experimental research*

cross-sectional designs

*most commonly used* - involves a sample or "cross section" of respondents chosen to represent a particular target population; data are gathered at essentially one point in time. "one point in time" means the data are collected in as short a time as is feasible, not that all respondents are interviewed or that all self-administered questionnaires are collected simultaneously (as might be the case with some questionnaire studies). def. the most common survey design, in which data are gathered from a sample of respondents at essentially one point in time. Each iteration of the CFS was a cross-sectional survey. For the 2003 CFS, it took almost two months, between May and July, for 26 interviewers to complete interviews with 712 respondents. Because cross-sectional designs call for collection of data at one point in time, they do not always show clearly the direction of causal relationships. Moreover, they are not well suited to the study of process and change.

general features of survey research

1. A large number of respondents chosen to represent a population of interest.
2. Structured questionnaire or interview procedures that ask a predetermined set of questions. (closed-ended rather than open-ended questions)
3. The quantitative analysis of survey responses. (may be descriptive [to describe a population], explanatory [to test hypotheses], or both)
(there are variations)

sequential steps of planning and conducting a survey

1.) choose a mode of data collection
2.) construct and pretest the questionnaire
3.) choose a sampling frame
4.) design and select the sample
5.) recruit the sample and collect the data
6.) code and edit the data, which are then analyzed

Each of these steps involves additional steps or considerations:
- Choosing a mode of data collection depends on the researcher's goals and the resources available.
- Constructing and pretesting the questionnaire depends on the mode of data collection; researchers should strive to write unambiguous and neutral questions, present them in a logical order, and get feedback on question drafts.
- Likewise, choosing a sampling frame, selecting and recruiting a sample, and collecting data depend on the mode of data collection. At a minimum, this involves ensuring that the sampling frame is as close to the target population as possible, selecting respondents randomly, clearly explaining the purposes of the survey and respondents' rights, and attempting to gain their cooperation.
- Once the data are collected, they need to be coded and edited before being analyzed.

leading questions

A question in which a possible answer is suggested, or some answers are presented as more acceptable than others. (pg. 227)

double-barreled question

A question in which two separate ideas are presented together as a unit; it addresses two or more issues at once

computer-assisted telephone interviewing(CATI)

A set of computerized tools that aid telephone interviewers and supervisors by automating various data-collection tasks.

interview schedule

A survey form used by interviewers that consists of instructions, the questions to be asked, and, if they are used, response options.

open-ended question

A survey question that requires respondents to answer in their own words. adopted when the research purpose is NOT to derive quantitative descriptions but to understand respondents' interpretations & experiences (qualitative research). open-ended questions pose several problems that limit their use in survey research: summarizing and analyzing rich and varied (and sometimes irrelevant and vague) responses is a time-consuming and costly process; respondents may be reluctant to reveal detailed information or socially unacceptable opinions or behavior; and open-ended questions require more effort to answer—indeed, they often are left blank, and therefore should be used sparingly, in self-administered questionnaires or Web surveys, where respondents must write or type rather than speak.

Strengths and Weaknesses of Surveys

surveys are the method of choice for much data collection in the social sciences, esp. sociology and political science; the use of surveys outside the scientific community is even more extensive - media opinion polls, marketing research, and government surveys shape major decisions by politicians, businesspeople, and government officials. Even a good deal of our everyday knowledge comes from the reported results of surveys. since surveys carry so much weight in our knowledge and decision-making, we all need to know something about their strengths and weaknesses

mixed mode surveys

Choosing a data-collection mode is difficult when none of the primary modes seems optimal for the intended research. An alternative solution is to design a mixed-mode survey, which uses more than one mode, either sequentially or concurrently, to sample and/or collect the data. In this way the weaknesses of one mode may be offset by the strengths of another. For example, since 1970 the U.S. decennial census has combined less expensive mail surveys with more expensive in-person interviews of people who do not return the mail questionnaires. 3 of the most common ways of mixing survey modes: - Use one mode to recruit, screen, or contact respondents and another mode to administer the survey. (ex. a researcher might use an inexpensive telephone survey to screen and locate specialized populations, such as people with a rare disease, for a study requiring expensive FTF interviews.) - Use a second mode to collect data on a subset of questions from the same respondents. (a mode shift to self-administered questionnaires—paper-and-pencil or CASI—often is used in FTF surveys to increase privacy in the collection of sensitive information [respondents are less susceptible to social desirability biases, which are more likely when questions are administered by an interviewer]) - Use different modes to survey different respondents. (one solution to the coverage problem in Web surveys, for example, is a respondent-specific approach whereby those without Web access are surveyed in person or by mail) [major weakness: whether the data collected by different modes are comparable] (The first two mixed-mode designs have been common practice for some time; their advantages in reducing cost, increasing response rates, and improving data quality are well established)

quantitative data analysis

Data-analysis techniques depend on whether the survey's purpose is descriptive, explanatory, or a combination of the two. Surveys that are primarily *descriptive*, such as many opinion polls, make use of simpler forms of analysis to describe the distribution of certain characteristics, attitudes, or experiences within a population. *Explanatory* surveys, on the other hand, require more sophisticated data-analysis techniques to investigate relationships b/w two or more variables and attempt to explain these in cause-and-effect terms. the CFS involved both descriptive and explanatory analysis: Table 8.1 (pg. 207), which reports the percentage of respondents who counted each living arrangement as family, is descriptive. When the investigators turned their attention to the influence of demographic and other variables on what counts as family, the analysis was explanatory. As noted earlier, for example, the investigators found that beliefs about family were related to gender, education, and religious views. This analysis was *multivariate*, examining relationships while statistically controlling for key socioeconomic variables; it also included tests of statistical significance.
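
the descriptive-vs-explanatory distinction can be shown with a toy computation (the data and field names here are invented, not CFS data): a descriptive analysis reports a single distribution, while a simple explanatory (bivariate) analysis compares that distribution across categories of another variable.

```python
# Hypothetical responses: does a given living arrangement "count as family"?
responses = [
    {"sex": "male", "counts_as_family": True},
    {"sex": "male", "counts_as_family": False},
    {"sex": "female", "counts_as_family": True},
    {"sex": "female", "counts_as_family": True},
]

def pct_yes(rows):
    """Percentage of rows answering 'yes' (counts_as_family == True)."""
    return 100 * sum(r["counts_as_family"] for r in rows) / len(rows)

# Descriptive: distribution of responses to a single question
overall = pct_yes(responses)  # 75.0

# Explanatory (bivariate): the same percentage within each category of a
# possible explanatory variable, here sex
by_sex = {
    sex: pct_yes([r for r in responses if r["sex"] == sex])
    for sex in {"male", "female"}
}  # {"male": 50.0, "female": 100.0}
```

a real explanatory analysis like the CFS's would go further: multivariate models with statistical controls and significance tests, not just a two-way comparison.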

response rate

In a survey, the proportion of people in the sample from whom completed interviews or questionnaires are obtained.

Code and Edit Data

Like the analysis of experiments, *survey analysis is quantitative*; that is, the results are presented in numerical form. But survey data require much more extensive preparation (or processing) for data analysis than data collected in experiments: survey respondents' answers must be coded (transformed into #s), entered into a data file, and checked and corrected for errors. Some correcting of errors, called editing, occurs during survey data collection. Coding answers to closed-ended questions is straightforward: you simply assign unique numbers to each response category. For the CFS, the first information coded was the respondent's sex; a code of 1 was assigned to "male" and a code of 2 to "female." The particular codes were arbitrary and were specified directly on the interview schedule. The coding of textual responses to open-ended questions, on the other hand, is much more complicated. Because unique responses may number in the hundreds, the researcher must develop a coding scheme that does not require a separate code for each response but that *adequately reflects the full range of responses* - this is typically done with computer software. *editing a survey involves checking for errors and inconsistencies in responses* (ex.'s - multiple responses to a single item, or a response w/ a code outside the range of numbers allowed, e.g., a code of "3" for respondent's sex when 1 = male and 2 = female). inconsistencies occur when responses to certain q's are not related in plausible ways to other items [ex. it would be unreasonable, and therefore an indication of an error, to find a respondent who is married and age 5, or a medical doctor with 3 years of formal schooling]. Researchers edit and, when possible, correct responses to mail surveys manually by going over completed questionnaires; however, *most editing is programmed into computer-assisted interviewing and online surveys* - the CFS, for example, flagged responses that were outside the acceptable limits and prompted interviewers with follow-up questions for apparent inconsistencies. Thus, in many surveys, editing (as well as coding and data entry) occurs during the process of data collection.
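
the range and consistency checks described above might look like the following sketch. the field names and rules are illustrative, not the CFS's actual codebook, but the sex-code and married-5-year-old examples come straight from the text.

```python
# Codebook for a closed-ended item (mirrors the example in the text):
SEX_CODES = {1: "male", 2: "female"}

def edit_record(record):
    """Return a list of editing problems found in one coded response.
    Field names and thresholds are illustrative assumptions."""
    problems = []
    # Range check: a code of 3 for sex is outside the allowed codes
    if record.get("sex") not in SEX_CODES:
        problems.append("sex code outside allowed range")
    # Consistency check: a married 5-year-old signals a data-entry error
    if record.get("marital_status") == "married" and record.get("age", 99) < 15:
        problems.append("implausible age for marital status")
    return problems

flags = edit_record({"sex": 3, "marital_status": "married", "age": 5})
# flags both problems; a clean record would return an empty list
```

in a CATI/CAPI system, checks like these run as answers are entered, so the interviewer can be prompted to re-ask or confirm on the spot.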

organizing the questions

Once researchers have developed the questions, the next steps are to decide the order in which to ask them and to write an introductory statement as well as appropriate transitions from topic to topic. the introduction used in the CFS was fairly standard: the interviewer gave his/her name, identified the survey sponsor, and briefly stated the general purpose of the study—that they were interested in what people "have to say about American families and family practices." Deciding how to order the questions after this introduction involved several considerations: (1) the opening q should be relatively easy to answer (starting w/ a difficult one can discourage respondents) (2) following these warm-up q's, shift to more demanding q's that require more thought [the CFS asked respondents for their opinions about what counts as a family pg. 229]. other considerations: - placement of background questions (gender/age/religious preference): it is much better to place such uninteresting, routine q's at the end of the survey; a good strategy is to start w/ an interesting q that is congruent w/ respondents' expectations on the basis of what the interviewer has told them about the study - placement of sensitive q's (ex. same-sex issues, *pg. 229*): asking sensitive q's prematurely may embarrass or upset respondents, leading them to terminate the interview or question the researcher's motives (put them toward the middle or end to build comfort before the sensitive q's arrive - that way there is trust toward the researcher and/or interviewer) *as a general rule, questions on the same topic should be grouped together* *Transitions* focus the respondent's attention on the new topic, and may be used to explain briefly why the new topic will be discussed or how it relates to the research purposes. (pg. 230)

the process of planning and conducting a survey

Once you've decided to do a survey, your research objectives and available resources will determine two key decisions: the measurement of variables and the selection of the sample. Survey measurement occurs by asking questions, and the wording and complexity of the questions will depend, first, on the survey mode. pg. 223 (impt. reference diagram; figure 8.3)

pretesting

Powell and associates constructed a draft of the instrument that interviewers would use to administer the survey. In interview research, this instrument is called an *interview schedule*, whereas in self-administered surveys it is referred to as a *questionnaire*. pretest the interview schedule by trying it out and evaluating its effectiveness before the main survey (this can greatly improve the ease w/ which data may be analyzed and the quality of results) - no matter how carefully a researcher follows guidelines for best practices, it is still possible that a large number of respondents will misunderstand the meaning of a question or resist answering some questions. pretesting may involve a variety of methods: the survey researcher may ask colleagues or experts in the field to critique the questions, apply formal schemes or computer software to identify question-wording problems, or administer the survey to a small sample of respondents. the usual method is *field pretesting*, which involves trying out the survey instrument on a small sample of persons having characteristics similar to those of the target group of respondents. To analyze the questions, the researchers transcribed the interviews and examined how long different sections of the interview were taking and where people seemed to be having trouble understanding or answering the questions. Indicators of problems included respondents asking interviewers to repeat or clarify, interviewers probing to follow up on inadequate answers, and interviewers incorrectly reading or skipping a question. (they also examined the distribution of answers to each q - for ex., knowing that 100% of respondents chose the same response option, such as saying that a particular living arrangement was not a family, might imply that an item was uninformative and unnecessary) ---- Similarly, pretesting can be done for an on-campus student research project by asking randomly selected students on campus to respond to an initial draft of the survey. On the basis of pretesting, Powell and associates revised the survey instrument and were ready to launch the survey. Of course, to do so they needed a sample of potential respondents. In fact, as the researchers were developing their survey instrument, they also were designing and selecting a sample. (pg. 231)

large-scale probability sampling

Professional surveys make use of large samples chosen through scientific sampling procedures to ensure accurate estimates of population characteristics. National opinion polls typically number around 1,000 respondents, and the General Social Survey (GSS) has had sample sizes ranging from 1,372 (in 1990) to 4,510 (in 2006). Surveys of national samples can be much larger: in the National Educational Longitudinal Study (NELS), for example, the probability sample in 1988 consisted of 24,599 8th graders from 1,052 public and private schools. *sample accuracy* is a function of 2 factors: sample design & sample size. It is only possible to estimate sample accuracy with probability samples, which involve random selection; and when cases are chosen randomly, the larger the sample, the more accurate the sample estimates of population characteristics. Both of these factors require considerable resources (time/money/personnel) that may be beyond the capacity of individual researchers or small research teams --- many surveys therefore involve smaller samples drawn from state or local populations. Although large-scale probability samples are the ideal, surveys vary considerably in sample size and sampling design. There are legitimate reasons for doing a small-scale survey, particularly if you have a low budget or some specialized or applied research purpose.
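
the link between sample size and accuracy can be made concrete with the standard margin-of-error formula for a proportion under simple random sampling (a simplification: professional surveys use more complex designs, which change the math). it shows why polls of roughly 1,000 respondents commonly report about a 3-point margin of error.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p estimated from
    a simple random sample of size n. z = 1.96 is the 95% normal critical
    value. This assumes simple random sampling, a simplification."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case p = 0.5 maximizes the margin of error:
moe_1000 = margin_of_error(0.5, 1000)  # about +/- 0.031, i.e. ~3 points
moe_4000 = margin_of_error(0.5, 4000)  # about +/- 0.015 -- quadrupling n halves it
```

the square root is why accuracy gets expensive: halving the margin of error requires four times the sample, which is the resource barrier the card describes.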

structured interviews or questionnaires

Surveys gather data by asking people predetermined questions following standardized procedures that are designed to enhance the reliability of the data. [structured interviews] = closed-ended questions (from intro ex.): all the questions were written beforehand and asked in the same order for all respondents, and interviewers were highly restricted in the use of introductory and closing remarks, transitions or *bridges* from topic to topic, and supplementary questions to gain a more complete response *(probes)*. by contrast, an *unstructured interview* has broad research objectives and involves a wide-ranging discussion in which individual q's are developed spontaneously in the course of the interview (w/ mostly open-ended questions). b/w the two extremes, a *semi-structured interview* has specific objectives, but the interviewer is allowed some freedom in meeting them - the scope of the interview would be limited to certain subtopics, and key q's probably would be developed in advance. *unstructured and semi-structured interviews are often used in qualitative research* surveys can contain many types of q's and q formats, but they almost always include *closed-ended questions*, which require respondents to choose a response from those provided, much like a multiple-choice test - "It is generally better if a woman changes her last name to her husband's name when she marries. Do you: (1) strongly agree, (2) somewhat agree, (3) somewhat disagree, or (4) strongly disagree?" as a follow-up to this closed-ended question, in 2006 respondents who strongly or somewhat agreed were asked an *open-ended question*, which required them to answer in their own words: "Why do you think it's better for a woman to change her name?" Or, if they disagreed: "Why don't you think it is better for a woman to change her name?"
One reason survey researchers prefer closed-ended questions is that they produce data that lend themselves well to the kinds of quantitative analysis that drive survey research. In contrast, open-ended questions are adopted when the research purpose is not to derive precise quantitative descriptions but to understand respondents' interpretations and experiences, as in qualitative research. *open-ended q's provide flexibility in meeting broad research objectives and in developing theory*

construct and pretest questionnaire

To construct the survey instrument, the researcher should first outline, based on re- search objectives, the question topics to be covered in the interview or questionnaire. For the 2003 CFS, the main topics included definitions of family, rights associated with marriage and other relationships, maternal and paternal responsibilities, causes of child behaviors and traits, and gay marriage and adoption. With these topics as a guide, the researchers had to come up with an appropriate set of questions and organize them into a meaningful sequence. (Powell & his associates adopted q's used in previous research as it helped to cut down the measurement process/ capitalizes on others' expertise/ enables researchers to compare results across studies) The CFS also explored topics, such as definitions of family, for which the researchers had to compose new questions... depends on the question topic and whether you are asking about factual events and behaviors or subjective states such as knowledge, perceptions, feelings, and judgments the survey researcher must also choose b/w question forms such as open and closed and the # and type of response categories [there are many useful guides to survey design online] If you're designing a survey, you need to *pay particular attention to language*, as even slight changes in the wording of a question can greatly affect responses. For example, a question might be written, "What is your annual income?" or "What is your total annual income from all sources?" A person answering the first item might neglect to consider income from such sources as interest on stocks or savings, sale of stocks, and rental income. in general, you want to use q's that: (1) respondents understand in a consistent way, so that they would give the same answer if they were asked the same question again (2) mean the same thing to all respondents (3) have the same meaning for respondents as they do for the researcher. 
Especially troublesome are indefinite words such as "usually," "seldom," "many," "few," "here," and "there"; these will have different meanings to different respondents. (pg. 225-226) A double-barreled question is one in which two separate ideas are presented together as a unit. An example might be, "What factors contributed to your decision to marry and have children?" (there are two separate q's here that need to be asked separately, pg. 226) Also avoid "leading questions" (pg. 227)

generalizations to populations (strengths & weaknesses of surveys)

Whereas *experiments are used for explanatory, hypothesis-testing research*, surveys are used extensively for both descriptive and explanatory purposes. Among all approaches to social research, surveys offer the most effective means of social description. By using probability sampling, survey researchers can determine, within known limits of sampling error, how accurately the responses to a sample survey can be generalized to the larger target population
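The "known limits of sampling error" mentioned above can be illustrated with the standard margin-of-error formula for a sample proportion under simple random sampling. A minimal sketch; the sample size and proportion below are hypothetical, not figures from the CFS:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion
    (simple random sample): z * sqrt(p(1-p)/n)."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical example: 60% of 1,500 respondents count a given
# living arrangement as a family.
moe = margin_of_error(0.60, 1500)
print(f"+/- {moe * 100:.1f} percentage points")  # about +/- 2.5
```

This is why pollsters can report, e.g., "60% ± 2.5 points": larger samples shrink the margin, since n sits under the square root.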

efficiency (strengths & weaknesses of surveys)

While an experiment usually addresses only one research hypothesis, numerous research questions can be covered by a single large-scale survey. Furthermore, the wealth of data typically contained in a survey may yield unanticipated findings or lead to new hypotheses. Adding to their cost effectiveness, data from many large-scale surveys such as the GSS are made available to the public. Such data are usually of high quality, and the cost of obtaining the data for analysis is a small fraction of the cost of collecting it. It is a common practice, called *secondary analysis*, for social scientists to analyze publicly accessible survey data. According to the GSS website, "the GSS is widely regarded as the single best source of data on societal trends," and analyses of GSS data can be found in over 20,000 scholarly publications

intro ex. (Powell & associates - Constructing the Family Survey)

addresses a fundamental q. (which living arrangements do Americans count as family?) CSR staff made telephone calls to U.S. adult residents asking for their participation in the survey: they randomly selected telephone numbers and, at each residential telephone number, randomly selected a household member to interview. Typical of many social surveys, the interview addressed not only the core research question (how Americans define family) but also a variety of related topics, including attitudes toward gay men and lesbians, marital name changes, and the relative importance of biological and social factors in child development. The interview also included q's on background characteristics such as education, age, gender, race, religion, and marital status = in this way researchers could examine how views of family are related to an individual's background, life experiences, and social attitudes. Interviewers also read respondents a list of family arrangements and asked whether they personally thought that each arrangement "counts as a family" (pg. 206-207). Second stage: latent class analysis identified three classes or "ideal types" that best captured the way Americans define family: exclusionists, moderates, and inclusionists

telephone interviews

became popular in the last quarter of the 20th century in the US and Europe; virtually all opinion polls are telephone surveys. The CFS was a telephone survey conducted by trained staff using *computer-assisted telephone interviewing* (CATI), the telephone counterpart to CAPI. The primary reason for the widespread use of the telephone survey is its substantial savings in cost and time: telephone surveys cost much less than FTF surveys, and survey research organizations like the Center for Survey Research, which have a permanent staff, can complete a telephone survey very rapidly. Even researchers who must hire and train interviewers can complete a telephone survey in less billable time than one requiring FTF interviews or mailed questionnaires. Another major advantage: the opportunity for centralized quality control over all aspects of data collection, including question development and pretesting, interviewer training and supervision, sampling and callbacks, and data entry. Administration and staff supervision for a telephone survey are much simpler than for an FTF interview survey. Limitations: w/o the benefit of visual aids, the q's in a telephone survey must be simpler, w/ fewer response options, than in an FTF interview; w/o face-to-face contact, it is more difficult for interviewers to establish trust and rapport w/ respondents, which may lead to higher rates of nonresponse for some q's and underreporting of sensitive or socially undesirable behavior. *response rates tend to be lower* than in FTF interview surveys, and conducting a telephone interview longer than 20 to 30 minutes increases the risk of nonresponse and mid-interview termination. The most serious problem facing telephone surveys is the rapid proliferation of cell phones and the growth of cell phone-only households (pg. 217). Factors contributing to plunging telephone response rates include the growth of caller ID, call-screening, and call-blocking technologies; heightened privacy concerns in the face of increased telemarketing calls; and the increase in cell phone-only households. In short, people won't answer these types of calls unless they recognize who's calling or have an interest in talking to them.

panel study

can reveal which *individuals* are changing over time b/c the same respondents are surveyed again and again. def. A longitudinal design in which the same individuals are surveyed more than once, permitting the study of individual and group change. The initial sample of respondents is asked the same questions each time the survey is administered /// Data show how individuals change over time (ex. pg. 213). Panel studies of any duration were a rarity in the social sciences until the late 1960s, when the federal government began conducting large-scale longitudinal studies. The longest-running household panel survey in the world is the Panel Study of Income Dynamics (PSID), conducted by the Survey Research Center in the Institute for Social Research at the University of Michigan (pg. 214). Two drawbacks to studies of this magnitude are that they are very expensive and that they take considerable time; therefore, cross-sectional and trend designs are far more common.

editing def.

checking data and correcting for errors in completed interviews or questionnaires

recruit sample and collect data

collecting the data: a multi-step process. In an interview survey, if mailing addresses are known, the process may begin by sending respondents a cover letter in advance of being contacted. Then, the interviewer must contact a household member, select an eligible person to interview, obtain his or her agreement to participate, and conduct the interview. (If no one answers a doorbell or picks up the phone, the interviewer must make additional attempts to reach someone at that address or #.) In RDD surveys, it is not possible to notify respondents about the survey in advance; the process of collecting data thus began in the CFS when someone answered a dialed number. Initially, the interviewer used the introduction that we described earlier. Research has shown that incentives can enhance response rates in all kinds of surveys (CFS provided each person who completed the interview w/ a 120-minute calling card). To facilitate the random selection of an eligible household member, the interviewer ascertained that the contact was at least 18 years old and a member of the household: "To find out which adult in your household I need to speak with, I need to ask how many people age 18 or older, including yourself, currently live in this household?" If the contact is the only one eligible, that person is interviewed. If more than one person is eligible, the interviewer keys in this information, the CATI software randomly selects a number and, if necessary (when more than two people are eligible), prompts the interviewer to ask for a complete listing of everyone in the household. After randomly selecting a household member to interview, the interviewer must gain his/her cooperation; to do this, interviewers are trained to make a positive impression and to memorize the intro and scripts for addressing respondents' reluctance to participate (e.g., "I'm not good at answering surveys," "I'm too busy," "You are invading my privacy," or other concerns). Finally, BEFORE the first question is asked, interviewers MUST read an informed consent statement and ask for permission to begin the interview. For telephone and FTF surveys, the major problem is dealing with refusals to participate. In many surveys, more experienced interviewers or supervisors are used to try to gain the respondent's cooperation on the second try. In response to a clear refusal, however, one follow-up call or visit should be the limit, to avoid respondent feelings of harassment. The process of recruiting respondents is simpler in computer-assisted self-interviews, such as web surveys, which is one of the reasons for their appeal (pg. 234)
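The within-household selection step described above (interview the contact if they are the only eligible adult, otherwise pick one eligible adult at random) can be sketched in a few lines. A minimal illustration only; the function name and household listing are hypothetical, not the actual CATI software:

```python
import random

def select_respondent(adults):
    """Randomly select one eligible (18+) household member,
    mirroring the CFS within-household selection step.
    `adults` is a list of the household members age 18 or older."""
    if not adults:
        return None       # no eligible respondent at this number
    if len(adults) == 1:
        return adults[0]  # the contact is the only eligible person
    # More than one eligible member: choose at random so every
    # adult in the household has an equal chance of selection.
    return random.choice(adults)

household = ["Alex", "Jordan", "Sam"]  # hypothetical listing
print(select_respondent(household))    # one of the three, at random
```

Randomizing within the household matters because whoever happens to answer the phone (or the door) is not a random adult; without this step, the sample would over-represent people who are home more often.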

trend studies

consists of a repeated cross-sectional design in which each survey collects data on the same items or variables w/ a new, independent sample of the same target population; this allows for the study of "trends" or changes in the population as a whole. Examples of trend studies are the monthly government surveys used to estimate unemployment in the United States (target population) and repeated public-opinion polls of candidate preferences among registered voters (target population) as an election approaches. def.: a longitudinal design in which a research question is investigated by repeated surveys of independently selected samples of the same population. Each CFS consisted of an independent cross-sectional design; conducting a second CFS in 2006 created a trend study. When the investigators compared the results from 2003 and 2006, they found a decrease in the percentage of exclusionists (from 45.4 to 38.1) and an increase in the percentage of inclusionists (from 25.4 to 32.4). This led them to conclude that "it is just a matter of time before same-sex couples are no longer counted out." To continue to track changes in public opinion on this issue, Powell & his associates conducted a THIRD CFS in 2010, and announced plans to carry out a FOURTH survey in 2015. *identifies which variables are changing over time* A different independent sample of respondents, representative of the same target population (e.g., U.S. adults), is asked the same questions in each survey /// Data show how the *population* changes over time

face-to-face (FTF) interviews

oldest and most highly regarded method of survey research; involves direct, in-person contact b/w an interviewer and interviewee. The General Social Survey (GSS) is an FTF interview survey. From 1972 until 2002, GSS interviewers circled or recorded answers in writing on questionnaires, but in 2002, the GSS shifted to computer-assisted personal interviewing (CAPI). def. A type of interview in which the interviewer interacts face-to-face with the respondent. FTF interviews offer many advantages and more flexibility: if research objectives call for open-ended q's, interviewers can probe for more complete responses, and they can clarify or restate q's that the respondent does not understand. The response rate, the proportion of people in the sample who completed interviews (or questionnaires), is typically higher than in comparable telephone or mail surveys. FTF contact sustains respondents' attention & motivation = the FTF mode is generally the best choice when long interviews are necessary (one hour in length is common, but sometimes they go on for much longer). With the FTF technique, one can use visual aids such as photographs and drawings in presenting the questions, as well as cards that show response options. The cards may be useful when response options are difficult to remember or when it is face-saving for respondents to select the option or category on the card rather than to say the answer aloud. *greatest disadvantage* is the cost, which covers not only recruiting, training, and supervising personnel but also interviewer wages and travel expenses, including lodging and meals in some cases; it also takes much longer to complete each interview in an FTF survey = cost per interview is much greater (GSS interviews take about 90 minutes for completion of some 400 questions)

paper-and-pencil questionnaire

the setting is most often the home or a workplace, to which a self-administered questionnaire is mailed to respondents; the site may also be a school or organization where the questionnaire is hand-delivered and filled out in a group or individually. (interesting ex. of mail survey - pg. 217) Several advantages of using a mail survey: less expensive than interview surveys, w/ costs estimated at 20 to 70 percent less than telephone surveys; no interviewers or interviewer supervisors are needed; no travel or telephone expenses; very little office space is required. The time needed to complete the data-collection phase of the survey is greater than that for telephone surveys but usually less than that for FTF surveys. The sample size may be very large, and geographic dispersion is not a problem. (ex. The 1993 CAS surveyed a random sample of 17,592 students at 140 colleges in 39 states and the District of Columbia.) There is also greater accessibility to respondents w/ this method, since those who cannot be reached by telephone or who are infrequently at home usually receive mail. Finally, in contrast to interview surveys, mail surveys can provide respondents w/ anonymity (which is impt. in investigating sensitive or threatening topics such as college drinking or illicit drug use); respondents are more likely to admit to undesirable beh. w/ self-administered than w/ interviewer-administered surveys. *response rate for mail surveys tends to be much lower, w/ rates of 50% or lower common* Certain groups of people, such as those with little writing ability and those not interested in the topic, are less likely to respond to a mailed questionnaire than to a personal interview request. More questions are left unanswered with self-administered questionnaires than with interview methods; w/o an interviewer there is no opp. to clarify q's, probe for more adequate answers, or control the conditions under which the questionnaire is completed or even who completes it. def. A survey form filled out by respondents.
