Chapter 10 - Survey Research


1. Organization of questionnaire. 2. Order effects 3. Context effects

Question Order or Sequence

The primary tool of survey research. Paper-and-pencil; structured and semi-structured interviews; in person, via phone, or via the Internet. Either/or and Likert-scale responses are most common. Lots of things to consider to get useful results: well-constructed questions (an entirely different lecture for which we do not have the time), reliability, and validity.

Questionnaires

A specialized method in survey research used for very sensitive topics; the respondent randomly receives a question, without the interviewer being aware of which question the respondent is answering.

Randomized response technique (RRT)
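
To make the logic concrete, here is a minimal sketch (not from the chapter) of the unrelated-question variant of RRT. The probabilities `p`, `q`, and `true_prevalence` are hypothetical; the point is that the researcher only ever sees the pooled "yes" rate, yet can still estimate the prevalence of the sensitive behavior.

```python
import random

# Hypothetical parameters for an unrelated-question RRT design.
p = 0.7                # chance the respondent is directed to the sensitive question
q = 0.5                # known "yes" rate of the innocuous question (e.g., a coin flip)
true_prevalence = 0.2  # assumed true rate of the sensitive behavior (unknown in practice)

def one_answer():
    """One respondent's answer; the interviewer never learns which question was asked."""
    if random.random() < p:
        return random.random() < true_prevalence  # answers the sensitive question
    return random.random() < q                    # answers the innocuous question

n = 100_000
observed_yes = sum(one_answer() for _ in range(n)) / n

# Observed "yes" rate = p*pi + (1-p)*q, so solve for pi (the sensitive prevalence).
estimated_prevalence = (observed_yes - (1 - p) * q) / p
print(round(estimated_prevalence, 3))  # should land near 0.2
```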

A result in survey research that occurs when respondents choose the last answer response offered rather than seriously considering all answer choices.

Recency effect

A measure should yield similar results each time. A person's place in the distribution should be basically the same each time they are tested. Measurements tend to be more reliable when there is greater heterogeneity (diverse samples) vs. homogeneity.

Reliability (internal consistency, test-retest, inter-rater)

1. Swayed opinion. This involves falsely overstating a position as with the social desirability bias, or falsely understating or withholding a position as with sensitive topics. 2. False positive. This results from selecting an attitude position but lacking any knowledge on an issue and really having no true opinion or view on it. 3. False negative. Caused when a respondent refuses to answer some questions or withholds an answer when he or she actually has information or really holds an opinion.

Respondents may answer three ways that yield invalid responses.

When questions are asked in a way that can influence responses. Example: A researcher asks, "To what extent do you enjoy shopping with the price-gouging vendors in shopping malls?"

Response bias

When some people are more likely to respond than others. Example: A researcher offers "mall dollars," which can only be used at a mall, to individuals for completing a survey about love of shopping malls.

Response rate bias

The first involves an inaccurate direction of a response toward a normative position, the second substitutes wild guesses for a serious response, and the last type is the partial and selective nonresponse to the survey

Responses overlap.

A specific list of the members of the population, used to select a subset of that population.

Sampling frame

Avoiding exerting cognitive effort when answering survey questions and giving the least demanding answer that will satisfy the minimal requirements of a survey question or interview situation

Satisficing

When we choose the sample in a way that leads to overrepresentation/underrepresentation of some segment of the population. Example: A researcher collects data at a mall to determine how much time people spend shopping.

Selection bias

Every element of the population has an equal chance of being selected.

Simple Random sample
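
A minimal sketch (with a made-up sampling frame) of drawing a simple random sample, where every element in the frame has the same chance of selection:

```python
import random

# Hypothetical sampling frame: a list of every member of the population.
sampling_frame = [f"member_{i}" for i in range(1, 501)]

# random.sample gives each element an equal chance of being chosen, without replacement.
simple_random_sample = random.sample(sampling_frame, k=50)
print(len(simple_random_sample), simple_random_sample[:3])
```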

1. Errors by the respondent. Forgetting, embarrassment, misunderstanding, or lying because of the presence of others 2. Unintentional errors or interviewer sloppiness. Contacting the wrong respondent, misreading a question, omitting questions, reading questions in the wrong order, recording the wrong answer to a question, or misunderstanding the respondent 3. Intentional subversion by the interviewer. Purposeful alteration of answers, omission or rewording of questions, or choice of an alternative respondent 4. Influence due to the interviewer's expectations about a respondent's answers based on the respondent's appearance, living situation, or other answers 5. Failure of an interviewer to probe or to probe properly 6. Influence on the answers due to the interviewer's appearance, tone, attitude, reactions to answers, or comments made outside the interview schedule

Six Categories of Interview Bias

Survey research inquiry about nonexistent people or events to check whether respondents are being truthful.

Sleeper question

1. Errors in selecting the respondent-Who we ask? a. Sampling errors (e.g., using a nonprobability sampling method) b. Coverage errors (e.g., a poor sampling frame omits certain groups of people) c. Nonresponse errors at the level of a sampled unit (e.g., a respondent refuses to answer) 2. Errors in responding to survey questions-Measurement a. Nonresponse errors specific to a survey item (e.g., certain questions are skipped or ignored) b. Measurement errors caused by respondent (e.g., respondent does not listen carefully) c. Measurement errors caused by interviewers (e.g., interviewer is sloppy in reading questions or recording answers) 3. Survey administration errors a. Postsurvey errors (e.g., mistakes in cleaning data or transferring data into an electronic form) b. Mode effects (e.g., differences due to survey method: by mail, in person, over the Internet) c. Comparability errors (e.g., different survey organizations, nations, or time periods yield different data for the same respondents on the same issues). Pg. 321

Sources of Errors in Survey Research

A survey research inquiry for which the answer categories do not include a "no opinion" or "don't know" option.

Standard-format question

-define purpose & objectives -select a target pop/sample -design the survey -write a cover letter -administer the survey -follow up -input survey data -analyze data -communicate findings (The above is from the video; Book Figure 1, pg. 320, has different steps.)

Steps in conducting research

Divide the population into strata, then draw a random sample from each stratum.

Stratified Random sample
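
A sketch (with hypothetical strata and sizes) of proportional stratified sampling: divide the frame into strata, then draw a simple random sample from each in proportion to its share of the population.

```python
import random

# Hypothetical frame already divided into strata (e.g., by class year).
strata = {
    "freshman":  [f"fr_{i}" for i in range(300)],
    "sophomore": [f"so_{i}" for i in range(200)],
    "senior":    [f"sr_{i}" for i in range(100)],
}

population_size = sum(len(members) for members in strata.values())
sample_size = 60

stratified_sample = []
for name, members in strata.items():
    # Allocate draws proportionally to each stratum's share of the population.
    k = round(sample_size * len(members) / population_size)
    stratified_sample.extend(random.sample(members, k))

print(len(stratified_sample))  # 30 + 20 + 10 = 60
```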

A series of cross-sectional surveys conducted over time. The samples are independent because a different sample of respondents completes the survey at each time point. The same set of questions should be asked of each sample of respondents. Different (but comparable) samples should be drawn at each time from the same population. Ex: Gallup polls

Successive Independent samples

Can describe, can see what is correlated/associated, and can predict, but cannot establish causality. All respondents complete the same items, verbally (interview) or in writing (questionnaire). Used to obtain data about feelings, attitudes, preferences, symptoms, etc. of a specific population of people. Perhaps the most common of all methods of data collection.

Survey

Need a representative sample for data to be generalizable to a population; the extent to which a sample exhibits the distribution characteristics of the population.

(Representative) Sample

1) Open-ended vs. closed-ended questions 2) Categorical - puts respondents into categories; can have one category (M/F) or several 3) Continuous items: a) Numerical responses - e.g., "What is your age?" with blocking (40-45 yrs) b) Rankings - highest preference to lowest (not as widely used) c) Ratings - a continuum (Never, Occasionally, etc.); popular d) Likert-type - most commonly used (e.g., Very good, Strongly agree) e) Semantic differentials - a scale (e.g., job placement/careers)

3 types of items
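
As a small illustration of how closed-ended Likert-type responses are usually scored for analysis, here is a sketch with an assumed 5-point coding (the labels and values are hypothetical, not from the chapter):

```python
# Assumed numeric coding for a 5-point Likert-type item.
likert_codes = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neither agree nor disagree": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

responses = ["Agree", "Strongly agree", "Neither agree nor disagree", "Agree"]
scores = [likert_codes[r] for r in responses]
print(sum(scores) / len(scores))  # mean item score: 4.0
```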

Technique in which the interviewer sits before a computer screen and keyboard, reads questions from the screen, and enters answers directly into the computer

Computer-assisted telephone interviewing (CATI)

1) Jargon, slang, abbreviations 2) Emotional language and prestige bias 3) Double-barreled questions 4) Leading questions 5) False premises 6) Double negatives 7) Unbalanced responses 8) Issues beyond respondent capabilities 9) Vagueness 10) Distant future intentions (pg. 326)

Problems in Survey writing

-computer-assisted self-administered interviewing (CASAI) -computer-assisted personal interviewing (CAPI) -Randomized response technique (RRT)

Anonymous questioning methods

(Subject attrition) A threat to internal validity that occurs when participants are lost from an experiment. When participants drop out of the research project, the loss of participants changes the nature of a group from that established prior to the introduction of the treatment, destroying the equivalence of groups that had been established through random assignment.

Attrition

A technique used in pilot testing surveys in which researchers try to learn about a questionnaire and improve it by interviewing respondents about their thought processes or having respondents "think out loud" as they answer survey questions.

Cognitive interviewing

A particular survey interview in which the respondent and interviewer work together to reach the meaning of the survey question as intended by the researcher and produce an accurate response to it.

Collaborative encounter model

Percentage of cooperating respondents who complete the survey.

Completion rate

Technique in which an interviewer sets up a laptop computer and is available to help respondents who hear questions over earphones and/or read them on a screen and then enter answers.

Computer-assisted personal interviewing (CAPI)

Technique in which a respondent reads questions on a computer screen or listens over earphones and then answers by moving a computer mouse or typing on a keyboard

Computer-assisted self-administered interviewing (CASAI)

Location rate: Percentage of respondents in the sampling frame who are located. Contact rate: Percentage of located respondents who are contacted. Eligibility rate: Percentage of contacted respondents who are eligible. Cooperation rate: Percentage of contacted, eligible respondents who agree to participate. Completion rate: Percentage of cooperating respondents who complete the survey. Total response rate: Percentage of all respondents in the initial sampling frame who were located, contacted, eligible, agreed to participate, and completed the entire questionnaire.

Confusion about Response Rates
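
These rates chain together: each one conditions on the stage before it, and their product is the total response rate. A sketch with hypothetical counts:

```python
# Hypothetical counts at each stage of a survey.
frame, located, contacted = 1000, 900, 810
eligible, cooperated, completed = 770, 500, 450

location_rate    = located / frame
contact_rate     = contacted / located
eligibility_rate = eligible / contacted
cooperation_rate = cooperated / eligible
completion_rate  = completed / cooperated

# The component rates multiply out to completed / frame.
total_response_rate = (location_rate * contact_rate * eligibility_rate
                       * cooperation_rate * completion_rate)
print(round(total_response_rate, 2), completed / frame)  # both 0.45
```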

Percentage of located respondents who are contacted.

Contact rate

A result in survey research when an overall tone, setting, or set of topics heard by respondents affects how they interpret the meaning of subsequent questions.

Context effect

A two-part survey item in which a respondent's answer to a first question directs him or her either to the next questionnaire item or to a more specific and related second question.

Contingency Questions
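
A small sketch of the skip logic behind a contingency question; the wording and item labels here are hypothetical:

```python
def next_item(answer_to_filter: str) -> str:
    """Route the respondent based on a hypothetical filter question, 'Do you currently smoke?'."""
    if answer_to_filter == "yes":
        return "About how many cigarettes do you smoke per day?"  # more specific follow-up
    return "Next item: How would you rate your overall health?"   # skip the follow-up

print(next_item("yes"))
print(next_item("no"))
```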

The most common form of nonprobability sampling; involves selecting respondents primarily on the basis of their availability and willingness to respond.

Convenience sample

Is the measure correlated with other measures of the same construct? Measures of empathy should go together with other related measures (e.g., sharing). Measures of empathy should NOT go together with unrelated constructs (e.g., competitive drive).

Convergent

A flexible technique based on the collaborative encounter model in which interviewers adjust interviewing questions to the understanding of specific respondents but maintain the researcher's intent in each question.

Conversational interview

Percentage of contacted, eligible respondents who agree to participate.

Cooperation rate

Exists when two different measures of the same people, events, or things vary together; makes it possible to predict values on one variable by knowing the values on the second variable. Has both magnitude/strength and direction; closer to ±1 = stronger relationship.

Correlation (magnitude/direction)
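
A minimal from-scratch sketch of Pearson's r (the data are made up), showing how magnitude and direction fall out of the formula:

```python
import math

def pearson_r(x, y):
    """Pearson's r: covariance of x and y divided by the product of their deviations' norms."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    norm_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    norm_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (norm_x * norm_y)

hours_shopping = [1, 2, 3, 4, 5]        # hypothetical data
dollars_spent = [20, 45, 50, 80, 95]
print(round(pearson_r(hours_shopping, dollars_spent), 2))  # ~0.98: strong positive
```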

Is the measure associated with real-world examples of the construct? People high in empathy should engage more in helping activities in the real world.

Criterion-prediction

One or more samples are drawn from a population at one time. Done all at once; a snapshot in time. Ex: "What does it mean for a relationship to be 'Facebook official'?" - "A Facebook official relationship means both partners are exclusively dating each other."

Cross-sectional

Does the measure distinguish between groups? Social workers, but not sociopaths, should score high in empathy.

Discriminant

Each member of the population

Element

Percentage of contacted respondents who are eligible.

Eligibility rate

is the difference between obtained values and "true values."

Error

does the measure obviously measure the construct? People think I am a compassionate person. (good) I enjoy reading about cars. (bad)

Face

Respondents who lack a real attitude but answer anyway ("nonattitudes"); detected by comparing three types of attitude questions: standard-format, quasi-filter, and full-filter questions.

Floaters

A survey research inquiry that first asks respondents whether they have an opinion or know about a topic; then only those with an opinion or knowledge are asked specifically about the topic.

Full-filter question

Organization of survey research questions in a questionnaire from general to specific questions.

Funnel sequence

extent to which the findings from one group (or sample) can be generalized or applied to other groups (or population)

Generalizable

Descriptive, non-experimental method. Assesses relationships among naturally occurring variables in order to describe and/or predict ("predictive" = correlation). Yield: correlation coefficient (often Pearson's r).

Goals of Surveys

1930s: Sampling - area probability (geographic areas); interviews - face-to-face; data environment - stand-alone. 1970s (phone): Sampling - random digit dialing probability (cheaper/faster); interviews by phone; data environment - stand-alone. Now: Nonprobability sampling; interviews are computer-administered; data environment is linked.

History of Surveys

1) Create comfort and trust 2) Use enhanced phrasing 3) Establish a desensitizing context 4) Use anonymous questioning methods.

Increase honest answering about sensitive topics

Do different people rate the same behavior in the same way?

Inter-rater reliability
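
Two common ways to quantify this are percent agreement and Cohen's kappa (which corrects for chance agreement). A sketch with hypothetical codings by two raters:

```python
from collections import Counter

# Hypothetical codings of the same 10 behaviors by two raters.
rater_a = ["agg", "agg", "calm", "calm", "agg", "calm", "agg", "calm", "calm", "agg"]
rater_b = ["agg", "calm", "calm", "calm", "agg", "calm", "agg", "calm", "agg", "agg"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # percent agreement

# Chance agreement: how often the raters would agree by accident,
# given how often each uses each category.
freq_a, freq_b = Counter(rater_a), Counter(rater_b)
chance = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(rater_a) | set(rater_b))

kappa = (observed - chance) / (1 - chance)
print(observed, round(kappa, 2))  # 0.8 agreement, kappa 0.6
```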

A technique in telephone interviewing in which respondents hear computer-automated questions and indicate their responses by touch-tone phone entry or voice-activated software.

Interactive voice response (IVR)

("Cronbach's Alpha" > .70): Do all the questions/items measure the same thing?

Internal consistency reliability
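
A sketch of Cronbach's alpha computed from its standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), using made-up item scores; the .70 cutoff above is the usual rule of thumb.

```python
def cronbach_alpha(items):
    """items: one list of scores per item, each the same length (one score per respondent)."""
    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    k = len(items)                                    # number of items on the scale
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total scale score
    return (k / (k - 1)) * (1 - sum(variance(i) for i in items) / variance(totals))

# Hypothetical 3-item scale answered by 5 respondents (rows = items).
items = [
    [4, 5, 2, 3, 4],
    [4, 4, 2, 3, 5],
    [5, 5, 1, 3, 4],
]
print(round(cronbach_alpha(items), 2))  # ~0.93, above the .70 rule of thumb
```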

A false and deceptive survey-like instrument that uses the format of a survey interview but whose true purpose is to persuade a respondent.

Pseudosurvey

A survey research inquiry that includes the answer choice "no opinion," "unsure," or "don't know."

Quasi-filter question

A hypothesis of survey research cooperation that states that different respondents find different aspects of a survey interview to be salient and decide whether to cooperate based on different specific aspects of the interview

Leverage saliency theory

Percentage of respondents in the sampling frame who are located.

Location rate

The same sample of respondents is surveyed more than once. Plus: can examine patterns of change for each individual over time.

Longitudinal

A survey research inquiry that groups together a set of questions that share the same answer categories in a compact form.

Matrix question

A particular standardized survey research type in which there are no communication problems and respondents' responses perfectly match their thoughts.

Naïve assumption model

There is some debate about adding a "neutral" choice to surveys; some argue for it, others against it.

Neutral Positions

Not random; often done by convenience sampling.

Nonprobability sampling

The failure to get a valid response from every sampled respondent weakens a survey

Nonresponse

1. Location—Could a sampled respondent be located? 2. Contact—Was a located respondent at home or reached after many attempts? 3. Eligibility—Was the contacted respondent the proper age, race, gender, citizenship, and so on for the survey purpose? 4. Cooperation—Was an eligible respondent willing to be interviewed or fill in a questionnaire? 5. Completion—Did a cooperating respondent stop answering before the end or start answering most questions with "do not know" or "no opinion"?

Nonresponse rates (five components)

A type of survey research inquiry that allows respondents freedom to offer any answer they wish to the question

Open-ended question

A result in survey research in which a topic or some questions asked before others influence respondents' answers to later questions.

Order effect

A type of survey research inquiry in which respondents are given a fixed set of answers to choose from, but an additional "other" category is offered so that they can specify a different answer.

Partially open question

All the cases in a group being studied, from which samples may be drawn. Anyone or anything that could possibly be selected to be in the sample. Representative group.

Population

Survey research grew within a positivist approach to social science

Positivist approach to Survey

A problem in survey research question writing that occurs when a highly respected group or individual is associated with an answer choice.

Prestige bias

Random sampling of the population. Non-scientific polling (e.g., news polls) does not use probability sampling; those are convenience samples.

Probability sampling

A follow-up question asked by an interviewer to elicit an appropriate response when a respondent's answer is unclear or incomplete.

Probe

In experimental research we divide small numbers of people into equivalent groups, test one or two hypotheses, manipulate conditions so that certain participants receive the treatment, and control the setting to reduce threats to internal validity (i.e., confounding variables). In survey research, by contrast, we sample many respondents, ask all of them the same questions, and measure many variables without manipulating the conditions respondents experience.

THE LOGIC OF SURVEY RESEARCH

Encouraging a respondent's cooperation in survey research interviews by having interviewers highlight specific aspects of the interview that a respondent finds salient and values positively

Tailoring

1. Situational framing. 2. Decomposition. 3. Landmark anchoring. 4. Bounded recall.

Techniques to Reduce Telescoping

Survey research respondents' compressing time when answering about past events, overreporting recent events, and underreporting distant past ones.

Telescoping

1. Address the questionnaire to a specific person, not "Occupant," and send it first class. 2. Include a carefully written, dated cover letter on letterhead stationery. In it, request respondent cooperation, guarantee confidentiality, explain the purpose of the survey, and give the researcher's name and phone number. 3. Always include a postage-paid, addressed return envelope. 4. The questionnaire should have a neat, attractive layout and reasonable page length. 5. The questionnaire should be professionally printed, be easy to read, and have clear instructions. 6. Send two follow-up reminder letters to those not responding. The first should arrive about one week after sending the questionnaire, the second a week later. Gently ask for cooperation again and offer to send another questionnaire. 7. Do not send questionnaires during major holiday periods. 8. Do not put questions on the back page. Instead, leave a blank space and ask the respondent for general comments. 9. Sponsors that are local and are seen as legitimate (e.g., government agencies, universities, large firms) get a better response. 10. Include a small monetary inducement ($1) if possible.

Ten Ways to Increase Mail Questionnaire Response

Administer the test to the same sample at two points in time.

Test-Retest reliability

A specialized type in which respondents record details about the timing and duration of their activities over a period of time

Time budget survey

-Good idea to prepare an outline of what you hope to accomplish with the survey questionnaire -Have others review your survey (scientific community) -Prepare a timeline before you go into the field to collect quantitative data (a good estimate of how long it will take) -Make stems clear, not too wordy -Stems and response options need to match

Tips (some) on question writing

Percentage of all respondents in the initial sampling frame who were located, contacted, eligible, agreed to participate, and completed the entire questionnaire.

Total response rate

Avoid possible confusion and keep the respondent's perspective in mind. *Don't pry - it is frowned upon. *Don't assume people taking the survey have the same education as you (the average American reads at an 8th/9th-grade level).

Two key principles guide writing good survey questions

the truthfulness of a measure, a valid measure is one that measures what it claims to measure

Validity

1. Behavior 2. Attitudes/beliefs/opinions 3. Characteristics 4. Expectations 5. Self-classification 6. Knowledge. We can use surveys for exploratory, descriptive, or explanatory research.

What is asked in a Survey?

The modern survey goes back to ancient forms of the census. The Domesday Book was a census of England conducted from 1085 to 1086 by William the Conqueror.

When did surveys start?

Results in survey research when the use of a specific term or word strongly influences how some respondents answer a survey question.

Wording effects

A type of survey research inquiry in which respondents must choose from a fixed set of answers.

close-ended question

Cross-sectional: can't assess change over time; cohort effects. Successive independent samples: can't see the CAUSE of any patterns; cohort effects; non-comparable successive samples. Longitudinal: causes of change can be hard to identify; attrition; questioning over time (participants may try to be consistent/inconsistent); may be more sensitive to the issue than the general population.

cross sectional vs successive independent samples vs longitudinal

questions that attempt to get at multiple issues at once, and so tend to receive incomplete or confusing (ambiguous) answers

double-barreled questions

A problem in survey research in which respondents give a "normative" response or a socially acceptable answer rather than an honest answer.

social desirability bias

