Survey Research


What is the cognitive model of survey response?

"Respondents must interpret the question, retrieve relevant information from memory, form a tentative judgment, convert the tentative judgment into one of the response options provided (e.g., a rating on a 1-to-7 scale), and finally edit their response as necessary."

Response Rate

# answered / # contacted

What are some of the major practical, legal, and ethical issues with using cell phones in survey research?

Random digit dialing (RDD) cannot be used with cell phones because autodialing them is illegal; caller ID on cell phones means many people won't pick up.

Satisficing

Choosing "no opinion"; straight-lining; spending less than 1 second per question; choosing socially desirable responses; agreeing with everything (acquiescence response bias).

How are fabrication, falsification, and plagiarism different concepts?

Fabrication: making up data or results and recording them. Falsification: manipulating materials, equipment, or procedures, or changing or omitting results. Plagiarism: theft or misappropriation of intellectual property; copying someone else's work. Interviewer falsification includes intentionally making up survey responses, coding refusals as ineligible (to avoid having to convert them), interviewing people not included in the sample, and miscoding answers to a question to avoid follow-up questions.

What are alternative methods of data collection—telephone, face to face, web, mail.

Face to Face, Telephone Interviewing, Mail Surveys, Dropped off surveys, Internet surveys

Keeter Study: Parallel Phone Survey

Found only a 2% difference in results between a standard survey (36% response rate) and a rigorous survey (66% response rate).

Strategies for reducing interviewer variance:

Give interviewers the exact questions to ask; train interviewers to explain the interview process; probe non-directively; manage interpersonal behavior; maintain professionalism; don't chat; have interviewers record answers exactly as given.

What is expressive responding (Schaffner and Luks, 2018)?

Individuals intentionally and knowingly provide misinformation to survey researchers as a way of showing support for their political side. An example is the crowd-size article, where respondents were asked to identify which crowd size belonged to Trump and which to Obama.

Internet Survey Disadvantages

Internet access is unequal; some email addresses are not really used by anyone; Facebook/social media/survey fatigue; who's behind the email address (an adult? a child?); pricey to start.

Interviewer Variance

Interviewer asking questions in an inconsistent way that makes estimates unstable and hard to replicate

What's the difference between expressive responding and misinformation?

Misinformation: the respondent truly believes something false. Expressive responding: the respondent answers incorrectly on purpose to express political views (more likely among the highly aware and highly motivated).

Dropped off surveys advantages

The person dropping off the survey can verify it gets to the correct person and does not have to be as well trained as an actual interviewer.

Eye Tracking Findings

Respondents spend more time reading choices at the top of a list (randomize the order to address this); drop-down choices don't encourage careful reading; rollover or pop-up boxes are not used by most people.

How has eye-tracking data contributed to what we know about survey response? (Galesic et al., 2008)

The eye-tracking study provided evidence that: respondents spend more time looking at the first few options in a list of response options than at those at the end; respondents are not likely to read definitions of survey concepts that are only a mouse click away; respondents also ignore initially hidden response options, so don't use drop-down or one-at-a-time response formats; and some respondents are more prone to these and other cognitive shortcuts than others.

Interviewer Bias

The interviewer shifts responses in a uniform way. May be due to age, sex, facial expressions, etc.

Measurement Error (aka errors of observation)

Survey answers do not accurately describe respondents' characteristics.

Dropped off surveys disadvantages

Costs are close to those of a personal interview.

Probability Sampling

Creates a generalizable sample. Types: simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, multi-stage cluster sampling.
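
A minimal sketch of how three of these designs can be drawn in code, using Python's standard library. The frame of 1,000 element IDs, the sample sizes, and the cluster sizes are all hypothetical, chosen only to illustrate the selection logic.

```python
import random

random.seed(42)
frame = list(range(1, 1001))  # hypothetical sampling frame of 1,000 element IDs

# Simple random sampling: every element has an equal chance of selection
srs = random.sample(frame, 50)

# Systematic random sampling: random start, then every k-th element
k = len(frame) // 50            # sampling interval
start = random.randrange(k)
systematic = frame[start::k]

# Cluster sampling: randomly select whole clusters, then keep every element in them
clusters = [frame[i:i + 100] for i in range(0, len(frame), 100)]  # 10 clusters of 100
chosen_clusters = random.sample(clusters, 2)
cluster_sample = [unit for c in chosen_clusters for unit in c]
```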

Disclosure Limitation

A procedure that evaluates risk of identifying someone in data and attempts to reduce that risk by: -restricting access to data -restricting contents of data

Imputation Methods

Create synthetic (made-up) data to limit the risk of disclosure.

Why pretest?

Once you have a complete draft of your survey, a pretest is a necessary next step in the research process because it helps you pinpoint problem areas, reduce measurement error, reduce respondent burden, determine whether respondents are interpreting questions correctly, and ensure that question order is not influencing how respondents answer. The pretest is a critical examination of your survey instrument that helps determine whether it will function properly as a valid and reliable social science research tool. It also allows the researcher to assess response latency (the time it takes to complete individual items and the full survey, which can then be reported in the introduction of the full-scale survey). An important feature of pretesting is the technical report (paper trail), which helps avoid future problems at the various stages of the study design and lends credibility to your proposed work and accountability to you as a researcher, potentially increasing the probability of obtaining research funding. Pretesting evaluates respondents' understanding of the concepts under study as well as the quality of their interviews; all respondents should understand concepts and ideas in the same way. Choosing not to pretest a questionnaire poses a potentially serious threat to the accuracy of the survey questions and resulting data. Test the survey on at least 12-50 people.

Expert-driven pretests: researchers sometimes call on experts in a given field to identify problems with questions or response options. These are crucial when assessing the face validity and construct validity of a measurement. Experts go through the entire survey, rating whether each question represents the construct very strongly, somewhat strongly, unsure, somewhat weakly, or very weakly. Experts can tell you whether all the questions in the survey are relevant and necessary and whether the flow works.

Respondent-driven pretests: administration of the pretest to friends and colleagues, though the most useful pretesting is done on a small subsample of the sample population.

Collecting pretest data - four strategies for a valid and reliable pretest assessment of your survey:
Behavior coding - researchers administer the survey themselves and watch as respondents take it in their presence.
Cognitive interview - the researcher encourages pretest respondents to think out loud and voice their ongoing mental reactions.
Individual debriefing assessment - researchers debrief respondents after they have completed the survey, explicitly to gather feedback and reactions to specific questions.
Group debriefing assessment - researchers bring test respondents together after the survey for a focus group discussion of the instrument.

Important issues to identify in pretests:
Unclear directions.
Skipped items (not part of a skip pattern) that were missed or avoided.
Refusal or inability to answer a question - "I don't know" and "NA" should ideally be selected for only a very few items.
"Other" responses - if respondents mark "other" frequently, additional response options may have been overlooked and need consideration.
Little or no response variation - a lack of variation in responses can indicate that a question is not relevant enough to warrant inclusion in the final survey.
Easily misinterpreted questions - are there double-barreled or negatively worded questions, or other questions that could be misinterpreted?
Sensitive questions.
Inconsistent scales - can respondents clearly differentiate the points on a scale? Also keep in mind that respondents tend to look at responses at the top of a list more than those at the bottom, so a good safeguard is to randomize the list order across surveys.
Computer-based and technical problems - are skip patterns correctly designed and implemented? Does the progress bar function? Etc.

Other pretesting issues: even if pretest survey items are not skipped, answered incorrectly, or identified as problematic in follow-up interviews, researchers should be attuned to a few more issues and opportunities that pretesting presents:
Check and improve respondent recall - respondents may need specific time references to help them recall past events.
Clarify complex concepts - ask pretest respondents to define specific concepts to help design questions for those concepts in the full-scale survey.
Track question response timing.
Assess the adequacy of space for responses to open-ended questions - for example, whether more physical room is needed on a pen-and-paper survey.
Assess survey appearance on varying media.
Updating time-specific surveys and multi-language surveys - when using or building on existing instruments, pretesting is important to assess whether the questions have stood the test of time. After the pretest, researchers can compare preliminary results with those from existing studies, using the pretest results as a cross-validation of the measure's accuracy.

What is the Cognitive Model of Survey Response?

Two tracks for completing a survey. Track 1 (high cognitive load): respondents work hard to understand questions and give accurate responses. Track 2: respondents rush to get through the survey, with responses marked by acquiescence and satisficing.

Interviewers: Good, Bad, Ugly

Good: help select eligible respondents, aid in cooperation and question comprehension, edit answers for quality. Bad: respondents might not admit things to a live person or may try to please the interviewer with their answers. Ugly: interviewer bias.

What is the deal with data intruders?

How to protect against data intruders (hackers): apply the N K rule - really unique data (e.g., the one super-rich person in the study) can be used to identify a person and needs to be suppressed. Lori discusses the breach at the U and how she pays an off-site service to clean the data and provide her with non-identifying data.

Internet Survey Advantages

Low cost, Quick, Private, Most people are familiar with internet

Face to Face Disadvantages

More expensive; requires staff who must be trained and paid; takes longer because people aren't home and you have to go back; can't always gain access (e.g., gated communities).

Telephone Interviewing Disadvantages

Non-response; hang-ups; people don't have time; caller ID shows "Scam Likely"; hard for the interviewee to remember all the parts of a question; people can't talk about the subject with others present.

Mail Surveys Disadvantages

Not actually talking to the person; people get a lot of mail and may not look at it right away; did the survey actually make contact?

What are non-response rates and how are they different from response rates?

Response rate: # answered / # contacted - the proportion of people who return the survey. Non-response rate: # of uncompleted surveys / total sample size.
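
A worked example of the two formulas with hypothetical counts, assuming everyone in the sample was contacted so the two denominators coincide:

```python
# Hypothetical numbers: 1,000 sampled units contacted, 240 completed surveys
contacted = 1000
completed = 240

response_rate = completed / contacted                    # 240 / 1000 = 0.24 (24%)
nonresponse_rate = (contacted - completed) / contacted   # 760 / 1000 = 0.76 (76%)
print(f"Response rate: {response_rate:.0%}, Non-response rate: {nonresponse_rate:.0%}")
```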

What does the Ruel text say about how non-response can lead to measurement error?

Ruel on measurement error: non-response does not occur randomly. Think of the table with the two samples, where the second sample's percentages by age group do not match the population. Only 4% of respondents were ages 18-30, even though that group makes up 21% of the population, so the survey will not do a good job of representing the population.

What are the threats to privacy presented by web-based surveys—are all web panels the same? (Regnerus 2012)

Two kinds of web panels: non-probability (members do not have a known probability of being selected; a self-selected "sample") and probability (a probabilistic approach that recruits panel members from an RDD sample of household phone numbers; using reverse lookup, an invitation is mailed with phone follow-up; the cooperation rate is about 33%). The probability basis makes inferential statistics possible and can potentially generalize to a larger population.

Non Probability Sampling

A sampling technique in which there is no way to calculate the likelihood that a specific element of the population being studied will be chosen. Types: convenience sampling, quota sampling, purposive sampling, snowball sampling, respondent-driven sampling.

Nonresponse Bias

The difference that results when participants are significantly and qualitatively different from non-participants.

What does the research say about informed consent protocols?

Three areas of research:
How do respondents react to informed consent protocols? Providing people with information about the content of the survey ahead of time does NOT affect response rates or quality, and respondents appreciate a heads-up about sensitive topics before taking the survey.
What about methodological surveys where participants need to be surprised or unaware of parts of the study? You can apply for exemptions; sometimes it's acceptable to leave participants in the dark for the sake of science.
Written or oral consent? Asking for signed informed consent MAY negatively affect responses; some people are willing to answer survey questions, but signing a document makes them nervous. Lori suggests this requirement should be waived in cases of minimal risk because informed consent doesn't help respondents, it mostly protects researchers. Verbally reassuring respondents that their information will be kept confidential has positive effects on response rates to sensitive questions. This is usually a good thing to do, but don't go into too much detail or make a huge deal of it, or you might make people suspicious and scare them away.

What are the key issues in sampling? What did the lecture say about the different types?

The two types of sampling are probability sampling and nonprobability sampling. Probability sampling is when a random selection is made and you can calculate the probability for each person being included in the sample. Methods include simple random samples, stratified random samples, and cluster sampling. Nonprobability sampling is when you don't know the probability of a person falling into a sample. This happens in different forms of convenience sampling or snowball sampling.

What are the implications of question order and framing (Galesic et al 2007)?

Cluster questions according to their topic; unorganized questions may confuse the respondent. Consider headings to aid organization. Order can impact completion: Begin with engaging and easy questions. Save sensitive questions for the middle. Save demographic questions for the end. Some questions may cue readers to think of something specific they wouldn't have thought of otherwise. (E.g. asking a question about religion and then asking who they admire.) Habituation - respondents encounter a series of questions that are so similar, they don't read carefully enough or assume they're all the same and answer in that way. In terms of answer order, respondents are more likely to select answers at the top of a list. (We see this in the Galesic et al. eye-tracking article.) (Mostly from Wk. 3, Ch. 3 and PPT)

Interviewer variance:

Clusters of respondents interviewed by the same person tend to have more similar responses than do clusters of respondents interviewed by different interviewers. This cluster effect can appear, for example, if an interviewer uses inappropriate or inconsistent probing techniques, has different interpretations of questions and re-words them accordingly, or differs in the way he or she reads answer categories. Interviewer variance is the "component of overall variability in survey statistics associated with the interviewer"

What makes good questionnaire items?

Concise, clear, simple language; specific rather than vague phrases (e.g., "are you a social drinker?" is vague); not double-barreled (a "two-in-one" question); unbiased, not leading; carefully phrased if the topic is sensitive; response options that do not overlap. (From Wk. 3, Ch. 4 and PPT)

What are the issues with coverage of a target population by a frame?

Overcoverage and undercoverage are potential issues when the target population differs from the sampling frame. Undercoverage is when people in the target population are missing from the sampling frame (e.g., individuals without a cell phone or a home address). Overcoverage is when people who are not members of the target population are included in the frame population (e.g., a business telephone number included in a frame meant to cover the household population). Coverage error, then, is a function of the sampling frame and the target population. Note that coverage error exists before the sample is drawn because it is an issue of a mismatch between the frame and the target population.

What are trends in non-response over time?

Pew reported a survey response rate of 6%, down from 36% in 1997. Contact rates and cooperation rates are declining too. (Diane Meppen)

How is pretesting different than pilot testing?

Pilot testing (also known as a feasibility study) is when the interviewers, the final survey, and some stages of coding and analysis are rehearsed prior to the actual survey administration. The pilot study is a trial run of the entire study from start to finish that increases the likelihood of success for the main study. Pilot studies are conducted to test the entire research process, usually from a methodological standpoint in actual field conditions. Unlike survey pretests, which are usually done on a small scale, pilot studies have no cognitive interviews or focus groups to determine which measures and concepts are appropriate for the survey questions. Rather, a pilot study is systematically administered to a diverse cross-section of the sample to ensure that the entire survey schedule runs smoothly and that the coding and analysis can be done properly and efficiently.

What were some of the key implications of incentives for survey research?

Prepaid incentives are more effective than post-interview incentives (prepaid = social contract; post-interview = payment for work). Incentives encourage cooperation. Their value is restricted by Institutional Review Boards (IRBs) to ensure participation remains voluntary. Goods are not as effective as cash.

How does wording priming affect survey response?

Priming is when you ask a question in a way that gets someone thinking about something in a certain way, like "would you favor or oppose your local town or city council passing this kind of non-discrimination ordinance, such as the one recently endorsed by the LDS Church?" You can also prime respondents by ordering questions in a certain way. You want to start with the general question, then move into specific questions. Otherwise, your respondent will have the specific response in their head as they are answering the general question. Priming could potentially alter survey responses.

How bad is non-response in survey research? What can we do to fix it?

Response rates have fallen substantially over the years (the average response rate today is in the single digits), and this is problematic when certain types of people don't respond to the survey because this can introduce nonresponse bias. There are a few methods that can reduce non-response, all of which require additional time and money from the survey researcher. A handful of options include sending pre-notification letters, offering an incentive, contacting non-responders multiple times, following up using different modes, reducing the respondent burden, and having the survey sponsored by a governmental entity.

Sampling

Sampling is defined as collecting information on a portion of the population. The main issue with sampling is ensuring that it is representative of the target population.

3 Models of Non Response

Separate Causes Model, Common Cause Model, Survey Variable Model

Recode

Smooth outliers in data by rounding or categorizing
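
A minimal illustration of recoding an outlier by top-coding and by categorizing; the income values and the cutoffs below are made up for the example.

```python
incomes = [32_000, 45_000, 51_000, 58_000, 2_400_000]  # one extreme outlier

# Top-code: cap values above a threshold so the outlier can't identify anyone
top_coded = [min(x, 250_000) for x in incomes]

# Categorize: replace exact values with broad brackets
def bracket(x):
    if x < 50_000:
        return "under 50k"
    elif x < 100_000:
        return "50k-100k"
    return "100k or more"

categorized = [bracket(x) for x in incomes]
```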

Thinking about the survey life cycle from a quality perspective, what are the sources of error from both the measurement and representation dimensions?

The measurement dimension (related to responses):
Construct validity - errors involving the gap between the measures and the construct (e.g., using standardized tests as a measure of aptitude).
Measurement error - errors that result from departure from the true value of the measurement (e.g., underreporting of abortion or cocaine use).
Processing error - errors arising from efforts to prepare data for statistical analysis (e.g., removing an entry from someone claiming multiple assaults a day when that person is actually a bouncer).
The representation dimension (related to statistics):
Coverage error - errors arising when enumerating the target population using a sampling frame (e.g., an outdated list of businesses in the area causes you to miss a handful of valid units).
Sampling error - errors stemming from surveys measuring only a subset of the frame population. Sampling errors arise through bias (too many units with one characteristic selected) or variance (the sample is too small to get a reliable estimate of the sample mean; e.g., the rule of thumb that n > 30).
Nonresponse error - errors from failing to measure all sample persons on any or all measures (e.g., absent students not taking standardized tests, inflating/biasing the estimator).
Adjustment error - errors from the construction of statistical estimators to describe the full target population (e.g., assigning weights requires human judgment).

Mail Surveys Advantages

Unit (start-up) cost is high but variable cost is low; not as much staff needed; mail goes everywhere; the interviewee can complete the survey on their own timeframe.

Telephone Interviewing Advantages

Unit/variable costs: unit = start-up cost, variable = cost to do each interview; random digit dialing (RDD) makes the variable cost very low and efficient. Access to gated communities. Still talking to a person, so the interviewer can answer questions. Response rates were decent.

Perturbation Methods

Use statistical methods to alter data in order to limit risk of disclosure
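
A sketch of one common perturbation approach: adding small, zero-mean random noise to each value before release. The variable and the noise scale are arbitrary choices for illustration.

```python
import random

random.seed(7)
ages = [23, 34, 41, 67, 85]

# Add zero-mean random noise to each value; more noise means more
# protection against disclosure but a greater loss of information
perturbed = [a + random.gauss(0, 2) for a in ages]
```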

How is informed consent important?

You need to make a plan and think about the platforms you use. Protect respondent information through administrative channels: everything should be password-protected; identifiers and consent forms should be stored separately from survey data; use interviewer/staff confidentiality agreements; train staff and enforce norms.

What are some administrative and technical procedures for safeguarding confidentiality?

Two main ways to reduce the risk of disclosure (of a subject being identified):
Restrict access to the data - restrict who has access; make staff/interviewers sign confidentiality agreements; everything should be password protected. Example: research data centers like the new one at the U.
Restrict the contents of the data that may be released - identifiers and consent forms should be stored separately from data; suppress any unique information that allows a respondent to be identified; restrict geographic details and outliers; adjust data released to the public and only publish aggregates.
When there are "intolerable risks of re-identification" (when people could be really hurt if identified), the records can be altered in five ways:
Geographic thresholds: determine the smallest geographic units for which data will be released (e.g., only release county-level data, or only areas of 100,000 residents or more).
Data swapping: trade data values across records so intruders don't know which information goes with which individual.
Recoding: change the values of outliers (round up, round down, categorize, or drop data).
Perturbation methods: use statistical models to alter individual values, like adding randomly generated numbers to data values.
Imputation methods: replace real values with made-up (synthetic) data.
The greater the amount of "noise" added to the data, the greater the protection but the higher the loss of information.
Key terms:
N K rule: if a small number of respondents (n) contribute a large percentage (k) of the total cell value, the cell is judged sensitive. Sensitive cells can be suppressed or recoded.
Population unique: an element in the larger population that is the only element in a cell (has a unique combination of values for the included variables).
Sample unique: the same as above, but unique in the sample rather than the general population.
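
A hedged sketch of the N K rule check for a single table cell: if the largest n contributions account for more than k percent of the cell total, the cell is flagged as sensitive. The thresholds n=2 and k=60 are illustrative only, not values from the lecture.

```python
def nk_sensitive(contributions, n=2, k=60):
    """Flag a cell as sensitive if the largest n contributions
    exceed k percent of the cell total (illustrative thresholds)."""
    total = sum(contributions)
    top_n = sum(sorted(contributions, reverse=True)[:n])
    return total > 0 and (top_n / total) * 100 > k

# One very large contributor dominates this cell, so it would be suppressed or recoded
print(nk_sensitive([5, 8, 7, 950]))   # True
```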

Uses of focus groups in survey research:

A tool for generating ideas - can help generate hypotheses if researchers are investigating a new area.
For complementing quantitative methods - better for answering "why?" or "how?" questions.
As the primary data collection method - can be used to investigate topic areas that cannot easily be studied via quantitative methods, such as Brene Brown's work studying shame.
For the development and evaluation of programs - the needs of the population and the effectiveness of the program are particularly suited to focus groups.

What are models of non-response?

Think of an arrow diagram.
Separate causes: one thing influences whether an individual participates, and a different thing influences whether they answer a certain question.
Common cause: the same thing affects whether the person participates and how they answer the questions.
Survey variable: something about the questions themselves affects participation. Examples: in time-use surveys, participants who are more likely to be at home are more likely to answer, so the survey may overestimate time spent at home; in a literacy survey about how much you can read, a participant may not be able to read the questions.
If the reason for non-response is related to the focus of the survey, you have an increased risk of nonresponse bias.

What are the key principles of ethical research—beneficence, justice, respect for persons? What were the implications of the Iraqi death count study (Burnham et al., 2006)?

Beneficence: minimize possible harm and maximize possible benefits for the subject, and decide when it is justifiable to seek benefits in spite of risks and when to forgo them because of risks.
Justice: a fair balance between those who bear the burden of research and those who benefit from it.
Respect for persons: the ethical requirements for informed consent.
The purpose of the Iraqi death count study was to update Iraqi mortality figures in the wake of the 2003 invasion of Iraq. Method: cross-sectional cluster-sampling interviews; households: N = 1,849; individuals: N = 12,801. Burnham et al. found more than four times the number of excess violent deaths (601,000) compared to passive surveillance measures (112,000), with data collected between May and July 2006.
Weakness: political bias - the intent was to publish results before the midterm elections to influence voting.
Methodological issues: can't include households where everyone has died; field teams had to make judgment calls in selecting households for inclusion; random selection of streets based on proximity to main streets might overstate violence.
Ethical issues: the study authors would not make the survey questions public; field teams collected names in violation of the approved study protocol, potentially endangering interviewers and the households interviewed; questions about adequate informed consent.

How are response rates different than cooperation and contact rates?

Contact rate: the proportion of eligible cases in the sampling pool in which a member of a sampled household was contacted—that is, reached by an interviewer (in telephone and in-person surveys) or received the survey request (in mail and Internet surveys). Example: how many people opened the email invitation vs. how many emails were sent out.
Cooperation rate: the ratio of all cases interviewed out of all eligible units ever contacted. Example: how many people filled out the survey vs. how many opened the email.
Response rate: the ratio of all cases interviewed out of all eligible sample units in the study, not just those contacted. Example: how many people filled out the survey vs. how many emails you sent out.
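
Using the email example above, a quick sketch with hypothetical counts showing how the three rates relate (when all sampled units are eligible, contact rate times cooperation rate equals the response rate):

```python
emails_sent = 2000       # eligible sample units invited
emails_opened = 900      # contacted
surveys_completed = 450  # interviewed

contact_rate = emails_opened / emails_sent             # 0.45
cooperation_rate = surveys_completed / emails_opened   # 0.50
response_rate = surveys_completed / emails_sent        # 0.225 = 0.45 * 0.50
```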

What are some of the issues in question item construction? Implications of the choice of different question formats—satisficing, habituation, and social desirability?

Habituation - respondents encounter a series of questions that are so similar that they don't read carefully enough, or assume the questions are all the same and answer accordingly. In ordering questions, be mindful with similar questions and similar responses, such as a long set of questions with a Likert scale.
Social desirability - respondents tend to overreport socially desirable traits or behaviors and underreport socially undesirable or stigmatized ones. To get more accurate answers, avoid loaded words such as "illegal," emphasize anonymity and confidentiality, consider a self-administered format, and place sensitive questions in the middle.
Satisficing - respondents take shortcuts and don't take the time to complete surveys as they should. This can manifest as habituation, item nonresponse, choosing all neutral responses, agreeing with all assertions (acquiescence response bias), or social desirability.
Acquiescence response bias - the respondent tendency to agree with any assertion, regardless of its content.
Respondent burden - respondents may need to exert a higher degree of concentration than they expected if they are asked sensitive questions or asked to recall events that are difficult to remember.
Respondent fatigue - too much burden can lead to fatigue and ultimately to a high nonresponse rate or low-quality data. Thus it's important to streamline the layout, pretest, emphasize the survey's importance in the cover letter, provide accurate estimates of the survey's length and the time needed, and emphasize anonymity and confidentiality. (From Wk. 3, Ch. 3 & 4 and PPT)

What is item non-response? Unit non-response?

Item non-response: the respondent chooses not to respond to a question (often income). Could be because the respondent doesn't want to give the information, because of wording confusion, or because the question is too complex.
Unit non-response: total failure to complete the survey.

What are the methods to deal with item non-response?

Item non-response = the respondent skipped a question. "Item non-response is more complex than you would first think": sometimes people don't want to answer a question, but sometimes the wording or complexity of the question is the issue; it could be related to response fatigue too. The Pascale article about under-reporting of Medicaid coverage found it was likely a result of questionnaire design - respondents didn't quite fit into any of the categories or couldn't remember what kind of coverage they had.
Two methods to deal with item non-response in your data:
Ignoring: ignore all of that respondent's responses if they skip one item and only include 100% complete surveys in the analysis (recommended if the missing item is your dependent variable).
Adjusting/imputation: fill in the missing item and flag it so you don't forget that it was imputed. Options include substituting a measure of central tendency, such as the mean response for the respondent's subgroup; regression imputation (use other variables to predict the missing item); and hot-deck imputation (a value borrowed from another case in the data set).
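
A minimal sketch of the adjustment option, using mean substitution within the respondent's subgroup plus an imputation flag; the records and the subgrouping are hypothetical, and a hot-deck variant would borrow a single donor's value instead of the mean (e.g., random.choice(donors)).

```python
from statistics import mean

# Hypothetical records: income is the item with non-response (None = skipped)
records = [
    {"group": "A", "income": 40_000},
    {"group": "A", "income": 52_000},
    {"group": "A", "income": None},
    {"group": "B", "income": 75_000},
    {"group": "B", "income": None},
]

for r in records:
    if r["income"] is None:
        # Donors: respondents in the same subgroup who did answer the item
        donors = [x["income"] for x in records
                  if x["group"] == r["group"] and x["income"] is not None]
        r["income"] = mean(donors)   # mean substitution within the subgroup
        r["imputed"] = True          # flag so the imputed value isn't forgotten later
    else:
        r["imputed"] = False
```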

Why would you use a list experiment? What did the Lax et al. article say about getting truthful answers to survey questions?

Recall that a list experiment is truly an experimental design using surveying. From Lax et al. (2016): "In a list experiment, subjects are randomized into control and treatment groups. In the control group, subjects are given a list of J nonsensitive items and asked to report how many, not which ones, they support. Members of the treatment group are assigned the same task, but receive a list of J + 1 items." With a large enough sample, researchers can take the mean difference between the two groups to estimate population views. A list experiment can be used to measure attitudes toward a sensitive, taboo, or stigmatized topic; the idea is that this method reduces the impact of social desirability because respondents never say exactly which items they support. The researchers carefully pick items that nearly everyone would agree or disagree with (e.g., drunk driving laws should be removed) and two negatively correlated items (e.g., ObamaCare and reducing spending on food stamps), which helps avoid all-agree or all-disagree responses (ceiling and floor effects). In Lax et al. (2016), they found no evidence that social desirability affected survey responses: other polls, including national telephone polls, that showed increased approval for same-sex marriage were accurate and not a result of social desirability. (From Wk. 5, Lax et al. article)
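
A sketch of the difference-in-means estimator described above: the estimated prevalence of the sensitive item is the treatment-group mean count minus the control-group mean count. The reported counts below are hypothetical, included only to show the arithmetic.

```python
from statistics import mean

# Hypothetical reported counts of supported items
control   = [2, 1, 3, 2, 1, 2]   # control group: list of J non-sensitive items
treatment = [3, 2, 3, 2, 2, 3]   # treatment group: same list plus the sensitive item

# Estimated share supporting the sensitive (J + 1) item
estimate = mean(treatment) - mean(control)
print(f"Estimated support for the sensitive item: {estimate:.0%}")
```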

What did articles have to say about cell phone-only populations?

That overall, results weren't that different from surveys that used both cell phone and landlines. Cell phone sampling is typically more representative of the U.S. anyway, whereas landline users skew old and white. (Kennedy et al. in Week 11)

What are the issues in survey interviewing bias? What are the strategies to reduce them?

Social desirability: see above.
Experience level of the interviewer: experienced interviewers tend to have better response rates, perhaps because they are more comfortable enlisting cooperation. Yet experience was shown to lead to lower drug-use estimates, perhaps because interviewers become complacent or careless in reading questions. (The NSDUH study was self-administered with an interviewer present, so it's possible interviewer behavior rushes respondents.) One solution could be to reduce per-interview compensation or incentives so interviewers are not motivated to complete as many interviews as possible.
Demographics or traits of interviewers can affect answers if the interviewer has obvious traits related to the specific questions. One suggested solution is randomization of the assignment of respondents to interviewers.
Some of these issues can lead to variance. Variance may also be random, meaning the respondents truly are different.
General tips to reduce these issues:
Read questions exactly as worded (but they must be worded clearly).
Explain the procedure.
Don't be too chatty (a little about the weather is OK), don't share personal opinions, don't overshare about yourself, and don't paraphrase what the respondent said.
Talk slowly so respondents do not infer that going quickly is a priority.
Model behavior: use a recording at the beginning of the interview that models a slow and thorough interview.
Get a written commitment from the respondent after a few questions, asking them to commit to giving thorough, accurate data.
Communicate high standards through "programmed instructions," i.e., a standard written prompt that the interviewer reads to emphasize the importance of taking time, thinking about answers, and providing accurate data.
Reinforce thorough answers and provide "negative feedback" (e.g., ask respondents to think more carefully) if they rush.
Probe "non-directively," in a way that does not increase the likelihood of one answer over others.
Train interviewers.
Overall, these strategies do two things: change respondents' understanding of what is expected and generate a willingness to perform better by reporting accurately. (From Wk. 7, Ch. 9 and PPT)

Face to Face Advantages

Some sample designs can be better implemented (e.g., area probability samples); you get to know the area better; you have your foot in the door (more difficult to say no); shock and awe when paired with other pieces; more ability to gain trust.

What are the benefits of selecting stratified random samples?

Stratified random sampling allows control over the proportion of responses from particular subgroups, or strata. Stratified sampling can be done proportionately (to mirror the characteristics of the population) or disproportionately (to allow relatively more people from a specific subgroup to respond so statistical analyses are valid). Typical strata are gender, race, or geography. Stratified random sampling can improve the accuracy of estimation for the selected variables, and it focuses on the representation of important subpopulations.
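
A small sketch of proportionate versus disproportionate allocation across two hypothetical strata (an 80/20 urban/rural frame and a sample of 100 are made-up figures for illustration):

```python
import random

random.seed(3)
# Hypothetical frame: 80% urban, 20% rural
strata = {"urban": list(range(800)), "rural": list(range(200))}
total = sum(len(units) for units in strata.values())
n = 100

# Proportionate allocation mirrors the population: 80 urban, 20 rural
proportionate = {name: random.sample(units, round(n * len(units) / total))
                 for name, units in strata.items()}

# Disproportionate allocation oversamples the small stratum so its subgroup
# estimates are stable; weights would then be needed for overall estimates
disproportionate = {"urban": random.sample(strata["urban"], 50),
                    "rural": random.sample(strata["rural"], 50)}
```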

Describe a simple model of survey response process

The cognitive model of survey response presents two tracks that respondents could choose. The first track, the high road, is marked by accurate responses. This is when respondents do the work to understand the question, recall the facts correctly, and organize memories to answer the questions as accurately as possible. The second track, the low road, is described as the path of least resistance. Respondents, in this case, bypass the cognitive work and give answers influenced by acquiescence (the tendency to agree with any assertion) and satisficing (selecting the easiest answer such as "no opinion" or the socially desirable response).

What is the difference between the life cycle of a survey from a design perspective, the measurement dimension, process perspective, and representational dimension

The difference between these perspectives and dimensions is what each view focuses on. The process perspective focuses on the action steps (such as choosing the mode of collection and recruiting and measuring the sample), while the design perspective visualizes bringing each element from the abstract to the concrete. The measurement and representation dimensions are components of the design perspective. The measurement dimension is concerned with "the what" and takes a construct and turns it into a response. The representation dimension is concerned with "the who" and starts with a target population and ends with respondents.

To reduce interviewer variance:

To estimate the effects of interviewers on respondents you have to remove the effects of real differences. This is done through "interpenetrated assignment," in which probability sub-samples of the full sample are assigned to each interviewer. Reducing interviewer bias can reduce interviewer variance. Strategies for reducing interviewer bias:
Use experienced interviewers.
Have interviewers communicate to respondents that they value accurate data, and have the interviewer model for the respondent what the interview should be like (talking slowly, asking for clarification, giving full and complete answers).
Systematic reinforcement: the interviewer should provide positive feedback when respondents behave in ways consistent with accurate reporting.
Standardization: interviewers should read instructions at the beginning of the survey and during the survey to encourage respondents to think thoroughly and provide accurate responses.

What are the methods to deal with unit non-response? (Peytchev et al, 2010)

Unit non-response = the respondent failed to participate at all. Peytchev et al. (2010) was the article about abortion rates being underestimated because of social stigma; in that study, after 12 months of data collection the authors analyzed nonresponse rates, targeted groups that were less likely to participate, and offered larger incentives to those individuals.
You might increase response rates by: using ACASI (audio computer-assisted self-interviewing) instead of a live interviewer; offering incentives; sponsorship; multi-mode surveys; a longer data collection period; training interviewers.
Dillman's Tailored Design Method: pre-notification by mail (with $2); multiple attempts/call-backs; reminder letters to non-respondents. The method aims to increase rewards, reduce costs, and build trust. For online surveys Dillman prescribes six contacts: a letter requesting participation and offering an incentive; an email with the link; a postcard thanking/encouraging; resending the link; a phone call to non-respondents; sending the link from a different email account.
Q: Are the results obtained using the survey data biased by the presence of non-response? A: No; at least, the bias appears to be small and inconsistent for most variables. Keeter et al. studied the difference between a basic survey (response rate: 36%) and a rigorous survey (response rate: 61%) and found the results were very similar; the largest differences were in demographic items.

Validity

Validity is needed to ensure that your survey measure reflects the intended construct. Face validity: the questions appear to measure what they are intended to measure. Construct validity: the extent to which the survey measures the theoretical construct it is intended to measure. Predictive validity: the degree to which the operationalization can predict, or correlate with, other measures of the same construct. Concurrent validity: benchmarking one measurement against another; when one question is comparable to another, concurrent validity can be established by correlating the question with another that has already been validated.

What roles do interviewers play in survey research?

When interviewers conduct surveys, they play a central role in the process. A few of their essential roles are helping to select eligible persons in the household, eliciting cooperation, aiding in question comprehension, and editing answers for quality. However, interviewers can also play a role in biasing survey results. This tends to happen in cases when the topic is socially undesirable, when traits of the interviewer are linked to the topic of the survey, or when the interviewer is perceived differently as a function of interviewer experience.

Why and when should we weight data?

Why? Weights are used to make statistics more representative of the population. When? Before or after data collection. Before: design weights compensate for over- or under-sampling of specific cases or for disproportionate stratification. After: a post-stratification weight is used if the sample does not represent the population, to compensate for the fact that persons with certain characteristics are less likely to respond to the survey; you need population estimates or data to calculate these weights.
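
A minimal sketch of a post-stratification weight: the ratio of the population share to the sample share within each group. The 18-30 shares echo the Ruel example (4% of the sample vs. 21% of the population); the other age groups' shares are hypothetical, and known population shares are assumed to be available from another source.

```python
# Shares by age group (18-30 figures from the Ruel example; others hypothetical)
population_share = {"18-30": 0.21, "31-64": 0.55, "65+": 0.24}
sample_share     = {"18-30": 0.04, "31-64": 0.60, "65+": 0.36}

# Post-stratification weight: respondents in under-represented groups count for more
weights = {g: population_share[g] / sample_share[g] for g in population_share}
# {'18-30': 5.25, '31-64': 0.92, '65+': 0.67} (approximately)
```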

What are the issues in protecting respondent information through administrative channels?

You need to make a plan and think about the platforms you use. Protect respondent information through administrative channels: everything should be password-protected; identifiers and consent forms should be stored separately from survey data; use interviewer/staff confidentiality agreements; train staff and enforce norms.

Errors of non-observation

The survey sample does NOT represent the larger population.

