ARM: Questions

What is encoding?

"Encoding" is the process of forming memories from experiences. Survey designers have little impact on these mental processes; however, they can write better questions if they take into account how respondents may have encoded the information.

Please name TWO types of bias in question designs.

(1) Problems with wording (ambiguous questions, complex questions, double-barreled questions); (2) formatting problems.

What is the difference between unit non-response and item non-response?

- Unit non-response is when a sampled person did not respond at all - Item non-response is when an individual question is not answered within a participant's survey

Name one of the key standards for subjective questions:

- response task is appropriate to the question and is relatively easy for most people to perform - response alternatives ar

Which of the following should you not include on a cover sheet when giving out a paper-based survey? -What the study is about -Information about who was not selected as a respondent -Instructions to return questionnaire -Fit the cover sheet to 1 page

-Information about who was not selected as a respondent

List some of the advantages to authenticated web-based surveys.

-linked to a participant list -password/ID credentials needed to login -can track progress of individual participants, send reminders -may be able to send invitation and personalize -if custom data points are known, they can be programmed into survey for personalization or calculation -each participant can only submit once -can complete survey at more than one sitting

How can one efficiently record what was said in a focus group? One or more observers take notes One way windows Videotape the group All of the above

...

What are types of errors in survey methodology? (select all that apply) Editing and processing errors Coverage errors Sampling errors Non-response errors all of the above

...

Which of the following is an example of random sampling? Taking the name of every person in a telephone book. Generating a list of numbers by picking numbers out of a hat and matching numbers to names in the telephone book. Taking the tenth name from a list of everyone in the telephone book.

...

What are the three characteristics of a target population?

1) Finite in size, can be counted 2) Exists in a specified time frame 3) Observable/accessible

Name 3 standards of a good question:

1) Questions need to be consistently understood 2) Questions need to be consistently administered or communicated to respondents 3) What constitutes an adequate answer should be consistently communicated 4) Unless measuring knowledge is the goal of the question, all respondents should have access to the information needed to answer the question accurately 5) Respondents must be willing to provide the answers called for in the question Answer: (must correctly list 3 of the 5 above)

Valid answers to questions are measures of what researchers are trying to measure. What are the 4 approaches to evaluating the validity of survey measures?

1) Studies of patterns of association - construct validity, predictive validity, and discriminant validity 2) Validating against records - using factual data or record checking and comparing survey results 3) Comparing alternative question forms - ask the same question in two different forms, then compare 4) Consistency as a measure of validity - measure the reliability or consistency of survey answers by asking the same person the same questions twice, or by asking two people the same question

Why do probability-based sampling techniques have limited success for the most-at-risk populations?

1) These populations typically don't have sampling frames, 2) they are typically too small to be adequately measured in general population surveys, 3) they often practice illegal or socially stigmatizing behaviors, making them difficult to access

What are the four different types of weighting that are prevalent in complex surveys?

1. As a first-stage ratio adjustment 2. For differential selection probabilities 3. To adjust for unit nonresponse 4. Post-stratification weighting for sampling variance reduction

How can researchers reduce response distortion?

1. Assure confidentiality of responses: minimize the use of names or other identifiers, make sure that completed survey answers are not accessible to non-staff people, and dissociate identifiers from survey answers 2. Emphasize the importance of accuracy: respondents are asked to make a commitment to give accurate answers 3. Reduce the role of the interviewer: use self-administered forms

Common Purposes of Survey Research

1. Exploration: what are the needs of the community? (needs assessment) 2. Description: what proportion of people in the target area have a given characteristic? 3. Hypothesis/Theory Development & Testing: example, to what extent does gun violence come from video games? 4. Predictions 5. Evaluations

Name 4 errors in responding to questions

1. Misunderstood the question 2. Deliberately do not answer correctly 3. Forget 4. Unable to provide necessary information 5. Have no answer

What are the two paradigms of cognitive interviewing?

1. One involves a cognitive interviewer whose role is to facilitate participants' verbalization of their thoughts; an example is the think-aloud procedure. 2. The other involves an interviewer who guides the interaction more proactively; an example is intensive interviewing with follow-up probes.

Name and define the 2 types of memory errors:

1. Overreporting: the respondent includes events from outside time period he/she is being asked about 2. Underreporting: the respondent does not include events that should have been included within the time period he/she is being asked about.

What are the three reinterview assumptions

1. There are no changes in the underlying construct between the two interviews 2. All important aspects of the measurement protocol remain the same 3. There will be no impact of the first measurement on the second responses

Name the three ways we might bring bias into our surveys

1. Way the question is written 2. Way the questionnaire as a whole is designed 3. Way the questionnaire is administered (Answers might also be more detailed. Other potential answers include leading questions, inconsistency, formatting, not objective interviewer, faulty scale, problems with wording...)

For surveys of the general population, questions should be written at what reading level? Less than 4th grade 4th-6th grade 8th-9th grade 11th-12th grade

4th-6th grade

What is a certificate of confidentiality?

A certificate of confidentiality is granted by the Department of Health and Human Services and helps protect researchers from being compelled, in most circumstances, to disclose names or other identifying characteristics in federal, state, or local proceedings. It remains in effect for the duration of the study and protects the identity of respondents, not the data.

What is a survey and what are the goals of surveys?

A survey systematically collects information on a topic by asking individuals questions. The goal of a survey is to generate statistics on the group(s) those individuals represent.

What is a target population and give some characteristics of a target population.

A target population is the group of elements about which the survey investigator wants to make inferences using the sample statistics. They are finite in size, have time restrictions, and are observable. The investigator has to specify the kind of units that are elements in the population and the time extent of the group.

What are the 4 steps of the cognitive process that occur while filling out a questionnaire (Multiple choice question) a) Comprehension b) Rewording c) Retrieval of information d) Reporting an answer e) Satisficing f) Judgment and estimation g) Ordering

A, C, D, and F

For recall of shorter periods and more salient events, which of the following is most likely to happen? A. Over-report B. Under-report C. Report without bias

A. Over-report

What are primacy and recency effects? How can you minimize primacy and recent effects in a study?

Primacy effect: the individual picks the first option presented because they remember it better than options presented later; common in visual surveys (mailed surveys). Recency effect: the individual picks the last option presented because they remember it better than options presented earlier; common in auditory surveys (telephone or personal interview surveys). You can minimize primacy/recency effects by reducing the number of response categories and by randomizing the order of categories in survey instruments. Or can do as fill-in-the-blank: A PRIMACY effect is when an individual picks the first option presented because they remember it better than the options presented later. A RECENCY effect is when an individual picks the last option presented because they remember it better than the options presented earlier.
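
The randomization mitigation named above can be sketched in Python; the option labels and function name are hypothetical, not from any particular survey package:

```python
import random

def randomized_options(options, seed=None):
    """Return a per-respondent random ordering of response categories,
    so no option systematically benefits from primacy or recency."""
    rng = random.Random(seed)   # a per-respondent seed keeps runs reproducible
    shuffled = list(options)    # copy; leave the master list intact
    rng.shuffle(shuffled)
    return shuffled

options = ["Strongly agree", "Agree", "Disagree", "Strongly disagree"]
```

In practice each respondent would get a fresh ordering (e.g., seeded by respondent ID), while the recorded data maps answers back to the canonical option list.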

In questionnaire design, there are two concepts we have to consider when creating our response options: the primacy effect and the recency effect. What are primacy effects and recency effects?

A primacy effect is when presenting an option first increases the chance that respondents will choose it over a later option (visual modes). A recency effect is when a response option at the end of the list gains popularity of selection, usually in auditory modes.

What are the pro's and con's of self-administered surveys?

Pro's: increased flexibility (time, place, etc.); lower cost; better access; lower social desirability bias. Con's: more likely to miss or skip a question; risk of respondent not understanding the question

Describe the pros and cons of using progress indicators in web administered surveys.

Progress indicators can be beneficial in the sense that they can give clues as to where abandonments tend to happen in a survey, or where in the web-administered survey respondents gave up. Also, progress indicators inform respondents of their progress and can serve as motivation to complete the survey. One con of progress indicators is that they take up additional download time, leading to lower completion rates and higher nonresponse. Also, if a web-administered survey contains many skip patterns, the indicator may not accurately represent a participant's progress and may seem daunting.

What are pros and cons of web-based surveys?

Pros: multimedia capabilities such as video and audio; real-time data validation, so the survey can give immediate feedback to participants if data is invalid (validation rules can include maximum values, minimum values, required answers, and maximum number of characters); conditional logic to show/hide questions, or text based on conditions (e.g., a customized end page). Cons: cost (designing a good web survey requires a significant amount of time at the beginning); security (some software packages don't conform to security standards); layout (different browsers render the survey differently on participants' screens).

When sampling rare populations, what are common difficulties and what are the most effective sampling methods?

Rare populations are hard to locate, such as people with a rare disease or individuals who participate in illegal or stigmatized activities. For these rare or hidden populations, common barriers include the absence of a visible sampling frame (unknown size and boundaries) and privacy concerns (illegal or stigmatized behaviors). Snowball sampling has proven to be an effective method. In this method, researchers use a list of selected members of a rare population, who then serve as informants about otherwise unknown members of the population. This method allows researchers to reach target populations whose numbers are small or in which some degree of trust is a prerequisite to establishing a relationship, and it is effective for studying populations that are otherwise difficult to enumerate. Of course, it introduces the chance of selection bias.

What are three challenges to using a diary method?

Requires user training, participant burden, under-reporting

List and describe the three governing principles for research ethics according to the Belmont report.

Respect for persons (the researcher should give the potential participant all the information he/she needs to make an informed decision based on the risks and benefits of participating. And if the potential participant declines, his/her refusal will be respected) Beneficence (there should be more benefit than harm for participants the vast majority of the time) Justice (those who bear the burden of research should also be the ones that benefit from research)

What is telescoping?

Respondents recall an event in the past as happening more recently than it actually did. Telescoping can be reduced by the bounded recall procedure.

How does social desirability affect response? Describe a way you could reduce the effects of social desirability when asking respondents to report their attendance of religious services.

Respondents to questions dealing with personally or socially sensitive content may answer in a socially acceptable direction, resulting in response effects. For example, people tend to underreport socially disapproved behaviors, such as drug use, and overreport socially desirable behaviors such as attending religious services. One method to reduce this response effect is the inclusion of a buffer/introductory sentence to the question that provides support for the alternative/less socially desirable response ("Not all people are able to attend religious services every week..."). The question could also be adjusted to limit the level of detail respondents are asked to give.

The systematic deviation of a response from the true value where there is a consistent direction of the response deviations over different trials is called... Reliability Response Variance Response Bias Processing error

Response Bias

What is the difference between Response Rate and Cooperation Rate? Please choose the appropriate numerator and denominator for each rate. Response Rate = __ divided by __ Cooperation Rate = __ divided by __ A. Number of eligible sample who complete questionnaire B. Number of eligible sample in your target population C. Total number of eligible sample able to be contacted D. Total number of eligible sample E. Total population

Response Rate = A/D Cooperation Rate = A/C
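
Using the letters above (A = completes, C = eligible sample able to be contacted, D = total eligible sample), the two rates are simple ratios; this sketch uses made-up counts and hypothetical function names:

```python
def response_rate(completes, total_eligible):
    """Response rate = completed questionnaires / total eligible sample (A/D)."""
    return completes / total_eligible

def cooperation_rate(completes, eligible_contacted):
    """Cooperation rate = completes / eligible sample actually contacted (A/C)."""
    return completes / eligible_contacted

# Hypothetical figures: 300 completes, 500 eligible, 400 of those contacted.
rr = response_rate(300, 500)      # 0.6
cr = cooperation_rate(300, 400)   # 0.75
```

The cooperation rate is always at least as high as the response rate, since its denominator excludes eligible people who were never reached.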

Describe methods and tools to increase response rates among survey samples

Response rates can be increased by using multiple modes of data collection, increasing the number of attempts to contact sampled persons, extending the data collection period, using sponsorship, reducing participant burden, using pre-notification and persuasion letters, and consistent follow-up and reminders. Maximizing the number of these methods used in each sample will increase the response rate. Cash incentives have been shown to be one of the most effective methods of increasing response rates, as have consistent reminders (up to 15 for some surveys) and the use of sponsorship.

What are three ways to handle fractional intervals in systematic selection?

Round the interval to the nearest integer Treat the list as circular Use the fractional interval and then round after the fractional selection numbers have been calculated
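
The circular-list approach can be sketched in Python; the function name is illustrative, and the step size here is the rounded interval:

```python
import random

def circular_systematic_sample(frame, n, seed=None):
    """Systematic selection treating the frame as circular, which copes
    with a fractional interval N/n by wrapping around the list."""
    rng = random.Random(seed)
    N = len(frame)
    interval = max(1, round(N / n))   # rounded skip interval
    start = rng.randrange(N)          # random start anywhere on the list
    return [frame[(start + k * interval) % N] for k in range(n)]
```

Because the start can fall anywhere on the list and selection wraps around, every element keeps a chance of selection even when N/n is not a whole number.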

Name and define the two types of sampling error

Sampling Variance: by chance, many different sets of frame elements could be drawn, each set with different values on the survey statistics. Sampling Bias: some members of the frame have no or a reduced chance of selection

Define sampling bias and sampling variance.

Sampling bias- when some members of the sampling frame are given no chance (or reduced chance) of selection. Reduce by giving all elements an equal chance of selection. Sampling variance- replications of the same study will generate different statistics with similar means. Reduce this by increasing sample size.
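
The sampling variance half of this can be seen in a small simulation with a made-up population: repeated samples produce different means, and the spread of those means shrinks as the sample size grows.

```python
import random
import statistics

random.seed(42)
population = [random.gauss(50, 10) for _ in range(10_000)]  # hypothetical frame

def spread_of_sample_means(n, replications=500):
    """Standard deviation of the sample mean across repeated draws of size n."""
    means = [statistics.mean(random.sample(population, n))
             for _ in range(replications)]
    return statistics.stdev(means)

small_n = spread_of_sample_means(25)
large_n = spread_of_sample_means(400)
# larger samples give a tighter spread of estimates: large_n < small_n
```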

What is selective attrition?

Selective attrition is the non-random dropout of participants from a study over time. It is a known problem in cohorts, as those in disadvantaged socio-economic groups, ethnic minorities, younger and older people, and those at greater risk of ill-health are more likely to drop out. This may limit the generalizability of findings and bias estimates of association.

What are the main methods of data collection?

Self-Administered survey (mail, computer assisted) Telephone survey Face-to-Face survey/personal interview

What are the types of needs?

Service needs: Needs health professionals believe the target population must have met to resolve a problem Service demands: Needs that those in the target population believe they must have to resolve a problem

What are some similarities and differences between snowball sampling and respondent driven sampling (RDS)? (Open-ended question)

Similarities: 1) Large samples can be obtained quickly 2) Reproducible in different study sites 3) Only works for networked populations; Differences: 1) RDS can restrict the maximum number of participants that each person can recruit 2) RDS approximates probability sampling 3) One can use weighting in RDS (weight observations inverse to inclusion probabilities) 4) RDS relies on many assumptions

What are the four approaches to evaluating validity of study measures?

Studies of patterns of association Comparison of results from alternative forms of the same question Comparing answers to survey questions with answers from other sources, such as records Asking the same question twice of the same respondent and comparing results

Validity is threatened by ___________ error while reliability is threatened by ________ error

Systematic; Random

Which of the following is NOT a stage of questionnaire development? Specify research question Draft questions Review literature Test questions on co-workers

Test questions on co-workers

When creating a questionnaire that will be administered to the general population, what reading level should questions be written at?

The 4-6th grade level

What are the two opposing views regarding incentives and equity, specifically the issue of offering refusal conversion payments to reluctant respondents?

There is the economic perspective, and the social/psychological perspective: 1. Economic perspective sees it as entirely appropriate to offer compensation to refusers but not to cooperative respondents because the refusers are probably more burdened by taking the survey. 2. Social/Psych perspective sees refusal conversion payments as a violation of equity expectations, and that it is likely to reinforce uncooperative behavior and alienate cooperative respondents.

What are some sampling advantages with focus groups?

They do not discriminate against people who cannot read or write. They can encourage participation of people afraid to be interviewed one on one. They can encourage participation from those considered "unresponsive patients".

What are the reasons for us to do needs assessment?

To create community trust and buy-in; To avoid creating bad interventions; To evaluate an intervention process; Avoid wasting resources: want to target interventions in such a way that they will do the most good.

In phone-based surveys, it is standard practice to make up to 10-15 attempts to contact potential participants at differing times of day and day of the week. True/False

True

True or False. Computer adaptive testing successively selects questions to maximize the precision of the exam based on how the individual answered prior questions.

True

True or False. Measures must be reliable to be valid.

True

True or False: An alternative method of data collection involves psychological assessments, such as standardized tests.

True

True or False: Minority and less educated respondents have the tendency to agree rather than disagree with statements as a whole or with what are perceived to be socially desirable responses to questions?

True

True or False: Record-check studies are the best way to learn how well people report certain events and the characteristics of events that are reported inaccurately

True

True or false: ACASI stands for "Audio Computer-assisted Self-interviewing"

True

How to improve recall process of respondents?

Use multiple questions to lead to a better answer Stimulate associations likely to be tied to the event

What is the difference between validity and reliability of a measurement?

Validity is the accuracy of the measurement and how much it is measuring what it is intended to measure. Reliability is whether or not the measure is able to produce the same result every time it is used.

If you are conducting the following 2 questions, which one will you put first, and why? A − Did you have a visit with your primary care provider in the past four weeks? B − Did you have a visit with your primary care provider in the past 12 months?

Visiting a primary care provider is a less salient behavior, so it is better to ask a more specific question before a more general one: ask A first, then B.

What is cognitive interviewing?

We define cognitive interviewing as the administration of draft survey questions while collecting additional verbal information about the survey responses, which is used to evaluate the quality of the response or to help determine whether the question is generating the information that its author intends.

Imagine you are conducting a short survey of high SES individuals that requires very low costs and a very short length of data collection with wide geographic distribution. The skip patterns will be complex, and the survey may require respondents to look back at personal records. What mode should be used? Face-to-face Telephone Mail Web

Web

Please list two types of post-survey adjustments.

Weighting: weighting up underrepresented respondents can improve survey estimates Imputation: replacing missing data with estimated responses
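
A minimal sketch of the weighting idea, with made-up shares and a hypothetical function name: if a group is 50% of the population but only 25% of respondents, its members are weighted up so estimates match the population mix.

```python
def poststrat_weights(pop_share, sample_share):
    """Weight for each group = population share / sample share."""
    return {g: pop_share[g] / sample_share[g] for g in pop_share}

# Hypothetical: women are half the population but a quarter of respondents.
weights = poststrat_weights(
    pop_share={"women": 0.5, "men": 0.5},
    sample_share={"women": 0.25, "men": 0.75},
)
# weights["women"] == 2.0; weights["men"] is about 0.67
```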

What are some factors that can interfere with respondent memory?

a. Passage of time b. Intervening events c. How meaningful an event is d. Uniformity of events

Which of the following is not a method for evaluating survey questions? focus groups pilot studies cognitive interviews field pretests

b.) Pilot studies are used as preliminary studies to see if a study is even worth doing and is usually not used to evaluate and test survey questions. This component will come later if the pilot study has shown that the researcher has a worthwhile question of interest.

Which of these is NOT an example of a potential sampling advantage of focus groups?

Does not discriminate against people who cannot read and write; can encourage participation by those reluctant to be interviewed; reduces difficulties in communication by allowing people with different disabilities to contribute in the same conversation; can encourage participation by those who feel they have nothing to say

Is it better to put sensitive questions at the beginning or the end of a survey? Explain your answer.

It's always best to put sensitive questions at the end of a questionnaire, or at least at the end of the category of questions where they are relevant. We do this to avoid making the participant think we're categorizing them based on their answers to these sensitive questions. It also allows us to gather as much information as possible in the event that a sensitive question causes them to stop completing the survey or end the interview.

What are 2 of the 4 approaches used to evaluate the validity of survey measures?

studies of patterns of association comparison of results from alternative forms of the same question (when asked to different people - split ballot studies) comparing the answers to survey questions with information derived from other sources such as records asking the same question twice and comparing results, or asking the same question of more than one person and comparing the results. (Fowler lists 4, but another possible response listed in Groves is Comparison between groups whose answers should be different if the answers are measuring the intended construct)

What are the three main sources of bias in a survey? Give an example of each source.

1) The way a question is designed (ex. problems with wording); 2) the way the questionnaire as a whole is designed (ex. formatting problems, or the questionnaire is too long); 3) how the questionnaire is administered (ex. the interviewer is not objective, or the respondent's recall is inaccurate)

True or False: The choice of data collection method has an impact on the sampling frame and response rates.

True

For a measure to be accurate, it must be both _________and _________.

valid and reliable

What are two instances when informed consent is not required?

when the only form linking respondents to the research project is the IRB form and researchers are concerned about confidentiality breach when research presents no more than minimal risk of harm and involves no procedures for which written consent is normally required Or can be done as true or false: FALSE: Informed consent is required when the only form linking respondents to the research project is the IRB consent document and researchers are concerned about a confidentiality breach. TRUE: Informed consent is NOT required when research presents no more than minimal risk of harm and involves no procedures for which consent is normally required.

What is an advantage and disadvantage of using systematic sampling?

Advantage: easy to implement as only one random number is needed Disadvantage: any hidden pattern or periodic structure in the data will introduce bias

Which of the following are included in the stages of questionnaire development? Specify research question Develop research design Develop questionnaire outline Review literature Review previous questions (use focus groups and expert panels) Pilot study Draft questions Test questions Review and revise All are included in the stages of questionnaire development

All are included in the stages of questionnaire development

What is a purpose of conducting a survey? Prediction Needs assessment Explanation/test a causal model All of the above

All of the above

In a probability sample, what is the chance of selection for every record?

Answer: Every record has a nonzero chance of selection into the sample.

What is coder variance?

Answer: It is a component of the overall variance of a survey statistic arising from different patterns of use of code structures by different coders.

What is a patient-reported outcome (PRO)?

Any report of the status of a patient's health condition that comes directly from the patient, without interpretation of the patient's response by a clinician or anyone else.

Short answer. Name two principles of good question design

Ask people about first-hand experiences using questions they can answer; ask one question at a time. (Other answers include: word the question so every respondent is answering the same question; the wording must be complete, and any script used should prepare respondents to fully answer the question; communicate what kind of answer is adequate; make it as easy as possible to read, follow instructions, and record answers; orient respondents to tasks in a consistent way; beware of asking for second-hand information.)

What types of questions are most sensitive to variation in question wording? Attitudes Beliefs Behaviors Attributes

Attitudes

Which of the following is the correct definition of Response Rate (RR)? A. RR= # of eligible samples/original sample size B. RR= # of eligible sample who complete questionnaire/total # of eligible sample C. RR= # of eligible sample/ # of ineligible sample D. None of the above

B.

List two benefits and two assumptions of respondent-driven sampling.

Benefits: a large sample can be obtained quickly; reproducible in different study sites; approximates a probability sample. Assumptions: relationships are reciprocated; everyone is connected within the network; there is a high level of mixing within the population; people can accurately report their degree; the population is large relative to the sample; the chain of recruitment is long enough to attain "equilibrium".

Fill in the blank: _________ is when the researcher reports data or results that have been made up. ___________ is manipulating research materials, equipment, or processes, or changing or omitting results such that the research is not accurately represented in the research record. Hint: Both answers start with an F

Blank 1: Fabrication Blank 2: Falsification

Which of the following statements illustrate some differences between proportionate and disproportionate sampling? A. The size of group sample is different for each group in proportionate sampling. B. One needs to apply weights to standard statistical formulas for mean and variance estimation in disproportionate sampling. C. A and B D. None of the above

C

All of the following characteristics are advantages of Simple Random Sampling EXCEPT: A. It's easy to understand B. Follows standard statistical formulas C. Captures all important subpopulations D. Is self-weighting E. None of the above

C, important subpopulations may be missed in SRS

Which of the following is not protected under 45CFR46? Children Prisoners Mentally ill Pregnant women (including fetuses and neonatal)

C. Mentally ill are not included

As an incentive for participation, why might a $2 bill be better than a $5 check?

Cash is a better incentive than a check because of the inconvenience of cashing a check plus the immediate gratification of cash; also, the novelty of a $2 bill makes it worth more to respondents.

Name the 4 cognitive steps in answering questions

Comprehension, retrieval from memory, judgment and estimation, reporting an answer

Please briefly explain an advantage of using computer adaptive testing?

Computer adaptive testing successively selects questions to maximize the precision of the exam based on how the individual answered prior questions. It doesn't need to ask all the questions to get a score.

What is the difference between consent and assent?

Consent is when parents give permission for their child to participate in research, which protects the child from assuming unreasonable risks Assent demonstrates respect for the child and his developing autonomy

In what situations are proxy respondents not advisable?

Considerations include how questions are phrased and the degree to which the proxy and participant talk to one another about the topic being asked. It is not advisable to have a proxy answer questions on attitudes, knowledge, or perceptions.

This type of validity is the extent to which a measure represents all relevant dimensions. Face validity Criterion validity Content validity Construct validity

Content Validity

What are the three standards that all survey questions should meet?

Content standards: are the questions asking the right things? Cognitive standards: do respondents consistently understand the questions, and is all of the information required to answer them available? Usability standards: can the questionnaire be completed easily and as intended?

Define: Coverage Bias and Coverage Error

Coverage Bias: when a proportion of the target population is not covered by the sampling frame and there is a difference between the covered and non-covered populations. Coverage Error: exists before the sample is drawn; not caused by the action of the survey.

Please list three nonstatistical notions of survey quality.

Credibility Relevance Timeliness

An informed consent should include all of the following except: Study purpose Risks Benefits Confidentiality Data coding

Data coding

What is the difference between the de facto residence rule and the de jure residence rule?

De facto residence rule: people who slept in the housing unit the previous night are included. De jure residence rule: people who usually live in the housing unit are included.

What are the two uses of statistics?

Descriptive uses and analytic uses.

In terms of data collection, is it better to do a census or a sample? Why?

Despite the cost and time requirements, a census will have better data quality and is more representative.

What are three reasons for non-participation in population-based cohort studies?

Any three of the following: distrust of researchers; concerns about research design; uncertainty about the outcomes; discordance between lay beliefs and medical practice; demands of the trial.

All of the following are reasons for conducting a needs assessment except: A. Create community trust B. Avoid creating bad interventions C. Evaluate an intervention process D. Avoid wasting resources E. Reduce Measurement Bias

E. Reduce Measurement Bias

What is an ecological momentary assessment and what are three primary aims?

EMA is the repeated sampling of current behaviors and experiences in real time in natural environments. Aims: (1) minimize recall bias, (2) maximize ecological validity, (3) allow the study of microprocesses that influence behavior in real-world contexts.

What are practical issues regarding stratified sampling method?

Each stratum needs to be clearly defined and mutually exclusive. Calculating the appropriate weights requires the population proportion of each stratum. It also has to be possible to draw samples from each stratum.
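
The weighting point above can be illustrated with a small Python sketch (the strata, the counts, and the proportional weights N_h/n_h are made-up assumptions, not from the source):

```python
# Minimal sketch of stratum weights: each sampled unit in stratum h gets
# weight N_h / n_h (population count over sample count). All numbers below
# are made-up illustration values.
population = {"urban": 8000, "rural": 2000}   # N_h: population size per stratum
sample     = {"urban": 80,   "rural": 40}     # n_h: sample size per stratum

weights = {h: population[h] / sample[h] for h in population}

# Weighting the sample back up recovers each stratum's population total.
for h in population:
    assert sample[h] * weights[h] == population[h]

print(weights)  # {'urban': 100.0, 'rural': 50.0}
```

This is why each stratum must be clearly defined: a unit that could fall into two strata would make the weight calculation ambiguous.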

What is an error?

Error refers to deviations of what is obtained in the survey process from what is desired.

What is the difference between event-based and time-based sampling?

Event-based: method of data collection whereby a recording is made each time a predefined event occurs Time-based: method of data collection whereby a recording is solicited based on a time schedule (based on time intervals)

What is systematic sampling? Briefly discuss its pros and cons

Every element has the same probability of selection, but not every combination of elements can be selected. Pros: easy to implement; only one random number is needed. Cons: any hidden pattern or periodic structure in the frame will introduce serious bias.
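
The "only one random number" mechanic can be sketched in Python (an illustrative toy, not from the source; the `systematic_sample` helper and the frame of 100 integers are made up):

```python
import random

def systematic_sample(frame, n, seed=None):
    """1-in-k systematic sample: one random start, then every k-th element."""
    k = len(frame) // n                        # sampling interval
    start = random.Random(seed).randrange(k)   # the single random number needed
    return frame[start::k][:n]

# Toy frame of 100 elements; every draw is an evenly spaced "comb".
print(systematic_sample(list(range(100)), 10, seed=1))
```

If the frame had a hidden periodic structure with period equal to k, every draw would hit the same phase of that pattern, which is exactly the bias the card warns about.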

What is the difference between event-based sampling and time-based sampling? Provide an example of each.

Event-based sampling: a method of data collection whereby a recording is made each time a predefined event occurs. For example, collecting data every time someone has a panic attack. Time-based sampling: a method of data collection whereby a recording is solicited based on a time schedule, often based on random time intervals. For example, an ambulatory BP monitor taking your BP every 30 minutes.

Why is it important to consider wording in your survey questions design? Demonstrate your understanding of a parallel question form by providing your own example.

Example: How do you feel about individuals who are on welfare? (negative connotation) How do you feel about low-income individuals? (more neutral) Example: Should children of illegal aliens be eligible to receive financial aid for higher education? (negative connotation) Should children of undocumented immigrants be eligible to receive financial aid for higher education? (more neutral)

Why should one think twice before asking a survey question that relies on autobiographical memory?

Experiences are more likely to be remembered if they were emotionally salient, and the retrieval of the memory depends on the person's current mood (i.e., an event is remembered as worse or not as bad depending on whether the person is currently happy or sad). This can lead to biased results.

List and briefly define three methods that researchers use to evaluate survey questions.

Any three of the following:
- Expert review panel: a small group of specialists brought together to provide advice about a questionnaire
- Focus group discussions: a group of people, usually with similar characteristics, assembled for a guided discussion of a topic or issue
- Cognitive interviews: the administration of draft survey questions while collecting additional verbal information about the survey responses, used to evaluate the quality of the response or to help determine whether the question is generating the information its author intends
- Field pretests: a practice run of the study protocol

True or False: Validity corresponds to systematic deviations on summary statistics (systematic deviation across all trials and persons between response and true value)

FALSE, this is Bias, not validity

A pharmaceutical company has come out with a new ADHD drug. They test it on 12-17 year olds who have ADHD. Their results are no better than what is already on the market so the company makes up the results. What form of misconduct is this? Plagiarism Falsification Fabrication

Fabrication

What is the difference between fabrication and falsification?

Fabrication is making up data or results; in the interviewing context this includes: (1) fabricating all or part of an interview; (2) misreporting codes and data; (3) miscoding to avoid follow-up questions; (4) interviewing a nonsampled person to reduce the effort needed to complete the interview. Falsification is manipulating research materials, equipment, or processes, or changing or omitting results such that the research is not accurately represented in the research record.

Which of the following survey methods has the highest response rate and lowest response bias? Mailed Survey Web Survey Telephone Survey Face to Face Interviews

Face-to-face interviews. (Ranked from highest to lowest: face-to-face, telephone, mailed, web.)

True or False: In systematic sampling, every combination of elements has the same probability of selection.

False

True or False: Open-ended questions will only elicit relevant information.

False

True or False? It is best practice to put demographic questions at the beginning of a survey.

False

True or false - Patient reported outcomes do not affect health care compensation

False

Please determine whether the statement is true or false. Reliability, which is a measure of consistency in producing the same results every time, is threatened by systematic error.

False. (Reliability is threatened by random/nonsystematic error).

The variance of a systematic sample is always lower than a simple random sample. True or false?

False. The variance of a systematic sample depends on the ordering of the sampling frame; periodicity in the frame can make it higher than that of a simple random sample.

True or False. The likelihood of over-reporting is increased with longer recall periods and less salient events.

False. The likelihood of over-reporting is increased with shorter recall periods and more salient events.

True or False: Response bias occurs when there is a difference between the target population covered in the survey and the target population that is not covered by the survey.

False. This describes coverage bias. Response bias occurs when there is a consistent direction of the response deviations over trials: systematic under-reporting or over-reporting.

True or False: The social psychological perspective on incentives states that refusal is an indication that the survey is perceived as more burdensome and has less utility for the refusers. Therefore, it is appropriate to offer compensation to refusers but not cooperative respondents, whose cooperation is seen as evidence of the utility of the survey to them.

False. This is the definition of the economic perspective on incentives. The social psychological perspective states that reluctance to participate is not an ipso facto indication that the survey is more burdensome, and offering refusal conversion payments to reluctant respondents may be seen as inequitable.

True or False: Health professionals (in general) are more likely to respond to emailed surveys than other types of surveys (phone, paper-based).

False. Physicians are less likely to respond because they can easily ignore or miss the email; confidentiality concerns also play a role.

What is field coding and what is one benefit and burden of using this technique?

Field coding is when the respondent answers an open-ended question but the interviewer codes the response into a numeric category. Pro: the respondent gets to describe the situation in their own words. Con: the interviewer bears the burden of interpreting the response and the categories; there is evidence that this has negative effects on interviewer behavior.

The Current Employment Statistics program was interested in measuring the total number of jobs existing in the US during a specific month. The program asked individual employers to report how many persons were on their payroll in the week of the 12th of that month. Name at least one potential error with this design.

First, an error can arise because job counts are not measured in other weeks in the month. Second, employer records may be incomplete or out of date, resulting in error.

The following question was designed to measure frequency of alcohol consumption: On days when you drink alcohol, how many drinks do you usually have—would you say one, two or three, or more? Describe two possible revisions to the question design that would improve the accuracy of responses.

First, respondents may be unclear about what counts as 'a drink'; revising the question to include a definition of 'a drink' would resolve most of this ambiguity. Second, the response options lack range, and given the sensitivity of reporting drinking behaviors, respondents may be influenced by where the boundaries of the response categories are drawn. The investigator could either add response options (one, two, three, four, five, six, seven, or more) or change the response format altogether; if the investigator is interested in absolute frequencies, open questions, as opposed to closed questions, will most likely obtain higher estimates.

What are the three main kinds of question evaluation activities?

- Focus group discussions: presenting new products and ideas to small groups, then having a discussion about what people like and do not like about them
- Intensive individual interviews: also known as think-aloud interviews
- Field pretesting: replicating, to a reasonable extent, the procedures to be used in a proposed survey

What is the primary purpose behind focus groups?

Focus groups help people explore and clarify their views in ways that would be less easily accessible in a one-on-one interview. This helps researchers tap into the many different forms of communication that people use in day-to-day interaction.

Select the most appropriate sampling approach for the following studies: 1. Screening 2. Respondent-driven sampling 3. Targeted/venue-based sampling 4. Snowball sampling. I. Risk of HIV among injection drug users in NYC II. Condom use among female sex workers working in brothels III. Levels of stress among smoking homemakers living in Providence, RI IV. Postpartum depression among pregnant women in disaster situations

I. Risk of HIV among injection drug users in NYC (Answer: Respondent Driven Sampling) II. Condom use among female sex workers working in brothels (Answer: Targeted/venue-based sampling) III. Levels of stress among smoking homemakers living in Providence, RI (Answer: Screening) IV. Postpartum depression among pregnant women in disaster situations (Answer: snowball sampling)

You are Ariel, the Little Mermaid, and you have been asked by Prince Eric to do a global survey of all individuals on the planet. You are able to sample the human population, and the mermaid population who are not upset with you (now that you've been turned into a human). You are missing a sizeable subset of the global population. Scuttle (the seagull) suggests that you use mean imputation. What are some problems with this approach? (Hint: What does imputation do to your sample variance?)

If you impute (for example Mean imputation), your value is derived from whatever data you already have. If you have somehow missed sampling true outliers in the population (or an entire sub-population that is hiding out somewhere), your newly constructed "missing data" set will not reflect that missing population. You are narrowing your sample variance because while you are maintaining the desired number of observations, they are more like each other.
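
Scuttle's suggestion can be checked numerically: mean imputation preserves the mean but shrinks the sample variance, as this illustrative Python sketch (with made-up values) shows:

```python
# Made-up observed values plus two missing cases filled with the observed mean.
observed = [2.0, 4.0, 6.0, 8.0]
mean = sum(observed) / len(observed)      # 5.0
imputed = observed + [mean, mean]         # mean imputation for the two gaps

def variance(xs):
    """Sample variance with n - 1 in the denominator."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

print(variance(observed))   # 6.666...: spread of the real data
print(variance(imputed))    # 4.0: imputed points sit at the mean and add no spread
```

The imputed points contribute nothing to the sum of squared deviations while inflating the denominator, so the more values you mean-impute, the more the variance is understated.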

What is the difference between a sample survey and a census? What are the benefits and drawbacks to each?

In a census, every individual in the population of interest is surveyed. In contrast, in a sample survey only a subset of the population (usually selected with some degree of randomization) is surveyed. The benefit of a census is that it is completely representative. However, if some individuals do not answer census questions, results will be biased. By ensuring that the maximum number of individuals in a representative sample are interviewed, this bias can be avoided and far fewer individuals need to be interviewed.

How does stratification impact our sampling estimates? Increases accuracy Decreases precision Decreases accuracy Increases precision

Increases precision

What are the components of assent as it applies to pediatric research?

Information needs to be presented in a way that a child of that age can understand what's involved, the child needs to express a willingness to participate, and there needs to be a determination about what the child can understand.

If asking sensitive questions about behavior, why might using loaded language be a good thing?

Invites a person to give the socially undesirable answer as opposed to giving the more acceptable answer

How can incentives for refusal conversion affect the generalizability of the survey results?

It can increase generalizability if the goal is to improve the response rate, especially in general populations. For health care workers or other time-limited people, it may not improve generalizability.

What is respondent-driven sampling?

It is a chain-referral sampling technique that uses statistical adjustments for network size to produce generalizable samples. Recruitment works like a pyramid: "seeds" are selected and trained to recruit a set quota of their peers, who are then trained to recruit from their own social networks. Both seeds and recruits receive incentives. In addition to the recruitment process, RDS involves a complex analytical component that is crucial for generating representative estimates and confidence intervals.

What are two advantages of diaries or other ecological momentary assessments?

It reduces retrospection bias and can provide additional or supplementary information to data obtained by other methods. Other Advantages: Examine events/ experiences in natural context and information can be complementary

Please provide the definition of the following terms: - Item missing data - Response Bias - Sampling frame bias

Item missing data - The absence of information on particular questions. Response bias - One subgroup of population is more or less likely to answer than others. Sampling frame bias - Sampling frame does not include all members of the population.

List some characteristics of an effective focus group discussion.

Making people feel at ease, fostering communication, giving all people a chance to speak, having a good leader with good interviewing skills, 5-8 people

What kinds of questions produce the best recall? (Hint: think of an event that you remember especially well!)

May answer a variant of these: (1) the more recent the event, the better the recall; (2) greater impact or current salience of the event; (3) consistency of the event.

What is the difference between measurement errors/errors of observation and errors of nonobservation?

Measurement errors/errors of observation pertain to deviations between the answers given to a survey question and the underlying attribute being measured; that is, the answers to the questions are not good measures of the intended constructs. Errors of nonobservation pertain to the deviation of a statistic estimated on a sample from that on the full population; an example is when the characteristics of the respondents don't match those of the population from which they are drawn. The first is related to what you ask and how people answer the questions; the second is related to who you ask and how they represent the larger population.

Describe some threats to reliability

- Mechanical issues
- Lack of clarity
- Changes in personal factors of observers
- Variation in administration (asking different questions than intended)
- Situational factors (who's in the room)
- Reactive measures (how people react to questions; e.g. if they are having a bad day, they may respond differently than if they were having a good day)
- Use of tape recorders/video cameras

Describe methods to inform the development of surveys.

Several methods may be needed before a good survey can be developed, including focus groups, formative research, biological samples, existing record sources (e.g. EHR), and participant/non-participant observation.

A researcher is attempting to decrease nonresponse by offering incentives to survey respondents. Of the following incentives, which is considered the most effective method for decreasing nonresponse? Gift cards Checks Money They are all equally effective.

Money

Which of the following is not an element of informed consent? Study purpose Procedures Risks Benefits Must remain in study until study completion Alternative treatment Voluntariness Confidentiality

Must remain in study until study completion

Can respondent-driven sampling work in all contexts?

No, it can only work when the population is socially networked and when members of the networks are willing to recruit from peers.

You are validating a survey on discrimination by first addressing construct validity (measuring the same thing with multiple measures). If two questions designed to measure discrimination do not correlate highly, do we know which question is a poor measure? Why or why not?

No. Either or both could be poor questions. If there were several measures of closely related items, we might be able to determine which was more likely to be the more valid measure.

Describe three levels of measurement?

Nominal: data in categories can only be counted with regard to frequency of occurrence, no ordering or valuation is implied (ex. favorite color) Ordinal: rank ordering of categories in terms of the extent to which they possess the characteristic of the variable, underlying continuum along which respondents can be ranked; no assumption about precise distances between the points along a continuum (ex. Birth order) Interval: labels, orders, and uses constant units of measurement to indicate exact value of each category of response (weight, height)

What is one limitation of using a list of landline telephone numbers as a sampling frame?

- Not everyone has a landline telephone (e.g. more young adults have cell phones than landlines)
- Some individuals might have more than one landline
- Some individuals are more likely to pick up a landline telephone and take a survey than others (e.g. elderly women vs. young adults)

What are the benefits of using authenticated surveys?

Only participants that are in the list can take the survey. Can track survey progress and send reminder e-mails to participants.

What are limitations of open- and closed-ended questions?

Open: (1) will elicit certain amounts of irrelevant and repetitious information,(2) requires greater degree of communication skills by respondent, (3) may take more of respondent's time, (4) interpretation of answers is subjective (statistical analysis requires more effort) Closed: (1) Respondent may select fixed responses randomly rather than in thoughtful fashion,(2) require respondent to choose "closest representation" of actual response, (3) subtle distinctions among respondents cannot be detected, (4) may lead to inadvertent errors

What is the purpose of assent and how is it different from parental permission?

Parental permission protects the child from assuming unreasonable risks. Assent demonstrates respect for the child and his developing autonomy. In order to give meaningful assent, the child must understand that procedures will be performed, voluntarily choose to undergo the procedures, and communicate this choice.

Why is it often necessary to have study participants use diaries for ecological studies?

Participants have a limited ability to recall events (recall bias). Participants also give more weight to the most recent experience and give more weight to painful events/things that stick out in their minds.

What is PROMIS and what are the five domains?

Patient-Reported Outcomes Measurement Information System. The five domains are physical function, fatigue, pain, emotional distress, and social health.

What are the 3 types of scientific misconduct overseen by the Office of Research Integrity in the Dept of Health and Human Services?

Plagiarism: Theft of intellectual property, unattributed copying of work. Includes unauthorized use of privileged communication, but DOES NOT include authorship disputes. Falsification: Manipulating or omitting results or processes Interviewer falsification: Intentional and unreported departure from instructions - can be deliberate misreporting, miscoding, or interviewing a non-sampled person for convenience Fabrication: Making up data in proposing, performing, reviewing, or reporting

When questions related to satisfaction are being asked, what would the response tend to be? (Choi et al) Negative Response Neutral Response Positive Response No Response

Positive Response

Define "bias"

"Deviation of results or inferences from the truth, or processes leading to such a deviation. Questionnaire bias can result from unanticipated communication barriers between the investigator and respondents that yield inaccurate results."

What are the benefits of diary methods?

- Permit the examination of reported events and experiences in their natural, spontaneous context, providing information complementary to that obtainable by more traditional designs - Reduction in the likelihood of retrospection, achieved by minimizing the amount of time elapsed between an experience and the account of this experience - Studying human phenomena: personality processes, marital and family interactions, physical symptoms and mental health. Can show how much people vary over time in variables of interest; ability to characterize temporal dynamics, such as diurnal cycles, weekday versus weekend effects, seasonal variation, or the effect of time to, or since, an event - Aggregate experiences over time, temporal pattern of experiences, factors affecting changes in these experiences.

Discuss the definition and purpose of survey research. (answers will vary but should include some of the following)

- systematically collecting information by asking questions
- to generate statistics on a certain group or population of interest, with the individuals answering the questions representing that population
- for purposes of needs assessment, description, evaluation, prediction, theory development or testing
- have a clearly defined research purpose or objective on which to base the survey

What is the value of focus groups?

- can help people explore and clarify views in ways that would be less easily accessible in a one-to-one interview
- taps into the many different forms of communication that people use in day-to-day interaction
- can highlight subcultural values or group norms
- group work can facilitate the discussion of taboo topics because the less inhibited members of the group break the ice for shyer participants
- participants can become an active part of the process of analysis
- group discussions may spark more angry comments, useful when the purpose is to improve services and particularly useful in disempowered populations that may have trouble expressing such feelings

What are some reasons for doing needs assessments? To create community trust and buy-in To make sure problem is important to be assessed To avoid doing an evaluation on intervention process To account for resource allocation so no resources are wasted A and B only A, B, C only A, B, D only

...

Which of these options is NOT a cognitive process involved in respondent response models? Comprehension Judgement Retrieval Reporting None of the above

...

Describe some data editing you might do if you discovered that you had missing data. (Hint: consider imputation options)

1. Mean value imputation: take the mean of the sample and use that for your missing data. 2. Regression imputation: create a regression model for the overall sample and apply it to the missing data (note: all values used as predictors must themselves be present in the imputation). 3. Hot-deck imputation: like regression imputation, but the predicted residual is "borrowed" from another case in the data set; the missing value is imputed by the most recent reported value in the sort sequence. 4. Multiple imputation: create multiple imputed datasets and combine them; this allows variation in the estimates across datasets and allows estimation of the overall variation, including sampling and imputation variance.
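
Option 3 (hot-deck) can be sketched as a sequential pass over the sorted cases. This is a minimal illustration, assuming the first case is reported; real hot-deck implementations first sort by auxiliary variables and handle leading gaps (e.g. with a cold-deck donor):

```python
def sequential_hot_deck(values):
    """Impute each missing value (None) with the most recent reported value
    in the sort sequence. Assumes the first value is present."""
    out, last = [], None
    for v in values:
        if v is None:
            v = last                  # borrow from the preceding donor case
        out.append(v)
        last = v
    return out

print(sequential_hot_deck([3, None, 7, None, None, 2]))  # [3, 3, 7, 7, 7, 2]
```

Because donors are real cases, hot-deck preserves plausible values and some natural variability, unlike mean imputation.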

What are some kinds of checks you can do while editing, to make sure that your data is ready for analysis? Choose 2 of your favorites and provide examples.

1. Range edit: the recorded value should lie in a specific range. (Example: a participant answers that her age is 173 years; the survey designer can create a range restriction from 1 month to 125 years.) 2. Ratio edit: the recorded value should have the desired comparators. (Example: I have room for 12 school children on my field trip bus, so the numerator and denominator of that ratio should add up to 12.) 3. Consistency edit: the recorded value should make sense with other responses. (Example: a participant answers NO to "Have you ever been sexually active?" but YES to "Have you ever had a sexually transmitted disease?"; the survey designer can create a pop-up message: "You answered X in this question, but Y in this question. These responses are inconsistent. Please provide a valid response. Thank you!") 4. Balance edit: recorded values should sum to a target whole. (Example: if recording percentages of something, the total should equal 100%.) 5. Comparison to historical data: the recorded value should (generally) not change implausibly over time. (Example: last week I lived in a house with 4 other people; this week I (likely) still live in a house with 4 other people.) 6. Checks of highest and lowest values: the recorded value should not be implausible. (Example: my age range of samples is from 0 to 173... something is wrong!)
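
Two of these checks, the range edit and the consistency edit, can be sketched as simple validation functions (illustrative Python; the field names and rules are made up to mirror the card's examples):

```python
def range_edit(record, field, lo, hi):
    """Range edit: flag values outside the plausible range [lo, hi]."""
    v = record[field]
    return [] if lo <= v <= hi else [f"{field}={v} outside {lo}-{hi}"]

def consistency_edit(record):
    """Consistency edit: flag logically inconsistent answer pairs."""
    errors = []
    if record.get("ever_sexually_active") == "no" and record.get("ever_had_sti") == "yes":
        errors.append("ever_had_sti inconsistent with ever_sexually_active")
    return errors

# A made-up record that should fail both checks.
record = {"age": 173, "ever_sexually_active": "no", "ever_had_sti": "yes"}
print(range_edit(record, "age", 0, 125) + consistency_edit(record))
```

In a web survey, failing checks like these would typically trigger the pop-up message described in the consistency-edit example.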

Which of the following is a strength of close-ended questions? Less burden on respondent Can offer more privacy Good for exploratory studies A & B All of the above

A & B

Match the type of error with its source A. Sampling Frame B. Sample Size C. Wording of questions 1. Coverage Error 2. Sampling Error 3. Measurement Error

A-1, B-2, C-3

For each of the following scenarios, list one pretesting technique that would most directly address the problem at hand, and state why that technique would be useful. As you're drafting a survey on health insurance, you'd like to know what types of insurance plans respondents are aware of and what they know (or think they know) about each type of plan. You'd also like to get a sense of which issues respondents think are important and how they think about these issues and how they categorize/group them.

A focus group is an efficient method for determining what potential respondents know and what they do not know about the survey topics, and how they structure that knowledge. If, for example, respondents see HMOs as very different from other types of health service plans, this info can help researchers structure the questionnaire to promote the most accurate reporting

Which of the following are pros of web surveys? A. Multimedia capabilities B. Required fields C. Data validation D. Conditional logic E. Custom error messages F. Upfront investment G. Drop-down lists H. Security

A, B, C, D, E, and G are pros; F and H are cons.

What are the four elements of child assent

A. A developmentally appropriate understanding of the nature of the condition B. Disclosure of the nature of the proposed intervention C. Assessment of the child's understanding of the information provided and the influences that are an impact on the child's evaluation D. A solicitation of the child's expression of willingness to accept the intervention.

What are two benefits and two limitations of focus groups?

A. Benefits:
- can be used to see what potential respondents know or do not know about a topic
- can be used to find out what topics are important and what topics are not
- can be used to identify the terms that respondents use when discussing a topic and how they understand these terms
- do not discriminate against people who cannot read or write
- encourage participation from people who do not want to be interviewed on their own
- encourage participation from people who feel that they do not have anything to share
B. Limitations:
- participants are not always representative of the survey population, so you should not generalize responses
- not a good venue for evaluating the wording of specific questions or discovering how respondents arrive at their answers
- potential for results to be unreliable, hard to replicate, and subject to the judgments of those conducting the focus groups
- the articulation of group norms may silence individual voices of dissent
- can have issues of hierarchy
- loss of confidentiality because other participants are present

Please match the correct word to its definition: 1. Elements 2. Target population 3. Frame 4. Satisficing 5. Optimizing 6. Positivity bias A. group of units for which the researchers want to make inference B. respondents tend to shy away from the negative end of a scale C. the fundamental unit of a population D. respondents seek to fully understand a question to provide an accurate answer E. set of materials used to identify the individual units within the group for which researchers want to make inference F. respondents only seek to understand the question enough to provide a reasonable answer

A. group of units for which the researchers want to make inference (Answer: Target Population) B. respondents tend to shy away from the negative end of a scale (Answer: Positivity bias) C. the fundamental unit of a population (Answer: Elements) D. respondents seek to fully understand a question to provide an accurate answer (Answer: optimizing) E. set of materials used to identify the individual units within the group for which researchers want to make inference (Answer: Sampling Frame) F. respondents only seek to understand the question enough to provide a reasonable answer (Answer: satisficing)

What are some reasons to perform a needs assessment as the first step in research?

A. to determine where the resources should go B. to determine if the community is going to respond C. it is a data driven way to determine that what you as a researcher think is important actually is important to others D. to avoid creating bad interventions E. to evaluate an intervention process

Name TWO pros and TWO cons of self-administered survey.

Pros: more privacy; less non-response; lower social desirability bias (for stigmatized and sensitive questions); more flexibility, which increases people's willingness to participate (though not necessarily the response rate); lower cost. Cons: higher response error; more incomplete responses (except for computer-assisted administration); for mailed surveys, we don't know how many people received the survey versus how many completed it.

Describe the advantages and disadvantages of utilizing a web-based survey (as opposed to another mode of administration).

Advantages of web-based surveys include:
-multimedia capabilities (e.g. pictures, charts, web links)
-required fields, which reduce item non-response and ensure questions are answered in the intended order
-data validation, which saves time on post-collection data cleaning
-conditional logic, which simplifies navigation for the participant and allows more streamlined survey progression
-custom error messages based on data rules and the participant's responses
-the ability to pipe customized text into questions or response options
-drop-down lists, which provide a compact way to present a large number of response options on one line of the screen
-customizable end pages
-a progress bar option
-a built-in data dictionary
-data that is easily downloadable into statistical software for analysis
Disadvantages of web-based surveys include:
-the significant upfront time investment necessary to create a well-planned web survey
-web layout issues (the questionnaire may look different to different participants depending on their browser, screen resolution, and window size)
-scrolling issues (most users don't like having to scroll right and left)
-security concerns (respondents may be wary of web-based security, and some software packages may not conform to security standards required by IRBs)

Which of the following are approaches to assess reliability? (select all that apply) Test-retest reliability Inter-rater reliability Alternate form reliability Split-half reliability All of the above None of the above

All of the above
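As an illustration of one of these approaches, split-half reliability correlates scores on two halves of an instrument and applies the Spearman-Brown correction to estimate full-length reliability. This is a minimal sketch; the function names and the example score matrix are illustrative, not from the course materials:

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def split_half_reliability(item_scores):
    """item_scores: one row of item responses per respondent.
    Split items into odd/even halves, correlate the half-scores,
    then apply the Spearman-Brown correction for full test length."""
    odd = [sum(row[::2]) for row in item_scores]
    even = [sum(row[1::2]) for row in item_scores]
    r = pearson_r(odd, even)
    return 2 * r / (1 + r)

# Four respondents answering four Likert items (toy data):
scores = [[4, 5, 4, 5], [2, 1, 2, 2], [3, 3, 4, 3], [5, 4, 5, 5]]
print(round(split_half_reliability(scores), 2))  # 0.94
```

In practice one would use an established package rather than hand-rolled formulas; the sketch only shows where the correlation and the correction enter.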

Which of the following factors interferes with respondent memory of events? Passage of time Uniformity of events Low salience of event Intervening event All of the above

All of the above

Which of the following is a type of validity? Construct validity Content validity Criterion validity All of the above

All of the above

Why is a College setting an appropriate place for a web-based survey?

Almost all students have access to email and a computer. Also, they are younger and generally more comfortable using computers.

When does an IRB waive the requirement for a signed consent form?

An IRB may waive the requirement for a signed consent form if either:
-The only record linking the subject to the research is the consent document, and the principal risk is a breach of confidentiality
-The research presents no more than minimal risk of harm and involves no procedures for which written consent is normally required outside the research context

What is an expert panel? What are some advantages to using one?

An expert panel is a small group of specialists brought together to provide advice about a questionnaire. Some advantages to using an expert panel are: they are cheaper than recruiting participants for individual interviews, an expert has likely done research in that area and will have answers to questions the audience might have, and a good panel will get multiple opinions about a subject.

Discuss some of the problems that researchers faced with recruitment and retention for the iSay study on adolescent alcohol and drug abuse. And what were some of the methods they used to combat these issues. (Kristina Jackson's lecture)

Answers will vary greatly but should discuss some of the following. Problems: difficulty getting principal buy-in; non-native Rhode Islanders had more difficulty getting buy-in in local schools; principals finally came onboard but didn't educate teachers about the program; parental consent was required; students lost paperwork. Strategies: keeping multiple copies of paperwork to hand out; using a graphic designer to professionalize the logo and paperwork; tailoring information to age; incentives (money, shirt, pizza party); very structured contact with participants, with e-mail notification and phone-call follow-ups to late responders; obtaining many different forms of contact information and keeping it updated; allowing re-entry of students who missed months; quick payouts so students had an almost immediate return on completing the task; newsletters; birthday cards.

What are the two main ways to measure consistency?

-Ask the same questions twice of the same person
-Ask two people the same question

Describe two ways to verify that your survey results are consistent.

Ask the same participant twice; ask two different participants the same question; ask the same question to the same person in a different form.

A research team would like to evaluate the reporting of crime victimization surveys by drawing samples from police records. Households in which known victims are thought to live will be sampled, interviewers will visit households to carry out a standard crime survey, and the accuracy of reporting criminal events will be evaluated by comparing the survey reports with the results from police records. Identify potential limitations of this type of study.

Because the sample is drawn from those known to have been victims of a crime, this study design will be good at detecting underreporting (failure to report an event that actually occurred); however, there is little opportunity for measuring overreporting. Also, a record-check study based only on events reported to the police isn't necessarily representative of all such crimes: crime is often underreported (e.g. attempted burglary) or overreported (e.g. car theft). Thus, investigators must consider the extent to which the crimes in police records are actually representative of crime overall.

What are two benefits and two limitations of snowball sampling?

Benefits:
-allows access to communities where trust is a prerequisite to establishing contact
-allows for formal study of populations which are hard to enumerate through other methods
Limitations:
-the sample is not likely to be representative because it depends on who people select to recommend
-the person serving as the informant may not actually be as connected to the target population as previously thought

What are the differences between snowball sampling and respondent driven sampling (RDS)?

Both snowball sampling and RDS start with a convenience sample from the target population called "seeds". The seeds refer others into the sample. In RDS, each person can recruit only up to a set maximum number of participants. For both methods, recruitment is repeated until the desired sample size is attained.

Which of the following is not a principle of consent with regard to children? A. Developmentally appropriate based on age B. An assessment of the child's understanding of the intervention C. Limited disclosure of the intervention D. An idea of the child's willingness to accept the intervention

C. Limited disclosure of the intervention

A respondent with a complicated employment history will find it difficult to report beginning and ending dates of jobs, whereas this task will be simpler for someone who has held the same job since completing school. If survey designers are unfamiliar with the distribution of employment experiences among their target population, which type of error is most likely to occur? 1. overcoverage 2. undercoverage 3. measurement error 4. processing error 5. non-response

3. Measurement error. Over/undercoverage has to do with population elements, which aren't the issue here. Processing error refers to mistakes in data coding, and non-response is the failure to obtain complete data from all selected individuals. When respondents misunderstand a question or find it difficult to answer, such as identifying particular start/end dates, they are more likely to provide estimates or responses that are less accurate, resulting in measurement error. One could argue for non-response on the basis that a respondent who finds a question too difficult may simply skip it; but because the question here is cognitively challenging rather than explicitly difficult, the respondent is more likely to provide a less accurate answer than no answer at all.

Which of the following statements are true? I. Random sampling is a good way to reduce response bias. II. Increasing sample size tends to reduce coverage bias. III. Compared to other modes, mail-in surveys are the most vulnerable to non-response bias I only II only III only I and II only All of the above None of the above

C (III only). Random sampling provides strong protection against bias from undercoverage and voluntary response bias, but it's not effective against response bias. Increasing sample size won't reduce survey bias, since a large sample size can't correct for the methodological problems (undercoverage, non-response bias, etc.) that produce survey bias (it can, however, reduce sampling/random error). Mail-in surveys typically have lower response rates than other modes, making them the most vulnerable to non-response bias.

Which of the following is NOT a procedure for monitoring cognitive processes?
A. Going through the questions twice: once to read as is, and a second time to process
B. "Think aloud" interviews
C. Reading questions through once and having respondents read them back to you the second time
D. Asking probe or follow-up questions after each individual question

C. Reading questions through once and having respondents read them back to you the second time.

What are the processing activities after data collection?

Coding: the process of transforming word answers into numeric data
Data entry: the process of entering data into files
Editing: the process of cleaning data and removing errors
Imputation: the process of repairing item-missing data by replacing one or more estimated answers into a field that previously had no data
Weighting: the process of adjusting computations to counteract the effects of noncoverage or nonresponse
Sampling variance estimation: the process of estimating the instability of survey statistics
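Two of these steps, imputation and weighting, can be sketched in a few lines. This is a toy illustration under simplifying assumptions (mean imputation for a single item, and a simple inverse-response-rate weight); the function names are made up:

```python
from statistics import mean

def impute_mean(values):
    """Imputation: replace item-missing answers (None) with the
    mean of the observed answers for that item."""
    observed = [v for v in values if v is not None]
    fill = mean(observed)
    return [fill if v is None else v for v in values]

def nonresponse_weight(n_sampled, n_responded):
    """Weighting: scale respondents up so they represent everyone
    selected in their group, not just those who answered."""
    return n_sampled / n_responded

print(impute_mean([3, None, 5, 4]))   # [3, 4, 5, 4]
print(nonresponse_weight(100, 80))    # 1.25
```

Real survey processing uses far more careful imputation models and weighting-class adjustments; the sketch only shows the basic idea of each step.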

Describe two methods for testing survey questions and list at least one pro and one con of each.

Cognitive-based interviews, focus groups, pretests, and expert panels.

Cognitive-based interviews involve administering draft survey questions while collecting additional verbal information about survey responses, which is used to evaluate the quality of the response or to help determine whether the question is generating the information its author intends (think aloud or focused probing). A benefit of cognitive-based interviews is that they allow you to gain information about the respondent's answering process. Cons: being in a cognitive interview may alter participant responses; they may uncover problems that don't affect the validity of the data; they may not identify problems that actually exist in survey administration; and interviewers can introduce bias.

An expert panel is a small group of specialists brought together to provide advice about a questionnaire. Pros: experts are already knowledgeable about the topic of interest, most likely have experience with the types of questions you're trying to ask and/or the populations you're interested in, and are inexpensive. Cons: they may not be able to speak to the experience of particular types of respondents, and they may point out things that are inconsequential to actual respondents.

A focus group is a group of people, usually with similar characteristics, assembled for a guided discussion of a topic or issue related to the survey questions. Pros: they don't discriminate against people who can't read or write, they encourage participation from those who don't want to be interviewed on their own, they can be used to see what's common knowledge within the group, they can ascertain nuances in opinions, and they are sensitive to cultural variables. Cons: the articulation of group norms may silence individual voices of dissent, confidentiality is compromised, some people may participate only for the compensation and thus may not share genuine opinions or experiences, and findings are not generalizable.

Field pretests are "practice runs" of the study protocol, as close to the actual administration as possible. Pros: they show how the survey holds up in the field, test whether a question is easy and comfortable for an interviewer to read as written, and test whether non-paid respondents are as able and willing to answer certain questions as paid volunteers. Con: they do not offer the flexibility to probe and understand the nature of problems that interviewers and respondents encounter.

Patient reported outcomes (PROs) are increasingly recognized as valuable clinical research endpoints, and are used across a wide spectrum of diseases in clinical trials. How can PRO instruments, in general, be improved?

Cross-validation of instruments and standardization of the interpretation of outcomes would allow greater comparability of scores across studies and diseases. An electronic centralized resource can help with consistent application, interpretation, and validation of PROs between studies. The Patient-Reported Outcomes Measurement Information System (PROMIS) is an example of such a database, and evaluations of PROMIS item banks and their short forms have shown that they provide reliable and precise measurement of generic symptoms.

Which of the following ways can be used to improve physician response to surveys? A. Mail information brochures regarding your study topic B. Contact them by email C. Reprint the questionnaire on high quality paper D. Send a colorful envelope in the mail E.None of the above

D. Send a colorful envelope in the mail

Discuss the challenges to writing a good survey question for collecting factual data.

Defining objectives and specifying the kind of answers needed to meet the objectives of the question; the objective defines the kind of information that is needed from the survey.
Ensuring a common, shared understanding of the meaning of the question, including the key terms; all respondents must have the same understanding of what is to be reported.
Ensuring people are asked questions to which they know the answers. Barriers include that respondents may not have the information needed to answer the question, or may have once known it but have difficulty recalling it.
Asking questions that respondents are able to answer in the terms required by the question; interviewers must be careful not to impose an assumption of regularity upon respondents.
Asking questions respondents are willing to answer accurately, which includes reducing social desirability bias, reducing response distortion, and carefully selecting question design options.

What is wrong with the following questionnaire? List three.
Are you a...
-Nurse
-Physician Assistant
-Administrative staff
-Other:
The medical director wants to know your thoughts on the new computers we have installed. Do you agree that these computers are useful?
-Strongly agree
-Agree
-Disagree
-Strongly disagree

-Demographic questions are asked first (people with less power, e.g. assistants, may not want to express their opinions)
-The appeal to authority ("the medical director wants to know") may bias results
-The question wording is biased (it asks only "Do you agree", not "Do you agree or disagree")
-"Strongly agree" is listed first (the more socially desirable response)

Describe some of the methods that researchers can use to evaluate draft survey questions.

Expert reviews: subject-matter experts and question-design experts review a draft of the survey and comment on the wording of questions, response alternatives, order and structure of the questions, instructions to interviewers, navigational rules, etc.
Focus group discussions: a discussion among a small number of target population members, guided by a moderator, to help the researcher learn how members of the target population may understand concepts presented in the questionnaire.
Cognitive interviews: find out how people understand and answer questions.
Field pretests: small-scale rehearsals of data collection to make observations about the content, validity, and reliability of the survey.
Randomized or split-ballot experiments: offer clear evidence of the impact of methodological features on responses.

What type of survey method (mailed, web, telephone, face to face) tends to be the most costly? Explain why.

Face-to-face, one-on-one interviewing:
-Interviewer spends lots of time with one individual
-Scheduling can be difficult
-Long project time frame

What are the types of validity? Briefly Explain.

Face validity: the validity of a survey at face value (whether the measurement is logical)
Content validity: the extent to which a measure represents all relevant dimensions
Criterion validity: the extent to which the measure agrees with or predicts some criterion of the "true" value (or gold standard)
Construct validity: the extent to which relationships between measures agree with relationships predicted by theories or hypotheses

True or False: has this question successfully reduced social desirability bias? (The question uses a scale from Strongly Agree to Strongly Disagree, plus "prefer not to answer.") "In general, I believe that drunk driving increases motor accidents."

False

Health professionals' response rates are higher for web-based surveys than for other forms of data collection. True/False

False. Web-based surveys have the lowest response rates in this population.

Web surveys are always the first and best option for data collection today. True or False

False. The best mode depends on the population, the resources available, and the survey content.

Sampling frames are rarely perfect, and there are almost always problems that disrupt the ideal one-to-one mapping of frame elements to target population elements. Consider the following scenario. A telephone directory (the sampling frame) is used to sample adults living in telephone households (the target population). Identify [two] different problems that could arise using this frame and suggest potential method(s) to address each

First, a telephone listing in the directory may correspond to a household with multiple adults (clustering). There could also be two telephone listings (two different phone lines) for the same household (duplication). Both can be corrected using weighting, provided the number of eligibles in the cluster and the number of duplicate entries for a given population element are both known. *note: in general, there are four potential problems. The other two are ineligible units (one of the listings belongs to a business or other non-household) and non-coverage (a person living in a telephone household is not listed for whatever reason).
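The weighting correction described above can be sketched numerically. Assuming the number of duplicate listings and the number of eligible adults per household are known (the same assumption the answer makes), a simple design-weight adjustment looks like this; the function name is illustrative:

```python
def design_weight(base_weight, n_duplicate_listings, n_eligible_adults):
    """Adjust a sampled adult's weight for frame duplication and
    within-household clustering."""
    # Duplication: a household with k listings is k times as likely to be
    # selected, so divide the weight by k.
    # Clustering: one adult is chosen from m eligibles, so each adult's
    # selection chance is 1/m; multiply the weight by m.
    return base_weight / n_duplicate_listings * n_eligible_adults

# Household with 2 directory listings and 3 eligible adults, base weight 1.0:
print(design_weight(1.0, 2, 3))  # 1.5
```

The two corrections pull in opposite directions: duplication inflates a household's selection probability (weight down), while clustering deflates each adult's (weight up).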

When would you use open-ended questions versus close-ended questions? What are the limitations of each type - open-ended and close-ended questions?

Open-ended questions may be more beneficial for focus groups and needs assessments, and can be more effective for asking sensitive questions about behavior. Close-ended questions are commonly used for nonsensitive questions about behavior and for attitude questions. Open-ended questions are great for exploratory studies and can elicit precise pieces of information that respondents can easily recall; their limitations are that they take more time for both participants and researchers, since researchers must interpret and recode responses before analysis, and that they elicit more irrelevant and/or repetitious information. Strengths of close-ended questions include asking less of respondents, easier analysis, more privacy, and the ability to lump sensitive values into ranges. Their limitations include respondents not fitting into any of the fixed categories, data that can be skewed because respondents must choose the closest representation of their answer, and ranged categories that can make respondents feel they are at an extreme, leading them to answer based on social desirability.

Define and discuss pros and cons of two of the following types of probability samples: simple random sample, systematic sample, and stratified sample.

In simple random sampling, every element has the same probability of selection. Pros: it is the most basic selection process, easy to understand, and self-weighting. Cons: it is often difficult to carry out in practice, particularly because it is not always feasible to get the required sampling frame, and important subpopulations may be missed in the sample (e.g. rare or hidden populations).

In systematic sampling, individuals are selected using a pre-determined sampling interval; every element has the same probability of selection, but not every combination of elements can be selected. Pros: it is easy to implement, and only one random number is needed to determine which observation to begin sampling with. Con: any hidden pattern or periodic structure in the frame will introduce bias.

In stratified sampling, the population is divided into strata based on a variable of interest (e.g. gender) and a certain number of sample elements is selected from each stratum, depending on the type of stratified sampling (proportionate vs. disproportionate). Pros: it provides more control over the units in each stratum, it may increase the precision of the sampling estimates, and it ensures sufficient representation of subpopulations of interest. Cons: the strata need to be clearly defined, mutually exclusive, and exhaustive, which can sometimes be difficult; the population proportion in each stratum must be known in order to attach the appropriate weight in the statistical formula; and it has to be possible to draw samples from each stratum.
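The three selection schemes can be sketched in a few lines of Python. This is an illustrative toy, not production sampling code; the frame of 100 units and the even/odd stratum function are made up:

```python
import random

def simple_random_sample(frame, n, seed=0):
    """Every element has the same selection probability; no structure assumed."""
    rng = random.Random(seed)
    return rng.sample(frame, n)

def systematic_sample(frame, n, seed=0):
    """Pick a random start, then take every k-th element (k = len(frame) // n).
    A hidden periodic pattern in the frame ordering would bias this."""
    k = len(frame) // n
    rng = random.Random(seed)
    start = rng.randrange(k)
    return frame[start::k][:n]

def stratified_sample(frame, stratum_of, n_per_stratum, seed=0):
    """Sample separately within each stratum to guarantee subgroup representation."""
    rng = random.Random(seed)
    strata = {}
    for unit in frame:
        strata.setdefault(stratum_of(unit), []).append(unit)
    return {s: rng.sample(units, n_per_stratum) for s, units in strata.items()}

frame = list(range(100))
print(len(simple_random_sample(frame, 10)))   # 10
print(len(systematic_sample(frame, 10)))      # 10
by_parity = stratified_sample(frame, lambda x: x % 2, 5)
print(sorted(by_parity), len(by_parity[0]))   # [0, 1] 5
```

Note how systematic sampling consumes only one random number (the start), which is exactly the convenience, and the vulnerability, described above.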

Describe one of the strategies used for improving participation:

Incentive-based strategies use either monetary or non-monetary incentives to increase participation by "reimbursing" participants for their time (it is important not to advertise study participation as a way to make money). Offering every participant a smaller amount of money has been found to work better than giving them objects (like pencils) or entering them in a raffle for a chance at a bigger prize. This is not true for every population; doctors, for example, might also be interested in continuing education credits. It is important to know the desires of your target population and to adapt this tactic to their needs.

Compare and contrast your 2 favorite survey scales. What are the pros and cons of each? Possible scales: Likert scale, dichotomous scales, ranking scales, paired comparisons, semantic differential scale, self-anchoring scales

Likert scale. Pros: offers concrete scaled choices (numbers, tick marks along a line, etc.) or a continuous scaled choice (a continuous line where the participant marks her/his own response). Cons: participants can only choose among the choices presented.
Dichotomous scale. Pros: scores can be summed to learn the distribution of preferences. Cons: responses can only be yes/no.
Ranking scale. Pros: gives the respondent lots of flexibility (can write in their own ordering); if a list of choices is given, you can control the options. Cons: free writing (no list of choices given) can produce a very wide range of answers that is a nightmare to code, plus spelling errors and the same rank written twice; can be overwhelming to the participant if the choices are too numerous.
Paired comparisons. Pros: easiest for the respondent, who only needs to compare two things at once rather than an entire list (as in ranking); an overall ranking can easily be computed statistically. Cons: many paired combinations to present, which can make the survey lengthy.
Semantic differential scale. Pros: good for letting a participant respond without having the exact word to describe the degree of their answer (e.g. not entirely sympathetic nor entirely unsympathetic, but somewhere in between); they can simply mark a point on the line provided, using position rather than numbers or words. Cons: some respondents may actually prefer words.
Self-anchoring scale. Pros: gives the participant a reference point. Cons: be careful about question order, which can influence the answers chosen.

What is location sampling, and what are two limitations?

Location sampling involves visiting places where members of the study population are known to gather, with the aim of collecting background information on behaviors and increasing rapport with key individuals. Limitations: 1) those who don't use the location cannot be studied; 2) different types of people use locations at different times of the year, which must be understood in order to take it into account.

The likelihood of under-reporting is increased with:
Longer recall periods and less salient events
Longer recall periods and more salient events
Shorter recall periods and less salient events
Shorter recall periods and more salient events

Longer recall periods and less salient events

Discuss the strengths and limitations of the four major survey methods (mailed, web, telephone, face-to-face) with respect to response rate, response bias, cost, and quality of recorded responses.

Mailed surveys have one of the lower response rates (45-70%) compared to telephone and face-to-face surveys, but slightly higher than web surveys. Mailed surveys have medium to high amounts of response bias, their cost is low, and the quality of recorded responses is fair to good. Web surveys have the lowest response rates (30-70%) of the four methods and have medium to high amounts of response bias. Web surveys also have low cost and very good quality of recorded responses. Telephone surveys have the second highest response rates (60-90%) and have low to medium amounts of response bias. The cost of telephone surveys ranges from low to medium and the quality of recorded responses is very good. Face-to-face surveys have the highest response rates (65-95%), low levels of response bias, and very good quality of recorded responses, but high cost.
Response bias, high to low: Web ~ Mailed, Telephone, Face-to-face
Cost, high to low: Face-to-face, Telephone, Mailed ~ Web
Quality of recorded responses, high to low: Face-to-face ~ Telephone ~ Web (very good), Mailed (fair/good)

What is the most common approach for sampling rare and hard-to-reach populations? Identify the strengths and limitations of the approach mentioned to sampling rare and hard-to-reach populations?

Methods for sampling hard-to-reach populations include screening, time-space sampling, snowball sampling, and respondent-driven sampling.
Screening (e.g. via random digit dialing): strengths include being based on a probability sample and including individuals not on any list; limitations include low response rates, individuals who may not identify themselves, and expense.
Time-space sampling: three stages (formative, preparation, and sampling): collect information on where and when people in the target population congregate, verify that they are actually at those places, then sample a time and location. Benefits: large, diverse samples can be attained; it is a probability sample; reproducible in different study sites. Limitations: limited to people who attend the specific locations; concerns about selection bias.
Snowball sampling: benefits include that large samples can be attained since initial respondents refer other respondents, and it is reproducible in different study sites; limitations include that it only works for networked populations and is a non-probability sample.
Respondent-driven sampling (RDS): start with a convenience sample from the target population; respondents refer others into the sample, but with a maximum number of referrals; recruitment is repeated until the sample size is attained.

What factors increase the likelihood of over-reporting and under-reporting?

Over-reporting occurs when respondents include events from outside the time period being asked about (telescoping). To reduce it, include calendar prompts and extend recall periods. Under-reporting occurs when respondents fail to include events that should have been included within the time period. To reduce it, shorten recall periods.

What is the difference between patient-centered and patient-reported outcomes?

Patient-centered outcomes are outcomes that matter to the patient, whereas patient-reported outcomes are simply outcomes reported directly by the patient; a patient-reported outcome may still fail to incorporate the patient's perspective. For example, a researcher or clinician may view the outcome of a surgery as a success when it does little to increase the quality of the patient's life.

Short answer. Name the worst times in the year to collect survey data.

Possible answers- the winter holidays (end of November to the New Year), on Super Bowl Sunday, during the summer (depending on the state).

What is the difference between sampling frame bias, selection bias, and response bias?

Sampling frame bias affects people's chance of being selected, i.e. the sampling frame doesn't include all of the target population. Selection bias is not the same thing: if the sampling frame were people at the mall on a Tuesday afternoon, selection bias would be picking only people coming out of Nordstrom. Response bias refers to one subgroup being more (or less) likely to respond than the others (e.g. a survey offered only in English systematically excludes people who don't speak English).

Name/define two sources of bias associated with sampling and data collection.

Sampling frame bias: sampling frame does not include all members of the defined population or does not include correct information about sample members (affects chance of selection) Response bias: one subgroup of population is more or less likely to cooperate than others

What are three basic data collection methods? Describe using one of the three data collection methods what sampling frame bias is and how this could be a limitation to data collection.

Self-administered surveys, telephone surveys, and face-to-face interviews are the three data collection methods. Sampling frame bias occurs when the sampling frame does not include all members of the defined population. For example, in telephone surveys the sample comes from those with access to a telephone. Say you are conducting a needs assessment on inadequate sleep among airline employees: airline employees most likely carry cell phones, but your telephone survey samples from home landlines. This would be a limitation to collecting the necessary information from this population.

Please describe the meaning of the term "Survey Methodology."

Since "ology" means "the study of", Survey Methodology means the study of survey methods. The field of survey methodology studies the sources of error and strives to increase the accuracy of data collection techniques. The field seeks to identify principles about design, collection, processing, and analysis of surveys that are linked to the cost and quality of survey estimates.

List stages of questionnaire development.

Specify research question
Develop research design
Develop questionnaire outline
Review of the literature
Review of previous questions (with expert panels and/or focus groups)

Match the following stages of questionnaire development in the correct order:
Specify research question
Develop research design
Develop questionnaire outline
Review of the literature
Review of previous questions
Pilot study
Draft questions
Test questions
Review and revise

Specify research question
Develop research design
Develop questionnaire outline
Review of the literature
Review of previous questions
Pilot study
Draft questions
Test questions
Review and revise

In the past month, how much did you use public transportation?
A lot
A little
Moderately
What is wrong with the design of this question?

Starting-time bias is present: as written, each respondent will be referring to a different month depending on when they take the survey. The question should anchor the reference period, e.g., "Since January 1, how many times did you use public transportation?" Additionally, the response options are not in a logical order ("Moderately" belongs between "A little" and "A lot"). Lastly, the question uses several vague words: what is "a lot" to one person may not be "a lot" to another, and "public transportation" can mean different things depending on what a given city offers.

What are the strengths and limitations of open-ended questions?

Strengths:
- Good for exploratory studies
- Commonly used to elicit a precise piece of information that respondents can recall easily when a large number of possible answers exist
- Useful for eliciting the frequency of sensitive behaviors
Limitations:
- Will elicit a certain amount of irrelevant and repetitious information
- Requires a greater degree of communication skill from the respondent
- May take more of the respondent's time
- Statistical analyses require interpretive, subjective, and time-consuming categorization of responses

Please describe two strengths and limitations of using closed-ended questions.

Strengths: questions are clearer for respondents and limit irrelevant answers. Limitations: respondents may select fixed responses at random, and subtle distinctions among respondents cannot be established.

What are some of the common purposes of survey research? What are some of the limitations and strengths of common study designs involving primary data collection?

Survey research collects information on a topic by asking individuals questions, with the goal of generating statistics about the group or population that those sampled represent. Surveys can be used for needs assessments, description, hypothesis/theory development or testing, explanation or testing of a causal model, prediction, and evaluation. Study designs include cross-sectional, retrospective, and prospective. When planning primary data collection, the research problem or question, the variables and their measurement, the population, the data analysis, the use of results, and the resources required all need to be considered. Limitations: cross-sectional designs capture only one snapshot in time; retrospective designs rely on past information, so measurement may be limited; and prospective designs are time-consuming and costly. Strengths: cross-sectional designs are the least expensive and time-consuming, retrospective designs use data that have already been collected, and prospective designs provide measurements over time.

Which of the following approaches to assess reliability is performed by taking measurements by the same observer for the same group of subjects, with the same instrument, under equivalent conditions, but at different points in time? Split-half reliability Alternate form reliability Inter-rater reliability Test-retest reliability

Test-retest reliability

What type of information are respondents more likely to remember? (Multiple choice, tick all that apply)

a. The most recent events are more likely to be recalled.
b. The greater the impact, the more likely it is to be recalled.
c. The slighter the salience of the event, the more likely it is to be recalled.
d. The more consistent an event is with the way the respondent thinks about things, the more likely it is to be recalled.
Answer: a, b, d

For each of the following scenarios, list one pretesting technique that would most directly address the problem at hand, and state why that technique would be useful. You've solicited help from several substantive experts who have each written what she considers to be the 'best' question on a particular topic. Each is convinced s/he is right (these are, after all, academics), but only one of the questions can be included in the final version. How could you address this situation in a way that would appease both of your specialists?

The pretest could incorporate a randomized or 'split-ballot' experiment, administering two different versions of the questionnaire (the difference being the 'best' question). While this doesn't necessarily determine which question is 'best,' this type of experiment offers the clearest evidence of the impact on responses of methodological features (such as question wording) and can generate additional info to assist with such a decision.

Explain the difference between target population and survey population?

The target population is the intended population of study. The survey population is the actual population surveyed in the given time frame.

What is the difference between the target population and the frame population?

The target population is the set of units to be studied, but the frame population is limited to the set of the target population that has a chance to be selected into the survey sample.

Describe what the "telescope" effect means when we talk about the effect of memory on recalling events. What's one way to prevent the telescope effect?

The telescoping effect means people tend to over-report events, saying they happened during the reference period even if they actually occurred before it. This usually happens with big or important events and when the reference period is short. One way to reduce telescoping is to use calendar prompts to refresh participants' memory.

When drafting and writing questions for a questionnaire, what are the three standards that the questions should meet? (Open-ended question)

The three standards that the questions should meet are the content, cognitive, and usability standards. The content standard specifies whether the questions represent what you are trying to learn and whether the questions are asking the right things. The cognitive standard assesses whether the respondents are able to understand the questions, have enough information to answer the questions, and are willing and able to answer the questions. The usability standard assesses whether the respondents and interviewers are able to complete the questionnaires easily as intended.

What are benefits of having multiple items on a screen at once?

There is the potential for less missing data and a faster completion time for participants.

What are the benefits and costs of sampling clusters instead of elements?

Cluster samples are cheaper, so they are used when sampling nonclustered elements would be too expensive. The tradeoff is that they produce a larger standard error for the sample mean than an element sample of the same size, because elements within a cluster tend to resemble one another.
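As a rough illustration of that tradeoff, the toy simulation below (hypothetical numbers, assuming clusters are internally homogeneous) compares the standard error of the mean under element sampling and cluster sampling at the same total sample size:

```python
import random
import statistics

random.seed(1)

# Hypothetical population: 100 clusters of 10 elements each.
# Elements within a cluster share a cluster effect, so clusters
# are internally homogeneous (positive intraclass correlation).
clusters = []
for _ in range(100):
    cluster_center = random.gauss(50, 10)   # between-cluster variation
    clusters.append([cluster_center + random.gauss(0, 2) for _ in range(10)])
elements = [x for c in clusters for x in c]

def srs_mean(n):
    """Simple random sample of n individual elements."""
    return statistics.mean(random.sample(elements, n))

def cluster_sample_mean(k):
    """Sample k whole clusters (k * 10 elements total)."""
    chosen = random.sample(clusters, k)
    return statistics.mean(x for c in chosen for x in c)

# Same total sample size (40 elements) under both designs.
srs_est = [srs_mean(40) for _ in range(2000)]
clu_est = [cluster_sample_mean(4) for _ in range(2000)]

print("SE under element sampling:", round(statistics.stdev(srs_est), 2))
print("SE under cluster sampling:", round(statistics.stdev(clu_est), 2))
```

The cluster design's standard error comes out several times larger here because each extra element from an already-sampled cluster adds little new information.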

Despite elevated prevalence of diagnosed AIDS cases, HIV infection, and related risk behaviors, minority young men who have sex with men (MSM) have been virtually invisible in general population surveys and surveys that target specific population segments such as racial/ethnic groups. Describe a method an investigator could use to generate info.

Time-space sampling is a probability-based method for enrolling members of a target population at the times and places where they congregate rather than where they live. Before randomly selecting venues, times, and then individual participants, investigators must first collect information on where and when people in the target population congregate, and then verify that information. It is a promising strategy for sampling MSM in minority communities because it concentrates resources where minority MSM can be found. Collecting additional data on respondents' mobility, frequency of attendance, and those who refuse to participate allows the investigator to estimate each participant's probability of selection and to understand selection bias.

A survey is interested in measuring the total number of jobs in existence in the US during a specific month. It asks individual sample employers to report how many persons are on their payroll in the week of the 12th of that month but the employers' records are incomplete so the information given is inaccurate. This is an example of an error of observation. True or False.

True

True or False? In systematic sampling every element has the same probability of selection but not every combination can be selected.

True
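A minimal Python sketch (using a hypothetical 12-element frame) shows why: a systematic sample with interval k can produce only k distinct samples out of all possible combinations, yet every element lands in exactly one of them, so each element's inclusion probability is the same 1/k.

```python
# Hypothetical frame of N = 12 elements, sample size n = 3,
# so the sampling interval is k = N // n = 4.
frame = list(range(12))
k = 4

def systematic_sample(start):
    """Take every k-th element beginning at start, for start in [0, k)."""
    return frame[start::k]

# Only k = 4 distinct samples are possible (far fewer than the
# 220 combinations of 3 elements an SRS could produce)...
all_samples = [systematic_sample(s) for s in range(k)]
print(all_samples)

# ...yet every element appears in exactly one of those samples,
# so each element's inclusion probability is 1/k = 1/4.
counts = {e: sum(e in s for s in all_samples) for e in frame}
print(counts)
```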

True or false?: For very salient behaviors, the preferred question order is more general before more specific.

True

When evaluating a sampling frame there are several things that one must consider. Provide a 1-sentence description of what each of the following words mean in reference to sampling frame selection: Under coverage, ineligibility/foreign units, duplication, clustering

Undercoverage: the sampling frame does not include part of the target population.
Ineligibility/foreign units: the sampling frame contains elements that are not part of the target population.
Duplication: a single element of the target population is linked to several frame elements, causing overrepresentation of the duplicated element (e.g., one person has 3 telephone numbers).
Clustering: multiple elements of the target population are linked to the same single frame element (e.g., 5 people in a household share one telephone number).
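For the duplication problem, a common correction is to weight each respondent by one over their number of frame entries. A toy sketch (entirely hypothetical frame) of both the distortion and the fix:

```python
import random
from collections import Counter

random.seed(3)

# Hypothetical frame: person "A" has 3 phone numbers, everyone else has 1,
# so "A" is three times as likely to be drawn on any given selection.
frame = ["A", "A", "A", "B", "C", "D", "E", "F"]

draws = Counter(random.choice(frame) for _ in range(60_000))
print(draws["A"] / 60_000)   # roughly 3/8, vs. 1/8 for anyone else

# Weighting each drawn person by 1 / (their number of frame entries)
# removes the overrepresentation:
entries = Counter(frame)
weighted = {p: draws[p] / entries[p] for p in entries}
print(weighted)              # everyone carries roughly equal weight
```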

What is the difference between undercoverage and overcoverage?

Undercoverage: some types of people are missing from the frame (e.g., non-telephone households when trying to cover the household population via telephone interviews). Overcoverage: ineligible units are included in the frame (e.g., business telephone numbers in a telephone frame when trying to cover the household population).

What is the difference between unit nonresponse and item missing data?

Unit nonresponse is when a person selected for the sample is not successfully measured at all. It can be hard to classify a person who provides less than complete information: a decision must be made whether to exclude the case entirely or to include it and treat the unanswered questions as item missing data. Item missing data describes the absence of information on individual items for a person who was successfully measured on other items.

What is the difference between unit nonresponse and item nonresponse?

Unit nonresponse refers to the complete absence of an interview from a sampled person, whereas item nonresponse refers to the absence of answers to specific questions after the sampled person has agreed to participate in the survey.

What is a way to overcome non-compliance with paper diaries?

Using electronic diaries can enhance compliance. For example, a handheld device such as a Palm computer can provide auditory prompts and let participants answer and select questions easily via a touch screen.

Short answer. Under what circumstances would it be wise to use a self interviewing technique (using audio or computer) instead of a telephone or face to face survey?

Usually when the questions you are asking are especially sensitive or have the potential for social desirability bias, it is better to let people self-administer the survey.

What is the difference between response variance and response bias?

Variance: sampling (response) variance is the variance of the sampling distribution of an estimate. It measures the spread or variability of the sample estimate about its expected value over hypothetical repetitions of the sample. Low variance = high precision.
Bias: a systematic failure of measurement due to poor design, e.g., a nonrandom process in which some people have a higher probability of being chosen than others, so the estimate's expected value differs from the true value.
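A small simulation (hypothetical population) can make the distinction concrete: an unbiased estimator scatters around the true value (variance), while a design that systematically favors some people shifts the estimate away from the truth (bias), even though it may scatter less.

```python
import random
import statistics

random.seed(7)
population = [random.gauss(100, 15) for _ in range(10_000)]
true_mean = statistics.mean(population)

def random_sample_mean(n=25):
    """Unbiased: mean of a simple random sample (but noisy at small n)."""
    return statistics.mean(random.sample(population, n))

# Biased: sample only from the top half of the frame, mimicking a
# design where some people are far more likely to be chosen.
top_half = sorted(population)[5000:]
def biased_sample_mean(n=25):
    return statistics.mean(random.sample(top_half, n))

unbiased = [random_sample_mean() for _ in range(2000)]
biased = [biased_sample_mean() for _ in range(2000)]

# Variance = spread of the estimate around its own expected value.
# Bias = systematic gap between that expected value and the truth.
print("true mean: %.1f" % true_mean)
print("unbiased estimator: mean %.1f, sd %.1f"
      % (statistics.mean(unbiased), statistics.stdev(unbiased)))
print("biased estimator:   mean %.1f, sd %.1f"
      % (statistics.mean(biased), statistics.stdev(biased)))
```

The biased estimator here is actually the more "precise" of the two (smaller spread), which is exactly why low variance alone does not guarantee a good estimate.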

What are the differences between bias and validity?

Validity concerns the individual response to a question (how well it reflects the true value for that person); bias is a systematic deviation in summary statistics (a deviation, across all trials and persons, between responses and true values).

Define survey validity and briefly describe the four types of validity (face validity, content validity, criterion validity, construct validity).

Validity is the extent to which a data collection procedure successfully measures a variable of interest. Face validity refers to whether a measure appears logical at face value. Content validity refers to the extent to which a measure represents all relevant dimensions. Criterion validity refers to the extent to which the measure agrees with or predicts some criterion of the "true" value or gold standard. Construct validity refers to the extent to which relationships between measures agree with relationships predicted by theories or hypotheses.
Alternate question format: What type of validity refers to the extent to which relationships between measures agree with relationships predicted by theories or hypotheses?
A. Face validity
B. Content validity
C. Criterion validity
D. Construct validity
Correct answer: D, construct validity

What is the difference between validity and reliability?

Validity is when a tool is measuring what it is supposed to measure whereas reliability is when a tool consistently produces the same results every time the measure is used.

What did we learn from our reading on differential incentives?

We learned that a significant share (74%) of respondents who learned about differential incentives thought the system was unfair, but this still did not change whether they participated in future surveys.

What are some potential problems with scale ratings?

With ratings, respondents typically shy away from the negative end of the scale, producing "positivity bias". They also tend to avoid the most extreme answer categories. When scales are numbered, the numbers themselves can affect the answers. It is also important to think about the number of response categories: too few may make it difficult to discriminate between respondents with different underlying judgments, while too many may fail to distinguish reliably between adjacent categories. This becomes important when analyzing your data.

A study would like to know the number of hospital employees at a local hospital who eat fruits/ vegetables for lunch at noon. Therefore a questionnaire was given to participants who ate at the hospital cafeteria at 12pm. Can you explain how this could lead to problems of inference or error?

You are not accounting for people who do not eat at the cafeteria at noon: those who bring their lunch, eat elsewhere, or eat at another time. As a result, the answers you obtain may not represent the target population, limiting inference (a coverage error).

If you were designing a study and wanted to offer a financial incentive, would you offer each participant $1, $5, $10, or $20 for completing your questionnaire? Note: You're trying to be frugal while maximizing your response rate! Justify your answer.

You get the biggest increase in response rate by just providing $1. After that you have diminishing returns for each additional dollar. So if your study had a small budget, providing just $1 may be enough.

Short answer. Describe when to use an authenticated survey and when to use an unauthenticated survey

You would use an authenticated survey if you wanted to limit participation to a preselected group of people. It lets you know exactly who did and did not participate, prevents more than one response per person, allows you to use existing information about a participant to personalize the survey, and lets participants return to a half-finished survey. However, there is always the chance an invited participant will forward the invitation to someone else and have them complete it. You would use an unauthenticated survey if you did not have a set list of preselected participants. These surveys allow anyone to participate, so they are ideal for casting a wider net of respondents or reaching people you know little about; they can be shared on listservs, sites like Craigslist, or other public places. The drawbacks are that you cannot prevent people from responding more than once and you cannot control who responds.

When writing a question that contains categorical response options that the respondent can choose from, in what order should the responses be listed?
- least socially desirable response option should be listed first
- least socially desirable response option should be listed last
- order is important, but social desirability factors never need to be considered when ordering categorical responses
- order does not matter at all

a) the least socially desirable response option should be listed first

You are asked to design a web-survey for a specific dementia unit of a nursing home. This target population is predominantly elderly, with greater prevalence of cognitive impairment (slowed or distorted thinking), severe arthritis (joint pain), and macular degeneration (vision loss). What features would you choose to include in your web-survey, and why? THERE ARE MANY POSSIBLE CORRECT ANSWERS.

- Cognitive impairment: present only one question at a time (small tables only, if any); use single-item screens.
- Macular degeneration: large font, dark text on a light background; make visual alignment easy to follow if using a table or graphics.
- Arthritic hands: radio buttons are better than free-typing.

What are the advantages of using multiple questions to cover all aspects of what a question is to be reported, rather than trying to put all aspects into a single definition? a. It makes the questions more clear, b. It makes the reporting task easier and reasonable c. It provide the researcher the ability to produce different measures on the topic or main question d. All the above

d. All the above

