Applied Research Methods

What are 2 of the 4 approaches used to evaluate the validity of survey measures?

(1) Studies of patterns of association; (2) comparison of results from alternative forms of the same question (when asked of different people: split-ballot studies); (3) comparing the answers to survey questions with information derived from other sources, such as records; (4) asking the same question twice and comparing results, or asking the same question of more than one person and comparing the results. (Fowler lists these 4; another possible response, listed in Groves, is comparison between groups whose answers should differ if the answers are measuring the intended construct.)

Discuss the definition and purpose of survey research.

(Answers will vary but should include some of the following.) Survey research means systematically collecting information by asking questions in order to generate statistics on a group or population of interest, with the individuals answering the questions representing that population. It is used for purposes such as needs assessment, description, evaluation, prediction, and theory development or testing, and it should be based on a clearly defined research purpose or objective.

What is NOT a method of increasing physician survey response rate? (a) Using phone calls to administer the survey (b) Invite physician to participate in the survey via email (c) Send letters from a legitimate professional association (d) Send letter via certified mail

(b) Invite physician to participate in the survey via email

Match the type of error with its source: Sampling frame → Coverage error; Sample size → Sampling error; Wording of questions → Measurement error

(The pairs above line up as listed; on the real question they would be mixed up.)

What are benefits and drawbacks to focus groups?

Benefits: the group process helps people explore and clarify their views in ways that would not be easily accessible in a one-on-one interview; beneficial for open-ended questions; can encourage contributions from individuals who would otherwise not participate; do not discriminate against those who cannot read or write. Drawbacks: the development of group norms may silence individuals from speaking out; the presence of other research participants may compromise the confidentiality of the research session; rarely produce quantitative data; participants are not necessarily representative of the whole population.

What are the three main sources of bias in questionnaires?

Bias in the way the questions are designed; bias in the way the whole questionnaire is designed; and bias in how the questionnaire is administered.

What are the processing activities after data collection?

Coding: transforming word answers into numeric data. Data entry: entering data into files. Editing: cleaning data and removing errors. Imputation: repairing item-missing data by inserting one or more estimated answers into a field that previously had no data. Weighting: adjusting computations to counteract the effects of noncoverage or nonresponse. Sampling variance estimation: estimating the instability of survey statistics.

What is the difference between consent and assent?

Consent is when parents give permission for their child to participate in research, which protects the child from assuming unreasonable risks. Assent is the child's own affirmative agreement to participate, which demonstrates respect for the child and his or her developing autonomy.

What are the three standards that questions should meet?

Content: Are the questions asking the right things? Cognitive: Do respondents understand the questions? Usability: Can respondents and interviewers complete the questionnaire easily and as intended?

Define: Coverage Bias and Coverage Error

Coverage bias: when a proportion of the target population is not covered by the sampling frame and there is a difference between the covered and noncovered populations. Coverage error: exists before the sample is drawn; it is a property of the sampling frame and is not caused by any action of the survey itself.

What are practical issues regarding stratified sampling method?

Each stratum needs to be clearly defined and mutually exclusive; calculating appropriate weights requires the population proportion of each stratum; and it must be possible to draw a sample from each stratum.

What type of survey method (mailed, web, telephone, face to face) tends to be the most costly? Explain why.

Face-to-face, one-on-one interviewing: the interviewer spends a lot of time with each individual, scheduling can be difficult, and the project time frame is long.

Describe some threats to reliability

Mechanical problems; lack of clarity; changes in personal factors of observers; variation in administration (asking different questions than intended); situational factors (who is in the room); reactive measures (how people react to the questions; e.g., someone having a bad day may respond differently than on a good day); use of tape recorders/video cameras.

Define: Overcoverage, and Undercoverage

- Overcoverage: members are included in the sample that are not part of the target population - Undercoverage: portions of the target population are missing from the frame

List three protected populations under 45CFR46.

- Pregnant women (includes human fetuses and neonates) - Children - Prisoners

Name and define the two types of sampling error

Sampling variance: by chance, many different sets of frame elements could be drawn, each set yielding different values of the survey statistics. Sampling bias: some members of the frame have no chance, or a reduced chance, of selection.

What is snowballing and what is an example of a situation when snowballing could be very useful?

Snowballing is constructing lists of rare populations by using initial sets of selected members as informants for names and addresses of unknown members. This technique is most helpful for reaching secluded or hidden populations, such as illegal immigrants or people engaged in illicit drug use.

What are the elements of informed consent?

- Study purpose - Procedures - Risks/Benefits - Freedom to withdraw - Alternative treatments - Voluntariness - Confidentiality

Web surveys are always the first and best option for data collection today.

False. The best option depends on the population, the resources, and the survey content.

When evaluating a sampling frame there are several things that one must consider. Provide a 1-sentence description of what each of the following words mean in reference to sampling frame selection: Under coverage, ineligibility/foreign units, duplication, clustering

Undercoverage: when the sampling frame doesn't include part of the target population. Ineligibility or foreign units: when the sampling frame contains elements that are not part of the target population. Duplication: when a single element in the target population has several frame elements, causing overrepresentation of the duplicated element (e.g., one person has 3 telephone numbers). Clustering: when multiple elements of the target population are linked to the same single frame element (e.g., 5 people in a household share the same telephone number).

What is the difference between undercoverage and overcoverage?

Undercoverage: some types of people are missing from the frame (e.g., non-telephone households when trying to cover the household population using telephone interviews). Overcoverage: ineligible people are included in the frame (e.g., business telephone numbers in a telephone frame when trying to cover the household population).

What is the difference between unit non-response and item non-response?

Unit non-response is when a sampled person did not respond at all. Item non-response is when an individual question is not answered within a participant's survey.

What is the difference between validity and reliability?

Validity means that a measure measures what it is supposed to measure. Reliability means that the measure is consistent in producing the same results each time it is used. For a measure to be accurate, it must be both valid and reliable, although a measure can be reliable without being accurate.

Do incentives improve the quality of surveys by countering noncooperation?

There appear to be no bad effects of incentives on the quality of survey responses. Giving different incentives to different respondents may create a feeling of inequity that could, in principle, make people too annoyed to respond to later questions; in practice, people think it is unfair but still respond.

What is the value of focus groups?

Focus groups can help people explore and clarify views in ways that would be less easily accessible in a one-on-one interview; they tap into the many different forms of communication that people use in day-to-day interaction; they can highlight subcultural values or group norms; group work can facilitate the discussion of taboo topics because the less inhibited members of the group break the ice for shyer participants; participants can become an active part of the process of analysis; and group discussions may spark more angry comments, which is useful when the purpose is to improve services and particularly useful in disempowered populations that may have trouble expressing such feelings.

What are the most effective ways to improve physician response rate?

Financial incentives are effective, even small ones; token nonmonetary incentives are ineffective unless the physician happens to value the incentive; physicians are more likely to respond to postal and telephone surveys; physicians respond more to shorter surveys and are resistant to closed-ended questions; establish relevance as a means of improving response (pre-notification letters and sponsorship); endorsements by local, state, or national organizations; and many contacts and lengthy field periods.

What are some reasons to perform a needs assessment as the first step in research?

To determine where the resources should go; to determine whether the community is going to respond; because it is a data-driven way to determine that what you as a researcher think is important actually is important to others; to avoid creating bad interventions; and to evaluate an intervention process.

What is encoding?

"Encoding" is the process of forming memories from experiences. Survey designers have little impact on these mental processes; however, they can write better questions if they take into account how respondents may have encoded the information.

What are the three key features of the 1979 Belmont Report (Ethical Principles and Guidelines for the Protection of Human Subjects of Research)?

(1) Beneficence: minimize possible harms and maximize possible benefits. (2) Justice: a fair balance between those who bear the burdens of research and those who benefit from it. (3) Respect for persons: informed consent; give subjects meaningful control over information about themselves.

What are the two types of validity?

(1) Content validity: whether the questions capture the construct behind them. (2) Construct validity: whether the required empirical findings support the hypothesis, i.e., whether the measures correspond to what is expected.

What are two different statistical uses for survey data?

(1) Descriptive uses (i.e. average value, prevalence of an attribute etc.) (2) Analytical uses (i.e. what causes a phenomenon to occur, how attributes are associated with each other)

What are the types of validity?

(1) Face validity - the validity of a survey at face value (whether the measurement is logical) (2) Content validity - the extent to which a measure represents all relevant dimensions (3) Criterion validity - the extent to which the measure agrees with or predicts some criterion of the "true" value (or gold standard) (4) Construct validity - the extent to which relationships between measures agree with relationships predicted by theories or hypotheses

What are two ways to measure the consistency of survey answers?

(1) The same person can be asked the same question twice (2) Two people can be asked the same question - ex. 2 members of the same family are asked about household income
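For illustration, a minimal Python sketch of the first approach (test-retest consistency), using made-up paired answers; the agreement measure shown is a simple percent agreement, not a formal reliability statistic:

```python
# Hypothetical answers from the same respondents asked the same question twice.
first_ask  = ["yes", "no", "yes", "yes", "no"]
second_ask = ["yes", "no", "no",  "yes", "no"]

# Count how often the two askings agree.
agreements = sum(a == b for a, b in zip(first_ask, second_ask))
percent_agreement = agreements / len(first_ask)

print(f"Test-retest agreement: {percent_agreement:.0%}")  # 80%
```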

What is snowball sampling and why is it used?

Snowball sampling allows lists of rare populations to be constructed by using informants to gather the names and addresses of unknown members. It is used to reach target populations whose numbers may be small, and to formally study populations about which it has otherwise been difficult to gather information.

List some of the advantages to authenticated web-based surveys.

Linked to a participant list; password/ID credentials are needed to log in; can track the progress of individual participants and send reminders; may be able to send personalized invitations; if custom data points are known, they can be programmed into the survey for personalization or calculation; each participant can only submit once; and participants can complete the survey in more than one sitting.

Which one is NOT a diary study design? a) Time-based design b) Fixed schedule c) Variable schedule d) Location-based design

...

What are the three characteristics of a target population?

1) Finite in size, can be counted 2) Exists in a specified time frame 3) Observable/accessible

Why do probability-based sampling techniques have limited success for the most-at-risk populations?

1) These populations typically don't have sampling frames, 2) Typically are too small to be adequately measured in general population surveys 3) Often practice illegal or socially stigmatizing behaviors, making them difficult to access

Please provide the definition of the following terms: (1) Item missing data (2) Response bias (3) Sampling frame bias

(1) Item missing data: the absence of information on particular questions. (2) Response bias: one subgroup of the population is more or less likely to answer than others. (3) Sampling frame bias: the sampling frame does not include all members of the population.

What are the limitations of open-ended questions?

1. Researchers may get irrelevant and repetitious information. 2. Respondents may take more time to complete the questions. 3. Statistical analyses require interpretive and time-consuming categorization of responses.

What are stages of questionnaire development?

1. Specify research question 2. Develop research design 3. Develop questionnaire outline 4. Review of the literature 5. Review of previous questions 6. Pilot study 7. Draft questions 8. Test questions 9. Review and revise

What are 4 approaches to evaluating the validity of survey measures

1. Studying patterns of association 2. Comparing results from alternative forms of the same question 3. Comparing answers to survey questions with information from other sources (such as records) 4. Asking the same questions twice of the same respondents, or of more than one respondent, and comparing results

Which survey type allows the most response options a. Face-to-face interviews b. Telephone interviews c. Self-administered surveys d. A & C

A

What are the three governing principles for research ethics of all human research established by the Belmont Report? a. Beneficence b. Consent c. Justice d. Respect for persons e. Probing

A, C, D

Name two pros and two cons of using focus groups to collect information.

Pros- 1. Allows researchers to uncover topics that the survey might have missed. 2. Makes it possible for people who can't read or write to still participate Cons- 1. Focus groups can suffer from "group think" 2. Can't promise confidentiality.

What are three challenges to using a diary method?

Requires user training, participant burden, under-reporting

What are the governing principles for research ethics?

Respect for persons Beneficence Justice

Bias resulting from one subgroup of the population being more or less likely to cooperate than others is known as: - Sampling Frame Bias OR - Response Bias

Response Bias

What are the methods of data collection?

Self-administered survey; telephone survey; face-to-face survey/personal interview

What are the main elements for an informed consent?

Study purpose; procedures; risks; benefits; freedom to withdraw; alternative treatments; voluntariness; confidentiality

When creating a questionnaire that will be administered to the general population, what reading level should questions be written at?

The 4th-6th grade level

Explain the difference between target population and survey population?

The target population is the intended population of study. The survey population is the actual population surveyed in the given time frame.

In phone-based surveys, it is standard practice to make up to 10-15 attempts to contact potential participants at differing times of day and day of the week. True/False

True

True or False: The choice of data collection method has an impact on the sampling frame and response rates.

True

True or False: Probing provides focus to the participant's behavior, where carefully selected problems help to focus attention on pertinent issues

True

True or False? In systematic sampling every element has the same probability of selection but not every combination can be selected.

True

What is the difference between unit nonresponse and item missing data?

Unit nonresponse is when a person selected for the sample is not successfully measured at all. It can be hard to say whether a person is a nonrespondent when he or she provides less than complete information; a decision must then be made whether to exclude the individual, or to include the individual and treat the gaps as item missing data. Item missing data describes the absence of information on individual data items for a person who was successfully measured on other items.

What is the difference between unit nonresponse and item nonresponse?

Unit nonresponse refers to the complete absence of an interview from a sampled person, whereas item nonresponse refers to the absence of answers to specific questions after the sampled person agrees to participate in the survey.

Short answer. Under what circumstances would it be wise to use a self interviewing technique (using audio or computer) instead of a telephone or face to face survey?

Usually when the questions you are asking are especially sensitive or have the potential for social desirability bias, it is better to let people self-administer the survey.

What is the difference between validity and reliability? (Someone asked this same question***)

Validity is the extent to which the survey measure accurately reflects the intended construct. Reliability refers to the consistency of responses over repeated trials of the measurement (low random variability).

Describe ways to increase response rates.

Ways to increase response rates include mixed contact methods (one contact with an interviewer and another contact with a different interviewer), multiple outreach attempts, incentives, and matching interviewers with participants (connecting) based on gender, age, etc.

What is cognitive interviewing?

We define cognitive interviewing as the administration of draft survey questions while collecting additional verbal information about the survey responses, which is used to evaluate the quality of the response or to help determine whether the question is generating the information that its author intends.

What did we learn from our reading on differential incentives?

We learned that a significant number (74%) of respondents that learned about differential incentives thought that the system was unfair but this still didn't change whether or not they participated in future surveys.

What is NOT a key feature of Ecological Momentary Assessment (EMA) approaches:

a) Data are collected in real-world environments, as subjects go about their daily lives. b) Assessments focus on subjects' current state, i.e., current feelings. c) Subjects may choose a convenient moment to complete the assessment. d) Subjects complete multiple assessments over time. (The answer is c: in EMA, assessment moments are strategically selected by the researcher rather than chosen at the subject's convenience.)

What is the difference between event-based and time-based sampling?

a. Event-based: method of data collection whereby a recording is made each time a predefined event occurs b. Time-based: method of data collection whereby a recording is solicited based on a time schedule (based on time intervals)

What are the pro's and con's of self-administered surveys?

Pros: increased flexibility (time, place, etc.); lower cost; better access; less social desirability bias. Cons: respondents are more likely to miss or skip a question; risk of the respondent not understanding the question.

Outline the stages of questionnaire development:

The stages of questionnaire development are: i. Specify research question ii. Develop research design iii. Develop questionnaire outline iv. Review literature v. Review previous questions (expert panels, focus groups, cognitive-based interviews using focused probing or think-aloud) vi. Pilot study vii. Draft questions viii. Test questions for: (1) content (are they asking the right things?), (2) cognitive standards (are they understandable and answerable?), and (3) usability (are people able to complete them as intended?)

Which of the following is the correct definition of Response Rate (RR)? a) RR = # of eligible samples/original sample size b) RR = # of eligible sample who complete questionnaire/total # of eligible sample c) RR = # of eligible sample/# of ineligible sample d) None of the above

b) RR= # of eligible sample who complete questionnaire/total # of eligible sample

Which of the following is not protected under 45CFR46? a. Children b. Prisoners c. Mentally ill d. Pregnant women (including fetuses and neonatal)

c. Mentally ill

List TWO ways to increase response rate.

Follow up by mail, email, or phone call; offer an incentive.

Name TWO groups of people who wouldn't be captured by the US census.

Illegal residents, the incarcerated population, homeless people.

What are some strategies for sampling rare groups? Please name TWO.

network sampling, snowball sampling, screening, time-space sampling.

What are the two main ways to measure consistency?

Ask the same person the same question twice; ask two people the same question.

What are some factors that can interfere with respondent memory?

Passage of time; intervening events; how meaningful an event is; uniformity of events.

What are the main methods of data collection?

Self-administered survey (mail, computer assisted); telephone survey; face-to-face survey/personal interview.

What are the two types of diary design?

time-based, event-based

What is one limitation of using a list of landline telephone numbers as a sampling frame?

• Not everyone has a landline telephone (ex. More young adults have cell phones than landlines) • Some individuals might have more than one landline • Some individuals are more likely to pick up a landline telephone and take a survey than others (ex. Elderly women vs. young adults)

What are primacy and recency effects? How can you minimize primacy and recency effects in a study?

Primacy effect: the individual picks the first option presented because they remember it better than the options presented later; common in visual surveys (mailed surveys). Recency effect: the individual picks the last option presented because they remember it better than the options presented earlier; common in auditory surveys (telephone or personal interview surveys). You can minimize primacy/recency effects by reducing the number of response categories and by randomizing the order of categories in survey instruments. Or as fill-in-the-blank: A PRIMACY effect is when an individual picks the first option presented because they remember it better than the options presented later. A RECENCY effect is when an individual picks the last option presented because they remember it better than the options presented earlier.
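For illustration, a minimal Python sketch of the randomization strategy mentioned above; the nominal response options are hypothetical (randomizing order makes little sense for ordered agree/disagree scales):

```python
import random

# Hypothetical nominal response options for a "most important factor" item.
options = ["Cost", "Convenience", "Quality of care", "Distance"]

def randomized_options(opts):
    # Shuffle a copy for each respondent so no category is always presented
    # first (primacy) or last (recency) across the sample.
    shuffled = list(opts)
    random.shuffle(shuffled)
    return shuffled

print(randomized_options(options))
```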

What are some potential problems with scale ratings?

Answer: With ratings, respondents typically shy away from the negative end of the scale, producing "positivity bias." They also tend to avoid the most extreme answer categories. When scales are numbered, the numbers themselves can affect the answers. It is also important to think about the number of response categories available: too few may make it difficult to discriminate between respondents with different underlying judgments, while too many may fail to distinguish reliably between adjacent categories. This becomes important when analyzing your data.

What is a patient-reported outcome (PRO)?

Any report of the status of a patient's health condition that comes directly from the patient, without interpretation

This type of validity is the extent to which a measure represents all relevant dimensions. A. Face validity B. Criterion validity C. Content validity D. Construct validity

C. Content validity

What is the purpose of surveys?

Exploration; Description; Hypothesis/theory development; Hypothesis/theory test; Explanation/test of causal model; Prediction; Evaluation;

True or False: Validity corresponds to systematic deviations on summary statistics (systematic deviation across all trials and persons between response and true value)

FALSE, this is Bias, not validity

True or false - Patient reported outcomes do not affect health care compensation

False

True or False? Use of nonmonetary incentives is the best way to improve response rates

False: monetary incentives are usually the best way to increase response rates.

List 3 approaches to improve the response rate.

Incentives, reminder letters, or by design: have a shorter survey.

Describe methods to inform the development of surveys.

Several methods may be needed before a good survey can be developed, including focus groups, formative research, biological samples, existing record sources (e.g., EHR), and participant/non-participant observation.

Respondent-driven sampling includes an analytical component necessary to generate representative estimates and confidence intervals. Does that make it appropriate for any hard-to-reach population?

No, it can only work when the population is socially networked and when members of the networks are willing to recruit from peers.

What are the benefits of using authenticated surveys?

Only participants that are in the list can take the survey. Can track survey progress and send reminder e-mails to participants.

Researchers must evaluate whether the answers to questions are valid measures of what they are trying to measure. What are the 4 approaches to evaluating the validity of survey measures?

1) Studies of patterns of association: construct validity, predictive validity, and discriminant validity. 2) Validating against records: use factual data or record checking and compare with the survey results. 3) Comparing alternative question forms: ask the same question in two different forms, then compare. 4) Consistency as a measure of validity: measure the reliability or consistency of survey answers by asking the same person the same question twice, or by asking two people the same question.

Short answer. Name two principles of good question design

1. Ask people about first-hand experiences using questions they can answer. 2. Ask one question at a time. (Other answers include: word the question so every respondent is answering the same question; the wording of the question must be complete and any script used should prepare respondents to fully answer the question; communicate what kind of answer is adequate; make it as easy as possible to read, follow instructions, and record answers; orient respondents to tasks in a consistent way; beware of asking for second-hand information.)

How can researchers reduce response distortion?

1. Assure confidentiality of responses: minimize the use of names or other identifiers, make sure completed survey answers are not accessible to nonstaff people, and dissociate identifiers from survey answers. 2. Emphasize the importance of accuracy: respondents are asked to make a commitment to give accurate answers. 3. Reduce the role of the interviewer: use self-administered forms.

What are the three standards that all survey questions should meet?

1. Content standards → for example, are the questions asking the right things? 2. Cognitive standards → are respondents consistently understanding the questions and is all of the required information to answer the question available? 3. Usability standards → Can the questionnaire be completed easily and as intended?

Give one advantage of focus groups.

1. Focus groups do not discriminate against people who cannot read or write. 2. They can encourage participation from those who are reluctant to be interviewed on their own. 3. They can encourage contributions from people who feel as though they have nothing to say or are otherwise nonresponsive.

Describe some of the methods that researchers can use to evaluate draft survey questions.

1. Expert reviews: subject matter experts and question design experts review a draft of the survey and make comments on the wording of questions, response alternatives, order of questions, structure of the questions, instructions to interviewers, navigational rules, etc. 2. Focus group discussions: a discussion among a small number of the target population members guided by a moderator to help the researcher learn about how members of the target population may understand concepts presented in the questionnaire. 3. Cognitive interviews: find out how people understand and answer questions 4. Field pretests: small-scale rehearsals of data collection to make observations about content, validity, and reliability of the survey. 5. Randomized or split-ballot experiments: offer clear evidence of the impact on responses of methodological features.

Match each ethical principle with its definition, and give a brief example of a research situation in which each principle is violated. Ethical principles: 1. Justice 2. Respect for persons 3. Beneficence. Definitions: A. Requires researchers to minimize possible harms and maximize possible benefits for the subject B. Balance between those who bear the burdens of research and those who benefit from research C. Requirement for informed consent

1. Justice = B; 2. Respect for persons = C; 3. Beneficence = A. Examples: Beneficence: during the course of the study, a researcher chooses not to disclose study errors that arise during administration even though these errors lead to poor health outcomes for research participants. Justice: potentially harmful and expensive pharmaceuticals are primarily tested in low-income populations (or third-world countries) where patients cannot afford the cost of the medication. Respect for persons: prisoners are forced to take part in research experiments.

Describe some data editing you might do if you discovered that you had missing data. (Hint: consider imputation options)

1. Mean value imputation: take the mean of the sample and use it for the missing data. 2. Regression imputation: create a regression model from the overall sample and apply it to predict the missing value (all values used as predictors must themselves be present). 3. Hot deck imputation: like regression imputation, but the predicted residual is "borrowed" from another case in the data set; in a sequential hot deck, the missing value is imputed with the most recently reported value in the sort sequence. 4. Multiple imputation: create multiple imputed datasets and combine them. This allows the estimates to vary across datasets and allows estimation of the overall variation, including sampling and imputation variance (like analyzing your own analysis, if that makes sense).
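For illustration, a minimal pandas sketch of the mean-value and sequential hot-deck options above; the column name "income" and the values are hypothetical:

```python
import pandas as pd

# Hypothetical survey variable with two item-missing values.
df = pd.DataFrame({"income": [32000, 45000, None, 51000, None, 38000]})

# Mean-value imputation: replace each missing value with the sample mean.
mean_imputed = df["income"].fillna(df["income"].mean())

# Sequential hot-deck imputation: in the file's sort order, carry the most
# recently reported value forward into the missing field.
hot_deck_imputed = df["income"].ffill()

print(mean_imputed.tolist())
print(hot_deck_imputed.tolist())
```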

What are some kinds of checks you can do while editing, to make sure that your data is ready for analysis? Choose 2 of your favorites and provide examples.

1. Range edit: the recorded value should lie in a specified range (Example: a participant answers that her age is 173 years; the survey designer can restrict the range from 1 month to 125 years). 2. Ratio edit: the recorded values should stand in a plausible ratio to their comparators (Example: I have room for 12 schoolchildren on my field trip bus, and the counts making up that ratio should be consistent with 12). 3. Consistency edit: the recorded value should make sense alongside other responses (Example: a participant answers NO to "Have you ever been sexually active?" but YES to "Have you ever had a sexually transmitted disease?"; the survey designer can create a pop-up message: "You answered X in this question, but Y in this question. These responses are inconsistent. Please provide a valid response. Thank you!"). 4. Balance edit: the recorded values should sum to a target whole (Example: if recording percentages of something, the total should equal 100%). 5. Comparison to historical data: the recorded value should generally not change over time (Example: last week I lived in a house with 4 other people; this week I likely still live in a house with 4 other people). 6. Checks of highest and lowest values: the recorded value should not be implausible (Example: my sample's age range runs from 0 to 173...something is wrong!).
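For illustration, a minimal Python sketch of a range edit and a consistency edit; the field names and thresholds are hypothetical:

```python
def edit_checks(record):
    problems = []

    # Range edit: age should fall within a plausible span.
    if not (0 <= record["age_years"] <= 125):
        problems.append("age out of range")

    # Consistency edit: a respondent who reports never being sexually active
    # should not also report a sexually transmitted disease.
    if record["ever_sexually_active"] == "no" and record["ever_had_std"] == "yes":
        problems.append("inconsistent answers on sexual history items")

    return problems

print(edit_checks({"age_years": 173,
                   "ever_sexually_active": "no",
                   "ever_had_std": "yes"}))
# -> ['age out of range', 'inconsistent answers on sexual history items']
```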

Name the three ways we might bring bias into our surveys

1. The way the question is written 2. The way the questionnaire as a whole is designed 3. The way the questionnaire is administered (Answers might also be more detailed; other potential answers include leading questions, inconsistency, formatting, a non-objective interviewer, a faulty scale, and problems with wording.)

Which methods are best to address question standard of content? A. expert panels B. focus groups C. cognitive-based interviews D. pretests

A and B

What are the advantages and disadvantages of a progress indicator in an online survey?

A progress indicator decreases abandonment but increases download time. With a large quantity of questions, the progress indicator moves slower and might cause the respondent to abandon the survey.

What are the 4 steps of the cognitive process that occur while filling out a questionnaire (Multiple choice question) a) Comprehension b) Rewording c) Retrieval of information d) Reporting an answer e) Satisficing f) Judgment and estimation g) Ordering

A, C, D, and F

Which of the following are pros of web surveys? A. Multimedia capabilities B. Required fields C. Data validation D. Conditional logic E. Custom error messages F. Upfront investment G. Drop-down lists H. Security

A, B, C, D, E, and G are all pros; F and H are cons

What are three benefits and three limitations of focus groups?

A. Benefits • can be used to see what potential respondents know or do not know about a topic • can be used to find out what topics are important and what topics are not important • can be used to identify terms that respondents use when discussing a topic and how they understand these terms • do not discriminate against people who cannot read or write • encourage participation from people who do not want to be interviewed on their own • encourage participation from people who feel that they do not have anything to share B. Limitations • participants are not always representative of the survey population so you should not generalize responses • not a good venue for evaluating wording of specific questions or discovering how respondents arrive at their answers • potential for results to be unreliable, hard to replicate, and subject to judgments of those who are conducting the focus groups • the articulation of group norms may silence individual voices of dissent • can have issues of hierarchy • loss of confidentiality because other participants are present

When recalling shorter periods and more salient events, which of the following is most likely to happen? A. Over-report B. Under-report C. Report without bias

A. Over-report

What is the difference between Response Rate and Cooperation Rate? Please choose the appropriate numerator and denominator for each rate. Response Rate = __ divided by __ Cooperation Rate = __ divided by __ a. Number of eligible sample who complete questionnaire b. Number of eligible sample in your target population c. Total number of eligible sample able to be contacted d. Total number of eligible sample e. Total population

Response Rate = a/d; Cooperation Rate = a/c
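For illustration, a minimal worked sketch of the two rates in Python, using hypothetical counts:

```python
# Hypothetical counts corresponding to the answer key above.
eligible_sampled = 500          # total number of eligible sample (d)
eligible_contacted = 400        # eligible sample actually reached (c)
completed = 300                 # eligible sample who completed the questionnaire (a)

response_rate = completed / eligible_sampled        # 300 / 500 = 0.60
cooperation_rate = completed / eligible_contacted   # 300 / 400 = 0.75

print(f"Response rate: {response_rate:.0%}, cooperation rate: {cooperation_rate:.0%}")
```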

Why is a College setting an appropriate place for a web-based survey?

Almost all students have access to email and a computer. Also, they are younger and better equipped to use computers.

What is an example of a progress indicator and what is one advantage and disadvantage of using them?

Answer: An example of progress indicator is the graphic and text indicator in the corner of the screen as well as motivational screens such as "You are one third done!" An advantage of progress indicators is they will increase completion and reduce abandonment. A disadvantage is they may require additional download time, which may lead to non-completion of survey.

All of the following characteristics are advantages of Simple Random Sampling EXCEPT: A. It's easy to understand B. Follows standard statistical formulas C. Captures all important subpopulations D. Is self-weighting E. None of the above

Answer: C; important subpopulations may be missed in SRS.

Discuss some of the problems that researchers faced with recruitment and retention for the iSay study on adolescent alcohol and drug abuse. And what were some of the methods they used to combat these issues. (Kristina Jackson's lecture)

Answers will vary greatly but should discuss some of the following: difficulty getting principal buy-in; as a non-native Rhode Islander, the researcher had more difficulty getting buy-in in local schools; principals finally came on board but didn't educate teachers about the program; parental consent was required; students lost paperwork. Strategies: keeping multiple copies of paperwork to hand out; using a graphic designer to professionalize the logo and paperwork; tailoring information to age; incentives (money, a shirt, a pizza party); very structured contact with participants, with e-mail notification and phone-call follow-ups to late responders; obtaining many different forms of contact information; keeping contact information updated; allowing re-entry of students who missed months; quick payouts so students had an almost immediate return on completing the task; newsletters; birthday cards.

What types of questions are most sensitive to variation in question wording? Attitudes Beliefs Behaviors Attributes

Attitudes

What are three common study designs?

Cross-sectional, retrospective, prospective

What are the two uses of statistics?

Descriptive uses and analytic uses.

Which of the following are approaches to assess reliability? (select all that apply) a. Test-retest reliability b. Inter-rater reliability c. Alternate form reliability d. Split-half reliability e. All of the above f. None of the above

E. All of the above

What are several key features of the Ecological Momentary Assessment (EMA) approach? And what bias does this approach reduce?

EMA refers to methods used to collect real-time data on subjects' behavior and experience in their natural environment. Other key features common to the EMA approach are the focus on subjects' current state, the use of multiple assessments over time, and the strategic selection of moments for assessment. The bias this approach aims to minimize is recall bias, since data are collected during or immediately after the behavior or experience.

We read an article that discussed Ecological Momentary Assessment (EMA). Define EMA and briefly discuss some of the benefits of this type of data collection method.

EMA is the repeated sampling of a subject's behavior and experiences in real time, in their natural environment. A main benefit of this type of collection is to decrease recall bias. Repeated assessments can be a better measure of a person's average state across situations. Data collected in the subjects' natural environment may be more generalizable to real-world experiences. This is used more often in the field of clinical psychology. Data can be used to evaluate the individual over time, trends over time, associations or interactions between two phenomena that occur at the same time, or the order of events and the association with behaviors.

What type of bias is EMA (ecological, momentarily, assessment) is trying to minimize, give an example?

EMA (ecological momentary assessment) is trying to minimize recall bias; for example, a person can write a diary entry right at the moment rather than completing it later.

What is an error?

Error refers to deviations of what is desired in the survey process from what is obtained.

List two methods for testing questions.

Expert panels and focus groups

True or false: For sensitive questions, asking closed-ended questions is better than asking open-ended questions.

False

True or False: Response bias occurs when there is a difference between the target population covered in the survey and the target population that is not covered by the survey.

False. This is an example of coverage bias. Response bias occurs when there is a consistent direction of the response deviations over trials: systematically underreported or overreported

True or False: The social psychological perspective on incentives states that refusal is an indication that the survey is perceived as more burdensome and has less utility for the refusers. Therefore, it is appropriate to offer compensation to refusers but not cooperative respondents, whose cooperation is seen as evidence of the utility of the survey to them.

False. This is the definition of the economic perspective on incentives. The social psychological perspective states that reluctance to participate is not an ipso facto indication that the survey is more burdensome, and offering refusal conversion payments to reluctant respondents may be seen as inequitable.

Health professionals response rates are higher for web-based surveys than other forms of data collection. True/False

False. Web-based surveys have the lowest response rates in this population.

True or False: Health professionals (in general) are more likely to respond to emailed surveys than other types of surveys (phone, paper-based).

False: physicians are less likely to respond to emailed surveys because they can easily ignore or miss the email, and because of confidentiality concerns.

What is field coding and what is one benefit and burden of using this technique?

Field coding is when the respondent answers an open-ended question but the interviewer codes the answer into a numeric category on the spot. Pro: the respondent gets to describe the situation in their own words. Con: the interviewer bears the burden of interpreting the response and the categories, and there is evidence that this has negative effects on interviewer behavior.

What are some of the common purposes of survey research? What are some of the limitations and strengths of common study designs involving primary data collection?

Survey research is used to collect information on a topic by asking individuals questions, with the goal of generating statistics on the group or population that those sampled represent. Surveys can be used for needs assessment, description, hypothesis/theory development or testing, explanation or testing of a causal model, prediction, and evaluation. Common study designs include cross-sectional, retrospective, and prospective. When planning primary data collection, the research problem or question, the variables and their measurement, the population, the data analysis, the use of the results, and the resources required all need to be considered. Limitations: a cross-sectional design looks at only one snapshot of data; a retrospective design relies on past information, so measurement can be limited; and a prospective design is time-consuming and costly. Strengths: a cross-sectional design is the least expensive and least time-consuming; a retrospective design uses data that have already been collected; and a prospective design provides measurements over time.

Validity is threatened by ___________ error while reliability is threatened by ________ error.

Systematic; Random

Which has been proven to garner a higher retention rate, a $2 bill or a $5 check? Why?

The $2 bill will have a higher retention rate because, in the USA, a $2 bill is rare and has novelty status. Also, since the higher amount was given as a check, the transaction cost of cashing a check for such a low value can be a pain, so it loses its incentive value.

What is yea-saying? And who might be more susceptible to this behavior?

The tendency of a respondent to agree with statements rather than disagree, or to give what they believe to be socially desirable responses to questions. Minority and less-educated populations are more likely to yea-say.

What are 3 ways bias can arise in a survey?

The way questions are designed, the way the questionnaire as a whole is designed, and how the questionnaire is administered or completed

What is the principle of beneficence?

This requires researchers to minimize possible harms and maximize possible benefits for the subject, and to decide when it is justifiable to seek certain benefits in spite of risks involved or when benefits should be forgone because of the risks involved.

The variance of a systematic sample is always lower than a simple random sample. True or false?

False. The variance of a systematic sample depends on how the list is ordered; it can be lower than, equal to, or (if the frame has periodicity) higher than that of a simple random sample.

True or False: Two types of sampling error are sampling bias and sampling variance.

True

True or false: Computer adaptive testing successively selects questions to maximize the precision of the exam based on how an individual answered prior questions.

True
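For illustration, a minimal Python sketch of that adaptive idea, with hypothetical item difficulties and a deliberately crude ability update (not a real IRT scoring rule):

```python
# Hypothetical item difficulties on an arbitrary scale.
item_difficulty = {"q1": -1.0, "q2": -0.3, "q3": 0.0, "q4": 0.7, "q5": 1.5}

def next_item(ability, asked):
    # Pick the unused item whose difficulty is closest to the current
    # ability estimate, which is where it is most informative.
    remaining = {k: v for k, v in item_difficulty.items() if k not in asked}
    return min(remaining, key=lambda k: abs(remaining[k] - ability))

ability, asked = 0.0, set()
for correct in [True, True, False]:          # hypothetical responses
    item = next_item(ability, asked)
    asked.add(item)
    ability += 0.5 if correct else -0.5      # crude ability update
    print(item, round(ability, 2))
```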

Which methods address the following questionnaire development standards? Circle all that apply. Usability: Expert reviews, Pretest, Focus groups, Cognitive interview. Content: Expert reviews, Pretest, Focus groups, Cognitive interview.

Usability: pretest and cognitive interview. Content: expert reviews and focus groups.

Short answer. Describe when to use an authenticated survey and when to use an unauthenticated survey.

You would use an authenticated survey if you wanted to limit your participants to a preselected group of people. It allows you to know exactly who did and didn't participate and to prevent more than one response per person. However, there is always the chance that authenticated people will forward the email to someone else and have them complete it. Authentication also allows you to use existing information about the participant to personalize the survey and to let participants come back to half-finished surveys. You would use an unauthenticated survey if you didn't have a set list of preselected participants. These surveys allow anyone to participate, so they are ideal for casting a wider net of respondents or reaching people you know little about. Unauthenticated surveys can be shared on listservs, places like Craigslist, or other public places. The problem with these surveys is that you cannot prevent people from responding more than once and you cannot control who responds.

What are the three main kinds of question evaluation activities?

a) Focus group discussions- presenting new products and ideas to small groups, then have a discussion about what people like and do not like about them b) Intensive individual interviews- also known as think-aloud interviews c) Field pretesting- replicating to a reasonable extent, procedures to be used in a proposed survey

When writing a question that contains categorical response options that the respondent can choose from, in what order should the responses be listed? a) least socially desirable response option should be listed first b) least socially desirable response option should be listed last c) order is important but social desirability factors never need to be considered when ordering categorical responses d) order does not matter at all

a) the least socially desirable response option should be listed first

What are the three main sources of bias in a survey? Give an example of each source.

a) The way a question is designed (e.g., problems with wording) b) The way the questionnaire as a whole is designed (e.g., formatting problems or the questionnaire is too long) c) How the questionnaire is administered (e.g., the interviewer is not objective, or the respondent recalls inaccurately)

Multiple choice: What is one of the things that a survey needs? a. Answers people give must be accurate b. Demographic questions c. The sample size needs to be at least 20 people d. The survey must be administered in both Spanish and English

a. Answers people give must be accurate

What are the four different types of weighting that are prevalent in complex surveys?

a. As a first-stage ratio adjustment b. For differential selection probabilities c. To adjust for unit nonresponse d. Post-stratification weighting for sampling variance reduction

What are three reasons for non-participation in population-based cohort studies?

a. Distrust of researchers b. Concerns about research design c. Uncertainty about the outcomes d. Discordance between lay beliefs and medical practice e. Demands of the trial

What is an ecological momentary assessment and what are three primary aims?

a. EMA is repeated sampling of current behaviors and experiences in real time in natural environments. b. AIMS: minimize recall bias, maximize ecological validity, allow the study of micro processes that influence behavior in real-world contexts

What is the primary purpose behind focus groups and what are two benefits and two limitations?

a. Help people to explore and clarify their views in ways that would be less easily accessible in a one on one interview. This helps researchers tap into the many different forms of communication that people use in day-to-day interaction. b. Benefits: Better understand the vocabulary of the potential respondents (help in question development); Removes discrimination against illiterate respondents c. Limitations: Confidentiality is not guaranteed; "professional focus groupers" if compensation is given

How can incentives for refusal conversion affect the generalizability of the survey results?

a. It can increase generalizability if your goal is to improve the response rate, especially for general populations. b. For health care workers or other time-limited people, it may not improve generalizability.

The likelihood of under-reporting is increased with: a. Longer recall periods and less salient events b. Longer recall periods and more salient events c. Shorter recall periods and less salient events d. Shorter recall periods and more salient events

a. Longer recall periods and less salient events

Describe the three levels of measurement.

a. Nominal: data in categories can only be counted with regard to frequency of occurrence, no ordering or valuation is implied (ex. favorite color) b. Ordinal: rank ordering of categories in terms of the extent to which they possess the characteristic of the variable, underlying continuum along which respondents can be ranked; no assumption about precise distances between the points along a continuum (ex. Birth order) c. Interval: labels, orders, and uses constant units of measurement to indicate exact value of each category of response (weight, height)

What are the two paradigms of cognitive interviewing?

a. One involves a cognitive interviewer whose role is to facilitate participants' verbalization of their thoughts; an example is the think-aloud procedure. b. The other involves an interviewer who guides the interaction more proactively; an example is intensive interviewing with follow-up probes.

What are limitations of open- and closed-ended questions?

a. Open: will elicit certain amounts of irrelevant and repetitious information, requires greater degree of communication skills by respondent, may take more of respondent's time, interpretation of answers is subjective (analysis requires more effort) b. Closed: Respondent may select fixed responses randomly rather than in thoughtful fashion, require respondent to choose "closest representation" of actual response, subtle distinctions among respondents cannot be detected, may lead to inadvertent errors

List and describe the three governing principles for research ethics according to the Belmont report.

a. Respect for persons (the researcher should give the potential participant all the information he/she needs to make an informed decision based on the risks and benefits of participating. And if the potential participant declines, his/her refusal will be respected) b. Beneficence (there should be more benefit than harm for participants the vast majority of the time) c. Justice (those who bear the burden of research should also be the ones that benefit from research)

What is the difference between sampling frame bias, selection bias, and response bias?

a. Sampling frame bias affects peoples' chance of being selected, i.e. the sampling frame doesn't include all of the target population. Selection bias is not to be confused with sampling frame bias. If the sampling frame was people at the mall on a Tuesday afternoon, selection bias would be if you only picked people coming out of Nordstrom. Response bias refers to when one subgroup is more (or less) likely to respond than the others (e.g. a survey that's only offered in English would systematically exclude those people who don't speak English).

Name/define two sources of bias associated with sampling and data collection.

a. Sampling frame bias: sampling frame does not include all members of the defined population or does not include correct information about sample members (affects chance of selection) b. Response bias: one subgroup of population is more or less likely to cooperate than others

List stages of questionnaire development

a. Specify research question b. Develop research design c. Develop questionnaire outline d. Review of the literature e. Review of previous questions f. Pilot study g. Draft questions h. Test questions i. Review and revise

Match the following stages of questionnaire development in the correct order: a. Specify research question b. Develop research design c. Develop questionnaire outline d. Review of the literature e. Review of previous questions f. Pilot study g. Draft questions h. Test questions i. Review and revise

a. Specify research question (1) b. Develop research design (2) c. Develop questionnaire outline (3) d. Review of the literature (4) e. Review of previous questions (5) f. Pilot study (6) g. Draft questions (7) h. Test questions (8) i. Review and revise (9)

What are the four approaches to evaluating validity of study measures?

a. Studies of patterns of association b. Comparison of results from alternative forms of the same question c. Comparing answers to survey questions with answers from other source, such as records d. Asking the same question twice of the same respondent and comparing results

Which populations are protected under 45CFR46 (U.S. Federal regulations)? Check all that apply: a. The protected populations are... i. [ ] Pregnant women (including fetuses and neonates) ii. [ ] Terminally ill patients iii. [ ] People with learning disabilities iv. [ ] Children v. [ ] Prisoners vi. [ ] Refugees/Asylum-seekers

a. The protected populations are... i. [√] Pregnant women (including fetuses and neonates) ii. [ ] Terminally ill patients iii. [ ] People with learning disabilities iv. [√] Children v. [√] Prisoners vi. [ ] Refugees/Asylum-seekers

What are the two opposing views regarding incentives and equity, specifically the issue of offering refusal conversion payments to reluctant respondents?

a. There is the economic perspective, and the social/psychological perspective: i. Economic perspective sees it as entirely appropriate to offer compensation to refusers but not to cooperative respondents because the refusers are probably more burdened by taking the survey. ii. Social/Psych perspective sees refusal conversion payments as a violation of equity expectations, and that it is likely to reinforce uncooperative behavior and alienate cooperative respondents.

Which of the following is an example of random sampling? a. Taking the name of every person in a telephone book. b. Generating a list of numbers by picking numbers out of a hat and matching numbers to names in the telephone book. c. Taking the tenth name from a list of everyone in the telephone book

b. Generating a list of numbers by picking numbers out of a hat and matching numbers to names in the telephone book.

Which of the following is not a method for evaluating survey questions? a) focus groups b) pilot studies c) cognitive interviews d) field pretests

b) Pilot studies are preliminary studies used to see whether a study is even worth doing; they are usually not used to evaluate and test survey questions. That component comes later, once the pilot study has shown that the researcher has a worthwhile question of interest.

Which of the following statements illustrate some differences between proportionate and disproportionate sampling? a) The size of group sample is different for each group in proportionate sampling. b) One needs to apply weights to standard statistical formulas for mean and variance estimation in disproportionate sampling. c) A and B d) None of the above

c) A and B
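For illustration, a minimal Python sketch of point (b): applying design weights (the inverse of each stratum's selection probability) under a hypothetical disproportionate design with made-up strata and values:

```python
strata = {
    # stratum: (population size, list of sampled values)
    "urban": (8000, [4, 5, 6, 5]),
    "rural": (2000, [2, 3, 2, 3]),   # oversampled relative to its population share
}

weighted_sum = 0.0
total_weight = 0.0
for pop_size, sample in strata.values():
    weight = pop_size / len(sample)   # inverse of the selection probability
    for value in sample:
        weighted_sum += weight * value
        total_weight += weight

# The weighted mean corrects for unequal selection probabilities; an
# unweighted mean would overrepresent the oversampled rural stratum.
print(weighted_sum / total_weight)   # 4.5, versus an unweighted mean of 3.75
```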

A pharmaceutical company has come out with a new ADHD drug. They test it on 12-17 year olds who have ADHD. Their results are no better than what is already on the market so the company makes up the results. What form of misconduct is this? a. Plagiarism b. Falsification c. Fabrication

c. Fabrication

Which of the following is not a principle of consent with regard to children? a. Developmentally appropriate based on age b. An assessment of the child's understanding of the intervention c. Limited disclosure of the intervention d. An idea of the child's willingness to accept the intervention

c. Limited disclosure of the intervention

Beneficence is... a. Discuss harms and benefits of the study with the subject b. Fair balance between those who bear the burden of research and who benefit from it c. Minimize possible harm and maximize possible benefits for the subject d. Provide participants with rewards to mitigate the harms

c. Minimize possible harm and maximize possible benefits for the subject

Which of the following is NOT a procedure for monitoring cognitive processes? a. Going through the questions twice. Once to read as is, and a second time to process. b. "Think aloud interviews" c. Reading questions through once and having respondents read them back to you the second time. d. Asking probe or follow up questions after each individual question

c. Reading questions through once and having respondents read them back to you the second time.

What are some purposes of surveys in research? (Please select the correct choice). a) Exploration (needs assessments) b) Hypothesis/theory testing c) Description, prediction, and evaluation of process and outcomes d) All of the above e) None of the above

d) All of the above

How can one efficiently record what was said in a focus group? a. One or more observers take notes b. One way windows c. Videotape the group d. All of the above

d. All of the above

What is a purpose of conducting a survey? a. Prediction b. Needs assessment c. Explanation/test a causal model d. All of the above

d. All of the above

Which of the following is a type of validity? a. Construct validity b. Content validity c. Criterion validity d. All of the above

d. All of the above

We seek data on lots of different topics. Which category has responses that are MOST SENSITIVE to how we word our questions? a. Attributes b. Beliefs c. Behavior d. Attitudes

d. Attitudes

How does stratification impact our sampling estimates? a. Increases accuracy b. Decreases precision c. Decreases accuracy d. Increases precision

d. Increases precision
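A quick way to see the precision gain is to simulate it. The hedged sketch below (invented strata, arbitrary parameters) compares the spread of simple-random-sample means with proportionately stratified means over repeated draws; the stratified estimates should vary less.

```python
import random
import statistics

random.seed(0)

# Hypothetical population with two internally homogeneous strata of equal size.
stratum_a = [random.gauss(10, 1) for _ in range(5000)]
stratum_b = [random.gauss(20, 1) for _ in range(5000)]
population = stratum_a + stratum_b

def srs_mean(n):
    """Mean of a simple random sample of size n from the whole population."""
    return statistics.mean(random.sample(population, n))

def stratified_mean(n):
    """Proportionate allocation: half the sample from each (equal-size) stratum."""
    half = n // 2
    return 0.5 * statistics.mean(random.sample(stratum_a, half)) + \
           0.5 * statistics.mean(random.sample(stratum_b, half))

srs_estimates = [srs_mean(100) for _ in range(500)]
strat_estimates = [stratified_mean(100) for _ in range(500)]

# The stratified estimates vary less across replications -> higher precision.
print(statistics.stdev(srs_estimates), statistics.stdev(strat_estimates))
```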

Which of the following ways can be used to improve physician response to surveys? a. Mail information brochures regarding your study topic b. Contact them by email c. Reprint the questionnaire on high quality paper d. Send a colorful envelope in the mail e. None of the above

d. Send a colorful envelope in the mail

Which of the following is NOT a stage of questionnaire development? a. Specify research question b. Draft questions c. Review literature d. Test questions on co-workers

d. Test questions on co-workers

Imagine you are conducting a short survey of high SES individuals that requires very low costs and a very short length of data collection with wide geographic distribution. The skip patterns will be complex, and the survey may require respondents to look back at personal records. What mode should be used? a. Face-to-face b. Telephone c. Mail d. Web

d. Web

What are the 3 types of scientific misconduct overseen by the Office of Research Integrity in the Dept of Health and Human Services?

i. Plagiarism: Theft of intellectual property, unattributed copying of work. Includes unauthorized use of privileged communication, but DOES NOT include authorship disputes. ii. Falsification: Manipulating or omitting results or processes 1. Interviewer falsification: Intentional and unreported departure from instructions - can be deliberate misreporting, miscoding, or interviewing a non-sampled person for convenience iii. Fabrication: Making up data in proposing, performing, reviewing, or reporting

What are some methods used to sample hard to reach populations?

o Screening (random digit dialing)
o Time-space sampling: collect information on where and when people in a target population congregate and verify that they are at those places; then sample a time/location from the list and people from the time/locations
o Snowball sampling: start with a convenience sample from the target population called seeds, and seeds refer others into the sample; repeat until the sample size is reached
o Respondent-driven sampling: start with a convenience sample from the target population called seeds, and seeds refer others into the sample but there is a maximum number that can be referred; repeat until the sample size is reached

Define a needs assessment and discuss reasons for conducting one.

• A needs assessment is a planned process that identifies the reported needs of an individual or group. There are a number of reasons to conduct needs assessments, including: to create community trust and buy-in for your research project or intervention, to avoid creating bad interventions, to evaluate an intervention process, to help gather baseline information from which to develop project benchmarks and goals, to inform resource allocation, and to avoid wasting resources.

What are the most common approaches for sampling rare and hard-to-reach populations? Identify the strengths and limitations of each approach.

Methods for sampling hard-to-reach populations include screening, time-space sampling, snowball sampling, and respondent-driven sampling.
Screening - also known as random digit dialing. Strengths: based on a probability sample; includes individuals not on any list. Limitations: low response rate; individuals may not identify themselves; expensive.
Time-space sampling - three stages: formative, preparation, and sampling. Collect information on where and when people in the target population congregate, verify that people in the target population are actually at those places, and sample a time and location. Benefits: large, diverse samples can be attained; probability sample; reproducible in different study sites. Limitations: limited to people who attend the specific locations; concerns with selection bias.
Snowball sampling - Benefits: large samples can be attained since initial respondents refer other respondents; reproducible in different study sites. Limitations: only works for networked populations; non-probability sample.
Respondent-driven sampling (RDS) - start with a convenience sample from the target population; respondents refer others into the sample, but with a maximum number of referrals; this is repeated until the sample size is attained.

A respondent with a complicated employment history will find it difficult to report beginning and ending dates of jobs, whereas this task will be simpler for someone who has held the same job since completing school. If survey designers are unfamiliar with the distribution of employment experiences among their target population, which type of error is most likely to occur? a) overcoverage b) undercoverage c) measurement error d) processing error e) non-response

Over/undercoverage has to do with population elements, which aren't relevant here (the target population isn't defined). Processing error refers to mistakes in data coding, and non-response has to do with failure to obtain complete data from all selected individuals. When respondents misunderstand a question or find it difficult to answer, such as identifying particular start/end dates, they are more likely to provide estimates or responses that are less accurate, resulting in measurement error. One could make an argument for non-response on the basis that a respondent who finds a question too difficult to answer will simply skip it; however, if the question is not explicitly difficult (rather, it is cognitively challenging), the respondent is more likely to provide a less accurate answer.

Why do physicians not respond to taking a survey?

Physicians do not have much time. They are concerned about confidentiality. If the questions are not salient, they are less likely to participate. Individual questions may appear biased or not allow the respondent a full range of choices on the subject.

How does social desirability affect response? Describe a way you could reduce the effects of social desirability when asking respondents to report their attendance of religious services.

Respondents to questions dealing with personally or socially sensitive content may answer in a socially acceptable direction, resulting in response effects. For example, people tend to underreport socially disapproved behaviors, such as drug use, and overreport socially desirable behaviors such as attending religious services. One method to reduce this response effect is the inclusion of a buffer/introductory sentence to the question that provides support for the alternative/less socially desirable response ("Not all people are able to attend religious services every week..."). The question could also be adjusted to limit the level of detail respondents are asked to give.

What are some advantages of using authenticated surveys rather than unauthenticated surveys? (Open-ended question)

Sample Correct Answer: With authenticated surveys, you can track survey progress at the participant level and send reminders. You can also prevent duplicate surveys, because each participant can complete the survey only once with an assigned username and password. An authenticated survey is also useful for personalizing the survey or verifying information.

Define sampling bias and sampling variance.

Sampling bias - occurs when some members of the sampling frame are given no chance (or a reduced chance) of selection. Reduce it by giving all elements an equal chance of selection. Sampling variance - replications of the same study will generate different statistics around the same mean. Reduce it by increasing the sample size.

Describe two methods for sampling hard to reach populations.

Screening involves collecting data from members of a larger (typically probability-based) sample to be able to classify them as members or non-members of the hard to reach population of interest; only hard to reach population members are included in your final sample. Time space sampling involves randomly sampling individuals at specific times and locations from known locations where members of your target population congregate. Snowball sampling uses a convenience sample from the target population (seeds) to refer others to the sample until desired sample size is attained. Respondent driven sampling also uses a convenience sample from the target population (Seeds) to refer others to the sample until the desired sample size is attained, but RDS places a limit on the number of referrals each seed can make.

List two methods for sampling hard-to-reach populations.

Screening, time-space sampling, snowball sampling, respondent-driven sampling

What is selective attrition?

Selective attrition is the non-random loss of participants over the course of a study. It is a known problem in cohorts, as those in disadvantaged socio-economic groups, ethnic minorities, younger and older people, and those at greater risk of ill-health are more likely to drop out. This may limit the generalizability of findings and bias estimates of association.

What are three basic data collection methods? Describe using one of the three data collection methods what sampling frame bias is and how this could be a limitation to data collection.

Self-administered surveys, telephone surveys, and face-to-face interviews are the three data collection methods. Sampling frame bias arises when the sampling frame does not include all members of the defined population. For example, in telephone surveys the sample comes from those with access to a telephone. Say you are conducting a needs assessment on inadequate sleep among airline employees: airline employees most likely carry cell phones, but your telephone survey samples respondents' home landlines. This would be a limitation to collecting the necessary information from this population.

What are the types of needs?

Service needs: Needs health professionals believe the target population must have met to resolve a problem Service demands: Needs that those in the target population believe they must have to resolve a problem

List one method for sampling hard to reach populations.

Snowball sampling

Describe the advantages and disadvantages of utilizing a web-based survey (as opposed to another mode of administration).

• Advantages of web-based surveys include multimedia capabilities (e.g. pictures, charts, web links), the ability to use required fields which reduces non-response and ensures questions answered in intended order, the ability for data validation which saves time on post-collection data cleaning, the ability to use conditional logic which simplifies navigation for the participant and allows for more streamlined survey progression, custom error messages based on data rule and participant's responses, the ability to pipe customized text into questions or response options, the ability to use drop-down lists which provide a compact way to present a large number of response options on only one line of the screen, the ability to customize end pages, the progress bar option, the built in data dictionary, and that data is easily downloadable for statistical analysis/software. • Disadvantages of web-based surveys include the significant upfront time investment necessary to create a well-planned web survey, web layout issues (questionnaire may look different to different participants depending on their browsers, screen resolution, window size), scrolling issues (most users don't like having to scroll right and left), and security concerns (respondents may be wary of web-based security and some software packages may not conform to security standards required by IRBs).

What are two instances when informed consent is not required?

• when the only record linking respondents to the research project is the consent document and researchers are concerned about a confidentiality breach • when the research presents no more than minimal risk of harm and involves no procedures for which written consent is normally required. Or this can be asked as true or false: FALSE: Informed consent is required when the only form linking respondents to the research project is the IRB consent document and researchers are concerned about a confidentiality breach. TRUE: Informed consent is NOT required when research presents no more than minimal risk of harm and involves no procedures for which consent is normally required.

In the context of sampling rare populations, what is the difference between snowball sampling and screening?

Snowball sampling involves constructing lists of rare populations by using initial sets of selected members as informants for names and addresses of unknown members. Screening involves collecting data from members of the initial sample to be able to classify them as members or non-members of the rare population.

The Belmont Report highlights 3 main principles that should be followed when conducting research involving human subjects. What are they?

"Beneficence" - minimize possible harm and maximize possible benefits for the subject "Justice" - try to achieve a fair balance between those who bear the burdens of research, and those who benefit from it "Respect for persons" - informed consent

Define "bias"

"Deviation of results or inferences from the truth, or processes leading to such a deviation. Questionnaire bias can result from unanticipated communication barriers between the investigator and respondents that yield inaccurate results."

Please name TWO types of bias in question designs.

(1) Problems with wording (ambiguous questions, complex questions, double-barreled questions); (2) formatting problems.

How does respondent-driven sampling (RDS) function? Describe the recruitment process.

-- Begin with a set number of individuals, or seeds, selected purposefully from the target population.
-- Seeds are trained to recruit a set number of individuals (the recruitment quota) from their social network of peers.
-- Seeds and recruited participants receive incentives, both to be interviewed (primary incentives) and to refer additional recruits (secondary incentives).
-- This produces recruitment chains made up of several waves of recruits.
-- Recruitment continues until equilibrium is reached (the characteristics of the sample stabilize).
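The sketch below is a toy simulation of this chain-referral process, not any specific RDS software: an invented peer network, purposefully chosen seeds, a referral quota, and recruitment in waves until the target sample size is reached.

```python
import random

random.seed(1)

# Hypothetical peer network: each person's list of contacts in the target population.
network = {i: random.sample(range(200), k=random.randint(3, 10)) for i in range(200)}

def rds_recruit(seeds, quota=3, target_n=50):
    """Simulate RDS-style chain referral: each recruit refers up to `quota` peers."""
    sample, wave = list(seeds), list(seeds)
    recruited = set(seeds)
    while wave and len(sample) < target_n:
        next_wave = []
        for person in wave:
            # Coupons limit referrals to `quota` contacts not already in the sample.
            eligible = [p for p in network[person] if p not in recruited]
            for peer in eligible[:quota]:
                recruited.add(peer)
                next_wave.append(peer)
                sample.append(peer)
                if len(sample) >= target_n:
                    return sample
        wave = next_wave  # the next wave of recruits becomes the new recruiters
    return sample

print(len(rds_recruit(seeds=[0, 7, 42])))
```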

Snowball sampling: a) Briefly describe its use to sample rare populations. You may use diagrams to explain. b) What are two reasons why it is used? c) Describe the problems of bias this type of sampling introduces. d) What is an alternative strategy for identifying a rare population described in the literature?

...

True or False: a) Ecological Momentary Assessment (EMA) is an alternative to cross-sectional reports. b) Our recollections are inaccurate AND systematically biased. c) EMA focuses on between-subject variation over time.

...

What are some of the human phenomena that diaries allow psychologists to study?

...

What are some of the limitations and challenges of post-disaster research?

...

What are some reasons for doing needs assessments? a) To create community trust and buy-in b) To make sure problem is important to be assessed c) To avoid doing an evaluation on intervention process d) To account for resource allocation so no resources are wasted e) A and B only f) A, B, C only g) A, B, D only

...

Which of these options is NOT a cognitive process involved in respondent response models? • Comprehension • Judgement • Retrieval • Reporting • None of the above

...

What are the four elements of child assent?

1. A developmentally appropriate understanding of the nature of the condition 2. Disclosure of the nature of the proposed intervention 3. Assessment of the child's understanding of the information provided and the influences that are an impact on the child's evaluation 4. A solicitation of the child's expression of willingness to accept the intervention.

Discuss the challenges to writing a good survey question for collecting factual data.

1. Defining objectives and specifying the kind of answers needed to meet the objectives of the question. That is, the objective defines the kind of information that is needed from the survey. 2. Ensuring common, shared understanding of meaning of the question; having the same understanding of the key terms. All respondents must have the same understanding of what is to be reported. 3. Ensuring people are asked questions to which they know the answers. Barriers to this could be that respondents may not have the information needed to answer the question or that respondents may have once known the information needed to answer the question but have difficulty recalling it. 4. Asking questions that respondents are able to answer in the terms required by the question. Interviewers must be careful not to impose the assumption of regularity upon respondents. 5. Asking questions respondents are willing to answer accurately. This includes reducing the effect of social desirability bias, reducing response distortion, and carefully selecting question design options.

Name the ways in which respondents can fail to answer a survey question accurately

1. Misunderstood the question 2. Deliberately do not answer correctly 3. Forget 4. Unable to provide necessary information 5. Have no answer

What are the three reinterview assumptions?

1. There are no changes in the underlying construct between the two interviews 2. All important aspects of the measurement protocol remain the same 3. There will be no impact of the first measurement on the second responses

Describe the difference between a census and a sample and describe potential benefits of using a sample instead of a census.

A census gathers information about every individual in a population, while a sample is a selection of a subset of a population. Sampling is less expensive, less time-consuming, and can be more accurate in some instances because censuses can encounter issues with double-counting and capturing certain rare or special populations (e.g. homeless, undocumented individuals, people with unstable housing, migrant workers). Some samples can also lead to statistical inference about the entire population.

What is a certificate of confidentiality?

A certificate of confidentiality is granted by the Department of Health and Human Services and helps protect researchers from being compelled, in most circumstances, to disclose names or other identifying characteristics in federal, state, or local proceedings. It remains in effect for the duration of the study and protects the identity of respondents, not the data.

For each of the following scenarios, list one pretesting technique that would most directly address the problem at hand, and state why that technique would be useful. As you're drafting a survey on health insurance, you'd like to know what types of insurance plans respondents are aware of and what they know (or think they know) about each type of plan. You'd also like to get a sense of which issues respondents think are important and how they think about these issues and how they categorize/group them.

A focus group is an efficient method for determining what potential respondents know and what they do not know about the survey topics, and how they structure that knowledge. If, for example, respondents see HMOs as very different from other types of health service plans, this info can help researchers structure the questionnaire to promote the most accurate reporting.

What is a survey and what are the goals of surveys?

A survey systematically collects information on a topic by asking individuals questions. The goal of a survey is to generate statistics on the group(s) those individuals represent.

What is a target population? Give some characteristics of a target population.

A target population is the group of elements for which the survey investigator wants to make inferences by using the sample statistics. They are finite in size, have time restrictions, and they are observable. They have to specify the kind of units that are elements in the population and the time extents of the group.

What is an advantage and disadvantage of using systematic sampling?

Advantage: easy to implement as only one random number is needed Disadvantage: any hidden pattern or periodic structure in the data will introduce bias
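For illustration, a minimal systematic-sampling sketch in Python, assuming an ordered list frame whose size is a multiple of the desired sample size; note the single random start and the caveat about periodic patterns.

```python
import random

def systematic_sample(frame, n):
    """Select every k-th element after a single random start (systematic sampling)."""
    k = len(frame) // n                      # sampling interval
    start = random.randrange(k)              # the only random number needed
    return [frame[start + i * k] for i in range(n)]

frame = [f"unit_{i}" for i in range(100)]    # hypothetical ordered frame
print(systematic_sample(frame, n=10))
# Caveat: if the frame has a periodic pattern with period k, the sample will be biased.
```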

What are the advantages and disadvantages to progress indicators?

Advantages of progress indicators include motivating participants by showing them where they are in the survey; skip patterns also allow for faster progression. Disadvantages include slower download times, which may reduce interest in completing the survey.

What is an ecological momentary assessment and what is the major type of bias it reduces?

An EMA has three features: data is collected in the real-world environment, assessments focus on current or very recent states, and subjects complete multiple assessments over time. They reduce recall bias because the time between the survey and the event is usually very small.

What is an expert panel? What are some advantages to using one?

An expert panel is a small group of specialists brought together to provide advice about a questionnaire. Some advantages to using an expert panel are: they are cheaper than recruiting participants for individual interviews, an expert has likely done research in that area and will have answers to questions the audience might have, and a good panel will get multiple opinions about a subject.

What is wrong with the following questionnaire? List three. a. Are you a... i. Nurse ii. Physician iii. Assistant iv. Administrative staff v. Other: b. The medical director wants to know your thoughts on the new computers we have installed. Do you agree that these computers are useful? i. Strongly agree ii. Agree iii. Disagree iv. Strongly disagree

Answer: 1. Demographic questions are asked first (people with less power, for example assistants, may not want to express their opinions). 2. The appeal to authority ("the medical director wants to know") may bias results. 3. Leading question (asks only whether you agree, rather than whether you agree or disagree). 4. "Strongly agree" is listed first (which is more socially desirable).

Which of the following is a strength of close-ended questions? a. Less burden on respondent b. Can offer more privacy c. Good for exploratory studies d. A & B e. All of the above

Answer: D

What are some sampling advantages with focus groups?

Answer: They do not discriminate against people who cannot read or write. They can encourage participation of people afraid to be interviewed one on one. They can encourage participation from those considered "unresponsive patients".

Despite elevated prevalence of diagnosed AIDS cases, HIV infection, and related risk behaviors, minority young men who have sex with men (MSM) have been virtually invisible in general population surveys and surveys that target specific population segments such as racial/ethnic groups. Describe a method an investigator could use to generate info.

Answer: Time-space sampling is a probability-based method for enrolling members of a target population at times and places where they congregate rather than where they live. Before randomly selecting venues, times, and then individual participants, investigators must first collect information on where and when people in the target population congregate, and then verify that they actually do. It is a promising strategy for sampling MSM in minority communities because it concentrates resources where minority MSM can be found. Collecting additional data on the mobility of respondents, frequency of attendance, and those who refuse to participate can allow the investigator to estimate the probability of selection and understand selection bias.

Describe two ways to verify that your survey results are consistent.

Ask the same participant twice, ask 2 different participants the same question, ask same question to same person in different form

What is ecological momentary assessment (EMA)? What is EMA addressing?

Assessment of people "in the moment" (ex. Diary); EMA decreases recall bias

Which of the following populations is NOT considered to be a protected population under federal regulations (45CFR46)? A. Pregnant women B. Mentally disabled individuals C. Children D. Prisoners

B

This type of sampling method has the characteristic of every element having the same probability of selection but not every combination can be selected. A. Simple random sampling B. Systematic sampling C. Proportionate stratified sampling D. Cluster sampling

B. Systematic sampling

For surveys of the general population, questions should be written at what reading level? A: Less than 4th grade B: 4th-6th grade C: 8th-9th grade D: 11th-12th grade

B: 4th-6th grade

A research team would like to evaluate the reporting of crime victimization surveys by drawing samples from police records. Households in which known victims are thought to live will be sampled, interviewers will visit households to carry out a standard crime survey, and the accuracy of reporting criminal events will be evaluated by comparing the survey reports with the results from police records. Identify potential limitations of this type of study.

Because the sample is drawn from those known to have been a victim of a crime, this study design is good for detecting underreporting (failure to report an event that actually occurred)—however, there is little opportunity for measuring overreporting. Also, a record-check study based [only] on events reported to the police isn't necessarily representative of all such crimes—crime is often underreported (attempted burglary) or overreported (car theft). Thus, investigators must consider the extent to which crimes in police records are actually representative of crime.

What are the three ethical obligations to consider from the Belmont Report when conducting research?

Beneficence - minimize possible harms and maximize possible benefits for the subject; Justice - to achieve some fair balance between those who bear the burdens of research and those who benefit from it; Respect for persons - the ethical requirement for informed consent

What are two benefits and two limitations of snowball sampling?

Benefits: • allows access to those communities where trust is a prerequisite to establishing contact • allows for formal study of populations which are hard to enumerate through other methods Limitations: • the sample is not likely to be representative because it depends on who people select to recommend • the person serving as the informant may not actually be as connected to the target population as previously thought

List two benefits and two assumptions of respondent-driven sampling.

Benefits: a large sample can be obtained quickly; reproducible in different study sites; approximates a probability sample. Assumptions: relationships are reciprocated; everyone is connected within the network; there is a high level of mixing within the population; people can accurately report their degree; the population is large relative to the sample; the chain of recruitment is long enough to attain "equilibrium".

Fill in the blank: _________ is when the researcher reports data or results that have been made up. ___________ is manipulating research materials, equipment, or processes, or changing or omitting results such that the research is not accurately represented in the research record. Hint: Both answers start with an F

Blank 1: Fabrication Blank 2: Falsification

Considering random sampling and systematic sampling, which one do you think is more precise?

Both are of equal precision.

What are the differences between snowball sampling and respondent driven sampling (RDS)?

Both snowball sampling and RDS start with a convenience sample from a target population called "seeds". The seeds refer others into the sample. For RDS, there is a maximum number of participants each person can recruit. For both methods, recruitment is repeated until the desired sample size is attained.

Which of the following statements are true? I. Random sampling is a good way to reduce response bias. II. Increasing sample size tends to reduce coverage bias. III. Compared to other modes, mail-in surveys are the most vulnerable to non-response bias A. I only B. II only C. III only D. I and II only E. All of the above F. None of the above

C. Random sampling provides strong protection against bias from undercoverage and voluntary response bias, but it's not effective against response bias. Increasing sample size won't reduce survey bias, since a large sample size can't correct for the methodological problems (undercoverage, non-response bias, etc.) that produce survey bias (it can, however, reduce sampling error/random error). Mail-in surveys typically have lower response rates than other modes, making them vulnerable to non-response bias.

As an incentive for participation, why might a $2 bill be better than $5 cheque?

Cash is a better incentive than a check because of the inconvenience of cashing a check and the immediate gratification of cash; also, the novelty of a $2 bill makes it worth more to respondents.

What are the 4 steps in the survey response process?

Comprehension, retrieval, judgment, and reporting

Please briefly explain an advantage of using computer adaptive testing?

Computer adaptive testing successively selects questions to maximize the precision of the exam based on how the individual answered prior questions. It doesn't need to ask all the questions to get a score.
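To show the idea rather than any particular CAT engine, the hedged sketch below assumes a simple one-parameter logistic (Rasch-like) item bank and a crude step-size ability update; a real CAT would use maximum-likelihood or Bayesian scoring, but the item-selection logic (pick the most informative unused item for the current estimate) is the same in spirit.

```python
import math
import random

random.seed(3)

# Hypothetical Rasch-style item bank: each item has a difficulty parameter b.
bank = {f"item_{i}": random.uniform(-2, 2) for i in range(30)}

def p_correct(theta, b):
    """1-parameter logistic model: probability of a correct/positive response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta, remaining):
    # Pick the unused item whose difficulty is closest to the current ability
    # estimate; under the 1PL model this is the most informative item.
    return min(remaining, key=lambda item: abs(bank[item] - theta))

def simulate_cat(true_theta=0.8, n_items=8, step=0.5):
    theta, remaining = 0.0, set(bank)
    for _ in range(n_items):
        item = next_item(theta, remaining)
        remaining.discard(item)
        answered_correctly = random.random() < p_correct(true_theta, bank[item])
        # Crude update: move the estimate up or down (real CATs use ML/Bayes updates).
        theta += step if answered_correctly else -step
        step *= 0.8  # shrink the step so the estimate settles
    return theta

print(simulate_cat())
```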

In what situations are proxy respondents not advisable?

Considerations include how questions are phrased and the degree to which proxy and participant talk to one another about the topic being asked. It is not advisable to have a proxy answer questions on attitudes, knowledge, or perceptions.

Describe the 3 standards that questions should meet.

Content - are the questions asking the right things? Cognitive - do respondents understand the questions, and are they willing and able to answer them? Usability - can respondents and interviewers complete the questionnaire easily and as intended?

Please determine whether the statement is true or false. Reliability, which is a measure of consistency in producing the same results every time, is threatened by systematic error.

Correct Answer: False. (Reliability is threatened by random/nonsystematic error).

What are types of survey research in time order?

Cross-sectional (Present) Retrospective (Past) Prospective (Future)

What are the three main types of survey design?

Cross-sectional, retrospective, and prospective.

Patient reported outcomes (PROs) are increasingly recognized as valuable clinical research endpoints, and are used across a wide spectrum of diseases in clinical trials. How can PRO instruments, in general, be improved?

Cross-validation of instruments and standardization of interpretation of outcomes would allow greater comparability of scores across studies and diseases. The use of an electronic centralized resource can help with consistent application, interpretation, and validation of PROs between studies. Patient-Reported Outcomes Measurement Information System (PROMIS) is an example of such a database, and evaluation of PROMIS item banks and their short forms have shown they're reliable and precise measurements of generic symptoms.

Which of the following approaches to assess reliability is performed by taking measurements by the same observer for the same group of subjects, with the same instrument, under equivalent conditions, but at different points in time? A. Split-half reliability B. Alternate form reliability C. Inter-rater reliability D. Test-retest reliability

D. Test-retest reliability

An informed consent should include all of the following except: a. Study purpose b. Risks c. Benefits d. Confidentiality e. Data coding

Data Coding

What is the difference between the de facto residence rule and the de jure residence rule?

De facto residence rule: people who slept in the housing unit the previous night are included. De jure residence rule: people who usually live in the housing unit are included.

Describe the different timing designs when using EMA.

Designs include time-based: interval (every 2 hours), random (no set times); and event contingent (every time you smoke).

In terms of data collection, is it better to do a census or a sample? Why?

Despite the cost and time requirements, a census will have better data quality and is more representative.

In class we talked about the advantages of snowball sampling and screening. What are some of the disadvantages of snowball sampling and screening?

Disadvantages of screening: It is time-consuming and costly and may only produce a limited "strike rate." It can only be used where identifying characteristics are not especially sensitive or confidential. There is a potential for non-response at the screening phase; moreover, the non-response may be higher for the rare population than for the total population. Disadvantages of snowball sampling: It can be a slow, protracted, and unreliable process, and may not result in a large sampling frame.

Which of the following factors interferes with respondent memory of events? a. Passage of time b. Uniformity of events c. Low salience of event d. Intervening event e. All of the above

E. All of the above

What are types of errors in survey methodology? (select all that apply) a. Editing and processing errors b. Coverage errors c. Sampling errors d. Non-response errors e. all of the above f. None of the above

E. all of the above

What is a telescoping error? Name one way to address it.

Events in the past may seem closer to the present than they actually are. A telescoping error is made when respondents erroneously report events that actually occurred before the beginning of the reference period. One way to address this is bounding: the question is asked in two interviews, and the second interview covers only the period since the first.

What is systematic sampling? Briefly discuss its pros and cons.

Every element has the same probability of selection but not every combination can be selected. Pros: Easy to implement; only one random number is needed Cons: Any hidden pattern or periodic structure in the data will introduce serious bias

What is the difference between event-based sampling and time-based sampling? Provide an example of each.

Event-based sampling: a method of data collection whereby a recording is made each time a predefined event occurs. For example, collecting data every time someone has a panic attack. Time-based sampling: a method of data collection whereby a recording is solicited based on a time schedule, often based on random time intervals. For example, an ambulatory BP monitor taking your BP every 30 minutes.
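As a small illustration of these timing designs (interval, random, and event-contingent), here is a hedged Python sketch that generates interval-based and random time-based prompt schedules; the dates, hours, and prompt counts are arbitrary.

```python
import random
from datetime import datetime, timedelta

random.seed(5)

def random_prompt_times(n_prompts=6, start_hour=9, end_hour=21):
    """Time-based (random) EMA schedule: n prompts at random minutes within the day."""
    day_start = datetime(2024, 1, 1, start_hour, 0)
    window = (end_hour - start_hour) * 60
    offsets = sorted(random.sample(range(window), n_prompts))
    return [day_start + timedelta(minutes=m) for m in offsets]

def interval_prompt_times(every_hours=2, start_hour=9, end_hour=21):
    """Time-based (interval) EMA schedule: a prompt every `every_hours` hours."""
    return [datetime(2024, 1, 1, h, 0) for h in range(start_hour, end_hour + 1, every_hours)]

for t in random_prompt_times():
    print("random prompt at", t.strftime("%H:%M"))
# Event-based (event-contingent) recording, by contrast, is triggered by the
# participant each time the event of interest occurs, not by a schedule.
```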

Why is it important to consider wording in your survey questions design? Demonstrate your understanding of a parallel question form by providing your own example.

Wording matters because terms with different connotations can shift responses even when the underlying question is the same; parallel question forms ask about the same construct with more neutral wording. Example: How do you feel about individuals who are on welfare? (negative connotation) How do you feel about low-income individuals? (more neutral) Example: Should children of illegal aliens be eligible to receive financial aid for higher education? (negative connotation) Should children of undocumented immigrants be eligible to receive financial aid for higher education? (more neutral)

Why should one think twice before asking a survey question that relies on autobiographical memory?

Experiences are more likely to be remembered if they were emotionally salient, and the retrieval of the memory depends on the person's current mood (i.e., it is remembered as worse or not as bad depending on whether they are currently happy or sad). This can lead to biased results.

What are 2 of the five ways to systematically evaluate questions? What is the purpose of each, and what are some factors related to each?

Expert reviews - experts on the topic review the questions to consider wording and terms, unclear questions, etc.
Focus groups - a systematic discussion among a small group of target population members, guided by a moderator, to get feedback from a group of people.
Cognitive interviews - have respondents think aloud as they answer questions; this shows whether an individual understood the question, but it is not generalizable.
Field pretests & behavior coding - sample rehearsals of the data collection conducted before the main survey to evaluate the survey instrument.
Randomized or split-ballot experiments - survey designers conduct studies to experimentally compare different methods of data collection, different field procedures, or different versions of the questions.

True or false: it is generally better to have heterogeneous focus groups than homogenous focus groups.

FALSE: homogenous focus group populations are generally preferred.

Which of the following survey methods has the highest response rate and lowest response bias? a) Mailed Survey b) Web Survey c) Telephone Survey d) Face to Face Interviews

Face to Face Interviews

What are the types of validity? Briefly Explain.

Face validity: The validity of a survey at face value (Whether measurement is logical) Content validity: The extent to which a measure represents all relevant dimensions Criterion validity: The extent to which the measure agrees with or predicts some criterion of the "true" value (or gold standard) Construct validity: The extent to which relationships between measures agree with relationships predicted by theories or hypotheses

True or False: has this question successfully reduced social desirability bias? Question: "In general, I believe that drunk driving increases motor accidents." Response options: Strongly agree / Agree / Neither agree nor disagree / Disagree / Strongly disagree / Prefer not to answer

False

True or False: Fabrication is manipulating research materials or changing/omitting the results.

False

True or False: In systematic sampling, every combination of elements has the same probability of selection.

False

True or False: Open-ended questions will only elicit relevant information.

False

True or False? It is best practice to put demographic questions at the beginning of a survey.

False

True or false - A limitation of close-ended questions is that it doesn't offer respondents privacy when answering questions.

False

True or false - When using an unauthenticated survey, you can control who completes it and how many times a person completes it.

False

TRUE/FALSE: For each statement below, write "T" for a true statement and "F" for a false statement. (1) _____ Questions in the questionnaire should be written at a 6th-8th grade reading level. (2) _____ Sensitive questions should be placed at the beginning to avoid missing data. (3) _____ Self-administered questionnaires should have fewer than 10 response options per question.

False, False, True

True or False? The concept of beneficence in the Belmont Report ensures that there is fair balance between those who participate in research and those who receive the benefits of the research.

False. The idea expressed above is covered by the principle of Justice in the Belmont Report, not Beneficence. Beneficence ensures that harms to the participant are minimized and that benefits are maximized.

True or False. The likelihood of over-reporting is increased with longer recall periods and less salient events.

False. The likelihood of over-reporting is increased with shorter recall periods and more salient events.

Sampling frames are rarely perfect, and there are almost always problems that disrupt the ideal one-to-one mapping of frame elements to target population elements. Consider the following scenario. A telephone directory (the sampling frame) is used to sample adults living in telephone households (the target population). Identify [two] different problems that could arise using this frame and suggest potential method(s) to address each.

First, a telephone listing in the directory may have multiple adults living there (clustering). There could also be two telephone listings (two different phone lines) that exist for the same household (duplication). Both of these can be corrected using weighting, given the number of eligibles in the cluster and the number of duplicate entries for a given population element are both known. *note: in general, there are four potential problems. The other two include ineligible units—one of the listings belongs to a business (or any non-household) and non-coverage—a person living in a telephone household is not listed for whatever reason
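A hedged sketch of the weighting idea: assuming we know each household's number of directory listings and number of eligible adults (hypothetical fields below), duplication is corrected by down-weighting multiply-listed households, and within-household selection is corrected by up-weighting for the number of eligible adults when one adult is selected per household.

```python
# Hypothetical frame corrections for a telephone-directory sample.
respondents = [
    {"id": "A", "listings": 2, "eligible_adults": 1},  # duplicated listing
    {"id": "B", "listings": 1, "eligible_adults": 3},  # clustered: 3 eligible adults
    {"id": "C", "listings": 1, "eligible_adults": 2},
]

for r in respondents:
    # 1/listings corrects duplication; eligible_adults corrects clustering
    # when a single adult is sampled within the household.
    r["weight"] = (1.0 / r["listings"]) * r["eligible_adults"]
    print(r["id"], round(r["weight"], 2))
```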

The following question was designed to measure frequency of alcohol consumption: On days when you drink alcohol, how many drinks do you usually have—would you say one, two or three, or more? Describe two possible revisions to the question design that would improve the accuracy of responses.

First, respondents may be unclear about what counts as 'a drink.' Revising the question to include a definition of 'a drink' would resolve most of this ambiguity. Second, there's a lack of response option range; and given the sensitivity of reporting drinking behaviors, respondents may be influenced by where the boundaries are drawn to create the response categories. The investigator could either add additional response options (one, two, three, four, five, six, seven, or more), or change the response format altogether. If the investigator is interested in absolute frequencies, open questions, as opposed to closed questions, will most likely obtain higher estimates.

What are 3 methods used to evaluate draft survey questions? Describe each method briefly.

Focus group discussions help examine assumptions about the reality people will be asked about and evaluate assumptions about vocabulary. In cognitive interviews, respondents are brought into a special setting in which interviews can be recorded and observed, to determine how respondents understand questions and perform the response tasks. In field pretesting, when a survey instrument is near completion, experienced interviewers conduct 15 to 35 interviews with people similar to those who will be respondents in the planned survey.

What are the benefits of focus groups?

Focus groups capture nuances from group interaction, reveal the vocabulary the target population uses and what is important to that specific population, do not discriminate against those who cannot read or write, and distinguish common knowledge from individual knowledge.

Can you name a method for sampling a rare population? Please briefly describe the procedure of the method you named.

For example: snowball sampling. First, you recruit a sample from the target population (seeds). The seeds are then trained to recruit others into the sample. Repeat this procedure until you reach the desired sample size and the sample reaches equilibrium.

When would you use open-ended questions versus close-ended questions? What are the limitations of each type - open-ended and close-ended questions?

For focus groups and needs assessments, open-ended questions may be more beneficial; they can also be more effective for sensitive questions about behavior. Close-ended questions are commonly used for non-sensitive questions about behavior and for attitude questions. Open-ended questions are great for exploratory studies and can capture precise pieces of information that are easily recalled. Their limitations include being more time-consuming for both participants and researchers (researchers must spend more time analyzing and interpreting responses before recoding the data) and producing more irrelevant and/or repetitious information. The strengths of close-ended questions include asking less of respondents, being easier to analyze, offering more privacy, and lumping risks into options. Their limitations include respondents not being able to fit into a category because responses are fixed, data that can be skewed because respondents must choose the closest representation of their answer, and response categories with ranges that can make respondents feel they fall at an extreme and choose an option based on social desirability.

What are two of the advantages of using focus groups in questionnaire development?

Hearing the things that are important and the type of vocabulary respondents use in describing the topic. Knowing what should be included in the questionnaire and what terms to use. Focus groups do not discriminate against people who cannot read or write.

You are Ariel, the Little Mermaid, and you have been asked by Prince Eric to do a global survey of all individuals on the planet. You are able to sample the human population, and the mermaid population who are not upset with you (now that you've been turned into a human). You are missing a sizeable subset of the global population. Scuttle (the seagull) suggests that you use mean imputation. What are some problems with this approach? (Hint: What does imputation do to your sample variance?)

If you impute (for example Mean imputation), your value is derived from whatever data you already have. If you have somehow missed sampling true outliers in the population (or an entire sub-population that is hiding out somewhere), your newly constructed "missing data" set will not reflect that missing population. You are narrowing your sample variance because while you are maintaining the desired number of observations, they are more like each other.
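The variance-narrowing effect is easy to demonstrate. In this hedged sketch (simulated data, arbitrary missingness rate), mean imputation fills every gap with the same value, so the imputed data set has a visibly smaller variance than either the true complete data or the observed values.

```python
import random
import statistics

random.seed(7)

# Hypothetical complete data with roughly 30% of values missing (None).
complete = [random.gauss(50, 10) for _ in range(200)]
observed = [x if random.random() > 0.3 else None for x in complete]

obs_values = [x for x in observed if x is not None]
mean_obs = statistics.mean(obs_values)

# Mean imputation: fill every gap with the observed mean.
imputed = [x if x is not None else mean_obs for x in observed]

# The imputed data set has a smaller variance than the observed values or the
# true complete data, because the filled-in values are all identical.
print(statistics.variance(complete), statistics.variance(obs_values), statistics.variance(imputed))
```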

What is the difference between a sample survey and a census? What are the benefits and drawbacks to each?

In a census, every individual in the population of interest is surveyed. In contrast, in a sample survey only a subset of the population (usually selected with some degree of randomization) is surveyed. The benefit of a census is that it is completely representative; however, if some individuals do not answer census questions, results will be biased. By ensuring that the maximum number of individuals in a representative sample are interviewed, this bias can be reduced while interviewing far fewer individuals.

Describe one of the strategies used for improving participation:

Incentive-based strategies can use either monetary or non-monetary incentives to increase participation by "reimbursing" participants for their time (it is important not to advertise participating in studies as a way to make money). It has been found that offering every participant a smaller amount of money works better than giving them objects (like pencils) or putting them in a raffle for a chance to win a bigger prize. This is not true with every population, for example, doctors might also be interested in continuing education credits. It is important to know the desires of your target population and to adapt this tactic specifically to their needs.

What are the components of assent as it applies to pediatric research?

Information needs to be presented in a way that a child of that age can understand what's involved, the child needs to express a willingness to participate, and there needs to be a determination about what the child can understand.

List three examples of hard to reach populations.

Injection drug users, sex workers, the mafia

If asking sensitive questions about behavior, why might using loaded language be a good thing?

Invites a person to give the socially undesirable answer as opposed to giving the more acceptable answer

What is respondent-driven sampling?

It is a new chain-referral sampling technique that uses statistical adjustments for network size to produce generalizable samples. Recruitment is kind of like a pyramid, where "seeds" are selected and trained to recruit a certain quota of their peers, who are then trained to recruit from their social network. Both seeds and recruits receive incentives. In addition to the recruitment process, RDS involves a complex analytical component that is crucial to generate representative estimates and confidence intervals.

What are two advantages of diaries or other ecological momentary assessments?

It reduces retrospection bias and can provide additional or supplementary information to data obtained by other methods.

What are some common reasons for low response rates from physicians and what is one way to increase response rates?

Lack of time, perceived salience of the study, concerns about confidentiality, or individual questions that may appear biased or don't allow for a full range of responses are all possible explanations for low physician response rates. These response rates can be increased by offering monetary incentives, making questionnaires shorter and more concise, and using multiple response methods.

What are some of the main reasons physicians don't participate in surveys?

Lack of time, they don't agree with the topic the survey is about, they are concerned about the confidentiality of the results.

What is location sampling, and what are two limitations?

Location sampling involves visiting places where members of the study population are known to gather, with the aim of collecting background information on behaviors and increasing rapport with key individuals. 1) Those who don't use the location cannot be studied. 2) Different types of people use locations at different times of the year, so this must be understood in order to take it into account.

List some characteristics of an effective focus group discussion.

Making people feel at ease, fostering communication, giving all people a chance to speak, having a good leader with good interviewing skills, 5-8 people

What kinds of questions produce the best recall? (Hint: think of an event that you remember especially well!)

May answer a variant of these: (1) More recent the event (2) Greater impact or current salience (3) Consistency of the event

What is the difference between measurement errors/errors of observation and errors of nonobservation?

Measurement errors/errors of observation pertain to deviations of the answers given to a survey question from the underlying attribute being measured; that is, when the answers to the questions are not good measures of the intended constructs. Errors of nonobservation pertain to the deviation of a statistic estimated on a sample from that on the full population; an example is when the characteristics of the respondents don't match those of the population from which they are drawn. The first is related to what you ask and how people answer the questions; the second is related to who you ask and how they represent the larger population.

Which of the following is not an element of informed consent? Study purpose Procedures Risks Benefits Must remain in study until study completion Alternative treatments Voluntariness Confidentiality

Must remain in study until study completion

You are validating a survey on discrimination by first addressing construct validity (measuring the same thing with multiple measures). If two questions designed to measure discrimination do not correlate highly, do we know which question is a poor measure? Why or why not?

No. Either or both could be poor questions. If there were several measures of closely related items we might be able to determine which was more likely to be the more valid measure.

From the following choices, please match the correct definition for each of the governing principles of research ethics of the Belmont Report _____ Beneficence _____ Justice _____ Respect for Persons a) Requires for informed Consent b) Minimize harm and maximize benefits for participants c) Aim to achieve balance between those who bear burden of research and those who benefit from it

Order: B, C, A

What factors increase the likelihood of over-reporting and under-reporting?

Over-reporting occurs when respondents include events from outside the time period being asked about; to reduce it, include calendar prompts and extend recall periods. Under-reporting occurs when respondents fail to include events that should have been included within the time period; to reduce it, shorten recall periods.

What is the purpose of assent and how is it different from parental permission?

Parental permission protects the child from assuming unreasonable risks. Assent demonstrates respect for the child and his developing autonomy. In order to give meaningful assent, the child must understand that procedures will be performed, voluntarily choose to undergo the procedures, and communicate this choice.

Why is it often necessary to have study participants use diaries for ecological studies?

Participants have a limited ability to recall events (recall bias). Participants also give more weight to the most recent experience and give more weight to painful events/things that stick out in their minds.

What is PROMIS and what are the five domains?

Patient-Reported Outcomes Measurement Information System. The five domains are physical function, fatigue, pain, emotional distress, and social health.

When questions related to satisfaction are being asked, what would the response tend to be? (Choi et al) • Negative Response • Neutral Response • Positive Response • No Response

Positive Response

Short answer. Name the worst times in the year to collect survey data.

Possible answers- the winter holidays (end of November to the New Year), on Super Bowl Sunday, during the summer (depending on the state).

In questionnaire design, there are two concepts we have to consider when ordering response options: the primacy effect and the recency effect. What are primacy effects and recency effects?

A primacy effect occurs when a response option presented first (or early) in the list is more likely to be chosen than a later option; this is most common in visual modes. A recency effect occurs when a response option at the end of the list is more likely to be selected; this is most common in auditory modes.

Name TWO pros and TWO cons of self-administered survey.

Pros - more privacy; less non-response; lower social desirability bias (good for stigmatized and sensitive questions); more flexibility, which increases people's willingness to participate (though it does not necessarily increase the response rate); lower cost. Cons - higher response error; more incomplete responses (except for computer-assisted formats); we don't know how many people received the mailed survey and how many completed it.

Describe the pros and cons of using progress indicators in web administered surveys.

Progress indicators can be beneficial in the sense that they can give clues as to where abandonments tend to happen in a survey, or where in the web-administered survey respondents gave up. Also, progress indicators inform respondents of their progress and can serve as motivation to complete the survey. One con of progress indicators is that they take up additional download time, leading to lower completion rates and higher nonresponse. Also, if a web-administered survey contains many skip patterns, the indicator may not be an accurate representation of a participant's progress and may seem daunting.

What are pros and cons of web-based surveys?

Pros: 1. Multimedia capabilities, such as video and audio. 2. Real-time data validation - can give real-time feedback to participants if an entry is invalid; validation rules can include maximum values, minimum values, required answers, and maximum number of characters. 3. Can show/hide questions (conditional logic) or text based on conditions (customized end page). Cons: 1. Cost - designing a good web survey requires a significant amount of time at the beginning. 2. Security - some software packages don't conform to security standards. 3. Web layout - appearance can vary across browsers and devices.

What is respondent-driven sampling (RDS), and can respondent-driven sampling work in all contexts?

Purpose - it is an alternative means to sample most-at-risk populations for biological and behavioral surveys. What it is - a chain-referral sampling technique that uses statistical adjustments for network size to produce generalizable samples. It cannot work in all contexts: it only works for networked populations and relies on many assumptions.

What are three ways to improve interpretability?

Quality translation; reasonable patient and investigator burden; numbers that accurately apply to the clinical setting to measure change.

When sampling rare populations, what are common difficulties and what are the most effective sampling methods?

Rare populations are hard to locate, such as people with a rare disease or individuals who participate in illegal or promiscuous activities. For these rare or hidden populations, common barriers include the lack of a visible sampling frame (unknown size and boundaries) and privacy concerns (illegal or stigmatized behaviors). Snowball sampling has proven to be an effective method of sampling: researchers start with a list of selected members of the rare population, who then act as informants about otherwise unknown members of the population. This method is used because it allows researchers to reach target populations whose numbers may be small or in which some degree of trust is a prerequisite to establishing a relationship. It is also effective for studying populations that are otherwise difficult to enumerate through other methods. Of course, it introduces the chance of selection bias.

What is ecological momentary assessment and what kind of bias does it aim to minimize? Give an example of how EMA decreases this type of bias.

Researchers sample subject behaviors and experiences in real time and in the subjects' natural environment using a range of methods and methodologies. Aims to minimize recall bias. Researchers assess how the subject is currently feeling/behaving so do not have to worry about autobiographical memory not being accurate.

Please list governing Principles for research ethics.

Respect for persons; beneficence; justice.

Describe response bias and give an example.

Respondents answer questions in the way they think the interviewer wants them to answer rather than their true beliefs. Example: people underreport substance abuse in fear of being judged by interviewer.

Describe methods and tools to increase response rates among survey samples

Response rates can be increased by using multiple modes of data collection, increasing the number of attempts to contact sampled persons, extending the data collection period, using sponsorship, reducing participant burden, using pre-notification and persuasion letters, and consistent follow-up and reminders. Maximizing the number of these methods used for each sample will increase the response rates. Cash incentives have been shown to be one of the most effective methods of increasing response rates, as have consistent reminders (up to 15 for some surveys) and the use of sponsorship.

When drafting and writing questions for a questionnaire, what are the three standards that the questions should meet? (Open-ended question)

Sample Correct Answer: The three standards that the questions should meet are the content, cognitive, and usability standards. The content standard specifies whether the questions represent what you are trying to learn and whether the questions are asking the right things. The cognitive standard assesses whether the respondents are able to understand the questions, have enough information to answer the questions, and are willing and able to answer the questions. The usability standard assesses whether the respondents and interviewers are able to complete the questionnaires easily as intended.

What are some similarities and differences between snowball sampling and respondent driven sampling (RDS)? (Open-ended question)

Similarities-1)Large samples can be obtained quickly 2)Reproducible in different study sites 3)Only works for networked populations; Differences-1)RDS can restrict maximum number of participants that each person can recruit 2)RDS approximates probability sampling 3)One can use weighting in RDS (weight observations inverse to inclusion probabilities) 4)RDS relies on many assumptions

What is the difference between simple random sampling and systematic sampling?

Simple random sampling occurs when every element has the same probability of selection and every combination of elements has the same probability of selection. With a systematic sample every element has the same probability of selection but not every combination can be selected.
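To make the distinction concrete, here is a minimal Python sketch (illustrative only; the 100-element frame, the sample size, and the use of the random module are assumptions of the example, not part of the course material):

import random

frame = list(range(1, 101))   # hypothetical sampling frame of N = 100 elements
n = 10                        # desired sample size

# Simple random sampling: every element AND every combination of n elements
# has the same probability of being selected.
srs_sample = random.sample(frame, n)

# Systematic sampling: every element has the same selection probability,
# but only combinations spaced one interval apart can ever be drawn.
interval = len(frame) // n            # sampling interval k = N / n
start = random.randrange(interval)    # one random start between 0 and k - 1
systematic_sample = frame[start::interval]

print("SRS:       ", sorted(srs_sample))
print("Systematic:", systematic_sample)

Note that the systematic draw needs only the single random start, which is why it is often described as easy to implement.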

What is snowball sampling, and what are two benefits and two limitations to this method?

Snowball sampling involves using initial informants to introduce or give names of other unknown people in the target population. Benefits: 1) allows access to communities where trust is a prerequisite to establishing contact; 2) allows formal study of populations that are hard to enumerate through other methods. Limitations: 1) the sample is not likely to be representative because it depends on whom people choose to recommend; 2) the person serving as the informant may not actually be as connected to the target population as previously thought.

Why is snowball sampling used?

Snowball sampling is used to reach a target population where the numbers may be small (ex. illicit drug users). It is also used formally to study populations that have been otherwise difficult to enumerate through other methods.

Which of the following are included in the stages of questionnaire development?

Specify the research question; develop the research design; develop the questionnaire outline; review the literature; review previous questions (use focus groups and expert panels); pilot study; draft questions; test questions; review and revise. All of these are included in the stages of questionnaire development.

Who maintains standardized classification systems, and what is their purpose?

Standardized classification systems are maintained by government agencies for the purpose of standardizing measurements (e.g. of occupations or industries) to enable comparison across surveys, even across multiple countries.

In the past month, how much did you use public transportation? a. A lot b. A little c. Moderately What is wrong with the design of this question?

Starting time bias is present. The question should say, for example, "Since Jan 1, how many times did you use public transportation?" As written, each respondent will be referring to a different month depending on when they take the survey. Additionally, the response options are not ordered. Lastly, the question includes several vague words (what's "a lot" to one person may not be a lot to another, and "public transportation" can mean different things depending on what a given city has available).

What are the strengths and limitations of open-ended questions?

Strengths • Good for exploratory studies • Commonly used to elicit a precise piece of information that respondents can recall easily when there is a large number of possible answers • Useful for eliciting the frequency of sensitive behaviors Limitations • Will elicit a certain amount of irrelevant and repetitious information • Requires a greater degree of communication skill from the respondent • May take more of the respondent's time • Statistical analyses require interpretive, subjective, and time-consuming categorization of responses

Please describe two strengths and limitations of using closed-ended questions.

Strengths: questions clearer for respondents & limit irrelevant answers Limitations: respondents select fixed responses randomly & subtle distinctions among respondents cannot be established

You are asked to design a web-survey for a specific dementia unit of a nursing home. This target population is predominantly elderly, with greater prevalence of cognitive impairment (slowed or distorted thinking), severe arthritis (joint pain), and macular degeneration (vision loss). What features would you choose to include in your web-survey, and why?

THERE ARE MANY POSSIBLE CORRECT ANSWERS. Example: -cognitive impairment - present only one question at a time (small tables only, if any), single-item screen -macular degeneration - LARGE FONT, light background, dark font, make sure visual alignment easy to follow if using a table or graphics -arthritic hands - radio buttons better than free-typing

What are the overall goals and purposes of developing a survey?

The goal of a survey is to use representative, probability-based sampling so that the sample represents the entire population, and to generate group-level summary statistics. Different purposes of surveys include: exploration (needs assessment), description, hypothesis/theory development, hypothesis/theory testing, explanation or testing of causal models, prediction, and evaluation.

For each of the following scenarios, list one pretesting technique that would most directly address the problem at hand, and state why that technique would be useful. • You've solicited help from several substantive experts who have each written what she considers to be the 'best' question on a particular topic. Each is convinced s/he is right (these are, after all, academics), but only one of the questions can be included in the final version. How could you address this situation in a way that would appease both of your specialists?

The pretest could incorporate a randomized or 'split-ballot' experiment, administering two different versions of the questionnaire (the difference being the 'best' question). While this doesn't necessarily determine which question is 'best,' this type of experiment offers the clearest evidence of the impact on responses of methodological features (such as question wording) and can generate additional info to assist with such a decision.

What is the purpose of survey research?

The purpose of survey research is to generate group-level summary statistics for needs assessment, description, hypothesis/theory development, explanation/testing of a causal model, prediction (e.g. for polls), and evaluation (especially for interventions).

What is the difference between the target population and the frame population?

The target population is the set of units to be studied, but the frame population is limited to the set of the target population that has a chance to be selected into the survey sample.

Describe what the "telescope" effect means when we talk about the effect of memory on recalling events. What's one way to prevent the telescope effect?

The telescope effect means people tend to over report events, saying they happened during your time frame even if they happened before. This usually happens with big or important events and when the time frame you're talking about is shorter. One way to help with the telescope effect is to use calendar prompts to refresh the memory of your participants.

Name two methods for sampling hard to reach populations and describe each briefly in 1-2 sentences.

There are four methods for sampling hard-to-reach populations. Time-space sampling: a specific location and time are selected and then individuals are sampled from that location. Screening (e.g., random digit dialing): phone numbers are generated at random and respondents are screened with a series of questions to determine whether they fit the criteria/description of the hard-to-reach population. Snowball sampling: you start with a convenience sample of individuals from the population, and these individuals refer others into the sample until the necessary number of respondents is attained. Respondent-driven sampling: again you start with a convenience sample of individuals from the population, and these individuals each refer a preset number of individuals into the sample; once those individuals are recruited, they in turn refer the set number of new individuals, and so on until the sample size is reached.

What are the benefits and costs of sampling clusters instead of elements?

They are cheaper and so used when sampling nonclustered elements is too expensive. The tradeoff is that they result in a larger standard error of the mean from the sample.
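A standard way to quantify that tradeoff (this formula comes from general sampling theory rather than from the cards above) is the design effect, deff = 1 + (m - 1) * rho, where m is the average cluster size and rho is the intraclass correlation. A brief Python sketch with made-up numbers:

# Design effect for cluster sampling: deff = 1 + (m - 1) * rho.
# m, rho, and n below are hypothetical values chosen only for illustration.
m = 20        # average number of sampled elements per cluster
rho = 0.05    # intraclass correlation (how alike elements within a cluster are)
n = 1000      # nominal sample size

deff = 1 + (m - 1) * rho          # 1.95: the variance nearly doubles
effective_n = n / deff            # SRS sample size giving the same precision

print(f"design effect = {deff:.2f}, effective n = {effective_n:.0f}")  # ~513

So a clustered sample of 1,000 here buys roughly the precision of a simple random sample of about 500, which is the larger standard error referred to above.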

What is respondent driven sampling? (RDS)

This is a sampling method where you start with a set group of people ("seeds") selected from the target population. The seeds are then trained to recruit a certain number of individuals from their social networks of peers (the "recruitment quota"). The recruits of the seeds are then trained to recruit a set number of individuals from their own social networks, and so on. Ideally this results in several long recruitment chains.
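As a rough illustration of how the recruitment chains grow under a quota, here is a toy Python simulation (the number of seeds, the quota, the recruitment probability, and the target sample size are all invented for the example; real RDS also involves coupons, incentives, and network-size weighting):

import random

random.seed(42)

SEEDS = 3              # initial convenience sample ("seeds")
QUOTA = 3              # maximum number of peers each participant may recruit
RECRUIT_PROB = 0.6     # chance that any given referral is actually redeemed
TARGET_N = 50          # stop once the target sample size is reached

sample = []                                         # (participant, wave) pairs
frontier = [(f"seed{i}", 0) for i in range(SEEDS)]  # people waiting to recruit

while frontier and len(sample) < TARGET_N:
    person, wave = frontier.pop(0)
    sample.append((person, wave))
    # Each participant may refer up to QUOTA peers from their network.
    for j in range(QUOTA):
        if random.random() < RECRUIT_PROB and len(sample) + len(frontier) < TARGET_N:
            frontier.append((f"{person}-{j}", wave + 1))

print(f"recruited {len(sample)} participants over {max(w for _, w in sample)} waves")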

What are the reasons for us to do needs assessment?

To create community trust and buy-in; To avoid creating bad interventions; To evaluate an intervention process; Avoid wasting resources: want to target interventions in such a way that they will do the most good.

Why do you think it is necessary to collect new information when conducting a NEEDS assessment? Please list 2 reasons for that.

To create community trust and buy-in; to avoid creating bad interventions.

Cash is usually the best incentive to increase retention (True/False)

True

True or False: The four groups of cognitive processes are comprehension, retrieval, judgment, and reporting.

True

True or False. Measures must be reliable to be valid.

True

True or False: An alternative method of data collection involves psychological assessments, such as standardized tests.

True

True or False: Ecological Momentary Assessment involves repeated sampling of a subject's current behavior and experiences in their natural environment

True

True or False: Minority and less educated respondents have the tendency to agree rather than disagree with statements as a whole or with what are perceived to be socially desirable responses to questions?

True

True or False: PHP2040 Applied Research Methods is the best public health course at Brown University

True

True or False: Record-check studies are the best way to learn how well people report certain events and the characteristics of events that are reported inaccurately

True

True or false?: For very salient behaviors, the preferred question order is more general before more specific.

True

True or false: Informed consent can come in different forms including written, verbal, and electronic signature.

True. Sometimes electronic is the only means available, sometimes verbal is preferred if confidentiality is a concern; it depends on the situation, and it is up to the IRB to decide which form is most appropriate for the population at hand.

In web-based survey design, what are the advantages and disadvantages of using authenticated versus unauthenticated surveys?

Unauthenticated surveys can be useful when you do not have direct access to email addresses, and they can be completed by anyone. Authenticated surveys, however, use a participant list with email addresses through which invitations are sent out, and each participant's progress through the survey can be tracked. If a participant logs out of an authenticated survey, they can resume where they left off, which is not true of unauthenticated surveys. You can also send reminders to participants through authenticated surveys.

What is a way to overcome non-compliance with paper diaries?

Using electronic diaries will enhance compliance. The electronic diary can be a Palm computer that provides good auditory prompts to allow participants to easily answer and select questions via a touch screen.

What are the differences between bias and validity?

Validity corresponds to individual responses to questions; bias corresponds to systematic deviations in summary statistics (systematic deviation, across all trials and persons, between the response and the true value).

What is the difference between validity and reliability of a measurement?

Validity is the accuracy of the measurement and how much it is measuring what it is intended to measure. Reliability is whether or not the measure is able to produce the same result every time it is used.

Define survey validity and briefly describe the four types of validity (face validity, content validity, criterion validity, construct validity).

Validity is the extent to which a data collection procedure successfully measures a variable of interest. Face validity refers to the validity of a survey at face value and whether the measurement is logical. Content validity refers to the extent to which a measure represents all relevant dimensions. Criterion validity refers to the extent to which the measure agrees with or predicts some criterion of the "true" value or gold standard. Construct validity refers to the extent to which relationships between measures agree with relationships predicted by theories or hypotheses. • Alternate Question Format: What type of validity refers to the extent to which relationships between measures agree with relationships predicted by theories or hypotheses? • Answer Choices: A. Face validity B. Content validity C. Criterion validity D. Construct validity • Correct answer: D, construct validity

What is the difference between validity and reliability?

Validity is when a tool is measuring what it is supposed to measure whereas reliability is when a tool consistently produces the same results every time the measure is used.

If you are conducting the following 2 questions, which one will you put first, and why? A − Did you have a visit with your primary care provider in the past four weeks? B − Did you have a visit with your primary care provider in the past 12 months?

Visiting a primary care provider is a less salient behavior, so it is better to ask the more specific question before the more general one. I would ask A first, then B.

What type of survey format is best used for physicians and why?

Web surveys have the lowest response rates among physicians because there is less personalization. Paper and phone surveys provide direct contact, which increases the chances of response, so they are better choices for physician surveys.

What is response bias?

When the response to a question departs from the true value by an error

What is non-response error?

When the values of statistics computed based only from respondent data differ from those based on entire sample data

If you were designing a study and wanted to offer a financial incentive, would you offer each participant $1, $5, $10, or $20 for completing your questionnaire? Note: You're trying to be frugal while maximizing your response rate! Justify your answer.

You get the biggest increase in response rate by just providing $1. After that you have diminishing returns for each additional dollar. So if your study had a small budget, providing just $1 may be enough.

What is telescoping?

a. Respondents recall an event in the past as happening more recently than it actually did. b. Telescoping can be reduced by the bounded recall procedure.

What are three strategies for sampling rare populations, according to Dawood et al (2008)?

a. The three strategies for sampling rare populations are... i. Snowball sampling (chain-referrals), which starts with members of the rare population who act as informants to refer more members. Non-random, and therefore prone to selection bias. ii. Screening (two-phase sampling), which casts a wide net using a screening questionnaire that is structured to avoid being transparent about the rare population of interest. Limited strike rate, a high risk of false negatives, definitely not appropriate for sensitive topics. iii. Location sampling for groups that are defined by their activities/where they gather. The number of visits is the unit of analysis, and it's important to note that frequency can vary (e.g. by time of year). This is a good way for researchers to build rapport with key individuals of a rare population.

Which is NOT a characteristic of survey research? a. Purpose is to systematically collect information by asking individuals questions b. Goal is to have a convenience sample c. Goal is to generate statistics on a group or groups that individuals represent d. Usually interdisciplinary

b. Goal is to have a convenience sample

Which of the following items is not required in an informed consent document? a) statement of risks or discomforts b) statement of purpose of the research c) contact information of research in the event that the respondent has questions d) a statement indicating that all new findings from the study will be sent to the respondent

d) a statement indicating that all new findings from the study will be sent to the respondent

The likelihood of over-reporting is increased with: a. Longer recall periods and less salient events b. Longer recall periods and more salient events c. Shorter recall periods and less salient events d. Shorter recall periods and more salient events

d. Shorter recall periods and more salient events

What are the 3 principles of conduct for all research involving human subjects outlined by the Belmont Report?

i. Beneficence - minimize possible harm and maximize possible benefits for the subject ii. Justice - fair balance between those who bear the burden of research and those who benefit from it iii. Respect for persons - obtain informed consent "without undue inducement or any element of force, fraud, deceit, duress, or any other form of constraint or coercion" (US Dept of Health, Education, and Welfare, 1974)

Is it better to put sensitive questions at the beginning or the end of a survey? Explain your answer.

It's always best to put sensitive questions at the end of a questionnaire, or at least at the end of the category of questions where they are relevant. We do this to avoid making participants think we're categorizing them based on their answers to these sensitive questions. It also allows us to gather as much information as possible in the event that a sensitive question causes them to stop completing the survey or end the interview.

Which of these is NOT an example of a potential sampling advantage of focus groups? o does not discriminate against people who cannot read and write o can encourage participation by those reluctant to be interviewed o reduces difficulties in communication by allowing people with different disabilities to contribute in the same conversation o can encourage participation by those who feel they have nothing to say

o reduces difficulties in communication by allowing people with different disabilities to contribute in the same conversation

Do incentives improve the quality of surveys by countering noncooperation?

...

True or False: a) Audio+ text CASI produces statistically significant higher reports overall than text-only CASI. b) Addition of voice in high-privacy condition is related to increased social desirability effects.

...

Define and discuss pros and cons of two of the following types of probability samples: simple random sample, systematic sample, and stratified sample.

In simple random sampling, every element has the same probability of selection. Pros of simple random sampling are that it is the most basic selection process, easy to understand, and self-weighting; cons are that it is often difficult to carry out in practice, particularly because it is not always feasible to get the required sampling frame, and that important subpopulations may be missed in the sample (e.g. rare or hidden populations). In systematic sampling, individuals are selected using a pre-determined sampling interval; every element has the same probability of selection but not every combination can be selected. Pros of systematic sampling are that it is easy to implement and only one random number is needed to determine which observation to begin sampling with; a con is that any hidden pattern or periodic structure in the data will introduce bias. In stratified sampling, the population is divided into strata based on a variable of interest (e.g. gender) and a certain number of sample elements is selected from each stratum, depending on the type of stratified sampling (proportionate vs. disproportionate). Pros of stratified sampling are that it provides more control over the units of each stratum, it may increase the precision of the sampling estimates, and it allows you to ensure sufficient representation of various subpopulations of interest. Cons of stratified sampling are that strata need to be clearly defined, mutually exclusive, and exhaustive, which can sometimes be difficult; the population proportion in each stratum must be known in order to attach the appropriate weight in the statistical formulas; and it has to be possible to draw samples from each stratum.
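Simple random and systematic selection are sketched after the earlier card on those two designs, so here is a minimal Python illustration of proportionate stratified sampling (the strata, their sizes, and the overall sample size are hypothetical):

import random

random.seed(1)

# Hypothetical frame split into mutually exclusive, exhaustive strata.
strata = {
    "female": [f"F{i}" for i in range(600)],
    "male":   [f"M{i}" for i in range(400)],
}
total = sum(len(members) for members in strata.values())
n = 100  # overall sample size

# Proportionate allocation: each stratum contributes in proportion to its
# size, guaranteeing representation of each subpopulation of interest.
sample = []
for name, members in strata.items():
    n_h = round(n * len(members) / total)   # 60 from "female", 40 from "male"
    sample.extend(random.sample(members, n_h))

print(len(sample), "sampled;",
      sum(s.startswith("F") for s in sample), "from the 'female' stratum")

Disproportionate allocation would simply replace the proportional n_h with whatever per-stratum sizes (and corresponding weights) the design calls for.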

What is the difference between incentive-based and design-based strategies to improve physician response rates?

Incentive-based: monetary incentives (money to the physician or to charity, etc.; pre-paid money works best) or non-monetary incentives (stickers, pencils, pens, informational brochures, candy). Design-based: personalized mailings, user-friendly questionnaire design, sponsorship (an academic/research institution or their professional organization), etc.

Compare and contrast your 2 favorite survey scales. What are the pros and cons of each? Possible scales: Likert scale, dichotomous scales, ranking scales, paired comparisons, semantic differential scale, self-anchoring scales

Likert - Pros: offers concrete scaled choices (numbers, tick marks along a line, etc.) or a continuous scaled choice (a continuous line where the participant marks his/her own response). Cons: respondents can only choose from the choices presented. Dichotomous - Pros: scores can be summated to show the distribution of preferences. Cons: responses can only be yes/no. Ranking - Pros: gives the respondent a lot of flexibility (can write in his/her own ordering of items); if a list of choices is given, you can control the options. Cons: with free writing (no list of choices given) answers can vary widely, which is a nightmare to code (spelling errors, writing the same number twice), and the task can overwhelm the participant if the choices are too numerous. Paired comparisons - Pros: easiest for the respondent because only two things are compared at once rather than an entire list (as in ranking), and an overall ranking can easily be computed statistically. Cons: there can be many paired combinations to present, which makes the survey lengthy. Semantic differential - Pros: good for allowing the participant to respond without having the exact word to describe the degree of their answer (e.g., not entirely sympathetic nor entirely unsympathetic, but somewhere in between); the participant can simply mark a point on the line provided, and sometimes the scale uses words rather than numbers along the line. Cons: maybe the respondent actually prefers words. Self-anchoring - Pros: allows the participant a reference point. Cons: be careful about how you order the questions, as order can influence the answers chosen.

Describe two methods for testing survey questions and list at least one pro and one con of each.

• Cognitive-based interviews, focus groups, pretests, expert panels. • Cognitive-based interviews involve administering draft survey questions while collecting additional verbal information about survey responses, which is used to evaluate the quality of the response or to help determine whether the question is generating the information that its author intends (think-aloud or focused probing). A benefit of cognitive-based interviews is that they allow you to gain information about the respondent's answering process. Some cons are that being in a cognitive interview may alter participant responses, it may uncover problems that don't affect the validity of the data, it may not identify problems that actually exist in survey administration, and interviewers can introduce bias. • An expert panel is a small group of specialists brought together to provide advice about a questionnaire. Some pros of expert panels are that the members are already knowledgeable about the topic of interest, they most likely have experience with the types of questions you're trying to ask and/or the populations you're interested in, and they are inexpensive. Some cons are that they may not be able to speak to the experience of particular types of respondents and they may point out things that are inconsequential to actual respondents. • A focus group is a group of people, usually with similar characteristics, assembled for a guided discussion of a topic or issue related to survey questions. Some pros of focus groups are that they don't discriminate against people who can't read or write, they encourage participation from those who don't want to be interviewed on their own, they can be used to see what's common knowledge within the group, they can ascertain nuances in opinions, and they are sensitive to cultural variables. Some cons are that the articulation of group norms may silence individual voices of dissent, confidentiality is compromised, some people may be participating only for the compensation and thus may not share genuine opinions or experiences, and findings are not generalizable. • Field pretests are "practice runs" of the study protocol, conducted as close to the actual administration as possible. Some pros of pretests are that they allow you to see how the survey holds up in the field, they test whether a question is easy and comfortable for an interviewer to read as written, and they test whether non-paid respondents are as able and willing to answer certain questions as paid volunteers. A con of pretests is that they do not offer flexibility to probe and understand the nature of problems that interviewers and respondents encounter.

What are the four elements of child assent outlined by Bartholome?

• Developmentally appropriate understanding of condition • Disclosure of proposed intervention • Assessment of child's understanding • Solicitation of child's willingness to participate

List and briefly define three methods that researchers use to evaluate survey questions.

• Expert review panel: small group of specialists brought together to provide advice about a questionnaire • Focus group discussions: group of people usually with similar characteristics assembled for a guided discussion of a topic or issue • Cognitive-based interviews: the administration of draft survey questions while collecting additional verbal information about the survey responses, which is used to evaluate the quality of the response or to help determine whether the question is generating the information that its author intends • Field pretests: practice run of the study protocol

Discuss the strengths and limitations of the four major survey methods (mailed, web, telephone, face-to-face) with respect to response rate, response bias, cost, and quality of recorded responses.

• Mailed surveys have one of the lower response rates (45-70%) compared to telephone and face-to-face surveys, but slightly higher than web surveys. Mailed surveys have medium to high amounts of response bias, their cost is low, and the quality of recorded responses is fair to good. Web surveys have the lowest response rates (30-70%) of the four methods and have medium to high amounts of response bias. Web surveys also have low cost and very good quality of recorded responses. Telephone surveys have the second highest response rates (60-90%) and have low to medium amounts of response bias. The cost of telephone surveys ranges from low to medium and the quality of recorded responses is very good. Face-to-face surveys have the highest response rates (65-95%), low levels of response bias, and very good quality of recorded responses, but high cost.

A researcher is attempting to decrease nonresponse by offering incentives to survey respondents. Of the following incentives, which is considered the most effective method for decreasing nonresponse? • Gift cards • Checks • Money • They are all equally effective.

• Money (CORRECT ANSWER)

What are three ways to handle fractional intervals in systematic selection?

• Round the interval to the nearest integer • Treat the list as circular • Use the fractional interval and then round after the fractional selection numbers have been calculated
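A short Python sketch of the three approaches (the frame size of 52 and desired sample of 10, giving a fractional interval of 5.2, are made-up numbers; the third approach uses truncation of the running selection numbers to stand in for the rounding step described above):

import random

random.seed(7)

N, n = 52, 10
frame = list(range(N))
true_interval = N / n          # 5.2 -- not a whole number

# 1) Round the interval to the nearest integer (the realized sample size
#    may then be slightly more or fewer than n).
k = round(true_interval)                      # 5
start = random.randrange(k)
rounded = frame[start::k]

# 2) Treat the list as circular: keep the fractional interval, start anywhere
#    in the frame, and wrap around until exactly n elements are selected.
start = random.randrange(N)
circular = [frame[int(start + i * true_interval) % N] for i in range(n)]

# 3) Keep the fractional interval and convert each running selection number
#    to an integer position after it has been calculated.
start = random.uniform(0, true_interval)
fractional = [frame[int(start + i * true_interval)] for i in range(n)]

print(len(rounded), len(circular), len(fractional))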

What are four methods that researchers use to study rare populations? Choose two methods to define.

• Screening: pick a sampling frame (e.g., random digit dialing) and screen for membership in the rare population • Time-space sampling o Three phases: formative phase, preparatory phase, and the sampling phase o Formative phase: collect information on where and when people in the target population congregate o Preparatory phase: verify that people in the target population are actually at those places o Sampling phase: step 1 - sample a time/location from the list; step 2 - sample people from the sampled times/locations • Snowball sampling: start with a convenience sample from the target population ("seeds"); seeds refer others into the sample; repeat until the sample size is attained • Respondent-driven sampling: start with a convenience sample from the target population ("seeds"); seeds refer others into the sample (up to a maximum number of referrals each); repeat until the sample size is attained

In needs assessment, define the two types of needs.

• Service needs: needs health professionals believe the target population must have met to resolve a problem • Service demands: needs that those in the target population believe they must have to resolve a problem

What is Snowball Sampling? List 1 benefit and 1 limitation.

• Start with a convenience sample from target population, naming them as "seeds" • Seeds refer others into the sample • Repeat until sample size is attained Benefits: • Large samples can be attained quickly • Reproducible in different study sites Limitations: • Only works for networked populations • Non-probability sample

Why do we need to perform needs assessment? What are the benefits?

• To create community trust and buy-in • To avoid creating bad interventions • To evaluate an intervention process • Avoid wasting resources, so resources can fulfill needs of target population.

How can the recall process of respondents be improved?

• Using multiple questions to lead to a better answer • Stimulating associations likely to be tied to the event

What are the differences between validity and bias?

• Validity corresponds to individual responses to questions • Bias corresponds to systematic deviations on summary statistics. Systematic deviation across all trials and persons between response and true value, when true value is knowable.

Which of the following are valid forms of informed consent?

• Written • Verbal • Electronic signature • All of the above (correct answer: all of the above)

