exam 2 study guide

Discuss procedures for administering mail surveys. This includes descriptions of follow-up mailings and acceptable response rates

*Cover letter
*Monitoring returns
*Follow-up mailings
*Return rates: the higher the response rate, the less significant the response bias (acceptable: 50%; good: 60%)

examples of questions that follow the standard research guidelines for asking questions

(make items clear, avoid double-barreled questions, make sure questions are relevant and that respondents can comprehend them, avoid negative items, avoid biased terms and items, be culturally sensitive, etc.)

formulate open- and closed-ended questions about the same variable

*Closed-ended questions need to be exhaustive and mutually exclusive (e.g., yes or no)
*Open-ended questions should not lead respondents to answer in a specific manner (more description-based)

what is the logic of probability sampling

*Even the most carefully selected sample will almost never perfectly represent the population from which it was selected
*There will always be some degree of sampling error, which can be estimated
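
One common way to estimate it, assuming simple random sampling, is the standard error of a sample proportion, sqrt(p(1 − p) / n). A minimal Python sketch (the poll numbers are invented for illustration):

```python
import math

def proportion_standard_error(p: float, n: int) -> float:
    """Estimated standard error of a sample proportion under simple random sampling."""
    return math.sqrt(p * (1 - p) / n)

# Hypothetical poll: 55% of 400 respondents favor a program.
se = proportion_standard_error(0.55, 400)
print(f"standard error: {se:.3f}")  # about 0.025
print(f"95% CI: {0.55 - 1.96 * se:.3f} to {0.55 + 1.96 * se:.3f}")
```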

Discuss how questions should be organized and presented in a questionnaire. This involves a discussion of general questionnaire format, format of responses, ordering of questions, and writing instructions

*Format for respondents:
−Spread out and uncluttered; use boxes, or provide a code number beside each response
*Contingency questions:
−Respondents answer only the questions that are relevant to them
*Question order can affect the answers, so be sensitive to the problem. Potential strategies:
−Pretest the different forms of the questionnaire to measure ordering effects
−In self-administered questionnaires, start with the most interesting questions
−In interviews, start with generic, non-threatening questions

similarities and differences in the ways reliability and validity are handled in quantitative and qualitative studies

*Reliability does not ensure validity
*Reliability and validity in qualitative research:
−Qualitative researchers study and describe things from multiple perspectives and meanings
−The purpose is to describe things in as much depth and detail as possible

examples of nonprobability sampling techniques and when it would be acceptable to use each

*Reliance on available subjects/convenience sampling: sampling from subjects who happen to be available (e.g., a satisfaction survey of past clients)
*Purposive or judgmental sampling: the researcher uses his or her own judgment in selecting sample members (e.g., handpicking community leaders or experts known for their expertise on the target population)
−Deviant case sampling: a type of purposive sampling in which cases that do not fit regular patterns are selected to improve understanding of those patterns
*Quota sampling: the population is grouped into strata or cells by target characteristics (e.g., gender, ethnic group), a relative proportion of the total is assigned to each, and the required number of subjects is then selected from each stratum or cell
*Snowball sampling: a process of accumulation in which each located subject suggests other subjects

Explain the principle of triangulation and its functions

*Use of more than one imperfect data-collection alternative, in which each option is vulnerable to different potential sources of error
*Maximizes the chance that hypothesized changes in the dependent variable will be detected

describe how to construct a brief scale to measure a variable

*Variables may be too complex or multifaceted to be measured with only one item
Examples: depression, marital satisfaction, PTSD, social functioning

Explain the role of the interviewer in interview surveys

*Interviewers ask the questions orally and record the answers
*Appearance and demeanor: appropriate dress and grooming; a pleasant, genuine demeanor
*Familiarity with the questionnaire and its specifications
*Follow question wording exactly

what is meant by program evaluation, and what are the similarities between program evaluation and social work research?

−Assess the ultimate success of programs
−Assess problems in how programs are being implemented
−Obtain information for program planning and development
Similar to social work research because social workers conduct evaluations of what is working, what is not, what trends are happening, and how people are responding; it depends on what is being researched.

what are some potential data sources for single-case designs?

-Available records -Interviews -Self-report scales -Direct observation of behavior

different types of reliability and validity and how to measure each type.

−Interobserver and interrater reliability: the degree of agreement or consistency between/among observers or raters
−Test-retest reliability: assessing a measure's stability over time
−Internal consistency reliability: assessing whether the items of a measure are internally consistent with one another
−Acceptable reliability: above .70 or .80 (the higher, the better)
*Face validity: the measure appears to measure what it is supposed to measure; determined by a subjective assessment made by the researcher
*Content validity: the degree to which a measure covers the range of meanings included within the concept; also established through judgments
*Criterion-related validity: based on some external criterion (does my scale correlate with another measure of the same concept?). Subtypes:
−Predictive validity: the measure can predict a criterion that will occur in the future
−Concurrent validity: the measure corresponds to a criterion that is known concurrently
−Known-groups validity: the measure is able to differentiate between known groups
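
As an illustrative sketch of how two of these coefficients are computed (the respondents, items, and scores below are invented), test-retest reliability is often reported as a correlation between administrations, and internal consistency as Cronbach's alpha:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical data: 5 respondents, 3-item scale, administered twice.
time1 = np.array([[3, 4, 3], [2, 2, 3], [5, 4, 5], [1, 2, 1], [4, 4, 4]])
time2 = np.array([[3, 3, 4], [2, 3, 2], [5, 5, 4], [1, 1, 2], [4, 5, 4]])

# Test-retest reliability: correlation of total scores across time.
r = np.corrcoef(time1.sum(axis=1), time2.sum(axis=1))[0, 1]
print(f"test-retest r = {r:.2f}")  # want above .70
print(f"Cronbach's alpha = {cronbach_alpha(time1):.2f}")
```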

what are some of the ways in which vested interests can impede the atmosphere for free, scientific inquiry in program evaluation?

−Stakeholders with vested interests in a program's success may pressure evaluators or resist negative findings, which can damage the integrity of the evaluation

what are the issues and trade-offs involved in using in-house evaluators as opposed to external evaluators?

In-house evaluators:
−Better knowledge of the treatment/program
−Faster and cheaper
−Better input from staff
Issues: in-house evaluators can be swayed by bias, whereas external evaluators offer more independence.

how can researchers attempt to increase external validity

−By conducting field experiments: in a field experiment, people's behavior is studied outside the laboratory, in its natural setting
−Through replication, researchers can study a given research question with maximal internal and external validity

what are the steps that program evaluators can take to deal with potential resistances to an evaluation and to foster its utilization?

−Needs assessment
−Better communication (clarity)
−Allow staff feedback
−Conduct a pilot study
−Explain the purpose of the study

Identify and differentiate among the alternative types of program evaluation

−Needs assessment: evaluates the needs of a target population
−Process evaluation: identifying strengths and weaknesses and recommending needed improvements
−Monitoring program implementation: evaluates how well a program is implemented and maintained
−Evaluating outcome and efficiency: the goal-attainment model; assesses whether goals are achieved effectively

explain the three criteria for causality

1) The cause (independent variable) must precede the effect (dependent variable) in time
2) The two variables are empirically correlated with one another (they covary)
3) The observed empirical correlation between the two variables cannot be due to the influence of a third variable

explain seven threats to internal validity

1. History
2. Maturation
3. Testing
4. Instrumentation changes
5. Statistical regression
6. Selection bias
7. Ambiguity regarding the direction of causal inference

advantages and disadvantages of surveys

Advantages: quick; can be administered in a number of ways; closed-ended questions provide quantitative data; can be anonymous.
Disadvantages: questions may be misunderstood; closed-ended questions provide only partial depth; the researcher must understand survey design to use it well; quantitative surveys are less useful when the surveyed group is small.

explain and give examples of topics for which survey research is an appropriate method of observation.

Best method for describing a population that is too large to observe directly, e.g., national attitudes and opinions measured through public opinion polls

what are the logic and phases of single-case designs?

Control phase:
−A baseline (repeated measurement of the outcome) is obtained for the subject's target problem
−Internal validity is enhanced when the baseline has enough measurement points to show a stable trend and to establish the unlikelihood that extraneous events will coincide with the onset of the intervention
Experimental phase:
−An intervention is introduced, and repeated outcome measures are continued
Control- and experimental-phase data are examined to identify coinciding shifts and trends, supporting inferences about the effectiveness of the intervention.
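
A minimal numeric sketch of an AB design (the weekly scores below are invented): compare the level of the baseline phase with the intervention phase and look for a shift that coincides with the onset of the intervention:

```python
# Hypothetical weekly scores on a target problem (lower is better).
baseline = [8, 7, 8, 9, 8, 8]       # A (control) phase: repeated baseline measures
intervention = [7, 6, 5, 4, 4, 3]   # B (experimental) phase: intervention introduced

def mean(xs: list[float]) -> float:
    return sum(xs) / len(xs)

# A stable baseline followed by a clear shift in level/trend at the
# onset of the intervention supports an inference of effectiveness.
print(f"baseline mean:     {mean(baseline):.1f}")      # 8.0
print(f"intervention mean: {mean(intervention):.1f}")  # 4.8
print(f"shift in level:    {mean(baseline) - mean(intervention):.1f}")
```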

external validity and the methodological problems that limit it

Definition: the extent to which a causal relationship depicted in a study can be generalized beyond the study conditions
−Influenced by the representativeness of the study sample, setting, and procedures
−High internal validity does not guarantee that the study will be generalizable: the intervention may appear to have caused the change, yet it cannot easily be applied to others outside of the study

how are samples used to describe populations?

Examples of populations that can be sampled from a sampling frame include elementary school children, high school students, church members, factory workers, and members of professional associations. A sampling frame may not include all members of a population.

what are the possible pitfalls in carrying out experiments & quasi-experiments in social service agencies and strategies to avoid them?

Four practical pitfalls:
−Fidelity of the intervention
−Contamination of the control condition
−Resistance to the case assignment protocol
−Client recruitment and retention
Strategies to avoid them include blind ratings, in which raters do not know participants' experimental or control group status or the research hypothesis.

advantages and disadvantages of the basic types of nonprobability sampling techniques.

Nonprobability samples are generally less representative and less reliable than probability samples, but they are often easier, faster, and cheaper to obtain

how do you construct a questionnaire properly?

Include clear instructions and introductory comments when appropriate

explain and give examples of research reactivity

−Measurement bias
−Experimental demand characteristics / experimenter expectancies
−Obtrusive observation
−Novelty effects
−Placebo effect

what are the practical obstacles to the feasibility of single-case designs?

Obstacles:
−Client crises may not allow practitioners to collect sufficient baseline data
−Heavy caseloads increase the difficulty of collecting repeated measures
−Peers and supervisors may not recognize the value of single-case research
−Clients may resent extensive monitoring

how does diffusion or imitation of treatments or compensatory rivalry affect the validity of an experiment or quasi-experiment

Sometimes treatment and control group participants are able to communicate with each other, resulting in an exchange of information between groups; this is known as diffusion or imitation of treatments, and it can affect validity. Compensatory rivalry exists when the study group not receiving the experimental treatment (i.e., the intervention) feels disadvantaged, disappointed, or left out and decides to obtain a similar intervention on its own. This significantly affects validity because it tampers with the impact of the intervention, changing the results from what they would otherwise have been.

common biases that contribute to systematic measurement error

The most common way our measures systematically measure something other than what we think they do is when biases are involved, e.g.:
−Social desirability bias
−Cultural bias

threats to internal validity in experimental and quasi-experimental designs

Additional threats: diffusion or imitation of treatments; compensatory equalization; compensatory rivalry or resentful demoralization; attrition

advantages and disadvantages of open- and closed-ended questions and how they are used

Open-ended advantages: you get a more detailed answer.
Open-ended disadvantages: people may be less honest, answers may reflect bias depending on the question, and responses take longer to collect.
Closed-ended advantages: you get a direct answer without bias; usually easier and faster to answer.
Closed-ended disadvantages: you don't get an explanation for the more direct response.

advantages and disadvantages of different methods of collecting data for needs assessments

Advantages: describes the characteristics of a large population; makes a large sample feasible; makes findings more generalizable; enables analysis of multiple variables; flexible analysis; uniform measurement; strong reliability.
Disadvantages: lack of context; inflexibility in design; artificiality; weak validity.

advantages and disadvantages of online surveys

Advantages: quick and inexpensive; ideal for some populations.
Disadvantages: representativeness; technological problems.

advantages and disadvantages of cross-sectional studies

Advantages: can be used to support or disprove assumptions; not costly to perform and does not require a lot of time; captures a specific point in time; contains multiple variables at the time of the data snapshot; the data can be used for various types of research.
Disadvantages: cannot be used to analyze behavior over a period of time; does not help determine cause and effect; the timing of the snapshot is not guaranteed to be representative.

what is single-case methodology?

an evaluation method that can be used to rigorously test the success of an intervention or treatment on a particular case (i.e., a person, school, community) and to also provide evidence about the general effectiveness of an intervention

Describe the use of surveys in needs assessment

Asking the community or target groups directly what is needed, rather than doing a purely observational needs assessment

how and why can the utilization of program evaluation findings be influenced by political, ideological, and logistical factors?

Behavior and life in general are affected by these factors, so a program evaluation or needs assessment can also be affected by them; depending on what is being evaluated, political, ideological, and logistical factors act as external influences on the people involved.

what are some signs of basic single-case graphs that are and are not visually significant.

Graphs that are visually significant show a clear shift in outcomes that coincides with the intervention; in graphs that are not visually significant, it is hard to see the impact of the intervention.

what are the ethical controversies regarding delaying or withdrawing intervention in order to establish baselines?

Delaying or withdrawing an intervention could distress the person being treated, because it creates or prolongs instability

what are the special problems in measurement and data gathering pertaining to single-case methodology?

Measurement issues:
−Operationally defining the target problem and goals
−Choosing what to measure
−Using triangulation of measurement
−Using multiple indicators of the target problem

what are some approaches to increase rigor of single case designs?

Monitoring client progress, along with careful data gathering and analysis

explain the logical arrangements used in nonequivalent comparison groups designs and time-series designs, and identify the advantages and disadvantages of those designs

Nonequivalent comparison groups design: the comparison group does not receive the intervention; two existing, nonrandomly assigned groups are compared.
−Pro: scores can be compared before and after treatment both in the group that receives the treatment and in a nonequivalent comparison group that does not
−Con: because assignment is not random, the comparison can be biased
Time-series design: repeated measurements over time; no minimum number of measurements is required.
−Pro: additional measurements strengthen the design
−Con: it is limited to the intervention being studied; the intervention may not work, or unknown external events may alter the results

probability and nonprobability sampling

Probability sampling: the use of random procedures to select a sample, which allows us to estimate the expected degree of sampling error in a study and to determine or control the likelihood of specific units in a population being selected. The basic principle is that all members of the population have an equal chance of being selected, known as the equal probability of selection method.
Nonprobability sampling: used when probability or random sampling is not possible or appropriate (e.g., with homeless individuals). Generally less reliable, but often easier and cheaper; there are four main types.

quantitative and qualitative sampling purposes and method

Qualitative:
−Deviant case sampling
−Intensity sampling: cases are selected because they are more or less intense than usual, but not enough to be called deviant
−Theoretical sampling: associated with grounded theory; selecting cases until no new insights are generated, then selecting different types of cases until they too generate no new insights (saturation)
Quantitative: typically relies on probability sampling to obtain representative samples that support generalization to the population.

Identify, explain and distinguish between reliability and validity.

Reliability: a particular measurement technique is considered reliable if, when applied repeatedly to the same object, it yields the same result each time. The more reliable the measure, the less random error.
Validity: the extent to which an empirical measure adequately reflects the real meaning of the concept under consideration. Example: a scale to measure depression asks about energy level, interest, and sadness.

descriptions of the basic types of probability sampling (3 types)

Simple random sampling: assign numbers to all elements and auto-generate random selections
Systematic sampling: elements are chosen based on a sampling interval (the standard distance between selected elements); the sampling ratio is the proportion of elements in the population that are selected; the first element is selected at random to avoid bias
Stratified sampling: involves grouping members of a population into homogeneous strata before sampling (e.g., by ethnic group or gender); improves the representativeness of a sample by reducing the degree of sampling error
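
A minimal sketch of the three procedures (the population, sample size, and strata below are invented for illustration):

```python
import random

population = list(range(1000))   # element IDs in the sampling frame
n = 100                          # desired sample size

# Simple random sampling: assign numbers and draw at random;
# every element has an equal chance of selection.
srs = random.sample(population, n)

# Systematic sampling: every k-th element after a random start.
k = len(population) // n         # sampling interval (sampling ratio = 1/10)
start = random.randrange(k)      # random first element avoids bias
systematic = population[start::k][:n]

# Stratified sampling: group into homogeneous strata, then sample
# proportionately within each stratum.
strata = {"group_a": population[:600], "group_b": population[600:]}
stratified = []
for members in strata.values():
    share = round(n * len(members) / len(population))
    stratified.extend(random.sample(members, share))
```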

advantages and disadvantages of the basic types of probability sampling

Simple random sampling advantage: the most fundamental technique in probability sampling. Disadvantage: laborious and time-consuming.
Stratified sampling advantage: improves the representativeness of a sample by reducing the degree of sampling error.
Systematic sampling advantage: can be efficient. Disadvantage: the list itself can be biased.

strengths and weaknesses of single-case designs from the standpoint of internal and external validity

Strengths: high internal validity makes single-case designs a useful tool for identifying promising interventions for testing in subsequent studies. Replication results in the accumulation of evidence to support generalizability, advances the scientific basis of an intervention, and is useful in evaluating an agency or program.
External validity weakness: findings come from a single case; other factors can influence what is being studied in either a positive or negative way, so the observed change may not be because of the intervention.

difference between systematic and random measurement error

Systematic error: the information we collect consistently reflects a false picture (bias in a consistent direction)
Random error: random errors have no consistent pattern of effects; they add noise but do not bias the measure
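
An illustrative simulation of the difference (the true score and error sizes are invented): random error averages out over many measurements, while systematic error shifts every measurement in the same direction:

```python
import random

true_score = 50.0
n = 10_000

# Random error: zero-mean noise with no consistent pattern.
random_err = [true_score + random.gauss(0, 5) for _ in range(n)]

# Systematic error: a consistent bias (e.g., social desirability
# inflating every answer by 5 points) plus the same noise.
systematic_err = [true_score + 5 + random.gauss(0, 5) for _ in range(n)]

print(f"mean with random error only: {sum(random_err) / n:.1f}")     # about 50
print(f"mean with systematic error:  {sum(systematic_err) / n:.1f}") # about 55
```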

what is the role of blind ratings and the problem of rating bias in group design

The role of blind ratings is to avoid measurement bias. The problem with ratings that are not blind in a group design is that raters who know participants' group status tend to rate in a biased way, because they want the results to look a certain way to others.

what are practical pitfalls that are likely to be encountered in attempting to implement experiments or quasi-experiments in service-oriented agencies?

−Fidelity of the intervention
−Contamination of the control condition
−Resistance to the case assignment protocol
−Client recruitment and retention

