FYC4622 Exam 3

diffusion or contamination threats

A diffusion threat to internal validity (also known as a contamination threat) occurs when the intervention delivered to one group "diffuses" to the other.

When should programs conduct process evaluations?

- During the early or "pilot testing" stage of a program.
- On an ongoing basis.
- When an established program is undergoing major restructuring.
- In conjunction with an outcome evaluation.

Describe five of the seven characteristics of good measurement tools. (SHORT RESPONSE)

1. Simple: Collect only what is needed, and minimize the requirement for narrative.
2. Realistic: Think twice about including information on standard forms that is difficult to obtain or very sensitive for program participants.
3. Used consistently: Everyone who is going to use a tool or form needs to understand what information is being gathered, when it is gathered, and what its intended use is.
4. In a useful form: Short-answer questions are much easier to handle than a long narrative.
5. A measure of the right construct: Even a very good instrument isn't useful if it measures the wrong thing.
6. Appropriate for the target audience: Factors related to age, culture, language, and other issues will affect how well you are able to collect data.
7. Easy to administer: Much of your data collection will likely occur during program activities.

Describe Dr. Diehl's five tips for selecting measurement instruments. (SHORT RESPONSE)

1. Make sure you are measuring something that is central and important to the program.
2. Identify the key concept to be measured and then search for corresponding instruments.
3. Find instruments that are valid, reliable, and sensitive to change.
4. Find instruments that are practical for your resources and audience (i.e., are currently being used by programs).
5. Find instruments that are high-quality (e.g., can be found in peer-reviewed articles).

What are the threats to validity described in this article?

1. History
2. Maturation
3. Testing
4. Instrumentation
5. Mortality
6. Differential attrition
7. Statistical regression

What are the advantages and disadvantages of experimental designs?

Advantages:
- Can enable the program to draw cause-effect conclusions based on the findings.
- Can produce findings that are generalizable to a large population (if the sample size is large enough).
- Is considered the strongest design, with the most credible findings.
Disadvantages:
- Can be the most costly to implement.
- Requires specific staff expertise to implement.
- May raise concerns among internal or external stakeholders about denying program participants the services available in the program or new initiative.

What are the advantages and disadvantages of quasi-experimental designs?

Advantages:
- Can be a relatively strong design when random assignment is not feasible.
- Can be implemented with fewer resources than experimental designs.
- Can be relatively simple to implement with in-house staff.
Disadvantages:
- Does not allow cause-and-effect conclusions to be drawn from the findings.
- It may be difficult to find a comparison group with similar characteristics.
- The findings apply only to the sample examined and cannot be generalized to a larger population.

What are the key elements of the Under One Sky Theory of Change model?

Assumptions:
- Youth have assets, can identify those assets, and can share their gifts.
- Families who want to adopt youth can be identified.
- Adoptions will be more likely to succeed if they are based on the assets of both youth and families.
- Families and youth are supported during the camp methodology.
Community assets:
- Potential families interested in adoption.
- NC Kids (to find families).
- 4-H camp with the self-discovery camp methodology.
Issue: Getting youth adopted out of foster care.
Activities:
- Youth (Passages camp).
- Potential families identify strengths.
- Sharing of assets together.

Understand the key messages of the "Evaluation Tips" listed in the sidebars.

- Choose the logic model design that best suits your needs.
- Ask as many, and as diverse, a group of stakeholders as you can what they would like to get out of the program.
- Consider the intensity and duration of activities to ensure realistic outcomes.
- Pay attention to the needs of your stakeholders, as they provide an important view on the success of your program.
- Begin with small evaluations to maximize resources, then grow as needed.
- Use social capital to seek help from those around you with program evaluation experience.
- Seek help from external evaluators who may have experience with similar programs.
- Try to identify whether the research needed for your evaluation has already been done.
- Understand who uses the program.
- Identify whether your organization has the resources to pursue a program evaluation.

What is the difference between anonymity and confidentiality?

Confidential:
- Names or other identifiers are used to follow up with non-respondents or to match pre-test/post-test data.
- Individual data are NOT shared with anyone! The information is not used for any other purpose.
- Confidentiality must never be breached! This pledge is crucial in attaining honest, complete answers from respondents.
- Identifying information is destroyed after the survey is complete.
Anonymous:
- Names are not asked of respondents. Because no other identifying codes are used, the researcher is unable to follow up with non-respondents or match pre-test/post-test data. This may not be a problem when doing random interviews (such as exit surveys).
- Collecting basic descriptive information about respondents is still useful for comparing respondents with the population.
- One possible way to maintain anonymity while also keeping track of non-respondents is to send a separate postcard with the questionnaire. The respondent can return it separately, enabling him or her to declare that "John/Mary Doe has returned the questionnaire."

Understand the factors related to data quality. (think of what questions you need to ask to screen your data for quality)

- Representativeness: How well does your information represent the target group or community of interest?
- Completeness: Do you have all of the information you intended to collect for everyone?
- Comprehensiveness: Did you collect information on all of the factors you want to include in your analysis and reports?
- Cleanliness: Are your data relatively free from errors and inconsistencies?
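
To make these screening questions concrete, here is a minimal Python sketch (not from the course materials; the records and column names such as participant_id, age, pretest, and posttest are hypothetical) showing how completeness, comprehensiveness, and cleanliness might be checked with pandas:

```python
import pandas as pd

# Hypothetical participant records (column names are illustrative only)
data = pd.DataFrame({
    "participant_id": [1, 2, 2, 4],
    "age": [12, 14, 14, 47],        # 47 is an implausible entry for a youth program
    "pretest": [55, 60, 60, None],  # None = a missing value
    "posttest": [62, 64, 64, 70],
})

# Completeness: do we have all intended information for everyone?
print("Missing values per column:\n", data.isna().sum())

# Comprehensiveness: did we collect every factor we plan to analyze?
planned_columns = {"participant_id", "age", "attendance", "pretest", "posttest"}
print("Planned columns not collected:", planned_columns - set(data.columns))

# Cleanliness: are the data free of obvious errors and inconsistencies?
print("Duplicate records:", data.duplicated(subset="participant_id").sum())
print("Implausible ages:", ((data["age"] < 5) | (data["age"] > 18)).sum())
```

Representativeness would additionally require comparing these respondents against the target population, for example using school records or census data.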

rivalry or resentment threats.

Rivalry or resentment threats can affect the evaluation's conclusions in either direction. Depending on the situation, they can either increase or decrease the differences between groups in "before" to "after" changes.

Understand the tasks and purposes described in the five-tiered evaluation approach.

Tier 1
- Task: Conduct a needs assessment.
- Purpose: To address how the program can best meet the needs of the local community.
- Methods: Determine the community's need for an OST program.
Tier 2
- Task: Document program services.
- Purpose: To understand how program services are being implemented and to justify expenditures.
- Methods: Describe program participants, services provided, and costs.
Tier 3
- Task: Clarify your program.
- Purpose: To see if the program is being implemented as intended.
- Methods: Examine whether the program is meeting its benchmarks and whether it matches the logic model developed.
Tier 4
- Task: Make program modifications.
- Purpose: To improve the program.
- Methods: Discuss with key stakeholders how to use the evaluation data for program improvement.
Tier 5
- Task: Assess program impact.
- Purpose: To demonstrate program effectiveness.
- Methods: Assess outcomes with an experimental or quasi-experimental evaluation design.

What were the key elements of the Situation Statement for the Once Upon a School example?

Issues:
- Too many issues and not enough people to tackle them.
- Work on language, reading, etc.
- Not enough resources.
- Learning disabilities; English as a second language.
- Underfunded schools.
Assets:
- Publishers, editors, and writers with flexible schedules.
- The built structure of the office.

What is reliability? What is validity? (understand the arrows/targets analogy)

o "Reliability estimates the consistency of your measurement, or more simply the degree to which an instrument measures the same way each time it is used in under the same conditions with the same subjects... o Validity, on the other hand, involves the degree to which you are measuring what you are supposed to, more simply, the accuracy of your measurement...

Describe the five steps in designing your own measurement tools. (SHORT RESPONSE)

- Adapt an existing tool. You may have found something that seems almost appropriate.
- Review the literature. It is helpful to know what is being written about in terms of both your intervention and the issue you are trying to address.
- Talk to other programs. People who work in programs that are similar to yours, or who work with a similar target population, may have tools that will be useful to you.
- Talk to those with expertise or experience. National and local experts on the issue you are addressing, the population you are serving, and your community may have ideas about how to measure the outcomes your program seeks to achieve.
- Pilot test tools. Pilot testing refers to trying your tool with a few representatives of the group with whom the tool will be used.

What are the four questions about already-completed evaluations that can help shape future evaluations?

- Are there program areas assessed in the prior evaluation showing significant room for improvement that we should examine in subsequent evaluations?
- Are there findings that were not very helpful that we want to remove from consideration for future data collection?
- Do the results of our formative/process evaluation suggest that our program is sufficiently well implemented to allow us to now look at outcomes?
- Are there specific aspects of our program (e.g., participant subgroups served, subsamples of types of activities) that it would be helpful to focus on in subsequent evaluation activities?

Describe Dr. Diehl's five takeaway messages about Once Upon a School. (SHORT RESPONSE)

- Be proactive and try something: Take action, no excuses.
- Harness passion and creativity: It is remarkable what you can accomplish with passion, creativity, and sustained energy.
- Respect program values and tone: When planning and evaluating a program, you have to be very careful not to interfere with the core program and its tone.
- Adapt language and process: You can address some of these challenges through your language and process (e.g., you can construct a logic model without ever calling it a logic model).
- Be "unbounded": If we desire innovation and creativity, it is sometimes good to ignore the traditional boundaries and rules.

Why should programs consider conducting outcome evaluations?

- Describe and understand the characteristics of the participants.
- Strengthen program services and identify training needs for staff and volunteers.
- Help understand program success and identify the services that have been the most effective.
- Develop long-range plans for the program.
- Bring programmatic issues to the attention of board members.
- Attract funding to the program.

What are the ways in which you can make your evaluation findings more interesting and accessible?

- Digital and social media. For instance, you may want to create multimedia presentations that use videos or photos highlighting program activities, or audio clips from youth participants. You can also use sources like Facebook, Twitter, and other social media outlets to publicize your evaluation findings.
- Descriptive examples. Such examples can bring the data to life so that the data become more user-friendly and accessible to multiple audiences. Descriptive examples can also help attract local media, who often like the "human interest" side of an evaluation. Elements that you can add to increase the appeal of your reports include testimonials from satisfied families of program participants and multimedia materials as outlined above.
- Visual representations of your data and strategies. These visuals can help to break up the text. They can also be used to help make the information more accessible and dynamic. Such visuals can include charts and graphs highlighting evaluation findings, depictions of logic models, and graphics illustrating complex relationships or systems.

Describe the four points Dr. Diehl made about his evaluation philosophy and provide a real-life example to illustrate one of these points.

- Evaluation should be done in partnership and needs to meet the ____________.
- Evaluation needs to be tailored to the conditions of the program (consider the resources, intensity of the program, content, etc.).
- Evaluation should be used to encourage program quality and improvement.
- In most cases, we should be measuring that which we think we can _________.

What are the key evaluation questions for Under One Sky?

- How satisfied are youth with the program?
- Are prospective adoptive parents more receptive to adoption?
- *How well are youth building life skills?
- *How much does the program increase the assets of youth?
- How much does the program improve youths' attitudes toward adoption?
- *How successful is the program at placing youth in adoptive homes?

What are the four ways we can determine whether a measure is reliable?

- Inter-rater or inter-observer reliability: Used to assess the degree to which different raters/observers give consistent estimates of the same phenomenon.
- Test-retest reliability: Used to assess the consistency of a measure from one time to another.
- Parallel-forms reliability: Used to assess the consistency of the results of two tests constructed in the same way from the same content domain.
- Internal consistency reliability: Used to assess the consistency of results across items within a test.
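
As an illustration only (the item responses and scores below are invented), two of these reliability estimates can be computed with a few lines of Python: Cronbach's alpha for internal consistency and a Pearson correlation for test-retest reliability.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency: items is an (n_respondents, n_items) array."""
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    n_items = items.shape[1]
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses: 5 respondents x 3 survey items on a 1-5 scale
responses = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 3, 2],
    [4, 4, 4],
])
print("Internal consistency (Cronbach's alpha):", round(cronbach_alpha(responses), 2))

# Test-retest reliability: correlate the same measure given at two time points
time1 = np.array([10, 14, 9, 16, 12])
time2 = np.array([11, 13, 10, 15, 12])
print("Test-retest reliability (Pearson r):", round(np.corrcoef(time1, time2)[0, 1], 2))
```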

What were the four lessons about outcome evaluation?

- It can be extremely valuable to be outcomes-focused because it keeps us focused on the change we are aiming for.
- However, our current approach, what Michael Patton calls "outcomes mania," can mean that we are abusing the power of outcomes.
- We need to be reasonable and realistic about what outcomes we can achieve given the issues we are addressing, the resources we have, and the approaches we are using.
- Details, details, details: Are the outcomes meaningful? Are they achievable? Do you have the resources and expertise to measure them? Are there unreasonable ...

Understand the four major points for "making decisions with your data."

- Look for consistency. Do the findings seem to relate to one another in expected ways? Two indicators that measure the same thing should change in the same direction. For example, if one measure of mental health shows improvement, a similar measure should also improve.
- Look for trends. Does the progression over time make sense? For example, children should get larger and age should increase.
- Ask questions. Particularly if you have the assistance of an evaluation consultant or data analyst, be sure that all of your questions are answered such that you understand and can explain the results.
- Check with others. Review the findings with a representative group of everyone involved in the project, particularly program staff and people from the target population. Ask one another whether the findings make sense and reflect together on what they mean.
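
A minimal sketch of the first two checks, using hypothetical participant records and made-up indicator names (wellbeing and mood standing in for two measures of the same construct):

```python
import pandas as pd

# Hypothetical pre/post records for four participants
records = pd.DataFrame({
    "participant": ["A", "B", "C", "D"],
    "wellbeing_pre":  [3, 4, 2, 5],
    "wellbeing_post": [4, 5, 3, 5],
    "mood_pre":  [2, 4, 2, 4],
    "mood_post": [3, 5, 3, 4],
    "age_pre":  [12, 13, 12, 14],
    "age_post": [12, 13, 13, 14],
})

# Consistency: two indicators of the same construct should change in the same direction
wellbeing_change = records["wellbeing_post"] - records["wellbeing_pre"]
mood_change = records["mood_post"] - records["mood_pre"]
print("Correlation between related indicators:", round(wellbeing_change.corr(mood_change), 2))

# Trend: age should never decrease between pre and post
print("Rows where age decreased:", (records["age_post"] < records["age_pre"]).sum())
```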

Describe five of the six typical questions for process evaluation. (SHORT RESPONSE)

- Needs and assets: What are the needs of the participants? The assets?
- Quality: How well is the project being implemented? What can be improved?
- Coverage: Are the intended services being delivered to the intended persons? Is the population in need being reached?
- Fidelity: Is the project operating as planned?
- Accessibility/engagement: Are stakeholders engaged? Do they stick with the project?
- Satisfaction: Are the key audiences satisfied with what is being delivered?

What are the four key lessons about process evaluation?

- Process evaluation is currently undervalued (a value statement by me) and certainly much less valued than outcome evaluation (a factual statement, I believe).
- However, if you are going to achieve outcomes, it is critical that program implementation is high-quality (and process evaluation helps improve quality).
- Process evaluation helps you understand how and why a program works.
- If you are ever going to replicate your program elsewhere, process information will be essential.

What are the four main ways of collecting program participation data?

- Program participant demographics can be collected through program application forms that parents complete as a prerequisite to enrolling their child in your program.
- Program attendance data can be collected through daily sign-in sheets for participants or a checklist/roll call by program staff.
- School and community demographic data can be collected through sources such as school records and census data.
- Feedback from participants can be collected through surveys or interviews of participants and their parents to get a sense of why they participate (or why not) and their level of engagement in your program.

What are the cautions about relational or ex post facto studies?

Relational:
- In a relational study, "cause and effect" cannot be claimed; only that there is a relationship between the variables.
- Variables that are completely unrelated could, in fact, vary together due to nothing more than coincidence.
Ex post facto:
- This relational research method is also used when a true experimental design is not possible, such as when people have self-selected levels of an independent variable or when a treatment is naturally occurring and the researcher could not "control" the degree of its use (represented by the question mark next to the X that depicts the treatment in the model).

What are the kinds of error that survey design should address?

- Sampling error: How representative is the group being surveyed?
- Frame error: How accurate is the list from which respondents are drawn?
- Selection error: Does everyone have an equal chance of being selected to respond?
- Measurement error: Is the questionnaire valid and reliable?
- Non-response error: How is the generalizability of findings jeopardized because of subjects who did not reply?
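
One common way to quantify sampling error, shown here only as an illustration with assumed numbers, is the margin of error for a proportion at a 95% confidence level:

```python
import math

sample_size = 200   # hypothetical number of completed surveys
proportion = 0.5    # 0.5 gives the most conservative (largest) margin
z_95 = 1.96         # z-score for a 95% confidence level

margin_of_error = z_95 * math.sqrt(proportion * (1 - proportion) / sample_size)
print(f"Margin of error: +/- {margin_of_error:.1%}")  # about +/- 6.9%
```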

Describe the four pitfalls of outcomes monitoring and provide one concrete example of a pitfall. (SHORT RESPONSE)

- Selection of outcomes may influence program delivery in a negative way. Example: teaching to the test (teachers may feel pressure to teach FCAT skills at the expense of others).
- "Corruptibility" of indicators (staff may intentionally or unintentionally influence the reporting of the outcome). Example: staff may encourage "correct" answers to a survey (or participants may want to please staff).
- Interpretations may be inaccurate. Example: staff or an evaluator may not understand the findings, leading to an incorrect conclusion.
- Over-reliance on outcomes may be premature or may be done without understanding important process variables. Example: a program might measure outcomes in its first six months of existence (before it is ready).

Understand the guidelines to make your life easier when you are setting up your data collection system.

- Simple. The sophistication of your evaluation, and therefore of your data collection, should be appropriate for the scale of your program.
- Focused. Do not collect any information you will not use, no matter how interesting it seems.
- Ethical. Protect the privacy and dignity of your program participants and other respondents. In fact, you may find that you have very specific requirements with which you must comply if your organization has an Institutional Review Board (see Appendix B for more information on IRBs).
- Consistent. Information should be collected in the same manner for each person at each time point.

What are the four limitations of randomized field experiments?

- Stage of program development: Less useful in the early stages of program development.
- Ethical considerations: Staff frequently struggle with the idea that some people will receive a "benefit" while others do not.
- Experimental integrity: The integrity of the experiment can be affected by factors such as differential attrition and unintended influences.
- Time and cost: Randomized field experiments are very expensive and time consuming.

What do you need to consider when choosing an evaluation design?

- The logistics and management of the design and whether it is feasible, considering the staffing resources that are available. For example, if an experimental or quasi-experimental design is chosen, does the program have the ability to identify, manage, and collect data on a control group?
- The funder requirements and priorities for the outcome evaluation. In some instances, a program's funder can request or require that certain outcomes are examined or that a certain design is used in the evaluation.
- The financial resources available for the evaluation. Ideally, at least 10 percent of the program's budget should be allocated for evaluation. The evaluation design, type of evaluator, number of outcomes measured, and data collection methods will have an impact on the amount of resources necessary to fund an outcome evaluation.
- The sample size available for the outcome evaluation. A small program with a limited number of participants may choose to include all participants in the evaluation.
- The staffing expertise available to implement certain designs. For example, if a program wishes to implement a random assignment design, that program will need a staff member who is knowledgeable about implementing research-based lottery systems.
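
For the random assignment mentioned in the last point, a simple lottery can be as small as the sketch below (the participant names are hypothetical; a real lottery would also need to be documented and handled consistently with any funder or IRB requirements):

```python
import random

# Hypothetical applicant pool
applicants = ["Ana", "Ben", "Carla", "Dev", "Ema", "Finn", "Gia", "Hugo"]

random.seed(42)  # fix the seed so the assignment can be documented and reproduced
shuffled = applicants[:]
random.shuffle(shuffled)

# Split the shuffled list in half: program group vs. control group
midpoint = len(shuffled) // 2
program_group = shuffled[:midpoint]
control_group = shuffled[midpoint:]
print("Program group:", program_group)
print("Control group:", control_group)
```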

What are the three possibilities to consider when an evaluation suggests little or no progress on achieving outcomes?

- The program may have set expectations too high. If your program is relatively new, it may be too early to expect the types of outcomes that the program aimed to achieve. Progress may not be seen on many outcomes until your program has been running for several years. In this case, you should consider your evaluation findings as benchmark data upon which to judge outcomes in future years. Evaluation findings from similar programs can offer guidance as to what types of results can reasonably be expected.
- The program may not be implemented as intended, so the services provided are not leading to the intended outcomes. In this situation, you should consider taking a step back and conducting a formative/process evaluation to examine program implementation before looking at any further outcomes. A formative/process evaluation can help to uncover possible problems in the way that your program is implemented that may be preventing the program from achieving its intended goals.
- The evaluation may not be asking the right questions or examining the right outcomes. Your program's activities may not be directly tied to the outcomes that the evaluation measured. In that case, you should revisit your evaluation plan to ensure that the evaluation is designed to measure outcomes that are directly tied to program activities.

What are the four critical questions for evaluation design?

- What are the evaluation questions you are seeking to answer? (Does the question imply change over time? Does the question imply comparison of groups?)
- When and how frequently will you collect data?
- How many groups will you collect data from? (One group = the program group; two groups add a comparison group, etc.)
- If you will collect data from a comparison group, how will that group be created?

What questions should be considered when determining the time frame of your evaluation?

- What do our funders require in terms of data collection and reporting time frames?
- Do we need to include pretest/posttest measures to examine changes in our outcomes?
- What time frame is most feasible given our available resources for evaluation?
- How much time do we need to collect and analyze data that will be useful for our program?

What are the reasons for using a post-then-pre design?

- It is a variation of the pretest-posttest one-group design in which both the pretest and the posttest are given after the program is completed.
- It is used when you are asking participants to rate (i.e., give their opinion of) their perceived knowledge or skill, rather than testing them.
- A standard pretest is a problem when participants don't have a realistic understanding of their knowledge or skills until they learn, through an educational program, how much there really is to know.
- Teenagers are notorious for overestimating their knowledge or skill levels because they lack the frame of reference that experience and maturity provide.

What are the key characteristics of the survey audience you should consider as you create a survey?

- Age
- Education level
- Familiarity with tests and questionnaires
- Cultural bias/language barriers

What are the four characteristics of a functional management information system?

- Be user-friendly, to allow program staff to update and maintain data on an ongoing basis.
- Provide flexibility, allowing the possibility of adding or modifying the type of data collected as additional program needs are identified.
- Be able to easily run queries on the various types of data collected.
- Have safeguards in place to protect the confidentiality of the data (e.g., password protection with restricted access).

What are the five steps in the measurement process?

- Conceptualize (e.g., Approaches to Learning).
- Operationalize (e.g., Does the child explore the environment?).
- Measure (e.g., teacher observation, parent observation).
- Analyze (e.g., What percentage of children are improving? What percentage of children are at age level for a task or concept?).
- Act (e.g., refocus teaching, identify areas for parents to address).
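
As an illustration of the Analyze step (the scores below are invented), the percentage of children improving from pretest to posttest could be computed like this:

```python
# Hypothetical pre/post scores for six children
pre_scores =  [10, 14, 9, 16, 12, 11]
post_scores = [12, 15, 9, 18, 13, 10]

# Count children whose posttest score exceeds their pretest score
improved = sum(1 for pre, post in zip(pre_scores, post_scores) if post > pre)
percent_improved = 100 * improved / len(pre_scores)
print(f"{percent_improved:.0f}% of children improved")  # 4 of 6 -> 67%
```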

What are the four types of bias that threaten internal validity?

- Measurement bias (the worst cases are when measurement is systematically different for the program and control groups, which is one reason for "double-blind" studies).
- Maturation effects: natural improvements due to development (e.g., a natural increase in vocabulary).
- Selection bias (e.g., voluntary selection into a program, attrition).
- History/interfering events (think of trying to evaluate programs when Hurricane Katrina hit).

Who are the audiences for the Under One Sky evaluation, and how will they use the evaluation?

- Program staff (internal improvement): to improve practices and to support funding requests.
- Administration on Children and Families (federal funder) (accountability): to justify the expenditure of funds.
- Other practitioners (dissemination/policy): to develop similar programs elsewhere.
- Researchers (knowledge): to understand people and programs.

What three factors do we need to consider when selecting our evaluation design?

- Scientific rigor: we wanted to be as rigorous as possible.
- Practical considerations: we wanted to be able to look at both rural and urban families.
- Ethical considerations: we did not feel that we could have children from the same classroom in both the program and comparison groups.

Explain what sensitivity is and list the three threats to sensitivity presented by Dr. Diehl in class.

Sensitivity: Are the measures sensitive to change? When change is present, how likely are you to capture it?
Insensitivity to program effects may arise when:
- The measure includes some concepts that do not relate directly to what the program does.
- The measure is not designed to change (i.e., it is designed to be highly reliable).
- The measure is phrased in such a way that almost all people answer in a certain way.

What are the five strategies for improving a simple before-and-after design?

- Strategy 1: Add a control group.
- Strategy 2: Take more measurements before and after the intervention implementation.
- Strategy 3: Stagger the introduction of the intervention among groups.
- Strategy 4: Add a reversal of the intervention.
- Strategy 5: Use additional outcome measures.

What issues should you consider as you determine the level of detail to include in reporting evaluation findings to various stakeholders?

- The local community needs enough detail so that someone who knows nothing about your program will have sufficient information to understand the evaluation and the program's role within and impact on the community.
- School staff, such as teachers and principals, are likely to be most interested in findings that have implications for school-day academic achievement and classroom conduct, that is, outcomes related to children's learning and behavior.
- Program practitioners often have little time to devote to poring over evaluation results; results presented to this audience should be "short and sweet" and contain only the most important details.

Understand the dimensions or criteria you should use to assess the quality of published measurement tools.

- Validity: A test or measurement is valid if it measures what it is supposed to measure. A measurement may be valid but not reliable, and vice versa. Related to reliability.
- Reliability: The consistency with which an outcome measure actually measures a given effect (awareness, attitudes, etc.). There is NO perfectly reliable measure. Another term for consistency. Related to validity.
- Standardization: Standardization means that a tool has been tested in one or more populations and the results are consistent across the groups.

Understand the basic definitions relevant to experimental design and be able to identify examples of each.

- Variable: Characteristics by which people or things can be described.
- Manipulation: Random assignment of subjects to levels of the independent variable (treatment groups).
- Independent variable: The treatment, factor, or presumed cause that will produce a change in the dependent variable.
- Dependent variable: The presumed effect or consequence resulting from changes in the independent variable.

selection threats

A selection threat occurs when the apparent effect of the intervention could be due to differences in the characteristics of the participants in the groups being compared, rather than to the intervention itself.

Selection interaction threats

A selection interaction threat occurs when differences between the groups due to selection combine with another threat to internal validity (e.g., selection-maturation or selection-history), threatening the legitimacy of your evaluation conclusions.

What is the definition of internal validity?

Internal validity is the extent to which observed changes can be attributed to your program or intervention (i.e., the cause) and not to other possible causes (sometimes described as "alternative explanations" for the outcome).

double-barreled questions

- Do you like cats and dogs?
- Do you watch television, or would you rather read?
- Do you like tennis, or do you like golfing?
- Have you stopped beating your child?

loaded questions

- Do you treat your children with kindness like a good parent should?
- Are you as interested in sports as most other red-blooded American men?
- Do you agree that our program deserves more funding than the greedy politicians currently provide?

the characteristics of strong performance measures

- Have strong ties to program goals, inputs, and outputs. There should be a direct and logical connection between your performance measures and the other pieces of your logic model. Ask yourself: What do we hope to directly affect through our program? What results are we willing to be directly accountable for producing? What can our program realistically accomplish?
- Be compatible with the age and stage of your program. Performance measures should be selected based on your program's current level of maturity and development. For example, a program in its first year should focus more on measures of effort than on measures of effect to ensure that the program is implemented as intended before trying to assess outcomes for participants.
- Consider if the data you need are available/accessible. Performance measures should never be selected solely because the data are readily available. For example, if your program does not seek to impact academic outcomes, it does not make sense to examine participants' grades. That said, you should think twice before selecting program performance measures for which data collection will be prohibitively difficult and/or expensive. For example, do not choose performance measures that require access to school records if the school will not provide access to these data.
- Yield useful information to the program. Consider the question: "Will the information collected be useful to our program and its stakeholders?" The answer should always be a resounding "Yes." To determine whether the data will be useful, consider the purpose of your evaluation and what you hope to get out of it, as outlined in Step 1.

