Chapter 13: Evaluation


internal evaluation

- Conducted by an individual trained in evaluation and personally involved with the program.
- Advantages: more familiar with the organization and the program's history; knows the decision-making process; is present to remind others of results over time; can communicate technical results more frequently and clearly.
- Less expensive.
- Drawback: potential for evaluator bias or conflict of interest.

process evaluation

- Assesses the implementation process in general; tracks and measures what went well and what went poorly and how these factors contributed to the success or failure of a particular program.
- Measures fidelity, or how closely program implementation followed existing standards or protocol. Does not focus on the quality of the program itself; rather, it measures how well program implementation occurred.
- Rather than looking backward after the program concludes, data are collected throughout the implementation process.
- Measures outputs such as how many products were distributed, how many services were offered, and how many people participated in the program.

Framework for Program Evaluation

- The CDC Framework for Program Evaluation is helpful in any evaluation, regardless of type or setting, though it has the most application for impact evaluation. Its six steps are starting points for tailoring the evaluation: (1) engaging stakeholders, (2) describing the program, (3) focusing the evaluation design, (4) gathering credible evidence, (5) justifying conclusions, and (6) ensuring use and sharing lessons learned.
- The framework also uses standards of evaluation, which provide practical guidelines for evaluators to follow when deciding among evaluation options. The standards help evaluators avoid evaluations that may be accurate and feasible but not useful, or useful and accurate but not feasible. The four standards: (1) utility standards ensure that the information needs of evaluation users are satisfied; (2) feasibility standards ensure that the evaluation is viable and pragmatic; (3) propriety standards ensure that the evaluation is ethical; (4) accuracy standards ensure that the evaluation produces findings that are considered correct.

summative evaluation

- Determines effectiveness: the extent to which awareness, knowledge, behavior, environment, or health status changed as a result of a particular program.
- Requires the measurement or establishment of a baseline value (the starting point or status of a health indicator prior to implementation of an intervention) and measurement of the same health indicator afterward (posttest). Occurs after the program has ended.
- Impact and outcome evaluations are closely related; together they constitute summative evaluation.

Introduction

- Efforts should be made to address the two critical and basic purposes of program evaluation: (1) assessing and improving quality and (2) determining effectiveness.
- Differences between formative and process evaluation: (1) formative evaluation attempts to enhance program components before and during implementation so that the very best products and services are offered; (2) formative evaluation involves pretesting and pilot testing; (3) formative evaluation focuses on improving the quality of the program and its components while they are being implemented, whereas process evaluation measures the degree to which the program was successfully implemented and generally applies lessons learned to subsequent versions or actions.
- Formative: the quality of the program components is measured and improved prior to or during program implementation.
- Process: the mechanics and results of program implementation are assessed.
- Summative: program outcomes are measured, including impact evaluation (behavior change) and outcome evaluation (disease). Formal definitions appear on pages 353-354.

Evaluation Results

- Evaluation can be conducted from several vantage points.
- Different aspects of the evaluation can be stressed, depending on the group's needs and interests.
- Include a determination of how the results will be used.
- In formative evaluation, it is important to implement the findings rapidly to improve the program.
- A feedback loop and action plan are needed in summative, impact, and outcome evaluation to ensure that results and lessons learned are used to determine how to proceed with health promotion programs.

Ethical Considerations

- Evaluation or research should never cause mental, emotional, or physical harm to those in the priority population.
- Participants should always be informed of the purpose and potential risks and should give consent. Assure confidentiality and anonymity.
- No individual should ever have his or her personal information revealed in any setting or circumstance. When appropriate, evaluation plans should be approved by institutional review boards (IRBs).

impact evaluation

- Focuses on intermediate measures like behavior change or changes in attitudes, knowledge, and awareness.

outcome evaluation

- Measures the degree to which end points actually decreased.

external evaluation

- Conducted by someone who is not connected with the program, often an evaluation consultant.
- Somewhat isolated, lacking the knowledge of and experience with the program that an internal evaluator possesses. More expensive.
- Advantages: provides an objective review and a fresh perspective; helps ensure an unbiased evaluation outcome; brings a global knowledge of evaluation from having worked in different settings; typically brings more breadth and depth of technical expertise.

Who Will Conduct the Evaluation?

- The program evaluator must be objective and have nothing to gain personally from the results of the evaluation.
- The two options are internal evaluation and external evaluation.
- Choose someone who has credibility and values objectivity.

formative evaluation

- Quality assessment and program improvement.
- Begins when programs are conceived and developed (or are forming); most important and relevant during the early stages of program development and implementation. Its purpose is to improve the quality of a program or any of its components before the program concludes.

Evaluation in the Program Planning Stages

- The results of evaluation will determine whether the goals and objectives were met.
- Evaluation must be planned in the early stages and should be in place before the program begins. Doing so helps improve the program and makes collecting data related to program outcomes easier and more reliable.
- The evaluation process will focus on examples of formative and summative evaluations.
- Formative evaluation may indicate that the necessary number of staff has been hired, program sites are available, materials have been printed, participants are satisfied, and classes are offered with the needs of participants in mind. It answers whether programs are provided at convenient locations for the community, whether materials were available on time, and whether people are attending workshops at all the times offered.
- Planning summative evaluation at the beginning ensures that results are less biased and that the questions answered relate to the original goals and objectives of the program.

Purpose of Evaluation

- Stakeholders will determine which factors are measured to determine the worth and value of the program. A number of factors leading to the outcome of improved health can be measured in the evaluation process (such as how successfully the program was implemented and the degree to which the program influenced knowledge, attitudes, confidence, abilities, and behaviors).
- Evaluation also assesses less tangible benefits deemed important by stakeholders (e.g., goodwill, social capital, cohesiveness).
- Programs are evaluated to gain information and make decisions. Information may be used by planners during implementation of a program to make immediate improvements, to improve the implementation process in subsequent versions of the program, to see certain immediate outcomes, and to determine whether long-term program goals and objectives associated with disease outcomes and improved health status have been met.
- Capwell, Butterfoss, and Francisco identified six reasons why stakeholders may want programs evaluated: (1) to determine achievement of objectives related to improved health status; (2) to improve program implementation; (3) to provide accountability to funders, the community, and other stakeholders; (4) to increase community support for initiatives; (5) to contribute to the scientific base for community public health interventions; (6) to inform policy decisions.

evaluation

- The process of determining the value or worth of a health promotion program or any of its components, based on predetermined criteria or standards of success identified by stakeholders.
- Two categories of evaluation correspond to the two main purposes of evaluation: (1) quality (formative evaluation) and (2) effectiveness (summative evaluation).

Practical Problems or Barriers in Conducting an Evaluation

1. Planners fail to build evaluation into the program planning process, or do so too late.
2. Adequate resources are not available.
3. Organizational restrictions on hiring consultants and contractors may prohibit evaluation efforts.
4. Effects are often hard to detect because changes can be small.
5. The length of time allotted for the program and its evaluation is not realistic.
6. Restrictions limit the collection of data.
7. It is difficult to make an association between cause and effect.
8. It is difficult to separate the effects of multiple interventions within a program.
9. Discrepancies arise between professional standards and actual practice.
10. Evaluators' motives to demonstrate success introduce bias.
11. Stakeholders' perceptions of the evaluation's value may vary too drastically.
12. Intervention strategies are sometimes not delivered as intended.

- These problems can occur by not collecting initial information from participants because plans were not in place, failing to budget for the cost of the evaluation, conducting the evaluation prematurely (before a change can occur) or too long after program completion, or proceeding without a sound design. Lack of capacity or inability to conduct an evaluation may be one of the most significant barriers to evaluation. Managers may also be motivated to make their programs appear cost-effective.
- Strategies to minimize these problems: include evaluation in the early stages of program planning, account for ethical considerations, determine who will conduct the evaluation, consider the evaluation design, increase objectivity, and develop a plan to use the evaluation results.

institutional review boards (IRBs)

Review the evaluation design and potential risks to participants. Also called human subjects committees. These boards safeguard the rights, privacy, health, and well-being of those involved in evaluation and research.

baseline data

Data reflecting the initial status or interests of the participants. Qualitative data, such as data from focus groups, can also be used to assess participant satisfaction.

