Program Evaluation Final Exam


Who are the evaluation sponsors?

The evaluation sponsors are the persons, groups, or organizations that request the evaluation and provide the funding or other resources needed to conduct it; they may or may not also be the program's funders.

Explain the difference between cost-benefit analysis and cost-effectiveness analysis.

A cost-benefit analysis seeks to identify the monetary inputs of a program and compare them to its outputs, with both sides expressed in dollars. It is a purely economic perspective, which is valuable because public resources are always scarce, so it gives decision makers a concrete basis for comparing alternatives. A cost-effectiveness analysis identifies the fiscal inputs but measures the outputs in terms of a substantive unit that reflects the program's goal (for example, cost per case of illness prevented). This is a more sensitive approach that is often necessary for social programs, especially in politically partisan environments, and can be a way to navigate more controversial social programs whose benefits are difficult or contentious to monetize.
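A minimal arithmetic sketch of the two approaches, using entirely hypothetical figures for an imagined job-training program:

```python
# Hypothetical numbers for an imagined job-training program; illustration only.
program_cost = 250_000.0            # total dollars spent
monetized_benefits = 400_000.0      # e.g., participants' increased earnings, in dollars
participants_placed_in_jobs = 125   # outcome in substantive (non-monetary) units

# Cost-benefit analysis: both sides in dollars.
net_benefit = monetized_benefits - program_cost          # $150,000
benefit_cost_ratio = monetized_benefits / program_cost   # 1.6

# Cost-effectiveness analysis: cost per unit of outcome achieved.
cost_per_placement = program_cost / participants_placed_in_jobs  # $2,000 per placement

print(f"Net benefit: ${net_benefit:,.0f}")
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")
print(f"Cost per placement: ${cost_per_placement:,.0f}")
```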

What criteria determine good evaluation questions?

A good evaluation question must be reasonable and appropriate, meaning relevant to stakeholder and program expectations. It must also be answerable, meaning respondents can give concrete answers rather than vague or generic assumptions. An evaluator can examine the questions in the context of the program or analyze them in relation to findings in the applicable literature. The role of performance criteria is to provide a standard against which to measure the program's effectiveness. Performance criteria can be derived from practice guidelines and managed-care standards, administrative objectives, or an organization's historical records and previous evaluations.

Describe mediator and moderator variables.

A mediator variable explains how or why the relationship between the independent and dependent variables occurs (x → m → y). A moderator variable affects the direction and/or strength of the relationship between the independent and dependent variables (for example: the program will be more effective if the participant is female).
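A small simulated sketch of moderation, keyed to the example above (the mediation chain x → m → y is not modeled here); the variables, sample size, and effect values are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000
treated = rng.integers(0, 2, n)            # program participation (independent variable)
female = rng.integers(0, 2, n)             # hypothetical moderator

# Moderation: the program effect is stronger for one subgroup (an interaction).
true_effect = np.where(female == 1, 4.0, 1.0)
outcome = 50 + true_effect * treated + rng.normal(0, 5, n)

for value, label in ((1, "female"), (0, "male")):
    mask = female == value
    diff = outcome[mask & (treated == 1)].mean() - outcome[mask & (treated == 0)].mean()
    print(f"Estimated program effect ({label}): {diff:.1f}")
```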

What is a rate, and why is it sometimes useful to use rates instead of counts?

A rate is the occurrence or existence of a particular condition expressed as a proportion of the units in the relevant population (for example, cases per 1,000 residents). Because a rate adjusts for the size of the population at risk, it allows the condition to be compared across different groups and areas and can convey the urgency of the problem for the group at risk. A raw count cannot support such comparisons, because a larger count may simply reflect a larger population rather than a more severe problem.
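A quick arithmetic sketch with made-up counts showing why the rate, not the count, carries the comparison:

```python
# Hypothetical case counts; illustration only.
cases = {"County A": 500, "County B": 500}
population = {"County A": 20_000, "County B": 200_000}

for county in cases:
    rate_per_1000 = cases[county] / population[county] * 1_000
    print(f"{county}: {cases[county]} cases, {rate_per_1000:.1f} per 1,000 residents")

# Same count (500) in both counties, but very different rates: 25.0 vs 2.5 per 1,000.
```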

Explain the difference between Type I and Type II errors.

An apparent effect may be statistically significant when there is no actual program effect (Type I error), or statistical significance may not be attained when there really is a program effect (Type II error).
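A simulation sketch (hypothetical data, two-sample t-tests via scipy) showing that when there is truly no program effect, roughly 5% of evaluations will still appear significant at α = .05, i.e., Type I errors:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, trials, false_positives = 0.05, 2_000, 0

# Simulate many evaluations of a program with NO true effect.
for _ in range(trials):
    control = rng.normal(50, 10, 60)
    treated = rng.normal(50, 10, 60)        # same distribution: no real effect
    _, p = stats.ttest_ind(treated, control)
    if p < alpha:
        false_positives += 1                 # Type I error: "significant" by chance

print(f"Type I error rate ≈ {false_positives / trials:.3f} (expected ≈ {alpha})")
```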

What is an effect size statistic? What are the two effect size statistics discussed in the text?

An effect size statistic characterizes the magnitude of a program effect rather than reporting a raw difference score or a simple percentage change. The text specifies two effect size statistics: the standardized mean difference, which represents effects on outcomes measured on continuous numerical scales, and the odds ratio, which characterizes the magnitude of a program effect on a binary outcome (the intervention either produced the outcome or did not).
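A minimal sketch of computing both statistics on hypothetical data (the scores and the 2×2 counts are invented for illustration):

```python
import numpy as np

# Hypothetical outcome scores; illustration only.
treated = np.array([72, 75, 80, 68, 77, 74, 79, 73])
control = np.array([65, 70, 66, 72, 68, 64, 71, 67])

# Standardized mean difference (difference in means divided by the pooled SD).
n1, n2 = len(treated), len(control)
pooled_sd = np.sqrt(((n1 - 1) * treated.var(ddof=1) +
                     (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
smd = (treated.mean() - control.mean()) / pooled_sd

# Odds ratio for a hypothetical binary outcome (e.g., graduated vs. did not).
treated_success, treated_failure = 40, 10
control_success, control_failure = 25, 25
odds_ratio = (treated_success / treated_failure) / (control_success / control_failure)

print(f"Standardized mean difference: {smd:.2f}")
print(f"Odds ratio: {odds_ratio:.2f}")   # (40/10) / (25/25) = 4.0
```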

What is impact assessment? According to your text, what does the evaluator need to assess before beginning an impact assessment?

An impact assessment is designed to determine what effects a program has on its intended outcomes and whether there are important unintended effects. Before beginning an impact assessment, the evaluator needs to assess prerequisite conditions such as the program theory and the program process, and to identify the program's targets and its political dimensions.

What is an outcome? Explain the difference between an outcome level, an outcome change, and a program effect.

An outcome is the state of the target population or the social conditions that a program is expected to have changed. An outcome level is the status of an outcome at some point in time. An outcome change is the difference between outcome levels at different points in time. A program effect is the portion of an outcome change that can be attributed uniquely to the program as opposed to the influence of some other factor.
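A toy numeric sketch of how the three quantities relate, using invented reading-score values and an assumed comparison-group change:

```python
# Hypothetical reading scores; illustration only.
baseline_level = 40.0       # outcome level at program entry
followup_level = 55.0       # outcome level at program exit
comparison_change = 5.0     # change seen in similar non-participants over the same period

outcome_change = followup_level - baseline_level      # 15 points (includes other influences)
program_effect = outcome_change - comparison_change   # 10 points attributable to the program

print(f"Outcome change: {outcome_change}, program effect: {program_effect}")
```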

What is an unintended outcome (consequence)? Why does the evaluator need to make a special effort to identify them? How does the text recommend going about assessing unintended outcomes?

An unintended outcome may be positive or negative, but distinctively emerges through a process that is not part of the program's design or direct intent. The evaluator needs to make a special effort to find them because they are difficult to anticipate and provide considerations that the program planners could find very useful. The text recommends prior research and contact with program personnel at all levels to identify and assess unintended outcomes.

Explain the concepts of service coverage and service bias.

Coverage refers to the extent to which participation by the target population achieves the levels specified in the program design. Bias is the degree to which some subgroups participate in greater proportions than others; it can arise from self-selection and can also derive from program actions. Coverage failure arises when the intended level of target participation is not achieved.

Explain the difference between incidence and prevalence.

Incidence refers to the number of new cases of a particular problem that are identified or arise in a specified area or context during a specified period of time. Prevalence refers to the total number of existing cases in that area at a specified time.
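A tiny arithmetic sketch with invented counts (a deliberately simplified bookkeeping of cases, ignoring finer epidemiological distinctions):

```python
# Hypothetical case counts for one city in a given year; illustration only.
new_cases_this_year = 120        # incidence: cases newly identified during the period
existing_cases_at_start = 480    # cases already present when the period began
recoveries_or_exits = 50         # cases resolved or no longer in the area

incidence = new_cases_this_year
prevalence = existing_cases_at_start + new_cases_this_year - recoveries_or_exits

print(f"Incidence: {incidence} new cases; prevalence: {prevalence} existing cases")
```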

Describe the evaluator-stakeholder relationship.

Independent: the evaluator takes primary responsibility for developing the evaluation plan, conducting the evaluation, and disseminating the results. Participatory/collaborative: a team of the evaluator and one or more stakeholders; the stakeholders are involved in planning, conducting, and analyzing the evaluation. Empowerment: the evaluator essentially teaches the stakeholders, through consultation, how to conduct evaluations on their own.

Who are the stakeholders?

Individuals, groups, or organizations having a significant interest in how well a program functions, for instance, those with decision-making authority over the program, funders and sponsors, administrators and personnel, and clients or intended beneficiaries.

Explain the difference between program process evaluation and program process monitoring.

Program process monitoring documents the key aspects of program performance and assesses whether the program is operating as intended (or according to a set standard); it involves assessments of program performance in both service utilization and program organization. The main difference is that monitoring is the ongoing, routine documentation of process, whereas a program process evaluation is a discrete study whose findings the stakeholders and decision makers use to judge the program.

Define process evaluation. How is this different from impact evaluation?

Process evaluation is the systematic and continual documentation of program performance, assessing whether the program is being delivered as intended, whereas an impact evaluation measures the program's intended outcomes (generally the social conditions it aims to improve).

Summarize the limitations on the use of randomized experiments.

Programs in early stages of implementation, ethical considerations, differences between experimental and actual intervention delivery, time and cost, and integrity of experiments can all be seen as limitations on the use of randomized experiments.

Explain the difference between quantitative and qualitative research. Why is qualitative research useful for describing needs?

Quantitative research involves numerical representation of the objects of interest, whereas qualitative research involves textured knowledge of the specific needs in question. Qualitative research (from focus groups and informant surveys and interviews) can provide descriptive information about the nature and nuances of a social problem and the service needs of those who experience it.

What is experimental design? What is a quasi-experimental design?

The randomized field experiment is considered the most valid way to establish the effects of an intervention. It compares a control group, which receives no intervention, with an intervention group, which does; any differences observed between the groups are then attributed to the intervention. A quasi-experimental design compares the participants in a program (the intervention group) to nonparticipants who are presumed to be similar (the comparison group). The design is called "quasi" because it lacks the random assignment to conditions that is essential for true experiments. Quasi-experiments can be useful for an impact assessment when it is impractical or impossible to conduct a true randomized experiment.
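A simulation sketch (all values invented) of why random assignment matters: with randomization the groups start out comparable on unmeasured characteristics, while with self-selection, as in many quasi-experiments, they do not:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
motivation = rng.normal(0, 1, n)   # unmeasured characteristic that also affects the outcome

# Randomized experiment: assignment is independent of participant characteristics.
randomized = rng.integers(0, 2, n) == 1

# Quasi-experiment: participants self-select, e.g., more motivated people enroll.
self_selected = (motivation + rng.normal(0, 1, n)) > 0

for label, in_program in (("randomized", randomized), ("self-selected", self_selected)):
    gap = motivation[in_program].mean() - motivation[~in_program].mean()
    print(f"{label}: pre-existing motivation gap between groups = {gap:.2f}")
```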

Explain reliability, validity, and sensitivity.

Reliability is the extent to which a measure produces the same results when used repeatedly to measure the same thing. Sensitivity is the extent to which the values on a measure change when there is a change or difference in the thing being measured. Validity is the extent to which a measure actually measures what it is intended to measure. To produce credible results, outcome measures need to be reliable, valid, and sufficiently sensitive to detect changes in outcome level (and to the magnitude of outcome the program might expect to produce).

What are the four sources of info for explicating program theory?

(1) Review of program documents; (2) interviews with program stakeholders and other selected informants; (3) site visits and observation of program functions and circumstances; and (4) the social science literature.

Explain the difference between statistical and practical significance.

Statistical significance is a numerical criterion: it indicates only that an observed effect is unlikely to be due to chance, and with a large enough sample even a trivial effect can reach statistical significance. Practical significance is a more interpretive judgment that seeks to describe the worth or meaningfulness of the effect rather than just the numbers.
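A simulation sketch (invented scores and a deliberately tiny true effect) showing a result that is statistically significant but arguably not practically significant:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# A tiny true effect (0.5 points on a 100-point scale) with a very large sample.
control = rng.normal(70.0, 10, 50_000)
treated = rng.normal(70.5, 10, 50_000)

t, p = stats.ttest_ind(treated, control)
difference = treated.mean() - control.mean()

print(f"p-value: {p:.2e}")                     # far below .05: statistically significant
print(f"difference: {difference:.2f} points")  # only ~0.5 points: practically trivial?
```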

Explain the process of defining and identifying targets of interventions. Define a target and explain the difference between direct and indirect targets.

A target is a person or group of people who fit within the defined demographic and social parameters relevant to the problem and the program. A direct target is someone who is actually provided services; an indirect target is someone else, such as another community member, who benefits from the improvement of the problem (for example, a farmer's friend who learns the new technique from the farmer who received the training).

Explain the concept of opportunity costs.

The concept of opportunity costs reflects the fact that resources generally are limited. Consequently, individuals or organizations must choose from the existing alternatives how those resources are to be allocated, and these choices affect the activities and goals of the decision makers. The opportunity cost of each choice can be measured by the worth of the forgone options. Since in many cases opportunity costs can be estimated only by making assumptions about the consequences of alternative investments, they are one of the more controversial areas in efficiency analyses.

What is a program effect?

The program effect is the difference between the outcome measured on program targets who received the intervention and an estimate of what that outcome would have been had they not received it. Numerically, it is the difference between the means of the two outcome values, and its magnitude can also be described as a percentage increase or decrease.

Define program evaluation

The use of social research methods to systematically investigate the effectiveness of social intervention programs in ways that are adapted to their political and organizational environments and are designed to inform social action to improve social conditions.

Define needs assessment. What does this task entail? In other words, what are the steps in this process?

The essential task for the program evaluator is to describe the problem that concerns the major stakeholders in a manner that is careful, objective, and as meaningful to all groups as possible, so as to draw out the implications of that diagnosis for structuring effective intervention. The steps are: (A) construct a precise definition of the problem; (B) define and identify the targets of the intervention; and (C) accurately describe the nature of the service needs of that population.

