Chapter 14: Program Evaluation
Cost-Effectiveness and Cost-Benefit Analysis
Two major approaches to assessing the efficiency of a program. Assessing a program's costs can be highly complex - it requires technical expertise in cost accounting and deals with accounting concepts such as variable vs. fixed costs. Because program evaluators often lack that kind of expertise, these analyses are often not included as part of the evaluation.
Goal Attainment Model
An approach to evaluation that refers to the formal goals and mission of the program.
Cost-Benefit Analysis
Assesses the efficiency of a program. An effort is made to monetize the program's outcomes in addition to its costs.
Cost-effectiveness Analysis
Assesses the efficiency of a program. The only monetary considerations are the costs of the program itself (the monetary benefits of the program's effects are not assessed). Involves fewer cost-accounting complexities and fewer questionable monetizing assumptions than cost-benefit analysis.
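The arithmetic behind the two approaches can be sketched with hypothetical figures (the program names, costs, and outcome counts below are invented for illustration, not taken from the chapter):

```python
def cost_effectiveness(total_cost, outcome_units):
    """Cost-effectiveness: cost per unit of outcome (e.g., dollars per job placement)."""
    return total_cost / outcome_units

def benefit_cost_ratio(monetized_benefits, total_cost):
    """Cost-benefit analysis goes further and monetizes the outcomes themselves."""
    return monetized_benefits / total_cost

# Hypothetical job-training programs:
# Program A spends $200,000 and places 80 clients in jobs
# Program B spends $150,000 and places 50 clients in jobs
print(cost_effectiveness(200_000, 80))   # 2500.0 dollars per placement
print(cost_effectiveness(150_000, 50))   # 3000.0 dollars per placement

# Cost-benefit: suppose Program A's placements yield an estimated
# $320,000 in taxes paid and welfare savings
print(benefit_cost_ratio(320_000, 200_000))  # 1.6 -> benefits exceed costs
```

Note that the cost-effectiveness figures let us compare the two programs without ever having to put a dollar value on a job placement, which is why the technique involves fewer questionable monetizing assumptions.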
Utilization of Program Evaluation Findings
Evaluation findings can affect jobs, programs, and investments, but beliefs and values are also at stake. E.g., a commission under President Nixon studied pornography and found that exposure to it was not linked to the likelihood of sex crimes - and Nixon declared the findings wrong.
Summative Evaluations
Concerned with assessing the ultimate success of programs: how successful the program is, whether it should be continued, or whether it should have been chosen in the first place from among alternative options. Results convey a sense of finality - asking whether the program is successful is the most significant evaluative question we might ask. Typically uses quantitative methods.
evidence-based practice
Decisions about providing the type of program or intervention with the best chance for successful outcomes are often informed by meta-analyses, which are statistically oriented systematic reviews of research studies.
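As a minimal sketch of what a meta-analysis does statistically, a fixed-effect pooled estimate weights each study's effect size by the inverse of its variance, so larger (more precise) studies count more. The effect sizes and variances below are invented:

```python
def pooled_effect(effects, variances):
    """Fixed-effect (inverse-variance weighted) pooled effect size."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Three hypothetical outcome studies of the same intervention
effects = [0.40, 0.25, 0.55]      # standardized mean differences (Cohen's d)
variances = [0.04, 0.02, 0.08]    # smaller variance -> larger study -> more weight

print(round(pooled_effect(effects, variances), 3))  # 0.336
```

The pooled estimate (0.336) lands closest to the 0.25 study because that study, with the smallest variance, carries the most weight.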
Outcomes Approach Logic Model
Emphasizes outcomes; it is less likely to begin with underlying program assumptions and more likely to begin with program inputs (resources).
Activities Approach Logic Model
Emphasizes the details of the implementation process.
Theory-Based Logic Model
Emphasizes the underlying theory that influenced decisions about program components and thus explains the reasons for those decisions.
Evaluating Outcome and Efficiency
Evaluating these two aspects may assess whether the program is effective in attaining its goals, whether it has any unintended harmful effects, whether its success (if any) is being achieved at a reasonable cost, and how the ratio of its benefits to its costs compares with the benefit-cost ratios of other programs with similar objectives.
The Politics of Program Evaluation
Evaluations of a program can stir up supporters or opponents, and vested interests can impede the atmosphere for free scientific inquiry. Examples:
- There may be pressure to design the research or interpret findings in ways that are likely to make the program look good
- Program evaluations may be conducted in the cheapest, most convenient way possible, based on the belief that funding sources won't pay attention to the quality of the research
- Administrators may not pick the most qualified person to conduct the evaluation; political considerations may be given higher priority
- A commitment to conducting the most scientific study possible sometimes threatens administrators and is perceived as a problem
Planning an Evaluation and Fostering Its Utilization
First step: Learn as much as possible about the evaluation's stakeholders - people who have some interest in the program.
Second step: Involve key stakeholders in a meaningful way in planning the evaluation. Involving them can increase the chances that the evaluation will study something that is a priority for them, promote their identification with the evaluation, and foster their support during the data collection phase and their eventual utilization of its findings and recommendations.
Third step: Find out at the outset who wants the evaluation, why they want it, and who doesn't want it. Involvement should begin early in the planning of the evaluation, not after the research design is ready to be implemented.
Fourth step: Obtain stakeholders' feedback on a draft of the evaluation proposal that reflects their input.
Fifth step: Include a logic model in the evaluation proposal.
Sixth step: Assure program personnel and other stakeholders that they will be able to respond to a draft of the report before it is finalized.
Seventh step: Tailor the form and style of the report to the needs of stakeholders.
Eighth step: Present negative findings, if any, tactfully, recognizing the human efforts and skills of program personnel.
Ninth step: Instead of implying program failure, provide suggestions for developing new programs or improving existing ones.
Tenth step: Develop implications that are realistic and practical.
Even if you follow all ten steps, you may still encounter problems.
In-House vs. External Evaluators
Fiscal concerns can affect the evaluation designs employed, and can also lead to attempts to influence the ways findings are interpreted. Evaluators can also be pressured through attacks on their competence after they report findings that program administrators do not like.
- External evaluators learn that if they produce reports that reflect favorably on the evaluated program, program staff members are extremely unlikely to mobilize efforts to discredit the evaluation's credibility or the evaluator's competence
- Political considerations can affect not only in-house evaluators but also external evaluators, who seem to be more independent. Even funding sources and other external sponsors of an evaluation can have a stake in its outcome and may try to influence it for political reasons
- The web of politics in program evaluation can be extensive, and sometimes external groups that sponsor an evaluation are not as independent and objective as we may suppose
The W. K. Kellogg Foundation
Identifies three different approaches to logic models:
- Theory-Based Logic Model
- Outcomes Approach Logic Model
- Activities Approach Logic Model
Which logic model to choose will vary, depending on program needs and what seems most helpful for those involved in program management and evaluation.
Why the implications of evaluation research results are not always put into practice
- Implications may not always be presented in a way that nonresearchers can understand
- Evaluation results sometimes contradict deeply held beliefs
- Vested interests
Evaluation for Program Planning: Needs Assessment
Just as clinical practitioners assess clients' problems and needs before intervening, program evaluators may assess a program's target population in order to enhance program planning. They might assess the extent and location of the problems the program seeks to ameliorate, as well as the target population's characteristics, problems, expressed needs, and desires. This information is then used to guide program planning and development concerning issues such as what services to offer, how to maximize service utilization by targeted subgroups, or where to locate services. This process of systematically researching diagnostic questions is widely used to cover all sorts of techniques for collecting data for program planning purposes, and has become synonymous with evaluation for program planning. A conceptual issue that complicates the definition of needs is whether they are defined in normative terms or in terms of demand; how we define needs affects the choice of specific techniques to assess them. The specific techniques for conducting a needs assessment are usually classified in five categories:
1. the key informants approach
2. the community forum approach
3. the rates-under-treatment approach
4. the social indicators approach
5. surveys of communities or target groups
Purposes and Types of Program Evaluation
An evaluation might have one or more of the following purposes:
1. To assess the ultimate success of programs
2. To assess problems in how programs are being implemented
3. To obtain information needed in program planning and development
Program evaluations can be further classified as summative or formative.
Problems and Issues in Evaluating Goal Attainment
Over the years, evaluations of program outcomes have had far more negative findings (indicating program failure) than positive findings. It is also believed that studies with negative findings tend not to be used because of the vested interests at stake. One important criticism of the goal attainment model: the determination of program goals and their measurable indicators can be hazardous. Formal goals are often stated so vaguely that different evaluators may find it impossible to agree on what they really mean in terms of specific indicators of success. When evaluators choose a few operational indicators of success, they risk missing areas in which the program is also succeeding.
- If so, their negative findings may be misleading and may endanger the continuation of programs that are succeeding in other, equally important ways
Some program evaluators suggest keeping the goal attainment model but with some adjustments:
- E.g., ignore the formal goals or mission statement of a program and simply measure every conceivable indicator of outcome that the evaluators think has some potential of being affected by the program
- Or assess official program goals as well as a limited number of additional goals that seem most plausible in light of the social science theories on which the program is based
Historical Overview of Program Evaluation
The origins of planned social evaluation have been traced to 2200 BC in China. In the mid-19th century, the reform movement for more humane care of the mentally ill, led by Dorothea Dix, succeeded in getting states to build more public mental hospitals. More systematic approaches to program evaluation can be traced to the beginning of the 20th century, when early efforts evaluated schools that used different teaching approaches, comparing educational outcomes by examining student scores on standardized tests. Other early studies examined the effects of worker morale on industrial productivity and the impact of public health education programs on hygienic practices. In the 1940s, after New Deal social welfare programs were implemented, studies examined the effects of work relief versus direct relief. After WWII, large public expenditures were committed to programs that attempted to improve housing, public health, attitudes toward minorities, and international problems in health, family planning, and community development. Program evaluation became widespread by the late 1950s as efforts increased to alleviate or prevent social problems. By the late 1960s, textbooks, professional journals, national conferences, and a professional association on evaluation research had emerged. But by the late 1970s, after public funding for these programs waned, the funding of studies to evaluate them declined as well. This trend toward reduced funding of program evaluation accelerated during the 1980s as federal evaluation offices were hit hard by the budget cuts of the Reagan administration. The "age of accountability" continued through the 1980s and 1990s, as liberals and conservatives alike demanded that programs be more accountable to the public and show whether they were really delivering what they promised to deliver.
As a result of these forces, and despite governmental funding cuts, by the 1990s program evaluation had become ubiquitous in the planning and administration of social welfare policies and programs.
Process Evaluation
Process evaluations (an example of formative evaluations) ask many of the same questions as monitoring program implementation; they focus on identifying strengths and weaknesses in program processes and recommending needed improvements. The term is closely aligned with monitoring program implementation. The most appropriate methodology depends on the nature of the research question. Experimental or quasi-experimental designs might be used to assess the effectiveness of alternative fund-raising strategies, to measure the impact of different organizational arrangements on staff attitudes, to determine which outreach strategies are most successful in engaging hard-to-reach prospective clients in treatment, and so on. Surveys that use questionnaires or scales might assess staff, client, or community attitudes that affect program implementation decisions. Process evaluations also rely heavily on qualitative methods, e.g., discovering the reasons that clients cite for service dissatisfaction or for refusing or prematurely terminating service delivery.
External Evaluators
Program evaluators who work for external agencies, such as government or regulatory agencies and private research consulting firms. They may have strong incentives to get and stay in the good graces of the program being evaluated, and can still struggle to be objective.
Managed Care
Refers to a variety of arrangements that try to control the costs of health and human services, in which a large organization contracts with care providers who agree to provide services at reduced costs. Some common types of managed care organizations are health maintenance organizations (HMOs), preferred provider organizations (PPOs), and employee assistance programs (EAPs). One way in which these companies attempt to reduce costs is by reviewing requests for services by those they cover and approving - that is, agreeing to pay for - only those services they deem necessary and effective. This stipulation refers to both the type and the amount of service, putting pressure on service providers to come up with brief treatment plans as well as evidence as to how many sessions are needed to achieve certain effects.
Program Evaluation
Refers to the purpose of research rather than to any specific research methods. Its purpose is to assess and improve the conceptualization, design, planning, administration, implementation, effectiveness, efficiency, and utility of social interventions and human service programs
Focus Groups
A relatively speedy and inexpensive qualitative research method, often used for needs assessment or for collecting other forms of program evaluation data. Brings together key players or prospective clients to engage in a guided discussion of community needs or of the need to provide a specific program to a particular target population.
Logistical and Administrative Problems
The social context of program evaluation affects not only the utilization of the outcomes of evaluative studies but also the logistics involved in their implementation. Logistics refers to getting research participants to do what they're supposed to do, getting research instruments distributed and returned, and other seemingly unchallenging tasks. If your resources are insufficient for a frequent on-site presence and ongoing interaction with program personnel, you may learn about program changes that wreak havoc on the planned evaluation too late to salvage it. Programs change, and the impact of this on the evaluation must be considered. It is important to be vigilant about the potential corrupting influence of vested interests, yet aware that despite this potential, many objective and useful program evaluation studies have been done in the past and are likely to be done in the future.
Monitoring Program Implementation
Some programs are unsuccessful simply because they aren't being implemented properly. Familiarity with organizational goal theory helps us realize the true importance of implementation evaluations: we cannot assume that official goals are the real priority of the program personnel responsible for attaining them. Personnel are sometimes preoccupied with their own agendas, resulting in activities that are irrelevant to, or at odds with, the attainment of the official goals. E.g., administrators may secure federal poverty funds not because they are devoted to fighting poverty but because those funds will help balance agency budgets and enhance the agency board's evaluation of the administrators' performance. Once funds are received, there's no guarantee that they will reach the clients as promised in the grant. An outcome study alone would not reveal that the agency never really tried to serve clients because of its own internally directed objectives - this illustrates the purpose of monitoring program implementation. Evaluations of program implementation are not necessarily concerned only with whether a program is being implemented as planned. Questions to ask (more on p. 332):
- Which fundraising strategy yields the most funds?
- What proportion of the target population is being served?
- What types of individuals are not being reached?
- Why are so many targeted individuals refusing services?
Surveys of Communities or Target Groups
Surveys are the most direct way to assess the characteristics and perceived problems and needs of the target group. Survey a sample drawn from the population or, if feasible, survey everyone in the target group. Ideally, use random sampling techniques; if this is not possible, use qualitative sampling approaches. Data collection might use highly structured quantitative questionnaires or semistructured qualitative interviews, depending on the nature of the target group and what is or isn't known about its needs. The advantages of the direct survey approach parallel those of surveys in general: directness and the potential for determining the need for, and likely use of, programs. Evaluators have to be mindful of the potential biases associated with low response rates, social desirability, and acquiescent response sets - those who do respond can't be assumed to represent those who don't - and should design a needs assessment survey that minimizes those biases.
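Drawing a simple random sample for a needs-assessment survey takes only a few lines; the sampling frame of households below is an invented placeholder:

```python
import random

# Hypothetical sampling frame: 500 households in the target community
frame = [f"household_{i}" for i in range(500)]

random.seed(42)  # fixed seed only so the illustration is reproducible
sample = random.sample(frame, k=50)  # simple random sample, without replacement

print(len(sample))       # 50
print(len(set(sample)))  # 50 -> no household selected twice
```

Random selection is what allows the responses of the 50 sampled households to be generalized to the full frame - though, as noted above, nonresponse bias can still undermine that generalization.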
Bogdan and Taylor (1990)
Two qualitatively oriented researchers who had been conducting evaluation research since the early 1970s on policy issues connected to the institutionalization and deinstitutionalization of people with developmental disabilities. They were advocates for deinstitutionalization and the integration of people with disabilities into the community, and they believed that many quantitatively oriented outcome evaluations of deinstitutionalization ask the wrong question: "Does it work?"
- Community-based practitioners believe in the work they do; they tend to attribute problems to poorly funded, poor-quality programs
- They saw community integration as a moral question, comparing freeing people from institutions to freeing people from slavery: is community integration worth it even if people encounter more problems in the community?
Bogdan and Taylor focused on discovering insights about what community integration means and how better to achieve it. They used qualitative sampling strategies (discussed in Part 6) instead of random sampling; they were not interested in representativeness.
Instead, they sought to identify exemplary programs that were reputed to be doing well in achieving community integration.
Sampling: announcements in newsletters, national mailings, and reviews of the professional literature. They contacted key informants who could tell them which agencies were doing a good job and who could point them to other informants who knew of other good programs (a snowball sample). In-depth, open-ended phone interviews helped them narrow the sample down to eight agencies that promised to yield the most comprehensive understanding of what community integration means and how best to achieve it. The sample size was kept small so that each agency could be studied intensively, through a series of visits by the researchers over a 3-year period. The researchers told each agency that it had been nominated as innovative or exemplary, which encouraged participation and openness to discussion. They triangulated qualitative data collection methods - direct observation, intensive interviewing, and document analysis - interviewing staff members, clients and their family members, and representatives of other local agencies, and producing thousands of pages of field notes and interview transcripts.
Data presentation: case studies, each giving an agency overview, the innovative agency policies that seemed to be fostering community integration, and illustrative case examples of how agency innovations were perceived to be affecting people's lives. They also reported problems and dilemmas, and disseminated the case studies to the field. This was a positive, optimistic qualitative approach that focused on processes. They believed they were producing research that would have greater utility to the field, and would ultimately do more to improve the well-being of disabled individuals, than quantitative outcome studies.
In-House Evaluators
Program evaluators who work for the agency being evaluated. Thought to have some advantages over external evaluators: they may have greater access to program information and personnel, and may be more trusted by program personnel and therefore get better feedback and cooperation. They may also be less objective and independent than external evaluators.
Logic Models
A graphic portrayal that depicts the essential components of a program, shows how those components are linked to short-term process objectives, specifies measurable indicators of success in achieving short-term objectives, conveys how those short-term objectives lead to long-term program outcomes, and identifies measurable indicators of success in achieving long-term outcomes. Logic models are useful tools in guiding evaluations and helping agency administrators and practitioners stay apprised of the program evaluation protocol. They are also useful in helping program planners and administrators conceptualize, develop, and manage their programs. A good logic model is seen as enhancing the prospects that the program will be managed well, monitored incrementally to assess whether it is being implemented properly, and have its short- and long-range objectives evaluated as planned.
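As an illustration of the components a logic model links together, here is a minimal sketch as a plain data structure, with invented content for a hypothetical parenting-support program (logic models are normally drawn as diagrams; the structure is what matters):

```python
# Each level of the model pairs a component with a measurable indicator.
logic_model = {
    "inputs": ["two MSW-level clinicians", "grant funding", "meeting space"],
    "activities": ["weekly parent-training groups", "monthly home visits"],
    "short_term_objectives": {
        "objective": "improve parenting-skill knowledge",
        "indicator": "pre/post gain on a parenting-knowledge scale",
    },
    "long_term_outcomes": {
        "outcome": "reduce child-maltreatment recurrence",
        "indicator": "re-referral rate within 12 months",
    },
}

# A simple consistency check an evaluation plan might apply:
# every objective and outcome must have a measurable indicator attached.
for level in ("short_term_objectives", "long_term_outcomes"):
    assert "indicator" in logic_model[level]
```

Structuring the model this way makes the chapter's point concrete: the indicators, not just the goals, are specified in advance, so the evaluation protocol follows directly from the model.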
Normative need
A needs assessment would focus on comparing the objective living conditions of the target population with what society deems acceptable or desirable from a humanitarian standpoint. Normatively defining the needs of the homeless might lead you to conclude that certain housing or shelter programs need to be developed for individuals living in deplorable conditions on the streets, even if those individuals don't express any dissatisfaction with their current homelessness.
Vested Interests
After receiving extensive training in a new model of therapy, a group of clinicians succeeds in convincing agency colleagues and superiors to let them form a new unit that specializes in service delivery based on that model. They are then more likely to advocate for the effectiveness of the model even if a study suggests it doesn't work.
the rates-under-treatment approach
Attempts to estimate the need for a service, and the characteristics of its potential clients, on the basis of the number and characteristics of clients who already use that service. One option is a secondary analysis of case records from a comparison community of similar size, characteristics, and target population.
Advantages: quick, easy, inexpensive, and unobtrusive.
Disadvantages: assesses only the portion of the target population already using services; pertains primarily to demand and may underestimate normative need; the records and data in the comparison community may be unreliable or biased; and agencies may exaggerate the number of clients served or their needs for services so that they will look good to funding sources.
Stakeholders
include the program's funders, personnel, service recipients and their families, and board members
the social indicators approach
Makes use of existing statistics - and not just treatment statistics; it examines aggregated statistics that reflect conditions of an entire population. E.g., infant mortality rates can be an indicator of the need for prenatal services in a particular community, and school dropout rates can indicate the need for a school district to hire school social workers.
Advantages: unobtrusive, quick, and inexpensive.
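The arithmetic behind such an indicator is just a population rate; the counts below are invented for illustration:

```python
def rate_per_thousand(events, population):
    """Aggregate rate per 1,000 - the form many social indicators take."""
    return 1000.0 * events / population

# Hypothetical community: 42 infant deaths among 6,000 live births
print(rate_per_thousand(42, 6000))  # 7.0 per 1,000 live births
```

Comparing such rates across communities, or against a state or national benchmark, is what turns the raw statistic into an indicator of relative need.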
Formative Evaluations
Not concerned with testing the success of a program; the focus is on obtaining information that is helpful in planning the program and improving its implementation and performance. Can use quantitative methods, qualitative methods, or both.
the community forum approach
Involves holding a meeting in which concerned members of the community can express their views and interact freely about their needs.
Advantages: feasibility; the ability to build support and visibility for the sponsoring agency; and the ability to provide an atmosphere in which individuals can consider the problem in depth and learn things they might otherwise have overlooked.
Disadvantages: views may be biased and not representative of the target population, and because the meetings are public, some individuals may not speak up.
Demand Need
Only those individuals who indicate that they feel or perceive the need themselves would be considered to be in need of a particular program or intervention.
The Key Informants Approach
Utilizes questionnaires or interviews to obtain expert opinions from individuals presumed to have special knowledge about the target population's problems and needs, as well as about current gaps in service delivery to that population. Those selected to be surveyed might include leaders of groups or organizations that are in close contact with the target population and have special knowledge of its problems.
Advantages: the sample can be obtained and surveyed quickly, easily, and inexpensively; the approach can provide the fringe benefits of building connections with key community resources concerned about the problem and of giving the program some visibility.
Disadvantage: the information is not coming directly from the target population.