Comparative Evaluation Theory Midterm


History of Evaluation: 4 Key Events

1. 1933: Ralph Tyler's Eight-Year Study
2. 1960s: Great Society legislation, War on Poverty (surge of evaluation and funding; much of the work was not done by evaluators, which resulted in a dip in the work)
3. 1993: Government Performance and Results Act (start of the rise of accountability and evaluation)
4. 2010: Government Performance and Results Modernization Act (increased focus on evaluation and accountability)

Describe the Evaluation Theory Tree (Christie & Alkin, 2013)

1. Describes the growth of the field based upon underlying "drivers" (roots) 2. Serves as a teaching tool by highlighting theorists' primary and secondary orientations toward evaluation practice (branches)

Steps of Empowerment Evaluation

1. Mission: the group must come to a consensus concerning its mission/values.
2. Taking Stock: group members evaluate their efforts within the context of a set of shared values; a dot activity is used to prioritize activities and then a ranking activity to determine how well they are doing with regard to those activities.
3. Planning for the Future: generate goals, strategies, and credible evidence. Put a plan in place where people are accountable for carrying things out.

Define Evaluation

1. Process of determining the merit, worth, or value of something, or the product of that process. (Scriven, 1991, p. 139)
2. Systematic assessment of the operation and/or the outcomes of a program or policy, compared to a set of explicit or implicit standards, as a means of contributing to the improvement of the program or policy. (Weiss, 1998, p. 4)
3. Use of social research methods to systematically investigate the effectiveness of social intervention programs.
4. Systematic collection of information about the activities, characteristics, and outcomes of programs to make judgments about the program, improve program effectiveness, and/or inform decisions about future programming. (Patton, 1997, p. 23)

What does a good evaluation theory address? According to Shadish, Cook, Leviton; 5 parts

1. Social Programming: how social programs can contribute to social problem solving. Focuses: how programs are structured, their functions, how context shapes/constrains the program, how social change occurs.
2. Knowledge Construction: concerned with what counts as acceptable knowledge about the thing being evaluated, the methods that produce credible evidence, and philosophical assumptions about knowledge.
3. Value: concerned with the role that values and the process of valuing play in evaluation, what values ought to be represented, and how to construct judgments of the worth of social programs.
4. Use: concerns how social science information can be used in social policy and programming. Deals with the possible kinds of use, the weight given to each kind of use, and what evaluators can do to increase use.
5. Evaluation Practice: concerns the things evaluators do as they practice: when an evaluation should be done, the purpose of the evaluation, the role of the evaluator, the types of questions, the design used, and what activities will be used.

Bandwidth-fidelity

Bandwidth: the range of issues broached. Fidelity: the accuracy of an answer. The two are negatively related. Bandwidth-fidelity describes the dilemma/trade-off one must consider when choosing between studying a broad range of issues with less precision vs. studying a few things with great precision. The convergent-divergent piece is a prescriptive approach to asking evaluation questions: ask a lot of questions (divergent) and then refine which questions to ask so you can get dependable answers (convergent).

Who is on the methods branch that we have learned? 5 people

Base: Campbell. In leaves: Cronbach, Weiss, Chen, Mark & Henry. Hint: the C's (Campbell, Cronbach, Carol, Chen) and the couple (Mark & Henry).

Who is on the use branch (that we have learned)? 6 people

Base: Patton. In leaves: Fetterman, Preskill, King, Cousins, Wholey.

Integrative Validity Model

Builds on the Campbellian model's strengths. It requires evaluator-supplied intervention information to be scientifically credible (social science theory), but also relevant and useful in stakeholder practice. No matter how valid the results are, they must translate back to the program.

Decision Accretion

Carol Weiss's alternative to the typical view of decision making. Policy is typically not made in one single decision at one point in time; rather, it is a build-up of small choices over time. Decisions and actions accumulate across stakeholders to result in policy action: many large and small decisions made by different people over time.

Pregnant Questions

Carol Weiss's term for questions that get at the mechanisms of change.

Bottom-up approach to validity

Chen's framework that aims to foster an approach reflective of stakeholders' views and concerns. Combine stakeholders' ideas with evidence-based interventions that fit their needs. There is no single best method (contingency based). This approach maximizes viable validity: first make sure a program is viable, then test effectiveness in the real world (more settings), and then test efficacy (rigorously assessing the causal relationship between intervention and outcome).

Viable Validity

Chen's validity type that adds stakeholder views and interests into the mix. It describes the extent to which an evaluation provides evidence that an intervention is successful in the real world, which is what matters to stakeholders: they want to know whether it will actually work in the context they work in. Viable validity asks whether something can actually work in a real context: is it affordable, accessible to the right clients, able to be implemented by ordinary implementers, etc.? Evidence about something is pretty useless if it cannot be implemented.

Stage 3 of Shadish, Cook, Leviton (1991) Organization

Contingency: Cronbach and Rossi. Goal: integrate the past. Intended to help evaluators consider how to decide what approach/decision to make as part of their practice. Tries to specify under which circumstances and for which purposes different practices make sense. Emphasizes considering factors like program maturity and selecting certain evaluation questions.

Practical Participatory Evaluation

Cousins. Goal: support program decision making and problem solving; evaluation utilization. Evaluator and participants partner to make decisions. Primary users (sponsors, managers, developers, implementers) participate in the evaluation (a small number), and these primary users participate extensively in all phases of the evaluation.

UTOS, utos, *UTOS

Cronbach's framework that provides a structure for systematically expanding evaluation and for understanding it across various settings; a way to involve the PSC. Generalizability comes through doing a small fleet of utos studies, each used to discern and study an effect.
UTOS stands for Units (population/sites), Treatments, Observations, Setting.
utos: the sample of units, treatments, and observations actually studied.
UTOS: the population of units, treatments, observations, and setting represented in the observed sample.
*UTOS: units, treatments, observations, and settings that manifestly differ from UTOS. Generalization to *UTOS is the main function of evaluation, permitting the transfer of knowledge across diverse localities, times, and programs.
A small utos, which exists within the big UTOS, is used to inform *UTOS.

Policy Shaping Community

Cronbach's stakeholders who shape policy through interaction; includes legislators, officials at the program central office, officials at the local service level, program constituents, scholars, journalists, and others who do social inquiry. Spans the policy level, program level, local level, program constituents, and illuminators (scholars, journalists, those who disseminate information).

Comprehensive evaluation typology

Depends on program maturity: prescribes the evaluation approach, strategies, and tools (e.g., constructive evaluation during initial implementation). Evaluation functions: constructive vs. conclusive. Program stages: process vs. outcome.
Constructive process evaluation: information about the strengths/weaknesses of a program's structure/implementation processes, with the purpose of program improvement.
Constructive outcome evaluation: identify strengths and weaknesses of program elements in terms of how they may affect program outcomes.
Conclusive process evaluation: judge the merits of the implementation process.
Conclusive outcome evaluation: overall judgment of program outcomes in terms of merit or worth.

Evolutionary Epistemology

Donald Campbell's extension of biological evolution to cognitive mechanisms and ideas The acquisition of knowledge is a process of generating and testing falsifiable hypotheses, retaining those that solve the problems Knowledge is facilitated through criticism

Experimenting Society

Donald Campbell's ideology for a political system in which reforms are evaluated and new approaches to social problems are developed based on evaluation outcomes. An active society of exploration, innovation, and *experimentation* (he is okay with true and quasi-experiments). It is an honest society, open to reality testing and criticism and avoiding deception; a scientific society of theory testing and purification. In this society, we would try out new programs, see if they work, and imitate, modify, or discard them based on effectiveness.

What is Evaluation Theory?

Evaluation theory is a *prescriptive theory* that tells people how to approach evaluation. It creates a common language, identifies important issues, provides a unique knowledge base, and presents the field to the outside world. Evaluation theory tells us how to practice (think of how it may affect the 6 steps of the CDC framework). E.g., Fetterman tells us to have a wide range of stakeholders who are involved extensively --> connects to the Engage Stakeholders step.

Empowerment Evaluation

Fetterman's framework: the use of evaluation concepts, techniques, and findings to foster improvement and self-determination. This approach aims to increase the likelihood that a program will achieve results by increasing the capacity of program stakeholders to plan, implement, and evaluate their own programs. The evaluator is a facilitator. Empowerment evaluation is a method for gathering, analyzing, and sharing data about a program and its outcomes, and it encourages stakeholders to actively participate in system changes. Its main premise is that the more closely stakeholders are involved in reflecting on evaluation findings, the more likely they are to take ownership of the results and to guide decision making and reform. Steps: Mission, Taking Stock, Planning for the Future.

Self Determination

Foundational concept underlying empowerment theory (Fetterman). Helps detail the specific mechanisms or behaviors that enable the actualization of empowerment: the ability to chart one's own life. Consists of interconnected capabilities such as the ability to identify and express needs, establish goals, make a plan to achieve goals, identify resources, evaluate results, etc.

Participatory Approach

Hallie Preskill

Leverage

How Cronbach chooses which questions to answer: from a broad range of questions, answer those most important to the PSC; leverage reflects the weight you would assign to uncertainty in the answer and to its importance for the policy-shaping community.

Michael Patton : Purpose, Approach, Audience, Engagement, Value Judgments, Political Setting

In Use trunk (not leaf).
Ultimate purpose: use for programmatic improvement (genuine desire to help facilitate use).
Approach: Utilization-Focused Evaluation (UFE): identify primary intended users with the personal factor, engage primary intended users to emphasize use, methods are flexible and anything is possible, analyze/interpret findings AND follow through on use after the report is delivered, targeted dissemination to facilitate intended use by intended users (recognizing other uses may occur).
Audience: primary intended end users with the personal factor.
Stakeholder engagement: primary intended users engaged throughout (may not do all tasks, but are "in the know").
Value judgments: facilitate the making of value judgments by primary intended users.
Political setting: recognize and accommodate political realities.
Key ideas: personal factor, process use, situational responsiveness, intended use by intended users, developmental evaluation.

Primary Intended Users

Intended users are more likely to use the evaluation if they understand and feel ownership of the process and findings. They may or may not perform tasks, but they are kept "in the know"; the level of engagement is flexible.

Evaluation Capacity Building (ECB)

Intentional work to continuously create and sustain overall organizational processes that make quality evaluation and its uses *routine*. Approaches: trainings, learning by doing (i.e., with a coach), engaging in communities of practice. Capacity is built at the individual, group, and organizational levels to support both "doing" evaluation and "using" findings. Preskill, King, Cousins.

Sequential Purchase of Information (4 steps)

Joseph Wholey's framework that weighs the cost of evaluation against its value, with 4 steps (ERPI). The framework buys increments of timely information when its likely usefulness outweighs the cost of acquiring it.
1. Evaluability assessment: assessing whether the program is ready to be managed for results, what changes are needed to do so, and whether the evaluation would contribute to improved program performance.
2. Rapid feedback evaluation: a quick assessment of program performance in terms of agreed-upon objectives and indicators; provides designs for a more valid, more reliable full-scale evaluation.
3. Performance monitoring: establishment of an ongoing process and outcome program monitoring system.
4. Intensive evaluation: rigorous experimental evaluations to test the validity of causal assumptions linking program activities to outcomes.

Interactive Evaluation Practice + 3 Frameworks

King: "Interactive evaluation practice is the intentional act of engaging people in making decisions, taking action, and reflecting while conducting an evaluation study." A method of developing process use and part of the ECB framework: engaging stakeholders at all parts of the evaluation. The evaluator works closely with stakeholders, but evaluators are the experts making the final decision. Evaluators have a commitment to 3 different roles: decision maker, actor (getting things done), and reflective practitioner.
Includes 3 frameworks:
*Basic inquiry tasks*: form questions, determine design, collect and analyze data, report and disseminate findings.
*Interpersonal participation quotient*: the various levels of who is directing the evaluation (participants or evaluator) and of participation in decision making and implementation (the evaluator can be high or low, and program leaders/staff/community members can be high or low).
*ECB*: ranges from formative/summative evaluation, to evaluation focused on ECB, to evaluation focused on organization development.

Interpersonal Factor

King: "The interpersonal factor highlights the unique ability of an evaluator to do two things: (a) interact with people constructively throughout the framing and implementation of evaluation studies and (b) create activities and conditions conducive to positive interactions among evaluation participants. The interpersonal factor is the mechanism that brings the personal factor to life and enables it to work." Describes people working together, building relationships, listening to what people say, and engaging and systematically involving primary stakeholders in the evaluation; managing interpersonal dynamics. The evaluator has to be interpersonal and is responsible for fostering ECB. NOTE: the personal factor is concerned with making evaluation useful by engaging key stakeholders; the interpersonal factor is concerned with creating, managing, and ultimately mastering the interpersonal dynamics that make the evaluation possible and inform its outcomes.

Free-Range Evaluation

King. Evaluative thinking lives unfettered in an organization; evaluation practice is embedded in regular daily practice (e.g., asking evaluation questions, collecting data).

Emergent Realist Evaluation

Mark & Henry. Acknowledges that there is not one fully knowable truth: reality exists apart from our understanding of it, and my idea of what is real differs from what is real. Evaluators piece together different perspectives to give a collective sense of what is real. Emergent realist evaluators explicitly recognize that individuals and groups assign varying levels of importance to different values, and that choices made in the evaluation process can serve some value perspectives (and the parties that hold them) over others. Similar to constructivism in this respect.

Multiple Pathways to Influence

Mark & Henry. Consider other uses of evaluation: intended use is limited, so consider other types, such as process use and enlightenment use.

Evaluation as Assisted Sense-making

Mark & Henry. Humans try to make sense of the surrounding world, but this natural sense-making can be mistaken. It is the natural, everyday process of taking in information, understanding it, and adapting/behaving in relation to it. Evaluation can extend, enhance, and check the natural sense-making that people engage in about programs, policies, or other evaluands. Goal: get the most accurate answer to the most pressing questions in a particular circumstance.

Lee Cronbach: Purpose, Approach, Audience, Engagement, Value Judgments, Political Setting

Methods branch, leaning toward the use branch.
Ultimate purpose: social betterment through the policy-shaping process.
Approach: contingency based (tailor design and methods to context, consider program maturity), refine a big list of evaluation questions, formative evaluation.
Audience: the Policy Shaping Community.
Stakeholder engagement: PSC helps shape the evaluation and disseminate findings.
Value judgments: the evaluator is an educator, not a judge (but shares data-driven and well-informed opinions).
Political setting: accommodated through the PSC; evaluation can provide valuable information to resolve conflicts in the political process.
Key terms: UTOS framework, external validity, bandwidth-fidelity, leverage, fleet of studies.

Carol Weiss: Purpose, Approach, Audience, Engagement, Value Judgments, Political Setting

Methods branch; emphasizes the role of politics and context.
Ultimate purpose: improve the social, economic, and cultural condition of society.
Approach: deeply understand context and program theory, ask questions that matter (pregnant questions that get at the mechanisms of the program), tailor design to context and questions; in any design, *rigor* is important.
Audience: program decision makers and policy makers.
Stakeholder engagement: multiple stakeholders who are affected by the program and evaluation; they help frame questions and disseminate information.
Value judgments: the evaluator does NOT judge, but provides insights for meaningful change over time.
Political setting: evaluation is political; it is affected by politics, and programs are entangled within a political system.
Key terms to know: decision accretion, conceptual/enlightenment use, program theory, III analysis.

Mark & Henry : Purpose, Approach, Audience, Engagement, Value Judgments, Political Setting

Methods branch, leaning toward valuing.
Ultimate purpose: "...to contribute to social betterment by informing deliberations in democratically established institutions..."
Approach: emergent realist evaluation; methods neutral (all designs and data are options); methods choice driven by purpose, values, influence, and social betterment.
Audience for evaluation: broad; anyone or any entity who contributes to the democratic process (e.g., the citizenry, democratic institutions).
Stakeholder engagement: assist in identifying the purpose of the evaluation and in the communication of findings.
Value judgments: criteria derived from systematic values inquiry from a broad range of stakeholders, including the public; provide judgment on these criteria as part of the evaluation findings, which then enter the democratic process for interpretation and influence.
Political setting: recognize that the democratic process aims to promote social betterment through policies/programs, and align the potential role of evaluation at each stage.
Key ideas: assisted sense-making, multiple pathways to influence. They want to understand what people value; methods are central to how they do assisted sense-making.

Participatory Evaluation (and the two types)

Participatory Evaluation (PE) refers to collaboration between evaluators and various stakeholders. It is an extension of the stakeholder-based model, with a focus on enhancing evaluation utilization through primary users' increased depth and range of participation in the applied research process. All forms of PE can be identified by their choices in (a) control of the evaluation process, (b) stakeholder selection for participation, and (c) depth of participation.
Practical Participatory Evaluation (P-PE): pragmatic; supports program decision making and problem solving.
Transformative Participatory Evaluation (T-PE): emancipatory and social justice oriented; empowers members of the community who are less powerful.

Utilization Focused Evaluation

Patton. A process for making decisions about evaluation focus, methods, and processes with the intended users, while focusing on their intended use of the evaluation. Highly personal and situational; no particular content, model, or method is emphasized. Focused on intended use by intended users. Evaluations should be judged by their utility and actual use; therefore evaluators should facilitate the evaluation process and design with consideration, from beginning to end, of how decisions will affect use. "People matter. Relationships matter. Evaluation is not just about methods and data."

Personal Factor

Patton. Those who actively and personally care about the evaluation and its findings. "The personal factor is the presence of an identifiable individual or group of people who personally care about the evaluation and the findings it generates. Where such a person or group was present, evaluations were used; where the personal factor was absent, there was a correspondingly marked absence of evaluation impact."

Active-Reactive-Interactive-Adaptive

Patton: "Active-reactive-interactive-adaptive" describes the nature of the consultative interactions that go on between utilization-focused evaluators and intended users. The phrase is both descriptive and prescriptive: it describes how real-world decision making actually unfolds, but it is also prescriptive in alerting evaluators to consciously and deliberately act, react, and adapt in order to increase their effectiveness in working with primary intended users. Patton refers to this as an orientation evaluators must adopt during the work of evaluation so that they can be effective; it is not a straightforward 4-step process one must follow, as in Empowerment Evaluation, but a reflection of how effective evaluation practice, according to Patton, happens. Related: interactive evaluation practice, the "intentional act of engaging people in making decisions, taking action, and reflecting while conducting an evaluation study"; the intent is to build thoughtful interaction between evaluator and stakeholders and to create situations where people can meaningfully discuss issues related to the study and engage intentionally.

Process Use

Patton & Fetterman Focus on individual changes in thinking and behavior as a result of learning to think evaluatively Process use=learning how to learn; the learning that occurs during the evaluation process Evaluation findings are short lived, but teaching someone to think evaluatively has lasting impacts on programs How people and organizations benefit from being involved in an evaluation process

Situational Responsiveness

Patton's term referring to evaluators' need to approach evaluations flexibly. This means evaluators are not bound to any set of metrics or approaches that does not help them provide information that is useful for primary intended users.

Developmental Evaluation

Patton's description of a specific niche within utilization-focused evaluation. Stresses innovation (new programs, changes, policy reform, etc.) and development to aid adaptation within a complex environment (many interacting elements with no central control; uncertainty about how to solve problems). Programs undergo ongoing development in response to changing conditions and complex contextual dynamics. Evaluators bring evaluative thinking and data to bear as the team conceptualizes, develops, and tries new approaches for new groups. For programs that change in significant ways, new evaluation questions emerge and goals and strategies evolve.

Readiness for Organizational Learning and Evaluation (ROLE)

Preskill. An instrument designed to help an organization determine its level of readiness for implementing organizational learning and the evaluation practices and processes that support it. Results can be used to identify the existence of learning-organization characteristics, diagnose interest in conducting evaluation that facilitates organizational learning, identify areas of strength to leverage evaluative inquiry processes, and identify needs for organizational change and development. The organization may use the results to focus its efforts on improving or further strengthening areas that will lead to greater individual, team, and organizational learning.

Appreciative Inquiry

Preskill. Orients people toward what they are doing well and increases excitement. An "approach to organizational change and development... that builds on past successes (and peak experiences) in an effort to design and implement future actions." An alternative to focusing on areas for improvement, which is not always effective due to the negative language associated with it.

Assets Based Approach

Preskill (???). Asset-based approaches recognize and build on a combination of the human, social, and physical capital that exists within local communities.

Evaluative Inquiry for Learning in Organizations (EILO)

Preskill's explicit approach that aims to have evaluation play an expanded and more productive role *within* organizations; evaluation becomes ongoing, reflexive, and embedded in organizational practice. "EILO is a data-based approach to organizational learning and change. By focusing on the use of evaluative inquiry processes within organizations rather than across large-scale, multisite programs, authors Hallie Preskill and Rosalie T. Torres are able to bridge the gap between what research 'says' about individual, team, and organizational learning and what it 'says' about evaluation." Evaluative inquiry is an ongoing process for investigating and understanding critical organizational issues, fully integrated into the organization's work practices; for example, organization members' interest and ability in exploring critical issues using evaluation logic, organization members' involvement in evaluation processes, and the personal and professional growth of individuals within the organization. Evaluators should work with stakeholders to apply learning from evaluation processes and findings; learning from the evaluation process is an important goal; and EILO acknowledges that evaluation occurs within a complex system that is influenced by organizational infrastructure.

Describe the roots of the evaluation tree

The roots describe the growth of the field based upon its primary underlying "drivers":
1. Social accountability: generating systematic information for decision making (below use).
2. Social inquiry: systematic study of the behavior of groups of individuals in various social settings by a variety of methods (below methods).
3. Epistemology: what is knowable and who can know it; post-positivism (there is a truth), constructivism (all people have different experiences), pragmatism (use what is best from both sides) (below valuing).

Describe the branches of the evaluation tree

Serves as a teaching tool by highlighting theorists' primary and secondary orientations toward evaluation practice.
Use: emphasis on learning, instrumental use, formative evaluation, involvement of stakeholders; incorporates process use and ECB.
Methods: emphasis on post-positivism and methodology (validity, causal mechanisms, etc.).
Valuing: evaluation is not value free; whose values count and how do we place value?

Role of Theory (three types) and Evaluation

Social science theory (basic research), program theory (how does the program work?), and evaluation theory (how should we practice evaluation?) all influence each other and *evaluation practice*. For example, something supported by research may not work in the program, which could feed back into social science and lead to revising a theory.

Shadish, Cook, Leviton (1991) Stages Organization + Evaluators in those stages

Stage 1: Truth (Scriven and Campbell). Stage 2: Use (Weiss, Wholey, Stake). Stage 3: Contingency (Cronbach, Rossi). A chronological evolution of theories.

CDC Framework (What are the steps?) and Standards (FUPA)

Standards: Utility (who needs the information, and what do they need?), Feasibility (money, time, effort), Propriety (ethical evaluation), Accuracy (technically adequate to inform decisions).
1. Engage stakeholders
2. Describe the program (logic model/program theory)
3. Focus the evaluation design (state the evaluation purpose and questions, select the design)
4. Gather credible evidence (collect primary or secondary data)
5. Justify conclusions (consider data analysis, standards, and interpretation/judgment)
6. Ensure use and share lessons learned (disseminate findings, check in after reporting, etc.)

Evaluability assessment

Step 1 of Wholey's Sequential Purchase of Information. Any program can be evaluated, but is the program *ready* to be evaluated? One should evaluate only programs that can give affirmative answers to: "Are the problems, intended program interventions, anticipated outcomes, and expected impact sufficiently well-defined and measurable? Is the logic laid out clearly enough to be tested? Who is clearly in charge of the problem? What constraints exist on his ability to act? What range of actions might he reasonably take or consider as a result of various possible evaluation findings?" Purpose: clarify program intent, explore program reality, and assist policy, management, and evaluation decisions.

Rapid Feedback Evaluation

Step 2 of Wholey's Sequential Purchase of Information. Pilot studies designed to estimate program effects, indicate the range of uncertainty in the estimates, and produce tested designs for more definitive evaluation efforts. Provides a *quick and cheap* preliminary assessment of program performance in terms of agreed-on program objectives and performance indicators, plus designs for a more valid, more reliable full-scale evaluation. Consists of five steps: existing data collection, new data collection, preliminary evaluation, design of a full-scale evaluation, and assisting policy and management decisions.

Performance Monitoring

Step 3 of Wholey's Sequential Purchase of Information. Set up data collection to get performance metrics; measure program performance and compare it to prior or expected performance. Performance measurement systems and program evaluation studies should be mutually reinforcing: evaluation studies are more feasible, cheaper, and more useful when performance criteria have been clarified and performance data are collected on a regular basis. Establishing causation is of course difficult, but even without it evaluators and policy makers can adjust activities when performance leaves too much to be desired, and, together with *qualitative case studies*, outcome monitoring is usually the most feasible evaluation alternative. Has four steps: establish data sources, collect data on program outcomes, compare program outcomes with prior or expected outcomes, and assist policy and management decisions.

Critical Friend

The empowerment evaluator (Fetterman) is considered to be a critical friend This individual has evaluation expertise but serves as a coach, advisor, or guide, rather than "the expert". The evaluation is in the hands of the people in the program, but a critical friend helps to keep it on track and rigorous.

Donald Campbell: Purpose, Approach, Audience, Engagement, Value Judgments, Political Setting

The internal validity, experiments guy. Trunk of the methods branch.
Ultimate purpose: identify effective solutions to social problems.
Approach: focus on causal questions, reducing bias, high internal validity; preference for experimental designs and *summative (black box) evaluations*.
Audience: public policy decision makers.
Stakeholder engagement: not engaged for the evaluation, but encourages a disputatious community of scholars to enhance evaluation practice; would love an experimenting society.
Value judgments: left to others.
Political setting: administrators should work with evaluators to employ social reforms in a manner that assists good evaluation.
Key terms: experimenting society, evolutionary epistemology, disputatious community of scholars, internal validity, single truth approximated with error.

Disputatious community of scholars

The work of one group is constantly checked and challenged by others; truth emerges from and because of this community of truth seekers. These scholars debate, conduct multiple evaluations of one program, and reanalyze others' evaluation data: *peer criticism*. Evaluators are at the core of this group.

Action and Change Model Program Theory

Theorized by Huey Chen. Program theory: a conceptual framework that evaluators use to facilitate stakeholders in describing an intervention program or to guide an evaluation. Related to logic models, but distinct from them because program theory emerged from theory-driven evaluation.
Action model: prescriptive assumptions (what actions must be taken to produce change); a plan for arranging staff, resources, setting, and the supporting organization to reach the target population and deliver the intervention. This must be implemented appropriately to achieve the transformation process in the change model. Covers what to consider to determine whether you can actually carry out the intervention (like pre-implementation).
Change model: describes the intervention, determinants, and outcomes; descriptive assumptions about what causal processes are expected to happen to attain program goals. Similar to the parts of a logic model.

Theory of Action and Theory of Use

Theory of action: the operating theory about how a program or organization works, based on the views of program personnel. Theory of use: the actual program reality, the observable behavior of stakeholders. In empowerment evaluation, a theory of action is created at one stage and tested against the existing theory of use at a later stage; a new theory of action is also created when planning for the future. One must determine whether the theory of action and theory of use are in alignment, out of alignment, or in conflict.

Jean King's 4 Principles for Driving her work

These principles relate to King's commitment to fostering use by specific people in specific settings.
Responsibility for use: the evaluator should be responsible for making results useful.
Interpersonal skills: relationships matter; all evaluation is participatory because an evaluator *must* interact with someone.
Learning experience: participating in program evaluation is a learning experience for those who take part (including the evaluator).
Free-range evaluation: the highest form of evaluation, which lives independently in an organization, lives in a natural setting, and reproduces itself in the organizational context.

Stage 1 of Shadish, Cook, Leviton (1991) Organization

Truth: Scriven and Campbell. Focused on searching for effective solutions to social problems; truth seekers; summative evaluation; causal questions and internal validity; assume a small subset of stakeholders.

Strategic Learning and Evaluation Systems (SLES)

Typically evaluation in orgs is not aligned to a strategy. To increase the value of the evaluation, it should be connected with a strategy IMPORTANT: A strategic approach to evaluation requires a *clear vision* for evaluation (plus stakeholder principles and values); a culture that fosters individual, group, and organizational learning; a compelling and cogent *strategy* (systems map, theory of change, evaluation questions); coordinated *evaluation and learning* activities (reporting, communicating findings, conducting program focused eval); and a *supportive environment* (leadership, HR, IT, management systems). Good strategic learning and evaluation questions address process, impact, organization, and field-level outcomes. They guide the organization's evaluation activities for the next 1 to 3 years.

Huey Chen: Purpose, Approach, Audience, Engagement, Value Judgments, Political Setting

Ultimate purpose: "...to produce useful information that can enhance the knowledge and technology we employ to solve social problems and improve the quality of our lives."
Approach: theory-driven evaluation, integrative validity approach, contingency based; articulate program theory (action and change model); tailor design and methodology to context; formative preference.
Audience: program planners/implementers.
Stakeholder engagement: open-closed-open; program planners/implementers are involved in developing program theory and defining questions AND at the end when interpreting and taking action, while data collection/analysis is up to the evaluator.
Value judgments: the evaluator is the expert and makes value judgments based on analysis.
Political setting: recognizes that evaluation is conducted within contexts that are affected by politics; skeptical of policymakers, noting that they are not trained to set goals for social science programs but do so anyway (need to use caution in evaluating stated goals).
Key terms to know: comprehensive evaluation typology, program theory (action and change model), viable validity, bottom-up approach to validity.

David Fetterman : Purpose, Approach, Audience, Engagement, Value Judgments, Political Setting

Use branch, leaning toward valuing.
Ultimate purpose: to foster program improvement (by achieving closer alignment of the theory of action with the theory of use) through empowerment, self-determination, and process use (like MQP).
Approach: empowerment evaluation (mission, taking stock, planning for the future); a form of self-evaluation; methods neutral.
Audience: all stakeholders.
Stakeholder engagement: everyone (admin to CEO) in every aspect of the evaluation; the evaluator is a critical friend.
Value judgments: it is not the job of the evaluator to make value judgments; participants monitor and examine progress made on goals established during "planning for the future" in the second, third, fourth, etc. "taking stock" phase.
Political setting: recognizes it; aims to promote social justice but notes that everyone can increase self-determination.
Key ideas: empowerment, self-determination, process use, theory of action and theory of use, critical friend.

Brad Cousins: Purpose, Approach, Audience, Engagement, Value Judgments, Political Setting

Use branch, no lean.
Ultimate purpose: increased program effectiveness through integration of evaluation into the organizational culture.
Approach: Practical Participatory Evaluation; collaborate throughout the evaluation with individuals who have a vital stake in the program or evaluation; all designs and data collection methods are possible.
Stakeholder engagement: deep involvement from a limited number of primary stakeholders with a vital stake in the program; characterized by control of technical evaluation decision making balanced between evaluators and stakeholders.
Value judgments: made by nonevaluator stakeholders; informed judgment fostered by the evaluator.
Political setting: recognizes that micro-politics can affect the success of P-PE.
Key ideas: ROE, ECB, capacity to do and use evaluation, data use leads to data valuing.

Hallie Preskill: Purpose, Approach, Audience, Engagement, Value Judgments, Political Setting

Use branch, no lean (because of ECB).
Ultimate purpose: organizational learning and development.
Approach: Evaluative Inquiry for Learning in Organizations, participatory approach; examine the readiness of the organization for learning, evaluation, and change; many designs and methods possible.
Audience: stakeholders in decision-making positions.
Stakeholder engagement: inclusive stakeholder approach (primary, secondary, and tertiary); participatory in nature, so they are engaged in all aspects of the evaluation.
Value judgments: stakeholders are likely to make any value judgment that occurs; the evaluator is a coach or consultant.
Political setting: the evaluator conducts work in a volatile context and should be aware of and accommodate it as necessary.
Key ideas: ECB, ROLE, appreciative inquiry, transformational learning, strategic learning and evaluation systems.

Jean King: Purpose, Approach, Audience, Engagement, Value Judgments, Political Setting

Use branch, no lean because of the ECB focus.
Ultimate purpose: make the world a better place.
Approach: interactive evaluation practice (basic inquiry tasks, interpersonal participation quotient, evaluation capacity building); engage stakeholders throughout; use an array of methods; build ECB and free-range evaluation.
Audience: broad, but within the principles of facilitating "learning" and "use".
Engagement: extensive, in all aspects of the evaluation.
Value judgments: made in collaboration with stakeholders; participatory in nature.
Political setting: aware that politics play a role; guidance around ECB includes assessing whether the political context is conducive, leadership is supportive, and intended users have autonomy for the evaluation.
Key ideas: interpersonal factor, ECB, free-range evaluation, evaluator competencies.

Joseph Wholey : Purpose, Approach, Audience, Engagement, Value Judgments, Political Setting

Use branch, pointing toward methods.
Ultimate purpose: "...to improve program performance and results." Promote good government (use information on a regular basis to make the best programs possible --> assumes these programs do not go away).
Approach: Sequential Purchase of Information in 4 steps: evaluability assessment, rapid feedback evaluation, performance monitoring, and intensive evaluation.
Audience: federal program managers (a small group of stakeholders who have control over immediate program management and the power to make changes).
Stakeholder engagement: public program managers consulted regularly to describe the program, set criteria and levels of performance, and interpret results.
Value judgments: interpretation left to stakeholders, but the evaluator works with them to identify program success and reasonable benchmarks for progress.
Political setting: clearly a reality for these programs, and in many ways accommodated by the general nature of this approach (federal programs typically don't get cancelled).
Key ideas: sequential purchase steps.
Strengths: nailing down program logic and what is meaningful to people; he does not just skim over this.
Concerns: accuracy of information (people lie), labor costs, behavioral patterns (incentives changing behavior), timeliness.

Stage 2 of Shadish, Cook, Leviton (1991) Organization

Use: Wholey, Weiss, Stake Emphasize use and pragmatism Recognized that decision makers do not use evaluation results and summative evaluation was not taking place (go/no go decisions). Focus on formative evaluation, facilitating use, identify users, broader stakeholders, incremental approach to improvement

III Analysis

Weiss analysis of ideology-interest-information linkages in policy, to clarify effects of social science research. "Public policy positions taken by policy actors are the resultant of three sets of forces: their *ideologies* (philosophy, values, principles, political orientation), their *interests* (e.g. in power, reputation, financial reward) and the *information* they have. Each of these three forces interacts with others in determining participants' stance in policy making." Information is powerful when ideology and interests are in conflict

Program Theory

Weiss and Chen How a program is supposed to work (think logic models, theory of change, action models, etc.) Descriptive theory

Conceptual/Enlightenment Use

Weiss explains that people gradually change mindsets over time, ideas percolate, evaluation findings will slowly infiltrate thinking

Intensive Evaluation

Wholey's 4th Step of the Sequential Purchase of Information (rarely get to this stage) Major evaluation design choice is whether to test validity of causal assumptions linking program activities to program outcomes Preferred methods are *RCTs or quasi-experiments* (more costly than performance monitoring) Recognition of expense suggests these are not always necessary - we may choose previous tools

Small fleet of studies

Multiple small utos studies; per Cronbach, generalizability comes from this fleet of studies taken together.

