PUBAFRS 4000 Midterm 1
reliability
consistency of measurement
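A minimal sketch of one common reliability check, test-retest correlation; the instrument, respondents, and scores below are invented for illustration (Python 3.10+ standard library):

```python
# Test-retest reliability: administer the same instrument twice to the
# same respondents; a correlation near 1.0 indicates consistent measurement.
from statistics import correlation  # requires Python 3.10+

time1 = [4, 5, 3, 4, 2, 5, 3, 4]  # invented scores, first administration
time2 = [4, 4, 3, 5, 2, 5, 3, 4]  # invented scores, second administration

r = correlation(time1, time2)
print(f"test-retest reliability r = {r:.2f}")
```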
needs assessment - personas
creating portraits of representative users based on their motivations and behaviors
Considerable assistance can be obtained in planning a program evaluation from a) published evaluations. b) the Internet. c) informal conversations with other evaluators. d) all of the above.
d) all of the above.
needs assessment - focus groups
interactive sessions to gather input on needs and validate data from other tools
why would unexpected consequences occur
they may arise for particular subpopulations or because of changes in external factors
why evaluate
formative evaluation, implementation improvement, and unanticipated consequences
external validity
generalizability
needs assessment - interviews and observations
guided conversations with users and first-hand observations of how they use spaces
threats to internal validity
History/intervening events - "stuff" happens • Maturation - People grow up/change • Testing - The act of observing something/Hawthorne effect • Instrumentation - Measuring the same way? • Regression to the mean - People vary (over time) in random ways • Selection (bias) - Whoops, not so random • Attrition - Drop-out tells us something • Unequal treatment - Is the program effect likely to be the same? • Contamination - Is the control group influenced by treatment?
measures
Methods to measure criteria that indicate successful implementation and outcomes • Multiple sources and methods - Each method can lead to bias in measures
Policy response presumes....
Policy response presumes that gap is deserving of attention (and can be solved, or at least reduced)
type 1 error
Rejecting null hypothesis when it is true
type 2 error
failing to reject a false null hypothesis
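A hedged Monte Carlo sketch (standard library only, all numbers invented) of what a Type I error means in practice: when the null hypothesis is true, a test run at alpha = .05 still rejects about 5% of the time by chance alone.

```python
# Simulate many studies in which the null hypothesis is TRUE (both groups
# come from the same distribution) and count how often a two-sided z-test
# at alpha = .05 rejects anyway -- each such rejection is a Type I error.
import random
from statistics import mean, stdev

random.seed(1)
Z_CRITICAL = 1.96      # two-sided critical value at alpha = .05
N, TRIALS = 50, 2000   # per-group sample size, number of simulated studies
false_rejections = 0

for _ in range(TRIALS):
    a = [random.gauss(0, 1) for _ in range(N)]  # same true mean in both groups,
    b = [random.gauss(0, 1) for _ in range(N)]  # so the null hypothesis is true
    se = (stdev(a) ** 2 / N + stdev(b) ** 2 / N) ** 0.5
    z = (mean(a) - mean(b)) / se
    if abs(z) > Z_CRITICAL:  # rejecting a true null: Type I error
        false_rejections += 1

print(f"empirical Type I error rate: {false_rejections / TRIALS:.3f}")  # ~0.05
```

(A Type II error is the mirror image: drawing the two groups from distributions with genuinely different means and still failing to reject.)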
When an evaluation relates the cost of a program to its outcomes, this form of evaluation can be thought of as a) an evaluation of performance. b) an evaluation of efficiency. c) a variant of basic research. d) overly scientific.
b) an evaluation of efficiency.
Outcome evaluations provide the data to show whether a) a service has achieved appropriate results. b) a target population will use a service. c) the evaluator has made every effort to work with the program director in an effective manner. d) resources were spent on programs that are most needed.
a) a service has achieved appropriate results.
Using expert opinion as a form of program evaluation is especially useful when a) the program is complex and there are few clear objective criteria of effectiveness. b) there are readily observable outcomes from a program. c) it is essential that the causal connection between the program and the outcome be definitely understood. d) new medications are being evaluated.
a) the program is complex and there are few clear objective criteria of effectiveness.
Evaluators are especially concerned about the possibility of invalid conclusions because a. evaluations are more likely to have an impact on people's lives than basic research reports do. b. what they conclude cannot be applied. c. they work in applied settings in which mistakes cannot be tolerated. d. evaluation is a young field that has yet to prove its worth.
a. evaluations are more likely to have an impact on people's lives than basic research reports do.
Concern over whether the program funds are spent on the activities described in the approved program plan a. is part of evaluations of program implementation. b. indicates obsessive-compulsive thinking since the target population benefits regardless of what services are provided. c. should concern accountants; program evaluators focus on softer criteria. d. would not be of interest to governmental bodies responsible for supporting the program.
a. is part of evaluations of program implementation.
Selecting one particular variable to be the criterion of program effectiveness will a. probably corrupt it. b. complicate the data analysis. c. reflect the complexity of the program. d. be desired by most staff members.
a. probably corrupt it.
A program might be ineffective because the a. target population does not feel a need for the service provided. b. program treats the needs of the target population. c. program deals with needs that the population really has. d. program has a well thought-out impact model.
a. target population does not feel a need for the service provided.
statistical validity
accuracy/precision of the statistical conclusions
validity
accuracy of measurement
needs assessment - data analysis
analyzing data on usage, satisfaction, and trends to assess future needs
documenting need directly
asking vs. measuring
Census data can be used in program planning by a. asking key informants to evaluate the census data. b. contrasting one community with larger areas, such as a state. c. searching for a community's strong points. d. showing errors in census procedures.
b. contrasting one community with larger areas, such as a state.
Evaluators who learn that a treatment group shows a higher average level on measures of desired outcomes than a comparison group at a p < .05 level have a. completed the work necessary to show that the program is worthwhile and should be maintained. b. made a serious error since statistical significance tests are never used in program evaluations. c. not completed their work since field settings require an understanding of the extent of improvement. d. focused on a criterion that is value-oriented and, consequently, inappropriate for program evaluation.
c. not completed their work since field settings require an understanding of the extent of improvement.
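In the same spirit, a minimal sketch (invented scores, standard library only) of how an evaluator might report the extent of improvement with a standardized effect size such as Cohen's d, rather than stopping at p < .05:

```python
# Cohen's d: the treatment-comparison difference in standard-deviation
# units, which speaks to how LARGE the improvement is, not just whether
# it is statistically distinguishable from zero.
from statistics import mean, stdev

treatment = [72, 75, 78, 74, 77, 76, 73, 79]   # invented outcome scores
comparison = [70, 73, 74, 71, 75, 72, 70, 74]  # invented outcome scores

# Pooled standard deviation for two equal-sized groups.
s_pooled = ((stdev(treatment) ** 2 + stdev(comparison) ** 2) / 2) ** 0.5
cohens_d = (mean(treatment) - mean(comparison)) / s_pooled

print(f"Cohen's d = {cohens_d:.2f}")  # by convention, 0.8+ reads as "large"
```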
Desirable aspects of gathering data for program evaluation from records include the a. freedom to choose the most relevant outcome variables. b. uniformly high validity of archival data. c. thoroughness with which program records are routinely kept in many settings. d. non-reactivity of the data selected from program records.
c. thoroughness with which program records are routinely kept in many settings.
how to build a logic model
collect relevant information, define the problem and its context, build a table, check the logic, draw the model, verify with stakeholders
In conducting an evaluation of an innovative program, the most important responsibility of the evaluator is to a. conduct a valid evaluation. b. consider the generalizability of any positive findings. c. be sure that the most needy individuals get into the program. d. be sure that the evaluation does not harm the participants.
d. be sure that the evaluation does not harm the participants.
internal validity
did the program cause the change
types of evaluation pyramid
efficiency, outcome, process, needs assessment
4 types of validity
external validity, statistical validity, measurement reliability and validity, and internal validity
what is the difference between logic and impact models
logic models describe the process; impact models describe the outcomes
measurement reliability and validity
reproducibility
what are the elements of a logic model
resources (inputs), activities, outputs, short-term outcomes, intermediate outcomes, long-term outcomes and problem solutions, external influences and related programs
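A purely illustrative sketch of how those elements fit together, using an invented after-school tutoring program; every entry below is hypothetical:

```python
# Hypothetical logic model laid out element by element.
logic_model = {
    "resources (inputs)":    ["grant funding", "volunteer tutors", "classroom space"],
    "activities":            ["twice-weekly tutoring sessions"],
    "outputs":               ["sessions held", "students served"],
    "short-term outcomes":   ["improved homework completion"],
    "intermediate outcomes": ["higher course grades"],
    "long-term outcomes":    ["on-time graduation"],
    "external influences":   ["family circumstances", "related school programs"],
}

for element, examples in logic_model.items():
    print(f"{element}: {', '.join(examples)}")
```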
documenting need indirectly
similar communities, use of similar services
needs are....
socially constructed, dynamic, and population-specific
needs assessment - use cases
stories of how a future space will be used - who, where, why, and how
formative
test-driving the program
what is a needs assessment
the gap between "is" and "should be" • Requires a measure of "fact" • Requires a normative statement (and why the change is a good idea)
why do we do evaluations
to understand the program, to understand the organization, and to generalize to other programs/organizations
what information sources do we need
who is a stakeholder and what do they know; who is the client; prior studies and evaluations of similar programs
evaluation theory questions
why did a change occur, what is a good outcome, did anything unexpected happen
ethical dilemmas in evaluation
• Bias, wanting to see success - Choosing methods/findings to make the case • Advocacy rather than independence - What do you say about negative results?