Program Evaluation in Organizations (Russ-Eft) --- Russ-Eft, D., & Preskill, H. (2009). Evaluation in organizations. New York, NY: Basic Books.


Considerations in Developing a Program Logic Model (CH 5)

"By developing a logic model of the program to be evaluated, the evaluator and stakeholders gain an understanding of the evaluand's background and the evaluation's purpose."

Considerations in Developing the Evaluation's Rationale and Purpose (Ch 5)

"Clarifying the program's underlying assumptions, activities, resources, short- and long-term outcomes, as well as the stakeholders' needs and expectations for the evaluation."

Considerations in Developing an Evaluation Plan (CH 5)

"It is critical to take time to discuss the background of what is being evaluated, why it's being evaluated, what questions the evaluation should address, and who the intended users of the evaluation's findings will be."

What assumptions are made by use of surveys as a data collection method? (CH 10)

(1) The respondent can read the text, (2) the respondent is willing and able to provide truthful responses, and (3) if the respondent must search for information, he or she will do whatever is necessary to provide a response. Given these assumptions, it is critical to determine whether a survey is the best method to address the key evaluation questions.

Logistics: Three choices for survey distribution: (CH 10)

(1) administer surveys in one or more group sessions, (2) send them to individuals through the mail or email, (3) administer them through the organization's intranet or via the Internet.

Describe considerations that guide the selection of data collection methods (CH 7)

The key evaluation questions should guide the selection of methods, as should the following: • Evaluator/team skills • Resources • Stakeholder preferences • Level of acceptable intrusiveness • Cultural considerations • Validity and reliability • Availability of data • Timeliness • Objectivity

Considerations when making an observation plan (ch. 9)

- The focus of the observation - Length and number of observations - Depth of information required (and what the other collection methods are) - Location (intrusiveness) - Timing (varied for accurate representation) - Ethics (rights of the observed/approvals) - Observer training

The 1st Four Steps in designing an evaluation plan (Ch 5)

1. Describe rationale and purpose of evaluation 2. Develop a program logic model 3. Identify the evaluation stakeholders 4. Determine evaluation's key questions

Six strategies for increasing the viability of organizational evaluation (Ch 17)

1. Gaining commitment and support for evaluation work 2. Involving stakeholders 3. Understanding the evaluation context 4. Engaging in evaluation professional development 5. Choosing an evaluator role (internal, external, coach, mentor, facilitator) 6. Building evaluation capacity in organizations and taking action

8 Major Categories of costs to be considered for every evaluation (CH 15)

1. Personnel/Professional staff 2. Materials, supplies and equipment 3. Communications 4. Printing and reproduction 5. Travel 6. Facilities 7. Indirect, Overhead, or General Administrative costs 8. Miscellaneous or contingency costs
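As a rough illustration of how these categories roll up into a budget, here is a minimal Python sketch; all dollar figures, the 15% indirect rate, and the 5% contingency reserve are hypothetical, not from the text.

```python
# Hedged sketch: rolling direct cost categories into an evaluation budget,
# with indirect/overhead as a rate on direct costs plus a contingency line.
# All dollar figures and both rates are hypothetical.

direct_costs = {
    "personnel": 30_000,
    "materials_supplies_equipment": 2_000,
    "communications": 500,
    "printing_reproduction": 800,
    "travel": 3_000,
    "facilities": 1_500,
}

direct = sum(direct_costs.values())
indirect = direct * 0.15      # hypothetical overhead rate
contingency = direct * 0.05   # hypothetical contingency reserve
total = direct + indirect + contingency

print(direct)        # 37800
print(round(total))  # 45360
```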

Describe issues with instrument validity(Ch7)

1. Test content (content validity) 2. Response and scoring process validity (do they represent the concept being measured? i.e., did a participant respond truthfully or for social acceptance?) 3. Internal structure of instrument (factor analysis) 4. Relations with other variables (encompasses earlier types of validity) -- for example: Criterion-related validity

Logic Model (ch 2)

A "visual depiction of how a program is supposed to operate and the theory that drives the program." (Ch. 2)

Determining audiences for Communication and Reporting (CH 14)

After the data have been collected and analyzed, communicating and reporting the final findings takes center stage. The fundamental purpose of evaluation is to improve a particular program, and that purpose would be defeated without presenting the findings in a manner the audiences (stakeholders) can understand and use. All potential audiences (stakeholders) must be taken into consideration when choosing the format for presenting results. The chapter states that the "best way to help ensure an evaluation relevance and sensitivity to its different stakeholders is to use a fully participatory approach" (p. 405). Evaluators are therefore advised that: all stakeholders are potential audiences; all stakeholders must be considered in terms of the evaluation's purpose; the primary stakeholders are the main reporting audience; and the diversity of the audience (gender, age, cultural orientation, nationality, educational level, et cetera) must be considered. Presentations of the findings (both interim and final) must be tailored for easy assimilation by this diverse audience (p. 405), because if audiences do not understand the purpose of the study and the data analysis methods, they cannot use the recommendations of the findings.

Some Types of Data

Archival (documents, records), Interviews, observations, surveys

Common Ethical Challenges (Ch 4- pp. 126-131).

• Choosing a test primarily because of personal familiarity with it • Responding to the concerns of one interest group more than another • Conducting a required evaluation that cannot produce useful results • Conducting an evaluation without sufficient skills or experience • Failing to identify the values of a right-to-know audience • Losing interest once the final report is delivered • Changing evaluation questions to match the data analysis • Promising confidentiality when it cannot be guaranteed • Being pressured by a stakeholder to alter the presentation of findings • Being reluctant to present findings fully • Discovering behavior that is illegal, unethical, dangerous, etc. • Being unsure of one's ability to be objective or fair in presenting findings • Misinterpretation or misuse of reports: "the greatest misuse of evaluation findings occurred during the communicating and reporting stage" (p. 117)

Important steps throughout/during an evaluation (Ch 4- p. 120)

• Communicate purpose/intention • Obtain data from a wide variety of sources • Be professional • Train stakeholders in methods of data collection and analysis, and in how/when to communicate findings

Key considerations with Communication and Reporting (CH 14)

Communicating and reporting evaluation results serves two fundamental purposes: communicating the evaluation activities and communicating the findings. For the entire exercise to be successful, the evaluator(s) must define the purpose and: 1. Collaborate with the stakeholders in decision making about evaluation design and activities; 2. Inform those directly involved in the process about specific upcoming evaluation activities; 3. Keep everyone involved informed about the progress of the activities; 4. Present (in collaboration with the participants) the interim or initial findings; and 5. Present (in collaboration with the stakeholders) the complete or final findings (p. 403).

What are the functions of the evaluation plan? (Ch 5)

Contract between evaluator and organization Document to guide through the evaluation process

Describe types of Economic analyses of learning, performance, and change interventions.

Cost analysis: The results of cost analysis are most useful for selecting among several alternatives when all alternatives display approximately equal effects or benefits. Cost-effectiveness analysis focuses on the direct outcomes, effects, or impacts of a program. Cost-benefit analysis and return on investment (ROI) are other methods for weighing a program's costs against its benefits.
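The cost-benefit and ROI calculations mentioned above reduce to simple arithmetic; a minimal Python sketch follows, with hypothetical program figures and helper names of my own choosing (not from the text).

```python
# Hedged sketch of the arithmetic behind cost-benefit analysis and ROI.
# Function names and the program figures are hypothetical, not from the text.

def benefit_cost_ratio(benefits, costs):
    """Total benefits divided by total costs; > 1 means benefits exceed costs."""
    return benefits / costs

def roi_percent(benefits, costs):
    """Return on investment: net benefits over costs, as a percentage."""
    return (benefits - costs) / costs * 100

# Hypothetical training program: $50,000 in costs, $80,000 in benefits.
costs, benefits = 50_000, 80_000
print(benefit_cost_ratio(benefits, costs))  # 1.6
print(roi_percent(benefits, costs))         # 60.0
```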

Advantages of collecting Observation Data (CH 9)

Easily collected; patterns/theories develop; highlights discrepancies between fact and fiction; evaluator can see what participants do not; complements other methods

Benefit of group administration of survey (CH 10)

Ensures all surveys are completed and returned, helps respondents overcome fears and questions about completing the survey, less time for administration and follow-up.

Observer roles (ch. 9)

Full Participant, Partial, Non-Participant

Types of disclosure prior to observation (ch.9)

Full (a full explanation of the real purpose is given to everyone), Partial (some information is given), Covert (no one is given any information). Considerations: ethics, validity/reliability

Considerations in Developing the Evaluation's Rationale and Purpose (CH 5)

It is helpful to sum up the purpose of the evaluation in a two- or three-sentence purpose statement. "It is not unusual for there to be several purposes for conducting an evaluation." The text recommends "including a sentence or two that indicates how the evaluation results will be used."

Important steps at the start of an evaluation (CH 4, p. 119)

• Make clear the intended goals • Know the political consequences of answering the evaluation questions • Develop a logic model • Include as many stakeholders as possible in planning and implementation • Be deliberate, careful, and clear

Some advantages of collecting archival data (Ch8)

• May be easily collectible • May provide quantitative data • Cost-effective • Shows historical context/chronology of events • Gains information beyond surveys/interviews • Increases credibility of data (perceptual benefit) • Requires minimal training • Allows creation of new variables/scales

Some disadvantages of archival data (Ch8)

• May be inaccessible or have privacy restrictions • Incomplete representations • Time consuming • May require sophisticated data analysis • Documents/records may be limited, incomplete, or too small a sample • Terms/collection methods may have changed over time • Data might fail to address key questions (and you might not know until you collect)

Things to keep in mind when Developing Key Evaluation Questions (CH 5)

Outcome of the focusing phase: a list of broad, guiding questions the evaluation will seek to address. The questions that are developed are the result of input and negotiation among the evaluator and stakeholders. • Not so specific that they could be survey questions • Open-ended • Grouped by themes or categories • Prioritized (need to answer now, would be nice to answer, etc.) • Agree on a set of questions that meets the most immediate concerns

Types of stakeholders (CH 5)

Primary (e.g., funding agencies, developers, staff) Secondary (e.g., participants, customers, parents) Tertiary (e.g., future participants, legislators, community) (Examples can vary depending on role) (CH 4)

Describe Potential Problems in Analyzing Data (Ch 13)

Problems with data analysis can generally be traced to actions taken during earlier phases of the evaluation. The first problem area is an evaluation that lacks a proper focus or whose focus changes midstream. The second problem area, associated with data collection, is missing data or non-response.

Approaches to Qualitative Data Analysis (Ch 13)

Qualitative data analysis is also referred to as thematic analysis or content analysis and is used to identify themes and patterns in data. To analyze qualitative data, one must decide between a case study analysis (a case description per person in the study) and a cross-case analysis (categorizing responses from several people in the study). Categories for qualitative data can come from: ◦ Theoretical literature, via grounded theory or analytic induction methods ◦ An existing framework for categorization ◦ The current data set

Guidelines for Recruiting Participants for Individual and Focus Group Interviews (CH 11)

Recruiting participants: interviewers need to make initial contact • By letter or phone; describe the purpose and what will be done with the information • If qualifying questions are needed during the recruitment stage, ask them early • Assure confidentiality if possible • Provide an incentive for participation • Arrange interview times at the earliest possible date for phone or in-person interviewees (beginning and ending times) • Identify the location of the meeting • Leave contact information: phone, email, fax Scheduling interviews: • At least 4 weeks in advance • Call and leave a message; if no response, email or fax; if still no response, determine why • Assure all participants they have the right to not answer questions

Exploring Relationship: (determine whether there is a relationship between two variables or among several variables) What types of relationships might be found/looked for? (CH 13)

Relationship between two variables - A relationship, or "correlation," may exist between one variable and another. A positive correlation (direct relationship) would indicate, for example, that respondents who give high ratings of satisfaction also tend to use the skills on the job; a negative correlation (inverse relationship) would indicate that those who give high ratings tend not to use the skills on the job. A zero correlation would indicate no relationship.
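A minimal Python sketch of the correlation idea, using a hand-rolled Pearson coefficient; all data values are invented (positive when skill use rises with satisfaction, negative when it falls).

```python
# Hedged sketch: Pearson correlation between two variables, e.g. satisfaction
# ratings and reported on-the-job skill use. All data values are invented.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient for two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

satisfaction = [1, 2, 3, 4, 5]
skill_use_up = [2, 3, 5, 6, 8]   # rises with satisfaction -> positive r
skill_use_dn = [8, 6, 5, 3, 2]   # falls with satisfaction -> negative r

print(pearson_r(satisfaction, skill_use_up) > 0)   # True
print(pearson_r(satisfaction, skill_use_dn) < 0)   # True
```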

Important steps in the Communicating and Reporting Phase: (Ch 4- p. 121)

Report with clear, non-inflammatory language. Report in a variety of formats to a wide variety of audiences. Make sure your methods are sound, for internal validity and credibility. Be timely. Prioritize the findings and recommendations and translate them into long-term benefits. "Collaborative evaluation approaches are crucial as a means for mediating the political nature of evaluation" (p. 121)

Types of Survey Questions (CH 10)

Seven question types. ◦ Open-ended questions: Respondents write a response using their own words. ◦ Fill in the blank: Asks the respondent to insert a word or number in a blank statement. ◦ Two-choice (aka dichotomous choice): Allows choosing between two alternatives. ◦ Multiple choice: Provides respondents with several choices from which to select one or more responses. ◦ Rating scale: A variant of multiple choice, such as the Likert scale, behaviorally anchored scale, and behavior observation scale. ▪ Likert-type scales: A scaling method in which the low end of the scale represents a negative response and the high end represents a positive response; allows responses of varying degrees to each specific survey item. ▪ Behaviorally anchored scales: Various levels on the scale are "anchored" to performance categories for a specific job category or position. ▪ Behavior observation scale: Uses behavioral items from the behaviorally anchored scale, but uses the scale or rating options from the Likert-type scale. ◦ Constant sum: Asks the respondent to assign values to each of the options indicating the level of preference, interest, or importance; values must sum to a specific amount such as 100%, 24 hours, etc.

Considerations for Content of Communications and Reports (CH 14)

The book outlines four main areas to consider for effective written communication of evaluation findings: a) the writing must be clear and jargon-free; b) tables and figures must be used appropriately; c) qualitative and quantitative findings must be communicated in unambiguous terms; and d) negative findings must be communicated in a manner that will not rattle stakeholders. Reporting negative findings can be threatening; it should therefore be noted that the objective of any evaluation is not to demonize any group or individual, but to point out areas of concern for the purpose of continuous improvement (pp. 413-417).

Chapter 17 look-back (CH 17)

The concluding eight bullet points help evaluators expand their work within the organization. They are important references to use later - page 490.

Developing Key Evaluation Questions (CH 5)

These are the questions which the evaluation seeks to answer. "Most program evaluations include anywhere from three to five key questions, although some complex and multifaceted evaluations may require as many as ten to twelve questions."

Timing of Communicating and Reporting of the evaluation (CH 14)

This section notes that the "more frequent" the evaluation communicating and reporting, "the better." Thus the evaluator(s) must communicate with the audience about decisions on the evaluation activities and the issues that are the focus of the evaluation. As the evaluation proceeds, specific dates and sufficient time must be provided for communicating with and reporting to the diverse audiences or stakeholders (p. 415). For this exercise to be effective, it must be done collaboratively. E-mail and other fast electronic and social media can be used to schedule meetings and give the stakeholders ample time to plan for them.

Provide 3 examples of understanding to be gained from observation (Ch 9)

Understanding the participant's performance Understand how/when participant learns Understand how well a change is being implemented

Disadvantages of collecting observation data

Validity (Observer bias); reliability (only one observer's viewpoint); observer training needed; costly/time-consuming

Describe strategies for effective focus groups (CH 11)

Valuing silence • Give the interviewee an opportunity to contemplate a question and formulate a response Managing the unexpected • Interruptions in focus groups; participants may have to participate in a later group • Problems that are not necessarily related to the evaluation • Disclosure of illegal activity by the participants • The interviewer should not be expected to deliver any service or follow up on personal problems

Benefit of Mail Administration, Email and Web-Based Administration of Surveys (CH 10)

Benefit: works well when the individuals taking the survey are in many different locations. Drawbacks: difficult to oversee that all surveys are completed and returned; more costly due to postage, paper, etc.; difficult to know whether the respondent or someone else completed the survey.

You are about to start the 1st step of designing an evaluation plan, which is to describe the rationale and purpose of evaluation. Describe a process to use to achieve this. (Ch 5)

a. Use a focusing meeting with stakeholders b. Tasks of focusing the evaluation: i. Clarifying the program's underlying assumptions, activities, resources, short- and long-term outcomes, as well as stakeholders' needs and expectations for the evaluation c. Opportunity for negotiating which goals should be evaluated d. Several methods can be used, e.g., an initial conversation with stakeholders (roundtable) i. Goal: Clarify the explicit as well as unstated goals and objectives of any program, process, product, or object being evaluated Sample participant questions: • Role and length of time with the program • Why (and how much) they are interested in the evaluation • Trends observed • Concerns • What they hope to learn • What decisions they want to make based on the evaluation results • What they believe the goals of the program are (and other important goals, i.e., tacit, unstated ones) • What claims they wish to make about the program (i.e., if the program were working well, what would you say about it?) See sample evaluation rationales (p. 146)

Reliability (ch 7)

the consistency of: • The items within the instrument • Data collected using the instrument when those data are collected over time
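The excerpt names consistency of the items within an instrument; one standard statistic for that kind of internal consistency (not named in the excerpt) is Cronbach's alpha. A hedged Python sketch with invented item scores:

```python
# Hedged sketch: Cronbach's alpha, a standard statistic for the internal
# consistency of items within an instrument (the statistic itself is not
# named in the excerpt). Item scores below are invented.
from statistics import variance

def cronbach_alpha(items):
    """items: one list of scores per survey item, aligned by respondent."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    sum_item_var = sum(variance(scores) for scores in items)
    return k / (k - 1) * (1 - sum_item_var / variance(totals))

# Three items answered by five respondents:
items = [
    [4, 5, 3, 4, 5],
    [4, 4, 3, 5, 5],
    [3, 5, 3, 4, 4],
]
print(round(cronbach_alpha(items), 2))  # 0.83
```

Values near 1 indicate the items move together; by rough convention, values above about 0.7 are taken as acceptable consistency.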

Concurrent validity (ch 7)

the degree to which the data correctly measures some condition or criterion

Predictive validity (Ch 7)

the degree to which the data correctly predicts some condition or criterion

List/Describe Critical Interviewer Skills (CH 11)

• Acute observation skills. • Ability to deal with the unexpected. • Neutrality • Interview experience • Ability to follow procedures • Educational background

THINGS TO BEWARE OF WHEN CONDUCTING ANALYSIS of data (Ch 13)

• Allot sufficient time to conduct your analysis • Only perform analysis when alert and attentive to categories and coding of categories • When you encounter a piece of data that is ambiguous, do not make assumptions or inferences about what the respondent "meant"

Confidentiality of Interviewees' Responses (Ch 11)

• Anonymity and confidentiality are difficult in focus groups, but can be maintained for in-person and phone interviews • Assure interview transcripts, notes, and tapes will only be available to the evaluation team

Guidelines for Constructing Surveys (CH 10)

• Anonymity and confidentiality must be determined for each method of data collection. Anonymity means that no one, including the evaluator, can identify who provided the survey data. Confidentiality means that you might be able to identify the respondents and their responses, but you guarantee that the information will go to no one else.

Considerations when Facilitating the Interview/Focus Group (CH 11)

• Brief introduction: purpose of the evaluation and explanation of how the participants were selected • Begin with an easy, nonthreatening question to relieve anxiety • Focus group participants don't all arrive at the same time; prepare a short survey for the early arrivals, which should include information that might otherwise be collected at the end • In a structured interview approach, ask questions as they are written in the guide; changes should be limited • In a semistructured or structured approach, avoid rephrasing a question if it is not immediately answered • Ask all questions, unless inappropriate or irrelevant • Multiple-choice questions in an interview: ◦ Do not change the wording of questions ◦ Define words if necessary ◦ Record the interviewee's answer, even if it is not one of the choices • At the conclusion of the interview, ask whether there are any questions and say thank you

Types of Measurement Bias (p. 224 - Ch 7)

• Changes in record-keeping procedures (solution: calibrate against a standard) • The measurement itself causes a change (solution: make the observation unobtrusive or routine) • Guinea pig or Hawthorne effect (solution: use archival or unobtrusive methods) • Response sets (e.g., selecting all "Extremely well" responses) (solution: vary the form of the questionnaire) • Interviewer effects (solution: use unobtrusive methods or vary/test interviewer characteristics) • Nonresponse bias (solution: use techniques to increase the response rate; sample nonrespondents and weight responses to represent all respondents, to avoid having unsatisfied participants be the only ones likely to respond)
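The "weight to represent all respondents" remedy for nonresponse bias can be sketched numerically; the group labels, counts, and mean ratings below are hypothetical, chosen so the over-represented group visibly skews the unweighted mean.

```python
# Hedged sketch of "weight responses to represent all respondents": scale each
# subgroup by its share of everyone invited, not its share of those who
# replied. Group labels, counts, and mean ratings are hypothetical.

invited     = {"satisfied": 60, "unsatisfied": 40}
responded   = {"satisfied": 30, "unsatisfied": 10}
mean_rating = {"satisfied": 4.0, "unsatisfied": 2.0}

# Unweighted mean over-represents the satisfied group (30 of 40 responses):
n_resp = sum(responded.values())
unweighted = sum(mean_rating[g] * responded[g] for g in responded) / n_resp

# Weighted mean restores each group's share of the invited population:
n_inv = sum(invited.values())
weighted = sum(mean_rating[g] * invited[g] for g in invited) / n_inv

print(unweighted)  # 3.5
print(weighted)    # 3.2
```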

Two types of criterion-related instrument validity (examine the correlation between the measurement and the criterion) (Ch 7)

• Concurrent validity - the degree to which the data correctly measures some condition or criterion • Predictive validity- the degree to which the data correctly predicts some condition or criterion

To determine the degree of intrusiveness the program can tolerate during evaluation, you need to consider the methods of data collection in order of obtrusiveness. List data collection methods from least to most intrusive. (CH 7)

• Currently available archival data (easy, cheap, available, less distortion; provides chronology/longitudinal data) • Modified archival data • Observations in natural settings (ranging from routine counting of certain occurrences to writing narrative descriptions of observations) • Observations in artificial or simulated settings • Surveys and questionnaires (particularly useful for closed-ended information; very difficult to develop well) • Paper, pencil, and computer tests (if asking how much or what was learned) • Individual and focus group interviews (rich, qualitative)

Describe the purpose and use of probing questions in interviews/focus groups (Ch 11)

• Designed to clarify, explain, and focus the comments of the interviewee on specific questions • Should be asked when the response only partially answers the question, when there is no response, when an irrelevant response is given, or to enhance the response with examples or critical incidents • Methods for probing: "tell me more," "I don't understand what you mean," "think of an example," "how do you feel about that?", repeating the question, summarizing the interviewee's reply

Describe recommendations for managing and implementing an evaluation plan (Ch 15)

• Develop at least one timeline • Develop at least one data collection management plan (see pp. 446-447) • See also evaluation management plans (p. 451) • Plan for workable solutions and contingency plans

Considerations for Recording Individual and Focus Group Interview Responses (CH 11)

• Digital audio and video recording-interviewee must agree, also take notes • Handwritten or computer written note-need to be accurate for recall of responses by interviewees, include all details and stories, don't summarize without details • Completing the interview transcript-transcriptions should be done within 24 hours of the interview

Professional Organizations and Standards of Practice (Ch 4- p. 125-131) Program Evaluation Standards (by the Joint Committee on Standards for Educational Evaluation from the Western Michigan University Evaluation Center):

• Feasibility • Propriety • Accuracy • Utility

Considerations with QUALITATIVE DATA ANALYSIS SOFTWARE (Ch 13)

• For the analysis of large amounts of qualitative data, the author recommends software packages designed for indexing the data and searching it for key words. Some examples are NUD*IST, QSR NVivo, ATLAS.ti, The Ethnograph, etc. • Software is not a substitute for a human being when it comes to appropriate categorization of the data

Two sources of errors of reliability (Ch 7)

• Inadequate sampling • Situational factors (can involve individual variation, such as fatigue, mood, etc., and errors in entering the data)

Disadvantages of Individual and Focus Group Interviews (CH 11)

• Individual interviewees may not interpret the questions in the same way • Both can be relatively expensive methods of data collection compared to surveys, checklists, or test data • Qualitative data collected from interviews take longer to transcribe and analyze • Some groups within an organization may refuse to participate in a focus group (then a phone interview or personal interview may be needed) • Both require skilled and trained interviewers • Both can be difficult to schedule

Advantages of surveys (Ch 10)

• Inexpensive • The same questions are presented in the same manner to all respondents • Most people in the US and western European countries are familiar with surveys • Some people feel more comfortable responding to a survey than to an interview • Tabulation of closed-ended responses is easier • May increase the likelihood of obtaining a representative sample

The Interviewer's Role in interviews/focus groups (CH 11)

• Influences caused by the interaction between the interviewer and interviewee must be recognized and defined • Language barriers should be identified • Dress more formally, showing respect for the participants • Interviewees may answer in an "acceptable" manner; the order of questions may lead interviewees to answer in a particular way • Develop a rapport with the interviewees ◦ Remain neutral during questioning and answering ◦ Write down the information to indicate the importance of what is being said ◦ Understand the communication style of the interviewee ◦ Create a relaxed atmosphere; you may want to memorize the guide

Strategies for pilot testing interview questions (Ch 11)

• The interviewer conducts a small number of interview sessions to determine specific items that need clarification or rewriting • Behavior coding: an observer views the pilot interviews (live or via tape or digital recording) and notes the interviewees' reactions to the interviewer and the questions

Considerations for pilot-testing surveys (CH 10)

• It is very important to pilot test any survey to identify any weaknesses that may have been overlooked. The respondents involved in the pilot should be representative of the target sample.

Benefits of structured data collection (CH 7)

• More common • Data are more easily analyzed • Ensures the information gathered is relevant to the purpose and the evaluation's key questions • Results may be primarily an artifact of the structure (a downside) See: Checklist for choosing data collection methods, p. 226

Secondary Stakeholders (Ch 5- p. 165-167)

• More removed from the daily operations of the program • May not have financial controls, but may have stake in outcomes • Examples: managers, administrators, participants, students, customers, clients, trainers, parents

Disadvantages of Surveys (CH 10)

• Often result in a low response rate that can threaten the external validity of the findings • Some questions may not have the same meaning to all respondents, threatening the reliability and validity of the information • Language and reading ability may affect the reliability and validity of the information • Without personal contact, it is difficult to be sure who actually completed the survey • Data are limited by the inability to probe respondents for more details • It takes a lot of time to develop and write good survey questions; it is critical to decide whether a survey is the best method given the cost

Handling Non-response Bias with Surveys (CH 10)

• Option one: Do nothing. • Option two: Do a non-respondent check. • Option three: Do a non-respondent follow-up.

Benefits of using multiple collection methods (CH7)

• Overall results are not as likely to be distorted by a single consistent source of bias • Whenever the results from several different data collection methods agree with one another (a form of triangulation), you can be more certain that the results are not an artifact of the data collection method but rather represent a truer indication of the program being evaluated • The use of multiple methods and convergent evidence increases the validity, reliability, and precision of the information gathered

Types of Surveys (CH 10)

• Postcourse Reaction Surveys: Most commonly used for evaluating learning and performance activities or events using feedback forms at the end of a day or program. • Behavioral or Skill Measures: May be used before and after a learning, performance, and change initiative as a way to determine the impact of intervention. • Knowledge Tests: Commonly used for evaluating learning and performance such as a certification exam.

Describe considerations/requirements of managing an evaluation plan (CH 15)

• Preparing a detailed plan for the project tasks • Monitoring the evaluation's project tasks and personnel • Staying on schedule or negotiating schedule changes • Monitoring the evaluation's budget • Staying on budget or negotiating changes • Keeping the client and stakeholders informed of the evaluation's progress and any problems that occur

Advantages of Individual and Focus Group Interviews (CH 11)

• Provide more in-depth information than other methods • The interviewer can make a personal connection with the interviewee, enhancing the quality and quantity of the data • The interviewer can gather and record personal impressions • The interviewer knows the interviewees and thus can ensure greater participation • Both can uncover unexpected information • Interviewees are more likely to complete the entire interview, in comparison to surveys • In individual interviews, the interviewer has greater control • Focus group interviews provide an opportunity for participants to interact, which may enrich the depth and quality of the data

Benefits of unstructured data collection (CH 7)

• Provides a greater depth of information • Often more palatable to organizational staff • Tends to cost more to analyze (a downside) • Requires greater expertise for analysis (a downside) See: Checklist for choosing data collection methods, p. 226

APPROACHES TO QUANTITATIVE DATA ANALYSIS (Ch 13)

• Quantitative data consist of numbers that represent nominal, ordinal, interval, or ratio-level data • Whether the data are qualitative or quantitative, the author recommends creating a frequency distribution ◦ A frequency distribution identifies the number of responses of a given type • Use statistical analysis to do one of the following: describe data, search for relationships, predict a relationship, or test for differences between variables • To describe data, use measures of central tendency (mode, median, or mean) and/or measures of dispersion • Measures of central tendency: for nominal data, use the mode; for ordinal data, use the median; for interval or ratio data, use the mean • Measures of dispersion/variability define the degree to which the data cluster or spread apart: for nominal data, use the range (difference between the highest and lowest values); for ordinal data, use the semi-interquartile range (half the range of the middle 50 percent of the values); for interval or ratio data, use the standard deviation: s = √( Σ(xᵢ − x̄)² / (n − 1) )
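The descriptive measures named in this card are all available in Python's standard statistics module; a sketch on an invented interval-level data set:

```python
# The measures of central tendency and dispersion named above, computed with
# Python's standard statistics module on an invented interval-level data set.
from statistics import mode, median, mean, stdev, quantiles

data = [2, 3, 3, 4, 5, 5, 5, 6, 7, 8]

print(mode(data))             # 5
print(median(data))           # 5.0
print(mean(data))             # 4.8
print(max(data) - min(data))  # range: 6

# Semi-interquartile range: half the spread of the middle 50% of the values.
q1, _, q3 = quantiles(data, n=4)
print((q3 - q1) / 2)

# Sample standard deviation: sqrt of sum of squared deviations over (n - 1).
print(round(stdev(data), 2))  # 1.87
```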

PROCEDURES FOR ANALYZING QUALITATIVE DATA (Ch 13)

• Read the data carefully and make notes, paying close attention to information you wish to revisit or use in the final report • Compare the data against expectations, standards, or other frameworks • Categorize the data, assign codes to each category, and sort the data into the categories defined • Examine how the data within categories relate, as well as how data in different categories relate to each other • When writing your findings, ensure that they are tied to the purpose of the evaluation and the key evaluation questions • Creatively display data in ways that make interpretation intuitive, e.g., diagrams, charts
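The categorize-and-sort step can be sketched in Python. The excerpts and codes below are hypothetical; in a real evaluation the evaluator assigns codes by reading the data:

```python
from collections import defaultdict

# Hypothetical interview excerpts, each already assigned a code by the evaluator
excerpts = [
    ("The training was too short", "time"),
    ("I wanted more practice sessions", "time"),
    ("The examples matched my daily work", "relevance"),
    ("Materials were easy to follow", "materials"),
]

# Sort coded excerpts into their categories
categories = defaultdict(list)
for text, code in excerpts:
    categories[code].append(text)

# Count responses per category to see which themes dominate
frequency = {code: len(items) for code, items in categories.items()}
```

Tallying coded excerpts this way makes it easier to examine relationships within and across categories before writing up findings.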

Guiding Principles for Evaluators (by the American Evaluation Association): (Ch 4)

• Systematic Inquiry • Competence • Integrity and Honesty • Respect for People • Responsibilities for the General and Public Welfare

Describe different types of data analysis (Ch 13): Qualitative, quantitative, economic

• The evaluation's key questions and the level of analytic sophistication of the stakeholders should guide decisions regarding appropriate data analyses. • Qualitative data analysis consists of linking data with theory, creating empirically derived categories, and applying coding processes to the qualitative data. • Quantitative data analysis focuses on describing the data, searching for or specifying relationships among variables, predicting relationships among variables, or testing for significant differences among variables. • Economic analyses, such as cost-effectiveness analysis, cost-benefit analysis, and return-on-investment (ROI) analysis, should be used only when relevant and valid data can be obtained.

Tertiary Stakeholders (Ch 5- p. 165-167)

• Those with some interest in the evaluation for future planning or decision-making • Or those with a general interest in, or right to know, the results • Examples: future participants, potential users/adopters, professional colleagues, professional organizations, governing boards • The farther removed stakeholders are, the more likely they are to be secondary or tertiary stakeholders (p. 167) • See the questions for identifying stakeholders (p. 167) • N.B.: Do not equate stakeholders with data sources; you may contact some stakeholders during data collection, but stakeholders are better understood as the potential users of the evaluation findings (p. 168)

How do you Manage the Survey Data Collection Process (CH 10)

• Use a survey tracking form and determine how you will handle nonrespondents.

Considerations for Survey Question Construction (CH 10)

• Use terms that respondents will understand; be sure each question falls within the respondents' vocabulary. • Avoid acronyms. • Avoid double negatives and phrasing questions in a negative way. • Avoid leading or loaded questions, which push the respondent to answer one way or the other. • Avoid "double-barreled" questions that ask for more than one piece of information; such questions may result in the respondent answering one part of the question and not the other. • Developing good survey questions is as much an art as it is a science.

Considerations for online/web-based surveys (Ch 10)

• Using computers simplifies distribution and data collection and appears to produce results similar to paper-based surveys. • Surveys can be distributed worldwide, cost less, and gather responses more quickly; the shorter and less complex the survey, the higher the return rate. • It is impossible to validate who, exactly, took the survey.

Four areas of the organization's context it's important to learn about (CH 17)

• When to conduct an evaluation. Ensure timeliness by tying the evaluation to the purpose and key questions; avoid evaluating immediately before or after a merger, staff reduction, or financial strain. • How the results may be received and used. To prepare stakeholders for a variety of results, consider what-if statements the evaluation team and stakeholders can respond to. • The organization's business and financial situation. Be aware of the organization's challenges and successes and how they may affect the evaluation process and the use of its results. • The external pressures on the organization. Ask questions about these pressures to find out how they may affect the evaluation or the use of its results.

Inter-rater reliability (CH 7)

• the degree of correlation or agreement among ratings made by multiple raters of the same items
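One simple index of inter-rater reliability is the Pearson correlation between two raters' scores for the same items. A minimal sketch with hypothetical ratings (the book does not prescribe this particular statistic):

```python
import statistics

# Hypothetical ratings of the same 5 items by two raters
rater_a = [4, 3, 5, 2, 4]
rater_b = [5, 3, 4, 2, 4]

# Pearson correlation coefficient r, computed from deviations about each mean
mean_a, mean_b = statistics.mean(rater_a), statistics.mean(rater_b)
cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(rater_a, rater_b))
var_a = sum((a - mean_a) ** 2 for a in rater_a)
var_b = sum((b - mean_b) ** 2 for b in rater_b)
r = cov / (var_a * var_b) ** 0.5  # r near 1.0 indicates high agreement
```

Values of r close to 1.0 suggest the raters rank the items similarly; other indices (e.g., percent agreement) may be more appropriate for nominal ratings.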

unstructured data collection (CH 7)

• the most unstructured method would involve recording all available information without predetermining how it is produced; afterward, a variety of observational methods could be used to analyze the data and observed behaviors

Primary Stakeholders (p. 165-167)

• Those charged with oversight or implementation of the evaluand, because they tend to make instrumental (rather than conceptual) use of findings (p. 167) • Typically those who allow the evaluand to exist or make it happen • Typically responsible for the successful design, development, implementation, and funding of the evaluand • Examples: funding agencies, organizations, departments, designers, developers, implementers, staff (see other examples on p. 148, including the program developer/facilitator and program participants)

Guidelines for Constructing semi-Structured approach Individual and Focus Group Interview questions ( Ch 11)

▪ Develop a set of questions to guide the interview ▪ Allows for rephrasing, probing, and asking questions in different sequences ▪ Generally used for individual or focus group interviews

Methods for Increasing the Response Rate of Surveys (CH 10)

▪ Highlight the importance of the survey: Why should individuals take it? How will it help them and the organization administering it? ▪ Keep it short: only ask questions that are "need to know," not "nice to know." ▪ Place interesting questions at the beginning to engage respondents and keep their interest in continuing the survey. ▪ Handwriting on a paper survey can make it feel more personal. ▪ The survey's appearance should flow easily, include smooth transitions, and not seem crowded. ▪ Incentives such as dollar bills, gift certificates, or candy can increase interest in completing the survey. ▪ Deadlines are not recommended; if a timeline is desired, use days or weeks as periods of time rather than a specific date. ▪ Follow-up procedures can increase response rates significantly if they are well communicated and clear to respondents.

Guidelines for Constructing Structured approach Individual and Focus Group Interview questions ( Ch 11)

▪ Structured approach ▪ Develops a series of questions; rarely probes for details or rephrases ▪ Often uses rating scales or multiple-choice items ▪ Generally used for telephone interviews

Four factors used when determining what type of survey questions will be asked.(CH 10)

▪ Who will be answering the questions? ▪ How much time will respondents be able and willing to spend? ▪ How many respondents will be involved? ▪ How much is known about the range of possible answers, and do you want to provide them to the respondent?

Guidelines for Constructing Unstructured approach Individual and Focus Group Interview questions ( Ch 11)

◦ Develop 1-2 questions that guide the interview ◦ Interview is more conversational and meanders ◦ May help to identify unknown critical or pertinent issues

