Research Methods of Crim Exam 2


questions for evaluation research

-how does the program operate? (process evaluation)
-what is the program's impact? (impact evaluation)
-how efficient is the program? (efficiency analysis)

process evaluation example

-in a cognitive-behavioral program for prison inmates, there is a protocol for the staff to follow
-e.g., 8-12 inmates per class; 2 staff members (preferably who are not correctional staff); class meets twice a week for a total of 15 hours; staff follow the curriculum that has been provided; staff have college degrees in psychology or mental health counseling
-process evaluation would consider things like:
--were there 8-12 inmates per class? did inmates drop out?
--was the program delivered by correctional staff members or by treatment staff?
--did the program staff adhere to the curriculum?

stakeholders

-individuals and groups who have some basic concern with a program
-might be clients, staff, managers, funders, or the public
-ex: Congress or another federal office, a federal funding agency (National Institute of Justice), school-district superintendents and/or parents of schoolchildren
-affect ER by setting the overarching agenda for a program (ex: "this program prevents children from joining gangs")
-stakeholders might also have a personal or emotional investment in a program or policy

cohorts

-individuals or groups with common starting point -ex: college class of 1997

individual matching

-individuals who are similar in terms of key characteristics are paired prior to assignment, and then each member of each pair is assigned to a different group
-the researcher finds each person in the tx group a counterpart who is similar to the tx-group person in many ways but who did not receive the tx/intervention
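A minimal Python sketch of the pairing step (the data, characteristics, and distance rule are all hypothetical illustrations, not from the course):

```python
# Illustrative sketch of individual matching: pair each treated person
# with the most similar untreated person on age and prior arrests.
# All records and the "closeness" rule are invented.

treated = [
    {"id": 1, "age": 24, "priors": 3},
    {"id": 2, "age": 31, "priors": 1},
]
untreated = [
    {"id": 10, "age": 25, "priors": 3},
    {"id": 11, "age": 40, "priors": 0},
    {"id": 12, "age": 30, "priors": 1},
]

def distance(a, b):
    # Similarity on key characteristics; equal weighting is an assumption.
    return abs(a["age"] - b["age"]) + abs(a["priors"] - b["priors"])

pairs = []
available = list(untreated)
for person in treated:
    match = min(available, key=lambda c: distance(person, c))
    available.remove(match)  # each comparison person can be used only once
    pairs.append((person["id"], match["id"]))

print(pairs)  # [(1, 10), (2, 12)] -> matched comparison group: ids 10 and 12
```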

ethical issues in survey research

-generally, survey research poses minimal risk
--no manipulation or experimentation taking place
--little or no possibility of psychological or physical harm
--little or no coercion
-but respondents still have to be protected, especially if you are asking them to divulge personal info
-survey respondents need:
--informed consent
--anonymity and confidentiality
-protection can be enhanced by considering the necessity of certain items
--don't ask sensitive or embarrassing questions unless they are absolutely critical to the research

reductionist fallacy (reductionism)

-an error in reasoning that occurs when incorrect conclusions about group-level processes are based on individual-level data
-also called the "individualist fallacy"
-ex: a reductionist explanation of individual violence would focus on biological factors, such as genes or hormones, instead of the community's level of poverty
-race and crime

outcome evaluation

-"Did the program work/have its intended consequences?" -analysis of the extent to which a tx or other service has an intended effect -if you know process went smoothly, proceed to evaluating outcomes -this will generally be quantitative methods or mixed methods -first, pick an outcome measure (the variable you will use to operationalize "effectiveness") -ex: # of self reported arrests in 6 months, number of actual rearrests in 6 months, number of self-reported instance of drug use per month -second, pick a design --experimental is always best, when possible --quasi is second best --non-experimental if other types are unfeasible or inappropriate -the trick is to build a design with strong internal validity (internal validity is prized over generalizability in ER) -you want to be able to say that any improvements seen in tx group are FOR SURE result of program (rule out problems with temporal ordering and spuriousness) -many program evaluations suffer from the same downfall: selection bias (big deal, something that GOOD researchers work hard to prevent)

process evaluation

-"Is the program working as planned?" -"how does the program operate?" -program monitoring -systematic attempt by researchers to examine program coverage and delivery -before you tackle the question of outputs or outcomes, you need to know if the program is actually being implemented correctly -process evaluation is important in and of itself, but it is also imperative for understanding the final evaluation of outputs and outcomes -if program appears to work, is it because: --it's a bad program? OR --it was implemented badly? -don't want to throw out a potentially good program just bc someone messed up implementation -often relies upon qualitative methods (participant observation, intensive interviewing, focus groups) -also the collection of secondary data (criminal histories of inmates in program, college education level of staff members, etc)

efficiency evaluation/analysis

-"is the program worth it?" -if a program is found to have positive outcomes, the next question is, "are the outcomes good enough to justify the cost" -compares program costs to program effects -"effective" doesn't mean best or only; even when a program is found to work, there might be a more effective or less expensive one out there so important to do comparisons -cost-benefit analysis -cost-effectiveness analysis -effectiveness at achieving desired outcome is only part of the story; need to also find out if: --the benefits are at least equal to (if not greater than) the costs --there is no alternative that would produce a better outcome for the same or fewer costs

questionnaire development and assessment

-"the questionnaire... is the central feature of the survey process" -the quality of survey research hinges upon 1. the appropriateness of the sampling technique 2. the quality of the questionnaire -the number and type of questions and the overall length of the questionnaire will depend upon the research topic being examined --generally speaking, a written questionnaire should take no more than 20 minutes to fill out, and a phone survey should take up to 30 but absolutely no more than 45 minutes to complete --surveys that are too long will fatigue the respondents, they will stop paying attention, start answering sloppily, give up

4 special circumstances in which time order is more confidently drawn in cross sectional research

--the IV is fixed at some point prior to variation in the DV (demographic variables determined at birth, education, marital status)
--we believe that respondents can give us reliable reports of what happened to them or what they thought at some earlier point
--our measures are based on records that contain info on cases in earlier periods (archival gov data)
--we know that the value of the DV was similar for all cases prior to treatment (anger management class: no one was able to control verbal outbursts before treatment, so any change is from the class and would show time order)

causation: 5 criteria

-5 criteria a research study must possess in order to demonstrate a causal effect
-each criterion is important but not sufficient on its own for causation to be proven
-REQUIRED: empirical association, temporal ordering, nonspuriousness
-RECOMMENDED: mechanism, context
-when 1 or more criteria are not present, causal conclusions are impossible, or, at least, highly suspect

efficiency evaluation examples

-An expensive job-training program is found to reduce rearrests by 40%. Is the money saved from reduced recidivism enough to offset the cost of the program?
-An early-childhood family therapy program reduces early delinquency in youth by 30%. Are the saved costs enough to justify the program?
-A job-training program reduces rearrests by 40% and saves money. But is there a different one that would reduce it by 50%? 60%?

evaluation research vs data collection

-ER data are collected to find out if the program or policy works
-ER is different bc it can employ the different designs (experimental, quasi-experimental, non-experimental)
-ER can be quantitative, qualitative, or both
-any given ER design might include one or multiple of the three design types:
--random assignment to tx and control groups with a quantitative outcome measure (experimental), followed by in-depth interviews with members of each group to learn their experiences (non-experimental)
-strong ER designs use more than one research approach
-ER is heavily applied, while much other CCJ research is theoretical or somewhat removed from policy

ethics in evaluation

-ER raises many ethical issues bc of the direct involvement of people in the program and the direct impacts of the research on people's lives
-many program participants are vulnerable (prison inmates, drug addicts)
-bad research that leads to termination of a good program (or continuation of a bad one) hurts people
-the ER feedback loop may be used to ensure that any harms are identified and corrected immediately
-also might need to follow up with people afterward to make sure they are ok
-research design may also raise ethical concerns:
--random assignment means some people get the beneficial program and others don't
--ethics might require the researcher to provide the control group an alternative, or let them go through the program after the evaluation is over

integrative approach

-an orientation to evaluation research that expects researchers to respond to the concerns of the people involved with the program (stakeholders) as well as to the standards and goals of the scientific community

compensatory rivalry

-source of causal invalidity that occurs when the experimental and/or comparison group is aware of the other group and is influenced in the posttest as a result
-happens when there is a leak of info, or group members are mistakenly allowed to contact one another

program theory approach

-a descriptive (what impacts are generated and why they occur, causal mechanism, empirically based) or prescriptive (how to design or implement the tx, outcomes that should be expected, how performance is judged) model of how a program operates and produces its effects
-used if an investigation of program process is conducted
-describes what has been learned about how the program has its effect
-the researcher attempts to explain observations using theory
-attempts to form an explanation for WHY the change occurred (or didn't occur)
-theory-driven evaluation
-e.g., what is (in)effective about this program? the approach? the protocol? the implementation? the characteristics of the people/places in the tx group?

mechanism

-a discernible process that creates a causal connection between two variables
-the specific way that the IV causes the DV
-what process or mechanism is responsible for the relationship between the IV and DV?
-doesn't prove anything, but helps the researcher build a logical argument
-ex: children of incarcerated parents are more likely to commit crime later as adults, but WHY?
--having one parent incarcerated reduces the remaining parent's ability to supervise the child, to keep him out of trouble

evidence based policy

-a policy that has been evaluated with a methodologically rigorous design and has been proven to be effective
-policy (or programs) built around existing scientific evidence demonstrating the effectiveness of certain strategies and tactics
-evidence can be from a single ER study showing a policy or program to have beneficial impacts (less advisable) or from a systematic review of studies that collates info from several evaluations to form conclusions (highly advisable)

differential attrition

-a problem that occurs when comparison groups become different because subjects are more likely to drop out of one of the groups than the other for various reasons -hard to get people to stay in a 6 month drug-treatment program

matching

-a procedure for equating the characteristics of individuals in different comparison groups in an experiment -can be done on either an individual or an aggregate basis -by itself, poor substitute for randomization

policy research

-a process in which research results are used to provide policy actors with recommendations for action that are based on empirical evidence and careful reasoning -goal is to inform those who make policy about the possible alternative courses of action in response to some identified problem, their strengths and weaknesses, and their likely positive/negative effects

theory-driven evaluation

-a program evaluation that is guided by a theory that specifies the process by which the program has an effect

regression effect

-a source of causal invalidity that occurs when subjects who are chosen for a study because of their extreme scores on the DV become less extreme on the posttest due to natural cyclical or episodic change in the variable
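A small simulation, with invented score distributions, showing why extreme pretest scorers drift back toward the mean with no treatment at all:

```python
# Simulation of the regression effect: subjects picked for extreme scores
# become less extreme on the posttest even with NO intervention.
import random

random.seed(1)

def score():
    # Each observed score = stable true level + random day-to-day noise.
    true_level = random.gauss(50, 10)
    return true_level + random.gauss(0, 10), true_level

people = [score() for _ in range(10_000)]

# "Select" the most extreme cases on the pretest (top decile of observed scores).
cutoff = sorted(p[0] for p in people)[9_000]
extreme = [p for p in people if p[0] >= cutoff]

pre = sum(p[0] for p in extreme) / len(extreme)
# Posttest = same true level, fresh noise; no treatment occurred.
post = sum(p[1] + random.gauss(0, 10) for p in extreme) / len(extreme)

print(round(pre, 1), round(post, 1))  # posttest mean falls back toward 50
```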

nomothetic causal explanation

-a type of causal explanation involving the belief that variation in an IV will be followed by variation in the DV, when all other things are equal -quantitative causal explanation -ex: individuals arrested for domestic assault tend to commit fewer subsequent assaults than do similar individuals who are accused in the same circumstances but are not arrested

cost-effectiveness analysis

-a type of evaluation research that compares program costs to actual program outcomes
-specific costs of the program are compared to the program's outcomes, such as the number of jobs obtained, the extent of improvement in reading scores, or the degree of decline in crimes committed
-ex: one result might be an estimate of how much the program cost for each job obtained by a program participant

cost-benefit analysis

-a type of evaluation research that compares program costs to the economic value of program benefits
-requires that the analyst identify whose perspective will be used in order to determine what can be considered a benefit rather than a cost
-program clients will have a different perspective on these issues than taxpayers or program staff

non-experimental design

-a way of collecting data that doesn't attempt to test an intervention, treatment, or event
--SURVEY RESEARCH to determine whether small-business owners would consider hiring convicted felons if the government paid those employees' salaries
--use of SECONDARY DATA to determine the proportion of all felony assault cases that end in trials vs guilty pleas
--QUALITATIVE RESEARCH to learn about the ways youths living in high-crime areas try to avoid violent victimization

ecological fallacy

-an error in reasoning in which incorrect conclusions about individual-level processes are drawn from group-level data
-group-level data do not describe individual-level processes
-ex: a researcher examines prison employee records and finds that the higher the percentage of correctional workers without a college education in a prison, the higher the rate of inmate complaints of brutality by officers in that prison
-she concludes that individual correctional officers without a college degree are more likely to engage in acts of brutality against inmates

idiographic causal explanation

-an explanation that identifies the concrete, individual sequence of events, thoughts, or actions that resulted in a particular outcome for a particular individual or that led to a particular event; sometimes termed an individualist or historicist explanation
-qualitative causal explanation
-very concerned with context, with understanding the particular outcome as part of a larger set of interrelated circumstances
-time order and causal mechanisms
-deterministic

example of idiographic causal explanation

-an individual is neglected by his parents
-he comes to distrust others, has trouble maintaining friendships, has trouble in school, and eventually gets addicted to heroin
-to support the habit, he starts selling drugs and is ultimately arrested and convicted for drug trafficking

stakeholder approach

-an orientation to evaluation research that expects researchers to be responsive primarily to the people involved with the program
-also called responsive evaluation
-risk: if social science procedures are neglected, standards of evidence will be compromised, conclusions about program effects will likely be invalid, and results are unlikely to be generalizable to other settings

social science approach

-an orientation to evaluation research that expects researchers to emphasize the importance of researcher expertise and maintenance of autonomy from program stakeholders
-risk: if stakeholders are ignored, researchers may find that participants are uncooperative, that their reports go unused, and that the next project remains unfunded

constructing questions guideline

-avoid confusing phrasing or vagueness (when possible, give respondents a specific time frame or reference period; define terms or avoid jargon altogether)
--the longer the reference period, the greater the underreporting of a given bx (for everyday bx ask "in the past month"; for rarer bx ask "in the past 6 months" or "in the past 12 months")
-avoid negatives and double negatives
-avoid double-barreled questions, which contain more than one concept or idea
--NO: "I believe we should stop spending so much money building prisons and put it into building schools"
--YES: "I believe we spend too much money on prisons" and "we should spend more money on schools"
-avoid making all answer options disagreeable; sometimes a question is worded in a way that makes all answer options uncomfortable or inapplicable
--reduce the likelihood of agreement bias by presenting both sides of attitude scales in the question
--keep language neutral; avoid stigmatizing or judging respondents

endogenous change

-change that is unrelated to the experiment; natural development due to other factors
-3 types:
--testing: the pretest can influence the posttest
--maturation: aging and increases in emotional maturity
--regression: cyclical or episodic trends
-especially a problem in before-and-after designs, where the possibility of endogenous change cannot be eliminated
-repeated-measures panel studies and time series designs are more effective bc they allow the researcher to trace the pattern of change or stability in the DV up to and after the tx

hawthorne effect

-changes that occur in the experimental (or control) group simply as a function of being monitored by the research staff --might make them feel special or maybe self-conscious

constructing questions

-closed- and open-ended questions
-a good strategy, when feasible and warranted, is to follow up closed-ended questions with open-ended ones
-open-ended questions allow for more detail and nuance, and the contrast between answers to closed- and open-ended questions can be very interesting
-"all hope for achieving measurement validity is lost unless survey questions are clear and convey the intended meaning to respondents"
-the data you collect from a survey project will be extremely useful or totally worthless, depending on how good a job you do at questionnaire design
-remember:
--you must use the same question with all respondents (one survey instrument)
--different people must understand each question in the same way
--you won't be there to rephrase a question if someone doesn't understand it
--don't assume respondents know the same phrases and expressions that you do

demographic questions

-demographics are personal characteristics of the respondents (age, race, sex, marital status, education level, income level, political affiliation, religious affiliation)
-they are important control variables in statistical analyses
--help isolate the impact of the IV of interest on the DV

cross-sectional research

-design in which all data are collected at only one point in time
--a survey collecting data from respondents at one time point
-most CCJ research is cross-sectional
--easier, fewer resources required
-identifying the time order of effects is critical but can be an insurmountable problem with a cross-sectional design
-ex: re-arrest data on ex-prisoners collected 6 months after their release

longitudinal research

-design in which data are collected at 2 or more points in time
-a study in which data are collected that can be ordered in time
-longitudinal research has a lot of advantages:
--get to see how things play out over the long run
--can possibly conduct some before-and-after testing
--identification of time order can be quite straightforward
-ex: re-arrest data collected at 6 months, 1 year, 3 years, and 5 years after release
--contacting the same survey respondents every 6 months for 2 years

design decisions

-each ER project requires the researcher to make decisions about how to evaluate the program:
--will the researcher use a black box or program theory approach?
--whose goals matter more: the researcher's or the stakeholders'?
--should the methods be quantitative or qualitative (or both)?

external events/history effect

-events outside the study that influence the DV
-different from endogenous change bc history effects are unpredictable and have nothing to do with the respondents themselves
-ex: a researcher is conducting a one-year study of the effects of nutritional counseling on people's eating habits. six months in, the nation is swept by a kale fad
-the more carefully controlled the conditions for experimental and control groups, the less likely external events will invalidate the causal conclusions of an experiment

generalizability

-exists when a conclusion based on a sample or subset of a larger pop holds true for that pop
-recall that random assignment in experiments is very different from random sampling
--random assignment does nothing to ensure that the experimental group is representative of the general pop, so generalizability is questionable
-in particular, experimental subjects are usually recruited rather than sampled
--in a smoking study, only people who smoke, see an ad in the local newspaper, and choose to volunteer will be in the study
-thus, it is often not clear how representative a sample is of the pop, or sometimes even what the pop actually is
-generally, results produced by carefully controlled studies are said to be at least roughly generalizable
--if there were no systematic differences between the experimental and control groups, and the experimental group improved, then what could possibly be the explanation other than that the experimental tx is effective?
-but remember that this doesn't solve the matter of whether the results of a lab experiment are translatable to the "real world"

empirical association

-first and foremost, two variables have to be related
-this relationship can take a few different looks/forms
-the point is that the DV does something predictable in response to a change in the IV
--IV changes (increase/decrease), DV changes too (increase/decrease)

web-based surveys

-generally a bad idea
--no sampling frame; hard or impossible to get a representative sample; often don't even know who the pop is; selection bias is almost certainly present
--standard problems such as non-completion, and the researcher cannot control who fills out the questionnaire
-but in some cases, they can be useful
--an identified population of which most members have known email addresses
--e.g., work-climate surveys where all employees can be emailed the link
-web-based surveys might become more useful and mainstream as the US population moves toward 100% home/personal access

aggregate matching

-groups are chosen for comparisons that are similar in terms of the distribution of key characteristics
-the researcher finds a comparison group that is similar to the tx group on average (overall, rather than on a person-by-person basis)
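A minimal Python sketch of checking aggregate similarity (groups and characteristics are hypothetical):

```python
# Aggregate matching compares group-level averages, not person-by-person
# pairs. All records here are invented.

tx_group = [{"age": 24, "male": 1}, {"age": 31, "male": 0}, {"age": 28, "male": 1}]

def profile(group):
    # Summarize a group by its distribution of key characteristics.
    n = len(group)
    return {"mean_age": sum(p["age"] for p in group) / n,
            "pct_male": 100 * sum(p["male"] for p in group) / n}

candidate_a = [{"age": 23, "male": 1}, {"age": 30, "male": 1}, {"age": 29, "male": 0}]
candidate_b = [{"age": 45, "male": 0}, {"age": 52, "male": 0}, {"age": 48, "male": 0}]

print(profile(tx_group))
print(profile(candidate_a))  # similar averages -> acceptable comparison group
print(profile(candidate_b))  # very different averages -> poor match
```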

feedback

-info about service delivery system outputs, outcomes, or operations that is available to any program input
-variation in both outputs and outcomes in turn influences the inputs to the program through a feedback process
-constant assessment of inputs, outputs, and outcomes that might be used to adjust one or more aspects of the program
-feedback processes also distinguish ER from other types of research:
--ER is dynamic, meaning it moves and is ongoing
--it can be used to make changes to the program as the program progresses (no need to let a dysfunctional program continue being ineffective if it can be improved)

evaluation basics

-just like other research, ER requires the researcher to decide how to conduct the study and what to measure
-ER might be ongoing as a program progresses; if so, then a feedback process might be used
-presence of stakeholders
-any given ER design is not totally under the researchers' control; the research strategy is affected by stakeholders, and stakeholders might affect the program process as it develops, which then impacts the design
-researchers are generally not powerless: they can help educate stakeholders on what is needed for valid program evaluation (a healthy stakeholder-researcher relationship is one where both parties respect and accommodate each other's needs)

solution to ecological and reductionist fallacies

-know what the units of analysis and units of observation were in a study, and take these into account when weighing the credibility of the researcher's conclusions

ceteris paribus

-Latin term meaning "all other things being equal"

cover letter

-letter sent with a mailed questionnaire; explains the survey's purpose and encourages the respondent to participate
-often suffices as informed consent
-a good cover letter is credible, personalized, interesting, and responsible

The Minneapolis Domestic Violence Study

-looked at the effectiveness of methods used by police to reduce domestic violence
-misdemeanor assault calls where victim and offender were present when the police arrived (51 police officers, 330 cases, 17-month-long study)
-officers used one of three approaches for handling domestic violence calls, in cases where they had probable cause to believe an assault had occurred:
--send the abuser away for 8 hours
--advice and mediation of disputes
--make an arrest
-6-month follow-up period: interviews with both victims and offenders, official records check
-arrest was found to be the most effective police response
-Strengths: study design
-Limitations: generalizability; 6-month follow-up; arrest policy (overnight stay in jail); based on deterrence theory
-replicated in 6 different locations, which showed that results varied by locale

tips for good questionnaire development

-maintain focus: stick to the subject matter at hand; every item should be directly related to the central topic
-build on existing instruments
--no need to reinvent the wheel
--sometimes, the researcher has to use previously established scales; there are certain areas of research that have set scales for measuring certain concepts (e.g., self-control)
--existing scales may have already been assessed for validity and reliability; new scales are untested and potentially untrustworthy
--makes for consistency and facilitates comparisons across studies if all researchers studying a certain theoretical concept use the same scale to measure that concept
-consider translation
--this goes back to knowing your population
--to ensure the survey is accessible to all possible respondents, you may need one or more non-English versions
--3-step translation process: 1. translate from English into the non-English language 2. have a different person translate the non-English version back into English 3. compare the original English and the re-translated English, and edit the translation where needed

posttest

-measure of an outcome (dependent) variable after an experimental intervention or after a presumed IV has changed for some reason

pretest

-measurement of an outcome (dependent) variable prior to an experimental intervention or change in a presumed independent variable for some other reason
-the pretest is exactly the same test as the posttest, but it is administered at a different time
-provides a direct measure of how experimental and control groups have changed over time
-allows the researcher to verify that randomization was successful

time series design (repeated measures panel design)

-measuring the same sample repeatedly before and after an event or intervention
-generally 30+ measurements before and after
-useful for policy analysis; can find out the impact of a large-scale event such as a new law or policy
-ex: a state legislature bans plea bargaining for violent crimes committed with firearms. will this cause a rise in the number of people imprisoned for firearms?
-you gather info on state prison admissions 30 months before and 30 months after the new law to see if there is an increase in firearm-related prison commitments
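A toy sketch of the basic before/after comparison (invented monthly counts, shortened to 6 points per side instead of 30):

```python
# Simple interrupted time-series check: compare average monthly
# firearm-related prison admissions before vs. after the law.
# All counts are hypothetical.

before = [112, 108, 115, 110, 109, 111]   # monthly counts pre-law
after  = [118, 125, 131, 128, 135, 133]   # monthly counts post-law

mean_before = sum(before) / len(before)
mean_after = sum(after) / len(after)

print(f"before: {mean_before:.1f}, after: {mean_after:.1f}")
# A real analysis would model trend and seasonality across all 30+ points
# on each side, not just compare two means.
```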

close ended question

-most common type, respondents get a set of answer options and have to pick one --"death penalty is the best sentence for someone convicted of first-degree murder" agree, unsure, disagree

causality in non-experimental designs

-most of these designs are cross-sectional, and none include any of the features of experimental or quasi-experimental designs
--no control groups of any sort, no before-and-after, etc.
-so non-experimental designs have to somehow compensate for their lack of internal devices that guard against problems of time ordering and spuriousness
-ASSOCIATION: usually easy to establish in non-experimental research, but the other criteria of causality are more challenging
-TIME ORDER: can be difficult to establish that change in the IV preceded change in the DV
-SPURIOUS RELATIONSHIPS: can use statistical controls, meaning that one variable is held constant so the relationship between two or more other variables can be examined apart from the influence of the "control" variable, to reduce the threat of spuriousness
-MECHANISM: the use of intervening variables can help determine how variation in the IV affects variation in the DV
-CONTEXT: can be developed, especially when surveys are administered in different settings and with different individuals

causation in CCJ research

-mostly applicable to explanatory research and program evaluation --research where the primary interest is in demonstrating that a particular IV causes a particular DV -not important, usually, to exploratory or descriptive research

in-person surveys or interviews

-the researcher goes to respondents' homes and administers the survey in person
-if financial resources are available, the best way to conduct a survey
-Benefits:
--can build good rapport to maximize response rates
--can use longer questionnaires
--can probe for details or clarify items when needed
-Drawbacks:
--cost and time
--potential for the presence of the interviewer to influence responses
--response rates if people in the sample are rarely home or don't want to allow the researcher in

group administered surveys

-not always possible, but sometimes a good way to gather data
-idea: the researcher brings the questionnaire to a setting where people are a captive audience, distributes copies, and collects them on the spot
--police officer roll calls, student classes, jails, etc.
-Benefits:
--high response rate
--efficient (can get lots of responses relatively quickly)
--cost effective
-Drawbacks:
--can be hard to determine what the population is
--possibility of selection bias
--possibility that respondents feel coerced or pressured
--respondents might not trust that the researcher is independent; might not be fully honest in their answers

omnibus survey

-the alternative to a topic survey
-the questionnaire contains items about a variety of topics (i.e., there is no central theme)
-example: the General Social Survey (500 q's about background characteristics and opinions)
-disadvantage: the limited depth that can be achieved in any one substantive area

formative evaluation

-process evaluation that is used to shape and refine program operations -when process evaluations reveal that a program/policy is not being implemented as planned, or they reveal a problem with implementation, these findings can be used to help shape or refine the program

why evaluate?

-programs to prevent or reduce crime cost a lot of money; need to know if they work
--state or federal funding agencies might require evaluation of new or trial programs
-many programs have been proposed, too; need to know which ones to keep and which to discontinue
--don't want to keep using an ineffective program
--effective or promising programs should be transplanted to other areas or populations
-sometimes programs can have unanticipated (negative) side effects
--boot camps that actually increase youths' aggression
--prisoner rehabilitation programs that encourage antisocial attitudes or behaviors
-need to make sure not only that the program has its intended effect, but also that it is not having any unforeseen consequences

quantitative vs qualitative methods

-quantitative methods are generally better for determining program effectiveness
-numerical data allow pre/post and tx/control comparisons; qualitative data don't
-a test of effectiveness requires numerical (quant) data
-qualitative data, though, can add important context; can reveal important details that quantitative data are not able to tap into (recall the tradeoff between breadth and depth)
-qual data are very useful for those who are not content with the black box and want to find out why a program worked (or didn't)
--e.g., the researcher can observe program sessions, interview participants and/or staff members
--process evaluation
-the best approach is mixed methods
--quant data to answer the "does it work?" question
--qual data to understand why it worked or not, and how it might be improved

nonequivalent control group designs

-quasi design in which there are experimental and comparison groups that are designated before the tx occurs but are not created by random assignment -instead, tx group is already selected and the researcher employs a MATCHING technique to create the comparison group

before-and-after design

-quasi design that may consist of before-and-after comparisons of the experimental group, but no control group
-assumption: most phenomena move through time in a pretty much random, nonsystematic fashion
-but if an intervention, treatment, or event occurs that impacts a certain phenomenon, noticeable change will occur
--status before will be sharply different than status after
-the same group (or multiple groups) is measured before and after (i.e., pretest and posttest) an intervention, tx, or event
-idea: if pre- and posttest scores are different, then the difference is attributable to the intervention/treatment/event
-good at allowing association to be tested
--a significant change on the posttest relative to the pretest indicates association
-time ordering is built into the design
-but spuriousness is a lingering threat

multiple group before-and-after designs

-quasi-experimental design consisting of several before-and-after comparisons involving the same variables but different groups

contingent question

-questions that are asked of only a subset of survey respondents after they respond yes to a filter question

quasi-experimental design

-research design in which there is a comparison group that is comparable to the experimental group in critical ways, but subjects are not randomly assigned to comparison and experimental groups
-contains one or two, but not all 3, elements of experimental designs
-not as rigorous as experiments; usually fails to rule out the possibility of spuriousness (due to the researcher's lack of control over manipulation of the experimental condition)
-but they allow research in situations where experiments are impossible or unethical, so they are a good substitute
-bc they usually take place in the "real world," they might be better at establishing context
--controlled studies can be too "sanitized" or divorced from context
-3 major types:
--nonequivalent control group designs
--before-and-after designs
--ex post facto control group designs

field experiment

-research designs that use the 3 features of experiments (experimental and control groups; random assignment; assessment of change) but that are conducted in an uncontrolled setting (the "real world") rather than a controlled one
-ex: the Minneapolis Domestic Violence Study
-helpful because:
--the generalizability of lab study results is sometimes questionable (that is, to what extent does an effect found in a lab play out in real life?)
--controlled experiments are often not possible; the only way to conduct a study might be in the field

telephone surveys

-the researcher calls respondents and administers the survey over the phone, recording respondents' answers
--often used in conjunction with the random-digit dialing sampling method
-cellphone users are harder and more costly to contact
-older persons (above 24) have higher response rates
-have many of the advantages of in-person interviews at a much lower cost
-Benefits:
--higher response rates than in mail surveys
--less chance for respondents to skip items
--easier to accommodate those who prefer a non-English version
--greater control over the administration process (compared to mail surveys)
-Drawbacks:
--reaching the sampling units: how to contact people both via landlines and cell phones (usually addressed by random digit dialing, but even RDD will miss many people)
--getting good response rates: people need to (1) answer the phone and (2) complete the entire survey; researchers use a call-back rule (e.g., 5 call-backs per number) and try to make the survey as interesting and non-threatening as possible

topic survey

-researcher has a specific topic/questions/idea in mind and builds the entire survey around that central theme --attitudes toward police, knowledge about local court or sentencing practices, drug use, violent victimization experiences, etc. -most surveys are topic surveys

organization matters

-the researcher has to choose the order of questions and sections (groups of items or sets of multiple-item indexes)
-earlier questions can set a tone or frame of reference that then influences respondents' answers to later questions
-all sensitive or controversial items should be immediately preceded by neutral items
-if there are multiple sensitive items, they should be far apart and separated by several neutral items
-ease respondents in by putting neutral, non-threatening items at the front of the survey; then introduce a few that are somewhat personal, then delve into sensitive ones (earn their trust)
-major topics should be divided into sections, with a short intro to each one
-instructions should be minimal, concise, neutral
-font, font size, and spacing should be easy to read
-when possible, include numbers along with response options to facilitate coding later
-good idea to periodically remind respondents that answers are confidential
-mailed questionnaires often have a cover letter (often suffices as informed consent)

causality in non-experimental designs examples

-a researcher studying whether gov-subsidized salaries would encourage small-business owners to hire convicted felons sends out surveys to owners
--DV: "would you hire a convicted felon if the government paid that person's salary for 2 years?"
--the researcher wants to know what factors (IVs) affect the DV (type of business, how many employees the business has, gender of the business owner)
-a researcher is studying the relationship between violent video games (IV) and aggressive behavior (DV)
--but there are lots of other variables that could impact aggressiveness (gender, age, number of hours spent playing these games)
--need to control for these other factors/variables in order to isolate the possible relationship between violent video games and aggression

researcher vs stakeholder orientation

-researchers and stakeholders have different professional needs and orientations
-researchers want to focus on collecting scientifically sound data; stakeholders want a fast, easy answer to "does it work?"
-possibly, stakeholders even have a bias toward the program working and are upset by results showing null effects
-the balance between the 2 competing interests depends on the situation
-if the stakeholders win out, then the "evaluation" of the program will consist of their opinions about things (stakeholder approach)
-if the researcher wins, the evaluation should be an objective, scientific assessment using proper methods and theory (social science approach)
-however, even if the researcher has control, s/he still must make the results accessible and useful to the stakeholders

filter questions

-researchers can make surveys longer and more detailed/complex by making certain items applicable only to certain subsets of respondents
-filter questions discover whether respondents possess a particular characteristic of interest
--this creates a skip pattern where some respondents answer contingent questions and other respondents skip a certain section and move on to the next
-if you want to know about robbery victims' satisfaction with the police response, then filter out respondents who have not been victimized
--"question 1: in the past 6 months, has anyone taken valuable items or money from you by hurting or threatening to hurt you?" Yes (please fill out the following questions) / No (please skip to question 2)
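A sketch of how the skip pattern might be coded in a hypothetical survey script; the function and prompts are illustrative, not from any real survey package:

```python
# Filter/contingent skip logic, following the robbery example above.
# `administer` and its prompts are hypothetical.

def administer(ask):
    answers = {}
    # Filter question: determines who sees the contingent item.
    answers["robbed_past_6mo"] = ask(
        "In the past 6 months, has anyone taken valuable items or money "
        "from you by hurting or threatening to hurt you? (yes/no) ")
    if answers["robbed_past_6mo"] == "yes":
        # Contingent question: only robbery victims answer it.
        answers["police_satisfaction"] = ask(
            "How satisfied were you with the police response? (1-5) ")
    # Everyone else skips ahead to question 2.
    answers["q2"] = ask("Question 2 ... ")
    return answers

# Demo with canned responses instead of real input():
canned = iter(["yes", "4", "n/a"])
print(administer(lambda prompt: next(canned)))
```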

inputs

-resources, raw materials, clients, and staff that go into a program -persons or units that enter program, also staff and others who run program

mail survey

-self-administered: respondents have total control over when they complete it, how long it takes, and when they send it back
-respondents read the questions themselves and answer them
-main drawbacks:
--response rate: hard to get people to fill the survey out and return it (unlikely to be above 80%; below 60% is a disaster)
--missing data: sometimes people miss or skip items, and this can cause minor or major loss of data
--also, the researcher has little control over who fills out the questionnaire and how long it takes them to return it

random assignment

-separating a pool of subjects into the experimental and control groups in a random or chance-based manner
-the researcher uses a predetermined randomization procedure
-this is NOT the same as random sampling; random assignment in no way ensures representativeness or generalizability
-randomization is useful for ensuring internal validity, NOT generalizability
-random sampling is useful for ensuring that the research subjects are representative of some larger population (generalizability), NOT internal validity
-essentially, what random assignment does is help make sure there are no systematic differences between the experimental and control groups
-systematic differences could arise in the absence of random assignment
--ex: you are testing whether violent video games encourage aggressive bx among male adolescents; you set up an experimental condition with a violent video game and a control condition with a non-violent game; if you gather a group of subjects and let them choose which they want to play, the groups will differ systematically
--instead, use the flip of a coin to spread personalities evenly across the groups
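A minimal Python sketch of the chance-based procedure (subject IDs and group sizes are made up):

```python
# Random assignment: shuffle the subject pool, then split it.
# Chance, not subjects' choices, decides who plays which game.
import random

subjects = [f"S{i}" for i in range(1, 21)]   # hypothetical pool of 20
random.seed(42)                               # seeded only for reproducibility
random.shuffle(subjects)

experimental = subjects[:10]   # violent video game condition
control = subjects[10:]        # non-violent game condition

print(experimental)
print(control)
```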

fixed-sample panel design (panel study)

-simplest before-and-after design: one group measured before a treatment/event/intervention and then again afterward
-a type of longitudinal study in which data are collected from the same individuals (the panel) at two or more points in time
-very weak; extremely difficult to rule out spuriousness
-one pretest and one posttest
-does not qualify as quasi-experimental bc comparing subjects to themselves at only one earlier point in time does not provide an adequate comparison group
-all kinds of things might have happened between pre- and posttests
-ex: will an anti-gang program for youth increase youths' negative attitudes toward gangs and gang membership? you gather a sample and measure attitudes toward gangs, the youth go through a one-week program, and you measure their attitudes again after
-you find that their posttest attitudes are more negative than their pretest attitudes
-but you can't rule out rival explanations (for example, a high-profile gang shooting could've occurred during that time)

demoralization

-type of contamination in experimental and quasi-experimental designs that occurs when control group members are aware they were denied some tx they believe is valuable and, as a result, feel demoralized and perform worse than expected
-ex: someone from the control group finds out the experimental group is getting something good and deliberately performs worse or attempts to get what the experimental group is getting

placebo effect

-when people in the control condition feel an improvement even absent the experimental tx
-solution is usually multiple control groups
-instead of merely experimental (real pill) and control (placebo), can have:
--real pill alone
--real pill plus therapy
--placebo alone
--placebo plus therapy

treatment misidentification

-when subjects receive an unintended tx
-3 forms:
--1. expectancies of experimental staff (self-fulfilling prophecy)
--2. placebo effect
--3. Hawthorne effect (participation in the study makes them feel special)

context

-sometimes, certain causal relationships are dependent upon a given set of circumstances (situational factors that must be present for a cause-and-effect sequence to occur)
-relationships between variables that vary between geographic units or other contexts
-no cause can be separated from the larger context in which it occurs
-when relationships among variables differ across geographic units like counties or across other social settings, researchers say there is a contextual effect
-context is a way of adding layers/nuances to a causal relationship
-helps us understand the "effect":
--a lot of times, the IV's impact on the DV is not 100% true or exactly the same for every single case in the sample
--sometimes, there have to be other factors present in order for the IV's effect to play out
-ex: children of incarcerated parents are more likely to commit crime themselves IF the remaining parent is unable to run a functioning household

nonspuriousness

-spurious relationship: an apparent association between two variables that is actually untrue; both variables are caused by a third variable
-when the first 2 criteria (association and temporal ordering) have been established, the researcher still must rule out spuriousness as a potential cause of the apparent relationship
-establishing causation is about ruling out these alternative explanations (i.e., controlling for or eliminating third variables that might be the true cause of the apparent relationship)

systematic review

-summary review about the impact of a program wherein the analyst attempts to account for differences across research designs and samples, often using statistical techniques such as a meta-analysis
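A sketch of one small piece of the statistical side: a simple fixed-effect (inverse-variance) pooling of study effects. The effects and standard errors are invented, and real systematic reviews involve far more than this arithmetic (design quality, heterogeneity, publication bias):

```python
# Core arithmetic of a fixed-effect meta-analysis: weight each study's
# effect by its precision (1/SE^2) and average. All numbers invented.

studies = [
    {"effect": -0.30, "se": 0.10},   # each dict: one evaluation's estimated
    {"effect": -0.10, "se": 0.20},   # program effect and its standard error
    {"effect": -0.25, "se": 0.15},
]

weights = [1 / s["se"] ** 2 for s in studies]          # precision weights
pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
```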

floaters

-survey respondents who provide an opinion on a topic in response to a closed-ended question that does not include a "don't know" option, but will choose "don't know" if it is available
-choose a substantive answer even when they do not know anything about the particular question (if not provided with "I don't know")

fence sitters

-survey respondents who see themselves as being neutral on an issue and choose a middle (neutral) response that is offered

temporal ordering

-temporal: having to do with time; one event happening before the other
-time order: the change in the IV happened BEFORE the change in the DV (not after or at the same time)
-failing to prove time ordering undercuts a causal hypothesis
-to demonstrate causation, the IV has to be shown to occur before the DV

internal validity (causal validity)

-the ability to say with confidence that A causes B
-experiments are the best way to minimize threats to internal validity, but they aren't perfect
-however, experiments suffer from questions of generalizability
--how well do lab results apply to the real world?
-quasi designs have less internal validity but usually have better generalizability compared to experiments
--quasi designs are more realistic; they take place in the real world, not an artificial environment
--but they don't do a good job of isolating causal effects
-in experiments, threats to internal validity revolve around the comparability and quality of experimental and control groups:
1. selection bias
2. endogenous change
3. external events/history effects
4. contamination
5. treatment misidentification
(SEECT: "So Earl, ever cooked tuna?")

program process

-the complete treatment or service delivered by a program -designed to have some impact on the cases, as inputs are consumed and outputs are produced -delivery of prescribed service

outcomes

-the impact of a program process on the cases processed -measurement of effectiveness

counterfactual

-the outcome that would have occurred if the subjects who were exposed to the tx actually were not exposed but otherwise had had identical experiences to those they underwent during the experiment
-the situation as it would have been in the absence of variation in the IV
-ex:
--actual situation: people who watch violence on TV are more likely to commit violent acts
--counterfactual situation: the same people who watch nonviolent TV shows at the same time, in the same circumstances, are not more likely to commit violent acts

expectancies of experimental staff

-the researcher's enthusiasm for the experimental condition (and lack thereof for control) can influence his/her bx and interaction with subjects -solution is double-blind procedures

outputs

-the services delivered or new products produced by a program process -direct products of service delivery

ex post facto control group designs

-the tx and control groups are designated after the tx is administered or other variation in the IV has occurred
-could be considered non-experimental rather than quasi-experimental bc the control group may not be comparable to the tx group

administration options of survey research

-there are multiple manners of administration in survey research -the administration method a researcher chooses depends on the circumstances (funding/cost, time, size of pop and size of sample, access to sampling frame, etc) -mailed survey -group survey -phone survey -in person interview -electronic survey

additional guidelines for fixed-response questions

-there are several rules for constructing close-ended questions
-these rules not only help make sure the questionnaire is clear and concise, but they help boost validity and reliability
-mutually exclusive: no overlap between categories
-exhaustive: every possible response option is represented; all respondents have an option that applies to them
-Likert-type response categories: best to avoid being overly restrictive in the options you give respondents; allow them nuance
--better to have "agree, strongly agree, etc." instead of only "yes or no"
-fence-sitting vs floating
-filter questions

mixed-mode surveys

-to boost response rates, researchers might use multiple delivery modes --send mail survey but also provide telephone number and web address if respondents prefer to do survey by phone or online --send both mail survey and call and/or email a sampling unit (though be careful that each person only fills out one questionnaire)

true experiment

-two comparison groups (experimental group and control/comparison group)
-random assignment to a group
-pretest/posttest: assessment of change in the DV for both groups after the experimental condition has been received
--posttest necessary, pretest not; both have to be the same
-time order determined by the researcher

black box evaluation

-type of evaluation that occurs when an evaluation of program outcome ignores, and does not identify, the process by which the program produced the effect
-the focus of the evaluation researcher is on whether cases seem to have changed as a result of their exposure to the program, "between the time they entered the program as inputs and when they exited the program as outputs"
-requires only the test of a simple input/output model
-the researcher sticks to empirical observations (the program worked or not)
-if s/he sticks to empirical observations, s/he will merely measure whether or not the cases changed between input and output
--e.g., pretest vs posttest, or experimental group vs matched controls
-no attempt to figure out WHY it worked (or didn't)

purposes of survey research

-type of nonexperimental research design intended to do things like:
-measure the incidence and prevalence of certain phenomena (descriptive research)
--violent victimization
--verbal harassment by police officers
-examine relationships (empirical association) between 2 or more phenomena
--frequency of alcohol use and frequency of drunk driving
--employment history and property crime offending
-test hypotheses or examine research questions
--people with low self-control are more likely to commit crime than those with higher self-control
--female judges will sentence defendants accused of sexual assault more harshly compared to male judges

split ballot design

-unique questions or other modifications in a survey administered to randomly selected subsets of the total survey sample, so that more questions can be included in the entire survey or so that responses to different question versions can be compared
-allows inclusion of more questions without increasing the survey's cost
-allows for experiments on the effect of question wording: different phrasings of the same question

units of analysis vs units of observation

-unit of analysis- level of social life on which a research question is focused, such as individuals -units of observation- cases about which measures actually are obtained in a sample -ex: in some studies, groups are the units of analysis, but data is collected from individuals (unit of observation) -in most studies they are the same thing

ways to strengthen panel designs

-use multiple samples (multiple-group before-and-after design)
-use multiple pre- and post-tests on the same sample (repeated-measures panel designs)
-these designs can be an improvement, but having no control group is still problematic

open ended question

-used in more qualitative surveys
--allowing respondents to write in their own answers and offer as much detail as they want
--less common and often less practical; respondents' answers can be very long, and the number of questions you can ask is limited
--"if you were a judge, what sentence would you give to someone convicted of first-degree murder? why?"

statistical control

-using statistical analyses in a way to compensate for a lack of design controls
-using the data, rather than the design, to help establish time ordering and/or nonspuriousness
-statistical controls are less rigorous and trustworthy than design controls, but they are all that can be done in non-experimental research
--quantitative only; qualitative research is different
-statistical controls are IVs that the researcher includes in the statistical analysis as a way of isolating the impact of the primary IV on the DV
-in other words, getting rid of third (4th, 5th, etc.) variables that are a threat to the nonspuriousness of the IV-DV relationship
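A minimal sketch of one simple form of statistical control, stratification (holding the third variable constant by examining the IV-DV link within its levels); the data are invented, and real analyses would typically use regression instead:

```python
# Statistical control by stratification: examine the video-games/aggression
# link within age groups, so age is "held constant." All records invented.

# Each record: age group (control variable), hours of violent games per
# week (IV), aggression score (DV).
data = [
    ("teen", 10, 8), ("teen", 12, 9), ("teen", 2, 7), ("teen", 3, 7),
    ("adult", 9, 4), ("adult", 11, 5), ("adult", 1, 3), ("adult", 2, 3),
]

def mean(xs):
    return sum(xs) / len(xs)

for group in ("teen", "adult"):
    rows = [(hrs, agg) for g, hrs, agg in data if g == group]
    heavy = [agg for hrs, agg in rows if hrs >= 5]
    light = [agg for hrs, agg in rows if hrs < 5]
    # Within each age group, compare aggression for heavy vs. light players;
    # a gap that survives within strata is not explained by age.
    print(group, "heavy:", mean(heavy), "light:", mean(light))
```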

survey research: overview

-usually, the researcher can only gather a sample, but her/his real interest is in the population
--sample of juveniles -> all juveniles
--sample of adults in a city -> all adults in that city
--sample of prisoners in a medium-security facility -> all medium-security prisoners
-survey research designs usually aim for GENERALIZABILITY, meaning that they rely heavily upon RANDOM SAMPLING and good REPRESENTATIVENESS of those samples
-the basic idea: gather a (random) sample of people (respondents) and give each respondent a survey questionnaire to fill out
-the quality of survey research hinges upon (1) the appropriateness of the sampling technique and (2) the quality of the questionnaire

intervening variables

-variables that are influenced by an IV and in turn influence variation in a DV, thus helping to explain the relationship between the IV and DV -name for causal mechanisms in nonexperimental research

idiosyncratic variation

-variation in responses to a question that is caused by individuals' reactions to particular words or ideas in the question instead of by variation in the concept that the question is intended to measure
-respondents' answers reflect their own interpretations of the item, or the emotional reaction they have (you don't want this)
-single questions are prone to this
-ex: for a sentencing question, respondent A visualizes a hardened, violent offender who preys on innocent victims, and respondent B thinks of her 20-year-old nephew who was always a good kid but got into drugs and a rough patch

fence sitting vs floating

-ways to avoid either problem
-first, try defining terms or providing background about the issue
--"the Juvenile Justice Bill going before Congress this month will do x, y, z... do you think this will help reduce crime committed by juveniles?"
-second, consider the substance of the question itself; if it is pure opinion, then maybe you don't need a neutral category; if it requires knowledge, maybe you do need one
--"during my recent contact with a police officer, the officer was polite" probably doesn't need a neutral category
--"police in the city where I live are polite to people they encounter" might need a "don't know" category

causal effect (idiographic perspective)

-when a series of concrete events, thoughts, or actions result in a particular event or individual outcome

combining questions into an index

-when measuring concepts, the best option is to devise an index of multiple rather than single questions
--captures nuances; doesn't force people into a corner
--better CONTENT VALIDITY: captures the entire range of a concept, not just a single piece
--better RELIABILITY: minimizes idiosyncratic variation for a more stable, consistent, trustworthy estimate across all respondents
-researchers who use multiple items will later sum or average those items to form a scale/index
--e.g., 4-point Likert-type response options, then researchers sum scores for the entire set of items
-the only way to effectively form an index is to administer the questions in a pretest to people similar to the sample you plan to study
-reliability measures help researchers decide whether responses are consistent
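A sketch of summing items into an index and computing Cronbach's alpha, a common reliability measure; the Likert responses are invented:

```python
# Form an index by summing items, then check internal consistency with
# Cronbach's alpha. All responses are hypothetical 4-point Likert scores.

respondents = [
    [4, 3, 4, 4],   # one row per respondent, one column per item
    [2, 2, 1, 2],
    [3, 3, 3, 4],
    [1, 2, 1, 1],
]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# The index score is simply the sum of a respondent's items.
index_scores = [sum(row) for row in respondents]

k = len(respondents[0])                                  # number of items
item_vars = [variance([row[i] for row in respondents]) for i in range(k)]
alpha = (k / (k - 1)) * (1 - sum(item_vars) / variance(index_scores))

print("index scores:", index_scores)
print(f"Cronbach's alpha: {alpha:.2f}")   # closer to 1 = more consistent items
```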

contamination

-when one of the groups affects or is affected by the other
-2 types:
--compensatory rivalry
--demoralization

selection bias

-when there are systematic differences between experimental and control groups
-occurs when program participation is voluntary, the result being that there are unknown but probably important differences between those who do the program and those who don't
-this is minimized with the use of random assignment
-but subjects might still drop out of one condition more than the other (differential attrition)
-a group that started out good can become problematic over time (not a real issue for lab studies, but matters greatly to long-term studies)
-also an ever-present concern in quasi-experimental designs (e.g., nonequivalent control groups)
-a pretest helps determine whether selection bias exists and control for it
--most variables that might influence outcome scores will also influence scores on the pretest
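A minimal sketch of using pretest means to spot selection bias (all scores are invented):

```python
# Compare the groups' pretest means before the program starts; a large
# gap suggests systematic differences between the groups.

experimental_pre = [62, 58, 65, 60, 61]
control_pre = [45, 50, 47, 52, 44]

def mean(xs):
    return sum(xs) / len(xs)

gap = mean(experimental_pre) - mean(control_pre)
print(f"pretest gap: {gap:.1f}")
# Here the gap is 13.6 points before any treatment, so posttest
# differences can't be credited to the program alone.
```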

2. The key feature of an experimental design that distinguishes it from a quasi-experimental design is the:
a. Longitudinal measurement of the dependent variable
b. Random assignment to two or more groups
c. Comparison of the dependent variable before and after the independent variable
d. Elimination of error due to chance

B

3. A researcher wants to study the effectiveness of a correctional program in reducing negative attitudes. He identifies a jail that already has a program in place and compares inmates' attitudes to a jail that does not. This is an example of which type of quasi-experimental design?
a. Non-equivalent control group design
b. Before-and-after design
c. Ex post facto control group design (the program is already in place, so it's after the fact)
d. This cannot be a quasi-experiment.

C

1. Which of the following is a requirement for the pretest in an experiment?
a. The pretest must be a previously validated instrument.
b. The pretest cannot refer to the treatment to be tested.
c. The pretest must be given only to the comparison group, not to the experimental group.
d. The pretest must be the same as the posttest.

D

4. Which of the following is not a source of causal invalidity?
a. Selection bias
b. External events
c. Contamination
d. Statistical control

D

5. All of the following are types of endogenous change except:
a. Testing
b. Maturation
c. Regression
d. Selection bias

D

8. Cost-benefit and cost-effectiveness analyses:
a. Focus on internal and external validity, respectively
b. Focus on external and internal validity, respectively
c. Are examples of the evaluation of need
d. Are examples of the evaluation of efficiency

D

which survey research design is the best?

Depends on:
-goals of the research
-knowledge of the population
-amount of funding and time the researcher has
-the researcher has to weigh all factors and options

causality in quasi-experiments

-ASSOCIATION: established by comparison between groups, in the same way as in an experimental design
-TIME ORDER: a strength of the various quasi-experimental before-and-after designs; difficult with nonequivalent control group designs bc we cannot ensure the groups were equivalent; a much greater problem with ex post facto control group designs
-NONSPURIOUS RELATIONSHIPS: cannot entirely meet this challenge, but there is considerable confidence that most extraneous influences could not have occurred

testing for causal effects

-ASSOCIATION: established by random group assignment; the control group provides info on what would have happened without the intervention, ceteris paribus
-TIME ORDER: can be established by pretests and posttests; in an experiment with a pretest, time order can be established by comparing posttest to pretest scores; in experiments with random assignment of subjects to experimental and comparison groups, time order can be established by comparison of posttest scores only
-NONSPURIOUSNESS: random assignment eliminates many extraneous influences that can create spurious relationships; some say nonspuriousness is impossible to establish in nonexperimental designs
-it is hard to address mechanism and contextual elements in an experimental design

research design categories

experimental:
-tx and control groups
-random assignment to groups
-pretest and posttest
quasi-experimental:
-one or two of the characteristics of experiments, but not all 3
nonexperimental:
-lacking all 3
-often not concerned with testing the effects of an intervention or event
-studying phenomena as they naturally occur
-studying attitudes, bx's, experiences, etc.

benefits of survey research

versatility:
-lots of options; all types of topics, questions, and so on
-can get info that can't be collected in experiments or quasi-experiments
--might not be feasible/ethical
--the researcher's interest might be in something not subject to manipulation
efficiency:
-relatively cost-effective; not free, but more affordable than other types of research
-quick: questionnaires are generally designed to take no more than 20-30 minutes for respondents to fill out (though some last longer)
generalizability:
-proper sampling techniques can be used to get good samples from any population of interest
-generalizability is easy to establish, as long as sampling is good (compare this to experimental and quasi-experimental research)
the only method to get at people's attitudes, beliefs, expectations, experiences, and natural (i.e., not in the lab) behaviors:
-if you want to know what someone thinks or has experienced, you have to ask!

