research methods msw
what is science?
◼ 3 conditions for knowledge to be considered scientific:
1) Must be logical: "logically structured" means that science is done according to a clear and explicit plan that makes sense. We need to be able to tell people exactly how we did what we did, so they can judge our work or even replicate it (do it themselves to see if they get the same results).
2) Must be empirical: "empirical" things are real things that we can observe or measure (observable or measurable things as evidence). Knowledge claims are based on evidence → direct observation (e.g., # of times a child correctly identifies a shape) & psychological testing.
3) Must be public: science is not a thing one person does; it is an endeavor of the human race. Our little part of science is made public (we call this knowledge building). If your work isn't public, nobody else can check it or try to replicate it. It must meet the approval of the scientific community.
true experimental research designs
(1) Pretest-posttest control group design
(2) Posttest-only control group design
(3) Multiple experimental conditions
(4) Dosage design
(5) Crossover/partial crossover design
sources of knowledge
(alternatives to social work research)
- Authority
- Tradition
- Common Sense
- Media Myths
- Personal Experiences
Knowledge from these sources is not always wrong or bad. Sometimes, intuition based on our life experience is important to social work. These sources of knowledge provide a starting point for inquiry, but they can start us at the wrong point and push us in the wrong direction. → Relying on these sources of knowledge can lead us into error.
what analysis to use?
-Univariate analysis: the analysis of a single variable (e.g., gender, educational aspiration)
-Bivariate analysis: describing two variables simultaneously (e.g., chi-square, t-test, ANOVA)
 - Chi-square tests look at the association between categorical variables: they test whether one set of proportions differs from another by comparing frequencies.
 - The t-test is used when we compare a mean (continuous variable) between two groups (categorical variable): independent-samples t-test, paired-samples t-test.
 - ANOVA is used when we compare a mean (continuous variable) across MORE THAN two groups (categorical variable).
 - Correlation tests the relationship between two continuous variables and tells us the direction and strength of that relationship.
-Multivariate analysis: there may be a third variable that influences the association between the two variables (independent & dependent) of interest. We need to control for the third variable(s) to test the true relationship between the two variables of interest.
(A sketch of these tests follows.)
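To make the test choices concrete, here is a minimal sketch in Python (assuming the scipy library is available); all data values are invented for illustration.

```python
from scipy import stats

# Chi-square: association between two categorical variables,
# tested by comparing observed frequencies (a contingency table)
observed = [[30, 20],   # e.g., group 1: yes / no counts
            [25, 25]]   # e.g., group 2: yes / no counts
chi2, p_chi2, dof, expected = stats.chi2_contingency(observed)

# Independent-samples t-test: continuous outcome, two groups
group_a = [5.1, 6.2, 5.8, 6.0, 5.5]
group_b = [4.2, 4.8, 5.0, 4.5, 4.7]
t, p_t = stats.ttest_ind(group_a, group_b)

# One-way ANOVA: continuous outcome, more than two groups
group_c = [3.9, 4.1, 4.4, 4.0, 4.2]
f, p_f = stats.f_oneway(group_a, group_b, group_c)

# Pearson correlation: two continuous variables
# (direction = sign of r, strength = magnitude of r)
hours = [1, 2, 3, 4, 5]
score = [2.0, 2.9, 4.1, 4.8, 6.2]
r, p_r = stats.pearsonr(hours, score)
```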
true experimental designs
(sometimes called "equivalent group designs" or "randomized designs" or "randomized clinical trials") are very common in medical and psychological disciplines and are considered by many to be the best or gold standard form of research. This is BECAUSE : true experimental designs allow maximum control over spurious causality and therefore are the best designs for really nailing down causal relationships between variables. Only true experiments allow researchers to make causal conclusions based on study results.
observation
- Counting the number of times a client uses the word "I" during an interview as a measure of ego strength
- Watching for control & aggression while a father plays with a child
- Looking at a video or audiotape
- Observing the sequence of interaction in a family session
◼ Strengths
- Can be more naturalistic
- See things people aren't aware of (unable to report)
◼ Weaknesses
- Some things cannot be observed
- The presence of an observer changes the situation
step 2: measure of central tendency
- Mode: the most commonly occurring observed value
- Median: the middle observation
- Mean: the sum of the scores divided by the number of scores
Measures of variation:
- Range: the difference between the most extreme values
- Variance: indicates the spread of the data around the sample mean
- Standard deviation: the variance represented in terms of the original units
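A minimal sketch of these statistics using Python's standard library; the scores are made up.

```python
import statistics

scores = [2, 4, 4, 5, 7, 9]

statistics.mode(scores)      # 4    -> most commonly occurring value
statistics.median(scores)    # 4.5  -> middle observation
statistics.mean(scores)      # 5.17 -> sum of scores / number of scores
max(scores) - min(scores)    # 7    -> range: difference between extremes
statistics.variance(scores)  # spread of the data around the sample mean
statistics.stdev(scores)     # the variance expressed in original units
```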
research and social work
- To be effective practitioners and administrators - To develop effective policy - To provide ethical treatment - For professional survival
qualitative data analysis
Data sources:
-Interviews: what is said, what is not said, context/demographic info (questionnaire)
-Process Notes: your ideas, impressions, day-to-day commentary, thoughts, interpretations, & notes on the research setting/descriptions of people and actions observed
-Memos: analytic thoughts about the codes; they provide clarification and direction during coding.
Step 1: transcription
Step 2: data explanation and reduction
1. Read and reread your interviews/transcripts
2. Go through and highlight places that stand out
3. Note any ideas or questions you have in the margins
Step 3: begin coding
◼ Begin coding: you don't need to wait for all of your data to be collected
◼ Grounded theory approach: What's going on? What are people doing? What is the person saying?
◼ A code is more than a summary
◼ Concept-driven coding: the categories or concepts the codes represent may come from the research literature, previous studies, the interview schedule/prompt, or a hunch. The researcher builds a list of thematic ideas before applying codes to text, but will need to amend the list of codes during analysis as new ideas are detected.
◼ Data-driven coding: starting to code without a pre-generated list; pulling from the data what is happening without imposing an interpretation based on pre-existing theory.
◼ Codebook: labels of codes; hierarchy of codes (identifies sub-codes); definitions and notes about all codes
◼ Importance of the codebook: makes methods transparent by recording the analytical thinking used to devise codes; allows comparison with other studies/interviews
Step 4: interpretation
◼ Not necessarily linear; may constantly move back and forth between data and interpretation
◼ Comparisons, causes, consequences, and relationships
example of developing a hypothesis
-Physical abuse is harmful to children
-Physical abuse is related to adolescent bad behaviors
-Physical abuse is associated with adolescent violent behaviors
-Physical abuse in childhood will be associated with adolescent violent behaviors
-Physical abuse in childhood reported by CPS will be associated with adolescent violent behaviors reported by the juvenile justice system
→ So, in this example, the hypothesis may be: "Physical abuse in childhood reported by CPS will be associated with adolescent violent behaviors reported by the juvenile justice system."
how to conduct correct statistical analysis
-Step 1: Level of Measurement/ Coding Scheme -Step 2: Understanding Data (Measure of Central Tendency/ Variation)
validity
-Validity: a measuring instrument is valid when it does what it is intended to do (valid measuring instruments measure what they are supposed to measure). The four most commonly described types of validity are:
1) Face validity: synonymous with "it looks right" or "it makes sense".
2) Content validity: are all key factors included in the instrument?
3) Construct validity: a way of seeing whether your measurements and your theory seem to be getting along well and supporting each other.
4) Criterion validity: the validity most commonly referred to when trying to figure out whether a particular measure is working right. It can be classified as concurrent, predictive, and discriminant validity.
secondary data
-existing data
-use data from someone else's survey
-must understand the data
◼ Strengths
- Lots of good, already-gathered data
- Cheap and fast
◼ Weaknesses
- Don't have control over what was asked/measured
(examples on page 394 in the textbook)
4 major ways of collecting data
-pen and pencil (written self-reports)
-interview
-observation
-secondary data
sampling
-sampling is critical because correct selection of subjects from the population is the main factor that provides your study with external validity (generalizability). -identifying and operationalizing your population of interest
Step 1 -- Formulation of a research problem (your area of interest)
-your area should interest you -you must be able to say what your area of interest is in one sentence in simple language -your area of interest must be small enough to guide you to specific questions -your area should be important and have some relevance to practice -it should be practical to study.
Standardized and unstandardized measures
1) Standardized Measures By definition, a standardized measure should measure what it says it measures (validity) and do so consistently over time (reliability). This is supported by the process of development, use, and revision. 2) Unstandardized Measures Unstandardized measures may include those you develop yourself or instruments that already exist but have not had sufficient field testing to be standardized.
four levels of measurement
1)Nominal: types or categories (Gender) 2)Ordinal : orders objects along some continuum (Education) 3)Interval : differences between scale points (IQ) 4)Ratio : an absolute zero point (Salary).
single subject design example
1. ABA design: A = baseline, B = treatment. Generally, follow-up measurement is done the same way it was done before treatment.
EX: Client Fred is referred to therapist Kristin for depression, anxiety, and stress. Before Fred arrives for his first meeting, Kristin decides he needs a way to objectively measure the problem. She finds a short version of the DASS (Depression Anxiety Stress Scales), which has 21 items, and finds that this measure has been shown to have good validity and reliability. Client Fred shows up and does indeed seem to have issues around depression, anxiety, and stress. Treatment is scheduled for ten sessions, which is what Fred's insurance will cover (the first two sessions are assessment sessions, supplying the baseline; the next seven sessions are CBT, the treatment sessions; the last session is a follow-up session). Kristin asks Fred to fill out this scale every week. (A sketch of what those weekly scores might look like follows this card.)
2. Multiple baseline model (ABAB design) (single subject combined with intervention design): If you really need to be sure a treatment works for a client, you might stop the treatment and then restart it. This is called an ABAB (baseline-treatment-baseline-treatment) design.
Example 2: Don't do crime. Suppose that it's December 2006, and you work at a local community agency and are particularly interested in helping your community reduce street crime. You are lobbying for improved streetlighting to be installed, but this won't happen for about a year. You are thinking of starting up a citizen program to see if that will help. Your goal is to reduce assaults and robberies that occur in your community. Thanks to the police's help, you can get quarterly crime rates in your area.
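To make the ABA logic concrete, here is a minimal sketch in Python; Fred is the example client above, but every score value is hypothetical.

```python
# Hypothetical weekly DASS-21 totals across the ten sessions:
baseline  = [28, 27]                      # A: sessions 1-2 (assessment)
treatment = [26, 24, 21, 18, 16, 14, 12]  # B: sessions 3-9 (CBT)
followup  = [13]                          # A: session 10 (follow-up)

def mean(xs):
    return sum(xs) / len(xs)

# Comparing phase means shows whether scores moved during treatment
# and whether gains held at follow-up.
print(mean(baseline), mean(treatment), mean(followup))
```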
3 elements of a question
1. Context: Please indicate how true each statement has been for you during the past 7 days.
2. Stem: I have nausea.
3. Response: 0 = Not at all; 1 = A little bit; 2 = Somewhat; 3 = Quite a bit; 4 = Very much
level issues related to measurement error
1. Ecological fallacy: occurs when you take the average value for a group and assign this value to all members of the group.
◼ Attribution of a group-level characteristic to an individual
◼ No ability to take into account the variability between individuals within the group
2. Clustering: clients of the same doctor or therapist will share similar characteristics or outcomes; students in the same school will share similar characteristics or outcomes.
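A tiny Python sketch of the ecological fallacy, using made-up incomes.

```python
# Four made-up household incomes in one neighborhood:
incomes = [12_000, 15_000, 18_000, 250_000]
group_mean = sum(incomes) / len(incomes)  # 73,750

# Assigning the group mean to every member ("each resident earns
# about $73,750") misdescribes all four households, because it
# ignores the variability between individuals within the group.
```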
The most common types of literature include: -Empirical -Review or Overview -Theoretical or Conceptual
1. Empirical: mainly focused on presenting new data. Most articles are empirical, and we use empirical articles the most for learning about our area of interest. EX: an article reporting findings from an experimental treatment for autism, or a study describing relationships between neighborhood poverty and rates of domestic violence. This type of literature has a "methods" section covering sample, measures, analysis, etc.
2. Review or overview: these sources tell you what we know about something. EX: an article describing what we know about what works and what fails in welfare-to-work programs. This type of literature often includes the term "systematic review of ..." or "meta-analysis of ..." in the title, and it uses multiple empirical studies on an area of interest to review the results.
3. Theoretical or conceptual: these sources present ideas, tie together prior findings in new ways, or seek in some other manner to make sense out of what we know. EX: an article suggesting that the current findings in a given area can best be explained through the application of a new theoretical model. This type of literature usually doesn't have a methods section.
choosing a measurement
1. Locating an instrument. Try the following approaches:
- review your literature and see what instruments were used
- ask people (professors, other researchers) in your area what instruments are available
- check your library catalog to see if any books exist in your area that include either reviews of instruments or the instruments themselves (e.g., Corcoran, K. & Fischer, J. (2000). Measures for Clinical Practice: A Sourcebook)
2. Find a standardized instrument for your study because: a) it tends to have high validity and reliability; b) from a purely practical perspective, it also saves you lots of time and effort evaluating and defending an instrument you created yourself.
3. Modifying existing instruments. Common reasons for modification include the need to:
- translate into a different language
- adjust for age or cognitive ability
- shorten the measure
4. Creating new instruments. Creating valid standardized scales and measures requires substantial expertise in item creation and statistics, plus repeated pilot testing of the instrument, and can end up taking years. For practical purposes, instead of using a standardized multi-item scale to assess something, you may just ask about it directly with a simple question.
probability sampling
Probability sampling involves "random chance," which means each subject has an equal chance of being selected.
probability sampling types:
(1) Simple Random Sampling
(2) Systematic Sampling
(3) Stratified Sampling
 - Proportionate Stratified Sampling
 - Disproportionate Stratified Sampling
(4) Cluster Sampling
(A sketch of the first three follows.)
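A minimal Python sketch of the first three strategies, using only the standard library; the population is a made-up list of IDs.

```python
import random

population = list(range(1, 101))  # 100 subjects, IDs 1-100

# (1) Simple random sampling: every subject has an equal chance
simple = random.sample(population, 10)

# (2) Systematic sampling: random start, then every k-th subject
k = len(population) // 10         # sampling interval
start = random.randrange(k)
systematic = population[start::k]

# (3) Stratified sampling: random sample within each subgroup
strata = {"freshmen": population[:40], "seniors": population[40:]}
stratified = [person for group in strata.values()
              for person in random.sample(group, 5)]
```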
key factors of true experimental design
1. They include experimental groups and a control group.
◼ Experimental groups are the groups that receive the experimental treatment (e.g., they get the drug, they get the new therapeutic program, etc.).
◼ Control groups are groups that receive no treatment, OR a placebo, OR treatment-as-usual.
2. Subjects are randomly assigned to experimental and control groups. → The groups are equivalent (on average) at the start.
3. True experimental designs are deductive (theory predicts factual outcome).
quasi experimental designs
1. Multi-group quasi-experimental designs
2. Natural experiments happen when the world creates a quasi-experimental design for you. For example, you might compare 20 communities that have had a large employer leave town to 20 otherwise similar communities that did not lose a major employer. In this case, you can't ever do a true experimental design, because you can't randomly assign communities to "lose major employer" or "not lose major employer" conditions. Naturalistic experiments are very common in the social sciences and are the best way to study many kinds of social issues. Because of the lack of random assignment, make sure your subjects in both groups are alike: recruit your subjects for both groups in similar ways, and match the two groups on important characteristics (e.g., age, SES, level of problems, etc.) as much as you can.
3. Single-group designs
types of question
1. Yes or no
2. True or false
3. Categorical response items. Example: What is your religion: [ ]
4. List-selection items. Example: What is your religion? (check only one, or check all that apply)
5. Likert scale: e.g., "I feel as if these examples will never end."
6. Multi-item scale: e.g., emotional well-being. When is it usually used? For a latent construct.
◼ Estimation of a unidimensional latent trait/variable: an abstract concept that cannot be measured directly; examples: attitudes, quality of life, satisfaction, anxiety, depression
◼ Scoring: sum or average (see the sketch below)
◼ Advantages of multi-item scales:
- Latent variables are usually complex and not easily measured with a single item
- Usually more reliable and less prone to random measurement error than single-item measures
7. Skip patterns: sometimes questions are meant to be answered only by certain subjects.
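A minimal Python sketch of scoring a multi-item scale by sum or average; the item names and 0-4 responses are invented.

```python
# One respondent's answers to a hypothetical 4-item scale (0-4 each):
responses = {"item1": 3, "item2": 2, "item3": 4, "item4": 3}

total = sum(responses.values())     # sum score: 12
average = total / len(responses)    # average score: 3.0
```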
longitudinal studies
: Studies that follow subjects over time are called longitudinal studies. Data are collected from several different points in time.
cross sectional studies
: The simplest and most common type of social research is the cross-sectional study. In cross-sectional research, researchers observe at one point in time, so it cannot capture social processes or change.
structure of instruments definition
All structured instruments are made up of a series of items. These items usually have closed-ended responses, most commonly in the form of yes/no items, numeric or categorical response items, true/false items, or Likert scale items.
step 1: four levels of measurement
Categorical variables:
1) Nominal: types or categories (gender)
2) Ordinal: orders objects along some continuum (education)
Continuous variables:
3) Interval: differences between scale points (IQ)
4) Ratio: an absolute zero point (salary)
cluster sampling
Cluster sampling takes place when a researcher starts with many clusters of subjects and picks a subset of these clusters for the sample.
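A minimal Python sketch, with made-up schools (clusters) and students.

```python
import random

# Clusters of subjects (e.g., students grouped by school):
clusters = {"school_a": ["s1", "s2"], "school_b": ["s3", "s4"],
            "school_c": ["s5", "s6"], "school_d": ["s7", "s8"]}

# Randomly pick a subset of clusters, then take subjects from them:
chosen = random.sample(list(clusters), 2)
sample = [student for name in chosen for student in clusters[name]]
```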
3 types of criterion validity
Concurrent validity : Concurrent validity occurs when you check to see if an instrument agrees with other measures or behavior observations taken at the same time. Predictive validity : This validity denotes an instrument's ability to predict future events. Discriminant validity : Discriminant validity is how you show that your instrument is not measuring some specific other thing.
correlation and causality
Correlation means there is a relationship between variables. If you want to study whether consuming violent media is related to aggression, you collect data on children's video game use and their behavioral tendencies. You ask parents to report the number of weekly hours their child spends playing violent video games, and you survey parents and teachers on the children's behaviors. You find a correlation between the variables: playing violent video games and aggressive behavior. However, correlation doesn't imply causation, because a third variable may affect both variables. In your study on violent video games and aggression, parental attention can be the third variable that influences both how much children use violent video games and their behavioral tendencies: low-quality parental attention can increase both violent video game use and aggressive behavior in children. Only if no third variable drives the relationship/correlation can you say there is causality (ONLY violent video games influence aggressive behavior). As you can imagine, there are always many third variables in social work problems, since we study human behaviors, and human behaviors must be understood within an environment where many factors are related (we don't live in a vacuum). This is why we cannot claim "causality" (A causes B) in most social work research. What we can say is "correlation" (A is related to/associated with B). (The sketch below simulates this.)
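A minimal simulation sketch (assuming Python with numpy): a third variable drives both game hours and aggression, producing a clear correlation even though, in this toy model, neither one causes the other.

```python
import numpy as np

rng = np.random.default_rng(0)
low_attention = rng.normal(size=1000)        # confounder (third variable)

# Both variables depend on the confounder, not on each other:
game_hours = 2 * low_attention + rng.normal(size=1000)
aggression = 3 * low_attention + rng.normal(size=1000)

print(np.corrcoef(game_hours, aggression)[0, 1])  # strongly positive (~0.85)
```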
cross over design
Crossover designs give subjects exposure to different experimental conditions over time. ◼ Participants are randomized to the order in which interventions are assigned. i.e., start with experimental, and then switch to control, or the reverse. ◼ Often includes a "rest" or "washout" (no treatment) period to eliminate effects of first intervention. ◼ The most common form used in social science research is partial crossover designs.
Should you always have a hypothesis?
Deductive approaches always look at theory and specify a fact that theory predicts will happen. All deductive research must have at least one hypothesis. Inductive research looks at facts and creates theory. Purely inductive research will not have any hypotheses.
directional and nondirectional example
Directional: Higher level of burnout will be associated with fewer hours of service provided. Non-directional: Burnout will be associated with differences in hours of service Most researchers use directional hypotheses when they are confident they can predict the direction of the expected relationship, based on theory or prior work.
disproportionate stratified sampling
Disproportionate sampling (oversampling) happens when a researcher intentionally recruits more of a given type of subject than would naturally occur in a random sample. This is done to be sure of getting enough people of that type.
Example: Let's say that we want to know about the college experience of different races. Our school has 2,000 people, of whom 100 are Hispanic, 200 are African American, and 1,700 are Caucasian. We want a final sample of 90. This means we would probably get only about 5 Hispanic and 10 African American students if we selected students randomly. Not enough! (See the arithmetic below.)
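Checking that arithmetic with a few lines of Python (numbers from the example above):

```python
n, N = 90, 2000  # final sample size, school population

print(n * 100 / N)    # ~4.5 Hispanic students expected
print(n * 200 / N)    # ~9 African American students expected
print(n * 1700 / N)   # ~76.5 Caucasian students expected
# Oversampling would instead recruit, say, 30 from each group.
```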
posttest only control group design example
EXAMPLE: Counseling agencies often find that 30% of scheduled appointments result in "no-shows." So, you have a hypothesis that the 30% no-show rate could be reduced by the agency's receptionist calling clients to remind them of their scheduled appointments. The group receiving the phone calls would constitute the experimental group; those clients who do not receive a reminder constitute the control group. Membership in either the experimental or control group would be randomly determined (e.g., even-numbered clients during the first week of March), yielding groups with similar SES, equivalent personal characteristics, similar levels of problems, etc. At the end of the study, the no-show rates for the two groups could be compared.
dosage design examples
Example 1: New Aspirin (group → pain rating)
- Sugar placebo: 7.8
- 150 mg: 6.3
- 300 mg: 3.3
- 450 mg: 3.3
- 600 mg: 3.2
Example 2: New Job Program (group → % of people who find a job)
- Treatment as usual (old program): 10%
- 2-week program: 15%
- 4-week program: 20%
- 8-week program: 80%
- 12-week program: 85%
→ e.g., how long should the job training be?
pen and pencil tests
Examples: Beck Depression Inventory as a measure of depression; SAT as a measure of potential for success in college; MMPI as a measure of personality functioning; voting machine as a measure of political preference
◼ Strengths
- Collect information from many people
- Relatively cheap
- Usually consistent
- People can feel anonymous and answer more truthfully
◼ Weaknesses
- Cannot clarify confusion
- Cannot expect long answers
- Cannot expect in-depth information
◼ Modern form of pen and pencil tests: internet surveys
◼ Issues related to pen and pencil tests: response rates
- Why do response rates matter?
- What can we do to boost rates?
◼ Do's & Don'ts
- Short is better than long
- Negative terms should be avoided (confusing)
- Leading questions should be avoided (bias)
- Simplify presentation whenever possible (use matrix questions: a standardized set of responses to a series of closed-ended questions)
multiple experimental conditions
Experimental group 1: test → treatment A → test
Experimental group 2: test → treatment B → test
Experimental group 3: test → treatment C → test
Control group: test → (no treatment) → test
nonprobability sampling
Nonprobability samples provide weaker generalizability than probability samples, but they are often necessary for practical reasons.
types of nonprobability sampling:
(1) Convenience/Haphazard sampling
(2) Purposive sampling
(3) Snowball sampling
needs assessment
Human beings and society are very dynamic: populations change, social conditions change, available treatments change, and people change. → Problems we identify in a needs assessment today may not be at all the same next year.
There are two typical types of needs assessments:
- Prevalence: the number of people with a need or a disorder (or whatever) at a point in time
- Incidence: the number of people who pick up that thing (new cases) over a certain period of time
EX: If 40 million people in the US are currently depressed, and there are 5 million new cases per year, then the current prevalence is 40 million, and the incidence of new cases is 5 million people per year.
inferential statistics
Inferential statistics: infer something about the population (parameters) from what we know about the sample (statistics).
◼ Statistical significance: the idea of significance is related to probability. We are never 100% sure about our sample statistics. If you get a difference (e.g., different means between two groups) in your sample, you can be 95% confident that this result holds true in the population, and there is a 5% chance that the result (difference) is due to luck (the p-value). See the sketch below.
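A minimal sketch (assuming Python with scipy) of testing a group difference and reading the p-value; the scores are invented.

```python
from scipy import stats

treated = [6.1, 5.8, 6.4, 6.0, 5.9]
control = [5.0, 4.8, 5.2, 4.9, 5.1]

t, p = stats.ttest_ind(treated, control)
if p < 0.05:
    # less than a 5% chance the observed difference is due to luck
    print("difference is statistically significant at the 95% level")
```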
threats to internal validity
Internal validity is confidence in the conclusions drawn from the findings of a research design. Threats to the internal validity of a study come from extraneous variables: the observed difference between the experimental groups did not truly result from the intervention/treatment but was mixed up with some other extraneous variables. (cf. External validity: generalization)
1. Maturation: your subjects change (mature) as time goes by. (EX) Suppose that you are evaluating an intervention designed to improve adolescents' delinquent behaviors. Since the behavior of adolescents changes naturally as they mature, the observed behavior change may have been due as much to their natural development as to the intervention.
2. History: the world changes, and your subjects change with it. (EX) Let's say we are investigating the effect of an educational program on racial tolerance. We may decide to measure the dependent variable, racial tolerance, before and after the program (pre- and post-test). Suppose that before the posttest, an outbreak of racial violence occurs, such as the type that occurred in Los Angeles in 1992. It poses a massive, possibly fatal, threat to your study.
3. Mortality or attrition: if you lose many people from your sample, then you may have a problem. (EX) You want to test a new treatment for patients who have serious physical problems. Suppose the most seriously ill patients get worse or die and leave the study, so that only patients in okay condition remain in the treatment group. How could you say the patients' condition is because of the treatment?
4. Measurement error
5. Regression to the mean. (EX) Let's say that you have a new treatment for people in crisis. I might get 100 people who call the crisis hotline, and I might use my treatment on them. No matter how effective my treatment is, the people will almost certainly get better. → Why? Because if you nab a bunch of people on their worst day, they have nowhere to go but up. Similarly, individuals within your sample who score very high or very low on the pretest also tend to moderate when retested. This is because you probably just caught them on a bad or good day, and they are returning to normal later. This is regression to the mean, in other words, "returning to average" (see the sketch after this list).
6. Social desirability: sometimes people say things to look good. It is important to avoid questions that subjects will feel uncomfortable answering in a given way. (EX) Rather than asking "do you hit your child?", you may want to ask "how often do you use physical punishment with your child?"
7. Expectation effects: people can often influence outcomes through their expectations; subjects may simply respond to the researcher's attention. To get rid of expectation effects, "blind" designs are often used: A. single-blind experiments; B. double-blind experiments.
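A minimal simulation sketch of threat #5, regression to the mean (assuming Python with numpy); all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
true_level = rng.normal(50, 10, size=10_000)        # stable trait
day1 = true_level + rng.normal(0, 10, size=10_000)  # noisy measure
day2 = true_level + rng.normal(0, 10, size=10_000)  # retest, no treatment

# Select people on their worst day (lowest 10% of day-1 scores):
worst = day1 < np.percentile(day1, 10)

# With no treatment at all, the selected group's mean rises at retest:
print(day1[worst].mean(), day2[worst].mean())
```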
convenience/haphazard sampling example
Let's say that you wanted to recruit freshmen for a study of the psychological transition to college. You located a large freshman class, stood outside the door, and asked the first 10 students coming out of class to participate in your survey. What kind of bias might be introduced by using this haphazard sampling strategy?
measurement error
Measurement error occurs when you do not measure what you want to measure accurately because your instrument or procedure is wrong. This can come in many forms:
1. Asking about the wrong construct
2. Asking the question in a way that invites a specific response (social desirability bias)
3. Not being specific enough about your construct
4. Asking questions your subject doesn't understand
5. Subjects will sometimes not answer all questions
6. Researchers can make mistakes while asking questions or observing
proportionate stratified sampling
Proportionate sampling strategies begin by stratifying the population into relevant subgroups and then randomly sampling within each subgroup. The number of participants that we recruit from each subgroup is equal to its proportion in the population. (See the sketch below.)
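A minimal Python sketch of proportionate allocation, reusing the subgroup sizes from the oversampling example.

```python
population = {"hispanic": 100, "african_american": 200, "caucasian": 1700}
n = 90                         # desired total sample size
N = sum(population.values())   # 2000

allocation = {group: round(n * size / N)
              for group, size in population.items()}
# {'hispanic': 4, 'african_american': 9, 'caucasian': 76}
# (Python rounds .5 to the nearest even number, hence 4 and 76.)
```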
quasi experimental design
Quasi-experimental designs are probably the most common form of social research. They lack random assignment and are therefore often called "nonequivalent group" designs. The main weakness of quasi-experimental designs is that, unlike true experimental designs, they lack random assignment and so cannot completely rule out spurious factors → these designs are weaker with regard to causality questions.
blind designs
Single-blind experiments: In single-blind experiments you don't tell the subject what condition he or she is in. The use of sugar pill placebos is a form of single-blind experiment. This removes the threat of the subject's expectations changing the outcome. Double-blind experiments: Neither the subject nor the persons who deal directly with the subject know what condition the subject is in. If sugar pills and real pills are given, and even the person giving the pills doesn't know which pills are fake, then this is a double-blind study. This removes the threat of both the subject and the researcher influencing the outcomes with their expectations.
dosage design
Sometimes, it is important to know how much of something it takes to get a given effect. This situation may call for a dosage design. We might believe that our new form of aspirin works better, but we aren't sure how much it takes to be effective. We would create a series of groups, getting pills with different levels of the new aspirin.
stratified sampling
Stratified sampling is used when we have subgroups in our population.
- Proportionate Stratified Sampling
- Disproportionate Stratified Sampling
retrospective studies
Studies that look backwards are called retrospective studies. Retrospective designs can collect data over time or from a single point in time. For example, you could ask a subject to recall and rate his self-esteem at different time points in his life, or you could ask him to rate his self-esteem at only a single point in his life. Unfortunately, retrospective data based on human recall are of questionable value. While many studies use such data, human memory is fallible, and such data can be wrong. The further back subjects are asked to recall, the more dubious such data are.
prospective studies
Studies that begin at a given point and extend into the future are called prospective studies. They are longitudinal in nature by definition. If I decide to follow a group of children who had cancer treatment to see how they adjust, and I check on them every month, this is a prospective study.
common sense
That "makes sense". What we know as common sense is often received through other ways of knowing such as received wisdom (a.k.a. agreement reality). This means that the common-sense knowledge did not necessarily originate from an expert, tradition, authority, etc., but perhaps something even less formal. This truth about common sense means that it cannot be relied on solely, but it can be a useful addition to these other ways of knowing. Common sense is valuable in daily living.
social work research
The critical factor that separates social work research from these knowledge sources is that it uses a scientific approach.
Step 2-- Literature review
The process of learning more about your area of interest is usually called "Literature Review". Through the Literature Review, we can find the following information: ◼ what questions have been asked by others in my area? ◼ what theories exist to tell me how to think about my area? ◼ what kinds of data, designs, and methods are used in my area? ◼ what are the main empirical findings in my area? ◼ what needs to be studied next in my area?
specific steps of research process
Theoretical Level 1. Formulation of a research problem (Your area of interest) 2. Literature Review 3. Relationship of the problem to theory 4. Formulation of testable hypotheses 5. Formulation of a research design Empirical Level 6. Gather the data 7. Analyze data 8. Interpret the data and formulate conclusions 9. Implications for practice, policy, and research 10. Publication of results
Step 3 -- Relationship of the problem to theory
Theory describes how one thing relates to something else, what we call in research "the relationship among and between concepts/constructs."
Uses of theory (approaches to theory):
1) Deduction: using a theory to predict a fact. Deduction is when you take a theory and try to figure out what specific facts that theory would predict.
2) Induction: moving from facts to theories. Induction is when you look at data (facts) and come up with a theory (idea) that fits the data.
cultural competence
This includes knowing how to interact respectfully, how to communicate well, and understanding how different people will react to the study. Specifically, this includes understanding:
-Language and dialects used
-Slang or colloquial expressions used to describe constructs of interest
-The perception of the constructs within the cultures in the sample (i.e., what topics are sensitive)
-Comfort level with talking to others outside their culture
Second, your position as "the researcher" creates a power differential that may make subjects uncomfortable about correcting you once the study begins. This makes it particularly important to know how to communicate respectfully with a given group. → You need to understand how to ask questions in a way that minimizes subjects' discomfort.
Example (how to ask): A scholar conducted research on diabetes within a lower-income, urban, African American area. He discovered that residents in the community studied did not use the word "diabetes" but called it " ". Using the terms used by the community enabled the researcher to obtain the information needed more easily and accurately.
media myths
This is perhaps one of the riskiest ways of knowing because of how it is swayed by popular opinion, which is very fickle. Not only is it influenced by opinion, but also by media creators who might aim to change public opinion for their own nefarious reasons. If there was a single way of knowing to be most wary of, this is it!
inaccurate and selective observation
This is the tendency to pay attention only to situations that correspond to (or serve to confirm) a pattern that is merely perceived to be true. Example: signs of neglect.
Also, if we believe that a specific theory or method works better than the others, a common mistake is "seeking out" what we believe when we assess our clients. For example, the practitioner trained to interpret problems in terms of family communication dynamics is apt to look vigilantly for signs of potential communication problems and then magnify the role those problems play in explaining the presenting problem. At the same time, that practitioner is likely to overlook other dynamics or perhaps to underestimate their impact.
tradition
We accept something as being true because "it is the way things have always been". This refers not only to the traditions of your own personal culture but also to the traditions you encounter in a social work service setting. Just as you may have a recipe handed down over the years in your family for the best way to cure the common cold, you will likely also encounter traditional ways of doing things at an agency or organization. Just because the organization has always done something a certain way does not mean it is the best or only way to do it. The down side to relying on tradition is that it is not always open to new ways of doing things, and perhaps is not as critical of itself as it should be.
personal experiences
We can come to know things through our own experiences. In social work, this is sometimes referred to as practice wisdom. Personal experience can be an invaluable tool from which to learn, especially about hard life experiences. However, there are limitations to personal experience. Sometimes our own practice wisdom can be biased or unreliable due to forces outside of our own control. ex. Linda is 31, single, outspoken, and very bright. She majored in philosophy in college. As a student, she was deeply concerned with discrimination and other social issues, and participated in anti-nuclear demonstrations. Which statement is more likely? (1) Linda is a bank teller. (2) Linda is a bank teller and active in the feminist movement.
authority
We gain knowledge from parents, teachers, and experts. The knowledge touted by experts is relied upon by people because of the experts' perceived or actual authority. Even if an expert has legitimate credentials, knowledge based on their word alone should not be relied upon. History has many examples of knowledge based on authority that has turned out to be skewed, false, or simply inaccurate.
Step 4-- formulation of testable hypotheses
When a researcher empirically tests or evaluates a relationship, it is called a hypothesis. Example on physical abuse and violent behaviors: (1) Our research interest and aim is to determine the association between childhood abuse experience and adolescent violent behaviors. (2) The hypothesis is a statement about the relationship you expect to find between constructs; here, you used "social learning theory" to specify the relationship.
◼ Research hypotheses are statements (never questions) about the relationships you expect to find between constructs.
Single Subject Designs
Why study individuals in your practice? - You are ethically required to be accountable for your practice - Coming up with outcome measures forces you and the client to come to a very clear understanding about treatment goals - Recording the client's progress helps both you and the client to know how the therapy is going - Formalized research on client outcomes can not only help you with individual clients but can be used over time to determine strengths and weaknesses in your whole practice
interview
consists of: asking your client questions; oral surveys
◼ Strengths
- Can develop a relationship with the subject
- Subjects are more likely to answer all questions
- Can clarify confusion
- Can probe where you want more information
◼ Weaknesses
- Not good for sensitive information
- The interviewer can have a huge impact on results
- Time consuming
Another form of interview: the telephone survey (opinions on this method are mixed)
Pen and pencil tests and interviews compared:
◼ Both pen and pencil tests and interviews have much in common
- Both are a form of self-report
- In a survey, they are very structured. Often, interviews use the same instrument as mail-out surveys
◼ The biggest difference is how the interviewer affects the interview (gender, race/culture/ethnicity issues)
Q: Who is the logical person to do the interview?
pretest posttest control group design
experimental group: test → treatment → test
control group: test → (no treatment) → test
Should we always pretest?
- You can use pretest data to determine the degree of change (change = time 2 score - time 1 score)
- You can verify that your groups were similar at the start with respect to variables of interest
posttest only control group design
experimental group: treatment → test
control group: (no treatment) → test
Random assignment establishes the initial equivalence of the experimental and control groups. The control group's posttest score then serves in place of a pretest measure for comparison with the experimental group's posttest.
purposive sampling
Also called judgmental sampling: deliberately picking one particular group (green guys only)
can single subject designs be true experimental designs?
no, they lack randomization
single subject designs
◼ Advantages & Disadvantages There are three major advantages to single-subject designs. 1. the ability to focus on individual performance. 2. you don't leave individuals untreated in control groups. 3. flexibility in design. There are 2 major disadvantages to single-subject designs. 1. Some effects are small and can't really be seen in one subject. 2. Sometimes you can't try out different variables on the same subject and you must use between-subject designs. ◼ Key terms in single-subject designs 1. Baseline: The word "baseline" means "the level of functioning before treatment." 2. Intervention: By intervention, we simply mean "the specific treatment the client is getting." You may use one intervention, or you may use several interventions simultaneously or one after the other. 3. Outcome: Outcome means "the things we're trying to change" and is the same as dependent variable. 4. Follow-up: Follow-up occurs when you stop treatment but come back later to see how the outcome is holding up
unstructured instruments
◼ An unstructured instrument consists solely of an open-ended question or questions, such as "Tell me about your experience with the WVU MSW program." This method is more likely to be associated with qualitative methods, as the responses are probably going to be narrative.
qualitative analysis
◼ Class Exercise with examples (Barriers related to Breast Cancer Screening in rural community & Evaluating Social Service Program in ER)
program monitoring: client satisfaction
◼ Client satisfaction is one of those gray areas that sometimes crosses evaluation boundaries. Some agencies see their client satisfaction measures as equivalent to an outcome assessment. Clearly we want our client or target population to be happy with the services. The problem is that satisfaction does not always prove you have good outcomes.
Let's say that an older individual is homebound and is rarely visited by family. The person is therefore lonely and also has physical therapy needs after a hip operation. An incompetent case manager (who also happens to be very nice and talkative) is assigned. The manager never successfully arranges for the correct physical therapy but is given a glowing satisfaction evaluation by the client. This is because the client values the human relationship and therefore ignores whether or not the other need is met. While satisfaction may be one type of outcome, it is best cast as part of program monitoring.
Step 5-- formulation of research design
◼ Constructs are the things we study. They can be almost anything, from traits to opinions to events to treatment interventions to virtually anything we can think of.
◼ Usable constructs are easy to understand and specific, and can be transformed into variables that are clearly operationalized and measured.
◼ Variables & operationalization: variables are operationalized constructs. → What does "operationalized" mean? Something is operationalized if it is clear how it is measured.
◼ Nominal definition: dictionary definitions or word definitions. (ex) Suicide attempt history is defined as a history of the intentional taking of one's own life.
◼ Operational definition: definitions in which concepts are put into measurable terms. (ex) Operationalized, suicide attempt history can be "number of hospital admissions for attempted suicide".
◼ Independent variables: can be thought of as "predictor" or "causal" variables. Independent variables are those things you believe will predict or cause a change in the dependent variable. When drawing models, independent variables are on the left.
◼ Dependent variables: can be thought of as "outcome" variables. They are those things you believe are affected by other things in your model. When drawing models, the dependent variable is the item furthest to the right.
◼ Control variables: other possible influences on your dependent variable that you decide to keep track of (extraneous variables).
◼ Causality (causal relationship). To be a causal relationship:
1. A is correlated with B
2. A occurs before B in time
3. There is an absence of confounding variables
Cultural Competence & Sensitivity in Social Work Research
◼ Cultural competence means that you can work well with people from different cultures and make adjustments to the research process to avoid threats to validity that may stem from cultural differences across all phases of the research process. This may include the appropriate selection of measures and language use, the use of racially and/or culturally similar interviewers, and choosing specific types of research design.
focus groups
◼ Focus group characteristics
- Address a specific topic
- Small group (6-12 people)
- Best if from similar backgrounds
- Best if participants don't know one another
- 1 to 2 hours in length
- Max 10 questions
◼ Focus groups are useful for:
- Learning about community/group norms/values
- Capturing discussion (interaction) among participants
- Learning what people think
- Learning how people think
- Finding the range of opinions that exist in a group
- In-depth data
- Identifying major themes
◼ Do
- Be clear on how data will be used before you begin asking questions
- Move from least to most threatening
- Ask open-ended questions (let them use their own language)
◼ Avoid
- Asking for second-hand data
- Asking more than one question at a time
- Leading questions (e.g., "When your school is not responsible for ..., doesn't that make you angry?")
◼ Moderator
- Skilled in group discussion
- Uses a protocol but stays flexible
- Establishes a "safe" environment (captures body language too; usually there is a note taker/observer)
◼ Managing focus group discussion
- Even participation
- Mirror/echo/reflecting
- Silence
- Linking
common errors of alternatives
◼ Inaccurate and Selective Observation ◼ Overgeneralization ◼ Correlation and Causality
evaluation research
◼ Why do we need evaluation research?
- To improve our practice
- To be able to report back to your client, your agency, and funding or accrediting agencies
- To serve as a basis for grant or funding applications
1. Needs assessment: social workers need to understand who needs services, how much service is needed, and where services should be targeted.
2. Program monitoring (process evaluation): "what am I doing" or "what's going on here". Agencies and organizations need to understand certain basic aspects of their operation to continue to function appropriately. Monitoring questions include:
1. How many total persons have been served?
2. How long does our staff stay here?
3. How satisfied are the clients?
4. Are treatment protocols being followed?
3. Outcome evaluation: understanding the effect of a service, program, or treatment. If you have a program you want to implement, then you should also consider the audience for the results when planning the evaluation. For example, a school service worker will want to show that the program is valuable to the host agency that funds it. Perhaps the school service worker is interested in self-efficacy, but the school really wants ways of improving test scores. If the increase in self-efficacy might reasonably affect test scores, then it's a good idea to also measure those test score changes.
qualitative research
◼ It means that the collection of information is based on the lived experiences of the subject. It is inductive research, generally exploratory (the inductive and exploratory process of observing the world, interpreting meaning, and using that information to come up with ideas (theories) about what is being observed).
◼ Strengths
-Provides complex textual descriptions of: meaning, experience, multiple perspectives on the same situation, context. Example?
-Identifies intangible factors such as social norms, gender roles, and religion, whose role in the research issue may not be readily apparent
-More naturalistic
◼ Qualitative methods are useful when:
- There is little empirical data
- Previous understandings are puzzling/contradictory
- Theory development is needed
- There is interest in understanding a complex context
■ Participant observation is appropriate for collecting data on naturally occurring behaviors in their usual contexts.
■ In-depth interviews are optimal for collecting data on individuals' personal histories, perspectives, and experiences, particularly when sensitive topics are being explored.
■ Focus groups are effective in eliciting data on the cultural norms of a group and in generating broad overviews of issues of concern to the cultural groups or subgroups represented.
The types of data these three methods generate are field notes, audio (and sometimes video) recordings, and transcripts.
◼ Participant observation: ethnography is an umbrella term for a family of qualitative research methods, often used interchangeably with "participant observation". Participant observation is when you not only observe people doing things but participate to some extent in these activities as well (cf. observation is when you are watching other people from the outside as an observer). Most fieldwork of a qualitative nature tends to involve participant observation rather than observation.
◼ Focus groups: a focus group is a qualitative data collection method in which one or two researchers and several participants meet as a group to discuss a given research topic (group discussion on a focused topic).
◼ Common focus group uses
- Obtain background information on a topic
- Diagnose potential problems with a new program/service/product (program development)
- Learn how respondents talk about the area of interest (may facilitate research tool development)
qualitative research
◼ Let's say that a student decides to do some qualitative research on how hospitalized people feel about being in a hospital. That student picks some people (without any clear reasoning for whom he picks), asks whatever questions seem appropriate at the time, and writes a report filled with his opinions about what was said. This would be an example of very bad qualitative research, and few people would say it is scientific. It is not scientific mainly because it follows no clear logical structure. For example, nobody can really tell what the researcher was trying to do, what he did, or how he came to his conclusions. Nobody can go out and replicate what was done.
◼ This research could have been improved if the researcher had paid more attention to his question, his sample, and how he asked questions. He needed an organized way to record data and to tell people how he analyzed his data. If these things are done in a logical and structured manner, and if he reports his process, then this would be an example of scientific qualitative research.
non experimental designs
◼ Non-experimental designs can be broken down into three types: correlational, descriptive, and exploratory.
1. Correlational designs test theoretically derived relationships. Like experimental designs, correlational designs test relationships as predicted by theory, but their greatest weakness is that they cannot prove causation.
2. Descriptive designs tell you about the characteristics of a given group rather than test established hypotheses or relationships between two constructs. (EX) You may need information on the characteristics of people using emergency rooms for non-emergency care.
3. Exploratory designs seek to begin to understand new areas or phenomena. Sometimes you need help thinking about some new population or issue. Perhaps you want to start thinking about something that hasn't been studied much and you don't even know where to begin.
Example: Park use study. One of the authors consulted on a study of the uses of a large urban park. This study had a large number of components, including two that were purely observational. In one component, a researcher parked his car at an intersection and simply noted the number, type, and direction of vehicles passing through. This was done to get a sense of the flow of traffic through the park. The researcher showed up at different times on different days and at different intersections according to a rotating schedule, so that the best overall picture of park traffic flow could be obtained.
reliability
◼ Reliability is having a measuring tool or instrument that produces the same result every time. We can establish reliability through:
1) Test-retest reliability: giving the measure twice and checking statistically to see if you get the same result
2) Interrater reliability: the extent to which two or more individuals (coders or raters) agree
3) Alternative-form method: alternative forms can be thought of as instruments with equivalent content but different questions; they are intended to measure the same variable
4) Internal consistency: well known as Cronbach's alpha. If all items in a measure are measuring the same construct, they should agree with each other (see the sketch below).
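A minimal Python sketch (assuming numpy) of Cronbach's alpha computed from its standard variance formula, on made-up item responses.

```python
import numpy as np

# Rows = respondents, columns = items of one scale (made-up data):
items = np.array([[3, 4, 3, 4],
                  [2, 2, 3, 2],
                  [4, 4, 4, 5],
                  [1, 2, 1, 2],
                  [3, 3, 4, 3]])

k = items.shape[1]                            # number of items
item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores

alpha = (k / (k - 1)) * (1 - item_vars / total_var)
print(round(alpha, 2))  # ~0.94 here; closer to 1 = items agree
```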
reliability and validity
◼ Reliability is synonymous with "repeatability": the degree of accuracy or precision of a measuring instrument.
◼ Validity is synonymous with "correctness": the degree to which an instrument measures what it is supposed to.
◼ Relationship between reliability and validity:
- For a measure to be valid (correct), it must first be reliable: validity requires reliability (if a measure is valid, it must be reliable).
- Reliability can easily exist in the absence of validity. If you give the same wrong answer repeatedly, then you are reliable in what you say, and also wrong.
reliability check
◼ Rigorous cooperation
◼ Code cross-checking: more than one person engaged in data analysis can provide comparison to avoid bias, detect omissions, and ensure consistency
semi structured instruments
◼ Semi-structured instruments are similar in form but include questions without predetermined responses. Example: course evaluations are of this type, with both Likert scale items and sections for students to write their opinions in narrative form (open-ended questions).
overgeneralization
◼ The assumption that a few similar events are evidence of a general pattern.
EX: Welfare reform under the Reagan administration in the 1980s (media myth and overgeneralization). During the 1980s, conservative Reagan-era politicians sought to reduce welfare spending and cited stereotypes such as "welfare queens" driving Cadillacs and trading food stamps for money to buy liquor or other socially unacceptable items. By playing on these stereotypes, they stimulated the public to overgeneralize about welfare recipients.
survey design
◼ The key issue in survey design is data collection. Choosing how to collect the data has, obviously, major implications for the kind of data you get, so it is important to make a reflective choice that is based on what you want to find out.