Research Final


basic designs (single subject research design)

- A-B designs - A-B-A designs - A-B-A-B designs

A-B-A designs

- MISSING NOTE - includes baseline (A) followed by intervention (B), followed by return to baseline (A)

criterion for interpretation of PND scores

- PND range 0-100% - MISSING NOTES

what is the decision matrix?

- a basic decision diagram showing the steps involved in a statistical conclusion - the central decision involves retaining the null hypothesis or accepting the alternative hypothesis

what does difference mean?

- a statistical difference is a function of the difference between means, relative to the variability
- a small difference between means with large variability could be due to chance
- like a signal-to-noise ratio: signal ---> difference between group means; noise ---> variability of the groups
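
As an illustration of the signal-to-noise idea, a minimal Python sketch (hypothetical data; numpy assumed available) computing a standardized mean difference (Cohen's d):

```python
# Minimal sketch: "signal" (difference between group means) relative to
# "noise" (pooled variability), i.e., Cohen's d. Data are hypothetical.
import numpy as np

group1 = np.array([10, 12, 11, 14, 13, 12])
group2 = np.array([15, 16, 14, 17, 18, 15])

signal = group2.mean() - group1.mean()                          # diff btwn group means
noise = np.sqrt((group1.var(ddof=1) + group2.var(ddof=1)) / 2)  # pooled SD (equal n)
print(f"Cohen's d = {signal / noise:.2f}")
```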

multiple baseline designs (single subject research designs)

- across participants - across setting - across behavior

foundational similarities between qual and quant methods

- all qualitative data can be measured and coded using quantitative methods - all quantitative research can be generated from qualitative inquiries - ex: one can code an open-ended interview with numbers that refer to data-specific references

if checklist....

- are all alternatives covered - is it a reasonable length - is the wording impartial - is the form of the response easy and uniform

phenomenology

- branch of science dealing with classifying and describing phenomena without attempting to explain them
- phenomenology focuses on people's subjective experiences and interpretations of the world
- phenomenological theorists argue that objectivity is virtually impossible to ascertain
- phenomenologists attempt to understand those they observe from the subject's perspective
- this outlook is especially pertinent in social work and research, where empathy and perspective become the keys to success

PND: percentage of nonoverlapping data

- calculation of non-overlap between baseline and successive intervention phases
- procedure: identify the highest data point in baseline and determine the % of data points during intervention exceeding this level
- advantages: 1. easy to interpret 2. non-parametric statistic
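
A minimal sketch of the PND procedure in Python (hypothetical data; assumes higher scores indicate improvement):

```python
# Minimal sketch of a PND calculation on hypothetical data.
def pnd(baseline, intervention):
    """Percentage of intervention points exceeding the highest baseline point."""
    ceiling = max(baseline)
    exceeding = sum(1 for x in intervention if x > ceiling)
    return 100.0 * exceeding / len(intervention)

baseline = [2, 3, 2, 4, 3]          # hypothetical baseline (A) data
intervention = [5, 6, 4, 7, 8, 6]   # hypothetical intervention (B) data
print(f"PND = {pnd(baseline, intervention):.1f}%")   # 83.3% here
```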

IRD: improvement rate difference

- calculation of the percentage of improvement between baseline and intervention performance
- procedure:
  1. identify the minimum number of data points from baseline and intervention that would have to be removed for complete data separation
  2. points removed from baseline are "improved" (overlap with intervention) and points removed from intervention are "not improved" (overlap with baseline)
  3. subtract the % "improved" in baseline from the % "improved" in treatment
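
A minimal sketch of the IRD procedure in Python (hypothetical data; assumes higher scores indicate improvement; the handling of ties between equally small removal sets is a simplification):

```python
# Minimal sketch of an IRD calculation on hypothetical data. Finds the smallest
# set of points whose removal leaves baseline and intervention completely
# non-overlapping, then subtracts the baseline improvement rate from the
# treatment improvement rate.
def ird(baseline, intervention):
    candidates = sorted(set(baseline) | set(intervention))
    best = None
    for cut in candidates:
        # remove baseline points at/above the cut and intervention points below it
        removed_base = sum(1 for x in baseline if x >= cut)
        removed_int = sum(1 for x in intervention if x < cut)
        if best is None or removed_base + removed_int < best[0]:
            best = (removed_base + removed_int, removed_base, removed_int)
    _, removed_base, removed_int = best
    improved_base = removed_base / len(baseline)                      # "improved" in baseline
    improved_int = (len(intervention) - removed_int) / len(intervention)
    return improved_int - improved_base

baseline = [2, 3, 2, 4, 3]
intervention = [5, 6, 4, 7, 8, 6]
print(f"IRD = {ird(baseline, intervention):.2f}")
```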

decisions about question wording

- can the question be misunderstood
- what assumptions does the question make
- is the time frame specified
- how personal is the wording
- is the wording too direct
- some additional issues:
  1. does the question contain difficult or unclear terminology
  2. does the question make each alternative explicit
  3. is the wording objectionable
  4. is the wording loaded or slanted

tests for differences among multiple groups/conditions: complex ANOVA

- complex ANOVA:
  1. two or more IVs (factors)
  2. look at main effects and interactions (one effect is influenced by another effect)
- non-parametric test: Friedman's test:
  1. when assumptions about normality and homogeneity of variance are violated
  2. analysis of variance using the ranks of the data
  3. does not investigate interaction effects
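
A minimal sketch of Friedman's test using scipy (hypothetical repeated-measures data: the same six participants measured under three conditions):

```python
# Minimal sketch of Friedman's rank-based test on hypothetical data.
from scipy import stats

cond_a = [10, 12, 13, 9, 11, 14]
cond_b = [12, 14, 15, 11, 12, 16]
cond_c = [11, 13, 13, 10, 12, 15]

stat, p = stats.friedmanchisquare(cond_a, cond_b, cond_c)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")
```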

limitations of IRD

- conventions for calculation are not always clear for more complex and multiple data series - "Baseline improvement" is misleading concept - requires further validation and comparison to existing measures

descriptive statistics

- describe measures of central tendency and variation of participants in the study
- goal of group design is to generalize to the population:
  1. need a sample that accurately represents the population
  2. sample possesses the key variables in the same proportion as found in the target population
- purpose of the experiment: test one or more hypotheses about some characteristics of a target population

null hypothesis testing (Ho)

- developed by Sir Ronald Fisher
- ex: 2 groups of children- 1 receives treatment, the other does not
- alternative hypothesis (H1, Ha): your prediction/the hypothesis you support (the group means will differ)
- null hypothesis (Ho): set up artificially for logical purposes (the group means will not differ if the experiment were repeated an infinite number of times)
- if the sample means differ enough from each other, the null can be rejected
- one has 2 choices: reject the null, retain the null
- how much the means must differ for the null hypothesis to be rejected involves the question of variability and the concept of statistical significance
- determine the likelihood that a given result will occur by chance IF the null hypothesis is correct
- results that occur with low probability by chance are likely to result from real treatment effects; they are statistically significant (p <= .05). only those relatively infrequent results in the tails of the curve are sufficient evidence to lead to rejection of the null
- the likelihood of incorrectly rejecting the null hypothesis is minimized if results occur with relatively low probability
- the experimenter must decide upon the particular probability of incorrectly rejecting the null hypothesis that he is willing to tolerate
- the probability of incorrectly rejecting the null hypothesis is under the experimenter's control:
  1. often, the probability level is set to .01 or .05 (the likelihood of the sample result occurring, if the null hypothesis were true, is 1 or 5 in 100)
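
A minimal sketch of the reject/retain decision using scipy (hypothetical treatment and control scores):

```python
# Minimal sketch: compare two hypothetical groups and apply the alpha = .05 rule.
from scipy import stats

treatment = [21, 24, 23, 26, 22, 25, 27, 24]
control = [20, 19, 22, 21, 18, 20, 23, 21]

t, p = stats.ttest_ind(treatment, control)
alpha = 0.05
decision = "reject Ho" if p <= alpha else "retain Ho"
print(f"t = {t:.2f}, p = {p:.4f} -> {decision}")
```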

structured questions

- dichotomous - nominal - ordinal - interval

repeated measurement of dependent variable

- direct observation at multiple points in time - must include inter-observer agreement to assess "reliability" of the DV

ethnography

- emphasizes the observation of details of everyday life as they naturally unfold in the real world - also called naturalistic research - it is a method of describing a culture or society - primarily used in anthropological research

tests for differences among multiple groups/conditions: ANOVA

- equivalent to the t-test, but analyzes many groups
- assumptions for use of ANOVA:
  1. data are continuous (interval or ratio)
  2. participants randomly assigned
  3. normal distribution of data (robust to violations if n > 30)
  4. data sampled from populations with equal variances (Levene's test for equal variance)
- one-way ANOVA (one grouping factor- 1 IV)
- two-way ANOVA (two grouping factors)
- three-way ANOVA
- multi-factor (complex) analysis of variance
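
A minimal sketch of a one-way ANOVA with scipy (hypothetical scores from three independent groups), including Levene's test for the equal-variance assumption:

```python
# Minimal sketch: check homogeneity of variance, then run a one-way ANOVA.
from scipy import stats

group1 = [4, 5, 6, 5, 7]
group2 = [6, 7, 8, 7, 9]
group3 = [5, 6, 5, 6, 7]

print(stats.levene(group1, group2, group3))    # homogeneity of variance check
print(stats.f_oneway(group1, group2, group3))  # F statistic and p-value
```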

A-B-A-B designs

- extends the A-B-A design by reintroducing intervention - addresses major ethical concern by ending on an intervention phase

type II error

- fail to reject the null hypothesis - retain the null hypothesis - when in fact it is false - beta error - you claim there is no effect/result when there is one - can be decreased by increasing the number of subjects in the investigation - the more stringent the level of significance (.01 instead of .05), the larger the probability of a Type II error
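
A minimal simulation sketch (hypothetical true effect of 0.5 SD; numpy and scipy assumed) illustrating that increasing the number of subjects lowers the Type II error rate:

```python
# Minimal sketch: with a real but modest effect, count how often the t-test
# fails to reject the null at alpha = .05 for two different sample sizes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, effect, sims = 0.05, 0.5, 2000   # hypothetical true effect of 0.5 SD

for n in (10, 50):
    misses = 0
    for _ in range(sims):
        control = rng.normal(0.0, 1.0, n)
        treatment = rng.normal(effect, 1.0, n)
        _, p = stats.ttest_ind(treatment, control)
        if p > alpha:          # failed to reject a false null -> Type II error
            misses += 1
    print(f"n = {n:2d}: estimated Type II error rate = {misses / sims:.2f}")
```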

t-test

- for differences between groups - tests whether the means of two groups are statistically different from each other

inferential statistics

- go beyond the data of the current study: based on a sub-set of the population (sample), make inferences about the whole population - validity of the inference depends on the quality of the sample

terminology for qualitative research

- grounded theory - ethnography - phenomenology - field research

interpretation of PND scores

- if a study includes several experiments, PND scores are aggregated by taking the median (rather than the mean) since:
  1. scores are usually not distributed normally
  2. the median is less affected by "outliers"
- overall premise: the higher the PND statistic %, the more effective the treatment
- specific criteria for interpreting PND scores outlined by Scruggs, Mastropieri, Cook, and Escobar

limitations of PND

- ignores all baseline data except one data point (which could be unreliable)-- ceiling effects - lacks sensitivity or discrimination ability as it nears 100% for very successful interventions - cannot detect slope changes - requires its own interpretation guidelines

objectivity

- impossible in qualitative inquiry - it is replaced by subjective interpretation and mass detail for later analysis

A-B designs

- initial baseline phase followed by intervention phase - data are analyzed by visually comparing data btwn the two phases - pre-experimental: not an experimental single subject design; it's a step up from a case study

decisions about placement

- is the answer influenced by prior question - does question come too early or too late to arouse interest - does question receive sufficient attention

decisions about question content

- is the question necessary/useful - are several questions needed - do respondents have the needed info - does the question need to be more specific - is question sufficiently general - is question biased or loaded - will respondents answer truthfully

type I error

- likelihood of incorrectly rejecting the null hypothesis - alpha error - you claim that an effect is present when in fact it is not - probability is under control of the investigator through selection of level of significance

single subject research design- the WHY, WHEN, WHERE

- low-incidence populations - heterogeneous populations - instances where attempting to evaluate an intervention - instances where repeated measurement is appropriate

strengths and weaknesses of qualitative research

- objectivity - reliability - validity - generalization

causal hypothesis types

- one-tail (one outcome) - two-tail (two outcomes)

sensitive questions

- only after trust is developed - should make sense in that section of the survey - precede with warm-up questions

interpretation of IRD scores

- overall premise: the higher the IRD score, the more effective the treatment
- advantages:
  1. provides separate improvement rates for baseline and intervention phases
  2. has better sensitivity than PND
  3. offers better option to have confidence intervals
  4. has been proven in medical research ("risk difference")
- score range --- interpretation:
  <0.50 --- very small/questionable
  0.51-0.70 --- moderate effectiveness
  0.71-0.75 --- large effectiveness
  >0.75 --- very large effectiveness

methods in qualitative research

- participant observation - direct observation - unstructured or intensive interviewing - case studies

selecting the survey method

- population issues - sampling issues - question issues - content issues - bias issues - administrative issues

tests for differences among multiple groups/conditions: post-hoc comparisons

- post-hoc comparisons analyze pairs of means for significance - when ANOVA reveals a significant effect, one needs to determine which specific pairs of means are significantly different (one or several) - Bonferroni, Scheffé (seen as overly conservative), Tukey, Duncan, Newman-Keuls

validity

- qualitative researchers use greater detail to argue for the presence of construct validity - weak on external validity - content validity can be retained if the researcher implements some sort of criterion settings

field research

- refers to a group of methodologies used by researchers in making qualitative inquiries - the field researcher goes directly to the social phenomenon under study, and observes it as completely as possible - the natural environment is the priority of the field researcher (no implemented controls or experimental conditions to speak of) - such methodologies are especially useful in observing social phenomena over time

grounded theory

- refers to an inductive process of generating theory from data - this is considered ground-up or bottom-up processing - grounded theorists argue that theory generated from observations of the empirical world may be more valid and useful than theories generated from deductive inquiries - grounded theorists criticize deductive reasoning since it relies upon a-priori assumptions about the world

statistical tests

- related groups (e.g., pretest-posttest design): related-sample tests
- level of measurement for the DV and experimental design:
  1. nominal, ordinal, interval, ratio
  2. parametric statistics (inferring about the population based on the sample) -- t-test, ANOVA for testing the null hypothesis
     ~ interval or ratio level measurement
     ~ presumes that samples are randomly selected from the population: random sampling
- if one or more variables are categorical: use a non-parametric test (e.g., chi-square test of independence)

generalizability

- results, for the most part, do not extend much further than the original subject pool - sampling methods determine the extent of the study's generalizability - quota (sample until you achieve a specific number) and purposive sampling strategies are used to broaden the generalizability

the opening question

- should be easy to answer - should not be sensitive material - should get the respondent "rolling"

tests for differences among multiple groups/conditions: simple ANOVA

- simple ANOVA (one-way): 1 IV with several levels
- determine the probability that the means of groups of scores deviate from one another by sampling error alone
- within vs. between-group variability:
  1. within-group variability: portion of total variance that cannot be explained by the research design; variability = mean square for error (MSerror)
  2. between-group variability: portion of total variance attributable to group membership; between-group variability = mean square for effect (MSeffect)
- separate how much total variance is attributable to sampling error (within) and treatment effect (between)
- non-parametric statistics: Kruskal-Wallis test (H):
  1. when samples are very small (n < 10)
  2. compare 3 or more independent groups of data
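
A minimal sketch of the Kruskal-Wallis test using scipy (hypothetical small samples from three independent groups):

```python
# Minimal sketch: rank-based comparison of three small independent groups.
from scipy import stats

group1 = [3, 5, 4, 6]
group2 = [7, 8, 6, 9]
group3 = [4, 5, 5, 6]

h, p = stats.kruskal(group1, group2, group3)
print(f"H = {h:.2f}, p = {p:.3f}")
```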

reliability

- since procedure is de-emphasized in qualitative research, replication and other tests of reliability become more difficult - however, measures may be taken to make research more reliable within the particular study (observer training, more objective checklists, etc.)

why do we need to be concerned with effect sizes for SCED/SSED studies?

- single-subject experimental designs (SSED) are traditionally evaluated by visual analysis - there are documented procedures/standards for doing so - however, EBP emphasizes the importance of additional objective outcome measures, especially "magnitude of effect" indices or "effect sizes" (ES) - ES are needed for systematic reviews and meta-analyses to summarize outcomes

a checklist of considerations

- start with easy, nonthreatening questions
- put more difficult, threatening questions near end
- never start mail survey with an open-ended question
- put demographics at end (unless needed to screen)
- avoid demographics at beginning
- for historical demographics, follow chronological order
- ask about one topic at a time
- when switching topics, use a transition
- reduce response set
- for filter or contingency questions, make a flowchart

causal hypothesis

- statement of relationship between an IV and a DV - describes a cause and an effect - usually stated in two forms: 1. alternative hypothesis: hypothesis that you support (predict) 2. null hypothesis: what describes the remaining possible outcomes

differences between two groups/conditions

- statistical analysis accounts for amount of difference between two groups (e.g., experimental-control) - take into consideration means and SD (difference btwn means may be large, but large SD could minimize the effect) - alternative to between-subject design could be pretest-posttest design (related-measure design) --- baseline measurement- experimental treatment- posttest measurement

differences between two groups/conditions: related-samples design

- t-test for related samples:
  1. observed differences between two conditions within the same subjects
  2. same-subjects design with two conditions
  3. pre-posttest designs
- assumptions:
  1. participants are randomly sampled from the population
  2. two sets of scores are related
  3. normal sampling distribution
- non-parametric test: Wilcoxon signed-rank test: appropriate when sample distributions are highly skewed
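
A minimal sketch of a related-samples comparison using scipy (hypothetical pretest/posttest scores from the same participants), with the Wilcoxon signed-rank test as the non-parametric alternative:

```python
# Minimal sketch: paired comparison of hypothetical pre/post scores.
from scipy import stats

pretest = [12, 15, 11, 14, 13, 16, 12, 15]
posttest = [15, 18, 13, 17, 15, 19, 14, 18]

print(stats.ttest_rel(posttest, pretest))   # parametric related-samples t-test
print(stats.wilcoxon(posttest, pretest))    # non-parametric alternative
```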

case studies

- the case study is important in qualitative research, especially in areas where rare cases or exceptions are being studied - example: a patient may have a rare form of cancer that has a set of symptoms and potential treatments that have never before been researched

multiple-baseline designs

- the intervention is introduced in one setting/behavior/subject while baselines are extended and monitored in additional settings/behaviors/subjects followed by the sequential introduction of the intervention in the remaining settings/behaviors/subjects

foundational differences between qual and quant methods

- the major diff= researcher's underlying strategies - quantitative research is viewed as confirmatory and deductive (reasoning from premise to logical conclusion- top down) - qualitative research is considered to be exploratory and inductive (reasoning from individual cases to a general conclusion- bottom up)

null hypothesis: Type I and Type II errors

- the possibility that changes in one variable (x) cause changes in another variable (y) may be considered a hypothesis
- hypothesis testing involves formulating an explicit claim about the relationship between two or more variables:
  1. choose a design for making relevant observations
  2. gather relevant data
  3. analyze the data: support/fail to support the hypothesis
  4. goal of the researcher: to reach a valid conclusion from the data and to avoid unjustified conclusions

participant observation

- the researcher literally becomes part of the observation, becomes a participant - ex: one studying the homeless may decide to walk the streets of a given area in an attempt to gain perspective, and possibly subjects, for future study

direct observation

- the researcher observes the actual behaviors of the subjects, instead of relying on what the subjects say about themselves or others say about them - example: 1. courtroom drawings of witnesses 2. piaget's observation of childhood behavior and cognitive development

unstructured or intensive interviewing

- this method allows the researcher to ask open-ended questions during an interview - details are more important here than a specific interview procedure - inductive framework through which theory can be generated

filter or contingency questions

- try to avoid having more than three levels (two jumps) for any question - if only two levels, use graphic to jump (e.g., arrow, box) - if possible, jump to a new page

tests for analyzing categorical data: count data (nominal level)

- uses the chi-square test
- purpose: determine whether observed counts differ significantly from frequencies expected by chance
- assumptions:
  1. individual observations must be independent of one another
  2. observations are 'count' data (not percentages or other forms)
  3. categories are exclusive of one another
  4. the expected value of any cell must not be too small (n < 5)
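
A minimal sketch of a chi-square test of independence using scipy (hypothetical 2x2 table of counts):

```python
# Minimal sketch: chi-square test of independence on a hypothetical 2x2 table.
from scipy import stats

observed = [[30, 10],   # e.g., treatment group: improved / not improved
            [18, 22]]   # e.g., control group:   improved / not improved

chi2, p, dof, expected = stats.chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```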

visual analysis/statistical analysis

- visual analysis documents basic effect at 3 different points in time - statistical analysis options are emerging to document effect size

confidence interval (CI)

- width of the CI indicates the precision of the results (depends on sample size) - if the CI includes zero, the null hypothesis is retained (no significant difference), e.g., 95% CI = [-3.22, +4.1]
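
A minimal sketch of a 95% confidence interval for a mean (hypothetical difference scores; numpy and scipy assumed):

```python
# Minimal sketch: t-based 95% CI around the mean of hypothetical difference scores.
import numpy as np
from scipy import stats

diffs = np.array([1.2, -0.5, 2.0, 0.8, 1.5, -0.2, 0.9, 1.1])
mean = diffs.mean()
se = stats.sem(diffs)                           # standard error of the mean
t_crit = stats.t.ppf(0.975, df=len(diffs) - 1)  # critical t for 95% CI

lower, upper = mean - t_crit * se, mean + t_crit * se
print(f"95% CI = [{lower:.2f}, {upper:.2f}]")   # if it includes 0, retain Ho
```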

systematic replication

- within a study to document experimental control - across studies to document external validity - across studies, researchers, contexts, participants to document EBP

key components of method section (multiple-baseline design?)

1. accurate description of the specific single subject design employed
2. detailed participant information
3. clearly defined dependent measure(s)
4. phase descriptors (baseline, intervention, generalization, maintenance)
5. procedural reliability (was it truly delivered in the way it was described?)
6. transcript reliability
7. data reliability
8. coding information
9. data analysis procedures
10. social validation (answers the "so what")

Major categories for research design

1. descriptive research
   - case studies
   - qualitative research
   - survey research
2. experimental research
   - group design
   - single-subject design

9 defining features of single case research

1. experimental control
2. individual as unit of analysis
3. independent variable is actively manipulated
4. repeated measures of dependent variable
5. baseline
6. design controls for threats to internal validity
7. visual analysis/statistical analysis
8. systematic replication
9. experimental flexibility

visual analysis of SCD- 6 variables

1. level 2. trend 3. variability 4. immediacy of effect 5. overlap 6. consistency of data patterns across similar phases

four components to a statistical conclusion

1. sample size 2. effect size 3. alpha level 4. power

type of questions

1. structured 2. unstructured

general standards for design evaluation

1. the IV must be systematically manipulated
2. each outcome variable must be measured systematically over time by more than one assessor
3. inter-assessor agreement must be collected for at least 20% of the data within each phase
4. there must be at least 3 attempts to demonstrate an intervention effect at 3 different points in time (differs by design type)
5. each phase must have a minimum of 3 data points (to meet standards with reservations) and 5 data points (to fully meet standards)

sample size

1. the number of units (e.g., people) accessible to study. 2. amount of information

alpha level

1. the odds the observed result is due to chance 2. willingness to risk

power

1. the odds you'll observe a treatment effect when it occurs 2. ability to see effect that is there

effect size

1. the salience of the program relative to noise 2. salience of program

Single subject research designs

- A-B
- A-B-A
- A-B-A-B
- multiple baseline designs
- alternative treatment designs (ATD)

consideration for specific designs

ABAB
- meets standards: minimum of 4 phases per case with at least 5 data points per phase
- meets standards with reservations: minimum of 4 phases with at least 3 data points per phase

Multiple baseline
- meets standards: minimum of 6 phases with at least 5 data points per phase
- meets standards with reservations: minimum of 6 phases with at least 3 data points per phase

Alternating treatment
- meets standards: 5 repetitions of the alternating sequence
- meets standards with reservations: 4 repetitions of the alternating sequence

Comparing qualitative and quantitative methods

before discussing the differences between qualitative and quantitative methodologies, one must understand the foundational similarities

single subject research designs- the WHAT

contrasting SSRD's with Case Studies - experimental research: 1. variables are manipulated 2. effects on other variables are measured 3. includes an element of experimental control 4. looking at within-subject variables

experimental flexibility

designs may be modified or changed within a study

individual as unit of analysis

individuals serve as their own controls - can treat a "group" as an individual with a focus on the group as a single unit

alternative treatment designs (ATD)

- involves application of 2 or more interventions to the same set of stimuli or behaviors, usually following an initial baseline
- sometimes the more effective/efficient intervention is applied alone in a 3rd phase
- order effects can be controlled through counterbalancing or randomization
- however, carryover effects are not controlled as readily because the same symbols and referents are taught in both conditions
- ex: one cannot rule out the possibility of learning a symbol in one condition affecting performance in the other condition

single subject research design- the HOW

key concepts:
1. prediction: references the idea that if there is no effect attributable to the IV, the DV's data path will remain unchanged
2. verification: confirmation that the DV is changing in a predictable fashion as the IV is systematically applied
3. replication: references the repeating of the observed predictions and verifications within the same study
- basic designs
- multiple baseline designs: evaluation of the effectiveness of one intervention via the use of multiple baselines and sequential introduction of the intervention

baseline

must be present to document the problem/issue and control for confounding variables

common characteristics of Case studies

- observational
- descriptive (or possibly correlational)
- one subject
- (MISSING 2 CHARACTERISTICS)

design controls for threats to internal validity

opportunity for replication of basic effect at 3 different points in time

differences between two groups/conditions: Independent-samples design

- parametric statistics: t-test for independent samples
  - null hypothesis
  - alternative hypothesis
  - .05 level of confidence = conventional cutoff point
  - confidence interval (CI)
- assumptions for use of t-tests:
  1. random sampling of subjects
  2. two groups are unrelated and independent
  3. normal distribution of scores (especially important when n < 30); if not, mathematical transformations of scores can be performed
  4. homogeneity of variance: two groups have equal or near-equal variances
- non-parametric statistics: Mann-Whitney U test: distribution-free test (not dependent upon normality and equal variances)
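
A minimal sketch of an independent-samples comparison using scipy (hypothetical scores from two unrelated groups), with the Mann-Whitney U test as the distribution-free alternative:

```python
# Minimal sketch: independent-samples t-test and its non-parametric counterpart.
from scipy import stats

group_a = [23, 25, 28, 22, 26, 27, 24, 29]
group_b = [20, 21, 24, 19, 22, 23, 20, 25]

print(stats.ttest_ind(group_a, group_b))                               # parametric
print(stats.mannwhitneyu(group_a, group_b, alternative="two-sided"))   # non-parametric
```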

types of surveys

questionnaires
- group administration
- mail administration
- household drop-off
interviews
- personal
- telephone

independent variable is actively manipulated

researchers determine when and how the IV conditions change

decisions about the response format

should the response format be.... - check the answer - dichotomous - multiple choice - scale - free answer - check answer with follow up

Visual analysis of SCD- 4 step

step 1: documentation of a predictable baseline pattern
step 2: assessment of within-phase patterns
step 3: comparison of data from each phase with the data in the adjacent (or similar) phase to determine if manipulation of the IV has an associated effect
step 4: integration of all information from all phases of the study to determine if there are at least 3 demonstrations of an effect at different points in time

what do we estimate?

- t-test
- one-way analysis of variance (ANOVA)
- a form of regression
all test the same thing and can be considered equivalent alternative analyses

experimental control

the design allows documentation of the causal (i.e., functional) relationship btwn IV and DV

some generalization considerations and precautions

there may be unique requirements for specific metrics applied:
- minimum # of data points or participants
- specific type of SSED (e.g., multiple baseline)
- randomization (e.g., assignment to treatment conditions, order of participants)
- assumptions about data distribution and nature (e.g., normal distribution, no autocorrelation)
specific metrics may have identifiable limitations:
- ability to detect changes in level and trend
- sensitivity to floor and ceiling effects
- direction of behavior change (behavior increases vs. decreases)

