12. Meta-analyses/Systematic reviews
Overview of components
1. Background & purpose
   - Sets up the review (e.g., definitions, rationale, etc.)
   - States the objectives of the review (i.e., what question(s) will be addressed?)
2. Search strategy
   - Describes the systematic process used to find relevant studies (e.g., databases/other strategies, years, search terms, etc.)
3. Selection criteria
   - Describes the criteria for studies to be included in the review (e.g., type of participant/intervention, research design/level of evidence, etc.)
4. Data collection & analysis
   - Describes how (and what type of) data were extracted from articles and how they were analyzed
   - Qualitative vs. quantitative
   - Data are often organized/summarized in tables; figures may display quantitative data
5. Results
   - Describes what was found through the review
6. Authors' conclusions
   - Discusses what conclusions can be drawn from the review
CASM: Critical Appraisal of Systematic Reviews/Meta-Analyses
1. Was there a comprehensive and clearly described search for relevant studies?
   - Can the search be replicated? Were the search terms relevant?
2. Were clear and adequate criteria used to include/exclude studies from the analysis?
   - Why did the authors include/exclude certain parameters/conditions?
3. Were individual studies rated independently?
   - If there were disagreements, how were they resolved?
4. Were individual studies rated with blinding?
   - Blinding here means the raters were blinded to the study's authors, where/when it was published, etc.
5. Was inter-rater agreement adequate?
• The questions below apply only to the quantitative analysis of outcomes in a meta-analysis. If the study is a systematic review, enter NA.
6. Was an average effect size presented?
7. Were results weighted by sample size?
   - The review will explicitly state this; if not, assume the results were not weighted.
8. Was the confidence interval around the average effect size adequately precise?
9. Did the forest plot suggest reasonable homogeneity of findings across individual studies?
10. If not, was a heterogeneity or moderator analysis conducted?
   - Note: If the confidence interval around the mean effect size is very large, or if the forest plot shows extensive variability, the outcome of the meta-analysis will be of limited utility.
11. Were the results sufficiently relevant to my patient and practice?
   - "Regardless of how strong external evidence from a meta-analysis or systematic review may be, it must still be integrated with evidence concerning characteristics and preferences of a particular patient before a decision about changing current clinical practice is made"
Evaluating the quality of included studies
• "validity of systematic review or meta-analysis depends on validity of individual studies on which it is based" • What criteria should be used to evaluate studies? - meta-analytic results were influenced by "3 key domains that have been shown to be associated empirically with bias: concealment of treatment allocation, blinding of outcome assessments, and handling of dropouts and withdrawals." • Should studies with low quality ratings be excluded from review? Or is there another option?
Meta-analysis
• A type of systematic review that generates one or more quantitative summary statistics
   - Calculates an average effect size across studies (see the formula sketch below)
   - May be weighted for sample size
   - May calculate a 95% confidence interval
   - These calculations may not be possible if adequate information is not provided in the original studies
• Systematic review = qualitative synthesis
• Meta-analysis = quantitative and qualitative synthesis
   - Will usually include a forest plot
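As a rough sketch (not from the notes) of what these summary statistics look like, assuming the common inverse-variance weighting scheme, in which larger studies have smaller standard errors and therefore carry more weight:

$$
\bar{d} = \frac{\sum_{i=1}^{k} w_i\, d_i}{\sum_{i=1}^{k} w_i},
\qquad w_i = \frac{1}{SE_i^{2}},
\qquad 95\%\ \text{CI} = \bar{d} \pm 1.96 \cdot \frac{1}{\sqrt{\sum_{i=1}^{k} w_i}}
$$

where $d_i$ and $SE_i$ are the effect size and standard error from study $i$ of the $k$ included studies. If these values cannot be recovered from the original reports, the calculation may not be possible, as noted above.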
Systematic reviews
• Collect and review evidence from multiple sources in a systematic way
• Attempt to synthesize the findings related to a specific question
• Traditional literature review: "Author makes subjective decisions about which information to include and to highlight"
   - Significant potential for bias!
• Systematic review:
   - Standard procedure for the literature search
   - Operationalized criteria for including/excluding studies
   - Multiple reviewers independently evaluate sources; measure inter-rater agreement (a sketch of one common agreement statistic follows below)
• "EBSRs [Evidence-based systematic reviews] represent an emerging research methodology designed to reduce bias and promote transparency in synthesis of evidence for research and clinical purposes"
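One common way to quantify inter-rater agreement on include/exclude decisions is Cohen's kappa. A minimal sketch with hypothetical ratings (kappa is only one of several agreement statistics a review might report):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical decisions (e.g., include/exclude)."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed agreement: proportion of items where the two raters match
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: based on each rater's marginal proportions
    p_expected = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                     for c in categories)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical include/exclude decisions for 10 candidate studies
rater_1 = ["include", "exclude", "include", "include", "exclude",
           "include", "exclude", "include", "include", "exclude"]
rater_2 = ["include", "exclude", "include", "exclude", "exclude",
           "include", "exclude", "include", "include", "include"]
print(f"kappa = {cohens_kappa(rater_1, rater_2):.2f}")   # ~0.58 for these data
```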
Interpreting Results: Calculating a Cumulative effect size
• Effect sizes should be weighted by sample size so that small-n studies do not unduly bias the outcome (average effect size); see the sketch after this list
   - Remember: the bigger the sample size, the more confident we can be in the estimate of the true treatment effect (narrower CI = good!)
   - Small-n studies are more vulnerable to measurement error than large studies
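A minimal sketch of why weighting matters, using hypothetical effect sizes and sample sizes; the small-n outlier pulls the simple average up but has much less influence on the weighted mean:

```python
# Hypothetical per-study effect sizes (e.g., Cohen's d) and sample sizes
effect_sizes = [0.45, 0.50, 0.40, 1.60]   # last study is a small-n outlier
sample_sizes = [120, 95, 150, 12]

# Unweighted average: every study counts equally, so the small study pulls it up
unweighted = sum(effect_sizes) / len(effect_sizes)

# Sample-size-weighted average: larger studies (narrower CIs) count more
weighted = sum(d * n for d, n in zip(effect_sizes, sample_sizes)) / sum(sample_sizes)

print(f"unweighted mean effect size: {unweighted:.2f}")   # ~0.74
print(f"weighted mean effect size:   {weighted:.2f}")     # ~0.48
```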
Forest plots
• Forest plots tell us about the "strength, direction, consistency, and precision of findings from studies included in meta-analysis"
• Plot effect sizes with 95% confidence intervals for all included studies
• Plot the weighted mean effect size with its 95% confidence interval
• Does the 95% CI around the weighted mean effect size include zero?
   - Box size may be related to sample size, but what matters most are the mean, SEM, etc. (a plotting sketch follows below)
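A minimal matplotlib sketch of a forest plot, assuming each study's effect size and 95% CI have already been extracted (all study names and values are hypothetical):

```python
import matplotlib.pyplot as plt

# Hypothetical effect sizes and 95% CI bounds for four studies plus the pooled estimate
studies  = ["Study A", "Study B", "Study C", "Study D", "Weighted mean"]
effects  = [0.45, 0.50, 0.40, 1.60, 0.48]
ci_lower = [0.20, 0.15, 0.22, 0.40, 0.33]
ci_upper = [0.70, 0.85, 0.58, 2.80, 0.63]

fig, ax = plt.subplots()
rows = range(len(studies))[::-1]                   # first study at the top
for row, d, lo, hi in zip(rows, effects, ci_lower, ci_upper):
    ax.plot([lo, hi], [row, row], color="black")   # 95% CI as a horizontal line
    ax.plot(d, row, "s", color="black")            # effect size as a square marker
ax.axvline(0, linestyle="--", color="gray")        # does any CI include zero?
ax.set_yticks(list(rows))
ax.set_yticklabels(studies)
ax.set_xlabel("Effect size (95% CI)")
plt.tight_layout()
plt.show()
```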
Interpreting study evidence
• How are effect sizes calculated in Kent-Walsh et al. (2015)? Why?
   - Improvement rate difference (IRD)
   - Improvement rate of the treatment phase(s) minus the improvement rate of the baseline phase(s); see the sketch after this list
   - Values range from 0 to 1:
      --- < 0.50 indicates small or questionable effects
      --- 0.51-0.70 indicates moderate effects
      --- 0.71-0.75 indicates large effects
      --- > 0.75 indicates very large effects
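A minimal sketch of the IRD arithmetic under a deliberately simplified rule: a treatment-phase data point counts as "improved" only if it exceeds every baseline point, and no baseline points count as improved. The published IRD procedure resolves overlapping data points more carefully, so this is only an illustration; the data are hypothetical.

```python
def improvement_rate_difference(baseline, treatment):
    """Simplified IRD: treatment-phase improvement rate minus baseline improvement rate.

    Assumption (a simplification of the published procedure): a treatment point is
    "improved" if it exceeds every baseline point; baseline points are not improved.
    """
    ceiling = max(baseline)
    ir_treatment = sum(x > ceiling for x in treatment) / len(treatment)
    ir_baseline = 0.0
    return ir_treatment - ir_baseline

def interpret_ird(ird):
    """Map an IRD value onto the benchmarks listed above."""
    if ird > 0.75:
        return "very large"
    if ird >= 0.71:
        return "large"
    if ird >= 0.51:
        return "moderate"
    return "small or questionable"

# Hypothetical single-case data: percent correct per session
baseline  = [20, 25, 15, 20]
treatment = [22, 45, 60, 70, 75, 80]
ird = improvement_rate_difference(baseline, treatment)
print(f"IRD = {ird:.2f} ({interpret_ird(ird)})")   # IRD = 0.83 (very large)
```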
Should unpublished studies be included?
• Publication bias ("file-drawer effect"): a study reporting a significant result is more likely to be published than a study with a non-significant result
• As a result, the published literature may overestimate treatment efficacy because studies finding no effect were never published
• This also creates an incentive for researchers to "hack" their results until they get a p-value < .05!
• On the other hand, papers that don't get published tend to have significant methodological problems
Searching for and selecting studies for review
• Searching the literature
   - The first-pass search is as broad as possible
• Narrowing down articles
   - Define and apply criteria for inclusion/exclusion
Why use systematic reviews/meta-analyses?
• A systematic review/meta-analysis of high-quality research constitutes the top of the pyramid of evidence
   - Recall: levels of evidence
• "Systematic reviews free individual clinicians and researchers from finding, evaluating, summarizing, and synthesizing research articles spanning numerous years and journals"
• The "end-user" clinician may focus on using available systematic reviews or clinical practice guidelines
• ASHA has programs dedicated to supporting EBP, especially for end-users
   - N-CEP (National Center for EBP in CSD): conducts systematic reviews on assessment and treatment of communication disorders
   - NOMS (National Outcomes Measurement System): large-scale documentation of the efficacy of intervention for communication disorders
Overall...
• Systematic reviews & meta-analyses...
   - Can validate research
   - Attempt to "get to the bottom line"
   - Address discrepancies among different studies
   - May highlight a need for further research
Interpreting the results of included studies
• The most valuable information we can extract from the primary literature: "An accumulated pool of...effect sizes"
• For each study, was a positive treatment effect observed?
   - How large was it? How precise is the estimate?
   - How large/wide is the 95% confidence interval around the effect size? Does the confidence interval include zero? (see the sketch below)
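A minimal sketch of extracting these numbers from a single group-comparison study: Cohen's d with an approximate 95% confidence interval. The data are hypothetical, and the normal-approximation CI is a common simplification rather than the only way to compute it.

```python
import math

def cohens_d_with_ci(mean_tx, mean_ctrl, sd_pooled, n_tx, n_ctrl):
    """Cohen's d with an approximate 95% CI (large-sample normal approximation)."""
    d = (mean_tx - mean_ctrl) / sd_pooled
    # Common approximation for the standard error of d
    se = math.sqrt((n_tx + n_ctrl) / (n_tx * n_ctrl) + d ** 2 / (2 * (n_tx + n_ctrl)))
    return d, d - 1.96 * se, d + 1.96 * se

# Hypothetical study: treatment vs. control group means on an outcome measure
d, lo, hi = cohens_d_with_ci(mean_tx=78.0, mean_ctrl=70.0, sd_pooled=16.0,
                             n_tx=24, n_ctrl=22)
print(f"d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
print("CI includes zero" if lo <= 0 <= hi else "CI excludes zero")
```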