Systematic Reviews & Meta-Analyses

I2 Statistic Interpretation

*I2 statistic* = describes the percentage of the variability in effect estimates that is due to clinical or methodological heterogeneity rather than sampling variability. A rough guide to interpretation is as follows: 0% to 40%: might not be important; 30% to 60%: may represent moderate heterogeneity; 50% to 90%: may represent substantial heterogeneity; 75% to 100%: considerable heterogeneity.
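
As a rough sketch of where this percentage comes from (the Q and df values below are made up for illustration), I2 can be computed from Cochran's Q heterogeneity statistic and its degrees of freedom, where df = number of studies minus 1:

```python
def i_squared(Q, df):
    """Higgins-Thompson I^2: percent of variability in effect estimates
    beyond what sampling variability alone would produce."""
    return max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

print(i_squared(Q=12.5, df=5))  # hypothetical Q from 6 studies -> 60.0
```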

Hierarchy of Scientific Evidence

An individual RCT is designed so that sources of bias and noise are eliminated or minimized in order to obtain the most reliable and valid estimates possible. However, every study has limitations, from its size to its generalizability, and those limitations can help explain the variation in the estimates of the risk ratio across all 6 studies displayed in the previous figure. That is why methods like systematic reviews and meta-analyses sit atop the hierarchy of evidence. Meta-analyses are designed to (1) systematically examine the strengths and weaknesses of the accumulated evidence, (2) explore heterogeneity between studies (including those of different designs), (3) identify potential sources of bias, and (4) provide an overall estimate of the benefit or harm of treatment.

Limitations of Meta-Analysis (search bias)

Even in the ideal case that all relevant studies were available (ie, no publication bias), a faulty search can miss some of them. In searching databases, much care should be taken to ensure that the set of key words used for searching is as complete as possible. This step is so critical that most recent meta-analyses include the list of key words used. The search engine (eg, PubMed, Google) is also critical, affecting the type and number of studies that are found.7 Small differences in search strategies can produce large differences in the set of studies found.8

Forest Plots

Forest plots are typically like the one shown here: they plot the individual estimated treatment effects and confidence intervals on a single figure. Here, the estimated treatment effect is the risk ratio *Click*. It is displayed for each study and is represented by the blue box. Notice you can easily see both the direction and size of the effects. The further the estimate is from 1 in either direction, the stronger the association between treatment type and the outcome. In addition to the point estimate of the RR, we are also given interval estimates. Since this effect size measure is the RR, an RR < 1 indicates a reduced risk of the outcome associated with treatment. A study whose confidence interval falls completely below 1 provides evidence of benefit of corticosteroids in preventing intraventricular haemorrhage (IVH).
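
As an illustration of where each study's point and interval estimates come from (the counts below are hypothetical, not taken from the figure), the RR and its 95% confidence interval are usually computed on the log scale:

```python
import math

def risk_ratio_ci(events_trt, n_trt, events_ctrl, n_ctrl, z=1.96):
    """Risk ratio for one study with a 95% CI computed on the log scale."""
    rr = (events_trt / n_trt) / (events_ctrl / n_ctrl)
    # large-sample standard error of log(RR)
    se = math.sqrt(1/events_trt - 1/n_trt + 1/events_ctrl - 1/n_ctrl)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# hypothetical counts: 30/300 IVH events with corticosteroids vs 55/310 with control
print(risk_ratio_ci(30, 300, 55, 310))  # RR < 1 with the whole CI below 1 -> benefit
```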

Funnel Plots

Funnel plots are tools often reported to demonstrate the risk of publication bias. The estimated treatment effect (here the odds ratio) of each study defines the x-axis, and some function of study size defines the y-axis; here, it is a measure of the sampling variability of the treatment effect. The idea is that the larger the study, the smaller its sampling variability, so if all studies are included we should see many small studies with estimates that scatter widely across the bottom of the figure. Larger studies have less sampling variability, so as we move up the y-axis we should see sampling variability decrease and a resulting funnel shape.
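
A minimal sketch of how such a plot can be drawn, assuming each study's odds ratio and the standard error of its log odds ratio are already available (the numbers below are invented for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

# hypothetical per-study odds ratios and standard errors of log(OR)
odds_ratio = np.array([0.55, 0.70, 0.45, 0.90, 0.60, 0.75, 0.65, 1.05])
se_log_or  = np.array([0.12, 0.20, 0.45, 0.40, 0.30, 0.15, 0.25, 0.50])

fig, ax = plt.subplots()
ax.scatter(odds_ratio, se_log_or)
ax.set_xscale("log")             # effect estimates are conventionally shown on a log scale
ax.invert_yaxis()                # most precise (largest) studies sit at the top
ax.axvline(1.0, linestyle="--")  # the null value
ax.set_xlabel("odds ratio")
ax.set_ylabel("standard error of log(OR)")
ax.set_title("Funnel plot")
plt.show()
```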

Summary

In summary, though they are not without limitations, systematic reviews and meta-analyses are tools that provide systematic examination of the accumulated evidence for or against a treatment. They allow us to explore heterogeneity between studies that might help explain conflicting results, and they provide us with an overall estimate of the effect of treatment.

Assessing Heterogeneity of Treatment Effects

Inevitably, studies brought together in a systematic review and meta-analysis will differ. Any kind of variability among studies may be termed heterogeneity, but it can be helpful to distinguish between different types of heterogeneity. Variability in the participants, interventions, and outcomes studied may be described as clinical diversity (sometimes called clinical heterogeneity), and variability in study design and risk of bias may be described as methodological diversity (sometimes called methodological heterogeneity). Variability in the intervention effects being evaluated in the different studies, as visualized in the forest plots, is known as statistical heterogeneity; it is a consequence of clinical or methodological diversity, or both, among the studies. Statistical heterogeneity is identified when the observed intervention effects are more different from each other than one would expect due to random error (chance) alone. We will follow convention and refer to statistical heterogeneity simply as heterogeneity.

Meta-analyses

Meta-analyses are statistical syntheses of results from two or more primary studies that addressed the same hypothesis in the same way.

Forest Plots (2)

It's also important to note the size of the marker for each estimate. The bigger the box, the more that individual estimate is weighted in the final overall RR estimate. For example, the Liggins 1972b study resulted in RR = 0.57. Because this was also one of the larger studies (n = 451 + 437), it contributes more weight (17.2%) toward the final estimate. Smaller studies, like the Taeusch 1979 study (n = 54 + 69), contribute much less toward the final estimate. The combined result, again, is essentially a weighted average of all study findings. It's represented by the diamond at the bottom of the figure; the width of the diamond corresponds to the 95% confidence interval on the overall estimate. Here, the synthesized evidence is in favor of a benefit associated with corticosteroids for reduced IVH (RR = 0.54; 95% CI 0.42 to 0.68).
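
A minimal sketch of this weighted-average idea, using fixed-effect inverse-variance pooling of log risk ratios (the per-study numbers are placeholders, not the actual Liggins or Taeusch data):

```python
import numpy as np

# hypothetical per-study log risk ratios and their standard errors
log_rr = np.array([-0.56, -0.42, -0.75, -0.20, -0.65, -0.35])
se     = np.array([ 0.18,  0.22,  0.40,  0.35,  0.28,  0.20])

w = 1 / se**2                   # inverse-variance weights: bigger study, bigger weight
weight_pct = 100 * w / w.sum()  # the percentages printed next to each study

pooled = np.sum(w * log_rr) / w.sum()     # weighted average on the log scale
pooled_se = np.sqrt(1 / w.sum())
ci = np.exp([pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se])

print("study weights (%):", np.round(weight_pct, 1))
print("pooled RR:", round(float(np.exp(pooled)), 2), "95% CI:", np.round(ci, 2))
```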

Heterogeneity

It's important to consider to what extent the results of studies are consistent. If confidence intervals for the results of individual studies have poor overlap, this generally indicates the presence of statistical heterogeneity. More formally, a statistical test for heterogeneity is available. This *chi-squared* test is included in the forest plots in most meta-analyses. It *assesses whether observed differences in results are compatible with sampling variability (chance) alone*; a small p-value provides evidence of variation in effect estimates beyond simple sampling variability. Care must be taken in the interpretation of the chi-squared test, since it has low power when studies have small sample sizes or are few in number. This means that while *a statistically significant result may indicate a problem with heterogeneity, a non-significant result must not be taken as evidence of no heterogeneity*. Since clinical and methodological diversity always occur in a meta-analysis, statistical heterogeneity is inevitable. There is a trend in the literature away from tests for heterogeneity, since many believe that it will always exist whether or not we happen to be able to detect it using a statistical test.

Other methods have been developed for quantifying inconsistency across studies that *move the focus away from testing whether heterogeneity is present to assessing its impact on the meta-analysis*. A useful statistic for quantifying inconsistency is the I2 statistic. It describes the percentage of the variability in effect estimates that is due to clinical or methodological heterogeneity rather than sampling variability.
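
Continuing the same illustrative sketch, Cochran's Q (the chi-squared heterogeneity test) and I2 can both be computed from the per-study estimates and their inverse-variance weights:

```python
import numpy as np
from scipy import stats

# same hypothetical per-study log risk ratios and standard errors as above
log_rr = np.array([-0.56, -0.42, -0.75, -0.20, -0.65, -0.35])
se     = np.array([ 0.18,  0.22,  0.40,  0.35,  0.28,  0.20])

w = 1 / se**2
pooled = np.sum(w * log_rr) / w.sum()

Q = np.sum(w * (log_rr - pooled)**2)   # Cochran's Q statistic
df = len(log_rr) - 1
p_value = stats.chi2.sf(Q, df)         # small p -> variation beyond chance alone
I2 = max(0.0, (Q - df) / Q) * 100      # % of variability not due to sampling error

print(f"Q = {Q:.2f} on {df} df, p = {p_value:.3f}, I2 = {I2:.0f}%")
```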

How can we determine the current state of evidence?

Meta-analyses are conducted in a manner similar to that of an RCT. However, rather than recruiting subjects to include in an RCT, we search for studies to include in the meta-analysis. *Click* . . . The Cochrane Collaboration provides guidelines for conducting systematic reviews, including how to identify, appraise, and synthesize research-based evidence and present it in an accessible format. *Click* The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) Statement provides specific guidance on the conduct and reporting of systematic reviews and meta-analyses. Here is an example of one reporting requirement for both: a flow chart depicting how studies were identified for inclusion in the meta-analysis and/or systematic review. It looks very much like the flow chart you often see reported with the results of a clinical trial.

Limitations of Meta-Analysis

Meta-analyses are not without limitations. Searches of databases such as PubMed can yield long lists of studies. However, these databases include only studies that have been published. Such searches are unlikely to yield a representative sample because studies that show a "positive" result (usually in favor of a new treatment or against a well-established one) are more likely to be published than those that do not. This selective publication of studies is called *publication bias*. To minimize the effect of publication bias on the results of a meta-analysis, a *serious effort should be made to identify unpublished studies*. Identifying unpublished studies is easier now, thanks to improved communication between researchers worldwide and to registries in which all the studies of a certain disease or treatment are reported regardless of the result. The National Institutes of Health maintains a registry of all the studies it supports, and the FDA keeps a registry and database in which drug companies must register all trials.

Meta-Analysis: the analysis of analyses

Meta-analysis, a term for the set of statistical models used to pool evidence across studies to obtain an overall estimate, provides an estimate of the direction and size of a treatment effect, a measure of how consistent the studies designed to investigate the treatment actually are, and a sense of the overall strength of the current state of evidence. Forest plots actually provide us with a little bit of all of these.

Combine the results

Next, the author of a systematic review will write a narrative synthesis of the findings, including details on the quality of each of the studies and their interpretation of the current state of evidence. In a meta-analysis, this is taken one step further by actually using a statistical method to pool the results of studies and obtain an overall estimate of the effect. Here, we are back to the figure displayed earlier. The pooled estimate of the RR is displayed at the bottom of the figure and is represented by a black diamond. The pooled RR is 0.59, and the pooled 95% confidence interval is (0.38, 0.91). The results of this analysis indicate a benefit of corticosteroids, specifically a 41% reduction in the risk of moderate to severe respiratory distress syndrome. In addition, the pooled CI is completely below 1.
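
As a sketch of one common pooling method, a DerSimonian-Laird random-effects model, which allows for heterogeneity between studies (the study values below are illustrative, not the actual data behind RR = 0.59):

```python
import numpy as np

def dersimonian_laird(log_rr, se):
    """Random-effects (DerSimonian-Laird) pooled RR and 95% CI on the log scale."""
    w = 1 / se**2
    fixed = np.sum(w * log_rr) / w.sum()
    Q = np.sum(w * (log_rr - fixed)**2)
    df = len(log_rr) - 1
    # between-study variance tau^2, truncated at zero
    tau2 = max(0.0, (Q - df) / (w.sum() - np.sum(w**2) / w.sum()))
    w_star = 1 / (se**2 + tau2)                 # random-effects weights
    pooled = np.sum(w_star * log_rr) / w_star.sum()
    pooled_se = np.sqrt(1 / w_star.sum())
    return (np.exp(pooled),
            np.exp(pooled - 1.96 * pooled_se),
            np.exp(pooled + 1.96 * pooled_se))

log_rr = np.array([-0.56, -0.42, -0.75, -0.20, -0.65, -0.35])
se     = np.array([ 0.18,  0.22,  0.40,  0.35,  0.28,  0.20])
print(dersimonian_laird(log_rr, se))  # pooled RR with its 95% CI
```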

Assess the studies

Once the pool of relevant studies has been identified, each should be appraised for quality. Quality is another term for risk of bias, in that studies with low risk provide higher quality evidence. In this figure, several specific sources of bias are displayed along the left. The authors of this meta-analysis have reviewed and classified each of the studies as having 'low', 'unclear', or 'high' risk for each of these sources, and have summarized the percentage of all studies with each level of risk here. 'Risk of bias' graph: review authors' judgements about each risk of bias item, presented as percentages across all included studies.

Search the literature

Once the protocol has been developed, a search should be conducted according to the directions outlined in the protocol. In general, a PICO approach will identify many of the relevant studies based on the patient population of interest, the intervention of interest, the control group, and the specific outcomes of interest. Putting these terms into Google Scholar or PubMed (among others) will help identify published studies.

Limitations of Individual RCTs

Recall from your material on study validity that individual studies are at risk of inaccurately estimating exposure-outcome relationships because of bias and sampling variability. Bias can result in a systematic distortion of the relationship, and sampling variability can bury the signal in noise. In addition, individual studies may not generalize across different populations and settings. In this figure, the results of 6 different studies are displayed. Each study was designed to detect whether corticosteroids given during the antenatal period are effective in accelerating fetal lung maturation in women at risk of preterm birth. The figure displays the risk ratio and 95% confidence interval for moderate to severe respiratory distress syndrome, comparing corticosteroids to control. Some studies demonstrated effectiveness of corticosteroids (the first three), while others did not (the last three). How do we know what the current state of evidence is when the evidence appears to be mixed?

Funnel Plots (2)

Since publication bias results when studies with non-significant findings are not published, and since smaller studies have a larger chance of non-significant findings, we may see asymmetry in the funnel when these studies are not included. Those studies are indicated here with open circles. Notice they are removed from the lower figure to demonstrate what a funnel plot may look like in the presence of this bias.
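
A rough simulation of that mechanism (entirely made-up data, assuming a true log odds ratio of 0 and a simple publication rule):

```python
import numpy as np

rng = np.random.default_rng(0)

# simulate 200 studies of varying size under a true effect of zero
se = rng.uniform(0.05, 0.60, size=200)     # small SE = large study
log_or = rng.normal(loc=0.0, scale=se)     # estimates scatter more in small studies

favors_treatment = log_or < -1.96 * se      # "positive" finding: significant benefit
published = favors_treatment | (se < 0.15)  # assume large studies publish either way

# with all studies included the estimates centre on 0; dropping small studies with
# non-significant findings pulls the published set toward apparent benefit,
# which is what produces the asymmetric funnel
print("mean log OR, all studies:      ", round(float(log_or.mean()), 3))
print("mean log OR, published studies:", round(float(log_or[published].mean()), 3))
```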

Define a question

Step 1 in a systematic review/meta-analysis includes defining the research question and writing a detailed protocol for conducting the study. In the protocol, one should include details like how studies will be identified, how they will be appraised, and what statistical methods will be used to synthesize results. As in clinical trials, best practices for reproducibility require a pre-registration of the review and protocol.

Systematic reviews

Systematic reviews are literature reviews focused on synthesizing all high-quality research evidence relevant to a research question.

Limitations of Meta-Analysis (selection bias)

The identification phase usually yields a long list of potential studies, many of which are not directly relevant to the topic of the meta-analysis. This list is then subject to additional criteria to select the studies to be included. This critical step is also designed to reduce differences among studies, eliminate replication of data or studies, and improve data quality, and thus enhance the validity of the results. To reduce the possibility of selection bias in this phase, it is crucial for the criteria to be clearly defined and for the studies to be scored by more than one researcher, with the final list chosen by consensus.9,10 Frequently used criteria in this phase include objectives, populations studied, study design (eg, experimental vs observational), sample size, treatment (eg, type and dosage), criteria for selection of controls, outcomes measured, and length of follow-up, among others. The objective in this phase is to select studies that are as similar as possible with respect to these criteria. Even with careful selection, differences among studies will remain, but when the dissimilarities are large it becomes hard to justify pooling the results to obtain a "unified" conclusion.

