quiz 2 308


Worksheet 3

N308-001 Fall 2017 Worksheet 3. 2 points each - total = 20 points.

This worksheet refers to specific articles. You will need to download them by following the links. I found both of them through Google Scholar; use that method if needed.

Article 1: http://onlinelibrary.wiley.com/doi/10.1111/jocn.13019/full
Doupnik, S. (2017). Parent coping support interventions during acute pediatric hospitalizations: A meta-analysis. Pediatrics, 140(3).

1. What classifies this article as a systematic review?
A systematic review summarizes the results of available, carefully designed healthcare studies and provides a high level of evidence on the effectiveness of healthcare interventions. Judgments may be made about the evidence to inform recommendations for healthcare. A systematic review answers a defined research question by collecting and summarizing all empirical evidence that fits pre-specified eligibility criteria. In the above article, 1,571 articles were screened by the researchers, and of these, 14 were selected for analysis through a systematic inclusion/exclusion ("keeper") process. This screening process qualifies the paper as a systematic review.

2. What types of quantitative studies were included in the literature search?
The researchers stated that they included quantitative research described as randomized controlled trials, prospective studies, cohort studies, and survey studies with nurse participants in any health care setting who experienced actual or potential aggression (verbal or physical) in the workplace.

3. How many articles were finally included in the meta-analysis, and where were they conducted?
Fourteen papers reporting quantitative data were included in the analysis. These fourteen studies were conducted in Israel, Portugal, the USA, Italy, Sweden, Great Britain, Australia, Canada, Germany, and Holland.

4. The narrative of the article and the forest plot provide a confidence interval for gender rates of verbal abuse within 6-12 months (Figure 2).

a. Based on the forest plot, which of the articles had the smallest confidence interval? Why is a smaller confidence interval desirable?
The smallest confidence interval was identified in Figure 2 as 1. Bernaldo-de-Quiros (2015). A smaller confidence interval is desirable because it generally means that more participants contributed data, so the estimate is more precise. A narrower interval is also less likely to cross the line of null effect.

b. Which had the largest confidence interval? What is the odds ratio for that article? What is indicated when there is such a large confidence interval?
The largest confidence interval was indicated in Figure 2 as 4. Sa (2008). The odds ratio for this study fell on the right side of the forest plot, favoring female nurses experiencing verbal assault. A large confidence interval (wide whiskers) indicates that the study enrolled a small number of participants.

c. What is the final, compiled result for gender? How is it indicated on the forest plot?
Five studies found that women were the target of verbal aggression in health care settings; two studies, originating from Hong Kong and Turkey, found more verbal aggression against men. The compiled (pooled) result is indicated on the forest plot by the diamond at the bottom, whose center marks the overall estimate and whose width spans its confidence interval.
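Question 4 turns on the link between sample size, the width of a confidence interval, and whether it crosses the line of null effect. Here is a minimal sketch in Python of how an odds ratio and its 95% CI come out of a 2x2 table; the counts are invented for illustration and are not taken from the article:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table:
               event   no event
    group 1      a        b
    group 2      c        d
    Uses the standard log-OR standard error sqrt(1/a + 1/b + 1/c + 1/d)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical small study: 15 of 40 women vs. 8 of 35 men report verbal abuse
print(odds_ratio_ci(15, 25, 8, 27))    # OR ~2.0, wide CI (~0.73 to ~5.6)
# Same proportions with ten times the participants: same OR, much narrower CI
print(odds_ratio_ci(150, 250, 80, 270))  # OR ~2.0, CI ~1.5 to ~2.8
```

With the small counts the interval crosses 1 (the line of null effect); with ten times the participants it no longer does, which is exactly why the answers above tie narrow whiskers to larger samples.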
5. On page 296 the author reports "A certain degree of statistical, clinical and design heterogeneity is evident in the analyzed studies" as they relate to physical abuse and gender.

Instructor follow-up (Mon 11/6/2017 8:47 AM): Yes - describe what heterogeneity means in #5.
Heterogeneity refers to differences between studies that are not due to chance. There is clinical and statistical heterogeneity: clinical heterogeneity is always present to some degree, while statistical heterogeneity is not always present and must be evaluated statistically. Heterogeneity affects the precision of the effect estimate after studies are combined. Once studies are combined, we want to be sure we are comparing studies that generally measure the same thing (apples vs. oranges - omit the oranges to get an apples-to-apples comparison). I interpret the authors to mean there was a large degree of heterogeneity, which makes it difficult to know whether the variation between studies is due to chance alone. This gives an imprecise estimate of effect; it can pull the diamond toward the line of no effect and can mean less statistical significance on the forest plot.

Article 2: https://academic.oup.com/ofid/article/4096864
Uden, L., Barber, E., Ford, N., & Cooke, G. S. (2017). Risk of tuberculosis infection and disease for health care workers: An updated meta-analysis.

6. How many articles were screened and how many met the inclusion criteria?
There were 2,152 publications screened; 21 publications met the authors' inclusion criteria.

7. The authors made a decision (p. 2) to look at "both incidence and prevalence of both LTBI and active TB disease". Why is this differentiation necessary?
The authors explain near this entry that active TB is symptomatic while LTBI is latent and asymptomatic. Pooling the two without differentiating them would create a problem of heterogeneity. As described in #5, we cannot compare apples and oranges and still expect a meaningful comparison between studies, a precise estimate of effect, and accurate statistical significance.

8. Under "Results" (p. 3), the authors explain how they tried to reduce selection bias. Explain what selection bias means in this context, and then list 2 types of decisions that were made in order to decrease bias.
Selection bias here means choosing studies in a way that builds a weakness into the review and reduces its validity. Two decisions the researchers made to decrease bias: (a) if studies implemented multiple testing methods, only the initial results were used, because later screening may have been biased by increased awareness of occupational TB risk; and (b) only papers with clear inclusion criteria for HCWs and control groups were used.

9. Figure 2, page 5, shows that heterogeneity (I²) was 84.8%. What does this mean?
I² describes the percentage of total variation across studies that is due to heterogeneity rather than chance. I² lies between 0% and 100% and expresses the degree of inconsistency across the studies in a meta-analysis. A value of 0% indicates no observed heterogeneity, and larger values show increasing heterogeneity; we want I² as close to 0% as possible. Once I² exceeds roughly 25%, heterogeneity starts to become meaningful, and an I² above 75% - here, 84.8% - shows that heterogeneity is HIGH: the spread in results is probably not due to chance but to differences between the studies.
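For question 9, I² is conventionally derived from Cochran's Q. A minimal sketch of that computation, using made-up effect sizes rather than the figures from Uden et al.:

```python
def i_squared(effects, variances):
    """Cochran's Q and I^2 under a fixed-effect inverse-variance model.
    effects:   per-study effect estimates (e.g., log odds ratios)
    variances: their squared standard errors."""
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Five hypothetical studies with noticeably different effects -> high I^2
effects = [0.10, 0.85, 0.40, 1.20, 0.05]     # made-up log odds ratios
variances = [0.02, 0.03, 0.015, 0.05, 0.04]  # made-up variances
q, i2 = i_squared(effects, variances)
print(f"Q = {q:.1f}, I^2 = {i2:.1f}%")
```

With these invented numbers I² comes out around 85%, the same ballpark as the 84.8% in Figure 2: the between-study spread dwarfs what within-study error alone would produce.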
10. In Figure 3, 2 studies' confidence intervals 'cross the line'. What does this mean?
Those two confidence intervals cross the line of no effect: they include the value at which the exposure makes no difference to the outcome (for a ratio measure, 1.0). In those individual studies the result is not statistically significant - health care workers were no more likely than the controls to develop TB.

11. Finally, as shown in Figure 3, is it more likely or less likely that health care workers (HCW) will contract active tuberculosis than non-health care workers? How can you tell?
More likely. Compared with the control groups - school workers, nonmedical students, administrative employees, and reference data for the general population - HCWs were found to have a higher incidence of TB. You can tell because the pooled estimate sits on the side of the forest plot indicating increased risk for HCWs, away from the line of no effect.

12. OMIT (instructor email, Mon 11/6/2017 8:47 AM)
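A small follow-on sketch for reading questions 10-11 off a forest plot: pool per-study log rate ratios by inverse variance and check whether each interval, and the pooled diamond, crosses 1. All numbers are invented for illustration, not taken from Figure 3:

```python
import math

def fixed_effect_pool(log_ratios, ses):
    """Inverse-variance fixed-effect pooling of log rate ratios.
    Returns the pooled ratio and its 95% CI on the ratio scale."""
    weights = [1 / se**2 for se in ses]
    pooled = sum(w * lr for w, lr in zip(weights, log_ratios)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    lo = math.exp(pooled - 1.96 * pooled_se)
    hi = math.exp(pooled + 1.96 * pooled_se)
    return math.exp(pooled), lo, hi

# Made-up studies: rate ratios (HCW vs. control) and standard errors of the logs
log_rr = [math.log(2.2), math.log(1.1), math.log(1.8)]
se = [0.25, 0.45, 0.30]
rr, lo, hi = fixed_effect_pool(log_rr, se)
print(f"pooled RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f}); "
      f"crosses line of no effect: {lo <= 1.0 <= hi}")
```

Here the middle study's own interval crosses 1 (RR 1.1 with a large standard error), but the pooled estimate (about 1.8, CI roughly 1.3-2.6) does not - the pattern question 11 describes, where individual studies can be inconclusive while the diamond still sits clear of the line.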


Presentation: "Now what - I have the articles and need to make a decision" (NURS 308-001 - SIUE School of Nursing)

Reason to appraise research - again
* The goal of appraising research articles is to capture how much each study can affect and improve practice.
* Not all worthy studies are randomized controlled trials - much useful information can be gained from qualitative and descriptive studies, but look to see whether what was learned in the Level 5 and Level 6 studies has been tested in a Level 2, 3, or 4 study.
* The final desired outcome is reliable information that can be plugged into the "best evidence - practitioner expertise - patient value" triad.

So - what to appraise
* Since the highest levels of evidence are most likely to quickly give you the best information, start there.
* Look for clinical practice guidelines and systematic reviews of the clinical problem at hand.
* Read meta-analyses carefully - did the authors include only one type of study? Is other evidence absent, and hence is there a bias in the meta-analysis?

A word on systematic reviews
* A systematic review is defined and described very well in what Cochrane says to the general public: http://consumers.cochrane.org/what-systematic-review
* When there is evidence from a variety of sources to consider, the authors might perform an "integrative review", which includes literature from experimental and non-experimental research (including qualitative and descriptive) and theoretical research.

Why use integrative research?
* Its best use is to inform nursing knowledge, inform research, guide practice, and form policy initiatives based on what is known about both SCIENCE and HUMAN BEHAVIOR.

What is so great about a "meta-anything"?
* A meta-analysis of quantitative studies looks for an overall answer by compiling the effect sizes of many similarly conducted research studies. It is important to note how much variability there is between the studies.
* The result is a typical response to a specific intervention that can show the magnitude of an effect across many studies. If an intervention is strong enough to be effective over and over again, then the nurse can know that the same effect should result with the current patient in the room.

What do the data really mean?
* The outline for the course content shows that you need to learn what the evidence means.
* When it comes to EB literature, there are some key items to consider. You rarely need to calculate these items, but it is important to know what they are and be able to recognize them (a worked RR vs. OR example follows below):
* RR = risk ratio
* OR = odds ratio
* CI = confidence interval
* Forest plots, effect sizes, etc.
* Learn more about these concepts in Video 1 by Rahul Patwari, or in the YouTube video by Terry Shaneyfelt that focuses only on forest plots.

What is a qualitative meta-synthesis?
* Yup - qualitative studies can also be used to synthesize information.
* In this case, a meta-synthesis focuses on recurrent themes as opposed to an aggregate effect. This helps develop new knowledge based on the analysis of existing qualitative research studies about the same phenomenon.
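To make the RR/OR bullets above concrete, here is a small sketch contrasting the risk ratio and the odds ratio computed from the same hypothetical 2x2 table (the counts are invented, not from any assigned article):

```python
def risk_and_odds_ratio(a, b, c, d):
    """Contrast RR and OR from a 2x2 table:
               outcome   no outcome
    exposed       a          b
    control       c          d
    RR compares risks a/(a+b) vs. c/(c+d); OR compares odds a/b vs. c/d."""
    rr = (a / (a + b)) / (c / (c + d))
    or_ = (a / b) / (c / d)
    return rr, or_

# Hypothetical: 30/100 exposed patients have the outcome vs. 10/100 controls
rr, or_ = risk_and_odds_ratio(30, 70, 10, 90)
print(f"RR = {rr:.2f}, OR = {or_:.2f}")  # RR = 3.00, OR = 3.86
```

When the outcome is common, the OR runs higher than the RR (3.86 vs. 3.00 here); for rare outcomes the two converge. That is why you should always check which measure a forest plot reports before interpreting it.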
CPGs briefly
* Search for a CPG using the guideline.gov site.
* When you look at guidelines, don't assume they are all equal. Generally, ones on the guideline.gov site are rigorously reviewed.
* For others, use the "AGREE" instrument on the next slide.

Domains of quality in the AGREE instrument - how to think about a CPG
* Domain 1: Scope and purpose of the guideline - what is the purpose, aim, clinical question, and target population?
* Domain 2: Stakeholder involvement - will this meet the needs of the intended users?
* Domain 3: Rigor of development - evaluates the process of gathering and synthesizing the evidence and how it was examined.
* Domain 4: Clarity and presentation - evaluates the format and clarity of the guideline.
* Domain 5: Applicability - behavioral, cost, and organizational consequences of applying the guideline.
* Domain 6: Editorial independence - are conflicts of interest listed, and are the recommendations independently presented?

Evaluation tables
* In "Evidence-Based Practice, Step by Step: Critical Appraisal: Part I" of the assigned readings you were introduced to ONE way of organizing literature, called an evaluation table. See the 5th article of the series, page 50.
* If you were really looking at a great deal of articles, you would find an evaluation table vital to your work.

For EBP assignment 3
* You should have at least 3 articles that you can analyze using the methodology matrix - also called an evaluation table in the articles.
* The purpose of this exercise is to ensure that you have a chance to review prior research concepts AND to give you the tools for using this type of table in future work in graduate school (hint hint).
* You will also complete the CPG and systematic review evaluations for Assignment 3.

After the discovery of new evidence
* After the team of researchers, nurses, and other health care providers discovers the best answer to a clinical question, what happens next? A number of ways can be used to test or eventually 'hard wire' the new approach into practice.
* Several models are known - the one in this course is the "Johns Hopkins Nursing Evidence-Based Practice Model."

There are other models
* The Iowa Model is popular among many nursing systems. It is a simple way to move new processes into a hospital system; involving teams from a variety of departments lends strength to the process.
* The Johns Hopkins model is also a simple one to integrate. Identifying and involving the key stakeholders in the process aids in successful change.

Quality improvement
* Based on what you know from your past degree and studies, you are familiar with the research process.
* Through this brief course, you've seen how evidence-based practice can be integrated throughout a health care system through the use of a plan, such as the Iowa Model.
* The next step is to determine whether the use of EBP actually improves patient care and leads to quality patient outcomes.

A few bare-bones thoughts
* Now that you know how to judge evidence and experience research, you recognize that the reason for any nursing research or health care improvement is to eventually improve patient outcomes!

Patient outcomes
* What is a good patient outcome? If you don't know, then think about this: what would you desire if you were a patient in an emergency room?
* You'd want prompt care, you would seek the best treatment, and you would like to leave knowing that you will be better soon - armed with the knowledge of how to care for yourself and reach your best level of health. That is, you want to go home, and stay well!

What do people want?
* People - very large to very small - rely on health care to provide the BEST preventive, early-treatment, and restorative care and patient education, to keep them from becoming ill and to help them recover if they do.
* So the patient interacts with the health care system, eventually leaving it to go home and care for him- or herself. When the patient leaves, how do we know that our efforts led to good patient outcomes?

Complete Worksheet 3
* Complete Worksheet 3, which includes concepts on forest plots, meta-analyses, and effect sizes, which you learned about in the YouTube videos.

Evidence-Based Practice, Step by Step: Critical Appraisal of the Evidence Part III
Fineout-Overholt, Ellen PhD, RN, FNAP, FAAN; Melnyk, Bernadette Mazurek PhD, RN, CPNP/PMHNP, FNAP, FAAN; Stillwell, Susan B. DNP, RN, CNE; Williamson, Kathleen M. PhD, RN
AJN, The American Journal of Nursing: November 2010 - Volume 110 - Issue 11 - p 43-51. doi: 10.1097/01.NAJ.0000390523.99066.b5

The process of synthesis: seeing similarities and differences across the body of evidence.

This is the seventh article in a series from the Arizona State University College of Nursing and Health Innovation's Center for the Advancement of Evidence-Based Practice. Evidence-based practice (EBP) is a problem-solving approach to the delivery of health care that integrates the best evidence from studies and patient care data with clinician expertise and patient preferences and values. When delivered in a context of caring and in a supportive organizational culture, the highest quality of care and best patient outcomes can be achieved. The purpose of this series is to give nurses the knowledge and skills they need to implement EBP consistently, one step at a time. Articles will appear every two months to allow you time to incorporate information as you work toward implementing EBP at your institution. Also, we've scheduled "Chat with the Authors" calls every few months to provide a direct line to the experts to help you resolve questions.
In September's evidence-based practice (EBP) article, Rebecca R., our hypothetical staff nurse, Carlos A., her hospital's expert EBP mentor, and Chen M., Rebecca's nurse colleague, rapidly critically appraised the 15 articles they found to answer their clinical question—"In hospitalized adults (P), how does a rapid response team (I) compared with no rapid response team (C) affect the number of cardiac arrests (O) and unplanned admissions to the ICU (O) during a three-month period (T)?"—and determined that they were all "keepers." The team now begins the process of evaluation and synthesis of the articles to see what the evidence says about initiating a rapid response team (RRT) in their hospital. Carlos reminds them that evaluation and synthesis are synergistic processes and don't necessarily happen one after the other. Nevertheless, to help them learn, he will guide them through the EBP process one step at a time.

STARTING THE EVALUATION

Rebecca, Carlos, and Chen begin to work with the evaluation table they created earlier in this process when they found and filled in the essential elements of the 15 studies and projects (see "Critical Appraisal of the Evidence: Part I," July). Now each takes a stack of the "keeper" studies and systematically begins adding to the table any remaining data that best reflect the study elements pertaining to the group's clinical question (see Table 1; for the entire table with all 15 articles, go to http://links.lww.com/AJN/A17). They had agreed that a "Notes" section within the "Appraisal: Worth to Practice" column would be a good place to record the nuances of an article, their impressions of it, as well as any tips—such as what worked in calling an RRT—that could be used later when they write up their ideas for initiating an RRT at their hospital, if the evidence points in that direction. Chen remarks that although she thought their initial table contained a lot of information, this final version is more thorough by far. She appreciates the opportunity to go back and confirm her original understanding of the study essentials.

The team members discuss the evolving patterns as they complete the table. The three systematic reviews, which are higher-level evidence, seem to have an inherent bias in that they included only studies with control groups. In general, these studies weren't in favor of initiating an RRT. Carlos asks Rebecca and Chen whether, now that they've appraised all the evidence about RRTs, they're confident in their decision to include all the studies and projects (including the lower-level evidence) among the "keepers." The nurses reply with an emphatic affirmative! They tell Carlos that the projects and descriptive studies were what brought the issue to life for them. They realize that the higher-level evidence is somewhat in conflict with the lower-level evidence, but they're most interested in the conclusions that can be drawn from considering the entire body of evidence.

Rebecca and Chen admit they have issues with the systematic reviews, all of which include the MERIT study [1-4]. In particular, they discuss how the authors of the systematic reviews made sure to report the MERIT study's finding that the RRT had no effect, but didn't emphasize the MERIT study authors' discussion about how their study methods may have influenced the reliability of the findings (for more, see "Critical Appraisal of the Evidence: Part II," September). Carlos says that this is an excellent observation.
He also reminds the team that clinicians may read a systematic review for the conclusion and never consider the original studies. He encourages Rebecca and Chen in their efforts to appraise the MERIT study and comments on how well they're putting the pieces of the evidence puzzle together. The nurses are excited that they're able to use their new knowledge to shed light on the study. They discuss with Carlos how the interpretation of the MERIT study has perhaps contributed to a misunderstanding of the impact of RRTs.

Comparing the evidence. As the team enters the lower-level evidence into the evaluation table, they note that it's challenging to compare the project reports with studies that have clearly described methodology, measurement, analysis, and findings. Chen remarks that she wishes researchers and clinicians would write study and project reports similarly. Although each of the studies has a process or method determining how it was conducted, as well as how outcomes were measured, data were analyzed, and results interpreted, comparing the studies as they're currently written adds another layer of complexity to the evaluation. Carlos says that while it would be great to have studies and projects written in a similar format so they're easier to compare, that's unlikely to happen. But he tells the team not to lose all hope, as a format has been developed for reporting quality improvement initiatives called the SQUIRE Guidelines; however, they aren't ideal. The team looks up the guidelines online (www.squire-statement.org) and finds that the Institute for Healthcare Improvement (IHI) as well as a good number of journals have encouraged their use. When they review the actual guidelines, the team notices that they seem to be focused on research; for example, they require a research question and refer to the study of an intervention, whereas EBP projects have PICOT questions and apply evidence to practice. The team discusses that these guidelines can be confusing to the clinicians authoring the reports on their projects. In addition, they note that there's no mention of the synthesis of the body of evidence that should drive an evidence-based project. While the SQUIRE Guidelines are a step in the right direction for the future, Carlos, Rebecca, and Chen conclude that, for now, they'll need to learn to read these studies as they find them—looking carefully for the details that inform their clinical question.

Once the data have been entered into the table, Carlos suggests that they take each column, one by one, and note the similarities and differences across the studies and projects. After they've briefly looked over the columns, he asks the team which ones they think they should focus on to answer their question. Rebecca and Chen choose "Design/Method," "Sample/Setting," "Findings," and "Appraisal: Worth to Practice" (see Table 1) as the initial ones to consider. Carlos agrees that these are the columns in which they're most likely to find the most pertinent information for their synthesis.

SYNTHESIZING: MAKING DECISIONS BASED ON THE EVIDENCE

Design/Method. The team starts with the "Design/Method" column because Carlos reminds them that it's important to note each study's level of evidence. He suggests that they take this information and create a synthesis table (one in which data is extracted from the evaluation table to better see the similarities and differences between studies) (see Table 2 [1-15]).
The synthesis table makes it clear that there is less higher-level and more lower-level evidence, which will impact the reliability of the overall findings. As the team noted, the higher-level evidence is not without methodological issues, which will increase the challenge of coming to a conclusion about the impact of an RRT on the outcomes.

Sample/Setting. In reviewing the "Sample/Setting" column, the group notes that the number of hospital beds ranged from 218 to 662 across the studies. There were several types of hospitals represented (4 teaching, 4 community, 4 no mention, 2 acute care hospitals, and 1 public hospital). The evidence they've collected seems applicable, since their hospital is a community hospital.

Findings. To help the team better discuss the evidence, Carlos suggests that they refer to all projects or studies as "the body of evidence." They don't want to get confused by calling them all studies, as they aren't, but at the same time continually referring to "studies and projects" is cumbersome. He goes on to say that, as part of the synthesis process, it's important for the group to determine the overall impact of the intervention across the body of evidence. He helps them create a second synthesis table containing the findings of each study or project (see Table 3 [1-15]). As they look over the results, Rebecca and Chen note that RRTs reduce code rates, particularly outside the ICU, whereas unplanned ICU admissions (UICUA) don't seem to be as affected by them. However, 10 of the 15 studies and projects reviewed didn't evaluate this outcome, so it may not be fair to write it off just yet. The EBP team can tell from reading the evidence that researchers consider the impact of an RRT on hospital-wide mortality rates (HMR) as the more important outcome; however, the group remains unconvinced that this outcome is the best for evaluating the purpose of an RRT, which, according to the IHI, is early intervention in patients who are unstable or at risk for cardiac or respiratory arrest [16]. That said, of the 11 studies and projects that evaluated mortality, more than half found that an RRT reduced it.

Carlos reminds the group that four of those six articles are level-VI evidence and that some weren't research. The findings produced at this level of evidence are typically less reliable than those at higher levels of evidence; however, Carlos notes that two articles having level-VI evidence, a study and a project, had statistically significant (less likely to occur by chance, P < 0.05) reductions in HMR, which increases the reliability of the results. Chen asks, since four level-VI reports documented that an RRT reduces HMR, should they put more confidence in findings that occur more than once? Carlos replies that it's not the number of studies or projects that determines the reliability of their findings, but the uniformity and quality of their methods. He recites something he heard in his Expert EBP Mentor program that helped to clarify the concept of making decisions based on the evidence: the level of the evidence (the design) plus the quality of the evidence (the validity of the methods) equals the strength of the evidence, which is what leads clinicians to act in confidence and apply the evidence (or not) to their practice and expect similar findings (outcomes).
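The synthesis tables the article describes are, at bottom, small matrices: one row per source, a handful of comparison columns. As a study aid, here is a minimal sketch of that idea; the entries are invented for illustration, not the contents of the article's actual Tables 2-3:

```python
# Hypothetical synthesis table: level of evidence and key finding per source.
# (Invented entries; see the article's Tables 2-3 for the real data.)
synthesis = [
    {"source": "Systematic review A", "level": "I",  "reduced_code_rates": None},
    {"source": "Cluster RCT (MERIT)", "level": "II", "reduced_code_rates": False},
    {"source": "Pre-post study B",    "level": "VI", "reduced_code_rates": True},
    {"source": "QI project C",        "level": "VI", "reduced_code_rates": True},
]

# Group sources by level to see the balance of higher- vs. lower-level evidence
by_level = {}
for row in synthesis:
    by_level.setdefault(row["level"], []).append(row["source"])
print(by_level)

# Tally the outcome across only the sources that actually evaluated it
evaluated = [r for r in synthesis if r["reduced_code_rates"] is not None]
favorable = sum(r["reduced_code_rates"] for r in evaluated)
print(f"{favorable}/{len(evaluated)} sources that measured code rates found a reduction")
```

Filtering out the sources that never measured an outcome before tallying mirrors the team's caution above: 10 of 15 sources didn't evaluate UICUA, so counting them as negatives would misstate the evidence.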
In terms of making a decision about whether or not to initiate an RRT, Carlos says that their evidence stacks up: first, the MERIT study's results are questionable because of problems with the study methods, and this affects the reliability of the three systematic reviews as well as the MERIT study itself; second, the reasonably conducted lower-level studies/projects, with their statistically significant findings, are persuasive. Therefore, the team begins to consider the possibility that initiating an RRT may reduce code rates outside the ICU (CRO) and may impact non-ICU mortality; both are outcomes they would like to address. The evidence doesn't provide equally promising results for UICUA, but the team agrees to include it in the outcomes for their RRT project because it wasn't evaluated in most of the articles they appraised. As the EBP team continues to discuss probable outcomes, Rebecca points to one study's data in the "Findings" column that shows a financial return on investment for an RRT [9]. Carlos remarks to the group that this is only one study, and that they'll need to make sure to collect data on the costs of their RRT as well as the cost implications of the outcomes. They determine that the important outcomes to measure are: CRO, non-ICU mortality (excluding patients with do not resuscitate [DNR] orders), UICUA, and cost.

Appraisal: Worth to Practice. As the team discusses their synthesis and the decision they'll make based on the evidence, Rebecca raises a question that's been on her mind. She reminds them that in the "Appraisal: Worth to Practice" column, teaching was identified as an important factor in initiating an RRT and expresses concern that their hospital is not an academic medical center. Chen reminds her that even though theirs is not a designated teaching hospital with residents on staff 24 hours a day, it has a culture of teaching that should enhance the success of an RRT. She adds that she's already hearing a buzz of excitement about their project, that their colleagues across all disciplines have been eager to hear the results of their review of the evidence. In addition, Carlos says that many resources in their hospital will be available to help them get started with their project and reminds them of their hospital administrators' commitment to support the team.

ACTING ON THE EVIDENCE

As they consider the synthesis of the evidence, the team agrees that an RRT is a valuable intervention to initiate. They decide to take the criteria for activating an RRT from several successful studies/projects and put them into a synthesis table to better see their major similarities (see Table 4 [4,8,9,13,15]). From this combined list, they choose the criteria for initiating an RRT consult that they'll use in their project (see Table 5). The team also begins discussing the ideal makeup of their RRT. Again, they go back to the evaluation table and look over the "Major Variables Studied" column, noting that the composition of the RRT varied among the studies/projects. Some RRTs had active physician participation (n = 6), some had designated physician consultation on an as-needed basis (n = 2), and some were nurse-led teams (n = 4). Most RRTs also had a respiratory therapist (RT). All RRT members had expertise in intensive care and many were certified in advanced cardiac life support (ACLS). They agree that their team will be comprised of ACLS-certified members.
It will be led by an acute care nurse practitioner (ACNP) credentialed for advanced procedures, such as central line insertion. Members will include an ICU RN and an RT who can intubate. They also discuss having physicians willing to be called when needed. Although no studies or projects had a chaplain on their RRT, Chen says that it would make sense in their hospital. Carlos, who's been on staff the longest of the three, says that interdisciplinary collaboration has been a mainstay of their organization. A physician, ACNP, ICU RN, RT, and chaplain are logical choices for their RRT.

As the team ponders the evidence, they begin to discuss the next step, which is to develop ideas for writing their project implementation plan (also called a protocol). Included in this protocol will be an educational plan to let those involved in the project know information such as the evidence that led to the project, how to call an RRT, and outcome measures that will indicate whether or not the implementation of the evidence was successful. They'll also need an evaluation plan. From reviewing the studies and projects, they also realize that it's important to focus their plan on evidence implementation, including carefully evaluating both the process of implementation and project outcomes. Be sure to join the EBP team in the next installment of this series as they develop their implementation plan for initiating an RRT in their hospital, including the submission of their project proposal to the ethics review board.

REFERENCES

1. Chan PS, et al. Rapid response teams: a systematic review and meta-analysis. Arch Intern Med 2010;170(1):18-26.
2. McGaughey J, et al. Outreach and early warning systems (EWS) for the prevention of intensive care admission and death of critically ill adult patients on general hospital wards. Cochrane Database Syst Rev 2007;3:CD005529.
3. Winters BD, et al. Rapid response systems: a systematic review. Crit Care Med 2007;35(5):1238-43.
4. Hillman K, et al. Introduction of the medical emergency team (MET) system: a cluster-randomised controlled trial. Lancet 2005;365(9477):2091-7.
5. Sharek PJ, et al. Effect of a rapid response team on hospital-wide mortality and code rates outside the ICU in a children's hospital. JAMA 2007;298(19):2267-74.
6. Chan PS, et al. Hospital-wide code rates and mortality before and after implementation of a rapid response team. JAMA 2008;300(21):2506-13.
7. DeVita MA, et al. Use of medical emergency team responses to reduce hospital cardiopulmonary arrests. Qual Saf Health Care 2004;13(4):251-4.
8. Mailey J, et al. Reducing hospital standardized mortality rate with early interventions. J Trauma Nurs 2006;13(4):178-82.
9. Dacey MJ, et al. The effect of a rapid response team on major clinical outcome measures in a community hospital. Crit Care Med 2007;35(9):2076-82.
10. McFarlan SJ, Hensley S. Implementation and outcomes of a rapid response team. J Nurs Care Qual 2007;22(4):307-13.
11. Offner PJ, et al. Implementation of a rapid response team decreases cardiac arrest outside the intensive care unit. J Trauma 2007;62(5):1223-8.
12. Bertaut Y, et al. Implementing a rapid-response team using a nurse-to-nurse consult approach. J Vasc Nurs 2008;26(2):37-42.
13. Benson L, et al. Using an advanced practice nursing model for a rapid response team. Jt Comm J Qual Patient Saf 2008;34(12):743-7.
14. Hatler C, et al. Implementing a rapid response team to decrease emergencies. Medsurg Nurs 2009;18(2):84-90,126.
15. Bader MK, et al. Rescue me: saving the vulnerable non-ICU patient population. Jt Comm J Qual Patient Saf 2009;35(4):199-205.
16. Institute for Healthcare Improvement. Establish a rapid response team. n.d.

© 2010 Lippincott Williams & Wilkins, Inc.
This is the seventh article in a series from the Arizona State University College of Nursing and Health Innovation's Center for the Advancement of Evidence-Based Practice. Evidence-based practice (EBP) is a problem-solving approach to the delivery of health care that integrates the best evidence from studies and patient care data with clinician expertise and patient preferences and values. When delivered in a context of caring and in a supportive organizational culture, the highest quality of care and best patient outcomes can be achieved. The purpose of this series is to give nurses the knowledge and skills they need to implement EBP consistently, one step at a time. Articles will appear every two months to allow you time to incorporate information as you work toward implementing EBP at your institution. Also, we've scheduled "Chat with the Authors" calls every few months to provide a direct line to the experts to help you resolve questions. See details below. Box In September's evidence-based practice (EBP) article, Rebecca R., our hypothetical staff nurse, Carlos A., her hospital's expert EBP mentor, and Chen M., Rebecca's nurse colleague, rapidly critically appraised the 15 articles they found to answer their clinical question—"In hospitalized adults (P), how does a rapid response team (I) compared with no rapid response team (C) affect the number of cardiac arrests (O) and unplanned admissions to the ICU (O) during a three-month period (T)?"—and determined that they were all "keepers." The team now begins the process of evaluation and synthesis of the articles to see what the evidence says about initiating a rapid response team (RRT) in their hospital. Carlos reminds them that evaluation and synthesis are synergistic processes and don't necessarily happen one after the other. Nevertheless, to help them learn, he will guide them through the EBP process one step at a time. Table 1 Back to Top | Article Outline STARTING THE EVALUATION Rebecca, Carlos, and Chen begin to work with the evaluation table they created earlier in this process when they found and filled in the essential elements of the 15 studies and projects (see "Critical Appraisal of the Evidence: Part I," July). Now each takes a stack of the "keeper" studies and systematically begins adding to the table any remaining data that best reflect the study elements pertaining to the group's clinical question (see Table 1; for the entire table with all 15 articles, go to http://links.lww.com/AJN/A17). They had agreed that a "Notes" section within the "Appraisal: Worth to Practice" column would be a good place to record the nuances of an article, their impressions of it, as well as any tips—such as what worked in calling an RRT—that could be used later when they write up their ideas for initiating an RRT at their hospital, if the evidence points in that direction. Chen remarks that although she thought their initial table contained a lot of information, this final version is more thorough by far. She appreciates the opportunity to go back and confirm her original understanding of the study essentials. The team members discuss the evolving patterns as they complete the table. The three systematic reviews, which are higher-level evidence, seem to have an inherent bias in that they included only studies with control groups. In general, these studies weren't in favor of initiating an RRT. 
Carlos asks Rebecca and Chen whether, now that they've appraised all the evidence about RRTs, they're confident in their decision to include all the studies and projects (including the lower-level evidence) among the "keepers." The nurses reply with an emphatic affirmative! They tell Carlos that the projects and descriptive studies were what brought the issue to life for them. They realize that the higher-level evidence is somewhat in conflict with the lower-level evidence, but they're most interested in the conclusions that can be drawn from considering the entire body of evidence. Rebecca and Chen admit they have issues with the systematic reviews, all of which include the MERIT study.1-4 In particular, they discuss how the authors of the systematic reviews made sure to report the MERIT study's finding that the RRT had no effect, but didn't emphasize the MERIT study authors' discussion about how their study methods may have influenced the reliability of the findings (for more, see "Critical Appraisal of the Evidence: Part II," September). Carlos says that this is an excellent observation. He also reminds the team that clinicians may read a systematic review for the conclusion and never consider the original studies. He encourages Rebecca and Chen in their efforts to appraise the MERIT study and comments on how well they're putting the pieces of the evidence puzzle together. The nurses are excited that they're able to use their new knowledge to shed light on the study. They discuss with Carlos how the interpretation of the MERIT study has perhaps contributed to a misunderstanding of the impact of RRTs. Comparing the evidence. As the team enters the lower-level evidence into the evaluation table, they note that it's challenging to compare the project reports with studies that have clearly described methodology, measurement, analysis, and findings. Chen remarks that she wishes researchers and clinicians would write study and project reports similarly. Although each of the studies has a process or method determining how it was conducted, as well as how outcomes were measured, data were analyzed, and results interpreted, comparing the studies as they're currently written adds another layer of complexity to the evaluation. Carlos says that while it would be great to have studies and projects written in a similar format so they're easier to compare, that's unlikely to happen. But he tells the team not to lose all hope, as a format has been developed for reporting quality improvement initiatives called the SQUIRE Guidelines; however, they aren't ideal. The team looks up the guidelines online (www.squire-statement.org) and finds that the Institute for Healthcare Improvement (IHI) as well as a good number of journals have encouraged their use. When they review the actual guidelines, the team notices that they seem to be focused on research; for example, they require a research question and refer to the study of an intervention, whereas EBP projects have PICOT questions and apply evidence to practice. The team discusses that these guidelines can be confusing to the clinicians authoring the reports on their projects. In addition, they note that there's no mention of the synthesis of the body of evidence that should drive an evidence-based project. 
While the SQUIRE Guidelines are a step in the right direction for the future, Carlos, Rebecca, and Chen conclude that, for now, they'll need to learn to read these studies as they find them—looking carefully for the details that inform their clinical question. Once the data have been entered into the table, Carlos suggests that they take each column, one by one, and note the similarities and differences across the studies and projects. After they've briefly looked over the columns, he asks the team which ones they think they should focus on to answer their question. Rebecca and Chen choose "Design/Method," "Sample/Setting," "Findings," and "Appraisal: Worth to Practice" (see Table 1) as the initial ones to consider. Carlos agrees that these are the columns in which they're most likely to find the most pertinent information for their synthesis. Back to Top | Article Outline SYNTHESIZING: MAKING DECISIONS BASED ON THE EVIDENCE Design/Method. The team starts with the "Design/Method" column because Carlos reminds them that it's important to note each study's level of evidence. He suggests that they take this information and create a synthesis table (one in which data is extracted from the evaluation table to better see the similarities and differences between studies) (see Table 21-15). The synthesis table makes it clear that there is less higher-level and more lower-level evidence, which will impact the reliability of the overall findings. As the team noted, the higher-level evidence is not without methodological issues, which will increase the challenge of coming to a conclusion about the impact of an RRT on the outcomes. Sample/Setting. In reviewing the "Sample/Setting" column, the group notes that the number of hospital beds ranged from 218 to 662 across the studies. There were several types of hospitals represented (4 teaching, 4 community, 4 no mention, 2 acute care hospitals, and 1 public hospital). The evidence they've collected seems applicable, since their hospital is a community hospital. Findings. To help the team better discuss the evidence, Carlos suggests that they refer to all projects or studies as "the body of evidence." They don't want to get confused by calling them all studies, as they aren't, but at the same time continually referring to "studies and projects" is cumbersome. He goes on to say that, as part of the synthesis process, it's important for the group to determine the overall impact of the intervention across the body of evidence. He helps them create a second synthesis table containing the findings of each study or project (see Table 31-15). As they look over the results, Rebecca and Chen note that RRTs reduce code rates, particularly outside the ICU, whereas unplanned ICU admissions (UICUA) don't seem to be as affected by them. However, 10 of the 15 studies and projects reviewed didn't evaluate this outcome, so it may not be fair to write it off just yet. The EBP team can tell from reading the evidence that researchers consider the impact of an RRT on hospital-wide mortality rates (HMR) as the more important outcome; however, the group remains unconvinced that this outcome is the best for evaluating the purpose of an RRT, which, according to the IHI, is early intervention in patients who are unstable or at risk for cardiac or respiratory arrest.16 That said, of the 11 studies and projects that evaluated mortality, more than half found that an RRT reduced it. 
Carlos reminds the group that four of those six articles are level-VI evidence and that some weren't research. The findings produced at this level of evidence are typically less reliable than those at higher levels of evidence; however, Carlos notes that two articles having level-VI evidence, a study and a project, had statistically significant (less likely to occur by chance, P < 0.05) reductions in HMR, which increases the reliability of the results. Chen asks, since four level-VI reports documented that an RRT reduces HMR, should they put more confidence in findings that occur more than once? Carlos replies that it's not the number of studies or projects that determines the reliability of their findings, but the uniformity and quality of their methods. He recites something he heard in his Expert EBP Mentor program that helped to clarify the concept of making decisions based on the evidence: the level of the evidence (the design) plus the quality of the evidence (the validity of the methods) equals the strength of the evidence, which is what leads clinicians to act in confidence and apply the evidence (or not) to their practice and expect similar findings (outcomes). In terms of making a decision about whether or not to initiate an RRT, Carlos says that their evidence stacks up: first, the MERIT study's results are questionable because of problems with the study methods, and this affects the reliability of the three systematic reviews as well as the MERIT study itself; second, the reasonably conducted lower-level studies/projects, with their statistically significant findings, are persuasive. Therefore, the team begins to consider the possibility that initiating an RRT may reduce code rates outside the ICU (CRO) and may impact non-ICU mortality; both are outcomes they would like to address. The evidence doesn't provide equally promising results for UICUA, but the team agrees to include it in the outcomes for their RRT project because it wasn't evaluated in most of the articles they appraised. As the EBP team continues to discusses probable outcomes, Rebecca points to one study's data in the "Findings" column that shows a financial return on investment for an RRT.9 Carlos remarks to the group that this is only one study, and that they'll need to make sure to collect data on the costs of their RRT as well as the cost implications of the outcomes. They determine that the important outcomes to measure are: CRO, non-ICU mortality (excluding patients with do not resuscitate [DNR] orders), UICUA, and cost. Appraisal: Worth to Practice. As the team discusses their synthesis and the decision they'll make based on the evidence, Rebecca raises a question that's been on her mind. She reminds them that in the "Appraisal: Worth to Practice" column, teaching was identified as an important factor in initiating an RRT and expresses concern that their hospital is not an academic medical center. Chen reminds her that even though theirs is not a designated teaching hospital with residents on staff 24 hours a day, it has a culture of teaching that should enhance the success of an RRT. She adds that she's already hearing a buzz of excitement about their project, that their colleagues across all disciplines have been eager to hear the results of their review of the evidence. In addition, Carlos says that many resources in their hospital will be available to help them get started with their project and reminds them of their hospital administrators' commitment to support the team. 
Table 2 Table 3 Back to Top | Article Outline ACTING ON THE EVIDENCE As they consider the synthesis of the evidence, the team agrees that an RRT is a valuable intervention to initiate. They decide to take the criteria for activating an RRT from several successful studies/projects and put them into a synthesis table to better see their major similarities (see Table 44,8,9,13,15). From this combined list, they choose the criteria for initiating an RRT consult that they'll use in their project (see Table 5). The team also begins discussing the ideal make up for their RRT. Again, they go back to the evaluation table and look over the "Major Variables Studied" column, noting that the composition of the RRT varied among the studies/projects. Some RRTs had active physician participation (n = 6), some had designated physician consultation on an as-needed basis (n = 2), and some were nurse-led teams (n = 4). Most RRTs also had a respiratory therapist (RT). All RRT members had expertise in intensive care and many were certified in advanced cardiac life support (ACLS). They agree that their team will be comprised of ACLS-certified members. It will be led by an acute care nurse practitioner (ACNP) credentialed for advanced procedures, such as central line insertion. Members will include an ICU RN and an RT who can intubate. They also discuss having physicians willing to be called when needed. Although no studies or projects had a chaplain on their RRT, Chen says that it would make sense in their hospital. Carlos, who's been on staff the longest of the three, says that interdisciplinary collaboration has been a mainstay of their organization. A physician, ACNP, ICU RN, RT, and chaplain are logical choices for their RRT. As the team ponders the evidence, they begin to discuss the next step, which is to develop ideas for writing their project implementation plan (also called a protocol). Included in this protocol will be an educational plan to let those involved in the project know information such as the evidence that led to the project, how to call an RRT, and outcome measures that will indicate whether or not the implementation of the evidence was successful. They'll also need an evaluation plan. From reviewing the studies and projects, they also realize that it's important to focus their plan on evidence implementation, including carefully evaluating both the process of implementation and project outcomes. Be sure to join the EBP team in the next installment of this series as they develop their implementation plan for initiating an RRT in their hospital, including the submission of their project proposal to the ethics review board. Table 4 Table 5 Back to Top | Article Outline REFERENCES 1. Chan PS, et al. (2010). Rapid response teams: a systematic review and meta-analysis. Arch Intern Med 2010;170(1):18-26. • Cited Here... | • View Full Text | PubMed | CrossRef • • 2. McGaughey J, et al. Outreach and early warning systems (EWS) for the prevention of intensive care admission and death of critically ill adult patients on general hospital wards. Cochrane Database Syst Rev 2007;3:CD005529. • Cited Here... | • PubMed 3. Winters BD, et al. Rapid response systems: a systematic review. Crit Care Med 2007;35(5):1238-43. • Cited Here... | • View Full Text | PubMed | CrossRef • • 4. Hillman K, et al. Introduction of the medical emergency team (MET) system: a cluster-randomised controlled trial. Lancet2005;365(9477):2091-7. • Cited Here... | • PubMed | CrossRef • 5. Sharek PJ, et al. 
Effect of a rapid response team on hospital-wide mortality and code rates outside the ICU in a children's hospital. JAMA 2007;298(19):2267-74. • Cited Here... | • View Full Text | PubMed | CrossRef • • 6. Chan PS, et al. Hospital-wide code rates and mortality before and after implementation of a rapid response team. JAMA2008;300(21):2506-13. • Cited Here... | • View Full Text | PubMed | CrossRef • • 7. DeVita MA, et al. Use of medical emergency team responses to reduce hospital cardiopulmonary arrests. Qual Saf Health Care2004;13(4):251-4. • Cited Here... | • View Full Text | PubMed | CrossRef • • 8. Mailey J, et al. Reducing hospital standardized mortality rate with early interventions. J Trauma Nurs 2006;13(4):178-82. • Cited Here... | • View Full Text | PubMed | CrossRef • • 9. Dacey MJ, et al. The effect of a rapid response team on major clinical outcome measures in a community hospital. Crit Care Med 2007;35(9):2076-82. • Cited Here... | • View Full Text | PubMed | CrossRef • • 10. McFarlan SJ, Hensley S. Implementation and outcomes of a rapid response team. J Nurs Care Qual 2007;22(4):307-13. • Cited Here... | • View Full Text | PubMed | CrossRef • • 11. Offner PJ, et al. Implementation of a rapid response team decreases cardiac arrest outside the intensive care unit. J Trauma2007;62(5):1223-8. • Cited Here... | • View Full Text | PubMed | CrossRef • • 12. Bertaut Y, et al. Implementing a rapid-response team using a nurse-to-nurse consult approach. J Vasc Nurs 2008;26(2):37-42. • Cited Here... | • PubMed | CrossRef • 13. Benson L, et al. Using an advanced practice nursing model for a rapid response team. Jt Comm J Qual Patient Saf2008;34(12):743-7. • Cited Here... | • PubMed 14. Hatler C, et al. Implementing a rapid response team to decrease emergencies. Medsurg Nurs 2009;18(2):84-90,126. • Cited Here... | • View Full Text | PubMed • 15. Bader MK, et al. Rescue me: saving the vulnerable non-ICU patient population. Jt Comm J Qual Patient Saf 2009;35(4):199-205. • Cited Here... | • PubMed 16. Institute for Healthcare Improvement. Establish a rapid response team. n.d.. • Cited Here... Supplemental Digital Content • AJN_110_11_2010_10_12_AJN_0_SDC1.pdf; [PDF] (229 KB) Back to Top | Article Outline © 2010 Lippincott Williams & Wilkins, Inc. • • • • • Article Tools • Article as PDF (771 KB) • Article as EPUB • Print this Article • Email To Colleague • Add to My Favorites • Export to Citation Manager • Alert Me When Cited • Get Content & Permissions • View Images in Gallery • View Images in Slideshow • Export All Images to PowerPoint File Share this article on: • • • • • Article Level Metrics Advertisement Related Links • Articles in PubMed by Ellen Fineout-Overholt, PhD, RN, FNAP, FAAN • This article in PubMed • Articles in Google Scholar by Ellen Fineout-Overholt, PhD, RN, FNAP, FAAN • Other articles in this journal by Ellen Fineout-Overholt, PhD, RN, FNAP, FAAN Related Collections • Evidence-Based Practice, Step by Step Readers Of this Article Also Read • Evidence-Based Practice, Step by Step: Critical Appraisal of the Evidence: Part ... • Evidence-Based Practice Step by Step: Critical Appraisal of the Evidence: Part I • Evidence-Based Practice, Step by Step: Searching for the Evidence • Evidence-Based Practice, Step by Step: Asking the Clinical Question: A Key Step ... • Evidence-Based Practice: Step by Step: The Seven Steps of Evidence-Based... 
Evidence-Based Practice, Step by Step: Critical Appraisal of the Evidence, Part III

Fineout-Overholt, Ellen PhD, RN, FNAP, FAAN; Melnyk, Bernadette Mazurek PhD, RN, CPNP/PMHNP, FNAP, FAAN; Stillwell, Susan B. DNP, RN, CNE; Williamson, Kathleen M. PhD, RN

AJN, The American Journal of Nursing: November 2010, Volume 110, Issue 11, p 43-51. doi: 10.1097/01.NAJ.0000390523.99066.b5

The process of synthesis: seeing similarities and differences across the body of evidence.

This is the seventh article in a series from the Arizona State University College of Nursing and Health Innovation's Center for the Advancement of Evidence-Based Practice. Evidence-based practice (EBP) is a problem-solving approach to the delivery of health care that integrates the best evidence from studies and patient care data with clinician expertise and patient preferences and values. When EBP is delivered in a context of caring and in a supportive organizational culture, the highest quality of care and best patient outcomes can be achieved. The purpose of this series is to give nurses the knowledge and skills they need to implement EBP consistently, one step at a time. Articles will appear every two months to allow you time to incorporate information as you work toward implementing EBP at your institution. Also, we've scheduled "Chat with the Authors" calls every few months to provide a direct line to the experts to help you resolve questions.

In September's evidence-based practice (EBP) article, Rebecca R., our hypothetical staff nurse, Carlos A., her hospital's expert EBP mentor, and Chen M., Rebecca's nurse colleague, rapidly critically appraised the 15 articles they found to answer their clinical question—"In hospitalized adults (P), how does a rapid response team (I) compared with no rapid response team (C) affect the number of cardiac arrests (O) and unplanned admissions to the ICU (O) during a three-month period (T)?"—and determined that they were all "keepers." The team now begins the process of evaluation and synthesis of the articles to see what the evidence says about initiating a rapid response team (RRT) in their hospital. Carlos reminds them that evaluation and synthesis are synergistic processes and don't necessarily happen one after the other. Nevertheless, to help them learn, he will guide them through the EBP process one step at a time.

STARTING THE EVALUATION

Rebecca, Carlos, and Chen begin to work with the evaluation table they created earlier in this process when they found and filled in the essential elements of the 15 studies and projects (see "Critical Appraisal of the Evidence: Part I," July).
Now each takes a stack of the "keeper" studies and systematically begins adding to the table any remaining data that best reflect the study elements pertaining to the group's clinical question (see Table 1; for the entire table with all 15 articles, go to http://links.lww.com/AJN/A17). They had agreed that a "Notes" section within the "Appraisal: Worth to Practice" column would be a good place to record the nuances of an article, their impressions of it, as well as any tips—such as what worked in calling an RRT—that could be used later when they write up their ideas for initiating an RRT at their hospital, if the evidence points in that direction. Chen remarks that although she thought their initial table contained a lot of information, this final version is more thorough by far. She appreciates the opportunity to go back and confirm her original understanding of the study essentials.

The team members discuss the evolving patterns as they complete the table. The three systematic reviews, which are higher-level evidence, seem to have an inherent bias in that they included only studies with control groups. In general, these studies weren't in favor of initiating an RRT. Carlos asks Rebecca and Chen whether, now that they've appraised all the evidence about RRTs, they're confident in their decision to include all the studies and projects (including the lower-level evidence) among the "keepers." The nurses reply with an emphatic affirmative! They tell Carlos that the projects and descriptive studies were what brought the issue to life for them. They realize that the higher-level evidence is somewhat in conflict with the lower-level evidence, but they're most interested in the conclusions that can be drawn from considering the entire body of evidence.

Rebecca and Chen admit they have issues with the systematic reviews, all of which include the MERIT study [1-4]. In particular, they discuss how the authors of the systematic reviews made sure to report the MERIT study's finding that the RRT had no effect, but didn't emphasize the MERIT study authors' discussion about how their study methods may have influenced the reliability of the findings (for more, see "Critical Appraisal of the Evidence: Part II," September). Carlos says that this is an excellent observation. He also reminds the team that clinicians may read a systematic review for the conclusion and never consider the original studies. He encourages Rebecca and Chen in their efforts to appraise the MERIT study and comments on how well they're putting the pieces of the evidence puzzle together. The nurses are excited that they're able to use their new knowledge to shed light on the study. They discuss with Carlos how the interpretation of the MERIT study has perhaps contributed to a misunderstanding of the impact of RRTs.

Comparing the evidence. As the team enters the lower-level evidence into the evaluation table, they note that it's challenging to compare the project reports with studies that have clearly described methodology, measurement, analysis, and findings. Chen remarks that she wishes researchers and clinicians would write study and project reports similarly. Although each of the studies has a process or method determining how it was conducted, as well as how outcomes were measured, data were analyzed, and results interpreted, comparing the studies as they're currently written adds another layer of complexity to the evaluation.
Carlos says that while it would be great to have studies and projects written in a similar format so they're easier to compare, that's unlikely to happen. But he tells the team not to lose all hope, as a format has been developed for reporting quality improvement initiatives, called the SQUIRE Guidelines; however, they aren't ideal. The team looks up the guidelines online (www.squire-statement.org) and finds that the Institute for Healthcare Improvement (IHI) as well as a good number of journals have encouraged their use. When they review the actual guidelines, the team notices that they seem to be focused on research; for example, they require a research question and refer to the study of an intervention, whereas EBP projects have PICOT questions and apply evidence to practice. The team discusses how these guidelines can confuse the clinicians authoring reports on their projects. In addition, they note that there's no mention of the synthesis of the body of evidence that should drive an evidence-based project. While the SQUIRE Guidelines are a step in the right direction for the future, Carlos, Rebecca, and Chen conclude that, for now, they'll need to learn to read these studies as they find them, looking carefully for the details that inform their clinical question.

Once the data have been entered into the table, Carlos suggests that they take each column, one by one, and note the similarities and differences across the studies and projects. After they've briefly looked over the columns, he asks the team which ones they think they should focus on to answer their question. Rebecca and Chen choose "Design/Method," "Sample/Setting," "Findings," and "Appraisal: Worth to Practice" (see Table 1) as the initial ones to consider. Carlos agrees that these are the columns in which they're most likely to find the most pertinent information for their synthesis.

SYNTHESIZING: MAKING DECISIONS BASED ON THE EVIDENCE

Design/Method. The team starts with the "Design/Method" column because Carlos reminds them that it's important to note each study's level of evidence. He suggests that they take this information and create a synthesis table, one in which data are extracted from the evaluation table to better see the similarities and differences across studies (see Table 2 [1-15]). The synthesis table makes it clear that there is less higher-level than lower-level evidence, which will affect the reliability of the overall findings. As the team noted, the higher-level evidence is not without methodological issues, which will increase the challenge of coming to a conclusion about the impact of an RRT on the outcomes.

Sample/Setting. In reviewing the "Sample/Setting" column, the group notes that the number of hospital beds ranged from 218 to 662 across the studies. Several types of hospitals were represented (4 teaching, 4 community, 4 with no mention of type, 2 acute care, and 1 public hospital). The evidence they've collected seems applicable, since their hospital is a community hospital.

Findings. To help the team better discuss the evidence, Carlos suggests that they refer to all projects or studies as "the body of evidence." They don't want to get confused by calling them all studies, as they aren't, but at the same time continually referring to "studies and projects" is cumbersome. He goes on to say that, as part of the synthesis process, it's important for the group to determine the overall impact of the intervention across the body of evidence.
He helps them create a second synthesis table containing the findings of each study or project (see Table 3 [1-15]). As they look over the results, Rebecca and Chen note that RRTs reduce code rates, particularly outside the ICU, whereas unplanned ICU admissions (UICUA) don't seem to be as affected by them. However, 10 of the 15 studies and projects reviewed didn't evaluate this outcome, so it may not be fair to write it off just yet. The EBP team can tell from reading the evidence that researchers consider the impact of an RRT on hospital-wide mortality rates (HMR) to be the more important outcome; however, the group remains unconvinced that this outcome is the best for evaluating the purpose of an RRT, which, according to the IHI, is early intervention in patients who are unstable or at risk for cardiac or respiratory arrest [16]. That said, of the 11 studies and projects that evaluated mortality, more than half found that an RRT reduced it.

Carlos reminds the group that four of those six articles are level-VI evidence and that some weren't research. The findings produced at this level of evidence are typically less reliable than those at higher levels; however, Carlos notes that two of the level-VI articles, a study and a project, reported statistically significant (that is, unlikely to have occurred by chance; P < 0.05) reductions in HMR, which increases the reliability of the results. Chen asks, since four level-VI reports documented that an RRT reduces HMR, should they put more confidence in findings that occur more than once? Carlos replies that it's not the number of studies or projects that determines the reliability of their findings, but the uniformity and quality of their methods. He recites something he heard in his Expert EBP Mentor program that helped to clarify the concept of making decisions based on the evidence: the level of the evidence (the design) plus the quality of the evidence (the validity of the methods) equals the strength of the evidence, which is what leads clinicians to act in confidence and apply the evidence (or not) to their practice and expect similar findings (outcomes).

In terms of making a decision about whether to initiate an RRT, Carlos says that their evidence stacks up: first, the MERIT study's results are questionable because of problems with the study methods, and this affects the reliability of the three systematic reviews as well as of the MERIT study itself; second, the reasonably conducted lower-level studies/projects, with their statistically significant findings, are persuasive. Therefore, the team begins to consider the possibility that initiating an RRT may reduce code rates outside the ICU (CRO) and may affect non-ICU mortality; both are outcomes they would like to address. The evidence doesn't provide equally promising results for UICUA, but the team agrees to include it in the outcomes for their RRT project because it wasn't evaluated in most of the articles they appraised. As the EBP team continues to discuss probable outcomes ...
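As a concrete, and entirely hypothetical, illustration of how a findings synthesis table supports claims like "of the 11 reports that evaluated mortality, more than half found a reduction," a few lines of Python can tally outcomes across the body of evidence. The three records below are invented; only the outcome abbreviations (CRO, HMR, UICUA) follow the article.

```python
# Minimal sketch of the findings-synthesis step: for each outcome, tally how
# many reports measured it and what they found. The records are made-up
# stand-ins, not the article's Table 3.
from collections import Counter

reports = [
    # (evidence level, {outcome: "reduced" / "no effect" / None if unmeasured})
    ("I",  {"CRO": "reduced", "HMR": "no effect", "UICUA": None}),
    ("IV", {"CRO": "reduced", "HMR": "reduced",   "UICUA": "no effect"}),
    ("VI", {"CRO": "reduced", "HMR": "reduced",   "UICUA": None}),
]

for outcome in ("CRO", "HMR", "UICUA"):
    found = [r[outcome] for _, r in reports if r[outcome] is not None]
    print(f"{outcome}: measured in {len(found)}/{len(reports)} reports,",
          dict(Counter(found)))
```

Laying the findings out this way makes the gaps visible at a glance; an outcome that most reports never measured, like UICUA in this toy example, can't fairly be written off.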

