COMD 790 Lecture 1

relevance

•Do the patients in the study represent patients typically seen in the clinical setting?

4. Effect Sizes and Confidence Intervals

•Effect size: a number that represents the strength of the relationship between two variables (or between two conditions in a treatment efficacy study)
•Confidence Intervals: Reflect the precision of the estimated difference or effect - "narrower" is better!

What do we need to use EBP?

•Recognition we do not inherently know whether a specific plan (or clinical action) is best for a client
•Professional integrity: honesty, respect, awareness of own biases, and open-mindedness
•Adherence to ASHA Code of Ethics (I-M): Individuals who hold the Certificate of Clinical Competence shall use independent and evidence-based clinical judgment, keeping paramount the best interests of those being served.
•Other ethical principles
-Beneficence: maximizing benefit to client
-Non-maleficence: minimizing harm to client
-Autonomy: ability to make decisions independently
-Justice: sense of fairness

-Nagoski & Nagoski (2020, p. xvii)

"Science is the best idea humanity has ever had. It's a systematic way of exploring the nature of reality, of testing and proving or disproving ideas. But it's important to remember that science is ultimately a specialized way of being wrong. That is, every scientist tries to be (a) slightly less wrong than the scientists who came before them, by proving that something we thought was true actually isn't, and (b) wrong in a way that can be tested and proven, which results in the next scientist being slightly less wrong. Research is the ongoing process of learning new things that show us a little more of what's true, which inevitably reveals how wrong we used to be, and it is never finished'."

Steps to implementing EBP:

1. Ask a well-built question
2. Select evidence sources
3. Implement a search strategy
4. Appraise and synthesize the evidence
5. Apply the evidence
6. Evaluate the application of evidence
7. Disseminate the findings

1. State the Problem

1.Create a "research question" that is concise, specific, well-formed, and testable •Or multiple questions! •Or questions within questions! 2.Position the question in the context of established knowledge •What is the rationale for the study? •Why is this study important? •What do we know about this question already? •What new knowledge can be gained from this study? 3.Form a hypothesis based on background information •A statement of what you expect the "answer" to be, based on either an established theory or a new theory

Scientific Theory

A well-tested explanation for a wide range of observations or experimental results.

Slide 11

After you've completed your investigation, it's time to analyze the data and evaluate the results. As a scientist, I personally find this step to be the most challenging AND the most exciting! As I'm running statistical analyses, I can't help but feel a buzz of excitement. What will the data tell me? What can I learn from the data? Was my hypothesis right or wrong or, more likely, somewhere in between? In this step, it is especially important to ensure that your own biases and hopes and dreams aren't influencing the results. The best practice is therefore to come up with an objective data analysis plan and stick with it. Run the analyses indicated by your analysis plan, which is designed to answer your research question. And when it comes time to report those results, you report just the facts. In other words, just the data and results of statistical analyses. No interpretation, no opinions, no conjecture.

Slide 6

As I said, the book's version of the scientific method is not wrong, it's just simplified, and makes the process sound more linear than it usually is. In reality, the scientific method looks something like this. You start off with a question or problem. You do some background reading on the topic in an effort to find out what research has already been done and what that research suggests. You formulate a hypothesis, which is a tentative explanation made on the basis of limited evidence. It is the starting point for further investigation, and it is what you are trying to test via your research study. Next, you test your hypothesis with an experiment. This is where we define the methods of the study. We'll spend a lot of time this semester talking about this step! Next, you analyze your results and draw conclusions. Regardless of whether your results suggest that your hypothesis is true, partially true, or false, you should report the results. Not everyone does this, for a couple of reasons. First, some scientists dislike notifying the world that they were wrong about their hypotheses! Also, some journals prioritize publication of studies where the hypothesis was supported over studies where the hypothesis was refuted. This is called publication bias, and we'll talk about it later in the semester. You'll notice here in the diagram a really important step of the scientific method - that feedback loop I talked about earlier, where you learn from how wrong you were and go back to the drawing board. Good scientists learn from being wrong; they use the knowledge gleaned from the study to refine or rethink their hypothesis. Just as in every other aspect of life, the greatest learning comes from failure, not success.

1b slide 7

Because evidence-based practice is so important, this course is designed around this framework, particularly the tenet of external evidence. The primary objective of this course is to help you learn how to find, evaluate, and apply external evidence to your clinical decision making. So, you will be learning how to implement EBP throughout the course. You will also be applying EBP in your case study projects. You will use your experience/background education up to this point as your source of internal evidence. You will be incorporating information on patient values/perspectives from your case study. And you will be finding, analyzing, and applying external evidence from peer-reviewed research articles. Next week, I will be describing the case study projects in detail, so don't worry too much about the ins and outs of that project. What's most important at this point is that you know that you'll be implementing EBP, and getting feedback on that implementation, throughout the course of this semester.

1b Slide 6

EBP is critically important. We use it for several reasons. First, clinicians do not inherently know whether a specific plan or clinical action is best for a client. No matter how skilled and experienced you are, you are not omniscient. You are not superheroes. And more than that, best practices are always changing and evolving. EBP provides a framework within which clinicians can keep up to date on best practices. EBP also encourages professional integrity - it keeps us honest, and it requires us to be respectful, open-minded, and aware of our own biases. EBP is also closely aligned with the ASHA Code of Ethics. Code I-M specifically states: Individuals who hold the Certificate of Clinical Competence shall use independent and evidence-based clinical judgment, keeping paramount the best interests of those being served. EBP also promotes other ethical principles regarding health care. For example, the principles of beneficence and non-maleficence remind clinicians to maximize benefit to clients while minimizing harm. And as I mentioned earlier, EBP ensures that the best, most justified practices are implemented while ineffective or poorly-studied practices, which can be detrimental to client well-being, are not. The principle of autonomy encourages clinicians to make decisions independently, and EBP provides a framework within which clinicians can do that. Finally, the principle of justice is exceptionally important. By using external evidence, you reduce the risk of unconscious bias and increase access to best practices for all patients regardless of background.

Slide 7

Going back to the textbook's description of the scientific method, you can see how the 4 steps map nicely onto the real-life scientific method; they just condense some of the steps into a more streamlined framework.

1c Slide 1

Hello, and welcome to lecture 1c. In this video, we'll be discussing the basic principles of the evaluation of external evidence. I hope you remember from our last lecture that external evidence is an important tenet of evidence-based practice. We'll be spending most of this semester learning how to evaluate external evidence, but today, you're going to get a 30,000-foot-view introduction to the various factors that we consider as we evaluate external evidence. Consider this a true introduction; we will be unpacking most of these terms and concepts in great detail throughout this semester. However, I wanted to introduce you to these concepts so that you have a general overview of what we'll be learning about this semester and some initial exposure to the terms and concepts right off the bat. Before we get started, though, if you haven't already carefully read the ASHA Technical Report on evidence-based practice, please close this lecture video and do that before you watch any further.

2. Experimental Control

Highest standards:
•Control subjects: a group of subjects that do not receive the treatment under evaluation
•Prospective design: subjects are recruited and assigned to a condition (treatment v. control) before the study starts
•Randomized controlled trials: patients are randomly assigned to a condition (treatment v. control)

Slide 2

I want to start off with this fantastic passage from a pop science nonfiction book called Burnout: The Secret to Unlocking the Stress Cycle. Most of what I have to say today, or in this whole class, has nothing to do with stress (or at least I hope it doesn't!), but the authors' description of science is so spot on! They state that "Science is the best idea humanity has ever had. It's a systematic way of exploring the nature of reality, of testing and proving or disproving ideas. But it's important to remember that science is ultimately a specialized way of being wrong. That is, every scientist tries to be (a) slightly less wrong than the scientists who came before them, by proving that something we thought was true actually isn't, and (b) wrong in a way that can be tested and proven, which results in the next scientist being slightly less wrong. Research is the ongoing process of learning new things that show us a little more of what's true, which inevitably reveals how wrong we used to be, and it is never finished." So, when you're thinking about the scientific method, try to remember that scientists are trying to employ a systematic formula for conducting studies that prove themselves to be a little less wrong than the last scientist!

1c slide 7

Meta-analyses can be used for non-treatment purposes too. In this example, the scientists compiled studies of the predictors of outcomes of late talkers. They compiled data for 2,134 children across 20 studies. They found that expressive vocabulary size, receptive language, and socioeconomic status in toddlerhood predicted expressive language outcomes later in development. Pretty cool, right? I wanted to include this example so that you can see that meta-analyses are good at drawing conclusions about phenomena other than just treatment efficacy.

1c slide 6

Meta-analyses can focus on treatment efficacy, as in this example. This meta-analysis was recently published. It investigated the effects of early intervention on social communication outcomes for children with autism. It included data from 29 studies, totaling 1,442 children. Their main finding was that early intervention has a significant positive impact on social communication outcomes, and that the age at which this impact was greatest was 3.81 years. The amazing thing about this study is that it would be completely impossible to study 1,442 children in one study; it just takes too much time and money and personnel - it's infeasible within the scope of a single research study. So this meta-analysis compiled the data from 29 different independent studies to draw a meaningful conclusion about the efficacy of early intervention in autism.

1c Slide 2

Most of the time, when you're implementing evidence-based practice, you'll be evaluating external evidence on a specific treatment. There are exceptions to this, for example, if you're trying to figure out best practices for clinical assessment (e.g., what is the best way to diagnose autism, or what is the best measure for assessing expressive vocabulary in a Spanish-speaking child). But the vast majority of the time, you'll be evaluating treatment efficacy in order to determine the best course of treatment for a specific client. So, this term "treatment efficacy" will be used extensively throughout the semester, and it's important to understand precisely what it means. Treatment efficacy is defined as the extent to which a treatment brings about desired outcomes under highly controlled conditions (i.e., in a research setting, in a highly-controlled research study). It represents the potential of a particular treatment to benefit patients in a particular clinical population. It is not to be confused with treatment effectiveness, which means something very different. Treatment effectiveness is the extent to which a certain treatment benefits patients when administered under the conditions of routine clinical practice. In other words, it represents performance of a treatment in the "real world." Let's work through some examples to illustrate these differences. An example of treatment efficacy is a randomized, controlled trial of

Slide 10

Ok, now that you've identified your research question and established that it is significant, testable, and novel, it's time to decide upon a course of action. This step is where we define the method of our investigation. This is arguably the most important step of the process, and we will spend a great deal of time discussing many different methodological decisions that can be made under variable circumstances. It is important to know now that the method of investigation should directly reflect the subject of interest, relevant background factors, and your research question. Building off of our TMS in aphasia example, we should only include measures that are directly relevant to language production as it is defined in our study. While extraneous factors may be interesting, you should only include what is relevant. For example, you may wonder whether social-emotional factors such as perceived social support impact aphasia recovery when TMS is paired with speech-language therapy, but that is not part of your research question, and therefore it should not be included in your methodological decisions. If you want to study whether perceived social support impacts aphasia recovery - formulate a separate research question about that and get to work!

Because the appropriateness of methodological approaches varies based on a variety of factors, it is sometimes difficult to establish the "gold standard" method for any research question. What matters most is that scientists make methodological decisions that are rigorous and well-justified by the evidence base. Methods sections are very detailed and specific, and that is because scientists must ensure that they have carefully justified their methodological decisions. Another reason that methods sections are so detailed and specific is that, in order to be considered strong research, the methods must be "replicable". In other words, if another scientist came along and wanted to "replicate" a study, they should be able to do that based on the methods section of a research article.

1c slide 8

Ok, the second theme that we consider when evaluating external evidence is the extent of experimental control. I alluded to this a bit in our first slide, where we discussed the rankings of different types of studies. There are several standards for research on treatment efficacy. First, the inclusion of control subjects, which are a group of subjects that do NOT receive the treatment under evaluation. They may receive a placebo, or no treatment, or treatment-as-usual, which is the treatment they would otherwise receive if they weren't involved in a study. The second standard is the use of a prospective design; this means that subjects are recruited and assigned to a condition (treatment v. control) before the study starts. The highest standard is the randomized controlled trial. As I mentioned earlier, this means that patients are randomly assigned to a condition (treatment v. control), because random assignment reduces the chance that groups might differ systematically in some unanticipated or unrecognized ways other than the experimental factor being investigated.
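
To make the idea of random assignment concrete, here is a minimal sketch in Python, using hypothetical participant IDs rather than data from any study mentioned in this lecture, of how subjects might be randomly allocated to treatment v. control:

```python
import random

# Hypothetical participant IDs -- illustrative only.
participants = [f"P{i:02d}" for i in range(1, 21)]

# Shuffling first means no characteristic of the participants can
# systematically determine who lands in which group.
random.shuffle(participants)

half = len(participants) // 2
treatment_group = participants[:half]  # receive the treatment under evaluation
control_group = participants[half:]    # receive placebo / treatment-as-usual

print("Treatment:", treatment_group)
print("Control:  ", control_group)
```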

Slide 4

Scientific research is centered around a scientific theory - a statement that is formulated to explain natural phenomena. Imagine Sir Isaac Newton sitting under an apple tree when an apple falls and hits him on the head. According to legend, this is what sparked Newton's theory of gravity, which he then set about testing systematically and objectively. Good research begins with a scientific theory, and it ends with a scientific theory. Sometimes the scientific theory at the end is very similar to the scientific theory at the beginning; frequently, though, the scientific theory at the end of a research study differs in some way from the version of the scientific theory that inspired the research study. In this way, the scientific process is quite circular, like a big, complex feedback loop. Theory inspires research, which informs theory, which then inspires more research, which in turn informs theory, and so on and so on. However, there is, in fact, a method to this madness. It is called the scientific method.

Slide 9

So the first step is to state the problem. In this step, scientists attempt to identify a research question (or multiple research questions, or research questions within research questions!) that is concise, well-formed, and most importantly, TESTABLE. An example of a good research question might be "Does pairing transcranial magnetic stimulation with behavioral speech-language therapy help improve aphasic patients' language recovery?" An even better one would be: "Does pairing transcranial magnetic stimulation with behavioral speech-language therapy help improve aphasic patients' recovery of expressive grammatical skills?" In that example, "recovery of expressive grammatical skills" is more specific, thus more testable, than "language recovery". An example of a bad research question: "Do tacos taste better than hamburgers?" First off, everyone knows tacos taste better than hamburgers. But more importantly, that research question is not testable! How does one measure the relative taste of two things? Taste is highly subjective, and to my knowledge no objective measurements exist that could answer this question. Also, this research question is not specific enough. For example, who is doing the tasting? Me? My dog? The next person I see walking down the street?

The next step is to position the research question within the context of the established knowledge. This "established knowledge" is often referred to as the evidence base, the scientific literature, or the scientific context. This step helps you determine whether the research question is important and necessary. This step is also important because when you're trying to answer a research question, first and foremost, you want to make sure no one else has already answered it definitively. No one wants to spend years of their life and sometimes millions of dollars to conduct a study that has already been done. More commonly, though, other scientists have done some sort of research at least somewhat relevant to your research question. Maybe you think their methods weren't as strong as they could be. Maybe they were only able to recruit a handful of participants, but now the condition you study is more widely recognized and diagnosed, so you're able to recruit a larger sample that is more representative of the population. Or maybe technology has advanced significantly and now you can test your research question in a new and exciting way. Either way, you use the findings of previous studies to shape your hypothesis, which is the third step. Your hypothesis is your best guess about the answer to your research question, based on the theory you're working from and the background information available to you. Let's return to our original research question: "Does pairing transcranial magnetic stimulation with behavioral speech-language therapy help improve aphasic patients' recovery of expressive grammatical skills?" Maybe, based on your review of prior research, you hypothesize that yes, transcranial magnetic stimulation does improve recovery of expressive grammatical skills, but only when applied to a specific region of the cortex, specifically the left inferior frontal gyrus. You base this hypothesis on evidence from neuroimaging studies that suggests that this cortical region (also known as Broca's area) plays a critical role in sentence production.
This is highly testable, because you can include a control condition where TMS is administered to a different part of the brain and directly compare changes in expressive grammatical skills. And that is exactly what this research study did!

1b Slide 4

So, as I said, there are three primary domains you will be integrating as you implement EBP: External evidence, internal evidence, and patient preferences. What's really cool is that all of these components require you to use the scientific method! Let's take a look at what I mean.

Slide 13

So, that's the scientific method in a nutshell, folks. Pretty cool, right? Even if you never spend a day of your life conducting a scientific study, the scientific method is relevant to you as a clinician. In our next lecture, I will be explaining the concept of evidence-based practice, which is founded on the scientific method. It turns out that you'll use the scientific method constantly to make decisions about your clients.

1B Slide 1

So, what is evidence-based practice? This definition was originally published in 1996, when the conversations about Evidence-Based Practice began to emerge. It states that Evidence-Based Practice is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients... by integrating individual clinical expertise with the best available external clinical evidence from systematic research. It's important to note that later definitions of EBP also compel us to incorporate client needs and values into the framework. But in the simplest definition, EBP is clinical practice that is founded in evidence. EBP is highly relevant to you because as a future practitioner, you will be responsible for your clients' care. You will be making decisions about your clients' care. And you will need to be able to justify your decisions with evidence. EBP increases best practices, which enhance client outcomes. It also ensures that practices that have not yet been tested are not put into use prematurely, and it prevents the use of ineffective practices, which can be detrimental to client well-being.

1c slide 13

That is precisely what we're talking about here. When you're evaluating external evidence, you should ask yourself: Do the patients in the study represent the patients I typically see in the clinical setting? That is how you consider the relevance of a study. Next, you should consider feasibility. To do that, you should ask yourself something along the lines of "Can the methods being studied be realistically implemented in my clinical setting?" Examples of something that might not be feasible in the clinical setting include a diagnostic interview that requires 5 hours to complete or a treatment that requires technology that is not readily available in a typical clinic.

1c Slide 3 Evaluation of external evidence

When we're evaluating external evidence of treatment efficacy, there are different levels of evidence. Think of these as "letter grades" that can be assigned to a study based on its scientific rigor.

The most rigorous and most convincing form of external evidence comes from a meta-analysis of at least one randomized controlled trial. This is the holy grail of treatment efficacy. We'll go into detail later on what a meta-analysis is, but for now, just know that it is a research method in which researchers compile data from other studies and analyze the compiled data using special meta-analytic techniques that help us draw conclusions about how well the evidence from previous studies "converges" on one conclusion. Because meta-analyses require that several previous studies have been published on a specific treatment, they are rare, but they can have a lot of impact on the field.

The next step down the ladder belongs to well-designed randomized controlled trials. A randomized controlled trial (or RCT) is an experimental study that aims to reduce certain sources of bias when testing the efficacy of new treatments; this is accomplished by randomly allocating subjects to two or more groups, treating them differently, and then comparing them with respect to a measured response. One group—the experimental group—receives the intervention being assessed, while the other—usually called the control group—receives an alternative treatment, such as a placebo, no intervention, or what is sometimes called "treatment as usual". An example would be randomly assigning ovarian cancer patients to either a new treatment or to an already-established treatment protocol (in other words, the "treatment as usual") to determine if the new treatment is more efficacious than the current best-practice treatment.

The next level of evidence belongs to well-designed controlled studies without randomization. To use the previous example of ovarian cancer, in a controlled study without randomization, patients might opt into which treatment they want, the new one or the established one, instead of being randomly assigned. But there are still two conditions, the treatment condition and the control condition.

The next level consists of quasi-experimental studies. Quasi-experimental research is similar to true experimental research, but uses carefully selected rather than randomized subjects, and doesn't usually have a control condition. One example would be if a scientist wants to determine whether patients with stage 2 ovarian cancer respond better to a treatment than patients with stage 4 ovarian cancer. Because a scientist can't randomly assign patients to a particular stage of cancer, all they can do is try to select their subjects so that the groups are matched on other potentially important variables, such as age, treatment history, etc.

Next down the ladder, we have well-designed non-experimental studies. Non-experimental studies are studies in which nothing is being actively manipulated; researchers measure variables as they naturally occur. An example might be a study where the researcher investigates the relationship between specific genetic variables in ovarian cancer patients and their response to a new treatment. A patient's DNA cannot be actively manipulated (at least not yet!) and there is no control condition here, so this is a non-experimental study.
The lowest grade, level 4, is assigned to evidence that consists primarily of "expert authorities", for example, a report written by a committee of experts or a consensus document from a research conference.

Slide 12

The final step of the scientific process is to form a conclusion. For this step, you are taking your results and interpreting them within the context of a lot of important information. You should interpret the results within the context of your specific research question. For example, if you studied TMS paired with speech-language therapy in aphasia, you should not attempt to draw any conclusions about whether TMS will improve language outcomes in young children with autism. If you studied the effects of TMS on grammatical skill recovery in aphasia, you should not attempt to draw any conclusions about whether TMS improves semantic skills in aphasia. When interpreting their findings and drawing conclusions about their research question, scientists must also consider how their findings relate to those of previous research studies. How are the findings different or similar? Were any findings surprising given previous studies' results? What methodological differences could explain divergent findings? Scientists should also consider the limitations of their own methods. All studies have limitations. It is literally impossible to design the perfect study - no such thing exists. There is always room for improvement. An important part of this step is identifying ways in which the study could be stronger or better. This is a fun step, because it often inspires the next scientist that comes along to design a stronger, better study, and that is how science is advanced, one baby step at a time.

1c Slide 5

The first theme is independent confirmation and converging evidence. It is extremely rare for a single study to provide the definitive answer to a clinical question. Instead, what we want to see is a body of evidence composed of multiple high-quality, independent investigations. These investigations can be synthesized via a meta-analysis to approach a definitive answer even when, as is likely, results vary across studies. A meta-analysis is an examination of data from a number of independent studies of the same subject, in order to determine overall trends.
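
To give a rough sense of the arithmetic behind "determining overall trends," here is a minimal sketch, using made-up numbers, of the inverse-variance weighting idea behind a simple fixed-effect meta-analysis. Real meta-analyses involve much more (heterogeneity testing, random-effects models, publication-bias checks), so treat this only as an illustration of the core idea:

```python
# Hypothetical (effect size, variance) pairs from three independent studies.
studies = [
    (0.45, 0.04),
    (0.60, 0.09),
    (0.30, 0.02),
]

# Fixed-effect pooling: weight each study by the inverse of its variance,
# so more precise (lower-variance) studies contribute more to the overall trend.
weights = [1.0 / var for _, var in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)

print(f"Pooled effect size: {pooled:.2f}")
```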

1c slide 10

The fourth theme is related to effect sizes and confidence intervals. Historically, studies relied heavily on statistical significance, or what we call the "p-value". But over the past couple of decades, the recognition of effect sizes and confidence intervals in determining treatment efficacy has been increasing. Effect sizes and confidence intervals reflect practical significance as opposed to statistical significance. An effect size is a number that represents the strength of the relationship between two variables or between two conditions in a treatment efficacy study. Effect size is an essential component when evaluating the strength of a statistical claim, and I will teach you how to interpret effect sizes later on in the semester. Confidence intervals reflect the precision of the estimated difference or effect. A narrower confidence interval is better. I will also teach you how to interpret confidence intervals in a future module.
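
For a concrete illustration of both concepts, here is a minimal sketch, using hypothetical scores rather than data from any real study, of how one common effect size (Cohen's d) and a 95% confidence interval for a mean difference can be computed. We will cover proper interpretation later in the semester:

```python
import math

# Hypothetical outcome scores for two groups -- illustrative only.
treatment = [12.1, 14.3, 13.8, 15.0, 12.9, 14.6]
control   = [10.2, 11.5, 10.9, 12.0, 11.1, 10.7]

def mean(xs):
    return sum(xs) / len(xs)

def sample_var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

n1, n2 = len(treatment), len(control)
diff = mean(treatment) - mean(control)

# Pooled standard deviation, then Cohen's d: the difference expressed in
# standard-deviation units (by convention, ~0.2 small, ~0.5 medium, ~0.8 large).
sp = math.sqrt(((n1 - 1) * sample_var(treatment)
                + (n2 - 1) * sample_var(control)) / (n1 + n2 - 2))
d = diff / sp

# 95% CI for the mean difference; 2.228 is the two-tailed t critical value
# for df = n1 + n2 - 2 = 10. A narrower interval means a more precise estimate.
se = sp * math.sqrt(1 / n1 + 1 / n2)
low, high = diff - 2.228 * se, diff + 2.228 * se

print(f"Cohen's d = {d:.2f}")
print(f"95% CI for the difference = ({low:.2f}, {high:.2f})")
```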

Slide 5

The scientific method that is described in your book is... shall we say, simplified. It is not wrong per se; it is simply the simplified, linear version of the real scientific method. It breaks down the scientific method into 4 distinct steps, which we will discuss in detail in a few minutes. These steps are: 1. Identify or state the problem to be investigated. 2. Determine a method for investigating the problem. In other words, figure out HOW to investigate the problem. 3. Present the results that you found during the investigation. 4. Draw conclusions about the problem, based on the results you found.

1c slide 9

The third theme is avoidance of subjectivity and bias. Bias occurs when "systematic error is introduced into sampling or testing by selecting or encouraging one outcome or answer over others". It's important to understand that bias is often unconscious! And it can stem from a variety of sources, including the patients themselves, clinicians, researchers, or the measures being used. One example of how patients can be biased is if they know whether they received the treatment or the control. If they know they received the treatment in question, their answers to questions may be unconsciously biased toward a positive impact, even if objective measures indicate no positive response to the treatment. To reduce bias, researchers use a process called "blinding" - where they mask or conceal information that could potentially influence or bias the results of the study. Patients or others involved in the research, such as those collecting the data from patients, may be blinded to the treatment condition, the over-arching research questions or goals, or the patient group. For example, when the COVID vaccine was being tested in clinical trials, all patients received injections, but patients were blinded to whether they received the vaccine or the placebo. So any responses they made about how they felt (e.g., side effects, adverse reactions) should be free of the bias that would come from knowing whether they received the vaccine.

Slide 3

There are a few key concepts I want to emphasize to lay the foundation for our discussion on the scientific method. First, it is important to always remember that science is not designed to PROVE anything, but rather to test a specific hypothesis. Second, science is objective, in that it should not be based on a person's opinions or beliefs. I like Kerlinger & Lee's definition of scientific research as "systematic, controlled, empirical, amoral, public, and critical investigation of natural phenomena." Third, science can rely on either empiricism, or rationalism, or sometimes both! Empiricism is the philosophy that knowledge is gained through experience and evidence. Think of physics as a great example. Rationalism is the philosophy that assumes that knowledge must be gained through the exercise of logical thought. Linguistics is a great example of this approach.

1c Slide 4

There are five major themes that are considered when evaluating the quality of external evidence. These include independent confirmation and converging evidence; experimental control; avoidance of subjectivity and bias; effect sizes and confidence intervals; and relevance and feasibility. Let's talk through each of these in a bit more detail.

1b Slide 3

To use evidence-based practice, you must rely upon three specific sources of information: First, you'll use high-quality research evidence, in other words, peer-reviewed published research. This is what is sometimes called "External Evidence". Second, you'll pull from your expertise as a practitioner, the knowledge you have gained through your training and clinical experiences. This is what we call "Internal Evidence". Finally, you'll incorporate your client's preferences and values, which you will have asked them about directly. It's important at this point that you never make assumptions about what a client needs or wants based on their racial, ethnic, cultural, or socioeconomic background. It's also important that the client is asked directly about their needs and preferences whenever that's possible. There may be some cases in which you are unable to get this information directly from the client. For example, imagine you are providing services to a three-year-old autistic child who doesn't have the ability to communicate verbally. In a case like that, you would make sure you assessed needs and preferences with the primary caregiver of the child. Another important context to consider is the clinical environment. You will want to make sure you're considering limitations and caveats of the clinical environment in every case. Let's go back to the example we used in the lecture on the scientific method, where we discussed the study that investigated the pairing of transcranial magnetic stimulation, or TMS, with speech-language therapy to improve language recovery in aphasic patients. Now, imagine you're treating an adult with stroke-induced aphasia, and as you're reviewing and evaluating the external evidence, you find that there's a good deal of high-quality research suggesting that TMS is an effective tool and can really enhance the efficacy of speech-language therapy. However, you and your client live in rural South Carolina, and the nearest TMS device is 90 minutes away. In this situation, the clinical environment prohibits the implementation of what might otherwise be a best practice.

1b Slide 5

When you're implementing EBP, you will go through some specific steps. First, you will ask a well-built question. We sometimes call this the "clinical question", and I will be teaching you how to frame a clinical question in the near future. In the meantime, just know that the question usually sounds something like... "Example". Next, you will select the sources from which you plan to draw your evidence. Then you will search for your evidence in a systematic way. After you've found some examples of evidence, you will evaluate and synthesize that evidence. You will then apply the evidence to your clinical problem, evaluate how that application impacted your clinical problem, and disseminate the findings. This looks so much like the scientific method! For example, when you frame your clinical question, you're identifying the problem. When you plan and execute your search for evidence, you are determining the methods to investigate the problem. When you are appraising, synthesizing, and applying the evidence, you are examining the results of your investigation. And then you are forming a conclusion when you evaluate the application of the evidence and disseminate your findings.

Effect size

a number that represents the strength of the relationship between two variables (or between two conditions in a treatment efficacy study)

Confidence intervals

the range on either side of an estimate that is likely to contain the true value for the whole population
•Reflect the precision of the estimated difference or effect - "narrower" is better!

randomized controlled trials

patients are randomly assigned to a condition (treatment v. control)

rationalism

philosophy that assumes knowledge must be gained through the exercise of logical thought (e.g., linguistics)

empiricism

philosophy that knowledge is gained through experience and evidence (e.g., physics)

Treatment effectiveness

whether the treatment can be shown to work in clinical practice
•The likelihood that a certain treatment protocol will benefit patients in a certain clinical population when administered under the conditions of routine clinical practice
•Represents performance under "real world" conditions

Treatment efficacy

whether the treatment can produce changes under well-controlled conditions
•The extent to which a treatment brings about desired outcomes under highly controlled conditions (i.e., in a research setting)
•Represents the potential of a particular treatment protocol for bringing about beneficial change when administered to patients in a particular clinical population.

1. Independent Confirmation and Converging Evidence

•A single study rarely provides a definitive answer about efficacy
•A body of evidence - multiple high quality, independent studies - is much more informative
•Meta-analysis: "gold standard"
-examination of data from a number of independent studies of the same subject, in order to determine overall trends

3. Avoidance of Subjectivity and Bias

•Bias: occurs when "systematic error [is] introduced into sampling or testing by selecting or encouraging one outcome or answer over others"
-Often unconscious!
-Can stem from a variety of sources (patients, clinicians, researchers, measures)
•Blinding: process of masking or concealing information that could potentially influence, or bias, the results of a study
-Treatment condition (treatment v. control)
-Research questions
-Patient group (high-risk v. low-risk)

feasibility

•Can the methods being studied (e.g., screening tool, measure, treatment approach) be realistically implemented in a clinical setting?
•Ex: a diagnostic interview that requires 5 hours to complete
•Ex: a treatment that requires technology that is not readily available in a typical clinic

3. Evaluate the Results

•Consider your own bias
•Think critically to report the results objectively
•Examine the hard facts: data
•Identify information that is directly relevant to your statement of the problem

2. Determine the method

•Create a course of action
•Consider the subject of interest, relevant background factors, etc.
•Be specific in the strategy you use to answer your research questions
-Subjects
-Materials
-Procedures
-Assessment tools
•Methods must be "replicable"

5. Relevance and Feasibility

•Relevance: Do the patients in the study represent patients typically seen in the clinical setting?
•Feasibility: Can the methods being studied (e.g., screening tool, measure, treatment approach) be realistically implemented in a clinical setting?
•Ex: a diagnostic interview that requires 5 hours to complete
•Ex: a treatment that requires technology that is not readily available in a typical clinic

Scientific Theory

•Research *should* start and end with a scientific theory: a carefully considered statement that is formulated to explain natural phenomena
•To have a scientific theory, there must be an established method to conduct research

test

•Science is not designed to prove, but rather to _______.

Scientific Research

•Science is not designed to prove, but rather to test.
•Scientific research is founded on objectivity:
-"systematic, controlled, empirical, amoral, public, and critical investigation of natural phenomena" - Kerlinger & Lee (2000, p. 14)
•Scientific research can rely on:
-empiricism - philosophy that knowledge is gained through experience and evidence (e.g., physics)
-rationalism - philosophy that assumes knowledge must be gained through the exercise of logical thought (e.g., linguistics)

4. Form a conclusion

•Synthesize the results
•Consider the evidence in light of the specifics of the problem
•Make a conclusion based on your results and on others' results
•Consider the limitations of the methods used
•Replication is key - the more times results are replicated by independent studies, the more "trustworthy" they become

EBP in this course

•This course is designed around EBP - the primary objective of the course is to learn how to critically evaluate external evidence!
•You will learn to implement EBP throughout the course
•You will apply EBP in your case studies:
-Internal evidence: Your experience/background education up to now
-Patient values/preferences: information from the case study you select
-External evidence: information you obtain, analyze, and apply from research articles

Treatment efficacy v. effectiveness

•Treatment efficacy: The extent to which a treatment brings about desired outcomes under highly controlled conditions (i.e., in a research setting)
-Represents the potential of a particular treatment protocol for bringing about beneficial change when administered to patients in a particular clinical population.
•Treatment effectiveness: The likelihood that a certain treatment protocol will benefit patients in a certain clinical population when administered under the conditions of routine clinical practice
-Represents performance under "real world" conditions

Evidence-Based Practice (EBP)

•What is EBP?
•"... the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients ... [by] integrating individual clinical expertise with the best available external clinical evidence from systematic research" (Sackett et al., 1996, p. 71)
•Simplest definition: practice that is founded in evidence
Why is EBP relevant to me?
•As a future practitioner, you will be responsible for your clients' care
EBP...
•Increases best practices to enhance clients' outcomes
•Delays use of practices that have not yet been tested (i.e., are unproven)
•Prevents use of ineffective practices, which can be harmful

Okay, so the scientific method is super cool, right? Riiiiiiiight?!?!

•Why is the scientific method relevant to the practitioner?
•We are accountable to employ evidence-based practice, which is founded on the scientific method
•As practitioners, we need to use the scientific method daily to make decisions about clients

control subjects

•a group of subjects that do not receive the treatment under evaluation

•Scientific research can rely on:

•empiricism - philosophy that knowledge is gained through experience and evidence (e.g., physics)
•rationalism - philosophy that assumes knowledge must be gained through the exercise of logical thought (e.g., linguistics)

Bias

•occurs when "systematic error [is] introduced into sampling or testing by selecting or encouraging one outcome or answer over others"
•Often unconscious!
•Can stem from a variety of sources (patients, clinicians, researchers, measures)

Blinding

•process of masking or concealing information that could potentially influence, or bias, the results of a study
•Treatment condition (treatment v. control)
•Research questions
•Patient group (high-risk v. low-risk)

prospective design

•subjects are recruited and assigned to a condition (treatment v. control) before the study starts

