Evidence Based Practice 2

How does Popper's notion of 'falsifiability' relate to EBP?

'Falsifiability' states that laws cannot be shown to be either true or false, only held provisionally true. The basis of EBP is likewise that any review of evidence is provisional, albeit based on the best evidence available at the time.

Why would you do an observational study rather than an interventional study?

- Cost
- To investigate causal relationships (e.g. between an exposure and an outcome) where manipulating the exposure in an interventional study would be unethical or impractical

List two Observational and two Experimental research designs?

- Experimental: RCT; pre-post
- Observational: cohort; case-control

When calculating sample sizes for a study what power is usually aimed for?

0.80 or 80% (equivalently, β = 0.20, a 20% risk of a Type 2 error)

How does a cohort study differ from a case control study?

1. Cohort studies
• A group of people with shared or common characteristics
• Followed over time (prospective/retrospective) or at a point in time (cross-sectional)
2. Case-control studies
• Compare people with the disease or characteristic of interest with otherwise similar people who don't have it
• Useful for rare conditions

What are the 5 research evidence dimensions?

1. Hierarchy level: study design (strength of evidence)
2. Study quality/bias: how well was the study done? (internal validity: bias/confounders)
3. Statistical precision of results: statistical significance (p value, confidence limits) (chance)
4. Effect size: how clinically important are the findings? (impact)
5. Relevance: usefulness of results in clinical practice (external validity)

Describe the five steps for EBP as defined in the Sicily Statement?

1. Identify the information need & form a clinical question
2. Find the evidence
3. Appraise the evidence
4. Integrate the evidence in clinical decision making
5. Evaluate the process

What are the key patient drivers underpinning the need for EBP?

1. Increasing choices in healthcare and treatments for patients
2. Patients' expectations of healthcare are increasing
3. Health resources are becoming scarcer: patients have less money for treatment
4. A vast quantity of research literature of varied quality is now publicly accessible (open access)

Identify the five research evidence dimensions?

1. Hierarchy level: the study design used indicates the degree to which bias has been minimised
2. Study quality/bias: the methods used to minimise bias within the design
3. Statistical precision of results: statistical significance (p value, confidence limits) reflects the degree of uncertainty about the true effect
4. Effect size: how clinically important are the findings?
5. Relevance: usefulness of the evidence in clinical practice

How can we reduce the influence of confounders/prognostic factors?

A prognostic factor is a patient characteristic that can predict that patient's eventual outcome (a potential confounder):
- Demographic: e.g. sex, age, race
- Disease-specific: e.g. tumour stage
- Comorbidity: other co-existing conditions
In an observational study, the influence of confounders can be reduced by stratification/subgrouping.

What are the potential clinical implications of a test having a high specificity?

A test with a low α (i.e. a low false positive rate, high specificity) is good at ruling in the condition if the result is positive (the 'SpPin' rule).

What are the potential clinical implications of a test having a high sensitivity?

A test with a low β (i.e. a low false negative rate, high sensitivity) is good at ruling out the condition if the result is negative (the 'SnNout' rule).

What factors may be considered when comparing the applicability of the research evidence to your specific patient context?

Applicability of evidence to your context:
• Replicability: can you reproduce the intervention in your clinical practice?
• Patient population: is the study population similar to yours?
• Geography: where was the research conducted relative to your clinical practice?
• Philosophy: be aware of philosophical differences in practice between countries
• Cost: can you afford it?

How do you know statistically that an outcome measure is reliable?

As with all measurement properties, there are a number of different ways of describing reliability statistically. One is the standard error of measurement (SEM), which is related to test reliability in that it provides an indication of the dispersion of the measurement errors.
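One standard formulation (not given on the card itself) is SEM = SD × √(1 − r), where r is a reliability coefficient such as an ICC. A minimal sketch in Python, with hypothetical numbers:

```python
import math

def sem(sd: float, reliability: float) -> float:
    """Standard error of measurement: SEM = SD * sqrt(1 - r),
    where r is a reliability coefficient (e.g. an ICC)."""
    return sd * math.sqrt(1 - reliability)

# Hypothetical outcome measure: SD = 8 points, test-retest r = 0.90
print(round(sem(8.0, 0.90), 2))  # ~2.53 points of measurement error
```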

How does risk of bias differ from methodological quality?

Assessment of methodological quality: the extent to which study authors conducted their research to the highest possible standards.
Assessment of risk of bias: the extent to which the results of the study can be believed.

What is the potential effect of allowing a longitudinal observational study to go on for too long a time period?

Attrition bias: Subjects may drop out over time and therefore bias the results.

If the 95% confidence interval associated with this OR extended from 0.6 to 7.4, what would this mean?

The CI includes 1 (the value of no association), hence the result is not statistically significant.

Define chance? How can it be controlled for?

Chance (aka imprecision) is random error, e.g. the same study undertaken multiple times will produce different results. A small sample carries a greater risk of chance influencing the results: the smaller the sample size, the bigger the risk that the study is underpowered, and the greater the chance of a Type 2 error.
Control: ensure a sufficient number of subjects, usually via sample size calculations, to minimise the potential for chance.

What are the three threats to Internal Validity?

Chance, bias and confounders.

Describe the four pillars of the EBP model

- Clinical experience: informal evidence obtained through clinical practice
- Research evidence: formal evidence obtained through scientific research
- Patient values: values, expectations and experiences of the patient
- Practice context: characteristics of the practice context

How does statistical significance differ from clinical significance?

Statistical significance indicates how unlikely it is that the observed effect is due to chance (the p value); clinical significance is the practical or applied value or importance of the effect of an intervention: does it make a real (genuine, palpable, practical, noticeable) difference in everyday life to the patient? The ultimate clinical significance is return to normal.

How would we identify if there is an allocation bias? Where would we find this information?

Compare the baseline characteristics of the groups, usually reported in Table 1 of the paper.

Define Critical appraisal

Critical Appraisal: the process of assessing and interpreting evidence by systematically considering its validity, results, and relevance.

How does a cross over-randomised clinical trial differ from a randomised controlled trial?

In a cross-over randomised controlled trial, subjects receive an intervention and then cross over to receive the alternative intervention at a point in time. Cross-over RCTs are considered to be the same level of evidence as an RCT.

What is the difference between validity and reliability?

Dartboard analogy:
- Reliable = shots grouped together
- Valid = the average is central
- Ideal = grouped and central ('Robin Hood')

What is the difference between sensitivity to change and responsiveness when considering an Outcome Measure?

Sensitivity to change is the ability of an instrument to measure change regardless of whether it is meaningful; the more score options a measure has, the more likely it is to be sensitive to change. Responsiveness is the ability of an instrument to measure meaningful change. The terms are interrelated.
Example: if the most minimal overall change in score is 0.83% (a change of 10% in one question divided by 12 questions), almost any difference may reach statistical significance; yet Williams and Myers (1998) showed that a minimal overall difference of 16% is required to represent a clinically important difference. So a difference may be statistically significant, but is it clinically significant?

What is measurement error with an outcome measure?

Data are only as good as the outcome measure selected.
Ideally, an outcome measure is an exact reflection of a change in status, i.e. when the outcome changes by 2%, the outcome measure is able to identify this and provide data that change by 2%.
In reality this isn't always the case = measurement error.
Sources: (a) the outcome measure; (b) the subject; (c) the measurer.

Definitions:

Diagnosis: the process of determining health status and the factors responsible for producing it; may be applied to an individual, family, group or community. The term applies both to the process of determination and to its findings.
Diagnostic test: any medical test performed to confirm or determine the presence of disease in an individual suspected of having the disease, usually following the report of symptoms or based on the results of other medical tests. Examples include performing a chest x-ray to diagnose pneumonia and taking a skin biopsy to detect cancerous cells.

How would you differentiate between a Diagnostic test study and an Intervention study?

In a diagnostic study, outcomes from one diagnostic test (the index test) are compared with outcomes from a reference standard test, measured in individuals who are suspected of having the condition of interest; 'accuracy' refers to the amount of agreement between the index test and the reference standard. An intervention study, by contrast, measures the effect of a management strategy on patient outcomes.

How do we control for attrition bias?

Ensure low dropout rates and high compliance rates, and minimise missing outcome data.

What is Evidence?

Evidence is a piece of information that supports a conclusion. Sources include:
• Personal experience
• Books
• Observation
• Journals
• Magazines
• Newspapers
• The Internet
• Family/friends
• Peers/colleagues
• Lecture notes

In Sackett et al's (1996) definition of EBP why do we need to be Explicit and Conscientious?

'Explicit' and 'conscientious' emphasise the need for a transparent and rigorous approach to searching the evidence.

Describe the two approaches to CATs (critical appraisal tools), giving a strength and weakness for each?

CATs can be scales/checklists (e.g. PEDro, CASP) or domain-based evaluations (e.g. Cochrane).

How would you identify a Gold / Reference Standard?

Gold standard/reference standard: the test regarded as the most accurate method available for classifying people as disease positive or negative. It may be identified from clinical guidelines.

What are the key Health System drivers underpinning the need for EBP?

Increasing clinician accountability due to a change in the way health care is administered:
1. Increasing costs of health care
2. Reduced staffing
3. Managed care systems
4. Increasing litigation
The drive for EBP is based on the notion that providing research evidence for all activities ensures accountability to the population for clinical decision making and interventions.

Why do we do Sample Size calculations?

To indicate how many subjects are required to show a clinically meaningful effect.
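As a minimal sketch of an a priori calculation for a two-group trial, using statsmodels; the effect size (Cohen's d = 0.5) is a hypothetical input you would justify from pilot data or the literature:

```python
# A priori sample size for an independent two-group t-test.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,  # hypothetical clinically meaningful effect (Cohen's d)
    alpha=0.05,       # Type 1 error rate
    power=0.80,       # 1 - beta: the usual 80% target
)
print(round(n_per_group))  # ~64 subjects per group
```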

What is the difference between internal and external validity?

Internal validity relates to the truth of inferences regarding cause-effect or causal relationships. The key question is whether observed changes in an outcome measure can be attributed to the study intervention and not to other possible causes.
External validity is the extent to which the results of a study can be generalised to other situations and to other people.

List and describe the two forms of reliability?

Intra-observer reliability (test-retest)
• How well the OM performs on repeated applications by the one researcher or clinician to the same sample.
• A paired t-test can be used to compare the two sets of data (does not take into consideration random variation within the group).
• If there are more than two repeat measures of interval/ratio variables, an ANOVA (analysis of variance) can be used, with the null hypothesis of no change between measures (variance = 0).
• The Pearson's correlation coefficient is the most common technique for assessing reliability. If a high r value (>0.7) and a statistically significant correlation coefficient are obtained, the OM is reliable. Interpretation of r:
- Very strong positive: r > +0.7
- Strong positive: +0.4 to +0.69
- Moderate positive: +0.3 to +0.39
- Weak positive: +0.2 to +0.29
- No or negligible: +0.19 to -0.19
- Weak negative: -0.2 to -0.29
- Moderate negative: -0.3 to -0.39
- Strong negative: -0.4 to -0.69
- Very strong negative: -0.7 or stronger
Inter-observer reliability
• How well the OM performs on application by different researchers/clinicians.
• For categorical data, Cohen's kappa coefficient is commonly used. The K value can be interpreted as follows (Altman, 1991):
- < 0.20: poor
- 0.21-0.40: fair
- 0.41-0.60: moderate
- 0.61-0.80: good
- 0.81-1.00: very good
• For numerical data, the intraclass correlation coefficient (ICC) can be used, interpreted as follows:
- 0-0.2: poor agreement
- 0.3-0.4: fair agreement
- 0.5-0.6: moderate agreement
- 0.7-0.8: strong agreement
- >0.8: almost perfect agreement
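A minimal sketch of two of the statistics named above, using common libraries (scikit-learn and SciPy); the rating data are invented for illustration:

```python
from sklearn.metrics import cohen_kappa_score  # categorical agreement
from scipy.stats import pearsonr               # numerical correlation

# Inter-observer reliability: categorical ratings from two observers
rater_a = ["normal", "weak", "weak", "normal", "strong", "weak"]
rater_b = ["normal", "weak", "normal", "normal", "strong", "weak"]
print(cohen_kappa_score(rater_a, rater_b))  # interpret with Altman's bands

# Intra-observer reliability: test-retest scores from one observer
test1 = [12.1, 15.3, 9.8, 14.0, 11.2]
test2 = [12.5, 15.0, 10.1, 13.8, 11.5]
r, p = pearsonr(test1, test2)
print(r, p)  # r > 0.7 with a significant p would be read as reliable
```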

In Hill's Criteria for Causality, describe the criterion 'Strength of Association'?

This criterion examines how large the risk associated with the factor is, i.e. how strong the relationship is between the factor and the condition; the stronger the association, the more likely the factor is to be causal.

What is the level of power you aim for in your sample size calculations?

Look for a statement which presents the level of power (0.80 or 80%).

What are key clinician drivers underpinning the need for EBP?

The main aim is to improve patient outcomes:
• Clients expect it
• Improves the clinician's knowledge
• Communicates a profession's research base
• Stimulates clinically relevant research to improve patient outcomes
• Improves accountability

If the NNT for wearing a shirt and incidence of skin cancer was 3, what would be your advice to a patient about wearing a hat compared to wearing a shirt?

The shirt has a much lower NNT (3 vs 14), so it is more effective at lowering the incidence of skin cancer; hence encourage the patient to wear a shirt rather than relying on a hat alone.

If the NNT for reducing sun exposure through wearing a hat and incidence of skin cancer was 14, what does this mean?

NNT = the number of patients needed to be treated to prevent one additional bad outcome = 1/ARR (absolute risk reduction).
• Example: if the NNT for smoking cessation were 5, then for every 5 subjects who quit smoking, approximately 1 case of cancer would be prevented.
• An ideal NNT = 1 (everyone who receives treatment gets better).
ARR = the absolute difference in rates of events between 2 groups: the proportion of those with the trait who develop the condition minus the proportion of those without the trait who develop the condition = a/(a+b) - c/(c+d).
For this question: for every 14 patients who reduce sun exposure by wearing a hat, one additional case of skin cancer is prevented.
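A minimal sketch of the ARR/NNT arithmetic, with hypothetical 2x2 counts chosen to reproduce the NNT of 14 from this question:

```python
# 2x2 counts: a/b = hat wearers with/without skin cancer,
# c/d = non-wearers with/without skin cancer (hypothetical numbers).
a, b = 10, 90   # hat wearers: 10 of 100 develop skin cancer
c, d = 17, 83   # non-wearers: 17 of 100 develop skin cancer

arr = c / (c + d) - a / (a + b)  # absolute risk reduction = 0.07
nnt = 1 / arr
print(arr, round(nnt))           # NNT ~ 14
```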

If the Odds Ratio was 0.7 (95% CI 0.2-0.99) for skin cancer associated with exposure to sunlight for greater than 3 hours a day, what does this mean?

OR < 1 = negative association; the CI does not include 1, so the result is statistically significant.

How does an observational study differ from an Experimental Study?

An observational study has no direct intervention from the assessor/researcher: you are not controlling the intervention, just observing it, looking for a relationship between a factor and an outcome. (An experimental study, in contrast, involves the researcher controlling the intervention.)

If a study concluded that there was an odds ratio of 2.7 of skin cancer associated with exposure to sunlight for greater than 3 hours a day, what does this mean?

Odds ratio (OR): a measure of association between an exposure and an outcome.
• The OR represents the odds that an outcome will occur given a particular exposure, compared to the odds of the outcome occurring in the absence of that exposure.
• The OR is a ratio of two odds: OR = (exposed with condition / exposed without) / (not exposed with / not exposed without).
• OR = 1.0: no association (equal odds)
• OR < 1.0: odds against the event (negative association)
• OR > 1.0: odds for the event (positive association)
For this question (OR = 2.7): a positive association between >3 hours of sun exposure a day and skin cancer.
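A minimal sketch of this calculation, with hypothetical 2x2 counts chosen to give the OR of 2.7 in the question:

```python
# 2x2 counts (hypothetical):
#                 skin cancer   no skin cancer
# >3 h sun/day        a = 30        b = 70
# <=3 h sun/day       c = 10        d = 63
a, b, c, d = 30, 70, 10, 63

odds_ratio = (a / b) / (c / d)  # odds in exposed / odds in unexposed
print(round(odds_ratio, 1))     # 2.7 -> positive association
```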

How do outcome measures differ from outcomes?

Outcome measures are assessment tools used to measure an outcome; outcome measures give us data. They are used in both clinical practice and research:
Intervention → Outcome → Outcome measure → Data
Any research (irrespective of type) collects some form of data:
• Quantitative data (number based)
• Qualitative data (word based, i.e. words, concepts, themes)
In statistics, data are observations (observed values of variables) that have been collected.
Subjective outcome measures:
• Self-assessed by the client
• Assessment tools measure multiple aspects of a person's experiences and may express the outcome as an index
Objective outcome measures:
• Measured by an assessor (e.g. therapist, researcher), usually with calibrated instrumentation such as scales, rulers etc.

What are the key features of an outcome?

Outcomes are 'the expected, or anticipated, change in some measure or state'.
• Outcomes are the end result of an intervention.
• The outcomes we are interested in are usually positive (but not always: 'treatment harms').
• They need to be quantified (for research).
• Goals vs outcomes.

Why do some studies report Specificity and Sensitivity figures but not PPV/NPV?

PPV = positive predictive value; NPV = negative predictive value. PPV and NPV change when the prevalence of the disease changes.
Sensitivity and specificity are most often used because they are not influenced by the prevalence of disease and can be compared between different settings and tests.
Goal: the false negative rate and false positive rate should be close to zero; the values for sensitivity, specificity, PPV and NPV should be close to 1 (100%).
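A minimal sketch of the prevalence point: the same test (sensitivity 0.90, specificity 0.90) applied at 50% vs 5% prevalence keeps its sensitivity/specificity but its PPV collapses. The 2x2 counts are hypothetical:

```python
# A = true positives, B = false positives,
# C = false negatives, D = true negatives.
def accuracy_stats(A, B, C, D):
    return {
        "sensitivity": A / (A + C),
        "specificity": D / (B + D),
        "PPV": A / (A + B),
        "NPV": D / (C + D),
    }

print(accuracy_stats(A=90, B=10, C=10, D=90))   # 50% prevalence: PPV = 0.90
print(accuracy_stats(A=9, B=19, C=1, D=171))    # 5% prevalence:  PPV ~ 0.32
```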

What does PICO/PECOT stand for?

Population/patient/participants: who are the relevant patients?
Intervention/indicator: what is the management strategy, diagnostic test or exposure that you are interested in?
Comparator or control: what is the control or alternative management strategy, test or exposure that you will be comparing against?
Outcome: what are the relevant consequences of the exposure in which you are interested?
PECOT: Population; Exposure; Comparator; Outcome; Time.

What are the differences between primary and secondary research designs?

Primary research:
1. Undertaken on human subjects (or human tissues) or animals
2. Requires ethics approval
3. Answers specific research questions
Primary research may be quantitative or qualitative. Qualitative designs include:
• Grounded theory
• Phenomenology
• Ethnography
• Feminist research
• Participatory action research
• Discourse analysis
Secondary research:
• Synthesises the findings of primary research which addresses similar research question(s)
• Synthesises and/or updates available secondary evidence findings
• Doesn't require ethics approval, as data come from completed studies, not directly from humans (or animals)
Secondary research designs:
• Clinical guidelines: a combination of primary and secondary evidence providing recommendations for best practice for a condition
• Meta-analysis: combining the raw data and re-analysing it for one intervention for one condition
• Systematic review: describing the summary findings of included studies for one research question about one condition (treatment, risks, prognosis, diagnosis or aetiology)
• Literature review: describes the literature related to a specific topic; usually quite broad and can involve background or foreground questions

Identify and describe three types of Reporting Bias?

• Publication bias: the publication or non-publication of research findings, depending on the nature and direction of the results
• Multiple (duplicate) publication bias: the multiple or singular publication of research findings
• Location bias: the publication of research findings in journals with different ease of access or levels of indexing in standard databases
• Citation bias: the citation or non-citation of research findings, depending on the nature and direction of the results
• Language bias: the publication of research findings in a particular language
• Outcome reporting bias: the selective reporting of some outcomes but not others

Describe the 4 statistical categories of data. Give two examples for each?

Qualitative:
• Nominal: names, labels and categories (e.g. gender, SES)
• Ordinal: labels with a logical order, indexes (good/average/bad) (e.g. muscle strength scales, functional activity scales)
Quantitative:
• Discrete: whole numbers (e.g. attempts at a task/exercise, number of patients in a ward)
• Continuous: parts of a whole (e.g. height in mm/cm, age in months/days, time in seconds/minutes)

Describe the difference between random and systematic measurement error? How would you control for each?

Random error is the 'noise' in the measurement:
• Biological error, i.e. a change in a person's capabilities due to physiological adaptations or psychological factors such as motivation between test and retest
• Instrumentation or equipment problems
• Uncontrolled confounding variables may also contribute to the noise in the measurements
Systematic error is a non-random change between trials in a test-retest situation. It may result from learning or fatigue effects in repeated testing: in a series of repeated maximal strength trials the patients may fatigue significantly; alternatively, patients naïve to the test may improve across trials due to a learning effect or increased confidence.

How does a pseudo-randomised clinical trial differ from a randomised controlled trial?

Randomised controlled trial: the unit of experimentation (i.e. the subjects or cluster of subjects) is allocated to either an intervention group or a control group using a random mechanism (e.g. coin toss, random number generator).
Pseudo-randomised controlled trial: the unit of experimentation is allocated to an intervention or a control group using a pseudo-random method (such as alternate allocation, allocation by day of week, or odd/even study numbers).

Why would you undertake stratified randomisation?

Reduces the potential for confounding factors to affect the results

What are the two major categories of Bias?

Reporting biases: biases that affect the research evidence we are able to access.
Methodological biases: biases inherent within individual research methodologies.

What is the difference between an a priori and a post hoc sample size calculation? Why would you do each?

Sample size estimates are usually calculated prospectively (a priori), to determine how many subjects must be recruited; however, they can also be calculated after the data are analysed (post hoc), to establish what power the study actually had.

Identify and describe five types of Methodological Bias?

• Selection/sampling bias: how well the sample on which the study was based relates to the population it purports to refer to
• Allocation bias: how well subjects were allocated to control and intervention groups
• Maturation bias: bias introduced through natural maturation of a condition, rather than the intervention
• Attrition bias: bias from subjects who drop out of a study
• Measurement bias: bias from measurement error
• Placebo effect: bias due to the purely psychological effect that an intervention can have on a subject
Sampling biases include: volunteer/referral bias, non-respondent bias, popularity bias, centripetal bias, diagnostic access bias, referral filter bias, admission rate (Berkson) bias, diagnostic purity bias.
Measurement biases include: instrument bias, insensitive measure bias, expectation/observer/interviewer bias, recall or memory bias, attention bias/obsequiousness bias, contamination bias, co-intervention bias, compliance bias, withdrawal bias, proficiency bias, apprehension bias.

What does a sensitivity of 0.7 mean?

Sensitivity: the probability that a diagnostic test result will be positive given that the individual truly has the condition.
Sensitivity = no. of true positives / no. of subjects with the condition = A / (A+C)
A sensitivity of 0.7 means there is a 70% chance that a person who truly has the condition will return a positive result.

List and describe the 4 forms of validity?

Several types of validity:
• Face validity: does it appear to measure what it is intended to? Use common-sense rules (e.g. a tape measure for length) or expert consensus to validate the outcome measure.
• Content validity: does it measure everything that it should? Does the measure cover all possible aspects of the phenomenon being measured? E.g. a measure of fitness: does it cover stamina, speed, strength, heart rate etc.?
• Criterion-related validity (aka instrumental validity): comparing an OM with another OM which has been demonstrated to be valid.
• Construct validity: an association between the outcome measure and the prediction of an inferred theoretical construct. Convergent validity is agreement among ratings, gathered independently of one another, where measures should be theoretically related; discriminant validity is the lack of a relationship among measures which theoretically should not be related.

How does simple randomisation differ from cluster randomisation?

Simple randomisation (e.g. concealed envelopes, computer programs, flipping a coin) is based on a single sequence of random assignments and maintains complete randomness of the assignment of each subject.
• Advantage: simple and easy to implement
• Disadvantage: in small study samples it may result in an unequal number of participants among groups
Cluster randomisation: clusters of individuals are randomised rather than individuals.
• Advantages: able to study interventions that can't be directed toward individuals, and the ability to control for 'contamination' across individuals
• Disadvantages: requires more participants to obtain the same statistical power; remember that people within a cluster are more likely to share common confounders than other people
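A minimal sketch of simple randomisation by independent 'coin flips', illustrating the stated disadvantage that small samples often end up unbalanced; the sample size and seed are arbitrary:

```python
import random

random.seed(7)  # arbitrary seed, for repeatability only
n = 20
allocation = [random.choice(["intervention", "control"]) for _ in range(n)]
print(allocation.count("intervention"), allocation.count("control"))
# With only 20 subjects the split is frequently not 10 vs 10;
# block or stratified randomisation is one way to restore balance.
```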

Who is blinded when 'triple blinding' is done?

The subject, the researcher, and the organisers/analysts.

Define bias? How can they be controlled for?

Bias is systematic error in the way the study was done. It is controlled through study design, e.g. randomisation, allocation concealment and blinding.

How do you control for the Hawthorne effect in an intervention study?

The Hawthorne effect is the alteration of behaviour by the subjects of a study due to their awareness of being observed. It can be controlled by taking a long-term approach, so that subjects habituate to being observed.

What do we mean by reliability of an outcome measure?

The extent to which a measurement is consistent.

What is external validity?

The extent to which the results of a study can be generalized to other situations and to other people; it is closely connected with the applicability of a study's findings.

Lecture 1 - Introduction to Evidence-based practice 2

The following questions relate to Lecture 1.

Lecture 4 : Intervention studies

The following questions relate to lecture 4

Week 2 : Introduction to Critical Appraisal - Outcomes and Outcome Measures

The following questions relate to week 2.

Week 3 : Introduction to Critical Appraisal 2 - threats to internal and external validity

The following questions relate to week 3

Week 7: Diagnostic study questions

The following questions relate to week 7

In Sackett et al's (1996) definition of EBP what does 'Judicious' mean?

The judicious use of the evidence is about making sure that the evidence is framed in terms of clinical expertise and the patient's values and circumstances.

Lecture 5 : Observational studies

The next questions relate to Lecture 5.

What does a specificity of 0.2 mean?

Specificity: the probability that a diagnostic test result will be negative given that the individual truly does not have the condition.
Specificity = no. of true negatives / no. of subjects without the condition = D / (B+D)
A specificity of 0.2 means there is only a 20% chance that a person who truly does not have the condition will return a negative result (i.e. a false positive rate of 80%).

What are the potential clinical implications of having a high False Negative Rate?

The false negative rate (FNR) is the probability that a diagnostic test result will be negative given that the individual truly has the condition.
FNR = no. of false negatives / no. of individuals with the condition = C / (A+C) = 1 - sensitivity
An error that results from a high false negative rate is potentially dangerous: the subject is unaware of an existing condition and hence will not seek needed treatment.

What are the potential clinical implications of having a high False Positive Rate?

The false positive rate (FPR) is the probability that a diagnostic test result will be positive given that the individual truly does not have the condition.
FPR = no. of false positives / no. of individuals without the condition = B / (B+D) = 1 - specificity
An error that results from a high FPR causes inconvenience: the individual is diagnosed with a condition that is not present and will probably seek treatment for the non-existent problem. How inconvenient this is will depend on how invasive the treatment is.

What does a Likelihood Ratio of 13.5 mean?

The positive likelihood ratio (LR+) provides clinical information about an individual person because it indicates how likely a positive result is in a person with the disease compared to a person without the disease.
• LR+ > 1 indicates a positive test is associated with the presence of disease
• LR+ < 1 indicates a positive test is associated with the absence of disease
• An LR+ of 10 indicates that a person with the disease is 10 times more likely to have a positive test result than someone without the disease
So an LR+ of 13.5 means a person with the disease is 13.5 times more likely to have a positive test result than someone without it.
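The card does not give the formula, but LR+ is conventionally computed as sensitivity / (1 − specificity). A minimal sketch with hypothetical accuracy figures that yield 13.5:

```python
sensitivity = 0.945  # hypothetical
specificity = 0.93   # hypothetical

lr_pos = sensitivity / (1 - specificity)  # standard LR+ formula
print(round(lr_pos, 1))  # 13.5: a positive result is 13.5x more likely
                         # in a person with the disease than one without
```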

What is the potential effect of allowing a longitudinal observational study to go on for too short a time period?

Timing bias: if the measures are too close together there may be insufficient evidence of change.

What use is the Hill's Criteria for Causality?

To determine if causality exists, i.e. whether there is a link between what we observe and the outcome of interest, we need to satisfy a number of criteria: Hill's Criteria for Causality. Any relationship is not necessarily causal; it can be consequential or coincidental.
The criteria:
• Strength of association: how large is the risk associated with the factor, i.e. how strong is the relationship between the presence of the factor and the development of the condition? It is assumed that factors with the largest relative risks are more likely to be causal in nature; the relationship can be quantified via odds ratios/risk ratios.
• Consistency: examines the reliability of the relationship between a factor and the condition by reviewing whether the association has been repeated in different settings. Inconsistency doesn't rule out a causal connection but suggests an association dependent on factors that vary across studies (Rothman and Greenland 1998, p. 25).
• Plausibility: is the causal association between the factor and the condition plausible?
• Coherence: does the association make sense within current scientific knowledge of the condition's natural history and biology? For any factor to be considered a potential risk factor there must be some mechanism, either proven or hypothesised, by which the factor can lead to the condition.
• Analogy: in some circumstances it may be appropriate to judge by analogy; Bradford Hill felt that in some disease states we should be ready to accept similar evidence for another drug or another disease.
• Temporality: examines the temporal relationship between the factor and the condition. For a factor to be considered causative, it must be present before the condition develops.
• Dose-response gradient: examines the size of the relationship between the factor and the condition. It is assumed that increased exposure to the factor will lead to an increase in the incidence of the condition.
• Specificity: examines whether exposure to the factor is specifically associated with the disease. This is the most difficult criterion to satisfy, as the ability of any study to control for exposure to a range of other factors is limited in practice; in reality only rare factors or exposures are able to meet this criterion.
• Reversibility (experiment): has it been shown that decreasing the exposure to the factor results in a reduction in the condition?

Why do we do Critical Appraisal?

To ensure that we are conducting best practice by being conscientious and judicious

How would you define the minimal acceptable clinical significance?

To identify responsiveness we need to identify the minimal clinical difference in scores which suggests meaningful change

What does minimal clinical difference mean?

To identify responsiveness we need to identify the minimal clinical difference in scores which suggests meaningful change

Describe a type 1 error. Give an example

A Type 1 error is the incorrect rejection of a true null hypothesis (a false positive): detecting an effect that is not present. It relates to SIGNIFICANT DIFFERENCES and the p value (usually p < 0.05, i.e. 5 chances in a hundred that the result is due to chance). The more measures/analyses you have, the greater the chance of a Type 1 error. Example: concluding that an intervention reduced pain when the observed difference actually arose by chance.
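A minimal sketch of why multiple analyses inflate the Type 1 error rate: assuming independent tests, the chance of at least one false positive across k tests is 1 − (1 − α)^k:

```python
alpha = 0.05
for k in (1, 5, 10, 20):
    # Probability of at least one false positive across k independent tests
    print(k, round(1 - (1 - alpha) ** k, 2))
# 1 -> 0.05, 5 -> 0.23, 10 -> 0.40, 20 -> 0.64
```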

Why might you not do an RCT to answer an interventional type question?

It may be unethical, time-consuming or costly.

What is an advantage of a case control study?

Useful for rare conditions

Define confounders? How can they be controlled for?

Confounders are variables which were forgotten or missed and hence not controlled for in the study. Control for them by randomisation and by setting inclusion/exclusion criteria.

What factors need to be considered when choosing an outcome measure?

We need to consider:
• Reliability
• Validity
• Sensitivity of the outcome measure
• Utility of the outcome measure
Validity is concerned with the accuracy of the OM, while reliability is concerned with its consistency.

Describe key features of the following study designs

a. Randomised controlled trial
b. Non-randomised controlled trial
c. Cohort study - retrospective
d. Cohort study - prospective
e. Cross-sectional study
f. Case-control study
g. Case study
h. Case series

You have conducted a diagnostic study to see how accurate a specific blood test is to identify the presence of Scootal blood disease. a. What do we mean by 'accurate' in diagnostic studies? b. You are about to report the findings in a research paper and your colleague tells you that the reason you found so many Scootal blood disease cases was that you collected data in the middle of a once-in-a-lifetime Scootal blood disease epidemic. Would you report the accuracy in terms of Sensitivity/Specificity or Positive Predictive Values/ Negative Predictive Values?

a. That the test has high sensitivity and specificity, i.e. that the test result reflects whether the subject truly has the condition or not.
b. Sensitivity/specificity: these are not influenced by disease prevalence, whereas PPV and NPV would be distorted by the unusually high prevalence during the epidemic.

Where in a paper would we identify a study's research question?

Usually in the last sentence of the introduction, which states the study aim.

A study into the effect of an intervention on pain levels has calculated a p-value of 0.60 when comparing pain levels after the intervention with those before. What does this mean?

The result is not statistically significant: we cannot reject the null hypothesis of no change in pain levels.

Identify three types of Experimental/Intervention studies?

• Randomised controlled trial - experimental
• Non-randomised controlled trial - experimental
• Cross-over randomised controlled trial - experimental
• Pseudo-randomised controlled trial - experimental
• Case study - experimental/observational
• Case series (pre/post test) - experimental/observational

How does an Odds Ratio differ from a Risk Ratio?

Risk ratio (RR, also called relative risk): a measure of the risk of a certain event happening in one group compared to the risk of the same event happening in another group; used in prospective studies. It is the risk in the exposed divided by the risk in the unexposed:
RR = proportion of those with the risk factor who develop the condition / proportion of those without the risk factor who develop the condition
• RR = 1.0: no association (risk the same for both groups)
• RR < 1.0: the trait reduces the risk of the condition (e.g. RR = 0.7 is a 30% decrease in risk)
• RR > 1.0: the trait increases the risk of the condition (e.g. RR = 2.0 is a two-fold increase in risk)
Key differences from the odds ratio:
• An RR cannot be calculated for a case-control study design; an OR can.
• When an outcome is common (>10%), the OR tends to overestimate the RR.
• ORs can be used when performing logistic regression (controlling for confounders in the analysis).
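A minimal sketch contrasting the two on the same hypothetical 2x2 table, illustrating the point above that the OR overestimates the RR when the outcome is common (>10%):

```python
a, b = 40, 60  # exposed: 40 of 100 develop the condition (a common outcome)
c, d = 20, 80  # unexposed: 20 of 100 develop the condition

rr = (a / (a + b)) / (c / (c + d))  # risk ratio = 2.0
odds_ratio = (a / b) / (c / d)      # odds ratio ~ 2.67
print(rr, round(odds_ratio, 2))
```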

If a colleague told you that a study into the effectiveness of exercise for low back pain had a clinical spectrum bias, what would they be referring to?

Clinical spectrum bias: whether all important features of the condition were identified and included, i.e. severe to mild, acute to chronic etc.

Why do we use a Hierarchy of Evidences such as the NHMRC or CEBM Hierarchy of Evidence?

• Different study designs are better at answering different questions, and the relative quality of the research evidence is affected by study design.
• A hierarchy describes this: higher positions indicate increased quality of the evidence and decreased opportunity for bias and confounding, i.e. less risk that any relationship identified was due to a different, independent factor.

Identify factors that will affect the clinical utility of an outcome measure?

• Ease of implementation
• Time taken to administer
• Wording of the instrument (if appropriate)
• Wording of questions (low-level literacy, complex concepts)
• Availability of a protocol manual
• Response types (binary form, multiple categories, multiple responses etc.)
• Difficulties in scoring and interpreting scores
• Availability of population norms
• Availability of thresholds/benchmarks
• Relevance to patient and environment

Describe the types of clinical questions which may be explored with EBP?

• Interventions: what are the effects of an intervention?
• Diagnosis: how accurate is a sign, symptom or diagnostic test in predicting the true diagnostic category of a patient?
• Prognosis: can the risk for a patient be predicted?
• Aetiology: are there known factors that increase the risk of the condition?
• Screening: does a screening intervention result in improvements in patient-relevant outcomes, e.g. survival?
• Prevalence: how common is a particular condition or disease in a specified group in the population?

Identify two types of bias that may affect the validity of how subjects were selected for an observational study?

• Selection bias: were the subjects selected for the study sourced from the whole population of interest?
• Centripetal bias: specialist clinics vs primary health care
• Patient filtering bias: were all patients who presented included in the study?
• Clinical spectrum bias: are all important features of this condition identified and included, i.e. severe to mild, acute to chronic etc.?
• Loss to follow-up (affects validity)
• Confounders
Other considerations:
• Attrition bias: subjects may drop out over time and therefore bias the results
• Timing bias: if the measures are too close together there may be insufficient evidence of change
• Verification bias: has the diagnosis that has been used to identify the outcome been confirmed?
• Chance: have we ruled out the potential influence of chance?
• Clinical impact: have we considered the impact of the findings?

Survival curves are constructed by using dichotomous data. Give two examples of dichotomous data?

Dichotomous data classify each patient into one of two categories, e.g. having or not having a hip dislocation during the ten-year period; having or not having skin cancer.

