MWU - RDM: Test 1

*interpretation of clinical results* - What is the importance of Power? How does power influence the validity of a study? --> asks if the study was powered sufficiently to *find a significant difference* --> power analyses can be done using pilot data before the study, to justify an expected sample size, or after the study is finished (post hoc), to confirm that a null finding is valid ----> want power larger than ___%

80
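
A minimal sketch of how a pre-study power calculation can be run, using the normal approximation for a two-sample comparison. Python, the chosen effect size, alpha, and group size are all illustrative assumptions, not part of the card:

```python
from scipy.stats import norm

def approx_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sample z-test (normal approximation).

    d: standardized effect size expected (e.g., from pilot data)
    n_per_group: planned sample size in each group
    """
    z_crit = norm.ppf(1 - alpha / 2)            # two-sided critical value
    noncentrality = d * (n_per_group / 2) ** 0.5
    return norm.cdf(noncentrality - z_crit)     # probability of detecting the difference

# hypothetical plan: d = 0.5 (medium effect), 64 per group -> power ~ 0.8
print(round(approx_power(0.5, 64), 2))
```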

*validity of measurements* - What are the four methods used to verify validity of a test instrument or method? 4) _____ _____ - ability of an instrument to measure an abstract variable --> This is difficult b/c it is partly based on content validity and a theoretical component --> there are five commonly used methods to validate instruments used to measure constructs

Construct validity

*validity of measurements* - What are the four methods used to verify validity of a test instrument or method? 2) _____ _____ - the items that make up the instrument sample the range of content that defines a variable ----> Ex: for quality of life, you ask about a pt's physical, social, emotional, and economic status --> most useful for questionnaires, inventories, interviews --> free from irrelevant factors (don't include irrelevant questions) --> also subjective; you can have a panel of experts provide input on the creation of an instrument (no statistical verification) or you can look at specific content in the universe defined by the researcher ----> ex: McGill Pain Questionnaire (location of pain, quality, amount, intensity)

Content validity

*validity of measurements* - What are the four methods used to verify validity of a test instrument or method? 3) _____-_____ _____: measures how well one measure predicts the outcome of another, refers to how a target test result is validated by a gold standard (criterion, concrete outcome) --> most practical approach to validity testing, most objective approach, most scientifically sound ----> Ex: to measure leg length, the target test uses a yardstick and the gold standard uses an X-ray (the concrete outcome) --> *Concurrent validity:* measurement to be validated and criterion measurement are taken concurrently (the measurements are all made at the same time, under the same conditions); represent same incident of behavior, useful for a more efficient new tool, easier to administer, safer/more practical new method ----> ex: clinical measure of BG vs. home kit --> *Predictive validity:* test measures are a valid predictor of a future criterion score; however, it has *practical limitations*, i.e., you need a large pt population and must follow them long-term (to see if the pt ever gets breast ca) ----> Ex: GRE score can show potential performance in grad school; or mammogram results and breast cancer --> *Prescriptive validity:* pre-treatment data used to decide course of action or treatment; multiple pts over a period of time; compromised if outcome measurements are not chosen carefully ----> Ex: assessment of balance to determine need for a walking device—outcome measure: gait speed or reduction in falls? Reduction in falls is probably a better choice

Criterion-related validity

*validity of measurements* - What are the four methods used to verify validity of a test instrument or method? 1) ____ ____ - instrument appears to be measuring what it is supposed to measure; least rigorous method, post-hoc form of validation, requires a clear definition of the concept being measured, more subjective in nature, scientifically weak (no clout; may be the only method), not sufficient but necessary for patient participation --> obvious vs. less obvious: scale - measures weight, EEG - measures brain waves, health assessment survey

Face validity

*validity of measurements* - construct validity - What are five commonly used methods to validate instruments used to measure constructs? 3) ____ ____ - the construct contains one or more underlying dimensions, and within each, a variety of performance variables can be identified that together provide an evaluation of that particular dimension --> statistical procedure --> groups of correlated variables = *factors* (factors represent different theoretical components of the construct) ----> ex: the measure of intelligence encompasses several different factors (perception, word fluency, verbal ability, memory, reasoning, etc.)

Factor analysis

*validity of measurements* - construct validity - What are five commonly used methods to validate instruments used to measure constructs? 1) ____ ____ ____ - a criterion is chosen that can clearly identify the presence or absence of a particular characteristic, and the theoretical context behind the construct is used to predict how different groups are expected to behave --> validity of test: ability to discriminate between those w/ and w/out the trait ----> ex: a group that is KNOWN to be depressed should have higher scores on a depression scale than a group that is known not to be depressed ----> ex: Harris Infant Neuromotor Test - screening tool to identify neuromotor and cognitive/behavior problems: one group is healthy infants (healthy moms) and the other is at-risk infants (moms with alcohol or drug abuse, low pre-term birth weight)

Known group method

*developing a research question:* - What are the characteristics of a good literature review? --> Current, relevant, quality (aka peer-reviewed), ____, _____, ____ studies, ____ analysis

Methodology; populations; classical; statistical

*principles of measurements* - What are nominal, ordinal, interval and ratio measurements? What are the characteristics and limitations of each? 1) _____ Scale (classification scale) - the WEAKEST level of measurement; assigned to a category based on criteria --> the categories are mutually exclusive and the rules for classification are exhaustive --> only has 1 property of a real number: you can *quantify* the number within each group (ex: handedness, blood type, sex) **runners in race example: the numbers given to runners in a race

Nominal

*developing a research question:* - What are constructs? --> ___-______ behaviors/events (pt states degree of pain, health, quality of life)

Non-observable

*principles of measurements* - What are nominal, ordinal, interval and ratio measurements? What are the characteristics and limitations of each? 2) _____ scale - the THIRD in strength in terms of levels of measurement; numbers indicate rank order of observation; *order* is the only property of a real number --> usually used clinically (ex: nurse triaging pts in the ER waiting room) or in surveys (ex: is your current state of health poor (1), fair (2), good (3), very good (4), or excellent (5)) --> Intervals between ranks may not be consistent and may not be known (i.e., if we are ranking something 1-5, how do you rank these relative to each other? Is 2 vs. 3 very different from 1 vs. 2?); there is no true zero, no real "quantity," just relative position --> they are used in descriptive studies ----> ex: Functional Independence Measure: measures several domains of function (self-care, sphincter control, mobility, communication, psychosocial function, and cognition) **runners in race example: placing runners in order from first to last to finish (you don't know how close 2nd place was to the 1st place, you just know they were in second)

Ordinal

*sampling techniques* - Why is a good study sample so important? --> a bad study sample can skew the results (ex: a bad sample for an election prediction; pollsters only asked rich white ppl and predicted Alf Landon would win, but FDR won; AKA their sample was NOT an accurate representation of the population) --> A good study sample will have the following characteristics: ----> Relevant characteristics (Do they have what you are looking for?) ----> _____ distribution ----> _____ variations ----> _____ attributes ----> Size of the group (Do you have enough people to see variations if they arise?)

Proportional; common; related

*principles of measurements* - What are nominal, ordinal, interval and ratio measurements? What are the characteristics and limitations of each? 4) ____ measurements: the STRONGEST level of measurement - an interval scale w/ an absolute zero that has empirical meaning --> has all 3 properties of a real number: has a real zero (no negative values), is quantifiable, and has order --> all math and statistical operations possible; you can use one scale to compare to another ----> ex: weight (kg vs. lbs), range of motion, height (cm vs. ft/in), force **runners in race example: their time to finish in seconds

Ratio

*interpretation of clinical results* - What is predictive value? Why is it important? How is it used? How do you calculate positive and negative predictive value? --> Predictive values are determined by going _____ in the table instead of down (like sensitivity and specificity) --> Answers the question: if a pt has a positive test (+ve test), how likely is the person to truly have the disease/condition/disorder? --> positive predictive value (PPV) = a / (a+b) ----> this is in the FIRST ROW --> negative predictive value (NPV) = d / (d+c) ----> this is in the SECOND ROW ----> ex w/ appendicitis: if PPV = 61%, that is not helpful (bc only 61% of people that tested positive actually have the disease they tested positive for); but if NPV is 90%, this means ppl with a -ve test have a 90% chance of NOT HAVING appendicitis **overall note for sn/sp and ppv/npv: the negative test is usually a more accurate assessment**

across
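
A small worked example of the row-wise calculation on the card above; the 2x2 counts are hypothetical, chosen only to reproduce the 61% / 90% appendicitis figures:

```python
def predictive_values(a, b, c, d):
    """a = true positives, b = false positives, c = false negatives, d = true negatives."""
    ppv = a / (a + b)   # first row: of all positive tests, the fraction who truly have the disease
    npv = d / (c + d)   # second row: of all negative tests, the fraction who are truly disease-free
    return ppv, npv

# hypothetical counts
ppv, npv = predictive_values(a=61, b=39, c=10, d=90)
print(f"PPV={ppv:.2f}, NPV={npv:.2f}")   # PPV=0.61, NPV=0.90
```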

*developing a research question:* - Identify the independent and dependent variable in a hypothetical experimental design. --> In a *predictive study*: the presence of an independent variable *predicts the dependent* variable --> Ex: do age and gender predict the onset of lower back pain? -- Independent: ___ and ___ -- Dependent: ___ ___ ___ --> In a *comparative study*: investigates *causal* relationships --> Ex: is regular exercise more effective than anti-inflammatory medication in relieving lower back pain? -- Independent: ____ ____ OR ____-____ ____ -- Dependent: ____ of lower back pain

age; gender; lower back pain; regular exercise; anti-inflammatory meds; relief

*reliability of measurements* - What are the four methods used to establish reliability of a measurement or instrument? 3) ____ ____: two or more versions of a measurement give the same results (many versions of the same test) ----> ex: ACT, SAT, GRE, MCAT - Reliability between the participants and forms must be _____ (same group, multiple forms, compare results) --> Used often in clinical, educational, and psychological testing

alternate forms; established

*reliability of measurements* - What are variance, reliability coefficient, and correlation? 3) *Correlation:* degree of _____ between two sets of data --> shows how scores vary together, not the *extent* of agreement (ex: shoe size and height - positive correlation) --> correlation coefficient is *NOT* effective as a measure of reliability

association

*sampling techniques* - non-random sampling types: 1) Convenience Sampling: subjects based on ______. This is the ___ ___ non-probability method and includes consecutive sampling. This is the most practical approach, since subjects who meet criteria are enrolled as they arrive. --> can be man-on-the-street interviews, teachers using their students, or volunteers ----> volunteers contribute to self-selection bias; ex: if you are assessing the effect of weight training on muscle mass and choose the first 50 ppl that show up to the gym, ppl who work out will show up to the gym, and ppl who don't work out will not --> i.e., **use who is available** --> usually used in pilot studies because it allows the researcher to obtain basic data and trends regarding their study without the complications of using a randomized sample

availability; most common

*principles of measurements* - Why use measurements? Why are they important? --> to understand, evaluate, and differentiate ____ of people/objects --> it is important because it imparts ____, prevents ambiguity, supports description and decision making/conclusions, and allows evaluation of a condition/response to _____

characteristics; precision; treatment

*developing a research question:* - how do research questions originate? --> Comes from something you don't know, from wanting to resolve a ____, from wanting to clarify ____, or from ___ interests that you may have (such as specific patient populations, specific interventions, clinical theory, and fundamental policy issues (ex: mandating the flu shot for college students)) ----> What do I know? What don't I know? What do I need to find out?

conflict; information; clinical

*reliability of measurements* - What are the four methods used to establish reliability of a measurement or instrument? 1) Test-retest reliability: the instrument is capable of measuring a variable with ______ results (Ex: giving students the same test twice) - Consider... --> the stability of response --> rater/tester influence --> the interval between the test and retest (avoid fatigue, make sure the conditions are unchanged) --> testing effects - factors that change the response (and ones that should be minimized)... a) _____ effect/interference: first testing session influences scores for the retest (ex: two memory tests, student remembers things from the first test and gets a higher score on the second) b) _____ effect/interference: the test itself induces change in a measured variable (ex: walking up the stairs inducing fatigue; an 80 y/o who is sedentary will get up from a chair faster and faster each time she stands up)

consistent; carryover; testing

*validity of measurements* - construct validity - What are five commonly used methods to validate instruments used to measure constructs? 2) a. ______ ______ - two separate instruments yield similar results (not affected by different place, time, groups) ----> ex: health scale for quality of life ~~~*PLUS*~~~ 2) b. _____ _____ - low correlation between instruments that measure different traits; two instruments measure different traits (different characteristics => different results) ----> ex: IQ test vs. gross motor skills

convergent validity; discriminant validity

*sampling techniques* - What are populations and samples? Characteristics? Definitions? Identification? --> Populations: a defined aggregate of persons, objects, or events that meet a specified set of _____. These can include people, places, organizations, objects, animals, and days. --> Samples are a sub-group of the population whose purpose is to serve as a _____ group for the characteristics of the population. ----> This gives the researchers more ___, it is more ____, and it saves ____. --> For a research study, you will have the following... ----> ____ population (Reference population): the "Universe of interest" ----> ____ population (Experimental Population): a portion of the population that has a chance of selection ----> ____ Sample: Actual chosen participants.

criteria; reference; control; economical; time; Target; Accessible; Study

*validity of measurements* - construct validity - What are five commonly used methods to validate instruments used to measure constructs? 5) _____ ____ - comparison of test results with those of relevant criterion tests --> used when a new instrument is developed --> the problem is finding a suitable criterion --> a suitable approach is finding criteria related to various aspects of the construct

criterion validation

*developing a research question:* - What is the difference between inductive and deductive theories? --> Inductive theories are ___ ___ and require empirically ____ observations --> Deductive theories are ____ and have little to no prior ____

data based; verifiable; hypothetical; observations

*developing a research question:* - Hypothesis classifications: --> ____ - based on theoretical premise; predict the outcome --> ___ - based on observations; clinical practice

deductive; inductive

*principles of measurements* - What are continuous, discrete and dichotomous variables? Examples include? - *variable = an attribute that can have more than one value* - value quantity (i.e. blood pressure) vs. value quality (i.e. male/female) --> Continuous variable: any value along a continuum w/in a ____ ____; never measured exactly ----> Ex: Height, weight, strength --> Discrete variable: variable that can be defined only w/ ____ units ----> Ex: Blood pressure, heart rate, scale (1-10) - Dichotomous variable: ____ variable that can take only two values ----> Ex: sex, handedness, yes/no scale

defined range; whole; qualitative

*interpretation of clinical results* - What is effect size? What does it mean and how is it used? --> *Effect size* is a simple way of quantifying the _____ between two groups (statistics just tells us that there is a difference) --> Absolute effect size (Mean 1 - Mean 2) ~~*OR*~~ --> *Standardized effect size / "Cohen's d"* (unitless because the difference is expressed in standard-deviation units, which allows comparison across studies) - where Cohen's d = (Mean 1 - Mean 2) / pooled SD ----> Small effect size, d = 0.2 - 0.49 ----> Medium effect size, d = 0.5 - 0.79 ----> Large effect size, d ≥ 0.8 --> *note* - you WANT a ____ effect size for your pts (You want a study with an effect size greater than or equal to 1) ----> ex: an effect size of .25 indicates that the TREATMENT group OUTPERFORMED the COMPARISON group by a quarter of a *standard deviation*

difference; large (however, interventions w/ large ES may still have limited usefulness if the associated outcomes are rare and interventions with modest ES may be more meaningful if the outcomes occur more frequently)
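
A short illustration of the standardized effect size from the card above; the means and pooled SD below are made-up numbers, not data from any study:

```python
def cohens_d(mean_treatment, mean_comparison, pooled_sd):
    """Cohen's d: difference in group means expressed in standard-deviation units."""
    return (mean_treatment - mean_comparison) / pooled_sd

# hypothetical outcome scores: treatment mean 5.5, comparison mean 5.0, pooled SD 2.0
d = cohens_d(5.5, 5.0, 2.0)
print(d)   # 0.25 -> treatment outperformed comparison by a quarter of a standard deviation
```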

*sampling techniques* - random sampling types: 4) Proportional Stratified Sampling: Performs a random draw according to ______ in the accessible population, which strengthens the research design. ----> ex: how does grad students' level of education influence their ability to effectively interact w/ elderly pts? stratify by ratios: 1st year: 300 ppl, select 30; 2nd year: 300 ppl, select 30; 3rd year: 200 ppl, select 20; 4th year: 150 ppl, select 15

distribution
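
A quick sketch of the proportional draw in the grad-student example above; the 10% sampling fraction is inferred from the card's numbers and is only illustrative:

```python
def proportional_draws(strata_sizes, sampling_fraction=0.10):
    """Take the same fraction from every stratum so the sample mirrors the population."""
    return {stratum: round(size * sampling_fraction) for stratum, size in strata_sizes.items()}

# the card's example: 300/300/200/150 students per year, 10% drawn from each
print(proportional_draws({"1st year": 300, "2nd year": 300, "3rd year": 200, "4th year": 150}))
# {'1st year': 30, '2nd year': 30, '3rd year': 20, '4th year': 15}
```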

*interpretation of clinical results* - Does statistical significance between two treatment groups mean clinical significance? What needs to be considered? --> Statistical significance _____ (always) equal clinical significance; studies with small test groups usually do not/cannot show statistical significance and provide misleading results --> doesn't tell the size of a difference between two measures --> cannot easily be compared across studies --> doesn't inform individual treatment planning

doesn't

*principles of measurements* - How are measurements applied to constructs? - Constructs are abstract variables (ex: health, pain, quality of life) --> measurements are applied via _____; their values are assumed to represent the original variable (ex: assessing pain via the "Wong-Baker FACES Pain Rating Scale")

expectations

*developing a research question:* - What is the research rationale? --> A logical argument that shows how the research question was developed --> this is usually in the intro of the research study, it is a logical argument; it provides the ____ for the research and allows for interpretation of the results

framework

*sampling techniques* - non-random sampling types: 3) Purposive Sampling - advantages: --> easier to make ____ about your sample, cost effective, and time effective. - disadvantages: --> Vulnerability to errors in judgement by ____ and low levels of reliability and high levels of bias.

generalizations; researcher

*validity of measurements* - What is a criterion? Why is it useful? Important? --> A "____ ____" test that is used to validate the results of other tests by comparison of results. It's important because it provides a way for new tests to be validated.

gold standard

*sampling techniques* - non-random sampling types: 3) Purposive Sampling: occurs when the researcher ____-____ the subjects based on specific criteria through methods including chart reviews and interviews. The researcher must choose wisely in order to represent the population (however, this introduces a potential bias element) --> compared to convenience sampling, selection is based on specific choices rather than availability; this is *qualitative* research ----> ex: choosing people with stage 3 Parkinson's for researching the benefits of a stand-up walker via chart review --> i.e., **select the samples based on a preconceived purpose** --> used when researchers need to study a certain cultural domain with knowledgeable experts within it (will give qualitative and quantitative data).

hand-picks

*interpretation of clinical results* - What is the likelihood ratio, positive and negative? How are they used? How are they interpreted clinically? --> preferred way to present Sn/Sp and PPV/NPV; does a particular response change the likelihood that you have a particular disorder? -- i.e., a way to put it into good clinician- and pt-friendly language 1) LR+ = the likelihood that a +ve result was obtained in a person with the condition vs. a person without (ruling IN dz) - want a very _____ value 2) LR- = the likelihood that a -ve result was obtained in a person with the condition vs. a person without (ruling OUT dz) - want a very _____ value 3) LR = 1, *no change* in likelihood; LR > 10, I'm more persuaded you have the dz; LR < 0.1, decreased likelihood of dz *** 0.5 - 2 = unimportant LR range

high; low
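
The card gives the interpretation but not the formulas; the standard definitions in terms of Sn and Sp are sketched below (an addition, not text from the card), using the speed-bump Sn/Sp quoted on the sensitivity/specificity card as example inputs:

```python
def likelihood_ratios(sensitivity, specificity):
    """Standard definitions:
    LR+ = Sn / (1 - Sp)   -- how much a positive result raises the likelihood of disease
    LR- = (1 - Sn) / Sp   -- how much a negative result lowers the likelihood of disease
    """
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

# the speed-bump figures from the Sn/Sp card: Sn = 0.97, Sp = 0.30
print(likelihood_ratios(0.97, 0.30))   # LR+ ~ 1.4 (unimportant), LR- ~ 0.1 (strongly rules out)
```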

*validity of measurements* - construct validity - What are five commonly used methods to validate instruments used to measure constructs? 4) _____ ____ - an instrument's validity is assessed by using it to test specific hypotheses that support the theory --> provides evidence of the construct validity of the instrument ----> ex: Functional Independence Measure (FIM); function defined as burden of care, degree of assistance; develop hypotheses: age, discharge destination, severity of injury; examine the association btwn the hypotheses and FIM score

hypothesis testing

*developing a research question:* - How can the research objective of a study be stated? --> Can be stated as a... 1) ____ 2) ___ ____ 3) purpose of the _____ *This statement must be specific and concise in letting the reader know what the study is expected to accomplish*

hypothesis; specific aims; research

*sampling techniques* - What is random sampling? --> the most ____ process. ----> Characteristics include: an equal chance of selection, equal chance of characteristics, free of bias, and "considered" representative. In the long run, random sampling *most accurately* reflects the population.

ideal

*developing a research question:* - What types of questions help to define the feasibility and importance of a research question? --> Is the question ____? "So what" --> Is the question ____? you need well-defined variables --> Is the question ____? Do you have the skills, background, resources, and time to do the research?

important; answerable; feasible

*developing a research question:* - What is a hypothesis? Understand and be able to identify the different types of hypothesis statements. --> hypothesis: Declarative statement that predicts the relationship between the ____ and ____ variables ----> Specify population studied, exploratory/experimental studies ----> Characteristics: Expressed relationships, testable, based on sound rationale, not based on pure speculation

independent; dependent

*developing a research question:* - What are the phases of the literature review process? What sources are used in each phase? --> Phases: 1) phase 1 = ___ ___; sources: general knowledge, review articles 2) phase 2 = ____ studies; primary peer-reviewed papers

initial review; specific

*reliability of measurements* - What are the four methods used to establish reliability of a measurement or instrument? 4) ____ ___: the extent to which items measure various aspects of the same characteristic and *nothing* else in a test - examine the correlation among all items on a scale ----> ex: course exam; health status measure (includes: physical functions, physical limitations, pain, social function, mental health, emotional limitations, vitality, and general health perception) --> Split-half reliability: two halves are ____ forms of the ____ test; if you get 2 scores from a single session, the 2 scores should agree ----> ex: a test is split into odds and evens and given to two groups of students; the scores for both groups should correlate ----> *this is superior to test-retest and alternate forms*

internal consistency; alternate; same

*principles of measurements* - What are nominal, ordinal, interval and ratio measurements? What are the characteristics and limitations of each? 3) _____ scale: SECOND strongest in terms of levels of measurement - an ordinal scale w/ known & equal distances between intervals --> 2 properties of a real number: *order* AND *quantifiable* differences btwn levels, but *no true zero*; must set an arbitrary zero ----> ex: fahrenheit vs. celsius, calendar years **runners in race example: ranking the runners on their performance from 1-10

interval

*developing a research question:* - How can professional literature be used to develop a research question? --> It helps determine important ____; clarifies any "holes" and conflicts/disagreements; supports duplication with a ___ ___ or "further studies"; descriptive studies can lead to an experiment

issues; new population

*sampling techniques* - non-random sampling types: 4) Snowball Sampling: - advantages: --> allows for studies to take place where otherwise it might be impossible to conduct because of a ____ of participants - disadvantages: --> Usually impossible to determine the ____ error, or make inferences about populations based on the obtained sample.

lack; sampling

*sampling techniques* - random sampling types: 2) Systematic Random Sampling: based on pre-made _____. It is the least *time consuming* and is the ____ convenient. A researcher uses systematic random sampling by creating a list of the *total number* of accessible subjects, dividing that total by the *number of subjects needed* (which gives you the sampling interval), and counting down the list using that sampling interval. ----> most useful w/ names: ex: 10 total names, need 5 ppl; 10/5 = 2; so every 2nd person will be in the study

lists; most
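
A minimal sketch of the interval-counting step on the card above; the roster names are placeholders:

```python
def systematic_sample(roster, n_needed):
    """Sampling interval k = total / needed; take every k-th name down the list."""
    k = len(roster) // n_needed
    return roster[k - 1::k][:n_needed]

# the card's example: 10 names, 5 needed -> k = 2 -> every 2nd person is enrolled
roster = [f"subject_{i}" for i in range(1, 11)]   # hypothetical list of names
print(systematic_sample(roster, 5))               # subjects 2, 4, 6, 8, 10
```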

*developing a research question:* - why is this class relevant to you as a future healthcare provider? --> provide you with the information you need to successfully read the _____, apply it to evidence-based practice, pass the relevant board exam questions, and begin your journey as life-long learners

literature

*sampling techniques* - non-random sampling types: 4) Snowball Sampling: When subjects with particular characteristics are difficult to ____. This is described as a chain of events where few subjects are selected, then they suggest other possible participants, and this process continues to grow in that way. ----> ex: getting 1 person experiencing homelessness to recruit another --> i.e., **get sampled ppl to nominate others**

locate

*reliability of measurements* - Understand the three sources of measurement error. --> Tester/rater - person performing the measurement --> Instrument - Actual instrument --> Variability of ___ ___ - Parameter changes with time/conditions ----> ex: BP - the TESTER could inaccurately measure the pt's BP, the INSTRUMENT could be malfunctioning (wrist BP monitors often measure BP higher than it is), and the VARIABILITY could lie in the time of day (early in morning = lower BP) or conditions (high BP can be d/t white-coat HTN)

measured characteristic

*developing a research question:* - Hypothesis statements: states researcher's *true expectations of results* - ____ hypothesis = "no difference", ex: advil doesn't change back pain - ____ hypothesis = "will influence", ex: advil will affect the degree of back pain - ____ hypothesis = "will increase/decrease", ex: advil will *decrease* back pain - ____ hypothesis = 1 independent and 1 dependent, ex: advil will affect back pain - ____ hypothesis = more than one, ex: consumption of advil and exercise will affect the pain scale and quality of life

null; non-directional; directional; simple; complex

*reliability of measurements* - What are variance, reliability coefficient, and correlation? 2) *Reliability coefficient:* how much the _____ score varies from the ____ score --> RC = *T*rue score variance / (*T*rue score variance + *E*rror variance) --> high reliability: RC = ____; low reliability: RC = ____ ----> a low reliability is okay for a descriptive study, but not for a diagnostic study; a dx should have a score of ____ or more

observed; true; 1; 0; .9
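
A tiny numeric illustration of the RC formula on the card above; the variance values are made up:

```python
def reliability_coefficient(true_score_variance, error_variance):
    """RC = true-score variance / (true-score variance + error variance)."""
    return true_score_variance / (true_score_variance + error_variance)

print(reliability_coefficient(9.0, 1.0))   # 0.90 -> high enough for a diagnostic measure
print(reliability_coefficient(1.0, 9.0))   # 0.10 -> low; acceptable only for description
```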

*sampling techniques* - Why is recruitment important? --> Recruitment is important because not everyone who is eligible will be willing to ____. We need to have power, which will allow us the ability to detect ____ ____. - What are common recruitment methods? --> ____ invitation, telephone calls, personal _____, and public ____ (via internet, radio, etc.) - What determines how many subjects are required for a study? --> enough to generate statistical power, but not so many that it becomes unnecessarily costly or time-consuming

participate; statistical differences; written; contact; announcement

*interpretation of clinical results* - Minimal clinically important difference (MCID) indicates what? --> the SMALLEST change needed in order for the ____ to feel a difference / feel better, regardless of MDC (*more subjective*) --> this is measured based on pt and clinician response, and is also mathematically determined --> a change can be statistically significant but not exceed the MCID, so it would not be _____ significant or meaningful - i.e., it can be statistically signif. in research, but if it's not to the pt, then what good is it? --> must get the MCD first, and then consider the MCID

patient; clinically

*interpretation of clinical results* - What is number needed to treat? What does a high NNT mean? Low NNT? --> the number of people that need to be treated to get ONE intended ____ ___; thus, a high NNT = ____ effective treatment, and a low NNT = ____ effective treatment

positive effect; Less; more (i.e., you treated one person and they got better, instead of testing 10 ppl and only 1 person got better)
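
The usual calculation behind this, NNT = 1 / absolute risk reduction, is not shown on the card; a brief sketch with made-up event rates:

```python
import math

def number_needed_to_treat(event_rate_control, event_rate_treatment):
    """NNT = 1 / ARR, rounded up; ARR = control event rate - treatment event rate."""
    arr = event_rate_control - event_rate_treatment
    return math.ceil(1 / arr)

# hypothetical: 20% of untreated pts worsen vs. 10% of treated pts -> ARR = 0.10
print(number_needed_to_treat(0.20, 0.10))   # NNT = 10: treat 10 pts to help 1
```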

*interpretation of clinical results* - Speed bump and appendicitis; is this Q useful in dx? - What is sensitivity? What is specificity? Why are they important? Be able to calculate each. 1) *Sensitivity:* how many people were true ____ out of the suspected positive population --> Sn = a / (a+c) ----> this goes in the FIRST COLUMN --> "*SnNOut*" (which is 97% accurate for the speed bump, making it highly sensitive; thus, if the pt says NO to the speed bump, R/O APPENDICITIS), where Sn = ____, N = _____, and OUT = *rule-out diagnosis* 2) *Specificity:* how many people were true ____ out of the suspected negative population --> Sp = d / (b+d) ----> this goes in the SECOND COLUMN --> "*SpPIn*" (which is 30% accurate for the speed bump, so it is not useful), where Sp = ____, P = ____, and IN = *rule-in diagnosis* ----> *100% would be the goal*

positives; sensitivity; negative; negatives; specificity; positive
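
A column-wise worked example using the speed-bump figures quoted above; the raw counts are hypothetical but chosen to reproduce 97% and 30%:

```python
def sensitivity_specificity(a, b, c, d):
    """a = true +, b = false +, c = false -, d = true - (same 2x2 layout as PPV/NPV)."""
    sn = a / (a + c)   # first column: true positives among everyone WITH the condition
    sp = d / (b + d)   # second column: true negatives among everyone WITHOUT it
    return sn, sp

sn, sp = sensitivity_specificity(a=97, b=70, c=3, d=30)
print(sn, sp)   # 0.97 and 0.30 -> "SnNOut" works here, "SpPIn" does not
```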

*sampling techniques* - non-random sampling types: 1) Convenience Sampling - advantages: --> very ____ approach, since subjects who meet criteria are enrolled as they arrive. - disadvantages: --> If the period of study is too short, there will not be enough data to be meaningful. There is also a considerable amount of ____, since this sampling requires self-selection.

practical; bias

*developing a research question:* - What are the independent and dependent variables? What is the operational definition of each? --> Independent variable is the ___ and/or ____ --> Operational def: include levels --> Dependent variable is the ___ and/or ____ --> Operational def: include method of measurement ----> Operational definitions define a variable according to its unique study

predictor; cause; response; effect

*sampling techniques* - What are inclusion and exclusion criteria? Why are they important? --> Inclusion Criteria: ____ ____ that will qualify someone as a subject. ------> This creates a more homogeneous (same throughout) group and allows the researcher to *reduce generalizability* and figure out how the results apply to the *target population.* Inclusion criteria will allow for the *correct balance* between the subjects. --> Exclusion Criteria: factors that preclude participation in the study. ----> Some examples include: Language barriers (pt cannot receive instructions if they cannot speak English), can't actually participate (too busy), confounding attributes that may introduce too much ____ to your experiments (i.e. age and dz; can't study cognition on old ppl and kids, or someone w/ Alzheimer's) *Inclusion and exclusion criteria are meant to ensure patient safety during the study, provide data (justification) of subject appropriateness for the study, minimize withdrawal (and costs), and ensure that the primary end-points of the study are reached.* (ex: studying the effectiveness of an NSAID in pts w/ RA; include: ppl w/ RA; exclude: ppl already taking NSAIDs)

primary traits; variability

*interpretation of clinical results* - What is a Receiver Operating Curve? How is it constructed? Why is it used? --> an ROC curve is a graphic way to evaluate different cut scores and thresholds without having to repeatedly recalculate specificity and sensitivity. --> an optimal cut score would give the highest combined value of Sn and Sp --> It is a _____ curve - How do you interpret the ROC value in a clinical study? --> when the area under the curve (AUC) = 1, then the test has _____% ability to distinguish between two diagnostic groups (useful test) ----> ex: if the AUC were 64%, then the test has moderate ability to distinguish btwn the 2 groups

probability; 100
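
A minimal sketch of how an ROC curve and its AUC are commonly computed in practice; scikit-learn is one assumed tool choice, and the labels and test scores below are invented:

```python
from sklearn.metrics import roc_curve, roc_auc_score

# hypothetical data: 1 = has the condition, scores from the index test
y_true  = [0, 0, 1, 0, 1, 0, 1, 1, 0, 1]
y_score = [0.10, 0.25, 0.40, 0.45, 0.60, 0.30, 0.70, 0.80, 0.20, 0.90]

fpr, tpr, thresholds = roc_curve(y_true, y_score)   # each threshold is a candidate cut score
print(roc_auc_score(y_true, y_score))               # 1.0 = perfect separation, 0.5 = chance
```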

*developing a research question:* - What is the benefit to using a Specific aims statement? --> Specific aims usually include guided ___ and descriptive ____, provide ____ to the paper, and create ____ ----> basically makes it easier for the reader to understand the paper before reading it in its entirety ----> ex: the specific aims of this study are to...

questions; studies; structure; organization

*reliability of measurements* - systematic vs. random errors - ____ error: due to chance, unpredictable, inconsistent, so cannot be corrected for. --> this impacts *reliability* --> Limitation: unable to be corrected for, so you need a large data set to cancel out the error; the large data set allows the average to be a good estimate of the true score

random

*sampling techniques* - random sampling types: 1) Simple random sampling - use the ____ ____ table; it is unbiased. A researcher uses simple random sampling by selecting a random place to start on the numbers table and reading off numbers, going in any order.

random numbers

*sampling techniques* - random sampling types: 6) Cluster Sampling (Multi-stage Sampling): Involves successive ____ sampling of a series of units in a population. Advantages include its _____ and efficiency, but its disadvantages include an increased sampling ____. ----> ex: how is the average American family handling the pandemic? samples: stage 1 = counties, stage 2 = segments, stage 3 = households, stage 4 = individuals **good for survey method**

random; convenience; error

*sampling techniques* - What is the definition of probability and non-probability methods of study sample selection? --> Probability Methods are defined as: a _____ selection process (but everyone MUST MEET inclusion/exclusion criteria), not haphazard, where everyone in the accessible population has an equal chance of selection (opportunity); this has *no bias, it is the most accurate, and most ideal* --> Non-probability Methods are defined as: _____ selection and are used *clinically* out of necessity. This may result in limited generalizability.

random; non-random

*reliability of measurements* - What are the four methods used to establish reliability of a measurement or instrument? 2) ____ ____: the reliability of the tester - ____-rater reliability: the consistency the individual tester has with themselves --> control by taking a known sample, measuring repeatedly, and comparing results ----> ex: make sure a pt measuring at-home BG levels performs this over and over to ensure they know how to use the instrument - ____-rater: variation between two individuals, i.e., the extent to which 2 or more raters agree --> control by establishing intra-rater reliability and comparing results ----> ex: a person checking at-home BG levels brings a log to the dr.'s office; compare their levels to ones measured there (e.g., by a nurse)

rater reliability; intra; inter

*developing a research question:* - What is the target population? --> Target population is the group that the findings will be applied to; aka the _____ population; they are the "universe of interest" ----> Characteristics: clear, obvious, _____ ----> Ex: Onset of type II diabetes in single, working men, between ages 50-60, who sit more than 4 hours a day, do not exercise, smoke, consume 2 drinks a day, and sleep less than 6 hrs/night - Accessible population? --> Accessible population is the population that you can ___ ____; the "_____ population" ----> Ex: individuals who live in Downers Grove, 30 mins away from the city - Study sample? --> Study sample consists of the ____ participants in the study

reference; complete; recruit from; experimental; actual

*sampling techniques* - random sampling types: 3) Stratified Random Sampling: occurs by identifying ____ population characteristics and partitioning members of a population into ____, non-overlapping subgroups (strata) based on those characteristics. ----> ex: how does grad students' level of education influence their ability to effectively interact w/ elderly pts? stratify by: first vs. second vs. third vs. fourth year students

relevant; homogenous

*reliability of measurements* - Reliability and validity are the 2 requirements of sound measurements. Understand the concepts of reliability and validity concerning measurements. --> _____ = *consistency*; extent to which a measurement is consistent and free from error; it is reproducible (under the same conditions), fundamental (confidence), and dependable ----> it is assessed by checking the consistency of results *across time*, *across different observers*, and *across parts of the test* itself ----> this measurement is not always valid: the results might be reproducible, but they're not necessarily correct --> ______ = *accuracy*; ensures that a test is measuring what it is intended to measure ----> it is assessed by checking how well the *results correspond* to *established theories* and other *measures* of the *same concept* ----> this measurement is generally reliable: if a test produces accurate results, they should be reproducible. - can a measurement / tool / instrument be reliable but not valid? *yes* - ex: a pulse ox measures SpO2 and HR; it is *reliable* to measure both of those things, but it is not *valid* to measure BG

reliability; validity

*reliability of measurements* - reliability vs. validity - You measure the temperature of a liquid sample several times under identical conditions. The thermometer displays the same temperature every time, so the results are ____

reliable

*validity of measurements* - What is the relationship between validity and reliability? Does one dictate the other? Explain. --> *validity* implies that a measurement is ____ (but an invalid test can still be reliable) ----> ex: a pulse ox is reliable for measuring O2 sat and HR ONLY ----> it is harder to establish the validity of surveys, interviews, or questionnaires --> *Reliability* sets the limits of validity but does not guarantee _____ ----> *Low reliability -> low validity* ----> *High reliability -> not necessarily high validity* (ex: A1c is a more RELIABLE overall measure of BG than a BG monitor)

reliable; validity

*developing a research question:* - What are theories? --> A set of interrelated concepts, definitions, or propositions that specifies relationships among variables and represents a systematic view of a specific phenomenon. ----> they're not always scientific; scientific theories require ____ ___ - How are they used? --> They are used to summarize, explain, predict, and develop new knowledge; they are an important source for developing new research questions - why can they be used as a catalyst for new research? --> possibly by assessing other variables (ex in class: prior to 10 years ago, the cause of obesity was thought to be just cal in > cal out = gain weight; now they are seeing that bacteria in the GI tract can contribute to obesity)

repeated verification

*sampling techniques* - What are the characteristics of biased sampling? --> Biased sampling can occur when the individuals selected for a sample over- or under-_____ certain population attributes that are related to the phenomenon under study. --> They can be divided into two different groups... 1) ____ bias: purposeful selection (example: choosing less ill patients for a new drug trial) 2) ____ bias: not overtly aware (example: surveying people on the street)

represent; Conscious; Unconscious

*reliability of measurements* - What are variance, reliability coefficient, and correlation? 1) *Variance:* the measure of the variability of scores within a ____; this is how far a number is from the MEAN VALUE --> a large variance => greater *distribution* of scores

sample

*sampling techniques* - non-random sampling types: 2) Quota Sampling: - advantages: --> relies on ___ ____ to choose samples - disadvantages: --> impossible to find sampling ____ (because it is non-random)

set criteria; error

*sampling techniques* - non-random sampling types: 2) Quota Sampling: Incorporates elements of ____ because the researcher will control for _____ factors, the researcher guides the selection process, and the researcher gathers subjects to reflect each stratum, wherein each stratum reflects its proportion in the population. This sampling calls for volunteers and will stop when the numbers (i.e. the quota) are reached. --> i.e., **keep going until the sample size is reached** --> used when time is limited, a sampling frame is not available, the research budget is very tight, or when detailed accuracy regarding selection is not important.

stratification; confounding

*reliability of measurements* - systematic vs. random errors - _____ error: the error is consistent, predictable, and can be corrected for --> Ex: a scale that always reads 3 lbs heavier; to correct this, one could use the "gold standard" scale, aka the criterion, which would be a scale at the doctor's office --> this is not a problem for reliability --> limitation: affects *accuracy*

systematic

*reliability of measurements* - How can measurement error be minimized? --> Planning study and measurements that need to be taken, careful ____, clear operational definitions of ____, _____ equipment, and controlling measurement conditions

training; variables; maintaining

*validity of measurements* - 3 important questions to ask yourself about validity: --> *Validity* (i.e., does the instrument you're using measure the intended thing you are trying to measure?) ----> a test that is valid should.... 1) discriminate btwn individuals w/ and w/out the ____ of interest (ex: XR to see if a person broke a bone) 2) detect a change in the quality/magnitude of the variable over time/after treatment (ex: BP monitor) 3) be able to allow providers to make predictions/dx from the outcome (ex: EKG)

trait

*interpretation of clinical results* - Understand the significance of a placebo group compared to a no treatment group in comparison to the treatment group for determining a statistically significant difference. --> *Statistically significant* results of studies are often incorrectly based on the difference between the effect of ____ vs. ____ ____ at all; this shows little difference clinically, whereas a difference between a placebo group and a treatment group would show there is one. *A good statistical analysis will include all three* --> additionally, need to ensure the study is sufficiently _____ (i.e., did you have enough participants to ensure this result is significant)

treatment; no treatment; powered

*developing a research question:* - What is a Pilot study? Why are pilot studies useful? --> a "___ ___," where you would "test out" your research idea on a few people/animals before carrying out a big study on 100+ people --> useful because they test the ____ of a project and clarify decisions about _____ definitions and procedures

trial run; feasibility; operational

*interpretation of clinical results* - What is minimal detectable difference (MDD) or minimal clinical difference (MCD)? How are they used clinically? --> MCD is the amount of change in a variable that can be determined to reflect a ____ ____ for a specific population ----> this is *objective* ----> Ex. a 6 min walk test tests mobility; if you keep testing and the pt gets the same # every time, your pt isn't getting better; however, a healthy person is not going to change that much; how much change has to occur to prove that it's above measurable error? Usually, with COPD, a pt needs to walk 53 m farther to show a signif. change ----> you are checking the RESPONSIVENESS to the change --> MCD is necessary to make sure it is not just measurement error (for ex., if a healthy individual walked for 6 min before lunch vs. after lunch, there would be a small difference that would be considered measurement error)

true difference

*reliability of measurements* - What is the concept of generalizability? - Each measurement is the best estimate of a ____ ____ under the given testing conditions with the given population --> Reliability is *not an inherent quality* of an instrument; it instead exists only within the *context* in which it was *tested*

true score

*reliability of measurements* - Understand the relationship between observed value and true value. --> X = T +/- E ----> where X = observed score, T = ____ value, E = ____ value - what are the 2 types of measurement errors? ___ and ____

true; error; systematic; random

*interpretation of clinical results* - What are confidence intervals? Why are they important? --> a confidence interval (CI) is a range of scores in which the ____ score of a variable is estimated to lie --> CIs help determine the meaningfulness of study findings by establishing the precision of the findings; when bias is minimized, confidence is strong, statistical significance is strong, and the CIs are _____ ----> Ex: a confidence interval of 0.1-0.5 is stronger than a confidence interval of 0.1-1.0 ----> should be at 90-95%, i.e., you are 90-95% confident that the true mean lies w/in the range ----> error is usually reported as standard deviation (SD) or standard error of the mean (SEM - which takes SD and sample size into account)

true; narrow
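
A small sketch of one common way a CI for a mean is computed (mean ± z·SEM with SEM = SD/√n); the mean, SD, and n below are placeholders, not study data:

```python
import math
from scipy.stats import norm

def confidence_interval(mean, sd, n, level=0.95):
    """Approximate CI for a mean: mean ± z * SEM, where SEM = SD / sqrt(n)."""
    sem = sd / math.sqrt(n)          # SEM takes both SD and sample size into account
    z = norm.ppf(0.5 + level / 2)    # ~1.96 for a 95% interval
    return mean - z * sem, mean + z * sem

# hypothetical: mean effect 0.3, SD 0.5, n = 100 -> a narrow interval around the estimate
print(confidence_interval(0.3, 0.5, 100))   # roughly (0.20, 0.40)
```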

*sampling techniques* - random sampling types: 5) Disproportional Sampling: Used when strata are of ____ sizes. ----> ex: a cohort of nurses has a 3:17 male-to-female ratio; if you need to do a nurse survey, correct for proportion (i.e., recruit equal numbers of males and females), then restore the proportional weight of each stratum

unequal

*reliability of measurements* - reliability vs. validity - If the thermometer shows different temperatures each time, even though you have carefully controlled conditions to ensure the sample's temperature stays the same, the thermometer is probably malfunctioning, and therefore its measurements are not ____

valid

*principles of measurements* - How are numbers used? --> Numeral: symbol/label and qualitative ____ (ex: football player's jersey #) --> Number: ____ (ex: scale)

value; quantitative

*developing a research question:* - what factors are involved in evidence-based clinical decisions? --> Clinical expertise --> Patient ____ --> Clinical _____ --> Patient management --> Clinical ____ --> Best research evidence

values; question; conditions

