Final- questions from quizzes


How to enhance Credibility

*Prolonged engagement *Persistent observation *Reflexive Strategies *Triangulation *Audit trail *Member checking *Comprehensive field notes *Peer debriefing -Audiotaping -Data Saturation -Intercoder checks -Negative case analysis -Document quality enhancing efforts

Key Decisions -Sequencing -Prioritization -Integration

*Sequencing -Sequential= data collected at diff times -Concurrent= data collected at same time *Prioritization= which is emphasized? equal vs dominant *Integration = how the methods are combined/ integrated

Ethnographic Analysis

- Analysis starts in the field -Look for patterns in behaviors/thoughts of participants (compare one vs another, analyze simultaneously) ***acquire a deeper understanding of culture under study

Two types of CRITERION-RELATED VALIDITY

-2 types: *Predictive validity: the instrument's ability to distinguish people whose performance differs on a future criterion (when a college correlates incoming students high school grades with subsequent college GPAs, look at predictive validity of high school grades for college performance) *Concurrent validity: the instrument's ability to distinguish individuals who differ on a present criterion

1)Transcribing

-Audiotapes of interviews, transcribed word for word
**Errors
-Deliberate: trying to make things fit the research
-Accidental/inadvertent: typos, omissions
-Unavoidable: body language/nonverbal cues, variation in tone of voice

Embedded

-Concurrent or Sequential -One type= supportive, other type is primary source -Either qual or quan is dominant -Difficult to distinguish from triangulation

Specific MM Designs

-Convergent Parallel -Embedded -Explanatory -Exploratory

Criteria for "never events"

-Unambiguous: clearly identifiable and measurable -Usually preventable -Serious: resulting in death, loss of a body part, disability, or loss of function -Adverse and indicative of a problem in the safety system of the facility -Important for public credibility or accountability

Theoretical Sampling

-Used in Grounded Theory→ aim to discover categories and their properties and to offer new insights into interrelationships that can occur in substantive theory -What data to collect next and where to find those data to develop an emerging theory → What groups or subgroups should the researcher turn to next? -Focused approach → researcher simultaneously collects, codes, and analyzes data, then decides what data to collect next and where to find them in order to develop the theory as it emerges

Notations & Diagramming

-Used to convey sequencing and prioritization decisions -Dominant strand = uppercase (QUAL/quan); if equal, both uppercase (QUAL/QUAN) -Sequence indicated by symbols: *QUAN--> qual (sequential) *QUAN + qual (concurrent) -Embedded: *QUAN(qual) = qualitative methods embedded within a quantitative design

An instrument is considered to have good stability if the reliability coefficient is ___

0.8

Organization of QUAL data

1) Begin with best possible quality data (careful transcriptions, ongoing feedback, continuous efforts to verify accuracy) 2) Develop category scheme 3) Coding 4) Organize

QUALITATIVE sampling

1) Convenience (volunteer) 2) Snowball (network) 3) Theoretical 4) Purposive

IOM recommendations for healthcare

1) Safe 2) Effective 3) Pt centered 4) Timely 5) Efficient 6) Equitable

Grounded Theory Sampling Evolvement

1) where to start? convenience/ snowball 2) Maximum variation early on to gain insight 3) Ongoing adjustment/ emerging conceptualizations 4) Sample until data saturation 5) Final sampling to refine and strengthen the theory

Advantages

1) Objectivity: based on data & statistical analysis 2) Explicit decisions: not based on personal decisions; based on protocols that include statistics and procedures 3) Increased power: probability of detecting a true relationship; even if some studies show nonsignificant results, MA can conclude the relationship is real

Metasynthesis Data analysis PATERSON, THORNE, CANAM, JILLINGS APPROACH

3 components -Meta-data analysis--> analyzing processed data -Meta-method--> study of the methodological rigor -Meta-theory--> analysis of theoretical underpinnings

Advantages of Measurement and Rules

Advan: removes guesswork, provides precise info, less vague than words -Rules must be invented, rules should be meaningful

Descriptive phenomenology steps

BRACKET INTUIT ANALYZE DESCRIBE

Hospital Value Based Purchasing Program

Began FY 2013 for discharges after 10/1/2012. CMS will make value-based incentive payments to acute care hospitals based on how well the hospital performs on, or improves on, certain quality measures --AMI, heart failure, PNA, patients' experience of care, surgical care

Interpreting hypothesized results

Chance that the observed relationship resulted by chance = TYPE 1 ERROR = FALSE POSITIVE

Validity coefficient

CRITERION VALIDITY -Magnitude of the coefficient is an estimate of instrument validity Ex) Measuring professionalism in nursing: look at who attended more conferences, because more conferences most likely = more professionalism

The process referred to as "constant comparison" involves:

Comparing data segments against other segments for similarity and dissimilarity

Advantages of MM

Complementary Practical Incremental Enhanced validity

Reflexivity

Critical self-reflection about one's own biases

What are consequences of listwise deletions?

Decreased sample size and decreased magnitude of the effect

Statistical Conclusions Validity

Demonstrating that a relationship exists btwn the IV and the outcome contributes to SCV -Statistical power = ability of the design to detect true relationships among variables

Data Analysis Continued (BASICALLY WORDED AGAIN)

Determine the pooled study effect -Based on weighted average of individual study effects -Weighted average based on standard error for each study --> larger the std error, less weight assigned to the study -Used for final statistical analysis (determine if significant difference in common metric btwn intv and control groups)

Designing the Meta-analysis

Develop sampling criteria -Substantive: ID what variables & population to be studied -Methodological: ID what types of study designs to include -Practical: stipulate language, publication timeline -Consider statistical heterogeneity: vastly different study findings are not a good fit

Ethnographic Analysis -Domain -Taxonomic -Componential -Theme

Domain- units of cultural knowledge (broad categories focusing on objects and events; cultural meaning of terms and symbols) Taxonomic- determine the # of domains; develop a taxonomy (system of classifying) Componential- examine relationships among domains Theme- uncover cultural themes

-Participant observation -in-depth interviews with key informants -Extensive field work

ETHNOGRAPHY

Difference between obtained score and true score

Error of measurement

Bias

Factors that create distortions and undermine researchers' efforts to capture truth in the real world

-Seek to discover core variable -Basic social processes that explains how people resolve things -Constant comparison

GROUNDED THEORY

Fieldwork issues

Gaining trust Pacing data collection to avoid being overwhelmed by intensity of data Avoiding emotional involvement with participants Maintaining reflexivity

Data analysis in MM major goals

INTEGRATE the two strands, MERGE the data

Scrutiny of data

Inquiry audit

Mixed Methods Goal

Integrate findings and draw conclusions using both qual and quan data!!!

Prolonged engagement establishes what

Integrity

4) Organize data *Data management in qualitative research * Data analysis in qualitative research

Manual- conceptual files -Data management is REDUCTIONIST (converts MASS amts of data into smaller, manageable segments) -Data analysis is CONSTRUCTIONIST (putting segments together into meaningful data)

Metasynthesis Data analysis Noblit & Hare approach

NOBLIT & HARE APPROACH = Meta-ethnography -Lists key themes or metaphors across studies and translates them into each other ***Synthesis should be interpretive NOT aggregative -Construct interpretations vs descriptions

Content Validity

New instruments measured by content validity index (CVI)

N O I R

Nominal: lowest level, using numbers simply to categorize attributes (ex: gender, blood type) Ordinal: ranks based on relative standing on an attribute, does not tell us how much greater (ex. Military rank) Interval: rank based on an attribute and specify the distance between, DO NOT HAVE A MEANINGFUL ZERO (ex: standardized IQ test, temperature) Ratio: highest level, have a meaningful zero, provide information about the absolute magnitude of the attribute (ex: weight, BP)

Quan VS Qual

Objective VS Subjective
One reality VS Multiple realities
Reduction, control, prediction VS Discovery/description/understanding
Measurable VS Interpretive
Mechanistic VS Organismic
Parts = whole VS Whole > parts
Report statistical analysis VS Rich narrative
Researchers separate VS Part of the process
Subjects VS Participants/informants
Context free VS Context dependent

Reporting Guidelines

PRISMA- writing up a systematic review of RCTs MOOSE- guides reporting of meta-analyses of observational studies

Feedback from external reviewers

Peer review and debrief

Limitations to Mixed Methods

Practical issues- time consuming, expensive, researchers must be competent in both methods

Transferability

Researchers must provide thick vivid descriptions

Evaluate tools based on sensitivity and specificity for ***screening and diagnostic instruments ***

Sensitivity: ability of a measure to correctly identify a "case"; rate of yielding true positives
-True positives (# who did report smoking) / ALL REAL POSITIVES
-Lowered by underreporting
Specificity: ability of a measure to correctly identify NON-cases
-True negatives (# who reported not smoking) / ALL REAL NEGATIVES (check cotinine levels to see who speaks the truth)
-Increases with less OVERreporting
(Inverse relationship: when the sensitivity of a scale is increased to detect more TRUE POSITIVES, the # of false positives increases and specificity decreases)
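A minimal Python sketch, using hypothetical counts (not from any study above), showing how sensitivity and specificity fall out of a 2x2 table of self-reported smoking vs. cotinine-confirmed status:

# Hypothetical 2x2 table: self-report vs. cotinine-confirmed smoking status
true_pos = 40   # reported smoking, cotinine-positive
false_neg = 10  # denied smoking, cotinine-positive (underreporting)
true_neg = 45   # denied smoking, cotinine-negative
false_pos = 5   # reported smoking, cotinine-negative

sensitivity = true_pos / (true_pos + false_neg)  # true positives / all real positives
specificity = true_neg / (true_neg + false_pos)  # true negatives / all real negatives

print(f"Sensitivity: {sensitivity:.2f}")  # 0.80
print(f"Specificity: {specificity:.2f}")  # 0.90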

Trustworthiness

Separate set of evaluation criteria used in qualitative research that parallel reliability and validity in QUANTITATIVE RESEARCH -Credibility -Dependability -Confirmability -Transferability -Authenticity

RELIABILITY OF INSTRUMENT

Stability Internal Consistency Equivalence

Types of never events

Surgical events Product or device events Patient protection events Care management events Environmental events Criminal events

Interpreting NONsignificant results

TYPE 2 ERROR= FALSE NEGATIVE -USUALLY result of inadequate power (sample size)

equivalence

The degree to which 2 or more independent observers agree about scoring Can be enhanced by careful training and specification of clearly defined, non overlapping categories -Interrater reliability 2 or more observers or coders make independent observations and then an index of agreement is calculated

A researcher decides to "fill in" missing values. Which statement is true about Imputations?

Use typical values of the existing data to fill in the missing values

Cleaning the data includes what?

Visually verifying the accuracy of the data

You are developing an instrument to measure nurses' attitudes towards AIDS. You prepare the instrument and sent it out to 25 experts in the field to examine and give you feedback on the instrument. This would be considered a form of:

content validity

A researcher has placed a notice in the local newspaper inviting people to participate in a qualitative study. what type of sampling is this?

convenience sampling

When examining the trustworthiness of a qualitative study, the investigator notes that when the inquiry was replicated with the same participants and setting, the results were the same. This refers to which type of trustworthiness?

dependability

In ethnographic research methodology, a description of a wedding from the members' point of view would be an example of:

emic view

which type of instrument reliability is - Agreement about scoring with two or more independent observers

equivalence

interrater agreement in metaanalysis

extraction and coding of information should be completed by two or more people

true/false A Cronbach's alpha of 0.57 would indicate the items on the instrument are reliably measuring critical attributes.

false

An instrument is considered to have good internal consistency if the cronbach alpha level is _____

higher than 0.7

a stool sample is an example of which type of measure

in vitro

Cronbach's alpha is used to determine which of the following attributes of an instrument?

internal consistency

Is "the transfer of a patient to another facility without prior acceptance" a never event?

no

In grounded theory studies, the initial process of breaking down, categorizing, and coding the data is often referred to as:

open coding

The experience of having a child with leukemia is best suited for which qualitative method?

phenomenology

Which characteristic is NOT associated with qualitative design?

rigid

Evaluating screening and diagnostic instruments

self-report observational biophysical

which theoretical category does this quote fall into? "In my second semester, it became evident that I needed to do a better job of allocating specific times for studying and had to get support from my family and work."

setting priorities

Interpreting Mixed Results

some hypotheses supported and others are not

A researcher is concerned about the reliability of a new instrument. A test-retest procedure is performed and yields a reliability coefficient of .85. The researcher then concludes the instrument has adequate reliability regarding which of the following?

stability

Test-retest reliability is an example of...

stability

which type of instrument reliability is - Similar scores obtained on separate occasions.

stability

The degree to which the findings of a qualitative study can be extrapolated to other settings is referred to as:

transferability

True/false An instrument is considered reliable when it consistently measures an attribute.

true

True/false.. Inferences are valid only to the extent that the researchers have utilized rigorous methods

true

true/false One of the goals of Participatory Action Research is to empower those who are vulnerable to the control or oppression of a dominant culture.

true

Mixed Methods

***Pragmatist Paradigm -Research question drives the inquiry -Reject the forced choice between positivist and constructivist modes -Both paradigms are important -Goal --Integrate findings and draw conclusions using both quant & qual data -Gives rise to meta-inferences

How to enhance Dependability

**Need to have this to be credible *Triangulation *Member Checking *Inquiry audit -Careful documentation, decision trail -Stepwise replication

Exploratory

**SEQUENTIAL 1) qual = exploring 2)quant (builds on qual data) = measures or classifies *initial in depth exploration of phen

How to enhance transferability

*Comprehensive field notes *Data saturation *Document quality enhancement efforts *Thick vivid description

Sample Differences in: 1) Ethnography 2) Phenomenology 3) Grounded Theory

*Ethnography -25-50 informants -Mingling with culture members -Multiple interviews with a smaller # of key informants -Purposive= based on role in culture *Phen - <10 - Experience phen of interest -Able to articulate -Use artistic/literary sources in INTERPRETIVE *Grounded -20-30 -Theoretical -Select informants who can best contribute to the evolving theory -Emerging conceptualizations help inform the theoretical process -Sampling/data collection/analysis/theory construction occur concurrently

How to enhance confirmability

*Investigator and theory triangulation *peer debriefing *Inquiry audit

Enhance Authenticity

*Prolonged Engagement *Persistent observation *Reflexive strategies (journal or diary) *Thick, vivid descriptions

3) Coding the Data

-Identify text corresponding to a particular category; requires multiple readings -Nonlinear: paragraphs from transcriptions may contain elements relating to 3 or 4 categories -May require revisions of categories -Develop a coding scheme

Explanatory

-**SEQUENTIAL 1) Quant 2) Qual (builds on quan) **use when quant results are surprising or unexpected, or when sample has numerous outliers difficult to explain

Stability

-The extent to which similar scores are obtained on separate occasions
-Assessed through test-retest reliability: researchers administer the same measure to a sample twice, then compare scores
-Reliability coefficients can be an indicator; the higher the coefficient, the more stable the measure
-Problem: many traits change over time regardless of the instrument's stability (attitudes, mood, etc.)
-Reliability coefficients represent the proportion of true variability to obtained variability; usually range from 0 to 1.0 and should be at least 0.70 (0.80 preferable)
-Can be improved by making the instrument longer (adding items)
-Are lower in homogeneous than in heterogeneous samples
-Lower coefficients reduce statistical power and lower statistical conclusion validity
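A minimal Python sketch (hypothetical scores) showing that the test-retest reliability coefficient is simply the correlation between two administrations of the same measure:

import numpy as np

# Hypothetical scores from the same 6 people on two occasions
time1 = np.array([12, 15, 20, 22, 25, 30])
time2 = np.array([13, 14, 21, 24, 26, 29])

# Test-retest reliability = correlation between the two administrations
r = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest reliability: {r:.2f}")  # near 1.0 -> a stable measure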

Inference

-Act of drawing conclusions based on limited information, using logical reasoning (Interpretation= making a series of inferences) -Inferences are valid to the extent that researchers have used rigorous methods -Start with skeptical attitude and null hypothesis -Statistical conclusion validity - extent to which correct inferences can be made about the existence of real relationships between key variables....also affected by sampling decisions.

Snowball (Network) sampling

-Asking early informants to make referrals for other participants -Can specify characteristics that they want the new participants to have -Enhances building a trusting relationship -Cost-effective and practical -Sample may be related to a small network of acquaintances ***Risk of referral bias

Reflexivity

-Awareness that the researcher as an individual brings to the inquiry a unique background, set of values, and a social and professional identity that can affect the research process and cause bias. -Reflexive journal or diary **helps with credibility/authenticity

CONSTRUCT validity

-Basically hypothesis testing, linked to theoretical conceptualizations *Logical analysis and testing relationships predicted on basis of well grounded conceptualizations

Hospitsl QI Stimulus Organizations

-Centers for Medicare and Medicaid Services (CMS): important for reimbursement since they are the largest payer -National Quality Forum (NQF) -Leapfrog Group: consumer watch -Legal-based -Hospital Compare -Joint Commission

Advantages of Mixed Methods

-Complementary: avoids the limitations of a single approach -Practical: given the complexity of the phenomena, it is practical to use whatever methodological tools are best suited to addressing pressing research questions -Incremental: progress on a topic tends to be incremental; qualitative findings can generate hypotheses to be tested quantitatively, and quantitative findings may need clarification through in-depth probing -Enhanced validity: confidence in inferences is enhanced when a hypothesis is supported by multiple and complementary types of data; triangulation of methods can provide opportunities for testing alternative interpretations of the data and for examining the extent to which the context helped to shape the results

After an effect size is computed for each study, what happens next?

-Compute a pooled effect estimate: a weighted average of the individual effects based on the STANDARD ERROR
WHY? Effects from individual studies are pooled to yield an ESTIMATE of the population effect size
-The larger the standard error in a study, the less weight assigned
-Greater weight assigned to larger studies
This is called the INVERSE VARIANCE METHOD
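A minimal Python sketch (hypothetical effect sizes and standard errors) of the inverse variance method: each study is weighted by 1/SE^2, so larger standard errors get less weight:

# Hypothetical effect sizes and standard errors from three studies
effects = [0.40, 0.55, 0.20]
std_errors = [0.10, 0.25, 0.15]

# Inverse variance method: weight = 1 / SE^2
weights = [1 / se ** 2 for se in std_errors]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
print(f"Pooled effect estimate: {pooled:.3f}")  # ~0.36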

Quality Strategies Related to Coding and Analysis -Investigator and theory triangulation -Confirming Evidence -Disconfirming Evidence -Peer review and debriefing -Inquiry Audit

-Confirm findings with other groups or sources -Systematic search for data that will challenge an emerging categorization -Peer review and debriefing (opportunity for objective feedback from external reviewers) **credibility and confirmability

CONSORT guidelines

-Consolidated Standards of Reporting Trials -Adopted by major medical and nsg journals to help readers track participants; **important to look at when interpreting results** -Ensures accurate reporting of RCTs -Helps researchers facilitate interpretation by carefully documenting methodologic decisions and reported outcomes

Grounded Theory Analysis and coding with glaserian approach

-Constant COMPARISON to ID commonalities -Substantive codes (substance of the topic under study) --> open AND selective -Open: first step; captures what's going on --> data broken down and similarities and differences explored -Selective: code only those data related to the core category = BASIC SOCIAL PROCESS

Construct Validity Methods of assessing construct validity

-Construct: A key criterion for assessing research quality. Most often linked to measurement and is concerned with: What is the instrument really measuring? Does it adequately measure the construct of interest? -Known-groups technique: evidence of contrast validity. Instrument is administered to groups hypothesized to differ on the critical attribute (validating the measure of fear of childbirth, contrast the scores of primiparas and multiparas) -Hypothesized relationships: offers supporting evidence -Factor analysis: statistical procedure. Method for identifying clusters of related variables (dimensions underlying a broad construct) -Multitrait-multimethod matrix method (MTMM): validation tool involving the concepts of convergence and discriminability

Systematic Reviews

-Cornerstone of EBP -A review that methodologically integrates research evidence about a specific research question using careful sampling & data collection procedures that are spelled out in advance in a protocol

Criterion related validity

-Criterion-related validity: the degree to which the instrument CORRELATES!! with an external criterion. Validity coefficient: calculated by correlating scores on the instrument and the criterion. Range 0-1.0 (higher indicates greater criterion validity and > 0.70 is desirable) **objective criterion to compare a measure **
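A minimal Python sketch (hypothetical scale scores and a hypothetical criterion) showing that the validity coefficient is just the correlation between instrument scores and the external criterion:

import numpy as np

# Hypothetical data: new professionalism scale vs. conferences attended (criterion)
scale_scores = np.array([55, 60, 62, 70, 75, 80, 85])
conferences = np.array([1, 2, 2, 3, 4, 5, 6])

validity_coefficient = np.corrcoef(scale_scores, conferences)[0, 1]
print(f"Validity coefficient: {validity_coefficient:.2f}")  # > 0.70 is desirable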

Guiding principle of sample sizes in qualitative research

-DATA SATURATION** = no new info obtained, redundancy is achieved Depends on: -Scope of research question (broader= larger sample) -Quality of Data -Type of Sampling Strategy (max variation= larger sample)

2) Developing a category scheme

-Data must be organized and reduced -careful reading -ID concepts/ clusters of concepts -Descriptive= researchers use fairly concrete concepts -Theory generating= researchers use more abstract concepts -Label the category

Limitations to deletions

-Decreases sample size -Decreases magnitude of effect -Risk of Type II error (accept null when it is false) -Says no difference, but there really is

Purposive -Maximum variation -Homogeneous sampling -Extreme (deviant) case sampling -Typical case sampling -Criterion sampling -Confirming Cases -Disconfirming cases

-Deliberately choosing the cases or types of cases that will best contribute to the study
-Strive to select sample members purposely based on the information needs that emerge from the early findings
1) Maximum variation sampling: purposefully select persons with a wide range of variation on dimensions of interest; seek participants with diverse perspectives and backgrounds; detection of common patterns emerging despite diversity in the sample is a major strength
2) Homogeneous sampling: deliberately reduces variation & permits a more focused inquiry; may use this approach if researchers wish to understand one group of people really well; often used to select people for group interviews
3) Extreme (deviant) case sampling: also called outlier sampling; provides opportunity for learning from the most extreme informants, the "exceptions to the rule"; most often used as a supplement to other sampling strategies; extremes are used to develop a richer understanding
4) Typical case sampling: selection of participants who illustrate or highlight what is typical or average; can help a researcher understand key aspects of a phenomenon as they are manifested under ordinary circumstances
5) Criterion sampling: studying cases that meet a predetermined criterion of importance; Ex: in studying patient satisfaction with nursing care, sampling only those who expressed a complaint at discharge
6) Confirming cases: cases that fit the researcher's conceptualizations and strengthen credibility
7) Disconfirming cases: cases that do not fit and challenge the researcher's interpretations; offer new insights as to how the original conceptualization needs to be revised

Phenomenological Analysis

-Describe the phen of interest -Collect participants' descriptions of the phen --> read them --> extract significant statements --> spell out the meaning of each --> organize meanings into clusters --> write exhaustive descriptions --> validate with participants (member checking) Interpretive: hermeneutic circle --> in order to reach understanding, continual movement between parts and whole

Quality Strategies Related to Presentation

-Disclosure of quality enhancement strategies -Thick, contextualized descriptions -Researcher credibility

Magnitude of Effects

-Effect size= difference btwn tx and non tx group --> INDEPENDENT OF SAMPLE SIZE -How powerful are the effects and are they clinically important? (statistical sig. Alone doesn't mean results are meaningful) **Important in addressing EBP question

Never Events

-Errors in medical care that are clearly identifiable, preventable, and serious in their consequences for patients, and that indicate a real problem in the safety and credibility of a health care facility -Account for 2.4 million extra hospital days, $9.3 billion in excess charges, and 32,600 deaths -CMS will no longer pay for many preventable errors -Public disclosure -Joint Commission mandates a Root Cause Analysis (RCA)

Quantitative Data Interpretation Credibility

-Extent to which findings are valid: are the methods sufficiently rigorous so the evidence can be believed? -Requires careful analysis of the study's methodologic and conceptual limitations and strengths **Proxies and interpretation: how plausible is it that the actual sample reflects the recruited sample, accessible population, target population, and population construct? Population construct --> target population --> accessible population --> recruited sample --> actual sample

Authenticity

-Extent to which researchers show a range of different realities and convey the feeling tone of participants' lives as they are lived -Distinct to the constructivist paradigm -Enables readers to develop a heightened sensitivity to the issues being depicted

Types of Validity Face Content Criterion-related Construct (discusses first two)

-Face validity: refers to whether an instrument looks like it is measuring the appropriate construct. Based on judgment (no objective criteria for assessment). Least scientific measure of validity. -Content validity: the degree to which an instrument has an appropriate sample of items for the construct being measured. New instruments are evaluated by experts using the content validity index (CVI); a CVI > 0.90 is recommended

Transferability

-Findings can be extrapolated to a different group of ppl **No desire to generalize to target population

Precision of parameter estimates

-How precise is the estimate of the effect? -How confident are we that the sample mean contains the population mean? -P values = how strong the evidence is that the study's null hyp is false ****Confidence intervals (CIs) = how precise or imprecise the study results are = strength of evidence***

Sampling Strategies in MM -Identical -Nested -Parallel -Multilevel

-Identical =same participants are in both strands; provides opportunity for convergence of both data sets -Nested= some of the participants from one strand are in the other strand; most often used in MM -Parallel =participants are either in one strand or the other, but drawn from same population -Multilevel participants =are not the same, and are drawn from different populations at different levels in a hierarchy

Enter and Verify Data

-Import from excel into SPSS -Use data editor (set up/define variables, label using various coding schemes) -Visually verify entries

Calculation of Effects and Data Analysis

-Index that encapsulates in a single number the relationship between the IV and the outcome (DV) variables in each study -Most studies involve a comparison among 2 groups, the intervention & control groups. We want to know what the mean difference was for each group so we can combine them & average to a single value or index.

Methods of Assessing Construct Validity Known-groups Hypothesized relationships this one Next card Multitrait-multimethod matrix method (MTMM) Factor Analysis

-Known-groups technique= evidence of CONTRAST VALIDITY= instrument measuring different groups thought to differ on the critical attribute! EXAMPLE: fear of childbirth, assess primiparas and multiparas -Hypothesized relationships= examining predicted relationships -Ex: based on theory, construct A is related to construct B; scales X and Y are measures of the constructs -IT CAN BE INFERRED THAT X AND Y ARE VALID MEASURES OF THE CONSTRUCTS

Data Collection in QUALITATIVE

-Most common= in-depth interviews -Researchers go out into the field (can be hard to gain trust / can get overwhelmed with the amt of data / maintaining reflexivity (cause and effect interchangeable))

MetaSYNTHESIS

-NOT a summary of research findings or a concept analysis Steps -Problem formulation -Design= upfront sampling decisions -Search for data in the literature -Eval of study quality -Extraction of data for analysis -Data analysis and interpretation

How do you Integrate the Findings? -Narrative -Matrices -Data conversion

-Narrative Narrative summaries in the discussion section of the article -Matrices Matrices or tables may be used to present the quant & qual findings in a meaningful interpretative way -Data conversion Involves converting quant to qual data and vice versa.

Why use MM?

-New or poorly understood concepts -Findings from one approach enhanced by different sources of data (triangulation can help) -Neither approach alone is good enough to answer the question -Qualitative data can help explain QUAN results --> hypothesis generating and testing -Development of formal instruments/nursing interventions -Explication --> offers insights into the meaning of quan findings -Theory building --> more opportunities for disconfirmation

Qualitative Data Analysis

-No universal rules -TON of work -Creativity, sensitivity, strong INDUCTIVE skills (induce universals from particulars) -Discern patterns, weave together into a whole -REDUCES data for reporting purposes

Validity Interpretation

-Not proved or verified, rather supported to a greater or lesser extent by evidence -DON'T validate an instrument, but the application of it (measuring anxiety may be approp in some situations, but not others)

Missing values/ patterns of missingness

-Occurs often in healthcare, surveys
-Can introduce bias and obscure results → so it is important to understand the pattern of missingness
*Missing completely at random (MCAR): likelihood of missing data not related to the value of the variable; least likely to cause bias
*Missing at random (MAR): likelihood of missing data can be explained by other variables; Ex) those >65 yo or who had cataracts tended to not answer all items and randomly missed items
*Missing not at random (MNAR): likelihood of missing data related to the value of the variable; most likely to cause bias; there is a reason certain items aren't answered; Ex) those with higher income are less likely to report their income and intentionally skip that question
1) Examine the extent of the missing values: which variables? (look at all variables); how many cases? what percentage was missing? (very low = not concerned; moderate = need to explore further)
2) Assess the randomness of missing values: divide into 2 groups (those with data on a certain variable vs. those missing data on that variable) and determine if the groups are comparable on the outcome measure; if different, then need to do something about it; can also use SPSS MVA = missing values analysis

Statistical Results

-P levels -Effect size -Confidence Intervals (preciseness of estimate of effect= how confident are we that the sample mean contains the true population mean)

Credibility

-PARALLELS INTERNAL VALIDITY (DV affected by IV) -Confidence in the truth of the data and the interpretations of them -Carrying out the study in a way that enhances the believability of the findings -Taking steps to demonstrate credibility to external reader

Meta-analysis

-PURPOSE= Transform all study results to a common metric =*Effect size -Effect sizes averaged across studies -Yield aggregate information about the existence of a relationship between variables -Provide estimate of the magnitude of effect -Research question being tested is similar across studies (population, IV, and DV should be the same) -Adequate # of studies -Consistency of evidence -Should have a narrow focus, conceptually define constructs

Confirmability

-Parallels OBJECTIVITY -Data represents the information the participants provided -Findings reflect the participants' voices and conditions of the inquiry, not the researchers' biases, motivations, or perspectives

Dependability

-Parallels RELIABILITY (stability) in quan -Consistency and accuracy with which an instrument measures an attribute -Stability of data over time and conditions -Would the study findings be repeated if the inquiry were replicated with the same participants in the same context? -Credibility can not be attained in the absence of dependability

Transferability

-Parallels with external validity or generalizability -Extent to which findings can be transferred to other settings or groups -Potential for extrapolation -Researchers must provide thick descriptions so reader can make that decision

Other indicators

-Predictive values: probability of an outcome after the results are known -Positive predictive value: proportion of people with a positive result who have the target outcome or disease -Negative predictive value: proportion of people who have a negative test result who do not have the target outcome or disease
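A minimal Python sketch (hypothetical counts, same 2x2 layout as for sensitivity/specificity) showing how PPV and NPV are computed:

# Hypothetical screening results
true_pos, false_pos = 40, 5
true_neg, false_neg = 45, 10

ppv = true_pos / (true_pos + false_pos)  # of those testing positive, proportion with the outcome
npv = true_neg / (true_neg + false_neg)  # of those testing negative, proportion without the outcome
print(f"PPV: {ppv:.2f}, NPV: {npv:.2f}")  # 0.89, 0.82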

Imputation

-Preferred method- filling in missing data with values believed to be good estimates of the values had they not been missing *Advantage- use full sample size, statistical power not compromised *Risk- imputations may be poor estimates of real values → leads to biases of unknown magnitude -Mean substitution (or median)- simplest form= using "typical" sample values to replace missing data that are continuous Ex) a person's age is missing, average age of all sample members is 45.2; substitute 45.2 for missing age
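A minimal pandas sketch (hypothetical ages) of mean substitution, where the sample mean fills in a missing value:

import pandas as pd

# Hypothetical sample with one missing age
df = pd.DataFrame({"age": [38, 51, 45, None, 47]})

# Mean substitution: replace the missing value with the sample mean
df["age_imputed"] = df["age"].fillna(df["age"].mean())
print(df)  # the missing age becomes 45.25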

Coding data

-Process of transforming raw data into standardized form for data processing and analysis -Process of identifying recurring words, themes, or concepts within the data

Strategies to Enhance Quality

-Prolonged Engagement -Persistent Observation -Reflexive Strategies -Triangulation -Comprehensive and Vivid Recording of Information -Member Checking

Strategies to Enhance Quality 1) Prolonged Engagement

-Prolonged engagement Investment of sufficient time collecting data to have an in-depth understanding of what is being studied -Establishes integrity -Ensures data saturation*** -Builds trust and rapport with informants -Ensure resources in place to support -Improves credibility and authenticity

THEORETICAL coding

-Provides insights into how substantive codes relate -Enhances abstract meaning of relationship among categories

Qual vs Quan Research Questions

-Qual= processes, experiences, feelings -Quan= descriptive prevalence, relationships among variables, causal connections

Data Conversion Qualitizing Quantitizing

-Qualitizing= transform numerical data to qualitative data Example: Structured self-report with predefined questions used. Here data are transformed to a narrative description of a typical case to read the data qualitatively & come up with a profile. Gives life to patterns emerging from quantitative data -Quantitizing= qual data to numeric values Example: Determining the frequency of themes or patterns. Document the extent to which themes occurred. Code the presence or absence of a theme (1 = present, 0 = not present). May display as a frequency table
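A minimal pandas sketch (hypothetical theme codes) of quantitizing: theme presence is coded 1/0 per transcript and then tabulated as a frequency table:

import pandas as pd

# Hypothetical coding: 1 = theme present in the transcript, 0 = not present
coded = pd.DataFrame({
    "uncertainty":        [1, 0, 1, 1, 0],
    "setting_priorities": [1, 1, 1, 0, 1],
})

print(coded.sum())          # number of transcripts mentioning each theme
print(coded.mean() * 100)   # percentage of transcripts mentioning each theme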

Types of Bias -Design -Sampling -Measurement -Analysis

-RESEARCH DESIGN → expectation bias, Hawthorne effect (people change behavior when they're being observed), contamination of treatments, carryover effects, noncompliance bias, selection bias, attrition bias, history bias
-SAMPLING → sampling error, volunteer bias, nonresponse bias
-MEASUREMENT → social desirability bias, acquiescence bias (agree with all questions), naysayers bias, extreme response set bias (all strongly agree or all strongly disagree), recall/memory bias, reactivity (subjects under study change behavior bc of being observed), observer biases
-ANALYSIS → Type 1 error = false positive (reject null when it should be accepted); Type 2 error = false negative (accept null when it should be rejected)

Stability

-Reliability coefficient (r**) = true variability / obtained variability -Test-retest reliability -Lower in homogeneous vs heterogeneous samples (the same types of ppl will likely answer very similarly) -Can be improved by making the instrument longer

Gathering Qualitative Self- Reports Self-reports are often supplemented by direct observation in naturalistic settings, one type of unstructured observation is.... -Participant observation

-Researcher gains entree into a social group and participates in its functioning while making in-depth observations of activities and events; observe ppl in their env with minimum interference **Most often gathered through participant observation (observations occurring from within the group) -Take field notes, daily logs, photos, tapes -Issues: gaining entree / establishing rapport -Single/multiple/mobile positioning

Member Checking

-Researchers give participants feedback about emerging interpretations and then obtain participants' reactions **Enhances credibility and dependability** -Can lead to erroneous conclusions if participants desire to "cover up" or just agree bc they assume the researchers are more knowledgeable.

2) Persistent Observation

-Researchers' focus on the characteristics or aspects of a situation that are relevant to the phenomena being studied. -Improves credibility and authenticity -Prolonged engagement provides scope, persistent observation provides depth***

Interpretation in Quantitative data

-Results section- summary of statistical analysis -Discussion section- interpretation of study results; seldom totally objective; evaluate within context of study aims, theoretical basis, related research, strengths/ limitations Aspects- 1) credibility and accuracy of results; 2) precision of the parameter estimates; 3) magnitude of the effects and importance of results; 4) meaning of the results, esp with regards to causality; 5) generalizability; 6) implications of the results for nsg practice, theory dev for future research

Metaanalysis- Eval of Study Quality

-Rigorous studies weighted more than weaker studies **Scale approach -Quantitative ratings of evidence quality for each study -Various instruments available **Component approach -Domain based evaluation -Rate or code individual study features -Randomization -Blinding -Extent of attrition

Aims of Quality Improvement

-Safe -Effective -Patient-Centered -Timely -Efficient -Equitable

Analysis Overview

-Search for broad categories/themes -Theme: an abstraction that brings meaning and identity to a recurrent experience; may develop within or across categories ***emerges from the data*** -Rarely a linear process; iteration usually necessary -Themes validated and refined

Errors of measurement

-Situational contaminants- environmental conditions ( scores affected by the conditions which they are produced) -Response set bias -Transitory personal factors- temporary states such as hunger, fatigue, mood -Item sampling errors- reflect sampling of the items used to measure an attribute

Sampling in Qualitative Similarities

-Small -Use nonrandom methods -Final sampling decisions occur during data collection

Validity of measurement!!!!!!

-The degree to which an instrument measures what it is supposed to measure Ex: an instrument that measures hopelessness should validly reflect this construct and not something else like depression -An instrument cannot be valid if it is unreliable, but an instrument can be reliable without being valid.

Convergent Parallel

-Triangulation design -Purpose= obtain different, but complementary, data about the central phen under study -QUAL + QUANT (collected simultaneously, equal priority)

MA Data Analysis 1)-Calculating effects (Means)

-When outcomes of studies are on an IDENTICAL SCALE (ex: both in lbs), we can use the approach below -Determine the effect size for the individual studies Example: comparison of 2 group means on a continuous outcome (weight) -Mean post weight in control group: 194 lbs -Mean post weight in intervention group: 182 lbs -Effect size: 12 lbs

Biophysical Measures Structured Instruments

-Biophysical measures have a greater chance of capturing a construct that is accurate, truthful, and sensitive (BP, weight) -Structured instruments used to measure subjective attributes (e.g., a fatigue scale) have a lesser chance of achieving these goals

Inpatient Prospective Payment System

-reimbursement is based on a pre-determined payment, regardless of the intensity of the actual service provided -Quality measurement Process and outcome measures for clinical care Hospital acquired conditions

Cleaning the data involved checks -Check for extreme outliers -Wild codes -Consistency checks

1) Check for extreme outliers (values beyond 25th or 75th percentile) 2) Wild codes (code that is not possible) EXAMPLE: male=1, female =2, and you see 3 3) Consistency checks- focus on internal data consistency Ex) survey question about marital status as single and never married, 2nd question about # of marriages should be 0
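A minimal pandas sketch (hypothetical survey data and codebook values) of a wild-code check and a consistency check:

import pandas as pd

# Hypothetical survey data containing a wild code and an inconsistency
df = pd.DataFrame({
    "sex":            [1, 2, 3, 1],  # codebook allows only 1 = male, 2 = female
    "marital_status": [1, 2, 2, 1],  # 1 = single, never married
    "num_marriages":  [0, 1, 2, 2],  # should be 0 whenever marital_status == 1
})

# Wild-code check: values outside the codebook
print(df[~df["sex"].isin([1, 2])])

# Consistency check: single/never married but reporting marriages > 0
print(df[(df["marital_status"] == 1) & (df["num_marriages"] > 0)])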

4 key types of validity Construct Statistical conclusion External Internal

1) Construct- what is the instrument really measuring? Does it measure the abstract construct of interest? Ex) Does eligibility criteria adequately capture the construct of "low income women" 2) Statistical Conclusion- strength of the evidence between variables; *****is it adequately powered to detect difference? --POWER ANALYSIS TO ESTIMATE HOW LARGE OF A SAMPLE IS NEEDED 3)External- generalizability of the results; Do the findings hold true over variations of people, conditions, or setting? 4) Internal- extent to which a causal inference can be made; were the outcomes caused by the IV and not extraneous variables?

Developing sample criteria based on three ideas

1) Substantive (meaningful)= ID variables and population to be studied 2)Methodological= ID what types of study design to include! (ex= only RCTs included) 3) Practical= what type of language? Include both published and unpublished in report?

Interpreting un-hypothesized significant results

1) exploring relationships that weren't considered during design of the study 2) obtaining results opposite to those hypothesized (likely that reasoning or theory problematic)

internal consistency

An instrument is internally consistent to the extent that all of its items measure the same trait -Evaluated by calculating the coefficient alpha (Cronbach's alpha) Cronbach's alpha -Statistical procedure that computes an index of internal consistency to estimate the extent to which different subparts of an instrument (items) are reliably measuring the critical attribute -Normal range 0.00-1.0 (higher the value, greater internal consistency)
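A minimal Python sketch (hypothetical item responses) of Cronbach's alpha computed from its definition, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores):

import numpy as np

def cronbach_alpha(items):
    # items: 2-D array, rows = respondents, columns = scale items
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of individual item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical responses of 5 people to a 4-item scale
scores = [[3, 4, 3, 4],
          [2, 2, 3, 2],
          [4, 5, 5, 4],
          [1, 2, 1, 2],
          [3, 3, 4, 3]]
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")  # ~0.95, high internal consistency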

Quantitative Data Interpretation Validity

Approximate truth of an inference -Relates to credibility of a study!

Case studies Narrative analysis Descriptive qualitative studies Critical theory Feminist

Case studies→ focus on single entity, or a small # of entities, with intensive scrutiny Narrative Analysis → Focus on story→ determines how individuals make sense of events in their lives Descriptive qualitative studies→ eclectic design and methods → based on constructivist inquiry Critical theory --> critique of existing social structures and envisioning new possibilities Feminist → how gender domination and discrimination shape women's lives and their consciousness

reliability OF INSTRUMENT

Consistency and accuracy with which an instrument measures an attribute The less variation an instrument produces in repeated measurements, the higher the reliability An instrument is reliable to the extent that its measures reflect true scores

-Multitrait-multimethod matrix method (MTMM) -Convergence and discriminability

Convergence- different methods of measuring a construct yield similar results *****MEASURES OF CONSTRUCTS THAT SHOULD BE RELATED TO EACH OTHER, ARE** Discriminability- evidence the construct can be differentiated from other similar constructs

What cannot be attained in the absence of dependability?

Credibility

QUALITATIVE time frames

Cross sectional Longitudinal- multiple points in time

Preanalysis phase of QUAN research

DATA ENTRY -Coding--> the process of transforming data into numbers --*QUAN= (temp, BP, age) *Categories (single, married, divorced) *Structured instruments (code responses to items about physical activity): 1= not at all, 2= somewhat, 3= always *Structured interviews: fixed responses (yes=1, no=2); open-ended questions: code according to predefined categories

Solutions to missing values= deletions and imputations

Deletions= LISTWISE AND PAIRWISE -Listwise (complete case analysis): analyze only those with NO missing data; if an 80 yo didn't answer a question, throw the whole subject away -Pairwise (available case analysis): delete cases selectively on a variable-by-variable basis; Example: testing an intervention to decrease anxiety where the DVs are BP and an anxiety scale. If 10 people didn't complete the anxiety scale, base the anxiety analysis on the 90 people who did complete it, but base the BP analysis on the full sample of 100 **Difficult to interpret if the # of cases fluctuates across outcomes
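A minimal pandas sketch (hypothetical BP and anxiety scores) contrasting listwise and pairwise deletion:

import numpy as np
import pandas as pd

# Hypothetical outcomes with missing anxiety scores for 2 of 5 participants
df = pd.DataFrame({
    "bp":      [120, 135, 128, 142, 118],
    "anxiety": [22, np.nan, 30, np.nan, 18],
})

# Listwise deletion: only complete cases are analyzed, so n shrinks for every analysis
print(len(df.dropna()))       # 3 cases

# Pairwise deletion: each analysis uses whatever cases are available for it
print(df["bp"].count())       # 5 cases for the BP analysis
print(df["anxiety"].count())  # 3 cases for the anxiety analysis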

Characteristics of Qualitative Research Design and approaches in general

Emergent and ongoing as the study unfolds vs. specific and predefined
Based on the realities and viewpoints of participants; not known at the outset
Naturalist/constructivist paradigm → reality is NOT fixed but IS flexible; multiple interpretations
Plan for broad contingencies that pose decision opportunities for study design
Triangulation used
Holistic
Researcher as instrument (intensely involved, significant amt of time)
Ongoing data analysis
Approach involves advanced planning

Physician Quality Reporting System PQRS

Encourages eligible professionals (EP's) and group practices to report on the quality of care to Medicare patients Apply a negative payment adjustment if measures not reported

In an article by Brooten et al. (2007) about women with high-risk pregnancies, the Problem Classification Scheme of the Omaha System was used to identify and classify women's problems indicated by the women and APNs. They reported a range of 73% to 98% agreement for "intercoder reliability." What type of reliability is being described here?

Equivalence

Data COllection differences btwn Ethno Phen Grounded Historical PAR

Ethno= observation / interviews / social network diagrams / cultural system -LONGITUDINAL -Long: many months, years -Prob= gaining entree, reflexivity Phen= in-depth interviews -Cross-sectional -Moderate length -Prob: bracketing one's views, building rapport Grounded= individual interviews -Cross-sectional or longitudinal -Moderate length -Prob= building rapport Historical= primary sources, secondary sources, archives; must confirm genuineness and authenticity PAR= data generation begins as soon as a problem is identified by the community/large group -Initial discussions important

Theoretical coding with Glaserian vs Strauss/ corbin

Glaserian- to generate a theory, basic problem must come from data -theory generation** theory problem is seen in data - focuses on the fit for developing theory Strauss/Corbin- to generate theory, problem comes from literature or researchers experience Open- data broken into parts and compared for similarities/ differences Axial- analyst systematically develops categories and links with subcategories Selective - decide on central or core category

which theoretical category does this quote fall into? "I had been thinking about graduate nursing school for a long time and decided to enroll in a program, part time last fall."

Graduate nursing school

What happens if the outcome measures are not on the same scale?

Have to calculate Cohen's d, which transforms all effects into standard deviation units Ex) if d = 0.50, the mean for one group was 1/2 of a standard deviation higher than for the other group
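A minimal Python sketch (hypothetical group scores) of Cohen's d for two equal-sized groups, dividing the mean difference by the pooled standard deviation so the effect is expressed in SD units:

import numpy as np

# Hypothetical outcome scores in intervention vs. control groups
intervention = np.array([52, 48, 55, 60, 50])
control = np.array([45, 40, 47, 44, 42])

mean_diff = intervention.mean() - control.mean()
# Pooled SD for equal group sizes = sqrt of the average of the two variances
pooled_sd = np.sqrt((intervention.var(ddof=1) + control.var(ddof=1)) / 2)

d = mean_diff / pooled_sd  # effect size in standard deviation units
print(f"Cohen's d: {d:.2f}")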

Analysis in Metaanalysis involves calculating what...?

An INDEX = the difference between the tx group mean and the control group mean in each study --> these are combined and averaged across studies into a single value, aka the index

Data Collection in MM -Intramethod mixing -Intermethod mixing Data Analysis in MM

Intra= structured and unstructured self-reports Inter= biophysiologic measures & unstructured reports Data analysis = INTEGRATING the 2 strands = MERGING the data (goal= analyze the data and integrate the results) = interpretive integration

Similar to interrater reliability

Investigator triangulation

Sandelowski and Barroso

META-SUMMARY -List abstracted findings from primary studies and calculate manifest effect sizes -Can lay foundation for metasynthesis

Which pattern of missingness introduces the most bias?

MNAR (systematic)

MTMM Factor Analysis

MTMM- Organizes convergent and discriminant validity evidence for comparison of how a measure relates to other measures. Convergent= tests designed to measure the same construct should highly correlate Discriminant= tests designed to measure different constructs SHOULD NOT highly correlate= the construct can be differentiated from other similar constructs FACTOR ANALYSIS- statistical procedure= identifies clusters of related items for a scale -Identifies items that "go together"

Gray Literature

NON-PUBLISHED -Abstracts from conference proceedings -Dissertations -Unpublished reports -Controversy regarding grey literature; exclusion can lead to bias, overestimation of effects -Exclusion of gray literature can lead to publication bias (tendency for published studies to systematically over-represent statistically significant findings) → overestimation of effects

Participant Observation Issues

Observations occurring from those within the group 1) Observer bias (lose objectivity) 2) Emotional involvement (may skew the researcher's view of the group or study issues) 3) Skill level of the observer

Metaanalysis- searching for evidence -Published/ non-published

Published -*Publication bias= overrepresents statistically significant findings Non-published= gray literature

Ethnography

Purpose→ To gain a holistic view of a culture; strive to obtain tacit knowledge about the culture not readily known, to understand the world view of cultural members
Philosophical underpinnings→ Anthropology; CULTURE
Characteristics→ description and interpretation of a culture and cultural behavior through cultural behavior, artifacts, speech (key informants)
-*****Extensive field work: intimate, labor intensive; actively participate
-Understanding of culture is inferred from the words, actions, and products of culture members→ portrayed through written text
-Every human group evolves a culture that guides members' view of the world and the way they structure their experience
STRIVE FOR Emic perspective→ the way members of a culture envision their world (local language, concepts, expressions)
VS. Etic perspective→ language used by those doing the research to refer to the same phenomena (outsiders' interpretation of the experiences of that culture)

Phenomenology

Purpose→ Understand life experiences by looking at the meaning of individual experiences; understand the essence of a phenomenon; useful for phenomena that are poorly defined (meaning of suffering, QOL with chronic pain)
Philosophical underpinnings→ Philosophy and Psychology
Characteristics
-Critical truth is grounded in people's lived experiences
-Each person's PERCEPTION of a PHENOMENON has MEANING
-Human existence is meaningful b/c of consciousness of that existence (aka people's conscious interactions with the world)
Data source= in-depth conversations; <10 people; rich, vivid descriptions that describe key themes; semi-structured interviews / open-ended questions / audiotape to transcribe
Types:
*Descriptive (describes the meaning of human experience, aka what do we know as persons?)= hear/see/believe/feel/remember/decide/evaluate
Involves 4 steps: 1) Bracketing (ID and hold aside preconceived beliefs about the phen under study); 2) Intuiting (researcher is open to meanings attributed to the phen by those who have experienced it); 3) Analyzing (ID significant statements, categorize, and make sense of the essential meaning of the phen); 4) Describing (researcher understands and defines the phen)
*Interpretive phen.= hermeneutics (interprets human experience, aka what is being?)--> interpreting and UNDERSTANDING→ based on hermeneutics (use supplemental texts/ART (legit art, poetry, music) to understand the phen further) **NO BRACKETING

Historical Research

Purpose→ discover new knowledge by answering questions about causes/effects/trends in past events that may shed light on present practices
Philosophical underpinnings→ History
Characteristics
-Systematic collection, critical evaluation, and interpretation of historical evidence
-Seeks to describe what happened and why it happened
-Focuses on relationships btwn ideas, events, people, organizations
METHODS- collecting data (written records, non-written records, interviews, letters/diaries/photos/videos)
-Requires significant time and effort

Grounded Theory

Purpose→ seeks to understand actions from the perspectives of those involved; seeks to discover a core variable; has contributed to the creation of middle-range theories
*Goal is to discover the main concern and the basic social process that explains how people continually resolve it
Philosophical underpinnings→ Sociology / Symbolic Interaction
Characteristics
-Social processes in social settings
-Methods: in-depth interviews, participant observation
-Recursive in nature→ the problem and the process used to solve it emerge from the data (researcher collects data, categorizes it, describes the emerging central phenomenon, recycles earlier steps)
-Constant comparison: used to develop and refine theoretically relevant concepts/categories; categories are constantly compared with earlier data to find commonalities or variations
-Generates emergent conceptual categories and integrates them into a theory GROUNDED in the data

Action (Participatory)

Purpose→ to produce new knowledge through close collaboration with groups/communities that are vulnerable to control/oppression; raise consciousness; produce an impetus that is directly used to make improvements through educational and sociopolitical action
-Can motivate, increase self-esteem, generate community solidarity
Philosophical underpinnings
-Interpretivism= research can never be objectively observed from the outside; rather it must be observed from inside through the direct experience of the people
-(In book) Based on the idea that the production of knowledge can be political and used to exert power
Characteristics
-Production of knowledge can be used to exert power
-Work with communities or groups that are vulnerable to the control or oppression of a dominant culture
-Researchers and participants collaborate in defining the problem and conducting the study (participants can ask and answer research questions)
METHODS- emergent process of collaboration and dialogue; interview & observation; storytelling to encourage participants

Key Criteria for evaluating quantitative instruments

RELIABILITY AND VALIDITY

How to evaluate screening/ diagnostic instruments

SENSITIVITY AND SPECIFICITY -NEEDS TO have a cutoff

Method used for weighing individual studies in meta-analysis

Standard Error -After an effect size is computed for each study, a pooled effect estimate is computed as a weighted average based on standard error for each study (The larger the standard error, the less weight assigned to the study; The bigger the weight given, the more that study will contribute to the weighted average). **Inverse variance method - Widely used approach which involves using the standard error to calculate a weight (repeat from above - the larger the standard error, the less weight assigned to the study).

Triangulation -Investigator and theory

Time/Space/Person -Method= multiple types of data collection -Investigator and theory -Collaboration reduces the risk of bias *similar to interrater reliability

purpose of ethnographical research

To describe and interpret cultural behavior

purpose of action research

To empower people through the process of constructing and using knowledge

purpose of historical research

To establish facts and relationships about past events

purpose of grounded theory

To generate theory that accounts for peoples' actions by focusing on the main concern the behavior is designed to resolve

purpose of phenomenology

To understand people's everyday life experiences

Credibility and Corroboration (confirmation of findings)

To what degree are study findings consistent with other similar studies? -Can come from internal and external sources; replication is important → sources include multi-site studies, triangulation (results similar across different measures of a key outcome), and mixed methods (qualitative data concordant with the statistical analysis)

Types of Qual self reports Unstructured interviews Semi-structured interviews Focus Groups

Unstructured interviews
-No preconceived views of the information to be gathered
-Conversational and interactive
-Begin by asking broad questions (grand tour questions)
-*LONG: can take notes and tape record
Semi-structured interviews
-Done when the researcher wants to cover specific topics
-Prepare a written topic guide
-Encourage participants to talk freely about all topics on the guide
Focus group interviews
-Groups range from 5-10 ppl
-Solicit opinions & experiences simultaneously
-Efficient
-Some may not be comfortable sharing their views or experiences

Performance Improvement Aims

What are we trying to accomplish? An organization will not improve without clear, time-specific, measurable aims that are specific to the population being affected.
How will we know that a change is an improvement? Quantitative measures provide an organization with the evidence that changes have resulted in an improvement.
What changes can we make that will result in improvement? While not all changes lead to an improvement, all improvement requires change, and an organization must pick the changes that will make the greatest impact on the care of patients.
Incorporate a model for change: use the PDSA (plan, do, study, act) cycle to run small tests of change to see if they result in improvement.

Construct validity

What is the instrument really measuring? Does it adequately measure the construct of interest? **KEY CRITERION FOR ASSESSING RESEARCH QUALITY -Compares measure to abstract theoretical construct**

In which type of qualitative research would a focus group of key participants be used to find a solution to a real life situation?

action research

Convenience Sampling

aka VOLUNTEER -Efficient, but not preferred -Used when researchers want the participants to come forward and ID themselves -Economical -Can be a way to launch the sampling process

A sphygmomanometer yields which type of measure?

an in vivo measure

true/false The Physician Quality Reporting System (PQRS) encourages eligible providers or group practices to report on quality measures for Medicare patients. If measures are reported, the practice receives additional compensation.

false; a negative payment adjustment (penalty) is applied if measures are not reported

"The Theory of Postpartum Depression: Teetering on the Edge" would most likely be the result of which qualitative methodology?

grounded theory

Which qualitative approach involves the use of a procedure known as constant comparison?

grounded theory

Which type of qualitative methodology is most likely to employ theoretical sampling?

grounded theory

which type of instrument reliability is - All items measure the same trait

internal consistency

Credibility is analogous with which quantitative effort to achieve quality?

internal validity

As opposed to research, what do process improvement projects utilize?

many sequential observable tests

An investigator wishes to interview a group of potential participants with the goal of achieving a high degree of diversity within the sample, capturing different attitudes and opinions of those of varied age groups, education, marital status, and employment status. This type of sampling is referred to as:

maximum variation sampling

The qualitative researcher has developed the category for "plagued with an array of distressing thoughts and emotions" in describing birth trauma. She returns to 5 of the participants and asks them to review the category to see if it accurately describes their experiences. This attempt to establish trustworthiness of the data is called:

member checking

A researcher has completed data entry and while cleaning the data, notes some missing values. Additional tests are performed to examine the extent and pattern of missingness. Results indicated missingness was common in items assessing depression for women who had miscarriages. This would be an example of what type of missing value?

missing not at random

Which strategy to enhance quality is used when the researcher invests sufficient time to ensure data saturation?

prolonged engagement

When the researchers deliberately choses the cases or types of cases that will best contribute to the study, this is referred to as what type of sampling?

purposive

Cross-sectional versus longitudinal data collection apply to which type/(s) of studies?

qualitative and quantitative

One method of facilitating bracketing is to maintain

reflexive journals

reflecting critically on one's self

reflexivity

P value CI

P value = strength of evidence that the null hypothesis is FALSE; CI = precision of the estimate of effect (strength of evidence)

true/false Qualitative research requires the researcher to become the instrument.

true

which theoretical category does this quote fall into? "The decision to enroll was not an easy one, as I had to consider my family and work obligations. I went ahead and enrolled not sure of how I would perform academically."

uncertainty

