EBP Final Exam

Validity

-Degree to which a test measures what it is supposed to measure. The question is not whether a test is "valid or not valid," but rather "valid for what and for whom" -An instrument/tool can be reliable and not valid! Explain.

Reliability

-Degree to which an instrument or test consistently measures what it is supposed to measure -Consistency with which an instrument/test measures a particular attribute

Internal consistency reliability

-Degree to which an instrument/test possesses internal consistency; extent to which items in an instrument "hang together" and measure what they are supposed to measure -To assess internal consistency, a statistical procedure called Cronbach's alpha is used; reported as a reliability coefficient (r)
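
Since Cronbach's alpha comes up repeatedly in this deck, here is a minimal stdlib-only Python sketch of the usual formula, alpha = k/(k-1) x (1 - sum of item variances / variance of total scores). The 4-item Likert responses are invented for illustration.

```python
# Cronbach's alpha from scratch (stdlib only) -- a study sketch, not a
# validated statistics routine. Rows = respondents, columns = items.
from statistics import pvariance

def cronbach_alpha(rows):
    k = len(rows[0])                      # number of items
    items = list(zip(*rows))              # column-wise item scores
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical 4-item Likert responses from 5 subjects
data = [
    [4, 4, 3, 4],
    [2, 2, 2, 1],
    [3, 3, 3, 3],
    [4, 3, 4, 4],
    [1, 2, 1, 2],
]
print(round(cronbach_alpha(data), 2))  # high alpha: items "hang together"
```

Items that rise and fall together across respondents drive the total-score variance up relative to the item variances, which is what pushes alpha toward 1.0.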

Comparison of Measures of Central Tendency

-Mean, most stable and widely used indicator of central tendency -Median, useful mainly as descriptor of typical value when distribution is skewed -Mode, useful mainly as gross descriptor, especially of nominal measures
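
The comparison above can be seen directly on a small made-up score set using Python's standard library:

```python
# Mean, median, and mode side by side -- scores are hypothetical.
from statistics import mean, median, mode

scores = [2, 3, 3, 4, 5, 7, 9]
print(mean(scores))    # arithmetic average
print(median(scores))  # middle score of the sorted distribution
print(mode(scores))    # most frequently occurring score
```

With a skewed set like this one (a few high outlying scores), the mean is pulled above the median, which is why the median is the better descriptor of the typical value for skewed distributions.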

Nominal Data Analysis

-Measure by assigning numbers to classify characteristics into categories -Characteristics or attributes may be arbitrarily coded numerically, but the numbers provide no quantitative information about the variable.

Frequency Distribution Tables

-N = sum of all frequencies (sample size) -n = size of a subgroup

Age

-Nominal - born before vs. after 2000 -Ordinal - 20s, 30s, 40s, 50s, 60s -Interval - 20 years old, 21 years old, 22 years old -Ratio - birth (0 in USA; 1 year in Taiwan), 1, 2, 3, 4, ...

Income

-Nominal: below poverty line and above poverty line -Ordinal: lower, moderate and high income levels -Interval: 10K to 19K, 20K to 29K, 30K to 39K -Ratio: Net Income for 2011, 2012, 2013

Chi-Square Test

-Nonparametric test: compares differences -One sample compared to theoretical frequencies, or -Two independent samples test the difference in proportions in categories within a contingency table
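
The chi-square statistic for a contingency table can be computed by hand from expected counts (row total x column total / N). A stdlib-only sketch; the 2 x 2 treatment-by-outcome counts are made up, and a real analysis would also look up the p-value.

```python
# Chi-square statistic for a contingency table of observed counts.
def chi_square(table):
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n  # E = (row x col) / N
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Hypothetical counts: rows = two groups, columns = improved / not improved
print(round(chi_square([[30, 10], [20, 40]]), 2))
```

The larger the gap between observed and expected cell counts, the larger the statistic, and the stronger the evidence that the proportions differ across categories.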

Ratio Measurement Data

-Occurs when there are equal distances between score units and there is a rational, meaningful, absolute zero -Highest level of measurement -The scales are different, ordered and separated by a constant unit of measurement -Examples -Weight or blood pressure -Can use most sophisticated statistical analysis

Interval Measurement Coding

-Ordered on a scale that has equal distances between points on the scale -Clear ordering, and the distance between objects is specified, but there is no meaningful zero point (no absence of value) -Adding or subtracting the values is meaningful. -Example: -SAT score -If sample size is large enough (over 30), interval data can be statistically analyzed like ratio data

t-Test

-Parametric statistic -Tests the difference between two means -t-test for independent groups (between subjects); one independent variable and one dependent variable -t-test for dependent groups (paired subjects) -Example: What is the difference between the mean pain score of patients who received p.m. care during the first 3 hours of the night and those who did not?
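
The independent-groups t statistic can be sketched with the stdlib alone, using a pooled variance and df = n1 + n2 - 2. The pain scores below are invented to mirror the p.m.-care example.

```python
# Independent-groups t statistic with pooled variance (a study sketch).
from math import sqrt
from statistics import mean, variance

def t_independent(a, b):
    n1, n2 = len(a), len(b)
    # Pooled (weighted-average) sample variance of the two groups
    pooled = ((n1 - 1) * variance(a) + (n2 - 1) * variance(b)) / (n1 + n2 - 2)
    return (mean(a) - mean(b)) / sqrt(pooled * (1 / n1 + 1 / n2))

pm_care = [3, 4, 2, 3, 3]   # hypothetical pain scores, received p.m. care
no_care = [5, 6, 5, 4, 6]   # hypothetical pain scores, no p.m. care
print(round(t_independent(pm_care, no_care), 2))
```

A large negative t here reflects the first group's mean sitting well below the second's relative to the variability within groups; the sign simply follows the order of subtraction.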

Analysis of Variance (ANOVA)

-Parametric statistics -Tests the difference between more than 2 means -One-way ANOVA (e.g., 3 independent ethnic groups with interval/ratio dependent variable) -Multifactor (e.g., two-way) ANOVA (2 independent variables with 1 dependent variable) -Repeated measures ANOVA (within subjects) like repeating tests over time
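
A one-way ANOVA F statistic is the between-groups mean square divided by the within-groups mean square. A stdlib sketch with three hypothetical groups:

```python
# One-way ANOVA F statistic (between-groups MS / within-groups MS).
from statistics import mean

def one_way_anova_f(groups):
    grand = mean([x for g in groups for x in g])   # grand mean of all scores
    k = len(groups)                                # number of groups
    n = sum(len(g) for g in groups)                # total sample size
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

groups = [[2, 3, 4], [5, 6, 7], [8, 9, 10]]   # three hypothetical groups
print(one_way_anova_f(groups))                 # → 27.0
```

Group means that sit far apart relative to the spread inside each group inflate the numerator and produce a large F.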

Commonly Used Bivariate Statistical Tests

-Pearson's r (examines relationships/associations) -t-Test (examines differences) -Analysis of variance (ANOVA): examines differences -Chi-square test (examines differences)

Levels of measurement

-Process of assigning numbers to concepts -Researchers need to establish the level of measurement for each study variable

How do we test for reliability?

-Test-retest reliability (Coefficient of Stability) -Internal consistency reliability (Cronbach's Alpha) -Interrater reliability (Cohen's Kappa Statistic)

Multiple Correlation Coefficient (R)

-The correlation index for a dependent variable and 2+ independent (predictor) variables: R -Does not have negative values: it shows strength of relationships, but not direction -Can be squared (R2) to estimate the proportion of variability in the dependent variable accounted for by the independent variables

Essential Components of a Research Report

-Title -Abstract -Introduction -Literature Review -Methods -Results -Discussion -References

positive correlation

correlation in which high scores for one variable are paired with high scores for the other variable, or low scores for one variable are paired with low scores for the other variable

negative correlation

correlation in which high scores for one variable are paired with low scores for the other variable

Advantages of Quantitative data

data can be analyzed without extreme effort, comparisons can be made, and hypotheses can be tested with well-developed statistical techniques. If data are collected rigorously, the findings can be generalized to other populations.

outlier

data point isolated from other data points; extreme score in a data set

Homogeneity (Internal Consistency)

Cronbach's Alpha is utilized

One of the more popular statistical procedures used to assess internal consistency is?

Cronbach's alpha coefficient. This procedure is used to assess internal consistency when instrument items are scored categorically (e.g., 1 to 4).

Content validity

Degree to which a test measures an intended content area; determined by a panel of experts in the field

Confidence interval

The range of values estimated to contain the values for a population within a specified degree of probability
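
A 95% confidence interval for a population mean can be sketched with the normal approximation (z = 1.96) and the standard error of the mean (SEM = SD / sqrt(n)). The systolic blood pressure values are invented; with a sample this small a real analysis would use a t critical value instead of 1.96.

```python
# 95% CI for a mean via the normal approximation (a study sketch).
from math import sqrt
from statistics import mean, stdev

def ci_95(sample):
    sem = stdev(sample) / sqrt(len(sample))   # SEM = SD / sqrt(n)
    m = mean(sample)
    return (m - 1.96 * sem, m + 1.96 * sem)

data = [120, 132, 128, 125, 130, 135, 122, 128]  # hypothetical SBP values
low, high = ci_95(data)
print(round(low, 1), round(high, 1))
```

Read the result as: we are 95% confident the interval contains the population mean, i.e., intervals built this way capture the true mean 95% of the time.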

Descriptive statistics

Used to describe and synthesize data (AKA Means and Freqs)

Inferential statistics

Used to make inferences (indicate judgments) about the population based on sample data

Establishing content validity

When an investigator is developing a tool and issues of content validity arise, the concern is whether the measurement tool and the items it contains are representative of the content domain the researcher intends to measure. When the researcher has (1) defined the concept, (2) identified its dimensions, and (3) placed items in the tool, the items are submitted to a panel of judges who are experts on the concept. The judges are asked to indicate their agreement with the scope of the items and the extent to which the items reflect the concept.

abstract

a brief summary of a research study; usually includes the purpose, methods, and findings of a study

operational definition

a definition that assigns meaning to a variable and the terms or procedures by which the variable is to be measured

instrument

a device, piece of equipment, or paper-and-pencil test that measures a concept or variable of interest

symmetrical distribution

a distribution in which the mean, median, and mode are all the same

skewed distribution

a distribution of scores with a few outlying observations in either direction

research report

a document that summarizes the key aspects of a research study; usually includes the purpose, methods, and findings of a study

Participant group interviews

a group (6-12 individuals) of acceptable subjects for in-depth interviews; the researcher observes the interactions of members and detects their attitudes, opinions, and solutions to specific topics; non-threatening ("safety in numbers").

refereed journal

a journal that uses expert peers in specified fields to review manuscripts and determine whether a particular manuscript will be published

mean

a measure of central tendency calculated by summing a set of scores and dividing the sum by the total number of scores; also called the average

median

a measure of central tendency that represents the middle score in a distribution

measures of dispersion

descriptive statistics that depict the spread or variability among a set of numerical data

measures of central tendency

descriptive statistics that describe the location or approximate center of a distribution of data

Instrument

device used to record or gather data on a particular concept.

grounded theory

discovery of a theory from data that have been systematically obtained through research

essences

elements or structured units that give an understanding of the lived experience

Mean

equals the sum of all scores divided by the total number of scores

psychometric evaluation

evaluating properties of reliability and validity in relation to instruments being used to measure a particular concept or construct

construct validity

extent to which an instrument or test measures an intended hypothetical concept or construct

criterion-related validity

extent to which an instrument or test measures a particular concept compared with a criterion

content validity

extent to which an instrument or test measures an intended content area

generalizability

extent to which research findings can be generalized beyond the given research situation to other settings and subjects; also called external validity

appropriateness

extent to which the phenomenon being measured fits the sample

meaningfulness

extent to which the phenomenon being measured is important to research participants and clinicians

Type II Error

failure to reject a null hypothesis when it should be rejected (a false negative). Researchers try to prevent Type II errors because they do not want to deprive patients of effective interventions

clinical significance

findings that have meaning for patient care in the absence or presence of statistical significance

constant comparative method

form of qualitative data analysis that categorizes units of meaning through a process of comparing incident to incident until concepts emerge

Examples of ordinal data

grade level

ratio level of measurement

highest level of measurement, characterized by equal distances between scores and an absolute zero point

Peakedness

how sharp the peak of a distribution is; a distribution may be platykurtic, mesokurtic, or leptokurtic

bracketing

identification of any previous knowledge, ideas, or beliefs about the phenomenon under investigation

Some guidelines to follow when designing a questionnaire

include using simple language, having each question represent just one idea, delimiting any reference to time, and phrasing questions in a neutral way.

Biophysical measures

include vital signs (i.e., temperature, pulse, respirations, blood pressure), maximum inspiratory pressure, cardiac output, glycosylated hemoglobin, urine catecholamine levels, and coagulation tests.

Face validity

a subtype of content validity. It is a rudimentary type of validity that is intuitive. Colleagues or subjects are asked to read the instrument and evaluate the content answering the question, "Does it appear to reflect the concept the researcher intends to measure?".

participant observer

a technique in anthropological fieldwork. It involves direct observation of everyday life in study participants' natural settings and participation in their lifestyle and activities to the greatest extent possible

Is temperature considered interval or ratio data?

interval - there is no absolute zero; at 0 degrees (Celsius or Fahrenheit), temperature still exists

Participant observation

involves direct observation through involvement with subjects in their natural setting, participating in their lifestyle activities.

confirmability

method used to establish the scientific rigor of phenomenological research. It has three elements: auditability, credibility, and fittingness.

platykurtic

mole hill, flat peak

robust

referring to results from statistical analyses that are close to being valid, even though the researcher does not rigidly adhere to assumptions associated with parametric procedures

Type I Error

rejection of a null hypothesis when it should not be rejected (a false positive). The risk is controlled by the level of significance (alpha), e.g., α = .05 or .01. Research usually focuses on preventing Type I errors

fittingness

requires that the phenomenological description is grounded in the lived experience and reflects typical and atypical elements of the experience.

credibility

requires that the phenomenological description of the lived experience be recognized by people in the situation as an accurate description of their own experience

auditability

requires the reader to be able to follow the researcher's decision path and reach a similar conclusion

purposive sampling

selecting and interviewing participants who have actually lived and experienced the phenomena of interest

Survey

series of questions posed to a group of subjects. Surveys are used for collecting data to describe, compare, or explain knowledge, attitudes, and behavior.

semantic differential scale

set of scales using pairs of adjectives that reflect opposite feelings

homogeneity of variance

situation in which the variances of the dependent variable do not differ significantly between or among groups

Interviews

often structured by an "interview guide"; used to elicit meaningful data; decreases the chance of vague answers; may use audio/video recordings.

Likert scale

sometimes referred to as a summative scale. Respondents are asked to respond to a series of statements that reflect agreement or disagreement. Most Likert scales consist of five scale points, designated by the words "strongly agree", "agree", "undecided", "disagree", and "strongly disagree"

Q methodology

sorting technique used to characterize opinions, attitudes, or judgments of individuals through comparative rank ordering

response-set bias

the amount of measurement error introduced by the tendency of some individuals to respond to items in characteristic ways (i.e., always agreeing/disagreeing) independent of the item's content.

measurement

the assignment of numerical values to concepts, according to well-defined rules

Reliability of a research instrument or test is

the extent to which the instrument yields the same results on repeated measures.

statistical significance

the extent to which the results of an analysis are unlikely to be the effect of chance

lived experience

the focus of phenomenology. It consists of everyday experiences of an individual in the context of normal pursuits. It is what is real and true to the individual

validity

value that refers to the accuracy with which an instrument or test measures what it is supposed to measure. Different types of validity include content, criterion, and construct

reliability

value that refers to the consistency with which an instrument or test measures a particular concept. Different ways of assessing reliability include test-retest, internal consistency, and interrater

continuous variable

variable that takes on an infinite number of different values presented on a continuum

leptokurtic

very peaked, like two giraffes next to each other

Consumers of research must make judgments about the instruments that are used in a study

(1) Must assess the reliability and validity of the instruments (2) Invalid measures produce inaccurate generalizations to the populations being studied

A reliable instrument/tool

(1) consistent, (2) accurate, (3) precise, (4) stable, (5) has homogeneity, and/or (6) equivalence.

Researchers and consumers of research must be concerned about whether the scores that were obtained for a sample of subjects were

(1) consistent, true measures, (2) an accurate reflection of the differences between individuals or groups, (3) stability in parallel or alternate forms, test-retest reliability, (4) Homogeneity in Cronbach's alpha, and (5) equivalence found in interrater reliability, or again in parallel or alternate forms of the tool.

data collection

-Allows participants to discuss their perspective in detail -Semi-structured interviews guided by broad, open-ended questions; can be conversational in nature; interviews usually audio-taped -Additional probes often necessary -Focus groups: small group of participants come together and respond to questions -Goal: gain an understanding of the participant's perspective

sample and setting

- Purposive sampling: snowball or network sampling often used - Setting: chosen based on research question(s) and participants that are required to answer question(s) - Setting: natural; "field" - Goal: not generalizability; rather "understanding"

data analysis

-Saturation: when the researcher hears the same explanations repeatedly, the data are said to be saturated -Analysis: different types of analyses are used depending on the type of qualitative research, but commonalities exist: -Pull out significant statements -Formulate meanings related to the statements -Group the meanings together into clusters of themes

Sampling Distribution of the Mean

-A theoretical distribution of the means of an infinite number of samples drawn from the same population; it: -is always normally distributed -has a mean that equals the population mean -has a standard deviation (SD) called the standard error of the mean (SEM) -has an SEM that is estimated from a sample SD and the sample size

Contingency Table for Independent Samples Chi Square

-A two-dimensional frequency distribution; frequencies of two variables are cross-tabulated -"Cells" at intersection of rows and columns display counts and percentages -Variables should be nominal or ordinal

Poster presentation

-Advertisement of research; combines text and graphics to make a visually pleasing presentation -Involves showing work to numerous researchers at a conference or seminar -Pay particular attention to the poster's design; focus on a small number of key points viewers should take away from the presentation

Data management

-Agree on specific procedures prior to data collection -Data are collected according to the approved protocol -Ensure data are recorded accurately -Errors must be corrected and made public -PI is responsible for ensuring data are of high quality -Data are stored for 5-7 years or longer

Example of different response choices

-Agreement (4-point Likert scale) -Strongly disagree -Disagree -Agree -Strongly agree -Evaluation (5-point Likert scale) -Excellent -Good -Satisfactory -Unsatisfactory -Poor

Descriptive Statistics: Frequency Distributions

-Arrange numeric variable values from lowest to highest -Count the number of times each value was obtained (the frequency of a score) -Frequency distributions can be described in terms of: -Shape -Central tendency (mean, median, mode) -Variability -Can be presented in tabular form (counts and percentages) via frequency distribution tables -Can be presented graphically (e.g., frequency polygons)
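
The tabular form above is easy to sketch with `collections.Counter`, printing counts and percentages for each observed value; the scores are made up.

```python
# Frequency distribution table: value, frequency, percentage.
from collections import Counter

scores = [3, 4, 4, 5, 5, 5, 6, 6, 7]   # hypothetical scores
freq = Counter(scores)
n = len(scores)                          # N = sum of frequencies
for value in sorted(freq):               # lowest to highest
    pct = 100 * freq[value] / n
    print(f"{value}: f={freq[value]} ({pct:.1f}%)")
```

Sorting the distinct values before printing reproduces the lowest-to-highest arrangement a frequency distribution table uses.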

Nominal (categorical) measurement

-Assignment of numbers to simply classify characteristics into categories -Type of "naming" or "labeling" system -No relationship between categories -the lowest level of measurement, and it consists of assigning numbers as labels for categories. These numbers have no numerical interpretation.

Different types of validity

-Content (face) -Construct -Criterion-related (includes concurrent and predictive)

Construct validity

-Degree to which a test measures an intended hypothetical construct (non-observable trait) -based on the extent to which a test measures a theoretical construct or trait. -Construct validity attempts to validate a body of theory underlying the measurement and testing of the hypothesized relationships. -Establishing construct validity is a complex process: (1) Involves several studies, (2) Hypothesis testing, (3) Factor analysis, and (4) others.

Criterion-related validity

-Degree to which an instrument measuring a particular construct compares with an external criterion -Defined as the degree to which the subject's performance on the measurement tool and the subject's actual behavior are related. Example: Denver 2 Developmental Assessment: "What if the child is ill or just uncomfortable and does not perform the task?"

Test-retest reliability

-Degree to which scores are consistent over time -Administer an instrument/test; after 2-4 weeks have passed, re-administer the same test. Correlate the two scores and report the result as the test-retest reliability coefficient (r). -Administer the same instrument to the same subjects under similar conditions on two or more occasions. The scores are compared statistically using Pearson's r; the closer to 1.0, the better the correlation.

Variability

-Degree to which scores in a distribution are spread out or dispersed -Homogeneity—little variability -Heterogeneity—great variability -Leptokurtic- highly peaked -Platykurtic- flat, low, long peak

Interrater reliability

-Degree to which two observers watching the same event independent of one another agree; very popular in nursing studies where judgments are made by two or more data collectors -To assess degree of consistency among raters, a statistical procedure called "Cohen's Kappa Statistic" is used and reported as a reliability coefficient
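
Cohen's kappa corrects the raw percentage of agreement for the agreement two raters would reach by chance alone. A stdlib sketch with invented yes/no judgments from two raters:

```python
# Cohen's kappa for two raters' categorical judgments (a study sketch).
from collections import Counter

def cohens_kappa(r1, r2):
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n   # raw agreement
    c1, c2 = Counter(r1), Counter(r2)
    # Chance agreement: product of each rater's category proportions
    expected = sum(c1[cat] * c2[cat] for cat in set(r1) | set(r2)) / n ** 2
    return (observed - expected) / (1 - expected)

rater1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
rater2 = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]
print(cohens_kappa(rater1, rater2))   # → 0.5
```

Here the raters agree on 6 of 8 cases (75%), but since chance alone would produce 50% agreement, kappa credits them with only 0.5 of the possible beyond-chance agreement.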

Once an instrument is selected:

-Determine what levels of data are produced -What statistical procedures are appropriate for these types of data.

Nonparametric Statistics

-Do not estimate parameters -Involve variables measured on a nominal or ordinal scale -Have less restrictive assumptions about the shape of the variables' distribution than parametric tests

Inferential Estimation

-Estimation of a population parameter based on a sample -Point estimation -Example: estimating the percentage of childhood obesity in a population based on a sample -Interval estimation -Example: estimating the age range when childhood obesity is likely to start -Level of confidence -confidence associated with an interval estimate; -reliability of the estimate (e.g., .95 confidence)

Statistical Inference—Two Types

-Estimation of parameters -Hypothesis testing (more common)

Interpreting research findings

-Examining the meaning of results -Considering the significance of findings -Generalizing study findings -Drawing conclusions -Suggesting implications for practice

Nominal Data Coding

-Example: Categories male and female; code them "1" and "2" respectively. -Male and female have no numerical value. -Assign values to sort people into two categories for data analysis purposes. -Quantitative analysis is based on numbers -SPSS = Statistical Package for the Social Sciences

How do we express reliability?

-Expressed numerically as a coefficient; reliability coefficient (r) -Reliability coefficients can range from 0.00 to 1.00 -r = 1.0 would be a perfect reliability coefficient; no instrument/test is perfect

Examples of Nominal

-Gender: 1) male 2) female -Blood type: 1) A 2) B 3) AB 4) O

Examples of interval/ratio levels of measurement

-Heart rate: 90 beats/minute -Age: 53 -Systolic blood pressure: 132 -Travel time to day care: 35 minutes

Measures of Dispersion: Variance and Standard Deviation

-How far did the scores stray from the middle? -Both are based on a deviated score -A deviated score is the mean subtracted from the raw score -Raw score on test: 76 -Class mean: 96 -Raw - mean = -20 (The raw score is 20 points below the class mean.) -Raw scores below the mean produce negative deviated scores

Correlation (r)

-Indicates direction and magnitude of relationship (association) between two variables -Used with interval or ratio measures -Correlation coefficients (usually Pearson's r) summarize information -With multiple variables, a correlation matrix can be displayed
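
Pearson's r is the covariance of the paired scores divided by the product of their standard deviations. A stdlib sketch; the paired values are invented to show a strong negative relationship.

```python
# Pearson's r from raw paired scores (a study sketch).
from math import sqrt

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))   # co-deviation
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

hours_exercised = [1, 2, 3, 4, 5]        # hypothetical paired measurements
resting_hr = [80, 76, 74, 71, 69]
print(round(pearson_r(hours_exercised, resting_hr), 2))
```

The result near -1 says high values of one variable are consistently paired with low values of the other, i.e., a strong negative correlation in the sense defined earlier in this deck.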

Quantitative data uses

-Instrument -Surveys -Questionnaires (Open vs. Closed) -Scales -Q sorts -Biophysical Measures

Qualitative data uses

-Interviews -Focus groups -Use of audio tapes -Participant observation

Parametric Statistics

-Involve the estimation of a parameter -Require measurements on at least an interval scale -Involve several assumptions (e.g., that variables are normally distributed in the population)

Ordinal Measurement Coding

-Involves ranking objects based on their relative standing on an attribute -Example: -Measuring the client's ability to do his ADL's using a scoring system of 1=dependent, 2=max assist, 3=moderate assist, 4=minimum assist, 5=independent

Four Levels of Measurement

-Lowest - Nominal (categories) -Next - Ordinal (higher to lower) -Then - Interval (equal distances between numbers) -Highest - Ratio (has an absolute zero) -Aim for highest level to statistically manipulate the data -Lower levels lose specificity of information -Age -Income

What is validity?

-Refers to whether a measurement instrument accurately measures what it is supposed to measure; it correctly measures the construct of interest. -For example: a valid instrument that is supposed to measure anxiety does so; it does not measure another construct such as stress or fatigue. -There are three major kinds of validity. -An instrument can be reliable but not valid.

Examples of nominal data

-Religion & gender -School Districts

Hypothesis Testing

-Researcher determines if there was a statistically significant difference between one intervention and another -Ex. pain relief from massage versus pain relief from aromatherapy -Or between one group and another -Ex. difference in mobility between bowlers and tennis players -Uses inferential statistics. -The hypothesis is tested to see if it is supported or rejected. -Based on rules of negative inference: research hypotheses (H1) are supported if null hypotheses (H0) can be rejected -Involves statistical decision making to either: -retain (fail to reject) the null hypothesis, or -reject the null hypothesis -Researchers compute a statistic (often involving a mean) from their data, then determine whether it falls beyond the critical region in the relevant theoretical distribution -If the statistic falls within the central 95% of the distribution, H0 is retained and the results are attributed to chance -If the statistic falls outside the central 95% (beyond the critical value), H0 is rejected -If the value of the test statistic indicates that the null hypothesis is "improbable," the result is statistically significant at a given level (e.g., p < .05 or p < .01) -A nonsignificant result means that any observed difference or relationship could have resulted from chance fluctuations -Statistical decisions are either statistically significant or not significant, because the significance level is set before the study

Interval/Ratio measurement

-Richest, most sophisticated form of measurement -Distance between any two numbers are known and of equal size -Both levels are clumped together because both exhibit consistent interpretation -Difference between both; in ratio levels of measurement, there is an absolute zero point -interval level of measurement possesses all characteristics of a nominal and an ordinal scale, in addition to having equal interval sizes based on an actual unit of measurement. -The ratio level of measurement is the highest level of measurement and is characterized by equal distances between scores with an absolute zero point.

Publication

-Scientific papers must be written for a specific audience. -A well-written research paper explains the researcher's motivation for doing the study and the meaning of the results. -Research papers are written in a style that is exceedingly clear and concise. Their purpose is to inform an audience of other researchers. -The process leading to publication requires the researcher to pay particular attention not only to the content being presented but also to the style and organization of the paper. -Each journal has its own specific style -Familiarize yourself with the journal, read the author guidelines, and send a query letter.

Overview of Hypothesis-Testing Procedures

-Select an appropriate statistic. -Establish the level of significance (e.g., = .05). -Compute test statistic with actual data. -Determine degrees of freedom (df) for the statistic.

Sensible Statistics

-Set the alpha level to prevent a Type I error before collecting data -Confidence level: 1 - α = 1 - .05 = .95, i.e., 95% confidence that the results were not due to chance

Formulate the problem

-Similar to quantitative research -Problem often emerges from clinical issues -Perhaps the literature lacks depth that a quantitative study can provide

Example of Ordinal measurement

-Social Class: 1) Upper 2) Middle 3) Lower -Exercise: 1) <15 min/day 2) 16-30 min 3) > 31 min/day

Ordinal measurement

-Sorting of categories on the basis of their standing relative to one another -"Rank-ordering" of categories -Rank-ordering does not indicate the magnitude of difference (strength) -specifies the order of items being measured without specifying how far apart they are. Ordinal scales classify categories incrementally and rank-order each category.

Multivariate Statistics

-Statistical procedures for analyzing relationships among 2 or more outcome variables -Commonly used procedure in nursing research: -Multiple regression

Research Process and the Sample

-Study of a sample of a population -Usually not possible to study the entire population -Developing a measuring tool is a research study in itself -Nurse researchers use instruments/tools that have already been developed with validity and reliability support. -Nurse investigators use instruments that have been developed by researchers in nursing and other disciplines

Measurement

-Systematic assignment of numerical values to concepts/variables to reflect properties of the concepts/variables. -Concepts = In Theories -Variables = In Research -Explain the most with fewest number -Relate to Quantitative Data & Research

Oral Presentation

-Technique that provides researchers with an opportunity to present information through verbal means -Preparation is the key to giving an effective oral presentation -Know your topic; you are the expert -Have an idea of your audience's background -Be sure to prepare an outline

Multiple Linear Regression

-Used to predict a dependent variable based on two or more independent (predictor) variables -Predictor variables are continuous (interval or ratio) or dichotomous -Dependent variable is continuous (interval or ratio-level data)

Discriminant Function Analysis

-Used to predict categorical dependent variables (e.g., compliant/noncompliant) based on 2 or more predictor variables -Accommodates predictors that are continuous or dichotomous -Example: used to predict passing NCLEX-RN

Factor Analysis

-Used to reduce a large set of variables into a smaller set of underlying dimensions (factors) -Used primarily in developing scales and complex instruments -Takes all the variables in the large set and reduces them according to underlying dimensions (factors); for example, a group of items may all "factor out" together on a single underlying concept such as the nursing process.

Inferential statistics

-Using a sample to make generalizations to a population. -Parameter is a numerical characteristic of a population (e.g., population mean, population standard deviation) -Based on laws of probability (p)- likelihood that an event will occur, given all possible outcomes -Uses the concept of theoretical distributions

Standard deviation (SD)

-average deviation of scores in a distribution -Calculate the deviated scores for all the scores in the distribution -A deviated score is the difference between a raw score and the mean of the sample -Square the deviated scores so all numbers become positive (otherwise the sum will be 0), then divide by the number of scores to obtain the variance -Used with interval or ratio level data; summarizes the average amount of deviation of all values of a distribution from the mean.
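
The steps above can be traced explicitly in a short stdlib sketch; the test scores are invented so the mean comes out to the deck's example value of 96.

```python
# Standard deviation step by step: deviations, squares, variance, SD.
from math import sqrt

scores = [76, 86, 96, 106, 116]          # hypothetical test scores
m = sum(scores) / len(scores)            # mean = 96
deviations = [x - m for x in scores]     # raw score minus mean
variance = sum(d ** 2 for d in deviations) / len(scores)  # population variance
sd = sqrt(variance)                      # SD = square root of variance
print(m, variance, round(sd, 2))         # → 96.0 200.0 14.14
```

Note the unsquared deviations (-20, -10, 0, 10, 20) sum to exactly 0, which is why squaring is required before averaging.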

Frequency Distribution Graphs

-bar graph -histogram -frequency polygon

Descriptive Statistics consists of

-central tendency -dispersion

Degrees of Freedom

-df -# of scores that are free to vary when computing variability -df = n - 1 (usually) -the mean is used to determine variability, and the mean itself is not free to vary

Range

-highest value minus lowest value (the smaller the range the less the variability) -simplest measure of dispersion. -Not the best measure because it is based on only two values.

Dispersion consists of

-how far apart --- range and standard deviation -(SD - average variation from the mean)

Central Tendency consists of

-mean, mode, median -mean - arithmetic average -mode - most frequently occurring value -median - middle number in a set
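Python's standard statistics module computes all three; the scores below are hypothetical:

```python
import statistics

# Hypothetical set of five scores
scores = [2, 3, 3, 4, 8]

mean = statistics.mean(scores)      # arithmetic average: 4
median = statistics.median(scores)  # middle number of the sorted set: 3
mode = statistics.mode(scores)      # most frequently occurring value: 3
```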

What factors could interfere with an accurate measurement?

-meaningfulness -appropriateness

Visual analogue scale

-measures subjective phenomena; uni-dimensional; quantifies intensity only. -a 100-mm-long line with anchors at each end to indicate extremes of the phenomenon being assessed. Subjects are asked to mark a point on the line to indicate the amount of the phenomenon experienced at a particular time.

Four levels of measurement

-nominal -ordinal -interval -ratio

Disseminating Research Findings: Integral part of the research process; three forms

-publication -oral presentation -poster presentation

Variance

-Standard deviation (SD) squared = variance -Square root of the variance = SD -SD is the most stable measure of variability (but sensitive to outliers) -SD describes how data vary around the mean
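The SD/variance relationship can be checked directly with Python's statistics module (hypothetical data):

```python
import statistics

# Hypothetical distribution
data = [2, 4, 4, 4, 5, 5, 7, 9]

var = statistics.pvariance(data)  # population variance: 4.0
sd = statistics.pstdev(data)      # population standard deviation: 2.0

# SD squared equals the variance; the square root of the variance equals the SD
check = (sd ** 2 == var)
```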

Shapes of Distributions

-symmetric -skewed

Concurrent validity

-the degree of correlation of two measures of the same concept administered at the same time (2 measures given at the same time, generally). -demonstrated when a test correlates well with a measure that has previously been validated. Take a measure like the Denver II developmental assessment: use it alongside a newly developed assessment tool. If both agree, the correlation shows that the new tool is as predictive as the established one, the Denver II. This demonstrates that the new tool is also valid.

Unimodal

1 peak

Developing a Questionnaire: Design Good Answers

1) Open-ended responses 2) Categorical responses 3) Continuous responses

Developing a Questionnaire: Design Focused questions

1) Specific not general 2) Use simple language 3) Each question represents one concept 4) Be specific in time related questions 5) Phrase neutral questions

5 Principles for Ordering Questions

1) Start with a successful topic 2) Group similar content questions 3) Build continuity throughout the questionnaire 4) Place sensitive topics 2/3rd of the way into the questionnaire 5) Demographic questions should be place at the end of the questionnaire

Steps associated with conducting qualitative research

1. formulate the problem 2. literature review and research question 3. sample and setting 4. data collection 5. data analysis

Bimodal

2 peaks

Multimodal

2+ peaks

2 Types of Criteria-related validity

Concurrent validity and Predictive validity

Frequency polygon

For continuous variables like interval & ratio data

Histogram

For continuous variables like interval and ratio data (e.g. age)

Bar graph

For discrete variables like nominal & ordinal data (e.g. nursing education levels)

Central tendency

How data cluster together

Data Collection Methods

Methods and instruments associated with data collection are chosen according to: 1) the nature of the problem 2) approach to the solution 3) variables being studied

Equivalence: (Interrater Reliability) Cohen's Kappa Statistic

Instruments that depend on direct observation of a behavior that is to be systematically recorded, OR instruments whose results must be reviewed and given a score, such as the essay portion of the SAT exam. To establish interrater reliability, two or more trained individuals observe the behavior and score it using the tool. The scores are compared and expressed as a percentage of agreement or as a correlation coefficient (Cohen's kappa, K): a K greater than 0.8 is considered good; a K less than 0.68 is tentative.

Developing a Questionnaire

Ordering the questions: the questionnaire is a collection of questions in which the whole becomes more than the sum of its parts.

Scale

Set of numerical values assigned to responses, representing the degree to which subjects possess a particular attitude, value, or characteristic

Homogeneity (Internal consistency)

The items within the scale, instrument, or tool correlate with or complement each other. The score is uni-dimensional - it measures one concept.

Examples of concurrent validity

a measure of job satisfaction might be correlated with work performance. Note that with concurrent validity, the two measures are taken at the same time.

range

a measure of variability that is the difference between the lowest and highest values in a distribution

correlation

a measure that defines the relationship between two variables

survey

a method of collecting data to describe, compare, or explain knowledge, attitudes, or behaviors

dichotomous variable

a nominal variable that consists of only two categories (in contrast, continuous variables take on an infinite number of different values along a continuum).

Chi-square

a nonparametric procedure used to assess whether a relationship exists between two nominal-level variables; symbolized as x^2
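A chi-square statistic for a 2x2 contingency table can be computed by hand in Python; the counts below are hypothetical:

```python
# Hypothetical 2x2 contingency table:
# rows = two independent groups, columns = two nominal categories
observed = [[10, 20],
            [20, 10]]

row_totals = [sum(row) for row in observed]        # [30, 30]
col_totals = [sum(col) for col in zip(*observed)]  # [30, 30]
grand = sum(row_totals)                            # 60

chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand  # 15.0 in every cell
        chi2 += (obs - expected) ** 2 / expected          # sum of (O - E)^2 / E

df = (len(observed) - 1) * (len(observed[0]) - 1)  # degrees of freedom: 1
```

Each cell's expected frequency comes from its row and column totals; the statistic sums the squared observed-minus-expected differences relative to those expectations.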

psychosocial instrument

a paper-and-pencil test that measures a particular psychosocial concept or variable.

analysis of variance (ANOVA)

a parametric procedure used to test whether there is a difference among three group means

phenomenology

a philosophy and research method that explores and describes everyday experience as it appears to human consciousness in order to generate and enhance the understanding of what it means to be human. It limits philosophical inquiry to acts of consciousness.

Factor analysis (an advanced statistical procedure)

a popular method used to assess construct validity.

t-test

a popular parametric procedure for assessing whether two group means are significantly different from one another
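A pooled-variance independent-samples t statistic can be computed by hand; the two groups below are hypothetical, and the pooling step assumes equal group variances:

```python
# Hypothetical scores for two independent groups
a = [1, 2, 3, 4, 5]
b = [3, 4, 5, 6, 7]

def mean(xs):
    return sum(xs) / len(xs)

def sample_var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

na, nb = len(a), len(b)
# Pooled variance (assumes both groups share the same population variance)
pooled = ((na - 1) * sample_var(a) + (nb - 1) * sample_var(b)) / (na + nb - 2)
t = (mean(a) - mean(b)) / (pooled * (1 / na + 1 / nb)) ** 0.5  # -2.0
df = na + nb - 2                                               # 8
```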

ethnography

a qualitative research approach developed by anthropologists, involving the study and description of a culture in the natural setting. The researcher is intimately involved in the data collection process and seeks to understand fully how life unfolds for the particular culture under study

scale

a set of numerical values assigned to responses that represent the degree to which respondents possess a particular attitude, value, or characteristic

Semantic differential scale

a set of scales using pairs of adjectives that reflect opposite feelings; only the two extremes are labeled

Cronbach's Alpha

a statistical test and is the most commonly used test for internal consistency. It measures how well a set of variables or items measures a single, one-dimensional construct. It is used with Likert scale instruments. It simultaneously compares each item in the scale with the others.
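The alpha formula - k/(k-1) times (1 minus the sum of the item variances over the variance of the total scores) - can be sketched in Python with a hypothetical 4-respondent, 3-item Likert data set:

```python
# Hypothetical Likert responses: 4 respondents (rows) x 3 items (columns)
responses = [
    [1, 1, 2],
    [2, 2, 2],
    [3, 3, 4],
    [4, 4, 4],
]

def sample_var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

k = len(responses[0])                                      # number of items
item_vars = [sample_var(list(col)) for col in zip(*responses)]
total_var = sample_var([sum(row) for row in responses])    # variance of total scores

# alpha = k/(k-1) * (1 - sum of item variances / total score variance)
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)   # about 0.975
```

Items that "hang together" make the total-score variance large relative to the summed item variances, pushing alpha toward 1.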

questionnaire

a structured survey that is self-administered or interviewer administered

predictive validity

ability to predict future events, behaviors, or outcomes

mesokurtic

almost normal bell shape

interview schedule

also known as an interview guide, is a list of topics or an open-ended questionnaire administered to subjects by a skilled interviewer.

fieldwork

an anthropological research approach that involves prolonged residence with members of the culture that is being studied. Field notes are written as detailed descriptions of researchers' observations, experiences, and conversations in the "field" (research setting)

test-retest reliability

an approach to reliability examining the extent to which scores are consistent over time

Q Methodology (or Q sort)

an example of a sorting technique used to characterize opinions, attitudes, or judgments of individuals through comparative rank ordering.

Focus groups

bring together a small group of participants for an in-depth group interview. The technique serves a variety of purposes, with the ultimate goal of observing the interactions among focus group members and detecting their attitudes, opinions, and solutions to specific topics posed by the facilitator. Focus groups are designed to be nonthreatening so participants can express and clarify their views in ways that are less likely to occur one-on-one.

Open-ended questionnaires

ask subjects to provide their own specific answers. This type of questionnaire allows participants to write a response as opposed to answering a question with fixed choices. It is less frequently found in studies in which quantitative methods for data analysis are planned.

Closed-ended questionnaires

ask subjects to select an answer from among several choices. This type of questionnaire is often used in large surveys when questionnaires are mailed.

limitations

aspects of a study that are potentially confounding to the main study variables

Normal distribution

bell shaped curve, symmetric, unimodal, not too peaked, not too flat

One popular statistical procedure that quantifies the degree of consistency among raters is

called Cohen's kappa. This statistical procedure is used with nominal data and is designed for situations in which raters classify the items being rated according to discrete categories.
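Kappa corrects the raw percentage of agreement for agreement expected by chance: K = (Po - Pe) / (1 - Pe). A sketch with hypothetical ratings from two observers:

```python
from collections import Counter

# Hypothetical nominal ratings from two trained observers on 10 behaviors
rater1 = ['A', 'A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B']
rater2 = ['A', 'A', 'A', 'A', 'B', 'B', 'A', 'B', 'B', 'B']

n = len(rater1)
# Observed agreement: proportion of items both raters classified identically
po = sum(a == b for a, b in zip(rater1, rater2)) / n       # 0.7

# Chance agreement: product of each rater's category proportions, summed
c1, c2 = Counter(rater1), Counter(rater2)
pe = sum((c1[c] / n) * (c2[c] / n) for c in set(rater1) | set(rater2))  # 0.5

kappa = (po - pe) / (1 - pe)                               # 0.4
```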

Another statistical procedure used to assess internal consistency is

called Kuder-Richardson Formula 20, or simply, KR-20. This procedure is used when the items of an instrument are scored dichotomously (e.g., 1 for yes; 0 for no).
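KR-20 follows the same logic as Cronbach's alpha but uses p*q (proportion answering 1 times proportion answering 0) as each item's variance. A sketch with a hypothetical 4-examinee, 3-item set of dichotomous answers (sample variance, n - 1, is assumed for the total scores; conventions vary):

```python
# Hypothetical dichotomous answers: 4 examinees (rows) x 3 items (columns)
answers = [
    [1, 1, 1],
    [1, 1, 0],
    [0, 1, 0],
    [0, 0, 0],
]

n = len(answers)        # number of examinees
k = len(answers[0])     # number of items

# For each item: p = proportion scoring 1, q = 1 - p
pq_sum = sum((sum(col) / n) * (1 - sum(col) / n) for col in zip(*answers))

# Sample variance of the total test scores
totals = [sum(row) for row in answers]
m = sum(totals) / n
total_var = sum((t - m) ** 2 for t in totals) / (n - 1)

kr20 = (k / (k - 1)) * (1 - pq_sum / total_var)  # about 0.94
```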

Qualitative Data

can be: observed, written, taped, or filmed.

Advantages of Qualitative data

collecting the data does not require prior knowledge of the subject and individual variation can be recorded in depth.

The process of determining validity

is by no means an easy task. An instrument may have excellent reliability, but not measure what it claims to measure. However, an instrument's data must be reliable if they are to be valid. Thus, high reliability is a necessary, though insufficient, condition for high validity.

A reliable measure

is one that can produce the same results if the behavior is measured again by the same scale.

Psychometric evaluation of an instrument

is primarily concerned with the construction and validation of measurement instruments.

query letter

letter written to an editor to determine the level of interest in publishing a research report

interval level of measurement

level of measurement characterized by a constant unit of measurement or equal distances between points on a scale

ordinal level of measurement

level of measurement that yields rank-ordered data

probability

likelihood that an event will occur, given all possible outcomes

interview schedule

list of topics or an open-ended questionnaire administered to subjects by a skilled interviewer. Sometimes referred to as an interview guide

Negative skew

long tail points to left

Positive skew

long tail points to the right
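A rough heuristic connects skew to the central-tendency measures: the mean is pulled toward the long tail, while the median resists outliers. A Python sketch with hypothetical data:

```python
import statistics

# Hypothetical positively skewed distribution (long tail to the right)
data = [1, 2, 2, 3, 20]

mean = statistics.mean(data)      # pulled toward the tail: 5.6
median = statistics.median(data)  # resistant to the outlier: 2

# mean > median suggests positive skew; mean < median suggests negative skew
positively_skewed = mean > median
```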

nominal level of measurement

lowest level of measurement, which consists of assigning numbers as labels for categories. These numbers have no numerical interpretation

variance

measure of variability, which is the average squared deviation from the mean

Modality

number of peaks. Consists of unimodal, bimodal, multimodal

parameter

numerical characteristic of a population (e.g., population mean, population standard deviation)

Quantitative data

numerical. Data can be used directly (e.g. weight, age in years) or to form categories (e.g. male and female) that can be formulated into counts or tables

saturation

point when data collection is terminated because no new description and interpretations of the lived experience are coming from the study participants

Qualitative data is used for

preliminary investigation for new areas and understanding of quantitative results

level of confidence

probability level in which the research hypothesis is accepted with confidence. A 0.05 level of confidence is the standard among researchers

coding

process by which data are conceptualized

theoretical sampling

process used in data collection that is controlled by the emerging theory; researcher collects, codes, and analyzes the data

Is weight considered interval or ratio data?

ratio

Likert Scale

referred to as "summative scales"; response choices commonly address agreement, evaluation, or frequency

coefficient of stability

referred to as test-retest reliability, deals with the consistency of repeated measurements. It is the extent to which scores are consistent over time.

descriptive statistics

statistics that describe and summarize data

inferential statistics

statistics that generalize findings from a sample to a population

Questionnaires

structured self-administered surveys. The most common way of distribution is mail. However, some situations allow for face-to-face administration. This form of data collection is economical and can reach large populations. Disadvantage is the low return rate (30-60% for most studies).

Mode

the most frequently occurring score in a distribution

standard deviation (SD)

the most frequently used measure of variability; the distance a score varies from the mean

Pearson Correlation Coefficient

the statistic most widely used to examine the relationship between two quantitative sets of scores. It requires interval- or ratio-level data. The symbol is (r). Correlation coefficients can range from -1.00 to +1.00. A value of -1.00 represents a perfect negative correlation, while a value of +1.00 represents a perfect positive correlation. A value of 0.00 represents a lack of correlation.
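Pearson's r is the sum of the cross-products of the deviation scores divided by the square root of the product of the two sums of squares. A sketch with hypothetical paired interval-level scores:

```python
# Hypothetical paired interval-level scores for the same subjects
x = [1, 2, 3, 4, 5]
y = [2, 1, 4, 3, 5]

mx = sum(x) / len(x)
my = sum(y) / len(y)

sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))  # cross-products
sxx = sum((a - mx) ** 2 for a in x)                   # sum of squares for x
syy = sum((b - my) ** 2 for b in y)                   # sum of squares for y

r = sxy / (sxx * syy) ** 0.5  # 0.8: a strong positive correlation
```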

Median

the point in a distribution above which, and below which 50% of cases fall

What is reliability?

the proportion of accuracy to inaccuracy in measurement - if we use the same or comparable instruments on more than one occasion to measure a set of behaviors that ordinarily remain relatively constant.

mode

the score or value that occurs most frequently in a distribution; a measure of central tendency used most often with nominal-level data

response set bias

the tendency for subjects to respond to items on a questionnaire in a way that does not reflect the real situation

symbolic interaction

theoretical orientation to qualitative research; focus is on the nature of social interaction among individuals

The purpose of a scale

to distinguish among people who show different intensities of the concept to be measured.

Bivariate

two variables; usually an independent and dependent

category

type of concept that is usually used for a higher level of abstraction

open-ended questionnaire

type of format in which subjects are asked to provide specific answers

closed-ended questionnaire

type of format in which subjects are asked to select an answer from several choices

visual analogue scale

type of scale that measures subjective phenomena (e.g., pain, fatigue, shortness of breath, anxiety). The scale is 100 mm long with anchors at each end quantifying intensity. Subjects are asked to mark a point on the line indicating the amount of the phenomenon experienced at that time

Qualitative Collection methods include:

unstructured interviews, focus groups, direct observation, use of audio tapes, case studies, field notes, diaries, or historical documents.

Stability

when the same results are obtained on repeated administration of the instrument. Researchers expect the instrument to measure a concept consistently over a period of time. If you want to measure a client's hope prior to receiving chemotherapy, and then 3 and 6 months later, you would want the instrument to be stable.

Examples of predictive validity

where one measure occurs earlier and is meant to predict some later measure. In a study of predictive validity, the test scores are collected first; then at some later time the criterion measure is collected. Here the example is slightly different: tests are administered, perhaps to job applicants, and then after those individuals work in the job for a year, their test scores are correlated with their first-year job performance scores. Another relevant example is SAT scores: these are validated by collecting the scores during the examinee's senior year of high school and then waiting a year (or more) to correlate the scores with their first-year college grade point average. Thus predictive validity provides somewhat more useful data about test validity because it has greater fidelity to the real situation in which the test will be used. After all, most tests are administered to find out something about future behavior. Think about HESI exams having predictive validity for the NCLEX-RN.

Cronbach's alpha (coefficient alpha)

widely used index of the extent to which a measuring instrument is internally consistent

memos

write-ups of ideas about codes and their relationships as they occur to the researcher while coding

Grounded theory research

• Discovery of theory from data • Research problem/question: research problem is discovered; researcher is careful not to force the data, keeping an open mind to the emergence of the participant's problem • Goal: generate a theory from data collected about a substantive area of study • Data collection: participant observation, informal and formal interviewing • Data analysis: coding - the process of conceptualizing data into patterns called concepts. The "constant comparative method of data analysis" makes sense of data by categorizing units of meaning through a process of constantly comparing incidents until categories emerge.

Ethnography

• Focus on the culture of a group of people with an effort to understand the world view of those under study • Assumption: every group evolves a culture that guides the members' view of the world and the way they structure their experiences • Data collection: participant observation • Fieldwork: prolonged residence (5 months-5 years) in the culture

Reflections on a qualitative approach

• Gaining insight through the discovery of meaning • Emphasis on understanding human experiences • Holistic approach utilized; focused on the nature of "reality" as the patient (participant) understands it • Perspective of the patient (participant)

Ethical considerations

• Informed consent is required - Knowledgeable & expressed choice to participate - Without coercion, deceit, or duress • Must include: - Explanation of research - Procedures - Risks (should never exceed the importance of the problem) - Benefits

literature review and research question

• Literature review may occur at two points - beginning and end • No hypotheses • Research questions are broad, open-ended, and non-directional; guides the selection of research approach

Scientific rigor in qualitative research

• No reliability/validity • Trustworthiness 1. Credibility: how truthful or believable are the findings? • Was there extended time in the field? And member checks? 2. Transferability: ability to transfer findings. Has sufficient information been provided about the context? 3. Dependability: stability • Was there consistency of data in interviews? 4. Confirmability: captures a sense of objectivity in the research • Three elements: auditability, credibility, and fittingness • Was there use of audit trails and/or judge panels?

phenomenology

• Researcher asks, "What is the essence or 'lived experience' of a particular phenomenon and what does it mean?" • Main data source: interviews • Sample: usually small number of participants, until data reaches saturation • Researcher conducts "bracketing"-identifying and holding preconceived beliefs/opinions about the phenomenon • Example of phenomenon: meaning of suffering; experience of domestic violence; experience of hunger; quality of life with chronic pain • Goal: Understand the lived experience

Role of researcher in qualitative research

• Researcher is removed, distant, objective. • Researcher is equal partner or collaborator. • Researcher identifies his/her biases prior to starting the study. • Researcher maintains notes on his/her thoughts and feelings and responses to participants. • Researcher must guard against involvement.

