MSN Comprehensive
6. Identify and define sampling techniques and strategies as these apply to the research process.
A sampling method is the process of selecting a group of people, events, behaviors, or other elements that represent the population being studied. The sampling method in a study varies with the type of research being conducted. Quantitative, outcomes, and intervention research apply a variety of probability and nonprobability sampling methods, whereas qualitative methods usually include nonprobability sampling methods. Probability sampling methods were developed to ensure some degree of precision in estimations of the population parameters; they reduce sampling error. Probability sampling means that every member of the population has a probability higher than zero of being selected for the sample. These techniques are also referred to as random sampling methods. Such samples are more likely to represent the population than samples obtained with nonprobability sampling methods, and there is less opportunity for bias when subjects are selected randomly. Random sampling leaves the selection to chance, which decreases sampling error and increases the validity of the study. To obtain a probability sample, the researcher must develop a sampling frame that includes every element in the population, and the sample must be randomly selected. There are four probability sampling designs: simple random sampling, stratified random sampling, cluster sampling, and systematic sampling. Simple random sampling is the most basic of the probability sampling methods. To achieve it, elements are selected at random from the sampling frame. If the sampling frame is small, the researcher can write the names on slips of paper, place them in a container, mix well, and draw out one at a time until the desired sample size is obtained. Another technique is to assign a number to each name in the sampling frame. In large populations, elements may already have assigned numbers.
Example: numbers are assigned to medical records, organizational memberships, and professional licenses. A computer can select these numbers randomly to obtain a sample. There can be some difference in the probability of selection for each element, depending on whether the name or number of the selected element is replaced before the next one is selected. Selection with replacement provides exactly equal opportunities for each element to be selected; selection without replacement gives each element a different probability of selection. Stratified random sampling is used when the researcher knows some of the variables in the population that are critical to achieving representativeness. Variables used for stratification include age, gender, ethnicity, socioeconomic status, diagnosis, geographical region, type of institution, type of care, care provider, and site of care. The variables chosen for stratification need to be correlated with the dependent variables being examined in the study. Subjects are randomly selected on the basis of their classification into the selected strata. Example: when conducting research, you select a stratified random sample of 100 adult subjects using age as the stratification variable. The sample may include 25 subjects aged 18 to 39, 25 aged 40 to 59, 25 aged 60 to 79, and 25 aged 80 years or older. Stratification ensures that all levels of the identified variable are represented in the sample, so a smaller sample can achieve the same degree of representativeness as a larger sample acquired through simple random sampling. Sampling error decreases, power increases, data collection time is reduced, and the cost of the study is lower.
One question that arises with stratification is whether each stratum should have equivalent numbers of subjects in the sample (disproportionate sampling) or whether the number of subjects should be selected in proportion to their occurrence in the population (proportionate sampling). Stratification is not as useful if one stratum contains only a small number of subjects. Cluster sampling is applied when the population is heterogeneous; it is similar to stratified random sampling but takes advantage of natural clusters or groups of population units that have similar characteristics. It is used in two situations: when a simple random sample would be prohibitive in terms of travel time and costs, and when the individual elements making up the population are unknown, preventing the development of a sampling frame. In cluster sampling, the researcher develops a list of all the states, cities, institutions, or organizations with which the elements of the identified population would be linked. States, cities, institutions, or organizations are then selected randomly as units from which to obtain elements for the sample. This random selection can continue through several stages and is then referred to as multistage cluster sampling. Example: the researcher might randomly select states and then randomly select cities within those states. Cluster sampling provides a larger sample at lower cost. A disadvantage is that data from subjects associated with the same institution are likely to be correlated and not completely independent, which can decrease precision and increase sampling error; this disadvantage can be offset by obtaining a larger sample. Systematic sampling can be conducted when an ordered list of all members of the population is available. The process involves selecting every kth individual on the list, using a starting point selected randomly.
To use this design, you must know the number of elements in the population and the desired sample size. Divide the population size by the desired sample size to obtain k, the size of the gap between elements selected from the list. Example: if the population size is 1,200 and the desired sample size is 100, k equals 12, which means every 12th person on the list is included in the sample. Some researchers believe that this method does not give each element an equal opportunity to be included in the sample; it provides a random but unequal chance of inclusion. Researchers must determine that the original list was not set up with any ordering that could be meaningful in relation to the study. In nonprobability sampling, not every element of the population has an opportunity to be included in the sample. This method increases the likelihood of obtaining samples that are not representative of their target population. There are five nonprobability sampling methods: convenience sampling, quota sampling, purposive or purposeful sampling, network or snowball sampling, and theoretical sampling. Convenience sampling and quota sampling are applied more often in quantitative studies; purposive, network, and theoretical sampling are more commonly applied in qualitative studies. In convenience sampling, subjects are included in the study because they happen to be in the right place at the right time. The researcher enters available subjects into the study until the desired sample size is reached. This is considered a weak approach to sampling because it provides little opportunity to control for biases, so biases need to be identified and described in the sample. Extraneous variables need to be controlled by specifying inclusion criteria for the sample; doing so limits the extent of generalization but decreases the bias created. Many strategies are available for selecting this type of sample. A classroom of students can be used.
Other examples include patients who attend a clinic on a specific day, subjects who attend a support group, patients currently admitted to a hospital with a specific diagnosis, and every fifth person who enters the ED. Convenience samples are inexpensive and accessible, and they require less time to acquire than other types of samples. Quota sampling uses a convenience sampling technique with an added feature: a strategy to ensure the inclusion of subject types or strata in a population that are likely to be underrepresented in a convenience sample, such as women, minority groups, elderly adults, poor people, rich people, and undereducated adults. This method can be used to mimic the known characteristics of the target population or to ensure adequate numbers of subjects in each stratum for the planned statistical analyses. It is an improvement on convenience sampling and tends to decrease potential biases. In purposive sampling, sometimes referred to as purposeful, judgmental, or selective sampling, the researcher consciously selects certain participants, elements, events, or incidents to include in the study. Qualitative researchers select information-rich cases, which are cases that make a point clearly or are extremely important in understanding the purpose of the study. The researcher may select participants of various ages, with differing diagnoses or illness severity, or who received an ineffective versus an effective treatment for their illness. This sampling technique has been criticized because it is difficult to evaluate the precision of the researcher's judgment. The researcher must indicate the characteristics desired in participants and provide a rationale for selecting these types of participants to obtain essential data for the study. Purposive sampling is used to gain insight into a new area of study or to obtain in-depth understanding of a complex experience or event.
Network sampling, also referred to as snowball or chain sampling, holds promise for locating samples that are difficult or impossible to obtain in other ways or that have not been previously identified for study. Network sampling takes advantage of social networks and the fact that friends tend to have characteristics in common. When a few participants who meet the necessary criteria have been found, you can ask for their assistance in getting in touch with others with similar characteristics. The first few participants are usually found through convenience or purposive sampling, and the sample size is expanded using network sampling. This strategy is useful for finding participants who can provide the greatest insight and essential information about an experience or event identified for study. It is also useful for finding participants in socially devalued populations such as alcoholics, child abusers, sex offenders, drug addicts, and criminals. Biases are built into the sampling process because participants are not independent of one another. Theoretical sampling is usually applied in grounded theory research to advance the development of a selected theory throughout the research process. The researcher gathers data from any individual or group that can provide relevant data for theory generation. Data are considered relevant if they generate, delimit, and saturate the theoretical codes needed for theory generation. A code is saturated if it is complete and the researcher can see how it fits in the theory. Diversity in the sample is sought so that the theory developed covers a wide range of behavior in varied situations.
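The three probability designs with a computable rule (simple random, stratified random, and systematic sampling) can be sketched in a few lines of code. This is an illustrative sketch only, not from the source; the sampling frame of 1,200 numbered subjects and the four age strata mirror the examples given above.

```python
# Sketch (assumed data) of three probability sampling designs.
import random

random.seed(42)  # fixed seed so the demonstration is reproducible

# Sampling frame: 1,200 hypothetical subject IDs, as in the worked example
frame = list(range(1, 1201))

# 1. Simple random sampling: every element has an equal chance.
#    random.sample() performs selection WITHOUT replacement.
simple = random.sample(frame, k=100)

# 2. Stratified random sampling: random selection within known strata
#    (here, four age bands of 300 subjects each, 25 drawn per stratum,
#    matching the stratified example of 100 adults above).
strata = {
    "18-39": frame[0:300],
    "40-59": frame[300:600],
    "60-79": frame[600:900],
    "80+":   frame[900:1200],
}
stratified = {band: random.sample(ids, k=25) for band, ids in strata.items()}

# 3. Systematic sampling: every kth element from a random starting point,
#    where k = population size / desired sample size.
k = len(frame) // 100        # k = 1200 / 100 = 12
start = random.randrange(k)  # random start within the first gap
systematic = frame[start::k]

print(len(simple), sum(len(v) for v in stratified.values()), len(systematic))
```

Each design returns 100 subjects; the stratified draw additionally guarantees 25 per age band, which is what lets a smaller stratified sample match the representativeness of a larger simple random sample.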
11. Describe an approach (systematic plan) for applying research findings in practice setting, including evaluation of outcomes.
Buppert (2018) discusses systematic review of research and mentions PRISMA, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses. PRISMA is a 27-item guide that allows the nurse or NP to investigate research findings to ensure they are appropriate. The information can be found at http://prisma-statement.org/, and pages 471-482 break down the information. One type of systematic approach would be to initiate a pilot study. This entails taking the research findings, applying them to a select group or area of practice, and applying a systematic approach to ensure that the findings are truly evident, applicable to practice, cost effective, and appropriate for practice. The pilot study then takes place, and all of the questions listed above are answered. If one area is not met, or it is found that the research findings do not really work in that particular setting, then either more research needs to be completed or the whole project has to be scrapped and the research must start over (Granger, 2017). Buppert (2018) also discusses the Grove Model for Implementing Evidence-Based Guidelines in Practice, in which the quality and usefulness of the guideline must be assessed by looking into: 1. the authors of the guideline, 2. the significance of the healthcare problem, 3. the strength of the research evidence, 4. the link to national standards, and 5. the cost-effectiveness of using the guideline in practice.
18. 4. Apply a community needs assessment to a case study.
I'm going to answer this like a "how to" — I think it will be most beneficial this way. The goal of a community needs assessment is to identify the assets of a community and determine potential concerns that it faces. A straightforward way to estimate the needs of a community is to simply ask residents their opinions about the development of services within the community, their satisfaction with services, and what particular services are needed.
1. Planning. The planning phase begins with establishing a partnership between those organizations that are likely to be involved in the needs assessment. a. The first step in this process is information gathering, followed by learning more about the organization sponsoring the needs assessment and an identification of goals and objectives.
2. Getting ready. This will help your committee identify what needs to be done to collect the data. a. Identify the participants whom you want to survey. b. Identify your needs assessment strategy. c. Determine the types of measures that you will use to collect your information - this will include using focus groups, developing and using a needs assessment survey, and information gathered at community public forums. d. How will the data be collected? For example, door-to-door surveys are often used in needs assessments. e. How will the information be analyzed? f. How will the information be summarized and presented in a final report?
3. Data collection. Once you have developed your needs assessment survey, you can begin data collection. a. Door-to-door surveys. b. Mailed surveys. c. Provide incentives (raffles, drawings, etc.).
4. Summarize the data. a. To compute the results of the survey, you will likely have to use a computer database or computer-analysis program. b. The next step in summarizing the data is to prepare a one-page summary of the main strengths and concerns identified by the survey respondents. This usually takes the form of a "top five" list of concerns and strengths.
5. Write the final report.
An overall report of the needs assessment findings is necessary to provide written proof that an assessment was carried out, and the report can serve to answer any questions regarding the process or findings of the needs assessment. Here is a good guide: https://cyfar.org/sites/default/files/Sharma%202000.pdf
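Step 4 above (summarizing survey responses into a "top five" list with a computer-analysis program) can be sketched in a few lines. The response data below are invented for illustration only.

```python
# Hypothetical sketch of tallying open-ended needs assessment responses
# into a "top five" list of concerns (invented data).
from collections import Counter

responses = [
    "transportation", "housing", "child care", "transportation",
    "housing", "transportation", "senior services", "housing",
    "child care", "transportation", "safety", "housing",
]

# Counter tallies each concern; most_common(5) returns the top five
top_five = Counter(responses).most_common(5)
for concern, votes in top_five:
    print(f"{concern}: {votes}")
```

The same tally, run separately on "strengths" responses, would produce the companion top-five strengths list for the one-page summary.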
17. 3. Differentiate between incidence, prevalence, case fatality rate (CFR), and index case; endemic, epidemic, pandemic, epidemiological designs, the epidemiological triangle, the chain of infection, and the web of causation. Be able to discuss, apply, and utilize these concepts in a case scenario.
Incidence: conveys information about the risk of contracting the disease.
Prevalence: indicates how widespread the disease is.
Case fatality rate (also case fatality risk or ratio): the proportion of deaths within a designated population of "cases" (people with a medical condition) over the course of the disease.
Index case: the FIRST identified case in a group of related cases of a particular communicable or heritable disease.
Endemic: a disease that exists permanently in a particular region or population (e.g., malaria in Africa).
Epidemic: an outbreak of disease that attacks many people at about the same time and may spread through one or several communities.
Pandemic: when an epidemic spreads throughout the world.
Epidemiological design: research measuring the relationship of an exposure with a disease or an outcome. As a first step, researchers define the hypothesis based on the research question and then decide which study design is best suited to answer that question. The chosen study design directs how the investigation is conducted. Study designs can be broadly classified as experimental or observational, based on the approach used to assess whether an exposure and an outcome are associated. In an experimental study design, researchers assign patients to intervention and control/comparison groups in an attempt to isolate the effects of the intervention. Being able to control various aspects of the experimental design enables researchers to identify causal links between interventions and outcomes of interest. In several instances an experimental design may not be feasible or suitable; in such situations, observational studies are conducted. Observational studies, as the name indicates, involve merely observing patients in a non-controlled environment without interfering with or manipulating other aspects of the study, and they are therefore non-experimental.
The observation can be prospective, retrospective or current depending on the subtype of an observational study.
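The first three measures defined above are simple proportions, so a worked example makes the distinctions concrete. All the counts below are invented for illustration.

```python
# Worked example (invented numbers) of incidence, prevalence, and
# case fatality rate (CFR).
new_cases      = 120      # new cases arising during the year
existing_cases = 480      # all current cases in the population
deaths         = 6        # deaths among cases over the disease course
population     = 100_000  # population at risk

incidence  = new_cases / population        # risk of contracting the disease
prevalence = existing_cases / population   # how widespread the disease is
cfr        = deaths / (new_cases + existing_cases)  # deaths per case

print(f"incidence:  {incidence * 1000:.1f} per 1,000 per year")
print(f"prevalence: {prevalence * 1000:.1f} per 1,000")
print(f"CFR:        {cfr:.1%}")
```

Note the different denominators: incidence and prevalence divide by the population at risk, while the CFR divides by the number of cases, which is why a rare disease can still have a high CFR.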
7. Compare and contrast qualitative and quantitative research methodologies and be able to provide a rationale for their use in a research project.
Qualitative research design is defined as a systematic, subjective, interactive, holistic method of research that is used to describe life experiences and give them significance (Grove, Burns, & Gray, 2013). Quantitative research design can be defined as an approach to understanding aspects of the world by using formal, objective, and systematic processes to obtain numerical data. One major difference between the designs is that qualitative research is subjective and quantitative research is objective. When it comes to data, quantitative research uses numerical values whereas qualitative research uses words. The two approaches have some parallels in that both require the researcher's expertise, involve precision in implementation, and result in the generation of scientific knowledge for nursing practice. A researcher would use qualitative research when looking to explore, explain, or understand a phenomenon. Qualitative researchers desire to understand human experiences, cultures, and events over time. Qualitative research is dynamic and implements theories, models, and frameworks.
10. Differentiate between statistical and clinical significance.
Statistically significant results are unlikely to be due to chance; thus, there is a difference between groups, or there is a significant relationship between variables. The findings of a study can be statistically significant but may not be clinically important. Statistical significance is used in hypothesis testing, whereby the null hypothesis is tested. A statistically significant result does not "prove" anything and does not establish a causal relationship between the exposure and outcome; in most instances, the finding of an "association" does not mean that the association is causal. Statistical significance depends on three interrelated factors: 1. Sample size - with larger sample sizes, statistical significance is more likely to be seen. 2. Variability in patient response or characteristics, either by chance or by nonrandom factors - the smaller the variability, the easier it is to demonstrate statistical significance. 3. Effect size, or the magnitude of the observed effect between groups - the greater the effect, the easier it is to demonstrate statistical significance. Statistical significance is a statement about the likelihood of findings being due to chance. Classical significance testing, with its reliance on p values, can only provide a dichotomous result: statistically significant or not. Limiting interpretation of research results to p values means that researchers may either overestimate or underestimate the meaning of their results. Very often the aim of clinical research is to trial an intervention with the intention that results based on a sample will generalize to the wider population. The p value on its own provides no information about the overall importance or meaning of the results to clinical practice, nor does it provide information about what might happen in the future or in the general population. Clinical significance is the practical importance of a treatment effect - whether it has a real, genuine, palpable, noticeable effect on daily life.
Clinical significance relates to the magnitude of the observed effect and whether that magnitude or "effect size" is big enough to consider changes to clinical care. It is a decision based on the practical value or relevance of a particular treatment, and it may or may not involve statistical significance as an initial criterion. Confidence intervals are one way for researchers to help decide whether a particular statistical result (significant or not) may be of relevance in practice.
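The sample-size point above can be demonstrated numerically: with a very large sample, even a trivial difference produces a tiny p value. The summary statistics below are invented; the sketch runs a two-sample z-test (using the standard normal distribution, a reasonable approximation at this sample size) and computes Cohen's d as the effect size.

```python
# Sketch (invented summary statistics) of a result that is statistically
# significant but clinically trivial: a 0.5 mmHg blood pressure difference
# in two very large groups.
import math
from statistics import NormalDist

n1 = n2 = 20_000             # very large groups
mean1, mean2 = 120.0, 119.5  # mmHg: only a 0.5 mmHg difference
sd = 15.0                    # common standard deviation

# Two-sample z statistic for the difference in means
se = sd * math.sqrt(1 / n1 + 1 / n2)
z = (mean1 - mean2) / se
p = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p value

# Effect size: Cohen's d, the difference expressed in SD units
d = (mean1 - mean2) / sd

print(f"p = {p:.4f}, Cohen's d = {d:.3f}")
```

Here p is well below 0.05 (statistically significant), yet d is about 0.03, far below even the conventional "small" effect of 0.2, so no one would change clinical care over a half-millimeter difference in blood pressure.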
1. Identify a researchable problem and explain the process and purpose of a review of literature.
The literature review process involves reading (skimming), comprehending, analyzing, and synthesizing content from sources. The process begins with:
• Selecting a topic. Once you have selected your topic, you have to establish a research question. This question must be specific enough to guide you to the relevant literature.
• Identifying the most relevant sources on your topic by conducting a comprehensive bibliographic search of books, journals, and other documents that contain useful information and ideas on the topic. Internet sites, theses and dissertations, conference papers, e-prints, and government or industry reports can also be included.
• Searching and refining by use of databases that provide full-text access to articles and allow you to refine your search to peer-reviewed journals. These are scholarly journals that go through a rigorous process of quality assessment by several researchers before articles are accepted for publication.
• Reading and analyzing by grouping the sources into themes and sub-themes of your topic. The sources are read to help you gain a broad overview of the content. Skimming enables you to make a preliminary judgment about the value of a source and determine whether it is a primary or secondary source. Through analysis, you can determine the value of a source for a particular study.
• Writing the literature review. The review can be organized in many ways; for example, you can center the review historically (how the topic has been dealt with over time), center it on the theoretical positions surrounding the topic (those for a position vs. those against), or focus on how each of the sources contributes to the understanding of the project.
The literature review should include an introduction, which explains how the review was organized; a body, which contains the headings and subheadings that provide a map of the various perspectives of your argument; and a summary.
8. Identify and differentiate independent and dependent variables in a variety of research problem statements.
Variables are qualities, properties, or characteristics of persons, things, or situations that change or vary in a study. In research, study variables are concepts at various levels of abstraction that are measured, manipulated, or controlled in a study. The conceptual definition of a variable refers to its theoretical meaning; the operational definition refers to how the variable will be measured or manipulated. Variables are classified into various types to explain their use in a study; for example, there are independent, dependent, research, extraneous, demographic, moderator, and mediator variables. Independent variable: an intervention or treatment manipulated by the researcher to create an effect on the dependent variable. Dependent variable: the outcome the researcher wants to predict or explain. In an experiment, the independent variable is the variable that is varied or manipulated by the researcher, and the dependent variable is the response that is measured. For example, in a study of how different doses of a drug affect the severity of symptoms, a researcher could compare the frequency and intensity of symptoms when different doses are administered. Here the independent variable is the dose and the dependent variable is the frequency/intensity of symptoms.
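The dose/symptom example above can be made concrete with a minimal sketch. The dose levels and severity scores below are invented; the point is only that the researcher sets the independent variable (dose) and measures the dependent variable (symptom severity) at each level.

```python
# Minimal sketch (invented data) of the dose/symptom example:
# dose = independent variable (manipulated by the researcher)
# symptom severity = dependent variable (the measured response)
from statistics import mean

severity_by_dose = {
    10: [8, 7, 9, 8],   # severity scores (0-10) at 10 mg
    20: [6, 5, 6, 7],   # ... at 20 mg
    40: [3, 4, 2, 3],   # ... at 40 mg
}

for dose, scores in severity_by_dose.items():
    print(f"dose {dose} mg -> mean severity {mean(scores):.1f}")
```

Reading the output, severity (the dependent variable) changes as dose (the independent variable) is varied, which is exactly the relationship the problem statement asks you to identify.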
14. Discuss considerations for inclusion of articles in the review of literature section.
When you consider using articles in the literature review section, there are two broad types of literature to consider:
1. Theoretical literature - concept analyses, models, theories, and conceptual frameworks that support a selected research problem and purpose.
2. Empirical literature - knowledge derived from research. Identify seminal and landmark studies. A. Seminal studies are the first studies that prompted the initiation of a field of research (e.g., studying hearing loss in infants would require review of the seminal work of Fred H. Bess, an early researcher on this topic who advocated for effective screening tools). B. Landmark studies are studies that led to an important development or a turning point in the field of research (e.g., for glycemic control, one must be knowledgeable of the implications of the Diabetes Control and Complications Trial).
3. Serials - published over time or in multiple volumes, without necessarily having a predictable publication date.
4. Periodicals - subsets of serials with predictable publication dates, such as journals.
5. Monographs - books, hard-copy conference proceedings, and pamphlets, usually written once and updated with a new edition as needed.
6. Textbooks - monographs written to be used in formal education programs.
7. eBooks - entire volumes of books available in digital or electronic format.
8. Government reports - useful for the significance and background of a proposal.
9. Position papers - disseminated by professional organizations and government agencies to promote a particular viewpoint on a debatable issue.
10. Theses - research projects (may or may not be published).
11. Dissertations - extensive, usually original research projects completed as the final requirement for a doctoral degree.
The published literature contains primary and secondary sources. 1. Primary source - written by the person who originated, or is responsible for generating, the ideas published. 2.
Secondary source - summarizes or quotes content from primary sources. 3. Citation - quoting or paraphrasing a source, using it as an example, or presenting it as support for a position taken.
12. Describe the purpose and objectives for evidence-based practice as contributing to the development of nursing science.
EBP promotes quality, cost-effective outcomes for patients, families, health care providers, and the entire health care system. EBP integrates the best research evidence with clinical expertise and patient needs and values to address practice problems. Nurses need a solid research base to implement and document the effectiveness of nursing interventions in treating patient problems and promoting positive patient and family outcomes. Why is EBP relevant to nursing practice?
• There is a gap between what we know and what we do.
• Nursing practice can and must be changed from tradition-based to science-based: improved patient outcomes, decreased unnecessary procedures and complications, greater provider job satisfaction, and third-party reimbursement.
• Effective nursing practice requires information, judgment, and skill.
• EBP empowers nurses and expands their skills.
9. Interpret statistical methods employed in research studies. Be able to derive meaning and interpret statistical findings. Examples include p value, confidence interval, effect size, and power.
Measures of central tendency - mean, median, and mode: representations of the center or middle of a frequency distribution. The mean is the arithmetic average of all the values of a variable, the median is the exact middle value, and the mode is the most commonly occurring value in a data set.
Analysis of variance (ANOVA) - statistical procedure that tests for significant differences among group means by partitioning the total variance into between-group and within-group (error) components. (The related analysis of covariance, ANCOVA, reduces the error term by partialing out the variance resulting from a confounding variable through regression analysis before performing the analysis of variance.)
Independent samples t-test - the most common parametric analysis technique used in nursing studies to test for significant differences between two independent samples.
Student's t-test - used to test the null hypothesis that there is no difference between the means of two groups.
Pearson's product-moment correlation coefficient (r) - parametric test used to determine the relationship between two variables.
Factor analysis - examines interrelationships among large numbers of variables and disentangles those relationships to identify clusters of variables that are most closely linked together. The two types of factor analysis are exploratory and confirmatory.
Regression analysis - analysis in which the independent (predictor) variable or variables influence variation or change in the value of the dependent variable.
Mann-Whitney U test - used to analyze ordinal data; it has about 95% of the power of the t-test to detect differences between groups drawn from normally distributed populations.
Chi-square test of independence - used to analyze nominal data to determine significant differences between the observed frequencies within the data and the frequencies that were expected.
Confidence interval - the range in which the value of the population parameter is estimated to be.
Effect size - the degree to which the phenomenon is present in the population, or the degree to which the null hypothesis is false.
Power - the probability that a statistical test will detect a significant difference or relationship if one exists, which is the capacity to correctly reject the null hypothesis. A standard power of 0.8 is used when conducting a power analysis to determine the sample size for a study.
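A power analysis like the one described above can be sketched with the standard normal-approximation formula for comparing two group means, n per group = 2 * ((z(1-alpha/2) + z(1-beta)) / d)^2. The alpha level, power of 0.8, and the medium effect size d = 0.5 below are conventional illustrative values, not taken from the source.

```python
# Sketch of a power analysis: sample size per group for a two-group
# comparison of means, using the normal-approximation formula
#   n = 2 * ((z_(1-alpha/2) + z_(1-beta)) / d)^2
import math
from statistics import NormalDist

alpha, power, d = 0.05, 0.80, 0.5   # conventional illustrative values

z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical z for alpha, ~1.96
z_beta  = NormalDist().inv_cdf(power)          # z for desired power, ~0.84

n_per_group = math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)
print(n_per_group)  # about 63 subjects per group
```

This shows the practical point of the definitions above: the smaller the expected effect size d, the larger the sample needed to reach 0.8 power (halving d here would roughly quadruple n).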
13. Appraise research studies for quality and level of evidence. Be able to discuss the following: a) type of research, b) need for the study, c) theoretical framework, d) sampling technique, e) study design, f) strengths and limitations of the findings, g) interpretation of outcomes, and h) recommendations for practice or future research.
1. Appraise research studies for quality and level of evidence. Be able to discuss the following: a. The quality and level of evidence depend in part on the type of journal in which the article is published. A journal is assessed by its impact factor, which measures the importance or rank of the journal by calculating how often its articles are cited. An article is usually publishable after peer review; the novelty of the issues and the funding of the article play a role in the peer review process. 2. Type of research, need for the study, and limitations. a. Quantitative research is defined as a formal, objective, systematic study process to describe and test relationships and to examine cause-and-effect interactions among variables (Grove, Burns, & Gray, 2013). The quantitative design is used by many for evidence-based practice (Grove et al., 2013). This type of study is more of a data collection effort and generally asks "who" or "what" (Foolproof team, 2015). There are four types of quantitative research: descriptive, correlational, quasi-experimental, and experimental. Quantitative research puts data in numerical form, creating categories, rank order, or a unit of measurement (McLeod, 2017). The research is done to test a theory by supporting or rejecting it based on the data (McLeod, 2017). A quantitative design allows for replication of the study (McLeod, 2017). i. Limitations of quantitative research include lack of moderation, meaning the participants are not able to explain their answers (McLeod, 2017), and variability in the data; with a smaller participating group there is not enough data, and with a bigger group more error can occur (McLeod, 2017). b. Qualitative research is defined as a systematic, interactive, subjective approach used to describe life experiences and give them meaning (Grove et al., 2013). Qualitative research focuses on philosophical perspectives: philosophy, view of science, approach and method, and criteria of rigor (Grove et al., 2013).
There are five types of qualitative research: phenomenological, grounded theory, ethnographic, exploratory-descriptive, and historical (Grove et al., 2013). Qualitative research answers the questions how, when, where, and why (Foolproof team, 2015).
i. Limitations of qualitative research include the limited sample size available to conduct the research (Foolproof team, 2015). Validity and reliability are also limitations of qualitative research (McLeod, 2017).
3. Theoretical framework: the research from previous literature that defines a study's core theory and concepts. It narrows the research question and helps create the hypothesis (Reference, 2019). A research framework summarizes and integrates what we know about a phenomenon more succinctly and clearly than a literary explanation and allows us to grasp the bigger picture of a phenomenon (Grove, Burns, & Gray, 2013).
a. Frameworks are used in quantitative research; the framework is a testable theoretical structure and may be developed inductively from published research or clinical observations. Every quantitative study has a theoretical framework, but some researchers do not identify or describe it. In a well-conducted quantitative study the framework is carefully structured, clearly presented, and well integrated with the methodology.
b. In qualitative research the theoretical framework will often not be identified. In grounded theory research (a qualitative approach) the researcher is attempting to develop a theory as an outcome of the study.
4. Sampling technique: the process of selecting subjects, events, behaviors, or elements for participation in a study. The sampling plan defines the process of making the sample selection. The sample denotes the selected group of people or elements included in a study. Key concepts: populations, elements, sampling criteria, representativeness, sampling errors, randomization, sampling frames, and sampling plans.
a. Populations: the particular group of people that is the focus of the research
i.
Target population: the entire set of individuals or elements who meet the sampling criteria; it is determined by the sampling criteria
ii. Accessible population: the portion of the target population to which the researchers have reasonable access
b. Elements: the individual units of the population and sample; an element can be a person, event, behavior, or any other single unit of study
i. People = subjects, research participants, or informants
ii. Quantitative = subjects and sometimes research participants
1. In intervention and outcome research, the findings from a study are generalized to the accessible population and then to the target population
iii. Qualitative = study participant, research participant, or informant
c. Generalization: findings can be applied to more than just the sample
d. Sampling criteria (eligibility criteria): a list of characteristics essential for membership or eligibility in the target population
i. Determines the target population
ii. Inclusion sampling criteria: characteristics that a subject or element must possess to be part of the target population
iii. Exclusion sampling criteria: characteristics that can cause a person or element to be excluded from the target population
e. Probability (random) sampling methods: every member (element) of the population has a probability higher than zero of being selected for the sample. Also referred to as random sampling methods. Used in quantitative, outcomes, and intervention research
i. Simple random sampling: the most basic of the probability sampling methods
1. Elements are selected at random
2. Quantitative, outcomes, and intervention research
ii. Stratified random sampling: used when the researcher knows some of the variables in the population that are critical to achieving representativeness
1. Variables include age, gender, ethnicity, socioeconomic status, diagnosis, geographical region, type of institution, type of care, care provider, and site of care
2. These variables correlate with the dependent variable
3.
Quantitative, outcomes, and intervention research
iii. Cluster sampling: a probability sampling method applied when the population is heterogeneous. Like stratified random sampling, but takes advantage of the natural clusters or groups of population units. Used in two situations:
1. When a simple random sample would be prohibitive in terms of travel time and cost
2. When the individual elements making up the population are unknown, preventing the development of a sampling frame
3. Provides a means for obtaining a larger sample at a lower cost
4. Disadvantages: subjects from the same institution are likely not to be independent, and there is a decrease in precision
5. Quantitative, outcomes, and intervention research
iv. Systematic sampling: conducted when an ordered list of all members of the population is available. Quantitative, outcomes, and intervention research
f. Nonprobability sampling methods: not every element of the population has an opportunity to be included in the sample. These methods increase the likelihood of obtaining samples that are not representative of their target populations
i. Convenience sampling (accidental sampling): used in most nursing studies; subjects are included in the study because they happened to be in the right place at the right time. Inexpensive and accessible; used in quantitative studies
ii. Quota sampling: a convenience technique with an added feature, a strategy to ensure the inclusion of subject types or strata in a population that are likely to be underrepresented in a convenience sample; quantitative studies
iii. Purposive sampling (purposeful, judgmental, or selective): the researcher consciously selects certain participants, elements, events, or incidents to include in the study; used in qualitative research
iv. Network (snowball) sampling: holds promise for locating samples that are difficult or impossible to obtain in other ways or that had not been previously identified for study. Takes advantage of networking; qualitative research
v.
Theoretical sampling: applied in grounded theory research to advance the development of a selected theory throughout the research process; qualitative research
g. Sample size in quantitative research
i. Power: the capacity of the study to detect differences or relationships that actually exist in the population
ii. Power analysis: calculated using the level of significance and effect size
1. Factors that affect power: effect size, type of study, number of variables, sensitivity of the measurement methods, and data analysis techniques
iii. Descriptive case studies use small sample sizes
iv. Quasi-experimental and experimental studies often have small samples
h. Sample size in qualitative research: the sample is adequate when it is large enough to describe the variables, identify relationships among variables, or determine differences between groups
i. The focus is on the quality of information obtained from the person, situation, event, or documents sampled rather than on the size of the sample
ii. Many qualitative studies use purposive or purposeful sampling methods
5. Study design: the blueprint for conducting a study that maximizes control over factors that could interfere with the validity of the findings
6. Interpretation of outcomes
a. The goal is to evaluate outcomes as defined by Donabedian
b. Evaluating outcomes of care requires dialogue between the subjects of care and the providers of care
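The probability sampling methods above can be sketched with the standard library's random module. This is a minimal illustration only: the sampling frame, the strata, and the sample sizes are all hypothetical values invented for the example, not drawn from any study.

```python
# Illustrative sketch of three probability sampling methods (stdlib only).
# The frame, strata, and sample sizes are hypothetical.
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Sampling frame: every element of a (hypothetical) accessible population.
frame = [f"subject_{i:03d}" for i in range(1, 101)]  # 100 numbered subjects

# 1. Simple random sampling: every element has an equal chance of selection.
simple_sample = random.sample(frame, k=10)

# 2. Stratified random sampling: split the frame on a known critical
#    variable (here, an invented age-group label), then sample each stratum.
strata = {
    "18-39": frame[:40],
    "40-64": frame[40:80],
    "65+":   frame[80:],
}
stratified_sample = []
for group, members in strata.items():
    stratified_sample.extend(random.sample(members, k=4))  # 4 per stratum

# 3. Systematic sampling: from the ordered list, take every k-th element
#    after a random start, where k = population size / desired sample size.
k = len(frame) // 10          # sampling interval (here, 10)
start = random.randrange(k)   # random start within the first interval
systematic_sample = frame[start::k]

print(len(simple_sample), len(stratified_sample), len(systematic_sample))
```

Note how each method still requires the complete sampling frame up front, which is exactly why cluster sampling exists for populations whose individual elements are unknown.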
19. Discuss issues related to health literacy and vulnerable populations from a local, national, and global perspective.
1. Health literacy: the degree to which an individual has the capacity to obtain, communicate, process, and understand basic health information and services to make appropriate health decisions
2. We can help people use the health literacy skills they have (local level)
a. Create and provide information and services people can understand and use most effectively with the skills they have
b. Work with educators and others to help people become more familiar with health information and services and build their health literacy skills over time
c. Build our own skills as communicators of health information
3. Health literacy: national level
a. The Healthy People 2020 Social Determinants of Health topic area is organized into five place-based domains:
i. Economic Stability
ii. Education
iii. Health and Health Care
iv. Neighborhood and Built Environment
v. Social and Community Context
b. Health literacy is a key issue in the Health and Health Care domain
4. Health literacy: global level
a. WHO: World Health Organization
Vulnerable populations: include the economically disadvantaged, racial and ethnic minorities, the uninsured, low-income children, the elderly, the homeless, those with human immunodeficiency virus (HIV), and those with other chronic health conditions, including severe mental illness
1. National level
a. CDC
b. USAID vulnerable populations programs
i. Displaced children and orphans
ii. Leahy war victims
iii. Victims of torture
iv. Wheelchair program
v. Disability program
2. Global level
a. WHO
i. AIDS, TB, malaria
2. Generate research questions from theory and practice.
A researcher's expectation about the results of a study is expressed in a hypothesis. The hypothesis predicts the relationship between two or more variables and must be testable or verifiable empirically (capable of being tested in the real world by observations gathered through the senses). Generating research questions: consider the problem statement and review the literature on the topic; a theory may be discovered that predicts the relationship between the two variables (independent variable/dependent variable). This is referred to as the directional research hypothesis (it contains the researcher's expectations for the study results). Although the null hypothesis (which predicts no relationship between the variables) is the one tested statistically, the directional research hypothesis is preferred for nursing studies. It is derived from the theoretical/conceptual framework and should indicate the expected relationship between variables. Theory: a set of interrelated constructs (concepts), definitions, and propositions that present a systematic view of phenomena by specifying relations among variables, with the purpose of explaining and predicting the phenomena; or, a set of related statements that describe or explain a phenomenon in a systematic way. Concepts (word pictures or mental ideas of a phenomenon) are the building blocks of theory. Construct: a highly abstract, complex phenomenon that cannot be directly observed but must be inferred from less abstract indicators (e.g., wellness, mental health, self-esteem). Propositions: statements or assertions of the relationship between concepts, derived from theory or based on empirical data. Empirical generalization: when similar patterns of events are found in the empirical data of a number of different studies. Hypothesis: predicts the relationship between two or more variables.
Model: a symbolic representation of some phenomenon or phenomena (e.g., a model heart/lung); the model focuses on the structure or composition of the phenomena. Conceptual model: made up of concepts and propositions that state the relationships between the concepts. The most common concepts are person, environment, health, and nursing.
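As a minimal illustration of what "the null hypothesis is tested statistically" looks like in practice, the sketch below runs a permutation test on invented data. The permutation approach and every number here are assumptions chosen for demonstration, not a method or dataset from the material above.

```python
# Sketch: testing a null hypothesis of "no relationship" between group
# membership and scores via a permutation test. All data are invented.
import random
from statistics import mean

random.seed(1)

# Hypothetical scores for two groups (e.g., intervention vs. control).
group_a = [74, 78, 81, 85, 88, 90]
group_b = [65, 68, 70, 72, 75, 77]
observed_diff = mean(group_a) - mean(group_b)

# Under the null hypothesis the group labels are arbitrary, so shuffling
# the labels shows what "no relationship" would look like. The p-value is
# the share of shuffled differences at least as extreme as the observed one.
pooled = group_a + group_b
count_extreme = 0
n_permutations = 5000
for _ in range(n_permutations):
    random.shuffle(pooled)
    diff = mean(pooled[:6]) - mean(pooled[6:])
    if abs(diff) >= abs(observed_diff):
        count_extreme += 1

p_value = count_extreme / n_permutations
# A small p-value is evidence against the null hypothesis; a directional
# research hypothesis would additionally state which group scores higher.
```

A one-sided version (counting only `diff >= observed_diff`) would correspond to testing the directional research hypothesis rather than the two-sided null.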
21. Be able to identify multiple elements that affect healthcare policy decisions.
Health policy poses complex legal, ethical, and social questions. Health policies that seriously burden individual rights to liberty, privacy, and nondiscrimination may require judicial, rather than majoritarian, determinations. For example, a fetal protection policy that excludes all women from unsafe workplaces to promote the health of infants may violate fundamental rights of nondiscrimination. Five elements of policymaking (impartial decision making, accountability, collecting full and objective information, applying well-considered criteria, and following a rigorous and fair process) are often helpful in developing sound health policies.
20. Discuss health care policy from the global perspective.
Health policy refers to decisions, plans, and actions that are undertaken to achieve specific health care goals within a society. An explicit health policy can achieve several things: it defines a vision for the future, which in turn helps to establish targets and points of reference for the short and medium term; it outlines priorities and the expected roles of different groups; and it builds consensus and informs people. Health financing and delivery systems are influenced by divergent views on a number of ethical issues. As they develop their healthcare systems, countries must resolve ethical dilemmas such as (1) whether access to basic healthcare services is one of the fundamental rights of every human being, and (2) how scarce resources should be allocated between the old and the young, between medical and preventive care, and between healthcare and other social needs.
16. Know and discuss the meaning of IMR and causes of death at the local, national, and global level.
IMR stands for infant mortality rate, the death rate in children younger than age 1. It is highly criticized as a measure because its population focus is so small. Per the CDC, in 2016 the infant mortality rate in the United States was 5.9 deaths per 1,000 live births. Causes of infant mortality: over 23,000 infants died in the United States in 2016. The five leading causes of infant death in 2016 were birth defects; preterm birth and low birth weight; sudden infant death syndrome; maternal pregnancy complications; and injuries (e.g., suffocation). Premature birth is the biggest contributor to the IMR. Other leading causes of infant mortality are birth asphyxia, pneumonia, congenital malformations, term birth complications (such as abnormal presentation of the fetus, umbilical cord prolapse, or prolonged labor), neonatal infection, diarrhea, malaria, measles, and malnutrition. One of the most common preventable causes of infant mortality is smoking during pregnancy. Many factors contribute to infant mortality, such as the mother's level of education, environmental conditions, and political and medical infrastructure. Improving sanitation, access to clean drinking water, immunization against infectious diseases, and other public health measures can help reduce high rates of infant mortality. In 2017, there were 5.9 deaths per 1,000 live births in Texas. In 2016, infant mortality rates by race and ethnicity were as follows: non-Hispanic black, 11.4; American Indian/Alaska Native, 9.4; Native Hawaiian or other Pacific Islander, 7.4; Hispanic, 5.0; non-Hispanic white, 4.9; Asian, 3.6. Causes of infant mortality worldwide (from the National Institutes of Health): globally, the top five causes of infant death in 2010 (the most recent year for which data were available) were the following: neonatal encephalopathy, which results from birth trauma or a lack of oxygen to the baby during birth.
Infections, especially blood infections; complications of preterm birth; lower respiratory infections (such as flu and pneumonia); and diarrheal diseases. Causes of child mortality, 2017 (from the WHO): the leading causes of death among children under five in 2017 were preterm birth complications, acute respiratory infections, intrapartum-related complications, congenital anomalies, and diarrhea. Neonatal deaths accounted for 47% of under-five deaths in 2017. IMR: infant mortality is the death of an infant before his or her first birthday. The infant mortality rate is the number of infant deaths for every 1,000 live births. In addition to giving us key information about maternal and infant health, the infant mortality rate is an important marker of the overall health of a society. In 2016, the infant mortality rate in the United States was 5.9 deaths per 1,000 live births. DSHS Center for Health Statistics (the portal for comprehensive health data in Texas): the DSHS Center for Health Statistics was established to provide a convenient access point for health-related data for Texas. Our objective is to be a source of information for assessment of community health and for public health planning. Our data are used to support research, grant applications, and policy development and to provide rapid needs response to health emergencies. We also offer technical assistance in the appropriate use of the data we provide and in the development of innovative techniques for data dissemination. We support the development and application of consistent standards for privacy and statistical validity. You will find statistics on vital events such as birth and death, population and demographic information, geographic material, and survey data on risk factors and disease prevalence.
We also provide information on supply trends for health professions, including nurses, as well as hospital discharge records and surveys of Texas hospital facilities and charity and community benefits. We respond to requests for data from a variety of users, both inside the agencies and external stakeholders.
Texas Birth Data 2016 (Texas value, state rank*, U.S.** value):
Infant Mortality Rate (deaths per 1,000 live births): 5.7 (Texas), 5.9 (U.S.)
Percent of Births to Unmarried Mothers: 41.3, 19th, 39.8
Cesarean Delivery Rate: 34.4, 7th (tie), 31.9
Preterm Birth Rate: 10.4, 12th (tie), 9.9
Teen Birth Rate‡: 31.0, 4th, 20.3
Low Birthweight Rate: 8.4, 22nd (tie), 8.2
¹Excludes data from U.S. territories ‡Number of live births per 1,000 females aged 15-19
CDC is committed to improving birth outcomes. This requires public health agencies working together with health care providers, communities, and partners to reduce infant deaths in the United States. This joint approach can help address the social, behavioral, and health risk factors that affect birth outcomes and contribute to infant mortality. In 2015, preterm birth and low birth weight accounted for about 17% of infant deaths. CDC provides support to perinatal quality collaboratives (PQCs), which are state or multi-state networks of teams working to improve health outcomes for mothers and babies.
Funding supports the capabilities of PQCs to improve the quality of perinatal care in their states, including efforts to reduce preterm birth and improve prematurity outcomes. CDC works with experts to develop resources PQCs can use to further their development, including a how-to guide and a webinar series. CDC and the March of Dimes also launched the National Network of Perinatal Quality Collaboratives to support state-based PQCs in making measurable improvements in statewide health care and health outcomes for mothers and babies. Global Health Observatory (GHO) data on infant mortality, situation and trends: in 2017, 4.1 million infant deaths (75% of all under-five deaths) occurred within the first year of life. The risk of a child dying before completing the first year of age was highest in the WHO African Region (51 per 1,000 live births), over six times higher than that in the WHO European Region (8 per 1,000 live births). Globally, the infant mortality rate has decreased from an estimated 65 deaths per 1,000 live births in 1990 to 29 deaths per 1,000 live births in 2017. Annual infant deaths have declined from 8.8 million in 1990 to 4.1 million in 2017.
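All of the IMR figures quoted above come from the same arithmetic: infant deaths divided by live births, scaled to 1,000. A minimal sketch, in which the live-birth denominator is an assumed round figure used only to show how a rate near 5.9 arises, not an official count:

```python
# Worked arithmetic for the infant mortality rate (IMR):
# infant deaths (under age 1) per 1,000 live births.
def infant_mortality_rate(infant_deaths: int, live_births: int) -> float:
    """Deaths of infants under age 1 per 1,000 live births."""
    return infant_deaths / live_births * 1000

# Roughly 23,000 infant deaths (per the section above) against an assumed
# ~3.9 million live births yields a rate near 5.9 per 1,000.
rate = infant_mortality_rate(23_000, 3_900_000)
print(round(rate, 1))
```

The same function reproduces the global trend figures: 4.1 million deaths against roughly 140 million births worldwide lands near the 29 per 1,000 cited for 2017.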
15. Know basic epidemiological data such as IMR, longevity rate, and the top 3 causes of death for the state of residence and the US. Know the top 3 causes of death in third world countries (in general).
Infant Mortality Rate (IMR), US: in 2018, infant deaths before the age of 1 occurred at a rate of 5.9 per 1,000 live births. The leading causes were birth defects, preterm birth or low birth weight, sudden infant death syndrome, maternal pregnancy complications, and injuries (suffocation). Infant Mortality Rate (IMR), Texas: 5.7 per 1,000 live births. Top 3 causes of death in Texas: heart disease, cancer, stroke. Top 3 causes of death in the US: heart disease, cancer, accidents (unintentional injuries). Longevity rate in the US: life expectancy of 78.6 years, a decline from 78.9 in 2014; deaths from suicide and overdose were responsible for much of the decline. Top 3 causes of death in third world countries: coronary heart disease, lower respiratory infections, and HIV/AIDS.
3. Develop a researchable problem statement (PICO/PICOT) or research question and justify that problem selection.
The PICOS format usually includes the following elements:
P - Population or participants of interest (sample)
I - Intervention needed for practice
C - Comparison of the intervention with control, placebo, standard care, variations of the same intervention, or different therapies
O - Outcomes needed for practice or outcomes research
S - Study design
Formulating the question involves identifying a relevant topic, developing a question of interest that is worth investigating, deciding whether the question will generate significant information for practice, and determining whether the question will clearly direct the review process and synthesis of findings. Example: interventions to change maladaptive illness beliefs are beneficial to people with CHD because positive illness representations may lead to improved lifestyle behaviors of exercise, smoking cessation, and balanced diet. A systematic review or meta-analysis QUESTION: which types of intervention to change illness cognitions (e.g., counseling, education, or cognitive behavioral therapy) are most effective for people with CHD? The Population is people with CHD, and the Intervention was focused on changing the maladaptive illness beliefs of these individuals. The different types of this intervention, including counseling, education, and cognitive behavioral therapy, were compared. The intervention group was Compared with groups receiving standard care, no treatment, or a variation of the treatment. The primary Outcome measured was the change in beliefs about CHD at follow-up. The Study design included synthesis of only randomized controlled trials (meta-analysis) in the literature review.
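One informal way to check that a question covers every PICOS element is to lay the worked CHD example out element by element. The dictionary below is purely an illustrative restatement of that example; the structure itself is not part of the PICOS framework:

```python
# The worked CHD question, organized element by element in PICOS format.
# The values restate the example from the section above.
pico_question = {
    "Population":   "People with coronary heart disease (CHD)",
    "Intervention": "Interventions to change maladaptive illness beliefs "
                    "(counseling, education, cognitive behavioral therapy)",
    "Comparison":   "Standard care, no treatment, or a variation of the treatment",
    "Outcome":      "Change in beliefs about CHD at follow-up",
    "Study design": "Meta-analysis of randomized controlled trials only",
}

# A question is fully specified only when every PICOS element is filled in.
complete = all(value.strip() for value in pico_question.values())
print(complete)
```

Walking a draft question through each key this way quickly exposes a missing comparison group or an unstated outcome before the review begins.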
4. Relate research design (experimental, quasi-experimental, correlational, descriptive, etc.) to the project objectives, research question(s) and hypotheses.
Commonly used terms:
Research objectives: specific statements indicating the key issues to be focused on in a research project. Usually a research project will have several specific research objectives.
Research questions: an alternative to research objectives, where the key issues to be focused on in a research project are stated in the form of questions.
Research hypotheses: a prediction of a relationship between two or more variables, usually predicting the effect of an independent variable on a dependent variable. The independent variable is the variable assumed to have causal influence on the outcome of interest, which is the dependent variable.
Research objectives
A research aim will usually be followed by a series of statements describing a project's research objectives. Research objectives indicate in more detail the specific research topics or issues the project plans to investigate, building on the main theme stated in the research aim. Normally at least two or three research objectives will be stated. It is good practice to put these in a numbered list so they can be clearly identified later in a proposal or report. Here is an example of a set of research objectives:
Objective 1: To examine whether alcohol consumption is associated with increased partner violence.
Objective 2: To examine whether labour force status (employment, unemployment, not in the labour force) is associated with variations in the incidence of partner violence.
Objective 3: To explore differences between couples with an extended history of partner violence and couples with only a brief, recent history of partner violence.
Research questions
In some situations, rather than stating research objectives, researchers will prefer to use research questions. In the example below, the objectives stated in the previous example are reframed as research questions:
Question 1: Is alcohol consumption associated with increased partner violence?
Question 2: Is labour force status (employment, unemployment, not in the labour force) associated with variations in the incidence of partner violence?
Question 3: Are there differences between couples with an extended history of partner violence and couples with only a brief, recent history of partner violence?
Research hypotheses
Research hypotheses are predictions of a relationship between two or more variables. (A research aim, by contrast, is a statement indicating the general aim or purpose of a research project; usually a research project will have only one broad aim.) For example, a research project might hypothesise that higher consumption of alcohol (an independent variable) is associated with increased partner violence (a dependent variable).
Designs: the quantitative approach includes four types (making observations to test theory):
-Descriptive study
-Correlational study
-Quasi-experimental study
-Experimental study
Quantitative research questions: quantitative researchers pose research questions or hypotheses to focus the study's purpose.
• Quantitative research questions:
o Questions about the relationships among variables that the investigator seeks to know
• Quantitative hypotheses:
o Predictions that the researcher makes about the expected relationships among variables
o Predictions about the population values that the researcher will estimate based on data from a sample
• Quantitative objectives:
o Indicate a study's goals
o Used frequently in proposals for funding
5. Discuss the validity and reliability of data collection measurements/instruments.
Validity and Reliability of Data Collection Measurements/Instruments
Reliability: indicates the extent to which the scale or index consistently measures the same way each time it is used under the same conditions with the same subjects.
*Several ways of estimating the reliability of a measure:
-Test-retest reliability: administer the index or scale to a sample at two points in time and look for a relatively strong correlation between the scores at time 1 and time 2. The assumption is that the construct being measured is stable; therefore, a reliable measure should produce approximately the same score at time 2 that it did at time 1 for each person in the sample.
-Internal reliability: estimation of reliability through a statistical procedure calculating the inter-item correlations of the items composing the measure. Especially appropriate for establishing the reliability of a scale rather than an index, because scale items should be highly correlated with each other.
*Intercorrelations among items on a scale can be determined by a specific statistical procedure that yields the statistic Cronbach's alpha. An advantage of this method is that only one administration is necessary. The formula utilizes a variance-covariance matrix of the items along with the total number of items. The resulting statistic represents the ratio of the sum of the inter-item covariances to the variance of the total scores.
*Cronbach's alpha has a potential range of 0 to 1; higher scores represent greater inter-item reliability. In health promotion research, 0.70 or higher is considered sufficient evidence of reliability.
*An extremely high alpha suggests that there may be redundancy among some of the indicators and that perhaps the scale could be reduced to fewer items.
*A low alpha indicates that some of the items are not representative of the construct, that there are too few items, or that the response options are too restrictive.
-Split-half method: measures reliability by dividing the scale into two parallel forms of the measure (a 10-item scale would be divided into two five-item scales). The shortened forms are then administered to a sample. The correlation between scores for the two halves is calculated and then used in a formula to estimate reliability. Appropriate only for scales that measure the same unitary construct; not appropriate for indexes.
Validity: refers to the extent to which the scale or index measures what it is supposed to measure.
*For a measurement tool to be valid, it must be reliable; reliability is necessary to achieve validity.
Face validity: employs a jury of experts who decide, "Does the index or scale appear to measure the construct?"
Content validity: can be assessed for both scales and indexes, but the judgments made regarding the items differ.
*For scales, to determine content validity ask, "Do the items adequately represent the universe of all possible indicators relevant for the construct?"
*For indexes, ask, "Do the items represent a census of items underlying the construct?"
Construct validity: particularly important for theoretical constructs; refers to the ability of a measure to perform the way the underlying theory hypothesizes. Several ways to assess construct validity:
*Convergent validity: the degree to which the scale measure correlates with other measures of the same construct.
*Criterion-related validity: comparing the assessed construct to a tangible measure such as a behavior or outcome. Predicated on the basic question, "Is the construct statistically associated with the expected criterion measure?"
If the scale is indeed valid, then a statistically significant relationship would provide evidence of criterion-related validity.
*Factor analysis: a statistical technique for assessing the underlying dimensions of a construct, if in fact they exist, and for refining the measure. Commonly used in the development stage of a new measure. Before a new measure is adopted and accepted widely, it should be subjected to rigorous evaluations of its reliability and validity. Factor analysis allows us to show statistically, with data, that the items corresponding to each theoretical dimension or "factor" are more strongly correlated with each other than with items from other dimensions.
*Factor loading: the correlations between items and their underlying factors.
*Exploratory factor analysis: a data-driven analysis that reveals whether items cluster together to form a factor, and that reveals any underlying dimensions of the construct that may not have been specified a priori. Useful for weeding out items that are weak.
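The Cronbach's alpha calculation described above can be sketched directly, here using the standard equivalent form built from item variances and the variance of total scores. The respondent-by-item scores are invented for illustration:

```python
# Sketch of Cronbach's alpha for a small (hypothetical) 4-item scale.
# alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals)
from statistics import variance

# Rows = respondents, columns = items (invented scores on a 1-5 scale).
scores = [
    [4, 4, 3, 4],
    [3, 3, 3, 3],
    [5, 4, 5, 5],
    [2, 3, 2, 2],
    [4, 5, 4, 4],
]

k = len(scores[0])                          # number of items on the scale
items = list(zip(*scores))                  # transpose: one tuple per item
item_variances = [variance(col) for col in items]
total_scores = [sum(row) for row in scores]

alpha = (k / (k - 1)) * (1 - sum(item_variances) / variance(total_scores))
print(round(alpha, 2))
```

With these invented scores alpha lands well above the 0.70 threshold the section cites; per the notes above, an alpha this high could also signal redundancy among items.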