Research Methods


Expedited Review

Additionally, the Common Rule allows that "research activities that (1) present no more than minimal risk to human subjects, and (2) involve only procedures listed in one or more of the following categories, may be reviewed by the IRB through the expedited review procedure...". Research activities in this category include the collection of human data (e.g., height, weight), imaging tests (e.g., electrocardiograms and magnetic resonance imaging), and blood and other bodily fluids (OHRP, 2009). The review is carried out by IRB staff and typically a few expert reviewers rather than the full IRB board.

purposes a research study can have

• Exploratory: explore or investigate to determine the scope of an issue or to understand a problem that has not been clearly defined.
• Descriptive: describe the problem (who, what, where, and when; how many?).
• Evaluative: How well is this working?
• Explanatory: determine a cause-and-effect relationship.

Case Study Design

"A case study is a research approach that is used to generate an in-depth, multi-faceted understanding of a complex issue in its real-life context" (Crowe et al., 2011, p. e1). It can describe, explain, or explore why something happened or how something changed as the result of something new occurring. There are numerous variations on case study designs; • Intrinsic case studies explore a unique occurrence of a real-life situation; the focus in this type of research is on exploring and explaining the uniqueness of the case. • Instrumental case study research selects a single case from a group of cases that when explored will help develop a better understanding of the real-life issue. • Collective case studies are similar to the purpose of the instrumental case study (to develop an understanding of the case); however, the difference is that in the collective case study, several similar cases are combined. This allows the researcher to develop an even more in-depth understanding of the issue beyond what is possible by exploring a single instrumental case In summary, case study research has the power to "answer 'how' and 'why' type questions, while taking into consideration how a phenomenon is influenced by the context within which it is situated. It enables the researcher to gather data from a variety of sources and to converge the data to illuminate the case" Specific Features of Case Study Design • Sample size is one. For example, a case can be a single person, one program, one policy, one event, or one group of similar cases. • Multiple types of data are typically collected (triangulation-interviews, obser-vations, documents) and analyzed. Sometimes the data in a case study can even include some quantitative data if it helps expound on the understanding of the case. This practice would not be considered mixed-methods research sample size is one. • In case study research, the underlying theory or framework guides the researcher through all five stages of the research study. • The results of case study research are offered as examples that can shed light on the how and why something happened. Case study research can serve as a model to improve practice; it is the reader who determines if the findings will be useful in their situation or setting

Hypotheses in Quantitative Research

"A research hypothesis is a specific statement that predicts the direction and nature of the results of the study". "... a good hypothesis must be based on a good research question at the start of a trial and, indeed, drive data collection for the study"

null hypothesis

A null hypothesis (often written as H0) predicts there will be no significant difference or relationship between the variables. In the example below, the null hypotheses state that there will be no significant correlation between any of the variables.
■ There will be no significant correlation of knowledge with age, race, and gender.
■ There will be no significant correlation of attitudes with age, race, and gender.
■ There will be no significant correlation of behaviors with age, race, and gender.
■ There will be no significant correlation between behaviors and knowledge.

EXPERIMENTAL RESEARCH

Experiments involve highly controlled and systematic procedures in an effort to minimize error and bias, which also increases the level of confidence that the outcome was a direct result of the intervention or treatment. The word treatment refers to clinical studies of drugs and other clinical medical treatments; the word intervention refers to complex programs designed to change behaviors, attitudes, and environmental and social conditions. Experimental designs have the following four elements in common:

Manipulation of variables: something that is purposely changed by the researcher in the study. In health science research, it typically is the treatment or the new intervention that is being tested. Two commonly used categories of variables are the dependent and independent variables: an independent variable is presumed to have an effect on a dependent variable, and a dependent variable "depends" on an independent variable.

Control: used to prevent outside factors from influencing the outcome of the study. For instance, suppose we create two groups (experimental and control) that are "equivalent" to one another, with the exception that one group (experimental) gets the intervention or treatment and the other group (control) does not. If there are differences in the outcome between these two groups, then the differences must be due to the only element that differed between the two groups: the intervention/treatment.

Random sample: as noted in Chapter 5, using one of the probability sampling methods requires the researcher to randomly select participants, which is thought to yield a group of individuals representative of the population at large. This relates to another concept discussed in Chapter 5, external validity, which means the results of the study can be generalized to similar groups of people in different settings or to other groups of people.

Random assignment: once a random sample has been selected, random assignment means that everyone in the random sample has an equal chance of being assigned to the experimental or control group. In addition, confounding variables (both known and unknown) are equally distributed between the experimental and control groups, so random assignment helps ensure that the two groups are equivalent at the start of the study. "It ensures that alternative causes are not confounded with a unit's treatment condition ..." and "It reduces the plausibility of threats to validity by distributing them randomly over conditions."
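As a minimal sketch of the last two elements, the Python example below draws a random sample from a hypothetical sampling frame and then randomly assigns the sampled participants to experimental and control groups. The population list, sample size, and group sizes are invented for illustration only.

```python
import random

# Hypothetical sampling frame of 1,000 people in the population.
population = [f"person_{i}" for i in range(1000)]

# Random sample: every member of the population has an equal
# chance of being selected (probability sampling).
random.seed(42)  # fixed seed so the sketch is reproducible
sample = random.sample(population, k=100)

# Random assignment: every sampled participant has an equal
# chance of landing in the experimental or the control group.
random.shuffle(sample)
experimental_group = sample[:50]
control_group = sample[50:]

print(len(experimental_group), len(control_group))  # 50 50
```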

Non-experimental designs (covered in Chapter 8) are used to answer questions such as:
• whether there is a relationship between variables (correlational) that explains or predicts an outcome;
• whether an exposure is linked to a disease or condition (cohort);
• whether an exposure explains why one group has a disease and a similar group does not (case-control);
• whether characteristics or variables develop or change over time (developmental).
Non-experimental designs differ from experimental designs in three major ways:

1. Non-experimental designs typically involve one sample group. While some non-experimental designs, such as case-control, use two groups, most employ only one.
2. Sampling in non-experimental designs can be either probability or non-probability, with the exception of two non-probability sampling methods: purposive and theoretical sampling.
3. Non-experimental designs do not manipulate variables. Rather than looking to establish a cause-and-effect relationship as experimental designs do, non-experimental research looks to discover relationships between variables. This means that non-experimental research can discover a relationship between variables, but it does not definitively prove that one variable causes a change in another.

QUASI-EXPERIMENTAL RESEARCH DESIGNS - Quasi-experimental research can be defined as resembling (but not being true) experimental research. Although attempts can be made to make both groups very similar (e.g., similar communities, or similar classrooms within the same school), the participants are not randomly assigned to the experimental and control groups, and thus these groups are considered to be nonequivalent. This introduces an important limitation to this particular type of study: without randomization, there is no guarantee that the differences between the groups are due to the intervention; the results might instead be due to chance or to preexisting differences between the groups.

1. Nonequivalent (pretest and posttest) control group
2. Simple time-series
3. Control group time-series

EXPERIMENTAL RESEARCH DESIGNS - In each of these designs, a random sample is collected using one of the probability methods discussed in Chapter 5. Participants in the random sample are then randomly assigned to the experimental or control group; everyone in the random sample has an equal chance of being assigned to either group. This practice greatly increases the external validity of a study.

1. Pretest-Posttest Control Group
2. Posttest-Only Control Group
3. Solomon Four-Group
4. Within-Subjects
5. Control Group Time-Series

research methods

1. Sampling is the way a researcher recruits or selects individuals from a population to be participants in the study.
2. Data Collection is the type of data that will be collected and the procedures/processes a researcher uses to collect data.
3. Data Analysis is how the researcher performs the analysis on the data that has been collected.

mediating variable

A mediating variable is a variable that accounts in some part for the relationship between an independent and dependent variable, or, stated a different way, "... the process by which two variables are related". MacKinnon explains that there are a limited number of causal relationships in studies with only two variables. The addition of a mediating variable increases the number of causal relationships as there is the relationship between the independent variable and the mediating variable, as well as the relationship between the mediating variable and the dependent variable.

Mixed Methods Purpose Statement

A purpose statement for a mixed methods research study includes both a qualitative and a quantitative purpose statement plus the rationale for mixing the two studies. Example: The purpose of this mixed methods research study is to assess the "feasibility and acceptability of a telehealth-based parent-mediated intervention for children with autism spectrum disorder" (p. 845); combining both quantitative and qualitative data will allow a "more thorough understanding of the variables that might facilitate or impede the use of the two versions of ImPACT Online in community settings."

systematic investigation

A systematic investigation includes utilizing approved and predefined sets of procedures, also referred to as design and methods, to conduct a research study. The design of the study is the framework, or roadmap, of how the study will be conducted; this framework includes the procedures (methods) used to conduct the study.

Within-Subjects Design (Repeated Measures Design)

A within-subjects design refers to a design in which the subjects participating in the research are exposed to each intervention, otherwise known as the independent variable. The same group of subjects is exposed to more than one intervention, and each subject's performance is repeatedly measured; hence the within-subjects design is sometimes called the "repeated measures design." An example of this design using the research notation below helps to illustrate how the research is conducted (children serving size). The strength of this particular design is that it does not require a large number of study participants, as the same participants receive multiple interventions. The use of one group of subjects helps control for the internal threats to validity of history and maturation, and because all participants are exposed to each intervention, subjects serve as their own control for individual differences between subjects. Weaknesses of this design include carry-over, fatigue, and practice effects.

dependent & independent variable

An independent variable is presumed to have an effect on a dependent variable and a dependent variable "depends" on an independent variable.

Immersion

Coding is a multistage process. The commonality shared among all coding methods is that prior to data analysis, the researcher must immerse themselves in the data. Immersion includes a review of the research purpose statement and research question, in conjunction with reading and rereading the data numerous times.

Non-probability methods utilize approved sampling procedures, but since the sampling procedures do not include random selection, there is no way to guarantee that the selected individuals (the sample) will be representative of the population.

Convenience: the selection of individuals that the researcher has easy access to.
Quota: the researcher needs to fill discrete groups at a predetermined number of participants. The groups have certain characteristics that are needed to answer the research question.
Purposive: the researcher purposefully selects individuals with specific characteristics or specific experiences who can best answer the research question.
Theoretical sampling: purposive sampling typically identifies and purposefully selects all the participants at the beginning of the study; theoretical sampling, by contrast, is the practice of selecting participants over the course of the study (in phases) based on the results of the emerging data analysis. That is, the researcher starts collecting data from a few participants, and the results of the data analysis reveal which type of participants should be sampled next. This type of sampling is typically used in qualitative, grounded theory research.
Snowball: the researcher identifies an individual with specific characteristics of interest or a specific life situation and asks that person to refer similar people to the researcher. The researcher continues to ask for referrals from each person referred until the researcher has an adequate number of participants. Mack, Woodsong, MacQueen, Guest, and Namey (2005) explain that "snowball sampling is often used to find and recruit 'hidden populations', that is, groups not easily accessible to researchers through other sampling strategies."

EXPERIMENTAL AND QUASI-EXPERIMENTAL DESIGNS—DEFINED

Experimental designs are the most rigorous of all quantitative designs and have the greatest degree of internal and external validity. A confounding variable is one that the researcher is unaware is having an influence on the study; as such, it is not measured or observed. It exists, but its influence on a variable (i.e., independent, dependent) is not easily assessed. Quasi-experimental designs share many similarities with experimental designs, which may include a control group as well as an intervention. The main difference between the two designs is that a quasi-experimental design does not use probability sampling methods and does not randomly assign study participants to an experimental or control group.

What Types of Data Analysis Methods Are Used for Experimental and Quasi-Experimental Research Designs?

Experimental researchers use a combination of descriptive and inferential statistics. The specific statistical test used will depend on the type of data collected, the purpose and design of the study, and how the research questions are written. In general, the researcher will use descriptive statistics to describe the sample. For example, in a research article, often one will see a chart that presents demographic information on the participants. This provides the reader with an opportunity to see, for instance, the percentages of individuals who participated in the study based on gender, age, or the mean scores for a particular data set. The use of inferential statistics allows the researcher to draw inferences (conclusions and correlations) by the analysis of parametric data (interval and ratio). In experimental research, the analysis determines whether the null hypothesis can be rejected. The main difference between experimental and quasi-experimental research studies is the inferences that can be stated at the conclusion of the analysis. True experiments can determine cause-and-effect relationships between variables, and the results of the study can be generalized back to the population. Quasi-experimental studies can also draw inferential conclusions about the study. However, those conclusions are limited. For example, the results of a quasi-experimental study cannot determine causality or generalize the results back to the whole population, since non-probability sampling/nonrandom assignment to groups were used.

Research Notations: Research notations are a shorthand used by researchers to illustrate a specific research design. The notations described here (O and X) were developed by Campbell and Stanley.

Figure 7.1 illustrates the placement of the letters (R, O, X); the numbers in subscript describe elements of the study.
O = an observation (also referred to as a dependent variable); O represents the outcome of the influence of the independent variable (X).
X = the treatment, or intervention, the experimental group is exposed to (also referred to as an independent variable), the effects of which are measured; the experimental group receives the treatment or intervention, the control group does not. In an experimental design, the researcher manipulates at least one variable. The absence of X indicates the control group.
R = indicates that participants in the group have been randomly assigned to the experimental or control group.
Time: the left-to-right dimension shows the sequential order of procedures in the experiment; this is sometimes noted by an arrow.
Subscripts: subscript numbers show difference. For example, when the numbers follow the R, they show there are two groups; when the numbers follow the O, they show that the study has several different observations (data collection points). The lack of X in the second row means that row is the control group, and a first and second O indicate a pretest and a posttest.
Research notation is simply a shorthand that researchers use to explain the design of a study. A researcher reviewing the above examples would not know what the observation(s) were; however, seeing these two examples, the researcher would immediately realize the first study had a single data collection measurement (observation), while the second example had three different data collection measurements (observations), collected over time, both prior to and after the intervention.

generalizable results

Generalizable results are a comment on how sampling was conducted and to what extent the study results are "likely to apply, generally or specifically, in other study settings."

In research, there can be many threats to internal validity that a researcher must consider and attempt to control for to the best of their ability.

History: an outside event that occurred during the research study that can impact the results of the study. Controlling for this threat: select a research design that includes a control group (more detail in Chapter 7).
Maturation: time passed and the participants grew "older, wiser, stronger, [or] more experienced" (p. 52). Controlling for this threat: select a research design that includes a control group.
Testing: the participants simply get better at taking the test because they have become familiar with it. Controlling for this threat: select a testing instrument (data collection tool) that has very high validity and reliability (more on this in the next section of this chapter).
Instrumentation: changes in the accuracy of measurements from the start to the conclusion of the study. For example, the individuals collecting the data become more experienced at collecting data as the study progresses, or the instrument is not sensitive enough to detect actual changes. Controlling for this threat: select an instrument (data collection tool) that has very high validity and reliability.
Statistical regression: participants with extremely high scores on a first test tend to perform lower on a second test, and those with extremely low scores on a first test tend to perform better on a second test. Controlling for this threat: select a data collection tool that has very high validity and reliability.
Placebo effect: it has been well documented that simply the expectation that something will work can, in the short term, cause self-perceived change. Controlling for this threat: use techniques known as double-blind and/or placebo-controlled; using both in combination increases the validity of the study. Double-blind: neither the researchers interacting with participants (to enroll participants, collect data, etc.) nor the participants know who is in the experimental or control group. Placebo-controlled: the experimental group gets the real treatment and the control group gets a fake treatment. For example, in a research study on the efficacy of a new drug, the experimental group would receive the new drug and the control group would receive an inactive substance that looks similar to the new drug.
Hawthorne effect: research participants change their behavior simply because they know they are being observed. Controlling for this threat: select a research design that includes a control group and a placebo. In observational studies (more detail in Chapter 8), conduct sustained observations and employ unobtrusive observation methods.
Selection bias: how the researcher selects people to participate in the study can impact the results of the study. A researcher must consider whether there are differences between participants in the experimental and control groups that influence the outcome of the study, or whether there are inherent differences between the population and those who volunteer to participate in the study (the sample) that impact the validity of the results. Controlling for this threat: use one of the probability sampling methods (more detail on sampling later in the chapter).
Attrition (also known as loss to follow-up): people leaving the research study for a variety of reasons (e.g., loss of interest, death, moving out of the area) can influence the results of a study, especially a clinical drug trial. Controlling for this threat: there is no agreed-upon technique; researchers can use a variety of techniques, including increasing the size of the sample to compensate for loss to follow-up.

scientific literature

If this idea is systematically defined and studied, does it have the potential to generate new knowledge? To answer this question, the researcher examines the existing scientific literature; this process is called conducting a systematic review of the literature.

METHODS: SAMPLING

In Chapter 1, sampling was simply defined as the way a researcher recruits or selects individuals to be in the study. If sampling is the technique a researcher uses to recruit/select individuals for the study, then what is the sample? Plichta and Kelvin (2013) define the term sample as "a group selected from a population in the hopes that the smaller group [the sample] will be representative of the entire population." A population, in turn, is a group that shares a common characteristic as defined by the researcher. Consider this example: a researcher seeks to understand the percentage of people living with cardiovascular disease (referred to simply as heart disease in the rest of the chapter) who adhere to their prescribed cardiac diet (e.g., limit salt and sugar, avoid saturated fat, include more fruit and vegetables in the diet). In this case, heart disease would be the common characteristic.

Undue Influence

In contrast to coercion, Part C, section 1 of the Belmont Report defines undue influence as influence that "occurs through an offer of an excessive, unwarranted, inappropriate or improper reward or other overture in order to obtain compliance". Consider the example of an investigator promising students in her psychology class that they will receive extra credit if they participate in her research project. If students are presented with only this one way to earn extra credit, then the investigator is unduly influencing potential study participants. If, however, students who did not want to participate in the research project were given non-research opportunities to earn extra credit, then the possibility of undue influence is decreased. Another example of undue influence would be if a researcher offered a large sum of money (for instance, a month's salary) to participants for taking part in a one-day study to test the effects of a drug with potentially serious side effects that is under investigation. Because the sum of money offered could persuade potential participants to engage in the study against their better judgment, this offer could present undue influence.

Quantitative sample size - size of the population

In quantitative studies the answer is straightforward: the size of the sample is determined by how large the population is. Krejcie and Morgan (1970) developed a widely used table that gives the researcher the minimum number of participants (the sample) required based on the size of the population. With the confidence level set at 95% and the confidence interval set at 5%:
- if the population is 100, the sample should be 80;
- if the population is 1,000, the sample should be 278;
- if the population is 10,000, the sample should be 370;
- if the population is 1,000,000, the sample should be 384.
Power analysis: in addition to the type of data and the statistical test used to analyze the data, a power analysis considers the alpha level, the amount of power in the study, and the effect size. Typically, researchers set the power at .80, which means the research will be able to reject the null hypothesis correctly "80% of the time." Effect size can be defined as the magnitude of "the impact made by the independent variable on the dependent variable."
Confidence level: how confident the researcher is that the results obtained from the sample will be true for the population (90%, 95%, or 99%). Confidence interval: the margin of error. Looking again at the Krejcie and Morgan (1970) sample size estimates with this added information, note that as the confidence level increases and the confidence interval narrows, the number of individuals who need to be sampled increases.
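The Krejcie and Morgan (1970) table is based on a standard finite-population sample size formula. The Python sketch below is an illustration of that underlying calculation, not a replacement for the published table; it assumes a chi-square value of 3.841 (95% confidence, 1 degree of freedom), an assumed population proportion of .5, and a 5% margin of error, which together reproduce the figures listed above.

```python
import math

def required_sample_size(population_size, chi_sq=3.841, p=0.5, d=0.05):
    """Finite-population sample size (the formula behind the Krejcie &
    Morgan, 1970 table): chi_sq is the chi-square value for 1 degree of
    freedom at the chosen confidence level, p is the assumed population
    proportion, and d is the margin of error (confidence interval)."""
    numerator = chi_sq * population_size * p * (1 - p)
    denominator = d ** 2 * (population_size - 1) + chi_sq * p * (1 - p)
    return math.ceil(numerator / denominator)

for n in (100, 1_000, 10_000, 1_000_000):
    print(n, required_sample_size(n))
# Output: 100 80, 1000 278, 10000 370, 1000000 384
```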

Single Group Time-Series Design

In the single group time-series design, there is one group, and several observations of the dependent variable (O) are taken over time, both prior to and after the intervention. This research design is utilized when the researcher wants to determine what impact, if any, an intervention had on a group over time. If the intervention had an impact, observations after the intervention would be different from observations prior to the intervention. The example research notation below includes five observations prior to the intervention and five observations after the intervention. Observations prior to the intervention are notated as baseline data. Repeated measurements over time enable researchers to observe whether there is a pattern of change over time in the dependent variable being measured. Once there is a repeated measure of the dependent variable prior to the intervention, post-intervention measurements allow the researcher to assess what impact, if any, the intervention had on the variable. This design can only be a quasi-experimental study since there is no control group.

Nonequivalent (Pretest and Posttest) Control Group Design

In this design, also called the quasi-experimental pretest/posttest design, there are two nonequivalent groups, meaning the participants were not randomly assigned to the experimental or control group (in experimental designs, equivalency is obtained through randomization). Both groups are given a pretest and a posttest, but only Group 1, the experimental group, receives an intervention.

Inclusion and Exclusion Criteria

Inclusion criteria are used to determine who is suitable to be a participant in the research study. The inclusion criteria are based on certain characteristics that are defined by the researcher based on the purpose of the research study. Exclusion criteria are used to identify individuals who meet the inclusion criteria (18 years or older with a diagnosis of high blood pressure that requires medication) but who should not be included in the study.

Reference mining

It involves finding and reading the research articles that are cited in other relevant articles' literature reviews. Reviewing the reference list at the end of a very relevant article will help to
• broaden the scope of the search;
• identify important scholars who have published on the topic;
• open up new avenues to explore regarding a topic.

METHODS How Many Participants Are Needed for Experimental and Quasi-Experimental Research Designs?

It must be noted that sample size estimation and power analysis are vital when planning experimental research, especially clinical drug trials, because the sample has a direct impact on the quality of the results of the study. The important points regarding this topic are: inclusion and exclusion criteria matter, especially in clinical drug trials, as they directly impact the study's internal and external validity; the size of the sample is determined by the size of the population and by the confidence level and confidence interval set by the researcher; and the power analysis takes into consideration the alpha level, power level, and effect size.

Descriptive statistics: these analysis techniques describe the data. The results of a descriptive study might be expressed in descriptive statistics, which includes reporting frequencies, percentages, central tendency (mean, median, mode), and description of relative position (range, standard deviation) of the data. For example, a study's results might find that 85% of the individuals completing the survey strongly agreed with a particular statement. Additionally, all studies should provide the reader with a descriptive analysis of demographic characteristics and the data when appropriate.

Mean - the average of all the scores (e.g., for the data set 12, 2, 12, 5, 5, 7, 5, 9, 5, the mean is 6.89).
Median - the score found directly in the center of the data when the data is arranged in order from lowest to highest (e.g., for the data set 2, 5, 5, 5, 5, 7, 9, 12, 12, the score directly in the middle is 5).
Mode - the score that occurs most frequently (e.g., for the data set 12, 2, 12, 5, 5, 7, 5, 9, 5, the most frequently occurring score is 5).
Range - the full spread of scores, from the lowest value to the highest value in a data set (e.g., for the data set 12, 2, 12, 5, 5, 7, 5, 9, 5, the range is 2 to 12).
Standard deviation - the dispersion of data around the mean. For example, for the data set 12, 2, 12, 5, 5, 7, 5, 9, 5 with a mean of 6.89, the standard deviation (SD) is 3.44, meaning the scores are spread out and not tightly grouped around the mean. (In a normal distribution, about 68% of scores fall within one SD of the mean.) An SD of roughly 3 indicates scores spread well apart from one another; for the SD to be about 1, the data would have to look something like 7, 6, 6, 6, 5, 5, 5, 8, 6, with a mean of 6. For example, in Table 5.5, when comparing the means of the two groups, it is important to show how the data is grouped around the mean; in this case the reader can see that the academic variables (both the means and how the scores were grouped around them) were actually very similar between the two groups. Another noteworthy point is how the two groups differed in age in Table 5.5: while the means were very similar, the range and standard deviation differ. In fact, one person aged 47 in one group accounts for the difference.
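As a minimal sketch, the Python snippet below uses the standard statistics module to reproduce the values in the example data set above; the sample (n − 1) standard deviation is assumed, since that matches the 3.44 reported.

```python
import statistics

data = [12, 2, 12, 5, 5, 7, 5, 9, 5]

print(round(statistics.mean(data), 2))   # 6.89 (mean)
print(statistics.median(data))           # 5    (median)
print(statistics.mode(data))             # 5    (mode)
print(min(data), "-", max(data))         # 2 - 12 (range)
print(round(statistics.stdev(data), 2))  # 3.44 (sample standard deviation)
```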

Research Questions in Mixed Methods Research

Mixed methods research questions combine the qualitative and quantitative approaches of research. Tashakkori and Creswell (2007) put forth three possibilities for writing questions for mixed methods research. First, the researcher can "write separate quantitative and qualitative questions, followed by an explicit mixed methods question (or, more specifically, questions about the nature of integration)" (p. 208). A research question written for a study that concurrently collects qualitative and quantitative data could be "Do the quantitative results and the qualitative findings converge?" (p. 208). For a sequential study, the research question could be "How do qualitative results explain (expand on) the experimental outcomes?" A second way is to write a single overarching mixed methods research question, which can then be broken down into quantitative and qualitative sub-questions to be answered in the various stages of the study. An example of this type of question in a parallel or concurrent design would be "What are the effects of Treatment X on the behaviors and perceptions of Groups A and B?" (p. 208). When broken down, the qualitative question would be "What are the perceptions and constructions of participants in groups A and B regarding treatment X?" (p. 208), and the quantitative question would be "What are the effects of Treatment X on the behaviors of Groups A and B?" Lastly, research questions can be written as the study evolves; this is especially true for grounded theory research. If the first phase of the study is qualitative, the research question would be framed as such, and if the second phase is quantitative, a research question would be framed as a quantitative question.

A mixed methods purpose statement should include the characteristics of both a good qualitative and a good quantitative purpose statement:
• the major intent of the study (qualitative and quantitative purpose);
• the intent of the study from a content perspective;
• the type of mixed methods design;
• the reason for combining qualitative and quantitative data.

mixed methods research

Mixed methods research utilizes rigorous and systematic investigations that intentionally combine qualitative and quantitative methodologies into a single study or a multiphase study. Mixed methods research typically involves a research purpose that is multilayered and explores a real-life issue that is often rooted in social constructs (e.g., cultural experiences, normative beliefs, oppression, geographical issues). In mixed methods research, the focus of the "quantitative research [is on] assessing magnitude and frequency of constructs and ... [the focus of the] qualitative research [is on] exploring the meaning and understanding of constructs."

Quantitative Data Analysis - a researcher collects data, which must be turned into numerical form. There are different types of numerical data that require the use of different data analysis methods. The different types of numerical data can be classified into four data scales: nominal, ordinal, interval, and ratio.

Nominal data: a type of data that allows a researcher to label a difference without putting a value on the difference; here, different simply means different. Example: a survey question asks participants to answer a demographic question on gender, giving two choices: do you self-identify as female or male? During data analysis the researcher turns all data into numbers; the answer female is assigned the number 1 and the answer male is assigned the number 2, but 2 is not better than 1.
Ordinal data: a type of data scale that identifies difference by yielding a rank ordering of difference. Participants rank order answers, for example, best, second best, and third best. Ordinal data shows a difference (this is better than that) but does not reveal by how much. Example: a survey asks participants to select their top three leisure activities from a list and then rank order the preference (first choice, second choice, and third choice). Figure 5.1 demonstrates that the amount of difference can be extremely variable, but an ordinal scale cannot capture the difference.
Interval and ratio data: unlike ordinal data, these types of data allow a researcher to measure the exact difference. These types of data have increments that are consistent and can be measured; the only difference between interval and ratio is whether the instrument used to measure the increments has a true zero. Interval and ratio data can be analyzed using inferential statistics. Example (interval): a thermostat or thermometer measures temperature in discrete units of incremental measurement; the difference between 40 degrees and 50 degrees is 10 degrees, and the same 10-degree difference applies between 70 degrees and 80 degrees. However, the thermometer does not have a true zero; a Celsius reading of zero is not the absence of all heat, so there is no absolute zero from which measurement begins. Example (ratio): a dollar is an incremental unit of measure. If one person earns 10 dollars per hour and works 6 hours and another earns 20 dollars per hour and works 6 hours, we know that the person earning 20 dollars per hour has double (2x) the income of the person earning 10 dollars per hour. Income is ratio data because the scale has a true zero: a person can have zero income.
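A short, hypothetical Python sketch can make the four scales concrete; the variable names and values below are invented for illustration only.

```python
# Nominal: numbers are only labels; arithmetic on them is meaningless.
gender_codes = {"female": 1, "male": 2}  # 2 is not "more than" 1

# Ordinal: rank order without equal spacing between the ranks.
activity_ranks = {"hiking": 1, "reading": 2, "swimming": 3}  # 1st, 2nd, 3rd choice

# Interval: equal increments but no true zero (e.g., Celsius temperature).
temp_difference = 50 - 40   # a 10-degree difference is meaningful,
# but 40 degrees is not "twice as hot" as 20 degrees

# Ratio: equal increments plus a true zero, so ratios are meaningful.
income_a = 10 * 6           # hourly wage * hours worked
income_b = 20 * 6
print(income_b / income_a)  # 2.0 -- genuinely twice the income
```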

Non-experimental Research Designs: Observational, Cohort, Case-Control

Observational: the researcher observes and records data without manipulating or intervening in any way during the research process. For example, a researcher looking to understand whether children will choose to wash their hands before eating lunch without being directed to do so may use trained observers to rate instances of handwashing in elementary school-age children during lunch. Quantitative observational research is more concrete and rigid than qualitative research; it is not rooted in social constructs and looks for evidence supporting theories that already exist. In qualitative research, codes are developed after the observational research; in quantitative research, codes are established before the research begins.
Cohort: used to determine whether an exposure is linked to the development or progression of a disease or condition; the design starts with the exposure and looks for the disease. A defining feature of a cohort study is that no one in the observed group or groups has developed the outcome of interest at the beginning of the study. A cohort study can include a control group.
i. Retrospective: looks back into existing data, such as medical records or participants' recall of self-reported data; this approach is inexpensive and fast but not as reliable.
ii. Prospective: looks forward; the researcher knows the exposure at the beginning of the study and collects data going forward, watching for development of the disease or outcome. Prospective studies may take longer to complete, but the researcher has more control over what is being observed. Prospective cohort studies can follow
■ one group (a single cohort) that has had exposure to a variety of variables, observing for the development of an outcome; or
■ two groups (two cohorts), one that has had exposure to a variety of variables and one that has not, observing both groups for the development of an outcome.
iii. Issues with cohort studies: attrition (loss to follow-up), the absence of randomization, recall bias, unrelated data, and confounding and extraneous variables due to the lack of experimental control.
Case control: while the cohort design looks for the development of an outcome (e.g., disease) based on exposure to a risk factor, the case-control design starts from a place where the researcher knows the outcome (e.g., disease) and looks back to identify the exposure (risk factor). Because it always looks back, it is not separately labeled retrospective. A case-control study uses a control (comparison) group. Issues with case-control studies include sampling bias (for example, if all the cases are drawn from one establishment or area, which does not help the study's external validity), the challenge of choosing the right controls, and observation and recall bias by the researcher and/or the participants.
iv. Data analysis: cohort and case-control studies use particular data analysis methods when looking to establish a relationship or association between variables. The data analysis of a cohort study centers on determining the relative risk: the probability of the outcome of interest developing as a result of the exposure being followed. In cases where the outcome of interest is rare, a researcher would most likely opt for a case-control design rather than a cohort design. The statistical measure used in a case-control design is the odds ratio (OR), a measure of association between an exposure and an outcome. "The OR represents the odds that an outcome will occur given a particular exposure, compared to the odds of the outcome occurring in the absence of the exposure"; the odds ratio can indicate the strength and degree of association between variables.
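The sketch below shows how relative risk (cohort analysis) and the odds ratio (case-control analysis) are computed from a standard 2x2 table; the counts are hypothetical and the functions are illustrative only, not taken from any particular statistics package.

```python
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Cohort analysis: risk of the outcome in the exposed group
    divided by the risk in the unexposed group."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

def odds_ratio(a, b, c, d):
    """Case-control analysis: odds of exposure among cases (a/c)
    divided by odds of exposure among controls (b/d), using the
    standard 2x2 layout:
        a = exposed cases     b = exposed controls
        c = unexposed cases   d = unexposed controls"""
    return (a * d) / (b * c)

# Hypothetical counts, for illustration only.
print(relative_risk(30, 100, 10, 100))  # 3.0  -> exposed group has 3x the risk
print(odds_ratio(30, 20, 70, 80))       # ~1.71 -> exposure associated with the outcome
```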

Statistics: Descriptive and Inferential - The statistical analysis a researcher can perform varies based on whether the data is parametric data (interval and ratio) or nonparametric data (nominal and ordinal).

Parametric data (interval and ratio)
- Descriptive statistics: central tendency and description of relative position
- Inferential statistics (tests of significant difference and association): t-test, ANOVA, ANCOVA, MANOVA, regression or multiple regression, Pearson r, odds ratio, etc.
Nonparametric data (nominal and ordinal)
- Descriptive statistics: frequencies and percentages (central tendency, standard deviation, and range cannot be calculated)
- Inferential statistics: Mann-Whitney U test, chi-squared, etc.
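As a hedged illustration of the parametric/nonparametric split, the sketch below runs an independent-samples t-test and its nonparametric counterpart (the Mann-Whitney U test) on hypothetical scores; it assumes SciPy is installed, and the groups and values are invented for illustration.

```python
from scipy import stats

# Hypothetical scores for two independent groups.
group_a = [78, 85, 90, 72, 88, 95, 81, 84]
group_b = [70, 75, 80, 68, 77, 79, 73, 74]

# Parametric data (interval/ratio, roughly normal): independent-samples t-test.
t_stat, p_parametric = stats.ttest_ind(group_a, group_b)

# Nonparametric alternative (ordinal data or non-normal distributions): Mann-Whitney U.
u_stat, p_nonparametric = stats.mannwhitneyu(group_a, group_b)

print(p_parametric, p_nonparametric)  # p-values from each test
```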

Probability and Non-probability Sampling Methods

Probability methods allow the researcher to obtain a random selection of individuals from the population. When a researcher uses one of the probability methods, they have obtained a random sample. Probability sampling methods are thought to yield a sample that is statistically representative of the population. This yields a study with a high level of external validity: the results of the study can be generalized to other groups of people or to similar groups of people in a different setting. It also increases internal validity because, statistically, it controls for selection bias. Probability sampling methods include:
Simple random: everyone in the population has an equal chance of being selected as a participant.
Stratified random: the researcher identifies a subgroup or subgroups in the population and wants to ensure that the sample represents the subgroup(s) found in the population.
Proportional stratified: the researcher identifies a subgroup or subgroups in the population that are very unequal in size and wants to ensure that the sample will represent the population.
Systematic: the researcher selects participants at a fixed interval, starting from a randomly chosen number.
Cluster: the researcher selects an intact, homogeneous group (or groups) from within the population.
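The following Python sketch illustrates three of these methods (simple random, systematic, and stratified random) on a hypothetical population list; the subgroup labels and sample sizes are invented for illustration only.

```python
import random

random.seed(7)  # fixed seed so the sketch is reproducible

# Hypothetical population of 200 people belonging to two subgroups, A and B.
population = [{"id": i, "group": "A" if i % 4 else "B"} for i in range(1, 201)]

# Simple random: every member has an equal chance of selection.
simple_random = random.sample(population, k=20)

# Systematic: start at a randomly chosen position, then take every k-th member.
k = len(population) // 20
start = random.randrange(k)
systematic = population[start::k]

# Stratified random: sample separately within each subgroup (stratum)
# so the subgroups in the population are represented in the sample.
strata = {"A": [p for p in population if p["group"] == "A"],
          "B": [p for p in population if p["group"] == "B"]}
stratified = random.sample(strata["A"], k=15) + random.sample(strata["B"], k=5)

print(len(simple_random), len(systematic), len(stratified))  # 20 20 20
```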

QUALITATIVE RESEARCH ARTICLES - the Background, the Methods, the Analysis/Findings, and the Discussion/Conclusion.

Qualitative research seeks to develop understanding in an area that has yet to be fully explored; for example, the how and why of a decision-making process or the shared meaning of an emotionally charged event. Qualitative study designs are more flexible than quantitative designs, with data collection continuing and patterns and themes emerging throughout the research process. To this end, qualitative research articles are structured similarly to quantitative articles, but the information is presented differently.

Qualitative sample size- Data saturation

Qualitative researchers use non-probability sampling methods such as purposive, convenience, and snowball sampling. The number of participants is often dictated by the research design and a concept known as data saturation. An appropriate sample size for a qualitative study is one that adequately answers the research question. In practice, the number of required subjects usually becomes obvious as the study progresses, as new categories, themes, or explanations stop emerging from the data (data saturation).

first-cycle coding

Saldana (2009) explains that first-cycle coding is the process of dissecting and examining the data for similarities and differences. Researchers often transcribe their own data to facilitate the immersion process, engaging in reflection on the data and subsequent first-cycle coding. Immersion is an important process that leads to the insights required for first-cycle coding. Immersion and first-cycle coding begin as soon as the researcher begins to collect data.

Solomon Four-Group Design

The Solomon four-group design is one of the most rigorous designs in quantitative research; it provides the highest level of control over threats to internal validity by controlling for pretest effects. The Solomon four-group design combines the pretest-posttest design with the posttest-only design into one research design. In the Solomon four-group design, randomly selected subjects are randomly assigned to one of four groups (Solomon, 1949):
1st row (R1): participants receive a pretest observation, an intervention, and a posttest observation.
2nd row (R2): participants receive a pretest and a posttest observation; this group does not receive an intervention.
3rd row (R3): participants receive an intervention and a posttest observation; this group does not receive a pretest observation.
4th row (R4): participants receive a posttest observation; this group does not receive an intervention or a pretest observation.
Comparing the results of the four groups allows the researcher to determine whether the results are valid, that is, whether the study has been influenced by pretesting. This design controls for many threats to internal validity, such as history and maturation, because it has control groups: if an outside event occurred during the course of the study that could impact the results, or if participants matured during the study, these changes would be seen in both the experimental and control groups.

memo

The first-cycle coding also includes the researcher keeping an ongoing memo. Merriam (2009) explains that a memo is the researcher's written record of notes, hunches, ideas, and evolving questions that arise while coding the data. These notes often guide the researcher to the realization of what new data and what type of data should be collected next (triangulation) in order to fully answer the research question.

hypothesis

The hypothesis should be written as a declarative statement and in such a way that it outlines the relationship (i.e., association, cause and effect) between two or more variables. When we refer to the relationship between two variables, we are usually referring to an independent variable and a dependent variable. Hypotheses can provide direction for the researcher as to what type of design should be used, what type of data should be collected, and how the data should be analyzed. The hypothesis is written in such a manner that it is a tentative declarative statement regarding the findings of the study (the relationship between the variables). How the researcher chooses the hypothesis format is based on several factors: the researcher's discipline, the findings of previous research studies, and the theoretical framework of the study. For example, in medical research the standard is to use a null hypothesis and an alternative (nondirectional) hypothesis. As stated above, the hypothesis is a tentative proposition which, in the end, may or may not be supported by the data analysis.

Control Group Time-Series Design

The only difference between this design and the previous design (the single group, or interrupted, time-series design) is the inclusion of a control group, which in turn increases the internal validity of the experiment. There are numerous variations on how time-series studies are structured. Some are very similar to the previously presented designs (e.g., the pretest-posttest or posttest-only design), where one group receives an intervention or treatment and the control group does not. Another type of interrupted time-series design staggers the intervention with the removal of the intervention and a reintroduction of the intervention. The one thing all these variations have in common is multiple observations at posttesting. In the notation shown, the control group (Group 2) has a parallel series of observations to Group 1. If a change is seen in the experimental group (Group 1) after the intervention and a corresponding change is not seen in the control group (Group 2), the change is most likely due to the intervention. Additionally, by collecting data at baseline and then several times at the end of the study, it is possible to determine whether the effects of the intervention lasted over time.

Posttest-Only Control Group

The posttest-only control group design is one of the simplest experimental designs. It is made up of two randomly assigned groups; one group receives a treatment, the other group does not. Sometimes it is not possible to include a pretest in an experiment for reasons including, but not limited to, the fact that a pretest may not exist and/or the experiment may be studying the effects of a life event (e.g., a natural disaster) for which it was not possible to pretest individuals prior to the event. In this design, the researcher is interested in determining whether there is a difference between the two groups in the observations made after the intervention is given to the experimental group. This is a relatively simple design for assessing cause-and-effect relationships.

Pretest-Posttest Control Group Design

The pretest-posttest control group design is made up of two groups that have been randomly sampled and then randomly assigned to either R1 or R2, both of which receive an observation before the intervention period and again afterward. The only difference between the two groups is that the R1 (experimental) group receives an intervention and the R2 (control) group does not. The research notation for this particular design is presented below.

Grounded Theory Design

The primary reason a researcher would select this design is when the researcher's goal is to develop a theory about a "process, action, or interaction, shaped by the views of participants." Often, grounded theory research questions focus on understanding what the process was, how a situation unfolded, what strategies were used to navigate the situation, or how/why something occurred. The outcome of this research is a practical theory or conceptual framework about a real-world situation. The theory/framework is created by exploring the real-life experiences, views, and actions of the participants. Grounded theory "goes beyond the description of the phenomena to the development of a theory or model, designed to better explain the process and actions, which could lead to improved methods" in health care delivery.

Specific Features of Grounded Theory Designs
• Often there is very little previous peer-reviewed literature on the topic.
• Grounded theory often uses a larger sample than other qualitative studies; sample size can range from 20 to 50 participants.
• It does not use a theoretical framework in developing the research question or study design; rather, the researcher gathers data, and it is the interplay between data collection and data analysis that refines the focus of the study.
• The sampling method often involves theoretical sampling rather than purposive sampling. Glaser (1978) explains theoretical sampling as the interplay between collecting, coding, and analyzing data in order to determine which participant should be sampled next. The use of theoretical sampling assists researchers in determining which avenues need further exploration in the development of the theory. The authors also note that the researcher's "audit trail will be strengthened by having a theoretical sampling guide for each category" or core component of the theory.
• The primary focus of a grounded theory study is not solely to understand the experiences of participants; rather, it is to uncover the processes and actions that are rooted in participant experience.

Phenomenological Design - obesity patients

The primary reason a researcher would select this design is when the researcher's goal is to develop an understanding of an event, life situation, or experience through the study of people who have lived through that event, situation, or experience. This design has also been called the study of lived experiences, since the ultimate "goal in carrying out phenomenological research is to gain an in-depth understanding of the lived experience of the participants." As such, this type of research is heavily focused on collecting data from first-person experiences; the researcher combines the shared understanding and the multiple perceptions and perspectives of the individuals to uncover the meaning of the event, life situation, or experience.

Specific Features of Phenomenological Designs
• This research relies heavily on interviews with a purposefully selected group of people.
• The researcher starts with loosely structured interview questions but then allows the participants to drive the interview process so the participants' voices, experiences, and interpretations come through.
• Member checking is often used to ensure the participants' voices, experiences, and interpretations are expressed in data analysis.
• At times this research can explore emotionally charged events.
• Researchers must suspend preconceived notions or personal interpretations of the data; this is accomplished through a practice called bracketing. Reflective bracketing involves the researcher intentionally putting aside their own knowledge, experiences, and feelings so that only the participants' experiences and interpretations guide the analysis.

Methods: Data collection - Quantitative data collection uses tools and instruments

The tool or instrument a researcher uses to collect data is directly related to internal validity (the testing and instrumentation threats). As previously stated, these tools/instruments must be both reliable and valid; otherwise the results of the study can be invalid. These concepts are known as instrument reliability and instrument validity.

Instrument reliability "refers to the extent to which a given instrument consistently measures an attribute, variable, or construct that it is supposed to measure."
1. Interrater reliability is required if the data collection involves judgments or ratings by different observers. Interrater reliability is a statistical comparison of the scores recorded between/among the people using the same data collection tool; if the raters have high interrater reliability, their scores should be equivalent.
2. Test-retest: data is collected using the same tool/instrument at different times with the same individuals. The repeat testing should yield identical (medical device) or highly correlated (questionnaire) measurements/scores. For example, a person weighed several times on the same scale should get identical measurements.
3. Equivalent forms: a data collection tool/instrument has two versions that are almost identical. To test this, both the pretest and the posttest would be given to a group of people; those individuals would take both tests, and the scores on the two tests should be almost identical.
4. Internal consistency: the scores on a group of items measuring the same concept within a tool/instrument are highly correlated. If a subject agrees with the first question, then the subject should also agree with a subsequent question that states the same concept with different wording.

Instrument validity "refers to the extent to which the instrument actually measures what it is intended to measure."
1. Content validity refers to how thoroughly the concept can be measured using the instrument. "Because there is no statistical test to determine whether a measure adequately covers a content area or adequately represents a construct [or a concept], content validity usually depends on the judgment of experts in the field." For example, a researcher who develops a new questionnaire on empathy would ask an expert, someone well known in the field for defining and measuring empathy, to judge whether the items adequately cover the concept.
2. Criterion validity refers to testing the new tool/instrument against another tool/instrument, or against another measurement if another tool/instrument does not exist. The scores from the new tool/instrument should correlate with the other tool; if the new test is valid, its scores should correlate with the standard (valid and reliable) test.
3. Construct validity refers to the "accumulation of evidence from numerous studies using a specific measuring instrument. Correlations that fit the expected pattern contribute to the evidence of construct validity."
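As an illustration of test-retest reliability, the sketch below correlates two sets of hypothetical questionnaire scores collected from the same people at two time points; it assumes Python 3.10+ (for statistics.correlation), and the scores are invented for illustration only.

```python
from statistics import correlation  # requires Python 3.10+

# Hypothetical questionnaire scores from the same ten people,
# collected twice with the same instrument two weeks apart.
time_1 = [14, 22, 18, 30, 25, 17, 28, 21, 19, 26]
time_2 = [15, 21, 19, 29, 26, 18, 27, 22, 18, 27]

# Test-retest reliability: the two sets of scores should be highly correlated.
r = correlation(time_1, time_2)
print(round(r, 2))  # a value close to 1.0 indicates strong test-retest reliability
```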

Inferential statistics - these data analysis techniques allow the researcher to draw inferences (conclusions based on the type of study that was conducted: causation, correlation, etc.) from the sample and then apply those results back to the population. Inferential statistical tests are used to test hypotheses and require the researcher set an alpha level and a confidence level (presented earlier in the chapter) during the design phase of the study. These data analysis decisions impact sampling (determining the size of the sample). An alpha level is the level at which statistically significant results are determined. For example, a researcher has a null hypothesis that is being tested:

There will be no difference in the efficacy of Drug A and Drug B in controlling patients' cholesterol levels that are in excess of 340 mg/dl. The alpha level is the level at which the researcher can reject the null hypothesis and conclude there was a statistically significant difference between the drugs; for example, that Drug A was more effective than Drug B. Typically, the alpha level for educational intervention studies is set at .05; however, for a drug trial the researcher might want to set a more rigorous alpha level before rejecting a null hypothesis, perhaps .01 or .001. Alpha levels - the alpha level (the threshold against which the computed p value is compared) is the level at which the researcher can state the results of the study did not occur by random chance. • p < .05 means that only 5 results out of 100 might have occurred by chance, not as a result of the experiment. • p < .01 means 1 result out of 100 might have occurred by chance, not as a result of the experiment. • p < .001 means 1 result out of 1000 might have occurred by chance, not as a result of the experiment. When the results of the analysis yield a p value at or lower than the alpha level, the researcher can conclude that the outcomes of the study were the result of something (an intervention or a correlation between variables) other than random chance. This is called statistical significance. Said another way, the researcher can reject the null hypothesis and say there was a statistically significant difference between the performance of Drug A and Drug B, with Drug A performing better.
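As a concrete illustration of comparing a p value to a pre-set alpha level, here is a minimal sketch using hypothetical cholesterol-reduction data for Drug A and Drug B; the numbers, group sizes, and the use of an independent-samples t-test are assumptions for demonstration, not the textbook's example.

```python
# A minimal sketch: testing the Drug A vs. Drug B null hypothesis with an
# independent-samples t-test and a pre-set alpha level (hypothetical data).
from scipy import stats

drug_a = [52, 48, 60, 55, 49, 58, 61, 50, 57, 54]   # hypothetical mg/dl reductions
drug_b = [40, 38, 45, 42, 36, 44, 41, 39, 43, 37]

alpha = 0.05                                   # chosen during the design phase
t_stat, p_value = stats.ttest_ind(drug_a, drug_b)

if p_value <= alpha:
    print(f"p = {p_value:.4f} <= {alpha}: reject the null hypothesis "
          "(statistically significant difference between the drugs).")
else:
    print(f"p = {p_value:.4f} > {alpha}: fail to reject the null hypothesis.")
```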

Convergent Design

This design has also been referred to as the parallel or concurrent design, since both studies can be done simultaneously and the timing of the data collection for each study is not dependent on the results of the other study. The primary reason a researcher would select this design is to see how the results of a quantitative study and the findings of a qualitative study, when merged for comparison, can enhance the understanding of the research topic.

second-cycle coding

This emergence of patterns, categories, and themes is called second-cycle coding. Second-cycle coding reveals the meaning of the data. This continues until the researcher obtains data saturation. Inherent in this process is that the researcher would use one of the numerous techniques to check the credibility of the findings (i.e., member checking).

themes

This first-cycle coding (collecting data, coding data, reflecting, comparing codes from previous data collection) continues until patterns (known as themes) begin to emerge from the data analysis.

Exploratory Design

This is a sequential design where the study is conducted in two phases, and the results of the first study (qualitative) are directly used to develop a tool, instrument, or intervention for the second phase of the study (quantitative). The primary reason a researcher would select this design is to explore an issue qualitatively in order to develop a better or more accurate quantitative instrument or intervention. First, the researcher would conduct the qualitative study. Then the findings of the study would be transformed (e.g., into variables, survey questions, key points in an intervention, etc.) so the researcher can develop and disseminate the quantitative instrument (i.e., survey) or intervention.

Explanatory Design

This is a sequential design where the study is conducted in two phases. The results of the first study (quantitative) are directly related to why the second study (qualitative) is conducted. The primary reason a researcher would select this design is to gain an in-depth understanding of an interesting, unusual, or key quantitative result by following up with a qualitative study. The qualitative findings provide further explanation or clarification of the quantitative results.

cited reference searching

This tool allows a researcher to search forward in the literature and find articles that cite the article being read. • For example, if the article being read was published in December of 2010 and presented original research on a new smoking intervention for vaping, using the cited reference search will find every article published after 2010 that used this article in their literature review. This is a great way to find more current research articles on the topic. Another use of cited reference searching is to create a stronger literature review by allowing the incorporation of the primary source, the original research that has been written by the researcher, into the paper rather than citing an article as a secondary source. As stated earlier, the use of primary sources ensures that all the information gathered is not subject to the interpretation of an outside author, further ensuring the scientific merit of the information cited.

Focused Ethnography Design - sororities, gatekeeper, key informant

To understand focused ethnography, a description of a full ethnography is first provided. The primary reason for selecting this design is to understand the day-to-day aspects of a culture. The researchers immerse themselves in the culture and collect data over the course of several years. It is important throughout this process to take extra precautions to ensure that their own biases and cultural beliefs do not taint the data collection and analysis. Ethnographic research studies all forms of communication (verbal, nonverbal, symbolic) and interactions (both the explicit and implicit patterns of behavior). A significant portion of data collection occurs in the field. Focused ethnographies are commonly utilized in health science research to study a specific health-related belief/issue/practice within a culture-sharing group. Specific Features of Focused Ethnographic Designs: The focus is on understanding a specific health-related issue within the culture group rather than understanding every aspect of the group's culture. Focused ethnographies take significantly less time to conduct, with data collection ranging from weeks to months. Gatekeeper - often used to gain entrance into the group; the researcher must find a member of the group who will guide the researcher and make an introduction to the group. It takes time to build trust with the group. In addition to collecting interview and observation data from members of the culture-sharing group, the researcher can identify a key informant from whom to collect data. A key informant is someone who has special knowledge of the group or a special relationship with the group. The researcher would use the key informant to explain certain aspects of the group's behavior and communication patterns in more detail.

Trustworthiness: Transferability, credibility, dependability, and confirmability

Transferability: Merriam explains that the reader of the research study determines the extent to which findings can be transferred to their setting or group. In other words, the person who reads the research determines if the findings of the study are a good "fit" to their situation (p. 226). One thing that helps the reader make this determination is having detailed information about the participants, the research setting, and the findings in the journal article. This practice is known as providing the reader a thick description. Credibility: the confidence in the truth of the findings. There are numerous data collection and data analysis strategies a researcher can use to increase the credibility of the findings. Two of these strategies are presented below. Triangulation: collecting different types of data (verbal, textual, images, observation), collecting data at different times, and/or having two researchers collect and analyze data. An example of triangulation can be found in the Johnson et al. (2017) study of paramedics' decision making. The researchers' data collection included "document review, interviews, observation, digital diaries, focus groups and workshops," which the researchers found yielded "different ways of seeing reality, yet similar issues were highlighted in data generated by each method" of data collection (p. 10). Looking at many different types of data helps the researcher have confidence that they have uncovered all the data required to gain a complete understanding of the research question. Member checking: sharing the data analysis with the research participants and/or experts in the field working with participants. That is, when the researcher begins to see meaning develop from the data, the researcher asks the participants if the meaning revealed from the data is true. This practice assists the researcher in uncovering any hidden bias the researcher might have, or any misinterpretation of the meaning of the data. Instead of the quantitative concept of reliability, qualitative researchers consider issues of: Dependability: dependability relies on whether the results of the study make sense to another researcher. Here the practice is not for a researcher to replicate the study and achieve identical results; rather it is to ask the question "are the results consistent with the data collected" (Merriam, 2009, pp. 221-223). One way to accomplish this is through an audit trail. An audit trail is a detailed report of how the researcher conducted the study, especially the collection and analysis of the data. This reporting can include the researcher sharing reflections, problems, and insights they had during the collection and analysis of data.

type I errors and type II errors. The researcher must consider other statistical information before making the final determination that Drug A was indeed more effective than Drug B.

Type I error - occurs when the null hypothesis is falsely rejected. The researcher believes that there was a significant difference when none actually exists: the researcher's statistical analysis leads them to believe that Drug A is more effective than Drug B, when in actuality it is not. A way to mitigate this type of error is to lower the alpha level, thus reducing the chance the null hypothesis will be falsely rejected. • Type II error - occurs when the null hypothesis is falsely accepted. The statistical analysis showed that there was no difference between the performance of Drug A and Drug B, when in actuality there is a difference. Unfortunately, lowering the alpha level increases the chance that the researcher will falsely accept the null hypothesis (incorrectly concluding there is no difference). The statistical procedures called power analysis and sample size calculation (previously mentioned in the sampling section) allow the researcher to determine how many participants are required in the study to achieve statistical power and help mitigate type II errors (see the sketch after this entry). The best way to help reduce type II errors is to have a larger number of participants in the study. The researcher must also determine if a statistical finding of relationship or difference has any practical (applied) or medical (clinical) value. Health science researchers must ask themselves: Do the results of this study make a real-world difference in the quality of patient care? For example, it was found that Drug A was indeed statistically more effective than Drug B. However, upon a closer look, it only improved patients' cholesterol levels by 10 points, not nearly enough to make a meaningful impact when the recommended level is 200 mg/dl.
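The entry above mentions power analysis and sample size calculation; below is a minimal sketch, assuming a hypothetical medium effect size (Cohen's d = 0.5), an alpha of .05, and 80% power, of how such an a priori calculation might look using the statsmodels library.

```python
# A minimal sketch of an a priori power analysis for an independent-samples
# t-test: how many participants per group are needed to detect a hypothetical
# medium effect (Cohen's d = 0.5) with alpha = .05 and 80% power.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"Participants needed per group: {round(n_per_group)}")  # roughly 64 per group
```

Detecting a smaller effect, or lowering alpha to guard against type I errors, drives the required sample size up, which is the trade-off described above.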

reference librarian

While the Internet continues to make it easy to access information from just about anywhere, it is important not to underestimate the value of the assistance a reference librarian can provide. Not only is a librarian an invaluable assistant when conducting research, but working with a research librarian can also save untold amounts of money when accessing articles through databases, which often charge individual users monthly or per-article fees to download.

CRITIQUING A JOURNAL ARTICLE—DEFINED

Writing a critique of an article provides the opportunity not only to identify the strengths and weaknesses of each component of the research article, but also to identify where, within the article, specific information about the study can be found. This section of the chapter will • Distinguish between the types of research articles; • Identify the main components of a research article; • Help to identify how these components differ based on methodology; and • Provide an overview of important factors to evaluate when critiquing a research article. A review of the literature scholarly article organizes and presents previous research in relation to a topic but does not put forth new knowledge. A literature review is also the first section of every research article; it is a comprehensive review whose purpose is to identify the gap in the literature that led to the new study being discussed.

Theoretical sampling:

a type of non-probability sampling ■ Never used for quantitative sampling ■ Used in qualitative research to sample, especially in grounded theory research • Theoretical underpinning, or using a theoretical framework: using an existing theory ■ A theory is something that has already been established; it defines and explains the relationship between variables to provide a foundational understanding of the issue. Used in qualitative research, especially case study research, to help explain the case based on an established foundation, which assists in transferability. ■ Used in quantitative research to build a data collection tool or intervention, or to define variables. • Theoretical framework: making a theory ■ This is the purpose of grounded theory research.

Writing questions

a. Dichotomous questions - Many survey questions are dichotomous, meaning they offer two possible answers (closed) for the respondent to choose: • yes/no; • agree/disagree; or • true/false. Male/female could be an issue because it excludes other gender identities. b. Filter - Dichotomous (yes/no) questions are often used as filter/screening questions when a researcher is looking to weed out a portion of the respondents to whom the survey does not relate. c. Open ended - Open-ended questions are unstructured questions that allow a respondent to write a response in his or her own words. For example: How do you feel about staying in a hospital overnight? Open-ended questions are often easier to write but are much harder to turn into quantifiable data. Another issue with open-ended questions is that they can often be misunderstood by a respondent, resulting in answers that are difficult to quantify or group together.

Poorly Constructed Survey Instrument Questions As stated earlier, the data collection tool (the questions in the survey) is the most important aspect of how a survey research design study answers the research question. Most issues in survey research arise from poorly constructed questions (the data collection tool).

a. Double-Barreled Questions: Asking two questions in one, where the respondent may feel differently about the two concepts and is therefore unable to answer the question. Example: Do you support the Affordable Care Act (ACA) and socialized medicine? b. Biased/Loaded Questions: Framing the question in such a way that it does not allow the respondent to disagree with the question, or that creates an assumption about the respondent's feelings or beliefs. Example: Why are Physician Assistants better than Nurse Practitioners? c. Sensitive Questions: When asking questions that are sensitive in nature, it is important to set the tone or introduce the section in a way that may make the respondent feel more comfortable. (Bad) How many times in a week do you eat in front of the TV? vs. (Good) We've found that many people often eat dinner while watching television. During a typical week, how many nights a week do you eat dinner while watching television? 0-2, 3-5, 5-7. It is important to make the behavior seem socially acceptable. d. Content wording - Are all the questions that are included necessary, and are they worded in such a way that they can be answered relatively quickly and easily? • Does the question make sense? • Is the question useful? • Is the question necessary? • Is more than one question needed? • Does the wording convey a clear meaning? • Will this survey take up too much time and cause the respondent to lose interest and walk away? • Will the respondent answer truthfully?

Writing questions for a survey instrument - the survey instrument is how the researcher collects the data

a. Focus - Each question should focus on a specific topic. Questions often become confusing when the focus is unclear. Which pain reliever do you use most often? vs. List the brand of over-the-counter pain reliever you most often buy. b. Clarity - The meaning of the question should be as clear as possible to avoid misinterpretation and incorrect answers. Do you use the computer for health-related issues? vs. Do you utilize the patient portal on your primary care physician's website? c. Brevity - Shorter questions are easier to answer and pose less risk of the respondent answering only a portion of the question or skipping over it altogether. To determine how many times a month someone eats fast food, it would be better to ask the respondent to indicate the number of times they ate at each fast food restaurant in the past month. d. Open/unstructured - Open, or unstructured, questions allow the respondent some sense of freedom to answer the question and give the opportunity to elaborate on the topic using his or her own words; these responses require some form of qualitative analysis. e. Structured/Closed - Closed, or structured, questions limit the responses that can be given by requiring that each respondent indicate agreement or disagreement with predetermined choices. With structured questions, there is no opportunity to deviate from the script. Closed or structured responses are easy to quantify and are turned into numerical form for analysis. *In semi-structured surveys the respondent is asked predetermined questions with an occasional open-ended question when the researcher is looking for clarification or an elaboration on a response.*

How questions are structured

a. Nominal - Nominal questions are used to label, name, or group responses. With nominal questions, the researcher does not assign a value to each response. For example: My hair color is black _ brown _ blonde _. Responses are not rank ordered or compared; if nominal responses are assigned numbers, it is solely for the purpose of data analysis, where the numbers simply name a difference. b. Ordinal - Ordinal questions assign meaning to responses by ranking them in order from lowest to highest or vice versa. For example: What is your annual income? $5K _ $10K _ $50K _ $100K _. Ordinal questions can also ask someone to rank activities 1-4. With ordinal questions the responses can be rank ordered, meaning that one response is better/larger/more than another response, but it is often difficult or impossible to determine the distance between the responses. c. Interval - similar to ordinal, except the distance between responses in interval questions is measured in standard increments. Interval scales allow the researcher to utilize a wider range of data analysis, including averages and inferential statistics (see the sketch after this list). i. Likert - measures attitudes, values, internal states, and judgments about ... behaviors in both research and clinical practice. The most common rating scales are 5-point and 7-point, e.g., strongly disagree (1) to strongly agree (7); researchers sometimes use a forced-choice format by removing the neutral option. ii. Semantic Differential - measures attitudes, values, and opinions by having respondents rate their opinion or belief on a scale anchored by bipolar adjectives, e.g., choosing where they fall between clean and dirty. iv. Guttman or Cumulative - respondents check each item with which they agree. The items themselves are constructed so that they are cumulative, rank ordered in difficulty from least to most extreme or most to least extreme; if a respondent selects item 3, they also agree with items 1 and 2.
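As a small illustration of why measurement level matters for analysis, here is a minimal sketch with hypothetical responses: nominal data is summarized with counts, while Likert items treated as interval data can be averaged. The example items and values are assumptions for demonstration only.

```python
# A minimal sketch showing how responses at different measurement levels are
# typically handled: nominal responses are counted, while Likert responses
# (treated as interval) can be averaged.
from collections import Counter
from statistics import mean

# Nominal: labels/groups only -> report frequencies, not averages.
hair_color = ["black", "brown", "blonde", "brown", "black", "brown"]
print(Counter(hair_color))          # e.g., Counter({'brown': 3, 'black': 2, 'blonde': 1})

# Likert (5-point, strongly disagree = 1 ... strongly agree = 5),
# treated as interval so a mean score is meaningful.
likert_item = [4, 5, 3, 4, 2, 5, 4]
print(f"Mean agreement: {mean(likert_item):.2f}")
```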

1. Non-experimental Research Designs- methods

a. Sampling - Non-experimental researchers can sample using either probability or non-probability methods (with the exception of theoretical or purposive sampling). b. Data collection - The most common methods are observation, surveys, or archival records. i. Observation - sorting the data into predetermined and pre-specified sections. Observations are normally broken up into small segments of time, e.g., 5 minutes or 15 minutes; raters are highly trained in what to look for and how to rate each observation; and there is normally more than one independent rater collecting data at a time in order to ensure validity of the measurement tool. The natural setting can be both an advantage and a disadvantage for research. ii. Survey - the ability to collect large amounts of data in a relatively short amount of time. Surveys are also relatively inexpensive to use and can be employed in many different ways. Limitations include self-reported data and loss to follow-up. iii. Archival records - Researchers often use previously collected information as data in a study. This may be in the form of medical records or data collected in a longitudinal cohort study. Archival data is relatively inexpensive, but because it was collected for a different purpose, it may not accurately reflect what the researcher is looking to measure in the current study. c. Data analysis - employs descriptive and inferential statistics to analyze data.

Survey Methods - the research methods are sampling, data collection, and data analysis. The methods a researcher selects must align with the research design of the study.

a. Sampling - Though it lacks the rigor of experimental designs, survey research can have a high level of external validity if it uses one of the probability sampling methods. Therefore, using probability sampling methods will often produce results that are most representative of the population. In survey research, the term population is used to define "a set of elements; an element is defined as the basic unit that comprises the population" (for example, all males over the age of 25 living in the Northeast who attended a four-year college or university). i. Response rate - The response rate is another issue the researcher needs to take into account when sampling. The response rate (the percentage of people who actually take part in a survey and return their responses) often tends to be much lower than the sample size. It is not uncommon for the researcher conducting this type of research to continue to sample until the optimal sample size has been reached. Some issues that may influence response rate are: length and topic, the relationship between researcher and participants, how data is being collected, incentives, and life issues. A worked calculation appears after this entry. ii. Population enumerated - Can the population be enumerated? In other words, can the researcher establish the number of units in the population to be sampled? If the researcher can access that population through voter registration logs or DMV records of people who hold a driver's license or non-driver identification, the researcher can access a list of people from which to draw a representative sample. This population can be enumerated in the sense that the researcher knows the total population from which a representative sample will be drawn at the start of the study; the researcher can then establish how large a representative sample will be needed to ensure generalizable results. b. Data collected - descriptive and inferential data analysis methods. i. Survey validity: • Content Validity: Do the questions express the underlying concept they were designed to reflect? • Criterion Validity: Do the responses to the questions agree with the gold standard for the underlying concepts? • Construct Validity: Do the responses support the hypothesized relationships among the underlying concepts? Survey reliability: • Test-retest: Does the same question get the same response over time or with a different sample? • Interrater: Do two interviewers with the same questionnaire get the same response? • Internal Consistency: Do questions designed to evaluate the same concept obtain equivalent responses? Other threats to internal validity common in survey research include self-selection, response bias, recall bias, interview distortion, and false respondents. ii. Interviewer bias - The researcher is often a data collection tool in survey research. It is always possible that an interviewer can distort the responses of a survey by not asking questions that may make them uncomfortable, or questions that they believe they know the answer to based on a respondent's previous answers. In addition, there is always the possibility that a researcher's subjectivity (personal opinions and beliefs) may accidentally play a part in the data collection process. iii. False respondents - One issue that commonly occurs in survey research is false respondents. The researcher often does not know whether the individual sampled actually completed the survey. For example, a survey is sent to a randomly selected group of physicians; however, the researcher would not know if the returned survey was completed by the physician or if the physician asked the office assistant to complete the survey instead.
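A minimal worked example of the response-rate calculation mentioned above, using hypothetical numbers:

```python
# A minimal sketch of the response-rate calculation: the percentage of
# sampled people who actually return a completed survey (hypothetical values).
surveys_sent = 400        # hypothetical number of people contacted
surveys_returned = 112    # hypothetical number of completed responses

response_rate = surveys_returned / surveys_sent * 100
print(f"Response rate: {response_rate:.1f}%")   # 28.0%, often far below the needed sample size
```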

Chapter 9: Survey Research - Defined. If researchers wish to obtain generalizable results, they must use one of the probability sampling methods, use a sample size estimator, and weigh the pros and cons of each survey format.

a. Survey as a design - Survey research design (also known as survey research) is a research design used to develop an understanding of a population's knowledge, attitudes and feelings, perceptions and beliefs, and/or behaviors about specific issues; it is used to "predict attitudes and behaviors" or "describe attributes" of the population. b. Survey as a data collection tool - A survey is a data collection instrument. There are two styles of surveys: questionnaire and interview. While interviews are also used in qualitative research, the difference here is the structure and intention of the data collection process. When a survey is used to ask specific questions, whether data is being collected through a questionnaire or an interview, there is a level of standardization in data collection. c. Standardization - everyone is asked the same questions in roughly the same order using the same terminology. These questions can be delivered in a variety of ways and look to solicit a range of responses, from simple yes/no to in-depth explanations of feelings or attitudes. Additionally, the answers to those questions are analyzed numerically.
How is survey research conducted? Survey research (design) is conducted by the dissemination of questionnaires (data collection) and by conducting interviews (data collection). There are a variety of survey data collection formats, each having their own strengths and weaknesses.
Personal Interview • Strengths ■ Generally has high response rates ■ Allows the interviewer to elaborate on questions or ask for clarification ■ Responses are usually easy to analyze • Limitations ■ Costly due to the large number of interviewers needed ■ Data collection is slow, and the overall study requires a lot of time ■ Difficult to control for interviewer bias
Telephone Interview • Strengths ■ Less costly because no field work is required ■ Random digit dialing (RDD) allows researchers to reach a large representative sample ■ Data collection takes less time than in-person interviews ■ Allows the interviewer to elaborate on questions or ask for clarification ■ Generally has a better response rate than mailed surveys • Limitations ■ Can only reach households that have telephones ■ Higher non-response rate than in-person interviews ■ The subject listens and responds with no visual cues ■ Hard to control for question confusion when answering
Mail Survey • Strengths ■ Cheaper than phone or in-person interviews ■ The need for a smaller number of interviewers/staff contributes to lower cost ■ Provides access to a large representative sample ■ Respondents can participate when it is convenient • Limitations ■ Easy for the respondent to not participate or forget to participate ■ Low response rate ■ Incentives may increase participation but also increase the cost ■ Longer waiting period for responses to be returned ■ Reminders increase the response rate but also increase cost
Online Survey • Strengths ■ Lowest cost ■ Provides access to a global population ■ Timely ■ Easy to collect only relevant data through online programs ■ Provides access to an enormous representative sample • Limitations ■ Varying computer capabilities may not allow access to some households ■ Easy for respondents to ignore or delete requests, which leads to very low response rates ■ May contain higher response rates from those interested in the topic, resulting in biased data

Non-experimental research designs: Threats to external and internal validity

a. Threats to external validity - External validity is the extent to which one can assume that the results of the study are generalizable to other groups or the larger population. Non-experimental research tends to have varying levels of external validity. Major factors that can contribute to improving external validity in non-experimental research: - Real-life setting. - Representative sample: non-experimental studies that are able to use one of the probability sampling methods will have higher levels of external validity than studies that cannot. - Replicability: the more another researcher can replicate a study in a different setting and have similar results, the more valid the results of the first study are considered. b. Threats to internal validity - Internal validity, on the other hand, is the extent to which the results of the study are true and not a result of some confounding variable or bias that has influenced the outcome and results of the study. - Self-selection: non-experimental research is often conducted on groups that share a commonality such as an occupation, lifestyle, geographic location, recreation preference, or other characteristics that define a group. These factors can be considered confounding variables that threaten internal validity. - Response bias: much of non-experimental research relies on self-reported data. - Recall bias: much like response bias, recall bias occurs when participants' self-reported data reflect an incorrect or incomplete recollection of events or of exposure to the risk factors being studied. - Researcher bias: because non-experimental research cannot always randomly sample or use double-blind placebo controls, there is always the possibility that a researcher's subjectivity (personal opinions and beliefs) may accidentally play a part in the analysis of data. - History and maturation: non-experimental studies like cohort or cross-sectional studies often look at the natural progression of, or changes over, time. Threats to validity can occur if an uncontrolled historical event, like a war, occurs that may drastically alter people's perceptions or behavior for years to follow. Similarly, maturation can occur at different rates within a select group of people; when evaluating change over time in a developmental study, the maturation rate of the subjects may be a threat to the internal validity of the study.

Secondary sources:

articles written about research studies that are not written by the actual researcher. Reviews and articles published in scholarly and professional journals that summarize or evaluate research are still considered academic but are not as highly valued as primary sources, simply because they are open to interpretation and generalization by the author that may or may not remain in context with the original research. ■ For example, a review of the literature article published in a peer-reviewed journal presents a detailed overview and analysis of the results of a systematic review of the literature. The primary goal of the article is to share a unique perspective on a topic. This type of article is considered a secondary source in that it provides a compilation of past research but does not generate new knowledge.

Primary sources:

articles written to describe original research. Primary sources appear in scholarly journals as original research. ■ For example, the researcher disseminates the results/findings of their research through a firsthand account of the study they have conducted. This lends a deep level of credibility to the information contained in the article, as it is written by the researcher. ■ Another example is dissertations. These often involve a doctoral student conducting original research guided by a faculty member, but these studies have not been published in a peer-reviewed journal and as such have not undergone a peer-review process.

evidence-based medicine/systematic research

regarded as the highest level of care a physician can provide patients: the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients

research paradigms.

basic assumptions, or the worldview, a researcher operates under. Guba (1990) suggests that the determination of one's paradigm can be found in how one answers the following three questions (p. 18): 1. Ontological: What is the nature of the "knowable"? Or, what is the nature of "reality"? 2. Epistemological: What is the relationship between the knower (the inquirer) and the known (or knowable)? 3. Methodological: How should the inquirer go about finding out knowledge?

Scientific principle

global term used to describe the utilization of scientific procedures

Non-experimental Research Designs: Developmental

i. Cross-sectional: looks at development over time by taking a snapshot of the stages of development from different groups all at the same time. It can be considered a fast and inexpensive alternative to a longitudinal study, as there is only one data collection point for each respondent and the study does not look to follow participants over an extended period of time. Instead, development is shown through the differences that exist between groups, not from one group moving ahead through time. A cross-sectional design does not utilize a control or comparison group. For example, surveying children's eating habits in 1st and 5th grade on one day to see if there is a change. ii. Longitudinal: follows a group (sometimes called a cohort) over a period of time. A cohort is "a group of persons sharing a particular statistical or demographic characteristic". In a developmental longitudinal study, the focus is to record development over time; for example, a study can look to follow the survival rates of children born in 1980. Issues with cross-sectional and longitudinal designs: - While developmental longitudinal studies allow researchers to look at numerous variables over the course of the study, they are often expensive and require a team of researchers to complete. Another issue is attrition (loss to follow-up), where participants leave the study for various reasons before it is completed. - In developmental cross-sectional studies, researchers can collect data quickly and inexpensively, but confounding variables can influence the outcome of the study (e.g., low internal validity). In the example of elementary students' eating habits at school, schools sometimes employ many lunch aides to cover all the lunch periods. If grade 4 students eat lunch with a lunch aide who is extremely strict and grade 3 students eat lunch with a lunch aide who is extremely lenient when it comes to enforcing lunchroom rules, the lunch aide becomes a confounding variable.

Methods: Data collection- Qualitative data collection

i. Prolonged engagement - Prolonged engagement means the researcher spends enough time in the field with people that the researcher can develop an overall understanding of the environment. If prolonged engagement provides scope, persistent observation provides depth. ii. Persistent observation - Once the researcher has a detailed understanding of the environment and has built trust with participants, the researcher can focus on collecting the data (persistent observation) that will best answer the research question. iii. Triangulation - triangulation involves collecting different types of data (document, interview, image, observation); collecting data at different times, from different places, and from different people; or having multiple researchers collect data. iv. Interviews - The researchers planned to interview 15 patients in their study of the lived experiences of patients post-coronary angioplasty, but the researchers found that after interviewing 13 patients they were not identifying any new information. The researchers found "data saturation was achieved at interview 13, but the researchers continued on data collection by conducting 2 more interviews to make sure". v. Data saturation - Qualitative researchers use data saturation to determine when to stop the data collection process.

Variable

is a term frequently used in quantitative research. According to Flannelly, Flannelly, and Jankowski (2014), a variable is "... something that takes on different values; it is something that varies" (p. 162). Although there are different types of variables, two commonly used categories are dependent and independent.

Applied research

is a type of research that seeks to study issues that have "immediate relevance to current practices, procedures, and policies"

moderating variable

is a variable that "modifies the form or strength of the relation between an independent and a dependent variable". For example, researchers investigated whether anxiety (the mediating variable) mediated the effect of stress, self-esteem, and positive or negative affect (independent variables) on depression (the dependent variable). It was found that anxiety mediated the effect of stress and self-esteem (independent variables) on depression (dependent variable). However, when looking at moderating effects among all the variables, there was an "interaction between stress and negative affect and between positive and negative affect [that] influenced self-reported depression symptoms" (p. e7). So while "anxiety partially mediated the effects of stress and self-esteem on depression" ... "there was a significant interaction between stress and negative affect, and positive affect and negative affect on depression" (p. e7). These results (the combination of examining the mediating and moderating effects of variables) provided a much more in-depth understanding of the relationships among the independent variables and of their combined impact on depression.
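As an illustration of how a moderating (interaction) effect is commonly tested, here is a minimal sketch using simulated, hypothetical variables (stress, negative affect, depression) and an ordinary least squares regression with an interaction term; it is not the cited study's analysis or data.

```python
# A minimal sketch of testing moderation: add an interaction term
# (stress x negative affect) to a regression model predicting depression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
stress = rng.normal(size=n)
neg_affect = rng.normal(size=n)
# Simulated outcome in which negative affect strengthens the stress effect.
depression = 0.5 * stress + 0.3 * neg_affect + 0.4 * stress * neg_affect + rng.normal(size=n)

df = pd.DataFrame({"stress": stress, "neg_affect": neg_affect, "depression": depression})
model = smf.ols("depression ~ stress * neg_affect", data=df).fit()
print(model.summary().tables[1])   # coefficient table, including the interaction term
```

A statistically significant interaction coefficient (stress:neg_affect) is what would indicate that negative affect moderates the stress-depression relationship.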

Reliability

is a very complex concept. For the purpose of this book, it will be defined as issues related to the soundness of the data collection procedures. Validity and reliability as defined in the previous section are exclusive to quantitative research.

statement of the problem

is one or two sentences that outline the problem that the study addresses, while the "... problem statement is the statement of the problem and the argumentation for its viability"

Validity

is the authenticity of the results. There are several types of validity a researcher must consider; here we will limit the discussion to two types of validity: External validity: the extent to which the results of the study will be true for different groups of people or similar people in different settings. Meaning, a researcher conducts a study with a smaller group of people with the hopes of then applying those results to much larger groups of people. A researcher's decisions about research design and sampling method will impact the degree of external validity in each study. Internal validity: the extent to which the results of the study are true. That is, when a researcher conducts a study, they want to make sure the results they get are because the intervention worked instead of as a result of some confounding variable. A confounding variable is a variable the researcher is unaware of and that has an impact on the outcome of the study. A researcher's decisions about research design, sampling method, and data collection method will impact the degree of internal validity a study has. Validity and reliability as defined in the previous section are exclusive to quantitative research.

The directional hypothesis

not only predicts there will be a relationship between the variables, but also states how that relationship will be expressed (e.g., as one variable increases, the other will decrease), ultimately stating what the study results will be. ■ Women who developed PTSD symptoms would show greater decreases in PA over time than women who experienced no trauma, or trauma but no symptoms. ■ Higher versus lower PTSD symptom severity would be associated with greater reduction in PA in a dose-response manner.

The nondirectional hypothesis

predicts there will be a relationship between the variables, but it does not state what that relationship will be. That is, it states there will be a relationship or association between variables but does not express what form that relationship or association will take. ■ There will be a correlation between knowledge, age, race, and gender. ■ There will be a correlation between attitude, age, race, and gender. ■ There will be a correlation between behavior, age, race, and gender. ■ There will be a correlation between behaviors and knowledge. ■ There will be an association between PTSD symptoms and PA over time. ■ Severity of PTSD symptoms will be associated with PA in a dose-response manner.

lit review- Relevant

the researcher digs deeply through all that has been previously written specific to the purpose of the proposed research topic and incorporates all relevant information.

During the conceptualization stage

the researcher reviews all the previous research studies done on a specific topic to identify and create the problem statement, which leads to the development of a research purpose statement and research question(s).

Ethical research

therefore, is a systematic investigation designed to develop or contribute to generalizable knowledge that conforms to accepted standards of conduct.

Qualitative studies:

use an iterative process of breaking data into small constructs to find patterns that reveal the essence of meaning. Data analysis begins while data is being collected; the process is lengthy, and the presentation of findings includes rich textual descriptions, direct quotes, and/or images.

Quantitative studies:

use descriptive and inferential statistical procedures on various types of numerical data. Data analysis is conducted at the conclusion of the study; if the correct test is selected, analysis is quick and straightforward, and the results of the analysis can be presented in tables and charts.

Non-experimental Research Designs: Correlation:

when a researcher is looking to test a hypothesis about possible relationships between variables that may be unethical to test experimentally, or before moving to an experimental study, a correlational study is often used. Correlational research can determine the strength and degree of association between variables, but it stops short of determining a cause-and-effect relationship. It can tell a researcher that as X increases, so will Y, but it does not go so far as to say X causes Y to increase. Data analysis - Typically the data is arranged on a scatterplot to determine association. By looking at the line of regression, the Pearson correlation coefficient (also known as the Pearson r), and the proximity of the scatter points to that line, a researcher can determine if there is a correlation between variables. The closeness of the points to the line indicates the strength of the relationship, or the r value: the closer to the line, the stronger the correlation. If all points fall exactly on the line, it would indicate a perfect correlation, or an r value of 1 or -1. a. No Correlation/Positive Correlation/Negative Correlation - the coefficient falls on a scale from -1 (negative correlation) through 0 (no correlation) to +1 (positive correlation). Issues with correlation design - Correlational research does not identify a causal relationship between variables. In other words, it can identify a relationship between variables that is either positive or negative, but it cannot prove that one variable caused the change in the other or vice versa.
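To illustrate, here is a minimal sketch computing a Pearson r with hypothetical data; the variables and values are invented for demonstration, and a strong correlation in the output would still say nothing about causation.

```python
# A minimal sketch of computing a Pearson r: the coefficient ranges from -1 to +1
# and indicates the strength and direction of association, not causation.
from scipy import stats

hours_exercised = [1, 2, 3, 4, 5, 6, 7, 8]                 # hypothetical X
resting_heart_rate = [78, 76, 75, 72, 70, 69, 66, 64]      # hypothetical Y

r, p_value = stats.pearsonr(hours_exercised, resting_heart_rate)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")   # strong negative correlation, not proof of cause
```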

Research Involving Children assent

"Children are persons who have not attained the legal age for consent to treatments or procedures involved in the research, under the applicable law of the jurisdiction in which the research will be conducted" Yet, in consideration of the evolving maturity and independence of a child and, consistent with federal regulations, investigators should engage children, when appropriate, in discussion about research and their assent. "Assent means a child's affirmative agreement to participate in research. Mere failure to object should not, absent affirmative agreement, be construed as assent" If a child does not assent to participate in research, even if the parents or legal guardian grant permission, the child's decision prevails.

Research Questions in Qualitative Research- PICo

"The content of a qualitative research question needs to reveal an area of interest or problem emerging from the researcher's professional and/or personal experience or the literature, where a gap of knowledge exists or where contradictory and unexplored facets are present" Qualitative research questions are not as narrow and specific as quantitative research questions. Rather, they are broad questions focused on "the why and where of human interactions". In other words, researchers are curious about people's experiences of an event, events, and/or a condition and seek to uncover the perspectives of an individual, a group or more than one group (Agee, 2009). When conducting research, the qualitative researcher cannot know ahead of time what will be described or what will emerge from the study. As a result, qualitative research questions are open ended and written in a way that seeks to interpret and/or describe and/or explore how or why something occurs, rather than seeking to find the relationship between variables

human subjects

"a living individual about whom an investigator (whether professional or student) conducting research obtains (1) data through intervention or interaction with the individual, or (2) identifiable private information

Coercion

"occurs when an overt threat of harm is intentionally presented by one person to another in order to obtain compliance". Consider the following example: an elderly woman who is a resident in a nursing home is forced to choose between participating in a research study or leaving the nursing home. The elderly woman lacks the ability to make a decision based on her own free will. She is being forced to choose one of two options; participate in the research study and stay in the nursing home or don't participate in the research study and leave the nursing home. The participant in this case is being threatened in order to obtain compliance, the threat that she will not be able to stay in the nursing home. Her ability to make a decision based on her own free will has been taken away. Another example of coercion in research would be where a physician threatens to stop providing care to his patient unless the patient joins a clinical trial. The physician is making an overt threat of harm: "You can no longer be my patient" in order to coerce or force the patient to participate in the study.

Scientific method

"principles and procedures for the systematic pursuit of knowledge involving the recognition and formulation of a problem, the collection of data through observation and experiment, and the formulation and testing of hypotheses."

Popular journals

(also known as magazines) and websites contain articles written by writers who are employed by the magazine or work freelance. These articles cover popular, hot topics, which may appear at first glance to be research oriented but are not reviewed by experts in the field. The facts in articles published in popular journals may very well be accurate, but they are often quoted from opinion or taken from secondary sources. It is also equally likely that the facts in a popular journal article have been taken out of context. The reader must always make determinations about the validity of the material.

paradigms: positivism, post-positivism, constructivism, and pragmatism

- Positivism: only one truth (quantitative). - Post-positivism: not one absolute truth (quantitative). - Constructivism: multiple truths (qualitative). - Pragmatism (mixed methods): focuses on what works; truth can be drawn from any of these worldviews.

Informed Consent Checklist

1. A statement that the study involves research, an explanation of the purposes of the research and the expected duration of the subject's participation, a description of the procedures to be followed, and identification of any procedures which are experimental;
2. A description of any reasonably foreseeable risks or discomforts to the subject;
3. A description of any benefits to the subject or to others which may reasonably be expected from the research;
4. A disclosure of appropriate alternative procedures or courses of treatment, if any, that might be advantageous to the subject;
5. A statement describing the extent, if any, to which confidentiality of records identifying the subject will be maintained;
6. For research involving more than minimal risk, an explanation as to whether any compensation and an explanation as to whether any medical treatments are available if injury occurs and, if so, what they consist of, or where further information may be obtained;
7. An explanation of whom to contact for answers to pertinent questions about the research and research subjects' rights, and whom to contact in the event of a research-related injury to the subject; and
8. A statement that participation is voluntary, refusal to participate will involve no penalty or loss of benefits to which the subject is otherwise entitled, and the subject may discontinue participation at any time without penalty or loss of benefits to which the subject is otherwise entitled.

Voluntariness

An agreement to participate in research is considered a valid consent only if it is given voluntarily. Once the individual has received and comprehended the required information, it is important to give sufficient time for the individual to think about the research before giving consent to participate in the study. This element of the informed consent process requires that the conditions surrounding the consent are free of coercion (i.e., inappropriate financial or other rewards) and/or undue influence (refer to the section above on undue influence for a review of this information) Appelbaum, Lidz, and Klitzman (2009) use the term "potentially impaired voluntariness" (p. 31) when describing situations that may preclude an individual from being able to voluntarily give a valid consent. For instance, a substantial amount of money or compensation offered in exchange for participating in a research study may potentially impair an individual's ability to give thoughtful consideration as to whether they want to voluntarily give consent for the study. The same could be said for patients who are ill with a particular medical condition and do not have access to health care. Should this patient be presented with an opportunity to participate in a research study that will provide a possible treatment for their condition, this opportunity may potentially impair their ability to give a voluntary valid consent.

Jewish Chronic Disease Hospital Study

Another example of an egregious study that took place in an institution in which people were supposed to have been cared for is the Jewish Chronic Disease Hospital Study. In 1963 two doctors injected live cancer cells into hospitalized patients with chronic diseases. The premise of the study was to see if patients who were debilitated with a chronic disease rejected cancer cells, albeit at a slower rate than healthy patients (McNeil, 1993, p. 57). It is important to note that this study was so egregious that it was brought to the attention of the Board of Regents of the State University of New York, who, upon review, found that not only had the research protocol not been presented to the hospital's review committee, but the patients' physicians were unaware of their patients' involvement in this study.

Research Stages and Corresponding Action Steps

Conceptual (Thinking) - Having an idea (research problem), systematically reviewing the literature to verify the problem has the potential to generate new knowledge, writing a problem statement, a research purpose statement, and research question(s).
Design (Planning) - Selecting the best research design and research methods (sampling, data collection, data analysis) that align with the research question(s), submitting the research study proposal for IRB review.
Empirical (Doing) - Obtaining IRB approval to conduct the study, recruiting/selecting the sample, obtaining informed consent from participants, collecting the data.
Analytic (Analyzing) - Utilizing the best analysis strategies to yield meaningful results from data (answering the research questions and simultaneously uncovering new avenues of inquiry).
Dissemination (Sharing) - Writing a journal article to share results/findings (new knowledge) with the scientific community.

systematic review of the literature

Conducting a systematic review of the literature is the first step a researcher takes after having an idea during the research study's conceptualization stage. It involves a methodical, systematic, and exhaustive evaluation of past research studies in order to identify whether the idea has the potential to generate new knowledge. -creates a "firm foundation for advancing knowledge; - facilitates theory development; - closes areas where a plethora of research exists; - uncovers areas where research is needed" A systematic review of the literature is an exhaustive review process involving a multi-phase and multistage search of all that has already been written on the topic of the proposed research study. It is the foundation of the research problem statement, which explicitly identifies the relationship between what has already been studied and the generation of new knowledge by providing a theoretical framework and rationale for future research studies

autonomous person

Demonstrating respect for the decisions made by an autonomous person would involve respecting their decisions and opinions, unless said decisions and opinions would be harmful to others. Showing a lack of respect to an autonomous person could be manifested in a number of ways. One could show a lack of respect by rejecting, or interfering with, a person's ability to carry out or act on their opinions and choices. Another example would be by withholding information, for no compelling reason, for the purpose of interfering with an individual's ability to make a decision. Two hallmarks of an autonomous individual would be that they have the ability to both understand and process information and, should they choose to participate in a research study, they are free to do so without being coerced or influenced by others. "In research involving human subjects, respect for persons implies that, when given adequate information about the research project, that subjects voluntarily decide to participate"

Hand searching

Hand searching is the process of identifying key journals in the researcher's reference list and going through the journals page by page. The tools listed above help the researcher find important articles; however, there is often a limit to the number of key search terms listed in an article. A hand search can identify new search terms, types of studies that are not listed under the article's search term, and other source material for reference mining.

Once the researcher is fairly confident that they have exhausted the literature, it is time to begin the process of organizing the material.

How Is the Systematic Review of the Literature Structured? There are two organizational steps when creating a systematic review of the literature. 1. Organizing the articles; 2. Organizing the written flow of the actual review.

QUALITATIVE RESEARCH ARTICLES-Discussion/Conclusion Section

In this final section it is important that the researchers use rich description to discuss the findings and conclude the study. Findings should be tied back to the central phenomenon described in the background. Qualitative articles will weave a discussion of the literature into the analysis of the findings and discussion of the central phenomenon as appropriate. Evaluating or Critiquing the Discussion/Conclusion Section The critique process for a qualitative study centers on an evaluation of the study's overall trustworthiness. The most common criteria include a review of the study's "credibility, dependability, transferability"

RESEARCH WITH HUMAN BEINGS REQUIRES INFORMED CONSENT

Informed consent is a process that includes giving all the information to a potential research participant in a way they can understand so they are able to make an informed decision about whether to volunteer for the study. In other words, "... relevant information is provided to a person who is competent to make a decision, and who is situated to do so voluntarily." The informed consent process comprises three elements: information, comprehension, and voluntariness. It is imperative that during the informed consent process the researcher put the participant's rights, welfare, and safety above all other concerns, whether personal or scientific.

QUANTITATIVE RESEARCH ARTICLES- Introduction, the Methods, the Results, and the Discussion/ Conclusion

Introduction Section- The introduction, sometimes called the background and significance or background, follows the abstract. This section is often referred to as the literature review section of the research article. This section: • provides the highly summarized, integrated, and synthesized version of the systematic review of the literature conducted during the first stage of research; • ends with the research study's problem statement, purpose statement, and research question(s). (The problem statement identifies the topic as researchable and uses a cited summary of previously published research to support any claims made in the literature review; it is a specific framing of the gap in the literature, which is used to develop the basis for the research being conducted.) • Remember, the most important goal of the introduction is to identify the gap in the literature, which sets the stage for the study.

Information

It is vitally important that potential research participants be given sufficient information in order to decide whether or not they want to participate in the research. This information may include, but is not limited to: a description of the purpose of the research as well as the procedures involved; potential risks and anticipated benefits; any available alternative procedures (when therapy is involved); and a statement informing the potential research participant of their ability to ask questions as well as to withdraw from the research study at any time.

Comprehension

Many factors may impact a participant's ability to comprehend the information presented as part of the informed consent process, and it is necessary to adapt the presentation of the material to the participant's capacities. If language is a barrier, it is important to present information in a language that is understandable to the participant or his or her representative. If English is not the participant's primary language, consideration should be given to providing non-English-speaking participants with a translated informed consent document. Should comprehension be severely limited due to immaturity or mental disability, it is important to seek the permission of a third party in order to protect the participant from harm. The individual selected to be the third party should be someone who understands the participant's situation and will act in their best interest. As part of the informed consent process, it is important to provide an opportunity for individuals to ask questions about any of the information presented. In addition, it is important to consider what barriers, if any, may be present that might prevent an individual from asking questions. For instance, in some cultures it may be considered rude to ask questions of the investigator/researcher presenting the information, which may result in the participant not fully understanding the information presented. In these situations, who presents the information to the participant, how it is explained, and establishing an atmosphere in which the participant is comfortable asking questions become extremely important.

research methodology

Methodology, in combination with the type of research question, will delineate the appropriate research design, sampling method(s), data collection method(s), and data analysis method(s) utilized in the systematic investigation.

MIXED METHODS RESEARCH ARTICLES

Mixed methods research articles should clearly share the rationale for why conducting both a qualitative and a quantitative study was required to answer the research questions. They report on both quantitative and qualitative designs and methods and include a discussion of both results and findings. Critiquing mixed methods research articles requires an understanding of how both qualitative and quantitative research is conducted and how their findings/results are disseminated.

diminished autonomy

Not every person has the capacity to act as an autonomous agent, whether due to illness, a mental disability, or circumstances that severely restrict their freedom, and such persons may require protection while incapacitated. An individual with diminished autonomy is one who is not able to act as an autonomous agent and therefore is not "... capable of deliberation about personal goals and of acting under the direction of such deliberation." This diminished autonomy may be manifested as limitations in giving thoughtful consideration to, or in carrying out, their personal goals.

WRITING THE LITERATURE REVIEW

Once the researcher's systematic review of the literature has documented a researchable problem, it is important to think about how the literature review will be structured. Will the focus be on: • comparing and contrasting varying theoretical perspectives; • highlighting conflicting findings on a topic; or • identifying the research progression over time? A balanced literature review will include previous research that both • supports the research question; and • counters the research question.

Research Questions in Quantitative Research- PICOT, PICO

Quantitative research questions are nondirectional; they can range from simple to complex, and studies can have more than one research question (Connelly, 2015). Research questions should not be written in a yes/no format. In other words, research questions should be written in such a way that they are not biased with regard to the terminology used or the position taken on a particular topic. Consider the following question as an example: To what extent is patient-centered teaching associated with self-reported reduction of dietary fat and salt content in individuals who have had an acute cardiac episode within the past six months? This question does not make an assumption about the association between patient-centered teaching and self-reported reduction of dietary fat and salt content; it merely asks to what extent one variable is associated with another.

abstract

Regardless of whether the research is quantitative or qualitative, all articles begin with an abstract: an accurate, well-written, concise, and specific overview of all the components within the article. The abstract helps the reader determine whether to continue reading the entire article. An abstract is only a paragraph long (five to six sentences MAX!) and covers only a few basic items. 1. What is being studied, and why is it important? 2. What is the study methodology? 3. What are the findings, and what is their significance? The abstract should be succinct, with no extra wording, but comprehensive enough that it provides adequate information about why the research was conducted (problem/purpose), on whom the research was conducted (sampling), how the research was conducted (methodology), and what happened (the findings). Immediately following the abstract should be a key word list. This identifies the key words searched when compiling the review and allows researchers to conduct their own searches on the topic using the same or similar terms.

Mixed Methods:

Research asks questions that cross quantitative and qualitative methodologies, often conducting two (qualitative/quantitative or quantitative/qualitative) studies to gain a more nuanced understanding of the topic. The combination and order of quantitative and qualitative studies are directly related to the purpose of the study and its resulting research questions. Paradigm: Pragmatism.

Quantitative Methodology:

Research involves the use of deductive reasoning in the collection and analysis of numerical data with the goal of proving, explaining, predicting, testing, describing, or comparing. Based on the purpose of the research, the researcher can use the scientific method and will control, compare, or manipulate variables. Paradigm: Positivism or Post-Positivism.

Qualitative Methodology:

Research involves the use of inductive reasoning. The researcher drives the data collection, analysis, and interpretation of comprehensive verbal, narrative, and/or visual data in order to gain insights into a particular phenomenon of interest. Paradigm: Constructivism.

Full Review

Research on participants, including any protected participant population (e.g., fetuses, pregnant women, prisoners, children, the elderly, and psychiatric patients), that involves more than minimal risk must be brought before the full IRB board for review.

QUANTITATIVE RESEARCH ARTICLES-Research Questions and Hypotheses

Research questions and hypotheses, which are stated after the research purpose statement, • drive the quantitative study; • provide clarity to the research problem statement; and • give a specific and narrow statement of the questions the current study is looking to answer. One question to keep in mind when critiquing the research questions is: does the research question contain an operational definition of each term? An operational definition assigns "meaning to a construct or a variable by specifying the activities or 'operations' necessary to measure it." While it is nearly impossible to create the perfect operational definition, the more detailed and clearly stated it is, the easier it will be for someone to recreate the study. Most operational definitions follow the specific theory behind the research and are rooted in previous literature.

Qualitative Purpose Statement

Script- The aim of this study is to (verb used: i.e., understand/explore) the (central phenomenon) of (population) in (research setting). Example- The aim of this phenomenological study is to understand the meaning of person-centered promotion of movement quality physical therapy services to patients with complicated diagnoses by inviting physical therapists to describe, through interviews, how they promote movement quality in their clinical settings. This example is adapted from Skjaerven, Kristofferson, and Gard (2010). •Discover (e.g., grounded theory) •Seek to understand (e.g., focused ethnography) •Explore a process (e.g., case study) •Describe the experiences (e.g., phenomenology) "The _____ (purpose, intent, objective) of this (add qualitative design/tradition) study will be to ______ (understand, describe, develop, discover, explore) the ________ (central focus) for _______ (participants: person, process, groups) at ______________ (site)."

Quantitative Purpose Statement

Script- The purpose of this study is to (verb used: i.e., determine/compare/test) the effect of the (Independent Variable) on the (Dependent Variable) in (study participants) at (research location). Example- The purpose of this pretest-posttest control group study is to determine the impact of a service-learning activity on the development of professional behaviors in first-year health science students at a research university on Long Island. Components: •Major intent of study (purpose, intent, objective) •Theory, model, or conceptual framework •Independent and dependent variables -As well as mediating, moderating, or control variables •Connection between independent and dependent variables -Relationship between -Comparison of •Design •Participants •Define key variables Scripts should look like: •The purpose of this (add research design) study is to test ____ theory (the theory) by relating the (Ind variable/s) to the (Dep variable/s) for (participants) at (research site). •The purpose of this (add research design) study is to test ____ theory (the theory) by comparing (group 1) and (group 2) in terms of the (Dep variable) for (participants) at (research site). •The purpose of this (add research design) study is to describe the effect of the (Ind variable/s) on the (Dep variable/s) for (participants) at (research site).

QUALITATIVE RESEARCH ARTICLES- Analysis/Findings Section

Since data can be collected through video recordings, audio recordings, drawings, creative media, and volumes of written transcripts, to name a few, the presentation of the analysis can vary greatly. Typically, the analysis/findings section includes pictures, direct quotes from participants, and/or flow charts showing the data analysis grouped by themes. Interpretation of the data also integrates the researchers' experience during the process. A major difference between a quantitative and a qualitative article is that in a qualitative research article, this section (presentation of the findings) includes an interpretation of the results. Evaluating or Critiquing the Analysis/Findings Section- Researchers should describe the methods used to protect the analysis from being influenced by the researchers' own values and outlooks. Quantitative researchers report on measures they took to increase the internal validity/external validity of the study. Qualitative researchers do not think in terms of validity or reliability; instead they focus on establishing trustworthiness.

exempt from review

The Common Rule allows research activities to be exempt from full IRB review when they are considered low risk and the involvement of human participants falls within one of the categories defined by the Department of Health and Human Services. Two examples that fall under this category are research conducted in commonly accepted educational settings and research that uses educational tests. Data used in these studies must be collected and presented in a manner in which research participants cannot be identified. While consent is always required, often in this type of research a signed consent form is not required (i.e., a statement on an anonymous survey that explains completing the survey is considered giving consent). Examples of research activities that are exempt from IRB approval include surveying educators regarding the use of a new curriculum, evaluating the use of a revised standardized test, and analyzing data from an existing database that are recorded without identifiers.

Institutional Review Board (IRB)

The Common Rule mandates that an Institutional Review Board (IRB) be formed to review research studies. The IRB is a group that has been assembled for the primary purpose of reviewing research proposals to ensure that the rights and welfare of human subjects participating in research studies are protected. The OHRP (2008) human subjects regulations (45 CFR 46) define a human subject as "a living individual about whom an investigator (whether professional or student) conducting research obtains (1) data through intervention or interaction with the individual, or (2) identifiable private information."

Nuremberg Code

The Nuremberg Code was framed by American judges sitting in judgment of the Nazi doctors accused of committing heinous medical experiments in concentration camps. This code combined Hippocratic ethics and the protection of human participants into a single document and has been called the most important document in the history of ethics in medical research.

FINER

The acronym FINER (feasible, interesting, novel, ethical, and relevant) has often been suggested in the literature as a method to judge the quality of a research question. These criteria succinctly capture the common qualities that are inherently important when crafting a well-written research question.

QUALITATIVE RESEARCH ARTICLES-Background Section

The background section contains the review of the literature. This section is similar to the introduction section in a quantitative research article in that it describes all the previous research pertaining to the topic. The most obvious difference is that this section is often significantly shorter in a qualitative article, a reflection of the types of questions qualitative research often looks to address, which are frequently exploratory. The purpose statement is often called the aim of the study and is usually open ended, as qualitative research questions evolve throughout the course of the data collection and analysis process. Evaluating or Critiquing the Background Section- From the beginning, the author(s) of the research article should be able to convince the reader that their work is important and that they have the skills to carry it out. Does the literature review lead to a rationale explaining the gap in the literature? A critique should show that this section provides a theoretical framework to ascertain the value or rationale for carrying out the study and to determine whether or not the study accomplished its goal.

National Research Act

The development of the regulatory process governing the ethical conduct of researchers began with the signing of the National Research Act into law in 1974, thereby creating the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The commission was charged with identifying the key components of ethical research involving human participants and developing guidelines to ensure that human research is conducted in accordance with those principles (OHRP, 1979).

QUANTITATIVE RESEARCH ARTICLES-Discussion/Conclusion Section

The discussion/conclusion section is the final section of the article. This section includes an interpretation of the results, limitations of the study, and suggestions for further research studies that can build upon the results of this study. While the data were laid out in the results section, they were not interpreted; this section gives the authors a chance to discuss the results of the study in relation to the research questions and hypotheses posed earlier in the introduction. Said another way, in this section the researcher gets to discuss specifically how the study generated new knowledge and added to the body of scientific literature. Evaluating or Critiquing the Discussion/Conclusion Section- The following is a list of some questions to ask while reading this section. • Is the author's conclusion consistent with the statistical analysis of the study? -Meaning, if the statistical test conducted yields correlations between variables, is the author's stated conclusion supported by the data analysis? The researcher should not misinterpret or misrepresent the conclusion that can be drawn from the data. • Can the conclusion be linked to the theoretical framework of the study, the literature review, the problem statement, and the specific research questions found in the introduction? • Are the limitations of the study clearly stated and discussed? • In addition to a discussion of how the results of the study generated new knowledge, how did the results of this study open new avenues of inquiry?

Critiquing a Research Article

The following questions should be kept in mind when reading a research article: • Is it from a peer-reviewed source? • Is the research question implicitly or explicitly stated? Implicitly stated means that all the information is included, but it does not follow the exact format of a research question. The reader of the article will need to infer what the research question was. Explicitly stated means the research questions and hypothesis are listed. • Is it logically organized and easy to follow? • Are previous studies described and well integrated? • Are procedures clear and easy to follow? • Are the data collection method and analysis fully discussed? • Are the authors' interpretation and conclusion included? • What are the strengths and weaknesses of the article? Research articles describe the components of a research study and are distinctly constructed to describe the • methodology followed; • study design utilized; • data collection and analysis methods employed; • introduction of all new knowledge discovered.

Institutional Review Board (IRB).

The primary purpose of the IRB review is to ensure that in a researcher's zeal for new knowledge, human subjects are being treated ethically.

Beneficence

The principle of beneficence requires that persons are treated in an ethical manner by (1) protecting them from harm and (2) maximizing possible benefits and minimizing possible risks of harm. It is the obligation of researchers to maximize benefits for the individual participant and/or society while minimizing the risk of harm to the individual participant. This does not mean that there are no risks to participants. It means that thoughtful consideration has been given to both the possible benefits and harms, and a decision is then made as to when it is justifiable to seek certain benefits in spite of the risks involved and when the risks involved outweigh the potential benefits. Should the risks outweigh the benefits, consideration should be given to determining whether there is another way to conduct the study in which the same knowledge could be obtained with lower risks to participants.

Justice

The principle of justice in Part B raises the question: "Who ought to receive the benefits of research and bear its burdens?" An injustice occurs when a person who is entitled to a benefit is denied that benefit without good reason, or when a burden is unduly imposed (OHRP, 1979). The selection of research participants must be fair, avoiding the selection of participants solely from a vulnerable population (i.e., the educationally or economically disadvantaged) or the assignment of participants from a certain population only to the experimental group. Research conducted in the United States in the early-to-mid-20th century included violations of the principle of justice. For example, participants in the Tuskegee Study of Untreated Syphilis were disadvantaged rural black men who were denied treatment, even though penicillin was available to treat syphilis, so that the study could be continued.

gaps in the literature

The purpose of a systematic review of the literature is to alert the researcher to gaps in the literature that can lead to new areas of research. A gap in the literature is an area identified during a systematic inquiry into a chosen topic where the researcher finds a question or problem that has not been addressed adequately or at all in previous studies. It can also refer to a research topic that has been previously studied but with a different methodology, population, or method of data collection than the proposed study. Lastly, it identifies a question or idea that can be further developed. Identifying a gap in the literature shows a deep understanding of the body of knowledge within a field of study. By locating and reviewing what has already been written about a topic, the researcher can: • relate the work to the larger body of literature that already exists; • uncover new ideas, perspectives, or approaches that were not considered before but could be utilized to strengthen a current argument; • validate the need for further studies addressing the subject.

problem statement

The purpose of the problem statement is to help the reader understand why the problem is important and to clearly articulate the gap in the literature; in other words, it explains why the proposed study is worth conducting (its scientific merit).

purpose statement

The purpose statement clearly states the intent of the research study. In other words, what is it that the researcher intends to accomplish in the study? The importance of the purpose statement cannot be overstated, as it is the springboard for all other elements of the study (i.e., research question, hypothesis).

QUANTITATIVE RESEARCH ARTICLES-Methods Section

The second section of a quantitative research article identifies how the research question was addressed. This section is most often labeled methods or methodology. It should include the • methodology; • design; • methods (sampling, data collection, and data analysis) that were used to conduct the study. This section should be explained in sufficient detail so that another researcher can use the parameters discussed and replicate (recreate) the study. Evaluating or Critiquing the Methods Section- It is important to look for a full description of the population and how it was identified and selected. Sampling methods should be listed and should clearly align with the methodology, the research design, and the research questions. This section should also include how the data were collected, explicitly naming or detailing the data collection process. Information on the tool or instrument used to collect data should be provided, including information on the tool/instrument's validity and reliability. For example, in a study measuring depression, an operational definition and the resulting measurement tools should clearly explain how incidence of depression is defined and measured. The final component of this section is a review of the data analysis procedures: the statistical tests used to analyze the data. OVERALL- • Research design ■ Does the design of the study align with the types of research questions being asked? • Sampling methods ■ Is there information on how the participants were recruited and selected? Is there information on how the number of participants needed for the study was determined? • Data collection tools/instruments utilized and information on the tools/instruments' validity and reliability as data collection methods ■ If a survey was created specifically for the study, are samples or summaries of the questions included, along with a description of the process employed to create the tool and the methods used to ensure it was valid? • Data analysis strategies ■ How was the data analyzed? If there are multiple research questions or hypotheses, were the statistical tests used for each listed? • Any other strategies used to increase reliability and validity.

QUANTITATIVE RESEARCH ARTICLES-Results Section

The third section of a quantitative research article presents the results of the statistical data analysis. The results are often presented in both narrative format in descriptive paragraphs and in numerical format within tables or charts. This section should only present the statistical results of the study's data analysis. There should be no interpretation or discussion of the meaning of the results of the study in this section of the article. Evaluating or Critiquing the Results Section-A critique of this section requires an in-depth knowledge of statistics. The presentation of the data should be detailed enough that a fellow researcher could critique the appropriateness of the statistical techniques used.

annotated bibliography

This alphabetical listing and short summary of each article • keeps a bibliographic record of all research; • includes details about each study and findings. This organizational strategy makes it easy to locate a specific article when writing the literature review for the journal article. It is important to keep the annotation short and to the point, giving only details pertaining to the research methodology and design, population, data collection, and analysis methods and findings. Annotations often include the researcher's evaluation of the article, a brief section on thoughts, and strengths and weaknesses of the article/research study. All should be written in the researcher's own words, without the use of quotes, to help facilitate a comprehensive understanding of the article
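The organizational strategy above can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example (the AnnotationEntry record, its field names, and the sample entry are assumptions, not a prescribed format); it simply mirrors the details an annotation is said to capture and keeps the listing alphabetical so a specific article is easy to locate.

```python
# Hypothetical sketch of one way to keep an annotated bibliography as
# structured records; the fields mirror the details listed above.

from dataclasses import dataclass

@dataclass
class AnnotationEntry:
    citation: str         # full bibliographic reference
    methodology: str      # quantitative, qualitative, or mixed methods
    design: str           # e.g., pretest-posttest control group, case study
    population: str       # who was studied
    data_collection: str  # how data were gathered
    analysis: str         # how data were analyzed
    findings: str         # key results, in the researcher's own words
    evaluation: str = ""  # strengths, weaknesses, brief thoughts

entries = [
    AnnotationEntry(
        citation="Author, A. (2020). Title of study. Journal, 1(1), 1-10.",
        methodology="quantitative",
        design="pretest-posttest control group",
        population="first-year health science students",
        data_collection="survey",
        analysis="paired t-test",
        findings="service-learning improved professional behaviors",
        evaluation="single site; small sample",
    ),
]

# Keep the listing alphabetical so a specific article is easy to locate
# when writing the literature review.
entries.sort(key=lambda e: e.citation)
```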

Common Rule

Building on the Belmont Report as an ethical guideline, the protection of human subjects in the United States is currently governed by the Federal Policy for the Protection of Human Subjects, also known as the Common Rule. This 1991 federal policy requires compliance across 15 different federal departments and agencies. Each department/agency was required to develop a set of policies that complied with this federal regulation.

Professional Assessment

While all sections of each type of article have been evaluated at this point, there is still one more important part of the critique: the reader's individual interpretation of the article. This is an opportunity for the reviewer to determine the scientific merit of the study and the contribution it has made to the literature. • Is this study appropriate for my needs? • What are the strengths and weaknesses of the article? • Was the population sampled appropriately to coincide with the research question? • How did the researchers validate their findings? • Was the data analysis method used the best choice? Remember that each research design and method has strengths and limitations.

research question

A research question has been defined as "... the purpose stated in the form of a question" (p. 1668). In other words, the research purpose statement and research question(s) should align with one another. Kwiatkowski and Silverman (1998) added that "the research question drives the development of the study protocol" and that "... it must be shaped and narrowed into an answerable format" (p. 1116). Agee (2009) explained that "good questions do not necessarily produce good research, but poorly conceived or constructed questions will likely create problems that affect all subsequent stages of a study."

research = generation of new knowledge

a systematic investigation, including research development, testing, and evaluation, designed to develop or contribute to generalizable knowledge

Evidence-based practice/research evidence.

amalgamation of research evidence, experience and expertise, and patient preferences in the process of clinical patient care. EBP can be incorporated into various fields such as medicine, nursing, psychology, and allied health.

vulnerable population

can be described as "the disadvantaged sub-segment of the community ..." (Shivayogi, 2013, p. 53). When a person has limitations on either their capacity or voluntariness, they are considered vulnerable. "The vulnerable individuals' freedom and capacity to protect oneself from intended or inherent risks is variably abbreviated, from decreased freewill to inability to make informed choices" (Shivayogi, 2013, p. 53). Examples of participants who lack capacity or are unable to make their own choices and decisions are children, those with intellectual disability, prisoners, students in hierarchical organizations, institutionalized individuals, the elderly, and individuals who are educationally and economically disadvantaged.

Tertiary sources:

carry the least credibility of all available sources. This is not to say that tertiary sources are not factual and true; credibility is based on how far removed the information presented is from the original research and on the expertise of the author presenting it. Tertiary sources often summarize or provide an overview of a topic but do not delve deeply into previous research. ■ One example of a tertiary source is an encyclopedia or a textbook, where the information is presented in summary written by a third party. ■ Note: Popular journals and web sources that end in .com carry the least weight with regard to credibility of tertiary sources and come in at the bottom when determining source reliability.

Respect for Persons

incorporates two basic ethical convictions: "first, that individuals should be treated as autonomous agents, and second, that persons with diminished autonomy are entitled to protection."

QUALITATIVE RESEARCH ARTICLES-Methods Section

detailed information on the research design and research methods used in the study. This section, unlike in quantitative research articles, includes information about the role of the researcher in the study and the relationship between the researcher and study participants. The reason for this is that, unlike in quantitative research, where the researcher attempts to conduct the study from an objective stance with the goal of eliminating researcher bias, qualitative researchers immerse themselves in the data collection and analysis process. The research design, along with the sampling, data collection, and data analysis methods, is also included in this section. Evaluating or Critiquing the Methods Section- While the goal of this section is not to provide enough information that another researcher could replicate the study, as in a quantitative article, there should be enough detail so another researcher can make judgments about the quality and rigor of the study. Questions to focus on include: • Has the relationship between researcher and participants been adequately considered? The researcher is often more immersed in the research process in qualitative research. It is important to critique whether the relationship between researcher and participants has been adequately considered, accounted for, and explained in this section. • Was the research design appropriate? Is there a detailed description of who the sample was?

Belmont Report

drafted in 1979 by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont Report is important to the content of this chapter as it identified three basic principles to be followed in the ethical conduct of research on humans: respect for persons, beneficence, and justice.

PICOT

encompasses the population (P), intervention (I), comparison group (C), outcome (O), and time (T). Time in the PICOT formula describes the period of time over which data collection takes place (Riva, Malik, Burnie, Endicott, & Busse, 2012). The PICOT format has been shown to be useful in studies that explore the effect of therapy Variants of the PICOT acronym include PICO and PICo. PICO includes population (P), intervention (I), comparison group (C), and outcome (O), while PICo includes population (P), phenomenon of Interest (I), and Context (Co). PICO can be used when time is not a relevant factor in the study and PICo can be used when there is no outcome or comparison being made As such, the PICo research question format is only used in qualitative research studies.

literature review

found in the introduction section of the journal article is a highly summarized, integrated, and synthesized version of the exhaustive systematic review of the literature conducted during the first stage of research. The literature review in a researcher's journal article has two important goals: ■ It must demonstrate that the research topic is important. ■ It must show that the results presented in the article are filling a gap in the literature.

Key word searches or relevant term searches

in databases are the easiest way to begin digging through the literature. Most concepts can be identified through the use of multiple terms that address the same concept. Start with broad terms that encompass all aspects of the general area or topic of interest covered in the research question, and move to more specific terms as you move more deeply into the body of literature.
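The search strategy described above can be sketched in a few lines. The example below is a minimal, hypothetical illustration (the term lists and the build_queries() helper are assumptions, not part of any cited source); it pairs broad terms with more specific terms to produce boolean search strings of the kind a researcher might enter into a bibliographic database.

```python
# A minimal sketch of combining broad and specific key terms into boolean
# search strings; the terms below echo the dietary fat/salt example used
# earlier in these notes and are purely illustrative.

from itertools import product

# Broad terms that cover the general topic of interest
broad_terms = ['"patient-centered teaching"', '"patient education"']

# More specific terms used as the search narrows
specific_terms = ['"dietary fat"', '"salt intake"', '"acute cardiac episode"']

def build_queries(broad, specific):
    """Pair every broad term with every specific term using AND."""
    return [f"{b} AND {s}" for b, s in product(broad, specific)]

if __name__ == "__main__":
    for query in build_queries(broad_terms, specific_terms):
        print(query)  # e.g., "patient-centered teaching" AND "dietary fat"
```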

Private information

includes information about behavior that occurs in a context in which an individual can reasonably expect that no observation or recording is taking place, and the individual can also reasonably expect that information that has been provided for specific purposes will not be made public (e.g., a medical record). Private information must be individually identifiable (i.e., the identity of the subject is or may readily be ascertained by the investigator or associated with the information) in order for obtaining the information to constitute research involving human subjects

Professional journals

like scholarly journals, contain work written by people working and conducting research in a specific field. Some articles may describe research, but the review process for acceptance for publication may be conducted with only an editorial review. These articles are often written to: • cover new and emerging topics in the field; • report on field-related research; and • evaluate best practice.

Basic research

is a type of research that seeks to enhance overall knowledge about the "physical, biological, psychological, or social world or to shed light on historical, cultural, or aesthetic phenomena."

Scholarly journals, also referred to as academic or peer-reviewed

publish literature written by people who are experts in their field and who have conducted extensive research on a topic. Scholarly literature is the most scrutinized of all the literature published in journals. The process of having an article published in a peer-reviewed journal is often lengthy and scrupulous. Authors submit work to an editor, who decides whether or not to put it up for review, where it is evaluated by a panel of experts in the field (peer reviewers) who assess the scientific merit of the work and often suggest possible changes. The peer reviewers can recommend that the editor • publish the article; • send the article back to the author for revisions; or • reject it outright.

DEDUCTIVE VERSUS INDUCTIVE REASONING

Deductive reasoning works from the general to the specific, testing and ruling out possibilities one by one; inductive reasoning works from specific observations to broader generalizations, developing questions and theories by learning from the data.

composition of the IRB

should be at least five members whose backgrounds are varied enough that they can completely and adequately review proposals put forth by the institution. Consideration should be given to the diversity of members with regard to race, gender, and cultural and professional backgrounds, as well as any other issues that would be relevant to the research interests of the institution. In addition, the IRB committee shall include at least one member who is primarily involved in a scientific area, one who is not primarily involved in a scientific area, and one who is not affiliated with the institution. If the IRB regulations are not followed, consequences could include, but are not limited to: suspension or termination of the research project; inability to use data or publish results; inability to receive federal grant funding; additional monitoring and oversight by the IRB and/or a third party; termination of employment; and termination of all research at the institution

scientific merit

the combination of the terms research and systematic investigation. First, the researcher must ensure that the proposed research study meets the definition of research. Second, the researcher must ensure that the proposed research study has been constructed in such a way that once the study has been concluded, new knowledge can be gleaned from the results of the study.

dissemination

the introduction or background and significance of an article written to share the results of original research is often characterized as the literature review.

lit review- Current

the researcher summarizes, integrates, and synthesizes all articles that have been published in the past 5-7 years on the research topic. One exception to this rule can be made for groundbreaking research (often referred to as seminal research) that has set the standard in a field and has been cited repeatedly in relevant articles. In addition to groundbreaking research articles, a researcher can choose to include research articles that are older than this range if they add important concepts or clarification to the body of literature. It is always good practice to back up any claims from older research with similar claims in newer research as a way to defend why the information found in the older article is relevant enough to be included.

The researcher can stop the systematic review of the literature when

they start to find repetition. Some repetitions to keep an eye out for are: • familiar arguments used in a number of articles to make a convincing case for research; • similar methodologies beginning to appear in a number of articles describing research; and • the same key people and the same studies being cited in multiple articles

Tuskegee Study of Untreated Syphilis

to document the natural disease progression of syphilis in black men. Six hundred black men from Macon County, Alabama, most of whom were living in poverty, were enrolled in the study. Of these 600 men, 399 had syphilis; 201 did not. Men who participated in the study were not told they were in an experiment; they were told they were being treated for "bad blood," which for some involved painful spinal taps. Participants were given "free medical exams and treatment," meals, and, upon death, a burial stipend was paid to their survivors. In 1945, penicillin was approved by the U.S. Public Health Service (USPHS) to treat syphilis, but the Tuskegee Study continued without the men being treated.

Willowbrook Hepatitis Study

was initiated in the mid-1950s at the Willowbrook State School, an institution for cognitively disabled children on Staten Island, New York. Hepatitis was widespread at the institution, and researchers were interested in studying the natural course of viral hepatitis as well as the efficacy of gamma globulin (Krugman, 1986). Although parents gave consent for their children to be inoculated with a mild form of hepatitis, they were not fully informed of the possible hazards involved in the study. There is evidence indicating that parents might have been under the impression that if they did not give consent, their children would not be cared for. Researchers defended their actions on the grounds that because hepatitis was widespread at the institution, the children most likely would have been infected with the disease within their first year there.

the person reading the article should be able to answer the questions below from what is written in the introduction section of a journal article.

• How the study specifically addresses the gap in the literature: ■ Will it duplicate a current study using a different research design, data collection tool, or a setting and/or population that has not been sufficiently investigated? ■ Will the study contribute to basic knowledge, improve current practice, develop a new theory, or create new policy?

