RS QA CH 5 Aggregating and Analyzing Performance Improvement Data


In addition to the four data management functions, AHIMA also provides 10 characteristics of data quality:

1. Data accuracy: The extent to which the data are free of identifiable errors
2. Data accessibility: The level of ease and efficiency at which data are legally obtainable, within a well-protected and controlled environment
3. Data comprehensiveness: The extent to which all required data within the entire scope are collected, documenting intended exclusions
4. Data consistency: The extent to which the healthcare data are reliable, identical, and reproducible by different users across applications
5. Data currency: The extent to which data are up-to-date; a datum value is up-to-date if it is current for a specific point in time, and it is outdated if it was current at a preceding time but incorrect at a later time
6. Data definition: The specific meaning of a healthcare-related data element
7. Data granularity: The level of detail at which the attributes and characteristics of data quality in healthcare data are defined
8. Data precision: The degree to which measures support their purpose, and the closeness of two or more measures to each other
9. Data relevancy: The extent to which healthcare-related data are useful for the purpose for which they were collected
10. Data timeliness: The availability of up-to-date data within the useful, operative, or indicated time

See image: AHIMA Data Quality Model (functions and characteristics).

The following list shows the steps in developing a control chart:

1. Determine which process and what data to measure.
2. Collect about 20 observations of the measure.
3. Calculate the mean and SD for the data set. Various spreadsheet software programs can be used to make these calculations. The mean, or average, becomes the centerline for the control chart.
4. Calculate an upper control limit (UCL) and a lower control limit (LCL). The upper control limit is usually represented by a dashed line 2 SDs above the mean, and the lower control limit is usually 2 SDs below the mean. The resulting control chart becomes the standard against which the team can compare all future data for the process.
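
A minimal Python sketch of steps 3 and 4, assuming roughly 20 observations of a continuous measure are already collected; the variable names and sample values are hypothetical.

```python
import statistics

# Hypothetical sample: about 20 observations of a process measure
observations = [12, 14, 11, 15, 13, 12, 16, 14, 13, 12,
                15, 11, 14, 13, 12, 16, 15, 13, 14, 12]

mean = statistics.mean(observations)   # centerline of the control chart
sd = statistics.stdev(observations)    # sample standard deviation

ucl = mean + 2 * sd                    # upper control limit (+2 SD)
lcl = mean - 2 * sd                    # lower control limit (-2 SD)

print(f"Centerline (mean): {mean:.2f}")
print(f"UCL: {ucl:.2f}, LCL: {lcl:.2f}")
```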

A line chart can be created by completing the following steps:

1. Select a time frame.
2. Identify the data to be tracked.
3. Use a check sheet to collect frequency data or query a database or other system to collect measurement data.
4. To create the line chart using Excel for the data displayed in figure 5.6, enter the data points into the spreadsheet. This includes one column for the dates and a second column indicating the number of charts analyzed for the corresponding date. Then, highlight the data in both columns with the cursor. Next, select the Insert tab in Excel, and then select the type of chart—in this case, a line chart—that you want the program to create. The software will display the chart on the screen. To add the axis labels, use your mouse to click in the chart; the Chart Tools tab should appear on the menu bar at the top of the screen. Using this tab, select the Layout option. Next, locate the Axis Titles option; select this icon and add the x-axis title ("date") and the y-axis title ("number of charts analyzed"). To add a chart title, use the Chart Tools to select the Layout icon and then Chart Title; use the drop-down menu options to define the location of the title. To add the data values to the chart, select the Data Labels icon under the Chart Tools Layout option, and the values should appear on the chart. The chart can then be copied and pasted into your report.
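
For teams working outside Excel, the same chart can be sketched in Python with matplotlib; the dates and counts below are hypothetical stand-ins for the figure 5.6 data.

```python
import matplotlib.pyplot as plt

# Hypothetical data: number of charts analyzed per date
dates = ["01/01", "01/02", "01/03", "01/04", "01/05"]
charts_analyzed = [42, 38, 45, 40, 47]

positions = range(len(dates))
plt.plot(positions, charts_analyzed, marker="o")
plt.xticks(positions, dates)
plt.xlabel("date")
plt.ylabel("number of charts analyzed")
plt.title("Charts Analyzed per Day")

# Label each data point with its value
for x, y in zip(positions, charts_analyzed):
    plt.annotate(str(y), (x, y), textcoords="offset points", xytext=(0, 5), ha="center")

plt.tight_layout()
plt.savefig("line_chart.png")  # or plt.show()
```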

A check sheet is a simple, easy-to-understand form used to answer the question, how often are certain events happening? It starts the process of translating opinions into facts. Constructing a check sheet involves the following steps:

1. The PI team determines who is responsible for collecting the data: a clinician, a technician, or another person.
2. The PI team agrees on which event to observe.
3. The team decides on the time period during which the data will be collected. The time period can range from hours to weeks.
4. The team determines the appropriate source from which the data will be collected. This may be health records, charge slips, and so forth.
5. The team designs a form that is clear and easy to use. The team should make sure that every column is clearly labeled and that there is enough space on the form to enter the data.
6. The team collects the data consistently and honestly. Enough time should be allowed for this data-gathering task.

Check sheets can also be used to tally survey responses. Healthcare organizations do not have to develop new data collection methods for every PI project. Organizations must determine what they are already collecting and how those data can be used in future PI measurement processes.
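
A check sheet tally can be mimicked in a few lines of Python; the event categories below (reasons for registration delay) are hypothetical.

```python
from collections import Counter

# Hypothetical observations recorded during the collection period
observations = [
    "insurance verification", "missing ID", "insurance verification",
    "system downtime", "insurance verification", "missing ID",
]

# The tally plays the role of the check sheet's frequency column
tally = Counter(observations)
for event, count in tally.most_common():
    print(f"{event}: {count}")
```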

Follow these steps to construct a Pareto chart:

1. Use a check sheet or database query to collect the required data. Figure 5.13 shows the top 10 major diagnostic categories (MDCs) by total charges. A check sheet is used to collect cases by MDC and to total the hospital charges for that MDC.
2. Arrange the data in order, from the category with the greatest frequency to the category with the lowest frequency. In the figure, the data are arranged from the MDC with the highest charges to the MDC with the lowest charges.
3. Calculate the totals for each category. Figure 5.13 lists MDC 10 first, with charges totaling approximately $2,500,500; then MDC 06, with charges totaling $2,500,000; MDC 04, with charges of $2,300,000; and so forth to MDC 18, with charges totaling approximately $300,000.
4. Compute the cumulative percentage. This is accomplished by calculating the percentage of the total for each category and then adding the percentage for the greatest frequency to the percentage for the next greatest frequency, and so on. Using the example in figure 5.13, the charges for MDC 10 ($2,500,500) represent 18.7 percent of all charges listed, while the charges for MDC 06 ($2,500,000) represent 18.7 percent of all charges listed. The cumulative percentage is obtained by adding the percentages together—MDC 10's 18.7 percent plus MDC 06's 18.7 percent equals 37.4 percent. This cumulative percentage is then calculated for all categories until 100 percent is obtained.
5. Spreadsheet software can easily create a Pareto chart or sorted histogram.
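
A minimal Python sketch of steps 2 through 4, using only four of the MDC totals named above; with the full figure 5.13 data the shares would match the 18.7 percent example.

```python
# Hypothetical subset of total charges by MDC
charges = {"MDC 10": 2_500_500, "MDC 06": 2_500_000, "MDC 04": 2_300_000, "MDC 18": 300_000}

# Step 2: sort categories from highest to lowest charges
ordered = sorted(charges.items(), key=lambda item: item[1], reverse=True)

# Steps 3-4: compute each category's share and the running cumulative percentage
total = sum(charges.values())
cumulative = 0.0
for mdc, amount in ordered:
    share = amount / total * 100
    cumulative += share
    print(f"{mdc}: ${amount:,} ({share:.1f}%), cumulative {cumulative:.1f}%")
```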

Pareto Charts

A Pareto chart is a kind of bar graph that uses data to determine priorities in problem solving. The Pareto principle states that 80 percent of costs or problems are caused by 20 percent of the patients or staff. Pareto charts are useful in the following situations:
• When analyzing data about the frequency of problems or causes in a process
• When there are many problems or causes and you want to focus on the most significant
• When analyzing broad causes by looking at their specific components
• When communicating with others about your data
Using a Pareto chart can help the PI team focus on problems and their causes and demonstrate which are most responsible for the problem.

Bar Graphs

A bar graph is used to display discrete categories, such as the gender of respondents or the type of health insurance respondents have. Such categories are shown on the horizontal, or x, axis of the graph. The vertical, or y, axis shows the number or frequency of responses. The vertical scale always begins with zero. Most spreadsheet software can be used to "draw" a bar graph from a given data set.

When an organization needs to collect a new data element, a variety of data collection tools can be used; these may be manual methods or electronic queries. The first and simplest is the check sheet.

A check sheet is used to gather data based on sample observations in order to detect patterns. When preparing to collect data, a team should consider the four W questions:
• Who will collect the data? For example, when collecting data on tobacco use by patients, determining who will collect the data is vital to data accuracy. Most often, the individual(s) collecting the tobacco use information will be on the clinical staff. A nonclinical person can be trained to look for specific documentation in the health record under clinical guidance.
• What data will be collected? The data elements might include tobacco use screening, tobacco use treatment provided or offered, tobacco use treatment provided or offered at discharge, and tobacco use assessment status after discharge.
• Where will the data be collected? Most often, data on tobacco use will be abstracted or collected from the individual patient health record. However, some data may have to be collected from other data sources.
• When will the data be collected? This question is answered by time parameters defined by the research or PI team.
Once the team answers the four W questions, it can develop a check sheet to collect the data. Check sheets make it possible to systematically collect a large volume of data. It is important to make sure that the data are unbiased, accurate, properly recorded, and representative of typical conditions for the process.

Control Charts

A control chart can be used to measure key processes over time. Using a control chart focuses attention on any variation in the process and helps the PI team determine whether that variation is normal or a result of special circumstances. Normal variation also may be called common cause variation, or the expected variance in a process, because the process will not or cannot be performed in exactly the same manner each and every time. When a special circumstance or unexpected event occurs in the process, this will result in what is called special cause variation. It is this special cause variation that the PI team needs to investigate. The specific statistical calculations used to determine the upper and lower limits of a control chart depend on the type of data collected. The calculations used for statistical process control are the data mean, the median of the range between data points, and the SD. The control chart looks like the classic bell curve turned onto its left tail and run horizontally from left to right as time advances along the x-axis. The upper and lower control limits are always +/-2 SD from the mean. As each successive month of data is added to the chart, the SDs are recalculated and may fluctuate in value. As the process is tightened up and improved over time, the SD should become smaller and smaller as variation is driven out of the process. If the SD expands, the process is becoming less controlled, taking on more variation and most likely losing quality. The latest calculated SD is always used in the current display of the chart. Data points that lie outside the upper or lower control limits may signal special cause variation that should be examined.

Histograms

A histogram is a bar graph that displays data proportionally. Histograms are used to identify problems or changes in a system or process. They are based on raw data and absolute frequencies, which determine how the graphs will be structured. A histogram keeps the data in the order of the scale against which they were obtained. The horizontal axis measures intervals of continuous data, such as time or money. The scale is grouped into intervals appropriate to the nature of the data. The vertical axis shows the absolute frequency of occurrence in each of the interval categories. Because of its visual impact, a histogram is more effective for displaying data than a check sheet of raw data, particularly when the frequencies are large. A histogram rather than a pie chart should be used for continuous data. Histograms have the following characteristics:
• Display large amounts of continuous data that are difficult to interpret in lists or other nongraphic forms
• Show the relative frequency of occurrence of the various data categories indirectly through the height of the bars
• Demonstrate the distribution of the absolute frequencies of the data in the grouped intervals
1. The first step in creating a histogram is to gather the data. The data can be collected on check sheets, gathered from department logs, or obtained by querying computer programs and systems. A histogram should be used in situations in which numerical observations can be collected; for example, cost in dollars for a surgical procedure.
2. Once the data have been gathered, the team can begin to group the data into a series of intervals or categories. A check sheet can be used to count how many times a data point appears in each interval grouping. For example, the cost of a surgical procedure on each patient might be grouped into the following intervals: $0 to $24,999; $25,000 to $49,999; $50,000 to $74,999; and $75,000 to $99,999.
3. To analyze a histogram, look for things that seem suspicious or strange. The team should review the various interpretations and write down its observations. The histogram in figure 5.5 shows data related to the number of patients and their total charges. The horizontal axis lists the revenue per stay in dollar-amount interval groupings, and the vertical axis shows the number of patients in each interval. See image (figure 5.5).
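
A minimal Python sketch of step 2, grouping hypothetical surgical-procedure costs into the intervals named above and counting how many fall into each.

```python
# Hypothetical cost (in dollars) of a surgical procedure for each patient
costs = [18_500, 32_000, 27_750, 61_200, 45_900, 12_300, 88_400, 53_100, 24_600, 39_800]

# Interval groupings from the example: $0-24,999, $25,000-49,999, and so on
bins = [(0, 24_999), (25_000, 49_999), (50_000, 74_999), (75_000, 99_999)]

counts = {f"${lo:,}-${hi:,}": 0 for lo, hi in bins}
for cost in costs:
    for lo, hi in bins:
        if lo <= cost <= hi:
            counts[f"${lo:,}-${hi:,}"] += 1
            break

for interval, count in counts.items():
    print(f"{interval}: {count} patients")
```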

Line Charts

A line chart is a simple plotted chart of data that shows the progress of a process over time. By analyzing the chart, the PI team can identify trends, shifts, or changes in a process over time. The chart tracks the time frame on the horizontal axis and the measurement (the number of occurrences or the actual measure of a parameter) on the vertical axis. The data are gathered from sources specific to the process that has been evaluated. Each set of data (measurement or number of occurrences and time frame) must be related. To analyze the chart, the team should look for peaks and valleys that indicate possible problems with the process. Periodically redoing the line charts for a process helps the team monitor changes over time. A line chart is a good way to display trends in the data.

Pie Charts

A pie chart is used to show the relationship of each part to the whole; in other words, it shows how each part contributes to the total product or process. The 360 degrees of the circle, or pie, represent the total, or 100 percent. The pie is divided into "slices" proportionate to each component's percentage of the whole. To create a pie chart, first determine the percentages for each data element of the total population, and then draw the slice accordingly. Spreadsheet programs can automatically create pie charts from a given data set.
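
A small Python sketch of the percentage and slice-angle calculation described above; the payer-mix categories and counts are hypothetical.

```python
# Hypothetical payer mix for a patient population
payer_counts = {"Medicare": 120, "Medicaid": 80, "Commercial": 150, "Self-pay": 50}

total = sum(payer_counts.values())
for payer, count in payer_counts.items():
    percent = count / total * 100
    degrees = count / total * 360   # share of the pie's 360 degrees
    print(f"{payer}: {percent:.1f}% ({degrees:.0f} degrees of the circle)")
```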

Two terms often used in data analysis are absolute frequency and relative frequency.

Absolute frequency refers to the number of times a score or value occurs in the data set. Relative frequency is the percentage of the time a characteristic appears within a data set. see image
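
A brief Python illustration of the two terms, using a hypothetical set of discharge dispositions.

```python
from collections import Counter

# Hypothetical data set: discharge disposition for each patient
dispositions = ["home", "home", "SNF", "home", "rehab", "SNF", "home"]

absolute = Counter(dispositions)          # absolute frequency: raw counts
total = len(dispositions)
for value, count in absolute.items():
    relative = count / total * 100        # relative frequency: percentage of the data set
    print(f"{value}: absolute {count}, relative {relative:.1f}%")
```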

It is best to conduct an internal data comparison with data collected over a period of time.

Collecting data over a three- to six-month time frame, for example, establishes an internal baseline for benchmark purposes. To establish the baseline, the organization averages all collected data. This internal baseline becomes the organization's benchmark to maintain or improve upon when external benchmark comparisons are not available. Organizations use internal benchmark data to compare provider performance, unit-to-unit comparisons, or facility-to-facility comparisons within a corporation.
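
A minimal sketch of establishing an internal baseline, assuming six months of hypothetical monthly results for a single measure.

```python
import statistics

# Hypothetical monthly results (e.g., percent of charts analyzed on time) over six months
monthly_results = [91.2, 93.5, 90.8, 94.1, 92.6, 93.0]

# The internal baseline (benchmark) is the average of all collected data
baseline = statistics.mean(monthly_results)
print(f"Internal benchmark: {baseline:.1f}")
```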

Analysis of Data

Healthcare organizations collect a significant amount of data through their electronic systems, and these data need to be utilized in a planned and meaningful manner. Decisions regarding quality of care and patient safety must be made after good data collection and analysis have been performed. Data analysis is primarily defined as the task of transforming, summarizing, or modeling data to allow the user to make meaningful conclusions. Data analysis may be characterized as turning data into information that may be used for operational decision making. The data must meet standards related to data quality to ensure that the decisions are being made on reliable data.

Analysis of Data: Data Quality

In order for healthcare organizations to use their data in performance improvement activities, they must have quality data. Healthcare organizations should embrace data quality management functions and ensure that their data collection is supported by sound data quality characteristics. These data quality management functions include the following:
• Application: the purpose for the data collection
• Collection: the processes by which data elements are accumulated
• Warehousing: processes and systems used to archive data
• Analysis: the process of translating data into meaningful information

Management of Data Sets

Patient management systems record all aspects of patient registration for service, type of service requested, dates and times the service was rendered, rendering providers, clinic or hospital census, bed utilized, and so on. Patient financial systems manage the data associated with collecting charges, manifesting a bill, outputting that bill, receiving and recording payments from insurers or patients, and posting all those transactions as necessary to general ledger accounts. Increasingly, clinical systems are deployed to capture the information associated with the actual condition of the patient and the clinical provision of service by the provider, deriving such data as blood pressure, temperature, findings on physical examination, probable diagnosis, and the like. Many hospitals have provider order-entry systems in which the provider electronically communicates requirements for care provision to nursing and ancillary services staff, and each system maintains the data details for each of the order transactions. In addition to these principal systems, a multitude of smaller systems are utilized by individual departments or collections of departments in healthcare organizations, such as the radiology management system, the medication ordering and management systems of the pharmacy department, the myriad specialized systems used in the clinical laboratories to perform all the various tests used in patient evaluation, and the diagnosis and procedure encoding system used in health information services to encode the summary diagnoses and procedures performed for billing and research purposes.

Pivot Tables

Pivot tables are an excellent Excel tool for summarizing data according to categories (much as can be done manually with a check sheet). Pivot tables also provide flexibility for the end user or analyst to organize and filter the data in various ways before finalizing the analysis.
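
Outside Excel, the same idea can be sketched with pandas; the coding-error data set below is hypothetical.

```python
import pandas as pd

# Hypothetical coding-error records
df = pd.DataFrame({
    "patient_type": ["inpatient", "outpatient", "inpatient", "emergency", "outpatient"],
    "coder_years_of_service": [1, 5, 1, 3, 5],
    "errors": [4, 1, 6, 2, 0],
})

# Summarize total errors by patient type and coder experience, like an Excel pivot table
pivot = pd.pivot_table(
    df,
    values="errors",
    index="patient_type",
    columns="coder_years_of_service",
    aggfunc="sum",
    fill_value=0,
)
print(pivot)
```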

Predictive Analytics

Predictive analytics takes the products of statistical analysis and utilizes them in a way that provides organizations the ability to predict future events. This process identifies patterns in the data that have been previously determined to be potential risks both in outcome and cost. Through this process, organizations are able to identify certain predictive elements in the data that have been shown in the past to be key indicators of specific diseases, risk of illness, or complications.
Healthcare example: Numerous intuitive patient management systems (for example, clinical decision support systems and reimbursement systems) use predictive analytics technology to support patient care and decision-making and to reduce preventable readmissions. This technology analyzes large amounts of data from disparate, structured, and unstructured data sources to identify trends that will alter patient care, treatment, and medical decision-making in real-time situations. The application of predictive analytics is relatively new to healthcare, and not all of the potential benefits have been realized at this time.

Statistical Analysis: Skewing

Skewing means that there are a lot of very high or very low values in the observations that distort the calculated mean and may shift the distribution one direction or the other. Because the median is derived from position rather than calculation, it can give a truer picture of the middle of the set when the data are greatly distorted by extreme values.
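
A quick Python illustration of skewing, using a hypothetical charge data set with one extreme value.

```python
import statistics

# Hypothetical charges: one extreme value skews the distribution to the right
charges = [1_200, 1_350, 1_100, 1_250, 1_300, 25_000]

print(f"Mean:   {statistics.mean(charges):,.0f}")    # pulled upward by the outlier
print(f"Median: {statistics.median(charges):,.0f}")  # closer to the typical charge
```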

For the purposes of control chart construction, the SD can be defined by the percentage of the frequencies contained beneath various portions of the normal distribution.

The SD is a useful statistic for PI in healthcare. It helps define the interval in which 95 percent of the observations should be made. Any observation that occurs outside the interval defined by +/-2 SD from the mean can be identified as a variant. If many observations fall outside the +/-2 SD interval, the process under examination could be "out of control" and contributing negatively to the provision of healthcare services.
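
A minimal sketch of flagging variant observations that fall outside the +/-2 SD interval; the wait-time values are hypothetical.

```python
import statistics

# Hypothetical observations of a process measure (e.g., registration wait time in minutes)
waits = [12, 14, 13, 15, 11, 12, 30, 13, 14, 12, 11, 15, 13, 14, 12]

mean = statistics.mean(waits)
sd = statistics.stdev(waits)
lower, upper = mean - 2 * sd, mean + 2 * sd

# Observations outside the interval are variants worth investigating
variants = [w for w in waits if w < lower or w > upper]
print(f"Interval: {lower:.1f} to {upper:.1f}; variant observations: {variants}")
```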

Data Collection Tools

The common types of data collection tools used in healthcare organizations include incident reports, safety and infection surveillance reports, employee performance appraisals, staff competency examinations, restraint use logs, adverse drug reaction reports, surveys, diagnosis and procedure indices, case abstracts, and peer review reports. Individually, these tools do not describe the quality of care provided by the healthcare organization. So the data from these reports must be aggregated to provide useful information about the organization's performance in key areas. To aggregate data, the values of one data element over a set period of time (such as a month or quarter) are added together. These aggregated data are then compared with previous months or quarters to determine if there is a variance from the established benchmark. Sometimes, the organizational characteristic or parameter about which data are being collected occurs too frequently to measure every occurrence. In this case, those collecting the data might want to use sampling techniques. Sampling is the recording of a smaller subset of observations of the characteristic or parameter, making certain, however, that a sufficient number of observations have been made to predict the overall configuration of the data. Accrediting bodies may have defined sample size parameters based on the size of an organization's patient population. These guidelines should be identified and adhered to by the healthcare organization to comply with their given accreditation standards.
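
A small sketch of simple random sampling from a large set of record identifiers; the population and sample size of 50 are hypothetical, and any accreditor-defined sample size parameters would take precedence.

```python
import random

# Hypothetical population: identifiers for 1,200 encounters in the quarter
population = [f"ENC-{i:05d}" for i in range(1, 1201)]

# Draw a simple random sample of 50 records to abstract
sample = random.sample(population, k=50)
print(sample[:5], "...")
```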

QI Toolbox Techniques

The display tools most commonly used in PI activities include bar graphs, histograms, Pareto charts, pie charts, pivot tables, line charts, and control charts. Once the data have been collected, the PI team should sort the data and identify any significant findings. Graphs can be used to compare data sets from different years or over time to visually illustrate a trend in the data or a change in performance. The PI measures reported to a board of directors often highlight variances in data using graphs, tables, or charts. When constructing charts, graphs, and tables, the team must provide explanatory labels and titles. Data display should be simple and accurate. Team members must report all of the data, even when the data appear to have positive or negative implications for the organization. Sometimes what appears to be a negative trend may actually turn out to be a positive trend after the team fully analyzes the data. Spreadsheet software is helpful to the PI team when they need to create graphs and charts. For example, using Microsoft Excel, the PI team simply needs to enter their data into the software and decide which graph is appropriate for the type of data. Next, highlight the data, and select "insert" and the graph or chart type.

Types of Data

The four data categories are nominal, ordinal, discrete, and continuous.
1. Nominal data, also called categorical data, include values assigned to name-specific categories. For example, health insurance status can be subdivided into three groups, "yes," "no," or "don't know," or three categories, "1," "2," and "3." Nominal data are usually displayed in bar graphs and pie charts.
2. Ordinal data, also called ranked data, express the comparative evaluation of various characteristics or entities, and relative assignment of each, to a class according to a set of criteria. Many surveys use a Likert scale to quantify or rank statements. A Likert scale allows the respondent to state the degree to which he or she agrees or disagrees with a statement. Ordinal or ranked data, like nominal data, are also best displayed in bar graphs and pie charts.
3. Discrete (count) data are numerical values that represent whole numbers; for example, the number of children in a family or the number of unbillable patient accounts. Discrete data can be displayed in bar graphs.
4. Continuous data assume an infinite number of possible values in measurements that have decimal values as possibilities. Examples of continuous data include weight, blood pressure, and temperature. Continuous data are displayed in histograms or line charts.

Statistical Analysis: Mean

The mean and standard deviation (SD) are methods of statistical analysis that are necessary to the graphic display of data. The mean (M), also known as the arithmetic average of a distribution of numerical values, is the average value in a range of values, calculated by summing the values and dividing the total by the number of values in the range. The values may be discrete or continuous in nature. If the data are discrete (or count), the mean should be rounded to the nearest whole value; if the data are continuous, whole numbers or numbers with decimal fractions can be reported. To calculate the mean, the various observed values are first summed or added together. There may be repetitions of specific values, all of which are included. Then, the sum is divided by the number of observations made.
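
A tiny worked example in Python, using hypothetical discrete (count) values.

```python
# Hypothetical discrete observations (number of coding errors per record)
values = [2, 3, 3, 5, 2, 4]

mean = sum(values) / len(values)      # 19 / 6 = 3.1666...
print(round(mean))                    # discrete data: round to the nearest whole value -> 3
```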

Statistical Analysis: Median

The median usually is derived without calculation. The observed values are placed in ascending or descending order, and the value that is in the very middle of the set is taken as the median. An odd number of observations is necessary for there to be a single value in the middle. With an even number of values, the middle falls between the two central values; in that case, add those two values together and divide by 2: median = (value 1 + value 2) / 2. It is sometimes better to use a median value in displaying some graphic representations of data, particularly if there is a lot of variation in the observed values or if they are skewed to one side.
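
A short Python illustration of both the odd and even cases, with hypothetical values.

```python
import statistics

odd_set = [3, 7, 9, 12, 15]          # the middle value is the median
even_set = [3, 7, 9, 12]             # median is (7 + 9) / 2

print(statistics.median(odd_set))    # 9
print(statistics.median(even_set))   # 8.0
```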

Comparing an organization's performance with the performance of other organizations that provide the same types of services is known as external benchmarking.

The other organizations need not be in the same region of the country, but they should be comparable in terms of size and patient mix. The use of external benchmarks can be instructive when comparisons are made with an organization doing an outstanding job with a process similar to the process on which the PI team is focusing. Some reports allow the organization to do both internal and external comparisons.

Statistical Analysis: standard deviation (SD)

The standard deviation (SD) is a complex analysis technique used in developing control charts for the display of some PI data. The SD is most easily calculated using the statistical analysis feature of a spreadsheet application. To do so, enter and highlight the column of data and select the SD function from the menu bar's "fx" button. Alternatively, choose a cell to contain the SD, type = STDEV(), then record the range inside the parentheses. Although the SD is easily calculated using a spreadsheet application, what this statistic reveals about a data set is more difficult to understand. When a PI team begins observing a continuous measure, the observations are plotted along the x- and y-axes with very little clustering, or no discernible pattern or trend. As the number of observations increases into the hundreds or thousands, a graph similar to figure 5.10 begins to form, the typical bell-shaped curve of what is called a normal distribution. Almost all measures, when graphed, take on this bell-shaped or "normal" appearance as the number of observations increases. see image At the center or vertex of the normal distribution is the calculated mean.

When working out the details of data collection, it is important to identify systems that may be a source of data that can be beneficial to the organization's PI activities and to use the systems' reporting tools to provide often very complex data sets to the PI teams for analysis and interpretation.

The use of data sets for PI purposes must be designed very carefully. First, examine the nature of the data to be sure they accurately reflect the subject under investigation. Reliability can be significantly influenced by the original sources of the data set when the data were created, how carefully data were derived from those original sources, how carefully data were transcribed to paper- or computer-based record systems by users, and how faithfully data were interfaced from one electronic system to another. Using data without a clear picture of these aspects may lead a PI team to make inappropriate interpretations. Reporting data externally without examining whether they satisfy core measure or state data reporting requirements may lead an organization to be judged inaccurately by the Centers for Medicare and Medicaid Services (CMS) or by groups or individuals using the data published on public websites. (See page 78 in the text for an example.) Next, once the data set to be used by a PI team has been identified, the actual extraction and reporting of specific incidences must be considered carefully. The process of extraction of relevant cases from the entire data set is known as querying. If a multilevel query is involved, one that begins by extracting cases exhibiting one relevant characteristic and then creates one or two subsets against other relevant characteristics, the process of the query needs to be planned out carefully. In reality, the multilevel query is more common, so individuals involved in this work usually deal with a fair amount of complexity. An example built on the earlier discussion of sepsis would probably use a multilevel query defining a time period of interest, requesting the ICD-10-CM codes for sepsis to be extracted from that time period, and then perhaps further limiting that set by whether sepsis was present on admission or developed during hospitalization. The resulting three-level set would probably still require validation by infection control staff against infection control databases.
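
A hedged sketch of the three-level sepsis query described above, written as pandas filters; the column names and the ICD-10-CM code list are hypothetical and would need validation against the organization's actual coded data.

```python
import pandas as pd

# Hypothetical extract of coded discharges
discharges = pd.DataFrame({
    "discharge_date": pd.to_datetime(["2024-01-15", "2024-02-03", "2024-02-20", "2024-03-10"]),
    "icd10_code": ["A41.9", "J18.9", "A41.51", "A41.9"],
    "present_on_admission": ["Y", "Y", "N", "N"],
})

sepsis_codes = ["A41.9", "A41.51"]   # hypothetical subset of sepsis codes

# Level 1: limit to the time period of interest
level1 = discharges[discharges["discharge_date"].between("2024-01-01", "2024-03-31")]

# Level 2: extract cases coded with sepsis
level2 = level1[level1["icd10_code"].isin(sepsis_codes)]

# Level 3: limit to sepsis that developed during hospitalization (not present on admission)
level3 = level2[level2["present_on_admission"] == "N"]

print(level3)  # the resulting set would still need validation by infection control staff
```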

A PI team is concerned with the time it takes for patients to get through the registration process. To better understand the causes or reasons for the delay in this process, the PI team would like to gather observational data. What data collection tool would be appropriate for this team to develop for their observational data? a. Check sheet b. Ordinal data tool c. Balance sheet d. Nominal data tool

a A check sheet is used to gather data based on sample observations in order to detect patterns. When preparing to collect data, a team should consider the four W questions: Who will collect the data? What data will be collected? Where will the data be collected? When will the data be collected?

After the performance improvement (PI) team has administered its survey or collected data by abstracting them from other sources, it is ready to ___

aggregate and analyze the data.

A PI team wants to display the patient types that have the most coding errors in relationship to coder years of service. The desire of the PI team is to display how a coder's years of service is responsible for coding errors. The type of chart best suited for this is a(n): a. Bar graph b. Pareto chart c. Pie chart d. Line graph

b A Pareto chart is a kind of bar graph that uses data to determine priorities in problem solving. The Pareto principle states that 80 percent of costs or problems are caused by 20 percent of the patients or staff.

A survey tool asks a question about what year in college a respondent is in using these categories: freshman, sophomore, junior, or senior. These groups collect the data into four categories. This type of data is: a. Continuous data b. Ordinal data c. Discrete data d. Nominal data

b Ordinal data, also called ranked data, express the comparative evaluation of various characteristics or entities, and relative assignment of each, to a class according to a set of criteria. In this scenario, the class year category ranks the respondent into a group as well as in the progress in their education.

Internal and external data comparisons, also known as benchmarking

can provide additional information about why and how well the process works—or does not work—in meeting the customers' expectations.

If you want to plot data that displays patient temperature readings over their 72-hour hospitalization, what type of data display tool should be used? a. Bar graph b. Histogram c. Pie chart d. Line chart

d A line chart is a simple plotted chart of data that shows the progress of a process over time. By analyzing the chart, the PI team can identify trends, shifts, or changes in a process over time. The chart tracks the time frame on the horizontal axis and the measurement (the number of occurrences or the actual measure of a parameter) on the vertical axis.

Utilizing the four data management functions and adhering to the 10 data quality characteristics allow

healthcare organizations to better ensure that their performance improvement decisions are based on sound and trustworthy data and provide the foundation for their information governance strategies. Performance improvement activities are dependent on quality data.

Sources of data may include

surveys, health records, organization-wide incident reports, and annual employee performance evaluations and staff competency results.

