Chapter 6 Supplement: Statistical Process Control



Statistical process control (SPC) is the application of statistical techniques to ensure that processes meet standards. All processes are subject to a certain degree of variability. A process is said to be operating in statistical control when the only source of variation is common (natural) causes. The process must first be brought into statistical control by detecting and eliminating special (assignable) causes of variation. Then its performance is predictable, and its ability to meet customer expectations can be assessed. The objective of a process control system is to provide a statistical signal when assignable causes of variation are present. Such a signal can quicken appropriate action to eliminate assignable causes.

Natural Variations. Natural variations affect almost every process and are to be expected. They are the many sources of variation that occur within a process, even one that is in statistical control. Natural variations form a pattern that can be described as a distribution. As long as the distribution (output measurements) remains within specified limits, the process is said to be "in control," and natural variations are tolerated.

Assignable Variations. Assignable variation in a process can be traced to a specific reason. Factors such as machine wear, misadjusted equipment, fatigued or untrained workers, or new batches of raw material are all potential sources of assignable variation.

Natural and assignable variations distinguish two tasks for the operations manager. The first is to ensure that the process is capable of operating under control with only natural variation. The second is, of course, to identify and eliminate assignable variations so that the processes will remain under control.

Samples. Because of natural and assignable variation, statistical process control uses averages of small samples (often of four to eight items) as opposed to data on individual parts.
Individual pieces tend to be too erratic to make trends quickly visible. We plot small samples and then examine characteristics of the resulting data to see if the process is within "control limits." The purpose of control charts is to help distinguish between natural variations and variations due to assignable causes. A process can be (a) in control and capable of producing within established control limits, (b) in control but not capable of producing within established limits, or (c) out of control. We now look at ways to build control charts that help the operations manager keep a process under control.

The variables of interest here are those that have continuous dimensions. They have an infinite number of possibilities; examples are weight, speed, length, or strength. Control charts for the mean, x̄, and for the range, R, are used to monitor processes that have continuous dimensions. The x̄-chart tells us whether changes have occurred in the central tendency (the mean, in this case) of a process. These changes might be due to such factors as tool wear, a gradual increase in temperature, a different method used on the second shift, or new and stronger materials. The R-chart values indicate that a gain or loss in dispersion has occurred. Such a change may be due to worn bearings, a loose tool, an erratic flow of lubricants to a machine, or to sloppiness on the part of a machine operator. The two types of charts go hand in hand when monitoring variables because they measure the two critical parameters: central tendency and dispersion.

The theoretical foundation for x̄-charts is the central limit theorem. This theorem states that regardless of the distribution of the population, the distribution of the x̄'s (each of which is a mean of a sample drawn from the population) will tend to follow a normal curve as the number of samples increases.
Fortunately, even if each sample (n) is fairly small (say, 4 or 5), the distributions of the averages will still roughly follow a normal curve. The theorem also states that: (1) the mean of the distribution of the x̄'s (called x̿) will equal the mean of the overall population (called μ); and (2) the standard deviation of the sampling distribution, σx̄, will be the population (process) standard deviation divided by the square root of the sample size, n. In other words: x̿ = μ and σx̄ = σ/√n.

In an ideal world, there is no need for control charts. Quality is uniform and so high that employees need not waste time and money sampling and monitoring variables and attributes. But because most processes have not reached perfection, managers must make three major decisions regarding control charts. First, managers must select the points in their process that need SPC. Second, managers need to decide whether variable charts (i.e., x̄ and R) or attribute charts (i.e., p and c) are appropriate. Variable charts monitor weights or dimensions. Attribute charts are more of a "yes-no" or "go-no go" gauge and tend to be less costly to implement. Table S6.3 can help you understand when to use each of these types of control charts. Third, the company must set clear and specific SPC policies for employees to follow.

A tool called a run test is available to help identify the kind of abnormalities in a process that we see in Figure S6.7. In general, a run of 5 points above or below the target or centerline may suggest that an assignable, or nonrandom, variation is present. When this occurs, even though all the points may fall inside the control limits, a flag has been raised. This means the process may not be statistically in control. A variety of run tests are described in books on the subject of quality methods.
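The central limit theorem result above (the standard deviation of sample means is σ/√n) translates directly into ±3σ control limits for an x̄-chart. A minimal Python sketch, using made-up process numbers for illustration:

```python
# Sketch (not from the text): x-bar chart control limits from the central
# limit theorem, assuming a known process mean and standard deviation.
import math

def xbar_control_limits(mu, sigma, n, z=3):
    """Return (LCL, UCL) for an x-bar chart.

    mu    -- process (population) mean
    sigma -- process standard deviation
    n     -- sample size; sigma_xbar = sigma / sqrt(n)
    z     -- number of standard deviations (3 gives 99.73% coverage)
    """
    sigma_xbar = sigma / math.sqrt(n)
    return mu - z * sigma_xbar, mu + z * sigma_xbar

# Hypothetical process: mean 16, sigma 1, samples of 9 items.
lcl, ucl = xbar_control_limits(mu=16.0, sigma=1.0, n=9)
print(lcl, ucl)  # 15.0 17.0
```

Note how a larger sample size n shrinks σ/√n and therefore tightens the limits, which is exactly the narrowing of the sampling distribution described above.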

6.3

Acceptance sampling is a form of testing that involves taking random samples of "lots," or batches, of finished products and measuring them against predetermined standards. Sampling is more economical than 100% inspection. The quality of the sample is used to judge the quality of all items in the lot. Although both attributes and variables can be inspected by acceptance sampling, attribute inspection is more commonly used, as illustrated in this section. Acceptance sampling can be applied either when materials arrive at a plant or at final inspection, but it is usually used to control incoming lots of purchased products. A lot rejected because of an unacceptable level of defects found in the sample can (1) be returned to the supplier or (2) be 100% inspected to cull out the defective items, with the cost of this screening usually billed to the supplier. However, acceptance sampling is not a substitute for adequate process controls. In fact, the current approach is to build statistical quality controls at suppliers so that acceptance sampling can be eliminated.

The operating characteristic (OC) curve describes how well an acceptance plan discriminates between good and bad lots. A curve pertains to a specific plan, that is, to a combination of n (sample size) and c (acceptance level). It is intended to show the probability that the plan will accept lots of various quality levels. With acceptance sampling, two parties are usually involved: the producer of the product and the consumer of the product. In specifying a sampling plan, each party wants to avoid costly mistakes in accepting or rejecting a lot. The producer usually has the responsibility of replacing all defective items in the rejected lot or of paying for a new lot to be shipped to the customer. The producer, therefore, wants to avoid the mistake of having a good lot rejected (producer's risk).
On the other hand, the customer or consumer wants to avoid the mistake of accepting a bad lot, because defects found in a lot that has already been accepted are usually the responsibility of the customer (consumer's risk). The OC curve shows the features of a particular sampling plan, including the risks of making a wrong decision. The steeper the curve, the better the plan distinguishes between good and bad lots.

The acceptable quality level (AQL) is the poorest level of quality that we are willing to accept. In other words, we wish to accept lots that have this or a better level of quality, but no worse. If an acceptable quality level is 20 defects in a lot of 1,000 items or parts, then the AQL is 20/1,000 = 2% defective. The lot tolerance percentage defective (LTPD) is the quality level of a lot that we consider bad. We wish to reject lots that have this or a poorer level of quality. If it is agreed that an unacceptable quality level is 70 defects in a lot of 1,000, then the LTPD is 70/1,000 = 7% defective. To derive a sampling plan, producer and consumer must not only define "good lots" and "bad lots" through the AQL and LTPD, but they must also specify risk levels. Producer's risk (α) is the probability that a "good" lot will be rejected. This is the risk that a random sample might result in a much higher proportion of defects than the population of all items. A lot with an acceptable quality level of AQL still has an α chance of being rejected. Sampling plans are often designed to have the producer's risk set at α = .05, or 5%. Consumer's risk (β) is the probability that a "bad" lot will be accepted. This is the risk that a random sample may result in a lower proportion of defects than the overall population of items. A common value for consumer's risk in sampling plans is β = .10, or 10%. The probability of rejecting a good lot is called a type I error. The probability of accepting a bad lot is a type II error.
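A point on an OC curve is just the probability of seeing at most c defectives in a sample of n, which under the usual binomial model is a cumulative binomial sum. The sketch below evaluates the producer's and consumer's risks for a plan; the plan (n = 60, c = 2) and the 2%/7% quality levels are made-up numbers, not from the text:

```python
# Sketch (not from the text): one point on an OC curve under the binomial
# model. The plan parameters and quality levels below are illustrative.
from math import comb

def prob_accept(n, c, p):
    """P(accept) = P(at most c defectives in a sample of n), binomial model."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

n, c = 60, 2            # hypothetical plan: sample 60, accept if <= 2 defective
aql, ltpd = 0.02, 0.07  # hypothetical good-lot and bad-lot quality levels

producer_risk = 1 - prob_accept(n, c, aql)  # alpha: a good lot gets rejected
consumer_risk = prob_accept(n, c, ltpd)     # beta: a bad lot gets accepted
print(round(producer_risk, 3), round(consumer_risk, 3))
```

Sweeping p from 0 to, say, 0.15 and plotting prob_accept against p traces the full OC curve; a steeper drop between the AQL and the LTPD means the plan discriminates better.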
In most sampling plans, when a lot is rejected, the entire lot is inspected and all defective items are replaced. Use of this replacement technique improves the average outgoing quality in terms of percent defective. In fact, given (1) any sampling plan that replaces all defective items encountered and (2) the true incoming percent defective for the lot, it is possible to determine the average outgoing quality (AOQ) in percent defective. The equation for AOQ is: AOQ = (Pd)(Pa)(N - n)/N, where Pd = true percent defective of the lot, Pa = probability of accepting the lot, N = number of items in the lot, and n = number of items in the sample.

Acceptance sampling is useful for screening incoming lots. When the defective parts are replaced with good parts, acceptance sampling helps to increase the quality of the lots by reducing the outgoing percent defective.
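The AOQ equation can be sketched in a few lines. The numbers in the example are made up for illustration:

```python
# Sketch (not from the text): average outgoing quality for a rectifying
# sampling plan, AOQ = Pd * Pa * (N - n) / N.
def aoq(p_true, p_accept, lot_size, sample_size):
    """Average outgoing quality as a fraction defective.

    p_true      -- true incoming fraction defective of the lot (Pd)
    p_accept    -- probability the plan accepts a lot of that quality (Pa)
    lot_size    -- N, items per lot
    sample_size -- n, items inspected per lot
    Rejected lots are 100% inspected and defectives replaced, so only
    accepted lots pass defectives through to the customer.
    """
    return p_true * p_accept * (lot_size - sample_size) / lot_size

# Hypothetical lot: 3% defective incoming, 90% chance of acceptance.
print(round(aoq(0.03, 0.90, 1000, 50), 5))  # 0.02565
```

Because Pa falls as p_true rises, AOQ first increases and then decreases with incoming quality; its peak over all p_true values is the average outgoing quality limit (AOQL).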

6.1

Moreover, the sampling distribution, as shown in Figure S6.4(a), will have less variability than the process distribution. Because the sampling distribution is normal, we can state that: 95.45% of the time, the sample averages will fall within ±2σx̄ if the process has only natural variations; and 99.73% of the time, the sample averages will fall within ±3σx̄ if the process has only natural variations. If a point on the control chart falls outside of the ±3σx̄ control limits, then we are 99.73% sure the process has changed. Figure S6.4(b) shows that as the sample size increases, the sampling distribution becomes narrower, so the sample statistic is closer to the true value of the population for larger sample sizes. This is the theory behind control charts.

In Examples S1 and S2, we determined the upper and lower control limits for the process average. In addition to being concerned with the process average, operations managers are interested in the process dispersion, or range. Even though the process average is under control, the dispersion of the process may not be. For example, something may have worked itself loose in a piece of equipment that fills boxes of Oat Flakes. As a result, the average of the samples may remain the same, but the variation within the samples could be entirely too large. For this reason, operations managers use control charts for ranges to monitor the process variability, as well as control charts for averages, which monitor the process central tendency. The theory behind the control charts for ranges is the same as that for process average control charts. Limits are established that contain ±3 standard deviations of the distribution for the average range R̄. We can use the following equations to set the upper and lower control limits for ranges: UCL_R = D4R̄ and LCL_R = D3R̄, where D3 and D4 are tabled control-chart factors that depend on the sample size.

The normal distribution is defined by two parameters, the mean and the standard deviation. The x̄ (mean) chart and the R-chart mimic these two parameters.
The x̄-chart is sensitive to shifts in the process mean, whereas the R-chart is sensitive to shifts in the process standard deviation. Consequently, by using both charts we can track changes in the process distribution. For instance, the samples and the resulting x̄-chart in Figure S6.5(a) show the shift in the process mean, but because the dispersion is constant, no change is detected by the R-chart. Conversely, the samples and the x̄-chart in Figure S6.5(b) detect no shift (because none is present), but the R-chart does detect the shift in the dispersion. Both charts are required to track the process accurately.

We use c-charts to control the number of defects per unit of output (or per insurance record, in the preceding case). Control charts for defects are helpful for monitoring processes in which a large number of potential errors can occur, but the actual number that do occur is relatively small. Defects may be errors in newspaper words, bad circuits in a microchip, blemishes on a table, or missing pickles on a fast-food hamburger.
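The chart computations described in this section can be sketched in Python. The A2, D3, and D4 values below are the standard published control-chart factors for samples of 4 and 5; the sample data in the test of intuition at the end are made up:

```python
# Sketch (not from the text): x-bar, R, and c chart limits.
# FACTORS holds the standard control-chart factors (A2, D3, D4) by sample size.
FACTORS = {4: (0.729, 0.0, 2.282), 5: (0.577, 0.0, 2.115)}

def variable_chart_limits(samples):
    """x-bar chart limits (x-double-bar +/- A2*Rbar) and R-chart limits
    (D3*Rbar, D4*Rbar) from a list of equal-size samples."""
    n = len(samples[0])
    a2, d3, d4 = FACTORS[n]
    xbars = [sum(s) / n for s in samples]
    ranges = [max(s) - min(s) for s in samples]
    xbarbar = sum(xbars) / len(samples)   # grand mean, x-double-bar
    rbar = sum(ranges) / len(samples)     # average range, R-bar
    return {"xbar": (xbarbar - a2 * rbar, xbarbar + a2 * rbar),
            "R": (d3 * rbar, d4 * rbar)}

def c_chart_limits(cbar):
    """c-chart for defect counts: cbar +/- 3*sqrt(cbar), floored at 0."""
    return max(0.0, cbar - 3 * cbar ** 0.5), cbar + 3 * cbar ** 0.5

# Hypothetical data: two samples of four measurements each.
limits = variable_chart_limits([[5, 6, 7, 8], [6, 6, 6, 6]])
print(limits["xbar"], limits["R"])
print(c_chart_limits(9.0))  # (0.0, 18.0)
```

Note that the second sample shifts the grand mean very little but has zero range, so it moves R̄ (and hence both sets of limits) noticeably, which mirrors the point above that the two charts track different parameters.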

6.2

Statistical process control means keeping a process in control: the natural variation of the process must be stable. However, a process that is in statistical control may not yield goods or services that meet their design specifications (tolerances); the variation must also be small enough to produce consistent output within specifications. The ability of a process to meet design specifications, which are set by engineering design or customer requirements, is called process capability. Even though a process may be statistically in control (stable), its output may not conform to specifications.

There are two popular measures for quantitatively determining whether a process is capable: the process capability ratio (Cp) and the process capability index (Cpk).

For a process to be capable, its values must fall within the upper and lower specifications. This typically means the process capability is within ±3 standard deviations from the process mean. Because this range of values is 6 standard deviations, a capable process tolerance, which is the difference between the upper and lower specifications, must be greater than or equal to 6 standard deviations: Cp = (Upper specification limit - Lower specification limit) / 6σ.

A capable process has a Cp of at least 1.0. If the Cp is less than 1.0, the process yields products or services that are outside their allowable tolerance. With a Cp of 1.0, 2.7 parts in 1,000 can be expected to be "out of spec." The higher the process capability ratio, the greater the likelihood the process will be within design specifications. Many firms have chosen a Cp of 1.33 (a 4-sigma standard) as a target for reducing process variability. This means that only 64 parts per million can be expected to be out of specification.

Although Cp relates to the spread (dispersion) of the process output relative to its tolerance, it does not look at how well the process average is centered on the target value.
The process capability index, Cpk, measures the difference between the desired and actual dimensions of goods or services produced. When the Cpk index for both the upper and lower specification limits equals 1.0, the process variation is centered and the process is capable of producing within ±3 standard deviations (fewer than 2,700 defects per million). A Cpk of 2.0 means the process is capable of producing fewer than 3.4 defects per million. For Cpk to exceed 1, σ must be less than 1/3 of the difference between the specification and the process mean (X̄).

Note that Cp and Cpk will be the same when the process is centered. However, if the mean of the process is not centered on the desired (specified) mean, then the smaller numerator in Equation (S6-14) is used: Cpk = min[(Upper specification limit - X̄)/3σ, (X̄ - Lower specification limit)/3σ]. This application of Cpk is shown in Solved Problem S6.4. Cpk is the standard criterion used to express process performance.
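The two capability measures can be sketched directly from their definitions. The specification limits, mean, and sigma in the example are made-up numbers for illustration:

```python
# Sketch (not from the text): process capability ratio and index.
def cp(usl, lsl, sigma):
    """Process capability ratio: (USL - LSL) / (6 * sigma)."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mean, sigma):
    """Process capability index: the smaller of the two one-sided
    capabilities, so off-center processes are penalized."""
    return min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))

# Hypothetical spec 15 to 18, sigma 0.5; centered process (mean 16.5).
print(cp(18, 15, 0.5), cpk(18, 15, 16.5, 0.5))  # 1.0 1.0
# Same spread, mean drifted to 17: Cp is unchanged but Cpk drops.
print(cpk(18, 15, 17, 0.5))
```

This illustrates the note above: for a centered process Cp and Cpk agree, while an off-center mean lowers Cpk even though the dispersion (and hence Cp) is unchanged.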

