ASQ - Ch 15 Measurement: Assessment & Metrics (P 391 - 429)

Analytical tools helpful in trend analysis

- Pareto charts, check sheets, histograms, control charts, and run charts (which show data chronologically). Be careful to distinguish a trend from a cyclic pattern and from changes in variation.

To put people in control must have

- knowledge of what they are supposed to do
- knowledge of what they are doing
- means for regulating what they are doing in the event they are failing to meet goals. These means must always include the authority to regulate and the ability to regulate by varying either a) the process under the person's authority or b) the person's own conduct.

GQM benefits the organization

- Writing goals, defining questions, and showing the relationships between goals and metrics helps the organization focus on important issues.
- GQM techniques provide the organization with a flexible means to focus the measurement system on its particular needs.
- GQM aids in reducing costs and cycle time, increasing efficiency, understanding risk, and overall quality improvement.

Checklist for developing measures

- Are the measures simple to understand and easy to use?
- Do they have adequate and appropriate meaning to stakeholders?
- Is the definition clear?
- Are the data economical to gather and analyze?
- Are the data verifiable and repeatable?
- Does the measure make sense to those who must use it?
- Is the message conveyed by the measure consistent with the organization's values, mission, strategy, goals, and objectives?
- Does the measure indicate trends?
- Will use of the measurement cause the correct actions?
- Does the measure achieve the stated purpose?

If process capability is found to be unsatisfactory

- Ensure that the process is centered.
- Initiate process improvement projects to decrease variation.
- Determine whether the specifications can be changed.
- Do nothing, but realize that a percentage of output will be outside acceptable variation.

Methods to reduce the Burn-in Period of Product Reliability

- stringent supplier certification
- production and quality engineers participating in design reviews
- appropriate acceptance sampling
- application of process failure mode and effects analysis (PFMEA)
- use of statistical process control methodology
- application of design for manufacturability and assembly (DFMA)

**The life cycle of the entire product depends on the failure of the weakest (most failure-prone) element or part within the product (e.g., material used, process used, robustness of design, unexpected or improper use of the product by the buyer, or regulatory restrictions not anticipated).**

Techniques for conducting Qualitative Assessments

- written surveys or questionnaires (mailed/e-mailed) to target groups, or used as part of a business interaction (e.g., feedback card)
- one-on-one interviews (in person or by phone) or group interviews (focus groups)
- observation of actual behaviors (e.g., mystery shoppers)
- content analysis conducted by reviewing memos or other outputs of normal business processes to determine what events occur (or don't occur)

Essential Understandings of Variation

1. Everything is the result or outcome of some process.
2. Variation always exists, although it is sometimes too small to notice.
3. Variation can be controlled if its causes are known. The causes should be determined through the practical experience of workers in the process as well as by the expertise of managers.
4. Variation can result from special causes, common causes, or structural variation. Corrective action cannot be taken unless the variation has been assigned to the proper type of cause.
5. Tampering, i.e., taking action to compensate for variation within the control limits of a stable process, increases rather than decreases variation.
6. Practical tools exist to detect variation and to distinguish controlled from uncontrolled variation.

If management doesn't understand the theory of variation, it will tend to treat all anomalies as special causes and respond to actual common causes with continual tampering. Variation doesn't need to be eliminated from everything; focus on reducing variation in the areas most critical to meeting customer requirements.

Establishing Process Measures (4 steps)

1. Identifying and defining the critical factors impacting customers (timing, availability, cost to purchase, product quality, product reliability and usability, life cycle of product, safety and environmental safeguards).
2. Mapping the process across all applicable functions.
3. Identifying and defining the critical tasks and resources (including competencies required).
4. Establishing the measures that will be used to track and manage the required tasks and resources.

3 Timely Tenets of measuring a process/project

1. Never design and institute a measurement without first knowing:
   a. what data you will need
   b. who will supply the data
   c. how the data will be transformed into information
   d. how the information will be used and for what purpose
   e. how you will communicate this information to all applicable stakeholders
   f. how cost-effective taking the measurement will be
2. Always involve the people doing the work in the design of the measurement.
3. Establish a rule that the primary reason for the measurement is to improve the process, project, or product. There may be other, secondary reasons.

Rules for Tabular Data to make easier interpretation

1. Organize the data so they flow down, then across (it is generally easier to compare numbers in a column than across rows).
2. Keep the number of digits to the right of the decimal point constant so that numbers are properly aligned (this helps highlight differences in magnitude).
3. Use clear column titles and row headings, and keep groupings consistent throughout the table.
4. The length of a column before beginning a new one on the same page should encompass a logical block of data (e.g., all data for a week or a state should be in the same column if comparison of weeks or states is of interest).

Classifications of Variation due to various factors

1. People (worker) influences
2. Machinery influences
3. Environmental factors
4. Material influences
5. Measurement influences
6. Method influences

Negative impacts of metrics on managing projects

1. Performers see excessive data collection and metrics as an obsession with metrics rather than with producing a better product/service. The number of measures can become too cumbersome (overwhelming, difficult to maintain and keep accurate).
2. Performers don't understand the connection between their work and the metrics by which management assesses their performance. If they don't see the connection, they may feel unappreciated, unmotivated, and unfairly treated.
3. When performers receive poor feedback or disciplinary action because they are considered the cause of a negative event or trend in the metrics, they may perceive the metric as the cause of the matter and adopt a "don't care" attitude, fudge numbers, bad-mouth management, initiate a grievance, intentionally cause defects, or quit.
4. Performers may be asked to do task A, but recognition and reward relate to doing task B.
5. If management views the use of metrics primarily as a means of controlling subordinates, it demonstrates a misguided strategy. Metrics should be focused on surfacing areas for continual improvement.
6. People often resent paperwork when they feel greater emphasis is placed on data gathering than on performing the task; they become resentful and see no logical reason for the data collection or its utilization.

4 guidelines used as a basic test for randomness in control charts

1. Points should have no visible pattern.
2. Approximately two-thirds of the points should be close to the centerline (within +/- 1 standard deviation).
3. Some points should be close to the outer limits (the control limits in statistical process control).
4. An approximately equal number of points should fall above and below the centerline.

If all four criteria are met, the process is likely to be in control.
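The last three guidelines lend themselves to a quick programmatic screen (guideline 1, no visible pattern, still needs human judgment or run rules). A minimal Python sketch; the function name and tolerance thresholds are illustrative assumptions, not part of the source:

```python
import numpy as np

def passes_randomness_checks(points, center, sigma):
    """Rough screen of plotted control-chart points against guidelines 2-4."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    # Guideline 2: about 2/3 of points within +/- 1 sigma of the centerline.
    share_within = np.mean(np.abs(points - center) <= sigma)
    # Guideline 3: some points near the outer (3-sigma) control limits.
    some_near_limits = np.any(np.abs(points - center) >= 2 * sigma)
    # Guideline 4: roughly equal counts above and below the centerline.
    above = int(np.sum(points > center))
    balanced = abs(2 * above - n) <= 0.2 * n  # assumed tolerance
    return 0.55 <= share_within <= 0.80 and some_near_limits and balanced
```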

Barriers to successful trend analysis

1. Lack of timely access to the data required to maintain a state of control or to prevent nonconformances.
2. Lack of knowledge of how to interpret the data.
3. Lack of the knowledge or authority to respond to trends.
4. Lack of understanding of the theory of variation (see Section 6), leading to poor decisions when evaluating alternatives and projecting results.

Shewhart's 2 types of processes

1. A stable process with "inevitable chance variation"
2. An unstable process with "assignable cause variation"
Deming called these "common causes" and "special causes," respectively.

Weibull Distribution

A mathematical distribution showing the probability of failure or survival of a material as a function of stress. It can take a variety of different shapes, similar to but slightly different from the normal or exponential distributions.
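A brief sketch of that flexibility using scipy.stats.weibull_min; the shape and scale values below are arbitrary examples. Shape (beta) values below, at, and above 1 give decreasing, constant, and increasing failure rates, which correspond to the three stages of the bathtub curve described later in these notes:

```python
from scipy import stats

for beta in (0.5, 1.0, 3.0):  # infant mortality, useful life, wear-out
    dist = stats.weibull_min(c=beta, scale=1000.0)  # scale = characteristic life
    print(f"beta={beta}: P(failure by t=500) = {dist.cdf(500.0):.3f}")
```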

Advantages & Disadvantages of Sampling

Advantages: compared with 100% inspection, sampling is cheaper, involves less handling of product, and causes less fatigue. Drawbacks: sampling can be used to avoid adequate process control, is less economical than process control, and yields less information on the true state of nonconformance than does 100% inspection or statistical analysis. No sampling plan yields 100% detection of nonconformance.

Shift Patterns in Trend Analysis

An abrupt change in an important variable (e.g., new sales initiative) may cause an output measure to suddenly increase/decrease.

Validity items to consider

Face - a measure has face validity to the extent that the distinctions it provides correspond with those that would be made by most observers without the aid of the instrument.
Criterion - how well the performance of one measure can predict the performance of a particular variable to be impacted.
Construct - uses one variable to predict another, but does so using a pseudo-variable (a construct one believes to be related) as the predictor variable.
Content - how well a measure represents the full range of performance for which it might be used.

independent variable

The experimental factor that is manipulated; the variable whose effect is being studied.

Mean time between maintenance actions (MTBMA)

A measure related to the demand for maintenance personnel. Computed as the total number of system life units divided by the total number of preventive and corrective maintenance actions during a stated time period.

Quality Improvement Projects (QIPs) - Juran

QIP should first seek to control variation by eliminating sporadic problems. When a state of controlled variation is reached, the QIP should break through to higher levels of quality by eliminating chronic problems (reducing the controlled variation). Juran distinguished between sporadic and chronic problems. Juran wanted to achieve repeatable and predictable results. Until that happens, it will be almost impossible to determine whether a quality improvement effort has had any effect. Once under control, breakthroughs are possible because they are detectable.

Leading Indicators

measures that can be used to predict the outcome of future events on the basis of analysis of past and current events

Plotting failure rate over life of a product

Stages:
1. Infant mortality (burn-in period) - failure rate decreases as the next stage is reached.
2. Random failure (useful life period) - stable failure rate during normal use.
3. Wear-out - failure rate increases until the product is no longer useful as intended.

Mean Time to Failure (MTTF)

The average amount of time expected until the first failure of a piece of equipment. A basic measure of reliability for non-repairable products, computed as the total number of life units accumulated by the delivered products divided by the total number of failures within the population receiving the product, during a discrete measurement interval and under stated conditions. Don't confuse MTTF with MTBF.

Statistical Process Control (SPC)

The process of testing statistical samples of product components at each stage of the production process and plotting those results on a graph. Any variances from quality standards are recognized and can be corrected if beyond the set standards.

Errors in Sampling

Type 1 - Alpha or producer's risk: the possibility that good product will be rejected (a type 1 error). The producer's risk is often 0.05 (an acceptance probability of 0.95).
Type 2 - Beta or consumer's risk: the possibility that bad product will be accepted (a type 2 error). The consumer's risk is often 0.10 and is associated with the lot tolerance percent defective (LTPD).

normal distribution

A bell-shaped curve describing the spread of a characteristic throughout a population (approximately 2/3 falls within +/- 1 standard deviation), in which an observation is equally likely to occur above or below the average. This distribution especially applies to processes where many variables contribute to the variation. It is one of the most common distributions.

standard deviation

a computed measure of how much scores vary around the mean score

Goal-question-metric (GQM)

A method used to determine measurement of the project, process, and product in such a way that:
- the resulting metrics are tailored to the organization and its goals
- the resulting measurement data play a constructive and instructive role in the organization
- the metrics and their interpretation reflect the values and viewpoints of the different groups affected (e.g., developers, users, and operators)

cluster sampling

random sample is taken from within a selected subgroup

Design of experiments (DOE)

A statistical methodology for determining cause-and-effect relationships between process variables and outputs; it looks at relationships between many variables at a time. Replication of runs at particular levels is used to reduce the amount of experimental error, and designs include techniques such as blocking, nesting, and fractional factorials to gain the maximum useful information with a minimum of testing.

GQM defines a measurement model on 3 levels

a) Conceptual level (goal) - a goal is defined for an object, for a variety of reasons, with respect to various models of quality, from various points of view, and relative to a particular environment.
b) Operational level (question) - a set of questions is used to define models of the object of study and then focuses on that object to characterize the assessment or achievement of a specific goal.
c) Quantitative level (metric) - a set of metrics, based on the models, is associated with every question in order to answer it in a measurable way.

GQM can be used for control and improvement of a single project within an organization running several projects. Example steps: determine how to plan a good measurement program; define goals, questions, metrics, and hypotheses for related processes; collect data; analyze and interpret the results; and present the results.
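As a toy illustration of the three levels, a goal, its questions, and their metrics might be recorded as a nested structure; every entry below is invented for the example:

```python
# Hypothetical GQM record: one goal (conceptual level), the questions that
# operationalize it, and the metrics that answer each question measurably.
gqm_model = {
    "goal": "Improve on-time delivery of releases (project manager viewpoint)",
    "questions": {
        "Is the current release on schedule?": [
            "planned vs. actual milestone dates",
            "earned value (schedule variance)",
        ],
        "Where do schedule slips originate?": [
            "defects found per development phase",
            "average rework hours per defect",
        ],
    },
}

for question, metrics in gqm_model["questions"].items():
    print(question, "->", ", ".join(metrics))
```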

Reasons not to sample

a) The customer's requirement is for 100% inspection of the entire shipment.
b) The number of items/services is small enough to inspect the entire quantity produced/delivered.
c) The cost of sampling is too high relative to the advantages gained from sampling.
d) Self-inspection by trained operators is sufficient for the nature of the product produced.
e) The inspection method is built in, or the process is mistake-proofed, so that no defectives can be shipped.
f) The product is too low cost and/or noncritical (wide or no specification range) to the customer, so the producer can risk not sampling.

Data mining technologies

a) Neural networks - use a computer architecture patterned after the neurons of a human brain and their web of connections. A neural network is capable of simple learning.
b) Decision trees - use a tree-shaped structure representing sets of decisions to predict which records in an uncategorized data set will yield a stated outcome.
c) Rule induction - the finding of useful if/then rules within the data, based on statistical significance.
d) Genetic algorithms - apply optimization techniques derived from the concepts of genetic combination, mutation, and natural selection.
e) Nearest neighbor - classifies each record based on the other records that are most similar to it.
f) Other pattern recognition techniques (linear models, principal component analysis, factor analysis, linear regression)
g) Other methods (Hotelling's T-square statistic, forecasting, clustering)

Well-intended/well-designed metrics can positively impact performers when

a) They provide a way for performers to measure their own performance against a standard, objective, or customers' requirements. How effective this feedback is depends on:
- the performer being aware of the measurement
- how meaningful and understandable the metric is to the performer prior to the work being performed
- the frequency of feedback
- the presentation of feedback (simple graphics are suggested)
- whether the performer participated in the development of the metric
- whether oral discussion with the boss accompanies the visual, and whether the oral feedback is constructive and reinforces the performer's positive actions
- whether the metrics presented clearly indicate a linkage with higher-level organizational metrics (e.g., strategy/big picture)
- whether the metrics are perceived as beneficial and a basis for improvement rather than just another management effort to control
- whether the metrics support or conflict with delivered rewards and formulated strategy
b) They create a work environment in which the performer feels motivated to improve, including:
- feeling good about achieving objectives, meeting expectations, or making progress since the last feedback
- making suggestions for improving the process being measured
- making suggestions for improving the metric(s) by which the process is measured

Techniques for Measuring Projects

a) Project planning/management process measures, including:
- schedules met
- resources used
- cost vs. budget
- earned value analysis (EVA)
- risks identified and eliminated or mitigated
- project objectives met
- safety and environmental performance
- project team effectiveness
b) Project deliverables measures, including:
- targeted outcomes achieved
- additional unplanned benefits obtained
- return on investment
- customer satisfaction (internal and/or external)
**See project management details in Ch 10**

Maintainability

ability of a product to be retained in, or restored to, a state in which it can meet the service or intent for which it was designed. Focus is on reducing the duration of downtime. Maintainability would be a factor to consider during early stages of product development.

validity

The accuracy of the data - how close it is, on average, to the real value. For devices, this is called accuracy (the amount of inherent bias in the device). **Data can be reliable but not valid, valid but not reliable, or both reliable and valid.**

Poisson distribution

Also used for discrete data and resembles the binomial. It is especially applicable when there are many opportunities for occurrence of an event but a low probability (less than 0.10) on each trial.

pre-control

An alternative to statistically based control charts that involves dividing the specification range into four equal zones. The two center zones form the green (good) zone; the two areas outside the green zone but still within specification are the yellow (caution) zones; and the area outside specification is the red (defect) zone. Decisions about the process are made according to the zone in which process output falls: a red-zone reading indicates a process shift; a green-zone reading means OK; a yellow-zone reading triggers another reading, and if both readings fall in the same yellow zone it is assumed the process has changed, while readings in opposite yellow zones suggest abnormal variation in the process that must be investigated. Pre-control is often used when a process is just being started and can be especially useful when there is not yet sufficient data to develop statistically based control limits. Rules for pre-control include both sampling frequency and actions to take, with the sampling frequency varied according to how well the process appears to be working.
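A minimal sketch of the zone logic; the function name and the specification values in the example are hypothetical:

```python
def precontrol_zone(reading, lsl, usl):
    """Middle half of the specification is green, the outer quarters
    (still in spec) are yellow, and anything outside spec is red."""
    if reading < lsl or reading > usl:
        return "red"
    quarter = (usl - lsl) / 4.0
    if lsl + quarter <= reading <= usl - quarter:
        return "green"
    return "yellow"

# Example with a hypothetical 10.0 +/- 0.4 specification:
for r in (10.0, 10.35, 10.5):
    print(r, precontrol_zone(r, lsl=9.6, usl=10.4))  # green, yellow, red
```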

range

arithmetic difference between the largest and smallest number in the data - a measure of spread, dispersion, or variation

Measures of Central Tendency

average, median, mode

Control Chart theory

Control charts are based on the concept of statistical probability and are used for time-ordered data. If the process from which the data are collected is stable (in control), then future values of data from the process should follow a predictable pattern. The probability/control limits are based on the distribution for the type of data being analyzed.

Mean Time between Failures (MTBF)

A basic measure of reliability for repairable products; refers to the mean (average) number of life units during which all elements or parts of the product perform within their specification limits, during a discrete measurement interval under stated conditions.

Mean time to repair (MTTR)

A basic maintainability measure. It is calculated by dividing the sum of corrective maintenance times at any specific level of repair by the total number of failures within an item repaired at that level during a specific time interval under stated conditions.

Six Sigma methodology

can display the causal relationship between key business indicators (Y), the critical-to-quality (CTQ) process outputs (y) that directly affect the Y's, and the causal factors that affect the process outputs (x).

probability distributions

Comparisons that take into account statistical probability, using tests for significance such as the F-test to compare standard deviations and the t-test to compare means. Advantages of this approach include compensating for sample size differences (adjusting based on degrees of freedom) and basing the decision on whether observed differences are actually statistically significant.
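For instance, a two-sample t-test with scipy; the measurement values below are invented:

```python
from scipy import stats

# Hypothetical thickness measurements from two production lines.
line_a = [4.1, 3.9, 4.3, 4.0, 4.2]
line_b = [4.4, 4.6, 4.5, 4.3, 4.7]

t_stat, p_value = stats.ttest_ind(line_a, line_b)  # compares the two means
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # small p => significant difference
```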

Binomial distribution

defines the probability for discrete data of "r" occurrences in "n" trials of an event that has a probability of occurrence of "p" for each trial. Used when sample size is small compared to population size and when proportion defective is greater than 0.10

Lagging indicators

depict actual trends vs. expected performance levels (e.g., warranty claims). A well balanced system of reports makes use of leading indicators as well as indicators that lag behind operations

Hypergeometric Distribution

distribution of a variable that has two outcomes when sampling is done without replacement; a discrete distribution defining the probability of "r" occurrences in "n" trials of an event when there are a total of "d" occurrences in a population of "N". It is most applicable when the sample size is a large proportion of the population
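A short sketch contrasting the three discrete distributions with scipy.stats; the lot and sample figures are arbitrary examples:

```python
from scipy import stats

# P(exactly 2 defectives) in a sample of n=20 at fraction defective p=0.05:
n, p = 20, 0.05
print(stats.binom.pmf(2, n, p))            # binomial
print(stats.poisson.pmf(2, mu=n * p))      # Poisson approximation (p < 0.10)
# Sampling without replacement from a lot of N=100 containing 5 defectives:
print(stats.hypergeom.pmf(2, 100, 5, 20))  # hypergeometric (exact for finite lots)
```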

Advanced Statistical methods purposes

Examples: regression, analysis of variance (ANOVA), and response surface methods. Purposes: a) screening, or trying to find which independent variables have the greatest impact on results; b) optimization, or finding the level and combination of each independent variable that will provide the best overall performance.

Types of Control Charts

Each applies to either variables data (continuous/measured) or attributes data (discrete/counted).
a) Variables-type control charts:
- charts for averages and ranges (x-bar and R chart)
- charts for averages and standard deviations (x-bar and s chart)
- charts used to detect very small changes that occur over time (CUSUM chart)
b) Attributes-type control charts:
- chart for the number of defective units where sample size is constant (np-chart)
- chart for tracking the fraction of units defective where sample size is not constant (p-chart)
- chart for the number of defects in a constant sample size (c-chart)
- chart for the number of defects where sample size is changing (u-chart)
Control limits are usually set at +/- 3 sigma using data from when the process is in control. In some applications they may be tighter (e.g., +/- 2.5 sigma) if a quicker response to an out-of-control situation is desired, but this increases the probability of reacting when the process is actually in control. The inverse is also true: control limits can be loosened to reduce the frequency of looking for a change in variation, but this increases the chance of not reacting when the process has actually changed.
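As an illustration of how variables-chart limits are computed, a sketch using the standard table constants for subgroups of size 5 (the function name is an assumption):

```python
import numpy as np

def xbar_r_limits(subgroups):
    """Control limits for x-bar and R charts from subgrouped data.
    Constants are the standard values for subgroups of size 5:
    A2 = 0.577, D3 = 0, D4 = 2.114."""
    data = np.asarray(subgroups, dtype=float)  # shape (k subgroups, 5)
    xbar_bar = data.mean(axis=1).mean()        # grand average
    r_bar = np.ptp(data, axis=1).mean()        # average subgroup range
    A2, D3, D4 = 0.577, 0.0, 2.114
    return {
        "xbar": (xbar_bar - A2 * r_bar, xbar_bar + A2 * r_bar),
        "R": (D3 * r_bar, D4 * r_bar),
    }
```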

long term trend analysis

examination and projection of company level data over an extended period of time. Used primarily for planning and managing strategic progress to achieve company wide performance goal (financial and non-financial performance also compared to competitors)

Operating Characteristic (OC) curve

For a particular sampling plan, the OC curve indicates the probability of accepting a lot based on the sample size to be taken and the fraction defective in the batch.
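A single-sampling OC curve can be tabulated directly from the binomial distribution; the plan parameters below (n = 50, c = 1) are arbitrary:

```python
from scipy import stats

def prob_accept(n, c, p):
    """P(accept) for a single-sampling plan: sample n, accept on <= c defectives."""
    return stats.binom.cdf(c, n, p)

for p in (0.01, 0.02, 0.05, 0.10):
    print(f"fraction defective {p:.2f}: P(accept) = {prob_accept(50, 1, p):.3f}")
```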

bathtub curve

From Juran's Quality Handbook (Ch 48.5), for predicting the reliability of a product over its lifetime. There is a high incidence of failures during the product's "infant mortality" phase, random failures during normal use, and an increase in wear-out failures toward the end of the product's life cycle (the overall curve resembles a bathtub).

Advantages of data mining

Data mining highlights patterns of influence on quality that heretofore would have appeared impossible to detect. It can uncover an optimum combination of factors that can be used for design of experiments (DOE). It can also be combined with a geographic information system to produce smart maps.

median

If the data are arranged in numerical order, the median is the center number. If there is an even number of data points, the median is the average of the 2 middle numbers.

Structural Variation

Variation inherent in the process; however, when plotted on a control chart it looks like a blip, even though it is predictable.

sample size

Sample size is dependent on the desired level of statistical confidence and the amount of difference between samples that one is trying to detect. For a given difference, testing at a statistical probability of 0.01 (a one percent chance the result will be labeled significant when it is not) requires a larger sample size than if a probability of 0.05 is acceptable. The cost of selecting the samples and collecting the information also enters into this decision.
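One common normal-approximation formula for this trade-off, sketched in Python (the function name and default values are assumptions):

```python
import math
from scipy import stats

def approx_sample_size(delta, sigma, alpha=0.05, power=0.90):
    """Approximate n per group to detect a mean shift of `delta`
    with a two-sided test at significance `alpha` and the given power."""
    z_alpha = stats.norm.ppf(1 - alpha / 2)
    z_beta = stats.norm.ppf(power)
    return math.ceil(((z_alpha + z_beta) * sigma / delta) ** 2)

# Detecting a 0.5-unit shift when sigma = 1: stricter alpha needs more samples.
print(approx_sample_size(0.5, 1.0, alpha=0.05))  # ~43
print(approx_sample_size(0.5, 1.0, alpha=0.01))  # ~60
```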

double sampling

Lots are inspected and, if a second sample is required, the decision to accept or reject is based on the acceptance number c for the combined sample size. If process quality is good, most lots will be accepted on the first sample, thereby reducing sampling cost. The inspector can feel that the lot is being given a second chance if a second sample must be drawn.

multiple sampling

Lots are inspected, and the decision to accept or reject is based on a maximum of 7 samples of size n. Multiple plans work the same way as double plans but use as many as seven draws before a final decision is made. This is the most discriminating type of plan, but it is also the most susceptible to inspector abuse.

linear regression

A mathematical application of the scatter diagram (Ch 13, Section 1), but where the correlation is actually a cause-and-effect relationship. The analysis provides an equation describing the mathematical relationship between 2 variables, as well as the statistical confidence intervals for the relationship and the proportion of the variation in the dependent variable that is due to the independent variable. It can also be applied to non-linear relationships by transforming the data, as well as to situations where there are multiple independent variables (called multiple regression).
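A minimal example with scipy.stats.linregress; the temperature/yield data are invented:

```python
from scipy import stats

# Hypothetical data: oven temperature (independent) vs. yield % (dependent).
temperature = [150, 160, 170, 180, 190]
yield_pct = [72.1, 74.8, 77.2, 79.9, 82.5]

fit = stats.linregress(temperature, yield_pct)
print(f"yield = {fit.slope:.3f} * temp + {fit.intercept:.2f}")
print(f"r^2 = {fit.rvalue ** 2:.3f}")  # share of variation explained
```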

reliability

measure of repeatability of the data. For instruments, it is called "precision" and reflects the amount of spread in the data if the same measurement is repeatedly made.

rational subgrouping

Rational subgrouping must be considered for control charts. A chart meant to detect a shift in the mean will be more sensitive if variability within subgroups is minimized. This means that all samples within a subgroup should consist of parts taken within a relatively short time compared to the time between subgroups. In addition, a subgroup should consist of a single process stream rather than mixing products from different streams (e.g., 2 cavities in the same mold).

Acceptance Quality Limit (AQL)

The AQL must be determined and is often spelled out in the specification or contract. AQL is associated with the point on the OC curve correlated with the producer's risk. ANSI/ASQ Z1.4 (2008) defines AQL as the worst tolerable process average.

mode

number in the data set that occurs most often

accuracy & precision

often used to discuss physical measurement devices, where accuracy is equivalent to validity (correct value) and precision is how well the device can reproduce the same results when measurements are taken repeatedly (reliability)

reliability

The probability that a product can perform its required functions for a specified period of time under stated conditions. Consistent repeatability during the useful life of the product is the objective. Important concepts:
- A product's failure conditions must be clearly stated, including its decreasing degree of performance over time.
- A time interval is specified using metrics such as hours, cycles, miles, and so on, in defining the life cycle of the product.
- Some products are for one-time use, such as the ignition device for a rocket. These devices operate only when initiated or triggered, and are measured as either worked as planned or failed.

Availability

The probability that a unit will be ready for use at a stated time or over a stated time period, based on the combined aspects of reliability and maintainability. 3 types of availability:
1. Inherent availability - a function of reliability (MTBF) and maintainability (MTTR). Formula: Ai = MTBF / (MTBF + MTTR)
2. Achieved availability - a measure of preventive and corrective maintenance. Formula: Aa = MTBMA / (MTBMA + MMT), where MMT (mean maintenance time) = the sum of preventive and corrective maintenance times divided by the sum of scheduled and unscheduled maintenance events during a stated period of time
3. Operational availability - covers both inherent and achieved availability plus logistics and administrative downtime. Formula: Ao = MTBMA / (MTBMA + MDT), where MDT is the mean maintenance downtime, which includes supply time, logistics time, administrative delays, active maintenance time, etc.
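The three formulas translate directly into code; the hour values below are hypothetical:

```python
def inherent_availability(mtbf, mttr):
    return mtbf / (mtbf + mttr)

def achieved_availability(mtbma, mmt):
    return mtbma / (mtbma + mmt)

def operational_availability(mtbma, mdt):
    return mtbma / (mtbma + mdt)

# Hypothetical values, all in hours:
print(inherent_availability(mtbf=1000, mttr=8))     # ~0.992
print(achieved_availability(mtbma=400, mmt=6))      # ~0.985
print(operational_availability(mtbma=400, mdt=20))  # ~0.952
```

Note that Ao is never larger than Aa, which is never larger than Ai, since each successive measure folds in more sources of downtime.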

Evolutionary operation (EVOP)

The process of adjusting variables in a process in small increments in search of a more optimal point on the response surface.

Response surface

The process of graphically representing the relationship between 2 or more independent variables and a dependent variable. A response surface often looks similar to a topographic map, which uses lines to indicate changes in altitude at various latitudes and longitudes.

Trend analysis

The process of looking at data over time in order to identify, interpret, and respond to patterns. A desired trend may be continuous upward movement, continuous downward movement, or a sustained level of performance.

acceptance sampling

The process of sampling a batch of material to evaluate the level of nonconformance relative to a specified quality level. It is most frequently applied in deciding whether to move material from one process to another or to accept material received from a supplier. Many sampling plans are based on the work of Dodge and Romig and on standards such as ANSI/ASQ Z1.4 and ANSI/ASQ Z1.9.

calibration

The process used to maintain the performance of a measurement system. For physical measurement devices this involves determining whether, under a predefined set of conditions, the values indicated by the instrument agree with results obtained for a standard that has been independently tested. Standards are usually traceable back to the National Institute of Standards & Technology (NIST) in the US.

gage repeatability and reproducibility (gage R&R) studies

The purpose is to determine how much variation exists in the measurement system (which includes variation in the product, the gage, and the individuals using the gage). The desire is to have a measurement system whose error does not take up a large proportion of the product tolerance.

Process capability

The range within which a process is normally able to operate, given the inherent variation due to design and selection of materials, equipment, people, and process steps. Knowing the capability of a process means knowing whether a particular specification can be held if the process is in control. Once the process is in control, the process capability index (Cp) can be calculated:

Cp = specification range / process range = (upper spec limit - lower spec limit) / (6 sigma)

Initial process capability studies are often performed as part of the process validation stage of a new product launch.
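A sketch of the Cp calculation, plus Cpk, which also accounts for how well the process is centered; the data and limits are invented:

```python
import numpy as np

def capability(data, lsl, usl):
    """Cp compares spec width to process spread (6 sigma);
    Cpk also penalizes an off-center process."""
    mu = np.mean(data)
    sigma = np.std(data, ddof=1)  # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

cp, cpk = capability([10.1, 9.9, 10.0, 10.2, 9.8], lsl=9.4, usl=10.6)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")  # equal here because data are centered
```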

measures of spread or variation (dispersion)

range, standard deviation, variance

Data Quality measurements

reliability & validity

average outgoing quality limit (AOQL)

The maximum average outgoing quality over all possible levels of incoming quality; with rectifying inspection, sampling can keep the average outgoing level of nonconformance below this predetermined level.

Single sampling

The decision to accept or reject the lot is based on a single sample of n items.

statistics

science of turning data into information.

random sample

selection is similar to pulling a number out of a hat and helps ensure that data will not be biased toward one particular portion of the population. It is expected that the sample will truly represent the range and relative distribution of characteristics of the population. In some cases, it may be desired that only a particular portion of the population be evaluated, which means that a stratified sample will be set up to create the desired distribution within the sample.

analysis of variance (ANOVA)

Similar to performing a t-test, but it can look at the differences between multiple distributions. It tests statistical significance by looking at how much the average of each factor being tested varies from the overall average of all factors combined. The ANOVA table then provides an F value from which statistical significance can be determined.
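A one-way ANOVA example with scipy.stats.f_oneway; the shift data are invented:

```python
from scipy import stats

# Hypothetical fill weights from three shifts:
shift_1 = [10.2, 10.4, 10.1, 10.3]
shift_2 = [10.5, 10.7, 10.6, 10.8]
shift_3 = [10.2, 10.3, 10.4, 10.2]

f_stat, p_value = stats.f_oneway(shift_1, shift_2, shift_3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # small p => at least one shift differs
```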

Cyclical Patterns in Trend Analysis

Some businesses/processes are affected by seasonal or other factors that cause the output to gradually increase or decrease, then gradually return to approximately the original level.

Geographic Information System (GIS)

A specialized type of software that depicts patterns and runs on personal computers. It resembles a database (it analyzes and stores records), but the GIS database contains information used to draw geometric shapes. Each shape represents a unique place on earth to which the data correspond; fields of spatial data enable the program to draw the shapes. GIS is useful for looking at markets, customer clusters, and social/economic/political data, before and after a change. It is a system for mapping and analyzing the geographical distribution of data, and it is an extremely useful tool for trend analysis, providing "before" and "after" maps of data from a target population or area of investigation.

variance

The standard deviation squared. A histogram can be used to compare distributions, since it clearly represents both central tendency and variability.

average

The sum of the individual data values divided by the number of samples.
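Several of the summary measures defined in nearby cards (average, median, mode, range, standard deviation, variance) are available in Python's standard statistics module; a small example with invented data:

```python
import statistics

data = [4, 7, 7, 9, 12]
print(statistics.mean(data))      # average: 7.8
print(statistics.median(data))    # middle value: 7
print(statistics.mode(data))      # most frequent value: 7
print(max(data) - min(data))      # range: 8
print(statistics.stdev(data))     # sample standard deviation
print(statistics.variance(data))  # variance = standard deviation squared
```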

exponential distribution

The continuous distribution describing data that are more likely to occur below the average than above it (63.2% and 36.8% respectively, compared to 50%/50% for the normal distribution). It typically describes the constant failure rate (useful life) portion of the failure bathtub curve.
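A quick check of the 63.2% figure: for an exponential distribution, the probability of failure by the mean life is 1 - e^(-1). A sketch, with an arbitrary mean-life value:

```python
import math

theta = 500.0  # hypothetical mean life (e.g., MTBF) in hours
fail_by = lambda t: 1 - math.exp(-t / theta)  # exponential CDF
print(f"{fail_by(theta):.3f}")  # 0.632: 63.2% of failures occur before the mean
```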

Calibration programs

Typically consist of the following general requirements:
- identify all equipment used for measuring product quality or controlling important process variables
- ensure that the equipment is capable of the accuracy and precision necessary for the tolerance to be measured
- give each piece of equipment a specific identification number, and establish a frequency and method for calibration
- calibrate the equipment using traceable or otherwise documented standards
- perform the calibrations under known environmental conditions
- record the calibration findings, including as-found and after-correction data
- investigate any possible problems that may have been created by an out-of-calibration device
- identify or otherwise control any devices that have not been calibrated or are otherwise not approved for use

Calibration principles can also be applied to nonphysical measurements.

Control Chart

Used to determine whether or not a process is stable (meaning its performance is predictable). By monitoring the output of a process over time, a control chart can be used to assess whether the application of process changes or other adjustments has resulted in improvements. If the process produces measurable parts, the average of a small sample of measurements, not individual measurements, should be plotted.

zero acceptance number plans

Useful in emphasizing the concept of zero defects and in product liability prevention. The concept is that if no defects are found in the sample, there are no defects in the remaining lot; conversely, if a defect is found in the sample, the whole lot is rejected as defective. Juran's Quality Handbook, 5th edition, Section 46 discusses acceptance sampling.

reproducibility

variation from person to person using the same gage

repeatability

variation in results on a single gage when the same parts are measured repeatedly by the same person

Special Causes Variation (Deming)

Variation in which one or more factors is abnormal or unexpected (also called assignable cause variation). It is observed in an unstable process and is due to special causes not inherent in the process. It can appear as a sudden or gradual shift. Workers in the process often have the detailed knowledge necessary to guide its investigation.

Common Cause Variation (Deming)

Variation that is always present or inherent in a process. It occurs when one or more of the factors fluctuate in the normal or expected manner, and the process can be improved only by changing a factor (e.g., choice of supplier). Common causes of variation occur continually and result in controlled variation; if it is excessive, the process itself must be changed. Common causes account for 80-95% of workflow variation, and reducing them is the responsibility of the managers who work on the process, since management controls the budget and the system.

best judgement sampling

when an expert's opinion is used to determine the best location and characteristic of the sample group

statistical control (stable)

When the amount of variation can be predicted with confidence. Statistical control can be assessed using control charts.

systematic sampling

where every nth item is selected (e.g., every tenth item)

