MGT 455 - Exam 3

MTBF (Mean Time Between Failures)

A measure of the average time between failures in a repairable system; the higher the value, the more reliable the system. MTBF = number of hours operated / number of failures.
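
A minimal sketch of that calculation in Python; the operating hours and failure count below are made-up illustration values:

```python
def mtbf(hours_operated, num_failures):
    """Mean time between failures for a repairable system."""
    return hours_operated / num_failures

# Example: 10,000 operating hours with 4 recorded failures (hypothetical numbers)
print(mtbf(10_000, 4))  # 2500.0 hours between failures on average
```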

LCLr

D3Rbar

UCLr

D4Rbar

probability density function

f(t) = λe^(-λt) for t > 0

c-chart

sc = sqrt(cbar) (standard deviation); UCLc = cbar + 3·sqrt(cbar); LCLc = cbar - 3·sqrt(cbar)
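
As a hedged illustration, the c-chart limits above can be computed directly from defect counts per unit; the counts used here are hypothetical:

```python
import math

def c_chart_limits(defect_counts):
    """Center line and 3-sigma limits for a c-chart (defects per unit)."""
    cbar = sum(defect_counts) / len(defect_counts)
    sigma_c = math.sqrt(cbar)
    ucl = cbar + 3 * sigma_c
    lcl = max(0.0, cbar - 3 * sigma_c)  # LCL cannot be negative
    return cbar, lcl, ucl

# Hypothetical defect counts from 10 inspected units
print(c_chart_limits([3, 5, 2, 4, 6, 3, 2, 5, 4, 3]))
```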

Variation observed due to measurement system error

Some of this error is systematic (bias); the rest is random.

standard deviation for p chart

sp = sqrt(pbar(1 - pbar)/n)

snp (standard deviation for an np-chart)

sqrt(npbar(1 - pbar))

Total observed variation

The sum of the true process variation (what we actually want to measure) plus the variation due to measurement.

Rbar

sumRi/k

UCLx for an x-chart

xbar+3Rbar/d2

LCLx for x-chart

xbar-3Rbar/d2

LCLx

xbar-A2Rbar

LCLs

B3sbar

UCLs

B4sbar
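
A sketch tying the control-limit factor cards above together; the A2/D3/D4/B3/B4 values used below are the published control chart constants for subgroup size n = 5, and the summary statistics are hypothetical:

```python
# Control limits for xbar-, R-, and s-charts using table constants.
A2, D3, D4, B3, B4 = 0.577, 0.0, 2.114, 0.0, 2.089  # constants for n = 5

def xbar_r_limits(xbarbar, rbar):
    """xbar-chart and R-chart limits from the grand mean and average range."""
    return {
        "UCLx": xbarbar + A2 * rbar, "LCLx": xbarbar - A2 * rbar,
        "UCLr": D4 * rbar,           "LCLr": D3 * rbar,
    }

def s_chart_limits(sbar):
    """s-chart limits from the average sample standard deviation."""
    return {"UCLs": B4 * sbar, "LCLs": B3 * sbar}

# Hypothetical summary statistics
print(xbar_r_limits(xbarbar=10.2, rbar=0.45))
print(s_chart_limits(sbar=0.19))
```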

Design for six sigma

Concept development, design development, design optimization, and design verification

Expected loss

EL(x) = k(σ^2 + D^2), where D = xbar - T is the deviation of the mean from the target, so D^2 = (xbar - T)^2

Probability of failure

F(T) = 1 - e^(-T/θ)

Probability of failure in the interval

F(T) = 1 - e^(-λT)

Loss function

L(x) = k(x - T)^2, where k is a constant and (x - T) is the deviation from the target
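
A minimal sketch of the Taguchi loss and expected loss calculations defined above; k, the target, and the process statistics are hypothetical values chosen for illustration:

```python
def taguchi_loss(x, target, k):
    """Loss for a single unit: L(x) = k(x - T)^2."""
    return k * (x - target) ** 2

def expected_loss(mean, std_dev, target, k):
    """Expected loss per unit: EL = k(sigma^2 + D^2), where D = xbar - T."""
    d = mean - target
    return k * (std_dev ** 2 + d ** 2)

# Hypothetical process: target 10.0 mm, k = 2 dollars per mm^2 of deviation
print(taguchi_loss(10.3, target=10.0, k=2.0))        # 0.18
print(expected_loss(10.1, 0.2, target=10.0, k=2.0))  # 0.10
```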

Probability of survival

R(T) = 1 - F(T) = e^(-λT)

Reliability function

R(T) = e^(-T/θ)
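
The exponential-model formulas above can be tied together in a short sketch; the failure rate used below is a made-up value:

```python
import math

def reliability(t, failure_rate):
    """R(T) = e^(-lambda*T): probability of surviving past time T."""
    return math.exp(-failure_rate * t)

def prob_failure(t, failure_rate):
    """F(T) = 1 - e^(-lambda*T): probability of failing by time T."""
    return 1 - reliability(t, failure_rate)

# Hypothetical component with lambda = 0.002 failures per hour (theta = 500 h)
print(reliability(100, 0.002))   # ~0.819
print(prob_failure(100, 0.002))  # ~0.181
```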

Outline the process of building the House of Quality. What departments and functions within the company should be involved in each step of the process?

There are several steps to building the house of quality. First, you establish the aim of the project. Second, you determine the customer expectations; these become the rows of the house. Third, you describe the elements of the service (the columns). Fourth, you build the roof of the house, which notes the strength of the relationships among the service elements. Fifth is the body, which records the relationships between the service elements (columns) and the customer expectations (rows). Next, you weight the service elements: the customer expectations are rated and weighted scores are computed for each column, forming part of the basement. The basement also contains the service element improvement difficulty ranking. Last is the assessment of the competition, which involves a comparison of customer satisfaction and a comparison of the strength of the service elements.

cp

Cp = (UTL - LTL) / 6σ

What does the term "in statistical control" mean? Explain the difference between capability and control.

When a process is in statistical control, there is no special cause variation; only common cause variation is present in the system or process. When there is no special cause variation, the process is under control. Capability, by contrast, refers to whether the process can actually meet the specifications: it compares the natural (common cause) variation of the process with the specification limits. A process can be in statistical control and still be incapable of producing output within specifications, so control and capability must be assessed separately.

questions to ask in a process capability study include:

Where is the process centered? How much variability exists in the process? Is the performance relative to specifications acceptable? What proportion of output will be expected to meet specifications? What factors contribute to variability?

UCLx

Xbar+A2Rbar

Physics of failures

many failures are due to deterioration because of chemical reactions over time, which may be aggravated by temperature or humidity effects. Understanding the physical properties of materials and their response to environmental effects helps to eliminate potential failures.

MTTF (mean time to failure)

The average time to failure for a nonrepairable system; for the exponential model, MTTF = 1/λ.

cpl

Cpl = (μ - LTL) / 3σ

standardization

use components with proven track records

Repeatability (equipment variation)

Variation in multiple measurements made by one individual using the same instrument. This measure indicates how precise and accurate the equipment is. It is influenced by the condition of the measurement, the level of calibration, environmental conditions (such as noise or lighting), the worker's eyesight, and the process used.

environmental testing

varying the temperature

redundancy

provides backup components. Redundant components are designed either in a standby configuration or a parallel configuration. In a standby system, the standby unit is switched in when the operating unit fails; in the parallel configuration, both units operate normally but only one is required for proper functioning.

Design Reviews

The purpose is to start discussion, raise questions, and generate new ideas and solutions that help designers anticipate problems before they happen. The preliminary design review evaluates issues such as the function of the product, conformance to customer needs, completeness of specifications, manufacturing costs, and liability issues; it typically involves higher-level management. The intermediate design review is done by personnel at lower levels after the design is well established; it studies the design in greater detail to identify potential problems and suggest corrective action. The final design review is done just before release to production; it evaluates material lists, drawings, and other detailed design information with the purpose of preventing costly changes after production setup.

Life testing

purpose of life testing, that is, running devices until they fail, is to measure the distribution of failures to better understand and eliminate their causes. For devices that have long natural lives, life testing is not practical.

xbar

sumxi/k

Briefly describe the methodology of constructing and using control charts.

The first step of creating a control chart is preparation. You choose whether you will use variable or attribute measurements, which determines what type of control chart you should use (xbar- and R-chart, xbar- and s-chart, or x-chart for variable data; p-chart, np-chart, c-chart, or u-chart for attribute data). You also choose the basis, size, and frequency of your samples. Next you collect your data by recording the sample observations. Once you have collected your observations, you calculate the relevant statistics such as averages, ranges, proportions, and standard deviations. The third step is determining your initial control limits: you compute the upper and lower control limits, then draw the center line (the average) and the control limits on your chart. The fourth step is analyzing your chart to determine whether the process is in control; you find and eliminate any out-of-control points and recompute the control limits. Lastly, you continue to use your control chart over time, collecting data and plotting it on the charts. Any time you notice an out-of-control point, you stop and make whatever adjustments or corrections are needed. Attribute charts are often used for service industries, whereas variable charts are used for manufacturing industries.
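
As a hedged sketch of those steps for variables data, the snippet below computes trial xbar- and R-chart limits from hypothetical subgroups and flags out-of-control points; A2/D3/D4 are the published constants for subgroup size n = 4:

```python
# Sketch of the construction steps: collect samples, compute statistics,
# set trial limits, then check each sample against them. Data are hypothetical.
A2, D3, D4 = 0.729, 0.0, 2.282  # constants for subgroup size n = 4

samples = [[5.02, 4.98, 5.01, 4.99],
           [5.00, 5.03, 4.97, 5.01],
           [5.10, 5.08, 5.12, 5.09],   # deliberately shifted subgroup
           [4.99, 5.00, 5.02, 4.98]]

xbars = [sum(s) / len(s) for s in samples]
ranges = [max(s) - min(s) for s in samples]
xbarbar = sum(xbars) / len(xbars)
rbar = sum(ranges) / len(ranges)

ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar
ucl_r, lcl_r = D4 * rbar, D3 * rbar

for i, (m, r) in enumerate(zip(xbars, ranges), start=1):
    out = not (lcl_x <= m <= ucl_x) or not (lcl_r <= r <= ucl_r)
    print(f"sample {i}: xbar={m:.3f} R={r:.3f} -> {'OUT' if out else 'ok'}")
```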

Describe some situations in which a chart for individual measurements would be used.

A chart for individual measurements may be used for a variety of reasons. In some instances, it simply does not make sense to take a sample, and x-charts are used in these situations. One example noted in the course material is a chemical production process: sampling a single mixture will show very little variation, and the only variation likely to occur is potential measurement error. Another situation where individual charts may be used is very low-volume production. Samples there would be taken over long periods of time, during which the process itself could change, so there is little point in taking more than one measurement because the results would be skewed if the process has changed. In other situations, you would not want to take a sample because you simply want to chart every single observation; an example is the wait time of patients at a doctor's office. You might also want to chart every observation in a manufacturing setting where technology can easily collect the data for you.

What is the difference between a functional failure and a reliability failure?

A functional failure happens at the start of product life due to manufacturing or material defects, whereas a reliability failure is a failure after some period of use. Reliability failures include a device not working at all, the operation of the device being unstable, or the performance of the device deteriorating. The failure must be clearly defined in order to be fixed, because the type of failure can vary so greatly.

Describe the difference between variables and Attributes data. What types of control charts are used for each?

An attribute is a performance characteristic of a product or service that is either present or absent: the item either conforms or it does not, and an error is either there or not. Attribute data are easier to collect than variables data and are typically expressed as proportions or rates. Attribute data often require a larger sample than variables data and can usually be gathered by visual inspection. Variables data are continuous measurements, such as length, weight, or time, and are summarized as averages and standard deviations. They require measurement instruments, making them harder to collect. Xbar- and R-charts, xbar- and s-charts, and x-charts are used for variables data. Attribute charts are used for defectives (p-charts and np-charts) or for defects (c-charts and u-charts).

discuss the three primary applications of control charts.

Control charts have three main applications: to establish a state of statistical control, to monitor a process and signal when it goes out of control, and to determine process capability. By constructing a control chart you can establish a state of statistical control: once the chart is drawn, you can see whether the process is out of control. Signs of an out-of-control process include points outside the control limits, an uneven number of points above and below the center line, points that follow a pattern above and below the center line, and most points lying far away from the center line. If the process is out of statistical control, you then look for the special cause of variation and determine a way to eliminate it. If you are monitoring a process with control charts, you are drawing them over time to learn the normal patterns; once you see an outlier or something no longer normal, you are alerted that there may be special cause variation, and the sooner you detect it, the easier it is to eliminate. By determining process capability with your control chart, you can see what your upper and lower control limits are and can more easily detect and eliminate special cause variation.

Describe the difference between control limits and specification limits

Control limits are computed from the process data and relate to averages of samples; specification limits are set externally by the designer or customer and relate to individual measurements. They are NOT the same.

explain the difference between nominal dimensions and tolerances. How should tolerances be realistically set?

Nominal is the ideal dimension, the target value that manufacturing is trying to meet. Tolerance is the permissible variation, recognizing that it is often difficult to meet a target consistently. Tolerances should be set realistically by taking into account the 7 Ms: manpower, materials, methods, measurements, maintenance, management, and machines. Engineers must also understand the necessary trade-offs. Narrow tolerances raise manufacturing costs but also increase the interchangeability of parts within the plant, product performance, durability, and appearance. Wide tolerances, on the other hand, increase material utilization, machine throughput, and labor productivity, but have a negative impact on product characteristics.

What is design failure mode and effects analysis (DFMEA)? Provide a simple example illustrating the concept.

Design failure mode and effects analysis is the process of identifying all the ways in which a failure can occur, estimating the effect and seriousness of the failure, and recommending corrective design actions. DFMEA consists of specifying failure modes, the effect of each failure on the customer, its severity, likelihood of occurrence, and detection rating, the potential causes of failure, and corrective actions or controls. DFMEA can improve product functionality and safety, reduce failure costs, and decrease manufacturing and service delivery problems. An example is NASA using this approach when evaluating new space shuttle designs.

summarize the key design practices for high quality in manufacturing and assembly.

Design for manufacturability is the process of designing a product for efficient production at the highest level of quality. Design for environment considers environmental concerns during the design of products and processes and includes practices such as designing for recyclability and disassembly. Design for disassembly promises to bring back easy and affordable product repair. Design for excellence covers design-related initiatives such as concurrent engineering, design for manufacturability, design for assembly, design for environment, and other "design for" approaches; it means constantly thinking about how one can design or manufacture products better. It focuses on doing things right, defines customer expectations and goes beyond them, optimizes desirable features or results, and minimizes overall cost without compromising quality. Target and tolerance design means designers set specific dimensional or operational targets or tolerances for manufacturing or service characteristics.

Does an np-chart provide any different info than a p-chart? When would an np-chart be used?

Np-charts plot the number of nonconforming units in each sample, as opposed to the fraction of nonconforming units plotted on a p-chart. Np-charts can only be used if the size of each sample is constant. The np-chart is often easier to use than the p-chart. As with a p-chart, you take k samples of size n. An np-chart may be used instead of a p-chart when all the workers need to know is the number of nonconforming units in each sample, which makes it faster and easier for them than calculating proportions.

explain the difference between defects and defectives.

One of the most obvious differences between defectives and defects is the type of control chart used: p-charts and np-charts are used for defectives, while c-charts and u-charts are used for defects. A defect is any nonconformance of a unit of product with the specified requirements; essentially, a defect occurs when the manufacturer does not deliver what the customer asks for. A defective is a unit of product that has one or more defects; when a unit has one or more defects, the entire product or service fails to meet the required criteria.

Briefly describe the process of constructing a p-chart. What are the key differences compared with an xbar-chart

P-charts monitor the fraction of nonconforming items. Usually 25-30 samples of the attribute are drawn. The size of each sample should be large enough to contain several nonconforming items; if the chance of a nonconforming item is small, a sample of 100 or more items is likely needed, and the sample size will typically be larger than the number of samples. The samples are taken over periods of time so that any identified special causes can be investigated. You collect k samples, each of size n, compute the fraction nonconforming in each, and set the control limits around pbar. The primary difference from an xbar-chart is that an xbar-chart is constructed from variables (measured) data and plots the sample means, whereas a p-chart is constructed from attribute data and plots the proportion of nonconforming items in each sample; p-charts also generally require much larger samples.

how can product design affect manufacturability? Explain the concept and importance of design for manufacturability.

Product design affects manufacturability for the obvious reason that if the product has a bad design, it will affect how the product is manufactured. A poor design leads to more errors and defects, and ultimately to the manufacture of a poor product; it takes more time to build and is likely to introduce special cause variation into the process. If the product has a good design, it will likely be easier to manufacture and will lead to fewer defects and errors. Design for manufacturability (DFM) is the process of designing a product for efficient production at the highest level of quality. It is important for a multitude of reasons: it minimizes the number of parts, designs for robustness, eliminates adjustments, makes assembly easy and foolproof, uses repeatable and well-understood processes, chooses parts that can survive process operations, designs for efficient and adequate testing, lays out parts for reliable process completion, and eliminates engineering changes.

Repeatability & Reproducibility Studies

Quantify and evaluate the capability of a measurement system: select m operators and n parts; calibrate the measuring instrument; have each operator measure each part, in random order, for r trials; compute key statistics to quantify repeatability and reproducibility.

estimate of standard deviaton

Rbar/d2

What is the importance of reliability and why has it become such a prominent area within the quality disciplines?

Reliability is the ability of a product to perform as expected over time. The more formal definition is the probability that a product, piece of equipment, or system performs its intended function for a stated period of time under specified operating conditions. Reliability depends on probability, time, performance, and operating conditions. Probability is a value between zero and one, and if the intended performance is not met, it is a failure. Reliability is an essential part of product and process design. High reliability is much harder to achieve when complexity increases, as it has in many modern products. In manufacturing, the increased use of automation, the complexity of machines, low profit margins, and time-based competitiveness are the reasons reliability in production is so important.

Series systems

Rs(T) = e^(-(λ1 + λ2)T); equivalently, Rs is the product of the component reliabilities.

If the redundant (parallel) components have different reliabilities

Rs = 1 - (1 - R1)(1 - R2)...

If all components of a parallel system have identical reliabilities

Rs = 1 - (1 - R)^n
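
A sketch combining the series and parallel formulas above; the component reliabilities are hypothetical:

```python
from math import prod

def series_reliability(reliabilities):
    """Series system: all components must work, so Rs = R1 * R2 * ... * Rn."""
    return prod(reliabilities)

def parallel_reliability(reliabilities):
    """Parallel (redundant) system: Rs = 1 - (1 - R1)(1 - R2)...(1 - Rn)."""
    return 1 - prod(1 - r for r in reliabilities)

# Hypothetical components
print(series_reliability([0.95, 0.98, 0.99]))  # ~0.9217
print(parallel_reliability([0.90, 0.90]))      # 0.99
```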

Why is the s-chart sometimes used in place of the R-chart?

S-charts are sometimes used in place of R-charts because the sample standard deviation uses every observation in the sample rather than only the largest and smallest values, making it a more sensitive indicator of process variability, particularly for larger sample sizes; modern calculators and software also make it easy to compute. S-charts compute and plot the standard deviation of each sample.

vibration and shock testing

Subjecting products to vibration and mechanical shock to verify that they can withstand the stresses encountered in shipping, handling, and use, and to expose weaknesses that lead to early failures.

explain the role of the taguchi loss function in process and tolerance design.

Taguchi argues that losses can be approximated by a quadratic function, so that larger deviations from the target correspond to increasingly larger losses. The loss function relates to process and tolerance design because it shows what the expected losses are going to be and where the deviations from target occur. This helps stress continuous improvement rather than acceptance of the norm simply because the product conforms to specifications.

Explain the differences and relationships between the cumulative failure rate curve and the failure rate curve.

The cumulative failure rate curve plots the cumulative percentage of failures over time; the slope of this curve at any point gives the instantaneous failure rate at that point in time. The failure rate curve is often used when looking at electronic components: the failure rate is high at the beginning of their lives, followed by a period of roughly constant failures, and ends with an increasing failure rate. The curve therefore has an early failure period, a useful life period, and a wear-out period. The early failure period is also sometimes called the infant mortality period; weak components from poor manufacturing or quality control lead to these higher failure rates, which cannot be caught through normal testing procedures. The useful life phase normally has a low, relatively constant failure rate caused by uncontrollable factors, such as sudden or unexpected stresses from the environment, which are nearly impossible to predict on an individual basis. The wear-out period begins at the end of the life of the product, and the failure rate increases. If manufacturers know the product's reliability, they can develop warranties.

What is the definition of failure rate? How is it measured?

The failure rate is the number of failures per unit time. The equation is the number of failures divided by the total unit operating hours; another form is the number of failures divided by the number of units tested times the number of hours tested. Related measures include the mean time to failure (MTTF), used for items that cannot be repaired, and the mean time between failures (MTBF), used for items that can be repaired.
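
A minimal sketch of those calculations; the test data below are made up, and for simplicity the unit-hours are not adjusted for the hours the failed units did not run:

```python
def failure_rate(num_failures, total_unit_hours):
    """Failures per unit time: lambda = failures / total unit operating hours."""
    return num_failures / total_unit_hours

# Hypothetical test: 10 units run for 500 hours each, 2 failures observed
lam = failure_rate(2, 10 * 500)
print(lam)      # 0.0004 failures per hour
print(1 / lam)  # MTTF = 2500 hours (nonrepairable items)
```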

p-chart with average sample size

UCLp = pbar + 3·sqrt(pbar(1 - pbar)/nbar); LCLp = pbar - 3·sqrt(pbar(1 - pbar)/nbar)

Control limits for p-chart

UCLp = pbar + 3sp; LCLp = pbar - 3sp (if the LCL is less than zero, use zero)
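
As a hedged sketch, the p-chart limits above can be computed from sample data; the counts below are hypothetical and a constant sample size is assumed:

```python
import math

def p_chart_limits(nonconforming_counts, sample_size):
    """Center line and 3-sigma limits for a p-chart with constant sample size."""
    pbar = sum(nonconforming_counts) / (len(nonconforming_counts) * sample_size)
    sp = math.sqrt(pbar * (1 - pbar) / sample_size)
    ucl = pbar + 3 * sp
    lcl = max(0.0, pbar - 3 * sp)  # if the LCL is negative, use zero
    return pbar, lcl, ucl

# Hypothetical data: number nonconforming in 25 samples of 100 items each
counts = [3, 2, 4, 1, 5, 2, 3, 4, 2, 1, 3, 2, 4, 3, 2, 5, 1, 2, 3, 4, 2, 3, 1, 2, 3]
print(p_chart_limits(counts, sample_size=100))
```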

cpu

Cpu = (UTL - μ) / 3σ

Explain Concept Engineering. Why is it an important tool for assuring quality in product and process design activities?

Concept engineering has five steps. (1) Understanding the customer's environment: select the team, identify fit with the business strategy, gain team consensus on the project focus, and collect the voice of the customer. (2) Converting understanding into requirements: analyze the customer transcripts to translate the voice of the customer into more specific requirements. (3) Operationalizing what has been learned: determine how to measure how well a customer requirement is met by the design concept. (4) Concept generation: generate ideas for solutions that will potentially meet customers' needs. (5) Concept selection: evaluate potential ideas with respect to meeting requirements, assess trade-offs, and begin prototyping. It is an important tool for quality because quality is assured throughout every step of the process: the customer's requirements are internalized, so the design delivers the best possible quality for the customer.

Define statistical process control and discuss its advantages

Statistical process control is used to identify special causes of variation and to raise a flag when corrective action is needed. If there are special causes, the process is out of control; if there are only common causes, the process is in statistical control. Control charts are used to do this. Statistical process control is most effective during the early stages of quality efforts. With SPC, you are able to discover quality problems early and eliminate them before they cause large amounts of trouble, which saves time when producing a product: if problems are found early enough, you can eliminate the variation before it grows so large that it is very time consuming to fix. Reducing waste is another advantage: by discovering issues early, you avoid wasting product or resources producing output that would likely have to be scrapped. A further advantage is catching problems before they actually become problems, so some prevention is possible, again reducing waste and time. SPC is important in today's competitive world, as many customers require their suppliers to prove they use SPC. The system lets workers know when to take action and when to leave the process alone.

Precision

closeness of repeated measurements to each other. It is the closeness of agreement between randomly selected individual measurements or results. Precision, therefore, relates to the variance of repeated measurements.

Tools for design verification

design reviews, reliability testing, measurement system evaluation, process capability evaluation

Accuracy

difference between the true value and the observed average of a measurement. It is the closeness of agreement between an observed value and a standard. The lack of accuracy reflects a systematic bias in the measurement such as a gauge out of calibration, worn, or used improperly by the operator. Accuracy is measured as the amount of error in a measurement in proportion to the total size of the measurement.

Hazard function (instantaneous failure rate)

h(t) = f(t)/[1 - F(t)] = f(t)/R(t)
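
For the exponential model used in this set, f(t) = λe^(-λt) and R(t) = e^(-λt), so the hazard works out to a constant λ; a quick numeric check with an arbitrarily chosen λ:

```python
import math

def hazard(t, lam):
    """h(t) = f(t) / R(t) for the exponential life distribution."""
    f = lam * math.exp(-lam * t)   # probability density
    r = math.exp(-lam * t)         # reliability (survival) function
    return f / r

# For an exponential life distribution the hazard is constant and equals lambda
print(hazard(10, 0.002), hazard(1000, 0.002))  # both 0.002
```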

Predicting system reliability

System components may be arranged in series, in parallel, or in some combination. For a series system, the component reliabilities are multiplied together; for parallel (redundant) components, Rs = 1 - (1 - R1)(1 - R2)...; mixed systems are evaluated by decomposing them into series and parallel subsystems.

Burn-in, or component stress testing

involves exposing integrated circuits to elevated temperatures in order to force latent defects to occur. Latent defects, typically in semiconductors, can cause them to fail during the first 1,000 hours of normal operation.

metrology

is the science of measurement. It is broadly defined as the collection of people, equipment, facilities, methods, and procedures used to assure the correctness or adequacy of measurements.

Reproducibility (operator variation)

It is the variation that results when the same measuring instrument is used by different individuals to measure the same parts. It indicates how robust the measuring process is to the operator and environmental conditions. Causes may include poor training or unclear calibration markings on the dial.

Reasons for conducting a capability study

manufacturing may want to determine a performance baseline for a process, to prioritize projects for quality improvement, or to provide a statistical evidence of quality for customers. Purchasing might conduct a study at a supplier plant to evaluate a new piece of equipment or to compare different suppliers. Engineering might conduct a study to determine the adequacy of R&D pilot facilities or to evaluate new processes.

cpk

min{cpl, cpu}
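
A sketch tying together the capability indices defined in this set (Cp, Cpl, Cpu, Cpk); the specification limits and process statistics are hypothetical:

```python
def capability_indices(mean, sigma, ltl, utl):
    """Process capability indices from the process mean, sigma, and spec limits."""
    cp = (utl - ltl) / (6 * sigma)
    cpl = (mean - ltl) / (3 * sigma)
    cpu = (utl - mean) / (3 * sigma)
    cpk = min(cpl, cpu)
    return {"Cp": cp, "Cpl": cpl, "Cpu": cpu, "Cpk": cpk}

# Hypothetical process: spec 9.5-10.5, mean 10.1, sigma 0.12
print(capability_indices(mean=10.1, sigma=0.12, ltl=9.5, utl=10.5))
# Cp ~1.39, but Cpk ~1.11 because the process is off-center
```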

UCLnpbar

npbar + 3·sqrt(npbar(1 - pbar))

LCLnpbar

npbar - 3·sqrt(npbar(1 - pbar))
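
A sketch of the np-chart limits above, using hypothetical counts; the sample size must be constant for an np-chart:

```python
import math

def np_chart_limits(nonconforming_counts, sample_size):
    """Center line and 3-sigma limits for an np-chart (constant sample size)."""
    npbar = sum(nonconforming_counts) / len(nonconforming_counts)
    pbar = npbar / sample_size
    sigma_np = math.sqrt(npbar * (1 - pbar))
    ucl = npbar + 3 * sigma_np
    lcl = max(0.0, npbar - 3 * sigma_np)  # use zero if the LCL is negative
    return npbar, lcl, ucl

# Hypothetical data: number nonconforming in 20 samples of 50 items each
counts = [2, 1, 3, 0, 2, 4, 1, 2, 3, 1, 0, 2, 1, 3, 2, 1, 4, 2, 1, 2]
print(np_chart_limits(counts, sample_size=50))
```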

pbar

total number nonconforming / sum of the sample sizes

Accelerated life testing

Overstressing components to reduce the time to failure and find weaknesses. This form of testing might involve running a motor faster than would be typical in normal operating conditions. However, failure rates must correlate well with actual operating conditions if accelerated life testing is to be useful.

p -chart

pbar = (p1 + p2 + ... + pk) / k

Quality Function Deployment (QFD)

A planning process to guide the design, manufacturing, and marketing of goods by integrating the voice of the customer throughout the organization. Its four phases are: product planning (customer requirements and design requirements), product design (design requirements and part/item characteristics), process planning (part/item characteristics and process operations), and process control (process operations and operations requirements).

