Six Sigma Practice Questions 5


"11. What is the name for the amount of completed product divided by the original amount of product? a. Scrap rate b. Throughput yield c. Yield d. Rolled throughput yield

"11. C: Yield. Yield is the amount of completed product divided by the original amount of product. This is one of the more popular critical-to-quality metrics. The ideal yield is one (or 100%). Scrap rate, meanwhile, is the percentage of materials not ultimately used in products. Throughput yield is the average percentage of completed units with no defects. Rolled throughput yield, finally, is the quality level that can be anticipated after several steps in the process have been completed.

23. Which of the following is a disadvantage of using engineering process control devices to prevent deviation? a. The devices must be monitored by human operators. b. The use of these devices precludes the use of statistical process controls. c. These devices require constant maintenance. d. These devices cannot handle multiple inputs.

"23. B: The use of these devices precludes the use of statistical process controls. One disadvantage of using engineering process controls to prevent deviation is that the use of these devices precludes the use of statistical process controls. An engineering process control is a mechanism that automatically adjusts inputs when it detects variations in the process. A thermostat is a basic example of an engineering process control. It is not necessary for these devices to be monitored by human operators, and in most cases engineering process controls do not require constant maintenance. The constant adjustments made by these devices, however, mean that any data related to their activities is not independent, and therefore cannot be analyzed with statistical process control charts. However, the engineering process controls used by heavy industry are capable of handling a number of different inputs and outputs simultaneously. "

30. During which stage of DMAIC is it most useful to calculate process velocity? a. Analyze b. Define c. Control d. Improve

"30. A: Analyze. It is most useful to calculate process velocity during the analyze stage of DMAIC. Process velocity is the rate at which a particular phase of the process adds value. Obviously, the higher the process velocity, the better. This metric is most useful during the analyze stage of DMAIC because it can be used to prioritize methods for improving cycle time. Velocity is typically calculated by dividing the number of value-added steps by the process lead time, which is the number of items in the process divided by the number of process completions per hour. Of course, as with any metric of quality, process velocity is somewhat subjective. "

31. In kaizen, the idea that one step in a process should be completed only when the subsequent steps are ready is referred to as: a. Flow b. Poka-yoke c. Pull d. Perfection

"31. C: Pull. In kaizen, the idea that one step in a process should be completed only when the subsequent steps are ready is referred to as pull. This is opposite to the typical arrangement in manufacturing processes, in which materials are pushed through the process chain as they are completed. Kaizen recommends instead that materials be drawn along by vacuums created in the production chain. A process chain in which this occurs is said to have pull. Flow, meanwhile, is the continuous completion of a process. Organizations that adopt the kaizen philosophy attempt to make flow constant in every department and stage of processes. Poka-yoke is a Japanese system for error-proofing, based on the premise that avoiding errors in the first run is worth a slightly higher cost. Perfection is the kaizen ideal of continuous improvement. Perfection is a goal that can never be attained but should be strived towards regardless.

"33. If all of the data points on an Np chart fall between the upper and lower control limits, the process is: a. Representative b. Stable "c. Efficient d. Erratic

"33. B: Stable. If all of the data points on an Np chart fall between the upper and lower control limits, the process is stable. So long as all of the variation is within these limits, it can be assumed to be the result of common causes. Assuming the chart is reliable, data points that fall outside the upper and lower control limits are the result of special-cause variation. At the least, the presence of data points outside the upper and lower control limits identifies areas where employees will need to conduct further research. Np charts are control charts for analyzing attributes data. These charts are used when the sample size is regular and the targeted condition may only occur once per sample. "

34. According to Little's law, the number of items included in a process divided by the number of process completions per hour is the: a. Process lead time b. Value-added time c. Velocity d. Process cycle efficiency

"34. A: Process lead time. According to Little's law, the number of items included in a process divided by the number of process completions per hour is the process lead time. Process cycle efficiency is calculated by dividing value-added time by process lead time. When every activity in the process adds value, the process may attain the maximum process cycle efficiency of 100%. Of course, very few processes actually reach this level of efficiency. It is much more common for a process to have a process cycle efficiency below 50%.

36. When a batch sample has upper and lower specifications, which statistic is used in the creation of a process performance index? a. Pp b. Ppk c. Ppl d. Cp

"36. A: Pp. When a batch sample has upper and lower specifications, the Pp statistic is used in the creation of a process performance index. If the batch sample has either an upper or a lower specification but not both, the Ppk statistic may be used. If the distributions are not normal, the Cp statistic is used to calculate the process performance indices. "

37. Which of the following factors is not included in the calculation of risk priority number? a. Detection level b. Severity c. Expense d. Likelihood

"37. C: Expense. Expense is not one of the factors included in the calculation of risk priority number. Risk priority number is calculated by multiplying severity, likelihood, and detection level. The severity of the risk is the significance of its occurrence. Various industries have created standardized tables for indicating the severity of common risks. The likelihood of a risk is simply the chances of it happening. Finally, the detection level is based on the number of modes for identifying the error or failure, as well as the chances that any one of these modes will be successful in detection. A common formula for calculating risk priority number is to place all of these categories on a scale from 1 to 10, then multiply them together. In this scenario, the maximum risk priority number would be 1,000."

"38. Which of the following run tests identifies shifts in the process mean? a. Run test 4 b. Run test 6 c. Run test 7 d. Run test 8

"38. B: Run test 6. Run test 6 identifies shifts in the process mean. The other run tests provide information about sampling errors. Run tests 1, 2, 3, and 5 also identify shifts in the process mean. Run tests are typically used in statistical process control programs to identify errors in data collection. Unfortunately, run tests are only able to identify the presence of errors and are not very good at pinpointing their location.

4. What are the three most important characteristics of process metrics? a. Rationality, reliability, and repeatability b. Reliability, reproducibility, and repeatability c. Reliability, responsibility, and rationality d. Repeatability, responsibility, and reproducibility

"4. B: Reliability, reproducibility, and repeatability. The three most important characteristics of process metrics are reliability, reproducibility, and repeatability. Reliability is the extent to which the results of an experiment can be trusted to represent accurately the process being measured. Reproducibility is the extent to which a metric can be applied in different situations and obtain a reliable result. Repeatability is the extent to which a metric can be applied to the same situation multiple times and achieve the same result. "

40. Which distribution should be used when the targeted characteristic may appear more than once per unit? a. Binomial b. Exponential c. Lognormal d. Poisson

"40. D: Poisson. A Poisson distribution should be used when the targeted characteristic may appear more than once per unit. In order for a Poisson distribution to be effective, the data should consist of positive whole numbers and the experimental trials should be independent. A binomial distribution is appropriate for situations in which the units in the population will only have one of two possible characteristics (for example, off or on). An exponential distribution is appropriate for measurement data, especially frequency. A lognormal distribution is appropriate for continuous data with a fixed lower boundary but no upper boundary. In most cases, the lower boundary of a lognormal distribution is zero.

42. Which of the following distributions would be appropriate for discrete data? a. Exponential b. Poisson c. Normal d. Johnson

"42. B: Poisson. A Poisson distribution would be most appropriate for discrete data. Binomial distributions may also be used for discrete data. Continuous data, on the other hand, should be handled with a normal, exponential, Johnson, or Pearson distribution. Continuous data is obtained from measurement, while discrete data is based on observation. A discrete data set, for instance, would only indicate the number of times an event occurred, but "but would not give any indication of the size or intensity of the event.

44. In hypothesis testing, why is it better to set a p value than to select a significance level? a. It ensures that a true hypothesis will not be rejected. b. It is then easier to make adjustments later in the experiment. c. It enables the collection of more samples. d. It makes it possible to reject the null hypothesis.

"44. B: It is then easier to make adjustments later in the experiment. In hypothesis testing, it's better to set a p value than to select a significance level because it is then easier to make adjustments later in the experiment. In general, a p value allows for more freedom in the later parts of the experiment. There is always a possibility of rejecting a true hypothesis, in what is known as a Type 1 error. The number of samples collected is not dependent on whether a p value is set or a significance level is selected, and either method maintains the possibility that the null hypothesis will be rejected.

46. During which stage of DMAIC is the 5S method used most often? a. Define b. Measure c. Analyze d. Improve

"46. D: Improve. The 5S method is used most often during the improve stage of DMAIC. 5S is a Japanese lean tool for reducing cycle time. In Japanese, the five aspects of 5S are organization, purity, cleanliness, discipline, and tidiness. In English, these words are often translated as sort, straighten, shine, standardize, and sustain. The most common targets of 5S programs are processes that tend to lose materials or require unnecessary movement.

"48. Which statistical distribution is appropriate for continuous data with neither an upper nor a lower boundary? a. Lognormal b. Weibull c. Exponential d. Normal

"48. D: Normal. A normal distribution is appropriate for continuous data with neither an upper nor a lower boundary. Continuous data is obtained through measurement. A lognormal or Weibull distribution is appropriate for sets of continuous data with a fixed lower boundary but no upper boundary. In most lognormal and Weibull distributions, the lower boundary is zero. An exponential distribution is appropriate for continuous data sets in which the values are relatively consistent.

"Which pioneer of quality control wrote Quality Is Free? a. W. Edward Deming b. Joseph M. Juran c. Armand V. Feigenbaum d. Philip B. Crosby"

"5. D: Philip B. Crosby. Philip B. Crosby wrote Quality is Free, a book that revolutionized quality management by placing an explicit emphasis on getting processes right the first time. Crosby insisted that businesses are better served by investing more money in quality control on the first run, and thereby avoiding the costs of defective products. W. Edwards Deming is famous for enumerating the seven deadly diseases of the workplace and fourteen points of "emphasis for management. Joseph M. Juran stressed the importance of customer satisfaction as a goal of quality control. Armand V. Feigenbaum is known for emphasizing four key actions in the implementation of quality management: establishing standards; creating metrics for conformance to these standards; resolving issues that impede conformance; and planning for continuous improvement." REVIEW THE WORK OR BOOKS OF THE OTHER CHOICES

50. Which of the following increases the power of an estimation of the confidence interval on the mean? a. A sample population with a normal distribution b. A smaller number of samples c. A known standard deviation d. An unknown standard deviation

"50. C: A known standard deviation. A known standard deviation increases the power of an estimation of the confidence interval on the mean. Indeed, when the standard deviation is known, the z tables may be used to find the confidence interval on the mean; when the standard deviation is unknown, the t tables must be used. The confidence interval on the mean is the percentage of samples that will contain the true population mean. It is assumed that the sample population will follow a normal distribution. When there are more samples, this increases the power of the estimation of the confidence interval on the mean."

10. Which parameter of a statistical distribution relates to the sharpness of its peak? a. Central tendency b. Kurtosis c. Skewness d. Standard deviation

10. B: Kurtosis. Kurtosis is the parameter of a statistical distribution related to the sharpness of the peak. In a normal distribution, where the points resemble the standard bell curve, the kurtosis value is three (an excess kurtosis of zero). If the peak is sharper than the bell curve, the kurtosis value will be higher; if the peak is flatter, the kurtosis value will be lower. Central tendency is the general trend of the data: in a symmetrical distribution, the mean is roughly equivalent to the central tendency, while in an asymmetrical distribution, the median is a better marker. Skewness is basically the difference between the mean and the mode of a data set. The mode of the data set is the value that appears most often. Finally, the standard deviation of the data set is the average amount of variation from the mean.
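
A quick check of these parameters on simulated bell-curve data, assuming numpy and scipy are available; note that scipy reports excess kurtosis by default.

```python
import numpy as np
from scipy.stats import kurtosis, skew

rng = np.random.default_rng(seed=0)
data = rng.normal(size=100_000)  # simulated bell-curve data

print(round(kurtosis(data), 2))                # ~0.0 (excess kurtosis)
print(round(kurtosis(data, fisher=False), 2))  # ~3.0 (raw kurtosis of a normal)
print(round(skew(data), 2))                    # ~0.0 for a symmetrical distribution
```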

12. Which method of creating a prioritization matrix is appropriate when time is limited? a. Partial analytical method b. Consensus-criteria method c. Full analytical method d. Summary method

12. B: Consensus-criteria method. When time is limited, the consensus-criteria method should be used to create a prioritization matrix. In this method, each member of a group is allotted one hundred points, which he or she then allocates across a series of criteria according to perceived importance. Prioritization matrices are used to identify the projects that will create the most value improvement over the long term and that will contribute the most to the achievement of organizational goals. Besides the consensus-criteria method, the other method for creating a prioritization matrix is called the full analytical method. In this method, all of the various options are listed, and the members of the team assign a numerical value to each.

16. During which phase of response surface analysis is the direction of maximum response identified using the steepest ascent methodology? a. Phase 0 b. Phase 1 c. Phase 2 d. Phase 3

16. B: Phase 1. During Phase 1 of response surface analysis, the direction of maximum response is identified with the steepest ascent methodology. This methodology is also used to define the current operating region. Phase 0 involves the use of screening designs to assemble a basic list of significant factors and create a first-order regression model. Phase 2 is the application of ridge analysis and a second-order model. The intention of Phase 2 of response surface analysis is to identify optimal conditions at the stationary points in a limited region. There is no Phase 3 in response surface analysis.

18. In an analysis of variance, how is the F statistic used? a. To compare the mean square treatment with the mean square error b. To estimate the process average c. To find the variation within each subgroup d. To find the variation between different subgroups

18. A: To compare the mean square treatment with the mean square error. In an analysis of variance, the F statistic is used to compare the mean square treatment with the mean square error. The mean square treatment is the average variation between the subsets, while the mean square error is the sum of the squares of the residuals. In order to trust the results of the F statistic, one must assume that the subsets have a normal distribution and equal variance. The variation within each subgroup is calculated by taking repeated samples from the subgroup. The variation between different subgroups is found by comparing the averages of each subgroup.

20. In response surface analysis, which of the following values for s and t weights would indicate that the upper and lower boundaries are more important than the target? a. -0.3 b. 0 c. 1 d. 0.7

20. D: 0.7. In response surface analysis, values of 0.7 for the s and t weights would indicate that the upper and lower boundaries are more important than the target. In Phase 2 of response surface analysis, the s and t weights are based on the relationship between the target and the boundary. When the target and the boundary have equal value, the s and t weights are 1. When the target is more important than the boundary, the s and t weights are between 1 and 10. When the boundary is more important than the target, the s and t weights are between 0.1 and 1.

22. Which type of human error is typically limited to a particular task? a. Willful b. Inadvertent c. Technique d. Selective

22. C: Technique. Technique error is typically limited to a particular task. Six Sigma experts identify three categories of human error: technique, inadvertent, and willful. Technique errors are the result of a lack of comprehension or poor training. It is more likely that technique errors will occur on difficult tasks. Inadvertent errors are slightly different, because they occur by accident even when an employee is experienced and understands the task. It is impossible to entirely eliminate inadvertent errors so long as there are human operators. A willful error is made intentionally by an employee. The best way to reduce willful errors is to maintain high morale and incentivize high performance.

24. Which type of chart is appropriate when sample size is variable and each sample may contain more than one instance of the targeted condition? a. P chart b. Autocorrelation chart c. U chart d. X-bar chart

24. C: U chart. A U chart is appropriate when sample size is variable and each sample may contain more than one instance of the targeted condition. These are control charts most appropriate for handling attributes data. A P chart, on the other hand, is better for measuring the percentage of samples with a particular characteristic when sample size is variable and the characteristic will either be present or absent. An autocorrelation chart indicates the relationships between various factors in the process. An X-bar chart, finally, is a control chart for variables data, in which the subgroup averages are assessed to determine the process location variation over time.
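
One common formulation of U-chart control limits, an added detail not stated in the answer above, is u-bar plus or minus three times the square root of u-bar divided by the sample size. A minimal Python sketch with hypothetical counts:

```python
import math

# Hypothetical data: defect counts from samples of varying size.
defect_counts = [4, 6, 3, 7]
sample_sizes = [50, 60, 45, 55]

u_bar = sum(defect_counts) / sum(sample_sizes)  # average defects per unit

# Common U-chart limits: u_bar +/- 3 * sqrt(u_bar / n), floored at zero.
for n in sample_sizes:
    sigma = math.sqrt(u_bar / n)
    ucl = u_bar + 3 * sigma
    lcl = max(0.0, u_bar - 3 * sigma)
    print(f"n = {n}: LCL = {lcl:.3f}, UCL = {ucl:.3f}")
```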

25. From whose perspective is value defined in the lean methodology? a. Customer b. Chief executive c. Entry-level employee d. Competitor

25. A: Customer. In the lean methodology, value is always defined from the perspective of the customer. This was a radical shift in perspective when it was first introduced. Most businesses assessed value from the perspective of executives or in-house experts. In lean methodology, value is defined as the qualities or characteristics for which a customer is willing to compensate the business.

26. On an X-bar chart, what variable is always represented on the x-axis? a. Variations b. Errors c. Length d. Time

26. D: Time. On an X-bar chart, time is always represented on the x-axis. X-bar charts are control charts for variables data. The chart should resemble a chronological model of the process: as the plotted points move away from the y-axis, they represent the advancement of time. In order for an X-bar chart to be possible, any variation must be assigned a time value. Outlying values on the X-bar chart indicate the presence of special-cause variation.

27. Which of the following diagrams indicates the critical path of a process? a. Gantt chart b. Work breakdown structure c. Value stream analysis d. Matrix diagram

27. A: Gantt chart. A Gantt chart indicates the critical path of a process. The critical path is the sequence of steps that have a direct bearing on the overall length of the process. Some steps can be delayed without elongating the overall duration of the process: these steps are not considered to be on the critical path. A work breakdown structure depicts the organization of a process. To create a work breakdown structure, one isolates the various components of a problem and then considers the various contingencies associated with each component. A value stream analysis determines the elements of a process that add value to the finished product. These elements are targeted for special attention. Finally, a matrix diagram depicts the relative strengths of the relationships between the items in different groups. A matrix diagram might indicate causal relationships between various factors in a process or might simply indicate which of the factors are related.

28. Which of the following is a disadvantage of higher-order multiple regression models? a. These models do a poor job of defining the area around a stationary point. b. Comprehensive and detailed experiments must be performed on the main effects. c. These models rarely have clear peaks and valleys. d. Small regions are difficult to perceive.

28. B: Comprehensive and detailed experiments must be performed on the main effects. One disadvantage of higher-order multiple regression models is that comprehensive and detailed experiments must be performed on the main effects. Otherwise, it will not be wise to assume that the results of the higher-order multiple regression models are useful or accurate. However, higher-order multiple regression models have a number of advantages. For one thing, they are excellent at clearly defining the area around a stationary point. They typically have well-defined peaks and valleys, which facilitates analysis. Also, they are very effective at mapping small regions in the process, so they are able to achieve a high level of precision and detail.

"29. If there are 32 observations in an experiment, it is typical to run autocorrelations from lag 1 to: a. Lag 4 b. Lag 8 c. Lag 16 d. Lag 32

29. B: Lag 8. If there are 32 observations in an experiment, it is typical to run autocorrelations from lag 1 to lag 8. The basic calculation for the number of autocorrelations in an experiment is lag 1 to lag x/4, in which x is the number of observations. Since there are 32 observations in this experiment, autocorrelations should run from lag 1 to lag 8. The lag is the difference between correlated observations. In lag 1, for instance, each observation is correlated with the observation immediately following it.
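
The rule of thumb reduces to one line of Python:

```python
def max_autocorrelation_lag(n_observations: int) -> int:
    """Rule of thumb from the answer: run autocorrelations from lag 1 to n/4."""
    return n_observations // 4

print(max_autocorrelation_lag(32))  # 8
```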

32. Which type of Pareto chart would be the least useful? a. One in which the bars represent costs b. One in which the cumulative percentage line is steep c. One in which all the bars are roughly the same height d. One in which the bars on the left are significantly taller than the bars on the right

32. C: One in which all the bars are roughly the same height. The least useful type of Pareto chart would be one in which all the bars are roughly the same height. A Pareto chart is used to identify the most important and urgent problems in a process. It is based on the Pareto principle, which is basically that a process can be improved dramatically through attention to the few most important problems. It is essential that the bars on a Pareto chart represent fungible values, like cost or count. A Pareto chart will not be useful if it is based on percentages or rates. The most useful Pareto charts have several large bars on the left, indicating problems that are significantly more important than others. Similarly, a steeply ascending line on a Pareto chart indicates that a few of the identified factors are very important, and therefore that the chart will be useful. If all of the bars on a Pareto chart are roughly the same height, no one factor is more important than another, meaning it will be impossible to generate an unusual amount of benefit by solving a single problem.

35. In nominal group technique, how many pieces of paper should each participant receive if there are 40 options to be considered? a. 2 b. 4 c. 6 d. 8

35. D: 8. In nominal group technique, each participant should receive eight pieces of paper if there are 40 options to be considered. Each participant will then write one of the options down on each piece of paper, along with its rank (first through eighth). It is typical for each participant to receive eight pieces of paper when there are more than 35 options. When there are from 20 to 35 options, the typical number of papers for each person is six. When there are fewer than 20 options to be considered, it is typical for each member of the group to receive four pieces of paper. Once all of the group members turn in their rankings, the various options are compared, and the most popular are given further consideration.
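
The paper-count rule from the answer, sketched as a small Python helper:

```python
def ngt_papers(n_options: int) -> int:
    """Pieces of paper per participant in nominal group technique."""
    if n_options > 35:
        return 8
    if n_options >= 20:
        return 6
    return 4

print(ngt_papers(40))  # 8
```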

39. Which of the following autocorrelation functions would indicate the strongest correlation? a. 0.1 b. -0.8 c. 0.9 d. -0.2

39. C: 0.9. An autocorrelation function of 0.9 would indicate the strongest correlation. The range of autocorrelation functions and partial autocorrelation functions extends from -1 to 1. The strength of the correlation is indicated by the distance from zero (that is, the absolute value), regardless of whether the value is on the positive or negative side. Therefore, an autocorrelation function of 0.9 would indicate a stronger correlation than would functions of 0.1, -0.8, and -0.2.
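
In code, "distance from zero" is just the absolute value:

```python
autocorrelations = [0.1, -0.8, 0.9, -0.2]

# The strongest correlation is the value farthest from zero.
strongest = max(autocorrelations, key=abs)
print(strongest)  # 0.9
```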

41. Which Six Sigma methodology is more appropriate for existing processes? a. DMADV b. IDOV c. DMAIC d. DFSS

41. C: DMAIC. The Six Sigma methodology of DMAIC (define, measure, analyze, improve, and control) is most appropriate for handling existing processes. It is geared toward gradual improvement. DMADV (define, measure, analyze, design, and verify), on the other hand, includes a design phase, during which new products can be developed. On occasion, DMADV is used to give existing products a large-scale remodeling. IDOV (identification, design, optimization, and validation) is the primary methodology of DFSS (design for Six Sigma). The main difference between DFSS and DMAIC is that the former attempts to prevent rather than reduce defects.

"43. Which goodness-of-fit test focuses on the relationship between number of data points and distributional fit? a. Nonparametric test b. Kolmogorov-Smirnov test c. Chi-square test d. Anderson-Darling test

43. B: Kolmogorov-Smirnov test. The Kolmogorov-Smirnov test focuses on the relationship between number of data points and distributional fit. A nonparametric test is used instead of a hypothesis test for comparing the means from samples with different conditions and for assessing the effects of changes on process averages. A chi-square test is the simplest form of goodness-of-fit test. An Anderson-Darling test is excellent for obtaining information from the extreme ends of a distribution.

45. How are decisions represented in the ANSI set of flowchart symbols? a. Circles b. Squares c. Rectangles d. Diamonds

45. D: Diamonds. In the ANSI (American National Standards Institute) set of flowchart symbols, decisions are represented with diamonds. There are other symbols to represent different types of tasks, but many simple flowcharts will simply use the diamond for decisions and rectangles for all other tasks in the process. An excessive number of decisions on a flowchart is a common symptom of inefficiency.

47. In a histogram, the number of bars is equal to: a. The square root of the total number of data values b. The square root of the range of data c. The range of data divided by the total number of data values d. The number of data observations

47. A: The square root of the total number of data values. In a histogram, the number of bars is equal to the square root of the total number of data values. Histograms look like bar graphs, but the bars on a histogram represent the number of observations that fall within a particular range. Histograms are often used to locate multiple distributions or apply a distribution to capability analysis. The width of each bar in a histogram is calculated by dividing the range of data by the number of bars. The range of data is determined by subtracting the minimum data value from the maximum data value. On a histogram, the x-axis represents the data values of each bar, and the y-axis indicates the number of observations.
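
Both rules from the answer, applied to a hypothetical data set in Python:

```python
import math

# Hypothetical data set: 64 values ranging from 2.0 to 18.0.
n_values = 64
data_min, data_max = 2.0, 18.0

n_bars = round(math.sqrt(n_values))          # 8 bars
bar_width = (data_max - data_min) / n_bars   # (18 - 2) / 8 = 2.0 units per bar
print(n_bars, bar_width)
```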

49. In a contour plot, what is indicated by a series of evenly spaced parallel lines? a. First-order main effects b. Second-order main effects c. Interactions between two responses d. Interactions between three responses

49. A: First-order main effects. In a contour plot, a series of evenly spaced parallel lines indicates first-order main effects. Interactions between responses are indicated by curving contour lines. Very few contour plots consist of evenly spaced parallel lines. The most general use of contour plots is during the improve stage of DMAIC, when they are used in response surface analysis to predict minimum and maximum response values for specific data ranges.

7. Which is typically the first category to be identified in SIPOC analysis? a. Suppliers b. Inputs c. Outputs d. Processes

7. C: Outputs. In SIPOC analysis, the first category to be identified is outputs. SIPOC (suppliers, inputs, processes, outputs, and customers) analysis is typically performed during the define stage of DMAIC. Its intention is to identify the most important processes and the relevant stakeholders. At the beginning of SIPOC analysis, it is typical to create a process map or flowchart. Outputs are the first category to be identified, because the identification of outputs facilitates the identification of suppliers, inputs, and customers.

2. What is one major problem with obtaining information about customer satisfaction from comment cards? a. Participants must be compensated. b. The most pleased and displeased customers are overrepresented. c. Responses are often vague. d. The expense is high.

B "2. B: The most pleased and displeased customers are overrepresented. One major problem with obtaining information about customer satisfaction from comment cards is that the most pleased and displeased customers are overrepresented. That is, customers who have extreme opinions, whether positive or negative, will be the most motivated to comment. Businesses that use comment cards may find them to be a valuable source of specific information, but should avoid assuming that commenters are representative of the larger body of customers. Some of the advantages of comment cards are that they are relatively inexpensive and tend to elicit detailed feedback. Also, participation by customers is voluntary and does not require compensation. "

3. Which distribution is appropriate for a continuous set of data with a fixed lower boundary but no upper boundary? a. Johnson b. Exponential c. Normal d. Lognormal

3. D: Lognormal. A lognormal distribution is appropriate for a continuous set of data with a fixed lower boundary but no upper boundary. In most cases, the lower boundary on a lognormal distribution is zero. These distributions can be tested with a goodness-of-fit test. A Johnson distribution is more appropriate for continuous data that for whatever reason is inappropriate for a normal or exponential distribution. An exponential distribution is appropriate for any set of continuous data, though these distributions are most often used for frequency data. A normal distribution is appropriate for a set of continuous data with neither an upper nor a lower boundary. The normal distribution follows the pattern of the classic bell curve.

8. How is takt time calculated? a. Available time divided by demand b. Overall process time minus time required for a particular task c. Demand divided by the amount of time available d. Time required for a task divided by demand

C "8. C: Demand divided by the amount of time available. Takt time is calculated by dividing demand by the amount of time available. This value, which is also known as target process time, should be posted at each workstation during the process of level loading. The takt time is the maximum amount of time a process can take without slowing down the overall completion of the task."

"9. How many runs would be required in a complete factorial design if there are four levels and three factors? a. 7 "b. 12 c. 64 d. 81

C "9. C: 64. If there are four levels and three factors in a complete factorial design, 64 runs would be required. The number of required runs is calculated by raising the number of levels to a power equal to the number of factors. In this case, then, the calculation is performed 43 = 64. "If the complete factorial design had five levels and three factors, the number of runs would be calculated 53 = 125."

6. Which of the following conflict-response strategies would be most appropriate when a group is fragile? a. Collaboration b. Competition c. Avoidance d. Accommodation

D "6. D: Accommodation. When a group is fragile, the most appropriate conflict response strategy would be accommodation. Accommodation is the temporary sacrifice of personal desires in the name of group consensus. If a group is in danger of falling apart, the best way to handle a conflict may be to temporarily put aside differences in order to make progress in other areas. Collaboration or competition may be too risky for a fragile group, and avoidance of the conflict only jeopardizes the long-term health of the group by failing to resolve the underlying issues.

"In businesses that apply the theory of constraints, which element of a process receives immediate attention? a. The most problematic b. The most important c. The most efficient d. The most complicated

1. A: The most problematic. In businesses that apply the theory of constraints, the most problematic element of a process receives immediate attention. Indeed, the most problematic area is known as the constraint. The focus of improvement will be reducing the constraint without diminishing performance in any other area of the process. Once the targeted constraint has been diminished so that it is no longer the most problematic component of the process, attention shifts to the new constraint.

13. In gauge repeatability and reproducibility analysis, what percentage of total process variation is acceptable? a. 10% or less b. 15% or less c. 20% or less d. 30% or less

a "13. A: 10% or less. In gauge repeatability and reproducibility analysis, 10% or less total process variation is acceptable. Variation from 10 to 30% is considered problematic, and any variation over 30% is unacceptable. A gauge repeatability and reproducibility analysis may result in a statistic expressed as a percentage of total process variation or a percentage of tolerance. The range of acceptable percentages is the same when the statistic generated is a percentage of tolerance. Gauge repeatability and reproducibility analysis indicates the degree to which a measurement system avoids common- and special-cause variation.

"14. Which type of diagram is used to eliminate unnecessary movement during a process? a. Spaghetti diagram b. Scatter diagram c. Ishikawa diagram d. Matrix diagram

14. A: Spaghetti diagram. A spaghetti diagram indicates the physical travel of employees, resources, and equipment during a process. These diagrams are used to identify unnecessary movements and to streamline processes as much as possible. A scatter diagram displays the correlation between two variables, with the independent variable on the x-axis and the dependent variable on the y-axis. An Ishikawa diagram, also known as a fishbone or cause-and-effect diagram, is used to outline the causes of a particular event, as well as the possible results of particular actions. Finally, a matrix diagram depicts the relationships between the items in several groups. A matrix diagram looks a great deal like a table of data, with the strength of the relationships indicated by the values in each cell.

15. In what order are the process steps presented in a process decision program chart? a. Right to left b. Left to right c. Top to bottom d. Bottom to top

15. A: Right to left. In a process decision program chart, the process steps are presented from right to left. A process decision program chart is used to isolate possible problems with a particular process or strategy. These charts are typically used during the Analyze and Improve stages of DMAIC. At the top of the chart, the process is named. The steps in the process are then presented from right to left, with any necessary substeps mentioned underneath. Then the potential problems in each step are listed, along with some brainstormed solutions.

17. In which method of sampling is a population divided into groups and a sample taken from each group? a. Systematic sampling b. Stratified sampling c. Judgment sampling d. Cluster sampling

a "17. B: Stratified sampling. In stratified sampling, the population is divided into groups, and a sample is taken from each group. In systematic sampling, on the other hand, there is a particular order to the selection of samples. In judgment sampling, an expert or group of experts selects the samples. In cluster sampling, experts create a representative group from which a random sample is drawn.

"19. Which of the following characteristics of a team most often results in groupthink? a. Frequent communication b. Lack of accountability c. Lack of subject expertise d. Undefined roles

19. C: Lack of subject expertise. When a team has a lack of subject expertise, it is more likely to suffer from groupthink. Groupthink is a phenomenon in which team members agree too readily, without adequately challenging each other's ideas. When groupthink occurs, a team will often select the first proposed solution to a problem, even if it has serious weaknesses. Groupthink is more likely to be a problem when the team members do not have enough experience or expertise in the subject area to come up with alternatives to a recommendation. Also, if the team members do not feel empowered to offer their views, they may be more likely to engage in groupthink.

21. Which of the following would be considered a value-added activity? a. Design b. Delivery c. Marketing d. Manufacturing

d "21. D: Manufacturing. Manufacturing is considered a value-added activity. Value-added activities are those that create value in a product or service from the perspective of the customer. There are certain processes that are necessary but that do not add value for the customer. These are known as business-value-added activities. Design, delivery, and marketing are classic examples of business-value-added activities, because they do not directly add value for the customer, but they are a necessary part of the production process.

