SECTION 1 Basic

1. From Specifications to Process Behavior (October 2001)

Answers to October 2001 Brain Teaser

Q: Should Matt assume that all of the products that were shipped met the specifications for breakage?
A: All of the data values met the minimum specification, but the individuals and moving range chart, Breakage Values of Plastic Housings, shows that the process is unpredictable. When a process is unpredictable, assignable causes of variation also act in an unpredictable manner. Therefore, it is unwise to assume that all of the products shipped met the breakage specifications.

Q: What critical message has been contained in the breakage data all along that Matt was not able to see?
A: Until he saw the chart, Matt was not aware of the low-breakage values. He had always received reports of out-of-specification events but had not considered the process behavior.

Q: What can Matt expect to happen in the future if production continues as it has up to now?
A: Because the process is unpredictable, statements about the future are problematic. Matt needs to take appropriate action to find out why the points are outside the limits on the chart. After he finds these causes of variation and eliminates them from the process, Matt may find the process is predictable with respect to breakage. If that is the case, the Natural Process Limits would be in the neighborhood of 42 to 98, as seen in the chart, Breakage Values for Plastic Housings, Analysis Using the Median Moving Range. These limits are based on an analysis that uses the median moving range to compute the limits. In this situation, the median moving range minimizes the impact of the high moving range values in the computation of the limits.

Key Lessons
• Meeting specifications does not assure a stable process.
• Even when all inspected units meet specifications, an unstable process may still produce units that are out of specification, which can lead to customer complaints.
• The only way to check for process stability is to use an appropriate process behavior chart.
• The median moving range is useful with an XmR chart when a few points are associated with exceptions.

2. Maintaining Dough Temperature (August 2000)

Answers to August 2000 Brain Teaser

Q: Is the dough temperature for this process predictable over the limited time span of these data?
A: The individuals and moving range chart for Dough Temperature shows that there are signals in the data. The presence of signals indicates that there are assignable causes of variation in the process. Therefore, the process is unpredictable.

Q: Using these data, what would you predict the dough temperature to be in the future?
A: Given that the dough temperatures behave unpredictably, no prediction can be made regarding future values for dough temperature.

Q: How would you describe the appearance of the process behavior chart?
A: The process behavior chart looks choppy. On the individuals chart, there are only three possible observed values for dough temperature, while on the moving range chart there are only two possible values, 0 and 1.

Q: What problem exists with data that have this type of appearance on a process behavior chart?
A: These data suffer from the problem of inadequate measurement units. The data have been rounded to the point where there is insufficient detail. The lack of adequate detail in data can lead to an estimate of the standard deviation of a process that is too low. In such cases, the limits that are calculated for the process behavior chart will be too tight and false signals will appear. To assure that you have enough detail in your data, count the number of possible values for the range that would fall inside the control limits. For XmR charts, you need at least four possible values for the range inside the control limits to assure there is adequate detail in your data. For subgrouped data, you need at least five possible values.
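The counting rule just described can be sketched in code. This is a minimal illustration, not the book's own software; the temperature values below are made up for the example, and the 3.268 scaling factor for the upper range limit is the standard XmR constant.

```python
def chunky_check(values, increment):
    """Check for adequate measurement detail on an XmR chart.

    Counts how many possible moving-range values (multiples of the
    measurement increment, starting at zero) fall within the upper
    range limit. Rule of thumb: an XmR chart needs at least four
    such values; subgrouped charts need at least five.
    """
    mrs = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(mrs) / len(mrs)
    url = 3.268 * avg_mr                  # upper limit for the moving ranges
    possible = int(url // increment) + 1  # counts 0, inc, 2*inc, ... <= url
    return possible, possible >= 4

# Hypothetical dough temperatures rounded to whole degrees:
temps = [77, 78, 77, 77, 78, 78, 77, 78, 79, 78, 77, 78]
n_possible, adequate = chunky_check(temps, increment=1.0)
```

With these rounded values only three range values are possible inside the limits, so the check flags the data as too chunky; recording to the nearest tenth of a degree would raise the count well past four.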
For the dough temperatures, only one possible value for the moving range was inside the limits on the moving range chart. This value was zero. There were values of one, but these were outside the limit. If the dough temperature data had been recorded to the nearest tenth of a degree, the revised XmR chart for dough temperature would show the process to be predictable. The average for the moving ranges in this revised XmR chart is almost twice as large as the one using the original data. There are nineteen possible values on the moving range chart. This is greater than the minimum of four and assures adequate detail in the data.

Key Lessons
• A process with signals of exceptional variation is, by definition, not predictable, and no statement of predictability for the future would be valid.
• Signals should always be investigated to discover the cause.
• Resolution of data values is a critical issue for correct analysis of data.
• Data with too much rounding (too little detail; also called chunky data) lead to an underestimate of the standard deviation of the process and, consequently, limits that are artificially tight.
• Artificially tight limits may lead to false signals on the process behavior chart.
• The range chart can be used to detect the situation of chunky data.

3. Defective Sheets of Glass (November 2000)

Answers to November 2000 Brain Teaser

Q: How does the scoring process behave with respect to the percent of defective sheets?
A: A process behavior chart of all of the data, Percent Defective Sheets of Glass, shows that the process is not predictable over the 28 days. There are some signals of assignable causes, and there are patterns on both the individuals chart and the moving range chart.

Q: What are some possible causes of the behavior you see?
A: Some possible causes of the patterns in the data are shift differences or time-of-day differences.
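The XmR limit calculations used throughout these answers, including the median moving range variant from the first teaser, can be sketched as follows. The scaling factors (2.660 and 3.268 for the average moving range; 3.145 and 3.865 for the median moving range) are the commonly cited values; the example data are invented to show the effect of a single spike.

```python
import statistics

def xmr_limits(values, use_median=False):
    """Natural process limits for an XmR chart.

    Average moving range: center +/- 2.660 * mR-bar.
    Median moving range (less sensitive to a few large ranges):
    center +/- 3.145 * median mR.
    Also returns the upper range limit for the moving range chart.
    """
    mrs = [abs(b - a) for a, b in zip(values, values[1:])]
    center = sum(values) / len(values)
    if use_median:
        spread = statistics.median(mrs)
        lo, hi = center - 3.145 * spread, center + 3.145 * spread
        url = 3.865 * spread
    else:
        spread = sum(mrs) / len(mrs)
        lo, hi = center - 2.660 * spread, center + 2.660 * spread
        url = 3.268 * spread
    return lo, center, hi, url

# One spike inflates the average moving range but not the median:
spiky = [70, 71, 69, 70, 71, 70, 90, 70, 71, 70]
lo_a, mid_a, hi_a, url_a = xmr_limits(spiky)
lo_m, mid_m, hi_m, url_m = xmr_limits(spiky, use_median=True)
```

Here the median-based limits are much tighter than the average-based limits because the one large moving range dominates the average, which is exactly why the first teaser's answer used the median moving range.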
Separate process behavior charts for each shift and for the time of day would be useful in detecting causes of the behavior that is seen in the chart of all of the data. The process behavior chart for B Shift shows an average of 1.31 percent defective with two points outside the upper limit. There is at least one assignable cause that impacts B Shift. The long run below the average line would disappear if the assignable cause of the two points had not been present. For C Shift, the average is 3.638 percent defective, with only one point outside the upper limit. This average is almost three times that of B Shift. The behavior of the data points in this XmR chart suggests that the limits have been inflated by assignable causes impacting the moving ranges. Process behavior charts for all four shifts and all three times of day are also shown.

Q: What percentage of defective sheets of glass due to the scoring process can you predict for the future?
A: Because the process is currently not predictable, it is not appropriate to predict the percentage of defective sheets of glass to expect in the future. Causes of the differences between the shifts and assignable causes of the signals need to be identified and eliminated before any prediction of future percent defective can be made.

Key Lessons
• Exceptional variation can be discovered through repeated patterns in the data on a process behavior chart.
• Data can be charted by any identified characteristic, such as shift, time of day or product, to discover the cause of exceptional variation.
• No predictions can be made for the future if a process exhibits exceptional variation.
• If the chance of a defective unit is not constant for a process, then percentages based on counts should be analyzed on an individuals and moving range chart.

4. Monitoring Process Temperature (November 2001)

Answers to November 2001 Brain Teaser

Q: Is the process temperature for producing this chemical behaving in a predictable manner?
A: The individuals and moving range chart of temperature values, shown in the graph “Temperature of Chemical Process,” indicates that this process is predictable with an average of 831°F. Individual temperature values could range from 786°F to 875°F on any given day.

Q: Is the process temperature meeting the process specifications?
A: A capability analysis of the temperature values shows that the process is not capable of meeting the specifications of 840°F ± 25°F. This analysis is summarized in the graph “Capability Analysis for Temperature of Chemical Process.”

Q: What evidence, if any, can be found in the data to explain an increase in the variability of the quality characteristics for the finished product?
A: While there is no “official” pattern in the data that would be detected by traditional runs tests, there is an interesting zig-zag quality to the running record. On examination, some data values appear to occur in pairs, with the moving range chart periodically showing a saw-tooth pattern. A close look at the data and the timing of the data collection suggests that the values for Monday, Wednesday and Friday might differ in some way from the values for Tuesday and Thursday. Two individuals and moving range charts of temperature values (one for Monday, Wednesday and Friday and the other for Tuesday and Thursday) are shown. The capability analyses for these days are posted as well. These analyses show that on Monday, Wednesday and Friday, the temperature values are predictable, with an average of 842°F and individual values possible from 817°F to 867°F. Currently, no values are outside specification. On Tuesday and Thursday, the process is also predictable, but the average is 814°F, with individual values possible from 789°F to 840°F. Approximately 50% of the values are outside specification on these days. Further investigation is required to determine why these two situations are different.
Key Lessons
• Process variables often provide the clues to the behavior of product quality characteristics.
• Patterns in time-ordered data values are clues to process behavior.
• Interpreting a pattern in time-ordered data in the context of the process is a step toward identifying causes of variation.
• For ongoing process improvement, all causes of exceptional variation must be identified and appropriate actions taken to eliminate or minimize them.

5. A Sticky Situation (January 2002)

Answers to January 2002 Brain Teaser

Q: How should Stacey analyze these data?
A: The best way for Stacey to see the behavior of the press over a period of time is to put the data on a run chart. If he then analyzes the data using an XmR chart, Stacey will see that the press operates efficiently for two to three days after the preventative maintenance. By day three, the stickiness reappears and escalates over the next several days of production. By the ninth day, the press has reached a point where the average number of times a bun sticks to the machine is 42.5 in a 10-minute period; it appears to level off at this point. See the chart “Number of Stuck Buns at the Press.” By this time, the press is scheduled for maintenance.

Q: What can Stacey learn about the overall behavior of the press by analyzing the “stuck” bun problem?
A: If Stacey studies the “stuck” bun problem several times and sees the same pattern of behavior, he can begin to predict the behavior of the press with respect to the sticking problem.

Q: How can Stacey use these data to determine a better maintenance schedule for the press and still meet production targets?
A: After Stacey has enough evidence of the pattern of behavior with respect to the stickiness of the press, he can evaluate the cost of paying an employee to separate the stuck buns versus the cost of additional lubricating maintenance for the machine. Any additional maintenance procedures need to be done quickly so as to minimize the impact on production.
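The cost comparison described above reduces to a simple break-even check. The dollar figures and the count of sticky days avoided are purely hypothetical; Stacey would substitute the plant's actual numbers.

```python
def cheaper_to_service(labor_per_sticky_day, service_cost, sticky_days_avoided):
    """True when one extra lubrication service costs less than the
    hand labor (separating stuck buns) it would eliminate."""
    return service_cost < labor_per_sticky_day * sticky_days_avoided

# Hypothetical figures: $120/day of hand labor, $200 per extra
# service, roughly six sticky days avoided per maintenance cycle.
worth_it = cheaper_to_service(120.0, 200.0, 6)
```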
If it is advantageous to schedule lubrication maintenance every two to three days, he can modify the maintenance procedure.

Key Lessons
• Simple measures (number of stuck buns) and a minimal amount of data collection (four times a shift) may provide the needed proof of an exceptional cause of variation.
• Sometimes just plotting data over time can lead to beneficial insights, such as the appropriateness of maintenance procedures.
• It may be more costly to continue to run an unpredictable process than to take the time to make the process predictable.

6. Machined Parts (February 2002)

Answers to February 2002 Brain Teaser

Q: When Derek asked how to analyze the data, he was told to make a process behavior chart. What can Derek learn from a process behavior chart of Height A and Height B?
A: Because the data are in time order, an individuals and moving range chart of each characteristic will show whether the hour-to-hour variability is consistent, how large it is, and whether any exceptionally large or small parts are present. See the charts “Height A” and “Height B.” The only signal is a run of ten points on the chart for Height B. This suggests that the parts from suppliers X and Y may be different from the parts from supplier W.

Q: What other method can Derek use to analyze his data?
A: He can make separate individuals and moving range charts for each supplier for each dimension.

Q: Do the castings from the three suppliers differ?
A: From the separate charts for each supplier, Derek can tell that the results are not the same for all three suppliers. If he puts data values from one supplier onto the chart made for another supplier, he will see the signals that indicate the differences. See the charts “Height A by Supplier” and “Height B by Supplier.”

Key Lessons
• Using data to study a process, regardless of the source of the part, can show how the various suppliers’ products compare under like conditions.
• Once a signal of exceptional variation is identified, analyzing the same data by supplier (or shift or product code) may reveal the source of the signal.
• Using only separate charts for each supplier can hide patterns that are revealed in a graph of the data in time order.

7. Electrical Problems (March 2002)

Answers to March 2002 Brain Teaser

Q: What are some ways that Donna might use these data to discover the root causes of the problems?
A: The behavior of this process can be determined with respect to the percent of instruments that pass the first test. It is also possible to evaluate the behavior of the percentage of reworked instruments that pass. Unfortunately, while many characteristics, such as resistance, could cause the failures, no details are available as to the reasons for failure, and so the root causes of the problems cannot be determined.

Q: Analyze the data to determine if the production process is predictable with respect to the percentage of instruments that pass the first test.
A: An individuals and moving range chart for the percentage of instruments that pass the first test is shown in the graph “Percent Good at First Test.” The process is predictable at an average of 47% good at first test.

Q: What other data does Donna need to collect to get to the bottom of the problem?
A: Donna needs to record the details about why each instrument fails at first test, as well as any information gained during the rework process. Some of the causes of failure may also be analyzed on a process behavior chart to see if each problem behaves predictably.

Key Lessons
• First pass yield, the percent of good units without any reworking, shows how this process currently behaves.
• When collecting data, details concerning the specific reasons for failure are critical to searching for a root cause.
• If the process behavior is unpredictable, the root cause is usually an exceptional cause of variation and may be easier to discover.
• If the process behavior is predictable, the root cause is part of the routine (common cause) variation and may be more difficult to discover.
• Connecting a failure to a particular point in the process and being able to reproduce the failure deliberately is the key to validating a root cause.

8. Digging for Root Causes (April 2002)

Answers to April 2002 Brain Teaser

Q: How does the current behavior of this process compare to the behavior based on the previous data? (See the March 2002 Brain Teaser for the data.)
A: The individuals and moving range chart “Percent Good at First Test” includes both sets of data. The average value is still 47%, but the variation is less, which leads to tighter limits. However, reducing the variation around an unacceptable average does not fix the problem. The average value needs to increase dramatically.

Q: How can you use the details for the causes of rejection of instruments to determine the root cause?
A: A Pareto chart can be used to show the relative contribution of the various causes of rejection to the total. Bad solder was the overwhelming problem identified, accounting for 60% of all causes that were tallied.

Q: Consider the specific causes of rejection and brainstorm about the step in the process at which each problem is occurring.
A: For the two highest causes of rejection: bad solder can come from the soldering process or the previous step in which connections are made, and the first place to investigate the incorrect connection problem is the step in which the connections are initially made. Data from the incoming component test and the kitting process, which is the process for gathering the incoming components for a unit and placing them in a container, may provide insights into the component failure and wrong PC board problems.

Key Lessons
• The total number of causes for rejection (defects) may be greater than the number of units rejected because more than one failure may occur on a single unit.
• Reducing variation in the number or percent of units that pass first test does not eliminate failures. The average number or percent of units that pass at first test must be increased.
• A Pareto chart can be used to communicate the relative percentage of causes of failures.
• Each cause of failure must be connected to a step in the process to find the root cause.

9. Analyzing Downtime: Part I (February 2001)

Answers to February 2001 Brain Teaser

Q: How should these data be analyzed before they are presented to the plant manager?
A: The overall downtime as a percent of the production day was set up specifically for the plant manager. To get these percentages, Patrick divided the total downtime each day by the length of the production day in minutes. Patrick made an individuals and moving range (XmR) chart of these values to see if percent downtime is predictable. An XmR chart of total downtime as a percent of the production day is shown in the graph titled Percent Downtime.

Q: Is there anything unusual about these data?
A: The unusual aspect of these data is that downtime in step 2 is always greater than downtime in step 1, and downtime in step 3 is greater than or equal to the sum of the downtimes from steps 1 and 2. While the reasons may not be immediately known, one might suspect that the downtime recorded in each step is not only a result of causes in that step but also reflects downtime from causes that happen at a previous step. After investigating, Patrick found that the downtime for steps 2 and 3 reflected the accumulation of downtime caused by the previous steps plus downtime caused in that step. Thus, adding the downtime for the three steps gives an inflated result that does not reflect the true picture.

Q: What additional information is needed to be comfortable with the analysis?
A: To analyze the data correctly, what has been recorded must be known. In this situation, all downtime minutes were recorded and the reasons indicated.
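If, as Patrick found, each step's log accumulates the downtime passed along from earlier steps, the step-specific contribution can be recovered by differencing. The minute values below are hypothetical, and the sketch assumes the logs are strictly cumulative from step to step.

```python
def step_specific_downtime(recorded):
    """Convert cumulative recorded downtime per step into the
    downtime caused within each step.

    `recorded` lists minutes as logged at steps 1..n, where each
    step's log includes downtime inherited from earlier steps.
    Differencing recovers each step's own contribution.
    """
    own = [recorded[0]]
    for prev, cur in zip(recorded, recorded[1:]):
        own.append(cur - prev)
    return own

# One day's logs: step 2 includes step 1's 40 minutes, and step 3
# includes everything upstream (hypothetical numbers).
recorded = [40, 65, 110]
own = step_specific_downtime(recorded)
```

Note that the last recorded value equals the total process downtime, which is why the answer in Part II can use the step 3 chart for the process as a whole.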
In many cases, the reason for the downtime was that the previous step incurred downtime. To provide a realistic analysis for the supervisors in each step of the process, Patrick kept a separate tally detailing the reasons each step incurred downtime. In this way, the supervisors of steps 2 and 3 can understand the downtime due to causes in their own steps of the process and then direct their efforts to reducing downtime from these causes.

Key Lessons
• From a management perspective, percent downtime might be useful as one of several measures that make up a plant or production scorecard.
• Cumulative values for downtime from different departments may misrepresent the process by double counting.
• Each step in the process needs a separate chart to focus on reducing the downtime that is associated with its own area, because that is where the causes will be found and where the improvements will show.

10. Analyzing Downtime: Part II (March 2001)

Answers to March 2001 Brain Teaser

Q: Use the data to prepare an analysis of downtime for the supervisor of each of the three process steps and for the plant manager.
A: To determine how each process step behaves with respect to downtime, the three supervisors and the plant manager each need an individuals and moving range (XmR) chart for their particular step in the process. An example is given in the chart, Downtime for Causes in Step 2 of the Process, which will be used by the supervisor of step 2. The charts for downtime for causes in steps 1 and 3 are also shown. For the plant manager, two XmR charts are possible: an XmR chart for the downtime from all causes in step 3 and an XmR chart for the percent downtime each day. The chart for downtime for the process as a whole is titled Downtime at Step 3 for All Causes. The chart for percent downtime is also shown.

Q: Interpret the behavior of the downtime for each step in the process that results from causes in that step.
A: Downtime in step 1 is predictable at an average of 47.5 minutes, and on any given day the downtime can range from 34 to 61 minutes. Downtime is unpredictable in steps 2 and 3 for causes specific to each individual step. For the process as a whole, downtime is predictable at an average of 107 minutes and can range from 39 to 176 minutes on any day. On a percentage basis, downtime averages about 15% of the production day, with individual days ranging from 5% to 25%.

Q: How can the supervisors in each step use the analyses of downtime data to reduce downtime?
A: The key to improvement is an understanding of the process at the work level. While the charts for the plant manager can document the overall behavior of downtime for this process, it is at the individual step level that improvements must be made. Thus, each supervisor needs to use the downtime chart for causes in his or her own area to begin reducing downtime. In steps 2 and 3, the supervisors first need to find and eliminate the assignable causes of variation. When the downtime is predictable for a given step, the emphasis needs to be on finding ways to fundamentally change the process to reduce downtime.

Key Lessons
• At the production process level, actual minutes (not percents) should be recorded for downtime.
• Each step or area of the process should use its own data to study process behavior.
• Downtime for the last step in the process will be equivalent to total process downtime.
• As long as the process is in statistical control, downtime will behave in a predictable or steady manner.

SECTION 2 Intermediate

11. Daily Audits (February 2000)

Answers to February 2000 Brain Teaser

Q. How can these data assure the manufacturer that all product will meet specifications?
A. As they stand, these data do not assure the manufacturer of 100% conformance to specifications. The results of a 10% sample of product do not tell you what to expect in the rest of the shipment.
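Part of the difficulty with audit results quoted in defects per million shows up in simple arithmetic: with only a few hundred units tested, a DPM figure can move only in large, widely spaced steps, so the apparent precision is illusory. A minimal sketch:

```python
def dpm(defectives, tested):
    """Defects per million computed from an audit sample."""
    return defectives / tested * 1_000_000

# With only 500 units tested, DPM can change only in steps of 2,000:
step = dpm(1, 500)
small_sample = [dpm(d, 500) for d in range(4)]  # possible low-end values
```

A six-digit DPM number derived from a 500-unit sample therefore carries far less detail than it appears to, which is the "perceived level of detail" problem discussed below.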
Only when a process is operating with a predictable behavior can a statement be made about the quality of the untested product. The current data show that some samples contained defective units. This makes it unlikely that the rest of the shipment is free of defectives.

Q. How can these data be used by the vendor and the manufacturer to determine what can be expected in the future?
A. These data can be analyzed on a process behavior chart by the vendor or the manufacturer. If the process behaves predictably, then the average DPM can be used to predict what the vendor is sending to the manufacturer.

Q. What are the problems in expressing results in defects per million when only one to two thousand units are inspected?
A. There are two problems in these data, and they are related. One is the problem of expressing counts of defective units as rates in DPM when only 500 to 1,000 or so units are tested. In such situations, there is a perceived level of detail in the data that does not exist. The second problem occurs when the true level of detail in the data (of which one may be unaware) is not fine enough to obtain an appropriate estimate of the standard deviation of the process. Under this circumstance, the estimated standard deviation is too small, causing the limits on the process behavior chart to be too tight. Consequently, false signals can occur.

Q. What would be a better way for the vendor to supply data to reassure the manufacturer of conformance to requirements?
A. A better way for the vendor to provide final audit data to the manufacturer is to provide a process behavior chart of the number (actual count) of defective units based on a fixed-size sample. For example, the average number of units tested over the 28-day period is 696. The median number is 500. Choose a fixed number of units to test and then make a chart of the number defective.

Key Lessons
• Audits of finished product are insufficient to determine the quality of uninspected units.
• When working with a sample based on a constant percentage, the risks of errors in decision making change from one time to the next.
• A process behavior chart based on a constant number of units per day can reveal how the process behaves and also maintain a constant level of risk in decision making.
• Converting inspection results based on several hundred or thousand units to defects per million may obscure the interpretation of the numbers.
• The number of units rejected may be easier to use and understand as it relates to the process. Defects per million may be more appropriate as a performance measure for comparison with other processes.

12. Reworking Percent Solids (March 2000)

Answers to March 2000 Brain Teaser

Q. How can you use these data values to show the customer that you are meeting specifications?
A. You can make a histogram of the final data values for each batch and show that these results are all in specification. Of course, this histogram does not convey any information about the process behind the results or what can be expected in the future.

Q. How can you use these data values to understand the production process?
A. By analyzing the values obtained on the first test of each batch using an individuals and moving range chart, you can see how the production process behaves.

Q. Of what use are Cp and Cpk for this process?
A. Using the first test values, the capability indexes, Cp and Cpk, show what you might expect should you get the production process to behave predictably. Currently these values are not appropriate, since the process is not predictable.

Key Lessons
• A histogram is a technique to show the distribution of all data values and to see if all values are within specifications.
• A histogram does not analyze process behavior.
• The data consist of first pass values plus values from reworking the batches. Only the first pass data values should be used to understand the process behavior.
• Capability values apply to the process, and only first test values are appropriate to use in calculating Cp and Cpk.
• While a process is unpredictable, capability values are not appropriate to use.

13. Splitting a Double Coil into Two Rolls (January 2000)

Answers to January 2000 Brain Teaser

Q. The manufacturer wants to know if all of the product can be expected to meet the specifications. What is the best way to analyze the given data in order to determine if all of the product from this particular production line meets specifications? Hint: In your analysis, use a process behavior chart (individuals and moving range or average and range) along with a capability analysis to answer the question.
A. An individuals and moving range (XmR) chart of all of the data values in order shows an alternating pattern above and below the average line. This nonrandom pattern is evidence of an assignable cause of variation that needs to be identified (see graph, “Individuals and Moving Range Chart”). A close look at the data reveals that each strip has two values, one for each of the two final strips. A histogram of the data reinforces the message seen in the XmR chart that there are two distinct processes (see graph, “Capability Analysis”). Separate charts for each strip show that the two strips are different. The left strip is predictable, and the right strip has one data value outside the limits. The average values for each strip (0.9429 for the left and 0.9632 for the right) are detectably different, since the limits for each strip have no overlap with those of the other. Because the total width is determined at the extrusion step, the position of the knife for slitting the original strip into two pieces is critical for the final product results. These analyses are shown.

Q. What additional data would you recommend that this company collect if they want to understand and improve their process?
Hint: Think of process improvement as running “On Target with Minimum Variation.” What data would be needed, and how should it be analyzed, to tell them that this is being accomplished?
A. To focus on process improvement, the manufacturer needs to measure the process rather than the product. Both the total width of the initial strip and the position of the knife that splits the strip are key to making good product. A chart for the total width and another for the width of one strip will be sufficient to ensure that the splitting process is on target and that the resulting product is “on target with minimum variation.”

Q. What other questions should the manufacturer ask about this situation? What questions would be more useful for the customer to ask about the finished product?
A. The manufacturer should ask questions about how the process behaves and how to keep the process running predictably. The desired results will follow. Customer questions should be directed to the predictability of the manufacturer’s process as well as to meeting requirements.

Key Lessons
• A histogram can be used to see if all measurements are inside specifications.
• A process behavior chart is needed to determine if the process is running in such a way that future product can be expected to meet specifications.
• Patterns in a running record or process behavior chart indicate an exceptional cause of variation, which can be traced back to the process.
• Verifying the root cause can often be done by making multiple process behavior charts that correspond to the specific process step suspected.
• Collecting data on process measures during the process, versus waiting until the product is finished, can prevent issues with the finished product and assure that customer specifications will be met.

14. Short Run Machining Parts (September 2000)

Answers to September 2000 Brain Teaser

Q: What is the difference between the process and the product for the KLX4337?
A: The KLX4337 is a versatile machine that can produce many different parts. Specifications are typically written for products, and there is a tendency to keep track of each part number separately even when all of them are produced on the same machine. From a process point of view, the various part numbers are not critical. The process includes the KLX4337 along with the raw materials, the procedures used for setup, and the people who operate the machine, and it is the same regardless of the part number. The goal is to operate the process in a predictable manner regardless of the part number.

Q: How can SPC be used with these data to study the process?
A: The data for Length A from the KLX4337 can be analyzed on a difference-from-nominal average and range chart with subgroups of size n = 3. The analysis for the Length A data is shown in the graph entitled KLX4337 Length A, Difference from Nominal. This analysis clearly shows that the overall average difference from nominal for the 24-hour period is very close to zero, but the process is not predictable. There are points outside the limits on the average chart, which indicate that exceptional causes of variation are present.

Q: Is this process capable of producing parts that fit within ±0.10 of the nominal value?
A: Since the process is not predictable, the question of capability can only be addressed from a hypothetical perspective. The within-subgroup variability, which represents the common cause variability or “noise” of the process, is tracked on the range chart. Since there are no signals on the range chart, the natural process limits calculated from these data can be used to determine a hypothetical capability, as shown in the graph entitled KLX4337 Capability Analysis. This is an indication of what could be accomplished if the process were to behave predictably.
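A difference-from-nominal average and range chart like the one described can be sketched as follows. The subgroup measurements and nominals are invented for illustration; A2 = 1.023 and D4 = 2.574 are the standard Shewhart constants for subgroups of size n = 3.

```python
# Shewhart chart constants for subgroups of size n = 3.
A2, D4 = 1.023, 2.574

def diff_from_nominal_limits(subgroups, nominals):
    """Average and range chart limits for short-run data.

    Each subgroup is first expressed as differences from that part
    number's nominal, so parts with different targets can share one
    chart of the common process.
    """
    diffs = [[x - nom for x in sg] for sg, nom in zip(subgroups, nominals)]
    xbars = [sum(d) / len(d) for d in diffs]
    ranges = [max(d) - min(d) for d in diffs]
    grand = sum(xbars) / len(xbars)
    rbar = sum(ranges) / len(ranges)
    return (grand - A2 * rbar, grand, grand + A2 * rbar), (0.0, rbar, D4 * rbar)

# Hypothetical subgroups from two different part numbers:
subs = [[10.02, 9.98, 10.01], [25.03, 24.99, 25.00]]
noms = [10.00, 25.00]
(x_lcl, x_bar, x_ucl), (r_lcl, r_bar, r_ucl) = diff_from_nominal_limits(subs, noms)
```

Because every value is a deviation from its own nominal, a 10 mm part and a 25 mm part contribute to the same chart, which is the point of the short-run approach.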
As you can see, once the exceptional variation is eliminated, this process should be able to make the various parts within a specification of ± 0.10 from the nominal.

Key Lessons
• For short runs of different products, process behavior is critical and can be analyzed by plotting the difference from nominal for similar characteristics that may have different targets or nominal values.
• Making separate charts for each product will show the product results but may miss signals associated with process behavior.
• Short run process behavior charts for averages and ranges can be used with subgroups of data if each data value is recorded as a difference from nominal.
• The question of process capability can only be answered for a predictable process. A hypothetical capability can be presented, but it is not current reality.

15. A Layering Process (December 2001)

Answers to December 2001 Brain Teaser

Q: If Virginia uses the percent deviation from the target value for the short runs produced on Reactor 11, what does the analysis of the data show with respect to the behavior of Reactor 11?

A: An average and range chart of the percent deviation from target has two ranges above the upper limit for product D. This suggests that the variability of product D may be higher than that of the other products, as seen in the chart, Percent Deviation from Target for Reactor 11.

Q: What assumption is made when using the percent deviation from target?

A: Because the specifications were quoted as a percent deviation from target, ±5% from target, Virginia assumed that the variation of the layering process would be a percentage of the target. She has a wide range of target values for her process, but the standard deviation of the measurements will not always vary as a percent of the target. In this situation, the standard deviations calculated from the data values for each product vary from 5.5% of the target value for product C to 11.5% for product D.
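The mismatch between the percent-of-target assumption and the actual product standard deviations can be checked directly from the data. A minimal sketch, using hypothetical measurement values and targets (not Virginia's reactor data), computes each product's standard deviation as a percent of its target:

```python
# Check whether process variation actually scales with the target value.
# Targets and measurements below are hypothetical illustrations.
targets = {"C": 200.0, "D": 50.0}
measurements = {
    "C": [190, 212, 205, 195, 208],
    "D": [47, 53, 55, 44, 51],
}

def stdev(xs):
    # Sample standard deviation (n - 1 in the denominator)
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

for product, values in measurements.items():
    # Percent deviation from target for each measurement
    pct_dev = [100 * (x - targets[product]) / targets[product] for x in values]
    # Standard deviation expressed as a percent of the target
    sd_pct_of_target = 100 * stdev(values) / targets[product]
    print(product, [round(p, 1) for p in pct_dev], round(sd_pct_of_target, 1))
```

If the standard deviation as a percent of target differs widely between products, as it did for Virginia (5.5% versus 11.5%), a single percent-deviation chart will have the wrong limits for some products, which motivates the Zed values discussed next in the article.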
Q: What would be a more appropriate way to analyze the data?

A: If the variability of the data values is different for the different target values in the reactor, the best way to analyze the data and understand the behavior of the reactor is to plot the deviation from target divided by the estimated value of the standard deviation for that specific product. These values are called Zed values. The resulting chart, Zed Chart for Reactor 11, shows that the process is predictable. Because the specifications are written as a percent of target, the capability analysis would need to be done for each product separately. This situation illustrates the difference between analyzing data to understand the behavior of a process versus analyzing data to determine if the product meets specifications.

Key Lessons
• Expressing data values as a percent deviation from the target assumes that the average of the values equals the target and that the variation is a function of the target.
• When using the same process for products with different requirements, measurements of the different products should be put on the same chart so the process behavior can be seen.
• Short run charts adjust for the nominal or average associated with a product as well as for the differing amounts of variation that might exist, as in this example.

SECTION 3 Advanced

16. Process Capability for Production Line 5 (December 2000)

Answers to December 2000 Brain Teaser

Q: Bryan analyzed the capability of Production Line 5 using statistical process control (SPC) software. Analyze the data using SPC techniques to determine the capability for Production Line 5.

A: An average and range chart for these data, Production Line 5: Average and Range Chart, shows the process is not predictable. Bryan observed that many points were outside the control limits on the average chart and one point was just outside the limits on the range chart.
The Histogram for Production Line 5 shows many values outside the calculated Natural Process Limits of 43.47 and 44.32. These Natural Process Limits, or 3-sigma limits, are calculated from the data using a standard deviation based on subgroup variation, the “noise” of the process. The standard deviation is estimated by dividing the average range by a bias correction factor called d2, which can be found in tables of control chart constants indexed by subgroup size. For subgroups of size 4, the d2 value is 2.059, and this gives an estimate of 0.1412 for the standard deviation of the process. Because of the range value outside the limit on the range chart, the standard deviation calculated from the average range could be inflated and the Natural Process Limits slightly too wide. Any statement of the capability of Production Line 5 is premature and not justified by the analysis. The Natural Process Limits can be used to describe a hypothetical capability, which would exist only if the process were brought into a predictable state. Actions to eliminate the causes of the signals are required before the hypothetical capability could be achieved.

Q: Karen used the average and standard deviation functions in a spreadsheet program to analyze the data. What is the process capability for Production Line 5 using these functions?

A: Using the spreadsheet function to calculate a standard deviation for the data from Production Line 5, Karen obtained an answer of 0.472. This value is more than three times larger than the one Bryan calculated. The standard deviation of 0.472 leads to Natural Process Limits of 42.48 to 45.31.

Q: What is the difference between Bryan’s analysis and Karen’s analysis? Which report is correct?

A: While Bryan’s approach was based on variation within subgroups of data, Karen’s approach was based on a global calculation.
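The contrast between the two calculations is easy to demonstrate. The sketch below builds hypothetical subgrouped data with one deliberately shifted subgroup standing in for an assignable cause, then computes the standard deviation both ways, using d2 = 2.059 for subgroups of size 4 as in Bryan's analysis:

```python
import random

random.seed(1)
d2 = 2.059  # bias correction factor for subgroups of size n = 4

# Hypothetical data: 12 subgroups of 4 with only routine noise, plus one
# shifted subgroup that plays the role of an assignable cause.
subgroups = [[random.gauss(44.0, 0.14) for _ in range(4)] for _ in range(12)]
subgroups.append([random.gauss(44.8, 0.14) for _ in range(4)])  # shifted

# Within-subgroup estimate (Bryan's approach): average range / d2
avg_range = sum(max(s) - min(s) for s in subgroups) / len(subgroups)
sigma_within = avg_range / d2

# Global estimate (Karen's approach): one standard deviation over all values
values = [x for s in subgroups for x in s]
mean = sum(values) / len(values)
sigma_global = (sum((x - mean) ** 2 for x in values) / (len(values) - 1)) ** 0.5

print(round(sigma_within, 3), round(sigma_global, 3))
# The global estimate is inflated because it absorbs the shift along with the noise.
assert sigma_global > sigma_within
```

The within-subgroup estimate tracks only the routine noise, while the global estimate is inflated by the shifted subgroup, which is exactly the difference between Bryan's 0.1412 and Karen's 0.472.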
For a global calculation of the standard deviation, all of the data are considered as a single subgroup of 52 data values. The global calculation of the standard deviation includes variation from the “noise” of the process as well as the exceptional variation that causes the process to be unpredictable. This added source of variation is what caused Karen’s standard deviation to be larger than the one Bryan calculated. The correct approach is the one Bryan took. Once the assignable causes of the exceptional variation are removed, the process will be capable of the Natural Process Limits that Bryan calculated.

Key Lessons
• The key to a correct analysis of process behavior is to create subgroups of data that contain only routine variation.
• Process behavior chart limits are calculated from the within-subgroup variation, which is assumed to be only routine variation.
• If ranges are used to study within-subgroup variation, an estimate of the process standard deviation is obtained by dividing the average range by a bias correction factor called d2.
• A global approach to calculating a standard deviation uses all data values in the set without consideration of the subgrouping and ignores the difference between routine and exceptional variation.
• When using a global approach to calculating a standard deviation, the result will be inflated in the presence of exceptional variation in the process.

17. Fabrication of Media (October 2000)

Answers to October 2000 Brain Teaser

Q: Paula set up an average and range chart for the thickness data using a subgroup of size n=3. Construct such a chart using the data provided and interpret the results of this analysis for her.

A: A process behavior chart with a subgroup of size three is shown in the graph entitled, Average and Range Chart for Media Thickness. Most of the average values for the coils are outside the limits on the average and range chart, indicating a process that is unpredictable.
However, the average range, which determines the width of the limits on the charts, is based on ranges that include only the variation in media thickness on the same roll. While these ranges are within the limits on the range chart, none of the roll-to-roll variation is present in these ranges. Therefore, the limits do not provide for the typical amount of roll-to-roll variation that is almost certainly present.

Q: If Paula wanted to determine the behavior of media thickness from roll to roll, how should the data be organized on a process behavior chart?

A: To determine the variation of media thickness from roll to roll, Paula would need to analyze the average value for each coil using an individuals and moving range chart like the one shown in the graph entitled, Individuals and Moving Range Chart for Coil Averages. The moving ranges would pick up the variation that is present in the process from roll to roll, and the limits would reflect this additional variation. A three-way chart, consisting of a range chart for the three individual data values on each coil together with an individuals and moving range chart of the subgroup averages, is useful to keep track of variation within each coil as well as the process variation from coil to coil.

Q: Is this process meeting the specifications for media thickness? What could Paula predict for media thickness?

A: Based on the chart for the average media thickness for each coil, Paula could predict an average of 12.57 for the average media thickness of the coils, with natural process limits for coil averages of 10.6 to 14.5. Currently, all of the coil averages are inside the specifications, but the capability ratios, Cp = 0.76 and Cpk = 0.72, indicate that the process is not capable of meeting the requirements with the averages.
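The individuals and moving range chart of the coil averages can be sketched as follows. The coil averages below are hypothetical stand-ins for Paula's data; the constants 2.66 (for the individuals limits) and 3.268 (for the upper range limit) are the standard XmR chart scaling factors for moving ranges of two successive values:

```python
# Individuals and moving range (XmR) chart limits for per-coil averages.
# Hypothetical coil averages -- stand-ins, not the actual media data.
coil_averages = [12.1, 13.0, 12.4, 11.8, 13.2, 12.6, 12.9, 12.3, 13.1, 12.5]

# Moving ranges: absolute differences between successive coil averages
moving_ranges = [abs(b - a) for a, b in zip(coil_averages, coil_averages[1:])]
x_bar = sum(coil_averages) / len(coil_averages)
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Natural process limits for the individuals: X-bar +/- 2.66 * average moving range
lnpl, unpl = x_bar - 2.66 * mr_bar, x_bar + 2.66 * mr_bar
# Upper limit for the moving range chart: 3.268 * average moving range
url = 3.268 * mr_bar

print(round(lnpl, 2), round(unpl, 2), round(url, 2))
# Coil averages outside the natural process limits would be signals
out_of_limits = [x for x in coil_averages if not lnpl <= x <= unpl]
```

Because the moving ranges are differences between successive coils, the limits include the roll-to-roll variation that the within-coil ranges miss.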
Capability analyses should be conducted on individual data values; however, it would be inappropriate to do a capability analysis based on the original data as charted using an average and range chart for the three values on each coil, because the variability in the range values does not include roll-to-roll variation.

Key Lessons
• To study process behavior, data must be organized so that routine process variation will be present within the data values that form the subgroup.
• In some processes, particularly batch operations, the variation within a batch may not appropriately reflect the routine process variation over time.
• When two sources of variation – within subgroup and between subgroup – must be monitored, a three-way chart can be used.
• Three-way charts consist of a range chart for the within-subgroup variation combined with a chart for subgroup averages and a moving range chart of the subgroup averages.

18. A Filling Operation (July 2000)

Answers to July 2000 Brain Teaser

Q: Which characteristic should be analyzed using statistical process control (SPC)? Justify your choice.

A: The data for all three characteristics can be analyzed using SPC, but fill height and cork depth measurements relate directly to specific actions in the filling and corking processes. The gap between fill height and cork depth is a result of the other two and can be determined by subtraction. Fill height and cork depth are the key characteristics for real-time SPC in the filling process because they relate directly to something that can be changed. If fill height and cork depth are predictable at the appropriate average and variability, then the gap will be predictable as well. An additional chart on the gap is not needed. If fill height or cork depth is predictable but the average or variability differs from what the specifications require, actions should be taken to fix the fill height or cork depth process.
The same types of actions apply if fill height or cork depth is not predictable.

Q: Use the current data to determine if this process can meet cork depth specifications.

A: The process behavior chart for cork depth shows this process is predictable at an average of 48.52 millimeters and an average range of 0.49 millimeter. The capability analysis shows that the cork depth is centered close to the nominal, but the natural process limits, 47.90 and 49.15, are outside the specification limits. The process variability is greater than the specifications allow. Capability values are below 1, with Cp = 0.798 and Cpk = 0.766. Even though all of the current values are inside the specifications, the projection is that the cork depth process cannot meet the specifications for all results.

Q: What is the best way to determine the process capability of the gap between the fill height and the cork depth?

A: Determining the capability of the gap between the fill height and cork depth is possible by directly analyzing the gap values. With a predictable process, a capability analysis will reveal if the gap is meeting specifications. However, if the gap is not capable or not predictable, any corrective actions must consider fill height or cork depth. Because the gap is a function of fill height and cork depth, determining the capability of the gap is also possible using the analyses for fill height and cork depth. When these characteristics are in statistical control, a combination of the fill height and cork depth averages and standard deviations will describe the behavior of the gap. The gap average will be the difference between the fill height average and the cork depth average. The gap standard deviation will be the square root of the sum of the variances for fill height and cork depth; the variance of a characteristic is the square of its standard deviation. This approach relates the capability of the gap to the behavior of fill height and cork depth, which can be changed if required.
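Because the gap is derived from fill height and cork depth, its average and standard deviation follow directly from the two component analyses, assuming the two characteristics vary independently as the variance-addition rule requires. A minimal sketch with hypothetical numbers (the values below are illustrations, not the article's data):

```python
import math

# Combining predictable fill height and cork depth to describe the gap.
# All numbers below are hypothetical illustrations.
fill_mean, fill_sd = 63.0, 0.30   # fill height (mm)
cork_mean, cork_sd = 48.5, 0.21   # cork depth (mm)

# Gap average: difference of the two averages
gap_mean = fill_mean - cork_mean
# Gap standard deviation: variances (squared sds) add, then take the root
gap_sd = math.sqrt(fill_sd ** 2 + cork_sd ** 2)

# Natural process limits for the gap, to compare against the gap specifications
lnpl, unpl = gap_mean - 3 * gap_sd, gap_mean + 3 * gap_sd
print(round(gap_mean, 2), round(gap_sd, 3), round(lnpl, 2), round(unpl, 2))
```

The resulting limits describe the gap's behavior without a separate gap chart, tying its capability back to the two characteristics that can actually be adjusted.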
Key Lessons
• Not all finished product characteristics with customer specifications need to be analyzed with process behavior charts.
• Those characteristics whose value is determined directly from a process step or activity, such as fill height, should be studied using process behavior charts.
• Some finished product characteristics, such as the gap between fill height and cork depth, are a function of, or dependent upon, other product characteristics whose values are directly connected to a process activity or step.
• The behavior of finished product characteristics such as the gap between fill height and cork depth can be assessed indirectly from an analysis of the characteristics directly connected to the process.

SECTION 4 Measurement Studies

19. Comparing Measurement Methods (January 2001)

Answers to January 2001 Brain Teaser

Q: Analyze these data and determine if Traci is correct in her assertion that audit net weights are lower than production net weights.

A: In deciding how to analyze these data, it is imperative to be clear on the structure of the data collection. The two methods, A and B, were used on the same bottles, so each container has a value for Method A and a value for Method B. Furthermore, the containers were measured in 4 sets of 3 bottles each. A traditional hypothesis testing approach would be to calculate a t value for paired differences for Traci and for Carl. Traci had an average difference between Methods A and B of 1.6 with a standard deviation of 0.63, giving a t value of 8.78, while Carl had an average difference of 1.8 with a standard deviation of 0.396, giving a t value of 15.65. In both cases, the t value indicates that the difference is definitely real (not zero) and that Method A gives results that are systematically higher than Method B. However, since the data were collected in repeated sets of 3, it would also be appropriate to make an average and range chart of these subgroups of size n=3.
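The paired-t calculation just described is straightforward to reproduce from the quoted summary statistics, with n = 12 paired containers (4 sets of 3) for each person. Small differences from the article's quoted t values of 8.78 and 15.65 come from using the rounded standard deviations:

```python
import math

def paired_t(mean_diff, sd_diff, n):
    # t statistic for paired differences: t = d_bar / (s_d / sqrt(n))
    return mean_diff / (sd_diff / math.sqrt(n))

t_traci = paired_t(1.6, 0.63, 12)    # about 8.8; the article reports 8.78
t_carl = paired_t(1.8, 0.396, 12)    # about 15.7; the article reports 15.65

# Both t values are far beyond the critical value for 11 degrees of freedom
# (about 2.2 at the 5% level), so the difference between the methods is real.
print(round(t_traci, 2), round(t_carl, 2))
```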
There are a total of 8 subgroups, 4 each for Traci and Carl. The differences between the two methods are shown on the chart entitled, Difference Between Methods A and B. The grand average is 1.696 with control limits of 0.673 and 2.719. There are no signals of assignable causes of variation. Thus we would expect that, on average, Method A is 1.696 higher than Method B. The control chart shows that the results are consistent over time and provides a basis from which to make predictions. From either analysis, we see that Method B, which is used for audits, gives lower values than Method A.

Q: What should be done to bring the two techniques into closer alignment?

A: When emptying containers, there is always the chance that some of the fill remains in the container. If Method B cannot be changed, then a constant of 1.7 can be added to the results for Method B to make them comparable.

Q: What additional questions need to be answered?

A: Would this same constant apply for heavier or lighter containers? Would other operators get the same results as Traci and Carl? Does the technique for Method B need to be changed? Will this difference of 1.7 hold over time?

Key Lessons
• Traditionally, hypothesis testing for paired differences between the two measurement methods, A and B, would be used in analyzing these data.
• Using process behavior charts, it is possible to analyze the difference between the two measurement methods and also determine if there are any signals of exceptional or assignable causes in the process.
• By using the process behavior chart approach rather than hypothesis testing alone, any exceptional variation can be seen, identified and corrected.

20. How Good is my Measurement Process: Parts I and II (August 2001 & September 2001)

Answers to August 2001 Brain Teaser

Q: What is the variability (test-retest error or repeatability) because of the measurement technique?

A: The four operators tested the three parts to generate 12 subgroups of data.
The first step in analyzing these data is to construct an average and range chart for these subgroups, as seen in the chart, Measurement Study for Height. From the range chart, the test-retest error is determined by dividing the average range by 1.128, the factor that converts the average range to a standard deviation for subgroups of size n=2. Here the average range is 1.592, giving a test-retest error of 1.592/1.128, or 1.411.

Q: Do the operators report comparable answers? If not, which operators differ from the others?

A: The easiest way to detect differences among operators is to use an Analysis of Means (ANOM) technique. An ANOM of the four operators’ average values is shown in the chart, Operator ANOM. Two operator averages are outside the 3-sigma limits for the operators, which indicates that these two operators give detectably different results from the others.

Q: Is this measurement technique adequate for measuring the characteristic under study?

A: If the test-retest error of a measurement technique is small relative to the variability of the parts being measured, then the technique is adequate to detect part-to-part variation. The formula for the discrimination ratio, DR, can be found in Evaluating the Measurement Process, 2nd Edition, by Donald J. Wheeler and Richard Lyday. For this situation, DR = 5.7. A DR that is greater than 4 indicates that the measurement technique is adequate for the current situation. If the variability in the parts were to be reduced, the DR would also be reduced, and the measurement technique would need to be improved.

Q: What actions should Edward take based on the information from the measurement study?

A: Edward should take actions to minimize the differences among operators. This may require work on operational definitions as well as training.
While some amount of variation will always be in this process, it is desirable to reduce the variation between operators to a level that is just noise.

Key Lessons, Part I
• Variation in a measurement process, the test-retest error, is calculated from repeated measures of the same units by the same person using the same device. This is called the repeatability of the measurement process.
• An Analysis of Means (ANOM) is useful for detecting a bias that might exist among the different operators who do the measuring.
• Operator bias is often referred to as reproducibility and is combined with the repeatability to obtain an overall value for measurement variation.
• Rather than combining in the extra variation attributable to operator bias, any differences between operators should be eliminated.
• The discrimination ratio, DR, provides guidance regarding the ability of a measurement process to detect that different units definitely have different measured values.

Answers to September 2001 Brain Teaser

Q: Is there evidence in these data that the training program for operators was successful?

A: In the initial study, two of the four operators gave detectably different results from the other two operators. During the training program, all four operators reviewed the measurement technique and practiced it so that they would all use the same technique. The Analysis of Means (ANOM) from the follow-up measurement study is shown in the graph, Operator ANOM, Follow-Up Study. In this graph, all of the operator averages are inside the limits of variation based on the test-retest error from the follow-up study. Currently, all four operators are using the measurement procedure in a consistent manner. Periodic follow-up studies are advisable.

Q: What is the current test-retest error or repeatability of the measurement process?
A: To find the test-retest error or repeatability of the measurement process, first make an average and range chart with the 12 subgroups of data (4 operators x 3 parts = 12). This chart is entitled, Follow-Up Measurement Study for Height. The test-retest error is found by dividing the average range on the chart by the factor 1.128 for subgroups of size n=2. This gives a test-retest error of 1.77/1.128, or 1.57. This value for the test-retest error in the follow-up study is comparable to the value of 1.411 in the original study.

Q: Is this measurement process still adequate for detecting differences in the parts?

A: The discrimination ratio (DR) is used to determine if the measurement technique is adequate to detect part-to-part variation. The formula for computing the discrimination ratio can be found in Evaluating the Measurement Process, 2nd Edition, by Donald J. Wheeler and Richard Lyday. For the follow-up study, the discrimination ratio is DR = 4.0. This value indicates that the measurement process is currently adequate, but any improvement in the part-to-part variation would also require that the measurement process be improved.

Key Lessons, Part II
• If training in measurement technique results in no operator bias, then the measurement process variation consists only of the repeatability.
• As with other values calculated from data, the discrimination ratio will change somewhat from one study to the next.
• A discrimination ratio with a value of 4.0 or lower will not adequately detect part-to-part variation and therefore cannot distinguish improvements in the process.
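The repeatability calculations from both studies, and the role of the discrimination ratio, can be sketched as follows. The average ranges 1.592 and 1.77 come from the text; the DR formula shown is one form consistent with Wheeler and Lyday's definition, DR = sqrt(2(sigma_p/sigma_e)^2 + 1), where sigma_p is the part-to-part standard deviation; the sigma_p value used below is a hypothetical illustration, since the studies report only the resulting DR:

```python
import math

# Repeatability (test-retest error) from the average range of paired readings:
# sigma_e = Rbar / d2, with d2 = 1.128 for subgroups of size n = 2.
def repeatability(avg_range, d2=1.128):
    return avg_range / d2

sigma_e_initial = repeatability(1.592)   # 1.411, as in the original study
sigma_e_followup = repeatability(1.77)   # 1.57, as in the follow-up study

# Discrimination ratio: DR = sqrt(2 * (sigma_p / sigma_e)**2 + 1),
# where sigma_p is the part-to-part standard deviation.
def discrimination_ratio(sigma_p, sigma_e):
    return math.sqrt(2 * (sigma_p / sigma_e) ** 2 + 1)

# sigma_p here is a hypothetical value for illustration only.
sigma_p = 4.0
print(round(sigma_e_initial, 3), round(sigma_e_followup, 2),
      round(discrimination_ratio(sigma_p, sigma_e_followup), 1))
```

As the Key Lessons state, a DR above 4 indicates the measurement process can distinguish part-to-part differences; at or below 4 it cannot, and improving the parts would first require improving the measurement process.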
Published by Quality Magazine.
This page can be found at http://digital.bnpmedia.com/article/Section+Five%3A+Answers+With+Key+Lessons/1637736/197664/article.html.