Title: Taguchi Method for Robust Design
Authors: Jennifersue Bowker, George Cater, and Kibwe John
Date Presented: 12/5/2006
Introduction to Robust Design
Taguchi methods for robust design focus on producing higher-quality goods faster and more cheaply, with greater consumer satisfaction. The idea is to develop a family of products or processes that are optimized so that, in the future, all that is required is proper scale-up. These approaches use non-standard statistical analyses and a novel methodology for approaching manufacturing processes, which can be applied in numerous ways to the typical roles of the chemical engineer. The following wiki article presents the basics of robust development in a qualitative fashion.
Basics of Robust Design
The concept of robust design was pioneered by Dr. Genichi Taguchi to improve engineering productivity and the quality of manufactured goods. This approach to controls and design engineering incorporates innovative statistical analysis as well as new approaches to the design of experiments. What distinguishes the Taguchi method is its heavy reliance on cost analysis of the product in the field, where it will ultimately be used, and on the effect the product will have on the consumer, with the aim of increasing end-product satisfaction. Robust design itself focuses on improving the fundamental function of the product or process, thereby allowing for concurrent engineering, where essential elements, or blocks, can be reused across a number of goods. In this way, think of the Taguchi method as the design of a Lego® kit, where numerous products can be developed using the same basic units. Thus, the Taguchi method for robust design is a powerful tool for reducing cost, improving quality, decreasing development time, and increasing customer satisfaction.
In the past, robust design was applied specifically to those variables that were under-specified in a design system. By improving upon these under-specified parameters, any effect of process and environmental factors on the final performance of the end product is minimized. While this is still the case, later approaches extended the idea to the development of a family of goods that, once produced, provide a generic function to other products. The optimization of this generic function begins in the product planning and laboratory research phases, with the goal that the effectiveness of the product will be least affected by the end user, in a systematic, reproducible way. To verify the robust design criterion of reproducibility, a number of topics are discussed mathematically in other wiki articles, including one- and two-way layouts, orthogonal arrays, and factorial designs. All of these statistical manipulations rely on the Taguchi loss function, an innovative equation for determining end-product profitability.
The following items will be addressed below in more detail.
1) Generic Concepts of Robust Design
2) Advantages to Robust Design Implementation
3) Application of Robust Technology Development
The repeated use of S/N ratios in product development will also be discussed briefly, because their analysis is central to robust design methodology.
Concepts of Robust Design
The main principles behind the Taguchi method for robust design are:
1) Robustness is first, adjusting average to meet the target is last.
2) To improve product quality, parameter design is first, tolerance design is last.
This "two-step" optimization technique utilizes the idea that improving the functionality of a process will reduce its variability, thus resulting in more precise control of product quality. To incorporate the Taguchi method into product-improvement engineering, three design criteria must be considered:
System Design: Development of a system to meet a defined objective
Parameter Design: Selection and optimization of controllable parameters within the system
Tolerance Design: Determination of limitations in variability for each parameter
System design is the most important criterion because the system's functionality is the main indication of whether the defined objective can be met with reproducible results. Selecting an adequate system design is also the most difficult task, because the functionality of the system cannot be confirmed without results obtained from the parameter design. Parameter design is the second most important criterion because it is used to optimize the system and reduce variability in the product; since it requires experimentation or simulation, it must be done with maximum efficiency. Tolerance design is the last criterion for improving product quality because only after the system is defined and optimized can the product quality limitations be set. Using these criteria to build a new process or develop a new product is a systematic approach to robust design, because one can apply it to many different applications or industries (flexibility) and obtain results with less variability (reproducibility).
The first concept in the robust design of a product or process system is the selection of the technical means to meet a specific objective function. An example of an objective function may be to modify an existing washer's design to meet the consumer's need for a higher-efficiency washer that minimizes energy usage. In designing a process system or developing a new product, it is critical to remember that a greater number of control variables (a more complex system) will allow for improvement in a larger number of subsections of that system or product. Each control variable is then selected by an engineer in terms of its generic function. The selection of the generic function must serve to meet the requirements of the objective function. From the previous example, choosing to add a temperature control system, a timed rinse cycle, and a lower-power agitator to create a more energy-efficient washer would serve as the generic functions.
The two different categories of decision-making strategies employed by engineers to aid in the proper selection of the generic functions are:
- Error-free implementation using a collection of past knowledge and experience
- Generation of new design information used for improving quality, reliability, performance of the product/process as well as reducing the associated cost
The three generic functions used in the washer design above stem from the second category because improving efficiency is an important customer need.
Another consideration in system design occurs when integrating an existing and new process system. The degree of improvement in product quality depends on the complexity of both the individual and integrated systems. For example, when adding a temperature control (TC) system to an existing washer design, the reduction in energy usage is dependent on a variety of factors.
- Compatibility of the TC system with the current washer design
- Material of component parts used in the TC system
- Number of settings in the TC system
- Controllability of the temperature
For the last two factors, if there are only three temperature settings (High, Medium, and Low) and the temperature must be maintained within 1 degree of the setpoint, then the system may not be complex enough to produce an appreciable improvement in the energy reduction of the washer.
Once the product or system design is chosen, optimal settings for the control parameters are determined using one of two approaches: developing a prototype or running a simulation. The main objective of either approach is to find the settings that give the greatest reduction in variability of product quality. The first approach requires experimentation (with inexpensive raw materials or parts) under certain conditions (usually specified by the customer), while the second approach uses the same conditions but does not require costly experiments.
If experimentation is performed, using an orthogonal array increases efficiency by quickly identifying the most concise experimental test plan with the fewest number of trials. If the results from either approach meet the system objective under the specified conditions, the study of the system's robustness is complete. At this point, the product quality target can be adjusted in the tolerance design of the system.
Tolerance design for a system is a methodology for finding acceptable limits of variability in the quality of a product. To understand the methodology outlined below, it is first necessary to determine what is meant by "quality". There are four different types of quality: origin, upstream, midstream, and downstream. "Origin" quality refers to the robustness of developing a group or family of products, whereas "upstream" quality refers to the characteristics of one product. For example, the characteristics of a robotic arm (used for welding) are considered an origin quality because they can be used in multiple applications, such as welding car parts, steel vessels, machinery, etc. The quality of a robotic arm used for welding the frame of a car is considered of the "upstream" type, since it is specific to one particular application. "Midstream" quality refers to product specifications (i.e., purity, particle size, length), which can be measured before the product is sent to the customer. From the previous example, the actual welds on the frame are considered the midstream quality. "Downstream" quality refers to the product quality seen by the customer (i.e., performance, color, efficiency). The performance of the welds (how well they hold the frame together) is an example of downstream quality. The first two types (origin and upstream quality) are adjusted in parameter and system design, while the last two types (midstream and downstream quality) can be adjusted using tolerance design. Improvement in the downstream quality is accomplished by modifying the midstream quality (i.e., tightening the tolerance). However, to determine whether the decrease in the variability of a product specification is worth the cost, the loss function must be analyzed, which is done by finding the signal-to-noise ratio.
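The loss function mentioned above can be sketched numerically. The following is a minimal illustration (all numbers are hypothetical), assuming the standard Taguchi quadratic form L(y) = k(y − m)², where m is the target and k is set by the loss incurred at the tolerance limit:

```python
def quality_loss(y, target, k):
    """Taguchi quadratic loss: L(y) = k * (y - target)^2.
    k is the cost coefficient: the loss at the tolerance limit
    divided by the squared half-tolerance."""
    return k * (y - target) ** 2

# Hypothetical weld-thickness example: target 5.0 mm,
# and a $20 loss is incurred at the +/-0.5 mm tolerance limit.
k = 20 / 0.5 ** 2                   # k = 80 $/mm^2
print(quality_loss(5.0, 5.0, k))    # 0.0 (on target, no loss)
print(quality_loss(5.25, 5.0, k))   # 5.0 (within tolerance, but loss is nonzero)
```

The key point the quadratic form captures is that loss accrues for *any* deviation from target, not only outside the specification limits.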
Signal-to-Noise (SN) Ratio
What is it?
Quality engineering is in essence the evaluation of functionality, and a measure known as the signal-to-noise ratio (SN ratio) is employed for this purpose. The concept of the SN ratio has been used in the communications industry for almost a century, but thanks to the efforts and breakthroughs made by Taguchi in generalizing the concept, it is now commonly used for the evaluation of measurement systems as well as of the function of products and processes. Conceptually, the SN ratio is the ratio of signal to noise in terms of power. More specifically, it is the ratio of the magnitude of energy used for the objective function to the magnitude of energy consumed by variability. The SN ratio can also be viewed as the ratio of sensitivity to variability and is typically measured in decibels (dB). For example, a value of 30 dB indicates that the signal is 1000 times more powerful than the noise. Thus, the larger the SN ratio, the better the quality. The SN ratio highlights the interactions between the signal factor, the control factors, and the noise factors, and when used along with orthogonal arrays it enables one to avoid undesirable interactions between control factors.
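The decibel conversion quoted above (30 dB ↔ a 1000:1 power ratio) follows directly from the definition; a quick sketch:

```python
import math

def sn_db(signal_power, noise_power):
    """Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise)."""
    return 10 * math.log10(signal_power / noise_power)

# A signal 1000x more powerful than the noise gives 30 dB.
print(sn_db(1000.0, 1.0))  # 30.0
```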
The input-to-output relationship is studied in a measurement system, where the true value of the object is the input and the result of the measurement is the output. A good measurement system must fulfill the following criteria. First, the result of the measurement must be proportional to the true value, so the input/output relationship is linear. Second, a good system must be sensitive to various inputs, so the desired effect is a steep slope in the input/output relationship. Finally, the variability must be small. All three criteria are lumped into a single index when an SN ratio is used to evaluate a measurement system, which allows easy evaluation and improvement of a system using these ratios.
How to use it
Traditionally, engineers tended to design a process or product by first meeting the target instead of maximizing robustness. This outlook is very inefficient, because after the target was met the engineer had to make sure that the product worked under various noise conditions. One can see how tedious this becomes: when investigating the effect of one noise factor, all other noise factors must be fixed. This approach is analogous to solving many simultaneous equations by trial-and-error experimentation on hardware rather than by calculation. To maximize robustness for a system, one must maximize the SN ratio. Another advantage of using the SN ratio is its direct relationship with economy. As mentioned briefly earlier, this ratio is defined by the following equation (reconstructed here in its standard dynamic form, where β is the sensitivity, i.e., the slope of the input/output relationship, and σ² is the variance due to noise):

η = 10 log10( β² / σ² )  [dB]
From the above equation we can see that the inverse of the SN ratio gives the variance per unit input. Since loss is proportional to variance in the loss function, monetary evaluation is therefore feasible from this ratio.
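The dynamic SN ratio can be estimated from input/output data. Below is a minimal sketch with illustrative data (not from the article), assuming the standard definition η = 10 log10(β²/σ²), with β the least-squares slope through the origin and σ² the mean squared residual:

```python
import math

def dynamic_sn(inputs, outputs):
    """Dynamic SN ratio eta = 10*log10(beta^2 / sigma^2):
    beta  = least-squares slope through the origin (sensitivity)
    sigma2 = mean squared residual about the fitted line (variability)."""
    beta = sum(x * y for x, y in zip(inputs, outputs)) / sum(x * x for x in inputs)
    residuals = [y - beta * x for x, y in zip(inputs, outputs)]
    sigma2 = sum(r * r for r in residuals) / len(residuals)
    return 10 * math.log10(beta ** 2 / sigma2)

x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 3.9, 6.2, 7.8]   # roughly y = 2x with small noise
print(round(dynamic_sn(x, y), 1))
```

A steeper slope (larger β) or smaller scatter (smaller σ²) both raise η, matching the "sensitivity over variability" reading given above.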
Advantages of Robust Technology Development
As companies move into the 21st century and the value of typical Six Sigma manufacturing standards begins to wane, the implementation of cost-saving robust design development will prove to be an economic driving force in the manufacturing industry. Robust design technologies offer three main advantages: readiness, flexibility, and reproducibility. If these traits are desired in a process system, implementing Taguchi robust design is advisable.
The essence of technology readiness is that quality is designed into the product from the beginning of the process. By researching and designing toward an ideal product, it is possible to bring products to market ahead of competitors. This depends, however, on the optimization of the manufacturing process: ideal product design is only possible once the process has been rigorously determined to be of sufficient quality.
An example of this is the development of robotic arms used for spot-welding. Once these were developed and optimized, they were used in creating a variety of other products requiring precision welding, with little concern for further improvement of their design.
After optimization, the process can produce any product whose specifications fall within the ranges for which the process was designed and studied. Therefore a single study is sufficient to optimize a group of products, decreasing costs and saving time and capital. This flexibility is achieved if linearity and sensitivity are increased and variability is decreased. To accomplish this, the dynamic signal-to-noise ratio is used, as discussed elsewhere in this wiki.
Continuing with the robotic welding arm example, it is possible to use this technology in a vast number of applications, from cars to aircraft to reactor vessels. The flexibility of the product also means it is simple to scale the technology up or down to produce other goods.
In the robust design method, control factors that interact with one another are selected against, and therefore the ultimate product quality is improved. A further advantage of proper optimization is increased reproducibility downstream.
Robotic welding arms, once their control systems are in place, are more reliable, consistent, and cost-effective than their human counterparts. The fact that these tools can mass-produce their products with little worry about quality is another advantage of their use.
Application of Robust Technology Development
Typically the activities of quality control are performed solely on the manufacturing plant floor. Statisticians and manufacturing engineers involved in the process usually play a significant role in tackling quality issues. Research and development teams and design teams usually take a hands-off role when dealing with quality.
In the Taguchi methodology, quality requirements are designed directly into new products as well as into the process, so both R&D and manufacturing are involved in maintaining quality. This prevents the typical manufacturing pattern of problem-solving or "firefighting" and shifts the focus onto creating a generic product. The strategy behind using Taguchi methods for robust design follows a logical progression involving design, experimentation, optimization, and implementation.
In summary, the steps to application are as follows:
1. Identify the generic function of the item to be optimized
2. Maximize the SN ratio using the generic function
3. Use test pieces before product planning
4. Compound noise factors into main conditions
5. Use orthogonal arrays to check for interactions
6. Calculate response tables for the SN ratio and sensitivity
7. Optimize conditions determined from the SN ratio and sensitivity tables
8. Confirm with an experiment
Although these steps are not an explicit formula for robust design, they represent the step-wise approach that is often used to achieve the goals of a robust product.
To understand this process more clearly, the concepts of robust design will be applied to the production of a specialty chemical. This process incorporates all aspects of chemical engineering: fluid flow, reactions, separations, heat transfer, and process control must all be considered, so there are many areas where robust design can be applied. In the following example, an analysis of temperature control on a reactor is considered, with the objective of increasing the robustness of the temperature control strategy and its implementation.
Identification of Generic Function
The generic function of a product or process is the relationship between the signal factor input and the output of the method the engineer is going to use. An example is the injection molding process, where the material produced is expected to have certain physical properties, such as modulus or hardness.
In the temperature control example specifically, the generic function is the use of reactor temperature to control reaction rates, side reactions, and end-product purity. However, the development of the temperature control scheme can be applied to other systems, so it should be optimized for many different applications.
Maximize the SN Ratio
Using the parameters of the generic function developed earlier, the SN ratio is maximized based on the tolerances allowable for the product. This may be done using the static or dynamic SN ratio, and it effectively maximizes the robustness of the process system being analyzed. In dynamic systems the sensitivity is analyzed, while in non-dynamic systems the average is adjusted toward the desired value.
Maximizing the SN ratio in temperature control involves minimizing the variability of the temperature the controller will output while maximizing the sensitivity of the controller to disturbances that may occur as a result of increased steam flow or an increasing reaction rate. Maximizing the SN ratio will show the operator the individual aspects that impact the temperature the most.
Using Test Pieces
To minimize time and increase efficiency, using test pieces to verify the development process provides valuable information about the validity of the system. Test pieces are used here because the final product is not kept in mind, only the particular application to be implemented. For example, the automated welding process for the manufacture of cars would first be designed not with building cars in mind, but with welding pieces of steel together. The cars came second to the task at hand, allowing for faster, cheaper implementation, because the process was determined before the actual car schematic was finalized and could be tested simply by welding a test sample of steel.
Testing the temperature controller on a fully implemented reactor would be highly inefficient in case changes must be made. Therefore, testing on a smaller heat exchanger or a small, well-characterized reactor is advisable to ensure proper functionality.
Compounding Noise Factors
To study experiments efficiently overall, it is necessary to compound the noise factors that may occur in ultimate use or production. Compounding noise basically means determining the ultimate product goal and looking at how process parameters will ultimately impact that goal. The main conditions for this are the negative-side extreme, the positive-side extreme, and the standard condition. The negative-side extreme desires the smallest possible value of a particular aspect of the product, like the corrosion of steel. The positive-side extreme desires the largest possible value, like the hardness or modulus of a ceramic. The standard condition relies on obtaining a specific desired parameter, such as the coupling between the ceramic ball and the femoral implant in a hip replacement.
For temperature control, as in many other chemical engineering applications, compounding noise factors will ultimately relate to the "standard condition," because it is desired to reach a specific temperature setpoint. Factors such as valve type, flowrates, and operator error may all be considered when thinking about how noise will enter the controller and what its response will ultimately be.
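The three conditions above correspond to the standard static SN ratio formulas. A sketch (the temperature data are hypothetical, not from the article):

```python
import math

def sn_smaller_is_better(y):
    """Negative-side extreme: eta = -10*log10(mean(y^2))."""
    return -10 * math.log10(sum(v * v for v in y) / len(y))

def sn_larger_is_better(y):
    """Positive-side extreme: eta = -10*log10(mean(1/y^2))."""
    return -10 * math.log10(sum(1 / (v * v) for v in y) / len(y))

def sn_nominal_is_best(y):
    """Standard condition: eta = 10*log10(mean^2 / variance)."""
    m = sum(y) / len(y)
    s2 = sum((v - m) ** 2 for v in y) / (len(y) - 1)
    return 10 * math.log10(m * m / s2)

# Hypothetical reactor temperatures around a 50-degree setpoint:
temps = [49.8, 50.1, 50.0, 49.9, 50.2]
print(round(sn_nominal_is_best(temps), 1))  # 50.0 dB
```

For the temperature-control example, the nominal-is-best form applies, since the goal is hitting a setpoint rather than pushing a quantity to an extreme.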
Use of Orthogonal Arrays
The mathematics behind orthogonal arrays is described in another wiki section referred to above. Essentially, orthogonal arrays allow one to check for the existence of interactions between process components.
Experimentation with orthogonal arrays will determine the most important factors involved in controlling the temperature in the reactor. The advantage of using these arrays lies in reducing the number of experiments required for full characterization.
Calculation of Response Tables
The mathematics behind response table calculations is described in another wiki. Using the response tables for the SN ratio and sensitivity provides for the optimization of the process parameters.
Similarly to orthogonal arrays, response tables give the engineer useful information about what the proper control strategy should be.
Optimum Condition Determination
The optimum condition for the process is determined from the SN ratio and sensitivity response tables. The SN ratios or sensitivities of the current condition and the optimum condition are calculated and, using the difference between them, the predicted gain is calculated.
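The response-table arithmetic can be sketched with made-up SN data (an illustration, not from the article): average the SN ratio over the runs where each factor sits at each level, pick the best level per factor, and predict the optimum from the sum of the best-level improvements.

```python
# L4(2^3) orthogonal array: levels of factors A, B, C in each of 4 runs,
# with a made-up measured SN ratio (dB) for each run.
L4 = [[1, 1, 1], [1, 2, 2], [2, 1, 2], [2, 2, 1]]
sn = [24.0, 20.0, 22.0, 18.5]

overall = sum(sn) / len(sn)
best, best_avgs = {}, {}
for f in range(3):
    # Average SN over the two runs at each level of factor f
    avgs = {lvl: sum(sn[r] for r in range(4) if L4[r][f] == lvl) / 2
            for lvl in (1, 2)}
    best[f] = max(avgs, key=avgs.get)
    best_avgs[f] = avgs[best[f]]
    print(f"Factor {'ABC'[f]}: {avgs} -> best level {best[f]}")

# Predicted optimum SN = overall mean + sum of each factor's improvement
predicted = overall + sum(best_avgs[f] - overall for f in range(3))
print(f"Predicted optimum SN: {predicted:.2f} dB (gain {predicted - overall:.2f} dB)")
```

The predicted gain is then checked against the confirmatory experiment described in the following step.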
In the temperature control scheme, optimal conditions will include the proper valves and materials to use, as well as expected controller inputs such as gain and bias.
Confirmation of Experiment
Finally, confirmatory experiments are done under both the optimal and the current conditions. Verification of the process gain is crucial to show reproducibility. If the two gains are not close enough, then the quality characteristic determined in the first step must be re-examined.
The finalized controller apparatus will ultimately be tested on a reactor in order to verify the optimal conditions assumed based on SN significance and previous experimentation. If the experiment is not confirmed then further analysis must be done in order to verify the inputs available to the temperature controller.
Glossary of Terms
Downstream quality: Characteristics that are noted by the customer
Generic function: Relationship between the signal factor input and the output of the technical means
Midstream quality (specified quality): Product specifications
Objective function: Relationship between the signal factor input used by a customer and the objective output
Origin quality: Characteristics that are defined by the robustness of multiple fixed outputs (family or group of products); expressed by the dynamic SN ratio, which relates the generic function to the objective function
SN Ratio: Ratio, in terms of power, of the magnitude of energy used for the objective function to the magnitude of energy consumed by variability
Upstream quality: Characteristics that are defined by the robustness of a fixed output (one product); expressed by the nondynamic SN ratio
Worked out Example 1
Submerged arc welding is a commonly employed industrial arc welding process in which the molten weld and the arc zone are protected from atmospheric contamination by being submerged under a blanket of granular fusible flux. This flux, when molten, becomes conductive and thereby provides a current path between the electrode and the work. For a process in which two metal plates are connected to form a butt joint, it was determined that one of the control factors was the wire feed speed, a main factor in welding current control.
For this process it was determined that the aforementioned control factor existed at three levels/conditions: A1, A2, and A3. If the SN ratios obtained for these levels were 15.8 dB, 3.2 dB, and 14.3 dB respectively, at what level should this factor be operated?
SOLUTION: The first level, A1, of the wire feed speed control factor would be the best choice for optimizing the system. At this level the magnitude of the signal-to-noise ratio is greatest, and using this level of the control factor maximizes the robustness of the experiment, leading to decreased variability in the final product.
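The selection above can be verified directly (Python used only for illustration):

```python
# Pick the level with the largest SN ratio: larger SN = more robust.
sn_ratios = {"A1": 15.8, "A2": 3.2, "A3": 14.3}
best_level = max(sn_ratios, key=sn_ratios.get)
print(best_level)  # A1
```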
Worked out Example 2
As a new engineer working for a company in gelatin manufacturing, you are assigned the task of designing a more efficient drying system, which converts a 25 wt% liquid gelatin solution to a 95 wt% solid form. Based on the Taguchi method, what steps would you take to design the drying system?
SOLUTION: There are many possible solutions to this problem, one suggested method is outlined below.
System Design: Choose the process technology that incorporates evaporation as well as solidification (i.e. spray dryer). Choose the control variables of the system (air temperature, number of sprayers and size of the spray hole, air flow rate, etc.)
Parameter Design: Use an orthogonal array to determine the fewest number of simulations that must be run to determine the optimal conditions for each parameter to achieve the desired product of 95 wt% gelatin powder. Using ASPEN or similar simulation software, test the various conditions to see which achieved the desired results and then calculate the corresponding variability for each test run.
Tolerance Design: Calculate the variance and quality loss associated with each of the various combinations. Determine which design should be implemented and the acceptable tolerance limits associated with that design based on the lowest loss.
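The tolerance-design comparison in this solution can be sketched as follows (all numbers are hypothetical), using the standard average-loss form L̄ = k·[s² + (ȳ − m)²], which penalizes both variability and bias from the target:

```python
def average_loss(measurements, target, k):
    """Average Taguchi loss per unit: k * (variance + squared bias)."""
    n = len(measurements)
    ybar = sum(measurements) / n
    s2 = sum((y - ybar) ** 2 for y in measurements) / n
    return k * (s2 + (ybar - target) ** 2)

# Hypothetical wt%-solids results from two candidate dryer designs,
# against the 95 wt% target; k is an assumed cost coefficient.
target, k = 95.0, 2.0
design_a = [94.8, 95.1, 95.0, 94.9, 95.2]
design_b = [94.0, 96.0, 95.5, 94.5, 95.0]
print(average_loss(design_a, target, k) < average_loss(design_b, target, k))  # True
```

Here design A would be selected: both designs are on target on average, but design A's lower variability gives it the lower expected loss.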
Multiple Choice Question 1
You can compare Taguchi methods for robust design to the design of which of the following childhood toys?
A) A Tamagochi digital pet
B) My Little Pony dolls
D) Tickle Me Elmo
Multiple Choice Question 2
In Taguchi design which of the following is considered to be of most importance in the two step optimization?
A) The final product average spec
B) SN minimization
D) All noise factors
References

Taguchi, Genichi, Subir Chowdhury, and Yuin Wu. Taguchi's Quality Engineering Handbook. New York: John Wiley & Sons.