Design of Experiments is a useful methodology for process improvement. The purpose is to find a relationship between process variables and key process outputs. 

I find it useful to use Excel statistical analysis tools, Solver, Pivot charts, etc. to plan and analyse the results of these experiments.

The first approach is to look for a linear relationship as shown in: 

 http://www.polyhedrika.com/2-uncategorised/32-excel-doe

But in some cases the relationship is not linear; in that case we try a quadratic model with Response Surface Analysis.

We will use an example in this Excel file you can download:

 ExcelResponseSurface.xlsm

In this example we are trying to maximise process yield by acting on the critical factors pH, Temperature and Time.

We will run the experiments in the Experiments simulation sheet using coded values.  

Code    pH    Temperature    Time
 -1      2        120          7
 +1     12        150         15
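
For reference, the conversion between coded and actual values is linear: actual = centre + coded * half-range. A minimal Python sketch of this mapping, using the ranges from the table above:

    # Coded-to-actual conversion implied by the table above:
    # actual = centre + coded * half_range, for each factor.
    factors = {
        "pH":          (2, 12),     # (low, high) at coded -1 / +1
        "Temperature": (120, 150),
        "Time":        (7, 15),
    }

    def decode(factor, coded):
        low, high = factors[factor]
        centre = (low + high) / 2
        half_range = (high - low) / 2
        return centre + coded * half_range

    print(decode("pH", -1))   # 2.0
    print(decode("pH", 0))    # 7.0 (central point)
    print(decode("Time", 1))  # 15.0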

Factorial Experiments

We start by running a full factorial experiment with 2 central points:

We now compute the 2-way interactions by multiplying the corresponding columns:
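
Outside Excel, the same design matrix can be sketched in a few lines of Python (illustrative only; the experiments themselves run in the workbook):

    import itertools

    # 2^3 full factorial in coded units, plus two central points, with the
    # three 2-way interaction columns obtained by multiplying the factors.
    runs = [list(r) for r in itertools.product([-1, 1], repeat=3)]
    runs += [[0, 0, 0], [0, 0, 0]]          # two central points

    for ph, temp, time_ in runs:
        print([ph, temp, time_, ph * temp, ph * time_, temp * time_])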

We now use Excel Data Analysis/ Regression: 

This confirms that the linear mathematical model is not valid: the adjusted R^2 is very low and the p values for all factors and interactions are above 0.05.

We suspect curvature, so we look at the average yield values for the pH inputs with a Pivot Chart:

We can clearly see the lack of linearity: the yield for pH = 0 is totally out of line with the -1 and +1 values.

 Response Surface Designs

In order to fit a quadratic model we need two additional levels for each factor. In our case of 3 factors we have 2 possible designs:

Both the CCD and CCI designs involve the same number of experiments (20), but in the CCD the two additional levels go beyond the -1 and +1 values we originally wanted to experiment with.

In the following table we can see the correspondence between coded and uncoded values for each factor:

We cannot run the CCD because the resulting pH values are physically impossible (pH varies between 0 and 14).

Therefore we will run a CCI design for 3 factors.
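
For readers who want to generate the design programmatically, here is a hedged Python sketch of a CCI design for 3 factors, assuming the usual rotatable alpha = 2^(3/4) ≈ 1.682 and 6 centre points (20 runs in total):

    import itertools

    # CCI (central composite inscribed) for k = 3 factors: the cube points
    # are shrunk inside the +/-1 range and the axial points sit at +/-1.
    k = 3
    alpha = 2 ** (k / 4)                    # ≈ 1.682 (rotatable design)
    cube = [[c / alpha for c in pt]
            for pt in itertools.product([-1, 1], repeat=k)]
    axial = []
    for i in range(k):
        for s in (-1.0, 1.0):
            pt = [0.0] * k
            pt[i] = s
            axial.append(pt)
    centre = [[0.0] * k for _ in range(6)]
    design = cube + axial + centre
    print(len(design))                      # 20 experiments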

 

Response Surface CCI Experiments 

By loading the coded CCI design in our simulator we obtain:

 

Analysis with Solver

We are going to use two alternatives to fit a quadratic formula to our data: Solver and Regression.

We start with the Solver alternative:

The purpose of the analysis is to find a quadratic mathematical formula to approximate the behaviour of our process. 

This formula will have as terms an independent value, the 3 factors each with a coefficient, the 2-way interactions, and the squares of each factor with their coefficients, as shown in J2 to R2.

To use Solver we start by defining where we want Solver to store our results (the coefficients): they will be the yellow cells I3 to R3.

Assuming we already have the coefficients, we estimate yield for each experiment in column F with a formula in F3 which we replicate all the way down. In this formula we use the corresponding factor values in columns B, C and D with the coefficients in I3 to R3.

In column G we enter =(actual yield - estimated yield) ^ 2

In G23 we have the sum of the G column.

Now we ask Solver to calculate the values from I3 to R3 to make G23 a minimum.

Then we obtain the coefficients we were looking for in I3 to R3.
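
The same fit can be reproduced outside Excel: minimising the sum of squared residuals of a model that is linear in its coefficients is exactly what least squares does. A sketch with placeholder data (not the workbook's actual results):

    import numpy as np

    # Placeholder stand-ins for the 20 coded CCI runs and measured yields.
    rng = np.random.default_rng(0)
    runs = rng.uniform(-1, 1, size=(20, 3))
    y = (20 + 15 * runs[:, 0] + 7 * runs[:, 2]
         - 25 * runs[:, 0] ** 2 + rng.normal(0, 0.5, 20))

    def quad_terms(ph, temp, time_):
        # intercept, linear terms, 2-way interactions, squares (as in J2:R2)
        return [1, ph, temp, time_,
                ph * temp, ph * time_, temp * time_,
                ph ** 2, temp ** 2, time_ ** 2]

    X = np.array([quad_terms(*row) for row in runs])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # the coefficients (I3:R3)
    sse = np.sum((y - X @ coef) ** 2)              # what Solver minimises (G23)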

Optimization with Solver

Now that we have the formula to estimate Yield from pH, Temperature and Time we want to find the values that will give the maximum yield:

We will again use Solver, this time to find these optimal values. We use the coefficients in I3:R3 we just calculated and the values in I13 to R13. The only values we ask Solver to calculate are J13 to L13. All the other values, for interactions and squares, are calculated from these with formulas in the corresponding cells. In I13 we put 1, which will be multiplied by the independent term in I3.

In K15 we put the formula for yield, which is just the SUMPRODUCT of the coefficients in I3:R3 and the values in I13:R13.

We ask Solver to maximize Yield (K15) by changing the values of our 3 factors in J13:L13, with the constraint that they must be between -1 and +1.

We obtain the optimal coded values pH = 0.26, Temperature = 0 and Time = 0.2, which give a Yield of 22.92.

Analysis with Regression

 Another Excel alternative is to approximate the quadratic formula using regression.

In order to do this we need to build the table of interactions and squares just by multiplying the corresponding columns of the factors. 

We can now go to Data Analysis/ Regression:

Now we see excellent R^2 values and can identify which factors and interactions are significant (p < 0.05).

Removing the non-significant terms:

We obtain coefficients similar to those we calculated with Solver.

Optimize Yield with Solver 

We can again use Solver to maximize yield:

The formula is simpler because Temperature was not significant:

Yield = 20.31 + 15.05 pH + 6.68 Time - 24.81 pH^2 - 12.17 pH*Time - 9.11 Time^2

This formula is for coded factor values.
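
As a cross-check of the Solver result, we can maximise this formula with a generic optimiser. A minimal sketch using SciPy (bounds keep the factors in the coded [-1, +1] range):

    import numpy as np
    from scipy.optimize import minimize

    def neg_yield(v):
        ph, time_ = v            # Temperature dropped (not significant)
        return -(20.31 + 15.05 * ph + 6.68 * time_
                 - 24.81 * ph ** 2 - 12.17 * ph * time_ - 9.11 * time_ ** 2)

    res = minimize(neg_yield, x0=[0.0, 0.0], bounds=[(-1, 1), (-1, 1)])
    print(np.round(res.x, 2), round(-res.fun, 2))  # [0.26 0.2], yield ≈ 22.89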

 Response Surface Graph

Let's have a look at the shape of this formula:

Now we can understand why the linear model was unable to find the optimal yield: we were trying to represent this surface with a flat plane, experimenting only with +1 and -1.

Another way to see it:

Residuals Analysis

We are assuming that this mathematical formula represents our process but there are some additional checks we should do to validate our assumptions. 

In order to do this we are going to analyse the differences between the actual results of our experiments and the estimated values using our formula:

Comparing the residuals with the estimated values we should not detect any trends.

Residuals Normality

We expect the residuals to follow a normal (Gaussian) distribution with an average of zero:

The test for normality gives p = 0.61 > 0.05, so it is OK (no evidence of lack of normality).
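
The workbook does not say which normality test it applies; as an illustration, a Shapiro-Wilk test on the residuals would look like this (the numbers below are placeholders, not the actual residuals):

    import numpy as np
    from scipy import stats

    residuals = np.array([0.3, -0.5, 0.1, 0.8, -0.2,
                          -0.6, 0.4, 0.0, -0.1, 0.2])   # placeholder values
    stat, p = stats.shapiro(residuals)
    print(p)   # p > 0.05: no evidence against normality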

Test for Trends in the Residuals

No trends are detected over the time the experiments were run (see the timestamp of each experiment).

Confirmation Runs

Now we run confirmation runs with the calculated optimal values for pH and Time, and we obtain an average yield of 22.17 with a confidence interval of 21.03 - 23.31.

Conclusions

  • Design of Experiments can be used to fit linear or quadratic models to our process response in order to understand its behaviour
  • It allows the identification of critical factors affecting our outputs
  • We can then find the values of these factors that optimize these outputs
  • All this analysis can be performed with Excel Statistical Analysis tools, Solver, Pivot Charts, etc. 

 

Design of Experiments (DOE) is a very useful process improvement methodology. Microsoft Excel has some powerful data analysis tools which I have successfully used for DOE. 

Factorial Experiment Example

We want to minimize process loss, and after some brainstorming among the process specialists we concluded that 5 factors may affect it. Based on current factor levels we selected the following levels to experiment with.

Download this Excel file:

 DOE_with_Excel.xlsm

Factorial Designs

Full factorial designs are constructed simply by counting in binary to obtain all the different combinations of 0 and 1 for all factors. In our designs we just replace 0 with -1 to meet the balance property.
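
The construction is easy to express in code; a minimal Python sketch:

    import itertools

    # Count in binary over k factors, then map 0 -> -1 to get the balanced
    # full factorial in coded units.
    k = 5
    design = [[2 * bit - 1 for bit in row]
              for row in itertools.product([0, 1], repeat=k)]
    print(len(design))   # 2**5 = 32 runs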

1/2 Fraction Design

With our 5 factors, a full factorial set of experiments would require 2^5 = 32 runs. This could cost time and money before we are sure that all factors really affect our process. Therefore we might start with a subset of the full factorial as a screening experiment. A 1/2 fraction involves half the number of experiments: 16.

We can build a 1/2 fraction 5 factor design from a 4 factor full factorial:

We do this by replacing the 4-factor interaction (which is very unlikely to matter) with the 5th factor (Flow). We therefore construct the Flow column by multiplying the other 4 columns.
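
In code the construction looks like this (a sketch; the text names pH, Temp, Pres and Flow, so the identity of the fourth base factor is left as an anonymous column):

    import itertools

    # Half fraction: take the 2^4 full factorial on the first four factors
    # and alias the 5th (Flow) with their 4-way interaction.
    base = [[2 * b - 1 for b in row]
            for row in itertools.product([0, 1], repeat=4)]
    design = [row + [row[0] * row[1] * row[2] * row[3]] for row in base]
    print(len(design))   # 16 runs instead of 32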

 Run the Experiments

Now with this design we would replace the coded values ( 1 and -1) with the high and low values we selected for each factor and run the experiments in our process. 

In this example we are going to use a simulator to run our experiments:

In this simulator we run the experiments with the factors in coded form.

We have added 2 additional experiments at the central point (all factors at zero), which corresponds to the average of the high and low levels of each factor.

We do this to make a more robust design without going all the way to the full 32 experiments. 

The simulator has provided the process loss outputs for each experiment. 

Factor Interactions

We want to estimate the influence of each individual factor on the result, but also the interactions among them. So, before we analyse the results, we build a column for each 2-factor interaction. If we wanted to include 3-factor interactions we would also add them to this matrix.

You obtain each interaction column just by multiplying the corresponding factor columns: enter the formula in the first line and replicate it all the way down.

Regression with Excel Data Analysis 

Enter in Excel: Data/ Data Analysis/ Regression

And the result will be:

We notice that R^2 = 0.99 which validates our mathematical model.

The Probability column (p) tells us which factors and interactions are significant: those below 0.05.

Factors Temp and Pres have p values above 0.05 but their interaction is below (0.02), therefore we must keep them.

We run the Regression again, keeping only the remaining factors and interactions:

This gives us the formula for the estimation of Process Loss as a function of 3 factors (out of the original 5 we thought relevant):

Loss = 20.24 + 6.85 pH -1.4 Temp - 0.06 Pres + 4.41 Temp * Pres

This equation is with factors in coded values. 

 Process Optimization

We now want to calculate, with this mathematical model of our process, the pH, Temp and Pres values that will minimize Process Loss.

In order to do this we will use Excel Solver. 

http://polyhedrika.com/2-uncategorised/17-theory-of-constraints-with-solver

We add cells N17:N21 to house the results we are looking for. N17 is the factor that multiplies the intercept, so it must be 1. Cell N21 should hold the product Temp*Pres, so we enter the formula =N19*N20.

So we will just ask Solver to give us the values for the yellow cells. 

In N23 we put the formula to calculate Loss with the coefficients in column I: =SUMPRODUCT(I17:I21;N17:N21)

We now call Solver to minimize this value by changing pH, Temp and Pres, with the constraint that they must be between -1 and +1.

The result is that the minimum loss of 7.64 is obtained with pH = 2 (coded -1), Temp = 150 (coded 1) and Pres = 1 (coded -1).
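
This result is easy to verify: the fitted model is linear in each factor (plus one interaction term), so its minimum over the coded cube must lie at a corner. A quick brute-force check over the corners:

    import itertools

    def loss(ph, temp, pres):
        return 20.24 + 6.85 * ph - 1.4 * temp - 0.06 * pres + 4.41 * temp * pres

    corners = itertools.product([-1, 1], repeat=3)
    best = min(corners, key=lambda v: loss(*v))
    print(best, round(loss(*best), 2))   # (-1, 1, -1) -> 7.64, as Solver found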

 Confirmation Experiments

We would now run confirmation experiments to validate these optimal values. With our simulator:

We obtain a confidence interval for the process loss with our optimal factor values.

Graphical Representation

We can view a graphical representation of our mathematical model to help us understand the process behaviour:

Entering a value for pH we can see the Loss for each Pres-Temp combination.

With the colour code we clearly identify the minimum value in dark green: 7.6, which corresponds to Pres = -1 and Temp = 1, as calculated by Solver.

Which in 3D would be:

In spite of the curvature of this shape the behaviour of each factor is linear. The twisting of the shape is due to the interaction between Pressure and Temperature.

Residual Analysis to Validate the Model

We want to estimate the error we make by replacing our process behaviour with this mathematical model.

 

We subtract the estimates of our formula from the actual Loss values measured in our process to obtain the residuals.

We see no trends in the residuals across the estimated range, so this is OK.

Test for Normality of the Residuals

The next check is to make sure that the average of the residuals is close to 0 and they follow a normal distribution:

If the p value for this normality check were below 0.05 it would indicate lack of normality. Since it is 0.983, well above, the residuals pass the test.

Test for Stability Along Time

We want to check that the process has not significantly changed during the time of our experiments:

http://polyhedrika.com/2-uncategorised/21-spc-analysis

No out-of-control alerts appear in this SPC chart of the residuals, so there is no evidence of significant changes.

Notice that we have a timestamp next to each residual: the moment each experiment was performed. It is essential to collect these timestamps to investigate any anomalies detected during the analysis.

http://polyhedrika.com/2-uncategorised/13-time-stamp-a-key-process-metric-you-can-collect-for-free

 Conclusions

  • Design of Experiments (DOE) is a powerful methodology for process improvement 
  • It enables the identification of critical process factors based on data rather than impressions
  • We can estimate the optimal values of these critical factors to optimize the process
  • Excel provides some useful Data Analysis tools to achieve this
  • Some process outputs don't have a linear relation with the critical factors. In this case we will need a more sophisticated formula, such as a response surface model (RSM):

https://www.polyhedrika.com/2-uncategorised/33-response-surface-doe-with-excel

 

 

When a process variable has only random variation, each output is independent of the previous ones. This is what happens in a lottery. In some processes this independence does not hold. If we track our daily weight, for instance, our weight today is correlated with the weight of the previous days: it has autocorrelation.

A similar effect happens when you control a heavy aircraft or a ship: the mass prevents you from making a sharp turn to change course. This resistance to change is what we call inertia.

The definition of inertia applies to moving objects, and it is proportional to the object's mass. But inertia also applies to fluids: a tank accumulating a fluid will show the same effect.

If we try to control a process that has only random variation by reacting to every output, we can see in Process Control that the process will get worse: variation will increase.

We will now experience how to control a process with inertia, using a simulator of the tilt control of a plane:

Download Inertia simulator.xlsm

Close other Excel workbooks and enable macros to run it.

Press Start to begin the simulation and place the cursor on the vertical arrows to adjust the tilt by shifting it up/down (do not click).

You should try to keep tilt as close to zero as possible.

The graphs below show the adjustments you made and the actual tilt evolution along the 50 runs.

The Average and StdDev on top will tell you the extent of your success.

Response delay

The first thing you will notice is that there is a delay between your actions and the tilt response.

This is the result of inertia: the response is slow. 

Stabilising effect of inertia

In this graph we can see the random source of variation and the resulting tilt without adjustments. We notice that inertia has a stabilising effect, drastically reducing variability. This is confirmed by the histograms:

We notice that inertia has caused a reduction of standard deviation from 18.4 to 2.6. We also notice that Tilt (with autocorrelation) passes the normality test (p = 0.21).

Now let's look at the stability of tilt: 

Variation has been drastically reduced, but while the source of variation was in control, we now have out-of-control situations, as shown by the Individuals control chart.

These tilt out-of-control situations have a very limited range compared with the range of variation of the source.

Autocorrelation

Another effect of inertia is autocorrelation:

We notice that the random source of variation had no autocorrelation but inertia has caused significant tilt autocorrelation.
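
For reference, lag-1 autocorrelation (the statistic behind these charts) is simple to compute; a minimal sketch:

    def lag1_autocorr(xs):
        # Correlation of the series with itself shifted by one observation:
        # near 0 for purely random data, clearly positive under inertia.
        n = len(xs)
        m = sum(xs) / n
        num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(n - 1))
        den = sum((x - m) ** 2 for x in xs)
        return num / den

    print(lag1_autocorr([1, 2, 3, 4, 5, 6]))   # 0.5 for this trending series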

Automatic Control of Process with Inertia

If we automatically balance by making Adjustment = - Tilt: 

the result will be reasonable as compared to the case of no adjustment: 

We obtain very similar standard deviations: 3.35 vs 4.00.
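
The idea can be reproduced with a toy model (my own sketch, not the workbook's simulator): a first-order lag driven by random noise, with the automatic rule adjustment = -tilt:

    import random

    random.seed(1)
    tilt, history = 0.0, []
    for _ in range(50):
        noise = random.gauss(0, 18.4)          # source variation (sd from the text)
        adjust = -tilt                         # automatic balancing rule
        tilt += 0.1 * (noise + adjust - tilt)  # inertia: only a partial response
        history.append(tilt)

    mean = sum(history) / len(history)
    sd = (sum((t - mean) ** 2 for t in history) / (len(history) - 1)) ** 0.5
    print(round(mean, 2), round(sd, 2))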

 Now we can experience the difficulties of achieving an effective balancing:

It is difficult to predict when adjustment is required and how much to adjust. 

Real Life: Unstable Source of Variation

So far we have assumed that the source of variation was random with an average of zero.

Now let us try something closer to reality:

Download Inertia variation.xlsm

Try to achieve balance now. 

With automatic Adjust = - Tilt we can achieve:

You can try other automatic adjustment formulas as a function of tilt by modifying the formula in A6.

Continuous Process Balancing by Accumulation in a Tank

By accumulating a fluid in a tank we achieve the autocorrelation effect, which is useful to reduce the standard deviation of a critical metric.
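
A well-mixed tank behaves like an exponential smoother of its input. An illustrative sketch (not the actual plant data; the tank size is an assumption):

    import random

    random.seed(2)
    tank_ph, feed, out = 7.0, [], []
    for _ in range(1000):
        f = random.gauss(7.0, 1.5)           # noisy feed pH
        tank_ph += (f - tank_ph) / 50        # tank holds ~50 feed batches
        feed.append(f)
        out.append(tank_ph)

    def sd(xs):
        m = sum(xs) / len(xs)
        return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

    print(round(sd(feed), 2), round(sd(out), 2))   # output varies far less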

Factories often need to drain water to a river or to the sea in which case they have to comply with regulations about its pH. 

Pure water has a pH of 7. Some local regulations require that water pH should be between 5.5 and 9.5 before it can be drained into a river.

In the following example accumulation in a tank has been used in order to reduce the pH standard deviation and meet the required specs. We have done a capability analysis of both the input to the tank and the output drained to the river:

Ppk has increased from 0.26 (totally unacceptable) to 1.15, and the total defect rate drops from 342,594 ppm to 285 ppm.

Looking at autocorrelation:

We can see significant autocorrelation produced by the tank accumulation.

Looking at pH stability:

We notice clustering and trends, but these happen well within the spec limits.

Body Weight Autocorrelation

Looking at the data used in

 http://polyhedrika.com/2-uncategorised/20-real-time-spc

let's check body weight for autocorrelation:

We confirm that body weight is indeed autocorrelated: our weight today is correlated with the weight we had in the previous days: it has inertia.

Conclusions

  • Process inertia shows up as autocorrelation: metric values depend on previous values
  • Inertia causes a delay between cause and effect
  • The standard deviation of the source of variation is reduced in the resulting effect
  • Mechanical inertia is proportional to the mass of the object
  • Fluid inertia is proportional to the volume stored in a tank or pond
  • Mechanical inertia is used by flywheels to smooth rotation speed
  • Fluid accumulation in a tank is used to reduce the standard deviation of its parameters

 

Production based on forecast uses resources to produce items that will eventually not be sold, while there is a shortage of the items the market actually demands.

Demand lead times are getting shorter in most businesses while supply lead times are not able to keep pace. This often leaves just one alternative: "make to plan", also called "Push" logistics. The result is that in spite of our excess inventories we are unable to satisfy the demand: we have plenty of what nobody wants and not enough of what they want.

"Make to order" or "Pull" logistics produces only what has been ordered by the customer so we avoid dedicating resources to produce unwanted items. The problem usually is lead time: can the customer wait until our supply chain is able to deliver?

One way to implement "Pull" logistics is to use Kanban. 

A kanban (Japanese for card) is a token generated by the consumption of an item (or fixed lot of items) which authorises the production of a similar one to replace it.

Download:  Kanban Logistics Simulator

In the model above the blue workstation can produce one item A or one B every 4 minutes. There is no setup time when changing product. "a" is a kanban which enables the production of "A" and "b" a kanban to produce "B". 

When an item "A" is consumed its corresponding kanban "a" is released and it joins the queue at the production station. Kanbans "a" and "b" will be produced in the order of arrival to the queue.

Market demand is defined by the Takt times of A and B: one item A is required every 10 minutes and one B also every 10 minutes. 

The number of kanbans in circulation determines the maximum inventory we could accumulate.
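
To make the mechanics concrete, here is a minimal toy implementation of this kanban loop (my own sketch, not the downloadable simulator): consumption releases a kanban, the workstation serves kanbans in arrival order, and the kanbans in circulation cap the inventory:

    import collections

    PROCESS_TIME, TAKT_A, TAKT_B = 4, 10, 10     # minutes, from the example
    KANBANS = {"a": 10, "b": 10}                 # kanbans in circulation

    queue = collections.deque()                  # kanbans waiting at the station
    stock = {"A": KANBANS["a"], "B": KANBANS["b"]}   # start with full inventory
    missed = {"A": 0, "B": 0}
    busy_until, current = 0, None

    for t in range(8 * 60):                      # one shift, minute by minute
        for item, takt in (("A", TAKT_A), ("B", TAKT_B)):
            if t % takt == 0:                    # market consumes one item
                if stock[item] > 0:
                    stock[item] -= 1
                    queue.append(item.lower())   # consumption releases a kanban
                else:
                    missed[item] += 1
        if t >= busy_until and current:          # finish the item in progress
            stock[current.upper()] += 1
            current = None
        if t >= busy_until and queue:            # start the next kanban (FIFO)
            current = queue.popleft()
            busy_until = t + PROCESS_TIME

    print(stock, missed)                         # inventory never exceeds KANBANS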

 

Supply Capacity Greater than Demand

In this case the Takt of both A and B is 20 minutes, which is equivalent to 3 items per hour. Since we need 3 A's and 3 B's every hour, a total of 6 items per hour must be produced. The production process time is 5 minutes, so production capacity is 12 items/hour. The kanbans limit production to the consumption rate of 6/hour, therefore we are producing at 50% of capacity.

In this case we have decided to put into circulation 10 "a" kanbans and 10 "b", so we have accumulated an inventory of 10 A's and 10 B's.

 

Balanced Supply and Demand

In this case the supply capacity of 12/h equals the sum of the demand thruputs (6 + 6)/h, therefore supply capacity utilization is 100% and demand fulfilment of both A and B is also 100%.

 

Short Supply

In this case supply is the bottleneck: we are unable to meet the market demand of 12 A's plus 12 B's per hour. Since total capacity is 12/h, fulfilment of both demand A and demand B is 50%.

We see no inventory of A's or B's because as soon as they are produced they are consumed.

 

Priority Allocation

In this case of limited supply we might decide to give priority to one item (or customer) over the other. We can do this with the number of kanbans in circulation: by releasing 10 a's and 5 b's we increase fulfilment of A to 70% at the expense of fulfilment of B, which drops to 30%.

 

Item B Discontinued

With this Just-In-Time approach a drop in the demand of one item (customer) could be balanced by an increase in another one.

In this example B is no longer required, so the full capacity is available to produce A.

 

Balanced Fulfilment

In this case of short supply the demands for A and B are different. We may decide to manage this situation by balancing the fulfilment of A and B. Since demand for A is double that of B, we release twice as many "a" kanbans as "b".

In this way we achieve 80% fulfilment both for A and B.

 

Number of Kanbans

The more kanbans we release, the more inventory we can accumulate, so we want to keep the number of kanbans as low as possible.

On the other hand we may have variation both in the process time and in the demands for A and B. To compensate for this variation we will need to increase the number of kanbans and therefore the level of inventory.

 

Kanban and Variation

Variation causes an accumulation of WIP in the Value Stream:

As shown in:  https://polyhedrika.com/2-uncategorised/12-process-simulation-2

Variation in the capacity of process step §2 causes an accumulation of WIP both before and after it. This increases the lead time of the whole value stream.

If we eliminate this excess WIP by applying kanban:

We have indeed eliminated the excess WIP, but at the expense of reducing the effective capacity of the total value stream (average thruput) and exposing the customer to the variation of step §2: a drastic drop in on-time delivery.

So this is a case where kanban is NOT recommended. 

 

Conclusion

  • Kanban is a practical way to implement pull logistics in our supply chain
  • By managing the number of kanbans we put a limit on the inventory level of each item
  • High variation in the supply or the demand side will require a higher number of kanbans and might make pull logistics impossible