The Obeya room is a "war room" used by process improvement teams to meet and solve critical cross-functional problems. The room walls are lined with boards and highly visual charts and graphs showing program timing, milestones, progress to date and countermeasures to existing timing or technical problems. The team meets in this room regularly, but team members can also visit the room, which is fully dedicated to the project, at any time during the day.

Obeya Room Benefits

  1. Removes organizational barriers
  2. Provides visual management by displaying all the data relevant to the improvement project
  3. Encourages a collaborative environment through regular meetings
  4. Leads to quicker, more effective solutions
  5. Keeps the whole team informed of what is going on in real time

Some Practical Constraints in the Implementation

  1. Some companies can't afford the luxury of a fully dedicated room
  2. Some team members may be located far away, so they can only attend the meetings
  3. The data displayed on the walls may very quickly become obsolete: some team members may be too busy to physically go to the room to update their charts
  4. Some team members may attend the meetings unprepared because they didn't have time to analyse the data on the walls before the meeting

 Alternative: Virtual Obeya Room

The improvement team shares a folder in the cloud: company intranet, Google Drive, Microsoft OneDrive, etc.

This folder contains the same charts and documents that would be displayed in the physical Obeya room.

In this case each chart can be hyperlinked to other documents with more detailed information. 

Each chart has an owner with WRITE access, while the rest of the team has READ ONLY access.

Obeya meetings are still held the same way, but they can take place in any room with a large display. The room is only used during the meeting and is then free for other meetings during the rest of the day.

Some teams hold their meetings standing in front of the screen to keep them short. You may want to use a standing table to hold the laptops.

The owner of each document can update it from a laptop in the office, or from anywhere with a smartphone, even during the Obeya meeting.

The virtual Obeya room can be visited from anywhere at any time: for instance, while visiting the line or a customer.

Some team members may be located at remote sites, in which case they can take part in the meetings by video conference and have the same access to the virtual Obeya room as everyone else.

This alternative can be put in place for free and in a very short period of time: no special IS application is required.

Actions Tracking Spreadsheet

A key document in the Obeya room is a sheet to keep track of all the actions committed to during the meetings: who is doing what and when.

In the virtual Obeya this can be done with a spreadsheet where all participants have WRITE access. 

Google Drive keeps track of all changes made to the document: who did it and when.

Instead of committing to actions "before the next meeting", you can commit to a specific date and time. The moment the action has been completed, the person responsible can enter the completion timestamp from a smartphone with a single click.

With this approach all team members have real-time information on the project status at their fingertips, no matter where they are.

Understanding variation is key to interpreting process behavior.

This simple exercise helps participants experience process variation and understand the difference between a process change and inherent process variation.

This understanding is key to management decisions, avoiding both overreaction and lack of reaction.

Manual dice throwing

To run the exercise with actual dice, download and print the form: Manual.JPG

Exercise:

  1. You will need a printed form and 4 dice for each team
  2. Throw 4 dice and add the outcomes
  3. Record the result in the Run Chart
  4. Repeat 50 times
  5. Join the dots in the Run Chart with a line
  6. Build the Histogram by counting the total number of dots in each group of 3 values
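
If you want to generate the same data without physical dice, the short Python sketch below (our own illustration, using only the standard library) throws 4 dice 50 times, prints the run chart values and builds the histogram in groups of 3 values:

    import random
    from collections import Counter

    # Throw 4 dice 50 times and record the sum of each throw (the run chart data)
    throws = [sum(random.randint(1, 6) for _ in range(4)) for _ in range(50)]

    for i, total in enumerate(throws, start=1):
        print(f"Throw {i:2d}: {total}")

    # Histogram: count the throws falling in each group of 3 values (4-6, 7-9, ..., 22-24)
    bins = Counter((total - 4) // 3 for total in throws)
    for b in range(7):
        low, high = 4 + 3 * b, 6 + 3 * b
        print(f"{low:2d}-{high:2d}: {'*' * bins.get(b, 0)}")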

Run the exercise with the simulator

Download this Excel Simulator:   Variation.xls

Close all other Excel files before you open this one and enable macros.

Press RESET in the DICE sheet and keep on pressing F9 to throw the dice. 

Results Interpretation:

Try to answer these questions within each team and then discuss them with all the teams together:

  1. Does each outcome depend on the previous one?
  2. Are there any special trends or indication of process change in the run chart? Is this a stable process?
  3. Is there a special cause that explains the maximum and the minimum values? Did you do anything different to obtain them?
  4. Is the frequency distribution close to normal: maximum at the center, declining frequency on both sides, symmetrical?
  5. What distribution would result throwing one single die? Why?
  6. Is it possible to predict the outcome of one specific throw?
  7. Is it possible to predict the frequency  of one specific outcome for a large number of throws?
  8. Can two different run charts produce the same histogram?

Is there a downward trend? Is this an upward trend?

Neither is a statistically significant trend. The conclusion is that this is a stable process.

This is what we would expect: since we have not changed the way we throw the dice, the process will continue to behave this way until we do.

Stability doesn't necessarily mean that the process is OK: it just means that the process is neither improving nor getting worse. 

This is a stable process approximately following a normal distribution with an average of 14 and a standard deviation of about 3.4.

If we throw one single die the distribution would not be normal: it would be a uniform (flat) distribution because all values have the same probability. Throwing 4 dice there is only one combination that gives a sum of 4 (all 1's) but there are many combinations that give a sum of 14. The distribution of the sum of 4 dice is approximately normal even though the distribution of single values is uniform (central limit theorem).
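
This claim about combinations can be verified exactly by enumerating all 6^4 = 1296 equally likely outcomes; the following sketch (standard-library Python, added here as an illustration) counts how many combinations produce each sum:

    from itertools import product
    from collections import Counter

    # Count how many of the 1296 outcomes of 4 dice produce each possible sum
    counts = Counter(sum(dice) for dice in product(range(1, 7), repeat=4))

    print(counts[4])          # 1 combination gives a sum of 4 (all 1's)
    print(counts[14])         # 146 combinations give a sum of 14
    print(counts[14] / 6**4)  # probability of getting 14 in one throw: about 0.11

This also answers question 7 above: the frequency of a specific outcome over many throws is predictable even though a single throw is not.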

Alternative process

Let's now analyse another process: open sheet ALT in Variation.xls, press RESET and keep pressing F9 to run the process.

Now try to answer these questions within each team and then discuss them with all the teams together:

  1. Is it likely that this data comes from the previous process of throwing 4 dice? Why?
  2. Is this a stable process?  Why?
  3. What is the meaning of the frequency distribution histogram in this case?
  4. Can we use it to predict the process behavior?
  5. What is the probability that values 7 - 13 happen again? Can you conclude that by looking at the histogram?

Alternative Process Conclusions

We can clearly see that this process has an upward trend, so it is not a stable process: its average is shifting up.

It is therefore very unlikely that this data comes from throwing dice (besides, values above 24 are impossible with 4 dice).

In this case of an unstable process, as shown by the Run Chart, the Histogram is completely misleading if we want to predict the process behavior. Indeed, the histogram suggests that values below 15 have a certain probability of occurring, but from the Run Chart we see that this is very unlikely.

We can conclude that the Run Chart and the Histogram are both necessary and they complement each other. 

First we must check with the Run Chart that the process is stable, and only if it is can we use the Histogram to characterize the process behavior.

The question is: how do we know whether the process is stable (apart from our impression)? The answer is to use a statistical analysis program such as Minitab:

Both significant clustering and a significant trend confirm that the process is not stable.
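
If Minitab is not available, a simplified runs-about-the-median test (one of the checks behind run chart tests like Minitab's) can be sketched in Python; this is an illustrative implementation, not the exact Minitab procedure:

    import math
    from statistics import median

    def runs_about_median(data):
        """Count runs above/below the median and compare with the expected number.
        Far fewer runs than expected (strongly negative z) indicates clustering
        or a trend, i.e. a process that is not stable."""
        m = median(data)
        signs = [x > m for x in data if x != m]  # ignore points exactly on the median
        n1 = sum(signs)                          # points above the median
        n2 = len(signs) - n1                     # points below the median
        runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
        expected = 2 * n1 * n2 / (n1 + n2) + 1
        variance = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)
                    / ((n1 + n2) ** 2 * (n1 + n2 - 1)))
        return runs, expected, (runs - expected) / math.sqrt(variance)

    # A drifting (unstable) series produces very few runs and a large negative z
    print(runs_about_median([10 + 0.3 * i for i in range(30)]))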

Defect Rate Comparisons

Open Variation.xls sheet Defects

These are the defect rates produced by the 6 operators during one week.

The department manager must decide, based on this data, whether to take some action such as:

  1. Talk to Amparo to remind her of our zero defects commitment with the customer
  2. Congratulate Fernando for his results and, maybe, give him a prize
  3. Ask the process engineer to explain why Mondays produce more defects than Fridays
  4. Tell operators that anyone producing above 8% average weekly defects will be penalized

If you have decided on any of these actions, you are wrong. In fact, they may be counterproductive.

This is an example of overreaction on the part of management due to a lack of understanding of process variation.

To see this just press F9 to simulate another week with this same process. 

If we analyse the results of 4 weeks, we notice that the differences among operators aren't that large.

How do we know if there is a statistically significant difference among the different operators or days of the week?

This analysis can be done with the Data Analysis tools in Excel: two-way ANOVA.
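
The same analysis can also be run outside Excel; the sketch below (Python with pandas and statsmodels, using made-up defect rates in place of the workbook data) fits operator and day of the week as factors and prints the two-way ANOVA table with the p-values:

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    # Illustrative defect rates (%) for 6 operators over 5 days -- not the workbook data
    rates = [
        [5, 7, 6, 8, 4], [9, 6, 5, 7, 8], [4, 8, 7, 5, 6],
        [7, 5, 9, 6, 5], [6, 9, 4, 7, 7], [8, 6, 7, 5, 9],
    ]
    operators = ["Amparo", "Fernando", "Op3", "Op4", "Op5", "Op6"]
    days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
    records = [
        {"operator": op, "day": day, "defect_rate": rate}
        for op, row in zip(operators, rates)
        for day, rate in zip(days, row)
    ]
    df = pd.DataFrame(records)

    # Two-way ANOVA without replication: one observation per operator/day cell
    model = ols("defect_rate ~ C(operator) + C(day)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))  # large p-values => differences not significant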

The conclusion is that neither the differences among operators nor the differences among days of the week are statistically significant.

Management improvement actions, in this case, should be directed at improving the overall process to reduce the defect rate. The process owner may run a Design of Experiments (DOE) to optimize the critical process parameters.

 Process Control

Open Variation.xls sheet Control and press RESET.

  1. You are responsible for controlling a machine to ensure the deviation from target is zero in every run
  2. You can set the adjustment value (+ or -) to be applied to the next run
  3. Set the adjustment and press F9 for the next run
  4. Repeat for 25 runs

  1. Did you achieve your objective of controlling the process to obtain zero deviation in every run?
  2. Why?
  3. What strategy did you follow?
  4. What could you have done differently?
  5. Can you be held responsible for these results?
  6. Why?

 These are the results of overreaction:

If we made no adjustments:

Before we start making adjustments, we should first observe how the process behaves with no adjustments. We see that the process has random variation and is well centered on zero: this means we cannot improve it with our adjustments; in fact, we will increase the variation and make the process worse.
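
This overreaction effect (known from Deming's funnel experiment) can be reproduced with a quick simulation (Python sketch under the same assumption as the exercise: the process is pure random variation centered on the target). Compensating each run for the previous deviation increases the standard deviation by roughly 40% instead of reducing it:

    import random
    from statistics import pstdev

    random.seed(1)
    noise = [random.gauss(0, 1) for _ in range(1000)]  # inherent process variation

    # Strategy 1: leave the process alone
    no_adjust = noise

    # Strategy 2: after each run, adjust the setting to cancel the last observed deviation
    adjusted, setting = [], 0.0
    for eps in noise:
        deviation = setting + eps    # deviation observed on this run
        adjusted.append(deviation)
        setting -= deviation         # "correct" the machine for the next run

    print(f"std dev without adjustments: {pstdev(no_adjust):.2f}")  # about 1.0
    print(f"std dev with adjustments:    {pstdev(adjusted):.2f}")   # about 1.4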

In this case the operator has been given an impossible task.

 Conclusions

  1. An understanding of process variation is essential in order to make the right improvement decisions
  2. Decisions based on one single outcome can lead to overreaction and make the process worse
  3. Statistical analysis is required to distinguish between a change in the process and intrinsic process variation
  4. Statistical applications such as the Excel Data Analysis tools or Minitab can help in this analysis
  5. Six Sigma education for professionals and management can be useful in adopting this new way of thinking

A Test/Repair loop can become a bottleneck for the whole Value Stream when the test First Pass Yield (FPY) is lower than planned.

A drop in test FPY is normally caused by problems upstream in the Value Stream. This Test/Repair loop is often absent from Value Stream Maps despite its potential to become the bottleneck of the total process.

You can download this example file:  TestRepair.xlsm

This Excel file simulates a Test/ Repair loop such as:

Ideal Situation

The ideal case is when FPY = 100%. In this case no repair is necessary and the required test capacity is just 100, which corresponds to the Value Stream throughput.

You must close all open Excel files before you open this one and you should enable Macros.

To simulate 1 hour operation just press F9. Keep pressing to see the evolution.

To reset (set all Work-In-Process to zero) press the Ctrl and R keys.

You can only write in the yellow cells.

You need to work out the minimum required Test and Repair capacities in order to deliver 100 units each hour (balanced loop).

Test FPY Drop

Let's assume 1% of the units are failing test: Test FPY = 99%:

The immediate result is that the output of this loop drops to 99: it has become the bottleneck of the Value Stream.

Since we are not repairing the faulty units they are accumulating in front of the Repair station. 

How much repair capacity do we need in this case?

All repaired items need to be retested so they will add to the new items entering the loop. What test capacity do we need then?

We can calculate this without the need for simulation:

In this case when we test 100 units only 99 come out OK (we have multiplied 100 by FPY which is 0.99). 

If we need an output of 100 items OK how many do we need to test? 

Answer: We divide by the FPY: 100 / 0.99 = 101.0 units per hour

And the repair capacity will be: 101.0 - 100 = 1.0 units per hour
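
The same calculation can be written as a small helper (Python sketch; 100 units/hour and 99% FPY are the values from the example, and repaired units are assumed to re-enter test):

    def required_capacities(throughput, fpy):
        """Minimum Test and Repair capacities for a balanced Test/Repair loop.
        throughput: good units required out of the loop per hour
        fpy: test First Pass Yield (fraction of tested units that pass)"""
        test_capacity = throughput / fpy               # units tested per hour
        repair_capacity = test_capacity - throughput   # units repaired per hour
        return test_capacity, repair_capacity

    test, repair = required_capacities(100, 0.99)
    print(f"Test capacity:   {test:.1f} units/hour")    # about 101.0
    print(f"Repair capacity: {repair:.1f} units/hour")  # about 1.0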

 Repair Time Variation

Test time typically doesn't have much variation, especially if testing is automatic. Repair time, on the other hand, tends to have high variability. Some automatic testers may provide specific repair instructions, but very often repair requires an investigation that might take a long time.

Let us assume that in our example the Repair station has an average repair capacity of 1.1 items/hour (just above the minimum required) with a standard deviation of 0.5:

The frequency distribution of this repair capacity will be: 

And the result will be:

We see that occasionally output will drop and a queue will develop waiting for repair. This means we will need additional average repair capacity to compensate for this variation:

Now the "waiting for repair" queue has moved to "waiting for test" but output is maintained in 100. 

Conclusion

Test/Repair loops should normally be included in the Value Stream Map because they are critical steps in the process that could become the bottleneck of the Value Stream. See an example in:

https://polyhedrika.com/2-uncategorised/14-value-stream-map-with-excel

A drop in Test FPY should trigger immediate actions to analyse and correct the source of the defects.

In the meantime we will need additional Test and Repair capacity to maintain the VSM throughput.

The Repair operation may require high skills that are in short supply, so enough training should be provided to ensure it doesn't become the bottleneck of the total process.

By developing a robust Repair process we can reduce the average repair time and its standard deviation, reducing this risk.

System constraints are key when it comes to optimizing our process. The process bottleneck limits the overall throughput and determines such things as manufacturing lot sizes.

With this simple manufacturing line simulation you can experience the effects of alternative solutions in order to maximize profit. 

Download Excel file:  TOCeng4.xlsm

Close other Excel files before you open this one and enable Macros.

Process Objective

Run the simulator to obtain the maximum profit after one simulated week. 

You have an initial capital of 1000 € which you can use to buy materials to feed the blue, green and orange machines. 

The green machine performs 3 operations: b, c and d. All parts must be processed through all 3, so you must decide what manufacturing lot size you want and process the lot through each of these operations. Before each operation there is a setup time. Similarly, the orange machine has 2 operations: e and f.

The market will accept any amount of product P at a price of 70 €. Spares P1 and P2 can also be sold, but their quantities can never exceed the number of products P already sold (if you have already sold 5 P's you can sell, if you want, 5 P1's and 5 P2's).

Fixed expenses amount to 2000 €/ week and they will be subtracted from the cash balance at the end of each week. 

Week 1 will start with an empty line, so you can simulate one single week, leaving the line empty at the end.

You can also simulate several weeks, in which case you don't empty the line at the end. 

 Simulator Operation

You operate the simulator with control buttons:

You can either press the start button or use Ctrl + s. The same with the others. The reset button will empty the line and start simulation from zero.

The counter will tell you where you are:

One week is 5 working days of 8 hours. The simulator will stop at the end of each day: just press start to continue.

You start by buying materials based on the lot sizes you have decided on, and you must select the operation you want to run in the green machine from the pull-down menu: b, c or d.

The same with the orange machine: select e or f.

You can see the details in the Help sheet.

To transfer parts to the next machine, type the amount to be transferred in the yellow boxes.

Financial control

You can control your financial situation in real time:

You can buy materials as long as you have a positive balance. 

System Constraints

You may want to try your manufacturing strategy with this simulator before you go into a deeper analysis.

These are the constraints of the different machines:

The bottleneck of the whole line is therefore the blue machine (operation a): each product P will need 60 minutes of this machine. 

You will notice that we are only considering the process times (not the setup times). The reason is that the influence of setup times can be eliminated by using large enough lot sizes as we will see later.

Another constraint is the fact that all products P and spares P1 need machine a (the bottleneck). Spares P2, although they don't need machine a to be produced, can't be sold unless products P (which do need machine a) have been sold.

The bottleneck dictates how much we can produce and therefore the profit. We must focus on optimizing the bottleneck time to maximize profit:

To produce a spare P1 we need 60 minutes of bottleneck time and obtain a profit of 30 €. If we use that time to produce a product P instead, this also allows us to sell a spare P2 (which doesn't use the bottleneck), and the profit will be 70 €.

The conclusion is that we should not produce any spares P1.

Theoretically we should be able to produce and sell 40 P's and 40 P2's per week.
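
This figure follows directly from the bottleneck time available in one week; a quick check (Python, using only the numbers given in this article):

    # Machine a (the bottleneck) limits weekly output
    minutes_per_week = 5 * 8 * 60        # 5 working days of 8 hours = 2400 min
    bottleneck_min_per_P = 60            # each product P needs 60 min of machine a

    max_P = minutes_per_week // bottleneck_min_per_P   # 40 products P per week
    max_P2 = max_P                                     # one spare P2 can be sold per P sold

    weekly_margin = max_P * 70           # 70 EUR margin per P + P2 pair (see Possible Results)
    weekly_profit = weekly_margin - 2000 # minus weekly fixed expenses
    print(max_P, max_P2, weekly_margin, weekly_profit)  # 40 40 2800 800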

 Lot size

If we decide to produce only P and P2 it will take 60 minutes of bottleneck a to produce one of each. In the green machine we will need to process 2 units: one for P and one for P2; this will take 18 x 2 = 36 minutes of process time.

To process a lot in the green machine we will need 3 setups of 40 min (total 120 min) plus 18 x lot size minutes of processing time.

During this time, bottleneck a must process lot size / 2 units (only the parts destined for product P). If we dedicate all the spare time in the green machine to setups, the condition 120 + 18 x lot size <= 30 x lot size gives a minimum lot size of 10.

Indeed, to process these 10 parts we will need 300 minutes, which is the time it takes to process 5 parts in the blue machine a.

This means that if we reduce the lot size below 10, the green machine will be more restrictive than blue machine a, so it will become the new bottleneck.

In practice, to prevent the green machine from becoming the bottleneck, we should leave a margin by choosing a lot size greater than 10.
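
The same lot-size reasoning can be packaged as a small formula (Python sketch; the setup and process times are the green machine figures above, and the 30 min comes from the 60 min of bottleneck time needed for the P half of each lot):

    import math

    def min_lot_size(setup_min, process_min_per_unit, bottleneck_min_per_lot_unit):
        """Smallest lot size for which the machine keeps up with the bottleneck.
        Solves setup_min + process_min_per_unit * lot <= bottleneck_min_per_lot_unit * lot."""
        return math.ceil(setup_min / (bottleneck_min_per_lot_unit - process_min_per_unit))

    # Green machine: 3 setups of 40 min and 18 min of processing per unit.
    # The bottleneck spends 60 min per P, i.e. 30 min per unit of a P + P2 lot.
    print(min_lot_size(setup_min=3 * 40, process_min_per_unit=18,
                       bottleneck_min_per_lot_unit=30))   # 10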

If we apply the same reasoning to the orange machine:

In this case the minimum lot size is 8. It may not be practical to have different lot sizes in the two machines, so we may decide on a value above both, such as 12, to compensate for any inefficiencies.

Possible Results

Theoretically, starting with a full line we should be able to produce and sell 40 P's and 40 P2's, which gives us a weekly margin of 40 x 70 = 2800 €. If we subtract the weekly fixed cost of 2000 €, that leaves us a profit of 800 €.

Starting with an empty line and leaving it almost empty at the end of one week, we obtained the following results:

Capacity Utilization

In the top right corner of the simulator we can keep track of the capacity utilization of each of the machines.

At the end of the week we obtained the following results:

The first thing we notice is that the bottleneck a has been producing 100% of the time: one minute lost in the bottleneck would be a loss for the whole line.

The green and the orange machines were stopped at the end of the week in order to empty the line, and they also lost time due to inefficiencies in each setup.

Machines g and h were stopped at the beginning of the week because the line was empty, and also during the week due to their excess capacity.

 Conclusion

System constraints need to be considered when it comes to optimizing a process.

The bottleneck defines the maximum possible throughput for the total process.

  3. The bottleneck defines the rate at which product should be started in the line: starting above the bottleneck capacity will only build up Work In Process and will not increase throughput.

  4. In machines with several operations we want the smallest manufacturing lot size possible, but not so small that the machine becomes the bottleneck of the total process.

Capacity utilization should be 100% in the bottleneck but not in the rest of the operations.

 

The Standard Work Combination of machine and operator times tries to meet the throughput requirements of the line with the available machines and operators.

You can download this example file:     Standard-Work-Combination-Table.xlsx

In this example 4 autonomous machines are handled by one operator, who performs the following operations on each machine in sequence:

  1. Unload finished part from the machine
  2. Load the new part
  3. Start the machine
  4. Take the removed part to the next machine

The following table contains the operator times to perform these operations as well as the machine times involved:

The manual times include loading and unloading the parts in the machine; since the machine is occupied while this happens, these times are also included in the machine times. Walk times involve the operator but not the machines.

Each start time is calculated from the previous start time plus manual and walk times.

The Manual Gantt chart above is built from the start times and the manual times. You can see that the delay between one manual time and the next is the walk time.

The quality control operation is performed by the operator and no machine is involved. 

The Machine Gantt Chart is built from start times and machine times. 

If we want to maximize the throughput of the line with one single operator, the cycle time will be 60. This is the time between two consecutive finished parts leaving the line. If the times are in seconds, a cycle of 60 seconds corresponds to a throughput of one unit per minute.

This cycle was calculated with operator times and did not take machine times into account, so we need to ensure that no machine time is above the cycle time.
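
Since the time table is shown as an image, the sketch below uses hypothetical operator and machine times (chosen only so that the manual and walk times add up to a cycle of 60; they are not the workbook values) to show how the start times, the cycle and the machine-time check are derived:

    # Hypothetical times for the 4 machines plus the quality control step.
    # Chosen so manual + walk times sum to 60; not the actual workbook data.
    steps = [
        # (step, manual_time, walk_time, machine_time)
        ("Machine 1", 6, 4, 40),
        ("Machine 2", 7, 3, 45),
        ("Machine 3", 6, 4, 50),
        ("Machine 4", 8, 4, 35),
        ("Quality control", 15, 3, 0),   # manual operation only, no machine involved
    ]

    start = 0
    for name, manual, walk, machine in steps:
        print(f"{name}: start {start}, manual {manual}, walk {walk}, machine {machine}")
        start += manual + walk            # next start = previous start + manual + walk

    cycle = start                         # operator is back at machine 1 after one loop
    print("Cycle time:", cycle)           # 60 with these example times
    print("All machine times below the cycle:",
          all(step[3] < cycle for step in steps))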

Lead Time Calculation

How long will it take one item to go from start to finish? This is what we call the lead time:

You can see it takes 5 cycles for one item to go through the total process, so the lead time will be 300.

Increase Throughput

If a cycle of 60 is what the market demands, we can handle this line with one single operator. The machines are idle most of the time, but this is acceptable because it does not make sense to produce more than the market demands.

If we need to increase throughput (decrease cycle) we can add another operator to do this:

If we assign work in this way the cycle would be 25 for operator 1 and 35 for operator 2. This means we can have an overall cycle of 35 and all machine times are below this value. 

The corresponding Gantt charts are:

So with 2 operators we can almost double throughput: cycle is reduced from 60 to 35. 

Conclusion

Standard work combination charts allow planning with machine and operator times in order to achieve the required throughput with the existing equipment while using the minimum human resources.