With this simple value stream producing products A and B we want to illustrate how to maximize profit while taking the system constraints into account.

  • Each product is made by assembling two subassemblies, one of them common to both products.
  • We have three special-purpose workstations, each with an operator trained for that specific job.
  • The line operates one shift with 2400 productive minutes per week.
  • The fixed costs, which include the salaries of the three operators, are 8000€ per week.
  • Product A is sold for 90€ and B for 100€.
  • The maximum weekly market demand is 200 units for A and 100 for B.
  • Total direct costs (materials) are 50€ for product A and 40€ for B, as shown above.
  • We want to know how many units of A and how many of B we should produce to maximize profit.

 First of all let’s look at the system constraints:

  • Time available per week for each of the three operators is 2400 minutes
  • Weekly market constraints: 200 A and 100 B.

 

We want to maximize overall profit so we calculate the margin of each product:
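Margin A = 90€ selling price − 50€ materials = 40€
Margin B = 100€ − 40€ = 60€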

Since we can’t satisfy all the market demand we must decide which product should be our priority. If we ask Finance they will tell us B is the best, since it has the highest margin.

 Let’s see how many resources we will need to produce all the market demand for B and how many A’s we will be able to produce with the remaining resources:

With the remaining manufacturing capacity we would be able to manufacture 140 A's and assemble 380 A's, but we can only test 40 A's. It makes no sense to manufacture more A's than we can test, and the excess assembly capacity will be wasted because we can only assemble units that have been tested. In practice we will therefore only be able to produce 100 B's and 40 A's, which gives us a total loss of 400€ per week.
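In test minutes: the 100 B's consume 100 × 20 = 2000 of the 2400 available minutes, leaving 400 minutes, enough to test only 40 A's (400 / 10). The weekly margin is then 100 × 60€ + 40 × 40€ = 7600€, which does not cover the 8000€ of fixed costs: a 400€ loss.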

Maybe the Finance criterion of giving priority to the product with the highest margin was not so good after all.

We have already noticed that the bottleneck in this line is Test: it is limiting the effective capacity of the whole line.

The use of bottleneck time is therefore a critical factor we should consider in optimizing the line. 

We need to ask ourselves how much bottleneck time is consumed to obtain a 40€ margin in product A, and the same for B. 
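Using the test times per unit (10 minutes for A and 20 minutes for B, as used in the graphical solution below):

Product A: 40€ margin / 10 min of test = 4€ per bottleneck minute
Product B: 60€ margin / 20 min of test = 3€ per bottleneck minute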

This means that each minute of test time (the bottleneck), if used to test product A, gives us a margin of 4€, while B only gives us 3€.

So when we take the existing constraints into consideration, A is more profitable. Let's give priority to A and use the remaining capacity to produce B:

So the bottleneck plays an important role in finding the optimal solution: it defines the effective capacity of the whole line, therefore we need to optimize the use of bottleneck time.

 We can clearly see that due to the existing constraints this line is very inefficient. Let’s look at the capacity utilization of each operation:

The only operation with 100% utilization is test. This is what we would expect since it is the bottleneck. We see that final assembly has very low utilization (less than 50%).

We can see that the only way to further improve productivity is to reduce the test time by making it more efficient, or to transfer some work (if possible) to the assembly operation.

 

Graphical Solution

 Let’s now look at the problem graphically:

The red area is the forbidden area due to the constraints: maximum A, maximum B and the constraint of the 2400 minutes available at the bottleneck (test): each A consumes 10 minutes of test and each B 20 minutes. 

We now look for the point of maximum profit: it will be the furthest point from the origin within the allowed (green) area, which is the point of maximum A (200 units) plus 20 units of B.
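Outside Excel, the same optimization can be sketched with Python's scipy.optimize.linprog (assuming scipy is available). Only the bottleneck (test) time and the market limits are modelled, since the other two operations have spare capacity at the optimum:

```python
from scipy.optimize import linprog

c = [-40, -60]                    # linprog minimizes, so negate the margins (40€ A, 60€ B)
A_ub = [[10, 20]]                 # test minutes consumed per unit of A and B
b_ub = [2400]                     # test minutes available per week
bounds = [(0, 200), (0, 100)]     # market demand limits for A and B

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x)                      # -> [200.  20.]  units of A and B
print(-res.fun - 8000)            # weekly profit after 8000€ fixed costs -> 1200.0
```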

 

Solution with Solver

Now let's see how to solve the problem with Microsoft Excel Solver. First we need to set up all the data in a spreadsheet:

Download file: Solver1.xlsx

In matrix B3:D4 we have the unit times for each operation in products A and B.

Column E has the material costs and column F the product selling price. Column G (margin) needs a formula that simply computes F – E.

In column H (Produced) we want Solver to place the results we are looking for: number of A’s and B’s to produce. 

Column I has the market constraints for A and B.

Row 5 has the formulas shown above to calculate the time consumed by each operation based on the quantities produced. We will compare these values with row 6 (available time for each operation).

Finally B10 has the shown formula to calculate the profit. This is the cell we will ask Solver to maximize. 
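With the data above, this formula is equivalent to: Profit = 40€ × (units of A produced) + 60€ × (units of B produced) − 8000€ fixed costs.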

Now we go to Solver to set up the parameters as shown:

We want to maximize Profit (B10) by changing cells H3:H4, where Solver will place the solution we are looking for.

Now the constraints:

  •  B5:D5 ≤ B6:D6      Used times ≤ available times
  •  H3:H4 ≤ I3:I4      Don't produce above market demand
  •  H3:H4 integer, ≥ 0    Must be integers and not negative

Press Solve and we will get the results:

 

Process Improvement

Now imagine that an engineer has found a way to reduce the test time of product A from a total of 10 minutes to 9 minutes by adding some work to the assembly operation, which would increase the assembly time from the previous 5 minutes to 7.

This suggestion would be difficult to justify with Finance, since we are increasing the overall operator time of product A by one minute.

 But let us look at our Excel to see what Solver has to say:

We have been able to produce 10 more B’s increasing the profit from 1200€ to 1800€.
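Checking the arithmetic: product A now consumes 9 minutes of test, so 200 A's use 1800 of the 2400 available minutes, leaving 600 minutes for 30 B's instead of 20. Profit = 200 × 40€ + 30 × 60€ − 8000€ = 1800€.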

This tells us that the usual way we compute product profitability may not help when some of the operations have idle time.

 

Container distribution example

Now let’s see another example of Theory of Constraints solution with Solver:

 In this case we want to minimize the cost of distribution of goods containers from warehouses X, Y, Z  to shops A, B, C, D, E, F:

Download file: Solver2.xlsx

Matrix C4:H6 contains the transport costs of one container from each warehouse to each shop.

Matrix C9:H11 will contain the number of containers from each warehouse to each shop we are looking for.

The blue cells have the formulas shown.

 The Solver parameters will be:

Notice we must meet all the shops' requirements:  C12:H12 = C13:H13

 And the resulting solution:

This is the optimal solution which meets all the constraints.
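The structure of this model can also be sketched outside Excel. The instance below is made up (three shops instead of six, arbitrary costs, stocks and demands) purely to show how the constraints are built; the real figures are in Solver2.xlsx:

```python
# Hypothetical transportation-problem sketch in the spirit of Solver2.xlsx
import numpy as np
from scipy.optimize import linprog

costs = np.array([[4, 6, 9],     # cost per container from warehouse X to each shop
                  [5, 4, 7],     # from warehouse Y
                  [8, 5, 3]])    # from warehouse Z
stock = [30, 40, 30]             # containers available at each warehouse
demand = [25, 35, 40]            # containers required by each shop

n_w, n_s = costs.shape
c = costs.flatten()              # decision variables x[w, s], flattened row by row

A_eq = np.zeros((n_s, n_w * n_s))    # each shop's demand must be met exactly
for s in range(n_s):
    A_eq[s, s::n_s] = 1
A_ub = np.zeros((n_w, n_w * n_s))    # a warehouse cannot ship more than its stock
for w in range(n_w):
    A_ub[w, w * n_s:(w + 1) * n_s] = 1

res = linprog(c, A_ub=A_ub, b_ub=stock, A_eq=A_eq, b_eq=demand, bounds=(0, None))
print(res.x.reshape(n_w, n_s))       # containers to ship from each warehouse to each shop
print(res.fun)                       # minimum total transport cost
```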

 

Conclusion

  • All Value Streams are subject to multiple constraints such as:
    • Time
    • Cost
    •  Delivery
    •  Workforce 
    •  Market demand
  • The optimal operating point very much depends on the existing constraints, which may vary over time.
  • Microsoft Excel Solver is useful to optimize a Value Stream which is subject to multiple constraints.

 

The quality level seen by the customer of our process is one of the most critical process metrics. This quality level is the result of all the defects generated by the different process steps as well as those coming from the parts suppliers. In our Value Stream Map we focus on process flow but this flow is very much affected by quality. It is important to find out where in the process each type of defect is generated and where in the process it will be detected to make sure it doesn’t find its way to the customer.

In this model, which you can download (DPUdpmo.xlsx), we simulate a simplified map of an Electronic Assembly and Test process.

Defect contributors are defective components coming from suppliers and defects from the 4 process steps:

  • SMT (Surface Mount Technology) automatic component placement
  • Reflow solder of these components
  • Manual PTH (Pin Thru Hole) component insertion
  • Wave soldering of PTH components.

Defects are detected on: 

  • Several visual inspections along the process
  • ICT (In-Circuit Test) at the end of the assembly process
  • Final Functional test.

“Defects per unit” (DPU) is the metric that interests the customer but it’s not very practical as a process metric because we are typically producing products with very different levels of complexity: some boards may have just 5 components and others 500.

“Defects Per Million Opportunities” (DPMO) is a useful process metric that is independent of product complexity. This metric will allow us, for instance, to know whether our soldering process is improving or getting worse independently of the complexity of the products we are soldering.

In the following example, if we have one defective component in 3 boards, the average DPU = 1/3 = 0.33.

 When we measure in DPMO we must define the OFEs (Opportunities For Error) of the process we are dealing with:

  • Component placement process: OFE = Number of components placed
  • Soldering process: OFE = Number of solder joints soldered

Therefore if we say our component placement DPMO = 100 this means that out of every million components placed 100 will be missing or misplaced.

 If our wave solder DPMO =200 this means that out of every million solder joints 200 will be defective.   

 

In our Electronic Assembly example we have 500 SMT components and 100 PTH components, so these are the corresponding OFEs. The total number of components used per board is therefore 600.

The 500 SMT components have a total of 1500 solder joints, therefore these are the reflow solder OFEs.

The 100 PTH components have a total of 300 solder joints (wave solder OFEs).

Process DPMOs define the quality of our different process steps and the OFEs are specific for each product type.

On each step we can estimate the expected defects per unit:

DPU = DPMO x OFE / 1 000 000
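For example, taking the placement DPMO of 100 quoted above and the 500 components of this board: DPU = 100 × 500 / 1 000 000 = 0.05 expected placement defects per board.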

Since the types of defects contributed by the 5 different sources are independent, the total number of defects produced will be the sum of the individual contributions. The total DPU is the sum of all DPUs.

Process Yield is the proportion of defect free units. If defects are independent DPU follows a Poisson distribution so Yield and DPU are related by:

Yield = exp(-DPU)

For instance, an average of 1 defect per unit will still produce exp(-1) = 0.37, that is, 37% defect-free units (there will be units with more than one defect).

Total DPU is therefore the sum of DPUs while the total Yield is the product of yields. 
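As an illustration, this roll-up can be sketched in Python. The OFEs are those of the example board; only the placement, PTH-insertion and wave-solder DPMOs are quoted in the text, so the other two are assumed, chosen so that the total matches the 0.32 DPU used in the next section:

```python
from math import exp

steps = {
    # step: (DPMO, OFEs per board)
    "Supplier components": (50, 600),     # assumed for illustration
    "SMT placement":       (100, 500),    # DPMO quoted in the text
    "Reflow solder":       (20, 1500),    # assumed for illustration
    "PTH insertion":       (1500, 100),   # DPMO quoted in the text
    "Wave solder":         (200, 300),    # DPMO quoted in the text
}

dpu = {name: dpmo * ofe / 1_000_000 for name, (dpmo, ofe) in steps.items()}
total_dpu = sum(dpu.values())    # independent defect sources, so the DPUs simply add
board_yield = exp(-total_dpu)    # Poisson assumption: Yield = exp(-DPU)

for name, value in dpu.items():
    print(f"{name}: DPU = {value:.3f}")
print(f"Total DPU = {total_dpu:.2f}, Yield = {board_yield:.1%}")   # 0.32, about 73%
```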

 

Now let’s look at defect detection:

Control points (visual inspections and tests) are defined in the Control Plan for each of the product types. In this example we have several visual inspections, one ICT and one Functional Test performed in this order. 

Visual inspection is performed as close as possible to the step generating the defect and the defects detected are typically corrected on the spot. The problem is that it is very inefficient: it can only catch 60% of the defects. In this example this means that out of the total 0.32 DPU only 0.192 will be caught and corrected, so 0.128 will escape.

ICT uses a bed of nails to contact many points in the board and is able to test the functionality of some components. If the test coverage is 80%, 0.102 DPU will be caught in ICT and corrected and therefore 0.026 will escape. 

Finally the functional test has a high test coverage but the test is not able to pinpoint where the defect is, therefore faulty boards need to be analysed and repaired by an expert through a lengthy process.

The proportion of boards that will need to be reworked as a result of visual inspection is 17%. This will require additional resources and skills. In the case of ICT and functional test 12% of boards will require rework and this may require a higher level of skills and it will take longer to identify and correct these defects.

At the end of this process we have 0.001 DPU going to the customer which means 1280 faulty boards out of every million.
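A minimal sketch of this escape cascade: the 60% and 80% coverages are quoted above, while the 95% functional-test coverage is an assumption chosen so that the result reproduces the 1280 ppm figure (the actual workbook tracks each defect source through its own inspection points, so it is more detailed than this):

```python
total_dpu = 0.32                               # expected defects per board, from above
after_visual = total_dpu * (1 - 0.60)          # 0.128 escapes visual inspection
after_ict = after_visual * (1 - 0.80)          # ~0.026 escapes ICT
to_customer = after_ict * (1 - 0.95)           # ~0.0013 DPU reaches the customer
print(round(to_customer * 1_000_000))          # ~1280 defective boards per million
```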

This is obviously unacceptable, so we can start looking at the main contributors to defects and the main escape points of our detection and correction.

We can explore different improvement alternatives with this simulator and estimate their effect on the final ppm going to the customer.

PTH insertion, being a manual operation, is the main contributor to defects. We can look at ways to add some Poka-Yokes (mistake-proofing devices) or at ways to automate it. Suppose we are able to reduce the PTH insertion DPMO from 1500 to 500: the result would be a reduction of the outgoing defect level to 1120 ppm.

If we can further improve the visual inspection coverage to 80% through automation, then it would go down to 560 ppm.

If we redesign the board to eliminate all PTH components and convert them to SMT we would further reduce it to 384 ppm.

In this simulator the process DPMOs come from measuring our current processes. Theoretically they should be product independent, but in actual fact they are not: some designs produce more manufacturing defects than others and the process parameters for some products may not be optimized.

This simulation helps us focus on the main contributors of defects and defect escapes from our tests. In this way we can identify who has to do what in order to improve the process. 

Download this VSM example:   Value Stream Map and Modeling

Value Stream Maps are typically designed to flow horizontally left to right. When a process is complex it becomes difficult to view the whole process: it may require several pages and this makes it difficult to see the total picture.

Drawing the VSM vertically allows us to visualize complex processes on a single page.

Another common practice is to map the process by sticking post-its on a wall or on large wrapping paper. This has the advantage of simplicity but it creates some inconveniences. It becomes impractical to save all the work done in order to continue working on it later. It is also difficult to communicate to others: photos are difficult to read, and if you ask people who were not present in the VSM exercise to give their input, they cannot modify the work you have done.

Doing a VSM in Excel has several advantages:

  1. The team involved in the VSM exercise can be located in remote locations and can all share a spread sheet in the cloud with communication via Hangouts or Skype.
  2. All the work done is not lost at the end of the meeting and it can be updated remotely at a later stage by all process participants
  3. Excel allows the addition of the process parameters for each process step by adding as many columns as required on the right:

    • Process time
    • Setup time
    • Cycle time
    • Work-In-Process waiting

Complex processes often require feedback loops: repair failing items and test them again. These loops are an essential part of the VSM because they often become the bottleneck of the total process.

When the process doesn't have a single flow but several branches, you need to estimate the proportion of items that will flow along each branch, which you can do by collecting data over a period of time.

Process times have variability so you will need to estimate not only their average values but also their standard deviations.

 

In this Repair process the average time it takes to receive a product is 2 min and its standard deviation 0.5 min.

20% of the products are under warranty so 80% will follow the “N” branch.

Average inter arrival time of products coming from the customer for repair is 5 min. This is the takt time the process has to comply with. 

 

Not all products go through all process steps, therefore we need to calculate % throughput for each step based on the % of products flowing along each branch.

In order to balance the line we need the right staffing at each process step to ensure enough capacity to handle the product arrivals. This means that Cycle time ≤ Inter-arrival time (takt).

Cycle time = Average time × % Throughput / Staffing
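Since each step's cycle time must not exceed the takt time, the minimum staffing for a step is Average time × % Throughput / Takt. For the Receive step, for instance, assuming 100% throughput: 2 min × 1.00 / 5 min = 0.4 FTE.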

We can calculate the staffing (Full Time Equivalents) required for each step (M4:M13) with Solver.

To do this we calculate the total staff required in cell M1 by adding all the yellow cells in column M. This cell is what we want to minimize, as long as we meet the requirement that all cycle times (P4:P13) are below the inter-arrival time in K3.

We are adding the constraint M4:M13 ≥ 0.1. The staffing proposed by Solver in the yellow cells M4:M13 is:

This gives a total staff of 13.6 (M1). It is a theoretical minimum which assumes all staff have all skills, which is not the case.

In cells D1 to H1 we obtain the combined staffing for each process participant (some perform several operations), summed from column M.

If Receive and Coordinator can be combined, we may have to round up the sum of the two from the 1.6 required to 2.0.

Invoicing we round up from 1.8 to 2, Test from 2.9 to 3 and Repair from 7.4 to 8.

This manual rounding will lead us to this situation:

The total staff has increased from the theoretical 13.6 to 15 after considering the different skills required in the different steps.

This increase in staffing has the effect of lowering the cycle times of the affected steps below the takt time of 5 minutes.

Looking at the % Throughput column we notice that some steps only process 4% of the products to be repaired, while test and repair have to process 123%. The reason for this is that only 80% pass the test OK the first time, which means 20% have to be tested a second time. The same happens with burn-in, where 10% fail and have to be retested.

 So far we have not taken into account variability but we will do so by adding some additional columns to our VSM converting it into a simulator in our next article: How to convert your VSM into a simulator to experience variation

 

In part 1: Excel Value Stream Map we saw how a complex process can be value-stream-mapped using Excel. We saw the importance of mapping feedback loops, such as test-repair loops, because they often become the bottleneck of the complete process. In this situation not all items flow along all branches, so we need to calculate the % flowing along each branch. This enabled us to calculate the cycle time for each workstation based on its staffing. Finally, using Solver, we calculated the ideal staffing to balance the line: ensuring no workstation cycle time is above the average inter-arrival time (takt time).

Download this VSM simulator example:   Value Stream Map and Modeling

A value stream map is not complete without the Work-In-Process (WIP) between process steps. The problem with WIP data is that it typically varies widely over time, so a snapshot taken at the moment we visit the line may be very misleading. In our VSM Simulator the WIP before each workstation (column Q) is recalculated every hour:

New WIP = Previous WIP + Items processed by previous step – Items processed by this step

Items processed (column R) are calculated each hour based on the Average time (K), Time standard deviation (L) and Staffing (M), taking into account the constraint that the previous step must have enough WIP. The simulator assumes that process times follow a normal distribution.

Process times: Normal distribution with averages (K) and standard deviations (L)

 On the other hand Inter-arrival time (Takt) follows an exponential distribution:

Inter arrival time distribution: Exponential with average K3
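As an illustration, one simulated hour for a single workstation can be sketched in Python under the same assumptions (normally distributed process times, exponentially distributed inter-arrival times). The parameters used are those of the Receive step (2 min average, 0.5 min standard deviation, 5 min takt) with its theoretical minimum staffing of 0.4 FTE; all names are illustrative, not the workbook's:

```python
import random

def arrivals_in_hour(mean_interarrival_min=5):
    """Count arrivals in 60 minutes with exponential inter-arrival times."""
    t, count = 0.0, 0
    while True:
        t += random.expovariate(1 / mean_interarrival_min)
        if t > 60:
            return count
        count += 1

def hourly_capacity(avg_min, sd_min, staffing):
    """Units one step can process in an hour, with a normally distributed unit time."""
    unit_time = max(random.gauss(avg_min, sd_min), 0.1)   # guard against negative samples
    return int(60 * staffing / unit_time)

wip = 0
for hour in range(1, 9):
    incoming = arrivals_in_hour()                          # items arriving this hour
    capacity = hourly_capacity(avg_min=2, sd_min=0.5, staffing=0.4)
    processed = min(wip + incoming, capacity)              # cannot process what is not there
    wip = wip + incoming - processed                       # New WIP = previous + in - out
    print(hour, incoming, processed, wip)
```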

Lead Time in column T, for each process step, is the average time it takes for an item to be processed in that step including the waiting time due to the WIP in front of it.

Capacity utilization in column U is the average % time the workstation has been operating since the start of simulation based on its theoretical capacity.

WIP (Q) and Capacity Utilization (U) are shown on bar charts on the right:

Elapsed hours simulated are in B2. Products received for repair,  products repaired and products returned without repair are shown in column A:

 Total values for columns K to U are shown in row 1:

K1 represents the average Value Add Time per product

M1 is the total staffing in the line

P1 is the maximum value of all cycle times: it corresponds to the line bottleneck capacity. P1 should be ≤ K3 (Takt)

Q1 is the total WIP in the line. 

Received (A3) = WIP (Q1) + Repaired (A13) + Returned without repair (A8)

T1 is the Total Lead Time (hours): The average lead time seen by the customer (not including the customer contribution)

U1 is the overall line capacity utilization.

The value-add to total time ratio can be calculated as:

Value Add Time / TOTAL Time = K1 / (T1 x 60)
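For instance, once the line stabilizes (see the evolution graphs below) the total lead time is about 12 hours with roughly 60 minutes of value-add time, so the ratio is 60 / (12 × 60) ≈ 8%, i.e. NVA/VA ≈ 11.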

To run the simulator:

press F9 to simulate 1 hour. Keep pressing to run several hours. The counter in B2 shows the elapsed simulator hours.

To start the simulation with an empty line, press Ctrl + r. This will reset all line WIPs to zero.

 Evolution graphs

In the Evolución tab you can see how the process and WIP values evolve hour by hour in the main steps, as well as the evolution of the total Lead Time:

During the first hours, when the line is empty, there is no waiting time along the process, so the total lead time is basically equal to the value-add time of about one hour. As queues build up, as shown in the WIP chart, waiting time starts to increase at all workstations and that causes the total lead time to increase to a value of 12 hours, where it seems to stabilize. This means that if we take a snapshot of the line in hour 7, basically all lead time is value add. On the other hand, if we look at the line in hour 22, when stability seems to have been reached, lead time is 12 times the value-add time: NVA/VA = 11. Non-value-add time is made up of all the waiting taking place in front of each workstation.

We can see that, although the average inter-arrival time is 5 min (12 items per hour), 3 arrivals of more than 30 items each have overwhelmed receiving, causing an accumulation of 80 items at reception, which has eventually been absorbed by the line.

This is a typical case where most of the variability comes from the customer inter arrival times as seen in the process chart. 

By tracking this evolution you can discover unexpected bottlenecks caused by variability that may need to be reinforced with additional staff. We can also estimate the total lead time we can commit to customers: in this case, if the process is stable, a minimum of 12 hours.

These are some of the effects we can detect in our simulation:

  • Accumulation of WIP in front of a workstation due to insufficient staffing

  • Low capacity utilization due to excess staffing

  • Accumulation of WIP in front of the repair station due to low first-pass-yield in test (N11)

  • Accumulation of WIP in reception (Q4) due to arrivals above plan

  • High overall lead time due to excess total WIP in the line 

  • Highly unbalanced line as shown by large differences in capacity utilization

  • Proportion of products under warranty different to plan: it would require line re-balancing

 Conclusions:

  1. A static Value Stream Map is a good start to understand how the process behaves 

  2. Feedback loops such as test-repair loops are an essential part of the process so they should be in the VSM

  3. A snapshot of the WIP situation along the line may not be representative of normal operation

  4. If the WIP situation is not typical the NVA calculation will not be correct

  5. The VSM simulator provides a deeper understanding of the process behavior and it enables what-if studies to optimize the process

  6. Simulation helps us understand some of the failure mechanisms caused by variability so we can act on the root cause to make real improvements

 

 

Time Stamp: A key process metric you can collect for free

 

Data collection is not free. While an operator is recording data, he/she is not producing. In general, variable data such as speed, temperature, etc. requires some sort of measuring equipment and is therefore more expensive to collect than attribute data: Pass/Fail, On-time/Late, etc.

Time is a key variable in any process and it can be collected automatically by any PC, laptop, tablet or smartphone, so it does not require any additional measuring equipment.

Entering a date or a time manually into a terminal is prone to multiple typing errors which if undetected can distort all subsequent key metrics.

If data is collected on paper and then typed into the system at a later stage, apart from the waste of this operation, the errors might be difficult to correct.

If we are able to simplify the data recording operation, the operator can report in real time all incidents or data entries, in which case the data collection application can automatically associate a Time Stamp to each piece of data entered:

 

Incident reported     Automatic Time Stamp
Line stopped          21/06/2017  19:40
Line restarted        21/06/2017  20:08

 

The operator just selects the incident from a drop down list and the time stamp is added automatically. 

In the previous example by subtracting the two time stamps you know the line was stopped for 28 minutes. 

The way to register a time stamp with Excel is: 

                      English     Spanish
Current Time Stamp    =NOW()      =AHORA()

 

From a time stamp X you can easily obtain: 

 

                      English            Spanish
Day of the week       =WEEKDAY(X,2)      =DIASEM(X;2)
Week of the year      =WEEKNUM(X)        =NUM.DE.SEMANA(X)

 

You can also obtain the Shift (Morning, Afternoon, Night) from the hour of the Time Stamp with a formula.
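For instance, the mapping from the hour of the time stamp to the shift could be sketched like this (the shift boundaries are assumptions; adjust them to the actual plant schedule):

```python
def shift(hour: int) -> str:
    # Assumed 8-hour shifts: 06-14 Morning, 14-22 Afternoon, 22-06 Night
    if 6 <= hour < 14:
        return "Morning"
    if 14 <= hour < 22:
        return "Afternoon"
    return "Night"

print(shift(19))   # the 19:40 time stamp above falls in the Afternoon shift
```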

 This data obtained from the time stamp could be very useful during analysis in order to detect significant differences between: 

  • Hours
  • Shifts/ operators
  • Days of the week
  • Weeks
  • Months

 Very often data is still being collected on paper because there are no terminals at the point of use and it would be very costly to install them.

On the other hand we all use cloud-based technologies in our private lives: smartphones, tablets and laptops sharing data in the cloud, with basically no cost other than the Wi-Fi/broadband connection.

A new BYOD (Bring Your Own Device) approach is being practiced by some companies, which makes data collection almost free by using the employees' own smartphones or tablets with the company Wi-Fi and private cloud.

In the following example a spreadsheet is shared in Google Drive by line operators, who report line incidents in real time with their phones, and by maintenance engineers, who report the repair actions the moment they perform them, also with their phones.

Equipment | Component | Failure date | Failure | Severity | Planned repair date | Repair date/time | Repair action
Fire protection system | Tank | 25/03/2017 13:02:52 | Leak | A | 26/03/2017 | 25/03/2017 13:19:21 | Readjusted
Polisher | Pump | 25/03/2017 13:03:58 | Leak | A | 26/03/2017 | 25/03/2017 13:08:17 | Cleaned
Calibrator | Rocker arm | 25/03/2017 13:18:04 | Deformed | B | 1/04/2017 | 25/03/2017 13:20:33 | Replaced
Centrifuge | Cable | 25/03/2017 13:18:19 | Burnt out | A | 26/03/2017 | 25/03/2017 13:20:43 | Action
CIP SKID | Plate | 25/03/2017 13:18:29 | Machine lost its program | C | 24/04/2017 | 25/03/2017 13:21:13 | Reprogrammed
Conveyor belt | Bulb | 25/03/2017 13:18:40 | Defective | C | 24/04/2017 | 25/03/2017 13:21:18 | Replaced
CIP process | Lubricator | 25/03/2017 13:19:05 | Broken | A | 26/03/2017 | 25/03/2017 13:21:44 | Replaced
Rotary filter | Plate | 25/03/2017 13:19:17 | Rusted | B | 1/04/2017 | 25/03/2017 13:21:54 | Replaced
Conveyor belt | Bucket | 25/03/2017 13:19:32 | Fuse trips | B | 1/04/2017 |  |
Centrifuge | Pump | 29/03/2017 17:25:21 | Leak | A | 30/03/2017 |  |

Operators select the equipment, component, defect and severity with a single touch on each of the pull-down lists. The current time stamp is entered automatically, and the expected repair date is also calculated automatically from the severity and the failure-report date.
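In the rows above, the planned repair date appears to follow the severity: type A failures are due the next day, type B within a week and type C within a month. A minimal sketch of such a rule (the day offsets are inferred from the example data, not taken from the actual spreadsheet formulas):

```python
from datetime import date, datetime, timedelta

# Repair window per severity, inferred from the example rows above
REPAIR_WINDOW_DAYS = {"A": 1, "B": 7, "C": 30}

def planned_repair_date(failure_ts: datetime, severity: str) -> date:
    return (failure_ts + timedelta(days=REPAIR_WINDOW_DAYS[severity])).date()

print(planned_repair_date(datetime(2017, 3, 25, 13, 2, 52), "A"))   # 2017-03-26
```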

If a repair date is missing and the expected repair date is exceeded, the cells turn red to warn the maintenance engineers. The moment the maintenance engineer enters the repair action, the time stamp is also entered automatically.

Lists of pending repairs are obtained from this table with pivot tables to inform the maintenance engineers of all pending repairs and their due dates and severities.

We can easily extract  reports such as:

  • Average repair times per severity 
  • Correlation between defects and repair actions 
  • Frequency of defective components for each equipment 

CONCLUSIONS: 

  • Paper recording can easily be replaced by widely used devices such as smart phones and tablets which require no operator training
  • Easy  availability of these devices enables reporting in real-time and therefore real-time control of the process
  • This enables the automatic recording of time stamps associated to each piece of data the moment it is entered
  • Data entry is made much simpler and more robust by eliminating data entry errors, which are difficult and cumbersome to detect and correct at a later stage.