Efficiency improvement feels like a good thing. I mean it makes sense, doesn’t it? If we can make something more efficient, it stands to reason that it should improve the process.
Not always. I propose that in most cases an increase in efficiency will be of little benefit, if any, to the overall process. In fact, it can actually reduce overall throughput.
How can that be? How could I increase the efficiency of a machine and get less output? Let’s say we had a production line that was well designed and had buffering in the proper places. Upstream of the constraint is a machine that runs 20% faster than the constraint and does a nice job of keeping the buffer full. Let’s also say that the machine is running at 85% efficiency (MTBF/(MTBF+MTTR)). Someone gets the bright idea that if they slow the machine down 20%, the efficiency will rise to 92%.
That also sounds like a good thing. However, if you plug those numbers into the analysis, you will find the total output will go down. This being the case, efficiency cannot be an appropriate measure of performance. It doesn’t do what we want. What we want is more quality product off the end of the line, not less.
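To see why, it helps to plug in some numbers. This is a minimal sketch: the constraint rate of 100 units/hr is an assumed figure for illustration, while the 20% rate difference and the 85% and 92% efficiencies come from the scenario above.

```python
# Worked example of the efficiency trap. Assumed: constraint runs at
# 100 units/hr. The 20% speed margin and the 85%/92% efficiencies are
# the numbers from the scenario in the text.

def effective_rate(nameplate_rate, efficiency):
    """Long-run output rate, discounted for downtime:
    nameplate_rate * MTBF / (MTBF + MTTR)."""
    return nameplate_rate * efficiency

constraint_rate = 100.0                  # assumed units/hr

fast = effective_rate(120.0, 0.85)       # 20% faster, 85% efficient
slow = effective_rate(96.0, 0.92)        # slowed 20%, 92% efficient

for label, rate in (("fast", fast), ("slow", slow)):
    verdict = "feeds" if rate >= constraint_rate else "starves"
    print(f"{label} machine: {rate:.1f} units/hr -> {verdict} the constraint")
```

The faster machine averages 102 units/hr, so it keeps the buffer full; the "more efficient" slowed machine averages only about 88 units/hr, so over time it starves the constraint and total output falls.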
The difference in the rates of the machines is what allows you to recover from malfunctions on the non-constraints. Later on, in the training, I will go into detail on how to calculate the low-end threshold for efficiency for non-constraint machines. And, guess what happens? The greater the rate difference between the constraint and the non-constraints, the less efficiently they can run without hurting the overall output. That may not seem to make sense now, but it will by the time we’re done.
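The detailed threshold calculation comes later in the training, but a first-order sketch follows directly from the numbers above: a non-constraint can tolerate downtime as long as its average output still matches the constraint's pace, so the lowest acceptable efficiency is just the ratio of the two rates. The function name and the 100 units/hr baseline are illustrative assumptions.

```python
# First-order low-end efficiency threshold for a non-constraint.
# A non-constraint keeps up on average when rate * efficiency >= constraint
# rate, so the minimum efficiency is the ratio of the rates. The function
# name and the 100 units/hr baseline are assumptions for illustration.

def minimum_efficiency(constraint_rate, nonconstraint_rate):
    """Lowest efficiency at which the non-constraint still matches the
    constraint's average pace."""
    return constraint_rate / nonconstraint_rate

for speedup in (1.1, 1.2, 1.5, 2.0):
    e_min = minimum_efficiency(100.0, 100.0 * speedup)
    print(f"{(speedup - 1) * 100:.0f}% faster -> "
          f"can run as low as {e_min:.0%} efficiency")
```

This is the counterintuitive result in numbers: a machine only 10% faster than the constraint must stay above roughly 91% efficiency, while a machine twice as fast can run at 50% without ever hurting total output.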
Recovery time is an excellent objective for machines. If a machine is a non-constraint and it is downstream of the buffer, we make sure there is enough rate difference between the machines to use down the buffer before there’s another malfunction. Therefore, in effect, the non-constraint is running at 100% because it never shuts the constraint down.
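The "use down the buffer before the next malfunction" test can also be sketched: after a stoppage, backlog drains at the excess of the non-constraint's rate over the constraint's, and the resulting recovery time should be shorter than the machine's MTBF. The scenario numbers (a 30-minute stoppage, 100 units/hr constraint, 20% faster non-constraint) are assumptions for illustration.

```python
# Sketch of the recovery-time check. During a non-constraint stoppage the
# constraint keeps producing, so backlog builds; afterward it drains at the
# rate difference. All numbers here are assumed for illustration.

def recovery_time(backlog_units, nonconstraint_rate, constraint_rate):
    """Hours needed to work down `backlog_units` while the constraint
    keeps feeding material at constraint_rate."""
    excess = nonconstraint_rate - constraint_rate
    if excess <= 0:
        raise ValueError("non-constraint must be faster than the constraint")
    return backlog_units / excess

# Assumed scenario: a 30-minute malfunction while the constraint runs at
# 100 units/hr builds 50 units of backlog; a machine 20% faster drains
# it at 20 units/hr.
backlog = 100.0 * 0.5
hours = recovery_time(backlog, 120.0, 100.0)
print(f"recovery takes {hours:.1f} h")   # should be well under the MTBF
```

If the computed recovery time approaches the MTBF, the buffer never fully refills between malfunctions, and sooner or later the constraint gets shut down.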