Efficiency sounds like a good thing. Whether it’s in our personal lives or on our production lines, we all want to do things as efficiently as possible. That will help us reach our goals. Right?
In fact, there are times when our quest for efficiency can actually make us less efficient. On a manufacturing line, efficiency improvements can even reduce your throughput, which is certainly not a desirable outcome. (For a fascinating non-manufacturing take on this topic, check out historian Edward Tenner’s TED Talk: “The Paradox of Efficiency”.)
We’ve written about this topic before, but now seems like a good time to revisit it because efficiency is once again at the top of many manufacturers’ lists of objectives. For example, Food Engineering’s latest State of Food Manufacturing Survey found that, having improved their throughput, food processors are placing renewed focus on increasing efficiency.
If you’re in this group, it’s important that you think carefully about how you approach efficiency improvements so that they don’t end up having unintended consequences, like throughput reductions.
How can higher efficiency possibly be a bad thing?
When manufacturers talk about efficiency, they’re typically talking about the amount of time a particular machine is up and running relative to the amount of time it could be up and running.
Efficiency = Uptime / Total Time
If a machine runs for 85 out of every 100 minutes, its efficiency is 85%. If you can take an action to increase the machine’s uptime to 92 minutes, then its efficiency will be 92%. That’s an improvement, right?
Maybe, but maybe not. There’s a lot of wiggle room in the “take an action” part of the scenario. And this wiggle room makes efficiency alone a poor measure of performance. It’s counterintuitive, but we’ve been in plants where machine operators took an action that resulted in higher efficiency, but lower throughput.
Suppose you have a beverage bottling line that includes a decaser, a filler, a capsuler, a labeler, and a case packer.
Your decaser is capable of processing 240 bpm (bottles per minute). Of course, machines can’t run constantly, so we assume some downtime, which we quantify using two metrics: the mean time between failures (MTBF) and the mean time to repair (MTR). For your decaser, failures happen after an average of 15 minutes of runtime, and they take an average of 1.5 minutes to repair.
Let’s calculate the efficiency and throughput for this machine using the following equations:
- Efficiency = (MTBF / (MTR + MTBF)) x 100
- Throughput = Efficiency (as a decimal) x Rate
For a full explanation of these calculations, see our article “What Is Throughput? (And Why You Should Care).”
| Machine | MTR | MTBF | Rate | Efficiency | Throughput |
| --- | --- | --- | --- | --- | --- |
| Decaser | 1.5 min | 15 min | 240 bpm | 91% | 218 bpm |
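The two calculations can be sketched in a few lines of Python (the numbers come from the decaser example; the function names are ours, not a standard library):

```python
def efficiency(mtbf_min, mtr_min):
    """Fraction of time the machine is running: MTBF / (MTR + MTBF)."""
    return mtbf_min / (mtr_min + mtbf_min)

def throughput(rate_bpm, mtbf_min, mtr_min):
    """Effective bottles per minute once downtime is accounted for."""
    return efficiency(mtbf_min, mtr_min) * rate_bpm

# Decaser: MTR = 1.5 min, MTBF = 15 min, rated speed = 240 bpm
eff = efficiency(15, 1.5)          # ~0.909, i.e. 91%
tput = throughput(240, 15, 1.5)    # ~218 bpm
print(f"Efficiency: {eff:.0%}, Throughput: {tput:.0f} bpm")
```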
In terms of efficiency, 91% is pretty good. But let’s say you have a goal of 95%. What can you do?
Perhaps one of your operators has discovered that if you run the decaser at a slightly lower rate, processing only 220 bpm, the mean time between failures increases from 15 minutes to 30 minutes. Slowing the machine down will result in greater efficiency, because you’ll have to stop it less often.
That sounds like a great idea! But what happens when we plug these numbers into the efficiency and throughput equations?
| Machine | MTR | MTBF | Rate | Efficiency | Throughput |
| --- | --- | --- | --- | --- | --- |
| Decaser | 1.5 min | 30 min | 220 bpm | 95% | 209 bpm |
As you can see, when the machine slows down, its efficiency increases to an impressive 95%. But, at the same time, its throughput declines by 9 bpm. That’s a decrease of 540 bottles per hour and 4,320 bottles over the course of an 8-hour shift. If you run just one shift a day, 5 days a week, then by the end of the year, your efficiency improvement will have reduced your throughput by 1,123,200 bottles!
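You can check that arithmetic with a quick sketch (Python; throughputs are truncated to whole bottles, matching the tables, and the schedule assumption is the one stated above: one 8-hour shift, 5 days a week, 52 weeks):

```python
def throughput(rate_bpm, mtbf_min, mtr_min):
    # Efficiency = MTBF / (MTR + MTBF); throughput = efficiency x rate
    return mtbf_min / (mtr_min + mtbf_min) * rate_bpm

# Whole-bottle throughputs, truncated as in the tables
fast = int(throughput(240, 15, 1.5))   # 218 bpm at 91% efficiency
slow = int(throughput(220, 30, 1.5))   # 209 bpm at 95% efficiency

loss_per_shift = (fast - slow) * 60 * 8   # bottles lost per 8-hour shift
loss_per_year = loss_per_shift * 5 * 52   # one shift/day, 5 days/week
print(loss_per_shift, loss_per_year)      # 4320 1123200
```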
A different way of thinking about efficiency
In the example above, greater efficiency didn’t result in higher throughput — on the contrary, it did exactly the opposite — because efficiency alone isn’t a good measure of performance. The ultimate goal isn’t just to have your machines running all of the time; it’s to have more quality products coming off of the line.
The problem with efficiency-based approaches to production line improvements is that they’re too narrow. The example above illustrates a special case where boosting efficiency was detrimental to throughput. A more common scenario is that an efficiency improvement to a particular machine has no material impact on the throughput of the entire process.
Of course, we’re not suggesting that efficiency doesn’t matter at all. If your machines were down all of the time, you wouldn’t be able to make any products. But once you reach a reasonable level of efficiency, like the 91% in the decaser example, raising it a couple of percentage points isn’t likely to make a huge difference to your overall production goals.
The reason for this is that machines don’t work alone. In the example line, there was a decaser, a filler, a capsuler, a labeler, and a case packer. But we only looked at the efficiency of the decaser. And on a bottling line, the decaser isn’t typically the rate-limiting machine.
A lot of us here at Garvey are runners, so let’s use a running analogy.
Say you and I are running a 6-mile partner race — not a relay, but one where we start together and cross the finish line together. You’re in much better shape than I am — while you can easily run a 6-minute mile, I struggle to consistently hit the 10-minute mark. For us to finish the race together in 60 minutes, I need to run the whole time (efficiency at 100%), while you could run for just 36 minutes (efficiency at 60%). You could slow down to an 8-minute mile, which would improve your efficiency to 80%, but it would still take 60 minutes for us to cross the finish line. We could even add three more people to the team, all running at peak efficiency somewhere between 6 and 10 minutes a mile, and we still wouldn’t finish the race in less than an hour.
In other words, it’s not you (or the rest of the team), it’s me.
I’m the constraint. Everyone else can run as fast or as slow as they want. As long as I’m the slowest person, I will set the pace for everybody.
The exact same thing happens on your production line. The slowest machine is your constraint, and it’s the one that’s setting the pace. Even if you haven’t done the calculations, chances are you already know what this machine is. On bottling lines like the one described above, the slowest machine is typically the filler. (Check out our previous articles on common production constraints on beverage and food packaging lines.)
Here’s an example of efficiency and throughput calculations for a complete bottling line:
| Machine | MTR | MTBF | Rate | Efficiency | Throughput |
| --- | --- | --- | --- | --- | --- |
| Decaser | 1.5 min | 30 min | 240 bpm | 95% | 228 bpm |
| Filler | 4.0 min | 60 min | 180 bpm | 93% | 167 bpm |
| Capsuler | 30 sec | 15 min | 240 bpm | 95% | 230 bpm |
| Labeler | 45 sec | 10 min | 240 bpm | 92% | 220 bpm |
| Case packer | 3 min | 40 min | 300 bpm | 93% | 279 bpm |
Here you can see that no matter how quickly the other machines can run, you’ll never achieve throughput of greater than 167 bpm on this line, because that’s the pace being set by the constraint. We can use this idea to reframe how we think about efficiency:
- The constraint is the only machine for which maximizing efficiency really matters — only by maximizing this machine’s runtime can you achieve peak throughput.
- Rate differences between the constraint and the machines immediately upstream and downstream of it are desirable because they allow you to recover from malfunctions on the non-constraint machines without sacrificing throughput.
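The constraint can be found directly from the line’s numbers: compute each machine’s effective throughput and take the minimum. A minimal Python sketch, with the values from the full-line table (seconds converted to minutes; the computed throughputs may differ from the table by a bottle or two of rounding):

```python
# Machine: (MTR min, MTBF min, rated bpm) — values from the full-line table
line = {
    "decaser":     (1.5,  30, 240),
    "filler":      (4.0,  60, 180),
    "capsuler":    (0.5,  15, 240),   # 30 sec MTR
    "labeler":     (0.75, 10, 240),   # 45 sec MTR
    "case packer": (3.0,  40, 300),
}

def throughput(mtr, mtbf, rate):
    """Effective bpm: (MTBF / (MTR + MTBF)) x rated speed."""
    return mtbf / (mtr + mtbf) * rate

# The line can never move faster than its slowest machine — the constraint
constraint = min(line, key=lambda m: throughput(*line[m]))
print(constraint)  # filler
```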
Let’s dig into these ideas.
Maximizing the efficiency and throughput of your constraint
Since the constraint sets the pace for the rest of your line, you want to make sure that it’s working as much as possible. Just like every minute of rest I take during our race immediately translates into a later finish for the entire team, every minute your constraint is down represents an immediate hit to your throughput, and, in turn, your bottom line.
Of course, no machine can run at 100% efficiency. But, say your constraint reaches a respectable 93%. The key is to make sure that for that 93% of the time the machine is up and running, it has products to process — that, for example, an upstream machine hasn’t stopped working and caused the constraint to stop as well by starving it of products.
How do you do that? By adding buffers.
Buffers protect your constraint from the malfunctions of other machines by giving products a space to accumulate when they’re not actively being processed. This is why the most effective buffers are accumulation systems.
By placing one buffer before the constraint and another one after the constraint, we can effectively isolate the constraint from the other machines on the line:
- Protecting the constraint from upstream malfunctions: On our line, the decaser processes 61 more bottles a minute than the filler. Rather than waiting for the filler to be ready, the decaser can feed products to an accumulator. Then, when the decaser fails, which it does roughly every 30 minutes, the filler can keep working on the products that have stockpiled in the buffer.
- Protecting the constraint from downstream malfunctions: Similarly, we don’t want the filler to stop working because the capsuler is down, so we place another accumulator after the filler so it can create a store of products ready to proceed down the line.
By doing this, we help ensure that the filler keeps working at maximum capacity at all times.
Understanding rate differences
In the table above, you’ll notice that the throughput rates are strikingly different between machines. Even working its hardest, the filler can only process 167 bpm, while all of the other machines can easily handle 200+. This isn’t a problem. In fact, it’s a desirable characteristic that helps your line recover more quickly from malfunctions.
There are two ways rate differences help keep your line running, even when machines fail:
- They guarantee enough products have accumulated in the buffer so that a stoppage of one machine doesn’t starve the next machine. The decaser sends an extra 61 products to the buffer every minute, which means that after 10 minutes, there are 610 bottles in the buffer. That’s enough to keep the filler busy for almost 4 minutes if the decaser malfunctions.
- They make the efficiency of the non-constraint less important. The greater the rate difference between the constraint and the non-constraint, the less efficiently the non-constraint can run without hurting your overall output. If, for some reason, the efficiency of the decaser decreased from 95% to 93%, or even to the 91% of our original example, that would be okay as long as it doesn’t fall to the point of impacting the constraint. You can run a 6-minute mile, a 7-minute mile, an 8-minute mile, or a 9-minute mile — the important thing is that you don’t prevent me from trucking along at my 10-minute pace.
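The buffer math from the first point is easy to sketch (Python; the throughputs are the rounded values from the full-line table):

```python
# Throughputs from the full-line table (bpm):
decaser_tput = 228   # feeds the buffer
filler_tput = 167    # drains the buffer (the constraint)

fill_rate = decaser_tput - filler_tput   # 61 extra bottles/min accumulate
minutes_running = 10
buffered = fill_rate * minutes_running   # bottles waiting in the buffer

# If the decaser goes down, how long can the filler keep running?
coverage_min = buffered / filler_tput
print(f"{buffered} bottles keeps the filler busy for {coverage_min:.1f} minutes")
```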
The key takeaway is that if you want to increase production, efficiency isn’t going to get you there.
So, what will?
The metric that will help you reach your goal is throughput. We’ve written extensively about throughput on this blog. Here are some articles to get you started exploring this topic in more detail: