Tuesday, August 19, 2025

Posts from a Business Class

I recently completed a course in total quality management (TQM), and here are all my writings from that class. This was only my second business class, and these classes have one thing in common: their adherence to the "happy path." By this I mean that the textbooks and course material assumed that all involved individuals are perfect angels (except for those of us who reject TQM), and that the world is adorned with sprinkles and populated by unicorns that fart Skittles. For example, advocates of TQM call for "employee engagement," but managers are not prepared for when that truly happens.

Despite this, I do not consider this class to be a waste of my time. When someone asks me "when am I going to use this?" I counter with: "when is it going to use you?" This holds even for business management classes. Plus, those classes have a perverse entertainment factor: there is enough that is correct about TQM that when it goes wrong, it is worth observing, like a slow-motion car accident.

To put a bow on all this, I present this list of posts. They are grouped by topics, and are also placed in the order they were written during this eight week course. There are some new ideas, some humor, and some criticism. Mostly, though, these posts are about connecting business management topics to the concepts I find interesting.


General Criticisms of TQM


New and Interesting Concepts


Humor


Military/Militia-Related


Voice of the Customer (VOC) and when Vox Populi is not Vox Dei


Just-in-Time Manufacturing


Leadership and Management

  

Week 1

Getting Started with Total Quality Management

Week 2

Culture, Ethics, and Strategic Alliances

Week 3

Leadership, Customers, and Employees

Week 4

Teams, Communication, and Training

Week 5

Quality Tools and Programs

Week 6

Continual Improvement Methods

Week 7

Benchmarking and Implementation of TQM

Week 8

Application of Quality Theories

Sunday, August 17, 2025

A Sorted Tale of Pizza and Red Bull!

Introduction to Gaggle dot Com

Gaggle dot Com is a small software company founded in June 2025. The staff consists of 3 rockstar software developers, the manager/founder (also a rockstar developer), a one-man sales and advertising team, and a graphic designer who consults on an as-needed basis.

Their primary product is GaggleMail, which is not a Gmail rip-off. Pinky promise.

The first version of GaggleMail was developed by the founder in four straight 24-hour days, while he was strung out on multiple cases of Red Bull.

After this coding binge, the founder saw that it was good. He then had to be medevac'd to the closest hospital on account of heart palpitations.

Once released from the hospital, the founder decided to hire 3 buddies who were also rockstar software developers. Getting this many rockstar developers in one garage can only result in the formation of a militia or the founding of a startup company. Fortunately for everybody else, the outcome was the latter.

The founder paid his team in the universal currency of software engineers: pizza and Red Bull.

Next, the founder asked his sister to design a logo for Gaggle dot Com and to produce some screen designs that the rockstar developers could implement. The screen designs were good, but the logo looked like it was made in Microsoft Paint.

The final logo was made by a freelancer on Fiverr.com. It took the freelancer twenty minutes, and it looked like it was made using AI. That's OK, though.

The rockstar developers improved GaggleMail, incorporating the screen designs made by the founder's sister and the AI-generated logo. They saw that it was good and decided to tell the world about it.

They hired a sales and marketing guy named Bob. He had two tasks: advertise GaggleMail and raise venture capital.

Bob's first marketing campaign involved TikTok videos showing three geese swimming in a pond, each with an envelope taped to its beak. Just like in the logo. He was arrested on animal cruelty charges. The founder bailed him out, and the next advertising campaign’s TikTok videos were made using AI.

For the third campaign, Bob decided to repeat the classic "Turkey Drop" from that old TV show called "WKRP in Cincinnati." Bob knew that geese could fly, so he tied them up in rubber suits as seen in “Pulp Fiction” and gagged each of them with envelopes.

He hired an airplane with a banner that read "GaggleMail by Gaggle dot Com." He loaded the bound geese into the plane and had the pilot fly at an altitude of fifty feet above a shopping mall's parking lot.

He had one of his friends recording this on an iPhone.

Bob proceeded to toss the three geese out of the airplane. Splat! Splat! Splat!

Again, Bob was arrested for animal cruelty. The founder again bailed him out and made him promise to only use AI from now on to make his TikTok videos. The video of the geese hitting the parking lot went viral! Gaggle dot Com was getting noticed!

Bob started raising venture capital and was somewhat successful, as long as he avoided the SPCA crowd.

Gaggle dot Com is now "ramen noodle profitable" and is poised to take the internet by storm. A storm of geese, but still a storm!


Problem Statement: Increase Quality!

With the boost following the "Geese Drop" video, the user base of GaggleMail grew rapidly. The GaggleMail product was performing well, until one day when the rockstar devs were inundated by emails from customers stating that GaggleMail was loading slowly and sometimes not even available!

Being only ramen noodle profitable, the manager/founder could not afford to hire customer service reps or a QA guy. Bob the sales and marketing guy was raising venture capital, and the manager/founder didn't want to take him away from that task. Besides, the manager/founder was tired of bailing him out.

So, the manager/founder led from the front! He took the following actions together with his 3-man rockstar developer team.

First, he chose one of the rockstars, the one who is a good writer, to craft emails that would be sent to the customers experiencing problems.

Second, he measured GaggleMail's response time and uptime (the percentage of time it was available). The numbers looked bad: the response time was 15 seconds, and the uptime was only 75%. No wonder their customers were not happy!

Third, the manager/founder worked with the remaining rockstars to diagnose the problem.

It turned out that the problem was with the computer used to host Gaggle dot Com. That computer sat in his father's basement. He called his dad, and his dad was in a panic! "The server is melting, son!" He sounded like a goose with its head cut off.

Fourth, the manager/founder set goals for the metrics: he wanted a response time of under 1 second, and an uptime of 97%. Baby steps.

Fifth, the manager/founder worked with the 2 remaining rockstar devs (meaning, the ones not sending out apology emails) to calculate just how powerful a computer they would need to handle GaggleMail's current user base at the desired response time and uptime. They then projected how many users GaggleMail would have in a year, assuming that Bob does NOT make another viral video. They reran the calculations based on that number of people at the desired metrics.
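A back-of-the-envelope version of that calculation might look like the sketch below. Every number in it is a made-up assumption for illustration (the user count, growth rate, and per-user load are not real GaggleMail figures):

```python
# Hypothetical capacity estimate for GaggleMail.
# All numbers below are illustrative assumptions, not real measurements.

current_users = 10_000   # current user base (assumed)
monthly_growth = 0.15    # 15% growth per month, assuming no new viral videos
months_ahead = 12

# Project the user base one year out (compound growth).
projected_users = current_users * (1 + monthly_growth) ** months_ahead

# Assume ~5% of users are active at any one time during peak hours,
# and each active user generates ~2 requests per second.
peak_concurrent = projected_users * 0.05
required_rps = peak_concurrent * 2

print(f"Projected users in a year: {projected_users:,.0f}")
print(f"Required peak requests/sec: {required_rps:,.0f}")
```

From there, picking the computer is a matter of comparing `required_rps` against whatever throughput the candidate machines can sustain at the target response time and uptime.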

It turned out the computer they needed was a top-of-the-line Mac Studio. The manager/founder bit his lip. "This is going to hurt!" he said. Fortunately, Bob, the sales and marketing guy, came through with some more venture capital! The manager/founder was very relieved: he didn't want his kneecaps broken by the local mob boss, again.

The manager/founder had the rockstar dev writing emails pause his work so he could join them at the Apple Store. This was going to be an experience that they would tell their children and grandchildren about, and the manager/founder wanted all his friends to be there. Arriving at the Apple Store, they looked at all the computers.

There it was, the high-end Mac Studio! The four rockstars approached it, then hesitantly touched it, like those apes that touched the monolith at the start of "2001: A Space Odyssey." The Apple Store manager, concerned, approached them. Mac groupies sometimes needed a firm hand.

"Are you going to purchase that Mac Studio, or just drool on it?" the store manager asked.

The manager/founder stepped back… he was about to fulfill a lifelong dream…

He reached into his pocket. Then, in his best Cleavon Little accent, he said "excuse me while I whip this out!" He removed his wallet from his pocket and pulled out an Amex Black Card.

Half the store gasped in fear! Some of the old women even fainted!

The four rockstar devs took their shiny new Mac Studio over to the manager/founder's dad's house and replaced the old Gateway computer sitting in his basement. They transferred the Gaggle dot Com website and database to the new Mac Studio. They tested everything out, and all was well.

They returned to the manager/founder's garage, then sent out a new email to the customers letting them know that all was well and asking them to try the NEW! IMPROVED! GaggleMail!

Bob got that evil look in his eye. He wanted to make another banger TikTok video. "No! Don't you dare!" the manager/founder scolded.

They celebrated in the only way rockstar devs and sketchy social media influencers knew how: with pizza and Red Bull.


Ongoing Measurement

The rockstars learned a valuable lesson from all this: an ounce of prevention is worth a pound of cure. So, they needed a way to prevent this problem from recurring.

The idea foremost in the manager/founder's head was the diagnostic process they used to identify the problem and the cure they used to fix it.

The problem was that customer demand exceeded the specifications of Gaggle dot Com's computer in dad’s basement.

One solution was to lay out considerable cash; unfortunately, Apple Store employees aren't fond of pizza and Red Bull. They prefer Starbucks and avocado toast.

Could this problem be anticipated? Could Ben Franklin's adage be made actionable?

The manager/founder gave considerable thought to the problem and how to anticipate it. His first idea was to purchase more Mac Studios, but there were two problems: their cost, and the very real possibility that his dad would object to the rising electric bills.

Then he hit on a compromise, sort of. The manager/founder decided that the best solution was to continually measure the chosen metrics AND the number of customers. This would allow him to do several things:

  • Determine the growth curve for the size of user base
  • Estimate the relationship between the number of users and the chosen metrics (response time and uptime)
  • Predict the number of customers that will exceed even the immense power of that Mac Studio computer sitting in dad’s basement
  • Only purchase computers on an as-needed basis
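Those measurement ideas can be sketched in a few lines of Python. The weekly user counts and the capacity ceiling below are hypothetical sample data; the point is the technique of fitting an exponential growth curve to the measurements and predicting when the Mac Studio runs out of steam:

```python
import math

# Weekly user counts (hypothetical sample data; real numbers would come
# from the ongoing measurement system described above).
weeks = [0, 1, 2, 3, 4, 5]
users = [1000, 1180, 1390, 1650, 1940, 2290]

# Fit exponential growth users ≈ a * exp(b * t) by least squares on
# log(users): the log of an exponential curve is a straight line.
n = len(weeks)
xs, ys = weeks, [math.log(u) for u in users]
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
a = math.exp(ybar - b * xbar)

# Predict when the user base exceeds the (assumed) Mac Studio ceiling.
capacity = 50_000
weeks_to_capacity = (math.log(capacity) - math.log(a)) / b

print(f"Growth rate: {math.exp(b) - 1:.1%} per week")
print(f"Capacity of {capacity:,} users reached around week {weeks_to_capacity:.0f}")
```

With this in a weekly cron job, the manager/founder knows months in advance when the next trip to the Apple Store is coming.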

Lessons Learned from the First Problem

This was Gaggle dot Com's first major problem, besides Bob and his TikTok videos. The manager/founder wanted to record his thoughts. Here’s what he came up with:

  • Pay attention to customer complaints and be prepared to address them
  • Be proactive so that complaints are limited
  • Choose appropriate metrics that are relevant to customers
  • Determine the baseline and set goals for improvement of those metrics
  • Continually measure these metrics with the goal of improving them
  • Automate the measurement process
  • Make predictions based on those measurements
  • Pay your rockstar devs well: pizza and Red Bull are the coin of the realm

Quality Philosophy Used

Should the manager/founder's actions count as a "quality philosophy?" Yes and no: he concerned himself with customer satisfaction and continual improvements, but those two factors do not count as a complete total quality management (TQM) implementation. Let's go through the "8 Principles of TQM" as listed in Isolocity (2024):

  • Customer focus – yes, customer satisfaction was the driving factor
  • Leadership involvement – the manager/founder led from the front
  • Employee involvement – they live for this stuff!
  • Process approach – heck no
  • Systematic management approach – heck no
  • Continual improvement – yes, the manager/founder took actions to improve the relevant metrics and is considering how to continue the process
  • Factual decision-making – how else could it be?
  • Mutually beneficial supplier relationships – Gaggle dot Com maintains excellent relations with the local pizza shops and Red Bull suppliers.

So, the quality philosophy used was not full TQM – it included modifications appropriate to our scrappy software company. These modifications allow Gaggle dot Com to retain the innovative nature required of all startup companies while preventing the (malignant) growth of the bureaucracy that paralyzes and destroys such companies.


Quality Tool Used: Log Analysis

The quality tool used by Gaggle dot Com wasn’t one of the usual quality tools, but it is certainly common and valuable in the software development industry: log analysis!

All computers running software like GaggleMail record some of the events taking place in that shiny new Mac Studio sitting in dad’s basement. The information recorded includes customers interacting with GaggleMail, database access, potential security concerns, system crashes, and so on. This is so much information that not even our rockstar developers could make sense of it (really, they could, they just have better things to do).

Mundane events like customer login attempts, interaction with databases, and so on usually do not require immediate analysis. However, the data recorded is still valuable and is the foundation of relevant statistical process controls (described next).

However, a log analysis tool can immediately spot security problems and system crashes. How to act on that information? Usually, the log analyzer sends a text message to one of the rockstar developers who is “on call” so that he can diagnose it and fix it.

Something our manager/founder must consider is that for many rockstars, there is not enough pizza and Red Bull in the world to be on call. So, the duty would fall on the manager/founder.
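A minimal, hand-rolled sketch of that kind of log analysis is shown below. The log format and the alert patterns are assumptions made up for illustration; a real deployment would use an established monitoring tool rather than a homemade regex scanner:

```python
import re

# Patterns for events that need immediate attention. These categories
# and keywords are illustrative assumptions, not a real log schema.
ALERT_PATTERNS = {
    "crash": re.compile(r"\b(FATAL|panic|segfault)\b", re.IGNORECASE),
    "security": re.compile(r"\b(failed login|unauthorized|injection)\b", re.IGNORECASE),
}

def scan(lines):
    """Return (category, line) pairs for log lines that should page someone."""
    alerts = []
    for line in lines:
        for category, pattern in ALERT_PATTERNS.items():
            if pattern.search(line):
                alerts.append((category, line))
    return alerts

# Made-up sample log lines.
sample_log = [
    "2025-08-17 02:14:00 INFO user bob logged in",
    "2025-08-17 02:15:12 WARN failed login for user root from 203.0.113.7",
    "2025-08-17 02:16:45 FATAL database connection pool exhausted",
]

for category, line in scan(sample_log):
    print(f"[{category.upper()}] would text the on-call dev: {line}")
```

Mundane INFO lines pass through silently and land in storage for the statistical process controls; only crashes and security events wake anyone up.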


Statistical Process Control Used

One of the best statistical process control methods for a company like Gaggle dot Com is a histogram of the hourly web traffic GaggleMail receives. This feature is usually part of system monitoring or logging tools and can be easily added to dashboards for use by all the employees at Gaggle dot Com.

As a concrete example, consider a histogram that shows the number of GaggleMail users in each hour of the day. There would have to be adjustments for different time zones, of course.
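A toy version of that histogram can be built in a few lines of Python. The timestamps below are made-up sample data standing in for a web server's access log:

```python
from collections import Counter
from datetime import datetime

# Request timestamps (hypothetical sample data; a real dashboard would
# read these from the web server's access log).
timestamps = [
    "2025-08-17 08:12:31", "2025-08-17 08:47:02", "2025-08-17 09:05:10",
    "2025-08-17 09:33:58", "2025-08-17 09:41:27", "2025-08-17 20:15:44",
]

# Bucket the requests by hour of day.
hourly = Counter(
    datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").hour for ts in timestamps
)

# Print a crude text histogram, one row per hour that saw traffic.
for hour in sorted(hourly):
    print(f"{hour:02d}:00  {'#' * hourly[hour]}  ({hourly[hour]} requests)")
```

The same hourly buckets feed everyone's use of the chart: the devs' capacity planning, the manager/founder's hosting decisions, and Bob's ad pricing.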

The rockstar devs would look at the histogram to figure out when extra computers would be needed to handle the extra traffic. In the case of GaggleMail, the brand-new Mac Studio would have to be more fully committed to making sure GaggleMail customers are serviced during peak hours. An “edgy” application of hourly web traffic is to enable or disable features in GaggleMail based on traffic volume: expensive (computer-intensive) features could be disabled during peak hours. This would lower service quality, so it must be considered a last resort.

The manager/founder will look at the chart to figure out if it makes sense to continue to host the site on a single Mac Studio computer or move to a system like Amazon Web Services which includes "load scaling" – automatically making more computers available during peak hours and taking away those computers when not needed during off hours.

Bob, the sales and marketing guy, would use this chart to determine the peak hours that GaggleMail users check their mail. If Gaggle dot Com decides to sell advertisements on the site, Bob would use the histogram to set the prices the advertisers would have to pay. Ads shown during peak hours would cost the advertiser more than ads shown during off hours.


Conclusion

The manager/founder was happy with the way things worked out:

  • He implemented a system used to measure the quality of GaggleMail
  • He extracts usable information from that system
  • He uses the information to predict growth and to financially plan for upcoming expenses
  • This allows him to continually improve the metrics
  • He performs competitor analysis to add desirable features

All of this is great: the customers are happy with the existing quality of service, and the quality of service is always improving. In essence, he has moved from merely reacting to being aggressive, as all good rockstar developers should be.

Our manager/founder has no illusions about the future, however.

He and his team of rockstars will soon no longer work for pizza and Red Bull. Their standards are evolving! Soon, they'll want higher quality pizza (Nico's or Papa John's) instead of that horrible Domino's. Also, their tastes will change from ordinary Red Bull to Fresh Squeezed Red Bull, then to Tropical Red Bull Margaritas, and maybe even all the way to Vodka Red Bulls!

That means that Bob the sales and marketing guy must continue raising venture capital, or worse, return to his shady past of abusing geese, all for the engagement. You can see it in his eyes - he wants those clicks!

The manager/founder knows that these events are on the horizon and must plan accordingly – again, he must not only be proactive but aggressive.

Bob could establish multiple revenue streams for Gaggle dot Com, like advertising, or somehow "gamifying" GaggleMail.

Gaggle dot Com's team is small, but this is made up for by the incredible power of rockstar developers! Archimedes once supposedly said “give me enough pizza and Red Bull, and some rockstar developers, and I will move the world.” This proves that rockstar devs have been around since about 230 BC.

It may seem that GaggleMail depends on having only rockstar programmers. It doesn't: non-rockstars are welcome and are valuable, so long as they fully understand their own strengths and weaknesses.

One event that would require the addition of more developers is if Gaggle dot Com adds more software products to their lineup besides GaggleMail. The manager/founder has been hearing complaints about Google Maps, and he has considered making something called Gaggle Maps (totally not a copy of Google Maps, really).

How to pull this off?

A single team of rockstar developers rightfully scoffs at the whole agile development and scrum process, with its daily standup meetings, sprints, and other bureaucratic bloat. But what about two teams?

Our manager/founder has read works about something called "scrum of scrums," a technique for combining and synchronizing multiple teams (Spanner, n/d). The problem our manager/founder has with this is that the basic frameworks of agile and scrum are flawed, and making a pile of them (as required by scrum of scrums) does not fix those flaws and, indeed, magnifies them!

Our manager/founder understands that forcing all available people into an organizational or team model should not be done. It is better to devise an organizational or team model that works for the people already there.

Scrum and scrum-of-scrums advocates also call for something called "work-life balance." Our manager/founder understands that work-life balance is a myth (Pontefract, 2024), and that it is a reason for companies to not demand the best from their employees.

In fact, whenever he reads about scrums or scrum of scrums, our manager/founder wants to find a rope and a wobbly stool!

Still, the problem remains. All our manager/founder understands is that it is inappropriate to share expertise across different teams except in very specific ways: weekly "brown bag lunches" are great for sharing knowledge, but sharing a QA person or a project manager across multiple teams is a cause of failure.

This is shaped by his experience: small teams work great but combining them by treating the team members as "human resources" is vile, disgusting, and downright repulsive. Unfortunately, our manager/founder has no practical experience in combining teams in a way that eliminates bureaucracy, maintains creativity and autonomy, and preserves dignity.

This failure to understand how to combine multiple teams must not be taken as a reason to stop asking questions. The future is wide open, and there must be ways for Gaggle dot Com to stay scrappy and not turn into another IBM!

Thus ends our story of pizza and Red Bull!


References

Goetsch, D. L. & Davis, S. B. (2021). Quality management for organizational excellence: Introduction to total quality (9th ed.). Pearson.

Isolocity. (2024). What are the 8 principles of TQM? https://isolocity.com/what-are-the-8-principles-of-tqm

Pontefract, D. (2024, 2 June). The fallacy of work-life balance. Forbes. https://www.forbes.com/sites/danpontefract/2024/06/02/the-fallacy-of-work-life-balance/

Spanner, C. (n/d). Scrum of scrums. Atlassian. https://www.atlassian.com/agile/scrum/scrum-of-scrums

Management Paralysis and the Good Idea Fairy

Because total quality management (TQM) and the Malcolm Baldrige approach both require that companies and organizations use a “fact-based” or “evidence-based” or “data-driven” approach to setting strategy and making decisions, some type of integrated performance measurement system (El Mola & Parsaei, 2010) seems like a requirement for ongoing operations. [By the way, there apparently is something called “evidence-based medicine.” Going on those words alone, one must shudder at the opposite. But, if anything is true, it is that when words are used to obscure, one must wait for the truth and real intentions to be revealed.]

An integrated performance measurement system must be action-oriented, meaning that not only can it be used to track performance but can also be used to identify slow-downs, excessive costs, and other areas that require improvement.

In addition, an integrated performance measurement system must be able to measure performance based on processes that span across an entire company or organization and not be relegated to single departments. That’s called a process-oriented metric. It is not clear whether an integrated performance measurement system can propose a restructuring of an organization or company so that these cross-department processes do not cross so many departments, and whether such a restructuring is eventually worth it.

The voice of the customer (VOC) and market forces must be considered in any quality management system, such as TQM and the Malcolm Baldrige approach. Parast et al. (2024) imply that some companies or organizations have a difficult time converting those into actionable items. Instead of a customer-satisfaction loop or a market-analysis loop, companies get stuck in what engineers call “analysis paralysis”: they never make it past the data collection stage.

Something similar to this is what I call “management paralysis.” This is when management hasn’t developed a strategy for some product, or when old management leaves the company and takes their strategies with them. Goetsch & Davis (2021, p. 295) use the phrase “voice of the company,” and with management paralysis, the voice of the company is mute.

Here is a good example: in October 2021, Apple introduced a “notch” in the screen of their MacBook Pro products. The idea was that this notch would be a place to hold higher-resolution cameras, as well as face tracking and face detection technology. Here we are in August 2025 and none of that has happened. One must conclude that the Good Idea Fairy paid a visit to some mid-level manager and gave him the "vision" of putting the notch into otherwise outstanding computers, and either there was a change in management or the strategy was underbaked. Or, the Good Idea Fairy is paid by the hour!


References

El Mola, K. & Parsaei, H. (2010). Integrated performance measurement systems: A review and analysis. The 40th International Conference on Computers & Industrial Engineering, Awaji, Japan, 2010, pp. 1-6. https://doi.org/10.1109/ICCIE.2010.5668237

Goetsch, D. L. & Davis, S. B. (2021). Quality management for organizational excellence: Introduction to total quality (9th ed.). Pearson.

Parast, M. M., Safari, A., & Golgeci, I. (2024). A comparative assessment of quality management practices in manufacturing firms and service firms: A repeated cross-sectional analysis. IEEE Transactions on Engineering Management, 71, 4676-4691. https://doi.org/10.1109/TEM.2022.3221851

Malcolm Baldrige National Quality Award


Introduction

This post discusses the Malcolm Baldrige National Quality Award (MBNQA), including the distinction between the Baldrige Framework and Baldrige Award Criteria. Next, the award application process is described. Finally, the critical success factors will be applied to a small, fictitious software company.


What is the MBNQA?

The MBNQA was created by the Malcolm Baldrige National Quality Improvement Act, signed into law by President Reagan in 1987, and was named after Malcolm Baldrige, a former Secretary of Commerce. Awards were originally given in three categories: manufacturing, service, and small business, but the number of categories has since increased. As of 2007, there are six categories: manufacturing, service company, small business, education, healthcare, and non-profit (American Society for Quality, n/d). A seventh category was added in 2022: community (Baldrige Foundation, 2022).

The purpose of the award is to highlight companies and organizations that make use of quality management standards, including total quality management. The award promotes awareness of the importance of quality improvement, recognizes companies that practice quality management, and allows for an exchange of quality management techniques. Recipients of the MBNQA are encouraged to share non-proprietary techniques from their companies and organizations, particularly at the award ceremony. This allows other companies and organizations everywhere to duplicate those techniques (Goetsch & Davis, 2021, p. 424).

One of the criticisms of the MBNQA is that it fails to predict a company or organization’s success. Garvin (1991) notes that the award was never meant to be a predictor of financial success – then immediately advocates use of this measurement anyway. Insert “no, but actually yes” meme here.


The Baldrige Excellence Framework and Baldrige Award Criteria

There are two related concepts relevant to the MBNQA: the Baldrige Excellence Framework and Baldrige Award Criteria.

The Baldrige Excellence Framework lists seven categories relevant to quality-related performance: leadership, strategy, customers, measurement-analysis-knowledge-management, workforce, operations, and results (NIST, 2024).

The leadership category is about how upper management leads the organization, and how the organization leads within the community. The strategy category is about how the organization establishes and implements strategic goals.

The customer category determines how the company builds and maintains long-term relationships with its customers.

The measurement-analysis-knowledge-management category describes how the organization gathers and uses information to support its processes.

The way the organization involves and empowers its employees is covered in the workforce category. The design, management and improvement of key processes are described in the operations category.

Finally, the results category describes the organization’s customer satisfaction, human resources, “governance and social responsibility,” and finances as well as how it compares to its competitors (American Society for Quality, n/d).

These categories are applicable to all types of industries and organizations, of all sizes. They are intended to allow companies or organizations to evaluate themselves against quality standards, as well as to prepare them to compete for the Baldrige Award.

The Baldrige Award Criteria are the factors that are considered specifically when awarding the MBNQA. These are: organization description, leadership and governance, operations, workforce, customers and markets, finance, strategy, organizational learning, and community relations. (NIST, 2025). These can be interpreted as refinements of the Baldrige Excellence Framework.


The Award Application Process

The actual scoring system, judging process, and evaluation criteria were not specified in the Act that created the MBNQA. It was left up to the National Institute of Standards and Technology (NIST) – then known as the National Bureau of Standards – to decide all this.

Companies submit applications of 50-75 pages describing their practices and performance against the Baldrige Award Criteria. The application fee for companies with 500 or fewer employees is $10,000, and for larger companies it is $19,000 (NIST, 2025, 6 January).

Reviewers then grade these applications. A small set of high-scoring applicants are selected for multi-day site visits, which consist of interviews and document checks. The site visit fee for companies with 500 or fewer employees is $25,500, and for larger companies it is $40,800 (NIST, 2025, 6 January).

The judges then meet, review the applications and the results of the site visits, and select winners. Winners get a trophy and a pony.

Winners then attend an award ceremony, usually held in Baltimore, MD. Following the ceremony, the Quest for Excellence Conference occurs, and that is where MBNQA recipients can share their non-proprietary best practices and innovations.


Conclusion and Application to a Fictitious Software Company

Of the two, the Baldrige Excellence Framework is far more valuable to a fictitious software company than the Baldrige Award Criteria. Applying the Baldrige Excellence Framework requires no costly application fees, no costly site visits, and no handing over of financial and proprietary information.

The Baldrige Excellence Framework really captures the critical success factors that the MBNQA seeks to measure. This is perfect for small, fictitious software companies, especially those at the “ramen noodle profitability” stage, for which preparing a full MBNQA application is simply cost prohibitive.


References

American Society for Quality. (n/d). What is the Malcolm Baldrige National Quality Award (MBNQA)? https://asq.org/quality-resources/malcolm-baldrige-national-quality-award

Baldrige Foundation. (2022, 9 August). Congress adds “community” as the 7th category of the Malcolm Baldrige National Quality Awards. https://baldrigefoundation.org/news-resources/press-releases.html/article/2022/08/09/congress-adds-community-as-the-7th-category-of-the-malcolm-baldrige-national-quality-awards

Garvin, D. (1991). How the Baldrige Award really works. Harvard Business Review. https://hbr.org/1991/11/how-the-baldrige-award-really-works

Goetsch, D. L. & Davis, S. B. (2021). Quality management for organizational excellence: Introduction to total quality (9th ed.). Pearson.

NIST. (2025, 6 January). Baldrige Award Process Fees. https://www.nist.gov/baldrige/baldrige-award/award-process-fees

NIST. (2025). Award criteria. https://www.nist.gov/baldrige/baldrige-award/award-criteria

NIST. (2024). Baldrige excellence builder: Key questions for improving your organization’s performance: 2023-2024. https://www.nist.gov/system/files/documents/2025/02/27/2023-2024-Baldrige-Excellence-Builder.pdf

More Problems with JIT Manufacturing

Just-in-time (JIT) manufacturing is the idea that a manufacturer should order component parts only when a customer places an order with the manufacturer. As such, it is a pull-only process. There are several advantages: the manufacturer does not incur any inventory holding costs; there is no chance of components becoming spoiled (in the case of non-durable goods) or obsolete; and any defects in the component parts can be quickly identified so that remedial steps can be taken.

The obvious problem with JIT manufacturing is that it is highly vulnerable to supply chain disruptions (Goetsch & Davis, 2021, pp. 378-379). By holding no inventory, a company practicing JIT cannot easily weather interruptions in the flow of needed parts. Problems like this can be minimized by using multiple suppliers, provided not all of the suppliers are disrupted at once. Ye et al. (2022) recommend a global centralized solution, but that merely makes the problem bigger rather than solving it.

There are other problems, however.

Goetsch & Davis discuss the problem of supply chain interruptions from the “up-stream” perspective. Another problem, a “down-stream” one, is that the customer may be unable to place requests. For example, a recent storm here damaged a power station, which forced two car dealerships and one auto mechanic to close for four days. During that time there was demand for automobiles and auto parts, but the dealerships and the mechanic were unable to place orders for them.

Another problem with JIT is synchronizing the arrival of parts (Guo et al., 2022). If the parts do not all arrive at the same time, then production cannot be completed until the remaining parts arrive. The manufacturer depends not on one supplier but on all of them. While waiting, the manufacturer incurs storage costs for the parts that have already arrived. Guo et al. call for improved manufacturing planning and control (MPC) systems, but their paper does not name a specific MPC system.
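A back-of-the-envelope sketch (my own illustration, not anything from Guo et al.) shows how depending on every supplier at once compounds risk: if each of n suppliers independently delivers on time with probability p, a complete kit of parts arrives on time only with probability p to the nth power.

```python
# Back-of-the-envelope: if each of n suppliers independently delivers
# on time with probability p, assembly can start on schedule only with
# probability p ** n. The 99% figure is made up for illustration.

def kit_on_time_probability(p: float, n: int) -> float:
    """Probability that every one of n suppliers delivers on time."""
    return p ** n

for n in (1, 5, 20, 100):
    pct = kit_on_time_probability(0.99, n)
    print(f"{n:>3} suppliers at 99% each -> {pct:.1%} of kits complete on time")
```

Even at 99% reliability per supplier, a hundred suppliers leaves most kits incomplete on their due date. This is why the synchronization problem cannot be waved away.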

Does it make sense for all supply chain partners to practice JIT? Suppose an automobile maker practices JIT. The maker receives an order for a car, and it must then place orders for each of the component parts (and according to Collectors Auto Supply (2020), there are approximately 30,000 of them). Next, each part maker must order the parts it needs from its own suppliers. Finally, the raw materials must be dug from the ground and smelted. This cascade of orders, reaching all the way back to raw materials, is what happens when every supply chain partner follows JIT. Does this make sense, or is it “JIT for me but not for thee?”

When seen in this light, one of the major benefits of JIT manufacturing vanishes: inventory costs are merely pushed off to suppliers.

The result is lower customer satisfaction, because the customer is forced to wait for fulfillment of their order. This is acceptable in some industries: certain medium- to high-end automakers sometimes have waiting periods of weeks, and construction companies operate on a timeframe of months - unless they’re Amish! Most often, though, customer needs are better met by companies not practicing strict JIT.

One last problem, the most fundamental, is that JIT explicitly ignores sales forecasts: “As the processes and suppliers become more proficient, and the JIT/Lean line takes hold, production will be geared to customer demand rather than to sales forecasts” (Goetsch & Davis, 2021, p. 383). JIT asks us to close our eyes to very real situations where demand follows repeating patterns visible in historical data. This happens everywhere from Christmas shopping to the battle regularities of the Taliban in Afghanistan. These patterns are real, and it is foolish to ignore them.
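To make the point concrete, here is a minimal seasonal-naive forecast (my own toy example, with invented demand numbers, not anything from the textbook): predict each month’s demand as the average of the same calendar month in prior years. Any retailer whose December spikes every year can beat a pure pull signal with this much.

```python
# Toy seasonal-naive forecast: predict a month's demand as the mean of
# the same calendar month across prior years. Demand figures invented.

def seasonal_forecast(history: list[list[int]], month: int) -> float:
    """history: one 12-entry list of monthly unit demand per past year."""
    return sum(year[month] for year in history) / len(history)

# Two years of made-up monthly demand showing a December spike.
history = [
    [50, 48, 52, 55, 53, 60, 58, 57, 54, 62, 80, 140],
    [52, 50, 51, 57, 55, 61, 60, 59, 56, 65, 85, 150],
]
print(f"Forecast December demand: {seasonal_forecast(history, 11):.0f} units")
```

A strict JIT shop throws this information away and waits for the December orders to arrive before ordering parts.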

Imagine a situation where a customer is shopping for a high-value product. This is currently happening in the US, where we are seeking to increase the number of merchant vessels and battle force ships. One cannot simply walk into a shipbuilder and have the same experience as buying a car! Instead, the customer is purchasing a currently nonexistent product, based only on detailed plans plus the shipbuilder’s reputation. This is a situation where JIT is practical, and it may be practical for other high-value or luxury products. Otherwise, JIT cannot serve as a universal management policy.


References

Collectors Auto Supply. (2020, May 5). How many parts are in a car? https://collectorsautosupply.com/blog/how-many-parts-are-in-a-car/

Goetsch, D. L. & Davis, S. B. (2021). Quality management for organizational excellence: Introduction to total quality (9th ed.). Pearson.

Guo, D., et al. (2022). Towards synchronization-oriented manufacturing planning and control for Industry 4.0 and beyond. IFAC, 55(2), 163-168. https://doi.org/10.1016/j.ifacol.2022.04.187

Ye, Y., Suleiman, M., & Huo, B. (2022). Impact of just-in-time (JIT) on supply chain disruption risk: The moderating role of supply chain centralization. Industrial Management & Data Systems, 122(7), 1665–1685. https://doi.org/10.1108/IMDS-09-2021-0552

JIT Manufacturing and Supply Chain Fragility

One of the problems with JIT manufacturing is that it is susceptible to supply chain interruptions. Taiichi Ohno, the inventor of JIT/Lean manufacturing, recognized that problem. It is worth quoting in full Goetsch & Davis’s discussion of this issue and Ohno’s solution:

Mass production advocates emphasize that the lines need to keep moving and that the only way to do this is to have lots of parts available for any contingency that might arise. This is the fallacy of just-in-time/Lean according to mass production advocates. JIT/Lean, with no buffer stock of parts, is too precarious. One missing part or a single failure of a machine (because there are no stores of parts) causes the JIT/Lean line to stop. It was this very idea that represented the power of JIT/Lean to Ohno. It meant that there could be no work-arounds for problems that did develop, only solutions to the problems. It focused everyone concerned with the production process on anticipating problems before they happened and on developing and implementing solutions so that they would not cause mischief later on. The fact is that as long as the factory has the security buffer of a warehouse full of parts that might be needed, problems that interrupt the flow of parts to the line do not get solved because they are hidden by the buffer stock. When that buffer is eliminated, the same problems become immediately visible, they take on a new urgency, and solutions emerge—solutions that fix the problem not only for this time but for the future as well. Ohno was absolutely correct. JIT/Lean’s perceived weakness is one of its great strengths. (Goetsch & Davis, 2021, p. 378-379)

According to this, maintaining a buffer stock hides any supply chain issues until the buffer stock is exhausted. This only happens, though, when the buffer stock levels are not monitored. By continually tracking buffer stock – and the rate at which the stock is replenished – any supply chain problems are revealed, and they are revealed at the exact same time that users of JIT manufacturing would notice these shortages. The difference is that the company maintaining buffer stock is not immediately affected, whereas the one using JIT must halt production until the situation is resolved.
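The monitoring I have in mind is not elaborate. Here is a sketch (my own illustration, with made-up quantities) that watches the replenishment rate and flags a supplier problem the moment deliveries slow, while the buffer keeps the line running:

```python
# Sketch of buffer-stock monitoring: flag a supply problem as soon as
# replenishment falls below the expected rate, while the buffer absorbs
# the shortfall. All quantities are invented for illustration.

def check_supply(deliveries: list[int], expected_rate: int) -> list[int]:
    """Return the indices (days) on which replenishment fell short."""
    return [day for day, qty in enumerate(deliveries) if qty < expected_rate]

buffer = 500                        # units on hand
daily_use = 100                     # units the line consumes per day
deliveries = [100, 100, 60, 40, 0]  # supplier trouble starts on day 2

for day in check_supply(deliveries, expected_rate=daily_use):
    print(f"Day {day}: delivery short -- investigate the supplier now")

# The buffer, not the production line, absorbs the shortfall.
buffer += sum(deliveries) - daily_use * len(deliveries)
print(f"Buffer after 5 days: {buffer} units; the line never stopped")
```

The alert fires on the same day a JIT line would halt, which is the whole point: the information is identical, but only one of the two factories is still shipping product.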

The solution Ohno advocates (according to Goetsch & Davis) is to insist that supply chain problems simply must not occur (“there could be no work-arounds for problems that did develop, only solutions to the problems”). Problems are to be avoided by having everybody involved work on alternatives to problems that have not yet occurred. Unfortunately, no plan survives contact with reality, and no amount of mental gymnastics will change this. When shortages do happen, Ohno’s approach amounts to having multiple people scream for a solution, and multiple people calling a supplier to pressure it does no better than one person making one call. Phone calls, by themselves, are not sufficient to identify and repair whatever caused the supplier’s inability to produce the needed parts.

One of the workarounds (which Ohno claims is unneeded) for supplier shortages is to maintain “total visibility – of equipment, people, material, and process” (Kumar et al., 2013). There are two problems with this: adding such visibility is sure to increase the level of bureaucracy at the supplier, and not all suppliers are willing to allow total visibility. The reason for the latter is that a company granted visibility into a supplier sees not only the production rates of its own parts, but potentially data concerning every competitor that happens to source the same parts from that supplier.

Akhil Bhargava offers a number of solutions to the supplier shortage issue: “The solutions to the traditional mindset of holding Safety stock include Increased data processing involvement in implementation planning efforts in order to upgrade systems to JIT level, statistical process control enhancement to provide timely feedback for engineering and managing tuning, meaningful contingency planning as a response to defects in critical parts, and materials and effective user supply dialogues to support delivery and quality issues” (Bhargava, 2017). He is basically calling for “better living through IT™,” and none of these solutions actually addresses supplier shortages, except for the “meaningful contingency planning” option, which is just another phrase for maintaining buffer stock.

The JIT supply chain fragility issue appears to be a problem that has not been resolved and may be unsolvable.


References

Bhargava, A. (2017). A study on the challenges and solutions to just in time manufacturing. International Journal of Business and Management Invention, 6(12), 47-54. https://www.academia.edu/69920210/A_Study_on_The_Challenges_And_Solutions_To_Just_In_Time_Manufacturing

Goetsch, D. L. & Davis, S. B. (2021). Quality management for organizational excellence: Introduction to total quality (9th ed.). Pearson.

Kumar, S., et al. (2013). Difficulties of Just-in-Time implementation. International Journal on Theoretical and Applied Research in Mechanical Engineering, 2(1), 8-11. http://www.irdindia.in/journal_ijtarme/pdf/vol2_iss1/2.pdf

Benchmarking in High-Security Environments

Benchmarking is certainly important (Goetsch & Davis, 2021), but making it happen in a sector where secrets must be kept requires “creative” solutions, or else only extremely broad comparisons that have no value to opponents. For example, Gebicke & Magid’s (2010) global study doesn’t compare specific defense systems, but it does compare force size, tooth-to-tail ratio, and so on.

Another type of comparison that doesn’t involve sharing sensitive information is between military education institutions. For example, V. Kravets (2024) compares Ukraine’s higher military educational institutions, but instead of comparing technical proficiencies in military science, her goal is to determine the feasibility of including management activities in those institutions. Aren’t things going badly enough for Ukraine?

Because classified information must not become public, the benchmarking that companies like Rheinmetall AG could perform against BAE Systems or General Dynamics is fraught with difficulties not encountered in civilian industries. In any company that has IT infrastructure (which means all companies), benchmarking various IT components (like databases or servers) is possible because different industries use the same components, so the benchmarking partners need not be competitors. For example, Google’s Gmail and the fictitious Gaggle dot Com’s GaggleMail are competitors, so no mission-critical information should pass between them. Gaggle dot Com is not a competitor of X (formerly Twitter), however, so it is fine for them to benchmark their databases. Such database benchmarking can involve direct comparison of databases made by the same vendor (like Microsoft) or comparisons of databases made by different vendors (Microsoft vs Oracle).

It’s not clear whether the same benchmarking would happen in the IT departments of artillery manufacturers, since there are all sorts of proprietary IT components. But one can still benchmark systemic quality measures, like Lean or Six Sigma metrics, against other companies without giving away classified information.

In a 1999 paper by Yarrow & Prabhu, three different modes of benchmarking are presented: metric benchmarking, diagnostic benchmarking, and process benchmarking. Metric benchmarking is the comparison of “apples with apples” performance data. Process benchmarking “involves two or more organizations comparing their practices in a specific area of activity, in depth, to learn how better results can be achieved.” And diagnostic benchmarking “seeks to explore both practices and performance, establishing not only which of the company’s results areas are relatively weak, but also which practices exhibit room for improvement.”

I would like to guess at the types of benchmarking done at companies like Rheinmetall AG, without knowing anything about artillery manufacturing! Metric benchmarking could be done at the IT-component level (everybody uses databases), though exact comparisons may be precluded where proprietary software is used. In that case, process benchmarking would still be possible. Diagnostic benchmarking seems to best describe comparisons of Six Sigma measurements. But I don’t know anything about Six Sigma, either!

For metric comparisons, then, companies like Rheinmetall AG must enter into consortium agreements with other defense manufacturers. It isn’t clear how security would be maintained, even if the data is anonymized. In the absence of consortium agreements, one would have to look to different industries; for information about specific tolerances, for example, one would have to compare data from, say, civilian pipe manufacturers. That is probably a one-way transfer of information.
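One way such a consortium might limit what it reveals is to report only relative standing. Here is a minimal sketch (my own invention, with made-up figures; “transactions per second” is just a stand-in metric) where a member learns its percentile rank against pooled, anonymized peer values and nothing else:

```python
# Sketch of anonymized metric benchmarking: a consortium member learns
# only what fraction of anonymized peer values it outperforms.
# All figures are invented; the metric is a stand-in.

def percentile_rank(own: float, anonymized_peers: list[float]) -> float:
    """Fraction of peer values that our own metric beats."""
    return sum(1 for v in anonymized_peers if v < own) / len(anonymized_peers)

# e.g. database transactions per second, stripped of peer identities
peers = [1200.0, 950.0, 1800.0, 1500.0, 1100.0]
print(f"We outperform {percentile_rank(1400.0, peers):.0%} of consortium peers")
```

Even this leaks something (the peer values themselves must be pooled somewhere), which is exactly the security question that remains unanswered above.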


References

Gebicke, S. & Magid, S. (2010). Lessons from around the world: Benchmarking performance in defense. McKinsey & Company. https://www.mckinsey.com/~/media/mckinsey/dotcom/client_service/public%20sector/pdfs/mck%20on%20govt/defense/mog_benchmarking_v9.pdf

Goetsch, D. L. & Davis, S. B. (2021). Quality management for organizational excellence: Introduction to total quality (9th ed.). Pearson.

Kravets, V. (2024). Development strategies for higher military educational institutions of Ukraine: analysis based on benchmarking. Честь і закон, 2(89), 74-82. https://chiz.nangu.edu.ua/article/download/309198/300732/714412

Yarrow, D. & Prabhu, V. (1999). Collaborating to compete: Benchmarking through regional partnerships. Total Quality Management, 10(4-5), 793-802. https://doi.org/10.1080/0954412997820