Sunday, August 17, 2025

A Sorted Tale of Pizza and Red Bull!


Introduction to Gaggle dot Com

Gaggle dot Com is a small software company founded in June 2025. The staff consists of 3 rockstar software developers, the manager/founder (also a rockstar developer), a one-man sales and advertising team, and a graphic designer who consults on an as-needed basis.

Their primary product is GaggleMail, which is not a Gmail rip-off. Pinky promise.

The first version of GaggleMail was developed by the founder in four straight 24-hour days, while he was strung out on multiple cases of Red Bull.

After this coding binge, the founder saw that it was good. He then had to be medevac'd to the closest hospital on account of heart palpitations.

Once released from the hospital, the founder decided to hire 3 buddies who were also rockstar software developers. Getting this many rockstar developers in one garage can only result in the formation of a militia or the founding of a startup company. Fortunately for everybody else, the outcome was the latter.

The founder paid his team in the universal currency of software engineers: pizza and Red Bull.

Next, the founder asked his sister to design a logo for Gaggle dot Com and to produce some screen designs that the rockstar developers could implement. The screen designs were good, but the logo looked like it was made in Microsoft Paint.

The final logo was made by a freelancer on Fiverr.com. It took the freelancer twenty minutes, and it looked like it was made using AI. That's OK, though.

The rockstar developers improved GaggleMail, incorporating the screen designs made by the founder's sister and the AI-generated logo. They saw that it was good and decided to tell the world about it.

They hired a sales and marketing guy named Bob. He had two tasks: advertise GaggleMail and raise venture capital.

Bob's first marketing campaign involved TikTok videos showing three geese swimming in a pond, each carrying an envelope in its beak. Just like in the logo. He was arrested on animal cruelty charges. The founder bailed him out, and the next advertising campaign’s TikTok videos were made using AI.

For the third campaign, Bob decided to repeat the classic "Turkey Drop" from that old TV show called "WKRP in Cincinnati." Bob knew that geese could fly, so he tied them up in rubber suits as seen in “Pulp Fiction” and gagged each of them with envelopes.

He hired an airplane with a banner that read "GaggleMail by Gaggle dot Com." He loaded the bound geese into the plane and had the pilot fly at an altitude of fifty feet above a shopping mall's parking lot.

He had one of his friends recording this on an iPhone.

Bob proceeded to toss the three geese out of the airplane. Splat! Splat! Splat!

Again, Bob was arrested for animal cruelty. The founder again bailed him out and made him promise to only use AI from now on to make his TikTok videos. The video of the geese hitting the parking lot went viral! Gaggle dot Com was getting noticed!

Bob started raising venture capital and was somewhat successful, as long as he avoided the SPCA crowd.

Gaggle dot Com is now "ramen noodle profitable" and is poised to take the internet by storm. A storm of geese, but still a storm!


Problem Statement: Increase Quality!

With the boost following the "Geese Drop" video, the user base of GaggleMail grew rapidly. The GaggleMail product was performing well, until one day when the rockstar devs were inundated by emails from customers stating that GaggleMail was loading slowly and sometimes not even available!

Being only ramen noodle profitable, the manager/founder could not afford to hire customer service reps or a QA guy. Bob the sales and marketing guy was raising venture capital, and the manager/founder didn't want to take him away from that task. Besides, the manager/founder was tired of bailing him out.

So, the manager/founder led from the front! He took the following actions together with his 3-man rockstar developer team.

First, he chose one of the rockstars, the one who is a good writer, to craft emails that would be sent to the customers experiencing problems.

Second, he measured GaggleMail's response time and the percent of time that it was unavailable. The numbers looked bad: the response time was 15 seconds, and the uptime was only 75%. No wonder their customers were not happy!
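
As a concrete (and hedged) illustration, here is roughly how those baseline numbers could be computed. The inputs are invented: a list of per-request response times and a list of periodic health-check results; GaggleMail's real monitoring setup is not described in the story.

```python
# Minimal sketch (not GaggleMail's real monitoring): compute baseline
# response time and uptime from invented measurements.
import statistics

def baseline_metrics(response_times, health_checks):
    """Return (median response time, 95th-percentile response time, uptime)."""
    ordered = sorted(response_times)
    p95 = ordered[int(0.95 * (len(ordered) - 1))]
    uptime = sum(health_checks) / len(health_checks)
    return statistics.median(ordered), p95, uptime

# Made-up numbers resembling the story's baseline:
times = [14.2, 15.8, 15.1, 16.0, 13.9, 15.3]      # seconds per request
checks = [True, True, True, False]                 # periodic "is it up?" probes
median_rt, p95_rt, uptime = baseline_metrics(times, checks)
print(f"median={median_rt:.1f}s  p95={p95_rt:.1f}s  uptime={uptime:.0%}")
```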

Third, the manager/founder worked with the remaining rockstars to diagnose the problem.

It turned out that the problem was with the computer used to host Gaggle dot Com. That computer sat in his father's basement. He called his dad, and his dad was in a panic! "The server is melting, son!" He sounded like a goose with its head cut off.

Fourth, the manager/founder set goals for the metrics: he wanted a response time of under 1 second, and an uptime of 97%.

Fifth, the manager/founder worked with the 2 remaining rockstar devs (meaning, the ones not sending out apology emails) to calculate just how powerful a computer they would need to handle GaggleMail's current user base at the desired response time and uptime. They then projected how many users GaggleMail would have in a year, assuming that Bob did NOT make another viral video, and reran the calculations for that projected user count at the desired metrics.
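
The back-of-the-envelope arithmetic might have looked like the sketch below. Every number in it (growth rate, requests per user, the candidate machine's capacity) is an assumption for illustration; only the overall approach comes from the story.

```python
# Back-of-the-envelope capacity projection. Every number is invented for
# illustration; only the approach (project the user base a year out, then
# size the machine for peak load) comes from the story.
current_users = 20_000
monthly_growth = 1.15                 # assumed 15% growth/month, no viral videos
projected_users = current_users * monthly_growth ** 12

requests_per_user_per_day = 40        # assumed average usage
peak_factor = 3                       # assume the busiest hour carries ~3x average load
avg_rps = projected_users * requests_per_user_per_day / 86_400
peak_rps = avg_rps * peak_factor

# Assumed benchmark: requests/second the candidate machine can serve while
# staying under the 1-second response-time goal.
machine_capacity_rps = 2_000

print(f"projected users in a year: {projected_users:,.0f}")
print(f"peak load: {peak_rps:,.0f} req/s vs. capacity {machine_capacity_rps:,} req/s")
print("fits" if peak_rps < machine_capacity_rps else "need a bigger machine")
```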

It turned out the computer they needed was a top-of-the-line Mac Studio. The manager/founder bit his lip. "This is going to hurt!" he said. Fortunately, Bob, the sales and marketing guy, came through with some more venture capital! The manager/founder was very relieved: he didn't want his kneecaps broken by the local mob boss, again.

The manager/founder had the rockstar dev writing emails pause his work so he could join them at the Apple Store. This was going to be an experience that they would tell their children and grandchildren about, and the manager/founder wanted all his friends to be there. Arriving at the Apple Store, they looked at all the computers.

There it was, the high-end Mac Studio! The four rockstars approached it, then hesitantly touched it, like those apes that touched the monolith at the start of "2001: A Space Odyssey." The Apple Store manager, concerned, approached them. Mac groupies sometimes needed a firm hand.

"Are you going to purchase that Mac Studio, or just drool on it?" the store manager asked.

The manager/founder stepped back… he was about to fulfill a lifelong dream…

He reached into his pocket. Then, in his best Cleavon Little accent, he said "excuse me while I whip this out!" He removed his wallet from his pocket and pulled out an Amex Black Card.

Half the store gasped in fear! Some of the old women even fainted!

The four rockstar devs took their shiny new Mac Studio over to the manager/founder's dad's house and replaced the old Gateway computer sitting in his basement. They transferred the Gaggle dot Com website and database to the new Mac Studio. They tested everything out, and all was well.

They returned to the manager/founder's garage and sent out a new email to the customers, letting them know that all was well and asking them to try the NEW! IMPROVED! GaggleMail!

Bob got that evil look in his eye. He wanted to make another banger TikTok video. "No! Don't you dare!" the manager/founder scolded.

They celebrated in the only way rockstar devs and sketchy social media influencers knew how: with pizza and Red Bull.


Ongoing Measurement

The rockstars learned a valuable lesson from all this: an ounce of prevention is worth a pound of cure. So, they needed a way to prevent this problem from recurring.

The idea foremost in the manager/founder's head was the diagnostic process they used to identify the problem and the cure they used to fix it.

The problem was that customer demand exceeded the specifications of Gaggle dot Com's computer in dad’s basement.

One solution was to lay out considerable cash - unfortunately, Apple Store employees aren't fond of pizza and Red Bull.

Could this problem be anticipated? Could Ben Franklin's adage be made actionable?

The manager/founder gave considerable thought to the problem and how to anticipate it. His first idea was to purchase more Mac Studios, but there were two problems: their cost, and the very real possibility that his dad would object to the rising electric bills.

Then he hit on a compromise, sort of. The manager/founder decided that the best solution was to continually measure the chosen metrics AND the number of customers. This would allow him to do several things, as sketched in the example after this list:

  • Determine the growth curve for the size of the user base
  • Estimate the relationship between the number of users and the chosen metrics (response time and uptime)
  • Predict the number of customers that will exceed even the immense power of that Mac Studio computer sitting in dad’s basement
  • Only purchase computers on an as-needed basis
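
A minimal sketch of that prediction step follows, assuming a short, invented history of monthly user counts and an invented capacity figure for the Mac Studio; the real values would come from the ongoing measurements described above.

```python
# Sketch of the prediction step: fit a growth curve to monthly user counts
# and estimate when the user base outgrows the current machine. The history
# and the capacity figure are invented; real values would come from the
# measurements described above.
import math
import numpy as np

monthly_users = [5_000, 6_100, 7_400, 9_000, 11_100, 13_500]  # assumed history
capacity_users = 250_000     # assumed limit of the Mac Studio at target metrics

months = np.arange(len(monthly_users))
# Exponential growth is a straight line in log space, so fit log(users).
slope, intercept = np.polyfit(months, np.log(monthly_users), 1)

def predicted_users(month):
    return math.exp(intercept + slope * month)

month = len(monthly_users)
while predicted_users(month) < capacity_users:
    month += 1

print(f"~{math.exp(slope) - 1:.0%} growth per month; "
      f"capacity reached around month {month}")
```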

Lessons Learned from the First Problem

This was Gaggle dot Com's first major problem, besides Bob and his TikTok videos. The manager/founder wanted to record his thoughts. Here’s what he came up with:

  • Pay attention to customer complaints and be prepared to address them
  • Be proactive so that complaints are limited
  • Choose appropriate metrics that are relevant to customers
  • Determine the baseline and set goals for improvement of those metrics
  • Continually measure these metrics with the goal of improving them
  • Automate the measurement process
  • Make predictions based on those measurements
  • Pay your rockstar devs well: pizza and Red Bull are the coin of the realm

Quality Philosophy Used

Should the actions the manager/founder used count as a "quality philosophy?" Yes and no: he concerned himself with customer satisfaction and continual improvements, but those two factors do not count as a complete total quality management (TQM) implementation. Let's go through the "8 Principles of TQM" as listed in Isolocity (2024):

  • Customer focus – yes, customer satisfaction was the driving factor
  • Leadership involvement – the manager/founder led from the front
  • Employee involvement – they live for this stuff!
  • Process approach – heck no
  • Systematic management approach – heck no
  • Continual improvement – yes, the manager/founder took actions to improve the relevant metrics and is considering how to continue the process
  • Factual decision-making – how else could it be?
  • Mutually beneficial supplier relationships – Gaggle dot Com maintains excellent relations with the local pizza shops and Red Bull suppliers.

So, the quality philosophy used was not full TQM – it included modifications appropriate to our scrappy software company. It allows Gaggle dot Com to retain the innovative nature required of all startup companies while preventing the (malignant) growth of the bureaucracy that paralyzes and destroys such companies.


Quality Tool Used: Log Analysis

The quality tool used by Gaggle dot Com wasn’t one of the usual quality tools, but it is certainly common and valuable in the software development industry: log analysis!

All computers running software like GaggleMail record some of the events taking place on that shiny new Mac Studio sitting in dad’s basement. The information recorded includes customers interacting with GaggleMail, database access, potential security concerns, system crashes, and so on. This is so much information that not even our rockstar developers could make sense of it (really, they could, they just have better things to do).

Mundane events like customer login attempts, interaction with databases, and so on usually do not require immediate analysis. However, the data recorded is still valuable and is the foundation of relevant statistical process controls (described next).

However, a log analysis tool can immediately spot security problems and system crashes. How to act on that information? Usually, the log analyzer sends a text message to one of the rockstar developers who is “on call” so that he can diagnose it and fix it.
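
A toy version of such a log analyzer is sketched below. The log format, the alert keywords, and the `page_on_call` hook are all placeholders rather than any real GaggleMail interface.

```python
# Toy log analyzer: count events and page the on-call rockstar when a line
# looks like a crash, a failed login, or a server error. The log format,
# keywords, and page_on_call hook are placeholders.
import re
from collections import Counter

ALERT_PATTERN = re.compile(r"\b(CRASH|SEGFAULT|AUTH_FAILURE|5\d\d)\b")

def analyze(lines, page_on_call=print):
    counts = Counter()
    for line in lines:
        counts["total"] += 1
        if ALERT_PATTERN.search(line):
            counts["alerts"] += 1
            page_on_call(f"Wake up, rockstar: {line.strip()}")
    return counts

sample = [
    "2025-08-17 02:11:04 INFO login user=goose37",
    "2025-08-17 02:11:09 ERROR 503 mail store unreachable",
    "2025-08-17 02:11:10 CRASH worker pid=4242",
]
print(analyze(sample))
```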

Something our manager/founder must consider is that for many rockstars, there is not enough pizza and Red Bull in the world to be on call. So, the duty would fall on the manager/founder.


Statistical Process Control Used

One of the best statistical process control methods for a company like Gaggle dot Com to use is a histogram of the hourly web traffic GaggleMail receives. This feature is usually part of system monitoring or logging tools and can be easily added to dashboards for use by all the employees at Gaggle dot Com.

As a concrete example, consider a histogram that shows the number of GaggleMail users in each hour of the day. There would have to be adjustments for different time zones, of course.
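
A bare-bones version of that histogram might look like the sketch below, with invented timestamps standing in for the real access log; a real version would first normalize each timestamp to the customer's time zone, as noted above.

```python
# Bare-bones hourly traffic histogram from invented request timestamps.
# A real version would first normalize each timestamp to the customer's
# time zone, then read from the actual access log.
from collections import Counter
from datetime import datetime

timestamps = [
    "2025-08-17T08:05:00", "2025-08-17T08:40:00", "2025-08-17T09:02:00",
    "2025-08-17T12:15:00", "2025-08-17T12:31:00", "2025-08-17T12:47:00",
    "2025-08-17T21:10:00",
]

by_hour = Counter(datetime.fromisoformat(ts).hour for ts in timestamps)
for hour in range(24):
    count = by_hour.get(hour, 0)
    if count:
        print(f"{hour:02d}:00  {'#' * count}  ({count})")
```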

The rockstar devs would look at the histogram to figure out when extra computing power would be needed to handle the extra traffic. In the case of GaggleMail and its brand-new computer, the Mac Studio would have to be more fully dedicated to serving GaggleMail customers during peak hours. An “edgy” application of hourly traffic data is to enable or disable features in GaggleMail based on traffic volume – expensive (computationally intensive) features could be disabled during peak hours. This would lower service quality, so it must be treated as a last resort.

The manager/founder will look at the chart to figure out whether it makes sense to continue to host the site on a single Mac Studio computer or to move to a system like Amazon Web Services, which offers "auto scaling" – automatically making more computers available during peak hours and removing them when they are not needed during off hours.

Bob, the sales and marketing guy, would use this chart to determine the peak hours that GaggleMail users check their mail. If Gaggle dot Com decides to sell advertisements on the site, Bob would use the histogram to set the prices the advertisers would have to pay. Ads shown during peak hours would cost the advertiser more than ads shown during off hours.


Conclusion

The manager/founder was happy with the way things worked out:

  • He implemented a system used to measure the quality of GaggleMail
  • He extracts usable information from that system
  • He uses the information to predict growth and to financially plan for upcoming expenses
  • This allows him to continually improve the metrics
  • He performs competitor analysis to add desirable features

All of this is great: the customers are happy with existing quality of service, and the quality of service is always improving. In essence, he has moved from merely reacting to being aggressive, as all good rockstar developers should be.

Our manager/founder has no illusions about the future, however.

He and his team of rockstars will soon no longer work for pizza and Red Bull. Their standards are evolving! Soon, they'll want higher quality pizza (Nico's or Papa Johns) instead of that horrible Domino's. Also, their tastes will change from ordinary Red Bull to Fresh Squeezed Red Bull, then to Tropical Red Bull Margaritas, and maybe even all the way to Vodka Red Bulls!

That means that Bob the sales and marketing guy must continue raising venture capital, or worse, return to his shady past of abusing geese, all for the engagement. You can see it in his eyes - he wants those clicks!

The manager/founder knows that these events are on the horizon and must plan accordingly – again, he must not only be proactive but aggressive.

Bob could establish multiple revenue streams for Gaggle dot Com, like advertising, or somehow "gamifying" GaggleMail.

Gaggle dot Com's team is small, but this is made up for by the incredible power of rockstar developers! Archimedes once supposedly said “give me enough pizza and Red Bull, and some rockstar developers, and I will move the world.” This proves that rockstar devs have been around since about 230 BC.

It may seem that GaggleMail depends on having only rockstar programmers. It doesn't: non-rockstars are welcome and are valuable, so long as they fully understand their own strengths and weaknesses.

One event that would require the addition of more developers is if Gaggle dot Com adds more software products to their lineup besides GaggleMail. The manager/founder has been hearing complaints about Google Maps, and he has considered making something called Gaggle Maps (totally not a copy of Google Maps, really).

How to pull this off?

A single team of rockstar developers rightfully scoffs at the whole agile development and scrum process, with its daily standup meetings and sprints and other bureaucratic bloat. But what about two teams?

Our manager/founder has read works about something called "scrum of scrums," a technique for combining and synchronizing multiple teams (Spanner, n/d). The problem our manager/founder has with this is that the basic frameworks of agile and scrum are flawed, and making a pile of them (as required by scrum of scrums) does not fix those flaws and, indeed, magnifies them!

Our manager/founder understands that fitting all available people into an organizational or team model should not be done. It is better to devise an organizational or team model that works for the people already there.

Scrum and scrum of scrums advocates also call for something called "work-life balance." Our manager/founder understands that work-life balance is a myth (Pontefract, 2024), and that it is a reason for companies to not demand the best from their employees.

In fact, whenever he reads about scrums or scrum of scrums, our manager/founder wants to find a rope and a wobbly stool!

Still, the problem remains. All our manager/founder understands is that it is inappropriate to share expertise across different teams except in very specific ways: doing weekly "brown bag lunches" is great for sharing knowledge, but sharing a QA person or a project manager across multiple teams is a recipe for failure.

This is shaped by his experience: small teams work great but combining them by treating the team members as "human resources" is vile, disgusting, and downright repulsive. Unfortunately, our manager/founder has no practical experience in combining teams in a way that eliminates bureaucracy, maintains creativity and autonomy, and preserves dignity.

This failure to understand how to combine multiple teams must not be taken as a reason to stop asking questions. The future is wide open, and there must be ways for Gaggle dot Com to stay scrappy and not turn into another IBM!

Thus ends our story of pizza and Red Bull!


References

Goetsch, D. L. & Davis, S. B. (2021). Quality management for organizational excellence: Introduction to total quality (9th ed.). Pearson.

Isolocity. (2024). What are the 8 principles of TQM? https://isolocity.com/what-are-the-8-principles-of-tqm

Pontefract, D. (2024, 2 June). The fallacy of work-life balance. Forbes. https://www.forbes.com/sites/danpontefract/2024/06/02/the-fallacy-of-work-life-balance/

Spanner, C. (n/d). Scrum of scrums. Atlassian. https://www.atlassian.com/agile/scrum/scrum-of-scrums

Management Paralysis and the Good Idea Fairy

Because total quality management (TQM) and the Malcolm Baldrige approach both require that companies and organizations use a “fact-based” or “evidence-based” or “data driven” approach to setting strategy and making decisions, some type of integrated performance measurement system (El Mola & Parsaei, 2010) seems like a requirement for ongoing operations. [By the way, there apparently is something called “evidence-based medicine.” Going on those words alone, one must shudder at the opposite. But, if anything is true, it is that when words are used to obscure, one must wait for the truth and real intentions to be revealed.]

An integrated performance measurement system must be action-oriented, meaning that not only can it be used to track performance but can also be used to identify slow-downs, excessive costs, and other areas that require improvement.

In addition, an integrated performance measurement system must be able to measure performance based on processes that span across an entire company or organization and not be relegated to single departments. That’s called a process-oriented metric. It is not clear whether an integrated performance measurement system can propose a restructuring of an organization or company so that these cross-department processes do not cross so many departments, and whether such a restructuring is eventually worth it.

The voice of the customer (VOC) and market forces must be considered in any quality management system such as TQM and the Malcolm Baldrige approach. Parast et al. (2024) imply that some companies or organizations have a difficult time converting those into actionable items. Instead of a customer-satisfaction loop or a market-analysis loop, companies get stuck in what engineers call “analysis paralysis” – they never make it past the data collection stage.

Something similar to this is what I call “management paralysis.” This is when management hasn’t developed a strategy for some product, or when old management leaves the company and takes their strategies with them. Goetsch & Davis (2021, p. 295) use the phrase “voice of the company,” and with management paralysis, the voice of the company is mute.

Here is a good example: in October 2021, Apple introduced a “notch” in the screen of their MacBook and MacBook Pro products. The idea is that this notch would be a place to hold higher resolution cameras, as well as face tracking and face detection technology. Here we are in August 2025 and none of that has happened. One must conclude that the Good Idea Fairy paid a visit to some mid-level manager and gave him the "vision" of putting the notch into otherwise outstanding computers, and either there was a change in management or an underbaked strategy. Or, the Good Idea Fairy is paid by the hour!


References

El Mola, K. & Parsaei, H. (2010). Integrated performance measurement systems: A review and analysis. The 40th International Conference on Computers & Industrial Engineering, Awaji, Japan, 2010, pp. 1-6. https://doi.org/10.1109/ICCIE.2010.5668237

Goetsch, D. L. & Davis, S. B. (2021). Quality management for organizational excellence: Introduction to total quality (9th ed.). Pearson.

Parast, M. M., Safari, A., & Golgeci, I. (2024). A comparative assessment of quality management practices in manufacturing firms and service firms: A repeated cross-sectional analysis. IEEE Transactions on Engineering Management, 71, 4676-4691. https://doi.org/10.1109/TEM.2022.3221851

Malcolm Baldrige National Quality Award


Introduction

This post discusses the Malcolm Baldrige National Quality Award (MBNQA), including the distinction between the Baldrige Framework and Baldrige Award Criteria. Next, the award application process is described. Finally, the critical success factors will be applied to a small, fictitious software company.


What is the MBNQA?

The MBNQA was created by the Malcolm Baldrige National Quality Improvement Act, signed by Reagan in 1987, and was named after a former commerce secretary. Awards were originally given in three categories: manufacturing, service, and small business, but the number of categories has since increased. As of 2007, there are six categories: manufacturing, service company, small business, education, healthcare, and non-profit (American Society for Quality, n/d). A seventh category was added in 2022: community (Baldrige Foundation, 2022).

The purpose of the award is to highlight companies and organizations that make use of quality management standards, including total quality management. The award promotes awareness of the importance of quality improvement, recognizes companies that practice quality management, and allows for an exchange of quality management techniques. Recipients of the MBNQA are encouraged to share non-proprietary techniques about their companies and organizations, particularly at the award ceremony. This allows other companies and organizations everywhere to duplicate those techniques (Goetsch & Davis, p. 424).

One of the criticisms of the MBNQA is that it fails to predict a company or organization’s success. Garvin (1991) notes that the award was never meant to be a predictor of financial success – then immediately advocates use of this measurement anyway. Insert “no, but actually yes” meme here.


The Baldrige Excellence Framework and Baldrige Award Criteria

There are two related concepts relevant to the MBNQA: the Baldrige Excellence Framework and Baldrige Award Criteria.

The Baldrige Excellence Framework lists seven categories relevant to quality-related performance: leadership, strategy, customers, measurement-analysis-knowledge-management, workforce, operations, and results (NIST, 2024).

The leadership category is about how upper management leads the organization, and how the organization leads within the community. The strategy category is about how the organization establishes and implements strategic goals.

The customers category covers how the company builds and maintains long-term relationships with its customers.

The measurement-analysis-knowledge-management category describes how the organization gathers and uses information to support its processes.

The way the organization involves and empowers its employees is covered in the workforce category. The design, management and improvement of key processes are described in the operations category.

Finally, the results category describes the organization’s customer satisfaction, human resources, “governance and social responsibility,” and finances as well as how it compares to its competitors (American Society for Quality, n/d).

These are applicable to all types of industries and organizations, of all sizes. They are intended to allow companies or organizations to evaluate themselves against quality standards, as well as to prepare them for competition for the Baldrige Award.

The Baldrige Award Criteria are the factors that are considered specifically when awarding the MBNQA. These are: organization description, leadership and governance, operations, workforce, customers and markets, finance, strategy, organizational learning, and community relations. (NIST, 2025). These can be interpreted as refinements of the Baldrige Excellence Framework.


The Award Application Process

The actual scoring system, judging process, and evaluation criteria were not specified in the Act that created the MBNQA. It was left up to the National Institute of Standards and Technology (NIST) – then known as the National Bureau of Standards – to decide all this.

Companies submit applications of 50-75 pages describing their practices and performance in the Baldrige Award Criteria. The application fee for companies with 500 or fewer employees is $10,000, and for larger companies it is $19,000. (NIST, 2025, 6 January)

Reviewers then grade these applications. A small set of high-scoring applicants are selected for multi-day visits which consists of interviews and document checks. The site visit fee for companies with 500 or fewer employees is $25,500, and for larger companies it is $40,800. (NIST, 2025, 6 January)
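
For a sense of scale: a small company (500 or fewer employees) that makes it to the site-visit stage would pay $10,000 + $25,500 = $35,500 in published fees alone, before counting the staff time spent writing the application and hosting the visit.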

The judges then meet, review the applications and the results of the site visits, and select winners. Winners get a trophy and a pony.

Winners then attend an award ceremony, usually held in Baltimore, MD. Following the ceremony, the Quest for Excellence Conference occurs, and that is where MBNQA recipients can share their non-proprietary best practices and innovations.


Conclusion and Application to a Fictitious Software Company

Of the two, the Baldrige Excellence Framework and the Baldrige Award Criteria, the former is far more valuable to a fictitious software company. Applying the Baldrige Excellence Framework requires no costly application fees, no handing over of financial or proprietary information, and no costly site visits.

The Baldrige Excellence Framework really amounts to the set of critical success factors that the MBNQA seeks to capture. This is perfect for small, fictitious software companies, especially those at the “ramen noodle profitability” stage. Preparing such a company for evaluation for the MBNQA is simply cost prohibitive.


References

American Society for Quality. (n/d). What is the Malcolm Baldrige National Quality Award (MBNQA)? https://asq.org/quality-resources/malcolm-baldrige-national-quality-award

Baldrige Foundation. (2022, 9 August). Congress adds “community” as the 7th category of the Malcolm Baldrige National Quality Awards. https://baldrigefoundation.org/news-resources/press-releases.html/article/2022/08/09/congress-adds-community-as-the-7th-category-of-the-malcolm-baldrige-national-quality-awards

Garvin, D. (1991). How the Baldrige Award really works. Harvard Business Review. https://hbr.org/1991/11/how-the-baldrige-award-really-works

Goetsch, D. L. & Davis, S. B. (2021). Quality management for organizational excellence: Introduction to total quality (9th ed.). Pearson.

NIST. (2025, 6 January). Baldrige Award Process Fees. https://www.nist.gov/baldrige/baldrige-award/award-process-fees

NIST. (2025). Award criteria. https://www.nist.gov/baldrige/baldrige-award/award-criteria

NIST. (2024). Baldrige excellence builder: Key questions for improving your organization’s performance: 2023-2024. https://www.nist.gov/system/files/documents/2025/02/27/2023-2024-Baldrige-Excellence-Builder.pdf

More Problems with JIT Manufacturing

Just-in-time (JIT) manufacturing is the idea that a manufacturer should order component parts only when a customer places an order with the manufacturer. As such, it is a pull-only process. There are several advantages: the manufacturer does not incur any inventory holding costs; there is no chance of components becoming spoiled (in the case of non-durable goods) or obsolete; and any defects in the component parts can be quickly identified and remedial steps taken.

The obvious problem with JIT manufacturing is that it is highly vulnerable to supply chain disruptions (Goetsch & Davis, p. 378-379). By having no inventory, a company practicing JIT cannot easily weather interruptions in the flow of needed parts. Problems like this can be minimized by using multiple suppliers, provided that not all of the suppliers are disrupted at once. Ye et al. (2022) recommend a global centralized solution, but this is just making the problem bigger.

There are other problems, however.

Goetsch & Davis discuss the problem of supply chain interruptions from the “up-stream” perspective. Another problem, a “down-stream” problem, is that the customer may be unable to make requests. For example, a recent storm here caused damage to a power station, which forced the closings of two car dealerships and one auto mechanic for four days. During that time there were demands for automobiles and auto parts, but the dealerships and mechanics were unable to place orders for them.

Another problem with JIT is synchronizing the arrival of parts (Guo et al., 2022). If the parts do not arrive at the same time, then production cannot be completed until the remaining parts arrive. The manufacturer therefore depends not on one supplier but on all of its suppliers. During that time, the manufacturer incurs storage costs. Guo et al. call for improved manufacturing planning and control (MPC) systems, but they do not identify a specific MPC system in their paper.

Does it make sense for all supply chain partners to practice JIT? For example, suppose an automobile maker practices JIT. The maker receives an order for a car, and they then must place orders for each of the component parts (and according to Collectors Auto Supply (2020), there are approximately 30,000 parts). Next, a part maker must order the parts they need from other suppliers. Finally, the raw materials must be dug from the ground and smelted. This cascade is the consequence of all supply chain partners following JIT practices. Does this make sense, or is it “JIT for me but not for thee?”

When seen in this light, one of the major benefits of JIT manufacturing vanishes: inventory costs are merely pushed off to suppliers.

The result is to lower customer satisfaction by forcing the customer to wait for fulfillment of his demands. This is acceptable in some industries. For example, certain medium- to high-end automakers sometimes have a waiting period of weeks. Construction companies operate on a timeframe of months - unless they’re Amish! Customer needs are most often better met through companies not practicing strict JIT.

One last problem, the most fundamental problem, is that JIT explicitly ignores sales forecasts. “As the processes and suppliers become more proficient, and the JIT/Lean line takes hold, production will be geared to customer demand rather than to sales forecasts.” (Goetsch & Davis, 2021, p. 383). JIT is calling for us to close our eyes to very real situations where sales forecasts are repeating patterns based on historical data. This happens anywhere from Christmas shopping patterns to the battle regularities of the Taliban in Afghanistan. These patterns are very real, and it is foolish to ignore them.

Imagine a situation where a customer is shopping for a high-value product. This is currently happening in the US where we are seeking to increase the number of merchant vessels and battle force ships. One cannot simply go to a shipbuilder and have the same experience as buying a car! Instead, the customer is purchasing a currently nonexistent product, based only on detailed plans plus the shipbuilder’s reputation. This is a situation where JIT is practical, and it may be practical for other high-value or luxury products. Otherwise, JIT cannot be used as a universal management policy.


References

Collectors Auto Supply. (2020, 5 May). How many parts are in a car? https://collectorsautosupply.com/blog/how-many-parts-are-in-a-car/

Goetsch, D. L. & Davis, S. B. (2021). Quality management for organizational excellence: Introduction to total quality (9th ed.). Pearson.

Guo, D., et al. (2022). Towards synchronization-oriented manufacturing planning and control for Industry 4.0 and beyond. IFAC, 55(2), 163-168. https://doi.org/10.1016/j.ifacol.2022.04.187

Ye, Y., Suleiman, M., & Huo, B. (2022). Impact of just-in-time (JIT) on supply chain disruption risk: The moderating role of supply chain centralization. Industrial Management & Data Systems, 122(7), 1665–1685. https://doi.org/10.1108/IMDS-09-2021-0552

JIT Manufacturing and Supply Chain Fragility

One of the problems with JIT manufacturing is that it is susceptible to supply chain interruptions. Taiichi Ohno, the inventor of JIT/Lean manufacturing, recognized that problem. It is worth quoting in full Goetsch & Davis’s discussion of this issue and Ohno’s solution:

Mass production advocates emphasize that the lines need to keep moving and that the only way to do this is to have lots of parts available for any contingency that might arise. This is the fallacy of just-in-time/Lean according to mass production advocates. JIT/Lean, with no buffer stock of parts, is too precarious. One missing part or a single failure of a machine (because there are no stores of parts) causes the JIT/Lean line to stop. It was this very idea that represented the power of JIT/Lean to Ohno. It meant that there could be no work-arounds for problems that did develop, only solutions to the problems. It focused everyone concerned with the production process on anticipating problems before they happened and on developing and implementing solutions so that they would not cause mischief later on. The fact is that as long as the factory has the security buffer of a warehouse full of parts that might be needed, problems that interrupt the flow of parts to the line do not get solved because they are hidden by the buffer stock. When that buffer is eliminated, the same problems become immediately visible, they take on a new urgency, and solutions emerge—solutions that fix the problem not only for this time but for the future as well. Ohno was absolutely correct. JIT/Lean’s perceived weakness is one of its great strengths. (Goetsch & Davis, 2021, p. 378-379)

According to this, maintaining a buffer stock hides any supply chain issues until the buffer stock is exhausted. This only happens, though, when the buffer stock levels are not monitored. By continually tracking buffer stock – and the rate at which the stock is replenished – any supply chain problems are revealed, and they are revealed at the exact same time that users of JIT manufacturing would notice these shortages. The difference is that the company maintaining buffer stock is not immediately affected, whereas the one using JIT must halt production until the situation is resolved.
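
To make that claim concrete, here is a toy sketch of buffer-stock monitoring under invented numbers: the flag is raised as soon as consumption outpaces replenishment, long before the buffer runs dry.

```python
# Toy illustration of the point above: watch buffer stock and the
# replenishment rate, and flag trouble as soon as consumption outpaces
# resupply, well before the buffer is empty. All numbers are invented.
def check_buffer(stock, daily_usage, daily_replenishment, warn_days=10):
    net_drain = daily_usage - daily_replenishment
    if net_drain <= 0:
        return "supply is keeping up"
    days_left = stock / net_drain
    if days_left < warn_days:
        return f"supply problem visible now: ~{days_left:.1f} days of buffer left"
    return f"ok for now: ~{days_left:.1f} days of buffer left"

print(check_buffer(stock=150, daily_usage=60, daily_replenishment=40))
```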

The solution Ohno advocates (according to Goetsch & Davis) is, in effect, that supply chain problems simply cannot be allowed to occur (“there could be no work-arounds for problems that did develop, only solutions to the problems”). Problems are avoided simply by having everybody involved working on alternatives to problems that have not yet occurred. Unfortunately, no plan survives contact with reality, and no amount of mental gymnastics will change this. When there are shortages, Ohno would resolve the issue by having multiple people screaming for a solution. Having multiple people call a supplier to pressure them into resolving a delay does no better than having one person make one call. Phone calls, by themselves, are not sufficient to identify and repair the problem that caused the supplier’s inability to produce needed parts.

One of the workarounds (which Ohno claims are unneeded) to the issue of supplier shortage is to maintain “total visibility – of equipment, people, material, and process” (Kumar et al., 2013). There are two problems with this: adding such visibility is sure to increase the level of bureaucracy in the supplier, and not all suppliers are willing to allow total visibility. The reason for the latter is that when a company wants visibility into a supplier, it is asking to see not only the production rates of a certain part, but also information about all of the company’s competitors that happen to use the same part.

Akhil Bhargava offers a number of different solutions to the supplier shortage issue. According to him, “The solutions to the traditional mindset of holding Safety stock include Increased data processing involvement in implementation planning efforts in order to upgrade systems to JIT level, statistical process control enhancement to provide timely feedback for engineering and managing tuning, meaningful contingency planning as a response to defects in critical parts, and materials and effective user supply dialogues to support delivery and quality issues.” (Bhargava, 2017). He is basically calling for “better living through IT™”, and none of these solutions actually address supplier shortages, except for the “meaningful contingency planning” option, which is just another phrase for maintaining buffer stock.

The JIT supply chain fragility issue appears to be a problem that has not been resolved and may be unsolvable.


References

Bhargava, A. (2017). A study on the challenges and solutions to just in time manufacturing. International Journal of Business and Management Invention, 6(12), 47-54. https://www.academia.edu/69920210/A_Study_on_The_Challenges_And_Solutions_To_Just_In_Time_Manufacturing

Goetsch, D. L. & Davis, S. B. (2021). Quality management for organizational excellence: Introduction to total quality (9th ed.). Pearson.

Kumar, S., et al. (2013). Difficulties of Just-in-Time implementation. International Journal on Theoretical and Applied Research in Mechanical Engineering, 2(1), 8-11. http://www.irdindia.in/journal_ijtarme/pdf/vol2_iss1/2.pdf

Benchmarking in High-Security Environments

Benchmarking is certainly important (Goetsch & Davis, 2021), but making it happen in a sector where secrets must be kept involves “creative” solutions, or only making extremely broad comparisons that do not have value to opponents. For example, Gebicke & Magid’s (2010) global study doesn’t compare specific defense systems, but they do compare force size, tooth-to-tail ratio, and so on.

Another type of comparison that doesn’t involve sharing secretive information is between military education institutions. For example, V. Kravets (2024) compares Ukraine’s higher military educational institutions, but instead of comparing technical proficiencies in military science, her goal is to determine the feasibility of including management activities in those institutions. Aren't things going badly enough for Ukraine?

Because of the need to avoid classified information becoming public, the benchmarking that could be used by companies like Rheinmetall AG against BAE Systems or General Dynamics is fraught with difficulties that aren’t encountered in civilian industries. For example, in any company that has IT infrastructure (which means all companies), benchmarking various IT components (like databases or servers) is possible because different industries use the same IT components, and so the benchmarking partners need not be competitors. For example, Google’s Gmail and the fictitious Gaggle dot Com’s GaggleMail are competitors, so no mission-critical information should pass between them. Gaggle dot Com is not a competitor of X (formerly Twitter), so it is OK to benchmark their databases, for example. And this database benchmarking can involve direct comparison of databases made by the same company (like Microsoft) or comparisons of databases made by different companies (Microsoft vs Oracle).

It's not clear whether the same benchmarking would happen in the IT departments of artillery manufacturers since there are all sorts of proprietary IT components. But one can benchmark various systemic quality measures like lean or six sigma standards against other companies, without giving away classified information.

In a 1999 paper by Yarrow & Prabhu, three different modes of benchmarking are presented: metric benchmarking, diagnostic benchmarking, and process benchmarking. Metric benchmarking is the comparison of “apples with apples” performance data. Process benchmarking “involves two or more organizations comparing their practices in a specific area of activity, in depth, to learn how better results can be achieved.” And diagnostic benchmarking “seeks to explore both practices and performance, establishing not only which of the company’s results areas are relatively weak, but also which practices exhibit room for improvement.”

I would like to make guesses about the types of benchmarking done at companies like Rheinmetall AG, without knowing anything about artillery manufacturing! Metric benchmarking could be done on an IT component level (everybody uses databases), but exact benchmarking may be precluded because proprietary software is used. In that case, process benchmarking would still be possible. Diagnostic benchmarking seems to best describe comparisons of six sigma measurements. But I don’t know anything about six sigma, either!

For metric comparisons, then, companies like Rheinmetall AG must enter into consortium agreements with other defense manufacturers. It isn’t clear how security would be maintained, even if the data is anonymized. In the absence of consortium agreements, one would have to look to different industries. For example, for information about specific tolerances, one would have to compare data from, say, civilian pipe manufacturers. This is probably a one-way transfer of information.


References

Gebicke, S. & Magid, S. (2010). Lessons from around the world: Benchmarking performance in defense. McKinsey & Company. https://www.mckinsey.com/~/media/mckinsey/dotcom/client_service/public%20sector/pdfs/mck%20on%20govt/defense/mog_benchmarking_v9.pdf

Goetsch, D. L. & Davis, S. B. (2021). Quality management for organizational excellence: Introduction to total quality (9th ed.). Pearson.

Kravets, V. (2024). Development strategies for higher military educational institutions of Ukraine: analysis based on benchmarking. Честь і закон, 2(89), 74-82. https://chiz.nangu.edu.ua/article/download/309198/300732/714412

Yarrow, D. & Prabhu, V. (1999). Collaborating to compete: Benchmarking through regional partnerships. Total Quality Management, 10(4-5), 793-802. https://doi.org/10.1080/0954412997820

Benchmarking in Brick-and-Mortar Stores

According to Jenkins (2025):

Reverse logistics is the reverse of the standard supply chain flow, where goods move from manufacturer to end consumer. Reverse logistics includes activities like returns management, refurbishment, recycling, and disposal. It’s an important part of supply chain management, often involving the return of products due to damage, seasonal inventory, restock, salvage, recalls, or excess inventory.

Benchmarking has several interesting twists when applied to the problems of reverse logistics in brick-and-mortar stores, such as hardware stores.

In the context of a hardware store, internal benchmarking of reverse logistics is possible. For example, comparing rates of customer returns by manufacturer would be valuable to the customer, with the idea of minimizing the number of returns. Based on that information, manufacturers making products with high return rates can be dropped from the hardware store’s offerings (Jenkins, 2025).

In cases where customers do return a product, the speed at which the vendor provides credit can also be tracked. Even if products from the slowest vendor are kept in the store’s offerings, knowing the expected delay in credit could be useful for accounting purposes. For example, if a particular vendor takes 60 days to provide credit, then that credit cannot be used to cover any expenses for 60 days.

Other information that can be gleaned from benchmarking returns includes measurements to identify and reduce slow-running processes like return processing, reentry into the inventory system, and coordinating with the vendor (Goetsch & Davis, 2021).
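
As a concrete illustration of the internal benchmarks described above, the sketch below computes a return rate per manufacturer and the average days-to-credit per vendor from invented store records; the field names are hypothetical.

```python
# Sketch of two internal reverse-logistics benchmarks from invented store
# records: return rate per manufacturer and average days until the vendor
# issues credit. The field names are hypothetical.
from collections import defaultdict

units_sold = {"Acme": 400, "HammerCo": 250}       # units sold per manufacturer
returns = [                                       # (manufacturer, days until credit)
    ("Acme", 45), ("Acme", 62), ("Acme", 58),
    ("HammerCo", 12),
]

credit_days = defaultdict(list)
for maker, days in returns:
    credit_days[maker].append(days)

for maker, sold in units_sold.items():
    days_list = credit_days.get(maker, [])
    rate = len(days_list) / sold
    avg_days = sum(days_list) / len(days_list) if days_list else 0
    print(f"{maker}: return rate {rate:.1%}, avg days to credit {avg_days:.0f}")
```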

Internal comparisons aren’t the only route to using benchmarking for process improvement, of course. Benchmarking partners are also available, and in a way that is different from benchmarking between the information technology (IT) departments of various companies.

By being brick-and-mortar, hardware stores can engage companies that are in the same business but separated by enough geographic distance that they aren’t direct competitors. For example, a hardware store in Hawaii can form a benchmarking partnership with a hardware store in Philadelphia, say. Because these stores rely on foot traffic, there is extremely little chance that a customer of the Philly store would travel to Hawaii to pick up a hammer!

So, information learned by forming a benchmarking partnership is extremely relevant and valuable to both partners (since they are in the same business), but that information cannot be used against each other (since they are brick-and-mortar stores located on opposite sides of the globe)!

It is also possible to set up a reuse supply chain (Atterblad & Blomkvist, 2023). With this, used and returned products are shunted to a used hardware store. It isn’t clear from Atterblad & Blomkvist how profitable this is to the original hardware store, but it at least avoids a 100% loss. Besides “return to point of origin,” it is also possible for the customer to sell or donate products to a “second-life retailer” (Beh et al, 2016). In that situation, the original hardware store does not benefit at all.


References

Atterblad, R., & Blomkvist, H. (2023). Challenges and recommendations for product reuse: Exploring the reuse supply chain of in-store hardware: A case study. https://www.diva-portal.org/smash/get/diva2:1770268/FULLTEXT01.pdf

Beh, L. S., Ghobadian, A., He, Q., Gallear, D., & O'Regan, N. (2016). Second-life retailing: a reverse supply chain perspective. Supply Chain Management: An International Journal, 21(2), 259-272. https://doi.org/10.1108/SCM-07-2015-0296

Goetsch, D. L. & Davis, S. B. (2021). Quality management for organizational excellence: Introduction to total quality (9th ed.). Pearson.

Jenkins, A. (2025, 1 May). A guide to reverse logistics: How it works, types and strategies. Oracle NetSuite. https://www.netsuite.com/portal/resource/articles/inventory-management/reverse-logistics.shtml

Change Management Strategy for Converting to Total Quality Management


Introduction

This post follows the steps needed for a company to transition to a total quality management (TQM) operation as outlined in Goetsch & Davis (2021). We begin by listing some of the factors that determine the success of a TQM conversion, and some alternative implementation plans. Next, the implementation process recommended by Goetsch & Davis – the Goetsch-Davis 20-Step Total Quality Implementation Process – is described. The post concludes by describing situations and alternatives when management lacks commitment to TQM.


Success of Implementing TQM

No two implementations of TQM are the same, and the success depends on several factors. Mann & Kehoe (1995) list several factors besides management buy-in. These factors include the employees’ age distribution, their education level, whether the management uses long-term planning, and so on. There are also different approaches to implementing TQM, as described by Yusof & Aspinwall (2000, p. 642), which range from implementing TQM all at once to rolling it out on a department-by-department basis. The Goetsch-Davis 20-Step Total Quality Implementation Process described next requires a top-down commitment to TQM throughout the entire company. There is room for adjusting the pace of adopting TQM (determined by the choice of projects deemed fit for “TQM-ization”), but the process is meant to be total.


Implementation According to the Goetsch-Davis 20-Step Process

Goetsch & Davis (2021) utilize a three-phase process for implementing TQM. These phases are preparation, planning, and implementation. The details of these phases follow the Goetsch-Davis 20-Step Total Quality Implementation Process (Goetsch & Davis, p. 419-423).


Preparation

As the implementation of TQM is a top-down process, preparation begins with the top executive (CEO for example) becoming committed to TQM. He then forms a Total Quality Steering Committee consisting of the CEO’s direct reports and with the CEO chairing the committee. If a union is involved, the senior union member is also included in the steering committee. This committee is a permanent entity and replaces the former executive staff organization. With the help of a consultant, they engage in team building and get training in TQM’s philosophy, tools, and techniques.

Control then moves to the total quality steering committee. They begin by creating statements of “vision” and guiding principles. Based on those documents they set broad strategic objectives. Next, they communicate and publicize the statements and their plans, and this communication is an ongoing activity by the steering committee.

The steering committee then identifies organizational strengths and weaknesses – why wasn’t that done earlier? As part of this, they identify TQM advocates and TQM resisters. One of these groups of employees could be added to project teams created during the planning phase (guess which one?)

The steering committee will then establish baselines for employee satisfaction and attitudes (performed by the HR department), as well as baseline customer satisfaction. For large customer bases, satisfaction can be determined by using sampling. Customer feedback must include both external and internal customers.


Planning

At this point, the steering committee can enter the planning phase! The approach they should use follows the PDCA (Plan-Do-Check-Adjust) cycle, so it may be necessary to return to this step based on the results of what follows. For reference, this is step 12.

The steering committee identifies projects that are amenable (or vulnerable) to adopting TQM. One of the determining factors for the initial choice of projects is the likelihood of success. Teams for each project are appointed. These teams can be cross-departmental, and it is handy to know who the TQM advocates are (Goetsch & Davis, p. 422). The project teams are then trained on TQM principles by members of the steering committee. Finally, teams’ direction is set, and they are activated, each starting their own PDCA cycle.


Implementation or Execution

The project teams then lead the implementation or execution phase. They gather feedback from the team members, the customers, and the employees and report their findings back to the steering committee, perhaps on a monthly basis (Goetsch & Davis, p. 422). This is the “check” stage of the PDCA loop, and the steering committee makes appropriate adjustments, returning to step 12.

The steering committee modifies organizational structure, procedures, and processes, as necessary. They also implement reward or recognition systems. Finally, union rules are considered.


Conclusion

By following these steps, it should be possible to have a company or organization adopt TQM. If there is no commitment from top management on total quality, then it may be possible to “sell” TQM to them, but…

If enlightenment does not work, it may be time to consider moving on to different employment. That is not always a reasonable option, but long-term prospects for your current employment are not bright either, given top management’s attitude toward total quality. (Goetsch & Davis, p. 423)
Yup, enlightenment.

It also may be possible to implement TQM within a single department. Department total quality is a contradiction since TQM requires commitment from every aspect of the company, but Goetsch & Davis (p. 424) note that this is better than nothing.

For companies with management not committed to adopting TQM, there are other courses of action that can get a company close to using TQM: pursuing ISO 9000 certification and competing for the so-called Baldrige Award.


References

Goetsch, D. L. & Davis, S. B. (2021). Quality management for organizational excellence: Introduction to total quality (9th ed.). Pearson.

Mann, R., & Kehoe, D. (1995). Factors affecting the implementation and success of TQM. International Journal of Quality & Reliability Management, 12(1), 11-23. https://coer.org.nz/wp-content/uploads/2011/09/D22_Factors_affecting_the_implementation_and_success_of_TQM.pdf

Yusof, S. R. M., & Aspinwall, E. (2000). TQM implementation issues: review and case study. International Journal of Operations & Production Management, 20(6), 634-655.

Just-In-Time and Lean Strategies


Introduction

This post discusses the relationship between Just-In-Time (JIT) manufacturing and Lean strategies. We begin by (trying to) define each of these separately, then examine how they work together as JIT/Lean. Next, the relationship between JIT/Lean and total quality management (TQM) is discussed. We conclude by noting that while JIT/Lean strategies seek to advance the goals of TQM, they do not advance all of them.


Just-In-Time Manufacturing

Just-In-Time (JIT) manufacturing is a production strategy that minimizes waste by ordering and producing goods on an as-needed basis, directly in response to customer demand. JIT manufacturing is a pull system, so there is no need to rely on forecasts (which drive a push system). Inventory holding costs are little or nothing, since production is triggered only when the customer demands it. Besides low inventory holding costs, another advantage of requesting parts only as needed is the reduced risk of waste in the form of spoilage (in the case of perishable goods) or obsolescence (in the case of manufactured goods).


Lean Manufacturing

Like JIT, Lean manufacturing is also concerned with reducing waste, but on a broader scale. Goetsch & Davis (2021, p. 377) state that there are seven types of waste that Lean manufacturing seeks to minimize:

  • Overproduction
  • Wait time
  • Transportation costs
  • Processing
  • Inventory
  • Unnecessary motion
  • Product defects.
These include wastes not strictly covered by JIT, in particular transportation costs and unnecessary motion.


Comparing the Strategies

It makes sense to combine these two manufacturing philosophies, as they were both invented by Taiichi Ohno (1912 - 1990). As Ohno was employed at Toyota Motor Corporation, the system was initially called the Toyota Production System (TPS) and was seen as an alternative (or refinement) of Henry Ford’s mass production system. As it spread to other industries, it gained the name Lean manufacturing.

Goetsch & Davis do indeed combine JIT and Lean manufacturing, calling it JIT/Lean, which they roughly define as follows:

Just-in-time/Lean is producing only what is needed, when it is needed, and in the quantity that is needed. (p. 376)

This definition doesn’t include the full scope of Lean manufacturing, however.


Combining JIT/Lean with Total Quality Management

JIT/Lean manufacturing integrates well with total quality management (TQM). In particular, by minimizing the production of defective goods, companies following JIT/Lean are concerned with increasing the quality of their goods. Since production occurs in small quantities and only in response to customer demand, product defects are identified early and corrected. Finally, since JIT/Lean operates as a pull system, it is inherently concerned with customer satisfaction.

This is essentially the conclusion of Cua, McKone & Schroeder (2001). They study TQM and JIT together and find that the two are compatible with each other as well as with something called Total Productive Maintenance (TPM).

Tesfaye & Kitaw (2017) claim that integrating TQM and JIT is insufficient to guarantee organizational success, which also requires “interaction between the core company and the external stakeholders (such as governmental organizations, universities, banks, research institutions, and others)” as well as what they call “technological capability accumulation.” The latter refers to transferring and adopting knowledge into the company instead of being “just passive receivers and users of foreign technologies” (Tesfaye & Kitaw, 2017, p. 22).

The research by Tesfaye & Kitaw (2017) focused exclusively on Ethiopian leather manufacturing companies, but the lack of technological capability accumulation occurs in other industries, even in software companies. Software and IT companies “burn through” technologies at an incredible rate, driven by employee turnover as well as the habit of rejecting older technologies in favor of adopting “the new hotness.”


Conclusion

JIT and Lean are both strategies that improve manufacturing processes. Both are concerned with eliminating waste, with JIT focused on minimizing inventory holding costs and the costs that result from spoilage and obsolescence. Lean improves on this by also minimizing additional types of waste, such as wait times and transportation costs.

JIT/Lean brings the benefits of TQM – improved quality and a focus on customer satisfaction – but only to production departments. TQM requires continual improvement and customer focus from every department of a company, whereas JIT/Lean applies mainly to production. As such, JIT/Lean works well with TQM, but it is distinct from TQM.


References

Cua, K., McKone, K., & Schroeder, R. (2001). Relationships between implementation of TQM, JIT, and TPM and manufacturing performance. Journal of Operations Management 19(6), 675-694. https://doi.org/10.1016/S0272-6963(01)00066-3

Goetsch, D. L. & Davis, S. B. (2021). Quality management for organizational excellence: Introduction to total quality (9th ed.). Pearson.

Tesfaye, G. & Kitaw, D. (2017). A TQM and JIT integrated continuous improvement model for organizational success: An innovative framework. Journal of Optimization in Industrial Engineering 22, 15-23. https://doi.org/10.22094/joie.2017.265

Choosing Key Metrics

The use of Statistical Process Control (SPC) extends across numerous domains, and the key metrics vary by industry. For example, Vetter & Morrice (2019) describe how SPC is used in medical applications. According to them, “Undertaking, achieving, and reporting continuous quality improvement in anesthesiology, critical care, perioperative medicine, and acute and chronic pain management all fundamentally rely on applying statistical process control methods and tools.” Each of those applications has its own key metrics.

Goetsch & Davis (p. 305) tell a story about the use of SPC in another industry: semiconductor manufacturing. According to them, a North American semiconductor plant they visited had reduced the number of control charts from 900 down to 100 over a few years. This shows how easy it is to “overmeasure” processes. The problem is that the semiconductor plant was collecting too much data. Why is that a bad thing? First, it takes time and money: some poor employee at the plant had to update all those control charts. Second, the charts had to be stored somewhere, and that consumes space, even when charts are stored digitally. Finally, all that “overmeasured” data is just noise that obscures the signal, the real information.

Determining exactly what to measure is exceedingly difficult when the processes being monitored are complex. Even when another control technique is used, SPC is still recommended as a complement, at least according to Montgomery et al. (2018). For example, SPC can be used to augment a system called engineering process control (EPC). EPC is used for industrial processes and is best suited to situations where the mean value of what is being measured drifts over time. This is completely different from SPC, which monitors values that vary about a fixed mean.

The data storage problems mentioned in Goetsch & Davis happen even for companies that store their data digitally. For example, website hosting companies log (record) information about which web pages are requested, which images are downloaded, and any errors that occur. The most common error is the dreaded “404 – page not found.”
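
As a concrete illustration of the kind of analysis these logs support, here is a minimal sketch (the log format, file name, and regular expression are my own assumptions, not a description of any particular hosting company's tooling) that scans an access log and tallies which request paths most often produce 404 errors.

    import re
    from collections import Counter

    # Matches the request path and status code in a common-log-style line, e.g.
    # ... "GET /inbox.html HTTP/1.1" 404 ...
    LOG_LINE = re.compile(r'"[A-Z]+ (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3})')

    def count_404s(log_path):
        """Return a Counter mapping request paths to the number of 404 responses."""
        missing = Counter()
        with open(log_path) as log:
            for line in log:
                match = LOG_LINE.search(line)
                if match and match.group("status") == "404":
                    missing[match.group("path")] += 1
        return missing

    if __name__ == "__main__":
        # "access.log" is a placeholder file name.
        for path, hits in count_404s("access.log").most_common(10):
            print(f"{hits:6d}  {path}")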

The length of time these logs are retained depends on the industry and the jurisdiction. For example, in the U.S. healthcare industry, HIPAA requirements mandate that logs be maintained for six years. These requirements have been straining smaller web hosting companies because the logs occupy so much space on hard drives!

As much space as they occupy, log files are extremely valuable! Network engineers and QA specialists pore over these logs not only to look for errors but also to compute statistics about each of the web pages. To do this, there are specialized tools that make finding errors and measuring statistics easy.

Security specialists also use these logs to detect potential security threats. They can detect unauthorized access, identify malware infections, and use this information to respond to security breaches.

One advantage of digital logs over traditional paper-based control charts is that it is possible to create “alerts.” For example, if the log shows that a website went down, an alert in the form of a text message is automatically sent to a network engineer so that he can correct the problem. I imagine that other SPC software systems have a similar feature.
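
Here is a minimal sketch of that alerting idea (the monitored URL is a placeholder and notify() is a hypothetical stand-in for whatever text-message or paging service a hosting company actually uses):

    import urllib.error
    import urllib.request

    SITE = "https://www.example.com/"   # placeholder URL, not a real monitored site

    def site_is_up(url, timeout=5):
        """Return True if the site answers with a successful HTTP status."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return 200 <= response.status < 400
        except (urllib.error.URLError, OSError):
            return False

    def notify(message):
        """Hypothetical stand-in for a real text-message or paging integration."""
        print(f"ALERT -> on-call engineer: {message}")

    if not site_is_up(SITE):
        notify(f"{SITE} is not responding; please investigate.")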

While storing digital logs is expensive, website hosting companies stick to the motto: “store everything, analyze later.” The costs are sometimes worth it.


References

Goetsch, D. L. & Davis, S. B. (2021). Quality management for organizational excellence: Introduction to total quality (9th ed.). Pearson.

Montgomery, D. et al. (2018). Integrating Statistical Process Control and Engineering Process Control. Journal of Quality Technology 26(2). https://doi.org/10.1080/00224065.1994.11979508

Vetter, T. & Morrice, D. (2019). Statistical process control: No hits, no runs, no errors? Anesthesia & Analgesia 128(2), 374-382. https://doi.org/10.1213/ANE.0000000000003977

When Statistical Process Control Goes Wrong

The role of managers in statistical process control (SPC) is quite important. The manager must set quality standards for his company’s products or services and enforce those standards. This demonstrates an overall commitment to quality and motivates employees to produce quality products and services (Rungtusanatham, 2001). There is, however, the question of how that enforcement is implemented. Bushe (1988) argues that gradual implementation is more successful than an abrupt imposition.

What I found missing in Goetsch & Davis (2021) was coverage of management’s duties when things go wrong. For each measured and tracked quantity, the manager has established a production quality standard, so there is always the possibility that quality will fall below that standard.

In the context of software companies – web hosting companies in particular – there are SPC systems in place, and they are nearly always automated. One of the benefits of automated systems is that text-message alerts can be sent to the appropriate people when some measurement goes out of spec. The “appropriate people” aren’t always managers, but they are usually in a position to effect repairs. Managers are required whenever money is required, however.

A similar situation happens in manufacturing: machine operators would be the first to spot a problem and would most likely be able to repair the machine. If the machine needs replacement, a manager must approve the required funds.

Besides situations needing the expenditure of funds, managers are required when problems arise with supply chain partners. For example, suppose a supplier is providing substandard parts, parts whose quality falls below the agreed-upon quality level. The manager must not allow the quality of his company’s product to suffer as a result.

The manager must work with the supplier to arrive at some solution.

One thing the manager can do is to get an estimate for the time needed for the supplier to resume manufacturing products that are within agreed-upon specifications. Based on this information, the manager may have to delay delivery to his customers or deliver less than what was promised.

In situations where some roughly fixed percentage of the supplier’s parts falls below quality standards, the supplier can deliver additional parts in the hope that enough of them are acceptable. With the extra parts, the company can still deliver quality items to its customers.
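
The arithmetic behind this over-shipment is simple; the sketch below (with made-up numbers) shows how many parts a supplier would need to ship so that, on average, enough good ones arrive.

    import math

    def parts_to_ship(parts_needed, defect_rate):
        """Smallest shipment whose expected count of good parts covers the order."""
        return math.ceil(parts_needed / (1.0 - defect_rate))

    # If 4% of the supplier's parts are defective and 1,000 good parts are needed,
    # the supplier should ship about 1,042 parts.
    print(parts_to_ship(1000, 0.04))   # 1042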

The most drastic option is to switch suppliers, either temporarily or permanently. Well-run businesses will always maintain alternative suppliers, and changing to an alternative supplier would require management decisions.

Managers are not only responsible for setting quality standards; they are also responsible for responding to deviations from those standards. By embracing these duties, managers ensure adherence to established product quality standards and sustain customer satisfaction.


References

Bushe, G. (1988). Cultural contradictions of statistical process control in American manufacturing organizations. Journal of Management 14(1). https://doi.org/10.1177/014920638801400103

Goetsch, D. L. & Davis, S. B. (2021). Quality management for organizational excellence: Introduction to total quality (9th ed.). Pearson.

Rungtusanatham, M. (2001). Beyond improved quality: the motivational effects of statistical process control. Journal of Operations Management 19(6). https://doi.org/10.1016/S0272-6963(01)00070-5

Ignoring the Voice of the Customer

The voice of the customer (VOC) is the driving force behind Quality Function Deployment (QFD), and it sets the direction for a company to improve its products, satisfy its customers, and respond to the competition (Goetsch & Davis, 2021, p. 290). There are many ways this can go wrong. For example, the individuals comprising the VOC may be in conflict - which is addressed by Xiao & Wang (2024). Another way is that the VOC is misrepresented - this is described in another post on this blog. A third way this can go wrong is when a company completely ignores the VOC, when the vox populi is not the vox Dei.

For this post I was going to write about the process by which the U.S. military decided to use the High Mobility Multipurpose Wheeled Vehicle (HMMWV) in Iraq. Insurgents soon learned that the HMMWV and other vehicles were vulnerable from below to Improvised Explosive Devices (IEDs). U.S. troops modified the HMMWVs and other vehicles to improve survivability by adding sandbags to the floor (so-called “Haji sandbags”) and welding scrap metal to the bottom. We don’t know whether the QFD process was used in choosing the HMMWV, but the VOC certainly did not include input from anyone with experience in asymmetric warfare, where attack from unexpected directions is a fundamental tactic.

Beyond that one observation, I have nothing more to write on that subject, though there are numerous other examples where the VOC is minimized or misinterpreted. I do have something to write about a more recent example where the VOC was completely ignored...

Consider the 2024 “Copy Nothing” advertising campaign for Jaguar Cars (Jaguar, 2024). This campaign was launched on 19 November 2024 to announce their conversion to an all-electric brand. This conversion was not mentioned in the ad itself, nor was the fact that the ad was even for an automotive manufacturer until the word “Jaguar” appeared at the very end, in a new font and without the stylized image of a leaping jaguar that used to be their logo.

Instead, the advertisement begins with elevator doors opening onto a barren wasteland of a set. Stepping from the elevator are various stunning and brave gender-ambiguous runway models, each feigning purposefulness while really just displaying a mixture of smugness and boredom. Next, there are scenes of the models alongside the phrases “create exuberant,” “live vivid,” “delete ordinary,” “break moulds,” and “copy nothing.” The models then walk out of frame, and the name of the brand is finally revealed.

The soundtrack to all this has a heavy beat, which represents the heartbeats of Jaguar stockholders as they experience cardiac arrest upon watching this ad.

Immediately, the advertisement became more popular than the Jaguar car brand itself, and indeed it became an embarrassment to Jaguar. Talk show hosts lampooned it, it was roasted on social media, and people used AI to add a jaguar back into the commercial – with the jaguar attacking the models! (Sunrise Video, 2024)

The traditional customer base of Jaguar consisted of people going for the “James Bond aesthetic.” Even when attempting to attract new customers, existing customers must not be forgotten – they are still customers, and their voices must be part of the VOC. The “Copy Nothing” ad campaign went further: Jaguar not only ignored the traditional base but seemed to reject them. This is verified in an interview with Rawdon Glover, managing director of the automaker, who stated that “We need to re-establish our brand and at a completely different price point so we need to act differently. We wanted to move away from traditional automotive stereotypes” (Brady, 2024).

Jaguar sales dropped 97.5% in Europe following the rebrand (Singh, 2025). The automaker cut 500 management jobs in the United Kingdom, and Adrian Mardell, the CEO of Jaguar’s parent company JLR, will be retiring at the end of this year (Creed, 2025).

It is not clear why some corporations ignore the VOC. Kolarska & Aldrich (1980) note that managers and leaders can become highly unresponsive when a company is in decline, and one of the reasons for this is that formerly loyal customers have switched brands (“exited”). In that case, the VOC itself is harder to interpret because there are fewer customers, so the “smoothing” approach taken in Xiao & Wang (2024) is less effective in arriving at consensus.

Research by Xueming (2007) has shown that customer negative voice in the form of complaint records hurts a company’s stock price and concludes that “investments in reducing consumer negative voice could indeed make financial sense in terms of promoting firm-idiosyncratic stock returns.” This should come as no surprise.

Neither of these reasons – declining customer base or unhappy customer base – is enough to explain why Jaguar approved the “Copy Nothing” ad campaign. Further, the experiences of other brands that “went woke,” such as Bud Light’s 2023 partnership with Dylan Mulvaney, provide direct evidence that nothing good can come from Jaguar’s style of rebranding. One must therefore conclude that their actions were nothing other than corporate suicide.


References

Brady, J. (2024, 23 November). Jaguar boss hits out at 'vile hatred and intolerance' after car fans turned on firm's widely-ridiculed woke rebrand. Daily Mail. https://www.dailymail.co.uk/news/article-14117385/jaguar-boss.html

Creed, S. (2025, 1 August). On the move: Jaguar Land Rover boss behind ‘woke’ pink rebrand to quit after campaign saw carmaker universally panned. The Sun. https://www.thesun.co.uk/motors/36107702/jaguar-land-rover-boss-quits-woke-rebrand-backlash/

Goetsch, D. L. & Davis, S. B. (2021). Quality management for organizational excellence: Introduction to total quality (9th ed.). Pearson.

Jaguar. (2024, 19 November). Jaguar | Copy nothing [Video]. YouTube. https://www.youtube.com/watch?v=rLtFIrqhfng

Kolarska, L. & Aldrich, H. (1980). Exit, voice, and silence: Consumers' and managers' responses to organizational decline. Organizational Studies 1(1). https://doi.org/10.1177/017084068000100104

Singh, E. (2025, 3 July). Woke woe: Jaguar sales plummet 97.5% after fierce backlash over woke pink ‘rebrand’ that left fans slamming ‘nonsense’ EV. The Sun. https://www.thesun.co.uk/motors/35669921/jaguar-sales-plummet-woke-pink-backlash/

Sunrise Video. (2024, 30 November). New Jaguar commercial part 2 [Video]. YouTube. https://www.youtube.com/watch?v=9awWSN-h1es

Xiao, J. & Wang, X. (2024). An optimization method for handling incomplete and conflicting opinions in quality function deployment based on consistency and consensus reaching process. Computers and Industrial Engineering 183. https://doi.org/10.1016/j.cie.2023.109779

Xueming, L. (2007). Consumer negative voice and firm-idiosyncratic stock returns. Journal of Marketing 71(3). https://doi.org/10.1509/jmkg.71.3.075

Misrepresenting the Voice of the Customer

In Quality Function Deployment (QFD), input requirements from the customers are translated into a set of customer needs, known as the “voice of the customer.” In small companies, there are very few employees standing between the person who determines the Voice of the Customer (VOC) and the person who implements the recommendations of the QFD analysis. In fact, they may be the same person!

When they are not the same, there is a problem not mentioned in Goetsch & Davis (2021, p. 289-302). The problem is the misrepresentation of the VOC. This can result from incorrect analysis of the customer feedback as presented in the customer needs matrix (Goetsch & Davis, p. 291-292), or it can have nefarious causes.

Here is an example of the latter from a former employer, a once-large software company. To explain the situation, four pieces of background information are required. Stay with me.

First, software companies usually attempt to cater to as many people as possible. This requires consideration of the types of computers customers are using (Windows or Macintosh) as well as the browsers they are using (this was in the early 2000s, so it was Internet Explorer and Firefox). To minimize costs, software companies try to develop web pages that work on both Windows and Macintosh and in both types of browsers.

Second, the events described below happened shortly after the bursting of the dot com bubble, when even badly-run software companies still had money. This attracted ambulance chasers, and a new player entered the chat: litigious companies going after money under the guise of enforcing the ADA, the Americans with Disabilities Act (ADA National Network, 2023). Company management was too spineless to mount a resistance, so these ADA enforcers frequently called the tune within software companies, even down to the level of individual software developers. The situation was very reminiscent of the diversity, equity, and inclusion (DEI) racket now plaguing universities, organizations, and companies (Lawson, 2025).

Third, there was an extraordinarily strong push for “open source” software, which is software whose internal mechanisms (source code) can be read by anybody. Those advocating open-source software sometimes stand to gain from stealing a competitor’s source code, but in many cases the goal is openness for the sake of openness: most advocates wouldn’t be able to understand the source code even if it were open and easily available.

Finally, the software industry is rife with politics, and not just the usual workplace cattiness. This existed even back in the early 2000s. At present, the level of politics in software companies is turned up to Spinal Tap level 11.

With that background, the nefarious misrepresentation of the VOC can now be described!

“User advocates,” the personification of the VOC, claimed that a certain type of software, Adobe Flash, was unsuitable for use on our web pages. Their reasons were as follows: it was not open source, it was (they claimed) inaccessible under ADA standards, and it was not available to all our users. Because of this, the user advocates wanted all the games, financial charting apps, and other engaging user experiences on our websites to be dropped and replaced with other technologies.

An investigation into these claims and recommended actions revealed some disturbing information. While Adobe Flash was indeed closed source, it could be made ADA compliant. Also, only 3% of our users did not have Flash on their computers. For comparison, 10% of our customers used Macintosh computers.

These findings undermined the user advocates’ case for eliminating Flash from our websites. In addition, the user advocates had a visceral hatred of Flash, and those strong emotions compromised their objectivity.

Most damaging to the user advocates was the fact that their proposed actions were simply impractical: viable alternatives to Flash did not exist at the time, and the user advocates had not considered the costs involved in changing from Flash to a (non-existent) alternative technology.

Further investigation showed that the user advocates had no evidence that customers were indeed calling for the elimination of Flash. Instead, the user advocates were advocating their own beliefs and passing them off as the customers’ voice.

In the end, these user advocates won, sort of. Interactive Flash experiences were sometimes replaced with less interactive experiences, but usually they were simply dropped with no replacements. This spread throughout the industry, and the entire world wide web is now a far less interesting place.


References

ADA National Network. (2023). Americans With Disabilities Act: Enforcement options under the Employment Provisions (Title I). https://adata.org/factsheet/enforcement-options-employment-provisions

Goetsch, D. L. & Davis, S. B. (2021). Quality management for organizational excellence: Introduction to total quality (9th ed.). Pearson.

Lawson, T. (2025, 14 January). Black is NOT a credential: The corporate scam of DEI. FIG Ink.

Application of the PDCA Cycle


Introduction

The Plan-Do-Check-Act cycle is an iterative problem-solving technique that can be used to improve the quality of an organization’s products or services and to increase customer satisfaction. This paper begins with a description of the technique. Next, it is applied to a typical problem encountered at a fictitious software company. Finally, the paper concludes with a discussion of some modifications to the Plan-Do-Check-Act cycle that increase the speed at which solutions can be found.


Description of the PDCA Cycle

The Plan-Do-Check-Act (PDCA) cycle is a problem-solving method that can be applied to either existent or latent problems (Goetsch & Davis, p. 272-278). The cycle begins with the “Plan” step, which starts from an observed undesired behavior, an undesired quality, or an opportunity for improvement, and produces a plan to address it. The “Do” step involves implementing the plan from step 1 on a limited basis. The “Check” step determines the success or failure of the implementation. Finally, the “Act” step – also called the “Adjust” step – involves acting on the results of the “Check” step. If the plan did not work, a brief diagnosis is performed, and the cycle repeats with a new plan based on what was learned from the diagnosis. If the plan was successful, then the plan can be executed on a wider basis, or additional changes can be made, in the next cycle.
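
For readers who think in code, the control flow of the cycle can be sketched as a loop (this is my own minimal illustration, not a procedure from Goetsch & Davis; the plan/do/check/adjust callables are placeholders for whatever the team actually does in each step):

    def pdca(plan, do, check, adjust, max_cycles=10):
        """Run Plan-Do-Check-Act until check() passes or the cycle limit is hit."""
        current_plan = plan()                              # Plan
        for _ in range(max_cycles):
            result = do(current_plan)                      # Do: implement on a limited basis
            if check(result):                              # Check: did the change work?
                return current_plan                        # Act: roll out more widely
            current_plan = adjust(current_plan, result)    # Act/Adjust: revise and repeat
        raise RuntimeError("No acceptable plan found within the cycle limit.")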


PDCA Application

The PDCA cycle will be demonstrated on the following fictitious problem. The software company called Gaggle dot Com makes an online email service called GaggleMail, which is in no way a copy of Google’s Gmail. The PDCA problem-solving method will be used to address customer comments that the user interface (UI) is not very friendly to color-blind users. The names, phone numbers, and email addresses of people experiencing this problem are recorded by customer service representatives and passed on to the team that will be fixing the problem.


Step 1: Plan

To make the UI accessible to color-blind users, the software developers and graphic designers quickly decide to add a toggle for changing the colors of the UI. When the toggle is activated, the screen colors change to make the text readable to color-blind users. A simple web search indicates that there are different types of color-blindness (WebAIM, 2021), and that it is sufficient to increase the contrast between text color and background color and to avoid certain color combinations.
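
The contrast part of the plan can be checked numerically. Here is a minimal sketch (my own illustration, not code from WebAIM) that computes the WCAG contrast ratio between a text color and a background color; WCAG AA asks for a ratio of at least 4.5:1 for normal-size text.

    def _linearize(channel):
        """Convert an 8-bit sRGB channel to its linear-light value."""
        c = channel / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    def relative_luminance(rgb):
        """WCAG relative luminance of an (R, G, B) color with 0-255 channels."""
        r, g, b = (_linearize(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def contrast_ratio(foreground, background):
        """WCAG contrast ratio between two colors; always at least 1."""
        lighter, darker = sorted(
            (relative_luminance(foreground), relative_luminance(background)),
            reverse=True,
        )
        return (lighter + 0.05) / (darker + 0.05)

    # Example: dark gray text (#333333) on a white background is roughly 12.6:1,
    # comfortably above the 4.5:1 threshold.
    print(round(contrast_ratio((51, 51, 51), (255, 255, 255)), 1))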


Step 2: Do

The developers implement the toggle in GaggleMail’s UI. In the process of doing so, the software developers test that the toggle does change text and background colors, and the results can be verified using a tool that simulates the way a color-blind person would see the page (Bureau of Internet Accessibility, Inc., 2022). According to PDCA dogma, this action belongs in the “Check” step, but it makes sense to do it here – it is an easy verification, performed by the individual who is in a position to correct any problems, and it takes almost no time. This check verifies the functionality of the toggle: that it changes text and background colors. But are they the right colors? This is where the next step becomes relevant.


Step 3: Check

As described above, the changes in color are first verified by using a color-blindness simulation tool. Assuming that the new colors work with the simulator, it is time to have the customers who reported the problem check that activating the toggle does indeed make GaggleMail’s UI readable to them. This is called “beta testing,” and the customers are called “beta testers.” The changes to the UI will be rolled out on a limited basis, just to the beta testers.


Step 4: Act or Adjust

The responses from the beta testers will fall into three categories: “the page is extremely readable,” “the page readability can be improved,” or “the page is still unreadable.”

If the page is extremely readable to color-blind users when using the toggle, the changes will then be made available to all users.

If the page readability can be improved, better colors are chosen, and the PDCA cycle is repeated. If the page is still unreadable to color-blind users, the toggle will be checked to verify that it is indeed working, and if so, a different set of colors is used. Again, the PDCA cycle repeats.


Conclusion

In a highly competitive industry such as the one the fictitious Gaggle dot Com operates in, it is absolutely necessary to run the PDCA cycle as rapidly as possible. This can be done by minimizing the number of employees involved and limiting the team to only the most relevant people. In this example, the relevant employees are the graphic designers, the software developers, and the customer service representatives. Only one of each type of employee is required, for a total of three people. A formal QA process is not required.

Another way to speed the PDCA cycle is to partially move the “Check” step into the “Do” step. In this example, the software developer personally tests (checks) the toggle while he implements it during the “Do” step.

By executing the PDCA cycle as described here, the GaggleMail UI is improved to make it extremely usable to color-blind customers in just a few days. If a bureaucratic process is used – especially processes that involve litigious web accessibility advocates – the changes may be tied up in committee meetings for weeks.


References

Bureau of Internet Accessibility, Inc. (2022, 4 November). What is color blindness accessibility? https://www.boia.org/blog/what-is-color-blindness-accessibility

Goetsch, D. L. & Davis, S. B. (2021). Quality management for organizational excellence: Introduction to total quality (9th ed.). Pearson.

WebAIM. (2021, 12 August). Visual disabilities: Color-blindness. https://webaim.org/articles/visual/colorblind

Statistical Process Control


Introduction

Goetsch & Davis (2021, p. 306) define Statistical Process Control as follows:

Statistical process control (SPC) is a statistical method of separating variation resulting from special causes from variation resulting from natural causes in order to eliminate the special causes and to establish and maintain consistency in the process, enabling process improvement.

SPC is a methodology for maintaining and improving quality in production processes. It is implemented to control variation, eliminate waste, make processes predictable, and perform product inspections, all with the goal of continual improvement.
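
To ground the definition, here is a minimal sketch (my own illustration with made-up measurements, not an example from Goetsch & Davis) of an individuals control chart: the process standard deviation is estimated from the average moving range (divided by the constant d2 = 1.128 for ranges of two consecutive points), and any observation outside the resulting 3-sigma limits is flagged as likely due to a special cause.

    def control_limits(observations):
        """Return (center line, lower control limit, upper control limit)."""
        center = sum(observations) / len(observations)
        moving_ranges = [abs(b - a) for a, b in zip(observations, observations[1:])]
        sigma_hat = (sum(moving_ranges) / len(moving_ranges)) / 1.128
        return center, center - 3 * sigma_hat, center + 3 * sigma_hat

    def special_causes(observations):
        """Indices of observations falling outside the control limits."""
        _, lcl, ucl = control_limits(observations)
        return [i for i, x in enumerate(observations) if x < lcl or x > ucl]

    measurements = [10.1, 9.9, 10.2, 10.0, 9.8, 10.1, 12.7, 10.0, 9.9, 10.2]
    print(control_limits(measurements))                    # roughly (10.29, 8.25, 12.33)
    print("Special-cause points at indices:", special_causes(measurements))  # [6]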

In this post, the function of management in SPC is described. This includes management’s role in establishing quality measures and using control charts to maintain the level of quality. The actions management must take when quality deviates from standards are also described.


Role of Management

Management’s primary responsibility is to establish production quality levels that match customers’ expectations. This requires setting measurable standards for product and service quality. By providing these standards, management demonstrates a commitment to quality and begins to convert that commitment into a culture of quality. Research has shown that a gradual implementation of SPC is more successful than abrupt enforcement (Bushe, 1988), but SPC implementation does set the direction. These production quality levels are also required before control charts can be used to monitor and maintain quality (Rungtusanatham, 2001).

Management must also be involved in establishing budgets and allocating resources in support of statistical process control. This includes funding new machines and modern technologies that may be required for process improvements.

In addition, management is responsible for approving and sometimes conducting training programs needed by employees to use SPC effectively (Goetsch & Davis, p. 320).

Management is also responsible for evaluating and approving changes to processes suggested by other departments. In a sense, management acts like a sieve, allowing only promising ideas through to line workers. Besides this, implementing these changes may involve new machinery or personnel changes, which are budgetary issues.


When Production Quality Slips

Management is also involved, or should be involved, in diagnosing problems when production quality falls below the established levels. In the context of manufacturing, machine operators would have the most direct understanding of the problem. It is the manager’s responsibility to appraise the operators’ findings and approve the budget necessary to repair or replace the machine.

Another situation that requires managerial intervention is when a supplier’s parts fall below the expected quality level. There are several courses of action, all of which require a manager’s decision.

One option is for the manager to contact the supplier to get an estimate for the time needed for them to resume manufacturing products that are within specifications. Based on this information, the manager may have to delay delivery to his customers or deliver less than what was promised.

A second option is to temporarily require the supplier to provide additional parts with the hope that there will be enough parts that are within specifications to satisfy customer orders.

A third option is to switch suppliers, which requires a manager’s decision. This will entail delays in fulfilling customer orders.

The least desirable option is to provide the customer with substandard parts. This is contrary to the philosophy of total quality management, however.


Conclusion

Statistical process control is a vital methodology for ensuring consistent quality and continuous improvement in production contexts. Management plays a pivotal role in successfully implementing SPC by setting quality standards, allocating financial resources for training, new machinery, etc. Managers are also essential for addressing deviations from quality standards. This could entail working with machine operators to diagnose and resolve such problems or making decisions about supplier relationships. By accepting these responsibilities, managers uphold the agreed-upon product quality standards as well as maintain customer satisfaction.


References

Bushe, G. (1988). Cultural contradictions of statistical process control in American manufacturing organizations. Journal of Management 14(1). https://doi.org/10.1177/014920638801400103

Goetsch, D. L. & Davis, S. B. (2021). Quality management for organizational excellence: Introduction to total quality (9th ed.). Pearson.

Rungtusanatham, M. (2001). Beyond improved quality: the motivational effects of statistical process control. Journal of Operations Management 19(6). https://doi.org/10.1016/S0272-6963(01)00070-5