We Punish the Hands Holding the Swiss Cheese and Ignore the Factory That Makes It Full of Holes
Healthcare, for all its technology, is primitive in one key area: it cannot see its own systems. We measure people in detail but see systems only in outline. We have sharp tools for judging individual performance but almost nothing to monitor the health of the system itself.
This isn’t a small oversight; it’s a fundamental design flaw. It means that when harm occurs, we instinctively blame the person closest to the event, while the systemic failures that made the error inevitable remain hidden. This is best understood through the lens of James Reason’s Swiss Cheese Model, which shows how disasters happen when holes in successive layers of defence line up.
In healthcare, however, we are conditioned to see only the final person holding the last slice of cheese. This article explains how that happens, why our focus is so dangerously narrow, and what it would take to finally see the whole system.
The Anatomy of a System Failure
Imagine a fatal medication error. A nurse is suspended after administering the wrong dose. The investigation concludes with “human error,” and the case is closed. This provides a sense of resolution, but it ignores the real story.
The story our system fails to record is that the high-concentration vial was stored next to the standard one (a latent error), the electronic warning was overridden due to alert fatigue (a design flaw), and the nurse was covering two units due to short staffing (an operational pressure). These are the holes in the Swiss cheese. The nurse didn’t cause the failure; they were the final person to inherit a chain of system-level defects.
The Loop of Blame
This is the loop of blame. Because we look for individual failings, we find them. Each investigation that ends by blaming a person strengthens the belief that error is a personal, moral failing. Hindsight bias makes it worse, turning a decision made under pressure and with incomplete information into what looks, after the fact, like an obvious mistake. We retrain the surgeon after a wrong-site operation but leave the rushed, interruption-prone “time-out” process unchanged, ready to trap the next person.
Our system is wired with blinded sensors. It can detect the pharmacist or the nurse, but it is blind to the holes they are forced to work around: confusing packaging, similar drug names, and missing decision support. We punish the hands holding the cheese and ignore the factory that makes it full of holes.
A System Designed to Blame
This systemic blindness isn’t accidental; it’s embedded in our culture, structures, and tools.
- Narrative Bias: Simple stories with villains are easier to tell and understand than complex stories of systemic failure. “The doctor forgot” is a simpler narrative than “The diagnostic pathway is fragmented across three different IT systems.”
- Measurement Bias: It’s easy to count an individual’s errors. It’s far harder to measure team resilience, the usability of an electronic record, or the psychological safety of a ward. We manage what we measure, and we only measure people.
- Accountability Structures: Our systems are built around individual responsibility.
  - The name above the bed assigns responsibility to a single consultant, even if care is delivered by a sprawling, often disconnected team.
  - Appraisals and job plans focus on personal activities and portfolios, not on the health of the teams or processes that individual works within.
  - When a poor referral pathway leads to a delayed diagnosis, there is no clear owner to hold accountable, so we default to blaming the last clinician who saw the patient.
These forces relentlessly pull our attention toward the individual, pushing the system and its flaws into the background.
Developing System Sight
To stop blaming individuals, we must learn to see our systems. This requires a deliberate shift in focus, from punishing people to improving the conditions they work in. This isn’t about removing accountability; it’s about redefining it.
Embrace a Just Culture
First, we must adopt a Just Culture — a framework for differentiating between honest human error (which requires system redesign), at-risk behaviour driven by broken processes (which requires coaching), and genuine recklessness (which requires accountability). This gives us a way to be both compassionate and responsible, ensuring we only punish a conscious disregard for safety, not the predictable errors our own systems create.
A New Dashboard for Safety
Second, we need to track the vital signs of the organisation itself. The metrics below are a few examples of early warning indicators of a system under stress; they allow us to diagnose problems before they cause harm, and many other system-level measures exist in healthcare:
- System Stress & Staff Wellbeing: Regular, anonymous measures of burnout and fatigue are essential. A burned-out workforce isn’t a collection of personal failings; it’s a symptom of an overloaded system.
- Improvised Routines and Variations in Practice: Every time staff bypass a procedure, often out of necessity, it may signal a broken process. Workarounds should be logged and analysed, not ignored, and certainly not criticised by default. We must also check how often critical safety processes, such as surgical time-outs or patient handovers, are carried out as designed. This reveals the gap between “work as imagined” in protocols and “work as done” on the front line. The task is then to identify and address the underlying system-level causes of that gap.
- Psychological Safety: Teams must feel safe to speak up and report errors without fear of punishment. Without this, data on safety is incomplete, because staff will only share good news.
- Rapid and Accurate Differentiation of Error: We must be able to distinguish quickly and reliably between system-caused errors (the majority) and deliberate harm (a very small minority). Low specificity in this distinction triggers unnecessary and damaging investigations, adding further psychological harm to both patients and staff.
Conclusion
Making the system visible is a duty of leadership. Boards and senior executives must review system health with the same seriousness they apply to finance. Their first question should not be, “Who was at fault?” but, “How did our system fail this patient and our staff?”
We have built a powerful machine for finding fault in people but remain blind to the systemic faults that produce that “human error.” This guarantees we will continue to find a final person to blame—the nurse who gave the drug, the surgeon who operated on the wrong site, the doctor who missed the critical detail. These people are not the cause of harm but its inheritors.
Punishing them does nothing to fix the chain of failure; it only ensures that the chain will form again for the next patient, with a different person at its end. We see our people in fine detail, but our systems only in outline. Until we bring our systems into sharp focus, the next tragedy is not a matter of if, but of who.