Teaching/learning systems

Michael Ballé
March 22, 2024

FEATURE – To get the results we want from a system, or prevent it from generating undesirable events, we need to understand how it works and behaves in the real world. That’s exactly what Jidoka does.


Errors – let’s call them undesirable events – usually follow the same pattern, and analysis of them usually identifies three obvious culprits: a faulty part or piece of equipment, a person at fault doing their job poorly due to wrong assumptions or confusion, or information that got lost, was misdirected, or was simply wrong. The last two adverse events I witnessed in a hospital were a cardiac problem that went undetected because the ECG machine was malfunctioning, and a psychiatric patient sent to a psychiatric ward with a badly deteriorating pneumonia because his psychiatrist remembered him as healthy as a horse from their last visit three months prior and didn’t think to check whether there could be something biologically wrong with him. In both cases, the information was in the system but failed to connect with the decision-makers.

Finding what went wrong is a matter of patiently going through every event and spotting where the actual deviated from the expected – relatively easy to do in hindsight, extremely hard in foresight, because it’s impossible to know in advance which variation will have which effect. Insisting on following every procedure to the letter usually bogs things down until nothing at all happens and every step takes infinite time and heartache. On the other hand, procedures are there for a reason, and one never knows which variance can or will have catastrophic effects.

However, when one asks why things went wrong, the answers are immediately murkier. Few people commit actual violations – deliberately doing something wrong for their own gain. Mostly, they make mistakes, just as parts fail with use and information gets miscommunicated. Looking for answers often leads to the “culture” or the “system”, both driven by incompetent, immoral, or negligent leaders who have repeatedly ignored calls for more work on safety or pushed so hard for performance that people, understandably, have been driven to take shortcuts and – wham. Such systemic explanations are undeniably true in some instances, and there are documented cases of noxious cultures encouraging toxic managers and bullying people into doing the wrong thing, but the wrong thing also happens when the culture is neither particularly bad nor obsessed with performance. Culture-level explanations are riveting, yet hardly helpful, since no one quite knows how to change cultures other than changing leaders – and even that has unexpected, uncontrolled effects (culture eats strategy – and leadership – for breakfast, as the saying goes).

What kind of “in-between” explanation could we build to account for failure? If we put aside both the “broken part, incompetent person” and the “culture” accounts, how could we build a rationale for adverse events that avoids both extremes?

We can recognize that, first, systems do have a mechanistic, goal-oriented aspect – they’re designed that way in any case. Second, the system’s processes are always actuated in context, in local conditions that can range from “in control” (far from the cliff’s edge) to “out of control” (one step away from chaos), and these local conditions will significantly affect how the system performs. Finally, humans are voluntary agents: they act of their own will, according to their lights, and choose to get involved or not, to express their intuitions or feelings or not, to act out or not. By looking at how these three factors converge, we can draw a different picture of how things happen.

Human systems rarely behave as they were designed to, but they are designed nonetheless. An emergency ward is designed to process patients in terms of urgency and pathology type. It is not designed to handle large swings in influx, unlike a supermarket checkout, where tills can be opened or closed according to the length of the lines. Systems have a mechanistic “process” dimension that gets them to achieve one or two main goals within a long list of constraints, such as regulations, budget allocation, and available resources. The system is meant to handle generic cases and satisfy its client population overall – which also means it is rarely conceived to take care of you and your specific circumstances. The emergency personnel will hardly care that you have a flight to catch if your complaint looks benign (to them).

Systems have a baseline. They have goals and subgoals. They have defined processes and activities to achieve these goals. They have controls, to make sure performance, regulations and processes are carried out according to standards. They are designed for average cases in average conditions, with specific procedures to deal with special circumstances (which they rarely do well). To understand how the system works, we can start by outlining its mechanical set point: what it will try to achieve in any circumstance – whether it’s appropriate or not.

Secondly, systems are actuated in local conditions. Although the system is designed for average circumstances, real life very often happens, and conditions are not what was expected. Conditions cannot be systematized (and included in the mechanics of the system) precisely because they vary. One needs to rely on human experience and judgement to figure out conditions: is it a normal day or an abnormal one? Was the chief physician in a good mood or a bad mood? Is the situation clear, or are there so many things happening at the same time that everyone is confused? Is there rapid access to experts who understand these conditions, or are people left to make decisions on their own? Is there a culture of high or low expectations? And so on – the questions can be endless. Local conditions will translate, first, into friction: difficulties in achieving the process step due to ambiguity, chance, uncertainty, or simple entropy – all the wild cards the real world can throw at a process. Friction is the key to understanding local conditions, rather than relying on a priori ideas, because friction is recognizable, and it is possible to ask what condition creates it.

Locally, friction can lead to repurposing: people adapting the system to local conditions for their own use. Maintenance of the same type of equipment, for instance, can vary widely according to local conditions: how far the site is from the manufacturer, how easy it is to get expert coaching and local training, whether the local manager spends the maintenance budget on maintenance or uses it for something else, and so on. Repurposing of the system is an adaptation to local conditions in which people change the goals just to get by.

Repurposing is normally gradual – it doesn’t happen all at once, but in a slow drift as people are defeated by local conditions and progressively accept abnormal conditions as normal. Habituation is a driving force in human beings: people will basically get used to anything and stop being alarmed by situations they know should be intolerable but which have become routine. Starting with friction and leading into repurposing, local conditions are the second element that can help us understand system behavior, if we look into them closely enough (and with enough of an open mind).

Thirdly, people act intentionally: they will willfully intervene or, conversely, willfully decide not to. Silence or violence are typical reactions to conflict and frustration, and people can be expected to react very differently to the same situations. “Don’t text and drive” is a simple rule and a no-brainer in terms of safety, and yet people will interpret it very differently according to how they see themselves – their identity, or sense of their own character – according to their focus in the moment and the logic of the situation as they see it (long stretch of road, slow traffic, etc.), and also according to their emotional state – whether they’re focused on driving, bored out of their skulls, or driven insane by an urgent problem that needs to be resolved right now, for instance.

We are, by nature, creatures of mixed feelings. What we want changes from one moment to the next. What we think about a situation in the morning can change by evening. It is not by accident that most ancient philosophies seek ways to find some kind of peace – our minds are simply not put together for it. In our current world of machines and systems, we often treat people as if they were machines themselves, following set instructions and processes like robots, but that requires extreme pressure – it’s simply not how people think and behave. There is no single way of describing the complexity of human behavior, motivation, and decision-making, but for lack of a better system we still often break it down into the three age-old components of pathos, logos, and ethos:

  • Pathos: how emotionally aroused is the person? Are they calm and reasoning things through, weighing arguments and considering perspectives? Or are they emotional and reducing the situation to what they need to do right now, impulsively and instinctively, driven by their mood and subjective experience of right now? The same person will react very differently according to their emotional state (feeling good/feeling lousy) and how intense it is (feeling calm/being emotional).
  • Logos: how solid is their logic? According to their understanding of the situation (and how strong their emotions and intuitions are), people can make either logical cause-and-effect analyses, where their behavior will logically lead towards a desired goal based on facts, or wild claims and “hail Mary” intuitive jumps based on shaky assumptions.
  • Ethos: how responsible and trustworthy are they generally? Some people can be depended on more than others to accept accountability and do what’s best for the group, encouraging cooperation and mutual support, as opposed to letting others down by doing what’s best for themselves in the moment – acting impulsively or negligently, not completing tasks or assignments as promised, and then blaming others for their shortcomings. Can this person usually be relied on – or not?

With these three situational drivers in mind, we can now go beyond the “bad part, human error, wrong information” analysis of incidents and look deeper at: 1) How far from its baseline did the system behave? 2) What were the specific local conditions? And 3) Who were the people present and how did they react? Did they choose to intervene or not? Did their interventions make sense or not?

If we look at it this way, we can see that a fourth element makes a critical difference to the outcome: how well did the people understand how the system works? Knowledge of the system’s baseline and of how it reacts in different conditions will greatly affect both the emotionality of the situation and the logic of the responses. If people know what the system does, they are more likely to feel able to intervene, and to do so sensibly, in order to avoid (or mitigate) an accident.

In most of the hospital incidents I see, doctors and nurses often don’t know how the computer and administrative systems work, and when they choose to intervene, they go for a workaround that can make things worse – or, conversely, ignore something that should be done. In the average hospital, nurses will rarely isolate equipment at the first sign of trouble and aggressively pursue a maintenance fix through the system. They are so overwhelmed with tasks, and know that any maintenance action is likely to take a long time, that they will continue to use the equipment until it fails completely – adding uncertainty and risk as they do so. They usually have very vague mental models of how steeply equipment degrades (or of how badly minor measurement errors can affect patients) and of what it takes to get the equipment fixed or replaced.

Wrong theories about how things work lead to both illogical and more emotional reactions as people feel more threatened by the situation and will react more intuitively – a common path to adverse effects. What people know about the systems they use has a significant impact on outcomes the moment the system is no longer in its baseline conditions.

Consequently, systems can be designed in two ways: either as black boxes, where the user has no way to understand how the system works – most complex systems these days – or as teaching devices, where the user can look into the system to understand it as they use it. We have seen cars move from teaching systems, in which owners could fix basic things themselves, to increasingly opaque black boxes made to keep the driver – and indeed, to a large extent, the mechanic – out of the engine.

Black boxes are attractive to system designers because they feel – rightly, no doubt – that the less people tinker with the system, the better the chances it achieves its baseline behavior. The assumption here is to use the system until it breaks and then either call maintenance or replace it. This assumption, however, does not take into account all the intermediate states users find themselves in when the system is neither in baseline conditions nor has catastrophically broken yet – exactly where action can avert disaster if only users knew what actions to take.

Jidoka systems, with devices like andon, are designed to educate users. From Sakichi Toyoda’s original automatic loom, which would stop whenever a cotton thread broke, to today’s andon-driven factories, such systems are made to spot their own abnormality, stop, and call a user, so that they can look into the problem and replace the faulty component (or retrain the operator). In doing so, such systems teach users how they work, helping them build a more accurate mental picture of the baseline and, through experience, of how the system behaves in different external conditions.
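To make the stop-and-call idea concrete for readers who work with software systems, here is a minimal sketch in Python – a hypothetical illustration, not a description of any Toyota or factory implementation – of an andon-style check that refuses to work around an abnormal reading and instead stops and surfaces the exact condition to a person:

```python
# Minimal andon-style sketch (hypothetical illustration):
# rather than silently tolerating a reading outside its expected range, the step
# stops and calls a person, surfacing exactly which condition was abnormal.

class AndonStop(Exception):
    """Raised to halt the process and summon a human to examine the abnormality."""

def check_reading(station: str, value: float, low: float, high: float) -> float:
    """Pass the value through only if it falls within the expected baseline range."""
    if not (low <= value <= high):
        # Stop-and-call: make the abnormality visible instead of working around it.
        raise AndonStop(f"{station}: reading {value} outside expected range [{low}, {high}]")
    return value

if __name__ == "__main__":
    try:
        check_reading("ECG-monitor-3", value=412.0, low=30.0, high=220.0)
    except AndonStop as abnormality:
        # In a real cell this would light the andon board and notify the team leader;
        # here we simply print the call so someone can investigate and fix the cause.
        print("ANDON:", abnormality)
```

The point is not the code itself but the design choice it embodies: the system exposes its baseline (the expected range) and its abnormal states to the user, which is what turns it into a teaching device rather than a black box.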

As systems become more complex and sophisticated with each new wave of technology, system designers must ask themselves a further question: how much should users learn about the system? Many managers are now part of, and in charge of, incredibly complex systems that have been designed as black boxes. Managers are trained to follow processes, not to understand system behaviors – and specifically not their own system’s behaviors. As a result, although systems keep increasing in sophistication, outcomes are questionable everywhere – and the sense of human agency and control is certainly diminishing rapidly.

Lean’s jidoka has more to teach the world than simply a few mistake-proofing devices or clever ways to detect problems before they occur. It was originally about enabling machines with some aspects of human intelligence – namely, recognizing a good part from a bad part, or a used component from a valid one. But in doing so, jidoka (spot every defect, notify every defect, react immediately and correct the situation) also changes the human relationship to systems: it creates teaching/learning systems in which people can understand more about how the system works and behaves in the real world, and about what to do to get the results they seek. In every complex system where intentional behavior impacts safety, like a hospital or a plant, we need to know more about how the system works, not less – to make it safer.


THE AUTHOR

Michael Ballé is a lean author, executive coach and co-founder of Institut Lean France.
