Dr Henry Neave was the W Edwards Deming Professor of Leadership and Management in the Business School of the Nottingham Trent University until his retirement in 2004; he worked closely with Dr Deming from 1985. Mitch Beedie is a freelance writer, specialising in energy and environmental management.
This article provides a basic introduction to Dr Walter Shewhart’s fundamental work on understanding variation and the use of process behaviour charts which in the 1920s effectively launched Deming’s lifetime’s work. We have written it for the benefit of those who have no previous knowledge of the subject, though “older hands” may also discover some points of interest. [NB For some technical reasons, Shewhart used the term “control chart”; however “control” can give the wrong impression in this context and so the more descriptive name “process behaviour chart” has become increasingly popular in recent years. Similarly we mostly avoid the term “Statistical Process Control” (SPC).]
The article comprises a main discussion on Understanding Variation and three links which provide additional practical details.
- Shewhart’s discovery
- Stable processes and common causes
- So what?
- Unstable processes and special causes
- Monitoring is not enough
References and Reading Guide
Activities as varied as answering customers’ questions, preparing invoices, manufacturing products, delivering services, powering offices and factories, and even gaining new business can all be understood as “processes”. A fundamental property of all but the most trivial of processes is that their outputs vary, and such variation is usually troublesome. The foundation of Dr W Edwards Deming’s guidance on process improvement was the understanding of that variation, with the aim of reducing it, thus making the process more reliable and predictable.
Process behaviour charts are an essential aid. They are widely used in manufacturing, though often not in the way that Dr Deming (and indeed their creator, Dr Shewhart) recommended. In the service industries they are still little used, which itself explains much of the current difference between manufacturing and service quality. “Improvements” that are based merely on opinion or “gut feel” stand on thin ice. Charting process performance and important related factors—and knowing how to interpret those charts—is the springboard for process improvement. The process behaviour chart is the “voice of the process”, telling us both (i) what the process is currently doing and (ii) what the process is capable of doing—which may be very different.
Every time you do your work—be it in service, manufacturing, administration, etc—the results will generally not turn out precisely the same as before. Unless you lead an unusually simple life there will always be some variation. Similarly for customers, processes such as buying a product or subscribing to a service may produce unpleasant surprises. It is a fact of life that processes often do not work as efficiently or as speedily or as smoothly as they did on some previous occasions. It is rare that we either experience, or do, a “perfect” job. All this is “variability” or “variation”. It is the train (or bus, or taxi, or plane) not arriving and thus making you late for a meeting, the promised cheque that doesn’t appear, the Helpline that can’t be contacted, the final self-assembly nut that doesn’t fit.
Note the difference between “variation” and “variety”. Variety is the “spice of life”; variation is not. Variety in available products and services can of course enrich life. In contrast, variation is nasty: it makes life difficult, unpredictable, untrustworthy. It is “bad quality”. “Good quality” implies reliability, trustworthiness, no nasty surprises: i.e. little variation.
Understanding variation – the launchpad for Deming’s management philosophy – is essential for improving any process
Thus since too much variation is a feature of bad quality, while relatively little variation is a feature of good quality, understanding variation—the launchpad for Deming’s management philosophy—is essential for improving any process. It was Shewhart’s breakthrough in understanding variation during the 1920s that formed the foundation for Deming’s lifetime’s work.
Deming repeatedly attributed much of his most important learning to Dr Shewhart. As an illustration of this, in his Dedication to the reprint of Shewhart’s 1931 book Economic Control of Quality of Manufactured Product, Deming wrote:
“To Shewhart, quality … meant every activity and every technique that can contribute to better living ... His book emphasizes the need for continual search for better knowledge about materials, how they behave in manufacture, and how the product behaves in use. Economic manufacture requires achievement of statistical control in the process and statistical control of measurements. It requires improvement of the process in every … feasible way.”
Today, some 80 years after Shewhart’s book was published, most people’s interpretation of “quality” is still hopelessly narrow and limited compared with Shewhart’s wisdom. And thus so is their use of process behaviour charts—if they use them at all.
Shewhart’s discovery
In the 1920s the Western Electric Company was developing telephone and related equipment, and investing massively to increase the company’s knowledge and abilities. But although early improvement efforts paid handsome dividends, such progress gradually “ran out of steam”. People were still working as hard, if not harder, than before. The company was still spending much money, time and effort on trying to make things better. But, despite all the work and resources, their quality efforts were achieving less and less. Shewhart was invited to study their problems, and eventually the light dawned. In 1989, during a presentation at the Palace of Versailles in France, Deming explained it as follows:
“ ... the harder they tried to achieve consistency and uniformity, the worse were the effects. The more they tried to shrink variation, the larger it got. They were naturally also interested in cutting costs. When any kind of error, mistake or accident occurred, they went to work on it to try to correct it. It was a noble aim. There was only one little trouble—their worthy efforts did not work. Things got worse ... they were failing to understand the difference between common causes and special causes, and that mixing them up makes things worse ... Sure we don’t like mistakes, complaints from customers, accidents—but if we weigh in at them without understanding then we make things worse.”
Not just fail to make things better, but actually make them worse. This is one of the most important things to learn. In essence, Shewhart’s breakthrough was to recognise two very different types of variation—and their very different types of implications as regards improvement efforts, capability, reliability, and so on. What Deming called “common-cause” variation is the routine variation to be expected because of what the process is, and the circumstances in which it exists and operates. “Special-cause” variation is anything over and above that routine variation.
How can we distinguish between the two types of variation in practice? By using the simple tool that Shewhart created for the purpose: the process behaviour chart
Now, very different actions are appropriate depending on whether something is routine (there all the time) or exceptional (perhaps just one-off). That is surely a “no brainer”. But how can we distinguish between the two types of variation in practice? By using the simple tool that Shewhart created for the purpose: the process behaviour chart. And that is how—and why—the process behaviour chart provides essential guidance for improvement.
Compared with other statistical techniques, the process behaviour chart is unusually easy to understand and use. It does not require knowledge of standard statistical fodder such as probability, normal distributions, t-tests, confidence intervals, etc, etc. And it played a crucial role in Japan’s quality revolution after World War II. We believe it should be more widely used in the West—and not just in manufacturing. We believe that everyone involved in key company processes, particularly those in charge of them, should know how to choose appropriate data and then construct, keep and interpret process behaviour charts—and do so. (For some advice on what kind of data to record see Unknowns and Unknowables?.)
Stable processes and common causes
Let’s revisit this important sentence: “common-cause variation is the routine variation to be expected because of what the process is, and the circumstances in which it exists and operates”. The extent of this common-cause variation can be computed from process data to provide the so-called “process limits” (see Calculating Process Limits). If the outputs from the process are all contained within these process limits and show no patterns, trends, etc then the process is said to be “stable” or “predictable”. This is what Deming meant above by the term “statistical control”.
It follows that, while we continue to obtain such outputs which are within the process limits, it is illogical and impractical to claim that anything specific “caused” any particular outcome. Any and all such results are typical of those produced by the whole system of common causes (which Deming simply referred to as “the system”).
Individual variations in outputs can have any number of contributory causes. For example, weekly sales figures will be affected by how many “hot prospects” were available for Sales to contact, whether a certain customer has a particular need this month, the weather, even things like the length of traffic jams on the roads to the store or the office. The process behaviour chart guides us as to when it is economically worthwhile to look for a reason why a particular result occurred, and when it isn’t.
So what?
The previous section indicates the futility of a widespread practice that costs businesses huge amounts of time and money. Innumerable organisations try to explain monthly, weekly or daily (indeed, sometimes even hourly!) differences in cash flow, sales, production, profits and indeed performance of all kinds, when in fact the variation is primarily or wholly due to common causes—“the system”. Worse still, this futile practice increases stress for millions of workers: individual below-average results are often seen as justification for aggressive management action; above-average results are seen as evidence of the effects of such action (“I told you it was possible”)—and this lucky result may well become the next numerical target.
Whole company cultures can be built around these destructive methods. And not just in companies. Schools are given targets to meet in repeated pupil testing (itself a stressful and destructive process). Ambulance staff are given bonuses for treating emergencies themselves rather than bringing people into hospital, or alternatively are required to delay the patient’s entry into the Accident & Emergency Department in order to reduce recorded waiting times within the hospital. Such practices are a clear sign that those in charge know they should be seen to act, but don’t actually know what to do. The clear message from the setting of such arbitrary targets is that the staff are just being regarded as lazy and if only they would “pull their socks up” and work harder then everything would be fine.
After his long experience with business, Deming ascribed no less than 98% of the quality problems he encountered to management (i.e. “the system”) and thus at most 2% to workers. Staff cannot change the system because only management has the authority so to do. Deming stressed that numerical targets are literally worse than useless unless managers provide staff with the means of honestly reaching them. Otherwise they will reach them dishonestly. Of course, wouldn’t you? Workers must be given the time and resources they need to help make real improvements.
So let’s get back to some simple facts about stable processes. Unless there is a significant change in the system—good or bad—there is, by definition, a roughly 50–50 chance of the next result being above average or, of course, below average. To illustrate, if you repeatedly toss five coins then exactly half the time you’ll get three or more heads and the other half of the time you’ll get two or fewer—either way, praise or blame is hardly justified! What does that imply regarding “real work”? As the late and great American consultant Peter Scholtes put it: “Suppose that, when things are going well, we say to people: ‘Good job. Keep it up.’ Well, the chances are that we are reacting to the higher figures in a common-cause system. But, in a common-cause system, when things are going well there is no place to go but down. The only way for that to change is for the system to get changed.” Similarly, when things are going badly (again, in a stable system) there is no place to go but up.
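For the curious, the five-coin claim can be verified by direct enumeration. This short sketch (our own illustration, not part of the original discussion) counts the equally likely outcomes:

```python
from math import comb

# Five fair coins: 2**5 = 32 equally likely head/tail sequences.
total_outcomes = 2 ** 5

# Count the sequences with three or more heads: C(5,3) + C(5,4) + C(5,5).
three_or_more = sum(comb(5, k) for k in (3, 4, 5))  # 10 + 5 + 1 = 16

print(three_or_more, "of", total_outcomes)   # → 16 of 32
print(three_or_more / total_outcomes)        # → 0.5, exactly a 50–50 split
```

With an odd number of coins there is no possible tie, so the split between “three or more heads” and “two or fewer” is exactly even.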
The simple but vital message is this: as long as the measurements lie between the process limits, do not get distracted by short-term data or (even worse) individual data points. Consider the forest, not just one tree.
Unstable processes and special causes
In contrast to stable processes, unstable (or “unpredictable”) processes show changes in behaviour—that is what we mean by unstable. Unusual vibration from a nearby motor may suddenly cause a machine’s mountings to fail. Or half of a department’s staff may have been fired in an “efficiency drive”. “Economies” in a production department may result in greater demands on the repairs department. If such factors are enough to drive a process outside its process limits, they are special causes (see Six Processes) and need treating as such.
So changes in behaviour, be they temporary or enduring, do not just happen: they are caused by something (or some things). And such causes must be different from common causes. For, remember, common causes do not produce changes in behaviour—they produce continuing similar behaviour. Unlike common causes, special causes are worth trying to discover and remove: it will be profitable to do so. Here the process behaviour chart is a twofold essential aid. Not only does it warn us of the appearance of special causes: it also indicates approximately when they arrive, which is of course a vital clue for identifying them. (Naturally you will only want to remove a special cause if the change in behaviour is bad. If it’s good, you’ll still want to identify the cause, but the purpose will now be to see if it’s possible to absorb it into the system.)
The difference between stable and unstable processes is hence an eminently practical one. If we look for the cause of some particular detail in the data, are we likely to find something useful? Or are we setting off on a wild-goose chase, mistakenly believing we have seen something important? (Deming referred to this as “tampering”, a common hazard in mismanaged processes.) Shewhart described the process limits as “economic limits” because his experience showed that they minimise the combined cost of making the two kinds of mistakes when interpreting process data: reacting to short-term data when you shouldn’t and failing to act when you should.
Monitoring is not enough
Watching out for, and dealing with, special causes is what we call process “monitoring”. Monitoring thus provides valuable early warnings of impending trouble. It enables us to stabilise processes, and the beauty of stable processes is that we can then predict process costs and performance, product/service quality and quantity, and so on. In comparison, dealing with an unstable process is just guesswork. Until you eliminate special causes, you cannot tell what the process will do.
If you only use the process behaviour chart for process monitoring, you are missing out on the main purpose for which Shewhart created it: process improvement.
However, process monitoring is just “fire-fighting”, and surely by itself this is not good enough. If you only use the process behaviour chart for process monitoring, you are missing out on the main purpose for which Shewhart created it: process improvement. Process monitoring aims to reach and maintain a state of stability (predictability). That is of course very important, but it’s only the beginning. The next issue is: is the process “capable”? This term implies that, as long as the process is stable, it will produce outputs that meet customers’ stated requirements and specifications.
But is even that enough? It is surely necessary for a process to be capable; but is it sufficient? “Capable” produces satisfied customers, but Deming and others have pointed out the need to “surprise and delight” customers if you want to maintain and grow your business. In addition to your prices being reasonable, this means e.g. routinely delivering within one day whereas the customer used to believe that three days was the best that could be expected. It means comfortable trains consistently arriving on time. It means new houses incorporating (for example) solar heating as standard, and designed to conserve water. All this is down to process improvement. And that—to be blunt—starts with the company’s top management. Otherwise it just will not happen. For a country, it starts with government. To repeat, otherwise it just will not happen.
Businesses provide essential products and services and are central to any notion we have of a modern community. However, the sheer size of their social and environmental impacts is placing increasing stress on society. Managers usually make decisions for the short term (and feel it is more than their job is worth to do otherwise). These decisions may well produce the desired short-term effects, but at the cost of introducing instability to the system resulting in undesirable longer-term consequences. Short-term profit then turns into long-term loss, both for the company and, more broadly, society. Real, continual and sustainable process improvement comes from managing an organisation systemically; that one word effectively summarises Deming’s teaching.
Dr Deming showed how to build a healthy and expanding business by looking to the long term. As indicated in "Unknowns and Unknowables?", a good start can be made by discovering and focusing on those factors which are really important to your customers and others. When you know those, you can record key data on process behaviour charts, and start fixing what is wrong with your processes. A practical guide to systematic improvement that really works is Deming’s PDSA (Plan–Do–Study–Act) cycle. (An inferior version, PDCA: Plan–Do–Check–Act, is more widely known.) See, for example, Chapter 9 of "The Deming Dimension".
There may be a way to transform true business quality other than that developed by Shewhart and Deming—but we’ve not found it yet. If managers would listen, Deming’s teachings could transform business. If governments would listen, they could transform society.
Unknowns and Unknowables?
Of course, no organisation’s complex activities can be wholly reduced to mere numbers. A famous message from Dr Deming was that, when it comes down to it, the most important figures for managing a business are “unknown and unknowable”. If this surprises you, think about lost customer goodwill and the resulting lack of repeat sales caused by faulty products or bad service, or the costs incurred both by business and the environment through the wastage of scarce resources. On the other hand, how about the increased business we get by delighting our customers rather than merely satisfying them? But while we cannot get exact figures for such losses or gains, who could doubt that we could and should try to measure and interpret factors that influence them and are influenced by them?
So you need to consider what factors are most important to the people affected by your company’s activities, be they good or bad. If you don’t know, ask them! These people include customers, staff, shareholders, local authorities, governments, and the neighbours who are sharing in your factory’s environmental emissions. A company should chart data from each of its major departments—figures such as sales, income, expenditure, accidents, profit, delivery times, proportion of quality defect(ive)s, waste of environmental resources like energy and water, key equipment downtimes, etc along with other major factors related to these figures.
When studying a process, choose measurements that are directly or indirectly related to the quality of service or product. Some of these measurements should come from within the process, while others should be consequences of it. Where possible, choose measurements that are relatively easy to understand rather than ultra-technical measures that only mean something to experts; this advice is of course particularly important when working with teams.
For example, on the principle that your customers’ time is valuable, your company may want incoming phone calls to always be answered promptly by a receptionist who immediately directs the caller to the right person. Charting the time taken to do this can show whether any types of phone calls pose a special problem to be worked on, and whether the more routine calls are being handled efficiently. Process behaviour charts can also show up changes in performance, e.g. caused by a receptionist being ill and someone untrained taking over.
The intention is not to have scores of receptionists noting down how long they take to connect each caller! If such information is needed then it should be collected automatically, but that itself can generate (often perfectly understandable) fears of being spied upon. This largely depends on whether or not your company operates in a “climate of fear”. As with all attempted improvements, their effectiveness will depend not only on what you do but how you do it. Improvements will only work if people understand that it is not they who are being targeted. The prime purpose of the exercise is to improve the system (i.e. to “do things” to the system rather than to “do things” to people).
Two other pieces of advice are important. First, simultaneously studying different kinds of measurements on process behaviour charts can show how some factors affect and are affected by others. Second, it can be very worthwhile to chart some processes over different time-frames (e.g. using both daily and monthly data for the same process) to learn more about both shorter- and longer-term behaviour.
Real improvements are often not simple to achieve. In particular they require thought, planning, discussions with staff, and acting on what staff tell you. One of Deming’s most endearing qualities was that, when he was invited into a company, he spent much of his time walking the shopfloor, asking workers what their problems were. He recognised that problems are rarely the staff’s “fault”. For instance, they are of course working with the equipment they are given; this may be old and unreliable, or new and incomprehensible. So, whose fault is it?
This is one reason why “SPC training” on its own is not enough. It ignores the company culture. That culture, however, is precisely what determines whether attempted improvements will work. SPC (Statistical Process Control) gives guidance on what questions to ask, and when. But these questions will not necessarily attract the right answers. Ultimately, success will depend on factors like whether receptionists feel able to come back to you and say: “I spend all day listening to customers telling me they’re taking their business elsewhere because we can’t deliver, and now you’re simply asking me to write down how long they’re on the phone?”
Calculating Process Limits
The simplest processes to chart are those that generate just one measurement at a time: the number of mistakes per 100 invoices, delivery times, weekly chemicals and water usage, etc. This is true for most service operations (as opposed to many manufacturing processes, where it is often conventional to obtain data “a few at a time” rather than individually).
Individual counts or measurements are plotted on an “X-chart”; this is just an ordinary graph (a “run chart”) of these values against time but with three horizontal lines superposed showing the process average (the central line) and the upper and lower process limits. These process limits are calculated by a simple formula which is easily implemented on a spreadsheet if desired. We calculate the average value Xave and the average “moving range” MRave using some data from the process (typically between 10 and 25 values). The moving range is the size of the change (difference) between successive values, thus giving a “time-localised” measure of the variation in the data. Based on Shewhart’s guidance (in his 1931 book), the upper and lower process limits are then simply placed at 2.66 MRave above and below Xave respectively.
Let’s take an example of 24 successive daily readings:
82 81 82 81 91 85 76 84 81 80 80 82 82 85 86 88 78 89 81 87 76 66 69 64
These are plotted on a run chart:
What might we say about the behaviour of this process? There seem to be two periods of time over which it behaves very smoothly: the first few days, and days 9 to 16. In between these two periods (days 5 to 8) there is quite a wild “hiccup”; also, after day 16 the behaviour becomes quite erratic. So which of these features are “real”, i.e. may well have resulted from some specific (special) causes? And which might realistically be dismissed as just randomness or natural variability and thus not worth trying to explain?
The process behaviour chart (i.e. the same picture but with the addition of the three horizontal lines) helps us distinguish.
The chart’s central line is the average of the 24 values. The total of the 24 values is 1936; dividing this by 24 gives Xave = 80.67.
Recall that the moving ranges are the differences between adjacent values. The change from the first value (82) to the second value (81) is 82 − 81 = 1. The next two moving ranges are also equal to 1. But then from the fourth to the fifth value there is a jump of 91 − 81 = 10. From the fifth to the sixth, the change is 91 − 85 = 6, and so on. Note that to compute moving ranges we simply subtract the smaller value from the larger value each time irrespective of whether the change is up or down.
The final moving range is the change from the 23rd to the 24th value, 69 − 64 = 5 (note that there are 23 moving ranges altogether, not 24). The 23 moving ranges add up to 112, and dividing by 23 gives MRave = 4.87.
We can now calculate the process limits. The upper and lower process limits (UPL and LPL) are respectively:
UPL = Xave + 2.66 × MRave = 80.67 + 2.66 × 4.87 = 80.67 + 12.95 = 93.62;
LPL = Xave − 2.66 × MRave = 80.67 − 2.66 × 4.87 = 80.67 − 12.95 = 67.72.
In a spreadsheet, suppose the column headings are in cells B1 to Y1. The data series could then be held in cells B2 to Y2. The moving ranges ABS(B2−C2) to ABS(X2−Y2) could be calculated into cells C3 to Y3. The average Xave calculated into cell Z2 is SUM(B2:Y2)/24, and the average moving range MRave calculated into cell Z3 is SUM(C3:Y3)/23. The upper and lower process limits are then respectively Z2+2.66×Z3 and Z2−2.66×Z3.
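For those who prefer a script to a spreadsheet, the whole calculation takes only a few lines. This sketch (variable names are our own; the constant 2.66 is Shewhart’s) reproduces the worked example and flags any points beyond the process limits:

```python
# Daily pulse-rate readings from the worked example above
data = [82, 81, 82, 81, 91, 85, 76, 84, 81, 80, 80, 82,
        82, 85, 86, 88, 78, 89, 81, 87, 76, 66, 69, 64]

# Central line: the plain average of the individual values
x_ave = sum(data) / len(data)

# Moving ranges: absolute differences between successive values
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
mr_ave = sum(moving_ranges) / len(moving_ranges)

# Shewhart's limits for individual values: Xave +/- 2.66 * MRave
upl = x_ave + 2.66 * mr_ave
lpl = x_ave - 2.66 * mr_ave

print(f"Xave = {x_ave:.2f}, MRave = {mr_ave:.2f}")  # Xave = 80.67, MRave = 4.87
print(f"UPL = {upl:.2f}, LPL = {lpl:.2f}")          # UPL = 93.62, LPL = 67.71

# Flag any points outside the process limits (days numbered from 1)
signals = [(day, x) for day, x in enumerate(data, start=1)
           if x < lpl or x > upl]
print(signals)  # → [(22, 66), (24, 64)]
```

Note that the script’s lower limit comes out at 67.71 rather than 67.72 simply because it keeps full precision in Xave and MRave rather than the rounded intermediate values used in the hand calculation above.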
Drawing the central line and these process limits on the run chart, we get:
What do we see? Everything lies comfortably between the process limits (i.e. there is probably nothing we could “explain”) except at the right-hand end of the graph: all of the last three values are hovering around the lower limit, with two of them below it. The chart therefore tells us that these final values cannot be explained away in terms of the natural variation of the process, and thus there must be another reason for them. There was. As you will see in "Six Processes" below (refer to the descriptions of Figures D1 and D2), these data are pulse-rates, and there was a change in medication starting on the 21st day.
Six Processes
Two sets of six process behaviour charts (Figures A1 to F1 and A2 to F2) will help to illustrate the difference between common and special causes and their effects as seen in data. For ease of comparison, all six quantities are plotted over the same number of points. Twenty-four points are in fact more than ample for enabling us to start drawing solid conclusions: fewer are often used, particularly with monthly data—we wouldn’t usually want to wait two years! (This matter of how much data to use for the computation of process limits is discussed in detail on pp 82–84 of "Statistics Tables"—see “References and Reading Guide” at the end of the main discussion.) In Figures A1 to F1 the data all behave quite similarly to each other and are all contained within the process limits. But not so in Figures A2 to F2.
Figures A1 to F1 (on the left) are pretty boring. Why “boring”? Because nothing happens in any of them other than what may be interpreted as chance (random) fluctuations. They are all “much of a muchness”. In fact, they are from six entirely different sources. All of these processes are stable, “within limits”: their general behaviour is predictable. “Boring” is nice! The prediction is that, if the process stays unchanged, future data will continue to behave in the same manner. And, in particular, the data will continue to lie between the process limits. We can therefore extend these limits into the future to help us easily spot future changes if and when they occur.
As you’ll see from Figures A1 to F1, process behaviour charts of stable processes all look pretty much the same, wherever they’re from. These particular processes are in fact (A1) the sum of four dice thrown 24 times; (B1) the number of heads from 25 coins tossed 24 times; (C1) the number of “defective products” recorded by inspectors in a “Red Beads” experiment used by Dr Deming to demonstrate how wrong it is to blame workers for common-cause variations; (D1) this author’s pulse rate taken at breakfast-time over 24 days; (E1) the dimensions of a cigarette-lighter socket from 24 samples in a Japanese case study (see “References and Reading Guide”); and (F1) the monthly US trade deficits in billions of dollars over two successive years. (As you recall, although the charts look so similar in nature, we did say these processes were very different from each other!)
Things get more “interesting” (but often therefore more problematic) if the processes become unstable. This is seen in the second six charts (on the right), which show what happened to the same six processes later on.
So why are charts A2 to F2 “interesting” rather than “boring”? Because, in each case, something has happened—indicated in particular by one or more points lying outside the process limits. These points show real changes, caused by something different from the routine factors which were previously the only influences on the processes. We should now look for something new—something either improving results or, more usually in practice, degrading them.
Actually, with all but one of these processes, the writer knows what changed: (A2) four dice were used to start with, then only two dice were used for throws 7 to 12, and then six dice for the rest; (B2) the number of coins was increased by two each time from point 15 onwards; (C2) another “Red Beads” experiment but with the inspectors’ final six measurements mistakenly being added rather than averaged; (D2) the author's pulse rate over a later 24 days, with the final four days showing the effects of a newly-prescribed beta-blocker; (E2) a further 24 samples from the Japanese case study, during which a fault developed and was rectified; (F2) the monthly US trade deficits for the following two years—it would be a task for economists to explain what happened here. Of course, there was no way in which the process behaviour charts could have “known” any of these facts—but nevertheless these charts certainly told us when it was worth looking for them.
The process behaviour chart hence picks out those features (if any) in the data that can be distinguished from normal background variation. Many people find the analogy between “noise” (common-cause variation) and “signals” (special-cause variation) to be helpful. The variation shown in the process behaviour charts A1 to F1 is indistinguishable from “noise”, but that in charts A2 to F2 is not. It is the changed behaviour—“signals”—that makes processes A2 to F2 “interesting”, but also unpredictable until we identify the special causes and act accordingly.
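The noise/signal distinction can be made concrete with a small sketch. The data below are invented for illustration, not the article's actual figures: limits computed from a stable run of four-dice sums are extended forward, and when six dice replace four (as happened in chart A2) the new points fall outside the upper limit and are flagged as signals.

```python
# Invented data mimicking chart A2: a stable run of four-dice sums,
# after which six dice are thrown instead of four.
baseline = [14, 13, 15, 12, 16, 14, 13, 15, 14, 12, 16, 13]
later    = [14, 15, 13, 14, 21, 22, 23, 24]  # the last four points use six dice

# Natural process limits from the stable baseline (individuals/XmR chart)
mean = sum(baseline) / len(baseline)
mr_bar = sum(abs(a - b) for a, b in zip(baseline, baseline[1:])) / (len(baseline) - 1)
lower, upper = mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

# Extend the limits forward and flag anything outside them as a signal
signals = [x for x in later if x < lower or x > upper]
print(f"limits: ({lower:.1f}, {upper:.1f})")   # roughly (7.9, 20.0)
print("signals:", signals)                     # the four six-dice points
```

The chart has no way of "knowing" that the dice were changed; it simply reports that these points cannot plausibly be explained as routine variation of the old process.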
There is a vitally important lesson to learn from all this. Everybody would surely agree that it is pointless when throwing dice or tossing coins to try to explain the fine detail of the results obtained. So what if now and again we get a rather larger number of heads than normal—or an unusually low score on the dice? Of course we do: that’s just a feature of the randomness—there’s nothing “special” about it. However, having already observed that the statistical behaviour of the six processes A1 to F1 is indistinguishable from one process to another, why should we be any more justified in trying to explain the fine detail of any other stable process?
So, in summary, as long as the process remains stable, there is nothing worth learning from a relatively high or low pulse rate, or even a high or low trade figure, etc. Except for the occasional repeated number, every figure must be higher or lower than the previous one! That isn’t opinion, or even theory—it is simple fact. So usually there is nothing to learn from whether a figure goes up or goes down. It is only when a process becomes unstable that it is worth your while to go looking for the cause.
How much time and money are wasted in our companies because such a simple fact is ignored?
If you compare (i) the process behaviour chart in "Calculating process limits" with (ii) chart D2 above (on which is plotted exactly the same data), an obvious question arises: why are the process limits on the two charts different from each other? The answer is almost as obvious but is nonetheless important. In the "Calculating process limits" version we had no previous history of the process. In that case therefore the process limits were computed from the 24 data-points that were recorded over a time-period in which, as we soon observed, the process changed, i.e. was unstable. This instability was bound to contaminate those process limits to some extent. However in the current section we did have previous history of this process: chart D1. At that time the process was stable, i.e. predictable, and thus it was appropriate to extend the process limits calculated for chart D1 into the future. Because these limits were computed when the process was stable, they reflect the natural process variability more truly than in the "Calculating process limits" version. The consequence is that, later on when the process does change, the signals using the limits from chart D1 are stronger than when using the “contaminated” limits.
The important practical point to note is that, despite the “contamination” and the resulting weaker signals, those weaker signals were still strong enough to leave us in no doubt that the process had changed. It is a powerful feature of process behaviour charts that, while it is better to have limits computed when the process is stable, limits computed when the process is unstable are still usually good enough to signal important changes to the process.
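The weakening effect of "contaminated" limits can be illustrated numerically. In this sketch (again with invented data, not the pulse-rate figures), limits computed from the stable history alone flag every shifted point, while limits computed across the whole unstable span are dragged towards the shifted values and flag fewer of them. Even so, the contaminated limits still signal the change.

```python
# Invented data: a stable history followed by a sustained shift upwards.
stable = [14, 13, 15, 12, 16, 14, 13, 15, 14, 12, 16, 13]
shift  = [21, 22, 23, 24, 22, 21]
whole  = stable + shift

def xmr_limits(data):
    """Natural process limits: mean +/- 2.66 * average moving range."""
    mean = sum(data) / len(data)
    mr_bar = sum(abs(a - b) for a, b in zip(data, data[1:])) / (len(data) - 1)
    return mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

clean_lo, clean_hi = xmr_limits(stable)  # limits from the stable history only
cont_lo, cont_hi = xmr_limits(whole)     # limits "contaminated" by the shift

clean_signals = sum(1 for x in shift if x < clean_lo or x > clean_hi)
cont_signals  = sum(1 for x in shift if x < cont_lo or x > cont_hi)
print(f"clean limits        ({clean_lo:.1f}, {clean_hi:.1f}): "
      f"{clean_signals} of {len(shift)} shifted points signal")
print(f"contaminated limits ({cont_lo:.1f}, {cont_hi:.1f}): "
      f"{cont_signals} of {len(shift)} shifted points signal")
```

The contaminated upper limit is pushed higher because the shifted points inflate both the mean and the moving ranges, so the signals are weaker; but, as with the pulse-rate example, enough of them remain to leave no doubt that the process changed.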
References and Reading Guide
In each case an indication of reading level and content is provided, as follows:
(a) Short and easy introduction but nevertheless instructive.
(b) Introductory, relatively short but excellent coverage.
(c) Introductory but substantial and comprehensive.
(d) Excellent but quite advanced and time-consuming.
(e) “Classic” and only for the enthusiast.
Understanding variation and process behaviour charts
NEAVE, HENRY R, Elementary Statistics Tables. Taylor & Francis, London (2nd edition, 2010), especially pp 45–51. (a)
This book originated as a short book of standard statistics tables aimed at a very general audience rather than “specialists” such as mathematicians or engineers. The tables are fully supplemented by easy-to-read explanatory text and simple worked examples. Because of its importance, a wholly new section on process behaviour charts has been added in the new edition, featuring illustrations from as wide a range of processes as throwing dice, monthly inflation, and the author’s blood pressure!
NEAVE, HENRY R, Statistics Tables. Taylor & Francis, London (2nd edition, 2010), especially pp 73–86. (a)
Statistics Tables was based on tables needed for a substantial introductory statistics course taught by the author at the University of Nottingham. The course was open to students from all disciplines but required a prerequisite qualification in mathematics. The approach here is thus a little more “sophisticated” than in Elementary Statistics Tables. Again a wholly new section on process behaviour charts has been included in the second edition, and this contains some guidance on use currently unpublished elsewhere.
SHEWHART, WALTER A, Economic Control of Quality of Manufactured Product. Van Nostrand (1931); reprinted by the American Society for Quality Control (1980). (e)
As Deming himself indicated, Shewhart’s classic book, although packed full of fundamentally important material and wisdom and way ahead of its time, is definitely not for the beginner, though it is fascinating to the enthusiast.
WHEELER, DONALD J and CHAMBERS, DAVID S, Understanding Statistical Process Control. SPC Press, Knoxville, Tennessee (2nd edition, 1992). (d)
The data in Examples E1 and E2 of Six Processes come from a fascinating, albeit ancient, case study based on the interpretation of hand-drawn process behaviour charts discovered in a little-known Japanese company in 1982. An excellent detailed description of this case study is presented in Section 7.2 (pp 154–183) of Understanding Statistical Process Control. Much of the rest of this book is however written at a rather more advanced level than Dr Wheeler’s two books cited below.
WHEELER, DONALD J, Understanding Variation: the Key to Managing Chaos. SPC Press, Knoxville, Tennessee (2nd edition, 1999). (b)
Dr Wheeler is, in our judgment, the world master in this area today. This is a superb introductory book, easy to read and quite short, with very clear illustrations of using and interpreting process behaviour charts—sometimes with alarming but nonetheless valuable results!
WHEELER, DONALD J, Making Sense of Data: SPC for the Service Sector. SPC Press, Knoxville, Tennessee (2003). (c)
Whereas Wheeler’s Understanding Variation is a uniquely brilliant introduction, this is the book you will need if you then want to become your organisation’s expert. Excellent and comprehensive, and packed full of practical guidance.
The Deming philosophy of management
DEMING, W EDWARDS, Out of the Crisis. Massachusetts Institute of Technology, Center for Advanced Engineering Study (1986); Cambridge University Press (1988); MIT Press, Cambridge, Massachusetts (2000). (d)
Dr Deming's classic and best-known book, packed full of valuable material, but many find it quite difficult in parts.
DEMING, W EDWARDS, The New Economics for Industry, Government, Education. Massachusetts Institute of Technology, Center for Advanced Educational Services (2nd edition, 1994); MIT Press, Cambridge, Massachusetts (2000). (d)
Dr Deming's final book: much shorter and written in apparently easier language than Out of the Crisis, but with much depth underlying the simpler words.
NEAVE, HENRY R, The Deming Dimension. SPC Press, Knoxville, Tennessee (1990). (c)
An introductory yet comprehensive book, easier to read and ideal as an introduction to either or both of Dr Deming's books referenced above. It was described in Quality Progress (a journal of the American Society for Quality) as “the best theoretical yet practical book on the Deming philosophy”. As Kurt Lewin famously said, “There is nothing so practical as a good theory”!
All of these books can be obtained from SPC Press (USA-based, www.spcpress.com) or from the Deming Forum (UK-based, www.deming.org.uk).