Tag Archives: quality

How to tell who is productive if work is done in groups

It is a particular skill of management to hog the glory and cast the blame; if a project succeeds, executives will make it understood that the group's success was based on their leadership (and their ability to get everyone to work hard for low pay). If the project fails, an executive will typically cast blame on those who spotted the problem months earlier. These are the people most likely to blame the executive, so the executive discredits them first.

This being the dynamic of executive oversight, it becomes difficult to look over the work of a group and tell who is doing good work and who is coasting. If someone's got to be fired in the middle of a project, or after, who do you fire? My first thought is that, following a failure, you fire the manager and the guy at the top who drew the top salary. That's what winning sports teams do. It seems to promote "rebuilding," and it's a warning to those who follow. After the top people are gone, you might get an honest appraisal of what went wrong and what to do next.

A related problem, if you're looking to hire, is who to pick or promote from within. In the Revolutionary army, they allowed the conscripts to pick some of their commanders, and promoted others based on success. This may not be entirely fair, as there are many causes of success and failure, but it seemed to work better than the British system, where you picked by birth or education. Here's a lovely song about the value of university education in a modern major general.

A form of this feedback about who knows what he's doing and who does not is to look at who is listened to by colleagues. When someone speaks, do people who know listen? It's a method I've used to try to guess who knew things in a field outside my own. Bull-shitters tend to be ignored when they speak. The major general above is never listened to.

In basketball or hockey, the equivalent method is to see who the other players pass to the most, and who steals the most from the other side. It does not take much watching to get a general sense, but statistics help. With statistics, one can set up a hierarchical system based on who listens to whom, or who passes to whom, using a logistic equation as used for chess and dating sites. A lower-paid person at the center-top is a gem whom you might consider promoting.
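For those who like to see the arithmetic, here is a minimal sketch of one way such a ranking might work. It is not a validated method; the Elo-style update, the K-factor of 32, and the (listener, speaker) data format are assumptions of mine, chosen because the Elo system is the logistic-equation ranking used in chess.

```python
def expected(r_a, r_b):
    """Logistic (Elo-style) expectation that A 'wins' an exchange with B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def rank_by_deference(events, k=32, start=1500.0):
    """events: (listener, speaker) pairs -- a hypothetical data format.
    Each event is scored as a 'win' for the person being listened to."""
    ratings = {}
    for listener, speaker in events:
        rs = ratings.setdefault(speaker, start)
        rl = ratings.setdefault(listener, start)
        ratings[speaker] = rs + k * (1.0 - expected(rs, rl))
        ratings[listener] = rl + k * (0.0 - expected(rl, rs))
    return sorted(ratings.items(), key=lambda kv: kv[1], reverse=True)

# Toy usage: Carol is deferred to by everyone, so she floats to the top.
events = [("Alice", "Carol"), ("Bob", "Carol"), ("Dave", "Carol"), ("Carol", "Alice")]
print(rank_by_deference(events))
```

The same update works whether the "events" are passes on the ice or deference in a meeting; the point is only that repeated small endorsements accumulate into a ranking, the way chess wins do.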

In terms of overall group management, it was the opinion of W. Edwards Deming, the namesake of the Deming prize for quality, that overall group success was typically caused by luck or by some non-human cause. Thus, any manager would likely be as good as any other. Deming had a lovely experiment to show why this is likely the case; see it here. If one company or team did better year after year, it was common that they were in the right territory, or at the right time. As an example, the person who succeeded selling big computers in New York in the 1960s was not necessarily a good salesman or manager. Anyone could have managed that success. To the extent that this is true, you should not fire people readily, but neither should you worry that your highest paid manager or salesman is irreplaceable.
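You can get a feel for Deming's point with a few lines of simulation. The sketch below is not his bead experiment, and the numbers (ten salespeople, 200 calls each, a 25% close rate for everyone) are invented purely for illustration; it just shows how large the spread between the "best" and "worst" performer looks when skill is, by construction, identical.

```python
import random

N_PEOPLE, N_CALLS, P_CLOSE = 10, 200, 0.25   # everyone closes 25% of calls, by construction

totals = [sum(random.random() < P_CLOSE for _ in range(N_CALLS))
          for _ in range(N_PEOPLE)]
print(sorted(totals))
# On a typical run the "best" salesperson beats the "worst" by roughly 15-20 closes,
# though every one of them has exactly the same skill.
```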

Robert Buxbaum, October 9, 2022

A British tradition of inefficiency and silliness

While many British industries are forward thinking and reasonably efficient, I find Britons take particular pride in traditional craftsmanship. That is, while the Swiss seem to take no particular pride in their cuckoo clocks, the British positively glory in their handmade products: hand-woven tweed jackets, expensive suits, expensive whiskey, and hand-cut diamonds. To me, an American-trained engineer, "traditional craftsmanship" of this sort is another way of saying silly and inefficient. Not having a better explanation, I associate these behaviors with the decline of English power in the 20th century. England went from financial and military preëminence in 1900 to second-tier status a century later. It's an amazing change that I credit to tradition-bound inefficiency, and to socialism.

Queen Elizabeth and Edward VIII give the Nazi salute.


Britain is one of only two major industrial nations to have a monarch, and the only one where the monarch is an actual ambassador. The British Monarchy is not all bad, but it's certainly inefficient. Britain benefits from the major royals, the Queen and crown prince, in terms of tourism and good will. In this she's rather like our Mickey Mouse or Disneyland. The problem for England has to do with the other royals. We don't spend anything on Mickey's second cousins or grandchildren, and we don't elevate Mickey's relatives to military or political prominence. England's royal leaders gave it horrors like the charge of the Light Brigade in the Crimean War (and the Crimean War itself), Nazi-ism during WWII, the Grand Panjandrum in WWII, and the attack on Bunker Hill. There is a silliness to its imperialism via a busby-hatted military. Britain's powdered-wigged judges are equally silly.

Per hour worker productivity in the industrial world.


As the chart shows, England has the second lowest per-hour productivity in the industrial world. Japan, the other industrial giant with a monarch, has the lowest. They do far better per worker-year because they work an ungodly number of hours per year. French and German workers produce 20+% more per hour: enough that they can take off a month each year and still do as well. Much of the productivity advantage of France, Germany, and the US derives from manufacturing and management flexibility. US management does not favor as narrow a gene pool. Our workers are allowed real input into equipment and product decisions, and are given a real chance to move up. The result is new products, efficient manufacture, and less class struggle.

The upside of British manufacturing tradition is the historical cachet of English products. Americans and Germans have been willing to pay more for the historical patina of British whiskey, suits, and cars. Products benefit from historical connection. British suits remind one of the king, or of James Bond; British cars maintain a certain style, avoiding the fads of the era: fins, cup-holders, and electric accessories. A lack of change produces a lack of flaws too, perhaps the main thing keeping Britain from declining faster. A lack of flaws is particularly worthwhile in some industries, like banking and diamonds, products that have provided an increasing share of Britain's foreign exchange. The downside is a non-competitive military, a horrible food industry, and an economy that depends increasingly on oil.

Britain has a low birthrate too, due in part to low social mobility, I suspect. Social mobility looked like it would get worse when Britain joined the European Union: an influx of foreign workers entered, taking key jobs, including those with historical cachet. The Brits reacted by voting to leave the EU, a vote that seems to have taken the upper class by surprise. With Brexit, we can hope to see many years more of manufacturing by the traditional and silly.

Robert Buxbaum, December 31, 2016. I've also written about art, good and bad, about the US aesthetic of strength, about the French tradition of innovation, and about European vs US education.

Ab Normal Statistics and joke

The normal distribution of observation data looks sort of like a ghost. A distribution that really looks like a ghost is scary.


It's funny because … the normal distribution curve looks sort of like a ghost. It's also funny because it would be possible to imagine data being distributed like the ghost, and most people would be totally clueless as to how to deal with data like that: abnormal statistics. They'd find it scary and would likely try to ignore the problem. When faced with a statistics problem, most people just hope that the data is normal; they then use standard mathematical methods with a calculator or simulation package and hope for the best.

Take the following example: you're interested in buying a house near a river. You'd like to analyze river flood data to know your risks. How high will the river rise in 100 years, or 1000? Or perhaps you would like to analyze wind data to know how strong to make a sculpture so it does not blow down. Your first thought is to use the normal distribution math in your college statistics book. This looks awfully daunting (it doesn't have to be) and may be wrong, but it's all you've got.

The normal distribution is considered normal, in part, because it's fairly common to find that measured data deviates from the average in this way. Also, this distribution can be derived from the mathematics of an idealized view of the world, where any variation derives from many small errors around a common norm, and not from some single, giant issue. It's not clear this is a realistic assumption in most cases, but it is comforting. I'll show you how to do the common math as it's normally done, and then how to do it better and quicker with no math at all, and without those assumptions.
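If you want to see that "many small errors" idea in action, here is a tiny simulation (the number of errors, their size, and the 5.8-foot norm are arbitrary choices for illustration): each observation is a common norm plus fifty small random errors, and the totals pile up into the familiar bell shape.

```python
import random
import numpy as np

def one_measurement(n_errors=50, size=0.1):
    """One observation = a common norm plus many small, independent errors."""
    return 5.8 + sum(random.uniform(-size, size) for _ in range(n_errors))

samples = [one_measurement() for _ in range(1000)]
counts, edges = np.histogram(samples, bins=10)
for count, left in zip(counts, edges):       # crude text histogram of the bell shape
    print(f"{left:5.2f} {'#' * (count // 10)}")
```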

Let's say you want to know the hundred-year maximum flood-height of a river near your house. You don't want to wait 100 years, so you measure the maximum flood height every year over five years, say, and use statistics. Let's say you measure 8 feet, 6 feet, 3 feet (a drought year), 5 feet, and 7 feet.

The "normal" approach (pardon the pun) is to take a quick look at the data, and see that it is sort-of normal (many people don't bother). One now takes the average, calculated here as (8+6+3+5+7)/5 = 5.8 feet. About half the time the flood waters should be higher than this (a good researcher would check this; many do not). You now calculate the standard deviation for your data, a measure of the width of the ghost, generally using a spreadsheet. The formula for the standard deviation of a sample is s = √{[(8-5.8)² + (6-5.8)² + (3-5.8)² + (5-5.8)² + (7-5.8)²]/4} = 1.92. The use of 4 here in the denominator instead of 5 is called Bessel's correction; it reflects the fact that a standard deviation is meaningless if there is only one data point.

For normal data, the one hundred year maximum height of the river (the 1% maximum) is the average height plus about 2.33 times the deviation; in this case, 5.8 + 2.33 x 1.92 ≈ 10.3 feet. If your house is any higher than this you should expect few troubles in a century. But is this confidence warranted? You could build on stilts or further from the river, but you don't want to go too far. How far is too far?
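As a check, here is the same "normal" calculation done in a few lines of Python rather than a spreadsheet. It assumes numpy and scipy are available; the 2.33 is just the normal distribution's one-percent point, not a magic number.

```python
import numpy as np
from scipy.stats import norm

heights = [8, 6, 3, 5, 7]            # yearly maximum flood heights, feet
mean = np.mean(heights)              # 5.8 ft
s = np.std(heights, ddof=1)          # 1.92 ft; ddof=1 is Bessel's correction
z_1pct = norm.ppf(0.99)              # about 2.33 standard deviations
print(f"mean = {mean:.1f} ft, s = {s:.2f} ft")
print(f"100-year (1%) flood estimate ≈ {mean + z_1pct * s:.1f} ft")
```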

So let's do this better, and with less math, through the use of probability paper. As with any good science, we begin with data, not with assumptions like normality. Arrange the river height data in a list from highest to lowest (or lowest to highest), and plot the values in this order on your probability paper as shown below. That is, on paper where likelihoods from 0.01% to 99.99% are arranged along the bottom (the x axis), and your other numbers, in this case the river heights, are the y values listed at the left. Graph paper of this sort is sold in university book stores; you can also get jpeg versions on line, but they don't look as nice.


Probability plot of the maximum river height over 5 years. If the data suggests a straight line, as here, the data is reasonably normal. Extrapolating to the 1% line suggests the 100-year flood height would be 9.5 to 10.2 feet, and that it is 99.99% unlikely to reach 11 feet. That's once in 10,000 years, other things being equal.

For the x-axis values of the 5 data points above, I've taken the likelihood to be the middle of its percentile. Since there are 5 data points, each point is taken to represent its own 20% band; the middles appear at 10%, 30%, 50%, etc. I've plotted the highest value (8 feet) at the 10% point on the x axis, that being the middle of the upper 20%. I then plotted the second highest (7 feet) at 30%, the middle of the second 20%; the third, 6 feet, at 50%; the fourth at 70%; and the drought-year maximum (3 feet) at 90%. When done, I judge whether a reasonably straight line would describe the data. In this case, a line through the data looks reasonably straight, suggesting a fairly normal distribution of river heights. I notice that, if anything, the heights drop off at the left, suggesting that really high river levels are less likely than normal. The points will also have to drop off at the right, since a negative river height is impossible. Thus my river heights describe a version of the ghost distribution in the cartoon above. This is a welcome finding since it suggests that really high flood levels are unlikely. If the data were non-normal, curving the other way, we'd want to build our house higher than a normal distribution would suggest.

You can now find the 100-year flood height from the graph above without going through any of the math. Just draw your best line through the data, and look where it crosses the 1% value on your graph (that's two major lines from the left in the graph above; you may have to expand your view to see the little 1% at top). My extrapolation suggests the hundred-year flood maximum will be somewhere between about 9.5 feet and 10.2 feet, depending on how I choose my line. This prediction is a little lower than we calculated above, and was done graphically, without the need for a spreadsheet or math. What's more, our prediction is more accurate, since we were in a position to evaluate the normality of the data and thus able to fit the extrapolation line accordingly. There are several ways to handle extreme curvature in the line, but all involve fitting the curve some way. Most weather data is curved, e.g. normal against a fractal, I think, and this affects your predictions. You might expect to have an ice age in 10,000 years.
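If you'd rather let the computer play the part of the probability paper, here is a minimal sketch of the same procedure (numpy and scipy assumed available). It uses the same mid-percentile plotting positions as above, replaces the paper's stretched x axis with the normal quantile (z) of each position, and lets a least-squares fit stand in for the "best line through the data" — though a practiced eye can still beat it when the points curve.

```python
import numpy as np
from scipy.stats import norm

heights = sorted([8, 6, 3, 5, 7], reverse=True)   # yearly maxima, highest first (feet)
positions = [0.10, 0.30, 0.50, 0.70, 0.90]         # middle of each 20% band (chance of exceeding)
z = norm.ppf(positions)                             # probability-paper x axis, as z-scores

slope, intercept = np.polyfit(z, heights, 1)        # the "best line through the data"
flood_100yr = slope * norm.ppf(0.01) + intercept    # read the line at the 1% exceedance point
print(f"100-year flood estimate: {flood_100yr:.1f} ft")
# A straight least-squares line ignores the drop-off at the left of the plot,
# so it reads a bit high; bending the line by eye, as in the text, gives less.
```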

The standard deviation we calculated above is related to a quality standard called six sigma, something you may have heard of. If we had a lot of parts we were making, for example, we might expect to find that the size deviation varies from a target according to a normal distribution. We call this variation σ, the Greek version of s. If your production is such that the upper spec is 2.33 standard deviations from the norm, 99% of your product will be within spec; good, but not great. If you've got six sigmas, there is one-in-a-billion confidence of meeting the spec, other things being equal. Some companies (like Starbucks) aim for this low variation, a six sigma confidence of being within spec. That is, they aim for total product uniformity in the belief that uniformity is the same as quality. There are several problems with this thinking, in my opinion. The average is rarely an optimum, and you want to have a rational theory for acceptable variation boundaries. Still, uniformity is a popular metric in quality management, and companies that use it are better off than those that do nothing. At REB Research, we like to employ the quality methods of W. Edwards Deming; we assume non-normality and aim for an optimum (that's subject matter for a further essay). If you want help with statistics, or a quality engineering project, contact us.
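For anyone who wants to check those sigma figures, the snippet below (scipy assumed available) computes the in-spec fraction for a one-sided spec limit set k standard deviations above the target, under the same "other things being equal" normal assumption.

```python
from scipy.stats import norm

for k in (2.33, 3, 6):
    within = norm.cdf(k)    # fraction below a spec limit k sigma above target
    print(f"spec at {k} sigma: {within:.9f} within spec, "
          f"about 1 in {1 / (1 - within):,.0f} out of spec")
```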

I've also been meaning to write about the phrase "other things being equal," ceteris paribus in Latin. All this math only makes sense so long as the general parameters don't change much. Your home won't flood so long as they don't build a new mall upriver from you with runoff into the river, and so long as the dam doesn't break. If these are concerns (and they should be) you still need to use statistics and probability paper, but you will now have to use other data, like the likelihood of malls going up, or of dams breaking. When you input this other data, you will find the probability curve is not normal, but typically has a long tail (when the dam breaks, the water goes up by a lot). That's outside of standard statistical analysis, but it's why those hundred-year floods come a lot more often than once in 100 years. I've noticed that, even at Starbucks, more than 1 in 1,000,000,000 cups of coffee come out wrong. Even in analyzing a common snafu like this, you still use probability paper, though. It may be "situation normal," but the distribution curve it describes has an abnormal tail.
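To make the long-tail point concrete, here is a toy simulation (the numbers are invented for illustration, not real flood data): most years the peak is roughly normal, but there is a 1-in-100 chance each year of a "dam-break" event that adds a large jump. That rare event pushes the simulated 100-year flood level well above the roughly 10-foot figure the normal fit gave.

```python
import random

def yearly_peak():
    normal_year = random.gauss(5.8, 1.9)              # ordinary variation, feet
    dam_break = 15 if random.random() < 0.01 else 0   # rare disaster adds ~15 feet
    return normal_year + dam_break

peaks = sorted(yearly_peak() for _ in range(100_000))
flood_100yr = peaks[int(0.99 * len(peaks))]           # the level exceeded about 1% of the time
print(f"simulated 100-year flood level: {flood_100yr:.1f} ft")
```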

by Dr. Robert E. Buxbaum, November 6, 2013. This is my second statistics post/joke, by the way. The first one dealt with bombs on airplanes; take a look.