Category Archives: Statistics

Near-Poisson statistics: how many police – firemen for a small city?

In a previous post, I dealt with the nearly-normal statistics of common things, like river crests, and explained why 100 year floods come more often than once every hundred years. As is not uncommon, the data was sort-of like a normal distribution, but deviated at the tail (the fantastic tail of the abnormal distribution). But now I’d like to present my take on a sort of statistics that (I think) should be used for the common problem of uncommon events: car crashes, fires, epidemics, wars…

Normally the mathematics used for these processes is Poisson statistics, and occasionally exponential statistics. I think these approaches lead to incorrect conclusions when applied to real-world cases of interest, e.g. choosing the size of a police force or fire department of a small town that rarely sees any crime or fire. This is relevant to Oak Park Michigan (where I live). I’ll show you how it’s treated by Poisson, and will then suggest a simpler way that’s more relevant.

First, consider an idealized version of Oak Park, Michigan (a semi-true version until the 1980s): the town had a small police department and a small fire department that saw only occasional crimes or fires, all of which required only 2 or 4 people respectively. Let's imagine that the likelihood of having one small fire at a given time is x = 5%, and that of having a violent crime is y = 5% (it was 6% in 2011). A police department will need to have 2 policemen on call at all times, but will want 4 on the 0.25% chance that there are two simultaneous crimes (.05 x .05 = .0025); the fire department will want 8 souls on call at all times for the same reason. Either department will use the other 95% of their officers' time dealing with training, paperwork, investigations of less-immediate cases, care of equipment, and visiting schools, but this number on call is needed for immediate response. As there are 8760 hours per year and police and fire workers only work about 2000 hours each, you'll need at least 4.4 times this many officers. We'll add some more for administration and sick-day relief, and predict a total staff of 20 police and 40 firemen. This is, more or less, what it was in the 1980s.

If each fire or violent crime took 3 hours (1/8 of a day), you'd find the entire on-call staff busy about 7.3 times per year (8 x 365 x .0025 = 7.3), or a bit more, since there is likely a seasonal effect, and since fires and violent crimes don't fall into neat time slots. Having 3 fires or violent crimes simultaneously was very rare — and for those rare times, you could call on nearby communities, or do triage.
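A quick sanity check of this arithmetic, as a minimal Python sketch (the 5% likelihoods, 3-hour events, and 2000 work-hours per year are the assumptions from the paragraphs above):

```python
# Rough staffing arithmetic for the idealized small town (assumed inputs).
p_fire = 0.05            # chance a small fire is in progress at any moment
p_crime = 0.05           # chance a violent crime is in progress at any moment
officers_per_crime = 2
firemen_per_fire = 4

# Chance of two simultaneous events of the same kind
p_two_crimes = p_crime ** 2        # 0.0025, i.e. 0.25%
p_two_fires = p_fire ** 2

# How often the full on-call staff is busy: 8 three-hour periods per day
periods_per_year = 8 * 365
print(periods_per_year * p_two_crimes)     # ~7.3 times per year

# Staffing multiplier: positions must be covered 8760 h/yr; one worker gives ~2000 h/yr
coverage_factor = 8760 / 2000              # ~4.4
police_on_call = 2 * officers_per_crime    # 4, to cover two simultaneous crimes
fire_on_call = 2 * firemen_per_fire        # 8, to cover two simultaneous fires
print(round(police_on_call * coverage_factor), round(fire_on_call * coverage_factor))
# ~18 police and ~35 firemen, before adding administration and sick-day relief
```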

In response to austerity (towns always overspend in the good times, and come up short later), Oak Park realized it could use fewer employees if it combined the police and fire departments into an entity renamed "Public Safety." With 45-55 employees assigned to combined police/fire duty they'd still be able to handle the few violent crimes and fires. The sum of these events occurs 10% of the time, and the sort of statistics above suggests that about 90% of the time there will be neither a fire nor a violent crime, and about 10% of the time there will be one or more (there is a 5% chance for each, less the small chance that the two happen simultaneously). At least two events will occur about 0.9% of the time (2 fires, 2 crimes, or one of each), and 3 or more events about 0.09% of the time, or two or three times per year. The combined force allowed fewer responders since only rarely did 4 events happen simultaneously, and some of those were 4 crimes, or 3 crimes and a fire — events that needed fewer responders. The only real worry was 3 simultaneous fires, something that should happen every 3 years or so, an acceptable risk at the time.

Before getting to what caused this model of police and fire service to break down as Oak Park got bigger, I should explain Poisson statistics, exponential statistics, and power-law/fractal statistics. The only type of statistics taught for dealing with crime like this is Poisson statistics, a type that works well when the events happen so suddenly and pass so briefly that all we care about is how often we will see multiples of them in a period of time. The Poisson distribution formula is P = r^k e^(−r)/k!, where P is the probability of having some number of events, r is the total number of events divided by the total number of periods, and k is the number of events we are interested in.

Using the data above for a period-time of 3 hours, we can say that r = 0.1, and the likelihood of zero, one, or two events beginning in a given 3-hour period is 90.5%, 9.05%, and 0.45% respectively. These numbers are reasonable in terms of when events happen, but they are irrelevant to the problem anyone is really interested in: what resources are needed to come to the aid of the victims. That's the problem with Poisson statistics: it treats something that no one cares about (when the events start), and under-predicts the important things, like how often you'll have multiple events in progress. For 4 events, Poisson statistics predicts it happens only 0.00038% of the time — true enough, but irrelevant in terms of how often multiple teams are needed out on the job. We need four teams whether the 4 events began in a single 3-hour period or in close succession in two adjoining periods. The events take time to deal with, and the times overlap.
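For the curious, here is a minimal Python check of those Poisson numbers (r = 0.1 events per 3-hour period, as above):

```python
from math import exp, factorial

def poisson(k, r):
    """Probability of exactly k events starting in a period when the average is r per period."""
    return r**k * exp(-r) / factorial(k)

r = 0.1   # average events per 3-hour period, from the text
for k in range(5):
    print(k, f"{poisson(k, r):.5%}")
# k=0: ~90.5%, k=1: ~9.05%, k=2: ~0.45%, k=4: ~0.00038%
```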

The way I dealt with these events, above, suggests a power-law approach. In this case, each likelihood was 1/10 the previous one, and the probability P = 0.9 × 10^(−k). This is called power-law statistics. I've never seen it taught, though it appears very briefly in Wikipedia. Those who like math can re-write the above relation as log10 P = log10 0.9 − k.

One can generalize the above so that, for example, the decay rate is 1/8 and not 1/10 (that is, the chance of having k+1 events is 1/8 that of having k events). In this case, we could say that P = 7/8 × 8^(−k), or more generally that log10 P = log10 A − kβ. Here k is the number of teams required at any time, β is a free variable, and A = 1 − 10^(−β) because the sum of all probabilities has to equal 100%.
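A small sketch of this normalization, assuming the discrete form P = A × 10^(−βk) just described; note that A = 1 − 10^(−β) is exactly what makes the probabilities sum to 100%:

```python
# Discrete power-law (geometric) statistics: P(k) = A * 10**(-beta*k), k = 0, 1, 2, ...
from math import log10

def power_law_p(k, beta):
    A = 1 - 10**(-beta)            # normalization so the probabilities sum to 1
    return A * 10**(-beta * k)

beta = 1.0                          # decay of 1/10 per extra event, as in the text
print([round(power_law_p(k, beta), 4) for k in range(4)])   # [0.9, 0.09, 0.009, 0.0009]
print(sum(power_law_p(k, beta) for k in range(200)))        # ~1.0, i.e. 100%

beta8 = log10(8)                    # decay of 1/8 per extra event
print(round(power_law_p(0, beta8), 4))                      # 0.875 = 7/8
```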

In college math, when behaviors like this appear, they are often translated into differential form to create "exponential statistics." One begins by saying ∂P/∂k = −λP, where λ is a free-floating decay constant. Everything looks fine until we integrate and set the total to 100%: we find P = λ e^(−kλ) for k ≥ 0, and the pre-exponential always comes out wrong. Choosing λ to match the 1/10-per-event decay above gives λ = ln 10 ≈ 2.3, so the nominal likelihood of zero events comes out to about 230%, not 90%. Exponential statistics has the further property (advantage or disadvantage) that we find a non-zero possibility of having 1/100 of a fire, or 3.14159 crimes, at a given time. We assign excessive likelihoods to fractional events and end up predicting artificially low likelihoods for the discrete events we are interested in, all from using a calculus that assumes continuity in a world where there is none. Discrete math is better than calculus here.

I now wish to generalize the power-law statistics to something similar but more robust. I'll call my development fractal statistics (there's already a section called fractal statistics on Wikipedia, but it's really power-law statistics; mine will be different). Fractals were championed by Benoit B. Mandelbrot (whose middle initial, according to the old joke, stood for Benoit B. Mandelbrot). Many random processes look fractal, e.g. the stock market. Before going there, I'd like to recall that the motivation for all this is figuring out how many people to hire for a police/fire force; we are not interested in any other irrelevant factoid, like how many calls of a certain type come in during a period of time.

To choose the size of the force, let's estimate how many times per year some number of people are needed simultaneously, now that the city has bigger buildings and is seeing a few larger fires and crimes. Let's assume that the larger fires and crimes occur only .05% of the time but might require 15 officers or more. Being prepared for even one event of this size will require expanding the force to about 80 men, 50% more than we have today, but we find that this expansion isn't enough to cover the 0.0025% of the time when we will have two such major events simultaneously. That would require a 160-man squad, and we still could not deal with two major fires and a simultaneous assault, or with a strike, or a lot of people who take sick at the same time.

To treat this situation mathematically, we'll say that the number of times per year when a certain number of people are needed relates to that number of people through a simple modification of the power-law statistics. Thus: log10 N = A − βθ, where A and β are constants, N is the number of times per year that some number of officers is needed, and θ is the number of officers needed. To solve for the constants, plot the experimental values on a semi-log scale, and find the best straight line: −β is the slope and A is the intercept. If the line is really straight, you are now done, and I would say that the fractal order is 1. But from the above discussion, I don't expect this line to be straight. Rather, I expect it to curve upward at high θ: there will be a tail where you require a higher number of officers. One might be tempted to modify the above by adding a higher-order term in θ, but this will cause problems at very high θ. Thus, I'd suggest a fractal fix.

My fractal modification of the equation above is the following: log10 N = A − βθ^w, where A and β are similar to the power-law coefficients and w is the fractal order of the decay, a coefficient that I expect to be slightly less than 1. To solve for the coefficients, pick a value of w, and find the best fits for A and β as before. The right value of w is the one that results in the straightest line fit. The equation above does not look quite like anything I've seen, or like the one shown in Wikipedia under the heading of fractal statistics, but I believe it to be correct — or at least useful.
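Here is a minimal sketch of that fitting recipe in Python (numpy's polyfit; the θ and N values are invented for illustration, not real Oak Park data): scan trial values of w, regress log10 N against θ^w for each, and keep the w that gives the straightest line.

```python
import numpy as np

# Hypothetical record: number of times per year that theta officers were needed at once.
theta = np.array([2, 4, 6, 8, 10, 15])        # officers needed simultaneously
N = np.array([120, 30, 9, 3.5, 1.5, 0.4])     # times per year (illustrative only)

best = None
for w in np.arange(0.5, 1.01, 0.01):          # scan the fractal order w
    x = theta**w
    y = np.log10(N)
    slope, intercept = np.polyfit(x, y, 1)    # fit log10(N) = A - beta * theta**w
    resid = y - (slope * x + intercept)
    sse = float(np.sum(resid**2))             # "straightness" of the semi-log plot
    if best is None or sse < best[0]:
        best = (sse, w, -slope, intercept)

sse, w, beta, A = best
print(f"w = {w:.2f}, beta = {beta:.3f}, A = {A:.2f}")
```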

To treat this politically is more difficult than treating it mathematically. I suspect we will have to combine our police and fire departments with those of surrounding towns, and this will likely require our city to revert to a pure police department and a pure fire department: we can't expect other cities' specialists to work with our generalists particularly well. It may also mean payments to other cities, plus (perhaps) standardizing salaries and staffing. This should save money for Oak Park and should provide better service, as specialists tend to do their jobs better than generalists (they also tend to be safer). But the change goes against the desire (need) of our local politicians to hand out favors of money and jobs to their friends. Keeping a non-specialized force costs lives as well as money, but that doesn't mean we're likely to change soon.

Robert E. Buxbaum, December 6, 2013. My two previous posts are on how to climb a ladder safely, and on mustaches in WWII: mustache men do things, and those with similar mustache styles get along best.

The 2013 hurricane drought

News about the bad weather that didn’t happen: there were no major hurricanes in 2013. That is, there was not one storm in the Atlantic Ocean, the Caribbean Sea, or the Gulf of Mexico with a maximum wind speed over 110 mph. None. As I write this, we are near the end of the hurricane season (it officially ends Nov. 30), and we have seen nothing like what we saw in 2012; compare the top and bottom charts below. Barring a very late, very major storm, this looks like it will go down as the most uneventful season in at least 2 decades. Our monitoring equipment has improved over the years, but even with improved detection, we’ve seen nothing major. The last time we saw this lack was 1994 — and before that 1986, 1972, and 1968.


Hurricanes 2012-2013. This year there were only two hurricanes, and both were category 1. The last time we had this few was 1994. By comparison, in 2012 we saw 5 category 1 hurricanes, 3 category 2s, and 2 category 3s, including Sandy, the most destructive hurricane to hit New York City since 1938.

In the Pacific, major storms are called typhoons, and this year has been fairly typical: 13 typhoons, 5 of them super-typhoons, the same as in 2012. Weather tends to be chaotic, but it's nice to have a year without major hurricane damage or death.


In the news, a lack of major storms led to a lack of destruction of the boats, beaches, and stately homes of the North Carolina shore.

The reason you have not heard of this before is that it’s hard to write a story about events that didn’t happen. Good news is as important as bad, and 2013 had been predicted to be one of the worst seasons on record, but then it didn’t happen and there was nothing to write about. Global warming is supposed to increase hurricane activity, but global warming has taken a 16 year rest. You didn’t hear about the lack of global warming for the same reason you didn’t hear about the lack of storms.

Here's why hurricanes form in fall and spin so fast, plus how they pick up stuff (an explanation from Einstein). In other good weather news, the ozone hole is smaller, and arctic ice is growing (I suggest we build a northwest passage). It's hard to write about the lack of bad news; still, good science requires an open mind to the data, as it is, or as it isn't. Here is a simple way to do abnormal statistics, plus why 100 year storms come more often than once every 100 years.

Robert E. Buxbaum. November 23, 2013.

Ab Normal Statistics and joke


The normal distribution of observation data looks sort of like a ghost. A distribution that really looks like a ghost is scary.

It's funny because …. the normal distribution curve looks sort-of like a ghost. It's also funny because it would be possible to imagine data being distributed like the ghost, and most people would be totally clueless as to how to deal with data like that — abnormal statistics. They'd find it scary and would likely try to ignore the problem. When faced with a statistics problem, most people just hope that the data is normal; they then use standard mathematical methods with a calculator or simulation package and hope for the best.

Take the following example: you're interested in buying a house near a river. You'd like to analyze river flood data to know your risks. How high will the river rise in 100 years, or 1,000? Or perhaps you would like to analyze wind data to know how strong to make a sculpture so it does not blow down. Your first thought is to use the normal distribution math in your college statistics book. This looks awfully daunting (it doesn't have to be) and may be wrong, but it's all you've got.

The normal distribution graph is considered normal, in part, because it's fairly common to find that measured data deviates from the average in this way. Also, this distribution can be derived from the mathematics of an idealized view of the world, where any variation derives from multiple small errors around a common norm, and not from some single, giant issue. It's not clear this is a realistic assumption in most cases, but it is comforting. I'll show you how to do the common math as it's normally done, and then how to do it better and quicker with no math at all, and without those assumptions.

Let's say you want to know the hundred-year maximum flood-height of a river near your house. You don't want to wait 100 years, so you measure the maximum flood height every year over five years, say, and use statistics. Let's say you measure 8 feet, 6 feet, 3 feet (a drought year), 5 feet, and 7 feet.

The "normal" approach (pardon the pun) is to take a quick look at the data, and see that it is sort-of normal (many people don't bother). One now takes the average, calculated here as (8+6+3+5+7)/5 = 5.8 feet. About half the time the flood waters should be higher than this (a good researcher would check this; many do not). You now calculate the standard deviation for your data, a measure of the width of the ghost, generally using a spreadsheet. The formula for the standard deviation of a sample is s = √{[(8-5.8)² + (6-5.8)² + (3-5.8)² + (5-5.8)² + (7-5.8)²]/4} = 1.92. The use of 4 in the denominator instead of 5 is called Bessel's correction; it reflects the fact that a standard deviation is meaningless if there is only one data point.

For normal data, the one-hundred-year maximum height of the river (the 1% maximum) is the average height plus about 2.33 times the standard deviation; in this case, 5.8 + 2.33 x 1.92 = 10.3 feet. If your house is any higher than this you should expect few troubles in a century. But is this confidence warranted? You could build on stilts or further from the river, but you don't want to go too far. How far is too far?
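If you'd rather let a computer do the "normal" arithmetic, here's the same calculation as a short Python sketch (the five flood heights are the ones above):

```python
from statistics import mean, stdev   # stdev uses the n-1 (Bessel) correction

heights = [8, 6, 3, 5, 7]            # maximum flood height each year, feet
avg = mean(heights)                  # 5.8 ft
s = stdev(heights)                   # ~1.92 ft
flood_100yr = avg + 2.33 * s         # ~10.3 ft: the 1% (hundred-year) level, if the data are normal
print(avg, round(s, 2), round(flood_100yr, 1))
```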

So let's do this better, with less math, through the use of probability paper. As with any good science, we begin with data, not assumptions such as normality. Arrange the river-height data in a list from highest to lowest (or lowest to highest), and plot the values in this order on your probability paper as shown below. That is, paper where likelihoods from .01% to 99.99% are arranged along the bottom (the x axis), and your other numbers, in this case the river heights, are the y values listed at the left. Graph paper of this sort is sold in university book stores; you can also get jpeg versions online, but they don't look as nice.


Probability plot of the maximum river height over 5 years. If the data fall on a straight line, as here, the data are reasonably normal. Extrapolating to 99% suggests the 100-year flood height would be 9.5 to 10.2 feet, and that it is 99.99% unlikely to reach 11 feet. That's once in 10,000 years, other things being equal.

For the x-axis values of the 5 data points above, I've taken the likelihood to be the middle of its percentile band. Since there are 5 data points, each point is taken to represent its own 20 percentiles; the middles appear at 10%, 30%, 50%, etc. I've plotted the highest value (8 feet) at the 10% point on the x axis, that being the middle of the upper 20%. I then plotted the second highest (7 feet) at 30%, the middle of the second 20%; the third, 6 feet, at 50%; the fourth at 70%; and the drought-year maximum (3 feet) at 90%. When done, I judge whether a reasonably straight line would describe the data. In this case, a line through the data looks reasonably straight, suggesting a fairly normal distribution of river heights. I notice that, if anything, the heights drop off at the left, suggesting that really high river levels are less likely than a normal distribution would predict. The points will also have to drop off at the right, since a negative river height is impossible. Thus my river heights describe a version of the ghost distribution in the cartoon above. This is a welcome finding since it suggests that really high flood levels are unlikely. If the data were non-normal, curving the other way, we'd want to build our house higher than a normal distribution would suggest.
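If you don't have probability paper handy, the same construction can be sketched in Python, assuming scipy and matplotlib are available; the x positions are the mid-percentile plotting points described above, converted to normal quantiles so that a straight line on the plot means normal data:

```python
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

heights = sorted([8, 6, 3, 5, 7], reverse=True)   # highest to lowest
n = len(heights)
exceed = (np.arange(n) + 0.5) / n                 # 10%, 30%, 50%, 70%, 90%: middle of each band
z = norm.ppf(exceed)                              # probability-paper axis is really the normal quantile

# A straight line on probability paper corresponds to a straight line of height vs z.
# This is a least-squares line; fitting by eye and down-weighting the tail, as in the
# text, gives a slightly lower 100-year estimate.
slope, intercept = np.polyfit(z, heights, 1)
print("100-year flood estimate:", round(slope * norm.ppf(0.01) + intercept, 1), "feet")

plt.plot(z, heights, "o")
plt.plot(z, slope * z + intercept)
plt.xlabel("normal quantile (probability-paper axis)")
plt.ylabel("maximum river height, feet")
plt.show()
```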

You can now find the 100-year flood height from the graph above without going through any of the math. Just draw your best line through the data, and look where it crosses the 1% value on your graph (that's two major lines from the left in the graph above — you may have to expand your view to see the little 1% at top). My extrapolation suggests the hundred-year flood maximum will be somewhere between about 9.5 feet and 10.2 feet, depending on how I choose my line. This prediction is a little lower than we calculated above, and was done graphically, without the need for a spreadsheet or math. What's more, our prediction is more accurate, since we were in a position to evaluate the normality of the data and thus able to fit the extrapolation line accordingly. There are several ways to handle extreme curvature in the line, but all involve fitting the curve some way. Most weather data is curved (normal crossed with a fractal, I think), and this affects your predictions. You might expect to have an ice age in 10,000 years.

The standard deviation we calculated above is related to a quality standard called six sigma — something you may have heard of. If we had a lot of parts we were making, for example, we might expect to find that the size deviation varies from a target according to a normal distribution. We call this variation σ, the Greek version of s. If your production is such that the upper spec is about 2.33 standard deviations above the norm, 99% of your product will be within spec; good, but not great. If you've got six sigmas, there is a one-in-a-billion confidence of meeting the spec, other things being equal. Some companies (like Starbucks) aim for this low variation, a six-sigma confidence of being within spec. That is, they aim for total product uniformity in the belief that uniformity is the same as quality. There are several problems with this thinking, in my opinion: the average is rarely an optimum, and you want to have a rational theory for acceptable variation boundaries. Still, uniformity is a popular metric in quality management, and companies that use it are better off than those that do nothing. At REB Research, we like to employ the quality methods of W. Edwards Deming; we assume non-normality and aim for an optimum (that's subject matter for a further essay). If you want help with statistics, or a quality engineering project, contact us.
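Those spec-confidence numbers are easy to check with the normal tail function; a quick sketch using only the standard math library:

```python
from math import erfc, sqrt

def frac_within_upper_spec(z):
    """Fraction of a normal population below an upper spec placed z standard deviations above the mean."""
    return 1 - 0.5 * erfc(z / sqrt(2))

print(f"{frac_within_upper_spec(2.33):.3%}")    # ~99.0% within spec
print(f"{1 - frac_within_upper_spec(6):.2e}")   # ~1e-9: the one-in-a-billion six-sigma tail
```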

I've also been meaning to write about the phrase "other things being equal," ceteris paribus in Latin. All this math only makes sense so long as the general parameters don't change much. Your home won't flood so long as they don't build a new mall up-river from you that sends runoff into the river, and so long as the dam doesn't break. If these are concerns (and they should be) you still need to use statistics and probability paper, but you will now have to use other data, like the likelihood of malls going up, or of dams breaking. When you input this other data, you will find the probability curve is not normal, but typically has a long tail (when the dam breaks, the water goes up by a lot). That's outside of standard statistical analysis, but it's why those hundred-year floods come a lot more often than once in 100 years. I've noticed that, even at Starbucks, more than 1 in 1,000,000,000 cups of coffee comes out wrong. Even in analyzing a common snafu like this, you still use probability paper, though. It may be "situation normal," but the distribution curve it describes has an abnormal tail.

by Dr. Robert E. Buxbaum, November 6, 2013. This is my second statistics post/ joke, by the way. The first one dealt with bombs on airplanes — well, take a look.

Murder rate in Finland, Japan higher than in US

The murder rate in Finland and Japan is higher than in the US if suicide is considered a type of murder. In the figure below, I've plotted total murder rates (homicide plus suicide) for several developed-world countries. The homicide component is in blue, with the suicide rate above it, in green. In terms of this total, the US is seen to be about average among the developed countries. Mexico has the highest homicide rate of those shown, Japan has the highest suicide rate, and Russia has the highest total murder rate shown (homicide + suicide): nearly double that of the US and Canada. In Russia and Japan, some 0.02% of the population commit suicide every year. The Scandinavian countries are quite similar to the US; Japan and Mexico are far worse. Italy, Greece and the UK are better than the US, both in terms of low suicide rate and low homicide rate.

Homicide and suicide rates for selected countries, 2005. Source: Wikipedia.

In the US, pundits like Piers Morgan like to use our high murder rate as an indicator of the ills of American society: loose gun laws are to blame, they say, along with the lack of a social-welfare safety net, a lack of support for the arts, and a lack of education and civility in general. Japan, Canada, and Scandinavia are presented as near-idylls in these regards. When murder is considered to include suicide, though, the murder-rate difference disappears. Add to this that violent crime rates are higher in Europe, Canada, and the UK, suggesting that clean streets and education do not deter crime.

The interesting thing, though, is suicide, and what it suggests about happiness. According to my graphic, the happiest, safest countries appear to be Italy and Greece. Part of this is likely weather: people commit suicide more in cold countries. But another part may be that some people (malcontents?) are better served by dirty, noisy cafés and pubs where people meet and complain, and are not so well served by clean streets and civility. It's bad enough to be a depressed outsider, but it's really miserable if everything around you is clean, and everyone is polite but busy.

Yet another thought about the lower suicide rates in the US and Mexico, is that some of the homicide in these countries is really suicide by proxy. In the US and Mexico depressed people (particularly men) can go off to war or join gangs. They still die, but they die more heroically (they think) by homicide. They volunteer for dangerous army missions or to attack a rival drug-lord outside a bar. Either they succeed in killing someone else, or they’re shot dead. If you’re really suicidal and can’t join the army, you could move to Detroit; the average house sold for $7100 last year (it’s higher now, I think), and the homicide rate was over 56 per 100,000. As bad as that sounds, it’s half the murder rate of Greenland, assuming you take suicide to be murder.

R.E. Buxbaum, Sept 14, 2013

Why random experimental design is better

In a previous post I claimed that, to do good research, you want to arrange experiments so there is no pre-hypothesis of how the results will turn out. As the post was long, I said nothing direct about how such experiments should be organized, but only alluded to my preference: experiments should be organized at randomly chosen conditions within the area of interest. The alternative, shown below, is that experiments should be done at the cardinal points in the space, or at corner extremes: the Wilson Box and Taguchi designs of experiments (DoE), respectively. Doing experiments at these points implies a sort of expectation of the outcome, generally that results will be linearly and orthogonally related to causes; in such cases, the extreme values are the most telling. Sorry to say, this usually isn't how experimental data fall out.

First experimental test points according to a Wilson Box, a Taguchi, and a random experimental design. The Wilson box and Taguchi are OK choices if you know or suspect that there are no significant non-linear interactions, and where experiments can be done at these extreme points. Random is the way nature works; and I suspect that’s best — it’s certainly easiest.

The first test-points for experiments according to the Wilson Box method and Taguchi method of experimental designs are shown on the left and center of the figure above, along with a randomly chosen set of experimental conditions on the right. Taguchi experiments are the most popular choice nowadays, especially in Japan, but as Taguchi himself points out, this approach works best if there are “few interactions between variables, and if only a few variables contribute significantly.” Wilson Box experimental choices help if there is a parabolic effect from at least one parameter, but are fairly unsuited to cases with strong cross-interactions.

Perhaps the main problem with doing experiments at extreme or cardinal points is that these experiments are usually harder than those at random points, and that the results from these difficult tests generally tell you nothing you didn't know or suspect from the start. The minimum concentration is usually zero, and the minimum temperature is usually one where reactions are too slow to matter. When you test at the minimum-minimum point, you expect to find nothing, and generally that's what you find. In the data sets shown above, it would not be uncommon for the two minimum W-B data points, and the 3 minimum Taguchi data points, to show no measurable result at all.

Randomly selected experimental conditions are the experimental equivalent of Monte Carlo simulation, and are the method evolution uses. Set out the space of possible compositions, morphologies and test conditions as with the other methods, and perhaps plot them on graph paper. Now, toss darts at the paper to pick a few compositions and sets of conditions to test, and do a few experiments. Because nature is rarely linear, you are likely to find better results and more interesting phenomena than at any of the extremes. After the first few experiments, when you think you understand how things work, you can pick experimental points that target an optimum extreme point, or that visit a more interesting or representative survey of the possibilities. In any case, you'll quickly get a sense of how things work, and how successful the experimental program will be. If nothing works at all, you may want to cancel the program early; if things work really well you'll want to expand it. With random experimental points you do fewer worthless experiments, and you can easily increase or decrease the number of experiments in the program as funding and time allow.
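As a minimal sketch of the dart-throwing (Monte Carlo) selection, assuming a simple two-variable design space of temperature and concentration (the ranges here are invented for illustration):

```python
import random

random.seed(0)                        # reproducible "dart throws"

# Hypothetical design space: temperature and concentration ranges of interest.
temp_range = (20.0, 200.0)            # degrees C
conc_range = (0.0, 1.0)               # mole fraction

def random_design(n_points):
    """Pick n experimental conditions uniformly at random within the space of interest."""
    return [(random.uniform(*temp_range), random.uniform(*conc_range))
            for _ in range(n_points)]

for temp, conc in random_design(8):
    print(f"run at {temp:5.1f} C, concentration {conc:.2f}")
```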

Consider the simple case of choosing a composition for gunpowder. The composition itself involves only 3 or 4 components, but there is also morphology to consider including the gross structure and fine structure (degree of grinding). Instead of picking experiments at the maximum compositions: 100% salt-peter, 0% salt-peter, grinding to sub-micron size, etc., as with Taguchi, a random methodology is to pick random, easily do-able conditions: 20% S and 40% salt-peter, say. These compositions will be easier to ignite, and the results are likely to be more relevant to the project goals.

The advantages of random testing get bigger the more variables and levels you need to test. Testing 9 variables at 3 levels each takes 27 Taguchi points, but only 16 or so if the experimental points are randomly chosen. To test whether the behavior is linear, you can use the results from your first 7 or 8 randomly chosen experiments, derive the vector that gives the steepest improvement in n-dimensional space (a weighted sum of all the improvement vectors), and then do another experiment that's as far along in the direction of that vector as you think reasonable. If your result at this point is better than at any point you've visited, you're well on your way to determining the conditions of optimal operation. That's a lot faster than starting with 27 hard-to-do experiments. What's more, if you don't find an optimum, congratulate yourself: you've just discovered a non-linear behavior, something that would be easy to overlook with Taguchi or Wilson Box methodologies.
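And here is a sketch of that steepest-improvement step, with invented data standing in for the first few random experiments: fit a local linear model by least squares and step along its gradient to pick the next test point.

```python
import numpy as np

# Invented results from 8 random experiments: columns are the (scaled) variable settings,
# y is the measured performance at each setting.
X = np.random.default_rng(1).uniform(0, 1, size=(8, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2] + np.random.default_rng(2).normal(0, 0.05, 8)

A = np.column_stack([np.ones(len(y)), X])       # intercept + linear terms
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares local linear model
gradient = coeffs[1:]                           # direction of steepest improvement
step = gradient / np.linalg.norm(gradient)      # unit vector to move along

best = X[np.argmax(y)]                          # best point visited so far
next_point = np.clip(best + 0.3 * step, 0, 1)   # next experiment, a bit further along
print("steepest-improvement direction:", np.round(step, 2))
print("suggested next experimental point:", np.round(next_point, 2))
```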

The basic idea is one Sherlock Holmes pointed out: "It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts" (A Scandal in Bohemia). And: "Life is infinitely stranger than anything which the mind of man could invent" (A Case of Identity).

Robert E. Buxbaum, September 11, 2013. A nice description of the Wilson Box method is presented in Perry's Handbook (6th ed). Since I had trouble finding a free, on-line description, I linked to a paper by someone using it to test ingredient choices in baked bread. Here's a link for more info about random experimental choice, from the University of Michigan, Chemical Engineering dept. Here's a joke on the misuse of statistics, and a link regarding the Taguchi Methodology. Finally, here's a pointless joke on irrational numbers, that I posted for pi-day.

Slowing Cancer with Fish and Unhealth Food

Some 25 years ago, while still a chemical engineering professor at Michigan State University, I did some statistical work for a group in the Physiology department on the relationship between diet and cancer. The research involved giving cancer to groups of rats and feeding them different diets of the same calorie intake to see which promoted or slowed the disease. It had been determined that low-calorie diets slowed cancer growth, and were good for longevity in general, while overweight rats died young (true in humans too, by the way, though there’s a limit and starvation will kill you).

The group found that fish oil was generally good for you, but they found that there were several unhealthy foods that slowed cancer growth in rats. The statistics were clouded by the fact that cancer growth rates are not normally distributed, and I was brought in to help untangle the observations.

With help from probability paper (a favorite trick of mine), I confirmed that healthy rats fared better on healthy diets, but cancerous rats did better with some unhealth food. Sick or well, all rats did best with fish oil, and all rats did pretty well with olive oil, but the cancerous rats did better with lard or palm oil (normally an unhealthy diet) and very poorly with corn oil or canola, oils that are normally healthful. The results are published in several articles in the journals "Cancer" and "Cancer Research."

Among vitamins, they found something similar (this was before I joined the group). Several anti-oxidizing vitamins, A, D and E, made things worse for cancerous rats while being good for healthy rats (and for people, in moderation). Moderation is key; too much of a good thing isn't good, and a diet with too much fish oil promotes cancer.

What seems to be happening is that the cancer cells grow at the same rate with all of the equi-caloric diets, but that there was a difference in the rate of natural cancer-cell death. More cancer cells died when the rats were fed junk-food oils than when they were fed corn oil or canola. Similarly, the reason anti-oxidizing vitamins hurt cancerous rats was that fewer cancer cells died when the rats were fed these vitamins. A working hypothesis is that the junk oils (and the fish oil) produced free radicals that did more damage to the cancer than to the rats. In healthy rats (and people), these free radicals are bad, promoting cell mutation, cell degradation, and sometimes cancer. But perhaps our bodies use these same free radicals to fight disease.

Larger amounts of vitamins A, D, and E hurt cancerous rats by removing the free radicals they normally use to fight the disease, or so our model went. Bad oils and fish oil in moderation, with calorie intake held constant, helped slow the cancer, by a presumed mechanism of adding a few more free radicals. Fish oil, it can be assumed, killed some healthy cells in the healthy rats too, but not enough to cause problems when taken in moderation. Even healthy people are often benefitted by poisons like sunlight, coffee, alcohol and radiation.

At this point, a warning is in order: don't rely on fish oil and lard as home remedies if you've got cancer. Rats are not people, and your calorie intake is not held artificially constant with no other treatments given. Get treated by a real doctor — he or she will use radiation and/or real drugs, and those will form the right amount of free radicals, targeted to the right places. Our rats were given massive amounts of cancer and had no other treatment besides diet. Excess vitamin A has been shown to be bad for humans under treatment for lung cancer, and that's perhaps because of the mechanism we imagine, or perhaps everything works by some other mechanism. However it works, a little fish in your diet is probably a good idea whether you are sick or well.

A simpler health trick that couldn't hurt most Americans is a lower-calorie diet, especially if combined with exercise. Dr. Mites, a colleague of mine in the department (now deceased at 90+), liked to say that, if exercise could be put into a pill, it would be the most prescribed drug in America. There are few things that would benefit most Americans more than (moderate) exercise. There was a sign in the physiology office, perhaps his doing: "If it's physical, it's therapy."

Anyway these are some useful things I learned as an associate professor in the physiology department at Michigan State. I ended up writing 30-35 physiology papers, e.g. on how cells crawl and cell regulation through architecture; and I met a lot of cool people. Perhaps I’ll blog more about health, biology, the body, or about non-normal statistics and probability paper. Please tell me what you’re interested in, or give me some keen insights of your own.

Dr. Robert Buxbaum is a chemical engineer who mostly works in hydrogen. I've published some 75 technical papers, including two each in Science and Nature: fancy magazines that you'd normally have to pay for, but this blog is free. August 14, 2013.

Global warming takes a 15 year rest

I have long thought that global climate change was chaotic, rather than steadily warming. Global temperatures show self-similar (fractal) variation with time and long-term cycles; they also show strange attractors, generally stable states including ice ages and El Niño events. These sudden resets of the global temperature pattern are classic symptoms of chaos. The standard models of global warming do not predict El Niño and other chaotic events, and thus are fundamentally wrong. The models assume that a steady amount of sun heat reaches the earth, while a decreasing amount leaves, held in by increasing amounts of man-produced CO2 (carbon dioxide) in the atmosphere. These models are "tweaked" to match the observed temperature to the CO2 content of the atmosphere from 1930 to about 2004. In the movie "An Inconvenient Truth" Al Gore uses these models to predict massive arctic melting leading to a 20-foot rise in sea levels by 2100. To the embarrassment of Al Gore, and the relief of everyone else, though CO2 concentrations continue to rise, global warming took a 15-year break starting shortly before the movie came out, and the sea level is more-or-less where it was, except for temporary changes during periodic El Niño cycles.


Fifteen years of global temperature variation to June 2013; 4 El Niños but no sign of a long-term change.

Hans von Storch, a German expert on global warming, told the German newspaper, der Spiegel: “We’re facing a puzzle. Recent CO2 emissions have actually risen even more steeply than we feared. As a result, according to most climate models, we should have seen temperatures rise by around 0.25 degrees Celsius (0.45 degrees Fahrenheit) over the past 10 years. That hasn’t happened. [Further], according to the models, the Mediterranean region will grow drier all year round. At the moment, however, there is actually more rain there in the fall months than there used to be. We will need to observe further developments closely in the coming years.”

Aside from the lack of warming for the last 15 years, von Storch mentions that there has been no increase in severe weather. You might find that surprising given the news reports; still it’s so. Storms are caused by temperature and humidity differences, and these have not changed. (Click here to see why tornadoes lift stuff up).

At this point, I should mention that the majority of global-warming experts do not see a problem with the 15-year pause. Global temperatures have been rising unsteadily since 1900, and even von Storch expects this trend to continue — sooner or later. I do see a problem, though, highlighted by the various chaotic changes that are left out of the models. A source of the chaos, and a fundamental problem with the models, could be how they treat the effects of water vapor. When uncondensed, water vapor acts as a very strong thermal blanket; it allows the sun's light in, but prevents the heat energy from radiating out. CO2 behaves the same way, but more weakly (there's less of it).

More water vapor enters the air as the planet warms, and this should amplify the CO2-caused runaway heating, except for one thing. Every now and again, the water vapor condenses into clouds, and then (sometimes) falls as rain or snow. Clouds and snow reflect the incoming sunlight, and this leads to global cooling. Rain and snow drive water vapor from the air, and this leads to accelerated global cooling. To the extent that clouds are chaotic, and out of man's control, the global climate should be chaotic too. So far, no one has a very good global model for cloud formation, or for rain and snowfall, but it's well accepted that these phenomena are chaotic and self-similar (each part of a cloud looks like the whole). Clouds may also exhibit "the butterfly effect," where a butterfly in China can cause a hurricane in New Jersey if it flaps at the right time.

For those wishing to examine the longer-range view, here's a thermal history of central England since 1659, Oliver Cromwell's time. At this scale, each peak is an El Niño. There is a lot of chaotic noise, but you can also notice either a 280-year periodicity (last peak around 1720), or a 100-year temperature rise beginning about 1900.

Global warming; Central England Since 1659; From http://www.climate4you.com

It is not clear that the cycle is human-caused, but my hope is that it is. My sense is that the last 100 years of global warming has been a good thing; for agriculture and trade it's far better than an ice age. If we caused it with our CO2, we could continue to use CO2 to just balance the natural tendency toward another ice age. If it's chaotic, as I suspect, such optimism is probably misplaced. It is very hard to get a chaotic system out of its behavior. The evidence: we've never moved an El Niño out of its normal period of every 3 to 7 years (expect another this year or next). If so, we should expect another ice age within the next few centuries.


Global temperatures measured from the Antarctic ice, showing 4 ice ages: stable, cyclic chaos and self-similarity.

Just as clouds cool the earth, you can cool your building too by painting the roof white. If you are interested in more weather-related posts, here’s why the sky is blue on earth, and why the sky on Mars is yellow.

Robert E. Buxbaum July 27, 2013 (mostly my business makes hydrogen generators and I consult on hydrogen).

Crime: US vs UK and Canada

The US has a lot of guns and a lot of murders compared to England, Canada, and most of Europe. This is something Piers Morgan likes to point out to Americans, who then struggle to defend the wisdom of gun ownership and the 2nd Amendment: "How do you justify 4.8 murders/year per 100,000 population when there are only 1.6/year per 100,000 in Canada, 1.2/year per 100,000 in the UK, and 1.0/year per 100,000 in Australia — countries with few murders and tough anti-gun laws?" he asks. What Piers doesn't mention is that these anti-gun countries have far higher contact-crime (assault) rates than the US; see below.


Contact crime rates for 17 industrialized countries. From the Dutch Ministry of Justice. Click here for details about the survey and a breakdown of crimes.

The differences narrow somewhat when considering most violent crimes, but we still have far fewer than Canada and the UK. Canada has 963/year per 100,000 “most violent crimes,” while the US has 420/year per 100,000. “Most violent crimes” here are counted as: “murder and non-negligent manslaughter,” “forcible rape,” “robbery,” and “aggravated assault” (FBI values). England and Wales classify crimes somewhat differently, but have about two times the US rate, 775/year per 100,000, if “most violent crimes” are defined as: “violence against the person, with injury,” “most serious sexual crime,” and “robbery.”

It is possible that the presence of guns protects Americans from general crime while making murder more common, but it's also possible that gun ownership is a murder deterrent too. Our murder rate is 1/5 that of Mexico, 1/4 that of Brazil, and 1/3 that of Russia, all countries with strong anti-gun laws but a violent populace. Perhaps the US (Texan) penchant for guns is what keeps Mexican gangs on their (gun-control) side of the border. Then again, it's possible that guns neither increase nor decrease murder rates, so that changing our laws would not have any major effect. Switzerland (a country with famously high gun ownership) has far fewer murders than the US and about 1/2 the rate of the UK: 0.7 murders/year per 100,000. Japan, a country with low gun ownership, has hardly any crime of any sort — not even littering. As in the Zen Buddhist joke, change comes from within.


Homicide rate per country

One major theory for US violence was that drugs and poverty were the causes. Remove these with stricter anti-drug laws and government welfare, the thinking went, and the violent crime would go away. Sorry to say, it has not happened; worse yet, murder rates are highest in cities like Detroit, where welfare is a way of life, and where a fairly high fraction of the population is in prison for drugs.

I suspect that our welfare payments have hurt Detroit as much as they've helped, and that Detroit's higher living wage has made it hard for people to find honest work. Stiff drug penalties have not helped Detroit either, and may contribute to making crimes more violent. As Thomas More pointed out in the 1500s, if you are going to prison for many years for a small crime, you're more likely to use force to avoid capture. Perhaps penalties would work better if they were smaller.

Charity can help a city, I think, and so can good architecture. I'm on the board of two charities that try to do positive things, and I plant trees in Detroit (sometimes).

R. E. Buxbaum, July 10, 2013. To make money, I sell hydrogen generators: stuff I invented, mostly.

Hormesis, Sunshine and Radioactivity

It is often the case that something is good for you in small amounts, but bad in large amounts. As expressed by Paracelsus, an early 16th century doctor, “There is no difference between a poison and a cure: everything depends on dose.”


Philippus Aureolus Theophrastus Bombastus von Hohenheim (Dr. Paracelsus).

Some obvious examples involve foods: an apple a day may keep the doctor away; fifteen will cause deep physical problems. Alcohol, something bad in high doses, and once banned in the US, tends to promote longevity and health when consumed in moderation, 1/2-2 glasses per day. This is called "hormesis," where the dose-vs-benefit curve looks like an upside-down U. While it may not apply to all foods, poisons, and insults (a view called "mithridatism"), it has been shown to apply to exercise, chocolate, coffee and (most recently) sunlight.

Up until recently, the advice was to avoid direct sun because of the risk of cancer. More recent studies show that the benefits of small amounts of sunlight outweigh the risks. Health is improved by lowering blood pressure and exciting the immune system, perhaps through release of nitric oxide. At low doses, these benefits far outweigh the small chance of skin cancer. Here’s a New York Times article reviewing the health benefits of 2-6 cups of coffee per day.

A hotly debated issue is whether radiation, too, has a hormetic dose range. In a previous post, I noted that thyroid cancer rates down-wind of the Chernobyl disaster are lower than in the US as a whole. I thought this was a curious statistical fluke, but apparently it is not. According to a review by the Harvard Medical School, apparent health improvements have been seen among the cleanup workers at Chernobyl, and among those exposed to low levels of radiation from the atomic bombs dropped on Hiroshima and Nagasaki. The health improvements relative to the general population could be a fluke, but after a while several flukes become a pattern.

Among the comments on my post came a link to this scholarly summary article of several studies showing that long-term exposure to nuclear radiation below 1 Sv appears to be beneficial. One study involved an incident where a highly radioactive Co-60 source was accidentally melted into a batch of steel that was subsequently used in the construction of apartments in Taiwan. The mistake was not discovered for over a decade, and by then the tenants had received between 0.4 and 6 Sv (far more than US law would allow). On average, they were healthier than the norm and had significantly lower cancer death rates. Supporting this is the finding, in the US, that lung cancer death rates are 35% lower in the states with the highest average radon radiation levels (Colorado, North Dakota, and Iowa) than in those with the lowest levels (Delaware, Louisiana, and California). Note: SHORT-TERM exposure to 1 Sv is NOT good for you; it will give radiation sickness, and short-term exposure to 4.5 Sv is the 50% death level.

Most people in the irradiated Taiwan apartments got .2 Sv/year or less, but the same health benefit has also been shown for people living on radioactive sites in China and India where the levels were as high as .6 Sv/year (normal US background radiation is .0024 Sv/year). Similarly, virtually all animal and plant studies show that radiation appears to improve life expectancy and fecundity (fruit production, number of offspring) at dose rates as high as 1 Sv/month.

I'm not recommending 1 Sv/month for healthy people; it's a cancer-treatment dose, and will make healthy people feel sick. A possible reason it works for plants and some animals is that the radiation may kill proto-cancers, harmful bacteria, and viruses — organisms that lack the repair mechanisms of larger, more sophisticated organisms. Alternately, it could kill non-productive, benign growths, allowing the more-healthy growths to do their thing. This explanation is similar to that for the benefits farmers produce by pinching off unwanted leaves and pruning unwanted branches.

It is not conclusive that radiation improved human health in any of these studies. It is possible that exposed people happened to choose healthier life-styles than non-exposed people, choosing to smoke less, do more exercise, or eat fewer cheeseburgers (that, more-or-less, was my original explanation). Or it may be purely psychological: people who think they have only a few years to live, live healthier. Then again, it's possible that radiation is healthy in small doses, and maybe cheeseburgers and cigarettes are too?! Here's a scene from "Sleeper," a 1973 science-fiction comedy where Woody Allen, asleep for 200 years, finds that deep fat, chocolate, and cigarettes are the best things for your health. You may not want a cigarette or a radium necklace quite yet, but based on these studies, I'm inclined to reconsider the risk/benefit balance in favor of nuclear power.

Note: my company, REB Research, makes (among other things) hydrogen getters (used to reduce the risks of radioactive waste transportation) and hydrogen separation filters (useful for cleanup of tritium from radioactive water, for fusion reactors, and to reduce the likelihood of explosions in nuclear facilities).

by Dr. Robert E. Buxbaum June 9, 2013

Chernobyl radiation appears to cure cancer

In a recent post about nuclear power, I mentioned that the health risks of nuclear power are low compared to those of the main alternatives: coal and natural gas. Even with scrubbing, the fumes from coal-burning power plants are deadly once the cumulative effect on health over 1000 square miles is considered. And natural gas plants and pipelines have fairly common explosions.

With this post I'd like to discuss a statistical fluke (or observation): even with the worst type of nuclear accident, the broad-area increase in cancer incidence is generally too small to measure. The worst nuclear disaster we are ever likely to encounter was the explosion at Chernobyl. It occurred 27 years ago during a test of the safety shutdown system and sent a massive plume of radioactive core into the atmosphere. If any accident should increase the cancer rate of those around it, this should. Still, by fluke or not, the rate of thyroid cancer is higher in the US than in Belarus, the country closest to the Chernobyl plant and in the prime path of the wind. Thyroid cancer is likely the cancer most readily excited by radiation, enhanced by radio-iodine, and Chernobyl had the largest radio-iodine release to date. Thus, it's easy to wonder why the rates of thyroid cancer seem to suggest that the radiation cures cancer rather than causes it.


Thyroid cancer rates for Belarus and the US; the effect of Chernobyl is less than clear.

The chart above raises more questions than it answers. Note that the rate of thyroid cancer has doubled over the past few years, both in the US and in Belarus. Also note that the rate of cancer is 2 1/2 times as high in Pennsylvania as in Arkansas. One thought is test bias: perhaps we are better at spotting cancer in the US than in Belarus, and perhaps better at spotting it in Pennsylvania than elsewhere. Perhaps. Another thought is coal. Areas that use a lot of coal tend to become sicker; Europe keeps getting sicker from its non-nuclear energy sources. Perhaps Pennsylvania (a coal state) uses more coal than Belarus (maybe).

Fukushima was a much less damaging accident, and much more recent. So far there has been no observed difference in cancer rate. As the reference below says: "there is no statistical evidence of a difference in thyroid cancer caused by the disaster." This is not to say that explosions are OK. My company, REB Research, makes high-pressure, low-temperature hydrogen-extracting membranes used to reduce the likelihood of hydrogen explosions in nuclear reactors; so far all the explosions have been hydrogen explosions.

Sources: for Belarus, "Cancer consequences of the Chernobyl accident: 20 years on." For the US, "Geographic variation in U.S. thyroid cancer incidence, and a cluster near nuclear reactors in New Jersey, New York, and Pennsylvania."

R. E. Buxbaum, April 19, 2013; Here are some further, updated thoughts: radiation hormesis (and other hormesis)