Tag Archives: statistics

Race and suicide

Suicide is generally understood as a cry of desperation. If so, you’d expect that the poorer, less-powerful, less-mobile members of society — black people, Hispanics, and women — would be the most suicidal. The opposite is true. While black people and Hispanics have low savings and low mobility, they rarely commit suicide. White Protestants and American Indians are the most suicidal groups in the US; blacks, Hispanics, Jews, Catholics, Muslims, and Asians are significantly less prone. And black, non-Hispanic women are the least suicidal group of all — something I find rather surprising.

US race-specific suicide rates, all ages, Centers for Disease Control, 2002-2012.

Aha, I hear you say: it’s the stress of upward mobility that causes suicide. If this were true, you’d expect Asians to have a high suicide rate. They do not, at least not American Asians. Their rate (male + female) is only 6.5/100,000, even lower than that for Afro-Americans. In their own countries it’s different: Japanese, Chinese, and Koreans commit suicide at a frightening rate. My suspicion is that American Asians feel less trapped by their jobs, and less identified with them too. They do not feel shame in their company’s failures, and that’s a good, healthy situation. In Korea, several suicides were related to the Samsung phones that burst into flames. There is some stress from upward mobility, suggested by the suicide rate for Asian-American females being higher than for other American women, but it’s still half the rate for non-Hispanic white women, and half the rate for women in China and Korea. This suggests, to me, that the attitude of Asian Americans is relatively healthy.

The only group with a suicide rate that matches that of white Protestants is American Indians, particularly Alaskan Indians. You’d figure their rate would be high given the alcoholism, but you’d also expect it to be similar to that of South-American Hispanics, a similar culture. You’d be wrong, and it’s worthwhile to ask why. Men in both cultures have similar genes, suffer financially, and are jailed often, yet American Indians are far more suicidal than Mexican Americans. It’s been suggested that the difference is religiosity or despair. But if Indians despair, why don’t Mexicans or black people? I find I don’t have a completely satisfactory explanation, and will leave it at that.

Age-specific suicide rates, US, all races, 2012, CDC.

Concerning age, you’d probably guess that teenagers and young adults would be the most suicidal — they seem the most depressed. This is not the case. Instead, middle-aged men are twice as likely to commit suicide as teenage men, and old men, 85+, are 3.5 times more suicidal. The same age group of women, 85+, is among the least suicidal, which is somewhat surprising since they are often in a lot of pain. Why men and not women? My suspicion is that the difference, as with the Asians, has to do with job identification. Middle age is a particularly important time for job progress, and men are more expected to hold a job and provide than women are. When men feel they are not providing, or worse, see themselves as a drag on family resources, they commit suicide. At least, this is my explanation.

It’s been suggested that religion is the consolation of women, and particularly of black women and Catholics. I find this explanation doubtful, as I have no real reason to think that old women are more religious than old men, or that Protestants and Indians are less religious than Hispanics, Asians, Muslims, and Jews. Another explanation that I (mostly) reject is that access to guns is the driver of suicide. Backing this up is a claim in a recent AFSP report that women attempt suicide three times more often than men. That men prefer guns, while women prefer pills and other, less-violent means, is used to suggest that removing guns would (or should) reduce suicide. Sorry to say, a comparison between the US and Canada (below) suggests the opposite.

A Centers for Disease Control study (2012) found that people doing manual-labor jobs are more prone to suicide than people in high-stress, thinking jobs. Lumberjacks, farmers, fishermen, construction workers, carpenters, miners, and the like all commit suicide far more than librarians, doctors, and teachers, whatever the race. My suspicion is that it’s not the stress of the job so much as the stress of unemployment between gigs. The high-suicide jobs, it strikes me, are jobs one would identify with (I’m a lumberjack, I’m a plumber, etc.) and are short-term. I suspect that the men doing these jobs (and these are all male-oriented jobs) tend to identify with their job, and tend to fall into a deadly funk when they’re laid off. They cannot sit around the house. Then again, many of these jobs go hand in hand with heavy drinking and an uncommon access to guns, poison, and other suicidal opportunities.

Canadians commit suicide slightly more often than Americans, but Canadians do it mostly with rope and poison, while more than half of US suicides are with guns.

I suspect that suicide among older men stems from the stress of unemployment and the boredom of sitting around feeling useless. Older women tend to have hobbies and friends, while older men do not. And older men seem to feel they are “a burden” if they can no longer work. Actor Robin Williams, as an example, committed suicide, supposedly, because he found he could not remember his lines as he had. And Kurt Gödel (the famous logician) just stopped eating until he died (apparently, a fairly uncommon method). My speculation is that he thought he was no longer doing productive work and concluded, “if I don’t produce, I don’t deserve to eat.” I’m going to speculate that the cultures of women, black men, Hispanics, Asians, etc. are less bound to the job, and less burdened by feelings of worthlessness when they are not working. Clearly, black men have as much access to guns as white men, and anyone could potentially fast himself to death.

I should also note that people tend to commit suicide when they lose their wife or husband, girlfriend or boyfriend. My thought is that this is similar to job identification. It seems to me that a wife, husband, or loved one is an affirmation of worth, someone to do for. Without someone to do for, one may feel he has nothing to live for. Based on the above, my guess about counseling is that a particularly good approach would be to remind people in this situation that there are always other opportunities. Always more fish in the sea, as it were: there are other women and men out there, and other job opportunities. Two weeks ago, I sent a suicidal friend a link to the YouTube of Stephen Foster’s song, “There are plenty of fish in the sea,” and it seemed to help. It might also help to make the person feel wanted or needed by someone else, to involve him or her in some new political or social activity. Another thought: take away the opportunity. Since you can’t easily take someone’s gun, rope, or pills (they’d get mad and suspicious), I’d suggest taking the person somewhere where these things are not: a park, the beach, a sauna or hot tub, or just for a walk. These are just my thoughts; I’m a PhD engineer, so my thinking may seem odd. I try to use numbers to guide my thought. If what I say makes sense, use it at your own risk.

Robert Buxbaum, June 21, 2017. Some other odd conclusions: that Hamilton didn’t throw away his shot, but tried to kill Burr; that tax day is particularly accident-prone, both in the US and Canada; and that old people are not particularly bad drivers, but they drive more dangerous routes (country roads, not highways).

If everyone agrees, something is wrong

I thought I’d try to semi-derive and explain a remarkable mathematical paper that was published last month in the Proceedings of the Royal Society A (see the full paper here). The paper demonstrates that too much agreement about a thing is counter-indicative of the thing being true. Unless an observation is blindingly obvious, near-100% agreement suggests there is a hidden flaw or conspiracy, perhaps unknown to the observers. This paper has broad application, but I thought the presentation was too confusing for most people to make use of, even those with a background in mathematics, science, or engineering. And the popular-press versions didn’t even try to be useful. So here’s my shot:

Figure 2 from the original paper. For a method that is 80% accurate, you get your maximum reliability at 3-5 witnesses. More agreement suggests a flaw in the people or procedure.

I will discuss only one specific application, the second one mentioned in the paper: crime (read the paper for others). Let’s say there’s been a crime with several witnesses. The police line up a half-dozen, equal (?) suspects, and show them to the first witness. Let’s say the first witness points to one of the suspects. The police will not arrest on this alone because they know that people correctly identify suspects only about 40% of the time, and incorrectly identify perhaps 10% of the time (they say they don’t know or can’t remember the remaining 50% of the time). The original paper includes the actual fractions here; they’re similar. Since the witness pointed to someone, you already know he or she isn’t among the 50% who don’t know. But you don’t know if this witness is among the 40% who identify right or the 10% who identify wrong. Our confidence that this is the criminal is thus 0.4/(0.4 + 0.1) = 0.8, or 80%.

Now you bring in the second witness. If this person identifies the same suspect, your confidence increases, to roughly (0.4)²/(0.4² + 0.1²) = 0.941, or 94.1%. This is enough to make an arrest, but let’s say you have ten more witnesses, and all identify this same person. You might first think that this must be the guy, with a confidence of (0.4)^10/(0.4^10 + 0.1^10) = 99.99999%, but then you wonder how unlikely it is to find ten people who identify correctly when, as we mentioned, each person has only a 40% chance. The chance of all ten witnesses identifying a suspect right is small: (0.4)^10 = 0.000105, or about 0.01%. This fraction is smaller than the likelihood of having a crooked cop or a screwed-up line-up (only one suspect had the right jacket, say). If crooked cops and systemic errors show up 1% of the time, and point to the correct fellow only 15% of those times, we find that the chance of being right if ten out of ten agree is (0.0015 + 0.4^10)/(0.01 + 0.4^10 + 0.1^10) = 0.16, or 16%. Total agreement on guilt suggests the fellow is innocent!

The graph above, the second in the paper, presents a generalization of the math I just presented: n identical tests of 80% accuracy and three different likelihoods of systemic failure. If the systemic failure rate is 1% and the chance of the error pointing right or wrong is 50/50, the chance of being right is P = (0.005 + 0.4^n)/(0.01 + 0.4^n + 0.1^n); this is the red curve in the graph above. The authors find you get your maximum reliability when there are two to four agreeing witnesses.
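
To make the arithmetic concrete, here is a minimal Python sketch using the rough numbers from the discussion above (40% correct identification, 10% incorrect, a 1% systemic-failure rate, of which 15% still point at the right person); these are the illustrative rates from the text, not the paper’s exact fractions.

```python
# Confidence that the identified suspect is guilty when n witnesses all agree.
P_RIGHT = 0.40       # chance a witness picks the correct suspect
P_WRONG = 0.10       # chance a witness picks an incorrect suspect
P_FAIL = 0.01        # chance of a systemic failure (crooked cop, bad line-up)
P_FAIL_RIGHT = 0.15  # chance a systemic failure still points at the right person

def confidence_of_guilt(n):
    """P(guilty | n out of n witnesses agree), including systemic failure."""
    numerator = P_FAIL * P_FAIL_RIGHT + P_RIGHT ** n
    denominator = P_FAIL + P_RIGHT ** n + P_WRONG ** n
    return numerator / denominator

for n in (1, 2, 3, 5, 10):
    print(n, round(confidence_of_guilt(n), 3))
# Confidence rises for the first few agreeing witnesses, then falls back toward
# the systemic-failure limit: unanimity becomes less likely than a flawed line-up.
```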

Confidence of guilt as related to the number of judges that agree and the integrity of the judges.

The Royal Society article went on to approve of a feature of Jewish capital-punishment law. In Jewish law, capital cases are tried by 23 judges. To convict, a supermajority (13) must find guilt, but if all 23 judges agree on guilt, the court pronounces the defendant innocent (see chart, or an anecdote about Justice Antonin Scalia). My suspicion, by the way, is that more than 1% of judges and police are crooked or inept, and that the same applies to the scientific diagnosis of mental conditions like ADHD or autism, and to predictions about stocks or climate change. (Do 98% of scientists really agree independently?) Perhaps there are so many people in US prisons because of excessive agreement and inaccurate witnesses, e.g., Rubin Carter. I suspect the agreement among climate experts is a similar sham.

Robert Buxbaum, March 11, 2016. Here are some thoughts on how to do science right. Here is some climate data: can you spot a clear pattern of man-made change?

An approach to teaching statistics to 8th graders

There are two main obstacles students have to overcome to learn statistics: one mathematical, one philosophical. The math is somewhat difficult, and will be new to a high schooler. What’s more, philosophically, it is rarely obvious what it means to discover a true pattern or underlying cause, nor is it obvious how to separate the general pattern from the random accident, the pattern from the variation. This philosophical confusion (cause and effect, essence and accident) exists in the back of even the greatest minds. Accepting and dealing with it is at the heart of the best research: seeing what is and is not captured in the formulas of the day. But it is a lot to ask of the young (or the old) who are trying to understand a statistical technique while at the same time trying to understand the subject of the statistical analysis. For young students, especially the good ones, the issue of general versus specific compounds the difficulty of the experiment and of the math. Thus, I’ll try to teach statistics with a problem or two where the distinction between essential cause and random variation is uncommonly clear.

A good case for getting around the philosophical issue is gambling with crooked dice. I show the class a pair of normal-looking dice and a caliper and demonstrate that the dice are not square; virtually every store-bought die is not square, so finding an uneven pair is easy. After checking my caliper, students will readily accept that these dice are crooked, and so someone who knows how they are crooked will have an unfair advantage. After enough throws, someone who knows the degree of crookedness will win more often than those who do not. Students will also accept that there is a degree of randomness in the throw, so that any pair of dice will look pretty fair if you don’t gamble with them too long. I can then use statistics to see which faces show up most, and justify the whole study of statistics as a way to deal with a world where the dice are loaded by God, and you don’t have a caliper, or any more-direct way of checking them. The unevenness of the dice is the underlying pattern; the random part, in this case, is in the throw, and you want to use statistics to grasp them both.

Two important numbers to understand when trying to use statistics are the average and the standard deviation. For an honest die, you’d expect an average of 1/6 = 0.1667 for every face. But throw a die a thousand times and you’ll find that hardly any of the faces show up at exactly the average rate of 1/6. The average of all the face averages will still be 1/6. We will call that grand average x°-bar = 1/6, and we will call the average for a specific face xi-bar, where i is one, two, three, four, five, or six.

There is also a standard deviation, SD. This relates to how often you expect one face to turn up more than the next. SD = √(SD²), where SD² is defined by the following formula:

SD² = (1/n) ∑(xi – x°-bar)²

Let’s pick some face of the die, 3 say. I’ll give a value of 1 if we throw that number and 0 if we do not. For an honest die, x°-bar = 1/6; that is to say, 1 out of 6 throws will land on the number 3, giving us a value of 1, and the others won’t. In this situation, SD² = (1/n) ∑(xi – x°-bar)² will equal 1/6 ((5/6)² + 5 (1/6)²) = 1/6 (30/36) = 5/36 = 0.139. Taking the square root, SD = 0.373. We now calculate the standard error. For an honest die, you expect that for every face, on average,

SE = xi-bar – x°-bar = ± SD √(1/n).

By the time you’ve made 10,000 throws, √(1/n) = 1/100 and you expect an error on the order of 0.0037. This is to say that you expect to see each face show up between about 0.163 and 0.170. In point of fact, you will likely find that at least one face of your die shows up a lot more often than this, or a lot less often. The extent to which you see that is the extent to which your die is crooked. If you throw someone’s dice enough, you can find out how crooked they are, and you can then use this information to beat the house. That, more or less, is the purpose of science, by the way: you want to beat the house; you want to live a life where you do better than you would by random chance.
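
A short simulation makes the same point. Here is a sketch in Python; the slightly uneven weights for the die are made up for illustration.

```python
import random

N = 10_000
# Hypothetical crooked die: face 3 slightly favored, face 4 slightly disfavored.
weights = [1.0, 1.0, 1.1, 0.9, 1.0, 1.0]
throws = random.choices(range(1, 7), weights=weights, k=N)

p_expected = 1 / 6
sd = (p_expected * (1 - p_expected)) ** 0.5   # ≈ 0.373 for an honest die
se = sd / N ** 0.5                            # ≈ 0.0037 at 10,000 throws

for face in range(1, 7):
    fraction = throws.count(face) / N
    flag = "suspicious" if abs(fraction - p_expected) > 2 * se else ""
    print(face, round(fraction, 4), flag)
# Faces that sit more than ~2 standard errors from 1/6 are the ones
# a gambler with a caliper would want to know about.
```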

As a less-mathematical way to look at the same thing — understanding statistics — I suggest we consider a crooked coin throw with only two outcomes, heads and tails. Not that I have a crooked coin, but your job as before is to figure out if the coin is crooked, and if so how crooked. This problem also appears in political polling before a major election: how do you figure out who will win between Mr Head and Ms Tail from a sampling of only a few voters. For an honest coin or an even election, on each throw, there is a 50-50 chance of head, or of Mr Head. If you do it twice, there is a 25% chance of two heads, a 25% chance of throwing two tails and a 50% chance of one of each. That’s because there are four possibilities and two ways of getting a Head and a Tail.

Pascal’s triangle

You can systematize this with Pascal’s triangle, shown at left. Pascal’s triangle shows the various outcomes for a series of coin tosses, and the number of ways each can be arrived at. Thus, for example, we see that by the time you’ve thrown the coin 6 times, or polled 6 people, there are 2^6 = 64 distinct outcomes, of which 20 (about 1/3) give the expected, even result: 3 heads and 3 tails. There is only 1 way to get all heads and 1 way to get all tails. While an honest coin is unlikely to come up all heads or all tails after six throws, more often than not an honest coin will not come up with exactly half heads: 44 of the 64 possible outcomes have more heads than tails, or more tails than heads, with an honest coin.

Similarly, in a poll of an even election, the result will not likely come out even. This is something that confuses many political savants. The lack of an even result after relatively few throws (or phone calls) should not be used to convince us that the coin is crooked, or that the election has a clear winner. On the other hand, there is only a 1/32 chance of getting all heads or all tails (2/64). If you call 6 people and all claim to be for Mr Head, it is likely that Mr Head is the true favorite; the chance of seeing this from an even election is only 1/32 ≈ 3%. In sports, it’s not uncommon for one side to win 6 out of 6 times. If that happens, there is a good possibility of a real underlying cause, e.g. that one team is really better than the other.

And now we get to how significant is significant. If you threw 4 heads and 2 tails out of 6 throws, we can accept that this is not significant because there are 15 ways to get this outcome (or 30 if you also include 2 heads and 4 tails) and only 20 ways to get the even outcome of 3-3. But what about if you threw 5 heads and one tail? In that case the ratio is 6/20 and the odds of this being significant are better; similarly if you called potential voters and found 5 Head supporters and 1 for Tail. What do you do? I would suggest you take the ratio as 12/20 — the ratio of both ways of getting this outcome to the count for the most likely outcome. Since 12/20 = 60%, you could say there is a 60% chance that this result is random, and a 40% chance that it is significant. Statisticians call this “suggestive,” at slightly over 1 standard deviation. A standard deviation, also known as σ (sigma), is a minimal standard of significance: roughly, it is met when the one-tailed count is less than 1/2 of the most likely count. In this case, where 6 tosses come in as 5 and 1, the one-tailed ratio is 6/20. Since 6/20 is less than 1/2, we meet this very minimal standard for “suggestive.” A more normative standard is a value of 5%. Clearly 6/20 does not meet that standard, but 1/20 does; for you to conclude that the coin is likely fixed after only 6 throws, all 6 have to come up heads or tails.
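
The counts above come straight from the n = 6 row of Pascal’s triangle; a few lines of Python reproduce the numbers used in this example:

```python
from math import comb

n = 6
total = 2 ** n                                 # 64 distinct outcomes
counts = [comb(n, k) for k in range(n + 1)]    # 1, 6, 15, 20, 15, 6, 1
print(counts)

even = comb(n, 3)                              # 20 ways to get 3 heads, 3 tails
print("5-and-1 vs even:", (comb(n, 5) + comb(n, 1)) / even)   # 12/20 = 0.6
print("all heads or all tails:", 2 / total)                   # 1/32 ≈ 3%
```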

From xkcd. It’s typical in science to say that <5% chances, p < .05, are significant. If things don’t quite come out that way, you redo.

If you graph the possibilities from a large Pascal’s triangle they will resemble a bell curve; in many real cases (not all) your experimental data variation will also resemble this bell curve. From a larger Pascal’s triangle, or a large bell curve, you will find that the 5% value occurs at about σ = 2, that is, at about twice the distance from the average as σ = 1. Generally speaking, the number of observations you need is inversely proportional to the square of the difference you are looking for. Thus, if you think there is a two-headed coin in use (one that always comes up heads), it will only take six or seven observations; if you think the die is loaded by 10%, it will take some 600 throws to show it.

In many (most) experiments, you cannot easily use Pascal’s triangle to get sigma, σ. For example, if you want to see if 8th graders are taller than 7th graders, you might measure the heights of the people in both classes and take an average for each, but you might wonder what sigma is, so you can tell if the difference is significant or just random variation. The classic mathematical approach is to calculate sigma as the square root of the average of the square of the difference of the data from the average. Thus, if the average is <h> = ∑h/N, where h is the height of a student and N is the number of students, we can say that σ = √(∑(<h> – h)²/N). This formula is found in most books. Significance is usually specified as 2 sigma, or some close variation. As convenient as this is, my preference is for this graphical version. It also shows whether the data is normal — an important consideration.
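
Here is a sketch of that calculation in Python; the height lists are made up for illustration.

```python
# Sketch: compare two groups and ask if the difference looks significant.
# The heights are invented, in centimeters.
seventh = [150, 152, 148, 155, 149, 151, 153, 147]
eighth  = [156, 158, 153, 160, 155, 157, 159, 152]

def mean(xs):
    return sum(xs) / len(xs)

def sigma(xs):
    """Population sigma, as in the formula above: sqrt(sum((<h> - h)^2) / N)."""
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

diff = mean(eighth) - mean(seventh)
# Rough standard error of the difference between the two group averages:
se = (sigma(seventh) ** 2 / len(seventh) + sigma(eighth) ** 2 / len(eighth)) ** 0.5
print(diff, se, "significant" if abs(diff) > 2 * se else "not significant")
```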

If you find the data is not normal, you may decide to break the data into sub-groups. E.g., if you look at the heights of 7th and 8th graders and find a lack of normal distribution, you may find you’re better off looking at the heights of the girls and boys separately. You can then compare those two subgroups to see if, perhaps, only the boys are still growing, or only the girls. One should not pick a hypothesis and then test it, but collect the data first and let the data determine the analysis. This was the method of Sherlock Holmes — a very worthwhile read.

Another good trick for statistics is to use a linear regression. If you are trying to show that music helps to improve concentration, try to see if more music improves it more. You want to find a linear relationship, or at least a plausible curved relationship. Generally there is a relationship if the correlation coefficient, r, is 0.9 or so. A discredited study where the author did not use regressions, but should have, and did not report sub-groups, but should have, involved cancer and genetically modified foods. The author found cancer increased in one sub-group, and publicized that finding, but didn’t mention that cancer didn’t increase in nearby sub-groups of different doses, and decreased in another nearby sub-group. By not including the subgroups, and not doing a regression, the author misled people for two years, perhaps out of a misguided attempt to help. Don’t do that.
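
A sketch of such a regression in Python, with invented music-and-concentration numbers (not real data):

```python
# Sketch: fit a line and report r for the music-and-concentration example.
import numpy as np

hours_music = np.array([0, 0.5, 1, 1.5, 2, 2.5, 3])   # invented
test_score  = np.array([60, 63, 65, 64, 70, 72, 71])  # invented

slope, intercept = np.polyfit(hours_music, test_score, 1)
r = np.corrcoef(hours_music, test_score)[0, 1]
print(f"slope={slope:.2f}, intercept={intercept:.1f}, r={r:.2f}")
# A real relationship should show a consistent slope and an r near +/-0.9;
# it should also hold up within sub-groups, not just in the pooled data.
```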

Dr. Robert E. Buxbaum, June 5-7, 2015. Lack of trust in statistics, or of understanding of statistical formulas, should not be taken as a sign of stupidity, or as a symptom of ADHD. A fine book on the misuse of statistics and its pitfalls is called “How to Lie with Statistics.” Most of the examples come from advertising.

Zombie invasion model for surviving plagues

Imagine a highly infectious, people-borne plague for which there is no immunization or ready cure, e.g. leprosy or smallpox in the 1800s, or bubonic plague in the 1500s, assuming the carrier was fleas on people (there is a good argument that people-fleas were the carrier, not rat-fleas). We’ll call these plagues zombie invasions to highlight the understanding that there is no way to cure these diseases or protect from them aside from quarantining the infected or killing them. Classical leprosy was treated by quarantine.

I propose to model the progress of these plagues to know how to survive one, should it arise. I will follow a recent paper out of Cornell that highlighted a fact, perhaps forgotten in the 21st century, that population density makes a tremendous difference in the rate of plague spread. In medieval Europe, plagues spread fastest in the cities because a city dweller interacted with far more people per day. I’ll attempt to simplify the mathematics of that paper without losing any of the key insights. As often happens when I try this, I’ve found a new insight.

Assume that the density of zombies per square mile is Z, and the density of susceptible people is S in the same units, susceptible population per square mile. We define a bite transmission likelihood, ß so that dS/dt = -ßSZ. The total rate of susceptibles becoming zombies is proportional to the product of the density of zombies and of susceptibles. Assume, for now, that the plague moves fast enough that we can ignore natural death, immunity, or the birth rate of new susceptibles. I’ll relax this assumption at the end of the essay.

The rate of zombie increase will be less than the rate of susceptible population decrease because some zombies will be killed or rounded up. Classically, zombies are killed by shot-gun fire to the head, by flame-throwers, or removed to leper colonies. However zombies are removed, the process requires people. We can say that, dR/dt = kSZ where R is the density per square mile of removed zombies, and k is the rate factor for killing or quarantining them. From the above, dZ/dt = (ß-k) SZ.

We now have three coupled, non-linear differential equations. As a first step to solving them, we set the derivatives to zero and calculate the end result of the plague: what happens as t → ∞. Using just the first equation and setting dS/dt = 0, we see that, since ß ≠ 0, the end result is SZ = 0. Thus, there are only two possible end-outcomes: either S = 0 and we’ve all become zombies, or Z = 0 and the zombies are all dead or rounded up. Zombie plagues can never end in mixed live-and-let-live situations. Worse yet, rounded-up zombies are dangerous.
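
Before going on, here is a minimal numerical sketch of the three equations, integrated with a simple Euler loop. The starting numbers mirror the figure below (199 susceptible, 1 zombie, k/ß = 0.6); the individual rate constants are invented for illustration.

```python
# Sketch: integrate dS/dt = -b*S*Z, dZ/dt = (b-k)*S*Z, dR/dt = k*S*Z.
# Densities are treated as simple counts here for readability.
b, k = 0.01, 0.006          # bite and kill/quarantine rate factors (k/b = 0.6)
S, Z, R = 199.0, 1.0, 0.0
dt = 0.01

for step in range(200_000):
    dS = -b * S * Z
    dZ = (b - k) * S * Z
    dR = k * S * Z
    S += dS * dt
    Z += dZ * dt
    R += dR * dt

print(round(S, 3), round(Z, 3), round(R, 3))
# With k/b < 1 the susceptible population S is driven to zero,
# exactly as the steady-state argument (SZ = 0) predicts.
```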

If you start with a small fraction of infected people Z0/S0 <<1, the equations above suggest that the outcome depends entirely on k/ß. If zombies are killed/ rounded up faster than they infect/bite, all is well. Otherwise, all is zombies. A situation like this is shown in the diagram below for a population of 200 and k/ß = .6

Fig. 1. Dynamics of a normal plague (light lines) and a zombie apocalypse (dark) for 199 uninfected and 1 infected. The S and R populations are shown in blue and black respectively. Zombie and infected populations, Z and I, are shown in red; k/ß = 0.6 and τ = tNß. With zombies, the S population disappears. With normal infection, the infected die and some S survive.

Sorry to say, things get worse for higher initial ratios, Z0/S0 >> 0. In these cases, even if you can kill zombies faster than they infect you, the last susceptible person may still be infected before the last zombie is killed. To analyze this, we create a new parameter, P = Z + (1 – k/ß)S, and note that dP/dt = 0 for all S and Z; the path of possible outcomes will always lie along a line of constant P. We already know that, for any zombies to survive, S = 0, and algebra then shows that the final concentration of zombies will be Z = Z0 + (1 – k/ß)S0. Free zombies survive so long as this quantity is positive, that is, so long as Z0/S0 + 1 – k/ß > 0. If Z0/S0 = 1, a situation that could arise if a small army of zombies breaks out of quarantine, you’ll need a high kill ratio, k/ß > 2, or the zombies take over. It’s harder to stop a zombie outbreak than to stop the original plague. This is a strong motivation to kill any infected people you’ve rounded up, a moral dilemma that appears in some plague literature.

Figure 1, from the Cornell paper, gives a sense of the time necessary to reach the final state of S = 0 or Z = 0. For k/ß of 0.6, we see that it takes a dimensionless time τ of 25 or so to reach the final, steady state of all zombies. Here τ = tNß, where N is the total population; it takes more real time to reach τ = 25 if N is low than if N is high. We find that the best course in a zombie invasion is to head for the country, hoping to find a place where N is vanishingly low, or (better yet) where Z0 is zero. This was the main conclusion of the Cornell paper.

Figure 1 also shows the progress of a more normal disease, one where a significant fraction of the infected die on their own or develop a natural immunity and recover. As before, S is the density of the susceptible and R is the density of the removed plus recovered, but here I is the density of those infected by the non-zombie disease. The time-scales are the same, but the outcome is different. As before, the end state is reached by about τ = 25, but now the infected are entirely killed off or isolated, I = 0, even though ß > k. Some non-infected, susceptible individuals survive as well.

From this observation, I now add a new conclusion, not from the Cornell paper. It seems clear that more immune people will be in the cities. I’ve also noted that τ = 25 will be reached faster in the cities, where N is large, than in the country where N is small. I conclude that, while you will be worse off in the city at the beginning of a plague, you’re likely better off there at the end. You may need to get through an intermediate zombie zone, and you will want to get the infected to bury their own, but my new insight is that you’ll want to return to the city at the end of the plague and look for the immune remnant. This is a typical zombie story-line; it should be the winning strategy if a plague strikes too. Good luck.

Robert Buxbaum, April 21, 2015. While everything I presented above was done with differential calculus, the original paper showed a more-complete, stochastic solution. I’ve noted before that difference calculus is better. Stochastic calculus shows that, if you start with only one or two zombies, there is still a chance to survive even if ß/k is high and there is no immunity. You’ve just got to kill all the zombies early on (gun ownership can help). Here’s my statistical way to look at this. James Sethna, lead author of the Cornell paper, was one of the brightest of my Princeton PhD chums.

Statistics of death and taxes — death on tax day

Strange as it seems, Americans tend to die in road accidents on tax day. This deadly day is April 15 most years, but in some years April 15th falls on a weekend and the fatal tax day shifts to April 16 or 17. Whatever weekday it is, about 8% more people die on the road on tax day than on the same weekday a week earlier or a week later; data courtesy of the US highway safety bureau and two statisticians, Redelmeier and Yarnell, 2014.

Forest plot of individuals in fatal road crashes for the 30 years to 2008  on US highways (Redelmeier and Yarnell, 2014). X-axis shows relative increase in risk on tax days compared to control days expressed as odds ratio. Y-axis denotes subgroup (results for full cohort in final row). Column data are counts of individuals in crashes (there are twice as many control days as tax days). Analytic results are 95% confidence intervals based on control days as referent. Dividing the experimental subjects into groups is a key trick of experimental design.

To confirm that the relation isn’t a fluke, the result of well-timed ice storms or football games, the traffic-death data was broken down into subgroups by time, age, region, etc.; see figure. Each group showed more deaths on tax day than the average of the same weekday a week before and a week after.

The cause appears unrelated to paying the tax bill as such. The increase is nearly equal for men and women, with alcohol and without, and for those over 18 and under (presumably those under 18 don’t pay taxes). The death increase isn’t concentrated at midnight either, as might be expected if the cause were people rushing to the post office. The consistency through all groups suggests this is not a quirk of non-normal data, nor a fluke, but a direct result of tax day itself. Redelmeier and Yarnell suggest that stress — the stress of thinking about taxes — is the cause.
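
For readers who want the mechanics, here is a sketch of how a relative-risk estimate and its 95% confidence interval come out of pooled counts; the counts below are invented stand-ins, not the study’s numbers.

```python
# Sketch: relative risk of a fatal crash on tax day vs matched control days.
# The pooled counts are hypothetical; the study pooled ~30 years of US data,
# with two control days (one week before, one week after) per tax day.
from math import exp, log, sqrt

tax_day_deaths, tax_days = 6500, 30        # invented counts
control_deaths, control_days = 12000, 60   # invented counts

rate_ratio = (tax_day_deaths / tax_days) / (control_deaths / control_days)
se_log = sqrt(1 / tax_day_deaths + 1 / control_deaths)   # Poisson approximation
low = exp(log(rate_ratio) - 1.96 * se_log)
high = exp(log(rate_ratio) + 1.96 * se_log)
print(f"relative risk {rate_ratio:.3f}, 95% CI ({low:.3f}, {high:.3f})")
```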

Though stress seems a plausible explanation, I’d like to see if other stress-related deaths are more common on tax day — heart attack or stroke. I have not done this, I’m sorry to say, and neither have they. General US death data is not tabulated day by day. I’ve done a quick study of Canadian tax-day deaths though (unpublished) and I’ve found that, for Canadians, Canadian tax day is even more deadly than US tax day is for Americans. Perhaps heart attack and stroke data is available day by day in Canada (?).

Robert Buxbaum, December 12, 2014. I write about all sorts of stuff. Here’s my suggested, low-stress income tax structure, and a way to reduce or eliminate income taxes: tariffs; they worked till the Civil War. Here’s my thought on why old people have more fatal car accidents per mile driven.

Seniors are not bad drivers.

Seniors cause accidents, but need to get places too

Seniors are often made fun of for confusion and speeding, but it’s not clear they speed, and it is clear they need to get places. Would reduced speed limits help them arrive alive?

Seniors have more accidents per mile traveled than middle-aged drivers. As shown on the chart below, older Canadians, 75+, get into seven times more fatal accidents per mile than 35 to 55 year olds. At first glance, this would suggest they are bad drivers who should be kept from the road, or at least made to drive slower. But I’m not so sure they are bad drivers, and I’m pretty certain that lower speed limits should not be generally imposed. I suspect that a lot of the problem comes from the per-mile comparison: middle-aged folks drive long distances on superhighways, while seniors take shorter, leisurely drives on country roads. I suspect that, on a per-hour basis, the seniors would look a lot safer, and on a per-highway-mile basis they might look identical to younger drivers.

Deaths per billion km. Canadian Vehicle Survey, 2001, Statistics Canada, includes light duty vehicles.

Another source of misunderstanding, I find, is that comparisons tend to overlook how very low the accident rates are. The fatal accident rate for 75+ year old drivers sounds high when you report it as 20 deaths per billion km. But that’s 50,000,000 km between fatalities, or roughly one fatality for every 1,250 drives around the Earth. In absolute terms it’s nothing to worry about. Old folks driving produces far fewer deaths per km than 12-29 year olds walking, and fewer deaths per km than 16-19 year olds driving.
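
The unit conversion is worth making explicit (a trivial sketch; 40,075 km is the Earth’s circumference):

```python
deaths_per_billion_km = 20                            # fatal rate for 75+ drivers
km_between_fatalities = 1e9 / deaths_per_billion_km   # 50,000,000 km per fatality
earth_circumference_km = 40_075
print(km_between_fatalities / earth_circumference_km) # ~1,250 trips around the Earth
```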

When starting to research this essay, I thought I’d find that the high death rates were the result of bad reaction times for the elderly. I half expected to find that reduced speed limits for them helped. I’ve not found any data directly related to reduced speeds, but now think that lowered speed limits would not help them any more than anyone else. I note that seniors drive for pleasure more than younger folks and do a lot more short errand drives too — to the stores, for example. These are places where accidents are more common. By contrast, 40 to 70 year olds drive more miles on roads that are relatively safe.

Don’t walk, especially if you’re old. Netherlands data, 2001-2005 fatalities per billion km.

The Netherlands data above suggest that any proposed solution should not involve getting seniors out of their cars. Not only do seniors find walking difficult, statistics suggest walking is 8 to 10 times more dangerous than driving, and bicycling is little better. A far better solution, I suspect, is reduced speeds for everyone on rural roads. If you’re zipping along a one-lane road at the posted 40, 55, or 60 mph and someone backs out of a driveway, you’re toast. The high posted speeds on these roads pose a particular danger to bicyclists and motorcyclists of all ages – and these are folks who I suspect drive a lot on the rural roads. I suspect that a 5 mph reduction would do quite a lot.

For automobiles on super-highways, it may be worthwhile to increase the speed limits. As things are now, the accident fatality rates are near zero, and the main problem may be the time wasted behind the wheel – driving from place to place. I suspect that an automobile speed limit raise to 80 mph would make sense on most US and Canadian superhighways; it’s already higher on the Autobahn in Germany.

Robert Buxbaum, November 24, 2014. Expect an essay about death on tax-day, coming soon. I’ve also written about marijuana, and about ADHD.

US cancer rates highest on the rivers, low in mountains, desert

Sometimes I find I have important data that I can’t quite explain. For example, cancer rates in the US vary by more than double from county to county, but not at random. The highest rates are on the rivers, and the lowest are in the mountains and deserts. I don’t know why, but the map shows it’s so.

Cancer death rates map of the US age adjusted 2006-2010, by county. From www.statecancerprofiles.cancer.gov.

Counties shown in red on the map have cancer death rates between 210 and 393 per 100,000, more than double, on average, the rates of the counties in blue. These red counties are mostly along the southern Mississippi, the Arkansas branching to its left, and the Alabama to its right, and along the Ohio and the Tennessee rivers (these rivers straddle Kentucky). The Yukon (Alaska) shows up in bright red, while Hawaii (no major rivers) is blue; southern Alaska (mountains) is also blue. In orange, showing less-elevated cancer death, you can make out the Delaware river between New Jersey and Delaware, the Missouri heading northwest from the Mississippi, the Columbia, and the Colorado between the Grand Canyon and Las Vegas. For some reason, counties near the Rio Grande do not show elevated cancer death rates; neither do the northern Mississippi or the Colorado south of Las Vegas.

Contrasting this are areas of low cancer death, 56 to 156 deaths per year per 100,000, shown in blue. These appear along the major mountain ranges: the Rockies (both in the continental US and Alaska), the Sierra Nevada, and the Appalachians. Virtually every mountain county appears in blue. Desert areas of the west also appear as blue, low-cancer regions: Arizona, New Mexico, Utah, Idaho, Colorado, southwest Texas, and southern California. Exceptions to this are the oasis areas in the desert: Lake Tahoe in western Nevada and Lake Mead in southern Nevada. These oases stand out in red, showing high cancer-death rates in a sea of low. Despite the AIDS epidemic and better health care, the major cities appear average in terms of cancer. It seems the two effects cancel; see the cancer incidence map (below).

My first thought of an explanation was pollution: that the mountains were cleaner, and thus healthier, while industry had polluted the rivers so badly that people living there were cancer-prone. I don’t think this explanation quite fits, since I’d expect the Yukon to be pollution-free, while the Rio Grande should be among the most polluted. Also, I’d expect cities like Detroit, Cleveland, Chicago, and New York to be pollution-heavy, but they don’t show particularly high cancer rates. A related thought was that specific industries are at fault: oil, metals, chemicals, or coal, but this too doesn’t quite fit: Utah has coal, southern California has oil, Colorado has mining, and Cleveland was home to major chemical production.

Another thought is poverty: that poor people live along the major rivers, while richer, healthier ones live in the mountains. The problem here is that the mountains and deserts are home to some very poor counties with low cancer rates, e.g. in Indian areas of the west, in south Florida, and in northern Michigan. Detroit is a very poor city, with land polluted by coal, steel, and chemical manufacture — all the worst industries, you’d expect. We’re home to the famous black lagoon, and to Zug Island, a place that looks like Hades when seen from the air. The Indian reservation areas of Arizona are, if anything, poorer yet.

Cancer incidence, age adjusted, from statecancerprofiles.cancer.gov

My final thought was that people might go to the river to die, but perhaps don’t get cancer by the river. To check this explanation, I looked at the map of cancer incidence rates. While many counties withhold their cancer-incidence data, the pattern in the remaining ones is similar to that for cancer death: the western mountain and desert counties show less than half the incidence rates of the counties along the southern Mississippi, the Arkansas, and the Ohio rivers. The incidence rates are somewhat elevated in the northeast, and lower on the Yukon, but otherwise it’s the same map as for cancer death. Bottom line: I’m left with an observation of the cancer pattern, but no good explanation or model.

Dr. Robert E. Buxbaum, May 1, 2014. Two other unsolved mysteries I’ve observed: the tornado drought of the last few years, and that dilute toxins and radiation may prevent cancer. To do science, you first observe, and then try to analyze.

Patterns in climate; change is the only constant

There is a general problem when looking for climate trends: you have to look at weather data. That’s a problem because weather data goes back thousands of years, and it’s always changing. As a result, it’s never clear what start year to use for the trend. If you start too early or too late, the trend disappears. If you start your trend line in a hot year, like the late Roman period, the trend will show global cooling. If you start in a cold year, like the early 1970s or the Little Ice Age (1500-1800), you’ll find global warming: perhaps too much. Begin 10-15 years ago, and you’ll find no change in global temperatures.

Ice coverage data shows the same problem. Take the Canadian Arctic ice maximums, shown below: if you start your regression in 1980-83, the record ice years (green), you’ll see ice loss; if you start in 1971, the year of minimum ice (red), you’ll see ice gain. It might also be nice to incorporate physics through a computer model of the weather, but this method doesn’t seem to help, perhaps because the physics models generally have to be fed coefficients calculated from the trend line. Using the best computers and a trend line showing ice loss, the US Navy predicted, in January 2006, that the Arctic would be ice-free by 2013. It didn’t happen; a new prediction is 2016 — something I suspect is equally unlikely. Five years ago, the National Academy of Sciences predicted global warming would resume in the next year or two — it didn’t either. Garbage in, garbage out, as they say.

Arctic ice in northern Canadian waters, 1971-2014, from the Canadian Ice Service. 2014 is not totally in yet, but is likely to exceed 2013. If you are looking for trends, in what year do you start?

The same trend problem appears with predicting sea temperatures and El Niño, a Christmastime warming current in the Pacific ocean. This year, 2013-14, was predicted to be a super El Niño: an exceptionally hot, stormy year with exceptionally strong sea currents. Instead, there was no El Niño, and many cities saw record cold — Detroit by 9 degrees. The Antarctic ice hit record levels, stranding a ship of anti-warming activists. There were record-few hurricanes. As I look at the Pacific sea temperatures from 1950 to the present, below, I see change, but no pattern or direction: El Nada (the nothing). If one did a regression analysis, the slope might be slightly positive or negative, but r-squared, the significance, would be near zero. There is no real directionality, just noise, if 1950 is the start date.

El Niño and La Niña since 1950. There is no sign that they are coming more often, or stronger. Nor is clear evidence that the ocean is warming.

This appears to be as much a fundamental problem in applied math as in climate science: when looking for a trend, where do you start, how do you handle data confidence, and how do you prevent bias? A thought I’ve had is to try to weight a regression in terms of the confidence in the data. The Canadian ice data shows that the Canadian Ice Service is less confident about their older data than the new; this is shown by the grey lines. It would be nice if some form of this confidence could be incorporated into the regression trend analysis, but I’m not sure how to do this right.
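
One way to sketch the idea: numpy’s polyfit accepts per-point weights, so years with better-documented measurements can count for more in the fitted trend. The series and uncertainties below are placeholders, not the Ice Service data.

```python
# Sketch: a trend line where less-trusted (older) data gets less weight.
import numpy as np

years = np.arange(1971, 2015)
ice = 100 - 0.3 * (years - 1971) + np.random.normal(0, 5, years.size)  # fake series
uncertainty = np.where(years < 1995, 8.0, 2.0)   # older data assumed less certain

weights = 1.0 / uncertainty                       # polyfit weights ~ 1/sigma
slope_w, intercept_w = np.polyfit(years, ice, 1, w=weights)
slope_u, intercept_u = np.polyfit(years, ice, 1)
print(f"unweighted slope {slope_u:.3f}, weighted slope {slope_w:.3f}")
# If the weighted and unweighted slopes disagree strongly, the trend is
# being driven by the data you trust least.
```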

It’s not so much that I doubt global warming, but I’d like a better explanation of the calculation. Weather changes: how do you know when you’re looking at climate and not weather? The president of the US claimed that the science is established, and Prince Charles of England claimed climate skeptics were headless chickens, but the science is certainly not predictive, and prediction is the normal standard of knowledge. Neither has offered any statement of how one would back up these claims. If this is global warming, I’d expect it to be warm.

Robert Buxbaum, Feb 5, 2014. Here’s a post I’ve written on the scientific method, and on dealing with abnormal statistics. I’ve also written about an important recent statistical fraud against genetically modified corn. As far as energy policy, I’m inclined to prefer hydrogen over batteries, and nuclear over wind and solar. The president has promoted the opposite policy — for unexplained, “scientific” reasons.

Fractal power laws and radioactive waste decay

Here’s a fairly simple model for nuclear reactor decay heat versus time. It’s based on a fractal model I came up with for dealing with the statistics of crime, fires, etc. The start was to notice that radioactive waste is typically a mixture of isotopes with different decay times and different decay heats. I then came to suspect that there would be a general fractal relation, and that the fractal relation would hold throughout, as the elements of the mixed waste decay to more stable, less radioactive products. After looking a bit, it seems that the fractal time characteristic is time to the 1/4 power, that is,

heat output = H° exp(-a t^1/4).

Here H° is the heat output rate at some time =0 and “a” is a characteristic of the waste. Different waste mixes will have different values of this decay characteristic.

If nuclear waste consisted of one isotope and one decay path, the number of atoms decaying per day would decrease exponentially with time to the power of 1. If there were only one daughter product produced, and it were non-radioactive, the heat output of a sample would also decay with time to the power of 1. Thus, heat output would equal H° exp(-at), and a plot of the log of the decay heat would be linear against linear time — you could plot it all conveniently on semi-log paper.

But nuclear waste generally consists of many radioactive components with different half-lives, and these components decay into other radioactive isotopes, all of which have half-lives that vary by quite a lot. The result is that a semi-log plot is rarely helpful. Some people therefore plot radioactivity on a log-log plot, typically including a curve for each major isotope and decay mode. I find these plots hardly useful; they are certainly impossible to extrapolate. What I’d like to propose instead is a fractal variation of the original semi-log plot: a plot of the log of the heat rate against a fractal time. As shown below, the use of time to the 1/4 power seems to be helpful. The plot is similar to a fractal decay model that I’d developed for crimes and fires a few weeks ago.

After-heat of nuclear fuel rods used at 20 kW/kg U; top graph 35 MW-days/kg U; bottom graph 20 MW-days/kg U. Data from US NRC Regulatory Guide 3.54. A typical reactor has 200,000 kg of uranium.

A plausible justification for this fractal semi-log plot is to observe that the half-lives of daughter isotopes relate to those of the parent isotopes. Unless I find that someone else has come up with this sort of plot or analysis before, I’ll call it after myself: a Buxbaum-Mandelbrot plot. Why not?
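
To see why such a plot can come out close to straight, here is a sketch with a made-up mix of decaying components; the half-lives and heat fractions are invented for illustration, not taken from the NRC data in the figure.

```python
# Sketch: a synthetic mix of exponential decays, plotted as log(heat) vs t^(1/4).
import numpy as np
import matplotlib.pyplot as plt

half_lives_yr = np.array([0.1, 1, 5, 30, 90, 500])          # invented mix
heat_fractions = np.array([0.4, 0.25, 0.15, 0.1, 0.07, 0.03])
decay_consts = np.log(2) / half_lives_yr

t = np.logspace(-2, 2, 200)                                  # 0.01 to 100 years
heat = (heat_fractions * np.exp(-np.outer(t, decay_consts))).sum(axis=1)

plt.semilogy(t ** 0.25, heat)
plt.xlabel("time^(1/4) (years^(1/4))")
plt.ylabel("relative decay heat")
plt.show()
# A sum of exponentials with widely spread half-lives comes out much closer to
# straight on these axes than on a plain semi-log plot; that is the point of
# the t^(1/4) scale.
```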

Nuclear power is attractive because it is a lot more energy-dense than any normal fuel. Still, the graph at right illustrates the problem of radioactive waste. With nuclear, you generate about 35 MW-days of power per kg of uranium. This is enough to power an average US home for 8 years, but it produces 1 kg of radioactive waste. Even after 81 years, the waste is generating about 1/2 W of decay heat. It should be easier to handle and store the 1 kg of spent uranium than to deal with the many tons of coal-smoke produced when 35 MW-days of electricity is made from coal; still, there is reason to worry about the decay heat.

I’ve made a similar plot of the decay heat of a fusion reactor; see below. Fusion looks better in this regard. A fission-based nuclear reactor to power half of Detroit would hold some 200,000 kg of uranium, replaced every 5 years. Even 81 years after removal, the after-heat would be about 100 kW, and that’s a lot.

After-heat of a 4000 MWth Fusion Reactor built from niobium-1%zirconium; from UWMAC III Report. The after heat is far less than with normal uranium fission.

The plot of the after-heat of a similar-power fusion reactor shows a far steeper slope, but the same time-to-the-1/4-power dependence. The heat output drops from 1 MW at 3 weeks to only 100 W after 1 year, and to far less than 1 W after 81 years. Nuclear fusion is still a few years off, but the plot shows the advantages fairly clearly, I think.

This plot was really designed to look at the statistics of crime, fires, and the need for servers / checkout people.

Dr. R.E. Buxbaum, January 2, 2014, edited Aug 30, 2022. *A final, final thought about theory from Yogi Berra: “In theory, it matches reality.”

Genetically modified food not found to cause cancer.

It’s always nice when a study is retracted, especially so if the study alerted the world to a danger that is found not to exist. Retractions don’t happen often enough, I think, given that false positives should occur in at least 5% of all biological studies. Biological studies typically use 95% confidence limits, a limit that indicates there will be false positives 5% of the time for the best-run versions (or 10% if both 5% tails are taken to be significant). These false positives will appear in 5-10% of all papers as an expected result of statistics, no matter how carefully the study is done, or how many rats are used. Still, one hopes that researchers will check for confirmation from other researchers and from other groups within the study. Neither check was done in a well-publicized, recent paper claiming genetically modified foods cause cancer. Worse yet, the experiment design was such that false positives were almost guaranteed.

Séralini published this book, “We are all Guinea Pigs,” simultaneously with the paper.

As reported in Nature, the journal Food and Chemical Toxicology retracted a 2012 paper by Gilles-Eric Séralini claiming that eating genetically modified (GM) maize causes cancerous tumors in rats, despite “no evidence of fraud or intentional misrepresentation.” I would not exactly say no evidence. For one, the choice of rats and the length of the study were such that 30% of the rats would be expected to get cancer and die even under the best of circumstances. Also, Séralini failed to mention that earlier studies had come to the opposite conclusion about GM foods. The same journal had even published a review of 12 long-term studies, between 90 days and two years, that showed no harm from GM corn or other GM crops. Those reports didn’t get much press because it is hard to get excited about good news, but you’d have hoped the journal editors would demand that their own review, at least, be referenced in a paper stating the contrary.

A wonderful book on understanding the correct and incorrect uses of statistics.

The main problem I found is that the study was organized to virtually guarantee false positives. Séralini took 200 rats and divided them into 20 groups of 10. Taking two groups of ten (one male, one female) as a control, he fed the other 18 groups of ten various doses of genetically modified grain, either alone or mixed with Roundup, a pesticide often used with GM foods. Based on pure statistics, at 95% confidence, you should expect that, out of the 18 groups fed GM grain, there is a 1 – 0.95^18 = 60% chance that at least one group will show a cancer increase, and a similar 60% chance that at least one group will show a cancer decrease at the 95% confidence level. Séralini’s study found both results: one group, the female rats fed 10% GM grain and no Roundup, showed a cancer increase; another group, the female rats fed 33% GM grain and no Roundup, showed a cancer decrease — both at the 95% confidence level. Séralini then dismissed the observation of cancer decrease, and published the inflammatory article and a companion book (“We are all Guinea Pigs,” pictured above) proclaiming that GM grain causes cancer. Better editors would have forced Séralini to acknowledge the observation of cancer decrease, or demanded he analyze the data by linear regression. If he had, Séralini would have found no net cancer effect. Instead he got to publish his bad statistics, and (since none of the counter-studies were mentioned) unleashed a firestorm of GM grain products pulled from store shelves.

Did Séralini knowingly design a research method aimed at producing false positives? In a sense, I’d hope so; the alternative is pure ignorance. Séralini is a long-time anti-GM activist. He claims he used few rats because he was not expecting to find any cancer — no previous tests on GM foods had suggested a cancer risk!? But this is misdirection; no matter how many rats are in each group, if you use 20 groups this way, there is a 60% chance you’ll find at least one group with cancer at the 95% confidence limit. (This is Poisson-type statistics; see here.) My suspicion is that Séralini knowingly gamed the experiments in an effort to save the world from something he was sure was bad; that he was a do-gooder twisting science for the greater good.
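
The 60% figure is simple probability; here is a three-line check:

```python
# Chance of at least one false positive among 18 treated groups at 95% confidence.
p_false = 0.05
groups = 18
print(round(1 - (1 - p_false) ** groups, 2))   # ≈ 0.60
# With 18 chances, a spurious "cancer increase" group and a spurious
# "cancer decrease" group are each more likely than not to appear.
```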

It’s important to cite previous work and aspects of the current work that may undermine the story you’d like to tell; BC Comics, Johnny Hart.

This was not the only major retraction of the month, by the way. The Harrisburg Patriot & Union retracted its 1863 review of Lincoln’s Gettysburg Address, a speech the editors originally panned as “silly remarks”, deserving “a veil of oblivion….” In a sense, it’s nice that they reconsidered, and “…have come to a different conclusion…” My guess is that the editors were originally motivated by do-gooder instinct; they hoped to shorten the war by panning the speech.

There is an entire blog devoted to retractions, by the way: http://retractionwatch.com. A good friend, Richard Fezza, alerted me to it. I went to high school with him, then through undergrad at Cooper Union, and to grad school at Princeton, where we both earned PhDs. We’ll probably end up in the same old-age home. Cooper Union tried to foster a skeptical attitude against group-think.

Robert Buxbaum, Dec 23, 2013. Here is a short essay on the correct way to do science, and how to organize experiments (randomly) to make biased analysis less likely. I’ve also written on nearly normal statistics, and near-Poisson statistics. Plus on other random stuff in the science and art world: Time travel, anti-matter, the size of the universe, Surrealism, Architecture, Music.