Tag Archives: math

Calculating π as a fraction

Pi is a wonderful number, π = 3.14159265…. It’s very useful: the ratio of the circumference of a circle to its diameter, and the ratio of the area of a circle to the square of its radius. But it is irrational: one can show that it can not be described as an exact fraction. When I was in middle school, I thought to calculate Pi by approximations of the circumference or area, but found that, as soon as I got past some simple techniques, I was left with massive sums involving lots of square roots. Even with a computer, I found this slow, annoying, and aesthetically unpleasing: I was calculating one irrational number from the sum of many other irrational numbers.

At some point, I moved on to the following fractional sum (the Gregory-Leibniz series).

π/4 = 1/1 – 1/3 + 1/5 – 1/7 + …

This was an appealing approach, but I found the series converges amazingly slowly. I tried to make it converge faster by combining terms, but that just made the terms more complex; it didn’t speed convergence. Next to try was Euler’s formula:

π²/6 = 1/1 + 1/4 + 1/9 + …

This series converges barely faster than the Gregory/Leibniz series, and now I’ve got a square root to deal with. And that brings us to my latest attempt, one I’m pretty happy with discovering (I’m probably not the first). I start with the Taylor series for sin x. If x is measured in radians: 180° = π radians; 30° = π/6 radians. With the angle x measured in radians, one can show that

sin x = x – x³/6 + x⁵/120 – x⁷/5040 + …

Notice that the series is fractional and that the denominators get large fast. That suggests that the series will converge fast (2 to 3 terms?). To speed things up further, I chose to solve the above for sin 30° = 1/2 = sin π/6. Truncating the series to the first term gives us the following approximation for pi.

1/2 = sin (π/6) ≈ π/6.

Rearrange this and you find π ≈ 6/2 = 3.

That’s not bad for a first order solution. The Gregory/Leibniz series would have gotten me π ≈ 4, and the Euler series π ≈ √6 = 2.45…: I’m ahead of the game already. Now, let’s truncate to the second term.

1/2 ≈ π/6 – (π/6)³/6.

In theory, I could solve this via the cubic equation formula, but that would leave me with two square roots, something I’d like to avoid. Instead, and here’s my innovation, I’ll substitute 3 + ∂ for π. I’ll then use the binomial theorem to claim that π³ ≈ 27 + 27∂ = 27(1+∂). Put this into the equation above and we find:

1/2 = (3+∂)/6 – 27(1+∂)/1296

Rearranging and solving for ∂, I find that

27/216 = ∂ (1- 27/216) = ∂ (189/216)

∂ = 27/189 = 1/7 = .1428…

If π ≈ 3 + ∂, I’ve just calculated π ≈ 22/7. This is not bad for an approximation based on just the second term in the series.

Where to go from here? One thought was to revisit the second term, and now say that π = 22/7 + ∂, but it seemed wrong to ignore the third term. Instead, I’ll include the 3rd term, and say that π/6 = 11/21 + ∂. Extending the derivative approximations I used above, (π/6)³ ≈ (11/21)³ + 3∂(11/21)², etc., I find:

1/2 ≈ (11/21 + ∂) – (11/21)³/6 – 3∂(11/21)²/6 + (11/21)⁵/120 + 5∂(11/21)⁴/120.

For a while I tried to solve this for ∂ as a fraction using long-hand algebra, but I kept making mistakes. Thus, I’ve chosen to use two faster options: decimals or Wolfram Alpha. Using decimals is simpler; I find: 11/21 ≈ .523810, (11/21)² = .274376; (11/21)³ = .143721; (11/21)⁴ = .075282, and (11/21)⁵ = .039434.

Put these numbers into the original equation and I find:

1/2 – .52381 + .143721/6 – .039434/120 = ∂ (1 – .274376/2 + .075282/24),

∂ = -.000185/.86595 ≈ -.000214. Based on this,

π ≈ 6 (11/21 – .000214) = 3.141573… Not half bad.

Alternately, using Wolfram Alpha to reduce the fractions,

1/2 – 11/21 + 11³/(6•21³) – 11⁵/(120•21⁵) = ∂ (24•21⁴/(24•21⁴) – 12•11²•21²/(24•21⁴) + 11⁴/(24•21⁴))

∂ = -90491/424394565 ≈ -.000213618. This is a more exact solution, but it gives a result that’s no more accurate since it is based on a 3-term approximation of the infinite series.

We find that π/6 ≈ .523596, or, in fractional form, that π ≈ 444422848 / 141464855 = 3.14158.

Either approach seems OK in terms of accuracy: I can’t imagine needing more (I’m just an engineer). I like that I’ve got a fraction, but find the fraction quite ugly, as fractions go. It’s too big. Working with decimals gets me the same accuracy with less work — I avoided needing square roots, and avoided having to resort to Wolfram.

As an experiment, I’ll see if I get a nicer fraction if I drop the last term, 11⁴/(24•21⁴): it is a small correction to a small number, ∂. The equation is now:

1/2 – 11/21 + 11³/(6•21³) – 11⁵/(120•21⁵) = ∂ (24•21⁴/(24•21⁴) – 12•11²•21²/(24•21⁴)).

I’ll multiply both sides by 24•21⁴ and then by 5•21 to find that:

12•21⁴ – 24•11•21³ + 4•21•11³ – 11⁵/(5•21) = ∂ (24•21⁴ – 12•11²•21²),

60•21⁵ – 120•11•21⁴ + 20•21²•11³ – 11⁵ = ∂ (120•21⁵ – 60•11²•21³).

Solving for π, I now get 221406169/70476210 = 3.1415731.

It’s still an ugly fraction, about as accurate as before. As with the decimal version, I got to 5-decimal accuracy without having to deal with square roots, but I still had to go to Wolfram. If I were to go further, I’d start with the pi value above in decimal form, π = 3.141573 + ∂; I’d add the 7th power term, and I’d stick to decimals for the solution. I imagine I’d add 4-5 more decimals that way.
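
For those who’d rather let a computer do the arithmetic, here is a minimal Python sketch (mine, not part of the original derivation) of the same scheme: linearize the truncated sine series about the current guess for π/6, solve for the correction ∂, and add one series term per pass.

from math import factorial

def sin_series(x, n_terms):
    # truncated Taylor series: x - x^3/3! + x^5/5! - ...
    return sum((-1)**k * x**(2*k + 1) / factorial(2*k + 1) for k in range(n_terms))

def cos_series(x, n_terms):
    # matching truncation of cos x, the derivative used in the linearization
    return sum((-1)**k * x**(2*k) / factorial(2*k) for k in range(n_terms))

x = 0.5                               # first guess for pi/6, i.e. pi = 3
for n_terms in (2, 3, 4, 5):          # add one more series term each pass
    d = (0.5 - sin_series(x, n_terms)) / cos_series(x, n_terms)
    x = x + d
    print(n_terms, 6*x)               # 3.142857 (22/7), 3.141578, then toward 3.14159265...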

Robert Buxbaum, April 2, 2018

Beyond oil lies … more oil + price volatility

One of many best-selling books by Kenneth Deffeyes

While I was at Princeton, one of the most popular courses was geology 101, taught by Dr. Kenneth S. Deffeyes. It was a sort of “Rocks for Jocks,” but had an unusual bite since Dr. Deffeyes focussed particularly on the geology of oil. Deffeyes had an impressive understanding of oil and oil production, and one outcome of this impressive understanding was his certainty that US oil production had peaked in 1970, and that world oil was about to run out too. The prediction that US oil production had peaked was not original to him. It was called Hubbert’s peak after King Hubbert, who correctly predicted (rationalized?) the date, but published it only in 1971. What Deffeyes added to Hubbert’s analysis was a simplified mathematical justification and a new prediction: that world oil production would peak in the 1980s, or 2000, and then run out fast. By 2005, the peak date was fixed to November 24 of that same year: Thanksgiving day 2005 ± 3 weeks.

As with any prediction of global doom, I was skeptical, but generally trusted the experts, and virtually every expert was on board to predict gloom in the near future. A British group, The Institute for Peak Oil, picked 2007 for the oil to run out, and several movies expanded the theme, e.g. Mad Max. I was convinced enough to direct my PhD research to nuclear fusion engineering, fusion being presented as the essential salvation if our civilization was to survive beyond 2050 or so. I’m happy to report that the dire prediction of his mathematics did not come to pass, at least not yet. To quote Yogi Berra, “In theory, theory is just like reality.” Still I think it’s worthwhile to review the mathematical thinking for what went wrong, and see if some value might be retained from the rubble.

Deffeyes’s Malthusian proof went like this: take a year-by-year history of the rate of production, P, and divide this by the amount of oil known to be recoverable in that year, Q. Plot this P/Q data against Q, and you find the data follows a reasonably straight line: P/Q = b – mQ. This occurs between 1962 and 1983, or between 1983 and 2005. For whichever straight line you pick, m and b are positive. Once you find values for m and b that you trust, you can rearrange the equation to read,

P = -mQ² + bQ

You then calculate the peak of production from this as the point where dP/dQ = 0. With a little calculus you’ll see this occurs at Q = b/2m, or at P/Q = b/2. This is the half-way point on the P/Q vs Q line. If you extrapolate the line to zero production, P = 0, you predict a total possible oil production, Q_T = b/m. According to this model, this is always double the total Q discovered by the peak. In 1983, Q_T was calculated to be 1 trillion barrels. By May of 2005, again predicted to be a peak year, Q_T had grown to two trillion barrels.
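
Here is a short Python sketch of that recipe (my own restatement, run on invented numbers chosen to lie on a neat parabola, not on real production data), just to show the steps: fit the P/Q vs Q line, then read off the peak and the extrapolated total.

from statistics import mean

Q = [50, 100, 150, 200, 250, 300]          # invented cumulative production, billions of bbl
P = [9.5, 18.0, 25.5, 32.0, 37.5, 42.0]    # invented yearly production, billions of bbl/yr

# least-squares fit of P/Q = b - m*Q
y = [p/q for p, q in zip(P, Q)]
Q_mean, y_mean = mean(Q), mean(y)
m = -sum((q - Q_mean)*(yi - y_mean) for q, yi in zip(Q, y)) / sum((q - Q_mean)**2 for q in Q)
b = y_mean + m*Q_mean

Q_peak = b/(2*m)      # production peaks where dP/dQ = 0
Q_total = b/m         # extrapolate to P = 0 for the total recoverable oil
print(round(b, 4), round(m, 6), round(Q_peak, 1), round(Q_total, 1))   # 0.2, 0.0002, 500, 1000 here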

I suppose Deffeyes might have suspected there was a mistake somewhere in the calculation from the way that Q_T had doubled, but he did not. See him lecture here in May 2005; he predicts war, famine, and pestilence, with no real chance of salvation. It’s a depressing conclusion, confidently presented by someone enamored of his own theories. In retrospect, I’d say he did not realize that he was over-enamored of his own theory, and blind to the possibility that the P/Q vs Q line might curve upward, that is, have a positive second derivative.

Aside from his theory of peak oil, Deffeyes also had a theory of oil price, one that was not all that popular. It’s not presented in the YouTube video, nor in his popular books, but it’s one that I still find valuable, and plausibly true. Deffeyes claimed the wildly varying prices of the time were the result of an inherent queue imbalance between a varying supply and an inelastic demand. If this was the cause, we’d expect the price of oil to jump up and down the way the wait-line at a barber shop gets longer and shorter. Assume supply varies because discoveries come in random packets, while demand rises steadily, and it all makes sense. After each new discovery, price is seen to fall. It then rises slowly till the next discovery. Price is seen as a symptom of supply unpredictability rather than a useful corrective to supply needs. This view is the opposite of Adam Smith’s, but I think Deffeyes is not wrong here, at least in the short term with a necessary commodity like oil.

Academics accepted the peak oil prediction, I suspect, in part because it supported a Marxian remedy. If oil was running out and the market was broken, then our only recourse was government management of energy production and use. By the late 70s, Jimmy Carter told us to turn our thermostats to 65. This went with price controls, gas rationing, a 55 mph speed limit, and a strong message of population management – birth control. We were running out of energy, we were told, because we had too many people and they (we) were using too much. America’s growth days were behind us, and only the best and the brightest could be trusted to manage our decline into the abyss. I half believed these scary predictions, in part because everyone did, and in part because they made my research at Princeton particularly important. The science fiction of the day told tales of bold energy leaders, and I was ready to step up and lead, or so I thought.

By 2009, Dr. Deffeyes was being regarded as Chicken Little, as world oil production continued to expand.

I’m happy to report that none of the dire predictions of the 70’s to 90s came to pass. Some of my colleagues became world leaders, the rest became stock brokers with their own private planes and SUVs. As of my writing in 2018, world oil production has been rising, and even King Hubbert’s original prediction of US production has been overturned. Deffeyes’s reputation suffered for a few years, then politicians moved on to other dire dangers that require world-class management. Among the major dangers of today: school shootings, Ebola, and Al Gore’s claim that the ice caps would melt by 2014, flooding New York. Sooner or later, one of these predictions will come true, but the lesson I take is that it’s hard to predict change accurately.

Just when you thought US oil was depleted, production began rising. We now produce more than in 1970.

Much of the new oil production you’ll see on the chart above comes from tar-sands, oil that Deffeyes considered unrecoverable, even while it was being recovered. We also discovered new ways to extract leftover oil, and got better at using nuclear electricity and natural gas. In the long run, I expect nuclear electricity and hydrogen will replace oil. Trees have a value, as does solar. As for nuclear fusion, it has not turned out practical. See my analysis of why.

Robert Buxbaum, March 15, 2018. Happy Ides of March, a most republican holiday.

Why is it hot at the equator, cold at the poles?

Here’s a somewhat mathematical look at why it is hotter at the equator than at the poles. This is high school or basic college level science, using trigonometry (pre-calculus), a slight step beyond the basic statement that the sun hits down more directly at the equator than at the poles. That’s the kid’s explanation, but we can understand better if we add a little math.

Solar radiation hits Detroit, or any other non-equator point, at an angle. As a result, less radiation power hits per square meter of land.

Let’s use the diagram at right and trigonometry to compare the amount of sun-energy that falls on a square meter of land at the equator (0° latitude) and in a city at 42.5° N latitude (Detroit, Boston, and Rome are at this latitude). In each case, let’s consider high-noon on March 21 or September 20. These are the two equinox days, the only days each year when the day and night are of equal length, and the only times when the angle of the sun is easy to calculate: at high noon on those days it deviates from the vertical by exactly the latitude.

More specifically, the equator is zero latitude, so on the equator at high-noon on the equinox, the sun will shine from directly overhead, or 0° from the vertical. Since the sun’s power in space is 1050 W/m², every square meter of equator can expect to receive 1050 W of sun-energy, less the amount reflected off clouds and dust, or scattered off air molecules (air scattering is what makes the sky blue). Further north, Detroit, Boston, and Rome sit at 42.5° latitude. At noon on March 21 the sun will strike earth at 42.5° from the vertical, as shown in the lower figure above. From trigonometry, you can see that each square meter of these cities will receive cos 42.5° as much power as a square meter at the equator, except for any difference in clouds, dust, etc. Without clouds etc., that would be 1050 cos 42.5° = 774 W. Less sun power hits per square meter because each square meter is tilted. Earlier and later in the day, each spot will get less sunlight than at noon, but the proportion is the same, at least on one of the equinox days.

To calculate the likely temperature in Detroit, Boston, or Rome, I will use a simple energy balance. Ignoring heat storage in the earth for now, we will say that the heat in equals the heat out. We also ignore heat transfer by way of winds and rain, and approximate to say that the heat out leaves by black-body radiation alone, radiating into the extreme cold of space. This is not a very bad approximation since black-body radiation is the main temperature removal mechanism in most situations where large distances are involved. I’ve discussed black body radiation previously; the amount of energy radiated is proportional to luminosity, and to T⁴, where T is the temperature as measured in an absolute temperature scale, Kelvin or Rankine. Based on this, and assuming that the luminosity of the earth is the same in Detroit as at the equator,

T_Detroit / T_equator = √√(cos 42.5°) = 0.927

I’ll now calculate the actual temperatures. For American convenience, I’ll choose to calculate in the Rankine temperature scale, the absolute Fahrenheit scale. In this scale, 100°F = 560°R, 0°F = 460°R, and the temperature of space is 0°R as a good approximation. If the average temperature of the equator = 100°F = 38°C = 560°R, we calculate that the average temperature of Detroit, Boston, or Rome will be about .927 x 560 = 519°R = 59°F (15°C). This is not a bad prediction, given the assumptions. We can expect the temperature will be somewhat lower at night as there is no light, but the temperature will not fall to zero as there is retained heat from the day. The same reason, retained heat, explains why it will be warmer in these cities on September 20 than on March 21.
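
If you want to play with this, here is a small Python sketch (mine, just restating the energy balance above) that turns a noon sun angle into an estimated average temperature:

from math import cos, radians

def estimate(sun_angle_from_vertical_deg, t_equator_rankine=560.0):
    # black-body balance: absorbed power ~ cos(angle), radiated power ~ T^4
    return t_equator_rankine * cos(radians(sun_angle_from_vertical_deg))**0.25

t_detroit = estimate(42.5)
print(round(t_detroit), round(t_detroit - 460))   # about 519 R, or 59 F, as above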

In the summer, these cities will be warmer because they are in the northern hemisphere, and the north pole is tilted 23°. At the height of summer (June 21) at high noon, the sun will shine on Detroit at an angle of 42.5 – 23° = 19.5° from the vertical. The difference in angle is why these cities are warmer on that day than on March 21. The equator will be cooler on that day (June 21) than on March 21 since the sun’s rays will strike the equator at 23° from the vertical on that day. These  temperature differences are behind the formation of tornadoes and hurricanes, with a tornado season in the US centering on May to July.

When looking at the poles, we find a curious problem in guessing what the average temperature will be. At noon on the equinox, the sun comes in horizontally, or at 90° from the vertical. We thus expect there is no warming power at all this day, and none for the six months of winter either. At first glance, you’d think the temperature at the poles would be zero, at least for six months of the year. It isn’t zero because there is retained heat from the summer, but still it makes for a more-difficult calculation.

To figure an average temperature of the poles, let’s remember that during the 6-month summer the sun shines for 24 hours per day, and that the angle of the sun will be as high as 23° from the horizon, or 67° from the vertical, for all 24 hours. Let’s assume that the retained heat from the summer is what keeps the temperature from falling too low in the winter, and calculate the temperature based on an average, effective sun angle for that summer.

Let’s assume that the sun comes in at the equivalent of 25° above the horizon, that is 65° from the vertical, during the 6-month “day” of the polar summer. I don’t look at the equinox, but rather at the solar day, and note that the heating angle stays fixed through each 24-hour day during the summer, and does not decrease in the morning or as the afternoon wears on. Based on this angle, we expect that

T_Pole / T_equator = √√(cos 65°) = 0.806

T_Pole = 0.806 x 560°R = 452°R = -8°F (-22°C).

This, as it happens, is 4° colder than the average temperature at the north pole, but not bad, given the assumptions. Maybe winds and water currents account for the difference. Of course there is a large temperature difference at the pole between the fall equinox and the spring equinox, but that’s to be expected. The average is -4°F, about the temperature at night in Detroit in the winter.
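
The same little estimate() sketch from the Detroit section reproduces this, with the assumed 65° polar sun angle plugged in:

t_pole = estimate(65.0)
print(round(t_pole), round(t_pole - 460))   # about 452 R, or -8 F, as above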

One last thing, one that might be unexpected, is that the temperature at the south pole is lower than at the north pole, on average -44°F. The main reason for this is that the snow at the south pole is quite deep — more than 1 1/2 miles deep, with some rock underneath. As I showed elsewhere, we expect that temperatures are lower at high altitude. Data collected from cores through the 1 1/2 mile deep snow suggest (to me) chaotic temperature change, with long ice ages, and brief (6000 year) warm periods. The ice ages seem far worse than global warming.

Dr. Robert Buxbaum, December 30, 2017

math jokes and cartoons

Parallel lines have so much in common.

It’s a shame they never get to meet.

sometimes education is the removal of false notions.

pi therapy

Robert E. Buxbaum, January 4, 2017. Aside from the beauty of math itself, I’ve previously noted that, if your child is interested in science, the best route for development is math. I’ve also noted that Einstein did not fail at math, and that calculus is taught wrong (and it probably is).

The game is rigged and you can always win.

A few months ago, I wrote a rather depressing essay based on the work of Nobel Laureate Kenneth Arrow and the paradox of de Condorcet. It can be shown, mathematically, that you can not make a fair election, even if you wanted to, and no one in power wants to. The game is rigged.

To make up for that insight, I’d like to show from the work of John Forbes Nash (A Beautiful Mind) that you, personally, can win, basically all the time, if you can get someone, anyone to coöperate by trade. Let’s begin with an example in Nash’s first major paper, “The Bargaining Problem,” the one Nash is working on in the movie— read the whole paper here.  Consider two people, each with a few durable good items. Person A has a bat, a ball, a book, a whip, and a box. Person B has a pen, a toy, a knife, and a hat. Since each item is worth a different amount (has a different utility) to the owner and to the other person, there are almost always sets of trades that benefit both. In our world, where there are many people and everyone has many durable items, it is inconceivable that there are not many trades a person can make to benefit him/her while benefiting the trade partner.

Figure 3, from Nash’s “The Bargaining Problem.” U1 and U2 are the utilities of the items to the two people, and O is the current state. You can improve by barter so long as your current state is not on the boundary. The parallel lines are places one could reach if money trades as well.

Good trades are even more likely when money or non-durables are involved. A person may trade his or her time for money, that is, work, and any half-normal person will have enough skill to be of some value to someone. If one trades some money for durables, particularly tools, one can become rich (slowly). If one trades this work for items that make them happy (food, entertainment), they can become happier. There are just two key skills: knowing what something is worth to you, and being willing to trade. It’s not that easy for most folks to figure out what their old sofa means to them, but it’s gotten easier with garage sales and eBay.
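
As a toy illustration of the idea (my own, with made-up utility numbers, not Nash’s), here is a short Python sketch that lists every one-for-one swap between two people that leaves both of them better off:

a_items = {"bat": 3, "ball": 2, "book": 8, "whip": 1, "box": 4}      # A's utility for A's own items
b_items = {"pen": 5, "toy": 2, "knife": 6, "hat": 3}                 # B's utility for B's own items
a_wants = {"pen": 1, "toy": 7, "knife": 2, "hat": 9}                 # A's utility for B's items
b_wants = {"bat": 6, "ball": 1, "book": 2, "whip": 8, "box": 3}      # B's utility for A's items

for a_item, a_keep in a_items.items():
    for b_item, b_keep in b_items.items():
        a_gain = a_wants[b_item] - a_keep    # A gives a_item, receives b_item
        b_gain = b_wants[a_item] - b_keep    # B gives b_item, receives a_item
        if a_gain > 0 and b_gain > 0:
            print("trade", a_item, "for", b_item, "- A gains", a_gain, ", B gains", b_gain)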

Let us now move to the problem of elections, e.g. in this year 2016. There are few people who find the person of their dreams running for president this year. The system has fundamental flaws, and has delivered two thoroughly disliked individuals. But you can vote for a generally positive result by splitting your ticket. American society generally elects a mix of Democrats and Republicans. This mix either delivers the outcome we want, or we vote out some of the bums. Americans are generally happy with the result.

A Stamp act stamp. The British used these to tax every transaction, making it impossible for the ordinary person to benefit by small trade.

The mix does not have to involve different people, it can involve different periods of time. One can elect a Democratic president this year, and a Republican four years later. Or take the problem of time management for college students. If a student had to make a one-time choice, they’d discover that you can’t have good grades, good friends, and sleep. Instead, most college students figure out you can have everything if you do one or two of these now, and switch when you get bored. And this may be the most important thing they learn.

This is my solution to Israel’s classic identity dilemma. David Ben-Gurion famously noted that Israel had the following three choices: it could be a nation of Jews living in the land of Israel, but not democratic. It could be a democratic nation in the land of Israel, but not Jewish; or it could be Jewish and democratic, but not (for the most part) in Israel. This sounds horrible until you realize that Israel can elect politicians to deliver different pairs of the options, and can have different cities that cater to these options too. Because Jerusalem does not have to look like Tel Aviv, Israel can achieve a balance that’s better than any pure solution.

Robert E. Buxbaum, July 17-22, 2016. Balance is all, and pure solutions are a doom. I’m running for water commissioner.

if everyone agrees, something is wrong

I thought I’d try to semi-derive, and explain, a remarkable mathematical paper that was published last month in The Proceedings of the Royal Society A (see full paper here). The paper demonstrates that too much agreement about a thing is counter-indicative of the thing being true. Unless an observation is blindingly obvious, near-100% agreement suggests there is a hidden flaw or conspiracy, perhaps unknown to the observers. This paper has broad application, but I thought the presentation was too confusing for most people to make use of, even those with a background in mathematics, science, or engineering. And the popular press versions didn’t even try to be useful. So here’s my shot:

Figure 2 from the original paper. For a method that is 80% accurate, you get your maximum reliability at 3-5 witnesses. More agreement suggests a flaw in the people or procedure.

I will discuss only one specific application, the second one mentioned in the paper, crime (read the paper for others). Let’s say there’s been a crime with several witnesses. The police line up a half-dozen, equal (?) suspects, and show them to the first witness. Let’s say the first witness points to one of the suspects. The police will not arrest on this alone because they know that people correctly identify suspects only about 40% of the time, and incorrectly identify perhaps 10% (they say they don’t know or can’t remember the remaining 50% of the time). The original paper includes the actual fractions here; they’re similar. Since the witness pointed to someone, you already know he/she isn’t among the 50% who don’t know. But you don’t know if this witness is among the 40% who identify right or the 10% who identify wrong. Our confidence that this is the criminal is thus .4/(.4 + .1) = .8, or 80%.

Now you bring in the second witness. If this person identifies the same suspect, your confidence increases, to roughly (.4)²/(.4² + .1²) = .941, or 94.1%. This is enough to make an arrest, but let’s say you have ten more witnesses, and all identify this same person. You might first think that this must be the guy, with a confidence of (.4)¹⁰/(.4¹⁰ + .1¹⁰) = 99.99999%, but then you wonder how unlikely it is to find ten people who identify correctly when, as we mentioned, each person has only a 40% chance. The chance of all ten witnesses identifying a suspect right is small: (.4)¹⁰ = .000104 or 0.01%. This fraction is smaller than the likelihood of having a crooked cop or a screw-up in the line-up (only one suspect had the right jacket, say). If crooked cops and systemic errors show up 1% of the time, and point to the correct fellow in only 15% of those cases, we find that the chance of being right if ten out of ten agree is (0.0015 + (.4)¹⁰)/(.01 + .4¹⁰ + .1¹⁰) = .16, or 16%. Total agreement on guilt suggests the fellow is innocent!

The graph above, the second in the paper, presents a generalization of the math I just presented: n identical tests of 80% accuracy and three different likelihoods of systemic failure. If this systemic failure rate is 1% and the chance of the error pointing right or wrong is 50/50, the chance of being right is P = (.005 + .4ⁿ)/(.01 + .4ⁿ + .1ⁿ), and is the red curve in the graph above. The authors find you get your maximum reliability when there are two to four agreeing witnesses.
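
Here’s the same formula as a few lines of Python (my restatement, using the illustrative rates from above: 40% right, 10% wrong, 1% systemic failure):

def confidence(n, p_right=0.4, p_wrong=0.1, p_fail=0.01, fail_right=0.5):
    # chance of guilt given n agreeing witnesses, allowing for a systemic failure
    all_right = p_right**n
    all_wrong = p_wrong**n
    return (p_fail*fail_right + all_right) / (p_fail + all_right + all_wrong)

for n in range(1, 11):
    print(n, round(confidence(n), 3))
# peaks near 0.92 at 2-3 witnesses, then drifts back toward 0.5;
# with fail_right = 0.15, ten agreeing witnesses give only about 0.16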

Confidence of guilt as related to the number of judges that agree and the integrity of the judges.

The Royal Society article went on to approve of a feature of Jewish capital-punishment law. In Jewish law, capital cases are tried by 23 judges. To convict, a super-majority (13) must find guilt, but if all 23 judges agree on guilt, the court pronounces the defendant innocent (see chart, or an anecdote about Justice Antonin Scalia). My suspicion, by the way, is that more than 1% of judges and police are crooked or inept, and that the same applies to scientific analysis of mental diseases like diagnosing ADHD or autism, and to predictions about stocks or climate change. (Do 98% of scientists really agree independently?) Perhaps there are so many people in US prisons because of excessive agreement and inaccurate witnesses, e.g. Rubin Carter. I suspect the agreement among climate experts is a similar sham.

Robert Buxbaum, March 11, 2016. Here are some thoughts on how to do science right. Here is some climate data: can you spot a clear pattern of man-made change?

An approach to teaching statistics to 8th graders

There are two main obstacles students have to overcome to learn statistics: one mathematical, one philosophical. The math is difficult, and will be new to a high schooler, and (philosophically) it is rarely obvious what is the true, underlying cause and what is the random accident behind the statistical variation. This philosophical confusion (cause and effect, essence and accident) is a background confusion in the work of even the greatest minds. Accepting and dealing with it is the root of the best research, separating it from blind formula-following, but it confuses the young who try to understand the subject. The young students (especially the best ones) will worry about these issues, compounding the difficulty posed by the math. Thus, I’ll try to teach statistics with a problem or two where the distinction between essential cause and random variation is uncommonly clear.

A good case to get around the philosophical issue is gambling with crooked dice. I show the class a pair of normal-looking dice and a caliper and demonstrate that the dice are not square; virtually every store-bought die is uneven, so finding an uneven one is not a problem. After checking my caliper, students will readily accept that after enough tests some throws will show up more often than others, and will also accept that there is a degree of randomness in the throw, so that any few throws will look pretty fair. I then justify the need for statistics as an attempt to figure out if the dice are loaded in a case where you don’t have a caliper, or are otherwise prevented from checking the dice. The evenness of the dice is the underlying truth, the random part is in the throw, and you want to grasp them both.

To simplify the problem mathematically, I suggest we just consider a crooked coin throw with only two outcomes, heads and tails (not that I have a crooked coin); you’re to try to figure out if the coin is crooked, and if so, how crooked. A similar problem appears with political polling: trying to figure out who will win an election between two people (Mr Head and Ms Tail) from a sampling of only a few voters. For an honest coin or an even election, on each throw there is a 50-50 chance of throwing a head, or finding a supporter of Mr Head. If you do it twice, there is a 25% chance of two heads, a 25% chance of throwing two tails, and a 50% chance of one of each. That’s because there are four possibilities and two ways of getting a Head and a Tail.

Pascal’s triangle

After we discuss the process for a while, and I become convinced they have the basics down, I show the students Pascal’s triangle. Pascal’s triangle shows the various outcomes and the number of ways they can be arrived at. Thus, for example, we see that, by the time you’ve thrown the coin 6 times, or called 6 people, you’ve introduced 64 distinct outcomes, of which 20 (about 1/3) are the expected, even result: 3 heads and 3 tails. There is also only 1 way to get all heads and one way to get all tails. Thus, it is more likely than not that an honest coin will not come up even after 6 (or more) throws, and a poll in an even election will not likely come up even after 6 (or more) calls. Thus, the lack of an even result is hardly convincing that the coin is crooked, or that the election has a clear winner. On the other hand, there is only a 1/32 chance of getting all heads or all tails (2/64). If you call 6 people, and all claim to be for Mr Head, it is likely that Mr Head is the favorite. Similarly, in a sport where one side wins 6 out of 6 times, there is a good possibility that there is a real underlying cause: a crooked coin, or one team really being better than the other.

And now we get to how significant is significant. If you threw 4 heads and 2 tails out of 6 throws, we can accept that this is not significant because there are 15 ways to get this outcome (or 30 if you also include 2 heads and 4 tails) and only 20 ways to get the even outcome of 3-3. But what about if you threw 5 heads and one tail? In that case the ratio is 6/20 and the odds of this being significant are better; similarly, if you called potential voters and found 5 Head supporters and 1 for Tail. What do you do? I would like to suggest you take the ratio as 12/20 — the ratio of both ways to get to this outcome to that of the greatest probability. Since 12/20 = 60%, you could say there is a 60% chance that this result is random, and a 40% chance of significance. What statisticians call this is “suggestive,” at slightly over 1 standard deviation. A standard deviation, also known as σ (sigma), is a minimal standard of significance; it’s when the one-tailed value is 1/2 of the most likely value. In this case, where 6 tosses come in as 5 and 1, we find the ratio to be 6/20. Since 6/20 is less than 1/2, we meet this very minimal standard for “suggestive.” A more normative standard is when the value is 5%. Clearly 6/20 does not meet that standard, but 1/20 does; for you to conclude that the coin is likely fixed after only 6 throws, all 6 have to come up heads or tails.
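
Here is a tiny Python sketch of the counting above (mine, just re-doing row 6 of Pascal’s triangle with the standard binomial-coefficient function):

from math import comb

n = 6
print([comb(n, k) for k in range(n + 1)], 2**n)   # [1, 6, 15, 20, 15, 6, 1] out of 64 sequences
print(comb(n, 3) / 2**n)            # chance of an exact 3-3 split: 20/64, about 0.31
print(2 * comb(n, 5) / comb(n, 3))  # the 12/20 = 0.6 ratio used for a 5-and-1 result
print(2 / 2**n)                     # all heads or all tails: 2/64 = 1/32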

From xkcd. It’s typical in science to say that <5% chances, p < .05, are significant. If things don’t quite come out that way, you redo.

If you graph the possibilities from a large Pascal’s triangle they will resemble a bell curve; in many real cases (not all) your experimental data variation will also resemble this bell curve. From a larger Pascal’s triangle, or a large bell curve, you will find that the 5% value occurs at about σ = 2, that is, at about twice the distance from the average as where σ = 1. Generally speaking, the number of observations you need is inversely proportional to the square of the difference you are looking for. Thus, if you think there is a one-headed coin in use, it will only take 6 or seven observations; if you think the coin is loaded by 10%, it will take some 600 throws to show it.

In many (most) experiments, you can not easily use Pascal’s triangle to get sigma, σ. Thus, for example, if you want to see if 8th graders are taller than 7th graders, you might measure the heights of people in both classes and take an average of all the heights, but you might wonder what sigma is so you can tell if the difference is significant, or just random variation. The classic mathematical approach is to calculate sigma as the square root of the average of the square of the difference of the data from the average. Thus, if the average is <h> = ∑h/N, where h is the height of a student and N is the number of students, we can say that σ = √(∑(<h> – h)²/N). This formula is found in most books. Significance is either specified as 2 sigma, or some close variation. As convenient as this is, my preference is for this graphical version. It also shows if the data is normal — an important consideration.
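
In code, the formula is just a couple of lines. Here’s a sketch with invented heights (in inches), not real class data:

from math import sqrt

heights = [58, 60, 61, 59, 63, 62, 60, 64, 57, 61]   # made-up 8th-grade heights, inches
N = len(heights)
h_avg = sum(heights) / N
sigma = sqrt(sum((h_avg - h)**2 for h in heights) / N)
print(round(h_avg, 1), round(sigma, 2))
# compare the two class averages; a gap of about 2 sigma or more is the usual
# standard for calling the difference significant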

If you find the data is not normal, you may decide to break the data into sub-groups. E.g., if you look at the heights of 7th and 8th graders and you find a lack of normal distribution, you may find you’re better off looking at the heights of the girls and boys separately. You can then compare those two subgroups to see if, perhaps, only the boys are still growing, or only the girls. One should not pick a hypothesis and then test it, but collect the data first and let the data determine the analysis. This was the method of Sherlock Holmes — a very worthwhile read.

Another good trick for statistics is to use a linear regression. If you are trying to show that music helps to improve concentration, try to see if more music improves it more. You want to find a linear relationship, or at least a plausible curve relationship. Generally, there is a relationship if the correlation between (y – <y>) and (x – <x>) is 0.9 or so. A discredited study where the author did not use regressions, but should have, and did not report sub-groups, but should have, involved cancer and genetically modified foods. The author found cancer increased with one sub-group, and publicized that finding, but didn’t mention that cancer didn’t increase in nearby sub-groups of different doses, and decreased in a nearby sub-group. By not including the subgroups, and not doing a regression, the author misled people for 2 years, perhaps out of a misguided attempt to help. Don’t do that.
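
A regression is also only a few lines of code. Here’s a bare-bones sketch that fits a least-squares line and reports the correlation coefficient r, with invented numbers (say, hours of music per day versus a concentration score):

x = [0, 1, 2, 3, 4, 5]            # made-up hours of music per day
y = [52, 55, 57, 62, 63, 68]      # made-up concentration scores

n = len(x)
x_avg, y_avg = sum(x)/n, sum(y)/n
sxy = sum((xi - x_avg)*(yi - y_avg) for xi, yi in zip(x, y))
sxx = sum((xi - x_avg)**2 for xi in x)
syy = sum((yi - y_avg)**2 for yi in y)

slope = sxy/sxx
intercept = y_avg - slope*x_avg
r = sxy/(sxx*syy)**0.5
print(round(slope, 2), round(intercept, 1), round(r, 3))   # r near 0.9 or above suggests a real trend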

Dr. Robert E. Buxbaum, June 5-7, 2015. Lack of trust in statistics, or of understanding of statistical formulas should not be taken as a sign of stupidity, or a symptom of ADHD. A fine book on the misuse of statistics and its pitfalls is called “How to Lie with Statistics.” Most of the examples come from advertising.

Zombie invasion model for surviving plagues

Imagine a highly infectious, people-borne plague for which there is no immunization or ready cure, e.g. leprosy or smallpox in the 1800s, or bubonic plague in the 1500s, assuming that the carrier was fleas on people (there is a good argument that people-fleas were the carrier, not rat-fleas). We’ll call these plagues zombie invasions to highlight the understanding that there is no way to cure these diseases or protect from them aside from quarantining the infected or killing them. Classical leprosy was treated by quarantine.

I propose to model the progress of these plagues to know how to survive one, if it should arise. I will follow a recent paper out of Cornell that highlighted a fact, perhaps forgotten in the 21st century, that population density makes a tremendous difference in the rate of plague-spread. In medieval Europe, plagues spread fastest in the cities because a city dweller interacted with far more people per day. I’ll attempt to simplify the mathematics of that paper without losing any of the key insights. As often happens when I try this, I’ve found a new insight.

Assume that the density of zombies per square mile is Z, and the density of susceptible people is S, in the same units: susceptible population per square mile. We define a bite transmission likelihood, ß, so that dS/dt = -ßSZ. The total rate of susceptibles becoming zombies is proportional to the product of the density of zombies and of susceptibles. Assume, for now, that the plague moves fast enough that we can ignore natural death, immunity, or the birth rate of new susceptibles. I’ll relax this assumption at the end of the essay.

The rate of zombie increase will be less than the rate of susceptible population decrease because some zombies will be killed or rounded up. Classically, zombies are killed by shot-gun fire to the head, by flame-throwers, or removed to leper colonies. However zombies are removed, the process requires people. We can say that dR/dt = kSZ, where R is the density per square mile of removed zombies, and k is the rate factor for killing or quarantining them. From the above, dZ/dt = (ß-k)SZ.

We now have three coupled, non-linear differential equations. As a first step to solving them, we set the derivatives to zero and calculate the end result of the plague: what happens at t –> ∞. Using just equation 1 and setting dS/dt = 0, we see that, since ß ≠ 0, the end result is SZ = 0. Thus, there are only two possible end-outcomes: either S = 0 and we’ve all become zombies, or Z = 0, and the zombies are all dead or rounded up. Zombie plagues can never end in mixed live-and-let-live situations. Worse yet, rounded up zombies are dangerous.

If you start with a small fraction of infected people Z0/S0 <<1, the equations above suggest that the outcome depends entirely on k/ß. If zombies are killed/ rounded up faster than they infect/bite, all is well. Otherwise, all is zombies. A situation like this is shown in the diagram below for a population of 200 and k/ß = .6

Fig. 1. Dynamics of a normal plague (light lines) and a zombie apocalypse (dark) for 199 uninfected and 1 infected. The S and R populations are shown in blue and black respectively. Zombie and infected populations, Z and I, are shown in red; k/ß = 0.6 and τ = tNß. With zombies, the S population disappears. With normal infection, the infected die and some S survive.

Sorry to say, things get worse for higher initial ratios, Z0/S0 >> 0. For these cases, you can kill zombies faster than they infect you, and the last susceptible person will still be infected before the last zombie is killed. To analyze this, we create a new parameter, P = Z + (1 – k/ß)S, and note that dP/dt = 0 for all S and Z; the path of possible outcomes will always be along a path of constant P. We already know that, for any zombies to survive, S = 0. We now use algebra to show that the final concentration of zombies will be Z = Z0 + (1 – k/ß)S0. Free zombies survive so long as the following quantity is positive: Z0/S0 + 1 – k/ß. If Z0/S0 = 1, a situation that could arise if a small army of zombies breaks out of quarantine, you’ll need a high kill ratio, k/ß > 2, or the zombies take over. It’s seen to be harder to stop a zombie outbreak than to stop the original plague. This is a strong motivation to kill any infected people you’ve rounded up, a moral dilemma that appears in some plague literature.
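
Here is a rough numerical sketch (mine, a simple forward-Euler integration, not the Cornell paper’s stochastic code) of the SZR equations above; it reproduces the end-state rule Z_final = Z0 + (1 – k/ß)S0:

def szr(S, Z, R, b, k, dt=0.01, steps=100_000):
    # dS/dt = -b*S*Z,  dZ/dt = (b - k)*S*Z,  dR/dt = k*S*Z
    for _ in range(steps):
        dS, dZ, dR = -b*S*Z, (b - k)*S*Z, k*S*Z
        S, Z, R = S + dS*dt, Z + dZ*dt, R + dR*dt
    return S, Z, R

b, k = 0.005, 0.003                  # k/b = 0.6, the ratio used in the figure
print(szr(199.0, 1.0, 0.0, b, k))    # S goes to ~0; Z ends near 1 + 0.4*199 = 80.6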

Figure 1, from the Cornell paper, gives a sense of the time necessary to reach the final state of S = 0 or Z = 0. For k/ß of .6, we see that it takes a dimensionless time τ of about 25 to reach this final, steady state of all zombies. Here, τ = tNß and N is the total population; it takes more real time to reach τ = 25 if N is low than if N is high. We find that the best course in a zombie invasion is to head for the country, hoping to find a place where N is vanishingly low, or (better yet) where Z0 is zero. This was the main conclusion of the Cornell paper.

Figure 1 also shows the progress of a more normal disease, one where a significant fraction of the infected die on their own or develop a natural immunity and recover. As before, S is the density of the susceptible, R is the density of the removed + recovered, but here I is the density of those Infected by non-zombie disease. The time-scales are the same, but the outcome is different. As before, τ = 25 but now the infected are entirely killed off or isolated, I =0 though ß > k. Some non-infected, susceptible individuals survive as well.

From this observation, I now add a new conclusion, not from the Cornell paper. It seems clear that more immune people will be in the cities. I’ve also noted that τ = 25 will be reached faster in the cities, where N is large, than in the country where N is small. I conclude that, while you will be worse off in the city at the beginning of a plague, you’re likely better off there at the end. You may need to get through an intermediate zombie zone, and you will want to get the infected to bury their own, but my new insight is that you’ll want to return to the city at the end of the plague and look for the immune remnant. This is a typical zombie story-line; it should be the winning strategy if a plague strikes too. Good luck.

Robert Buxbaum, April 21, 2015. While everything I presented above was done with differential calculus, the original paper showed a more-complete, stochastic solution. I’ve noted before that difference calculus is better. Stochastic calculus shows that, if you start with only one or two zombies, there is still a chance to survive even if ß/k is high and there is no immunity. You’ve just got to kill all the zombies early on (gun ownership can help). Here’s my statistical way to look at this. James Sethna, lead author of the Cornell paper, was one of the brightest of my Princeton PhD chums.

Brass monkey cold

In case it should ever come up in conversation, only the picture at left shows a brass monkey. The other is a bronze statue of some sort of a primate. A brass monkey is a rack used to stack cannon balls into a face centered pyramid. A cannon crew could fire about once per minute, and an engagement could last 5 hours, so you could hope to go through a lot of cannon balls during an engagement (assuming you survived).

A brass monkey cannonball holder. The classic monkeys were 10 x 10 and made of navy brass.

Small brass monkey. The classic monkey might have 9 x 9 or 10×10 cannon balls on the lower level.

Bronze sculpture of a primate playing with balls — but look what the balls are sitting on: it’s a dada art joke.

But brass monkeys typically show up in conversation in terms of it being cold enough to freeze the balls off of a brass monkey, and if you imagine an ornamental statue, you’d never guess how cold that could be. Well, for a cannonball holder, the answer has to do with the thermal expansion of metals. Cannon balls were made of iron and the classic brass monkey was made of brass, an alloy with a much-greater thermal expansion than iron. As the temperature drops, the brass monkey contracts more than the iron balls. When the drop is enough, the balls will fall off and roll around.

The thermal expansion coefficient of brass is 18.9 x 10⁻⁴/°C while the thermal expansion coefficient of iron is 11.7 x 10⁻⁴/°C. The difference is 7.2 x 10⁻⁴/°C; this will determine the key temperature. Now consider a large brass monkey, one with 10 x 10 holes on the lower level, 81 at the second, and so on. Though it doesn’t affect the result, we’ll consider a monkey that holds 12 lb cannon balls, a typical size of 1750-1830. Each 12 lb ball is 4.4″ in diameter at room temperature, 20°C in those days. At 20°C, this monkey is about 44″ wide. The balls will fall off when the monkey shrinks more than the balls by about 1/3 of a diameter, 1.5″.

We can calculate ∆T, the temperature change, °C, that is required to lower the width-difference by 1.5″ as follows:

-1.5″ = ∆T x 44″ x 7.2 x 10⁻⁴/°C

We find that ∆T = -47°C. The temperature where this happens is 47 degrees cooler than 20°C, or -27°C. That’s about -17°F, not a very unusual temperature on land, e.g. in Detroit, but at sea, the temperature is rarely much colder than 0°C or 32°F, the temperature where water freezes. If it gets to -17°F at sea, something is surely amiss. To avoid this problem, land-based army cannon crews use a smaller brass monkey — e.g. the 5×5 shown. This stack holds 1/7 as many balls, but holds them to about -74°C, a really cold temperature.
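
Here’s a tiny Python sketch of the arithmetic, using the numbers from the post (the 7.2 x 10⁻⁴/°C difference in coefficients and the one-third-of-a-diameter criterion), so you can try other monkey sizes:

def ball_drop_temperature(balls_per_side, ball_diameter=4.4, t_room=20.0, d_alpha=7.2e-4):
    # temperature (Celsius) at which an n-by-n monkey has shrunk, relative to the
    # balls, by about a third of a ball diameter
    width = balls_per_side * ball_diameter
    dT = (ball_diameter/3.0) / (width * d_alpha)
    return t_room - dT

print(round(ball_drop_temperature(10), 1))   # about -26 C for the 10x10, near the -27 C above
print(round(ball_drop_temperature(5), 1))    # about -73 C for the small 5x5 monkey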

Robert E. Buxbaum, February 21, 2015. Some fun thoughts: Convince yourself that the key temperature is independent of the size of the cannon balls. That is, that I didn’t need to choose 12 pounders. A bit more advanced: what is the equation for the number of balls on any particular base-size monkey? Show that the packing density is no more efficient if the bottom layer were an equilateral triangle, and not a square. If you liked this, you might want to know how much wood a woodchuck chucks if a woodchuck could chuck wood, or on the relationship between mustaches and WWII diplomacy.

Einstein failed high-school math –not.

I don’t know quite why people persist in claiming that Einstein failed high school math. Perhaps it’s to put down teachers –who clearly can’t teach or recognize genius — or perhaps to stake a claim to a higher understanding that’s masked by ADHD — a disease Einstein is supposed to have had. But, sorry to say, it ain’t true. Here’s Einstein’s diploma, 1896. His math and physics scores are perfect. Only his English seems to have been lacking. He would have been 17 at the time.

Albert Einstein’s high school diploma, 1896.

Robert Buxbaum, December 16, 2014. Here’s Einstein relaxing in Princeton. Here’s something on black holes, and on High School calculus for non-continuous functions.