Category Archives: math

Social science is irreproducible, drug tests nonreplicable, and stoves studies ignore confounders.

Efforts to replicate the results of the most prominent studies in health and social science have found them largely irreproducible, with the worst replicability appearing in cancer drug research. The figure below, from the Reproducibility Project: Cancer Biology (Errington et al., 2021), compares the reported effects in 50 cancer drug experiments from 23 papers with the results from repeated versions of the same experiments, looking at a total of 158 effects.

Graph comparing the original, published effect of a cancer drug with the replication effect. The units are whatever units were used in the original study: percent, risk ratio, etc. From “Investigating the replicability of preclinical cancer biology,” Timothy M. Errington et al. (Center for Open Science; Stanford University), December 7, 2021, https://doi.org/10.7554/eLife.71601.

It’s seen that virtually none of the drugs work as originally reported. Those below the dotted, horizontal line behaved the opposite way in the replication studies. About half, shown in pink, showed no significant effect. Of those that behaved positively as originally published, most showed about half the original activity, while two drugs now appear to be far more active. A favorite website of mine, Retraction Watch, is filled with retractions of articles on these drugs.

The general lack of replicability has been called a crisis. It was first seen in the social sciences, e.g. the figure below from this article in Science, 2015. Psychology research is bad enough that Nobel laureate Daniel Kahneman came to disown most of the conclusions in his book, “Thinking, Fast and Slow.” The experiments that underlie his major sections don’t replicate. Take, for example, social priming. Classic studies had claimed that if you take a group of students and have them fill out surveys with words about the aged or the flag, they will then walk more slowly from the survey room or stand longer near a flag. All efforts to reproduce these studies have failed. We now think they are not true. The problem here is that much of education and social engineering is based on such studies. Public policy too. The lack of replicability throws doubt on much of what modern society thinks and does. We like to have experts we can trust; we now have experts we can’t.

From “Estimating the reproducibility of psychological science,” Science, 2015. Social science replication is better than cancer drug replication: about 35% of the classic social science studies replicate to some reasonable extent.

Are gas stoves dangerous? This 2022 environmental study said they are, claiming with 95% confidence that they are responsible for 12.7% of childhood asthma. I doubt the study will be reproducible for reasons I’ll detail below, but for now it’s science, and it may soon be law.

Part of the replication problem is that researchers have been found to lie. They fudge data or eliminate undesirable results (some more, some less; a few are honest), but the journals don’t bother checking. Some researchers convince themselves that they are doing the world a favor, but many seem money-motivated. A foundational study on Alzheimer’s was faked outright: the authors doctored photos using Photoshop and used the fake results to justify approval of non-working, expensive drugs. The researchers got $1B in NIH funding, too. I’d want to see the researchers jailed, long term: it’s grand larceny and a serious violation of trust.

Another cause of this replication crisis, one that particularly hurt Daniel Kahneman’s book, is that many social science researchers do statistically illegitimate studies on populations that are vastly too small to give reliable results. Then, they only publish the results they like. The graph of z-values shown below suggests this is common, at least in some journals, including Personality and Social Psychology Bulletin. The vast fraction of results at ≥95% confidence suggests that researchers don’t publish the 90-95% of their work that doesn’t fit the desired hypothesis. While there has been no detailed analysis of all social science research, it’s clear that this method was used to show that GMO grains caused cancer: the researcher did many small studies and published only the one study where GMOs appeared to cause cancer. I review the GMO study here.

From Ulrich Schimmack, ReplicationIndex.com, January, 2023, https://replicationindex.com/2023/01/08/which-social-psychologists-can-you-trust/. If you really want to get into this he is a great resource.

The chart at left shows Z-scores, where Z = ∆X√n/σ. A Z score above 1.96 generally indicates significance, p < .05. Notice that almost all the studies have Z scores just over 1.96; that is, almost all the studies proved their hypothesis at 95% confidence. That makes it seem that the researchers were very lucky, near prescient. But it’s clear from the distribution that a lot of studies were done but never shown to the public. That is a lot of data thrown out, either by the researchers or by the publishers. If all data were published, you’d expect to see a bell curve. Instead, the Z values are a tiny bit of a bell curve, just the tail end. The implication is that these studies with Z > 1.96 carry far less than 95% confidence. This then shows up in the results being only 25% reproducible. It’s been suggested that you should not throw out all the results in the journal, just look for Z-scores of 3.6 or more. That leaves you with the top 23%, and these should have a good chance of being reproducible. The top graph somewhat supports this, but it’s not that simple.
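To see how selective publication produces this shape, here is a small simulation of my own (a sketch, not Schimmack’s actual method or data): run many underpowered studies of a weak true effect, then “publish” only those that cross Z > 1.96. The published Z values cluster just above the cutoff, the truncated bell curve described above.

```python
import random
import statistics

# Sketch (my own toy numbers): many underpowered studies of a small but
# real effect, with only the "significant" ones reaching the journal.
random.seed(42)

def run_study(true_effect=0.2, n=20):
    """Return the Z score of one small study: Z = mean * sqrt(n) / s."""
    sample = [random.gauss(true_effect, 1.0) for _ in range(n)]
    mean = statistics.fmean(sample)
    s = statistics.stdev(sample)
    return mean * n ** 0.5 / s

all_z = [run_study() for _ in range(10_000)]
published = [z for z in all_z if z > 1.96]   # the file-drawer filter

print(f"studies run:        {len(all_z)}")
print(f"studies published:  {len(published)}")   # only a small fraction
print(f"median published Z: {statistics.median(published):.2f}")
```

Most of the runs land in the file drawer, and the published median Z sits just past the cutoff, even though each published study claims 95% confidence.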

Another classic way to cook the books, as it were, and make irreproducible studies provide the results you seek is to ignore “confounders.” This leads to association-causation errors. As an example, people taking aspirin are observed to have more heart attacks than those who do not, but the confounder is that aspirin is prescribed to those with heart problems; the aspirin actually helps, but appears to hurt. In the case of stoves, it seems likely that poorer, sicker people own gas stoves, that they live in older, moldy homes, and that they cook more at home, frying onions, etc. These are confounders that the study, to my reading, ignores. They could easily be the reason that gas stove owners get more asthma than the richer folks who own electric, induction stoves. If you confuse association with causation, you seem to find that owning the wrong stove causes you to be poor and sick with a moldy home. I suspect that the stove study will not replicate if they correct for the confounders.
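The confounder trap can be made concrete with a toy simulation (my own invented numbers, not the study’s data): a hidden variable, living in an older, moldier home, raises both the chance of owning a gas stove and the chance of asthma. The stove does nothing in this model, yet a naive comparison “finds” a stove effect, and the effect vanishes once you stratify by the confounder.

```python
import random

# My own toy model: mold causes asthma; stove type causes nothing.
random.seed(0)

people = []
for _ in range(100_000):
    moldy_home = random.random() < 0.3
    gas_stove = random.random() < (0.7 if moldy_home else 0.3)
    asthma = random.random() < (0.20 if moldy_home else 0.05)  # mold, not stove
    people.append((moldy_home, gas_stove, asthma))

def asthma_rate(group):
    return sum(a for _, _, a in group) / len(group)

gas = [p for p in people if p[1]]
electric = [p for p in people if not p[1]]
# Naive comparison: gas stove owners look much sicker.
print(f"naive: gas {asthma_rate(gas):.3f} vs electric {asthma_rate(electric):.3f}")

# Stratify by the confounder: within each home type, the stove doesn't matter.
for moldy in (False, True):
    g = [p for p in people if p[0] == moldy and p[1]]
    e = [p for p in people if p[0] == moldy and not p[1]]
    print(f"moldy={moldy}: gas {asthma_rate(g):.3f} vs electric {asthma_rate(e):.3f}")
```

The naive rates differ sharply; the stratified rates are nearly identical. That is the whole association-causation error in a dozen lines.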

I’d like to recommend a book, hardly mathematical: “How to Lie with Statistics” by Darrell Huff ($8.99 on Amazon). I read it in high school. It gives you a sense of what to look out for. I should also mention Dr. Anthony Fauci. He has been going around to campuses saying we should have zero tolerance for those who deny science, particularly health science. Given that so much of health science research is nonreplicable, I’d recommend questioning all of it. Here is a classic clip from the 1973 movie “Sleeper,” where a health food expert wakes up in 2173 to discover that health science has changed.

Robert Buxbaum, February 7, 2023.

The equation behind Tinder, J-swipe, and good chess matchups.

Near the beginning of the movie “The Social Network,” Zuckerberg asks his Harvard roommate, Saverin, to explain the chess rating system. His friend writes an equation on the window; Zuckerberg looks for a while, nods, and uses it as a basis for Facemash, the predecessor of Facebook. The dating site Tinder said it used this equation to match dates, but claims to have moved on from there, somewhat. The same is likely true at J-swipe, a Jewish dating site, and at Christian Mingle.

Scene from “The Social Network”: Saverin shows Zuckerberg the equations for the expected outcome of a chess match between players of different rankings, Ra and Rb.

I’ll explain how the original chess ranking system worked, and then why it also works for dating. If you’ve used Tinder or J-swipe, you know that they provide fairly decent matches based on a brief questionnaire and your pattern of swiping left or right on pictures of people. It is not at all obvious, but your left-right swipes are treated like wins and losses in a chess game, and your first pairings are with people of equal rating.

Start with the chess match equations. These were developed in the 1950s by Arpad Elo (pronounced like “hello” without the h), a physics professor who was the top chess player in Wisconsin at the time. Based on the fact that chess ability changes relatively slowly (usually), he chose to change a person’s rating based on a logistic equation, a sigmoid model of your chances of winning a given match. He set a limit to the amount your rating could change with a single game, but the equation he chose changed your rating fastest when you beat someone much better than you or lost to someone much weaker. Based on lots of inaccurate comparisons, the game results, you get a remarkably accurate rating of your chess ability. Also, as it happens, this chess rating works well to match people for chess games.

The logistic equation, an S curve that can be assumed to relate to the expected outcome of chess matchups or dating opportunities.

For each player in a chess match, we estimate the likelihood that each will win, lose, or tie based on the difference in their ratings, Ra – Rb, and the sigmoid curve at left. We call these expected outcomes Ea for player A and Eb for player B, where Ea = Eb = 50% when Ra = Rb. It’s seen that Ea never exceeds 1; you can never be more than 100% certain of a victory. The S-graph shows several possible estimates of Ea, where x = Ra – Rb and k is a measure of how strongly we imagine this difference predicts the outcome. Elo chose a value of k such that a 400-point difference in rating gave the higher-ranked player a 91% expectation of winning.

To adjust your rating, the outcome of a game is given a number between 0 and 1, where 1 represents a win, 0 a loss, and 0.5 a draw. Your rating changes in proportion to the difference between this outcome and your expected chance of winning. If player A wins, his new rating, Ra’, is determined from the old rating, Ra, as follows:

Ra’ = Ra + 10 (1 – Ea)

It’s seen that one game cannot change your rating by more than 10, no matter how spectacular the win, nor can your rating drop by more than 10 if you lose. If you lose, Ra’ = Ra – 10 Ea. New chess players are given a starting ranking and are matched with other new players at first. For new players, the maximum change is increased to 24, so you can be placed in a proper cohort that much quicker. My guess is that something similar is done with new people on dating sites: a basic rating (or several), and a fast rating change at first that slows down later.
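The update rule above can be sketched in a few lines of Python (a minimal sketch of my own, not any site’s actual code). The expected score uses Elo’s logistic curve with his 400-point scale, and the per-game cap K = 10 matches the equation above:

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Chance that player A wins, per Elo's sigmoid: a 400-point gap -> ~91%."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(r_a: float, r_b: float, outcome_a: float, k: float = 10) -> float:
    """New rating for A; outcome_a is 1 (win), 0 (loss), or 0.5 (draw)."""
    return r_a + k * (outcome_a - expected_score(r_a, r_b))

# A 1600 player upsets a 2000 player: a near-maximal rating gain.
print(round(expected_score(2000, 1600), 3))  # 0.909
print(round(update(1600, 2000, 1.0), 1))     # 1600 + 10*(1 - 0.091) = 1609.1
```

Note how the gain shrinks toward zero when you beat someone you were expected to beat, and approaches the full 10 points for a big upset, exactly the behavior described above.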

As best I can tell, dating apps use one or more ratings to solve a mathematical economics problem called “the stable marriage problem.” Gale and Shapley developed the solution, and Shapley shared the Nobel Prize in economics for work on this problem. The idea is to pair everyone in such a way that no couple would be happier by a swap of partners. It can be shown that there is always a solution that achieves this. If there is a single, agreed-upon ranking, one way of achieving this stable marriage pairing is to pair best with best, second with second, and so on all the way down. The folks at the bottom may not be happy with their mates, but neither is there a pair that would like to switch mates with them.
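The Gale-Shapley deferred-acceptance procedure can be sketched as follows (a toy example of my own; the names and preference lists are invented, and real apps surely do something fancier):

```python
def stable_match(proposer_prefs, acceptor_prefs):
    """Return {proposer: acceptor} such that no pair would rather swap."""
    # Rank table: rank[a][p] = how much acceptor a likes proposer p (0 = best).
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    free = list(proposer_prefs)           # proposers not yet matched
    next_pick = {p: 0 for p in proposer_prefs}
    engaged = {}                          # acceptor -> proposer

    while free:
        p = free.pop()
        a = proposer_prefs[p][next_pick[p]]   # best not-yet-proposed-to
        next_pick[p] += 1
        if a not in engaged:
            engaged[a] = p                    # a accepts tentatively
        elif rank[a][p] < rank[a][engaged[a]]:
            free.append(engaged[a])           # a trades up; old match is free
            engaged[a] = p
        else:
            free.append(p)                    # a rejects; p tries again later
    return {p: a for a, p in engaged.items()}

men = {"ari": ["beth", "cara"], "dov": ["cara", "beth"]}
women = {"beth": ["dov", "ari"], "cara": ["ari", "dov"]}
print(stable_match(men, women))  # {'dov': 'cara', 'ari': 'beth'}
```

In this tiny example each woman prefers the man she didn’t get, but neither man would trade, so no couple can agree to swap: the matching is stable even though not everyone got their first choice.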

Part of this, for better or worse, is physical attractiveness. Even if the low-ranked people are not happy with the people they are matched with, they may be happy to find that those people are reasonably happy with them. Besides a rating based on attractiveness, there are ratings based on age and location, sexual orientation, and religiosity. On J-swipe and Tinder, people are shown others that are similar to them in attractiveness, and similar to their target in other regards. The first people you are shown are people who have already swiped right on you. If you swipe right too, you’ve agreed to a date, at least via text message. Generally, the matches are not bad, and having immediate successes provides a nice jolt of pleasure at the start.

Religious dating sites J-swipe and Christian Mingle work to match men with women, and to match people by claimed orthodoxy to their religion. Tinder is a lot less picky: not only will they match “men looking for men,” but they also find that “men looking for women” will fairly often decide to date other “men looking for women.” The results of actual, chosen pairings then affect future proposed pairings, so that a man who once dates a man will be shown more men as possible dates. In each of the characteristic rankings, when you swipe right it is taken as a win for the person in the picture; if you swipe left, it’s a loss: like a game outcome of 1 or 0. If both of you agree, or both don’t, it’s like a tie. Your rating on the scale of religion or beauty goes up or down in proportion to the difference between the outcome and the prediction. If you date a person of the same sex, it’s likely that your religion rating drops, but what do I know?

One way or another, this system seems to work at least as well as other matchmaking systems that paired people based on age, height, and claims of interest. If anything, I think there is room for far more applications, like matching doctors to patients in a hospital based on needs, skills, and availability, or matching coaches to players.

Robert Buxbaum, December 31, 2020. In February, at the beginning of the COVID outbreak, I claimed that the disease was a lot worse than most thought, but that it would not kill 10% of the population as claimed by the alarmists. The reason: most diseases follow the logistic equation, the same sigmoid.

COVID-19 is worse than SARS, especially for China.

The coronavirus COVID-19 is already a lot worse than SARS, and it’s likely to get even worse. As of today, there are 78,993 known cases and 2,444 deaths. By comparison, from the first appearance of SARS, about December 1, 2002, there were a total of 8,439 cases and 813 deaths. It seems the first COVID-19 patient also appeared about December 1, but the COVID-19 infection moved much faster. Both are viral infections, but the COVID virus seems infectious for more days, including days when the patient is asymptomatic. Quarantine is being used to stop COVID-19; it was successful with SARS. As shown below, by July 2003 SARS had essentially stopped. I don’t think COVID-19 will stop so easily.

The progress of SARS worldwide: a dramatic rise, and it’s over by July 2003. Source: Int J Health Geogr. 2004; 3: 2. Published online 2004 Jan 28. doi: 10.1186/1476-072X-3-2.

We see that COVID-19 started in November, like SARS, but we already have 10 times more cases than the SARS total, and 150 times more than we had at this point in the SARS epidemic. If the disease stops in July, as SARS did, we should expect a total of about 150 times the current number of cases: about 12 million cases by July 2020. Assuming a death rate of 2.5%, that suggests a quarter-million dead. This is a best-case scenario, and it’s not good. It’s about as bad as the Hong Kong flu pandemic of 1968-69, a pandemic that killed approximately 60,000 in the US, and which remains with us, somewhat, today. By the summer of ’69, the spreading rate R° (R-naught) fell below 1 and the disease began to die out, a process I discussed previously regarding measles and the atom bomb, but the disease re-emerged, less infectious, the next winter and the next. A good quarantine is essential to make this best option happen, but I don’t believe the Chinese have a good-enough quarantine.

Several things suggest that the Chinese will not be able to stop this disease, and thus that the spread of COVID-19 will be worse than that of the HK flu and much worse than SARS. For one, both of those diseases centered on Hong Kong, a free, modern country with resources to spend and a willingness to trust its citizens. In fighting SARS, HK passed out germ masks, as many as anyone needed, and posted maps of infection showing places where you could go safely and where you should go only with caution. China is a closed, autocratic country, and it has not treated quarantine this way. Little information is available, and there are not enough masks; the few good masks in China go to the police. Health workers are dying. China has rounded up anyone who talks about the disease, or whom they think may have it. These infected people are locked up with the uninfected in giant dorms, see below. In rooms like this, most of the uninfected will become infected. And, since the disease is deadly, many people try to hide their exposure to avoid being rounded up. In over 80% of COVID cases the symptoms are mild, and somewhat over 1% of cases are asymptomatic, so a lot of people will be able to hide. The more people do this, the poorer the chance that the quarantine will work. Given this, I believe that over 10% of Hubei province is already infected, some 1.5 million people, not the 79,000 that China reports.

Wuhan quarantine “living room”. It’s guaranteed to spread the disease as much as it protects the neighbors.

Also making me think that quarantine will not work as well here as with SARS: there is a big difference in R°, the transmission rate. SARS infected some 2000 people over its first 120 days, December 1 to April 1. Assuming a typical infection time of 15 days, that’s 8 cycles. We calculate R° for this stage as the 8th root of 2000: ⁸√2000 = 2.59. This is, more or less, the number in the literature, and it is not that far above 1. To be successful, the SARS quarantine had to reduce a person’s contacts by a factor of 3. With COVID-19, it’s clear that the transmission rate is higher. Assuming the first case was December 1, we see that there were 73,437 cases in only 80 days, or 5 1/3 cycles. R° is calculated as the 5 1/3 root of 73,437: R° = 8.17. It will take a far higher level of quarantine to decrease R° below 1. The only good news here is that COVID-19 appears to be less deadly than SARS. Based on Chinese numbers, the death rate appears to be about 2000/73,437, or about 3%, varying with age (see table), but these numbers are overly high; I believe there are a lot more cases. Meanwhile, the death rate for SARS was over 9%. For most people infected with COVID-19, the symptoms are mild, like a cold; for another 18%, it’s like the flu. A better estimate of COVID-19’s death rate is 0.5-1%, less deadly than the Spanish flu of 1918. The death rate on the Diamond Princess was 3/600 = 0.5%, with 24% of those aboard infected.
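The back-of-envelope R° arithmetic above can be checked in a couple of lines (my own sketch, using the post’s assumption of a 15-day infection cycle):

```python
# R-naught estimate: cases grow by a factor of R° each infection cycle, so
# after (days / cycle_days) cycles, cases ~ R° ** cycles. Invert for R°.
def r_naught(cases: float, days: float, cycle_days: float = 15) -> float:
    cycles = days / cycle_days
    return cases ** (1 / cycles)

print(round(r_naught(2000, 120), 2))    # SARS, Dec 1 - Apr 1: 2.59
print(round(r_naught(73_437, 80), 2))   # COVID-19, first 80 days: 8.17
```

Note how sensitive this is to the assumed cycle length: a 10-day cycle instead of 15 would give a much lower R° for the same case counts, which is part of why published R° estimates vary so much.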

The elderly are particularly vulnerable. It’s not clear why.

Backing up my value of R°, consider the case of the first Briton to contract the disease. As reported by CNN, he got it at a conference in Singapore in late January. He left the conference asymptomatic on January 24 and spent the next 4 days at a French ski resort, where he infected one person, a child. On January 28, he flew to England, where he infected 8 more before checking himself into a hospital with mild symptoms. That’s nine people infected over 3 weeks. We can expect that schools, factories, and prisons will be even more hospitable to transmission, since everyone sits and eats together. As a worst-case extrapolation, assume that 20% of the world population gets this disease. That’s 1.5 billion people, including 70 million Americans. A 1% death rate suggests we’ll see 700,000 US deaths and 15 million worldwide this year. That’s almost as bad as the Spanish flu of 1918. I don’t think things will be that bad, but they might be. Then again, it could be worse.

If COVID-19 follows the 1918 flu model, the disease will go into semi-remission in the summer and will re-emerge to kill another few hundred thousand Americans in the next fall and winter, and the next after that. Woodrow Wilson got the Spanish flu in the fall of 1918, after it had passed through much of the US, and it nearly killed him. COVID-19 could continue to rampage every year until a sufficient fraction of the population is immune or a vaccine is developed. In this scenario, quarantine will have no long-term effect. My sense is that quarantine and vaccines will work well enough in the US to reduce the effect of COVID-19 to that of the Hong Kong flu (1968), so that the death rate will be only 0.1-0.2%. In this scenario, the one I think most likely, the US will experience some 100,000 deaths, that is, 0.15% of 20% of the population, mostly among the elderly. Without good quarantine or vaccines, China will lose at least 1% of 20%, about 3 million people. In terms of economics, I expect a slowdown in the US and a major problem in China, North Korea, and related closed societies.

Robert Buxbaum, February 18, 2020. (Updated, Feb. 23, I raised the US death totals, and lowered the totals for China).

A series solution to the fussy suitor/secretary problem

One way to look at dating and other life choices is to consider them as decision-time problems. Imagine, for example, that you have a number of candidates for a job, and all can be expected to say yes. You want a recipe that maximizes your chance of picking the best. This might apply to a fabulously wealthy individual picking a secretary, or a husband (Mr Right), in a situation where there are 50 male choices. We’ll assume that you have the ability to recognize who is better than whom, but that your pool has enough ego that you can’t go back to anyone once you’ve rejected him.

Under the above restrictions, I mentioned in this previous post that you maximize your chance of finding Mr Right by dating 36.8% of the fellows without intent to marry. After that, you marry the first fellow who is better than any of the previous ones. My previous post had a link to a solution using Riemann integrals, but I will now show how to do it with more prosaic math: a series. One reason for doing this by series is that it allows you to modify your strategy for a situation where you cannot be guaranteed a yes, or where you’re OK with number 2 but don’t like the high odds of the other method, 36.8%, that you’ll marry no one.

I present this not only for the math interest, but because the above recipe is sometimes presented as good advice for real-life dating, e.g. in a recent Washington Post article. With the series solution, you’re in a position to modify the method for more realistic dating, and for another related situation: cashing in options. Let’s assume you have stock options in a volatile stock company. If the options are good for 10 years, how do you pick when to cash in? This problem is similar to the fussy suitor’s, but the penalty for second best is small.

The solution to all of these problems is to pick a stopping point between the research phase and the decision phase. We will assume you can’t un-cash an option, or continue dating after marriage. We will optimize for this fractional stopping point between phases, a point we will call x. This is the fraction of guys dated without intent of marriage, or the fraction of years you watch the stock before you look to cash in.

Let’s consider the various ways you might find Mr Right given some fractional value x. One way this might work, perhaps the most likely way, is if the #2 person is in the first, rejected group, and Mr Right is in the group after the cutoff, x. We’ll call the chance of finding Mr Right through this arrangement C1, where

C1 = x (1-x) = x – x².

We could use derivatives to solve for the optimal value of x, but there are other ways of finding Mr Right. What if Guy #3 is in the first group, Guys #1 and #2 are both in the second group, and Guy #1 comes earlier in the lineup? You’d still marry Mr Right. We’ll call the chance of finding Mr Right this way C2. The odds of this are

C2 = x (1-x)²/2

= x/2 – x² + x³/2

There is also a C3 and a C4, etc. Your C3 chance of finding Mr Right occurs when Guy #4 is in the first group while #1, #2, and #3 are in the latter group, with Guy #1 first among them.

C3 = x (1-x)³/4 = x/4 – 3x²/4 + 3x³/4 – x⁴/4.

I could try to sum the series, but let’s say I decide to truncate here. I’ll ignore C4, C5, etc., and I’ll further throw out any term of higher order than x². Adding the remaining terms together, I get ∑C = C, where

C ~ 1.75 x – 2.75 x².

To find the optimal x, take the derivative and set it to zero:

dC/dx = 0 ~ 1.75 – 5.5 x

x ~ 1.75/5.5 = 31.8%.

That’s not quite the optimal answer, but it’s close. Based on this, C1 = 21.4%, C2 = 14.8%, C3 = 10.2%, C4 = 7.0%, and C5 = 4.8%. Your chance of finding Mr Right using this stopping point is at least 33.4%. This may not be ideal, but you’re clearly going to be very close to it.
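As a sanity check on the truncated series, here is a quick Monte Carlo sketch of my own (under the post’s assumptions: 50 suitors, perfect rankings, everyone says yes) comparing the 31.8% cutoff with the exact 1/e ≈ 36.8% cutoff:

```python
import random

# Fussy-suitor strategy: reject the first fraction x of suitors outright,
# then take the first suitor better than everyone seen so far.
def success_rate(x: float, n: int = 50, trials: int = 50_000) -> float:
    cutoff = int(n * x)
    wins = 0
    for _ in range(trials):
        suitors = random.sample(range(n), n)   # random order; n-1 is Mr Right
        best_seen = max(suitors[:cutoff], default=-1)
        chosen = next((s for s in suitors[cutoff:] if s > best_seen), None)
        wins += (chosen == n - 1)              # None counts as a failure
    return wins / trials

random.seed(1)
print(f"x = 31.8%: {success_rate(0.318):.3f}")   # close to the optimum
print(f"x = 36.8%: {success_rate(0.368):.3f}")   # the 1/e cutoff
```

Both cutoffs succeed roughly 36-37% of the time, confirming the point above: the truncated-series answer of 31.8% costs you very little versus the exact 1/e stopping point.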

The nice thing about this solution is that it makes it easy to modify your model. Let’s say you decide to assign a negative value to never getting married. That’s easily done using the series method. Let’s say you choose to optimize your chance for either Mr 1 or Mr 2, on the chance that both will be pretty similar and one of them may say no. You can modify your model for that too. You can also use series methods for the possibility that the house you seek is not at the last exit in Brooklyn. For the dating cases, you will find that it makes sense to stop your test-dating earlier; for the parking problem, you’ll find that it’s OK to wait till you’re less than 1 mile away before you settle on a spot. I’ll talk more about this later, but wanted to note that the popular press seems overly impressed by math that they don’t understand, and willing to accept assumptions that bear only the flimsiest relationship to reality.

Robert Buxbaum, January 20, 2020

A mathematical approach to finding Mr (or Ms) Right.

A lot of folks want to marry their special soulmate, and there are many books to help get you there, but I thought I might discuss a mathematical approach that optimizes your chance of marrying the very best under some quite odd assumptions. The set of assumptions is sometimes called “the fussy suitor problem” or the secretary problem. It’s sometimes presented as a practical dating guide, e.g. in a recent Washington Post article. My take is that it’s not a great strategy for dealing with the real world, but neither is it total nonsense.

The basic problem was presented by Martin Gardner in Scientific American in 1960 or so. Assume you’re certain you can get whoever you like (who’s single); assume further that you have a good idea of the number of potential mates you will meet, and that you can quickly identify who is better than whom; you have a desire to marry none but the very best, but you don’t know who’s out there until you date, and you have an inability to go back to someone you’ve rejected. This might be the case if you are a female engineering student studying in a program with 50 male engineers, all of whom have easily bruised egos. Assuming the above, it is possible to show, using Riemann integrals (see solution here), that you maximize your chance of finding Mr/Ms Right by dating 36.8% of the fellows (1/e) without intent to marry, and then marrying the first fellow who’s better than any of the previous ones you’ve dated. I have a simpler, more flexible approach to getting the right answer that involves infinite series; I hope to show off some version of it at a later date.

Bluto, Popeye, or wait for someone better? In the cartoon as I recall, she rejects the first few people she meets, then meets Bluto and Popeye. What to do?

With this strategy, one can show that there is a 63.2% chance you will marry someone, and a 36.8% chance you’ll wed the best of the bunch. There is a decent chance you’ll end up with number 2. You end up with no one if the best guy appears among the early rejects; that’s a 36.8% chance. If you are fussy enough, this is an OK outcome: it’s either the best or no one. I don’t consider this a totally likely assumption, but it’s not that bad, and I find you can recalculate fairly easily for someone OK with number 2 or 3. The optimal strategy then, I think, is to date without intent at the start, as before, but to take a 2nd or 3rd choice if you find you’re unmarried after some secondary cutoff. It’s solvable by series methods, or by dynamic programming.

It’s unlikely that you have a fixed passel of passive suitors, of course, or that you know nothing of guys at the start. It also seems unlikely that you can get anyone to say yes, or that you are so fast at evaluating fellows that there are no errors involved and no time-cost to the dating process. The Washington Post does not seem bothered by any of this, perhaps because the result is “mathematical” and reasonable-looking. I’m bothered, though, in part because I don’t like the idea of dating under false pretenses; it’s cruel. I also think it’s not a winning strategy in the real world, as I’ll explain below.

One true/useful lesson from the mathematical solution is that it’s important to learn from each date. Even a bad date, one with an unsuitable fellow, is not a waste of time so long as you leave with a better sense of what’s out there, and of what you like. A corollary of this, not in the mathematical analysis but from life, is that it’s important to choose your circle of daters. If your circle of friends are all geeky engineers, don’t expect to find Prince Charming among them. If you want Prince Charming, you’ll have to go to balls at the palace, and you’ll have to pass on the departmental wine and cheese.

If you want Prince Charming, you may have to associate with a different crowd from the one you grew up with. Whether that’s a good idea for a happy marriage is another matter.

The assumption here that you know how many fellows there are is not a bad one, to my mind. Thus, if you start dating at 16 and hope to be married by 32, that’s 16 years of dating. You can use this time frame as a stand-in for total numbers. Thus, if you decide to date-for-real after 37%, that’s about age 22, not an unreasonable age. It’s younger than most people marry, but you’re not likely to marry the first person you meet after age 22. Besides, it’s not great dating into your thirties; trust me, I’ve done it.

The biggest problem with the original version of this model, to my mind, comes from the cost of non-marriage just because the mate isn’t the very best, or might not be. This cost gets worse when you realize that, even if you meet Prince Charming, he might say no; perhaps he’s gay, or would like someone royal, or richer. Then again, perhaps the Kennedy boy is just a cad who will drop you at some point (preferably not while crossing a bridge). I would therefore suggest, though I can’t show it’s optimal, that you start out by collecting information on guys (or girls) by observing the people around you whom you know: watch your parents, your brothers and sisters, your friends, uncles, aunts, and cousins. Listen to their conversation and you can get a pretty good idea of what’s available even before your first date. If you don’t like any of them, and find you’d like a completely different circle, it’s good to know early. Try to get a service job within ‘the better circle.’ Working with people you think you might like to be with, long term, is a good idea even if you don’t decide to marry into the group in the end.

Once you’ve observed and interacted with the folks you think you might like, you can start dating for real from the start. If you’re super-organized, you can create a chart of the characteristics, and the ‘tells’ of those characteristics, that you really want, along with what is nice but not a deal-breaker. For these first dates, you can figure out the average and standard deviation, and aim for someone in the top 5%: someone roughly two standard deviations above the average. This is simple analysis-of-variance (ANOVA) math, math that I discussed elsewhere. In general, you’ll get to someone in the top 5% by dating ten people chosen with help from friends. Starting this way, you’ll avoid being unreasonably cruel to date #1, nor will you lose out on a great mate early on.

Some effort should be taken to look at the fellow’s family and his/her relationship with them. If their relationship is poor, or their behavior is, your kids may turn out similar.

After a while, you can say, I'll marry the best I see, or the best that seems like he/she will say yes (a smaller sub-set). You should learn from each date, though, and don't assume you can instantly size someone up. It's also a good idea to meet the family, since many things you would not expect seem to be inheritable. Meeting some friends is a good idea too. Even professionals can be fooled by a phony, and a phony will try to hide his/her family and friends. In the real world, dating should take time, and even if you discover that he/she is not for you, you'll learn something about what is out there: what the true average and standard deviation is. It's not even clear that people fall on a normal distribution, by the way.

Don’t be too upset if you reject someone, and find you wish you had not. In the real world you can go back to one of the earlier fellows, to one of the rejects, if one does not wait too long. If you date with honesty from the start you can call up and say, ‘when I dated you I didn’t realize what a catch you were’ or words to that effect. That’s a lot better than saying ‘I rejected you based on a mathematical strategy that involved lying to all the first 36.8%.’

Robert Buxbaum, December 9, 2019. This started out as an essay on the mathematics of the fussy suitor problem. It morphed into a father's dating advice to his marriage-age daughters. Here's the advice I'd given to one of them at 16. I hope to do more with the math in a later post.

How long could you make a suspension bridge?

The above is one of the engineering questions that puzzled me as a student engineer at Brooklyn Technical High School and at Cooper Union in New York. The Brooklyn Bridge stood as a wonder of late-1800s engineering, and it had recently been eclipsed by the Verrazano bridge, a pure suspension bridge, at the time the longest and heaviest in the world. How long could a bridge be made, and why did the Brooklyn Bridge have those catenary cables, when the Verrazano didn't? (Sometimes I'd imagine a Chinese engineer being asked the title question, and answering "Certainly, but How Long is my cousin.")

I found the above problem unsolvable with the basic calculus at my disposal, because it was clear that both the angle of the main cable and its tension varied significantly along the length of the cable. Eventually I solved this problem using a big dose of geometry and vectors, as I'll show.

Vector diagram of forces on the cable at the center-left of the bridge.


Consider the vector diagram above of forces on a section of the main cable near the center of the bridge. At the right, the center of the bridge, the cable is horizontal, and has a significant tension. Let's call that T°. Away from the center of the bridge, there is a vertical cable supporting a fraction of the roadway. Let's call the force on this point w. It equals the weight of this section of cable and this section of roadway. Because of this weight, the main cable bends upward to the left and carries more tension than T°. The tangent (slope) of the upward curve will equal w/T°, and the new tension will be the vector sum along the new slope. From geometry, T = √(w² + T°²).

Vector diagram of forces on the cable further from the center of the bridge.


As we continue from the center, there are more and more verticals, each supporting approximately the same weight, w. From geometry, if w weight is added at each vertical, the change in slope is always w/T°, as shown. When you reach the towers, the weight of the bridge must equal 2T sin Θ, where Θ is the angle of the bridge cable at the tower and T is the tension in the cable at the tower.
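The running vector sum can be checked numerically. Here is a sketch, with T°, w, and the number of verticals chosen as round illustrative numbers, not the Verrazano's real loads:

```python
import math

# Illustrative numbers, not a real bridge's loads.
T0 = 1000.0   # horizontal tension at the center of the span
w = 10.0      # weight hung at each vertical (cable + roadway section)
n = 100       # verticals between the center and one tower

slope = n * w / T0                       # each vertical adds w/T0 to the slope
theta = math.atan(slope)                 # cable angle at the tower
T = T0 * math.sqrt(1 + slope**2)         # tension at the tower (vector sum)
bridge_weight = 2 * T * math.sin(theta)  # both halves: weight = 2 T sin(theta)

print(round(T, 1), round(bridge_weight, 1))  # bridge_weight equals 2*n*w
```

With these numbers the slope at the tower is 1 (45°), and the total supported weight comes out exactly 2·n·w, as the geometry says it must.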

The limit to the weight of a bridge, and thus its length, is set by the maximum tension in the main cable, T, and the maximum angle, Θ, the angle at the towers. I assumed that the maximum bridge would be made of T1 bridge steel, the strongest material I could think of, with a tensile strength of 100,000 psi, and I imagined a maximum angle at the towers of 30°. Since there are two towers and sin 30° = 1/2, it becomes clear that, with this 30° cable angle, the tension at the tower must equal the total weight of the bridge. Interesting.

Now, to find the length of the bridge, note that the weight of the bridge is proportional to its length times the density and cross-section of the metal. I imagined a bridge where half of the weight was in the main cable, and the rest was in the roadway, cars and verticals. If the main cable is made of T1 "bridge steel", the density of the cable is 0.2833 lb/in³, and the effective density of the bridge is twice this. If the bridge cable is at its yield strength, 100,000 psi, at the towers, it must be that each square inch of cable supports 50,000 pounds of cable and 50,000 lbs of cars, roadway and verticals. The maximum length (with no allowance for wind or a safety factor) is thus

L(max) = 100,000 psi / (2 × 0.2833 lb/in³) = 176,500 inches = 14,700 feet = 2.79 miles.
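A quick script confirms the arithmetic; the strength and density figures are the ones from the text:

```python
# Maximum length: yield strength divided by twice the cable's weight density
# (half the bridge weight in cable, half in roadway, per the text).
sigma = 100_000      # psi, yield strength of T1 bridge steel
rho = 0.2833         # lb/in^3, density of steel

L_max_in = sigma / (2 * rho)
L_max_ft = L_max_in / 12
L_max_mi = L_max_ft / 5280

print(round(L_max_in), round(L_max_ft), round(L_max_mi, 2))
```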

This was more than three times the length of the Verrazano bridge, whose main span is 4,260 ft. I attributed the difference to safety factors, wind, price, etc. I then set out to calculate the height of the towers, and the only rational approach I could think of involved calculus. Fortunately, I could integrate for the curve now that I knew the slope changed linearly with distance from the center. That is, for every length between verticals, the slope changes by the same amount, w/T°. This is to say that d²y/dx² = w/T°, and the curve this describes is a parabola.

Rather than solving with heavy calculus, I noticed that the slope, dy/dx increases in proportion to x, and since the slope at the end, at L/2, was that of a 30° triangle, 1/√3, it was clear to me that

dy/dx = (x/(L/2))/√3

where x is the distance from the center of the bridge, and L is the length of the bridge, 14,700 ft. That is, dy/dx = 2x/(L√3).

We find that:
H = ∫dy = ∫ 2x/(L√3) dx = L/(4√3) = 2122 ft,
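The tower-height integral can be checked the same way; L = 14,700 ft is the span length found above:

```python
import math

# Tower height from H = L/(4*sqrt(3)): integrate dy/dx = 2x/(L*sqrt(3))
# from the center to x = L/2, with a 30-degree cable angle at the tower.
L = 14_700                   # ft, the maximum span found above
H = L / (4 * math.sqrt(3))   # the integral evaluated at x = L/2

print(round(H))   # ≈ 2122 ft
```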

where H is the height of the towers. Calculated this way, the towers were quite tall, higher than that of any building then standing, but not impossibly high (the Dubai tower is higher). It was fairly clear that you didn’t want a tower much higher than this, though, suggesting that you didn’t want to go any higher than a 30° angle for the main cable.

I decided that suspension bridges had some advantages over other designs in that they avoid the problem of beam "buckling." Further, they readjust their shape somewhat to accommodate heavy point loads. Arch and truss bridges don't do this, quite. Since the towers were quite a lot taller than any building then in existence, I came to decide that this length, 2.79 miles, was about as long as you could make the main span of a bridge.

I later came to discover materials with a higher strength per weight (titanium, fiber glass, aramid, carbon fiber…) and came to think you could go longer, but the calculation is the same, and any practical bridge would be shorter, if only because of the need for a safety factor. I also came to recalculate the height of the towers without calculus, and got an answer that was shorter, for some versions a hundred feet shorter, as shown here. In terms of wind, I note that you could make the bridge so heavy that you don't have to worry about wind except for resonance effects. Those effects are significant, but they were not my concern at the moment.

The Brooklyn Bridge showing its main cable suspension structure and its catenaries.

Now to discuss catenaries, the diagonal wires that support many modern bridges and that, on the Brooklyn Bridge, provide support at the ends of the spans only. Since the catenaries support some of the weight of the Brooklyn Bridge, they decrease the need for very thick cables and very high towers. The benefit goes down as the catenary angle approaches the horizontal, though: the lower the angle, the longer the catenary, and the lower the fraction of its force that goes into lift. I suspect this is why Roebling used catenaries only near the Brooklyn Bridge towers, for angles no more than about 45°. I was very proud of all this when I thought it through and explained it to a friend. It still gives me joy to explain it here.

Robert Buxbaum, May 16, 2019.  I’ve wondered about adding vibration dampers to very long bridges to decrease resonance problems. It seems like a good idea. Though I have never gone so far as to do calculations along these lines, I note that several of the world’s tallest buildings were made of concrete, not steel, because concrete provides natural vibration damping.

Statistics for psychologists, sociologists, and political scientists

In terms of mathematical structure, psychologists, sociologists, and poly-sci folks all do the same experiment, over and over, and all use the same simple statistical calculation, the ANOVA, to determine its significance. I thought I'd explain that experiment and the calculation below, walking you through an actual paper (one I find interesting) in psychology / poly-sci. The results are true at the 95% level (that's the same as saying p < 0.05): a significant achievement in poly-sci, but that doesn't mean the experiment means what the researchers think. I'll then suggest another statistical measure, r-squared, that deserves to be used along with ANOVA.

The standard psychological or poly-sci research experiment involves taking a group of people (often students) and giving them a questionnaire or test to measure their feelings about something: the war in Iraq, their fear of flying, their degree of racism, etc. This is scored on some scale to get an average. Another, near-identical group of subjects is now brought in and given a prompt: shown a movie, or a picture, or asked to visualize something, and then given the same questionnaire or test as the first group. The prompt may have changed the average score, up or down; an ANOVA (analysis of variance) is used to show whether this change is one the researcher can have confidence in. If the confidence exceeds 95%, the researcher goes on to discuss the significance, and submits the study for publication. I'll now walk you through the analysis the old-fashioned way: the way it would have been done in the days of hand calculators and slide-rules, so you understand it. Even done this way, it only takes 20 minutes or so: far less time than the experiment.

I'll call the "off the street score" for the ith subject Xi°. It would be nice if papers would publish these, but usually they do not. Instead, researchers publish the survey and the average score, something I'll call X°-bar, or simply X°. They also publish a standard deviation, calculated from the above, something I'll call SD°. In older papers, it's called sigma, σ. Sigma and SD are the same thing. Now, moving to the group that's been given the prompt, I'll call the score for the ith subject Xi*. Similar to the above, the average for this prompted group is X*-bar, or X*, and the standard deviation SD*.

I have assumed that there is only one prompt, identified by an asterisk, *: one particular movie, picture, or challenge. For some studies there will be different concentrations of the prompt (show half the movie, for example), and some researchers throw in completely different prompts. The more prompts, the more likely you are to get false positives with an ANOVA, and the more likely you are to need to go to r-squared. Warning: many researchers multiply prompts anyway, whether intentionally (and crookedly) or through obliviousness to the math. Either way, if you have a study with ten prompt variations and you are testing to 95% confidence, your result is meaningless: random variation alone will give you a positive result about 50% of the time. A crooked researcher used ANOVA and 20 prompt variations "to prove to 95% confidence" that genetically modified food caused cancer; I'll assume (trust) you won't fall into that mistake, and that you won't use the ANOVA knowledge I provide to get notoriety and easy publication of total, un-reproducible nonsense. If you have more than one or two prompts, you've got to add r-squared (and it's probably a good idea even with one or two). I'll discuss r-squared at the end.

I’ll now show how you calculate X°-bar the old-fashioned way, as would be done with a hand calculator. I do this, not because I think social-scientists can’t calculate an average, nor because I don’t trust the ANOVA function on your laptop or calculator, but because this is a good way to familiarize yourself with the notation:

X°-bar = X° = (1/n°) ∑ Xi°.

Here, n° is the total number of subjects who take the test but who have not seen the prompt. Typically, for professional studies, there are 30 to 50 of these. ∑ means sum, and Xi° is the score of the ith subject, as I'd mentioned. Thus, ∑ Xi° indicates the sum of all the scores in this group, and multiplying by 1/n° gives the average, X°-bar. Convince yourself that this is, indeed, the formula. The same formula is used for X*-bar. For a hand calculation, you'd write the numbers 1 to n° in the left column of some paper, and each Xi° value next to its number, leaving room for more work to follow. This used to be done in a notebook; nowadays a spreadsheet will make it easier. Write the value of X°-bar on a separate line at the bottom.

T-table


In virtually all cases you'll find that X°-bar is different from X*-bar, but there will be a lot of variation among the scores in both groups. The ANOVA (analysis of variance) is a simple way to determine whether the difference is significant enough to mean anything. Statistics books make this calculation seem far too complicated: they go into too much math-theory, or consider too many types of ANOVA tests, most of which make no sense in psychology or poly-sci but were developed for ball-bearings and cement. The only ANOVA approach you need involves the T-table shown and the 95% confidence column (this is the same as the two-tailed p < 0.05 column). Though 99% is nice, it isn't necessary. Other significances are on the chart, but they're not really useful for publication. If you do this on a calculator, the table is buried in there someplace. The confidence level is written across the bottom line of the chart; 95% is seen to be the same as a two-tailed P value of 0.05 = 5%, seen on the third line from the top of the chart. For about 60 subjects (two groups of 30, say) and 95% certainty, T = 2.000. This is a very useful number to carry about in your head. It allows you to eyeball your results.

In order to use this T value, you will have to calculate the standard deviation, SD, for both groups, and the standard variation between them, SV. Typically, the SDs will be similar, but large, and the SV will be much smaller. First, let's calculate SD° by hand. To do this, you first calculate its square, SD°²; once you have that, you'll take the square root. Take each of the Xi° scores, each of the scores of the first group, and calculate the difference between that score and the average, X°-bar. Square each difference and divide by (n°−1). These numbers go into their own column, each in line with its own Xi°. The sum of this column will be SD°². Put in mathematical terms, for the original group (the ones that didn't see the movie),

SD°² = 1/(n°−1) ∑ (Xi° − X°)²

SD° = √(SD°²).

Similarly, for the group that saw the movie, SD*² = 1/(n*−1) ∑ (Xi* − X*)²

SD* = √(SD*²).

As before, n° and n* are the number of subjects in each of the two groups. Usually you'll aim for these to be the same, but often they'll be different. Some students will end up only seeing half the movie, some will see it twice, even if you don't plan it that way; these students' scores cannot be used with the above, but be sure to write them down; save them. They might have tremendous value later on.

Write down the standard deviations, SD for each group calculated above, and check that the SDs are similar, differing by less than a factor of 2. If so, you can take a weighted average and call it SD-bar, and move on with your work. There are formulas for this average, and in some cases you’ll need an F-table to help choose the formula, but for my purposes, I’ll assume that the SDs are similar enough that any weighted average will do. If they are not, it’s likely a sign that something very significant is going on, and you may want to re-think your study.

Once you calculate SD-bar, the weighted average of the SD’s above, you can move on to calculate the standard variation, the SV between the two groups. This is the average difference that you’d expect to see if there were no real differences. That is, if there were no movie, no prompt, no nothing, just random chance of who showed up for the test. SV is calculated as:

SV = SD-bar √(1/n° + 1/n*).

Now, go to your T-table and look up the T value for two-tailed tests at 95% certainty and N = n° + n*. You probably learned that you should be using degrees of freedom, where, in this case, df = N−2, but for normal group sizes the T value will be nearly the same. As an example, I'll assume that N is 80, two groups of 40 subjects; the degrees of freedom are then N−2, or 78. If you look at the T-table for 95% confidence, you'll notice that the T value for 80 df is 1.990. You can use this. The value for 62 subjects would be 2.000, and the true value for 80 subjects (78 df) is 1.991; the least of your problems is the difference between 1.991 and 1.990. It's unlikely your test is ideal, or your data normally distributed; such things cause far more problems for your results. If you want to see how to deal with these, go here.

Assuming random variation, and 80 subjects tested, we can say that, so long as X°-bar differs from X*-bar by at least 1.99 times the SV calculated above, you’ve demonstrated a difference with enough confidence that you can go for a publication. In math terms, you can publish if and only if: |X°-X*| ≥ 1.99 SV where the vertical lines represent absolute value. This is all the statistics you need. Do the above, and you’re good to publish. The reviewers will look at your average score values, and your value for SV. If the difference between the two averages is more than 2 times the SV, most people will accept that you’ve found something.

If you want any of this to sink in, you should now do a worked problem with actual numbers, in this case two groups, of 11 and 10 students. It's not difficult, but you should at least try it with these real numbers. When you are done, go here. I will grind through to the answer. I'll also introduce r-squared.

The worked problem: Assume you have two groups of people tested for racism, or political views, or some allergic reaction. One group was given nothing more than the test, the other group is given some prompt: an advertisement, a drug, a lecture… We want to know if we had a significant effect at 95% confidence. Here are the test scores for both groups assuming a scale from 0 to 3.

Control group: 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3. These are the Xi°s; there are 11 of them.

Prompted group: 0, 1, 1, 1, 2, 2, 2, 2, 3, 3. These are the Xi*s; there are 10 of them.
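Once you've tried the hand calculation, here is the same procedure scripted, a sketch following the steps above (the 2.09 is the two-tailed 95% T value for df = 19, from a T-table):

```python
import math

# The hand ANOVA/t calculation above, scripted as a check.
control = [0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3]    # the Xi° scores, n° = 11
prompted = [0, 1, 1, 1, 2, 2, 2, 2, 3, 3]      # the Xi* scores, n* = 10

def mean(xs):
    return sum(xs) / len(xs)

def sd(xs):   # sample standard deviation, the SD of the text
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

n0, n1 = len(control), len(prompted)
m0, m1 = mean(control), mean(prompted)

# SD-bar: the two SDs pooled, weighted by degrees of freedom; then SV.
sd_bar = math.sqrt(((n0 - 1) * sd(control) ** 2 +
                    (n1 - 1) * sd(prompted) ** 2) / (n0 + n1 - 2))
sv = sd_bar * math.sqrt(1 / n0 + 1 / n1)

significant = abs(m0 - m1) >= 2.09 * sv
print(round(m0, 3), round(m1, 3), round(sv, 3), significant)
```

Try the hand version first; the script only confirms your arithmetic.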

On a semi-humorous side: Here is the relationship between marriage and a PhD.

Robert Buxbaum, March 18, 2019. I also have an explanation of loaded dice and flipped coins; statistics for high school students.

A probability paradox

Here is a classic math paradox for your amusement, and perhaps your edification: (edification is a fancy word for: beware, I’m trying to learn you something).

You are on a TV game show where you will be asked to choose between two identical-looking envelopes. All you know about the envelopes is that one of them has twice as much money as the other. The envelopes are shuffled, and you pick one. You peek in and see that your envelope contains $400, and you feel pretty good. But then you are given a choice: you can switch your envelope for the other one, the one you didn't take. You reason that the other envelope has either $800 or $200 with equal probability. That is, a switch will either net you a $400 gain, or lose you $200. Since $400 is bigger than $200, you switch. Did that decision make sense? It seems that, at this game, every contestant should switch envelopes. Hmm.

The solution follows: The problem with this analysis is an error common in children and politicians: the confusion between your lack of knowledge of a thing, and actual variability in the system. In this case, the contestant is confusing his (or her) lack of knowledge of whether he/she has the bigger envelope or the smaller one, with the fixed fact that the total between the two envelopes has already been set. It is some known total; in this case it is either $600 or $1200. Let's call this unknown sum y. There is a 50% chance that you are now holding 2/3 y and a 50% chance you are holding only 1/3 y. Therefore, the expected value of your current envelope is 1/3 y + 1/6 y = 1/2 y. Similarly, the other envelope has a value of 1/2 y; there is no advantage in switching once it is accepted that the total, y, had already been set before you got to choose an envelope.
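A quick Monte Carlo makes the point concrete. This is my own sketch, with arbitrary possible amounts; the envelope pair is fixed before the choice, exactly as in the analysis above:

```python
import random

# Simulate the game: the pair (y/3, 2y/3) is fixed before you choose,
# so always-switching can't beat always-keeping.
random.seed(1)

keep_total = switch_total = 0
for _ in range(100_000):
    small = random.choice([100, 200, 400])   # arbitrary possible small amounts
    envelopes = [small, 2 * small]
    random.shuffle(envelopes)
    keep_total += envelopes[0]     # the envelope you picked
    switch_total += envelopes[1]   # the one you'd get by switching

print(round(keep_total / switch_total, 3))   # ≈ 1.0: no advantage either way
```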

And here, unfortunately, is the lesson: The same issue applies in reverse when it comes to government taxation. If you assume that the total amount of goods produced by the economy is always fixed to some amount, then there is no fundamental problem with high taxes. You can print money, or redistribute it to anyone you think is worthy (more worthy than the person who has it now) and you won't affect the usable wealth of the society. Some will gain, others will lose, and likely you'll find you have more friends than before. On the other hand, if you assume that government redistribution will affect the total, that there is some relationship between reward and the amount produced, then to the extent that you diminish the relation between work and income, or savings and wealth, you diminish the total output and wealth of your society. While some balance is needed, a redistribution that aims at identical outcomes will result in total poverty.

This is a variant of the "two-envelopes problem," originally posed in 1912 by the German Jewish mathematician Edmund Landau. It is described, with related problems, by Prakash Gorroochurn in Classic Problems of Probability, Wiley, 314 pp., ISBN: 978-1-118-06325-5. Wikipedia article: Two Envelopes Problem.

Robert Buxbaum, February 27, 2019

Calculating π as a fraction

Pi is a wonderful number, π = 3.14159265…. It's very useful: the ratio of the circumference of a circle to its diameter, or of the area of a circle to the square of its radius. But it is irrational: one can show that it cannot be written as an exact fraction. When I was in middle school, I thought to calculate pi from approximations of the circumference or area, but found that, as soon as I got past some simple techniques, I was left with massive sums involving lots of square roots. Even with a computer, I found this slow, annoying, and aesthetically unpleasing: I was calculating one irrational number from the sum of many other irrational numbers.

At some point, I moved to try solving via the following fractional sum (Gregory and Leibniz).

π/4 = 1/1 -1/3 +1/5 -1/7 …

This was an appealing approach, but I found the series converges amazingly slowly. I tried to make it converge faster by combining terms, but that just made the terms more complex; it didn’t speed convergence. Next to try was Euler’s formula:

π²/6 = 1/1 + 1/4 + 1/9 + ….

This series converges barely faster than the Gregory/Leibniz series, and now I've got a square root to deal with. And that brings us to my latest attempt, one I'm pretty happy with discovering (I'm probably not the first). I start with the Taylor series for sin x, with x measured in radians: 180° = π radians; 30° = π/6 radians. With the angle x in radians, one can show that

sin x = x − x³/6 + x⁵/120 − x⁷/5040 + …

Notice that the series is fractional and that the denominators get large fast. That suggests that the series will converge fast (2 to 3 terms?). To speed things up further, I chose to solve the above for sin 30° = 1/2 = sin π/6. Truncating the series to the first term gives us the following approximation for pi.

1/2 = sin (π/6) ≈ π/6.

Rearrange this and you find π ≈ 6/2 = 3.

That's not bad for a first-order solution. The Gregory/Leibniz series would have gotten me π ≈ 4, and the Euler series π ≈ √6 = 2.45…: I'm ahead of the game already. Now, let's truncate to the second term.

1/2 ≈ π/6 − (π/6)³/6.

In theory, I could solve this via the cubic-equation formula, but that would leave me with two square roots, something I'd like to avoid. Instead, and here's my innovation, I'll substitute 3 + ∂ for π. I'll then use the binomial theorem to claim that π³ ≈ 27 + 27∂ = 27(1 + ∂). Put this into the equation above and we find:

1/2 = (3 + ∂)/6 − 27(1 + ∂)/1296

Rearranging and solving for ∂, I find that

27/216 = ∂ (1 − 27/216) = ∂ (189/216)

∂ = 27/189 = 1/7 = 0.1428…

If π ≈ 3 + ∂, I’ve just calculated π ≈ 22/7. This is not bad for an approximation based on just the second term in the series.
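The second-order step can be checked with exact fractions; a short sketch:

```python
from fractions import Fraction

# Solve d*(1 - 27/216) = 27/216 for d exactly, then pi ≈ 3 + d,
# as in the binomial step above.
d = Fraction(27, 216) / (1 - Fraction(27, 216))
pi_2term = 3 + d

print(d, pi_2term)   # 1/7 and 22/7
```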

Where to go from here? One thought was to revisit the second term, and now say that π = 22/7 + ∂, but it seemed wrong to ignore the third term. Instead, I'll include the third term, and say that π/6 = 11/21 + ∂. Extending the derivative approximations I used above, (π/6)³ ≈ (11/21)³ + 3∂(11/21)², etc., I find:

1/2 ≈ (11/21 + ∂) − (11/21)³/6 − 3∂(11/21)²/6 + (11/21)⁵/120 + 5∂(11/21)⁴/120.

For a while I tried to solve this for ∂ as a fraction using long-hand algebra, but I kept making mistakes. Thus, I've chosen to use two faster options: decimals, or Wolfram Alpha. Using decimals is simpler. I find: 11/21 ≈ 0.523810, (11/21)² = 0.274376; (11/21)³ = 0.143721; (11/21)⁴ = 0.075282, and (11/21)⁵ = 0.039434.

Put these numbers into the original equation and I find:

1/2 − 0.52381 + 0.143721/6 − 0.039434/120 = ∂ (1 − 0.274376/2 + 0.075282/24),

∂ = −0.000185/0.86595 ≈ −0.000214. Based on this,

π ≈ 6 (11/21 − 0.000214) = 3.141573… Not half bad.

Alternately, using Wolfram alpha to reduce the fractions,

1/2 − 11/21 + 11³/(6·21³) − 11⁵/(120·21⁵) = ∂ (24·21⁴/(24·21⁴) − 12·11²·21²/(24·21⁴) + 11⁴/(24·21⁴))

∂ = −90491/424394565 ≈ −0.000213618. This is a more exact solution, but it gives a result that's no more accurate, since it is based on a 3-term approximation of the infinite series.

We find that π/6 ≈ 0.523596, or, in fractional form, that π ≈ 444422848/141464855 = 3.14158.

Either approach seems OK in terms of accuracy: I can’t imagine needing more (I’m just an engineer). I like that I’ve got a fraction, but find the fraction quite ugly, as fractions go. It’s too big. Working with decimals gets me the same accuracy with less work — I avoided needing square roots, and avoided having to resort to Wolfram.
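For anyone wanting to check the decimal arithmetic, here's the same third-order step scripted (my own check, not part of the derivation above):

```python
import math

# Linearize sin(x) = x - x^3/6 + x^5/120 about x = 11/21 and solve
# for the small correction d, as in the decimal route above.
a = 11 / 21
lhs = 0.5 - a + a**3 / 6 - a**5 / 120   # the constant terms
coeff = 1 - a**2 / 2 + a**4 / 24        # the coefficient of d
d = lhs / coeff

pi_est = 6 * (a + d)
print(round(pi_est, 5))   # ≈ 3.14158
```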

As an experiment, I'll see if I get a nicer fraction if I drop the last term, 11⁴/(24·21⁴): it is a small correction to a small number, ∂. The equation is now:

1/2 − 11/21 + 11³/(6·21³) − 11⁵/(120·21⁵) = ∂ (24·21⁴/(24·21⁴) − 12·11²·21²/(24·21⁴)).

I'll multiply both sides by 24·21⁴ and then by (5·21) to find that:

12·21⁴ − 24·11·21³ + 4·21·11³ − 11⁵/(5·21) = ∂ (24·21⁴ − 12·11²·21²),

60·21⁵ − 120·11·21⁴ + 20·21²·11³ − 11⁵ = ∂ (120·21⁵ − 60·11²·21³).

Solving for π, I now get 221406169/70476210 = 3.1415731.

It's still an ugly fraction, about as accurate as before. As with the decimal version, I got to 5-decimal accuracy without having to deal with square roots, but I still had to go to Wolfram. If I were to go further, I'd start with the pi value above in decimal form, π = 3.141573 + ∂; I'd add the 7th-power term, and I'd stick to decimals for the solution. I imagine I'd add 4-5 more decimals that way.

Robert Buxbaum, April 2, 2018

Beyond oil lies … more oil + price volatility

One of many best selling books by Kenneth Deffeyes

While I was at Princeton, one of the most popular courses was geology 101, taught by Dr. Kenneth S. Deffeyes. It was a sort of "Rocks for Jocks," but had an unusual bite, since Dr. Deffeyes focussed particularly on the geology of oil. Deffeyes had an impressive understanding of oil and oil production, and one outcome of this was his certainty that US oil production had peaked in 1970, and that world oil was about to run out too. The prediction that US oil production had peaked was not original to him. It was called Hubbert's peak, after King Hubbert, who correctly predicted (rationalized?) the date, but published it only in 1971. What Deffeyes added to Hubbert's analysis was a simplified mathematical justification and a new prediction: that world oil production would peak in the 1980s, or 2000, and then run out fast. By 2005, the peak date was fixed to November 24 of that year: Thanksgiving day 2005 ± 3 weeks.

As with any prediction of global doom, I was skeptical, but I generally trusted the experts, and virtually every expert was on board to predict gloom in the near future. A British group, The Institute for Peak Oil, picked 2007 for the oil to run out, and several movies expanded the theme, e.g. Mad Max. I was convinced enough to direct my PhD research to nuclear fusion engineering, fusion being presented as the essential salvation if our civilization was to survive beyond 2050 or so. I'm happy to report that the dire prediction of his mathematics did not come to pass, at least not yet. To quote Yogi Berra, "In theory, theory is just like reality." Still, I think it's worthwhile to review the mathematical thinking for what went wrong, and see if some value might be retained from the rubble.

Deffeyes's Malthusian proof went like this: take a year-by-year history of the rate of production, P, and divide it by the amount of oil known to be recoverable in that year, Q. Plot this P/Q data against Q, and you find the data follows a reasonably straight line: P/Q = b − mQ. This occurs between 1962 and 1983, or between 1983 and 2005. For whichever straight line you pick, m and b are positive. Once you find values for m and b that you trust, you can rearrange the equation to read,

P = −mQ² + bQ

You then calculate the peak of production as the point where dP/dQ = 0. With a little calculus you'll see this occurs at Q = b/2m, or at P/Q = b/2. This is the half-way point on the P/Q vs Q line. If you extrapolate the line to zero production, P = 0, you predict the total possible oil production, QT = b/m. According to this model, this is always double the total Q discovered by the peak. In 1983, QT was calculated to be 1 trillion barrels. By May of 2005, again predicted to be a peak year, QT had grown to two trillion barrels.
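The model can be sketched in a few lines; the b and m values below are illustrative assumptions, chosen only so that QT = b/m comes out to the 2005 figure of two trillion barrels:

```python
# Deffeyes's straight-line model, P/Q = b - mQ, with illustrative constants.
b = 0.05                 # 1/year (assumed for illustration)
m = b / 2.0e12           # 1/(year * barrel), so Q_T = b/m = 2e12 barrels

Q_T = b / m              # total recoverable oil, where P = 0
Q_peak = b / (2 * m)     # production peaks at half of Q_T (where dP/dQ = 0)
P_peak = -m * Q_peak**2 + b * Q_peak

print(Q_T, Q_peak, P_peak)   # the peak falls at exactly half the total
```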

I suppose Deffeyes might have suspected there was a mistake somewhere in the calculation from the way that QT had doubled, but he did not. See him lecture here in May 2005; he predicts war, famine, and pestilence, with no real chance of salvation. It's a depressing conclusion, confidently presented by someone enamored of his own theories. In retrospect, I'd say he did not realize that he was over-enamored of his own theory, and blind to the possibility that the P/Q vs Q line might curve upward, i.e. have a positive second derivative.

Aside from his theory of peak oil, Deffeyes also had a theory of oil price, one that was not all that popular. It's not presented in the YouTube video, nor in his popular books, but it's one that I still find valuable, and plausibly true. Deffeyes claimed the wildly varying prices of the time were the result of an inherent queue imbalance between a varying supply and an inelastic demand. If this is the cause, we'd expect the price of oil to jump up and down the way the wait-line at a barber shop gets longer and shorter. Assume supply varies because discoveries come in random packets, while demand rises steadily, and it all makes sense. After each new discovery, price falls. It then rises slowly till the next discovery. Price is seen as a symptom of supply unpredictability rather than as a useful corrective to supply needs. This view is the opposite of Adam Smith's, but I think he's not wrong, at least in the short term, with a necessary commodity like oil.

Academics accepted the peak oil prediction, I suspect, in part because it supported a Marxian remedy: if oil was running out and the market was broken, our only recourse was government management of energy production and use. By the late 70s, Jimmy Carter told us to turn our thermostats to 65°F. This went with price controls, gas rationing, a 55 mph speed limit, and a strong message of population management – birth control. We were running out of energy, we were told, because we had too many people and they (we) were using too much. America’s growth days were behind us, and only the best and the brightest could be trusted to manage our decline into the abyss. I half believed these scary predictions, in part because everyone did, and in part because they made my research at Princeton particularly important. The science fiction of the day told tales of bold energy leaders, and I was ready to step up and lead, or so I thought.

By 2009, Dr. Deffeyes was being regarded as Chicken Little as world oil production continued to expand.

I’m happy to report that none of the dire predictions of the ’70s to ’90s came to pass. Some of my colleagues became world leaders; the rest became stockbrokers with their own private planes and SUVs. As of my writing in 2018, world oil production has been rising, and even King Hubbert’s original prediction of US production has been overturned. Deffeyes’s reputation suffered for a few years; then politicians moved on to other dire dangers that require world-class management. Among the major dangers of today: school shootings, Ebola, and Al Gore’s claim that the ice caps would melt by 2014, flooding New York. Sooner or later, one of these predictions will come true, but the lesson I take is that it’s hard to predict change accurately.

Just when you thought US oil had been depleted for good, production began rising. It’s now higher than the 1970 peak.

Much of the new oil production you’ll see on the chart above comes from tar sands, oil that Deffeyes considered unrecoverable, even while it was being recovered. We also discovered new ways to extract leftover oil, and got better at using nuclear electricity and natural gas. In the long run, I expect nuclear electricity and hydrogen will replace oil. Trees have value, as does solar. As for nuclear fusion, it has not turned out to be practical. See my analysis of why.

Robert Buxbaum, March 15, 2018. Happy Ides of March, a most republican holiday.