Author Archives: R.E. Buxbaum

About R.E. Buxbaum

Robert Buxbaum is a life-long engineer, a product of New York's Brooklyn Technical High School, New York's Cooper Union for the Advancement of Science and Art, and Princeton University, where he got a PhD in Chemical Engineering. From 1981 to 1991 he was a professor of Chemical Engineering at Michigan State, and he now runs an engineering shop in Oak Park, outside of Detroit, Michigan. REB Research manufactures and sells hydrogen generation and purification equipment. He's married with 3 wonderful children who, he's told, would prefer not to be mentioned except by way of complete, unadulterated compliments. As of 2016, he's running to be the drain commissioner/water resources commissioner of Oakland County.

Alkaline batteries have second lives

Most people assume that alkaline batteries are one-time-only, throwaway items. Some have used rechargeable cells, but these are Ni-metal hydride or Ni-Cad, expensive variants that have lower power densities than normal alkaline batteries and are almost impossible to find in stores. It would be nice to be able to recharge ordinary alkaline batteries, e.g. when a smoke alarm goes off in the middle of the night and you find you're out, but people assume this is impossible. They assume incorrectly.

Modern alkaline batteries are highly efficient: more efficient than even a few years ago, and that always suggests reversibility. Unlike the acid batteries you learned about in high school chemistry class (basic chemistry due to Volta), the chemistry of modern alkaline batteries is based on Edison's alkaline car batteries. They have been tweaked to an extent that even the non-rechargeable versions can be recharged. I've found I can reliably recharge an ordinary 9 V alkaline cell at least once by the crude means of a standard 12 V car battery charger, watching the amperage closely. It took only 10 minutes. I suspect I can get nine lives out of these batteries, but have not tried.

To do this experiment, I took a 9 V alkaline that had recently died and, finding I had no replacement, attached it to a 6 Amp, 12 V car battery charger that I had on hand. I would have preferred a 2 A charger, and ideally one designed to output 9-10 V, but a 12 V charger is what I had available, and it worked. I let it charge for only 10 minutes because, at that amperage, I calculated that I'd recharged to the full 1 Amp-hr capacity. Since the new alkaline batteries only claimed 1 Amp-hr, I figured more charge would likely do bad things, perhaps even cause the thing to blow up. After 5 minutes, I found that the voltage had returned to normal and the battery worked fine with no bad effects, but I went for the full 10 minutes. Perhaps stopping at 5 would have been safer.

I charged for 10 minutes (1/6 hour) because the battery claimed a capacity of 1 Amp-hour when new. My thought: 1 Amp-hour = 1 Amp for 1 hour = 6 Amps for 1/6 hour = ten minutes. That's engineering math for you, the reason engineers earn so much. I figured that watching the recharge for ten minutes was less work and quicker than running to the store (20 minutes). I used this battery in my fire alarm, and have tested it twice since then to see that it works. After a few days in the fire alarm, I took it out and checked that the voltage was still 9 V, just like when the battery was new. Confirming experiments like this are a good idea. Another confirmation occurred when I overcooked some eggs and the alarm went off from the smoke.
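If you want to check this arithmetic for your own battery and charger, it's just capacity divided by current. Here's a minimal sketch in Python, using the numbers from my experiment; substitute your own battery's claimed capacity and your charger's output.

```python
# Recharge-time estimate: time = capacity / current.
# Numbers from the experiment above; substitute your own.
capacity_Ah = 1.0   # claimed capacity of a 9 V alkaline, Amp-hours
current_A = 6.0     # car-battery charger output, Amps

hours = capacity_Ah / current_A
print(f"charge for about {hours * 60:.0f} minutes")  # 1/6 hour = 10 minutes
```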

If you want to experiment, you can try a 9 V as I did, or try putting a 1.5 V AA or AAA alkaline in a charger designed for rechargeables. Another thought is to see what happens when you overcharge. Keep safe: do this in a wood box, outside, at a distance; I'd like to know how close I got to having an exploding Energizer. It would also be worthwhile to try several charge/discharge cycles, using a charger made for rechargeables, to see how the energy content degrades. I expect you can get ~9 recharges from a "non-rechargeable" alkaline battery, because the label says "9 lives," but even getting a second life from each battery is a significant savings. One last experiment: if you've got a cell-phone charger that works on a car battery, and you get the polarity right, you'll find you can use a 9 V alkaline to recharge your iPhone or Android. How do I know? I judged a science fair not long ago, and a 4th grader did this for her science fair project.

Robert Buxbaum, April 19, 2018. For more: semi-dangerous electrochemistry and biology experiments.

Calculating π as a fraction

Pi is a wonderful number, π = 3.14159265…. It's very useful: the ratio of the circumference of a circle to its diameter, and the ratio of the area of a circle to the square of its radius. But it is irrational: one can show that it cannot be written as an exact fraction. When I was in middle school, I thought to calculate pi from approximations of the circumference or area, but found that, as soon as I got past some simple techniques, I was left with massive sums involving lots of square roots. Even with a computer, I found this slow, annoying, and aesthetically unpleasing: I was calculating one irrational number from the sum of many other irrational numbers.

At some point, I moved on to trying the following fractional sum (due to Gregory and Leibniz):

π/4 = 1/1 − 1/3 + 1/5 − 1/7 …

This was an appealing approach, but I found the series converges amazingly slowly. I tried to make it converge faster by combining terms, but that just made the terms more complex; it didn't speed convergence. Next to try was Euler's formula:

π²/6 = 1/1 + 1/4 + 1/9 + …

This series converges barely faster than the Gregory-Leibniz series, and now I've got a square root to deal with. And that brings us to my latest attempt, one I'm pretty happy to have discovered (I'm probably not the first). I start with the Taylor series for sin x, with the angle x measured in radians: 180° = π radians, so 30° = π/6 radians. One can show that:

sin x = x − x³/6 + x⁵/120 − x⁷/5040 …

Notice that the series is fractional and that the denominators get large fast. That suggests the series will converge fast (2 to 3 terms?). To speed things up further, I chose to solve the above for sin 30° = 1/2 = sin (π/6). Truncating the series to the first term gives the following approximation for pi:

1/2 = sin (π/6) ≈ π/6.

Rearrange this and you find π ≈ 6 × 1/2 = 3.

That's not bad for a first-order solution. The Gregory-Leibniz series would have gotten me π ≈ 4, and the Euler series π ≈ √6 = 2.45…: I'm ahead of the game already. Now, let's truncate to the second term:

1/2 ≈ π/6 − (π/6)³/6.

In theory, I could solve this via the cubic formula, but that would leave me with two square roots, something I'd like to avoid. Instead (and here's my innovation) I'll substitute 3 + ∂ for π, and use the binomial theorem to claim that π³ ≈ 27 + 27∂ = 27(1+∂). Put this into the equation above and we find:

1/2 = (3+∂)/6 − 27(1+∂)/1296

Rearranging and solving for ∂, I find that

27/216 = ∂ (1 − 27/216) = ∂ (189/216)

∂ = 27/189 = 1/7 = .1428…

Since π ≈ 3 + ∂, I've just calculated π ≈ 22/7. This is not bad for an approximation based on just the second term of the series.

Where to go from here? One thought was to revisit the second term and say that π = 22/7 + ∂, but it seemed wrong to ignore the third term. Instead, I'll include the third term and say that π/6 = 11/21 + ∂. Extending the derivative (binomial) approximations I used above, (π/6)³ ≈ (11/21)³ + 3∂(11/21)², etc., I find:

1/2 ≈ (11/21 + ∂) − (11/21)³/6 − 3∂(11/21)²/6 + (11/21)⁵/120 + 5∂(11/21)⁴/120.

For a while I tried to solve this for ∂ as a fraction using long-hand algebra, but I kept making mistakes. Thus, I've chosen two faster options: decimals or Wolfram Alpha. Using decimals is simpler. I find: 11/21 ≈ .523810, (11/21)² = .274376, (11/21)³ = .143721, (11/21)⁴ = .075282, and (11/21)⁵ = .039434.

Put these numbers into the original equation and I find:

1/2 − .52381 + .143721/6 − .039434/120 = ∂ (1 − .274376/2 + .075282/24),

∂ = -.000185/.86595 ≈ -.000214. Based on this,

π ≈ 6 (11/21 − .000214) = 3.141573… Not half bad.

Alternately, using Wolfram Alpha to reduce the fractions,

1/2 − 11/21 + 11³/(6·21³) − 11⁵/(120·21⁵) = ∂ (24·21⁴/(24·21⁴) − 12·11²·21²/(24·21⁴) + 11⁴/(24·21⁴))

∂ = −90491/424394565 ≈ −.000213. This is a more exact solution, but it gives a result that's no more accurate, since it is based on a three-term approximation of the infinite series.

We find that π/6 ≈ .523596, or, in fractional form, that π ≈ 444422848 / 141464855 = 3.14158.

Either approach seems OK in terms of accuracy: I can't imagine needing more (I'm just an engineer). I like that I've got a fraction, but find it quite ugly, as fractions go. It's too big. Working with decimals gets me the same accuracy with less work; I avoided needing square roots, and avoided having to resort to Wolfram.

As an experiment, I'll see if I get a nicer fraction if I drop the last term, 11⁴/(24·21⁴): it is a small correction to a small number, ∂. The equation is now:

1/2 − 11/21 + 11³/(6·21³) − 11⁵/(120·21⁵) = ∂ (24·21⁴/(24·21⁴) − 12·11²·21²/(24·21⁴)).

I'll multiply both sides by 24·21⁴ and then by 5·21 to find that:

12·21⁴ − 24·11·21³ + 4·21·11³ − 11⁵/(5·21) = ∂ (24·21⁴ − 12·11²·21²),

60·21⁵ − 120·11·21⁴ + 20·21²·11³ − 11⁵ = ∂ (120·21⁵ − 60·11²·21³).

Solving for π, I now get 221406169/70476210 = 3.1415731.

It's still an ugly fraction, about as accurate as before. As with the decimal version, I got 5-decimal accuracy without having to deal with square roots, but I still had to go to Wolfram. If I were to go further, I'd start with the pi value above in decimal form, π = 3.141573 + ∂; I'd add the x⁷ term, and I'd stick to decimals for the solution. I imagine I'd gain 4-5 more decimals that way.
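For readers who'd rather let a computer carry the algebra, here's a minimal sketch in Python of the same scheme (my reconstruction, not anything from the original calculation). Linearizing the truncated sine series about the current guess and solving for ∂ is essentially Newton's method on the truncated polynomial; the successive outputs land within rounding of the numbers above.

```python
# Solve sin(x) = 1/2 for x = pi/6 using truncated Taylor polynomials for sin,
# writing x = a + d and keeping only terms linear in d (d plays the role of ∂).
from math import factorial

def taylor_sin(x, terms):
    """Truncated Taylor series of sin: x - x^3/6 + x^5/120 - ..."""
    return sum((-1)**k * x**(2*k + 1) / factorial(2*k + 1) for k in range(terms))

def taylor_sin_deriv(x, terms):
    """Derivative of the same polynomial: 1 - x^2/2 + x^4/24 - ..."""
    return sum((-1)**k * x**(2*k) / factorial(2*k) for k in range(terms))

a = 0.5  # one-term solution: sin(x) ≈ x = 1/2, i.e. pi ≈ 3
for terms in (2, 3, 4):  # add one series term per pass, as in the text
    d = (0.5 - taylor_sin(a, terms)) / taylor_sin_deriv(a, terms)
    a += d
    print(f"{terms} terms: pi ≈ {6 * a:.7f}")
# Prints roughly 3.1428571 (= 22/7), then 3.1415789, then 3.1415927.
```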

Robert Buxbaum, April 2, 2018

What drives the gulf stream?

I'm not much of a fan of today's kids' science books because, IMHO, they don't teach science. They have nice pictures and a few numbers, almost no equations, and lots of words. You can't do science that way. On the odd occasion that they give the right answer to some problem, the lack of math means the kid has no way of understanding the reasoning, and no reason to believe the answer. Professional science articles on the web are bad in the opposite direction: too many numbers, and for math they rely on supercomputers. No human can understand the outcome. I like to use my blog to offer science with insight, the type you'd get in an old "everyman science" book.

In previous posts, I gave answers to why the sky is blue, why it's cold at the poles, why it's cold on mountains, how tornadoes pick stuff up, and why hurricanes blow the way they do. In this post, we'll try to figure out what drives the Gulf Stream. The main argument will be deduction: disproving things that are not driving the Gulf Stream, to leave us with one or two that could. Deduction is a classic method of science, well presented by Sherlock Holmes.

The Gulf Stream. The speed in the white area is ≥ 0.5 m/s (1.1 mph).

For those who don't know, the Gulf Stream is a massive river of water that runs within the Atlantic Ocean. As shown at right, it starts roughly at the tip of Florida, runs north to the Carolinas, and then turns dramatically east toward Spain. Flowing east, it's about 150 miles wide, but only about 62 miles (100 km) wide when flowing along the US coast. According to some of the science books of my youth, this massive flow was driven by temperature; according to others, by salinity (whatever that means); and according to yet other books of my youth, by wind. My conclusion: they had no clue.

As a start to doing the science here, it's important to fill in the numerical information that the science books left out. The Gulf Stream is roughly 1000 m deep, with a typical speed of 1 m/s (2.3 mph). The maximum speed is at the surface, where the stream flows along the US coast: about 2.5 m/s (5.6 mph); see the map above.

From the size and speed of the Gulf Stream, we conclude that land rivers are not driving the flow. The Mississippi is a big river with an outflow near the headwaters of the Gulf Stream, but its volume of flow is vastly too small. The volume flow of the Gulf Stream is roughly

Q = wdv = 100,000 m × 1000 m × 0.5 m/s = 50 million m³/s ≈ 1.8 billion cubic feet/s.

This is nearly 3000 times the volume flow of the Mississippi, 18,000 m³/s. The great difference in flow suggests the Mississippi could not be the driving force. The map of flow speeds (above) also suggests rivers do not drive the flow: the Gulf Stream does not flow at its maximum speed near the mouth of any river. We now look for another driver.
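A quick sketch of that comparison in Python, using the rough figures above:

```python
# Volume flow of the Gulf Stream vs. the Mississippi (rough figures from the text).
width, depth, speed = 100_000.0, 1000.0, 0.5  # m, m, m/s
Q_gulf = width * depth * speed                # m^3/s
Q_mississippi = 18_000.0                      # m^3/s

print(f"Gulf Stream: {Q_gulf:.2g} m^3/s = {Q_gulf * 35.31:.2g} ft^3/s")
print(f"that's {Q_gulf / Q_mississippi:.0f} times the Mississippi")
```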

Moving on to temperature. Temperature drives the whirl of hurricanes, and the logic for temperature driving the Gulf Stream is as follows: it's warm at the equator and cold at the poles; warm things expand; and since water flows downhill, the poles will always be downhill from the equator. Let's put some math in here or my explanation will be as lacking as the others. First, let's consider how much height difference we might expect to see. The thermal expansivity of water is about 2 × 10⁻⁴ m/m·°C (.0002/°C) in the relevant temperature range. To calculate the amount of expansion, we multiply this by the depth of the stream, 1000 m, and by the temperature difference between two points, e.g. the end of Florida and the Carolina coast. This difference, I estimate, is about 5°C (9°F). I calculate the temperature-induced seawater height difference as:

∆h (thermal) ≈ 5°C × .0002/°C × 1000 m = 1 m (3.3 feet).

This is a fair amount of height. It's only about 1/100 the height driving the Mississippi River, but it's something. To see if 1 m is enough to drive the Gulf flow, I'll compare it to the velocity head. Velocity head is a concept that's useful in plumbing (I ran for water commissioner). It's the potential-energy height equivalent of any kinetic energy, typically of a fluid flow. The kinetic energy for any velocity v and mass of water m is ½mv². The potential energy equivalent is mgh. Combine the two, cancel the mass terms, and we have:

∆h (velocity) = v²/2g,

where g is the acceleration of gravity. For v = 1 m/s and g = 9.8 m/s², ∆h (velocity) = 1/19.6 ≈ 0.05 m ≈ 2 inches. This is far less than the driving height calculated above: we have some 20 times more driving force than we need. But that raises a problem: why isn't the flow faster? And why does the Mississippi move so slowly when it has 100 times more head?
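Here's the head comparison as a minimal sketch, computing the thermal driving height and the velocity head for a few flow speeds:

```python
# Thermal driving height vs. velocity head, rough numbers from the text.
alpha = 2e-4    # thermal expansivity of water, 1/°C
depth = 1000.0  # stream depth, m
dT = 5.0        # Florida-to-Carolinas temperature difference, °C
g = 9.8         # acceleration of gravity, m/s^2

print(f"thermal head: {alpha * dT * depth:.2f} m")
for v in (0.5, 1.0, 2.5):  # typical, nominal, and maximum flow speeds, m/s
    print(f"v = {v} m/s: velocity head = {v**2 / (2 * g):.4f} m")
```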

To answer the above questions, and to check whether heat could really drive the Gulf Stream, we'll check if the flow is turbulent (it is). The measure of turbulence is the Reynolds number, Re#, the ratio of kinetic energy to viscous loss in a fluid flow. Flows are turbulent if this ratio is more than 3000 or so:

Re# = vdρ/µ.

In the above, v is velocity, say 1 m/s; d is depth, 1000 m; ρ is density, 1000 kg/m³ for water; and µ = 0.00133 Pa·s is the viscosity of water. Plug in these numbers and you find Re# = 750 million: this flow is highly turbulent. Assuming a friction factor of 1/20 (.05), we'd expect complete mixing every 20 depths, or 20 km. That means we need one velocity head, the 0.05 m above, to drive each 20 km of flow up the US coast. If the distance to the Carolina coast is 1000 km, a 1 m/s flow would need 50 × 0.05 m = 2.5 m of driving height, more than the 1 m the temperature difference provides. A 0.5 m/s flow has a velocity head of only 0.0125 m and would need about 0.6 m, which is available. Temperature is thus a plausible driving force for the 0.5 m/s flow, though not likely for the faster 2.5 m/s flow seen in the center of the stream. Turbulent flow, by the way, is a big part of figuring the mpg of an automobile; it becomes rapidly more important at high speeds.
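And a sketch of the turbulence check: the Reynolds number, plus the total head needed if one velocity head is lost to mixing every 20 depths (friction factor of 1/20, as assumed above):

```python
# Reynolds number and friction-head budget for the Gulf Stream.
rho, mu = 1000.0, 0.00133        # water: density kg/m^3, viscosity Pa·s
depth, g = 1000.0, 9.8           # m, m/s^2
distance = 1_000_000.0           # m, roughly Florida to the Carolinas

for v in (0.5, 1.0):
    Re = v * depth * rho / mu
    mixing_lengths = distance / (20 * depth)       # one per 20 depths = 20 km
    head_needed = mixing_lengths * v**2 / (2 * g)  # one velocity head each
    print(f"v = {v} m/s: Re ≈ {Re:.1e}, head needed ≈ {head_needed:.1f} m")
# 0.5 m/s needs ~0.6 m (available from the ~1 m thermal head);
# 1 m/s needs ~2.5 m (more than temperature provides).
```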

World sea salinity. The maximum and minimum are in the wrong places.

What about salinity? For salinity to drive the flow, the salinity would have to be higher at the end of the flow. As a model of how this might work, imagine that arctic seawater freezes, concentrating salt in the water just below the ice. This heavy, saline water would sink to the bottom of the sea and flow south to an area of low salinity and low pressure. Somewhere in the south, the salinity would be reduced by rains. (If evaporation were to exceed the rains, the flow would go in the other direction.) Sorry to say, I see no evidence of any of this. For one, the end of the Gulf Stream is not that far north; there is no freezing. Two more problems: there are major rains in the Caribbean, and rains too in the North Atlantic. Finally, the salinity head is too small. Each unit of salinity adds about 0.0001 g/cc to the water's density, and the salinity difference in this case is less than 1 unit; let's say 0.5:

h = .0001 × 0.5 × 1000 m = 0.05 m

I don't see a case for a northern-driven Gulf Stream flow caused by salinity.

Surface-level winds in the Atlantic. Trade winds in purple, 15-20 mph.

Now consider winds. The wind velocities are certainly enough to produce 5+ mph flows, and the path of the flows is appropriate. Consider, for example, the trade winds. In the southern Caribbean, they blow steadily from east to west, slightly above the equator, at 15-20 mph. This could certainly drive a circulation flow of 4.5 mph north. Out of the Caribbean basin and along the eastern US coast, the prevailing winds blow at 15-50 mph north and east. This too would easily drive a 4.5 mph flow. I conclude that a combination of winds and temperature are the most likely drivers of the Gulf Stream flow. To quote Holmes: once you've eliminated the impossible, whatever remains, however improbable, must be the truth.

Robert E. Buxbaum, March 25, 2018. I used the thermal argument above to figure out how cold it had to be to freeze the balls off of a brass monkey.

New Chinese emperor, will famine not follow?

For most of its 2300-year history, the Chinese empire has rattled between strong leaders who brought famine and weak leaders who brought temporary reprieve. Mao, a strong leader, killed his associates, plus tens of millions more by his "great leap forward" famine. Since then, for 30+ years, we've had some weaker leaders, semi-democracy, and some personal wealth, plus the occasional massacre, e.g. at Tiananmen Square, and a growing demographic problem. And now a new strongman is establishing himself with hopes of solving China's problems. I hope for the best, but fear a repeat of the worse parts of Chinese history.

Two weeks ago, Chairman Xi amended the Chinese constitution to make himself, essentially, emperor for life. He's already in charge of the government, the party, and the military. Yesterday (Tuesday), he consolidated his power further by replacing the head of the banks. The legal system is, in theory, the last independent part of government, but there is hardly a legal system in the sense of a balance of power. If history is any guide, "Emperor" Xi will weaken the courts further before the year is out. He will also likely remove many or all of his close associates and relatives. It is not for nothing that Nero, Stalin, and Mao killed their relatives and friends, generally for "corruption" following a show trial.

China's Imperial past is never quite out of sight. Picture from the Economist.

Xi might be different, but he faces a looming demographic problem that makes it likely he will follow the precedent of the stronger emperors. China's growth was fueled in part by the one-child policy. Left behind is an aging rural population with no children to take care of the elderly. As top-down societies do not tolerate "useless workers," I expect a killing famine within the next 10 years. It would shed the rural burden while providing a warning to potential critics. "Kill the chicken to scare the monkey" is a Chinese Imperial aphorism. Besides, who needs dirt farmers when you have modern machines?

"Lazy beds" (feannagan) of potatoes, in which only half the soil is planted, were used in Ireland for a century until experts forced their abandonment in the mid-1800s. The experts saw the beds, and the Irish, as lazy, inefficient, and land-wasting. Plowing was imposed on Ireland, and famine followed.

Currently about 40% of the country is rural: about 560 million people spread out over a country the size of Canada or the US. The rest, 60% or 830 million, live concentrated in a few cities. The cities are rich, industrial, and young. The countryside is old, agricultural, and poor; salaries there are about 1/3 those of the cities. The countryside holds about 2/3 of those over 65, roughly 100 million elderly, with no social safety net. The demographic imbalance is likely to become worse, a lot worse, within the next decade.

What is likely to happen, I fear, is that the party leaders, all of whom live in the cities, will decide that the countryside is full of non-productive, uneducated whiners. They will demand that more food be produced, and will help the farmers achieve this with misguided science and severe punishments. Mao's experts, like Stalin's and Queen Victoria's, demanded unachievable quotas and gave academic advice that neither the leaders nor the academics had ever tried to make work. Mao's experts told peasants to kill the birds that were stealing their grain. It worked for a while, until the insects multiplied. As for the quotas, the party took grain as if the quotas were being met. If the peasants starved, they starved.

I expect that China's experts will propose machine-based modern agriculture, perhaps imported from the US or Israel: whatever is in style at the time. The expert attitude exists everywhere to this day, and the results are always the same; see the potato-famine picture above. When the famine comes, the old will request food and healthcare, but the city leaders will provide none, or just opioids, as was done for the ailing Elvis. When the complaining stops, the doctor is happy.

China's population pyramid as of 2016. Notice the bulge of 40-55 year olds. Note too that there are millions more males (blue) than females (pink).

In single-leader societies, newspapers do not report bad news. Rather, they like to show happy, well-fed peasants singing the leader's praise. When there's a riot too big to ignore, rioters are presented as lazy malcontents and counter-revolutionaries, and sympathizers are sent to work in the fields. American academia will sing the praises of the autocratic leader, or will be silent. We never see the peasants, but often see the experts. And we live in a society where newspapers report only the bad, and where we believe only when there are pictures. No pictures, no story. As with Stalin's gulags, Mao's famine, or North Korea today, there are likely to be few pictures released to the press. Eventually, a census will reveal that tens of millions of the aged have vanished, and we'll have to guess where they went.

I expect China to continue its military buildup over the next decade. The military will be necessary to put down riots, to keep young men occupied, and to protect China from foreign intervention. China will especially need to protect its ill-gotten new oil assets: oil is needed if China is to replace its farmers with machines. It will be a challenge for a wise American leader to avoid being drawn into war with China while protecting some of our interests: Taiwan, Hong Kong, etc. Like Theodore Roosevelt, he should offer support and unbiased mediation. Is Trump up to this? Hu knows?

Robert Buxbaum, March 21, 2018. The above might be Xi-nophobia. Then again, this just in: Chairman Xi announces that Taiwan will face punishment if it attempts to break free. Doesn't sound good.

Beyond oil lies … more oil + price volatility

One of many best-selling books by Kenneth Deffeyes.

While I was at Princeton, one of the most popular courses was Geology 101, taught by Dr. Kenneth S. Deffeyes. It was a sort of "Rocks for Jocks," but with an unusual bite, since Dr. Deffeyes focussed particularly on the geology of oil. Deffeyes had an impressive understanding of oil and oil production, and one outcome of this impressive understanding was his certainty that US oil production had peaked in 1970, and that world oil was about to run out too. The claim that US oil production had peaked was not original to him. It was called Hubbert's peak, after King Hubbert, who correctly predicted (rationalized?) the date, but published it only in 1971. What Deffeyes added to Hubbert's analysis was a simplified mathematical justification and a new prediction: that world oil production would peak in the 1980s, or 2000, and then run out fast. By 2005, the peak date had been fixed to November 24 of that same year: Thanksgiving Day 2005 ± 3 weeks.

As with any prediction of global doom, I was skeptical, but I generally trusted the experts, and virtually every expert was on board predicting gloom in the near future. A British group, the Institute for Peak Oil, picked 2007 for the oil to run out, and several movies expanded the theme, e.g. Mad Max. I was convinced enough to direct my PhD research to nuclear fusion engineering, fusion being presented as the essential salvation if our civilization was to survive beyond 2050 or so. I'm happy to report that the dire predictions of this mathematics did not come to pass, at least not yet. To quote Yogi Berra, "In theory, theory is just like reality." Still, I think it's worthwhile to review the mathematical thinking for what went wrong, and see if some value might be retained from the rubble.

Deffeyes's Malthusian proof went like this: take a year-by-year history of the rate of production, P, and divide it by the amount of oil known to be recoverable in that year, Q. Plot this P/Q data against Q, and you find the data follow a reasonably straight line: P/Q = b − mQ. This holds between 1962 and 1983, or between 1983 and 2005. For whichever straight line you pick, m and b are positive. Once you find values for m and b that you trust, you can rearrange the equation to read:

P = −mQ² + bQ

You then calculate the peak of production as the point where dP/dQ = 0. With a little calculus, you'll see this occurs at Q = b/2m, or at P/Q = b/2: the half-way point on the P/Q vs Q line. If you extrapolate the line to zero production, P = 0, you predict the total possible oil production, QT = b/m. According to this model, that is always double the total Q discovered by the peak. In 1983, QT was calculated to be 1 trillion barrels. By May of 2005, again predicted to be a peak year, QT had grown to two trillion barrels.
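For the curious, here is a minimal sketch of Hubbert linearization in Python. This is my reconstruction of the method, not Deffeyes's code, and the data points are made-up numbers for illustration only:

```python
# Hubbert linearization: fit P/Q = b - m*Q, then the peak is at Q = b/2m
# and the predicted total recoverable oil is Q_T = b/m.
import numpy as np

Q = np.array([300., 400., 500., 600., 700.])  # cumulative production, 10^9 bbl (hypothetical)
P = np.array([18., 20., 21., 21.5, 21.])      # annual production, 10^9 bbl/yr (hypothetical)

slope, b = np.polyfit(Q, P / Q, 1)  # straight-line fit to P/Q vs. Q
m = -slope
print(f"peak at Q = {b / (2 * m):.0f}, predicted total Q_T = {b / m:.0f} billion bbl")
```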

I suppose Deffeyes might have suspected there was a mistake somewhere in the calculation from the way QT had doubled, but he did not. See him lecture here in May 2005; he predicts war, famine, and pestilence, with no real chance of salvation. It's a depressing conclusion, confidently presented by someone enamored of his own theories. In retrospect, I'd say he did not realize that he was over-enamored of his own theory, and blind to the possibility that the P/Q vs Q line might curve upward, that is, have a positive second derivative.

Aside from his theory of peak oil, Deffeyes also had a theory of oil price, one that was not all that popular. It's not presented in the YouTube video, nor in his popular books, but it's one that I still find valuable, and plausibly true. Deffeyes claimed the wildly varying prices of the time were the result of an inherent queue imbalance between a varying supply and an inelastic demand. If this is the cause, we'd expect the price of oil to jump up and down the way the wait-line at a barber shop gets longer and shorter. Assume supply varies because discoveries come in random packets, while demand rises steadily, and it all makes sense. After each new discovery, price falls; it then rises slowly till the next discovery. Price is seen as a symptom of supply unpredictability rather than a useful corrective to supply needs. This view is the opposite of Adam Smith's, but I think he's not wrong, at least in the short term, with a necessary commodity like oil.

Academics accepted the peak oil prediction, I suspect, in part because it supported a Marxian remedy. If oil was running out and the market was broken, then our only recourse was government management of energy production and use. By the late 70s, Jimmy Carter told us to turn our thermostats to 65°F. This went with price controls, gas rationing, a 55 mph speed limit, and a strong message of population management, i.e. birth control. We were running out of energy, we were told, because we had too many people and they (we) were using too much. America's growth days were behind us, and only the best and the brightest could be trusted to manage our decline into the abyss. I half believed these scary predictions, in part because everyone did, and in part because they made my research at Princeton particularly important. The science fiction of the day told tales of bold energy leaders, and I was ready to step up and lead, or so I thought.

By 2009, Dr. Deffeyes was being regarded as Chicken Little as world oil production continued to expand.

I'm happy to report that none of the dire predictions of the 70s to 90s came to pass. Some of my colleagues became world leaders; the rest became stock brokers with their own private planes and SUVs. As of my writing in 2018, world oil production has been rising, and even King Hubbert's original prediction of US production has been overturned. Deffeyes's reputation suffered for a few years; then politicians moved on to other dire dangers that require world-class management. Among the major dangers of today: school shootings, Ebola, and Al Gore's claim that the ice caps would melt by 2014, flooding New York. Sooner or later, one of these predictions will come true, but the lesson I take is that it's hard to predict change accurately.

Just when you thought US oil was depleted, production began rising. We now produce more than in 1970.

Much of the new oil production you'll see on the chart above comes from tar sands, oil that Deffeyes considered unrecoverable, even while it was being recovered. We also discovered new ways to extract leftover oil, and got better at using nuclear electricity and natural gas. In the long run, I expect nuclear electricity and hydrogen will replace oil. Trees have a value, as does solar. As for nuclear fusion, it has not turned out practical; see my analysis of why.

Robert Buxbaum, March 15, 2018. Happy Ides of March, a most republican holiday.

Hydrogen powered trucks and busses

With all the attention to electric cars, I figure we're either at the dawn of electric propulsion vehicles or of electric propulsion vehicle hype. Elon Musk's Tesla Motors is now valued at $59 B, more than GM or Ford, despite the company having massive losses and few cars. The valuation, I suspect, has to do with the future and autonomous vehicles. There are many who expect self-driving vehicles will rule the road, but the form is uncertain. In this space, I suspect that hydrogen-battery hybrids make more sense than batteries alone, and that the first large-impact uses will be trucks and busses: vehicles that go long distances on highways.

Factory-floor hydrogen fueling station for Plug Power fuel-cell forklifts. Plug's fuel cells reached their 10 millionth refueling this January.

Currently there are only two brands of autonomous vehicle available in the US: the Cadillac CT6, a gasoline-powered car, and the Tesla. Neither works well except on highways, because highway driving presents fewer problems than city driving, and only the CT6 allows you to take your hands off the wheel; see a review here. To me, being able to take your hands off the wheel is the only real point of autonomous control, and if one can do this only on the highway, that's acceptable. Highway driving gets quite tiring after the first hundred miles or so, and any relief is welcome.

Tesla's battery cars allow for some auto-driving on the highway, but you can't take your hand off the wheel or the car stops. That battery cars compete at all for highway driving, I suspect, is only possible because the US government heavily subsidizes the battery cost; Musk then hides the true cost among the corporate losses. Without this, hydrogen fuel-cell vehicles would be cheaper, I suspect, while providing better range; see my calculation here. Adding to the advantage of hydrogen over batteries, refueling with hydrogen is much faster than recharging. Slow charge times are a real drawback for highway vehicles traveling any significant distance. While hydrogen fuel isn't cheap, it's becoming cheaper, and is now about double the price of gasoline on a per-mile basis. The advantage over gasoline is that it provides pollution-free electric propulsion, and this is well suited to driverless vehicles. Both gasoline and battery vehicles can have odd acceleration issues, e.g. when the gasoline gets wet or the battery runs down. And it's not as if there are no hydrogen fueling stations: hydrogen fuel-cell power has become a major competitor for forklifts, and recently passed ten million refuelings in that application. The same fueling stations that serve the larger forklift users could serve the self-driving truck and bus market.

For round-the-town use, hydrogen vehicles could still use batteries, and the combined vehicle can have very impressive performance. A Dutch company has begun to sell kits to convert Tesla Model S autos to combined battery + hydrogen. With these kits, they boast a 620-mile (1000 km) range instead of the normal 240 miles. See the product here. On the horizon in the self-driving fuel-cell market, Hyundai has debuted the "Nexo," with a range of 370 miles. Showing off the self-driving capability, Nexos were used to carry spectators between venues at the Pyeongchang Olympics. Japanese competitors, the Toyota Mirai (312 miles) and the Honda Clarity Fuel Cell (366 miles), can be expected to provide similar capabilities.

Cadillac CT6 with Super Cruise: an autonomous vehicle that you can buy today and that allows you to take your hands off the wheel.

The reason I believe in hydrogen trucks and busses more than cars is the difficulty of refueling. Southern California has installed some 36 public hydrogen refueling stations at last count, but that's too few for most personal car use. Other states have even fewer spots where you can drive up and get hydrogen; Michigan has only two. This does not matter for a commercial truck or bus, because they run between fixed depots, and these can be fitted with hydrogen dispensers like those used for forklifts. It's possible trucks could even use the same dispensers as the forklifts. If one needs a little extra range, one can add a "hydrogen jerry can" with an extra kg of H₂, providing 20-30 miles of emergency range. I do not see battery electric vehicles working as well, because the charge times are so slow, the range so modest, and the electric power needs so large. To charge a 100 kWh battery in an hour, the charge station would need an electric feed of 100 kW, about as much as a typical mall. With 100 A at 240 V, the most you can normally get, expect a 4 1/2 hour charge.
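The charging arithmetic is worth seeing laid out. A minimal sketch, using the battery size and feed limits quoted above:

```python
# Charge-time estimate for a 100 kWh EV battery on a big residential feed.
battery_kWh = 100.0
amps, volts = 100.0, 240.0       # about the most you can normally get
feed_kW = amps * volts / 1000.0  # = 24 kW

print(f"feed needed for a 1-hour charge: {battery_kWh:.0f} kW (a small mall's worth)")
print(f"time on a {feed_kW:.0f} kW feed: {battery_kWh / feed_kW:.1f} hours, plus charging losses")
```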

The real benefit of hydrogen trucks and busses is autonomy: being able to run the route without major input from a driver. So why not gasoline, as with the Cadillac? My answer is simplicity. If you want driverless simplicity, you want electric propulsion, battery or hydrogen. And only hydrogen provides the long range and fast fueling to make the product worthwhile.

Robert Buxbaum March 12, 2018. My company, REB Research provides hydrogen purifiers and hydrogen generators.

Yogurt making for kids

Yogurt making is easy, and it's a fun science project for kids and adults alike. It's cheap, quick, reasonably safe, and fairly useful. Like any real science, it requires mathematical thinking if you want to go anywhere really, but unlike most science, you can get somewhere even without the math, and you can eat the experiments. Yogurt has been made for centuries; making it involves nothing more than adding some yogurt culture to a glass of milk and waiting. To do it the traditional way, you wait with the glass sitting outside of any refrigeration (they didn't have refrigeration in the olden days). After a few days, you'll have tasty yogurt. You can get tastier yogurt if you add flavors. In one of my most successful attempts at flavoring, I added 1/2 ounce of "skinny syrup" (toffee flavor) to the glass of milk. The results were most satisfactory, IMHO.

My latest batch of home-made flavored yogurt, made in a warm spot behind this coffee urn.

Now to turn yogurt-making into a science project. We'll begin with a hypothesis. I generally tell people not to start with a hypothesis (it biases your thinking), but here I will make an exception, as I have a peculiarly non-biased hypothesis to suggest. Besides, most school kids are told they need one. My hypothesis: there must be better ways to make yogurt and worse ways. A hypothesis should be avoided if it contains any unfounded assumptions, or if it points to a particular answer, especially an answer that no one would care about.

As with all science, you'll want to take numerical data of cause and effect. I'd suggest that temperature data is worth taking. The main yogurt-making bacteria are Lactobacillus bulgaricus and Streptococcus thermophilus, and the names suggest that warm temperatures will be good (lact = milk in Latin; thermophilus = heat-loving). Also making things interesting is the suspicion that if you make things too warm, you'll cook your organisms and get no yogurt. I've had this happen, both with over-heat and with under-heat. My first attempt was to grow yogurt in the refrigerator, and I got no results. I then tried the kitchen counter and got yogurt; I then heated things a bit more by growing it next to a coffee urn and got better yogurt; yet more heat, and nothing.

For a science project, you might want to make a few batches of yogurt, at least 5, at 2-3 different temperatures. If temperature is a cause of the yogurt coming out better or worse, you'll need to be able to measure how much "better" is. You may choose to study taste, and that's important, but it's hard to quantify, so it should not be the whole experiment. I would begin by testing thickness, or the time to reach some fixed degree of thickness; I'd measure thickness by seeing if a small weight sinks. A penny is a cheap, small weight, and I know it sinks in milk but not in yogurt. You'll want to wash your pennies first, or no one will eat the yogurt. I used hot water from the urn to clean and sterilize mine.

Another thing worth testing is the effect of different milks: whole milk, 2%, 1%, or skim; goat milk; or almond milk. You can also try adding things, or starting with different starter cultures, or different amounts. Keep numerical records of these choices, then keep track of how they affect the time for the gel to form, and how the stuff looks and tastes to you. Before you know it, you'll have some very good product at half the price of the store-bought stuff. If you really want to move forward fast, you might apply semi-random statistics to your experimental choices. Good luck.

Robert Buxbaum, March 2, 2018. My latest observation: what happens if you leave the yogurt to sit too long? It doesn't get moldy (perhaps the lactic acid formed kills germs?), but it separates into curds and whey. I poured off the whey, the unappealing, bitter yellow liquid. The thick white remainder is called "Greek" yogurt. I'm not convinced it tastes better or is healthier, BTW.

Elvis Presley and the opioid epidemic

For those who suspect that the medical profession may bear some responsibility for the opioid epidemic, I present a prescription written for Elvis Presley, August 1977. Like many middle-aged folks, he suffered from back pain and stress. And like most folks, he trusted the medical professionals to "do no harm," prescribing nothing with serious side effects. Clearly he was wrong.

Elvis's prescription, August 1977. Opioid city.

The above prescription is a disaster, but you may think it's just an aberration: a crank doctor who hooked (literally) a celebrity patient. It's not as aberrant as one might think. I worked for a pharmacist in the 1970s, and the vast majority of prescriptions we saw were for these sorts of mood-altering drugs. The pharmacist I worked for refused to serve many of these customers, and even phoned the doctor to yell at him over one particularly egregious case: a shivering, skinny kid with a prescription for diet pills. But my employer was the aberration. All those prescriptions would be filled by someone, and a great number of people walked about in a haze because of it.

The popular Stones song, "Mother's Little Helper," would not have been so popular were it not true to life. One might ask why it was true to life, as doctors might have prescribed less-addicting drugs. I believe the reason is that doctors listened to advertising, then and now. They might have suggested marijuana for pain or depression (there was good evidence it worked), but there were no colorful brochures with smiling actors for it. The only positive advertising was for opioids, speed, and Valium, and that is what was prescribed then, and still today.

One of the most common drugs prescribed to kids these days is speed, marketed as Ritalin. It prevents daydreaming and motor-mouth behaviors; see my essay, "Is ADHD a real disease?" I'm not saying that ADD kids aren't annoying, or that folks don't have backaches, but the current drugs are worse than marijuana, as best I can tell. It would be nice to get non-high-inducing pot extract sold in pharmacies, in my opinion, and not in specialty stores (I trust pharmacists). As things now stand, the users have medical prescription cards, but the black-market sellers end up in jail.

Robert Buxbaum, January 25, 2018. Please excuse the rant. I ran for sewer commissioner in 2016, and as a side issue, I'd like to reduce the harsh "minimum" penalties for crimes of possession with intent to sell, while opening up sale to normal druggist channels.

Keeping your car batteries alive.

Lithium-battery cost and performance have improved so much that no one uses Ni-Cad or metal-hydride batteries any more. Lithium is the choice for tools, phones, and computers, while lead-acid batteries remain the choice for car starting and emergency lights. I thought I'd write about the care and trade-offs of these two remaining options.

As things currently stand, you can buy a 12 V lead-acid car battery with 40 Amp-hour capacity for about $95. This suggests a cost of about $200/kWh. The price rises to $400/kWh if you only discharge half way (good practice). This is still cheaper than the per-energy cost of lithium batteries, about $500/kWh, or $1000/kWh at half discharge, but people pick lithium because (1) it's lighter, and (2) it generally lasts longer: about 2000 half-discharge cycles versus 500 for lead-acid.

On a cost-per-cycle basis, lead-acid batteries would have been replaced completely, except that they are more tolerant of cold and heat, and they easily output the 400-800 Amps needed to start a car. Lithium batteries have problems at these currents, especially when it's hot or cold. Lithium batteries also deteriorate fast in the heat (over 40°C, 105°F), and you cannot charge a lithium car battery at more than 3-4 Amps at temperatures below about 0°C (32°F). At higher currents, a coat of lithium metal forms on the anode. This lithium can react with water: 2Li + H₂O → Li₂O + H₂, or it can form dendrites that puncture the cell separators, leading to fire and explosion. If you charge a lead-acid battery too fast, some hydrogen can form, but that's much less of a problem. If you are worried about hydrogen, we sell hydrogen getters and catalysts that remove it. Here's a description of the mechanisms.

The best thing you can do to keep a lead-acid battery alive is to keep it near-fully charged. This can be done by taking long drives, by idling the car (warming it up), or by use of an external trickle charger. I recommend a trickle charger in the winter because it's non-polluting. A lead-acid battery that's kept at near-full charge will give you enough for 3000 to 5000 starts. If you let the battery discharge completely, you get only 50 or so deep cycles, or 1000 starts. And beware: full discharge can creep up on you. A new car battery holds 40 Amp-hours of charge, or 72,000 Amp-seconds if you discharge only half way. Starting the car takes about 5 seconds at 600 Amps, using 3000 Amp-seconds, roughly 4% of the battery's usable charge. The battery will recharge as you drive, but not that fast: you'll have to drive for at least 500 seconds (8 minutes) to recover the energy used in starting. In the winter it is common that your drive will be shorter, and that a lot of your alternator power will be sent to the defrosters, lights, and seat heaters. As a result, your lead-acid battery will not fully recharge, even on a 10-minute drive. With every week of short trips, the battery drains a little more, and sooner or later you'll find your battery is dead. Beware and recharge, ideally before 50% discharge.
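To see how the discharge creeps up, here's a minimal sketch of the arithmetic above. The 6 A figure for alternator power left over to recharge the battery is my assumption, chosen to match the 8-minute recovery quoted:

```python
# Lead-acid battery budget: charge used per start vs. charge returned driving.
capacity_As = 40 * 3600      # 40 Amp-hours in Amp-seconds
usable_As = capacity_As / 2  # half-discharge limit, good practice
start_As = 600 * 5           # 600 A for 5 s per start

surplus_A = 6                # alternator Amps left for the battery (assumed)
print(f"each start uses {100 * start_As / usable_As:.0f}% of usable charge")
print(f"recovery takes {start_As / surplus_A / 60:.0f} minutes of driving")
```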

A little chemistry will help explain why full discharge is bad for battery life (for a different version, see Wikipedia). For the first half of the discharge of a lead-acid battery, the reaction is:

Pb + 2PbO₂ + 2H₂SO₄ → PbSO₄ + Pb₂O₂SO₄ + 2H₂O.

This reaction involves 2 electrons and has a −∆G° of more than 394 kJ, suggesting a reversible voltage of more than 2.04 V per cell, with the voltage decreasing as the H₂SO₄ is used up. Any discharge forms PbSO₄ on the negative plate (the lead anode) and converts the lead dioxide on the positive plate (the cathode) to Pb₂O₂SO₄. Discharging to more than 50% involves a second reaction, converting the Pb₂O₂SO₄ on the cathode to PbSO₄:

Pb + Pb₂O₂SO₄ + 2H₂SO₄ → 3PbSO₄ + 2H₂O.

This also involves two electrons, but −∆G° < 394 kJ, and the voltage is less than 2.04 V. Not only is the voltage less, the maximum current is less too. As it happens, Pb₂O₂SO₄ is amorphous, adherent, and conductive, while PbSO₄ is crystalline, not very adherent, and not so conductive. Operating at more than 50% discharge results in less voltage, increased internal resistance, decreased H₂SO₄ concentration, and lead sulfate flaking off the electrode. Even letting a battery sit at low charge contributes to PbSO₄ flaking off. If the weather is cold enough, the low-concentration H₂SO₄ freezes and the battery case cracks. My advice: get out your battery charger and top up your battery. Don't worry about overcharging; your battery charger will sense when the charge is complete. A lead-acid battery operated at near-full charge, between 67 and 100%, will provide 1500 cycles, about as many as lithium.
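As a sanity check on the 2.04 V figure, the reversible cell voltage follows from the free energy as E = −∆G°/nF. A minimal sketch with the numbers above:

```python
# Reversible voltage of a lead-acid cell from the free energy of reaction.
n = 2            # electrons transferred
F = 96485.0      # Faraday constant, C/mol
dG = -394_000.0  # J/mol, the -ΔG° > 394 kJ figure from the text

E = -dG / (n * F)
print(f"E = {E:.2f} V per cell; six cells in series give a ~12 V battery")
```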

Trickle charging my wife's car: good for battery life. At 6 Amps, expect a full charge to take 6 hours or more. You might want to recharge the battery in your emergency lights too.

Lithium batteries are the choice for tools and electric vehicles, but the chemistry is different. For longest life, lithium batteries should not be charged fully. If you charge them fully, they deteriorate and self-discharge, especially when warm (100°F, 40°C). If you operate at 20°C between 75% and 25% charge, a lithium-ion battery will last 2000 cycles; at 100% to 0%, expect only 200 cycles or so.

Tesla cars use a special type of lithium battery, lithium-cobalt. Such batteries have been known to explode, but Tesla adds sophisticated electronics and cooling systems to prevent this. The Chevy Volt and Bolt use lithium batteries too, but less energy-dense ones. In either case, assuming $1000/kWh and a 2000-cycle life, the battery cost of an EV is about 50¢ per kWh-cycle. Add to this the cost of electricity, 15¢/kWh including the over-potential needed to charge, and I find a total cost of operation of 65¢/kWh. EVs get about 3 miles per kWh, suggesting an energy cost of about 22¢/mile. By comparison, for a 23 mpg car using gasoline at $2.80/gal, the energy cost is 12¢/mile, about half that of the EV. For now, I stick with gasoline for normal driving, and for long trips suggest buses, trains, and flying.
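The cost comparison, laid out as a minimal sketch with the figures above (all approximate):

```python
# Per-mile energy cost: EV (battery wear + electricity) vs. gasoline.
battery_wear = 1000.0 / 2000  # $1000/kWh over 2000 cycles = $0.50/kWh-cycle
electricity = 0.15            # $/kWh, including charging over-potential
ev_per_mile = (battery_wear + electricity) / 3.0  # at 3 miles/kWh
gas_per_mile = 2.80 / 23.0                        # $2.80/gal at 23 mpg

print(f"EV: {100 * ev_per_mile:.0f}¢/mile; gasoline: {100 * gas_per_mile:.0f}¢/mile")
```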

Robert Buxbaum, January 4, 2018.

Why is it hot at the equator, cold at the poles?

Here's a somewhat mathematical look at why it is hotter at the equator than at the poles. This is high school or basic college level science, using trigonometry (pre-calculus), a slight step beyond the basic statement that the sun hits down more directly at the equator than at the poles. That's the kid's explanation; we can understand better if we add a little math.

Solar radiation hits Detroit, or any other non-equator point, at an angle; as a result, less radiation power hits per square meter of land.

Let's use the diagram at right and trigonometry to compare the amount of sun energy that falls on a square meter of land at the equator (0° latitude) with a square meter in a city at 42.5° N latitude (Detroit, Boston, and Rome are all at about this latitude). In each case, let's consider high noon on March 21 or September 20. These are the two equinox days, the only days each year when day and night are of equal length, and the only times when it is easy to calculate the angle of the sun: at noon on these days, the sun deviates from the vertical by exactly the latitude.

More specifically, the equator is zero latitude, so on the equator at high noon on the equinox, the sun shines from directly overhead, 0° from the vertical. Since the sun's power in space is 1050 W/m², every square meter of equator can expect to receive 1050 W of sun energy, less the amount reflected off clouds and dust, or scattered off air molecules (air scattering is what makes the sky blue). Further north, Detroit, Boston, and Rome sit at 42.5° latitude. At noon on March 21, the sun strikes earth there at 42.5° from the vertical, as shown in the lower figure above. From trigonometry, you can see that each square meter of these cities receives cos 42.5° as much power as a square meter at the equator, apart from any difference in clouds, dust, etc. Without clouds etc., that would be 1050 × cos 42.5° = 774 W. Less sun power hits per square meter because each square meter is tilted relative to the sun. Earlier and later in the day, each spot gets less sunlight than at noon, but the proportion is the same, at least on an equinox day.

To calculate the likely temperature in Detroit, Boston, or Rome, I will use a simple energy balance. Ignoring heat storage in the earth for now, we will say that the heat in equals the heat out. We also ignore heat transfer by way of winds and rain, and approximate that the heat leaves by black-body radiation alone, radiating into the extreme cold of space. This is not a very bad approximation, since black-body radiation is the main heat-removal mechanism in most situations where large distances are involved. I've discussed black-body radiation previously; the amount of energy radiated is proportional to emissivity, and to T⁴, where T is the temperature measured on an absolute scale, Kelvin or Rankine. Based on this, and assuming that the emissivity of the earth is the same in Detroit as at the equator,

T(Detroit) / T(equator) = √√cos 42.5° = .927

I'll now calculate the actual temperatures. For American convenience, I'll do the calculation in the Rankine temperature scale, the absolute Fahrenheit scale. In this scale, 100°F = 560°R, 0°F = 460°R, and the temperature of space is 0°R to a good approximation. If the average temperature of the equator is 100°F = 38°C = 560°R, we calculate that the average temperature of Detroit, Boston, or Rome will be about .927 × 560 = 519°R = 59°F (15°C). This is not a bad prediction, given the assumptions. We can expect the temperature to be somewhat lower at night, when there is no sunlight, but it will not fall to zero, as there is retained heat from the day. The same thing, retained heat, explains why it will be warmer in these cities on September 20 than on March 21.

In the summer, these cities are warmer because they are in the northern hemisphere and the north pole is tilted 23° toward the sun. At the height of summer (June 21), at high noon, the sun shines on Detroit at an angle of 42.5 − 23 = 19.5° from the vertical. The difference in angle is why these cities are warmer on that day than on March 21. The equator is cooler on that day than on March 21, since the sun's rays then strike the equator at 23° from the vertical. These temperature differences are behind the formation of tornadoes and hurricanes, with the US tornado season centering on May to July.

When looking at the poles, we find a curious problem in guessing what the average temperature will be. At noon on the equinox, the sun comes in horizontally, 90° from the vertical. We thus expect no warming power at all that day, and none for the six months of winter either. At first glance, you'd think the temperature at the poles would be zero, at least for six months of the year. It isn't zero, because there is retained heat from the summer, but it still makes for a more difficult calculation.

To figure an average temperature for the poles, let's remember that during the six-month summer, the sun shines 24 hours per day, and the angle of the sun gets as high as 23° above the horizon, or 67° from the vertical, for all 24 hours. Let's assume that the retained heat from the summer is what keeps the temperature from falling too low in the winter, and calculate an average temperature based on the summer sun.

Specifically, let's assume that the sun comes in at the equivalent of 25° above the horizon, 65° from the vertical, during the six-month "day" of the polar summer. I don't look at the equinox here, but at the whole solar day, and note that the heating angle stays fixed through each 24-hour day during the summer; it does not decrease in the morning or as the afternoon wears on. Based on this angle, we expect that

T(pole) / T(equator) = √√cos 65° = .806

T(pole) = .806 × 560°R = 452°R = −8°F (−22°C).

This, as it happens, is 4° colder than the measured average temperature at the north pole, but not bad, given the assumptions. Maybe winds and water currents account for the difference. Of course there is a large temperature difference at the pole between the fall equinox and the spring equinox, but that's to be expected. The measured average, −4°F, is about the nighttime winter temperature in Detroit.
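Here's the whole estimate as a minimal sketch: temperature scales as the fourth root of the absorbed sun power, so T(city)/T(equator) = (cos θ)^¼, with θ the sun's angle from the vertical and the equator pinned at 560°R:

```python
# Radiative-balance temperature estimate vs. sun angle from the vertical.
from math import cos, radians

def temp_R(sun_angle_deg, T_equator_R=560.0):
    """Average temperature, °R, where the sun is sun_angle_deg off vertical."""
    return T_equator_R * cos(radians(sun_angle_deg)) ** 0.25

for name, angle in [("Detroit, equinox", 42.5), ("Pole, summer average", 65.0)]:
    T = temp_R(angle)
    print(f"{name}: {T:.0f}°R = {T - 460:.0f}°F")
# Prints roughly 519°R = 59°F and 452°R = -8°F, as in the text.
```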

One last thing, one that might be unexpected: the temperature at the south pole is lower than at the north pole, on average −44°F. The main reason for this is that the snow on the south pole is quite deep, more than 1 1/2 miles deep, with some rock underneath. As I showed elsewhere, we expect temperatures to be lower at high altitude. Data collected from cores through the 1 1/2 mile deep snow suggest (to me) chaotic temperature change, with long ice ages and brief (6000-year) periods of warmth. The ice ages seem far worse than global warming.

Dr. Robert Buxbaum, December 30, 2017