Beyond oil lies … more oil + price volatility

One of many best-selling books by Kenneth Deffeyes

While I was at Princeton, one of the most popular courses was Geology 101, taught by Dr. Kenneth S. Deffeyes. It was a sort of “Rocks for Jocks,” but it had an unusual bite, since Dr. Deffeyes focused particularly on the geology of oil. Deffeyes had an impressive understanding of oil and oil production, and one outcome of this understanding was his certainty that US oil production had peaked in 1970, and that world oil was about to run out too. The prediction that US oil production had peaked was not original to him. It was called Hubbert’s peak after King Hubbert, who correctly predicted (rationalized?) the date, but published it only in 1971. What Deffeyes added to Hubbert’s analysis was a simplified mathematical justification and a new prediction: that world oil production would peak in the 1980s, or 2000, and then run out fast. By 2005, the peak date was fixed to November 24 of that year: Thanksgiving Day 2005, ± 3 weeks.

As with any prediction of global doom, I was skeptical, but I generally trusted the experts, and virtually every expert was on board to predict gloom in the near future. A British group, The Institute for Peak Oil, picked 2007 for the oil to run out, and several movies expanded the theme, e.g. Mad Max. I was convinced enough to direct my PhD research to nuclear fusion engineering, fusion being presented as the essential salvation if our civilization was to survive beyond 2050 or so. I’m happy to report that the dire predictions of his mathematics did not come to pass, at least not yet. To quote Yogi Berra, “In theory, theory is just like reality.” Still, I think it’s worthwhile to review the mathematical thinking for what went wrong, and see if some value might be retained from the rubble.

Deffeyes’s Malthusian proof went like this: take a year-by-year history of the rate of production, P, and divide it by the amount of oil known to be recoverable in that year, Q. Plot this P/Q data against Q, and you find the data follows a reasonably straight line: P/Q = b - mQ. This occurs between 1962 and 1983, or between 1983 and 2005. For whichever straight line you pick, m and b are positive. Once you find values for m and b that you trust, you can rearrange the equation to read,

P = -mQ² + bQ

You then calculate the peak of production from this as the point where dP/dQ = 0. With a little calculus you’ll see this occurs at Q = b/2m, or at P/Q = b/2. This is the half-way point on the P/Q vs Q line. If you extrapolate the line to zero production, P = 0, you predict a total possible oil production, QT = b/m. According to this model, this is always double the total Q discovered by the peak. In 1983, QT was calculated to be 1 trillion barrels. By May of 2005, again predicted to be a peak year, QT had grown to two trillion barrels.
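
To make the method concrete, here is a minimal sketch of this Hubbert-style linearization in Python. The production numbers are invented for illustration only; what follows the description above is the procedure: fit P/Q vs Q to a straight line, then read off the peak at Q = b/2m and the total at QT = b/m.

```python
import numpy as np

# Hypothetical cumulative production Q (trillion bbl) and yearly production P
# (trillion bbl/year). These numbers are invented for illustration only.
Q = np.array([0.30, 0.40, 0.50, 0.60, 0.70, 0.80])
P = np.array([0.021, 0.024, 0.025, 0.024, 0.021, 0.016])

# Fit P/Q = b - m*Q  (a straight line in Q)
slope, intercept = np.polyfit(Q, P / Q, 1)
m, b = -slope, intercept

Q_total = b / m          # extrapolated total recoverable oil (where P/Q hits zero)
Q_peak = b / (2 * m)     # cumulative production at the predicted peak
P_peak = m * Q_peak**2   # peak rate, since P = bQ - mQ^2 = m*Q_peak^2 at the peak

print(f"Predicted total recoverable oil, QT = {Q_total:.2f} trillion bbl")
print(f"Peak at Q = {Q_peak:.2f} trillion bbl, P = {P_peak:.3f} trillion bbl/yr")
```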

I suppose Deffeyes might have suspected there was a mistake somewhere in the calculation from the way that QT had doubled, but he did not. See him lecture here in May 2005; he predicts war, famine, and pestilence, with no real chance of salvation. It’s a depressing conclusion, confidently presented by someone enamored of his own theories. In retrospect, I’d say he did not realize that he was over-enamored of his theory, and blind to the possibility that the P/Q vs Q line might curve upward, i.e., have a positive second derivative.

Aside from his theory of peak oil, Deffeyes also had a theory of oil price, one that was not all that popular. It’s not presented in the youtube video, nor in his popular books, but it’s one that I still find valuable, and plausibly true. Deffeyes claimed the wildly varying prices of the time were the result of an inherent queue imbalance between a varying supply and an inelastic demand. If this is the cause, we’d expect the price of oil to jump up and down the way the wait at a barber shop gets longer and shorter. Assume supply varies because discoveries come in random packets, while demand rises steadily, and it all makes sense. After each new discovery, price is seen to fall. It then rises slowly till the next discovery. At least in the short term, price is a symptom of supply unpredictability rather than a useful corrective to supply needs. This view is the opposite of Adam Smith’s, but I think he’s not wrong, at least in the short term with a necessary commodity like oil.

Academics accepted the peak oil prediction, I suspect, in part because it supported a Marxian remedy. If oil was running out and the market was broken, then our only recourse was government management of energy production and use. By the late 70s, Jimmy Carter told us to turn our thermostats to 65°F. This went with price controls, gas rationing, a 55 mph speed limit, and a strong message of population management – birth control. We were running out of energy, we were told, because we had too many people and they (we) were using too much. America’s growth days were behind us, and only the best and the brightest could be trusted to manage our decline into the abyss. I half believed these scary predictions, in part because everyone did, and in part because they made my research at Princeton particularly important. The science fiction of the day told tales of bold energy leaders, and I was ready to step up and lead, or so I thought.

By 2009, Dr. Deffeyes was being regarded as Chicken Little, as world oil production continued to expand.

I’m happy to report that none of the dire predictions of the ’70s to ’90s came to pass. Some of my colleagues became world leaders; the rest became stock brokers with their own private planes and SUVs. As of my writing in 2018, world oil production has been rising, and even King Hubbert’s original prediction of US production has been overturned. Deffeyes’s reputation suffered for a few years, then politicians moved on to other dire dangers that require world-class management. Among the major dangers of today: school shootings, Ebola, and Al Gore’s claim that the ice caps would melt by 2014, flooding New York. Sooner or later, one of these predictions will come true, but the lesson I take is that it’s hard to predict change accurately.

Just when you thought US oil was depleted, production began rising. We now produce more than in 1970.

Much of the new oil production you’ll see on the chart above comes from tar sands, oil that Deffeyes considered unrecoverable, even while it was being recovered. We also discovered new ways to extract leftover oil, and got better at using nuclear electricity and natural gas. In the long run, I expect nuclear electricity and hydrogen will replace oil. Trees have a value, as does solar. As for nuclear fusion, it has not turned out practical. See my analysis of why.

Robert Buxbaum, March 15, 2018. Happy Ides of March.

Hydrogen-powered trucks and buses

With all the attention to electric cars, I figure we’re either at the dawn of electric-propulsion vehicles or of electric-propulsion-vehicle hype. Elon Musk’s Tesla is now valued at $59 B, more than GM or Ford, despite the company having massive losses and few cars. The valuation, I suspect, has to do with the future and autonomous vehicles. There are many who expect self-driving vehicles will rule the road, but the form is uncertain. In this space, I suspect that hydrogen-battery hybrids make more sense than batteries alone, and that the first large-impact uses will be trucks and buses — vehicles that go long distances on highways.

Factory-floor hydrogen fueling station for Plug Power fuel-cell forklifts. Plug’s fuel cells reached their 10 millionth refueling this January.

Currently there are only two brands of autonomous vehicles available in the US: the Cadillac CT6, a gasoline-powered car, and the Tesla. Neither works well except on highways, because highways present fewer problems than city streets, and only the CT6 allows you to take your hands off the wheel — see review here. To me, being able to take your hands off the wheel is the only real point of autonomous control, and if one can do this only on the highway, that’s acceptable. Highway driving gets quite tiring after the first hundred miles or so, and any relief is welcome.

Tesla’s battery cars allow for some auto-driving on the highway, but you can’t take your hands off the wheel or the car stops. That battery cars compete at all for highway driving, I suspect, is only possible because the US government heavily subsidizes the battery cost; Musk then hides the true cost among the corporate losses. Without this, hydrogen fuel-cell vehicles would be cheaper, I suspect, while providing better range (see my calculation here). Adding to the advantage of hydrogen over batteries, the fill time for hydrogen is much faster. Slow charge times are a real drawback for highway vehicles traveling any significant distance. Hydrogen fuel isn’t cheap, but it’s becoming cheaper, and is now about double the price of gasoline on a per-mile basis. The advantage over gasoline is that it provides pollution-free, electric propulsion, and this is well suited to driverless vehicles. Both gasoline and battery vehicles can have odd acceleration issues, e.g. when the gasoline gets wet, or the battery gets run down. And it’s not as if there are no hydrogen fueling stations. Hydrogen fuel-cell power has become a major competitor for forklifts, and recently passed its ten millionth refueling in that application. The same fueling stations that serve the larger forklift users could serve the self-driving truck and bus market.

For around-town use, hydrogen vehicles can still use batteries, and the combined vehicle can have very impressive performance. A Dutch company has begun to sell kits to convert Tesla Model S autos to combined battery + hydrogen. With these kits, they boast a 620 mile (1000 km) range instead of the normal 240 miles. See the product here. On the horizon, in the self-driving fuel cell market, Hyundai has debuted the “Nexo” with a range of 370 miles. Showing off the self-driving capability, Nexos were used to carry spectators between venues at the Pyeongchang Olympics. Japanese competitors, the Toyota Mirai (312 miles) and the Honda Clarity Fuel Cell (366 miles), can be expected to provide similar capabilities.

Cadillac CT6 with Super Cruise: an autonomous vehicle you can buy today that allows you to take your hands off the wheel.

The reason I believe in hydrogen trucks and buses more than cars is the difficulty of refueling. Southern California has installed some 36 public hydrogen refueling stations at last count, but that’s too few for most personal car use. Other states have even fewer spots where you can drive up and get hydrogen; Michigan has only two. This does not matter for a commercial truck or bus, because they run between fixed depots, and these can be fitted with hydrogen dispensers like those used for forklifts. It’s possible trucks could even use the same dispensers as the forklifts. If one needs a little extra range, one can add a “hydrogen jerry can” holding an extra kilogram of H2, good for 20-30 miles of emergency range. I do not see battery electric vehicles working as well, because the charge times are so slow, the range so modest, and the electric power needs so large. To charge a 100 kWh battery in an hour, the charging station would need an electric feed of 100 kW, about as much as a typical mall. With 100 A at 240 V, the most you can normally get, expect a charge of 4 1/2 hours or more.
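
The charge-time arithmetic above is easy to check. Here is a minimal sketch in Python, assuming a 100 kWh pack, a 240 V / 100 A feed, and roughly 90% charging efficiency (the efficiency number is my assumption, not from the text):

```python
# Quick check of the charge-time arithmetic for a battery truck or bus.
battery_kwh = 100.0          # pack size (kWh)
volts, amps = 240.0, 100.0   # about the largest feed normally available
efficiency = 0.90            # assumed charger/battery efficiency

feed_kw = volts * amps / 1000.0             # 24 kW
hours = battery_kwh / (feed_kw * efficiency)
print(f"Feed power: {feed_kw:.0f} kW, charge time: {hours:.1f} hours")
# -> about 4.6 hours, versus a few minutes to fill a hydrogen tank
```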

The real benefit of hydrogen trucks and buses is autonomy: being able to run the route without major input from a driver. So why not gasoline, as with the Cadillac? My answer is simplicity. If you want driverless simplicity, you want electric or hydrogen, and only hydrogen provides the long range and fast fueling that make the product worthwhile.

Robert Buxbaum March 12, 2018. My company, REB Research provides hydrogen purifiers and hydrogen generators.

Yogurt making for kids

Yogurt making is easy, and is a fun science project for kids and adults alike. It’s cheap, quick, easy, reasonably safe, and fairly useful. Like any real science, it requires mathematical thinking if you want to go anywhere really, but unlike most science, you can get somewhere even without math, and you can eat the experiments. Yogurt making has been done for centuries, and involves nothing more than adding some yogurt culture to a glass of milk and waiting. To do this the traditional way, you wait with the glass sitting outside of any refrigeration (they didn’t have refrigeration in the olden days). After a few days, you’ll have tasty yogurt. You can get tastier yogurt if you add flavors. In one of my most successful attempts at flavoring, I added 1/2 ounce of “skinny syrup” (toffee flavor) to a glass of milk. The results were most satisfactory, IMHO.

My latest batch of home-made flavored yogurt, made in a warm spot behind this coffee urn.

Now to turn yogurt-making into a science project. We’ll begin with a hypothesis. I generally tell people not to start with a hypothesis (it biases your thinking), but here I will make an exception, as I have a peculiarly non-biased hypothesis to suggest. Besides, most school kids are told they need one. My hypothesis is that there must be better ways to make yogurt and worse ways. A hypothesis should be avoided if it contains unfounded assumptions, or if it points to a particular answer — especially an answer that no one would care about.

As with all science, you’ll want to take numerical data of cause and effect. I’d suggest that temperature data is worth taking. The main yogurt-making bacteria are Lactobacillus bulgaricus and Streptococcus thermophilus, and the names suggest that warm temperatures will be good (lact- = milk in Latin; thermophilus = heat-loving). Also making things interesting is the suspicion that if you make things too warm, you’ll cook your organisms and you won’t get any yogurt. I’ve had this happen, both with over-heat and under-heat. My first attempt was to grow yogurt in the refrigerator, but I got no results. I then tried the kitchen counter and got yogurt, and then I heated things a bit more by growing next to a coffee urn, and got better yogurt; yet more heat and nothing.

For a science project, you might want to make a few batches of yogurt, at least 5, and these should be made at 2-3 different temperatures. If temperature is a cause of the yogurt coming out better or worse, you’ll need to be able to measure how much “better.” You may choose to study taste, and that’s important, but it’s hard to quantify, so it should not be the whole experiment. I would begin by testing thickness, or the time to get to some fixed degree of thickness; I’d measure thickness by seeing if a small weight sinks. A penny is a cheap, small weight, and I know it sinks in milk, but not in yogurt. You’ll want to wash your penny first, or no one will eat the yogurt. I used hot water from the urn to clean and sterilize my pennies.

Another thing worth testing is the effect of using different milks: whole milk, 2%, 1%, or skim; goat milk, or almond milk. You can also try adding stuff, or starting with different starter cultures, or different amounts. Keep numerical records of these choices, then keep track of how they affect how long it takes for the gel to form, and how the stuff looks or tastes to you. Before you know it, you’ll have some very good product at half the price of the stuff in the store. If you really want to move forward fast, you might apply semi-random statistics to your experimental choices, as sketched below. Good luck.
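
For anyone who wants to try the semi-random approach, here is a minimal sketch in Python. The batch temperatures and set-times are invented placeholders; the point is just to randomize the run order and then average the set-times at each temperature:

```python
import random
from statistics import mean

# Planned batches: temperatures (F) to try, with a couple of repeats of each.
planned_temps = [68, 68, 85, 85, 100]
random.shuffle(planned_temps)   # randomize run order so other factors don't line up with temperature
print("Run the batches in this order:", planned_temps)

# After the experiment: (temperature F, hours until a clean penny no longer sinks) - invented numbers
results = [(68, 30), (68, 26), (85, 12), (85, 14), (100, 8)]
for t in sorted({temp for temp, _ in results}):
    times = [h for temp, h in results if temp == t]
    print(f"{t} F: average set time {mean(times):.1f} h over {len(times)} batch(es)")
```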

Robert Buxbaum, March 2, 2018. My latest observation: what happens if you leave the yogurt to sit too long? It doesn’t get moldy (perhaps the lactic acid formed kills germs?), but the yogurt separated into curds and whey. I poured off the whey, the unappealing, bitter yellow liquid. The thick white remainder is called “Greek” yogurt. I’m not convinced this tastes better, or is healthier, BTW.

Elvis Presley and the opioid epidemic

For those who suspect that the medical profession may bear some responsibility for the opioid epidemic, I present a prescription written for Elvis Presley, August 1977. Like many middle-aged folks, he suffered from back pain and stress. And like most folks, he trusted the medical professionals to “do no harm,” prescribing nothing with serious side effects. Clearly he was wrong.

Elvis’s prescription, August 1977. Opioid city.

The above prescription is a disaster, but you may think it is just an aberration: a crank doctor who hooked (literally) a celebrity patient. It’s not as aberrant as one might think. I worked for a pharmacist in the 1970s, and the vast majority of prescriptions we saw were for these sorts of mood-altering drugs. The pharmacist I worked for refused to serve many of these customers, and even phoned the doctor to yell at him over one particularly egregious case: a shivering, skinny kid with a prescription for diet pills. But my employer was the aberration. All those prescriptions would be filled by someone, and a great number of people walked about in a haze because of it.

The popular Stones song, “Mother’s Little Helper,” would not have been so popular if it were not true to life. One might ask why it was true to life, as doctors might have prescribed less-addicting drugs. I believe the reason is that doctors listened to advertising, then and now. They might have suggested marijuana for pain or depression — there was good evidence it worked — but there were no colorful brochures with smiling actors. The only positive advertising was for opioids, speed, and Valium, and that is what was prescribed then, and still is today.

One of the most common drugs prescribed to kids these days is speed, marketed as “Ritalin.” It prevents daydreaming and motor-mouth behaviors; see my essay, is ADHD a real disease? I’m not saying that ADD kids aren’t annoying, or that folks don’t have back aches, but the current drugs are worse than marijuana, as best I can tell. It would be nice to get non-high-inducing pot extract sold in pharmacies, in my opinion, and not in specialty stores (I trust pharmacists). As things now stand, the users have medical prescription cards, but the black-market sellers end up in jail.

Robert Buxbaum, January 25, 2018. Please excuse the rant. I ran for sewer commissioner in 2016, and as a side issue, I’d like to reduce the harsh “minimum” penalties for crimes of possession with intent to sell, while opening up sale to normal, druggist channels.

Keeping your car batteries alive.

Lithium-battery cost and performance have improved so much that no one uses Ni-Cad or metal-hydride batteries any more. Lithium is now the choice for tools, phones, and computers, while lead-acid batteries are used for car starting and emergency lights. I thought I’d write about the care and trade-offs of these two remaining options.

As things currently stand, you can buy a 12 V, 40 Amp-hour lead-acid car battery for about $95. This suggests a cost of about $200/kWh, or $400/kWh if you only discharge half way (good practice). This is cheaper than the per-energy cost of lithium batteries, about $500/kWh, or $1000/kWh if you only discharge half-way (also good practice), but people pick lithium because (1) it’s lighter, and (2) it’s generally longer lasting. Lithium generally lasts about 2000 half-discharge cycles vs 500 for lead-acid.
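
Here is a minimal Python check of these numbers, including the cost per usable kWh per cycle that the next paragraph alludes to. The prices and cycle lives are the ones quoted above:

```python
# Cost comparison using the numbers quoted above (2018 prices).
def cost_per_usable_kwh_cycle(price_per_kwh, usable_fraction, cycles):
    """Dollars per kWh actually delivered, spread over the battery's cycle life."""
    return price_per_kwh / (usable_fraction * cycles)

lead_acid = cost_per_usable_kwh_cycle(200.0, 0.5, 500)   # -> $0.80 per kWh-cycle
lithium   = cost_per_usable_kwh_cycle(500.0, 0.5, 2000)  # -> $0.50 per kWh-cycle
print(f"Lead-acid: ${lead_acid:.2f} per usable kWh-cycle")
print(f"Lithium:   ${lithium:.2f} per usable kWh-cycle")
```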

On the basis of cost per cycle, lead-acid batteries would have been replaced completely, except that they are more tolerant of cold and heat, and they easily deliver the 400-800 Amps needed to start a car. Lithium batteries have problems at these currents, especially when it’s hot or cold. Lithium batteries also deteriorate fast in the heat (over 40°C, 105°F), and you cannot charge a lithium car battery at more than 3-4 Amps at temperatures below about 0°C, 32°F. At higher currents, a coat of lithium metal forms on the anode. This lithium can react with water: 2Li + H2O –> Li2O + H2, or it can form dendrites that puncture the cell separators, leading to fire and explosion. If you charge a lead-acid battery too fast, some hydrogen can form, but that’s much less of a problem. If you are worried about hydrogen, we sell hydrogen getters and catalysts that remove it. Here’s a description of the mechanisms.

The best thing you can do to keep a lead-acid battery alive is to keep it near-fully charged. This can be done by taking long drives, by idling the car (warming it up), or by use of an external trickle charger. I recommend a trickle charger in the winter because it’s non-polluting. A lead-acid battery that’s kept at near-full charge will give you enough charge for 3000 to 5000 starts. If you let the battery completely discharge, you get only 50 or so deep cycles, or 1000 starts. But beware: full discharge can creep up on you. A new car battery holds 40 Ampere-hours of charge, about 72,000 Ampere-seconds if you use only half of it (good practice). Starting the car takes about 5 seconds at 600 Amps, using 3000 Amp-s, or roughly 4% of the battery’s usable charge. The battery will recharge as you drive, but not that fast. You’ll have to drive for at least 500 seconds (8 minutes) to recharge the energy used in starting. But in the winter it is common that your drive will be shorter, and that a lot of your alternator power will go to the defrosters, lights, and seat heaters. As a result, your lead-acid battery will not fully charge, even on a 10-minute drive. With every week of short trips, the battery will drain a little, and sooner or later you’ll find your battery is dead. Beware and recharge, ideally before 50% discharge.
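
The bookkeeping above is easy to reproduce. Here is a minimal sketch in Python; the 6 A net charging current is my assumption, implied by the 500-second recharge figure in the text:

```python
# Charge bookkeeping for a 40 Ah lead-acid starting battery.
capacity_As = 40 * 3600           # 144,000 A-s total
usable_As = capacity_As / 2       # stay above 50% charge: 72,000 A-s

start_draw_As = 600 * 5           # 600 A for 5 s = 3,000 A-s per start
fraction_per_start = start_draw_As / usable_As
print(f"Each start uses {fraction_per_start:.1%} of the usable charge")

net_charge_A = 6.0                # assumed net current into the battery while driving
recharge_s = start_draw_As / net_charge_A
print(f"Recharge time after one start: {recharge_s:.0f} s ({recharge_s/60:.0f} min)")
```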

A little chemistry will help explain why full discharging is bad for battery life (for a different version, see Wikipedia). For the first half-discharge of a lead-acid battery, the reaction is:

Pb + 2PbO2 + 2H2SO4  –> PbSO4 + Pb2O2SO4 + 2H2O.

This reaction involves 2 electrons and has a -∆G° of more than 394 kJ, suggesting a reversible voltage of more than 2.04 V per cell, with voltage decreasing as H2SO4 is used up. Any discharge forms PbSO4 on the negative plate (the lead anode) and converts lead oxide on the positive plate (the cathode) to Pb2O2SO4. Discharging to more than 50% involves the following reaction, which converts the Pb2O2SO4 on the cathode to PbSO4:

Pb + Pb2O2SO4 + 2H2SO4  –> 3PbSO4 + 2H2O.

This also involves two electrons, but -∆G° < 394 kJ, and the voltage is less than 2.04 V. Not only is the voltage less, the maximum current is less. As it happens, Pb2O2SO4 is amorphous, adherent, and conductive, while PbSO4 is crystalline, not that adherent, and not-so conductive. Operating at more than 50% discharge results in less voltage, increased internal resistance, decreased H2SO4 concentration, and lead sulfate flaking off the electrode. Even letting a battery sit at low charge contributes to PbSO4 flaking off. If the weather is cold enough, the low-concentration H2SO4 freezes and the battery case cracks. My advice: get out your battery charger and top up your battery. Don’t worry about overcharging; your battery charger will sense when the charge is complete. A lead-acid battery operated at near-full charge, between 67 and 100%, will provide about 1500 cycles, nearly as many as lithium.
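
The 2.04 V figure follows from the free energy via E = -∆G/(nF). A quick check in Python:

```python
# Reversible cell voltage from the free energy of the 2-electron reaction above.
delta_G = -394_000.0   # J/mol; the -394 kJ (per 2 electrons) quoted above
n, F = 2, 96_485       # electrons transferred, Faraday constant (C/mol)
E = -delta_G / (n * F)
print(f"E = {E:.2f} V per cell")   # -> about 2.04 V
```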

Trickle charging my wife’s car: good for battery life. At 6 Amps, expect a full charge to take 6 hours or more. You might want to recharge the battery in your emergency lights too. 

Lithium batteries are the choice for tools and electric vehicles, but the chemistry is different. For the longest life with lithium batteries, they should not be charged fully. If you charge fully, they deteriorate and self-discharge, especially when warm (100°F, 40°C). If you operate at 20°C between 75% and 25% charge, a lithium-ion battery will last 2000 cycles; at 100% to 0%, expect only 200 cycles or so.

Tesla cars use lithium batteries of a special type, lithium-cobalt. Such batteries have been known to explode, but Tesla adds sophisticated electronics and cooling systems to prevent this. The Chevy Volt and Bolt use lithium batteries too, but they are less energy-dense. In either case, assuming $1000/kWh and a 2000-cycle life, the battery cost of an EV is about 50¢/kWh-cycle. Add to this the cost of electricity, 15¢/kWh including the over-potential needed to charge, and I find a total cost of operation of 65¢/kWh. EVs get about 3 miles per kWh, suggesting an energy cost of about 22¢/mile. By comparison, for a 23 mpg car that uses gasoline at $2.80/gal, the energy cost is 12¢/mile, about half that of the EV. For now, I stick to gasoline for normal driving, and for long trips suggest buses, trains, and flying.
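
The per-mile comparison can be checked in a few lines of Python, using the prices and efficiencies quoted above:

```python
# EV vs gasoline energy cost per mile, with the numbers quoted above.
battery_cost_per_kwh_cycle = 1000.0 / 2000        # $0.50 per kWh-cycle
electricity = 0.15                                # $/kWh delivered to the battery
ev_cost_per_kwh = battery_cost_per_kwh_cycle + electricity
ev_cost_per_mile = ev_cost_per_kwh / 3.0          # about 3 miles per kWh

gas_cost_per_mile = 2.80 / 23.0                   # $2.80/gal at 23 mpg

print(f"EV:       {ev_cost_per_mile*100:.0f} cents/mile")   # ~22 cents
print(f"Gasoline: {gas_cost_per_mile*100:.0f} cents/mile")  # ~12 cents
```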

Robert Buxbaum, January 4, 2018.

Why is it hot at the equator, cold at the poles?

Here’s a somewhat mathematical look at why it is hotter at the equator than at the poles. This is high school or basic college level science, using trigonometry (pre-calculus), a slight step beyond the basic statement that the sun hits down more directly at the equator than at the poles. That’s the kid’s explanation, but we can understand better if we add a little math.

Solar radiation hits Detroit, or any other non-equator point, at an angle. As a result, less radiation power hits each square meter of land.

Let’s use the diagram at right and trigonometry to compare the amount of sun-energy that falls on a square meter of land at the equator (0° latitude) and in a city at 42.5° N latitude (Detroit, Boston, and Rome are at this latitude). In each case, let’s consider high noon on March 21 or September 20. These are the two equinox days, the only days each year when day and night are of equal length, and the only times when the angle of the sun is easy to calculate: at noon it deviates from the vertical by exactly the latitude.

More specifically, the equator is at zero latitude, so on the equator at high noon on the equinox, the sun shines from directly overhead, 0° from the vertical. Since the sun delivers about 1050 W/m², every square meter of equator can expect to receive 1050 W of sun-energy, less the amount reflected off clouds and dust, or scattered off air molecules (air scattering is what makes the sky blue). Further north, Detroit, Boston, and Rome sit at 42.5° latitude. At noon on March 21, the sun strikes the earth there at 42.5° from the vertical, as shown in the lower figure above. From trigonometry, you can see that each square meter of these cities receives cos 42.5° as much power as a square meter at the equator, except for any difference in clouds, dust, etc. Without clouds, that would be 1050 cos 42.5° = 774 W. Less sun power hits per square meter because each square meter is tilted. Earlier and later in the day, each spot gets less sunlight than at noon, but the proportion is the same, at least on the equinox days.

To estimate the likely temperature in Detroit, Boston, or Rome, I will use a simple energy balance. Ignoring heat storage in the earth for now, we will say that the heat in equals the heat out. We also ignore heat transfer by way of winds and rain, and approximate by saying that the heat out leaves by black-body radiation alone, radiating into the extreme cold of space. This is not a bad approximation, since black-body radiation is the main heat-removal mechanism in most situations where large distances are involved. I’ve discussed black-body radiation previously; the amount of energy radiated is proportional to emissivity and to T⁴, where T is the temperature measured in an absolute temperature scale, Kelvin or Rankine. Based on this, and assuming that the emissivity of the earth is the same in Detroit as at the equator,

T_Detroit / T_equator = (cos 42.5°)^¼ = 0.927

I’ll now calculate the actual temperatures. For American convenience, I’ll calculate in the Rankine temperature scale, the absolute Fahrenheit scale. In this scale, 100°F = 560°R, 0°F = 460°R, and the temperature of space is 0°R to a good approximation. If the average temperature of the equator is 100°F = 38°C = 560°R, we calculate that the average temperature of Detroit, Boston, or Rome will be about 0.927 x 560 = 519°R = 59°F (15°C). This is not a bad prediction, given the assumptions. We can expect the temperature to be somewhat lower at night, as there is no light, but it will not fall to zero, as there is retained heat from the day. The same reason, retained heat, explains why these cities will be warmer on September 20 than on March 21.
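
Here is the same energy-balance estimate as a short Python function, so you can try other latitudes or sun angles (including the 65° polar case used further below). The 560°R equatorial average is the assumption from the text:

```python
import math

T_EQUATOR_R = 560.0   # assumed average equatorial temperature, degrees Rankine (100 F)

def surface_temp_R(sun_angle_from_vertical_deg):
    """Energy balance: absorbed power ~ cos(angle), radiated ~ T^4, so T ~ cos(angle)**0.25."""
    return T_EQUATOR_R * math.cos(math.radians(sun_angle_from_vertical_deg)) ** 0.25

for label, angle in [("Detroit/Boston/Rome, equinox noon", 42.5),
                     ("Pole, average summer sun 25 deg above horizon", 65.0)]:
    t_R = surface_temp_R(angle)
    t_F = t_R - 460.0
    print(f"{label}: {t_R:.0f} R = {t_F:.0f} F")
```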

In the summer, these cities will be warmer because they are in the northern hemisphere, and the north pole is tilted toward the sun by 23°. At the height of summer (June 21), at high noon, the sun shines on Detroit at an angle of 42.5° - 23° = 19.5° from the vertical. The difference in angle is why these cities are warmer on that day than on March 21. The equator will be cooler on that day (June 21) than on March 21, since the sun’s rays strike the equator at 23° from the vertical on that day. These temperature differences are behind the formation of tornadoes and hurricanes, with a tornado season in the US centering on May to July.

When looking at the poles, we find a curious problem in guessing what the average temperature will be. At noon on the equinox, the sun comes in horizontally, or at 90° from the vertical. We thus expect there is no warming power at all that day, and none for the six months of winter either. At first glance, you’d think the temperature at the poles would fall to zero, at least for six months of the year. It isn’t zero because there is retained heat from the summer, but still it makes for a more difficult calculation.

To figure an average temperature for the poles, let’s remember that during the 6-month summer the sun shines for 24 hours per day, and that the angle of the sun gets as high as 23° above the horizon, or 67° from the vertical, for all 24 hours. Let’s assume that the retained heat from the summer is what keeps the temperature from falling too low in the winter, and calculate the temperature from an average summer sun angle.

Let’s assume that the sun comes in at the equivalent of 25° above the horizon during the 6-month “day” of the polar summer. I don’t look at the equinox here, but rather the solar day, and note that the heating angle stays fixed through each 24-hour day during the summer; it does not decrease in the morning or as the afternoon wears on. Based on this angle, we expect that

T_Pole / T_equator = (cos 65°)^¼ = 0.806

T_Pole = 0.806 x 560°R = 452°R = -8°F (-22°C).

This, as it happens, is 4° colder than the average temperature at the north pole, but not bad, given the assumptions. Maybe winds and water currents account for the difference. Of course there is a large temperature difference at the pole between the fall equinox and the spring equinox, but that’s to be expected. The actual average is -4°F, about the temperature at night in Detroit in the winter.

One last thing, one that might be unexpected, is that the temperature at the south pole is lower than at the north pole, on average -44°F. The main reason for this is that the snow on the south pole is quite deep — more than 1 1/2 miles deep, with some rock underneath. As I showed elsewhere, we expect temperatures to be lower at high altitude. Data collected from cores through the 1 1/2 mile deep snow suggest (to me) chaotic temperature change, with long ice ages and brief (6000 year) periods of warmth. The ice ages seem far worse than global warming.

Dr. Robert Buxbaum, December 30, 2017

Gomez Addams, positive male role-model

The Addams Family did well on Broadway, in the movies, and on TV, but got predictably bad reviews in all three forms. Ordinary people like it; critics did not. Something I like about the series that critics didn’t appreciate is that Gomez is the only positive father character I can think of since the days of “Father Knows Best.”

Gomez is sexual, and sensual; a pursuer and lover, but not a predator.

In most family shows, the father isn’t present at all, or if he appears, he’s violent or an idiot. He’s in prison, or in trouble with the law, regularly insulted by his wife and neighbors; in comedies, he’s sexually ambiguous, insulted by his children, and often insulted by talking pets too. In Star Wars, the only father figures are Vader, a distant menace, and Luke, who’s just distant. In American shows, the parents are often shown as divorced; the children are reared by the mother with the help of a nanny, a grandparent, or a butler. In Japanese works, I hardly see a parent. By contrast, Gomez is present, center stage. He’s not only involved, he’s the respected leader of his clan. If he’s odd, it’s the odd of a devoted father and husband who is comfortable with himself and does not care to impress others. It’s the outsiders, the visitors, who we find have family problems, generally caused by a desire to look perfect.

Gomez is hot-blooded, sexual and sensual, but he’s not a predator, or violent. He’s loved by his wife, happy with his children, happy with his life, and happy with himself. As best we can tell, he’s on good-enough terms with the milk man, the newspaper boy, and the law. Though not a stick-in-the-mud, he’s on excellent terms with the rest of the Addams clan, and he’s good with the servants: Lurch, Thing, and for a while a gorilla who served as maid (none too well). One could do worse than to admire a person who maintains a balance like that between the personal, the family, the servants, and the community.

On a personal level, Gomez is honest, kind, generous, loving, and involved. He has hobbies, and his hobbies are manly: fencing, chess, dancing, stocks, music, and yoga. He reads the newspaper and smokes a cigar, but is not addicted to either. He plays with model trains too, an activity he shares with his son. Father and son enjoy blowing up the train — it’s something kids used to do in the era of firecrackers. Gomez is not ashamed to do it, and approves when his son does.

Gomez Addams, in the Addams Family musical, gives advice and comfort to his daughter, who is going through a rough stretch in her relationship with a young man, and sings that he’s happy and sad.

Other TV and movie dads have less-attractive hobbies: football watching and beer drinking, primarily. Han Solo is a smuggler, though he does not seem to need the money. TV dads take little interest in their kids, and their kids return the favor. To the extent that TV dads take an interest, it’s to disapprove: George Costanza’s dad, for example. Gomez is actively interested and is asked for advice regularly. In the video below, he provides touching comfort and advice to his daughter, acknowledging her pain and telling her how proud he is of her. Kids need to hear that from a dad. No other TV dad gives approval like this; virtually no other male does. They are there as props, I think, for strong females and strong children.

The thing that critics dislike, or don’t understand, as best I can tell, is the humor, based as it is on danger and dance. Critics hate humor in general (how many “best picture” Oscars go to comedies?). Critics fear pointless danger, and disapproval, and lawsuits, and second-hand smoke. They are the guardians of correct thinking — just the thinking that Gomez and the show ridicule. Gomez lives happily in the real world of today, but courts danger, death, and lawsuits. He smokes and dances and does not worry what the neighbors think. He tries dangerous things and does not always succeed, but then lets his kids try the same. He dances with enthusiasm. I find his dancing and fearlessness healthier than the over-protective self-sacrifice that critics seem to favor in heroes. To the extent that they tolerate fictional violence, they require the hero to swoop in, protecting others at danger to himself only, while the others look on (or don’t). The normal people are presented as cautious, fearful, and passive. Cold, in a word, and we raise kids to be the same. Cold fear is a paralyzing thing in children and adults; it often brings about the very damage that one tries to avoid.

Gomez is hot: active, happy, and fearless. This heat, this passion, is what makes Gomez a better male role model than, say, Batman. Batman is just miserable, or the current versions are. Ms. Frizzle (Magic School Bus) is the only other TV character who is happy to let others take risks, but Ms. Frizzle is female. Gomez thinks the best of those who come to visit, but we see they usually don’t deserve it. Sometimes they do, and this provides touching moments. Gomez is true to his wife and passionate; most others are not. Gomez kisses his wife, dances with her, and compliments her. The outsiders don’t dance, and snap at their wives; they are motivated by money, status, and acceptability. Gomez is motivated by life itself (and death). The outsiders fear anything dangerous or strange; they are cold inside and suffer as a result. Gomez is hot-blooded and alive: as a lover, a dancer, a fencer, a stock trader, an animal trainer, and a collector. He is the only father with a mustache, a sign of particular masculinity — virtually the only man with one.

Gomez has a quiet, polite, and decent side too, but it’s a gallant version, a masculine, heterosexual version. He’s virtually the only decent man who enjoys life, or, for that matter, is shown to kiss his wife with more than a peck. In TV or movies, when you see a decent, sensitive, or polite man, he is asexual or homosexual. He is generally unmarried, sometimes divorced, and almost always sad — searching for himself. I’m not sure such people are positive role models for the asexual, but they don’t present a lifestyle most would want to follow. Gomez is decent, happy, and motivated; he loves his life and loves his wife, even to death, and kisses with abandon. My advice: be alive like Gomez; don’t be like the dead, cold visitors and critics.

Robert Buxbaum, December 22, 2017.  Some years ago, I gave advice to my daughter as she turned 16. I’ve also written about Superman, Hamilton, and Burr; about Military heroes and Jack Kelly.

Hydrogen permeation rates in Inconel, Hastelloy and stainless steels.

Some 20 years ago, I published a graph of the permeation rate for hydrogen in several metals at low pressure (see the graph here), but I didn’t include stainless steel in the graph.

Hydrogen permeation in clean SS-304; four research groups’ data.

One reason I did not include stainless steel was that there are many stainless steels, and the hydrogen permeation rates are different, especially between austenitic (FCC) steels and ferritic (BCC) steels. Another issue was oxidation. All stainless steels are oxidized, and it affects H2 permeation a lot. You can decrease the hydrogen permeation rate significantly by oxidation, or by surface nitriding, etc. (my company will even provide this service). Yet another issue is cold work. When an austenitic stainless steel is worked — rolled or drawn — some austenite (FCC) material transforms to martensite (a sort of stretched BCC). Even a small amount of martensite causes an order-of-magnitude difference in the permeation rate, as shown below. For better or worse, after 20 years, I’m now ready to address H2 in stainless steel, or as ready as I’m likely to be.

Hydrogen permeation in SS 304 and SS 321. Cold work affects H2 permeation more than the difference between 304 and 321; data of Sun Xiukui, Xu Jian, and Li Yiyi, 1989.

The first graph I’d like to present, above, is a combination of four research groups’ data for hydrogen transport in clean SS 304, the most common stainless steel in use today. SS 304 is a ductile, austenitic (FCC), work-hardening steel of classic 18-8 composition (18% Cr, 8% Ni). It shares the same basic composition with SS 316, SS 321, and 304L, differing only in minor components. The data from the four research groups show a lot of scatter: a factor of 5 variation at high temperature, 1000 K (727°C), and almost two orders of magnitude variation (a factor of 50) near room temperature, 13°C. Pressure is not a factor in creating the scatter, as all of these studies were done with 1 atm (100 kPa) hydrogen transporting to vacuum.

The two likely reasons for the variation are differences in the oxide coat and differences in the amount of cold work. It is possible these are the same explanation, as a martensitic phase might increase H2 permeation by introducing flaws into the oxide coat. As the graph at left shows, working these alloys causes more difference in H2 permeation than any difference between alloys, or at least between SS 304 and SS 321. A good equation for the permeation behavior of SS 304 is:

P (mol/m·s·Pa½) = 1.1 × 10⁻⁶ exp(-8200/T).      (H2 in SS-304)

Because of the strong influence of cold work and oxidation, I’m of the opinion that I get a slightly different, and better, equation if I add in permeation data from three other 18-8 stainless steels:

P (mol/m·s·Pa½) = 4.75 × 10⁻⁷ exp(-7880/T).     (H2 in annealed SS-304, SS-316, SS-321)

Hydrogen permeation through several common stainless steels, as well as Inconel and Hastelloy.

Though this result is about half of the previous one at high temperature, I trust it more, at least for annealed SS-304, and also for any annealed austenitic stainless steel. Just as an experiment, I decided to add a few nickel and cobalt alloys to the mix, and chose to add data for Inconel 600, 625, and 718; for Kovar; for Hastelloy; for Fe-5%Si-5%Ge; and for SS 4130. At left, I plot all of these on one graph along with data for the common stainless steels. To my eyes, the scatter in the H2 permeation rates is indistinguishable from that of SS 304 above, or of the mixed 18-8 steels (data not shown). Including these materials in the plot decreases the standard deviation a bit, to a factor of 2 at 1000 K and a factor of 4 at 13°C. Making a least-squares analysis of the data, I find the following equation for permeation in all common FCC stainless steels, plus Inconels, Hastelloys, and Kovar:

P (mol/m·s·Pa½) = 4.3 × 10⁻⁷ exp(-7850/T).

This equation is near-identical to the equation above for mixed 18-8 stainless steel. I would trust it for annealed or low-carbon metal (SS-304L) to a factor of 2 accuracy at high temperatures, or a factor of 4 at low temperatures. Low carbon reduces the tendency to form martensite. You cannot use any of these equations for hydrogen in ferritic (BCC) alloys, as those rates are quite different, but this is as good as you’re likely to get for basic austenitic stainless and related materials. If you are interested in the effect of cold work, here is a good reference. If you are bothered by the square root of pressure driving force, it’s a result of entropy: hydrogen travels in stainless steel as dissociated H atoms, and the dissociation H2 –> 2H leads to the square root.
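
To show how the correlation gets used, here is a minimal Python sketch that evaluates the last equation and estimates the hydrogen flux through a wall. The wall thickness, temperature, and pressures are example values of my choosing, not from the text; the flux expression, P·(√p₁ − √p₂)/thickness, follows from the square-root (Sieverts-type) driving force discussed above.

```python
import math

def permeability_fcc_ss(T_kelvin):
    """Fitted permeability for annealed FCC stainless steels, Inconel, Hastelloy, Kovar
    (mol / m·s·Pa^0.5), using the correlation P = 4.3e-7 exp(-7850/T) from the text."""
    return 4.3e-7 * math.exp(-7850.0 / T_kelvin)

# Example: a 1 mm thick SS-304 wall at 400 C, 1 atm hydrogen inside, vacuum outside.
T = 673.15                        # K
thickness = 0.001                 # m
p_high, p_low = 101_325.0, 0.0    # Pa

P = permeability_fcc_ss(T)
flux = P * (math.sqrt(p_high) - math.sqrt(p_low)) / thickness   # mol H2 / m^2·s
print(f"Permeability: {P:.2e} mol/m·s·Pa^0.5")
print(f"Flux through a 1 mm wall: {flux:.2e} mol/m^2·s")
```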

Robert Buxbaum, December 17, 2017. My business, REB Research, makes hydrogen generators and purifiers; we sell getters; we consult on hydrogen-related issues, and will (if you like) provide oxide (and similar) permeation barriers.

Change home air filters 3 times per year

Energy-efficient furnaces use a surprisingly large amount of electricity to blow the air around your house. Part of the problem is the pressure drop of the ducts, but quite a lot of energy is lost blowing air through the dust filter. An energy-saving idea: replace the filter on your furnace twice a year or more. Another idea: you don’t have to use the fanciest of filters. Filters provide a lot of back-pressure, especially when they are dirty.

I built a water manometer (see diagram below) to measure the pressure drop through my furnace filters. The pressure drop is measured from the difference in the height of the water column shown. Each inch of water is about 0.04 psi, or 275 Pa. Using this pressure difference and the flow rating of the furnace, I calculated the amount of power lost by the following formula:

W = Q ∆P/ µ.

Here W is the amount of power used, in Watts; Q is the flow rate, m³/s; ∆P is the pressure drop in Pa; and µ is the efficiency of the motor and blower, typically about 50%.

With clean filters (two different brands), I measured 1/8″ and 1/4″ of water column, a pressure drop of 0.005 to 0.01 psi, depending on the filter. The “better” the filter, that is, the higher the MERV rating, the higher the pressure drop. I also measured the pressure drop through a 6-month-old filter and found it to be 1/2″ of water, or 0.02 psi or 140 Pa. Multiplying this by the amount of air moved, 1000 cfm = 25 m³ per minute or 0.42 m³/s, and dividing by the efficiency, I calculate a power use of 118 W. That is 0.118 kWh/hr, or 2.8 kWh/day.
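
Here’s that calculation as a few lines of Python, so you can plug in your own manometer reading and blower rating:

```python
# Blower power lost to filter back-pressure: W = Q * dP / efficiency
dP = 140.0          # Pa, about 1/2 inch of water, as measured on the dirty filter
Q = 0.42            # m^3/s, the 1000 cfm furnace rating
efficiency = 0.50   # motor + blower efficiency

watts = Q * dP / efficiency
kwh_per_day = watts * 24 / 1000.0
print(f"Extra blower power: {watts:.0f} W")              # ~118 W
print(f"That is about {kwh_per_day:.1f} kWh per day")    # ~2.8 kWh/day
```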

The water manometer I used to measure the pressure drop through the filter of my furnace. I stuck two copper tubes into the furnace, and attached a plastic tube half filled with water between the copper tubes. Pressure was measured from the difference in the water level in the plastic tube. Each 1″ of water is about 275 Pa (0.04 psi).

At the above rate of power use and an electricity cost of 11¢/kWh, I find it costs me an extra 2.8 kWh, or about 31¢ per day, to pump air through my dirty-ish filter; that’s $113/year. The cost through a clean filter is about half this, suggesting the dirt-related extra cost grows roughly linearly with filter age, so that over a filter used for t years I spend an average of about $57t per year extra, where t is the use life of the filter.

To calculate the ideal time to change filters, I set up the following formula for the total cost per year, $, including the cost of the filters (at $5/filter) and the dirt-induced electric cost:

$ = 5/t + 57 t.

The shorter the life of the filter, t, the more I spend on filters, but the less on electricity. I now use calculus to find the filter life that minimizes $, and determine that the minimum occurs at t = √(5/57) = 0.30 years. The upshot, then: if your filters are like mine, you should change them three times a year or so, every 3.6 months to be super-exact. For what it’s worth, I buy MERV 5 filters at Ace or Home Depot. If I bought more expensive filters, the optimal change time would likely be once or twice per year. I figure that, unless you are very allergic or make electronics in your basement, you don’t need a filter with a MERV rating higher than 8 or so.
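
The optimization can be checked either with the calculus (d$/dt = -5/t² + 57 = 0 gives t = √(5/57)) or numerically, as in this short Python sketch using the cost numbers above:

```python
import math

filter_price = 5.0    # dollars per filter
dirt_rate = 57.0      # dollars/year of extra electricity per year of filter age

def annual_cost(t_years):
    """Total yearly cost: filter purchases plus the average dirt-induced electric cost."""
    return filter_price / t_years + dirt_rate * t_years

t_opt = math.sqrt(filter_price / dirt_rate)
print(f"Optimal filter life: {t_opt:.2f} years ({t_opt*12:.1f} months)")
print(f"Cost at the optimum: ${annual_cost(t_opt):.0f}/year")
```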

I’ve mentioned in a previous essay/post that dust starts out mostly as dead skin cells. Over time, dust mites eat the skin; it’s pretty nasty stuff. Most folks are allergic to the mites, but I’m not convinced that the filter on your furnace does much to isolate you from them, since the mites tend to hang out in your bed and clothes (a charming thought, I know).

Old-fashioned “octopus” furnace. Free convection.

The previous house I had had no filter on the furnace (and no blower). I noticed no difference in my tendency to cough or itch. That furnace relied for circulation on the tendency of hot air to rise. That is, “free convection” circulated air through the home and furnace by way of “octopus” ducts. If you wonder what a furnace like that looks like, here’s a picture.

I calculate that a 10-foot column of air that is 30°C warmer than the air in the house will have a buoyancy of about 0.00055 psi (about 3.8 Pa, or 0.015″ of water). That was enough pressure to drive circulation through my home, and might even have driven air through a clean, low-MERV dust filter. The furnace didn’t use any more gas than a modern furnace would, as best I could tell, since I was able to adjust the damper easily (I could see the flame). It used no electricity except for the thermostat control, and the overall cost was lower than for my current, high-efficiency furnace with its electric blower and forced convection.
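
That buoyancy figure can be checked from the density difference of warm and cool air. Here is a minimal sketch in Python using ideal-gas densities; the 10-foot column and 30°C difference are from the text, the absolute temperatures are my assumption, and the result lands within rounding of the 0.00055 psi quoted above:

```python
# Stack-effect (buoyancy) pressure of a 10 ft column of air, 30 C warmer than the house air.
g = 9.81                        # m/s^2
h = 10 * 0.3048                 # 10 feet, in meters
T_house, T_warm = 293.0, 323.0  # K (about 20 C and 50 C; mainly the 30 C difference matters)

def air_density(T_kelvin):
    """Ideal-gas density of air at 1 atm, kg/m^3."""
    return 101_325.0 / (287.0 * T_kelvin)

dP = g * h * (air_density(T_house) - air_density(T_warm))   # Pa
print(f"Buoyant pressure: {dP:.1f} Pa = {dP/6895:.5f} psi")  # ~3.3 Pa, ~0.0005 psi
```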

Robert E. Buxbaum, December 7, 2017. I ran for water commissioner, and post occasional energy-saving or water saving ideas. Another good energy saver is curtains. And here are some ideas on water-saving, and on toilet paper.

Bitcoin risks, uses, and bubble

Bitcoin prices over the last 3 years

As I write this, the price of a single bitcoin stands at approximately $11,100, up some 2000% in the last 6 months, suggesting it is a financial bubble. Or maybe it’s not: just a very risky investment suited for inclusion in a regularly balanced portfolio. These are two competing views of bitcoin, and there are two ways to distinguish between them. One is on the basis of technical analysis — does this fast rise look like a bubble? (Yes!) The other is to accept that bitcoin has a fundamental value, one that I’ll calculate below. In either case, the price rise is so fast that it is very difficult to conclude that the rise is not mostly driven by speculation: the belief that someone else will pay more later. The history of many bubbles suggests that all bubbles burst sooner or later, and that everyone holding the item loses when it does. The only winners are the last few who get out just before the burst. The speculator thinks that’s going to be him, while the investor uses rebalancing to get some of the benefit and fun, without having to know exactly when to get out.

That bitcoin is a bubble may be seen by comparing the price three years ago. At that point it was $380 and dropping. A year later, it was $360 and rising. One can compare the price rise of the past 2-3 years with that of some famous bubbles and see that bitcoin has risen roughly 30-fold, an increase that is on a path to beat them all except, perhaps, the tulip bubble of 1637.

A comparison between Bitcoin prices, and those of tulips, 1929 stocks, and other speculative bubbles; multiple of original price vs year from peak.

That its price looks like a bubble is not to deny that bitcoin has a fundamental value. Bitcoin is nearly un-counterfeit-able, and its ownership is nearly untraceable. These are interesting properties that make bitcoin valuable mostly for illegal activity. To calculate the fundamental value of a bitcoin, it is only necessary to know the total value of bitcoin business transactions and the “velocity of money.” As a first guess, let’s say that all the transactions are illegal and add up to the equivalent of the GDP of Michigan, $400 billion/year. The value of a single bitcoin would be this number divided by the number of bitcoins in circulation, 12,000,000, and by the velocity: the number of business transactions per year per coin. I’ll take this to be 3 per year. It turns out there are 5 bitcoin transactions total per year per coin, but 2/5 of those, I’ll assume, are investment transactions. Based on this, a single bitcoin should be worth about $11,100, almost exactly its current valuation. The velocity number, 3, averages over those bitcoins that are held as investments and never traded, and those actively being used in smuggling, drug deals, etc.
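
This is just the quantity equation of money: value per coin = annual transaction volume / (coins × velocity). A minimal Python check with the numbers above, and with the larger-economy scenario discussed below:

```python
def coin_value(annual_volume, coins, velocity):
    """Fundamental value per coin: annual transaction volume / (coins in use * turnovers per year)."""
    return annual_volume / (coins * velocity)

# The Michigan-sized case used above: $400B/yr of trade, 12M coins, velocity 3
print(f"${coin_value(400e9, 12e6, 3):,.0f}")     # ~ $11,111

# The larger case discussed below: $1,600B/yr, 24M coins, velocity 4
print(f"${coin_value(1600e9, 24e6, 4):,.0f}")    # ~ $16,667
```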

If you assume that the bitcoin trade will grow to $600 billion/year in a year or so, the price rise of a single coin will surpass that of Dutch tulip bulbs on fundamentals alone. If you assume it will reach $1,600 billion/year, the GDP of Texas, in the semi-near future, before the Feds jump in, the value of a coin could grow to $44,000 or more. There are several problems for bitcoin owners who are betting on this. One is that the Feds are unlikely to tolerate so large an unregulated, illegal economy. Another is that bitcoins are not likely to go legal. It is very hard (near impossible) to connect a bitcoin to its owner. This is great for someone trying to deal in drugs or to hide profits from the IRS (or his spouse), but a legal merchant will want the protection of the courts. For that, he or she needs to demonstrate ownership of the item being traded, and that is not available with bitcoin. The lack of good, legitimate business suggests to me that the FBI will likely sweep in sooner or later.

Yet another problem is the existence of other cryptocurrencies: Litecoin (LTC), Ethereum (ETH), and Zcash (ZEC), for example. The existence of these coins increases the divisor I used when calculating bitcoin value above. And even with bitcoin, the total number is not capped at 12,000,000. There are another 12,000,000 coins to be found — or mined, as it were, and these are likely to move faster (assume an average velocity of 4). By my calculations, with 24,000,000 bitcoins and a total use of $1,600 billion/year, the fundamental value of each coin is only about $16,700. This is not much higher than its current price. Let the buyer beware.

For an amusing, though not especially helpful, read on the price: here are Bill Gates, Warren Buffett, Charlie Munger, and Noam Chomsky discussing Bitcoin.

Robert Buxbaum, December 3, 2017.