Category Archives: Science: Physics, Astronomy, etc.

Alkaline batteries have second lives

Most people assume that alkaline batteries are one-time-only, throwaway items. Some have used rechargeable cells, but these are Ni-metal hydride or Ni-Cads: expensive variants that have lower power densities than normal alkaline batteries, and that are almost impossible to find in stores. It would be nice to be able to recharge ordinary alkaline batteries, e.g. when a smoke alarm goes off in the middle of the night and you find you’re out, but people assume this is impossible. People assume incorrectly.

Modern alkaline batteries are highly efficient: more efficient than even a few years ago, and that always suggests reversibility. Unlike the acid batteries you learned about in high school chemistry class (a chemistry due to Volta), the chemistry of modern alkaline batteries is based on Edison’s alkaline car batteries. They have been tweaked to such an extent that even the non-rechargeable versions can be recharged. I’ve found I can reliably recharge an ordinary 9V alkaline cell at least once using the crude means of a standard 12 V car battery charger, watching the amperage closely. It only took 10 minutes. I suspect I can get nine lives out of these batteries, but have not tried.

To do this experiment, I took a 9 V alkaline that had recently died, and, finding I had no replacement, attached it to a 6 Amp, 12 V car battery charger that I had on hand. I would have preferred a 2 A charger, and ideally a charger designed to output 9-10 V, but a 12 V charger is what I had available, and it worked. I only let it charge for 10 minutes because, at that amperage, I calculated I’d have recharged to the full 1 Amp-hr capacity. Since new alkaline batteries only claim 1 Amp-hr, I figured that more charge would likely do bad things, perhaps even cause the thing to blow up. After 5 minutes, I found that the voltage had returned to normal and the battery worked fine with no bad effects, but I went for the full 10 minutes. Perhaps stopping at 5 would have been safer.

I charged for 10 minutes (1/6 hour) because the battery claimed a capacity of 1 Amp-hour when new. My thought was 1 Amp-hour = 1 Amp for 1 hour = 6 Amps for 1/6 hour = ten minutes. That’s engineering math for you, the reason engineers earn so much. I figured that watching the recharge for ten minutes was less work and quicker than running to the store (20 minutes). I used this battery in my fire alarm, and have tested it twice since then to see that it works. After a few days in my fire alarm, I took it out and checked that the voltage was still 9 V, just as when the battery was new. Confirming experiments like this are a good idea. Another confirmation occurred when I overcooked some eggs and the alarm went off from the smoke.
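
For those who’d like the arithmetic executable, here it is as a minimal Python sketch; the capacity and charger current are the ones assumed above, not measurements:

    # Recharge-time arithmetic from the post: a 1 Amp-hour battery
    # on a 6 Amp charger (both values assumed above).
    capacity_Ah = 1.0      # claimed capacity of the 9V alkaline, Amp-hours
    charger_A = 6.0        # output of the car battery charger, Amps

    hours = capacity_Ah / charger_A    # 1 Ah / 6 A = 1/6 hour
    minutes = hours * 60.0             # = 10 minutes
    print(f"Full recharge at {charger_A:.0f} A takes {minutes:.0f} minutes")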

If you want to experiment, you can try a 9V as I did, or try putting a 1.5 volt AA or AAA battery in a charger designed for rechargeables. Another thought is to see what happens when you overcharge. Keep safe: do this in a wood box outside at a distance, but I’d like to know how close I got to having an exploding energizer. Also, it would be worthwhile to try several charge/ discharge cycles to see how the energy content degrades. I expect you can get ~9 recharges with a “non-rechargeable” alkaline battery because the label says: “9 lives,” but even getting a second life from each battery is a significant savings. Try using a charger that’s made for rechargeables. One last experiment: If you’ve got a cell phone charger that works on a car battery, and you get the polarity right, you’ll find you can use a 9V alkaline to recharge your iPhone or Android. How do I know? I judged a science fair not long ago, and a 4th grader did this for her science fair project.

Robert Buxbaum, April 19, 2018. For more: semi-dangerous electrochemistry and biology experiments.

What drives the gulf stream?

I’m not much of a fan of today’s kids’ science books because, IMHO, they don’t teach science. They have nice pictures and a few numbers, but almost no equations, and lots of words. You can’t do science that way. On the odd occasion that they give the right answer to some problem, the lack of math means the kid has no way of understanding the reasoning, and no reason to believe the answer. Professional science articles on the web are bad in the opposite direction: too many numbers, and for math they rely on supercomputers. No human can understand the outcome. I like to use my blog to offer science with insight, the type you’d get in an old “everyman science” book.

In previous posts, I gave answers to why the sky is blue, why it’s cold at the poles, why it’s cold on mountains, how tornadoes pick stuff up, and why hurricanes blow the way they do. In this post, we’ll try to figure out what drives the gulf-stream. The main argument will be deduction — disproving things that are not driving the gulf stream to leave us with one or two that could. Deduction is a classic method of science, well presented by Sherlock Holmes.

The gulf stream. The speed in the white area is ≥ 0.5 m/s (1.1 mph).

For those who don’t know, the Gulf Stream is a massive river of water that runs within the Atlantic Ocean. As shown at right, it starts roughly at the end of Florida, runs north to the Carolinas, and then turns dramatically east towards Spain. Flowing east, it’s about 150 miles wide, but only about 62 miles (100 km) wide when flowing along the US coast. According to some of the science books of my youth, this massive flow was driven by temperature; according to others, by salinity (whatever that means); and according to yet others, by wind. My conclusion: they had no clue.

As a start to doing the science here, it’s important to fill in the numerical information that the science books left out. The Gulf Stream is roughly 1000 meters deep, with a typical speed of 1 m/s (2.2 mph). The maximum speed is at the surface, where the stream flows along the US coast: about 2.5 meters per second (5.6 mph); see the map above.

From the size and the speed of the Gulf Stream, we conclude that land rivers are not driving the flow. The Mississippi is a big river with an outflow point near the head waters of the Gulf Stream, but its volume of flow is vastly too small. The volume flow of the Gulf Stream is roughly

Q = wdv = 100,000 x 1000 x 0.5 = 50 million m³/s ≈ 1.8 billion cubic feet/s.

This is about 3000 times the volume flow of the Mississippi, 18,000 m³/s. The great difference in flow suggests the Mississippi could not be the driving force. The map of flow speeds (above) also suggests rivers do not drive the flow: the Gulf Stream does not flow at its maximum speed near the mouth of any river. We now look for another driver.
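
Here’s the check as a few lines of Python, using the width, depth, and speed assumed above and an 18,000 m³/s Mississippi:

    # Rough volume-flow check (width, depth, speed assumed from the post).
    width_m, depth_m, speed_ms = 100_000, 1_000, 0.5
    Q = width_m * depth_m * speed_ms        # volume flow, m3/s
    print(f"Gulf Stream: {Q:.1e} m3/s = {Q * 35.3:.1e} ft3/s")
    print(f"Ratio to the Mississippi: {Q / 18_000:.0f}x")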

Moving on to temperature. Temperature drives the whirl of hurricanes. The logic for temperature driving the Gulf Stream is as follows: it’s warm by the equator and cold at the poles; warm things expand, and since water flows downhill, the poles will always be downhill from the equator. Let’s put some math in here or my explanation will be lacking. First let’s consider how much height difference we might expect to see. The thermal expansivity of water is about 2 x 10⁻⁴ m/m°C (0.0002/°C) in the relevant temperature range. To calculate the amount of expansion, we multiply this by the depth of the stream, 1000 m, and by the temperature difference between two points, e.g. the end of Florida and the Carolina coast. This is 5°C (9°F), I estimate. I calculate the temperature-induced seawater height as:

∆h (thermal) ≈ 5° x .0002/° x 1000m = 1 m (3.3 feet).

This is a fair amount of height. It’s only a small fraction of the height driving the Mississippi River, but it’s something. To see if 1 m is enough to drive the Gulf flow, I’ll compare it to the velocity-head. Velocity-head is a concept that’s useful in plumbing (I ran for water commissioner). It’s the potential-energy height equivalent of any kinetic energy — typically of a fluid flow. The kinetic energy for any velocity v and mass of water m is ½mv². The potential energy equivalent is mgh. Combine the above, remove the mass terms, and we have:

∆h (velocity) = v²/2g.

where g is the acceleration of gravity. For v = 1 m/s and g = 9.8 m/s², ∆h (velocity) = 1/(2 x 9.8) ≈ 0.05 m ≈ 2 inches. This is far less than the driving height calculated above. We have roughly 20x more driving force than we need, but there is a problem: why isn’t the flow faster? And why does the Mississippi move so slowly when it has far more head?
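
Here’s a minimal Python sketch of the two height estimates (the post’s assumed numbers, not measurements):

    # Thermal head vs velocity head, numbers assumed from the post.
    alpha = 2e-4                  # thermal expansivity of water, per deg C
    dT, depth, g = 5.0, 1000.0, 9.8
    h_thermal = alpha * dT * depth          # about 1 m
    for v in (0.5, 1.0, 2.5):               # flow speeds, m/s
        print(f"v = {v} m/s: velocity head = {v**2 / (2*g):.3f} m, "
              f"thermal head = {h_thermal:.1f} m")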

To answer the above questions, and to check whether heat could really drive the Gulf Stream, we’ll check if the flow is turbulent — it is. The measure of turbulence is the Reynolds number, Re#: the ratio of kinetic energy to viscous loss in a fluid flow. Flows are turbulent if this ratio is more than 3000 or so:

Re# = vdρ/µ.

In the above, v is velocity, say 1 m/s; d is depth, 1000 m; ρ is density, 1000 kg/m³ for water; and µ = 0.00133 Pa∙s is the viscosity of water. Plug in these numbers, and we find Re# = 750 million: this flow will be highly turbulent. Assuming a friction factor of 1/20 (0.05), we expect complete mixing every 20 depths, or every 20 km, and thus we need one velocity-head of driving height for every 20 km of flow up the US coast. If the distance to the Carolina coast is 1000 km, a 1 m/s flow needs 50 x 0.05 m = 2.5 m of head, more than the 1 m the temperature difference provides; a 0.5 m/s flow needs only about 0.6 m, just about the height that the temperature difference would suggest. Temperature is thus a plausible driving force for the 0.5 m/s flow, though not likely for the faster 2.5 m/s flow seen in the center of the stream. Turbulent flow is a big part of figuring the mpg of an automobile; it becomes rapidly more important at high speeds.
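
And here’s a sketch of that turbulence bookkeeping, assuming, as above, one velocity-head lost per 20 km:

    # Reynolds number, then the head needed to push turbulent flow 1000 km,
    # assuming one velocity-head lost per 20 km (friction factor 1/20).
    d, rho, mu = 1000.0, 1000.0, 0.00133   # depth m, density kg/m3, Pa.s
    g, distance_km = 9.8, 1000.0
    for v in (0.5, 1.0):                   # flow speeds, m/s
        Re = v * d * rho / mu
        h_needed = (distance_km / 20.0) * v**2 / (2 * g)
        print(f"v = {v} m/s: Re = {Re:.1e}, head needed = {h_needed:.2f} m")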

World sea salinity. The maximum and minimum are in the wrong places.

What about salinity? For salinity to work, the salinity would have to be higher at the end of the flow. As a model of the flow, we might imagine that arctic seawater freezes, concentrating salt in the seawater just below the ice. The heavy, saline water would flow down to the bottom of the sea, and then flow south to an area of low salinity and low pressure. Somewhere in the south, the salinity would be reduced by rains. If evaporation were to exceed the rains, the flow would go in the other direction. Sorry to say, I see no evidence of any of this. For one, the end of the Gulf Stream is not that far north; there is no freezing. For two, there are major rains in the Caribbean, and rains too in the North Atlantic. Finally, the salinity head is too small. Each ppm of salinity adds about 0.0001 g/cc to the density, and the salinity difference in this case is less than 1 ppm, let’s say 0.5 ppm:

∆h (salinity) = 0.0001 x 0.5 x 1000 = 0.05 m.

I don’t see a case for northern-driven Gulf-stream flow caused by salinity.

Surface level winds in the Atlantic. Trade winds in purple, 15-20 mph.

Now consider winds. The wind velocities are certainly enough to produce 5+ mile per hour flows, and the path of the flows is appropriate. Consider, for example, the trade winds. In the southern Caribbean, they blow steadily from east to west slightly above the equator at 15-20 mph. This could certainly drive a circulation flow of 4.5 mph north. Out of the Caribbean basin and along the eastern US coast, the winds blow at 15-50 mph north and east. This too would easily drive a 4.5 mph flow. I conclude that a combination of winds and temperature are the most likely drivers of the Gulf Stream flow. To quote Holmes: once you’ve eliminated the impossible, whatever remains, however improbable, must be the truth.

Robert E. Buxbaum, March 25, 2018. I used the thermal argument above to figure out how cold it had to be to freeze the balls off of a brass monkey.

Beyond oil lies … more oil + price volatility

One of many best-selling books by Kenneth Deffeyes

While I was at Princeton, one of the most popular courses was geology 101, taught by Dr. Kenneth S. Deffeyes. It was a sort of “Rocks for Jocks,” but had an unusual bite since Dr. Deffeyes focused particularly on the geology of oil. Deffeyes had an impressive understanding of oil and oil production, and one outcome of this was his certainty that US oil production had peaked in 1970, and that world oil was about to run out too. The prediction that US oil production had peaked was not original to him. It was called Hubbert’s peak after King Hubbert, who correctly predicted (rationalized?) the date, but published it only in 1971. What Deffeyes added to Hubbert’s analysis was a simplified mathematical justification and a new prediction: that world oil production would peak in the 1980s, or 2000, and then run out fast. By 2005, the peak date was fixed to November 24 of that year: Thanksgiving day 2005 ± 3 weeks.

As with any prediction of global doom, I was skeptical, but I generally trusted the experts, and virtually every expert was on board to predict gloom in the near future. A British group, The Institute for Peak Oil, picked 2007 for the oil to run out, and several movies expanded the theme, e.g. Mad Max. I was convinced enough to direct my PhD research to nuclear fusion engineering, fusion being presented as the essential salvation if our civilization was to survive beyond 2050 or so. I’m happy to report that the dire prediction of his mathematics did not come to pass, at least not yet. To quote Yogi Berra, “In theory, theory is just like reality.” Still, I think it’s worthwhile to review the mathematical thinking, see what went wrong, and see if some value might be retained from the rubble.

Deffeyes’s Malthusian proof went like this: take a year-by-year history of the rate of production, P, and divide this by the amount of oil known to be recoverable in that year, Q. Plot this P/Q data against Q, and you find the data follow a reasonably straight line: P/Q = b – mQ. This occurs between 1962 and 1983, or between 1983 and 2005. For whichever straight line you pick, m and b are positive. Once you find values for m and b that you trust, you can rearrange the equation to read,

P = -mQ² + bQ

You then calculate the peak of production from this as the point where dP/dQ = 0. With a little calculus you’ll see this occurs at Q = b/2m, or at P/Q = b/2: the half-way point on the P/Q vs Q line. If you extrapolate the line to zero production, P = 0, you predict a total possible oil production, QT = b/m. According to this model, this is always double the total Q produced by the peak. In 1983, QT was calculated to be 1 trillion barrels. By May of 2005, again predicted to be a peak year, QT had grown to two trillion barrels.
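
Here’s the linearization as a minimal Python sketch. The production numbers are made up for illustration — hypothetical, not Deffeyes’s data — but the fit-and-extrapolate logic is the one described above:

    import numpy as np

    # Hubbert/Deffeyes linearization with made-up (hypothetical) data.
    Q = np.array([0.3, 0.45, 0.6, 0.75, 0.9])   # cumulative production, trillion bbl
    P_over_Q = np.array([0.060, 0.051, 0.042, 0.033, 0.024])  # P/Q, per year

    slope, b = np.polyfit(Q, P_over_Q, 1)   # fit the line P/Q = b + slope*Q
    m = -slope                              # rewrite as P/Q = b - mQ, m > 0
    Q_total = b / m                         # extrapolated total recoverable oil
    Q_peak = Q_total / 2                    # production peaks at the halfway point
    print(f"Q_T = {Q_total:.2f} trillion bbl, peak at Q = {Q_peak:.2f}")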

I suppose Deffeyes might have suspected there was a mistake somewhere in the calculation from the way that QT had doubled, but he did not. See him lecture here in May 2005; he predicts war, famine, and pestilence, with no real chance of salvation. It’s a depressing conclusion, confidently presented by someone enamored of his own theories. In retrospect, I’d say he did not realize that he was over-enamored of his theory, and blind to the possibility that the P/Q vs Q line might curve upward, that is, have a positive second derivative.

Aside from his theory of peak oil, Deffeyes also had a theory of oil price, one that was not all that popular. It’s not presented in the YouTube video, nor in his popular books, but it’s one that I still find valuable, and plausibly true. Deffeyes claimed the wildly varying prices of the time were the result of an inherent queue imbalance between a varying supply and an inelastic demand. If this were the cause, we’d expect the price of oil to jump up and down the way the wait-line at a barber shop gets longer and shorter. Assume supply varies because discoveries come in random packets, while demand rises steadily, and it all makes sense. After each new discovery, price is seen to fall. It then rises slowly till the next discovery. Price is seen as a symptom of supply unpredictability rather than a useful corrective to supply needs. This view is the opposite of Adam Smith’s, but I think Deffeyes was not wrong, at least in the short term with a necessary commodity like oil.

Academics accepted the peak oil prediction, I suspect, in part because it supported a Marxian remedy. If oil was running out and the market was broken, then our only recourse was government management of energy production and use. By the late 70s, Jimmy Carter told us to turn our thermostats to 65. This went with price controls, gas rationing, a 55 mph speed limit, and a strong message of population management — birth control. We were running out of energy, we were told, because we had too many people and they (we) were using too much. America’s growth days were behind us, and only the best and the brightest could be trusted to manage our decline into the abyss. I half believed these scary predictions, in part because everyone did, and in part because they made my research at Princeton particularly important. The science fiction of the day told tales of bold energy leaders, and I was ready to step up and lead, or so I thought.

By 2009, Dr. Deffeyes was being regarded as Chicken Little as world oil production continued to expand.

I’m happy to report that none of the dire predictions of the ’70s to ’90s came to pass. Some of my colleagues became world leaders; the rest became stockbrokers with their own private planes and SUVs. As of my writing in 2018, world oil production has been rising, and even King Hubbert’s original prediction of US production has been overturned. Deffeyes’s reputation suffered for a few years, then politicians moved on to other dire dangers that require world-class management. Among the major dangers of today: school shootings, Ebola, and Al Gore’s claim that the ice caps would melt by 2014, flooding New York. Sooner or later, one of these predictions will come true, but the lesson I take is that it’s hard to predict change accurately.

Just when you thought US oil was depleted, production began rising. We now produce more than in 1970.

Much of the new oil production you’ll see on the chart above comes from tar sands, oil that Deffeyes considered unrecoverable, even while it was being recovered. We also discovered new ways to extract leftover oil, and got better at using nuclear electricity and natural gas. In the long run, I expect nuclear electricity and hydrogen will replace oil. Trees have a value, as does solar. As for nuclear fusion, it has not turned out practical. See my analysis of why.

Robert Buxbaum, March 15, 2018. Happy Ides of March, a most republican holiday.

Yogurt making for kids

Yogurt making is easy, and is a fun science project for kids and adults alike. It’s cheap, quick, easy, reasonably safe, and fairly useful. Like any real science, it requires mathematical thinking if you want to go anywhere, but unlike most science, you can get somewhere even without math, and you can eat the experiments. Yogurt making has been done for centuries, and involves nothing more than adding some yogurt culture to a glass of milk and waiting. To do this the traditional way, you wait with the glass sitting outside of any refrigeration (they didn’t have refrigeration in the olden days). After a few days, you’ll have tasty yogurt. You can get tastier yogurt if you add flavors. In one of my most successful attempts at flavoring, I added 1/2 ounce of “skinny syrup” (toffee flavor) to a glass of milk. The results were most satisfactory, IMHO.

My latest batch of home-made flavored yogurt, made in a warm spot behind this coffee urn.

Now to turn yogurt-making into a science project. We’ll begin with a hypothesis. I generally tell people not to start with a hypothesis (it biases your thinking), but here I will make an exception as I have a peculiarly non-biased hypothesis to suggest. Besides, most school kids are told they need one. My hypothesis is that there must be better ways to make yogurt and worse ways. A hypothesis should be avoided if it contains any unfounded assumptions, or if it points to a particular answer — especially an answer that no one would care about.

As with all science, you’ll want to take numerical data of cause and effect. I’d suggest that temperature data is worth taking. The yogurt-making bacteria are Lactobacillus bulgaricus and Streptococcus thermophilus, names that suggest milk and warm temperatures will be good (lact = milk in Latin, thermophilic = heat-loving). Also making things interesting is the suspicion that if you make things too warm, you’ll cook your organisms and you won’t get any yogurt. I’ve had this happen, both with over-heat and under-heat. My first attempt was to grow yogurt in the refrigerator, and I got no results. I then tried the kitchen counter and got yogurt; I then heated things a bit more by growing next to a coffee urn, and got better yogurt; yet more heat, and nothing.

For a science project, you might want to make a few batches of yogurt, at least 5, made at 2-3 different temperatures. If temperature is a cause of the yogurt coming out better or worse, you’ll need a way to measure how much “better” it is. You may choose to study taste, and that’s important, but it’s hard to quantify, so it should not be the whole experiment. I would begin by testing thickness, or the time to reach some fixed degree of thickness; I’d measure thickness by seeing if a small weight sinks. A penny is a cheap, small weight, and I know it sinks in milk, but not in yogurt. You’ll want to wash your penny first, or no one will eat the yogurt. I used hot water from the urn to clean and sterilize my pennies.

Another thing worth testing is the effect of using different milks: whole milk, 2%, 1%, or skim; goat milk; or almond milk. You can also try adding stuff, or starting with different starter cultures, or different amounts. Keep numerical records of these choices, then keep track of how they affect how long it takes for the gel to form, and how the stuff looks or tastes to you. Before you know it, you’ll have some very good product at half the price of the stuff in the store. If you really want to move forward fast, you might apply semi-random statistics to your experimental choices. Good luck.

Robert Buxbaum, March 2, 2018. My latest observation: what happens if you leave the yogurt to mold too long? It doesn’t get moldy — perhaps the lactic acid formed kills germs (?) — but the yogurt separates into curds and whey. I poured off the whey, the unappealing, bitter yellow liquid. The thick white remainder is called “Greek” yogurt. I’m not convinced this tastes better, or is healthier, BTW.

Keeping your car batteries alive.

Lithium-battery cost and performance have improved so much that no one uses Ni-Cad or metal hydride batteries any more. Lithium batteries are now the choice for tools, phones, and computers, while lead-acid batteries are used for car starting and emergency lights. I thought I’d write about the care and trade-offs of these two remaining options.

As things currently stand, you can buy a 12 V, 40 Amp-hr lead-acid car battery for about $95. This suggests a cost of about $200/kWh, or $400/kWh if you only discharge half way (good practice). This is cheaper than the per-energy cost of lithium batteries, about $500/kWh, or $1000/kWh if you only discharge half way (also good practice), but people pick lithium because (1) it’s lighter, and (2) it’s generally longer lasting. Lithium generally lasts about 2000 half-discharge cycles vs 500 for lead-acid.

On a cost-per-cycle basis, lead-acid batteries would have been replaced completely except that they are more tolerant of cold and heat, and they easily output the 400-800 Amps needed to start a car. Lithium batteries have problems at these currents, especially when it’s hot or cold. Lithium batteries deteriorate fast in the heat (over 40°C, 105°F), and you cannot charge a lithium car battery at more than 3-4 Amps at temperatures below about 0°C (32°F). At higher currents, a coat of lithium metal forms on the anode. This lithium can react with water: 2Li + H₂O –> Li₂O + H₂, or it can form dendrites that puncture the cell separators, leading to fire and explosion. If you charge a lead-acid battery too fast, some hydrogen can form, but that’s much less of a problem. If you are worried about hydrogen, we sell hydrogen getters and catalysts that remove it. Here’s a description of the mechanisms.

The best thing you can do to keep a lead-acid battery alive is to keep it near-fully charged. This can be done by taking long drives, by idling the car (warming it up), or by use of an external trickle charger. I recommend a trickle charger in the winter because it’s non-polluting. A lead-acid battery that’s kept at near full charge will give you enough charge for 3000 to 5000 starts. If you let the battery completely discharge, you get only 50 or so deep cycles, or 1000 starts. But beware: full discharge can creep up on you. A new car battery will hold 40 Amp-hours of charge, or 72,000 Amp-seconds if you only half discharge. Starting the car takes about 5 seconds at 600 Amps, using 3000 Amp-s, or about 4% of the battery’s usable juice. The battery will recharge as you drive, but not that fast. You’ll have to drive for at least 500 seconds (8 minutes) to recharge the energy used in starting. But in the winter it is common that your drive will be shorter, and that a lot of your alternator power will be sent to the defrosters, lights, and seat heaters. As a result, your lead-acid battery will not fully charge, even on a 10 minute drive. With every week of short trips, the battery will drain a little, and sooner or later, you’ll find your battery is dead. Beware, and recharge, ideally before 50% discharge.
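
Here’s that bookkeeping as a Python sketch; the 6 Amp net charging current is my assumption for a short winter drive, chosen to roughly match the 500-second figure above:

    # Starting-drain arithmetic from the post (40 Ah battery, 600 A starts).
    capacity_As = 40 * 3600        # 40 Ah = 144,000 A.s
    usable_As = capacity_As / 2    # half-discharge limit = 72,000 A.s
    start_As = 600 * 5             # 600 A for 5 s = 3,000 A.s per start
    net_charge_A = 6               # assumed net alternator current, A
    print(f"One start uses {100 * start_As / usable_As:.0f}% of usable charge")
    print(f"Recharge time at {net_charge_A} A: {start_As / net_charge_A:.0f} s")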

A little chemistry will help explain why full discharging is bad for battery life (for a different version, see Wikipedia). For the first half-discharge of a lead-acid battery, the reaction is:

Pb + 2PbO₂ + 2H₂SO₄ –> PbSO₄ + Pb₂O₂SO₄ + 2H₂O.

This reaction involves 2 electrons and has a -∆G° of more than 394 kJ, suggesting a reversible voltage of more than 2.04 V per cell, with voltage decreasing as H₂SO₄ is used up. Any discharge forms PbSO₄ on the negative plate (the lead anode) and converts the lead oxide on the positive plate (the PbO₂ cathode) to Pb₂O₂SO₄. Discharging to more than 50% involves this reaction, converting the Pb₂O₂SO₄ on the cathode to PbSO₄:

Pb + Pb₂O₂SO₄ + 2H₂SO₄ –> 2PbSO₄ + 2H₂O.

This also involves two electrons, but -∆G° < 394 kJ, and the voltage is less than 2.04 V. Not only is the voltage less, the maximum current is less too. As it happens, Pb₂O₂SO₄ is amorphous, adherent, and conductive, while PbSO₄ is crystalline, not that adherent, and not so conductive. Operating at more than 50% discharge results in less voltage, increased internal resistance, decreased H₂SO₄ concentration, and lead sulfate flaking off the electrode. Even letting a battery sit at low voltage contributes to PbSO₄ flaking off. If the weather is cold enough, the low-concentration H₂SO₄ freezes and the battery case cracks. My advice: get out your battery charger and top up your battery. Don’t worry about overcharging; your battery charger will sense when the charge is complete. A lead-acid battery operated at near full charge, between 67 and 100%, will provide 1500 cycles, about as many as lithium.
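
As a check on that 2.04 V figure, here is the standard conversion from free energy to cell voltage, E = -∆G°/nF, as a few lines of Python:

    # Reversible cell voltage from free energy, E = -dG/(n*F).
    F = 96485.0        # Faraday constant, C/mol
    n = 2              # electrons transferred in each reaction above
    dG = -394_000.0    # J/mol, from the first half-discharge reaction
    E = -dG / (n * F)
    print(f"E = {E:.2f} V per cell")   # about 2.04 V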

Trickle charging my wife’s car: good for battery life. At 6 Amps, expect a full charge to take 6 hours or more. You might want to recharge the battery in your emergency lights too.

Lithium batteries are the choice for tools and electric vehicles, but the chemistry is different. For longest life, lithium batteries should not be charged fully. If you charge them fully, they deteriorate and self-discharge, especially when warm (100°F, 40°C). If you operate at 20°C between 75% and 25% charge, a lithium-ion battery will last 2000 cycles; at 100% to 0%, expect only 200 cycles or so.

Tesla cars use lithium batteries of a special type, lithium-cobalt. Such batteries have been known to explode, but Tesla adds sophisticated electronics and cooling systems to prevent this. The Chevy Volt and Bolt use lithium batteries too, but ones that are less energy-dense. In either case, assuming $1000/kWh and a 2000-cycle life, the battery cost of an EV is about 50¢ per kWh-cycle. Add to this the cost of electricity, 15¢/kWh including the over-potential needed to charge, and I find a total cost of operation of 65¢/kWh. EVs get about 3 miles per kWh, suggesting an energy cost of about 22¢/mile. By comparison, for a 23 mpg car that uses gasoline at $2.80/gal, the energy cost is 12¢/mile, about half that of the EV. For now, I stick to gasoline for normal driving, and for long trips, suggest buses, trains, and flying.
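
Here’s the comparison as a minimal Python sketch; all the prices and efficiencies are the assumptions above, not market data:

    # EV vs gasoline cost per mile, using the post's assumed numbers.
    battery_per_kWh_cycle = 1000.0 / 2000   # $1000/kWh over 2000 cycles = $0.50
    electricity = 0.15                      # $/kWh, including charging losses
    ev_per_mile = (battery_per_kWh_cycle + electricity) / 3.0   # 3 mi/kWh
    gas_per_mile = 2.80 / 23.0              # $2.80/gal at 23 mpg
    print(f"EV: {100 * ev_per_mile:.0f} cents/mile, "
          f"gasoline: {100 * gas_per_mile:.0f} cents/mile")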

Robert Buxbaum, January 4, 2018.

Why is it hot at the equator, cold at the poles?

Here’s a somewhat mathematical look at why it is hotter at the equator than at the poles. This is high school or basic college level science, using trigonometry (pre-calculus), a slight step beyond the basic statement that the sun hits down more directly at the equator than at the poles. That’s the kid’s explanation, but we can understand better if we add a little math.

Solar radiation hits Detroit, or any other non-equator point, at an angle. As a result, less radiation power hits per square meter of land.

Let’s use the diagram at right and trigonometry to compare the amount of sun-energy that falls on a square meter of land at the equator (0° latitude) and in a city at 42.5° N latitude (Detroit, Boston, and Rome are at this latitude). In each case, let’s consider high noon on March 21 or September 20. These are the two equinox days, the only days each year when day and night are of equal length, and the only times when the angle of the sun is easy to calculate: at noon, it deviates from the vertical by exactly the latitude.

More specifically, the equator is at zero latitude, so on the equator at high noon on the equinox, the sun will shine from directly overhead, 0° from the vertical. Since the sun’s power in space is 1050 W/m², every square meter of equator can expect to receive 1050 W of sun-energy, less the amount reflected off clouds and dust, or scattered off air molecules (air scattering is what makes the sky blue). Further north, Detroit, Boston, and Rome sit at 42.5° latitude. At noon on March 21, the sun will strike the earth there at 42.5° from the vertical, as shown in the lower figure above. From trigonometry, you can see that each square meter of these cities will receive cos 42.5° as much power as a square meter at the equator, except for any difference in clouds, dust, etc. Without clouds etc., that would be 1050 cos 42.5° = 774 W. Less sun power hits per square meter because each square meter is tilted. Earlier and later in the day, each spot will get less sunlight than at noon, but the proportion is the same, at least on one of the equinox days.

To calculate the likely temperature in Detroit, Boston, or Rome, I will use a simple energy balance. Ignoring heat storage in the earth for now, we will say that the heat in equals the heat out. We will also ignore heat transfer by winds and rain, and approximate by saying that the heat out leaves by black-body radiation alone, radiating into the extreme cold of space. This is not a very bad approximation, since black-body radiation is the main heat-removal mechanism in most situations where large distances are involved. I’ve discussed black-body radiation previously; the amount of energy radiated is proportional to luminosity, and to T⁴, where T is the temperature as measured on an absolute temperature scale, Kelvin or Rankine. Based on this, and assuming that the luminosity of the earth is the same in Detroit as at the equator,

T Detroit / T equator = √√(cos 42.5°) = 0.927

I’ll now calculate the actual temperatures. For American convenience, I’ll calculate in the Rankine temperature scale, the absolute Fahrenheit scale. In this scale, 100°F = 560°R, 0°F = 460°R, and the temperature of space is 0°R, as a good approximation. If the average temperature of the equator is 100°F = 38°C = 560°R, we calculate that the average temperature of Detroit, Boston, or Rome will be about 0.927 x 560 = 519°R = 59°F (15°C). This is not a bad prediction, given the assumptions. We can expect the temperature to be somewhat lower at night, as there is no light, but it will not fall to zero, as there is retained heat from the day. The same reason, retained heat, explains why it will be warmer in these cities on September 20 than on March 21.
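
Here’s the whole estimate as a small Python function. The 560°R equatorial temperature is the assumption above, and the 65° line anticipates the polar calculation further below:

    import math

    # Black-body estimate: T scales as the fourth root of noon insolation,
    # relative to an assumed 560 R (100 F) equator.
    def equinox_temp_F(sun_angle_deg, T_equator_R=560.0):
        ratio = math.cos(math.radians(sun_angle_deg)) ** 0.25
        return ratio * T_equator_R - 460.0    # Rankine to Fahrenheit

    print(f"Detroit (42.5 deg): {equinox_temp_F(42.5):.0f} F")     # about 59 F
    print(f"Pole estimate (65 deg): {equinox_temp_F(65):.0f} F")   # about -8 F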

In the summer, these cities will be warmer because they are in the northern hemisphere, and the north pole is tilted 23°. At the height of summer (June 21) at high noon, the sun will shine on Detroit at an angle of 42.5 – 23° = 19.5° from the vertical. The difference in angle is why these cities are warmer on that day than on March 21. The equator will be cooler on that day (June 21) than on March 21 since the sun’s rays will strike the equator at 23° from the vertical on that day. These  temperature differences are behind the formation of tornadoes and hurricanes, with a tornado season in the US centering on May to July.

When looking at the poles, we find a curious problem in guessing what the average temperature will be. At noon on the equinox, the sun comes in horizontally, that is, at 90° from the vertical. We thus expect there is no warming power at all that day, and none for the six months of winter either. At first glance, you’d think the temperature at the poles would be zero, at least for six months of the year. It isn’t zero because there is retained heat from the summer, but it still makes for a more difficult calculation.

To figure an average temperature of the poles, let’s remember that during the 6-month summer the sun shines 24 hours per day, and that the angle of the sun gets as high as 23° above the horizon, or 67° from the vertical. Let’s assume that the retained heat from the summer is what keeps the temperature from falling too low in the winter, and calculate the temperature from an average summer sun angle.

Let’s assume the sun comes in at an equivalent of 25° above the horizon — 65° from the vertical — during the 6-month “day” of the polar summer. I don’t look at the equinox here, but at the solar day, and note that the heating angle stays roughly fixed through each 24-hour day during the summer; it does not decrease in the morning or as the afternoon wears on. Based on this angle, we expect that

T Pole / T equator = √√(cos 65°) = 0.806

T Pole = 0.806 x 560°R = 452°R = -8°F (-22°C).

This, as it happens, is 4° colder than the average temperature at the north pole, -4°F, but not bad, given the assumptions. Maybe winds and water currents account for the difference. Of course, there is a large temperature swing at the pole between the fall equinox and the spring equinox, but that’s to be expected. That average, -4°F, is about the temperature at night in Detroit in the winter.

One last thing, one that might be unexpected: the temperature at the south pole is lower than at the north pole, on average -44°F. The main reason for this is that the snow at the south pole is quite deep — more than 1 1/2 miles deep, with some rock underneath. As I showed elsewhere, we expect temperatures to be lower at high altitude. Data collected from cores through the 1 1/2 mile deep snow suggest (to me) chaotic temperature change, with long ice ages and brief (6000-year) periods of warmth. The ice ages seem far worse than global warming.

Dr. Robert Buxbaum, December 30, 2017

Penicillin, cheese allergy, and stomach cancer

The penicillin molecule is a product of the penicillin mold.

Many people believe they are allergic to penicillin — it’s the most common perceived drug allergy — but several studies have shown that most folks who think they are allergic are not. Perhaps they once were, but when people who thought they were allergic were tested, virtually none showed an allergic reaction. In a test of 146 presumably allergic patients at McMaster University, only two had their penicillin allergy confirmed; 98.6% of the patients tested negative. A similar study at the Mayo Clinic tested 384 pre-surgical patients with a history of penicillin allergy; 94% tested negative, and were given clearance to receive penicillin antibiotics before, during, and after surgery. Read a summary here.

Orange showing three different strains of the penicillin mold; some of these are toxic.

This is very good news. Penicillin is a low-cost, low side-effect antibiotic, effective against many diseases including salmonella, botulism, gonorrhea, and scarlet fever. The penicillin molecule is a common product of nature, produced by a variety of molds, e.g. on the orange at right, and used in cheese making, below. It is thus something most people have been exposed to, whether they realize it or not.

Penicillin allergy is still a deadly danger for the few who really are allergic, and it’s worthwhile to find out if that means you. The good news: that penicillin is found in common cheeses suggests, to me, a simple test for penicillin allergy. Anyone who suspects penicillin allergy and does not have a general dairy allergy can try eating brie, blue, camembert, or Stilton: any of the cheeses made with the penicillin mold. If you don’t break out in a rash or suffer stomach cramps, you’re very likely not allergic to penicillin.

There is some difference between cheeses. Some, like brie and camembert, have a white fuzzy mold coat; this is Penicillium camemberti. It exudes penicillin — not enough to cure gonorrhea, but enough to give taste, avoid spoilage, and test for allergy. Danish blue and Roquefort, shown below, have a different look and more flavor. They’re made with the blue-green Penicillium roqueforti. Along with penicillin, this mold produces a small amount of a neurotoxin, roquefortine C. It’s not enough to harm most people, but it could cause some who are not allergic to penicillin to be allergic to blue cheese. Don’t eat a moldy orange, by the way; some forms of the mold produce a lot of neurotoxin.

For people who are not allergic, a thought I had is that one could perhaps treat heartburn or ulcers with cheese; perhaps even cancer. H. pylori, the bacterium associated with heartburn, is effectively treated by amoxicillin, a penicillin variant. If a penicillin variant kills the bacteria, it seems plausible that penicillin cheese might too. Then too, amoxicillin is found to reduce the risk of gastric cancer. If so, penicillin or penicillin cheese might prove to be a cancer protective. To my knowledge, this has never been studied, but it seems worth considering. The other, standard treatment for heartburn, pantoprazole/Protonix, is known to cause osteoporosis and to increase the risk of cancer.

The blue in blue cheese is a culture of Penicillium roqueforti. Most people are not allergic to it.

Penicillin was discovered by Alexander Fleming, who noticed that a single spore of the mold killed the bacteria near it on a Petri dish. He tried to produce significant quantities of the drug from the mold with limited success, but was able to halt disease in patients, and was able to interest others who had more skill in large-scale fungus growing. Kids looking for a good science fair project might consider penicillin growing, penicillin allergy, treatment of stomach ailments using cheese, or anything else related to the drug. Three Swedish journals declared that penicillin was the most important discovery of the last 1000 years. It would be cool if the dilute form, the one available in your supermarket, could be shown to treat heartburn and/or cancer. Another drug you could study is lysozyme, a chemical found in tears, in saliva, and in human milk, but not in cow milk. Alexander Fleming found that tears killed bacteria, as did penicillin. Lysozyme, the active ingredient of tears, is currently used to treat animals, but not humans.

Robert Buxbaum, November 9, 2017. Since starting work on this essay I’ve been eating blue cheese. It tastes good and seems to cure heartburn. As a personal note: my first science fair project (4th grade) involved growing molds on moistened bread. For an incubator, I used the underside of our home radiator. The location kept my mom from finding the experiment and throwing it out.

Magnetic separation of air

As some of you will know, oxygen is paramagnetic: attracted slightly by a magnet. Oxygen’s paramagnetism is due to the two unpaired electrons in every O₂ molecule. Oxygen has a triple-bond structure, as discussed here (much of the chemistry you were taught is wrong). Virtually every other common gas is diamagnetic, repelled by a magnet. These include nitrogen, water, CO₂, and argon — all diamagnetic. As a result, you can do a reasonable job of extracting oxygen from air by the use of a magnet. This is awfully cool, and could make for a good science fair project, if anyone is of a mind.

But first some math, or physics, if you like. To a good approximation, the magnetization of a material is M = CH/T, where M is magnetization, H is magnetic field strength, C is the Curie constant for the material, and T is absolute temperature.

Ignoring, for now, the difference between entropy and internal energy, and thinking only in terms of the work derived by lowering a magnet towards a volume of gas, we can say that the work extracted, and thus the decrease in energy of the magnetic gas, is ∫H dM = MH/2. At constant temperature and pressure, we can say ∆G = -CH²/2T.

With a neodymium magnet, you should be able to get a field of about 40,000 amperes per meter (0.05 Tesla). At 20°C, the per-mole magnetic susceptibility of oxygen is 1.34×10⁻⁶. This suggests that the Curie constant is 1.34×10⁻⁶ x 293 = 3.93×10⁻⁴. At 20°C, this energy difference is 1072 J/mol = RT ln ß, where ß is the concentration ratio between the O₂ content of the magnetized and un-magnetized gas.

From the above, we find that, at room temperature (293 K), ß = 1.6, and thus that the maximum oxygen concentration you’re likely to get is about 1.6 x 21% = 33%. It’s slightly more than this due to nitrogen’s diamagnetism, but that effect is too small to matter. What does matter is that 33% O₂ is a good amount for a variety of medical uses.
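
Here’s the estimate as a Python sketch, using the post’s values for C and H (the units follow the post’s usage):

    import math

    # Magnetic enrichment estimate: dG = C*H^2/(2T), beta = exp(dG/RT).
    C = 3.93e-4        # Curie constant, in the post's units
    H = 40_000.0       # field strength, A/m (assumed above)
    T, R = 293.0, 8.314
    dG = C * H**2 / (2 * T)          # about 1070 J/mol
    beta = math.exp(dG / (R * T))    # about 1.6
    print(f"dG = {dG:.0f} J/mol, beta = {beta:.2f}, "
          f"max O2 = {beta * 21:.0f}%")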

I show below my simple design for a magnetic O₂ concentrator. The dotted line is a permeable membrane of no selectivity — with a little O₂ selectivity, the design will work better. All you need is a blower or pump. A coffee filter could serve as the membrane.

This design is as simple as the standard O₂ concentrators based on semi-permeable membranes, but it should require less pressure differential — just enough to overcome the magnet. Less pressure means the blower can be smaller, and less noisy, with less energy use. I figure this could be really convenient for people who need portable oxygen. With several stages and low-temperature operation, this design could have commercial use.

On the theoretical end, an interesting thing I find concerns the effect on the entropy of the magnetic oxygen. (Please ignore this paragraph if you have not learned statistical thermodynamics.) While you might imagine that magnetization decreases entropy, other things being equal, because the molecules are somewhat aligned with the field, I’ve come to realize that, temperature and pressure being fixed, the entropy is likely higher. A sea of semi-aligned molecules will have a slightly higher heat capacity than non-aligned molecules because the vibrational Cp is higher, other things being equal. Thus, unless I’m wrong, the temperature of the gas will be slightly lower in the magnetic area than outside the field. Temperature and pressure are not the same within the separator as outside it, by the way; the blower is something of a compressor, though a much less energy-intense one than used for most air separators. Because of the blower, both the magnetic and the non-magnetic air will be slightly warmer than the surroundings (∆T = blower work/Cp). This heat will be mostly lost when the gas leaves the system; that is, when it flows to lower pressure, both gas streams will be essentially at room temperature. Again, this is not the case with the classic membrane-based oxygen concentrators — there, the nitrogen-rich stream is notably warm.

Robert E. Buxbaum, October 11, 2017. I find thermodynamics wonderful, both as science and as an analog for society.

How Tesla invented, I think, Tesla coils and wireless chargers.

I think I know how Tesla invented his high-frequency devices, and thought I’d show you, while also explaining the operation of some devices that developed from them. Even if I’m wrong in historical terms, at least you should come to understand some of his devices, and something of the invention process. Either can be the start of a great science fair project.

Physics drawing of a mass on a spring, left, and of a grounded capacitor and induction coil, right.

The start of Tesla’s invention process, I think, was a visual similarity — I’m guessing he noticed that the physics symbol for a spring is the same as for an electrical induction coil, as shown at left. A normal person would notice the similarity, perhaps think about it for a few seconds, get nowhere, and think of something else. If he or she had a math background — necessary to do most any science — they might look at the relevant equations and notice that they’re different. The equation describing the force of a spring is F = -kx (I’ll define these letters in the bottom paragraph). The equation describing the voltage in an induction coil is not very similar-looking at first glance, V = L di/dt. But there is a key similarity that could appeal to some math aficionados: both equations are linear. A linear equation is one where, if you double one side, you double the other. Thus, if you double F, you double x, and if you double V, you double di/dt, and that’s a significant behavior; the equation z = at² is not linear — see the difference?

Another linear equation is the key equation of motion for a mass, Newton’s second law, F = ma = m d²x/dt². This equation is quite complicated looking, since the latter term is a second derivative, but it is linear, and a mass is the likely thing for a spring to act upon. Yet another linear equation relates current to the voltage across a capacitor: V = -1/C ∫i dt. At first glance, this equation looks quite different from the others since it involves an integral. But Nikola Tesla did more than a first glance. Perhaps he knew that linear systems tend to show resonance — vibrations at a fixed frequency. Or perhaps that insight came later.

And Tesla saw something else, I imagine, something even less obvious, except in hindsight. If you take the derivative of the two electrical equations, you get dV/dt = L d²i/dt², and dV/dt = -i/C. These equations are the same as for the spring and mass; just replace F and x by dV/dt and i. That the derivative of the integral is the thing itself is something I demonstrate here. At this point it becomes clear that a capacitor-coil system will show the same sort of natural resonance effects as a spring and mass system, or a child’s swing, or a bouncy bridge. Tesla would have known, like anyone who’s taken college-level physics, that a small input at the right, resonant frequency will excite such systems to great swings. For a mass and spring,

Basic Tesla coil. A switch set off by magnetization of the iron core insures resonant-frequency operation.

resonant frequency = (1/2π) √(k/m),

Children can make a swing go quite high just by pumping at the right frequency. Similarly, it should be possible to excite a coil-capacitor system to higher and higher voltages if you can find a way to excite it long enough at the right frequency. Tesla would have looked for a way to do this with a coil-capacitor system, and after a while of trying and thinking, he seems to have found the circuit shown at right, with a spark gap to impress visitors and keep the voltages from getting too far out of hand. The resonant frequency for this system is 1/(2π√LC), an equation form similar to the above. The voltage swings should grow until limited by resistance in the wires, or by the radiation of power into space. The fact that significant power is radiated into space will be used as the basis for wireless phone chargers, but more on that later. For now, you might wish to note that the power radiated is proportional to dV/dt.

A more modern version of the above, excited by AC current. In this version, you achieve resonance by adjusting the coil, capacitor, and resistance to match the forcing frequency.

The device above provides an early, simple way to excite a coil-capacitor system. It’s designed for use with a battery or other DC power source. There’s an electromagnetic switch to provide resonance with any capacitor-and-coil pair. An alternative, more modern device is shown at left. It achieves resonance too, without the switch, through the use of AC input power, but you have to match the AC frequency to the resonant frequency of the coil and capacitor. If wall current is used, 60 cps, the coil and capacitor must be chosen so that 1/(2π√LC) = 60 cps. Both versions are called Tesla coils, and either can be set up to produce very large sparks (sparks make for a great science fair project — you need to put a spark gap across the capacitor, or better yet, use the coil as the low-voltage part of a transformer).
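
Here’s that matching calculation as a Python sketch; the 10 mH coil is an arbitrary, assumed starting point, not a recommended part:

    import math

    # Pick a capacitor so that 1/(2*pi*sqrt(L*C)) = 60 cps.
    f = 60.0      # wall-current frequency, Hz
    L = 0.010     # coil inductance, henries (assumed starting value)
    C = 1.0 / ((2 * math.pi * f)**2 * L)
    print(f"With L = {L * 1000:.0f} mH, need C = {C * 1e6:.0f} microfarads")
    f_check = 1.0 / (2 * math.pi * math.sqrt(L * C))
    print(f"Check: resonant frequency = {f_check:.1f} Hz")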

Another use of this circuit is as a transmitter of power into space. The coil becomes the transmission antenna, and you have to set up a similar device as a receiver; see the picture at right. The black thing at left of the picture is the capacitor. One has to make sure that the coil-capacitor pair is tuned to the same frequency as the transmitter. One also needs to add a rectifier, the rectifier chosen here being a 1N4007. This fairly standard-size rectifier allows you to feed DC power to the battery without fear that the battery will discharge on every cycle. That’s all the science you need to charge an iPhone without having to plug it in. Designing one of these is a good science fair project, especially if you can improve on the charging distance. Why should you have to put your iPhone right on top of the transmitter? Why not allow continuous charging anywhere in your home? Tesla was working on long-distance power transmission till the end of his life. What modifications would that require?

Symbols used above: a = acceleration = d²x/dt², C = capacitance of the capacitor, dV/dt = the rate of change of voltage with time, F = force, i = current, k = stiffness of the spring, L = inductance of the coil, m = mass of the weight, t = time, V = voltage, x = distance of the mass from its rest point.

Robert Buxbaum, October 2, 2017.

Heraclitus and Parmenides time joke

From Existential Comics; Parmenides believed that nothing changed, nor could it.

For those who don’t remember, Heraclitus believed that change was the essence of life, while  Parmenides believed that nothing ever changes. It’s a debate that exists to this day in physics, and also in religion (there is nothing new under the sun, etc.). In science, the view that no real change is possible is founded in Schrödinger’s wave view of quantum mechanics.

Schrödinger’s wave equation, time dependent.

In Schrödinger’s wave description of reality, every object or particle is considered a wave of probability. What appears to us as motion is nothing more than the wave oscillating back and forth in its potential field. Nothing quite has a position or velocity; there are only random interactions with other waves, and all of these are reversible. Because of the time-reversibility of the equation, the system is conservative in the long term. The wave returns to where it was, and no entropy is created. Anything that happens will happen again, in reverse. See here for more on Schrödinger waves.

Thermodynamics is in stark contradiction to this quantum view. To thermodynamics, and to common observation, entropy goes ever upward, and nothing is reversible without outside intervention. Things break but don’t fix themselves. It’s this entropy increase that tells you that you are going forward in time. You know that time is going forward if you can, at will, drop an ice-cube into hot tea to produce lukewarm, diluted tea. If you can do the reverse, time is going backward. It’s a problem that besets Dr. Who, but few others.

One way that I’ve seen to get out of the general problem of quantum time is to assume the observed universe is a black hole or some other closed system, and take it as an issue of reference frame. As seen from the outside of a black hole (or a closed system without observation) time stops and nothing changes. Within a black hole or closed system, there is constant observation, and there is time and change. It’s not a great way out of the contradiction, but it’s the best I know of.

Predestination makes a certain physics and religious sense; it just doesn’t match personal experience very well.

The religion version of this problem is as follows: God, in most religions, has foreknowledge. That is, He knows what will happen, and that presumes we have no free will. The problem with that is, without free will, there can be no fair judgment, no right or wrong. There are a few ways out of this, and these lie behind many of the religious splits of the 1700s. A lot of the humor of Calvin and Hobbes comics comes because Calvin is a Calvinist, convinced of fatalistic predestination, while Hobbes believes in free will. Most religions take a position somewhere in-between, but all have their problems.

Applying the black-hole model to God gives the following alternative answer, one that isn’t very satisfying IMHO, but at least it matches physics. One might assume predestination for a God that is outside the universe — He sees only an unchanging system, while we, inside, see time and change and free will. One of the problems with this is that it posits a distant creator who cares little for us and sees none of the details. A more positive view of time appears in Dr. Who. For Dr. Who, time is fluid, with some fixed points. Here’s my view of Dr. Who’s physics. Unfortunately, Dr. Who is fiction: attractive, but without basis. Time, as it were, is an issue for the ages.

Robert Buxbaum, Philosophical musings, Friday afternoon, June 30, 2017.