Tag Archives: temperature

What drives the Gulf Stream?

I’m not much of a fan of today’s kids’ science books because, IMHO, they don’t teach science. They have nice pictures and a few numbers, but almost no equations, and lots of words. You can’t do science that way. On the odd occasion that they give the right answer to some problem, the lack of math means the kid has no way of understanding the reasoning, and no reason to believe the answer. Professional science articles on the web are bad in the opposite direction: too many numbers, and for math they rely on supercomputers. No human can understand the outcome. I like to use my blog to offer science with insight, the type you’d get in an old “everyman science” book.

In previous posts, I gave answers to why the sky is blue, why it’s cold at the poles, why it’s cold on mountains, how tornadoes pick stuff up, and why hurricanes blow the way they do. In this post, we’ll try to figure out what drives the Gulf Stream. The main argument will be deduction — disproving things that are not driving the Gulf Stream, to leave us with one or two that could. Deduction is a classic method of science, well presented by Sherlock Holmes.


The Gulf Stream. The speed in the white area is ≥ 0.5 m/s (1.1 mph).

For those who don’t know, the Gulf Stream is a massive river of water that runs within the Atlantic Ocean. As shown at right, it starts roughly at the tip of Florida, runs north to the Carolinas, and then turns dramatically east towards Spain. Flowing east, it’s about 150 miles wide, but only about 62 miles (100 km) wide when flowing along the US coast. According to some of the science books of my youth, this massive flow was driven by temperature; according to others, by salinity (whatever that means); and according to yet others, by wind. My conclusion: they had no clue.

As a start to doing the science here, it’s important to fill in the numerical information that the science books left out. The Gulf Stream is roughly 1000 meters deep, with a typical speed of 1 m/s (2.2 mph). The maximum speed is in the surface water as the stream flows along the US coast, about 2.5 meters per second (5.6 mph); see the map above.

From the size and speed of the Gulf Stream, we conclude that land rivers are not driving the flow. The Mississippi is a big river with an outflow near the headwaters of the Gulf Stream, but its volume of flow is vastly too small. The volume flow of the Gulf Stream is roughly

Q = w·d·v = 100,000 m × 1,000 m × 0.5 m/s = 50 million m³/s ≈ 1.8 billion cubic feet/s.

This is nearly 3000 times the volume flow of the Mississippi, 18,000 m³/s. The great difference in flow suggests the Mississippi could not be the driving force. The map of flow speeds (above) also suggests that rivers do not drive the flow: the Gulf Stream does not flow at its maximum speed near the mouth of any river. We now look for another driver.
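
To make the comparison concrete, here is the arithmetic as a small Python sketch; the width, depth, and speed are the estimates quoted above, and the Mississippi figure is its typical outflow:

```python
# Rough check: Gulf Stream volume flow vs the Mississippi
width = 100_000      # m, ~62 miles, along the US coast
depth = 1_000        # m
speed = 0.5          # m/s, conservative average

q_gulf = width * depth * speed     # m^3/s
q_mississippi = 18_000             # m^3/s, typical outflow

print(f"Gulf Stream: {q_gulf:.1e} m^3/s")        # 5.0e7 m^3/s
print(f"Ratio: {q_gulf / q_mississippi:.0f}x")   # ~2800x the Mississippi
```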

Moving on to temperature. Temperature drives the whirl of hurricanes, and the logic for temperature driving the Gulf Stream is as follows: it’s warm by the equator and cold at the poles; warm things expand, and since water flows downhill, the poles will always be downhill from the equator. Let’s put some math in here, or my explanation will be lacking. First, let’s consider how much height difference we might expect to see. The thermal expansivity of water is about 2×10⁻⁴/°C (0.0002/°C) in the relevant temperature range. To calculate the amount of expansion, we multiply this by the depth of the stream, 1000 m, and by the temperature difference between two points, e.g. the tip of Florida and the Carolina coast. This is 5°C (9°F), I estimate. I calculate the temperature-induced seawater height as:

∆h (thermal) ≈ 5°C × 0.0002/°C × 1000 m = 1 m (3.3 feet).

This is a fair amount of height. It’s only about 1/100 the height driving the Mississippi River, but it’s something. To see if 1 m is enough to drive the Gulf flow, I’ll compare it to the velocity head. Velocity head is a concept that’s useful in plumbing (I ran for water commissioner). It’s the potential-energy height equivalent of any kinetic energy — typically of a fluid flow. The kinetic energy for any velocity v and mass of water m is ½mv². The potential energy equivalent is mgh. Combine the above, remove the mass terms, and we have:

∆h (velocity) = v²/2g,

where g is the acceleration of gravity. For v = 1 m/s and g = 9.8 m/s², ∆h (velocity) = 1/(2 × 9.8) ≈ 0.05 m ≈ 2 inches. This is far less than the driving height calculated above. We have some 20 times more driving head than we need, but there is a problem: why isn’t the flow faster? And why does the Mississippi move so slowly when it has 100 times more head?

To answer the above questions, and to check whether heat could really drive the Gulf Stream, we’ll check if the flow is turbulent — it is. The measure of turbulence is the Reynolds number, Re, the ratio of kinetic energy to viscous loss in a fluid flow. Flows are turbulent if this ratio is more than 3000 or so:

Re = vdρ/µ.

In the above, v is velocity, say 1 m/s; d is depth, 1000 m; ρ is density, 1000 kg/m³ for water; and µ = 0.00133 Pa·s is the viscosity of water. Plug in these numbers, and we find Re = 750 million: this flow will be highly turbulent. Assuming a friction factor of 1/20 (0.05), we’d expect complete mixing every 20 depths, or 20 km. That is, we need the above 0.05 m of velocity head to drive every 20 km of flow up the US coast. If the distance to the Carolina coast is 1000 km, we need 1000/20 × 0.05 m = 2.5 m at 1 m/s, somewhat more than the 1 m of head the temperature difference provides. But since velocity head scales as v², 1 m of thermal head is enough to drive about 0.6 m/s over that distance. Temperature is thus a plausible driving force for the 0.5 m/s flow, though not for the faster 2.5 m/s flow seen in the center of the stream. Turbulent flow is a big part of figuring the mpg of an automobile, too; it becomes rapidly more important at high speeds.
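
Here are all of these estimates in one place, as a minimal Python sketch; the friction factor and the mixing-every-20-depths rule are the assumptions stated above:

```python
import math

g   = 9.8       # m/s^2
v   = 1.0       # m/s, typical stream speed
d   = 1000.0    # m, stream depth
rho = 1000.0    # kg/m^3, density of water
mu  = 0.00133   # Pa*s, viscosity of water

# Thermal head: expansivity x temperature difference x depth (~1 m)
dh_thermal = 2e-4 * 5 * d

# Velocity head: height equivalent of the kinetic energy (~0.05 m)
dh_velocity = v**2 / (2 * g)

# Reynolds number: far above ~3000, so the flow is turbulent
re = v * d * rho / mu                       # ~7.5e8

# Assume complete mixing every ~20 depths (20 km): each 20 km of
# flow costs one velocity head. Over 1000 km to the Carolinas:
dh_needed = (1000 / 20) * dh_velocity       # ~2.5 m at 1 m/s

# Since velocity head scales as v^2, 1 m of thermal head supports:
v_max = math.sqrt(2 * g * dh_thermal / 50)  # ~0.6 m/s

print(dh_thermal, dh_velocity, f"{re:.1e}", dh_needed, round(v_max, 2))
```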


World sea salinity. The maximum and minimum are in the wrong places.

What about salinity? For salinity to work, the salinity would have to be higher at the end of the flow. As a model of the flow, we might imagine that arctic seawater freezes, concentrating salt in the seawater just below the ice. The heavy, saline water would flow down to the bottom of the sea, and then flow south to an area of low salinity and low pressure. Somewhere in the south, the salinity would be reduced by rains. If evaporation were to exceed the rains, the flow would go in the other direction. Sorry to say, I see no evidence of any of this. For one, the end of the Gulf Stream is not that far north; there is no freezing there. For two more problems: there are major rains in the Caribbean, and rains too in the North Atlantic. Finally, the salinity head is too small. Each ppm of salinity adds about 0.0001 g/cc to the density, and the salinity difference in this case is less than 1 ppm; let’s say 0.5 ppm:

∆h (salinity) = 0.0001 × 0.5 × 1000 m = 0.05 m.

I don’t see a case for northern-driven Gulf-stream flow caused by salinity.


Surface level winds in the Atlantic. Trade winds in purple, 15-20 mph.

Now consider winds. The wind velocities are certainly enough to produce 5+ mph flows, and the path of the flows is appropriate. Consider, for example, the trade winds. In the southern Caribbean, they blow steadily from east to west, slightly above the equator, at 15-20 mph. This could certainly drive a circulation flow of 4.5 mph north. Out of the Caribbean basin and along the eastern US coast, the prevailing winds blow at 15-50 mph north and east. This too would easily drive a 4.5 mph flow. I conclude that a combination of winds and temperature are the most likely drivers of the Gulf Stream flow. To quote Holmes: once you’ve eliminated the impossible, whatever remains, however improbable, must be the truth.

Robert E. Buxbaum, March 25, 2018. I used the thermal argument above to figure out how cold it had to be to freeze the balls off of a brass monkey.

Why is it hot at the equator, cold at the poles?

Here’s a somewhat mathematical look at why it is hotter at the equator than at the poles. This is high-school or basic college-level science, using trigonometry (pre-calculus), a slight step beyond the basic statement that the sun hits down more directly at the equator than at the poles. That’s the kid’s explanation, but we can understand better if we add a little math.


Solar radiation hits Detroit, or any other non-equator point, at an angle. As a result, less radiation power hits per square meter of land.

Let’s use the diagram at right and trigonometry to compare the amount of sun energy that falls on a square meter of land at the equator (0° latitude) and in a city at 42.5° N latitude (Detroit, Boston, and Rome are at this latitude). In each case, let’s consider high noon on March 21 or September 20. These are the two equinox days, the only days each year when day and night are of equal length, and the only times when the angle of the sun is easy to calculate: at high noon on the equinox, the sun deviates from the vertical by exactly the latitude.

More specifically, the equator is zero latitude, so on the equator at high noon on the equinox, the sun will shine from directly overhead, or 0° from the vertical. Since the sun’s power in space is 1050 W/m², every square meter of equator can expect to receive 1050 W of sun energy, less the amount reflected off clouds and dust, or scattered off air molecules (air scattering is what makes the sky blue). Further north, Detroit, Boston, and Rome sit at 42.5° latitude. At noon on March 21, the sun will strike the earth there at 42.5° from the vertical, as shown in the lower figure above. From trigonometry, you can see that each square meter of these cities will receive cos 42.5° as much power as a square meter at the equator, except for any difference in clouds, dust, etc. Without clouds etc., that would be 1050 × cos 42.5° = 774 W. Less sun power hits per square meter because each square meter is tilted. Earlier and later in the day, each spot will get less sunlight than at noon, but the proportion between the cities and the equator stays the same, at least on the equinox days.

To calculate the likely temperature in Detroit, Boston, or Rome, I will use a simple energy balance. Ignoring heat storage in the earth for now, we will say that the heat in equals the heat out. We will also ignore heat transfer by way of winds and rain, and approximate by saying that the heat leaves by black-body radiation alone, radiating into the extreme cold of space. This is not a bad approximation, since black-body radiation is the main heat-removal mechanism in most situations where large distances are involved. I’ve discussed black-body radiation previously; the amount of energy radiated is proportional to emissivity, and to T⁴, where T is the temperature as measured on an absolute temperature scale, Kelvin or Rankine. Based on this, and assuming that the emissivity of the earth is the same in Detroit as at the equator,

T Detroit / T equator = ⁴√(cos 42.5°) = 0.927

I’ll now calculate the actual temperatures. For American convenience, I’ll calculate in the Rankine temperature scale, the absolute Fahrenheit scale. In this scale, 100°F = 560°R, 0°F = 460°R, and the temperature of space is 0°R to a good approximation. If the average temperature of the equator is 100°F = 38°C = 560°R, we calculate that the average temperature of Detroit, Boston, or Rome will be about 0.927 × 560 = 519°R = 59°F (15°C). This is not a bad prediction, given the assumptions. We can expect the temperature to be somewhat lower at night, as there is no light, but it will not fall to zero, as there is retained heat from the day. The same reason, retained heat, explains why these cities will be warmer on September 20 than on March 21.
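
The whole calculation fits in a few lines of Python. The function below applies the fourth-root-of-cosine rule derived above; the 65° entry anticipates the effective polar-summer angle used near the end of this post:

```python
import math

SOLAR = 1050.0   # W/m^2, the sun's power figure used above

def noon_power(angle_deg):
    """Equinox noon sun power per square meter, given the sun's angle."""
    return SOLAR * math.cos(math.radians(angle_deg))

def radiative_temp_rankine(angle_deg, t_equator=560.0):
    """Black-body balance: T scales as the 4th root of absorbed power."""
    return t_equator * math.cos(math.radians(angle_deg)) ** 0.25

for angle in (42.5, 65.0):   # Detroit etc.; effective polar-summer angle
    t = radiative_temp_rankine(angle)
    print(f"{angle} deg: {noon_power(angle):.0f} W/m^2, "
          f"{t:.0f} R = {t - 460:.0f} F")   # 519 R = 59 F; 452 R = -8 F
```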

In the summer, these cities will be warmer because they are in the northern hemisphere and the north pole is tilted 23°. At the height of summer (June 21) at high noon, the sun will shine on Detroit at an angle of 42.5° – 23° = 19.5° from the vertical. The difference in angle is why these cities are warmer on that day than on March 21. The equator will be cooler on June 21 than on March 21, since the sun’s rays will strike the equator at 23° from the vertical on that day. These temperature differences are behind the formation of tornadoes and hurricanes, with a tornado season in the US centering on May to July.

When looking at the poles, we find a curious problem in guessing what the average temperature will be. At noon on the equinox, the sun comes in horizontally, that is, at 90° from the vertical. We thus expect there is no warming power at all that day, and none for the six months of winter either. At first glance, you’d think the temperature at the poles would be zero, at least for six months of the year. It isn’t zero, because there is retained heat from the summer, but it still makes for a more difficult calculation.

To figure an average temperature for the poles, let’s remember that during the six-month summer the sun shines 24 hours per day, and that the sun will rise as high as 23° above the horizon, or 67° from the vertical, for all 24 hours. Let’s assume that retained heat from the summer is what keeps the temperature from falling too low in the winter, and calculate the temperature from an effective, average sun angle.

Let’s take 25° above the horizon, or 65° from the vertical, as the effective angle of the sun during the six-month “day” of the polar summer. I don’t look at the equinox here, but rather at the solar day, and note that the heating angle stays fixed through each 24-hour day during the summer; it does not decrease in the morning or as the afternoon wears on. Based on this angle, we expect that

T Pole / T equator = ⁴√(cos 65°) = 0.806

T Pole = 0.806 × 560°R = 452°R = -8°F (-22°C).

This, as it happens, is 4°F colder than the average temperature at the north pole, but not bad, given the assumptions. Maybe winds and water currents account for the difference. Of course, there is a large temperature difference at the pole between the fall equinox and the spring equinox, but that’s to be expected. The actual average is -4°F, about the temperature at night in Detroit in the winter.

One last thing, one that might be unexpected: the temperature at the south pole is lower than at the north pole, on average -44°F. The main reason for this is that the snow at the south pole is quite deep — more than 1.5 miles deep, with rock underneath. As I showed elsewhere, we expect temperatures to be lower at high altitude. Data collected from cores through the 1.5-mile-deep snow suggest (to me) chaotic temperature change, with long ice ages and brief (6000-year) warm periods. The ice ages seem far worse than global warming.

Dr. Robert Buxbaum, December 30, 2017

Highest temperature superconductor so far: H2S

The new champion of high-temperature superconductivity is a fairly common gas, hydrogen sulphide, H2S. By compressing it to 150 GPa (1.5 million atm), a team led by Alexander Drozdov and M. Eremets of the Max Planck Institute coaxed superconductivity from H2S at temperatures as high as 203.5°K (-70°C). This is, by far, the warmest temperature of any superconductor discovered to date, and its main significance is to open the door to finding superconductivity in other, related hydrogen compounds — ideally at warmer temperatures and/or less-difficult pressures. Among the interesting compounds that will certainly get more attention: PH3, BH3, methyl mercaptan, and even water, either alone or in combination with H2S.


Relation between pressure and critical temperature for superconductivity, Tc, in H2S (filled squares) and D2S (open red). The magenta point was measured by magnetic susceptibility (Nature)

H2S superconductivity appears to follow the standard Bardeen–Cooper–Schrieffer (B-C-S) theory. According to this theory, superconductivity derives from the formation of pairs of opposite-spinning electrons (Cooper pairs), particularly in light, stiff lattices. The light, positively charged lattice quickly moves inward to follow the motion of the electrons, see the figure below. This synchronicity of motion is posited to create an effective bond between the electrons, enough to counter their natural repulsion, and allows the pairs to condense to a low-energy quantum state where they behave as if they were very large and very spread out. In this large, spread-out state, they slide through the lattice without interacting with the atoms, or with the few local vibrations and unpaired electrons found at low temperatures. From this theory, we would expect to find the highest-temperature superconductivity in the lightest lattices: materials like ice, boron hydride, magnesium hydride, or H2S. And we would expect higher-temperature behavior in the hydrogen versions, H2O or H2S, than in the heavier deuterium analogs, D2O or D2S. Experiments with H2S and D2S (shown at right) confirm this expectation, suggesting that H2S superconductivity is of the B-C-S type. Sorry to say, water has not shown any comparable superconductivity in experiments to date.
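
As a rough check of the B-C-S picture, the isotope effect predicts that Tc scales as M^(-1/2), where M is the mass of the vibrating atoms. The sketch below assumes the hydrogen atoms alone set the lattice vibration frequency, and the full exponent of 1/2; both are idealizations of this sketch:

```python
# B-C-S isotope effect: Tc ~ M^(-1/2), with M the mass of the vibrating
# atoms. Assume (an idealization) the hydrogen atoms alone set the
# lattice vibration frequency:
tc_h2s = 203.5                  # K, measured for H2S at ~150 GPa
mass_ratio = 2.014 / 1.008      # deuterium / hydrogen atomic mass

tc_d2s_expected = tc_h2s * mass_ratio ** -0.5
print(f"Expected D2S Tc: {tc_d2s_expected:.0f} K")   # ~144 K, below H2S
```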

We have found high-temperature superconductivity in only a few of the materials we would expect from B-C-S theory, and yet-higher temperatures are seen in many unexpected materials. While hydride materials generally do become superconducting, they mostly do so only at low temperatures. The highest-temperature B-C-S superconductor discovered until now was magnesium diboride, Tc = 39 K. More bothersome, the most-used superconductor, Nb3Sn, and the world record holder until now, the copper-oxide ceramics (Tc = 133 K at ambient pressure; 164 K at 35 GPa, 350,000 atm), are not B-C-S. There is no version of B-C-S theory to explain why these materials behave as well as they do, or why pressure affects Tc in them. Pressure affects Tc in B-C-S materials by raising the energy of the small-scale vibrations that would otherwise break the pairs. Why should pressure affect copper ceramics? No one knows.


The standard theory of superconductivity relies on Cooper pairs of electrons held together by lattice elasticity. The lighter and stiffer the lattice, the higher temperature the superconductivity.

The assumption is that high-pressure H2S acts as a sort of metallic hydrogen. From B-C-S theory, metallic hydrogen was predicted to be a room-temperature superconductor because the material would likely be a semi-metal, and thus a conductor at all temperatures. Hydrogen’s low atomic weight would mean that there would be no significant localized vibrations even at room temperature, suggesting room-temperature superconductivity. Sorry to say, we have yet to reach the astronomical pressures necessary to make metallic hydrogen, so we don’t know if this prediction is true. But now it seems H2S behaves nearly the same without requiring such extremely high pressures. It is thought that high-temperature H2S superconductivity occurs because H2S partially decomposes to H3S and S, and that the H3S provides a metallic-hydrogen-like operative lattice. The sulfur, it’s thought, just goes along for the ride. If this is the explanation, we might hope to find the same behaviors in water or phosphine, PH3, perhaps when mixed with H2S.

One last issue, I guess, is what this high-temperature superconductivity is good for. As far as H2S superconductivity goes, the simple answer is that it’s probably good for nothing: the pressures are too high. In general, though, high-temperature superconductors like Nb3Sn are important. They have been valuable for making high-strength magnets, and for prosaic applications like long-distance power transmission. The big magnets are used for submarine hunting, nuclear fusion, and (potentially) for levitation trains. See my essay on fusion here; it’s what I did my PhD on, in chemical engineering. Levitation trains, potentially, will revolutionize transport.

Robert Buxbaum, December 24, 2015. My company, REB Research, does a lot with hydrogen. Not that we make superconductors, but we make hydrogen generators and purifiers, and I try to keep up with the relevant hydrogen research.

Brass monkey cold

In case it should ever come up in conversation, only the picture at left shows a brass monkey; the other is a bronze statue of some sort of primate. A brass monkey is a rack used to stack cannon balls into a face-centered pyramid. A cannon crew could fire about once per minute, and an engagement could last 5 hours, so you could hope to go through a lot of cannon balls during an engagement (assuming you survived).


Small brass monkey. The classic monkey was made of navy brass and might have 9×9 or 10×10 cannon balls on the lower level.


Bronze sculpture of a primate playing with balls — but look what the balls are sitting on: it’s a dada art joke.

But brass monkeys typically show up in conversation in terms of it being cold enough to “freeze the balls off of a brass monkey,” and if you imagine an ornamental statue, you’d never guess how cold that could be. Well, for a cannonball holder, the answer has to do with the thermal expansion of metals. Cannon balls were made of iron, and the classic brass monkey was made of brass, an alloy with a much greater thermal expansion than iron. As the temperature drops, the brass monkey contracts more than the iron balls. When the drop is enough, the balls will fall off and roll around.

The thermal expansion coefficient of brass is 18.9×10⁻⁶/°C, while the thermal expansion coefficient of iron is 11.7×10⁻⁶/°C. The difference, 7.2×10⁻⁶/°C, is what determines the key temperature. Now consider a large brass monkey, one with 400 × 400 holes on the lower level, 399 × 399 on the second, and so on. Though it doesn’t affect the result, we’ll consider a monkey that holds 12-lb cannon balls, a typical size of 1750-1830. Each 12-lb ball is 4.4″ in diameter at room temperature, 20°C in those days. At 20°C, this monkey is about 1760″ wide. The balls will fall off when the monkey shrinks more than the balls by about 1/3 of a ball diameter, 1.5″.

We can calculate ∆T, the temperature change (°C) required to create this 1.5″ width difference, as follows:


-1.5″ = ∆T × 1760″ × 7.2×10⁻⁶/°C

We find that ∆T = -118°C. The temperature where this happens is 118 degrees cooler than 20°C, or -98°C. That’s a temperature you could, perhaps, reach at the South Pole or in deepest Russia. It’s not likely to be a problem, especially with a smaller brass monkey.
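
Here is the same calculation as a short Python sketch, using the numbers above:

```python
# Temperature drop needed for the balls to fall off a 400 x 400 monkey
alpha_brass = 18.9e-6    # /C, thermal expansion of brass
alpha_iron  = 11.7e-6    # /C, thermal expansion of iron
d_alpha = alpha_brass - alpha_iron    # 7.2e-6 /C

width = 400 * 4.4        # inches; monkey width at 20 C
slack = 1.5              # inches; ~1/3 of a 4.4" ball diameter

dT = -slack / (width * d_alpha)
print(f"dT = {dT:.0f} C, so balls fall at {20 + dT:.0f} C")   # ~ -98 C
```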

Robert E. Buxbaum, February 21, 2015 (modified Apr. 28, 2021). Some fun thoughts: convince yourself that the key temperature is independent of the size of the cannon balls; that is, that I didn’t need to choose 12-pounders. A bit more advanced: what is the equation for the number of balls on any particular base-size monkey? Show that the packing density is no more efficient if the bottom layer were an equilateral triangle rather than a square. If you liked this, you might want to know how much wood a woodchuck chucks if a woodchuck could chuck wood, or about the relationship between mustaches and WWII diplomacy.

Can you spot the man-made climate change?

As best I can tell, the only constant in climate is change. As an example, the record of northern temperatures for the last 10,000 years, below, shows nothing but major ups and downs following the end of the last ice age 9,500 years ago. The only pattern, if you call it a pattern, is fractal chaos. Anti-change politicos like to concentrate on the recent 110 years, from 1890 to 2000 (the small up-line at the right), but they ignore the previous 10,000 or more, ignore the fact that the last 17 years show no change, and ignore the variation within those 110 years (they call it weather). I find I cannot spot the part of the change that’s man-made.


10,000 years of northern climate temperatures based on Greenland ice cores. Dr. Ole Humlum, Dept. of Geosciences, University of Oslo. Can you spot the part of the climate change that’s man-made?


Stephen Colbert makes his case for belief: if you don’t believe it, you’re stupid.

Stephen Colbert claims that man-made climate change is so absolutely apparent that all the experts agree, and that anyone who doubts is crazy, stupid, or politically motivated (he, of course, is not). Freeman Dyson, one of the doubters, is not normally considered crazy or stupid. The approach reminds me of “The Emperor’s New Clothes”: only the good, smart people see it. The same people used to call it “global warming,” based on a model prediction of man-made warming. The name was changed to “climate change” since the planet isn’t warming. The model predicted strong warming in the upper atmosphere, but that isn’t happening either; ski areas are about as cold as ever (we’ve got good data from ski areas).

I note that the climate on Jupiter has changed too in the last 100 years. A visible sign of this is that the great red spot has nearly disappeared. But it’s hard to claim that’s man-made. There’s a joke here, somewhere.


Jupiter’s red spot has shrunk significantly. Here it is now. NASA

As a side issue, it seems to me that some global warming could be a good thing. The periods that were warm had peace and relative plenty, while periods of cold, like the Little Ice Age 500 years ago, were times of mass starvation and plague. Similarly, things were a lot better during the Medieval Warm Period (1000 AD) than during the Dark Ages, 500-900 AD. The Roman Warm Period (100 BC - 50 AD) was again warm and (relatively) civilized. Perhaps we owe some of the good food production of today to the warming shown on the chart above. Civilization is good. Robert E. Buxbaum, January 14, 2015. (Corrected January 19; I’d originally labeled Stephen Colbert as Jon Stewart.)

 

Dr. Who’s Quantum reality viewed as diffusion

It’s very hard to get the meaning of life from science, because reality is very strange. Further, science is mathematical, and the math relations for reality can be rearranged. One arrangement of the terms will suggest one version of causality, while another will suggest a different causality. As Dr. Who points out, in non-linear, non-objective terms, there’s no causality, but rather a wibbly-wobbly ball of timey-wimey stuff.


Reality is a ball of wibbly-wobbly, timey-wimey stuff. Dr. Who.

To this end, I’ll provide my favorite way of looking at the timey-wimey way of the world by rearranging the equations of quantum mechanics into a sort of diffusion. It’s not the diffusion of something you’re quite familiar with, but rather of a timey-wimey wave-stuff referred to as Ψ. It’s part real and part imaginary, and the only relationship between Ψ and life is that the chance of finding something somewhere is proportional to Ψ*Ψ. The diffusion of this half-imaginary stuff is the underpinning of reality — if viewed in a certain way.

First let’s consider the steady diffusion of a normal (un-quantum) material. If there is a lot of it, like the perfume coming off a prima donna, you can say that N = -D dc/dx, where N is the flux of perfume (molecules per minute per area), dc/dx is the concentration gradient (there’s more perfume near her than near you), and D is the diffusivity, a number related to the mobility of those perfume molecules.

We can further generalize the diffusion of an ordinary material to a case where concentration varies with time, because of reaction or because of a difference between the in-rate and the out-rate. With reaction added as a source term, we can write: dc/dt = reaction + dN/dx = reaction + D d²c/dx². For a first-order reaction, for example radioactive decay, reaction = -βc, and

dc/dt = -βc + D d²c/dx²               (1)

where β is the radioactive decay constant of the material whose concentration is c.

Viewed in a certain way, the most relevant equation for reality, the time-dependent Schrödinger wave equation (semi-derived here), fits into the same diffusion-reaction form:

dΨ/dt = -2iπV/h Ψ + hi/4πm d²Ψ/dx²               (2)

Instead of the diffusion of a real material (perfume, radioactive radon, etc.) with a real concentration c, in this relation the material can not be sensed directly, and the concentration, Ψ, is semi-imaginary. Here, h is Planck’s constant, i is the imaginary number √-1, m is the mass of the real material, and V is the potential energy. When dealing with reactions or charged materials, it’s relevant that V will vary with position (e.g. electrons’ energy is lower when they are near protons). The diffusivity term here is imaginary, hi/4πm, but that’s OK; Ψ is part imaginary too, and we’d expect that potential energy is something of a destroyer of Ψ: the likelihood of finding something at a spot goes down where the energy is high.

The form of this diffusion equation is linear, a mathematical term that refers to equations where a solution that works for Ψ will also work for 2Ψ. Generally speaking, linear equations have exp() terms in their solutions, and that’s especially likely here, as the only place where you see a time term is on the left. For most cases we can say that

Ψ = ψ exp(-2iπEt/h)               (3)

where ψ is a function of x (space) alone, and E is the energy of the thing whose behavior is described by Ψ. If you take the derivative of equation 3 with respect to time, t, you get

dΨ/dt = ψ (-2iπE/h) exp(-2iπEt/h) = (-2iπE/h)Ψ.               (4)

If you insert this into equation 2, you’ll notice that the form of the first term is now identical to the second, with energy appearing identically in both terms. Divide now by exp(-2iπEt/h), and you get the following equation:

(E-V) ψ = -h²/8π²m d²ψ/dx²                      (5)

where ψ can be thought of as the physical concentration in space of the timey-wimey stuff. ψ is still wibbly-wobbly, but no longer timey-wimey. Now ψ² is the likelihood of finding the stuff somewhere at any time, and E is the energy of the thing. For most things in normal conditions, E is quantized and equals approximately kT. That is, the E of the thing typically equals a quantized energy state that’s near Boltzmann’s constant times temperature.

You now want to check that the approximation in equations 3-5 was legitimate. You do this by checking whether the time-scale implicit in exp(-2iπEt/h) is small relative to the time-scales of the action. If it is (and it usually is), you are free to solve for ψ at any E and V using normal mathematics, by analytic or digital means, for example this way. ψ will be wibbly-wobbly but won’t be timey-wimey. That is, the space behavior of the thing will be peculiar, with the item in forbidden locations, but there won’t be time reversal. For time reversal, you need small space features (like here) or entanglement.

Equation 5 can be considered a simple steady-state diffusion equation. The stuff whose concentration is ψ is created wherever E is greater than V, and is destroyed wherever V is greater than E. The stuff then continuously diffuses from the former area to the latter, establishing a time-independent concentration profile. E is quantized (it can take only some specific values) since matter can never be created or destroyed, and it is only at specific values of E that this balance works out in equation 5. For a particle in a flat box, E and ψ are found, typically, by realizing that the form of ψ must be a sine function (and ignoring an infinity). For more complex potential-energy surfaces, it’s best to use a matrix solution for ψ along with non-continuous calculus. This avoids the infinity, and is a lot more flexible besides.
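
Here is a minimal numpy sketch of that matrix approach, for the flat-box case. The grid size and the units (ħ = m = 1) are choices of this sketch, not anything fundamental:

```python
import numpy as np

# Discretize d^2(psi)/dx^2 in equation 5 by central differences on a
# grid; the allowed energies E are then the matrix eigenvalues. The
# box runs from x = 0 to x = L with psi = 0 at the walls.
n, L = 500, 1.0
dx = L / (n + 1)

V = np.zeros(n)                      # flat box; put any potential here

# -1/2 d^2/dx^2 as a tridiagonal matrix (hbar^2/2m = 1/2 in these units)
off = np.ones(n - 1)
d2 = (np.diag(np.full(n, -2.0)) + np.diag(off, 1) + np.diag(off, -1)) / dx**2
H = -0.5 * d2 + np.diag(V)

E, psi = np.linalg.eigh(H)           # eigenvalues sorted ascending
print(E[:3])                         # ~ (n*pi)^2/2 = 4.93, 19.74, 44.4
```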

When you detect a material in some spot, you can imagine that the space-function ψ collapses, but even that isn’t clear, as you can never know the position and velocity of a thing simultaneously, so it doesn’t collapse all that much. And as for what the stuff is that diffuses and has concentration ψ, no one knows, but it behaves like a stuff. And as to why it diffuses, perhaps it’s jiggled by unseen photons. I don’t know if this is what happens, but it’s a way I often choose to imagine reality — a moving, unseen material with real and imaginary (spiritual?) parts, whose concentration, ψ, is related to experience, but not directly experienced.

This is not the only way the equations can be rearranged. Another way of thinking of things is as a sum of path integrals — an approach that appears to me as a many-worlds version, with fixed points in time (another Dr. Who feature). In this view, every object takes every path possible between these points, and reality is the sum of all the versions, including some that have time reversals. Richard Feynman explains this path-integral approach here. If it doesn’t make more sense than my version, that’s OK. There is no version of the quantum equations that will make total, rational sense. All the true ones are mathematically equivalent — totally equal, but differing in “meaning.” That is, if you were to impose meaning on the math terms, the meaning would be totally different. That’s not to say that all explanations are equally valid — most versions are totally wrong — but there are many equally valid math versions to fit many equally valid religious or philosophic world views. The various religions, I think, are uncomfortable with having so many completely different views being totally equal because (as I understand it) each wants exclusive ownership of truth. Since this is never so for math, I claim religion is the opposite of science. Religion is trying to find The Meaning of life, and science is trying to match experiential truth — and ideally useful truth; knowing the meaning of life isn’t that useful in a knife fight.

Dr. Robert E. Buxbaum, July 9, 2014. If nothing else, you now perhaps understand Dr. Who more than you did previously. If you liked this, see here for a view of political happiness in terms of the thermodynamics of free-energy minimization.

If hot air rises, why is it cold on mountain-tops?

This is a child’s question that’s rarely answered to anyone’s satisfaction. To answer it well requires college-level science, and by college the child has usually been dissuaded from asking anything scientific that might embarrass the teacher — which is to say, from asking most anything. By a good answer, I mean one that provides both a mathematical, checkable prediction of the temperature you’d expect to find on mountain tops, and a feel for why it should be so. I’ll try to provide both here, as previously when explaining “why is the sky blue.” A word of warning: real science involves mathematics, something that’s often left behind, perhaps in an effort to build self-esteem. If I do a poor job, please text me back: “if hot air rises, what’s keeping you down?”

As a touchy-feely answer, please note that all materials have internal energy, generally associated with the kinetic plus potential energy of the molecules. It enters whenever a material is heated or has work done on it, and for gases, to good approximation, it equals the heat capacity of the gas times its temperature. For air, this is about 7 cal/mol°K times the temperature in degrees Kelvin. Average air at sea level is taken to be at 1 atm (101,325 Pascals) and 15.0°C (288.15°K); the internal energy of this air is thus 288.15 × 7 = 2017 cal/mol = 8440 J/mol. The internal energy of the air will decrease as the air rises, and the temperature will drop, for reasons I will explain below. Most diatomic gases have a heat capacity of 7 cal/mol°K, a fact that is only explained by quantum mechanics; if not for quantum mechanics, the heat capacity of diatomic gases would be about 9 cal/mol°K.

Let’s consider a volume of this air at standard conditions, and imagine that it is held within a weightless balloon or plastic bag. As we pull that air up, by pulling up the bag, the bag starts to expand because the pressure is lower at high altitude (air pressure is just the weight of the air above). No heat is exchanged with the surrounding air, because our air will always be about as warm as its surroundings; or, if you like, you can imagine the weightless balloon prevents it. In either case, the molecules lose energy as the bag expands because they always collide with an outwardly moving wall. Alternately, you can say that the air in the bag is doing work on the exterior air — expansion is work — while we put no work into the air, as it takes no work to lift it. The buoyancy of the air in our balloon is always about that of the surrounding air, or so we’ll assume for now.

A classic, difficult way to calculate the temperature change with altitude is to calculate the work being done by the air in the rising balloon. Work done is force times distance: w = ∫f dz, and this work should equal the effective cooling, since heat and work are interchangeable. There’s an integral sign here to account for the fact that force is proportional to pressure, and the air pressure decreases as the balloon goes up. We now note that w = ∫f dz = -∫P dV, because pressure P is force per unit area, and volume V is area times distance. The minus sign is because the work is being done by the air, not on the air — it involves a loss of internal energy. Sorry to say, the temperature and pressure of the air keep changing with volume and altitude, so it’s hard to solve the integral. But there is a simpler approach, based on entropy, S.


Les Droites, a mountain in the Alps near the intersection of France, Italy, and Switzerland, is 4000 m tall. The top is generally snow-covered.

I discussed entropy last month, and showed that it is a property of state, and further, that for any reversible path, ∆S = (Q/T)rev. That is, the entropy change for any reversible process equals the heat that enters divided by the temperature. Now, we expect the balloon’s rise to be reversible, and since we’ve assumed no heat transfer, Q = 0. We thus expect that the entropy of the air will be the same at all altitudes. Entropy has two parts: a temperature part, Cp ln T2/T1, and a pressure part, R ln P2/P1. If the total ∆S = 0, these two parts must exactly cancel.

Consider that at 4000 m, the height of Les Droites, a mountain in the Mont Blanc range, the typical pressure is 61,660 Pa, about 60.85% of the sea-level pressure (101,325 Pa). If the air were brought to this pressure at constant temperature, (∆S)T = -R ln P2/P1, where R is the gas constant, about 2 cal/mol°K, and P2/P1 = 0.6085; thus (∆S)T = -2 ln 0.6085. Since the total entropy change is zero, this part must cancel Cp ln T2/T1, where Cp is the heat capacity of air at constant pressure, about 7 cal/mol°K for all diatomic gases, and T1 and T2 are the temperatures (Kelvin) of the air at sea level and at 4000 m. (These equations are derived in most thermodynamics texts. The short version: the entropy change from compression at constant T equals the work at constant temperature divided by T, ∫P/T dV = ∫R/V dV = R ln V2/V1 = -R ln P2/P1. Similarly, the entropy change at constant pressure is ∫dQ/T, where dQ = Cp dT; this component of entropy is thus ∫dQ/T = Cp ∫dT/T = Cp ln T2/T1.) Setting the sum to zero, we can say that Cp ln T2/T1 = R ln 0.6085, or that

T2 = T1 (0.6085)^(R/Cp)

T2 = T1 (0.6085)^(2/7), where 0.6085 is the pressure ratio at 4000 m, and because, for air and most diatomic gases, R/Cp = 2/7 to very good approximation, matching the prediction from quantum mechanics.

From the above, we calculate T2 = 288.15 × 0.8676 = 250.0°K, or -23.15°C. This is cold enough to provide snow on Les Droites nearly year round, and it’s pretty accurate: the typical temperature at 4000 m is 262.17 K (-11°C). That’s 26°C colder than at sea level, and only 12°C warmer than we’d predicted.
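
The isentropic estimate is short enough to check in a few lines of Python:

```python
# Isentropic estimate: with total dS = 0, T2 = T1 * (P2/P1)^(R/Cp),
# and R/Cp = 2/7 for diatomic gases like air.
T1 = 288.15                  # K, average sea-level temperature
p_ratio = 61660 / 101325     # pressure at 4000 m vs sea level

T2 = T1 * p_ratio ** (2 / 7)
print(f"{T2:.1f} K = {T2 - 273.15:.1f} C")   # ~250.0 K, about -23 C
```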

There are three weak assumptions behind the 12°C error in our prediction: (1) that the air that rises is no hotter than the air that does not, (2) that the air is not heated by radiation from the sun or earth, and (3) that there is no heat exchange with the surrounding air, e.g. from rain or snow formation. The last of these errors is thought to be the largest, but it’s still not large enough to cause serious problems.


Snow on Kilimanjaro, Tanzania 2013. If global warming models were true, the ground should be 4°C warmer than 100 years ago, and the air at this altitude, about 7°C (12°F) warmer; and the snow should be gone.

You can use this approach, with different exponents, to estimate the temperature at the center of Jupiter, or at the center of neutron stars. This isentropic calculation is the model that’s used there, though it’s understood that it may be off by a fair percentage. You can also ask questions about global warming: increased CO2 at this level is supposed to cause extreme heating at 4000 m, enough to heat the earth below by 4°C/century or more. As it happens, the temperature and snow cover on Les Droites and other Alpine ski areas have been studied carefully for many decades; they are not warming, as best we can tell (here’s a discussion). By all rights, Mt. Blanc should be Mt. Green by now; no one knows why it isn’t. The earth, too, seems to have stopped warming. My theory: clouds.

Robert Buxbaum, May 10, 2014. Science requires you check your theory for internal and external weakness. Here’s why the sky is blue, not green.

Entropy, the most important pattern in life

One evening at the Princeton grad college, a younger fellow (an 18-year-old genius) asked the most simple, elegant question I had ever heard, one I’ve borrowed and used ever since: “Tell me,” he asked, “something that’s important and true.” My answer that evening was that the entropy of the universe is always increasing. It’s a fundamentally important pattern in life, one I didn’t discover, but discovered to have a lot of applications and meaning. Let me explain why it’s true, and then why I find it meaningful.


Famous entropy cartoon, Harris

The entropy of the universe is not something you can measure directly, but rather indirectly, from the availability of work in any corner of it. It’s related to randomness and the arrow of time. First off, here’s how you can tell if time is moving forward: put an ice cube into hot water; if the cube melts and the water becomes cooler, time is moving forward — or, at least, it’s moving in the same direction as you are. If you can reach into a cup of warm water and pull out an ice cube while making the water hot, time is moving backwards — or rather, you are living backwards. Within any closed system, one where you don’t add things or energy (sunlight, say), you can tell that time is moving forward because the forward progress of time always leads to a loss of the availability of work. In the case above, you could have generated some electricity from the ice cube and the hot water, but not from the glass of warm water.

You can not extract work from a heat source alone; to extract work some heat must be deposited in a cold sink. At best the entropy of the universe remains unchanged. More typically, it increases.


This observation is about as fundamental as any to understanding the world; it is the basis of entropy and the second law of thermodynamics: you can never extract useful work from a uniform-temperature body of water, say, just by making that water cooler. To get useful work, you always need some other transfer into or out of the system; you always need to make something else hotter or colder, or provide some chemical or altitude change that can not be reversed without adding more energy back. Thus, so long as time moves forward, everything runs down in terms of work availability.

There is also a first law; it states that energy is conserved. That is, if you want to heat some substance, that change requires that you put in a set amount of work plus heat. Similarly, if you want to cool something, a set amount of heat plus work must be taken out. In equation form, we say that, for any change, q + w is constant, where q is heat and w is work. It’s the sum that’s constant, not the individual values, so long as you count every 4.184 joules of work as if it were 1 calorie of heat. If you input more heat, you have to add less work, and vice versa, but there is always the same sum. When adding heat or work, we say that q or w is positive; when extracting heat or work, we say that q or w is negative. Still, each 4.184 joules counts as if it were 1 calorie.

Now, since for every path between two states q + w is the same, we say that q + w represents a path-independent quantity for the system, one we call internal energy, U, where ∆U = q + w. This is a mathematical form of the first law of thermodynamics: you can’t take q + w out of nothing, or add it to something without making a change in the properties of the thing. The only way to leave things the same is if q + w = 0. We notice also that, for any pure thing or mixture, the sum q + w for a change is proportional to the mass of the stuff. We can thus write q + w = n∆u, where n is the grams of material and ∆u, the change in internal energy per gram, is an intensive property.

We are now ready to put the first and second laws together. We find we can extract work from a system if we take heat from a hot body of water and deliver some of it to something at a lower temperature (the ice cube, say). This can be done with a thermopile, with a steam engine (Rankine cycle, above), or with a Stirling engine. That an engine can only extract work when there is a difference of temperatures is similar to the operation of a water wheel. Sadi Carnot noted that a water wheel is able to extract work only when there is a flow of water from a high level to a low one; similarly, in a heat engine, you only get work by taking in heat energy from a hot heat-source and exhausting some of it to a colder heat-sink. The remainder leaves as work. That is, q1 - q2 = w, and energy is conserved. The second law isn’t violated so long as there is no way you could run the engine without the cold sink. Accepting this as reasonable, we can now derive some very interesting, non-obvious truths.

We begin with the famous Carnot cycle. The Carnot cycle is an idealized heat engine with the interesting feature that it can be made to operate reversibly. That is, you can make it run forwards, taking a certain amount of heat from a hot source, producing a certain amount of work, and delivering a certain amount of heat to the cold sink; and you can run the same process backwards, as a refrigerator, taking in the same amount of work and the same amount of heat from the cold sink, and delivering the same amount to the hot source. Carnot showed, by the following proof, that all other reversible engines must have the same efficiency as his cycle, and that no engine, reversible or not, can be more efficient. The proof: if an engine could be designed that extracts a greater percentage of the heat as work when operating between a given hot source and cold sink, it could be used to drive his Carnot cycle backwards. If the pair of engines were combined so that the less efficient engine removed exactly as much heat from the sink as the more efficient engine deposited, the excess work produced by the more efficient engine would leave with no effect besides cooling the source. This combination would be in violation of the second law, something that we’d said was impossible.

Now let us try to understand the relationship that drives useful energy production. The ratio of heat in to heat out has got to be a function of the in and out temperatures alone. That is, q1/q2 = f(T1, T2), and similarly, q2/q1 = f(T2, T1). Now let’s consider what happens when two Carnot cycles are placed in series between T1 and T2, with the middle temperature at Tm. For the first engine, q1/qm = f(T1, Tm), and similarly for the second engine, qm/q2 = f(Tm, T2). Combining these, we see that q1/q2 = (q1/qm) × (qm/q2), and therefore f(T1, T2) must always equal f(T1, Tm) × f(Tm, T2) = f(T1, Tm)/f(T2, Tm). In this relationship we see that the middle temperature Tm is irrelevant; it is true for any Tm. The simplest function with this property is the ratio of the temperatures themselves, and we thus say that q1/q2 = T1/T2; this is the limit you get at maximum (reversible) efficiency. You can now rearrange this to read q1/T1 = q2/T2, or to say that the work, W = q1 - q2 = q2(T1 - T2)/T2.
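
As a quick numerical illustration of this bookkeeping (the temperatures below are arbitrary examples; any absolute scale works):

```python
# Carnot bookkeeping: at reversible efficiency, q1/T1 = q2/T2, so from
# q1 of heat taken in at T1 and rejected at T2, the work available is:
def carnot_work(q1, t_hot, t_cold):
    q2 = q1 * t_cold / t_hot       # heat that must go to the cold sink
    return q1 - q2                 # the remainder leaves as work

q1, t_hot, t_cold = 100.0, 560.0, 460.0   # any absolute scale, e.g. R
w = carnot_work(q1, t_hot, t_cold)
print(w, w / q1)     # ~17.9 units of work; efficiency = 1 - T2/T1 = ~18%
```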

A strange result from this is that, since every process can be modeled as either a sum of Carnot engines or of engines that are less efficient, and since the Carnot engine will produce this same amount of reversible work when filled with any substance or combination of substances, we can say that this outcome, q1/T1 = q2/T2, is independent of path and independent of substance, so long as the process is reversible. We can thus say that for all substances there is a property of state, S, such that the change in this property is ∆S = ∑q/T for all the heat in or out. In a more general sense, we can say ∆S = ∫dq/T, where this state property, S, is called the entropy. Since, as before, the amount of heat needed is proportional to mass, we can write S = n·s, where n is the mass of stuff and s, the entropy per mass, is an intensive property.

Another strange result comes from the efficiency equation. Since, for any engine or process that is less efficient than the reversible one, we get less work out for the same amount of q1, we must have more heat rejected than in the reversible case. Thus, for an irreversible engine or process, q1 - q2 < q2(T1 - T2)/T2, which rearranges to q2/T2 > q1/T1. As a result, the total change in entropy, ∆S = q2/T2 - q1/T1 > 0: the entropy of the universe always goes up or stays constant; it never goes down. One final observation: there must be a zero temperature that nothing can go below, or both q1 and q2 could be positive and energy would not be conserved. Our observations of time and energy conservation thus lead us to expect a minimum temperature, T = 0, that nothing can be colder than. We find this temperature at -273.15°C. It is called absolute zero; nothing has ever been cooled to be colder than this, and now we see that, so long as time moves forward and energy is conserved, nothing ever will be.

Typically we either say that S is zero at absolute zero, or at room temperature.

We’re nearly there. We can define the entropy of the universe as the sum of the entropies of everything in it. From the above treatment of work cycles, we see that this total entropy always goes up, never down. This is a fundamental fact of nature, and (in my world view) a fundamental view into how God views us and the universe. First, that the entropy of the universe goes up only, and not down (in our time-forward framework), suggests there is a creator for our universe — a source of negative entropy at the start of all things, or a reverser of time (it’s the same thing in our framework). Another observation: God likes entropy a lot, and that means randomness. It’s His working principle, it seems.

But before you take me for a total libertine, and say that since science shows that everything runs down, the only moral take-home is to teach “Let us eat and drink,”… “for tomorrow we die!” (Isaiah 22:13), I should note that this randomness only applies to the universe as a whole. The individual parts (planets, laboratories, beakers of coffee) do not maximize entropy, but instead minimize available work, and this is different. You can show that the maximization of S, the entropy of the universe, does not lead to the maximization of s, the entropy per gram of your particular closed space, but rather to the minimization of a related quantity, µ, the free energy, or usable work per gram of your stuff. You can show that, for any closed system at constant temperature, µ = h - Ts, where s is the entropy per gram as before, and h is called the enthalpy. h is basically the potential energy of the molecules; it is lowest at low temperature and high order. For a closed system, we find there is a balance between s, something that increases with increased randomness, and h, something that decreases with increased randomness. Put water and air in a bottle, and you find that the water is mostly on the bottom of the bottle, the air mostly on top, and the amount of mixing in each phase is not the maximum disorder, but rather the one you’d calculate will minimize µ.


As a protein folds its randomness and entropy decrease, but its enthalpy decreases too; the net effect is one precise fold that minimizes µ.

This balance, I’d guess, is the principle that God applies to everything, including us. Take protein folding: some configurations have high disorder and high h; some have low disorder and very low h. The result is a temperature-dependent balance. If I were to take a moral imperative from this balance, I’d say it matches the sayings of Solomon the wise: “there is nothing better for a person under the sun than to eat, drink and be merry. Then joy will accompany them in their toil all the days of the life God has given them under the sun.” (Ecclesiastes 8:15). There is toil here as well as pleasure; directed activity balanced against personal pleasures. This is the µ = h - Ts minimization where, perhaps, T is economic wealth. Thus, the richer a society, the less toil is ideal and the more freedom. Of necessity, poor societies are repressive.
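
To see the balance shift with temperature, here is an illustrative two-state sketch of the µ = h - Ts competition. The ∆h and ∆s values are invented for illustration, not taken from any real protein:

```python
import math

# Two-state sketch of the mu = h - T*s balance: a folded state with low
# h and low s vs an unfolded state with higher h and higher s. The dh
# and ds values below are invented for illustration only.
dh = 50_000.0   # J/mol, enthalpy cost of unfolding (assumed)
ds = 150.0      # J/mol/K, entropy gain on unfolding (assumed)
R  = 8.314      # J/mol/K, gas constant

for T in (280.0, 320.0, 360.0):            # K
    dmu = dh - T * ds                      # free-energy cost of unfolding
    f_folded = 1 / (1 + math.exp(-dmu / (R * T)))
    print(T, round(f_folded, 3))           # balance shifts as T rises
```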

Dr. Robert E. Buxbaum, Mar 18, 2014. My previous thermodynamic post concerned the thermodynamics of hydrogen production. It’s not clear that all matter goes forward in time, by the way; antimatter may go backwards, so it’s possible that antimatter apples may fall up. On the microscopic scale, time becomes flexible, so it seems you could make a time machine. Religious leaders tend to be anti-science, I’ve noticed, perhaps because scientific miracles can be done by anyone, available even to those who think “wrong,” or say the wrong words. And that’s that; all being heard, do what’s right and enjoy life too: as important a pattern in life as you’ll find, I think. The relationship between free energy and societal organization is from my thesis advisor, Dr. Ernest F. Johnson.

Where does industrial CO2 come from? China mostly.

The US is in the process of imposing strict regulations on carbon dioxide as a way to stop global warming and climate change. We have also closed nearly-new power plants, replacing them with cleaner options, like the 2.2-billion-dollar solar-electric generator at Ivanpah dry lake, and this January our president imposed a ban on light bulbs of 60 W and higher. But it might help to know that China produced twice as much of the main climate-change gas, carbon dioxide (CO2), as the US did in 2012, and the ratio seems to be growing. One reason China produces so much CO2 is that China generates electricity from dirty coal using inefficient turbines.


From EDGAR 4.2: as of 2012, twice as much carbon dioxide (CO2) was coming from China as from the US and Europe.

It strikes me that a good approach to reducing the world’s carbon-dioxide emissions would be to stop manufacturing so much in China. Our US electric plants use more efficient generating technology and burn lower-carbon fuels than China’s do. We then add scrubbers and pollution-reduction equipment that are hardly used in China. US manufacture thus produces not only less carbon dioxide than Chinese manufacture, it also avoids other forms of air pollution, like NOx and SOx. Add to this the advantage of having fewer ships carrying products to and from China, and it’s clear that we could significantly reduce the world’s air problems by moving manufacture back to the USA.

I should also note that manufacture in the US helps the economy by keeping jobs and taxes here. A simple way to reduce purchases from China and collect some tax revenue would be to impose an import tariff on Chinese goods, based, perhaps, on the difference in carbon emissions or other pollution involved in Chinese manufacture and transport. While I have noted a lack of global warming (sixteen years now), that doesn’t mean I like pollution. It’s worthwhile to clean the air, and if we collect tariffs from the Chinese and help the US economy too, all the better.

Robert E. Buxbaum, February 24, 2014. Nuclear power produces no air pollution and uses a lot less land area compared to solar and wind projects.

Global warming takes a 15 year rest

I have long thought that global climate change is chaotic, rather than a steady warming. Global temperatures show self-similar (fractal) variation with time, and long-term cycles; they also show strange attractors, generally stable states that include ice ages and El Niño events. These sudden resets of the global temperature pattern are classic symptoms of chaos. The standard models of global warming do not predict El Niño and other chaotic events, and thus are fundamentally wrong. The models assume that a steady amount of sun heat reaches the earth, while a decreasing amount leaves, held in by increasing amounts of man-produced CO2 (carbon dioxide) in the atmosphere. These models are “tweaked” to match the observed temperatures to the CO2 content of the atmosphere from 1930 to about 2004. In the movie “An Inconvenient Truth,” Al Gore uses these models to predict massive arctic melting leading to a 20-foot rise in sea levels by 2100. To the embarrassment of Al Gore, and the relief of everyone else, though CO2 concentrations continue to rise, global warming took a 15-year break starting shortly before the movie came out, and the sea level is more or less where it was, except for temporary changes during periodic El Niño cycles.

Fifteen years of global temperature variation, to June 2013: four El Niño cycles, but no sign of a long-term change. Most models predict 0.25°C/decade.

Hans von Storch, a German expert on global warming, told the German newspaper, der Spiegel: “We’re facing a puzzle. Recent CO2 emissions have actually risen even more steeply than we feared. As a result, according to most climate models, we should have seen temperatures rise by around 0.25 degrees Celsius (0.45 degrees Fahrenheit) over the past 10 years. That hasn’t happened. [Further], according to the models, the Mediterranean region will grow drier all year round. At the moment, however, there is actually more rain there in the fall months than there used to be. We will need to observe further developments closely in the coming years.”

Aside from the lack of warming for the last 15 years, von Storch mentions that there has been no increase in severe weather. You might find that surprising, given the news reports; still, it’s so. Storms are caused by temperature and humidity differences, and these have not changed. (Click here to see why tornadoes lift stuff up.)

At this point, I should mention that the majority of global warming experts do not see a problem with the 15-year pause. Global temperatures have been rising unsteadily since 1900, and even von Storch expects the trend to continue — sooner or later. I do see a problem, though, highlighted by the various chaotic changes that are left out of the models. A source of the chaos, and a fundamental problem with the models, could be how they treat the effects of water vapor. When uncondensed, water vapor acts as a very strong thermal blanket: it allows the sun’s light in, but prevents the heat energy from radiating out. CO2 behaves the same way, but more weakly (there’s less of it).

More water vapor enters the air as the planet warms, and this should amplify the CO2-caused run-away heating, except for one thing. Every now and again, the water vapor condenses into clouds, and then (sometimes) falls as rain or snow. Clouds and snow reflect the incoming sunlight, and this leads to global cooling. Rain and snow drive water vapor from the air, and this leads to accelerated global cooling. To the extent that clouds are chaotic, and out of man’s control, the global climate should be chaotic too. So far, no one has a very good global model for cloud formation, or for rain and snowfall, but it’s well accepted that these phenomena are chaotic and self-similar (each part of a cloud looks like the whole). Clouds may also exhibit “the butterfly effect,” where a butterfly in China can cause a hurricane in New Jersey if it flaps at the right time.

For those wishing to examine the longer-range view, here’s a thermal history of central England since 1659, Oliver Cromwell’s time. At this scale, each peak is an El Niño. There is a lot of chaotic noise, but you can also notice either a 280-year periodicity (last peak around 1720), or a 100-year temperature rise beginning about 1900.

Global warming: central England since 1659. From http://www.climate4you.com

It is not clear that the cycle is human-caused, but my hope is that it is. My sense is that the last 100 years of global warming has been a good thing; for agriculture and trade, it’s far better than an ice age. If we caused it with our CO2, we could continue to use CO2 to balance the natural tendency toward another ice age. If it’s chaotic, as I suspect, such optimism is probably misplaced; it is very hard to get a chaotic system out of its behavior. The evidence: we’ve never moved an El Niño out of its normal period of every 3 to 7 years (expect another this year or next). If so, we should expect another ice age within the next few centuries.

Global temperatures measured from the Antarctic ice, showing four ice ages: stable, cyclic chaos and self-similarity.

Just as clouds cool the earth, you can cool your building too, by painting the roof white. If you are interested in more weather-related posts, here’s why the sky is blue on earth, and why the sky on Mars is yellow.

Robert E. Buxbaum, July 27, 2013. (Mostly, my business makes hydrogen generators, and I consult on hydrogen.)