Category Archives: Heat transfer

The speed of sound, Buxbaum’s correction

Ernst Mach showed that sound must travel at a particular speed through any material, one determined by the conservation of energy and of entropy. At room temperature and 1 atm, that speed is theoretically predicted to be 343 m/s. For a wave to move at any other speed, either the laws of energy conservation would have to fail, or ∆S ≠ 0 and the wave would die out. This is the only speed where you could say there is a traveling wave, and experimentally, this is found to be the speed of sound in air, to good accuracy.

Still, it strikes me that Mach's assumptions may have been too restrictive for short-distance sound waves. Perhaps there is room for other sound speeds if you allow ∆S > 0, and consider sound that travels short distances and dies out far from the source. Waves at these other speeds might affect music appreciation, or headphone design. As these waves were never treated in my thermodynamics textbooks, I wondered if I could derive their speed in any nice way, and whether they would be faster or slower than the main wave. (If I can't use this blog to re-think my college studies, what good is it?)

It can help to think of the sound wave as a shock front moving to the right down a constant-area tube of still air at speed u, with us moving along at the same speed as the wave. In this view the wave appears stationary, with a wind of speed u approaching it from the right.

As a first step to re-imagining Mach's calculation, here is one way to derive the original, ∆S = 0, speed of sound. I showed in a previous post that the entropy change for compression can be imagined to have two parts. There is a pressure part at constant temperature: dS/dV at constant T equals dP/dT at constant V, which equals R/V for an ideal gas. There is also a temperature part at constant volume: dS/dT at constant V = Cv/T. Setting the total dS to zero and dividing the two parts, we find that, at constant entropy, dT/dV = -RT/CvV = -P/Cv. For a compression (dV < 0) where ∆S > 0, the temperature rises faster than this: |dT/dV| > P/Cv.

Now let's look at the conservation of mechanical energy. A compression wave gives off a certain amount of mechanical energy, or work on expansion, and this work accelerates the gas within the wave. For an ideal gas, the internal energy is stored only in its temperature. Consider a sound wave going down a tube from left to right, and let's move our reference plane along with the wave at the same speed, so the wave seems to sit still while a flow of gas moves toward it from the right at the speed of the sound wave, u. For this flow system energy is conserved, though no heat is removed and no useful work is done. Thus, any change in enthalpy only results in a change in kinetic energy: dH = -d(u²)/2 = -u du, where H here is a per-mass enthalpy (enthalpy per kg).

Also, dH = TdS + VdP. This can be rearranged to read TdS = dH - VdP = -u du - VdP.

We now use conservation of mass to put du in terms of P, V, and T. By conservation of mass, u/V is constant, or d(u/V) = 0. Taking the derivative of this quotient, du/V - u dV/V² = 0. Rearranging, du = u dV/V (no assumptions about entropy here). Since dH = -u du, we find that u² dV/V = -dH = -TdS - VdP. It is now common to say that dS = 0 across the sound wave, and thus find that u² = -V²(dP/dV) at constant S. For an ideal gas, dP/dV at constant S equals -PCp/CvV, so the speed of sound is u = √(PVCp/Cv), with V the volume per unit mass (m³/kg).
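
As a quick check of this result, here's the calculation as a minimal Python sketch. The molar mass and Cp/Cv ratio are standard air values I've assumed, not numbers from the derivation above:

```python
# Isentropic speed of sound, u = sqrt(P*V*Cp/Cv), with V the specific
# volume in m^3/kg. The air properties here are assumed standard values.
import math

R = 8.314          # gas constant, J/mol-K
T = 293.15         # room temperature, K
P = 101325.0       # pressure, Pa (1 atm)
M = 0.029          # molar mass of air, kg/mol (assumed)
gamma = 7.0 / 5.0  # Cp/Cv for a diatomic ideal gas

V = R * T / (M * P)           # specific volume from the ideal gas law
u = math.sqrt(gamma * P * V)  # equivalently sqrt(gamma*R*T/M)
print(f"speed of sound: {u:.0f} m/s")  # prints about 343 m/s
```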

The interesting case comes when we say that ∆S > 0. At this point, I would say that u² = -V(dH/dV) = -VCp(dT/dV) > PVCp/Cv. Unless I've made a mistake (always possible), I find that there is a small, leading, non-adiabatic sound wave that goes ahead of the ordinary sound wave and is experienced only close to the source. It is caused by mechanical energy that degrades to raising T, and this gives rise to more compression than would be expected for iso-entropic waves.

This should have some relevance to headphone design and speaker design since headphones are heard close to the ear, while speakers are heard further away. Meanwhile the recordings are made by microphones right next to the singers or instruments.

Robert E. Buxbaum, August 26, 2014

If hot air rises, why is it cold on mountain-tops?

This is a child's question that's rarely answered to anyone's satisfaction. To answer it well requires college-level science, and by college the child has usually been dissuaded from asking anything scientific that would likely embarrass the teacher, which is to say, from asking most anything. By a good answer, I mean here one that provides both a mathematical, checkable prediction of the temperature you'd expect to find on mountain tops, and one that also gives a feel for why it should be so. I'll try to provide this here, as previously when explaining "why is the sky blue." A word of warning: real science involves mathematics, something that's often left behind, perhaps in an effort to build self-esteem. If I do a poor job, please text me back: "if hot air rises, what's keeping you down?"

As a touchy-feely answer, please note that all materials have internal energy, generally associated with the kinetic plus potential energy of the molecules. It enters whenever a material is heated or has work done on it and, for gases, to good approximation, it equals the heat capacity of the gas times its temperature. For air, this is about 7 cal/mol°K times the temperature in degrees Kelvin. Average air at sea level is taken to be at 1 atm, or 101,325 Pascals, and 15.0°C, or 288.15°K; the internal energy of this air is thus 288.15 x 7 = 2017 cal/mol, about 8440 J/mol. The internal energy of the air will decrease as the air rises, and the temperature drops, for reasons I will explain below. Most diatomic gases have a heat capacity of 7 cal/mol°K, a fact that is only explained by quantum mechanics; if not for quantum mechanics, the heat capacities of diatomic gases would be about 9 cal/mol°K.

Let's consider a volume of this air at standard conditions, and imagine that it is held within a weightless balloon, or plastic bag. As we pull that air up by pulling up the bag, the bag starts to expand because the pressure is lower at high altitude (air pressure is just the weight of the air above). No heat is exchanged with the surrounding air because our air will always be about as warm as its surroundings; or, if you like, you can imagine the weightless balloon prevents it. In either case the molecules lose energy as the bag expands because they always collide with an outwardly moving wall. Alternately, you can say that the air in the bag is doing work on the exterior air (expansion is work) but that we are putting no work into the air, as it takes no work to lift it: the buoyancy of the air in our balloon is always about that of the surrounding air, or so we'll assume for now.

A classic, difficult way to calculate the temperature change with altitude is to calculate the work being done by the air in the rising balloon. Work done is force times distance, w = ∫f dz, and this work should equal the effective cooling, since heat and work are interchangeable. There's an integral sign here to account for the fact that force is proportional to pressure, and the air pressure decreases as the balloon goes up. We now note that w = ∫f dz = -∫P dV because pressure, P, is force per unit area, and volume, V, is area times distance. The minus sign is because the work is being done by the air, not done on the air; it involves a loss of internal energy. Sorry to say, the temperature and pressure of the air keep changing with volume and altitude, so it's hard to solve the integral. But there is a simple approach based on entropy, S.

Les Droites Mountain, in the Alps near the intersection of France, Italy, and Switzerland, is 4000 m tall. The top is generally snow-covered.

I discussed entropy last month, and showed it was a property of state, and further, that for any reversible path, ∆S = (Q/T)rev. That is, the entropy change for any reversible process equals the heat that enters divided by the temperature. Now, we expect the balloon's rise to be reversible, and since we've assumed no heat transfer, Q = 0. We thus expect that the entropy of the air will be the same at all altitudes. Entropy has two parts, a temperature part, Cp ln T2/T1, and a pressure part, -R ln P2/P1. If the total ∆S = 0, these two parts must exactly cancel.

Consider that at 4000m, the height of Les Droites, a mountain in the Mont Blanc range, the typical pressure is 61,660 Pa, about 60.85% of sea level pressure (101325 Pa). If the air were reduced to this pressure at constant temperature (∆S)T = -R ln P2/P1 where R is the gas constant, about 2 cal/mol°K, and P2/P1 = .6085; (∆S)T = -2 ln .6085. Since the total entropy change is zero, this part must equal Cp ln T2/T1 where Cp is the heat capacity of air at constant pressure, about 7 cal/mol°K for all diatomic gases, and T1 and T2 are the temperatures (Kelvin) of the air at sea level and 4000 m. (These equations are derived in most thermodynamics texts. The short version is that the entropy change from compression at constant T equals the work at constant temperature divided by T,  ∫P/TdV=  ∫R/V dV = R ln V2/V1= -R ln P2/P1. Similarly the entropy change at constant pressure = ∫dQ/T where dQ = Cp dT. This component of entropy is thus ∫dQ/T = Cp ∫dT/T = Cp ln T2/T1.) Setting the sum to equal zero, we can say that Cp ln T2/T1 =R ln .6085, or that 

T2 = T1 (0.6085)^(R/Cp)

T2 = T1 (0.6085)^(2/7), where 0.6085 is the pressure ratio at 4000 m, and because, for air and most diatomic gases, R/Cp = 2/7 to very good approximation, matching the prediction from quantum mechanics.

From the above, we calculate T2 = 288.15 x 0.8676 = 250.0°K, or -23.15°C. This is cold enough to keep snow on Les Droites nearly year round, and it's reasonably accurate. The typical temperature at 4000 m is 262.17 K (-11°C); that's 26°C colder than at sea level, and only 12°C warmer than we'd predicted.
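
For those who want to check the arithmetic, here's the whole altitude calculation as a short Python sketch, using only the numbers above:

```python
# Iso-entropic temperature at altitude: Cp ln(T2/T1) = R ln(P2/P1),
# i.e. T2 = T1*(P2/P1)**(R/Cp), with R/Cp = 2/7 for diatomic gases.
T1 = 288.15                   # sea-level temperature, K
p_ratio = 61660.0 / 101325.0  # pressure at 4000 m over sea level, ~0.6085

T2 = T1 * p_ratio ** (2.0 / 7.0)
print(f"predicted T at 4000 m: {T2:.1f} K = {T2 - 273.15:.1f} C")
# prints about 250 K (-23 C); the typical observed value is 262 K (-11 C)
```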

There are three weak assumptions behind the 12°C error in our prediction: (1) that the air that rises is no hotter than the air that does not; (2) that the air isn't heated by radiation from the sun or earth; and (3) that there is no heat exchange with the surrounding air, e.g. from rain or snow formation. The last of these errors is thought to be the largest, but it's still not large enough to cause serious problems.

Snow on Kilimanjaro, Tanzania 2013. If global warming models were true, the ground should be 4°C warmer than 100 years ago, and the air at this altitude, about 7°C (12°F) warmer; and the snow should be gone.

You can use this approach, with different exponents, to estimate the temperature at the center of Jupiter, or at the center of neutron stars. This iso-entropic calculation is the model that's used here, though it's understood that it may be off by a fair percentage. You can also ask questions about global warming: increased CO2 at this level is supposed to cause extreme heating at 4000 m, enough to heat the earth below by 4°C/century or more. As it happens, the temperature and snow cover on Les Droites and other Alpine ski areas have been studied carefully for many decades; they are not warming as best we can tell (here's a discussion). By all rights, Mt Blanc should be Mt Green by now; no one knows why it isn't. The earth too seems to have stopped warming. My theory: clouds.

Robert Buxbaum, May 10, 2014. Science requires you check your theory for internal and external weakness. Here’s why the sky is blue, not green.

Ivanpah’s solar electric worse than trees

Recently, the DoE committed 1.6 billion dollars to the completion of the last two of three solar-natural gas-electric plants on a 10 mi2 site at Lake Ivanpah in California. The site is rated to produce 370 MW of power, in a facility that uses far more land than nuclear power, at a cost significantly higher than nuclear. The 3900 MW Drax plant (UK) cost 1.1 billion dollars, and produces 10 times more power on a much smaller site. Ivanpah needs a lot of land because its generators require 173,500 billboard-size, sun-tracking mirrors to heat boilers atop three 750-foot towers (2 1/2 times the height of the Statue of Liberty). The boilers feed steam to low-pressure, low-efficiency (28%) Siemens turbines. At night, natural gas provides heat to make the steam, but only at the same low efficiency. Siemens makes higher-efficiency turbine plants (59% efficiency), but these cannot be used here because the solar oven temperature is only 900°F (about 480°C), while normal Siemens plants operate at 3650°F (about 2010°C).

The first construction at the Ivanpah thermal solar-natural-gas project; each circle of mirrors extends out to cover about 2 square miles of the 10 mi2 site. When done, it will look like the Crescent Dunes thermal-solar project, but bigger.

So far, the first of the three towers is operational, but it has been producing at only 30% of rated low-efficiency output. These are described as "growing pains." There are also problems with cooked birds, blinded pilots, and the occasional fire from the misaligned death ray; more pains, I guess. There is also the problem of lightning. When hit by lightning, the mirrors shatter into millions of shards of glass over a 30-foot radius, according to Argus, the mirror-cleaning company. This presents a less-than-attractive environmental impact.

As an exercise, I thought I'd compare this site's electric output to the amount one could generate using a wood-burning boiler fed by trees growing on a similar-sized (10 sq. mile) site. Trees are cheap, but only about 10% efficient at converting solar power to chemical energy, so you might imagine that trees could not match the power of the Ivanpah plant. But dry wood burns hot, at 1100-1500°C, so the efficiency of a wood-powered steam turbine will be higher, about 45%.

About 820 MW of sunlight falls on every 1 mi2 plot, or 8200 MW for the Ivanpah site. If trees convert 10% of this to chemical energy, and we convert 45% of that to electricity, we find the site would generate 369 MW of electric power, almost exactly the output Ivanpah is rated for. Trees are far cheaper than mirrors, electricity from wood burning typically costs about 4¢/kWh, and the environmental impact of tree farming is likely to be less than that of the solar mirrors mentioned above.
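
The arithmetic, gathered into a few lines of Python (all the numbers are the estimates above):

```python
# Wood-fired electric output from a 10 mi^2 tree farm, per the estimates above.
sun_per_mi2 = 820.0  # MW of sunlight falling on each square mile
site_mi2 = 10.0      # size of the Ivanpah site
tree_eff = 0.10      # solar-to-chemical efficiency of trees
turbine_eff = 0.45   # steam-turbine efficiency at wood-fire temperatures

power_mw = sun_per_mi2 * site_mi2 * tree_eff * turbine_eff
print(f"{power_mw:.0f} MW")  # prints 369 MW, about Ivanpah's 370 MW rating
```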

There is another advantage to the high temperature of a wood fire. The use of high-temperature turbines means that any power made at night with natural gas will be produced at higher efficiency. The Ivanpah turbines output at low temperature and low efficiency when burning natural gas (at night), and thus produce half the power of a normal Siemens plant for every BTU of gas. Because of this, it seems the Ivanpah plant may use as much natural gas to make its 370 MW during a 12-hour night as a higher-efficiency system would use operating 24 hours, day and night. The additional generation from solar might thus be zero.

If you think the problems here are with the particular design, I should note that the Ivanpah solar project is just one of several that our Obama-government is funding, and none are doing particularly well. As another example, the $1.45 B solar project on farmland near Gila Bend, Arizona is rated to produce 35 MW, about 1/10 of the Ivanpah project at 2/3 the cost. It was built in 2010 and so far has not produced any power.

Robert E. Buxbaum, March 12, 2014. I’ve tried using wood to make green gasoline. No luck so far. And I’ve come to doubt the likelihood that we can stop global warming.

Nuclear fusion

I got my PhD at Princeton University 33 years ago (1981) working on the engineering of nuclear fusion reactors, and I thought I'd use this blog to rethink the issues. I find I'm still of the opinion that developing fusion is important, as it seems the best long-range power option. Civilization will still need significant electric power 300 to 3000 years from now, it seems, when most other fuel sources are gone. Fusion is also one of the few options for long-range space exploration, needed if we ever decide to send colonies to Alpha Centauri or Saturn. I thought fusion would be ready by now, but it is not, and commercial use seems unlikely for the next ten years at least: an indication of the difficulties involved, and of a certain lack of urgency.

Oil, gas, and uranium didn't run out like we'd predicted in the mid 70s. Instead, population growth slowed, new supplies were found, and better methods were developed to recover and use them. Shale oil and fracking unlocked hydrocarbons we thought were unusable, and nuclear fission reactors got better, safer, and more efficient. At the same time, the more we studied, the clearer it became that fusion's technical problems are much harder to tame than uranium fission's.

Uranium fission was, and is, frighteningly simple; far simpler than even the most basic fusion reactor. The first nuclear fission reactor (1942) involved nothing more than uranium pellets in a pile of carbon bricks stacked in a converted squash court at the University of Chicago. No outside effort was needed to get the large, unstable uranium atoms to split into smaller, more stable ones. Water circulating through the pile removed the heat released, and control was maintained by people lifting and lowering cadmium control rods while standing on the pile.

A fusion reactor requires high temperature or energy to make anything happen. Fusion energy is produced by combining small, heavy-hydrogen atoms into helium, a bigger, more stable atom; see the figure below. To do this reaction you need to operate at the equivalent of about 500,000,000°C, and containing it requires (typically) a magnetic bottle, something far more complex than a pile of graphite bricks. The reward is smaller too: "only" about 1/13th as much energy per event as fission. We knew the magnetic bottles were going to be tricky, e.g. there was no obvious heat-transfer or control method, but fusion seemed important enough, and the problems seemed manageable enough, that fusion power seemed worth pursuing, with just enough difficulties to make it a challenge.

Basic fusion reaction: deuterium + tritium react to give helium, a neutron and energy.

The plan at Princeton, and most everywhere, was to use a TOKAMAK, a doughnut-shaped reactor like the one shown below, but roughly twice as big; TOKAMAK is a Russian acronym. The doughnut served as one side of an enormous transformer. Hydrogen fuel was ionized into a plasma (a neutral soup of protons and electrons) and heated to 300,000,000°C by a current in the TOKAMAK generated by varying the current in the other side of the transformer. Plasma containment was provided by enormous magnets on the top and bottom, and by ring-shaped magnets arranged around the torus.

As development went on, we found we kept needing bigger and bigger doughnuts and stronger and stronger magnets in an effort to balance heat loss against fusion heating. The number density of hydrogen atoms per volume, n, is proportional to the magnetic strength. This is important because the fusion heat rate per volume is proportional to n squared, n², while heat loss is proportional to n divided by the residence time, something we called tau, τ. The main heat loss was from the hot plasma reaching the reactor surface. Because of this, a heat-balance ratio, heat in divided by heat out, was seen to be important, and it was more-or-less proportional to nτ. As the target temperatures increased, we found we needed larger and larger nτ to make a positive heat balance, and this translated to ever-larger reactors and ever-stronger magnetic fields. Even here there was a limit, 1 billion Kelvin, a thermodynamic temperature where the fusion reaction goes backward and no energy is produced. The Princeton design was huge, with super-strong super magnets, and was operated at 300 million °C, near the top of the reaction curve. If the temperature drifted much above or below this, the fire would go out. There was no room for error, and relatively little energy output per volume compared to fission.

Fusion reaction options and reaction rates.

The most likely reaction involved deuterium and tritium, referred to as D and T. This is the reaction of the two heavy isotopes of hydrogen shown in the figure above, and the same reaction used in hydrogen bombs, a point we rarely made to the public. For each reaction, D + T -> He + n, you get 17.6 million electron volts (17.6 MeV). This is 17.6 million times the energy an electron picks up moving across one volt, but only 1/13 the energy of a fission reaction. By comparison, the energy of water-forming, H2 + 1/2 O2 -> H2O, is the equivalent of two electrons moving over 1.2 volts, or 2.4 electron volts (eV), some 7 million times less than fusion.

The Princeton design involved reacting 40 gm/hr of heavy hydrogen to produce 8 mol/hr of helium and 4000 MW of heat. The heat was to be converted to electricity at 38% efficiency using a topping cycle, a modern (and relatively untried) design. Of the 1500 MW of electricity that was supposed to be produced, all but about 400 MW was to be delivered to the power grid, if everything worked right. Sorry to say, the value of the electricity did not rise anywhere near as fast as the cost of the reactor and turbines. Another problem: 1100 MW was more than could be easily absorbed by any electrical grid. The output was high and steady, and could not be easily adjusted to match fluctuating customer demand. By contrast, a coal plant's or fuel cell's output can be adjusted easily (and a nuclear plant's with a little more difficulty).

Because of the need for heat balance, it turned out that about 9% of the hydrogen had to be burnt per pass through the reactor. The heat lost per mol by conduction to the wall was, to good approximation, the heat capacity of each mol of hydrogen ions, 82 J/°C mol, times the temperature of the ions, 300 million °C, divided by the containment time, τ. The Princeton design was supposed to have a containment of about 4 seconds, giving a conduction loss of 6.2 GW per mol of plasma. This must be matched by the heat of reaction that stays in the plasma: 17.6 MeV times Faraday's constant, 96,500 C/mol, divided by the 4-second containment (= 430 GW per mol reacted), divided by 5, since only 1/5 of the reaction energy remains in the plasma (= 86 GW/mol); the other 4/5 leaves with the neutron. Balancing conduction alone thus requires that about 7% of the hydrogen react per pass; there were also heat losses from radiation, pushing the number to the 9% mentioned. Burn much more or less of the hydrogen than this and you had problems. The only other solution was to increase τ beyond 4 seconds, but this meant ever-bigger reactors.
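
Here is that heat balance as a Python sketch, using the figures above (the 82 J/°C·mol plasma heat capacity and the 1/5 plasma-heating fraction are this post's numbers, not standard constants):

```python
# Fusion heat balance for the Princeton TOKAMAK design, per the post's numbers.
Cp_ion = 82.0     # J/C-mol, heat capacity of the plasma ions (post's figure)
T_ion = 3.0e8     # ion temperature, C
tau = 4.0         # containment time, s
F = 96485.0       # Faraday's constant, C/mol
E_DT_eV = 17.6e6  # energy per D-T reaction, eV

loss = Cp_ion * T_ion / tau     # conduction loss, W per mol of plasma
gain = E_DT_eV * F / tau / 5.0  # heating, W per mol reacted; only 1/5 of
                                # the reaction energy stays in the plasma
print(f"conduction loss: {loss / 1e9:.2f} GW/mol")  # 6.15; the text rounds to 6.2
print(f"plasma heating:  {gain / 1e9:.0f} GW/mol")  # about 85; the text says 86
print(f"burn fraction:   {loss / gain:.1%}")        # about 7%, before radiation losses
```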

There was also a materials-handling issue: to get enough fuel hydrogen into the center of the reactor, quite a lot of radioactive gas had to be handled and extracted from the plasma chamber. The feed was to be frozen into tiny spheres of near-solid hydrogen and injected into the reactor at supersonic velocity; any slower and the spheres would evaporate before reaching the center. As 40 grams per hour was 9% of the feed, it became clear that we had to be ready to produce and inject about 1 pound per hour of tiny spheres. These "snowballs-in-hell" had to be small so they didn't dampen the fire. The vacuum system had to be big enough to handle the pound or so per hour of unburned hydrogen and ash, keeping the pressure near total vacuum. You then had to purify the hydrogen from the helium ash and remake the little spheres to be fed back to the reactor. There were no easy engineering problems here, but I found them enjoyable enough. With a colleague, I came up with a cute, efficient high-vacuum pump and recycling system, and published it here.

Yet another engineering challenge concerned finding a material for the first wall, the inner wall of the doughnut facing the plasma. Of the 4000 MW of heat energy produced, all the conduction and radiation heat, about 1000 MW, is deposited in the first wall and has to be conducted away. Conducting this heat means that the wall must carry an enormous coolant flow and must withstand an enormous amount of thermal stress. One possible approach was to use a liquid wall, but I've recently come up with a rather nicer solid-wall solution (I think) and have filed a patent; more on that later, perhaps after/if the patent is accepted. Another engineering challenge was making T, tritium, for the D-T reaction. Tritium is not found in nature; it has to be made from the neutron created in the reaction plus lithium, in a breeder blanket: Li + n -> He + T. I examined all possible options for extracting this tritium from the lithium at low concentrations as part of my PhD thesis, and eventually found a nice solution. The education I got in the process is used in my REB Research hydrogen engineering business.

Man inside the fusion reactor doughnut at ITER. He’d better leave before the 8,000,000°C plasma turns on.

Because of its complexity and all these engineering challenges, fusion power never reached the maturity of fission power; and then Three Mile Island happened and ruined the enthusiasm for all things nuclear. There were some claims that fusion would be safer than fission, but because of the complexity, and because of improvements in fission, I am not convinced that fusion would ever be even as safe. And the long-term need keeps moving out: we keep finding more uranium, and we've developed breeder reactors and a thorium cycle, technologies that make it very unlikely we will run out of fission material any time soon.

The main, near term advantage I see for fusion over fission is that there are fewer radioactive products, see comparison.  A secondary advantage is neutrons. Fusion reactors make excess neutrons that can be used to make tritium, or other unusual elements. A need for one of these could favor the development of fusion power. And finally, there’s the long-term need: space exploration, or basic power when we run out of coal, uranium, and thorium. Fine advantages but unlikely to be important for a hundred years.

Robert E. Buxbaum, March 1, 2014. Here’s a post on land use, on the aesthetics of engineering design, and on the health risks of nuclear power. The sun’s nuclear fusion reactor is unstable too — one possible source of the chaotic behavior of the climate. Here’s a control joke.

Ocean levels down from 3000 years ago; up from 20,000 BC

In 2006 Al Gore claimed that industry was causing 2-5°C of global warming per century, and that this, in turn, would cause the oceans to rise by 8 m by 2100. Despite a record cold snap, and record ice levels in the Antarctic, the US this week banned all incandescent light bulbs of 40 W and over in an effort to stop the tragedy. This was a bad move, in my opinion, for a variety of reasons, not least because it seems the preferred replacement, compact fluorescents, produce more pollution than incandescents when you include disposal of the mercury and heavy metals they contain. And then there is the weak connection between US industry and global warming.

From the geologic record, we know that 2-5° higher temperatures have been seen without major industrial outputs of pollution. These temperatures do produce the sea level rises that Al Gore warns about. Temperatures and sea levels were higher 3200 years ago (the Trojan war period), without any significant technology. Temperatures and sea levels were also higher 1900 years ago during the Roman warming. In those days Pevensey Castle (England), shown below, was surrounded by water.

During Roman times the world was warmer, and Pevensey Castle (right) was surrounded by water at high tide. If Al Gore is right about global warming, it will be surrounded by water again by 2100.

From a plot of sea level and global temperature, below, we see that during cooler periods the sea was much shallower than today: 140 m shallower 20,000 years ago, at the end of the last ice age, for example. In those days, people could walk from Asia to Alaska. Climate, like weather, appears to be cyclically chaotic. I don't think the last ice age ended because of industry, but it is possible that industry might help the earth warm by 2-5°C by 2100, as Gore predicts. That would raise the sea levels, assuming there is no new ice age.

Global temperatures and ocean levels rise and sink together, changing by a lot over thousands of years.

While I doubt there is much we could do to stop the next ice age (it is very hard to change a chaotic cycle), trying to stop global cooling seems more worthwhile than trying to stop warming. We could survive a 2 m rise in the seas, e.g. by building dykes, but 2° of cooling would be disastrous. It would come with a drastic reduction in crops, as during the famine year of 1814. And if the drop continued to a new ice age, that would be much worse. The last ice age included mile-high glaciers that extended over all of Canada and reached to New York. Only the polar bear and saber-toothed tiger did well (here's a Canada joke, and my saber-toothed tiger sculpture).

The good news is that the current global temperature models appear to be wrong, or highly over-estimated. Average global temperatures have not changed in the last 16 years, though the Chinese keep polluting the air (for some reason, Gore doesn't mind Chinese pollution). It is true that arctic ice extent is low, but antarctic ice is at record high levels. Perhaps it's time to do nothing. While I don't want more air pollution, I'd certainly re-allow US incandescent light bulbs. In cases where you don't know otherwise, perhaps the wisest course is to do nothing.

Robert Buxbaum, January 8, 2014

Fractal power laws and radioactive waste decay

Here's a fairly simple fractal model for nuclear reactor decay heat. It's based on one I came up with over the past few weeks for dealing with the statistics of crime, fires, etc., looking at the tail where Poisson statistics fail. Anyway, I find the analysis method works pretty well for looking at nuclear power problems and for comparing fission vs. fusion.

If all nuclear waste had only one isotope and decayed only to non-radioactive products, the decay rate would be exponential. Thus, one could plot the log of the decay-heat rate linearly against linear time (a semi-log plot). But nuclear waste generally consists of many radioactive components, and they typically decay into other radioactive isotopes (daughters) that then decay to yet others (grand-daughters?). Because the half-lives can vary by a lot, one would like to shrink the time axis, e.g. by use of a log-log plot, perhaps including a curve for each isotope and every decay mode. I find these plots hard to produce, and hard to extrapolate. What I'd like to propose instead is a fractal variation of the original semi-log plot: a plot of the log of the heat rate against a fractal time, time to the 1/4 power in this case. The plot is similar to a fractal decay model that I'd developed for crimes and fires a few weeks ago.

A plausible justification for this fractal semi-log plot is to observe that the half-life of each daughter isotope relates to that of its parent. I picked time to the 1/4 power because it seemed to work pretty well for fuel rods (below), especially after 5 years of cool-down. Note that the slope is identical for rods consumed to 20 or 35 MW-days per kg of uranium.

After-heat of nuclear fuel rods used to generate 20 kW/kg U; top graph 35 MW-days/kg U, bottom graph 20 MW-days/kg U. Data from US NRC Regulatory Guide 3.54, Spent Fuel Heat Generation in an Independent Spent Fuel Storage Installation, rev. 1, 1999, http://www.nrc.gov/reading-rm/doc-collections/reg-guides/fuels-materials/rg/03-054/. A typical reactor has 200,000 kg of uranium.

This plot also works if there is only one type of radio-isotope (tritium, say), but in that case the fractal power should be 1. (My guess is that the fractal time dependence for crime is 1/2, though I lack real crime statistics to check it against.) Unless I find that someone else has come up with this sort of plot or analysis before, I'll call it after myself: a Buxbaum-Mandelbrot plot. Why not?
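
Here's how one might draw such a plot; a minimal matplotlib sketch in which the decay-heat numbers are made-up placeholders (real values would come from NRC Regulatory Guide 3.54, cited above):

```python
# Sketch of a Buxbaum-Mandelbrot plot: log of decay-heat rate against
# time to the 1/4 power. The heat values below are illustrative only.
import numpy as np
import matplotlib.pyplot as plt

years = np.array([1.0, 2, 5, 10, 20, 50, 100])       # cool-down time
heat = np.array([5.0, 3, 1.5, 0.9, 0.5, 0.22, 0.1])  # W per kg U, placeholders

plt.semilogy(years ** 0.25, heat, "o-")  # fractal time axis: t^(1/4)
plt.xlabel("time^(1/4), in years^(1/4)")
plt.ylabel("decay heat, W per kg U")
plt.title("Fractal semi-log plot of after-heat")
plt.show()
# If the fractal model fits, the points fall near a straight line.
```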

Nuclear power is attractive world-wide because it is a lot more energy dense than any normal fuel; at 35 MW-days per kg, 1 kg will power an average US home for 8 years. By comparison, even with fuel cell technology, 1 kg of hydrogen can only power a US home for 6 hours. While hydrogen is clean, it is often made from coal, and (at least in theory*) it should be easier to handle and store 1 kg of spent uranium than to deal with many tons of coal-smoke produced to make 35 MW-days of electricity.

Better than uranium fission, in theory*, is nuclear fusion. A fission-based nuclear reactor big enough to power half of Detroit would contain some 200,000 kg of uranium and, assuming a 5-year cycle, the after-heat would be about 1000 kW (1 MW) one year after removal.

After-heat of a 4000 MWth fusion reactor built from Nb-1%Zr (niobium-1% zirconium, a fairly common engineering material for high-temperature use); from the UWMAC III Report. The after-heat is far less than with uranium fission.

For comparison, and as another demonstration of the Buxbaum-Mandelbrot plot, I show above the after-heat of a fusion reactor of similar power. It drops from 1 MW at 3 weeks to only 100 W after 1 year. Nuclear fusion is still some years off, so for now fission is the only nuclear option for clean electrical generation.

This plot was really designed to look at the statistics of crime, fires, and the need for servers or checkout people. I'd predicted that a plot of the log of the number of people needed to fight crime, fires, etc., would be a straight line against a fractal power of the number of hours per year that this number of people is needed. At least, that was the theory.*

Dr. R.E. Buxbaum, January 2, 2014. *A final, final thought about theory from Yogi Berra: “In theory, it matches reality.”

Global warming takes a 15 year rest

I have long thought that global climate change is chaotic, rather than steadily warming. Global temperatures show self-similar (fractal) variation with time, and long-term cycles; they also show strange attractors, generally stable states including ice ages and El Niño events, with sudden resets of the global temperature pattern: classic symptoms of chaos. The standard models of global warming do not predict El Niño and other chaotic events, and thus are fundamentally wrong. The models assume that a steady amount of sun heat reaches the earth, while a decreasing amount leaves, held in by increasing amounts of man-produced CO2 (carbon dioxide) in the atmosphere. These models are "tweaked" to match the observed temperature to the CO2 content of the atmosphere from 1930 to about 2004. In the movie "An Inconvenient Truth" Al Gore uses these models to predict massive arctic melting leading to a 20-foot rise in sea levels by 2100. To the embarrassment of Al Gore, and the relief of everyone else, though CO2 concentrations continue to rise, global warming took a 15-year break starting shortly before the movie came out, and the sea level is more-or-less where it was, except for temporary changes during periodic El Niño cycles.

Fifteen years of global temperature variation, to June 2013: four El Niño cycles, but no sign of a long-term change, though most models predict 0.25°C/decade of warming.

Hans von Storch, a German expert on global warming, told the German newspaper, der Spiegel: “We’re facing a puzzle. Recent CO2 emissions have actually risen even more steeply than we feared. As a result, according to most climate models, we should have seen temperatures rise by around 0.25 degrees Celsius (0.45 degrees Fahrenheit) over the past 10 years. That hasn’t happened. [Further], according to the models, the Mediterranean region will grow drier all year round. At the moment, however, there is actually more rain there in the fall months than there used to be. We will need to observe further developments closely in the coming years.”

Aside from the lack of warming for the last 15 years, von Storch mentions that there has been no increase in severe weather. You might find that surprising given the news reports; still it’s so. Storms are caused by temperature and humidity differences, and these have not changed. (Click here to see why tornadoes lift stuff up).

At this point, I should mention that the majority of global warming experts do not see a problem with the 15-year pause. Global temperatures have been rising unsteadily since 1900, and even von Storch expects the trend to continue, sooner or later. I do see a problem, though, highlighted by the various chaotic changes that are left out of the models. A source of the chaos, and a fundamental problem with the models, could be how they treat the effects of water vapor. When uncondensed, water vapor acts as a very strong thermal blanket; it allows the sun's light in, but prevents the heat energy from radiating out. CO2 behaves the same way, but more weakly (there's less of it).

More water vapor enters the air as the planet warms, and this should amplify the CO2-caused run-away heating, except for one thing. Every now and again, the water vapor condenses into clouds, and then (sometimes) falls as rain or snow. Clouds and snow reflect the incoming sunlight, and this leads to global cooling. Rain and snow drive water vapor from the air, and this leads to accelerated global cooling. To the extent that clouds are chaotic, and out of man's control, the global climate should be chaotic too. So far, no one has a very good global model for cloud formation, or for rain and snowfall, but it's well accepted that these phenomena are chaotic and self-similar (each part of a cloud looks like the whole). Clouds may also admit "the butterfly effect," where a butterfly in China can cause a hurricane in New Jersey if it flaps at the right time.

For those wishing to examine the longer-range view, here's a thermal history of central England since 1659, Oliver Cromwell's time. At this scale, each peak is an El Niño. There is a lot of chaotic noise, but you can also notice either a 280-year periodicity (last peak around 1720), or a 100-year temperature rise beginning about 1900.

Global warming; Central England Since 1659; From http://www.climate4you.com

It is not clear that the cycle is human-caused, but my hope is that it is. My sense is that the last 100 years of global warming has been a good thing; for agriculture and trade it's far better than an ice age. If we caused it with our CO2, we could continue to use CO2 to just balance the natural tendency toward another ice age. If it's chaotic, as I suspect, such optimism is probably misplaced. It is very hard to get a chaotic system out of its behavior; the evidence is that we've never moved an El Niño out of its normal period of every 3 to 7 years (expect another this year or next). If so, we should expect another ice age within the next few centuries.

Global temperatures measured from the antarctic ice, showing four ice ages: stable, cyclic chaos and self-similarity.

Just as clouds cool the earth, you can cool your building too by painting the roof white. If you are interested in more weather-related posts, here’s why the sky is blue on earth, and why the sky on Mars is yellow.

Robert E. Buxbaum July 27, 2013 (mostly my business makes hydrogen generators and I consult on hydrogen).

Paint your factory roof white

Standing on the flat roof of my lab / factory building, I notice that virtually all of my neighbors’ roofs are black, covered by tar or bitumen. My roof was black too until three weeks ago; the roof was too hot to touch when I’d gone up to patch a leak. That’s not quite egg-frying hot, but I came to believe my repair would last longer if the roof stayed cooler. So, after sealing the leak with tar and bitumen, we added an aluminized over-layer from Ace hardware. The roof is cooler now than before, and I notice a major drop in air conditioner load and use.

My analysis of our roof coating follows; it's for Detroit, but you can modify it for your location. Sunlight hits the earth carrying 1300 W/m2. Some 300 W/m2 scatters as blue light (for why so much scatters, and why the sky is blue, see here). The rest, 1000 W/m2 or 308 Btu/ft2hr, comes through or reflects off clouds on a cloudy day and hits buildings at an angle determined by latitude, time of day, and season of the year.

Detroit is at 42° North latitude so my roof shows an angle of 42° to the sun at noon in mid spring. In summer, the angle is 20°, and in winter about 63°. The sun sinks lower on the horizon through the day, e.g. at two hours before or after noon in mid spring the angle is 51°. On a clear day, with a perfectly black roof, the heating is 308 Btu/ft2hr times the cosine of the angle.

To calculate our average roof heating, I integrated this heat over the full day’s angles using Euler’s method, and included the scatter from clouds plus an absorption factor for the blackness of the roof. The figure below shows the cloud cover for Detroit.

Average cloud cover for Detroit, month by month; the black line is the median cloud cover. On January 1, it is strongly overcast 60% of the time, and hardly ever clear; the median is about 98%. From http://weatherspark.com/averages/30042/Detroit-Michigan-United-States

Based on this, and an assumed light-absorption factor of σ = 0.9 for tar and σ = 0.2 after aluminizing, I calculate an average of 105 Btu/ft2hr of heating during the summer for the original black roof, and 23 Btu/ft2hr after aluminizing. Our roof is still warm, but it's no longer hot. While most of the absorbed heat leaves the roof by black-body radiation or convection, enough enters my lab through 6" of insulation to cause me to use a lot of air conditioning. I calculate the heat entering this way from the roof temperature. In the summer, an aluminum coat is a clear winner.
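
For the curious, here's roughly how that Euler integration goes, as a Python sketch. The solar geometry is textbook; the 308 Btu/ft2hr flux and the absorption factors are from the paragraphs above, and I've ignored the cloud correction, so the numbers won't exactly match the 105 and 23 quoted, but they land close:

```python
# Daily-average heat absorbed by a flat roof in Detroit, by Euler's method.
# Clear-sky flux and absorptivities are the post's figures; clouds ignored.
import math

LAT = math.radians(42.0)  # Detroit latitude
FLUX = 308.0              # clear-sky solar flux, Btu/ft2-hr

def avg_roof_heating(declination_deg, absorptivity, dt_hr=0.1):
    """Average absorbed heat over a full 24 hours, in Btu/ft2-hr."""
    decl = math.radians(declination_deg)
    total, t = 0.0, 0.0
    while t < 24.0:  # step through the day
        hour_angle = math.radians(15.0 * (t - 12.0))  # sun moves 15 deg/hr
        cos_zenith = (math.sin(LAT) * math.sin(decl)
                      + math.cos(LAT) * math.cos(decl) * math.cos(hour_angle))
        if cos_zenith > 0:  # sun above the horizon
            total += FLUX * cos_zenith * absorptivity * dt_hr
        t += dt_hr
    return total / 24.0

# Summer solstice declination is about 23.5 degrees.
print(f"black roof, summer:      {avg_roof_heating(23.5, 0.9):.0f} Btu/ft2-hr")
print(f"aluminized roof, summer: {avg_roof_heating(23.5, 0.2):.0f} Btu/ft2-hr")
```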

High and Low Temperatures For Detroit, Month by Month. From http://weatherspark.com/averages/30042/Detroit-Michigan-United-States

Detroit has a cold winter too, and these are months where I’d benefit from solar heat. I find it’s so cloudy in winter that, even with a black roof, I got less than 5 Btu/ft2hr. Aluminizing reduced this heat to 1.2 Btu/ft2hr, but it also reduces the black-body radiation leaving at night. I should find that I use less heat in winter, but perhaps more in late spring and early fall. I won’t know the details till next year, but that’s the calculation.

The REB Research laboratory is located at 12851 Capital St., Oak Park, MI 48237. We specialize in hydrogen separations and membrane reactors. By Dr. Robert Buxbaum, June 16, 2013

What's the quality of your home insulation?

By Dr. Robert E. Buxbaum, June 3, 2013

It’s common to have companies call during dinner offering to blow extra insulation into the walls and attic of your home. Those who’ve added this insulation find a small decrease in their heating and cooling bills, but generally wonder if they got their money’s worth, or perhaps if they need yet-more insulation to get the full benefit. Here’s a simple approach to comparing your home heat bill to the ideal your home can reasonably reach.

The rate of heat transfer through a wall, Qw, is proportional to the temperature difference, ∆T, to the area, A, and to the average thermal conductivity of the wall, k; it is inversely proportional to the wall thickness, ∂;

Qw = ∆T A k /∂.

For home insulation, we re-write this as Qw = ∆T A/Rw, where Rw = ∂/k is the thermal resistance of the wall, measured (in the US) in ft2·hr·°F/BTU.

Let's assume that your home's outer wall is nominally 6" thick (0.5 foot). With the best available insulation, perfectly applied, the heat loss will be somewhat higher than if the space were filled with still air, k = 0.024 BTU/ft·hr·°F, a result based on molecular dynamics. For a 6" wall, the R value will thus always be less than 0.5/0.024 = 20.8 ft2·hr·°F/BTU. It will be much less if there are holes or air infiltration, but for practical construction with joists and sills, an Rw value of 15 or 16 is probably about as good as you'll get with 6" walls.

To show you how to evaluate your home, I'll now calculate the R value of my walls based on the size of my ranch-style home (in Michigan) and our heat bills. I'll first do this in a simplified calculation, ignoring windows, and will then repeat the calculation including the windows. Windows are found to be very important; I strongly suggest window curtains to save heat and air conditioning.

The outer wall of my home is 190 feet long, and extends about 11 feet above ground to the roof. Multiplying these dimensions gives an outer wall area of 2090 ft2. I could now add the roof area, 1750 ft2 (it's the same as the footprint of the house), but since the roof is more heavily insulated than the walls, I'll estimate that it behaves like 1410 ft2 of normal wall. I thus calculate 3500 ft2 of effective above-ground area for heat loss. This is the area that companies keep offering to insulate.

Between December 2011 and February 2012, our home was about 72°F inside, and the outside temperature was about 28°F. Thus, the average temperature difference between the inside and outside was about 45°F; I estimate the rate of heat loss from the above-ground part of my house, QU = 3500 * 45/Rw = 157,500/Rw.

Our house has a basement too, something that no one has yet offered to insulate. While the below-ground temperature gradient is smaller, it’s less-well insulated. Our basement walls are cinderblock covered with 2″ of styrofoam plus wall-board. Our basement floor is even less well insulated: it’s just cement poured on pea-gravel. I estimate the below-ground R value is no more than 1/2 of whatever the above ground value is; thus, for calculating QB, I’ll assume a resistance of Rw/2.

The below-ground area equals the square footage of our house, 1750 ft2, but the walls extend down only about 5 feet below ground. The basement walls are thus 950 ft2 in area (5 x 190 = 950). Adding the 1750 ft2 floor area, we find a total below-ground area of 2700 ft2.

The temperature difference between the basement and the wet dirt is only about 25°F in the winter. Assuming the thermal resistance is Rw/2, I estimate the rate of heat loss from the basement, QB = 2700*25*(2/Rw) = 135,000/Rw. It appears that nearly as much heat leaves through the basement as above ground!

Between December and February 2012, our home used an average of 597 cubic feet of gas per day, or 25,497 BTU/hour (heat value = 1025 BTU/ft3). QU + QB = 292,500/Rw. Ignoring windows, I estimate the Rw of my home = 292,500/25,497 = 11.5.

We now add the windows. Our house has 230 ft2 of windows, most covered by curtains and/or plastic. Because of the curtains and plastic, they would have an R value of 3 except that black-body radiation tends to be very significant. I estimate our windows have an R value of 1.5; the heat loss through the windows is thus QW= 230*45/1.5 = 6900 BTU/hr, about 27% of the total. The R value for our walls is now re-estimated to be 292,500/(25497-6900) = 15.7; this is about as good as I can expect given the fixed thickness of our walls and the fact that I can not easily get an insulation conductivity lower than still air. I thus find that there will be little or no benefit to adding more above-ground wall insulation to my house.
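
Here is the whole estimate gathered into one short Python sketch, using the house numbers above, so you can swap in your own dimensions and gas bill:

```python
# Wall R-value estimated from a winter gas bill, following the text above.
gas_ft3_per_day = 597.0  # average winter gas use
heat_value = 1025.0      # BTU per ft3 of natural gas
Q_total = gas_ft3_per_day * heat_value / 24.0  # about 25,500 BTU/hr

A_above, dT_above = 3500.0, 45.0  # above-ground area (ft2) and delta-T (F)
A_below, dT_below = 2700.0, 25.0  # below-ground area; resistance taken as Rw/2
A_win, R_win = 230.0, 1.5         # window area and window R value

Q_windows = A_win * dT_above / R_win  # about 6,900 BTU/hr through the windows
heat_factor = A_above * dT_above + 2.0 * A_below * dT_below  # = 292,500
R_wall = heat_factor / (Q_total - Q_windows)
print(f"estimated wall R value: {R_wall:.1f}")  # prints about 15.7
```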

To save heat energy, I might want to coat our windows in partially reflective plastic or draw the curtains to follow the sun. Also, since nearly half the heat left from the basement, I may want to lay a thicker carpet, or lay a reflective under-layer (a space blanket) beneath the carpet.

To improve on the above estimate, I could consider our furnace efficiency; it is perhaps only 85-90% efficient, with still-warm air leaving up the chimney. There is also some heat lost through the door being opened, and through hot water being poured down the drain. As a first guess, these heat losses are balanced by the heat added by electric usage, by the body-heat of people in the house, and by solar radiation that entered through the windows (not much for Michigan in winter). I still see no reason to add more above-ground insulation. Now that I’ve analyzed my home, it’s time for you to analyze yours.

Chaos, Stocks, and Global Warming

Two weeks ago, I discussed black-body radiation and showed how you calculate the rate of radiative heat transfer from any object. Based on this, I claimed that basal metabolism (the rate of calorie burning for people at rest) was really proportional to surface area, not weight as in most charts. I also claimed that it should be near-impossible to lose weight through exercise, and went on to explain why we cover the hot parts of our hydrogen purifiers and hydrogen generators in aluminum foil.

I’d previously discussed chaos and posted a chart of the earth’s temperature over the last 600,000 years. I’d now like to combine these discussions to give some personal (R. E. Buxbaum) thoughts on global warming.

Black-body radiation differs from normal heat transfer in that the rate is proportional to emissivity and is very sensitive to temperature. We can expect the rate of heat transfer from the sun to earth will follow these rules, and that the rate from the earth will behave similarly.

That the earth is getting warmer is taken as proof that the carbon dioxide we produce is changing the earth's emissivity, so that we absorb more of the sun's radiation while emitting less (relatively), but things are not so simple. Carbon dioxide should, indeed, promote terrestrial heating, but a hotter earth should have more clouds, and these clouds should reflect solar radiation while allowing the earth's heat to radiate into space. Also, this model would suggest slow, gradual heating beginning, perhaps, in 1850, but the earth's climate is chaotic, with a fractal temperature rise that has been going on for the last 15,000 years (see figure).

Recent temperature variation as measured from the Greenland ice. Like the stock market, it shows aspects of chaos; a previous post had the temperature variation over the past 600,000 years.

Over a larger time scale, the earth's temperature looks chaotic and cyclical (see the graph of global temperature in this post), with ice ages every 120,000 years and chaotic, fractal variation at time spans of 100-1000 years. The earth's temperature is self-similar too; that is, its variation looks the same if one scales time and temperature. This is something that is seen whenever a system possesses feedback and complexity. It's seen also in the economy (below), a system with complexity and feedback.

Manufacturing profit is typically chaotic (something that makes it exciting), with cold spells very similar to the ice ages seen above.

The economy of any city is complex, and the world economy even more so. No one part changes independent of the others, and as a result we can expect to see chaotic, self-similar stock and commodity prices for the foreseeable future. As with global temperature, the economic data over a 10-year scale looks like economic data over a 100-year scale. Surprisingly, the economic data also looks similar to the earth-temperature data over a 100-year or 1000-year scale. It takes a strange person to guess either consistently, as both are chaotic and fractal.

It takes a rather chaotic person to really enjoy stock trading (Seen here, Gomez Addams of the Addams Family TV show).

Clouds and ice play roles in the earth's feedback mechanisms. Clouds tend to increase when more of the sun's light heats the oceans, but the more clouds, the less heat gets through to the oceans; thus clouds tend to stabilize our temperature. The effect of ice is to destabilize: the more heat that gets to the ice, the more melts, and the less of the sun's heat is reflected to space. There is time-delay too, caused by the melting flow of ice and by ocean currents, as driven by temperature differences among the ocean layers and (it seems) by salinity. The net result: instability and chaos.

The sun has chaotic weather too. The rate of the solar reactions that heat the earth increases with temperature and density in the sun's interior: when a volume of the sun gets hotter, the reaction rates pick up, making the volume yet hotter. The temperature keeps rising, and the heat radiated to the earth keeps increasing, until a density current develops in the sun. The hot area is then cooled by moving to the surface, and the rate of solar output decreases. It is quite likely that some part of our global temperature rise derives from this chaotic variation in solar output; the ice caps of Mars are receding too.

The change in Martian ice could be from the sun, or it might be from Martian dust in the air. If so, it suggests yet another feedback system for the earth. When economic times are good we have more money to spend on agriculture and air-pollution control. For all we know, the main feedback loops involve dust and smog in the air. Perhaps the earth is getting warmer because we've got no reflective cloud of dust as in the dust-bowl days, and our cities are no longer covered by a layer of thick, black (reflective) smog. If so, we should be happy to have the extra warmth.