Tag Archives: heat transfer

A Nuclear-blast resistant paint: Starlite and co.

About 20 years ago, an itinerant inventor named Maurice Ward demonstrated a super-insulating paint that he claimed would protect most anything from intense heat. He called it Starlite, and at first no one believed the claims. Then he demonstrated it on TV, see below, by painting a paper-thin layer on a raw egg. He then blasted the egg with a blow torch for a minute till the outside glowed yellow-red. He then lifted the egg with his hand; it was barely warm! And then, on TV, he broke the shell to show that the insides were totally raw: not only uncooked but completely unchanged, a completely raw egg. The documentary below shows the demonstration and describes what happened next (as of 10 years ago), including an even more impressive series of tests.

Intrigued but skeptical, researchers at the US White Sands National Laboratory, our nuclear bomb test lab, asked for samples. Ward provided pieces of wood painted, as before, with a “paper thin” layer of Starlite. They subjected these to burning with an oxyacetylene torch, and to a simulated nuclear bomb blast; the nuclear fireball radiation was simulated by an intense laser at the site. Amazing as it sounds, the paint and the wood beneath emerged barely scorched. The painted wood was not damaged by the laser, nor by an oxyacetylene torch that could burn through 8 inches of steel in seconds.

The famous egg and blow-torch experiment.

The inventor wouldn’t say what the paint was made of, or what mechanism allowed it to do this, but clearly it had military and civilian uses. It seems it would have prevented the twin towers from collapsing, or would have greatly extended the time they stayed standing. Similarly, it would protect almost anything from a flame-thrower.

As for the ingredients, Ward said it was non-toxic, and that it contained mostly organic materials, plus borax and some silica or ceramic. According to his daughter, it was “edible”; they’d fed it to dogs and horses without adverse effects.

Starlite-coated wood. The simulated nuclear blast made the char mark at left.

The White Sands engineers speculated that the paint worked by a combination of ablation and intumescence, controlled swelling. The surface, they surmised, formed a foam of char, pure carbon, that swelled to make tiny chambers. If these chambers are small enough, ≤10 nm or so, the mean free path of the gas molecules within them is severely reduced, reducing the potential for heat transfer. Even more insulating would be foam chambers of about 1 nm. Such chambers would be essentially air-free, and thus very insulating. For a more technical view of how molecular motion affects heat transfer rates, see my essay, here.
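To put rough numbers on this, here’s a minimal kinetic-theory sketch in Python. It is my illustration only, not Ward’s math, and the molecular diameter is a textbook-style assumption:

```python
import math

# Mean free path of air at ambient conditions, standard kinetic-theory formula:
# lambda = k_B * T / (sqrt(2) * pi * d^2 * P)
k_B = 1.38e-23     # Boltzmann constant, J/K
T = 295.0          # room temperature, K
P = 101325.0       # atmospheric pressure, Pa
d = 4.0e-10        # assumed effective diameter of an air molecule, m (~4 Angstroms)

mfp = k_B * T / (math.sqrt(2) * math.pi * d**2 * P)
print(f"mean free path of air: {mfp * 1e9:.0f} nm")   # a few tens of nm

# Char chambers of ~10 nm sit well below this, choking off gas conduction;
# ~1 nm chambers would hold essentially no gas at all.
for chamber in (10e-9, 1e-9):
    print(f"a {chamber * 1e9:.0f} nm chamber is {chamber / mfp:.2f}x the mean free path")
```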

Sorry to say we don’t know how big the char chambers are, or if this is how the material works. Ward retained the samples and the formula, and didn’t allow close examination. Clearly, if it works by a char, the char layer is very thin, a few microns at most.

Because Maurice Ward never sold the formula or any of the paint in his lifetime, he made no money on the product. He kept close-mouthed about it, as he knew that, as soon as he patented, or sold, or let anyone know what was in the paint, there would be copycats, patent violations, and leaks of any secret formula. Even in the US, many people and companies ignore patent rights, daring you to challenge them in court. And it’s worse in foreign countries, where the government actively encourages violation. There are also legal ways around a patent: a copycat inventor looks for ways to get the same behavior from materials that are not covered in the patent. Ward could not get around these issues, so he never patented the formula or sold the rights. He revealed the formula only to some close family members, and that was it till May 2020, when a US company, Thermashield, LLC, bought Ward’s lab equipment and notes. They now claim to make the original Starlite. Maybe they do. The product doesn’t seem quite as good; I’ve yet to see an item scorched as little as the sample above.

Many companies today are selling versions of Starlite. The formulas are widely different, but all the paints are intumescent, and all the formulas are based on materials Ward would have had on hand, and on the recollections of the TV people and those at White Sands. I’ve bought one of these copycat products, not Thermashield’s, and tested it. It’s not half bad, though it’s thicker in consistency than the original and not as heat-resistant.

There are home-made products too, with formulas on the internet and on YouTube. They are applied more like a spackle or a clay. Still, these products insulate remarkably well: a lot better than any normal insulator I’ve seen.

If you’d like to try this as a science fair project, among the formulas you can try is a mix of glue, baking soda, borax, and sugar, with some water. Some versions use sodium silicate too. The Thermashield folks say that this isn’t the formula, that there is no PVA glue or baking soda in their product. Still, it works.

Robert Buxbaum, March 13, 2022. Despite my complaints about the US patent system, it’s far better than in any other country I’ve explored. In most countries, patents are granted only as an income stream for the government, and inventors are considered villains: folks who withhold the fruits of their brains for unearned money. Horrible.

If hot air rises, why is it cold on mountain-tops?

This is a child’s question that’s rarely answered to anyone’s satisfaction. To answer it well requires college-level science, and by college the child has usually been dissuaded from asking anything scientific that would likely embarrass the teacher — which is to say, from asking most anything. By a good answer, I mean one that provides both a mathematical, checkable prediction of the temperature you’d expect to find on mountain tops, and a feel for why it should be so. I’ll try to provide both here, as I did previously when explaining “why is the sky blue.” A word of warning: real science involves mathematics, something that’s often left behind, perhaps in an effort to build self-esteem. If I do a poor job, please text me back: “if hot air rises, what’s keeping you down?”

As a touchy-feely answer, please note that all materials have internal energy. It’s generally associated with the kinetic energy plus potential energy of the molecules. It enters whenever a material is heated or has work done on it, and for gases, to good approximation, it equals the heat capacity of the gas times its temperature. For air, this is about 7 cal/mol°K times the temperature in degrees Kelvin. The average air at sea-level is taken to be at 1 atm, or 101,300 Pascals, and 15.0°C, or 288.15°K; the internal energy of this air is thus 288.15 x 7 = 2017 cal/mol = 8440 J/mol. The internal energy of the air will decrease as the air rises, and the temperature drops, for reasons I will explain below. Most diatomic gases have a heat capacity of 7 cal/mol°K, a fact that is only explained by quantum mechanics; if not for quantum mechanics, the heat capacities of diatomic gases would be about 9 cal/mol°K.

Let’s consider a volume of this air at this standard condition, and imagine that it is held within a weightless balloon, or plastic bag. As we pull that air up, by pulling up the bag, the bag starts to expand because the pressure is lower at high altitude (air pressure is just the weight of the air above). No heat is exchanged with the surrounding air because our air will always be about as warm as its surroundings; or, if you like, you can imagine that the weightless balloon prevents it. In either case the molecules lose energy as the bag expands because they always collide with an outwardly moving wall. Alternately, you can say that the air in the bag is doing work on the exterior air — expansion is work — but we are putting no work into the air, as it takes no work to lift this air. The buoyancy of the air in our balloon is always about that of the surrounding air, or so we’ll assume for now.

A classic, difficult way to calculate the temperature change with altitude is to calculate the work being done by the air in the rising balloon. Work done is force times distance: w = ∫f dz, and this work should equal the effective cooling, since heat and work are interchangeable. There’s an integral sign here to account for the fact that force is proportional to pressure, and the air pressure decreases as the balloon goes up. We now note that w = ∫f dz = –∫P dV because pressure, P, is force per unit area, and volume, V, is area times distance. The minus sign is because the work is being done by the air, not done on the air — it involves a loss of internal energy. Sorry to say, the temperature and pressure of the air keep changing with volume and altitude, so it’s hard to solve the integral directly, but there is a simple approach based on entropy, S.

Les Droites Mountain, in the Mont Blanc range of the French Alps, is 4000 m tall. The top is generally snow-covered.

I discussed entropy last month, and showed it was a property of state, and further, that for any reversible path, ∆S= (Q/T)rev. That is, the entropy change for any reversible process equals the heat that enters divided by the temperature. Now, we expect the balloon rise is reversible, and since we’ve assumed no heat transfer, Q = 0. We thus expect that the entropy of air will be the same at all altitudes. Now entropy has two parts, a temperature part, Cp ln T2/T1 and a pressure part, R ln P2/P1. If the total ∆S=0 these two parts will exactly cancel.

Consider that at 4000 m, the height of Les Droites, a mountain in the Mont Blanc range, the typical pressure is 61,660 Pa, about 60.85% of sea level pressure (101,325 Pa). If the air were reduced to this pressure at constant temperature, (∆S)T = -R ln P2/P1, where R is the gas constant, about 2 cal/mol°K, and P2/P1 = .6085; thus (∆S)T = -2 ln .6085. Since the total entropy change is zero, this part must equal Cp ln T2/T1, where Cp is the heat capacity of air at constant pressure, about 7 cal/mol°K for all diatomic gases, and T1 and T2 are the temperatures (Kelvin) of the air at sea level and 4000 m. (These equations are derived in most thermodynamics texts. The short version is that the entropy change from compression at constant T equals the work at constant temperature divided by T: ∫P/T dV = ∫R/V dV = R ln V2/V1 = -R ln P2/P1. Similarly, the entropy change at constant pressure = ∫dQ/T where dQ = Cp dT. This component of entropy is thus ∫dQ/T = Cp ∫dT/T = Cp ln T2/T1.) Setting the sum to equal zero, we can say that Cp ln T2/T1 = R ln .6085, or that

T2 = T1 (0.6085)^(R/Cp)

T2 = T1 (0.6085)^(2/7), where 0.6085 is the pressure ratio at 4000 m, and because, for air and most diatomic gases, R/Cp = 2/7 to very good approximation, matching the prediction from quantum mechanics.

From the above, we calculate T2 = 288.15 x .8676 = 250.0°K, or -23.15 °C. This is cold enough to provide snow  on Les Droites nearly year round, and it’s pretty accurate. The typical temperature at 4000 m is 262.17 K (-11°C). That’s 26°C colder than at sea-level, and only 12°C warmer than we’d predicted.
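If you’d like to check the arithmetic, here is the same calculation as a short Python script, using the values above:

```python
# Iso-entropic temperature drop with altitude, as derived above:
# Cp ln(T2/T1) = R ln(P2/P1)  =>  T2 = T1 * (P2/P1)**(R/Cp)
T1 = 288.15            # sea-level temperature, K
P1 = 101325.0          # sea-level pressure, Pa
P2 = 61660.0           # typical pressure at 4000 m, Pa
R_over_Cp = 2.0 / 7.0  # for air and most diatomic gases

T2 = T1 * (P2 / P1) ** R_over_Cp
print(f"pressure ratio: {P2 / P1:.4f}")                    # ~0.6085
print(f"T at 4000 m: {T2:.1f} K = {T2 - 273.15:.1f} C")    # ~250 K, about -23 C
```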

There are three weak assumptions behind the 12°C error in our prediction: (1) that the air that rises is no hotter than the air that does not; (2) that the air is not heated by radiation from the sun or earth; and (3) that there is no heat exchange with the surrounding air, e.g. from rain or snow formation. The last of these is thought to be the largest, but it’s still not large enough to cause serious problems.

Snow on Kilimanjaro, Tanzania 2013. If global warming models were true, the ground should be 4°C warmer than 100 years ago, and the air at this altitude, about 7°C (12°F) warmer; and the snow should be gone.

You can use this approach, with different exponents, to estimate the temperature at the center of Jupiter, or at the center of neutron stars. This iso-entropic calculation is the model that’s used here, though it’s understood that it may be off by a fair percentage. You can also ask questions about global warming: increased CO2 at this level is supposed to cause extreme heating at 4000 m, enough to heat the earth below by 4°C per century or more. As it happens, the temperature and snow cover on Les Droites and other Alpine ski areas have been studied carefully for many decades; they are not warming as best we can tell (here’s a discussion). By all rights, Mt Blanc should be Mt Green by now; no one knows why. The earth too seems to have stopped warming. My theory: clouds.

Robert Buxbaum, May 10, 2014. Science requires you check your theory for internal and external weakness. Here’s why the sky is blue, not green.

Nuclear fusion

I got my PhD at Princeton University 33 years ago (1981) working on the engineering of nuclear fusion reactors, and I thought I’d use this blog to rethink through the issues. I find I’m still of the opinion that developing fusion is important, as it seems the best long-range power option. Civilization will still need significant electric power 300 to 3000 years from now, it seems, when most other fuel sources are gone. Fusion is also one of the few options for long-range space exploration; needed if we ever decide to send colonies to Alpha Centauri or Saturn. I thought fusion would be ready by now, but it is not, and commercial use seems unlikely for the next ten years at least — an indication of the difficulties involved, and of a certain lack of urgency.

Oil, gas, and uranium didn’t run out like we’d predicted in the mid 70s. Instead, population growth slowed, new supplies were found, and better methods were developed to recover and use them. Shale oil and fracking unlocked hydrocarbons we thought were unusable, and nuclear fission reactors got better — safer and more efficient. At the same time, the more we studied, the clearer it became that fusion’s technical problems are much harder to tame than uranium fission’s.

Uranium fission was, and is, frighteningly simple — far simpler than even the most basic fusion reactor. The first nuclear fission reactor (1942) involved nothing more than uranium pellets in a pile of carbon bricks stacked in a converted squash court at the University of Chicago. No outside effort was needed to get the large, unstable uranium atoms to split into smaller, more stable ones. Water circulating through the pile removed the heat released, and control was maintained by people lifting and lowering cadmium control rods while standing on the pile.

A fusion reactor requires high temperature or energy to make anything happen. Fusion energy is produced by combining small, unstable heavy-hydrogen atoms into helium, a bigger, more stable atom, see figure. To do this reaction you need to operate at the equivalent of about 500,000,000 degrees C, and containing it requires (typically) a magnetic bottle — something far more complex than a pile of graphite bricks. The reward is smaller too: “only” about 1/13th as much energy per event as fission. We knew the magnetic bottles were going to be tricky, e.g. there was no obvious heat transfer and control method, but fusion seemed important enough, and the problems seemed manageable enough, that fusion power seemed worth pursuing — with just enough difficulties to make it a challenge.

Basic fusion reaction: deuterium + tritium react to give helium, a neutron and energy.

The plan at Princeton, and most everywhere, was to use a TOKAMAK, a doughnut-shaped reactor like the one shown below, but roughly twice as big; TOKAMAK was a Russian acronym. The doughnut served as one side of an enormous transformer. Hydrogen fuel was ionized into a plasma (a neutral soup of protons and electrons) and heated to 300,000,000°C by a current in the TOKAMAK generated by varying the current in the other side of the transformer. Plasma containment was provided by enormous magnets on the top and bottom, and by ring-shaped magnets arranged around the torus.

As development went on, we found we kept needing bigger and bigger doughnuts and stronger and stronger magnets in an effort to balance heat loss with fusion heating. The number density of hydrogen atoms per volume, n, is proportional to the magnetic strength. This is important because the fusion heat rate per volume is proportional to n squared, n2, while heat loss is proportional to n divided by the residence time, something we called tau, τ. The main heat loss was from the hot plasma reaching the reactor surface. Because of the above, a heat balance ratio was seen to be important, heat in divided by heat out, and that was seen to be more-or-less proportional to nτ. As the target temperatures increased, we found we needed larger and larger nτ to achieve a positive heat balance. This translated to ever larger reactors and ever stronger magnetic fields, but even here there was a limit, 1 billion Kelvin, a thermodynamic temperature where the fusion reaction goes backward and no energy is produced. The Princeton design was huge, with super-strong magnets, and was operated at 300 million°C, near the top of the reaction-rate curve. If the temperature drifted above or below this point, the fire would go out. There was no room for error, and relatively little energy output per volume — compared to fission.
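Here is the scaling argument as a toy Python sketch. The constants are placeholders chosen only to show why the figure of merit is nτ; they are not the actual Princeton design values:

```python
# Heat balance scaling: fusion heating per volume ~ n^2 * f(T),
# conduction loss ~ n * Cp*T / tau, so (heat in)/(heat out) ~ n * tau.
def heat_ratio(n, tau, f_T=1.0, cp_T=1.0):
    heating = n**2 * f_T       # fusion events scale as density squared
    loss = n * cp_T / tau      # plasma heat carried to the wall
    return heating / loss      # = n * tau * f_T / cp_T

# Doubling either density or containment time doubles the ratio:
for n, tau in [(1e20, 1.0), (1e20, 4.0), (2e20, 4.0)]:
    print(f"n = {n:.0e}, tau = {tau} s: heat ratio ~ {heat_ratio(n, tau):.1e}")
```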

Fusion reaction options and reaction rates.

The most likely reaction involved deuterium and tritium, referred to as D and T. This is the reaction of the two heavy isotopes of hydrogen shown in the figure above — the same reaction used in hydrogen bombs, a point we rarely made to the public. For each reaction, D + T –> He + n, you get 17.6 million electron volts (17.6 MeV). This is 17.6 million times the energy you get for an electron moving over one Volt, but only 1/13 the energy of a fission reaction. By comparison, the energy of water-forming, H2 + 1/2 O2 –> H2O, is the equivalent of two electrons moving over 1.2 Volts, or 2.4 electron volts (eV), some 7 million times less than fusion.

The Princeton design involved reacting 40 gm/hr of heavy hydrogen to produce 8 mol/hr of helium and 4000 MW of heat. The heat was to be converted to electricity at 38% efficiency using a topping cycle, a modern (and relatively untried) design. Of the roughly 1500 MW of electricity that was supposed to be produced, all but about 400 MW was to be delivered to the power grid — if everything worked right. Sorry to say, the value of the electricity did not rise anywhere near as fast as the cost of the reactor and turbines. Another problem: 1100 MW was more than could be easily absorbed by any electrical grid. The output was high and steady, and could not be easily adjusted to match fluctuating customer demand. By contrast, a coal plant’s or fuel cell’s output can be easily adjusted (and a nuclear plant’s with a little more difficulty).
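As a back-of-envelope check of those numbers, here is a Python sketch; my reconstruction, assuming 5 grams of D plus T per mol of reactions and the 17.6 MeV per event quoted above:

```python
AVOGADRO = 6.022e23
EV_TO_J = 1.602e-19

e_reaction = 17.6e6 * EV_TO_J       # J per D + T -> He + n event
fuel_g_per_hr = 40.0                # heavy hydrogen reacted per hour
mol_per_hr = fuel_g_per_hr / 5.0    # one D (2 g/mol) + one T (3 g/mol) per event

watts = mol_per_hr * AVOGADRO * e_reaction / 3600.0
print(f"helium produced: {mol_per_hr:.0f} mol/hr")           # 8 mol/hr
print(f"fusion heat: {watts / 1e6:.0f} MW")                  # ~3800 MW, i.e. ~4000
print(f"electricity at 38%: {0.38 * watts / 1e6:.0f} MW")    # ~1450 MW, i.e. ~1500
```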

Because of the need for heat balance, it turned out that at least 9% of the hydrogen had to be burnt per pass through the reactor. The heat lost per mol by conduction to the wall was, to good approximation, the heat capacity of each mol of hydrogen ions, 82 J/°C mol, times the temperature of the ions, 300 million °C, divided by the containment time, τ. The Princeton design was supposed to have a containment time of about 4 seconds; as a result, the heat loss by conduction was 6.2 GW per mol. This had to be matched by the molar heat of reaction that stayed in the plasma: 17.6 MeV times Faraday’s constant, about 96,500, divided by 4 seconds (= 430 GW per mol reacted), divided by 5. Of the 430 GW/mol produced in fusion reactions, only 1/5 remains in the plasma (= 86 GW/mol); the other 4/5 of the energy of reaction leaves with the neutron. Comparing the two, at least 7% of the hydrogen had to react per pass through the reactor; there were also some heat losses from radiation, so the number came to about 9%. Burn a higher or lower fraction of the hydrogen and you had problems. The only other solution was to increase τ beyond 4 seconds, but this meant ever bigger reactors.
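The burn-fraction arithmetic, as a short Python check using the values quoted above (radiation losses, which push the figure from about 7% toward 9%, are left out):

```python
FARADAY = 96485.0    # C/mol; converts eV per particle to J per mol
tau = 4.0            # containment time, s
cp_ions = 82.0       # J/(mol*deg), heat capacity assumed for the plasma ions
T_ions = 3.0e8       # ion temperature, degrees

loss_per_mol = cp_ions * T_ions / tau    # W lost per mol of plasma
e_mol = 17.6e6 * FARADAY                 # J per mol reacted, ~1.7e12
retained = (e_mol / tau) / 5.0           # only 1/5 stays in the plasma

print(f"conduction loss: {loss_per_mol / 1e9:.1f} GW per mol of plasma")   # ~6.2
print(f"retained heating: {retained / 1e9:.0f} GW per mol reacted")        # ~85
print(f"minimum burn fraction: {loss_per_mol / retained:.1%}")             # ~7%
```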

There was also a material-handling issue: to get enough fuel hydrogen into the center of the reactor, quite a lot of radioactive gas had to be handled and extracted from the plasma chamber. The fuel was to be frozen into tiny spheres of near-solid hydrogen and injected into the reactor at supersonic velocity. Any slower and the spheres would evaporate before reaching the center. As the 40 grams per hour reacted was 9% of the feed, it became clear that we had to be ready to produce and inject about 1 pound per hour of tiny spheres. These “snowballs-in-hell” had to be small so they didn’t dampen the fire. The vacuum system had to be big enough to handle the pound or so per hour of unburned hydrogen and ash, keeping the pressure near total vacuum. You then had to purify the hydrogen from the helium ash and remake the little spheres to be fed back to the reactor. There were no easy engineering problems here, but I found them enjoyable. With a colleague, I came up with a cute, efficient high-vacuum pump and recycling system, and published it here.

Yet another engineering challenge concerned the difficulty of finding a material for the first wall — the inner wall of the doughnut facing the plasma. Of the 4000 MW of heat energy produced, all the conduction and radiation heat, about 1000 MW, is deposited in the first wall and has to be conducted away. Conducting this heat means that the wall must carry an enormous coolant flow and withstand an enormous amount of thermal stress. One possible approach was to use a liquid wall, but I’ve recently come up with a rather nicer solid-wall solution (I think) and have filed a patent; more on that later, perhaps after/if the patent is accepted. Another engineering challenge was making T, tritium, for the D-T reaction. Tritium is not found in nature, but has to be made from the neutron created in the reaction and from lithium in a breeder blanket, Li + n –> He + T. I examined all possible options for extracting this tritium from the lithium at low concentrations as part of my PhD thesis, and eventually found a nice solution. The education I got in the process is used in my REB Research hydrogen engineering business.

Man inside the fusion reactor doughnut at ITER. He’d better leave before the 8,000,000°C plasma turns on.

Because of its complexity, and all these engineering challenges, fusion power never reached the maturity of fission power; and then Three Mile Island happened and ruined the enthusiasm for all things nuclear. There were some claims that fusion would be safer than fission, but because of the complexity, and because of improvements in fission, I am not convinced that fusion would ever be even as safe. And the long-term need keeps moving out: we keep finding more uranium, and we’ve developed breeder reactors and a thorium cycle: technologies that make it very unlikely we will run out of fission material any time soon.

The main, near term advantage I see for fusion over fission is that there are fewer radioactive products, see comparison.  A secondary advantage is neutrons. Fusion reactors make excess neutrons that can be used to make tritium, or other unusual elements. A need for one of these could favor the development of fusion power. And finally, there’s the long-term need: space exploration, or basic power when we run out of coal, uranium, and thorium. Fine advantages but unlikely to be important for a hundred years.

Robert E. Buxbaum, March 1, 2014. Here’s a post on land use, on the aesthetics of engineering design, and on the health risks of nuclear power. The sun’s nuclear fusion reactor is unstable too — one possible source of the chaotic behavior of the climate. Here’s a control joke.

Fractal power laws and radioactive waste decay

Here’s a fairly simple model for nuclear reactor decay heat versus time. It’s based on a fractal model I came up with for dealing with the statistics of crime, fires, etc. The start was to notice that radioactive waste is typically a mixture of isotopes with different decay times and different decay heats. I then came to suspect that there would be a general fractal relation, and that the fractal relation would hold throughout, as the elements of the mixed waste decayed to more stable, less radioactive products. After looking a bit, it seems that the fractal time characteristic is time to the 1/4 power; that is,

heat output = H° exp(−a t^1/4).

Here H° is the heat output rate at time t = 0, and “a” is a characteristic of the waste. Different waste mixes will have different values of this decay characteristic.

If nuclear waste consisted of one isotope and one decay path, the number of atoms decaying per day would decrease exponentially with time to the power of 1. If there were only one daughter product produced, and it were non-radioactive, the heat output of a sample would also decay with time to the power of 1. Thus, heat output would equal H° exp(−at), and a plot of the log of the decay heat would be linear against linear time — you could plot it all conveniently on semi-log paper.

But nuclear waste generally consists of many radioactive components with different half-lives, and these components decay into other radioactive isotopes, all of which have half-lives that vary by quite a lot. The result is that a semi-log plot is rarely helpful. Some people therefore plot radioactivity on a log-log plot, typically including a curve for each major isotope and decay mode. I find these plots hardly useful; they are certainly impossible to extrapolate. What I’d like to propose instead is a fractal variation of the original semi-log plot: a plot of the log of the heat rate against a fractal time. As shown below, the use of time to the 1/4 power seems to be helpful. The plot is similar to a fractal decay model that I’d developed for crimes and fires a few weeks ago.
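Here is a minimal Python sketch of the proposed plot (it needs numpy and matplotlib). The isotope mix is made up for illustration; it is not real waste data:

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.logspace(-2, 2, 200)              # time, years
half_lives = [0.1, 1.0, 10.0, 80.0]      # hypothetical components, years
heats = [10.0, 3.0, 1.0, 0.3]            # hypothetical initial heat of each, W

heat = sum(h * np.exp(-np.log(2) * t / hl) for h, hl in zip(heats, half_lives))

plt.semilogy(t**0.25, heat)              # log of heat rate vs fractal time
plt.xlabel("time^(1/4), years^(1/4)")
plt.ylabel("decay heat, W")
plt.show()
# A mix of many half-lives comes out close to a straight line on these axes,
# i.e. heat ~ H0 * exp(-a * t**0.25), while a plain semi-log plot is badly curved.
```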

After-heat of nuclear fuel rods used at 20 kW/kg U; top graph 35 MW-days/kg U, bottom graph 20 MW-days/kg U. Data from US NRC Regulatory Guide 3.54, “Spent Fuel Heat Generation in an Independent Spent Fuel Storage Installation,” rev. 1, 1999, http://www.nrc.gov/reading-rm/doc-collections/reg-guides/fuels-materials/rg/03-054/. A typical reactor has 200,000 kg of uranium.

A plausible justification for this fractal semi-log plot is to observe that the half-lives of daughter isotopes relate to those of the parent isotopes. Unless I find that someone else has come up with this sort of plot or analysis before, I’ll name it after myself: a Buxbaum-Mandelbrot plot. Why not?

Nuclear power is attractive because it is a lot more energy-dense than any normal fuel. Still, the graph at right illustrates the problem of radioactive waste. With nuclear, you generate about 35 MW-days of power per kg of uranium. This is enough to power an average US home for 8 years, but it produces 1 kg of radioactive waste. Even after 81 years the waste is generating about 1/2 W of decay heat. It should be easier to handle and store the 1 kg of spent uranium than to deal with the many tons of coal-smoke produced when 35 MW-days of electricity is made from coal; still, there is reason to worry about the decay heat.

I’ve made a similar plot of the decay heat of a fusion reactor, see below. Fusion looks better in this regard. A fission-based nuclear reactor big enough to power 1/2 of Detroit would hold some 200,000 kg of uranium that would be replaced every 5 years. Even 81 years after removal, the after-heat would be about 100 kW, and that’s a lot.

After-heat of a 4000 MWth fusion reactor built from Nb-1%Zr (niobium-1% zirconium, a fairly common high-temperature engineering material of construction); from the UWMAC III Report. The after-heat is far less than with normal uranium fission.

The plot of the after-heat of a similar-power fusion reactor, above, shows a far greater slope, but the same time-to-the-1/4-power dependence. The heat output drops from 1 MW at 3 weeks to only 100 W after 1 year, and far less than 1 W after 81 years. Nuclear fusion is still a few years off, but the plot shows the advantages fairly clearly, I think.

This plot was really designed to look at the statistics of crime, fires, and the need for servers / checkout people.

Dr. R.E. Buxbaum, January 2, 2014, edited Aug 30, 2022. A final, final thought about theory from Yogi Berra: “In theory, it matches reality.”

Paint your factory roof white

Standing on the flat roof of my lab / factory building, I notice that virtually all of my neighbors’ roofs are black, covered by tar or bitumen. My roof was black too until three weeks ago; the roof was too hot to touch when I’d gone up to patch a leak. That’s not quite egg-frying hot, but I came to believe my repair would last longer if the roof stayed cooler. So, after sealing the leak with tar and bitumen, we added an aluminized over-layer from Ace hardware. The roof is cooler now than before, and I notice a major drop in air conditioner load and use.

My analysis of our roof coating follows; it’s for Detroit, but you can modify it for your location. Sunlight hits the earth carrying about 1300 W/m2. Some 300 W/m2 of this scatters as blue light (for why so much scatters, and why the sky is blue, see here). The rest, 1000 W/m2 or about 317 Btu/ft2hr, comes through, or reflects off clouds on a cloudy day, and hits buildings at an angle determined by latitude, time of day, and season of the year.

Detroit is at 42° North latitude, so at noon in mid-spring the sun’s rays meet my flat roof at an angle of 42° from vertical. In summer, the angle is about 20°, and in winter about 63°. The sun sits lower in the sky earlier and later in the day; e.g., at two hours before or after noon in mid-spring the angle is 51°. On a clear day, with a perfectly black roof, the heating is 317 Btu/ft2hr times the cosine of this angle.

To calculate our average roof heating, I integrated this heat over the full day’s angles using Euler’s method, and included the scatter from clouds plus an absorption factor for the blackness of the roof. The figure below shows the cloud cover for Detroit.

Average cloud cover for Detroit, month by month; the black line is the median cloud cover. On January 1, it is strongly overcast 60% of the time and hardly ever clear; the median cover is about 98%. From http://weatherspark.com/averages/30042/Detroit-Michigan-United-States

Based on this, and an assumed light absorption factor of σ = 0.9 for tar and σ = 0.2 after aluminizing, I calculate an average of 105 Btu/ft2hr of heating during the summer for the original black roof, and 23 Btu/ft2hr after aluminizing. Our roof is still warm, but it’s no longer hot. While most of the absorbed heat leaves the roof by black-body radiation or convection, enough enters my lab through 6″ of insulation to cause me to use a lot of air conditioning. I calculate the heat entering this way from the roof temperature. In the summer, an aluminum coat is a clear winner.
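For those who want to play with the numbers, here is a rough Python reconstruction of the calculation. The 70% cloud transmission factor is my placeholder rather than the weatherspark data above, so expect the outputs to differ somewhat from my 105 and 23 Btu/ft2hr figures:

```python
import math

SOLAR = 317.0                      # Btu/ft2/hr reaching the ground, sun overhead
LAT = math.radians(42.0)           # Detroit latitude

def cos_zenith(decl, hour_angle):  # standard solar geometry for a flat roof
    return (math.sin(LAT) * math.sin(decl) +
            math.cos(LAT) * math.cos(decl) * math.cos(hour_angle))

def avg_heating(decl_deg, absorb, cloud_factor):
    decl = math.radians(decl_deg)
    total = 0.0
    for minute in range(24 * 60):                        # Euler integration
        h = math.radians((minute / 60.0 - 12.0) * 15.0)  # hour angle, 15 deg/hr
        c = cos_zenith(decl, h)
        if c > 0:                                        # sun above the horizon
            total += SOLAR * c / 60.0
    return absorb * cloud_factor * total / 24.0          # average Btu/ft2/hr

# Mid-summer (declination ~ +23 deg), assumed 70% transmission through clouds:
for name, absorb in [("black tar", 0.9), ("aluminized", 0.2)]:
    print(f"{name}: {avg_heating(23.0, absorb, 0.7):.0f} Btu/ft2/hr")
```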

High and Low Temperatures For Detroit, Month by Month. From http://weatherspark.com/averages/30042/Detroit-Michigan-United-States

Detroit has a cold winter too, and those are months when I’d benefit from solar heat. I find it’s so cloudy in winter that, even with a black roof, I got less than 5 Btu/ft2hr. Aluminizing reduces this to 1.2 Btu/ft2hr, but it also reduces the black-body radiation leaving at night. I should find that I use less heat in winter, but perhaps more in late spring and early fall. I won’t know the details till next year, but that’s the calculation.

The REB Research laboratory is located at 12851 Capital St., Oak Park, MI 48237. We specialize in hydrogen separations and membrane reactors. By Dr. Robert Buxbaum, June 16, 2013

What’s the quality of your home insulation?

By Dr. Robert E. Buxbaum, June 3, 2013

It’s common to have companies call during dinner offering to blow extra insulation into the walls and attic of your home. Those who’ve added this insulation find a small decrease in their heating and cooling bills, but generally wonder if they got their money’s worth, or perhaps if they need yet-more insulation to get the full benefit. Here’s a simple approach to comparing your home heat bill to the ideal your home can reasonably reach.

The rate of heat transfer through a wall, Qw, is proportional to the temperature difference, ∆T, to the area, A, and to the average thermal conductivity of the wall, k; it is inversely proportional to the wall thickness, ∂;

Qw = ∆T A k /∂.

For home insulation, we re-write this as Qw = ∆T A/Rw, where Rw is the thermal resistance of the wall, measured (in the US) in °F·ft2·hr/BTU; Rw = ∂/k.

Let’s assume that your home’s outer wall is nominally 6″ thick (0.5 foot). With the best available insulation, perfectly applied, the heat loss will be somewhat higher than if the space were filled with still air, k = .024 BTU/ft·hr·°F, a result based on molecular dynamics. For a 6″ wall, the R value will thus always be less than .5/.024 = 20.8. It will be much less if there are holes or air infiltration, but for practical construction with joists and sills, an Rw value of 15 or 16 is probably about as good as you’ll get with 6″ walls.

To show you how to evaluate your home, I’ll now calculate the R value of my walls based on the size of my ranch-style home (in Michigan) and our heat bills. I’ll first do this in a simplified calculation, ignoring windows, and will then repeat the calculation including the windows. Windows are found to be very important; I strongly suggest window curtains to save heat and air conditioning.

The outer wall of my home is 190 feet long, and extends about 11 feet above ground to the roof. Multiplying these dimensions gives an outer wall area of 2090 ft2. I could now add the roof area, 1750 ft2 (it’s the same as the area of the house), but since the roof is more heavily insulated than the walls, I’ll estimate that it behaves like 1410 ft2 of normal wall. I calculate there are 3500 ft2 of effective above-ground area for heat loss. This is the area that companies keep offering to insulate.

Between December 2011 and February 2012, our home was about 72°F inside, and the outside temperature was about 28°F. Thus, the average temperature difference between the inside and outside was about 45°F; I estimate the rate of heat loss from the above-ground part of my house, QU = 3500 x 45/Rw = 157,500/Rw.

Our house has a basement too, something that no one has yet offered to insulate. While the below-ground temperature gradient is smaller, the basement is less well insulated. Our basement walls are cinderblock covered with 2″ of styrofoam plus wall-board. Our basement floor is even less well insulated: it’s just cement poured on pea-gravel. I estimate the below-ground R value is no more than 1/2 of whatever the above-ground value is; thus, for calculating QB, I’ll assume a resistance of Rw/2.

The below-ground area equals the square footage of our house, 1750 ft2 but the walls extend down only about 5 feet below ground. The basement walls are thus 950 ft2 in area (5 x 190 = 950). Adding the 1750 ft2 floor area, we find a total below-ground area of 2700 ft2.

The temperature difference between the basement and the wet dirt is only about 25°F in the winter. Assuming the thermal resistance is Rw/2, I estimate the rate of heat loss from the basement, QB = 2700*25*(2/Rw) = 135,000/Rw. It appears that nearly as much heat leaves through the basement as above ground!

Between December and February 2012, our home used an average of 597 cubic feet of gas per day, or 25,497 BTU/hour (heat value = 1025 BTU/ft3). QU + QB = 292,500/Rw. Ignoring windows, I estimate the Rw of my home = 292,500/25,497 = 11.47.
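Here is the whole-house estimate as a short Python script; rerun it with your own areas (ft2), temperature differences (°F), and gas usage (ft3/day):

```python
GAS_HEAT_VALUE = 1025.0    # BTU per ft3 of natural gas

above_area = 3500.0        # effective above-ground area, ft2
above_dT = 45.0            # inside minus outside, winter average, F
below_area = 2700.0        # basement walls plus floor, ft2
below_dT = 25.0            # basement to wet dirt, F
gas_per_day = 597.0        # average winter usage, ft3/day

q_total = gas_per_day * GAS_HEAT_VALUE / 24.0     # BTU/hr, ~25,500
# Q = above_area*above_dT/Rw + below_area*below_dT/(Rw/2); solve for Rw:
Rw = (above_area * above_dT + 2.0 * below_area * below_dT) / q_total
print(f"heat input: {q_total:.0f} BTU/hr")
print(f"estimated wall R value, ignoring windows: {Rw:.1f}")   # ~11.5
```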

We now add the windows. Our house has 230 ft2 of windows, most covered by curtains and/or plastic. Because of the curtains and plastic, they would have an R value of 3, except that black-body radiation tends to be very significant; I estimate our windows have an R value of 1.5. The heat loss through the windows is thus QW = 230 x 45/1.5 = 6900 BTU/hr, about 27% of the total. The R value for our walls is now re-estimated to be 292,500/(25,497 − 6900) = 15.7; this is about as good as I can expect given the fixed thickness of our walls and the fact that I cannot easily get an insulation conductivity lower than that of still air. I thus find that there will be little or no benefit to adding more above-ground wall insulation to my house.

To save heat energy, I might want to coat our windows in partially reflective plastic or draw the curtains to follow the sun. Also, since nearly half the heat left from the basement, I may want to lay a thicker carpet, or lay a reflective under-layer (a space blanket) beneath the carpet.

To improve on the above estimate, I could consider our furnace efficiency; it is perhaps only 85-90% efficient, with still-warm air leaving up the chimney. There is also some heat lost through the door being opened, and through hot water being poured down the drain. As a first guess, these heat losses are balanced by the heat added by electric usage, by the body-heat of people in the house, and by solar radiation that entered through the windows (not much for Michigan in winter). I still see no reason to add more above-ground insulation. Now that I’ve analyzed my home, it’s time for you to analyze yours.

Most Heat Loss Is Black-Body Radiation

In a previous post I used statistical mechanics to show how you’d calculate the thermal conductivity of any gas, and showed why the insulating power of the best normal insulating materials is usually identical to that of ambient air. That analysis considered only the motion of molecules, not of photons (black-body radiation), and thus under-predicted heat transfer in most circumstances. Though black-body radiation is often ignored in chemical engineering calculations, it is often the major heat transfer mechanism, even at modest temperatures.

One can show from quantum mechanics that the radiative heat transfer between two surfaces of temperature T and To is proportional to the difference of the fourth power of the two temperatures in absolute (Kelvin) scale.

Heat transfer rate = P = A ε σ( T^4 – To^4).

Here, A is the area of the surfaces, σ is the Stefan–Boltzmann constant, and ε is the surface emissivity, a number that is about 1 for most non-metals and 0.3 for stainless steel. For A measured in m2, σ = 5.67×10^-8 W/m2·K4.

Infrared picture of a fellow wearing a black plastic bag on his arm. The bag is nearly transparent to heat radiation, while his eyeglasses are opaque. His hair provides some insulation.

Unlike with conduction, radiative heat transfer does not depend on the distance between the surfaces, but only on their temperatures and infra-red (IR) reflectivity. IR reflectivity is different from normal reflectivity, as seen in the infra-red photo below of a lightly dressed person standing in a normal room. The fellow has a black plastic bag on his arm, but you can hardly see it here, as it hardly affects heat loss. His clothes don’t do much either, but his hair and eyeglasses are reasonably effective blocks to radiative heat loss.

As an illustrative example, let’s calculate the radiative and conductive heat transfer rates of the person in the picture, assuming he has 2 m2 of surface area, an emissivity of 1, and a body and clothes temperature of about 86°F; that is, his skin/clothes temperature is 30°C or 303 K absolute. If this person stands in a room at 71.6°F, 295 K, the radiative heat loss is calculated from the equation above: 2 x 1 x 5.67×10^-8 x (8.43×10^9 − 7.57×10^9) = 97.5 W. This is 23.36 cal/second, or 84.1 Cal/hr, or 2020 Cal/day; nearly the expected basal calorie use of a person this size.

The conductive heat loss is typically much smaller. As discussed previously in my analysis of curtains, the rate is inversely proportional to the heat transfer distance and proportional to the temperature difference. For the fellow in the picture, assuming he’s standing in relatively stagnant air, the heat boundary layer thickness will be about 2 cm (0.02m). Multiplying the thermal conductivity of air, 0.024 W/mK, by the surface area and the temperature difference and dividing by the boundary layer thickness, we find a Wattage of heat loss of 2*.024*(30-22)/.02 = 19.2 W. This is 16.56 Cal/hr, or 397 Cal/day: about 20% of the radiative heat loss, suggesting that some 5/6 of a sedentary person’s heat transfer may be from black body radiation.
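Here are both of those estimates as a short Python script, using the same assumed surface area, temperatures, and boundary-layer thickness:

```python
SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W/m2K4
K_AIR = 0.024       # thermal conductivity of still air, W/mK

area = 2.0                       # skin/clothes surface, m2
eps = 1.0                        # emissivity of skin and cloth
T_body, T_room = 303.0, 295.0    # K (30 C and 22 C)
boundary = 0.02                  # stagnant-air boundary layer, m

radiative = area * eps * SIGMA * (T_body**4 - T_room**4)
conductive = area * K_AIR * (T_body - T_room) / boundary
print(f"radiative loss: {radiative:.1f} W")     # ~97 W, ~2000 Cal/day
print(f"conductive loss: {conductive:.1f} W")   # ~19 W, ~400 Cal/day
print(f"black-body share: {radiative / (radiative + conductive):.0%}")  # ~5/6
```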

We can expect that black-body radiation dominates conduction when looking at heat-shedding losses from hot chemical equipment because this equipment is typically much warmer than a human body. We’ve found, with our hydrogen purifiers for example, that it is critically important to choose a thermal insulation that is opaque or reflective to black body radiation. We use an infra-red opaque ceramic wrapped with aluminum foil to provide more insulation to a hot pipe than many inches of ceramic could. Aluminum has a far lower emissivity than the nonreflective surfaces of ceramic, and gold has an even lower emissivity at most temperatures.

Many popular insulation materials are not black-body opaque, and most hot surfaces are not reflectively coated. Because of this, you can find that the heat loss rate goes up as you add too much insulation. After a point, the extra insulation increases the surface area for radiation while barely reducing the surface temperature; it starts to act like a heat fin. While the space-shuttle tiles are fairly mediocre in terms of conduction, they are excellent in terms of black-body radiation.

There are applications where you want to increase heat transfer without having to resort to direct contact with corrosive chemicals or heat-transfer fluids. Often black body radiation can be used. As an example, heat transfers quite well from a cartridge heater or band heater to a piece of equipment even if they do not fit particularly tightly, especially if the outer surfaces are coated with black oxide. Black body radiation works well with stainless steel and most liquids, but most gases are nearly transparent to black body radiation. For heat transfer to most gases, it’s usually necessary to make use of turbulence or better yet, chaos.

Robert Buxbaum

The Gift of Chaos

Many, if not most, important engineering systems are chaotic to some extent, but as most college programs don’t deal with this behavior, or with this type of math, I thought I might write something on it. It was a big deal among my PhD colleagues some 30 years back, as it revolutionized the way we looked at classic problems; it’s fundamental, but it’s now hardly mentioned.

Two of my first freshman engineering homework problems turn out to have been chaotic, though I didn’t know it at the time. One of these concerned the cooling of a cup of coffee. As presented, the coffee was in a cup at a uniform temperature of 70°C; the room was at 20°C, and some fanciful data was presented to suggest that the coffee cooled at a rate proportional to the difference between the (changing) coffee temperature and the fixed room temperature. Based on these assumptions, we predicted exponential cooling with time, something that was (more or less) observed, but not quite in real life. The chaotic part in a real cup of coffee is that the cup develops currents that move faster and slower. These currents accelerate heat loss, but since they are driven by the temperature differences within the cup, they tend to speed up and slow down erratically. They accelerate when the cup is not well stirred, causing new stirring, and slow down when it is stirred, and the temperature at any point is seen to rise and fall in an almost rhythmic fashion; that is, chaotically.

While it is impossible to predict what will happen over a short time scale, there are some general patterns. Perhaps the most remarkable of these is self-similarity: the behavior over 10 seconds will look like the behavior over 1 second, and this will look like the behavior over 0.1 second, the only difference being that, the smaller the time-scale, the smaller the up-down variation. You can see the same thing with stock movements, wind speed, cell-phone noise, etc., and the same self-similarity can occur in space, so that the shape of clouds tends to be similar at all reasonably small length scales. The maximum average deviation is smaller over smaller time scales, of course, and larger over large time-scales, but not in any obvious way. There is no simple proportionality, but rather a fractional-power dependence that results in these chaotic phenomena having a fractal dependence on measurement scale. Some of this is seen in the global temperature graph below.
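As a minimal illustration of this sort of power-law self-similarity, here is a Python sketch using a plain random walk (scaling exponent 0.5) as a stand-in for a real chaotic record:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(2**16))    # a long random walk

for scale in (10, 100, 1000, 10000):
    diffs = x[scale:] - x[:-scale]           # changes over this time scale
    print(f"scale {scale:>6}: typical deviation {diffs.std():8.1f}")
# Each 100x increase in time scale multiplies the typical deviation by ~10
# (100**0.5): the trace looks the same at every zoom, only the amplitude grows.
# Real chaotic records show the same behavior, with other fractional exponents.
```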

Global temperatures measured from the antarctic ice showing stable, cyclic chaos and self-similarity.

Chaos can be stable or unstable, by the way; the cooling of a cup of coffee was stable because the temperature could not exceed 70°C or go below 20°C. Stable chaotic phenomena tend to have fixed-period cycles in space or time. The world temperature seems to follow this pattern, though there is no obvious reason it should: there is no obvious maximum and minimum temperature for the earth, nor any obvious reason there should be cycles, or that they should be 120,000 years long. I’ll probably write more about chaos in later posts, but I should mention that unstable chaos can be quite destructive, and quite hard to prevent. Some form of chaotic local heating seems to have caused the battery fires aboard the Dreamliner; similarly, most riots, famines, and financial panics seem to be chaotic. Generally speaking, tight control does not prevent this sort of chaos; it just changes the period and makes the eruptions that much more violent. As two examples, consider what would happen if we tried to cap a volcano, or consider the clamp-downs on riots in Syria, Egypt or Ancient Rome.

From math, we know some alternate ways to prevent unstable chaos from getting out of hand; one is to lay off, another is to control chaotically (hard to believe, but true).

Heat conduction in insulating blankets, aerogels, space shuttle tiles, etc.

A lot about heat conduction in insulating blankets can be explained by the ordinary motion of gas molecules. That’s because the thermal conductivity of air (or any likely gas) is much lower than that of glass, alumina, or any likely solid material used for the structure of the blanket. At any temperature, the average kinetic energy of an air molecule is 1/2 kT in any direction, or 3/2 kT altogether, where k is Boltzmann’s constant and T is the absolute temperature, °K. Since kinetic energy equals 1/2 mv2, you find that the average velocity in the x direction must be v = √(kT/m) = √(RT/M). Here m is the mass of the gas molecule in kg, M is the molecular weight in kg/mol (0.029 kg/mol for air), R is the gas constant, 8.314 J/mol°K, and v is the molecular velocity in the x direction, in meters/sec. From this equation, you will find that v is quite large under normal circumstances, about 290 m/s (650 mph) for air molecules at an ordinary temperature of 22°C or 295 K. That is, air molecules travel in any fixed direction at roughly the speed of sound, Mach 1 (the average speed including all directions is about √3 as fast, or about 1130 mph).

The distance a molecule will go before hitting another one is a function of the cross-sectional areas of the molecules and their densities in space. Dividing the volume of a mol of gas, 0.0224 m3/mol at “normal conditions”, by the number of molecules in the mol (6.02 x10^23) gives an effective volume per molecule at this normal condition: .0224 m3/6.02 x10^23 = 3.72 x10^-26 m3/molecule at normal temperatures and pressures. Dividing this volume by the molecular cross-section area for collisions (about 1.6 x10^-19 m2 for air, based on an effective diameter of 4.5 Angstroms) gives a free-motion distance of about 0.23 x10^-6 m or 0.23µ for air molecules at standard conditions. This distance is small, to be sure, but it is 1000 times the molecular diameter, more or less, and as a result air behaves nearly as an “ideal gas”, one composed of point masses, under normal conditions (and most conditions you run into). The distance the molecule travels to or from a given surface will be smaller, 1/√3 of this on average, or about 1.35 x10^-7 m. This distance will be important when we come to estimate heat transfer rates at the end of this post.
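Here are those numbers as a short Python check, using the same molecular diameter and molar volume as above:

```python
import math

R = 8.314                 # gas constant, J/mol K
M_AIR = 0.029             # kg/mol
T = 295.0                 # K
N_A = 6.02e23
V_MOLAR = 0.0224          # m3/mol at normal conditions
CROSS_SECTION = 1.6e-19   # m2, from a 4.5 Angstrom effective diameter

v_x = math.sqrt(R * T / M_AIR)                # one-directional speed, m/s
free_path = (V_MOLAR / N_A) / CROSS_SECTION   # distance between collisions, m

print(f"v in one direction: {v_x:.0f} m/s")          # ~290 m/s, about Mach 1
print(f"free path: {free_path * 1e6:.2f} microns")   # ~0.23 u
print(f"distance to a surface: {free_path / math.sqrt(3) * 1e9:.0f} nm")  # ~135
```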

Molecular motion of an air molecule (oxygen or nitrogen) as part of heat transfer process; this shows how some of the dimensions work.

The number of molecules hitting per square meter per second is most easily calculated from the transfer of momentum. The pressure at the surface equals the rate of change of momentum of the molecules bouncing off. At atmospheric pressure, 103,000 Pa = 103,000 Newtons/m2, the number of molecules bouncing off per second is half this pressure divided by the mass of each molecule times its velocity in the surface direction. The contact rate is thus found to be (1/2) x 103,000 Pa x 6.02 x10^23 molecules/mol / (290 m/s x .029 kg/mol) = 36,900 x10^23 molecules/m2·sec.

The thermal conductivity is merely this number times the heat capacity transferred per molecule times the distance of the transfer. I will now calculate the heat capacity per molecule from statistical mechanics because I’m used to doing things this way; other people might look up the heat capacity per mol and divide by 6.02 x10^23. For any gas, the heat capacity that derives from kinetic energy is k/2 per molecule in each direction, as mentioned above. Combining the three directions, that’s 3k/2. Air molecules look like dumbbells, though, so they have two rotations that contribute another k/2 of heat capacity each, and they have a vibration that contributes k. On a molar basis, this k becomes R, the gas constant, approximately 2 cal/mol°K (it’s actually 1.987, but I round up to include some electronic effects). Based on this, we calculate the heat capacity of air to be 7 cal/mol°K at constant volume, or 1.16 x10^-23 cal/molecule°K. The amount of energy that can transfer to the hot (or cold) wall is this heat capacity times the temperature difference that the molecules carry between the wall and their first collision with other gases. The temperature difference carried by air molecules at standard conditions is only 1.35 x10^-7 times the temperature difference per meter, because the molecules only go that far before colliding with another molecule (remember, I said this number would be important). The thermal conductivity for stagnant air is thus calculated by multiplying the number of molecules that hit per m2 per second, the distance each molecule travels in meters, and the effective heat capacity per molecule: 36,900 x10^23 molecules/m2sec x 1.35 x10^-7 m x 1.16 x10^-23 cal/molecule°K = 0.00578 cal/m·s·°K or .0241 W/m°C. This value is (pretty exactly) the thermal conductivity of dry air that you find by experiment.
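And here is the whole conductivity estimate as a short Python check of the arithmetic above:

```python
P = 103000.0        # Pa, the pressure used in the text
N_A = 6.02e23
M_AIR = 0.029       # kg/mol
v_x = 290.0         # m/s, one-directional molecular speed from above
carry = 1.35e-7     # m, distance a molecule carries its energy to a surface
CAL_TO_J = 4.184

hits = 0.5 * P * N_A / (v_x * M_AIR)   # molecules striking 1 m2 each second
cp_molecule = 7.0 / N_A                # cal per molecule per deg (Cv = 7 cal/mol)

k_air = hits * carry * cp_molecule     # cal/(m s degC)
print(f"collision rate: {hits:.2e} molecules/m2/s")                 # ~3.7e27
print(f"k = {k_air:.5f} cal/m s C = {k_air * CAL_TO_J:.4f} W/m C")  # ~0.024
```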

I did all that math, though I already knew the thermal conductivity of air from experiment, for a few reasons: to show off the sort of stuff you can do with simple statistical mechanics; to build up skills in case I ever need to know the thermal conductivity of deuterium or iodine gas, or mixtures; and finally, to be able to understand the effects of pressure, temperature, and (mainly insulator) geometry — something I might need in order to design a piece of equipment with, for example, lower heat losses. I find from my calculation that we should not expect much change in thermal conductivity with gas pressure at near-normal conditions: to first order, a change in pressure changes the distance a molecule travels to exactly the same extent that it changes the number of molecules that hit the surface per second. At very low pressures or very small distances, lower pressure does translate to lower conductivity, but for normal-ish pressures and geometries, changes in gas pressure should not affect thermal conductivity — and they do not.

I’d predict that temperature would have a larger effect on thermal conductivity, but still not an order-of magnitude large effect. Increasing the temperature increases the distance between collisions in proportion to the absolute temperature, but decreases the number of collisions by the square-root of T since the molecules move faster at high temperature. As a result, increasing T has a √T positive effect on thermal conductivity.

Because neither temperature nor pressure has much effect, you might expect that the thermal conductivity of all air-filled insulating blankets at all normal-ish conditions is more-or-less that of standing air (air without circulation). That is what you find, for the most part: the same 0.024 W/m°C thermal conductivity with standing air, with the high-tech NASA fiber blankets on the space shuttle, and with the cheapest styrofoam cups. Wool felt has a thermal conductivity of 0.042 W/m°C, about twice that of air, a not-surprising result given that wool felt is about 1/2 wool and 1/2 air.

Now we can start to understand the most recent class of insulating blankets, those with very fine fibers, or thin layers of fiber (or aluminum or gold). When these are separated by less than 0.2µ, you finally decrease the thermal conductivity at room temperature below that of air. These layers decrease the distance traveled between gas collisions, but still leave the same number of collisions with the hot or cold wall; as a result, the smaller the gap below 0.2µ, the lower the thermal conductivity. This happens in aerogels and some space blankets that have very small silica fibers, less than .1µ apart (<100 nm). Aerogels can have much lower thermal conductivities than 0.024 W/m°C, even when filled with air at standard conditions.

In outer space you get lower thermal conductivity without high-tech aerogels because the free path is very long. At these pressures virtually every molecule hits a fiber before it hits another molecule; even for a rough blanket with distant fibers, the fibers break up the paths of the molecules significantly. Thus, the fibers of the space shuttle tiles (about 10 µ apart) provide far lower thermal conductivity in outer space than on earth. You can get the same benefit in the lab if you put a high vacuum of say 10^-7 atm between glass walls that are 9 mm apart. Without the walls, an air molecule at this pressure would carry its heat about 1.35 x10^-7 m/10^-7 = 1.35 m before giving it up. Since the walls of a typical Dewar are only about 0.009 m apart (9 mm), the heat conduction of the Dewar is thus 0.009/1.35 = 1/150 (0.7%) as high as for a normal air layer 9 mm thick; there is no thermal conductivity of Dewar flasks and vacuum bottles as such, since the amount of heat conducted is independent of the gap distance. Pretty spiffy. I use this knowledge to help with the thermal insulation of some of our hydrogen generators and hydrogen purifiers.
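The Dewar estimate, as a short Python check of the arithmetic above:

```python
carry_1atm = 1.35e-7     # m, distance a molecule carries heat in air at 1 atm
pressure_ratio = 1e-7    # Dewar vacuum relative to atmospheric
gap = 0.009              # m, spacing between the Dewar walls

travel = carry_1atm / pressure_ratio            # ~1.35 m: molecules span the gap
relative = pressure_ratio * gap / carry_1atm    # valid because travel >> gap
print(f"free travel without walls: {travel:.2f} m")
print(f"Dewar conduction vs a 9 mm air layer: {relative:.4f}")   # ~1/150, 0.7%
```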

There is another effect that I should mention: black-body heat transfer. In many cases black-body radiation dominates: it is the reason the shuttle tiles are white (or black) and not clear; it is the reason Dewar flasks are mirrored (a mirrored surface provides less black-body heat transfer). This post is already too long to do black-body radiation justice here, but I treat it in more detail in another post.

R.E. Buxbaum