Tag Archives: heat

My home-made brandy and still.

My home-made still, and messy lab. Note the masking tape seal and the nylon hoses. Nylon is cheaper than copper. The yellow item behind the burner is the cooling water circulation pump. The wire at top and left is the thermocouple.

I have an apple tree, a peach tree, and some grape vines. They’re not big trees, but they give too much fruit to eat. The squirrels get some, and we give some away. As for the rest, I began making wine and apple jack a few years back, but there’s still more fruit than I can use. Being a chemical engineer, I decided to make brandy this year, so far only with pears and apples.

The first steps were the simplest: I collected fruit in a 5 gallon, Ace bucket, and mashed it using a 2×4. I then added some sugar and water and some yeast and let it sit with a cover for a week or two. Bread yeast worked fine for this, and gives a warm flavor, IMHO. A week or so later, I put the mush into a press I had for grapes, shown below, and extracted the fermented juice. I used a cheesecloth bag with one squeezing, no bag with the other. The bag helped, making cleanup easier.

The fruit press, used to extract liquid. A cheese cloth bag helps.

I did a second fermentation with both batches of fermented mash. This was done in a pot over a hot-plate on warm. I added more sugar and some more yeast and let it ferment for a few more days at about 78°F. To avoid bad yeasts, I washed out the pot and the ace bucket with dilute iodine before using them– I have lots of dilute iodine around from the COVID years. The product went into the aluminum “corn-cooker” shown above, 5 or 6 gallon size, that serves as the still boiler. The aluminum cover of the pot was drilled with a 1″ hole; I then screwed in a 10″ length of 3/4″ galvanized pipe, added a reducing elbow, and screwed that into a flat-plate heat exchanger, shown below. The heat exchanger serves as the condenser, while the 3/4″ pipe is like the cap on a moonshiner still. Its purpose is to keep the foam and splatter from getting in the condenser.

I put the pot on the propane burner stand shown, sealed the lid with masking tape (it worked better than duct tape), hooked up the heat exchanger to a water flow, and started cooking. If you don’t feel like making a still this way, you can buy one at Home Depot for about $150. Whatever route you go, get a good heat exchanger/ condenser. The one on the Home Depot still looks awful. You need to be able to take heat out as fast as the fire puts heat in, and you’ll need minimal pressure drop or the lid won’t seal. The Home Depot still has too little area and too much back-pressure, IMHO. Also, get a good thermometer and put it in the head-space of the pot. I used a thermocouple. Temperature is the only reasonable way to keep track of the progress and avoid toxic distillate.

A flat-plate heat exchanger, used as a condenser.

The extra weight of the heat exchanger and pipe helps hold the lid down, by the way, but it would not be enough if there was a lot of back pressure in the heat exchanger-condenser. If your lid doesn’t seal, you’ll lose your product. If you have problems, get a better heat exchanger. I made sure that the distillate flows down as it condenses. Up-flow adds back pressure and reduces condenser efficiency. I cooled the condenser with water circulated to a bucket with the cooling water flowing up, counter current to the distillate flow. I could have used tap water via a hose with proper fittings for cooling, but was afraid of major leaks all over the floor.

With the system shown, and the propane on high, it took about 20 minutes to raise the temperature to near boiling. To avoid splatter, I turned down the heater as the temperature approached 150°F. The first distillate came out at 165°F, a temperature that indicated it was not alcohol, nor anything you’d want to drink. I threw away the first 2-3 oz of this product. You can sniff or sip a tiny amount to convince yourself that this is really nasty: acetone, I suspect, plus ethyl acetate, and maybe some ether and methanol. Throw it away!

After the first 2-3 ounces, I collected everything to 211°F. Product started coming in earnest at about 172°F. I ended distillation at 211°F when I’d collected nearly 3 quarts. For my first run, my electronic thermometer was off and I stopped too early — you need a good thermometer. The material I collected was OK in taste, especially when diluted a bit. To test the strength, I set some on fire (the classic proof test: spirit that just barely sustains a flame is about 100 proof) and diluted from there to roughly 70 proof. I also tried a refractometer, comparing the results to whiskey. I was aiming for 60-80 proof (30-40%).
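If you’d rather let a computer do the dilution arithmetic, here’s a short Python sketch. It’s my own illustration, not anything from a distilling manual: the function name is mine, and it assumes volumes add simply (real ethanol-water mixing contracts slightly, so treat the answer as an estimate).

```python
# Hypothetical helper: how much water to add to bring a distillate of known
# strength down to a target proof. Assumes simple volume additivity.

def water_to_add(volume_l, current_abv, target_abv):
    """Liters of water to add; ABV values are fractions (0.35 = 70 proof)."""
    if target_abv >= current_abv:
        raise ValueError("target must be weaker than the current strength")
    alcohol_l = volume_l * current_abv        # liters of pure ethanol present
    final_volume_l = alcohol_l / target_abv   # total volume at the target strength
    return final_volume_l - volume_l

# Example: 2.8 L (about 3 quarts) at 50% ABV (100 proof), diluted to 35% (70 proof):
print(round(water_to_add(2.8, 0.50, 0.35), 2))   # → 1.2
```

So about a quart of water brings three quarts of 100-proof spirit down to the 70-proof range I was aiming for.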

My 1 gallon aging barrel.

I tried distilling a second time to improve the flavor. The result was stronger, but much worse tasting, with a loss of fruit flavor. By contrast, a much better result came from putting some distillate (one pass) in an oak barrel we had used for wine. Just one day in the barrel helped a lot. I’ve also seen success with charred wood cubes dropped into a glass bottle of distillate. Note: my barrel, as purchased, had leaks. I sealed them with wood glue before use.

I only looked up distilling law after my runs. It varies state to state. In Michigan, making spirits for consumption, whether 1 gal or 60,000 gal/year, requires a “Distilling, Rectifying, Blending and/or Bottling Spirits” permit from the federal Alcohol and Tobacco Tax and Trade Bureau (“TTB”), plus a Small Distiller license from Michigan. Based on the sale of stills at Home Depot and a call to the ATF, it appears there is little interest in pursuing home distillers who do not sell, despite the activity being illegal. This appears similar to the state of affairs with personal-use marijuana growers in the state. Your state’s laws may be different, and your revenuers may be more enthusiastic. If you decide to distill, here’s some music: the Dukes of Hazzard theme song.

Robert Buxbaum, November 23, 2022.

Thermal stress failure

Take a glass, preferably a cheap glass, and set it in a bowl of ice-cold water so that the water goes only half-way up the glass. Now pour boiling-hot water into the glass. In a few seconds the glass will crack from thermal stress, the force caused by heat flowing from the inside of the glass out to the bowl of cold water. This sort of failure is not mentioned in any of the engineering materials books that I had in college, or had available for teaching engineering materials. To the extent that it is mentioned on the internet, e.g. here at Wikipedia, the metric presented is not derived and (I think) wrong. Given this, I’d like to present a Buxbaum-derived metric for thermal stress resistance and thermal stress failure. A key aspect: using a thinner glass does not help.

Before going on to the general case of thermal stress failure, let’s consider the glass, and try to compute the magnitude of the thermal stress. The glass is being torn apart, and that suggests that quite a lot of stress is generated by a ∆T of 100°C.

To calculate the thermal stress, consider the thermal expansivity of the material, α. Glass — normal cheap glass — has a thermal expansivity α = 8.5×10⁻⁶ meters/meter·°C (or 8.5×10⁻⁶ foot/foot·°C). For every degree Centigrade a meter of glass is heated, it will expand 8.5×10⁻⁶ meters, and for every degree it is cooled, it will shrink 8.5×10⁻⁶ meters. If you consider the circumference of the glass to be L (measured in meters), then
∆L/L = α ∆T.

where ∆L is the change in length due to heating, and ∆L/L is sometimes called the “strain.” Now, let’s call the amount of stress caused by this expansion σ, sigma, measured in psi or GPa. It is proportional to the strain, ∆L/L, and to the elasticity constant, E (also called Young’s elastic constant).

σ = E ∆L/L.

For glass, Young’s elasticity constant, E = 75 GPa. Since strain was equal to α ∆T, we find that

σ = Eα∆T

Thus, for glass and a ∆T of 100°C, σ = 100°C × 75 GPa × 8.5×10⁻⁶/°C = 0.064 GPa = 64 MPa. This is about 630 atm, or 9300 psi.

As it happens, the ultimate tensile strength of ordinary glass is only about 40 MPa =  σu. This, the maximum force per area you can put on glass before it breaks, is less than the thermal stress. You can expect a break here, and wherever σu < Eα∆T. I thus create a characteristic temperature difference for thermal stress failure:

The Buxbaum failure temperature, ß = σu/Eα

If ∆T of more than ß is applied to any material, you can expect a thermal stress failure.
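These two formulas are easy to sanity-check numerically. Here’s a short Python sketch (my own, with the textbook-typical constants for cheap glass used above) that computes the thermal stress σ = Eα∆T and the failure temperature ß = σu/Eα:

```python
# Sanity check of sigma = E*alpha*dT and the failure temperature ss = sigma_u/(E*alpha).
# Constants below are typical textbook values for ordinary, cheap glass.

def thermal_stress_mpa(E_gpa, alpha_per_C, dT_C):
    """Thermal stress in MPa: sigma = E * alpha * dT (GPa converted to MPa)."""
    return E_gpa * 1000.0 * alpha_per_C * dT_C

def failure_dT_C(sigma_u_mpa, E_gpa, alpha_per_C):
    """Temperature difference (deg C) at which thermal stress reaches sigma_u."""
    return sigma_u_mpa / (E_gpa * 1000.0 * alpha_per_C)

E, alpha, sigma_u = 75.0, 8.5e-6, 40.0   # glass: GPa, 1/degC, MPa
print(thermal_stress_mpa(E, alpha, 100))  # about 64 MPa for dT = 100 degC
print(failure_dT_C(sigma_u, E, alpha))    # about 63 degC: glass fails well below dT = 100
```

Swapping in the constants for ice, cast iron, or Pyrex gives the relative rankings discussed below.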

The Wikipedia article referenced above provides a ratio for thermal resistance. The units are, perhaps, heat load per unit area and time. How you would use this ratio I don’t quite know; it includes k, the thermal conductivity, and ν, the Poisson ratio. Including the thermal conductivity only makes sense, to me, if you think you’ll have a defined thermal load, that is, a defined amount of heat transfer per unit area and time. I don’t think this is a normal way to look at things. As for including the Poisson ratio, this too seems a misunderstanding. The assumption is that a high Poisson ratio decreases the effect of thermal stress. The thought behind this, as I understand it, is that heating one side of a curved part (the inside, for example) will decrease the thickness of that side, reducing the effective stress. This is a mistake, I think; heating never decreases the thickness of any part being heated, but only increases it. The heated part will expand in all directions. Thus, I think my ratio is the correct one. Please find following a list of failure temperatures for various common materials.

Stress-strain properties of engineering materials, including thermal expansion, ultimate stress (MPa), and Young’s elastic modulus (GPa).

You will notice that most materials are a lot more resistant to thermal stress than glass is, and some are quite a lot less resistant. Based on the above, we can expect that ice will fracture at a temperature difference as small as 1°C. Similarly, cast iron will crack with relatively little effort, while steel is a lot more durable (I hope that so-called cast iron skillets are really steel skillets). Pyrex is a form of glass that is more resistant to thermal breakage; that’s mainly because, for Pyrex, α is a lot smaller than for ordinary, cheap glass. I find it interesting that diamond is the material most resistant to thermal failure, followed by invar, a low-expansion steel, and ordinary rubber.

Robert E. Buxbaum, July 3, 2019. I should note that, for several of these materials, those with very high thermal conductivities, you’d want to use a very thick sample of material to produce a temperature difference of 100°C.

What drives the gulf stream?

I’m not much of a fan of today’s kids’ science books because they don’t teach science, IMHO. They have nice pictures and a few numbers, almost no equations, and lots of words. You can’t do science that way. On the odd occasion that they give the right answer to some problem, the lack of math means the kid has no way of understanding the reasoning, and no reason to believe the answer. Professional science articles on the web are bad in the opposite direction: too many numbers, and for math they rely on supercomputers. No human can understand the outcome. I like to use my blog to offer science with insight, the type you’d get in an old “everyman science” book.

In previous posts, I gave answers to why the sky is blue, why it’s cold at the poles, why it’s cold on mountains, how tornadoes pick stuff up, and why hurricanes blow the way they do. In this post, we’ll try to figure out what drives the gulf-stream. The main argument will be deduction — disproving things that are not driving the gulf stream to leave us with one or two that could. Deduction is a classic method of science, well presented by Sherlock Holmes.

The gulf stream. The speed in the white area is ≥ 0.5 m/s (1.1 mph.).


For those who don’t know, the Gulf Stream is a massive river of water that runs within the Atlantic Ocean. As shown at right, it starts roughly at the end of Florida, runs north to the Carolinas, and then turns dramatically east towards Spain. Flowing east, it’s about 150 miles wide, but only about 62 miles (100 km) wide when flowing along the US coast. According to some of the science books of my youth, this massive flow was driven by temperature; according to others, by salinity (whatever that means); and according to yet others, by wind. My conclusion: they had no clue.

As a start to doing the science here, it’s important to fill in the numerical information that the science books left out. The Gulf Stream is roughly 1000 meters deep, with a typical speed of 1 m/s (2.2 mph). The maximum speed is in the surface water where the stream flows along the US coast: about 2.5 meters per second (5.6 mph), see map above.

From the size and the speed of the Gulf Stream, we conclude that land rivers are not driving the flow. The Mississippi is a big river with an outflow point near the head waters of the gulf stream, but the volume of flow is vastly too small. The volume of the gulf stream is roughly

Q = w·d·v = 100,000 × 1000 × 0.5 = 50 million m³/s ≈ 1.8 billion cubic feet/s.

This is nearly 3000 times the volume flow of the Mississippi, 18,000 m³/s. The great difference in flow suggests the Mississippi could not be the driving force. The map of flow speeds (above) also suggests rivers do not drive the flow: the Gulf Stream does not flow at its maximum speed near the mouth of any river. We now look for another driver.
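As a quick numeric check, with the width, depth, and speed given above:

```python
# Volume flow of the Gulf Stream, Q = w*d*v, compared to the Mississippi.
width_m, depth_m, speed_ms = 100_000, 1_000, 0.5   # 100 km wide, 1 km deep, 0.5 m/s
Q = width_m * depth_m * speed_ms                   # m^3/s

mississippi = 18_000                               # m^3/s, from the text
print(f"{Q:,.0f} m^3/s")          # 50,000,000 m^3/s
print(round(Q / mississippi))     # → 2778: nearly 3000 Mississippis
```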

Moving on to temperature. Temperature drives the whirl of hurricanes. The logic for temperature driving the Gulf Stream is as follows: it’s warm by the equator and cold at the poles; warm things expand, and since water flows downhill, the poles will always be downhill from the equator. Let’s put some math in here or my explanation will be lacking. First, let’s consider how much height difference we might expect to see. The thermal expansivity of water is about 2×10⁻⁴ m/m·°C (0.0002/°C) in the desired temperature range. To calculate the amount of expansion, we multiply this by the depth of the stream, 1000 m, and the temperature difference between two points, e.g. the end of Florida and the Carolina coast. This is 5°C (9°F), I estimate. I calculate the temperature-induced seawater height as:

∆h (thermal) ≈ 5° x .0002/° x 1000m = 1 m (3.3 feet).

This is a fair amount of height. It’s only about 1/100 the height driving the Mississippi River, but it’s something. To see if 1 m is enough to drive the Gulf flow, I’ll compare it to the velocity head. Velocity head is a concept that’s useful in plumbing (I ran for water commissioner). It’s the potential-energy height equivalent of any kinetic energy — typically of a fluid flow. The kinetic energy for any velocity v and mass of water m is ½mv². The potential energy equivalent is mgh. Combine the above, remove the mass terms, and we have:

∆h (velocity) = v²/2g.

where g is the acceleration of gravity. For v = 1 m/s and g = 9.8 m/s², ∆h (velocity) = 1/(2 × 9.8) ≈ 0.05 m ≈ 2 inches. This is far less than the 1 m of driving force calculated above, so there is more than enough head to get the water moving. But there is a problem: why isn’t the flow faster? And why does the Mississippi move so slowly when it has 100 times more head?
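The two heads compared above, thermal and velocity, take just a few lines to compute (constants as in the text):

```python
# Thermal-expansion head vs velocity head for the Gulf Stream.
alpha = 2e-4                   # 1/degC, thermal expansivity of seawater
depth_m, dT_C = 1000.0, 5.0    # stream depth and Florida-to-Carolinas temperature drop
h_thermal = alpha * dT_C * depth_m
print(round(h_thermal, 3))     # → 1.0 (meters)

g = 9.8                        # m/s^2
for v in (1.0, 2.5):           # typical and maximum stream speeds, m/s
    print(v, round(v**2 / (2 * g), 3))   # ~0.051 m at 1 m/s, ~0.319 m at 2.5 m/s
```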

To answer the above questions, and to check whether heat could really drive the Gulf Stream, we’ll check if the flow is turbulent — it is. The measure of turbulence is the Reynolds number, Re#, the ratio of kinetic energy to viscous loss in a fluid flow. Flows are turbulent if this ratio is more than 3000, or so:

Re# = vdρ/µ.

In the above, v is velocity, say 1 m/s; d is depth, 1000 m; ρ is density, 1000 kg/m³ for water; and µ = 0.00133 Pa∙s is the viscosity of water. Plug in these numbers, and we find Re# = 750 million: this flow will be highly turbulent. Assuming a friction factor of 1/20 (0.05), we find we’d expect complete mixing every 20 depths, or 20 km. We thus need the above 0.05 m of velocity head to drive every 20 km of flow up the US coast. If the distance to the Carolina coast is 1000 km, we need (1000/20) × 0.05 m = 2.5 meters, the same order as the 1 m of head that the temperature difference provides. Temperature is thus a plausible contributor to the 0.5 m/s flow, though not likely the driver of the faster 2.5 m/s flow seen in the center of the stream. Turbulent flow is a big part of figuring the mpg of an automobile; it becomes rapidly more important at high speeds.
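Here’s the Reynolds-number and friction-head arithmetic in a short sketch; the 1/20 friction factor and the 20-depth mixing length are the same assumptions used above:

```python
# Reynolds number for the Gulf Stream, and the head consumed by turbulent friction.
v, d = 1.0, 1000.0            # speed (m/s) and depth (m)
rho, mu = 1000.0, 0.00133     # density (kg/m^3) and viscosity (Pa.s) of water
Re = v * d * rho / mu
print(f"Re = {Re:.1e}")       # about 7.5e8, far above the ~3000 turbulence threshold

h_vel = v**2 / (2 * 9.8)              # one velocity head, ~0.05 m
L_m, mix_m = 1_000_000.0, 20 * d      # 1000 km path; complete mixing every 20 depths
head_needed = (L_m / mix_m) * h_vel   # total head eaten by friction over the run
print(round(head_needed, 1))          # roughly 2.5 m, same order as the 1 m thermal head
```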

World sea salinity

World sea salinity. The maximum and minimum are in the wrong places.

What about salinity? For salinity to work, the salinity would have to be higher at the end of the flow. As a model of the flow, we might imagine that we freeze arctic seawater, and thus concentrate salt in the seawater just below the ice. The heavy, saline water would flow down to the bottom of the sea, and then flow south to an area of low salinity and low pressure. Somewhere in the south, the salinity would be reduced by rains. If evaporation were to exceed the rains, the flow would go in the other direction. Sorry to say, I see no evidence of any of this. For one, the end of the Gulf Stream is not that far north; there is no freezing. For two more problems: there are major rains in the Caribbean, and rains too in the North Atlantic. Finally, the salinity head is too small. Each unit of salinity adds about 0.0001 g/cc, and the salinity difference in this case is less than 1 unit, let’s say 0.5:

h = 0.0001 × 0.5 × 1000 = 0.05 m

I don’t see a case for northern-driven Gulf-stream flow caused by salinity.

Surface level winds in the Atlantic.

Surface level winds in the Atlantic. Trade winds in purple, 15-20 mph.

Now consider winds. The wind velocities are certainly enough to produce 5+ mile-per-hour flows, and the path of the flows is appropriate. Consider, for example, the trade winds. In the southern Caribbean, they blow steadily from east to west slightly above the equator at 15-20 mph. This could certainly drive a circulation flow of 4.5 mph north. Out of the Caribbean basin and along the eastern US coast, the winds blow at 15-50 mph toward the north and east. This too would easily drive a 4.5 mph flow. I conclude that a combination of winds and temperature are the most likely drivers of the Gulf Stream flow. To quote Holmes: once you’ve eliminated the impossible, whatever remains, however improbable, must be the truth.

Robert E. Buxbaum, March 25, 2018. I used the thermal argument above to figure out how cold it had to be to freeze the balls off of a brass monkey.

In praise of openable windows and leaky construction

It’s summer in Detroit, and in all the tall buildings the air conditioners are humming. They have to run at near-full power even on evenings and weekends, when the buildings are near empty, and on cool days. This would seem to waste a lot of power, and it does, but it’s needed for ventilation. Tall buildings are made air-tight, with windows that don’t open — without the AC, there’d be no heat leaving at all, no way for air to get in, and no way for smells to get out.

The windows don’t open because of the conceit of modern architecture; air-tight buildings are believed to be good design because they have improved air-conditioner efficiency when the buildings are full, and use less heat when the outside world is very cold. That’s perhaps 10% of the year.


Modern architecture with no openable windows. Someone wants you to suffer for his/her art.

Another reason closed buildings are popular is that they reduce the owners’ liability in terms of things flying in or falling out. Owners don’t want rain coming in, or rocks (or people) falling out. Not that windows can’t be made with small openings that angle to avoid these problems, but that’s work and money, and architects like to spend time and money only on fancy facades that look nice (and are often impractical). Besides, open windows can ruin the cool lines of their modern designs, and there’s nothing worse, to them, than a building that looks uncool, whatever the energy cost or the suffering of the inmates of their art.

Most workers find sealed buildings claustrophobic, musty, and isolating. That pain leads to lost productivity: Fast Company reported that natural ventilation can increase productivity by up to 11 percent. But, as with leading clothes stylists, leading building designers prefer uncomfortable and uneconomic to uncool. If people in the building can’t smell an ocean breeze, or can’t vent their area in a fire (or following a burnt burrito), that’s a small price to pay for art. Art is absurd, and it’s OK with the architect if fire fumes have to circulate through the entire building before they’re slowly vented. Smells add character, and the architect is gone before the stench gets really bad. 


No one dreams of working in a glass box. If it’s got to be an office, give some ventilation.

So what’s to be done? One can demand openable windows and hope the architect begrudgingly obliges. Some of the newest buildings have gone this route. A simpler, engineering option is to go for leaky construction — cracks in the masonry, windows that don’t quite seal. I’ve maintained and enlarged the gap under the doors of my laboratory buildings to increase air leakage; I like to have passive venting for toxic or flammable vapors. I’m happy to not worry about air circulation failing at the worst moment, and I’m happy to not have to ventilate at night when few people are here. To save some money, I increase the temperature range at night and weekends so that the building is allowed to get as hot as 82°F before the AC goes on, or as cold as 55°F without the heat. Folks who show up on weekends may need a sweater, but normally no one is here.

A bit of air leakage and a few openable windows won’t mess up the air-conditioning control because most heat loss is through the walls and black body radiation. And what you lose in heat infiltration you gain by being able to turn off the AC circulation system when you know there are few people in the building (It helps to have a key-entry system to tell you how many people are there) and the productivity advantage of occasional outdoor smells coming in, or nasty indoor smells going out.

One irrational fear of openable windows is that some people will not close the windows in the summer or in the dead of winter. But people are quite happy in the older skyscrapers (like the Empire State Building) built before universal AC. Most people are nice — or most people you’d want to employ are. They will respond to others’ feelings to keep everyone comfortable. If necessary, a boss or building manager may enforce this, or may have to move a particularly crusty miscreant from the window. But most people are nice, and even a degree of discomfort is worth the boost to your psyche when someone in management trusts you to control something of the building environment.

Robert E. Buxbaum, July 18, 2014. Curtains are a plus too — far better than self-darkening glass. They save energy, and let you think that management trusts you to have power over your environment. And that’s nice.

If hot air rises, why is it cold on mountain-tops?

This is a child’s question that’s rarely answered to anyone’s satisfaction. To answer it well requires college-level science, and by college the child has usually been dissuaded from asking anything scientific that would likely embarrass the teacher — which is to say, from asking most anything. By a good answer, I mean one that provides both a mathematical, checkable prediction of the temperature you’d expect to find on mountain tops, and a feel for why it should be so. I’ll try to provide this here, as previously when explaining “why is the sky blue.” A word of warning: real science involves mathematics, something that’s often left behind, perhaps in an effort to build self-esteem. If I do a poor job, please text me back: “if hot air rises, what’s keeping you down?”

As a touchy-feely answer, please note that all materials have internal energy. It’s generally associated with the kinetic plus potential energy of the molecules. It enters whenever a material is heated or has work done on it, and for gases, to good approximation, it equals the heat capacity of the gas times its temperature. For air, this is about 7 cal/mol°K times the temperature in degrees Kelvin. Average air at sea level is taken to be at 1 atm, or 101,325 Pascals, and 15.0°C, or 288.15°K; the internal energy of this air is thus 288.15 x 7 = 2017 cal/mol = 8440 J/mol. The internal energy of the air will decrease as the air rises, and the temperature drops for reasons I will explain below. Most diatomic gases have a heat capacity of 7 cal/mol°K, a fact that is only explained by quantum mechanics; if not for quantum mechanics, the heat capacities of diatomic gases would be about 9 cal/mol°K.

Let’s consider a volume of this air at this standard condition, and imagine that it is held within a weightless balloon, or plastic bag. As we pull that air up, by pulling up the bag, the bag starts to expand because the pressure is lower at high altitude (air pressure is just the weight of the air above). No heat is exchanged with the surrounding air because our air will always be about as warm as its surroundings; or, if you like, you can imagine the weightless balloon prevents it. In either case, the molecules lose energy as the bag expands because they always collide with an outwardly moving wall. Alternately, you can say that the air in the bag is doing work on the exterior air — expansion is work — but we are putting no work into the air, as it takes no work to lift this air. The buoyancy of the air in our balloon is always about that of the surrounding air, or so we’ll assume for now.

A classic, difficult way to calculate the temperature change with altitude is to calculate the work being done by the air in the rising balloon. Work done is force times distance: w = ∫f dz, and this work should equal the effective cooling, since heat and work are interchangeable. There’s an integral sign here to account for the fact that force is proportional to pressure, and the air pressure decreases as the balloon goes up. We now note that w = ∫f dz = -∫P dV, because pressure P is force per unit area, and volume V is area times distance. The minus sign is because the work is being done by the air, not on the air — it involves a loss of internal energy. Sorry to say, the temperature and pressure of the air keep changing with volume and altitude, so it’s hard to solve the integral, but there is a simple approach based on entropy, S.

Les Droites Mountain, in the Alps, at the intersect of France Italy and Switzerland is 4000 m tall. The top is generally snow-covered.


I discussed entropy last month, and showed it was a property of state, and further, that for any reversible path, ∆S= (Q/T)rev. That is, the entropy change for any reversible process equals the heat that enters divided by the temperature. Now, we expect the balloon rise is reversible, and since we’ve assumed no heat transfer, Q = 0. We thus expect that the entropy of air will be the same at all altitudes. Now entropy has two parts, a temperature part, Cp ln T2/T1 and a pressure part, R ln P2/P1. If the total ∆S=0 these two parts will exactly cancel.

Consider that at 4000m, the height of Les Droites, a mountain in the Mont Blanc range, the typical pressure is 61,660 Pa, about 60.85% of sea level pressure (101325 Pa). If the air were reduced to this pressure at constant temperature (∆S)T = -R ln P2/P1 where R is the gas constant, about 2 cal/mol°K, and P2/P1 = .6085; (∆S)T = -2 ln .6085. Since the total entropy change is zero, this part must equal Cp ln T2/T1 where Cp is the heat capacity of air at constant pressure, about 7 cal/mol°K for all diatomic gases, and T1 and T2 are the temperatures (Kelvin) of the air at sea level and 4000 m. (These equations are derived in most thermodynamics texts. The short version is that the entropy change from compression at constant T equals the work at constant temperature divided by T,  ∫P/TdV=  ∫R/V dV = R ln V2/V1= -R ln P2/P1. Similarly the entropy change at constant pressure = ∫dQ/T where dQ = Cp dT. This component of entropy is thus ∫dQ/T = Cp ∫dT/T = Cp ln T2/T1.) Setting the sum to equal zero, we can say that Cp ln T2/T1 =R ln .6085, or that 

T2 = T1 (0.6085)^(R/Cp)

T2 = T1 (0.6085)^(2/7), where 0.6085 is the pressure ratio at 4000 m, and because, for air and most diatomic gases, R/Cp = 2/7 to very good approximation, matching the prediction from quantum mechanics.

From the above, we calculate T2 = 288.15 x .8676 = 250.0°K, or -23.15 °C. This is cold enough to provide snow  on Les Droites nearly year round, and it’s pretty accurate. The typical temperature at 4000 m is 262.17 K (-11°C). That’s 26°C colder than at sea-level, and only 12°C warmer than we’d predicted.
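The whole estimate reduces to the one line T2 = T1 (P2/P1)^(R/Cp). A quick check, with the numbers from the text:

```python
# Iso-entropic temperature at altitude: T2 = T1 * (P2/P1)**(R/Cp), R/Cp = 2/7 for air.
T1 = 288.15                    # K, standard sea-level temperature
P1, P2 = 101325.0, 61660.0     # Pa, sea level and 4000 m (Les Droites)

T2 = T1 * (P2 / P1) ** (2 / 7)
print(round(T2, 1), "K")              # → 250.0 K
print(round(T2 - 273.15, 1), "degC")  # → -23.1 degC
```

Swapping in other pressure ratios (or other R/Cp exponents, for other gases) gives the temperature at other altitudes the same way.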

There are three weak assumptions behind the 12°C error in our prediction: (1) that the air that rises is no hotter than the air that does not; (2) that the air is not heated by radiation from the sun or earth; and (3) that there is no heat exchange with the surrounding air, e.g. from rain or snow formation. The last of these errors is thought to be the largest, but it’s still not large enough to cause serious problems.


Snow on Kilimanjaro, Tanzania 2013. If global warming models were true, the ground should be 4°C warmer than 100 years ago, and the air at this altitude, about 7°C (12°F) warmer; and the snow should be gone.

You can use this approach, with different exponents, to estimate the temperature at the center of Jupiter, or at the center of neutron stars. This iso-entropic calculation is the model that’s used here, though it’s understood that it may be off by a fair percentage. You can also ask questions about global warming: increased CO2 at this level is supposed to cause extreme heating at 4000 m, enough to heat the earth below by 4°C/century or more. As it happens, the temperature and snow cover on Les Droites and other Alp ski areas have been studied carefully for many decades; they are not warming, as best we can tell (here’s a discussion). By all rights, Mt Blanc should be Mt Green by now; no one knows why. The earth too seems to have stopped warming. My theory: clouds.

Robert Buxbaum, May 10, 2014. Science requires you check your theory for internal and external weakness. Here’s why the sky is blue, not green.

Entropy, the most important pattern in life

One evening at the Princeton grad college, a younger fellow (an 18-year-old genius) asked the most simple, elegant question I had ever heard, one I’ve borrowed and used ever since: “tell me”, he asked, “something that’s important and true.” My answer that evening was that the entropy of the universe is always increasing. It’s a fundamentally important pattern in life; one I didn’t discover, but that I discovered to have a lot of applications and meaning. Let me explain why it’s true, and then why I find it meaningful.

Famous entropy cartoon, Harris


The entropy of the universe is not something you can measure directly, but rather indirectly, from the availability of work in any corner of it. It’s related to randomness and the arrow of time. First off, here’s how you can tell if time is moving forward: put an ice cube into hot water; if the cube melts and the water becomes cooler, time is moving forward — or, at least, it’s moving in the same direction as you are. If you can reach into a cup of warm water and pull out an ice cube while making the water hot, time is moving backwards — or rather, you are living backwards. Within any closed system, one where you don’t add things or energy (sunlight, say), you can tell that time is moving forward because the forward progress of time always leads to a loss of work availability. In the case above, you could have generated some electricity from the ice cube and the hot water, but not from the glass of warm water.

You can not extract work from a heat source alone; to extract work some heat must be deposited in a cold sink. At best the entropy of the universe remains unchanged. More typically, it increases.


This observation is about as fundamental as any to understanding the world; it is the basis of entropy and the second law of thermodynamics: you can never extract useful work from a uniform-temperature body of water, say, just by making that water cooler. To get useful work, you always need some other transfer into or out of the system; you always need to make something else hotter or colder, or provide some chemical or altitude change that can not be reversed without adding more energy back. Thus, so long as time moves forward, everything runs down in terms of work availability.

There is also a first law; it states that energy is conserved. That is, if you want to heat some substance, that change requires that you put in a set amount of work plus heat. Similarly, if you want to cool something, a set amount of heat + work must be taken out. In equation form, we say that, for any change, q + w is constant, where q is heat, and w is work. It’s the sum that’s constant, not the individual values, so long as you count every 4.184 joules of work as if it were 1 calorie of heat. If you input more heat, you have to add less work, and vice versa, but there is always the same sum. When adding heat or work, we say that q or w is positive; when extracting heat or work, we say that q or w is negative. Still, each 4.184 joules counts as if it were 1 calorie.

Now, since for every path between two states, q + w is the same, we say that q + w represents a path-independent quantity for the system, one we call internal energy, U, where ∆U = q + w. This is a mathematical form of the first law of thermodynamics: you can’t take q + w out of nothing, or add it to something without making a change in the properties of the thing. The only way to leave things the same is if q + w = 0. We notice also that for any pure thing or mixture, the sum q + w for the change is proportional to the mass of the stuff; internal energy is thus an extensive property, and we can write ∆U = n ∆u, where n is the grams of material, and ∆u, the change in internal energy per gram, is intensive.

We are now ready to put the first and second laws together. We find we can extract work from a system if we take heat from a hot body of water and deliver some of it to something at a lower temperature (the ice-cube, say). This can be done with a thermopile, or with a steam engine (Rankine cycle, above), or a Stirling engine. That an engine can only extract work when there is a difference of temperatures is similar to the operation of a water wheel. Sadi Carnot noted that a water wheel is able to extract work only when there is a flow of water from a high level to a low one; similarly, in a heat engine, you only get work by taking in heat energy from a hot heat-source and exhausting some of it to a colder heat-sink. The remainder leaves as work. That is, q1 - q2 = w, and energy is conserved. The second law isn’t violated so long as there is no way you could run the engine without the cold sink. Accepting this as reasonable, we can now derive some very interesting, non-obvious truths.

We begin with the famous Carnot cycle. The Carnot cycle is an idealized heat engine with the interesting feature that it can be made to operate reversibly. That is, you can make it run forwards, taking a certain amount of heat from a hot source, producing a certain amount of work and delivering a certain amount of heat to the cold sink; and you can run the same process backwards, as a refrigerator, taking in the same amount of work and the same amount of heat from the cold sink, and delivering the same amount to the hot source. Carnot showed by the following proof that all other reversible engines would have the same efficiency as his cycle, and that no engine, reversible or not, could be more efficient. The proof: if an engine could be designed that would extract a greater percentage of the heat as work when operating between a given hot source and cold sink, it could be used to drive his Carnot cycle backwards. If the pair of engines were now combined so that the less efficient engine removed exactly as much heat from the sink as the more efficient engine deposited, the excess work produced by the more efficient engine would leave with no effect besides cooling the source. This combination would be in violation of the second law, something that we’d said was impossible.

Now let us try to understand the relationship that drives useful energy production. The ratio of heat in to heat out has got to be a function of the in and out temperatures alone. That is, q1/q2 = f(T1, T2). Similarly, q2/q1 = f(T2, T1). Now let’s consider what happens when two Carnot cycles are placed in series between T1 and T2, with the middle temperature at Tm. For the first engine, q1/qm = f(T1, Tm), and similarly for the second engine, qm/q2 = f(Tm, T2). Combining these, we see that q1/q2 = (q1/qm) x (qm/q2), and therefore f(T1, T2) must always equal f(T1, Tm) x f(Tm, T2) = f(T1, Tm)/f(T2, Tm). In this relationship we see that the middle temperature Tm is irrelevant; it is true for any Tm. We thus say that q1/q2 = T1/T2, and this is the limit of what you get at maximum (reversible) efficiency. You can now rearrange this to read q1/T1 = q2/T2, or to say that work, W = q1 - q2 = q2 (T1 - T2)/T2.
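The series-engine argument is easy to check numerically. Here’s a minimal sketch (Python, with arbitrary illustration temperatures of 600 K and 300 K and an arbitrary middle temperature): the work from one reversible engine between the two reservoirs equals the combined work from two engines in series, whatever Tm you pick.

```python
def carnot_work(q_in, t_hot, t_cold):
    """Reversible work from q_in (J) taken at t_hot (K), rejecting to t_cold (K)."""
    q_out = q_in * t_cold / t_hot   # from q1/T1 = q2/T2
    return q_in - q_out             # w = q1 - q2 = q2 (T1 - T2)/T2

# One engine between 600 K and 300 K ...
w_single = carnot_work(1000.0, 600.0, 300.0)

# ... versus two engines in series through a middle temperature Tm:
t_m = 450.0
w_top = carnot_work(1000.0, 600.0, t_m)
q_m = 1000.0 - w_top                # heat passed down to the second engine
w_bottom = carnot_work(q_m, t_m, 300.0)

print(w_single, w_top + w_bottom)   # the two totals agree; Tm drops out
```

Change t_m to anything between the two reservoir temperatures and the totals still match — the numerical version of saying f(T1, T2) can’t depend on Tm.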

A strange result from this is that, since every process can be modeled as either a sum of Carnot engines, or of engines that are less efficient, and since the Carnot engine will produce this same amount of reversible work when filled with any substance or combination of substances, we can say that this outcome, q1/T1 = q2/T2, is independent of path, and independent of substance, so long as the process is reversible. We can thus say that for all substances there is a property of state, S, such that the change in this property is ∆S = ∑q/T for all the heat in or out. In a more general sense, ∆S = ∫dq/T, where this state property, S, is called the entropy. Since, as before, the amount of heat needed is proportional to mass, S is an extensive property; S = n s, where n is the mass of stuff, and s, the entropy per mass, is intensive.

Another strange result comes from the efficiency equation. Since, for any engine or process that is less efficient than the reversible one, we get less work out for the same amount of q1, we must have more heat rejected than the reversible q2. Thus, for an irreversible engine or process, q1 - q2 < q1(T1 - T2)/T1, and q2/T2 is greater than q1/T1. As a result, the total change in entropy, ∆S = q2/T2 - q1/T1 > 0: the entropy of the universe always goes up or stays constant. It never goes down. A final observation is that there must be a zero of temperature that nothing can go below, or both q1 and q2 could be positive and energy would not be conserved. Our observations of time and energy conservation thus lead us to expect a minimum temperature, T = 0, that nothing can be colder than. We find this temperature at -273.15°C. It is called absolute zero; nothing has ever been cooled to be colder than this, and now we see that, so long as time moves forward and energy is conserved, nothing ever will be found colder.

Typically we set S to zero either at absolute zero or at room temperature.

We’re nearly there. We can define the entropy of the universe as the sum of the entropies of everything in it. From the above treatment of work cycles, we see that this total of entropy always goes up, never down. A fundamental fact of nature, and (in my world view) a fundamental view into how God views us and the universe. First, that the entropy of the universe goes up only, and not down (in our time-forward framework) suggests there is a creator for our universe — a source of negative entropy at the start of all things, or a reverser of time (it’s the same thing in our framework). Another observation, God likes entropy a lot, and that means randomness. It’s his working principle, it seems.

But before you take me now for a total libertine and say that, since science shows that everything runs down, the only moral take-home is to teach: “Let us eat and drink,”… “for tomorrow we die!” (Isaiah 22:13), I should note that this randomness only applies to the universe as a whole. The individual parts (planets, laboratories, beakers of coffee) do not maximize entropy, but instead minimize available work, and this is different. You can show that the maximization of S, the entropy of the universe, does not lead to the maximization of s, the entropy per gram of your particular closed space, but rather to the minimization of a related quantity, µ, the free energy, or usable work per gram of your stuff. You can show that, for any closed system at constant temperature, µ = h - Ts, where s is entropy per gram as before, and h is called enthalpy. h is basically the potential energy of the molecules; it is lowest at low temperature and high order. For a closed system we find there is a balance between s, something that increases with increased randomness, and h, something that decreases with increased randomness. Put water and air in a bottle, and you find that the water is mostly on the bottom of the bottle, the air is mostly on the top, and the amount of mixing in each phase is not the maximum disorder, but rather the one you’d calculate will minimize µ.

As a protein folds, its randomness and entropy decrease, but its enthalpy decreases too; the net effect is one precise fold that minimizes µ.


This is the principle that God applies to everything, including us, I’d guess: a balance. Take protein folding; some patterns have big disorder, and high h; some have low disorder and very low h. The result is a temperature-dependent balance. If I were to take a moral imperative from this balance, I’d say it matches better with the sayings of Solomon the wise: “there is nothing better for a person under the sun than to eat, drink and be merry. Then joy will accompany them in their toil all the days of the life God has given them under the sun.” (Ecclesiastes 8:15). There is toil here as well as pleasure; directed activity balanced against personal pleasures. This is the µ = h - Ts minimization where, perhaps, T is economic wealth. Thus, the richer a society, the less toil is ideal and the more freedom. Of necessity, poor societies are repressive.

Dr. Robert E. Buxbaum, Mar 18, 2014. My previous thermodynamic post concerned the thermodynamics of hydrogen production. It’s not clear that all matter goes forward in time, by the way; antimatter may go backwards, so it’s possible that antimatter apples may fall up. On the microscopic scale, time becomes flexible, so it seems you can make a time machine. Religious leaders tend to be anti-science, I’ve noticed, perhaps because scientific miracles can be done by anyone, available even to those who think “wrong,” or say the wrong words. And that’s that, all being heard: do what’s right and enjoy life too; as important a pattern in life as you’ll find, I think. The relationship between free-energy and societal organization is from my thesis advisor, Dr. Ernest F. Johnson.

Fractal power laws and radioactive waste decay

Here’s a fairly simple model for nuclear reactor decay heat versus time. It’s based on a fractal model I came up with for dealing with the statistics of crime, fires, etc. The start was to notice that radioactive waste is typically a mixture of isotopes with different decay times and different decay heats. I then came to suspect that there would be a general fractal relation, and that the fractal relation would hold throughout, as the elements of the mixed waste decayed to more stable, less radioactive products. After looking a bit, it seems that the fractal time characteristic is time to the 1/4 power, that is,

heat output = H° exp (-a t^1/4).

Here H° is the heat output rate at some time =0 and “a” is a characteristic of the waste. Different waste mixes will have different values of this decay characteristic.

If nuclear waste consisted of one isotope and one decay path, the number of atoms decaying per day would decrease exponentially with time to the power of 1. If there were only one daughter product produced, and it were non-radioactive, the heat output of a sample would also decay with time to the power of 1. Thus, heat output would equal H° exp (-at), and a plot of the log of the decay heat would be linear against linear time — you could plot it all conveniently on semi-log paper.

But nuclear waste generally consists of many radioactive components with different half-lives, and these components decay into other radioactive isotopes, all of whom have half-lives that vary by quite a lot. The result is that a semi-log plot is rarely helpful. Some people therefore plot radioactivity on a log-log plot, typically including a curve for each major isotope and decay mode. I find these plots hardly useful. They are certainly impossible to extrapolate. What I’d like to propose instead is a fractal variation of the original semi-log plot: a plot of the log of the heat rate against a fractal time. As shown below, the use of time to the 1/4 power seems to be helpful. The plot is similar to a fractal decay model that I’d developed for crimes and fires a few weeks ago.

After-heat of nuclear fuel rods used to generate 20 kW/kg U; top graph 35 MW-days/kg U; bottom graph 20 MW-days/kg U. Data from US NRC Regulatory Guide 3.54, Spent Fuel Heat Generation in an Independent Spent Fuel Storage Installation, rev. 1, 1999, http://www.nrc.gov/reading-rm/doc-collections/reg-guides/fuels-materials/rg/03-054/. A typical reactor has 200,000 kg of uranium.

A plausible justification for this fractal semi-log plot is to observe that the half-life of each daughter isotope relates to that of its parent isotope. Unless I find that someone else has come up with this sort of plot or analysis before, I’ll call it after myself: a Buxbaum-Mandelbrot plot. Why not?
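To illustrate the proposed plot, here is a short sketch (Python) with made-up values of H° and a — not fitted to the NRC data. For heat output of the form H° exp(-a t^1/4), the log of the heat rate falls on one straight line of slope -a when plotted against t to the 1/4 power, which is what makes the plot extrapolatable.

```python
import math

# Hypothetical illustration values, not fitted data:
H0 = 1000.0   # W, heat output rate at t = 0
a = 2.0       # 1/day^(1/4), decay characteristic of the waste mix

def decay_heat(t_days):
    """Fractal decay-heat model: H = H0 * exp(-a * t^(1/4))."""
    return H0 * math.exp(-a * t_days ** 0.25)

# On (t^(1/4), ln H) axes, every point falls on one line of slope -a:
for t in (1.0, 16.0, 81.0, 625.0):
    x = t ** 0.25
    print(f"t = {t:6.0f} d   t^1/4 = {x:4.1f}   ln(H) = {math.log(decay_heat(t)):6.2f}")
```

Fitting real spent-fuel data would mean reading H° off the intercept and a off the slope of exactly this kind of plot.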

Nuclear power is attractive because it is a lot more energy dense than any normal fuel. Still, the graph at right illustrates the problem of radioactive waste. With nuclear, you generate about 35 MW-days of power per kg of uranium. This is enough to power an average US home for 8 years, but it produces 1 kg of radioactive waste. Even after 81 years, the waste is generating about 1/2 W of decay heat. It should be easier to handle and store the 1 kg of spent uranium than to deal with the many tons of coal-smoke produced when 35 MW-days of electricity is made from coal; still, there is reason to worry about the decay heat.

I’ve made a similar plot of the decay heat of a fusion reactor; see below. Fusion looks better in this regard. A fission-based nuclear reactor big enough to power 1/2 of Detroit would hold some 200,000 kg of uranium that would be replaced every 5 years. Even 81 years after removal, the after-heat would be about 100 kW, and that’s a lot.

After-heat of a 4000 MWth fusion reactor built from niobium-1%zirconium; from the UWMAC III Report. Nb-1%Zr is a fairly common high-temperature engineering material of construction. The after-heat is far less than with uranium fission.

The plot of the after-heat of a similar-power fusion reactor (above) shows a far greater slope, but the same time to the 1/4 power dependence. The heat output drops from 1 MW at 3 weeks to only 100 W after 1 year, and far less than 1 W after 81 years. Nuclear fusion is still a few years off, but the plot shows the advantages fairly clearly, I think.

This plot was really designed to look at the statistics of crime, fires, and the need for servers / checkout people.

Dr. R.E. Buxbaum, January 2, 2014, edited Aug 30, 2022. *A final, final thought about theory from Yogi Berra: “In theory, it matches reality.”

Paint your factory roof white

Standing on the flat roof of my lab / factory building, I notice that virtually all of my neighbors’ roofs are black, covered by tar or bitumen. My roof was black too until three weeks ago; the roof was too hot to touch when I’d gone up to patch a leak. That’s not quite egg-frying hot, but I came to believe my repair would last longer if the roof stayed cooler. So, after sealing the leak with tar and bitumen, we added an aluminized over-layer from Ace hardware. The roof is cooler now than before, and I notice a major drop in air conditioner load and use.

My analysis of our roof coating follows; it’s for Detroit, but you can modify it for your location. Sunlight hits the earth carrying 1300 W/m2. Some 300W/m2 scatters as blue light (for why so much scatters, and why the sky is blue, see here). The rest, 1000 W/m2 or 308 Btu/ft2hr, comes through or reflects off clouds on a cloudy day and hits buildings at an angle determined by latitude, time of day, and season of the year.

Detroit is at 42° North latitude so my roof shows an angle of 42° to the sun at noon in mid spring. In summer, the angle is 20°, and in winter about 63°. The sun sinks lower on the horizon through the day, e.g. at two hours before or after noon in mid spring the angle is 51°. On a clear day, with a perfectly black roof, the heating is 308 Btu/ft2hr times the cosine of the angle.

To calculate our average roof heating, I integrated this heat over the full day’s angles using Euler’s method, and included the scatter from clouds plus an absorption factor for the blackness of the roof. The figure below shows the cloud cover for Detroit.
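For those who’d like to repeat the integration, here is a minimal sketch of the Euler-method sum (Python). It uses the standard solar zenith-angle formula with the 308 Btu/ft2hr figure and absorption factors from the text, but ignores clouds and atmospheric path length, so its outputs are clear-sky illustrations rather than my full calculation.

```python
import math

DIRECT_BEAM = 308.0             # Btu/ft2·hr hitting a surface face-on
LATITUDE = math.radians(42.0)   # Detroit

def avg_roof_heating(declination_deg, absorptivity, steps_per_day=1440):
    """Clear-sky absorbed flux on a flat roof, averaged over 24 h,
    summed by the rectangle (Euler) rule in one-minute steps."""
    dec = math.radians(declination_deg)
    total = 0.0
    for i in range(steps_per_day):
        # Hour angle sweeps -180°..+180° over the day (0° = solar noon)
        hour_angle = math.radians((i / steps_per_day) * 360.0 - 180.0)
        cos_zenith = (math.sin(LATITUDE) * math.sin(dec)
                      + math.cos(LATITUDE) * math.cos(dec) * math.cos(hour_angle))
        if cos_zenith > 0:      # sun above the horizon
            total += DIRECT_BEAM * absorptivity * cos_zenith
    return total / steps_per_day   # average over the full 24 hours

# Mid-spring (declination 0°): black tar roof vs. aluminized coat
print(avg_roof_heating(0.0, 0.9), avg_roof_heating(0.0, 0.2))
```

Swap in the summer or winter declination (about +23.5° or -23.5°) and multiply by a cloud factor to get seasonal averages like the ones quoted below.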

Average cloud cover for Detroit, month by month; the black line is the median cloud cover. On January 1, it is strongly overcast 60% of the time, and hardly ever clear; the median is about 98%. From http://weatherspark.com/averages/30042/Detroit-Michigan-United-States

Based on this, and an assumed light absorption factor of σ = 0.9 for tar and σ = 0.2 after aluminizing, I calculate an average of 105 Btu/ft2hr heating during the summer for the original black roof, and 23 Btu/ft2hr after aluminizing. Our roof is still warm, but it’s no longer hot. While most of the absorbed heat leaves the roof by black-body radiation or convection, enough enters my lab through 6″ of insulation to cause me to use a lot of air conditioning. I calculate the heat entering this way from the roof temperature. In the summer, an aluminum coat is a clear winner.

High and Low Temperatures for Detroit, Month by Month. From http://weatherspark.com/averages/30042/Detroit-Michigan-United-States

Detroit has a cold winter too, and these are months where I’d benefit from solar heat. I find it’s so cloudy in winter that, even with a black roof, I got less than 5 Btu/ft2hr. Aluminizing reduced this heat to 1.2 Btu/ft2hr, but it also reduces the black-body radiation leaving at night. I should find that I use less heat in winter, but perhaps more in late spring and early fall. I won’t know the details till next year, but that’s the calculation.

The REB Research laboratory is located at 12851 Capital St., Oak Park, MI 48237. We specialize in hydrogen separations and membrane reactors. By Dr. Robert Buxbaum, June 16, 2013

What’s the quality of your home insulation

By Dr. Robert E. Buxbaum, June 3, 2013

It’s common to have companies call during dinner offering to blow extra insulation into the walls and attic of your home. Those who’ve added this insulation find a small decrease in their heating and cooling bills, but generally wonder if they got their money’s worth, or perhaps if they need yet-more insulation to get the full benefit. Here’s a simple approach to comparing your home heat bill to the ideal your home can reasonably reach.

The rate of heat transfer through a wall, Qw, is proportional to the temperature difference, ∆T, to the area, A, and to the average thermal conductivity of the wall, k; it is inversely proportional to the wall thickness, ∂;

Qw = ∆T A k /∂.

For home insulation, we re-write this as Qw = ∆T A/Rw, where Rw = ∂/k is the thermal resistance of the wall, measured (in the US) in °F·ft2·hr/BTU.

Let’s assume that your home’s outer wall is nominally 6″ thick (0.5 foot). With the best available insulation, perfectly applied, the heat loss will be somewhat higher than if the space were filled with still air, k = 0.024 BTU/ft·hr·°F, a result based on molecular dynamics. For a 6″ wall, the R value will thus always be less than 0.5/0.024 = 20.8 °F·ft2·hr/BTU. It will be much less if there are holes or air infiltration, but for practical construction with joists and sills, an Rw value of 15 or 16 is probably about as good as you’ll get with 6″ walls.

To show you how to evaluate your home, I’ll now calculate the R value of my walls based on the size of my ranch-style home (in Michigan) and our heat bills. I’ll first do this in a simplified calculation, ignoring windows, and will then repeat the calculation including the windows. Windows turn out to be very important; I strongly suggest window curtains to save heat and air conditioning.

The outer wall of my home is 190 feet long, and extends about 11 feet above ground to the roof. Multiplying these dimensions gives an outer wall area of 2090 ft2. I could now add the roof area, 1750 ft2 (it’s the same as the area of the house), but since the roof is more heavily insulated than the walls, I’ll estimate that it behaves like 1410 ft2 of normal wall. I thus calculate 3500 ft2 of effective above-ground area for heat loss. This is the area that companies keep offering to insulate.

Between December 2011 and February 2012, our home was about 72°F inside, and the outside temperature was about 28°F. Thus, the average temperature difference between the inside and outside was about 45°F; I estimate the rate of heat loss from the above-ground part of my house, Qu = 3500 * 45/R = 157,500/Rw.

Our house has a basement too, something that no one has yet offered to insulate. While the below-ground temperature gradient is smaller, it’s less-well insulated. Our basement walls are cinderblock covered with 2″ of styrofoam plus wall-board. Our basement floor is even less well insulated: it’s just cement poured on pea-gravel. I estimate the below-ground R value is no more than 1/2 of whatever the above ground value is; thus, for calculating QB, I’ll assume a resistance of Rw/2.

The below-ground area equals the square footage of our house, 1750 ft2 but the walls extend down only about 5 feet below ground. The basement walls are thus 950 ft2 in area (5 x 190 = 950). Adding the 1750 ft2 floor area, we find a total below-ground area of 2700 ft2.

The temperature difference between the basement and the wet dirt is only about 25°F in the winter. Assuming the thermal resistance is Rw/2, I estimate the rate of heat loss from the basement, QB = 2700*25*(2/Rw) = 135,000/Rw. It appears that nearly as much heat leaves through the basement as above ground!

Between December and February 2012, our home used an average of 597 cubic feet of gas per day, or 25,497 BTU/hour (heat value = 1025 BTU/ft3). QU + QB = 292,500/Rw. Ignoring windows, I estimate the Rw of my home = 292,500/25,497 = 11.47.

We now add the windows. Our house has 230 ft2 of windows, most covered by curtains and/or plastic. Because of the curtains and plastic, they would have an R value of 3, except that black-body radiation tends to be very significant. I estimate our windows have an R value of 1.5; the heat loss through the windows is thus QW = 230*45/1.5 = 6,900 BTU/hr, about 27% of the total. The R value for our walls is now re-estimated to be 292,500/(25,497 - 6,900) = 15.7; this is about as good as I can expect given the fixed thickness of our walls and the fact that I can not easily get an insulation conductivity lower than that of still air. I thus find that there will be little or no benefit to adding more above-ground wall insulation to my house.
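The whole estimate can be redone in a few lines. Here is a sketch (Python) using only the areas, temperature differences, window numbers, and gas usage quoted above; the R values come out the other end, so you can substitute your own home’s numbers.

```python
# Numbers quoted in the text above:
GAS_HEAT_VALUE = 1025.0                          # BTU/ft3
gas_btu_per_hr = 597.0 * GAS_HEAT_VALUE / 24.0   # 597 ft3 of gas per day

area_above, dT_above = 3500.0, 45.0   # ft2, °F: walls plus roof-equivalent
area_below, dT_below = 2700.0, 25.0   # ft2, °F: basement, assumed Rw/2

# Q_total = area_above*dT_above/Rw + area_below*dT_below/(Rw/2) = numerator/Rw
numerator = area_above * dT_above + 2.0 * area_below * dT_below  # 292,500

# Ignoring windows:
rw_simple = numerator / gas_btu_per_hr

# Subtracting the estimated window loss (230 ft2 of windows at R = 1.5):
q_windows = 230.0 * 45.0 / 1.5
rw_with_windows = numerator / (gas_btu_per_hr - q_windows)

print(f"Rw ignoring windows: {rw_simple:.1f}; counting windows: {rw_with_windows:.1f}")
```

Running it reproduces the two estimates in the text: about 11.5 ignoring windows, and about 15.7 once the window loss is taken out of the gas bill.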

To save heat energy, I might want to coat our windows in partially reflective plastic or draw the curtains to follow the sun. Also, since nearly half the heat left from the basement, I may want to lay a thicker carpet, or lay a reflective under-layer (a space blanket) beneath the carpet.

To improve on the above estimate, I could consider our furnace efficiency; it is perhaps only 85-90% efficient, with still-warm air leaving up the chimney. There is also some heat lost through the door being opened, and through hot water being poured down the drain. As a first guess, these heat losses are balanced by the heat added by electric usage, by the body-heat of people in the house, and by solar radiation that entered through the windows (not much for Michigan in winter). I still see no reason to add more above-ground insulation. Now that I’ve analyzed my home, it’s time for you to analyze yours.

Most Heat Loss Is Black-Body Radiation

In a previous post I used statistical mechanics to show how you’d calculate the thermal conductivity of any gas and showed why the insulating power of the best normal insulating materials was usually identical to ambient air. That analysis only considered the motion of molecules and not of photons (black-body radiation) and thus under-predicted heat transfer in most circumstances. Though black body radiation is often ignored in chemical engineering calculations, it is often the major heat transfer mechanism, even at modest temperatures.

One can show from quantum mechanics that the radiative heat transfer between two surfaces of temperature T and To is proportional to the difference of the fourth power of the two temperatures in absolute (Kelvin) scale.

Heat transfer rate = P = A ε σ( T^4 – To^4).

Here, A is the area of the surfaces, σ is the Stefan–Boltzmann constant, and ε is the surface emissivity, a number that is 1 for most non-metals and 0.3 for stainless steel. For A measured in m2, σ = 5.67×10−8 W m−2 K−4.

Infrared picture of a fellow wearing a black plastic bag on his arm. The bag is nearly transparent to heat radiation, while his eyeglasses are opaque. His hair provides some insulation.

Unlike with conduction, heat transfer does not depend on the distances between the surfaces but only on the temperature and the infra-red (IR) reflectivity. This is different from normal reflectivity as seen in the below infra-red photo of a lightly dressed person standing in a normal room. The fellow has a black plastic bag on his arm, but you can hardly see it here, as it hardly affects heat loss. His clothes, don’t do much either, but his hair and eyeglasses are reasonably effective blocks to radiative heat loss.

As an illustrative example, let’s calculate the radiative and conductive heat transfer rates of the person in the picture, assuming he has 2 m2 of surface area, an emissivity of 1, and a body-and-clothes temperature of about 86°F; that is, his skin/clothes temperature is 30°C, or 303 K absolute. If this person stands in a room at 71.6°F (295 K), the radiative heat loss is calculated from the equation above: 2 × 1 × 5.67×10−8 × (8.43×109 - 7.57×109) = 97.5 W. This is 23.3 cal/second, or 84.1 Cal/hr, or 2020 Cal/day; this is nearly the expected basal calorie use of a person this size.

The conductive heat loss is typically much smaller. As discussed previously in my analysis of curtains, the rate is inversely proportional to the heat transfer distance and proportional to the temperature difference. For the fellow in the picture, assuming he’s standing in relatively stagnant air, the heat boundary layer thickness will be about 2 cm (0.02 m). Multiplying the thermal conductivity of air, 0.024 W/mK, by the surface area and the temperature difference, and dividing by the boundary layer thickness, we find a heat loss of 2 × 0.024 × (30-22)/0.02 = 19.2 W. This is 16.5 Cal/hr, or 397 Cal/day: about 20% of the radiative heat loss, suggesting that some 5/6 of a sedentary person’s heat transfer is by black body radiation.
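Here are the same two estimates as a short calculation (Python), using the numbers above; it makes it easy to try other skin temperatures, emissivities, or boundary-layer thicknesses.

```python
SIGMA = 5.67e-8   # W/(m2·K4), Stefan-Boltzmann constant
K_AIR = 0.024     # W/(m·K), thermal conductivity of still air

area = 2.0                       # m2 of skin and clothes
emissivity = 1.0                 # typical of non-metals, including skin/cloth
t_skin, t_room = 303.0, 295.0    # K (30 °C body surface, 22 °C room)

# Black-body radiation: P = A·ε·σ·(T^4 - To^4)
q_radiative = area * emissivity * SIGMA * (t_skin**4 - t_room**4)

# Conduction through a ~2 cm stagnant-air boundary layer: Q = k·A·ΔT/∂
q_conductive = area * K_AIR * (t_skin - t_room) / 0.02

frac_radiative = q_radiative / (q_radiative + q_conductive)
print(f"radiation: {q_radiative:.0f} W, conduction: {q_conductive:.1f} W, "
      f"radiative fraction: {frac_radiative:.2f}")
```

The radiative fraction comes out near 5/6, matching the estimate in the text; dropping the emissivity toward the 0.3 of stainless steel shows why reflective surfaces shed so much less heat.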

We can expect that black-body radiation dominates conduction when looking at heat-shedding losses from hot chemical equipment because this equipment is typically much warmer than a human body. We’ve found, with our hydrogen purifiers for example, that it is critically important to choose a thermal insulation that is opaque or reflective to black body radiation. We use an infra-red opaque ceramic wrapped with aluminum foil to provide more insulation to a hot pipe than many inches of ceramic could. Aluminum has a far lower emissivity than the nonreflective surfaces of ceramic, and gold has an even lower emissivity at most temperatures.

Many popular insulation materials are not black-body opaque, and most hot surfaces are not reflectively coated. Because of this, you can find that the heat loss rate goes up as you add too much insulation. After a point, the extra insulation increases the surface area for radiation while barely reducing the surface temperature; it starts to act like a heat fin. While the space-shuttle tiles are fairly mediocre in terms of conduction, they are excellent in terms of black-body radiation.

There are applications where you want to increase heat transfer without having to resort to direct contact with corrosive chemicals or heat-transfer fluids. Often black body radiation can be used. As an example, heat transfers quite well from a cartridge heater or band heater to a piece of equipment even if they do not fit particularly tightly, especially if the outer surfaces are coated with black oxide. Black body radiation works well with stainless steel and most liquids, but most gases are nearly transparent to black body radiation. For heat transfer to most gases, it’s usually necessary to make use of turbulence or better yet, chaos.

Robert Buxbaum