Tag Archives: heat

In praise of openable windows and leaky construction

It’s summer in Detroit, and in all the tall buildings the air conditioners are humming. They have to run at near-full power even on evenings and weekends, when the buildings are near empty, and even on cool days. This wastes a lot of power, but it’s needed for ventilation. Tall buildings are made air-tight, with windows that don’t open — without the AC there’d be no way for heat to leave, no way for air to get in, and no way for smells to get out.

The windows don’t open because of the conceits of modern architecture: air-tight buildings are believed to be good design because the air conditioner runs more efficiently when the building is full, and less heat is lost when the outside world is very cold. That’s perhaps 10% of the year.

Modern architecture with no openable windows. Someone wants you to suffer for his/her art.

Another reason closed buildings are popular is that they reduce the owners’ liability for things flying in or falling out. Owners don’t want rain coming in, or rocks (or people) falling out. Not that windows can’t be made with small openings that angle to avoid these problems, but that’s work and money, and architects like to spend time and money only on fancy facades that look nice (and are often impractical). Besides, open windows can ruin the cool lines of a modern design, and to an architect there’s nothing worse than a building that looks uncool, whatever the energy cost or the suffering of the inmates of his art.

Most workers find sealed buildings claustrophobic, musty, and isolating. That pain leads to lost productivity: Fast Company reported that natural ventilation can increase productivity by up to 11 percent. But, as with leading clothes stylists, leading building designers prefer uncomfortable and uneconomic to uncool. If people in the building can’t smell an ocean breeze, or can’t vent their area in a fire (or following a burnt burrito), that’s a small price to pay for art. Art is absurd, and it’s OK with the architect if fire fumes have to circulate through the entire building before they’re slowly vented. Smells add character, and the architect is gone before the stench gets really bad. 

No one dreams of working in a glass box. If it’s got to be an office, give some ventilation.

So what’s to be done? One can demand openable windows and hope the architect begrudgingly obliges. Some of the newest buildings have gone this route. A simpler, engineering option is to go for leaky construction — cracks in the masonry, windows that don’t quite seal. I’ve maintained and enlarged the gap under the doors of my laboratory buildings to increase air leakage; I like to have passive venting for toxic or flammable vapors. I’m happy to not worry about air circulation failing at the worst moment, and I’m happy to not have to ventilate at night when few people are here. To save some money, I increase the temperature range at night and on weekends, so that the building is allowed to get as hot as 82°F before the AC goes on, or as cold as 55°F before the heat does. Folks who show up on weekends may need a sweater, but normally no one is here.

A bit of air leakage and a few openable windows won’t mess up the air-conditioning control, because most heat loss is through the walls and by black-body radiation. And what you lose in heat infiltration you gain by being able to turn off the AC circulation system when you know there are few people in the building (it helps to have a key-entry system to tell you how many people are there), and by the productivity advantage of occasional outdoor smells coming in, or nasty indoor smells going out.

One irrational fear of openable windows is that some people will not close the windows in the summer or in the dead of winter. But people are quite happy in the older skyscrapers (like the Empire State Building) built before universal AC. Most people are nice — or most people you’d want to employ are. They will respond to others’ feelings to keep everyone comfortable. If necessary, a boss or building manager may enforce this, or may have to move a particularly crusty miscreant away from the window. But most people are nice, and even a degree of discomfort is worth the boost to your psyche when someone in management trusts you to control something of the building environment.

Robert E. Buxbaum, July 18, 2014. Curtains are a plus too — far better than self-darkening glass. They save energy, and let you think that management trusts you to have power over your environment. And that’s nice.

If hot air rises, why is it cold on mountain-tops?

This is a child’s question that’s rarely answered to anyone’s satisfaction. To answer it well requires college-level science, and by college the child has usually been dissuaded from asking anything scientific that might embarrass the teacher — which is to say, from asking most anything. By a good answer, I mean one that provides both a mathematical, checkable prediction of the temperature you’d expect to find on mountain tops, and a feel for why it should be so. I’ll try to provide both here, as I did previously when explaining “why is the sky blue.” A word of warning: real science involves mathematics, something that’s often left behind, perhaps in an effort to build self-esteem. If I do a poor job, please text me back: “if hot air rises, what’s keeping you down?”

As a touchy-feely answer, please note that all materials have internal energy, generally associated with the kinetic plus potential energy of the molecules. It enters whenever a material is heated or has work done on it, and for gases, to a good approximation, it equals the heat capacity of the gas times its temperature. For air, this is about 7 cal/mol°K times the temperature in degrees Kelvin. Average air at sea level is taken to be at 1 atm, or 101,325 Pascals, and 15.0°C, or 288.15°K; the internal energy of this air is thus 288.15 x 7 = 2017 cal/mol = 8440 J/mol. The internal energy of the air will decrease as the air rises, and the temperature drops, for reasons I will explain below. Most diatomic gases have a heat capacity of 7 cal/mol°K, a fact that is only explained by quantum mechanics; if not for quantum mechanics, the heat capacity of diatomic gases would be about 9 cal/mol°K.
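As a quick check of the arithmetic above, here is the internal-energy estimate in a few lines of Python; it is just the U ≈ Cp·T approximation stated in the text, with the text’s values:

```python
# Internal energy of air at standard sea-level conditions, using the
# approximation in the text: U ≈ Cp·T, with Cp = 7 cal/mol·K for a
# diatomic gas.

CP_AIR = 7.0           # heat capacity of a diatomic gas, cal/(mol·K)
CAL_TO_J = 4.184       # joules per calorie
T_SEA_LEVEL = 288.15   # K, i.e. 15.0 °C

u_cal = CP_AIR * T_SEA_LEVEL   # internal energy, cal/mol
u_joule = u_cal * CAL_TO_J     # the same, in J/mol

print(f"{u_cal:.0f} cal/mol = {u_joule:.0f} J/mol")  # ≈ 2017 cal/mol
```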

Let’s consider a volume of this air at standard conditions, and imagine that it is held within a weightless balloon, or plastic bag. As we pull that air up, by pulling up the bag, the bag starts to expand because the pressure is lower at high altitude (air pressure is just the weight of the air above). No heat is exchanged with the surrounding air because our air will always be about as warm as its surroundings; or, if you like, you can imagine the weightless balloon prevents it. In either case, the molecules lose energy as the bag expands because they always collide with an outwardly moving wall. Alternately, you can say that the air in the bag is doing work on the exterior air — expansion is work — but we are putting no work into the air, as it takes no work to lift this air. The buoyancy of the air in our balloon is always about that of the surrounding air, or so we’ll assume for now.

A classic, difficult way to calculate the temperature change with altitude is to calculate the work done by the air in the rising balloon. Work done is force times distance: w = ∫f dz, and this work should equal the effective cooling, since heat and work are interchangeable. There’s an integral sign here to account for the fact that force is proportional to pressure, and the air pressure decreases as the balloon goes up. We now note that w = ∫f dz = –∫P dV, because pressure, P, is force per unit area, and volume, V, is area times distance. The minus sign is because the work is being done by the air, not done on the air — it involves a loss of internal energy. Sorry to say, the temperature and pressure of the air keep changing with volume and altitude, so it’s hard to solve the integral, but there is a simple approach based on entropy, S.

Les Droites Mountain, in the Alps, at the intersection of France, Italy, and Switzerland, is 4000 m tall. The top is generally snow-covered.

I discussed entropy last month, and showed it is a property of state and, further, that for any reversible path, ∆S = (Q/T)rev. That is, the entropy change for any reversible process equals the heat that enters divided by the temperature. Now, we expect the balloon’s rise to be reversible, and since we’ve assumed no heat transfer, Q = 0. We thus expect the entropy of the air to be the same at all altitudes. Entropy has two parts, a temperature part, Cp ln(T2/T1), and a pressure part, –R ln(P2/P1). If the total ∆S = 0, these two parts exactly cancel.

Consider that at 4000 m, the height of Les Droites, a mountain in the Mont Blanc range, the typical pressure is 61,660 Pa, about 60.85% of sea-level pressure (101,325 Pa). If the air were reduced to this pressure at constant temperature, (∆S)T = –R ln(P2/P1), where R is the gas constant, about 2 cal/mol°K, and P2/P1 = 0.6085; thus (∆S)T = –2 ln 0.6085. Since the total entropy change is zero, this part must cancel Cp ln(T2/T1), where Cp is the heat capacity of air at constant pressure, about 7 cal/mol°K for all diatomic gases, and T1 and T2 are the temperatures (Kelvin) of the air at sea level and 4000 m. (These equations are derived in most thermodynamics texts. The short version is that the entropy change from compression at constant T equals the work at constant temperature divided by T: ∫(P/T)dV = ∫(R/V)dV = R ln(V2/V1) = –R ln(P2/P1). Similarly, the entropy change at constant pressure is ∫dQ/T, where dQ = Cp dT; this component of entropy is thus ∫dQ/T = Cp ∫dT/T = Cp ln(T2/T1).) Setting the sum to zero, we can say that Cp ln(T2/T1) = R ln 0.6085, or that

T2 = T1 (0.6085)^(R/Cp)

T2 = T1 (0.6085)^(2/7), where 0.6085 is the pressure ratio at 4000 m, and because, for air and most diatomic gases, R/Cp = 2/7 to very good approximation, matching the prediction from quantum mechanics.

From the above, we calculate T2 = 288.15 x 0.8676 = 250.0°K, or -23.15°C. This is cold enough to keep snow on Les Droites nearly year round, and it’s pretty accurate. The typical temperature at 4000 m is 262.17 K (-11°C). That’s 26°C colder than at sea level, and only 12°C warmer than we’d predicted.
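The whole calculation above can be sketched in a few lines of Python; the function simply evaluates T2 = T1 (P2/P1)^(R/Cp) with the sea-level and 4000 m values from the text:

```python
# Isentropic temperature at altitude: T2 = T1 * (P2/P1)**(R/Cp),
# with R/Cp = 2/7 for air and most diatomic gases.

def temperature_at_pressure(T1, P1, P2, r_over_cp=2.0 / 7.0):
    """Temperature after a reversible, adiabatic pressure change P1 -> P2."""
    return T1 * (P2 / P1) ** r_over_cp

# sea level: 288.15 K, 101,325 Pa; Les Droites (4000 m): 61,660 Pa
T_top = temperature_at_pressure(288.15, 101325.0, 61660.0)
print(f"{T_top:.1f} K = {T_top - 273.15:.1f} C")  # ≈ 250.0 K, about -23 °C
```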

There are three weak assumptions behind the 12°C error in our prediction: (1) that the air that rises is no hotter than the air that does not, (2) that the air is not heated by radiation from the sun or earth, and (3) that there is no heat exchange with the surrounding air, e.g. from rain or snow formation. The last of these errors is thought to be the largest, but it’s still not large enough to cause serious problems.

Snow on Kilimanjaro, Tanzania 2013. If global warming models were true, the ground should be 4°C warmer than 100 years ago, and the air at this altitude, about 7°C (12°F) warmer; and the snow should be gone.

You can use this approach, with different exponents, to estimate the temperature at the center of Jupiter, or at the center of neutron stars. This iso-entropic calculation is the model that’s used here, though it’s understood it may be off by a fair percentage. You can also ask questions about global warming: increased CO2 at this level is supposed to cause extreme heating at 4000 m, enough to heat the earth below by 4°C per century or more. As it happens, the temperature and snow cover on Les Droites and other Alpine ski areas have been studied carefully for many decades; they are not warming as best we can tell (here’s a discussion). By all rights, Mt Blanc should be Mt Green by now; no one knows why. The earth too seems to have stopped warming. My theory: clouds.

Robert Buxbaum, May 10, 2014. Science requires you check your theory for internal and external weakness. Here’s why the sky is blue, not green.

Entropy, the most important pattern in life

One evening at the Princeton grad college a younger fellow (an 18-year-old genius) asked the most simple, elegant question I had ever heard, one I’ve borrowed and used ever since: “tell me”, he asked, “something that’s important and true.” My answer that evening was that the entropy of the universe is always increasing. It’s a fundamentally important pattern in life; one I didn’t discover, but discovered to have a lot of applications and meaning. Let me explain why it’s true here, and then why I find it’s meaningful.

Famous entropy cartoon, Harris

The entropy of the universe is not something you can measure directly, but rather indirectly, from the availability of work in any corner of it. It’s related to randomness and the arrow of time. First off, here’s how you can tell if time is moving forward: put an ice-cube into hot water; if the cube melts and the water becomes cooler, time is moving forward — or, at least, it’s moving in the same direction as you are. If you can reach into a cup of warm water and pull out an ice-cube while making the water hot, time is moving backwards — or rather, you are living backwards. Within any closed system, one where you don’t add things or energy (sunlight, say), you can tell that time is moving forward because the forward progress of time always leads to a loss of available work. In the case above, you could have generated some electricity from the ice-cube and the hot water, but not from the glass of warm water.

You can not extract work from a heat source alone; to extract work some heat must be deposited in a cold sink. At best the entropy of the universe remains unchanged. More typically, it increases.

This observation is about as fundamental as any to understanding the world; it is the basis of entropy and the second law of thermodynamics: you can never extract useful work from a uniform-temperature body of water, say, just by making that water cooler. To get useful work, you always need some other transfer into or out of the system; you always need to make something else hotter or colder, or provide some chemical or altitude change that can not be reversed without adding more energy back. Thus, so long as time moves forward, everything runs down in terms of work availability.

There is also a first law; it states that energy is conserved. That is, if you want to heat some substance, that change requires that you put in a set amount of work plus heat. Similarly, if you want to cool something, a set amount of heat plus work must be taken out. In equation form, we say that, for any change, q + w is constant, where q is heat and w is work. It’s the sum that’s constant, not the individual values, so long as you count every 4.184 joules of work as if it were 1 calorie of heat. If you input more heat, you have to add less work, and vice versa, but there is always the same sum. When adding heat or work, we say that q or w is positive; when extracting heat or work, we say that q or w is negative. Still, each 4.184 joules counts as if it were 1 calorie.

Now, since for every path between two states q + w is the same, we say that q + w represents a path-independent quantity for the system, one we call internal energy, U, where ∆U = q + w. This is a mathematical form of the first law of thermodynamics: you can’t take q + w out of nothing, or add it to something without making a change in the properties of the thing. The only way to leave things the same is if q + w = 0. We notice also that, for any pure thing or mixture, the sum q + w for a change is proportional to the mass of the stuff; internal energy is thus an extensive quantity: q + w = n ∆u, where n is the grams of material, and ∆u is the change in internal energy per gram (the per-gram value, u, is intensive).

We are now ready to put the first and second laws together. We find we can extract work from a system if we take heat from a hot body of water and deliver some of it to something at a lower temperature (the ice-cube, say). This can be done with a thermopile, or with a steam engine (Rankine cycle, above), or a Stirling engine. That an engine can only extract work when there is a difference of temperatures is similar to the operation of a water wheel. Sadi Carnot noted that a water wheel is able to extract work only when there is a flow of water from a high level to a low one; similarly, in a heat engine, you only get work by taking in heat energy from a hot heat-source and exhausting some of it to a colder heat-sink. The remainder leaves as work. That is, q1 - q2 = w, and energy is conserved. The second law isn’t violated so long as there is no way you could run the engine without the cold sink. Accepting this as reasonable, we can now derive some very interesting, non-obvious truths.

We begin with the famous Carnot cycle. The Carnot cycle is an idealized heat engine with the interesting feature that it can be made to operate reversibly. That is, you can run it forwards, taking a certain amount of heat from a hot source, producing a certain amount of work, and delivering a certain amount of heat to the cold sink; and you can run the same process backwards, as a refrigerator, taking in the same amount of work and the same amount of heat from the cold sink, and delivering the same amount of heat to the hot source. Carnot showed by the following proof that all other reversible engines would have the same efficiency as his cycle, and that no engine, reversible or not, could be more efficient. The proof: if an engine could be designed that extracts a greater percentage of the heat as work when operating between a given hot source and cold sink, it could be used to drive the Carnot cycle backwards. If the pair of engines were combined so that the less efficient engine removed exactly as much heat from the sink as the more efficient engine deposited, the excess work produced by the more efficient engine would leave with no effect besides cooling the source. This combination would be in violation of the second law, something we’d said was impossible.

Now let us try to understand the relationship that drives useful energy production. The ratio of heat in to heat out has got to be a function of the in and out temperatures alone. That is, q1/q2 = f(T1, T2), and similarly q2/q1 = f(T2, T1). Now let’s consider what happens when two Carnot cycles are placed in series between T1 and T2, with the middle temperature at Tm. For the first engine, q1/qm = f(T1, Tm), and similarly for the second engine, qm/q2 = f(Tm, T2). Combining these, we see that q1/q2 = (q1/qm) x (qm/q2), and therefore f(T1, T2) must always equal f(T1, Tm) x f(Tm, T2) = f(T1, Tm)/f(T2, Tm). In this relationship we see that the middle temperature, Tm, is irrelevant; the relation holds for any Tm. We thus say that q1/q2 = T1/T2, and this is the limit of what you get at maximum (reversible) efficiency. You can now rearrange this to read q1/T1 = q2/T2, or to say that work, w = q1 - q2 = q2 (T1 - T2)/T2.
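As a small sketch of the result just derived, the function below computes the reversible (Carnot) work from q1/T1 = q2/T2 and w = q1 - q2; the example numbers are mine, for illustration:

```python
# Carnot work: the heat rejected is fixed by q2 = q1*T2/T1, and the
# difference q1 - q2 leaves as work.

def carnot_work(q1, T1, T2):
    """Maximum work (same units as q1) from heat q1 in at T1 (K), sink at T2 (K)."""
    q2 = q1 * T2 / T1    # heat that must be rejected to the cold sink
    return q1 - q2

W = carnot_work(1000.0, 500.0, 300.0)  # 1000 J in at 500 K, sink at 300 K
print(W)  # 400.0 J, i.e. efficiency 1 - T2/T1 = 40%
```

Note that with no temperature difference (T1 = T2) the extractable work is zero, as the water-wheel analogy suggests.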

A strange result from this is that, since every process can be modeled as either a sum of Carnot engines or of engines that are less efficient, and since the Carnot engine produces this same amount of reversible work when filled with any substance or combination of substances, this outcome, q1/T1 = q2/T2, is independent of path and independent of substance, so long as the process is reversible. We can thus say that for all substances there is a property of state, S, such that the change in this property is ∆S = ∑q/T for all the heat in or out. In a more general sense, we can say ∆S = ∫dq/T, where this state property, S, is called the entropy. Since, as before, the amount of heat is proportional to mass, S is an extensive property: S = n s, where n is the mass of stuff, and s is the entropy per gram (intensive).

Another strange result comes from the efficiency equation. Since, for any engine or process that is less efficient than the reversible one, we get less work out for the same amount of q1, we must have more heat rejected than the reversible q2. Thus, for an irreversible engine or process, q1 - q2 < q2 (T1 - T2)/T2, and q2/T2 is greater than q1/T1. As a result, the total change in entropy, ∆S = q2/T2 - q1/T1 > 0: the entropy of the universe always goes up or stays constant. It never goes down. A final observation is that there must be a minimum temperature that nothing can go below, or else both q1 and q2 could be positive and energy would not be conserved. Our observations of time and energy conservation lead us to expect a minimum temperature, T = 0, that nothing can be colder than. We find this temperature at -273.15 °C. It is called absolute zero; nothing has ever been cooled below it, and now we see that, so long as time moves forward and energy is conserved, nothing ever will be.
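A one-line numeric illustration of this conclusion: let heat flow from hot to cold with no work extracted at all (a fully irreversible process), and the computed entropy of the universe comes out positive. The temperatures and heat amount here are made up for illustration:

```python
# Entropy bookkeeping for a fully irreversible heat transfer:
# the hot body loses q/T_hot of entropy, the cold body gains q/T_cold.

q = 1000.0                     # J of heat transferred
T_hot, T_cold = 350.0, 280.0   # K

dS = q / T_cold - q / T_hot    # net entropy change of the universe
print(f"{dS:.3f} J/K")         # positive: about 0.714 J/K
```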

By convention, we either set S to zero at absolute zero, or measure it from a reference state at room temperature.

We’re nearly there. We can define the entropy of the universe as the sum of the entropies of everything in it. From the above treatment of work cycles, we see that this total entropy always goes up, never down. A fundamental fact of nature, and (in my world view) a fundamental view into how God views us and the universe. First, that the entropy of the universe goes up only, and not down (in our time-forward framework), suggests there is a creator for our universe — a source of negative entropy at the start of all things, or a reverser of time (it’s the same thing in our framework). Another observation: God likes entropy a lot, and that means randomness. It’s His working principle, it seems.

But before you take me for a total libertine and say that, since science shows that everything runs down, the only moral take-home is to teach: “Let us eat and drink,”… “for tomorrow we die!” (Isaiah 22:13), I should note that this randomness only applies to the universe as a whole. The individual parts (planets, laboratories, beakers of coffee) do not maximize entropy; they minimize available work, and this is different. You can show that the maximization of S, the entropy of the universe, does not lead to the maximization of s, the entropy per gram of your particular closed space, but rather to the minimization of a related quantity, µ, the free energy, or usable work per gram of your stuff. You can show that, for any closed system at constant temperature, µ = h - Ts, where s is the entropy per gram as before, and h is called the enthalpy. h is basically the potential energy of the molecules; it is lowest at low temperature and high order. For a closed system, there is a balance between s, something that increases with increased randomness, and h, something that decreases with increased randomness. Put water and air in a bottle, and you find the water mostly on the bottom of the bottle, the air mostly on top, and the amount of mixing in each phase is not the maximum disorder, but rather the one you’d calculate will minimize µ.

As a protein folds, its randomness and entropy decrease, but its enthalpy decreases too; the net effect is one precise fold that minimizes µ.

This is the principle that God applies to everything, including us, I’d guess: a balance. Take protein folding: some patterns have big disorder and high h; some have low disorder and very low h. The result is a temperature-dependent balance. If I were to take a moral imperative from this balance, I’d say it matches best with the sayings of Solomon the wise: “there is nothing better for a person under the sun than to eat, drink and be merry. Then joy will accompany them in their toil all the days of the life God has given them under the sun.” (Ecclesiastes 8:15). There is toil here as well as pleasure; directed activity balanced against personal pleasures. This is the µ = h - Ts minimization where, perhaps, T is economic wealth. Thus, the richer a society, the less toil is ideal and the more freedom. Of necessity, poor societies are repressive.

Dr. Robert E. Buxbaum, Mar 18, 2014. My previous thermodynamic post concerned the thermodynamics of hydrogen production. It’s not clear that all matter goes forward in time, by the way; antimatter may go backwards, so it’s possible that antimatter apples fall up. On a microscopic scale, time becomes flexible, so it seems you can make a time machine. Religious leaders tend to be anti-science, I’ve noticed, perhaps because scientific miracles can be done by anyone, available even to those who think “wrong,” or say the wrong words. And that’s that: all being heard, do what’s right and enjoy life too, as important a pattern in life as you’ll find, I think. The relationship between free energy and societal organization is from my thesis advisor, Dr. Ernest F. Johnson.

Fractal power laws and radioactive waste decay

Here’s a fairly simple fractal model for nuclear reactor decay heat. It’s based on one I came up with over the past few weeks for dealing with the statistics of crime, fires, etc. — looking at the tail, where Poisson statistics fail. Anyway, I find that the analysis method works pretty well for looking at nuclear power problems and comparing fission vs. fusion.

If all nuclear waste had only one isotope and would decay only to non-radioactive products, the decay rate would be exponential. Thus, one could plot the log of the decay heat rate linearly against linear time (a semi-log plot). But nuclear waste generally consists of many radioactive components, and they typically decay into other radioactive isotopes (daughters) that then decay to yet others (grand-daughters?). Because the half-lives can vary by a lot, one would like to shrink the time-axis, e.g. by use of a log-log plot, perhaps including a curve for each isotope and every decay mode. I find these plots hard to produce, and hard to extrapolate. What I’d like to propose instead is a fractal variation of the original semi-log plot: a plot of the log of the heat rate against a fractal time — time to the 1/4 power in this case. The plot is similar to a fractal decay model I’d developed for crimes and fires a few weeks ago.

A plausible justification for this fractal semi-log plot is to observe that the half-lives of daughter isotopes relate to those of the parent isotopes. I picked time to the 1/4 power because it seemed to work pretty well for fuel rods (below), especially after 5 years of cool-down. Note that the slope is identical for rods consumed to 20 or 35 MW-days per kg of uranium.
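The fractal semi-log idea can be sketched in a few lines of Python: fit log(decay heat) as a straight line in t^(1/4), then extrapolate. The heat numbers below are invented for illustration — they are not the NRC data — only the fitting method is the point:

```python
import math

# Fractal semi-log fit: log(heat) as a straight line in t**(1/4).
# Data here are made-up, roughly exponential in t**0.25, for illustration.

times = [0.5, 1, 2, 5, 10, 20, 50]              # years of cool-down
heat = [30.0, 22.0, 15.5, 9.0, 5.5, 3.2, 1.5]   # decay heat, arbitrary units

x = [t ** 0.25 for t in times]    # fractal time axis
y = [math.log(h) for h in heat]   # semi-log axis

# least-squares fit of y = a + b*x
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
    / sum((xi - xbar) ** 2 for xi in x)
a = ybar - b * xbar

def predict_heat(t):
    """Extrapolate the decay heat to cool-down time t (years)."""
    return math.exp(a + b * t ** 0.25)

print(predict_heat(100))   # extrapolated heat after a century
```

If the fractal power is right, the fitted line is straight and the extrapolation is a single exponential in t^(1/4), which is far easier to use than a family of log-log curves.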

After-heat of nuclear fuel rods used at 20 kW/kg U; top graph 35 MW-days/kg U; bottom graph 20 MW-days/kg U. A typical reactor has 200,000 kg of uranium. Data from US NRC Regulatory Guide 3.54 — Spent Fuel Heat Generation in an Independent Spent Fuel Storage Installation, rev 1, 1999. http://www.nrc.gov/reading-rm/doc-collections/reg-guides/fuels-materials/rg/03-054/

This plot also works if there is only one type of radio-isotope (tritium, say), but in that case the fractal power should be 1. (My guess is that the fractal time dependence for crime is 1/2, though I lack real crime statistics to check it.) Unless I find that someone else has come up with this sort of plot or analysis before, I’ll call it after myself: a Buxbaum-Mandelbrot plot. Why not?

Nuclear power is attractive world-wide because it is a lot more energy dense than any normal fuel; at 35 MW-days per kg, 1 kg will power an average US home for 8 years. By comparison, even with fuel cell technology, 1 kg of hydrogen can only power a US home for 6 hours. While hydrogen is clean, it is often made from coal, and (at least in theory*) it should be easier to handle and store 1 kg of spent uranium than to deal with many tons of coal-smoke produced to make 35 MW-days of electricity.

Better than uranium fission, in theory*, is nuclear fusion. A fission-based nuclear reactor big enough to power 1/2 of Detroit would contain some 200,000 kg of uranium; assuming a 5-year cycle, the after-heat would be about 1000 kW (1 MW) one year after removal.

After-heat of a 4000 MWth fusion reactor built from niobium-1% zirconium; from the UWMAC III Report. The after-heat is far less than with normal uranium fission.

For comparison, and as another demonstration of the Buxbaum-Mandelbrot plot, I show the after-heat of a similar-power fusion reactor. It drops from 1 MW at 3 weeks to only 100 W after 1 year. Nb-1%Zr is a fairly common engineering material used for high-temperature applications. Nuclear fusion is still a few years off, so for now fission is the only nuclear option for clean electrical generation.

This plot was really designed to look at the statistics of crime, fires, and the need for servers and checkout people. I’d predicted that the log of the number of people needed to fight crime, fires, etc. is a straight line against a fractal power of the number of hours per year that this number of people is needed. At least that was the theory.*

Dr. R.E. Buxbaum, January 2, 2014. *A final, final thought about theory from Yogi Berra: “In theory, it matches reality.”

Paint your factory roof white

Standing on the flat roof of my lab / factory building, I notice that virtually all of my neighbors’ roofs are black, covered by tar or bitumen. My roof was black too until three weeks ago; the roof was too hot to touch when I’d gone up to patch a leak. That’s not quite egg-frying hot, but I came to believe my repair would last longer if the roof stayed cooler. So, after sealing the leak with tar and bitumen, we added an aluminized over-layer from Ace hardware. The roof is cooler now than before, and I notice a major drop in air conditioner load and use.

My analysis of our roof coating follows; it’s for Detroit, but you can modify it for your location. Sunlight hits the earth carrying 1300 W/m2. Some 300W/m2 scatters as blue light (for why so much scatters, and why the sky is blue, see here). The rest, 1000 W/m2 or 308 Btu/ft2hr, comes through or reflects off clouds on a cloudy day and hits buildings at an angle determined by latitude, time of day, and season of the year.

Detroit is at 42° North latitude so my roof shows an angle of 42° to the sun at noon in mid spring. In summer, the angle is 20°, and in winter about 63°. The sun sinks lower on the horizon through the day, e.g. at two hours before or after noon in mid spring the angle is 51°. On a clear day, with a perfectly black roof, the heating is 308 Btu/ft2hr times the cosine of the angle.

To calculate our average roof heating, I integrated this heat over the full day’s angles using Euler’s method, and included the scatter from clouds plus an absorption factor for the blackness of the roof. The figure below shows the cloud cover for Detroit.
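The integration described above can be sketched as follows. This is a simplified stand-in for my calculation: direct flux only, a single cloud-transmission factor instead of Detroit’s monthly cloud statistics, and a textbook solar-zenith formula, so it won’t exactly reproduce the 105 and 23 Btu/ft2hr figures; the structure of the calculation is the point:

```python
import math

# Euler integration of roof heating over a day for a flat roof.
# Heating rate = flux * cos(zenith angle) * absorption * cloud factor.

FLUX = 308.0        # Btu/ft2·hr reaching the ground on a clear day
LATITUDE = 42.0     # degrees north, Detroit
DECLINATION = 23.4  # degrees, mid-summer sun

def cos_zenith(hour):
    """Cosine of the solar zenith angle at a given solar hour (0-24)."""
    hour_angle = math.radians(15.0 * (hour - 12.0))  # 15 degrees per hour
    lat, dec = math.radians(LATITUDE), math.radians(DECLINATION)
    c = (math.sin(lat) * math.sin(dec)
         + math.cos(lat) * math.cos(dec) * math.cos(hour_angle))
    return max(c, 0.0)   # zero when the sun is below the horizon

def avg_roof_heating(absorption, cloud_factor=0.7, dt=0.1):
    """Average heating, Btu/ft2·hr over 24 h, by Euler's method (step dt hr)."""
    total, t = 0.0, 0.0
    while t < 24.0:
        total += FLUX * cos_zenith(t) * absorption * cloud_factor * dt
        t += dt
    return total / 24.0

print(avg_roof_heating(0.9))  # black tar roof
print(avg_roof_heating(0.2))  # aluminized roof
```

Whatever the cloud assumptions, the two averages scale with the absorption factor, so aluminizing cuts the solar load by the ratio 0.9/0.2, about 4.5-fold.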

Average cloud cover for Detroit, month by month; the black line is the median cloud cover. On January 1, it is strongly overcast 60% of the time, and hardly ever clear; the median is about 98%. From http://weatherspark.com/averages/30042/Detroit-Michigan-United-States

Based on this, and assumed light-absorption factors of σ = .9 for tar and σ = .2 after aluminizing, I calculate an average of 105 Btu/ft2hr of heating during the summer for the original black roof, and 23 Btu/ft2hr after aluminizing. Our roof is still warm, but it’s no longer hot. While most of the absorbed heat leaves the roof by black-body radiation or convection, enough enters my lab through 6″ of insulation to cause me to use a lot of air conditioning. I calculate the heat entering this way from the roof temperature. In the summer, an aluminum coat is a clear winner.


High and Low Temperatures For Detroit, Month by Month. From http://weatherspark.com/averages/30042/Detroit-Michigan-United-States

Detroit has a cold winter too, and in those months I’d benefit from solar heat. I find it’s so cloudy in winter that, even with a black roof, I got less than 5 Btu/ft2hr. Aluminizing reduced this heat to 1.2 Btu/ft2hr, but it also reduces the black-body radiation leaving at night. I expect to find that I use less heat in winter, but perhaps more in late spring and early fall. I won’t know the details till next year, but that’s the calculation.

The REB Research laboratory is located at 12851 Capital St., Oak Park, MI 48237. We specialize in hydrogen separations and membrane reactors. By Dr. Robert Buxbaum, June 16, 2013

What’s the quality of your home insulation?

By Dr. Robert E. Buxbaum, June 3, 2013

It’s common to have companies call during dinner offering to blow extra insulation into the walls and attic of your home. Those who’ve added this insulation find a small decrease in their heating and cooling bills, but generally wonder if they got their money’s worth, or perhaps if they need yet-more insulation to get the full benefit. Here’s a simple approach to comparing your home heat bill to the ideal your home can reasonably reach.

The rate of heat transfer through a wall, Qw, is proportional to the temperature difference, ∆T, to the area, A, and to the average thermal conductivity of the wall, k; it is inversely proportional to the wall thickness, ∂;

Qw = ∆T A k /∂.

For home insulation, we re-write this as Qw = ∆T A/Rw, where Rw is the thermal resistance of the wall, measured (in the US) in ft2·hr·°F/BTU. Rw = ∂/k.

Let’s assume that your home’s outer wall is nominally 6″ (0.5 foot) thick. With the best available insulation, perfectly applied, the heat loss will be somewhat higher than if the space were filled with still air, k = .024 BTU/ft·hr·°F, a result based on molecular dynamics. For a 6″ wall, the R value will thus always be less than .5/.024 = 20.8 ft2·hr·°F/BTU. It will be much less if there are holes or air infiltration, but for practical construction with joists and sills, an Rw value of 15 or 16 is probably about as good as you’ll get with 6″ walls.

To show you how to evaluate your home, I’ll now calculate the R value of my walls based on the size of my ranch-style home (in Michigan) and our heat bills. I’ll first do a simplified calculation ignoring windows, then repeat the calculation including them. Windows turn out to be very important; I strongly suggest window curtains to save on heat and air conditioning.

The outer wall of my home is 190 feet long, and extends about 11 feet above ground to the roof. Multiplying these dimensions gives an outer wall area of 2090 ft2. I could now add the roof area, 1750 ft2 (the same as the area of the house), but since the roof is more heavily insulated than the walls, I’ll estimate that it behaves like 1410 ft2 of normal wall. I thus calculate 3500 ft2 of effective above-ground area for heat loss. This is the area that companies keep offering to insulate.

Between December 2011 and February 2012, our home was about 72°F inside, and the outside temperature was about 28°F. Thus, the average temperature difference between the inside and outside was about 45°F; I estimate the rate of heat loss from the above-ground part of my house as QU = 3500 × 45/Rw = 157,500/Rw.

Our house has a basement too, something that no one has yet offered to insulate. While the below-ground temperature gradient is smaller, it’s less-well insulated. Our basement walls are cinderblock covered with 2″ of styrofoam plus wall-board. Our basement floor is even less well insulated: it’s just cement poured on pea-gravel. I estimate the below-ground R value is no more than 1/2 of whatever the above ground value is; thus, for calculating QB, I’ll assume a resistance of Rw/2.

The below-ground area equals the square footage of our house, 1750 ft2 but the walls extend down only about 5 feet below ground. The basement walls are thus 950 ft2 in area (5 x 190 = 950). Adding the 1750 ft2 floor area, we find a total below-ground area of 2700 ft2.

The temperature difference between the basement and the wet dirt is only about 25°F in the winter. Assuming the thermal resistance is Rw/2, I estimate the rate of heat loss from the basement, QB = 2700*25*(2/Rw) = 135,000/Rw. It appears that nearly as much heat leaves through the basement as above ground!

Between December and February 2012, our home used an average of 597 cubic feet of gas per day, or 25,497 BTU/hour (heat value = 1025 BTU/ft3). QU + QB = 292,500/Rw. Ignoring windows, I estimate the Rw of my home = 292,500/25,497 = 11.47.
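The bookkeeping above is easy to repeat for your own house. Here is a minimal Python sketch using my numbers; the areas, temperature differences, and gas use are mine, so substitute your own.

```python
# Whole-house effective R-value from a winter gas bill.
GAS_BTU_PER_FT3 = 1025.0   # heat value of natural gas

def effective_r(above_ft2, below_ft2, dt_above, dt_below, gas_ft3_per_day):
    """Solve Q = A_above*dT_above/Rw + A_below*dT_below/(Rw/2) for Rw."""
    q_btu_hr = gas_ft3_per_day * GAS_BTU_PER_FT3 / 24.0
    return (above_ft2 * dt_above + 2.0 * below_ft2 * dt_below) / q_btu_hr

rw_no_windows = effective_r(3500, 2700, 45, 25, 597)      # ~11.5

# Refinement: remove the window heat loss from the gas heat first.
q_total = 597 * GAS_BTU_PER_FT3 / 24.0                    # ~25,497 BTU/hr
q_windows = 230 * 45 / 1.5                                # ~6,900 BTU/hr
rw_walls = (3500 * 45 + 2 * 2700 * 25) / (q_total - q_windows)   # ~15.7
```

If your computed wall Rw comes out near the still-air-limited ceiling for your wall thickness, more blown-in insulation will buy you little.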

We now add the windows. Our house has 230 ft2 of windows, most covered by curtains and/or plastic. Because of the curtains and plastic, they would have an R value of 3, except that black-body radiation tends to be very significant; I estimate our windows have an R value of 1.5. The heat loss through the windows is thus QW = 230 × 45/1.5 = 6900 BTU/hr, about 27% of the total. The R value for our walls is now re-estimated to be 292,500/(25,497 − 6900) = 15.7; this is about as good as I can expect given the fixed thickness of our walls and the fact that I cannot easily get an insulation conductivity lower than that of still air. I thus find that there will be little or no benefit to adding more above-ground wall insulation to my house.

To save heat energy, I might want to coat our windows in partially reflective plastic or draw the curtains to follow the sun. Also, since nearly half the heat left from the basement, I may want to lay a thicker carpet, or lay a reflective under-layer (a space blanket) beneath the carpet.

To improve on the above estimate, I could consider our furnace efficiency; it is perhaps only 85-90% efficient, with still-warm air leaving up the chimney. There is also some heat lost through the door being opened, and through hot water being poured down the drain. As a first guess, these heat losses are balanced by the heat added by electric usage, by the body-heat of people in the house, and by solar radiation that entered through the windows (not much for Michigan in winter). I still see no reason to add more above-ground insulation. Now that I’ve analyzed my home, it’s time for you to analyze yours.

Most Heat Loss Is Black-Body Radiation

In a previous post I used statistical mechanics to show how you’d calculate the thermal conductivity of any gas and showed why the insulating power of the best normal insulating materials was usually identical to ambient air. That analysis only considered the motion of molecules and not of photons (black-body radiation) and thus under-predicted heat transfer in most circumstances. Though black body radiation is often ignored in chemical engineering calculations, it is often the major heat transfer mechanism, even at modest temperatures.

One can show from quantum mechanics that the radiative heat transfer between two surfaces of temperature T and To is proportional to the difference of the fourth power of the two temperatures in absolute (Kelvin) scale.

P_{\rm net}=A\sigma \varepsilon \left( T^4 - T_0^4 \right). Here Pnet is the net heat transfer rate, A is the area of the surfaces, σ is the Stefan–Boltzmann constant, and ε is the surface emissivity, a number that is about 1 for most non-metals and .3 for stainless steel. For A measured in m2, σ = 5.67×10−8 W m−2 K−4.

Unlike conduction, radiative heat transfer does not depend on the distance between the surfaces, only on their temperatures and infra-red (IR) reflectivity. IR reflectivity is different from normal, visible reflectivity, as seen in the infra-red photo below of a lightly dressed person standing in a normal room. The fellow has a black plastic bag on his arm, but you can hardly see it here, as it hardly affects heat loss. His clothes don’t do much either, but his hair and eyeglasses are reasonably effective blocks to radiative heat loss.

Infrared picture of a fellow wearing a black plastic bag on his arm. The bag is nearly transparent to heat radiation, while his eyeglasses are opaque. His hair provides some insulation.

As an illustrative example, lets calculate the radiative and conductive heat transfer heat transfer rates of the person in the picture, assuming he has  2 m2 of surface area, an emissivity of 1, and a body and clothes temperature of about 86°F; that is, his skin/clothes temperature is 30°C or 303K in absolute. If this person stands in a room at 71.6°F, 295K, the radiative heat loss is calculated from the equation above: 2 *1* 5.67×10−8 * (8.43×109 -7.57×109) = 97.5 W. This is 23.36 cal/second or 84.1 Cal/hr or 2020 Cal/day; this is nearly the expected basal calorie use of a person this size.
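The radiative side of this example can be checked in a few lines of Python. The 2 m2 area, unit emissivity, and temperatures are the figures from the example above; the 0.2389 cal/J conversion is standard.

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m2 K4)

def radiative_loss_w(area_m2, emissivity, t_body_k, t_room_k):
    """Net black-body transfer: P = A * sigma * eps * (T^4 - T0^4)."""
    return area_m2 * emissivity * SIGMA * (t_body_k**4 - t_room_k**4)

p_rad = radiative_loss_w(2.0, 1.0, 303.0, 295.0)   # ~97 W for the fellow above
cal_per_day = p_rad * 0.2389 * 86400 / 1000        # watts -> food Calories/day
```

The Calories-per-day figure lands right around a basal metabolic rate, which is a nice sanity check on the physics.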

The conductive heat loss is typically much smaller. As discussed previously in my analysis of curtains, the rate is inversely proportional to the heat-transfer distance and proportional to the temperature difference. For the fellow in the picture, assuming he’s standing in relatively stagnant air, the heat boundary layer thickness will be about 2 cm (0.02 m). Multiplying the thermal conductivity of air, 0.024 W/mK, by the surface area and the temperature difference, and dividing by the boundary-layer thickness, we find a heat loss of 2 × .024 × (30−22)/.02 = 19.2 W. This is 16.56 Cal/hr, or 397 Cal/day: about 20% of the radiative heat loss, suggesting that some 5/6 of a sedentary person’s heat loss is by black-body radiation.
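The conduction estimate and the radiative fraction can be sketched the same way; the 2 cm boundary layer and 8°C temperature difference are the assumptions from the paragraph above, and the 97.5 W radiative figure is taken from the text rather than recomputed.

```python
K_AIR = 0.024   # W/(m K), thermal conductivity of still air

def conductive_loss_w(area_m2, dt_c, boundary_layer_m):
    """Conduction across a stagnant-air layer: Q = k * A * dT / thickness."""
    return K_AIR * area_m2 * dt_c / boundary_layer_m

q_cond = conductive_loss_w(2.0, 8.0, 0.02)     # ~19 W
q_rad = 97.5                                   # radiative loss from the text, W
radiative_fraction = q_rad / (q_rad + q_cond)  # ~5/6
```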

We can expect that black-body radiation dominates conduction when looking at heat-shedding losses from hot chemical equipment because this equipment is typically much warmer than a human body. We’ve found, with our hydrogen purifiers for example, that it is critically important to choose a thermal insulation that is opaque or reflective to black body radiation. We use an infra-red opaque ceramic wrapped with aluminum foil to provide more insulation to a hot pipe than many inches of ceramic could. Aluminum has a far lower emissivity than the nonreflective surfaces of ceramic, and gold has an even lower emissivity at most temperatures.

Many popular insulation materials are not black-body opaque, and most hot surfaces are not reflectively coated. Because of this, you can find that the heat loss rate goes up as you add too much insulation. After a point, the extra insulation increases the surface area for radiation while barely reducing the surface temperature; it starts to act like a heat fin. While the space-shuttle tiles are fairly mediocre in terms of conduction, they are excellent in terms of black-body radiation.

There are applications where you want to increase heat transfer without having to resort to direct contact with corrosive chemicals or heat-transfer fluids. Often black body radiation can be used. As an example, heat transfers quite well from a cartridge heater or band heater to a piece of equipment even if they do not fit particularly tightly, especially if the outer surfaces are coated with black oxide. Black body radiation works well with stainless steel and most liquids, but most gases are nearly transparent to black body radiation. For heat transfer to most gases, it’s usually necessary to make use of turbulence or better yet, chaos.

Heat conduction in insulating blankets, aerogels, space shuttle tiles, etc.

A lot about heat conduction in insulating blankets can be explained by the ordinary motion of gas molecules. That’s because the thermal conductivity of air (or any likely gas) is much lower than that of glass, alumina, or any likely solid material used for the structure of the blanket. At any temperature, the average kinetic energy of an air molecule is 1/2 kT in any direction, or 3/2 kT altogether, where k is Boltzmann’s constant and T is absolute temperature in K. Since kinetic energy equals 1/2 mv2, you find that the average velocity in the x direction must be v = √(kT/m) = √(RT/M). Here m is the mass of the gas molecule in kg, M is the molecular weight also in kg (0.029 kg/mol for air), R is the gas constant, 8.31 J/mol·K, and v is the molecular velocity in the x direction, in meters/sec. From this equation, you will find that v is quite large under normal circumstances, about 290 m/s (650 mph) for air molecules at an ordinary temperature of 22°C or 295 K. That is, air molecules travel in any fixed direction at roughly the speed of sound, Mach 1 (the average speed including all directions is √3 as fast, or about 1130 mph).

The distance a molecule will go before hitting another one is a function of the cross-sectional areas of the molecules and their densities in space. Dividing the volume of a mol of gas, 0.0224 m3/mol at “normal conditions,” by the number of molecules in the mol (6.02 x10^23) gives an effective volume per molecule: 0.0224 m3/6.02 x10^23 = 3.72 x10^-26 m3/molecule at normal temperatures and pressures. Dividing this volume by the molecular cross-section area for collisions (about 1.6 x10^-19 m2 for air, based on an effective diameter of 4.5 Angstroms) gives a free-motion distance of about 0.23 x10^-6 m, or 0.23µ, for air molecules at standard conditions. This distance is small, to be sure, but it is roughly 1000 times the molecular diameter, and as a result air behaves nearly as an “ideal gas,” one composed of point masses, under normal conditions (and most conditions you run into). The distance the molecule travels to or from a given surface will be smaller, 1/√3 of this on average, or about 1.35 x10^-7 m. This distance will be important when we come to estimate heat transfer rates at the end of this post.
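The mean-free-path arithmetic above is short enough to script; the 0.0224 m3/mol molar volume and 4.5 Å effective diameter are the values used in the text.

```python
import math

N_A = 6.02e23     # molecules per mol
V_MOL = 0.0224    # m3 per mol of gas at normal conditions
D_EFF = 4.5e-10   # effective molecular diameter of air, m

vol_per_molecule = V_MOL / N_A                 # ~3.7e-26 m3 per molecule
cross_section = math.pi * (D_EFF / 2.0) ** 2   # ~1.6e-19 m2 collision area
free_path = vol_per_molecule / cross_section   # ~0.23 micron between hits
wall_path = free_path / math.sqrt(3)           # ~1.35e-7 m toward a wall
```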

 


Molecular motion of an air molecule (oxygen or nitrogen) as part of heat transfer process; this shows how some of the dimensions work.

The number of molecules hitting per square meter per second is most easily calculated from the transfer of momentum. The pressure at the surface equals the rate of change of momentum of the molecules bouncing off. At atmospheric pressure, 103,000 Pa = 103,000 Newtons/m2, the number of molecules bouncing off per second is half this pressure divided by the mass of each molecule times the velocity in the surface direction. The contact rate is thus found to be (1/2) x 103,000 Pa x 6.02 x10^23 molecules/mol /(290 m/s x .029 kg/mol) = 36,900 x 10^23 molecules/m2sec.

The thermal conductivity is merely this number times the heat capacity transferred per molecule times the distance of the transfer. I will now calculate the heat capacity per molecule from statistical mechanics because I’m used to doing things this way; other people might look up the heat capacity per mol and divide by 6.02 x10^23. For any gas, the heat capacity that derives from kinetic energy is k/2 per molecule in each direction, as mentioned above. Combining the three directions, that’s 3k/2. Air molecules look like dumbbells, though, so they have two rotations that contribute another k/2 of heat capacity each, and they have a vibration that contributes k. We begin with an approximate value of k = 2 cal/mol of molecules per °C; it’s actually 1.987, but I round up to include some electronic effects. Based on this, we calculate the heat capacity of air to be 7 cal/mol°C at constant volume, or 1.16 x10^-23 cal/molecule°C.

The amount of energy that can transfer to the hot (or cold) wall is this heat capacity times the temperature difference that molecules carry between the wall and their first collision with other gas molecules. The temperature difference carried by air molecules at standard conditions is only 1.35 x10^-7 times the temperature difference per meter, because the molecules only go that far before colliding with another molecule (remember, I said this number would be important). The thermal conductivity of stagnant air is thus calculated by multiplying the number of molecules that hit per m2 per second, the distance the molecules travel in meters, and the effective heat capacity per molecule: 36,900 x 10^23 molecules/m2sec x 1.35 x10^-7 m x 1.16 x10^-23 cal/molecule°C = 0.00578 cal/ms°C, or .0241 W/m°C. This value is (pretty exactly) the thermal conductivity of dry air that you find by experiment.
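The whole kinetic-theory estimate strings together in a few lines; the rounded values below (103,000 Pa, 290 m/s, 7 cal/mol·°C) are the ones used in the text, so the answer lands on the same ~0.024 W/m°C.

```python
# Kinetic-theory estimate of still-air thermal conductivity, using the
# rounded numbers from the text.
N_A = 6.02e23         # molecules per mol
P = 103000.0          # Pa, atmospheric pressure as used above
M = 0.029             # kg/mol, air
V_X = 290.0           # m/s, one-directional molecular speed
WALL_PATH = 1.35e-7   # m, distance carried to/from a wall
CP_PER_MOLECULE = 7.0 / N_A   # cal per molecule per deg C

hits_per_m2_s = 0.5 * P * N_A / (V_X * M)            # wall-collision rate
k_cal = hits_per_m2_s * WALL_PATH * CP_PER_MOLECULE  # cal/(m s C)
k_watt = k_cal * 4.184                               # ~0.024 W/(m C)
```

The same skeleton, with different M, diameter, and heat capacity, gives a first estimate for deuterium, iodine vapor, or gas mixtures.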

I did all that math, though I already knew the thermal conductivity of air from experiment, for a few reasons: to show off the sort of stuff you can do with simple statistical mechanics; to build up skills in case I ever need to know the thermal conductivity of deuterium or iodine gas, or mixtures; and finally, to be able to understand the effects of pressure, temperature and (mainly insulator) geometry — something I might need to design a piece of equipment with, for example, lower thermal heat losses. I find from my calculation that we should not expect much change in thermal conductivity with gas pressure at near-normal conditions; to first order, a change in pressure changes the distance the molecule travels to exactly the same extent that it changes the number of molecules that hit the surface per second. At very low pressures or very small distances, lower pressure will translate to lower conductivity, but for normal-ish pressures and geometries, changes in gas pressure should not affect thermal conductivity — and they do not.

I’d predict that temperature would have a larger effect on thermal conductivity, but still not an order-of-magnitude effect. Increasing the temperature increases the distance between collisions in proportion to the absolute temperature, but decreases the rate of wall collisions by the square root of T: the gas is less dense at higher temperature, which more than offsets the faster molecular motion. As a result, increasing T has a √T positive effect on thermal conductivity.

Because neither temperature nor pressure has much effect, you might expect that the thermal conductivity of all air-filled insulating blankets at all normal-ish conditions is more-or-less that of standing air (air without circulation). That is what you find, for the most part: the same 0.024 W/m°C thermal conductivity with standing air, with high-tech NASA fiber blankets on the space shuttle, and with the cheapest styrofoam cups. Wool felt has a thermal conductivity of 0.042 W/m°C, about twice that of air, a not-surprising result given that wool felt is about 1/2 wool and 1/2 air.

Now we can start to understand the most recent class of insulating blankets, those with very fine fibers, or thin layers of fiber (or aluminum or gold). When these are separated by less than 0.2µ you finally decrease the thermal conductivity at room temperature below that for air. These layers decrease the distance traveled between gas collisions, but still leave the same number of collisions with the hot or cold wall; as a result, the smaller the gap below .2µ the lower the thermal conductivity. This happens in aerogels and some space blankets that have very small silica fibers, less than .1µ apart (<100 nm). Aerogels can have much lower thermal conductivities than 0.024 W/m°C, even when filled with air at standard conditions.

In outer space you get lower thermal conductivity without high-tech aerogels because the free path is very long. At these pressures virtually every molecule hits a fiber before it hits another molecule; even for a rough blanket with distant fibers, the fibers break up the path of the molecules significantly. Thus, the fibers of the space shuttle tiles (about 10 µ apart) provide far lower thermal conductivity in outer space than on earth. You can get the same benefit in the lab if you put a high vacuum, say 10^-7 atm, between glass walls that are 9 mm apart. Without the walls, the air molecules could travel 1.35 x10^-7 m/10^-7 = 1.35 m before colliding with each other. Since the walls of a typical Dewar are about 0.009 m apart (9 mm), the heat conduction of the Dewar is thus 0.009/1.35 ≈ 1/150 (0.7%) as high as for a normal air layer 9 mm thick; there is no thermal conductivity of Dewar flasks and vacuum bottles as such, since the amount of heat conducted is independent of gap-distance. Pretty spiffy. I use this knowledge to help with the thermal insulation of some of our hydrogen generators and hydrogen purifiers.
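The Dewar estimate reduces to a one-line ratio: molecular flux drops in proportion to pressure, while each molecule now carries the full wall-to-wall temperature difference instead of a 1.35 x10^-7 m slice of it. A sketch, using the mean-free-path figure developed earlier:

```python
# Conduction ratio of an evacuated gap to the same gap full of air.
WALL_PATH_1ATM = 1.35e-7    # m, effective travel toward a wall at 1 atm

def vacuum_conduction_ratio(pressure_atm, gap_m):
    """Fraction of normal-air conduction remaining at reduced pressure."""
    return pressure_atm * gap_m / WALL_PATH_1ATM

ratio = vacuum_conduction_ratio(1e-7, 0.009)   # ~1/150 for a 9 mm Dewar gap
```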

There is another effect that I should mention: black-body heat transfer. In many cases black-body radiation dominates: it is the reason the tiles are white (or black) and not clear; it is the reason Dewar flasks are mirrored (a mirrored surface provides less black-body heat transfer). This post is already too long to do black-body radiation justice here, but I treat it in more detail in another post.

R.E. Buxbaum

Small hydrogen generators for cooling dynamo generators

A majority of the electricity used in the US comes from rotating dynamos. Power is provided to the dynamo by a turbine or IC engine, and the dynamo turns this power into electricity by moving a rotating coil (the rotor) through a non-rotating magnetic field provided by magnets or a non-rotating coil (the stator). While it is easy to cool the magnets or stator, cooling the rotor is challenging: there is no way to connect cooling water or heat-transfer paste to it. One of the more common options is hydrogen gas.

It is common to fill the space between the rotor and the stator with hydrogen gas. Heat transfers from the rotor to the stator or to the walls of the dynamo through the circulating hydrogen. Hydrogen has the lowest density and the highest thermal conductivity of any gas. The low density is important because it reduces the power drag (wind drag) on the rotor. The high heat-transfer coefficient helps cool the rotor so that it does not burn out at high power draw.

Hydrogen is typically provided to the dynamo by a small hydrogen generator or hydrogen bottle. While we have never sold a hydrogen generator to this market, I strongly believe that our membrane reactor hydrogen generators would be competitive; the cost of hydrogen is lower than that of bottled gas; it is far more convenient and safe; and the hydrogen is purer than from electrolysis.

Why isn’t the sky green?

Yesterday I blogged a simple version of why the sky is blue and not green. Now I’d like to add mathematics to the treatment. The simple version said that the sky is blue because the sun’s output is a spectrum centered on yellow, and that molecules of air scatter mostly the short-wavelength, high-frequency colors, indigo and blue. This makes the sky blue; the rest of the sunlight is not scattered, so the sun looks yellow. I then said that the only way for the sky to be green would be if the sun were cooler — orange, say. That answer is sort-of true, but only in a hand-waving way; so here’s the better treatment.

Light scatters off dispersed small particles in proportion to the inverse 4th power of the wavelength. That is to say, we expect air molecules will scatter more of the short-wavelength, cool colors (purple and indigo) than of the warm colors (red and orange), but a real analysis must use the actual spectrum of sunlight, the light power (mW/m2·nm) at each wavelength.


intensity of sunlight as a function of wavelength

The first thing you’ll notice is that the light from our sun isn’t quite yellow, but is mostly green. Clearly plants understand this; otherwise chlorophyll would be yellow. There are fairly large components of blue and red too, but my first correction to the previous treatment is that the yellow color we see as the sun is a trick of the eye called additive color. Our eyes combine the green and red of the sun’s light and see it as yellow. There are some nice classroom experiments you can do to show this, the simplest being to make a Maxwell top with green and red sections, spin the top, and notice that you see the color as yellow.

In order to add some math to the analysis of sky color, I show a table below where I divided the solar spectrum into 7 representative colors with their effective powers. There is some subjectivity to this, but I took red as the wavelengths from 620 to 750 nm, so the center wavelength I claim on the table is 680 nm. The average power of the red was 500 mW/m2nm, so I calculate the power as .5 W/m2nm x 130 nm = 65 W/m2. Similarly, I took orange to be the 30 W/m2 centered on 640 nm, etc. This division is presented in the first 3 columns of the following table. The first line of the table is an approximation of the Rayleigh-scatter factor for our atmosphere, with scatter presented as the percent of the incident light. That is, % scattered = 9E11/wavelength^4.

To use the Rayleigh factor, I calculate 1/wavelength of each color to the 4th power; this is shown in the 4th column. The scatter % is now calculated, and I apply this percent to the light intensities to calculate the amount of each color that I’d expect in the scattered and un-scattered light (the last two columns). Based on this, I find that the predominant wavelength in the color of the sky should be blue-cyan, with significant components of green, indigo, and violet. When viewed through a spectroscope, these are the colors I see (I have a pocket spectroscope and used it an hour ago to check). Viewed through the same spectroscope (with eye protection), I expect the sun should look like a combination of green and red, something our eyes see as yellow (I have not done this personally). At any rate, it appears that the sky looks blue because our eyes see the green + cyan + indigo + purple in the scattered light as sky blue.
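The table bookkeeping can be scripted with the same scatter factor. The red and orange bands below are the ones given in the text; the blue band (470 nm, 75 W/m2) is my illustrative guess for a third row, not a value from the table.

```python
def scatter_percent(wavelength_nm):
    """Rayleigh scatter per the text's factor: % scattered = 9E11/wavelength^4."""
    return 9e11 / wavelength_nm ** 4

# color: (center wavelength in nm, incident power in W/m2)
bands = {"red": (680, 65.0), "orange": (640, 30.0), "blue": (470, 75.0)}
scattered = {c: p * scatter_percent(wl) / 100 for c, (wl, p) in bands.items()}
unscattered = {c: p - scattered[c] for c, (wl, p) in bands.items()}
```

Filling in all seven bands from the figure reproduces the two rightmost columns of the table: mostly blue-cyan in the sky, mostly green-plus-red (yellow to the eye) in the direct beam.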

At sunrise and sunset, when the sun is on the horizon, the scatter percents will be higher, so that all of the sun’s colors will be scattered except red and orange. The sun looks orange then, as expected, but the sky should look blue-green, as that’s the combination of all the other colors of sunlight when orange and red are removed. I’ve not checked this last prediction yet; I’ll have to take my spectroscope to a fine sunset and see what I see when I look at the sky.