Category Archives: Heat transfer

A simpler way to recycle the waste fuel of a SOFC.

My favorite fuel cells burn hydrogen-rich hydrocarbon fuels, like methane (natural gas), instead of pure hydrogen. Methane is far more energy dense and costs far less than hydrogen per unit of energy content. The US has plenty of methane, with pipelines that distribute it to every city and town. It’s a low-CO2 fuel, and we can lower the CO2 impact further by mixing in hydrogen to get hythane. Elon Musk has called hydrogen-powered fuel cells “fool cells”, but methane-powered fuel cells look a lot less foolish. They easily compete with his batteries and with gasoline. Besides, Musk has chosen methane as the fuel for his proposed starship to Mars.

Solid oxide fuel cells, SOFCs, can use methane directly, without any pre-reformer. They operate at 800°C or so. At these temperatures, methane reacts with water (steam) within the fuel cell to form hydrogen by the reaction CH4 + H2O –> 3H2 + CO. The hydrogen, and to a lesser extent the CO, is oxidized in the fuel cell to create electricity, but generally the methane is not 100% consumed. Unused methane, CO, and some hydrogen exit the solid oxide fuel cell along with the products of combustion, CO2 and water.

Several researchers have looked for ways to recycle this waste fuel to capture its energy value. Six years ago, I patented a membrane method to extract the waste fuel and recycle it; see a description here. I now see this method as too complex, and have applied for a patent on a simpler version, shown below as Figure 1. As before, the main work is done by a membrane, but here I dispense with the water gas shift reactor and many of the heat exchangers of the previous approach.

Simple way to improve fuel use in a high temperature fuel cell, using just a membrane.

The fuel cell system of Fig. 1 operates at somewhat elevated pressure, 2 atm or more. It is expected that the majority of the exhaust going to the membrane will be CO2 and water. Most of this will pass through the membrane and exhaust to the air. The rest is mixed with fresh methane and recycled to the fuel cell. Despite the pressure of the fuel cell, very little energy is needed for recirculation since the methane does not go through the membrane. The result is a light, simple, and energy-efficient process. If you are interested, please contact me at REB Research. Or you can purchase the silicone membrane module here. Alternately, see here for flux information and other applications.

Robert Buxbaum, December 8, 2022.

My home-made brandy and still.

My home-made still, and messy lab. Note the masking tape seal and the nylon hoses. Nylon is cheaper than copper. The yellow item behind the burner is the cooling water circulation pump. The wire at top and left is the thermocouple.

I have an apple tree, a peach tree, and some grape vines. They’re not big trees, but they give too much fruit to eat. The squirrels get some, and we give some away. As for the rest, I began making wine and apple jack a few years back, but there’s still more fruit than I can use. Being a chemical engineer, I decided to make brandy this year, so far only with pears and apples.

The first steps were the simplest: I collected fruit in a 5 gallon Ace bucket and mashed it using a 2×4. I then added some sugar, water, and yeast, and let it sit with a cover for a week or two. Bread yeast worked fine for this, and gives a warm flavor, IMHO. A week or so later, I put the mush into a press I had for grapes, shown below, and extracted the fermented juice. I used a cheesecloth bag with one squeezing, no bag with the other. The bag helped, making cleanup easier.

The fruit press, used to extract liquid. A cheese cloth bag helps.

I did a second fermentation with both batches of fermented mash. This was done in a pot over a hot-plate on warm. I added more sugar and some more yeast and let it ferment for a few more days at about 78°F. To avoid bad yeasts, I washed out the pot and the ace bucket with dilute iodine before using them– I have lots of dilute iodine around from the COVID years. The product went into the aluminum “corn-cooker” shown above, 5 or 6 gallon size, that serves as the still boiler. The aluminum cover of the pot was drilled with a 1″ hole; I then screwed in a 10″ length of 3/4″ galvanized pipe, added a reducing elbow, and screwed that into a flat-plate heat exchanger, shown below. The heat exchanger serves as the condenser, while the 3/4″ pipe is like the cap on a moonshiner still. Its purpose is to keep the foam and splatter from getting in the condenser.

I put the pot on the propane burner stand shown, sealed the lid with masking tape (it worked better than duct tape), hooked up the heat exchanger to a water flow, and started cooking. If you don’t feel like making a still this way, you can buy one at Home Depot for about $150. Whatever route you go, get a good heat exchanger/condenser. The one on the Home Depot still looks awful. You need to be able to take heat out as fast as the fire puts heat in, and you’ll need minimal pressure drop or the lid won’t seal. The Home Depot still has too little area and too much back-pressure, IMHO. Also, get a good thermometer and put it in the head-space of the pot. I used a thermocouple. Temperature is the only reasonable way to keep track of the progress and avoid toxic distillate.

A flat-plate heat exchanger, used as a condenser.

The extra weight of the heat exchanger and pipe helps hold the lid down, by the way, but it would not be enough if there was a lot of back pressure in the heat exchanger-condenser. If your lid doesn’t seal, you’ll lose your product. If you have problems, get a better heat exchanger. I made sure that the distillate flows down as it condenses. Up-flow adds back pressure and reduces condenser efficiency. I cooled the condenser with water circulated to a bucket with the cooling water flowing up, counter current to the distillate flow. I could have used tap water via a hose with proper fittings for cooling, but was afraid of major leaks all over the floor.

With the system shown, and the propane on high, it took about 20 minutes to raise the temperature to near boiling. To avoid splatter, I turned down the heater as the temperature approached 150°F. The first distillate came out at 165°F, a temperature that indicated it was not alcohol or anything you’d want to drink. I threw away the first 2-3 oz of this product. You can sniff or sip a tiny amount to convince yourself that this is really nasty: acetone, I suspect, plus ethyl acetate, and maybe some ether and methanol. Throw it away!

After the first 2-3 ounces, I collected everything to 211°F. Product started coming in earnest at about 172°F. I ended distillation at 211°F, when I’d collected nearly 3 quarts. For my first run, my electronic thermometer was off and I stopped too early; you need a good thermometer. The material I collected was OK in taste, especially when diluted a bit. To test the strength, I set some on fire, the classic “proof” test, and judged it to be about 70 proof by this method. I also tried a refractometer, comparing the results to whiskey. I was aiming for 60-80 proof (30-40%).

My 1 gallon aging barrel.

I tried distilling a second time to improve the flavor. The result was stronger, but much worse tasting, with a loss of fruit flavor. By contrast, a much better result came from putting some distillate (one pass) in an oak barrel we had used for wine. Just one day in the barrel helped a lot. I’ve also seen success with charred wood cubes set into a glass bottle of distillate. Note: my barrel, as purchased, had leaks. I sealed them with wood glue before use.

I only looked up distilling law after my runs. It varies state to state. In Michigan, making spirits for consumption, whether 1 gal or 60,000 gal/year, requires a “Distilling, Rectifying, Blending and/or Bottling Spirits” permit from the federal Alcohol and Tobacco Tax and Trade Bureau (“TTB”), plus a Small Distiller license from Michigan. Based on the sale of stills at Home Depot and a call to the ATF, it appears there is little interest in pursuing home distillers who do not sell, despite the activity being illegal. This appears similar to the state of affairs with personal-use marijuana growers in the state. Your state’s laws may be different, and your revenuers may be more enthusiastic. If you decide to distill, here’s some music: the Dukes of Hazzard theme song.

Robert Buxbaum, November 23, 2022.

A Nuclear-blast resistant paint: Starlite and co.

About 20 years ago, an itinerant inventor named Maurice Ward demonstrated a super-insulating paint that he claimed would protect most anything from intense heat. He called it Starlite, and at first no one believed the claims. Then he demonstrated it on TV, see below, by painting a paper-thin layer on a raw egg. He then blasted the egg with a blow torch for a minute, till the outside glowed yellow-red. He then lifted the egg with his hand; it was barely warm! And then, on TV, he broke the shell to show that the insides were completely unchanged: a totally raw egg. The documentary below shows the demonstration and describes what happened next (as of 10 years ago), including an even more impressive series of tests.

Intrigued, but skeptical, researchers at the US White Sands National Laboratory, our nuclear bomb test lab, asked for samples. Ward provided pieces of wood painted, as before, with a “paper thin” layer of Starlite. They subjected these to burning with an oxyacetylene torch, and to a simulated nuclear bomb blast, the fireball radiation simulated by an intense laser at the site. Amazing as it sounds, the paint and the wood beneath emerged barely scorched. The painted wood was not damaged by the laser, nor by an oxyacetylene torch that could burn through 8 inches of steel in seconds.

The famous egg, blow torch experiment.

The inventor wouldn’t say what the paint was made of, or what mechanism allowed it to do this, but clearly it had military and civilian uses. It seems it would have prevented the twin towers from collapsing, or would have greatly extended the time they stayed standing. Similarly, it would protect almost anything from a flame-thrower.

As for the ingredients, Ward said it was non-toxic, and that it contained mostly organic materials, plus borax and some silica or ceramic. According to his daughter, it was “edible”; they’d fed it to dogs and horses without adverse effects.

Starlite-coated wood. The simulated nuclear blast made the char mark at left.

The White Sands engineers speculate that the paint worked by a combination of ablation and intumescence, controlled swelling. The surface, they surmised, formed a foam of char, pure carbon, that swelled to make tiny chambers. If these chambers are small enough, ≤10 nm or so, the mean free path of gas molecules will be severely reduced, reducing the potential for heat transfer. Even more insulating would be foam chambers of about 1 nm. Such chambers will be essentially air-free, and thus very insulating. For a more technical view of how molecular motion affects heat transfer rates, see my essay, here.

Sorry to say we don’t know how big the char chambers are, or if this is how the material works. Ward retained the samples and the formula, and didn’t allow close examination. Clearly, if it works by a char, the char layer is very thin, a few microns at most.

Because Maurice Ward never sold the formula or any of the paint in his lifetime, he made no money on the product. He kept mum about it, as he knew that, as soon as he patented, or sold, or let anyone know what was in the paint, there would be copycats, patent violations, and leaks of any secret formula. Even in the US, many people and companies ignore patent rights, daring you to challenge them in court. And it’s worse in foreign countries, where the government actively encourages violation. There are also legal ways around a patent: a copycat inventor looks for ways to get the same behavior from materials that are not covered in the patent. Ward could not get around these issues, so he never patented the formula or sold the rights. He revealed the formula only to some close family members, and that was it till May, 2020, when a US company, Thermashield, LLC, bought Ward’s lab equipment and notes. They now claim to make the original Starlite. Maybe they do. The product doesn’t seem quite as good; I’ve yet to see an item scorched as little as the sample above.

Many companies today sell versions of Starlite. The formulas are widely different, but all the paints are intumescent, and all the formulas are based on materials Ward would have had on hand, and on the recollections of the TV people and those at White Sands. I’ve bought one of these copycat products, not Thermashield, and tested it. It’s not half bad, though thicker in consistency than the original, and not as resistive.

There are home-made products too, with formulas on the internet and on YouTube. They are applied more like a spackle or a clay. Still, these products insulate remarkably well: a lot better than any normal insulator I’d seen.

If you’d like to try this as a science fair project, here is one formula among several you can try: a mix of glue, baking soda, borax, and sugar, with some water. Some versions use sodium silicate too. The Thermashield folks say that this isn’t the real formula, that there is no PVA glue or baking soda in their product. Still, it works.

Robert Buxbaum, March 13, 2022. Despite my complaints about the US patent system, it’s far better than in any other country I’ve explored. In most countries, patents are granted only as an income stream for the government, and inventors are considered villains: folks who withhold the fruits of their brains for unearned money. Horrible.

Thermal stress failure

Take a glass, preferably a cheap glass, and set it in a bowl of ice-cold water so that the water goes only half-way up the glass. Now pour boiling hot water into the glass. In a few seconds the glass will crack from thermal stress, the force caused by heat going from the inside of the glass out to the bowl of cold water. This sort of failure is not mentioned in any of the engineering materials books that I had in college, or had available for teaching engineering materials. To the extent that it is mentioned on the internet, e.g. here at Wikipedia, the metric presented is not derived and (I think) wrong. Given this, I’d like to present a Buxbaum-derived metric for thermal stress resistance and thermal stress failure. A key aspect: using a thinner glass does not help.

Before going on to the general case of thermal stress failure, let’s consider the glass, and try to compute the magnitude of the thermal stress. The glass is being torn apart, and that suggests that quite a lot of stress is being generated by a ∆T of 100°C.

To calculate the thermal stress, consider the thermal expansivity of the material, α. Glass — normal cheap glass — has a thermal expansivity α = 8.5×10^-6 meters/meter·°C (or 8.5×10^-6 foot/foot·°C). For every degree Centigrade a meter of glass is heated, it will expand 8.5×10^-6 meters, and for every degree it is cooled, it will shrink 8.5×10^-6 meters. If you consider the circumference of the glass to be L (measured in meters), then
∆L/L = α ∆T.

where ∆L is the change in length due to heating, and ∆L/L is sometimes called the “strain.” Now, let’s call the amount of stress caused by this expansion σ, sigma, measured in psi or GPa. It is proportional to the strain, ∆L/L, and to the elasticity constant, E (also called Young’s elastic constant):

σ = E ∆L/L.

For glass, Young’s elasticity constant, E = 75 GPa. Since strain was equal to α ∆T, we find that

σ =Eα ∆T 

Thus, for glass and a ∆T of 100°C, σ = 100°C x 75 GPa x 8.5×10^-6/°C = 0.064 GPa = 64 MPa. This is about 630 atm, or 9250 psi.
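As a numerical check, here is a short Python sketch of this arithmetic, using the values for E, α, and ∆T quoted above:

```python
# Thermal stress in ordinary glass for a 100 °C temperature difference:
# sigma = E * alpha * dT
E = 75e9        # Young's modulus of glass, Pa (75 GPa)
alpha = 8.5e-6  # thermal expansivity, 1/°C
dT = 100.0      # temperature difference, °C

sigma = E * alpha * dT            # stress, Pa
print(sigma / 1e6)                # ≈ 63.75, i.e. about 64 MPa
print(sigma / 101325)             # ≈ 629 atm
print(sigma / 6894.76)            # ≈ 9250 psi
```

Since glass’s ultimate tensile strength is only about 40 MPa, this stress is more than enough to break it.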

As it happens, the ultimate tensile strength of ordinary glass is only about 40 MPa =  σu. This, the maximum force per area you can put on glass before it breaks, is less than the thermal stress. You can expect a break here, and wherever σu < Eα∆T. I thus create a characteristic temperature difference for thermal stress failure:

The Buxbaum failure temperature, ß = σu/(Eα)

If ∆T of more than ß is applied to any material, you can expect a thermal stress failure.

The Wikipedia article referenced above provides a ratio for thermal resistance. The units are perhaps heat load per unit area and time. How you would use this ratio I don’t quite know; it includes k, the thermal conductivity, and ν, the Poisson ratio. Including the thermal conductivity only makes sense, to me, if you think you’ll have a defined thermal load, a defined amount of heat transfer per unit area and time. I don’t think this is a normal way to look at things. Including the Poisson ratio, too, seems a misunderstanding. The assumption is that a high Poisson ratio decreases the effect of thermal stress. The thought behind this, as I understand it, is that heating one side of a curved part (the inside, for example) will decrease the thickness of that side, reducing the effective stress. This is a mistake, I think; heating never decreases the thickness of the part being heated, but only increases it. The heated part will expand in all directions. Thus, I think my ratio is the correct one. Please find following a list of failure temperatures for various common materials.

Stress-strain properties of engineering materials, including thermal expansion, ultimate stress (MPa), and Young’s elastic modulus (GPa).

You will notice that most materials are a lot more resistant to thermal stress than glass, and some are quite a lot less resistant. Based on the above, we can expect that ice will fracture at a temperature difference as small as 1°C. Similarly, cast iron will crack with relatively little effort, while steel is a lot more durable (I hope that so-called cast iron skillets are really steel skillets). Pyrex is a form of glass that is more resistant to thermal breakage; that’s mainly because, for Pyrex, α is a lot smaller than for ordinary, cheap glass. I find it interesting that diamond is the material most resistant to thermal failure, followed by Invar, a low-expansion steel, and ordinary rubber.
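The comparison can be sketched in a few lines of Python. The glass values are the ones used above; the other entries are rough handbook numbers for illustration, not the exact values in my table, so treat their ß values as order-of-magnitude only:

```python
# Buxbaum failure temperature, beta = sigma_u / (E * alpha).
# Glass values are from the text; the others are rough handbook
# numbers, used here only for illustration.
materials = {
    #          sigma_u (Pa), E (Pa), alpha (1/°C)
    "glass": (40e6,  75e9,  8.5e-6),
    "pyrex": (40e6,  64e9,  3.3e-6),
    "steel": (400e6, 200e9, 12e-6),
    "ice":   (1e6,   9e9,   51e-6),
}
for name, (sigma_u, E, alpha) in materials.items():
    beta = sigma_u / (E * alpha)
    print(f"{name:6s} beta = {beta:6.1f} °C")
```

For glass this gives ß ≈ 63°C, consistent with the break we predicted for a 100°C difference; for ice, ß is only a couple of degrees.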

Robert E. Buxbaum, July 3, 2019. I should note that, for several of these materials, those with very high thermal conductivities, you’d want to use a very thick sample of material to produce a temperature difference of 100°C.

The speed of sound, Buxbaum’s correction

Ernst Mach showed that sound must travel at a particular speed through any material, one determined by the conservation of energy and of entropy. At room temperature and 1 atm, that speed is theoretically predicted to be 343 m/s. For a wave to move at any other speed, either the laws of energy conservation would have to fail, or ∆S ≠ 0 and the wave would die out. This is the only speed where you could say there is a traveling wave, and experimentally, this is found to be the speed of sound in air, to good accuracy.

Still, it strikes me that Mach’s assumptions may have been too restrictive for short-distance sound waves. Perhaps there is room for other sound speeds if you allow ∆S > 0, and consider sound that travels short distances and dies out far from the source. Waves at these, other speeds might affect music appreciation, or headphone design. As these waves were never treated in my thermodynamics textbooks, I wondered if I could derive their speed in any nice way, and if they were faster or slower than the main wave? (If I can’t use this blog to re-think my college studies, what good is it?)

It can help to think of the sound wave moving down a constant-area tube of still air at speed u, with us moving along at the same speed as the wave. In this view, the wave appears stationary, but there is a wind of speed u approaching it from the right.

Imagine the sound-wave moving to the right, down a constant area tube at speed u, with us moving along at the same speed. Thus, the wave appears stationary, with a wind of speed u from the right.

As a first step to trying to re-imagine Mach’s calculation, here is one way to derive the original, ∆S = 0, speed of sound. I showed in a previous post that the entropy change for compression can be imagined to have two parts, a pressure part at constant temperature: dS/dV at constant T = dP/dT at constant V. This part equals R/V for an ideal gas. There is also a temperature-at-constant-volume part of the entropy change: dS/dT at constant V = Cv/T. Dividing the two equations, we find that, at constant entropy, dT/dV = RT/CvV = P/Cv. For a case where ∆S>0, dT/dV > P/Cv.

Now let’s look at the conservation of mechanical energy. A compression wave gives off a certain amount of mechanical energy, or work on expansion, and this work accelerates the gas within the wave. For an ideal gas, the internal energy is stored only in its temperature. Let’s now consider a sound wave going down a tube, flowing left to right, and let’s move our reference plane along with the wave at the same speed, so the wave seems to sit still while a flow of gas moves toward it from the right at the speed of the sound wave, u. For this flow system, energy is conserved though no heat is removed and no useful work is done. Thus, any change in enthalpy only results in a change in kinetic energy: dH = -d(u²)/2 = -u du, where H here is a per-mass enthalpy (enthalpy per kg).

dH = TdS + VdP. This can be rearranged to read: TdS = dH - VdP = -u du - VdP.

We now use conservation of mass to put du into terms of P, V, and T. By conservation of mass, u/V is constant, or d(u/V) = 0. Taking the derivative of this quotient, du/V - u dV/V² = 0. Rearranging, we get du = u dV/V (no assumptions about entropy here). Since dH = -u du, we see that u² dV/V = -dH = -TdS - VdP. It is now common to say that dS = 0 across the sound wave, and thus find that u² = -V²(dP/dV) at constant S. For an ideal gas, this last derivative gives u² = PVCp/Cv, so the speed of sound is u = √(PVCp/Cv), with V the volume per unit mass (m³/kg).
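As a sanity check on this result, u = √(PVCp/Cv) can be evaluated for air. With V the specific volume, PV = RT/M for an ideal gas, so the pressure cancels and u = √(γRT/M). A quick Python sketch, using γ = 7/5 for a diatomic gas and M ≈ 0.029 kg/mol for air:

```python
import math

# Speed of sound from u = sqrt((Cp/Cv) * P * V). With V the specific
# volume, RT/(P*M), the pressures cancel: u = sqrt(gamma * R * T / M).
gamma = 7 / 5      # Cp/Cv for a diatomic ideal gas
R = 8.314          # gas constant, J/mol·K
M = 0.029          # average molar mass of air, kg/mol
T = 293.0          # room temperature, K

u = math.sqrt(gamma * R * T / M)
print(u)   # ≈ 343 m/s
```

This matches the 343 m/s quoted at the start of the post, supporting the isentropic derivation.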

The problem comes in where we say that ∆S > 0. At this point, I would say that u² = -V(dH/dV) = VCp dT/dV > PVCp/Cv. Unless I’ve made a mistake (always possible), I find that there is a small, leading, non-adiabatic sound wave that goes ahead of the ordinary sound wave and is experienced only close to the source. It is caused by mechanical energy that becomes degraded to raising T, and it gives rise to more compression than would be expected for iso-entropic waves.

This should have some relevance to headphone design and speaker design since headphones are heard close to the ear, while speakers are heard further away. Meanwhile the recordings are made by microphones right next to the singers or instruments.

Robert E. Buxbaum, August 26, 2014

If hot air rises, why is it cold on mountain-tops?

This is a child’s question that’s rarely answered to anyone’s satisfaction. To answer it well requires college level science, and by college the child has usually been dissuaded from asking anything scientific that would likely embarrass teacher — which is to say, from asking most anything. By a good answer, I mean here one that provides both a mathematical, checkable prediction of the temperature you’d expect to find on mountain tops, and one that also gives a feel for why it should be so. I’ll try to provide this here, as previously when explaining “why is the sky blue.” A word of warning: real science involves mathematics, something that’s often left behind, perhaps in an effort to build self-esteem. If I do a poor job, please text me back: “if hot air rises, what’s keeping you down?”

As a touchy-feely answer, please note that all materials have internal energy, generally associated with the kinetic plus potential energy of the molecules. It enters whenever a material is heated or has work done on it, and for gases, to good approximation, it equals the heat capacity of the gas times its temperature. For air, this is about 7 cal/mol·°K times the temperature in degrees Kelvin. Average air at sea level is taken to be at 1 atm, or 101,325 Pascals, and 15.02°C, or 288.15°K; the internal energy of this air is thus 288.15 x 7 = 2017 cal/mol = 8440 J/mol. The internal energy of the air will decrease as the air rises, and the temperature drops, for reasons I will explain below. Most diatomic gases have a heat capacity of 7 cal/mol·°K, a fact that is only explained by quantum mechanics; if not for quantum mechanics, the heat capacities of diatomic gases would be about 9 cal/mol·°K.

Let’s consider a volume of this air at this standard condition, and imagine that it is held within a weightless balloon, or plastic bag. As we pull that air up, by pulling up the bag, the bag starts to expand because the pressure is lower at high altitude (air pressure is just the weight of the air above). No heat is exchanged with the surrounding air because our air will always be about as warm as its surroundings; or, if you like, you can imagine that the weightless balloon prevents it. In either case, the molecules lose energy as the bag expands because they always collide with an outwardly moving wall. Alternately, you can say that the air in the bag is doing work on the exterior air, since expansion is work, but we are putting no work into the air, as it takes no work to lift it. The buoyancy of the air in our balloon is always about that of the surrounding air, or so we’ll assume for now.

A classic, difficult way to calculate the temperature change with altitude is to calculate the work being done by the air in the rising balloon. Work done is force times distance, w = ∫f dz, and this work should equal the effective cooling, since heat and work are interchangeable. There’s an integral sign here to account for the fact that force is proportional to pressure, and the air pressure decreases as the balloon goes up. We now note that w = ∫f dz = -∫P dV because pressure, P, is force per unit area, and volume, V, is area times distance. The minus sign is because the work is being done by the air, not on the air; it involves a loss of internal energy. Sorry to say, the temperature and pressure of the air keep changing with volume and altitude, so it’s hard to solve the integral, but there is a simple approach based on entropy, S.

Les Droites Mountain, in the Alps, at the intersect of France Italy and Switzerland is 4000 m tall. The top is generally snow-covered.

I discussed entropy last month, and showed it was a property of state, and further, that for any reversible path, ∆S= (Q/T)rev. That is, the entropy change for any reversible process equals the heat that enters divided by the temperature. Now, we expect the balloon rise is reversible, and since we’ve assumed no heat transfer, Q = 0. We thus expect that the entropy of air will be the same at all altitudes. Now entropy has two parts, a temperature part, Cp ln T2/T1 and a pressure part, R ln P2/P1. If the total ∆S=0 these two parts will exactly cancel.

Consider that at 4000m, the height of Les Droites, a mountain in the Mont Blanc range, the typical pressure is 61,660 Pa, about 60.85% of sea level pressure (101325 Pa). If the air were reduced to this pressure at constant temperature (∆S)T = -R ln P2/P1 where R is the gas constant, about 2 cal/mol°K, and P2/P1 = .6085; (∆S)T = -2 ln .6085. Since the total entropy change is zero, this part must equal Cp ln T2/T1 where Cp is the heat capacity of air at constant pressure, about 7 cal/mol°K for all diatomic gases, and T1 and T2 are the temperatures (Kelvin) of the air at sea level and 4000 m. (These equations are derived in most thermodynamics texts. The short version is that the entropy change from compression at constant T equals the work at constant temperature divided by T,  ∫P/TdV=  ∫R/V dV = R ln V2/V1= -R ln P2/P1. Similarly the entropy change at constant pressure = ∫dQ/T where dQ = Cp dT. This component of entropy is thus ∫dQ/T = Cp ∫dT/T = Cp ln T2/T1.) Setting the sum to equal zero, we can say that Cp ln T2/T1 =R ln .6085, or that 

T2 = T1 (0.6085)^(R/Cp)

T2 = T1 (0.6085)^(2/7), where 0.6085 is the pressure ratio at 4000 m, and the exponent is 2/7 because, for air and most diatomic gases, R/Cp = 2/7 to very good approximation, matching the prediction from quantum mechanics.

From the above, we calculate T2 = 288.15 x 0.8676 = 250.0°K, or -23.15°C. This is cold enough to provide snow on Les Droites nearly year round, and it’s pretty accurate. The typical temperature at 4000 m is 262.17 K (-11°C). That’s 26°C colder than at sea level, and only 12°C warmer than we’d predicted.
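The whole calculation fits in a few lines of Python, using the sea-level temperature and the 4000 m pressure quoted above:

```python
# Iso-entropic (adiabatic) temperature at altitude:
# T2 = T1 * (P2/P1)**(R/Cp), with the numbers from the text.
T1 = 288.15               # sea-level temperature, K
P2 = 61660.0              # typical pressure at 4000 m, Pa
P1 = 101325.0             # sea-level pressure, Pa
R_over_Cp = 2 / 7         # for air and most diatomic gases

T2 = T1 * (P2 / P1) ** R_over_Cp
print(T2, T2 - 273.15)    # ≈ 250.0 K, about -23.1 °C
```

Changing only P2 lets you check the prediction against any mountain whose typical pressure you know.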

There are three weak assumptions behind the 12°C error in our prediction: (1) that the air that rises is no hotter than the air that does not, (2) that the air’s not heated by radiation from the sun or earth, and (3) that there is no heat exchange with the surrounding air, e.g. from rain or snow formation. The last of these errors is thought to be the largest, but it’s still not large enough to cause serious problems.

Snow on Kilimanjaro, Tanzania 2013. If global warming models were true, the ground should be 4°C warmer than 100 years ago, and the air at this altitude, about 7°C (12°F) warmer; and the snow should be gone.

You can use this approach, with different exponents, to estimate the temperature at the center of Jupiter, or at the center of neutron stars. This iso-entropic calculation is the model that’s used here, though it’s understood that it may be off by a fair percentage. You can also ask questions about global warming: increased CO2 at this level is supposed to cause extreme heating at 4000 m, enough to heat the earth below by 4°C/century or more. As it happens, the temperature and snow cover on Les Droites and other Alp ski areas have been studied carefully for many decades; they are not warming as best we can tell (here’s a discussion). By all rights, Mt Blanc should be Mt Green by now; no one knows why. The earth too seems to have stopped warming. My theory: clouds.

Robert Buxbaum, May 10, 2014. Science requires you check your theory for internal and external weakness. Here’s why the sky is blue, not green.

Ivanpah’s solar electric worse than trees

Recently the DoE committed 1.6 billion dollars to the completion of the last two of three solar-natural gas-electric plants on a 10 mi² site at Lake Ivanpah in California. The site is rated to produce 370 MW of power, in a facility that uses far more land than nuclear power, at a cost significantly higher than nuclear. The 3900 MW Drax plant (UK) cost 1.1 billion dollars and produces 10 times more power on a much smaller site. Ivanpah needs a lot of land because its generators require 173,500 billboard-size, sun-tracking mirrors to heat boilers atop three 750-foot towers (2 1/2 times the height of the Statue of Liberty). The boilers feed steam to low-pressure, low-efficiency (28%) Siemens turbines. At night, natural gas provides the heat to make the steam, but only at the same low efficiency. Siemens makes higher-efficiency turbine plants (59% efficiency), but these cannot be used here because the solar oven temperature is only 900°F (about 480°C), while normal Siemens plants operate at 3650°F (2000°C).

The Ivanpah thermal solar-natural gas project will look like the Crescent Dunes thermal-solar project shown here, but will be bigger.

The first construction of the Ivanpah thermal solar-natural-gas project. Each circle of mirrors extends out to cover about 2 square miles of the 10 mi² site.

So far, the first of the three towers is operational, but it has been producing at only 30% of rated low-efficiency output. These are described as “growing pains.” There are also problems with cooked birds, blinded pilots, and the occasional fire from the misaligned death ray; more pains, I guess. There is also the problem of lightning. When hit by lightning, the mirrors shatter into millions of shards of glass over a 30-foot radius, according to Argus, the mirror-cleaning company. This presents a less-than-attractive environmental impact.

As an exercise, I thought I’d compare this site’s electric output to the amount one could generate with a wood-burning boiler fed by trees grown on a similar-sized (10 sq. mile) site. Trees are cheap, but only about 10% efficient at converting solar power to chemical energy, so you might imagine that trees could not match the power of the Ivanpah plant. But dry wood burns hot, at 1100-1500°C, so the efficiency of a wood-powered steam turbine is higher, about 45%.

About 820 MW of sunlight falls on every 1 mi2 plot, or 8200 MW for the Ivanpah site. If trees convert 10% of this to chemical energy, and we convert 45% of that to electricity, the site would generate 369 MW of electric power, exactly the output that Ivanpah is rated for. Trees cost far less than mirrors, electricity from wood burning typically costs about 4¢/kWh, and the environmental impact of tree farming is likely less than that of the solar mirrors mentioned above.
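The arithmetic here is simple enough to check in a few lines. A quick sanity check in Python, using only the numbers quoted in the text:

```python
# Sanity check of the tree-farm numbers quoted above.
insolation_mw_per_mi2 = 820   # average sunlight per square mile, MW
site_mi2 = 10                 # an Ivanpah-sized site
tree_eff = 0.10               # sunlight -> chemical energy stored in wood
turbine_eff = 0.45            # hot, dry-wood-fired steam turbine

electric_mw = insolation_mw_per_mi2 * site_mi2 * tree_eff * turbine_eff
print(f"tree-farm electric output: {electric_mw:.0f} MW")  # 369 MW
```

The 10% and 45% figures are the rough values assumed in the text, not measured data, but the product does land right on Ivanpah's 370 MW rating.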

There is another advantage to the high temperature of the wood fire. The use of high-temperature turbines means that any power made at night with natural gas will be produced at higher efficiency. The Ivanpah turbines run at low temperature and low efficiency when burning natural gas (at night) and thus output half the power of a normal Siemens plant for every BTU of gas. Because of this, it seems that the Ivanpah plant may use as much natural gas to make its 370 MW during a 12-hour night as a higher-efficiency system would use operating 24 hours, day and night. The additional generation from solar might thus be zero.
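The natural-gas claim can be checked with the efficiency figures in the text. A rough sketch, assuming 12 hours of full-rated gas operation per night (a simplification):

```python
# Night-time gas use: Ivanpah's 28%-efficient steam turbines vs a
# 59%-efficient plant (efficiencies from the text; 12-hour night assumed).
rated_mw = 370
night_hours = 12
low_eff, high_eff = 0.28, 0.59

gas_heat_mwh = rated_mw * night_hours / low_eff   # gas heat burned per night
steady_mw = gas_heat_mwh * high_eff / 24          # 24-h output from same gas

print(f"gas heat per night: {gas_heat_mwh:.0f} MWh")
print(f"round-the-clock output at 59% on the same gas: {steady_mw:.0f} MW")
```

On these assumptions, the same nightly gas burn run through a 59%-efficient plant would sustain about 390 MW around the clock, which is more than Ivanpah's full 370 MW rating, sun and all.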

If you think the problems here are specific to this design, I should note that the Ivanpah solar project is just one of several that our Obama government is funding, and none are doing particularly well. As another example, the $1.45 B solar project on farmland near Gila Bend, Arizona is rated to produce 35 MW, about 1/10 of the Ivanpah project at 2/3 the cost. It was built in 2010 and so far has not produced any power.

Robert E. Buxbaum, March 12, 2014. I’ve tried using wood to make green gasoline. No luck so far. And I’ve come to doubt the likelihood that we can stop global warming.

Nuclear fusion

I got my PhD at Princeton University 33 years ago (1981) working on the engineering of nuclear fusion reactors, and I thought I’d use this blog to rethink the issues. I find I’m still of the opinion that developing fusion is important, as it seems the best long-range power option. Civilization will still need significant electric power 300 to 3000 years from now, it seems, when most other fuel sources are gone. Fusion is also one of the few options for long-range space exploration, needed if we ever decide to send colonies to Alpha Centauri or Saturn. I thought fusion would be ready by now, but it is not, and commercial use seems unlikely for the next ten years at least, an indication of the difficulties involved and a certain lack of urgency.

Oil, gas, and uranium didn’t run out like we’d predicted in the mid-70s. Instead, population growth slowed, new supplies were found, and better methods were developed to recover and use them. Shale oil and fracking unlocked hydrocarbons we thought were unusable, and nuclear fission reactors got better: safer and more efficient. At the same time, the more we studied, the clearer it became that fusion’s technical problems are much harder to tame than uranium fission’s.

Uranium fission was/is frighteningly simple — far simpler than even the most basic fusion reactor. The first nuclear fission reactor (1940) involved nothing more than uranium pellets in a pile of carbon bricks stacked in a converted squash court at the University of Chicago. No outside effort was needed to get the large, unstable uranium atoms to split into smaller, more stable ones. Water circulating through the pile removed the heat released, and control was maintained by people lifting and lowering cadmium control rods while standing on the pile.

A fusion reactor requires high temperature or energy to make anything happen. Fusion energy is produced by combining small, unstable heavy-hydrogen atoms into helium, a bigger, more stable atom (see figure). To drive this reaction you need to operate at the equivalent of about 500,000,000°C, and containing the plasma requires (typically) a magnetic bottle, something far more complex than a pile of graphite bricks. The reward was smaller too: “only” about 1/13th as much energy per event as fission. We knew the magnetic bottles were going to be tricky (there was no obvious heat transfer and control method, for example), but fusion seemed important enough, and the problems seemed manageable enough, that fusion power seemed worth pursuing, with just enough difficulties to make it a challenge.

Basic fusion reaction: deuterium + tritium react to give helium, a neutron and energy.


The plan at Princeton, and most everywhere, was to use a TOKAMAK, a doughnut-shaped reactor like the one shown below, but roughly twice as big; TOKAMAK is a Russian acronym. The doughnut served as one side of an enormous transformer. Hydrogen fuel was ionized into a plasma (a neutral soup of protons and electrons) and heated to 300,000,000°C by a current in the TOKAMAK generated by varying the current in the other side of the transformer. Plasma containment was provided by enormous magnets on the top and bottom, and by ring-shaped magnets arranged around the torus.

As development went on, we found we kept needing bigger and bigger doughnuts and stronger and stronger magnets in an effort to balance heat loss with fusion heating. The number density of hydrogen atoms per volume, n, is proportional to the magnetic field strength. This matters because the fusion heat rate per volume is proportional to n squared, n², while heat loss is proportional to n divided by the residence time, something we called tau, τ. The main heat loss was the hot plasma reaching the reactor surface. A heat balance ratio, heat in divided by heat out, was therefore seen to be important, and it was more-or-less proportional to nτ. As the target temperatures increased, we found we needed ever larger nτ to achieve a positive heat balance, and this translated into ever larger reactors and ever stronger magnetic fields. Even here there was a limit: about 1 billion Kelvin, a thermodynamic temperature where the fusion reaction runs backward and no net energy is produced. The Princeton design was huge, with super-strong magnets, and was operated at 300 million °C, near the top of the reaction curve. If the temperature drifted much above or below this, the fire would go out. There was no room for error, and relatively little energy output per volume compared to fission.
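The nτ scaling can be illustrated with a toy calculation. The constants below are arbitrary placeholders, not real reactor values; the point is only the scaling:

```python
# Toy illustration of the n-tau scaling (arbitrary units, NOT real reactor
# constants): fusion heating per volume ~ n^2, conduction loss ~ n/tau.
def heat_balance_ratio(n, tau, k_fusion=1.0, k_loss=1.0):
    heating = k_fusion * n ** 2      # fusion rate scales as density squared
    loss = k_loss * n / tau          # plasma heat leaking to the wall
    return heating / loss            # proportional to n * tau

base = heat_balance_ratio(n=1.0, tau=1.0)
# Doubling either the density or the containment time doubles the ratio,
# which is why the designs kept needing stronger magnets and bigger doughnuts.
print(heat_balance_ratio(2.0, 1.0) / base)  # 2.0
print(heat_balance_ratio(1.0, 2.0) / base)  # 2.0
```

Since n scales with field strength and τ grows with machine size, "larger nτ" means exactly what the paragraph says: bigger reactors and stronger magnets.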

Fusion reaction options and reaction rates.


The most likely reaction involved deuterium and tritium, referred to as D and T. This was the reaction of the two heavy isotopes of hydrogen shown in the figure above — the same reaction used in hydrogen bombs, a point we rarely made to the public. For each reaction D + T –> He + n, you get 17.6 million electron volts (17.6 MeV). This is 17.6 million times the energy you get for an electron moving across one volt, but only 1/13 the energy of a fission reaction. By comparison, the energy of water-forming, H2 + 1/2 O2 –> H2O, is the equivalent of two electrons moving across 1.2 volts, or 2.4 electron volts (eV), over 7 million times less than fusion.
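A quick check of the energy ratio, using the figures in the paragraph above:

```python
# Energy per event: D-T fusion vs chemically forming one water molecule.
fusion_ev = 17.6e6          # 17.6 MeV per D + T -> He + n
water_ev = 2 * 1.2          # two electrons through ~1.2 V, about 2.4 eV

ratio = fusion_ev / water_ev
print(f"fusion/chemical energy ratio per event: {ratio:,.0f}")  # ~7.3 million
```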

The Princeton design involved reacting 40 g/hr of heavy hydrogen to produce 8 mol/hr of helium and 4000 MW of heat. The heat was to be converted to electricity at 38% efficiency using a topping cycle, a modern (relatively untried) design. Of the roughly 1500 MW of electricity that was supposed to be produced, all but about 400 MW was to be delivered to the power grid, if everything worked right. Sorry to say, the value of the electricity did not rise anywhere near as fast as the cost of the reactor and turbines. Another problem: 1100 MW was more than could be easily absorbed by any electrical grid. The output was high and steady, and could not be easily adjusted to match fluctuating customer demand. By contrast, a coal plant’s or fuel cell’s output can be adjusted easily (and a nuclear plant’s with a little more difficulty).
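The power balance works out as follows, in the round numbers of the text:

```python
# Princeton design power balance (round numbers from the text).
thermal_mw = 4000      # fusion heat output, MW
efficiency = 0.38      # topping-cycle heat-to-electricity conversion
internal_mw = 400      # electricity consumed by the plant itself

gross_mw = thermal_mw * efficiency   # ~1520 MW gross electric
to_grid_mw = gross_mw - internal_mw  # ~1100 MW, steady and hard to throttle

print(f"gross: {gross_mw:.0f} MW, delivered to grid: {to_grid_mw:.0f} MW")
```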

Because of the need for heat balance, it turned out that at least 9% of the hydrogen had to be burnt per pass through the reactor. The heat lost per mol by conduction to the wall was, to good approximation, the heat capacity of each mol of hydrogen ions, 82 J/°C mol, times the temperature of the ions, 300 million °C, divided by the containment time, τ. The Princeton design was supposed to have a containment time of about 4 seconds, so the heat loss by conduction was 6.2 GW per mol. This must be matched by the molar heat of reaction that stays in the plasma: 17.6 MeV times Faraday’s constant, 96,800, divided by 4 seconds (= 430 GW per mol reacted), divided by 5, since of the 430 GW/mol produced in fusion reactions only 1/5 remains in the plasma (= 86 GW/mol); the other 4/5 of the energy of reaction leaves with the neutron. Balancing these sets the minimum burn fraction per pass; radiation losses push the number higher still. Burn much more or less than this and you had problems. The only other solution was to increase τ beyond 4 seconds, but this meant ever bigger reactors.
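The heat balance can be reproduced numerically. This sketch uses the text's figures; note the conduction-only estimate comes out near 7%, with radiation losses pushing the requirement toward the 9% quoted:

```python
# Reproducing the heat balance with the text's numbers.
cp = 82.0            # J/(mol.K), heat capacity per mol of hydrogen ions
T = 300e6            # ion temperature, ~300 million deg C
tau = 4.0            # containment time, seconds
F = 96800.0          # Faraday's constant, C/mol, as quoted in the text
E_fusion = 17.6e6    # eV per D-T event

loss = cp * T / tau                      # conduction loss, ~6.2e9 W per mol
heat_in_plasma = E_fusion * F / tau / 5  # only 1/5 of fusion heat stays in

burn_fraction = loss / heat_in_plasma
print(f"minimum burn fraction, conduction only: {burn_fraction:.1%}")
# ~7%; radiation losses raise the requirement toward the ~9% quoted above
```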

There was also a material handling issue: to get enough fuel hydrogen into the center of the reactor, quite a lot of radioactive gas had to be handled and extracted from the plasma chamber. The fuel was to be frozen into tiny spheres of near-solid hydrogen and injected into the reactor at ultrasonic velocity; any slower and the spheres would evaporate before reaching the center. As 40 grams per hour was 9% of the feed, it became clear that we had to be ready to produce and inject about 1 pound per hour of tiny spheres. These “snowballs-in-hell” had to be small so they didn’t dampen the fire. The vacuum system had to be big enough to handle the pound per hour or so of unburned hydrogen and ash while keeping the pressure near total vacuum. You then had to purify the hydrogen from the helium ash and remake the little spheres to feed back to the reactor. There were no easy engineering problems here, but I found them enjoyable enough. With a colleague, I came up with a cute, efficient high-vacuum pump and recycling system, and published it here.

Yet another engineering challenge was finding a material for the first wall, the inner wall of the doughnut facing the plasma. Of the 4000 MW of heat energy produced, about 1000 MW, all the conduction and radiation heat, is deposited in the first wall and has to be conducted away. Conducting this heat means the wall must carry an enormous coolant flow and withstand an enormous amount of thermal stress. One possible approach was to use a liquid wall, but I’ve recently come up with a rather nicer solid-wall solution (I think) and have filed a patent; more on that later, perhaps, after/if the patent is accepted. Another engineering challenge was making T, tritium, for the D-T reaction. Tritium is not found in nature, but has to be made from the neutron created in the reaction and from lithium in a breeder blanket, Li + n –> He + T. I examined all possible options for extracting this tritium from the lithium at low concentrations as part of my PhD thesis, and eventually found a nice solution. The education I got in the process is used in my REB Research hydrogen engineering business.

Man inside the fusion reactor doughnut at ITER. He'd better leave before the 8,000,000°C plasma turns on.


Because of its complexity and all these engineering challenges, fusion power never reached the maturity of fission power; then Three Mile Island happened and ruined the enthusiasm for all things nuclear. There were some claims that fusion would be safer than fission, but because of the complexity, and because of improvements in fission, I am not convinced that fusion would ever be even as safe. And the long-term need keeps moving out: we keep finding more uranium, and we’ve developed breeder reactors and a thorium cycle, technologies that make it very unlikely we will run out of fission material any time soon.

The main near-term advantage I see for fusion over fission is that there are fewer radioactive products, see comparison. A secondary advantage is neutrons: fusion reactors make excess neutrons that can be used to make tritium or other unusual elements, and a need for one of these could favor the development of fusion power. And finally, there’s the long-term need: space exploration, or basic power when we run out of coal, uranium, and thorium. Fine advantages, but unlikely to be important for a hundred years.

Robert E. Buxbaum, March 1, 2014. Here’s a post on land use, on the aesthetics of engineering design, and on the health risks of nuclear power. The sun’s nuclear fusion reactor is unstable too — one possible source of the chaotic behavior of the climate. Here’s a control joke.

Ocean levels down from 3000 years ago; up from 20,000 BC

In 2006 Al Gore claimed that industry was causing 2-5°C of global warming per century, and that this, in turn, would cause the oceans to rise by 8 m by 2100. Despite a record cold snap and record ice levels in the Antarctic, the US this week banned all incandescent light bulbs of 40 W and over in an effort to stop the tragedy. This was a bad move, in my opinion, for a variety of reasons, not least because the preferred replacement, compact fluorescents, seems to produce more pollution than incandescents when you include disposal of the mercury and heavy metals they contain. And then there is the weak connection between US industry and global warming.

From the geologic record, we know that 2-5° higher temperatures have been seen without major industrial outputs of pollution. These temperatures do produce the sea level rises that Al Gore warns about. Temperatures and sea levels were higher 3200 years ago (the Trojan war period), without any significant technology. Temperatures and sea levels were also higher 1900 years ago during the Roman warming. In those days Pevensey Castle (England), shown below, was surrounded by water.


During Roman times the world was warmer, and Pevensey Castle (right) was surrounded by water at high tide. If Al Gore is right about global warming, it will be surrounded by water again by 2100.

From a plot of sea level and global temperature, below, we see that during cooler periods the sea was much shallower than today: 140 m shallower 20,000 years ago at the end of the last ice age, for example. In those days, people could walk from Asia to Alaska. Climate, like weather, appears to be cyclically chaotic. I don’t think the last ice age ended because of industry, but it is possible that industry might help the earth to warm by 2-5°C by 2100, as Gore predicts. That would raise the sea levels, assuming there is no new ice age.

Global temperatures and ocean levels rise and sink together


While I doubt there is much we could do to stop the next ice age (it is very hard to change a chaotic cycle), trying to stop global cooling seems more worthwhile than trying to stop warming. We could survive a 2 m rise in the seas, e.g. by building dykes, but 2° of cooling would be disastrous. It would come with a drastic reduction in crops, as in the famine year of 1816. And if the drop continued to a new ice age, that would be much worse. The last ice age included mile-high glaciers that extended over all of Canada and reached to New York. Only the polar bear and the saber-toothed tiger did well (here’s a Canada joke, and my saber-toothed tiger sculpture).

The good news is that the current global temperature models appear to be wrong, or highly over-estimated. Average global temperatures have not changed in the last 16 years, though the Chinese keep polluting the air (for some reason, Gore doesn’t mind Chinese pollution). It is true that Arctic ice extent is low, but Antarctic ice is at record high levels. While I don’t want more air pollution, I’d certainly re-allow US incandescent light bulbs. In cases where you don’t know otherwise, perhaps the wisest course is to do nothing.

Robert Buxbaum, January 8, 2014

Fractal power laws and radioactive waste decay

Here’s a fairly simple model for nuclear reactor decay heat versus time. It’s based on a fractal model I came up with for dealing with the statistics of crime, fires, etc. The start was to notice that radioactive waste is typically a mixture of isotopes with different decay times and different decay heats. I then came to suspect that there would be a general fractal relation, and that this relation would continue to hold as the elements of the mixed waste decayed to more stable, less radioactive products. After looking a bit, it seems that the fractal time characteristic is time to the 1/4 power, that is:

heat output = H° exp(-a t^(1/4)).

Here H° is the heat output rate at time t = 0, and “a” is a characteristic of the waste. Different waste mixes will have different values of this decay characteristic.

If nuclear waste consisted of one isotope and one decay path, the number of atoms decaying per day would decrease exponentially with time to the power of 1. If there were only one daughter product, and it were non-radioactive, the heat output of a sample would also decay with time to the power of 1. Heat output would equal H° exp(-a t), and a plot of the log of the decay heat would be linear against linear time; you could plot it all conveniently on semi-log paper.

But nuclear waste generally consists of many radioactive components with different half-lives, and these components decay into other radioactive isotopes, all with half-lives that vary by quite a lot. The result is that a semi-log plot is rarely helpful. Some people therefore plot radioactivity on a log-log plot, typically including a curve for each major isotope and decay mode. I find these plots hardly useful; they are certainly impossible to extrapolate. What I’d like to propose instead is a fractal variation of the original semi-log plot: a plot of the log of the heat rate against fractal time. As shown below, the use of time to the 1/4 power seems to be helpful. The plot is similar to a fractal decay model that I’d developed for crimes and fires a few weeks ago.
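To see why the t^(1/4) axis helps, here is a small sketch with a hypothetical five-isotope waste mix; the half-lives and initial heats are made up for illustration, not taken from any real fuel inventory. The summed decay-heat curve bends sharply on semi-log paper, but is much closer to a straight line when log(heat) is plotted against t^0.25:

```python
import math

# Hypothetical mix of decaying isotopes (made-up half-lives and heats).
half_lives = [0.1, 1.0, 10.0, 100.0, 1000.0]   # years
heats = [100.0, 30.0, 10.0, 3.0, 1.0]          # initial watts per component

def decay_heat(t_years):
    """Total heat output of the mix at time t: a sum of exponentials."""
    return sum(h * math.exp(-math.log(2) * t_years / hl)
               for h, hl in zip(heats, half_lives))

# Tabulate ln(heat) against the fractal time axis t**0.25.
for t in [0.25, 1, 4, 16, 64]:
    x = t ** 0.25
    y = math.log(decay_heat(t))
    print(f"t={t:6.2f} yr  t^0.25={x:.2f}  ln(heat)={y:.2f}")
```

Running this, the slope of ln(heat) vs t^0.25 varies only by about a factor of two across the table, while the slope vs linear t varies by a factor of fifty, which is the practical advantage of the fractal axis.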

After-heat of nuclear fuel rods used to generate 20 kW/kg U; top graph 35 MW-days/kg U, bottom graph 20 MW-days/kg U. Data from US NRC Regulatory Guide 3.54, “Spent Fuel Heat Generation in an Independent Spent Fuel Storage Installation,” rev. 1, 1999, http://www.nrc.gov/reading-rm/doc-collections/reg-guides/fuels-materials/rg/03-054/. A typical reactor holds about 200,000 kg of uranium.

A plausible justification for this fractal semi-log plot is to observe that the half-lives of daughter isotopes relate to those of the parent isotopes. Unless I find that someone else has come up with this sort of plot or analysis before, I’ll call it after myself: a Buxbaum-Mandelbrot plot. Why not?

Nuclear power is attractive because it is a lot more energy dense than any normal fuel. Still, the graph at right illustrates the problem of radioactive waste. With nuclear power, you generate about 35 MW-days of energy per kg of uranium. This is enough to power an average US home for 8 years, but it produces 1 kg of radioactive waste. Even after 81 years, the waste is generating about 1/2 W of decay heat. It should be easier to handle and store the 1 kg of spent uranium than to deal with the many tons of coal smoke produced when 35 MW-days of electricity is made from coal; still, there is reason to worry about the decay heat.
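For the coal comparison, a rough figure; the ~24 MJ/kg heating value of coal is my assumption, while the 35 MW-days per kg of uranium is from the text:

```python
# Coal equivalent of the 35 MW-days of heat from 1 kg of reactor uranium.
energy_joules = 35e6 * 24 * 3600   # 35 MW-days, in joules
coal_j_per_kg = 24e6               # assumed heating value of coal, ~24 MJ/kg

coal_tons = energy_joules / coal_j_per_kg / 1000
print(f"coal needed for the same heat: {coal_tons:.0f} metric tons")
```

That is on the order of 100 tons of coal burned, and its smoke and ash, versus a single kilogram of spent uranium.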

I’ve made a similar plot of the decay heat of a fusion reactor, see below. Fusion looks better in this regard. A fission-based nuclear reactor big enough to power half of Detroit would hold some 200,000 kg of uranium, replaced every 5 years. Even 81 years after removal, the after-heat would be about 100 kW, and that’s a lot.

After-heat of a 4000 MWth fusion reactor built from Nb-1%Zr (niobium-1% zirconium), a fairly common high-temperature engineering material of construction; from the UWMAC III Report. The after-heat is far less than with normal uranium fission.

The plot of the after-heat of a similar-power fusion reactor shows a far steeper slope, but the same time-to-the-1/4-power dependence. The heat output drops from 1 MW at 3 weeks to only 100 W after 1 year, and to far less than 1 W after 81 years. Nuclear fusion is still a few years off, but the plot shows the advantages fairly clearly, I think.

This sort of plot was really designed to look at the statistics of crime, fires, and the need for servers or checkout people.

Dr. R.E. Buxbaum, January 2, 2014, edited Aug 30, 2022. *A final, final thought about theory from Yogi Berra: “In theory, it matches reality.”