# Magnetic separation of air

As some of you will know, oxygen is paramagnetic, attracted slightly by a magnet. Oxygen’s paramagnetism is due to the two unpaired electrons in every O2 molecule. Oxygen has a triple-bond structure as discussed here (much of the chemistry you were taught is wrong). Virtually every other common gas is diamagnetic, repelled by a magnet. These include nitrogen, water, CO2, and argon — all diamagnetic. As a result, you can do a reasonable job of extracting oxygen from air by the use of a magnet. This is awfully cool, and could make for a good science fair project, if anyone is of a mind.

But first some math, or physics, if you like. To a good approximation, the magnetization of a material is M = CH/T, where M is magnetization, H is magnetic field strength, C is the Curie constant for the material, and T is absolute temperature.

Ignoring for now the difference between entropy and internal energy, and thinking only in terms of the work derived by lowering a magnet toward a volume of gas, we can say that the work extracted, and thus the decrease in energy of the magnetized gas, is ∫H dM = MH/2. At constant temperature and pressure, we can say ∆G = −CH²/2T.

With a neodymium magnet, you should be able to get about 0.05 Tesla in the gas, or 40,000 amperes/meter. At 20°C, the per-mol magnetic susceptibility of oxygen is 1.34×10⁻⁶. This suggests that the Curie constant is 1.34×10⁻⁶ × 293 = 3.93×10⁻⁴. At 20°C, this energy difference is 1072 J/mol = RT ln β, where β is the concentration ratio between the O2 content of the magnetized and un-magnetized gas.

From the above, we find that at room temperature, 298 K, β = 1.6, and thus that the maximum oxygen concentration you’re likely to get is about 1.6 × 21% = 33%. It’s slightly more than this due to nitrogen’s diamagnetism, but that effect is too small to matter. What does matter is that 33% O2 is a good amount for a variety of medical uses.
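To make the arithmetic checkable, here is a minimal Python sketch of the enrichment estimate. The 1072 J/mol figure is taken from the paragraph above; note that the computed β comes out nearer 1.54, which I round up to 1.6.

```python
import math

R = 8.314          # gas constant, J/(mol*K)
T = 298.0          # room temperature, K

# Energy difference between magnetized and un-magnetized O2,
# from the text above (about 1072 J/mol at the field assumed there).
dG = 1072.0        # J/mol

# Concentration ratio between magnetized and un-magnetized gas:
# dG = RT ln(beta)  =>  beta = exp(dG / (R*T))
beta = math.exp(dG / (R * T))

# Enriched O2 fraction, starting from 21% O2 in ordinary air
o2_enriched = beta * 21.0

print(f"beta = {beta:.2f}")         # ~1.5-1.6
print(f"O2 = {o2_enriched:.0f}%")   # ~32-33%
```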

I show below my simple design for a magnetic O2 concentrator. The dotted line is a permeable membrane that needs no selectivity, though with a little O2 selectivity the design will work better. All you need is a blower or pump. A coffee filter could serve as a membrane.

This design is as simple as the standard membrane-based O2 concentrator, but it should require less pressure differential: just enough to overcome the magnet. Less pressure means the blower can be smaller, less noisy, and use less energy. I figure this could be really convenient for people who need portable oxygen. With several stages and low-temperature operation, this design could have commercial use.

On the theoretical end, an interesting thing I find concerns the effect of magnetization on the entropy of the oxygen. (Please ignore this paragraph if you have not learned statistical thermodynamics.) While you might imagine that magnetization decreases entropy, other things being equal, because the molecules are somewhat aligned with the field, I’ve come to realize that, at fixed temperature and pressure, the entropy is likely higher. A sea of semi-aligned molecules will have a slightly higher heat capacity than non-aligned molecules because the vibrational Cp is higher, other things being equal. Thus, unless I’m wrong, the temperature of the gas will be slightly lower in the magnetic region than outside it. Temperature and pressure are not the same within the separator as outside, by the way; the blower is something of a compressor, though a much less energy-intense one than used for most air separators. Because of the blower, both the magnetic and the non-magnetic air will be slightly warmer than the surroundings (∆T = W/Cp, where W is the blower work). This heat will be mostly lost when the gas leaves the system; that is, when it flows to lower pressure, both gas streams will be essentially at room temperature. Again, this is not the case with the classic membrane-based oxygen concentrators — there the nitrogen-rich stream is notably warm.

Robert E. Buxbaum, October 11, 2017. I find thermodynamics wonderful, both as science and as an analog for society.

# Heraclitus and Parmenides time joke

From Existential Comics; Parmenides believed that nothing changed, nor could it.

For those who don’t remember, Heraclitus believed that change was the essence of life, while  Parmenides believed that nothing ever changes. It’s a debate that exists to this day in physics, and also in religion (there is nothing new under the sun, etc.). In science, the view that no real change is possible is founded in Schrödinger’s wave view of quantum mechanics.

Schrödinger’s wave equation, time dependent.

In Schrödinger’s wave description of reality, every object or particle is considered a wave of probability. What appears to us as motion is nothing more than the wave oscillating back and forth in its potential field. Nothing quite has a position or velocity; there are only random interactions with other waves, and all of these are reversible. Because of the time-reversibility of the equation, the system is conservative in the long term. The wave returns to where it was, and no entropy is created, long-term. Anything that happens will happen again, in reverse. See here for more on Schrödinger waves.

Thermodynamics is in stark contradiction to this quantum view. To thermodynamics, and to common observation, entropy goes ever upward, and nothing is reversible without outside intervention. Things break but don’t fix themselves. It’s this entropy increase that tells you that you are going forward in time. You know that time is going forward if you can, at will, drop an ice-cube into hot tea to produce lukewarm, diluted tea. If you can do the reverse, time is going backward. It’s a problem that besets Dr. Who, but few others.

One way that I’ve seen to get out of the general problem of quantum time is to assume the observed universe is a black hole or some other closed system, and take it as an issue of reference frame. As seen from the outside of a black hole (or a closed system without observation) time stops and nothing changes. Within a black hole or closed system, there is constant observation, and there is time and change. It’s not a great way out of the contradiction, but it’s the best I know of.

Predestination makes a certain physics and religious sense, it just doesn’t match personal experience very well.

The religion version of this problem is as follows: God, in most religions, has fore-knowledge. That is, He knows what will happen, and that presumes we have no free will. The problem with that is, without free-will, there can be no fair judgment, no right or wrong. There are a few ways out of this, and these lie behind many of the religious splits of the 1700s. A lot of the humor of Calvin and Hobbes comics comes because Calvin is a Calvinist, convinced of fatalistic predestination; Hobbes believes in free will. Most religions take a position somewhere in-between, but all have their problems.

Applying the black-hole model to God gives the following, alternative answer, one that isn’t very satisfying IMHO, but at least it matches physics. One might assume predestination for a God that is outside the universe — He sees only an unchanging system, while we, inside see time and change and free will. One of the problems with this is it posits a distant creator who cares little for us and sees none of the details. A more positive view of time appears in Dr. Who. For Dr. Who time is fluid, with some fixed points. Here’s my view of Dr. Who’s physics.  Unfortunately, Dr. Who is fiction: attractive, but without basis. Time, as it were, is an issue for the ages.

Robert Buxbaum, Philosophical musings, Friday afternoon, June 30, 2017.

# A clever, sorption-based, hydrogen pump

Hydrogen-powered fuel cells provide a lot of advantages over batteries, e.g. for drones and extended-range vehicles, but part of the challenge is compressing the hydrogen. One solution I’d proposed is a larger version of this steam-powered compressor; another is a membrane reactor hydrogen generator; and a few weeks ago, I wrote about another clever innovation: an electrochemical hydrogen pump. It is a fuel cell operating backwards. Pumping this way is very efficient and compact, but the pressure is borne by the fuel cell membranes, so the pump is only suitable at low pressure differentials. I’d now like to describe a different, very clever hydrogen pump, one that operates by metallic hydride sorption and provides very high pressure.

Hydride sorption-desorption pressures vs temperature, from Dhanesh et al.

The basic metal hydride reaction is M + n H2 <–> MH2n, where M is a metal or metallic alloy. While most metals will undergo this reaction at some appropriate temperature and pressure, the materials of interest are exothermic hydrides that undergo a nearly stoichiometric absorption or desorption reaction at pressures near 1 atm and temperatures near room temperature. The plot at right presents the plateau pressure for hydrogen absorption/desorption in several common metal hydrides. The slope is proportional to the heat of sorption. A red box is shown for the candidates that sorb or desorb between 1 and 10 atmospheres and 25 and 100°C. Sorbents whose lines pass through that box are good candidates for pump use; the ones with a high slope (high heat of sorption) in particular, if you want a convenient source of very high pressure.

To me, NaAlH4 is among the best of the materials, and certainly serves as a good example for how the pump works. The basic reaction, in this case is:

NaAl + 2H2 <–> NaAlH4

The line for this reaction crosses the 1 atm red line at about 30°C, suggesting that each mol of NaAl material will absorb 2 mols of hydrogen at 1 atm and normal room temperatures: 20-30°C. Assume the pump contains 100 g of NaAl (2.0 mols). We can expect it will absorb 4 mols of hydrogen gas, about 90 liters at this temperature. If this material is now heated to 250°C, it will desorb most of the hydrogen (80% perhaps, 72 liters) at 100 atm, or 1500 psi. This is a remarkably high pressure boost; 1500 psi hydrogen is suitable for filling the high-pressure tank of a hydrogen-based, fuel cell car.
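As a sketch, the cycle capacity can be checked with the ideal gas law. The 100 g charge, the 2:1 H2:NaAl stoichiometry, and the 80% desorption fraction come from the paragraph above; note the ideal-gas volume at 25°C comes out a bit above the rounded "90 liters."

```python
# Capacity sketch for the NaAl / NaAlH4 pump described above.
R = 0.08206          # gas constant, L*atm/(mol*K)

m_naal = 100.0       # g of NaAl sorbent
M_naal = 23.0 + 27.0 # g/mol (Na + Al)
n_naal = m_naal / M_naal           # = 2.0 mol

n_h2 = 2 * n_naal                  # NaAl + 2 H2 -> NaAlH4, so 4.0 mol H2

# Volume absorbed at 1 atm and ~room temperature (taking 25°C here)
v_absorbed = n_h2 * R * 298.15 / 1.0    # liters

# Desorbing ~80% of it per cycle, measured back at 1 atm
v_delivered = 0.8 * v_absorbed          # liters

print(f"{n_h2:.1f} mol H2, about {v_absorbed:.0f} L absorbed at 1 atm")
print(f"~{v_delivered:.0f} L delivered per cycle")
```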

But there is a problem: it will take 2-3 hours to cycle the sorbent: absorb hydrogen at low pressure, heat, desorb, and cycle back to low temperature. If you can only pump 72 liters in 2-3 hours, this will not be an effective pump for automobiles. Even with several cells operating in parallel, it will be hard to fill the fuel tank of a fuel-cell car. The output is enough for electric generators, for the small gas tank of a fuel cell drone, or for augmenting the mpg of gasoline automobiles. If you are interested in these materials, my company, REB Research, will supply them in research quantities.

Properties of Metal Hydride Materials; Dhanesh Chandra, Wen-Ming Chien, and Anjali Talekar, Material Matters, Volume 6, Article 2.

At this point, I can imagine you saying that there is a simple way to make up for the low output of a pump with 100 g of sorbent: use more, perhaps 10 kg distributed over 100 cells. The alloys don’t cost much in bulk; see the chart above (they’re a lot more expensive in small quantities). With 100 times more sorbent, you’ll pump 100 times faster, enough for a fairly large hydrogen generator, like this one from REB. This will work, but you don’t get economies of scale. Standard mechanical pumps do give you a decent economy of scale: it costs only 3-4 times as much for each 10-fold increase in output. For this reason, the hydride sorption pump, though clever, appears destined for low-volume applications. Though low volume might involve hundreds of kg of sorbent, at some larger scale you’re going to want a mechanical pump.

Other uses of these materials include hydrogen storage, removal of hydrogen from a volume (e.g. so it does not mess up electronics), and vacuum pumping from a fusion reactor. I have sold niobium screws for hydrogen sorption in electronic packages, and my company provides chemical sorbers for hydrogen removal from air. For more of our products, visit www.rebresearch.com/catalog.html.

Robert Buxbaum, May 26, 2017.

# Alcohol and gasoline don’t mix in the cold

One of the worst ideas to come out of the Iowa caucuses, I thought, was Ted Cruz claiming he’d allow farmers to blend as much alcohol into their gasoline as they liked. While this may have sounded good in Iowa, and while it’s consistent with his non-regulation theme, it’s horribly bad engineering.

Ethanol and gasoline are not fully miscible at temperatures below freezing, 0°C. The tendency to separate is greater if the ethanol is wet or the gasoline contains benzenes.

We add alcohol to gasoline, not to save money, mostly, but so that farmers will produce a surplus, so we’ll have secure food for wartime or famine — or so I understand it. But the government only allows 10% alcohol in the blend because alcohol and gasoline don’t mix well when it’s cold. You may notice, even with the 10% mixture we use, that your car starts poorly on the coldest winter days. The engine turns over and almost catches, but dies. A major reason is that the alcohol separates from the rest of the gasoline. The concentrated alcohol layer screws up combustion because alcohol doesn’t burn all that well. With Cruz’s higher alcohol allowance, you’d get separation more often, at temperatures as high as 13°C (55°F) for a 65 mol percent mix; see the chart at right. Things get worse yet if the gasoline gets wet, or contains benzene. Gasoline blending is complex stuff: something the average Joe should not do.

Solubility of alcohol (ethanol) in gasoline; an extrapolation based on the data above.

To estimate the separation temperature of our normal, 10% alcohol-gasoline mix, I extended the data from the chart above using linear regression. From thermodynamics, I extrapolated ln-concentration vs 1/T, and found that a 10% by volume mix (5% mol fraction alcohol) will separate at about -40°F. Chances are, you won’t see that temperature this winter (and if you do, try to find a gas mix that has no alcohol). Another thought: add hydrogen or other combustible gas to get the engine going.
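The extrapolation method above can be sketched in a few lines of Python. The first data point (65 mol% separating at 13°C) is from the text; the other three are hypothetical values standing in for readings off the chart, so the exact answer depends on them. With points roughly on the thermodynamic line, the fit lands near the -40°F figure.

```python
import math

# Fit ln(mole-fraction alcohol) vs 1/T, then solve for T at 5 mol%.
data = [
    (286.0, 0.65),   # 13°C, 65 mol% (from the text)
    (273.0, 0.40),   # assumed reading
    (263.0, 0.25),   # assumed reading
    (253.0, 0.15),   # assumed reading
]

xs = [1.0 / T for T, x in data]
ys = [math.log(x) for T, x in data]

# Simple least-squares line: ln(x) = a + m/T
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - m * mx

# Solve a + m/T = ln(0.05) for T
T_sep = m / (math.log(0.05) - a)
t_c = T_sep - 273.15
print(f"separation at about {t_c:.0f} °C ({t_c * 9 / 5 + 32:.0f} °F)")
```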

Robert E. Buxbaum, February 10, 2016. Two more thoughts: (1) thermodynamics is a beautiful subject to learn, and (2) avoid people who stick to a foolish consistency. Too much regulation is bad, as is too little; it’s a common pattern: the difference between a cure and a poison is often just the dose.

# The speed of sound, Buxbaum’s correction

Ernst Mach showed that sound must travel at a particular speed through any material, one determined by the conservation of energy and of entropy. At room temperature and 1 atm, that speed is theoretically predicted to be 343 m/s. For a wave to move at any other speed, either the laws of energy conservation would have to fail, or ∆S ≠ 0 and the wave would die out. This is the only speed where you could say there is a traveling wave, and experimentally, this is found to be the speed of sound in air, to good accuracy.

Still, it strikes me that Mach’s assumptions may have been too restrictive for short-distance sound waves. Perhaps there is room for other sound speeds if you allow ∆S > 0, and consider sound that travels short distances and dies out far from the source. Waves at these other speeds might affect music appreciation, or headphone design. As these waves were never treated in my thermodynamics textbooks, I wondered if I could derive their speed in any nice way, and whether they would be faster or slower than the main wave. (If I can’t use this blog to re-think my college studies, what good is it?)

Imagine the sound-wave moving to the right, down a constant area tube at speed u, with us moving along at the same speed. Thus, the wave appears stationary, with a wind of speed u from the right.

As a first step to trying to re-imagine Mach’s calculation, here is one way to derive the original, ∆S = 0, speed of sound. I showed in a previous post that the entropy change for compression can be imagined to have two parts. There is a pressure part at constant temperature: dS/dV at constant T = dP/dT at constant V; this part equals R/V for an ideal gas. There is also a temperature-at-constant-volume part: dS/dT at constant V = Cv/T. Dividing the two equations, we find that, at constant entropy, dT/dV = −RT/CvV = −P/Cv. For a case where ∆S > 0, dT/dV > −P/Cv on expansion (dV > 0), and dT/dV < −P/Cv on compression (dV < 0).

Now let’s look at the conservation of mechanical energy. A compression wave gives off a certain amount of mechanical energy, or work on expansion, and this work accelerates the gas within the wave. For an ideal gas, the internal energy of the gas is stored only in its temperature. Let’s now consider a sound wave going down a tube, flowing left to right, and let’s move our reference plane along with the wave at the same speed, so the wave seems to sit still while a flow of gas moves toward it from the right at the speed of the sound wave, u. For this flow system, energy is conserved though no heat is removed and no useful work is done. Thus, any change in enthalpy only results in a change in kinetic energy: dH = −d(u²)/2 = −u du, where H here is a per-mass enthalpy (enthalpy per kg).

dH = TdS + VdP. This can be rearranged to read TdS = dH − VdP = −u du − VdP.

We now use conservation of mass to put du into terms of P, V, and T. By conservation of mass, u/V is constant, or d(u/V) = 0. Taking the derivative of this quotient, du/V − u dV/V² = 0. Rearranging, we get du = u dV/V (no assumptions about entropy here). Since dH = −u du, we find that u² dV/V = −dH = −TdS − VdP. It is now common to say that dS = 0 across the sound wave, and thus find that u² = −V²(dP/dV) at constant S. For an ideal gas, this last term equals PV Cp/Cv, so the speed of sound is u = √(PV Cp/Cv), with V the volume per unit mass (m³/kg).
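As a sanity check on the ∆S = 0 result, u = √(PV Cp/Cv) can be evaluated for air at 20°C and 1 atm; with V the specific volume of an ideal gas, this is equivalent to √(γRT/M).

```python
import math

P = 101325.0        # Pa, 1 atm
T = 293.15          # K (20°C)
R = 8.314           # J/(mol*K)
M = 0.02896         # kg/mol, average molar mass of air
gamma = 7.0 / 5.0   # Cp/Cv for a diatomic ideal gas

V = R * T / (M * P)             # specific volume, m^3/kg
u = math.sqrt(P * V * gamma)    # speed of sound, m/s

print(f"u = {u:.0f} m/s")       # ~343 m/s, matching the text
```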

The problem comes in where we say that ∆S > 0. At this point, I would say that u² = −V(dH/dV) = −VCp(dT/dV). In the compression front, dV < 0 and dT/dV < −P/Cv, so u² > PV Cp/Cv. Unless I’ve made a mistake (always possible), I find that there is a small, leading, non-adiabatic sound wave that goes ahead of the ordinary sound wave and is experienced only close to the source. It is caused by mechanical energy that is degraded to raising T, and it gives rise to more compression than would be expected for iso-entropic waves.

This should have some relevance to headphone design and speaker design since headphones are heard close to the ear, while speakers are heard further away. Meanwhile the recordings are made by microphones right next to the singers or instruments.

Robert E. Buxbaum, August 26, 2014

# If hot air rises, why is it cold on mountain-tops?

This is a child’s question that’s rarely answered to anyone’s satisfaction. To answer it well requires college level science, and by college the child has usually been dissuaded from asking anything scientific that would likely embarrass teacher — which is to say, from asking most anything. By a good answer, I mean here one that provides both a mathematical, checkable prediction of the temperature you’d expect to find on mountain tops, and one that also gives a feel for why it should be so. I’ll try to provide this here, as previously when explaining “why is the sky blue.” A word of warning: real science involves mathematics, something that’s often left behind, perhaps in an effort to build self-esteem. If I do a poor job, please text me back: “if hot air rises, what’s keeping you down?”

As a touchy-feely answer, please note that all materials have internal energy, generally associated with the kinetic energy plus the potential energy of the molecules. It enters whenever a material is heated or has work done on it, and for gases, to good approximation, it equals the heat capacity of the gas times its temperature. For air, this is about 7 cal/mol°K times the temperature in degrees Kelvin. The average air at sea level is taken to be at 1 atm, or 101,325 Pascals, and 15.02°C, or 288.15°K; the internal energy of this air is thus 288.15 × 7 = 2017 cal/mol ≈ 8440 J/mol. The internal energy of the air will decrease as the air rises, and the temperature drops, for reasons I will explain below. Most diatomic gases have a heat capacity of 7 cal/mol°K, a fact that is only explained by quantum mechanics; if not for quantum mechanics, the heat capacities of diatomic gases would be about 9 cal/mol°K.
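For those who want to check the arithmetic, the sea-level energy figure works out as follows, using the standard 4.184 J/cal conversion:

```python
# Back-of-envelope energy content of sea-level air, taking
# Cp = 7 cal/(mol*K) for a diatomic gas, as in the text.
CAL_TO_J = 4.184          # joules per calorie

Cp = 7.0                  # cal/(mol*K)
T = 288.15                # K (15°C, standard sea-level temperature)

e_cal = Cp * T            # ~2017 cal/mol
e_joule = e_cal * CAL_TO_J

print(f"{e_cal:.0f} cal/mol = {e_joule:.0f} J/mol")
```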

Let’s consider a volume of this air at this standard condition, and imagine that it is held within a weightless balloon, or plastic bag. As we pull that air up, by pulling up the bag, the bag starts to expand because the pressure is lower at high altitude (air pressure is just the weight of the air above). No heat is exchanged with the surrounding air because our air will always be about as warm as its surroundings; or, if you like, you can imagine the weightless balloon prevents it. In either case, the molecules lose energy as the bag expands because they always collide with an outwardly moving wall. Alternately, you can say that the air in the bag is doing work on the exterior air — expansion is work — but we are putting no work into the air, as it takes no work to lift this air. The buoyancy of the air in our balloon is always about that of the surrounding air, or so we’ll assume for now.

A classic, difficult way to calculate the temperature change with altitude is to calculate the work being done by the air in the rising balloon. Work done is force times distance: w = ∫f dz, and this work should equal the effective cooling, since heat and work are interchangeable. There’s an integral sign here to account for the fact that force is proportional to pressure, and the air pressure decreases as the balloon goes up. We now note that w = ∫f dz = −∫P dV, because pressure, P, is force per unit area, and volume, V, is area times distance. The minus sign is because the work is being done by the air, not done on the air; it involves a loss of internal energy. Sorry to say, the temperature and pressure of the air keep changing with volume and altitude, so it’s hard to solve the integral, but there is a simple approach based on entropy, S.

Les Droites Mountain, in the Alps, at the intersection of France, Italy, and Switzerland, is 4000 m tall. The top is generally snow-covered.

I discussed entropy last month, and showed it was a property of state, and further, that for any reversible path, ∆S= (Q/T)rev. That is, the entropy change for any reversible process equals the heat that enters divided by the temperature. Now, we expect the balloon rise is reversible, and since we’ve assumed no heat transfer, Q = 0. We thus expect that the entropy of air will be the same at all altitudes. Now entropy has two parts, a temperature part, Cp ln T2/T1 and a pressure part, R ln P2/P1. If the total ∆S=0 these two parts will exactly cancel.

Consider that at 4000m, the height of Les Droites, a mountain in the Mont Blanc range, the typical pressure is 61,660 Pa, about 60.85% of sea level pressure (101325 Pa). If the air were reduced to this pressure at constant temperature (∆S)T = -R ln P2/P1 where R is the gas constant, about 2 cal/mol°K, and P2/P1 = .6085; (∆S)T = -2 ln .6085. Since the total entropy change is zero, this part must equal Cp ln T2/T1 where Cp is the heat capacity of air at constant pressure, about 7 cal/mol°K for all diatomic gases, and T1 and T2 are the temperatures (Kelvin) of the air at sea level and 4000 m. (These equations are derived in most thermodynamics texts. The short version is that the entropy change from compression at constant T equals the work at constant temperature divided by T,  ∫P/TdV=  ∫R/V dV = R ln V2/V1= -R ln P2/P1. Similarly the entropy change at constant pressure = ∫dQ/T where dQ = Cp dT. This component of entropy is thus ∫dQ/T = Cp ∫dT/T = Cp ln T2/T1.) Setting the sum to equal zero, we can say that Cp ln T2/T1 =R ln .6085, or that

T2 = T1 (0.6085)^(R/Cp)

T2 = T1 (0.6085)^(2/7), where 0.6085 is the pressure ratio at 4000 m, and because, for air and most diatomic gases, R/Cp = 2/7 to very good approximation, matching the prediction from quantum mechanics.

From the above, we calculate T2 = 288.15 × 0.8676 = 250.0°K, or −23.15°C. This is cold enough to provide snow on Les Droites nearly year round, and it’s pretty accurate. The typical temperature at 4000 m is 262.17°K (−11°C). That’s 26°C colder than at sea-level, and only 12°C warmer than we’d predicted.
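The whole iso-entropic estimate condenses to one line of Python; the 61,660 Pa pressure at 4000 m is the figure used in the text.

```python
# Iso-entropic altitude-temperature estimate: T2 = T1 * (P2/P1)**(R/Cp)
T1 = 288.15                 # K, standard sea-level temperature
P1 = 101325.0               # Pa, sea-level pressure
P2 = 61660.0                # Pa, typical pressure at 4000 m (from the text)
R_over_Cp = 2.0 / 7.0       # for air and most diatomic gases

T2 = T1 * (P2 / P1) ** R_over_Cp
print(f"T2 = {T2:.1f} K = {T2 - 273.15:.1f} °C")   # ~250 K, about -23 °C
```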

There are three weak assumptions behind the 12°C error in our prediction: (1) that the air that rises is no hotter than the air that does not, (2) that the air is not heated by radiation from the sun or earth, and (3) that there is no heat exchange with the surrounding air, e.g. from rain or snow formation. The last of these errors is thought to be the largest, but it’s still not large enough to cause serious problems.

Snow on Kilimanjaro, Tanzania 2013. If global warming models were true, the ground should be 4°C warmer than 100 years ago, and the air at this altitude, about 7°C (12°F) warmer; and the snow should be gone.

You can use this approach, with different exponents, to estimate the temperature at the center of Jupiter, or at the center of neutron stars. This iso-entropic calculation is the model that’s used here, though it’s understood that it may be off by a fair percentage. You can also ask questions about global warming: increased CO2 at this level is supposed to cause extreme heating at 4000 m, enough to heat the earth below by 4°C/century or more. As it happens, the temperature and snow cover on Les Droites and other Alp ski areas have been studied carefully for many decades; they are not warming as best we can tell (here’s a discussion). By all rights, Mt Blanc should be Mt Green by now; no one knows why. The earth too seems to have stopped warming. My theory: clouds.

Robert Buxbaum, May 10, 2014. Science requires you check your theory for internal and external weakness. Here’s why the sky is blue, not green.

# Entropy, the most important pattern in life

One evening at the Princeton grad college a younger fellow (an 18-year-old genius) asked the most simple, elegant question I had ever heard, one I’ve borrowed and used ever since: “tell me”, he asked, “something that’s important and true.” My answer that evening was that the entropy of the universe is always increasing. It’s a fundamentally important pattern in life; one I didn’t discover, but discovered to have a lot of applications and meaning. Let me explain why it’s true here, and then why I find it’s meaningful.

Famous entropy cartoon, Harris

The entropy of the universe is not something you can measure directly, but rather indirectly, from the availability of work in any corner of it. It’s related to randomness and the arrow of time. First off, here’s how you can tell if time is moving forward: put an ice-cube into hot water; if the cube melts and the water becomes cooler, time is moving forward — or, at least, it’s moving in the same direction as you are. If you can reach into a cup of warm water and pull out an ice-cube while making the water hot, time is moving backwards — or rather, you are living backwards. Within any closed system, one where you don’t add things or energy (sunlight, say), you can tell that time is moving forward because the forward progress of time always leads to a loss of work availability. In the case above, you could have generated some electricity from the ice-cube and the hot water, but not from the glass of warm water.

You can not extract work from a heat source alone; to extract work some heat must be deposited in a cold sink. At best the entropy of the universe remains unchanged.

This observation is about as fundamental as any to understanding the world; it is the basis of entropy and the second law of thermodynamics: you can never extract useful work from a uniform-temperature body of water, say, just by making that water cooler. To get useful work, you always need some other transfer into or out of the system; you always need to make something else hotter or colder, or provide some chemical or altitude change that can not be reversed without adding more energy back. Thus, so long as time moves forward, everything runs down in terms of work availability.

There is also a first law; it states that energy is conserved. That is, if you want to heat some substance, that change requires that you put in a set amount of work plus heat. Similarly, if you want to cool something, a set amount of heat plus work must be taken out. In equation form, we say that, for any change, q + w is constant, where q is heat and w is work. It’s the sum that’s constant, not the individual values, so long as you count every 4.184 Joules of work as if it were 1 calorie of heat. If you input more heat, you have to add less work, and vice versa, but there is always the same sum. When adding heat or work, we say that q or w is positive; when extracting heat or work, we say that q or w is negative. Still, each 4.184 Joules counts as if it were 1 calorie.

Now, since for every path between two states, q + w is the same, we say that q + w represents a path-independent quantity for the system, one we call internal energy, U, where ∆U = q + w. This is a mathematical form of the first law of thermodynamics: you can’t take q + w out of nothing, or add it to something without making a change in the properties of the thing. The only way to leave things the same is if q + w = 0. We notice also that for any pure thing or mixture, the sum q + w for a given change is proportional to the mass of the stuff; internal energy is extensive, and we can define an intensive internal energy per gram: q + w = n ∆u, where n is the grams of material, and ∆u is the change in internal energy per gram.

We are now ready to put the first and second laws together. We find we can extract work from a system if we take heat from a hot body of water and deliver some of it to something at a lower temperature (the ice-cube, say). This can be done with a thermopile, or with a steam engine (Rankine cycle, above), or a Stirling engine. That an engine can only extract work when there is a difference of temperatures is similar to the operation of a water wheel. Sadi Carnot noted that a water wheel is able to extract work only when there is a flow of water from a high level to a low one; similarly, in a heat engine, you only get work by taking in heat energy from a hot heat-source and exhausting some of it to a colder heat-sink. The remainder leaves as work. That is, q1 − q2 = w, and energy is conserved. The second law isn’t violated so long as there is no way you could run the engine without the cold sink. Accepting this as reasonable, we can now derive some very interesting, non-obvious truths.

We begin with the famous Carnot cycle. The Carnot cycle is an idealized heat engine with the interesting feature that it can be made to operate reversibly. That is, you can make it run forwards, taking a certain amount of heat from a hot source, producing a certain amount of work, and delivering a certain amount of heat to the cold sink; and you can run the same process backwards, as a refrigerator, taking in the same amount of work and the same amount of heat from the cold sink and delivering the same amount of heat to the hot source. Carnot showed by the following proof that all other reversible engines would have the same efficiency as his cycle, and that no engine, reversible or not, could be more efficient. The proof: if an engine could be designed that will extract a greater percentage of the heat as work when operating between a given hot source and cold sink, it could be used to drive his Carnot cycle backwards. If the pair of engines were now combined so that the less efficient engine removed exactly as much heat from the sink as the more efficient engine deposited, the excess work produced by the more efficient engine would leave with no effect besides cooling the source. This combination would be in violation of the second law, something that we’d said was impossible.

Now let us try to understand the relationship that drives useful energy production. The ratio of heat in to heat out has got to be a function of the in and out temperatures alone. That is, q1/q2 = f(T1, T2); similarly, q2/q1 = f(T2, T1). Now let's consider what happens when two Carnot cycles are placed in series between T1 and T2, with the middle temperature at Tm. For the first engine, q1/qm = f(T1, Tm), and similarly for the second engine, qm/q2 = f(Tm, T2). Combining these, we see that q1/q2 = (q1/qm) x (qm/q2), and therefore f(T1, T2) must always equal f(T1, Tm) x f(Tm, T2) = f(T1, Tm)/f(T2, Tm). In this relationship we see that the middle temperature Tm is irrelevant; the equality holds for any Tm. We thus say that q1/q2 = T1/T2, and this is the limit of what you get at maximum (reversible) efficiency. You can now rearrange this to read q1/T1 = q2/T2, or to say that the work, W = q1 - q2 = q2 (T1 - T2)/T2.
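These relations are easy to check numerically. Here is a minimal sketch in Python (the temperatures and heat quantities are illustrative values, not from the text):

```python
# Reversible (Carnot) relations: q1/q2 = T1/T2, and W = q1 - q2 = q2*(T1 - T2)/T2.

def carnot_work(q2, T1, T2):
    """Reversible work, given heat q2 rejected at sink temperature T2 (K), source at T1 (K)."""
    return q2 * (T1 - T2) / T2

def carnot_efficiency(T1, T2):
    """Fraction of input heat converted to work: W/q1 = 1 - T2/T1."""
    return 1.0 - T2 / T1

T1, T2 = 600.0, 300.0   # hot source and cold sink, Kelvin (illustrative)
q2 = 100.0              # heat rejected to the sink, Joules
print(carnot_work(q2, T1, T2))       # 100.0 J of work out
print(q2 * T1 / T2)                  # 200.0 J drawn from the hot source
print(carnot_efficiency(T1, T2))     # 0.5: half the input heat becomes work
```

Note that efficiency depends only on the two temperatures, not on the working substance, which is the point of the series-engine argument above.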

A strange result from this is that, since every process can be modeled as either a sum of Carnot engines or of engines that are less efficient, and since the Carnot engine will produce this same amount of reversible work when filled with any substance or combination of substances, we can say that this outcome, q1/T1 = q2/T2, is independent of path, and independent of substance, so long as the process is reversible. We can thus say that for all substances there is a property of state, S, such that the change in this property is ∆S = ∑q/T for all the heat in or out. In a more general sense, we can say ∆S = ∫dq/T, where this state property, S, is called the entropy. Since, as before, the amount of heat needed is proportional to mass, we can say that S is an extensive property; S = n s, where n is the mass of stuff, and s is the entropy per mass.

Another strange result comes from the efficiency equation. Since, for any engine or process that is less efficient than the reversible one, we get less work out for the same amount of q1, we must have more heat rejected than the reversible q2. Thus, for an irreversible engine or process, q1 - q2 < q2(T1 - T2)/T2, and q2/T2 is greater than q1/T1. As a result, the total change in entropy, ∆S = q2/T2 - q1/T1 > 0: the entropy of the universe always goes up or stays constant; it never goes down. A final observation is that there must be a zero of temperature that nothing can go below, or else both q1 and q2 could be positive and energy would not be conserved. Our observations of time and energy conservation lead us to expect a minimum temperature, T = 0, that nothing can be colder than. We find this temperature at -273.15 °C. It is called absolute zero; nothing has ever been cooled to be colder than this, and now we see that, so long as time moves forward and energy is conserved, nothing ever will be.
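The entropy bookkeeping can be sketched the same way, using the convention that q1 enters the engine at T1 and q2 leaves at T2 (the numbers are illustrative):

```python
def entropy_generated(q1, q2, T1, T2):
    """Entropy change of the universe: the hot source loses q1/T1, the sink gains q2/T2."""
    return q2 / T2 - q1 / T1

# Reversible engine: q1/T1 == q2/T2, so nothing is generated.
print(entropy_generated(200.0, 100.0, 600.0, 300.0))   # 0.0
# Irreversible engine: same q1 in, less work out, so more heat (150 J) is rejected.
print(entropy_generated(200.0, 150.0, 600.0, 300.0))   # ~0.167 J/K, greater than zero
```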

Typically we take S to be zero at absolute zero (this is the convention of the third law); some tables instead list entropies relative to a room-temperature standard state.

We’re nearly there. We can define the entropy of the universe as the sum of the entropies of everything in it. From the above treatment of work cycles, we see that this total entropy always goes up, never down: a fundamental fact of nature, and (in my world view) a fundamental view into how God views us and the universe. First, that the entropy of the universe goes up only, and not down (in our time-forward framework), suggests there is a creator for our universe — a source of negative entropy at the start of all things, or a reverser of time (it’s the same thing in our framework). Another observation: God seems to like entropy a lot, and that means randomness. It’s his working principle, it seems.

But before you take me for a total libertine and say that since science shows everything runs down, the only moral take-home is to teach “Let us eat and drink,”… “for tomorrow we die!” (Isaiah 22:13), I should note that this randomness only applies to the universe as a whole. The individual parts (planets, laboratories, beakers of coffee) do not maximize entropy; they minimize available work instead, and this is different. You can show that the maximization of S, the entropy of the universe, does not lead to the maximization of s, the entropy per gram of your particular closed space, but rather to the minimization of a related quantity µ, the free energy, or usable work per gram of your stuff. You can show that, for any closed system at constant temperature, µ = h - Ts, where s is entropy per gram as before, and h is called the enthalpy. h is basically the potential energy of the molecules; it is lowest at low temperature and high order. For a closed system we find there is a balance between s, something that increases with increased randomness, and h, something that decreases with increased randomness. Put water and air in a bottle, and you find that the water is mostly on the bottom of the bottle, the air mostly on top, and the amount of mixing in each phase is not the maximum disorder, but rather the one you’d calculate will minimize µ.

As a protein folds its randomness and entropy decrease, but its enthalpy decreases too; the net effect is one precise fold that minimizes µ.

This is the principle that God applies to everything, including us, I’d guess: a balance. Take protein folding: some patterns have big disorder and high h; some have low disorder and very low h. The result is a temperature-dependent balance. If I were to take a moral imperative from this balance, I’d say it matches better with the sayings of Solomon the wise: “there is nothing better for a person under the sun than to eat, drink and be merry. Then joy will accompany them in their toil all the days of the life God has given them under the sun.” (Ecclesiastes 8:15). There is toil here as well as pleasure; directed activity balanced against personal pleasures. This is the µ = h - Ts minimization where, perhaps, T is economic wealth. Thus, the richer a society, the less toil is ideal and the more freedom. Of necessity, poor societies are repressive.

Dr. Robert E. Buxbaum, Mar 18, 2014. My previous thermodynamic post concerned the thermodynamics of hydrogen production. It’s not clear that all matter goes forward in time, by the way; antimatter may go backwards, so it’s possible that antimatter apples may fall up. On the microscopic scale, time becomes flexible, so it seems you could make a time machine. Religious leaders tend to be anti-science, I’ve noticed, perhaps because scientific miracles can be done by anyone, available even to those who think “wrong,” or say the wrong words. And that’s that, all being heard: do what’s right and enjoy life too: as important a pattern in life as you’ll find, I think. The relationship between free energy and societal organization is from my thesis advisor, Dr. Ernest F. Johnson.

# Nerves are tensegrity structures and grow when pulled

No one quite knows how nerve cells learn. It was long thought, incorrectly, that you cannot grow new nerve cells in the brain, or make existing brain cells grow out further, but people have since made new nerve cells, and when I was a professor at Michigan State, a Physiology colleague and I got brain and sensory nerves to grow out axons by pulling on them, without the use of drugs.

I had just moved to Michigan State as a fresh PhD (Princeton), an assistant professor of chemical engineering. Steve Heidemann was a few years ahead of me, a Physiology professor with a PhD from Princeton. We were both New Yorkers. He had been studying nerve structure, and wondered how the growth cone makes nerves grow out axons (the axon is the long, stringy part of the nerve). A thought was that nerves were structured as Snelson-Fuller tensegrity structures, but it was not obvious how that would relate to growth or anything else. A Snelson-Fuller structure is shown below; the structure stands erect not by compression, as in a pyramid or igloo, but because tension in the wires lifts the metal pipes and puts them in compression. The nerve cell, shown further below, is similar, with actin protein as the outer, tensed skin, and a microtubule-protein core as the compressed pipes.

A Snelson-Fuller tensegrity sculpture in the graduate college courtyard at Princeton, an inspiration for our work.

Biothermodynamics was pretty basic 30 years ago (it still is today), and it was incorrectly thought that objects were more stable when put in compression. It didn’t take too much thermodynamics on my part to show otherwise, and so I started a part-time career in cell physiology. Consider first how mechanical force should affect the Gibbs free energy, G, of assembled microtubules. For any process at constant temperature and pressure, ∆G = work. If force is applied, we expect some elastic work will be put into the assembled MTs in an amount ∫f dz, where f is the force at each displacement, and ∫dz is the integral over the distance traveled. Assuming a small force, or a constant spring, f = kz, with k the spring constant. Integrating the above, ∆G = ∫kz dz = kz²/2; ∆G is positive whether z is positive or negative. That is, the microtubule is most stable with no force, and is made less stable by any force, tension or compression.
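A one-line check of this result (the k and z values here are made-up illustrative numbers, not measured microtubule constants):

```python
# Elastic free-energy penalty of deforming an assembled microtubule:
# ∆G = ∫ k z dz = k z²/2, positive for tension (z > 0) and compression (z < 0) alike.

def delta_G(k, z):
    """Elastic work stored at displacement z, for spring constant k."""
    return 0.5 * k * z**2

k = 2.0   # illustrative spring constant
for z in (-1.0, 0.0, 1.0):
    print(z, delta_G(k, z))   # ∆G >= 0 everywhere; the minimum (most stable) is z = 0
```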

A cell showing what appears to be tensegrity: the microtubules (green) surrounded by actin (red). In nerves, Heidemann and I showed, the actin is in tension and the microtubules in compression.

Assuming that microtubules in the nerve- axon are generally in compression as in the Snelson-Fuller structure, then pulling on the axon could potentially reduce the compression. Normally, this is done by a growth cone, we posited, but we could also do it by pulling. In either case, a decrease in the compression of the assembled microtubules should favor microtubule assembly.

To calculate the rates, I used absolute rate theory, something I’d learned from Dr. Mortimer Kostin, a most excellent thermodynamics professor. I assumed that the free energy of the monomer was unaffected by force, and that the microtubules were in pseudo-equilibrium with the monomer. Growth rates were predicted to be proportional to the decrease in G, and the prediction matched experimental data.

Our few efforts to cure nerve disease by pulling did not produce immediate results; it turns out to be hard to pull on nerves in the body. Still, we gained some publicity, and a variety of people seem to have found scientific and/or philosophical inspiration in this sort of tensegrity model for nerve growth. I particularly like this review article by Don Ingber in Scientific American. A little more out there is this view of consciousness, life, and the fate of the universe (where I got the cell picture). In general, tensegrity structures are tougher and more flexible than normal construction. A tensegrity structure will bend easily, but rarely break. It seems likely that your body is held together this way, and because of this you can carry heavy things and still move with flexibility. It also seems likely that bones are structured this way; as with nerves, they are reasonably flexible, and can be made to grow by pulling.

Now that I think about it, we should have done more theoretical or experimental work in this direction. I imagine that pulling on the nerve also affects the stability of the actin network by affecting the chain-configuration entropy. This might slow actin assembly, or perhaps not. It might have been worthwhile to look at new ways to pull, or at bone growth. In our in-vivo work we used an external magnetic field to pull. We might have looked at NASA funding too, since it’s been observed that astronauts grow in outer space by a solid inch or two, even as their bodies deteriorate. Presumably, the lack of gravity lets the mineral in the bones grow, making a person less of a tensegrity structure. The muscle must grow too, just to keep up, but I don’t have a theory for muscle.

Robert Buxbaum, February 2, 2014. Vaguely related to this, I’ve written about architecture, art, and mechanical design.

# Thermodynamics of hydrogen generation

Perhaps the simplest way to make hydrogen is by electrolysis: you run some current through water with a little sulfuric acid or KOH added, and for every two electrons transferred, you get a molecule of hydrogen from one electrode and half a molecule of oxygen from the other.

2 OH- –> 2e- + 1/2 O2 + H2O

2H2O + 2e- –>  H2 + 2OH-

The conversion between amp-seconds and mols of electrons (or hydrogen) is set by the Faraday constant, F = 96,500: passing 96,500 amp-seconds transfers one mol of electrons. For hydrogen production, you need 2 mols of electrons for each mol of hydrogen, n = 2, so

it = nF per mol of product, where i is the current in amps, t is the time in seconds, and n is the number of electrons per molecule of desired product. For hydrogen, n = 2 and t = 96500 x 2/i; in general, t = nF/i.

96,500 is a large number, and it takes a fair amount of time to make any substantial amount of hydrogen by electrolysis. At 1 amp, it takes 96500 x 2 = 193,000 seconds, about two days, to generate one mol of hydrogen (that’s 2 grams of H2, or 22.4 liters, enough to fill a garment bag). We can reduce the time by using a higher current, but there are limits. At 25 amps, about the maximum current you can carry with house wiring, it takes 2.14 hours to generate those 2 grams. (You’ll have to rectify your electricity to DC, or you’ll get a nasty H2/O2 mix called Brown’s gas. While normal H2 isn’t that dangerous, Brown’s gas is a mix of H2 and O2 and is quite explosive. Here’s an essay I wrote on separating Brown’s gas.)
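The Faraday arithmetic above can be scripted in a few lines (a sketch; F is rounded to 96,500 as in the text):

```python
F = 96500.0   # Faraday constant, amp-seconds per mol of electrons

def electrolysis_time(i_amps, n_electrons=2, mols=1.0):
    """Seconds of current i_amps needed to make `mols` of a product that
    takes n_electrons electrons per molecule (n = 2 for H2)."""
    return mols * n_electrons * F / i_amps

print(electrolysis_time(1.0) / 3600.0)    # ~53.6 hours (about two days) per mol H2 at 1 A
print(electrolysis_time(25.0) / 3600.0)   # ~2.14 hours at 25 A
```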

Electrolysis takes a fair amount of electric energy too; the minimum energy needed to make hydrogen at a given temperature and pressure is called the reversible energy, or the Gibbs free energy ∆G of the reaction. ∆G = ∆H - T∆S; that is, ∆G equals the heat of hydrogen production, ∆H, minus an entropy effect, T∆S. Since energy is the product of voltage, current, and time, Vit = ∆G, where ∆G is the Gibbs free energy measured in Joules and V, i, and t are measured in Volts, Amps, and seconds respectively.

Since it = nF, we can rewrite the relationship as V = ∆G/nF for a process that has no energy losses, the reversible process. This is the form found in most thermodynamics textbooks; the value of V calculated this way is the minimum voltage needed to generate hydrogen, and the maximum voltage you could get from a fuel cell putting the water back together.

To calculate this voltage, and the power requirements to make hydrogen, we use the Gibbs free energy for water formation found in Wikipedia, copied below (in my day, we used the CRC Handbook of Chemistry and Physics, or a table in our P-chem book). You’ll notice that there are two different values for ∆G depending on whether the water is a gas or a liquid, and you’ll notice a small zero at the upper right (∆G°). This shows that the values are for a standard state: 25°C and 1 atm pressure. You can’t get 1 atm steam at 25°C; it’s an extrapolation. Behavior at real operating conditions is pretty similar (i.e., there’s a non-negligible correction that I’ll leave to a reader to send as a comment.)

Liquid H2O formation: ∆G° = -237.14 kJ/mol. Gaseous H2O formation: ∆G° = -228.61 kJ/mol.

The reversible voltage for creating liquid water in a reversible fuel cell is found to be -237,140/(2 x 96,500) = -1.23 V. We find that 1.23 Volts is about the minimum voltage you need to do electrolysis at 25°C, because you need liquid water to carry the current; -1.18 V is about the maximum voltage you can get from a fuel cell, because fuel cells operate at higher temperatures, with oxygen pressures significantly below 1 atm (typically). The minus sign is kept for accounting; it differentiates the power-out case (fuel cells) from power in (electrolysis).

Most electrolysis is done at voltages above about 1.48 V. Just as fuel cells always give off heat (they are exothermic), electrolysis will absorb heat if run reversibly. That is, electrolysis can act as a refrigerator if run reversibly, but it is not a very good refrigerator (the refrigeration ability is tied up in the entropy term mentioned above). To do electrolysis at fast rates, people give up on refrigeration and provide all the energy needed in the electricity. In this case, ∆H = nFV', where ∆H is the enthalpy of water formation and V' is this higher voltage for electrolysis. Based on the enthalpy of liquid water formation, -285.8 kJ/mol, we find V' = 1.48 V at 25°C. At this voltage, no net heat is given off or absorbed. The figure below shows that you can use less voltage, but not if you want to make hydrogen fast:
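Both voltages follow directly from V = ∆G/nF and V' = ∆H/nF; a minimal sketch using the rounded constants from the text:

```python
F = 96500.0   # Faraday constant, amp-seconds per mol of electrons
n = 2         # electrons per molecule of H2

dG_liquid = 237140.0   # J/mol, magnitude of ∆G° for liquid-water formation
dG_gas    = 228610.0   # J/mol, same for gaseous water
dH_liquid = 285800.0   # J/mol, magnitude of ∆H° for liquid-water formation

V_rev_liquid = dG_liquid / (n * F)      # minimum electrolysis voltage, ~1.23 V
V_rev_gas = dG_gas / (n * F)            # ~maximum fuel-cell voltage, ~1.18 V
V_thermoneutral = dH_liquid / (n * F)   # no net heat in or out, ~1.48 V
print(V_rev_liquid, V_rev_gas, V_thermoneutral)
```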

Electrolyzer performance; C-Pt catalyst on a thin, nafion membrane

If you figure out the energy that this voltage and amperage represents (shown below) you’re likely to come to a conclusion I came to several years ago: that it’s far better to generate large amounts of hydrogen chemically, ideally from membrane reactors like my company makes.

The electric power to make each 2 grams of hydrogen at 1.5 volts is 1.5 V x 193,000 Amp-s = 289,500 J = 0.080 kWh, or about 0.9¢ at current rates; but filling a car takes 20 kg, or 10,000 times as much. That’s about 800 kWh, or $90 at current rates. The electricity is twice as expensive as current gasoline, and the infrastructure cost is staggering too: a station that fuels ten cars per hour would require 8 MW, far more power than any normal distributor could provide.
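The fill-up arithmetic, as a sketch (the 11¢/kWh electricity price is an assumed illustrative rate, not a quoted one):

```python
F = 96500.0   # Faraday constant, amp-seconds per mol of electrons

def h2_electrolysis_kwh(kg_h2, volts=1.5):
    """Electric energy (kWh) to make kg_h2 of hydrogen at the given cell voltage."""
    mols = kg_h2 * 1000.0 / 2.0      # 2 grams per mol of H2
    joules = volts * 2 * F * mols    # V * i * t, with i*t = 2F per mol
    return joules / 3.6e6            # 3.6 MJ per kWh

price = 0.11                         # assumed electricity price, $/kWh
fill = h2_electrolysis_kwh(20.0)     # a 20 kg car fill
print(fill, fill * price)            # ~804 kWh, ~$88 at this assumed rate
```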

By contrast, methanol costs about 2/3 as much as gasoline, and it’s easy to deliver many gigajoules of methanol energy to a gas station by truck. Our company’s membrane reactor hydrogen generators convert methanol-water to hydrogen efficiently by the reaction CH3OH + H2O –> 3H2 + CO2. This is not to say that electrolysis isn’t worthwhile for lower-demand applications: see, e.g., gas chromatography and electric generator cooling. Here’s how membrane reactors work.

R. E. Buxbaum July 1, 2013; Those who want to show off, should post the temperature and pressure corrections to my calculations for the reversible voltage of typical fuel cells and electrolysis.

# How and why membrane reactors work

Here is a link to a 3 year old essay of mine about how membrane reactors work and how you can use them to get past the normal limits of thermodynamics. The words are good, as is the example application, but I think I can write a shorter version now. Also, sorry to say, when I wrote the essay I was just beginning to make membrane reactors; my designs have gotten simpler since.

Above, for example, is a more modern, high-pressure membrane reactor design: a 72-tube reactor assembly. The area at right is used for heat transfer. Normally the reactor would sit with this end up, and the tube area filled or half-filled with catalyst, e.g. for the water gas shift reaction, CO + H2O –> CO2 + H2. According to normal thermodynamics, the extent of this reaction, once it reaches equilibrium, is not affected by pressure, only by temperature. If you want the reaction to go reasonably to completion, you have to operate at low temperatures, 250-300 °C, and you have to cool externally to remove the heat of reaction. In a membrane reactor, you can operate at much higher temperatures and you don’t have to work so hard to remove heat. The trick is to operate with the reacting gas at high pressure, and to extract hydrogen at lower pressure. With a large enough difference between the reacting pressure and the extract pressure, you can achieve high extents (high conversions) at any temperature.
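A rough numeric sketch of why extraction helps, using the Moe correlation for the water-gas-shift equilibrium constant (the correlation and the temperatures are my assumptions for illustration, not from the original essay):

```python
import math

def K_wgs(T_kelvin):
    """Water-gas-shift equilibrium constant, K = (P_CO2 * P_H2)/(P_CO * P_H2O),
    from the approximate Moe correlation."""
    return math.exp(4577.8 / T_kelvin - 4.33)

# K falls as temperature rises, so an ordinary reactor is conversion-limited at high T:
for T in (523.0, 673.0, 873.0):   # 250, 400, and 600 °C
    print(T, K_wgs(T))

# In a membrane reactor, pulling H2 out keeps (P_CO2 * P_H2)/(P_CO * P_H2O)
# below K even when K is small, so the reaction keeps moving toward completion.
```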

Here’s where we sell membrane reactors; we also sell catalyst and tubes.