Category Archives: thermodynamics

magnetic separation of air

As some of you will know, oxygen is paramagnetic: attracted slightly by a magnet. Oxygen's paramagnetism is due to the two unpaired electrons in every O2 molecule. Oxygen has a triple-bond structure, as discussed here (much of the chemistry you were taught is wrong). Virtually every other common gas is diamagnetic, repelled by a magnet; this includes nitrogen, water, CO2, and argon. As a result, you can do a reasonable job of extracting oxygen from air by the use of a magnet. This is awfully cool, and could make for a good science fair project, if anyone is of a mind.

But first some math, or physics, if you like. To a good approximation, the magnetization of a material is M = CH/T, where M is magnetization, H is magnetic field strength, C is the Curie constant for the material, and T is absolute temperature.

Ignoring, for now, the difference between entropy and internal energy, and thinking only in terms of the work derived by lowering a magnet toward a volume of gas, we can say that the work extracted, and thus the decrease in energy of the magnetic gas, is ∫H dM = MH/2. At constant temperature and pressure, we can say ∆G = -CH²/2T.

With a neodymium magnet, you should be able to get a field of about 0.5 Tesla, or 400,000 ampere-meters. At 20°C, the per-mole magnetic susceptibility of oxygen is 1.34×10⁻⁶. This suggests that the Curie constant is 1.34×10⁻⁶ × 293 = 3.93×10⁻⁴. At 20°C, this energy difference works out to about 1072 J/mol. Set this equal to RT ln β, where β is the concentration ratio between the O2 content of the magnetized and un-magnetized gas.

From the above, we find that, at room temperature, 298 K, β ≈ 1.6, and thus that the maximum oxygen concentration you're likely to get is about 1.6 × 21% = 33%. It's slightly more than this due to nitrogen's diamagnetism, but that effect is too small to matter. What does matter is that 33% O2 is a good amount for a variety of medical uses.
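As a sanity check on the numbers above, here's a short Python sketch (my own addition, using the 1072 J/mol figure from the text); depending on rounding, β comes out nearer 1.5 than 1.6:

```python
import math

R = 8.314    # gas constant, J/(mol K)
T = 298.0    # room temperature, K
dG = 1072.0  # magnetic energy difference for O2, J/mol (from the text)

# Equilibrium enrichment ratio between magnetized and un-magnetized gas:
beta = math.exp(dG / (R * T))

# Air is about 21% O2; the magnetized stream is enriched by a factor beta.
o2_enriched = 0.21 * beta

print(f"beta = {beta:.2f}")                # about 1.5
print(f"enriched O2 = {o2_enriched:.1%}")  # about 32%, near the 33% above
```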

I show below my simple design for a magnetic O2 concentrator. The dotted line is a permeable membrane of no particular selectivity; with a little O2 permeability the design will work better. All you need besides is a blower or pump. A coffee filter could serve as the membrane.

This design is as simple as the standard O2 concentrator, the kind based on semi-permeable membranes, but this design should require less pressure differential: just enough to overcome the magnet. Less pressure means the blower can be smaller and less noisy, with less energy use. I figure this could be really convenient for people who need portable oxygen. With several stages and low-temperature operation, this design could have commercial use.

On the theoretical end, an interesting thing I find concerns the effect on the entropy of the magnetic oxygen. (Please ignore this paragraph if you have not learned statistical thermodynamics.) While you might imagine that magnetization decreases entropy, other things being equal, because the molecules are somewhat aligned with the field, I've come to realize that, at fixed temperature and pressure, the entropy is likely higher. A sea of semi-aligned molecules will have a slightly higher heat capacity than non-aligned molecules because the vibrational Cp is higher, other things being equal. Thus, unless I'm wrong, the temperature of the gas will be slightly lower in the magnetic area than in the non-magnetic area. Temperature and pressure are not the same within the separator as outside it, by the way; the blower is something of a compressor, though a much less energy-intense one than is used for most air separators. Because of the blower, both the magnetic and the non-magnetic air will be slightly warmer than the surroundings (blower work = Cp ∆T). This heat will be mostly lost when the gas leaves the system; that is, when it flows to lower pressure, both gas streams will be essentially at room temperature. Again, this is not the case with the classic membrane-based oxygen concentrators; there, the nitrogen-rich stream is notably warm.

Robert E. Buxbaum, October 11, 2017. I find thermodynamics wonderful, both as science and as an analog for society.

The chemistry of sewage treatment

The first thing to know about sewage is that it's mostly water, with only about 250 ppm solids. That is, if you boiled down a pot of sewage, only about 1/40 of 1% of it would remain as solids at the bottom of the pot. There would be some dried poop, some bits of lint and soap, the remains of potato peelings… Mostly, the sewage is water, and mostly it would have boiled away. The second thing to know is that the solids, the bio-solids, are a lot like soil but better: more valuable, brown gold if used right. While our county mostly burns and landfills the solids remnant of our treated sewage, the wiser choice would be to convert it to fertilizer. Here is a comparison between the composition of soil and bio-solids.

The composition of soil and the composition of bio-solid waste. Biosolids are like soil, just better.

Most of Oakland's sewage goes to Detroit, where they mostly dry and burn it, and landfill the rest. These processes are expensive and problematic engineering. It takes a lot of energy to dry these solids to the point where they burn (they're like really wet wood), and even then they don't burn nicely. As shown above, the biosolids contain lots of sulfur, and that makes combustion smelly. They also contain nitrate, and that makes combustion dangerous. It's sort of like burning natural gunpowder.

The preferred solution is partial combustion (oxidation) at room temperature by bacteria followed by conversion to fertilizer. In Detroit we do this first stage of treatment, the slow partial combustion by bacteria. Consider glucose, a typical carbohydrate,

-HCOH- + O2 –> CO2 + H2O.    ∆G° = -114.6 kcal/mol.

The value of ∆G° is relevant as a determinant of whether the reaction will proceed. A negative value of ∆G°, as above, indicates that the reaction can progress substantially to completion at standard conditions of 25°C and 1 atm pressure. In a sewage plant, many different carbohydrates are treated by many different bacteria (amoebae, paramecia, and lactobacilli), and the temperature is slightly cooler than room temperature, about 10-15°C, but this value of ∆G° suggests that near-total biological oxidation is possible.
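To see just how favorable this reaction is, you can turn ∆G° into an equilibrium constant, K = exp(-∆G°/RT). A quick sketch, my own addition, using the -114.6 kcal/mol figure from the text:

```python
import math

R = 1.987        # gas constant, cal/(mol K)
T = 298.0        # 25°C, in K
dG = -114600.0   # ∆G° for carbohydrate oxidation, cal/mol (from the text)

# Equilibrium constant: K = exp(-dG/RT)
K = math.exp(-dG / (R * T))
print(f"K = {K:.3g}")  # about 1e84: oxidation goes essentially to completion
```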

The Detroit plant, like most others, does this biological oxidation treatment using either large stirred tanks, of million-gallon volume or so, or flow reactors with a large fraction of the cellular material returning as recycle. Recycle is needed in the stirred-tank process too, because of the low solid content. The reaction is approximately first order in oxygen, carbohydrate, and bacteria. Thus a 50% cell recycle more or less doubles the speed of the reaction. Air is typically bubbled through the reactor to provide the oxygen, but in Detroit, pure oxygen is used. About half the organic carbon is oxidized and the remainder is sent to a settling pond. The decant (top) water is sent for "polishing" and dumped in the river, while the goop (the bottom) is currently dried for burning or carted off for landfill. The Holly, MI sewage plant uses heterogeneous reactors for the oxidation: a trickle bed followed by a rotating disk contactor. These have higher bio-content and thus lower area demands and separation costs, but a somewhat higher capital cost.

A major component of bio-solids is nitrogen. Much of this enters in the form of urea, NH2-CO-NH2. In an oxidizing environment, bacteria turn the urea and other nitrogen compounds into nitrate. Consider the reaction in the presence of washing soda, Na2CO3. The urea is turned into nitrate, a product suitable for gunpowder manufacture. The value of ∆G° is negative, and the reaction is highly favorable.

NH2-CO-NH2 + Na2CO3 + 4 O2 –> 2 Na(NO3) + 2 CO2 + 2 H2O.     ∆G° = -177.5 kcal/mol

The mixture of nitrates and dry bio-solids is highly flammable, and there was recently a fire in the Detroit biosolids dryer. If we wished to make fertilizer, we'd probably want to replace the dryer with a further stage of bio-treatment. In Wisconsin, and on a smaller scale in Oakland, MI, biosolids are treated by higher-temperature (thermophilic) bacteria in the absence of air, that is, anaerobically. Anaerobic digestion produces hydrogen and methane, along with highly useful forms of organic carbon.

2 (-HCOH-) –> CO2 + CH4        ∆G° = -33.7 kcal/mol

3 (-HCOH-) + H2O –> -CH2COOH + CO2 +  2 1/2 H2        ∆G° = -21.9 kcal/mol

In a well-designed plant, the methane is recovered to provide heat to the plant, and sometimes to generate power. In Wisconsin, enough methane is produced to cook the fertilizer to sterilization. The product is called “Milorganite” as much of it comes from Milwaukee and much of the nitrate is bound to organics.

Egg-shaped, anaerobic biosolid digestors, Singapore.

The hydrogen could be recovered too, but typically reacts further within the anaerobic digester. Typically it will reduce the iron oxide in the biosolids from the brown, ferric form, Fe2O3, to black FeO.  In a reducing atmosphere,

Fe2O3 + H2 –> 2 FeO + H2O.

Fe2O3 is the reason leaves turn brown in the fall, and the reason that most poop is brown. FeO is the reason that composted soil is typically black. You'll notice that swamps are filled with black goo; that's because of a lack of oxygen at the bottom. Sulfate and phosphorus can be bound to ferrous iron, and this is good for fertilizer. Generally, you want the reduction reactions to go no further.

Weir dam on the river Dour in Scotland. Dams of this type are used to manage floods, increase residence time, and oxygenate the flow. They're good for fish, for pollution control, and for flood control.

When allowed to continue, the hydrogen produced by anaerobic digestion begins to reduce sulfate to H2S.

Na2SO4 + 4 H2 –>  2 NaOH + 2 H2O + H2S.

I'm running for Oakland county, MI water commissioner, and one of my aims is to stop wasting our biosolids. Oakland produces nearly 1,000,000 pounds of dry biosolids per day. This is either a blessing or a curse, depending on how we use it.

Another issue, Oakland county dumps unpasteurized, smelly black goo into Lake St. Clair every other week, whenever it rains more than one inch. I’d like to stop this by separating the storm and “sanitary” sewage. There is a capital cost, but it can save money because we’d no longer have to pay to treat our rainwater at the Detroit sewage plant. To clean the storm runoff, I’d use mini wetlands and weir dams to increase residence time and provide oxygen. Done right, it would look beautiful and would avoid the flash floods. It should also bring natural fish back to the Clinton River.

Robert Buxbaum, May 24 – Sept. 15, 2016 Thermodynamics plays a big role in my posts. You can show that, when the global ∆G is negative, there is an increase in the entropy of the universe.

Alcohol and gasoline don’t mix in the cold

One of the worst ideas to come out of the Iowa caucuses, I thought, was Ted Cruz claiming he’d allow farmers to blend as much alcohol into their gasoline as they liked. While this may have sounded good in Iowa, and while it’s consistent with his non-regulation theme, it’s horribly bad engineering.

At low temperatures ethanol and gasoline are no longer quite miscible

Ethanol and gasoline are not fully miscible at temperatures below freezing, 0°C. The tendency to separate is greater if the ethanol is wet or the gasoline contains benzenes.

We add alcohol to gasoline, not to save money, mostly, but so that farmers will produce an excess, and we'll have secure food for wartime or famine, or so I understand it. But the government only allows 10% alcohol in the blend because alcohol and gasoline don't mix well when it's cold. You may notice, even with the 10% mixture we use, that your car starts poorly on the coldest winter days. The engine turns over and almost catches, but dies. A major reason is that the alcohol separates from the rest of the gasoline. The concentrated alcohol layer screws up combustion because alcohol doesn't burn all that well. With Cruz's higher alcohol allowance, you'd get separation more often, at temperatures as high as 13°C (55°F) for a 65 mol% mix; see the chart at right. Things get worse yet if the gasoline gets wet or contains benzene. Gasoline blending is complex stuff: something the average Joe should not do.

Solubility of dry alcohol (ethanol) in gasoline. The solubility is worse at low temperature and if the gasoline is wet or aromatic.

Solubility of alcohol (ethanol) in gasoline; an extrapolation based on the data above.

To estimate the separation temperature of our normal, 10% alcohol-gasoline mix, I extended the data from the chart above using linear regression. Following thermodynamics, I extrapolated ln-concentration vs 1/T, and found that a 10%-by-volume mix (5% mol fraction alcohol) will separate at about -40°F. Chances are, you won't see that temperature this winter (and if you do, try to find a gas mix that has no alcohol). Another thought: add hydrogen or other combustible gas to get the engine going.
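Here's a sketch of the sort of extrapolation I mean, in Python. The chart points are hypothetical values eyeballed from a solubility curve like the one above, not exact data, so treat the result as illustrative:

```python
import math

# Hypothetical (T in K, mol fraction ethanol at separation) pairs,
# eyeballed from a solubility chart like the one above:
points = [(286.0, 0.65), (273.0, 0.38), (260.0, 0.21)]

# Thermodynamics suggests ln(x) should be linear in 1/T; fit by least squares.
xs = [1.0 / T for T, x in points]
ys = [math.log(x) for T, x in points]
n = len(points)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys)) / \
    sum((xi - mx) ** 2 for xi in xs)
a = my - b * mx

# Extrapolate: at what T does a 5 mol% (10 vol%) mix separate?
# From ln(x) = a + b/T, solve T = b / (ln(x) - a).
T_sep = b / (math.log(0.05) - a)
print(f"separation near {T_sep:.0f} K = {T_sep - 273.15:.0f} C")
# comes out near -40°C, which is also about -40°F
```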

Robert E. Buxbaum, February 10, 2016. Two more thoughts: (1) Thermodynamics is a beautiful subject to learn, and (2) avoid people who stick to foolish consistency. Too much regulation is bad, as is too little. It's a common pattern: the difference between a cure and a poison is often just the dose.

It’s rocket science

Here are six or so rocket science insights, some simple, some advanced. It’s a fun area of engineering that touches many areas of science and politics. Besides, some people seem to think I’m a rocket scientist.

A basic question I get asked by kids is how a rocket goes up. My answer is it does not go up. That’s mostly an illusion. The majority of the rocket — the fuel — goes down, and only the light shell goes up. People imagine they are seeing the rocket go up. Taken as a whole, fuel and shell, they both go down at 1 G: 9.8 m/s2, 32 ft/sec2.

Because 1 G of upward acceleration is always lost to gravity, you need high thrust from the rocket engine, especially at the beginning when the rocket is heaviest. If your engine provides less thrust than the weight of your rocket, your rocket sits on the launch pad, and if your thrust is merely twice the weight of the rocket, you waste half of your fuel doing nothing useful. Effectively, the upward acceleration of the shell is a = F/m - 1 G, where F is the force of the engine, m is the mass of the rocket and whatever fuel is in it, and the 1 G is the upward acceleration lost to gravity. My guess is that you want to design a rocket engine so that the upward acceleration, a, is in the range 8-10 G. This range avoids wasting lots of fuel without requiring you to build the rocket too sturdy. At a = 9 G, the rocket engine force, F, has to be about 10 times the rocket weight; it also means the rocket structure must be sturdy enough to support a force of ten times the rocket weight. This can be tricky because the rocket will be the size of a small skyscraper, and the structure must be light so that the majority is fuel. It's also tricky that this force, about ten times the rocket weight, must sit on an engine that runs really hot, about 3000°C. Most engineering projects have fewer constraints than this, and are thus "not rocket science."
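The force balance above is easy to put in code. A tiny sketch (my own) of net upward acceleration vs thrust-to-weight ratio:

```python
g = 9.8  # m/s^2, acceleration of gravity

def net_accel(thrust_to_weight):
    """Upward acceleration, in m/s^2, for a rocket with the given
    thrust-to-weight ratio: a = F/m - g."""
    return (thrust_to_weight - 1.0) * g

print(net_accel(1.0))   # 0.0: sits on the pad
print(net_accel(2.0))   # 9.8: half the thrust just fights gravity
print(net_accel(10.0))  # 88.2: the 9 G design point in the text
```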

Basic force balance on a rocket going up.

A space rocket has to reach very high speed; most things that go up come down almost immediately. You can calculate the minimum orbital speed by balancing the acceleration of gravity, 9.8 m/s², against the centripetal acceleration of going around the earth, a sphere of 40,000 km in circumference (that's how the meter was defined). Centripetal acceleration is a = v²/r, and r = 40,000,000 m/2π = 6,366,000 m. Thus, the speed you need to stay up indefinitely is v = √(6,366,000 × 9.8) = 7,900 m/s = 17,700 mph. That's roughly Mach 23, or 23 times the speed of sound at sea level. You need some altitude too, just to keep air friction from killing you, but for most missions the main thing you need is velocity: kinetic energy, not potential energy, as I'll show below. If you achieve more speed than 7,900 m/s, you circle the earth higher up; this makes docking space-ships tricky, as I'll explain as well.
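The orbital-speed arithmetic above, as a short Python check (my own sketch):

```python
import math

g = 9.8                       # m/s^2
circumference = 40_000_000.0  # m, earth's circumference
r = circumference / (2 * math.pi)

# Balance gravity against centripetal acceleration: g = v^2 / r
v = math.sqrt(g * r)
print(f"r = {r/1000:.0f} km, v = {v:.0f} m/s = {v*2.23694:.0f} mph")
# about 7,900 m/s, roughly 17,700 mph
```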

It turns out that kinetic energy is quite a lot more important than potential energy for sending an object into orbit, and that rockets are the only practical way to reach orbital speed; no current cannon or gun can get there. To get a sense of the energy balance involved in rocketry, consider a one kg mass at orbital speed, 7,900 m/s, and 200 km altitude. You can calculate that the kinetic energy is 31,205 kJ, while the potential energy, mgh, is only 1,960 kJ. For this orbital height, 200 km, the kinetic energy is about 16 times the potential energy. Not that it's easy to reach 200 km altitude; you can do that with a sophisticated cannon, or a "simple," one-stage, V2-style rocket. But to reach 7,900 m/s, you need multiple stages. As a way to see this, consider that the energy content of gasoline + oxygen is about 10.5 MJ/kg (10,500 kJ/kg); this is only 1/3 of the kinetic energy of the orbital rocket, but it's 5 times the potential energy. A fairly efficient gasoline + oxygen powered cannon could not provide orbital kinetic energy since the bullet can move no faster than the explosive vapor. In a rocket, this is not a constraint since most of the mass is ejected. I'll explain further below.
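The kinetic-vs-potential comparison, sketched in code (my own check of the numbers above):

```python
m = 1.0        # kg
v = 7900.0     # m/s, orbital speed
g = 9.8        # m/s^2
h = 200_000.0  # m, 200 km altitude

ke = 0.5 * m * v**2   # kinetic energy, J
pe = m * g * h        # potential energy, J

print(f"KE = {ke/1000:.0f} kJ, PE = {pe/1000:.0f} kJ, ratio = {ke/pe:.1f}")
# KE = 31205 kJ, PE = 1960 kJ, ratio = 15.9
```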

A shell fired at a 45° angle that reaches 200 km altitude would go about 800 km — the distance between North Korea and Japan, or between Iran and Israel. That would require twice as much energy as a shell fired straight up, about 4000 kJ/kg. This is a value still within the range for a (very large) cannon or a single-stage rocket. For Russia or China to hit the US would take much more velocity, and orbital, or near orbital rocketry. To reach the moon, you need more total energy, but less kinetic energy. Moon rockets have taken the approach of first going into orbit, and only later going on. While most of the kinetic energy isn’t lost, I’m still not sure it’s the best trajectory.

The force produced by a rocket is equal to the rate of mass shot out times its velocity: F = ∆(mv)/∆t. To get a lot of force for each bit of fuel, you want the gas exit velocity to be as fast as possible. A typical maximum is about 2,500 m/s for a gasoline-oxygen engine. The upward acceleration of the rocket itself is this force divided by the total remaining mass in the rocket (rocket shell plus remaining fuel), minus 1 G for gravity. Thus, if the exhaust from a rocket leaves at 2,500 m/s, and you want the rocket to accelerate upward at 9 G, the engine must develop 10 G, 98 m/s². The rate of mass exhaust is the mass of the rocket times 98/2500 = 0.0392 per second. That is, about 3.9% of the rocket mass must be ejected each second. Assuming that the fuel for your first-stage engine is no more than 80% of the total mass, the first stage will flare out in about 20 seconds at this rate. Your acceleration at the end of the 20 seconds will be greater than 9 G, by the way, since the rocket gets lighter as fuel is burnt. When half the weight is gone, it will be accelerating at 19 G.
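The burn-rate arithmetic above, as a quick Python check (my own sketch):

```python
g = 9.8              # m/s^2
a_engine = 10 * g    # engine must supply 10 G to net 9 G upward
ve = 2500.0          # m/s, exhaust velocity

# Thrust F = mdot * ve must equal m * a_engine, so the fractional
# mass-loss rate is a_engine / ve:
frac_per_s = a_engine / ve
print(f"{frac_per_s:.4f} of the rocket mass per second")  # 0.0392

# Time to burn through an 80% fuel fraction at this initial rate:
burn_time = 0.80 / frac_per_s
print(f"about {burn_time:.0f} s")  # about 20 s
```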

If you have a good math background, you can develop a differential equation for the relation between fuel consumption and altitude or final speed. This is readily done if you know calculus, or reasonably done if you use difference methods. By either method, it turns out that, with no air friction or gravity resistance, you will reach the same speed as the exhaust when about 63% of the rocket mass is exhausted. In the real world, your rocket will have to exhaust 75 or 80% of its mass as first-stage fuel to reach a final speed of 2,500 m/s. This is less than 1/3 of orbital speed, and reaching it requires that the rest of your rocket mass, that is, the engine, second stage, payload, and any spare fuel to handle descent (Elon Musk's approach), weigh less than 20-25% of the original weight of the rocket on the launch pad. The gasoline and oxygen are expensive, but not horribly so if you can reuse the rocket; that's the motivation for NASA's and SpaceX's work on reusable rockets. Most orbital rocket designs require at least three stages to accelerate to the 7,900 m/s calculated above, and the second stage is almost invariably lost. If you can set up and solve the differential equation above, a career in science may be for you.
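The differential equation in question integrates to the classic rocket equation, ∆v = ve ln(m0/mf). A sketch (my own) confirming the roughly-63% figure, and showing why a single stage can't reach orbit:

```python
import math

def fuel_fraction(dv, ve):
    """Fraction of initial mass that must be exhausted to gain dv,
    from the rocket equation dv = ve * ln(m0/mf),
    ignoring gravity losses and air friction."""
    return 1.0 - math.exp(-dv / ve)

ve = 2500.0  # m/s, exhaust velocity
print(f"{fuel_fraction(ve, ve):.1%}")      # 63.2% to match the exhaust speed
print(f"{fuel_fraction(7900.0, ve):.1%}")  # 95.8% for orbital speed in one stage
```

The second number is the point: with everything but fuel limited to a few percent of launch mass, one stage at this exhaust speed can't carry a useful shell to orbit.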

Now, you might wonder about the exhaust speed I've been using, 2,500 m/s. If you can achieve higher speeds, the rocket design will be a lot easier, but doing so is not easy for a gasoline/oxygen engine like Russia and the US use currently. The heat of combustion of gasoline is 42 MJ/kg, but burning a kg of gasoline requires roughly 3 kg of oxygen. Thus, for a rocket fueled by gasoline + oxygen, the heat of combustion is about 10.5 MJ/kg of combined mass. Now assume that the rocket engine is 30% efficient. Per unit of fuel + oxygen mass, 1/2 v² = 0.3 × 10,500,000, so v = √6,300,000 ≈ 2,500 m/s. Higher exhaust speeds have been achieved, e.g. with hydrogen-fueled rockets. The sources of inefficiency are many, including incomplete combustion in the engine, gas flow off the center-line, and friction flow in the engine and between the atmosphere and gases leaving the rocket nozzle. If you can make a reliable, higher-efficiency engine, a career in engineering may be for you.
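The exhaust-speed estimate, as a one-line check (my own sketch, with the 30% efficiency assumed as in the text):

```python
import math

heat = 10.5e6  # J/kg, heat of combustion per kg of gasoline + oxygen
eff = 0.30     # assumed engine efficiency

# 1/2 v^2 = eff * heat
ve = math.sqrt(2 * eff * heat)
print(f"ve = {ve:.0f} m/s")  # about 2510 m/s
```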

At an average acceleration of 10 G = 98 m/s² and a first stage that reaches 2,500 m/s, you find that the first stage will burn out after 25.5 seconds. If the rocket were going straight up (a bad idea), you'd find you are at an altitude of about 28.7 km. A better idea would be an average trajectory of 30° above the horizon, leaving you at an altitude of 14 km or so. At that altitude you can expect far less air friction, and you can expect the second-stage engine to be more efficient. It seems to me you may want to wait 15 seconds or so before firing the second stage: you'll be another few km up, and it seems to me that the benefit of this altitude will be worthwhile. I guess that's why most space launches wait a few seconds before firing the second stage.

As a final bit, I'd mentioned that docking a rocket with a space station is difficult, in part, because docking requires an increase in angular speed, ω, but this generally goes along with a decrease in altitude: a counter-intuitive behavior. Setting the acceleration due to gravity equal to the centripetal acceleration, we find GM/r² = ω²r, where G is the gravitational constant, and M is the mass of the earth. Rearranging, we find that ω² = GM/r³. For high angular speed, you need small r: a low altitude. When we first went to dock a space-ship, in the early 60s, we had not realized this. When the astronauts fired the engines to dock, they found that they'd accelerate in velocity, but not in angular speed: v = ωr. The faster they went, the higher up they went, but the lower the angular speed got: the fewer the orbits per day. Eventually they realized that, to dock with another ship or a space-station that is in front of you, you do not accelerate, but decelerate. When you decelerate you lose altitude and gain angular speed: you catch up with the station, but at a lower altitude. Your next step is to angle your ship near-radially to the earth, and accelerate by firing engines to the side till you dock. Like much of orbital rocketry, it's simple, but not intuitive or easy.
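The counter-intuitive docking relation, ω² = GM/r³, is easy to verify numerically. A sketch (my own; GM here is the standard gravitational parameter of the earth):

```python
import math

GM = 3.986e14      # m^3/s^2, earth's gravitational parameter
R_earth = 6.371e6  # m, earth's radius

def omega(altitude_m):
    """Orbital angular speed, rad/s, at a given altitude."""
    r = R_earth + altitude_m
    return math.sqrt(GM / r**3)

w_low, w_high = omega(200e3), omega(400e3)
print(w_low > w_high)  # True: the lower orbit goes around faster
period_min = 2 * math.pi / w_low / 60
print(f"period at 200 km: {period_min:.0f} min")  # about 88 minutes
```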

Robert Buxbaum, August 12, 2015. A cannon that could reach from North Korea to Japan, say, would have to be on the order of 10 km long, running along the slope of a mountain. Even at that length, the shell would have to fire at about 45 G (v²/2d ≈ 450 m/s²), and reach a speed of about 3,000 m/s, or 1/3 orbital.

No need to conserve energy

Earth Day energy conservation stamp from the early 1970s.

I’m reminded that one of the major ideas of Earth Day, energy conservation, is completely unnecessary: Energy is always conserved. It’s entropy that needs to be conserved.

The entropy of the universe increases for any process that occurs, for any process that you can make occur, and for any part of any process. While some parts of processes are very efficient in themselves, they are always entropy generators when considered on a global scale. Entropy is the arrow of time: if entropy ever goes backward, time has reversed.

A thought I've had on how you might conserve entropy: grow trees and use them for building materials, or convert them to gasoline, or just burn them for power. Under ideal conditions, photosynthesis is about 30% efficient at converting photon energy to glucose (photons + CO2 + water –> glucose + O2). This would be nearly the same energy-conversion efficiency as solar cells, if not for the energy the plant uses to live. But solar cells have inefficiency issues of their own, and as a result the land use per unit power is about the same. And it's a lot easier to grow a tree and dispose of forest waste than it is to make a solar cell and dispose of used coated glass and broken electronic components. Just some Earth Day thoughts from Robert E. Buxbaum. April 24, 2015

Much of the chemistry you learned is wrong

When you were in school, you probably learned that understanding chemistry involved understanding the bonds between atoms; that all the things of the world were made of molecules, and that these molecules were fixed-proportion combinations of the chemical elements held together by one of the two or three types of electron-sharing bonds. You were taught that water was H2O, that table salt was NaCl, that glass was SiO2, and that rust was Fe2O3, and perhaps that the bonds involved an electron transferring from an electron-giver (H, Na, Si, or Fe above) to an electron receiver (O or Cl above).

Sorry to say, none of that is true. These are fictions perpetrated by well-meaning, and sometimes ignorant, teachers. All of the materials mentioned above are grand polymers. Any of them can have extra or fewer atoms of any species, and as a result the stoichiometry isn't quite fixed. They are not molecules at all in the sense you knew them. Also, ionic bonds hardly exist, not in any chemical you're familiar with; there are no common electron-transfer compounds. The world works almost entirely on covalent, shared bonds. If bonds were ionic, you could separate most materials by direct electrolysis of the pure compound, but you can not. You can not, for example, make iron by electrolysis of rust, nor can you make silicon by electrolysis of pure SiO2, or titanium by electrolysis of pure TiO2. If you could, you'd make a lot of money and titanium would be very cheap. On the other hand, the fact that stoichiometry is rarely fixed allows you to make many useful devices, e.g. solid-oxide fuel cells: things that should not work based on the chemistry you were taught.

Iron-zinc forms compounds, but they don't have fixed stoichiometry. As an example, the compound at 68-80 atom% Zn is, I guess, Zn7Fe3 with many substituted atoms, especially at temperatures near 665°C.

Because most bonds are covalent, many compounds form that you would not expect. Most metal pairs form compounds with unusual stoichiometric composition. Here, for example, is the phase diagram for zinc and iron, the materials behind galvanized sheet metal: iron that does not rust readily. The delta phase has a composition between 85 and 92 atom% Zn (8 and 15 atom% iron): perhaps the main compound is Zn5Fe2, not the sort of compound you'd expect, and it has a very variable composition.

You may now ask why your teachers didn't tell you this sort of stuff, but instead told you a pack of lies and half-truths. In part it's because we don't quite understand this ourselves, and we don't like to admit that. And besides, the lies serve a useful purpose: they give us something to test you on, a way to tell if you are a good student. The good students are those who memorize well and spit our lies back without asking too many questions of the wrong sort. We give students who do this good grades. I'm going to guess you were a good student (congratulations, so was I). The dullards got confused by our explanations. They asked too many questions, and asked, "can you explain that again?" or "why?" We get mad at these dullards and give them low grades. Eventually, the dullards feel bad enough about themselves to allow themselves to be ruled by us. We graduates who are confident in our ignorance rule the world, but inventions come from the dullards who don't feel bad about their ignorance. They survive despite our best efforts. A few more of these folks survive in the west, and especially in America, than survive elsewhere. If you're one, be happy you live here. In most countries you'd be beheaded.

Back to chemistry. It's very difficult to know where to start un-teaching someone. Let's start with EMF and ionic bonds. While it is generally easier to remove an electron from a free metal atom than from a free non-metal atom (e.g. from a sodium atom rather than an oxygen atom), removing an electron is always energetically unfavored, for all atoms. Similarly, while oxygen takes an extra electron more easily than iron would, adding an electron is also energetically unfavored. The figure below shows the classic ionic bond (left) and two electron-sharing options (center and right); one is a bonding option, the other anti-bonding. Nature prefers electron sharing to ionic bonds, even with blatantly ionic elements like sodium and chlorine.

Bond options in NaCl. Note that covalent is the stronger bond option though it requires less ionization.

There is a very small degree of ionic bonding in NaCl (left picture), but in virtually every case, covalent bonds (center) are easier to form and stronger when formed. And then there is the key anti-bonding state (right picture). The anti-bond is hardly ever mentioned in high school or college chemistry, but it is critical; it's this bond that keeps all matter from shrinking into nothingness.

I've discussed hydrogen bonds before. I find them fascinating since they make water wet and make life possible. I'd mentioned that they are just like regular bonds except that the quantum hydrogen atom (proton) plays the role that the electron plays. I now have to add that this is not a transfer, but covalent sharing. The H atom (proton) divides up like the electron did in the NaCl above. Thus, two water molecules are attracted by having partial bits of a proton half-way between the two oxygen atoms. The proton does not stay put at the center there, but bobs between them as a quantum cloud. I should also mention that the hydrogen bond has an anti-bond state, just like the electron bond above. We were never "taught" the hydrogen bond in high school or college, and fortunately so: that's how I came to understand them. My professors at Princeton saw hydrogen atoms as solid. It was their ignorance that allowed me to discover new things and get a PhD. One must be thankful for the folly of others: without it, no talented person could succeed.

And now I get to really weird bonds: entropy bonds. Have you ever noticed that meat gets softer when it's aged in the freezer? That's because most of the chemicals of life are held together by a sort of anti-bond called entropy, or randomness. The molecules in meat are unstable energetically, but actually increase the entropy of the water around them by their formation. When you lower the temperature, you cause the inherent energetic instability of the bonds to make them let go. Unfortunately, this happens only slowly at low temperatures, so you've got to age meat to tenderize it.

A nice thing about the entropy bond is that it is not particularly specific. A consequence of this is that all protein bonds are more-or-less the same strength. This allows proteins to form in a wide variety of compositions, but also means that deuterium oxide (heavy water) is toxic — it has a different entropic profile than regular water.

Robert Buxbaum, March 19, 2015. Unlearning false facts one lie at a time.

The speed of sound, Buxbaum’s correction

Ernst Mach showed that sound must travel at a particular speed through any material, one determined by the conservation of energy and of entropy. At room temperature and 1 atm, that speed is theoretically predicted to be 343 m/s. For a wave to move at any other speed, either the laws of energy conservation would have to fail, or ∆S ≠ 0 and the wave would die out. This is the only speed where you could say there is a traveling wave, and experimentally, this is found to be the speed of sound in air, to good accuracy.

Still, it strikes me that Mach’s assumptions may have been too restrictive for short-distance sound waves. Perhaps there is room for other sound speeds if you allow ∆S > 0, and consider sound that travels short distances and dies out far from the source. Waves at these, other speeds might affect music appreciation, or headphone design. As these waves were never treated in my thermodynamics textbooks, I wondered if I could derive their speed in any nice way, and if they were faster or slower than the main wave? (If I can’t use this blog to re-think my college studies, what good is it?)

It can help to think of the sound wave as moving down a constant-area tube of still air at speed u, with us moving along at the same speed as the wave. In this view, the wave appears stationary, but there is a wind of speed u approaching it.


As a first step to re-imagining Mach’s calculation, here is one way to derive the original, ∆S = 0, speed of sound. I showed in a previous post that the entropy change for compression can be imagined to have two parts: a pressure part at constant temperature, dS/dV at constant T = dP/dT at constant V. This part equals R/V for an ideal gas. There is also a temperature-at-constant-volume part of the entropy change: dS/dT at constant V = Cv/T. Dividing the two equations, we find that, at constant entropy, dT/dV = -RT/CvV = -P/Cv: the temperature rises as the gas is compressed. For a case where ∆S > 0, the compression heats the gas more, and dT/dV < -P/Cv.

Now let’s look at the conservation of mechanical energy. A compression wave gives off a certain amount of mechanical energy, or work on expansion, and this work accelerates the gas within the wave. For an ideal gas, the internal energy is stored only in its temperature. Let’s now consider a sound wave going down a tube from left to right, and let our reference plane move along with the wave at the same speed, so the wave seems to sit still while a flow of gas moves toward it from the right at the speed of the sound wave, u. For this flow system energy is conserved though no heat is removed and no useful work is done. Thus, any change in enthalpy only results in a change in kinetic energy: dH = -d(u2/2) = -u du, where H here is a per-mass enthalpy (enthalpy per kg).

dH = TdS + VdP. This can be rearranged to read, TdS = dH -VdP = -u du – VdP.

We now use conservation of mass to put du into terms of P, V, and T. By conservation of mass, u/V is constant, or d(u/V) = 0. Taking the derivative of this quotient, du/V - u dV/V2 = 0. Rearranging, we get du = u dV/V (no assumptions about entropy here). Since dH = -u du, we find that u2 dV/V = -dH = -TdS - VdP. It is now common to say that dS = 0 across the sound wave, and thus to find that u2 = -V2(dP/dV) at constant S. For an ideal gas, this last expression equals PVCp/Cv, so the speed of sound, u = √(PVCp/Cv), with the volume in per-mass terms (m3/kg).
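As a numeric sanity check of that last formula (a sketch of my own, with the molar mass of air and a 20°C temperature as assumed inputs), u = √(PVCp/Cv) does reproduce the familiar speed of sound:

```python
# Check u = sqrt(P*V*Cp/Cv) for air at 1 atm and 20°C.
# V is the specific volume in m^3/kg; M (molar mass of air) is an assumed input.
import math

P = 101325.0             # pressure, Pa
R = 8.314                # gas constant, J/mol-K
T = 293.15               # temperature, K (20°C)
M = 0.02897              # molar mass of air, kg/mol (assumption)
gamma = 7.0 / 5.0        # Cp/Cv for a diatomic ideal gas

V = R * T / (P * M)      # specific volume from the ideal gas law
u = math.sqrt(P * V * gamma)
print(round(u, 1))       # about 343 m/s
```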

The problem comes in where we say that ∆S > 0. At this point, I would say that u2 = -V(dH/dV) = -VCp dT/dV > PVCp/Cv. Unless I’ve made a mistake (always possible), I find that there is a small, leading, non-adiabatic sound wave that goes ahead of the ordinary sound wave and is experienced only close to the source. It is caused by mechanical energy that degrades to raising T, and it gives rise to more compression than would be expected for iso-entropic waves.

This should have some relevance to headphone and speaker design, since headphones are heard close to the ear while speakers are heard further away. Meanwhile, the recordings are made by microphones right next to the singers or instruments.

Robert E. Buxbaum, August 26, 2014

Dr. Who’s Quantum reality viewed as diffusion

It’s very hard to get the meaning of life from science because reality is very strange. Further, science is mathematical, and the math relations for reality can be rearranged. One arrangement of the terms will suggest a version of causality, while another will suggest a different causality. As Dr. Who points out, in non-linear, non-objective terms, there’s no causality, but rather a wibbly-wobbly ball of timey-wimey stuff.

Reality is a ball of wibbly-wobbly, timey-wimey stuff (Dr. Who).

To this end, I’ll provide my favorite way of looking at the timey-wimey way of the world by rearranging the equations of quantum mechanics into a sort of diffusion. It’s not the diffusion of something you’re quite familiar with, but rather a timey-wimey wave-stuff referred to as Ψ. It’s part real and part imaginary, and the only relationship between Ψ and life is that the chance of finding something somewhere is proportional to Ψ*Ψ. The diffusion of this half-imaginary stuff is the underpinning of reality — if viewed in a certain way.

First let’s consider the steady diffusion of a normal (un-quantum) material. If there is a lot of it, like when there’s perfume off of a prima donna, you can say that N = -D dc/dx where N is the flux of perfume (molecules per minute per area), dc/dx is a concentration gradient (there’s more perfume near her than near you), and D is a diffusivity, a number related to the mobility of those perfume molecules. 

We can further generalize the diffusion of an ordinary material for a case where concentration varies with time because of reaction or a difference between the in-rate and the out-rate. With reaction added as a source or sink term, we can write: dc/dt = reaction - dN/dx = reaction + D d2c/dx2. For a first-order reaction, for example radioactive decay, reaction = -ßc, and

dc/dt = -ßc + D d2c/dx2               (1)

where ß is the radioactive decay constant of the material whose concentration is c.
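Equation (1) is easy to integrate numerically. Below is a minimal explicit finite-difference sketch; the diffusivity, decay constant, and grid values are made-up illustrative numbers, not anything from the physics above:

```python
# Explicit finite-difference march of dc/dt = -beta*c + D d2c/dx2 (equation 1).
# All numbers here are illustrative assumptions.
import numpy as np

D, beta = 1e-2, 0.5            # diffusivity and first-order decay constant
nx, dx, dt = 101, 0.01, 1e-3   # grid; stable since D*dt/dx^2 = 0.1 < 0.5
c = np.zeros(nx)
c[nx // 2] = 1.0               # start with a pulse of material at the center

for _ in range(1000):          # march forward 1 second
    lap = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2
    lap[0] = lap[-1] = 0.0     # crude boundary: edge cells only decay
    c += dt * (-beta * c + D * lap)

# The pulse spreads by diffusion while the total amount decays roughly
# as exp(-beta*t), as the -beta*c term dictates.
print(round(c.sum() * dx, 4))
```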

Viewed in a certain way, the most relevant equation for reality, the time-dependent Schrödinger wave equation (semi-derived here), fits into the same diffusion-reaction form:

dΨ/dt = – 2iπV/h Ψ + hi/4πm d2Ψ/dx2               (2)

Instead of reality involving the motion of a real material (perfume, radioactive radon, etc.) with a real concentration, c, in this relation the material cannot be sensed directly, and the concentration, Ψ, is semi-imaginary. Here, h is Planck’s constant, i is the imaginary number, √-1, m is the mass of the real material, and V is potential energy. When dealing with reactions or charged materials, it’s relevant that V will vary with position (e.g. electrons’ energy is lower when they are near protons). The diffusivity term here is imaginary, hi/4πm, but that’s OK; Ψ is part imaginary, and we’d expect that potential energy is something of a destroyer of Ψ: the likelihood of finding something at a spot goes down where the energy is high.

The form of this diffusion is linear, a mathematical term that refers to equations where a solution that works for Ψ will also work for 2Ψ. Generally speaking, linear equations have exp() terms in their solutions, and that’s especially likely here as the only place where you see a time term is on the left. For most cases we can say that

Ψ = ψ exp(-2iπEt/h)               (3)

where ψ is not a function of anything but x (space) and E is the energy of the thing whose behavior is described by Ψ. If you take the derivative of equation 3 with respect to time, t, you get

dΨ/dt = ψ (-2iπE/h) exp(-2iπEt/h) = (-2iπE/h)Ψ.               (4)

If you insert this into equation 2, you’ll notice that the form of the first term is now identical to the second, with energy appearing identically in both terms. Divide now by exp(-2iπEt/h), and you get the following equation:

(E-V) ψ =  -h2/8π2m d2ψ/dx2                      (5)

where ψ can be thought of as the physical concentration in space of the timey-wimey stuff. ψ is still wibbly-wobbly, but no longer timey-wimey. Now ψ-squared is the likelihood of finding the stuff somewhere at any time, and E is the energy of the thing. For most things in normal conditions, E is quantized and equals approximately kT. That is, E of the thing equals, typically, a quantized energy state that’s nearly Boltzmann’s constant times temperature.

You now want to check that the approximation in equations 3-5 was legitimate. You do this by checking whether the length-scale implicit in exp(-2iπEt/h) is small relative to the length-scales of the action. If it is (and it usually is), you are free to solve for ψ at any E and V using normal mathematics, by analytic or digital means, for example this way. ψ will be wibbly-wobbly but won’t be timey-wimey. That is, the space behavior of the thing will be peculiar, with the item in forbidden locations, but there won’t be time reversal. For time reversal, you need small space features (like here) or entanglement.

Equation 5 can be considered a simple steady state diffusion equation. The stuff whose concentration is ψ is created wherever E is greater than V, and is destroyed wherever V is greater than E. The stuff then continuously diffuses from the former area to the latter, establishing a time-independent concentration profile. E is quantized (can only take some specific values) since matter can never be created or destroyed, and it is only at specific values of E that this happens in Equation 5. For a particle in a flat box, E and ψ are found, typically, by realizing that ψ must be a sine function (and ignoring an infinity). For more complex potential energy surfaces, it’s best to use a matrix solution for ψ along with non-continuous (discrete) calculus. This avoids the infinity, and is a lot more flexible besides.
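To show what that matrix solution looks like in practice, here is a finite-difference sketch for the flat-box case. The units (ħ = m = 1, box width 1) are my assumption for simplicity; the computed ground-state energy can be checked against the exact (nπ)2/2 levels:

```python
# Matrix solution of equation 5 for a particle in a flat box (V = 0 inside,
# psi = 0 at the walls), in units where hbar = m = 1. Exact: E_n = (n*pi)^2/2.
import numpy as np

n = 200                            # interior grid points
dx = 1.0 / (n + 1)
# -(1/2) d2/dx2 discretized as a tridiagonal matrix
H = (np.diag(np.full(n, 1.0 / dx**2))
     + np.diag(np.full(n - 1, -0.5 / dx**2), 1)
     + np.diag(np.full(n - 1, -0.5 / dx**2), -1))

E = np.linalg.eigvalsh(H)          # quantized energies, lowest first
print(E[0], np.pi**2 / 2)          # ground state vs the exact value
```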

When you detect a material in some spot, you can imagine that the space-function ψ collapses, but even that isn’t clear, as you can never know the position and velocity of a thing simultaneously, so it doesn’t collapse all that much. And as for what the stuff is that diffuses and has concentration ψ, no one knows, but it behaves like a stuff. And as to why it diffuses, perhaps it’s jiggled by unseen photons. I don’t know if this is what happens, but it’s a way I often choose to imagine reality — a moving, unseen material with real and imaginary (spiritual?) parts, whose concentration, ψ, is related to experience, but not directly experienced.

This is not the only way the equations can be rearranged. Another way of thinking of things is as the sum of path integrals — an approach that appears to me as a many-worlds version, with fixed points in time (another Dr. Who feature). In this view, every object takes every path possible between these points, and reality is the sum of all the versions, including some that have time reversals. Richard Feynman explains this path integral approach here. If it doesn’t make more sense than my version, that’s OK. There is no version of the quantum equations that will make total, rational sense. All the true ones are mathematically equivalent — totally equal, but differing in “meaning”. That is, if you were to impose meaning on the math terms, the meaning would be totally different. That’s not to say that all explanations are equally valid — most versions are totally wrong, but there are many equally valid math versions to fit many equally valid religious or philosophic world views. The various religions, I think, are uncomfortable with having so many completely different views being totally equal because (as I understand it) each wants exclusive ownership of truth. Since this is never so for math, I claim religion is the opposite of science. Religion is trying to find The Meaning of life, and science is trying to match experiential truth — and ideally useful truth; knowing the meaning of life isn’t that useful in a knife fight.

Dr. Robert E. Buxbaum, July 9, 2014. If nothing else, you now perhaps understand Dr. Who more than you did previously. If you liked this, see here for a view of political happiness in terms of the thermodynamics of free-energy minimization.

If hot air rises, why is it cold on mountain-tops?

This is a child’s question that’s rarely answered to anyone’s satisfaction. To answer it well requires college-level science, and by college the child has usually been dissuaded from asking anything scientific that would likely embarrass the teacher — which is to say, from asking most anything. By a good answer, I mean here one that provides both a mathematical, checkable prediction of the temperature you’d expect to find on mountain tops, and one that also gives a feel for why it should be so. I’ll try to provide this here, as previously when explaining “why is the sky blue.” A word of warning: real science involves mathematics, something that’s often left behind, perhaps in an effort to build self-esteem. If I do a poor job, please text me back: “if hot air rises, what’s keeping you down?”

As a touchy-feely answer, please note that all materials have internal energy. It’s generally associated with the kinetic energy plus potential energy of the molecules. It enters whenever a material is heated or has work done on it, and for gases, to good approximation, it equals the heat capacity of the gas times its temperature. For air, this is about 7 cal/mol°K times the temperature in degrees Kelvin. Average sea-level air is taken to be at 1 atm, or 101,325 Pascals, and 15.02°C, or 288.15°K; the internal energy of this air is thus 288.15 x 7 = 2017 cal/mol ≈ 8440 J/mol. The internal energy of the air will decrease as the air rises, and the temperature drops, for reasons I will explain below. Most diatomic gases have a heat capacity of 7 cal/mol°K, a fact that is only explained by quantum mechanics; if not for quantum mechanics, the heat capacities of diatomic gases would be about 9 cal/mol°K.

Let’s consider a volume of this air at this standard condition, and imagine that it is held within a weightless balloon, or plastic bag. As we pull that air up, by pulling up the bag, the bag starts to expand because the pressure is lower at high altitude (air pressure is just the weight of the air above). No heat is exchanged with the surrounding air because our air will always be about as warm as its surroundings — or, if you like, you can imagine that the weightless balloon prevents it. In either case, the molecules lose energy as the bag expands because they always collide with an outwardly moving wall. Alternately, you can say that the air in the bag is doing work on the exterior air — expansion is work — but we are putting no work into the air, as it takes no work to lift this air. The buoyancy of the air in our balloon is always about that of the surrounding air, or so we’ll assume for now.

A classic, difficult way to calculate the temperature change with altitude is to calculate the work being done by the air in the rising balloon. Work done is force times distance, w = ∫f dz, and this work should equal the effective cooling, since heat and work are interchangeable. There’s an integral sign here to account for the fact that force is proportional to pressure, and the air pressure decreases as the balloon goes up. We now note that w = ∫f dz = -∫P dV, because pressure, P, is force per unit area, and volume, V, is area times distance. The minus sign is because the work is being done by the air, not done on the air — it involves a loss of internal energy. Sorry to say, the temperature and pressure of the air keep changing with volume and altitude, so it’s hard to solve the integral, but there is a simple approach based on entropy, S.

Les Droites Mountain, in the Alps, near the intersection of France, Italy and Switzerland, is 4000 m tall. The top is generally snow-covered.


I discussed entropy last month, and showed it was a property of state, and further, that for any reversible path, ∆S= (Q/T)rev. That is, the entropy change for any reversible process equals the heat that enters divided by the temperature. Now, we expect the balloon rise is reversible, and since we’ve assumed no heat transfer, Q = 0. We thus expect that the entropy of air will be the same at all altitudes. Now entropy has two parts, a temperature part, Cp ln T2/T1 and a pressure part, R ln P2/P1. If the total ∆S=0 these two parts will exactly cancel.

Consider that at 4000m, the height of Les Droites, a mountain in the Mont Blanc range, the typical pressure is 61,660 Pa, about 60.85% of sea level pressure (101325 Pa). If the air were reduced to this pressure at constant temperature (∆S)T = -R ln P2/P1 where R is the gas constant, about 2 cal/mol°K, and P2/P1 = .6085; (∆S)T = -2 ln .6085. Since the total entropy change is zero, this part must equal Cp ln T2/T1 where Cp is the heat capacity of air at constant pressure, about 7 cal/mol°K for all diatomic gases, and T1 and T2 are the temperatures (Kelvin) of the air at sea level and 4000 m. (These equations are derived in most thermodynamics texts. The short version is that the entropy change from compression at constant T equals the work at constant temperature divided by T,  ∫P/TdV=  ∫R/V dV = R ln V2/V1= -R ln P2/P1. Similarly the entropy change at constant pressure = ∫dQ/T where dQ = Cp dT. This component of entropy is thus ∫dQ/T = Cp ∫dT/T = Cp ln T2/T1.) Setting the sum to equal zero, we can say that Cp ln T2/T1 =R ln .6085, or that 

T2 = T1 (0.6085)^(R/Cp)

T2 = T1 (0.6085)^(2/7), where 0.6085 is the pressure ratio at 4000 m; for air and most diatomic gases, R/Cp = 2/7 to very good approximation, matching the prediction from quantum mechanics.

From the above, we calculate T2 = 288.15 x .8676 = 250.0°K, or -23.15 °C. This is cold enough to provide snow  on Les Droites nearly year round, and it’s pretty accurate. The typical temperature at 4000 m is 262.17 K (-11°C). That’s 26°C colder than at sea-level, and only 12°C warmer than we’d predicted.
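The arithmetic above is compact enough to check in a few lines (a sketch using the numbers from this post):

```python
# Iso-entropic estimate of the temperature at 4000 m: T2 = T1*(P2/P1)^(R/Cp).
T1 = 288.15                       # sea-level temperature, K
p_ratio = 61660.0 / 101325.0      # pressure at 4000 m over sea-level pressure
T2 = T1 * p_ratio ** (2.0 / 7.0)  # R/Cp = 2/7 for air
print(round(T2, 1), round(T2 - 273.15, 1))   # about 250 K, i.e. about -23°C
```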

There are three weak assumptions behind the 12°C error in our prediction: (1) that the air that rises is no hotter than the air that does not, (2) that the air is not heated by radiation from the sun or earth, and (3) that there is no heat exchange with the surrounding air, e.g. from rain or snow formation. The last of these errors is thought to be the largest, but it’s still not large enough to cause serious problems.

Snow on Kilimanjaro, Tanzania, 2013. If global warming models were true, the ground should be 4°C warmer than 100 years ago, the air at this altitude about 7°C (12°F) warmer, and the snow should be gone, or mostly gone.

You can use this approach, with different exponents, to estimate the temperature at the center of Jupiter, or at the center of neutron stars. This iso-entropic calculation is the model that’s used here, though it’s understood that it may be off by a fair percentage. You can also ask questions about global warming: increased CO2 at this level is supposed to cause extreme heating at 4000 m, enough to heat the earth below by 4°C/century or more. As it happens, the temperature and snow cover on Les Droites and other Alp ski areas have been studied carefully for many decades; they are not warming as best we can tell (here’s a discussion). By all rights, Mt Blanc should be Mt Green by now; no one knows why. The earth too seems to have stopped warming. My theory: clouds.

Robert Buxbaum, May 10, 2014. Science requires you check your theory for internal and external weakness. Here’s why the sky is blue, not green.

Getting rid of hydrogen

Though most of my company’s business is making hydrogen or purifying it, or consulting about it, we also provide sorbers and membranes that allow a customer to get rid of unwanted hydrogen, or remove it from a space where it is not wanted. A common example is a customer who has a battery system for long-term operation under the sea, or in space. The battery or the metal containment is then found to degas hydrogen, perhaps from a corrosion reaction. The hydrogen may interfere with his electronics, or the customer fears it will reach explosive levels. In one case the customer’s system was monitoring deep oil wells and hydrogen from the well was messing up its fiber optic communications.


Pd-coated niobium screws used to getter hydrogen from electronic packages.

For many of these problems, the simplest solution is an organic hydrogen getter of palladium catalyst plus a labile unsaturated hydrocarbon, e.g. buckminsterfullerene. These hydrogen getters are effective in air or inert gas at temperatures between about -20°C and 150°C. When used in an inert gas, the organic is hydrogenated, so there is a finite amount of removal per gram of sorber. When used in air, the catalyst promotes the water-forming reaction, and thus there is a lot more hydrogen removal. Depending on the organic, we can provide gettering to lower or higher temperatures. We have a recent patent on an organo-palladium gel that operates to 300°C, suitable for down-well hydrogen removal.

At high temperatures, generally above 100°C, we suggest an inorganic hydrogen remover, e.g. our platinum-ceria catalyst. This material is suitable for hydrogen removal from air, including polluted air like that in radioactive waste storage areas. Platinum catalyst works long-term at temperatures between about 0°C and 600°C. The catalyst-sorber also works without air, reducing Ce2O3 to CeO and converting hydrogen irreversibly to water (H2O). As with the organo-Pd getters, there is a finite amount of hydrogen removal per gram when these materials are used in a sealed environment.

Low temperature, Pd-grey coated, Pd-Ag membranes made for the space shuttle to remove hydrogen from the drinking water at room temperature. The water came from the fuel cells.


Another high-temperature hydrogen-removal option is metallic getters, e.g. yttrium or vanadium-titanium alloy. These metals require temperatures in excess of 100°C to be effective, and typically do not work well in air. They are best suited for removing hydrogen from a vacuum or inert gas, converting it to a metallic hydride. The thermodynamics of hydriding is such that, depending on the material, these getters can extract hydrogen even at temperatures up to 700°C, and at very low hydrogen pressures, below 10-9 torr. For operation in air or at 100-400°C we typically provide these getters coated with palladium to increase the hydrogen sorption rate. A fairly popular product is palladium-coated niobium screws, 4-40 x 1/4″. Each screw will remove over 2,000 std cc of hydrogen at temperatures up to 400°C. We also provide oxygen, nitrogen and water getters. They work on the same principle, but form metallic oxides or nitrides instead of hydrides.

Our last, and highest-end, hydrogen-removal option is to provide metallic membranes. These don’t remove the hydrogen as such, but transfer it elsewhere. We’ve provided these for the space shuttle, and to the nuclear industry, so that hydrogen can be vented from nuclear reactors before it has a chance to build up and cause damage or interfere with heat transfer. Because nothing is used up, these membranes work essentially forever. The Fukushima reactor explosions were from corrosion-produced hydrogen that had no acceptable way to vent.

Please contact us for more information, e.g. by phone at 248-545-0155, or check out the various sorbers on our web-site. Robert Buxbaum, May 5, 2014.