Category Archives: Physics

Einstein’s theory of diffusion in liquids, and my extension.

In 1905 and 1908, Einstein developed two formulations for the diffusion of a small particle in a liquid. As a side-benefit of the first derivation, he demonstrated the visible existence of molecules, a remarkable piece of work. In the second formulation, he derived the same result using non-equilibrium thermodynamics, something he seems to have developed on the spot. I'll give a brief version of the second derivation, and then I'll show off my own extension. It's one of my proudest intellectual achievements.

But first a little background to the problem. In 1827, a plant biologist, Robert Brown, examined pollen under a microscope and noticed that it moved in a jerky manner. He gave this "Brownian motion" the obvious explanation: that the pollen was alive and swimming. Later, it was observed that the pollen moved faster in acetone. The obvious explanation: pollen doesn't like acetone, and thus swims faster. But the pollen never stopped, and it was noticed that cigar smoke also swam. Was cigar smoke alive too?

Einstein's first version of an answer, in 1905, was to consider that the liquid was composed of atoms whose energies followed a Boltzmann distribution, with an average of E = ½kT in every direction, where k is the Boltzmann constant, and k = R/N. That is, Boltzmann's constant equals the gas constant, R, divided by Avogadro's number, N. He was able to show that the many interactions with the molecules should cause the pollen to take a random, jerky walk, as seen, and that the velocity should be faster the less viscous the solvent, or the smaller the length-scale of observation. Einstein applied the Stokes drag equation to the solute; the drag force per particle was f = -6πrvη, where r is the radius of the solute particle, v is the velocity, and η is the solution viscosity. Using some math, he was able to show that the diffusivity of the solute should be D = kT/6πrη. This is called the Stokes-Einstein equation.
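For concreteness, here is a small numerical sketch of the Stokes-Einstein result. The solute radius and viscosity are illustrative assumptions (roughly a small molecule in water at room temperature), not values taken from the text:

```python
# A minimal numerical check of the Stokes-Einstein result, D = kT/(6*pi*r*eta).
# The particle radius and solvent viscosity below are illustrative assumptions.
import math

k_B = 1.380649e-23   # J/K, Boltzmann constant
T = 298.15           # K, room temperature
eta = 1.0e-3         # Pa*s, viscosity of water at ~25 C
r = 0.5e-9           # m, radius of a small solute (assumed)

D = k_B * T / (6 * math.pi * r * eta)
print(f"D = {D:.2e} m^2/s")   # about 4.4e-10 m^2/s, typical of a small solute in water
```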

In 1908, a French physicist, Jean Baptiste Perrin, confirmed Einstein's predictions, winning the Nobel prize for his work. I will now show the 1908 Einstein derivation, and hope to get to my extension by the end of this post.

Consider the molar Gibbs free energy of a solvent, water say. The mole fraction of water is x and that of a very dilute solute is y, with y << 1. For this nearly pure water, you can show that µ = µ° + RT ln x = µ° + RT ln (1-y) ≈ µ° - RTy.

Now, take a derivative with respect to some linear direction, z. Normally this is considered illegal, since thermodynamics is normally understood to apply to equilibrium systems only. Still, Einstein took the derivative, and claimed it was legitimate at near-equilibrium, pseudo-equilibrium. You can calculate the force on the solvent, the force on the water generated by a concentration gradient: Fw = dµ/dz = -RT dy/dz.

Now the force on each atom of water equals -RT/N dy/dz = -kT dy/dz.

Now, let's call f the force on each atom of solute. For dilute solutions, this force is far higher than the above: f = -kT/y dy/dz. That is, for a given concentration gradient, dy/dz, the force on each solute atom is higher than on each solvent atom in inverse proportion to the mole fraction, y.

For small spheres and low velocities, the flow is laminar and the drag force is f = 6πrvη.

Now calculate the speed of each solute atom. It is proportional to the force on the atom by the same relationship as appeared above: f = 6πrvη, or v = f/6πrη. Inserting our equation for f = -kT/y dy/dz, we find that the velocity of the average solute molecule is

v = -kT/6πrηy dy/dz.

Let's say that the molar concentration of solvent is C, so that, for water, C will equal about 1/18 mols/cc. The molar concentration of the dilute solute will then equal Cy. We find that the molar flux of material, the diffusive flux, equals Cyv, or that

Molar flux (mols/cm²/s) = Cy (-kT/6πrηy dy/dz) = -kTC/6πrη dy/dz = -kT/6πrη dCy/dz,

where Cy is the molar concentration of solute per volume.

Classical engineering comes to a similar equation with a property called diffusivity, so that

Molar flux of y (mols y/cm²/s) = -D dCy/dz, where D is an experimentally determined constant. We thus now have a prediction for D:

D = kT/6πrη.

This again is the Stokes-Einstein equation, the same as above but derived with far less math. I was fascinated, but felt sure there was something wrong here. Macroscopic viscosity was not the same as microscopic. I just could not think of a great case where there was much difference until I realized that, in polymer solutions, there was a big difference.

Polymer solutions, I reasoned, had large viscosities, but a diffusing solute probably didn't feel the liquid as anywhere near that viscous. The viscometer measured at a larger distance scale, more similar to that of the polymer coil entanglement length, while a small solute might dart between the polymer chains like a rabbit among trees. I applied an equation for heat transfer in a dispersion that JK Maxwell had derived,

κeff = κl [2κl + κp – 2φ(κl – κp)] / [2κl + κp + φ(κl – κp)], where κeff is the modified effective thermal conductivity (or diffusivity, in my case), κl and κp are the thermal conductivities of the liquid and the particles respectively, and φ is the volume fraction of particles.

To convert this to diffusion, I replaced κl by Dl, and κp by Dp where

Dl = kT/6πrηl

and Dp = kT/6πrη.

In the above ηl is the viscosity of the pure, liquid solvent.

The chair of the department, Don Anderson, didn't believe my equation, but agreed to help test it. A student named Kit Yam ran experiments on a variety of polymer solutions, and it turned out that the equation worked really well up to high polymer concentrations and high viscosities.

As a simple, first approximation to the above, you can take Dp = 0, since it's much smaller than Dl, and you can take Dl = kT/6πrηl as above. The new, first-order approximation is:

D = kT/6πrηl (1 – 3φ/2).
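As an illustration of how this model works, here is a rough Python sketch. The Maxwell-type effective-medium form below reduces to the first-order limit above when Dp = 0; the solute radius, viscosities, and polymer fraction are assumptions for illustration only, not the values from the experiments:

```python
# A sketch of the polymer-solution diffusivity estimate described above:
# Maxwell's dispersion formula with conductivities replaced by diffusivities.
# The radius, viscosities, and polymer fraction are illustrative assumptions.
import math

k_B = 1.380649e-23   # J/K
T = 298.15           # K

def stokes_einstein(r, eta):
    """D = kT/(6*pi*r*eta)."""
    return k_B * T / (6 * math.pi * r * eta)

def maxwell_effective(D_l, D_p, phi):
    """Maxwell's effective-medium form for a dilute dispersion of spheres."""
    return D_l * (2*D_l + D_p - 2*phi*(D_l - D_p)) / (2*D_l + D_p + phi*(D_l - D_p))

r = 0.5e-9        # m, small-solute radius (assumed)
eta_l = 1.0e-3    # Pa*s, pure solvent (water)
eta_bulk = 1.0    # Pa*s, macroscopic viscosity of the polymer solution (assumed)
phi = 0.10        # volume fraction of polymer (assumed)

D_l = stokes_einstein(r, eta_l)
D_p = stokes_einstein(r, eta_bulk)    # tiny; setting it to zero changes little
D_eff = maxwell_effective(D_l, D_p, phi)
D_first_order = D_l * (1 - 1.5 * phi)   # the simple approximation with Dp = 0

print(f"D_eff       = {D_eff:.2e} m^2/s")
print(f"first order = {D_first_order:.2e} m^2/s")
```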

We published in Science. That is, I published along with the two colleagues who tested the idea and proved the theory right, or at least useful. The reference is Yam, K., Anderson, D., Buxbaum, R. E., Science 240 (1988) p. 330 ff., "Diffusion of Small Solutes in Polymer-Containing Solutions". This result is one of my proudest achievements.

R.E. Buxbaum, March 20, 2024

Relativity’s twin paradox explained, and why time is at right angles to space.

One of the most famous paradoxes of physics is explained wrong — always. It makes people feel good to think they understand it, but the explanation is wrong and confusing, and it drives young physicists in a wrong direction. The basic paradox is an outgrowth of the special relativity prediction that time moves slower if you move faster.

Thus, if you entered a spaceship and were to travel to a distant star at 99.5% the speed of light, turn around, and get back here 30 years later, you would have aged far less than 30 years. You and everyone else on the space ship would have aged three years, 1/10 as much as someone on earth.

The paradox part, not that the above isn't weird enough by itself, is that the person in the spaceship will imagine that he (or she) is standing still, and that everyone on earth is moving away at 99.5% the speed of light. Thus, the person on the spaceship should expect to find that the people on earth will age slower. That is, the person on the space ship should return from his (or her) three-year journey expecting to find that the people on earth have only aged 0.3 years. Obviously, only one of these expectations can be right, but it's not clear which (it's the first one), nor is it clear why.

The wrong explanation appears in an early popular book, "Mr Tompkins in Wonderland", by physicist George Gamow. The book was written shortly after relativity was proposed, and involves a Mr Tompkins who falls asleep in a physics lecture. Mr. Tompkins dreams he's riding on a train going near the speed of light, and finds that things are shorter and time is going slower. He then asks the paradox question to the conductor, who admits he doesn't quite know how it works (perhaps Gamow didn't), but that "it has something to do with the brakeman." That sounds like Gamow is saying the explanation has to do with deceleration at the turn-around, or with general relativity in general, implying gravity could have a similarly large effect. It doesn't work that way, and the effect of 1G gravity is small, but everyone seems content to explain the paradox this way. This is particularly unfortunate because those content to do so include physicists, clouding an already cloudy issue.

In the early days of physics, physicists tried to explain things to the lay audience with a little legitimate math. Gamow did this, as did Einstein, Planck, Feynman, and most others. I try to do this too. Nowadays, physicists have removed the math and added gobbledygook. The one exception here is the cinematographers of Star Wars. They alone show the explanation correctly.

The explanation does not have to do with general relativity or the acceleration at the end of the journey (the brakeman). Instead of working through some acceleration or general relativity effect, the twin paradox works with simple, special relativity: all space contracts for the duration of the trip, and everything in it gets shorter. The person in this spaceship will see the distance to the star shrink by 90%. Traveling there thus takes 1/10th the time because the distance is 1/10th. There and back at 99.5% the speed of light takes about 3 years.

The equation for time contraction is: t' = (x°/v) √(1-(v/c)²) = t° √(1-(v/c)²), where t' is the time in the spaceship, v is the speed, x° is the distance traveled (as measured from earth), and c is the speed of light. For v/c = 0.995, we find that √(1-(v/c)²) is 0.1. We thus find that t' = 0.1 t°. When dealing with the twin paradox, it's better to say that x' = 0.1 x°, where x' is the distance to the star as seen from the spaceship. In either case, when the people on the space ship accelerate, they see the distance in front of them shrink, as shown in Star Wars, below.
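A quick numerical check of these numbers, using the 99.5%-of-light-speed figure above:

```python
# Time dilation / length contraction factor sqrt(1 - (v/c)^2) for the trip above.
import math

def contraction_factor(v_over_c):
    """sqrt(1 - (v/c)^2), the factor by which times and distances shrink."""
    return math.sqrt(1 - v_over_c**2)

factor = contraction_factor(0.995)
earth_round_trip_years = 30
ship_years = earth_round_trip_years * factor

print(f"contraction factor: {factor:.3f}")           # about 0.10
print(f"ship-board time:    {ship_years:.1f} years")  # about 3 years
```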

Star Wars. The Millennium Falcon jumps to light speed, and beyond.

That time was at right angles to space was a comment in one of Einstein's popular articles and books; he wrote several, all with some minimal mathematics. Current science has no math, and a lot of politics, IMHO, and thus is not science.

He showed that time and space are at right angles by analogy to Pythagoras. Pythagoras showed that the distance on a diagonal, d, between two points at right angles, x and y, is d = √(x² + y²). Another way of saying this is d² = x² + y². The relationship is similar for relativistic distances. To explain the twin paradox, we find that the square of the effective distance is x'² = x°² (1 – (v/c)²) = x°² – (x°v)²/c² = x°² – (x°v/c)². Here, x°² is the square of the original distance, and since x° = vt°, the subtracted term is essentially (ct°)² for speeds near c. This term behaves like the square of an imaginary distance that is at right angles to the first. It comes out that co-frame time, t°, behaves as if it were a distance with a scale factor of ic.

For some reason, people today read books on science by non-scientist 'explainers.' These books have no math, and I guess they sell. Publishers think they are helping democratize science, perhaps. You are better off reading the original thinkers, IMHO.

Robert Buxbaum, July 16, 2023. In his autobiography, Einstein claimed to be a fan of the scientist-philosopher Ernst Mach. Mach derived the speed of sound from a mathematical analysis of thermodynamics. Einstein followed, considering that it must be equally true to consider an empty box traveling in space to be one that carries its emptiness with it, as to assume that fresh emptiness comes in at one end and leaves by the other. If you set the two to be equal mathematically, you conclude that both time and space vary with velocity. Similar analysis will show that atoms are real, and that energy must travel in packets, quanta. Einstein also did fun work on the curvature of rivers, and was a fan of this sail ship design. Here is some more on the scientific method.

Dark matter: why our galaxy still has its arms

Our galaxy may have two arms, or perhaps four. It was thought to be four until 2008, when it was reduced to two. Then, in 2015, it was expanded again to four arms, but recent research suggests it’s only two again. About 70% of galaxies have arms, easily counted from the outside, as in the picture below. Apparently it’s hard to get a good view from the inside.

Four armed, spiral galaxy, NGC 2008. There is a debate over whether our galaxy looks like this, or if there are only two arms. Over 70% of all galaxies are spiral galaxies. 

Logically speaking, we should not expect a galaxy to have arms at all. For a galaxy to have arms, it must rotate as a unit. Otherwise, even if the galaxy had arms when it formed, it would lose them by the time the outer rim rotated even once. As it happens we know the speed of rotation and age of galaxies; they’ve all rotated 10 to 50 times since they formed.

For stable rotation, the rotational acceleration must match the force of gravity, and this should decrease with distance from the massive center. Thus, we'd expect that the stars should circle much faster the closer they are to the center of the galaxy. We see that Mercury circles the sun much faster than we do, and that we circle much faster than the outer planets. If stars circled the galactic core this way, any arm structure would be long gone. We see that the galactic arms are stable, and to explain it, we've proposed the existence of lots of unseen, dark matter. This matter has to have some peculiar properties, behaving as a light gas that doesn't spin with the rest of the galaxy, or absorb light, or reflect it. Some years ago, I came to believe that there was only one gas distribution that fit, and challenged folks to figure out the distribution.
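To make the "Mercury circles faster" point concrete, here is a small sketch of the gravity-balance speed, v = √(GM/r), for planets around the sun. The point is the 1/√r falloff that galactic rotation curves don't show:

```python
# For orbits around a central mass, gravity balance gives v = sqrt(G*M/r),
# so outer bodies should move slower. Numbers here use the Sun and planets.
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
AU = 1.496e11      # m

def circular_speed(r_m, M=M_SUN):
    """Orbital speed when gravity supplies the centripetal acceleration."""
    return math.sqrt(G * M / r_m)

for name, r_au in [("Mercury", 0.39), ("Earth", 1.0), ("Neptune", 30.1)]:
    print(f"{name:8s}: {circular_speed(r_au * AU) / 1000:5.1f} km/s")
# Galactic rotation curves are roughly flat instead -- the puzzle that
# dark matter (or modified gravity) is invoked to explain.
```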

The mass of the particles that made up this gas has to be very light, about 10⁻⁷ eV, about 2 x 10¹² times lighter than an electron, and very slippery. Some researchers had posited large, dark rocks, but I preferred to imagine a particle called the axion, and I expected it would be found soon. The particle mass had to be about this, or it would shrink down to the center of the galaxy, or start to spin, or fill the universe. In any of these cases, galaxies would not be stable. The problem is, we've been looking for years, and we have not seen any particle like this. What's more, continued work on the structure of matter suggests that no such particle should exist. At this point, galactic stability is a bigger mystery than it was 40 years ago.

So how to explain galactic stability if there is no axion? One thought, from Mordechai Milgrom, is that gravity does not work as we thought. This is an annoying explanation: it involves a complex revision of General Relativity, a beautiful theory that seems to be generally valid. Another, more recent explanation is that the dark matter is regular matter that somehow became an entangled superfluid, despite the low density and relatively warm temperatures of interstellar space. This has been proposed by Justin Khoury, here. Either theory would explain the slipperiness, and the fact that the gas does not interact with light, but the details don't quite work. For one, I'd still think that the entangled particle mass would have to be quite light; maybe a neutrino would fit (entangled neutrinos?). Superfluids don't usually exist at space temperatures and pressures, long distances (light years) should preclude entanglement, and neutrinos don't seem to interact at all.

Sabine Hossenfelder suggests a combination of modified gravity and superfluidity. Some version of this might fit observations better, but it doubles the amount of new physics required. Sabine does a good science video blog, BTW, with humor and less math. She doesn't believe in free will or religion, or entropy. By her, the Big Bang was caused by a mystery particle called an inflaton that creates mass and energy from nothing. She claims that the worst thing you can do in terms of resource depletion is have children, and seems to believe religious education is child abuse. Some of her views I agree with; with many, I do not. I think entropy is fundamental, and I think people are good. Also, I see no advantage in saying "In the beginning an inflaton created the heavens and the earth", but there you go. It's not like I know what dark matter is any better than she does.

There are some 200 billion galaxies, generally with 100 billion stars each. Our galaxy is about 150,000 light years across, 1.5 x 10¹⁸ km. It appears to behave, more or less, as a solid disk, having rotated about 15 full turns since its formation, 10 billion years ago. The speed at the edge is thus about π x 1.5 x 10¹⁸ km / 3 x 10¹⁶ s = 160 km/s. That's not relativistic, but is 16 times the speed of our fastest rockets. The vast majority of the mass of our galaxy would have to be dark matter, with relatively little between galaxies. Go figure.
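Here is that edge-speed arithmetic as a quick script. It uses the same diameter, turn count, and age as above; depending on rounding, it lands in the same ballpark as the ~160 km/s estimate, and either way it is far from relativistic:

```python
# Back-of-envelope rim speed: a ~150,000 light-year disk that has made
# about 15 turns in about 10 billion years.
import math

LY_KM = 9.46e12                         # km per light year
diameter_km = 150_000 * LY_KM           # ~1.4e18 km
turns = 15
seconds_per_turn = 10e9 * 3.156e7 / turns   # ~2e16 s per turn

rim_speed = math.pi * diameter_km / seconds_per_turn
print(f"rim speed ~ {rim_speed:.0f} km/s")  # roughly 200 km/s, same order as the ~160 km/s above
```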

Robert Buxbaum, May 24, 2023. I’m a chemical engineer, PhD, but studied some physics and philosophy.

Rotating sail ships and why your curve ball doesn’t curve.

The Flettner-sail ship, Barbara, 1926.

Sailing ships are wonderfully economic and non-polluting. They have unlimited range because they use virtually no fuel, but they tend to be slow, about 5-12 knots, about half as fast as Diesel-powered ships, and they can be stranded for weeks if the wind dies. Classic sailing ships also require a lot of manpower: many skilled sailors to adjust the sails. What's wanted is an easily manned, economical, hybrid ship: one that's powered by Diesel when the wind is light, and by a simple sail system when the wind blows. Anton Flettner invented an easily manned sail and built two ships with it. The Barbara above used a 530 hp Diesel and got additional thrust, about an additional 500 hp worth, from three rotating, cylindrical sails. The rotating sails produced thrust via the same Magnus force that makes a curve ball curve. Barbara went at 9 knots without the wind, or about 12.5 knots when the wind blew. Einstein thought it one of the most brilliant ideas he'd seen.

Force diagram of Flettner rotor (Lele & Rao, 2017)

The source of the force can be understood with help of the figure at left and the graph below. When a simple cylinder sits in the wind with no spin, α=0, the wind force is essentially drag, and is 1/2 the wind speed squared, times the cross-sectional area of the cylinder, D×h, and the density of air, times a drag coefficient, CD, that is about 1 for a non-spinning cylinder. More explicitly, FD = CD Dhρv²/2. As the figure at right shows, there is a sort-of lift in the form of sustained vibrations at zero spin, α=0. Vibrations like this are useless for propulsion, and can be damaging to the sail. In baseball, such vibrations are the reason knuckle balls fly erratically. If you spin the cylindrical mast at α=2.1, that is at a speed where the fast surface moves with the wind at 2.1 times the wind speed while the other side moves into the wind, there is more force on the side moving into the wind (see figure above) and the ship can be propelled forward (or backward if you reverse the spin direction). Significantly, at α=2.1, you get 6 times as much force as the expected drag, and you no longer get vibrations. FL = CL Dhρv²/2, and CL = 6 at this rotation speed.
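As a rough illustration of these force formulas, here is a sketch with made-up rotor dimensions and wind speed (assumptions for the example, not the Barbara's actual specifications):

```python
# Drag and lift on a Flettner rotor, F = C * D * h * rho * v^2 / 2.
# Rotor size and wind speed below are illustrative assumptions.
RHO_AIR = 1.2   # kg/m^3, air density near sea level

def rotor_force(C, diameter_m, height_m, wind_m_s, rho=RHO_AIR):
    """Force (newtons) on a cylinder of given diameter and height in a wind."""
    return C * diameter_m * height_m * rho * wind_m_s**2 / 2

wind = 10.0   # m/s, roughly a 20 knot wind
drag = rotor_force(1.0, 2.8, 17.0, wind)   # C_D ~ 1 for a non-spinning cylinder
lift = rotor_force(6.0, 2.8, 17.0, wind)   # C_L ~ 6 at alpha = 2.1

print(f"drag (no spin):   {drag / 1000:.1f} kN")
print(f"lift (alpha=2.1): {lift / 1000:.1f} kN")
```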

Numerical lift coefficients versus time (seconds) for different ratios of surface speed to wind speed, α. (Mittal & Kumar 2003), Journal of Fluid Mechanics.

At this rotation speed, α=2.1, this force will be enough to drive a ship so long as the wind is reasonably strong, 15-30 knots, and the ship does not move faster than the wind. The driving force is always at right angles to the perceived wind, called the "fair wind", and the fair wind moves towards the front as the ship speed increases. If you spin the cylinder at 3 to 4 times the wind speed, the lift coefficient increases to between 10 and 18. This drives a ship with yet more force. You need somewhat more power to turn the sails, but you are also further from vibrations. Flettner considered α=3.5 optimal. Higher rotation speeds are possible, but they require more rotation power (rotation power goes as ω²), and if you go beyond α=4.3, the vibrations return. Controlling the speed is somewhat difficult but important. Flettner sails were no longer used by the 1930s, when fuel became cheaper.

In the early 1980s, the famous underwater explorer Jacques Cousteau revived the Flettner sail for his exploratory ship, the Alcyone. He used light-weight aluminum sails and an electric motor for rotation, instead of Diesel as on the Barbara. He claimed that the ship drew more than half of its power from the wind, and claimed that, because of computer control, it could sail with no crew. This latter claim was likely bragging. Even with today's computer systems, people are needed as soon as something goes wrong. Still, the energy savings were impressive enough that other ship owners took notice. In recent years, several ship-owners have put Flettner sails on cargo ships as a retrofit. This is not an ideal use since cargo ships tend to go fast. Still, it's reported that these ships get about 20% of their propulsion from wind power, not an insignificant amount.

And this gets us to the reason your curve ball does not curve: you're not spinning it fast enough. You want the ball to spin at a higher rate than you get just by rolling the ball off your fingers. If you only do that, α = 1 and you get relatively little sideways force. To get the ball to really curve, you have to snap your wrist hard, aiming for α=1.5 or so. As another approach, you can aim for a knuckle ball, achieved with zero rotation. At α=0, the ball will oscillate and your pitch will be nearly impossible to hit, or catch. Good luck.

Robert Buxbaum, March 22, 2023. There are also various Flettner airplane designs where horizontal, cylindrical "wings" rotate to provide lift, and power too in some versions. The aim is high lift with short wings and a relatively low power draw. So far, these planes are less efficient and slower than a normal helicopter.

Hydrogen transport in metallic membranes

The main products of my company, REB Research, involve metallic membranes, often palladium-based, that provide 100% selective hydrogen filtering or long-term hydrogen storage. One way to understand why these metallic membranes provide 100% selectivity has to do with the fact that metallic atoms are much bigger than hydrogen ions, with absolutely regular, small spaces between them that fit hydrogen and nothing else.

Palladium atoms are essentially spheres. In the metallic form, the atoms pack in an FCC (face-centered cubic) structure with a radius of 1.375 Å. There is a cloud of free electrons that provides conductivity and heat transfer, but as far as the structure of the metal goes, there is only a tiny space of 0.426 Å between the atoms, see below. This hole is too small for any molecule, or any inert gas. In the gas phase, hydrogen molecules are about 1.06 Å in diameter, and other molecules are bigger. Hydrogen atoms shrink when inside a metal, though, to 0.3 to 0.4 Å, just small enough to fit through the holes.
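One way to check that 0.426 Å figure is to compute the opening between three touching palladium spheres, the triangular window a hydrogen atom must squeeze through. This is a minimal sketch, assuming that is the geometry meant above:

```python
# The inscribed circle in the gap between three mutually touching spheres of
# radius r has radius (2/sqrt(3) - 1) * r. For palladium, r = 1.375 Angstrom.
import math

r_pd = 1.375   # Angstrom, metallic radius of palladium

window_radius = (2 / math.sqrt(3) - 1) * r_pd
print(f"window diameter = {2 * window_radius:.3f} Angstrom")   # ~0.43 Angstrom
```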

The reason that hydrogen shrinks has to do with its electron leaving to join palladium's conduction cloud. Hydrogen is usually put on the upper left of the periodic table because, in most cases, it behaves as a metal. Like a metal, it reacts with oxygen and chlorine, forming stoichiometric compounds like H2O and HCl. It also behaves like a metal in that it alloys, non-stoichiometrically, with other metals. Not with all metals, but with many, Pd and the transition metals in particular. Metal atoms are a lot bigger than hydrogen, so there is little metallic expansion on alloying. The hydrogen fits in the tiny spaces between atoms. I've previously written about hydrogen transport through transition metals (we provide membranes for this too).

No other atom or molecule fits in the tiny space between palladium atoms. Other atoms and molecules are bigger, 1.5 Å or more in size. This is far too big to fit in a hole 0.426 Å in diameter. The result is that palladium is basically 100% selective to hydrogen. Other metals are too, but palladium is particularly good in that it does not readily oxidize. We sometimes sell transition metal membranes and sorbers, but typically coat the underlying metal with palladium.

We don't typically sell products of pure palladium, by the way. Instead, most of our products use Pd-25%Ag or Pd-Cu. These alloys are slightly cheaper than pure Pd and more stable. Pd-25% silver is also slightly more permeable to hydrogen than pure Pd is — a win-win-win for the alloy.

Robert Buxbaum, January 22, 2023

Fusion advance: LLNL’s small H-bomb, 1.5 lb TNT didn’t destroy the lab.

There was a major advance in nuclear fusion this month at the National Ignition Facility of Lawrence Livermore National Laboratory (LLNL), but the press could not quite figure out what it was. They claimed ignition, and it was not. They claimed that it opened the door to limitless power. It did not. Some heat-energy was produced, but not much: 2.5 MJ was reported. Translated to the English system, that's 600 kCal, about as much heat as in a "Big Mac". That's far less energy than went into the lasers that set the reaction off. The importance wasn't the amount of energy produced, in my opinion; it's that the folks at LLNL fired off a small hydrogen bomb, in house, and survived the explosion. 600 kCal is about the explosive power of 1.5 lb of TNT.
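For those who want the unit conversions behind the Big Mac and TNT comparisons, a two-line check:

```python
# 2.5 MJ of heat expressed in kilocalories and in pounds of TNT equivalent.
MJ = 2.5
kcal = MJ * 1e6 / 4184            # ~600 kcal, about one Big Mac
lb_tnt = MJ / 4.184 * 2.205       # TNT ~ 4.184 MJ/kg; 2.205 lb per kg

print(f"{kcal:.0f} kcal, ~{lb_tnt:.1f} lb TNT equivalent")   # ~600 kcal, ~1.3 lb
```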

Many laser beams converge on a droplet of deuterium-tritium setting off the explosion of a small fraction of the fuel. The explosion had about the power of 1.2 kg of TNT. Drawing from IEEE Spectrum

The process, as reported in the Financial Times, involved "a BB-sized" droplet of hohlraum-enclosed deuterium and tritium. The folks at LLNL fast-cooked this droplet using 100 laser beams with 2.1 MJ total output (see figure), converging on one spot simultaneously. As I understand it, 4.6 MJ came out, 2.5 MJ more than went in. The impressive part is that the delicate lasers survived the event. By comparison, the blast that brought down Pan Am flight 103 over Lockerbie took only 2-3 ounces of explosive, about 70 g. The folks at LLNL say they can do this once per day, something I find impressive.

The New York Times seemed to think this was ignition. It was not. Given the size of a BB, and the density of liquid deuterium-tritium, it would seem the weight of the drop was about 0.022 g. This is not much, but if it were all fused, it would release 12 GJ, the equivalent of about 3 tons of TNT. That the energy released was only 2.5 MJ suggests that only 0.02% of the droplet was fused. It is possible, though unlikely, that the folks at LLNL could have ignited the entire droplet. If they had, the damage from 3 tons of TNT equivalent would certainly have wrecked the facility. And that's part of the problem; to make practical energy, you need to ignite the whole droplet and do it every second or so. That's to say, you have to burn the equivalent of 5000 Big Macs per second.

You also need the droplets to be a lot cheaper than they are. Today, these hohlraum-enclosed capsules cost about $100,000 each. We will need to make them, one per second, for a cost of around $1 for this to make any sort of sense. Not to say that the experiments are useless. This is a great way to test H-bomb designs without destroying the environment. But it's not a practical energy production method. Even ignoring the energy input to the lasers, it is impossible to deal with energy when it comes in the form of huge explosions. In a sense we got unlimited power. Unfortunately, it's in the form of H-bombs.

Robert Buxbaum, January 5, 2023

Of covalent bonds and muon catalyzed cold fusion.

A hydrogen molecule consists of two protons held together by a covalent bond. One way to think of such bonds is to imagine that only one electron is directly involved, as shown below. The bonding electron spends only 1/7 of its time between the protons, making the bond; the other 6/7 of the time the electron shields the two protons, by 3/7 e each, reducing the effective charge of each proton to 4/7 e+.

We see that the two shielded protons will repel each other with a force of FR = Ke (16/49) e²/r², where e is the charge of an electron or proton, r is the distance between the protons (r = 0.74 Å = 0.74×10⁻¹⁰ m), and Ke is Coulomb's electrical constant, Ke ≈ 8.988×10⁹ N·m²·C⁻². The attractive force is calculated similarly, as each proton attracts the central electron by FA = –Ke (4/49) e²/(r/2)². The forces are seen to be in balance; the net force is zero.
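A quick numerical check of this balance, using standard values for e and Ke:

```python
# Two protons with effective charge 4/7 e repel, while each is attracted to
# the 1/7 e of bonding electron sitting halfway between them.
K_E = 8.988e9    # N m^2 / C^2, Coulomb constant
E = 1.602e-19    # C, elementary charge
R = 0.74e-10     # m, H-H bond length

F_repel = K_E * (4/7 * E) * (4/7 * E) / R**2
F_attract = K_E * (4/7 * E) * (1/7 * E) / (R/2)**2

print(f"repulsion:  {F_repel:.3e} N")
print(f"attraction: {F_attract:.3e} N")   # equal, since (4/7)^2 = 4 * (4/49)
```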

It is because of quantum mechanics that the bond is the length that it is. If the atoms were to move closer than r = 0.74 Å, the central electron would be confined to less space and would get more energy, causing it to spend less time between the two protons. With less of an electron between them, FR would be greater than FA and the protons would repel. If the atoms moved further apart than 0.74 Å, a greater fraction of the electron would move to the center, FA would increase, and the atoms would attract. This is a fairly pleasant way to understand why the hydrogen side of all hydrogen covalent bonds is the same length. It's also a nice introduction to muon-catalyzed cold fusion.

Most fusion takes place only at high temperatures: at 100 million °C in a TOKAMAK fusion reactor, or at about 15 million °C in the high pressure interior of the sun. Muon catalyzed fusion creates the equivalent of a much higher pressure, so that fusion occurs at room temperature. The trick to muon catalyzed fusion is to replace one of the electrons with a muon, an unstable, heavy electron particle discovered in 1936. The muon, designated µ-, behaves just like an electron but it has about 207 times the mass. As a result, when it replaces an electron in hydrogen, it forms a covalent bond that is about 1/207th the length of a normal bond. This is the equivalent of extreme pressure. At this closer distance, hydrogen nuclei fuse even at room temperature.

In normal hydrogen, the nuclei are just protons. When they fuse, one of them becomes a neutron. You get a deuteron (a proton-neutron pair), plus an anti-electron (positron), and 1.44 MeV of energy after the anti-electron has annihilated (for more on antimatter, see here). The muon is released most of the time, and can catalyze many more fusion reactions. See figure at right.

While 1.44MeV per reaction is a lot by ordinary standards — roughly one million times more energy than is released per atom when hydrogen is burnt — it’s very little compared to the energy it takes to make a muon. Making a muon takes a minimum of 1000 MeV, and more typically 4000 MeV using current technology. You need to get a lot more energy per muon if this process is to be useful.

You get quite a lot more energy when a muon catalyzes deuterium-deuterium (D-D) fusion. With these reactions, you get 3.3 to 4 MeV worth of energy per fusion, and the muon will be ejected with enough force to support about eight D-D fusions before it decays or sticks to a helium atom. That's better than before, but still not enough to justify the cost of making the muon.

The next reactions to consider are D-T fusion and Li-D fusion. Tritium is an even heavier isotope of hydrogen. It undergoes muon catalyzed fusion with deuterium via the reaction D + T –> ⁴He + n + 17.6 MeV. Because of the higher energy of the reaction, the muons are even less likely to stick to a helium atom, and you get about 100 fusions per muon. 100 x 17.6 MeV = 1.76 GeV, barely break-even for the high energy cost to make the muon, but there is no reason to stop there. You can use the high energy fusion neutrons to catalyze LiD fusion. For example, 2 LiD + n –> 3 ⁴He + T + D + n, producing 19.9 MeV and a tritium atom.

With this additional 19.9 MeV per DT fusion, the system can start to produce usable energy for sale. It is also important that tritium is made in the process. You need tritium for the fusion reactions, and there are not many other supplies. The spare neutron is interesting too. It can be used to make additional tritium or for other purposes. It’s a direction I’d like to explore further. I worked on making tritium for my PhD, and in my opinion, this sort of hybrid operation is the most attractive route to clean nuclear fusion power.
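Here is that energy bookkeeping as a small script. The 1000-4000 MeV muon production cost and the 100 fusions per muon are the figures quoted above, not measured values:

```python
# Per-muon energy bookkeeping: ~100 D-T fusions at 17.6 MeV each, plus
# ~19.9 MeV from the Li-D reaction driven by each fusion neutron,
# compared against the quoted 1000-4000 MeV cost of making the muon.
FUSIONS_PER_MUON = 100
E_DT = 17.6        # MeV per D-T fusion
E_LID = 19.9       # MeV per neutron-driven Li-D reaction
MUON_COST_MIN, MUON_COST_TYP = 1000, 4000   # MeV

yield_MeV = FUSIONS_PER_MUON * (E_DT + E_LID)
print(f"yield per muon: {yield_MeV:.0f} MeV ({yield_MeV / 1000:.2f} GeV)")
print(f"gain vs {MUON_COST_MIN} MeV cost: {yield_MeV / MUON_COST_MIN:.1f}x")
print(f"gain vs {MUON_COST_TYP} MeV cost: {yield_MeV / MUON_COST_TYP:.1f}x")
```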

Robert Buxbaum, September 8, 2022. For my appraisal of hot fusion, see here.

A more accurate permeation tester

There are two ASTM-approved methods for measuring the gas permeability of a material. The equipment is very similar, and REB Research makes equipment for either. In one of these methods (described in detail here), you measure the rate of pressure rise in a small volume. This method is ideal for high permeation rate materials. It's fast, reliable, and as a bonus, allows you to infer diffusivity and solubility as well, based on the permeation and breakthrough time.

Exploded view of the permeation cell.

For slower permeation materials, I’ve found you are better off with the other method: using a flow of sampling gas (helium typically, though argon can be used as well) and a gas-sampling gas chromatograph. We sell the cells for this, though not the gas chromatograph. For my own work, I use helium as the carrier gas and sampling gas, along with a GC with a 1 cc sampling loop (a coil of stainless steel tube), and an automatic, gas-operated valve, called a sampling valve. I use a VECO ionization detector since it provides the greatest sensitivity differentiating hydrogen from helium.

When doing an experiment, the permeate gas is put into the upper chamber. That's typically hydrogen for my experiments. The sampling gas (helium in my setup) is made to flow past the lower chamber at a fixed flow rate, 20 sccm or less. The sampling gas then flows to the sampling loop of the GC, and from there up the hood. Every 20 minutes or so, the sampling valve switches, sending the sampling gas directly out the hood. When the valve switches, the carrier gas (helium) now passes through the sampling loop on its way to the column. This sends the 1 cc of sample directly to the GC column as a single "injection". The GC column separates the various gases in the sample and determines the components and the concentration of each. From the helium flow rate, and the permeate (hydrogen) concentration in it, I determine the permeation rate and, from that, the permeability of the material.

As an example, let's assume that the sample gas flow is 20 sccm, as in the diagram above, and that the GC determines the H2 concentration to be 1 ppm. The permeation rate is thus 20 x 10⁻⁶ std cc/minute, or 3.33 x 10⁻⁷ std cc/s. The permeability is now calculated from the permeation area (12.56 cm² for the cells I make), from the material thickness, and from the upstream pressure. Typically, one measures the thickness in cm, and the pressure in cm of Hg, so that 1 atm is 76 cm Hg. The result is that permeability is determined in a unit called the barrer. Continuing the example above, if the upstream hydrogen is 15 psig, that's 2 atmospheres absolute, or 152 cm Hg. Let's say that the material is a polymer with a thickness of 0.3 cm; we thus conclude that the permeability is 0.524 x 10⁻¹⁰ scc·cm/(cm²·s·cmHg) = 0.524 barrer.
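The same worked example as a short calculation, using the numbers assumed above:

```python
# Permeability in barrer from the sampling-gas flow, the measured ppm of
# permeate, the cell area, the membrane thickness, and the upstream pressure.
flow_sccm = 20.0           # sampling gas flow, std cc/min
ppm_H2 = 1.0               # permeate concentration measured by the GC
area_cm2 = 12.56           # permeation area of the cell
thickness_cm = 0.3         # membrane thickness
pressure_cmHg = 2 * 76.0   # 15 psig ~ 2 atm absolute = 152 cm Hg

permeation_scc_s = flow_sccm * ppm_H2 * 1e-6 / 60          # ~3.33e-7 scc/s
permeability = permeation_scc_s * thickness_cm / (area_cm2 * pressure_cmHg)
barrer = permeability / 1e-10    # 1 barrer = 1e-10 scc*cm/(cm^2*s*cmHg)
print(f"permeability = {barrer:.3f} barrer")   # ~0.524
```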

This method is capable of measuring permeabilities lower than the previous method can, easily lower than 1 barrer, because the results are not fogged by small air leaks or degassing from the membrane material. Leaks of oxygen and nitrogen show up on the GC output as peaks that are distinct from the permeate peak (hydrogen, or whatever you're studying as a permeate gas). Another plus of this method is that you can measure the permeability of multiple gas species simultaneously, a useful feature when evaluating gas separation polymers. If this type of approach seems attractive, you can build a cell like this yourself, or buy one from us. Send us an email at reb@rebresearch.com, or give us a call at 248-545-0155.

Robert Buxbaum, April 27, 2022.

Low temperature hydrogen removal

Platinum catalysts can be very effective at removing hydrogen from air. Platinum promotes the irreversible reaction of hydrogen with oxygen to make water: H2 + 1/2 O2 –> H2O, a reaction that can take off, at great rates, even at temperatures well below freezing. In the 1800s, when platinum was cheap, platinum powder was used to light town-gas street lamps. In those days, street lamps were not fueled by methane, 'natural gas', but by 'town gas', a mix of hydrogen and carbon monoxide and many impurities like H2S. It was made by reacting coal and steam in a gas plant, and it is a testament to the catalytic power of Pt that it could light this town gas. These impurities are catalytic poisons: when exposed to them, any catalyst, including platinum, loses its power. This is especially true at low temperatures, where product water condenses, and this too poisons the catalytic surface.

Nowadays, platinum is expensive and platinum catalysts are no longer made of Pt powder, but rather by coating a thin layer of Pt metal on a high surface area substrate like alumina, ceria, or activated carbon. At higher temperatures, this distribution of Pt improves the reaction rate per gram Pt. Unfortunately, at low temperatures, the substrate seems to be part of the poisoning problem. I think I’ve found a partial way around it though.

My company, REB Research, sells Pt catalysts for hydrogen removal use down to about 0°C, 32°F. For those needing lower temperature hydrogen removal, we offer a palladium-hydrocarbon getter that continues to work down to -30°C and works both in air and in the absence of air. It’s pretty good, but poisons more readily than Pt does when exposed to H2S. For years, I had wanted to develop a version of the platinum catalyst that works well down to -30°C or so, and ideally that worked both in air and without air. I got to do some of this development work during the COVID downtime year.

My current approach is to add a small amount of teflon and other hydrophobic materials. My theory is that normal Pt catalysts form water so readily that the water coats the catalytic surface and substrate pores, choking the catalyst off from contact with oxygen or hydrogen. My thought on why our Pd-organic getter works better than Pt is that it's in part because Pd is a slower water-former, and in part because the organic compounds prevent water condensation. If so, teflon + Pt should be more active than uncoated Pt catalyst. And it is so.

Think of this in terms of the Van der Waals equation of state: (P + a/Vm²)(Vm – b) = RT,

where Vm is the molar volume. The substance-specific constants a and b can be understood as an attraction force between molecules and a molecular volume, respectively. Alternately, they can be calculated from the critical temperature and pressure as

a = 27(RTc)²/(64Pc) and b = RTc/(8Pc).

Now, I'm going to assume that the effect of a hydrophobic surface near the Pt is to reduce the effective value of a. This is to say that water molecules still attract as before, but there are fewer water molecules around. I'll assume that b remains the same. Thus the ratio of Tc and Pc remains the same, but the values drop by a factor related to the decrease in water density. If we imagine the use of enough teflon to decrease the number of water molecules by 60%, that would be enough to reduce the critical temperature by 60%. That is, from 647 K (374°C) to 259 K, or -14°C. This might be enough to allow Pt catalysts to be used for H2 removal from the gas within a nuclear waste casket. I'm into nuclear, both because of its clean power density and its space density. As for nuclear waste, you need these caskets.
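The arithmetic, for anyone who wants to check it: in the Van der Waals picture, Tc = 8a/(27Rb), so Tc scales directly with a, and the assumed 60% cut in the effective a gives:

```python
# Cutting the effective Van der Waals 'a' by 60% (the assumption above)
# cuts Tc by 60%, since Tc = 8a/(27*R*b) and b is held fixed.
T_C_WATER = 647.0   # K, critical temperature of water

reduction = 0.60    # assumed drop in effective 'a' near the teflon
T_c_eff = T_C_WATER * (1 - reduction)
print(f"effective Tc ~ {T_c_eff:.0f} K = {T_c_eff - 273.15:.0f} C")   # ~259 K, about -14 C
```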

I've begun to test my theory by making hydrogen removal catalysts that use both platinum and palladium along with unsaturated hydrocarbons. I find this works far better than the palladium-hydrocarbon getter, at least at room temperature. I find it works well even when the catalyst is completely soaked in water, but the real experiments are yet to come — how does this work in the cold. Originally I planned to use a freezer for these tests, but I now have a better method: wait for winter and use God's giant freezer.

Robert E. Buxbaum October 20, 2021. I did a fuller treatment of the thermo above, a few weeks back.

Weird thermodynamics near surfaces can prevent condensation and make water more slippery.

It is a fundamental of science that the properties of every pure, one-phase material are totally fixed at any given temperature and pressure. Thus, for example, water at 20°C is accepted to always have a density of 0.998 gm/cc, a vapor pressure of 17.5 Torr, a viscosity of 1.002 centipoise (milliPascal seconds), and a speed of sound of 1481 m/s. Set the temperature and pressure of any other material and every other quality is set. But things go screwy near surfaces, and this is particularly true for water, where the hydrogen bond — a quantum bond — predominates.

Near a surface, water's vapor pressure rises and it becomes less inclined to condense or freeze. I use this odd aspect of thermodynamics to keep my platinum-based hydrogen getter catalysts active at low temperatures where they would normally clog. Normal platinum catalysts are not suitable for hydrogen removal at normal temperatures, e.g. room temperature, because the water that forms from hydrogen oxidation chokes off the catalytic surface. Hydrophobic additions prevent this, and I'd like to show you why this works, and why other odd things happen, based on an approximation called the Van der Waals equation of state:

(P + a/Vm²)(Vm – b) = RT     (1)

This equation describes the molar volume, Vm, of any pure material based on the pressure, the absolute temperature (Kelvin), and two substance-specific constants, a and b. These constants can be understood as an attraction force term and a molecular volume, respectively. It is common to calculate a and b from the critical temperature, Tc (in Kelvin), and the critical pressure, Pc, as follows:

a = 27(RTc)²/(64Pc), b = RTc/(8Pc).     (2 a,b)

For water, Tc = 647 K (374°C) and Pc = 220.5 bar. Plugging in these numbers, the Van der Waals equation gives reasonable values for the density of water both as a liquid and a gas, and thus gives a reasonable value for the boiling point.

Now consider the effect that an inert surface would have on the effective values of a and b near that surface. The volume of the molecules will not change, and thus b will not change, but the value of a will change, likely by about half. This is because the number of molecules surrounding any other molecule is reduced by about half, while the inert surface adds nothing to the attraction. Near a surface, surrounding molecules still attract each other the same as before, but there are about half as many molecules at any temperature and pressure.

To get a physical sense of what the surface does, consider using the new values of a and b to determine new values of Tc and Pc for materials near the surface. Since b does not change, we see that the presence of a surface does not affect the ratio of Tc and Pc, but it decreases the effective value of Tc — by about half. For water, that is a change from 647 K to 323.5 K, 50.5°C, very close to room temperature. Pc changes to 110 bar, about 1600 psi. Since the new value of Tc is close to room temperature, the density of water will be much lower near the surface, and the viscosity can be expected to drop. The net result is that water flows more readily through a teflon pipe than through an ordinary pipe, a difference that is particularly apparent at small diameters.
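Here is that calculation as a short script, using equations (2 a,b) for water and then halving a; the halving is the assumption argued above:

```python
# Van der Waals a and b for water from its critical point, then the effect of
# halving a (the near-surface assumption): Tc and Pc both drop by half while
# their ratio, which depends only on b, stays the same.
R = 83.14                  # cm^3 bar / (mol K)
T_c, P_c = 647.0, 220.5    # K and bar, critical point of water

a = 27 * (R * T_c)**2 / (64 * P_c)   # cm^6 bar / mol^2
b = R * T_c / (8 * P_c)              # cm^3 / mol

T_c_surface = 8 * (a / 2) / (27 * R * b)   # = T_c / 2
P_c_surface = (a / 2) / (27 * b**2)        # = P_c / 2
print(f"a = {a:.3e} cm^6 bar/mol^2, b = {b:.1f} cm^3/mol")
print(f"near-surface Tc ~ {T_c_surface:.1f} K, Pc ~ {P_c_surface:.1f} bar")  # ~323.5 K, ~110 bar
```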

This decrease in effective Tc is useful for fire hoses, and for making sailing ships go faster (use teflon paint) and for making my hydrogen removal catalysts more active at low temperatures. Condensed water can block the pores to the catalyst; teflon can forestall this condensation. It’s a general trick of thermodynamics, reasonably useful. Now you know it, and now you know why it works.

Robert Buxbaum August 30, 2021