Tag Archives: Quantum mechanics

Dark matter: why our galaxy still has its arms

Our galaxy may have two arms, or perhaps four. It was thought to have four until 2008, when the count was reduced to two. Then, in 2015, it was expanded again to four, but recent research suggests it's only two after all. About 70% of galaxies have arms, easily counted from the outside, as in the picture below. Apparently it's hard to get a good view from the inside.

Four-armed spiral galaxy NGC 2008. There is debate over whether our galaxy looks like this, or has only two arms. Over 70% of all galaxies are spiral galaxies.

Logically speaking, we should not expect a galaxy to have arms at all. For a galaxy to keep its arms, it must rotate as a unit. Otherwise, even if the galaxy had arms when it formed, it would lose them by the time the outer rim rotated even once. As it happens, we know the rotation speeds and ages of galaxies; they've all rotated 10 to 50 times since they formed.

For stable rotation, the rotational acceleration must match the force of gravity, and this force should decrease with distance from the massive center. Thus, we'd expect stars to circle much faster the closer they are to the center of the galaxy. We see that Mercury circles the sun much faster than we do, and that we circle much faster than the outer planets. If stars circled the galactic core this way, any arm structure would be long gone. Since the galactic arms are stable, we've proposed the existence of lots of unseen, dark matter to explain it. This matter has to have some peculiar properties, behaving as a light gas that doesn't spin with the rest of the galaxy, or absorb light, or reflect it. Some years ago, I came to believe that there was only one gas distribution that fit, and challenged folks to figure out the distribution.

The particles that make up this gas have to be very light, about 10⁻⁷ eV, some 2 x 10¹² times lighter than an electron, and very slippery. Some researchers had posited large, dark rocks, but I preferred to imagine a particle called the axion, and I expected it would be found soon. The particle mass had to be about this value, or the gas would shrink down to the center of the galaxy, or start to spin, or fill the universe. In any of these cases, galaxies would not be stable. The problem is, we've been looking for years, and we have not seen any particle like this. What's more, continued work on the structure of matter suggests that no such particle should exist. At this point, galactic stability is a bigger mystery than it was 40 years ago.

So how to explain galactic stability if there is no axion? One thought, from Mordechai Milgrom, is that gravity does not work as we thought. This is an annoying explanation: it involves a complex revision of General Relativity, a beautiful theory that seems to be generally valid. Another, more recent explanation is that the dark matter is regular matter that somehow became an entangled superfluid despite the low density and relatively warm temperatures of interstellar space. This has been proposed by Justin Khoury, here. Either theory would explain the slipperiness and the fact that the gas does not interact with light, but the details don't quite work. For one, I'd still think that the entangled particle mass would have to be quite light; maybe a neutrino would fit (entangled neutrinos?). But superfluids don't usually exist at space temperatures and pressures, long distances (light years) should preclude entanglement, and neutrinos don't seem to interact at all.

Sabine Hossenfelder suggests a combination of modified gravity and superfluidity. Some version of this might fit observations better, but it doubles the amount of new physics required. Sabine does a good science video blog, BTW, with humor and less math. She doesn't believe in free will, or religion, or entropy. According to her, the Big Bang was caused by a mystery particle called an inflaton that creates mass and energy from nothing. She claims that the worst thing you can do in terms of resource depletion is have children, and seems to believe religious education is child abuse. Some of her views I agree with; with many, I do not. I think entropy is fundamental, and I think people are good. Also, I see no advantage in saying "In the beginning an inflaton created the heavens and the earth," but there you go. It's not like I know what dark matter is any better than she does.

There are some 200 billion galaxies, generally with about 100 billion stars each. Our galaxy is about 150,000 light years across, 1.5 x 10¹⁸ km. It appears to behave, more or less, as a solid disk, having rotated about 15 full turns since its formation, 10 billion years ago. The speed at the edge is thus about π x 1.5 x 10¹⁸ km / 3 x 10¹⁶ s = 160 km/s. That's not relativistic, but it is 16 times the speed of our fastest rockets. The vast majority of the mass of our galaxy would have to be dark matter, with relatively little between galaxies. Go figure.
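The edge-speed arithmetic above is easy to check. A minimal sketch, using the round numbers from the text (assumptions, not precise measurements):

```python
import math

# Rough figures from the text:
diameter_km = 1.5e18       # ~150,000 light years across
seconds_per_turn = 3e16    # roughly one full turn per billion years

# For solid-disk rotation, a rim star travels one circumference per turn:
edge_speed_km_s = math.pi * diameter_km / seconds_per_turn
print(round(edge_speed_km_s))   # ≈ 157 km/s, the ~160 km/s quoted above
```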

Robert Buxbaum, May 24, 2023. I’m a chemical engineer, PhD, but studied some physics and philosophy.

Hydrogen transport in metallic membranes

The main products of my company, REB Research, involve metallic membranes, often palladium-based, that provide 100% selective hydrogen filtering or long-term hydrogen storage. One way to understand why these metallic membranes provide 100% selectivity has to do with the fact that metal atoms are much bigger than hydrogen ions, with absolutely regular, small spaces between them that fit hydrogen and nothing else.

Palladium atoms are essentially spheres. In the metallic form, the atoms pack in an FCC structure (face-centered cubic) with a radius of 1.375 Å. There is a cloud of free electrons that provides conductivity and heat transfer, but as far as the structure of the metal goes, there is only a tiny space of 0.426 Å between the atoms, see below. This hole is too small for any molecule, or any inert gas. In the gas phase, hydrogen molecules are about 1.06 Å in diameter, and other molecules are bigger. Hydrogen atoms shrink when inside a metal, though, to 0.3 to 0.4 Å, just small enough to fit through the holes.
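For those who like to check the geometry: standard hard-sphere packing formulas give the sizes of the two kinds of interstitial holes in an FCC lattice. This is only a sketch; the exact gap one quotes (like the 0.426 Å above) depends on which space between atoms is being measured.

```python
import math

r_pd = 1.375   # Å, palladium atomic radius in the FCC metal

# Radius of the largest sphere that fits each standard FCC interstitial site:
octahedral_hole = (math.sqrt(2) - 1) * r_pd      # ≈ 0.57 Å
tetrahedral_hole = (math.sqrt(1.5) - 1) * r_pd   # ≈ 0.31 Å
print(round(octahedral_hole, 2), round(tetrahedral_hole, 2))
```

A shrunken hydrogen atom (0.3 to 0.4 Å) fits spaces of this scale; ordinary molecules at 1.5 Å or more do not.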

The reason that hydrogen shrinks has to do with its electron leaving to join palladium's conduction cloud. Hydrogen is usually put on the upper left of the periodic table because, in most cases, it behaves as a metal. Like a metal, it reacts with oxygen and chlorine, forming stoichiometric compounds like H2O and HCl. It also behaves like a metal in that it alloys, non-stoichiometrically, with other metals. Not with all metals, but with many, Pd and the transition metals in particular. Metal atoms are a lot bigger than hydrogen, so there is little metallic expansion on alloying. The hydrogen fits in the tiny spaces between atoms. I've previously written about hydrogen transport through transition metals (we provide membranes for this too).

No other atom or molecule fits in the tiny space between palladium atoms. Other atoms and molecules are bigger, 1.5 Å or more in size. This is far too big to fit in a hole 0.426 Å in diameter. The result is that palladium is basically 100% selective to hydrogen. Other metals are too, but palladium is particularly good in that it does not readily oxidize. We sometimes sell transition metal membranes and sorbers, but typically coat the underlying metal with palladium.

We don't typically sell products of pure palladium, by the way. Instead, most of our products use Pd-25%Ag or Pd-Cu. These alloys are slightly cheaper than pure Pd and more stable. Pd-25%Ag is also slightly more permeable to hydrogen than pure Pd is: a win-win-win for the alloy.

Robert Buxbaum, January 22, 2023

Of covalent bonds and muon catalyzed cold fusion.

A hydrogen molecule consists of two protons held together by a covalent bond. One way to think of such bonds is to imagine that only one electron is directly involved, as shown below. The bonding electron spends only 1/7 of its time between the protons, making the bond; the other 6/7 of the time, the electron shields the two protons by 3/7 e each, reducing the effective charge of each proton to 4/7 e+.

We see that the two shielded protons will repel each other with a force FR = Ke (16/49) e²/r², where e is the charge of an electron or proton, r is the distance between the protons (r = 0.74 Å = 0.74 x 10⁻¹⁰ m), and Ke is Coulomb's electrical constant, Ke ≈ 8.988 x 10⁹ N⋅m²⋅C⁻². The attractive force is calculated similarly: each proton attracts the central electron with FA = -Ke (4/49) e²/(r/2)² = -Ke (16/49) e²/r². The forces are seen to be in balance; the net force is zero.
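The force balance is easy to verify numerically; a quick sketch with the constants above:

```python
Ke = 8.988e9      # N·m²/C², Coulomb's constant
e = 1.602e-19     # C, charge of a proton or electron
r = 0.74e-10      # m, the H–H bond length

# Repulsion of the two shielded protons (4/7 e each), separated by r:
FR = Ke * (4/7)**2 * e**2 / r**2
# Attraction of each 4/7 e proton to the 1/7-time central electron, at distance r/2:
FA = Ke * (4/7) * (1/7) * e**2 / (r/2)**2
print(FR, FA)   # the two forces come out equal: net force zero
```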

It is because of quantum mechanics that the bond is the length that it is. If the atoms were to move closer than r = 0.74 Å, the central electron would be confined to less space and would gain energy, causing it to spend less time between the two protons. With less of an electron between them, FR would be greater than FA and the protons would repel. If the atoms moved further apart than 0.74 Å, a greater fraction of the electron would move to the center, FA would increase, and the atoms would attract. This is a fairly pleasant way to understand why the hydrogen side of every hydrogen covalent bond is the same length. It's also a nice introduction to muon-catalyzed cold fusion.

Most fusion takes place only at high temperatures: at 100 million °C in a tokamak fusion reactor, or at about 15 million °C in the high-pressure interior of the sun. Muon-catalyzed fusion creates the equivalent of a much higher pressure, so that fusion occurs at room temperature. The trick is to replace one of the electrons with a muon, an unstable, heavy electron-like particle discovered in 1936. The muon, designated µ-, behaves just like an electron but has about 207 times the mass. As a result, when it replaces an electron in hydrogen, it forms a covalent bond that is about 1/207th the length of a normal bond. This is the equivalent of extreme pressure. At this closer distance, hydrogen nuclei fuse even at room temperature.

In normal hydrogen, the nuclei are just protons. When they fuse, one of them becomes a neutron. You get a deuteron (a proton-neutron pair), plus an anti-electron, and 1.44 MeV of energy after the anti-electron has annihilated (for more on antimatter see here). The muon is released most of the time, and can catalyze many more fusion reactions. See figure at right.

While 1.44MeV per reaction is a lot by ordinary standards — roughly one million times more energy than is released per atom when hydrogen is burnt — it’s very little compared to the energy it takes to make a muon. Making a muon takes a minimum of 1000 MeV, and more typically 4000 MeV using current technology. You need to get a lot more energy per muon if this process is to be useful.

You get quite a lot more energy when a muon catalyzes deuterium-deuterium (D-D) fusion. With these reactions, you get 3.3 to 4 MeV of energy per fusion, and the muon will be ejected with enough force to support about eight D-D fusions before it decays or sticks to a helium atom. That's better than before, but still not enough to justify the cost of making the muon.

The next reactions to consider are D-T fusion and Li-D fusion. Tritium is an even heavier isotope of hydrogen. It undergoes muon-catalyzed fusion with deuterium via the reaction D + T → ⁴He + n + 17.6 MeV. Because of the higher energy of the reaction, the muons are even less likely to stick to a helium atom, and you get about 100 fusions per muon. 100 x 17.6 MeV = 1.76 GeV, barely break-even against the high energy cost of making the muon, but there is no reason to stop there. You can use the high-energy fusion neutrons to catalyze Li-D fusion. For example, 2 LiD + n → 3 ⁴He + T + D + n, producing 19.9 MeV and a tritium atom.
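The energy bookkeeping can be sketched directly from the figures quoted above (the 4000 MeV muon cost and 100 fusions per muon are the text's rough numbers, not hard data):

```python
muon_cost_MeV = 4000     # typical energy cost to make one muon
fusions_per_muon = 100   # D-T fusions catalyzed before the muon sticks or decays
dt_MeV = 17.6            # yield of D + T -> 4He + n
lid_MeV = 19.9           # yield of the follow-on LiD reaction, per fusion neutron

dt_only = fusions_per_muon * dt_MeV                 # D-T alone: barely break-even
with_lid = fusions_per_muon * (dt_MeV + lid_MeV)    # with the LiD follow-on
print(round(dt_only), round(with_lid))              # 1760 and 3750 MeV per muon
```

3750 MeV per muon is well above the 1000 MeV theoretical minimum cost and near the 4000 MeV typical cost; the tritium and spare neutrons produced along the way are what make the overall operation attractive.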

With this additional 19.9 MeV per D-T fusion, the system can start to produce usable energy for sale. It is also important that tritium is made in the process: you need tritium for the fusion reactions, and there are not many other sources. The spare neutron is interesting too. It can be used to make additional tritium or for other purposes. It's a direction I'd like to explore further. I worked on making tritium for my PhD, and in my opinion, this sort of hybrid operation is the most attractive route to clean nuclear fusion power.

Robert Buxbaum, September 8, 2022. For my appraisal of hot fusion, see here.

If the test of free will is that no one can tell what I will do….

Free will is generally considered a good thing, perhaps a unique gift from the creator to mankind. Legal philosophers contend that it is free will that makes us liable to legal punishment for our crimes, while piranhas and machines are not. We would never think of jailing a gun or a piranha, even if it harmed a child.

It's not totally clear that we have free will, though, nor is it totally clear what free will is. The common test is that no one can tell what I will do. If this is the only requirement, though, it seems a random number generator should be found to have free will. One might want to add some degree of artificial intelligence so that the random numbers are used to make decisions that are rational in some sense, choosing between tea and coffee, say, and not tea and covfefe, but this should not be difficult. With that modification, we should find that the random device makes free decisions as boldly or conservatively as any person.

The numbers should be truly random, but even if they are not quite, this should not be a barrier. We generally take statistical things to be random, the speed of the wind tomorrow at 3:00 PM, for example, even though there is a likely average, and 500 mph is exceedingly unlikely. And if that isn't quite random enough, one could use quantum mechanics. One could devise a system that measures the time between the next two radioactive decays to an accuracy many times greater than the likely time between them. If the sample has a decay every 100 seconds or so, the second and third digits of this time after the decimal are random to an extent that most would accept; no one can predict them at all, or so we understand it. (God might be an exception here, but since He is outside of time, prediction becomes an oxymoron.) Using these quantum-mechanical random numbers, one should be able to make decisions showing as much free will as any person shows, and likely more. Most folks are fairly predictable.
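A sketch of such a decay-timing random-number scheme, with Python's pseudo-random exponential waiting times standing in for real radioactive decays (the function name and parameters are illustrative):

```python
import random

def decay_digits(mean_interval=100.0, n=5, rng=random):
    """Simulate waiting times between decays (exponentially distributed) and
    keep the 2nd and 3rd digits after the decimal point: a 0-99 number each."""
    out = []
    for _ in range(n):
        t = rng.expovariate(1.0 / mean_interval)   # seconds to the next decay
        out.append(int(t * 1000) % 100)            # 2nd and 3rd decimal digits
    return out

# Use one such number to make a 'free' decision between tea and coffee:
print("tea" if decay_digits(n=1)[0] % 2 == 0 else "coffee")
```

With a real detector feeding in the decay times, the digits would be as unpredictable as quantum mechanics allows.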

Since God is considered to be outside of time, any mention of His fore-knowledge or pre-determination is an oxymoron. There is no pre or fore if you're outside of time, as I'd understand things.

I notice that few people would say that a radioactive atom has free will, though, and many doubt that people have free will. Still, no one seems interested in handing major issues to a computer, or in holding the machine liable if things turn out poorly. And if one wants to argue that people have no free will, it seems to me that the argument for punishment gets rather weak. Without free will, why would it be more wrong to kill a person than a piranha, or a plant?

Robert Buxbaum, January 19, 2020. Just some random thoughts on random number generators. I’ve also had thoughts about punishments, and about job choices.

Of God and gauge blocks

Most scientists are religious on some level. There's clear evidence for a big bang, and thus for a God-of-Creation. But the creation event is so distant and huge that no personal God is implied. I'd like to suggest that the God of creation is close by, and as a beginning to this, I'd like to discuss Johansson gauge blocks, the standard tool used to measure machine parts accurately.

A pair of Johansson blocks supporting 100 kg in a 1917 demonstration. This is about 32 times atmospheric pressure, about 470 psi.

Let's say you're making a complicated piece of commercial machinery, a car engine for example. Generally you'll need to make many parts in several different shops using several different machines. If you want to be sure the parts will fit together, a representative number of each part must be checked for dimensional accuracy in several places. An accuracy requirement of 0.01 mm is not uncommon. How would you do this? The way it's been done, at least since the days of Henry Ford, is to mount the parts to a flat surface and use a feeler gauge to compare the heights of the parts to the heights of stacks of precisely manufactured gauge blocks. Called Johansson gauge blocks after their inventor and original manufacturer, Carl Edvard Johansson, the blocks are typically made of steel, 1.35″ wide by 0.35″ thick (0.47 in² surface), and of various heights. Different height blocks can be stacked to produce any desired height in multiples of 0.01 mm. To give accuracy to the measurements, the blocks must be manufactured flat to within 1/10000 of a millimeter. This is 0.1 µm, about 1/5 the wavelength of visible light. At this degree of flatness, an amazing thing happens: Jo blocks stick together when stacked, with a force of 100 kg (220 pounds) or more, an effect called "wringing." See the picture at right, from a 1917 advertising demonstration.

The 220 lbs of force measured in the picture suggests an invisible pressure of at least 470 psi holding the blocks together (220 lbs/0.47 in² = 470 psi). This is about 32 times the pressure of the atmosphere. It is independent of air, temperature, and the metal used to make the blocks. Since pressure times volume equals energy, this pressure can be thought of as a vacuum energy density arising "out of the nothingness." Each cubic inch of space between the blocks contains 470 inch-pounds of energy; that's the equivalent of 0.9 kWh per cubic meter, energy you can not see, but you can feel. That is a lot of energy in the nothingness, and the energy (and the pressure) get larger the flatter you make the surfaces, or the closer you bring them together. This is an odd observation, since energy contents generally scale with the size of a space, not inversely with it. Clean metal surfaces that are flat enough will weld together without the need for heat, a trick we have used in the manufacture of purifiers.
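The numbers above are easy to reproduce; a sketch of the pressure and energy-density arithmetic:

```python
force_lb = 220     # lbs of holding force in the 1917 demonstration
area_in2 = 0.47    # in², face area of the blocks

psi = force_lb / area_in2            # pressure between the blocks
PSI_TO_PA = 6894.76                  # pascals per psi
energy_J_per_m3 = psi * PSI_TO_PA    # pressure IS an energy density: J/m³
print(round(psi), round(energy_J_per_m3 / 3.6e6, 2))   # ≈ 468 psi, ≈ 0.9 kWh/m³
```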

A standard way to think of quantum scattering of an atom (solid line) is that it is scattered by invisible bits of light, virtual photons (the wavy lines). In this view, the force that pushes two blocks together comes from a slight deficiency in the number of virtual photons in the small space between the blocks.

The empty space between two flat surfaces also has the power to scatter light or atoms that pass between them. This scattering is seen even in vacuum at zero degrees Kelvin, absolute zero. Somehow the light or atoms pick up energy, "out of the nothingness," and shoot up or down. It's a "quantum effect," and after a while physics students forget how odd it is for energy to come out of nothing. Not only do students stop wondering where the energy comes from, they stop wondering why the scattering energy gets bigger the closer you bring the surfaces. With Johansson block sticking, as with quantum scattering, the energy density gets higher the closer the surfaces, and this is accepted as normal, just Heisenberg's uncertainty in two contexts. You can calculate the force from the zero-point energy of vacuum, but you must add a relativistic wrinkle: the distance between two surfaces shrinks the faster you move, according to relativity, but the measurable force should not. A calculation of the force that includes both quantum mechanics and relativity was derived by Hendrik Casimir:

Energy per volume = P = F/A = πhc/480L⁴,

where P is pressure, F is force, A is area, h is Planck's quantum constant, 6.63 x 10⁻³⁴ J·s, c is the speed of light, 3 x 10⁸ m/s, and L is the distance between the plates, in meters. Experiments match the above prediction to within 2%, the experimental error, but the energy density this implies is huge, especially when L is small; the equation should apply down to Planck lengths, 1.6 x 10⁻³⁵ m. Even at the size of an atom, 1 x 10⁻¹⁰ m, the energy density is 3.6 GWh/m³. 3.6 GWh is one hour's energy output of three to four large nuclear plants. We see only a tiny portion of the Planck-length vacuum energy when we stick Johansson gauge blocks together, but the rest is there, near invisible, in every bit of empty space. The implication of this enormous energy remains baffling in any analysis. I see it as an indication that God is everywhere, exceedingly powerful, filling the universe, and holding everything together. Take a look, and come to your own conclusions.
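A sketch of the Casimir energy density at atomic spacing, plugging the constants into the equation above:

```python
import math

h = 6.63e-34    # J·s, Planck's constant
c = 3e8         # m/s, speed of light
L = 1e-10       # m, roughly one atomic diameter

P = math.pi * h * c / (480 * L**4)   # pressure = energy density, J/m³
print(round(P / 3.6e12, 1))          # in GWh/m³ (1 GWh = 3.6e12 J): ≈ 3.6
```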

As a homiletic, it seems to me that God likes friendship, but does not desire shamans, folks who stand between man and Him. Why do I say that? The huge force-energy between plates brings them together, but scatters anything that goes between. And now you know something about nothing.

Robert Buxbaum, November 7, 2018. Physics references: H. B. G. Casimir and D. Polder, "The Influence of Retardation on the London-van der Waals Forces," Phys. Rev. 73, 360 (1948).
S. K. Lamoreaux, Phys. Rev. Lett. 78, 5 (1997).

Isotopic effects in hydrogen diffusion in metals

For most people, there is a fundamental difference between solids and fluids. Solids have long-term permanence with no apparent diffusion; liquids diffuse and lack permanence. Put a penny on top of a dime, and 20 years later the two coins are as distinct as ever. Put a layer of colored water on top of plain water, and within a few minutes you'll see the coloring diffuse into the plain water, or (if you think of it the other way) the plain water diffuse into the colored.

Now consider the transport of hydrogen in metals, the technology behind REB Research's metallic membranes and getters. The metals are clearly solid, keeping their shapes and properties for centuries. Still, hydrogen flows into and through the metals at the rate of a light breeze, about 40 cm/minute. Another way of saying this is that we transfer 30 to 50 cc/min of hydrogen through each cm² of membrane at 200 psi and 400°C; divide the volume by the area, and you'll see that the hydrogen really moves through the metal at a nice clip. It's like a normal filter, but it's 100% selective to hydrogen. No other gas goes through.

To explain why hydrogen passes through the solid metal membrane this way, we have to start talking about quantum behavior. It was the quantum behavior of hydrogen that first interested me in hydrogen, some 42 years ago. I used it to explain why water was wet. Below, you will find something a bit more mathematical, a quantum explanation of hydrogen motion in metals. At REB we recently put these ideas towards building a membrane system for concentration of heavy hydrogen isotopes. If you like what follows, you might want to look up my thesis. This is from my 3rd appendix.

Although no one quite understands why nature should work this way, it seems that nature works by quantum mechanics (and entropy). The basic idea of quantum mechanics, as you will know, is that confined atoms can only occupy specific, quantized energy levels, as shown below. The energy difference between the lowest energy state and the next level is typically high. Thus, most of the hydrogen atoms in a metal will occupy only the lower state, the so-called zero-point-energy state.

A hydrogen atom, shown occupying an interstitial position between metal atoms (above), is also occupying quantum states (below). The lowest state, ZPE is above the bottom of the well. Higher energy states are degenerate: they appear in pairs. The rate of diffusive motion is related to ∆E* and this degeneracy.


The fraction occupying a higher energy state is calculated as c*/c = exp (-∆E*/RT), where ∆E* is the molar energy difference between the higher energy state and the ground state, R is the gas constant, and T is temperature. When thinking about diffusion, it is worthwhile to note that this energy is likely temperature dependent. Thus ∆E* = ∆G* = ∆H* - T∆S*, where the asterisk indicates the key energy level where diffusion takes place, the activated state. If ∆E* is mostly elastic strain energy, we can assume that ∆S* is related to the temperature dependence of the elastic strain.

Thus,

∆S* = -∆E*/Y dY/dT

where Y is the Young's modulus of elasticity of the metal. For hydrogen diffusion in metals, I find that ∆S* is typically small, while it is often significant for the diffusion of other atoms: carbon, nitrogen, oxygen, sulfur…
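To put rough numbers on this, here is a sketch of the activated-state fraction using the relations above. The particular ∆H*, modulus, and modulus slope are illustrative assumptions, not data from the text:

```python
import math

R = 8.314                 # J/(mol·K), gas constant
dH = 25000.0              # J/mol, illustrative activation enthalpy ∆H*
Y, dYdT = 1.1e11, -4e7    # Pa and Pa/K: a Young's modulus and its slope (illustrative)

dS = -(dH / Y) * dYdT     # ∆S* = -(∆E*/Y) dY/dT, per the text
for T in (300, 500, 700):
    dG = dH - T * dS      # ∆G* = ∆H* - T∆S*
    print(T, math.exp(-dG / (R * T)))   # c*/c: fraction of H in the activated state
```

Since most metals soften as they warm (dY/dT < 0), ∆S* comes out positive, and the activated fraction rises steeply with temperature.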

The rate of diffusion is now calculated assuming a three-dimensional drunkard's walk where the step lengths are a constant, a. Rayleigh showed that, for a simple cubic lattice, this becomes:

D = a²/6τ

where a is the distance between interstitial sites and τ is the average time between crossings. For hydrogen in a BCC metal like niobium or iron, D = a²/9τ; for an FCC metal, like palladium or copper, it's D = a²/3τ. A nice way to think about τ is to note that only at high energy can a hydrogen atom cross from one interstitial site to another, and, as we noted, most hydrogen atoms will be at lower energies. Thus the crossing rate is:

1/τ = ω c*/c = ω exp (-∆E*/RT)

where ω is the approach frequency, the rate at which the atom attempts the jump from the left interstitial position to the right one. When I was doing my PhD (and still likely today) the standard approach of physics writers was to use a classical formulation for this frequency based on the average speed of the interstitial. Thus, ω = (1/2a) √(kT/m), and

1/τ = (1/2a) √(kT/m) exp (-∆E*/RT).

In the above, m is the mass of the hydrogen atom, 1.66 x 10⁻²⁴ g for protium and twice that for deuterium, etc., a is the distance between interstitial sites, measured in cm, T is the temperature in Kelvin, and k is the Boltzmann constant, 1.38 x 10⁻¹⁶ erg/K. This formulation correctly predicts that heavier isotopes will diffuse more slowly than light ones, but it predicts, incorrectly, that at all temperatures the diffusivity of deuterium is 1/√2 that of protium, and that the diffusivity of tritium is 1/√3 that of protium. It also suggests that the activation energy of diffusion will not depend on isotope mass. I noticed that neither of these predictions is borne out by experiment, and came to wonder if it would not be more correct to assume that ω represents the motion of the lattice, breathing, and not the motion of a highly activated hydrogen atom breaking through an immobile lattice. This thought is borne out by experimental diffusion data when you describe hydrogen diffusion as D = D° exp (-∆E*/RT).
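The classical prediction is easy to sketch: with the same ∆E* for every isotope, the mass enters only through the √(kT/m) prefactor, so the isotope ratio of diffusivities is a pure mass ratio, the same at every temperature:

```python
import math

m_H = 1.66e-24    # g, protium
m_D = 2 * m_H     # g, deuterium
m_T = 3 * m_H     # g, tritium

# Classical model: D ∝ √(kT/m) exp(-∆E*/RT), with an isotope-independent ∆E*,
# so D_isotope / D_protium = √(m_H / m_isotope) at all temperatures:
print(round(math.sqrt(m_H / m_D), 3), round(math.sqrt(m_H / m_T), 3))  # 0.707 0.577
```

It is exactly this fixed 1/√2 and 1/√3 behavior that the measured data below contradict.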

(Table: measured D° and ∆E* values for protium and deuterium diffusion in several metals.)

You'll notice from the above that D° hardly changes with isotope mass, in complete contradiction to the classical model above. Also note that ∆E* is very isotope dependent. This too contradicts the classical formulation. Further, to the extent that D° does change with isotope mass, D° gets larger for the heavier hydrogen isotopes. I assume that small difference is the entropy effect on ∆E* mentioned above. There is no simple square-root-of-mass behavior, in contrast to most of the books we had in grad school.

As for why ∆E* varies with isotope mass, I found that I could get a decent explanation of my observations if I assumed that the isotope dependence arose from the zero point energy. Heavier isotopes of hydrogen will have lower zero-point energies, and thus ∆E* will be higher for heavier isotopes of hydrogen. This seems like a far better approach than the semi-classical one, where ∆E* is isotope independent.

I will now go a bit further than I did in my PhD thesis. I'll make the general assumption that the energy well is sinusoidal, or rather that it consists of two parabolas, one opposite the other. The ZPE is easily calculated for parabolic energy surfaces (harmonic oscillators). I find that ZPE = h/aπ √(∆E/m), where m is the mass of the particular hydrogen isotope, h is Planck's constant, 6.63 x 10⁻²⁷ erg·s, and ∆E is ∆E* + ZPE. For my PhD thesis, I didn't think to calculate ZPE and thus the isotope effect on the activation energy. I now see how I could have done it relatively easily, e.g. by trial and error, and a quick estimate shows it would have worked nicely. Instead, for my PhD, Appendix 3, I only looked at D°, and found that the values of D° were consistent with the idea that ω is about 0.55 times the Debye frequency, ω ≈ 0.55 ωD. The slight tendency for D° to be larger for heavier isotopes was explained by the temperature dependence of the metal's elasticity.
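A sketch of the ZPE formula with illustrative numbers; the interstitial spacing and well depth used here are assumptions for illustration, not fitted values:

```python
import math

h = 6.63e-27     # erg·s, Planck's constant
a = 1.2e-8       # cm, interstitial spacing (illustrative)
dE = 1.0e-13     # erg, well depth ∆E = ∆E* + ZPE (illustrative)
m_H = 1.66e-24   # g, protium
m_D = 2 * m_H    # g, deuterium

zpe = lambda m: (h / (a * math.pi)) * math.sqrt(dE / m)   # ZPE = h/aπ √(∆E/m)
print(zpe(m_H) / zpe(m_D))   # → √2: protium sits higher in the well
```

Since ∆E* = ∆E − ZPE, the heavier isotope's lower ZPE translates directly into a higher activation energy, as the data show.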

Two more comments based on the diagram I presented above. First, notice that there is a middle, split level of energies. This was an explanation I'd put forward for the quantum-tunneling atomic migration that some people had seen at energies below the activation energy. I don't know if this observation was a reality or an optical illusion, but I present the energy picture so that you'll have the beginnings of a description. The other thing I'd like to address is a question you may have had: why is there no zero-point-energy effect at the activated state? Such a ZPE difference would cancel the one at the ground state and leave you with no isotope effect on the activation energy. The simple answer is that all the data showing the isotope effect on activation energy, table A3-2, was for BCC metals. BCC metals have an activation energy barrier, but it is not caused by physical squeezing between atoms, as for an FCC metal, but by a lack of electrons. In a BCC metal there is no physical squeezing at the activated state, so you'd expect no ZPE there. This is not the case for FCC metals, like palladium, copper, or most stainless steels. For these metals there is a much smaller, or non-existent, isotope effect on ∆E*.

Robert Buxbaum, June 21, 2018. I should probably try to answer the original question about solids and fluids, too: why solids appear solid, and fluids not. My answer has to do with quantum mechanics: energies are quantized, and there is always a ∆E* for motion. Solid materials are those where the average time between diffusive jumps, τ, is measured in centuries. Thus, our ability to understand the world is based on the least understandable bit of physics.

Heraclitus and Parmenides time joke

From Existential Comics; Parmenides believed that nothing changed, nor could it.

For those who don't remember, Heraclitus believed that change was the essence of life, while Parmenides believed that nothing ever changes. It's a debate that exists to this day in physics, and also in religion (there is nothing new under the sun, etc.). In science, the view that no real change is possible is founded in Schrödinger's wave view of quantum mechanics.

Schrödinger’s wave equation, time dependent.

In Schrödinger's wave description of reality, every object or particle is considered a wave of probability. What appears to us as motion is nothing more than the wave oscillating back and forth in its potential field. Nothing quite has a position or velocity, only random interactions with other waves, and all of these are reversible. Because of the time reversibility of the equation, the system is conservative in the long term. The wave returns to where it was, and no entropy is created. Anything that happens will happen again, in reverse. See here for more on Schrödinger waves.

Thermodynamics is in stark contradiction to this quantum view. To thermodynamics, and to common observation, entropy goes ever upward, and nothing is reversible without outside intervention. Things break but don’t fix themselves. It’s this entropy increase that tells you that you are going forward in time. You know that time is going forward if you can, at will, drop an ice-cube into hot tea to produce lukewarm, diluted tea. If you can do the reverse, time is going backward. It’s a problem that besets Dr. Who, but few others.

One way that I’ve seen to get out of the general problem of quantum time is to assume the observed universe is a black hole or some other closed system, and take it as an issue of reference frame. As seen from the outside of a black hole (or a closed system without observation) time stops and nothing changes. Within a black hole or closed system, there is constant observation, and there is time and change. It’s not a great way out of the contradiction, but it’s the best I know of.


Predestination makes a certain physics and religious sense, it just doesn’t match personal experience very well.

The religion version of this problem is as follows: God, in most religions, has fore-knowledge. That is, He knows what will happen, and that presumes we have no free will. The problem with that is, without free-will, there can be no fair judgment, no right or wrong. There are a few ways out of this, and these lie behind many of the religious splits of the 1700s. A lot of the humor of Calvin and Hobbes comics comes because Calvin is a Calvinist, convinced of fatalistic predestination; Hobbes believes in free will. Most religions take a position somewhere in-between, but all have their problems.

Applying the black-hole model to God gives the following, alternative answer, one that isn’t very satisfying IMHO, but at least it matches physics. One might assume predestination for a God that is outside the universe — He sees only an unchanging system, while we, inside, see time and change and free will. One of the problems with this is it posits a distant creator who cares little for us and sees none of the details. A more positive view of time appears in Dr. Who. For Dr. Who, time is fluid, with some fixed points. Here’s my view of Dr. Who’s physics. Unfortunately, Dr. Who is fiction: attractive, but without basis. Time, as it were, is an issue for the ages.

Robert Buxbaum, Philosophical musings, Friday afternoon, June 30, 2017.

Highest temperature superconductor so far: H2S

The new champion of high-temperature superconductivity is a fairly common gas, hydrogen sulphide, H2S. By compressing it to 150 GPa, 1.5 million atm, a team led by Alexander Drozdov and M. Eremets of the Max Planck Institute coaxed superconductivity from H2S at temperatures as high as 203.5°K (-70°C). This is, by far, the warmest temperature of any superconductor discovered to date, and its main significance is to open the door for finding superconductivity in other, related hydrogen compounds — ideally at warmer temperatures and/or less-difficult pressures. Among the interesting compounds that will certainly get more attention: PH3, BH3, methyl mercaptan, and even water, either alone or in combination with H2S.


Relation between pressure and critical temperature for superconductivity, Tc, in H2S (filled squares) and D2S (open red). The magenta point was measured by magnetic susceptibility (Nature)

H2S superconductivity appears to follow the standard, Bardeen–Cooper–Schrieffer theory (B-C-S). According to this theory, superconductivity derives from the formation of pairs of opposite-spinning electrons (Cooper pairs), particularly in light, stiff, semiconductor materials. The light, positively charged lattice quickly moves inward to follow the motion of the electrons, see figure below. This synchronicity of motion is posited to create an effective bond between the electrons, enough to counter the natural repulsion, and allows the pairs to condense to a low-energy quantum state where they behave as if they were very large and very spread out. In this large, spread-out state, they slide through the lattice without interacting with the atoms or the few local vibrations and unpaired electrons found at low temperatures. From this theory, we would expect to find the highest-temperature superconductivity in the lightest lattices, materials like ice, boron hydride, magnesium hydride, or H2S, and we expect to find higher-temperature behavior in the hydrogen versions, H2O or H2S, than in the heavier, deuterium analogs, D2O or D2S. Experiments with H2S and D2S (shown at right) confirm this expectation, suggesting that H2S superconductivity is of the B-C-S type. Sorry to say, water has not shown any comparable superconductivity in experiments to date.

We have found high-temperature superconductivity in few of the materials that we would expect from B-C-S theory, and yet-higher temperatures are seen in many unexpected materials. While hydride materials generally do become superconducting, they mostly do so only at low temperatures. The highest-temperature B-C-S superconductor discovered until now was magnesium diboride, Tc = 39 K. More bothersome, the most-used superconductor, Nb3Sn, and the world record holder until now, the copper-oxide ceramics, Tc = 133 K at ambient pressure, 164 K at 35 GPa (350,000 atm), were not B-C-S. There is no version of B-C-S theory to explain why these materials behave as well as they do, or why pressure affects Tc in them. Pressure affects Tc in B-C-S materials by raising the energy of the small-scale vibrations that would be necessary to break the pairs. Why should pressure affect copper ceramics? No one knows.


The standard theory of superconductivity relies on Cooper pairs of electrons held together by lattice elasticity. The lighter and stiffer the lattice, the higher temperature the superconductivity.

The assumption is that high-pressure H2S acts as a sort of metallic hydrogen. From B-C-S theory, metallic hydrogen was predicted to be a room-temperature superconductor because the material would likely be a semi-metal, and thus a semiconductor at all temperatures. Hydrogen’s low atomic weight would mean that there would be no significant localized vibrations even at room temperature, suggesting room-temperature superconductivity. Sorry to say, we have yet to reach the astronomical pressures necessary to make metallic hydrogen, so we don’t know if this prediction is true. But now it seems H2S behaves nearly the same without requiring the extremely high pressures. It is thought that high-temperature H2S superconductivity occurs because H2S somewhat decomposes to H3S and S, and that the H3S provides a metallic-hydrogen-like operative lattice. The sulfur, it’s thought, just goes along for the ride. If this is the explanation, we might hope to find the same behaviors in water or phosphine, PH3, perhaps when mixed with H2S.

One last issue, I guess, is what this high-temperature superconductivity is good for. As far as H2S superconductivity goes, the simple answer is that it’s probably good for nothing: the pressures are too high. In general, though, high-temperature superconductors like Nb3Sn are important. They have been valuable for making high-strength magnets, and for prosaic applications like long-distance power transmission. The big magnets are used for submarine hunting, nuclear fusion, and (potentially) for levitation trains. See my essay on fusion here — it’s what I did my PhD on, in chemical engineering. Levitation trains, potentially, will revolutionize transport.

Robert Buxbaum, December 24, 2015. My company, REB Research, does a lot with hydrogen. Not that we make superconductors, but we make hydrogen generators and purifiers, and I try to keep up with the relevant hydrogen research.

Dr. Who’s Quantum reality viewed as diffusion

It’s very hard to get the meaning of life from science because reality is very strange. Further, science is mathematical, and the math relations for reality can be re-arranged. One arrangement of the terms will suggest one version of causality, while another will suggest a different causality. As Dr. Who points out, in non-linear, non-objective terms, there’s no causality, but rather a wibbly-wobbly ball of timey-wimey stuff.


Reality is a ball of timey-wimey stuff, Dr. Who.

To this end, I’ll provide my favorite way of looking at the timey-wimey way of the world by rearranging the equations of quantum mechanics into a sort of diffusion. It’s not the diffusion of something you’re quite familiar with, but rather of a timey-wimey wave-stuff referred to as Ψ. It’s part real and part imaginary, and the only relationship between Ψ and life is that the chance of finding something somewhere is proportional to Ψ*Ψ. The diffusion of this half-imaginary stuff is the underpinning of reality — if viewed in a certain way.

First, let’s consider the steady diffusion of a normal (un-quantum) material. If there is a lot of it, as when perfume wafts off of a prima donna, you can say that N = -D dc/dx, where N is the flux of perfume (molecules per minute per area), dc/dx is the concentration gradient (there’s more perfume near her than near you), and D is the diffusivity, a number related to the mobility of the perfume molecules.

We can further generalize the diffusion of an ordinary material to a case where the concentration varies with time because of reaction or a difference between the in-rate and the out-rate. With reaction added as a source term, and noting that accumulation equals -dN/dx, we can write: dc/dt = reaction - dN/dx = reaction + D d²c/dx². For a first-order reaction, for example radioactive decay, reaction = -ßc, and

dc/dt = -ßc + D d²c/dx²               (1)

where ß is the radioactive decay constant of the material whose concentration is c.
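Equation (1) can be integrated numerically by explicit finite differences. The sketch below uses made-up values for D and ß, and tracks a spike of decaying, diffusing material:

```python
# Explicit finite-difference integration of equation (1):
# dc/dt = -ßc + D d²c/dx².  D and ß are made-up, illustrative values.
D = 1.0e-5      # diffusivity, cm²/s
beta = 1.0e-3   # first-order decay constant, 1/s
nx, dx, dt = 51, 0.1, 0.1   # grid points, cm, s (stable: D·dt/dx² << 0.5)

c = [0.0] * nx
c[nx // 2] = 1.0            # start with a spike of material in the middle

for _ in range(1000):       # march forward 100 s
    new_c = c[:]
    for j in range(1, nx - 1):          # walls held at c = 0
        lap = (c[j - 1] - 2 * c[j] + c[j + 1]) / dx**2
        new_c[j] = c[j] + dt * (-beta * c[j] + D * lap)
    c = new_c

total = sum(c) * dx
print(f"material left after 100 s: {total:.4f}")  # ≈ 0.1·exp(-0.1)
```

With this small a diffusivity almost nothing reaches the walls in 100 s, so the total material should simply decay, to about 0.1·exp(-0.1) ≈ 0.0905.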

Viewed in a certain way, the most relevant equation for reality, the time-dependent Schrödinger wave equation (semi-derived here), fits into the same diffusion-reaction form:

dΨ/dt = -2iπV/h Ψ + hi/4πm d²Ψ/dx²               (2)

Instead of reality involving the motion of a real material (perfume, radioactive radon, etc.) with a real concentration, c, in this relation the material cannot be sensed directly, and the concentration, Ψ, is semi-imaginary. Here, h is Planck’s constant, i is the imaginary number, √-1, m is the mass of the real material, and V is the potential energy. When dealing with reactions or charged materials, it’s relevant that V will vary with position (e.g. electrons’ energy is lower when they are near protons). The diffusivity term here is imaginary, hi/4πm, but that’s OK; Ψ is part imaginary, and we’d expect that potential energy is something of a destroyer of Ψ: the likelihood of finding something at a spot goes down where the energy is high.

The form of this diffusion equation is linear, a mathematical term for equations where any solution that works for Ψ will also work for 2Ψ. Generally speaking, linear equations have exp() terms in their solutions, and that’s especially likely here, as the only place where you see a time term is on the left. For most cases we can say that

Ψ = ψ exp(-2iπEt/h)               (3)

where ψ is a function of nothing but x (space), and E is the energy of the thing whose behavior is described by Ψ. If you take the derivative of equation 3 with respect to time, t, you get

dΨ/dt = ψ (-2iπE/h) exp(-2iπEt/h) = (-2iπE/h)Ψ.               (4)

If you insert this into equation 2, you’ll notice that the form of the first term is now identical to the second, with energy appearing identically in both terms. Divide now by exp(-2iπEt/h), and you get the following equation:

(E-V) ψ = -h²/8π²m d²ψ/dx²                      (5)

where ψ can be thought of as the physical concentration in space of the timey-wimey stuff. ψ is still wibbly-wobbly, but no longer timey-wimey. Now ψ-squared is the likelihood of finding the stuff somewhere at any time, and E is the energy of the thing. For most things in normal conditions, E is quantized and equals approximately kT. That is, the E of the thing typically equals a quantized energy state that’s near Boltzmann’s constant times temperature.
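If you like, the substitution of equation 3 into equation 2 can be checked symbolically. Here is a short sympy sketch verifying that equation 2 reduces exactly to equation 5 once the time factor is divided out:

```python
import sympy as sp

# Symbolic check that substituting Ψ = ψ(x) exp(-2iπEt/h) into equation 2
# reproduces equation 5.  Symbols as defined in the text.
x, t = sp.symbols('x t', real=True)
E, V, h, m = sp.symbols('E V h m', positive=True)
psi = sp.Function('psi')(x)

Psi = psi * sp.exp(-2 * sp.pi * sp.I * E * t / h)

# Equation 2 as a residual: dΨ/dt + (2iπV/h)Ψ - (hi/4πm) d²Ψ/dx² = 0
eq2 = (sp.diff(Psi, t) + (2 * sp.pi * sp.I * V / h) * Psi
       - (sp.I * h / (4 * sp.pi * m)) * sp.diff(Psi, x, 2))

# Equation 5 as a residual: (E-V)ψ + (h²/8π²m) d²ψ/dx² = 0
eq5 = (E - V) * psi + (h**2 / (8 * sp.pi**2 * m)) * sp.diff(psi, x, 2)

# eq2 should equal eq5 times -(2iπ/h) and the time factor, exactly:
factor = -2 * sp.pi * sp.I / h * sp.exp(-2 * sp.pi * sp.I * E * t / h)
print(sp.expand(eq2 - factor * eq5))  # prints 0: the substitution is exact
```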

You now want to check that the approximation in equations 3-5 was legitimate. You do this by checking whether the length-scale implicit in exp(-2iπEt/h) is small relative to the length-scales of the action. If it is (and it usually is), you are free to solve for ψ at any E and V using normal mathematics, by analytic or digital means, for example this way. ψ will be wibbly-wobbly but won’t be timey-wimey. That is, the space behavior of the thing will be peculiar, with the item appearing in forbidden locations, but there won’t be time reversal. For time reversal, you need small space features (like here) or entanglement.

Equation 5 can be considered a simple steady-state diffusion equation. The stuff whose concentration is ψ is created wherever E is greater than V, and is destroyed wherever V is greater than E. The stuff then continuously diffuses from the former area to the latter, establishing a time-independent concentration profile. E is quantized (can only take some specific values) since matter can never be created or destroyed, and it is only at specific values of E that this balance holds in equation 5. For a particle in a flat box, E and ψ are found, typically, by realizing that the form of ψ must be a sin function (and ignoring an infinity). For more complex potential energy surfaces, it’s best to use a matrix solution for ψ along with non-continuous calculus. This avoids the infinity, and is a lot more flexible besides.
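As a sketch of that matrix approach, here is a finite-difference eigenvalue solution of equation 5 for a particle in a flat box, in assumed units where h/2π = m = 1 (so the analytic levels are E_n = n²π²/2):

```python
import numpy as np

# Matrix (finite-difference) solution of equation 5 for a particle in a
# flat box: V = 0 inside, walls at x = 0 and x = 1 where ψ = 0.
# Assumed units with h/2π = m = 1, so equation 5 reads E ψ = -(1/2) ψ''.
n = 500                      # interior grid points
dx = 1.0 / (n + 1)

# Tridiagonal second-derivative matrix; the omitted wall points enforce ψ = 0.
d2 = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1)) / dx**2

H = -0.5 * d2                # the operator whose eigenvalues are the E's
energies, modes = np.linalg.eigh(H)   # eigh returns E's in ascending order

for k in range(3):
    exact = (k + 1)**2 * np.pi**2 / 2
    print(f"E_{k + 1}: matrix {energies[k]:.4f}, analytic {exact:.4f}")
```

The lowest few matrix eigenvalues land within a fraction of a percent of the analytic sin-function answers, and the same code handles any tabulated V(x) by adding it to the diagonal of H.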

When you detect a material in some spot, you can imagine that the space-function ψ collapses, but even that isn’t clear, as you can never know the position and velocity of a thing simultaneously, so it doesn’t collapse all that much. And as for what the stuff is that diffuses and has concentration ψ, no-one knows, but it behaves like a stuff. And as to why it diffuses, perhaps it’s jiggled by unseen photons. I don’t know if this is what happens, but it’s a way I often choose to imagine reality — a moving, unseen material with real and imaginary (spiritual?) parts, whose concentration, ψ, is related to experience, but not directly experienced.

This is not the only way the equations can be rearranged. Another way of thinking of things is as the sum of path integrals — an approach that appears to me as a many-worlds version, with fixed points in time (another Dr. Who feature). In this view, every object takes every path possible between these points, and reality is the sum of all the versions, including some that have time reversals. Richard Feynman explains this path-integral approach here. If it doesn’t make more sense than my version, that’s OK. There is no version of the quantum equations that will make total, rational sense. All the true ones are mathematically equivalent — totally equal, but differing in “meaning”. That is, if you were to impose meaning on the math terms, the meaning would be totally different. That’s not to say that all explanations are equally valid — most versions are totally wrong, but there are many, equally valid math versions to fit many, equally valid religious or philosophic world views. The various religions, I think, are uncomfortable with having so many completely different views being totally equal because (as I understand it) each wants exclusive ownership of truth. Since this is never so for math, I claim religion is the opposite of science. Religion is trying to find The Meaning of life, and science is trying to match experiential truth — and ideally useful truth; knowing the meaning of life isn’t that useful in a knife fight.

Dr. Robert E. Buxbaum, July 9, 2014. If nothing else, you now perhaps understand Dr. Who more than you did previously. If you liked this, see here for a view of political happiness in terms of the thermodynamics of free-energy minimization.

If hot air rises, why is it cold on mountain-tops?

This is a child’s question that’s rarely answered to anyone’s satisfaction. To answer it well requires college-level science, and by college the child has usually been dissuaded from asking anything scientific that would likely embarrass the teacher — which is to say, from asking most anything. By a good answer, I mean one that provides both a mathematical, checkable prediction of the temperature you’d expect to find on mountain tops, and one that also gives a feel for why it should be so. I’ll try to provide this here, as previously when explaining “why is the sky blue.” A word of warning: real science involves mathematics, something that’s often left behind, perhaps in an effort to build self-esteem. If I do a poor job, please text me back: “if hot air rises, what’s keeping you down?”

As a touchy-feely answer, please note that all materials have internal energy, generally associated with the kinetic plus potential energy of the molecules. It enters whenever a material is heated or has work done on it, and for gases, to good approximation, it equals the heat capacity of the gas times its temperature. For air, this is about 7 cal/mol°K times the temperature in degrees Kelvin. The average air at sea-level is taken to be at 1 atm, or 101,325 Pascals, and 15.02°C, or 288.15°K; the internal energy of this air is thus 288.15 x 7 = 2017 cal/mol = 8440 J/mol. The internal energy of the air will decrease as the air rises, and the temperature drops for reasons I will explain below. Most diatomic gases have a heat capacity of 7 cal/mol°K, a fact that is only explained by quantum mechanics; if not for quantum mechanics, the heat capacities of diatomic gases would be about 9 cal/mol°K.

Let’s consider a volume of this air at this standard condition, and imagine that it is held within a weightless balloon, or plastic bag. As we pull that air up, by pulling up the bag, the bag starts to expand because the pressure is lower at high altitude (air pressure is just the weight of the air above). No heat is exchanged with the surrounding air because our air will always be about as warm as its surroundings; or, if you like, you can imagine that the weightless balloon prevents heat exchange. In either case, the molecules lose energy as the bag expands because they always collide with an outwardly moving wall. Alternately, you can say that the air in the bag is doing work on the exterior air — expansion is work — but we are putting no work into the air, as it takes no work to lift this air. The buoyancy of the air in our balloon is always about that of the surrounding air, or so we’ll assume for now.

A classic, difficult way to calculate the temperature change with altitude is to calculate the work being done by the air in the rising balloon. Work done is force times distance: w = ∫f dz, and this work should equal the effective cooling, since heat and work are interchangeable. There’s an integral sign here to account for the fact that force is proportional to pressure, and the air pressure decreases as the balloon goes up. We now note that w = ∫f dz = -∫P dV because pressure, P, is force per unit area, and volume, V, is area times distance. The minus sign is because the work is being done by the air, not done on the air — it involves a loss of internal energy. Sorry to say, the temperature and pressure of the air keep changing with volume and altitude, so it’s hard to solve the integral, but there is a simple approach based on entropy, S.


Les Droites Mountain, in the Alps, at the intersect of France Italy and Switzerland is 4000 m tall. The top is generally snow-covered.

I discussed entropy last month, and showed it was a property of state, and further, that for any reversible path, ∆S= (Q/T)rev. That is, the entropy change for any reversible process equals the heat that enters divided by the temperature. Now, we expect the balloon rise is reversible, and since we’ve assumed no heat transfer, Q = 0. We thus expect that the entropy of air will be the same at all altitudes. Now entropy has two parts, a temperature part, Cp ln T2/T1 and a pressure part, R ln P2/P1. If the total ∆S=0 these two parts will exactly cancel.

Consider that at 4000 m, the height of Les Droites, a mountain in the Mont Blanc range, the typical pressure is 61,660 Pa, about 60.85% of sea-level pressure (101,325 Pa). If the air were reduced to this pressure at constant temperature, (∆S)T = -R ln P2/P1, where R is the gas constant, about 2 cal/mol°K, and P2/P1 = 0.6085; (∆S)T = -2 ln 0.6085. Since the total entropy change is zero, this part must equal Cp ln T2/T1, where Cp is the heat capacity of air at constant pressure, about 7 cal/mol°K for all diatomic gases, and T1 and T2 are the temperatures (Kelvin) of the air at sea level and 4000 m. (These equations are derived in most thermodynamics texts. The short version is that the entropy change from compression at constant T equals the work at constant temperature divided by T: ∫P/T dV = ∫R/V dV = R ln V2/V1 = -R ln P2/P1. Similarly, the entropy change at constant pressure = ∫dQ/T where dQ = Cp dT. This component of entropy is thus ∫dQ/T = Cp ∫dT/T = Cp ln T2/T1.) Setting the sum to equal zero, we can say that Cp ln T2/T1 = R ln 0.6085, or that

T2 = T1 (0.6085)^(R/Cp)

T2 = T1 (0.6085)^(2/7), where 0.6085 is the pressure ratio at 4000 m, and because, for air and most diatomic gases, R/Cp = 2/7 to very good approximation, matching the prediction from quantum mechanics.

From the above, we calculate T2 = 288.15 x 0.8676 = 250.0°K, or -23.15°C. This is cold enough to provide snow on Les Droites nearly year round, and it’s pretty accurate. The typical temperature at 4000 m is 262.17 K (-11°C). That’s 26°C colder than at sea-level, and only 12°C warmer than we’d predicted.
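The whole calculation fits in a few lines; this sketch just repeats the arithmetic with the rounded values used in the text:

```python
# The iso-entropic altitude estimate, with the rounded values from the text.
Cp = 7.0                     # heat capacity of air, cal/mol·K
R = 2.0                      # gas constant (rounded), cal/mol·K
T1 = 288.15                  # sea-level temperature, K
P_ratio = 61660 / 101325     # pressure at 4000 m over sea-level pressure

# ΔS = 0  ⇒  Cp ln(T2/T1) = R ln(P2/P1)  ⇒  T2 = T1 (P2/P1)^(R/Cp)
T2 = T1 * P_ratio ** (R / Cp)
print(f"pressure ratio: {P_ratio:.4f}")           # 0.6085
print(f"T at 4000 m: {T2:.1f} K = {T2 - 273.15:.1f} °C")
```

Running it reproduces the 250 K (-23°C) prediction, and swapping in other pressure ratios gives the temperature at any altitude where the no-heat-exchange assumption holds.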

There are three weak assumptions behind the 12°C error in our prediction: (1) that the air that rises is no hotter than the air that does not, (2) that the air is not heated by radiation from the sun or earth, and (3) that there is no heat exchange with the surrounding air, e.g. from rain or snow formation. The last of these is thought to be the largest source of error, but it’s still not large enough to cause serious problems.


Snow on Kilimanjaro, Tanzania 2013. If global warming models were true, the ground should be 4°C warmer than 100 years ago, and the air at this altitude, about 7°C (12°F) warmer; and the snow should be gone.

You can use this approach, with different exponents, to estimate the temperature at the center of Jupiter, or at the center of neutron stars. This iso-entropic calculation is the model that’s used here, though it’s understood that it may be off by a fair percentage. You can also ask questions about global warming: increased CO2 at this level is supposed to cause extreme heating at 4000 m, enough to heat the earth below by 4°C/century or more. As it happens, the temperature and snow cover on Les Droites and other Alp ski areas have been studied carefully for many decades; they are not warming as best we can tell (here’s a discussion). By all rights, Mt Blanc should be Mt Green by now; no one knows why. The earth, too, seems to have stopped warming. My theory: clouds.

Robert Buxbaum, May 10, 2014. Science requires you check your theory for internal and external weakness. Here’s why the sky is blue, not green.