Tag Archives: Quantum mechanics

Heraclitus and Parmenides time joke

From Existential Comics; Parmenides believed that nothing changed, nor could it.

For those who don’t remember, Heraclitus believed that change was the essence of life, while  Parmenides believed that nothing ever changes. It’s a debate that exists to this day in physics, and also in religion (there is nothing new under the sun, etc.). In science, the view that no real change is possible is founded in Schrödinger’s wave view of quantum mechanics.

Schrödinger’s wave equation, time dependent.

In Schrödinger’s wave description of reality, every object or particle is considered a wave of probability. What appears to us as motion is nothing more than the wave oscillating back and forth in its potential field. Nothing quite has a position or velocity, only random interactions with other waves, and all of these are reversible. Because the equation is time-reversible, the system is conservative in the long run: the wave returns to where it was, and no entropy is created. Anything that happens will happen again, in reverse. See here for more on Schrödinger waves.

Thermodynamics is in stark contradiction to this quantum view. To thermodynamics, and to common observation, entropy goes ever upward, and nothing is reversible without outside intervention. Things break but don’t fix themselves. It’s this entropy increase that tells you that you are going forward in time. You know that time is going forward if you can, at will, drop an ice-cube into hot tea to produce lukewarm, diluted tea. If you can do the reverse, time is going backward. It’s a problem that besets Dr. Who, but few others.

One way that I’ve seen to get out of the general problem of quantum time is to assume the observed universe is a black hole or some other closed system, and take it as an issue of reference frame. As seen from the outside of a black hole (or a closed system without observation) time stops and nothing changes. Within a black hole or closed system, there is constant observation, and there is time and change. It’s not a great way out of the contradiction, but it’s the best I know of.

Predestination makes a certain physics and religious sense, it just doesn’t match personal experience very well.

The religious version of this problem is as follows: God, in most religions, has foreknowledge. That is, He knows what will happen, and that presumes we have no free will. The problem with that is, without free will, there can be no fair judgment, no right or wrong. There are a few ways out of this, and they lie behind many of the religious splits of the 1700s. A lot of the humor of the Calvin and Hobbes comics comes from Calvin being a Calvinist, convinced of fatalistic predestination, while Hobbes believes in free will. Most religions take a position somewhere in between, but all have their problems.

Applying the black-hole model to God gives the following alternative answer, one that isn’t very satisfying IMHO, but at least it matches physics. One might assume predestination for a God who is outside the universe: He sees only an unchanging system, while we, inside, see time, change, and free will. One of the problems with this is that it posits a distant creator who cares little for us and sees none of the details. A more positive view of time appears in Dr. Who. For Dr. Who, time is fluid, with some fixed points. Here’s my view of Dr. Who’s physics. Unfortunately, Dr. Who is fiction: attractive, but without basis. Time, as it were, is an issue for the ages.

Robert Buxbaum, Philosophical musings, Friday afternoon, June 30, 2017.

Highest temperature superconductor so far: H2S

The new champion of high-temperature superconductivity is a fairly common gas, hydrogen sulphide, H2S. By compressing it to 150 GPa, 1.5 million atm, a team led by Alexander Drozdov and M. Eremets of the Max Planck Institute coaxed superconductivity from H2S at temperatures as high as 203.5°K (-70°C). This is, by far, the warmest temperature of any superconductor discovered to date, and its main significance is to open the door to finding superconductivity in other, related hydrogen compounds, ideally at warmer temperatures and/or less-difficult pressures. Among the interesting compounds that will certainly get more attention: PH3, BH3, methyl mercaptan, and even water, either alone or in combination with H2S.

Relation between pressure and critical temperature for superconductivity, Tc, in H2S (filled squares) and D2S (open red). The magenta point was measured by magnetic susceptibility (Nature)

H2S superconductivity appears to follow the standard Bardeen–Cooper–Schrieffer theory (B-C-S). According to this theory, superconductivity derives from the formation of pairs of opposite-spinning electrons (Cooper pairs), particularly in light, stiff, semiconductor materials. The light, positively charged lattice quickly moves inward to follow the motion of the electrons, see the figure below. This synchronicity of motion is posited to create an effective bond between the electrons, enough to counter their natural repulsion, and allows the pairs to condense to a low-energy quantum state where they behave as if they were very large and very spread out. In this large, spread-out state, they slide through the lattice without interacting with the atoms or with the few local vibrations and unpaired electrons found at low temperatures. From this theory, we would expect to find the highest-temperature superconductivity in the lightest lattices, materials like ice, boron hydride, magnesium hydride, or H2S, and we expect to find higher-temperature behavior in the hydrogen versions, H2O or H2S, than in the heavier deuterium analogs, D2O or D2S. Experiments with H2S and D2S (shown at right) confirm this expectation, suggesting that H2S superconductivity is of the B-C-S type. Sorry to say, water has not shown any comparable superconductivity in experiments to date.

We have found high-temperature superconductivity in few of the materials we would expect from B-C-S theory, and yet-higher temperatures are seen in many unexpected materials. While hydride materials generally do become superconducting, they mostly do so only at low temperatures. The highest-temperature B-C-S superconductor discovered until now was magnesium boride, Tc = 39 K. More bothersome, the most-used superconductor, Nb-Sn, and the world record holder until now, the copper-oxide ceramics (Tc = 133 K at ambient pressure; 164 K at 35 GPa, 350,000 atm), are not B-C-S. There is no version of B-C-S theory to explain why these materials behave as well as they do, or why pressure affects Tc in them. Pressure affects Tc in B-C-S materials by raising the energy of the small-scale vibrations that would otherwise break the pairs. Why should pressure affect copper ceramics? No one knows.

The standard theory of superconductivity relies on Cooper pairs of electrons held together by lattice elasticity. The lighter and stiffer the lattice, the higher temperature the superconductivity.

The assumption is that high-pressure H2S acts as a sort of metallic hydrogen. From B-C-S theory, metallic hydrogen was predicted to be a room-temperature superconductor because the material would likely be a semi-metal, and thus a semiconductor at all temperatures. Hydrogen’s low atomic weight would mean that there would be no significant localized vibrations even at room temperature, suggesting room-temperature superconductivity. Sorry to say, we have yet to reach the astronomical pressures necessary to make metallic hydrogen, so we don’t know if this prediction is true. But now it seems H2S behaves nearly the same way without requiring such extreme pressures. It is thought that high-temperature H2S superconductivity occurs because H2S partially decomposes to H3S and S, and that the H3S provides a metallic-hydrogen-like operative lattice. The sulfur, it’s thought, just goes along for the ride. If this is the explanation, we might hope to find the same behavior in water or phosphine, PH3, perhaps when mixed with H2S.

One last issue, I guess, is what this high-temperature superconductivity is good for. As far as H2S superconductivity goes, the simple answer is that it’s probably good for nothing; the pressures are too high. In general, though, high-temperature superconductors like NbSn are important. They have been valuable for making high-strength magnets, and for prosaic applications like long-distance power transmission. The big magnets are used for submarine hunting, nuclear fusion, and (potentially) levitation trains. See my essay on fusion here; it’s what I did my PhD on, in chemical engineering. Levitation trains, potentially, will revolutionize transport.

Robert Buxbaum, December 24, 2015. My company, REB Research, does a lot with hydrogen. Not that we make superconductors, but we make hydrogen generators and purifiers, and I try to keep up with the relevant hydrogen research.

Dr. Who’s Quantum reality viewed as diffusion

It’s very hard to get the meaning of life from science because reality is very strange. Further, science is mathematical, and the math relations for reality can be rearranged. One arrangement of the terms will suggest one version of causality, while another will suggest a different causality. As Dr. Who points out, in non-linear, non-objective terms, there’s no causality, but rather a wibbly-wobbly ball of timey-wimey stuff.

Reality is a ball of timey-wimey stuff, Dr. Who.

To this end, I’ll provide my favorite way of looking at the timey-wimey way of the world by rearranging the equations of quantum mechanics into a sort of diffusion. It’s not the diffusion of something you’re quite familiar with, but rather of a timey-wimey wave-stuff referred to as Ψ. It’s part real and part imaginary, and the only relationship between Ψ and life is that the chance of finding something somewhere is proportional to Ψ*Ψ, the product of Ψ and its complex conjugate. The diffusion of this half-imaginary stuff is the underpinning of reality, if viewed in a certain way.

First let’s consider the steady diffusion of a normal (un-quantum) material. If there is a lot of it, like when there’s perfume off of a prima donna, you can say that N = -D dc/dx where N is the flux of perfume (molecules per minute per area), dc/dx is a concentration gradient (there’s more perfume near her than near you), and D is a diffusivity, a number related to the mobility of those perfume molecules. 

We can further generalize the diffusion of an ordinary material to the case where concentration varies with time because of reaction, or because of a difference between the in-rate and the out-rate. With reaction added as a source or sink term, we can write: dc/dt = reaction - dN/dx = reaction + D d²c/dx². For a first-order reaction, for example radioactive decay, reaction = -ßc, and

dc/dt = -ßc + D d²c/dx²               (1)

where ß is the radioactive decay constant of the material whose concentration is c.
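To see equation 1 in action, here is a minimal numerical sketch (mine, not from the original post) that steps it forward with an explicit finite-difference scheme; the diffusivity, decay constant, and initial Gaussian puff of material are made-up illustrative values.

```python
import numpy as np

# Explicit finite-difference integration of equation (1):
#   dc/dt = -beta*c + D * d2c/dx2
# All parameter values below are illustrative, not from the post.
D, beta = 1.0e-5, 2.0e-3          # diffusivity (m^2/s), decay constant (1/s)
L, nx = 1.0, 201                  # domain length (m), number of grid points
dx = L / (nx - 1)
dt = 0.25 * dx**2 / D             # time step small enough for explicit stability

x = np.linspace(0.0, L, nx)
c = np.exp(-((x - 0.5) / 0.05) ** 2)   # initial concentration: a Gaussian puff

for _ in range(5000):
    d2c = np.zeros_like(c)
    d2c[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2   # second derivative, d2c/dx2
    c += dt * (-beta * c + D * d2c)                      # one Euler step of equation (1)
    c[0], c[-1] = c[1], c[-2]                            # no-flux walls at both ends

print("material remaining:", c.sum() * dx)   # shrinks by the ß term, spreads by the D term
```

The puff spreads out because of the D term and decays away because of the ß term, which is all equation 1 says.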

Viewed in a certain way, the most relevant equation for reality, the time-dependent Schrödinger wave equation (semi-derived here), fits into the same diffusion-reaction form:

dΨ/dt = -2iπV/h Ψ + hi/4πm d²Ψ/dx²               (2)

Instead of reality involving the motion of a real material (perfume, radioactive radon, etc.) with a real concentration, c, in this relation the material can not be sensed directly, and the concentration, Ψ, is semi-imaginary. Here, h is Planck’s constant, i is the imaginary number, √-1, m is the mass of the real material, and V is the potential energy. When dealing with reactions or charged materials, it’s relevant that V will vary with position (e.g. an electron’s energy is lower when it is near a proton). The diffusivity term here is imaginary, hi/4πm, but that’s OK: Ψ is part imaginary too, and we’d expect potential energy to act as something of a destroyer of Ψ; the likelihood of finding something at a spot goes down where the energy is high.
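Putting the two equations side by side makes the analogy explicit; this is just my summary of the comparison above, in the same notation:

```latex
\frac{\partial c}{\partial t} = -\beta\,c + D\,\frac{\partial^2 c}{\partial x^2}
\qquad\longleftrightarrow\qquad
\frac{\partial \Psi}{\partial t} = -\frac{2\pi i V}{h}\,\Psi + \frac{i h}{4\pi m}\,\frac{\partial^2 \Psi}{\partial x^2},
\qquad\text{so}\qquad
\beta \leftrightarrow \frac{2\pi i V}{h},
\qquad D \leftrightarrow \frac{i h}{4\pi m}.
```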

The form of this diffusion is linear, a mathematical term that refers to equations where a solution that works for Ψ will also work for 2Ψ. Generally speaking, linear equations have exp() terms in their solutions, and that’s especially likely here as the only place a time derivative appears is on the left. For most cases we can say that

Ψ = ψ exp(-2iπEt/h)               (3)

where ψ is not a function of anything but x (space), and E is the energy of the thing whose behavior is described by Ψ. If you take the derivative of equation 3 with respect to time, t, you get

dΨ/dt = ψ (-2iπE/h) exp(-2iπEt/h) = (-2iπE/h)Ψ.               (4)

If you insert this into equation 2, you’ll notice that the form of the first term is now identical to the second, with energy appearing identically in both terms. Divide now by exp(-2iπEt/h), and you get the following equation:

(E-V) ψ = -h²/8π²m d²ψ/dx²                      (5)

where ψ can be thought of as the physical concentration in space of the timey-wimey stuff. ψ is still wibbly-wobbly, but no longer timey-wimey. Now ψ squared is the likelihood of finding the stuff somewhere at any time, and E is the energy of the thing. For most things in normal conditions, E is quantized and equals approximately kT. That is, the E of the thing typically equals a quantized energy state that’s roughly Boltzmann’s constant times temperature.

You now want to check that the approximation in equations 3-5 was legitimate. You do this by checking whether the time-scale implicit in exp(-2iπEt/h) is small relative to the time-scales of the action. If it is (and it usually is), you are free to solve for ψ at any E and V using normal mathematics, by analytic or digital means, for example this way. ψ will be wibbly-wobbly but won’t be timey-wimey. That is, the space behavior of the thing will be peculiar, with the item showing up in forbidden locations, but there won’t be time reversal. For time reversal, you need small space features (like here) or entanglement.

Equation 5 can be considered a simple steady-state diffusion equation. The stuff whose concentration is ψ is created wherever E is greater than V, and is destroyed wherever V is greater than E. The stuff then continuously diffuses from the former areas to the latter, establishing a time-independent concentration profile. E is quantized (it can take only certain specific values) since matter can never be created or destroyed, and it is only at specific values of E that this balance works out in equation 5. For a particle in a flat box, E and ψ are found, typically, by realizing that the form of ψ must be a sine function (and ignoring an infinity). For more complex potential energy surfaces, it’s best to use a matrix solution for ψ along with non-continuous calculus, as sketched below. This avoids the infinity, and is a lot more flexible besides.
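Here is a minimal sketch of that matrix approach (my illustration, not from the original post): discretize equation 5 on a grid, and the allowed energies E appear as eigenvalues of a tridiagonal matrix. The example uses a flat box and works in units where h/2π = m = 1; the grid size is arbitrary.

```python
import numpy as np

# Matrix (finite-difference) solution of equation (5):
#   (E - V) psi = -(h^2 / 8 pi^2 m) d2psi/dx2
# In units where h/2pi = m = 1, this reads  -(1/2) psi'' + V psi = E psi.
nx, L = 400, 1.0
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
V = np.zeros(nx)                        # flat potential: a particle in a box

# Hamiltonian matrix: second-difference operator for the kinetic term, V on the diagonal.
diag = 1.0 / dx**2 + V                  # -(1/2)(-2/dx^2) + V
off  = -0.5 / dx**2 * np.ones(nx - 1)   # -(1/2)(1/dx^2)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

E, psi = np.linalg.eigh(H)              # quantized energies and their wave functions
print("lowest three E (numerical):", E[:3])
print("particle-in-a-box formula: ", [(n * np.pi / L) ** 2 / 2 for n in (1, 2, 3)])
# The two agree to about a percent; swap in any V(x) array for a more complex potential.
```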

When you detect the material in some spot, you can imagine that the space-function ψ collapses, but even that isn’t clear, since you can never know the position and velocity of a thing simultaneously, so it doesn’t collapse all that much. And as for what the stuff is that diffuses and has concentration ψ, no one knows, but it behaves like a stuff. And as to why it diffuses, perhaps it’s jiggled by unseen photons. I don’t know if this is what happens, but it’s a way I often choose to imagine reality: a moving, unseen material with real and imaginary (spiritual?) parts, whose concentration, ψ, is related to experience, but not directly experienced.

This is not the only way the equations can be rearranged. Another way of thinking of things is as a sum of path integrals, an approach that appears to me as a many-worlds version, with fixed points in time (another Dr. Who feature). In this view, every object takes every path possible between these points, and reality is the sum of all the versions, including some that have time reversals. Richard Feynman explains this path-integral approach here. If it doesn’t make more sense than my version, that’s OK. There is no version of the quantum equations that will make total, rational sense. All the true ones are mathematically equivalent, totally equal, but they differ in the “meaning”. That is, if you were to impose meaning on the math terms, the meaning would be totally different. That’s not to say that all explanations are equally valid; most versions are totally wrong, but there are many, equally valid math versions to fit many, equally valid religious or philosophic world views. The various religions, I think, are uncomfortable with having so many completely different views being totally equal because (as I understand it) each wants exclusive ownership of truth. Since this is never so for math, I claim religion is the opposite of science. Religion is trying to find The Meaning of life, and science is trying to match experiential truth, ideally useful truth; knowing the meaning of life isn’t that useful in a knife fight.

Dr. Robert E. Buxbaum, July 9, 2014. If nothing else, you now perhaps understand Dr. Who more than you did previously. If you liked this, see here for a view of political happiness in terms of the thermodynamics of free-energy minimization.

If hot air rises, why is it cold on mountain-tops?

This is a child’s question that’s rarely answered to anyone’s satisfaction. To answer it well requires college-level science, and by college the child has usually been dissuaded from asking anything scientific that might embarrass the teacher, which is to say, from asking most anything. By a good answer, I mean one that provides both a mathematical, checkable prediction of the temperature you’d expect to find on mountain tops, and a feel for why it should be so. I’ll try to provide both here, as I did previously when explaining “why is the sky blue.” A word of warning: real science involves mathematics, something that’s often left behind, perhaps in an effort to build self-esteem. If I do a poor job, please text me back: “if hot air rises, what’s keeping you down?”

As a touchy-feely answer, please note that all materials have internal energy, generally associated with the kinetic plus potential energy of the molecules. It enters whenever a material is heated or has work done on it, and for gases, to good approximation, it equals the heat capacity of the gas times its temperature. For air, this is about 7 cal/mol°K times the temperature in degrees Kelvin. The average air at sea level is taken to be at 1 atm, or 101,300 Pascals, and 15.02°C, or 288.15°K; the internal energy of this air is thus 288.15 x 7 = 2017 cal/mol = 8420 J/mol. The internal energy of the air will decrease as the air rises, and the temperature drops for reasons I will explain below. Most diatomic gases have a heat capacity of 7 cal/mol°K, a fact that is only explained by quantum mechanics; if not for quantum mechanics, the heat capacities of diatomic gases would be about 9 cal/mol°K.

Let’s consider a volume of this air at this standard condition, and imagine that it is held within a weightless balloon, or plastic bag. As we pull that air up, by pulling up the bag, the bag starts to expand because the pressure is lower at high altitude (air pressure is just the weight of the air above). No heat is exchanged with the surrounding air because our air will always be about as warm as its surroundings, or, if you like, you can imagine that the weightless balloon prevents it. In either case, the molecules lose energy as the bag expands because they always collide with an outwardly moving wall. Alternately, you can say that the air in the bag is doing work on the exterior air (expansion is work), but we are putting no work into the air, as it takes no work to lift it. The buoyancy of the air in our balloon is always about that of the surrounding air, or so we’ll assume for now.

A classic, difficult way to calculate the temperature change with altitude is to calculate the work being done by the air in the rising balloon. Work done is force times distance, w = ∫f dz, and this work should equal the effective cooling since heat and work are interchangeable. There’s an integral sign here to account for the fact that the force is proportional to pressure, and the air pressure decreases as the balloon goes up. We now note that w = ∫f dz = -∫P dV because pressure, P, is force per unit area, and volume, V, is area times distance. The minus sign is there because the work is being done by the air, not done on the air; it involves a loss of internal energy. Sorry to say, the temperature and pressure of the air keep changing with volume and altitude, so it’s hard to solve the integral, but there is a simple approach based on entropy, S.

Les Droites Mountain, in the Alps, at the intersection of France, Italy, and Switzerland, is 4000 m tall. The top is generally snow-covered.

I discussed entropy last month, and showed it was a property of state, and further, that for any reversible path, ∆S = (Q/T)rev. That is, the entropy change for any reversible process equals the heat that enters divided by the temperature. Now, we expect the balloon’s rise to be reversible, and since we’ve assumed no heat transfer, Q = 0. We thus expect that the entropy of the air will be the same at all altitudes. Now, entropy has two parts, a temperature part, Cp ln(T2/T1), and a pressure part, -R ln(P2/P1). If the total ∆S = 0, these two parts exactly cancel.

Consider that at 4000 m, the height of Les Droites, a mountain in the Mont Blanc range, the typical pressure is 61,660 Pa, about 60.85% of sea-level pressure (101,325 Pa). If the air were reduced to this pressure at constant temperature, (∆S)T = -R ln(P2/P1), where R is the gas constant, about 2 cal/mol°K, and P2/P1 = 0.6085; thus (∆S)T = -2 ln 0.6085. Since the total entropy change is zero, this part must equal Cp ln(T2/T1), where Cp is the heat capacity of air at constant pressure, about 7 cal/mol°K for all diatomic gases, and T1 and T2 are the temperatures (Kelvin) of the air at sea level and at 4000 m. (These equations are derived in most thermodynamics texts. The short version is that the entropy change from compression at constant T equals the work at constant temperature divided by T: ∫(P/T)dV = ∫(R/V)dV = R ln(V2/V1) = -R ln(P2/P1). Similarly, the entropy change at constant pressure is ∫dQ/T, where dQ = Cp dT; this component of entropy is thus ∫dQ/T = Cp ∫dT/T = Cp ln(T2/T1).) Setting the sum to zero, we can say that Cp ln(T2/T1) = R ln 0.6085, or that

T2 = T1 (0.6085)^(R/Cp)

T2 = T1 (0.6085)^(2/7), where 0.6085 is the pressure ratio at 4000 m; for air and most diatomic gases, R/Cp = 2/7 to very good approximation, matching the prediction from quantum mechanics.

From the above, we calculate T2 = 288.15 x .8676 = 250.0°K, or -23.15 °C. This is cold enough to provide snow  on Les Droites nearly year round, and it’s pretty accurate. The typical temperature at 4000 m is 262.17 K (-11°C). That’s 26°C colder than at sea-level, and only 12°C warmer than we’d predicted.
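A few lines of code reproduce the arithmetic; a minimal sketch using the pressures and heat capacity quoted above.

```python
# Iso-entropic temperature at altitude: T2 = T1 * (P2/P1)**(R/Cp)
T1 = 288.15                   # sea-level temperature, K
P1, P2 = 101325.0, 61660.0    # pressures at sea level and at 4000 m, Pa
R_over_Cp = 2.0 / 7.0         # for air and most diatomic gases

T2 = T1 * (P2 / P1) ** R_over_Cp
print(f"predicted T at 4000 m: {T2:.1f} K = {T2 - 273.15:.1f} C")
# prints about 250.0 K (-23.1 C), versus a typical observed 262 K (-11 C)
```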

There are three weak assumptions behind the 12°C error in our prediction: (1) that the air that rises is no hotter than the air that does not, (2) that the air is not heated by radiation from the sun or earth, and (3) that there is no heat exchange with the surrounding air, e.g. from rain or snow formation. The last of these errors is thought to be the largest, but it’s still not large enough to cause serious problems.

Snow on Kilimanjaro, Tanzania 2013. If global warming models were true, the ground should be 4°C warmer than 100 years ago, and the air at this altitude, about 7°C (12°F) warmer; and the snow should be gone.

You can use this approach, with different exponents, to estimate the temperature at the center of Jupiter, or at the center of a neutron star. This iso-entropic calculation is the model that’s used here, though it’s understood that it may be off by a fair percentage. You can also ask questions about global warming: increased CO2 at this level is supposed to cause extreme heating at 4000 m, enough to heat the earth below by 4°C per century or more. As it happens, the temperature and snow cover on Les Droites and other Alpine ski areas have been studied carefully for many decades; they are not warming as best we can tell (here’s a discussion). By all rights, Mt Blanc should be Mt Green by now; no one knows why. The earth, too, seems to have stopped warming. My theory: clouds.

Robert Buxbaum, May 10, 2014. Science requires you check your theory for internal and external weakness. Here’s why the sky is blue, not green.

How to make a simple time machine

I’d been in science fairs from the time I was in elementary school until 9th grade, and I usually did quite well. One trick: I always liked to do cool, unexpected things. I didn’t have money, but I tried for the gee-whiz factor. Sorry to say, the winning ideas of my youth are probably old hat, but here’s a project that I never got to do, one that is simple and cheap and still good enough to win today. It’s a basic time machine, or rather a quantum eraser: it lets you go back in time and erase something.

The first thing you should know is that the whole aspect of time rests on rather shaky footing in modern science. It is possible, therefore, that antimatter particles, positrons say, are just regular matter moving backwards in time.

The trick behind this machine is the creation of entangled states, an idea that Einstein, Podolsky, and Rosen proposed in the 1930s (they thought it could not work and thus disproved quantum mechanics; it turned out the trick works). The original version of the trick was this: start with a particle that splits in half at a given, known energy. If you measure the energies of the two halves, they are always the same, assuming the source particle starts at rest. The thing is, if you start with the original particle at absolute zero and were to measure the position of one half and the velocity of the other, you’d certainly know the position and velocity of the original particle. Actually, you should not need to measure the velocity, since that’s fixed by the energy of the split, but we’re doing it just to be sure. The thing is, quantum mechanics is based on the idea that you can not know both the velocity and position, even just before the split. What happens? If you measure the position of one half, the velocity of the other changes, but if you measure the velocity of both halves, it is the same, and this even works backward in time. QM seems to know if you intend to measure the position, and you measure an odd velocity even before you do so. Weird. There is another trick for making time machines, one found by Gödel in Einstein’s own relativity. It involves black holes, and we’re not sure if it works since we’ve never had a black hole to work with. With the QM time machine, you’re never able to go back in time to before the creation of the time machine.

To make the mini-version of this time machine, we’re going to split a few photons and play with the halves. This is not as cool as splitting an elephant, or even a proton, but money don’t grow on trees, and costs go up fast as the mass of the thing being split increases. You’re not going back in time more than 10 attoseconds (that’s a hundredth of a femtosecond), but that’s good enough for the science fair judges (you’re a kid, and that’s your lunch money at work). You’ll need a piece of thick aluminum foil, a sharp knife or a pin, a bright lamp, superglue (or, in a pinch, Elmer’s), a polarizing sunglass lens, some colored Saran wrap or colored glass, a shoe-box worth of cardboard, and wood + nails  to build some sort of wooden frame to hold everything together. Make your fixture steady and hard to break; judges are clumsy. Use decent wood (judges don’t like splinters). Keep spares for the moving parts in case someone breaks them (not uncommon). Ideally you’ll want to attach some focussing lenses a few inches from the lamp (a small magnifier or reading glass lens will do). You’ll want to lay the colored plastic smoothly over this lens, away from the lamp heat.

First make a point light source: take the 4″ square of shoe-box cardboard and put a quarter-inch hole in it near the center. Attach it in front of your strong electric light at 6″ if there is no lens, or at the focus if there is a lens. If you have no lens, you’ll want to put the Saran over this cardboard.

Take two strips of aluminum foil about 6″ square and, near the middle of each, cut two slits perhaps 4 mm long by 0.1 mm wide, 1 mm apart from each other. Back both strips with some cardboard with a 1″ hole in the middle (use glue to hold it there). Now take the sunglass lens; cut two strips 2 mm x 10 mm on opposite 45° diagonals to the vertical of the lens. Confirm that this is a polarized lens by rotating one strip against the other: at some rotation the pair should be opaque, and at 90° from that it should be fairly clear. If this is not so, get a different sunglass.

Paste these two strips over the two slits on one of the aluminum foil sheets with a drop of super-glue. The polarization of the sunglasses is normally up and down, so when these strips are glued next to one another, the polarization of the strips will be opposing 45° angles. Look at the point light source through both of your aluminum foils (the one with the polarized filter and the one without); they should look different. One should look like two pin-points (or strips) of light. The other should look like a fog of dots or lines.

The reason for the difference is that, generally speaking, a photon passes through two nearby slits as two entangled halves, or its quantum equivalent. When you use the foil without the polarizers, the halves recombine to give an interference pattern. The result with the polarizers is different, though, since the polarization means you can (in theory at least) tell the photons apart. The photons know this and thus behave as if they were not two entangled halves, but rather as if each passed through one slit or the other. Your device will go back in time after the light has gone through the slits and will erase this knowledge.

Now cut another 3″ x 3″ cardboard square and cut a 1/4″ hole in the center. Cut a bit of sunglass lens, 1/2″ square and attach it over the hole of this 3×3″ cardboard square. If you view the aluminum square through this cardboard, you should be able to make one hole or the other go black by rotating this polarized piece appropriately. If it does not, there is a problem.

Set up the lamp (with the lens) on one side so that a bright light shines on the slits, and look at the light from the other side of the aluminum foil. You will notice that the light that comes through the foil with the polarized film looks like two dots, while the light that comes through the other foil shows a complex interference pattern. Putting the extra polarizing lens in front of or behind the foil without filters does not change its behavior, but, if done right, it will change things when put behind the other foil, the one with the filters.

Robert Buxbaum, of the future.

Yet another quantum joke

Why do you get more energy from a steak than from the same amount of hamburger?

 

Hamburger is steak in the ground state.

 

Is funny because … it’s a pun on the word ground. Hamburger is ground-up meat, of course, but the reference to a ground state also relates to a basic discovery of quantum mechanics (QM): that all things exist in quantized energy states. The lowest of these is called the ground state, and you get less energy out of a process if you start with things in this ground state. Lasers, as an example, get their energy from electrons being made to drop to their ground state at the same time; you can’t get any energy from a laser if the electrons start in the ground state.

The total energy of a thing can be thought of as having a kinetic and a potential energy part. The potential energy is usually higher the further an item moves from its ideal (lowest potential) point. The kinetic energy, though, tends to get lower when more space is available because, from Heisenberg uncertainty, ∆l•∆v ≈ h/m. That is, the more space there is, the less the uncertainty of speed, and thus the less kinetic energy, other things being equal. The ground energy state is the lowest sum of potential and kinetic energy, and thus all things occupy a cloud of some size, even in the ground state. Without this size, the world would cease to exist: atoms would radiate energy, and shrink until they vanished.
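To put a rough number on the size of that ground-state cloud effect, here is a minimal sketch (my own illustration) of the particle-in-a-box zero-point energy, E1 = h²/8mL², for an electron confined to an atom-sized box; the 0.2 nm box width is just a round, assumed value.

```python
# Zero-point energy of a particle confined to a box of width L:  E1 = h^2 / (8 m L^2)
h = 6.626e-34        # Planck's constant, J*s
m_e = 9.109e-31      # electron mass, kg
L = 2.0e-10          # assumed atom-sized box, 0.2 nm

E1 = h**2 / (8 * m_e * L**2)
print(f"zero-point energy: {E1:.2e} J = {E1 / 1.602e-19:.1f} eV")
# Even in its ground state, a confined electron keeps several eV of kinetic energy.
```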

In grad school, I got into understanding thermodynamics, transport phenomena, and quantum mechanics, particularly involving hydrogen. This led to my hydrogen production and purification inventions, which are what my company sells.

Click here for a quantum cartoon on waves and particles, an old Heisenberg joke, or a joke about how many quantum mechanicians it takes to change a lightbulb.

R. E. Buxbaum, July 16, 2013. I once claimed that the unseen process that maintains existence could be called God; this did not go well with the religious.

 

Another Quantum Joke, and Schrödinger’s waves derived

Quantum mechanics joke, from xkcd.

Is funny because … it’s a double entendre on the words grain (as in grainy) and waves, as in Schrödinger waves or the “amber waves of grain” in the song America the Beautiful. In Schrödinger’s view of the quantum world, everything seems to exist or move as a wave until you observe it, and then it always becomes a particle. The math to solve for the energy of things is simple, and thus the equation is useful, but it’s hard to understand why it works. E.g., when you solve for the behavior of a particle (atom) in a double-slit experiment, you have to imagine that the particle behaves as an insubstantial wave traveling through both slits until it’s observed, and only then behaves as a completely solid particle.

Math equations can always be rewritten, though, and science works in the language of math. The different forms appear to have different meanings, but they don’t, since they make the same practical predictions. Because of this freedom of meaning (and some other things), science is the opposite of religion. Other mathematical formalisms for quantum mechanics may be more comforting, or less, but most avoid the wave-particle duality.

The first formalism was Heisenberg’s uncertainty. At the end of this post, I show that it is mathematically identical to Schrödinger’s wave view. Heisenberg’s version showed up in two quantum jokes that I explained (beat into the ground), one about a lightbulb and one about Heisenberg in a car (which also explains why water is wet and why hydrogen diffuses through metals so quickly).

Yet another quantum formalism involves Feynman’s little diagrams. One assumes that matter follows every possible path (the multiple-universe view) and that time should go backwards. As a result, we expect that antimatter apples should fall up. Experiments are underway at CERN to test if they do fall up, and by next year we should finally know if they do. Even if anti-apples don’t fall up, that won’t mean this formalism is wrong, BTW: all identical math forms are identical, and we don’t understand gravity well in any of them.

Yet another identical formalism (my favorite) involves imagining that matter has a real and an imaginary part. In this formalism, the components move independently by diffusion, and as a result look like waves: exp(-it) = cos t - i sin t. You can’t observe the two parts independently, though, only the following product of the real and imaginary parts: (the real + imaginary part) x (the real - imaginary part). Slightly different math, same results, different ways of thinking of things.
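A few lines of code make the point concrete: the real and imaginary parts each oscillate, but the product of (the real + imaginary part) and (the real - imaginary part), that is Ψ*Ψ, never changes. A minimal sketch with an arbitrary unit frequency.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 5)
psi = np.exp(-1j * t)             # exp(-it) = cos t - i sin t, an arbitrary unit frequency

print("real part:      ", np.round(psi.real, 3))                  # oscillates
print("imaginary part: ", np.round(psi.imag, 3))                  # oscillates
print("psi* times psi: ", np.round((np.conj(psi) * psi).real, 3)) # stays 1, always
```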

Because of quantum mechanics, hydrogen diffuses very quickly in metals: in some metals, quicker than most anything in water. This is the basis of REB Research metal membrane hydrogen purifiers, and it also causes hydrogen embrittlement (explained, perhaps, in some later post). All other elements go through metals much slower than hydrogen, allowing us to make hydrogen purifiers that are effectively 100% selective. Our membranes also separate different hydrogen isotopes from each other by quantum effects (big things tunnel slower). Among the uses for our hydrogen filters are gas chromatography, dynamo cooling, and reducing the likelihood of nuclear accidents.

Dr. Robert E. Buxbaum, June 18, 2013.

To see Schrödinger’s wave equation derived from Heisenberg’s uncertainty for non-changing (time-independent) items, go here and note that, for a standing wave, there is a vibration in time though no net change. Start with a version of Heisenberg uncertainty: h = λp, where the uncertainty in length = wavelength = λ and the uncertainty in momentum = momentum = p. The kinetic energy is KE = p²/2m, and KE + U(x) = E, where E is the total energy of the particle or atom, and U(x) is the potential energy, some function of position only. Thus, p = √(2m(E-U)). Assume that the particle can be described by a standing wave with a physical description, ψ, and an imaginary vibration you can’t ever see, exp(-iωt). And assume that time and space are completely separable (an OK assumption if you ignore gravity and if your potential fields move slowly relative to the speed of light). Now read the section, follow the derivation, and go through the worked problems. Most useful applications of QM can be derived using this time-independent version of Schrödinger’s wave equation.
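Spelled out, the algebra that paragraph describes runs as follows; this is just a compact restatement of the steps above in the U(x) notation already defined.

```latex
p = \frac{h}{\lambda}, \qquad E = \frac{p^2}{2m} + U(x)
\;\;\Rightarrow\;\; p = \sqrt{2m\,(E - U)} .

\text{For a standing wave } \psi \propto \sin\!\left(\frac{2\pi x}{\lambda}\right):\qquad
\frac{d^2\psi}{dx^2} = -\left(\frac{2\pi}{\lambda}\right)^{2}\psi
                     = -\frac{4\pi^2 p^2}{h^2}\,\psi .

\text{Substituting } p^2 = 2m(E-U):\qquad
(E - U)\,\psi = -\frac{h^2}{8\pi^2 m}\,\frac{d^2\psi}{dx^2},
```

which is the time-independent Schrödinger equation, the same as equation 5 in the diffusion post above.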

Most Heat Loss Is Black-Body Radiation

In a previous post I used statistical mechanics to show how you’d calculate the thermal conductivity of any gas, and showed why the insulating power of the best normal insulating materials is usually identical to that of ambient air. That analysis considered only the motion of molecules and not of photons (black-body radiation), and thus under-predicted heat transfer in most circumstances. Though black-body radiation is usually ignored in chemical engineering calculations, it is often the major heat transfer mechanism, even at modest temperatures.

One can show from quantum mechanics that the radiative heat transfer between two surfaces at temperatures T and To is proportional to the difference of the fourth powers of the two temperatures on the absolute (Kelvin) scale.

Pnet = A σ ε (T⁴ - To⁴). Here Pnet is the net heat transfer rate, A is the area of the surfaces, σ is the Stefan–Boltzmann constant, and ε is the surface emissivity, a number that is 1 for most non-metals and 0.3 for stainless steel. For A measured in m², σ = 5.67×10⁻⁸ W m⁻² K⁻⁴.

Unlike with conduction, radiative heat transfer does not depend on the distance between the surfaces, but only on the temperatures and the infra-red (IR) reflectivity. This is different from normal reflectivity, as seen in the infra-red photo below of a lightly dressed person standing in a normal room. The fellow has a black plastic bag on his arm, but you can hardly see it here, as it hardly affects heat loss. His clothes don’t do much either, but his hair and eyeglasses are reasonably effective blocks to radiative heat loss.

Infrared picture of a fellow wearing a black plastic bag on his arm. The bag is nearly transparent to heat radiation, while his eyeglasses are opaque. His hair provides some insulation.

As an illustrative example, let’s calculate the radiative and conductive heat transfer rates of the person in the picture, assuming he has 2 m² of surface area, an emissivity of 1, and a body-and-clothes temperature of about 86°F; that is, his skin/clothes temperature is 30°C, or 303 K absolute. If this person stands in a room at 71.6°F (295 K), the radiative heat loss is calculated from the equation above: 2 x 1 x 5.67×10⁻⁸ x (8.43×10⁹ - 7.57×10⁹) = 97.5 W. This is 23.36 cal/second, or 84.1 Cal/hr, or 2020 Cal/day; that is nearly the expected basal calorie use of a person this size.

The conductive heat loss is typically much smaller. As discussed previously in my analysis of curtains, the rate is inversely proportional to the heat transfer distance and proportional to the temperature difference. For the fellow in the picture, assuming he’s standing in relatively stagnant air, the heat boundary layer thickness will be about 2 cm (0.02 m). Multiplying the thermal conductivity of air, 0.024 W/mK, by the surface area and the temperature difference, and dividing by the boundary layer thickness, we find a heat loss of 2 x 0.024 x (30-22)/0.02 = 19.2 W. This is 16.5 Cal/hr, or 397 Cal/day: about 20% of the radiative heat loss, suggesting that some 5/6 of a sedentary person’s heat loss may be from black-body radiation.
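Here is a minimal sketch that reproduces both of those numbers from the values quoted above (area, temperatures, emissivity, air conductivity, and boundary-layer thickness as given in the text):

```python
# Radiative vs. conductive heat loss for the fellow in the photo.
sigma = 5.67e-8                  # Stefan-Boltzmann constant, W/m^2/K^4
A, eps = 2.0, 1.0                # surface area (m^2) and emissivity
T_skin, T_room = 303.0, 295.0    # skin/clothes and room temperatures, K

P_rad = A * eps * sigma * (T_skin**4 - T_room**4)

k_air, delta = 0.024, 0.02       # air conductivity (W/m K), boundary-layer thickness (m)
P_cond = A * k_air * (T_skin - T_room) / delta

print(f"radiative loss:  {P_rad:.0f} W  (~{P_rad * 86400 / 4184:.0f} Cal/day)")
print(f"conductive loss: {P_cond:.1f} W  (~{P_cond * 86400 / 4184:.0f} Cal/day)")
# prints roughly 97 W (~2000 Cal/day) radiative and 19.2 W (~400 Cal/day) conductive
```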

We can expect that black-body radiation dominates conduction when looking at heat-shedding losses from hot chemical equipment because this equipment is typically much warmer than a human body. We’ve found, with our hydrogen purifiers for example, that it is critically important to choose a thermal insulation that is opaque or reflective to black body radiation. We use an infra-red opaque ceramic wrapped with aluminum foil to provide more insulation to a hot pipe than many inches of ceramic could. Aluminum has a far lower emissivity than the nonreflective surfaces of ceramic, and gold has an even lower emissivity at most temperatures.

Many popular insulation materials are not black-body opaque, and most hot surfaces are not reflectively coated. Because of this, you can find that the heat loss rate goes up as you add too much insulation. After a point, the extra insulation increases the surface area for radiation while barely reducing the surface temperature; it starts to act like a heat fin. While the space-shuttle tiles are fairly mediocre in terms of conduction, they are excellent in terms of black-body radiation.

There are applications where you want to increase heat transfer without having to resort to direct contact with corrosive chemicals or heat-transfer fluids. Often black body radiation can be used. As an example, heat transfers quite well from a cartridge heater or band heater to a piece of equipment even if they do not fit particularly tightly, especially if the outer surfaces are coated with black oxide. Black body radiation works well with stainless steel and most liquids, but most gases are nearly transparent to black body radiation. For heat transfer to most gases, it’s usually necessary to make use of turbulence or better yet, chaos.

Joke about antimatter and time travel

I’m sorry we don’t serve antimatter men here.

Antimatter man walks into a bar.

Is funny because … in quantum physics there is no directionality in time. Thus an electron can change direction in time, and it then appears to the observer as a positron, an anti-electron that has the same mass as a normal electron but the opposite charge and an opposite spin, etc. In this picture, the reason electrons and positrons appear to annihilate is that there was only one electron to begin with. That electron started going backwards in time, so it disappeared from our forward-in-time frame.

The thing is, time is quite apparent on macroscopic scales. It’s one of the most apparent aspects of macroscopic existence. Perhaps the clearest proof that time is flowing in one direction only is entropy. In normal life, you can drop a glass and watch it break whenever you like, but you can not drop shards and expect to get a complete glass. Similarly, you know you are moving forward in time if you can drop an ice cube into a hot cup of coffee and make it lukewarm. If you can reach into a cup of lukewarm coffee and extract an ice cube to make it hot, you’re moving backwards in time.

It’s also possible that gravity proves that time is moving forward. If an anti-apple is just a normal apple that is moving backwards in time, then I should expect that, when I drop an anti-apple, I will find it floats upward. On the other hand, if mass is inherently a warpage of space-time, it should fall down. Perhaps when we understand gravity we will also understand how quantum physics meets the real world of entropy.

Heisenberg joke and why water is wet

I love hydrogen in large part because it is a quantum fluid. To explain what that means and how that leads to water being wet, let me begin with an old quantum physics joke.

Werner Heisenberg is speeding down a highway in his car when he’s stopped by a police officer. “Do you know how fast you were going?” asks the officer. “No idea” answers Heisenberg, “but I know exactly where I am.”

The joke relates to a phenomenon of quantum physics that states that the more precisely you can know the location of something, the less precisely you can infer the speed. Thus, the fact that Heisenberg knew precisely where he was implied that he could have no idea of the car’s speed. Of course, this uncertainty is mostly seen with small things like light and electrons –and a bit with hydrogen, but hardly at all with a car or with Dr. Heisenberg himself (and that’s why it’s funny).

This funky property is related to something you may have wondered about: why is water wet? That is, why does water cling to your hands or clothes while liquid teflon repels them? Even further, you may have wondered why water is a liquid at normal conditions when H2S is a gas; H2S is the heavier analog, so if only one of the two were a liquid, you’d think it would be H2S.

Both phenomena are understood through hydrogen behaving like the quantum car above. Oxygen atoms are pretty small, and hydrogen atoms are light enough to start behaving in a quantum way. When a hydrogen atom attaches to an oxygen atom to form part of a water molecule, its location becomes fixed rather precisely. As a result, the hydrogen atom gains velocity (the hydrogen isn’t going anywhere with this velocity; it’s sometimes called zero-point energy), but because of this velocity or energy, its bond to the oxygen is looser than it would be if you had heavier hydrogen. When the oxygen of another water molecule, or of a cotton cellulose molecule, comes close, the hydrogen starts to hop back and forth between the two oxygen atoms. This reduces the velocity of the hydrogen atom and stabilizes the assemblage. There is now less kinetic energy (or zero-point energy) in the system, and this stability is seen as a bond caused not by electron sharing but by hydrogen sharing. We call the reasonably stable bond between molecules that share a hydrogen atom this way a “hydrogen bond” (now you know).
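To get a feel for the size of this zero-point energy, here is a rough sketch (my own illustration) comparing an O-H and an O-D bond treated as simple harmonic oscillators; the ~3650 cm⁻¹ stretch frequency is an assumed, typical textbook value, not a number from this post.

```python
import math

# Zero-point energy E0 = (1/2) h*nu for a bond treated as a harmonic oscillator.
# The frequency scales as 1/sqrt(reduced mass), so O-D sits lower than O-H.
h, c = 6.626e-34, 2.998e10       # Planck's constant (J*s) and light speed in cm/s
nu_OH = 3650.0                   # assumed typical O-H stretch frequency, cm^-1

mu_OH = 16.0 * 1.0 / (16.0 + 1.0)    # reduced masses, in amu (only the ratio matters)
mu_OD = 16.0 * 2.0 / (16.0 + 2.0)
nu_OD = nu_OH * math.sqrt(mu_OH / mu_OD)

for name, nu in (("O-H", nu_OH), ("O-D", nu_OD)):
    E0 = 0.5 * h * c * nu            # joules per molecule
    print(f"{name} zero-point energy ~ {E0 / 1.602e-19:.2f} eV")
# roughly 0.23 eV for O-H versus 0.16 eV for O-D: the light hydrogen jiggles more.
```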

The hydrogen bond is why water is a liquid and is the reason water is wet. The hydrogen atom jumping between water molecules stabilizes the liquid water more than it would stabilize liquid H2S. Since sulfur atoms are bigger than oxygen atoms, the advantage of hydrogen jumping is smaller. As a result, the heat of vaporization of water is higher than that of H2S, and water is a liquid at normal conditions while H2S is a gas.

Water sticks to cotton or your skin the same way: hydrogen atoms skip between the oxygen atoms of the water molecules and of these surfaces, creating a bond. Water is said to wet these surfaces, and the result is that water is found to be wet. Liquid teflon does not have hydrogen atoms that can jump, so there is no bond that can be made from that direction. (There are some hydrogen atoms on the cotton that could jump to the teflon, but there is no advantage to bonding of this sort: there are only a few such hydrogen atoms, and these already jump to other oxygens in the cotton. To jump to the teflon would mean breaking a bond with other oxygen atoms in the cotton, so there would be no energy advantage.) This, then, is just one of the reasons I love hydrogen: it’s a quantum-y material.