Tag Archives: extent of reaction

Sewage reactor engineering, Stirred tank designs

Over the past few years, I’ve devoted several of these essays to analysis of first-stage sewage treatment reactors. I described and analyzed the rotating disc reactor found at the plant in Holly here, and described the racetrack, “activated sludge” plug-flow reactor found most everywhere else here. I also described a system without a primary clarifier found near Cincinnati. All of these were effective for primary treatment; soluble organics are removed by bio-catalyzed oxidation:

H-C-O-H + O2 –> CO2 + H2O.

A typical plant in Oakland county treats 2,000,000 gallons per day of this stuff, with the bio-reactor receiving liquid waste containing about 200 ppm of soluble and colloidal biomass. That’s 400 dry gallons per day for those interested, or about 3,200 dry lbs./day. About half of this will be oxidized to CO2 and water. The rest (cell bodies) is removed with the insoluble components, and applied to farmers’ fields, buried, or burnt in an incinerator.
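
As a quick sanity check on those numbers, here is a minimal Python sketch; the 8.34 lb/gal density of water is an assumption on my part (the ~3,200 lb/day figure above corresponds to using roughly 8 lb/gal):

```python
# Back-of-the-envelope check of the plant mass balance quoted above.
flow_gal_per_day = 2_000_000      # plant inflow, gal/day
organics_ppm = 200                # soluble + colloidal biomass

dry_gal_per_day = flow_gal_per_day * organics_ppm / 1_000_000
dry_lbs_per_day = dry_gal_per_day * 8.34   # assuming ~8.34 lb per gallon of water

print(f"dry organics: {dry_gal_per_day:.0f} gal/day, about {dry_lbs_per_day:.0f} lb/day")
# -> roughly 400 gal/day and ~3,300 lb/day, in line with the ~3,200 lb/day above
```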

There is another type of reactor used in Oakland County. It’s mostly used for secondary treatment, converting consolidated sludge to higher-quality sludge that can be sold or used on farms with fewer restrictions, but it is also the type of reactor used at the South Lyon treatment plant for primary treatment. It is a continuously stirred tank reactor, or CSTR, a design shown in schematic below.

As of some years ago, the South Lyon system consisted of a single, largish, plastic-lined pond with a volume of about 2,000,000 gallons. About 700,000 gallons per day of sewage liquids went into the lagoon, at 200 ppm soluble organics. Air was bubbled through the liquid, providing a necessary reactant and causing near-perfect mixing of the contents. The aim of the plant managers was to keep the soluble output to the then-acceptable level of 10 ppm; it’s something they only barely managed, and things got worse as the flow increased. Assume, as before, a volume V and a flow Q.

We will call the concentration of soluble organics C, and call the initial concentration, the concentration that enters, Ci. It’s about 200 ppm. We’ll call the output concentration Co, and for this type of reactor, Co = C. The reaction is approximately first order, so that, if there were no flow into or out of the reactor, the concentration of organics would decrease at the rate of

dC/dt = -kC.

Here k is a reaction rate constant, dependent on temperature, oxygen, and cell content. It’s typically about 0.5/hour. For a given volume of tank, the rate of organic removal is VkC. We can now do a mass balance on soluble organics: the rate of organic entry is QCi and the rate leaving by flow is QC. The difference must be the amount that is reacted away:

QCi – QC = VkC.

We now use algebra to find that

Co = Ci/(1 + kV/Q).

V/Q is sometimes called the residence time of the system. At normal flow, the residence time of the South Lyon system is about 2.8 days, or 68.6 hours. Plugging these numbers in, we find that the effluent leaves the reactor at 1/35 of the input concentration, or 5.7 ppm, on average. This would be fine except that sometimes the temperature drops, or the flow increases, and we start violating the standard. A yet bigger problem was that the population increased by 50% while the EPA standard got more stringent, to 2 ppm. This was solved by adding another, smaller reactor, of volume V2. Using the same algebraic analysis as above, you can show that, with two reactors in series,

Co = Ci/ [(1 + kV/Q)(1+kV2/Q)].
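
Here is a minimal Python sketch of these two formulas using the numbers above. The 500,000 gallon size for the second tank is a made-up placeholder (its actual size isn’t given here), and the 1,050,000 gal/day flow simply reflects the 50% population increase:

```python
# CSTR mass balance: Co = Ci / (1 + k*V/Q); tanks in series multiply the factors.
k  = 0.5           # reaction rate constant, 1/hour
Ci = 200.0         # inlet soluble organics, ppm

def cstr_out(Ci, k, Q_gal_per_day, *volumes_gal):
    Q = Q_gal_per_day / 24.0                  # convert to gal/hour
    Co = Ci
    for V in volumes_gal:
        Co /= (1.0 + k * V / Q)
    return Co

# Original single 2,000,000 gallon lagoon at 700,000 gal/day:
print(cstr_out(Ci, k, 700_000, 2_000_000))               # ~5.7 ppm

# After 50% growth, with a hypothetical 500,000 gallon second tank:
print(cstr_out(Ci, k, 1_050_000, 2_000_000, 500_000))    # well under the 2 ppm limit
```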

It’s a touchy system, but it meets government targets, just barely, most of the time. I think it is time to switch to a plug-flow reactor system, as used in much of Oakland county. In these, the fluid enters a channel and reacts as it flows along. Each gallon of fluid, in a sense, moves by itself as if it were its own reactor. In each gallon, we can say that dC/dt = -kC. We can solve for Co in terms of the total residence time, t = V/Q, by rearranging this equation and integrating: ∫dC/C = – ∫k dt. We then find that

      ln(Ci/Co) = kt = kV/Q

To convert 200 ppm sewage to 2 ppm, we note that Ci/Co = 100 and that V = Q ln(100)/k = Q (4.605/0.5) hours. An inflow of 1,000,000 gallons per day is 41,667 gal/hour, and we find the required tank volume is 41,667 x 9.21 = 383,750 gallons. This is quite a lot smaller than the CSTR tanks at South Lyon. If we converted the South Lyon tanks to a plug-flow, race-track design, it would allow them to serve a massively increased population, discharging far cleaner sewage.
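
A short sketch comparing the two designs at the same 2 ppm target; the single-CSTR volume is an extra comparison computed from the CSTR formula above, not a number quoted earlier:

```python
import math

k  = 0.5                         # 1/hour
Q  = 1_000_000 / 24.0            # gal/hour (1,000,000 gal/day inflow)
ratio = 200.0 / 2.0              # Ci/Co = 100

# Plug-flow reactor: ln(Ci/Co) = k*V/Q
V_pfr = Q * math.log(ratio) / k
print(f"plug-flow volume:   {V_pfr:,.0f} gal")      # ~384,000 gal

# A single CSTR hitting the same 2 ppm would need Ci/Co = 1 + k*V/Q
V_cstr = Q * (ratio - 1.0) / k
print(f"single-CSTR volume: {V_cstr:,.0f} gal")     # ~8,250,000 gal
```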

Robert Buxbaum, November 17, 2019

Nuclear fusion

I got my PhD at Princeton University 33 years ago (1981), working on the engineering of nuclear fusion reactors, and I thought I’d use this blog to rethink through the issues. I find I’m still of the opinion that developing fusion is important, as it seems the best long-range power option. Civilization will still need significant electric power 300 to 3000 years from now, it seems, when most other fuel sources are gone. Fusion is also one of the few options for long-range space exploration, needed if we ever decide to send colonies to Alpha Centauri or Saturn. I thought fusion would be ready by now, but it is not, and commercial use seems unlikely for the next ten years at least — an indication of the difficulties involved, and a certain lack of urgency.

Oil, gas, and uranium didn’t run out like we’d predicted in the mid 70s. Instead, population growth slowed, new supplies were found, and better methods were developed to recover and use them. Shale oil and fracking unlocked hydrocarbons we thought were unusable, and nuclear fission reactors got better — safer and more efficient. At the same time, the more we studied, the clearer it became that fusion’s technical problems are much harder to tame than uranium fission’s.

Uranium fission was/is frighteningly simple — far simpler than even the most basic fusion reactor. The first nuclear fission reactor (1942) involved nothing more than uranium pellets in a pile of carbon bricks stacked in a converted squash court at the University of Chicago. No outside effort was needed to get the large, unstable uranium atoms to split into smaller, more stable ones. Water circulating through the pile removed the heat released, and control was maintained by people lifting and lowering cadmium control rods while standing on the pile.

A fusion reactor requires high temperature or energy to make anything happen. Fusion energy is produced by combining small, unstable, heavy hydrogen atoms into helium, a bigger, more stable one; see figure. To do this reaction you need to operate at the equivalent of about 500,000,000 degrees C, and containing it requires (typically) a magnetic bottle — something far more complex than a pile of graphite bricks. The reward was smaller too: “only” about 1/13th as much energy per event as fission. We knew the magnetic bottles were going to be tricky, e.g. there was no obvious heat transfer and control method, but fusion seemed important enough, and the problems seemed manageable enough, that fusion power seemed worth pursuing — with just enough difficulties to make it a challenge.

Basic fusion reaction: deuterium + tritium react to give helium, a neutron and energy.

The plan at Princeton, and most everywhere, was to use a TOKAMAK, a doughnut-shaped reactor like the one shown below, but roughly twice as big; TOKAMAK is a Russian acronym. The doughnut served as one side of an enormous transformer. Hydrogen fuel was ionized into a plasma (a neutral soup of protons and electrons) and heated to 300,000,000°C by a current in the TOKAMAK generated by varying the current in the other side of the transformer. Plasma containment was provided by enormous magnets on the top and bottom, and by ring-shaped magnets arranged around the torus.

As development went on, we found we kept needing bigger and bigger doughnuts and stronger and stronger magnets in an effort to balance heat loss with fusion heating. The number density of hydrogen atoms per volume, n, is proportional to the magnetic field strength. This is important because the fusion heat rate per volume is proportional to n-squared, n², while heat loss is proportional to n divided by the residence time, something we called tau, τ. The main heat loss was from the hot plasma going to the reactor surface. Because of the above, a heat balance ratio was seen to be important, heat in divided by heat out, and that was seen to be more-or-less proportional to nτ. As the target temperatures increased, we found we needed larger and larger nτ reactors to achieve a positive heat balance. This translated to ever larger reactors and ever stronger magnetic fields, but even here there was a limit, 1 billion Kelvin, a thermodynamic temperature where the fusion reaction goes backward and no energy is produced. The Princeton design was huge, with super strong, super magnets, and was operated at 300 million°C, near the top of the reaction curve. If the temperature went above or below this temperature, the fire would go out. There was no room for error, and relatively little energy output per volume — compared to fission.
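
To make that scaling explicit, here is a toy sketch; the constants a and b are placeholders, not real plasma parameters. Doubling either n or τ doubles the heat-balance ratio, which is why the push was always toward bigger machines and stronger magnets:

```python
# Illustrative only: why the heat-balance ratio scales as n * tau.
def heat_ratio(n, tau, a=1.0, b=1.0):
    fusion_heating  = a * n**2        # fusion heat rate per volume ~ n^2
    conduction_loss = b * n / tau     # heat loss per volume ~ n / tau
    return fusion_heating / conduction_loss   # = (a/b) * n * tau

print(heat_ratio(1.0, 1.0))   # baseline
print(heat_ratio(2.0, 1.0))   # doubling density doubles the ratio
print(heat_ratio(1.0, 2.0))   # doubling confinement time does the same
```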

Fusion reaction options and reaction rates.

The most likely reaction involved deuterium and tritium, referred to as D and T. This is the reaction of the two heavy isotopes of hydrogen shown in the figure above — the same reaction used in hydrogen bombs, a point we rarely made to the public. For each reaction, D + T –> He + n, you get 17.6 million electron volts (17.6 MeV). This is 17.6 million times the energy you get for an electron moving over one volt, but only 1/13 the energy of a fission reaction. By comparison, the energy of water formation, H2 + 1/2 O2 –> H2O, is the equivalent of two electrons moving over 1.2 volts, or 2.4 electron volts (eV), some 7 million times less than fusion.
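
A quick check of that per-event comparison, in Python:

```python
# Energy per event: D-T fusion vs. forming one water molecule.
E_fusion_eV = 17.6e6     # 17.6 MeV per D + T -> He + n
E_water_eV  = 2 * 1.2    # two electrons across ~1.2 V for H2 + 1/2 O2 -> H2O

print(E_fusion_eV / E_water_eV)   # ~7.3 million times more energy per event
```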

The Princeton design involved reacting 40 gm/hr of heavy hydrogen to produce 8 mol/hr of helium and 4000 MW of heat. The heat was to be converted to electricity at 38% efficiency using a topping cycle, a modern (relatively untried) design. Of the roughly 1500 MW of electricity that was supposed to be produced, all but about 400 MW was to be delivered to the power grid — if everything worked right. Sorry to say, the value of the electricity did not rise anywhere near as fast as the cost of the reactor and turbines. Another problem: 1100 MW was more than could be easily absorbed by any electrical grid. The output was high and steady, and could not be easily adjusted to match fluctuating customer demand. By contrast, a coal plant’s or fuel cell’s output can be easily adjusted (and a nuclear plant’s with a little more difficulty).
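
A rough Python check of those plant numbers; it roughly reproduces the 4000 MW of heat and the electricity figures quoted above:

```python
# 40 g/hr of D-T fuel -> ~8 mol/hr of helium -> ~4000 MW of heat.
E_PER_EVENT_J = 17.6e6 * 1.602e-19   # 17.6 MeV in joules
AVOGADRO      = 6.022e23

mol_He_per_hr = 40.0 / 5.0                       # D (2 g/mol) + T (3 g/mol) = 5 g per mol He
heat_W = mol_He_per_hr * AVOGADRO * E_PER_EVENT_J / 3600.0
elec_W = 0.38 * heat_W                           # 38% conversion efficiency

print(f"heat:        {heat_W/1e6:,.0f} MW")      # ~3,800 MW, i.e. about 4000 MW
print(f"electricity: {elec_W/1e6:,.0f} MW")      # ~1,400-1,500 MW
print(f"to the grid: {(elec_W - 400e6)/1e6:,.0f} MW")   # ~1,000-1,100 MW after in-house use
```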

Because of the need for heat balance, it turned out that at least 9% of the hydrogen had to be burnt per pass through the reactor. The heat lost per mol by conduction to the wall was, to good approximation, the heat capacity of each mol of hydrogen ions, 82 J/°C mol, times the temperature of the ions, 300 million °C, divided by the containment time, τ. The Princeton design was supposed to have a containment time of about 4 seconds. As a result, the heat loss by conduction was 6.2 GW per mol. This must be matched by the molar heat of reaction that stays in the plasma: 17.6 MeV times Faraday’s constant, 96,800, divided by 4 seconds (= 430 GW/mol reacted), divided by 5. Of the 430 GW/mol produced in fusion reactions, only 1/5 remains in the plasma (= 86 GW/mol); the other 4/5 of the energy of reaction leaves with the neutron. To get the heat balance right, at least 9% of the hydrogen must react per pass through the reactor; there were also some heat losses from radiation, so the number is somewhat higher. Burn a much larger or smaller fraction of the hydrogen and you had problems. The only other solution was to increase τ beyond 4 seconds, but this meant ever bigger reactors.
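
Here is the same arithmetic as a short Python sketch, using the numbers quoted in the paragraph; conduction alone gives roughly 7%, with radiation losses pushing the required burn-up toward the ~9% figure above:

```python
# Reproducing the heat-balance estimate, using the values quoted above.
Cp_ion   = 82.0        # J/(mol·°C), heat capacity per mol of hydrogen ions
T_ion    = 300e6       # °C, plasma temperature
tau      = 4.0         # s, containment time
FARADAY  = 96_800.0    # C/mol (rounded value used above)
E_eV     = 17.6e6      # reaction energy, 17.6 MeV in (electron) volts

loss_W_per_mol     = Cp_ion * T_ion / tau            # ~6.2e9 W per mol of plasma
fusion_W_per_mol   = E_eV * FARADAY / tau            # ~430e9 W per mol reacted
retained_W_per_mol = fusion_W_per_mol / 5.0          # only 1/5 stays in the plasma

burn_fraction = loss_W_per_mol / retained_W_per_mol
print(f"minimum burn fraction (conduction only): {burn_fraction:.1%}")
# ~7%; radiation losses raise the required burn-up further.
```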

There was also a material handling issue: to get enough fuel hydrogen into the center of the reactor, quite a lot of radioactive gas had to be handled and extracted from the plasma chamber. The fuel was to be frozen into tiny spheres of near-solid hydrogen and injected into the reactor at supersonic velocity. Any slower and the spheres would evaporate before reaching the center. As 40 grams per hour was 9% of the feed, it became clear that we had to be ready to produce and inject about 1 pound/hour of tiny spheres. These “snowballs-in-hell” had to be small so they didn’t dampen the fire. The vacuum system had to be big enough to handle the pound or so per hour of unburned hydrogen and ash, keeping the pressure near total vacuum. You then had to purify the hydrogen from the helium ash and remake the little spheres to be fed back to the reactor. There were no easy engineering problems here, but I found them enjoyable enough. With a colleague, I came up with a cute, efficient high-vacuum pump and recycling system, and published it here.

Yet another engineering challenge concerned the difficulty of finding a material for the first wall — the inner wall of the doughnut facing the plasma. Of the 4000 MW of heat energy produced, all of the conduction and radiation heat, about 1000 MW, is deposited in the first wall and has to be conducted away. Conducting this heat means that the wall must have an enormous coolant flow and must withstand an enormous amount of thermal stress. One possible approach was to use a liquid wall, but I’ve recently come up with a rather nicer solid-wall solution (I think) and have filed a patent; more on that later, perhaps after/if the patent is accepted. Another engineering challenge was making T, tritium, for the D-T reaction. Tritium is not found in nature, but has to be made from the neutron created in the reaction and from lithium in a breeder blanket: Li + n –> He + T. I examined all possible options for extracting this tritium from the lithium at low concentrations as part of my PhD thesis, and eventually found a nice solution. The education I got in the process is used in my REB Research hydrogen engineering business.

Man inside the fusion reactor doughnut at ITER. He’d better leave before the 8,000,000°C plasma turns on.

Because of its complexity, and all these engineering challenges, fusion power never reached the maturity of fission power; and then Three Mile Island happened and ruined the enthusiasm for all things nuclear. There were some claims that fusion would be safer than fission, but because of the complexity, and the improvements in fission, I am not convinced that fusion would ever be even as safe. And the long-term need keeps moving out: we keep finding more uranium, and we’ve developed breeder reactors and a thorium cycle, technologies that make it very unlikely we will run out of fission material any time soon.

The main near-term advantage I see for fusion over fission is that there are fewer radioactive products; see comparison. A secondary advantage is neutrons: fusion reactors make excess neutrons that can be used to make tritium or other unusual elements. A need for one of these could favor the development of fusion power. And finally, there’s the long-term need: space exploration, or basic power when we run out of coal, uranium, and thorium. Fine advantages, but unlikely to be important for a hundred years.

Robert E. Buxbaum, March 1, 2014. Here’s a post on land use, on the aesthetics of engineering design, and on the health risks of nuclear power. The sun’s nuclear fusion reactor is unstable too — one possible source of the chaotic behavior of the climate. Here’s a control joke.

Nerves are tensegrity structures and grow when pulled

No one quite knows how nerve cells learn things. It is commonly, but incorrectly, thought that you cannot get new nerves in the brain, nor get existing brain cells to grow out further. In fact, people have made new nerve cells, and when I was a professor at Michigan State, a physiology colleague and I got brain and sensory nerves to grow out axons by pulling on them, without the use of drugs.

I had just moved to Michigan State as a fresh PhD (Princeton), an assistant professor of chemical engineering. Steve Heidemann was a few years ahead of me, a physiology professor with a PhD from Princeton. We were both New Yorkers. He had been studying nerve structure, and wondered how the growth cone makes nerves grow out axons (the axon is the long, stringy part of the nerve). A thought was that nerves were structured as Snelson-Fuller tensegrity structures, but it was not obvious how that would relate to growth or anything else. A Snelson-Fuller structure is shown below. The structure stands erect not by compression, as in a pyramid or igloo, but rather because tension in the wires helps lift the metal pipes and puts them in compression. The nerve cell, shown further below, is similar, with actin protein as the outer, tensed skin, and a microtubule-protein core as the compressed pipes.

A Snelson-Fuller tensegrity sculpture in the graduate college courtyard at Princeton, where Steve and I got our PhDs, and an inspiration for our work.

Biothermodynamics was pretty basic 30 years ago (it still is today), and it was incorrectly thought that objects were more stable when put in compression. It didn’t take too much thermodynamics on my part to show otherwise, and so I started a part-time career in cell physiology. Consider first how mechanical force should affect the Gibbs free energy, G, of assembled microtubules (MTs). For any process at constant temperature and pressure, ∆G = work. If force is applied, we expect some elastic work will be put into the assembled MTs in an amount ∫f dz, where f is the force at every compression and ∫dz is the integral over the distance traveled. Assuming a small force, or a constant spring, f = kz with k as the spring constant. Integrating the above, ∆G = ∫kz dz = ½kz²; ∆G is always positive whether z is positive or negative. That is, the microtubule is most stable with no force, and is made less stable by any force, tension or compression.
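
Written out, the integration step is just the following (with z measured from the unstressed length):

```latex
\Delta G \;=\; \int_0^{z} f \, dz' \;=\; \int_0^{z} k\,z' \, dz' \;=\; \tfrac{1}{2} k z^{2} \;\ge\; 0,
\qquad \text{for any } z,\ \text{positive (tension) or negative (compression).}
```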

A cell showing what appears to be tensegrity: the microtubules (green) surrounded by actin (red). In nerves, Heidemann and I showed the actin is in tension and the microtubules in compression. From here.

Assuming that the microtubules in the nerve axon are generally in compression, as in the Snelson-Fuller structure, then pulling on the axon could potentially reduce that compression. Normally, we posited, this is done by the growth cone, but we could also do it by pulling. In either case, a decrease in the compression of the assembled microtubules should favor microtubule assembly.

To calculate the rates, I used absolute rate theory, something I’d learned from Dr. Mortimer Kostin, a most excellent thermodynamics professor. I assumed that the free energy of the monomer was unaffected by force, and that the microtubules were in pseudo-equilibrium with the monomer. Growth rates were predicted to be proportional to the decrease in G, and the prediction matched experimental data.

Our few efforts to cure nerve disease by pulling did not produce immediate results; it turns out to be hard to pull on nerves in the body. Still, we gained some publicity, and a variety of people seem to have found scientific and/or philosophical inspiration in this sort of tensegrity model for nerve growth. I particularly like this review article by Don Ingber in Scientific American. A little more out there is this view of consciousness, life, and the fate of the universe (where I got the cell picture). In general, tensegrity structures are tougher and more flexible than normal construction. A tensegrity structure will bend easily, but rarely break. It seems likely that your body is held together this way, and because of this you can carry heavy things and still move with flexibility. It also seems likely that bones are structured this way; as with nerves, they are reasonably flexible, and can be made to grow by pulling.

Now that I think about it, we should have done more theoretical or experimental work in this direction. I imagine that pulling on the nerve also affects the stability of the actin network by affecting the chain-configuration entropy. This might slow actin assembly, or perhaps not. It might have been worthwhile to look at new ways to pull, or at bone growth. In our in-vivo work we used an external magnetic field to pull. We might have looked at NASA funding too, since it’s been observed that astronauts grow a solid inch or two in outer space, while their bodies deteriorate. Presumably, the lack of gravity causes the calcite in the bones to grow, making a person less of a tensegrity structure. The muscle must grow too, just to keep up, but I don’t have a theory for muscle.

Robert Buxbaum, February 2, 2014. Vaguely related to this, I’ve written about architecture, art, and mechanical design.

How and why membrane reactors work

Here is a link to a 3-year-old essay of mine about how membrane reactors work and how you can use them to get past the normal limits of thermodynamics. The words are good, as is the example application, but I think I can write a shorter version now. Also, sorry to say, when I wrote the essay I was just beginning to make membrane reactors; my designs have gotten simpler since.

At left, for example, is a more modern, high-pressure membrane reactor design. A common size is a 72-tube reactor assembly, rated for high pressure. The area around the shell is used for heat transfer. Normally the reactor would sit with this end up, and the tube area filled or half-filled with catalyst, e.g. for the water gas shift reaction, CO + H2O –> CO2 + H2, for methanol reforming, CH3OH + H2O –> 3H2 + CO2, or for ammonia cracking, 2NH3 –> N2 + 3H2. According to normal thermodynamics, the extent of reaction for these reactions is negatively affected by pressure (the WGS is unaffected). Separation of the hydrogen generally requires high pressure and a separate step or two. This setup combines the steps of reaction and separation, gives you ultra-high purity, and avoids the normal limitations of thermodynamics.

Once equilibrium is reached in a normal reactor, your only option to drive the reaction further is by adjusting the temperature. For the WGS, you have to operate at low temperatures, 250-300 °C, if you want high conversion, and you have to cool externally to remove the heat of reaction. In a membrane reactor, you can operate in your preferred temperature range and you don’t have to work so hard to remove, or add, heat. Typically, with an MR, you want to operate at high reactor pressures and extract hydrogen at a lower pressure. The pressure difference between the reacting gas and the extracted hydrogen allows you to achieve high reaction extents (high conversions) at any temperature. The extent is higher because you are continuously removing product – H2 in this case.
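
To make the idea concrete, here is an idealized Python sketch of the WGS at equilibrium, with and without hydrogen removal. All the numbers are assumptions for illustration: an equimolar CO/H2O feed, a reactor at 10 bar, an equilibrium constant of about 20 (roughly the WGS near 350 °C), and a perfectly permeable membrane that clamps the H2 partial pressure at the permeate pressure.

```python
def bisect(f, lo, hi, tol=1e-9):
    """Simple bisection root finder on [lo, hi]."""
    flo = f(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) * flo > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

K      = 20.0    # assumed WGS equilibrium constant (~350 °C)
P      = 10.0    # reactor pressure, bar
p_perm = 0.5     # permeate-side H2 pressure, bar (membrane case)

# Closed reactor, 1 mol CO + 1 mol H2O: K = x^2 / (1-x)^2
x_closed = K**0.5 / (1.0 + K**0.5)

# Membrane reactor: H2 held at p_perm; CO, H2O, CO2 share the remaining pressure.
def residual(x):
    n_rest = 2.0 - x                               # mol of CO + H2O + CO2 remaining
    p_co2  = (x / n_rest) * (P - p_perm)
    p_co   = p_h2o = ((1.0 - x) / n_rest) * (P - p_perm)
    return p_co2 * p_perm / (p_co * p_h2o) - K

x_membrane = bisect(residual, 1e-6, 1.0 - 1e-6)

print(f"closed-reactor conversion:   {x_closed:.1%}")    # ~82%
print(f"membrane-reactor conversion: {x_membrane:.1%}")  # ~95%
```

The point of the sketch is only the direction of the effect: continuously pulling H2 out of the reacting gas lets the conversion climb well past the closed-system equilibrium, at the same temperature and pressure.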

Here’s where we sell membrane reactors; we also sell catalyst and tubes.