Tag Archives: reaction rate

A rotating disk bio-reactor for sewage treatment

One of the most effective designs for sewage treatment is the rotating disk bio-reactor, shown below. It is typically used in small-throughput sewage plants, but it performs quite well in larger plants too. I’d like to present an analysis of the reactor, and an explanation of why it works so well.


A rotating disk sewage reactor; ∂ is the thickness of the biofilm. It is related to W, the rotation rate in radians per second, and to D, the limiting diffusivity.

As shown, the reactor is fairly simple-looking, nothing more than a train of troughs filled with sewage-water, typically 3-6 feet deep, with a stack of discs rotating within. The discs are typically 7 to 14 feet in diameter (2-4 meters) and 1 cm apart. The shaft is typically close to the water level, but slightly above, and the rotation speed is adjustable. The device works because appropriate bio-organisms attach themselves to the disk, and the rotation ensures that they are fully (or reasonably) oxygenated.

How do we know the cells on the disc will be oxygenated? The key is the solubility of oxygen in water, and the fact that these discs are used only on the low biological oxygen demand (BOD) part of the sewage treatment process, where the sewage contains 40 ppm of soluble organics or less. The main reaction on the rotating disc is bio-oxidation of soluble carbohydrate (sugar) in a layer of wet slime attached to the disc.

H-C-OH (one carbohydrate unit) + O2 → CO2 + H2O.

As it happens, the solubility of pure oxygen in water is about 40 ppm at 1 atm. Since air contains 21% oxygen, we expect roughly an 8 ppm concentration of oxygen at the slime surface: 21% of 40 ppm ≈ 8 ppm. Given the reaction above, and the fact that oxygen diffuses at least five times more readily than sugar, we expect that one disc rotation will easily provide enough oxygen to remove 40 ppm of sugar in the slime at any speed of rotation, so long as the disc is in the air at least half of the time, as shown above.

Let’s now pick a rotation speed of 1/3 rpm (3 minutes per rotation) and see where that gets us in terms of the rate of organic removal. Since the disc is always in an area of low organic concentration, it becomes covered mostly with “rotifers”, organisms that do well in low-nutrient (low-BOD) sewage. Let’s now assume that mass transfer (diffusion) of sugar in the rotifer slime determines the thickness of the rotifer layer, and thus the rate of organic removal. We can calculate the diffusion depth of sugar, ∂, via the following equation, derived in my PhD thesis.

∂ = √(πDt)

Here, D is the diffusivity (cm²/s) of sugar in the rotifer slime attached to the disk, π = 3.1415…, and t is the contact time: 90 seconds in the above assumption, half of the 3-minute rotation spent under water. My expectation is that D in the rotifer slime will be similar to the diffusivity of sugar in water, about 3×10⁻⁶ cm²/s. Based on the above, we find the rotifer-layer thickness will be ∂ = √(0.00085 cm²) ≈ 0.03 cm, and the oxygen depth will be about 2.5 times that, 0.07 cm. If the discs are 1 cm apart, we find that about 14% of the fluid volume of the reactor will be filled with slime, with 2/5 of this rotifer-filled. This is as much as 1000 times more rotifers than you would get in an ordinary continuously stirred tank reactor, a CSTR to use the common acronym. We might now imagine that the volume of this sewage-treatment reactor can be as small as 1000 gallons, 1/1000 the size of a CSTR. Unfortunately it is not so; we’ll have to consider another limiting effect, diffusion of nutrients.
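
For readers who want to check the arithmetic, here is a minimal sketch of that calculation in Python; the diffusivity, contact time, and disc spacing are simply the values assumed above.

```python
import math

# Values taken from the discussion above
D_sugar = 3e-6        # cm^2/s, diffusivity of sugar in the rotifer slime (assumed ~ same as in water)
t_contact = 90.0      # s, time under water per rotation at 1/3 rpm (half of a 180 s rotation)
disc_gap = 1.0        # cm, spacing between discs

depth_sugar = math.sqrt(math.pi * D_sugar * t_contact)   # sugar diffusion depth, ~0.03 cm
depth_oxygen = 2.5 * depth_sugar                          # oxygen reaches ~2.5x deeper, ~0.07 cm

# Each 1 cm gap between discs is bounded by two slime layers, one per facing disc surface
slime_fraction = 2 * depth_oxygen / disc_gap              # ~14-15% of the trough volume
rotifer_fraction = depth_sugar / depth_oxygen             # ~2/5 of the slime is the rotifer layer

print(f"sugar depth   = {depth_sugar:.3f} cm")
print(f"oxygen depth  = {depth_oxygen:.3f} cm")
print(f"slime fraction of trough volume = {slime_fraction:.1%}, of which {rotifer_fraction:.0%} is rotifer layer")
```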

Consider the diffusive mass transfer of sugar from a 1,000,000 gal/day flow (43 liters/sec). Assume that at some point in the extraction you have a concentration C (g/cc) of sugar in the liquid, where C is between 40 ppm and 1 ppm. Let’s pick a volume of the reactor that is 1/20 the normal size for this flow (not 1/1000 the size; you’ll see why). That is to say, a trough whose volume is 50,000 gallons (200,000 liters, 200 m³). If the discs are 1 cm apart, this trough (or section of a trough) will have about 4×10⁸ cm² of submerged surface, and about 9×10⁸ cm² of total surface including the wetted disc in the air. The mass of organic that enters this section of trough is 44,000 C g/second, but this mass of sugar can only reach the rotifers by diffusion through a water-like diffusion layer of about 0.06 cm thickness, twice the thickness calculated above. The thickness is twice that calculated above because it includes the supernatant liquid beyond the slime layer. We now calculate the rate of mass diffusing into the disc: A×D×C/z = 8×10⁸ cm² × 3×10⁻⁶ cm²/s × C/(0.06 cm) = 40,000 C g/sec, and find that, for this tank size and rotation speed, the transfer rate of organic to the discs is about 2/3 as much as needed to absorb the incoming sugar. That is to say, a 50,000 gallon section is too small to reduce the concentration by a full factor of e at this speed of disc rotation.
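
A quick numerical restatement of that mass balance (Python; the flow, wetted area, and diffusion-layer thickness are the estimates given above):

```python
# Sugar supplied by the flow vs. sugar that can diffuse to the slime, per unit concentration C (g/cc)
Q = 44_000      # cc/s, roughly 1,000,000 gal/day of sewage
A = 8e8         # cm^2, wetted disc area in the 50,000 gallon trough section
D = 3e-6        # cm^2/s, diffusivity of sugar in water
z = 0.06        # cm, diffusion layer (slime plus supernatant), about twice the slime depth

incoming_per_C = Q          # g/s of sugar entering, per (g/cc) of concentration C
transfer_per_C = A * D / z  # g/s of sugar diffusing to the discs, per (g/cc) of C

print(f"incoming sugar     ~ {incoming_per_C:,.0f} C g/s")
print(f"diffusive transfer ~ {transfer_per_C:,.0f} C g/s  (less than the incoming load)")
```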

Based on the above calculation, I’m inclined to increase the speed of rotation to 0.75 rpm. At this speed, the rotifer-slime layer will be 2/3 as thick, about 0.2 mm, and we expect a proportionally thinner diffusion barrier in the supernatant. At this faster speed there is now 3/2 as much diffusion transfer per area, because of the thinner diffusion barrier, and we can expect a 50,000 gallon reactor section to reduce the concentration by a factor of 2.718, that is, to C/e. Note that the mass transfer rate to the discs is always proportional to C. If we find that 50,000 gallons of tank reduces the concentration by a factor of e, we find that we need 150,000 gallons of reactor to reduce the concentration of sugar from 40 ppm to 2 ppm, the legal target, since ln(40/2) = 3. This 150,000 gallons is a remarkably small volume to reduce the sBOD concentration from 40 ppm to 2 ppm (sBOD = soluble biological oxygen demand), and the energy use is small too if the disc bearings are good.
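
The trough sizing then follows directly; here is a sketch, under the text’s assumption that one 50,000 gallon section at 0.75 rpm removes one factor of e:

```python
import math

C_in, C_out = 40.0, 2.0     # ppm sBOD, inlet and legal target
section_gal = 50_000        # gallons of trough per factor-of-e of removal, from the estimate above

e_folds = math.log(C_in / C_out)     # ln(40/2) = 3.0
total_gal = e_folds * section_gal    # ~150,000 gallons of rotating-disc trough

print(f"factors of e needed: {e_folds:.1f}")
print(f"total trough volume: ~{total_gal:,.0f} gallons")
```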

The Holly sewage treatment plant is the only one in Oakland County, MI using the rotating disc contactor technology. It has a flow of 1,000,000 gallons per day and a contactor trough of 215,000 gallons, about what we’d expect, though their rotation speed is somewhat higher, over 1 rpm, and their input concentration is likely lower than 40 ppm. For the first stage of sewage treatment, the Holly plant uses a vertical-draft, trickle-bed reactor. That is, they drizzle the sewage-liquids over a slime-coated packing to reduce the sBOD concentration from about 200 ppm before feeding the flow to the rotating discs. My sense of the reason they don’t do the entire extraction with a trickle bed is that the discs use far less energy.

I should also add that the air-side part of the disc rotation isn’t just oxygen storage, as my analysis above might suggest. Some non-sugar reactions take place in the relatively anoxic environment there, and in the liquid at the bottom of the trough. In these regions, iron reacts with phosphate, and nitrate removal takes place; these are other important requirements of sewage treatment.

Robert E. Buxbaum, July 18, 2017. As an exercise, find the volume necessary for a plug-flow reactor or a continuously stirred tank reactor (CSTR) to reduce the concentration of sugar from 40 ppm to 2 ppm. Assume 1,000,000 gal per day, an excess of oxygen in these reactors, and a first-order reaction, dC/dt = -(0.4/hr) C. At some point in the future I plan to analyze these options, and the trickle-bed reactor, too.
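
For anyone who wants to check their answer to the exercise, here is one way to set it up numerically, using the standard first-order design equations V = Q·ln(C_in/C_out)/k for plug flow and V = Q·(C_in/C_out − 1)/k for a CSTR; the flow and rate constant are those given above.

```python
import math

Q = 1_000_000 / 24       # gal/hr, from 1,000,000 gallons per day
k = 0.4                  # 1/hr, first-order rate constant: dC/dt = -k*C
C_in, C_out = 40.0, 2.0  # ppm sugar, inlet and target

V_pfr = Q * math.log(C_in / C_out) / k    # plug-flow reactor volume
V_cstr = Q * (C_in / C_out - 1) / k       # CSTR volume

print(f"plug-flow reactor: ~{V_pfr:,.0f} gallons")
print(f"CSTR:              ~{V_cstr:,.0f} gallons")
```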

Nuclear fusion

I got my PhD at Princeton University 33 years ago (1981) working on the engineering of nuclear fusion reactors, and I thought I’d use this blog to rethink the issues. I find I’m still of the opinion that developing fusion is important, as it seems the best long-range power option. Civilization will still need significant electric power 300 to 3000 years from now, it seems, when most other fuel sources are gone. Fusion is also one of the few options for long-range space exploration, needed if we ever decide to send colonies to Alpha Centauri or Saturn. I thought fusion would be ready by now, but it is not, and commercial use seems unlikely for the next ten years at least, an indication of the difficulties involved, and of a certain lack of urgency.

Oil, gas, and uranium didn’t run out like we’d predicted in the mid ’70s. Instead, population growth slowed, new supplies were found, and better methods were developed to recover and use them. Shale oil and fracking unlocked hydrocarbons we thought were unusable, and nuclear fission reactors got better: safer and more efficient. At the same time, the more we studied, the clearer it became that fusion’s technical problems are much harder to tame than uranium fission’s.

Uranium fission was, and is, frighteningly simple, far simpler than even the most basic fusion reactor. The first nuclear fission reactor (1942) involved nothing more than uranium pellets in a pile of graphite bricks stacked in a converted squash court at the University of Chicago. No outside effort was needed to get the large, unstable uranium atoms to split into smaller, more stable ones. The heat released was modest, and control was maintained by people lifting and lowering cadmium control rods while standing on the pile.

A fusion reactor requires high temperature or energy to make anything happen. Fusion energy is produced by combining small, heavy hydrogen atoms into helium, a bigger, more stable atom (see figure). To run this reaction you need to operate at the equivalent of about 500,000,000 degrees C, and containing it requires (typically) a magnetic bottle, something far more complex than a pile of graphite bricks. The reward was smaller too: “only” about 1/13th as much energy per event as fission. We knew the magnetic bottles were going to be tricky; for example, there was no obvious heat transfer and control method. But fusion seemed important enough, and the problems seemed manageable enough, that fusion power seemed worth pursuing, with just enough difficulties to make it a challenge.


Basic fusion reaction: deuterium + tritium react to give helium, a neutron and energy.

The plan at Princeton, and most everywhere, was to use a TOKAMAK, a doughnut-shaped reactor like the one shown below, but roughly twice as big; TOKAMAK is a Russian acronym for “toroidal chamber with magnetic coils.” The doughnut served as one side of an enormous transformer. Hydrogen fuel was ionized into a plasma (a neutral soup of protons and electrons) and heated to 300,000,000°C by a current induced in the plasma by varying the current in the other side of the transformer. Plasma containment was provided by enormous magnets on the top and bottom, and by ring-shaped magnets arranged around the torus.

As development went on, we found we kept needing bigger and bigger doughnuts and stronger and stronger magnets in an effort to balance heat loss with fusion heating. The number density of hydrogen atoms per volume, n, is proportional to the magnetic strength. This is important because the fusion heat rate per volume is proportional to n squared, n², while heat loss is proportional to n divided by the residence time, something we called tau, τ. The main heat loss was the hot plasma reaching the reactor surface. Because of this, a heat balance ratio, heat generated divided by heat lost, was seen to be important, and that ratio is more or less proportional to nτ. As the target temperatures increased, we found we needed larger and larger values of nτ to get a positive heat balance. This translated to ever larger reactors and ever stronger magnetic fields, but even here there was a limit, about 1 billion Kelvin, a thermodynamic temperature above which the fusion reaction went backward and no energy was produced. The Princeton design was huge, with super-strong magnets, and was operated at 300 million °C, near the top of the reaction curve. If the temperature went much above or below this, the fire would go out. There was little room for error, and relatively little energy output per volume, compared to fission.
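
In symbols, the balance described above works out roughly as: heat generated per volume ∝ n², heat lost per volume ∝ n/τ, so that (heat generated)/(heat lost) ∝ n²/(n/τ) = nτ. That ratio is why ever-larger nτ, and hence ever-bigger machines and stronger magnets, kept being required.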


Fusion reaction options and reaction rates.

The most likely reaction involved deuterium and tritium, referred to as D and T. This is the reaction of the two heavy isotopes of hydrogen shown in the figure above, the same reaction used in hydrogen bombs, a point we rarely made to the public. For each reaction, D + T → He + n, you get 17.6 million electron volts (17.6 MeV). This is 17.6 million times the energy an electron gains moving across one volt, but only 1/13 the energy of a fission reaction. By comparison, the energy of water formation, H2 + 1/2 O2 → H2O, is the equivalent of two electrons moving across 1.2 volts, or 2.4 electron volts (eV), some 7 million times less than fusion.

The Princeton design involved reacting 40 gm/hr of heavy hydrogen to produce 8 mol/hr of helium and 4000 MW of heat. The heat was to be converted to electricity at 38% efficiency using a topping cycle, a modern (relatively untried) design. Of the roughly 1500 MW of electricity that was supposed to be produced, all but about 400 MW was to be delivered to the power grid, if everything worked right. Sorry to say, the value of the electricity did not rise anywhere near as fast as the cost of the reactor and turbines. Another problem: 1100 MW was more than could be easily absorbed by any electrical grid. The output was high and steady, and could not be easily adjusted to match fluctuating customer demand. By contrast, a coal plant’s or fuel cell’s output can be easily adjusted (and a nuclear plant’s with a little more difficulty).
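
The quoted heat and electric outputs follow from the reaction energy; here is a quick check in Python (the 40 g/hr feed, 17.6 MeV per reaction, and 38% efficiency are the figures given above):

```python
AVOGADRO = 6.022e23     # reactions per mol
EV_TO_J = 1.602e-19     # joules per electron volt

mol_per_hr = 8.0                        # mol/hr of D-T reactions: 40 g/hr of D+T, at 2 g D + 3 g T = 5 g per mol reacted
E_reaction_J = 17.6e6 * EV_TO_J         # 17.6 MeV per D + T -> He + n

heat_W = mol_per_hr * AVOGADRO * E_reaction_J / 3600.0   # ~3.8e9 W, i.e. the ~4000 MW quoted
electric_W = 0.38 * heat_W                                # ~1400-1500 MW at 38% efficiency

print(f"fusion heat:     ~{heat_W / 1e6:,.0f} MW")
print(f"electric output: ~{electric_W / 1e6:,.0f} MW")
print(f"D-T energy vs. H2 + 1/2 O2 (2.4 eV): ~{17.6e6 / 2.4 / 1e6:.0f} million times larger")
```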

Because of the need for heat balance, it turned out that at least 9% of the hydrogen had to be burnt per pass through the reactor. The heat lost per mol by conduction to the wall was, to good approximation, the heat capacity of each mol of hydrogen ions, 82 J/°C·mol, times the temperature of the ions, 300 million °C, divided by the containment time, τ. The Princeton design was supposed to have a containment time of about 4 seconds. As a result, the heat loss by conduction was 6.2 GW per mol. This had to be matched by the molar heat of reaction that stayed in the plasma: 17.6 MeV times Faraday’s constant, 96,485, divided by the 4 second containment time (= 430 GW per mol reacted), divided by 5. Of the 430 GW/mol produced in fusion reactions, only 1/5 remains in the plasma (= 86 GW/mol); the other 4/5 of the reaction energy leaves with the neutron. To get the heat balance right, at least 9% of the hydrogen must react per pass through the reactor; there were also some heat losses from radiation, so the real number is higher. Burn a significantly higher or lower fraction of the hydrogen and you had problems. The only other solution was to increase τ beyond 4 seconds, but this meant ever bigger reactors.
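
That balance, written as a quick calculation (Python; the heat capacity, temperature, containment time, and 1/5 energy retention are the values used above; radiation losses are left out, which is why this conduction-only estimate lands a little below the 9% quoted):

```python
FARADAY = 96_485.0      # J per mol per eV-per-particle (Avogadro's number times the electron charge)

cp_ion = 82.0           # J/(mol*degC), heat capacity of the hydrogen ions
T_plasma = 300e6        # degC, plasma temperature
tau = 4.0               # s, containment (residence) time

loss_per_mol_plasma = cp_ion * T_plasma / tau            # ~6.2 GW per mol of ions, conduction to the wall

E_per_mol_reacted = 17.6e6 * FARADAY                     # ~1.7e12 J per mol of D-T reactions
retained_per_mol_reacted = E_per_mol_reacted / tau / 5   # ~86 GW/mol; only the helium's 1/5 stays in the plasma

burn_fraction = loss_per_mol_plasma / retained_per_mol_reacted
print(f"burn-up needed (conduction only): {burn_fraction:.1%}")   # ~7%; radiation losses push it toward 9% or more
print(f"total hydrogen feed at 9% burn-up: ~{40 / 0.09:.0f} g/hr, about a pound per hour")
```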

There was also a material-handling issue: to get enough fuel hydrogen into the center of the reactor, quite a lot of radioactive gas had to be handled and extracted from the plasma chamber. The fuel was to be frozen into tiny spheres of nearly solid hydrogen and injected into the reactor at supersonic velocity. Any slower and the spheres would evaporate before reaching the center. As the 40 grams per hour that reacted was only 9% of the feed, it became clear that we had to be ready to produce and inject about 1 pound per hour of tiny spheres. These “snowballs-in-hell” had to be small so they didn’t dampen the fire. The vacuum system had to be big enough to handle the pound per hour or so of unburned hydrogen and helium ash while keeping the pressure near total vacuum. You then had to separate the hydrogen from the helium ash and remake the little spheres to feed back to the reactor. None of these were easy engineering problems, but I found them enjoyable enough. With a colleague, I came up with a cute, efficient high-vacuum pump and recycling system, and published it here.

Yet another engineering challenge concerned finding a material for the first wall, the inner wall of the doughnut facing the plasma. Of the 4000 MW of heat energy produced, all the conduction and radiation heat, about 1000 MW, was deposited in the first wall and had to be conducted away. Conducting this heat means that the wall must have an enormous coolant flow and must withstand an enormous amount of thermal stress. One possible approach was to use a liquid wall, but I’ve recently come up with a rather nicer solid-wall solution (I think) and have filed a patent; more on that later, perhaps after/if the patent is accepted. Another engineering challenge was making T, tritium, for the D-T reaction. Tritium is not found in nature, but has to be made from the neutron created in the reaction and from lithium in a breeder blanket, Li + n → He + T. I examined all possible options for extracting this tritium from the lithium at low concentrations as part of my PhD thesis, and eventually found a nice solution. The education I got in the process is used in my REB Research hydrogen engineering business.


Man inside the fusion reactor doughnut at ITER. He’d better leave before the 8,000,000°C plasma turns on.

Because of its complexity, and all these engineering challenges, fusion power never reached the maturity of fission power; and then Three Mile Island happened and ruined the enthusiasm for all things nuclear. There were some claims that fusion would be safer than fission, but because of the complexity, and the improvements in fission, I am not convinced that fusion would ever be even as safe. And the long-term need keeps moving out: we keep finding more uranium, and we’ve developed breeder reactors and a thorium cycle, technologies that make it very unlikely we will run out of fission material any time soon.

The main near-term advantage I see for fusion over fission is that there are fewer radioactive products (see comparison). A secondary advantage is neutrons: fusion reactors make excess neutrons that can be used to make tritium or other unusual elements, and a need for one of these could favor the development of fusion power. And finally, there’s the long-term need: space exploration, or basic power when we run out of coal, uranium, and thorium. Fine advantages, but unlikely to be important for a hundred years.

Robert E. Buxbaum, March 1, 2014. Here’s a post on land use, on the aesthetics of engineering design, and on the health risks of nuclear power. The sun’s nuclear fusion reactor is unstable too — one possible source of the chaotic behavior of the climate. Here’s a control joke.

Nerves are tensegrity structures and grow when pulled

No one quite knows how nerve cells learn things. It is commonly, but incorrectly, thought that you cannot get new nerve cells in the brain, nor get existing brain cells to grow out further. In fact, people have made new nerve cells, and when I was a professor at Michigan State, a physiology colleague and I got brain and sensory nerves to grow out axons by pulling on them, without the use of drugs.

I had just moved to Michigan State, a fresh PhD from Princeton, as an assistant professor of chemical engineering. Steve Heidemann, a physiology professor a few years ahead of me, had also gotten his PhD at Princeton; we were both New Yorkers. He had been studying nerve structure, and wondered how the growth cone makes nerves grow out axons (the axon is the long, stringy part of the nerve). One thought was that nerves were structured as Snelson-Fuller tensegrity structures, but it was not obvious how that would relate to growth or anything else. A Snelson-Fuller structure is shown below; it stands erect not by compression, as in a pyramid or igloo, but because tension in the wires lifts the metal pipes and puts them in compression. The nerve cell, shown further below, is similar, with actin protein as the outer, tensed skin, and a microtubule-protein core as the compressed pipes.


A Snelson-Fuller tensegrity sculpture in the graduate college courtyard at Princeton, an inspiration for our work.

Biothermodynamics was pretty basic 30 years ago (it still is today), and it was incorrectly thought that objects were more stable when put in compression. It didn’t take too much thermodynamics on my part to show otherwise, and so I started a part-time career in cell physiology. Consider first how mechanical force should affect the Gibbs free energy, G, of assembled microtubules. For any process at constant temperature and pressure, ∆G = work. If force is applied, we expect some elastic work will be put into the assembled microtubules in an amount ∫f dz, where f is the force at each displacement and z is the distance traveled. Assuming a small displacement, or a constant spring, f = kz with k as the spring constant. Integrating the above, ∆G = ∫kz dz = ½kz². ∆G is positive whether z is positive or negative; that is, the microtubule is most stable with no force, and is made less stable by any force, tension or compression.
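
As a tiny numerical illustration of that last point, the sketch below evaluates ∆G = ½kz² for a few displacements; the spring constant and z values here are arbitrary, purely illustrative numbers:

```python
k = 1.0                                  # arbitrary spring constant, illustrative only
for z in (-0.5, -0.1, 0.0, 0.1, 0.5):    # arbitrary displacements: negative = compression, positive = tension
    delta_G = 0.5 * k * z**2             # elastic work stored in the assembled microtubule
    print(f"z = {z:+.1f} -> delta_G = {delta_G:.3f} (never negative)")
```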


A cell showing what appears to be tensegrity. The microtubules (green) are surrounded by actin (red). In nerves, Heidemann and I showed, the actin is in tension and the microtubules are in compression.

Assuming that microtubules in the nerve axon are generally in compression, as in the Snelson-Fuller structure, pulling on the axon could reduce that compression. Normally, we posited, this is done by the growth cone, but we could also do it by pulling externally. In either case, a decrease in the compression of the assembled microtubules should favor microtubule assembly.

To calculate the rates, I used absolute rate theory, something I’d learned from Dr. Mortimer Kostin, a most excellent thermodynamics professor. I assumed that the free energy of the monomer was unaffected by force, and that the microtubules were in pseudo-equilibrium with the monomer. Growth rates were predicted to be proportional to the decrease in G, and the prediction matched experimental data.

Our few efforts to cure nerve disease by pulling did not produce immediate results; it turns out to be hard to pull on nerves in the body. Still, we gained some publicity, and a variety of people seem to have found scientific and/or philosophical inspiration in this sort of tensegrity model for nerve growth. I particularly like this review article by Don Ingber in Scientific American. A little more out there is this view of consciousness, life, and the fate of the universe (where I got the cell picture). In general, tensegrity structures are tougher and more flexible than normal construction. A tensegrity structure will bend easily, but rarely break. It seems likely that your body is held together this way, and because of this you can carry heavy things and still move with flexibility. It also seems likely that bones are structured this way; as with nerves, they are reasonably flexible, and can be made to grow by pulling.

Now that I think about it, we should have done more theoretical and experimental work in this direction. I imagine that pulling on the nerve also affects the stability of the actin network by affecting the chain-configuration entropy. This might slow actin assembly, or perhaps not. It might have been worthwhile to look at new ways to pull, or at bone growth. In our in-vivo work we used an external magnetic field to pull. We might have looked at NASA funding too, since it’s been observed that astronauts grow in outer space by a solid inch or two while their bodies deteriorate. Presumably, the lack of gravity lets the bones and spine lengthen, making a person less of a tensegrity structure. The muscle must grow too, just to keep up, but I don’t have a theory for muscle.

Robert Buxbaum, February 2, 2014. Vaguely related to this, I’ve written about architecture, art, and mechanical design.