# The energy cost of airplanes, trains, and buses

I’ve come to conclude that airplane travel and buses make a lot more sense than high-speed trains. Consider the marginal energy cost of a 90 kg (200 lb) person getting on a 737-800, the most commonly flown commercial jet in US service. For this plane, the lift-to-drag ratio at cruise speed is 19, suggesting an average value of 15 or so for a 1 hr trip when you include take-off and landing. The energy cost of this trip is related to the cost of jet fuel, about \$3.20/gallon, or about \$1/kg. The heat energy content of jet fuel is 44 MJ/kg or 18,800 Btu/lb. Assuming an average engine efficiency of 21%, we calculate a motive-energy cost of 1.1 x 10⁻⁷ \$/J, or 40¢/kWh. The amount of energy per mile is just force times distance: 1 mile = 1609 m. Force is calculated from the person’s weight (in Newtons) divided by the lift-to-drag ratio. The energy per mile is thus 90 x 9.8 x 1609/15 = 94,600 J. Multiplying by the \$-per-J, we find the marginal cost of his transport is 1¢ per mile, virtually nothing.
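
The arithmetic above can be checked in a few lines of Python — a sketch using the article’s assumed numbers (fuel price, engine efficiency, trip-average lift-to-drag), not measured data:

```python
# Marginal energy cost of flying one extra 90 kg passenger on a 737-800.
FUEL_PRICE = 1.00      # $/kg of jet fuel (about $3.20/gallon)
FUEL_ENERGY = 44e6     # J/kg, heat energy content of jet fuel
ENGINE_EFF = 0.21      # assumed average engine efficiency
LIFT_TO_DRAG = 15      # trip-average L/D, including take-off and landing
MASS = 90              # kg, passenger plus luggage
G = 9.8                # m/s^2
MILE = 1609            # m

cost_per_joule = FUEL_PRICE / (FUEL_ENERGY * ENGINE_EFF)  # $/J of motive energy
drag_force = MASS * G / LIFT_TO_DRAG                      # N of extra drag
energy_per_mile = drag_force * MILE                       # J/mile (force x distance)

print(f"motive energy cost: {cost_per_joule:.2e} $/J")
print(f"energy per mile: {energy_per_mile/1000:.0f} kJ")
print(f"marginal cost: {100 * cost_per_joule * energy_per_mile:.1f} cents/mile")
```

Running this reproduces the 1¢/mile figure in the text.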

The Wright brothers testing their gliders in 1901 (left) and 1902 (right). The angle of the tether reflects a dramatic improvement in lift-to-drag ratio; the marginal cost per mile is inversely proportional to the lift-to-drag ratio.

The marginal cost for carrying a 200 lb person from Detroit to NY (500 miles) is 1¢/mile x 500 miles = \$5: hardly anything compared to the cost of driving. No wonder airplanes offer crazy-low fares to fill seats on empty flights. But this is just the marginal cost. The average energy cost per passenger is higher since it includes the weight of the plane. On a reasonably full 737 flight, the passengers and luggage weigh about 1/4 as much as the plane and its fuel. Effectively, each passenger weighs 800 lbs, suggesting a 4¢/mile energy cost, or \$20 of energy per passenger for the flight from Detroit to NY. Though the rate of fuel burn is high, about 5000 lbs/hr, the cost is low because of the high speed and the number of passengers. Stated another way, the 737 gets 80 passenger-miles per gallon, a somewhat lower mpg than the 91 claimed for a full 747.
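
The step from marginal cost to average cost can be sketched the same way — again using the article’s numbers (the 4x effective-weight factor and the \$3.20/gallon fuel price):

```python
# Average (not marginal) energy cost: on a full 737, the passengers weigh
# about 1/4 of the loaded plane, so each 200 lb passenger effectively
# "carries" 800 lbs of aircraft.
MARGINAL_COST = 0.01   # $/mile for the 200 lb passenger alone
WEIGHT_FACTOR = 4      # effective weight multiplier: 800 lb / 200 lb
FUEL_PRICE_GAL = 3.20  # $/gallon of jet fuel

avg_cost = MARGINAL_COST * WEIGHT_FACTOR   # $/mile per passenger
passenger_mpg = FUEL_PRICE_GAL / avg_cost  # passenger-miles per gallon

print(f"average cost: {100 * avg_cost:.0f} cents/mile")
print(f"Detroit->NY (500 mi): ${avg_cost * 500:.0f} of fuel per passenger")
print(f"fuel economy: {passenger_mpg:.0f} passenger-miles/gallon")
```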

Passengers must pay more than \$20, of course, because of wages, capital, interest, profit, taxes, and landing fees. Still, one can see how discount airlines could make money if they arrange a good deal with a hub airport, one that allows them low landing fees and lets them buy fuel at near cost.

Compare this to any proposed super-fast or mag-lev train. Over any significant distance, the plane will be cheaper, faster, and about as energy-efficient. Current US passenger trains, when fairly full, boast a fuel economy of 200 passenger-miles per gallon, but they are rarely full. Currently, they take some 15 hours to go from Detroit to NY, in part because they go slow, and in part because they go via longer routes, visiting Toronto and Montreal in this case, with many stops along the way. With this long route, even if the train got 200 passenger-mpg, the 750 mile trip would use 3.75 gallons per passenger, compared to 6.25 for the flight above. This is a savings of 2.5 gallons, or \$8, but it comes at a cost of 15 hours of a passenger’s life. Even if train speeds were doubled, the trip would still take more than 7.5 hours including stops, and the energy cost would be higher. As for price, beyond the costs of wages, capital, interest, profit, taxes, and depot fees — similar to those for air traffic — you have to add the cost of new track and track upkeep. While I’d be happy to see better train signaling to allow passenger trains to go 100 mph on current, freight-compatible lines, I can see little benefit to government-funded projects to add parallel, dedicated track for 150+ mph trains that will still, likely, be half-full.
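
The plane-vs-train fuel comparison works out as follows (a sketch of the article’s numbers: a 500-mile flight at 80 passenger-mpg vs a 750-mile rail route at 200 passenger-mpg):

```python
# Fuel per passenger, Detroit -> NY, plane vs train.
plane_gal = 500 / 80    # 500 mi flight at 80 passenger-mpg
train_gal = 750 / 200   # 750 mi route (via Canada) at 200 passenger-mpg
savings_gal = plane_gal - train_gal

print(f"plane: {plane_gal:.2f} gal, train: {train_gal:.2f} gal")
print(f"train savings: {savings_gal:.1f} gal, about ${savings_gal * 3.20:.0f} at $3.20/gal")
```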

You may now ask about cities that don’t have good airports. Something else that dampens my enthusiasm for super trains is the appearance of a new generation of short take-off and landing commercial jets, and of a new generation of comfortable buses. Some years ago, I noted that Detroit’s Coleman Young airport no longer has commercial traffic because its runway was too short, 1051 m. I’m happy to report that Bombardier’s new CS100s should make small airports like this usable. A CS100 will hold 120 passengers, requires only 1463 m of runway, and is quiet enough for city use. The economics are such that it’s hard to imagine mag-lev beating this for the proposed US high-speed train routes: Dallas to Houston; LA to San José to San Francisco; or Chicago-Detroit-Toledo-Cleveland-Pittsburgh. So far the US has kept out these planes because Boeing claims unfair competition, but I trust that this is just a delay. As for shorter trips, the modern buses are as fast and energy-efficient as trains, and far cheaper because they share road costs with cars and trucks.

If the US does want to spend money on transport, I’d suggest improving inner-city airports. The US could also fund development of yet-better short take-off planes, perhaps made with carbon fiber, or with flexible wing structures to improve lift-to-drag during take-offs and landings. Higher train speeds should be available with better signaling and with passenger trains that lean more into a curve, but even this does not have to be super high-tech. And for 100-200 mile intercity traffic, I suspect the best solution is to improve the highways and buses. If you want low pollution and high efficiency, how about hydrogen hybrid buses?

Robert Buxbaum, October 30, 2017. I taught engineering for 10 years at Michigan State, and my company, REB Research, makes hydrogen generators and hydrogen purifiers.

# Advanced windmills + 20 years = field of junk

Everything wears out. This can be a comforting or a depressing thought, but it’s a truth. No old mistake, however egregious, lasts forever, and no bold advance avoids decay. At best, last year’s advance will pay for itself with interest, will wear out gracefully, and will be recalled fondly by aficionados after it’s replaced by something better. Water wheels and early steamships are examples of this type of bold advance. Unfortunately, it is often the case that last year’s innovation turns out to be no advance at all: a technological dead end that never pays for itself, and becomes a dangerous, rotting eyesore or, worse, a laughing-stock or a blot on the ecology. Our first two generations of advanced windmill farms seem to match this description; perhaps the next generation will be better, but here are some thoughts on lessons learned from the existing fields of rotting windmills.

The ancient-design windmills of Don Quixote’s Spain (1300?) were boons. Farmers used them to grind grain or cut wood, and to pump drinking water. Holland used similar early windmills to drain their land. So several American presidents came to believe advanced-design windmills would be similar boons if used for continuous electric power generation. It didn’t work, and many of the problems could have been seen at the start. The farmer didn’t care when his water was pumped or when his wood was cut, but when you’re generating electricity, there is a need to match the power demand exactly. Whenever the customer turns on the switch, electricity is expected to flow at the appropriate wattage; at other times any power generated is a waste or a nuisance. But electric generator-windmills do not produce power on demand; they produce power when the wind blows. The mismatch of wind and electric demand has bedeviled windmill reliability and economic return. It will likely continue to do so until we find a good way to store electric power cheaply. Until then, windmills will not be able to produce electricity at prices that compete with cheap coal and nuclear power.

There is also the problem of repair. The old windmills of Holland still turn a century later because they were relatively robust, and relatively easy to maintain. The modern windmills of the US stand much taller and move much faster. They are often hit and damaged by lightning strikes, and their fast-turning gears tend to wear out fast. Once damaged, modern windmills are not readily fixed. They are made of advanced fiberglass materials spun on special molds. Worse yet, they are constructed in mountainous, remote locations. Such blades cannot be replaced by amateurs, and even the gears are not readily accessed for repair. More than half of the great power-windmills built in the last 35 years have worn out and are unlikely to ever get repaired. Driving past, you see fields of them sitting idle; the ones still turning look like they will wear out soon. The companies that made and installed these behemoths are mostly out of the business, so there is no one there to take them down even if there were an economic incentive to do so. Even where a company could be found to fix the old windmills, no one would pay for it, as there is insufficient economic return — the electricity is worth less than the repair.

Komoa Wind Farm in Kona, Hawaii, June 2010; A field of modern design wind-turbines already ruined by wear, wind, and lightning. — Friends of Grand Ronde Valley.

A single rusting windmill would be bad enough, but modern wind turbines were put up as wind farms with nominal power production targeted to match the output of small coal-fired generators. These wind farms require a lot of area, covering many square miles along some of the most beautiful mountain ranges and ridges — places chosen because the wind was strong.

Putting up these massive farms of windmills led to a situation where the government had to pay for construction of the project, and often where the government provided the land. This generous spending gives the taxpayer the risk, and often gives a political gain — generally to a contributor. But there is very little political gain in paying for the repair or removal of the windmills. And since the electricity value is less than the repair cost, the owners (friends of the politician) generally leave the broken hulks to sit and rot. Politicians don’t like to pay to fix their past mistakes, as doing so undermines their next boondoggle by suggesting that it, too, will someday rust apart without ever paying for itself.

So what can be done? I wish I could suggest less arrogance and political corruption, but I see no way to achieve that. As the poet wrote about Ozymandias (Ramses II) and his disastrous building projects, the leader inevitably believes: “My name is Ozymandias, king of kings; look on my works, ye mighty, and despair.” So I’ll propose some other, less ambitious ideas. For one, smaller demonstration projects closer to the customer: first see if a single windmill pays for itself, and only then build a second. Also, electricity storage is absolutely key. I think it is worthwhile to store excess wind power as hydrogen (hydrogen storage is far cheaper than batteries), and the thermodynamics are not bad.

Robert E. Buxbaum, January 3, 2016. These comments are not entirely altruistic. I own a company that makes hydrogen generators and hydrogen purifiers. If the government were to take my suggestions I would benefit.

# my electric cart of the future

Buxbaum and Sperka show off the (shopping) cart of the future, Oak Park parade, July 4, 2015.

A Roman chariot did quite well with only 1 horse-power, while the average US car requires 100 horses. Part of the problem is that our cars weigh more than a chariot and go faster, 80 mph vs 25 mph. But most city applications don’t need all that weight nor all of that speed. 20-25 mph is fine for round-town errands, and should be particularly suited to use by young drivers and seniors.

To show what can be done with a light vehicle that only has to go 20 mph, I made this modified shopping cart and fitted it with a small, 1 hp motor. I call it the cart-of-the-future and paraded around with it at our last 4th of July parade. It’s high off the ground and reasonably wide for stability, and it has the shopping cart cage and seat belts for safety. There is also speed control. We went pretty slow in the parade, but here’s a link to a video of the cart zipping down the street at 17.5 mph.

In the 2 months since this picture was taken, I’ve modified the cart to have a chain drive and a rear-wheel differential — helpful for turning. My next modification, if I get to it, will be to switch to hydrogen power via a fuel cell. One of the main products we make is hydrogen generators, and I’m hoping to use the cart to advertise the advantages of hydrogen power.

Robert E. Buxbaum, August 28, 2015. I’m the one in the beige suit.

# The mass of a car and its mpg.

Back when I was an assistant professor at Michigan State University (MSU), they had a mileage olympics between the various engineering schools. Michigan State’s car got over 800 mpg, and lost soundly. By contrast, my current car, a Saab 9-2, gets about 30 miles per gallon on the highway, about average for US cars, and 22 to 23 mpg in the city in the summer. That’s about 1/40th the gas mileage of the Michigan State car, about 2/3 the mileage of the 1978 VW Rabbit I drove as a young professor, and the same as a Model A Ford. Why so low? My basic answer: the current car weighs a lot more.

As a first step to analyzing the energy drain of my car, or MSU’s, note that the energy content of gasoline is about 123 MJ/gallon. Thus, if my engine was 27% efficient (reasonably likely) and I got 22.5 mpg (36 km/gallon) driving around town, that would mean I was using about 0.92 MJ/km of motive energy. Now all I need to know is where this energy is going (the MSU car got double this efficiency, but went 40 times further).
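
This motive-energy budget is simple enough to sketch in code — using the article’s assumed figures of 123 MJ/gallon and 27% engine efficiency:

```python
# Motive energy per km for around-town driving at 22.5 mpg.
GAS_ENERGY = 123e6   # J/gallon, energy content of gasoline
EFFICIENCY = 0.27    # assumed engine efficiency
KM_PER_GALLON = 36   # 22.5 mpg expressed in km/gallon

motive_energy_per_km = GAS_ENERGY * EFFICIENCY / KM_PER_GALLON  # J/km

print(f"motive energy: {motive_energy_per_km/1e6:.3f} MJ/km")
```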

The first energy sink I considered was rolling drag. To measure this without the fancy equipment we had at MSU, I put my car in neutral on a flat surface at 22 mph and measured how long it took for the speed to drop to 19.5 mph. From this time, 14.5 sec, and the speed drop, I calculated that the car had a rolling drag of 1.4% of its weight (if you had college physics you should be able to repeat this calculation). Since I and the car together weigh about 1700 kg, or 3790 lb, the drag is 53 lb or 233 Nt (the MSU car had far less, perhaps 8 lb). For any friction, the loss per distance is F•x, or 233 kJ/km for my vehicle in the summer, independent of speed. This is significant, but clearly there are other energy sinks involved. In winter, the rolling drag is about 50% higher: the effect of gooey grease, I guess.
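
Starting from the 1.4%-of-weight figure above, the drag force and the per-km energy loss follow directly (a sketch, taking the article’s drag fraction as given):

```python
# Rolling drag and its energy cost per km for a 1700 kg car + driver.
MASS = 1700            # kg, car plus driver
G = 9.8                # m/s^2
DRAG_FRACTION = 0.014  # rolling drag as a fraction of weight (measured by coast-down)
LB_PER_N = 1 / 4.448   # pounds per newton

rolling_drag = DRAG_FRACTION * MASS * G  # N
energy_per_km = rolling_drag * 1000      # J/km: friction loss is force x distance

print(f"rolling drag: {rolling_drag:.0f} N ({rolling_drag * LB_PER_N:.0f} lb)")
print(f"energy loss: {energy_per_km/1000:.0f} kJ/km")
```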

The next energy sink is air resistance. This is calculated by multiplying the frontal area of the car by the density of air, times 1/2 the speed squared (the kinetic energy imparted to the air). There is also a form factor, measured in a wind tunnel. For my car this factor was 0.28, similar to the MSU car’s. That is, for both cars, the equivalent of only 28% of the air in front of the car is accelerated to the car’s speed. Based on this and the density of air in the summer, I calculate that, at 20 mph, air drag was about 5.3 lbs for my car. At 40 mph it’s 21 lbs (95 Nt), and it’s 65 lbs (295 Nt) at 70 mph. Given that my city driving is mostly at <40 mph, I expect that only 95 kJ/km is used to fight air friction in the city. That is, less than 10% of my gas energy in the city, or about 30% on the highway. (The MSU car had less air drag because of a smaller front area, and because it drove at about 25 mph.)
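
The drag formula can be written out as a short function. The form factor 0.28 is from the text; the frontal area of about 1.8 m² is my back-estimate, chosen because it reproduces the forces quoted above:

```python
# Air drag vs speed: F = 1/2 * rho * Cd * A * v^2.
RHO = 1.2         # kg/m^3, density of air (summer)
CD = 0.28         # form factor, wind-tunnel value from the text
AREA = 1.8        # m^2, assumed frontal area (back-estimated)
MPS_PER_MPH = 0.447

def air_drag(speed_mph):
    """Drag force in newtons at a given speed in mph."""
    v = speed_mph * MPS_PER_MPH
    return 0.5 * RHO * CD * AREA * v * v

for mph in (20, 40, 70):
    f = air_drag(mph)
    print(f"{mph} mph: {f:.0f} N ({f / 4.448:.0f} lb)")
```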

The next energy sink was the energy used to speed up from a stop — or, if you like, the energy lost to the brakes when I slow down. This energy is proportional to the mass of the car and to the velocity squared, i.e. to the kinetic energy. It’s also inversely proportional to the distance between stops. For a 1700 kg car + driver who travels at 38 mph on city streets (17 m/s) and stops, or slows, every 500 m, the start-stop energy per km is 2 x (1/2 mv²) = 1700 x (17)² = 491 kJ/km. This is more than the other two losses combined and would seem to be the major cause of my low gas mileage in the city.
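
The start-stop loss above can be sketched the same way, with the article’s figures of 17 m/s and a stop every 500 m:

```python
# Start-stop energy: kinetic energy 1/2 m v^2 lost to the brakes at each
# stop, twice per km when stops come every 500 m.
MASS = 1700        # kg, car plus driver
V = 17             # m/s, about 38 mph
STOPS_PER_KM = 2   # one stop or slow-down every 500 m

energy_per_km = STOPS_PER_KM * 0.5 * MASS * V**2  # J/km

print(f"start-stop loss: {energy_per_km/1000:.0f} kJ/km")
```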

The sum of the above losses is 0.819 MJ/km, and I’m willing to accept that the rest of the energy loss (100 kJ/km or so) is due to engine idling (the efficiency is zero then); to air conditioning and headlights; and to times when I have a passenger or lots of stuff in the car. It all adds up. When I go for long drives on the highway, the start-stop loss is no longer relevant. Though the air drag is greater, the net result is a mileage improvement. Brief rides on the highway, by contrast, hardly help my mileage. Though I slow down less often, maybe every 2 km, I go faster, so the energy loss per km is about the same.

I find that the two major drags on my gas mileage are proportional to the weight of the car, and that is currently half-again the weight of my VW Rabbit (only 1900 lbs, 900 kg). The MSU car was far lighter still, about 200 lbs with the driver, and it never stopped till the gas ran out. My suggestion: if you want the best gas mileage, buy one of the lightest cars on the road. The Mitsubishi Mirage, for example, weighs 1000 kg and gets 35 mpg in the city.

A very aerodynamic, very big car. It’s beautiful art, but likely gets lousy mileage — especially in the city.

Short of buying a lighter car, you have few good options to improve gas mileage. One thought is to use better grease or oil; synthetic oil, like Mobil 1, helps, I’m told (I’ve not checked it). Alternately, some months ago, I tried adding hydrogen and water to the engine. This helps too (5%-10%), likely by improving ignition and reducing idling vacuum loss. Another option is fancy valving, as on the Fiat 500. If you’re willing to buy a new car, and not just a new engine, a good option is a hybrid or battery car with regenerative braking to recover the energy normally lost to the brakes. Alternately, there is the option of a car powered with hydrogen fuel cells — an option with advantages over batteries — or with a gasoline-powered fuel cell.

Robert E. Buxbaum; July 29, 2015. I make hydrogen generators and purifiers. Here’s a link to my company site. Here’s something I wrote about Peter Cooper, an industrialist who made the first practical steam locomotive, the Tom Thumb. The key innovation there: making it lighter by using a forced-air, fire-tube boiler.

# My latest invention: improved fuel cell reformer

Last week, I submitted a provisional patent application for an improved fuel reformer system to allow a fuel cell to operate on ordinary liquid fuels, e.g. alcohol, gasoline, and JP-8 (diesel). I’m attaching the complete text of the description below, but since it is not particularly user-friendly, I’d like to add a small, explanatory preface. What I’m proposing is shown in the diagram following. I send a hydrogen-rich stream plus ordinary fuel and steam to the fuel cell, perhaps with a pre-reformer. My expectation is that the fuel cell will not completely convert this material to CO2 and water vapor, even with the pre-reformer. Following the fuel cell, I then use a water-gas shift reactor to convert product CO and H2O to H2 and CO2, increasing the hydrogen content of the stream. I then use a semi-permeable membrane to extract the waste CO2 and water. I recirculate the hydrogen and the rest of the water back to the fuel cell to generate extra power, prevent coking, and promote steam reforming. I calculate the design should be able to operate at perhaps 0.9 volts per cell, and should nearly double the energy per gallon of fuel compared to ordinary diesel. Though use of pure hydrogen fuel would give better mileage, this design seems better for some applications. Please find the text following.

Use of a Water-Gas shift reactor and a CO2 extraction membrane to improve fuel utilization in a solid oxide fuel cell system.

Inventor: Dr. Robert E. Buxbaum, REB Research, 12851 Capital St, Oak Park, MI 48237; Patent Pending.

Solid oxide fuel cells (SOFCs) have improved over the last 10 years to the point that they are attractive options for electric power generation in automobiles, airplanes, and auxiliary power supplies. These cells operate at high temperatures and tolerate high concentrations of CO, hydrocarbons, and limited concentrations of sulfur (H2S). SOFCs can operate on reformate gas and can perform limited degrees of hydrocarbon reforming too — something that is advantageous from the standpoint of fuel logistics: it’s far easier to transport a small volume of liquid fuel than it is a large volume of H2 gas. The main problem with in-situ reforming is the danger of coking the fuel cell, a problem that gets worse when reforming is attempted with the more desirable, heavier fuels like gasoline and JP-8. To avoid coking the fuel cell, heavier fuels are typically reformed beforehand in a separate reactor, typically by partial oxidation at auto-thermal conditions, a process that typically adds nitrogen and results in the inability to use the natural heat given off by the fuel cell. Steam reforming has been suggested as an option (Chick, 2011), but there is not enough heat released by the fuel cell alone to do it with the normal fuel cycles.

Another source of inefficiency in reformate-powered SOFC systems is basic to the use of carbon-containing fuels: the carbon tends to leave the fuel cell as CO instead of CO2. CO in the exhaust is undesirable from two perspectives: CO is toxic, and quite a bit of energy is wasted when the carbon leaves in this form. Normally, though, the carbon cannot leave as CO2, since CO is the more stable form at the high temperatures typical of SOFC operation. This patent provides solutions to all these problems through the use of a water-gas shift reactor and a CO2-extraction membrane. Find a drawing of a version of the process following.

RE. Buxbaum invention: A suggested fuel cycle to allow improved fuel reforming with a solid oxide fuel cell

As depicted in Figure 1, above, the fuel enters, is mixed with steam or partially boiled water, and is heated in the rectifying heat exchanger. The hot steam + fuel mix then enters a steam reformer and perhaps a sulfur-removal stage. This would be typical steam reforming except for a key difference: the heat for reforming comes (at least in part) from the waste heat of the SOFC. Normally speaking, there would not be enough heat, but in this system we add a recycle stream of H2-rich gas to the fuel cell. This stream is produced from waste CO in a water-gas shift reactor (the WGS shown in Figure 1). This additional H2 adds to the heat generated by the SOFC and also adds to the amount of water in the SOFC. The net effect should be to reduce coking in the fuel cell while increasing the output voltage and providing enough heat for steam reforming. At least, that is the thought.

SOFCs differ from proton-conducting FCs, e.g. PEM FCs, in that the ion that moves is oxygen, not hydrogen. As a result, water produced in the fuel cell ends up in the hydrogen-rich stream and not in the oxygen stream. Having this additional water in the fuel stream of the SOFC can promote fuel reforming within the FC. It also presents a difficulty in exhausting the waste water vapor, in that a means must be found to separate it from un-combusted fuel. This is unlike the case with PEM FCs, where the waste water leaves with the exhaust air. Our main solution to exhausting the water is the use of a membrane, and perhaps a knockout drum, to extract it from un-combusted fuel gases.

Our solution to the problem of carbon leaving the SOFC as CO is to react this CO with waste H2O to convert it to CO2 and additional H2. This is done in a water-gas shift reactor, the WGS above. We then extract the CO2 and the remaining, unused water through a CO2-specific membrane, and we recycle the H2 and unconverted CO back to the SOFC using a low-temperature recycle blower. The design above was modified from one in a paper by PNNL; that paper had neither a WGS reactor nor a membrane. As a result, it got much worse fuel conversion and required a high-temperature recycle blower.

Heat must be removed from the SOFC output to cool it to a temperature suitable for the WGS reactor. In the design shown, the heat is used to heat the fuel before feeding it to the SOFC — this is done in the rectifying HX. More heat must be removed before the gas can go to the CO2-extractor membrane; this heat is used to boil water for the steam-reforming reaction. Additional heat inputs and exhausts will be needed for startup and load tracking. A solution to temporary heat imbalances is to adjust the voltage at the SOFC: the lower the voltage, the more heat will be available to radiate to the steam reformer. At steady-state operation, a heat balance suggests we will be able to provide sufficient heat to the steam reformer if we produce electricity at between 0.9 and 1.0 volts per cell. The WGS reactor allows us to convert virtually all the fuel to water and CO2, with hardly any CO output. This was not possible for any design in the PNNL study cited above.

The drawing above shows water recycle. This is not a necessary part of the cycle. What is necessary is some degree of cooling of the WGS output. Boiling recycle water is shown because it can be a logistic benefit in certain situations, e.g. where you cannot remove the necessary CO2 without removing too much of the water in the membrane module, and in mobile military situations, where it’s a benefit to reduce the amount of material that must be carried. If water or fuel must be boiled, it is worthwhile to do so by cooling the output from the WGS reactor. Using this heat saves energy and helps protect the high-selectivity membranes. Cooling also extends the life of the recycle blower and allows the use of lower-temperature recycle blowers. Ideally, the temperature is not lowered so much that water begins to condense, since condensed water tends to disturb gas flow through a membrane module. The gas temperature necessary to keep water from condensing in the module is about 180°C, given typical expected operating pressures of about 10 atm. The alternative is the use of a water knockout and a pressure reducer to prevent water condensation in membranes operated at lower temperatures, about 50°C.

Extracting the water in a knockout drum separate from the CO2 extraction has the secondary advantage of making it easier to adjust the water content in the fuel-gas stream. The temperature of condensation can then be used to control the water content; alternately, a separate membrane can extract water ahead of the CO2, with water content controlled by adjusting the pressure of the liquid water in the exit stream.

Some description of the membrane is worthwhile at this point, since a key aspect of this patent — perhaps the key aspect — is the use of a CO2-extraction membrane. It is this addition to the fuel cycle that allows us to use the WGS reactor effectively to reduce coking and increase efficiency. The first reasonably effective CO2-extraction membranes appeared only about 5 years ago. These are made of silicone polymers like dimethylsiloxane, e.g. the Polaris membrane from MTR Inc. We can hope that better membranes will be developed in the following years, but the Polaris membrane is a reasonably acceptable option available today, its only major shortcoming being its low operating temperature, about 50°C. Current Polaris membranes show a CO2/H2 selectivity of about 30 and a CO2 permeance of about 1000 Barrers; these permeances suggest that high operating pressures would be desirable, and the preferred operating pressure could be 300 psi (20 atm) or higher. To operate the membrane with a humid gas stream at high pressure and 50°C will require the removal of most of the water upstream of the membrane module. For this, I’ve included a water knockout, or steam trap, shown in Figure 1. I also include a pressure-reduction valve before the membrane (shown as an X in Figure 1). The pressure reduction helps prevent water condensation in the membrane modules. Better membranes may be able to operate at higher temperatures where this type of water knockout is not needed.

It seems likely that, no matter what the improvements in membrane technology, the membrane will have to operate at pressures above about 6 atm, and likely above about 10 atm (upstream pressure), exhausting CO2 and water vapor to atmosphere. These high pressures are needed because the CO2 partial pressure in the fuel gas leaving the membrane module will have to be significantly higher than the CO2 exhaust pressure. Assuming a CO2 exhaust pressure of 0.7 atm or above and a desired 15% CO2 mol fraction in the fuel-gas recycle, we can expect to need a minimum operating pressure of 4.7 atm at the membrane. Higher pressures, like 10 or 20 atm, could be even more attractive.
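
The minimum-pressure figure follows from a one-line partial-pressure balance, sketched here with the values assumed in the text:

```python
# Minimum membrane operating pressure: the CO2 partial pressure on the
# fuel-gas side (mole fraction x total pressure) must exceed the CO2
# pressure on the exhaust (permeate) side.
CO2_EXHAUST = 0.7    # atm, CO2 pressure on the permeate side
CO2_FRACTION = 0.15  # desired CO2 mole fraction in the fuel-gas recycle

min_pressure = CO2_EXHAUST / CO2_FRACTION  # atm, total upstream pressure

print(f"minimum operating pressure: {min_pressure:.1f} atm")
```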

In order to reform a carbon-based fuel, I expect the fuel cell to have to operate at 800°C or higher (Chick, 2011). Most fuels require high temperatures like this for reforming — methanol being a notable exception, requiring only modest temperatures. If methanol is the fuel, we will still want a rectifying heat exchanger, but it will be possible to put it after the water-gas shift reactor, and it may be desirable for the reformer of this fuel to follow the fuel cell. When reforming sulfur-containing fuels, it is likely that a sulfur-removal reactor will be needed. Several designs are available for this; I provide references to two below.

The overall system design I suggest should produce significantly more power per gram of carbon-based feed than the PNNL system (Chick, 2011). The combination of a rectifying heat exchanger, a water-gas shift reactor, and a CO2-extraction membrane recovers chemical energy that would otherwise be lost with the CO and H2 bleed stream. Further, the cooling stage allows the use of a lower-temperature recycle pump with a fairly low compression ratio, likely 2 or less. The net result is to lower the pump cost and power drain. The fuel stream, shown in orange, is reheated without the use of a combustion pre-heater, another big advantage. While PNNL (Chick, 2011) has suggested an alternative route to recover most of the chemical energy through the use of a turbine power generator following the fuel cell, this design should have several advantages, including greater reliability and less noise.

Claims:

1.   A power-producing fuel cell system including a solid oxide fuel cell (SOFC) where a fuel-containing output stream from the fuel cell goes to a regenerative heat exchanger, followed by a water-gas shift reactor, followed by a membrane means to extract waste gases including carbon dioxide (CO2) formed in said reactor; said reactor operating at temperatures between 200 and 450°C and the extracted carbon dioxide leaving at near-ambient pressure; the non-extracted gases being recycled to the fuel cell.

Main References:

The most relevant reference here is “Solid Oxide Fuel Cell and Power System Development at PNNL” by Larry Chick, Pacific Northwest National Laboratory, March 29, 2011: http://www.energy.gov/sites/prod/files/2014/03/f10/apu2011_9_chick.pdf. Also see US patent 8394544. It’s from the same authors and somewhat similar, though not as good and only for methane, a high-hydrogen fuel.

Robert E. Buxbaum, REB Research, May 11, 2015.

# No need to conserve energy

Energy conservation stamp from the early 70s

I’m reminded that one of the major ideas of Earth Day, energy conservation, is completely unnecessary: Energy is always conserved. It’s entropy that needs to be conserved.

The entropy of the universe increases for any process that occurs, for any process that you can make occur, and for any part of any process. While some parts of processes are very efficient in themselves, they are always entropy generators when considered on a global scale. Entropy is the arrow of time: if entropy ever goes backward, time has reversed.

A thought I’ve had on how you might conserve entropy: grow trees and use them for building materials, or convert them to gasoline, or just burn them for power. Under ideal conditions, photosynthesis is about 30% efficient at converting photon energy to glucose (photons + CO2 + water –> glucose + O2). This would be nearly the same energy-conversion efficiency as solar cells, if not for the energy the plant uses to live. But solar cells have inefficiency issues of their own, and as a result the land use per unit power is about the same. And it’s a lot easier to grow a tree and dispose of forest waste than it is to make a solar cell and dispose of used coated glass and broken electronic components. Just some Earth Day thoughts from Robert E. Buxbaum. April 24, 2015.

# Much of the chemistry you learned is wrong

When you were in school, you probably learned that understanding chemistry involved understanding the bonds between atoms. That all the things of the world were made of molecules, and that these molecules were fixed-proportion combinations of the chemical elements held together by one of the 2 or 3 types of electron-sharing bonds. You were taught that water was H2O, that table salt was NaCl, that glass was SiO2, and rust was Fe2O3, and perhaps that the bonds involved an electron transferring from an electron-giver: H, Na, Si, or Fe… to an electron receiver: O or Cl above.

Sorry to say, none of that is true. These are fictions perpetrated by well-meaning, and sometimes ignorant, teachers. All of the materials mentioned above are grand polymers. Any of them can have extra or fewer atoms of any species, and as a result the stoichiometry isn’t quite fixed. They are not molecules at all in the sense you knew them. Also, ionic bonds hardly exist, not in any chemical you’re familiar with. There are no common electron-transfer compounds. The world works almost entirely on covalent, shared bonds. If bonds were ionic, you could separate most materials by direct electrolysis of the pure compound, but you cannot. You cannot, for example, make iron by electrolysis of rust, nor can you make silicon by electrolysis of pure SiO2, or titanium by electrolysis of pure TiO. If you could, you’d make a lot of money and titanium would be very cheap. On the other hand, the fact that stoichiometry is rarely fixed allows you to make many useful devices, e.g. solid oxide fuel cells — things that should not work based on the chemistry you were taught.

Iron-zinc forms compounds, but they don’t have fixed stoichiometry. As an example, the compound at 68-80 atom% Zn is, I guess, Zn7Fe3 with many substituted atoms, especially at temperatures near 665°C.

Because most bonds are covalent, many compounds form that you would not expect. Most metal pairs form compounds with unusual stoichiometric compositions. Here, for example, is the phase diagram for zinc and iron – the materials behind galvanized sheet metal: iron that does not rust readily. The delta phase has a composition between 85 and 92 atom% Zn (8 and 15 atom% iron). Perhaps the main compound is Zn5Fe2, not the sort of compound you’d expect, and it has a very variable composition.

Back to chemistry. It’s very difficult to know where to start un-teaching someone. Let’s start with EMF and ionic bonds. While it is generally easier to remove an electron from a free metal atom than from a free non-metal atom, e.g. from a sodium atom rather than an oxygen atom, removing an electron is always energetically unfavored, for all atoms. Similarly, while oxygen takes an extra electron more easily than iron would, adding an electron is also energetically unfavored. The figure below shows the classic ionic bond (left) and two electron-sharing options (center and right): one is a bonding option, the other anti-bonding. Nature prefers electron sharing to ionic bonds, even with blatantly ionic elements like sodium and chlorine.

Bond options in NaCl. Note that covalent is the stronger bond option though it requires less ionization.

There is a very small degree of ionic bonding in NaCl (left picture), but in virtually every case, covalent bonds (center) are easier to form and stronger when formed. And then there is the key anti-bonding state (right picture). The anti-bond is hardly ever mentioned in high school or college chemistry, but it is critical — it’s this bond that keeps all matter from shrinking into nothingness.

I’ve discussed hydrogen bonds before. I find them fascinating since they make water wet and make life possible. I’d mentioned that they are just like regular bonds except that the quantum hydrogen atom (proton) plays the role that the electron plays. I now have to add that this is not a transfer, but a covalent sharing. The H atom (proton) divides up like the electron did in the NaCl above. Thus, two water molecules are attracted by having partial bits of a proton half-way between the two oxygen atoms. The proton does not stay put at the center, but bobs between them as a quantum cloud. I should also mention that the hydrogen bond has an anti-bond state just like the electron bond above. We were never “taught” the hydrogen bond in high school or college — fortunately — that’s how I came to understand them. My professors at Princeton saw hydrogen atoms as solid. It was their ignorance that allowed me to discover new things and get a PhD. One must be thankful for the folly of others: without it, no talented person could succeed.

And now I get to really weird bonds: entropy bonds. Have you ever noticed that meat gets softer when it’s aged in the freezer? That’s because most of the chemicals of life are held together by a sort of anti-bond called entropy, or randomness. The molecules in meat are unstable energetically, but actually increase the entropy of the water around them by their formation. When you lower the temperature, you allow the inherent instability of the bonds to make them let go. Unfortunately, this happens only slowly at low temperatures, so you’ve got to age meat to tenderize it.

A nice thing about the entropy bond is that it is not particularly specific. A consequence of this is that all protein bonds are more-or-less the same strength. This allows proteins to form in a wide variety of compositions, but also means that deuterium oxide (heavy water) is toxic — it has a different entropic profile than regular water.

Robert Buxbaum, March 19, 2015. Unlearning false facts one lie at a time.

# The speed of sound, Buxbaum’s correction

Ernst Mach showed that sound must travel at a particular speed through any material, one determined by the conservation of energy and of entropy. At room temperature and 1 atm, that speed is theoretically predicted to be 343 m/s. For a wave to move at any other speed, either the laws of energy conservation would have to fail, or ∆S ≠ 0 and the wave would die out. This is the only speed where you could say there is a traveling wave, and experimentally, this is found to be the speed of sound in air, to good accuracy.

Still, it strikes me that Mach’s assumptions may have been too restrictive for short-distance sound waves. Perhaps there is room for other sound speeds if you allow ∆S > 0 and consider sound that travels short distances and dies out far from the source. Waves at these other speeds might affect music appreciation, or headphone design. As these waves were never treated in my thermodynamics textbooks, I wondered if I could derive their speed in any nice way, and whether they would be faster or slower than the main wave. (If I can’t use this blog to re-think my college studies, what good is it?)

Imagine the sound-wave moving to the right, down a constant area tube at speed u, with us moving along at the same speed. Thus, the wave appears stationary, with a wind of speed u from the right.

As a first step to trying to re-imagine Mach’s calculation, here is one way to derive the original, ∆S = 0, speed of sound. I showed in a previous post that the entropy change for compression can be imagined to have two parts. There is a pressure part at constant temperature: dS/dV at constant T = dP/dT at constant V; this part equals R/V for an ideal gas. There is also a temperature-at-constant-volume part of the entropy change: dS/dT at constant V = Cv/T. Dividing the two equations, we find that, at constant entropy, dT/dV = RT/CvV = P/Cv. For a case where ∆S > 0, dT/dV > P/Cv.

Now let’s look at the conservation of mechanical energy. A compression wave gives off a certain amount of mechanical energy, or work on expansion, and this work accelerates the gas within the wave. For an ideal gas, the internal energy of the gas is stored only in its temperature. Let’s now consider a sound wave going down a tube flowing left to right, and let’s move our reference plane along with the wave at the same speed, so the wave seems to sit still while a flow of gas moves toward it from the right at the speed of the sound wave, u. For this flow system energy is conserved though no heat is removed, and no useful work is done. Thus, any change in enthalpy only results in a change in kinetic energy: dH = -d(u²)/2 = -u du, where H here is a per-mass enthalpy (enthalpy per kg).

dH = TdS + VdP. This can be rearranged to read: TdS = dH - VdP = -u du - VdP.

We now use conservation of mass to put du into terms of P, V, and T. By conservation of mass, u/V is constant, or d(u/V) = 0. Taking the derivative of this quotient, du/V - u dV/V² = 0. Rearranging this, we get du = u dV/V (no assumptions about entropy here). Since dH = -u du, we see that u² dV/V = -dH = -TdS - VdP. It is now common to say that dS = 0 across the sound wave, and thus find that u² = -V²(dP/dV) at constant S. For an ideal gas, this works out to u² = PVCp/Cv, so the speed of sound is u = √(PVCp/Cv), with the volume here a specific volume (m³/kg).
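As a quick numerical check (my sketch, not part of the original derivation), the formula u = √(PVCp/Cv) should reproduce the familiar ~343 m/s for room-temperature air, taking γ = Cp/Cv = 1.4 and a molar mass of 0.029 kg/mol as standard textbook values:

```python
# Sketch: check that u = sqrt(P*V*Cp/Cv) gives the familiar speed of
# sound in air. V is specific volume (m³/kg) from the ideal gas law.
# The air properties below are standard textbook values, assumed here.
import math

R = 8.314      # J/(mol·K), gas constant
M = 0.029      # kg/mol, molar mass of air
T = 293.0      # K, room temperature
P = 101325.0   # Pa, 1 atm
gamma = 1.4    # Cp/Cv for a diatomic gas

V = R * T / (M * P)           # specific volume, m³/kg
u = math.sqrt(gamma * P * V)  # speed of sound, m/s
print(u)                      # close to 343 m/s
```

Note that P cancels: PV = RT/M, so the speed depends only on temperature and molar mass.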

The problem comes in where we say that ∆S > 0. At this point, I would say that u² = -V(dH/dV) = VCp dT/dV > PVCp/Cv. Unless I’ve made a mistake (always possible), I find that there is a small, leading, non-adiabatic sound wave that goes ahead of the ordinary sound wave and is experienced only close to the source. It is caused by mechanical energy that is degraded to raising T, and it gives rise to more compression than would be expected for isentropic waves.

This should have some relevance to headphone design and speaker design since headphones are heard close to the ear, while speakers are heard further away. Meanwhile the recordings are made by microphones right next to the singers or instruments.

Robert E. Buxbaum, August 26, 2014

# Dr. Who’s Quantum reality viewed as diffusion

It’s very hard to get the meaning of life from science because reality is very strange. Further, science is mathematical, and the math relations for reality can be re-arranged. One arrangement of the terms will suggest one version of causality, while another will suggest a different causality. As Dr. Who points out, in non-linear, non-objective terms, there’s no causality, but rather a wibbly-wobbly ball of timey-wimey stuff.

Reality is a ball of timey-wimey stuff, Dr. Who.

To this end, I’ll provide my favorite way of looking at the timey-wimey way of the world by rearranging the equations of quantum mechanics into a sort of diffusion. It’s not the diffusion of something you’re quite familiar with, but rather of a timey-wimey wave-stuff referred to as Ψ. It’s part real and part imaginary, and the only relationship between Ψ and life is that the chance of finding something somewhere is proportional to Ψ*Ψ. The diffusion of this half-imaginary stuff is the underpinning of reality — if viewed in a certain way.

First let’s consider the steady diffusion of a normal (un-quantum) material. If there is a lot of it, like when there’s perfume coming off a prima donna, you can say that N = -D dc/dx, where N is the flux of perfume (molecules per minute per area), dc/dx is the concentration gradient (there’s more perfume near her than near you), and D is a diffusivity, a number related to the mobility of those perfume molecules.

We can further generalize the diffusion of an ordinary material to a case where the concentration varies with time because of reaction or a difference between the in-rate and the out-rate. With reaction added as a source term, we can write: dc/dt = reaction - dN/dx = reaction + D d²c/dx². For a first-order reaction, for example radioactive decay, reaction = -ßc, and

dc/dt = -ßc + D d²c/dx²               (1)

where ß is the radioactive decay constant of the material whose concentration is c.
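Equation (1) is easy to integrate numerically. Here is a short sketch using an explicit finite-difference scheme; the values of D, ß, and the grid are invented for illustration, not taken from any physical system:

```python
# Sketch: explicit finite-difference integration of equation (1),
# dc/dt = -ß·c + D·d²c/dx², with periodic boundaries.
# D, ß, and the grid values are illustrative assumptions.
import numpy as np

D = 1e-3      # diffusivity, m²/s (assumed)
beta = 0.05   # first-order decay constant, 1/s (assumed)
dx = 0.01     # grid spacing, m
dt = 0.01     # time step, s; satisfies dt < dx²/(2D) for stability

c = np.zeros(101)
c[50] = 1.0   # initial spike of material at the center

for _ in range(1000):  # integrate 10 s forward in time
    lap = (np.roll(c, 1) - 2*c + np.roll(c, -1)) / dx**2  # d²c/dx²
    c = c + dt * (-beta * c + D * lap)

# Diffusion spreads the spike; decay shrinks the total as exp(-ß·t)
print(c.sum(), np.exp(-beta * 10.0))
```

The diffusion term only spreads the material around; the reaction term alone shrinks the total, as exp(-ßt). That clean separation is what makes the reaction-diffusion form convenient.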

Viewed in a certain way, the most relevant equation for reality, the time-dependent Schrödinger wave equation (semi-derived here), fits into the same diffusion-reaction form:

dΨ/dt = -2iπV/h Ψ + hi/4πm d²Ψ/dx²               (2)

Instead of reality involving the motion of a real material (perfume, radioactive radon, etc.) with a real concentration, c, in this relation the material cannot be sensed directly, and the concentration, Ψ, is semi-imaginary. Here, h is Planck’s constant, i is the imaginary number, √-1, m is the mass of the real material, and V is the potential energy. When dealing with reactions or charged materials, it’s relevant that V will vary with position (e.g. electrons’ energy is lower when they are near protons). The diffusivity term here is imaginary, hi/4πm, but that’s OK; Ψ is part imaginary, and we’d expect that potential energy is something of a destroyer of Ψ: the likelihood of finding something at a spot goes down where the energy is high.

The form of this diffusion equation is linear, a mathematical term that refers to equations where a solution that works for Ψ will also work for 2Ψ. Generally speaking, linear equations have exp() terms in their solutions, and that’s especially likely here as the only place where you see a time term is on the left. For most cases we can say that

Ψ = ψ exp(-2iπEt/h)               (3)

where ψ is not a function of anything but x (space) and E is the energy of the thing whose behavior is described by Ψ. If you take the derivative of equation 3 with respect to time, t, you get

dΨ/dt = ψ (-2iπE/h) exp(-2iπEt/h) = (-2iπE/h)Ψ.               (4)
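Equation 4 is just the chain rule, and it can be machine-checked. A small sketch with sympy, with symbols chosen to mirror the notation above:

```python
# Sketch: verify that Ψ = ψ(x)·exp(-2iπEt/h) (equation 3) satisfies
# dΨ/dt = (-2iπE/h)·Ψ (equation 4), by symbolic differentiation.
import sympy as sp

t, E, h = sp.symbols('t E h', positive=True)
x = sp.symbols('x')
psi = sp.Function('psi')(x)   # the space-only part, ψ(x)

Psi = psi * sp.exp(-2*sp.I*sp.pi*E*t/h)    # equation (3)
lhs = sp.diff(Psi, t)                      # dΨ/dt
rhs = (-2*sp.I*sp.pi*E/h) * Psi            # equation (4), right side

print(sp.simplify(lhs - rhs))   # 0, so the two sides agree
```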

If you insert this into equation 2, you’ll notice that the form of the first term is now identical to the second, with energy appearing identically in both terms. Divide now by exp(-2iπEt/h), and you get the following equation:

(E-V) ψ = -h²/8π²m d²ψ/dx²                      (5)

where ψ can be thought of as the physical concentration in space of the timey-wimey stuff. ψ is still wibbly-wobbly, but no longer timey-wimey. Now ψ-squared is the likelihood of finding the stuff somewhere at any time, and E is the energy of the thing. For most things in normal conditions, E is quantized and equals approximately kT. That is, the E of the thing typically equals a quantized energy state that’s near Boltzmann’s constant times temperature.

You now want to check that the approximation in equations 3-5 was legitimate. You do this by checking if the length-scale implicit in exp(-2iπEt/h) is small relative to the length-scales of the action. If it is (and it usually is), you are free to solve for ψ at any E and V using normal mathematics, by analytic or digital means, for example this way. ψ will be wibbly-wobbly but won’t be timey-wimey. That is, the space behavior of the thing will be peculiar, with the item in forbidden locations, but there won’t be time reversal. For time reversal, you need small space features (like here) or entanglement.

Equation 5 can be considered a simple steady-state diffusion equation. The stuff whose concentration is ψ is created wherever E is greater than V, and is destroyed wherever V is greater than E. The stuff then continuously diffuses from the former area to the latter, establishing a time-independent concentration profile. E is quantized (can only take some specific values) since matter can never be created or destroyed, and it is only at specific values of E that this balance happens in equation 5. For a particle in a flat box, E and ψ are found, typically, by realizing that ψ must be a sine function (and ignoring an infinity). For more complex potential energy surfaces, it’s best to use a matrix solution for ψ along with non-continuous calculus. This avoids the infinity, and is a lot more flexible besides.
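The matrix approach can be sketched for the particle in a flat box: discretize d²ψ/dx² on a grid with ψ = 0 at the walls, and the allowed E values drop out as eigenvalues. The units here, with h/2π = m = 1 and box length 1, are my choice for illustration; in those units the analytic ground state is π²/2.

```python
# Sketch: matrix (finite-difference) solution of equation (5),
# (E - V)ψ = -(h²/8π²m)·d²ψ/dx², in units with h/2π = m = 1, box length 1.
import numpy as np

N = 500                      # interior grid points
dx = 1.0 / (N + 1)           # box runs from 0 to 1; ψ = 0 at the walls

V = np.zeros(N)              # flat box: V = 0 inside

# Hamiltonian: -(1/2)·d²/dx² as a tridiagonal matrix, plus diagonal V
main = 1.0 / dx**2 + V
off = -0.5 / dx**2 * np.ones(N - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)    # allowed (quantized) energies
print(E[0], np.pi**2 / 2)    # ground state vs the analytic value
```

The eigenvectors come out as the expected sine functions, and the same matrix works unchanged for bumpy potential surfaces: just change V.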

When you detect a material in some spot, you can imagine that the space-function ψ collapses, but even that isn’t clear, as you can never know the position and velocity of a thing simultaneously, so it doesn’t collapse all that much. And as for what the stuff is that diffuses and has concentration ψ, no one knows, but it behaves like a stuff. And as to why it diffuses, perhaps it’s jiggled by unseen photons. I don’t know if this is what happens, but it’s a way I often choose to imagine reality — a moving, unseen material with real and imaginary (spiritual?) parts, whose concentration, ψ, is related to experience, but not directly experienced.

This is not the only way the equations can be rearranged. Another way of thinking of things is as a sum of path integrals — an approach that appears to me as a many-worlds version, with fixed points in time (another Dr. Who feature). In this view, every object takes every path possible between these points, and reality is the sum of all the versions, including some that have time reversals. Richard Feynman explains this path-integral approach here. If it doesn’t make more sense than my version, that’s OK. There is no version of the quantum equations that will make total, rational sense. All the true ones are mathematically equivalent — totally equal, but different in “meaning”. That is, if you were to impose meaning on the math terms, the meaning would be totally different. That’s not to say that all explanations are equally valid — most versions are totally wrong, but there are many equally valid math versions to fit many equally valid religious or philosophic world views. The various religions, I think, are uncomfortable with having so many completely different views being totally equal because (as I understand it) each wants exclusive ownership of truth. Since this is never so for math, I claim religion is the opposite of science. Religion is trying to find The Meaning of life, and science is trying to match experiential truth — and ideally useful truth; knowing the meaning of life isn’t that useful in a knife fight.

Dr. Robert E. Buxbaum, July 9, 2014. If nothing else, you now perhaps understand Dr. Who more than you did previously. If you liked this, see here for a view of political happiness in terms of the thermodynamics of free-energy minimization.

# The future of steamships: steam

Most large ships and virtually all locomotives currently run on diesel power. But the diesel engine does not drive the wheels or propeller directly; the transmission would be too big and complex. Instead, the diesel engine is used to generate electric power, and the electric power drives the ship or train via an electric motor, generally with a battery bank to provide a buffer. Current diesel generators operate at 75-300 rpm and about 40-50% efficiency (not bad), but diesel fuel is expensive. It strikes me, therefore, that the next step is to switch to a cheaper fuel like coal or compressed natural gas, and convert these fuels to electricity by a partial or full steam cycle, as used in land-based electric power plants.

Diesel engine, 100 MW for a large container ship

Steam powers all nuclear ships, and conventionally boiled steam provided the power for thousands of Liberty ships and hundreds of aircraft carriers during World War 2. Advanced steam turbine cycles are somewhat more efficient, pushing 60% efficiency for high-pressure, condensing-turbine cycles that consume vaporized fuel in a gas turbine and recover the waste heat with a steam boiler exhausting to vacuum. The higher efficiency of these gas/steam turbine engines means that, even for ships that burn ship-diesel fuel (so-called bunker oil) or natural gas, there can be a cost advantage to having a degree of steam power. There are a dozen or so steam-powered ships operating on the Great Lakes currently. These are mostly 700-800 feet long, and operate with 1950s-era steam turbines, burning bunker oil or asphalt. US Steel runs the “Arthur M Anderson”, “Carson J Callaway”, “John G Munson” and “Philip R Clarke”, all built in 1951/2. The “Upper Lakes Group” runs the “Canadian Leader”, “Canadian Provider”, “Quebecois”, and “Montrealais.” And then there is the coal-fired “Badger”. Built in 1952, the Badger is powered by two “Skinner UniFlow” double-acting piston engines operating at 450 psi. The Badger is cost-effective, with the low cost of the fuel making up for the low efficiency of the 1950s technology. With larger ships and more modern, higher-pressure boilers and turbines, the economics of steam power would be far better, even for ships with modern pollution abatement.

Nuclear steam boilers can be very compact

Steam-powered ships can burn fuels that diesel engines can’t: coal, asphalts, or even dry wood, because fuel combustion can be external to the high-pressure region. Steam engines can cost more than diesel engines do, but lower fuel cost can make up for that, and the cost differences get smaller as the outputs get larger. Currently, coal costs 1/10 as much as bunker oil on a per-energy basis, and natural gas costs about 1/5 as much as bunker oil. One can burn coal cleanly and safely if the coal is dried before being loaded on the ship. Before burning, the coal would be powdered and gasified to town gas (CO + H2). The drying process removes much of the toxic impact of the coal by removing much of the mercury and toxic oxides. Gasification before combustion further reduces these problems, and reduces the tendency to form adhesions on boiler pipes — a bane of old-fashioned steam power. Natural gas requires no pretreatment, but costs twice as much as coal and requires a gas-turbine/boiler system for efficient energy use.

Today’s ships and locomotives are far bigger than in the 1950s. The current standard is an engine output of about 50 MW, or 170 MM Btu/hr of motive energy. Assuming a 50% efficient engine, the fuel use for a 50 MW ship or locomotive is 340 MM Btu/hr; locomotives use this much only when going uphill with a heavy load. Illinois coal currently costs about \$60/ton, or \$2.31/MM Btu. A 50 MW engine would consume about 13 tons of dry coal per hour, costing \$785/hr. By comparison, bunker oil costs about \$3/gallon, or \$21/MM Btu. This is nearly ten times more than coal, or \$7,140/hr for the same 50 MW output. Over 30 years of operation, the difference in fuel cost adds up to about 1.5 billion dollars — about the cost of a modern container ship.
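The arithmetic above can be laid out as a short script. I use the article's 2014 prices and assume continuous operation, which slightly overstates the 30-year total:

```python
# Sketch of the fuel-cost comparison: a 50 MW engine at 50% efficiency
# burns 340 MM Btu/hr; coal at $2.31/MM Btu vs bunker oil at $21/MM Btu.
fuel_rate = 340.0             # MM Btu/hr of fuel energy

coal_cost = fuel_rate * 2.31  # $/hr on coal (~$785/hr)
oil_cost = fuel_rate * 21.0   # $/hr on bunker oil (~$7,140/hr)

hours = 24 * 365 * 30         # 30 years, assuming continuous operation
savings = (oil_cost - coal_cost) * hours

print(coal_cost, oil_cost, savings / 1e9)  # savings on the order of $1.5-1.7 billion
```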

Robert E. Buxbaum, May 16, 2014. I possess a long-term interest in economics, thermodynamics, history, and the technology of the 1800s. See my steam-pump, and this page dedicated to Peter Cooper: Engineer, citizen of New York. Wood power isn’t all that bad, by the way, but as with coal, you must dry the wood, or (ideally) convert it to charcoal. You can improve the power and efficiency of diesel and automobile engines, and reduce pollution, by adding hydrogen. Normal cars do not use steam because there is more start-stop, and because it takes too long to fire up the engine before one can drive. For cars and drone airplanes, I suggest hydrogen fuel cells.