Tag Archives: efficiency

The energy cost of airplanes, trains, and buses

I’ve come to conclude that airplane and bus travel make a lot more sense than high-speed trains. Consider the marginal energy cost of a 90 kg (200 lb) person getting on a 737-800, the most commonly flown commercial jet in US service. For this plane, the lift-to-drag ratio at cruise speed is 19, suggesting an average value of 15 or so for a 1 hr trip when you include take-off and landing. The energy cost of the trip is related to the cost of jet fuel, about $3.20/gallon, or about $1/kg. The heat energy content of jet fuel is 44 MJ/kg or 18,800 Btu/lb. Assuming an average engine efficiency of 21%, we calculate a motive-energy cost of 1.1 × 10⁻⁷ $/J, or 40¢/kWh. The energy per mile is just force times distance: 1 mile = 1609 m. The force is the person’s weight (in newtons) divided by the lift-to-drag ratio. The energy per mile is thus 90 × 9.8 × 1609/15 = 94,600 J. Multiplying by the $-per-J, we find the marginal cost of his transport is 1¢ per mile, virtually nothing.
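
If you’d like to check the arithmetic, here is the same estimate as a short Python sketch; the 21% engine efficiency and the lift-to-drag of 15 are the assumptions stated above.

```python
# A minimal sketch of the marginal-cost estimate above (all inputs from the text).
g = 9.8                    # m/s^2
mass = 90.0                # kg, passenger plus luggage
lift_to_drag = 15          # average over a 1 hr trip, per the text
fuel_cost = 1.0            # $/kg of jet fuel (~$3.20/gallon)
heat_content = 44e6        # J/kg of jet fuel
engine_eff = 0.21          # assumed average engine efficiency

dollars_per_joule = fuel_cost / (heat_content * engine_eff)   # ~1.1e-7 $/J
drag_force = mass * g / lift_to_drag                          # ~59 N
energy_per_mile = drag_force * 1609                           # ~94,600 J
print(f"marginal cost: {dollars_per_joule * energy_per_mile * 100:.1f} cents/mile")
# -> about 1 cent per mile
```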

The Wright brothers testing their gliders in 1901 (left) and 1902 (right). The angle of the tether reflects a dramatic improvement in lift-to-drag ratio; the marginal cost per mile is inversely proportional to the lift-to-drag ratio.

The marginal cost for carrying a 200 lb person from Detroit to NY (500 miles) is 1¢/mile × 500 miles = $5: hardly anything compared to the cost of driving. No wonder airlines offer crazy-low fares to fill seats on empty flights. But this is just the marginal cost. The average energy cost per passenger is higher since it includes the weight of the plane. On a reasonably full 737 flight, the passengers and luggage weigh about 1/4 as much as the plane and its fuel. Effectively, each passenger weighs 800 lbs, suggesting a 4¢/mile energy cost, or $20 of energy per passenger for the flight from Detroit to NY. Though the rate of fuel burn is high, about 5000 lbs/hr, the cost is low because of the high speed and the number of passengers. Stated another way, the 737 gets 80 passenger-miles per gallon, a somewhat lower figure than the 91 claimed for a full 747.

Passengers must pay more than $20, of course, because of wages, capital, interest, profit, taxes, and landing fees. Still, one can see how discount airlines could make money if they arrange a good deal with a hub airport, one that allows them low landing fees and lets them buy fuel at near cost.

Compare this to any proposed super-fast or mag-lev train. Over any significant distance, the plane will be cheaper, faster, and as energy-efficient. Current US passenger trains, when fairly full, boast a fuel economy of 200 passenger-miles per gallon, but they are rarely full. Currently, they take some 15 hours to go from Detroit to NY, in part because they go slow, and in part because they go via longer routes, visiting Toronto and Montreal in this case, with many stops along the way. With this long route, even if the train got 200 passenger-mpg, the 750 mile trip would use 3.75 gallons per passenger, compared to 6.25 for the flight above. This is a savings of 2.5 gallons, or $8, but it comes at a cost of 15 hours of a passenger’s life. Even if train speeds were doubled, the trip would still take more than 7.5 hours including stops, and the energy cost would be higher. As for price, beyond the costs of wages, capital, interest, profit, taxes, and depot fees (similar to those for air travel), you have to add the cost of new track and track upkeep. While I’d be happy to see better train signaling to allow passenger trains to go 100 mph on current, freight-compatible lines, I can see little benefit to government-funded projects to add parallel, dedicated track for 150+ mph trains that will still, likely, be half-full.

You may now ask about cities that don’t have good airports. Something else that dampens my enthusiasm for super-trains is the appearance of a new generation of short take-off and landing commercial jets, and of a new generation of comfortable buses. Some years ago, I noted that Detroit’s Coleman Young airport no longer has commercial traffic because its runway, at 1051 m, was too short. I’m happy to report that Bombardier’s new CS100s should make small airports like this usable. A CS100 will hold 120 passengers, requires only 1463 m of runway, and is quiet enough for city use. The economics are such that it’s hard to imagine mag-lev beating this for the proposed US high-speed train routes: Dallas to Houston; LA to San José to San Francisco; or Chicago-Detroit-Toledo-Cleveland-Pittsburgh. So far the US has kept these planes out because Boeing claims unfair competition, but I trust that this is just a delay. As for shorter trips, modern buses are as fast and energy-efficient as trains, and far cheaper because they share road costs with cars and trucks.

If the US does want to spend money on transport, I’d suggest improving inner-city airports. The US could also fund development of yet-better short take-off planes, perhaps made with carbon fiber, or with flexible wing structures to improve lift-to-drag during take-offs and landings. Higher train speeds should be available with better signaling and with passenger trains that lean more into a curve, and even this does not have to be super high-tech. And for 100-200 mile intercity traffic, I suspect the best solution is to improve the highways and buses. If you want low pollution and high efficiency, how about hydrogen hybrid buses?

Robert Buxbaum, October 30, 2017. I taught engineering for 10 years at Michigan State, and my company, REB Research, makes hydrogen generators and hydrogen purifiers.

It’s rocket science

Here are six or so rocket science insights, some simple, some advanced. It’s a fun area of engineering that touches many areas of science and politics. Besides, some people seem to think I’m a rocket scientist.

A basic question I get asked by kids is how a rocket goes up. My answer is it does not go up. That’s mostly an illusion. The majority of the rocket — the fuel — goes down, and only the light shell goes up. People imagine they are seeing the rocket go up. Taken as a whole, fuel and shell, they both go down at 1 G: 9.8 m/s², 32 ft/s².

Because 1 G of upward acceleration is always lost to gravity, you need high thrust from the rocket engine, especially at the beginning when the rocket is heaviest. If your engine provides less thrust than the weight of your rocket, your rocket sits on the launch pad, and if your thrust is merely twice the weight of the rocket, you waste half of your fuel doing nothing useful. Effectively, the upward acceleration of the shell is a = F/m - 1 G, where F is the force of the engine, m is the mass of the rocket and whatever fuel is in it, and the 1 G is the upward acceleration lost to gravity. My guess is that you want to design a rocket engine so that the upward acceleration, a, is in the range 8-10 G. This range avoids wasting lots of fuel without requiring you to build the rocket overly sturdy. At a = 9 G, the rocket engine force, F, has to be about 10 times the rocket weight; it also means the rocket structure must be sturdy enough to support a force of ten times the rocket weight. This can be tricky because the rocket will be the size of a small skyscraper, and the structure must be light so that the majority is fuel. It’s also tricky that this force of 9-11 times the rocket weight must sit on an engine that runs really hot, about 3000°C. Most engineering projects have fewer constraints than this, and are thus “not rocket science.”
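
Here is that force balance as a minimal Python sketch; the 500-tonne rocket mass is an assumed number, purely for illustration.

```python
# A small sketch of the force balance described above.
g = 9.8                      # m/s^2, 1 G

def upward_acceleration(thrust_N, mass_kg):
    """Net upward acceleration of the shell: a = F/m - 1 G."""
    return thrust_N / mass_kg - g

# Example: a hypothetical 500-tonne rocket whose engine force is 10x its weight.
mass = 500e3                 # kg (assumed, for illustration only)
thrust = 10 * mass * g       # N
print(upward_acceleration(thrust, mass) / g)   # -> 9.0 G of net upward acceleration
```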

Basic force balance on a rocket going up.

A space rocket has to reach very high speeds; most things that go up come down almost immediately. You can calculate the minimum orbital speed by balancing the acceleration of gravity, 9.8 m/s², against the orbital acceleration of going around the earth, a sphere 40,000 km in circumference (that’s how the meter was defined). Orbital acceleration is a = v²/r, and r = 40,000,000 m/2π = 6,366,000 m. Thus, the speed you need to stay up indefinitely is v = √(6,366,000 × 9.8) = 7900 m/s = 17,800 mph. That’s roughly Mach 35, or 35 times the speed of sound. You need some altitude too, just to keep air friction from killing you, but for most missions, the main thing you need is velocity, kinetic energy, not potential energy, as I’ll show below. If you achieve more than 7,900 m/s, you circle the earth higher up; this makes docking space-ships tricky, as I’ll explain also.
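
The same balance, as a quick numeric check in Python:

```python
# A quick check of the minimum orbital speed computed above.
import math

g = 9.8                       # m/s^2
circumference = 40_000_000    # m, definition-era circumference of the earth
r = circumference / (2 * math.pi)      # ~6,366,000 m

v_orbit = math.sqrt(g * r)             # balance g = v^2 / r
print(f"{v_orbit:.0f} m/s = {v_orbit * 2.237:.0f} mph")   # -> ~7900 m/s, ~17,700 mph
```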

It turns out that kinetic energy is quite a lot more important than potential energy for sending an object into orbit, and rockets are the only practical way to reach orbital speed; no current cannon or gun can reach Mach 35. To get a sense of the energy balance involved in rocketry, consider a one kg mass at orbital speed, 7900 m/s, and 200 km altitude. You can calculate that the kinetic energy is 31,205 kJ, while the potential energy, mgh, is only 1,960 kJ. For this orbital height, 200 km, the kinetic energy is about 16 times the potential energy. Not that it’s easy to reach 200 km altitude, but you can do it with a sophisticated cannon or a “simple,” one-stage, V2-style rocket; to reach 7,900 m/s, though, you need multiple stages. As a way to see this, consider that the energy content of gasoline + oxygen is about 10.5 MJ/kg (10,500 kJ/kg); this is only 1/3 of the kinetic energy of the orbital rocket, but it’s 5 times the potential energy. A fairly efficient gasoline + oxygen powered cannon could not provide orbital kinetic energy since the bullet can move no faster than the explosive vapor. In a rocket this is not a constraint since most of the mass is ejected. I’ll explain further below.
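
The kinetic-vs-potential comparison, in a few lines of Python:

```python
# Kinetic vs potential energy for 1 kg at orbital speed and 200 km altitude (as in the text).
m, g = 1.0, 9.8          # kg, m/s^2
v = 7900.0               # m/s, orbital speed
h = 200_000.0            # m, altitude

ke = 0.5 * m * v**2      # ~31,200 kJ
pe = m * g * h           # ~1,960 kJ
print(f"KE = {ke/1e3:,.0f} kJ, PE = {pe/1e3:,.0f} kJ, ratio = {ke/pe:.1f}")
# -> KE is roughly 16 times PE
```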

A shell fired at a 45° angle that reaches 200 km altitude would go about 800 km — the distance between North Korea and Japan, or between Iran and Israel. That would require twice as much energy as a shell fired straight up, about 4000 kJ/kg. This is a value still within the range of a (very large) cannon or a single-stage rocket. For Russia or China to hit the US would take much more velocity: orbital, or near-orbital, rocketry. To reach the moon, you need more total energy, but less kinetic energy. Moon rockets have taken the approach of first going into orbit, and only later going on. While most of the kinetic energy isn’t lost this way, I’m still not sure it’s the best trajectory.

The force produced by a rocket equals the rate of mass shot out times its exhaust velocity: F = v ∆m/∆t. To get a lot of force for each bit of fuel, you want the gas exit velocity to be as fast as possible. A typical maximum is about 2,500 m/s, roughly Mach 10, for a gasoline-oxygen engine. The acceleration of the rocket itself is this force divided by the total remaining mass in the rocket (rocket shell plus remaining fuel), minus 1 G (gravity). Thus, if the exhaust from a rocket leaves at 2,500 m/s, and you want the rocket to accelerate upward at 9 G, you must exhaust fast enough to develop 10 G, 98 m/s². The rate of mass exhaust is the mass of the rocket times 98/2500 = 0.0392/second. That is, about 3.9% of the rocket mass must be ejected each second. Assuming that the fuel for your first-stage engine is less than 80% of the total mass, the first stage will flame out in about 20 seconds at this rate. Your acceleration at the end of the 20 seconds will be greater than 9 G, by the way, since the rocket gets lighter as fuel is burnt. When half the weight is gone, it will be accelerating at 19 G.
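
A sketch of the mass-flow arithmetic, using the 2500 m/s exhaust and the 80% fuel fraction assumed above:

```python
# Sketch of the mass-flow estimate above: exhaust at 2500 m/s, 9 G of net upward acceleration.
g = 9.8
v_exhaust = 2500.0                 # m/s
a_required = 10 * g                # engine must supply 10 G so the shell nets 9 G

frac_per_s = a_required / v_exhaust          # fraction of rocket mass ejected per second
print(f"{frac_per_s:.4f} of the rocket mass per second")   # -> ~0.039, i.e. ~3.9 %/s

fuel_fraction = 0.80                          # assumed first-stage fuel fraction
print(f"burnout after ~{fuel_fraction / frac_per_s:.0f} s")   # -> ~20 s
# When half the initial mass is gone, a = F/(m/2) - g = 20 G - 1 G = 19 G.
```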

If you have a good math background, you can develop a differential equation for the relation between fuel consumption and altitude or final speed. This is readily done if you know calculus, or reasonably done numerically if you use difference methods. By either method, it turns out that, with no air friction or gravity resistance, you will reach the same speed as the exhaust when about 64% of the rocket mass has been exhausted. In the real world, your rocket will have to exhaust 75 or 80% of its mass as first-stage fuel to reach a final speed of 2,500 m/s. This is less than 1/3 of orbital speed, and reaching it requires that the rest of your rocket mass: the engine, 2nd stage, payload, and any spare fuel to handle descent (Elon Musk’s approach), weigh less than 20-25% of the original weight of the rocket on the launch pad. All this gasoline and oxygen is expensive, but not horribly so if you can reuse the rocket; that’s the motivation for NASA’s and SpaceX’s work on reusable rockets. Most orbital rocket designs require at least three stages to accelerate to the 7900 m/s calculated above, and the second stage is almost invariably lost. If you can set up and solve the differential equation above, a career in science may be for you.
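
For reference, that differential equation integrates to the classic rocket equation, ∆v = v_exhaust × ln(m_initial/m_final). Here is a sketch of it, ignoring gravity and air friction just as the 64% figure does:

```python
# The rocket equation: delta_v = v_exhaust * ln(m_initial / m_final), no gravity or air friction.
import math

def delta_v(v_exhaust, m_initial, m_final):
    return v_exhaust * math.log(m_initial / m_final)

v_ex = 2500.0
# Ejecting 64% of the mass gives a speed roughly equal to the exhaust speed:
print(delta_v(v_ex, 1.0, 0.36))    # -> ~2550 m/s
# Ejecting 80% gives ~4000 m/s in this idealized case, still far short of 7900 m/s,
# and real-world gravity and friction losses eat much of that: hence multiple stages.
print(delta_v(v_ex, 1.0, 0.20))
```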

Now, you might wonder about the exhaust speed I’ve been using, 2500 m/s. If you can achieve higher speeds, the rocket design becomes a lot easier, but doing so is not easy for the gasoline/oxygen engines that Russia and the US currently use. The heat of combustion of gasoline is 42 MJ/kg, but burning a kg of gasoline requires roughly 3 kg of oxygen. Thus, for a rocket fueled by gasoline + oxygen, the heat of combustion is about 10.5 MJ/kg of mixture. Now assume that the rocket engine is 30% efficient. Per unit of fuel + oxygen mass, ½ v² = 0.3 × 10,500,000; v = √6,300,000 ≈ 2500 m/s. Higher exhaust speeds have been achieved, e.g. with hydrogen-fueled rockets. The sources of inefficiency are many, including incomplete combustion in the engine, gas flow off the center-line, and friction flow in the engine and between the atmosphere and gases leaving the rocket nozzle. If you can make a reliable, higher-efficiency engine, a career in engineering may be for you.
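
The exhaust-speed estimate, with the 30% efficiency assumed above:

```python
# Exhaust-speed estimate from combustion energy and an assumed 30% engine efficiency.
import math

heat_per_kg = 10.5e6      # J/kg of (gasoline + oxygen) mixture, from the text
efficiency = 0.30         # assumed fraction converted to exhaust kinetic energy

v_exhaust = math.sqrt(2 * efficiency * heat_per_kg)   # 1/2 v^2 = eff * heat
print(f"{v_exhaust:.0f} m/s")                         # -> ~2500 m/s
```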

At an average acceleration of 10 G = 98 m/s² and a first stage that reaches 2500 m/s, you find that the first stage burns out after about 25.5 seconds. If the rocket were going straight up (a bad idea), you’d be at an altitude of about 28.7 km. A better plan is an average trajectory of 30°, leaving you at an altitude of 14 km or so. At that altitude you can expect far less air friction, and you can expect the second-stage engine to be more efficient. It seems to me you may want to wait 15 seconds or so before firing the second stage: you’ll be another few km up, and the benefit of that altitude should be worthwhile. I guess that’s why most space launches wait a few seconds before firing the second stage.
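
The burn-time and altitude numbers, as a sketch (using the 9 G net acceleration, engine minus gravity, for the climb):

```python
# Rough burn-time and altitude numbers behind the paragraph above.
import math

g = 9.8
v_final = 2500.0                   # m/s at first-stage burnout
a_engine = 10 * g                  # average engine acceleration
t_burn = v_final / a_engine        # ~25.5 s
# Altitude using the ~9 G net (engine minus gravity) acceleration:
h_straight_up = 0.5 * (a_engine - g) * t_burn**2
print(f"burnout at ~{t_burn:.1f} s, ~{h_straight_up/1000:.1f} km if fired straight up")
print(f"~{h_straight_up/1000 * math.sin(math.radians(30)):.0f} km on a 30 degree average trajectory")
```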

As a final bit, I’d mentioned that docking a rocket with a space station is difficult, in part, because docking requires an increase in angular speed, ω, but this generally goes along with a decrease in altitude; a counter-intuitive behavior. Setting the acceleration due to gravity equal to the centripetal acceleration, we find GM/r² = ω²r, where G is the gravitational constant and M is the mass of the earth. Rearranging, we find that ω² = GM/r³. For high angular speed, you need small r: a low altitude. When we first went to dock space-ships, in the mid-1960s, we had not realized this. When the astronauts fired the engines to dock, they found that they’d accelerate in velocity, but not in angular speed: v = ωr. The faster they went, the higher up they went, but the lower the angular speed got: the fewer the orbits per day. Eventually they realized that, to dock with another ship or a space station that is in front of you, you do not accelerate, but decelerate. When you decelerate you lose altitude and gain angular speed: you catch up with the station, but at a lower altitude. Your next step is to angle your ship near-radially to the earth and accelerate by firing engines to the side till you dock. Like much of orbital rocketry, it’s simple, but not intuitive or easy.
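
A minimal sketch of these circular-orbit relations; the gravitational parameter GM and the earth’s radius are standard values not given in the post:

```python
# Circular-orbit relations used above: v = sqrt(GM/r), omega = sqrt(GM/r^3).
import math

GM = 3.986e14            # m^3/s^2, standard gravitational parameter of the earth
R_earth = 6.371e6        # m

def orbit(r):
    v = math.sqrt(GM / r)            # orbital velocity
    omega = math.sqrt(GM / r**3)     # angular speed
    return v, omega

v_low, w_low = orbit(R_earth + 200e3)    # 200 km orbit
v_high, w_high = orbit(R_earth + 400e3)  # 400 km orbit
print(v_low > v_high, w_low > w_high)    # -> True True: the lower orbit is faster in both senses
```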

Robert Buxbaum, August 12, 2015. A cannon that could reach from North Korea to Japan, say, would have to be on the order of 10 km long, running along the slope of a mountain. Even at that length, the shell would have to fire at about 45 G and reach a speed of about 3000 m/s, roughly 1/3 of orbital speed.

The mass of a car and its mpg.

Back when I was an assistant professor at Michigan State University, MSU, they had a mileage olympics between the various engineering schools. Michigan State’s car got over 800 mpg, and lost soundly. By contrast, my current car, a Saab 9-2, gets about 30 miles per gallon on the highway, about average for US cars, and 22 to 23 mpg in the city in the summer. That’s about 1/40th the gas mileage of the Michigan State car, about 2/3 the mileage of the 1978 VW Rabbit I drove as a young professor, and about the same as a Model A Ford. Why so low? My basic answer: the current car weighs a lot more.

As a first step to analyzing the energy drain of my car, or MSU’s, note that the energy content of gasoline is about 123 MJ/gallon. Thus, if my engine were 27% efficient (reasonably likely) and I got 22.5 mpg (36 km/gallon) driving around town, I was using about 0.92 MJ/km of motive energy from the gasoline. Now all I need to know is where this energy is going (the MSU car got double this efficiency, but went 40 times further).
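
Here’s that arithmetic as a short Python sketch, using the 27% efficiency assumed above:

```python
# Energy used per km, from the mpg and engine-efficiency numbers above.
energy_per_gallon = 123e6     # J of gasoline energy per gallon
engine_eff = 0.27             # assumed engine efficiency
mpg_city = 22.5
km_per_gallon = mpg_city * 1.609    # ~36 km/gallon

motive_energy_per_km = energy_per_gallon * engine_eff / km_per_gallon
print(f"{motive_energy_per_km/1e6:.2f} MJ/km")   # -> ~0.92 MJ/km of motive energy
```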

The first energy sink I considered was rolling drag. To measure this without the fancy equipment we had at MSU, I put my car in neutral on a flat surface at 22 mph and measured how long it took for the speed to drop to 19.5 mph. From this time, 14.5 sec, and the speed drop, I calculated that the car had a rolling drag of 1.4% of its weight (if you had college physics you should be able to repeat this calculation). Since I and the car together weigh about 1700 kg, or 3790 lb, the drag is 53 lb or 233 N (the MSU car had far less, perhaps 8 lb). For any friction, the loss per km is F•x, or 233 kJ/km for my vehicle in the summer, independent of speed. This is significant, but clearly there are other energy sinks involved. In winter, the rolling drag is about 50% higher: the effect of gooey grease, I guess.
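
As a small sketch, here is the per-km rolling loss computed from the 1.4% drag figure and the 1700 kg mass above:

```python
# Rolling-drag force and per-km loss, from the 1.4% figure and 1700 kg mass above.
g = 9.8
mass = 1700.0            # kg, car plus driver
rolling_coeff = 0.014    # rolling drag as a fraction of weight, from the coast-down test

f_roll = rolling_coeff * mass * g          # ~233 N (~53 lb)
loss_per_km = f_roll * 1000                # force x distance, J per km
print(f"{f_roll:.0f} N, {loss_per_km/1e3:.0f} kJ/km")   # -> ~233 N, ~233 kJ/km
```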

The next energy sink is air resistance. This is calculated by multiplying the frontal area of the car by the density of air, times 1/2 the speed squared (the kinetic energy imparted to the air). There is also a form factor, measured in a wind tunnel. For my car this factor is 0.28, similar to the MSU car’s. That is, for both cars, the equivalent of only 28% of the air in front of the car is accelerated to the car’s speed. Based on this and the density of air in the summer, I calculate that, at 20 mph, air drag is about 5.3 lbs for my car. At 40 mph it’s 21 lbs (95 N), and it’s 65 lbs (295 N) at 70 mph. Given that my city driving is mostly at less than 40 mph, I expect that only 95 kJ/km is used to fight air friction in the city. That is, less than 10% of my motive energy in the city, or about 30% on the highway. (The MSU car had less because of a smaller frontal area, and because it drove at about 25 mph.)
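
Since drag scales with the square of speed, the quoted figures follow from the 5.3 lb value at 20 mph:

```python
# Air-drag force vs speed, using the quadratic scaling described above.
def air_drag_lb(speed_mph, drag_at_20mph_lb=5.3):
    """Drag scales with speed squared; 5.3 lb at 20 mph is the figure from the text."""
    return drag_at_20mph_lb * (speed_mph / 20.0) ** 2

for mph in (20, 40, 70):
    print(mph, round(air_drag_lb(mph), 1), "lb")
# -> 5.3 lb, 21.2 lb, 64.9 lb: close to the 5.3 / 21 / 65 lb quoted above
```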

The next energy sink was the energy used to speed up from a stop — or, if you like, the energy lost to the brakes when I slow down. This energy is proportional to the mass of the car and to the velocity squared (the kinetic energy), and it’s inversely proportional to the distance between stops. For a 1700 kg car-plus-driver that travels at 38 mph (17 m/s) on city streets and stops, or slows, every 500 m, the start-stop energy per km is 2 × (1/2 m v²) = 1700 × 17² = 491 kJ/km. This is more than the other two losses combined, and would seem to be the major cause of my low gas mileage in the city.
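
The start-stop arithmetic, as a sketch:

```python
# Start-stop (braking) loss per km: kinetic energy dumped at each stop, twice per km.
mass = 1700.0          # kg, car plus driver
v = 17.0               # m/s (~38 mph)
stops_per_km = 2       # one slow-down roughly every 500 m

loss_per_km = stops_per_km * 0.5 * mass * v**2
print(f"{loss_per_km/1e3:.0f} kJ/km")   # -> ~491 kJ/km, the largest single sink
```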

The sum of the above losses is 0.819 MJ/km, and I’m willing to accept that the rest of the energy loss (100 kJ/km or so) is due to engine idling (the efficiency is zero then); to air conditioning and headlights; and to times when I have a passenger or lots of stuff in the car. It all adds up. When I go for long drives on the highway, this start-stop loss is no longer relevant. Though the air drag is greater, the net result is a mileage improvement. Brief rides on the highway, by contrast, hardly help my mileage. Though I slow down less often, maybe every 2 km, I go faster, so the energy loss per km is the same.

I find that the two major drags on my gas mileage are proportional to the weight of the car, and that weight is currently half-again that of my VW Rabbit (only 1900 lbs, 900 kg). The MSU car was far lighter still, about 200 lbs with the driver, and it never stopped till the gas ran out. My suggestion, if you want the best gas mileage: buy one of the lighter cars on the road. The Mitsubishi Mirage, for example, weighs 1000 kg and gets 35 mpg in the city.

A very aerodynamic, very big car. It’s beautiful art, but likely gets lousy mileage — especially in the city.

Short of buying a lighter car, you have few good options to improve gas mileage. One thought is to use better grease or oil; synthetic oil, like Mobil 1, helps, I’m told (I’ve not checked it). Alternately, some months ago, I tried adding hydrogen and water to the engine. This helps too (5%-10%), likely by improving ignition and reducing idling vacuum loss. Another option is fancy valving, as on the Fiat 500. If you’re willing to buy a new car, and not just a new engine, a good option is a hybrid or battery car with regenerative braking to recover the energy normally lost to the brakes. Alternately, consider a car powered by hydrogen fuel cells, an option with advantages over batteries, or by a gasoline-powered fuel cell.

Robert E. Buxbaum; July 29, 2015. I make hydrogen generators and purifiers. Here’s a link to my company site. Here’s something I wrote about Peter Cooper, an industrialist who made the first practical steam locomotive, the Tom Thumb; the key innovation there was making it lighter by using a forced-air, fire-tube boiler.

The future of steamships: steam

Most large ships and virtually all locomotives currently run on diesel power. But the diesel engine does not drive the wheels or propeller directly; the transmission would be too big and complex. Instead, the diesel engine is used to generate electric power, and the electric power drives the ship or train via an electric motor, generally with a battery bank to provide a buffer. Current diesel generators operate at 75-300 rpm and about 40-50% efficiency (not bad), but diesel fuel is expensive. It strikes me, therefore, that the next step is to switch to a cheaper fuel like coal or compressed natural gas, and to convert these fuels to electricity by a partial or full steam cycle, as used in land-based electric power plants.

Ship-board diesel engine, 100 MW for a large container ship

Steam powers all nuclear ships, and conventionally boiled steam provided the power for thousands of Liberty ships and hundreds of aircraft carriers during World War 2. Advanced steam-turbine cycles are somewhat more efficient, pushing 60% efficiency for high-pressure, condensing-turbine cycles that burn vaporized fuel in a gas turbine and recover the waste heat with a steam boiler exhausting to vacuum. The higher efficiency of these gas/steam turbine engines means that, even for ships that burn ship-diesel fuel (so-called bunker oil) or natural gas, there can be a cost advantage to having a degree of steam power. There are a dozen or so steam-powered ships operating on the Great Lakes currently. These are mostly 700-800 feet long and operate with 1950s-era steam turbines, burning bunker oil or asphalt. US Steel runs the “Arthur M Anderson,” “Cason J Callaway,” “John G Munson,” and “Philip R Clarke,” all built in 1951/2. The “Upper Lakes Group” runs the “Canadian Leader,” “Canadian Provider,” “Quebecois,” and “Montrealais.” And then there is the coal-fired “Badger.” Built in 1952, the Badger is powered by two “Skinner UniFlow” double-acting piston engines operating at 450 psi. The Badger is cost-effective, with the low cost of the fuel making up for the low efficiency of the 1950s technology. With larger ships and more modern, higher-pressure boilers and turbines, the economics of steam power would be far better, even for ships with modern pollution abatement.

Nuclear steam boilers can be very compact

Steam-powered ships can burn fuels that diesel engines can’t: coal, asphalts, or even dry wood, because fuel combustion can be external to the high-pressure region. Steam engines can cost more than diesel engines do, but lower fuel cost can make up for that, and the cost differences get smaller as the outputs get larger. Currently, coal costs 1/10 as much as bunker oil on a per-energy basis, and natural gas costs about 1/5 as much as bunker oil. One can burn coal cleanly and safely if the coal is dried before being loaded on the ship, then powdered and gasified to town gas (CO + H2) before being burnt. The drying process removes much of the toxic impact of the coal by removing much of the mercury and toxic oxides. Gasification before combustion further reduces these problems, and reduces the tendency to form adhesions on boiler pipes — a bane of old-fashioned steam power. Natural gas requires no pretreatment, but costs twice as much as coal and requires a gas-turbine plus boiler system for efficient energy use.

Today’s ships and locomotives are far bigger than those of the 1950s. The current standard is an engine output of about 50 MW, or 170 MM Btu/hr of motive energy. Assuming a 50% efficient engine, the fuel use for a 50 MW ship or locomotive is 340 MM Btu/hr; locomotives only use this much when going uphill with a heavy load. Illinois coal currently costs about $60/ton, or $2.31/MM Btu. A 50 MW engine would consume about 13 tons of dry coal per hour, costing $785/hr. By comparison, bunker oil costs about $3/gallon, or $21/MM Btu. This is nearly ten times more than coal, or $7,140/hr for the same 50 MW output. Over 30 years of operation, the difference in fuel cost adds up to some 1.5 billion dollars — about the cost of a modern container ship.
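
The hourly fuel-cost comparison, as a short Python sketch using the prices quoted above:

```python
# Hourly fuel-cost comparison for a 50 MW engine at 50% efficiency (numbers from the text).
power_mw = 50
btu_per_mwh = 3.412e6                 # Btu per MWh
fuel_mmbtu_per_hr = power_mw * btu_per_mwh / 1e6 / 0.50   # ~340 MM Btu/hr of fuel

coal_cost = 2.31        # $/MM Btu (Illinois coal at ~$60/ton)
bunker_cost = 21.0      # $/MM Btu (bunker oil at ~$3/gallon)

print(f"coal:   ${fuel_mmbtu_per_hr * coal_cost:,.0f}/hr")     # -> ~$790/hr
print(f"bunker: ${fuel_mmbtu_per_hr * bunker_cost:,.0f}/hr")   # -> ~$7,200/hr
```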

Robert E. Buxbaum, May 16, 2014. I have a long-term interest in economics, thermodynamics, history, and the technology of the 1800s. See my steam-pump, and this page dedicated to Peter Cooper: Engineer, citizen of New York. Wood power isn’t all that bad, by the way, but as with coal, you must dry the wood, or (ideally) convert it to charcoal. You can improve the power and efficiency of diesel and automobile engines, and reduce their pollution, by adding hydrogen. Normal cars do not use steam because there is more start-stop driving, and because it takes too long to fire up the boiler before one can drive. For cars and drone airplanes, I suggest hydrogen fuel cells.

Entropy, the most important pattern in life

One evening at the Princeton grad college a younger fellow (an 18-year-old genius) asked the most simple, elegant question I had ever heard, one I’ve borrowed and used ever since: “tell me”, he asked, “something that’s important and true.” My answer that evening was that the entropy of the universe is always increasing. It’s a fundamentally important pattern in life; one I didn’t discover, but discovered to have a lot of applications and meaning. Let me explain why it’s true here, and then why I find it’s meaningful.

Famous entropy cartoon, Harris

The entropy of the universe is not something you can measure directly, but rather indirectly, from the availability of work in any corner of it. It’s related to randomness and the arrow of time. First off, here’s how you can tell if time is moving forward: put an ice cube into hot water; if the cube melts and the water becomes cooler, time is moving forward, or at least it’s moving in the same direction as you are. If you could reach into a cup of warm water and pull out an ice cube while making the water hot, time would be moving backwards, or rather, you would be living backwards. Within any closed system, one where you don’t add things or energy (sunlight, say), you can tell that time is moving forward because the forward progress of time always leads to a loss of work availability. In the case above, you could have generated some electricity from the ice cube and the hot water, but not from the glass of warm water.

You can not extract work from a heat source alone; to extract work some heat must be deposited in a cold sink. At best the entropy of the universe remains unchanged. More typically, it increases.

This observation is about as fundamental as any to understanding the world; it is the basis of entropy and the second law of thermodynamics: you can never extract useful work from a uniform-temperature body of water, say, just by making that water cooler. To get useful work, you always need some other transfer into or out of the system; you always need to make something else hotter or colder, or provide some chemical or altitude change that can not be reversed without adding more energy back. Thus, so long as time moves forward, everything runs down in terms of work availability.

There is also a first law; it states that energy is conserved. That is, if you want to heat some substance, that change requires that you put in a set amount of work plus heat. Similarly, if you want to cool something, a set amount of heat plus work must be taken out. In equation form, we say that, for any change, q + w is constant, where q is heat and w is work. It’s the sum that’s constant, not the individual values, so long as you count every 4.184 joules of work as if it were 1 calorie of heat. If you input more heat, you have to add less work, and vice versa, but there is always the same sum. When adding heat or work, we say that q or w is positive; when extracting heat or work, we say that q or w is negative. Still, each 4.184 joules counts as if it were 1 calorie.

Now, since for every path between two states q + w is the same, we say that q + w represents a path-independent quantity for the system, one we call internal energy, U, where ∆U = q + w. This is a mathematical form of the first law of thermodynamics: you can’t take q + w out of nothing, or add it to something without making a change in the properties of the thing. The only way to leave things the same is if q + w = 0. We notice also that, for any pure thing or mixture, the sum q + w for a given change is proportional to the mass of the stuff; internal energy is thus an extensive quantity: q + w = n ∆u, where n is the grams of material and ∆u, the change in internal energy per gram, is intensive.

We are now ready to put the first and second laws together. We find we can extract work from a system if we take heat from a hot body of water and deliver some of it to something at a lower temperature (the ice cube, say). This can be done with a thermopile, or with a steam engine (the Rankine cycle, above), or a Stirling engine. That an engine can only extract work when there is a difference of temperatures is similar to the operation of a water wheel. Sadi Carnot noted that a water wheel is able to extract work only when there is a flow of water from a high level to a low one; similarly, in a heat engine, you only get work by taking in heat energy from a hot heat-source and exhausting some of it to a colder heat-sink. The remainder leaves as work. That is, q1 - q2 = w, and energy is conserved. The second law isn’t violated so long as there is no way you could run the engine without the cold sink. Accepting this as reasonable, we can now derive some very interesting, non-obvious truths.

We begin with the famous Carnot cycle. The Carnot cycle is an idealized heat engine with the interesting feature that it can be made to operate reversibly. That is, you can run it forward, taking a certain amount of heat from a hot source, producing a certain amount of work, and delivering a certain amount of heat to the cold sink; and you can run the same process backwards, as a refrigerator, taking in the same amount of work and the same amount of heat from the cold sink, and delivering the same amount of heat to the hot source. Carnot showed by the following proof that all other reversible engines must have the same efficiency as his cycle, and that no engine, reversible or not, can be more efficient. The proof: if an engine could be designed that extracts a greater percentage of the heat as work when operating between a given hot source and cold sink, it could be used to drive the Carnot cycle backwards. If the pair of engines were now combined so that the less efficient (Carnot) engine, run as a refrigerator, removed exactly as much heat from the sink as the more efficient engine deposited, the excess work produced by the more efficient engine would come with no net effect besides cooling the source. This combination would be in violation of the second law, something that we’d said was impossible.

Now let us try to understand the relationship that drives useful energy production. The ratio of heat in to heat out has got to be a function of the in and out temperatures alone. That is, q1/q2 = f(T1, T2); similarly, q2/q1 = f(T2, T1). Now let’s consider what happens when two Carnot cycles are placed in series between T1 and T2, with the middle temperature at Tm. For the first engine, q1/qm = f(T1, Tm), and similarly for the second engine, qm/q2 = f(Tm, T2). Combining these, we see that q1/q2 = (q1/qm) × (qm/q2), and therefore f(T1, T2) must always equal f(T1, Tm) × f(Tm, T2) = f(T1, Tm)/f(T2, Tm). In this relationship we see that the middle temperature, Tm, is irrelevant; the equality holds for any Tm. This means f must be a ratio of the form g(T1)/g(T2), and by choosing the temperature scale so that g(T) = T, we say that q1/q2 = T1/T2; this is the limit of what you get at maximum (reversible) efficiency. You can now rearrange this to read q1/T1 = q2/T2, or to say that the work, W = q1 - q2 = q2 (T1 - T2)/T2.
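
A quick numeric check of these reversible relations, with temperatures chosen purely for illustration:

```python
# A numeric check of the reversible (Carnot) relations derived above, with assumed temperatures.
T1, T2 = 600.0, 300.0      # K, hot source and cold sink (chosen for illustration)
q1 = 100.0                 # J of heat taken from the hot source

w  = q1 * (T1 - T2) / T1   # maximum (reversible) work
q2 = q1 - w                # heat rejected to the cold sink
print(w, q2, q1 / T1, q2 / T2)   # -> 50.0 50.0 0.1666... 0.1666...: q1/T1 = q2/T2
```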

A strange result from this is that, since every process can be modeled as either a sum of Carnot engines or of engines that are less efficient, and since the Carnot engine will produce this same amount of reversible work when filled with any substance or combination of substances, the outcome q1/T1 = q2/T2 is independent of path, and independent of substance, so long as the process is reversible. We can thus say that for all substances there is a property of state, S, such that the change in this property is ∆S = ∑q/T for all the heat in or out. In a more general sense, we can say ∆S = ∫dq/T, where this state property, S, is called the entropy. Since, as before, the amount of heat needed is proportional to mass, S is an extensive property: S = n s, where n is the mass of stuff and s, the entropy per mass, is intensive.

Another strange result comes from the efficiency equation. Since, for any engine or process that is less efficient than the reversible one, we get less work out for the same amount of q1, we must have more heat rejected than the reversible q2. Thus, for an irreversible engine or process, q1 - q2 < q2 (T1 - T2)/T2, and q2/T2 is greater than q1/T1. As a result, the total change in entropy, ∆S = q2/T2 - q1/T1 > 0: the entropy of the universe always goes up or stays constant; it never goes down. A final observation is that there must be a zero temperature that nothing can go below, or else both q1 and q2 could be positive and energy would not be conserved. Our observations of time and energy conservation thus lead us to expect a minimum temperature, T = 0, that nothing can be colder than. We find this temperature at -273.15 °C. It is called absolute zero; nothing has ever been cooled to be colder than this, and now we see that, so long as time moves forward and energy is conserved, nothing ever will be.

Typically we either say that S is zero at absolute zero, or at room temperature.

We’re nearly there. We can define the entropy of the universe as the sum of the entropies of everything in it. From the above treatment of work cycles, we see that this total of entropy always goes up, never down. A fundamental fact of nature, and (in my world view) a fundamental view into how God views us and the universe. First, that the entropy of the universe goes up only, and not down (in our time-forward framework) suggests there is a creator for our universe — a source of negative entropy at the start of all things, or a reverser of time (it’s the same thing in our framework). Another observation, God likes entropy a lot, and that means randomness. It’s his working principle, it seems.

But before you take me now for a total libertine and say that, since science shows that everything runs down, the only moral take-home is to teach: “Let us eat and drink,”… “for tomorrow we die!” (Isaiah 22:13), I should note that his randomness only applies to the universe as a whole. The individual parts (planets, laboratories, beakers of coffee) do not maximize entropy; they minimize available work, and this is different. You can show that the maximization of S, the entropy of the universe, does not lead to the maximization of s, the entropy per gram of your particular closed space, but rather to the minimization of a related quantity µ, the free energy, or usable work per gram of your stuff. You can show that, for any closed system at constant temperature, µ = h - Ts, where s is the entropy per gram as before and h is called the enthalpy. h is basically the potential energy of the molecules; it is lowest at low temperature and high order. For a closed system we find there is a balance between s, something that increases with increased randomness, and h, something that decreases with increased randomness. Put water and air in a bottle, and you find that the water is mostly on the bottom of the bottle, the air is mostly on the top, and the amount of mixing in each phase is not the maximum disorder, but rather the one you’d calculate will minimize µ.

As a protein folds its randomness and entropy decrease, but its enthalpy decreases too; the net effect is one precise fold that minimizes µ.

This is the principle that God applies to everything, including us, I’d guess: a balance. Take protein folding; some patterns have big disorder, and high h; some have low disorder and very low h. The result is a temperature-dependent balance. If I were to take a moral imperative from this balance, I’d say it matches better with the sayings of Solomon the wise: “there is nothing better for a person under the sun than to eat, drink and be merry. Then joy will accompany them in their toil all the days of the life God has given them under the sun.” (Ecclesiastes 8:15). There is toil here as well as pleasure; directed activity balanced against personal pleasures. This is the µ = h - Ts minimization where, perhaps, T is economic wealth. Thus, the richer a society, the less toil is ideal and the more freedom. Of necessity, poor societies are repressive.

Dr. Robert E. Buxbaum, Mar 18, 2014. My previous thermodynamic post concerned the thermodynamics of hydrogen production. It’s not clear that all matter goes forward in time, by the way; antimatter may go backwards, so it’s possible that antimatter apples may fall up. On a microscopic scale, time becomes flexible, so it seems you could make a time machine. Religious leaders tend to be anti-science, I’ve noticed, perhaps because scientific miracles can be done by anyone, available even to those who think “wrong,” or say the wrong words. And that’s that, all having been heard: do what’s right and enjoy life too, as important a pattern in life as you’ll find, I think. The relationship between free energy and societal organization is from my thesis advisor, Dr. Ernest F. Johnson.

Ivanpah’s solar electric worse than trees

Recently the DoE committed 1.6 billion dollars to the completion of the last two of three solar-natural gas-electric plants on a 10 mi² site at Lake Ivanpah in California. The site is rated to produce 370 MW of power in a facility that uses far more land than nuclear power, at a cost significantly higher than nuclear. The 3900 MW Drax plant (UK) cost 1.1 billion dollars and produces 10 times more power on a much smaller site. Ivanpah needs a lot of land because its generators require 173,500 billboard-size, sun-tracking mirrors to heat boilers atop three 750-foot towers (2 1/2 times the height of the Statue of Liberty). The boilers feed steam to low-pressure, low-efficiency (28%) Siemens turbines. At night, natural gas provides the heat to make the steam, but only at the same low efficiency. Siemens makes higher-efficiency turbine plants (59% efficiency), but these can not be used here because the solar-oven temperature is only 900°F (500°C), while normal Siemens plants operate at 3650°F (2000°C).

The Ivanpah thermal solar-natural-gas project will look like the Crescent Dunes thermal-solar project shown here, but will be bigger.

The first construction of the Ivanpah thermal solar-natural-gas project; each circle of mirrors extends out to cover about 2 square miles of the 10 mi² site.

So far, the first of the three towers is operational, but it has been producing at only 30% of its rated, low-efficiency output. These shortfalls are described as “growing pains.” There are also problems with cooked birds, blinded pilots, and the occasional fire from the misaligned death ray — more pains, I guess. There is also the problem of lightning. When hit by lightning, the mirrors shatter into millions of shards of glass over a 30-foot radius, according to Argus, the mirror-cleaning company. This presents a less-than-attractive environmental impact.

As an exercise, I thought I’d compare this site’s electric output to the amount one could generate using a wood-burning boiler fed by trees growing on a similar-sized (10 sq. mile) site. Trees are cheap, but only about 10% efficient at converting solar power to chemical energy, so you might imagine that trees could not match the power of the Ivanpah plant. But dry wood burns hot, at 1100-1500°C, so the efficiency of a wood-powered steam turbine will be higher, about 45%.

About 820 MW of sunlight falls on every 1 mi² plot, or 8200 MW for the Ivanpah site. If trees convert 10% of this to chemical energy, and we convert 45% of that to electricity, we find the site would generate 369 MW of electric power, or exactly the output that Ivanpah is rated for. Trees are far cheaper than mirrors, electricity from wood burning typically costs about 4¢/kWh, and the environmental impact of tree farming is likely to be less than that of the solar mirrors mentioned above.
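
The back-of-envelope tree-power estimate, as a sketch using the efficiencies assumed above:

```python
# The back-of-envelope tree-power estimate above, as a calculation.
site_mi2 = 10
sun_per_mi2 = 820           # MW of sunlight per square mile, per the text
tree_eff = 0.10             # assumed solar-to-chemical efficiency of trees
turbine_eff = 0.45          # assumed wood-fired steam-turbine efficiency

electric_mw = site_mi2 * sun_per_mi2 * tree_eff * turbine_eff
print(f"{electric_mw:.0f} MW")   # -> 369 MW, matching Ivanpah's rating
```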

There is another advantage to the high temperature of a wood fire. The use of high-temperature turbines means that any power made at night with natural gas would be produced at higher efficiency. The Ivanpah turbines output at low temperature and low efficiency when burning natural gas (at night), and thus deliver half the power of a normal Siemens plant for every Btu of gas. Because of this, it seems that the Ivanpah plant may use as much natural gas to make its 370 MW during a 12-hour night as a higher-efficiency system would use operating 24 hours, day and night. The additional generation from solar thus might be zero.

If you think the problems here are with the particular design, I should also note that the Ivanpah solar project is just one of several that our Obama-government is funding, and none are doing particularly well. As another example, the $1.45 B solar project on farmland near Gila Bend, Arizona is rated to produce 35 MW, about 1/10 of the Ivanpah project at 2/3 the cost. It was built in 2010 and so far has not produced any power.

Robert E. Buxbaum, March 12, 2014. I’ve tried using wood to make green gasoline. No luck so far. And I’ve come to doubt the likelihood that we can stop global warming.