Tag Archives: Engineering

Advanced windmills + 20 years = field of junk

Everything wears out. This can be a comforting or a depressing thought, but it’s a truth. No old mistake, however egregious, lasts forever, and no bold advance avoids decay. At best, last year’s advance will pay for itself with interest, will wear out gracefully, and will be recalled fondly by aficionados after it’s replaced by something better. Water wheels and early steamships are examples of this type of bold advance. Unfortunately, it is often the case that last year’s innovation turns out to be no advance at all: a technological dead end that never pays for itself, and becomes a dangerous, rotting eyesore or, worse, a laughing-stock or a blot on the ecology. Our first two generations of advanced windmill farms seem to match this description; perhaps the next generation will be better, but here are some thoughts on lessons learned from the existing fields of rotting windmills.

The ancient-design windmills of Don Quixote’s Spain (1300?) were boons. Farmers used them to grind grain, cut wood, and pump drinking water. Holland used similar early windmills to drain their land. So several American presidents came to believe advanced-design windmills would be similar boons if used for continuous electric power generation. It didn’t work, and many of the problems could have been seen at the start. The farmer didn’t care when his water was pumped or when his wood was cut, but when you’re generating electricity, you need to match the power demand exactly. Whenever the customer turns on the switch, electricity is expected to flow at the appropriate wattage; at other times, any power generated is a waste or a nuisance. But electric generator-windmills do not produce power on demand; they produce power when the wind blows. The mismatch of wind and electric demand has bedeviled windmill reliability and economic return. It will likely continue to do so until we find a good way to store electric power cheaply. Until then, windmills will not be able to produce electricity at prices that compete with cheap coal and nuclear power.

There is also the problem of repair. The old windmills of Holland still turn a century later because they were relatively robust and relatively easy to maintain. The modern windmills of the US stand much taller and move much faster. They are often hit and damaged by lightning strikes, and their fast-turning gears tend to wear out quickly. Once damaged, modern windmills are not readily fixed: they are made of advanced fiberglass materials spun on special molds, and worse yet, they are constructed in mountainous, remote locations. Such blades cannot be replaced by amateurs, and even the gears are not readily accessed for repair. More than half of the great power-windmills built in the last 35 years have worn out and are unlikely to ever be repaired. Driving past, you see fields of them sitting idle; the ones still turning look like they will wear out soon. The companies that made and installed these behemoths are mostly out of business, so there is no one to take them down even if there were an economic incentive to do so. And even where a company could be found to fix the old windmills, no one would hire it, as there is not sufficient economic return — the electricity is worth less than the repair.


Komoa Wind Farm in Kona, Hawaii, June 2010; a field of modern-design wind turbines already ruined by wear, wind, and lightning. — Friends of Grand Ronde Valley.

A single rusting windmill would be bad enough, but modern wind turbines were put up as wind farms, with nominal power production targeted to match the output of small coal-fired generators. These wind farms require a lot of area, covering many square miles along some of the most beautiful mountain ranges and ridges — places chosen because the wind was strong.

Putting up these massive farms of windmills led to a situation where the government had to pay for construction of the project, and often provided the land as well. This generous spending gives the taxpayer the risk, and often a political gain — generally to a contributor. But there is very little political gain in paying for the repair or removal of the windmills. And since the electricity value is less than the repair cost, the owners (friends of the politician) generally leave the broken hulks to sit and rot. Politicians don’t like to pay to fix their past mistakes: admitting that a project will someday rust apart without ever paying for itself undermines their next boondoggle.

So what can be done? I wish I could suggest less arrogance and political corruption, but I see no way to achieve that. As the poet wrote about Ozymandias (Ramses II) and his disastrous building projects, the leader inevitably believes: “My name is Ozymandias, king of kings; look on my works, ye mighty, and despair.” So I’ll propose some other, less ambitious ideas. For one, build smaller demonstration projects closer to the customer: first see if a single windmill pays for itself, and only then build a second. Also, electricity storage is absolutely key. I think it is worthwhile to store excess wind power as hydrogen (hydrogen storage is far cheaper than batteries), and the thermodynamics are not bad.

Robert E. Buxbaum, January 3, 2016. These comments are not entirely altruistic. I own a company that makes hydrogen generators and hydrogen purifiers. If the government were to take my suggestions I would benefit.

French engineering

There is something wonderful about French engineering. It is good, but different from US or German engineering. The French don’t seem to copy others, and very few others seem to copy them. Nonetheless, French engineering managed to build an atom bomb, is at the core of the Airbus consortium, and both builds and runs the fastest passenger trains on earth, the TGV, record speed 357 mph on the line between Paris and Luxembourg.

JULY 14, 2015: Female engineering students of the Ecole Polytechnique (the most prestigious engineering school in France) march in the Paris Bastille Day military parade, commemorating the storming of the Bastille in 1789. (Photo by Thierry Chesnot/Getty Images)

France was almost the only country to sell Israel weapons for the first 20 years of its existence, and as odd as the weapons they sold were, they worked. The Mirage jet was noted for short range and maneuverability; in 1967, Israel’s Mirages handily defeated Egypt and Syria’s much larger force of Russian MiGs. More recently, Argentina used French Exocet missiles to sink British ships in the Falklands War, and last week, Turkey used a French missile to down a Russian Su-24 fighter-bomber. Not bad for a country whose main engineering school marches in Napoleonic garb.

The classic of French engineering, of course, is the Eiffel Tower. It is generally unappreciated that this is not the only Eiffel structure designed this way: Eiffel designed railroad bridges and aqueducts in the same style. Here’s an Eiffel railroad bridge.


Eiffel railroad bridge, still in use. American, German, or British bridges of the era look nothing like this.

To get a sense of the engineering artistry of the Eiffel Tower, consider that when the tower was built, in 1889, self-financed by Eiffel, it was more than twice as tall as the next-tallest building on earth. If one weighed the air in a cylinder the height of the tower with a circle about its base, the air would weigh more than the steel of the tower. But here are some other random observations. While the first level of the tower houses a restaurant, a normal American space-use choice, the second level housed, when the tower opened, the print shop and offices of the International Herald Tribune; not a normal tenant. And on the third level, near the very top, you will find Mr. Eiffel’s apartment. The builder lived there; he owned the place. It’s still there today, but now only mannequins are in residence. It’s weird, but genius, like so much that is French engineering.


Eiffel’s apartment atop the tower, now occupied by mannequins of Eiffel and Edison, a one-time guest.

Returning to airplanes: the French were the first to make monoplanes. And having succeeded there, they made a decent-enough plane-like automobile, the 1932 Helicron. It’s a three-man car with a propeller out front and rear-wheel steering. At first, you’d think this is a slow, unmanageable deathtrap, like Buckminster Fuller’s Dymaxion. But you’d be wrong: the Helicron (apparently) is both speedy and safe. It moves at 100 mph or more once it gets going, still passed French safety standards in 2000, and gets taken out for (semi-normal) jaunts. Just don’t stand in front of the propeller (there’s a bicycle version too).


1932 Helicron; 100 mph, seats 3, rear steering, propeller-driven. Photo by Yalon.

The Helicron never quite took off, as it were, but an odd-design motorcycle did quite well, at least in France: the Solex, a front-wheel motorcycle. Unlike US motorcycles, it’s just a bicycle with an engine above the front wheel. The engine runs “backwards” and drives the front wheel via a friction cam. The only clutch action involves engaging the cam. Simple, elegant, and unlikely to be duplicated elsewhere.


A Solex motorcycle and an e-Solex, the battery-powered version. A Citroen and a Peugeot sport are in the background. Popular in France.

The reason I’m writing about French engineering is perhaps the recent attacks. Or perhaps it’s the aesthetic. It’s important to have an engineering aesthetic — an idea you’re after — and to have pride in one’s craft too. The French stand out in how much they have of both. Some months ago I wrote about a more American engineering aesthetic. It’s a good article, but interestingly, I now note that some of the main examples I used were semi-French: the gunpowder factory of E. I. du Pont, the main production facility of a Frenchman’s company in the US.

Robert Buxbaum, December 13, 2015. Some months ago, I wrote about a favorite car engine, finally being used on the Fiat 500 and Alfa Romeo. Fast, energy-efficient, light, maneuverable, and (I suspect) unreliable; the engine embodies a particularly Italian engineering aesthetic.

It’s rocket science

Here are six or so rocket science insights, some simple, some advanced. It’s a fun area of engineering that touches many areas of science and politics. Besides, some people seem to think I’m a rocket scientist.

A basic question I get asked by kids is how a rocket goes up. My answer is that it does not go up; that’s mostly an illusion. The majority of the rocket — the fuel — goes down, and only the light shell goes up. People imagine they are seeing the rocket go up. Taken as a whole, fuel and shell together go down at 1 G: 9.8 m/s², 32 ft/s².

Because 1 G of upward acceleration is always lost to gravity, you need more thrust from the rocket engine than the weight of rocket and fuel. This can be difficult at the beginning, when the rocket is heaviest. If your engine provides less thrust than the weight of your rocket, your rocket sits on the launch pad, burning. If your thrust is merely twice the weight of the rocket, you waste half of your fuel doing nothing useful, just fighting gravity. The upward acceleration you’ll see is a = F/m − 1 G, where F is the force of the engine, m is the mass of the rocket shell plus whatever fuel is in it, and 1 G = 9.8 m/s² is the upward acceleration lost to gravity. For model rocketry, you want to design a rocket engine so that the upward acceleration, a, is in the range 5-10 G. This range avoids wasting lots of fuel without requiring you to build the rocket too sturdily.
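
If you like to check numbers with a few lines of code, here’s that formula as a Python sketch; the thrust and mass below are made-up example values, not any real engine’s specs:

```python
g = 9.8  # m/s^2, the 1 G of acceleration always lost to gravity

def net_acceleration(thrust_N, mass_kg):
    # a = F/m - 1 G: the upward acceleration left after fighting gravity
    return thrust_N / mass_kg - g

# A hypothetical model rocket: 0.5 kg total mass, 30 N of thrust.
a = net_acceleration(30.0, 0.5)
print(f"net acceleration: {a:.1f} m/s^2, about {a / g:.1f} G")  # ~50 m/s^2, ~5 G
```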

For NASA moon rockets, a ≈ 0.2 G at liftoff, increasing as fuel was used. The Saturn V rose, rather majestically, into the sky with a rocket structure that had to be only strong enough to support 1.2 times the rocket weight. Higher initial accelerations would have required more structure and bigger engines; as it was, the Saturn V was the size of a skyscraper. You want the structure to be light so that the majority of the weight is fuel. What makes it tricky is that the acceleration weight has to sit on an engine that gimbals (slants) and runs really hot, about 3000°C. Most engineering projects have fewer constraints than this, and are thus “not rocket science.”


Basic force balance on a rocket going up.

A space rocket has to reach very high speed: orbital speed if it is to stay up indefinitely, or nearly orbital speed for long-range military uses. You can calculate the orbital speed by balancing the acceleration of gravity, 9.8 m/s², against the centripetal acceleration of going around the earth, a sphere 40,000 km in circumference (that’s how the meter was defined). Orbital acceleration a = v²/r, and r = 40,000,000 m/2π = 6,366,000 m. Thus, the speed you need to stay up indefinitely is v = √(6,366,000 × 9.8) = 7,900 m/s = 17,800 mph. That’s roughly Mach 23, or 23 times the speed of sound at sea level (343 m/s). You need some altitude too, just to keep air friction from killing you, but for most missions the main thing you need is velocity, kinetic energy, not potential energy, as I’ll show below. If your speed exceeds 17,800 mph, you go higher up, but the stable orbital velocity there is lower: gravity is weaker higher up, and the radius larger, but since you’re balancing this lower gravity force against v²/r, v² has to be lower to stay stable high up, even though you need extra speed to get there. This all makes docking space-ships tricky, as I’ll explain below. Rockets are the only practical way to reach Mach 23 or anything near it. No current cannon or gun gets close.
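
Here’s that orbital-speed balance as a quick Python check, using the same numbers as above:

```python
import math

g = 9.8                          # m/s^2, gravity at the surface
r = 40_000_000 / (2 * math.pi)   # earth's radius from its 40,000 km circumference, ~6,366,000 m

v = math.sqrt(g * r)             # balance gravity, g, against centripetal acceleration, v^2/r
print(f"orbital speed: {v:.0f} m/s = {v * 2.237:.0f} mph = Mach {v / 343:.0f}")
# ~7,900 m/s, ~17,700 mph, about Mach 23 at the sea-level speed of sound (343 m/s)
```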

Kinetic energy is a lot more important than potential energy for sending an object into orbit. To get a sense of the comparison, consider a one kg mass at orbital speed, 7,900 m/s, and 200 km altitude. For these conditions, the kinetic energy, ½mv², is 31,205 kJ, while the potential energy, mgh, is only 1,960 kJ. The potential energy is thus only about 1/16 of the kinetic energy.
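
And the kinetic-versus-potential comparison, as a sketch:

```python
m, v, h, g = 1.0, 7900.0, 200_000.0, 9.8   # 1 kg at orbital speed, 200 km up

KE = 0.5 * m * v**2                        # kinetic energy, joules
PE = m * g * h                             # potential energy, joules
print(f"KE = {KE/1000:,.0f} kJ, PE = {PE/1000:,.0f} kJ, ratio = 1/{KE/PE:.0f}")
# KE = 31,205 kJ, PE = 1,960 kJ: the potential energy is only ~1/16 of the kinetic
```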

Not that it’s easy to reach 200 km altitude, but you can do it with a sophisticated cannon. The Germans did it with “simple,” one-stage, V2-style rockets. To reach orbit, though, you generally need multiple stages. As a way to see this, consider that the energy content of gasoline + oxygen is about 10.5 MJ/kg (10,500 kJ/kg); this is only 1/3 of the kinetic energy of the orbital rocket, though it’s 5 times the potential energy. A fairly efficient gasoline + oxygen powered cannon could not provide orbital kinetic energy, since the bullet can move no faster than the explosive vapor. In a rocket, this is not a constraint, since most of the mass is ejected.

A shell fired at a 45° angle that reaches 200 km altitude would go about 800 km — the distance between North Korea and Japan, or between Iran and Israel. That would require twice as much energy as a shell fired straight up, about 4,000 kJ/kg. This is still within the range of a (very large) cannon or a single-stage rocket. For Russia or China to hit the US would take much more: orbital, or near-orbital rocketry. To reach the moon, you need more total energy, but less kinetic energy. Moon rockets have taken the approach of first going into orbit, and only later going on. While most of the kinetic energy isn’t lost this way, it’s likely not the best trajectory in terms of energy use.

The force produced by a rocket is equal to the rate of mass shot out times its velocity, F = ∆(mv)/∆t. To get a lot of force for each bit of fuel, you want the gas exit velocity to be as fast as possible. A typical maximum is about 2,500 m/s, roughly Mach 7, for a gasoline-oxygen engine. The acceleration of the rocket itself is this force divided by the total remaining mass of the rocket (rocket shell plus remaining fuel), minus 1 G for gravity. Thus, if the exhaust from a rocket leaves at 2,500 m/s, and you want the rocket to accelerate upward at an average of 10 G, 98 m/s², the rate of mass exhaust must be the average mass of the rocket times 98/2500 = 0.0392/second. That is, about 3.92% of the rocket mass must be ejected each second. Assuming that the fuel for your first-stage engine is less than 80% of the total mass, the first stage will burn out in about 20 seconds. Typically, the acceleration at the end of the 20-second burn is much greater than at the beginning, since the rocket gets lighter as fuel is burnt. This was the case with the Apollo missions: the Saturn V started up at 0.2 G but reached a maximum of about 4 G by the time most of the fuel was used.
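
Here’s the burn-rate arithmetic as a sketch; the 80% fuel fraction is the assumption from the text:

```python
v_exhaust = 2500.0       # m/s, exhaust speed
a_thrust = 98.0          # m/s^2, thrust acceleration (10 G, as in the text)

# Since F = (dm/dt) * v_exhaust and a = F/m, the fractional burn rate is a / v_exhaust.
burn_rate = a_thrust / v_exhaust
print(f"fraction of rocket mass ejected per second: {burn_rate:.4f}")  # 0.0392, ~3.92%/s

fuel_fraction = 0.80     # assume 80% of the liftoff mass is first-stage fuel
print(f"first stage burns out in ~{fuel_fraction / burn_rate:.0f} s")  # ~20 s
```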

If you have a good math background, you can develop a differential equation for the relation between fuel consumption and altitude or final speed. This is readily done if you know calculus, or reasonably done using finite-difference methods. By either method, it turns out that, with no air friction or gravity resistance, you will reach the same speed as the exhaust when about 64% of the rocket mass has been exhausted. In the real world, your rocket will have to exhaust 75 or 80% of its mass as first-stage fuel to reach a final speed of 2,500 m/s. This is less than 1/3 of orbital speed, and reaching it requires that the rest of your rocket mass, that is, the engine, second stage, payload, and any spare fuel to handle descent (Elon Musk’s approach), must weigh less than 20-25% of the original weight of the rocket on the launch pad. The gasoline and oxygen are expensive, but not horribly so if you can reuse the rocket; that’s the motivation for NASA’s and SpaceX’s work on reusable rockets. Most orbital rocket designs require three stages to accelerate to the 7,900 m/s orbital speed calculated above. The second stage is dropped from high altitude and almost invariably lost. If you can set up and solve the differential equation above, a career in science may be for you.
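
The solution to that differential equation is the classic rocket equation, ∆v = v_exhaust × ln(m₀/m₁). Here’s a sketch confirming the 64% figure (no gravity or drag assumed):

```python
import math

def delta_v(v_exhaust, fraction_burnt):
    # Rocket equation: delta_v = v_exhaust * ln(m0/m1), ignoring gravity and air friction
    return v_exhaust * math.log(1.0 / (1.0 - fraction_burnt))

print(f"{delta_v(2500, 0.64):.0f} m/s")  # ~2,550 m/s: burn ~64% and you match the exhaust speed
print(f"{delta_v(2500, 0.80):.0f} m/s")  # ~4,020 m/s ideal; gravity and drag eat the difference
```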

Now, you might wonder about the exhaust speed I’ve been using, 2,500 m/s. You’ll typically want a speed at least this high, as it’s associated with a high value of thrust-seconds per weight of fuel. Thrust-seconds per weight of fuel is called specific impulse, SI: SI = lb-seconds of thrust/lb of fuel. This approximately equals the speed of the exhaust (m/s) divided by 9.8 m/s². For a high-molecular-weight burn, it’s not easy to reach gas speeds much above 2,500 m/s, or values of SI much above 250, but you can get high thrust, since thrust is related to momentum transfer. High thrust is why US and Russian engines typically use gasoline + oxygen. The heat of combustion of gasoline is 42 MJ/kg, but burning a kg of gasoline requires roughly 2.5 kg of oxygen. Thus, for a rocket fueled by gasoline + oxygen, the heat of combustion per kg of mixture is 42/3.5 = 12 MJ/kg. A typical rocket engine is 30% efficient (the V2’s efficiency was lower, the Saturn V’s higher). Per unit of fuel + oxygen mass, ½v² = 0.3 × 12,000,000; v = √7,200,000 = 2,680 m/s. Adding some mass for the engine and fuel tanks, the specific impulse for this engine will be about 250 s. This is fairly typical. Higher exhaust speeds have been achieved with hydrogen fuel, which has a higher combustion energy per weight. It is also possible to increase the engine efficiency; the Saturn V stage 2 efficiency was nearly 50%, but the thrust was low. The sources of inefficiency include inefficiencies in compression, incomplete combustion, friction flows in the engine, and back-pressure of the atmosphere. If you can make a reliable, high-efficiency engine with good lift, a career in engineering may be for you. A yet bigger challenge is doing this at a reasonable cost.
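
Here’s the specific-impulse estimate as a sketch, using the text’s assumed 30% engine efficiency:

```python
import math

q = 42e6 / 3.5        # J per kg of gasoline + oxygen mixture: 12 MJ/kg
efficiency = 0.30     # typical rocket engine efficiency, per the text

v_exhaust = math.sqrt(2 * efficiency * q)   # from 1/2 v^2 = efficiency * q
SI = v_exhaust / 9.8                        # specific impulse, seconds
print(f"v_exhaust = {v_exhaust:.0f} m/s, SI = {SI:.0f} s")
# ~2,680 m/s and ~274 s; engine and tank weight bring the practical SI down toward 250 s
```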

At an average acceleration of 5 G = 49 m/s² and a first stage that reaches 2,500 m/s, you’ll find that the first stage burns out after 51 seconds. If the rocket were going straight up (a bad idea), you’d find you are at an altitude of about 63.7 km. A better idea would be an average trajectory of 30°, leaving you at an altitude of 32 km or so. At that altitude you can expect far less air friction, and you can expect the second-stage engine to be more efficient. It seems to me you may want to wait another 10 seconds before firing the second stage: you’ll be 12 km higher up, and the benefit of this should be significant. I notice that space launches do wait a few seconds before firing their second stage.
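
A quick check of those burnout numbers:

```python
import math

a, v_final = 49.0, 2500.0     # 5 G average net acceleration; first-stage final speed, m/s
t = v_final / a               # ~51 s to burnout
h_vertical = 0.5 * a * t**2   # altitude if fired straight up: ~64 km
h_30deg = h_vertical * math.sin(math.radians(30))   # ~32 km on a 30-degree average trajectory
print(f"burnout at {t:.0f} s; {h_vertical/1000:.1f} km straight up, {h_30deg/1000:.1f} km at 30 deg")
```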

As a final bit, I’d mentioned that docking a rocket with a space station is difficult, in part, because docking requires an increase in angular speed, ω, but this generally goes along with a decrease in altitude: a counter-intuitive outcome. Setting the acceleration due to gravity equal to the centripetal acceleration, we find GM/r² = ω²r, where G is the gravitational constant and M is the mass of the earth. Rearranging, we find that ω² = GM/r³. For high angular speed, you need small r: a low altitude. When we first went to dock space-ships, in the early 60s, we had not realized this. When the astronauts fired the engines to dock, they found that they’d accelerate in velocity, but not in angular speed: v = ωr. The faster they went, the higher up they went, but the lower the angular speed got: the fewer the orbits per day. Eventually they realized that, to dock with another ship or a space station that is in front of you, you do not accelerate, but decelerate. When you decelerate, you lose altitude and gain angular speed: you catch up with the station, but at a lower altitude. Your next step is to angle your ship near-radially to the earth, and accelerate by firing engines to the side till you dock. Like much of orbital rocketry, it’s simple, but not intuitive or easy.
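
Here’s the ω² = GM/r³ relation as a sketch; the GM value is the standard figure for earth, not from the text:

```python
import math

GM = 3.986e14           # m^3/s^2, earth's standard gravitational parameter

def omega(r_m):
    # Angular speed of a circular orbit, from w^2 = GM/r^3
    return math.sqrt(GM / r_m**3)

r_200km = 6.371e6 + 200e3   # orbit radius at 200 km altitude
r_400km = 6.371e6 + 400e3   # orbit radius at 400 km altitude
print(omega(r_200km) > omega(r_400km))   # True: the lower orbit circles the earth faster,
# which is why you decelerate, dropping to a lower, faster orbit, to catch a station ahead of you
```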

Robert Buxbaum, August 12, 2015. A cannon that could reach from North Korea to Japan, say, would have to be on the order of 10 km long, running along the slope of a mountain. Even at that length, the shell would have to fire at 450 G or so, and reach a speed of about 3,000 m/s, over 1/3 of orbital speed.

Gatling guns and the Spanish American War

I rather like inventions and engineering history, and I regularly go to the SME, a fair of 18th-to-19th-century innovation. I am generally impressed with how these machines work, but what really brings things out is when talented people use an innovation to do something radical. Case in point: the Gatling gun. Invented by Richard J. Gatling in 1861 for use in the Civil War, it was never used there, nor in any major war until 1898, when Lieut. John H. Parker (“Gatling Gun” Parker) showed how to deploy it successfully and helped take Cuba. Until then, Gatling guns were considered another species of short-range, grape-shot cannon, and ignored.


A Gatling gun of the late 1800s. Similar, but not identical to the ones Parker brought along.

Parker had sent his thoughts on how to deploy a Gatling gun in a letter to West Point, but they were ignored, as most new thoughts are. For the Spanish-American War, Parker got four of the guns, trained his small detachment to use them, and registered the unit as a quartermaster corps in order to sneak the guns aboard ship to Cuba. Here follows Theodore Roosevelt’s account of their use:

“On the morning of July 1st, the dismounted cavalry, including my regiment, stormed Kettle Hill, driving the Spaniards from their trenches. After taking the crest, I made the men under me turn and begin volley-firing at the San Juan Blockhouse and entrenchments against which Hawkins’ and Kent’s Infantry were advancing. While thus firing, there suddenly smote on our ears a peculiar drumming sound. One or two of the men cried out, “The Spanish machine guns!” but, after listening a moment, I leaped to my feet and called, “It’s the Gatlings, men! It’s our Gatlings!” Immediately the troopers began to cheer lustily, for the sound was most inspiring. Whenever the drumming stopped, it was only to open again a little nearer the front. Our artillery, using black powder, had not been able to stand within range of the Spanish rifles, but it was perfectly evident that the Gatlings were troubled by no such consideration, for they were advancing all the while.


Roosevelt, his volunteers, and the Buffalo Soldiers charge up Kettle Hill; painting by Frederick Remington.

Soon the infantry took San Juan Hill, and, after one false start, we in turn rushed the next line of block-houses and intrenchments, and then swung to the left and took the chain of hills immediately fronting Santiago. Here I found myself on the extreme front, in command of the fragments of all six regiments of the cavalry division. I received orders to halt where I was, but to hold the hill at all hazards. The Spaniards were heavily reinforced and they opened a tremendous fire upon us from their batteries and trenches. We laid down just behind the gentle crest of the hill, firing as we got the chance, but, for the most part, taking the fire without responding. As the afternoon wore on, however, the Spaniards became bolder, and made an attack upon the position. They did not push it home, but they did advance, their firing being redoubled. We at once ran forward to the crest and opened on them, and, as we did so, the unmistakable drumming of the Gatlings opened abreast of us, to our right, and the men cheered again. As soon as the attack was definitely repulsed, I strolled over to find out about the Gatlings, and there I found Lieut. Parker with two of his guns right on our left, abreast of our men, who at that time were closer to the Spaniards than any others.

From thence on, Parker’s Gatlings were our inseparable companion throughout the siege. They were right up at the front. When we dug our trenches, he took off the wheels of his guns and put them in the trenches. His men and ours slept in the same bomb-proofs and shared with one another whenever either side got a supply of beans or coffee and sugar. At no hour of the day or night was Parker anywhere but where we wished him to be, in the event of an attack. If a troop of my regiment was sent off to guard some road or some break in the lines, we were almost certain to get Parker to send a Gatling along, and, whether the change was made by day or by night, the Gatling went. Sometimes we took the initiative and started to quell the fire of the Spanish trenches; sometimes they opened upon us; but, at whatever hour of the twenty-four the fighting began, the drumming of the Gatlings was soon heard through the cracking of our own carbines.


Map of the attack on Kettle Hill and San Juan Hill in the Spanish-American War, July 1, 1898. The Spanish had 760 troops in fortified positions defending the crests of the two hills, and 10,000 more defending Santiago. As Americans were being killed by crossfire in “hell’s pocket” near the foot of San Juan Hill, Roosevelt, on the right, charged his men, the “Rough Riders” [1st Volunteer Cavalry] and the “Buffalo Soldiers” [10th Cavalry], up Kettle Hill, hoping to end the crossfire and to help protect the troops charging further up San Juan Hill. Parker’s Gatlings, about 600 yards from the Spanish, fired some 700 rounds per minute into the Spanish lines; they were then repositioned on the hill to beat back the counterattack. Without Parker’s Gatling guns, the chances of success would have been small.

I have had too little experience to make my judgment final; but certainly, if I were to command either a regiment or a brigade, whether of cavalry or infantry, I would try to get a Gatling battery–under a good man–with me. I feel sure that the greatest possible assistance would be rendered, under almost all circumstances, by such a Gatling battery, if well handled; for I believe that it could be pushed fairly to the front of the firing-line. At any rate, this is the way that Lieut. Parker used his battery when he went into action at San Juan, and when he kept it in the trenches beside the Rough Riders before Santiago.”

Here is how the Gatling gun works: it’s rather like five or more rotating zip guns; a pawl pulls and releases the firing pins. Gravity feeds the bullets in at the top and drops the shells out the bottom. Lt. Parker’s deployment innovation was to have the guns hand-carried to protected positions, near enough to the front that they could be aimed. The swivel and rapid fire of the guns allowed the shooter to correct for the drop in the bullets over fairly great distances. This provided rapid-fire, accurate protection from positions that could not readily be hit. Shortly after the victory on San Juan Hill, July 1, 1898, the Spanish Caribbean fleet was destroyed on July 3; Santiago surrendered July 17, and all of Cuba surrendered four days later, July 21 (my birthday) — a remarkably short war. While TR may not have figured out how to use the Gatling guns effectively, he at least recognized that Lt. John Parker had.


Roosevelt gave two of these more modern Colt-Browning repeating rifles to Parker’s detachment the day after the battle. They were not particularly effective. By WWI, “Gatling Gun” Parker would be a general; by 1901, Roosevelt would be president.

The day after the battle, Col. Roosevelt gifted Parker’s group with two Colt-Browning machine guns that he and his family had bought, but had not used. According to Roosevelt, these rifles proved to be “more delicate than the Gatlings, and very readily got out-of-order.” The Brownings were the predecessors of the modern machine guns used in the Boxer Rebellion and for wholesale death in WWI and WWII.

Dr. Robert E. Buxbaum, June 9, 2015. The Spanish-American War was a war of misunderstanding and colonialism, but its effects, by and large, were good. The cause, the sinking of the USS Maine on February 15, 1898, was likely a mistake. Spain, a decaying colonial power, was a conservative monarchy under Alfonso XIII; the loss of Cuba seems to have led to liberalization. The US, a republic, became a colonial power. There is an inherent friction, I think, between conservatism and liberal republicanism. Generally, republics have out-gunned and out-produced other countries, perhaps because they reward individual initiative.

Brass monkey cold

In case it should ever come up in conversation, only the picture at left shows a brass monkey; the other is a bronze statue of some sort of primate. A brass monkey is a rack used to stack cannon balls into a face-centered pyramid. A cannon crew could fire about once per minute, and an engagement could last five hours, so you could hope to go through a lot of cannon balls during an engagement (assuming you survived).


Small brass monkey. The classic monkey might have 9 × 9 or 10 × 10 cannon balls on the lower level.


Bronze sculpture of a primate playing with balls — but look what the balls are sitting on: it’s a dada art joke.

But brass monkeys typically show up in conversation in terms of it being cold enough to freeze the balls off of a brass monkey, and if you imagine an ornamental statue, you’d never guess how cold that could be. Well, for a cannonball holder, the answer has to do with the thermal expansion of metals. Cannon balls were made of iron, and the classic brass monkey was made of brass, an alloy with a much greater thermal expansion than iron. As the temperature drops, the brass monkey contracts more than the iron balls; when the drop is large enough, the balls fall off and roll around.

The thermal expansion coefficient of brass is 18.9 × 10⁻⁶/°C, while the thermal expansion coefficient of iron is 11.7 × 10⁻⁶/°C. The difference, 7.2 × 10⁻⁶/°C, will determine the key temperature. Now consider a large brass monkey, one with 400 × 400 holes on the lower level, 399 × 399 on the second, and so on. Though it doesn’t affect the result, we’ll consider a monkey that holds 12 lb cannon balls, a typical size of 1750-1830. Each 12 lb ball is 4.4″ in diameter at room temperature, 20°C in those days. At 20°C, this monkey is about 1760″ wide. The balls will fall off when the monkey shrinks, relative to the balls, by about 1/3 of a ball diameter: 1.5″.

We can calculate ∆T, the temperature change, °C, that is required to lower the width-difference by 1.5″ as follows:


-1.5″ = ∆T × 1760″ × 7.2 × 10⁻⁶/°C

We find that ∆T = -118°C. The temperature where this happens is 118 degrees cooler than 20°C, or -98°C. That’s a temperature you could perhaps reach at the South Pole, or maybe in deepest Siberia. It’s not likely to be a problem, especially with a smaller brass monkey.
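
For those who’d rather let a computer do the arithmetic, here’s the same calculation as a short Python sketch:

```python
alpha_brass, alpha_iron = 18.9e-6, 11.7e-6   # thermal expansion coefficients, per degC
d_alpha = alpha_brass - alpha_iron           # 7.2e-6 per degC, the difference that matters

width = 1760.0       # inches: a 400 x 400 monkey of 4.4-inch balls at 20 C
slack = 1.5          # inches of differential shrinkage needed, ~1/3 of a ball diameter

dT = slack / (width * d_alpha)
print(f"cooling needed: {dT:.0f} C, so the balls fall off at {20 - dT:.0f} C")   # ~ -98 C
```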

Robert E. Buxbaum, February 21, 2015 (modified Apr. 28, 2021). Some fun thoughts: convince yourself that the key temperature is independent of the size of the cannon balls; that is, that I didn’t need to choose 12-pounders. A bit more advanced: what is the equation for the number of balls on a monkey of any particular base size? Show that the packing density is no more efficient if the bottom layer were an equilateral triangle, and not a square. If you liked this, you might want to know how much wood a woodchuck chucks if a woodchuck could chuck wood, or about the relationship between mustaches and WWII diplomacy.

Is college worth no cost?

While a college degree gives most graduates a salary benefit over high school graduates, a study by the Bureau of Labor Statistics indicates that the benefit disappears if you graduate in the bottom 25% of your class. Worse yet, if you don’t graduate at all, you can end up losing salary money, especially if you go into low-paying fields like child development or the physical sciences.


The average college graduate earns significantly more than a high school grad, but not if you attend a pricey school, graduate in the bottom 1/4 of your class, or have the wrong major.

Most people realize there is a great earnings difference depending on your field of study: graduates in engineering and medicine do fairly well financially, while even top graduates in child development or athletic sciences are barely able to justify the college and opportunity costs (worse if they go to an expensive college). What isn’t always realized is that not everyone who enters these fields graduates. For them, there is a steep loss when the four (or more) years of lost income are considered.


If you don’t graduate, or get only an AA (2-year) degree, the increase in wages is minimal, and you lose years of work plus whatever your education cost. The loss is particularly high if you study social-science fields at an expensive college and don’t graduate, or if you graduate at the bottom of your class.

A report from the New York Federal Reserve finds that the highest-paying major is petroleum engineering, mid-career salary $176,300/yr, and the bottom is child development, mid-career salary $36,400/yr (click to check on your major). I’m not sure most students or advisors are aware of the steep salary difference, or that college can have a salary downside if one picks the wrong major or does not complete the degree. In terms of earnings, you might be better off avoiding even a free college degree in these areas, unless you’re fairly sure you’ll complete the degree or you really want to work in the field.


Top earning majors: Majors that pay.

Of course, college can provide more than money: knowledge, for instance, and the ability to reason better. But these benefits are likely lost if you don’t work at it, or don’t go into a field you love. They can also come from hard, self-taught reading. In either case, it is the work habits that will make you grow as a person and leave you more employable. Tough colleges add a lot by exposure to new people and new ways of thinking about great books, and by forced experience in writing essays — but these benefits too are work-dependent and college-dependent. If you work hard understanding a great book, it will show. If you didn’t work at it, or only exposed yourself to easier fare, that too will show.

As students don’t like criticism, and as good criticism is hard to give, and harder to give well, many less-demanding colleges give little or no critical feedback, especially to disadvantaged students. This disadvantages them even more, as criticism is an important part of learning. If all you get is a positive experience, a nice campus, and a dramatic graduation, this is not learning; nor is it necessarily worth 4-5 years of your life.

As a comic take on the high time-cost of a liberal arts education, “Father” Guido Sarducci, of Saturday Night Live, describes his “five-minute college experience.” To a surprising extent, it provides everything you’ll remember of a four-year college experience in five minutes, including math, history, political science, and language (Spanish). For those who are not sure they will complete a liberal arts education, Father Sarducci’s five minutes may be a better investment than a free four years in community college.

Robert E. Buxbaum, January 21-22, 2015. My sense is that the better part of education is what you get when you don’t get what you want.

Nuclear fusion

I got my PhD at Princeton University 33 years ago (1981), working on the engineering of nuclear fusion reactors, and I thought I’d use this blog to rethink the issues. I find I’m still of the opinion that developing fusion is important, as it seems the best long-range power option. Civilization will still need significant electric power 300 to 3,000 years from now, it seems, when most other fuel sources are gone. Fusion is also one of the few options for long-range space exploration, needed if we ever decide to send colonies to Alpha Centauri or Saturn. I thought fusion would be ready by now, but it is not, and commercial use seems unlikely for the next ten years at least — an indication of the difficulties involved, and of a certain lack of urgency.

Oil, gas, and uranium didn’t run out like we’d predicted in the mid 70s. Instead, population growth slowed, new supplies were found, and better methods were developed to recover and use them. Shale oil and fracking unlocked hydrocarbons we thought were unusable, and nuclear fission reactors got better: safer and more efficient. At the same time, the more we studied, the clearer it became that fusion’s technical problems are much harder to tame than uranium fission’s.

Uranium fission was, and is, frighteningly simple — far simpler than even the most basic fusion reactor. The first nuclear fission reactor (1942) involved nothing more than uranium pellets in a pile of carbon bricks stacked in a converted squash court at the University of Chicago. No outside effort was needed to get the large, unstable uranium atoms to split into smaller, more stable ones. Water circulating through the pile removed the heat released, and control was maintained by people lifting and lowering cadmium control rods while standing on the pile.

A fusion reactor requires high temperature or energy to make anything happen. Fusion energy is produced by combining small, unstable, heavy hydrogen atoms into helium, a bigger, more stable atom; see the figure below. To do this reaction you need to operate at the equivalent of about 500,000,000°C, and containing the reaction requires (typically) a magnetic bottle — something far more complex than a pile of graphite bricks. The reward is smaller too: “only” about 1/13th as much energy per event as fission. We knew the magnetic bottles were going to be tricky, e.g., there was no obvious heat transfer and control method, but fusion seemed important enough, and the problems seemed manageable enough, that fusion power seemed worth pursuing — with just enough difficulties to make it a challenge.


Basic fusion reaction: deuterium + tritium react to give helium, a neutron and energy.

The plan at Princeton, and most everywhere, was to use a tokamak, a doughnut-shaped reactor like the one shown below, but roughly twice as big; “tokamak” is a Russian acronym. The doughnut served as one side of an enormous transformer. Hydrogen fuel was ionized into a plasma (a neutral soup of protons and electrons) and heated to 300,000,000°C by a current in the tokamak generated by varying the current in the other side of the transformer. Plasma containment was provided by enormous magnets on the top and bottom, and by ring-shaped magnets arranged around the torus.

As development went on, we found we kept needing bigger and bigger doughnuts and stronger and stronger magnets in an effort to balance heat loss against fusion heating. The number density of hydrogen atoms per volume, n, is proportional to the magnetic strength. This is important because the fusion heat rate per volume is proportional to n squared, n², while heat loss is proportional to n divided by the residence time, something we called tau, τ. The main heat loss was from the hot plasma going to the reactor surface. Because of this, a heat-balance ratio, heat in divided by heat out, was seen to be important, and that ratio is more-or-less proportional to nτ. As the target temperatures increased, we found we needed larger and larger nτ values to achieve a positive heat balance, and this translated to ever-larger reactors and ever-stronger magnetic fields. Even here there was a limit: about 1 billion Kelvin, a thermodynamic temperature where the fusion reaction goes backward and no energy is produced. The Princeton design was huge, with super-strong magnets, and was operated at 300 million °C, near the top of the reaction curve; if the temperature went above or below this, the fire would go out. There was no room for error, and relatively little energy output per volume — compared to fission.


Fusion reaction options and reaction rates.

The most likely reaction involved deuterium and tritium, referred to as D and T. This is the reaction of the two heavy isotopes of hydrogen shown in the figure above — the same reaction used in hydrogen bombs, a point we rarely made to the public. Each reaction, D + T → He + n, yields 17.6 million electron volts (17.6 MeV). This is 17.6 million times the energy you get from an electron moving over one volt, but only 1/13 the energy of a fission reaction. By comparison, the energy of water-forming, H₂ + ½O₂ → H₂O, is the equivalent of two electrons moving over 1.2 volts, or 2.4 electron volts (eV), some 7 million times less than fusion.

The Princeton design involved reacting 40 g/hr of heavy hydrogen to produce 8 mol/hr of helium and 4,000 MW of heat. The heat was to be converted to electricity at 38% efficiency using a topping cycle, a modern (but relatively untried) design. Of the roughly 1,500 MW of electricity that was supposed to be produced, all but about 400 MW was to be delivered to the power grid — if everything worked right. Sorry to say, the value of the electricity did not rise anywhere near as fast as the cost of the reactor and turbines. Another problem: 1,100 MW was more than could easily be absorbed by any electrical grid. The output was high and steady, and could not easily be adjusted to match fluctuating customer demand. By contrast, a coal plant’s or fuel cell’s output can be adjusted easily (and a nuclear plant’s with a little more difficulty).
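
As a check of these design figures, here’s the heat-output arithmetic in a short Python sketch; the physical constants are standard, and the design numbers are the ones from the text:

```python
eV = 1.602e-19                   # joules per electron volt
N_A = 6.022e23                   # Avogadro's number

E_per_mol = 17.6e6 * eV * N_A    # D + T -> He + n: ~1.7e12 J per mol of helium made
mol_per_hr = 8.0                 # from 40 g/hr of D + T feed (5 g of fuel per mol of He)

heat_MW = E_per_mol * mol_per_hr / 3600 / 1e6
print(f"{heat_MW:.0f} MW of heat, {0.38 * heat_MW:.0f} MW electric at 38% efficiency")
# ~3,800 MW and ~1,400 MW: close to the 4,000 MW and 1,500 MW design figures
```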

Because of the need for heat balance, it turned out that at least 9% of the hydrogen had to be burnt per pass through the reactor. The heat lost per mol by conduction to the wall was, to good approximation, the heat capacity of each mol of hydrogen ions, 82 J/°C·mol, times the temperature of the ions, 300 million °C, divided by the containment time, τ. The Princeton design was supposed to have a containment time of about 4 seconds; as a result, the heat loss by conduction was about 6.2 GW per mol. This must be matched by the part of the molar heat of reaction that stays in the plasma: 17.6 MeV times Faraday’s constant, 96,485, divided by 4 seconds (= 430 GW/mol reacted), divided by 5, since of the 430 GW/mol produced in fusion reactions only 1/5 remains in the plasma (= 86 GW/mol); the other 4/5 of the energy of reaction leaves with the neutron. To get the heat balance right, at least 9% of the hydrogen must react per pass through the reactor; there were also some heat losses from radiation, so the number is higher than the simple ratio suggests. Burn a higher or lower percentage of the hydrogen and you had problems. The only other solution was to increase τ beyond 4 seconds, but this meant ever-bigger reactors.
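
Here’s that burn-fraction estimate as a sketch, reproducing the arithmetic above:

```python
Cp, T, tau = 82.0, 300e6, 4.0    # J/(mol*degC); ion temperature, degC; containment time, s
F = 96485.0                      # C/mol, Faraday's constant

loss = Cp * T / tau              # conduction loss: ~6.2e9 W per mol of plasma ions
stays = 17.6e6 * F / tau / 5     # the 1/5 of fusion heat that stays in the plasma: ~8.5e10 W/mol
print(f"burn fraction needed: {loss / stays:.1%}")
# ~7% from conduction alone; radiation losses push the requirement up to ~9% per pass
```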

There was also a material-handling issue: to get enough fuel hydrogen into the center of the reactor, quite a lot of radioactive gas had to be handled and extracted from the plasma chamber. The fuel was to be frozen into tiny spheres of near-solid hydrogen and injected into the reactor at supersonic velocity; any slower and the spheres would evaporate before reaching the center. As 40 grams per hour was only 9% of the feed, it became clear that we had to be ready to produce and inject about a pound per hour of tiny spheres. These “snowballs in hell” had to be small so they didn’t dampen the fire. The vacuum system had to be big enough to handle the pound per hour or so of unburned hydrogen and ash while keeping the pressure near total vacuum. You then had to purify the hydrogen from the helium ash and remake the little spheres to feed back to the reactor. There were no easy engineering problems here, but I found them enjoyable enough. With a colleague, I came up with a cute, efficient high-vacuum pump and recycling system, and published it here.

Yet another engineering challenge concerned the difficulty of finding a material for the first wall — the inner wall of the doughnut facing the plasma. Of the 4,000 MW of heat energy produced, all the conduction and radiation heat, about 1,000 MW, is deposited in the first wall and has to be conducted away. Conducting this heat means that the wall must have an enormous coolant flow and must withstand an enormous amount of thermal stress. One possible approach was to use a liquid wall, but I’ve recently come up with a rather nicer solid-wall solution (I think) and have filed a patent; more on that later, perhaps after/if the patent is accepted. Another engineering challenge was making T, tritium, for the D-T reaction. Tritium is not found in nature, but has to be made from the neutron created in the reaction and from lithium in a breeder blanket, Li + n → He + T. I examined all possible options for extracting this tritium from the lithium at low concentrations as part of my PhD thesis, and eventually found a nice solution. The education I got in the process is used in my hydrogen engineering business, REB Research.


Man inside the fusion reactor doughnut at ITER. He’d better leave before the 8,000,000°C plasma turns on.

Because of its complexity, and all these engineering challenges, fusion power never reached the maturity of fission power; and then Three Mile Island happened and ruined the enthusiasm for all things nuclear. There were some claims that fusion would be safer than fission, but because of the complexity, and because of the improvements in fission, I am not convinced that fusion would ever be even as safe. And the long-term need keeps moving out: we keep finding more uranium, and we’ve developed breeder reactors and a thorium cycle: technologies that make it very unlikely we will run out of fission material any time soon.

The main near-term advantage I see for fusion over fission is that there are fewer radioactive products; see this comparison. A secondary advantage is neutrons: fusion reactors make excess neutrons that can be used to make tritium or other unusual elements. A need for these could favor the development of fusion power. And finally, there’s the long-term need: space exploration, or basic power when we run out of coal, uranium, and thorium. Fine advantages, but unlikely to be important for a hundred years.

Robert E. Buxbaum, March 1, 2014. Here’s a post on land use, on the aesthetics of engineering design, and on the health risks of nuclear power. The sun’s nuclear fusion reactor is unstable too — one possible source of the chaotic behavior of the climate. Here’s a control joke.

Toxic electrochemistry and biology at home

A few weeks back, I decided to do something about the low quality of experiments in modern chemistry and science sets; I posted to this blog some interesting science experiments, and some more-interesting experiments that could be done at home using the toxic (poisonous, dangerous) chemicals available under the sink or at the hardware store. Here are some more. As before, the chemicals are toxic and dangerous but available, and these experiments should be done only with parental (adult) supervision. Some of these next experiments involve some math, a key aspect of science; others involve some new equipment as well as the stuff you used previously. To do them all, you will want a stopwatch, a volt-amp meter, and a small transformer, available at RadioShack; you’ll also want some test tubes or similar clear cigar tubes, wire, and baking soda. For the coating experiment you’ll want copper drain cleaner, or a copper-containing fertilizer, and some washers, available at the hardware store; for the metal-casting experiment you’ll need a tin can, pliers, a gas stove, and some pennies, plus a mold, some sand, good shoes, and a floor cover; and for the biology experiment you will need several 9 V batteries, and you will have to get a frog and kill it. You can skip any of these experiments if you like and do the others. If you have not done the previous experiments, look them over or do them now.

1) The first experiments aim to add some numerical observations to our previous studies of electrolysis. Here is where you will see why we think that molecules like water are made of fixed compositions of atoms. Let’s redo the water electrolysis experiment, now with an ammeter in line between the battery and one of the electrodes. With the ammeter connected, put both electrodes deep into a solution of water with a little lye, and then (while watching the ammeter) lift one electrode half out, place it back, and lift the other. You will find, I think, that one electrode or the other is the limiting electrode: the amperage goes to half its previous value when this electrode is half lifted. Lifting the other electrode changes neither the amperage nor the amount of bubbles, but lifting the limiting electrode changes both. If you watch closely, though, you’ll see it changes the amount of bubbles at both electrodes in proportion, and that the amount of bubbles is in proportion to the amperage. If you collect the two gases simultaneously, you’ll see that the volume of gas collected is always in a ratio of 2 to 1. For other electrolyses (H₂ and Cl₂) it will be 1 to 1; it’s always a ratio of small numbers. See the diagram below on how to make and collect oxygen and hydrogen simultaneously by electrolyzing water with lye or baking soda as the electrolyte. With lye or baking soda, you’ll find that there is always twice as much hydrogen produced as oxygen — exactly.

You can also do electrolysis with table salt or muriatic acid as the electrolyte, but for this you’ll need carbon or platinum electrodes. If you do it right, you’ll get hydrogen and chlorine, a green gas that smells bad. If you don’t do it right, using a wire instead of a carbon or platinum electrode, you’ll still get hydrogen, but no chlorine; instead of chlorine, you’ll corrode the wire on that end, making, e.g., copper chloride. With a carbon electrode and any chloride compound as the electrolyte, you’ll produce chlorine; without a chloride electrolyte, you will not produce chlorine at any voltage, or with any electrode. And if you make chlorine and check the volumes, you’ll find you always make one volume of chlorine for every volume of hydrogen. We imagine from this that the compounds are made of fixed atoms that transfer electrons in fixed whole numbers per molecule. You always make two volumes of hydrogen for every volume of oxygen because (we think) making oxygen requires twice as many electrons as making hydrogen.


At home electrolysis experiment

We get the same volume of chlorine as hydrogen because making chlorine and hydrogen requires the same number of electrons to be transferred. These are the sorts of experiments that caused people to believe in atoms and molecules as the fundamental, unchanging components of matter. Different solutes, voltages, and electrodes will affect how fast you make hydrogen and oxygen, as will the amount of dissolved solute, but the gases produced are always the same, and their volumes are always proportional to the amperage, in a fixed ratio of small whole numbers.

As always, don’t let significant quantities of hydrogen and oxygen, or of hydrogen and chlorine, mix in a closed space. Hydrogen plus oxygen (Brown’s gas) is quite explosive; hydrogen and chlorine are reactive as well. When working with chlorine, it is best to work outside or near an open window: chlorine is a poison gas.

You may also want to try this with non-electrolytes: pure water, or water with sugar or alcohol dissolved in it. You will find there is hardly any amperage or gas with these, but the small amount of gas produced will retain the same ratio. For college-level folks, here is some physics/math relating to the minimum voltage, and to the quantities you should expect at any amperage.
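
If you’d like a rough idea of how much gas to expect, here’s a Faraday’s-law sketch in Python; the current and time are example values, and 100% current efficiency is assumed:

```python
F = 96485.0          # coulombs per mol of electrons (Faraday's constant)
I, t = 1.0, 600.0    # example: 1 A, as read on your ammeter, for 10 minutes

mol_e = I * t / F                 # mols of electrons passed through the cell
H2_mL = mol_e / 2 * 24000         # 2 electrons per H2 molecule; ~24,000 mL/mol at room temperature
O2_mL = mol_e / 4 * 24000         # 4 electrons per O2 molecule: hence the 2:1 volume ratio
print(f"H2: {H2_mL:.0f} mL, O2: {O2_mL:.0f} mL")   # ~75 mL and ~37 mL
```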

2) Now let’s try electroplating metals. Using the right solutes, metals can be made to coat your electrodes the same way that bubbles of gas coated your electrodes in the experiments above. The key is to find the right chemical, and as a start let me suggest the copper sulphate sold in hardware stores to stop root growth. Alternatively, copper sulphate is often sold as part of a fertilizer solution like “Miracle-Gro”; look for copper on the label, or for a blue-colored fertilizer. Make a solution using enough copper so that the solution is recognizably green. Use two steel washers as electrodes (that is, connect the wires from your battery to the washers) and put them in the solution. You will find that one washer turns red as it is coated with copper. Depending on what else your copper solution contained, bubbles may appear at the other washer, or the other washer will corrode.

You are now ready to take this to a higher level — silver coating. Take a piece of material that you want to silver-plate, and clean it nicely with soap and water. Connect it to the electrode where you previously coated copper. Now clean out the solution carefully. Buy some silver nitrate from a drug store, and dissolve a few grams (1/8 tsp for a start) in pure water; place the silverware and the same electrodes as before in the solution, connected to the battery. For a nicer coat, use a 1½ volt battery; the 6 V battery will work too, but the silver won’t look as nice. With silver nitrate, you’ll notice that one electrode produces gas (oxygen) and the other turns silvery. Now disconnect the silvery electrode. You can use this method to silver-coat a ring, fork, or cup — anything you want silver-coated. This process is called electroplating. As with hydrogen production, there is a proportional relationship between the time, the amperage, and the amount of metal you deposit — until all the silver nitrate in solution is used up.
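
Here’s a rough Faraday’s-law estimate of the silver you’d deposit; the current and time are example values, and 100% current efficiency is assumed:

```python
F, M_Ag = 96485.0, 107.87    # Faraday's constant, C/mol; molar mass of silver, g/mol
I, t = 0.5, 600.0            # example: 0.5 A for 10 minutes

grams = I * t / F * M_Ag     # Ag+ + e- -> Ag: one electron deposits one silver atom
print(f"~{grams:.2f} g of silver deposited")   # ~0.34 g, until the silver nitrate runs out
```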

As a yet-more-complex version, you can also electroplate without using a battery. This was my simple electroplating experiment (presented previously). Consider it only after you understand most everything else I’ve done; when I saw this for the first time in high school, I was confused.

3) Casting metal objects using melted pennies, heat from a gas stove, and sand or plaster as a cast. This is pretty easy, but sort of dangerous — you need a parent’s help, if only as a watcher. This is a version of an experiment I did as a kid: I did metal casting using lead that some plumbers had left over. I melted it in a tin can on our gas stove and cast “quarters” in a plaster mold. Plumbers no longer use lead, but modern pennies are mostly zinc, and will melt about as well as my lead did. They are also much safer.

As a preparation for this experiment, get a bucket full of sand. This is where you’ll put your metal when you’re done. Now get some pennies (1983 or later; earlier pennies are mostly copper), a pair of pliers, an empty, clean tin can, and a gas stove. If you like, you can make a plaster mold of some small object: a ring, a 50¢ piece — anything you might want to cast from your pennies. With a parent’s help, light your gas stove, put 5-8 pennies in the empty tin can, and hold the can over the lit gas burner using your pliers. Turn the gas to high. In a few minutes the bottom of the can will become red-hot. About this point, the pennies will soften and melt into a silvery puddle. By tilting the can, you can stir the metal around (don’t get it on you!). When it looks completely melted you can pour the molten pennies into your sand bucket (carefully), or over your plaster mold (carefully). If you use a mold, you’ll get a zinc copy of whatever your mold was: jewelry, coins, etc. If you work at it, you’ll learn to make fancier and fancier casts. Adult help is welcome to avoid accidents. Once the metal solidifies, you can help cool it faster by dripping water on it from a faucet. Don’t touch it while it’s hot!
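For a sense of scale, here is a rough back-of-envelope sketch of the heat needed to melt those pennies. The penny mass and count are my assumptions, and the zinc property values are approximate handbook figures.

```python
# Sketch: rough heat needed to melt a handful of zinc pennies.
# Handbook-style values; penny mass and count are my assumptions.
PENNY_G = 2.5       # grams, post-1982 US penny (mostly zinc)
CP_ZN = 0.39        # J/(g*K), specific heat of zinc
MELT_ZN = 420.0     # deg C, melting point of zinc
LATENT_ZN = 112.0   # J/g, heat of fusion of zinc
ROOM = 25.0         # deg C, starting temperature

def joules_to_melt(pennies):
    mass = pennies * PENNY_G
    return mass * (CP_ZN * (MELT_ZN - ROOM) + LATENT_ZN)

print(f"{joules_to_melt(8):.0f} J to melt 8 pennies")  # about 5300 J
```

A stove burner delivers a few kilowatts, so a few kilojoules could in principle arrive in seconds; the minutes of waiting reflect how much of the flame’s heat escapes around the can.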

A plaster mold can be made by putting a 50¢ piece at the bottom of a paper cup, pouring plaster over the coin, and waiting for it to dry. Tear off the cup, turn the plaster over, and pull out the coin; you’ve got a one-sided mold, good enough to make a one-sided coin. If you enjoy this, you can learn more about casting on Wikipedia; it’s an endeavor that only costs 4 or 5 cents per try. As a safety note: wear solid leather shoes and cover the floor near the stove with a board. If you drop the metal on the floor you’ll have a permanent burn mark on the floor, and your mother will not be happy. If you drop hot metal on yourself, you’ll have a permanent injury, and you won’t be happy. Older pennies are made of copper and will not melt. Here’s a video of someone pouring a lot of metal into an ant-hill (kills lots of ants, makes a mold of the hill).

It’s often helpful to ask yourself, “What would Dr. Frankenstein do?”

It’s nice to have assistants, friends and adult help in the laboratory when you do science. Even without the castle, it’s what Dr. Frankenstein did.

4) Bringing a dead frog back to life (sort of). Make a high-voltage battery of 45 to 90 V by attaching 5-10 9V batteries in a daisy chain; they will snap together. If you touch both exposed contacts you’ll give yourself a wicked shock. If you touch the electrodes to a newly killed frog, the frog’s legs will kick. This is sort of groovy. It was the inspiration for Dr. Frankenstein (at right), who decided he could bring a person back from the dead with “more power.” Frankenstein’s monster is brought back to life this way, but ends up killing the good doctor. Shocks are sometimes helpful in reanimating people stricken by heart attacks, and many buildings have shockers (defibrillators) for this purpose. But don’t try to bring back the long-dead; by all accounts, the results are less than pleasing. Try dissecting the rest of the frog and guess what each part is (a World Book encyclopedia helps). As I recall, the heart keeps going for a while after it’s out of the frog — spooky.

5) Another version of this shocker is made with a small transformer (1″ square, say, from RadioShack) and a small battery (1.5-6 V). Don’t use the 90 V battery; you’ll kill someone. As a first version of this shocker, strip 1″ of insulation off the ends of some wire, 12″ long say, and attach one end to two paired wires of the transformer (there will usually be a diagram in the box). If the transformer already has some wires coming out, all you have to do is strip more insulation off the ends so 1″ is uninsulated. Take the two paired ends in your hand, holding onto the uninsulated parts, and touch both to the battery for a second or two. Then disconnect them while holding the bare wires; you’ll get a shock. As a nastier version, get a friend to hold the opposite pair of wires by the uninsulated parts, while you hold the insulated parts of your two. Touch your two to the battery and disconnect while holding the insulation; you will see a nice spark, and your friend will get a nice shock. Play with it; different arrangements give more sparks or bigger shocks. Another thing you can do: put your experiment near a radio or TV. The transformer sparks will interfere with most nearby electronics; you can really mess up a computer this way, so keep it far from your computer. This is how wireless radio worked long ago, and how modern warfare will probably go. The atom bomb was detonated with a spark like this.

If you want to do more advanced science, it’s a good idea to learn math. This is important for statistics, for engineering, for quantum mechanics, and can even help with music. Get a few good high school or college books and read them cover to cover. One approach to science is to try to make something cool that sort-of works, and then try to improve it. You then decide how a better version would work, modify your original semi-randomly, and see if you’re going in the right direction. Don’t redesign with only one approach – it may not work. Read whatever you can, but don’t believe all you read. Often books are misleading or wrong, and blogs are worse (I ought to know). When you find mistakes, note them in the margin, and try to explain them. You may find you were right, or that the book was right; either way, it’s a learning experience. If you like, you can write the author and inform him/her of the errors. I find mailed letters are more respectful than e-mails — they show you put in more effort.

Robert Buxbaum, February 20, 2014. Here’s the difference between metals and non-metals, and a periodic table cup that I made and sell. And here’s a difference between science and religion – reproducibility.

Ab Normal Statistics and joke

The normal distribution of observation data looks sort of like a ghost. A distribution that really looks like a ghost is scary.

It’s funny because … the normal distribution curve looks sort-of like a ghost. It’s also funny because it would be possible to imagine data being distributed like the ghost, and most people would be totally clueless as to how to deal with data like that — abnormal statistics. They’d find it scary and would likely try to ignore the problem. When faced with a statistics problem, most people just hope that the data is normal; they then use standard mathematical methods with a calculator or simulation package and hope for the best.

Take the following example: you’re interested in buying a house near a river. You’d like to analyze river flood data to know your risks. How high will the river rise in 100 years, or 1,000? Or perhaps you would like to analyze wind data to know how strong to make a sculpture so it does not blow down. Your first thought is to use the normal-distribution math in your college statistics book. This looks awfully daunting (it doesn’t have to), and may be wrong, but it’s all you’ve got.

The normal distribution graph is considered normal, in part, because it’s fairly common to find that measured data deviates from the average in this way. Also, this distribution can be derived from the mathematics of an idealized view of the world, where any variation derives from multiple small errors around a common norm, and not from some single, giant issue. It’s not clear this is a realistic assumption in most cases, but it is comforting. I’ll show you how to do the common math as it’s normally done, and then how to do it better and quicker with no math at all, and without those assumptions.

Let’s say you want to know the hundred-year maximum flood-height of a river near your house. You don’t want to wait 100 years, so you measure the maximum flood height every year over five years, say, and use statistics. Let’s say you measure 8 feet, 6 feet, 3 feet (a drought year), 5 feet, and 7 feet.

The “normal” approach (pardon the pun) is to take a quick look at the data and see that it is sort-of normal (many people don’t bother). One now takes the average, calculated here as (8+6+3+5+7)/5 = 5.8 feet. About half the time the flood waters should be higher than this (a good researcher would check this; many do not). You now calculate the standard deviation for your data, a measure of the width of the ghost, generally using a spreadsheet. The formula for the standard deviation of a sample is s = √{[(8−5.8)² + (6−5.8)² + (3−5.8)² + (5−5.8)² + (7−5.8)²]/4} ≈ 1.92. The use of 4 in the denominator instead of 5 is called Bessel’s correction – it reflects the fact that a standard deviation is meaningless if there is only one data point.

For normal data, the one-hundred-year maximum height of the river (the 1% maximum) is the average height plus 2.33 times the standard deviation; in this case, 5.8 + 2.33 × 1.92 ≈ 10.3 feet. If your house is any higher than this you should expect few troubles in a century. But is this confidence warranted? You could build on stilts or further from the river, but you don’t want to go too far. How far is too far?
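For those who’d rather let the computer do it, here is the same calculation as a minimal Python sketch; the 2.33 z-value for the 1% level is the only outside input.

```python
# Sketch of the "normal" calculation above, using Python's statistics module.
from statistics import mean, stdev  # stdev uses Bessel's n-1 correction

heights = [8, 6, 3, 5, 7]   # yearly flood maxima, feet
avg = mean(heights)          # 5.8
s = stdev(heights)           # about 1.92
z_1pct = 2.33                # z-value for the 1% (100-year) level
print(f"mean {avg:.1f} ft, stdev {s:.2f} ft, "
      f"100-yr estimate {avg + z_1pct * s:.1f} ft")
```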

So let’s do this better. We can, with less math, through the use of probability paper. As with any good science, we begin with data, not with assumptions such as normality. Arrange the river-height data in a list from highest to lowest (or lowest to highest), and plot the values in this order on your probability paper as shown below. That is, on paper where likelihoods from 0.01% to 99.99% are arranged along the bottom (the x axis), and your other numbers, in this case the river heights, are the y values listed at the left. Graph paper of this sort is sold in university book stores; you can also get jpeg versions on line, but they don’t look as nice.

Probability plot of the maximum river height over 5 years. If the data suggest a straight line, as here, the distribution is reasonably normal. Extrapolating to 1% suggests the 100-year flood height would be 9.5 to 10.2 feet, and that reaching 11 feet is only a 0.01% event. That’s once in 10,000 years, other things being equal.

For the x-axis values of the 5 data points above, I’ve taken the likelihood to be the middle of its percentile band. Since there are 5 data points, each point is taken to represent its own 20-percentile band; the middles appear at 10%, 30%, 50%, etc. I’ve plotted the highest value (8 feet) at the 10% point on the x axis, that being the middle of the upper 20%. I then plotted the second highest (7 feet) at 30%, the middle of the second 20%; the third, 6 ft, at 50%; the fourth at 70%; and the drought-year maximum (3 feet) at 90%. When done, I judge whether a reasonably straight line would describe the data. In this case, a line through the data looks reasonably straight, suggesting a fairly normal distribution of river heights. I notice that, if anything, the heights drop off at the left, suggesting that really high river levels are less likely than normal. The points will also have to drop off at the right, since a negative river height is impossible. Thus my river heights describe a version of the ghost distribution in the cartoon above. This is a welcome finding, since it suggests that really high flood levels are unlikely. If the data were non-normal, curving the other way, we’d want to build our house higher than a normal distribution would suggest.

You can now find the 100-year flood height from the graph above without going through any of the math. Just draw your best line through the data, and look where it crosses the 1% value on your graph (that’s two major lines from the left in the graph above — you may have to expand your view to see the little 1% at top). My extrapolation suggests the hundred-year flood maximum will be somewhere between about 9.5 feet and 10.2 feet, depending on how I choose my line. This prediction is a little lower than we calculated above, and was done graphically, without the need for a spreadsheet or math. What’s more, our prediction is likely more accurate, since we were in a position to evaluate the normality of the data and thus able to fit the extrapolation line accordingly. There are several ways to handle extreme curvature in the line, but all involve fitting the curve some way. Most weather data is curved, e.g. normal against a fractal, I think, and this affects your predictions. You might expect to have an ice age in 10,000 years.
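If you don’t have probability paper handy, the same graphical method can be sketched in a few lines of Python. This is my illustration of the mid-percentile plotting positions described above, not the author’s original graph; a least-squares line through these five points extrapolates to roughly 10.3 feet at the 1% level, close to the hand-drawn range.

```python
# Sketch of the probability-paper method: plot sorted flood maxima against
# normal quantiles at mid-percentile plotting positions, fit a straight
# line, and read off the 1% (100-year) level.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

heights = np.sort([8, 6, 3, 5, 7])[::-1]   # highest to lowest, feet
n = len(heights)
exceed_prob = (np.arange(n) + 0.5) / n      # 10%, 30%, 50%, 70%, 90%
z = stats.norm.ppf(exceed_prob)             # x positions on "probability paper"

slope, intercept = np.polyfit(z, heights, 1)        # best straight line
flood_100yr = slope * stats.norm.ppf(0.01) + intercept
print(f"extrapolated 100-year flood: {flood_100yr:.1f} ft")

plt.plot(z, heights, "o", label="data")
zz = np.linspace(stats.norm.ppf(0.005), stats.norm.ppf(0.995), 100)
plt.plot(zz, slope * zz + intercept, label="fitted line")
plt.xlabel("normal quantile (probability of exceedance)")
plt.ylabel("max river height, ft")
plt.legend()
plt.show()
```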

The standard deviation we calculated above is related to a quality standard called six sigma — something you may have heard of. If we had a lot of parts we were making, for example, we might expect to find that the size deviation varies from a target according to a normal distribution. We call this variation σ, the Greek version of s. If your production is such that the upper spec is 2.33 standard deviations from the norm, 99% of your product will be within spec; good, but not great. If you’ve got six sigmas, there is one-in-a-billion confidence of meeting the spec, other things being equal. Some companies (like Starbucks) aim for this low variation, a six-sigma confidence of being within spec. That is, they aim for total product uniformity in the belief that uniformity is the same as quality. There are several problems with this thinking, in my opinion. The average is rarely an optimum, and you want to have a rational theory for acceptable variation boundaries. Still, uniformity is a popular metric in quality management, and companies that use it are better off than those that do nothing. At REB Research, we like to employ the quality methods of W. Edwards Deming; we assume non-normality and aim for an optimum (that’s subject matter for a further essay). If you want help with statistics, or a quality engineering project, contact us.
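Here is a minimal sketch of how sigma-distance maps to the fraction within a one-sided spec, under the same normal model; the 2.33, 3, and 6 sigma levels are just example inputs.

```python
# Sketch relating sigma-distance to fraction within a one-sided spec,
# using the normal model discussed above (scipy for the normal CDF).
from scipy.stats import norm

for k in (2.33, 3, 6):
    within = norm.cdf(k)   # fraction below an upper spec k sigmas out
    print(f"{k} sigma: {within:.9f} within spec, {1 - within:.2e} out")
# 2.33 sigma -> about 99% in spec; 6 sigma -> about 1e-9 out:
# the one-in-a-billion confidence mentioned above.
```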

I’ve also meant to write about the phrase “other things being equal,” ceteris paribus in Latin. All this math only makes sense so long as the general parameters don’t change much. Your home won’t flood so long as no one builds a new mall upriver from you that adds runoff to the river, and so long as the dam doesn’t break. If these are concerns (and they should be), you still need to use statistics and probability paper, but you will now have to use other data, like the likelihood of malls going up, or of dams breaking. When you input this other data, you will find the probability curve is not normal, but typically has a long tail (when the dam breaks, the water goes up by a lot). That’s outside of standard statistical analysis, but it’s why those hundred-year floods come a lot more often than once in 100 years. I’ve noticed that, even at Starbucks, more than 1 in 1,000,000,000 cups of coffee come out wrong. Even in analyzing a common snafu like this, you still use probability paper, though. It may be “situation normal,” but the distribution curve it describes has an abnormal tail.

by Dr. Robert E. Buxbaum, November 6, 2013. This is my second statistics post/joke, by the way. The first one dealt with bombs on airplanes — well, take a look.

An Aesthetic of Mechanical Strength

Back when I taught materials science to chemical engineers, I used the following poem to teach my aesthetic for the strength target for product design:

The secret to design, as the parson explained, is that the weakest part must withstand the strain. And if that part is to withstand the test, then it must be made as strong as all the rest. (by R. E. Buxbaum, based on “The Wonderful One-hoss Shay,” by Oliver Wendell Holmes, 1858).

My thought was, if my students had no idea what good mechanical design looked like, they’d never be able to do it well. I wanted them to realize that there is always a weakest part of any device or process for every type of failure. Good design accepts this and designs everything else around it. You make sure that, when the device fails, it fails at a part of your choosing, preferably one that you can repair easily and cheaply (a fuse, or a door hinge), and one that doesn’t cause too much mayhem when it fails. Once this failure part is chosen and in place, I taught that the rest should be stronger, but that there is no point in making any other part of that failure chain significantly stronger than the weakest link. Thus, for example, once you’ve decided to use a fuse of a certain amperage, there is no point in making the rest of the wiring take more than 2-3 times the amperage of the fuse.
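As a toy illustration of this weakest-link rule, here is a short sketch that checks a hypothetical circuit’s part ratings against a chosen fuse; the part names and amperages are mine, for illustration only.

```python
# Sketch of the weakest-link rule: given a chosen fuse rating, check that
# every other part is rated above the fuse but not wastefully far above it.
# Part names and ratings are hypothetical examples.
FUSE_AMPS = 15.0
ratings = {"wiring": 30.0, "switch": 20.0, "motor windings": 25.0}

for part, amps in ratings.items():
    if amps <= FUSE_AMPS:
        print(f"{part}: rated {amps} A -- would fail before the fuse!")
    elif amps > 3 * FUSE_AMPS:
        print(f"{part}: rated {amps} A -- overbuilt; cost with no benefit")
    else:
        print(f"{part}: rated {amps} A -- OK (1-3x the fuse rating)")
```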

This is an aesthetic argument, of course, but it’s important for a person to know what good work looks like (to me, and perhaps to the student) — beyond just compliments from the boss or grades from me. Some day, I’ll be gone, and the boss won’t be looking. There are other design issues too: if you don’t know what the failure point is, make a prototype and test it to failure, and if you don’t like what you see, remodel accordingly. If you like the point of failure but decide you really want to make the device stronger or more robust, be aware that this may involve strengthening that part only, or strengthening the entire chain of parts so they are as failure-resistant as that part (the former is cheaper).

I also wanted to teach that there are many failure chains to look out for: many ways that things can go wrong beyond breaking. Check for failure by fire, melting, explosion, smell, shock, rust, and even color change. Color change should not be ignored, BTW; there are many products that people won’t use as soon as they look bad (cars, for example). Make sure that each failure chain has its own known, chosen weak link. In a car, the paint should fade, chip, or peel some (small) time before the metal underneath starts rusting or sagging (at least that’s my aesthetic). And in the DuPont gun-powder mill below, one wall was made weaker so that the walls would blow outward the right way (away from traffic). Be aware that human error is the most common failure mode: design to make things acceptably idiot-proof.

DuPont powder mills had a thinner wall and a stronger wall so that, if there were an explosion, it would blow out ‘safely.’ This mill has a second wall to protect workers. The thinner wall must be strong enough to stand up to wind and rain; the stronger walls should stand up to all likely explosions.

Related to my aesthetic of mechanical strength, I tried to teach an aesthetic of cost, weight, appearance, and green design: choose materials that are cheaper rather than more expensive; use less weight rather than more if both ways work equally well. Use materials that look better if you’ve got the choice, and use recyclable materials. These all derive from the well-known axiom: omit needless stuff. Or, as William of Occam put it, “Entia non sunt multiplicanda sine necessitate.” As an aside, I’ve found that, when engineers use Latin, we look smart: “lingua bona lingua mortua est” (a good language is a dead language). It’s the same with quoting 19th-century poets, BTW: dead 19th-century poets are far better than undead ones, but I digress.

Use of recyclable materials gets you out of lots of problems relative to materials that must be disposed of. E.g., if you use aluminum insulation (recyclable) instead of ceramic fiber, you will have an easier time getting rid of the scrap. As a result, you are not as likely to expose your workers (or yourself) to mesothelioma or similar diseases. You should not have to pay someone to haul away excess or damaged product; a scrapper will oblige, and he may even pay you for it if you have enough. Recycling helps cash flow with decommissioning too, when money is tight. It’s better to find your $1 worth of scrap is now worth $2 than to discover that your $1 worth of garbage now costs $2 to haul away. By the way, most heat loss from a hot surface is by black-body radiation, so aluminum foil may actually work better than ceramics of the same thermal conductivity.
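To see why the shiny surface matters so much for radiative loss, here is a minimal Stefan-Boltzmann sketch; the temperatures and emissivity values are typical handbook-style assumptions of mine, not measurements.

```python
# Sketch: Stefan-Boltzmann radiative loss from a hot surface, comparing a
# low-emissivity aluminum surface to a high-emissivity ceramic one.
SIGMA = 5.67e-8   # W/(m^2 K^4), Stefan-Boltzmann constant

def radiative_loss_w_per_m2(t_surface_k, t_ambient_k, emissivity):
    return emissivity * SIGMA * (t_surface_k**4 - t_ambient_k**4)

hot, ambient = 600.0, 300.0   # K, a hypothetical hot-jacket case
for name, eps in (("polished aluminum", 0.05), ("ceramic fiber", 0.9)):
    q = radiative_loss_w_per_m2(hot, ambient, eps)
    print(f"{name}: {q:.0f} W/m^2")
# At the same temperature, the low-emissivity aluminum surface radiates
# roughly 1/18 as much as the ceramic, whatever the conductivities.
```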

Buildings can be recycled too. Buy them and sell them as needed. Shipping containers make for great lab buildings because they are cheap, strong, and movable. You can sell them off-site when you’re done. We have a shipping-container lab building and a shipping-container storage building — both worth more now than when I bought them. They are also rather attractive, with our advertising on them — attractive according to my design aesthetic. Here’s an insight into why chemical engineers earn more than chemists, and an insight into the difference between mechanical engineering and civil engineering. Here’s an architecture aesthetic. Here’s one about the scientific method.

Robert E. Buxbaum, October 31, 2013