Category Archives: quality

Skilled labor isn’t cheap; cheap labor isn’t skilled

Popular emblem for hard hats in the USA. The original quote is attributed to Sailor Jack, a famous tattoo artist.

The title for this post is a popular emblem on US hard-hats, and was the motto of a famous, WWII-era tattoo artist. It's also at the heart of a divide between the skilled trade unions and the broader labor movement. Skilled laborers expect to be paid more than unskilled, while the labor movement tends to push for uniform pay, with distinctions based only on seniority or courses taken. Managers and customers prefer skilled work to unskilled, and usually don't mind paying the skilled worker more. It's understood that, if the skilled workers are not rewarded, they'll go elsewhere or quit. Management also tends to understand that the skilled laborer is effectively a manager, often more responsible for success than the manager himself or herself. In this environment, a skilled trade union is an advantage: it tends to keep out the incompetent, the addict, and the gold-brick, if only to raise the stature of the rest. It can also help by taking on some of the burden of complaints. In the late 1800s, it was not uncommon for an owner to push for a trade union, like the Knights of Labor or the AFL, but usually just for the skilled trades, for the reasons above.

An unskilled labor union, like the CIO, is a different animal. The unskilled laborer would like the salary and respect of the skilled laborer without having to develop the hard-to-replace skills. Management objects to this, as do the skilled workers. A major problem with unions, as best I can tell, is a socialist bent that lumps the skilled and unskilled worker together, to the disadvantage of the skilled trades.

Also popular. Few workers harbor a fondness for welfare or socialism. Mostly they want to keep their earnings.

Labor union management generally prefers a high minimum wage, and often favors high taxes too as a way of curing societal ills. This causes friction, both in wage negotiation and in political party support. Skilled workers tend to want to be paid more than the unskilled, and generally want to keep the majority of their earnings. As a result, skilled laborers tend to vote Republican, while unskilled workers tend to vote for Democrats. Since there are generally more unskilled workers than skilled, union management tends to favor Democrats. Many union leaders have gone further, to international socialism: they push for high welfare payments with no work requirement, and for aid to the foreign socialist poor. The hard-hats themselves tend to be less than pleased with these socialist pushes.

During the hippie 60s and 70s, the union split turned violent. It was not uncommon for unionized police and construction workers to hurl insults and bricks at anti-war leftists, non-working students, and welfare recipients. Teamster boss Jimmy Hoffa supported Nixon, Vietnam, and the idea that his truckers should keep their high wages at the expense of the unskilled. Rival Teamster boss Frank Fitzsimmons pushed for socialist unity with the non-working of the world, a split that broke the union and cost Hoffa his life in 1975. Eventually the split became moot: the war ended, US factories closed, jobs moved overseas, and even the unskilled laborers and the poor lost.

Skilled workers are, essentially, managers, and like to be treated that way.

The Americans with Disabilities Act is another part of the union split. The act was designed to protect the sick, pregnant, and older worker, but has come to protect the lazy, nasty, and slipshod, as well as the drug addict and the thief. Any worker who is censured for these unfortunate behaviors can claim a disability, and if the claim is upheld, the law requires that the company provide for them. The legal status of the union demands that the union support the worker in his or her claim of disability. In this, the union becomes obligated to the worker, and not to the employer, customer, or craft, something else that skilled workers tend to object to. Skilled workers do not like having their neighbors show them high-priced, badly made products from their assembly line. Citing the ADA doesn't help, nor does it help to know that their union dues support Democrats, welfare, and legislation that takes money from the pocket of anyone who takes pride in good work. We'll have to hope this split in the union pans out better than the one in 1860.

Robert Buxbaum, June 5, 2016. I'm running for water commissioner. I'd like to see my skilled sewer workers rewarded for their work and skill. Currently, experienced workers get only $18/hour, and that's too little for their expertise. If they left, they'd be hard to replace, and the city would likely fall to typhus or the plague.

High minimum wages hurt the poor; try a negative tax

It is generally thought (correctly, I suspect) that welfare is a poor way to help the poor, as it robs them of the dignity of work. Something like welfare is needed to keep the poor from starving, and the something that's generally chosen is a living wage: a minimum wage set high enough that even a minimally skilled worker should be able to support a family of four. This may be better than welfare, but I'd like to propose something better still, and a way to pay for it: a negative tax.

I suspect that a high minimum wage hurts the poor and middle class in a few ways. For one, by flattening the wage structure, it hurts the ego of higher-skilled workers and reduces their incentive to improve. A senior worker should make more than an unskilled beginner, but a high minimum wage dampens this. What's more, a high minimum wage cuts the lower rungs off of the employment ladder, making it harder for young folks and unskilled folks to be productively employed. There may be some worthwhile minimum, but not everyone lives independently (or should), and not every job deserves to support a family of four, if only because not every unskilled worker is supporting a family of four. Many minimum wage earners are living at home or are heads of double-income couples, and only a few have the skills to justify the wage on a value-added basis. A high minimum wage is thus needlessly costly for many workers. People accept the cost because it's borne by the company (and companies are seen as evil). But passing the burden has limits, and a high minimum wage creates high unemployment in low-skill areas, as employers are reluctant to pay a lot for low-skill work. In Detroit before bankruptcy, the living wage was set so high that companies could not compete. Many went bankrupt and the others hired so selectively that the unskilled were basically unemployable. Even the city couldn't pay the wage and its bills.

Even with the highest minimum wage, there is always a need for welfare, as some workers will be unemployable — because of disability, because of lack of skill, or from an ingrained desire to not work. The punishments a community can mete out are limited, and sooner or later some communities stop working and stop learning as they see no advantage.

The difficulty of taking care of the genuinely needy and disabled, without also rewarding the lazy and unskilled, has gotten even some communists to reconsider wealth as a motivator. The Chinese have come to realize that workers work better if there is a financial reward to experience and skill at all levels. But that still leaves the question of who should pay to help those in need, and how. Currently the welfare system only helps the disabled and the "looking" unemployed, but I suspect it should do more, replacing some of the burden that our minimum wage laws place on the employers of unskilled labor. I suspect the payment formula should be such that the worker ends up richer for every additional hour of work. That is, each dollar earned by a welfare recipient should result in less than one dollar of reduction in the welfare payment. Welfare would thus be set up as a negative tax that continues to all levels of salary and need, so that there is no sudden jump when the worker starts having to pay taxes. The current and proposed tax/welfare structures are shown below:

Currently (black) someone’s welfare check decreases by $1 for each dollar earned, then he enters a stage of no tax — one keeps all he earns, and then a graduated tax. I propose a system of negative tax (red) so each dollar earned adds real income.

The system I propose (red line) would treat someone who is incapacitated identically to someone who decided not to work, or to work at a job that pays $0/hr (e.g. working for a church). The current system treats them differently, but there seems to be so much law, case-work, and phony doctor reporting involved in getting around it all that it hardly seems worth it. I'd use money as the sole motivator (all theoretical, and it may not work, but hang with me for now).

In the proposed system, a person who does not work would get some minimal income based on family need (there is still some need for case workers). If he is employed, the employer would not have to pay minimum wage (or there would be a low minimum wage, $3/hr), but the employer would have to report the income and deduct, for every dollar earned, some fraction in tax, say 40¢. The net result would be that the amount of government subsidy received by the worker (disabled or not) would decrease by 40¢ for every dollar earned. At some salary, the worker would discover that he or she was paying net tax and no longer receiving anything from the state. With this system, there is always an incentive to work more hours or develop more skills. If the minimum wage were removed too, there would be no penalty to hiring a completely unskilled worker.
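To make the arithmetic concrete, here is a minimal sketch of the idea in Python. The base benefit and the 40% rate are illustrative numbers of my choosing (the post only fixes the principle that less than a dollar of subsidy is lost per dollar earned), and the "current system" is simplified to a dollar-for-dollar clawback with ordinary taxes ignored.

```python
# Minimal sketch of the negative-tax idea above (illustrative numbers only).
# Assumptions: a base benefit of $12,000/year for a non-worker, and a flat
# 40% "negative tax" rate, so each dollar earned keeps 60 cents of real value.

BASE_BENEFIT = 12000.0   # hypothetical yearly payment to someone earning $0
TAX_RATE = 0.40          # each dollar earned reduces the subsidy by 40 cents

def net_income_proposed(earnings):
    """Proposed system: the subsidy shrinks gradually; work always pays."""
    return BASE_BENEFIT + (1.0 - TAX_RATE) * earnings

def net_income_current(earnings):
    """Simplified current system: welfare drops $1 for every $1 earned
    until the benefit is gone, then earnings are kept (taxes ignored)."""
    return max(BASE_BENEFIT, earnings)

for e in (0, 5000, 12000, 20000, 40000):
    print(f"earnings ${e:>6}: current ${net_income_current(e):>8.0f}, "
          f"proposed ${net_income_proposed(e):>8.0f}")
```

In the simplified current system, someone earning $12,000 ends up no richer than someone earning nothing; in the proposed system, every extra dollar earned adds 60¢ of real income, which is the whole point of the red line above.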

At this point you may ask where the extra money will come from. In the long run, I hope the benefit comes from the reduced welfare rolls, but in the short term, let me suggest tariffs. Tariffs raise revenue and promote on-shore production. Up until 1900 or so, they were the main source of revenue for the USA. As an experiment, to see if this system works, it could be applied to enterprise zones, e.g. in Detroit.

R. E. Buxbaum, June 27, 2014. I worked out the math for this while daydreaming in an economics lecture. It strikes me as bizarre, by the way, that you can contract with someone for barter, e.g. to help you move for a pizza, but you can't contract for less than the minimum wage, $7.45/hr; if you hire the worker for less you can go to jail. In Canada they have something even more bizarre, equal wages for equal skills: a cook and a manager must earn the same, independent of how well the cook cooks. No wonder violent crime is higher in Canada.

 

Dada, or it’s hard to look cool sucking on a carrot.

When it's done right, Dada art is cool. It's not confusing or preachy; it's not out there, or sloppy; just cool. And today I found the most wonderful Dada piece: "Attention", by Gabriel Belladonna, shown below from "deviant art" (sorry about the water-mark).

At first glance it's an advertisement against smoking, drinking, and eating sweets. The smoker has blackened lungs, the drinker has an enlarged liver, and the eater of sweets a diseased stomach. But something here isn't right; the sinners are happy and young. These things are clearly bad for you, but they're enjoyable too, and "cool": smoking is a lot cooler than sucking on a carrot.

Dada at its best: "Attention" by Dadaist Gabriel (Mio) Belladonna, 2012; image from deviant art. If I were to choose the title it would be "But it's hard to look cool sucking on a carrot."

At its best, Dada turns advertising and art on its head; it uses the imagery of advertising to show the shallowness of that clearly slanted medium, or uses art-museum settings to show the narrow definition of what we've come to call "art". In the above you see the balance between the reality of life and the mind control of advertising.

Marcel Duchamp's fountain and "Manneken Pis." Similar idea; the Manneken is better executed, IMHO.

Any mention of Dada should also, I suppose, mention Duchamp's fountain (at right, signed fancifully by R. Mutt). In 2004, fountain was voted "the most influential artwork of the 20th century" by a panel of artists and art historians. The basic idea was to show the slight difference between art and not-art (to be something, there has to be a non-something, as in this joke). Beyond this, the idea would be the same as for the Manneken Pis sculpture in Brussels. Duchamp's was done with a lot less work, just by signing a "found object." He submitted the work for exhibition in 1917, but it was rejected as not being art, proving, I guess, the point. Fountain is related to man: his life, needs, and vain ambitions; it's sort-of beautiful, so why ain't it art? (It has something to do with skill, I'd say.)

Duchamp designed two major surrealist exhibitions; it's a similar approach, but surrealism typically employs more skill and humor than Dada, with less shock. Below is another famous work of Dada, Oppenheim's fur-lined tea-cup (Breakfast in Fur; see it at the Museum of Modern Art in NYC), compared to a wonderful (and in my mind similar) surreal work, "Ruby Lips" by Dali. Oppenheim made the tea-cup and spoon disgusting by making them out of a richer material, fur. That's really cool, and sort-of shocking, even today.

Meret Oppenheim’s fur tea-cup (Breakfast in fur) and Dali’s ruby lips; the same idea (I think); dada vs surreal.

Dali's "Ruby Lips" brooch took more skill than gluing fur to a cup and spoon; that adds to the humor, I'd say, but takes from the shock. It's made from real rubies and pearls: hard materials for something that should be soft. It's sort of disgusting this way, and the message is more or less the same as Oppenheim's, I'd say, but the message gets a little lost in the literal joke (pearly teeth, ruby lips…). I could imagine someone wearing Dali's brooch, but no one would use the fur-lined cup.

There is a lot of bad Dada too, unfortunately, and it tends to be awful: incomprehensible, trite, or advertising. An unfortunate tendency is to collect some found pieces of garbage and set them out in an attempt to scandalize the art world, or put down "the man" for his closed mindset. But that's fountain, and it's been done. A key way to tell if it's good Dada: is it cool; is it something that makes you say "Wow." Christo's surrounded islands certainly have the wow-cool factor, IMHO.

Christo’s surrounded Islands: Islands near Miami Beach wrapped in pink (fuchsia) plastic.

A nice thing about Christo is that he takes his work down two weeks or so after he makes the sculptures. Thus, the wow factor of his work never has a chance to go stale. Sorry to say, most Dada stays around. Duchamp's "fountain" sits in a museum and has grown stale, at least to me and Duchamp. What was scandalous and shocking in 1917 is passé and boring in 2014. The decline in shock is somewhat less for "Breakfast in Fur," I think because the work is better crafted, a benefit I see in "Attention" too; skill matters.

Paris street art; I don't know the artist, but it's just cool.

At the height of his success, Duchamp left art for some 30 years and played chess. He became a strong chess master (life is as strange as art) and played for France in international tournaments. He later came back to art and did one last, final piece, a very fine one, seen only through a peephole. Here are some further thoughts on good vs bad modern art, on surrealism, on the aesthetic of strength in engineering (what materials to use; how strong should it be), and on architecture humor.

Robert E. Buxbaum. April 4-7, 2014. Here is a link to my attempt at good Dada: Kilroy with eyes that follow you, and at right some Paris street art that I consider good dada too. As far as what the word “dada” means, I translate it as “cool,” “wow,” “gnarly,” or “go go.” It’s dada, man, y’ dig?

Near-Poisson statistics: how many police – firemen for a small city?

In a previous post, I dealt with the nearly-normal statistics of common things, like river crests, and explained why 100 year floods come more often than once every hundred years. As is not uncommon, the data was sort-of like a normal distribution, but deviated at the tail (the fantastic tail of the abnormal distribution). But now I’d like to present my take on a sort of statistics that (I think) should be used for the common problem of uncommon events: car crashes, fires, epidemics, wars…

Normally the mathematics used for these processes is Poisson statistics, and occasionally exponential statistics. I think these approaches lead to incorrect conclusions when applied to real-world cases of interest, e.g. choosing the size of a police force or fire department of a small town that rarely sees any crime or fire. This is relevant to Oak Park Michigan (where I live). I’ll show you how it’s treated by Poisson, and will then suggest a simpler way that’s more relevant.

First, consider an idealized version of Oak Park, Michigan (a semi-true version until the 1980s): the town had a small police department and a small fire department that saw only occasional crimes or fires, all of which required only 2 or 4 people respectively. Let's imagine that the likelihood of having one small fire at a given time is x = 5%, and that of having a violent crime is y = 5% (it was 6% in 2011). A police department will need to have 2 policemen on call at all times, but will want 4 for the 0.25% chance that there are two simultaneous crimes (.05 x .05 = .0025); the fire department will want 8 souls on call at all times for the same reason. Either department will spend the other 95% of its time on training, paperwork, investigations of less-immediate cases, care of equipment, and visiting schools, but this number on call is needed for immediate response. As there are 8760 hours per year and police and fire workers only work about 2000 hours, you'll need at least 4.4 times this many officers. We'll add some more for administration and sick-day relief, and predict a total staff of 20 police and 40 firemen. This is, more or less, what it was in the 1980s.

If each fire or violent crime took 3 hours (1/8 of a day), you’ll find that the entire on-call staff was busy 7.3 times per year (8x365x.0025 = 7.3), or a bit more since there is likely a seasonal effect, and since fires and violent crimes don’t fall into neat time slots. Having 3 fires or violent crimes simultaneously was very rare — and for those rare times, you could call on nearby communities, or do triage.
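For those who like to check the arithmetic, here is a small Python sketch of the staffing estimate. It assumes the same 5% per-3-hour-slot likelihoods and the 2-police / 4-firefighter event sizes from the text; the staffing multipliers are the rough ones used above.

```python
# Quick check of the numbers above: a 5% chance of a fire and a 5% chance of
# a violent crime in any 3-hour slot (8 slots/day, 2920 slots/year).

P_FIRE, P_CRIME = 0.05, 0.05
SLOTS_PER_YEAR = 8 * 365

p_two_crimes = P_CRIME ** 2          # 0.0025 -> need 4 police on call
p_two_fires  = P_FIRE ** 2           # 0.0025 -> need 8 firefighters on call
busy_per_year = SLOTS_PER_YEAR * p_two_crimes

print(f"P(two simultaneous crimes) = {p_two_crimes:.4%}")
print(f"Slots/year with a doubled event ~ {busy_per_year:.1f}")   # ~7.3

# Staffing: on-call posts x (8760 hours/year / 2000 hours per worker),
# plus some extra for administration and sick-day relief.
coverage = 8760 / 2000               # ~4.4 shifts to fill per on-call post
print(f"Police   ~ {4 * coverage:.0f}+  -> ~20 with admin and relief")
print(f"Firemen  ~ {8 * coverage:.0f}+  -> ~40 with admin and relief")
```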

In response to austerity (towns always overspend in the good times, and come up short later), Oak Park realized it could use fewer employees if it combined the police and fire departments into an entity renamed "Public Safety." With 45-55 employees assigned to combined police/fire duty, they'd still be able to handle the few violent crimes and fires. The sum of these events occurs 10% of the time, and we can apply the sort of statistics above to suggest that about 91% of the time there will be neither a fire nor a violent crime, and about 9% of the time there will be one or more fires or violent crimes (there is a 5% chance for each, but also a chance that 2 happen simultaneously). At least two events will occur 0.9% of the time (2 fires, 2 crimes, or one of each), and there will be 3 or more events .09% of the time, or about twice per year. The combined force allowed fewer responders since it was only rarely that 4 events happened simultaneously, and some of those were 4 crimes, or 3 crimes and a fire, events that needed fewer responders. Your only real worry was when you had 3 fires, something that should happen every 3 years or so, an acceptable risk at the time.

Before going to what caused this model of police and fire service to break down as Oak Park got bigger, I should explain Poisson statistics, exponential statistics, and power law/fractal statistics. The only type of statistics taught for dealing with crime like this is Poisson statistics, a type that works well when the events happen so suddenly and pass so briefly that we can claim to be interested only in how often we will see multiples of them in a period of time. The Poisson distribution formula is P = r^k e^(-r) / k!, where P is the probability of having some number of events, r is the average number of events per period (the total number of events divided by the total number of periods), and k is the number of events we are interested in.

Using the data above for a period-time of 3 hours, we can say that r = 0.1, and the likelihoods of zero, one, or two events beginning in a 3-hour period are 90.4%, 9.04%, and 0.45%. These numbers are reasonable in terms of when events happen, but they are irrelevant to the problem anyone is really interested in: what resources are needed to come to the aid of the victims. That's the problem with Poisson statistics: it treats something that no one cares about (when the events start), and under-predicts the important things, like how often you'll have multiple events in progress. For 4 events, Poisson statistics predicts it happens only .00037% of the time: true enough, but irrelevant in terms of how often multiple teams are needed out on the job. We need four teams whether the 4 events began in a single 3-hour period or in close succession in two adjoining periods. The events take time to deal with, and the times overlap.
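Here is a short sketch of that Poisson calculation; it just evaluates the formula above at r = 0.1 for a few values of k.

```python
from math import exp, factorial

def poisson(k, r):
    """Poisson probability of exactly k events when the mean per period is r."""
    return r**k * exp(-r) / factorial(k)

r = 0.1   # average events per 3-hour period, from the discussion above
for k in range(5):
    print(f"P({k} events begin in a period) = {poisson(k, r):.5%}")
# P(0) ~ 90.48%, P(1) ~ 9.05%, P(2) ~ 0.45%, P(4) ~ 0.00038%
```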

The way I dealt with these events, above, suggests a power law approach. In this case, each likelihood was 1/10 the previous, and the probability P = 0.9 x 10^(-k). This is called power law statistics. I've never seen it taught, though it appears very briefly in Wikipedia. Those who like math can re-write the above relation as log10 P = log10(0.9) - k.

One can generalize the above so that, for example, the decay ratio can be 1/8 and not 1/10 (that is, the chance of having k+1 events is 1/8 that of having k events). In this case, we could say that P = 7/8 x 8^(-k), or more generally that log10 P = log10 A - kβ. Here k is the number of teams required at any time, β is a free variable, and A = 1 - 10^(-β), because the sum of all probabilities has to equal 100%.
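The same relation is easy to check numerically. The sketch below evaluates P(k) = A·10^(-βk) with A = 1 - 10^(-β) for decay ratios of 1/10 and 1/8, and confirms that the probabilities sum to (essentially) 100%.

```python
from math import log10

def power_law_p(k, beta):
    """P(k) = A * 10**(-beta*k), with A = 1 - 10**(-beta) so the sum over k is 1."""
    A = 1.0 - 10.0 ** (-beta)
    return A * 10.0 ** (-beta * k)

for beta in (1.0, log10(8)):          # decay ratios of 1/10 and 1/8
    probs = [power_law_p(k, beta) for k in range(4)]
    total = sum(power_law_p(k, beta) for k in range(60))
    print(f"beta = {beta:.3f}:", [f"{p:.4f}" for p in probs], f"(sum ~ {total:.4f})")
```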

In college math, when behaviors like this appear, they are incorrectly translated into differential form to create "exponential statistics." One begins by saying ∂P/∂k = -βP, where β = .9 as before, or remains some free-floating term. Everything looks fine until we integrate and set the total to 100%. We find that P = (1/λ)e^(-kλ) for k ≥ 0. This looks the same as before, except that the pre-exponential always comes out wrong: in the above, the chance of having 0 events turns out to be 111%. Exponential statistics has the advantage (or disadvantage) that we find a non-zero possibility of having 1/100 of a fire, or 3.14159 crimes, at a given time. We assign excessive likelihoods to fractional events and end up predicting artificially low likelihoods for the discrete events we are interested in. There is no way around this except to step away from a calculus that assumes continuity in a world where there is none. Discrete math is better than calculus here.

I now wish to generalize the power law statistics to something similar but more robust. I'll call my development fractal statistics (there's already a section called fractal statistics on Wikipedia, but it's really power-law statistics; mine will be different). Fractals were championed by Benoit B. Mandelbrot (whose middle initial, according to the old joke, stood for Benoit B. Mandelbrot). Many random processes look fractal, e.g. the stock market. Before going there, I'd like to recall that the motivation for all this is figuring out how many people to hire for a police/fire force; we are not interested in any other irrelevant factoid, like how many calls of a certain type come in during a period of time.

To choose the size of the force, let's estimate how many times per year some number of people are needed simultaneously, now that the city has bigger buildings and is seeing a few larger fires and crimes. Let's assume that the larger fires and crimes occur only .05% of the time but might require 15 officers or more. Being prepared for even one event of this size will require expanding the force to about 80 men, 50% more than we have today, but we find that this expansion isn't enough to cover the 0.0025% of the time when we will have two such major events simultaneously. That would require a 160-man fire-squad, and we still could not deal with two major fires and a simultaneous assault, or with a strike, or a lot of people who take sick at the same time.

To treat this situation mathematically, we'll say that the number of times per year that a certain number of people are needed relates to the number of people through a simple modification of the power law statistics. Thus: log10 N = A - βθ, where A and β are constants, N is the number of times per year that some number of officers is needed, and θ is the number of officers needed. To solve for the constants, plot the experimental values on a semi-log scale, and find the best straight line: -β is the slope and A is the intercept. If the line is really straight, you are now done, and I would say that the fractal order is 1. But from the above discussion, I don't expect this line to be straight. Rather, I expect it to curve upward at high θ: there will be a tail where you require a higher number of officers. One might be tempted to fix this by adding a higher-order term in θ, but that will cause problems at very high θ. Thus, I'd suggest a fractal fix.

My fractal modification of the equation above is the following: log10 N = A - βθ^w, where A and β are similar to the power law coefficients and w is the fractal order of the decay, a coefficient that I expect to be slightly less than 1. To solve for the coefficients, pick a value of w, and find the best fits for A and β as before. The right value of w is the one that results in the straightest line fit. The equation above does not look quite like anything I've seen, or anything like the one shown in Wikipedia under the heading of fractal statistics, but I believe it to be correct, or at least useful.
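As a sketch of that fitting procedure, the Python below (assuming numpy is available) grid-searches w and keeps the value that gives the straightest line of log10(N) against θ^w. The call-out numbers are invented, purely to illustrate the mechanics; real numbers would come from the department's logs.

```python
import numpy as np

# Hypothetical call-out data: theta = officers needed at once,
# N = times per year that many were needed.  Invented numbers, chosen only
# to show the fitting procedure described above.
theta = np.array([2, 4, 6, 8, 10, 15, 20])
N     = np.array([300., 40., 6., 1.5, 0.6, 0.15, 0.06])

best = None
for w in np.arange(0.5, 1.01, 0.01):              # candidate fractal orders
    x = theta ** w
    (slope, intercept), res, *_ = np.polyfit(x, np.log10(N), 1, full=True)
    ss_res = res[0] if len(res) else 0.0          # sum of squared residuals
    if best is None or ss_res < best[0]:
        best = (ss_res, w, -slope, intercept)     # beta = -slope, A = intercept

ss_res, w, beta, A = best
print(f"fractal order w ~ {w:.2f}, beta ~ {beta:.3f}, A ~ {A:.2f}")
# The straightest fit (smallest residual) picks the fractal order w; w = 1
# recovers the simple power-law line log10(N) = A - beta*theta.
```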

To treat this politically is more difficult than treating it mathematically. I suspect we will have to combine our police and fire departments with those of surrounding towns, and this will likely require our city to revert to a pure police department and a pure fire department. We can't expect other cities' specialists to work with our generalists particularly well. It may also mean payments to other cities, plus (perhaps) standardizing salaries and staffing. This should save money for Oak Park and should provide better service, as specialists tend to do their jobs better than generalists (they also tend to be safer). But the change goes against the desire (need) of our local politicians to hand out favors of money and jobs to their friends. Keeping a non-specialized force costs lives as well as money, but that doesn't mean we're likely to change soon.

Robert E. Buxbaum  December 6, 2013. My two previous posts are on how to climb a ladder safely, and on the relationship between mustaches in WWII: mustache men do things, and those with similar mustache styles get along best.

Ab Normal Statistics and joke

The normal distribution of observation data looks sort of like a ghost. A distribution that really looks like a ghost is scary.

It's funny because … the normal distribution curve looks sort-of like a ghost. It's also funny because it would be possible to imagine data being distributed like the ghost, and most people would be totally clueless as to how to deal with data like that: abnormal statistics. They'd find it scary and would likely try to ignore the problem. When faced with a statistics problem, most people just hope that the data is normal; they then use standard mathematical methods with a calculator or simulation package and hope for the best.

Take the following example: you're interested in buying a house near a river, and you'd like to analyze river flood data to know your risks. How high will the river rise in 100 years, or 1,000? Or perhaps you would like to analyze wind data to know how strong to make a sculpture so it does not blow down. Your first thought is to use the normal distribution math in your college statistics book. This looks awfully daunting (it doesn't have to), and may be wrong, but it's all you've got.

The normal distribution graph is considered normal, in part, because it’s fairly common to find that measured data deviates from the average in this way. Also, this distribution can be derived from the mathematics of an idealized view of the world, where any variety derives from multiple small errors around a common norm, and not from some single, giant issue. It’s not clear this is a realistic assumption in most cases, but it is comforting. I’ll show you how to do the common math as it’s normally done, and then how to do it better and quicker with no math at all, and without those assumptions.

Let's say you want to know the hundred-year maximum flood-height of a river near your house. You don't want to wait 100 years, so you measure the maximum flood height every year over five years, say, and use statistics. Let's say you measure 8 feet, 6 feet, 3 feet (a drought year), 5 feet, and 7 feet.

The "normal" approach (pardon the pun) is to take a quick look at the data and see that it is sort-of normal (many people don't bother). One now takes the average, calculated here as (8+6+3+5+7)/5 = 5.8 feet. About half the time the flood waters should be higher than this (a good researcher would check this; many do not). You now calculate the standard deviation for your data, a measure of the width of the ghost, generally using a spreadsheet. The formula for the standard deviation of a sample is s = √{[(8-5.8)² + (6-5.8)² + (3-5.8)² + (5-5.8)² + (7-5.8)²]/4} = 1.92. The use of 4 in the denominator here, instead of 5, is called Bessel's correction; it reflects the fact that a standard deviation is meaningless if there is only one data point.

For normal data, the one hundred year maximum height of the river (the 1% maximum) is the average height plus 2.2 times the deviation; in this case, 5.8 + 2.2 x 1.92 = 10.0 feet. If your house is any higher than this you should expect few troubles in a century. But is this confidence warranted? You could build on stilts or further from the river, but you don’t want to go too far. How far is too far?
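Here is that same calculation as a few lines of Python, using Bessel's n-1 denominator and the 2.2 multiplier from the text.

```python
from math import sqrt

heights = [8, 6, 3, 5, 7]                   # yearly flood maxima, in feet
n = len(heights)
mean = sum(heights) / n                     # 5.8 ft
s = sqrt(sum((h - mean) ** 2 for h in heights) / (n - 1))   # Bessel: ~1.92 ft

flood_100yr = mean + 2.2 * s                # ~10.0 ft, using the 2.2 factor above
print(f"mean = {mean:.1f} ft, s = {s:.2f} ft, 100-yr estimate = {flood_100yr:.1f} ft")
```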

So let's do this better. We can, with less math, through the use of probability paper. As with any good science, we begin with data, not with assumptions like normality. Arrange the river-height data in a list from highest to lowest (or lowest to highest), and plot the values in this order on your probability paper as shown below. That is, on paper where likelihoods from .01% to 99.99% are arranged along the bottom (the x axis), and your other numbers, in this case the river heights, are the y values listed at the left. Graph paper of this sort is sold in university book stores; you can also get jpeg versions online, but they don't look as nice.

Probability plot of the maximum river height over 5 years. If the data suggests a straight line, like here, the data is reasonably normal. Extrapolating suggests the 100-year flood height would be 9.5 to 10.2 feet, and that it is 99.99% unlikely to reach 11 feet. That's once in 10,000 years, other things being equal.

For the x-axis values of the 5 data points above, I've taken the likelihood to be the middle of its percentile. Since there are 5 data points, each point is taken to represent its own 20 percentile; the middles appear at 10%, 30%, 50%, etc. I've plotted the highest value (8 feet) at the 10% point on the x axis, that being the middle of the upper 20%. I then plotted the second highest (7 feet) at 30%, the middle of the second 20%; the third, 6 feet, at 50%; the fourth at 70%; and the drought-year maximum (3 feet) at 90%. When done, I judge whether a reasonably straight line would describe the data. In this case, a line through the data looks reasonably straight, suggesting a fairly normal distribution of river heights. I notice that, if anything, the heights drop off at the left, suggesting that really high river levels are less likely than normal. The points will also have to drop off at the right, since a negative river height is impossible. Thus my river heights describe a version of the ghost distribution in the cartoon above. This is a welcome finding, since it suggests that really high flood levels are unlikely. If the data were non-normal, curving the other way, we'd want to build our house higher than a normal distribution would suggest.

You can now find the 100-year flood height from the graph above without going through any of the math. Just draw your best line through the data, and look where it crosses the 1% value on your graph (that's two major lines from the left in the graph above; you may have to expand your view to see the little 1% at top). My extrapolation suggests the hundred-year flood maximum will be somewhere between about 9.5 feet and 10.2 feet, depending on how I choose my line. This prediction is a little lower than we calculated above, and was done graphically, without the need for a spreadsheet or math. What's more, our prediction is more accurate, since we were in a position to evaluate the normality of the data and thus able to fit the extrapolation line accordingly. There are several ways to handle extreme curvature in the line, but all involve fitting the curve some way. Most weather data is curved, e.g. normal against a fractal, I think, and this affects your predictions. You might expect to have an ice age in 10,000 years.
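The graphical method can also be done numerically, if you prefer a script to paper. The sketch below (assuming numpy and scipy are available) places each point at the middle of its percentile, converts those positions to normal quantiles (which is what the probability-paper axis does), fits a straight line, and extrapolates to the 1% point.

```python
import numpy as np
from scipy.stats import norm

heights = sorted([8, 6, 3, 5, 7], reverse=True)     # highest to lowest
n = len(heights)
# Mid-percentile plotting positions: 10%, 30%, 50%, 70%, 90%
positions = [(i + 0.5) / n for i in range(n)]

# Probability paper plots against the normal quantile (z) of each position;
# a straight line there means the data is roughly normal.
z = norm.ppf(positions)
slope, intercept = np.polyfit(z, heights, 1)

# Extrapolate the fitted line to the 1% point (the 100-year flood)
flood_100yr = intercept + slope * norm.ppf(0.01)
print(f"graphical 100-yr estimate ~ {flood_100yr:.1f} ft")
# Compare with the hand-drawn 9.5 to 10.2 ft range read off the graph above.
```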

The standard deviation we calculated above is related to a quality standard called six sigma, something you may have heard of. If we had a lot of parts we were making, for example, we might expect to find that the size deviation varies from a target according to a normal distribution. We call this variation σ, the Greek version of s. If your production is such that the upper spec is 2.2 standard deviations from the norm, 99% of your product will be within spec: good, but not great. If you've got six sigmas, there is roughly one-in-a-billion confidence of meeting the spec, other things being equal. Some companies (like Starbucks) aim for this low variation, a six sigma confidence of being within spec. That is, they aim for total product uniformity in the belief that uniformity is the same as quality. There are several problems with this thinking, in my opinion. The average is rarely an optimum, and you want to have a rational theory for acceptable variation boundaries. Still, uniformity is a popular metric in quality management, and companies that use it are better off than those that do nothing. At REB Research, we like to employ the quality methods of W. Edwards Deming; we assume non-normality and aim for an optimum (that's subject matter for a further essay). If you want help with statistics, or a quality engineering project, contact us.

I've also been meaning to write about the phrase "other things being equal", ceteris paribus in Latin. All this math only makes sense so long as the general parameters don't change much. Your home won't flood so long as they don't build a new mall up-river from you with runoff into the river, and so long as the dam doesn't break. If these are concerns (and they should be), you still need to use statistics and probability paper, but you will now have to use other data, like the likelihood of malls going up, or of dams breaking. When you input this other data, you will find the probability curve is not normal, but typically has a long tail (when the dam breaks, the water goes up by a lot). That's outside of standard statistical analysis, but it's why those hundred-year floods come a lot more often than once in 100 years. I've noticed that, even at Starbucks, more than 1/1,000,000,000 cups of coffee come out wrong. Even in analyzing a common snafu like this, you still use probability paper, though. It may be "situation normal", but the distribution curve it describes has an abnormal tail.

by Dr. Robert E. Buxbaum, November 6, 2013. This is my second statistics post/ joke, by the way. The first one dealt with bombs on airplanes — well, take a look.

An Aesthetic of Mechanical Strength

Back when I taught materials science to chemical engineers, I used the following poem to teach my aesthetic for the strength target for product design:

The secret to design, as the parson explained, is that the weakest part must withstand the strain. And if that part is to withstand the test, then it must be made as strong as all the rest. (by R.E. Buxbaum, based on "The Wonderful One-hoss Shay," by Oliver Wendell Holmes, 1858).

My thought was, if my students had no idea what good mechanical design looked like, they'd never be able to do it well. I wanted them to realize that there is always a weakest part of any device or process, for every type of failure. Good design accepts this and designs everything else around it. You make sure that the device will fail at a part of your choosing, preferably one that you can repair easily and cheaply (a fuse, or a door hinge), and one that doesn't cause too much mayhem when it fails. Once this failure part is chosen and in place, I taught that the rest should be stronger, but that there is no point in making any other part of that failure chain significantly stronger than the weakest link. Thus, for example, once you've decided to use a fuse of a certain amperage, there is no point in making the rest of the wiring take more than 2-3 times the amperage of the fuse.

This is an aesthetic argument, of course, but it's important for a person to know what good work looks like (to me, and perhaps to the student), beyond compliments from the boss or grades from me. Some day I'll be gone, and the boss won't be looking. There are other design issues too: if you don't know what the failure point is, make a prototype and test it to failure, and if you don't like what you see, remodel accordingly. If you like the point of failure but decide you really want to make the device stronger or more robust, be aware that this may involve strengthening that part only, or strengthening the entire chain of parts so they are as failure-resistant as this part (the former is cheaper).

I also wanted to teach that there are many failure chains to look out for: many ways that things can go wrong beyond breaking. Check for failure by fire, melting, explosion, smell, shock, rust, and even color change. Color change should not be ignored, BTW; there are many products that people won't use as soon as they look bad (cars, for example). Make sure that each failure chain has its own known, chosen weak link. In a car, the paint should fade, chip, or peel some (small) time before the metal underneath starts rusting or sagging (at least that's my aesthetic). And in the DuPont gunpowder mill below, one wall is weaker so that, in an explosion, the walls blow outward the right way (away from traffic). Be aware that human error is the most common failure mode: design to make things acceptably idiot-proof.

DuPont powder mills had a thinner wall and a stronger wall so that, if there were an explosion, it would blow out 'safely,' toward the river. This mill has a second wall to protect workers. The thinner wall should be barely strong enough to stand up to wind and rain; the stronger walls should stand up to all likely explosions.

Related to my aesthetic of mechanical strength, I tried to teach an aesthetic of cost, weight, appearance, and green: choose materials that are cheaper rather than more expensive; use less weight rather than more if both ways work equally well; use materials that look better if you've got the choice; and use recyclable materials. These all derive from the well-known axiom: omit needless stuff. Or, as William of Occam put it, "Entia non sunt multiplicanda sine necessitate." As an aside, I've found that when engineers use Latin, we look smart: "lingua bona lingua mortua est" (a good language is a dead language). It's the same with quoting 19th-century poets, BTW: dead 19th-century poets are far better than undead ones, but I digress.

Use of recyclable materials gets you out of lots of problems relative to materials that must be disposed of. E.g., if you use aluminum insulation (recyclable) instead of ceramic fiber, you will have an easier time getting rid of the scrap. As a result, you are not as likely to expose your workers (or yourself) to mesothelioma or a similar disease. You should not have to pay someone to haul away excess or damaged product; a scrapper will oblige, and he may even pay you for it if you have enough. Recycling helps cash flow with decommissioning too, when money is tight. It's better to find your $1 worth of scrap is now worth $2 than to discover that your $1 worth of garbage now costs $2 to haul away. By the way, most heat loss is from black-body radiation, so aluminum foil may actually work better than ceramics of the same thermal conductivity.

Buildings can be recycled too. Buy them and sell them as needed. Shipping containers make for great lab buildings because they are cheap, strong, and movable. You can sell them off-site when you're done. We have a shipping-container lab building and a shipping-container storage building, both worth more now than when I bought them. They are also rather attractive with our advertising on them (attractive according to my design aesthetic). Here's an insight into why chemical engineers earn more than chemists, and an insight into the difference between mechanical engineering and civil engineering. Here's an architecture aesthetic. Here's one about the scientific method.

Robert E. Buxbaum, October 31, 2013

Why random experimental design is better

In a previous post I claimed that, to do good research, you want to arrange experiments so there is no pre-hypothesis of how the results will turn out. As the post was long, I said nothing direct about how such experiments should be organized, but only alluded to my preference: experiments should be organized at randomly chosen conditions within the area of interest. The alternatives, shown below, are that experiments should be done at the cardinal points in the space, or at corner extremes: the Wilson Box and Taguchi designs of experiments (DoE), respectively. Doing experiments at these points implies a sort of expectation of the outcome, generally that results will be linearly, orthogonally related to causes; in such cases, the extreme values are the most telling. Sorry to say, this usually isn't how experimental data falls out.

First experimental test points according to a Wilson Box, a Taguchi, and a random experimental design. The Wilson box and Taguchi are OK choices if you know or suspect that there are no significant non-linear interactions, and where experiments can be done at these extreme points. Random is the way nature works; and I suspect that’s best — it’s certainly easiest.

The first test-points for experiments according to the Wilson Box method and Taguchi method of experimental designs are shown on the left and center of the figure above, along with a randomly chosen set of experimental conditions on the right. Taguchi experiments are the most popular choice nowadays, especially in Japan, but as Taguchi himself points out, this approach works best if there are “few interactions between variables, and if only a few variables contribute significantly.” Wilson Box experimental choices help if there is a parabolic effect from at least one parameter, but are fairly unsuited to cases with strong cross-interactions.

Perhaps the main problem with doing experiments at extreme or cardinal points is that these experiments are usually harder than those at random points, and that the results from these difficult tests generally tell you nothing you didn't know or suspect from the start. The minimum concentration is usually zero, and the minimum temperature is usually one where reactions are too slow to matter. When you test at the minimum-minimum point, you expect to find nothing, and generally that's what you find. In the data sets shown above, it will not be uncommon that the two minimum W-B data points, and the 3 minimum Taguchi data points, will show no measurable result at all.

Randomly selected experimental conditions are the experimental equivalent of Monte Carlo simulation, and this is the method evolution uses. Set out the space of possible compositions, morphologies, and test conditions as with the other methods, and perhaps plot them on graph paper. Now, toss darts at the paper to pick a few compositions and sets of conditions to test, and do a few experiments. Because nature is rarely linear, you are likely to find better results and more interesting phenomena than at any of the extremes. After the first few experiments, when you think you understand how things work, you can pick experimental points that target an optimum extreme point, or that visit a more interesting or representative survey of the possibilities. In any case, you'll quickly get a sense of how things work, and of how successful the experimental program will be. If nothing works at all, you may want to cancel the program early; if things work really well, you'll want to expand it. With random experimental points you do fewer worthless experiments, and you can easily increase or decrease the number of experiments in the program as funding and time allow.

Consider the simple case of choosing a composition for gunpowder. The composition itself involves only 3 or 4 components, but there is also morphology to consider including the gross structure and fine structure (degree of grinding). Instead of picking experiments at the maximum compositions: 100% salt-peter, 0% salt-peter, grinding to sub-micron size, etc., as with Taguchi, a random methodology is to pick random, easily do-able conditions: 20% S and 40% salt-peter, say. These compositions will be easier to ignite, and the results are likely to be more relevant to the project goals.

The advantages of random testing get bigger the more variables and levels you need to test. Testing 9 variables at 3 levels each takes 27 Taguchi points, but only 16 or so if the experimental points are randomly chosen. To test whether the behavior is linear, you can use the results from your first 7 or 8 randomly chosen experiments, derive the vector that gives the steepest improvement in n-dimensional space (a weighted sum of all the improvement vectors), and then do another experimental point that's as far along in the direction of that vector as you think reasonable. If your result at this point is better than at any point you've visited, you're well on your way to determining the conditions of optimal operation. That's a lot faster than starting with 27 hard-to-do experiments. What's more, if you don't find an optimum, congratulate yourself: you've just discovered a non-linear behavior, something that would be easy to overlook with Taguchi or Wilson Box methodologies.
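Here is a rough Python sketch of that procedure. The "experiment" is an invented, deliberately non-linear response surface; the direction of steepest improvement is taken as a weighted sum of each random point's offset from the average condition, weighted by how much better than average its result was; and the 0.2 step size is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def measured_result(x):
    """Stand-in for a real experiment; x holds normalized settings in [0, 1].
    The response surface is invented and non-linear on purpose."""
    return float(np.exp(-np.sum((x - 0.6) ** 2)) + 0.02 * rng.normal())

n_vars, n_runs = 9, 8                          # 9 variables, 8 random runs
X = rng.random((n_runs, n_vars))               # randomly chosen test conditions
y = np.array([measured_result(x) for x in X])

# Steepest-improvement direction: a weighted sum of the improvement vectors,
# i.e. each point's offset from the average condition, weighted by how much
# better (or worse) than average its result was.
direction = ((y - y.mean())[:, None] * (X - X.mean(axis=0))).sum(axis=0)
direction /= np.linalg.norm(direction)

x_best = X[np.argmax(y)]
x_next = np.clip(x_best + 0.2 * direction, 0, 1)   # next experiment to try
print("best so far:", np.round(x_best, 2))
print("try next:   ", np.round(x_next, 2))
```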

The basic idea is one Sherlock Holmes pointed out: "It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts" (A Study in Scarlet). "Life is infinitely stranger than anything which the mind of man could invent" (A Case of Identity).

Robert E. Buxbaum, September 11, 2013. A nice description of the Wilson Box method is presented in Perry's Handbook (6th ed.). Since I had trouble finding a free, online description, I linked to a paper by someone using it to test ingredient choices in baked bread. Here's a link for more info about random experimental choice, from the University of Michigan, Chemical Engineering dept. Here's a joke on the misuse of statistics, and a link regarding the Taguchi methodology. Finally, here's a pointless joke on irrational numbers that I posted for pi-day.

Detroit Teachers are not paid too much

Detroit is bankrupt financially, but not because the public school teachers have negotiated rich contracts. If anything, Detroit teachers are paid too little given the hardship of their work. The education problem in Detroit, I think, is with the quality of education, and of life. Parents leave Detroit if they can afford it; students who can't leave the city avoid the Detroit system by transferring to private schools, by commuting to schools in the suburbs, or by staying home. Fewer than half of Detroit students are in the Detroit public schools.

The average salary for a public school teacher in Detroit is (2013) $51,000 per year. That's 3% less than the national average and $3,020/year less than the Michigan average. While some Detroit teachers are paid over $100,000 per year, a factoid that angers some on the right, that's a minority of teachers: only those with advanced degrees and many years of seniority. For every one of these, the Detroit system has several assistant teachers, substitute teachers, and early-childhood teachers earning $20,000 to $25,000/year. That's an awfully low salary given their education and the danger and difficulty of their work. It's less than janitors are paid on an annual basis (janitors generally work more hours). This is a city with 25 times the murder rate of the rest of the state. If anything, good teachers deserve a higher salary.

Detroit public schools provide among the worst math education in the US; in 2009, they showed the lowest math proficiency scores ever recorded in the 21-year history of the national math proficiency test. Attendance and graduation are low too: Friday attendance averages 71.2%, and attendance is never as high as 80% on any day. The high-school graduation rate in Detroit is only 29.4%. Interested parents have responded by shifting their children out of the Detroit system at the rate of 8,000/year. Currently, fewer than half of school-age children go to Detroit public schools (51,070 last year); 50,076 go to charter schools, some 9,500 go to schools in the suburbs, and 8,783, those in the 5% worst-performing schools, are now educated by the state reform district.

The state of Michigan has taken over the 5% worst performing schools in Detroit through their “Reform District” system. They provide supplies and emphasize job-skills.

Poor attendance and the departure of interested students make it hard for any teacher to handle a class. Teachers must try to teach responsibility to kids who don't show up, in a high-crime setting, with only a crooked city council to look up to. This is a city council that oversaw decades of "pay for play," where you had to bribe the elected officials to bid on projects. Even among officials who don't directly steal, there is a pattern of giving themselves and their families fancy cars or gambling trips to Canada using taxpayer dollars. The mayor awarded Cadillac Escalades to his family and friends, and had a 22-man team of police to protect him. In this environment, a teacher has to be a real hero to achieve even modest results.

Student departure means there is a surfeit of teachers and schools, but it is hard to see what to do. You'd like to reassign teachers who are on the payroll but doing little, and fire the worst teachers. Sorry to say, it's hard to fire anyone, and it's hard to figure out which are the bad teachers; just because your class can't read doesn't mean you are a bad teacher. Recently a teacher of the year was fired because the evaluation formula gave her a low rating.

Making changes involves upending union seniority rules. Further, there is an Americans with Disabilities Act that protects older teachers, along with the lazy, the thief, and the drug addict, assuming they claim a disability by frailty, poor upbringing, or mental disease. To speed change along, I would like to see the elected education board replaced by an appointed board with the power to act quickly and the responsibility to deliver quality education within the current budget. Unlike the present system, there must be oversight to keep them from using the money on themselves.

The state could take over more schools into the reform school district, or it could remove entire school districts from Detroit incorporation and make them Michigan townships. A Michigan township has more flexibility in how it runs schools, police, and other services. It can run as many schools as it wants, and can contract with its neighbors or independent suppliers for the rest. A city has to provide schools for everyone who hasn't opted out. Detroit's population density already matches that of rural areas; rural management might benefit some communities.

I would like to see the curriculum modified to be more financially relevant. Detroit schools could reinstate classes in shop and trade-skills. In effect that’s what’s done at Detroit’s magnet schools, e.g. the Cass Academy and the Edison Academy. It’s also the heart of several charter schools in the state-run reform district. Shop class teaches math, an important basis of science, and responsibility. If your project looks worse than your neighbor’s, you can only blame yourself, not the system. And if you take home your work, there is that reward for doing a good job. As a very last thought, I’d like to see teachers paid more than janitors; this means that the current wage structure has to change. If nothing else, a change would show that there is a monetary value in education.

Robert Buxbaum, August 16, 2013; I live outside Detroit, in one of the school districts that students go to when they flee the city.

What’s the quality of your home insulation

By Dr. Robert E. Buxbaum, June 3, 2013

It’s common to have companies call during dinner offering to blow extra insulation into the walls and attic of your home. Those who’ve added this insulation find a small decrease in their heating and cooling bills, but generally wonder if they got their money’s worth, or perhaps if they need yet-more insulation to get the full benefit. Here’s a simple approach to comparing your home heat bill to the ideal your home can reasonably reach.

The rate of heat transfer through a wall, Qw, is proportional to the temperature difference, ∆T, to the area, A, and to the average thermal conductivity of the wall, k; it is inversely proportional to the wall thickness, ∂;

Qw = ∆T A k /∂.

For home insulation, we re-write this as Qw = ∆T A/Rw, where Rw is the thermal resistance of the wall, measured (in the US) in °F·ft²·hr/BTU. Rw = ∂/k.

Let's assume that your home's outer wall is nominally 6″ thick (0.5 foot). With the best available insulation, perfectly applied, the heat loss will be somewhat higher than if the space were filled with still air, k = .024 BTU/ft·hr·°F, a result based on molecular dynamics. For a 6″ wall, the R value will always be less than .5/.024 = 20.8 °F·ft²·hr/BTU. It will be much less if there are holes or air infiltration, but for practical construction with joists and sills, an Rw value of 15 or 16 is probably about as good as you'll get with 6″ walls.

To show you how to evaluate your home, I’ll now calculate the R value of my walls based on the size of my ranch-style home (in Michigan) and our heat bills. I’ll first do this in a simplified calculation, ignoring windows, and will then repeat the calculation including the windows. Windows turn out to be very important; I strongly suggest window curtains to save on heat and air conditioning.

The outer wall of my home is 190 feet long, and extends about 11 feet above ground to the roof. Multiplying these dimensions gives an outer wall area of 2090 ft². I could now add the roof area, 1750 ft² (it’s the same as the area of the house), but since the roof is more heavily insulated than the walls, I’ll estimate that it behaves like 1410 ft² of normal wall. I calculate there are 3500 ft² of effective above-ground area for heat loss. This is the area that companies keep offering to insulate.

Between December 2011 and February 2012, our home was about 72°F inside, and the outside temperature was about 28°F. Thus, the average temperature difference between the inside and outside was about 45°F; I estimate the rate of heat loss from the above-ground part of my house, QU = 3500 × 45/Rw = 157,500/Rw.

Our house has a basement too, something that no one has yet offered to insulate. While the below-ground temperature gradient is smaller, it’s less-well insulated. Our basement walls are cinderblock covered with 2″ of styrofoam plus wall-board. Our basement floor is even less well insulated: it’s just cement poured on pea-gravel. I estimate the below-ground R value is no more than 1/2 of whatever the above ground value is; thus, for calculating QB, I’ll assume a resistance of Rw/2.

The below-ground area equals the square footage of our house, 1750 ft², but the walls extend down only about 5 feet below ground. The basement walls are thus 950 ft² in area (5 × 190 = 950). Adding the 1750 ft² floor area, we find a total below-ground area of 2700 ft².

The temperature difference between the basement and the wet dirt is only about 25°F in the winter. Assuming the thermal resistance is Rw/2, I estimate the rate of heat loss from the basement, QB = 2700*25*(2/Rw) = 135,000/Rw. It appears that nearly as much heat leaves through the basement as above ground!
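Both heat-loss terms can be written as a coefficient times 1/Rw, which makes them easy to compare before the wall resistance itself is known. A short sketch using the areas and temperature differences estimated above (the factor of 2 on the basement term is the assumed Rw/2 resistance):

```python
# Above-ground and below-ground heat loss, each written as (coefficient)/Rw.
area_above = 2090 + 1410   # ft²: outer walls plus roof, as equivalent wall area
dT_above = 45              # °F, inside minus outside, winter average

area_below = 950 + 1750    # ft²: basement walls plus floor
dT_below = 25              # °F, basement vs. the surrounding dirt

QU_coeff = area_above * dT_above        # 157,500; above-ground loss = QU_coeff/Rw
QB_coeff = area_below * dT_below * 2    # 135,000; the 2 comes from R_basement ≈ Rw/2

print(QU_coeff, QB_coeff)   # the basement loses nearly as much as the above-ground walls
```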

Between December and February 2012, our home used an average of 597 cubic feet of gas per day, or 25,497 BTU/hour (heat value = 1025 BTU/ft³). QU + QB = 292,500/Rw. Ignoring windows, I estimate the Rw of my home = 292,500/25,497 = 11.47.
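The same bookkeeping, going from the gas bill to the wall resistance (still ignoring windows):

```python
# Convert winter gas use to an hourly heat rate, then solve QU + QB = 292,500/Rw for Rw.
gas_per_day = 597      # ft³ of natural gas per day, Dec-Feb average
heat_value = 1025      # BTU per ft³ of gas

Q_total = gas_per_day * heat_value / 24          # ≈ 25,497 BTU/hr
Rw_no_windows = (157_500 + 135_000) / Q_total    # ≈ 11.5
print(f"{Q_total:.0f} BTU/hr, Rw ≈ {Rw_no_windows:.1f}")
```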

We now add the windows. Our house has 230 ft² of windows, most covered by curtains and/or plastic. Because of the curtains and plastic, they would have an R value of 3, except that black-body radiation tends to be very significant. I estimate our windows have an R value of 1.5; the heat loss through the windows is thus QW = 230 × 45/1.5 = 6900 BTU/hr, about 27% of the total. The R value for our walls is now re-estimated to be 292,500/(25,497 − 6900) = 15.7; this is about as good as I can expect given the fixed thickness of our walls and the fact that I cannot easily get an insulation conductivity lower than that of still air. I thus find that there will be little or no benefit to adding more above-ground wall insulation to my house.
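And the window correction, subtracting the estimated window loss before attributing the remainder to the walls:

```python
# Window heat loss, and the wall R value re-estimated with windows removed from the total.
window_area = 230     # ft²
window_R = 1.5        # with curtains and plastic, radiation included
dT = 45               # °F

Q_windows = window_area * dT / window_R      # ≈ 6900 BTU/hr
Q_total = 25_497                             # BTU/hr, from the gas bill above
Rw_walls = 292_500 / (Q_total - Q_windows)   # ≈ 15.7
print(f"Window loss {Q_windows:.0f} BTU/hr ({Q_windows/Q_total:.0%} of total), wall Rw ≈ {Rw_walls:.1f}")
```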

To save heat energy, I might want to coat our windows in partially reflective plastic or draw the curtains to follow the sun. Also, since nearly half the heat left from the basement, I may want to lay a thicker carpet, or lay a reflective under-layer (a space blanket) beneath the carpet.

To improve on the above estimate, I could consider our furnace efficiency; it is perhaps only 85-90% efficient, with still-warm air leaving up the chimney. There is also some heat lost through the door being opened, and through hot water being poured down the drain. As a first guess, these heat losses are balanced by the heat added by electric usage, by the body-heat of people in the house, and by solar radiation that entered through the windows (not much for Michigan in winter). I still see no reason to add more above-ground insulation. Now that I’ve analyzed my home, it’s time for you to analyze yours.

Camless valves and the Fiat-500

One of my favorite automobile engine ideas is the use of camless, electronic valves. It’s an idea whose advantages have been known for 100 years or more, and it’s finally going to be used on a mainstream, commercial car — on this year’s Fiat 500s. Fiat is not going entirely camless, but the plan is to replace the cams on the air intake valves with solenoids. A normal car engine uses cams and lifters to operate the poppet valves used to control the air intake and exhaust. Replacing these cams and lifters saves some weight, and allows the Fiat-500 to operate more efficiently at low power by allowing the engine to use less combustion energy to suck vacuum. The Fiat 500 semi-camless technology is called Multiair: it’s licensed from Valeo (France), and appeared as an option on the 2010 Alfa Romeo.

How this saves mpg is as follows: at low power (idling etc.), the air intake of a normal car engine is restricted, creating a fairly high vacuum. The vacuum restriction requires energy to draw and reduces the efficiency of the engine by decreasing the effective compression ratio. It’s needed to ensure that the car does not produce too much NOx when idling. In a previous post, I showed that the rate of energy wasted by drawing this vacuum was the vacuum pressure times the engine volume and the rpm rate; I also mentioned some classic ways to reduce this loss (exhaust recycle and adding water).
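To put a rough number on that loss, here is a back-of-envelope sketch using the same relation. The manifold vacuum, displacement, and engine speed below are my own illustrative assumptions, not Fiat’s figures, and I’ve included the factor of two for a four-stroke engine (one intake stroke per two revolutions).

```python
# Rough pumping-loss estimate: power ≈ vacuum pressure × volume drawn per second.
# All numbers are illustrative assumptions for a small engine at light load.
vacuum = 50_000          # Pa, roughly half an atmosphere of manifold vacuum
displacement = 1.4e-3    # m³, a 1.4 liter engine
rpm = 2000               # light-cruise engine speed

intake_strokes_per_s = rpm / 2 / 60              # four-stroke: one intake per two revs
power_watts = vacuum * displacement * intake_strokes_per_s
print(f"Pumping loss ≈ {power_watts:.0f} W (≈ {power_watts/746:.1f} hp)")   # ≈ 1.2 kW
```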

Valeo’s/Fiat’s semi-camless design does nothing to increase the effective compression ratio at low power, but it reduces the amount of power lost to vacuum by allowing the intake air pressure to be higher, even at low power demand. A computer reduces the amount of air entering the engine by reducing the amount of time that the intake valve is open. The higher air pressure means there is less vacuum penalty, both when the valve is open and when it is closed. On the Alfa Romeo, the 1.4 liter Multiair engine option got 8% better gas mileage (39 mpg vs 36 mpg) and 10% more power (168 hp vs 153 hp) than the 1.4 liter cam-driven engine.

David Bowes shows off his latest camless engines at NAMES, April 2013.

Fiat used a similar technology in the 1970s with variable valve timing (VVT), but that involved heavy cams and levers, and proved to be unreliable. In the US, some fine engineers had been working on solenoids, e.g. David Bowes, pictured above with one of his solenoidal engines (he’s a sometime manufacturer for REB Research). Dave has built engines with many cycles that would be impractical without solenoids, and has done particularly nice work reducing the electric use of the solenoid.

Durability may be a problem here too: there is no other obvious reason that Fiat has not gone completely camless and put a solenoid-controlled valve on the exhaust as well. One likely reason Fiat didn’t do this is that solenoidal valves tend to be unreliable at the higher temperatures found in the exhaust. If so, perhaps they are unreliable on the intake too. A car operated at 1000-4000 rpm will see on the order of 100,000,000 cycles in 25,000 miles. No solenoid we’ve used has lasted that many cycles, even at low temperatures, but most customers expect their cars to go more than 25,000 miles without needing major engine service.
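That cycle count is easy to reproduce. The average speed and rpm below are guesses, chosen only to show the order of magnitude: roughly 10⁸ crank revolutions over 25,000 miles, with each intake valve actuating about once per two revolutions.

```python
# Order-of-magnitude cycle count over 25,000 miles; speed and rpm are assumed figures.
miles = 25_000
avg_speed_mph = 40
avg_rpm = 2500

hours = miles / avg_speed_mph          # 625 hours of running
crank_revs = hours * 60 * avg_rpm      # ≈ 9.4e7 revolutions
valve_cycles = crank_revs / 2          # intake valve opens once per two revolutions
print(f"{crank_revs:.1e} revolutions, {valve_cycles:.1e} valve cycles")
```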

We use solenoidal pumps in our hydrogen generators too, but we increase the operating life by running the solenoid at a maximum of only 50 cycles/minute, rather than 1000-4000. This should allow our products to work for at least 10 years without needing major service. Performance-car customers may be willing to put up with more frequent service, but the company can’t expect ordinary customers to go back to the days when Fiat stood for “Fix It Again Tony.”