Tag Archives: science

Patterns in climate; change is the only constant

There is a general problem when looking for climate trends: you have to look at weather data. That's a problem because weather data goes back thousands of years and is always changing, so it's never clear what start year to use for the trend. Start too early or too late and the trend disappears. If you start your trend line in a hot year, like the late Roman period, the trend will show global cooling. If you start in a cold year, like the early 1970s or the Little Ice Age (1500-1800), you'll find global warming: perhaps too much. Begin 10-15 years ago, and you'll find no change in global temperatures.

Ice coverage data shows the same problem: take the Canadian Arctic ice maximums, shown below. If you start your regression in 1980-83, the record ice years (green), you'll see ice loss. If you start in 1971, the year of minimum ice (red), you'll see ice gain. It might also be nice to incorporate physics through a computer model of the weather, but this method doesn't seem to help, perhaps because the physics models generally have to be fed coefficients calculated from the trend line. Using the best computers and a trend line showing ice loss, the US Navy predicted, in January 2006, that the Arctic would be ice-free by 2013. It didn't happen; a new prediction is 2016 — something I suspect is equally unlikely. Five years ago the National Academy of Sciences predicted global warming would resume in the next year or two — it didn't, either. Garbage in, garbage out, as they say.

Arctic Ice in Northern Canada waters, 1971-2014 from the Canadian ice service 2014 is not totally in yet , but is likely to exceed 2013. If you are looking for trends, in what year do you start?

The same trend problem appears with predicting sea temperatures and El Niño, a Christmastime warming current in the Pacific Ocean. This year, 2013-14, was predicted to be a super El Niño: an exceptionally hot, stormy year with exceptionally strong sea currents. Instead, there was no El Niño, and many cities saw record cold — Detroit by 9 degrees. The Antarctic ice hit record levels, stranding a ship of anti-warming activists. There were record-few hurricanes. As I look at the Pacific sea temperature from 1950 to the present, below, I see change, but no pattern or direction: El Nada (the nothing). If one did a regression analysis, the slope might be slightly positive or negative, but r², the measure of fit, would be near zero. There is no real directionality, just noise, if 1950 is the start date.

El Niño and La Niña since 1950. There is no sign that they are coming more often, or stronger. Nor is there clear evidence that the ocean is warming.

This appears to be as much a fundamental problem in applied math as in climate science: when looking for a trend, where do you start, how do you handle data confidence, and how do you prevent bias? A thought I've had is to try to weight a regression in terms of the confidence in the data. The Canadian ice data shows that the Canadian Ice Service is less confident about its older data than about the new; this is shown by the grey lines. It would be nice if some form of this confidence could be incorporated into the regression trend analysis, but I'm not sure how to do this right.
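
For what it's worth, here is a minimal sketch of one way to weight a trend line by data confidence: an ordinary least-squares fit with per-point weights, where the weight is the reciprocal of each year's assumed uncertainty. The ice numbers and uncertainties below are invented for illustration; they are not the Ice Service's data.

import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1971, 2015)                       # 44 years, like the ice record
ice = 100 + rng.normal(0, 8, size=years.size)       # pretend ice-extent index: pure noise
sigma = np.linspace(12, 3, years.size)              # assume the older data is less certain

# np.polyfit accepts per-point weights; for Gaussian uncertainties use w = 1/sigma.
slope_w, _ = np.polyfit(years, ice, 1, w=1.0 / sigma)
slope_u, _ = np.polyfit(years, ice, 1)
print(f"unweighted slope: {slope_u:+.3f} per year")
print(f"weighted slope:   {slope_w:+.3f} per year (recent, more-certain years count more)")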

It's not so much that I doubt global warming, but I'd like a better explanation of the calculation. Weather changes: how do you know when you're looking at climate and not weather? The president of the US claimed that the science is established, and Prince Charles of England called climate skeptics headless chickens, but the science is certainly not predictive, and prediction is the normal standard of knowledge. Neither has offered any explanation of how one would back up these claims. If this is global warming, I'd expect it to be warm.

Robert Buxbaum, Feb 5, 2014. Here's a post I've written on the scientific method, and on dealing with abnormal statistics. I've also written about an important recent statistical fraud against genetically modified corn. As far as energy policy goes, I'm inclined to prefer hydrogen over batteries, and nuclear over wind and solar. The president has promoted the opposite policy — for unexplained, “scientific” reasons.

Genetically modified food not found to cause cancer.

It's always nice when a study is retracted, especially so if the study alerts the world to a danger that is found to not exist. Retractions don't happen often enough, I think, given that false positives should occur in at least 5% of all biological studies. Biological studies typically use 95% confidence limits, a confidence limit that indicates there will be false positives 5% of the time for the best-run versions (or 10% if both 5% tails are taken to be significant). These false positives will appear in 5-10% of all papers as an expected result of statistics, no matter how carefully the study is done, or how many rats are used. Still, one hopes that researchers will check for confirmation from other researchers and from other groups within the study. Neither check was done in a well-publicized, recent paper claiming genetically modified foods cause cancer. Worse yet, the experiment design was such that false positives were almost guaranteed.

Séralini published this book, “We are all Guinea Pigs,” simultaneously with the paper.

As reported in Nature, the journal Food and Chemical Toxicology retracted a 2012 paper by Gilles-Eric Séralini claiming that eating genetically modified (GM) maize causes cancerous tumors in rats, despite “no evidence of fraud or intentional misrepresentation.” I would not exactly say no evidence. For one, the choice of rats and the length of the study were such that 30% of the rats would be expected to get cancer and die even under the best of circumstances. Also, Séralini failed to mention that earlier studies had come to the opposite conclusion about GM foods. Even the same journal had published a review of 12 long-term studies, between 90 days and two years, that showed no harm from GM corn or other GM crops. Those reports didn't get much press because it is hard to get excited at good news; still, you'd have hoped the journal editors would demand that their own review, at least, be referenced in a paper claiming the contrary.

A wonderful book on understanding the correct and incorrect uses of statistics.

The main problem I found is that the study was organized to virtually guarantee false positives. Séralini took 200 rats and divided them into 20 groups of 10. Taking two groups of ten (one male, one female) as a control, he fed the other 18 groups of ten various doses of genetically modified grain, either alone or mixed with Roundup, a pesticide often used with GM foods. Based on pure statistics and 95% confidence, you should expect that, out of the 18 groups fed GM grain, there is a 1 – 0.95^18 chance (about 60%) that at least one group will show a cancer increase, and a similar 60% chance that at least one group will show a cancer decrease at the 95% confidence level. Séralini's study found both these results: one group, the female rats fed 10% GM grain and no Roundup, showed a cancer increase; another group, the female rats fed 33% GM grain and no Roundup, showed a cancer decrease — both at the 95% confidence level. Séralini then dismissed the observation of cancer decrease, and published the inflammatory article and a companion book (“We are all Guinea Pigs,” pictured above) proclaiming that GM grain causes cancer. Better editors would have forced Séralini to acknowledge the observation of cancer decrease, or demanded he analyze the data by linear regression. If he had, Séralini would have found no net cancer effect. Instead he got to publish his bad statistics and (since none of the counter-studies were mentioned) unleashed a firestorm that got GM grain products pulled from store shelves.
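
To see where that 60% comes from, here is a quick back-of-envelope check: pure arithmetic plus a small simulation with made-up random numbers, no actual rat data involved.

import numpy as np

# Probability that at least one of 18 independent treated groups crosses the 5%
# significance threshold by chance alone:
p_at_least_one = 1 - 0.95**18
print(f"1 - 0.95^18 = {p_at_least_one:.2f}")        # about 0.60

# The same result by simulation: 100,000 imaginary studies, 18 groups each, with each
# group coming up "significant" 5% of the time.
rng = np.random.default_rng(1)
fraction = (rng.random((100_000, 18)) < 0.05).any(axis=1).mean()
print(f"simulated fraction of studies with a spurious hit: {fraction:.2f}")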

Did Séralini knowingly design a research method aimed to produce false positives? In a sense, I'd hope so; the alternative is pure ignorance. Séralini is a long-time anti-GM activist. He claims he used few rats because he was not expecting to find any cancer — no previous tests on GM foods had suggested a cancer risk!? But this is misdirection; no matter how many rats are in each group, if you use 20 groups this way, there is a 60% chance you'll find at least one group with cancer at the 95% confidence limit. (This is Poisson-type statistics; see here.) My suspicion is that Séralini knowingly gamed the experiments in an effort to save the world from something he was sure was bad — that he was a do-gooder twisting science for the greater good.

The most common reason for retraction is that the article has appeared elsewhere, either as a substantial repeat from the authors, or from other authors by plagiarism or coincidence. (BC Comics, by Johnny Hart, 11/25/10).

It’s important to cite previous work and aspects of the current work that may undermine the story you’d like to tell; BC Comics, Johnny Hart.

This was not the only major  retraction of the month, by the way. The Harrisburg Patriot & Union retracted its 1863 review of Lincoln’s Gettysburg Address, a speech the editors originally panned as “silly remarks”, deserving “a veil of oblivion….” In a sense, it’s nice that they reconsidered, and “…have come to a different conclusion…” My guess is that the editors were originally motivated by do-gooder instinct; they hoped to shorten the war by panning the speech.

There is an entire blog devoted to retractions, by the way: http://retractionwatch.com. A good friend, Richard Fezza, alerted me to it. I went to high school with him, then through undergrad at Cooper Union, and on to grad school at Princeton, where we both earned PhDs. We'll probably end up in the same old-age home. Cooper Union tried to foster a skeptical attitude against group-think.

Robert Buxbaum, Dec 23, 2013. Here is a short essay on the correct way to do science, and how to organize experiments (randomly) to make biased analysis less likely. I've also written on nearly normal statistics, and near-Poisson statistics. Plus on other random stuff in the science and art world: Time travel, anti-matter, the size of the universe, Surrealism, Architecture, Music.

The 2013 hurricane drought

News about the bad weather that didn’t happen: there were no major hurricanes in 2013. That is, there was not one storm in the Atlantic Ocean, the Caribbean Sea, or the Gulf of Mexico with a maximum wind speed over 110 mph. None. As I write this, we are near the end of the hurricane season (it officially ends Nov. 30), and we have seen nothing like what we saw in 2012; compare the top and bottom charts below. Barring a very late, very major storm, this looks like it will go down as the most uneventful season in at least 2 decades. Our monitoring equipment has improved over the years, but even with improved detection, we’ve seen nothing major. The last time we saw this lack was 1994 — and before that 1986, 1972, and 1968.

Hurricanes 2012-2013. This year there were only two hurricanes, and both were Category 1. The last time we had this few was 1994. By comparison, in 2012 we saw 5 Category 1 hurricanes, 3 Category 2s, and 2 Category 3s, including Sandy, the most destructive hurricane to hit New York City since 1938.

In the Pacific, major storms are called typhoons, and this year has been fairly typical: 13 typhoons, 5 of them super typhoons, the same as in 2012. Weather tends to be chaotic, but it's nice to have a year without major hurricane damage or death.

In the news, a lack of major storms led to a lack of destruction of the boats, beaches, and stately homes of the North Carolina shore.

The reason you have not heard of this before is that it's hard to write a story about events that didn't happen. Good news is as important as bad, and 2013 had been predicted to be one of the worst seasons on record, but then it didn't happen and there was nothing to write about. Global warming is supposed to increase hurricane activity, but global warming has taken a 16-year rest. You didn't hear about the lack of global warming for the same reason you didn't hear about the lack of storms.

Here's why hurricanes form in fall and spin so fast, plus how they pick up stuff (an explanation from Einstein). In other good weather news, the ozone hole is smaller, and Arctic ice is growing (I suggest we build a northwest passage). It's hard to write about the lack of bad news; still, good science requires an open mind to the data, as it is or as it isn't. Here is a simple way to do abnormal statistics, plus why 100-year storms come more often than once every 100 years.

Robert E. Buxbaum. November 23, 2013.

Why random experimental design is better

In a previous post I claimed that, to do good research, you want to arrange experiments so there is no pre-hypothesis of how the results will turn out. As the post was long, I said nothing direct about how such experiments should be organized, but only alluded to my preference: experiments should be organized at randomly chosen conditions within the area of interest. The alternative, shown below, is to do experiments at the cardinal points in the space, or at corner extremes: the Wilson Box and Taguchi designs of experiments (DoE), respectively. Doing experiments at these points implies a sort of expectation of the outcome, generally that results will be linearly and orthogonally related to causes; in such cases, the extreme values are the most telling. Sorry to say, this usually isn't how experimental data will fall out.

First experimental test points according to a Wilson Box, a Taguchi, and a random experimental design. The Wilson box and Taguchi are OK choices if you know or suspect that there are no significant non-linear interactions, and where experiments can be done at these extreme points. Random is the way nature works; and I suspect that’s best — it’s certainly easiest.

The first test-points for experiments according to the Wilson Box method and Taguchi method of experimental designs are shown on the left and center of the figure above, along with a randomly chosen set of experimental conditions on the right. Taguchi experiments are the most popular choice nowadays, especially in Japan, but as Taguchi himself points out, this approach works best if there are “few interactions between variables, and if only a few variables contribute significantly.” Wilson Box experimental choices help if there is a parabolic effect from at least one parameter, but are fairly unsuited to cases with strong cross-interactions.

Perhaps the main problem with doing experiments at extreme or cardinal points is that these experiments are usually harder than those at random points, and that the results from these difficult tests generally tell you nothing you didn't know or suspect from the start. The minimum concentration is usually zero, and the minimum temperature is usually one where reactions are too slow to matter. When you test at the minimum-minimum point, you expect to find nothing, and generally that's what you find. In the data sets shown above, it will not be uncommon that the two minimum W-B data points, and the 3 minimum Taguchi data points, will show no measurable result at all.

Randomly selected experimental conditions are the experimental equivalent of a Monte Carlo simulation, and this is the method evolution uses. Set out the space of possible compositions, morphologies, and test conditions as with the other methods, and perhaps plot them on graph paper. Now toss darts at the paper to pick a few compositions and sets of conditions to test, and do a few experiments. Because nature is rarely linear, you are likely to find better results and more interesting phenomena than at any of the extremes. After the first few experiments, when you think you understand how things work, you can pick experimental points that target an optimum extreme point, or that visit a more-interesting or representative survey of the possibilities. In any case, you'll quickly get a sense of how things work, and how successful the experimental program will be. If nothing works at all, you may want to cancel the program early; if things work really well, you'll want to expand it. With random experimental points you do fewer worthless experiments, and you can easily increase or decrease the number of experiments in the program as funding and time allow.
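
If you'd rather let a computer throw the darts, here is a minimal sketch; the factor names and ranges are invented for illustration and would be replaced by whatever your own design space looks like.

import numpy as np

rng = np.random.default_rng(42)
bounds = {                               # (low, high) for each hypothetical factor
    "temperature_C":   (20, 200),
    "saltpeter_frac":  (0.1, 0.8),
    "grind_microns":   (5, 500),
}
n_experiments = 8                        # easy to raise or lower as funding allows
design = {name: rng.uniform(lo, hi, n_experiments) for name, (lo, hi) in bounds.items()}
for i in range(n_experiments):
    print({name: round(vals[i], 2) for name, vals in design.items()})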

Consider the simple case of choosing a composition for gunpowder. The composition itself involves only 3 or 4 components, but there is also morphology to consider, including the gross structure and fine structure (degree of grinding). Instead of picking experiments at the extreme compositions (100% saltpeter, 0% saltpeter, grinding to sub-micron size, etc., as with Taguchi), a random methodology is to pick random, easily do-able conditions: 20% S and 40% saltpeter, say. These compositions will be easier to ignite, and the results are likely to be more relevant to the project goals.

The advantages of random testing get bigger the more variables and levels you need to test. Testing 9 variables at 3 levels each takes 27 Taguchi points, but only 16 or so if the experimental points are randomly chosen. To test whether the behavior is linear, you can use the results from your first 7 or 8 randomly chosen experiments, derive the vector that gives the steepest improvement in n-dimensional space (a weighted sum of all the improvement vectors), and then do another experimental point that's as far along in the direction of that vector as you think reasonable. If your result at this point is better than at any point you've visited, you're well on your way to determining the conditions of optimal operation. That's a lot faster than starting with 27 hard-to-do experiments. What's more, if you don't find an optimum, congratulate yourself: you've just discovered a non-linear behavior, something that would be easy to overlook with the Taguchi or Wilson Box methodologies.
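
As a rough sketch of that steepest-improvement step (the response function and every number below are invented; a real project would substitute measured results):

import numpy as np

rng = np.random.default_rng(7)
X = rng.uniform(0, 1, size=(8, 3))       # 8 random experiments, 3 factors scaled to 0-1
y = 2*X[:, 0] + X[:, 1] - 0.5*X[:, 2] + rng.normal(0, 0.05, 8)   # pretend measured results

# Least-squares fit of y ~ b0 + b.x; the coefficient vector b points "uphill".
A = np.hstack([np.ones((8, 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
direction = coef[1:] / np.linalg.norm(coef[1:])

best_so_far = X[np.argmax(y)]
next_point = np.clip(best_so_far + 0.3 * direction, 0, 1)   # step a modest distance, stay in bounds
print("steepest-improvement direction:", np.round(direction, 2))
print("suggested next experiment:     ", np.round(next_point, 2))

If the result at the suggested point is no better than what you already had, the linear picture is suspect; that is the non-linear discovery mentioned above.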

The basic idea is one Sherlock Holmes pointed out: "It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts" (A Scandal in Bohemia). "Life is infinitely stranger than anything which the mind of man could invent" (A Case of Identity).

Robert E. Buxbaum, September 11, 2013. A nice description of the Wilson Box method is presented in Perry's Handbook (6th ed.). Since I had trouble finding a free, on-line description, I linked to a paper by someone using it to test ingredient choices in baked bread. Here's a link for more info about random experimental choice, from the University of Michigan Chemical Engineering dept. Here's a joke on the misuse of statistics, and a link regarding the Taguchi methodology. Finally, here's a pointless joke on irrational numbers that I posted for pi-day.

The Scientific Method isn’t the method of scientists

A linchpin of middle school and high-school education is teaching ‘the scientific method.’ This is the method, students are led to believe, that scientists use to determine Truths, facts, and laws of nature. Scientists, students are told, start with a hypothesis of how things work or should work, they then devise a set of predictions based on deductive reasoning from these hypotheses, and perform some critical experiments to test the hypothesis and determine if it is true (experimentum crucis in Latin). Sorry to say, this is a path to error, and not the method that scientists use. The real method involves a few more steps, and follows a different order and path. It instead follows the path that Sherlock Holmes uses to crack a case.

The actual method of Holmes, and of science, is to avoid beginning with a hypothesis. Isaac Newton claimed, "I never make hypotheses." Instead, as best we can tell, Newton, like most scientists, first gathered as much experimental evidence on a subject as possible before trying to concoct any explanation. As Holmes says (A Study in Scarlet): "It is a capital mistake to theorize before you have all the evidence. It biases the judgment."

Holmes barely tolerates those who hypothesize before they have all the data: “It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.” (Scandal in Bohemia).

Then there is the goal of science. It is not the goal of science to confirm some theory, model, or hypothesis; every theory probably has some limited area where it's true. The goal of any real-life scientific investigation is the desire to explain something specific and out of the ordinary, or to do something cool. Similarly, with Sherlock Holmes, the start of the investigation is the arrival of a client with a specific, unusual need, one that seems a bit outside of the normal routine. The scientist likewise wants to do something: build a bigger bridge, understand global warming or how DNA directs genetics, make better gunpowder, cure a disease, or Rule the World (mad scientists favor this). Once there is a fixed goal, it is the goal that should direct the next steps: it directs the collection of data, and focuses the mind on the wide variety of types of solution. As Holmes says, "it's wise to make one's self aware of the potential existence of multiple hypotheses, so that one eventually may choose one that fits most or all of the facts as they become known." It's only when there is no goal that any path will do.

In gathering experimental data (evidence), most scientists spend months in the less-fashionable sections of the library, looking at the experimental methods and observations of others, generally from many countries, collecting any scrap that seems reasonably related to the goal at hand. I used 3×5″ cards to catalog this data and the references. From many books and articles, one extracts enough diversity of data to be able to look for patterns and to begin to apply inductive logic. "The little things are infinitely the most important" (A Case of Identity). You have to look for patterns in the data you collect. Holmes does not explain how he looks for patterns, but this skill is innate in most people to a greater or lesser extent. A nice, systematic approach to inductive logic is the Baconian method; it would be nice to see schools teach it. If the author is still alive, a scientist will try to contact him or her to clarify things. In every Sherlock Holmes mystery, Holmes does the same and is always rewarded. There is always some key fact or observation that this turns up: key information unknown to the original client.

Based on the facts collected, one begins to create the framework for a variety of mathematical models: mathematics is always involved, but these models should be pretty flexible. Often the result is a tree of related mathematical models, each highlighting some different issue, process, or problem. One then may begin to prune the tree, trying to fit the known data (facts and numbers collected) into a mathematical picture of relevant parts of this tree. There usually won't be quite enough for a full picture, but a fair amount of progress can usually be had with the application of statistics, calculus, physics, and chemistry. These are the key skills one learns in college, but the high-schooler and middle-schooler usually have not learned them very well at all. If they've learned math and physics, they've not quite yet learned them in a way that can be applied to something new (it helps to read the accounts of real scientists here — e.g. The Double Helix by J. Watson).

Usually one tries to do some experiments at this stage. Holmes might visit a ship or test a poison, and a scientist might go off to his equally smelly laboratory. The experiments done there are rarely experimenta crucis where one can say they've determined the truth of a single hypothesis. Rather, one wants to eliminate some hypotheses and collect data to be used to evaluate others. An answer generally requires that you have both a numerical expectation and that you've eliminated all reasonable explanations but one. As Holmes says often, e.g. in The Sign of the Four, "when you have eliminated the impossible, whatever remains, however improbable, must be the truth." The middle part of a scientific investigation generally involves these practical experiments to prune the tree of possibilities and determine the coefficients of relevant terms in the mathematical model: the weight or capacity of a bridge of a certain design, the likely effect of CO2 on global temperature, the dose response of a drug, or the temperature and burn rate of different gunpowder mixes. Though not mentioned by Holmes, it is critically important in science to aim for observations that have numbers attached.

The destruction of false aspects and models is a very important part of any study. Francis Bacon calls this act destruction of idols of the mind, and it includes many parts: destroying commonly held presuppositions, avoiding personal preferences, avoiding the tendency to see a closer relationship than can be justified, etc.

In science, one eliminates the impossible through the use of numbers and math, generally based on your laboratory observations. When you attempt to fit the numbers associated with your observations to the various possible models, some will take the data well, some poorly, and some will not fit the data at all. Apply the deductive reasoning that is taught in schools: logical, Boolean, step by step; if some aspect of a model does not fit, it is likely the model is wrong. If we have shown that all men are mortal, and we are comfortable that Socrates is a man, then it is far better to conclude that Socrates is mortal than to conclude that all men but Socrates are mortal (Occam's razor). This is the sort of reasoning that computers are really good at (better than humans, actually). It all rests on the inductive pattern search — similarities and differences — that we started with, and very often we find we are missing a piece, e.g. we still need to determine that all men are indeed mortal, or that Socrates is a man. It's back to the lab; this is why PhDs often take 5-6 years, and not the 3-4 that one hopes for at the start.
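
As a toy illustration of pruning models by how well they take the numbers (the data below are invented, and the three candidate forms are arbitrary stand-ins for a real model tree):

import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(1, 10, 30)
y = 4.0 * np.sqrt(x) + rng.normal(0, 0.3, x.size)        # pretend lab measurements

candidates = {
    "linear:      y = a*x + b":       np.column_stack([x, np.ones_like(x)]),
    "square-root: y = a*sqrt(x) + b": np.column_stack([np.sqrt(x), np.ones_like(x)]),
    "quadratic:   y = a*x^2 + b":     np.column_stack([x**2, np.ones_like(x)]),
}
for name, A in candidates.items():
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)          # best-fit coefficients
    rms = np.sqrt(np.mean((A @ coef - y) ** 2))           # how badly the model misses
    print(f"{name:34s} rms residual = {rms:.2f}")
# The form with the smallest residual survives; the clearly worse ones are pruned.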

More often than not we find we have a theory or two (or three), but not quite all the pieces in place to get to our goal (whatever that was); at least there's a clearer path, and often more than one. Since science is goal-oriented, we're likely to find a more efficient path than we first thought. E.g., instead of proving that all men are mortal, show it to be true of Greek men, that is, for all two-legged, fairly hairless beings who speak Greek. All we must show is that few Greeks live beyond 130 years, and that Socrates is one of them.

Putting numerical values on the mathematical relationship is a critical step in all science, as is the use of models — mathematical and otherwise. The path to measuring the life expectancy of Greeks will generally involve looking at a sample population. A scientist calls this a model. He or she will analyze this model using a statistical model of average and standard deviation, and will derive conclusions from there. It is only now that you have a hypothesis, but it's still based on a model. In health experiments the model is typically a sample of animals (experiments on people are often illegal and take too long). For bridge experiments one uses small wood or metal models; and for chemical experiments, one uses small samples. Numbers and ratios are the key to making these models relevant in the real world. A hypothesis of this sort, backed by numbers, is publishable, and is as far as you can go when dealing with the past (e.g. why Germany lost WW2, or why the dinosaurs died off), but the gold standard of science is predictability. Thus, while we are confident that Socrates is definitely mortal, we're not 100% certain that global warming is real — in fact, it seems to have stopped though CO2 levels are rising. To be 100% sure about global warming we have to make predictions, e.g. that the temperature will have risen 7 degrees in the last 14 years (it has not), or Al Gore's prediction that the sea will rise 8 meters by 2106 (this seems unlikely at the current time). This is not to blame the scientists whose predictions don't pan out: "We balance probabilities and choose the most likely. It is the scientific use of the imagination" (The Hound of the Baskervilles). The hope is that everything matches; but sometimes we must look for an alternative. That's happened rarely in my research, but it's happened.
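
A minimal sketch of that "sample as model" step, with invented lifespan numbers standing in for a real survey of Greeks:

import numpy as np

sample = np.array([71, 68, 90, 77, 63, 82, 74, 69, 85, 58])   # hypothetical lifespans, in years
mean = sample.mean()
std = sample.std(ddof=1)                     # sample standard deviation
sem = std / np.sqrt(sample.size)             # standard error of the mean
print(f"estimated average lifespan: {mean:.1f} +/- {2 * sem:.1f} years (rough 95% interval)")
# The gold standard is prediction: the model earns trust only when it forecasts
# the lifespans of Greeks who were not in the original sample.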

You are now at the conclusion of the scientific process. In fiction, this is where the criminal is led away in chains (or not, as with "The Woman," "The Adventure of the Yellow Face," or "The Blue Carbuncle," where Holmes lets the criminal go free — "It's Christmas"). For most research the conclusion includes writing a good research paper: "Nothing clears up a case so much as stating it to another person" (Memoirs). For a PhD, this is followed by the search for a good job. For a commercial researcher, it's a new product or product improvement. For the mad scientist, that conclusion is the goal: taking over the world and enslaving the population (or not; typically the scientist is thwarted by some detail!). But for the professor or professional research scientist, the goal is never quite reached; it's a stepping stone to a grant application to do further work, and from there to tenure. In the case of the Socrates mortality work, the scientist might ask for money to go from country to country, measuring life-spans to demonstrate that all philosophers are mortal. This isn't as pointless and self-serving as it seems. Follow-up work is easier than the first work since you've already got half of it done, and you sometimes find something interesting, e.g. about diet and life-span, or diseases, etc. I did some 70 papers when I was a professor, some on diet and lifespan.

One should avoid making some horribly bad logical conclusion at the end, by the way. It always seems to happen that the mad scientist is thwarted at the end; the greatest criminal masterminds are tripped up by some last-minute flaw. Similarly, the scientist must not make that last misstep. "One should always look for a possible alternative, and provide against it" (The Adventure of Black Peter). Just because you've demonstrated that iodine kills germs, and you know that germs cause disease, please don't conclude that drinking iodine will cure your disease. That's the sort of science mistake that was common in the middle ages, and that shows up far too often today. In the last steps, as in the first, follow the inductive and quantitative methods of Paracelsus to the end: look for numbers, and check how quantity and location affect things (not a Holmes quote). In the case of antiseptics, Paracelsus noticed that only external cleaning helped, and that the help was dose-sensitive.

As an example in the 20th century, don't just conclude that, because bullets kill, removing the bullets is a good idea. It is likely that the trauma and infection of removing the bullet is what killed Lincoln, Garfield, and McKinley. Theodore Roosevelt was shot too, but decided to leave his bullet where it was, noticing that many shot animals and soldiers lived for years with bullets in them; Roosevelt lived for more than six more years. Don't make these last-minute missteps: though it's logical to think that removing guns will reduce crime, the evidence does not support that. Don't let a leap of bad deduction at the end ruin a line of good science. "A few flies make the ointment rancid," said Solomon. Here's how to do statistics on data that's taken randomly.

Dr. Robert E. Buxbaum, scientist and Holmes fan wrote this, Sept 2, 2013. My thanks to Lou Manzione, a friend from college and grad school, who suggested I reread all of Holmes early in my PhD work, and to Wikiquote, a wonderful site where I found the Holmes quotes; the Solomon quote I knew, and the others I made up.

Religion vs Philosophy joke

“A philosopher is a blind man in a dark room looking for a black cat that isn’t there. A theologian is the man who finds it.” ~ H. L. Mencken

The distinction drawn in this joke is more sad than funny, I would say. It speaks to the inability of people to grapple with the big questions of their lives in any really rational way. We'd like to be able to communicate directly with God, and have him speak back, but we can't quite, and at some level we'd be too small for the interaction. We'd like to be able to stop evil with our religion, by holding up a cross, say, or by squirting holy water, but we can't. I suspect it's better that way, but sad. We'd like to know how and why the universe came to be, and what happens after death, but our best rational efforts are helpless. All of this is as it should be, says the philosopher, and he's right, but it's sad that it is and that he is. And then the theologian (rabbi, priest, imam) says he's got all the answers and all the powers too. It's too sad for words.

The philosopher in this joke is (I imagine) a PhD scientist, like me. While rational thought is great, and a PhD scientist can actually predict quite a lot of what will happen in some cases, we have no real clue as to why things happen — except in terms of other things that we can't explain: forces, gravity, electrons. It seems clear that the answer to the big-issue questions cannot be found in science or rational philosophy. Nor can science deal well with one-time events like the creation of the universe, or with unmeasurable items like where the apparent zero-point energy of quantum mechanics comes from. Untestable, one-time events are the basis of religion and not science: science is the opposite of religion.

We thus turn to the theologian. In a sense, he has the answer: it's God, Jesus, Jihad, prayer… Perhaps these words mean the same thing, or perhaps something different. A theologian can talk about this for hours. He has all the answers, but when he's done, he's left them as incomprehensible as before. Likely he is as confused as we are, but he doesn't know it, or show it. While something like God does seem to underlie the concept of time, or creation (the big bang), a one-word answer like "God" isn't really an answer. Even though there appears to be a God, God doesn't seem contained within the word — he's not there. And calling "God" doesn't give us the power we'd want: it does not save the drowning, or cure disease.

Though the theologian will likely tell you miracle stories, and show you a pretty picture (long-haired Jesus, seated Zeus, or a dancing woman with the head of an elephant), that's God and it isn't. The reason people believe the theologian is optimism: we hope he knows, though we know he doesn't. Besides, the theologian has a costume and an audience, and that helps. He keeps on talking till he wears the audience down. Eventually we believe he sees the black cat in the dark room called God. Eventually we don't care that he can't do anything on the physical plane. Theologians work in pairs to increase their believability: one tells you the other is much smarter and holier than you; the other one tells you the same about the first. Eventually, you believe them both — or at least you believe you are stupider and more evil than they are.

A wise and good philosopher or theologian is very hard to find. He doesn’t talk too much, and instead lets his fine example do the teaching. He does charity and justice (Gen. 18:18) and makes good lemonade from the lemons life gives him. He will admit that he doesn’t really know which set of words and bows actually open up God’s warehouse (or if any are particularly effective) “God speaks within a cloud” (Ex. 40:34, etc.); “[His] thoughts are not our thoughts,” (Is. 55:8, etc.). “No man can see my face and live” (Ex. 33:20).

What percentage of leaders are like this? “In a thousand, I have found one leader of men”, says Solomon (Eccles 7:28). “The other 999 follow after the women” (Groucho Marx).

My hope with this blog post is not to diminish the good of rabbis, priests, or other theologians, but rather that you will not finish reading the post thinking you are stupid or evil for not understanding your theologian’s many words. Also, I can hope that you will seek justice, help the downtrodden, and make yourself into something of value. Then again, you might be tempted to run off to a bad theologian — to someone who will encourage you to pray long and hard, and who will get you to pay him for a picture of God that only he can provide — that is, for his special picture of the black cat, in the dark room, that can never be photographed.

Robert E. Buxbaum; Amateur philosopher, and maker of a good glass of lemonade.

Science is the Opposite of Religion

Some years ago, my daughter came back from religious school and asked for a definition of science. I told her that science was the opposite of religion. I didn't mean to insult religion or science; the big bang, for one thing, strongly suggests there is a God-creator, and quantum mechanics suggests (to me) that there is a God-maintainer. But religion deals with other things beyond a belief in God, and I meant to point out that every basic of how science looks at things finds its opposite in religion.

Science is based on reproducibility and lack of meaning: if you do the same experiment over and over, you'll always get the same result as you did before, and the same result as anyone else — when the results are measured to some good statistical norm. The meaning of the observation? That's a meaningless question. Religion is based on the centrality of drawing meaning, and on the centrality of non-reproducible, one-time events: creation, the exodus from Egypt, the resurrection of Jesus, the birth of Zeus, etc. A religious believer is one who changes his or her life based on the lesson of these; to him, a non-believer is one who draws no meaning, or who needs reproducible events.

Science also requires that anyone will get the same result if they do the same process. Thus, chemistry-class results don't depend on the age, sex, or election of the students. Any student who mixes the prescribed two chemicals and heats to a certain temperature is expected to get the same result. The same applies to measures of the size of the universe, or its angular momentum or age. In religion, it is fundamentally important what sex you are, how old you are, who your parents were, and what you are thinking at the time. If the right person says "hoc est corpus" over wine and wafers, they change; if not, they do not. If the right person opens the door to heaven, or closes it, it matters in religion.

A main aspect of all religion is prayer: the idea that what you are thinking or saying changes things on high and here below. In science, we only consider experiments where the words said over the experiment have no effect. Another aspect of religion is tchuvah (regret, repentance); the idea is that thoughts can change the effect of actions, at least retroactively. Science tends to ignore repentance, because it lacks the ability to measure things that work backwards in time, and because the scientific instruments we currently have do not take measurements on the soul to see if the repentance had any effect. Basically, the science-universe is populated only with those things which can be measured or reproducibly affected, and that pretty much excludes the soul. That the soul does not exist in the science universe doesn't mean it doesn't exist.

Another main aspect of religion is morality: you're supposed to do the right thing. Morality varies from one religion to another, and you may think the other fellow's religion has a warped morality, but at least there is one in all religions. In science, for better or worse, there is no apparent morality, either to man or to the universe. Based on science, the universe will end, either with a bang or a whimper, and in that void of an end it would seem that killing a mouse is about as important as killing a person. No religion I know of sees the universe ending in either cold or hot death; consistent with this, they all see murder as a sin against God. This difference is a big plus for religion, IMHO. That man sees murder as a true evil is either a sign that religion is true, or that it isn't, depending on the value you put on life. Another example of the moral divide: scientists, especially academics, tend to be elitists. Their morality, such as it is, values great minds and great projects over the humble and stupid. Classical religion sees the opposite, promoting the elevation of the poor, weak, and humble. There is no fundamental way to tell which one is right, and I tend to think that both are right in their own, mirror-image universes.

It is now worthwhile to consider what each universe sees as wisdom. An explanation in the universe of science has everything to do with utility, and not any internal sense of having understood as such: I understand something only to the extent that I can predict that thing or do something based on the knowledge. In religion, the motivation for all activity is always just understanding — typically of God, on a bone-deep level. This difference shows up very clearly in dealing with quantum mechanics. To a scientist, the quantum world is fundamentally a departure from religion because it is basically non-understandable but very useful. Religion totally ignores quantum mechanics for the same reason: it's non-understandable, but very precise and useful. Anything you can't understand is meaningless to them (literally), and useful is mostly defined in terms of building the particular religion; I think this is a mistake on many levels. I note that looking for disproof is the glory-work of all science development, but the devil's work of every religion. A religious leader will grab on to statistical findings that suggest that his type of prayer cures people, but will always reject disproof, e.g. evidence that someone else's prayers work better, or that his prayer does nothing at all. Each religion is thus at war with the others, each trying to build belief while not removing it. Science is the opposite. Religion starts with the answer and accepts any support it can; fundamental change is considered a bad thing in religion. The opposite is so with science: disproof is considered "progress," and change is good.

These are not minor aspects of science and religion, by the way, but these are the fundamental basics of each, as best I can tell. History, politics, and psychology seem to be border-line areas, somewhere between science and religion. The differences do not reflect a lack in these fields, but just a recognition that each works according to its own logic and universe.

My hope in life is to combine science and religion to the extent possible, but I find that supporting science in any form presents difficulties when I have to speak to others in the religious community, my daughter's teachers among them. As an example of the problems that come up: my sense is that the big bang is a fine proof of creation and should be welcomed by all (most) religious people. I think it's a sign that there is a creator when science says everything came from nothing, 14,000,000,000 years ago. Sorry to say, the religious leaders I've met reject the big bang, and claim you can't believe in anything that happened 14,000,000,000 years ago. So long as science shows no evidence of a bearded observer at the center, they are not interested. Scientists, too, have trouble with the bang, I find. It's a one-time event that they can't quite explain away (Stephen Hawking keeps trying). The only sane approach I've found is to keep blogging, and otherwise leave each to its area. There seems to be little reason to expect communal agreement.

by Robert E. Buxbaum, Apr. 7, 2013. For some further thoughts, see here.

For parents of a young scientist: math

It is not uncommon for parents to ask my advice or help with their child; someone they consider to be a young scientist, or at least a potential young scientist. My main advice is math.

Most often the tyke is 5 to 8 years old and has an interest in weather, chemistry, or how things work. That's a good age, about the age the science bug struck me, and it's a good age to begin to introduce the power of math. Math isn't the total answer, by the way; if your child is interested in weather, for example, you'll need to get books on weather, and you'll want to buy a weather-science kit at your local smart-toy store (look for one with a small wet-bulb and dry-bulb thermometer setup so that you'll be able to discuss humidity in some modest way: wet-bulb temperatures are lower than dry-bulb, with a difference that is larger the lower the humidity; it's zero at 100% humidity). But math makes the key difference between the interest blooming into science or having it wilt, or worse. Math is the language of science, and without it there is no way that your child will understand the better books, no way that he or she will be able to talk to others who are interested, and the interest can even turn into a phobia (that's what happens when your child has something to express, but can't speak about it in any real way).

Math takes science out of the range of religion and mythology, too. If you're stuck with the use of words, you'll think that the explanations in science books resemble the stories of the Greek gods: you either accept them or you don't. With math you see that they are testable, and that the versions in the book are generally simplified approximations of some more complex description. You also get to see that there are many different-looking descriptions that will fit the same phenomena. Some will be mathematically identical, and others will be quite different, but all are testable, as the Greek myths are not.

What math to teach depends on your child's level and interests. If the child is young, have him or her count in twos, or fives, or tens, etc. Have him or her learn to spot patterns, like the fact that every other number divisible by 5 ends in zero, or that the sum of the digits of every number that's divisible by three is itself divisible by three. If the child is a little older, show him or her geometry, or prime numbers, or squares and cubes. Ask your child to figure out the sum of all the numbers from 1 to 100, or to estimate the square root of some numbers. Ask why the area of a circle is πr² while the circumference is 2πr: why do both contain the same, odd factor, π = 3.1415926535…? All these games and ideas will give your child a language to use in discussing science.
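
If the child likes computers too, the same games can be played in a few lines of code. Here is one possible version (Python, with arbitrary ranges chosen for the checks):

# Gauss's trick: the sum 1 + 2 + ... + 100.
total = sum(range(1, 101))
print("1 + 2 + ... + 100 =", total)                 # 5050, which is also 100*101/2

# The digit-sum rule: every multiple of 3 has a digit sum that is divisible by 3.
def digit_sum(n):
    return sum(int(d) for d in str(n))
print(all(digit_sum(n) % 3 == 0 for n in range(3, 301, 3)))   # True for 3, 6, ..., 300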

If your child is old enough to read, I'd definitely suggest you buy a few books with nice pictures and practical examples. I grew up with the Giant Golden Book of Mathematics by Irving Adler, but I've seen and been impressed with several other nice books, and with the entire Golden Book series. Make regular trips to the library, and point your child to an appropriate section, but don't force the child to take science books. Forcing your child will kill any natural interest he or she has. Besides, having other interests is a sign of normality; even the biggest scientist will sometimes want to read something else (sports, music, art, etc.). Many scientists drew (da Vinci, Feynman) or played the violin (Einstein). Let your child grow at his or her own pace and in his or her own direction. (I liked the theater, including opera, and liked philosophy.)

Now, back to the science kits and toys. Get a few basic ones, and let your child play: these are toys, not work. I liked chemistry, and a chemistry set was perhaps the best toy I ever got. Another set I liked was an Erector set (Gilbert). Get good sets that they pick out, but don't be disappointed if they don't do all the experiments, or any of them. They may not be interested in this group; just move on. I was not interested in microscopy, fish, or animals, for example. And don't be bothered if interests change. It's common to start out interested in dinosaurs and then to change to an interest in other things. Don't push an old interest, or even an active new interest: enough parental pushing will kill any interest, and that's sad. As Solomon the wise said, the fire is more often extinguished by too much fuel than by too little. You do need to help with math, though; without that, no real progress will be possible.

Oh, one more thing: don't be disappointed if your child isn't interested in science; most kids aren't interested in science as such, but rather in something science-like, like the internet, or economics, or games, or how things work. These areas are all great too, and there is a lot more room for your child to find a good job or a scholarship based on expertise in these areas. Any math he or she learns is certain to help with all of these pursuits, and with whatever other science-like direction he or she takes. Good luck. Robert Buxbaum. (Economics isn't science, not because of a lack of math, but because it's not reproducible: you can't re-run the Great Depression without FDR's stimulus, or without WWII.)

How much wood could a woodchuck chuck?

How much wood could a woodchuck chuck, if a woodchuck could chuck wood? It's a classic question with a simple answer. The woodchuck, also known as a groundhog or marmot, is a close relative of the beaver: it looks roughly the same, but is about 1/5 the weight (10 pounds versus 50 pounds), and beavers do chuck wood, using their teeth to pile it onto their dams. I'll call the tooth-piling process chucking, since that's what we would call it if a person did it by hand.

Beaver Dam

A beaver dam. From the size of this dam, and the rate of construction (one night) you can figure out how much wood a beaver could chuck, and from that how much a woodchuck could.

A reasonable assumption is that a woodchuck would chuck about 1/5 as much wood as a beaver does. You might think this isn't very much wood — and one researcher claimed it would be less than 1/2 lb. — but he's wrong. A beaver is able to build a dam like the one shown in a single night. From the size of the dam and the speed of building, you can estimate that the beaver chucked about 1000 lbs of wood onto the pile per night (beavers work at night). To figure out how much wood a woodchuck would chuck, divide this rate by 5. Based on this, I'd estimate that a woodchuck would chuck some 200 lbs per night, if it chose to.

Woodchucks don't chuck wood, as the question implies. Unlike beavers, they do not build wooden dams or lodges. Instead they live in burrows in the ground. Also, woodchuck teeth are not so useful for the job. Woodchucks do kick up a lot of dirt digging a burrow, as much as 700 lb/day of dirt, but the question implies that this activity should not be counted as chucking. Well, now you know: it's 200 lbs/night.

Robert Buxbaum. This post was revised January 30, 2020. My original estimate, from January 2013, was half the value here. I'd come to believe that woodchucks/groundhogs are 1/10 the size of a beaver, so I'd estimated 100 lb/night.