Vaccine Safety: A Debate

As I said on Christmas Eve, thanks to Web comments and blogs, you can now hear many voices in a debate in a way you never could before. The New York Times has just added a vote-like recommendation feature to help sift through a large number of comments. (I hope they add a “sort by” feature to make the most popular comments easy to find.) People you could not usually hear from turn out to have enormously interesting and helpful things to say — again and again and again.

A new example is the debate over vaccine safety. A 2007 book called The Vaccine Book: Making the Right Decision For Your Child by Robert Sears took a middle ground: it offers a way for parents to space out vaccines. This seems to have offended Dr. Paul Offit, a vaccine inventor. With Charlotte Moser he wrote a critique (may be gated) of the book, just published in Pediatrics, that is actually an attack on it. Would the critique be full of well-reasoned arguments? New facts? Nope. It reminds me of my surgeon claiming that a certain surgery was beneficial and, when questioned, saying that of course evidence supported her claim but never producing any evidence. However, overstatement from doctors is nothing new. What’s new is the comments section on the critique (may be gated), which contains several fascinating observations.

From John Trainer, a family doctor:

[For Offit and Moser] to castigate [Sears] for offering information to the laity is to fall prey to the same mindset as the early church. By controlling access to the Bible, the leaders of the church exerted control over all.

From Corrinne Zoli, a Syracuse University researcher:

The vaccine debate plays out against a backdrop not only of facts vs. falsehoods, refereed vs. non- mainstream journals and studies, science vs. speculation, a complicated enough arena, but of conflicting cultural ’facts,’ which may be equally important as the science. For instance, parental concerns over the safe cumulative levels of thimerosal (ethyl mercury) in vaccines were unwittingly validated by the American Academy of Pediatrics (AAP) and the U.S. Public Health Service and others’ recommending their removal (which largely occurred in 2001)—even while these organizations were steadfast in public declarations of no causal link between the preservative and various neurotoxic or neuropathological ill-effects. What did parents learn from this decision? Aside from the fact that the preservative had been long removed in many countries of the world (i.e., the UK and even Russia), or that infants may have received doses exceeding EPA recommendations, they learned that organizations designed to serve the public trust were contradictory in their words and deeds. . . . The larger ’lesson learned’ by parents was to fear the decision making processes of medical and public health institutions and to become critically engaged with them using whatever tools at one’s disposal (i.e., online information, reading scientific studies, discussion groups, etc.).

Fifty years ago, when doctors wouldn’t justify their claims, you couldn’t do much about it. Few had access to medical libraries or the time to visit them. Now there is an enormous amount you can do. Water will simply flow around the rocks, such as Dr. Offit, who get in the way of better decisions.

This sort of open discussion is so helpful it should be standard scientific practice: let anyone comment on your research, and let anyone read the comments.

Kafkaesque Research Regulation

From the BMJ:

The local research ethics subcommittee, which comprised a pharmacist and layman with limited clinical experience, had concerns about possible drug interactions between amiloride and other drugs being taken by the study participants and hyperkalaemia and requested resubmission. Although we pointed out that the pilot was identical to one limb of the amendment that it had already approved, in September 2007 the full committee rejected the application for the pilot to be considered as a study amendment. We therefore had to make new submissions to the local ethics committee, Medicines and Healthcare Products Regulatory Agency (MHRA), pharmacy, insurance company, research and development department, and the local (Wellcome Trust) clinical research facility.

In Spain it takes years to get approval. By the time you get approval, someone else has published the study you wanted to do. A nightmarish research environment is one more reason that persons with health problems should do their own research: try to find solutions themselves. I started long-term self-experimentation because I knew that conventional sleep research would never — at least, not in my lifetime — help me understand why I often woke up too early. A common problem, easy to measure — but for conventional sleep research, nearly impossible to crack.

Can it get worse? Yes, in Russia.

A Statistics Package in the News

I use R, the open-source version of S, several times/day. More often than I use Word. It works far better than S — fewer bugs, much cheaper (R is free) — and S worked a lot better than what it replaced (STATGRAPHICS). I was pleased to see a NY Times article about it:

R has also quickly found a following because statisticians, engineers and scientists without computer programming skills find it easy to use.

“Easy to use” — haha! Non-statisticians and non-engineers don’t find it easy to use, in my experience, but it’s true that I found it easy to use. “R has a steep learning curve” some people say, twisting the meaning of “steep learning curve” (which should mean fast learning, since that’s what a steep learning curve describes).

The popularity of R at universities could threaten SAS Institute, the privately held business software company that specializes in data analysis software. SAS, with more than $2 billion in annual revenue, has been the preferred tool of scholars and corporate managers. . . . SAS says it has noticed R’s rising popularity at universities, despite educational discounts on its own software, but it dismisses the technology as being of interest to a limited set of people working on very hard tasks. “I think it addresses a niche market for high-end data analysts that want free, readily available code,” said Anne H. Milley, director of technology product marketing at SAS. She adds, “We have customers who build engines for aircraft. I am happy they are not using freeware when I get on a jet.”

Ah, “freeware.” You may remember when “Made in Japan” was derogatory. Most psychology departments, including Berkeley’s, use SPSS (Statistical Package for the Social Sciences). Like SAS and its ten feet of manuals, it is horrible. One of my students wanted to make a scatterplot of her data. She went to the psych department’s statistics consultant (a psych grad student who had taken courses in the statistics department). The statistics consultant didn’t know how to do this! A scatterplot! It’s like Vladimir Nabokov’s observation at Cornell and other schools of language professors who couldn’t speak the language they taught. Nothing But the Best describes a Juilliard composition teacher who couldn’t read music. To be a scientist and not be able to analyze your own data is pretty much the same thing. With R, making a scatterplot is easy.

To me, the value of R is that it makes high-quality data analysis available to everyone — something very new in the history of mankind. R makes self-experimentation easier because it makes data analysis easier and allows you to learn more from the data you have collected (e.g., make better graphs). I also use it for data collection — measuring how well my brain is working.

Via Andrew Gelman.

The Missing Heritability of Height

In a special section of Nature on personal genomics, Brendan Maher writes:

This year, three groups of researchers scoured the genomes of huge populations (the largest study looked at more than 30,000 people) for genetic variants associated with the height differences. More than 40 turned up.

But there was a problem: the variants had tiny effects. Altogether, they accounted for little more than 5% of height’s heritability — just 6 centimetres by the calculations above. Even though these genome-wide association studies (GWAS) turned up dozens of variants, they did “very little of the prediction that you [can] do just by asking people how tall their parents are”, says Joel Hirschhorn at the Broad Institute in Cambridge, Massachusetts, who led one of the studies. . . .

There could be scarier and more intractable reasons for unaccounted-for heritability that are not even being discussed. “It’s a possibility that there’s something we just don’t fundamentally understand,” Kruglyak says. “That it’s so different from what we’re thinking about that we’re not thinking about it yet.”

Still the mystery continues to draw its sleuths, for Kruglyak as for many other basic-research scientists. “You have this clear, tangible phenomenon in which children resemble their parents,” he says. “Despite what students get told in elementary-school science, we just don’t know how that works.”

I don’t think it’s so mysterious. My self-experimentation led me again and again to find unsuspected environmental causes for various problems. I believe the answer is this: The heritability estimates were overestimates. As one researcher put it, “Heritability estimates are basically what clusters in families, and environment clusters in families.” Variations in environment make far more difference than variation in genes.

What the researchers “don’t fundamentally understand,” I believe, is their own tendency toward religious thinking — the tendency, shared by all of us, to believe what we’re told regardless of the (lack of) evidence for it. The notion that genes make a big difference in practice is one of those beliefs, repeated endlessly by genetics researchers (James Watson is fond of repeating it), that are supported by poor evidence at best. Obesity, it should be obvious, is an environmental disease if there ever was one. Yet Jeffrey Friedman, a researcher at Rockefeller University, is studying the genetic basis of obesity.

Thanks to Dave Lull.

Corruption of Doctors by Drug Companies

Several books about this have appeared recently and are reviewed by Marcia Angell here. It’s a good review, especially a good summary of the books, but I was really surprised by this:

Members of medical school faculties who conduct clinical trials should not accept any payments from drug companies except research support, and that support should have no strings attached, including control by drug companies over the design, interpretation, and publication of research results.

She expects a researcher who depends on drug companies for research support to be honest? Why? If you don’t get favorable results your grant won’t be renewed. Under this system it will be survival of the most corrupt. A reformer proposed this.

I think it’s a lot like too much humanitarian aid. Supply free milk to a needy area for too long and you wipe out the local dairy industry. Judging from this stunning proposal, the drug companies have wiped out whole medical schools. The doctors who work in them are no longer capable of doing independent research. This is worse than corruption, it’s enfeeblement.

Voodoo Correlations in Social Neuroscience

Few scientific papers arouse emotion in reviewers and editors, but this one — by my friend and collaborator Hal Pashler and his colleagues — must have, because they allowed the use of “voodoo” in the title instead of “spurious.” Here is part of the abstract:

The newly emerging field of Social Neuroscience has drawn much attention in recent years, with high-profile studies frequently reporting extremely high (e.g., >.8) correlations between behavioral and self-report measures of personality or emotion and measures of brain activation obtained using fMRI. We show that these correlations often exceed what is statistically possible . . . Social-neuroscience method sections rarely contain sufficient detail to ascertain how these correlations were obtained. We surveyed authors of 54 articles that reported findings of this kind to determine the details of their analyses. More than half acknowledged using a strategy that computes separate correlations for individual voxels, and reports means of just the subset of voxels exceeding chosen thresholds. We show how this non-independent analysis grossly inflates correlations, while yielding reassuring-looking scattergrams. This analysis technique was used to obtain the vast majority of the implausibly high correlations in our survey sample.

The papers shown to be misleading appeared in such journals as Science and Nature.

A Book About Scientific Failure

Failure: the last taboo subject. I loved Selling Ben Cheever, a book about a series of low-level service jobs that Ben Cheever took after he left Reader’s Digest and couldn’t sell his third novel. In the introduction, Cheever noted that no one wanted to talk to him about what it was like to lose a job and have to start over. How Starbucks Saved My Life by Michael Gates Gill is another excellent book along those lines. (Curious that both authors are the sons of well-known writers, John Cheever and Brendan Gill.)

Now comes a scientific third-person account of failure: Sun in a Bottle by Charles Seife, about attempts to produce nuclear fusion in the lab.

Seife’s message: fusion scientists should just cut bait. By analogy to your closet, if you haven’t worn it, throw it out. If you’ve been trying it for the last half-century and it hasn’t worked, then enough already.

According to its subtitle, the book covers “the science of wishful thinking.” Was it wishful thinking or avoidance of the f-word? I will have to read the book to find out; it sounds fascinating.

When is Science Helpful?

Last spring, fourteen Chinese students from elite universities — seven from Tsinghua — traveled to several elite American universities, including Stanford, Harvard, and Yale, under the auspices of a program called IMUSE to discuss sensitive Chinese social topics, such as Tibet and censorship. One of the main events was a series of panel discussions. The American students struck the Chinese students as admirably pragmatic but also, in some cases, “ignorant and arrogant”. In response to American students’ criticism, one Chinese student said this: “I eat a lot of rice. My ancestors ate a lot of rice. If you tell me to eat a lot of bread, I don’t know what to eat. I don’t know how to get a healthy diet.”

When I heard that comment, I said it was exactly right. Nutrition is perhaps 75% science, 25% religion. (The discovery of vitamins = science. Thinking the obesity epidemic is due to lack of exercise = religion.) The science part is helpful; the religious part is useless or, if taken seriously, harmful. Nutrition science is too uncertain to choose over the tried and true. Physics is almost 100% science. The content of physics textbooks has been used to build lots of useful things: buildings, bridges, computers. Economics and political science are perhaps 25% science — too little to rely on their recommendations, which was the Chinese student’s point. Better to rely on tradition. No one tells the American students any of this, however, and they believe far too much of what their professors tell them. (So much for all that teaching how “to think and to reason.”) The result is they give foolish advice.

At Edge, four American experts tried to answer the question “Can science help solve the economic crisis?” Here is a bit of what they said:

Two basic assumptions must guide any thinking as we undertake these tasks. First, economies, financial institutions and markets cannot function without a context of rules and laws, which regulate them. . . . Second, mathematics, physics and computers already play a major and necessary role in our economic affairs.

They apparently believe such statements are helpful. Nassim Taleb responded:

I spent close to 21 years in finance facing “scientists” in some field who show up in finance and economics, realize that economists and practitioners are not as smart as they are (they are not as “rigorous” and did not score as high in math), then think they can figure it all out. Nice, commendable impulse, but I blame the banking crisis (and other blowups) on such “scientism”. . . . Meanwhile the most robust understanding is present among practitioners who do not have the instinct to reduce ambiguity and uncertainty that scientists have. . . . Please, please, enough of this “science”. We have enough problems without you.

The Chinese student and Taleb are both saying that Big Ideas from elite American universities do not automatically improve on what people elsewhere have done for a long time. Weston Price and Jane Jacobs said the same thing. Somehow elite universities fail to teach this important lesson — perhaps because their professors haven’t learned it.

Thanks to Dave Lull.

Unfortunate Obituaries: The Case of David Freedman

One of my colleagues at Berkeley didn’t return library books. He kept them in his office, as if he owned them. He didn’t pay bills, either: He stuck them in his desk drawer. He was smart and interesting, but after he failed to show up at a lunch date — no explanation, no apology — I stopped having lunch with him. He died several years ago. At his memorial service, at the Berkeley Faculty Club, one of the speakers mentioned his non-return of library books and non-payment of bills as if they were amusing eccentricities! I’m sure they were signs of a bigger problem. He did no research, no scholarly work of any sort. Talking about science with him — a Berkeley professor in a science department — was like talking to a non-scientist.

David Freedman, a Berkeley statistics professor who died recently, was more influential. He is best known for a popular introductory textbook. The work of his I found most interesting was his commentary on census adjustment: He was against adjusting the census to remove bias caused by undercount. This was only slightly less ridiculous than not returning library books — and far more harmful, because his arguments were used by Republicans to block census adjustment. (The undercounted tended to vote Democratic.) The parallel with my delinquent colleague lies in the very first line of Freedman’s obituary: He “fought for three decades to keep the United States census on a firm statistical foundation.” Please. A Berkeley statistics professor, I have no idea who, must have written or approved that statement!

The obituary elaborates on this supposed contribution:

“The census turns out to be remarkably good, despite the generally bad press reviews,” Freedman and Wachter wrote in a 2001 paper published in the journal Society. “Statistical adjustment is unlikely to improve the accuracy, because adjustment can easily put in more error than it takes out.”

There are two kinds of error: variance and bias. The adjustment would surely increase variance and almost surely decrease bias. The quoted comments ignore this. They are a modern Let Them Eat Cake.
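To make the variance-bias tradeoff concrete, here is a toy Python simulation; every number in it (population size, undercount rate, survey noise) is invented for illustration, not taken from any census. The raw count carries pure bias; adjustment trades most of that bias for a much smaller variance, so its total error (mean squared error, which is bias squared plus variance) can easily be lower, contrary to the quoted claim.

```python
import random
import statistics

random.seed(2)

# Illustrative numbers only -- not actual census figures.
true_pop = 1_000_000
undercount = 0.02                        # raw count misses 2%: pure bias
raw_count = true_pop * (1 - undercount)  # biased, but no sampling noise

# Adjustment estimates the undercount from a post-enumeration survey.
# The survey is roughly unbiased but noisy, so adjusting trades
# bias for variance.
survey_sd = 0.005  # assumed sd of the estimated undercount rate

adj_errors = []
for _ in range(10_000):
    est = random.gauss(undercount, survey_sd)
    adjusted = raw_count / (1 - est)
    adj_errors.append(adjusted - true_pop)

raw_mse = (raw_count - true_pop) ** 2                 # bias^2, zero variance
adj_mse = statistics.mean(e * e for e in adj_errors)  # variance + tiny bias^2

print(f"raw count MSE:      {raw_mse:.3e}")
print(f"adjusted count MSE: {adj_mse:.3e}")
```

Under these assumptions the adjusted count wins by an order of magnitude; whether adjustment “puts in more error than it takes out” depends on how noisy the survey is relative to the size of the undercount, which is exactly the comparison the quoted comments skip.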

Few people hoard library books, but Freedman’s misbehavior is common. I blogged earlier about a blue-ribbon nutrition committee that ignored evidence that didn’t come from a double-blind trial. Late in his career, Freedman spent a great deal of time criticizing other people’s work. Maybe his critiques did some good but I thought they were obvious (the assumptions of the statistical method weren’t clearly satisfied — who knew?) and that it was lazy the way he would merely show that the criticized work (e.g., earthquake prediction) fell short of perfection and fail to show how it related to other work in its field — whether it was an improvement or not. As they say, he could see the cost of everything and the value of nothing. That he felt comfortable spending most of his time doing this, and his obituary would praise it (“the skeptical conscience of statistics”), says something highly unflattering about modern scientific culture.

For reasonable comments about census adjustment, see Eriksen, Eugene P., Kadane, Joseph B., and Tukey, John W. (1989). Adjusting the 1980 census of population and housing. JASA, 84, 927-943.

Walking is to Driving as Idea Generation is to Idea Testing

Mary Soderstrom, a Montreal writer, has written a recently-published book called The Walkable City. From a blurb:

The idea that a city might not be walkable would never occur to anyone who lived before 1800. Over the past 200 years there have been dramatic changes to our cities.

Over the same period there were also dramatic changes in the practice of science. Maybe the biggest change was the introduction of significance tests and associated logic. Just as cars took over cities, so did significance tests take over statistics textbooks. Cities built for cars made it hard to walk; statistics textbooks full of significance tests made it hard to teach how to generate ideas.

How to generate plausible new ideas — ideas worth testing — is pretty much a mystery to most scientists, as far as I can tell. The idea generation:idea testing :: walking:driving analogy provides a little guidance, and at least makes it clear that something is missing from today’s scientific education. Walking is slower than driving; idea generation is slower than idea testing. Walking is more exploratory than driving; idea generation is more exploratory than idea testing. Walking is much cheaper than driving, but it may take a lot of walking to discover somewhere you want to drive; techniques for idea generation should be very cheap, because it may take a lot of use of them to discover an idea worth testing. Walking is “softer” than driving; perhaps idea generation will never be as mathematical as idea testing. Walking is far more flexible than driving; idea generation methods must be far more flexible than idea testing methods. It is hard to drive somewhere that no one has ever driven before, but it is easy, or at least much easier, to walk somewhere new. This should suggest to a scientist that if all you know how to do is test ideas, it will be hard for you to innovate.

The way science is supported in America is horribly biased against idea generation — grant proposals must be all about idea testing. I don’t know if the people who run that system have any idea how unbalanced and unhealthy it is.