Smoking and Cancer

In his interview with me about The Truth in Small Doses (Part 1, Part 2), Clifton Leaf praised Racing to the Beginning of the Road (1996) by Robert Weinberg. “A masterful job . . . the single best book on cancer,” wrote Leaf. In an email, he continued:

In Chapter 3 of “Racing to the Beginning of the Road,” Weinberg goes through much of the early epidemiological work linking tobacco to cancer (John Hill, Percivall Pott, Katsusaburo Yamagiwa, Richard Doll), but then focuses on the story of Ernst Wynder, who just happens to be one of Weinberg’s cousins. [As a medical student, Wynder found a strong correlation between smoking and lung cancer.] Building on his own prior epidemiological work, and that of many others, Wynder actually built an experimental “smoking machine” at the Sloan-Kettering Institute in New York in the early 1950s. The machine collected the tar from cigarette smoke (and later, the condensate from the smoke) and Wynder used those to produce skin cancers in mice and rabbits. But the amazing part of the story is what happened later…with Wynder’s bosses at Sloan-Kettering and with one of the legendary figures in cancer research, Clarence Cook Little. I don’t want to give the story away. (If you have the time, you really would love reading the book.) But it’s one of the most damning stories of scientific interference I’ve read.

Wynder met a lot of opposition. His superiors at Sloan-Kettering required that his papers be okayed by his boss, who disagreed with his conclusions. Clarence Cook Little, according to Weinberg, made the following arguments:

The greater rates of lung cancer in smokers only gave evidence of a correlation, but hardly proved a causal connection. One’s credulity had to be strained to accept the ability of a single agent [he means smoking] to cause lung cancer along with so many other diseases including bronchitis, emphysema, coronary artery disease, and a variety of cancers of the mouth, pharynx, esophagus, bladder and kidney. After all, many of these diseases existed long before people started smoking.

A little masterpiece of foolishness . . . and more reason to never ever say “correlation does not equal causation.” Little was at one point President of the University of Michigan. Later he worked for the tobacco industry. It wasn’t just Little. Weinberg says that Wynder’s colleagues complained about his “statistical analyses and experimental protocols, which they found to be less than rigorous.”

Weinberg says little about epidemiology in the rest of the book — which, to be fair, is about the laboratory study of cancer. At the very end of the book, he writes:

We learned much about how cancer begins; it is no longer a mystery. We will surely learn more . . . but the major answers already rest firmly in our hands. . . . No, we have still not found the cure. But after so long, we know where to look.

The claim that “we know where to look” is not supported by examples. And Weinberg says nothing about prevention.

Weinberg’s book reminded me of a new-music concert I attended at the Brooklyn Academy of Music. It was hard to listen to (non-melodic, etc.), like lots of new non-popular music. I didn’t enjoy it, but surely the composer did — I found this fascinating. How did it happen? I wondered. Weinberg describes a great deal of research that has so far produced little practical benefit. Weinberg, it seems, has managed to avoid being bothered by this — if he even notices it. How did this happen?

I don’t think it’s “bad” or wrong or undesirable to do science with no practical benefit, just as I don’t complain about “unlistenable” music. Plenty of “useless” science has ultimately proved useful, but the transition from useless to useful can take hundreds of years, which is why there must be “scaffolding,” sources of support other than practicality. This is why scientists use the word elegant so much. Their enjoyment of “elegance” is scaffolding. Long before “useless” science, there was “useless” decoration (and nowadays there is “unlistenable” music). Thorstein Veblen showed no sign of understanding that the “waste” he mocked made possible exploration of the unknown, which is necessary for progress. (By supporting artisans, such as the artisans who make decorations, we support their research.) What is undesirable is when someone (like Wynder) manages to do something useful and others foolishly criticize it, as Little and Wynder’s colleagues did.

Vitamin D3 Eliminated Colds and Improved Sleep When Taken in the Morning (Stories 24 and 25)

A year and a half ago, the father of a friend of mine started taking Vitamin D3, 5000 IU/day at around 7 am — soon after getting up. That his regimen is exactly what I’d recommend (good dose, good time of day) is a coincidence — he doesn’t read this blog. He used to get 3 or 4 terrible colds every year, year after year. Since he started the Vitamin D3, he hasn’t gotten any. “A huge lifestyle improvement,” said my friend. His dad studied engineering at Caltech and is a considerable skeptic about new this and that.

Much more recently his mother changed the time of day she took her usual dose of Vitamin D3. For years she had been taking half in the morning (with a calcium supplement) and half at night. Two weeks ago she started taking the whole dose in the morning. Immediately — the first night — her sleep improved. She used to wake up every 2 hours. Since taking the Vitamin D3 in the morning, she has been waking up only every 3-6 hours. A few days ago, my friend reports, she had “her best sleep in years”.

Sleep and immune function are linked in many ways beyond the fact that we sleep more when we’re sick. A molecule that promotes sleep turned out to be very close to a molecule that produces fever, for example. I found that when I did two things to improve my sleep (more standing, more morning light) I stopped getting colds. So it makes sense that a treatment that improves one (sleep or immune function) would also improve the other (immune function or sleep).

A few days ago I posted a link about a recent Vitamin D study that found no effect of Vitamin D on colds. The study completely neglected the importance of time of day by giving one large injection of Vitamin D (100,000 IU) per month at an unspecified time. I commented: “One more Vitamin D experiment that failed to have subjects take the Vitamin D early in the morning — the time it appears most likely to have a good effect.” These two stories, which I learned about after that post, support my comment. What’s interesting is that the researchers who do Vitamin D studies keep failing to take time of day into account and keep failing to find an effect and keep failing to figure out why. I have gathered 23 anecdotes that suggest that their studies are failing because they are failing to make sure their subjects take their Vitamin D early in the morning. Yet these researchers, if they resemble most medical researchers, disparage anecdotes. (Disparagement of anecdotes reaches its apotheosis in “evidence-based medicine”.) These are the same anecdotes that, I believe, contain the information they need to do a successful Vitamin D clinical trial. Could there be a serious problem with how Vitamin D researchers are trained to do research? A better approach would be to study anecdotes to get ideas about causation and then test those ideas. This isn’t complicated or hard to understand, but I haven’t heard of it being taught. If you understand this method, you treasure anecdotes rather than dismiss them (“anecdotal evidence”).

 

Assorted Links

  • Salem Comes to the National Institutes of Health. Dr. Herbert Needleman is harassed by the lead industry, with the help of two psychology professors.
  • Climate scientists “perpetuating rubbish”.
  • A humorous article in the BMJ that describes evidence-based medicine (EBM) as a religion. “Despite repeated denials by the high priests of EBM that they have founded a new religion, our report provides irrefutable proof that EBM is, indeed, a full-blown religious movement.” The article points out one unquestionable benefit of EBM — that some believers “demand that [the drug] industry divulge all of its secret evidence, instead of publishing only the evidence that favours its products.” Of course, you need not believe in EBM to want that. One of the responses to the article makes two of the criticisms of EBM I make: 1. Where is the evidence that EBM helps? 2. EBM stifles innovation.
  • What really happened to Dominique Strauss-Kahn? Great journalism by Edward Jay Epstein. This piece, like much of Epstein’s work, sheds a very harsh light on American mainstream media. They were made fools of by enemies of Strauss-Kahn. Epstein is a freelance journalist. He uncovered something enormously important that all major media outlets — NY Times, Washington Post, The New Yorker, ABC, NBC, CBS (which includes 60 Minutes), the AP, not to mention French news organizations, all with great resources — missed.

Testing Treatments: Nine Questions For the Authors

From this comment (thanks, Elizabeth Molin) I learned of a British book called Testing Treatments (pdf), whose second edition has just come out. Its goal is to make readers more sophisticated consumers of medical research, to help them distinguish “good” science from “bad” science. Ben Goldacre, the Bad Science columnist, fulsomely praises it (“I genuinely, truly, cannot recommend this awesome book highly enough for its clarity, depth, and humanity”). He wrote a foreword. The main text is by Imogen Evans (medical journalist), Hazel Thornton (writer), Iain Chalmers (medical researcher), and Paul Glasziou (medical researcher, editor of the Journal of Evidence-Based Medicine).

To me, as I’ve said, medical research is almost entirely bad. Almost all medical researchers accept two remarkable rules: (a) first, let them get sick and (b) no cheap remedies. These rules severely limit what is studied. In terms of useful progress, the price of these limits has been enormous: near total enfeeblement. For many years the Nobel Prize in Medicine has documented the continuing failure of medical researchers all over the world to make significant progress on all major health problems, including depression, heart disease, obesity, cancer, diabetes, stroke, and so on. It is consistent with their level of understanding that some people associated with medicine would write a book about how to do something (good science) the whole field manifestly can’t do. Testing Treatments isn’t just a fat person writing a book about how to lose weight; it’s the author failing to notice he’s fat.

In case the lesson of the Nobel Prizes isn’t clear, here are some questions for the authors:

1. Why no chapter on prevention research? To fail to discuss prevention at length, when prevention should be at least half of health care, is like writing a book using only half the letters of the alphabet. The authors appear unaware they have done so.

2. Why are practically all common medical treatments expensive?

3. Why should some data be ignored (“clear rules are followed, describing where to look for evidence, what evidence can be included”)? The “systematic reviews” that Goldacre praises here (p. 12) may ignore 95% of available data.

4. The book says: “Patients with life-threatening conditions can be desperate to try anything, including untested ‘treatments’. But it is far better for them to consider enrolling in a suitable clinical trial in which a new treatment is being compared with the current best treatment.” Really? Perhaps an ancient treatment (to the authors, untested) would be better. Why are there never clinical trials that compare current treatments (e.g., drugs) to ancient treatments? The ancient treatments, unlike the current ones, have passed the test of time. (The authors appear unaware of this test.) Why is the comparison always one relatively new treatment versus another even newer treatment?

5. Why does all the research you discuss center on reducing symptoms rather than discovering underlying causes? Isn’t the latter vastly more helpful than the former?

6. In a discussion of how to treat arthritis (pp. 170-172), why no mention of omega-3? Many people (with good reason, including this) consider omega-3 anti-inflammatory. Isn’t inflammation a major source of disease?

7. Why is there nothing about how to make your immune system work better? Why is this topic absent from the examples? The immune system is mentioned only once (“Bacterial infections, such as pneumonia, which are associated with the children’s weakened immune system, are a common cause of death [in children with AIDS]”).

8. Care to defend what you say about “ghostwriting” (where med school professors are the stated authors of papers they didn’t write)? You say ghostwriting is when “a professional writer writes text that is officially credited to someone else” (p. 124). Officially credited? Please explain. You also say “ghostwritten material appears in academic publications too – and with potentially worrying consequences” (p. 124). Potentially worrying consequences? You’re not sure?

9. Have you ever discovered a useful treatment? No such discoveries are described in “About the Authors,” nor does the main text contain examples. If not, why do you think you know how? If you’re just repeating what others have said, why do you think your teachers are capable of useful discovery? The authors dedicate the book to someone “who encouraged us repeatedly to challenge authority.” Did you ever ask your teachers for evidence that evidence-based medicine is an improvement?

The sad irony of Testing Treatments is that it glorifies evidence-based medicine. According to that line of thinking, doctors should ask for evidence of effectiveness. They should not simply prescribe the conventional treatment. In a meta sense, the authors of Testing Treatments have made exactly the mistake that evidence-based medicine was supposed to fix: Failure to look at evidence. They have failed to see abundant evidence (e.g., the Nobel Prizes) that, better or not, evidence-based medicine is of little use.

Above all, the authors of Testing Treatments and the architects of evidence-based medicine have failed to ask: How do new ideas begin? How can we encourage them? Healthy science is more than hypothesis testing; it includes hypothesis generation — and therefore includes methods for doing so. What are those methods? By denigrating and ignoring and telling others to ignore what they call “low-quality evidence” (e.g., case studies), the architects of evidence-based medicine have stifled the growth of new ideas. Ordinary doctors cannot do double-blind clinical trials. Yet they can gather data. They can write case reports. They can do n=1 experiments. They can do n=8 experiments (“case series”). There are millions of ordinary doctors, some very smart and creative (e.g., Jack Kruse). They are potentially a great source of new ideas about how to improve health. By denigrating what ordinary doctors can do (the evidence they can collect) — not to mention what the rest of us can do — and by failing to understand innovation, the architects of evidence-based medicine have made a bad situation (the two rules I mentioned earlier) even worse. They have further reduced the ability of the whole field to innovate, to find practical solutions to common problems.

Evidence-based medicine is religion-like in its emphasis on hierarchy (grades of evidence) and rule-following. In the design of religions, these features made sense (to the designers). You want unquestioning obedience (followers must not question leaders) and you want the focus to be on procedure (rules and rituals) rather than concrete results. Like many religions, evidence-based medicine draws lines (on this side “good”, on that side “bad”) where no lines actually exist. Such line-drawing helps religious leaders because it allows their followers to feel superior to someone (to people outside their religion). When it comes to science, however, these features make things worse. Good ideas can come from anybody, high or low in the hierarchy, on either side of any line. And every scientist comes to realize, if they didn’t already know, that you can’t do good science simply by following rules. It is harder than that. You have to pay close attention to what happens and be flexible. Evidence-based medicine is the opposite of flexible. “There is considerable intellectual tyranny in the name of science,” said Richard Feynman.

Testing Treatments has plenty of stories. Here I agree with the authors — good stories. It’s the rest of the book that shows their misunderstanding. I would replace the book’s many pages of advice and sermonizing with a few simple words: Ask your doctor for the evidence behind their treatment recommendation. They may not want to tell you. Insist. Don’t settle for vague banalities (“It’s good to catch these things early”). Don’t worry about being “difficult”. You won’t find this advice anywhere in Testing Treatments. If I wanted to help patients, I would find out what happens when it is followed.

More: Two of the authors respond in the comments. And I comment on their response.

Causal Reasoning in Science: Don’t Dismiss Correlations

In a paper (and blog post), Andrew Gelman writes:

As a statistician, I was trained to think of randomized experimentation as representing the gold standard of knowledge in the social sciences, and, despite having seen occasional arguments to the contrary, I still hold that view, expressed pithily by Box, Hunter, and Hunter (1978) that “To find out what happens when you change something, it is necessary to change it.”

Box, Hunter, and Hunter (1978) (a book called Statistics for Experimenters) is well-regarded by statisticians. Perhaps Box, Hunter, and Hunter, and Andrew, were/are unfamiliar with another quote (modified from Beveridge): “Everyone believes an experiment except the experimenter; no one believes a theory except the theorist.”

Box, Hunter, and Hunter were/are theorists, in the sense that they don’t do experiments (or even collect data) themselves. And their book has a massive blind spot. It contains 500 pages on how to test ideas and not one page — not one sentence — on how to come up with ideas worth testing. Which is just as important. Had they considered both goals — idea generation and idea testing — they would have written a different book. It would have said much more about graphical data analysis and simple experimental designs, and, I hope, would not have contained the flat statement (“To find out what happens …”) Andrew quotes.

“To find out what happens when you change something, it is necessary to change it.” It’s not “necessary” because belief in causality, like all belief, is graded: it can take on an infinity of values, from zero (“can’t possibly be true”) to one (“I’m completely sure”). And belief changes gradually. In my experience, significant (substantially greater than zero) belief in the statement “A changes B” usually starts with the observation of a correlation between A and B. For example, I began to believe that one-legged standing would make me sleep better after I slept unusually well one night and realized that the previous day I had stood on one leg (which I almost never do). That correlation made “one-legged standing improves sleep” more plausible, taking it from near zero to some middle value of belief (“might be true, might not be true”). Experiments in which I stood on one leg various amounts pushed my belief in the statement close to one (“sure it’s true”). In other words, my journey “to find out what happens” to my sleep when I stood on one leg began with a correlation. Not an experiment. To push belief from high (say, 0.8) to really high (say, 0.99), you do need experiments. But to push belief from low (say, 0.0001) to medium (say, 0.5), you don’t need experiments. To fail to understand how beliefs begin, as Box et al. apparently do, is to miss something really important.
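To make the arithmetic of graded belief concrete, here is a minimal sketch in Python. It is my own illustration, not anything Box et al. or Gelman wrote, and the likelihood ratios are invented numbers chosen only to show the shape of the process: one surprising correlation moves belief from near zero to the middle of the scale, and deliberate experiments then push it close to one.

    # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
    # The likelihood ratios below are made up for illustration.

    def update(p, likelihood_ratio):
        """Return the updated probability after evidence with the given likelihood ratio."""
        odds = p / (1.0 - p)
        odds *= likelihood_ratio
        return odds / (1.0 + odds)

    belief = 0.0001                 # "can't possibly be true" end of the scale
    belief = update(belief, 10000)  # one surprising correlation (good sleep after one-legged standing)
    print(round(belief, 2))         # about 0.5: "might be true, might not be true"
    belief = update(belief, 100)    # experiments varying the amount of standing
    print(round(belief, 2))         # about 0.99: "sure it's true"

In this toy version the correlation does most of the moving; the experiments only finish the job, which is the point of the paragraph above.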

Science is about increasing certainty — about learning. You can learn from any observation, as distasteful as that may be to evidence snobs. By saying that experiments are “necessary” to find out something, Box et al. said the opposite of you can learn from any observation. Among shades of gray, they drew a line and said “this side white, that side black”.

The Box et al. attitude makes a big difference in practice. It has two effects:

  1. Too-complex research designs. Just as researchers undervalue correlations, they undervalue simple experiments. They overdesign. Their experiments (or data collection efforts) cost far more and take much longer than they should. The self-experimentation I’ve learned so much from, for example, is undervalued. This is one reason I learned so much from it — because it was new.
  2. Existing evidence is undervalued, even ignored, because it doesn’t meet some standard of purity.

In my experience, both tendencies (too-complex designs, undervaluation of evidence) are very common. In the last ten years, for example, almost every proposed experiment I’ve learned about has been more complicated than I think wise.

Why did Box, Hunter, and Hunter get it so wrong? I think it gets back to the job/hobby distinction. As I said, Box et al. didn’t generate data themselves. They got it from professional researchers — mostly engineers and scientists in academia or industry. Those engineers and scientists have jobs. Their job is to do research. They need regular publications. Hypothesis testing is good for that. You do an experiment to test an idea, you publish the result. Hypothesis generation, on the other hand, is too uncertain. It’s rare. It’s like tossing a coin, hoping for heads, when the chance of heads is tiny. Ten researchers might work for ten years, tossing coins many times, and generate only one new idea. Perhaps all their work, all that coin tossing, was equally good. But only one researcher came up with the idea. Should only one researcher get credit? Should the rest get fired, for wasting ten years? You see the problem, and so do the researchers themselves. So hypothesis generation is essentially ignored by professionals because they have jobs. They don’t go to statisticians asking: How can I better generate ideas? They do ask: How can I better test ideas? So statisticians get a biased view of what matters, do biased research (ignoring idea generation), and write biased books (that don’t mention idea generation).
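A rough simulation of the coin-tossing analogy makes the career problem vivid. This is my own sketch, and the 1% chance of a new idea per researcher per year is a number I made up purely for illustration:

    # Each researcher has the same small chance of generating a new idea in a given year.
    import random

    P_NEW_IDEA = 0.01          # assumed chance per researcher per year (made up)
    researchers, years = 10, 10

    ideas = [sum(random.random() < P_NEW_IDEA for _ in range(years))
             for _ in range(researchers)]
    print(ideas)                       # typically mostly zeros, with perhaps a single 1
    print("total ideas:", sum(ideas))  # expected total: 10 * 10 * 0.01 = 1

Whoever happens to hold the single 1 looks productive and everyone else looks like they wasted a decade, even though in the simulation they all did exactly the same thing. That is why researchers with jobs steer toward hypothesis testing, which reliably produces publishable output.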

My self-experimentation taught me that the Box et al. view of experimentation (and of science — that it was all about hypothesis testing) was seriously incomplete. It could teach me that because it was like a hobby: I had no need for publications or other steady output. Over thirty years, I collected a lot of data, did a lot of fast-and-dirty experiments, noticed informative correlations (“accidental observations”) many times, and came to see the great importance of correlations in learning about causality.

Yes, Canker Sores Prevented (and Cured) by Omega-3

Here is a comment left on my earlier canker-sore post by a reader named Ted:

I found out quite by accident WALNUTS get rid of [canker sores] quite quickly. The first sign of an ulcer I chew walnuts and leave the paste in my mouth for a little while (30 seconds or so).

The first time was by accident, my ulcers disappeared so quickly I knew it had to be something I ate. And the only thing I had eaten differently the past day was walnuts.

Flaxseed oil and walnuts differ in lots of ways but both are high in omega-3. My gums got much better around the time I started taking flaxseed oil. I neither noticed nor expected this; my dentist pointed it out. Several others have told me the same thing. Tyler Cowen’s gums got dramatically better. One reader started and stopped and restarted flaxseed oil, making it blindingly clear that the gum improvement is caused by flaxseed oil. There is plenty of reason to think the human diet was once much higher in omega-3. All this together convinces me that omega-3 can both prevent and cure canker sores. Not only that, I’m also convinced that canker sores are a sign of omega-3 deficiency. You shouldn’t just get rid of them with walnuts; you should change your diet. Omega-3 has other benefits (better brain function, less inflammation, probably others).

Let’s say I’m right about this — canker sores really are prevented and cured by omega-3. Then there are several things to notice.

1. Web facilitation. This discovery was made possible by the internet. My initial interest in flaxseed oil came from reading the Shangri-La Diet forums. I didn’t have to read a single book about the Aquatic Ape theory; I could learn enough online. Tyler Cowen’s experience was reported on his blog. Eric Vlemmix contacted me by email. No special website was involved.

2. Value of self-experimentation. My flaxseed oil self-experimentation played a big part, although it had nothing to do with mouth health. These experiments showed dramatic benefits — so large and fast that something in flaxseed oil, presumably omega-3, had to be a necessary nutrient. Because of these results, I blogged about omega-3 a lot, which is why Eric emailed me about his experience.

3. Unconventional evidence. All the evidence here, not just the self-experimentation, is what advocates of evidence-based medicine and other evidence snobs criticize. Much of it is anecdotal. Yet the evidence snobs have, in this case, nothing to show for their snobbery. They missed this conclusion completely. Nor do you need a double-blind study to verify/test this conclusion. If you have canker sores, you simply drink flaxseed oil or eat walnuts and see if they go away. Maybe this omnipresent evidence snobbery is . . . completely wrong? Maybe this has something to do with the stagnation in health research?

4. Lack of credentials. No one involved with this conclusion is a nutrition professor or dentist or medical doctor, as far as I know. Apparently you don’t need proper credentials to figure out important things about health. Of course, we’ve been here before: Jane Jacobs, Elaine Morgan.

5. Failure of “trusted” health websites. Health websites you might think you could trust missed this completely. The Mayo Clinic website lists 15 possible causes — none of them involving omega-3. (Some of them, we can now see, are correlates of canker sores, also caused by lack of omega-3.) If canker sores can be cured with walnuts, the Mayo list of treatments reads like a list of scurvy cures from the Middle Ages. The Harvard Medical School health website is even worse. “Keep in mind that up to half of all adults have experienced canker sores at least once,” it says. This is supposed to reassure you. Surely something this common couldn’t be a serious problem.

6. Failure of the healthcare establishment. Even worse, the entire healthcare establishment, with its vast resources, hasn’t managed to figure this out. Canker sores are not considered a major health problem, no, but, if I’m right, that too is a mistake. They are certainly common. If they indicate an important nutritional deficiency (too little omega-3), they become very important and their high prevalence is a major health problem.

The Emperor’s New Clothes: Meta-Analysis

In an editorial about the effect of vitamin-mineral supplements in the prestigious American Journal of Clinical Nutrition, the author, Donald McCormick, a professor of nutrition at Emory University, writes:

This study is a meta-analysis of randomized controlled trials that were previously reported. Of 2311 trials identified, only 16 met the inclusion criteria.

That’s throwing away a lot of data: more than 99% of the trials identified. Maybe, just maybe, something could be learned from the other 2295 randomized controlled trials?

Evidence snobs.

The Ketogenic Diet and Evidence Snobs

If we can believe a movie based on a true story, the doctors consulted by the family with an epileptic son in …First Do No Harm knew about the ketogenic diet but (a) didn’t tell the parents about it, (b) didn’t take it seriously, and (c) thought that irreversible brain surgery should be done before trying the diet, which was of course much safer. Moreover, these doctors had an authoritative book to back up these remarkably harmful and unfortunate attitudes. The doctors in …First, as far as I can tell, reflected (and still reflect) mainstream medical practice.

Certainly the doctors were evidence snobs — treating evidence not from a double-blind study as worthless. Why were they evidence snobs? I suppose the universal tendency toward snobbery (we love feeling superior) is one reason but that may be only part of the explanation. In the 1990s, Phillip Price, a researcher at Lawrence Berkeley Labs, and one of his colleagues were awarded a grant from the Environmental Protection Agency (EPA) to study home radon levels nationwide. They planned to look at the distribution of radon levels and make recommendations for better guidelines. After their proposal was approved, some higher-ups at EPA took a look at it and realized that the proposed research would almost surely imply that the current EPA radon guidelines could be improved. To prevent such criticism, the grant was canceled. Price was told by an EPA administrator that this was the reason for the cancellation.

This has nothing to do with evidence snobbery. But I’m afraid it may have a lot to do with how the doctors in …First Do No Harm viewed the ketogenic diet. If the ketogenic diet worked, it called into question their past, present, and future practices — namely, (a) prescribing powerful drugs with terrible side effects and (b) performing damaging and irreversible brain surgery of uncertain benefit. If something as benign as the ketogenic diet worked some of the time, you’d want to try it before doing anything else. This hadn’t happened: The diet hadn’t been tried first; it had been ignored. Rather than allow evidence of the diet’s value to be gathered, which would open them up to considerable criticism, the doctors did their best to keep the parents from trying it. Much like canceling the radon grant.

The Ketogenic Diet

Speaking of evidence snobs, this is from the TV movie …First Do No Harm (1997) about a family’s discovery of the ketogenic diet (a high-fat low-carb diet) for their severely epileptic son:

DOCTOR The diet is not an approved treatment.

MOTHER But there have been a lot of studies.

DOCTOR Those studies are anecdotal, not the kind of studies we base sound medical judgment on. Not double-blind studies.

Later:

DOCTOR I assume you know all the evidence in favor of the ketogenic diet is anecdotal. There’s absolutely no scientific evidence this diet works.

The doctor prefers brain surgery. When the diet is tried, it works beautifully (as it often does in real life). “What could have gone so horribly wrong with this whole medical system?” the mother writes to the father.

Evidence Snobs

At a reunion of Reed College graduates who majored in psychology, I gave a talk about self-experimentation. One question was what I thought of Evidence-Based Medicine. I said the idea you could improve on anecdotes had merit, but that proponents of Evidence-Based Medicine have been evidence snobs (a term that derives from Alex Tabarrok’s credit snobs). I meant they’ve dismissed useful evidence because it didn’t reach some level of purity. Because health is important, I said, ignoring useful information (for example, when coming up with nutritional recommendations) is really unfortunate.

Afterwards, four people mentioned “evidence snobs” to me. (Making it the most-mentioned thing I said.) They all liked it. Thanks, Alex.