Unhelpful Answers (Ancestral Health Symposium 2013)

At the Ancestral Health Symposium, I went to a talk about food and the brain, a great interest of mine. The speaker said that flaxseed oil was ineffective because only a small fraction (5%) gets converted into DHA — a common claim.

During the question period, I objected.

Seth I found that after I ate some flaxseed oil capsules, my balance improved. Apparently flaxseed oil improved my brain function. This disagrees with what you said.

Speaker Everyone’s different.

A man in the audience said what I observed might have been a placebo effect. I said that couldn’t be true because the effect was a surprise. He disagreed. (The next day, in the lunch line, he spoke to a friend about getting in a kerfuffle with “an emeritus professor who wasn’t used to being disagreed with.”) I spoke to the speaker again:

Seth Is it possible that flaxseed oil is converted to DHA at a higher rate than you said?

Speaker Anything’s possible.

This reminded me of a public lecture by Danny Kahneman at UC Berkeley. During the question period, a man, who appeared to have some kind of impairment, asked a question that was hard to understand. Kahneman gave a very brief answer, something like “No.”

Afterwards, a woman came over to me. Maybe flaxseed oil reduced inflammation, she said. Given that the brain is very high in omega-3, and so is flaxseed oil, this struck me as unlikely. I said I didn’t like how my question had been answered. I’ve been there, she said. Other members of her family were doctors, she said. She would object to what they said and they would respond in a dismissive way.

The speaker is/was a doctor. Her talk consisted of repeating what she had read, apparently. The possibility that something she read was wrong . . . well, anything’s possible.


The Truth in Small Doses: Interview with Clifton Leaf (Part 2 of 2)

Part 1 of this interview about Leaf’s book The Truth in Small Doses: Why We’re Losing the War on Cancer — and How to Win It was posted yesterday.

SR You say we should “let scientists learn as they go”. For example, stop requiring that grant proposals test specific hypotheses. I agree. I think most scientists know very little about how to generate plausible ideas. If they were allowed to try to do this, as you propose, they would learn how to do it. However, I failed to find evidence in your book that a “let scientists learn as they go” strategy works better (leaving aside Burkitt). Did I miss something?

CL Honestly, I don’t think we know yet that such a strategy would work. What we have in the way of evidence is a historical control (to some extent, we did try this approach in pediatric cancers in the 1940s through the 1960s) and a comparator arm (the current system) that so far has been shown to be ineffective.

As I tried to show in the book, the process now isn’t working. And much of what doesn’t work is what we’ve added in the way of bad management. Start with a lengthy, arduous grant-application process that squelches innovative ideas, that funds barely 10 percent of a highly trained corps of academic scientists and demoralizes the rest, and that rewards the same applicants (and types of proposals) over and over despite little success or accountability. This isn’t the natural state of science. We BUILT that. We created it through bad management and lousy systems.

Same for where we are in drug development. We’ve set up clinical-trial rules that force developers to spend years ramping up expensive human studies to test for statistical significance, even when, the vast majority of the time, the question being asked is of little clinical significance. The human cost of this is enormous, as so many have acknowledged.

With regard to basic research, one has only to talk to young researchers (and examine the funding data) to see how badly skewed the grants process has become. As difficult (and sometimes inhospitable) as science has always been, it has never been THIS hard for a young scientist to follow up on questions that he or she thinks are important. In 1980, more than 40 percent of major research grants went to investigators under 40; today it’s less than 10 percent. For anyone asking provocative, novel questions (those that the study section doesn’t “already know the answer to,” as the saying goes), the odds of funding are even worse.

So, while I can’t say for sure that an alternative system would be better, I believe that given the current state of affairs, taking a leap into the unknown might be worth it.

SR I came across nothing about how it was discovered that smoking causes lung cancer. Why not? I would have thought we can learn a lot from how this discovery was made.

CL I wish I had spent more time on smoking. I mention it a few times in the book. In discussing Hoffman (pg. 34, and footnote, pg. 317), I say:

He also found more evidence to support the connection of “chronic irritation” from smoking with the rise in cancers of the mouth and throat. “The relation of smoking to cancer of the buccal [oral] cavity,” he wrote, “is apparently so well established as not to admit of even a question of doubt.” (By 1931, he would draw an unequivocal link between smoking and lung cancer—a connection it would take the surgeon general an additional three decades to accept.)

And I make a few other brief allusions to smoking throughout the book. But you’re right, I gave this preventable scourge short shrift. Part of why I didn’t spend more time on smoking was that I felt its role in cancer was well known, and by now, well accepted. Another reason (though I won’t claim it’s an excusable one) is that Robert Weinberg did such a masterful job of talking about this discovery in “Racing to the Beginning of the Road,” which I consider to be the single best book on cancer.

I do talk about Weinberg’s book in my own, but I should have singled out his chapter on the discovery of this link (titled “Smoke and Mirrors”), which is as much a story of science as it is a story of scientific culture.

SR Overall you say little about epidemiology. You write about Burkitt but the value of his epidemiology is unclear. Epidemiology has found many times that there are big differences in cancer rates between different places (with different lifestyles). This suggests that something about lifestyle has a big effect on cancer rates. This seems to me a very useful clue about how to prevent cancer. Why do you say nothing about this line of research (lifestyle epidemiology)?

CL Seth, again, I agree. I don’t spend enough time discussing the role that good epidemiology can play in cancer prevention. In truth, I had an additional chapter on the subject, which began by discussing decades of epidemiological work linking the herbicide 2,4-D with various cancers, particularly with prostate cancer in the wheat-growing states of the American west (Montana, the Dakotas and Minnesota). I ended up cutting the chapter in an effort to make the book a bit shorter (and perhaps faster). But maybe that was a mistake.

For what it’s worth, I do believe that epidemiology is an extremely valuable tool for cancer prevention.

[End of Part 2 of 2]

The Truth in Small Doses: Interview with Clifton Leaf (Part 1 of 2)

I found a lot to like and agree with in The Truth in Small Doses: Why We’re Losing the War on Cancer — and How to Win It by Clifton Leaf, published recently. It grew out of a 2004 article in Fortune in which Leaf described poor results from cancer research and said that cancer researchers work under a system that “rewards academic achievement and publication over all else” — in particular, over “genuine breakthroughs.” I did not agree, however, with his recommendations for improvement, which seemed to reflect the same thinking that got us here. It reminded me of President Obama putting the people who messed up the economy in charge of fixing it. However, Leaf had spent a lot of time on the book, obviously cared deeply, and had freedom of speech (he doesn’t have to worry about offending anyone, as far as I can tell), so I wondered how he would defend his point of view.

Here is Part 1 of an interview in which Leaf answered written questions.

SR Let me begin by saying I think the part of the book that describes the problem – little progress in reducing cancer – is excellent. You do a good job of contrasting the amount of time and money spent with progress actually made and pointing out that the system seems designed to produce papers rather than progress. What I found puzzling is the part about how to do better. That’s what I want to ask you about.

In the Acknowledgements, you say Andy Grove said “a few perfect words” that helped shape your thesis. What were those words?

CL “It’s like a Greek tragedy. Everybody plays his individual part to perfection, everybody does what’s right by his own life, and the total just doesn’t work.” Andy had come to a meeting at Fortune, mostly just to chat. I can’t remember what the main topic of conversation was, but when I asked him a question about progress in the war on cancer, he said the above. (I quote this in the 2004 piece I wrote for Fortune.)

SR You praise Michael Sporn. His great contribution, you say, is an emphasis on prevention. I have a hard time seeing this as much of a contribution. The notion that “an ounce of prevention is worth a pound of cure” is ancient. What progress has Sporn made in the prevention of anything?

CL Would it be alright, Seth, if before I answer the question, I bring us back to what I said in the book? Because I think the point I was trying to make — successfully or not (and I’m guessing you would conclude “not” here) — is more nuanced than “an ounce of prevention is worth a pound of cure.”

Here’s what I see as the key passage regarding Dr. Sporn (pgs. 133-135):

For all his contributions to biology, biochemistry, and pharmacology, though, Sporn is still better known for something else. Rather than any one molecular discovery, it is an idea. The notion is so straightforward—so damned obvious, really—that it is easy to forget how revolutionary it was when he first proposed it in the mid-1970s: cancer, Sporn contended, could (and should) be chemically stopped, slowed, or reversed in its earliest preinvasive stages.

That was it. That was the whole radical idea.

Sporn was not the first to propose such an idea. Lee Wattenberg at the University of Minnesota had suggested the strategy in 1966 to little response. But Sporn refined it, pushed it, and branded it: To distinguish such intervention from the standard form of cancer treatment, chemotherapy—a therapy that sadly comes too late for roughly a third of patients to be therapeutic—he coined the term chemoprevention in 1976.

The name stuck.

On first reading, the concept might seem no more than a truism. But to grasp the importance of chemoprevention, one has first to dislodge the mind-set that has long reigned over the field of oncology: that cancer is a disease state. “One has cancer or one doesn’t.” Such a view, indeed, is central to the current practice of cancer medicine: oncologists today discover the event of cancer in a patient and respond—typically, quite urgently. This thinking is shared by patients, the FDA, drug developers, and health insurers (who decide what to pay for). This is the default view of cancer.

And, to Sporn, it is dead wrong. Cancer is not an event or a “state” of any kind. The disease does not suddenly come into being with a discovered lump on the mammogram. It does not begin with the microscopic lesion found on the chest X-ray. Nor when the physician lowers his or her voice and tells the patient, “I’m sorry. The pathology report came back positive. . . . You have cancer.”

Nor does the disease begin, says Sporn, when the medical textbooks say it does: when the first neoplastic cell breaks through the “basement membrane,” the meshwork layers of collagen and other proteins that separate compartments of bodily tissue. In such traditional thinking, it matters little whether a cell, or population of cells, has become immortalized through mutation. Or how irregular or jumbled the group might look under the microscope. Or how otherwise disturbed their genomes are. As long as none of the clones have breached the basement membrane, the pathology is not (yet) considered “cancer.”

For more than a century, this barrier has been the semantic line that separates the fearsome “invader” from the merely “abnormal.” It is the Rubicon of cancer diagnosis. From the standpoint of disease mechanics, the rationale is easy to understand, because just beyond this fibrous gateway are fast-moving channels (the blood and lymphatic vessels) that can conceivably transport a predatory cell, or cells, to any terrain in the body. Busting through the basement is therefore a seeming leap past the point of no return, a signal that a local disturbance is potentially emerging into a disseminating mob.*

But while invasion may define so-called clinical cancer for legions of first-year medical students, it is by no means the start of the pathology. Cancer is not any one act; it is a process. It begins with the first hints of subversion in the normal differentiation of a cell—with the first disruption of communication between that cell and its immediate environment. There is, perhaps, no precise moment of conception in this regard, no universally accepted beginning—which makes delineating the process that much harder. But most, if not all, types of “cancer” have their own somewhat recognizable stages of evolution along the route to clinically apparent disease.

“Saying it’s not cancer until the cells are through the basement membrane,” says Sporn, “is like saying the barn isn’t on fire until there are bright red flames coming out of the roof. It’s absolute nonsense!”

(Sorry for that long excerpt.) I think that Dr. Sporn’s greatest contribution was to reframe cancer as a continually evolving, dynamic process — carcinogenesis — rather than an event or state of being. And it was one that, conceivably at least, we could interrupt — and interrupt earlier than at the point at which it was clinically manifested. This was distinct from early detection, which, while effective to some extent and in some cancers, was both detecting cancers too late and “catching” many lesions that weren’t likely to develop any further (or didn’t really exist to begin with), adding to the already-great cancer burden.

There was a potential, said Sporn, to intervene in a way that might stop developing cancers in their tracks, and yet would not necessarily have to add to the burden of cancer overtreatment.

As I spend most of Chapter 7 discussing, there are enormous barriers to pulling this off—and I did my best to lay out the challenges. But I do believe that this is the way to go in the end.

SR You praise Kathy Giusti for her effect on multiple myeloma research. I couldn’t find the part where that research (“a worthy model for cancer research that can serve as a guidepost for the future . . . that teaches everything there is to teach about the power of collaborative science”, p. 260) came up with something useful.

CL Seth, sorry, this again may be me not being very clear in my writing. I apologize for that. But the lines you cite are actually intended to set up the Burkitt story in the following chapter. It was Burkitt’s effort against the mysterious African lymphoma that remains, in my view, “a worthy model for cancer research…”

SR You praise Burkitt’s epidemiology. How did that epidemiology help find out that Burkitt’s lymphoma responds to certain drugs? I couldn’t see a connection.

CL Good question. I think Burkitt’s very old-fashioned epidemiological investigation identified a widespread, terrible cancer that had been seen many times, but not noticed for what it was. It helped narrow down who was getting this cancer and—at least in a broad, geographical sense—why. But it wasn’t epidemiology that helped discover that this lymphoma was responsive to certain drugs—that was trial and error. As with the case of Farber and ALL [acute lymphocytic leukemia], many today would blanch at the primitive experimental protocols that tested these toxic drugs in children. But with an extraordinarily aggressive tumor that was killing these kids in weeks, Burkitt felt he had to try something. Again, that’s not epidemiology, but it is an understanding of the urgency of this disease that we can, perhaps, learn from.

[End of Part 1 of 2]

Researchers Fool Themselves: Water and Cognition

A recent paper about the effect of water on cognition illustrates a common way that researchers overstate the strength of their evidence, apparently fooling themselves. Psychology researchers at the University of East London and the University of Westminster did an experiment in which subjects didn’t drink or eat anything starting at 9 pm and came to the testing room the next morning. Each subject came in twice, on two different weeks. On both visits everyone was given something to eat, but only on one visit were subjects given water to drink; half of the subjects got water on the first visit, half on the second. Then the researchers gave subjects a battery of cognitive tests.

One result makes sense: subjects were faster on a simple reaction time test (press button when you see a light) after being given water, but only if they were thirsty. Apparently thirst slows people down. Maybe it’s distracting.

The other result emphasized by the authors doesn’t make sense: water made subjects worse at a task called Intra-Extra Dimensional Set Shift. The task provided two measures (total trials and total errors) but the paper gives results only for total trials. The omission is not explained. (I asked the first author about this by email; she did not explain the omission.) On total trials, subjects given water did worse, p = 0.03. A surprising result: after people go without water for quite a while, giving them water makes them worse.

This p value is not corrected for the number of tests done. A table of results shows that 14 different measures were used. There was a main effect of water on two of them. One was the simple reaction time result; the other was the IED Stages Completed (IED = intra/extra dimensional) result. It is likely that the effect of water on simple reaction time was a “true positive” because the effect was influenced by thirst. In contrast, the IED Stages Completed effect wasn’t reliably influenced by thirst. Putting the simple reaction time result aside, there are 13 p values for the main effect of water; one is weakly reliable (p = 0.03). If you do 20 independent tests, purely by chance you are likely to get p < 0.05 at least once even when there are no true effects. Taken together, there is no good reason to believe that water had main effects aside from the simple reaction time test. The paper would make a good question for an elementary statistics class (“Question: If 13 tests are independent, and there are no true effects present, how likely is it that at least one will be p = 0.03 or better by chance? Answer: 1 – (0.97^13) = 0.33”).
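To see how fast the false-positive probability grows, here is a minimal sketch in Python. The 13 tests and the 0.03 threshold are the numbers discussed above; the formula assumes the tests are independent, which real cognitive measures rarely are.

```python
# Chance of at least one false positive among k independent tests
# when every null hypothesis is true: 1 - (1 - alpha)^k.
def familywise_error(k: int, alpha: float) -> float:
    return 1 - (1 - alpha) ** k

print(familywise_error(13, 0.03))  # ~0.33, the textbook answer above
print(familywise_error(20, 0.05))  # ~0.64 for 20 tests at the usual p < 0.05
```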

I wrote to the first author (Caroline Edmonds) about this several days ago. My email asked two questions. She replied but failed to answer the question about number of tests. Her answer was written in haste; maybe she will address this question later.

A better analysis would have started from the assumption that the 14 measures are unlikely to be independent. It would have done (or used) a factor analysis that condensed the 14 measures into (say) three factors. Then the researchers could ask whether water affected each of the three factors. Far fewer tests, tests much closer to independent, and far less room to fool yourself or cherry-pick.
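Here is a rough sketch of that analysis in Python. Everything is simulated (the subject count, the three-factor choice, and the data are invented for illustration; the real study had 14 measures and its own sample size), but it shows the shape of the approach: fit a small number of factors, then test the water effect once per factor instead of once per measure.

```python
import numpy as np
from scipy import stats
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_subjects, n_measures, n_factors = 40, 14, 3

# Simulated cognitive scores for the water and no-water conditions.
water = rng.normal(size=(n_subjects, n_measures))
no_water = rng.normal(size=(n_subjects, n_measures))

# Fit the factors on all the data, then score each condition.
fa = FactorAnalysis(n_components=n_factors).fit(np.vstack([water, no_water]))
water_scores, no_water_scores = fa.transform(water), fa.transform(no_water)

# Three paired tests instead of fourteen.
for i in range(n_factors):
    t, p = stats.ttest_rel(water_scores[:, i], no_water_scores[:, i])
    print(f"factor {i + 1}: t = {t:.2f}, p = {p:.3f}")
```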

The problem here — many tests, with no correction for this and no analysis with far fewer tests — is common, but the analysis I suggest is, in experimental psychology papers, very rare. (I’ve never seen it.) Factor analysis is taught as part of survey psychology (psychology research that uses surveys, such as personality research), not as part of experimental psychology. In the statistics textbooks I’ve seen, the problem of too many tests, and how to correct for or reduce the number of tests, isn’t emphasized. Perhaps it is a research-methodology example of Gresham’s Law: methods that make it easier to find what you want (differences with p < 0.05) drive out better methods.

Thanks to Allan Jackson.

Heart Disease Epidemic and Latitude Effect: Reconciliation

For the last half century, heart disease has been the most common cause of death in rich countries — more common than cancer, for example. I recently discussed the observation of David Grimes, a British gastroenterologist, that heart disease has followed the pattern of an infectious-disease epidemic: sharp rise, sharp fall. From 1920 to 1970, heart disease in England increased by a factor of maybe 100, from a very low level to 500 deaths per 100,000 people per year. From 1970 to 2010, it decreased by a factor of 10. This pattern cannot be explained by any popular idea about heart disease. For example, changes in diet, exercise, or activity cannot explain it. They haven’t changed in the right way (way up, way down) at the right time (peaking in 1970). In spite of this ignorance, I have never heard a health expert express doubt about what causes heart disease. This fits with what I learned when I studied myself: what I learned had little correlation with what experts said.

Before the epidemic paper, Grimes wrote a book about heart disease. It stressed the importance of latitude: heart disease is more common at more extreme latitudes. For example, it is more common in Scotland than in the south of England. The same correlation can be seen in many data sets and with other diseases, including influenza, variant Creutzfeldt-Jakob disease, multiple sclerosis, Crohn’s disease and other digestive diseases. More extreme latitudes get less sun. Grimes took the importance of latitude to suggest the importance of Vitamin D. Better sleep with more sun is another possible explanation.

The amount of sunlight has changed very little over the last hundred years so it cannot explain the epidemic-like rise and fall of heart disease. I asked Grimes how he reconciled the two sets of findings. He replied:

It took twenty years for me to realize the importance of the sun. I always felt that diet was grossly exaggerated and that victim-blaming was politically and medically convenient – disease was due to the sufferers and it was really up to them to correct their delinquent life-styles. I was brought up and work in the north-west of England, close to Manchester. The population has the shortest life-expectancy in England; Scotland and Northern Ireland are even worse. It must be a climate effect. And so on to sunlight. So many parallels from a variety of diseases.

When I wrote my book I was aware of the unexplained decline of CHD deaths and I suggested that the UK Clean Air Act of 1953 might have been the turning point, the effect being after 1970. Cleaning of the air did increase sun exposure but the decline of CHD deaths since 1970 has been so great that there must be more to it than clean air and more sun. At that time I was unaware of the rise of CHD deaths after 1924 and so I was unaware of the obvious epidemic. I now realize that CHD must have been due to an environmental factor, probably biological, an unidentified micro-organism. This is the cause, but the sun, through immune-enhancement, controls the distribution, geographical, social and ethnic. The same applies to many cancers, multiple sclerosis, Crohn’s disease (my main area of clinical activity), and several others. I think this reconciles the sun and a biological epidemic.

He has written three related ebooks: Vitamin D: Evolution and Action, Vitamin D: What It Can Do For Your Baby, and You Will Not Die of a Heart Attack.

Assorted Links

  • Kombucha beer (which may not taste like beer)
  • A growing taste for sour. “I saw bottles of [kombucha] in rural Virginia gas stations . . . kimchi, fermented cabbage, has spread from Korean kitchens to Los Angeles taco trucks.”
  • Exercise and weight loss. Only the extremes of exercise — very intense exercise (very brief) and very long lasting exercise (walking) — reduce weight or keep weight low. The middling exercise Americans actually choose (aerobics) has little effect. This post, by my friend Phil Price, gets the high-intensity part right but the low-intensity part wrong.
  • Weight loss fails to prevent heart attacks. “The study followed 5,200 patients and lasted 11 years.” Surely cost tens of millions of dollars. More evidence of mainstream ignorance about heart disease.
  • A kickback by any other name . . . “At least 17 of the top 20 Bystolic prescribers in Medicare’s prescription drug program in 2010 have been paid by Forest [which makes Bystolic] to deliver promotional talks. In 2012, they together received $284,700 for speeches and more than $20,000 in meals.”

Thanks to Bryan Castañeda and Hal Pashler.

Assorted Links

  • natural acne remedies
  • A mainstream climate scientist has doubts. “We’re facing a puzzle. Recent CO2 emissions have actually risen even more steeply than we feared. As a result, according to most climate models, we should have seen temperatures rise by around 0.25 degrees Celsius (0.45 degrees Fahrenheit) over the past 10 years. That hasn’t happened. In fact, the increase over the last 15 years was just 0.06 degrees Celsius (0.11 degrees Fahrenheit) — a value very close to zero. This is a serious scientific problem.” What would Bill McKibben say?
  • Personal Experiments, a research site where you can sign up for experiments.
  • Trouble at GSK Shanghai. The defenses of the accused strike me as plausible.
  • Sleep disturbance in a hospital. “Between 10 p.m. and 6 a.m., I did not go more than an hour without some kind of interruption.” As ridiculous as cutting off part of the immune system because of too many infections (tonsillectomies) and the view that acne has nothing to do with diet.

Thanks to Dave Lull.

The Rise and Fall of Heart Disease

Heart disease was once the number one killer in rich countries. Maybe it still is. Huge amounts of time and money have gone into trying to reduce it — statins, risk factor measurement (e.g., cholesterol measurement), telling people to “eat healthy” and exercise more, and so on. Unfortunately for the poor souls who follow the advice (e.g., take statins), the advice givers, such as doctors, never make clear how little they know about what causes heart disease. Maybe they don’t realize how little they know.

I encountered an ignorant-without-knowing-it expert after a talk I gave about the effect of butter on brain function. I found that butter improved my brain function (measured by arithmetic speed). I had been eating lots of butter for more than a year. A cardiologist in the audience said I was killing myself. He thought butter caused heart disease. I said that I had experimental data that butter was good for me. Easy to interpret. The notion that butter is bad has come from epidemiological (non-experimental) data, which is hard to interpret. The cardiologist said that the epidemiology has not been misleading. One sign of our correct understanding, he said, is that heart disease has declined. I said there were many possible reasons for the decline.

A 2012 paper called “An epidemic of coronary heart disease” by David Grimes, a British doctor, could hardly make clearer how little we know about the cause of heart disease. Grimes points out that before 1920 heart disease was almost non-existent, that it rose sharply from 1930 to 1970 and since 1970 has declined sharply, at roughly the same rate that it rose. Both the rise and the fall are mysteries, says Grimes, in agreement with what I told the cardiologist. The rise and fall contradict all popular explanations. Heart disease cannot be due to obesity or wealth — both increased substantially at the same time heart disease fell sharply. Nor was the decline due to government intervention:

The decline of CHD deaths in the UK was further described in a UK Government report of 2004, Winning the War on Heart Disease. In this report, the government predictably but undeservedly assumed responsibility for the decline. Clearly, the NHS [National Health Service] in the UK could not have had an international effect [the decline is international].

“There [has been] no obvious effect of statin therapy or other medical intervention,” Grimes continues. Yet statins continue to be prescribed in very high amounts and at very great expense. The NNT (the number of people you need to treat to save one life) is often in the thousands, he noted.
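For readers unfamiliar with the term, NNT is the reciprocal of the absolute risk reduction. A minimal sketch of the arithmetic, with invented numbers purely for illustration (they are not Grimes’s figures):

```python
# NNT = 1 / absolute risk reduction. If a drug cuts the risk of death
# over the treatment period from 2.00% to 1.95%, the absolute risk
# reduction is 0.0005, so 2,000 people must be treated to save one life.
control_risk, treated_risk = 0.0200, 0.0195
nnt = 1 / (control_risk - treated_risk)
print(round(nnt))  # 2000
```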

Those who complain about the high cost of health care fatally fail to grasp this enormous ignorance — about many things, not just heart disease — and its consequences. Reducing the cost of health care (reducing the cost of statins, for example) would improve health if cost were the only thing deeply wrong with our health care system. It isn’t.

Hospitals and Their Employees: Stuck in the 1800s

An article in the New York Times describes how difficult it has been for hospital administrators to get their employees to wash their hands. Hospital-acquired infections are an enormous problem and cause many deaths, yet “studies [in the last 10 years] have shown that without encouragement, hospital workers wash their hands as little as 30 percent of the time that they interact with patients.” Hospitals are now — just now — trying all sorts of things to increase the hand-washing rate. The germ theory of disease dates from the 1800s. Ignaz Semmelweis did his pioneering work, showing that hand-washing dramatically reduced the death rate (from 18% to 2%), in 1847.

So hospitals are only now (in the last few years) grasping the implications of facts and a well-established theory from the 1800s. What goes unsaid in the usual discussion of how awful this is (how dare doctors refuse to wash their hands! A sentiment with which I agree) is how backward both sides of the discussion are. A discussion in which many lives are at stake.

The Times article now has 209 comments, many by doctors and nurses. The doctors, of course, went to medical school and passed a rigorous test about medicine (“board-certified”). Yet they don’t know basic things about infection. (One doctor, in the comments, calls hand-washing “this current fad”.) They appear to have no idea that it is possible to improve the body’s ability to resist infection. I read all the comments. Not one mentioned two easy, cheap, low-tech ways to reduce hospital infections:

1. Allow patients to sleep well. The body fights off infection during sleep, but hospitals are notoriously bad places to sleep. Patients are woken up by nurses, for example. You might think that everyone knows sleep helps fight infection . . . but apparently not hospital administrators nor the doctors and nurses who commented on the Times article. It was in the interest of these doctors and nurses to suggest alternative solutions because they dislike washing their hands.

2. Feed patients fermented foods (or probiotics). Fermented foods help you fight off infections. I believe this is because the bacteria on fermented food are perfectly safe yet successfully compete with dangerous bacteria. In any case, plenty of studies show that probiotics and fermented foods reduce hospital infections. In one study, “use of probiotics reduced the new cases of C. difficile-associated diarrhea by two thirds (66 per cent), with no serious adverse events attributable to probiotics.” Maybe this just-published article (“Probiotics: a new frontier for infection control”) will bring a few people who work in hospitals into the 21st century.

That hospital administrators and their doctors and nurses — and, in this discussion, their critics — are stuck in the 1800s is clear enough. What is slightly less clear is that our understanding is better now than it was in the 1800s and some of the new knowledge is useful.

Thanks to Bryan Castañeda.

Celiac Experts Make Less Than Zero Sense

In the 1960s, Edmund Wilson reviewed Vladimir Nabokov’s translation of Eugene Onegin. Wilson barely knew Russian and his review was a travesty. Everything was wrong. Nabokov wondered if it had been written that way to make sense when reflected in a mirror.

I thought of this when I read recent remarks by “celiac experts” in the New York Times. The article, about gluten sensitivity, includes an example of a woman who tried a gluten-free diet:

Kristen Golden Testa could be one of the gluten-sensitive. Although she does not have celiac, she adopted a gluten-free diet last year. She says she has lost weight and her allergies have gone away. “It’s just so marked,” said Ms. Golden Testa, who is health program director in California for the Children’s Partnership, a national nonprofit advocacy group. She did not consult a doctor before making the change, and she also does not know [= is unsure] whether avoiding gluten has helped at all. “This is my speculation,” she said. She also gave up sugar at the same time and made an effort to eat more vegetables and nuts.

Fine. The article goes on to quote several “celiac experts” (all medical doctors) who say deeply bizarre things.

“[A gluten-free diet] is not a healthier diet for those who don’t need it,” Dr. Guandalini [medical director of the University of Chicago’s Celiac Disease Center] said. These people “are following a fad, essentially.” He added, “And that’s my biased opinion.”

Where Testa provides a concrete example of health improvement and refrains from making too much of it, Dr. Guandalini does the opposite (provides no examples, makes extreme claims).

Later, the article says this:

Celiac experts urge people to not do what Ms. Golden Testa did — self-diagnose. Should they actually have celiac, tests to diagnose it become unreliable if one is not eating gluten. They also recommend visiting a doctor before starting on a gluten-free diet.

As someone put it in an email to me, “Don’t follow the example of the person who improved her health without expensive, invasive, inconclusive testing. If you think gluten may be a problem in your diet, you should keep eating it and pay someone to test your blood for unreliable markers and scope your gut for evidence of damage. It’s a much better idea than tracking your symptoms and trying a month without gluten, a month back on, then another month without to see if your health improves.”

Are the celiac experts trying to send a message to Edmund Wilson, who died many years ago?