Lyme Disease and Bad Medicine

I got Cure Unknown: Inside the Lyme Epidemic (2008) by Pamela Weintraub from the library and found something surprising: an angry foreword. Weintraub is a science journalist; the foreword is by Hillary Johnson, another science journalist and apparently a friend of Weintraub’s.

In her anger, Johnson says several things I say on this blog.

The more Weintraub investigated, the more virtually everyone with a shred of authority was losing their credibility. . . The so-called “objective” scientists were sending an entire disease down the river and over the cliff [meaning they ignored it] for reasons that seemed frequently to have more to do with mere opinion and crass external forces — cash, prestige, careerism — than with scientific erudition.

She rejected the science writer’s inbred habit of relying on the government official with the highest pay grade or the scientist with a job at Harvard as the final word on a topic. . . . I think of her, with enormous respect, as a “recovered” science journalist.

As one who also suffers from the disease I chronicled with kindred passion in Osler’s Web, I sometimes wonder if the only investigative writers who will possess the necessary temerity to remove the white gloves and tackle those putative experts to the ground will be those . . . whose personal experience demands they follow the rocky trail that leads to the truth.

The last point is the most important, I think. You can curse the careerism of Harvard medical school professors and the servility of science writers but that does nothing, or not much, and what you are upset about (careerism and servility) is unsurprising. Less obvious, at least to me, is that there is a way to overcome the careerism and servility. It still surprises me that I was able to figure out interesting stuff about sleep, obesity, depression and so on that the experts in these fields hadn’t figured out — and that sometimes contradicted what they said. (For example, I found sugar isn’t fattening.) As Johnson says, one reason I was able to learn so much was that I wanted to sleep better, lose weight, be in a better mood, and so on. Unlike the experts in those fields, for whom research was a job.

Assorted Links

JFK Assassination Diary by Edward Jay Epstein

Edward Jay Epstein has just published a new book called The JFK Assassination Diary, based on the diary he kept while writing Inquest. It is available on Kindle, Nook, and as an iTunes ebook. It will soon be available in paperback.

He wrote me about it:

As you know I was the only person to interview the Warren Commission as well as its staff and liaisons with the intelligence services. I did these interviews as an undergraduate at Cornell with no credentials as a journalist, scholar, or author. My interviews also produced a revelation that shook the journalistic establishment, which had been blithely reporting until the publication of my book Inquest that the Commission had left no stone unturned in an exhaustive investigation. In fact, as I showed, it was a brief, sporadic, and incomplete investigation. Indeed one in which the senior staff lawyer in charge of the crime scene investigation quit after two days, and the young lawyer who took his place, Arlen Specter, was never able to view the single most crucial piece of evidence — the autopsy photographs. The Commission was never able to obtain them, nor other pieces of evidence, because Robert Kennedy blocked it. For the same reason, the Commission was not provided with any information about a parallel plot to kill Castro in 1963. The Commission could not connect dots to which it was denied access.

I had no problem getting this information. Many of the young lawyers on the staff were furious with the way the investigation had been handled and the time pressure imposed on them. So they gave me FBI reports, payroll records and their memos, without me even asking. This raises a question. As these lawyers and Commission members were not bound by any secrecy agreement, as amazing as that might seem nowadays, why had not journalists from major news organizations sought the same information from them? After all, in 1963, the Kennedy assassination was the crime of the century. Fifty years later, I still cannot answer this question.

A very good question. Why weren’t journalists from major news organizations more . . . enterprising? It is another variation on The Emperor’s New Clothes, where a Cornell undergraduate manages to see what many far more experienced and credentialed experts failed to see, or avoided seeing. I would answer Epstein’s question like this: The experts were uninterested in gathering evidence that might contradict their world view. That world view included a belief in the competence of exceedingly important government commissions. They didn’t want to gather evidence that might make them uncomfortable. I see this every year at Nobel Prize time. No journalist ever questions the claims in the press releases that accompany the prizes.

Assorted Links

  • How little is known about tinnitus
  • Michael Lewis on Greg Smith’s book. Published months ago. “The dystopia often imagined in the world of artificial intelligence—in which computers somehow take on a life of their own and come to rule mankind—has actually happened in the world of finance. The giant Wall Street firms have taken on lives of their own, beyond human control. The people flow into and out of them but have only incidental effect on their direction and behavior.”
  • The price of admission to the Chinese Academy of Sciences. “Businessmen seeking ministry contracts learned of Zhang’s nomination and offered to help. . . . Zhang, using a slush fund provided by the businessmen, cloistered 30 experts from mostly ministry-affiliated universities and research institutes in a hotel for 2 months, during which time they churned out three books on high-speed rail technology that were credited to Zhang.”
  • Why was Matthew Shepard killed? I have not yet read this book (I will) but it sounds so good I am happy to publicize it before that. It is being ignored. It supports a theme of Ron Unz and this blog, that lots of what we are told is wrong.
  • A graduate student at École polytechnique fédérale de Lausanne explains why he is leaving only a few months before finishing his Ph.D. His complaints about professional (academic) science resemble mine — for example, the dominant role of “will this help my career?” in all decisions.

Thanks to Joyce Cohen and Allan Jackson.

The Truth in Small Doses: Interview with Clifton Leaf (Part 2 of 2)

Part 1 of this interview about Leaf’s book The Truth in Small Doses: Why We’re Losing the War on Cancer — and How to Win It was posted yesterday.

SR You say we should “let scientists learn as they go”. For example, reduce the requirement that grant proposals test hypotheses. I agree. I think most scientists know very little about how to generate plausible ideas. If they were allowed to try to do this, as you propose, they would learn how to do it. However, I failed to find evidence in your book that a “let scientists learn as they go” strategy works better (leaving aside Burkitt). Did I miss something?

CL Honestly, I don’t think we know yet that such a strategy would work. What we have in the way of evidence is a historical control (to some extent, we did try this approach in pediatric cancers in the 1940s through the 1960s) and a comparator arm (the current system) that so far has been shown to be ineffective.

As I tried to show in the book, the process now isn’t working. And much of what doesn’t work is what we’ve added in the way of bad management. Start with a lengthy, arduous grant-application process that squelches innovative ideas, that funds barely 10 percent of a highly trained corps of academic scientists and demoralizes the rest, and that rewards the same applicants (and types of proposals) over and over despite little success or accountability. This isn’t the natural state of science. We BUILT that. We created it through bad management and lousy systems.

Same for where we are in drug development. We’ve set up clinical-trial rules that force developers to spend years ramping up expensive human studies to test for statistical significance, even when, the vast majority of the time, the question being asked is of little clinical significance. The human cost of this is enormous, as so many have acknowledged.

With regard to basic research, one has only to talk to young researchers (and examine the funding data) to see how badly skewed the grants process has become. As difficult (and sometimes inhospitable) as science has always been, it has never been THIS hard for a young scientist to follow up on questions that he or she thinks are important. In 1980, more than 40 percent of major research grants went to investigators under 40; today it’s less than 10 percent. For anyone asking provocative, novel questions (those that the study section doesn’t “already know the answer to,” as the saying goes), the odds of funding are even worse.

So, while I can’t say for sure that an alternative system would be better, I believe that given the current state of affairs, taking a leap into the unknown might be worth it.

SR I came across nothing about how it was discovered that smoking causes lung cancer. Why not? I would have thought we can learn a lot from how this discovery was made.

CL I wish I had spent more time on smoking. I mention it a few times in the book. In discussing Hoffman (pg. 34, and footnote, pg. 317), I say:

He also found more evidence to support the connection of “chronic irritation” from smoking with the rise in cancers of the mouth and throat. “The relation of smoking to cancer of the buccal [oral] cavity,” he wrote, “is apparently so well established as not to admit of even a question of doubt.” (By 1931, he would draw an unequivocal link between smoking and lung cancer—a connection it would take the surgeon general an additional three decades to accept.)

And I make a few other brief allusions to smoking throughout the book. But you’re right, I gave this preventable scourge short shrift. Part of why I didn’t spend more time on smoking was that I felt its role in cancer was well known, and by now, well accepted. Another reason (though I won’t claim it’s an excusable one) is that Robert Weinberg did such a masterful job of talking about this discovery in “Racing to the Beginning of the Road,” which I consider to be the single best book on cancer.

I do talk about Weinberg’s book in my own, but I should have singled out his chapter on the discovery of this link (titled “Smoke and Mirrors”), which is as much a story of science as it is a story of scientific culture.

SR Overall you say little about epidemiology. You write about Burkitt but the value of his epidemiology is unclear. Epidemiology has found many times that there are big differences in cancer rates between different places (with different lifestyles). This suggests that something about lifestyle has a big effect on cancer rates. This seems to me a very useful clue about how to prevent cancer. Why do you say nothing about this line of research (lifestyle epidemiology)?

CL Seth, again, I agree. I don’t spend enough time discussing the role that good epidemiology can play in cancer prevention. In truth, I had an additional chapter on the subject, which began by discussing decades of epidemiological work linking the herbicide 2,4-D with various cancers, particularly with prostate cancer in the wheat-growing states of the American West (Montana, the Dakotas, and Minnesota). I ended up cutting the chapter in an effort to make the book a bit shorter (and perhaps faster). But maybe that was a mistake.

For what it’s worth, I do believe that epidemiology is an extremely valuable tool for cancer prevention.

[End of Part 2 of 2]

The Truth in Small Doses: Interview with Clifton Leaf (Part 1 of 2)

I found a lot to like and agree with in The Truth in Small Doses: Why We’re Losing the War on Cancer — and How to Win It by Clifton Leaf, published recently. It grew out of a 2004 article in Fortune in which Leaf described poor results from cancer research and said that cancer researchers work under a system that “rewards academic achievement and publication over all else” — in particular, over “genuine breakthroughs.” I did not agree, however, with his recommendations for improvement, which seemed to reflect the same thinking that got us here. It reminded me of President Obama putting the people who messed up the economy in charge of fixing it. However, Leaf had spent a lot of time on the book, obviously cared deeply, and had freedom of speech (as far as I can tell, he doesn’t have to worry about offending anyone), so I wondered how he would defend his point of view.

Here is Part 1 of an interview in which Leaf answered written questions.

SR Let me begin by saying I think the part of the book that describes the problem – little progress in reducing cancer – is excellent. You do a good job of contrasting the amount of time and money spent with progress actually made and pointing out that the system seems designed to produce papers rather than progress. What I found puzzling is the part about how to do better. That’s what I want to ask you about.

In the Acknowledgements, you say Andy Grove said “a few perfect words” that helped shape your thesis. What were those words?

CL “It’s like a Greek tragedy. Everybody plays his individual part to perfection, everybody does what’s right by his own life, and the total just doesn’t work.” Andy had come to a meeting at Fortune, mostly just to chat. I can’t remember what the main topic of conversation was, but when I asked him a question about progress in the war on cancer, he said the above. (I quote this in the 2004 piece I wrote for Fortune.)

SR You praise Michael Sporn. His great contribution, you say, is an emphasis on prevention. I have a hard time seeing this as much of a contribution. The notion that “an ounce of prevention is worth a pound of cure” is ancient. What progress has Sporn made in the prevention of anything?

CL Would it be alright, Seth, if before I answer the question, I bring us back to what I said in the book? Because I think the point I was trying to make — successfully or not (and I’m guessing you would conclude “not” here) — is more nuanced than “an ounce of prevention is worth a pound of cure.”

Here’s what I see as the key passage regarding Dr. Sporn (pgs. 133-135):

For all his contributions to biology, biochemistry, and pharmacology, though, Sporn is still better known for something else. Rather than any one molecular discovery, it is an idea. The notion is so straightforward—so damned obvious, really—that it is easy to forget how revolutionary it was when he first proposed it in the mid-1970s: cancer, Sporn contended, could (and should) be chemically stopped, slowed, or reversed in its earliest preinvasive stages.

That was it. That was the whole radical idea.

Sporn was not the first to propose such an idea. Lee Wattenberg at the University of Minnesota had suggested the strategy in 1966 to little response. But Sporn refined it, pushed it, and branded it: To distinguish such intervention from the standard form of cancer treatment, chemotherapy—a therapy that sadly comes too late for roughly a third of patients to be therapeutic—he coined the term chemoprevention in 1976.

The name stuck.

On first reading, the concept might seem no more than a truism. But to grasp the importance of chemoprevention, one has first to dislodge the mind-set that has long reigned over the field of oncology: that cancer is a disease state. “One has cancer or one doesn’t.” Such a view, indeed, is central to the current practice of cancer medicine: oncologists today discover the event of cancer in a patient and respond—typically, quite urgently. This thinking is shared by patients, the FDA, drug developers, and health insurers (who decide what to pay for). This is the default view of cancer.

And, to Sporn, it is dead wrong. Cancer is not an event or a “state” of any kind. The disease does not suddenly come into being with a discovered lump on the mammogram. It does not begin with the microscopic lesion found on the chest X-ray. Nor when the physician lowers his or her voice and tells the patient, “I’m sorry. The pathology report came back positive. . . . You have cancer.”

Nor does the disease begin, says Sporn, when the medical textbooks say it does: when the first neoplastic cell breaks through the “basement membrane,” the meshwork layers of collagen and other proteins that separate compartments of bodily tissue. In such traditional thinking, it matters little whether a cell, or population of cells, has become immortalized through mutation. Or how irregular or jumbled the group might look under the microscope. Or how otherwise disturbed their genomes are. As long as none of the clones have breached the basement membrane, the pathology is not (yet) considered “cancer.”

For more than a century, this barrier has been the semantic line that separates the fearsome “invader” from the merely “abnormal.” It is the Rubicon of cancer diagnosis. From the standpoint of disease mechanics, the rationale is easy to understand, because just beyond this fibrous gateway are fast-moving channels (the blood and lymphatic vessels) that can conceivably transport a predatory cell, or cells, to any terrain in the body. Busting through the basement is therefore a seeming leap past the point of no return, a signal that a local disturbance is potentially emerging into a disseminating mob.*

But while invasion may define so-called clinical cancer for legions of first-year medical students, it is by no means the start of the pathology. Cancer is not any one act; it is a process. It begins with the first hints of subversion in the normal differentiation of a cell—with the first disruption of communication between that cell and its immediate environment. There is, perhaps, no precise moment of conception in this regard, no universally accepted beginning—which makes delineating the process that much harder. But most, if not all, types of “cancer” have their own somewhat recognizable stages of evolution along the route to clinically apparent disease.

“Saying it’s not cancer until the cells are through the basement membrane,” says Sporn, “is like saying the barn isn’t on fire until there are bright red flames coming out of the roof. It’s absolute nonsense!”

(Sorry for that long excerpt.) I think that Dr. Sporn’s greatest contribution was to reframe cancer as a continually evolving, dynamic process — carcinogenesis — rather than an event or state of being. And it was one that, conceivably at least, we could interrupt — and interrupt earlier than at the point at which it was clinically manifested. This was distinct from early detection, which, while effective to some extent and in some cancers, was both detecting cancers too late and “catching” many lesions that weren’t likely to develop any further (or didn’t really exist to begin with), adding to the already-great cancer burden.

There was a potential, said Sporn, to intervene in a way that might stop developing cancers in their tracks, and yet would not necessarily have to add to the burden of cancer overtreatment.

As I spend most of Chapter 7 discussing, there are enormous barriers to pulling this off—and I did my best to lay out the challenges. But I do believe that this is the way to go in the end.

SR You praise Kathy Giusti for her effect on multiple myeloma research. I couldn’t find the part where that research (“a worthy model for cancer research that can serve as a guidepost for the future . . . that teaches everything there is to teach about the power of collaborative science”, p. 260) came up with something useful.

CL Seth, sorry, this again may be me not being very clear in my writing. I apologize for that. But the lines you cite are actually intended to set up the Burkitt story in the following chapter. It was Burkitt’s effort against the mysterious African lymphoma that remains, in my view, “a worthy model for cancer research…”

SR You praise Burkitt’s epidemiology. How did that epidemiology help find out that Burkitt’s lymphoma responds to certain drugs? I couldn’t see a connection.

CL Good question. I think Burkitt’s very old-fashioned epidemiological investigation identified a widespread, terrible cancer that had been seen many times, but not noticed for what it was. It helped narrow down who was getting this cancer and—at least in a broad, geographical sense—why. But it wasn’t epidemiology that helped discover that this lymphoma was responsive to certain drugs—that was trial and error. As with the case of Farber and ALL [acute lymphocytic leukemia], many today would blanch at the primitive experimental protocols that tested these toxic drugs in children. But with an extraordinarily aggressive tumor that was killing these kids in weeks, Burkitt felt he had to try something. Again, that’s not epidemiology, but it is an understanding of the urgency of this disease that we can, perhaps, learn from.

[End of Part 1 of 2]

What I’m Reading

  • Republic of Outsiders: The Power of Amateurs, Dreamers, and Rebels by Alissa Quart
  • Brilliant Blunders: From Darwin to Einstein, Colossal Mistakes by Great Scientists that Changed our Understanding of Life and the Universe by Mario Livio
  • Fate of the States: The New Geography of American Prosperity by Meredith Whitney
  • Americanah by Chimamanda Ngozi Adichie, which might set the record for the largest value of letters in the author’s name (22) minus letters in the title (10) = 12 (see the quick count after this list)
  • proofreading part of a new book by Edward Jay Epstein, described as a “Kennedy assassination diary”
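
For the curious, here is a minimal Python sketch of that letter arithmetic (purely illustrative; it counts alphabetic characters only, and the letter_count helper is mine, not from any of these books):

    def letter_count(text: str) -> int:
        # Count only alphabetic characters, ignoring spaces and punctuation.
        return sum(1 for ch in text if ch.isalpha())

    author = "Chimamanda Ngozi Adichie"
    title = "Americanah"

    print(letter_count(author))                         # 22
    print(letter_count(title))                          # 10
    print(letter_count(author) - letter_count(title))   # 12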

Assorted Links

Thanks to Phil Alexander and Casey Manion.

Give and Take by Adam Grant

The publisher sent me a copy of Give and Take by Adam Grant after I sent several emails asking for a review copy. I expected it to be the best book about psychology in many years and it is.

The book’s main theme is the non-obvious advantages of being a “giver” (someone who helps others without concern about payback). Grant teaches at Wharton, whose students apparently arrive believing (or are taught there?) that this is a poor strategy. With dozens of studies and stories, Grant argues that the truth is more complicated — that a giver, properly focused, does better than others. Whether this reflects cause and effect (Grant seems to say it does) I have no idea. Perhaps “givers” are psychologically unusually sophisticated in many ways, not just in their relaxed attitude toward payback, and that is why some of them do very well.

I was more impressed with two other things where cause and effect is clearer. One is a story about communication style. It is the best story in a book full of good stories. About ten years ago, Grant was asked to teach senior military officers how to motivate their troops. His first class was a four-hour lecture to Air Force colonels in their forties and fifties. Grant was 24. The feedback forms, filled out by the students after the class, reflected the age — and presumably wisdom — discrepancy. One comment was: “More quality information in audience than on podium.”

Grant taught the class again, to another group of Air Force colonels. Instead of talking about his credentials at the start of the class, he began like this:

I know what some of you are thinking right now: What can I possibly learn from a professor who’s twelve years old?

Everyone laughed. Grant does not say what he said next — how he answered the question. He went on to give the same lecture he had given before. The difference in feedback was “night and day”. Here is one of the comments: “Spoke with personal experience. He was the right age! High energy; clearly successful already.”

This is great. A non-obvious, seemingly small change produces a huge difference in outcome. Grant clearly understands something enormously important about communication that isn’t found in other psychology books, such as introductory textbooks. It isn’t easy to interpret (why exactly did Grant’s new opening have its effect?) or to study experimentally — but that’s fine. In Give and Take, Grant follows this story with research on what is called “the pratfall effect”: under some circumstances, making a blunder (such as spilling a cup of coffee) makes a speaker more likeable. But Grant’s opening (“what can I learn…”) isn’t a blunder. Grant calls it an “expression of vulnerability”, a category broad enough to include pratfalls — fair enough.

What can we learn from Grant’s story? Above all, that something mysterious and powerful happens, or might happen, at the beginning of a talk, and that ordinary feedback forms are sensitive enough to detect it. What Grant did was highly specific to the situation (young speaker, older military officers), so you can’t copy it. To use it you really have to grasp the general rule, which remains to be determined.

Tomorrow I will blog about another impressive part of the book.