The Link Between Lead and Crime

In the 1960s, a Caltech geochemist named Clair Patterson made the case that there had been worldwide contamination of living things by lead, due to the lead in gasoline. There were great increases in the amount of lead in fish and human skeletons, for example. More than anyone else, he was responsible for the elimination of lead in gasoline. (By coincidence, this was just shown on the new Cosmos TV series.) A professor of pediatrics at the University of Pittsburgh named Herbert Needleman did some of the most important toxicology, linking lead exposure (presumably from paint) to IQ in children. Children with more lead in their teeth had lower IQ scores. The importance of this finding is shown by the fact that he was accused of scientific misconduct.

When lead was eliminated from gasoline, blood levels of lead went down — and so did crime. The idea that childhood lead exposure causes crime many years later explains so many otherwise-hard-to-explain facts, especially worldwide declines in crime rates, that I conclude it’s true: lead exposure does cause criminality. Kevin Drum wrote a long article about this in Mother Jones a year ago and followed up his original article in many ways. A BBC radio show yesterday covered the topic.

This interests me for two reasons. One is simple. It shows the value of monitoring your own brain function by using something like the brain test I have often blogged about — e.g., to notice that butter made me smarter or mercury in my teeth fillings made me stupider. There’s still lots of lead in the world — in old windowpanes, for example. And you are exposed to thousands of other modern chemicals (e.g., in cleaning products) whose effects on your brain are essentially unknown.
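
For concreteness, here is a minimal sketch of the kind of test I mean: timed simple arithmetic, with each day's average reaction time appended to a file so that changes over weeks stand out. The particulars here (multiplication problems, twenty trials, the file name) are illustrative choices, not the exact test I use.

```python
# Minimal sketch of a daily brain test: timed simple arithmetic.
# Each day's mean reaction time is appended to a CSV file so that
# day-to-day changes (diet, fillings, household chemicals) show up
# as shifts in the average. Problem count and file name are arbitrary.
import csv
import random
import time
from datetime import date

N_PROBLEMS = 20
LOG_FILE = "brain_test_log.csv"

def one_trial():
    a, b = random.randint(2, 9), random.randint(2, 9)
    start = time.time()
    answer = input(f"{a} x {b} = ")
    elapsed = time.time() - start
    correct = answer.strip() == str(a * b)
    return elapsed, correct

def run_session():
    times = []
    for _ in range(N_PROBLEMS):
        elapsed, correct = one_trial()
        if correct:  # only score correct answers
            times.append(elapsed)
    mean_rt = sum(times) / len(times) if times else float("nan")
    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(), round(mean_rt, 3)])
    print(f"Mean reaction time today: {mean_rt:.2f} s")

if __name__ == "__main__":
    run_session()
```

A stopwatch and a notebook would work just as well; the point is a quick, repeatable measurement taken every day.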

The other reason is complicated. It involves the context of this discovery. Mostly, the health research establishment has been unable to get anything right. Heart disease has been the #1 killer for decades; doctors still claim (and vast numbers of people, including New York Times health writers, believe them) that it is caused by cholesterol. Depression and bipolar disorder might be the greatest single source of suffering nowadays — and psychiatrists are still claiming they are caused by a “chemical imbalance” in the brain. (For my view of what causes depression, see this.) Beyond figuring out that lung cancer is caused by smoking, there has been almost no progress in understanding what causes cancer. The “oncogene theory” of cancer turned out to be a dead end. There have been little bits of progress here and there, but on the big issues there has been nonsense decade after decade — and no realization that it is nonsense.

In contrast, taking lead out of gasoline was a big step forward in public health, and pointing out the link to crime was a big step forward in understanding crime. Rare examples of progress. What can I learn from that? I have stressed the importance of insider/outsiders — people close enough to understand but far enough away to have freedom. The lead/crime case supports that. Clair Patterson was a geochemist, not a toxicologist. Rick Nevin, the first person to argue that lead causes crime, was an economist, not a criminologist. Both of them had a good methodological understanding and used it to shed light on an area outside their original training. (Obviously I have used my background in experimental psychology, especially my methodological knowledge — how to experiment, how to measure brain function — to shed light on many health questions.) The lead/crime link also supports my view that the notion that “correlation does not equal causation” does more harm than good. The immediate response of many, many people to the lead/crime evidence was exactly that — putting them on what turned out to be the wrong side. Whatever truth “correlation does not equal causation” might contain is outweighed by the damage it does when it is used to ignore evidence. How smart do you have to be to realize that “correlation does not equal causation” is stupid? To me, “don’t ignore evidence” is the most important principle of science. But many university professors don’t agree with me.

I’m also impressed — in a good way — by Drum’s article. At least it exists. Anyone can read it and then look further, for example at the original scientific articles. I wouldn’t say it was easy to write, but it did not require expensive travel, extensive interviews, or months of research. It did require original thinking. In contrast, the New York Times and The New Yorker, which do allow expensive, time-consuming journalism, haven’t published anything nearly as good in decades. The New York Times’s idea of high-quality journalism seems to be a series about the high cost of health care, while The New Yorker weighs in on the harm done by Dr. Mehmet Oz.

Thanks to James Keller.

Progress in Psychiatry and Psychotherapy: The Half-Full Glass

Here is an excellent introduction to cognitive-behavioral therapy (CBT) for depression, centering on a Stanford psychiatrist named David Burns. I was especially interested in this:

[Burns] currently draws from at least 15 schools of therapy, calling his methodology TEAM—for testing, empathy, agenda setting and methods. . . . Testing means requiring that patients complete a short mood survey before and after each therapy session. In Chicago, Burns asks how many of the therapists [in the audience] do this. Only three [out of 100] raise their hands. Then how can they know if their patients are making progress? Burns asks. How would they feel if their own doctors didn’t take their blood pressure during each check-up?

Burns says that in the 1970s at Penn [where he learned about CBT], “They didn’t measure because there was no expectation that there would be a significant change in a single session or even over a course of months.” Forty years later, it’s shocking that so little attention is paid to measuring whether therapy makes a difference. . . . “Therapists falsely believe that their impression or gut instinct about what the patient is feeling is accurate,” says May [a Stanford-educated Bay Area psychiatrist], when in fact their accuracy is very low.

When I was a graduate student, I started measuring my acne. One day I told my dermatologist what I’d found. “Why did you do that?” he asked. He really didn’t know. Many years later, an influential psychiatrist — Burns, whose Feeling Good book, a popularization of CBT, has sold millions of copies — tells therapists to give patients a mood survey. That’s progress.

But it is also a testament to the backward thinking of doctors and therapists that Burns didn’t tell his audience:

–have patients fill out a mood survey every day
–graph the results

Even more advanced:

–use the mood scores to measure the effects of different treatments

Three cheap safe things. It is obvious they would help patients. Apparently Burns doesn’t do these things with his own patients, even though his own therapy (TEAM) stresses “testing” and “methods”. It’s 2013. Not only do psychiatrists and therapists not do these things, they don’t even think of doing them. I seem to be the first to suggest them.
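
To show how little these three things would require, here is a minimal sketch, assuming mood is recorded once a day as a 0–10 score in a simple file along with a label for whatever treatment is in effect. The file name, the 0–10 scale, and the column layout are illustrative assumptions, not anyone's published protocol.

```python
# Minimal sketch: daily mood scores, a graph, and a comparison of
# treatments. Assumes a CSV with columns: date, score (0-10), treatment.
import csv
from collections import defaultdict
from datetime import datetime
import matplotlib.pyplot as plt

LOG_FILE = "mood_log.csv"  # illustrative file name

def load(path=LOG_FILE):
    rows = []
    with open(path, newline="") as f:
        for date_str, score, treatment in csv.reader(f):
            rows.append((datetime.strptime(date_str, "%Y-%m-%d"),
                         float(score), treatment))
    return rows

def graph(rows):
    # Steps 1 and 2: daily survey scores, graphed over time
    dates = [r[0] for r in rows]
    scores = [r[1] for r in rows]
    plt.plot(dates, scores, marker="o")
    plt.ylabel("Mood (0-10)")
    plt.title("Daily mood")
    plt.show()

def compare_treatments(rows):
    # Step 3: mean mood score under each treatment
    by_treatment = defaultdict(list)
    for _, score, treatment in rows:
        by_treatment[treatment].append(score)
    for treatment, scores in by_treatment.items():
        print(f"{treatment}: mean mood {sum(scores)/len(scores):.2f} "
              f"over {len(scores)} days")

if __name__ == "__main__":
    data = load()
    graph(data)
    compare_treatments(data)
```

A spreadsheet would do the same job; the point is the daily number, the graph, and the comparison, not the code.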

Thanks to Alex Chernavsky.

Stagnation in Psychiatry

A recent New York Times article lays it out:

Fully 1 in 5 Americans take at least one psychiatric medication. Yet when it comes to mental health, we are facing a crisis in drug innovation. . . . Even though 25 percent of Americans suffer from a diagnosable mental illness in any year, there are few signs of innovation from the major drug makers.

The author has no understanding of the stagnation, yet is opinionated:

The simple answer [to what is causing the stagnation] is that we don’t yet understand the fundamental cause of most psychiatric disorders [what does “fundamental cause” mean? — Seth], in part because the brain is uniquely difficult to study; you can’t just biopsy the brain and analyze it. That is why scientists have had great trouble identifying new targets for psychiatric drugs.

The great increase in depression has an environmental cause, meaning that depressed brains (aside from the effects of depression) are the same as non-depressed brains; there is no brain defect to find. Someone who knows that would not talk about biopsying the brain.

You come to a room with a door. If you don’t know how a door works, you are going to do a lot of damage getting inside. That is modern psychiatry. I described a new explanation for depression in this article (see Example 2).

Thanks to Alex Chernavsky.

The Truth in Small Doses: Interview with Clifton Leaf (Part 2 of 2)

Part 1 of this interview about Leaf’s book The Truth in Small Doses: Why We’re Losing the War on Cancer — and How to Win It was posted yesterday.

SR You say we should “let scientists learn as they go”: for example, reduce the requirement that grant proposals be framed as tests of hypotheses. I agree. I think most scientists know very little about how to generate plausible ideas. If they were allowed to try to do this, as you propose, they would learn how to do it. However, I failed to find evidence in your book that a “let scientists learn as they go” strategy works better (leaving aside Burkitt). Did I miss something?

CL Honestly, I don’t think we know yet that such a strategy would work. What we have in the way of evidence is a historical control (to some extent, we did try this approach in pediatric cancers in the 1940s through the 1960s) and a comparator arm (the current system) that so far has been shown to be ineffective.

As I tried to show in the book, the process now isn’t working. And much of what doesn’t work is what we’ve added in the way of bad management. Start with a lengthy, arduous grant-application process that squelches innovative ideas, that funds barely 10 percent of a highly trained corps of academic scientists and demoralizes the rest, and that rewards the same applicants (and types of proposals) over and over despite little success or accountability. This isn’t the natural state of science. We BUILT that. We created it through bad management and lousy systems.

Same for where we are in drug development. We’ve set up clinical trial rules that force developers to spend years ramping up expensive human studies to test for statistical significance, even when, the vast majority of the time, the question being asked is of little clinical significance. The human cost of this is enormous, as so many have acknowledged.

With regard to basic research, one has only to talk to young researchers (and examine the funding data) to see how badly skewed the grants process has become. As difficult (and sometimes inhospitable) as science has always been, it has never been THIS hard for a young scientist to follow up on questions that he or she thinks are important. In 1980, more than 40 percent of major research grants went to investigators under 40; today it’s less than 10 percent. For anyone asking provocative, novel questions (those that the study section doesn’t “already know the answer to,” as the saying goes), the odds of funding are even worse.

So, while I can’t say for sure that an alternative system would be better, I believe that given the current state of affairs, taking a leap into the unknown might be worth it.

SR I came across nothing about how it was discovered that smoking causes lung cancer. Why not? I would have thought we can learn a lot from how this discovery was made.

CL I wish I had spent more time on smoking. I mention it a few times in the book. In discussing Hoffman (pg. 34, and footnote, pg. 317), I say:

He also found more evidence to support the connection of “chronic irritation” from smoking with the rise in cancers of the mouth and throat. “The relation of smoking to cancer of the buccal [oral] cavity,” he wrote, “is apparently so well established as not to admit of even a question of doubt.” (By 1931, he would draw an unequivocal link between smoking and lung cancer—a connection it would take the surgeon general an additional three decades to accept.)

And I make a few other brief allusions to smoking throughout the book. But you’re right, I gave this preventable scourge short shrift. Part of why I didn’t spend more time on smoking was that I felt its role in cancer was well known, and by now, well accepted. Another reason (though I won’t claim it’s an excusable one) is that Robert Weinberg did such a masterful job of talking about this discovery in “Racing to the Beginning of the Road,” which I consider to be the single best book on cancer.

I do talk about Weinberg’s book in my own, but I should have singled out his chapter on the discovery of this link (titled “Smoke and Mirrors”), which is as much a story of science as it is a story of scientific culture.

SR Overall you say little about epidemiology. You write about Burkitt but the value of his epidemiology is unclear. Epidemiology has found many times that there are big differences in cancer rates between different places (with different lifestyles). This suggests that something about lifestyle has a big effect on cancer rates. This seems to me a very useful clue about how to prevent cancer. Why do you say nothing about this line of research (lifestyle epidemiology)?

CL Seth, again, I agree. I don’t spend enough time discussing the role that good epidemiology can play in cancer prevention. In truth, I had an additional chapter on the subject, which began by discussing decades of epidemiological work linking the herbicide 2,4-D with various cancers, particularly with prostate cancer in the wheat-growing states of the American west (Montana, the Dakotas and Minnesota). I ended up cutting the chapter in an effort to make the book a bit shorter (and perhaps faster). But maybe that was a mistake.

For what it’s worth, I do believe that epidemiology is an extremely valuable tool for cancer prevention.

[End of Part 2 of 2]

The Truth in Small Doses: Interview with Clifton Leaf (Part 1 of 2)

I found a lot to like and agree with in The Truth in Small Doses: Why We’re Losing the War on Cancer — and How to Win It by Clifton Leaf, published recently. It grew out of a 2004 article in Fortune in which Leaf described poor results from cancer research and said that cancer researchers work under a system that “rewards academic achievement and publication over all else” — in particular, over “genuine breakthroughs.” I did not agree, however, with his recommendations for improvement, which seemed to reflect the same thinking that got us here. It reminded me of President Obama putting the people who messed up the economy in charge of fixing it. However, Leaf had spent a lot of time on the book, obviously cared deeply, and had freedom of speech (he doesn’t have to worry about offending anyone, as far as I can tell), so I wondered how he would defend his point of view.

Here is Part 1 of an interview in which Leaf answered written questions.

SR Let me begin by saying I think the part of the book that describes the problem – little progress in reducing cancer – is excellent. You do a good job of contrasting the amount of time and money spent with progress actually made and pointing out that the system seems designed to produce papers rather than progress. What I found puzzling is the part about how to do better. That’s what I want to ask you about.

In the Acknowledgements, you say Andy Grove said “a few perfect words” that helped shape your thesis. What were those words?

CL “It’s like a Greek tragedy. Everybody plays his individual part to perfection, everybody does what’s right by his own life, and the total just doesn’t work.” Andy had come to a meeting at Fortune, mostly just to chat. I can’t remember what the main topic of conversation was, but when I asked him a question about progress in the war on cancer, he said the above. (I quote this in the 2004 piece I wrote for Fortune.)

SR You praise Michael Sporn. His great contribution, you say, is an emphasis on prevention. I have a hard time seeing this as much of a contribution. The notion that “an ounce of prevention is worth a pound of cure” is ancient. What progress has Sporn made in the prevention of anything?

CL Would it be alright, Seth, if before I answer the question, I bring us back to what I said in the book? Because I think the point I was trying to make — successfully or not (and I’m guessing you would conclude “not” here) — is more nuanced than “an ounce of prevention is worth a pound of cure.”

Here’s what I see as the key passage regarding Dr. Sporn (pgs. 133-135):

For all his contributions to biology, biochemistry, and pharmacology, though, Sporn is still better known for something else. Rather than any one molecular discovery, it is an idea. The notion is so straightforward—so damned obvious, really—that it is easy to forget how revolutionary it was when he first proposed it in the mid-1970s: cancer, Sporn contended, could (and should) be chemically stopped, slowed, or reversed in its earliest preinvasive stages.

That was it. That was the whole radical idea.

Sporn was not the first to propose such an idea. Lee Wattenberg at the University of Minnesota had suggested the strategy in 1966 to little response. But Sporn refined it, pushed it, and branded it: To distinguish such intervention from the standard form of cancer treatment, chemotherapy—a therapy that sadly comes too late for roughly a third of patients to be therapeutic—he coined the term chemoprevention in 1976.

The name stuck.

On first reading, the concept might seem no more than a truism. But to grasp the importance of chemoprevention, one has first to dislodge the mind-set that has long reigned over the field of oncology: that cancer is a disease state. “One has cancer or one doesn’t.” Such a view, indeed, is central to the current practice of cancer medicine: oncologists today discover the event of cancer in a patient and respond—typically, quite urgently. This thinking is shared by patients, the FDA, drug developers, and health insurers (who decide what to pay for). This is the default view of cancer.

And, to Sporn, it is dead wrong. Cancer is not an event or a “state” of any kind. The disease does not suddenly come into being with a discovered lump on the mammogram. It does not begin with the microscopic lesion found on the chest X-ray. Nor when the physician lowers his or her voice and tells the patient, “I’m sorry. The pathology report came back positive. . . . You have cancer.”

Nor does the disease begin, says Sporn, when the medical textbooks say it does: when the first neoplastic cell breaks through the “basement membrane,” the meshwork layers of collagen and other proteins that separate compartments of bodily tissue. In such traditional thinking, it matters little whether a cell, or population of cells, has become immortalized through mutation. Or how irregular or jumbled the group might look under the microscope. Or how otherwise disturbed their genomes are. As long as none of the clones have breached the basement membrane, the pathology is not (yet) considered “cancer.”

For more than a century, this barrier has been the semantic line that separates the fearsome “invader” from the merely “abnormal.” It is the Rubicon of cancer diagnosis. From the standpoint of disease mechanics, the rationale is easy to understand, because just beyond this fibrous gateway are fast-moving channels (the blood and lymphatic vessels) that can conceivably transport a predatory cell, or cells, to any terrain in the body. Busting through the basement is therefore a seeming leap past the point of no return, a signal that a local disturbance is potentially emerging into a disseminating mob.*

But while invasion may define so-called clinical cancer for legions of first-year medical students, it is by no means the start of the pathology. Cancer is not any one act; it is a process. It begins with the first hints of subversion in the normal differentiation of a cell—with the first disruption of communication between that cell and its immediate environment. There is, perhaps, no precise moment of conception in this regard, no universally accepted beginning—which makes delineating the process that much harder. But most, if not all, types of “cancer” have their own somewhat recognizable stages of evolution along the route to clinically apparent disease.

“Saying it’s not cancer until the cells are through the basement membrane,” says Sporn, “is like saying the barn isn’t on fire until there are bright red flames coming out of the roof. It’s absolute nonsense!”

(Sorry for that long excerpt.) I think that Dr. Sporn’s greatest contribution was to reframe cancer as a continually evolving, dynamic process — carcinogenesis — rather than an event or state of being. And it was one that, conceivably at least, we could interrupt — and interrupt earlier than at the point at which it was clinically manifested. This was distinct from early detection, which, while effective to some extent and in some cancers, was both detecting cancers too late and “catching” many lesions that weren’t likely to develop any further (or didn’t really exist to begin with), adding to the already-great cancer burden.

There was a potential, said Sporn, to intervene in a way that might stop developing cancers in their tracks, and yet would not necessarily have to add to the burden of cancer overtreatment.

As I spend most of Chapter 7 discussing, there are enormous barriers to pulling this off—and I did my best to lay out the challenges. But I do believe that this is the way to go in the end.

SR You praise Kathy Giusti for her effect on multiple myeloma research. I couldn’t find the part where that research (“a worthy model for cancer research that can serve as a guidepost for the future . . . that teaches everything there is to teach about the power of collaborative science”, p. 260) came up with something useful.

CL Seth, sorry, this again may be me not being very clear in my writing. I apologize for that. But the lines you cite actually are intended to set up the Burkitt story in the following chapter. It was Burkitt’s effort against the mysterious African lymphoma that remains, in my view, “a worthy model for cancer research…”

SR You praise Burkitt’s epidemiology. How did that epidemiology help find out that Burkitt’s lymphoma responds to certain drugs? I couldn’t see a connection.

CL Good question. I think Burkitt’s very old-fashioned epidemiological investigation identified a widespread, terrible cancer that had been seen many times, but not noticed for what it was. It helped narrow down who was getting this cancer and—at least in a broad, geographical sense—why. But it wasn’t epidemiology that helped discover that this lymphoma was responsive to certain drugs—that was trial and error. As with the case of Farber and ALL [acute lymphocytic leukemia], many today would blanch at the primitive experimental protocols that tested these toxic drugs in children. But with an extraordinarily aggressive tumor that was killing these kids in weeks, Burkitt felt he had to try something. Again, that’s not epidemiology, but it is an understanding of the urgency of this disease that we can, perhaps, learn from.

[End of Part 1 of 2]

Assorted Links

Thanks to Phil Alexander and Casey Manion.

Rent-Seeking Experts

Two thought-provoking paragraphs from Matt Ridley:

From ancient Egypt to modern North Korea, always and everywhere, economic planning and control have caused stagnation; from ancient Phoenicia to modern Vietnam, economic liberation has caused prosperity. In the 1960s, Sir John Cowperthwaite, the financial secretary of Hong Kong, refused all instruction from his LSE-schooled masters in London to plan, regulate and manage the economy of his poor and refugee-overwhelmed island. Set merchants free to do what merchants can, was his philosophy. Today Hong Kong has higher per capita income than Britain.

In July 1948 Ludwig Erhard, director of West Germany’s economic council, abolished food rationing and ended all price controls on his own initiative. General Lucius Clay, military governor of the US zone, called him and said: “My advisers tell me what you have done is a terrible mistake. What do you say to that?” Erhard replied: “Herr General, pay no attention to them! My advisers tell me the same thing.” The German economic miracle was born that day; Britain kept rationing for six more years.

This is standard libertarianism. I like the stories but I don’t agree with the interpretation. I don’t think it is “economic planning and control” that causes stagnation in these examples. I believe it is expertise — more precisely, rent-seeking experts who know too little and extract too much rent. There are libertarian experts, too. They too are capable of doing immense damage (e.g., Alan Greenspan), contradicting Ridley’s view that “economic liberation” always causes prosperity. In both of Ridley’s examples, the experts give advice that empowers the experts. In the first example, Cowperthwaite is told by “LSE-schooled” economists to “plan, regulate and manage the economy.” All that planning, regulation and management require expertise, in particular expertise similar to that of the experts who advised it. Which you cannot buy — you have to rent it. You must pay the experts year after year after year to plan, regulate, and manage. Because the advice must empower the experts, there is a strong bias away from truth. That is the fundamental problem.

Freud is the classic rent-seeking expert. You are sick because of X, Y, and Z — and if you pay me for my time week after week, I will cure you, said Freud. Curiously, no treatment that did not involve paying people like Freud would work. Curiously, psychoanalytic patients never got better. Therapy lasted forever. You might think this is transparently ridiculous, but professors at esteemed universities such as Berkeley still take Freud seriously. Millions of people pay for psychotherapy. The latest psychotherapeutic fad is cognitive-behavioral therapy — which again requires paying experts to get better, week after week. Berkeley professors take that seriously, too.

Evidence-based medicine advocates are among the newest rent-seeking experts. Like Freud, they focus on process (you must follow a certain process) rather than results. (What they call process in other contexts is called ritual. Rituals always empower experts.) Rather than trying to learn from all the evidence — which might seem like a good idea, and a simple one — evidence-based medicine advocates preach that only a tiny fraction of the evidence (which you need a Cochrane expert to select and analyze) can actually tell us anything. Again, this might seem transparently ridiculous, but many people take it seriously. Evidence-based medicine has an amusing twist which is that its advocates tell the rest of us how stupid we are (for example, “correlation does not equal causation”).

The workhorses of the rent-seeking expert ecology — the ones that extract the most rent — are doctors. They are incapable of giving inexpensive advice. However they propose to help you, it always involves expensive treatment. This might seem like a recipe for crummy solutions, but again many people take a doctor’s advice seriously (by failing to do their own research). My introduction to the world of rent-seeking solutions was the dermatologist who told me I should take antibiotics for my acne. I was to take the antibiotics week after week — and because I was taking a dangerous drug, I should also see my doctor regularly. During these regular visits, the doctor never figured out that the antibiotic did nothing to cure my acne. I learned that by self-experimentation.

Like anthropologists who fail to notice their own weird beliefs (a recently deceased Berkeley professor of anthropology took Freud seriously, for example), the profession that came up with the concept of rent-seeking has failed to notice that many of its own members do exactly that.

One clue that you are dealing with a rent-seeking expert is that they literally ask for something like rent. Religious experts tell you to attend church week after week. Psychotherapists want you to attend therapy week after week. Psychiatrists tell you to take an anti-depressant daily for the rest of your life. My dermatologist told me to take an antibiotic daily (and to renew the prescription, I needed to see him). And so on. As these examples suggest, rent-seeking experts thrive in areas of knowledge where our understanding is poor. Which includes economics.

“Rent-seeking experts” in education.

More: What I call “standard libertarianism” Tyler Cowen calls “crude libertarianism”. Maybe I should have called it “off-the-shelf libertarianism”. In addition to what Tyler says, which I agree with, I would say that governments and their “central planners” have sponsored innovation (e.g., the Internet, the greenback, basic scientific discoveries) much better than Ridley seems to give them credit for. Innovation is a huge part of economic development.

Movie Grosses and Nobel Prizes

In his new Columbia Journalism Review piece, Gross Misunderstanding, Edward Jay Epstein writes:

By focusing on the box-office race that is spoon-fed to them each week, journalists may entertain their audiences, but they are missing the real story.

Something similar happens with the Nobel Prizes. Journalists print what they are told — Scientists X and Y did beautiful “pure science” about this or that — and thereby miss the real story. In the case of Nobel Prizes in Medicine, the real story is the long-running lack of progress on major diseases (cancer, heart disease, depression, etc.).

Big Diet and Exercise Study Fails to Find Benefit

Persons with Type 2 diabetes have an increased risk of heart disease and stroke. They are usually overweight. A study of about 5,000 persons with Type 2 diabetes who were overweight or worse asked whether eating less and exercising — causing weight loss — would reduce the risk of heart disease and stroke. The difficult treatment caused a small amount of weight loss (5%), which was enough to reduce risk factors. The study ended earlier than planned because eating less and exercising didn’t help: “11 years after the study began, researchers concluded it was futile to continue — the two groups had nearly identical rates of heart attacks, strokes and cardiovascular deaths.”

Heart disease and stroke are major causes of death and disability. The failure of such an expensive study ($20 million?) to produce a clearly helpful result is an indication that mainstream health researchers don’t understand what causes heart disease and stroke. Another indication is that the treatment being studied (eating less and exercise) was popular in the 1950s. Mainstream thinking about weight control is stuck in the 1950s. It is entirely possible that greater weight loss — which mainstream thinking is unable to achieve — would have reduced heart disease and stroke. If you understand what causes heart disease and stroke, your understanding may lead you to lines of reasoning less obvious than “people with diabetes are overweight, therefore weight-loss treatments.”

One of the study organizers, Rena Wing, a Brown University professor who studies weight control, told a journalist, “you do a study because you don’t know the answer.” She failed to add, I’m sure, that wise people do not give a super-expensive car to someone who can’t drive. You should learn to drive with a cheap car. Allowing ignorant researchers to do a super-expensive study was a mistake. To learn something, do the cheapest, easiest study that will help. (As I have said many times.) You should not simply do “a study”. This principle was the most helpful thing I learned during my first ten years as a scientist. In this particular case, I doubt that a $20 million study was the cheapest, easiest way to learn how to reduce heart disease and stroke.

I made progress on weight control, sleep, and other things partly because studying myself allowed me to learn quickly and cheaply. If researchers understood what causes major health problems, they would be able to invent treatments with big benefits. That the Nobel Prize in Physiology or Medicine is given year after year to work that makes no progress on major health problems is another sign of the lack of understanding reflected in the failure of this study. I have never seen this lack of understanding — which has great everyday consequences — pointed out by any science blogger or science columnist or science journalist, many of whom describe themselves as “skeptical” and complain about “bad science.”

The 2012 Nobel Prize in Physiology or Medicine

As usual, there is plenty of disease and disability in the world: depression, diabetes, heart disease, cancer, stroke, obesity, autoimmune disease, and so on. As usual, the Nobel Prize in Physiology or Medicine — supposed to be given for the most useful research — is given for research with no proven benefit to anyone (except career-wise). Once again implying that the world’s best biomedical researchers — judging by who wins Nobel Prizes — either don’t want to or don’t know how to do useful research.

Once again the press release tries to hide this. “From surprising discovery to medical use” reads one heading. If you read the text, however, you learn there is no actual “medical use”. Here’s what it says:

These discoveries have also provided new tools for scientists around the world and led to remarkable progress in many areas of medicine. iPS cells can also be prepared from human cells. For instance, skin cells can be obtained from patients with various diseases, reprogrammed, and examined in the laboratory to determine how they differ from cells of healthy individuals. Such cells constitute invaluable tools for understanding disease mechanisms and so provide new opportunities to develop medical therapies.

Apparently you can make “remarkable progress” in medicine without helping a single person, which says a lot about what passes for medical progress. Although iPS cells are supposedly “invaluable tools” for understanding disease mechanisms, we are not told a single disease that has thereby been understood or a single therapy that has been developed.

The Guardian printed a roundup of responses to the award. I read it eagerly. Maybe one of the comments would explain how the prize-winning work actually helped someone (besides career-wise). After all, Yamanaka, one of the winners, had previously won the Finland Prize, given to research that “significantly improves the quality of human life today and for future generations”. Paul Nurse says the prize-winning work did such-and-such, “paving the way for important developments in the diagnosis and treatment of disease,” unfortunately without saying what those “important developments” are. Martin Evans says:

The practical outcome is that now we not only know that it might be theoretically possible to convert one cell type into another but it is also practically possible. These are very important foundation studies for future cellular therapies in medicine.

Emphasis added. Another comment: “These breakthroughs will ultimately lead to new and better treatments for conditions like Parkinson’s and improve the lives of millions of people around the world.” A bold prediction, given that they have not yet improved the life of even one person. Julian Savulescu, an ethicist at Oxford, says “This is as significant as the discovery of antibiotics. Given the millions, or more lives, which could be saved, this is a truly momentous award.”

Year after year, the Nobel Prize in Physiology or Medicine is given for research that, we are told by biologists with huge conflicts of interest, will — no doubt! — be incredibly valuable in the future. Indicating there was no research worth honoring that had already been useful. It is as if you have a baseball award for best hitter, but all hitters all over the world strike out all the time, so you end up giving the award to the people who strike out best. They are the best hitters, you tell credulous sportswriters. They receive the prestigious award for best hitter at an elaborate ceremony, with toasts all around. Nobody says they cannot hit.