Ten Interesting Things I Learned From Adventures in Nutritional Therapy

Adventures in Nutritional Therapy is a blog, started in March 2011, about what its author learned while trying to solve her health problems with nutrition and a few other things. She usually assumed a health problem was due to too much or too little of some nutrient. As she puts it, she used “mostly non-prescription, over-the-counter (OTC) supplements and treatments to address depression, brain fog, insomnia, migraines, hypothyroidism, restless legs, carpal tunnel syndrome, and a bunch of other annoyances” — in contrast to what “the American medical establishment” advises. Mostly it is nutritional self-experimentation on a wide range of health problems.

Interesting things I learned from the archives:

1. Question: Did Lance Armstrong take performance-enhancing drugs? I learned that LiveStrong (Armstrong’s site) is a content farm. Now answer that question again.

2. “If you return repeatedly to a conventional doctor with a problem they can’t solve, they will eventually suggest you need antidepressants.”

3. “When I mentioned [to Dr. CFS] the mild success I’d had with zinc, he said it was in my mind: I wanted it to work and it did. When I pointed out that 70% of the things I tried didn’t work, he changed the subject. Dr. CFS’ lack of basic reasoning skills did nothing to rebuild my confidence in the health care system.” Quite right. I have had the same experience. Most things I tried failed. When something finally worked, it could hardly be a placebo effect. This line of reasoning has been difficult for some supposedly smart people to grasp.

4. A list of things that helped her with depression. “Quit gluten” is number one.

5. Pepsi caused her to get acne. Same here.

6. 100 mg/day of iron caused terrible acne that persisted for weeks after she stopped taking the iron.

7. “In September 2008 I started a journey that serves as a good example of the limits of the American health care system, where you can go through three months, 15 doctor visits, $7,000 in medical tests, three prescriptions and five over-the-counter medications trying to treat your abdominal pain, and after you lose ten pounds due to said pain, you are asked by the “specialists” if you have an eating disorder.” I agree. Also an example of the inability of people within the American health care system to see those limits. If they recognized that people outside their belief system might have something valuable to contribute, apparently something awful would happen.

8. Acupuncture relieved her sciatica, but not for long. “By the time I left [the acupuncturist’s office] the pain was gone, but it crept back during my 30-minute drive home.”

9. Pointing out many wrongs does not equal a right. She praises a talk by Robert Lustig about evil fructose. I am quite sure that fructose (by itself) did not cause the obesity epidemic. For one thing, I lost a lot of weight by drinking it. (Here is an advanced discussion.) In other words, being a good critic of other people’s work (as Lustig may be) doesn’t get you very far. I think it is hard for non-scientists (and even some scientists) to understand that all scientific work has dozens of “flaws”. Pointing out the flaws in this or that is little help, unless those flaws haven’t been noticed. What usually helps isn’t seeing flaws, it is seeing what can be learned.

10. A list of what caused headaches and migraines. One was MSG. Another was Vitamin D3, because it made her Vitamin B1 level too low.

She is a good writer. Mostly I found support for my beliefs:

1. Of the two aspects of self-experimentation (measure, change), change is more powerful. She does little or no self-tracking (= keeping records), as far as I could tell, yet has made a lot of progress. She has tried a huge number of different things.

2. Nutritional deficiencies cause a lot of problems.

3. Fermented food is overlooked. She never tries it, in spite of major digestive problems. She does try probiotics.

4. American health care is exceedingly messed up. As she puts it, “the American medical establishment has no interest in this approach [which often helped her] and, when they do deign to discuss it, don’t know what the #%@! they’re talking about.”

5. “Over the years I’ve found accounts of personal experiences to be very helpful.” I agree. Her blog and mine are full of them.

Thanks to Alexandra Carmichael.

More: Her latest post mentions me (“The fella after my own heart is Seth Roberts, who after ten years of experimenting . . . ”). I was unaware of that when I wrote the above.

Duct Tape, the Eurozone, Status-Quo Bias, and Neglect of Innovation

In 1995, I visited my Swedish relatives. We argued about the Euro. They thought it was a good idea, I thought it had a serious weakness.

ME It ties together economies that are different.

MY AUNT It reduces the chance of war in Europe.

You could say we were both right. There have been no wars between Eurozone countries (supporting my aunt) and the Eurozone is now on the verge of breaking apart for exactly the reason I and many others pointed out (supporting me).

Last week a friend said to me that Europe was in worse shape than America. I was unconvinced. I said that I opposed Geithner’s “duct-tape solution”. It would have been better to let things fall apart and then put them back together in a safer way.

MY FRIEND Duct tape works.

ME What Geithner did helped those who benefit from the status quo and hurt those who benefit from change. Just like duct tape.

This struck me as utterly banal until I read a one-sided editorial in The Economist:

The consequences of the euro’s destruction are so catastrophic that no sensible policymaker could stand by and let it happen. . . . the threat of a disaster . . . can anything be done to avert disaster?

and similar remarks in The New Yorker (James Surowiecki):

The financial crisis in Europe . . . has now entered a potentially disastrous phase . . . with dire consequences not just for Europe but also for the rest of us. . . . This is that rarest of problems—one that you really can solve just by throwing money at it [= duct tape]

Wait a sec. What if the Eurozone is a bad idea? Like I (and many others) said in 1995? Why perpetuate a bad idea? Why drive further in the wrong direction? Sure, the dissolution will bring temporary trouble (“disaster”, “dire consequences”), but that will be a small price to pay for getting rid of a bad idea. Of course the Euro had/has pluses and minuses. Anyone who claimed to know that the pluses outweighed the minuses (or vice versa) was a fool or an expert. Now we know more. Given that what the nay-sayers said has come to pass, it is reasonable to think that they (or we) were right: the minuses outweigh the pluses.

You have seen the phrase Japan’s lost decade a thousand times. You have never seen the phrase Greece’s lost decade. But Greeks lost an enormous amount from being able to borrow money for stupid conventional projects at too low a rate. Had loans been less available, they would have been more original (the less debt involved, the easier it is to take risks) and started at a smaller scale. Which I believe would have been a better use of their time and led to more innovation. Both The Economist’s editorial writer and Surowiecki have a status-quo “duct-tape” bias without realizing it.

What’s important here is not what two writers, however influential their magazines, think or fail to think. It is that they are so sure of themselves. They fail to take seriously an alternative (breakup of the Eurozone would in the long run be a good thing) that has at least as much to recommend it as what they are sure of (the breakup would be a “disaster”). I believe they are so sure of themselves because they have absorbed (and now imitate) the hemineglect of modern economics. The whole field, they haven’t noticed, has an enormous status-quo bias in its failure to study innovation. Innovation — how new goods and services are invented and prosper — should be half the field. Let me repeat: A few years ago I picked up an 800-page introductory economics textbook. It had one page (one worthless page) on innovation. In this staggering neglect, it reflected the entire field. The hemineglect of economics professors is just as bad as the hemineglect of epidemiologists (who ignore immune function, study of what makes us better or worse at fighting off microbes) and statisticians (who pay almost no attention to idea generation).

MORE: Even Joe Nocera, whom I like, has trouble grasping that the Euro might be a bad idea. “The only thing that should matter is what works,” he writes. Not managing to see that the Euro isn’t working.

Vitamin D: More Reason to Take at Sunrise

I blogged earlier about what I called a “stunning discovery”: Primal Girl found her sleep got much better when she started taking Vitamin D first thing in the morning (= soon after she got up) rather than mid-afternoon. This suggested that Vitamin D acts on your circadian system much like a blast of sunlight. (More evidence and discussion here.) In his blog, Joseph Buchignani reports another experience supporting the idea that you should take Vitamin D first thing in the morning:

I picked up a bottle of Vit-D and Calcium. Dosage of Vit-D per pill was 1.6ud. Per the instructions, I took 1 at morning and 1 at night. I began this regimin on the night of the 24th of November. It’s now the night of the 25th of November, and my circadian rhythm is completely fucked. . . . I’m fully awake now (12:30 AM), and I probably took the last dose of Vit-D around 7-8 PM. . . . I woke up with dark eye rings on the morning of the 25th. My energy level did not rise as it should have, but sort of meandered in the middle, before finally tailing off. Stress levels and depression were both elevated. I got little productive done.

Yesterday I started taking Vitamin D first thing in the morning. I took 2000 IU of Vitamin D3 at 8 am. In the afternoon I felt more energetic than usual. The next morning (this morning) I woke up feeling more rested than usual. This also supports Primal Girl’s experience.

Let me repeat: first thing in the morning. If you wake up before sunrise, take it at sunrise (say, 7 am). Sunlight has a considerably different effect on your circadian system at 7 am than at 10 am. (Look up circadian phase-response curve, and especially the work of Patricia DeCoursey, if you want to understand why three hours makes a big difference.) I have two bottles of Vitamin D. Neither mentions time of day. Both say take with meals.
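
To make the timing point concrete, here is a toy phase-response curve in Python. This is my cartoon, not DeCoursey's measured data: the sinusoidal shape, the 1.5-hour amplitude, and the choice of 7 am as the peak are all invented for illustration. The qualitative behavior is the real point: the same stimulus advances the clock near dawn, does little at midday, and delays it in the evening, so a three-hour change in timing changes the effect.

```python
import math

def phase_shift_hours(stimulus_hour, dawn_hour=7.0):
    """Cartoon phase shift (hours) from a light-like stimulus at the given
    clock hour. Positive = clock advances, negative = clock delays.
    A made-up sinusoid for illustration, not real phase-response data."""
    t = (stimulus_hour - dawn_hour) / 24.0 * 2.0 * math.pi
    return 1.5 * math.cos(t)

for hour in (7, 10, 19):
    print(f"{hour:>2}:00 -> {phase_shift_hours(hour):+.2f} h")
# Output:  7:00 -> +1.50 h (strong advance)
#         10:00 -> +1.06 h (weaker)
#         19:00 -> -1.50 h (delay)
```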

Nobel Prize Report Card: Economics

The Nobel Prizes awarded each year resemble a kind of report card in which each prize-worthy discipline (Physics, Chemistry, etc.) gets a grade that depends on the prize-winning research. If the prize-winning research is useful and surprising, the grade is high. If not, the grade is low. More generally, at least to me, the intellectual history of the prize winners sheds light on the whole profession. Perhaps some biologists were unaware of the behavior of Eric Kandel described in Explorers of the Black Box when he was awarded the medicine prize. Kandel, I hasten to add, is an unusual case.

Thomas Sargent is one of the winners of this year’s Economics prize. In 2007, he gave a graduation speech at Berkeley to economics majors (via Marginal Revolution). In the speech, Sargent called economics “organized common sense”. He went on to list 12 common-sense ideas that economists believe, such as “Individuals and communities face trade-offs” and “governments and voters respond to incentives”. The reasons for their belief weren’t stated.

When I started as a professor (at Berkeley) I did many experiments with rats and, to my annoyance, discovered an inconvenient truth: I understood rats less well than I thought. Even in a heavily controlled, heavily studied situation (a Skinner box), my rats often did not do what I expected. My common sense was often wrong, in other words. This experience made me considerably more skeptical of other people’s “common sense”.

To me, and I think to most scientists, science begins with common sense. Experimental psychology certainly does. I used common sense to design my experiments. Had I not done those experiments, I would not have learned that my common sense was wrong. So relying on common sense was helpful — as a place to start. As a way to begin to understand. You begin with common-sense ideas and you test them. That common sense is often wrong is a theme of Freakonomics, in agreement with my experience. Yet Sargent seemed content (he called economics “our beautiful subject”) to end with common sense, perhaps tidied up.

This is really unfortunate, because economics, beautiful or not, is so important. If you ignore data, the answer to every hard question is the same: the most powerful people are right. That way lies stagnation (problems build up unsolved because powerful people prefer the status quo) and collapse (when the problems become overwhelming). Alan Greenspan’s faith-based belief in free markets, followed by the 2008 financial crisis (which came after Sargent’s speech), is an example. In 2009, Sargent’s speech might have been less well-received.

Edward Jay Epstein on Homeland

A new series on Showtime called Homeland is about a CIA agent (played by Claire Danes) who believes that a newly-released American prisoner of war may have been “turned” during his years in Iraqi captivity. In the first episode, she tries to find evidence to support her belief. Judging by that episode, it is very good.

I told Edward Jay Epstein about it — his book on James Angleton centers on CIA infiltration by “moles”. He commented:

What is interesting here is the schism between the fictional world and real world of counterespionage. In the former, it is an issue of “who”. Find the guilty man and arrest or kill him. In the real world, the issue is vulnerability. The bureaucracy has two choices: admit its methods are vulnerable to penetration and paralyze the organization, or deem the search for a mole to be paranoia and sick think. That latter course is what happens in the real world, alas. Some fiction writers understand this: Graham Greene in Human Factor and Le Carre in Smiley’s People.

Yes. If you go back in time, I predict you will find that the term kill the messenger arose at the same time as powerful organizations. I have a theory: only people who derive power from their place in big organizations want to kill the messenger (the one whose news shows that the organization has assumed something untrue). In other situations, bad news is less threatening. In health care, outside ideas are met by insiders, such as doctors, with where’s the double-blind placebo-controlled study? As Epstein says, the dismissiveness is partly motivated by fear: fear that something is wrong with their system and its values.

Spycraft, Personal Science, and Overconfidence in What We Know

Edward Jay Epstein’s newest Kindle book is James Jesus Angleton: Was He Right?. Angleton worked at the CIA most of his career, which spanned the Cold War. He struck some of his colleagues as paranoid: he believed the CIA could easily be penetrated by Russian spies. Colleagues said: Oh, no, that couldn’t happen. After his death, it turned out he was right (e.g., Aldrich Ames). At one point he warned the CIA director, “an intelligence [agency] is most vulnerable to deception when it considers itself invulnerable to deception.”

What interests me is the asymmetry of the mistakes. When it really matters, we overestimate our understanding far more often than we underestimate it. CIA employees’ overestimation of their ability to detect deception is a big example. There are innumerable small examples. When people are asked to guess everyday facts (e.g., the height of the Empire State Building) and provide 95% confidence intervals for their guesses, their intervals are too short, usually much too short (e.g., the correct answer is outside the intervals 20% of the time). People arrive at destinations later than expected more often than earlier than expected. Projects large and small take longer than expected far more often than shorter than expected. For any one example, there are many possible explanations. But the diversity of examples suggests the common thread is real: we are too sure of what we know.
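
A minimal simulation of this calibration failure, under two assumptions of mine (guess errors are roughly normal, and people report intervals half as wide as an honest 95% interval would be):

```python
import random

random.seed(0)
trials = 100_000
misses = 0
for _ in range(trials):
    error = random.gauss(0, 1)   # guess error, in standard units
    half_width = 1.96 / 2        # reported half-width: half of the honest
                                 # 95% half-width of 1.96 standard units
    if abs(error) > half_width:  # true value falls outside the interval
        misses += 1
print(misses / trials)           # ~0.33
```

Halving the width of an honest 95% interval cuts its coverage from 95% to about 67%: a modest-looking overstatement of precision produces a large miss rate.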

There are several plausible explanations. One is that it helps groups work together. If people work together toward a single goal, they are more likely to reach that goal and at least learn what happens than if they squabble. Another is the same idea at an individual level. Overconfidence in our beliefs helps us act on them. By acting on them, we learn. Doing nothing teaches less. A third is a mismatch idea: We are overconfident because modern life is more complicated than the Stone-Age world to which evolution adjusted our brains. No one asked Stone-Age people How tall is the Empire State Building? A fourth is that we assume what physicists assume: the distant world follows the same rules as the world close to us. This is a natural assumption, but it’s wrong.

Early in Angleton’s career, he had a very unpleasant shock: He realized he had been fooled by the Russians in a big way for a long time. This led him to try to understand why he’d been fooled. Early in my scientific career, I too was shocked: Rats in Skinner boxes did not act as expected far more often than I would have thought. I overestimated my understanding of them. In a heavily controlled, heavily studied situation! I generalized from this. If I couldn’t predict the behavior of rats in a Skinner box, I couldn’t predict human behavior in ordinary life. My conclusion was that data is more precious than we think. In other words, data is underpriced. If a stock is underpriced, you buy as much of it as possible. I tried to collect as much data as possible. Personal science — studying my sleep, my weight, and so on — was a way to gather data at essentially zero cost. And, indeed, the results surprised me far more than I expected. I could act on my knowledge of the overconfidence effect, but I could not remove the effect from my expectations.

First, Let Them Get Sick

In Cities and the Wealth of Nations, Jane Jacobs tells how, in the 1920s, one of her aunts moved to an isolated North Carolina village to, among other things, have a church built. The aunt suggested to the villagers that the church be built out of the large stones in a nearby river. The villagers scoffed: Impossible. They had not just forgotten how to build with stone, they had forgotten it was possible.

A similar forgetting has taken place among influential Western intellectuals — the people whose words you read every day. Recently I wrote about why health care is so expensive. One reason is that the central principle of our health care is not the meaningless advertising slogan promoted by doctors (“first, do no harm”) but rather the entirely nasty first, let them get sick. Let people get sick. Then we (doctors, etc.) can make money from them. This is actually how the system works.

It is no surprise that doctors and others within the health care system take the first, let them get sick approach. It is wholly in their self-interest. It is how they get paid. If nobody got Disease X, specialists in Disease X would go out of business. What is interesting is that outsiders take the first, let them get sick attitude for granted. It is not at all in their self-interest, just as it was not at all in the self-interest of the Carolina villagers to think building with stones impossible.

An example of an outsider taking first, let them get sick for granted is a recent article in the London Review of Books by James Meek, an excellent writer (except for this blind spot). The article is about the commercialization of the National Health Service. Much of it is about hip replacements. How modern hip replacements were invented. Their inventor, John Charnley. How a hospital that specialized in hip replacements (the Cheshire and Merseyside NHS Treatment Centre) went out of business. And so on. Nothing, not one word, is said about the possibility of prevention. About figuring out why people come to need hip replacements and how they might change their lives so that they don’t. Sure, a surgeon (John Charnley) is unlikely to think or say or do anything about prevention. That’s not his job. But Meek, the author of the article, is outside the system. He is perfectly capable of grasping the possibility of prevention and the parasitic nature of a system that ignores it. Long ago, people understood that prevention was possible. As Weston Price documents, for example, isolated Swiss villagers knew they needed small amounts of seafood to stay healthy. But Meek — and those whom he listens to and reads — have forgotten.

Causal Reasoning in Science: Don’t Dismiss Correlations

In a paper (and blog post), Andrew Gelman writes:

As a statistician, I was trained to think of randomized experimentation as representing the gold standard of knowledge in the social sciences, and, despite having seen occasional arguments to the contrary, I still hold that view, expressed pithily by Box, Hunter, and Hunter (1978) that “To find out what happens when you change something, it is necessary to change it.”

Box, Hunter, and Hunter (1978) (a book called Statistics for Experimenters) is well-regarded by statisticians. Perhaps Box, Hunter, and Hunter, and Andrew, were/are unfamiliar with another quote (modified from Beveridge): “Everyone believes an experiment except the experimenter; no one believes a theory except the theorist.”

Box, Hunter, and Hunter were/are theorists, in the sense that they don’t do experiments (or even collect data) themselves. And their book has a massive blind spot. It contains 500 pages on how to test ideas and not one page — not one sentence — on how to come up with ideas worth testing. Which is just as important. Had they considered both goals — idea generation and idea testing — they would have written a different book. It would have said much more about graphical data analysis and simple experimental designs, and, I hope, would not have contained the flat statement (“To find out what happens …”) that Andrew quotes.

“To find out what happens when you change something, it is necessary to change it.” It’s not “necessary” because belief in causality, like all belief, is graded: it can take on an infinity of values, from zero (“can’t possibly be true”) to one (“I’m completely sure”). And belief changes gradually. In my experience, significant (substantially greater than zero) belief in the statement A changes B usually starts with the observation of a correlation between A and B. For example, I began to believe that one-legged standing would make me sleep better after I slept unusually well one night and realized that the previous day I had stood on one leg (which I almost never do). That correlation made one-legged standing improves sleep more plausible, taking it from near zero to some middle value of belief (“might be true, might not be true”). Experiments in which I stood on one leg various amounts pushed my belief in the statement close to one (“sure it’s true”). In other words, my journey “to find out what happens” to my sleep when I stood on one leg began with a correlation. Not an experiment. To push belief from high (say, 0.8) to really high (say, 0.99) you do need experiments. But to push belief from low (say, 0.0001) to medium (say, 0.5), you don’t need experiments. To fail to understand how beliefs begin, as Box et al. apparently do, is to miss something really important.
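
One way to formalize this graded-belief picture is Bayesian odds updating. The sketch below is my formalization, not anything from Box et al. or from the original posts, and the likelihood ratios (500 for the striking correlation, 10 per experiment) are invented for illustration:

```python
def update(prior, likelihood_ratio):
    """Posterior P(hypothesis) given evidence with likelihood ratio
    P(evidence | hypothesis true) / P(evidence | hypothesis false)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

belief = 0.001                   # before any observation: probably not true
belief = update(belief, 500)     # one striking correlation (slept unusually
                                 # well the day after one-legged standing)
print(round(belief, 2))          # ~0.33: "might be true, might not be true"
for _ in range(3):               # three deliberate experiments, each
    belief = update(belief, 10)  # moderately favoring the hypothesis
print(round(belief, 3))          # ~0.998: "sure it's true"
```

The correlation does most of the early work; the experiments do the late work. Neither is dispensable, which is the point.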

Science is about increasing certainty — about learning. You can learn from any observation, as distasteful as that may be to evidence snobs. By saying that experiments are “necessary” to find out something, Box et al. said the opposite of you can learn from any observation. Among shades of gray, they drew a line and said “this side white, that side black”.

The Box et al. attitude makes a big difference in practice. It has two effects:

  1. Too-complex research designs. Just as researchers undervalue correlations, they undervalue simple experiments. They overdesign. Their experiments (or data collection efforts) cost far more and take much longer than they should. The self-experimentation I’ve learned so much from, for example, is undervalued. This is one reason I learned so much from it — because it was new.
  2. Existing evidence is undervalued, even ignored, because it doesn’t meet some standard of purity.

In my experience, both tendencies (too-complex designs, undervaluation of evidence) are very common. In the last ten years, for example, almost every proposed experiment I’ve learned about has been more complicated than I think wise.

Why did Box, Hunter, and Hunter get it so wrong? I think it gets back to the job/hobby distinction. As I said, Box et al. didn’t generate data themselves. They got it from professional researchers — mostly engineers and scientists in academia or industry. Those engineers and scientists have jobs. Their job is to do research. They need regular publications. Hypothesis testing is good for that. You do an experiment to test an idea, you publish the result. Hypothesis generation, on the other hand, is too uncertain. It’s rare. It’s like tossing a coin, hoping for heads, when the chance of heads is tiny. Ten researchers might work for ten years, tossing coins many times, and generate only one new idea. Perhaps all their work, all that coin tossing, was equally good. But only one researcher came up with the idea. Should only one researcher get credit? Should the rest get fired, for wasting ten years? You see the problem, and so do the researchers themselves. So hypothesis generation is essentially ignored by professionals because they have jobs. They don’t go to statisticians asking: How can I better generate ideas? They do ask: How can I better test ideas? So statisticians get a biased view of what matters, do biased research (ignoring idea generation), and write biased books (that don’t mention idea generation).
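
The coin-tossing analogy is easy to make concrete. In this sketch (all numbers invented), ten researchers each get one "toss" a year for ten years, with a 1% chance per toss of generating a usable new idea:

```python
import random

random.seed(1)
runs = 10_000
total_lucky = 0
for _ in range(runs):
    # 10 researchers x 10 years, 1% chance of an idea per researcher-year
    ideas_per_researcher = [sum(random.random() < 0.01 for _ in range(10))
                            for _ in range(10)]
    total_lucky += sum(1 for n in ideas_per_researcher if n > 0)
print(total_lucky / runs)  # ~0.96: on average about one of the ten
                           # ever gets an idea at all
```

Equally good work, wildly unequal outcomes. A job that demands steady output cannot reward that kind of lottery, so professionals ask statisticians about testing, not generation.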

My self-experimentation taught me that the Box et al. view of experimentation (and of science — that it was all about hypothesis testing) was seriously incomplete. It could teach me that because it was like a hobby: I had no need for publications or other steady output. Over thirty years, I collected a lot of data, did a lot of fast-and-dirty experiments, noticed informative correlations (“accidental observations”) many times, and came to see the great importance of correlations in learning about causality.

Marcia Angell on Psychiatry: A Train Wreck

Marcia Angell, a former editor of the New England Journal of Medicine, may be the most prominent critic of drug companies. The two most recent issues of the New York Review of Books contain a two-part critique by her of psychiatry. I liked Part 1 because she described the excellent work of Irving Kirsch (The Emperor’s New Drugs). Part 2, however, is a disaster.

She goes on and on about the evils of the DSMs — the diagnostic manuals of psychiatry. Improving the reliability of diagnosis plays into the hands of the drug companies, she seems to say. She complains that the number of diagnoses is increasing. Well, yes, all diagnostic systems get larger over time. This is a good thing: if you don’t have a name for a problem, it is hard to do cumulative research about it and hard to communicate research results to everyone else. She complains, apparently, that new categories are being added:

There are proposals for entirely new entries, such as “hypersexual disorder,” “restless legs syndrome,” and “binge eating.”

She does not say why this is bad. Maybe she thinks it’s obvious. It isn’t obvious to me. Diagnostic categories help researchers and doctors and the rest of us communicate. For example, Dennis Mangan’s research shows why it is a good idea for the term restless legs syndrome to have an agreed-upon meaning.

She complains that the DSM doesn’t have enough “citations”:

There are no citations of scientific studies to support its decisions. That is an astonishing omission, because in all medical publications, whether journal articles or textbooks, statements of fact are supposed to be supported by citations of published scientific studies. (There are four separate “sourcebooks” for the current edition of the DSM that present the rationale for some decisions, along with references, but that is not the same thing as specific references.)

Please. This is clueless. A diagnostic manual is a dictionary: it assigns meanings to diagnostic categories. You can make a useful dictionary without “citations of scientific studies”. Long before you can do scientific studies about the best way to define dog, you can come up with a definition of dog that is better than nothing.

She ends her review with this:

Above all, we should remember the time-honored medical dictum: first, do no harm (primum non nocere)

Gag me with a spoon. Time-honored? Doctors — with the support of the New England Journal of Medicine, not to mention the rest of the health-care establishment — continually prescribe drugs with bad side effects and high prices and suppress innovative alternatives. (Not only that. My own surgeon recommended a dangerous surgery of no clear value.) How they can claim to do no harm escapes me.

Sure, psychiatry is awful. For a long time psychiatrists rallied around a transparent intellectual fraud (Freud and his offshoots). Now they rally around a less transparent intellectual fraud (neurotransmitter theories of mental illness). Psychotherapists, with their wacky theories and no-more-effective treatments, are no better, so I wouldn’t blame the drug companies for the underlying problem. I put the problem like this: Our health care system consists of a very large number of people, many with very large salaries, who must get paid. Being human, they strongly oppose any progress that would reduce their salary or influence or, heaven forbid, eliminate their job. Because of them, many promising lines of research, such as prevention via environmental change or cure via nutrition, are completely or almost completely ignored. This is the fundamental reason Angell’s critique is so bad: she is part of the problem. She is very smart, but she’s been brainwashed (“primum non nocere”!). She utterly ignores the fact that we don’t know what causes depression, what causes schizophrenia, what causes autism, and so forth. Only when we learn what causes these and other mental disorders will we be in a good position to improve our mental health.

Six Signs of Profound Stagnation in Health Care

In a recent interview, Tim Harford, the Undercover Economist, said,

That’s what makes medicine such an effective academic discipline.

By “that” he meant certain methodologies, especially randomized experiments. I disagree with this assessment. My opinion is that health care is in a state of profound stagnation, unable to make much progress on major problems.

Here are six signs of the stagnation in health care (by which I mean everything related to health):

1. The irrelevance of Nobel Prizes. Year after year, the Nobel Prize in medicine is given for research that is so far useless (e.g., telomere research) or irrelevant to major health problems.

2. The obesity epidemic. Starting in 1980, obesity rates climbed fast. Thirty years later, doctors seem to know no more about how to cure obesity than they did in 1980. Low-fat diets, popular in the 1980s, are still popular! Low-carb diets are ancient — the Banting diet became popular in the 1860s.

3. Ancient treatments for depression still popular. SSRIs were introduced in 1988. Cognitive-behavioral therapy began in the 1980s, combining earlier ideas. Neither works terribly well — and notice how different they are.

4. The high cost of ineffective care. Americans pay much more for health care than people in other rich countries, yet American health is no better. All that new technology that Americans are paying for isn’t helping. In an article complaining about our education system, Joel Klein, the former head of New York City schools, wrote, “unlike in health care . . . in education, despite massive increases in expenditure, we don’t see improved results.” Actually, that’s exactly what we see in health care when we compare America to other countries. Tyler Cowen makes this point in The Great Stagnation.

5. Statins. A defender of modern medicine would claim that statins were an important innovation. They are heavily prescribed, yes. Yet in recent tests they have been stunningly ineffective — so much so that the earlier favorable evidence has been questioned.

6. The stagnation has become invisible — the normal state of affairs. Allowing Harford to make that comment. Harford, like Dr. Ben “Bad Science” Goldacre (whom Harford praises), believes you judge science by whether it follows certain rules. By making various rules (e.g., the need for placebo controls) and then following them, medical researchers have drawn attention — at least Harford’s and Goldacre’s — away from lack of progress. They’re making progress, they say, because they’re following self-imposed rules. Well, what if the rules make things worse? (For example, placing high value on placebo controls may draw attention away from non-pill treatments.) Better to judge by results.

What do you think are the clearest signs of health-care stagnation — if you agree with me about this?