Undisclosed Risks of Common Medical Treatments

Millions of tonsillectomies have been performed, mostly on children. Were any of their parents told that tonsils are part of the immune system (taught in high school biology and known since the 1960s)? A Cochrane Review of tonsillectomies (the “highest standard” in evidence-based medicine) fails to mention that tonsils are part of the immune system. A recent study found tonsillectomies associated with a 50% increase in heart attacks. (I write about tonsillectomies here.)

Are tonsillectomies unusual in this respect? Several recent news stories suggest they are not. Failure to tell patients the full risks of medical treatment may be common:

1. Undisclosed risks of hernia surgery. From the Wall Street Journal: “More than 30% of patients may suffer from long-term chronic pain and restricted movement after surgery to fix a hernia . . . studies show.” The article says “many patients don’t consider” this risk — meaning they don’t know about it. A Berkeley surgeon named Eileen Consorti told me I should have surgery for a hernia I could not detect. I have previously written about her claim that evidence supported her recommendation when no such evidence existed — or, at least, no one, including her, has ever found it. I said I wanted to see the evidence because there were risks to surgery. She replied that none of her patients had died. I was shocked by the incompleteness of her answer. There are plenty of bad outcomes besides death — as the Wall Street Journal article shows.

2. Undisclosed risks of sleeping pills. A book called The Dark Side of Sleeping Pills by Daniel Kripke, a professor of psychiatry at UC San Diego, goes into great detail about risks of sleeping pills that few doctors tell their patients. For example, one study found that “patients who took sleeping pills died 4.6 times as often during follow-ups averaging 2.5 years [as matched patients who did not take sleeping pills]. Patients who took higher doses (averaging over 132 pills per year) died 5.3 times as often.” Insomnia alone was not associated with higher mortality. Tomorrow I will post Dr. Kripke’s answer to the question “why did you write this book?” Here is a website about the dangers of Ambien.

3. Undisclosed risks of anticholinergic drugs. From the NY Times: “After following more than 13,000 British men and women 65 or older for two years, researchers found that those taking more than one anticholinergic drug scored lower on tests of cognitive function than those who were not using any such drugs, and that the death rate for the heavy users during the course of the study was 68 percent higher. That finding, reported last July in The Journal of the American Geriatrics Society, stunned the investigators.” Anticholinergics are “very very common” said a researcher. They include many over-the-counter drugs, such as “allergy medications, antihistamines and Tylenol PM”.

4. Undisclosed risks of statins. A recent NY Times story says “the Food and Drug Administration has officially linked statin use with cognitive problems like forgetfulness and confusion, although some patients have reported such problems for years. Among the drugs affected are huge sellers like Lipitor, Zocor, Crestor and Vytorin.” Before this official linkage, the reports of forgetfulness and confusion were mere anecdotes, the kind of evidence that evidence-based medicine proponents ignore and tell the rest of us to ignore.

5. Undisclosed risks of metal-on-metal hip replacements. They leak dangerous amounts of metal (e.g., cobalt) into the rest of the body. “Despite the fact that these risks have been known and well documented for decades, patients have been kept in the dark,” says a recent article in the BMJ. By 2007, the danger was so clear that a British regulatory committee said that patients must sign a form saying they’ve been warned. This didn’t happen — a surgeon told the BMJ that “surgeons were unaware of these discussions.” Other materials could have been used.

These six treatments (tonsillectomy, hernia surgery, sleeping pills, anticholinergic drugs, statins, and hip replacement) are so common they raise a scary question: What fraction of the risks are patients usually told?

The surgeon or drug company gets paid no matter what happens to you. Malpractice lawsuits are very rare on a per-patient basis — and no one will be sued for performing a tonsillectomy on a child who gets a lot of colds or prescribing sleeping pills to someone who has trouble sleeping. In a Freakonomics podcast, Steve Levitt said that doctors terrify him. And his father is a doctor. Given the undisclosed risks of common treatments, he is right to be terrified.

Thanks to Allan Jackson, Alex Chernavsky and Tim Beneke.

Assorted Links

  • Unusual fermented foods, such as shio koji (fermented salt, sort of)
  • David Healy talk about problems with evidence-based medicine. Example of Simpson’s paradox in suicide rates.
  • The ten worst mistakes of DSM-5. This is miserably argued. The author has two sorts of criticisms: 1. Narrowing a diagnosis (e.g., autism): People who need treatment won’t get it! 2. Widening a diagnosis (e.g., depression) or adding a new one (many examples): This will cause fads and over-medication! It isn’t clear how to balance the two goals (helping people get treatment, avoiding fads and over-medication), nor why the various changes being criticized will produce more harm than good. Allen Frances, the author, was chair of the committee in charge of DSM-IV. He could have written: “When we wrote DSM-IV, we made several mistakes . . . . The committee behind DSM-5 has not learned from our mistakes. . . .” That would have been more convincing. That the chair of the committee behind DSM-IV, in spite of feeling strongly about it, cannot persuasively criticize DSM-5 speaks volumes.
  • The Lying Dutchman. “Very few social psychologists make stuff up, but he was working in a discipline where cavalier use of data was common. This is perhaps the main finding of the three Dutch academic committees which investigated his fraud. The committees found many bad practices: researchers who keep rerunning an experiment until they get the right result, who omit inconvenient data, misunderstand statistics, don’t share their data, and so on.”

Few Doctors Understand Statistics?

A few days ago I wrote about a study that suggested that people who’d had bariatric surgery were at much higher risk of liver poisoning from acetaminophen than everyone else. I learned about the study from an article by Erin Allday in the San Francisco Chronicle. The article included this:

At this time, there is no reason for bariatric surgery patients to be alarmed, and they should continue using acetaminophen if that’s their preferred pain medication or their doctor has prescribed it.

This was nonsense. The evidence for a correlation between bariatric surgery and risk of acetaminophen poisoning was very strong. Liver poisoning is very serious. Anyone who’s had bariatric surgery should reduce their acetaminophen intake.

Who had told Allday this nonsense? The article attributed it to “the researchers” and “weight-loss surgeons”. I wrote Allday to ask.

She replied that everyone she’d spoken to for the article had told her that people with bariatric surgery shouldn’t be alarmed. She did not understand why I considered the statement (“no need for alarm”) puzzling. I replied:

The statement is puzzling because it is absurd. The evidence that acetaminophen is linked to liver damage in people with bariatric surgery is very strong. Perhaps the people you spoke to didn’t understand that. The size of the sample (“small”) is irrelevant. Statisticians have worked hard to be able to measure the strength of the evidence independent of sample size. In this case, their work reveals that the evidence is very strong.

If the experts you spoke to (a) didn’t understand statistics and (b) were being cautious, that would be forgivable. That’s not the case here. They (a) don’t understand statistics and (b) are being reckless. With other people’s health. It’s fascinating, and very disturbing, that all the experts you spoke to were like this.
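To make the sample-size point concrete, here is a minimal sketch of how the strength of evidence in a two-group comparison can be measured without regard to how “small” the sample is. The counts below are made up purely for illustration; they are not the numbers from the acetaminophen study.

```python
# Fisher's exact test: how unlikely is a table this lopsided if surgery and
# poisoning were unrelated? The test is exact, so it is valid even for tiny
# samples. These counts are hypothetical, chosen only to show the calculation.
from scipy.stats import fisher_exact

table = [[8, 12],   # surgery group: 8 of 20 with liver poisoning
         [1, 59]]   # comparison group: 1 of 60 with liver poisoning

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.5f}")
# Only 80 people in all, yet p comes out far below 0.01 -- strong evidence
# despite the "small" sample.
```

With numbers like these, no reasonable reading of the result is “too small a sample to matter.”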

I have no reason to think that the people Allday talked to were more ignorant than typical doctors. I expect researchers to be better at statistics than average doctors. One possible explanation of what Allday was told is that most doctors, given a test of basic statistical concepts, would flunk. Not only do they fail to understand statistics, they don’t understand that they don’t understand. Another possible explanation is that most doctors have a strong “doctors do everything right” bias, even when it endangers patients. Either way, bad news.

Doctor Logic: “Acne is Caused by Bacteria”

Presumably Dr. Jenny Kim is a good dermatologist because the author of this NPR piece chose to quote her:

UCLA dermatologist Dr. Jenny Kim says many people don’t realize it’s bacteria that cause acne. “Some people say your face is dirty, you need to clean it more, scrub more, don’t eat chocolate, things like that. But really, it’s caused by bacteria and the oil inside the pore allows the bacteria to overpopulate,” Kim says.

If I were to ask Dr. Kim how she knows that acne is “caused by bacteria,” I think she’d say “because when you kill the bacteria [with antibiotics] the acne goes away.” Suppose I then asked: “Is there evidence that the bacteria of people who get acne differ from the bacteria of people who don’t get acne (before the acne)?” I assume Dr. Kim would answer: “I don’t know.”

There is no such evidence, I’m sure. It is quite plausible that the bacteria of the two groups (with and without acne) are exactly the same, at least before acne. If it turned out, upon investigation, that the bacteria of people who get acne are the same as the bacteria of people who don’t get acne, it would be much harder to say that acne is caused by bacteria. As far as I can tell, Dr. Kim and apparently all influential dermatologists have not thought even this deeply about it. To do so would be seriously inconvenient, because if acne isn’t caused by bacteria, it would be harder to justify prescribing antibiotics. Which dermatologists have been doing for decades.

It isn’t just dermatologists. Many doctors believe that H. pylori causes ulcers — wasn’t a Nobel Prize given for discovering that? The evidence for that assertion consisted of: 1. H. pylori was found at ulcers. 2. A doctor swallowed billions of H. pylori and didn’t get an ulcer. (Not a typo.) It was enough that he got indigestion or something. 3. Antibiotics cause ulcers to heal. That was enough for the two doctors who made the H. pylori case and the Nobel Prize committee they convinced. Neither the doctors nor the committee seemed to know, or appreciate, that H. pylori infection is very common and almost no one who is infected gets an ulcer. Psychiatric causal reasoning has been even simpler and even more self-serving. We know that depression — a huge problem — is due to “a chemical imbalance”, according to many psychiatrists, because (a) antidepressants work (not very well) and (b) antidepressants change brain chemistry.

Dr. Kim’s false certainty matters because I’m sure most people with acne don’t know what causes it. I didn’t. Dr. Kim’s false certainty and similar statements from other dermatologists make it harder for them to find out. I wrote about a woman who figured out what caused her acne. It wasn’t easy or obvious.

Thanks to Bryan Castañeda.

Two Recent Health Care Experiences

A friend and his pregnant wife, who live in Los Angeles and are not poor, recently had an ultrasound. (Probability of the ultrasound machine not operating properly and producing more than the stated amounts of energy: unknown, but a recent Stockholm survey found that one-third of the machines malfunctioned.) Part of the office visit was a post-ultrasound visit with a genetic counselor. The genetic counselor walked them through illnesses in their family tree and assessed their coming baby as being at very low risk for Trisomy 21 (Down syndrome), Trisomy 13 and Trisomy 18.

At the end of their session, they were offered other services they could buy to learn more about the chances of fetal problems: chorionic villus sampling (CVS) and amniocentesis, as well as a maternal blood test. None were really necessary.

My friend was irked that the CVS and the amniocentesis were called “low risk”. Maybe you know that a large fraction of doctors claim to practice “evidence-based medicine”. You might think this means they pay attention to all evidence. In fact, evidence-based medicine practitioners subscribe to a method of ranking evidence and ignore evidence that is not highly ranked. Most evidence of harm is not highly ranked, so evidence-based medicine practitioners ignore it. This makes every treatment appear less dangerous — misleadingly so. Because the practice of ignoring evidence of harm is widespread (and drug companies routinely underestimate risk), when a doctor says “low risk” the truth is closer to “unknown risk”. The combination of (a) understating risk, (b) selling unnecessary services whose risk has been understated, and (c) doing this with pregnant women, whose fetuses are especially vulnerable, is highly unattractive.

Also recently, the friend’s toddler had some sort of infection. The toddler had a bit of a fever, but was generally in good spirits, and played with his toys (i.e., was not bed-ridden or in severe distress). After a few days, his wife took the child to their pediatrician to make sure everything was fine.

“Don’t just accept the antibiotics,” my friend told his wife. “Push back a little. See what happens.”

The pediatrician did prescribe antibiotics. When my friend’s wife said she preferred not to give the child antibiotics if it were not really necessary, the doctor (female) said, “You’re right. I actually don’t know if the infection is bacterial or viral.”

Both stories — which obviously reflect common practice — illustrate how the healthcare system is biased toward treatment, including treatments that are unnecessary and dangerous. The good news is that this bias is clearer than ever before.

Why Self-Track? The Possibility of Hard-to-Explain Change

My personal science introduced me to a research method I have never seen used in research articles or described in discussions of scientific method. It might be called wait and see. You measure something repeatedly, day after day, with the hope that at some point it will change dramatically and you will be able to determine why. In other words: 1. Measure something repeatedly, day after day. 2. When you notice an outlier, test possible explanations. In most science, random (= unplanned) variation is bad. In an experiment, for example, it makes the effects of the treatment harder to see. Here it is good.
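As a sketch of what this looks like in practice (my own illustration, with made-up numbers), step 1 is just recording one value per day; step 2 can be as simple as flagging any day whose value sits far outside the recent range and then asking what was different about that day.

```python
# Minimal "wait and see" sketch: record one number per day, and flag days
# whose value is far outside the recent history. The data and threshold are
# illustrative assumptions, not anything measured.
from statistics import mean, stdev

def flag_outliers(daily_values, window=14, threshold=3.0):
    """Return indices of days whose value is more than `threshold`
    standard deviations from the mean of the preceding `window` days."""
    outliers = []
    for i in range(window, len(daily_values)):
        history = daily_values[i - window:i]
        m, s = mean(history), stdev(history)
        if s > 0 and abs(daily_values[i] - m) > threshold * s:
            outliers.append(i)
    return outliers

# Example: daily pimple counts that suddenly drop starting on day 20.
counts = [11, 12, 10, 13, 11, 12, 12, 10, 11, 13,
          12, 11, 12, 10, 12, 11, 13, 12, 11, 12,
          4, 3, 5]  # the drop worth investigating
print(flag_outliers(counts))  # -> [20, 21]: the start of the sudden drop
```

The threshold is arbitrary; the point is only that the unplanned change, not the routine day-to-day variation, is what carries the information.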

Here are examples where wait and see paid off for me:

1. Acne and benzoyl peroxide. When I was a graduate student, I started counting the number of pimples on my face every morning. One day the count improved. It was two days after I started using benzoyl peroxide more regularly. Until then, I did not think benzoyl peroxide worked well — I started using it more regularly because I had run out of tetracycline (which turned out not to work).

2. Sleep and breakfast. I changed my breakfast from oatmeal to fruit because a student told me he had lost weight eating foods with high water content (such as fruit). I did not lose weight but my sleep suddenly got worse. I started waking up early every morning instead of half the time. From this I figured out that any breakfast, if eaten early, disturbed my sleep.

3. Sleep and standing (twice). I started to stand a lot to see if it would cause weight loss. It didn’t, but I started to sleep better. Later, I discovered by accident that standing on one leg to exhaustion made me sleep better.

4. Brain function and butter. For years I measured how fast I did arithmetic. One day I was a lot faster than usual. It turned out to be due to butter.

5. Brain function and dental amalgam. My brain function, measured by an arithmetic test, improved over several months. I eventually decided that removal of two mercury-containing fillings was the likely cause.

6. Blood sugar and walking. My fasting blood sugar used to be higher than I would like — in the 90s. (Optimal is low 80s.) Even worse, it seemed to be increasing. (Above 100 is “pre-diabetic.”) One day I discovered it was much lower than expected (in the 80s). The previous day I had walked for an hour, which was unusual. I determined it was indeed cause and effect. If I walked an hour per day, my fasting blood sugar was much better.

This method and these examples emphasize the point that different scientific methods are good at different things and we need all of them (in contrast to evidence-based medicine advocates, who say some types of evidence are “better” than other types — implying one-dimensional evaluation). One thing we want to do is test cause-effect ideas (X causes Y). This method doesn’t do that at all. Experiments do that well; surveys are better than nothing. Another thing we want to do is assess the generality of our cause-effect ideas. This method doesn’t do that at all either. Surveys do that well (it is much easier to survey a wide range of people than do an experiment with a wide range of people); multi-person experiments are better than nothing. A third thing we want to do is come up with cause-effect ideas worth testing. Most experiments are a poor way to do this; surveys are better than nothing. This method is especially good for that.

The possibility of such discoveries is a good reason to self-track. Professional scientists almost never use this method. But you can.

“How Ignorant Doctors Kill Patients”

I have already linked to this 2004 article (“How Ignorant Doctors Kill Patients”) by Russell Blaylock, a neurosurgeon, but after rereading it I think it deserves a second link and extended quotation.

I recently spoke to a large group concerning the harmful effects of glutamate, explaining it is now known that glutamate, as added to foods, significantly accelerates the growth and spread of cancers. I [rhetorically] asked the crowd when was the last time an oncologist told his or her patient to avoid MSG or foods high in glutamate. The answer, I said, was never.

After the talk, a crowd gathered to ask more questions. Suddenly I was interrupted by a young woman who identified herself as a radiation oncologist. She angrily stated, “I really took offense to your comment about oncologists not telling their patients about glutamate.”

I turned to her and asked, “Well, do you tell your patients to avoid glutamate?” She looked puzzled and said, “No one told us to.” I asked her who this person or persons were whose job it was to provide her with this information. I then reminded her that I obtained this information from her oncology journals. Did she not read her own journals?

Yet, this is the attitude of the modern doctor. An elitist group is in charge of disseminating all the information physicians are to know. If they do not tell them, then, in their way of thinking, the information was of no value.

The incentive structure of modern medicine in action. If you do harm, you are not punished — thus the high error rate. If you do good, you are not rewarded — so why bother to think (“no one told us”)? The similarity to pre-1980 Chinese communism, where it didn’t matter if you were a good farmer or a bad farmer, is obvious. It is a big step forward that the rest of us can now search the medical literature and see the evidence for ourselves.

Overtreatment in US Health Care

In April there was a conference in Cambridge, Massachusetts, about how to reduce overtreatment in American health care. Attendees were told:

The first randomised study of coronary artery bypass surgery was not carried out until 16 years after the procedure was first developed, a conference on overtreatment in US healthcare was told last week. When the results were published, they “provided no comfort for those doing the surgery,” as it showed no mortality benefit from surgery for stable coronary patients.

One participant said that overtreatment cost one-third of US health care spending. As far as I can tell, no one said that “evidence-based medicine” underestimates — in the case of tonsillectomies, almost completely ignores — bad effects of treatments. This failure to anticipate and accurately measure bad effects of treatments makes the overall picture worse. Maybe much worse.

Merck’s Vioxx and the American Death Rate

Ron Unz makes a very good point — that just one awful drug (Vioxx) sold by just one awful drug company (Merck) appears to have caused hundreds of thousands of deaths:

The headline of the short article that ran in the April 19, 2005 edition of USA Today was typical: “USA Records Largest Drop in Annual Deaths in at Least 60 Years.” During that one year, American deaths had fallen by 50,000 despite the growth in both the size and the age of the nation’s population. Government health experts were quoted as being greatly “surprised” and “scratching [their] heads” over this strange anomaly, which was led by a sharp drop in fatal heart attacks. . . .

On April 24, 2005, the New York Times ran another of its long stories about the continuing Vioxx controversy, disclosing that Merck officials had knowingly concealed evidence that their drug greatly increased the risk of heart-related fatalities. . . .

A cursory examination of the most recent 15 years worth of national mortality data provided on the Centers for Disease Control and Prevention website offers some intriguing clues to this mystery. We find the largest rise in American mortality rates occurred in 1999, the year Vioxx was introduced, while the largest drop occurred in 2004, the year it was withdrawn. Vioxx was almost entirely marketed to the elderly, and these substantial changes in national death-rate were completely concentrated within the 65-plus population. The FDA studies had proven that use of Vioxx led to deaths from cardiovascular diseases such as heart attacks and strokes, and these were exactly the factors driving the changes in national mortality rates.

The impact of these shifts was not small. After a decade of remaining roughly constant, the overall American death rate began a substantial decline in 2004, soon falling by approximately 5 percent, despite the continued aging of the population. This drop corresponds to roughly 100,000 fewer deaths per year. The age-adjusted decline in death rates was considerably greater.
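A rough arithmetic check (the 2.4 million figure is my approximation of total US deaths per year in the mid-2000s, not a number from the article) shows that a 5 percent decline is indeed on the order of 100,000 deaths per year:

```python
# Back-of-the-envelope check of the "roughly 100,000 fewer deaths per year"
# figure. Total US deaths in the mid-2000s were approximately 2.4 million
# per year (my approximation, not a figure from the article).
total_deaths_per_year = 2_400_000
decline_fraction = 0.05  # "approximately 5 percent" from the quoted passage
print(total_deaths_per_year * decline_fraction)  # 120000.0 -- same order as 100,000
```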

This illustrates how Merck company executives got away with mass murder on a scale that the Khmer Rouge would be proud of. It also illustrates why I find “evidence-based medicine” as currently practiced so awful. Evidence-based medicine tells doctors to be evidence snobs. As I showed in my Boing Boing article about tonsillectomies, it causes them to ignore evidence of harm — such as heart attacks and strokes caused by Vioxx — because the first evidence of harm does not come from randomized controlled studies, the only evidence they accept. It delays the detection of monumental tragedies like this one.

Tonsillectomy Confidential

I wrote a piece for Boing Boing about tonsillectomies that has just been posted. It stemmed from a comment on this blog by a woman named Rachael. A doctor said her son should have a tonsillectomy. When Rachael did her own research, however, it seemed to her that the risks outweighed the benefits. I looked further into tonsillectomies and found that the risks were routinely greatly understated, even by advocates of evidence-based medicine.

More: Here is a page on a doctor-run website called MedicineNet that grossly understates the risks of tonsillectomies. Compare their list of possible bad effects to mine.