Marco Arruda, an MD and PhD in the Department of Pediatric Neurology at the Glia Institute (São Paulo, Brazil), is the author of a recent editorial in JAMA Pediatrics about the use of triptans for headaches in children. There’s a lot of controversy because placebos work very well for headache, so much so that researchers often have to use tricky methods to show a treatment effect with the real drugs.
In a recent article on Medscape, Dr. Arruda is quoted as saying: “Although placebo is the enemy of great clinical trials, it is likely the best friend of good clinicians.”
This makes me wonder what he thinks makes a good clinician. If a triptan and a placebo are equally effective, it is curious that anyone would skip the placebo and prescribe the drug, given its list of side effects.
A draft article by Spyros Makridakis about blood pressure and iatrogenics takes issue with the statement that “The treatment of hypertension has been one of medicine’s major successes of the past half-century.” Over the last half-century, the article says, the death rate for people with high blood pressure decreased by almost exactly the same amount as the death rate for people without high blood pressure. Apparently “one of medicine’s major successes” is a case where the health benefit no more than equaled the health cost — leaving aside what the treatment cost in time and money.
Because very high blood pressure (systolic > 180 mm Hg) is quite dangerous and blood pressure drugs really do lower blood pressure, this is a surprising outcome. Makridakis points out that doctors start treating high blood pressure when systolic pressure rises above 140 mm Hg, a point at which there is little or no increase in death rate. This article tells doctors to prescribe drugs immediately when systolic blood pressure is above 160 mm Hg. Yet the death rate clearly increases only when systolic blood pressure is above 180 mm Hg. Makridakis concludes (as do I) that blood pressure drugs have significant health costs as well as benefits. The drugs are prescribed so often when they do no good, and their costs are so high, that the overall health costs of blood pressure treatment have managed to be as high as the overall benefits. Even when handed a relatively easy-to-measure problem (high blood pressure) and a relatively simple solution (blood pressure drugs), our health care system managed to achieve no clear gain. If this is “one of medicine’s major successes”, medicine is in bad shape.
The last paragraph of Makridakis’s article makes a surprising statement: “We strongly believe that medicine is extremely useful.” It does not explain this belief, which is contradicted by the rest of the article. I was puzzled. I wrote to the author:
I recently read your paper on “High blood pressure and iatrogenics”. The main part makes good sense. Then it ends with something quite puzzling: “We strongly believe that medicine is extremely useful.” No doubt a few areas of medicine are extremely useful. For large chunks of medicine, it is hard to tell whether they do more good than harm, because so many drugs and other treatments have undisclosed or unnoticed bad effects.
For example, tonsillectomy (for a long time the most common operation) is associated with a 50% increase in mortality in one study. The notion that cutting off part of the immune system is a good idea makes as much sense as the idea that cutting out part of the brain is a good idea. Another example is sleeping pills, which are associated with a three-fold increase in death rate soon after people begin taking them. I am not saying that medicine overall does more harm than good. I am saying that a strong belief about the outcome of such an assessment (does medicine overall do more good than harm?) doesn’t make sense.
Makridakis replied:
Thank you for your email. The paper you mention is a draft posted for comments. I agree with you that my statement is wrong. It should have read: “We strongly believe that medicine can be extremely useful”. For instance, this could be the case in treating heart attacks, strokes, traumas from car accidents or bullet shots. But in most other cases the harm from treatment can be greater than the benefits. In addition, the harm from preventive medicine can exceed its value. Thank you for pointing out this mistake to me.
The Health Care Blog post titled “The Empowered Patient” by Maggie Mahar exists, as far as I can tell, because much hospital care has considerable room for improvement and many mistakes are made (for example, patients are given the wrong drug). One commenter (MD as Hell) said he has worked in hospitals for more than 30 years and offered some advice, including:
Never be alone in a hospital
Never go to a hospital unless you have no alternative
Do not let fear motivate you to be a consumer of any part of healthcare
In the comments, several doctors expressed their dislike of the whole idea of “patient participation”. For example,
Patients manage the process. Really? I’m sure your plumber or mechanic love you and this philosophy so much they hug you when you greet them.
Here is another argument against patient participation:
The huge problem that barely anyone wants to talk about is [the assumption] that patient (and family) participation are always (or even just mostly) beneficial. This is a completely unfounded assumption. Please read Dr. Brawley’s book “How we do harm” to read 2 long and IMHO representative anecdotes of patient/family centeredness resulting in net harm. . . . Lack of patient involvement and medical errors are hardly on top of the list of pressing flaws of the US health care system . . . Profit centeredness resulting in overtreatment of the insured and undertreatment of the underinsured are the main issues.
If medical errors are the #3 cause of death in America, they are one of the most serious flaws of the US health care system. The doctors in this comment section who dislike patient participation do not propose a better way to reduce mistakes, or a better use of the time and mental energy that patient participation requires. Maybe their annoyance is a good thing. Maybe they will be so annoyed that they will reduce errors in other ways.
It is bizarre that the need for patient involvement cannot be easily dismissed. I cannot think of another profession (accountants, bus drivers, carpenters, dentists, elementary school teachers, and so on) where anyone says never be alone with them. Sure, hospital patients are highly vulnerable, but that vulnerability is no secret. It could have led to a system, similar to flying (airplane passengers are highly vulnerable), with an extremely low rate of fatal error. My own experience supports patient involvement. The biggest motivation for my self-experimentation, at least at first, was my discovery that a powerful acne medicine my dermatologist had prescribed (tetracycline, an antibiotic) was no help. My dermatologist had shown no sign of considering this possibility. When I told him about my experiment (varying the dose of the antibiotic) and the results (no change in acne), he said, “Why did you do that?” Later, a surgeon I consulted about a tiny hernia was completely misleading about the evidence for her recommendation that I have surgery for it.
A few days ago I wrote about a study that suggested that people who’d had bariatric surgery were at much higher risk of liver poisoning from acetaminophen than everyone else. I learned about the study from an article by Erin Allday in the San Francisco Chronicle. The article included this:
At this time, there is no reason for bariatric surgery patients to be alarmed, and they should continue using acetaminophen if that’s their preferred pain medication or their doctor has prescribed it.
This was nonsense. The evidence for a correlation between bariatric surgery and risk of acetaminophen poisoning was very strong. Liver poisoning is very serious. Anyone who’s had bariatric surgery should reduce their acetaminophen intake.
Who had told Allday this nonsense? The article attributed it to “the researchers” and “weight-loss surgeons”. I wrote Allday to ask.
She replied that everyone she’d spoken to for the article had told her that people who have had bariatric surgery shouldn’t be alarmed. She did not understand why I considered the statement (“no need for alarm”) puzzling. I replied:
The statement is puzzling because it is absurd. The evidence that acetaminophen is linked to liver damage in people who have had bariatric surgery is very strong. Perhaps the people you spoke to didn’t understand that. The size of the sample (“small”) is irrelevant. Statisticians have worked hard to be able to measure the strength of evidence independent of sample size. In this case, their work reveals that the evidence is very strong.
If the experts you spoke to (a) didn’t understand statistics and (b) were being cautious, that would be forgivable. That’s not the case here. They (a) don’t understand statistics and (b) are being reckless. With other people’s health. It’s fascinating, and very disturbing, that all the experts you spoke to were like this.
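To make the point about sample size concrete, here is a minimal Python sketch with invented counts (not the data from the study Allday reported on): a Fisher exact test on a small two-by-two table can still produce a tiny p-value, which is what “strong evidence despite a small sample” means.

# Invented counts, not the study's data. Rows: liver-injury cases, controls.
# Columns: prior bariatric surgery (yes, no).
from scipy.stats import fisher_exact

table = [[9, 1],    # hypothetical: 9 of 10 liver-injury cases had prior surgery
         [5, 85]]   # hypothetical: 5 of 90 controls had prior surgery

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.0f}, p = {p_value:.1e}")
# Even with only 100 people in total, a p-value this small is strong evidence
# that the association is not a fluke of the small sample.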
I have no reason to think that the people Allday talked to were more ignorant than typical doctors. I expect researchers to be better at statistics than average doctors. One possible explanation of what Allday was told is that most doctors, given a test of basic statistical concepts, would flunk. Not only do they fail to understand statistics, they don’t understand that they don’t understand. Another possible explanation is that most doctors have a strong “doctors do everything right” bias, even when it endangers patients. Either way, bad news.
A recent op-ed in the New York Times by H. Gilbert Welch, a co-author of Overdiagnosed, describes a tragedy of ignorance and overconfidence. The current emphasis on regular mammograms began thirty years ago. Mammograms will prevent deaths from breast cancer, doctors and health experts told hundreds of millions of women. They will allow early detection of cancers that, if not caught early, would become life-threatening. The campaign was very successful. According to the paper cited by Welch, about 70% of American women report getting such screening.
It is now abundantly clear this was a mistake. If screening worked perfectly, that is, if all of the cancers it detected were dangerous, the rate of late-stage breast cancer should have gone down by the same amount that the rate of early-stage breast cancer went up. Over the thirty years of screening, the rate of (detected) early-stage breast cancers among women over 40 doubled, no doubt because of screening. (Over the same period the rate of early-stage breast cancers among women under 40 barely changed.) In spite of all this early detection and treatment, the rate of late-stage breast cancer among women over 40 stayed essentially the same. All that screening (billions of mammograms), all that chemo and surgery and radiation, all that worry and time and misery, and no clear benefit to the women screened or to those who paid for the screening and treatment. It turns out that roughly all of the “cancers” detected by screening, and then removed at great cost, were not dangerous.
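Here is the arithmetic behind that claim as a toy Python calculation, with made-up incidence rates rather than the figures from Welch’s paper:

# Made-up incidence rates per 100,000 women per year, purely to show the logic.
early_before, late_before = 100, 100   # hypothetical rates before screening began
early_after = 160                      # hypothetical rate once screening finds more early cancers

extra_early = early_after - early_before          # 60 extra early-stage diagnoses

# If every screen-detected cancer were dangerous, each extra early diagnosis
# would be a late-stage cancer that never happened.
expected_late_after = late_before - extra_early   # 40
observed_late_after = 95                          # hypothetical: late-stage rate barely moved

overdiagnosed = observed_late_after - expected_late_after
print(expected_late_after, observed_late_after, overdiagnosed)   # 40 95 55

On these invented numbers, 55 of the 60 extra early-stage diagnoses produced no reduction in late-stage cancer, which is the sense in which roughly all of the extra detected “cancers” were harmless.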
Quite apart from the staggering size of the mistake and the long time needed to notice it, screening has been promoted with specious logic.
Proponents have used the most misleading screening statistic there is: survival rates. A recent Komen Foundation campaign typifies the approach: “Early detection saves lives. The five-year survival rate for breast cancer when caught early is 98 percent. When it’s not? It decreases to 23 percent.” Survival rates always go up with early diagnosis: people who get a diagnosis earlier in life will live longer with their diagnosis, even if it doesn’t change their time of death by one iota.
Did those making the 98% vs. 23% argument not understand this?
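To see how that works, here is a made-up example in Python (the dates are invented, not from any real case):

# Lead-time bias in miniature: the same woman, the same year of death,
# but a different five-year survival statistic depending on when she is diagnosed.
death_year = 2010
diagnosed_by_symptoms = 2007   # hypothetical: cancer noticed when symptoms appear
diagnosed_by_screening = 2002  # hypothetical: the same cancer found five years earlier by mammogram

def survives_five_years(diagnosis_year):
    return (death_year - diagnosis_year) >= 5

print(survives_five_years(diagnosed_by_symptoms))   # False: she counts against the survival rate
print(survives_five_years(diagnosed_by_screening))  # True: she counts as a "5-year survivor"
# Screening made her a statistical success without changing her date of death.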
I applaud Welch’s research, but his op-ed has gaps. An unbiased assessment of breast cancer screening would include not only the (lack of) benefits but also the (full) costs. Treatment for a harmless “cancer” may cause worse health than no treatment. Maybe chemotherapy and radiation and surgery increase other cancers, for example. What about the effect of all those mammograms on overall cancer rate? Welch fails to consider this.
Welch also fails to make the most basic and important point of all. To reduce breast cancer, it would be a good idea to learn what environmental factors cause it. (For example, maybe poor sleep causes breast cancer.) Then it could be actually prevented. Much more cheaply and effectively. Yet the Komen Foundation and the Canadian Breast Cancer Foundation say “race for the cure” instead of trying to improve prevention.
A recent article by a doctor complains about over-prescription of opiates. The author, Susana Duncan, complains about several things, including “a system where symptoms are treated but the source of pain remains”. Treatment of symptoms rather than identification of causes is overwhelmingly true of the whole health care system, not just the treatment of chronic pain. One example is depression: anti-depressants do not reduce whatever caused the depression. Another is high blood pressure: blood-pressure-lowering drugs do nothing to eliminate what caused the high blood pressure. Duncan was once science editor of New York magazine, which may have something to do with her ability to cogently criticize the system.
When I read in August that the talented Hollywood film director Tony Scott had killed himself without any apparent good reason, I was fairly sure that pretty soon we would find that the poor man had been taking ‘antidepressants’. Well, a preliminary autopsy has found ‘therapeutic’ levels of an ‘antidepressant’ in his system. I take no pleasure in being right, but as the scale of this scandal has become clear to me, I have learned to look out for the words ‘antidepressant’ or ‘being treated for depression’ in almost any case of suicide and violent, bizarre behavior. And I generally find it. The science behind these pills is extremely dubious. Their risks are only just beginning to emerge. It is time for an inquiry.
“Tony Scott Suicide Remains a Mystery After Autopsy,” wrote a Vanity Fair editor. The autopsy found that he had been taking the antidepressant Remeron, whose known side effects include suicidal thinking and behavior. Remeron is not an SSRI, but SSRIs, the most widely prescribed antidepressants, have been shown to cause suicidal thinking even in people who are not depressed.
The psychiatrist David Healy was the first to emphasize this point. In 2000, after he began this research, he was offered a job at the University of Toronto. In a very unusual move, the job offer was rescinded. Apparently psychiatry professors at the University of Toronto realized that Healy’s research made the psychiatric drug industry look bad.
I don’t think it’s wrong to sell drugs that improve this or that condition (e.g., depression), even if the improvement is slight. I do think it’s wrong to make false claims to induce people to buy the drugs. In the case of depression, the false claim is that depression is due to a “chemical imbalance.” No chemical difference has ever been shown between people who later become depressed and people who don’t. This claim, repeated endlessly, makes it harder to do research into what causes depression. If you figured out what caused depression, you could treat it and prevent it much better. This false claim does enormous damage: it delays by many years the discovery of effective treatment and prevention of depression, a disease from which hundreds of millions of people now suffer.
This happens in dozens of areas of medicine. Dermatologists say “acne is caused by bacteria”. Most doctors appear to believe “ulcers are caused by bacteria”. Ear, nose, and throat surgeons claim that part of the immune system (the tonsils) causes illness. The “scale of the scandal” is great: medical school professors either (a) don’t understand causality or (b) deceive the rest of us.