The Economics of Medical Hypotheses and Its Successor (part 2 of 2)

A successor to Medical Hypotheses, called Hypotheses in the Life Sciences, will be edited by William Bains and published by Buckingham University Press (BUP).

ROBERTS Does BUP hope to eventually make money from the successor journal? Or do they merely hope the subsidy required will decrease with time?

WILLIAM BAINS BUP is a small operation, and does not have the resources to subsidize Hypotheses in the Life Sciences beyond its start-up stage, so we hope to make enough money to break even fairly soon. Ultimately the aim is to be profitable. I for one am determined to put scientific quality first, and I have emphasized to BUP that I only want the journal to grow (and hence generate more revenue) when the quality of submissions allows it.

ROBERTS What led BUP to decide to publish the new journal?

BAINS I think a combination of similarity in philosophy and being in the right place at the right time. They thought it was an exciting project which would both raise their profile (in a good way) and make them money. Buckingham University is the UK’s only private university, and as such takes a heterodox, even iconoclastic view towards what the academic establishment says is writ in stone. The Chancellor has a robust approach to academic and individual freedom. So a journal trying to do something rather new, enabling those with good ideas but little power to be heard, fitted with their approach.  For me, an added advantage is that I deal directly with the man at the top. There are no intermediate layers of management to take decisions about the journal, and we discuss everything from philosophy to web page design. This is the sort of immediacy you do not get with a big publisher.

Part 1 (Bruce Charlton). Bioscience Hypotheses, a similar journal founded by Bains.

Learning From “Pseudoscience”

The second episode of BBC’s The Story of Science is about chemistry. It shows unusual sophistication by emphasizing that early chemists built on the alchemists. The alchemists invented techniques and equipment later used by “real” chemists such as Joseph Priestley — the ones who reached conclusions we still believe. Not everyone understands that some “pseudoscience”, such as alchemy, is valuable.

A few years after I became an assistant professor, I realized the key thing a scientist needs is an excuse. Not a prediction. Not a theory. Not a concept. Not a hunch. Not a method. Just an excuse — an excuse to do something, which in my case meant an excuse to do a rat experiment. If you do something, you are likely to learn something, even if your reason for action was silly. The alchemists wanted gold so they did something. Fine. Gold was their excuse. Their activities produced useful knowledge, even though those activities were motivated by beliefs we now think silly. I’d like to think none of my self-experimentation was based on silly ideas but, silly or not, it often paid off in unexpected ways. At one point I tested the idea that standing more would cause weight loss. Even as I was doing it I thought the premise highly unlikely. Yet this led me to discover that standing a lot improved my sleep.

Richard Feynman, in his famous “cargo-cult science” speech, failed to understand that “real” science can build on “pseudoscience”:

Another example is how to treat criminals. We obviously have made no progress–lots of theory, but no progress–in decreasing the amount of crime by the method that we use to handle criminals. Yet these things are said to be scientific. We study them. And I think ordinary people with commonsense ideas are intimidated by this pseudoscience.

Absence of obvious progress (such as no decrease in crime) doesn’t mean something is worthless. Bizarre ideas or unsupported ideas (“lots of theory but no progress”) don’t mean something is worthless. What’s worthless, in terms of science, is not paying attention to reality. Not caring about how the world actually is. The cargo cults Feynman mentioned weren’t worthless. They tested their beliefs. They found out the planes didn’t land. Fine. It wasn’t pseudoscience, it was just early science, where the reasons for doing stuff now appear ridiculous. Of course the alchemists had beliefs we now think ridiculous. How could they not have?

Science is fundamentally on the side of the weak, since it offers hope of improvement. The powerful not only can afford to ignore reality, they would like to, because reality might be inconvenient. So they do so as much as possible. When I’ve heard “the debate is over” (= it’s now time to ignore reality), it has always turned out that the person saying it (e.g., Al Gore, mainstream journalists) was powerful or credulous.

It’s not bad that some people ignore reality. We need people like that. I think of the body: parts of it (e.g., sensory systems) are very sensitive to reality; parts of it (e.g., bones) are not. We need both. Trouble begins when leaders ignore reality.

The Economics of Medical Hypotheses and Its Successor (part 1 of 2)

A successor to Medical Hypotheses, titled Hypotheses in the Life Sciences, will soon be published. I asked Bruce Charlton, the editor of Medical Hypotheses, and William Bains, the founder of the new journal, about the economics of the situation.

ROBERTS Did Medical Hypotheses make money for Elsevier? How much did it cost to run per year (leaving aside time contributed by you and the editorial board)? How much of that did Elsevier pay?

BRUCE CHARLTON Medical Hypotheses did for sure make money for Elsevier – but I was never allowed to see the accounts.

I was told circa April 2009 that the journal still made a profit even after page charges were abolished in early 2009 (income from things like subscriptions and sales of reprints including paid downloads, but mainly from its share of internet-access ‘bundles’ via ScienceDirect, which are purchased mainly via library subscriptions from colleges etc.).

Costs were my salary plus a share of the Elsevier editorial team – the journal secretary, the person who put together the issues and the manager – i.e., three main people at Elsevier each of whom worked on a group of journals.

Before 2009, when Medical Hypotheses still had page charges, the journal will have been very profitable, since it had the above sources of income plus page charges of about 60 dollars per thousand words. For a journal of 160-240 pages, with about 500 words per page, that’s roughly 50 thousand dollars extra income per issue – with 12 issues per year, that is roughly half a million dollars p.a. in page charges alone. Over seven years as editor I must have generated a few million dollars income for Elsevier.

So – in my opinion Elsevier’s behavior with Medical Hypotheses does not make business sense, since it lost them a lot of income and risked even more. Hounding a successful editor, and sacking him before his contract was finished, with the 2010 issues un-compiled and nobody lined up to replace me, did not make business sense either; nor did the mass of bad publicity all this generated for Elsevier.

My inference is that an individual or group in Elsevier senior management – perhaps Senior Vice President (USA) Glen P Campbell, who began the whole business and who has remained personally active in it (including the appointment of the new editor) – took a personal interest in Medical Hypotheses and in my editorship, for reasons unknown to me, and drove the whole process.

The most sinister aspect of the whole thing for me is that senior Elsevier managers are now exerting personal influence on the content of the scientific literature and the conduct of science (overseeing appointment of editors, new restrictions on editorial conduct etc) – and they are doing this not for business reasons, but presumably to pursue their own private agendas.

The strict legalistic definition of academic freedom (for what it is worth – see writings by Louis Menand) is that academics be autonomous in the conduct of academic work (conduct, appointments, promotions, reviewing etc.). The Medical Hypotheses Affair shows Elsevier very clearly in breach of academic freedom, and every competent editor will immediately recognize this fact.

In addition, in the later stages of the journal, Elsevier managers were also involved in covertly selecting (i.e. rejecting) what they considered ‘controversial’ Medical Hypotheses papers – the papers were intercepted after I had formally accepted them and held back; some were later rejected.

Elsevier also employed the Lancet (which they own) to choose ‘peer reviewers’ for the Duesberg and Ruggiero papers and arrange to have them rejected (using criteria quite different from those of Medical Hypotheses).

So we know for sure that the Elsevier-owned Lancet (one of the most prestigious medical journals in the world – perhaps the most prestigious?) is nowadays in the pocket of Elsevier management, and willing to do dirty jobs for them.

Yet there has been no outcry against Elsevier’s breach of academic autonomy from senior journal editors (nothing from the editors of Nature, Science, the Lancet (understandably, since they are Elsevier employees), NEJM, JAMA, BMJ etc.). This silence means, I take it, that these journals are no longer autonomous; their editors are nowadays in the pocket of their own publishers and live in fear for their own jobs.

The Medical Hypotheses affair is therefore a straw in the wind: an indicator on a small scale of what is happening at the larger scale: i.e. the thoroughly dishonest and hypocritical state of modern science and academia, and the domination of the content and conduct of science by outside interests.

But the unusual point, not well understood, is that these outside interests are not always operating in profit-maximizing ways. My understanding is that senior managers (in the private and public sector) are ‘using’ – even exploiting – their organization’s resources to pursue personal goals, engaging in a kind of moral grandstanding: making large gestures which show how ‘ethical’ they are in their views – at everyone else’s expense.

This can be seen most clearly in the ‘Green’, ‘ethical’ behaviours linked to the Global Warming scam – senior managers have shown themselves willing to sacrifice efficiency in pursuit of large moralistic policy gestures of ‘caring about the planet’ with which they become personally associated (recycling schemes, fair trade, campaigns to ‘save energy’ or promote public transportation among staff, etc. – none of which are actually effective in terms of real-world effects, but which are effective in expressing ‘concern’).

Such moral gestures are invariably designed to appeal to elite PC opinion – it is a major form of status competition among the elites. My guess is that something of this sort is behind what happened at Medical Hypotheses: a senior manager or group of managers at Elsevier probably wanted to show themselves and their peers that they were taking a strong ‘moral’ stance against people who published AIDS-denialist papers.

Logarithmically Right

In Kathryn Schulz’s new book about being wrong (Being Wrong), she makes an interesting mistake:

In the instant of uttering [“I told you so”], I become right squared, maybe even right factorial, logarithmically right — at any rate, really, extremely right.

Schulz doesn’t know that the logarithm of a large number is much less than the number itself. For example, the natural log of 100 is about 4.6.
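How slowly the (natural) log grows is easy to check; a minimal Python sketch, purely illustrative:

```python
import math

# math.log is the natural log; it grows far more slowly than its argument
for x in (10, 100, 1000, 1_000_000):
    print(x, round(math.log(x), 1))
# 100 compresses to 4.6, and a million to only 13.8
```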

What’s interesting is that logarithmically right is a good way of describing how one’s beliefs should be transformed to be a fair approximation of the truth. When you think you are right, you probably are — but logarithmically. Much less than you think.

When faced with a scientific paper — the sort that press releases are written about, for example — the naive reader takes it at face value. The little-knowledge-is-a-dangerous-thing reader finds many shortcomings and dismisses it (“how did this get through peer review?”). The more likely interpretation, in my experience, is that the paper, in spite of its imperfection, moves us a little bit forward. Much less than appearances, but more than zero.

Bruce Charlton on the Trouble With RCTs

In response to my post about the trouble with randomized controlled trials (RCTs), Bruce Charlton, the editor of Medical Hypotheses, wrote me:

The golden age of medical discovery came before the widespread usage of RCTs. This golden age was all but over by the end of the 1960s; since then the rate of progress has declined (see refs such as Horrobin, Le Fanu and Wurtman in https://www.hedweb.com/bgcharlton/funding.html).

The earliest big and influential RCT in psychiatry was in the mid 1960s, and it was – in retrospect – misleading with respect to MAOIs due to too low a dosage. Now that RCTs are regarded as indispensable, medical research is captive to Big Pharma.

https://www.guardian.co.uk/commentisfree/2009/aug/08/seroxat-pharmaceutical-birth-defect

Another area of medicine [in addition to obstetrics] that has made big progress without being RCT-led is anesthetics. Dentistry is a third. These specialties are instead technology-led.

He also pointed me to an article by David Horrobin, the founder of Medical Hypotheses, titled “Are large clinical trials in rapidly lethal diseases usually unethical?” His answer was that some of their aspects are unethical: prospective subjects (sick persons) are not told of the low chance of benefit, the high chance of bad side effects, or the great financial benefit of such trials to the institutions that run them.

Horrobin’s article also made the point I made: The emphasis on RCTs suppresses innovation because only big well-established companies can afford them:

50 years ago, good scientific evidence of a potential therapeutic effect would quickly have generated a small clinical trial in one or two centers with perhaps 30 or 40 patients. Such a trial would have cost almost nothing. It would certainly have missed small or marginal effects, but it would not have missed the sort of large effect that most patients want. Unfortunately, now, such an approach has become impossible. . . . The escalation of costs has therefore drastically reduced the range of compounds from which new treatments can be drawn.
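The cost gap Horrobin describes can be sketched with a standard two-proportion sample-size approximation. The response rates below are hypothetical, chosen only to contrast a “large” effect (the kind a 30-40 patient trial catches) with a “small” one:

```python
import math

def n_per_arm(p1, p2):
    """Approximate patients per arm needed to detect response rates p1 vs p2
    (two-proportion z-test, two-sided alpha = 0.05, power = 0.80)."""
    z_alpha, z_beta = 1.96, 0.84
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# A large effect (hypothetical 20% vs 60% response) fits the old small trial:
print(2 * n_per_arm(0.20, 0.60))  # 40 patients total

# A small effect (40% vs 45%) needs a trial only a big company can afford:
print(2 * n_per_arm(0.40, 0.45))  # 3058 patients total
```

The roughly 75-fold jump in required sample size, before multi-center overhead, is one way to see why requiring RCT-scale evidence prices small innovators out.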

My reading of history is that suppression of innovation can last a long time but eventually change comes from the outside and the system collapses. Detroit, for example, has collapsed. General Motors was once as dominant as big drug companies are now.

The Trouble With RCTs

In an email to a friend, I compared the obsession of med school professors with methodological purity (e.g., efficacy must be demonstrated with an RCT, randomized controlled trial) to religious ritual. More concern with appearances (ritual), I said, is linked to less understanding of substance. My friend replied:

I am actually a believer in this particular religion (The Cult of RCT)! Seriously: I think the medical world is quite right to put a huge premium on RCTs, because RCTs so often prove that things they are doing don’t work. While sometimes the RCT may provide a negative verdict on something that does work, this seems to me an unusual case, and generally avoidable if one considers statistical power, possible subgroup responses, etc and avoids overgeneralizing the conclusions.

I replied:

Are RCTs better than what prevailed before? Probably. But I would say the same about religion, which has its benefits.

I think the medical world has turned off a large fraction of its brain via insistence on RCTs and failure to understand their weaknesses and the strengths of alternatives. It isn’t just that “RCTs may return a negative verdict on something that works,” it’s also that such a requirement for very expensive research suppresses innovation — testing things via cheaper ways. Atul Gawande wrote about how obstetricians made a lot of progress by ignoring this requirement:

https://www.newyorker.com/archive/2006/10/09/061009fa_fact

Other areas of medicine, which followed the RCT requirement, made less progress during the same period, it can be argued.

Let’s say I told you that the only way you can travel to work is via an armed escort — you would be appalled, even though it’s true you would be safer. An insistence on RCTs is an overreaction – and, given the lack of innovation in medicine and health care, for which I believe RCTs (or at least the lack of understanding they embody) are partly responsible, a very expensive one.

The best way to learn is to do. The best way to learn about health is to do as many experiments as possible. Not slow, expensive RCTs. Not slow, expensive surveys, which don’t involve “doing” to the extent that an experiment does. This is a big reason my self-experiments taught me a lot — because I could do so many of them.

The Dreams of Geneticists

In a wiser world, we would see genetics research as we see astronomy: worth supporting, but without expecting practical benefit. In this world, however, genetics research is far better funded than astronomy and is expected to have practical benefits.

Unfortunately, the benefits have been slight. A New York Times article by Nicholas Wade makes this clear:

The primary goal of the $3 billion Human Genome Project — to ferret out the genetic roots of common diseases like cancer and Alzheimer’s and then generate treatments — remains largely elusive. Indeed, after 10 years of effort, geneticists are almost back to square one in knowing where to look for the roots of common disease.

“Largely” elusive? Completely elusive is more accurate, as far as I know. Not one treatment has come from this work.

In spite of ten years of failure, geneticists appear no wiser than before:

With most diseases, the common variants have turned out to explain just a fraction of the genetic risk. It now seems more likely [to prominent geneticists] that each common disease is mostly caused by large numbers of rare variants.

I know of no examples where a common (or any) disease has been shown to be caused by “large numbers of rare variants.” Perhaps these estimates of “genetic risk” are as misleading as asking what percentage of the area of a rectangle is determined by its width.

History repeats. Ten years ago, geneticists had zero examples of how mapping the human genome would help anyone with a common disease. Absence of any examples didn’t prevent such vast claims as that human genome mapping would “revolutionize the diagnosis, prevention and treatment of most, if not all, human diseases”. From zero, they extrapolated to “most”.

It’s a sad comment on science journalism that, at the time, no one pointed out the absence of examples, as far as I know, and a sad comment on Wade, holder of a powerful and prestigious job, that he has not pointed it out now. He simply repeats the claim. At least he has noticed the gigantic failure after it happened, even if he inaccurately describes it (“largely” rather than “completely”).

Lack of examples of the practical value of genetic mapping didn’t keep a huge amount of money from being spent.

With the catalog [of common genetic variants] in hand, the second stage was to see if any of the variants were more common in the patients with a given disease than in healthy people. These studies required large numbers of patients and cost several million dollars apiece. Nearly 400 of them had been completed by 2009.

Ten failures would have been plenty; 400 failures shows the resistant-to-evidence nature of the whole enterprise. It’s an example of how a little biochemical-mechanism research goes a long way; a lot of biochemical-mechanism research goes a little way.

For geneticists, to acknowledge the lack of examples is scary. Their funding might be cut! So they don’t. But nothing prevents journalists from thinking for themselves and asking a supposedly “tough” question (“what’s an example?”) — although asking for examples is the most basic question there is.

Thanks to Alex Chernavsky. More about the cargo-cult nature of modern biology. If you don’t believe me, read this: “Of the roughly 50 companies at the conference, not one is focused on approaches related to tracking down new genes. . . . The one corner of the genome-focused biotech industry that’s thriving is the one churning out equipment and services to support researchers in their endless hunt for gene links.”

Show-Off Professors

A new Jeffrey Eugenides short story quotes Derrida. Quote 1:

In that sense it is the Aufhebung of other writings, particularly of hieroglyphic script and of the Leibnizian characteristic that had been criticized previously through one and the same gesture.

Quote 2:

What writing itself, in its nonphonetic moment, betrays, is life. It menaces at once the breath, the spirit, and history as the spirit’s relationship with itself. It is their end, their finitude, their paralysis.

“A little Derrida goes a long way and a lot of Derrida goes a little way,” said a friend of mine who was a graduate student in English. These quotes show why. In The Theory of the Leisure Class, Veblen argued that professors write like this (and assign such stuff to their students) to show status. I have yet to hear a convincing refutation of this explanation, or a plausible alternative. Is there one?

Veblen was saying that professors are like everyone else. Think of English professors as a model system. Their showing-off is especially clear. It’s pretty harmless, too, but when a biology professor (say) pursues a high-status line of research about some disease rather than a low-status but more effective one, it does — if it happens a lot — hurt the rest of us. Sleep researchers, for example, could do lots of self-experimentation but don’t, presumably because it’s low-status. And poor sleep is a real problem. Throughout medical school labs, researchers are studying the biochemical mechanisms and genetic bases of this or that disorder. I believe this is likely to be less effective in helping people avoid a disorder than studying its environmental roots, but such lines of research allow the researchers to request expensive equipment and work in clean, isolated laboratories — higher status than cheap equipment and getting your hands dirty. I don’t mean high-status research shouldn’t happen; we need diversity of research. But, like the thinking illustrated by the Derrida quotes, there’s too much of it. A little biochemical-mechanism research goes a long way and a lot of biochemical-mechanism research goes a little way.

Can John Gottman Predict Divorce With Great Accuracy?

Andrew Gelman blogged about the research of John Gottman, an emeritus professor at the University of Washington, who claimed to be able to predict whether newlyweds would divorce within 5 years with greater than 90% accuracy. These predictions were based on brief interviews near the time of marriage. Andrew agreed with another critic who said these claims were overstated. He modified Gottman’s Wikipedia page to reflect those criticisms. Andrew’s modifications were removed by someone who works for the Gottman Institute.

Were the criticisms right or wrong? The person who removed reference to them in Wikipedia referred to a FAQ page on the Gottman Institute site. Supposedly they’d been answered there. The criticism is that the “predictions” weren’t predictions: they were descriptions of how well a model, fitted after the data were collected, could fit those data. If the model were complicated enough (had enough adjustable parameters), it could fit the data perfectly, but that would be no support for the model – and not “100% accurate prediction” as most people understand it.

The FAQ page says this:

Six of the seven studies have been predictive—each began with a hypothesis about factors leading to divorce. [I think the meaning is this: The first study figured out how to predict. The later six tested that method.] Based on these factors, Dr. Gottman predicted who would divorce, then followed the couples for a pre-determined length of time. Finally, he drew conclusions about the accuracy of his predictions. . . . This is true prediction.

This is changing the subject. The question is not whether Gottman’s research is any help at all, which is the question answered here; the question is whether he can predict at extremely high levels (> 90% accuracy), as claimed. Do the later six studies provide reasonable estimates of prediction accuracy? Presumably the latest ones are better than the earlier ones. The latest one (2002) was obviously not about accurate prediction estimates (its title used the term “exploratory”) so I looked at the next newest, published in 2000. Here’s what its abstract says:

A longitudinal study with 95 newlywed couples examined the power of the Oral History Interview to predict stable marital relationships and divorce. A principal components analysis of the interview with the couples (Time 1) identified a latent variable, perceived marital bond, that was significant in predicting which couples would remain married or divorce within the first 5 years of their marriage. A discriminant function analysis of the newlywed oral history data predicted, with 87.4% accuracy, those couples whose marriages remained intact or broke up at the Time 2 data collection point.

The critics were right. To say a discriminant function “predicted” something is to mislead those who don’t know what a discriminant function is. Discriminant functions don’t predict; they fit a model to data after the fact. To call this “true prediction” is false.

To me, the “87.4%” suggests something seriously off. It is too precise; I would have written “about 90%”. It is as if you asked someone their age and they said they were “24.37 years old.”
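The gap between fitting and predicting is easy to demonstrate. In the sketch below (hypothetical data, nothing to do with Gottman’s), the “divorce” outcomes are pure coin flips, yet a maximally flexible classifier (nearest-neighbor, standing in for any heavily parameterized model) scores 100% on the data it was fitted to:

```python
import random

random.seed(1)

def nn_predict(train_x, train_y, x):
    # classify x by the label of its single nearest training point --
    # a maximally flexible model, analogous to fitting after the fact
    i = min(range(len(train_x)), key=lambda j: abs(train_x[j] - x))
    return train_y[i]

n = 95  # same sample size as the 2000 newlywed study
train_x = [random.random() for _ in range(n)]        # a meaningless "score"
train_y = [random.random() < 0.5 for _ in range(n)]  # divorce = coin flip

# accuracy measured on the very data the model was fitted to
in_sample = sum(nn_predict(train_x, train_y, x) == y
                for x, y in zip(train_x, train_y)) / n

# accuracy on couples the model has never seen
test_x = [random.random() for _ in range(n)]
test_y = [random.random() < 0.5 for _ in range(n)]
out_sample = sum(nn_predict(train_x, train_y, x) == y
                 for x, y in zip(test_x, test_y)) / n

print(in_sample)   # 1.0: it "predicts" its own training data perfectly
print(out_sample)  # near 0.5: it has learned nothing
```

Fitting on one sample and scoring on a fresh one (cross-validation) is the standard guard against exactly this; an accuracy figure computed in-sample says little about how a model would do on new couples.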

Speaking of overstating your results, reporting bias in medical research. Thanks to Anne Weiss.

Cigarettes are Bad, Right?

My mom says her friends knew that smoking was harmful long before the Surgeon General’s report of 1964; they smoked anyway. The evidence that smoking causes lung cancer began to accumulate in the 1950s. At first it was a radical idea. The boss of Ernst Wynder, one of the scientists involved, cut Wynder’s research budget for continuing to study such a far-fetched notion.

Some of the details, indeed, did not make sense, as this fascinating essay (“The Scientific Scandal of Antismoking”, thanks to Robert Reis) points out. Were I to teach a course in scientific method, I might make this essay the first assignment: “Tell me its strengths and weaknesses.” Its strength is that it brings up new data that challenge a well-known idea (smoking causes lung cancer) that most people don’t give a second thought to. The conventional view that smoking is simply bad is surely wrong. The essay’s weaknesses are a dismissive attitude (“second-rate”) and a failure to learn from facts that don’t fit the authors’ ideas. For example, the big correlation between smoking and lung cancer that Wynder was the first to notice: what causes it? A more subtle lesson is that big randomized controlled clinical trials are not the wonderful thing that most writers, including the authors of this essay, make them out to be (“the gold standard”). MRFIT was a hugely expensive controlled clinical trial that produced no difference between the groups. It isn’t clear why. What can we learn from this? I’d ask my students. One lesson is the value of doing the smallest possible study: if the researchers had figured out the problems with a small study (and designed a better large study that avoided them), they would have had a better chance of learning something from their massive study.