Salt is Good, Says New Study

A new study in JAMA found higher salt consumption strongly associated with less death from heart disease. The association with total mortality (more salt, less death) was almost significant. To grasp the strength of the evidence, see this. Yes, it’s a correlation, but I don’t know of any examples of such a strong correlation reversing (so that more salt is now correlated with more death) when now-unknown confounders are taken into account. In 1998, Gary Taubes argued that the benefits of salt reduction were greatly overstated. The new study did find more salt correlated with higher systolic blood pressure but in the big picture (mortality) that didn’t matter. If all those warnings about salt had any effect, the new study suggests their effect was negative.

Perhaps people who eat less salt are more credulous (they believed the experts) — and this damages them in other ways? Perhaps they rely on doctors more, for example. It is hard to interpret this finding in a way that makes mainstream health care look good. A New York Times article about the study points out that “the new study is not the only one to find adverse effects of low-sodium diets.” And it reports what someone at the Centers for Disease Control said:

Dr. Peter Briss, a medical director at the centers, said that the study was small; that its subjects were relatively young, with an average age of 40 at the start; and that with few cardiovascular events, it was hard to draw conclusions.

Dr. Briss fails to understand statistics. Ordinary statistical calculations take sample size and number of events into consideration when indicating the strength of the evidence. That’s one of the main purposes of those calculations. As for “relatively young,” I know of nothing to suggest that the effects of sodium reverse with age — so it is irrelevant that the subjects were relatively young. That someone at the CDC is so clueless is remarkable.

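To make the point concrete, here is a minimal Python sketch (my illustration, with made-up numbers, not the study’s data) of a standard two-proportion z-test. Sample size and event counts enter the calculation directly through the standard error: that is how “small” and “few events” are already accounted for when the strength of the evidence is computed.

```python
from math import sqrt

def two_proportion_z(events_a, n_a, events_b, n_b):
    """Standard two-proportion z-test. The standard error shrinks as
    sample sizes grow, so n and event counts are built into the
    strength-of-evidence calculation."""
    p_a, p_b = events_a / n_a, events_b / n_b
    p_pool = (events_a + events_b) / (n_a + n_b)            # pooled proportion
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    return (p_a - p_b) / se

# Hypothetical numbers (not from the JAMA study): the same observed
# difference in death rates gives weaker evidence with a smaller sample.
print(two_proportion_z(10, 100, 20, 100))      # |z| ~ 2.0
print(two_proportion_z(100, 1000, 200, 1000))  # |z| ~ 6.3
```
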
Conway’s Law and Science

Conway’s Law is the observation that the structure of a product will reflect the structure of the organization that designed it. If the organization has three parts, so will the product. In the original paper (1968), Conway put it like this:

Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.

Here is an example:

A contract research organization had eight people who were to produce a COBOL and an ALGOL compiler. After some initial estimates of difficulty and time, five people were assigned to the COBOL job and three to the ALGOL job. The resulting COBOL compiler ran in five phases, the ALGOL compiler ran in three.

A consumer — someone outside the organization who uses the product — wants the best design. Conway’s Law implies they are unlikely to get it.

I generalize Conway’s Law like this: it is hard for people with jobs to innovate, for reasons that outsiders know nothing about, whereas people without jobs have total freedom. An example is a politician who promises change but fails to deliver. The promises of change are plausible to outsiders (voters), so they elect the politician. However, being outsiders, they barely understand how government works. When the promised changes don’t happen, the voters are “disillusioned”.

To me, the most interesting application of the generalized law is to science. In my experience, people who complain about “bad science”, such as John Ioannidis and Ben Goldacre, have the same incomplete view of the world as the “disillusioned” voters. They fail to grasp the constraints involved. They fail to consider that the science they are criticizing may be the best those professional scientists can produce, given the system within which they work. Better critiques would look at the constraints the professional scientists are under, the reasons for those constraints, and how those constraints might be overcome.

“Much research is conducted for reasons other than the pursuit of truth,” writes Ioannidis. Well, yes — people with jobs want to keep them and get promoted. They want to appear high status. That’s not going to change. It’s absolutely true that drug company scientists slant the evidence to favor their company’s drug, as Irving Kirsch explains in The Emperor’s New Drugs. But if you don’t understand what causes depression and you’re trying to produce a new anti-depressant and you want to keep your job . . . things get difficult. The core problem is lack of understanding. Lack of understanding makes innovation difficult. Completely failing to understand this, Ioannidis recommends something that would discourage new ideas: “We must routinely demand robust and extensive external validation—in the form of additional studies—for any report that claims to have found something new.”

Truly “bad science” has little to do with what Ioannidis or Goldacre or any quackbuster talks about. Truly bad science is derivative science, science that fails to find new answers to major questions, such as the cause of obesity. Failure of innovation isn’t shown by any one study. Given the rarity of innovation, it is unwise to expect much of any one study. To see lack of innovation clearly you need to look at the whole distribution of innovation. Whether the system is working well or poorly, I think the distribution of innovation resembles a power law: most studies produce little progress, a tiny number produce large progress. The slope of the distribution is what matters. Bad science = steep downward slope. With bad science, even the most fruitful studies produce only small amounts of innovation.

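As a toy illustration of the slope claim (my construction, using a Pareto distribution as a stand-in for the distribution of innovation), the sketch below draws per-study “innovation” under different tail exponents. With a steep slope (large alpha), even the best of ten thousand studies yields little; with a shallow slope, the best study is enormous.

```python
import random

random.seed(0)

def best_study_yield(alpha, n_studies=10_000):
    """Draw per-study 'innovation' from a Pareto distribution with tail
    exponent alpha and return the single best study's yield. A steeper
    slope (larger alpha) thins out the big wins."""
    return max(random.paretovariate(alpha) for _ in range(n_studies))

for alpha in (1.1, 2.0, 3.0):  # shallow slope -> steep slope
    print(f"alpha={alpha}: best of 10,000 studies = {best_study_yield(alpha):,.1f}")
```
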
Just as outsiders expect too much from professionals, they fail to grasp the innovative power of non-professionals. Mendel was not a professional scientist. Darwin was not a professional scientist. Einstein did his best work while a patent clerk. John Snow, the first person to use data (a graph) to learn the cause of an infection, was a doctor. His job had nothing to do with preventing infection. To improve innovation about health (or anything else), we should give more power to non-professionals, as I argued in my talk at the First Quantified Self Conference.

Thanks to Robin Barooah.

Seth Roberts Interview About Self-Experimentation

For an article about self-experimentation and self-tracking to appear in Men’s Fitness UK this summer, Mark Bailey sent me several questions.

In what ways have the results of your self-experimentation directly affected your daily life e.g. health / work / lifestyle changes?

  1. Acne. My dermatologist prescribed two medicines. I found that one worked, the other didn’t.
  2. Weight. Found new ways to lose weight (e.g., nose-clipping).
  3. Sleep. Found new ways to sleep more deeply, avoid early awakening (e.g., one-legged standing).
  4. Mood, energy, serenity. Found that morning faces make me more cheerful, more energetic, and more serene.
  5. Productivity. After I started to track when I was working, I discovered that a certain feedback system made me work more, goof off less.
  6. Inflammation. Self-experimentation led me to take flaxseed oil. In the right dose — which I determined via self-experimentation — it greatly reduces inflammation. As a result, my gums are pink instead of red. They no longer bleed when I floss.
  7. Balance, reflexes. Flaxseed oil improved my balance and quickened my reflexes — I catch what I would have dropped.
  8. Blood sugar. I found that walking a lot improves my blood sugar level.
  9. Mental clarity. I found that flaxseed oil and butter improve how well my brain works in several ways.

Changes 1-6 are/were obvious. The rest are more subtle.

How long have you been self-experimenting?

About 35 years.

What are the main advantages of self-experimentation e.g. yields results specifically relevant to the individual and engages them directly in the process of finding solutions?

My self-experimentation has had three benefits:

1. Find new ways to improve health. Ways that no one knew about. I mentioned most of them earlier: New ways to lose weight, sleep better, be in a better mood, and so on. I find them to be much better (safer, cheaper, more powerful) than what was already available.

2. Test health claims made by others. I’ve done this many times. My interesting self-experimentation started when, as I said earlier, I measured the efficacy of two acne medicines my dermatologist had prescribed. I found that Treatment A worked and Treatment B did not, which was the opposite of what I had believed. It’s been claimed that drinking vinegar causes weight loss. I tried it; it didn’t work. Many people say that exercise improves sleep. I found that aerobic exercise made me fall asleep faster but did not reduce early awakening. The most dramatic “test” of health claims made by others came when I discovered that butter improved my arithmetic speed — which meant it was likely that butter improved overall brain function. I took this to mean that butter was good for the rest of the body — in contradiction to the official line that saturated fats are bad for us.

3. Find best “dose” of a treatment. Many people have claimed that flaxseed oil is beneficial. I found they were right. I tested different amounts/day and found the dosage that produced the most benefit. The best dose (2-3 tablespoons/day) was much larger than you would guess from the size of flaxseed oil capsules and the suggested dose on bottles of flaxseed oil capsules.

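A minimal sketch of that dose-finding logic (mine, with hypothetical benefit scores rather than the author’s actual measurements): run each candidate dose for a block of days, average a daily measurement, and pick the dose with the best average.

```python
import random
from statistics import mean

random.seed(1)

doses = [1, 2, 3, 4]   # hypothetical tablespoons/day to test
days_per_dose = 14     # run each condition for a block of days

def daily_benefit(dose):
    """Stand-in for a real daily measurement (e.g., a balance score).
    The 'true' effects below are invented for illustration."""
    true_effect = {1: 0.2, 2: 0.8, 3: 1.0, 4: 0.9}[dose]
    return true_effect + random.gauss(0, 0.3)  # measurement noise

results = {d: mean(daily_benefit(d) for _ in range(days_per_dose))
           for d in doses}
print(results, "-> best dose:", max(results, key=results.get))
```
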
What do you consider are the potential weaknesses e.g. lack of clinical precision / possible placebo effect?

Are too-high expectations a weakness? You could spend a lot of time and not learn anything useful. That isn’t so much a weakness as a fact of life.

In my experience, useful self-tracking and self-experimentation are slow. Other people’s self-tracking projects often strike me as too ambitious — doing too much too soon. For example, they are tracking too many things. Or worrying too much about placebo effects. Because they are doing too much — carrying too much, you could say — they may get tired and stop before they have learned something useful.

From a psychological perspective, why is the use of data / numbers, as in self-tracking, so much more powerful and engaging than merely ‘setting a goal’?

For one thing, it’s more forgiving. When I set goals for myself, I often fail to meet them. That can be so unpleasant I give up. When you simply measure something, it is much easier to succeed — all you have to do is make the measurement. For another thing, it’s more informative. By studying my data I can learn what controls what I’m measuring (e.g., sleep). Setting a goal doesn’t do that.

Why, in a world dominated by numbers / statistics, has it taken so long for us to use data to learn about ourselves, our lives and our bodies?

You seem to be asking why it has taken so long to apply something so useful elsewhere (“numbers/statistics”) to ourselves. I have a different starting point. I think it is science — which is more than numbers and statistics — that has been useful elsewhere. Numbers/statistics by themselves are little help. I also think health scientists (e.g., med school professors) have used numbers/statistics to learn about ourselves — with little success.

In my experience, you need four things to make useful progress on health:

  1. Good tools. Computer, numerical measurement.
  2. Experiments. You need to systematically change things.
  3. Knowledge of what others have learned. You can’t do experiments blindly; there are too many possibilities. You have to choose wisely what to change.
  4. Motivation. You have to really care about finding something useful.

Professional scientists have Numbers 1-3 (tools, experiments, knowledge). Lacking Number 4 (motivation), they haven’t gotten very far. Self-trackers have Number 1 (tools). If they have a problem, something they want to improve, they have Number 4 (motivation). So most self-trackers have Numbers 1 and 4. Without Numbers 2 and 3 (experiments and knowledge) they aren’t going to get very far. What’s so important about the self-quantification movement is that they might get Numbers 2 and 3. They might learn to experiment. They might learn to study what everyone else has already learned. When that happens, I think they will make a lot of progress. They will discover useful stuff that professional scientists have missed. And the whole world will benefit.

What developments will need to occur before self-tracking can really grow in the future e.g. better analysis / devices etc?

More successful examples. More examples where self-tracking led to improvement. They will teach everyone how to do it usefully. I think these examples will show that self-tracking alone is not nearly enough, as I said. But maybe I’m wrong. We need examples to find out.


Sterilities of Scale and What They Say About Economics

You have surely heard the phrase economies of scale — meaning that when you make many copies of something, each copy costs less than when you make only a few. Large companies are said to benefit from “economies of scale” — so there is pressure to become bigger. Every introductory economics textbook says something like this.

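The textbook arithmetic behind the phrase fits in one line: a fixed cost is spread over n units on top of a constant per-unit cost, so average cost falls as n grows. (A minimal sketch with made-up numbers.)

```python
def average_cost(n, fixed=1000.0, per_unit=2.0):
    """Textbook economies of scale: fixed cost is spread over n units,
    so average cost falls as n grows."""
    return fixed / n + per_unit

print(average_cost(10), average_cost(10_000))  # 102.0 vs 2.1
```
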
Here’s what none of them say: The more of Item X made by one company, the more “sterile” Item X becomes, meaning the less Item X is able to spark innovation. Call this sterilities of scale. You have never heard this phrase — I invented it. (I cannot find it anywhere on the Web.) But it is just as obviously true as the notion that when you make more of something you can make each one more cheaply. If 100 widgets are made by one company, there is going to be less innovation surrounding widgets than if 100 widgets are made by 10 different companies.

Sterility of Scale 1: When ten different companies make something, more people are studying and thinking about and pursuing different ways of making it than if only one company makes it.

Sterility of Scale 2: The more profitable a single item becomes (due to low cost of manufacture), the more pressure not to change anything — not to kill the goose that lays golden eggs.

Sterility of Scale 3: The larger the company, the more employees who care only about preservation of their fiefdom (comparing 10 companies of 10 people each to 1 company of 100 people).

See how obvious it is that sterilities of scale exist?

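Sterility of Scale 1 has a simple statistical face. As a toy model (mine, not the post’s), treat each company’s approach as an independent, heavy-tailed draw of “innovation”: ten small companies get ten independent shots at a breakthrough; one big company gets one.

```python
import random
from statistics import mean

random.seed(2)

def innovation():
    """One company's shot at a breakthrough: usually small,
    occasionally large (a heavy-tailed draw)."""
    return random.paretovariate(1.5)

def best_of(n_companies):
    return max(innovation() for _ in range(n_companies))

trials = 10_000
one_big = mean(best_of(1) for _ in range(trials))     # one company, one try
ten_small = mean(best_of(10) for _ in range(trials))  # ten companies, ten tries
print(f"one big company: {one_big:.1f}  ten small companies: {ten_small:.1f}")
```
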
The two concepts — economies of scale and sterilities of scale — are equally elementary. But only one is taught. Study of innovation should be 50% of economics but in fact is close to 0%.

This is why Tyler Cowen’s The Great Stagnation is so important — because it begins to point to this great gap. Jane Jacobs did so, but had little or no impact. (At a Reed alumni gathering I was seated next to a professor of economics. “What do you think of the work of Jane Jacobs?” I asked her. “Who’s Jane Jacobs?” she replied.) I think human decorative preferences are so diverse (chacun à son goût, no accounting for taste) for exactly this reason: to avoid sterilities of scale. Diversity of preference makes it easier for many different manufacturers to thrive, which increases innovation. For example, diversity of furniture preference makes it easier for dozens of furniture companies to survive, thus increasing innovation surrounding furniture. Clayton Christensen’s The Innovator’s Dilemma describes many examples where large companies were much less innovative than smaller companies — so much so that they often went bankrupt. Which suggests sterilities of scale can be fatal.

If there were more understanding that ten small things are going to be more innovative than one big thing, I like to think that scientists would better understand the value of very small research and grant sizes would go down. An illustration of the general cluelessness is someone who wrote to Andrew Gelman complaining that a sample size was only 30.

I started thinking about this after hearing Nassim Taleb discuss economies of scale (e.g., here).

Methodological Lessons From My One-Legged-Standing Experiment

A few days ago I described an experiment that found standing on one leg improved my sleep. Four/day (= right leg twice, left leg twice) was better than three/day or two/day. I didn’t know that. For a long time I’d done two/day.

I think the results also contain more subtle lessons. At the level of raw methodology, I found that context didn’t matter. The effect of four/day was nearly the same when (a) I measured that effect using four days in a randomized design (where the dose for each day is randomly chosen from two, three, and four) and when (b) I measured that effect using a dose of four day after day. Suppose I want to compare three and four. Which design should I use: (a) 3333344444, (b) 3434343434, or (c) 4433343434 (randomized)? The results suggest it doesn’t matter.

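For concreteness, here is a minimal Python sketch (simulated sleep scores, not the real data) of the three candidate designs. Under the condition the results support, namely that each day’s score depends only on that day’s dose, all three designs estimate the same difference between conditions.

```python
import random
from statistics import mean

random.seed(3)

blocked    = [3] * 5 + [4] * 5  # (a) 3333344444
alternated = [3, 4] * 5         # (b) 3434343434
shuffled   = [3] * 5 + [4] * 5  # (c) randomized order
random.shuffle(shuffled)

def sleep_score(dose):
    """Simulated nightly rating: depends only on that day's dose."""
    return {3: 6.0, 4: 7.0}[dose] + random.gauss(0, 0.5)

def estimated_effect(design):
    scores = [(dose, sleep_score(dose)) for dose in design]
    m4 = mean(s for d, s in scores if d == 4)
    m3 = mean(s for d, s in scores if d == 3)
    return m4 - m3  # four/day minus three/day

for name, design in [("blocked", blocked), ("alternated", alternated),
                     ("randomized", shuffled)]:
    print(f"{name:10s} {estimated_effect(design):+.2f}")
```
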
The experiment didn’t take long (a few months) but it took me a long time to begin. I noticed the effect behind it (one-legged standing improves sleep) two years ago. Why did I wait so long to do an experiment about details?

I was already collecting the data (on paper) — writing down how long I slept, rating how rested I felt, etc. But I wasn’t entering that data in my laptop. To transfer months of data into my laptop required motivation. Most of my self-experimentation has been motivated by the possibility of big improvements — much less acne, much better mood, and so on. That wasn’t possible here. I slept well, night after night.

What broke the equilibrium of doing nothing? A growing sense of loss. I knew I was throwing away something by not doing experiments (= doing roughly the same thing day after day). The longer I did nothing, the more I lost. To say this in an extreme way: I had discovered a way to improve sleep that was unconnected to previous work — sleep experts haven’t heard of anything like it. It was real progress. To fail to figure out details was like finding a whole new place and not looking around. Moreover, the experiments wouldn’t even be difficult. The treatment takes less than a day and you measure its effect the next morning. This is much easier than lots of research. Suppose you know that radioactivity is bad and you discover something radioactive in your house. A sane person would move that radioactive thing as far away as possible — minimizing the harm it does. I had discovered something beneficial yet wasn’t trying to maximize the benefits. Crazy!

An early lesson I learned about experimentation is to run each condition much longer than might seem necessary. If you think a condition should last a week, run it for a month. Things will turn out to be more complicated than you think, and having more data will help you deal with the additional complexity that turns up. Now it was clear I had gone too far in the direction of passivity. I did the experiment; it was helpful; I could have done it a year ago.

Growth of Quantified Self (more)

At the Quantified Self blog, Alexandra Carmichael has posted several graphs showing how much the Quantified Self movement has grown during the past year. The number of QS meetup members has grown by a factor of 3; the number of groups has grown by a factor of 6.

Measuring yourself is a step toward controlling yourself — especially, controlling your health and well-being. Almost everyone wants more control of these things. The Quantified Self movement encourages the idea that ordinary people can do useful science. I believe that idea is a shift with implications on the order of the shift from religion (the Sun revolves around the Earth) to science (the Earth revolves around the Sun). When ordinary people begin to do science, I predict we will learn a lot more about how to control our bodies.

Before science became powerful, people knew lots of correct useful stuff (e.g., metallurgy). But there were limits on what could be learned (e.g., Galileo was put under house arrest). Now religion is much less powerful, but most people believe that science can only be done by certain people (e.g., professors). This too places serious limits on what can be learned. For control of the outside world (e.g., materials science, physics), I don’t think these limits matter (although the case of Starlight suggests that even here amateurs can make important discoveries). But for control of the inner world (our bodies), the message of my work is that these limits matter a lot. By studying myself I managed to learn a bunch of useful things that professional scientists could learn only with great difficulty. For example, I could learn from accidents how to sleep better; I could easily test ideas about how to sleep better. Few if any professional sleep researchers measure sleep night after night for long periods of time; nor do they do cheap fast experiments.

“Do a Small Thing”: Good Advice For Revolutionaries and Scientists

This is the best magazine article I have read in a long time. The subtitle is “What Egypt Learned from the Students Who Overthrew Milosevic”, a good description. The Serbian students who overthrew Milosevic had several lessons for budding revolutionaries in other countries, such as Egypt and Burma. One was/is:

Do a small thing and if it is successful, you have the confidence to do another one and another one.

Much like my advice about science: Do the smallest easiest thing that will tell you something. You will learn more from it than you expect. If someone criticizes a study for being “small”, they are saying “1 + 1 = 3”. If someone does a large study that fails, they are saying the same thing.

Via Long Form. I knew little about the author, Tina Rosenberg, before this. I am looking forward to reading the book about peer pressure from which this article was taken.

Growth of Quantified Self

The first Quantified Self (QS) Meetup group met in Kevin Kelly’s house near San Francisco in 2008. I was there; so was Tim Ferriss. Now there are 19 QS groups, as distant as Sydney and Cape Town.

I believe this is the beginning of a movement that will greatly improve human health. I think QS participants will discover, as I did, that simple experiments can shed light on how to be healthy — experiments that mainstream researchers are unwilling or unable to do. Echoing Jane Jacobs, I’ve said farmers didn’t invent tractors. That’s not what farmers do, nor could they do it. Likewise, mainstream health researchers, such as medical school professors, are unable to greatly improve their research methods. That’s not what they do, nor could they do it. They have certain methodological skills; they apply them over and over. To understand the limitations of those methods would require a broad understanding of science that few health researchers seem to have. (For example, many health researchers dismiss correlations because “correlation does not equal causation.” In fact, correlations have been extremely important clues to causality.) Big improvements in health research will never come from people who make their living doing health research, just as big improvements in farming have never come from farmers. That’s where QS comes in.

The first QS conference is May 28-29. Tickets are still available.

Monocultures of Evidence

After referring to Jane Jacobs (“successful city neighborhoods need a mixture of old and new buildings”), which I liked, Tim Harford wrote this, which I didn’t like:

Many medical treatments (and a few social policies) have been tested by randomized trials. It is hard to imagine a more clear-cut practice of denying treatment to some and giving it to others. Yet such lotteries — proper lotteries, too — are the foundation of much medical progress.

The notion of evidence-based medicine was a step forward in that it recognized that evidence mattered. It was only a small step forward, however, because its valuation of evidence — on a single dimension, with double-blind randomized trials at the top — was naive. Different sorts of decisions need different sorts of evidence, just as Jacobs said different sorts of businesses need different sorts of buildings. In particular, new ideas need cheap tests, just as new businesses need cheap rent. As an idea becomes more plausible, it makes sense to test it in more expensive ways. That is one reason a monoculture of evidence is a poor idea.

Another is that you should learn from the past. Sometimes a placebo effect is plausible; sometimes it isn’t. To ignore this and insist everything should be placebo-controlled is to fail to learn a lot you could have learned.

A third reason a monoculture of evidence is a poor idea is that it ignores mechanistic understandings — understanding of what causes this or that problem. In some cases, you may think that the disorder you are studying has a single cause (e.g., scurvy). In other cases, you may think the problem probably has several causes (e.g., depression, often divided into endogenous and exogenous). In the latter case, it is plausible that a treatment will help only some of those with the problem. So you should design your study and analyze your data taking into account that possibility. You may want to decide for each subject whether or not the treatment helped rather than lump all subjects together. And the “best” designs will be those that best allow you to do this.
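
A sketch of that per-subject approach (my own illustration, with simulated data): compare each subject’s treatment measurements with that same subject’s baseline and count who was clearly helped, instead of pooling everyone into one average. A treatment that helps only a subset shows up clearly in the per-subject tally even when the pooled average looks weak.

```python
import random
from statistics import mean, stdev

random.seed(4)

def subject_data(responder):
    """Simulated subject: 20 baseline and 20 treatment measurements.
    Only 'responders' benefit from the treatment."""
    effect = 2.0 if responder else 0.0
    baseline = [random.gauss(5, 1) for _ in range(20)]
    treated = [random.gauss(5 + effect, 1) for _ in range(20)]
    return baseline, treated

# A third of subjects are responders (an invented proportion).
subjects = [subject_data(responder=(i % 3 == 0)) for i in range(30)]

helped, diffs = 0, []
for baseline, treated in subjects:
    diff = mean(treated) - mean(baseline)
    diffs.append(diff)
    se = (stdev(baseline) ** 2 / 20 + stdev(treated) ** 2 / 20) ** 0.5
    if diff > 2 * se:  # per-subject decision: clearly helped?
        helped += 1

print(f"pooled average effect: {mean(diffs):.2f}")
print(f"subjects clearly helped: {helped} of {len(subjects)}")
```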