Who Tests the Genetic Testers? And the Experts?

In the New York Times, a writer named Kira Peikoff, a graduate student in bioethics, tells how she sent her blood to three different companies, including 23andMe, for genetic analysis and got back results that differed greatly. As usual, none of the companies told her anything about the error of measurement in their reports, judging from what she wrote. So she’s naive and they’re naive (or dishonest). Fine.
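
To make “error of measurement” concrete: when one person’s sample yields different risk estimates from different companies, the spread of those estimates is itself a rough, lower-bound measure of the error. A minimal sketch, with invented numbers rather than the figures from the article:

```python
# Hypothetical lifetime-risk estimates (percent) for one person as reported
# by three testing companies. The numbers are invented for illustration;
# they are not the figures from the Times article.
from statistics import mean, stdev

reports = {
    "heart disease":   [28.0, 40.2, 25.1],
    "type 2 diabetes": [16.3, 21.0, 32.8],
    "psoriasis":       [2.1, 1.2, 4.9],
}

for condition, estimates in reports.items():
    # The between-company spread (standard deviation) is a crude lower bound
    # on the error of measurement -- the uncertainty a customer would need
    # in order to interpret any single company's number.
    print(f"{condition:16s} mean {mean(estimates):5.1f}%  "
          f"spread (SD) {stdev(estimates):4.1f}%")
```

With numbers like these, a report that said “about 31 percent, give or take 8” would at least tell the customer how seriously to take the number.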

I’m unsurprised that a graduate student in bioethics has no understanding of measurement error. What’s fascinating is that the experts she consulted didn’t either, judging by what they said.

A medical ethicist named Arthur L. Caplan weighed in. He said:

The ‘risk is in the eye of the beholder’ standard is not going to work. We need to get some kind of agreement on what is high risk, medium risk and low risk. [Irrelevant — Seth] If you want to spend money wisely to protect your health and you have a few hundred dollars, buy a scale, stand on it, and act accordingly.

As if blood sugar and blood pressure measurements aren’t useful. And a good scale costs $15, not a few hundred dollars.

A director of clinical genetics named Wendy Chung said:

Even if they are accurately looking at 5 percent of the attributable risk, they’ve ignored the vast majority of the other risk factors — the dark matter for genetics — because we as a scientific community haven’t yet identified those risk factors.

She changed the subject. The question raised by the article was whether the companies measure even that 5 percent consistently, not whether other risk factors remain to be discovered.

J. Craig Venter, the famous gene sequencer, does not understand the issue:

Your results are not the least bit surprising. Anything short of sequencing is going to be short on accuracy — and even then, there’s almost no comprehensive data sets to compare to.

The notion that “anything short of [complete] sequencing” cannot be helpful is absurd, if I understand what “short on accuracy” means. He reminds me of doctors who don’t understand that a t test corrects for sample size. They believe any study with less than 100 subjects cannot be trusted.
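
The correction is built into the test: the critical t value grows as the sample shrinks, so a small study must show a larger standardized effect before it reaches p < 0.05. A minimal sketch of that point, assuming scipy is available (the numbers are just the standard t table):

```python
# The t test already penalizes small samples: the two-sided critical t value
# at alpha = 0.05 grows as n shrinks, so a small study needs a larger
# standardized effect to reach significance. Assumes scipy is installed.
from scipy.stats import t

alpha = 0.05
for n in (10, 20, 50, 100, 1000):
    df = n - 1                           # one-sample t test with n subjects
    t_crit = t.ppf(1 - alpha / 2, df)    # critical value for |t|
    # |t| = effect * sqrt(n) for a one-sample test, where "effect" is the
    # mean difference in SD units, so the minimum significant effect is:
    min_effect = t_crit / n ** 0.5
    print(f"n = {n:4d}  critical t = {t_crit:4.2f}  "
          f"effect needed (SD units) = {min_effect:4.2f}")
```

Whether a given small study is trustworthy is a separate question (see the comments below on bias and selection), but “fewer than 100 subjects” is not by itself a reason to dismiss a significant result.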

I told a friend recently that I have become very afraid of doctors, for exactly the reason illustrated in these quotes from well-known experts who are presumably far more competent than any doctor I am likely to see. The experts were unable to comment usefully on something as basic as measurement error. Failing to understand basics makes them easy marks — for drug companies, for example — just as the writer of the article was an easy mark for the experts, whose appearance in the Times makes them look competent. Surely almost any doctor will be worse.

9 thoughts on “Who Tests the Genetic Testers? And the Experts?”

  1. Dear God, I hate it when people chunter on about “risk factors” as if they were causes. Or even just features that make you more vulnerable to other causes.
  2. It seems to me that measurement error is only the start of the limits of DNA testing. There’s no simple way to measure the limits of knowledge about the prevalence of diseases or the limits of current theories.
  3. Comments sections are just so perfect for tossing out frisbees of vocabulary. If one uses a highfalutin word, one surely must know what one is talking about. The Emperor has no clothes in all too many professions, not just medicine. But especially medicine. (But don’t get me started about investment banking.)
    Who tests the testers is a vital question. Consider the tragedy of what happened in Massachusetts, where deliberately fudged lab tests sent untold numbers of people to jail. Anyone who doesn’t know about that: google and ye shall find.
    Seth: Yes, I linked to a story about the Massachusetts forensics tester who made up lab results. I wonder if better technical competence by the people who read her reports would have caught her fabrications much earlier.
  4. 23andMe did give you those numbers, but it also gave you links to the research they were generated from and to the gene each result relates to.
  5. Ms. Peikoff (who I believe is the daughter of Leonard Peikoff, Ayn Rand’s choice for heir to her estate; interesting if not relevant) ran into the same problem I did with the Quicksilver Scientific Co. “mercury speciation” test: New York does not allow these test kits to be mailed from New York to labs that are not certified according to a specific NYS standard. I am surprised she admitted that “my in-laws mailed it from their home in New Jersey.” It seems New York is hostile to the concept of “health-test buyer beware,” though in this context there may be something to that.
  6. > They believe any study with less than 100 subjects cannot be trusted.
    I’d agree with them. See: publication bias, underpowering, the winner’s curse, base rates, self-selection, internal vs. external validity, the assumption of normality and the central limit theorem, to name just a few reasons why small-n studies, even with nice p < 0.05 values, are predictably untrustworthy. (A rough simulation of the winner’s curse point is sketched after the comments.)
  7. A study can only be trusted if the participants are independent of each other. If you test Alice with SNP-detection chip X 100 times, those 100 runs aren’t independent measurements; SNP-detection chip X might have a systematic bias.
    As for the question of what makes a good scale, I think these days a good scale should measure body fat.
    If you do spend money on an expensive scale like the Withings scale, you also get pulse measurements in addition to weight and body-fat data. Having air CO2 and temperature data can also be useful.
    ____
    Claiming that Wendy Chung changed the subject shows a misunderstanding of how to read a news article. Chung probably spent more than ten minutes on the telephone or in person with the journalist, and during that time she said that sentence, or something the journalist considered equivalent to it. Given that the sentence is true, it doesn’t suggest that Wendy Chung said anything wrong.
    I also don’t see any indication that the journalist who wrote the article is naive. She did what a good journalist is supposed to do: she investigated whether the three companies would report similar results. She might have already expected the outcome, but I commend a journalist who actually goes out, runs the test, and tells readers about the experiment instead of just telling them what the FDA has to say.
    Seth: She is naive because she failed to address the question of measurement error, the subject of the article. It isn’t clear she even understands what it is. I agree that it is a good article, just for the reason you said. Sometimes naive people do good work.
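
To follow up the comment above about small-n studies, here is a rough simulation of one of the listed problems, the winner’s curse: among small, underpowered studies, the ones that happen to reach p < 0.05 systematically overestimate the true effect. It assumes numpy and scipy; the true effect, group size, and number of simulated studies are arbitrary choices for illustration.

```python
# Rough simulation of the winner's curse: with small, underpowered studies,
# the estimates that happen to reach p < 0.05 exaggerate the true effect.
# Assumes numpy and scipy are installed.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
true_effect = 0.2      # true difference between groups, in SD units
n_per_group = 20       # a "small n" study
n_studies = 5000       # number of simulated studies
significant_estimates = []

for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    stat, p = ttest_ind(treated, control)
    if p < 0.05:
        # Record the estimated effect only when the study "worked".
        significant_estimates.append(treated.mean() - control.mean())

print(f"true effect: {true_effect}")
print(f"mean estimate among significant studies: "
      f"{np.mean(significant_estimates):.2f}")
print(f"fraction of studies reaching p < 0.05: "
      f"{len(significant_estimates) / n_studies:.2f}")
```

In runs like this the significant studies report an effect several times larger than the true one, which is compatible with both points above: the t test corrects for sample size, and a literature filtered by significance can still mislead.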
