Google vs Yahoo: Scientific Implications

Google vs Yahoo over several years: a fable for scientists. Yahoo is worth countless billions of dollars less than Google, in spite of a big head start. The moral: methodological complications, always seen as “improvements,” have a price. The benefit of a more complex experiment is easy to see, while the increase in cost (difficulty) usually goes unremarked.

My usual comment on proposed research is that an easier experiment — often smaller, often less “well-controlled” — would be better. I seem to be the only person who says this, yet I say it all the time.

3 thoughts on “Google vs Yahoo: Scientific Implications”

  1. Is there a means one could come up with to help get a sense of the likelihood of any given self-experiment working, or having good results? Maybe self-experimenters on average deviate from later, more reliable (but presumably more costly) studies by x percentage points. So when you’re doing your particular self-experiment, you say, “Well, self-experimenters tend to have exaggerated figures on average, by x percent compared to later, more reliable findings, so I can account for that,” and you account for it by trying to get even greater findings.

    You perhaps say, “Well, x percent of generalizations from a sample of one are off by x percent compared to later, more reliable studies, so I’ll account for that too,” and you expand your sample to a few friends to decrease the variation. You then say, “Well, small, non-random samples tend to deviate from large random samples by x percent on average, so I’ll try to focus on findings that are large enough to be larger than the average deviation.”

    Would this approach be useful (does such information even exist comparing large randomized studies to self-experimentation)? I’m not terribly knowledgeable about probability and statistics. This approach does seem to be in keeping with your idea of making your findings as pronounced as possible. I’d love to hear your thoughts, and perhaps you can clear up misconceptions I have, if you’re so inclined.

  2. “Good results” would be learning something you wouldn’t know otherwise. It is hard to imagine that not happening. Usually the alternative to self-experimentation is doing nothing.

    As for getting “wrong” results: you have to look long and hard through the history of science to find cases where the results of self-experimentation were misleading; it doesn’t take long at all to find cases where they pointed in the correct direction.

  3. I guess what I’m wondering is what kind of generalizations we can feel comfortable making about self-experimentation, and what biases should perhaps be controlled for when looking at our data. I think it makes sense to self-experiment, and in an informal way I think we all do it, and your formal approach seems way better than informal self-experimentation. I’d just like to have a sense of what reasonable inferences we can draw from the facts we’ve uncovered via self-experimentation.

    For example, if I were to observe that my pulse is 70 beats per minute and generalized, saying, “I would guess the average person’s heart rate is about that,” I’d probably be close (I think). But if I observed that I could read 100 pages in half an hour and assumed this was average, I’d be wrong (I wish I could!).

    I’m just wondering if there are useful approaches or information out there to help in interpreting data from self-experimentation, so one can make good inferences. For example, would it be useful to have a sample of self-experimentation results compared to more conventional studies and see the differences in conclusions? “Oh, the average self-experimentation result varied from the conventional study result by x percent. Oh, x percent of self-experimentation findings approximated the findings of more conventional studies. I can use that info in judging the results of my own self-experimentation.” (A rough sketch of this comparison idea follows the comments below.)
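For readers who want to play with the calibration idea raised in comments 1 and 3, here is a minimal Python sketch. It assumes a hypothetical set of paired results (a self-experimenter’s estimate alongside a later, larger study’s estimate of the same effect); no such dataset is given in the post, and every number below is invented purely for illustration.

```python
# Sketch of the commenters' calibration idea. All data are hypothetical:
# each pair is (effect estimated by a self-experimenter,
#               effect later estimated by a larger conventional study),
# measured in the same units.
paired_results = [
    (12.0, 9.5),
    (8.0, 6.0),
    (20.0, 15.0),
    (5.0, 5.5),
    (10.0, 7.0),
]

# Average percentage by which the self-experiment estimates exceed the
# later, larger-study estimates ("x percent" in comment 1).
pct_deviations = [
    (self_est - study_est) / study_est * 100
    for self_est, study_est in paired_results
]
avg_exaggeration_pct = sum(pct_deviations) / len(pct_deviations)

# Typical absolute gap between the two kinds of estimates, in raw units.
abs_deviations = [abs(s - c) for s, c in paired_results]
avg_abs_deviation = sum(abs_deviations) / len(abs_deviations)


def calibrate(own_estimate, exaggeration_pct=avg_exaggeration_pct):
    """Deflate a new self-experiment estimate by the average
    exaggeration observed in the hypothetical reference pairs."""
    return own_estimate / (1 + exaggeration_pct / 100)


def large_enough(own_estimate, threshold=avg_abs_deviation):
    """Comment 1's other heuristic: trust a finding only if it is
    bigger than the typical gap between self-experiments and the
    later, larger studies."""
    return abs(own_estimate) > threshold


if __name__ == "__main__":
    print(f"average exaggeration: {avg_exaggeration_pct:.1f}%")
    print(f"typical absolute gap: {avg_abs_deviation:.1f} units")
    raw = 14.0  # a new, hypothetical self-experiment finding
    print(f"calibrated estimate for a raw finding of {raw}: {calibrate(raw):.1f}")
    print(f"large enough to take seriously? {large_enough(raw)}")
```

Whether this helps in practice depends entirely on whether such paired data exist and on how comparable the outcomes are across studies; the code only shows the arithmetic the comments describe, not a validated method.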
