My take on the Decline Effect

So I’ve now read Jonah Lehrer’s New Yorker piece several times. I take it seriously. The policy implications, particularly with regard to the use of pharmaceuticals, are incredibly disturbing. I’m less concerned with Rhine’s ESP research in the 1930s.

I should point out that there are many areas of science, ranging from molecular biology to astrophysics, where I don’t believe there is any evidence at all for such a “decline effect”. The disciplines affected by the problem are those that depend to a greater extent on parametric statistics (t-tests and the like) rather than on categorical “yes-no” results (e.g., a gene sequence, the timing of an eclipse, a band in a gel).
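To make that connection to parametric statistics concrete, here is a minimal simulation of my own (nothing from Lehrer’s piece; the true effect of 0.2, the 30 subjects per group, and the helper function are all assumptions chosen purely for illustration). A gel band is either there or it isn’t, but an effect size estimated from a noisy t-test is not; if the findings that get reported first are the ones that happened to clear p < 0.05, the initial effect sizes are inflated and exact replications drift back toward the true value, which by itself looks like a decline.

```python
# Illustrative sketch only: simulated two-group studies with a small true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d = 0.2   # assumed true standardized effect (Cohen's d)
n = 30         # assumed per-group sample size

def run_study():
    """One two-group study: return the observed Cohen's d and the t-test p-value."""
    treated = rng.normal(true_d, 1.0, n)
    control = rng.normal(0.0, 1.0, n)
    pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
    return (treated.mean() - control.mean()) / pooled_sd, stats.ttest_ind(treated, control).pvalue

# "Initial reports" are the studies that happened to cross p < 0.05 in the right
# direction; each is then followed by one exact replication with no selection applied.
initial, replication = [], []
for _ in range(5000):
    d0, p0 = run_study()
    if p0 < 0.05 and d0 > 0:
        initial.append(d0)
        replication.append(run_study()[0])

print("assumed true effect:      ", true_d)
print("mean initial (selected) d:", round(float(np.mean(initial)), 2))
print("mean replication d:       ", round(float(np.mean(replication)), 2))
```

In this toy setup the selected initial reports come out well above the assumed truth, while the unselected replications land close to it. No spooky forces required.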

So what about the causes? First, yes, there is experimenter bias. Experimenters are (still) human and hence imperfect.

But much more interesting to me is the problem of replicability. As a journal editor myself, I have to make difficult decisions about what to publish, and the reality of today’s scientific marketplace is that negative results have a hard time making it past editors and into print. So another real part of the problem is that when many studies are pooled and compared for replicability (meta-analysis), that comparison is itself inherently biased by the “dark matter” of unpublished negative results.
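Here is an equally rough sketch of that “dark matter” problem, again purely my own illustration with made-up numbers (a true effect of 0.1, a per-study standard error of 0.15, 500 attempted studies). If only individually significant positive results clear the editorial filter, a meta-analysis that can see only the published literature overestimates the effect, even though every individual study is perfectly honest.

```python
# Illustrative sketch only: the file-drawer effect on a naive meta-analysis.
import numpy as np

rng = np.random.default_rng(1)
true_effect = 0.1   # assumed true effect, in arbitrary outcome units
se = 0.15           # assumed standard error of each study's estimate
n_studies = 500

# Each study reports an unbiased estimate, but it reaches print here only if it
# is individually significant in the positive direction (z > 1.96) -- a crude
# stand-in for negative results languishing unpublished.
estimates = rng.normal(true_effect, se, n_studies)
published = estimates / se > 1.96

print("assumed true effect:             ", true_effect)
print("pooled estimate, published only: ", round(float(estimates[published].mean()), 3))
print("pooled estimate, all studies:    ", round(float(estimates.mean()), 3))
print("fraction left in the file drawer:", round(float(1 - published.mean()), 2))
```

The pooled estimate over the published subset comes out several times larger than the assumed truth, which is exactly the bias a meta-analysis inherits when the negative results never make it into print.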

Is something else spooky going on here? I don’t think so. Science, I’m pleased to say, has not yet been seriously targeted by deconstructive criticism.

Jonah Lehrer’s piece in the New Yorker

Behind the paywall, here’s the abstract of Jonah’s new piece on the “decline effect”. And here’s Steven Novella’s response on his NeuroLogica Blog.

Basically, what’s at stake is the faith that we (the community of scientists and those who use scientific results to create informed policies) place in the Scientific Method (as best defined by Popper).

I’m still working through my own thoughts as to Lehrer’s article. It’s creating a big stir among my colleagues and it deserves a serious response. So stay tuned.

Anthony Gottlieb on the limitations of Science

A fine short essay from the blog More Intelligent Life can be found here. Anthony is the former Executive Editor of The Economist.

His central point, which is well taken, is that while the scientific method may be flawed, it’s the only game in town, somewhat in the spirit of Winston Churchill’s view of democracy.

I would only add that the issue of publishing negative results he raises is somewhat overblown. Negative results that are truly paradigm-breaking are newsworthy, and hence will find a scientific (and usually public) audience. So, for example, the famed 1919 test of Einstein’s theory, notwithstanding Einstein’s famous quote, would definitely have been published even if the result had been negative.

What is science?

When I’m reading the popular media (as opposed to talking with other scientists), I’m often struck by the apparent disconnect between the intelligent lay public’s view of science and the understanding of science held by most practitioners. In particular, there is real confusion between technology, applied science and basic science. And secondarily, many members of the public have little understanding of the scientific method itself.

With regard to the first confusion, I would define technology as the material and sometimes non-material artifacts, built by human labor, that facilitate or enable some portion of human behavior. Thus the wheel is certainly technology, but so also is the Google search algorithm. By the same token, the iPhone is technology, but so too is the PubMed database of journal articles. But these things, material and non-material, are not science. Even though PubMed contains scientific data and is used by scientists, PubMed, from my point of view, is technology.

Applied science, on the other hand, is scientific research that can, more or less, lead directly to the invention and deployment of new technologies. Thus, I view much of biomedical research (translational research, in fact) as applied science because it can be used to develop new practical therapies to advance the public health. In the same way, agent-based modeling, applied to economics, potentially offers decision makers new computational tools for predicting and avoiding systemic risk. The key difference here is that while a prototype technological artifact may emerge from the practice of applied science, a mature technology generally does not.

Basic science, in contrast, is scientific research aimed at understanding the rule-set of the universe–usually by the hypothesis-based practice we call the scientific method.

It is this scientific method that, I think, deserves a much better public explanation–it may well help to depoliticize public policy decisions that are based on scientific research, especially basic research. Most discussions of the scientific method inevitably refer to Karl Popper. Popper’s key point was the centrality of falsifying a hypothesis. That is, scientific experiments are aimed at and designed to produce data that could, in principle, show the underlying hypothesis to be false.

Thus, science is constrained to questions that are in fact testable. The ones that aren’t testable (and they are surely out there) aren’t science. In my opinion, it is this notion that’s terribly important for decision makers to understand: a policy issue, subserved by an implicit theory that can’t be tested, is a bad match for scientific research, applied or otherwise.

One last thought: there is a lot of good science out there that is not hypothesis-based. It’s exploratory science–the Human Genome Project was an excellent example of this. Exploratory science aims at adding to our knowledge of the universe, not by gleaning its rule-set, but rather by collecting and curating its facts. And with the technology of modern databases, collecting and curating may offer great practical utility.