Hauser Redux

My father’s colleague, Charles Gross of Princeton, has written a superb piece in The Nation about the Marc Hauser case. It’s here.

Besides covering the facts of the now-famous scientific misconduct case, it offers an excellent general discussion of scientific misconduct and a beautiful description of a typical lab ecosystem.

Publishing a scientific paper….

A new model by Kravitz and Baker is here. Bradley Voytek’s blog entry on it is here (by the way, he’s an accomplished young neuroscientist and you can read his recent papers here).

So back to the new idea: basically, the notion is that all submissions get published, but they go through pre-publication and post-publication review, ranking and discussion (think Faculty of 1000). Journal editors like me become obsolete; only managing editors survive, along with elected editorial boards. All made possible, I guess, by the wonders of technology.

My main concern is this (for those readers who are in the life sciences). Think about all those posters at the largest meeting you attend. Actually, think about all those non-peer-reviewed posters at the Society for Neuroscience meeting: thousands and thousands of them. Now, think about (even with a ranking system) having to find the jewels among them.

But wait, you say, you can use the ranking and computational tools to view only the very best… and I say, why then would you ever publish the very worst?

And what would it mean for promotion and tenure when any person could have 1,000 first-authorship papers—in Neuron?

So I remain skeptical.

H5N1 Flu papers

As you’ve probably heard, there are two of them out there drawing a great deal of controversy because they purport to show how to molecularly engineer this deadly flu virus to be more transmissible. Recall that the global threat from a given flu virus is a function of both its mortality rate and its transmissibility.

Now, the US Biosecurity Panel is calling for a second “Asilomar” to consider how to handle such manuscripts in the future, with a publishing moratorium of “perhaps three months”. ScienceInsider has the story here.

As a journal editor, I have a huge stake in how this is handled. On the one hand, open science is essential to scientific progress. We need to have sufficient information in the manuscript for independent replication of the results, particularly if they are consequential. On the other hand, we don’t want to give the recipe for global catastrophe to an individual (or group) with ill intent.

Ideas that are out there right now converge along the lines of publishing the results while withholding the methods, except from parties who are properly vetted (perhaps by the CDC or WHO). But I’m not sure that would work, because the methods of molecular biology are really well known (see biohackers), and in this case the results alone might provide the essential “launch” key.

Richard Florida is wrong-headed

At least according to Mario Polèse here in City Journal.

Money quote:

The conclusion to draw from all this isn’t that cities can do nothing to promote economic development. It’s that they should avoid academic fads and quick fixes, which are no substitute for obvious policy goals like competently providing mandated services at reasonable cost, keeping streets safe, and not taxing and regulating away businesses—good governance, in sum, and even that comes with no guarantee to work.

Holidays in DC

As we head into Winter Break here at George Mason, there is a bit of time to sit back and reflect on the past semester: we saw the selection of a new President, the opening of some spectacular new campus facilities, and an incredibly productive time for our science, both in terms of new research findings and teaching. I’m grateful to the entire faculty, staff and students at Mason’s Institute for Advanced Study for what they’ve accomplished this Fall, and I look forward to the Spring semester.

In the meantime, I’m pleased to report that the traffic seems to be getting a bit lighter here as folks leave town. The weather in DC has been relatively balmy lately–as a native Californian, I’d love to see that trend continue through January and February.

Publish in high "retraction rate journals"….

From Bjorn Brembs, via the London School of Economics site, here. It turns out that if you’re encouraged to publish in high-impact-factor journals, you’re also being encouraged to publish in high-retraction-rate journals.

Money quote:

This already looks like a much stronger correlation than the one between IF and citations. How do the critical values measure up? The regression is highly significant at p<0.000003, with a coefficient of determination at a whopping 0.77. Thus, at least with the current data, IF indeed seems to be a more reliable predictor of retractions than of actual citations. How can this be, given that the IF is supposed to be a measure of citation rate for each journal? There are many reasons why this argument falls flat, but here are the three most egregious ones:

  • The IF is negotiable and doesn’t reflect actual citation counts (source)
  • The IF cannot be reproduced, even if it reflected actual citations (source)
  • The IF is not statistically sound, even if it were reproducible and reflected actual citations (source)

In other words, all the organizations that require scientists to publish in ‘high-impact journals’ at the same time require them to publish in ‘high-retraction journals’. I wonder if requiring publication in high-retraction journals can be good for science?
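For readers less familiar with the statistics in the quote, here is a minimal sketch of what a “coefficient of determination” from such a regression measures, written in Python with made-up illustrative numbers — not Brembs’ actual dataset.

```python
# Hypothetical (journal impact factor, retractions per 10,000 papers) pairs.
# These numbers are invented purely to illustrate the calculation.
data = [(2.1, 0.3), (4.5, 0.8), (9.0, 1.9), (15.2, 3.5),
        (30.0, 7.1), (35.5, 8.0), (42.4, 9.6)]

n = len(data)
xs = [x for x, _ in data]
ys = [y for _, y in data]
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Ordinary least-squares slope and intercept.
sxy = sum((x - mean_x) * (y - mean_y) for x, y in data)
sxx = sum((x - mean_x) ** 2 for x in xs)
slope = sxy / sxx
intercept = mean_y - slope * mean_x

# Coefficient of determination: the fraction of variance in retraction
# rate "explained" by impact factor under the linear model.
ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in data)
ss_tot = sum((y - mean_y) ** 2 for y in ys)
r_squared = 1 - ss_res / ss_tot
print(f"slope = {slope:.3f}, R^2 = {r_squared:.2f}")
```

An R² of 0.77, as Brembs reports, would mean roughly three-quarters of the journal-to-journal variance in retraction rate tracks the impact factor — which is the force of his point.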

Christopher Hitchens RIP

A really top-notch writer and debater has passed. His Financial Times obit is here. Christopher Buckley’s New Yorker obit is here. Over the years, I always admired his ability to use reasoned argument as a means to advance his views–as opposed to the tired emotional screeds that are all too common, especially in the blogosphere. We’ll miss his voice.