
Neuroscience Meridian

April 27, 2013

I want to talk about my loveable 2-year-old nephew – I’m going to use a pseudonym, so let’s call him Judge Holden (or the Judge for short). Now, the Judge is learning to use the toilet. When he successfully manages to use a potty (for its intended function), he looks tremendously pleased with himself – he has mastered something sophisticated and difficult that was previously beyond his ken. But the Judge does not just show his self-satisfaction when he succeeds; he also declares it to the whole world. And he thinks that everyone should emulate him: carrying a potty around the whole time, and wearing a nappy, just to be on the safe side.

Now, turning to neuroscience and psychology, and at the risk (intention?) of insulting lots of my colleagues, I wonder if those scientists currently decrying statistics in neuroscience (e.g., the replication crisis, inflated p-values, the need to pre-register scientific studies, etc.) are like my little nephew. They have learnt some basic statistics, and discovered some fairly obvious points about multiple comparisons and bias and so on, and now they are terribly pleased with themselves and feel the need to shout about it from the rooftops. And just like my nephew wanting everyone to partake in his robust approaches to stop self-soiling, they think that new robust requirements are necessary to stop bias and nonsense across the board in science. But, because my nephew is hopelessly naïve (not his fault – he’s 2), making everyone wear nappies and use potties would be cumbersome. Similarly, what many of these scientists want (particularly requirements for pre-registration) would be cumbersome and may well have negative effects on science as a whole.

Right, enough about the Judge. Let’s talk about pre-registration of studies.

Pre-registration of studies is meant to stop unconstrained post-hoc analyses that end up finding differences that are actually just noise in the data. Who could argue against that, right? It’ll mean that everything is above board and transparent, right? Not necessarily. Who knows what the effects will be. They could be positive or (I reckon) they could be negative.
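To be fair to the pre-registration crowd, the problem they’re targeting is real arithmetic, not a deep insight. Here’s a toy simulation (mine, with made-up illustrative numbers – not from any actual study): if an analyst on pure-noise data is free to try, say, 20 post-hoc comparisons at p < 0.05 and report whichever one "works", the chance of at least one spurious hit is about 1 − 0.95²⁰ ≈ 64%.

```python
import random

random.seed(1)

# Toy setup (all parameters are illustrative assumptions):
# each post-hoc test on pure-noise data comes out "significant"
# with probability ALPHA, independently of the others.
ALPHA = 0.05          # nominal per-test threshold
N_TESTS = 20          # post-hoc comparisons the analyst is free to try
N_STUDIES = 20000     # simulated pure-noise studies

# A study counts as a false positive if ANY of its tests "succeeds".
false_positive_studies = sum(
    any(random.random() < ALPHA for _ in range(N_TESTS))
    for _ in range(N_STUDIES)
)
rate = false_positive_studies / N_STUDIES

# Analytic family-wise error rate for comparison: 1 - (1 - alpha)^m
analytic = 1 - (1 - ALPHA) ** N_TESTS

print(f"simulated: {rate:.3f}, analytic: {analytic:.3f}")  # both come out around 0.64
```

Whether pre-registration is the right fix for this is the actual question; the inflation itself isn’t in dispute.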

First, as other commentators have said, pre-registration will act to discourage exploratory science. Reviewers will be biased against analyses that were not in the original pre-registration. As others point out, this makes it harder to publish incidental findings (although that’s kind of the whole point). But, more harmfully, it will make it harder to do interesting science. In FMRI, for example, the really interesting developments almost all use existing data. Take the work of FMRIB, in Oxford. Sure, they made lots of methodological developments, but their most important work is theoretical (I think): defining and understanding functional brain networks (e.g., Beckmann et al., 2005; Smith et al., 2009, etc.). Much of this work uses existing data – data that was not acquired for the purpose (e.g., generic resting-state data). There’s no way to pre-register this kind of research. The researchers had an idea about neural function that should be present in some data. There’s no way to correct the p-values for the different exploratory approaches taken with the data – such an idea is mad. And if you look in the literature, there are loads of examples just like this. Pre-registration goes directly against this type of research.

Second, it will slow science down, for two reasons: i) there will end up being several rounds of review at the pre-registration phase, so pre-registrations will have to be written carefully, slowly (if there isn’t any review of pre-registration, then the whole thing can’t work); ii) in fast-moving fields, there will be analysis and theoretical developments throughout the planning/data-acquisition phase, which means that if you’re taking pre-registration seriously, you have to re-register and collect more data, not just repurpose what you have. This may also involve people collecting even more boring FMRI datasets that all show the same thing (this already happens, by the way), which swallows precious research resources.

Third, I suspect that pre-registration will increase direct fraud in science. To understand this, consider the parable of the ratings agencies. These financial agencies rate different types of debt, for example that of different companies. These ratings were useful for investors in making decisions about financial products; however, after a while the investors started relying entirely on the rating agencies. In the run-up to the 2008 financial crisis, this meant that getting an AAA rating was all you needed to do to sell your debt; therefore, tricking a rating agency into giving your dodgy debt an AAA rating made sense. I wonder if a similar thing could happen with pre-registration in neuroscience. By making the system seem more robust, you make it easier for people who are prepared to commit fraud to get their results published and accepted, and you give their work an even bigger veneer of respectability.

I think that 1) and 2) above are serious problems, and 3) could be. They will slow neuroscience’s onward march to glory if pre-registration is implemented widely. So the cons are only worth it if the pros of pre-registration improve things substantially. Fundamentally, this means that pre-registration has to fix a problem which is actually holding up science’s onward march. And no-one has actually shown that dodgy results are holding it up; they’ve just speculated that they are.

I think the opposite. I don’t think dodgy results in some papers have that big an effect – or not enough to justify any drastic, restrictive countermeasures. In fact, it’s possible that having dodgy papers is good for science as a whole; maybe it’s like stochastic resonance in motor systems, where a bit of noise helps the system. This is what I think: below there’s a picture of a hypothetical world where there are some dodgy papers out there in non-clinical fields. Here it is again in a world where those papers have been exorcised from the face of history. See the difference?

So, what do I think? There are plenty of dodgy results out there. There always have been, for lots of reasons. And that’s why just because something’s published in a journal doesn’t mean you should trust it. Maintain your skepticism, especially for high-profile journals. If there are small samples or implausible results, then don’t believe the paper, don’t cite it, and criticize it – publicly, in blogs etc. – if you want. There are no easy fixes, and most importantly, you have to think for yourselves.



