
The only fundamental principle of neuroscience

The brain is complicated. Really complicated. You just won’t believe how vastly, hugely, mindbogglingly complicated it is. I mean, you may think that your laptop is a complex piece of engineering, but that’s just peanuts compared to the brain.

I mean, there are loads of different types of neurons, synapses, glial cells, neurotransmitters, receptors. Genetic, epigenetic influences. There’s even the cerebellum, whatever that is. These all interact at a range of spatial and temporal scales, from the microscopic to the macroscopic. Loops and networks. Top-down, bottom-up. And all of this complexity comes together to allow us to do the glorious, epic, meaningful things we do: walk the dog, stare at passing traffic, allow a solitary tear to roll down our cheek, maybe even write a blog.

And in response to this complexity, what do we do? Well, we could give up and do something else (bee-keeping perhaps). But instead, we try to make sense of it all, and we do that by making it simpler. We look for underlying fundamental principles, capable of explaining big chunks of this complexity. And everyone agrees that this approach is a good, and a noble and a wholesome and a sensible thing to do.

As a result, the world is awash with fundamental principles. The Bayesian brain, the small-world brain, the homeostatic brain, the efficient brain, the critical brain, the computational brain, the dynamical brain, the hierarchical brain, the sensorimotor brain, the topological brain, free energy, fast and slow thinking (OK, we can’t think of any more, we should probably read more widely).

The NeuroRant wishes to point out, however, that while these different approaches have plenty of merit, there’s only one real fundamental principle out there: the brain is a hack. This fundamental principle trumps all the others. So we’ll say it again: the brain is a hack. We don’t mean a hack as in teenagers in Guy Fawkes masks and irritating acronyms. We mean a hack as in “an inelegant but effective solution to a computing problem” (to quote Wikipedia). No doubt evolutionary processes followed many principles in shaping the brain, but in the end, the human brain is a jury-rigged hatchet job, cobbled together out of bits of string and sticky tape. Therefore, any of these other (inferior) fundamental principles are going to turn out to be inadequate when enough data is put together.

So kids, be warned and stay frosty when you read those all-encompassing review articles; you know the ones we mean, the articles that seem to really tie all of the data together. A healthy dose of skepticism, accompanied by pain and confusion, is the only way to go.



Homo Scientificus (or whatever it should be in Latin)

A bit of discipline hopping to start with.

Homo Economicus is the theoretical construct that economists use to build their models of how economies work, and how interventions (such as a change in tax rate) will affect the economy. Homo Economicus is the idea that Homo sapiens are perfectly rational and narrowly self-interested beings. Now, it kind of goes without saying that Homo Economicus is basically completely wrong, and a mad starting assumption to base most of economics on. But it is simple, and it allows elegant, sophisticated models of behaviour that have founding principles, and so lots of people like it and use it (even if it is (a) obviously wrong and (b) was demonstrated to be wrong by Tversky and Kahneman and others, decades ago).

Now to the pre-registration debate (which, for some reason, makes me disproportionately incandescent with rage), and Homo Scientificus. Most people accept that there are some bad practices being performed in science, including not publishing results, selectively publishing results, or p-hacking (not a new idea; Ronald Coase said in the 1960s: “If you torture the data enough, nature will always confess”). Agreeing that there are some bad practices is one thing; what to do about it is another. So this is where I want to introduce the idea of Homo Scientificus, which will make it much easier to formulate a response.
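
(In case p-hacking sounds abstract, here is a minimal simulation sketch – mine, not anything from the original debate – of why torturing the data works. All the numbers, 20 subjects per group, 10 outcome measures, 5000 runs, are purely illustrative assumptions.)

```python
# A toy sketch of p-hacking: both groups are drawn from the SAME distribution,
# so any "effect" is pure noise. The honest analyst tests one pre-specified
# outcome; the p-hacker tests ten outcomes and reports the best p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_per_group, n_outcomes, alpha = 5000, 20, 10, 0.05

honest_hits = 0   # false positives when testing one planned outcome
hacked_hits = 0   # false positives when keeping the smallest of ten p-values

for _ in range(n_sims):
    a = rng.standard_normal((n_per_group, n_outcomes))
    b = rng.standard_normal((n_per_group, n_outcomes))
    pvals = stats.ttest_ind(a, b, axis=0).pvalue
    honest_hits += pvals[0] < alpha
    hacked_hits += pvals.min() < alpha

print(f"honest false-positive rate: {honest_hits / n_sims:.2f}")   # ~0.05
print(f"hacked false-positive rate: {hacked_hits / n_sims:.2f}")   # ~0.40
```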

Homo Scientificus (HS) is a strange beast. HS’s goal is to further his/her career at all costs by acquiring by-lines in scientific publications (rather than to find out real things about the world). What’s strange is that HS has a very specific moral compass; HS will not commit outright fraud by making data up; that would be plain wrong! That said, HS will engage in a range of dodgy practices that he/she knows are a bit wrong, such as p-hacking, but draws the line at outright fraud.

Now, armed with the construct of HS and the assumption that all scientists are like HS, we can develop a robust response to all the dodgy practices: pre-registration. Pre-registering experiments will mean that HS can no longer p-hack, selectively publish data, and so on, and the truthfulness of all scientific output will improve, followed shortly by cold fusion, downloadable brains, and the AI singularity.

But maybe, just maybe (like Homo Economicus), HS is not a good construct. Maybe it only applies to a very small subset of scientists out there. Maybe lots of real scientists would be perfectly happy to commit fraud, but, given the way things are at the moment, a bit of p-hacking is a safer and easier approach to take. (I mean, fraud is difficult: what random distribution do you use to generate the data, how many made-up subjects, how large an effect size will be plausible?) Maybe making your results up would be even more attractive in a pre-registered world, since, presumably, there’ll be fewer implausible, unexpected studies being published to great acclaim in Current Biology, so a fraudulent study with a nice unexpected result might stand out even more.

At the other end of the moral spectrum, many scientists may be much more moral than HS, not wanting to publish dodgy results (lying awake at night worrying about whether what they write is true). Their p-hacking/selective publishing could be a question of education/ignorance, and requires some form of awareness campaign (like the one currently ongoing). Pre-registration for these scientists is using a sledge-hammer to crack a nut (or FMRI to ask a behavioural psychology question).

Many perfectly sensible, thoughtful, informed scientists think that pre-registration will come with costs (whether pre-registration becomes obligatory, or whether it just becomes the cultural norm and leads to down-weighting of non-pre-registered studies): costs in time, and costs in making much of science more conservative by rewarding conservative studies and down-weighting novel approaches to asking and answering questions. And for these costs not to outweigh the benefits, pre-registration requires the existence of a cryptozoological being such as Homo Scientificus.


Allow it

So, ping-pong.

There’s a forceful rebuttal to our previous rant here: http://blogs.discovermagazine.com/neuroskeptic/2013/04/29/preregistration-problem/#more-3894

A few final points and then a suggestion – something that I hope everyone can agree with.

First, I think Neuroskeptic and others are making a fundamental error:

“The trouble with the current system is that planned and exploratory analyses are confused. All registration does, at its core, is make the distinction clear”. But this is simply wrong. There are plenty of examples of planned science that will be classified as exploratory because it doesn’t involve pre-registration. The following frequently happens: I have a prior hypothesis about the brain (derived before I’ve looked at any data), and I test it using an existing dataset (e.g., the 1000 connectomes dataset). This is planned science, not exploratory – I repeat, it’s not exploratory. Fundamentally, this sort of experiment shouldn’t be penalised, shouldn’t have to declare that it’s exploratory, or suffer any problems from reviewers complaining that it’s post-hoc. It’s not post-hoc, not from the point of view of the hypothesis, which is what matters for dodgy science.

Second, Neuroskeptic side-steps my point about preregistration and unintended consequences, such as the potential for increasing fraud: “Neurorant’s third point is that like an AAA credit rating, preregistration could be exploited by fraudsters. Well, I’m sure that will happen – unfortunately, fraudsters exploit systems.”

Not all systems are equally exploitable. My point was that it’s possible that a system such as pre-registration becomes easier to exploit by fraudsters than the status quo. Just because fraud will happen under all systems doesn’t mean that some don’t encourage/allow more fraud than others.

Or, to put it another way: can you be sure that there won’t be unintended negative consequences from pre-registration? (There are many examples of institutions and structures where a regulation, even when implemented with the best of intentions, has had unexpected, bad side effects.)

Finally. We could argue back and forth about these points ad nauseam. It turns out, we have different feelings (gut instincts? hunches?) about what we think is limiting scientific progress. I think that these pre-registration proposals will carry a significant cost in time and resources and make it harder to do creative science that pushes the field forward, for possibly little gain. Plenty of other people, not least Neuroskeptic, think otherwise. However, without some experimental evidence one way or the other, it’s just an interventional hypothesis in need of testing in the complex maelstrom of the real world. No amount of argument can change that. There are plenty of examples of interventions based on good, well-argued hypotheses that haven’t survived testing – have a look at the epic fails in the Alzheimer’s literature, or any neurological or psychiatric disorder, for that matter.

So, with that in mind. Why don’t we do something that we all agree on? Why not, as a community, conduct a randomised trial to test whether pre-registration improves the quality of neuroscientific/psychological research? If there’s a nice juicy effect of pre-registration with no irritating side effects, then let’s go for it, with both barrels. Otherwise, not.
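
(To give a flavour of what such a trial might involve, here is a rough back-of-the-envelope sketch – my own, under made-up assumptions, not anything anyone has proposed in the debate. Suppose “quality” were measured as the replication rate of published findings, and suppose pre-registration lifted it from 40% to 55%; both numbers are hypothetical placeholders. The simulation below asks how many studies per arm you would need to detect that difference.)

```python
# Back-of-the-envelope power sketch for the proposed randomised trial of
# pre-registration (all numbers are hypothetical assumptions, not real data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
p_control, p_prereg, alpha, n_sims = 0.40, 0.55, 0.05, 2000

def power(n_per_arm):
    """Estimate, by simulation, the power to detect the assumed difference in
    replication rates between the two arms (two-proportion z-test)."""
    hits = 0
    for _ in range(n_sims):
        ctrl = rng.binomial(n_per_arm, p_control) / n_per_arm
        prereg = rng.binomial(n_per_arm, p_prereg) / n_per_arm
        pooled = (ctrl + prereg) / 2
        se = np.sqrt(2 * pooled * (1 - pooled) / n_per_arm)
        z = (prereg - ctrl) / se if se > 0 else 0.0
        p = 2 * stats.norm.sf(abs(z))           # two-sided p-value
        hits += p < alpha
    return hits / n_sims

for n in (50, 100, 200, 400):
    print(f"{n} studies per arm -> power = {power(n):.2f}")
```

The point is only that the trial itself is a perfectly tractable experiment; the particular numbers above mean nothing.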

Allow it.


Neuroscience Meridian

I want to talk about my loveable 2-year-old nephew – I’m going to use a pseudonym, so let’s call him Judge Holden (or the Judge for short). Now, the Judge is learning to use the toilet. When he successfully manages to use a potty (for its intended function), he looks tremendously pleased with himself – he has mastered something sophisticated and difficult that previously was beyond his ken. But the Judge does not just show his self-satisfaction when he succeeds, he also declares it to the whole world. He also thinks that everyone should emulate him, carrying a potty around the whole time, and wearing a nappy, just in case, to be on the safe side.

Now, turning to neuroscience and psychology, and at the risk (intention?) of insulting lots of my colleagues, I wonder if those scientists currently decrying statistics in neuroscience (e.g., the replication crisis, inflated false-positive rates, the need to pre-register scientific studies, etc.) are like my little nephew. They have learnt some basic statistics, and discovered some fairly obvious points about multiple comparisons and bias and stuff, and now they are terribly pleased with themselves and feel the need to shout about it from the rooftops. And just like my nephew wanting everyone to partake in his robust approaches to stop self-soiling, they think that new robust requirements are necessary to stop bias and nonsense across the board in science. And, because my nephew is hopelessly naïve (not his fault, he’s 2), he doesn’t see that making everyone wear nappies and use potties would be cumbersome. Similarly, what many of these scientists want (particularly requirements for pre-registration) would be cumbersome and may well have negative effects on science as a whole.

Right, enough about the Judge. Let’s talk about pre-registration of studies.

Pre-registration of studies is meant to stop unconstrained post-hoc analyses that end up finding differences that are actually just noise in the data. Who could argue against that, right? It’ll mean that everything is above board and transparent, right? Not necessarily. Who knows what the effects will be. They could be positive or (I reckon) they could be negative.

First, as other commentators have said, pre-registration will act to discourage exploratory science. Reviewers will be biased against analyses that were not in the original pre-registration. As others point out, this makes it harder to publish incidental findings (although that’s kind of the whole point). But more harmfully, it will make it harder to do interesting science. In FMRI, for example, the really interesting developments almost all use existing data. Take the work of FMRIB, in Oxford. Sure, they made lots of methodological developments, but their most important work is theoretical (I think): defining and understanding functional brain networks (e.g., Beckmann et al., 2005; Smith et al., 2009, etc.). Much of this work uses existing data – data that was not acquired for the purpose (e.g., generic resting-state data). There’s no way to pre-register this kind of research. The researchers had an idea about neural function that should be present in some data. There’s no way to correct the p-values for the different exploratory approaches taken with the data. Such an idea is mad. And if you look in the literature, there are loads of examples just like this. Pre-registration goes directly against this type of research.

Second, it will slow science down, for two reasons: i) there will end up being several rounds of review at the pre-registration phase, so registrations will have to be written carefully, slowly (if there isn’t any review of pre-registrations then the whole thing can’t work); ii) in fast-moving fields, there will be analysis and theoretical developments throughout the planning/data-acquisition phase, which means that if you’re taking pre-registration seriously, you have to re-register and collect more data, not just repurpose what you have. This may also involve people collecting even more boring FMRI datasets that all show the same thing (this already happens, by the way), which swallows precious research resources.

Third, I suspect that pre-registration will increase direct fraud in science. To understand this, consider the parable of the ratings agencies. These financial agencies rate different types of debt, for example that of different companies. The ratings were useful to investors in making decisions about financial products; however, after a while investors started relying on the rating agencies entirely. And, in the run-up to the 2008 financial crisis, this meant that getting a AAA rating was all you needed to do to sell your debt; therefore, tricking the rating agency into giving your dodgy debt a AAA rating made sense. I wonder if a similar thing could happen with pre-registration in neuroscience. By making the system seem more robust, you make it easier for people who are prepared to commit fraud to get their results published and accepted, and give their work an even bigger veneer of respectability.

I think that 1) and 2) above are serious problems and 3) could be. They will slow neuroscience’s onward march to glory if pre-registration is implemented widely. So, the costs are only worth bearing if the pros of pre-registration improve things substantially. Fundamentally, this means that pre-registration has to fix a problem which is actually holding up science’s onward march. And no one has actually shown that the problems pre-registration targets are holding science up; they’ve just speculated that they are.

I think the opposite. I don’t think dodgy results in some papers have that big an effect, or not enough to warrant drastic, restrictive countermeasures. In fact, it’s possible that having dodgy papers is good for science as a whole – maybe it’s like stochastic resonance in motor systems: a bit of noise may help the field. This is what I think: below there’s a picture of a hypothetical world where there are some dodgy papers out there in non-clinical fields. Here it is again in a world where those papers have been exorcised from the face of history. See the difference?
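
(For readers who haven’t met stochastic resonance: here is a toy sketch – mine, not the post’s, with made-up numbers – of the phenomenon the analogy leans on. A sine wave that never reaches a detector’s threshold is invisible on its own; add a moderate amount of noise and the threshold crossings start to track the signal; add too much and the signal drowns again.)

```python
# Toy stochastic-resonance demo (illustrative values only): a sub-threshold sine
# wave, a hard threshold detector, and varying amounts of added Gaussian noise.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 5000)
signal = 0.8 * np.sin(2 * np.pi * t)   # peak 0.8, below the detector threshold of 1.0
threshold = 1.0

def tracking(noise_sd):
    """Correlation between the hidden signal and the detector's binary output."""
    noisy = signal + rng.normal(0.0, noise_sd, size=t.size)
    detected = (noisy > threshold).astype(float)
    if detected.std() == 0:            # no threshold crossings at all
        return 0.0
    return np.corrcoef(signal, detected)[0, 1]

for sd in (0.0, 0.2, 0.5, 1.0, 3.0):
    print(f"noise sd = {sd:3.1f}: signal tracking = {tracking(sd):.2f}")
```

Whether the analogy holds for a literature with a few dodgy papers in it is, of course, exactly the sort of thing nobody has tested.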

So, what do I think? There are plenty of dodgy results out there. There always have been, for lots of reasons. And that’s why just because something’s published in a journal doesn’t mean you should trust it. Maintain your skepticism, especially for high-profile journals. If there are small samples or implausible results, then don’t believe the paper, don’t cite it, and criticize it – publicly, in blogs etc. – if you want. There are no easy fixes, and most importantly, you have to think for yourselves.


Give up? Do something else (bee keeping, perhaps)?


There’s a popular complaint out there that FMRI/MEG/EEG are correlational techniques, and so the patterns of activation we see may be by-products or epiphenomena. As such, FMRI/MEG/EEG may not tell us anything of interest about how the brain accomplishes tasks. Let’s take this position as a given: that there is a very real possibility that these signals are epiphenomena and so of limited use. (NB: the NeuroRant collective will point out how fundamentally stupid the epiphenomenon idea is at a later date.)

So what’s the solution? Give up? Do something else (bee keeping, perhaps)?

No. Never. Not this time. The solution is transcranial magnetic stimulation (TMS). TMS disrupts neural processes and so tells us whether a region is NECESSARY for a given task (unlike grubby old FMRI, which only shows that it’s correlated with the task). Now TMS fixes everything, doesn’t it? For example, we zap motor cortex and motor output is disrupted. Ipso facto, the motor cortical region is necessary for motor output (like ranting). Great. Solved. Now we can get somewhere, right? Cure some neurological and psychiatric conditions? Get some Pulitzer prize-winning popular science books written, right? Wrong.

NeuroRant says: ‘Big Whoop’. We have established that a certain bit of the brain, X, is ‘necessary’ for process Y, but what have we learnt? What does being necessary mean? Does it mean that region X contains the neural representations that allow process Y to work? Not necessarily. Does it mean X is an important way-point, necessary for relaying information from brain region A to brain region B, or down the spinal tract to make the body do something? Again, not necessarily. Does it mean that region X modulates what region A does, to allow A to do something important? Again, not necessarily.

In fact, TMS-ing region X and thereby disrupting process Y could mean very, very many things, some of which are deeply uninteresting. You could imagine a really long daisy chain of processes that all influence one another, so that disrupting region X disrupts process Z1, which disrupts process Z2, which disrupts process Z3, which disrupts process Z4, ad nauseam… finally disrupting process Y. In fact, this chain could go on forever, like might happen in a reductio ad absurdum. Oh, it is a reductio ad absurdum. Nice.

To illustrate, I might find that zapping my left medial-inferior-ventral-caudal-anterior-lateral-dorsal-parietal lobe increases the likelihood of me trolling strangers’ blog posts. This region is therefore necessary for stopping me irritating strangers. However, this may be deeply uninteresting. In actual fact (fact in this thought experiment, although unfortunately not in reality), this is because the left medial-inferior-ventral-caudal-anterior-lateral-dorsal-parietal lobe is actually the part of the brain that allows bee-keeping thought processes to flourish. As everyone knows, bee-keeping, and thinking about such matters, is an intensely calming pursuit. If you stop this relaxing brain process then all sorts of chaos start churning around the brain. One thing leads to another, and it’s stranger-trolling time. The ‘necessary’ bit of TMS hasn’t really bought us anything in terms of understanding how the brain works.

NeuroRant does not want to dismiss TMS as a useful technique. In fact, some of NeuroRant’s best friends are TMS-ers. However, NeuroRant isn’t convinced that TMS adds something, IN PRINCIPLE, very different from FMRI/MEG/EEG. It’s not better. It’s just a different approach with a different emphasis. That is, TMS could, in principle, be neurobollocks in the same way that FMRI/EEG/MEG could be. Although, completely, blindingly obviously, BOTH techniques aren’t neurobollocks.

The Shittest Supervisor of Them All


Imagine you are young, full of life, eager to change the world, and bursting with enthusiasm.  You are a PhD student!  You have embarked on a scientific career, where your brain power alone will literally change the world.  You will discover things hitherto unknown.  You are a God amongst women! Your balls are massive!

Your quest for success will be perilous though.  You will encounter many trials and tribulations.  You will face hostility from editors, reviewers, and colleagues, and you will be embroiled in conflicts at the upper echelons of academia.  Carnivorous cenozoic dinosaurs will try to squish the ambitions of young whippersnappers like you, and you will surely face defeat if you are not well equipped.

But fear not!  You have been entrusted with a Guardian.  You have been granted a Supervisor!  This Angel of Light will train you in the Art of Science.  They will guide you in your first forays, and help you find your feet.  They will train you and shape your overwhelming excitement into deadly empirical weapons.  In a few years you will master all the necessary skills for a career of discovery.

So it is written, and so it should be.  Except that the Gift of Supervision is not bestowed on all.  Some of us are hindered even at the outset, having been betrothed to The Shittest Supervisor of Them All (TSSoTA).

TSSoTA’s crimes won’t be listed here.  There aren’t enough bits on the internet.  But their single greatest sin should be aired as a warning to all.

All that a Supervisor needs to do to avoid becoming TSSoTA is to keep their students’ flame alight.  All that they need to do is prevent their students’ interest and enthusiasm for science from expiring.  This may be hard with all the setbacks a student will naturally face, but if you, as a supervisor, become the student’s greatest obstacle, you have failed.  You are a shit supervisor.

TSSoTAs snuff their students out.  They choke them with narrow-minded pettifogging, turning minor details into insurmountable hurdles.  They crush them with their abhorrent lack of empathy.  They single-handedly prevent their students from flourishing.  They snip the roots of their students’ passion and drain their thirst for growth, making them feel like they will never sprout new shoots.  TSSoTAs actively block every sign of initiative and every spark of genius with their corpulent mediocrity.  They will purposely discourage a student from aiming for higher goals, from setting the bar high, from trying to reach the unreachable.  They will drag their students down into the fetid swamps of obscurity that they call home, and try to keep them there so that the misery might be shared and their burden eased.

Words can’t express the indignation I feel when imbeciles like these are given responsibility over a student’s life.  It is an honour to be awarded the stewardship of a bright and inquisitive mind.  The damage done by a TSSoTA will far, far outlast the time they spend crippling their student.  Shit supervisors, you are all bastards, and you should be treated as such.  You should be stripped of your standing and shelved.

Rant over.