Of three metamorphoses of the scientist: how the scientist becomes a camel; and the camel, a lion; and the lion, finally, a child.
There is much that is difficult for the scientist, the inquisitive scientist that would discover much.
“What is to be discovered?” asks the scientist that would discover much, and kneels down like a camel wanting to be well loaded. “What do I need to learn, O scientific heroes,” asks the scientist that would discover much, “that I can become an expert and exult in my science? Is it feeding on the acorns and grass of knowledge [nb talks and reading papers]? Or is it this: stepping into filthy waters when they are the waters of truth? [nb doing grunt work in the lab]”. These most difficult things the scientist takes upon itself: like the camel that, burdened, speeds into the library or laboratory.
In the loneliest laboratory, however, the second metamorphosis occurs: here the scientist becomes a lion who would conquer his freedom and be principal investigator in his own laboratory. He wants to fight; and for ultimate victory he wants to fight with the great dragon.
Who is the great dragon whom the scientist will no longer call lord and god? “In this paper, they discovered” is the name of the great dragon. But the lion scientist says, “I discover.” “They discovered” lies in his way, sparkling like gold, an animal covered with scales; and on every scale shines a golden “I discover.”
Discoveries, years or decades old, shine on these scales; and thus speaks the mightiest of all dragons: “All important results have long been known and discovered. Verily, there shall be no more ‘I discover.’” Thus speaks the dragon.
My brothers, why is there a need in the scientist for the lion? Why is not the beast of burden, which learns and studies in the library and laboratory, enough?
He who once loved “They discovered” as most sacred: now he must find illusion and caprice even in the most sacred, that freedom from his love may become his prey: the lion is needed for such prey [nb NeuroRant doesn’t understand this bit, but liked the sound of it, so here it is. NeuroRant hopes it means something like: "the lion phase is necessary to conduct your own research, but to do so the lion is puffed up and over-confident and predatory and probably takes credit for other people's work"].
But say, my brothers, what can the child do that even the lion could not do? Why must the preying lion still become a child? The child is innocence and forgetting, a new beginning, a game, a self-propelled wheel, a first movement. For the game of creation, my brothers, the scientist now wills his own will, creates his own new theories, avoids dogma, says something new, and he who had been an irrelevant also-ran to the world of science, doing similar work to lots of other labs, now conquers the scientific world.
Of three metamorphoses of the scientist I have told you: how the scientist became a camel; and the camel, a lion; and the lion, finally, a child.
Thus spoke NeuroRant.
Back in the 17th century, Isaac Newton was walking resplendently (as was the wont of the times) in his backyard, when he observed an apple falling from a tree. He asked himself, “why should that apple always descend perpendicularly to the ground? Why should it not go sideways, or upwards? but constantly to the earths centre?”. The conclusion he came to was that all matter must draw matter to itself, and since “the sum of the drawing power in the matter of the earth must be in the earths centre, not in any side of the earth. therefore dos this apple fall perpendicularly, or toward the centre.”
And with that, Newton had solved the world. He expressed a set of physical laws that could explain the movements of the objects on earth, and showed that those same laws governed the movement of the heavenly bodies we observe in the night sky. After centuries of darkness, finally everything made sense. Physicists rejoiced!
A couple of centuries later, Albert Einstein was walking industriously (as was the wont of the times) at the speed of light when he realised that there were special situations under which Newton’s laws did not hold true (we’re pretty sure that’s how it happened). At extremely high mass and energy, Newtonian mechanics broke down and could not explain, for example, why light would bend around the sun if you were at the north pole and there was an eclipse (we’re pretty sure that’s what the special situation was). The framework of Relativity was created to account for these special situations, and now, surely, everything could be made to make sense. Physicists rejoiced!
At around the same time, Max Planck (the same Planck who famously advocated that academocide was the easiest way for new scientific truths to flourish, inspiring us to write this neuroscience death-wish-list) was teleporting himself indignantly when he realised that the particles that made up the matter and light around him were quantized into little packets, or quanta. This meant that Relativity’s predictions, which were based on a smooth and continuous space-time being bent by massive objects like the sun, could no longer be true because space wasn’t actually smooth, but made up of fuzzy particles. And so, with the stake firmly in Einstein’s heart, Planck cocked his head back and laughed and gave birth to the field of quantum mechanics, which made and still makes extraordinarily accurate predictions about the world around us (we won’t go into all the details because as Feynman once almost pointed out: “If you think you don’t understand quantum mechanics, you definitely don’t understand quantum mechanics.”). All of this meant that Relativity was completely wrong, and that QM was the real final ultimate solution. And Physicists did rejoice!
So what have we learned? Well, hopefully not a lot, except that NeuroRant is not a physicist. But going back to neuroscience, should we be confident that the gazillions of dollars being poured into mapping out every synapse in a brain will eventually solve the brain? Will this research yield a Theory of Brain (TOB), which will explain why the brain is the way it is, and how it allows thoughts, emotions, memories and consciousness to come to be?
No, probably not. We cannot be certain that a really detailed understanding of the phenomena at the cellular level, and any grand theories originating from this, will translate to the levels above and below. Knowing every connection in the brain may not tell us much at all about how activity in millions and millions of neurons working together can allow one to behave pompously, for example. It’s possible that different frameworks are needed to explain each special situation. Just like with the physics.
But then again, it might. In which case we will rejoice along with the other Neuroscientists.
Sorry. The NeuroRant Collective’s collective frontal lobes kicked in.
Our science advancement deathlist will have to remain unwritten, for now.
(This is a link from a tweet, which we would recursively link back to if we knew how this WordPress thing works.)
PS For the picky out there: we are aware that Max Planck didn’t say exactly “science advances one funeral at a time”, but he said something similar that was too long and inconvenient for Twitter (see http://en.wikiquote.org/wiki/Max_Planck)
Vast swathes of the unwashed public think some very strange things, like: that the power of positive thinking alone can magically cure diseases, or that the mind and brain are completely different things.
Whatever. Let’s ignore them. They’re never going to listen to us anyway, and we have nothing but utter vomit-inducing contempt for them.
But what about more refined, science-friendly plebs?
Well, they may know little bits and pieces of neuroscience: they might know that serotonin makes you happy, and that taking Prozac or eating chocolate increases serotonin and so makes you happy.
All horribly simplistic to the point of being completely wrong.
And even the most informed lay-people: well, they’re looking for simple (*cough* wrong) stories too; that single genes explain complex multidimensional behavioural traits; that specific neurotransmitters ‘do’ a single specific function; that brain regions can be neatly mapped onto single functions.
All a bit silly when you think about it, but very strongly entrenched in what advanced lay-people think they know.
So public engagement is needed, isn’t it? As much as possible, all the time?
We have to try to communicate with other neuroscientists (which is hard enough) because otherwise the university/grant-bodies won’t give us money to buy things like food and shelter.
But, reaching beyond that, to any of the public? Well, it’s going to have to involve simplifying the science to the point where it becomes corrupted. Since we can’t reach inside the public’s heads to scrape them back to a state of neuroscience tabula rasa, we will have to engage with their wrong, well dug-in prior beliefs. And the public don’t have the patience to let us do that in a way that would let them learn anything meaningful.
So, instead, most scientists and science journalists simplify massively, horribly: all the way down, down, down to the level where they can engage with the public’s prior simple beliefs. What this means is that public engagers end up perpetuating massively simplistic ideas. In the end, this public engagement may, therefore, make the public less informed, and make doing science harder, than if we scientists just shut up, and left the public alone.
Either way, why bother? NeuroRant doesn’t (and that’s not because the media people ignore us completely, honest).
Anyway, here’s Kurt Vonnegut in Cat’s Cradle:
‘The trouble with the world was,’ she continued hesitatingly, ‘that people were still superstitious instead of scientific. He said if everybody would study science more, there wouldn’t be all the trouble there was.’
‘He said science was going to discover the basic secret of life some day,’ the bartender put in.
He scratched his head and frowned. ‘Didn’t I read in the paper the other day where they’d finally found out what it was?’
‘I missed that,’ I murmured.
‘I saw that,’ said Sandra. ‘About two days ago.’
‘That’s right,’ said the bartender.
‘What is the secret of life?’ I asked.
‘I forget,’ said Sandra.
‘Protein,’ the bartender declared. ‘They found out something about protein.’
‘Yeah,’ said Sandra. ‘That’s it.’
So it goes.
Behold, the Simple Tree. Whilst all of the universe tends towards entropy, the unstructured vacuousness devoid of pattern and information, the humble tree arises as an Engine of Negentropy.
From thin air it plucks carbons and constructs its Edifice to Order. And from the fruits of its chlorophylled tendrils, it begets Oxygen. And Oxygen begets the lifeforms, and all the Structure and Order their brief flicker of existence entails.
All thanks to the humble Tree.
So, in the pit of hell where the NeuroRant idles the days away, it is sacking season. Senior academics have got their hunting licenses all in order (approved by human resources no less), and are looking for prey.
So who do they go for? Well, those who have failed to publish enough and win big grants, obviously. Why them? Well because, obviously, they are dead wood, who don’t have what it takes to be superstar, world-beating academics. They are lazy. They are stupid. If only they could be quietly pushed away onto a slag heap of poverty and unhappiness, then they could be replaced by shiny new academics who are energetic and sparkling, and will usher in a new era of plenty. This time everything will be different.
Everyone agrees with this approach. Those who don’t agree with this approach are soft, or don’t get enough grants and papers and so are scared. It’s academic Darwinism. It’s great.
Unfortunately it’s wrong. There are lots of reasons why it’s wrong, but I’m going to focus on one big one: senior academics have no understanding of randomness. They see only signal and no noise. Simply put, they don’t understand regression to the mean.
There is random variability everywhere, especially in getting papers and grants accepted (due to the randomness of reviewers and editors, being beaten to publication by another group, etc.). Taking a step back, there is variability in whether experiments give results that make sense or match your predictions, randomness in whether a new technique works or an MRI machine breaks down; all of which makes getting grants and papers vary randomly. I don’t mean it’s all random, but a big chunk of it is.
This means that even if all academics are equally shiny and energetic, some will randomly fail to get grants and papers, and so be in the dead wood category. The same thing happens with football managers (read about it here).
So, many academics are sacked who are as good (in terms of future grants and papers) as the people who will replace them. And as with football managers, even if we ignore the fact that the punishment has a large arbitrary, unfair aspect to it, sacking people comes with costs. In academia, there are all those start-up costs for new staff, all that disruption to research and teaching programmes, fear and stress for the non-sacked academics, arguments about lab space, not to mention the cost in human resources time for delicately sacking someone (poor human resources). Therefore, if people are sacked because of regression-to-the-mean effects, it’s a negative-sum game for the university. It’s just stupid.
But what I find interesting is that it’s not just senior academics who don’t understand regression to the mean; it’s also the other academics, the ones who aren’t being sacked, the successful academics. Those who are getting grants and papers with apparent ease. In fact, they are enjoying the other side of regression to the mean. Their success is also, in part, the result of random variability, and it probably won’t keep going. But those academics also ignore this randomness, and put it all down to their own brilliance, while agreeing that the dead wood needs to go.
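The whole regression-to-the-mean argument fits in a few lines of simulation. Here is a minimal sketch (all numbers made up for illustration): every academic has exactly the same underlying ability, yearly output is pure noise, the committee sacks the bottom decile and crowns the top decile, and we then look at year two.

```python
# Monte Carlo sketch (hypothetical numbers): every academic has the SAME
# underlying ability; yearly "output" (papers + grants) is pure noise.
import random

random.seed(42)
n_academics = 1000

# Year 1: identical academics, noisy output.
year1 = [random.gauss(0, 1) for _ in range(n_academics)]

# The sacking committee labels the bottom 10% "dead wood"
# and the top 10% "superstars".
ranked = sorted(year1)
cutoff_low = ranked[n_academics // 10]
cutoff_high = ranked[-n_academics // 10]
dead_wood = [i for i, x in enumerate(year1) if x <= cutoff_low]
superstars = [i for i, x in enumerate(year1) if x >= cutoff_high]

# Year 2: output is drawn afresh -- ability hasn't changed, because
# there was never any difference in ability to begin with.
year2 = [random.gauss(0, 1) for _ in range(n_academics)]

mean_dead_wood = sum(year2[i] for i in dead_wood) / len(dead_wood)
mean_superstars = sum(year2[i] for i in superstars) / len(superstars)

print(f"'dead wood' output next year:  {mean_dead_wood:+.2f}")
print(f"'superstars' output next year: {mean_superstars:+.2f}")
# Both group means land near the population mean of 0: last year's
# failures and stars alike regress to the mean.
```

Both groups come out roughly average in year two, so sacking the bottom decile (or fawning over the top one) buys you nothing here except the costs listed above. Real academia is of course not pure noise, but to the extent that output is noisy, this effect is mixed into every ranking.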
And so the system trundles on. All these pompous academics beating some of their members up and falling over each other to fawn over others, unaware that this whole pecking order is fundamentally flawed. Like a bunch of stupid, fighting penguins or something.
There’s got to be a better way. Maybe we could try to be nice and support each other for a change.
Disclaimer: NeuroRant is judged successful by its institution’s standards, if not its own.
The brain is complicated. Really complicated. You just won’t believe how vastly, hugely, mindbogglingly complicated it is. I mean, you may think that your laptop is a complex piece of engineering, but that’s just peanuts compared to the brain.
I mean, there are loads of different types of neurons, synapses, glial cells, neurotransmitters, receptors. Genetic, epigenetic influences. There’s even the cerebellum, whatever that is. These all interact at a range of spatial and temporal scales, from the microscopic to the macroscopic. Loops and networks. Top-down, bottom-up. And all of this complexity comes together to allow us to do the glorious, epic meaningful things we do: walk the dog, stare at passing traffic, allow a solitary tear to roll down our cheek, maybe even write a blog.
And in response to this complexity, what do we do? Well, we could give up and do something else (bee-keeping perhaps). But instead, we try to make sense of it all, and we do that by making it simpler. We look for underlying fundamental principles, capable of explaining big chunks of this complexity. And everyone agrees that this approach is a good, and a noble and a wholesome and a sensible thing to do.
As a result, the world is awash with fundamental principles. The Bayesian brain, the small-world brain, the homeostatic brain, the efficient brain, the critical brain, the computational brain, the dynamical brain, the hierarchical brain, the sensorimotor brain, the topological brain, free energy, fast and slow thinking (OK, we can’t think of any more, we should probably read more widely).
The NeuroRant wishes to point out, however, that while these different approaches have plenty of merit, there’s only one real fundamental principle out there: the brain is a hack. This fundamental principle trumps all the others. So we’ll say it again: the brain is a hack. We don’t mean that the brain is a hack as in teenagers in Guy Fawkes masks and irritating acronyms. We mean a hack as in “an inelegant but effective solution to a computing problem” (to quote Wikipedia). No doubt, evolutionary processes followed many principles in shaping the brain, but in the end, the human brain is a jury-rigged hatchet job, cobbled together out of bits of string and sticky tape. Therefore, all of these other (inferior) fundamental principles are going to turn out to be inadequate when enough data is put together.
So kids, be warned and stay frosty when you read those all-encompassing review articles; you know the ones we mean, the articles that seem to really tie all of the data together. A healthy dose of skepticism, accompanied by pain and confusion, is the only way to go.