Tuesday 13 August 2013

In the News: Death wave

Near-death Experience
(Wiki Commons)
Can neuroscience shed light on one of life's biggest mysteries - death? In a paper just published in PNAS, researchers describe a surge of brain activity just moments before death. This raises the fascinating possibility that they have identified the neural basis for near death experiences.

First, to put this research into context, death-related brain activity was examined in rats, not humans. For obvious reasons, it is easier to study the death process in animals than in humans. In this study, nine rats were implanted with electrodes in various brain regions, anaesthetised, then 'euthanized' (i.e., killed). The exact moment of death was identified as the last regular heartbeat (clinical death). The electroencephalogram (EEG) was recorded during the normal waking phase, under anaesthesia and after cardiac arrest (i.e., after death) from right and left frontal (RF/LF), parietal (RP/LP) and occipital (RO/LO) cortex (see Figure below). Data shown in panel A range from about 1hr before death to 30mins afterwards. At this coarse scale you can see some patterns in the waking data that generally reflect high-frequency brain activity (gamma band, >40Hz). During anaesthesia, activity becomes synchronised at lower frequency bands (especially the delta band: 0.1–5 Hz), but everything seems to flatline after cardiac arrest. However, if we zoom in on the moments just after death (Panels B and C), we can see that the death process actually involves a sequence of structured stages, including a surge of high-frequency brain activity that is normally associated with wakefulness.
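For the technically minded, the band-limited power at the heart of this kind of analysis is straightforward to compute. Here is a minimal sketch in Python using synthetic data; the sampling rate and exact band edges are my own placeholder assumptions, and only the band definitions quoted above come from the paper:

```python
# Sketch: estimating delta- and gamma-band power in an EEG trace.
# Synthetic stand-in data; 'fs' and the band edges are assumptions.
import numpy as np
from scipy.signal import welch

fs = 1000                                # Hz, assumed sampling rate
eeg = np.random.randn(60 * fs)           # placeholder for one minute of EEG

freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)

def band_power(lo, hi):
    """Mean spectral power between lo and hi Hz."""
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

delta = band_power(0.1, 5)    # the band that dominates under anaesthesia
gamma = band_power(40, 100)   # the band that surges transiently after arrest
print(f"delta/gamma power ratio: {delta / gamma:.2f}")
```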


Adapted from Fig 1 of Borjigin et al. (2013)

In the figure above, Panel B shows brain activity zoomed in on the 30min after death, and Panel C provides an even closer view, with activity from each brain area overlaid in a different colour. The authors distinguish four cardiac arrest stages (CAS). CAS1 reflects the time between the last regular heartbeat and the loss of the oxygenated blood pulse (mean duration ~4 seconds). The next stage, CAS2 (~6 seconds), ended with a burst of delta waves (the so-called 'delta blip', ~1.7 seconds), and CAS3 (~20 seconds) continued until there was no further evidence of meaningful brain activity, marking the onset of CAS4 (>30mins). These stages reflect an organised series of brain states. First, activity during CAS1 transitions from the anaesthetised state with an increase in high-frequency activity (~130Hz) across all brain areas. Next, activity settles into a period of low-frequency brain waves during CAS2. Perhaps most surprisingly, during CAS3 recordings were dominated by mid-range gamma activity (brain waves ~35-50Hz). In further analyses, the authors demonstrate that this post-mortem brain activity is also highly coordinated across brain areas and different frequency bands. These are the hallmarks of high-level cognitive activity. In sum, these data suggest that shortly after clinical death, the brain enters a brief state of heightened activity that is normally associated with wakeful consciousness.

Heightened awareness just after death  

Adapted from Fig 2 of Borjigin et al. (2013)
The authors even suggest that the level of activity observed during CAS3 may not only resemble the waking state, but might even reflect a heightened state of conscious awareness similar to the “highly lucid and realer-than-real mental experiences reported by near-death survivors”. This is based on the observation that there is more evidence for consciousness-related activity during this final phase of death than during normal wakeful consciousness. This claim, however, depends critically on their quantification of 'consciousness'. To date, there is no simple index of 'consciousness' that can be reliably measured to infer the true state of awareness. And even if we could derive such a consciousness metric in humans (see here), generalising it to animals could only ever be speculative. Indeed, research in animals can only ever hint at human experience, including near-death experiences.

Nevertheless, as the authors note, this research certainly demonstrates that activity in the dying brain is consistent with active cognitive processing, making a neural explanation for these experiences at least plausible. They have identified the right kind of brain activity for a neural account of near-death experiences, yet it remains to be verified whether these signatures actually relate directly to the subjective experience.

Future directions: The obvious next step is to test whether similar patterns of brain activity are observed in humans after clinical death. Next, it will be important to show that such activity is strongly coupled to near-death experience. For example, does the presence or absence of such activity predict whether or not the person will report a near-death experience? This second step is obviously fraught with technical and ethical challenges (think: Flatliners), but would provide good evidence to link the neural phenomena to the phenomenal experience.

Key Reference:

Borjigin, Lee, Liu, Pal, Huff, Klarr, Sloboda, Hernandez, Wang & Mashour (2013) Surge of neurophysiological coherence and connectivity in the dying brain. PNAS

Related references:

Tononi G (2012) Integrated information theory of consciousness: An updated account. Arch Ital Biol 150(2-3):56–90.

Auyong DB, et al. (2010) Processed electroencephalogram during donation after cardiac death. Anesth Analg 110(5):1428–1432.

Related blogs and news articles:

BBC News
Headquarters Hosted by the Guardian
National Geographic
The Independent

Thursday 27 June 2013

Book Review: Hallucinations, by Oliver Sacks

George Wallis
This is a guest post by George Wallis, one of my PhD students.  We recently attended a seminar in which Oliver Sacks discussed his recent book ‘Hallucinations’.  In this post George discusses the ways in which hallucinations provide neuroscientists with clues about the hidden workings of the brain. This article is also cross-posted at Brain Metrics, a Scitable Blog hosted by Nature Education. 

Oliver Sacks is a neurologist and a writer, and close to a household name. For many readers, he will be a familiar figure. Since 1970 he has been writing humane accounts of the ways in which different forms of neurological illness or damage affect the lives of his patients – or occasionally Sacks himself. Amongst his book-length works are The Man Who Mistook His Wife For a Hat, and Awakenings, an account of the almost miraculous effect of the drug L-DOPA on sleeping sickness patients at the Beth Abraham hospital, which was adapted into a feature film starring Robin Williams. Mark and I were lucky to be invited to a small discussion session with Dr Sacks at Warwick University, where he is a visiting professor. The topic of discussion was his most recent book, Hallucinations.


Hallucinations is known for its detailed account of Sacks’ own hallucinatory experiences during his remarkably excessive drug-taking phase in the 1960s. Before tight drug laws, and with access to the most potent compounds to be found in a doctor’s medicine cabinet, Sacks experimented with a wide range of substances – often in huge doses. He describes the mind-altering experiences he had with classic psychedelics, the disturbingly real-seeming hallucinations experienced whilst on Artane, frightening episodes of psychotic delirium following withdrawal from some lesser-known toxic agents, and the time-eating stupor of opiates. Most fascinating for Sacks fans is his description of the amphetamine-fuelled epiphany that crystallised his desire to write about the neurology and the experiences of his patients.

Beyond the spectacle of these autobiographical chapters, Sacks’ book is a catalogue of the many varieties of hallucination.  For students of neuroscience, this makes for engrossing reading.  Hallucinations can tell us a lot about the brain.

What are hallucinations? Sacks defines them as ‘percepts arising in the absence of any external reality – seeing things or hearing things that are not there’. A few hundred years ago hallucinations might have been ascribed to the influence of gods or ghosts. Nowadays, neuroscientists and psychologists see hallucinations as the result of abnormal activity in the brain. Crucially, neuroscientists consider all of the things we experience to result from models the brain builds. When you look at something in the outside world, your brain doesn’t magically ‘reach out and touch’ the object so you can perceive it (though some philosophers might disagree with neuroscientists on this point!). Instead, the brain builds a model of what is probably out there in the world, doing its best to match the model to the sensory input we receive at our sense organs (for example, in the retina of the eye). The things you perceive reflect the model the brain builds – a model built out of the buzzing activity of billions of neurons in your brain. It’s basically intelligent guesswork, but mostly our brains do pretty well, and we have the impression of a stable world. Importantly, we tend to agree with other people about what’s out there - which gives an indication that our brains are getting things right! However, if the activity of the brain is in some way altered by a neurological disturbance of one form or another (illness, drugs, damage from a stroke or injury), the model can diverge from its normal faithful representation of the outside world, and we can have hallucinatory perceptions.

Depending on the type of neural disturbance, these hallucinations can take many different forms.  These are all interesting to neuroscientists, as they all have the potential to tell us something about the workings of the brain.

For example, there is Charles Bonnet Syndrome, which Sacks describes in his opening chapter.  The brain’s intelligent guesswork about the outside world is normally informed by a stream of activity from the sense organs. What happens if you cut off that stream of incoming information?  In some cases, the brain keeps on ‘making up a story’ – except now, it has no information to go on, so the percepts that are produced bear no relation to reality.  For example, diseases of the eye can deprive someone of the visual input their brain has been used to receiving.  If part of the retina is damaged, this can leave a blind patch called a ‘scotoma’, and people with a scotoma can sometimes have vivid hallucinations in just their blind patch. 

Charles Bonnet type hallucinations can also occur if someone goes completely blind.  These hallucinations can be highly ornate – for example little ‘lilliputian’ people are sometimes seen, often in very colorful and ornate clothing.  Some people describe these hallucinations as being like a movie.  For most people, however, Charles Bonnet syndrome involves simpler hallucinations – shapes, colours and patterns.  The patterns in the scotoma can ‘scintillate’, giving the impression of constant movement.

Scintillating scotoma patterns


Just because the retina is damaged doesn’t imply that the visual parts of the brain are damaged too – this isn’t necessary for hallucination.  Charles Bonnet syndrome reflects the normal activity of a brain forced to guess in the absence of information – and people with Charles Bonnet are often well aware that their hallucinations aren’t real, even if they seem very solid and detailed.  Interestingly, some people with disrupted sensory input experience hallucinations and some do not – it isn’t clear why.

Does this mean that you could hallucinate too if you were deprived of sensory input? Yes – though as with Charles Bonnet syndrome, it seems to vary from person to person. There have been various experiments with sensory deprivation. A recent example was published in the Journal of Neuro-Ophthalmology in 2004, by Lotfi Merabet, Alvaro Pascual-Leone, and their collaborators (Merabet et al., 2004). They simply blindfolded thirteen healthy volunteers for four days – otherwise, the volunteers were able to walk inside and outside, talk to others, and listen to the TV. 10 out of 13 people reported hallucinations. Just as in Charles Bonnet syndrome, these were sometimes simple (flashing lights, geometric patterns) and sometimes complex (landscapes, people, buildings, sunsets – often seeming extremely vivid; more vivid than normal visual perceptions).

Hallucinations resulting from sensory deprivation are evidence for the neuroscientists’ view of perception – that the brain generates a model and fits it to the world.  Sometimes the brain tissue responsible for generating that model is disturbed in a way that alters the things people perceive.  For example, in epilepsy, the normally controlled activity of the brain briefly goes haywire.  Out of control neuronal firing emerges, and can spread over the brain surface.  Another form of disturbed brain activity is experienced by many people in the form of migraine.  Migraines are sometimes accompanied by a visual hallucination superimposed on the real visual scene – often termed a ‘migraine aura’.

A migraine sufferer’s recreation of a ‘migraine aura’

In migraine or epilepsy, people sometimes perceive geometric patterns – for example chequer-boards, zig-zag lines, or concentric rings. These geometric hallucinations are so consistent across people that they were catalogued in the 1920s by the psychologist Heinrich Klüver. He divided them into four types: tunnels and funnels, spirals, lattices, and cobwebs.


Klüver’s four categories of hallucination pattern.
Bressloff et al., 2002; used with permission.



These patterned hallucinations are interesting because they seem to reflect the structure of the parts of the brain responsible for early visual processing - parts of the brain that are quite organized in their layout. In the 1970s, the mathematicians Jack Cowan and Bard Ermentrout built models of aberrant activity patterns, given what they knew about the structure of the visual cortex. These models have been extended by the Oxford mathematician Paul Bressloff (Bressloff, Cowan, Golubitsky, Thomas, & Wiener, 2002). By modelling unusual activity in the visual cortex, and then also taking account of the way the neurons in our visual cortex map onto visual space, these researchers are able to predict the kinds of hallucinatory patterns catalogued by Klüver.
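To get an intuition for why stripes of cortical activity should look like tunnels or spirals, here is a toy sketch (emphatically not the Bressloff model itself, which involves genuine pattern-formation dynamics). Plane waves of activity across the cortical sheet, pushed back through an approximate log-polar retino-cortical map, come out as rings, fans and spirals in the visual field. All parameters are arbitrary:

```python
# Toy illustration of the retino-cortical map: cortical coordinates are
# roughly (log r, theta) of the visual field, so periodic 'stripes' of
# cortical activity map back onto tunnel, fan and spiral percepts.
import numpy as np
import matplotlib.pyplot as plt

theta, r = np.meshgrid(np.linspace(-np.pi, np.pi, 400),
                       np.linspace(0.05, 1.0, 400))
cx, cy = np.log(r), theta          # approximate retina -> V1 mapping

fig, axes = plt.subplots(1, 3, subplot_kw={'projection': 'polar'})
for ax, angle, name in zip(axes, (0.0, np.pi / 2, np.pi / 4),
                           ('tunnel', 'fan', 'spiral')):
    # a plane wave of activity across the cortical sheet at this orientation
    stripes = np.cos(8 * (np.cos(angle) * cx + np.sin(angle) * cy))
    ax.pcolormesh(theta, r, stripes, shading='auto')
    ax.set_title(name)
    ax.set_xticks([]); ax.set_yticks([])
plt.show()
```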

A mathematical simulation of a hallucination pattern


Whilst migraines and epilepsy are certainly not pleasant, the actual hallucinations experienced are rarely frightening.  The same is true for Charles Bonnet Syndrome.  People experiencing these hallucinations are usually able to tell them apart from reality, though sometimes only once they have become used to them and know what to expect!  Of course, this isn’t true of all hallucinations.  Sacks also discusses the more terrifying types of hallucinations, for example, those of psychosis, or of the ‘night terror’ associated with sleep paralysis – in which people awake unable to move, with the feeling that they are trapped beneath a horrible intruder who is trying to suffocate them (the ‘night mare’ or ‘night hag’).

Nicolai Abildgaard’s ‘Nightmare’



What do these more frightening hallucinations - in particular, the hallucinations associated with psychosis (in which people often also experience delusions) - say about the brain?  This is a fascinating but difficult area, as yet poorly understood.  Here it becomes more difficult to draw the line between perceptions and beliefs, and emotional and motivational factors seem to be more involved.  Researchers are currently trying to understand how hallucinations in diseases like schizophrenia are related to the other symptoms of the disorder, and how they may be similar or different to the kind of hallucinations produced by sensory deprivation or epileptic activity patterns in the brain.

Finally, an interesting speculation that may haunt you as you read Sacks’ book is that hallucinatory experiences – which, as Sacks points out, are much more common than one might think – could be responsible for the religious, mystical, and paranormal parts of our culture. For example, Sacks points out that Joan of Arc’s visions are classic manifestations of epileptic activity in the temporal lobes. He speculates that these seizure-related visions were the reason an uneducated farmer’s daughter became a religious leader who rallied thousands of followers.

Sacks’ book is an engrossing survey of hallucinatory experiences of all types. In their variety (far more extensive than described in this blog post) hallucinations provide many insights into the way our ordinary perception works. Reading Sacks’ book is also good preparation for the possibility – not too slim, as Sacks points out – that you will one day have a hallucinatory experience of one form or another (if you haven’t already!).

References

Bressloff, P. C., Cowan, J. D., Golubitsky, M., Thomas, P. J., & Wiener, M. C. (2002). What geometric visual hallucinations tell us about the visual cortex. Neural Computation, 14(3), 473–491.

Merabet, L. B., Maguire, D., Warde, A., Alterescu, K., Stickgold, R., & Pascual-Leone, A. (2004). Visual Hallucinations During Prolonged Blindfolding in Sighted Subjects. Journal of Neuro-Ophthalmology, 24(2), 109.

All images Creative Commons except the Klüver patterns, from Bressloff et al., used with permission.


Monday 24 June 2013

Research Briefing: Dynamic population coding for flexible cognition


Dynamic population coding in prefrontal cortex
Our environment is in constant flux. At any given moment there could be a shift in scenario that demands an equally rapid shift in how we interpret the world around us. For example, the meaning of a simple traffic light critically depends on whether you are driving to work or travelling on foot. Our brains must constantly adapt to accommodate an enormous range of such possible scenarios - in this study, we applied new analysis tools to explore how patterns of brain activity change for different task contexts, allowing for flexible cognitive processing (in Stokes et al., 2013, Neuron; see also Comment by Miller and Fusi in the same issue).

Prefrontal Cortex

Adapted from Fig 1

We focused our investigation on an area in the frontal lobe known as lateral prefrontal cortex. This brain area has long been implicated in flexible cognitive processing. Damage to prefrontal cortex is classically associated with reduced cognitive flexibility (Luria, 1966) as part of a more general dysexecutive syndrome. In studies using functional magnetic resonance imaging (fMRI), lateral frontal cortex is also usually more active when participants perform tasks that demand cognitive flexibility (Wager et al., 2004). It is widely believed that prefrontal cortex is especially important for holding information about our environment and task goals in mind to guide flexible behaviour (Baddeley, 2003; Miller, 2000).

Dynamic population coding

Dynamic trajectory through state-space

In this study, we observed a highly dynamic process underlying flexible cognitive processing, using a statistical approach that allows us to decode patterns of population-level activity in prefrontal cortex at high temporal resolution. During a task that requires a different stimulus-response mapping according to trial-by-trial instruction cues (see Fig 1), we found that the pattern of activity rapidly changes during processing of the instruction cue. After this complex cascade through activity state-space (for more info, see Stokes, 2011), overall activity levels return to baseline for the remainder of the delay period between the instruction cue and a possible target stimulus.
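For readers who like to see the machinery, the logic of time-resolved decoding can be sketched in a few lines. The data and variable names below are hypothetical placeholders, and the published analyses are more involved; treat this as an illustration of the approach, not our actual pipeline:

```python
# Sketch of time-resolved population decoding: train and test a classifier
# separately at each timepoint to trace when cue information is present.
# 'rates' (trials x neurons x timepoints) and 'cue' are synthetic stand-ins.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons, n_times = 120, 50, 80
rates = rng.normal(size=(n_trials, n_neurons, n_times))  # placeholder data
cue = rng.integers(0, 2, n_trials)                       # two cue conditions

accuracy = np.array([
    cross_val_score(LinearDiscriminantAnalysis(),
                    rates[:, :, t], cue, cv=5).mean()
    for t in range(n_times)
])
# 'accuracy' now traces how cue information waxes and wanes across the trial
```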

Adapted from Fig 5
However, the effect of the cue response lingers on. Subsequent stimuli elicit a population response that critically depends on the previous cue identity. In other words, the dynamic population response triggered by the cue stimulus shifts the response profile of the network of prefrontal cells. This shift in tuning profile allows us to decode the current task-rule (i.e., cue identity) based on a simple driving stimulus (i.e., neutral stimuli, see Fig 5).

Adapted from Fig 6
More importantly, the shift in the network response profile could also underlie task-dependent target processing (i.e., choice stimuli, see Fig 6). The population response to potential target stimuli rapidly evolved from a stimulus-specific coding scheme to a more abstract code that distinguishes only between target and non-target items. This dynamic tuning property is ideal for flexible cognition (Duncan, 2001).

Putative mechanism: flexible connectivity


The flow of brain activity critically depends on the pattern of connections between neurons. Contrary to intuition, these connections are always changing. The pattern of connections that makes up the very essence of personal experience is constantly adjusting and adapting to the myriad changes experienced throughout life.

Synaptic Plasticity [wiki commons]
Extensive research focuses on long-term structural changes in connectivity through synaptic plasticity; however, the rapid changes we experience from moment to moment require a more flexible kind of memory that can represent the transient features of a given scenario. This kind of flexible "online" memory is typically referred to as ‘working memory’.

It has long been assumed that working memory is maintained by keeping a specific thought in mind, like a static snapshot of a visual image or an abstract goal such as ‘turn left at the next set of lights’. However, more recent evidence suggests that working memory can also be stored by laying down specific, but temporary, neural pathways (e.g., Mongillo, Barak & Tsodyks, 2008). Neural pathways are formed by synaptic connections. In a comprehensive review of the literature on short-term synaptic plasticity, Zucker (1989) writes: “Chemical synapses are not static. Postsynaptic potentials wax and wane, depending on the recent history of presynaptic activity”. Short-term plasticity could provide a key mechanism for the flexible connectivity that is necessary for rapid, but temporary, changes in network behaviour.
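To make this concrete, the sketch below simulates a textbook model of short-term plasticity in the spirit of Mongillo et al. (2008). A brief burst of presynaptic spikes drives up a facilitation variable that then decays over a second or so after spiking stops, leaving a 'silent' trace of recent activity in the synapse itself. All parameters are illustrative only:

```python
# Minimal short-term plasticity simulation (facilitation u, depression x).
# Parameters are illustrative, loosely following Mongillo et al. (2008).
import numpy as np

U, tau_f, tau_d, dt = 0.2, 1.5, 0.2, 0.001      # baseline, time constants (s)
t = np.arange(0, 3, dt)
spike_times = np.arange(0.5, 0.7, 0.02)          # a 50 Hz presynaptic burst

u, x, next_spike = U, 1.0, 0
u_trace = np.empty_like(t)
for i, ti in enumerate(t):
    u += dt * (U - u) / tau_f                    # facilitation decays to baseline
    x += dt * (1 - x) / tau_d                    # vesicle pool recovers
    if next_spike < len(spike_times) and ti >= spike_times[next_spike]:
        u += U * (1 - u)                         # spike boosts facilitation
        x -= u * x                               # ...and depletes vesicles
        next_spike += 1
    u_trace[i] = u
# After the burst ends, u stays elevated for ~tau_f seconds: a memory of
# recent input carried in synaptic state rather than ongoing spiking.
```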

This new idea allows for a more dynamic theory of brain function, which is more consistent with the everyday experience of continuous thought processes that seem to evolve through time, rather than persist as a static representation. We suggest that short-term plasticity could help explain our data:

Adapted from Fig 7
The initial instruction cue stimulus establishes a specific (but temporary) connectivity state during the most active phase of the response. This would explain why the pattern constantly changes - if the synapses are constantly changing, then even identical input to the system will result in constantly shifting output patterns (Buonomano and Maass, 2009). This temporary shift in the response sensitivity of the prefrontal network allows the identity of previous input to be decoded from the patterned response to subsequent input, consistent with the silent memory hypothesis. Finally, dynamic changes in connectivity could also be used to rapidly shift the tuning profile of the prefrontal network to accommodate changes in what specific stimuli mean for behaviour (see Fig. 7).

Broader implications


Brain activity is inherently non-stationary - the continuity/stability of cognitive states is unlikely to depend on static activity states, but rather on rapid changes in temporary connectivity patterns. This research also raises the intriguing possibility that cognitive capacity limits are not so much constrained by the sheer amount of information that we can keep in mind, but rather by how we can put that information to use. Further research in our lab will explore these exciting possibilities.


Reference:

Stokes, Kusunoki, Sigala, Nili, Gaffan and Duncan (2013). Dynamic Coding for Cognitive Control in Prefrontal Cortex. Neuron, 78, 364-375 [here]

Also see coverage: Miller Lab (MIT), Neuron Preview


Other literature cited:

Baddeley, A. (2003). Working memory: looking back and looking forward. Nat. Rev. Neurosci. 4, 829–839. [here]

Buonomano, D.V., and Maass, W. (2009). State-dependent computations: spatiotemporal processing in cortical networks. Nat. Rev. Neurosci. 10, 113–125. [here]

Luria, A.R. (1966). Higher Cortical Functions in Man (New York: Basic Books).

Miller, E.K. (2000). The prefrontal cortex and cognitive control. Nat. Rev. Neurosci. 1, 59–65. [here]

Mongillo, G., Barak, O., and Tsodyks, M. (2008). Synaptic theory of working memory. Science 319, 1543–1546. [here]

Wager, T.D., Jonides, J., and Reading, S. (2004). Neuroimaging studies of shifting attention: a meta-analysis. Neuroimage 22, 1679–1693. [here]

Zucker, R.S. (1989). Short-term synaptic plasticity. Annu. Rev. Neurosci. 12, 13–31. [here]

Sunday 23 June 2013

Neuroscience can reveal mysteries of the mind

Recently I responded in the Guardian to a couple of high-profile articles criticising over-hyped neuroscience [e.g., here and here]. Most of the criticism was levelled at bad scientific practices identified in the field, yet the authors boldly concluded that neuroscience is in principle unable to answer deep questions of mind.

In my response, I point out that current limitations in practice do not imply limitations in principle. It is far too early to predict so-called "in principle" limits. Also, I point out that the neurocentric view does not necessarily neglect all the external (non-brain) influences that shape our experience (i.e., society, culture, history, art, etc). The goal for neuroscience is to understand how the brain responds to all such influences, from basic sensory stimulation to social and cultural factors. Finally, I also make the point that neuroscience is not just functional magnetic resonance imaging (fMRI), and the blobology often used to parody fMRI, and neuroscience by association. Neuroscience is a multilevel approach that includes a vast array of complementary techniques - a point often neglected by 'in principle' critics of neuroscience, who tend to focus on the more simplistic, and sensationalist, claims that circulate around the mainstream media. Similar responses have been elicited elsewhere [Neurocritic, Brembs, New Yorker, BrainFacts.org]. In this post, I would just like to elaborate on the more general question of mind, and what we might expect neuroscience to help us understand.

Recently, the philosopher, poet, novelist and cultural critic Raymond Tallis reminded us that the brain is not the mind [see here for a similar argument by David Brooks in the NY Times]. As Gilbert Ryle famously argued in The Concept of Mind, to confuse the two levels of description is to commit a category mistake. The brain is not the mind, but the basic medium that gives rise to all mental faculties. In other words, mind is what the brain does. The philosophical distinction between mind and brain is valid and important, but it does not imply any limit on how much studying the brain will inform us about the workings of the mind. That is an empirical question.

Experience - The Explanatory Gap


From Wiki Commons
I think some of the confusion comes from different ideas of knowledge: explanation vs. experience. To quote from another famous philosophical example, Frank Jackson considers the plight of Mary the colour scientist. She "knows all the physical facts about colour, including every physical fact about the experience of colour in other people, from the behavior a particular colour is likely to elicit to the specific sequence of neurological firings that register that a colour has been seen. However, she has been confined from birth to a room that is black and white, and is only allowed to observe the outside world through a black and white monitor. When she is allowed to leave the room, it must be admitted that she learns something about the colour red the first time she sees it — specifically, she learns what it is like to see that colour" [from here].

But this is a red herring - who really expects neuroscience to substitute for subjective experience? If you want to experience red, you should find something red to look at. If you want to experience Bach, then go to a concert and leave the neuroscientists alone! You will certainly learn something new that can't be gleaned from reading all the research on how the brain processes colour or music. If you do not have the basic neural machinery necessary for these experiences, then you will remain forever deprived in this respect, as no other kind of knowledge can substitute for experience. Neuroscience (or any other study) is never going to provide a satisfactory substitute for direct subjective experience, but if you are searching for a causal explanation of how the brain gives rise to these experiences, then there is no substitute for neuroscience.
Wiki Commons

Every experience we have, every memory, every perception, hope, dream, plan, action... everything that makes up our mental life is causally dependent on some electrochemical state in the brain. In the modern age, this basic materialist view is rarely contested, even by the most vociferous critics of neuroscience (though Brooks gets pretty close here). It is simply no longer credible to invoke some non-material entity (the ghost in the machine) as the ultimate cause of the private and uniquely special quality of human experience. If we want to understand how the material of the brain gives rise to the phenomena of mind, then we need to understand the causal biological mechanisms that underpin the cognitive architecture that is collectively termed mind. This includes perception, memory, imagination, language (and other social interactions), the sense of agency/free will, etc. But I reiterate, the purpose is to understand the causal mechanisms that give rise to the phenomena of mind, not to substitute for the first-order experience. The explanatory gap is simply a red herring.

I have argued that it is too early to predict how far neuroscience will be able to take us. It is hard to imagine some magical endpoint at which the final piece of the puzzle falls into place, and all mystery finally dissolves. But there is every reason to believe that the current direction is a promising one, and new technical developments and analysis approaches are likely to yield important new insights that can hardly be predicted at this early stage of the adventure. But to make the case for neuroscience as the most likely place to look for answers about the mind, it makes sense to consider how far we have already come.

Never mind the neurobollocks


Phrenology (Wiki Commons)
As neurobacklashers are quick to point out, there have been many examples of over-hyped studies (usually some form of one-to-one mapping between cognitive states X and Y and brain areas A and B using fMRI). Neurophrenology is impossibly simplistic and theoretically absurd, but it is probably an important first step to map out some of the basic correspondences between brain structure and function before we can move on to more complex interactions. The neurobacklash may also draw upon some more systemic problems with the practical application of neuroscience (e.g., poor statistical methods, unreliable results, etc). These are all serious problems in the field today, but they are not unique to neuroscience. In fact, one of the most striking and often-cited examples of such bad practices comes from preclinical cancer research, where only about 10% of previously published results could be confirmed, implying that almost 90% of results published in preclinical cancer research were effectively false positives.

All scientific conclusions depend critically on the rigour of the scientific practice used to gather and evaluate the evidence. I have previously argued that current funding models prioritise quantity over quality [posts at the Guardian and Brain Box], which seriously distorts the incentive structure in science to reward shoddy practices for expedient publications. It is the same if your building contractor cuts corners to save on costs: an unstable edifice built on under-resourced science will not stand the test of time. Worse, science is a cumulative process, so poor science leads future research down blind alleys. I have also advocated more stringent statistical criteria [Brain Box], and others have argued for more checks and balances in the publication process [here, here]. We must remain ever-vigilant to protect previously established safeguards from increasing pressure to cut corners, and also find new ways to improve the reliability of established results.

Although a litany of bad practices in neuroscience does not imply that the endeavour is flawed in principle, it would undermine the future promise if there were no examples that survived the in practice critique. But this is simply not the case. Neuroscience has completely revolutionised our understanding of many core mental faculties over the last century. Research in long-term memory is probably a good example to illustrate how neuroscience can provide a powerful explanatory framework for understanding how the biology of brain causes a key mental faculty.

Case study: Long-term Memory


From Wiki
This story starts with the (in)famous case of an ill-fated surgical procedure to treat otherwise intractable epilepsy. After bilateral resection of the medial temporal lobe in a patient widely known by his initials, HM, we discovered that a very specific part of the brain is absolutely necessary for long-term memory: the hippocampus. This was a remarkable case of localisation. Without the hippocampus, the patient becomes profoundly amnesic, so we can conclude that this brain structure is necessary for forming new memories. But not all types of memories. The amnesic patient is still able to learn new motor skills, for example, so we learn something important about the mind - there are different types of memory [see here for other examples]. Moreover, the patient is also able to recall old memories, suggesting that our past experiences are stored in widely distributed networks throughout the entire brain (massively non-localised).
MRI (Wiki Commons)

So a further neuroscientific question naturally arises: how do we form new memories? One popular theory is that the hippocampus re-activates recent experiences during sleep. This replay of recent events is thought to eventually re-wire existing networks in the rest of the brain, integrating the new experience with all our previous experiences. This is perhaps why we have dream experiences in our sleep (albeit with confusing/disjointed narratives). Unfortunately, dreams are notoriously difficult to study. There are no observable behaviours for animal studies (other than a bit of paw twitching... we can't ask a mouse to keep a dream diary), and human studies must rely on whatever experience survives the transition between sleep and wakefulness. Neuroscience is able to break through this barrier by measuring patterns of activity during sleep. Already, we have seen how activity in the mouse hippocampus reflects reactivation of recent experiences [i.e., learning a new maze: article]. More exciting, a recent proof-of-principle fMRI study has now shown that it won't be long before we can extend the same approach to humans [article here, and my review here and here]. Aside from the general awe and wonder associated with the idea that we can read people's brains during sleep, these kinds of studies open up entirely new approaches for understanding the mental rumblings that are beyond the scope of other forms of enquiry [see here on 'mind reading'].

But this is just one example of how neuroscience has provided key insights into the fundamental mechanisms of mind: memory (see here for a related discussion of free will by Björn Brembs). If this level of explanation does not satisfy your definition of "learning something new about the mind", then I can't imagine any other form of enquiry that is likely to be more satisfactory. We have not learned what it is like to have memory (i.e., the explanatory gap), but most of us already know what memory feels like anyway. The deeper question is how the brain gives rise to such phenomena of mind. Some questions of mind are amenable to introspection, others can be studied using more subtle cognitive behavioural experiments (i.e., cognitive psychology), while others can only realistically be addressed using neuroscientific methods. Future developments in neuroscientific methods will set the limit of this endeavour.

For other excellent (err... like-minded) responses to the recent neurobacklash: Neurocritic Blog, Björn Brembs Blog, New Yorker, BrainFacts.org.

Tuesday 23 April 2013

In the news: Decoding dreams with fMRI

Recently Horikawa and colleagues from ATR Computational Neuroscience Laboratories in Kyoto (Japan) caused a media sensation with the publication of their study in Science, which provides first-time proof-of-principle that non-invasive brain scanning (fMRI) can be used to decode dreams. Rumblings were already heard in various media circles after Yuki Kamitani presented the initial findings at the annual meeting of the Society for Neuroscience in New Orleans last year [see Mo Costandi's report]. But now that the peer-reviewed paper is officially published, the press releases have gone out and the journal embargo has been lifted, there has been a media frenzy [e.g., here, here and here]. The idea of reading people's dreams was always bound to attract a lot of media attention.

OK, so this study is cool. OK, very cool - what could be cooler than reading people's dreams while they sleep!? But is this just a clever parlour trick, using expensive brain imaging equipment? What does it tell us about the brain, and how it works?

First, to get beyond the hype, we need to understand exactly what they have, and have not, achieved in this study. Research participants were put into the narrow bore of an fMRI scanner for a series of mid-afternoon naps (up to 10 sessions in total). With the aid of simultaneous EEG recordings, the researchers were able to detect when their volunteers had slipped off into the earliest stages of sleep (stage 1 or 2). At this point, they were woken and questioned about any dream they could remember, before being allowed to go back to sleep. That is, until the EEG next registered evidence of early-stage sleep, and then again they were awoken, questioned, and allowed back to sleep. So on and so forth, until at least 200 distinct awakenings had been recorded.

After all the sleep data were collected, the experimenters analysed the verbal dream reports using a semantic network analysis (WordNet) to help organise the contents of the dreams their participants had experienced during the brain scans. The results of this analysis could then be used to systematically label dream content associated with the sleep-related brain activity recorded earlier.
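For a flavour of what a WordNet analysis buys you, the toy sketch below maps nouns from a hypothetical dream report onto broader semantic categories by walking up WordNet's hypernym tree. (The paper's actual pipeline grouped reported words into base-level synsets; this just shows the core idea, and requires nltk with the wordnet corpus downloaded.)

```python
# Toy WordNet grouping: climb the hypernym tree to find broader categories.
# Requires: pip install nltk; then nltk.download('wordnet') once.
from nltk.corpus import wordnet as wn

def broad_categories(word, depth=4):
    """Return the last few hypernyms above a word's first noun sense."""
    synset = wn.synsets(word, pos=wn.NOUN)[0]
    chain = synset.hypernym_paths()[0]           # root -> ... -> synset
    return [s.name() for s in chain[-depth:]]

for word in ["actress", "street", "car"]:        # hypothetical report words
    print(word, "->", broad_categories(word))
```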

Having identified the kind of things their participants had been dreaming about in the scanner, the researchers then searched for actual visual images that best matched the reported content of dreams. Scouring the internet, the researchers built up a vast database of images that more or less corresponded to the contents of the reported dreams. In a second phase of the experiment, the same participants were scanned again, but this time they were fully awake and asked to view the collection of images that were chosen to match their previous dream content. These scans provided the research team with individualised measures of brain activity associated with specific visual scenes. Once these patterns had been mapped, the experimenters returned to the sleep data, using the normal waking perception data as a reference map.

If it looks like a duck...

In the simplest possible terms, if the pattern of activity measured during one dream looks more like the activity associated with viewing a person than the activity associated with seeing an empty street scene, then you should guess that the dream probably contains a person. This is the essence of their decoding algorithm. They use sophisticated methods to characterise patterns in fMRI activity (a support vector machine), but essentially the idea is simply to match up, as best they can, the brain patterns observed during sleep with those measured during wakeful viewing of corresponding images. Their published result is shown on the right for different areas of the brain's visual system. Lower visual cortex (LVC) includes primary visual cortex (V1), and areas V2 and V3; whereas higher visual cortex (HVC) includes lateral occipital complex (LOC), fusiform face area (FFA) and parahippocampal place area (PPA).
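Stripped of the machine-learning machinery, that matching logic can be written down in a few lines. The study itself used linear support vector machines; everything below is a deliberately simplified stand-in with random placeholder data:

```python
# Simplest-possible dream decoding: correlate a sleep-time activity pattern
# with waking 'template' patterns and guess the best-matching content label.
import numpy as np

def decode(sleep_pattern, templates):
    """templates: dict mapping content label -> mean waking fMRI pattern."""
    scores = {label: np.corrcoef(sleep_pattern, pattern)[0, 1]
              for label, pattern in templates.items()}
    return max(scores, key=scores.get)

rng = np.random.default_rng(1)
templates = {"person": rng.normal(size=500), "street": rng.normal(size=500)}
dream = templates["person"] + 0.8 * rng.normal(size=500)  # noisy 'dream'
print(decode(dream, templates))                            # -> 'person'
```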

Below is a more creative reconstruction of this result. The researchers have put together a movie based on one set of sleep data taken before waking. Each frame represents the visual image from their database that best matches the current pattern of brain activity. Note, the reason the image gets clearer towards the end of the movie is that the brain activity is closer to the time point at which the participants were woken, and therefore more likely to be described in the report at waking. If the content at other times did not make it into the verbal report, then the dream activity would be difficult to classify, because the corresponding waking data would not have been entered into the image database. This highlights how this approach only really works for content that has been characterised using the waking visual perception data.


OK, so these scientists have decoded dreams. The accuracy is hardly perfect, but still, the results are significantly above chance, and that's no mean feat. In fact, it has never been done before. But some might still say, so what? Have we learned anything very new about the brain? Or is this just a lot of neurohype?

Well, beyond the tour de force technical achievement of actually collecting this kind of multi-session simultaneous fMRI/EEG sleep data, these results also provide valuable insights into how dreams are represented in the brain. As in many neural decoding studies, the true purpose of the classifier is not really to make perfectly accurate predictions, but rather to work out how the brain represents information by studying how patterns of brain activity differ between conditions [see previous post]. For example, are there different patterns of visual activity during different types of dreams? Technically, this could be tested by just looking for any difference in activity patterns associated with different dream content. In machine-learning language, this could be done using a cross-validated classification algorithm. If a classifier trained to discriminate activity patterns associated with known dream states can then make accurate predictions of new dreams, then it is safe to assume that there are reliable differences in activity patterns between the two conditions. However, this only tells you that activity in a specific brain area differs between conditions. In this study, they go one step further.

By training the dream decoder using only patterns of activity associated with the visual perception of actual images, they can also test whether there is a systematic relationship between the way dreams are represented and the way actual everyday perception is represented in the brain. This cross-generalisation approach helps isolate the shared features between the two phenomenological states. In my own research, we have used this approach to show that visual imagery during normal waking selectively activates patterns in high-level visual areas (lateral occipital complex: LOC) that are very similar to the patterns associated with directly viewing the same stimulus (Stokes et al., 2009, J Neurosci). The same approach can be used to test for other coding principles, including high-order properties such as position-invariance (Stokes et al., 2011, NeuroImage), or the pictorial nature of dreams, as studied here. As in our previous findings during waking imagery, Horikawa et al. show that the visual content of dreams shares similar coding principles with direct perception in higher visual brain areas. Further research, using a broader base of comparisons, will provide deeper insights into the representational structure of these inherently subjective and private experiences.
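Schematically, the cross-generalisation test looks like this: train a classifier on perception data only, then score it on sleep data, where above-chance transfer implies a shared coding format. The sketch below uses synthetic data and hypothetical names throughout:

```python
# Cross-generalisation sketch: train on 'perception' patterns, test on
# 'sleep' patterns. All data are synthetic placeholders.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
signal = rng.normal(size=(2, 500))            # one true pattern per category

def simulate(n, noise):
    labels = rng.integers(0, 2, n)
    return signal[labels] + noise * rng.normal(size=(n, 500)), labels

X_wake, y_wake = simulate(200, noise=1.0)     # waking perception scans
X_sleep, y_sleep = simulate(50, noise=2.0)    # sleep scans (noisier)

clf = LinearSVC(dual=False).fit(X_wake, y_wake)   # train on perception only
print("cross-decoding accuracy:", clf.score(X_sleep, y_sleep))
```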

Many barriers remain for an all-purpose dream decoder

When the media first picked up this story, the main question I was asked went something like: are scientists going to be able to build dream decoders? In principle, yes: this result shows that a well-trained algorithm, given good brain data, is able to decode some of the content of dreams. But as always, there are plenty of caveats and qualifiers.

Firstly, the idea of downloading people's dreams while they sleep is still a very long way off. This study shows that, in principle, it is possible to use patterns of brain activity to infer the contents of people's dreams, but only at a relatively coarse resolution. For example, it might be possible to distinguish between patterns of activity associated with a dream containing a person or an empty street, but it is another thing entirely to decode which person, or which street, not to mention all the other nuances that make dreams so interesting.

To boost the 'dream resolution' of any viable decoding machine, the engineer would need to scan participants for much MUCH longer, using many more visual exemplars to build up an enormous database of brain scans to use as a reference for interpreting more subtle dream patterns. In this study, the researchers took advantage of prior knowledge of specific dream content to limit their database to a manageable size. By verbally assessing the content of dreams first, they were able to focus on just a relatively small subset of all the possible dream content one could imagine. If you wanted to build an all-purpose dream decoder, you would need an effectively infinite database, unless you could discover a clever way to generalise from a finite set of exemplars to reconstruct infinitely novel content. This is an exciting area of active research (e.g., see here).

Another major barrier to a commercially available model is that you would also need to characterise this data for each individual person. Everyone's brain is different, unique at birth and further shaped by individual experiences. There is no reason to believe that we could build a reliable machine to read dreams without taking this kind of individual variability into account. Each dream machine would have to be tuned to each person's brain.


Finally, it is also worth noting that the method used in this experiment requires some pretty expensive and unwieldy machinery. Even if all the challenges set out above were solved, it is unlikely that dream readers for the home will be hitting the shelves any time soon. Other cheaper and more portable methods for measuring brain activity, such as EEG, can only really be used to identify different sleep stages, not what goes on within them. Electrodes placed directly into the brain could be more effective, but at the cost of invasive brain surgery.


For the moment, it is probably better just to keep a dream journal.

Reference:


Horikawa, Tamaki, Miyawaki & Kamitani (2013) Neural Decoding of Visual Imagery During Sleep, Science [here]

Tuesday 16 April 2013

Statistical power is truth power

This week, Nature Reviews Neuroscience published an important article by Kate Button and colleagues quantifying the extent to which experiments in neuroscience may be statistically underpowered. For a number of excellent, and accessible summaries of the research, see here, here, here and this one in the Guardian from the lead author of the research.

The basic message is clear - collect more data! Data collection is expensive and time consuming, but underpowered experiments are a waste of both time and money. Noisy data will decrease the likelihood of detecting important effects (false negatives), which is obviously disappointing for all concerned. But noisy datasets are also more likely to be over-interpreted, as the disheartened experimenter attempts to find something interesting to report. With enough time and effort, trying lots of different analyses, something 'worth reporting' will inevitably emerge, even by chance (false positive). Put a thousand monkeys at a thousand typewriters, or leave an enthusiastic researcher alone long enough with a noisy data set, and eventually something that reads like a coherent story will emerge. If you are really lucky (and/or determined), it might even sound like a pretty good story, and end up published in a high-impact journal.
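This is easy to simulate for yourself: run enough independent analyses on pure noise and a few will cross the conventional threshold. The numbers below are arbitrary:

```python
# Simulating the 'thousand monkeys' problem: with no true effect anywhere,
# ~5% of independent tests will still come out 'significant' at alpha = .05.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)
n_analyses, hits = 40, 0
for _ in range(n_analyses):
    a, b = rng.normal(size=20), rng.normal(size=20)   # pure noise, no effect
    if ttest_ind(a, b).pvalue < 0.05:
        hits += 1
print(f"{hits}/{n_analyses} 'significant' results from noise alone")
```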

This is the classic Type 1 error, the bogeyman of undergraduate Statistics 101. But the problem of false positives is very real, and continues to plague empirical research, from biological oncology to social psychology. Failure to replicate published results is the diagnostic marker of a systematic failure to separate signal from noise.

There are many bad scientific practices that increase the likelihood of false positives entering the literature, such as peeking, parameter tweaking, and publication bias, and there are some excellent initiatives out there to clean up these common forms of bad research practice. For example, Cortex has introduced a Registered Report format that should bring some rigour back to hypothesis testing, Psychological Science is now hoping to encourage replications, and Nature Neuroscience has drawn up clearer guidelines to improve statistical practices.

These are all excellent initiatives, but I think we also need to consider simply increasing the margin of error. In a previous post, I argued that the accepted statistical threshold is far too lax. A 1-in-20 false discovery rate already seems absurdly permissive, but if we factor in all the other violations of basic statistical assumptions, then the true rate of false positives must be extremely high (perhaps explaining 'Why Most Published Research Findings Are False'). Increasing the safety margin seems like an obvious first step towards improving the reliability of published findings.

The downside, of course, to a more stringent threshold for separating signal from noise is that it demands a lot more data. Obviously, this will reduce the total number of experiments that can be conducted for the same amount of money. But as I recently argued in the Guardian, science on a shoestring budget can do more harm than good. If the research is important enough to fund, then it is even more important that it is funded properly. Spreading resources too thinly will only add noise and confusion to the process, leading further research down expensive and time-consuming blind alleys opened up by false positives.

So, the take home message is simple - collect more data! But how much more?

Matt Wall recently posted his thoughts on power analyses. These are standardised procedures for estimating the probability that you will be able to detect a significant effect, given a certain effect size and variance, for a given number of subjects. This approach is used widely for planning clinical studies, and is essentially the metric that Kate and colleagues use to demonstrate the systematic lack of statistical power in the neuroscience literature. But there's an obvious catch-22, as Matt points out. How are you supposed to know the effect size (and variance) if you haven't done the experiment? Indeed, isn't that exactly why you have proposed to conduct the experiment - to sample the distribution for an estimate of effect size (and variance)? Also, in a typical experiment you might be interested in a number of possible effects, so which one do you base your power analysis on?
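For the record, this is all a standard power analysis computes, and it shows exactly why the guessed effect size matters so much. A minimal sketch using statsmodels:

```python
# Sample size needed per group for 80% power at alpha = .05, as a function
# of the (guessed) standardised effect size.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.2, 0.5, 0.8):    # small, medium, large (Cohen's d)
    n = analysis.solve_power(effect_size=d, power=0.8, alpha=0.05)
    print(f"d = {d}: ~{n:.0f} subjects per group")
# A small effect (d = 0.2) needs almost 400 subjects per group; a large one
# (d = 0.8) only about 26. The answer hinges on a number you don't yet know.
```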

I tend to think that power analysis is best suited to clinical studies, in which there is already a clear idea of the effect size you should be looking for (as it is bounded by practical concerns of clinical relevance). In contrast, basic science is often interested in whether there is an effect at all, in principle. Even a very small effect could be of major theoretical interest. In this case, there may be no lower-bound effect size to impose, so without pre-cognition it seems difficult to establish the necessary sample size. Power calculations would clearly benefit replication studies, but it is difficult to see how they could be applied to planning new experiments. Researchers can make a show of power calculations by basing effect-size estimates on some randomly selected previous study, but this is clearly a pointless exercise.

Instead, researchers often adopt rules of thumb, but I think the new rule of thumb should be: double your old rule of thumb! If you were previously content with 20 participants for fMRI, then perhaps you should recruit 40. If you have always relied on 100 cells, then perhaps you should collect data from 200 cells instead. Yes, these are essentially still just numbers, but there is nothing arbitrary about improving statistical power. And you can be absolutely sure that the extra time and effort (and cost) will pay dividends in the long run. You will spend less time analysing your data trying to find something interesting to report, and you will be less likely to send some other researcher down the miserable path of persistent failures to replicate your published false positive.