Saturday, 16 May 2015

What does fMRI measure?

Fig 1. From Kuo, Stokes, Murray & Nobre (2014)
When you say ‘brain activity’, many people first think of the activity maps generated by functional magnetic resonance imaging (fMRI; see figure 1). As a non-invasive brain imaging method, fMRI has become the go-to workhorse of cognitive neuroscience. Since the first papers were published in the early 1990s, there has been an explosion of studies using this technique to study brain function, from basic perception to mind-reading for communicating with locked-in patients or detecting lies in criminal investigations. At its best, fMRI provides unparalleled access to detailed patterns of activity in the healthy human brain; at its worst, fMRI reduces to an expensive generator of three-dimensional Rorschach images. To understand the relative strengths and weaknesses of fMRI, it is essential to understand exactly what fMRI measures. Without delving too deeply into the nitty-gritty (see below for further reading), we will cover the basics necessary for understanding the potential and limits of this ever-popular and powerful tool.
“fMRI does not directly measure brain activity”
First and foremost, electricity is the language of the brain. At any moment, there are millions of tiny electrical impulses (action potentials) whizzing around your brain. At synaptic junctions, these impulses release specific chemicals (i.e., neurotransmitters), which in turn modulate the electrical activity in the next cell. This is the fundamental basis for neural communication. Somehow, these processes underpin every thought, feeling and action you have ever experienced. Our challenge is to understand how these electrical events give rise to the phenomena of mind.

However, fMRI does not measure electrical activity directly (compare EEG, MEG and intracranial neurophysiology); rather, it measures the indirect consequences of neural activity (the haemodynamic response). The pathway from neural activity to the fMRI activity map is schematised in figure 2 below:


Fig 2. From Arthurs & Boniface (2002)


Fig 3. From Oxford Sparks
To summarise, let's consider three key principles: 1) neural activity is systematically associated with changes in the relative concentration of oxygen in the local blood supply (figure 3); 2) oxygenated blood has a different magnetic susceptibility relative to deoxygenated blood; 3) changes in the ratio of oxygenated to deoxygenated blood (the haemodynamic response function; figure 4) can be inferred with fMRI by measuring the blood-oxygen-level-dependent (BOLD) response.
Fig 4. Haemodynamic response function
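For the curious, the shape of the haemodynamic response function can be sketched as a difference of two gamma functions. This is only an illustrative sketch: the parameter values below are the commonly used SPM-style defaults, not measurements.

```python
import math

def hrf(t, a1=6.0, a2=16.0, undershoot_ratio=1 / 6):
    """Canonical double-gamma HRF: an early positive gamma (peak ~5 s)
    minus a smaller, later gamma that models the post-stimulus undershoot."""
    if t <= 0:
        return 0.0
    peak = t ** (a1 - 1) * math.exp(-t) / math.gamma(a1)
    undershoot = t ** (a2 - 1) * math.exp(-t) / math.gamma(a2)
    return peak - undershoot_ratio * undershoot

ts = [i * 0.1 for i in range(1, 301)]              # 0.1 s to 30 s
vals = [hrf(t) for t in ts]
t_peak = ts[vals.index(max(vals))]
print(f"BOLD response peaks ~{t_peak:.1f} s after a brief neural event")
```

In real analyses, a function like this is convolved with the stimulus timeline to build a regression model of the expected BOLD signal.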


So fMRI only provides an indirect measure of brain activity. This is not necessarily a bad thing. A classic thermometer does not directly measure ‘temperature’, but rather the volume of mercury in a glass tube. Because these two parameters are tightly coupled, a well-calibrated thermometer does a good job of tracking temperature. The problem arises when the coupling is incomplete, noisy or just very complex. For example, the haemodynamic response is probably most tightly coupled to synaptic events rather than action potentials (see here). This means certain types of activity will be effectively invisible to fMRI, resulting in systematic biases (e.g., favouring input and local processing over output activity). The extent to which the coupling depends on unknown (or unknowable) variability also limits the extent to which we can interpret the BOLD signal. Basic neurophysiological research is therefore absolutely essential for understanding exactly what we are measuring when we switch on the big scanner. See here for an authoritative review by Logothetis, a great pioneer in the study of the neural basis of fMRI.
“spatial resolution”
Just like your digital camera, a brain scan can be defined by units of spatial resolution. However, because the image is 3D, we call these volumetric pixels, or voxels for short. In a typical scan, each voxel might cover 3 × 3 × 3 mm (27 mm³) of tissue, a volume that would encompass roughly 630,000 cortical neurons. However, the exact size of the voxel only defines the theoretical maximum resolution. In practice, the effective resolution of fMRI also depends on the spatial specificity of the haemodynamic response, as well as more practical considerations such as the degree of head movement during scanning. These additional factors can add substantial spatial distortion or blurring. Despite these limits, there are few methods with superior spatial resolution. Intracranial recordings can measure activity with excellent spatial precision (even isolating activity from single cells), but this invasive procedure is limited to animal models or very specific clinical conditions that require this level of precision for diagnostic purposes (see here). Moreover, microscopic resolution isn't everything. If we focus in too closely without seeing the bigger picture, there is always the danger of not seeing the forest for the trees. fMRI provides a good compromise between precision and coverage. Ultimately, we need to bridge different levels of analysis to capitalise on insights that can only be gained with microscopic precision alongside macroscopic measures that can track larger-scale network dynamics.
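The neurons-per-voxel figure above is simple arithmetic. The density value used here is an assumption (~23,000 neurons per mm³ of cortex; real estimates vary several-fold across regions), so treat this as an order-of-magnitude sketch:

```python
# Back-of-the-envelope check on the neurons-per-voxel figure quoted above.
# Assumptions: a 3 mm isotropic voxel, and a cortical packing density of
# ~23,000 neurons per mm^3 (density estimates vary several-fold by region).
voxel_side_mm = 3.0
voxel_volume_mm3 = voxel_side_mm ** 3          # 27 mm^3
neurons_per_mm3 = 23_000
neurons_per_voxel = int(voxel_volume_mm3 * neurons_per_mm3)
print(f"~{neurons_per_voxel:,} neurons per voxel")
```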
“a snapshot is more like a long-exposure photograph”
Fig 5. Wiki Commons
Every student of psychology or neuroscience should be able to tell you that fMRI has good spatial resolution (as above), but poor temporal resolution. This is because the haemodynamic response imposes a fundamental limit on the temporal precision of the measurement. Firstly, the peak response is delayed by approximately 4-6 seconds. This doesn't really matter for offline analysis, however, because we can simply adjust our analysis to correct for the lag. The real problem is that the response is extended over time. This temporal smoothing makes it difficult to pinpoint the precise moment of activity; the image actually reflects an average over many seconds. Think of it as a long-exposure photograph (see figure 5), rather than a snapshot of brain activity. This makes it very difficult to study highly dynamic mental processes – fast neural processes are simply blurred. Methods that measure electrical activity more directly have inherently higher temporal resolution (EEG, MEG, intracranial neurophysiology).
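The blurring is easy to demonstrate: convolve a hypothetical neural impulse train with the HRF, and two events separated by only one second merge into a single slow bump. A minimal sketch (the HRF parameters are illustrative SPM-style defaults, and the event times are invented):

```python
import math

def hrf(t):
    # canonical double-gamma shape (illustrative default parameters)
    if t <= 0:
        return 0.0
    return (t ** 5 * math.exp(-t) / math.gamma(6)
            - (1 / 6) * t ** 15 * math.exp(-t) / math.gamma(16))

dt = 0.1                                        # 100 ms resolution
kernel = [hrf(i * dt) for i in range(int(30 / dt))]

# hypothetical neural activity: two brief events only 1 s apart
neural = [0.0] * 400
neural[50] = 1.0                                # event at t = 5 s
neural[60] = 1.0                                # event at t = 6 s

# predicted BOLD signal = neural activity convolved with the HRF
bold = [sum(neural[i - j] * kernel[j] for j in range(min(i + 1, len(kernel))))
        for i in range(len(neural))]

n_peaks = sum(1 for i in range(1, len(bold) - 1)
              if bold[i - 1] < bold[i] > bold[i + 1])
print(f"{n_peaks} peak(s) in the BOLD trace for 2 distinct neural events")
```

Two clearly separate neural events, one smooth haemodynamic bump: this is the long-exposure photograph in action.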
“too much data to make sense of”
A standard fMRI experiment generates many thousands of measures in one scan. This is a major advantage of fMRI (mass simultaneous recording), but it raises a number of statistical challenges. Data mining can be extremely powerful; however, the intrepid data explorer will inevitably encounter spurious effects, or false positives (entertain yourself with some fun false positives here).
This is more an embarrassment of riches than a limit. I don’t believe that you can ever have too much data; the important thing is to know how to interpret it properly (see here). Moreover, the same problem applies to any other data-rich measure of brain activity. The solution is not to limit our recordings, but to improve our analysis approaches to the multivariate problem that is the brain (e.g., see here).
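To see why false positives are inevitable at this scale, consider a hypothetical null experiment. With tens of thousands of voxels and no true effect anywhere, an uncorrected threshold of p < 0.05 still flags thousands of voxels (all numbers below are made up for illustration):

```python
import random

random.seed(0)

# Hypothetical 'null' experiment: 50,000 voxels, no true effect anywhere.
# Under the null hypothesis, p-values are uniformly distributed, so an
# uncorrected threshold of p < 0.05 flags ~5% of voxels by chance alone.
n_voxels = 50_000
alpha = 0.05
false_positives = sum(1 for _ in range(n_voxels) if random.random() < alpha)
print(f"{false_positives} of {n_voxels:,} voxels 'significant' with no real effect")
```

This is exactly why corrected statistical thresholds are standard practice in neuroimaging.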
“too many free parameters”
There are many ways to analyse an fMRI dataset, so which do you choose? Many of the available options make sense and can be easily justified, yet different choices generate slightly different results. This dilemma will be familiar to anyone who has ever analysed fMRI data. A recent paper identified 6,912 slightly different paths through the analysis pipeline, resulting in 34,560 different sets of results. By fully exploiting this wiggle room, it should be possible to generate almost any result you would like (see here for further consideration). Although this flexibility is not strictly a limit of fMRI (and is certainly not unique to fMRI), it is definitely something to keep in mind when interpreting the fMRI literature. It is important to define the analysis pipeline independently of your research question, rather than trying them all and choosing the one that gives you the ‘best’ result. Otherwise there is a danger that you will only see what you want to see (i.e., circular analysis).
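The multiplicity arises because independent choices multiply. The step names and option counts below are invented purely so that their product matches the 6,912 pipelines mentioned above; the real choices and counts in that paper differ.

```python
from math import prod

# Hypothetical pipeline steps with invented option counts, chosen only so
# that the product comes to 6,912 (the figure quoted above).
options = {
    "slice-timing correction": 4,
    "motion correction":       4,
    "spatial smoothing":       4,
    "temporal filtering":      3,
    "nuisance regression":     3,
    "haemodynamic model":      3,
    "autocorrelation model":   2,
    "registration target":     2,
}
n_pipelines = prod(options.values())
print(n_pipelines)           # number of distinct analysis paths
print(n_pipelines * 5)       # result sets if each pipeline is scored 5 ways
```

A handful of defensible two-to-four-way choices compounds very quickly, which is exactly the wiggle room the paper quantified.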
“…correlation, not causation”
It is often pointed out that fMRI can only provide correlational evidence. The same can be said of any other measurement technique. Just because a certain brain area ‘lights up’ with a specific mental function, we cannot be sure that the observed activity actually caused the mental event (see here). Only an interference approach can provide such causal evidence. For example, if we ‘knock out’ a specific area (e.g., naturally occurring brain damage, TMS, tDCS, animal ablation studies, optogenetics) and observe a specific impairment in behaviour, then we can infer that the targeted area normally plays a causal role. Although this is strictly correct, it does not necessarily imply that causal methods are better. Neural recordings can provide enormously rich insights into how brain activity unfolds during normal behaviour. In contrast, causal methods allow you to test how the system behaves without a specific area. Because there is likely to be redundancy in the brain (multiple areas capable of performing the same function), interference approaches are susceptible to missing important contributions. Moreover, perturbing the neural system is likely to have knock-on effects that are difficult to control for, complicating the interpretation of positive effects. These issues probably deserve a dedicated post in the future. The point for now is simply that one approach is not obviously superior to the other; it depends on the nature of the question.
“…the spectre of reverse inference”
A final point worth raising is the spectre of reverse inference. In an influential review paper, Russ Poldrack outlines the problem:
The usual kind of inference that is drawn from neuroimaging data is of the form ‘if cognitive process X is engaged, then brain area Z is active’. Perusal of the discussion sections of a few fMRI articles will quickly reveal, however, an epidemic of reasoning taking the following form: 

  1. In the present study, when task comparison A was presented, brain area Z was active. 
  2. In other studies, when cognitive process X was putatively engaged, then brain area Z was active. 
  3. Thus, the activity of area Z in the present study demonstrates engagement of cognitive process X by task comparison A. 
This is a ‘reverse inference’, in that it reasons backwards from the presence of brain activation to the engagement of a particular cognitive function.
Reverse inferences are not a valid form of deductive reasoning, because there might be other cognitive functions that activate the same brain area. Nevertheless, this general form of reasoning can provide useful information, especially when the function of the particular brain area is relatively specific and particularly well understood. Using accumulated knowledge to interpret new findings is necessary for theory building. However, in the absence of a strict one-to-one mapping between structure and function, reverse inference is best approached from a Bayesian perspective rather than as strict logical inference.
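A toy Bayesian calculation makes the point concrete (all probabilities below are made up for illustration): activation of area Z is only strong evidence for process X if Z rarely activates for anything else.

```python
# Reverse inference as Bayes' rule, with invented illustrative numbers.
# P(X | Z) depends not just on P(Z | X) but on how selective area Z is.
p_x = 0.5                 # prior probability that process X is engaged
p_z_given_x = 0.8         # P(area Z active | X engaged)

def posterior(p_z_given_not_x):
    """P(X engaged | Z active), for a given rate of Z activating without X."""
    p_z = p_z_given_x * p_x + p_z_given_not_x * (1 - p_x)
    return p_z_given_x * p_x / p_z

print(posterior(0.40))    # Z is unselective: weak evidence for X
print(posterior(0.05))    # Z is highly selective: much stronger evidence
```

The same activation yields a posterior of about 0.67 when Z is unselective, but about 0.94 when Z is highly selective for X; selectivity, not activation per se, is what licenses the inference.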

Summary: fMRI is one of the most popular methods in cognitive neuroscience, and certainly the most headline-grabbing. fMRI provides unparalleled access to the patterns of brain activity underlying human perception, memory and action; but like any method, it has important limitations. To appreciate these limits, it is important to understand some of the basic principles of fMRI. We also need to consider fMRI as part of a broader landscape of available techniques, each with its unique strengths and weaknesses (figure 6). The question is not so much ‘is fMRI useful?’ but rather ‘is fMRI the right tool for my particular question?’

Fig 6. from Sejnowski, Churchland and Movshon, 2014, Nature Neuroscience

Further reading:

Oxford Sparks (see below for video demo)


Key references 

Arthurs, O. J., & Boniface, S. (2002). How well do we understand the neural origins of the fMRI BOLD signal? Trends Neurosci, 25(1), 27-31. doi:10.1016/S0166-2236(00)01995-0
Logothetis, N. K. (2008). What we can do and what we cannot do with fMRI. Nature, 453(7197), 869-878. doi:10.1038/nature06976
Poldrack, R. A. (2006). Can cognitive processes be inferred from neuroimaging data? Trends Cogn Sci, 10(2), 59-63. doi:10.1016/j.tics.2005.12.004
Sejnowski, T. J., Churchland, P. S., & Movshon, J. A. (2014). Putting big data to good use in neuroscience. Nat Neurosci, 17(11), 1440-1441.

Fun demonstration from Oxford Sparks:



Wednesday, 29 April 2015

Peering directly into the human brain

Wiki Commons
With the rise of non-invasive brain imaging such as functional magnetic resonance imaging (fMRI), researchers have been granted unprecedented access to the inner workings of the brain. It is now relatively straightforward to put your experimental subjects in an fMRI machine and measure activity 'blobs' in the brain. This approach has undoubtedly revolutionised cognitive neuroscience, and looms very large in people's idea of contemporary brain science. But fMRI has its limitations. As every student in the business should know, fMRI has poor temporal resolution. fMRI is like a long-exposure photograph: the activity snapshot actually reflects an average over many seconds. Yet the mind operates at the millisecond scale. This is obviously a problem: neural dynamics are simply blurred in fMRI. Probably more important, however, is the theoretical limit.

ECoG. Wiki Commons
Electricity is the language of the brain, but fMRI only measures changes in blood flow that are coupled to these electrical signals. This coupling is complex, so fMRI can only provide a relatively indirect measure of neural activity. Electroencephalography (EEG) is the classic method for measuring actual electrical activity. It has been around for more than 100 years but, again, as every student should know: EEG has poor spatial resolution. It is difficult to know exactly where the activity is coming from. Magnetoencephalography (MEG) is a close cousin of EEG. Developed more recently, MEG is better at localising the source of brain activity. But the fundamental laws of physics mean that any measure of electromagnetic activity from outside the head will always be spatially ambiguous (the inverse problem). The best solution is to record directly from the surface of the brain. Here we discuss the unique opportunities that arise in the clinic to measure electrical activity directly from the human brain using electrocorticography (ECoG).

Epilepsy can be a seriously debilitating neurological condition. Although the symptoms can often be managed with medication, some patients continue to have major seizures despite a cocktail of anti-epileptic drugs. So-called intractable epilepsy affects every aspect of life, and can even be life-threatening. Sometimes the only option is neurosurgery: careful removal of the specific brain area responsible for seizures can dramatically improve quality of life.

Neurosurgery
Psychology students should be familiar with the case of Henry Molaison (aka HM). Probably the most famous neuropsychology patient in history, HM suffered intractable epilepsy until the neurosurgeon William Scoville removed two large areas of tissue in the medial temporal lobe, including the left and right hippocampus. This pioneering surgery successfully treated his epilepsy, but that is not why the case became so famous in neuropsychology. Unfortunately, the treatment also left HM profoundly amnesic. It turns out that removing both sides of the medial temporal lobe effectively removes the brain circuitry for forming new memories. This lesson in functional neuroanatomy is what made the case of HM so important, but there was also an important lesson for neurosurgery – be careful which parts of the brain you remove!

The best way to plan a neurosurgical resection of epileptic tissue is to identify exactly where the seizures are coming from. The most reliable way to map out the affected region is to record activity directly from the surface of the brain. This typically involves neurosurgical implantation of recording electrodes directly in the brain, to be absolutely sure of the exact location of the seizure focus. Activity can then be monitored over a number of days, or even weeks, for seizure-related abnormalities. This invasive procedure allows neurosurgeons to monitor activity in specific areas that could be the source of epileptic seizures, but it also provides a unique opportunity for neuroscientific research.

From Pasley et al., 2012 PLoS Biol. Listen to audio here
During the clinical observation period, patients are typically stuck on the hospital ward with electrodes implanted in their brain, literally waiting for a seizure to happen so that the epileptic brain activity can be ‘caught on camera’. This observation period provides a unique opportunity to also explore healthy brain function. If patients are interested, they can perform some simple computer-based experiments to determine how different parts of the brain perform different functions. Previous studies from some of the great pioneers of neuroscience mapped out the motor cortex by stimulating different brain areas during neurosurgery. Current experiments continue in this tradition to explore less well-charted brain areas involved in high-level thought. For example, in a recent study from Berkeley, researchers used novel brain decoding algorithms to convert brain activity associated with internal speech into actual words. This research helps us understand the fundamental neural code for the internal dialogue that underlies much of conscious thought, but it could also help develop novel tools to restore communication to those otherwise unable to generate natural speech.


From Dastjerdi et al 2013 Nature Communications (watch video below)

In Stanford, researchers were recently able to identify a brain area that codes for numbers and quantity estimation (read the study here). Critically, they were even able to show that this area is involved in everyday numerical cognition, rather than just under specific experimental conditions. See video below.
Wiki Commons



The great generosity of these patients vitally contributes to the broader understanding of brain function. They have dedicated their valuable time in otherwise adverse circumstances to help neuroscientists explore the very frontiers of the brain. These patients are true pioneers.





Key References

Dastjerdi, M., Ozker, M., Foster, B. L., Rangarajan, V., & Parvizi, J. (2013). Numerical processing in the human parietal cortex during experimental and natural conditions. Nat Commun, 4, 2528.


Pasley, B. N., David, S. V., Mesgarani, N., Flinker, A., Shamma, S. A., Crone, N. E., Knight, R. T., & Chang, E. F. (2012). Reconstructing speech from human auditory cortex. PLoS Biol, 10, e1001251.


Video showing the use of a number processing brain area in everyday use:

video


Friday, 24 April 2015

Research Briefing: organising the contents of working memory

Figure 1. Nicholas Myers
Research Briefing, by Nicholas Myers

Everyone has been in this situation: you are stuck in an endless meeting, and a colleague drones on about a topic of marginal relevance. You begin to zone out and focus on the art hanging in your boss’s office, when suddenly you hear your name mentioned. On high alert, you suddenly shift back to the meeting and scramble to retrieve your colleague’s last sentences. Miraculously, you are able to retrieve a few key words – they must have entered your memory a moment ago, but would have been quickly forgotten if hearing your name had not cued them as potentially vital bits of information.

This phenomenon, while elusive in everyday situations, has been studied experimentally for a number of years now: cues indicating the relevance of a particular item in working memory have a striking benefit to our ability to recall it, even if the cue is presented after the item has already entered memory. See our previous Research Briefing on how retrospective cueing can restore information to the focus of attention in working memory.

In a new article, published in the Journal of Cognitive Neuroscience, we describe a recent experiment that set out to add to our expanding knowledge of how the brain orchestrates these retrospective shifts of attention. We were particularly interested in the potential role of neural synchronization of 10 Hz (or alpha-band) oscillations, because they are important in similar prospective shifts of attention.

Figure 2. Experimental Task Design. [from Myers et al, 2014]
We wanted to examine the similarity of alpha-band responses (and other neural signatures of the engagement of attention) to retrospective and prospective attention shifts. We needed to come up with a new task that allowed for this comparison. On each trial of our task, volunteers first memorized two visual stimuli. Two seconds later, a second set of two stimuli appeared, so that a total of four stimuli were kept in mind. After a further delay, participants recalled one of the four items.

In between the presentation of the first and the second set of two items, we sometimes presented a cue: this cue indicated which of the four items would likely be tested at the end of the trial. Crucially, this cue could have either a prospective or a retrospective function, depending on whether it pointed to a location where an item had already been presented (a retrospective cue, or retrocue) or to a location where a stimulus was yet to appear (a prospective cue, or precue). This allowed us to examine neural responses to attention-guiding cues that were identical in every respect except their forwards- or backwards-looking nature. See Figure 2 for a task schematic.

Figure 3. Results: retro-cueing and pre-cueing
trigger different attention-related ERPs.
[from Myers et al, 2014]
We found marked differences in event-related potential (ERP) profiles between the precue and retrocue conditions. We found evidence that precues primarily generate an anticipatory shift of attention toward the location of an upcoming item: potentials just before the expected appearance of the second set of stimuli reflected the location where volunteers were attending. These included the so-called early directing attention negativity (or 'EDAN') and the late directing attention-related positivity (or 'LDAP'; see Figure 3, middle panel; and see here for a review of attention-related ERPs). Retrocues elicited a different pattern of ERPs that was compatible with an early selection mechanism, but not with stimulus anticipation (i.e., no LDAP, see Figure 3, upper panel). The latter seems plausible, since the cued information was already in memory, and upcoming stimuli were therefore not deserving of attention. In contrast to the distinct ERP patterns, alpha band (8-14 Hz) lateralization was indistinguishable between cue types (reflecting, in both conditions, the location of the cued item; see Figure 4).

Figure 4. Results: retro-cueing and pre-cueing trigger similar patterns
of desynchronisation in low-frequency activity (alpha band at ~10 Hz).
[from Myers et al, 2014]
What did we learn from this study? Taken together with the ERP results, it seems that alpha-band lateralization can have two distinct roles: after a precue it likely enables anticipatory attention. After a retrocue, however, the alpha-band response may reflect the controlled retrieval of a recently memorized piece of information that has turned out to be more useful than expected, without influencing the brain’s response to upcoming stimuli.

It seems that our senses are capable of storing a limited amount of information on the off chance that it may suddenly become relevant. When this turns out to be the case, top-down control allows us to pick out the relevant information from among all the items quietly rumbling around in sensory brain regions.

Many interesting questions remain that we were not able to address in this study. For example, how do cortical areas responsible for top-down control activate in response to a retrocue, and how do they shuttle cued information into a state that can guide behaviour? 



Key Reference: 

Myers, Walther, Wallis, Stokes & Nobre (2014) Temporal Dynamics of Attention during Encoding versus Maintenance of Working Memory: Complementary Views from Event-related Potentials and Alpha-band Oscillations. Journal of Cognitive Neuroscience (Open Access)

Friday, 10 April 2015

Research Briefing: Preferential encoding of behaviourally relevant predictions revealed by EEG

Figure 1. Accurate predictions help us prepare the best action
Statistical regularities in the environment allow us to generate predictions to guide perception and action. For example, consider the challenge facing a goalkeeper during a penalty shoot-out. There is simply not enough time to act responsively. By the time the ball is hurtling along its path to some deep corner of the net, it is probably already too late to plan and execute the appropriate action to save the goal. Instead, the goalkeeper must actively predict the likely trajectory of the ball before it has even left the boot of the other player. The goalkeeper must use any subtle clues betrayed by the kicker, any reliable signal, to help prepare a dive in the correct direction.

Background

Predictions are useful in many contexts, not just professional sport. In everyday life, your brain is constantly generating predictions that help you interpret the world around you and plan appropriate behaviour. Hermann von Helmholtz described the importance of predictions derived from past experience for interpreting perceptual information (see previous post). More recently, theorists have argued that the brain is essentially a predictive machine – for example, the Free Energy Principle proposes that perception and action are best conceptualised as a dynamic interplay between the predictions we make about our environment and how well these predictions explain future events.

Research Question

In any given context, some predictions might be useful for behaviour, but others less so. Here, we asked whether and how the brain learns relevant and/or irrelevant predictive relationships using electroencephalography (EEG).

Methods Summary

Participants in our experiment performed a simple target detection task (see Figure 2). On each experimental trial, they were presented with a visual stimulus drawn from a set of ten possible fractal images. At the start of the experiment, one image was assigned as the target. Participants were simply instructed to press a button as quickly as possible each time they detected the target image. Critically, and unbeknownst to the participants, we also assigned specific roles to some of the other 'non-target' images. Firstly, we randomly assigned one of the stimuli to act as a task-relevant predictive cue. We rigged the presentation probabilities such that the target stimulus was more likely to follow the predictive cue than any other stimulus. We reasoned that participants should be able to implicitly learn this task-relevant predictive relationship to help prepare their response to the target stimulus (faster reaction times).

Figure 2. Behavioural task to compare neural processing associated with task-relevant and irrelevant statistical relationships. Presentation probabilities were randomly assigned for each participant at the start of the experiment [from Stokes et al., 2014]
Critically, we also included a task-irrelevant predictive relationship, to test whether learning is specific to task-relevant relationships or whether participants implicitly encode all the regularities they experience. Because this manipulation was by definition task-irrelevant, there was no behavioural index of learning. However, we could look to the EEG data to compare the neural response to task-relevant versus task-irrelevant predictive relationships.
Figure 3. Reaction times became faster for cued targets 
as participants presumably learned the predictive 
nature of the task-relevant cue [from Stokes et al., 2014]

Results Summary

Analysis of the reaction time data confirmed that participants learned the task-relevant statistical regularity that we introduced into the experiment. Reaction times were faster for cued targets relative to uncued targets (see Figure 3). By definition, there is no behavioural measure for task-irrelevant learning in this task, so we must turn to the EEG data (see Figure 4). Panel A shows the EEG response to cued targets relative to uncued targets as a function of learning (block number) in frontal, central and posterior scalp electrodes. The colour scale shows the difference in voltage between cued and uncued targets, i.e., the effect of the predictive cue on target processing. Towards the end of the experiment (blocks 7 & 8), a positive difference emerges at around 300ms after the presentation of the target. We also estimated the effect of learning by calculating the linear relationship between block number and the EEG response. In panel B, we can see the scalp distribution of this learning effect. In comparison to the robust learning effect of task-relevant predictions, we find no evidence for an effect of block (i.e., learning) on task-irrelevant predictions (Panel C & D). Finally, in Panel E we also plot the time-course of the learning effect for relevant (in blue) and irrelevant (in red) predictions for frontal, central and posterior electrodes, revealing a significant effect of learning relevant predictions (black significance bar, relative to baseline), but not irrelevant learning (relevant>irrelevant in grey significance bar). 
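The learning estimate described above amounts to an ordinary least-squares slope of the cued-minus-uncued EEG difference against block number. A minimal sketch with invented amplitude values (the actual analysis was computed per channel and time point, with cluster correction):

```python
# Sketch of the learning analysis: OLS slope of the cued-minus-uncued
# EEG difference against block number. The data below are invented.
blocks = [1, 2, 3, 4, 5, 6, 7, 8]
diff_uv = [0.1, 0.0, 0.2, 0.3, 0.5, 0.4, 0.8, 0.9]   # hypothetical microvolts

n = len(blocks)
mean_x = sum(blocks) / n
mean_y = sum(diff_uv) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(blocks, diff_uv))
         / sum((x - mean_x) ** 2 for x in blocks))
print(f"learning slope: {slope:.3f} microvolts per block")
```

A positive slope indicates that the cueing effect grows across blocks, i.e. that the predictive relationship is being learned.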


Figure 4. The EEG learning effect for cued vs. uncued targets and cued vs. uncued control non-targets. There was a robust effect of learning for predicted target stimuli (Panel A & B) relative to the task-irrelevant stimulus pairs (Panels C & D). In Panel E, we directly contrast the effect of learning task-relevant and irrelevant predictions in frontal, central and posterior electrodes, revealing a significant difference from around 250ms post-stimulus in central and posterior channels. Horizontal bars indicate significant regression slopes in the target learning condition compared to chance (in black; central: p = 0.053, cluster-corrected, dashed line, posterior: p = 0.0130, cluster-corrected, solid line), and directly compared to the control non-target condition (in grey; central: p = 0.026; posterior: p = 0.045, cluster corrected) [from Stokes et al., 2014]
Finally, we performed the same analysis, but time-locked to the cue stimulus (Figure 5). All the conventions were the same, except that now we are looking at the response to the predictive stimulus (rather than the predicted stimulus). Again, we observed a robust learning effect for the task-relevant predictive cue (Panels A and B), but not for the task-irrelevant cue (Panels C and D; Panel E for the direct comparison).

Figure 5. Event-related potentials to predictive stimuli: target cue and control non-target cues. All the conventions are the same as Figure 4. Note that there is a significant learning effect of the task-relevant predictive cue (Panel E, in blue), but not the task-irrelevant cue (in red) [from Stokes et al., 2014]

Summary

This experiment shows that learning predictive relationships critically depends on task relevance. In our experiment, participants were not explicitly informed about any of the statistical relationships between stimuli, but simply learned them through experience. Task-relevant predictions clearly benefited behaviour in the task. As participants learned the implicit statistics of the task, they responded more quickly to cued relative to uncued targets. The learning effect was also clearly evident in EEG activity, consistent with differential processing of predictive, and predicted, task-relevant stimuli. In contrast, there was no evidence for a corresponding neural effect for task-irrelevant predictions, providing strong evidence that the brain prioritises which relationships to learn. Of course, our null effect does not mean that task-irrelevant predictions are never learned or represented, but it highlights the importance of task relevance in modulating the learning process.

As a side note, this experiment also provides a nice example of how we can use EEG to probe cognitive variables without requiring a behavioural response. In many situations, we are interested in how the brain processes non-target information. This presents an obvious challenge for a behavioural experiment: how can we measure processing without making the stimulus task-relevant? Here, we used EEG to measure the response to task-irrelevant input, thereby providing insights at both the neural and cognitive level (see relevant post here).



Key reference: 

Stokes, Myers, Turnbull & Nobre (2014). Preferential encoding of behaviourally relevant predictions revealed by EEG. Frontiers in Human Neuroscience, 8:687 [open access]







Thursday, 9 April 2015

New arrival, keeping us all busy

It has been a while since I have posted anything new, but in the meantime this little guy has arrived in our lives:


Thursday, 12 June 2014

Research Briefing: Oscillatory Brain State and Variability in Working Memory

Hot off the press: Oscillatory Brain State and Variability in Working Memory

In a new paper, Nick Myers and colleagues show how spontaneous fluctuations in alpha-band synchronization over visual cortex predict the trial-by-trial accuracy of items stored in visual working memory. The pre-stimulus desynchronization of alpha oscillations correlated with the accuracy of memory recall. A model-based analysis indicated that this effect arises from a modulation in the precision of memorized items, but not the likelihood of remembering them (the recall rate). The phase of posterior alpha oscillations preceding the memorized item also predicted memory accuracy. The study highlights the influence of spontaneous changes in cortical excitability on higher visual cognition, and how these state changes contribute to large amounts of variability in what is normally thought of as a stable aspect of behaviour.
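To see why precision and recall rate are separable in this kind of model-based analysis, it helps to know the standard mixture model of working-memory recall: responses are treated as a mixture of a von Mises (circular normal) distribution centred on the target, plus a uniform "guess" component. The sketch below is a generic illustration of that idea, not the authors' actual analysis code; the function name and parameter values are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_recall(n, p_recall, kappa):
    """Simulate recall errors (radians) under a mixture model:
    with probability p_recall the response is drawn around the target
    (von Mises, concentration kappa = precision); otherwise it is a
    random guess, uniform on the circle."""
    remembered = rng.random(n) < p_recall
    errors = np.where(
        remembered,
        rng.vonmises(0.0, kappa, n),    # noisy but on-target responses
        rng.uniform(-np.pi, np.pi, n),  # uniform guesses
    )
    return errors

# Lowering kappa (precision) broadens the central error distribution,
# whereas lowering p_recall instead thickens the flat tail of guesses --
# two distinct signatures that model fitting can tease apart.
high_precision = simulate_recall(10_000, 0.9, 8.0)
low_precision = simulate_recall(10_000, 0.9, 2.0)

print(np.abs(high_precision).mean() < np.abs(low_precision).mean())  # True
```

Fitting such a mixture to observed errors yields separate estimates of precision and recall rate per condition, which is how an effect can load on one parameter but not the other.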
 
From Figure 2 in Myers et al. (2014)



Reference: 

Myers, N. E., M. G. Stokes, et al. (2014). "Oscillatory brain state predicts variability in working memory." J Neurosci 34(23): 7735-7743 http://www.jneurosci.org/content/34/23/7735.short

Tuesday, 13 August 2013

In the News: Death wave

Near-death Experience
(Wiki Commons)
Can neuroscience shed light on one of life's biggest mysteries - death? In a paper just published in PNAS, researchers describe a surge of brain activity just moments before death. This raises the fascinating possibility that they have identified the neural basis for near death experiences.

First, to put this research into context, death-related brain activity was examined in rats, not humans. For obvious reasons, it is easier to study the death process in animals than in humans. In this study, nine rats were implanted with electrodes in various brain regions, anaesthetised, then 'euthanized' (i.e., killed). The exact moment of death was identified as the last regular heartbeat (clinical death). Electroencephalographic (EEG) activity was recorded during the normal waking phase, under anaesthesia, and after cardiac arrest (i.e., after death) from right and left frontal (RF/LF), parietal (RP/LP) and occipital (RO/LO) cortex (see figure below). Data shown in Panel A range from about 1 hr before death to 30 mins afterwards. At this coarse scale you can see some patterns in the waking data that generally reflect high-frequency brain activity (gamma band, >40 Hz). During anaesthesia, activity becomes synchronised at lower frequency bands (especially the delta band: 0.1–5 Hz), but everything seems to flatline after cardiac arrest. However, if we zoom in on the moments just after death (Panels B and C), we can see that the death process actually involves a sequence of structured stages, including a surge of high-frequency brain activity that is normally associated with wakefulness.


Adapted from Fig 1 of Borjigin et al. (2013)

In the figure above, Panel B shows brain activity zoomed in on the 30 min after death, and Panel C provides an even closer view, with activity from each brain area overlaid in a different colour. The authors distinguish four cardiac arrest stages (CAS). CAS1 reflects the time between the last regular heartbeat and the loss of the oxygenated blood pulse (mean duration ~4 seconds). The next stage, CAS2 (~6 seconds duration), ended with a burst of delta waves (the so-called 'delta blip', ~1.7 seconds duration), and CAS3 (~20 seconds duration) continued until there was no more evidence of meaningful brain activity (i.e., CAS4, >30 mins duration). These stages reflect an organised series of brain states. First, activity during CAS1 transitions from the anaesthetised state with an increase in high-frequency activity (~130 Hz) across all brain areas. Next, activity settles into a period of low-frequency brain waves during CAS2. Perhaps most surprisingly, during CAS3 recordings were dominated by mid-range gamma activity (brain waves ~35-50 Hz). In further analyses, the authors also demonstrate that this post-mortem brain activity is highly coordinated across brain areas and frequency bands. These are the hallmarks of high-level cognitive activity. In sum, these data suggest that shortly after clinical death, the brain enters a brief state of heightened activity that is normally associated with wakeful consciousness.

Heightened awareness just after death  

Adapted from Fig 2 of Borjigin et al. (2013)
The authors even suggest that the level of activity observed during CAS3 may not only resemble the waking state, but might even reflect a heightened state of conscious awareness similar to the “highly lucid and realer-than-real mental experiences reported by near-death survivors”. This is based on the observation that there is more evidence for consciousness-related activity during this final phase of death than during normal wakeful consciousness. This claim, however, depends critically on their quantification of 'consciousness'. To date, there is no simple index of 'consciousness' that can be reliably measured to infer the true state of awareness. And even if we could derive such a consciousness metric in humans (see here), generalising it to animals could only ever be speculative. Indeed, research in animals can only ever hint at human experience, including near-death experiences.

Nevertheless, as the authors note, this research certainly demonstrates that activity in the dying brain is consistent with active cognitive processing. The results show that a neural explanation for near-death experiences is at least plausible: the authors have identified the right kind of brain activity, but it remains to be verified whether these neural signatures actually relate directly to the subjective experience.

Future directions: The obvious next step is to test whether similar patterns of brain activity are observed in humans after clinical death. Next, it will be important to show that such activity is strongly coupled to near-death experience: for example, does the presence or absence of such activity predict whether or not a person reports a near-death experience? This second step is obviously fraught with technical and ethical challenges (think: Flatliners), but it would provide good evidence linking the neural phenomena to the phenomenal experience.

Key Reference:

Borjigin, Lee, Liu, Pal, Huff, Klarr, Sloboda, Hernandez, Wang & Mashour (2013) Surge of neurophysiological coherence and connectivity in the dying brain. PNAS

Related references:

Tononi G (2012) Integrated information theory of consciousness: An updated account. Arch Ital Biol 150(2-3):56–90.

Auyong DB, et al. (2010) Processed electroencephalogram during donation after cardiac death. Anesth Analg 110(5):1428–1432

Related blogs and news articles:

BBC News
Headquarters Hosted by the Guardian
National Geographic
The Independent