
Saturday, 16 May 2015

What does fMRI measure?

Fig 1. From Kuo, Stokes, Murray & Nobre (2014)
When you say ‘brain activity’, many people first think of activity maps generated by functional magnetic resonance imaging (fMRI; see figure 1). As a non-invasive brain imaging method, fMRI has become the go-to workhorse of cognitive neuroscience. Since the first papers were published in the early 1990s, there has been an explosion of studies using this technique to study brain function, from basic perception to mind-reading for communicating with locked-in patients or detecting lies in criminal investigations. At its best, fMRI provides unparalleled access to detailed patterns of activity in the healthy human brain; at its worst, fMRI can be reduced to an expensive generator of 3-dimensional Rorschach images. To understand the relative strengths and weaknesses of fMRI, it is essential to understand exactly what fMRI measures. Without delving too deeply into the nitty-gritty (see below for further reading), we will cover the basics necessary for understanding the potential and limits of this ever-popular and powerful tool.
“fMRI does not directly measure brain activity”
First and foremost, electricity is the language of the brain. At any moment, there are millions of tiny electrical impulses (action potentials) whizzing around your brain. At synaptic junctions, these impulses release specific chemicals (i.e., neurotransmitters), which in turn modulate the electrical activity in the next cell. This is the fundamental basis for neural communication. Somehow, these processes underpin every thought/feeling/action you have ever experienced. Our challenge is to understand how these electric events give rise to these phenomena of mind.

However, fMRI does not exactly measure electrical activity (compare EEG, MEG and intracranial neurophysiology); rather, it measures the indirect consequences of neural activity (the haemodynamic response). The pathway from neural activity to the fMRI activity map is schematised in figure 2 below:


Fig 2. From Arthurs & Boniface (2002)


Fig 3. From Oxford Sparks
To summarise, let's consider three key principles: 1) neural activity is systematically associated with changes in the relative concentration of oxygen in the local blood supply (figure 3); 2) oxygenated blood has a different magnetic susceptibility relative to deoxygenated blood; 3) changes in the ratio of oxygenated to de-oxygenated blood (the haemodynamic response function; figure 4) can be inferred with fMRI by measuring the blood-oxygen-level-dependent (BOLD) response.
Fig 4. Haemodynamic response function


So fMRI only provides an indirect measure of brain activity. This is not necessarily a bad thing. Your classic thermometer does not directly measure ‘temperature’, but rather the volume of mercury in a glass tube. Because these two parameters are tightly coupled, a well-calibrated thermometer does a nice job of tracking temperature. The problem arises when the coupling is incomplete, noisy or just very complex. For example, the haemodynamic response is probably most tightly coupled to synaptic events rather than action potentials (see here). This means certain types of activity will be effectively invisible to fMRI, resulting in systematic biases (e.g., favouring input and local processing over output neural activity). The extent to which the coupling depends on unknown (or unknowable) variability also limits the extent to which we can interpret the BOLD signal. Basic neurophysiological research is therefore absolutely essential for understanding exactly what we are measuring when we switch on the big scanner. See here for an authoritative review by Logothetis, a great pioneer in the study of the neural basis of fMRI.
“spatial resolution”
Just like your digital camera, a brain scan can be defined by units of spatial resolution. However, because the image is 3D, we call these volumetric pixels, or voxels for short. In a typical scan, each voxel might cover 3 × 3 × 3 mm (27 mm³) of tissue, a volume that would encompass roughly 630,000 neurons in cortex. However, the voxel size only defines the theoretical maximum resolution. In practice, the effective resolution in fMRI also depends on the spatial specificity of the haemodynamic response, as well as more practical considerations such as the degree of head movement during scanning. These additional factors can add substantial spatial distortion or blurring. Despite these limits, there are few methods with superior spatial resolution. Intracranial recordings can measure activity with excellent spatial precision (even isolating activity from single cells), but this invasive procedure is limited to animal models or very specific clinical conditions that require this level of precision for diagnostic purposes (see here). Moreover, microscopic resolution isn't everything. If we zoom in too closely without seeing the bigger picture, there is always the danger of missing the forest for the trees. fMRI provides a good compromise between precision and coverage. Ultimately, we need to bridge different levels of analysis to capitalise on insights that can only be gained with microscopic precision and macroscopic measures that can track larger-scale network dynamics.
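As a rough sense of scale, the voxel arithmetic can be sanity-checked in a couple of lines. The 3 mm voxel side and the cortical density of ~23,000 neurons per mm³ are illustrative assumptions for this back-of-envelope sketch, not figures taken from any particular study:

```python
# Back-of-envelope: how many cortical neurons fall inside one fMRI voxel?
# Both numbers below are rough, illustrative assumptions.
voxel_side_mm = 3.0
voxel_volume_mm3 = voxel_side_mm ** 3      # 3 x 3 x 3 mm = 27 mm^3
neurons_per_mm3 = 23_000                   # rough cortical density estimate

neurons_per_voxel = voxel_volume_mm3 * neurons_per_mm3
print(f"~{neurons_per_voxel:,.0f} neurons per voxel")  # ~621,000
```

With these assumed numbers, a single voxel averages over some six hundred thousand neurons, which is why fMRI is best thought of as a population-level measure.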
 “snapshot is more like a long exposure photograph”
Fig 5. Wiki Commons
Every student in psychology or neuroscience should be able to tell you that fMRI has good spatial resolution (as above), but poor temporal resolution. This is because the haemodynamic response imposes a fundamental limit on the temporal precision of the measurement. Firstly, the peak response is delayed by approximately 4-6 seconds. This in itself doesn’t really matter for offline analysis, because we can simply adjust our analysis to correct for the lag. The real problem is that the response is extended over time. This temporal smoothing makes it difficult to pinpoint the precise moment of activity, so the image actually reflects an average over many seconds. Think of it as a long-exposure photograph (see figure 5), rather than a snapshot of brain activity. This makes it very difficult to study highly dynamic mental processes – fast neural processes are simply blurred. Methods that measure electrical activity more directly have inherently higher temporal resolution (EEG, MEG, intracranial neurophysiology).
“too much data to make sense of”
A standard fMRI experiment generates many thousands of measures in one scan. This is a major advantage of fMRI (mass simultaneous recording), but it raises a number of statistical challenges. Data mining can be extremely powerful; however, the intrepid data explorer will inevitably encounter spurious effects, or false positives (entertain yourself with some fun false positives here).
This is more of an embarrassment of riches than a limit. I don’t believe that you can ever have too much data; the important thing is to know how to interpret it properly (see here). Moreover, the same problem applies to other data-rich measures of brain activity. The solution is not to limit our recordings, but to improve our analysis approaches to the multivariate problem that is the brain (e.g., see here).
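To see why uncorrected statistics across thousands of voxels guarantee spurious effects, here is a minimal null simulation (the voxel count, thresholds and random seed are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 10_000

# A null 'brain': one z score per voxel, generated from pure noise --
# there is no real signal anywhere.
z = rng.standard_normal(n_voxels)

# Uncorrected threshold (p < .05 two-tailed, i.e. |z| > 1.96):
uncorrected = np.abs(z) > 1.96
print(f"{uncorrected.sum()} 'active' voxels -- every one a false positive")

# Bonferroni correction divides alpha by the number of tests:
# p < .05 / 10,000 two-tailed corresponds to roughly |z| > 4.56.
corrected = np.abs(z) > 4.56
print(f"{corrected.sum()} voxels survive correction")
```

On pure noise, around 5% of voxels (several hundred here) pass the uncorrected threshold, while correction for multiple comparisons brings the count back to essentially zero.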
“too many free parameters”
There are many ways to analyse an fMRI dataset, so which do you choose? Many of the available options make sense and can be easily justified, yet different choices generate slightly different results. This dilemma will be familiar to anyone who has ever analysed fMRI data. A recent paper identified 6,912 slightly different paths through the analysis pipeline, resulting in 34,560 different sets of results. By fully exploiting this wiggle room, it should be possible to generate almost any result you would like (see here for further consideration). Although this flexibility is not strictly a limit of fMRI (and certainly not unique to fMRI), it is definitely something to keep in mind when interpreting the fMRI literature. It is important to define the analysis pipeline independently of your research question, rather than trying them all and choosing the one that gives you the ‘best’ result. Otherwise there is a danger that you will only see what you want to see (i.e., circular analysis).
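The combinatorial explosion behind numbers like these is easy to reproduce: a handful of reasonable-looking decisions multiply into hundreds of pipelines. The options below are hypothetical stand-ins, not the exact set enumerated in the paper:

```python
from itertools import product

# Hypothetical analysis decisions (illustrative stand-ins only):
choices = {
    "smoothing_mm":      [0, 4, 8, 12],
    "motion_regressors": ["none", "6-param", "24-param"],
    "temporal_filter":   ["none", "high-pass"],
    "hrf_model":         ["canonical", "canonical+derivatives", "FIR"],
    "autocorrelation":   ["none", "AR(1)"],
}

# Every combination of choices is a distinct, defensible pipeline.
pipelines = list(product(*choices.values()))
print(f"{len(pipelines)} distinct pipelines from only {len(choices)} decisions")
```

Five decisions already yield 144 pipelines; a realistic preprocessing and modelling chain has many more decision points, which is how thousands of paths arise.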
“…correlation, not causation”
It is often pointed out that fMRI can only provide correlational evidence. The same can be said for any other measurement technique. Simply because a certain brain area lights up with a specific mental function, we cannot be sure that the observed activity actually caused the mental event (see here). Only an interference approach can provide such causal evidence. For example, if we ‘knock out’ a specific area (e.g., naturally occurring brain damage, TMS, tDCS, animal ablation studies, optogenetics) and observe a specific impairment in behaviour, then we can infer that the targeted area normally plays a causal role. Although this is strictly correct, it does not necessarily imply that causal methods are better. Neural recordings can provide enormously rich insights into how brain activity unfolds during normal behaviour. In contrast, causal methods allow you to test how the system behaves without a specific area. Because there is likely to be redundancy in the brain (multiple brain areas capable of performing the same function), interference approaches are susceptible to missing important contributions. Moreover, perturbing the neural system is likely to have knock-on effects that are difficult to control for, thereby complicating the interpretation of positive effects. These issues probably deserve a dedicated post in the future. The point for now is simply that neither approach is obviously superior to the other. It depends on the nature of the question.
“…the spectre of reverse inference”
A final point worth raising is the spectre of reverse inference. In an influential review paper, Russ Poldrack outlines the problem:
The usual kind of inference that is drawn from neuroimaging data is of the form ‘if cognitive process X is engaged, then brain area Z is active’. Perusal of the discussion sections of a few fMRI articles will quickly reveal, however, an epidemic of reasoning taking the following form: 

  1. In the present study, when task comparison A was presented, brain area Z was active. 
  2. In other studies, when cognitive process X was putatively engaged, then brain area Z was active. 
  3. Thus, the activity of area Z in the present study demonstrates engagement of cognitive process X by task comparison A. 
This is a ‘reverse inference’, in that it reasons backwards from the presence of brain activation to the engagement of a particular cognitive function.
Reverse inference is not a valid form of deductive reasoning, because there might be other cognitive functions that activate the same brain area. Nevertheless, this general form of reasoning can provide useful information, especially when the function of the particular brain area is relatively specific and particularly well understood. Using accumulated knowledge to interpret new findings is necessary for theory building. However, in the absence of a strict one-to-one mapping between structure and function, reverse inference is best approached from a Bayesian perspective rather than treated as strict logical deduction.
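The Bayesian point can be made concrete with a toy calculation (all probabilities below are invented for illustration): the same observation, 'area Z is active', supports 'process X is engaged' weakly or strongly depending on how selectively Z responds to X.

```python
# Bayes' rule for reverse inference: P(X | Z) from the forward
# inference P(Z | X), the prior P(X), and Z's baseline rate P(Z | not X).
p_x            = 0.5    # prior: chance the task engages process X
p_z_given_x    = 0.8    # area Z activates when X is engaged
p_z_given_notx = 0.6    # ...but Z also activates for many other processes

p_z = p_z_given_x * p_x + p_z_given_notx * (1 - p_x)
weak = p_z_given_x * p_x / p_z
print(f"non-selective area: P(X | Z active) = {weak:.2f}")   # ~0.57

# If area Z were highly selective for X, the same observation is compelling:
p_z_given_notx = 0.05
p_z = p_z_given_x * p_x + p_z_given_notx * (1 - p_x)
strong = p_z_given_x * p_x / p_z
print(f"selective area:     P(X | Z active) = {strong:.2f}")  # ~0.94
```

With a non-selective area, seeing Z active barely moves us beyond the prior; with a highly selective area, the same logic becomes strong evidence, which is why reverse inference works best for well-characterised, functionally specific regions.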

Summary: fMRI is one of the most popular methods in cognitive neuroscience, and certainly the most headline-grabbing. fMRI provides unparalleled access to the patterns of brain activity underlying human perception, memory and action; but like any method, it has important limitations. To appreciate these limits, it is important to understand some of the basic principles of fMRI. We also need to consider fMRI as part of a broader landscape of available techniques, each with its unique strengths and weaknesses (figure 6). The question is not so much ‘is fMRI useful?’ but rather ‘is fMRI the right tool for my particular question?’

Fig 6. from Sejnowski, Churchland and Movshon, 2014, Nature Neuroscience

Further reading:

Oxford Sparks (see below for video demo)


Key references 

Arthurs, O. J., & Boniface, S. (2002). How well do we understand the neural origins of the fMRI BOLD signal? Trends Neurosci, 25(1), 27-31. doi:10.1016/S0166-2236(00)01995-0
Logothetis, N. K. (2008). What we can do and what we cannot do with fMRI. Nature, 453(7197), 869-878. doi:10.1038/nature06976
Poldrack, R. A. (2006). Can cognitive processes be inferred from neuroimaging data? Trends Cogn Sci, 10(2), 59-63. doi:10.1016/j.tics.2005.12.004
Sejnowski, T. J., Churchland, P. S., & Movshon, J. A. (2014). Putting big data to good use in neuroscience. Nat Neurosci, 17(11), 1440-1441.

Fun demonstration from Oxford Sparks:



Friday, 24 April 2015

Research Briefing: organising the contents of working memory

Figure 1. Nicholas Myers
Research Briefing, by Nicholas Myers

Everyone has been in this situation: you are stuck in an endless meeting, and a colleague drones on about a topic of marginal relevance. You begin to zone out and focus on the art hanging in your boss’s office, when suddenly you hear your name mentioned. On high alert, you snap back to the meeting and scramble to retrieve your colleague’s last sentences. Miraculously, you are able to retrieve a few key words – they must have entered your memory a moment ago, but would have been quickly forgotten if hearing your name had not cued them as potentially vital bits of information.

This phenomenon, while elusive in everyday situations, has been studied experimentally for a number of years now: cues indicating the relevance of a particular item in working memory have a striking benefit to our ability to recall it, even if the cue is presented after the item has already entered memory. See our previous Research Briefing on how retrospective cueing can restore information to the focus of attention in working memory.

In a new article, published in the Journal of Cognitive Neuroscience, we describe a recent experiment that set out to add to our expanding knowledge of how the brain orchestrates these retrospective shifts of attention. We were particularly interested in the potential role of neural synchronization of 10 Hz (or alpha-band) oscillations, because they are important in similar prospective shifts of attention.

Figure 2. Experimental Task Design. [from Myers et al, 2014]
We wanted to examine the similarity of alpha-band responses (and other neural signatures of the engagement of attention) to both retrospective and prospective attention shifts, so we needed to come up with a new task that allowed for this comparison. On each trial in our task, experiment volunteers first memorized two visual stimuli. Two seconds later, a second set of two stimuli appeared, so that a total of four stimuli were kept in mind. After a further delay, participants recalled one of the four items.

In between the presentation of the first and the second set of two items, we sometimes presented a cue: this cue indicated which of the four items would likely be tested at the end of the trial. Crucially, this cue could have either a prospective or a retrospective function, depending on whether it pointed to a location where an item had already been presented (a retrospective cue, or retrocue) or to a location where a stimulus was yet to appear (a prospective cue, or precue). This allowed us to examine neural responses to attention-guiding cues that were identical with respect to everything but their forwards- or backwards-looking nature. See Figure 2 for a task schematic.

Figure 3. Results: retro-cueing and pre-cueing
trigger different attention-related ERPs.
[from Myers et al, 2014]
We found marked differences in event-related potential (ERP) profiles between the precue and retrocue conditions. We found evidence that precues primarily generate an anticipatory shift of attention toward the location of an upcoming item: potentials just before the expected appearance of the second set of stimuli reflected the location where volunteers were attending. These included the so-called early directing attention negativity (or 'EDAN') and the late directing attention-related positivity (or 'LDAP'; see Figure 3, middle panel; and see here for a review of attention-related ERPs). Retrocues elicited a different pattern of ERPs that was compatible with an early selection mechanism, but not with stimulus anticipation (i.e., no LDAP, see Figure 3, upper panel). The latter seems plausible, since the cued information was already in memory, and upcoming stimuli were therefore not deserving of attention. In contrast to the distinct ERP patterns, alpha band (8-14 Hz) lateralization was indistinguishable between cue types (reflecting, in both conditions, the location of the cued item; see Figure 4).

Figure 4. Results: retro-cueing and pre-cueing trigger similar patterns
of de-synchronisation in low-frequency activity (alpha band at ~10 Hz).
[from Myers et al, 2014]
What did we learn from this study? Taken together with the ERP results, it seems that alpha-band lateralization can have two distinct roles: after a precue it likely enables anticipatory attention. After a retrocue, however, the alpha-band response may reflect the controlled retrieval of a recently memorized piece of information that has turned out to be more useful than expected, without influencing the brain’s response to upcoming stimuli.

It seems that our senses are capable of storing a limited amount of information on the off chance that it may suddenly become relevant. When this turns out to be the case, top-down control allows us to pick out the relevant information from among all the items quietly rumbling around in sensory brain regions.

Many interesting questions remain that we were not able to address in this study. For example, how do cortical areas responsible for top-down control activate in response to a retrocue, and how do they shuttle cued information into a state that can guide behaviour? 



Key Reference: 

Myers, Walther, Wallis, Stokes & Nobre (2014) Temporal Dynamics of Attention during Encoding versus Maintenance of Working Memory: Complementary Views from Event-related Potentials and Alpha-band Oscillations. Journal of Cognitive Neuroscience (Open Access)

Tuesday, 13 August 2013

In the News: Death wave

Near-death Experience
(Wiki Commons)
Can neuroscience shed light on one of life's biggest mysteries - death? In a paper just published in PNAS, researchers describe a surge of brain activity just moments before death. This raises the fascinating possibility that they have identified the neural basis for near death experiences.

First, to put this research into context, death-related brain activity was examined in rats, not humans. For obvious reasons, it is easier to study the death process in animals than in humans. In this study, nine rats were implanted with electrodes in various brain regions, anaesthetised, and then 'euthanized' (i.e., killed). The exact moment of death was identified as the last regular heartbeat (clinical death). The electroencephalogram (EEG) was recorded during the normal waking phase, under anaesthesia and after cardiac arrest (i.e., after death) from right and left frontal (RF/LF), parietal (RP/LP) and occipital (RO/LO) cortex (see figure below). Data shown in Panel A range from about 1 hr before death to 30 min afterwards. At this coarse scale you can see some patterns in the waking data that generally reflect high-frequency brain activity (gamma band, >40 Hz). During anaesthesia, activity becomes synchronised at lower frequency bands (especially the delta band: 0.1–5 Hz), but everything seems to flatline after cardiac arrest. However, if we zoom in on the moments just after death (Panels B and C), we can see that the death process actually involves a sequence of structured stages, including a surge of high-frequency brain activity that is normally associated with wakefulness.


Adapted from Fig 1 of Borjigin et al. (2013)

In the figure above, Panel B shows brain activity zoomed in at 30 min after death, and Panel C provides an even closer view, with activity from each brain area overlaid in a different colour. The authors distinguish four cardiac arrest stages (CAS). CAS1 reflects the time between the last regular heartbeat and the loss of the oxygenated blood pulse (mean duration ~4 seconds). The next stage, CAS2 (~6 seconds duration), ended with a burst of delta waves (the so-called 'delta blip', ~1.7 seconds duration), and CAS3 (~20 seconds duration) continued until there was no more evidence of meaningful brain activity (i.e., CAS4, >30 min duration). These stages reflect an organized series of brain states. First, activity during CAS1 transitions from the anaesthetised state with an increase in high-frequency activity (~130 Hz) across all brain areas. Next, activity settles into a period of low-frequency brain waves during CAS2. Perhaps most surprisingly, recordings during CAS3 were dominated by mid-range gamma activity (brain waves at ~35-50 Hz). In further analyses, the authors also demonstrate that this post-mortem brain activity is highly coordinated across brain areas and different frequency bands. These are the hallmarks of high-level cognitive activity. In sum, these data suggest that shortly after clinical death, the brain enters a brief state of heightened activity that is normally associated with wakeful consciousness.

Heightened awareness just after death  

Adapted from Fig 2 of Borjigin et al. (2013)
The authors even suggest that the level of activity observed during CAS3 may not only resemble the waking state, but might even reflect a heightened state of conscious awareness similar to the “highly lucid and realer-than-real mental experiences reported by near-death survivors”. This is based on the observation that there is more evidence for consciousness-related activity during this final phase of death than during normal wakeful consciousness. This claim, however, depends critically on their quantification of 'consciousness'. To date, there is no simple index of 'consciousness' that can be reliably measured to infer the true state of awareness. And even if we could derive such a consciousness metric in humans (see here), generalising it to animals could only ever be speculative. Indeed, research in animals can only ever hint at human experience, including near-death experiences.

Nevertheless, as the authors note, this research certainly demonstrates that activity in the brain is consistent with active cognitive processing. The results demonstrate that a neural explanation for these experiences is at least plausible. They have identified the right kind of brain activity for a neural explanation of near-death experiences, yet it remains to be verified whether these signatures do actually relate directly to the subjective experience.

Future directions: The obvious next step is to test whether similar patterns of brain activity are observed in humans after clinical death. Next, it will be important to show that such activity is strongly coupled to near-death experience. For example, does the presence or absence of such activity predict whether or not the person would report a near-death experience? This second step is obviously fraught with technical and ethical challenges (think: Flatliners), but would provide good evidence linking the neural phenomena to the phenomenal experience.

Key Reference:

Borjigin, Lee, Liu, Pal, Huff, Klarr, Sloboda, Hernandez, Wang & Mashour (2013) Surge of neurophysiological coherence and connectivity in the dying brain. PNAS

Related references:

Tononi G (2012) Integrated information theory of consciousness: An updated account. Arch Ital Biol 150(2-3):56–90.

Auyong DB, et al. (2010) Processed electroencephalogram during donation after cardiac death. Anesth Analg 110(5):1428–1432.

Related blogs and news articles:

BBC News
Headquarters Hosted by the Guardian
National Geographic
The Independent

Tuesday, 31 July 2012

Research Meeting: Visual Search and Selective Attention

Just returned from a really great meeting at a scenic lakeside location (“Ammersee”) near Munich, Germany. The third Visual Search and Selective Attention symposium was hosted and organised by Hermann Müller and Thomas Geyer (Munich), and supported by the Munich Center for Neurosciences (MCN) and the German Science Foundation (DFG). The stated aim of the meeting was:
"to foster an interdisciplinary dialogue in order to identify important shared issues in visual search and selective attention and discuss ways of how these can be resolved using convergent methodologies: Psychophysics, mental chronometry, eyetracking, ERPs, source reconstruction, fMRI, investigation of (neuropsychological) impairments, TMS and computational modeling."
The meeting was held over three days, and organised by four general themes:

- Pre-attentive and post-selective processing in visual search (Keynote: Hermann Müller)
- The role of (working) memory guidance in visual search (Keynote: Chris Olivers, Martin Eimer)
- Brain mechanisms of visual search (Keynote: Glyn Humphreys)
- Modelling visual search (Keynote: Jeremy Wolfe).

Poster sessions gave grad students (including George Wallis and Nick Myers) a great chance to chat about their research with the invited speakers as well as other students tackling similar issues. 

Of course, a major highlight was the Bavarian beer. Soeren Kyllingsbaek was still to give his talk, presumably explaining the small beer in hand!

More photos of the meeting can be found here.

***New***

All presentations can be downloaded from here

Monday, 18 June 2012

In the news: Mind Reading

Mind reading tends to capture the headlines. And these days we don't need charlatan mentalists to perform parlour tricks before a faithful audience - we now have true scientific mind reading. Modern brain imaging tools allow us to read the patterns of brain activity that constitute mind... well, sort of. I thought to write this post in response to a recent Nature News Feature on research into methods for reading the minds of patients without any other means of communication. In this post, I consider what modern brain imaging brings to the art of mind reading.

Mind reading as a tool for neuroscience research



First, it should be noted that almost any application of brain imaging in cognitive neuroscience can be thought of as a form of mind reading. Standard analytic approaches test whether we can predict brain activity from changes in cognitive state (e.g., in statistical parametric mapping). It is straightforward to turn this equation around to predict mental state from brain activity. With this simple transformation, the huge majority of brain imaging studies are doing mind reading. Moreover, a class of analytic methods known as multivariate (or multivoxel) pattern analysis has come even closer to mind reading for research purposes. Essentially, these methods rely on a two-stage procedure. The first step is to learn which patterns of brain activity correspond to which cognitive states. Next, these learned relationships are used to predict the cognitive state associated with new patterns of brain activity. This train/test procedure is, strictly speaking, "mind reading", but essentially as a by-product.
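The two-stage train/test logic can be sketched with a toy decoder on simulated 'voxel patterns'. This is a minimal nearest-mean classifier on made-up data, not the pipeline of any actual study:

```python
import numpy as np

rng = np.random.default_rng(1)

# 200 simulated 'trials' of two cognitive states, each a 50-voxel
# activity pattern with a weak state-dependent signal buried in noise.
n_trials, n_voxels = 200, 50
labels = rng.integers(0, 2, n_trials)                     # state A or B
signal = 0.5 * np.outer(labels, rng.standard_normal(n_voxels))
patterns = signal + rng.standard_normal((n_trials, n_voxels))

# Stage 1: learn which pattern goes with which state on half the data.
train, test = np.arange(100), np.arange(100, 200)
mean_a = patterns[train][labels[train] == 0].mean(axis=0)
mean_b = patterns[train][labels[train] == 1].mean(axis=0)

# Stage 2: 'read the mind' of held-out trials via the nearest mean pattern.
d_a = ((patterns[test] - mean_a) ** 2).sum(axis=1)
d_b = ((patterns[test] - mean_b) ** 2).sum(axis=1)
predicted = (d_b < d_a).astype(int)
accuracy = (predicted == labels[test]).mean()
print(f"decoding accuracy on held-out trials: {accuracy:.0%}")
```

Because the mapping is learned on one half of the data and tested on the other, above-chance accuracy on the held-out trials shows the patterns genuinely carry state information, rather than the classifier memorising noise.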

In fact, the main advantage of this form of mind reading in research neuroscience is that it provides a powerful method for exploring how complex patterns in brain data vary with the experimental condition. Multivariate analysis can also be performed the other way around (by predicting brain activity from behaviour, see here), and similarly, there is no reason why train-test procedures can't be used for univariate analyses. In this type of research, the purpose is not actually to read the mind of cash-poor undergraduates who tend to volunteer for these experiments, but rather to understand the relationship between mind and brain.

Statistical methods for prediction provide a formal framework for this endeavour, and although they are a form of mind reading, they are unlikely to capture the popular imagination once the finer details are explained. Experiments may sometimes get dressed up like a mentalist's parlour trick (e.g., "using fMRI, scientists could read the contents of consciousness"), but such hype invariably leaves those who actually read the scientific paper a bit disappointed by the more banal reality (e.g., "statistical analysis could predict significantly above chance whether participants were seeing a left- or right-tilted grating"... hardly the Jedi mind trick, but very cool from a neuroscientific perspective). In those who didn't read the paper but have an active imagination, the same hype may instead feed paranoid conspiracy theories.

Mind reading as a tool for clinical neuroscience


So, in neuroscientific research, mind reading is most typically used as a convenient tool for studying mind-brain relationships. However, the ability to infer mental states from brain activity has some very important practical applications. For example, in neural prosthesis, internal thoughts are decoded by "mind reading" algorithms to control external devices (see previous post here). Mind reading may also provide a vital line of communication to patients who are otherwise completely unable to control any voluntary movement.

Imagine you are in an accident. You suffer serious brain damage that leaves you with eye blinking as your only voluntary movement for communicating with the outside world. That's bad, very bad in fact - but in time you might perfect this new form of communication, and eventually you might even write a good novel, with sufficient blinking and heroic patience. But now imagine that your brain damage is just a little bit worse, and now you can't even blink your eyes. You are completely locked in, unable to show the world any sign of your conscious existence. To anyone outside, you appear completely without a mind. But inside, your mind is active. Maybe not as sharp and clear as it used to be, but still alive with thoughts, feelings, emotions, hopes and fears. Now mind reading, at any level, becomes more than just a parlour trick.
"It is difficult to imagine a worse experience than to be a functioning mind trapped in a body over which you have absolutely no control" Prof Chris Frith, UCL [source here]
As a graduate student in Cambridge, I volunteered as a control participant in a study conducted by Adrian Owen to read mental states with fMRI for just this kind of clinical application (since published in Science). While I lay in the scanner, I was instructed to either imagine playing tennis or to spatially navigate around a familiar environment. The order was up to me, but it was up to Adrian and his group to use my brain response to predict which of these two tasks I was doing at any given time. I think I was quite bad at spatially navigating, but whatever I did inside my brain was good enough for the team to decode my mental state with remarkable accuracy.

Once validated in healthy volunteers (who, conveniently enough, can reveal which task they were doing inside their head, thus the accuracy of the predictions can be confirmed), Adrian and his team then applied this neuroscientific knowledge to track the mental state of a patient who appeared to be in a persistent vegetative state. When they asked her to imagine playing tennis, her brain response looked just like mine (and other control participants), and when asked to spatially navigate, her brain looked just like other brains (if not mine) engaged in spatial navigation.

In this kind of study, nothing very exciting is learned about the brain, but something else extremely important has happened: someone has been able to communicate for the first time since being diagnosed as completely non-conscious. Adrian and his team have further provided proof-of-principle that this form of mind reading can be applied in other patients to test their level of conscious awareness (see here). By following the instructions, some patients were able to demonstrate for the first time a level of awareness that was previously completely undetected. In one further example, they even show that this brain signal can be used to answer some basic yes/no questions.

This research has generated an enormous amount of scientific, clinical and public interest [see his website for examples]. As quoted in a recent Nature News Feature, Adrian has since been "awarded a 7-year Can$10-million Canada Excellence Research Chair and another $10 million from the University of Western Ontario" and "is pressing forward with the help of three new faculty members and a troop of postdocs and graduate students". Their first goal is to develop cheaper and more effective means of using non-invasive methods like fMRI and EEG to restore communication. However, one could also imagine a future for invasive recording methods. Bob Knight's team in Berkeley have been using electrical recordings made directly from the brain surface to decode speech signals (see here for a great summary in the Guardian by Ian Sample). Presumably, this kind of method could be considered for patients identified as partially conscious.

See also an interesting interview with Adrian by Mo Costandi in the Guardian

References:
Monti et al. (2010). Willful modulation of brain activity in disorders of consciousness. New England Journal of Medicine
Owen et al. (2006). Detecting awareness in the vegetative state. Science
Pasley et al. (2012). Reconstructing speech from human auditory cortex. PLoS Biology

Monday, 7 May 2012

Research Briefing: How memory influences attention

Background


In the late 19th Century, the great polymath Hermann von Helmholtz eloquently described how our past experiences shape how we see the world. Given the optical limitations of the eye, he concluded that the rich experience of vision must be informed by a lot more than meets the eye. In particular, he argued that we use our past experiences to infer the perceptual representation from the imperfect clues that pass from the outside world to the brain. 


Consider the degraded black and white image below. It is almost impossible to interpret, until you learn that it is a Dalmatian. Now it is almost impossible not to see the dog in dappled light.

More than one hundred years after Helmholtz, we are now starting to understand the brain mechanisms that mediate this interaction between memory and perception. One important direction follows directly from Helmholtz's pioneering work. Often couched in more contemporary language, such as Bayesian inference, vision scientists are beginning to understand how our perceptual experience is determined by the interaction between sensory input and our perceptual knowledge established through past experience in the world.

Prof Nobre (cognitive neuroscientist, University of Oxford) has approached this problem from a slightly different angle. Rather than ask how memory shapes the interpretation of sensory input, she took one step back to ask how past experience prepares the visual system to process memory-predicted visual input. With this move, Nobre's research draws on a rich history of cognitive neuroscientific research in attention and long-term memory. 

Although both attention and memory have been thoroughly studied in isolation, very little is actually known about how these two core cognitive functions interact in everyday life. In 2006, Nobre and colleagues published the results of a brain imaging experiment designed to identify the brain areas involved in memory-guided attention (Summerfield et al., 2006, Neuron). Participants in this experiment first studied a large number of photographs depicting natural everyday scenes. The instruction was to find a small target object embedded in each scene, very much like the classic Where's Wally game.


After performing the search task a number of times, participants learned the location of the target in each scene. When Nobre and her team tested their participants again on a separate day, they found that people were able to use the familiar scenes to direct attention to the previously learned target location in each scene.


Next, the research team repeated this experiment, but this time changes in brain activity were measured in each participant while they used their memories to direct the focus of their attention. With functional magnetic resonance imaging (fMRI), the team found an increase in neural activity in brain areas associated with memory (especially the hippocampus) as well as a network of brain areas associated with attention (especially parietal and prefrontal cortex). 

This first exploration of memory-guided attention (1) confirmed that participants can use long-term memory to guide attention, and (2) further suggested that the brain areas that mediate long-term memory could interact with attention-related areas to support this coalition. However, due to methodological limitations at the time, there was no way to separate activity associated with memory-guided preparatory attention from the consequences of past experience on perception (e.g., Helmholtzian inference). This was the aim of our follow-up study.

The Current Study: Design and Results 


In collaboration with Nobre and colleagues, we combined multiple brain imaging methods to show that past experience can change the activation state of visual cortex in preparation for memory-predicted input (Stokes, Atherton, Patai & Nobre, 2012, PNAS). Using electroencephalography (EEG), we demonstrated that memories can reduce inhibitory neural oscillations in visual cortex at memory-specific spatial locations.

With fMRI, we further showed that this change in electrical activity is also associated with an increase in activity in the brain areas that represent the memory-predicted spatial location. Together, these results provide key convergent evidence that past experience alone can shape activity in visual cortex to optimise processing of memory-predicted information.


Finally, we were also able to provide the most compelling evidence to date that memory-guided attention is mediated via the interaction between processing in the hippocampus, prefrontal and parietal cortex. However, further research is needed to verify this speculation. In particular, we cannot yet confirm whether activation of the attention network is necessary for memory-guided preparation of visual cortex, or whether a direct pathway between the hippocampus and visual cortex is sufficient for the changes in preparatory activity observed with fMRI and EEG. This is now the focus of on-going research.




Saturday, 28 April 2012

Research Grant to Explore Fluid Intelligence


Thank you to the British Academy for awarding John Duncan and myself research funds to test a key hypothesis in the cognitive neuroscience of human performance: is prefrontal cortex necessary for fluid intelligence?

We will use non-invasive brain stimulation (transcranial magnetic stimulation: TMS) to temporarily ‘deactivate’ the prefrontal cortex, and then measure the consequences for performance on standard tests of fluid intelligence. It is a relatively simple experimental design, but if done correctly, the results should provide important and novel insights into the brain mechanisms underlying one of the most important human faculties: flexible reasoning and problem solving.

My co-investigator, John Duncan, gained his reputation in the cognitive neuroscience of intelligence with his seminal brain imaging study published in Science. This research demonstrated that when people perform tasks that tax fluid intelligence, neural activity increases in the prefrontal cortex relative to control tasks that require less fluid intelligence.


This result suggests that the prefrontal cortex is involved in fluid intelligence - but of course, as every undergraduate in psychology/cognitive neuroscience should be able to tell you, brain imaging alone cannot tell us whether the activated brain area is in fact necessary for performing the task.


So, to verify the causal role of the prefrontal cortex, Duncan and colleagues next examined stroke patients (published in PNAS). The logic here is simple: does damage to the prefrontal cortex reduce fluid intelligence? But the methodology is not so simple. Of particular importance, how can you tell whether a patient has low IQ because of the brain damage, or whether they were always a low IQ individual?

Duncan's team tackled this problem by estimating pre-damage fluid intelligence from scores on other tests that measure so-called crystallised intelligence (e.g., vocabulary and general knowledge). Critically, crystallised intelligence reflects lifelong achievements that depend on fluid intelligence during acquisition, and therefore can be used to approximate pre-damage fluid intelligence. If the prefrontal cortex is especially important for fluid intelligence, then damage should result in a disparity between fluid and crystallised intelligence. Indeed, this is what they found.
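The logic of that comparison can be sketched in a few lines: predict the fluid IQ you would expect from a person's crystallised scores, then measure the shortfall. The linear relation and every number below are invented for illustration; the actual study used standardised tests and properly normed predictions.

```python
# Sketch of the fluid/crystallised discrepancy logic.
# The regression (slope, intercept) and all scores are hypothetical,
# not taken from the actual study.

def predicted_fluid_iq(crystallised_iq, slope=0.8, intercept=20.0):
    """Expected fluid IQ given crystallised IQ, from a (hypothetical)
    regression estimated in healthy controls."""
    return slope * crystallised_iq + intercept

def fluid_deficit(crystallised_iq, measured_fluid_iq):
    """Positive values mean fluid IQ is lower than crystallised scores
    predict: the signature expected after prefrontal damage."""
    return predicted_fluid_iq(crystallised_iq) - measured_fluid_iq

# A hypothetical patient: normal crystallised IQ (110, predicting a
# fluid IQ of 108), but a measured fluid IQ of 90 after the stroke.
print(fluid_deficit(110, 90))  # → 18.0
```

On this logic, a large positive discrepancy in patients with prefrontal damage, but not in patients with damage elsewhere, points to a special role for prefrontal cortex in fluid intelligence.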


As developed in his popular science book, "How Intelligence Happens", Duncan suggests that the prefrontal cortex is essential for flexible structured cognitive processing, a key ingredient to fluid intelligence. If this theory is correct, then temporary deactivation of the prefrontal cortex should impair fluid intelligence. If not, then we need to rethink this working hypothesis. 

What will these results tell us? Are we just heading back to 19th Century phrenology – associating discrete brain areas with complex high order human traits that are more like sociocultural inventions than principled neurocognitive constructs? Do we then plan to localise creativity here, insight there, and perhaps a little bit of moral judgment over here? 


Of course, I don't think this is modern day phrenology. Rather, I would argue that this research could provide key insights into the fundamental cognitive neuroscience of this important brain area. From a theoretical perspective, we can attempt to decompose the underlying processes for fluid intelligence, and relate these to the neurophysiological principles of prefrontal function. Intelligence is not mystical or intractable. It is a specific cognitive process that we can measure, and must have a neurological basis that is an important target for cognitive neuroscience.

However, we must also be careful about how these results could be interpreted. Intelligence is a particularly sensitive area. The very concept of fluid intelligence often takes on more than it should - a reflection of the fundamental worth or even moral character of the individual.

Obviously there is some danger in reducing one of the most important cognitive mechanisms to a single number (e.g., intelligence quotient: IQ), which we can then compare between individuals and between groups. It is a dangerous business that can be exploited for any number of nefarious agendas. For example, we can try to confirm our own racist or sexist prejudices, conjuring up a biological, and therefore 'scientific', excuse for beliefs that are motivated by simple bigotry (recall the recent Watson controversy?). Conversely, the same logic could be used to pursue an equality agenda. This could also be a dangerous path to follow - what if we are not all equal in ability? I see no a priori reason that there should not be group differences in any measure, including IQ. It is simply an empirical issue, and therefore a risky business to stake our sense of equality on equal ability.

IQ is certainly a loaded concept. Recently, I was speaking with a mathematician and historian about an advert they saw for a brain imaging study comparing IQ between academics from the sciences and humanities. The historian was intrigued, and eager to participate, whereas the mathematician was much more reluctant. I guess the risk of a lower-than-hoped-for score is quite disconcerting when your very livelihood depends on an almost mythical concept of pure intelligence, or better still - genius.

An anecdote comes to mind of a researcher who was to be the first subject in an fMRI study of IQ conducted by his colleagues. Being scanned can sometimes make people nervous the first time, but this was a seasoned neuroscientist, no stranger to the confined and uncomfortable space of an MRI. Rather, what made this no ordinary scanning experience was the fact that his respected colleagues were watching from the control room, monitoring his responses to the IQ task. Enough to make any academic uncomfortable!

This kind of awkwardness raises an important practical issue for us. Like many cognitive neuroscientists, I often rely on friends and colleagues to participate in my experiments, especially students, academic visitors, and post-doctoral researchers. Obviously, one could easily imagine some tension arising in a lab that has tested everyone's IQ. This could be particularly worrying for the more senior amongst us, as fluid intelligence is negatively correlated with age. We would not want to upset the natural order of the academic hierarchy!


Anyway, I will keep you posted how we get on with the project.