
Sunday, 31 May 2015

What does MEG measure?

This is a guest post by Lev Tankelevitch, one of my PhD students. He is currently using MEG to explore reward-guided attention at the Oxford Centre for Human Brain Activity. This article is also cross-posted at the Brain Metrics blog.


In 1935, Hans Berger wrote in one of his seminal reports on the electroencephalogram (EEG), addressing the controversy surrounding the origin of the then-unbelievable electrical potentials he had recorded from the human scalp:




Fig. 1. Hans Berger and his early EEG recordings
from the 1930s. Adapted from Wiki Commons.
"I disagree with the statement of the English investigators that the EEG originates exclusively in the occipital lobe. The EEG originates everywhere in the cerebral cortex...In the EEG a fundamental function of the human cerebrum intimately connected with the psychophysical processes becomes visible manifest." (see here for a history of Hans Berger and the EEG)
Fig. 2. The forward and inverse problems

Decades later, the correctness of his position is both a blessing and a curse - we now know that the entire brain produces EEG signals, but it has been a struggle to match components of the EEG to their specific sources in the brain, and thus to further our understanding of how exactly the functioning of the brain relates to those psychophysical processes with which Berger was so enthralled. This struggle is best summarised as an inverse problem, in which one begins with a set of observations (e.g., EEG signals) and has to work backwards to try to calculate what caused them (e.g., neural activity in a specific brain region). A massive obstacle to this approach is the fact that as electrical signals pass from the brain to the scalp they become heavily distorted by the skull. This distortion makes it exceedingly difficult to try to reconstruct the underlying sources in the brain.

In 1969, the journey to understand the electrical potentials of the brain took an interesting and fruitful detour when David Cohen, a physicist working at MIT, became the first to confidently measure the incredibly tiny magnetic fields produced by the heart's electrical signals (see here for a talk by David Cohen on the origins of MEG). To do this, he constructed a shielded room, blocking interference from the overwhelming magnetic fields generated by the earth itself and by other electrical devices in the vicinity, effectively closing the door on a cacophony of voices to carefully listen to a slight

Fig. 3. Comparisons of magnetic field strengths
on a logarithmic scale. From Vrba (2002).
whisper. His shielding technique became central to the advent of magnetoencephalography (MEG), which measures the yet even quieter magnetic fields generated by the brain's electrical activity.

This approach of recording the brain's magnetic fields, rather than the electrical potentials themselves, was advanced even further by James Zimmerman and others working at the Ford Motor Company, where they developed the SQUID, or superconducting quantum interference device. A SQUID is an extremely sensitive magnetometer, operating on principles of quantum physics beyond the scope of this article, able to detect precisely those very tiny magnetic fields produced by the brain. To appreciate the contributions of magnetic shielding and SQUIDs to magnetoencephalography, consider that the earth's magnetic field, the one acting on your compass needle, is at least 200 million times the strength of the fields generated by your brain trying to read that very same
compass.


Fig. 4. A participant being scanned inside a MEG scanner.
From OHBA.

A MEG scanner is a large machine allowing participants to sit upright. As its centrepiece, it contains a helmet populated with many hidden SQUIDs cooled at all times by liquid helium. Typical scanners contain about 300 sensors covering the entirety of the scalp. These sensors include magnetometers, which measure magnetic fields directly, and gradiometers, which are pairs of magnetometers placed at a small distance from each other, measuring the difference in magnetic field between their two locations (hence "gradient" in the name). This difference measure subtracts out large and distant sources of magnetic noise (such as earth's magnetic field), while remaining sensitive to local sources of magnetic fields (such as those emanating from the brain). Due to their positioning, magnetometers and gradiometers also provide complementary information about the direction of magnetic fields.
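To get an intuition for why the gradiometer's difference measure works, here is a minimal numerical sketch (the source strengths, distances, and the simple cube-law fall-off are all made up for illustration, not real sensor geometry): a distant noise source produces almost the same field at both coils and so cancels in the difference, while a nearby cortical source does not.

```python
import numpy as np

# Dipole-like fields fall off roughly with the cube of distance.
# Illustrative (made-up) source strengths and distances in metres.
def field(strength, distance):
    return strength / distance**3

baseline = 0.02                            # 2 cm separation between the two gradiometer coils
brain_strength, brain_dist = 1e-13, 0.03   # nearby cortical source, ~3 cm away
noise_strength, noise_dist = 1e-4, 10.0    # distant source (e.g. a passing car), ~10 m away

brain_low, brain_high = field(brain_strength, brain_dist), field(brain_strength, brain_dist + baseline)
noise_low, noise_high = field(noise_strength, noise_dist), field(noise_strength, noise_dist + baseline)

# A magnetometer sees everything; a gradiometer sees only the difference between its coils
print(f"magnetometer: brain = {brain_low:.2e}, noise = {noise_low:.2e}")
print(f"gradiometer : brain = {brain_low - brain_high:.2e}, noise = {noise_low - noise_high:.2e}")
# The distant source dominates the single coil but nearly cancels in the difference.
```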

Given that these magnetic fields occur simultaneously with electrical activity, MEG is afforded the same millisecond resolution as EEG, allowing one to examine neural activity at its natural temporal resolution. This is in contrast to functional magnetic resonance imaging, fMRI, which, using magnetic fields as a tool rather than a target of measurement, actually measures changes in blood oxygenation which occur on the order of seconds, making it impossible to effectively pinpoint the timing of neural activity (see here). Another advantage over fMRI is the fact that electromagnetic signals are more directly related to the underlying neural activity than the haemodynamic response, which may differ across brain regions, clinical populations, or with respect to drug effects, thereby complicating interpretations of observed effects. Unlike the electrical potentials measured in EEG, however, the magnetic fields measured in MEG pass from the brain through the skull in a relatively undisturbed manner, substantially simplifying the inverse problem. In these ways, MEG offers a non-invasive technique that combines high temporal resolution with improved source localisation within the human brain.
What exactly do those tiny magnetic fields reflect about brain activity? When a neuron receives communication from a neighbour, an excitatory or inhibitory postsynaptic potential (EPSP or IPSP) is generated in the neuron's dendrites, causing that local dendritic membrane to become
Fig. 5. The source of recorded magnetic
fields in MEG. Adapted from Hansen et al. (2010)

transiently depolarised relative to the body of the neuron. This difference in potential generates a current flow both inside and outside the neuron, which creates a magnetic field. One such event, however, is still insufficient to generate a magnetic field large enough to be detected even by the mightiest of SQUIDs, so it is thought that the fields measured in MEG are the result of at least 50,000 neurons simultaneously experiencing EPSPs or IPSPs within a certain region. Unfortunately, current technology and analysis methods are limited to detecting magnetic fields generated along the cortex, the bit of the brain closest to the scalp. Fields generated in deeper cortical and subcortical areas rapidly dissipate as they travel much longer distances through the brain. To complicate things further, we have to remember that magnetic fields obey Ampère's right-hand rule, which states that if a current flows in the direction of the thumb in a "thumbs-up" gesture of the right hand, the generated magnetic field will flow perpendicularly to the thumb, in the direction of the fingers. This means that only neurons oriented tangentially along the skull surface generate magnetic fields which radiate outwards towards the skull to be measured at the surface. Fortunately, mother nature has cut scientists some slack here, as the pervasive folding pattern (gyrification) of the brain's cortex provides us with plenty of neurons arranged in the direction useful for MEG measurement. The cortex alone is enough to keep scientists busy, and findings from fMRI and direct electrophysiological recordings from non-human animals provide complementary information about the world underneath the cortex, and how it may all fit together.

At the end of a long and arduous MEG scanning session, one is left with about 300 individual time series, typically recorded at 1000 Hz, reflecting tiny changes in magnetic fields driven by neural activity presumably occurring in response to some cognitive task. Although the shielded room blocks out magnetic interference from other electrical devices (and all equipment inside the room works through optical fibres), there is still massive interference from the subject's heart and any other muscle activity around the head. For this reason, participants are typically instructed to limit eye movements and blinking, and any remaining artefactual noise in the data (i.e., anything not thought to be brain activity) is taken out at the analysis stage using techniques like independent component analysis.
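To illustrate the idea behind this kind of artefact removal, here is a minimal sketch using FastICA from scikit-learn on simulated data (the signals, the mixing, and the simple rule for spotting the "blink" component are all invented for illustration; real pipelines typically use dedicated MEG/EEG software such as MNE-Python):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.arange(0, 10, 0.001)                     # 10 s sampled at 1000 Hz

# Simulated sources: a 10 Hz "brain" rhythm and a large, spiky eye-blink artefact
brain = np.sin(2 * np.pi * 10 * t)
blinks = (np.sin(2 * np.pi * 0.3 * t) > 0.95).astype(float) * 5.0

sources = np.c_[brain, blinks]
mixing = rng.normal(size=(2, 4))                # project the 2 sources onto 4 "sensors"
sensors = sources @ mixing + 0.05 * rng.normal(size=(len(t), 4))

# Unmix with ICA, identify the blink component, zero it out, and reproject
ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(sensors)         # shape: (samples, components)
blink_idx = np.argmax(np.abs(components).max(axis=0))   # crude rule: blinks have extreme peaks
components[:, blink_idx] = 0.0
cleaned = ica.inverse_transform(components)

print("sensor variance before cleaning:", sensors.var(axis=0).round(2))
print("sensor variance after cleaning :", cleaned.var(axis=0).round(2))
```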


Fig. 6. Raw MEG data (left), and event-related
fields in sensor space and source space (right).
Adapted from Schoenfeld et al. (2014).

Analysis of MEG data can be done in sensor space, in which one simply looks at how the signals at individual sensors change during different parts of a cognitive task. This provides a rough estimate of the activation patterns along the cortex. The perk of MEG, however, is the ability to project data recorded in the 300 sensors to source space, and effectively estimate where in the brain these signals may originate. Although this is certainly more feasible in MEG than EEG, the inverse problem is actually a fundamental issue for both types of extracranial recording (we don't have this problem when measuring directly from the brain during intracranial recording). One way to narrow down which possible activation regions in the brain could underlie the observed magnetic fields is to establish certain assumptions about what we expect brain activity to look like in general, and how that activity is translated into the signal measured at the scalp. Such assumptions are more reasonable in MEG than EEG due to the higher fidelity of magnetic fields as they pass from the brain to scalp.
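To give a flavour of what source estimation involves, here is a toy sketch of the inverse problem (everything here is invented: the "lead field" mapping sources to sensors is a random matrix rather than one derived from a head model, and the regularised least-squares step only captures the spirit of the minimum-norm-style estimates used in practice):

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_sources = 300, 2000        # far more candidate sources than sensors

# Toy "lead field": how a unit of activity at each source projects onto the sensors.
# In real analyses this comes from a model of the participant's head, not randomness.
L = rng.normal(size=(n_sensors, n_sources))

# Simulate a small cluster of active sources and the sensor data they produce
true_activity = np.zeros(n_sources)
true_activity[1200:1210] = 1.0
measured = L @ true_activity + 0.1 * rng.normal(size=n_sensors)

# The problem is underdetermined (2000 unknowns, 300 measurements), so we add a
# constraint: among all source patterns consistent with the data, prefer the one
# with the least overall power (Tikhonov-regularised least squares).
lam = 1e2
estimate = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), measured)

print("simulated active sources: 1200-1209")
print("largest estimated sources:", np.sort(np.argsort(np.abs(estimate))[-10:]))
# Most of the largest estimates should fall in or near the simulated cluster.
```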




Fig. 7. Neural activation is smooth, forming
clusters of active neurons. Adapted from Wiki Commons.
For example, neural activation in the brain is assumed to be smooth. Imagine all the active neurons in a brain at a single point in time as stars in the sky: smooth activation would mean that the stars would form little clusters, rather than appear completely randomly all over the sky. Indeed, this feature of brain activation is what allows us to detect any magnetic fields using MEG in the first place! Remember that only when many neurons within a local region happen to be simultaneously active do they generate fields strong enough to be detected at the scalp.


Fig. 8. MRI structural image of the head and brain (left),
and sensor, head, and brain model (right).
Adapted from Wiki Commons and OHBA.
Another assumption is that the fate of the travelling magnetic fields depends on the physical size, shape, and organisation of the brain and scalp. To this end, MEG data across all 300 sensors are registered to an MRI scan of each participant's head and a 3D mapping of their scalp (obtained by literally marking hundreds of points along each participant's scalp using a digital pen), which together provide a high spatial resolution description of the anatomy of the entire head, brain included. These assumptions, among others, are used to mathematically estimate where in the brain the measured magnetic fields may have originated at each point in time.

Fig. 9. Alpha, beta, and gamma oscillations.
Adapted from Wiki Commons.


There are two general approaches when analyzing MEG data. Analysis of event-related fields looks at how the timing or the size of the magnetic
fields changes with respect to an event of interest during a cognitive task (e.g., the appearance of an image). The idea is that although there is a lot of noise in the measurement, if one averages many trials together the noise will cancel out, while the effect of interest, which always occurs in relation to a precisely timed event in the cognitive task, will remain. This follows in the tradition of EEG analysis, in which these evoked responses are called event-related potentials. Alternatively, one can use Fourier transformations to break the data down into frequency components, also known as waves, rhythms, or oscillations, and measure changes in their phase or amplitude in response to cognitive events. This follows in the tradition established by Berger himself, who discovered and named alpha and beta waves. Neural oscillations have recently received a lot of attention as they are suggested to be involved in synchronizing the activity of populations of neurons, and have been associated with a number of cognitive functions such as attentional control and movement preparation, as in the case of alpha and beta oscillations, respectively.
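As a toy illustration of these two approaches, here is a sketch on simulated single-sensor data (the amplitudes, noise level, and the 150 ms "response" are all invented): averaging over trials recovers the event-related field, while a spectral analysis of the single trials recovers the 10 Hz alpha activity that is not phase-locked to the event and therefore averages away.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(2)
fs, n_trials = 1000, 100                        # 1000 Hz sampling, 100 trials
t = np.arange(-0.2, 0.8, 1 / fs)                # 1 s epochs around stimulus onset

# Each trial: a small evoked response peaking ~150 ms after onset, ongoing 10 Hz
# alpha activity with a random phase, and broadband sensor noise
evoked = 2.0 * np.exp(-((t - 0.15) ** 2) / (2 * 0.02 ** 2))
trials = np.array([
    evoked
    + np.sin(2 * np.pi * 10 * t + rng.uniform(0, 2 * np.pi))    # non-phase-locked alpha
    + rng.normal(scale=3.0, size=t.size)                        # sensor noise
    for _ in range(n_trials)
])

# Event-related field: averaging cancels the noise and the non-phase-locked alpha,
# leaving only what is time-locked to the event
erf = trials.mean(axis=0)
print(f"averaged response peaks at t = {t[np.argmax(erf)]:.3f} s")

# Oscillatory analysis: average the power spectra of the single trials instead;
# the 10 Hz activity survives even though it vanished from the average waveform
freqs, psd = welch(trials, fs=fs, nperseg=512, axis=-1)
alpha = psd[:, (freqs >= 8) & (freqs <= 12)].mean()
beta_ref = psd[:, (freqs >= 20) & (freqs <= 24)].mean()
print(f"mean power 8-12 Hz: {alpha:.3f}  vs  20-24 Hz: {beta_ref:.3f}")
```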


Other resources:

For a slightly more in-depth description of MEG, see here.
For a more in-depth description of MEG acquisition, see this video.
And for the kids, see this excellent article at Frontiers for Young Minds.

References

Baillet, S., Mosher, J.C., & Leahy, R.M. (2001). Electromagnetic brain mapping. IEEE Signal Processing Magazine.
Lopes da Silva, F.H. (2010). In Hansen, Kringelbach & Salmelin (Eds.), MEG: An Introduction to Methods (pp. 1-23). USA: OUP. Figure 1.3 from p. 6.
La Vaque, T.J. (1999). The History of EEG Hans Berger: Psychophysiologist. A Historical Vignette. Journal of Neurotherapy.
Proudfoot, M., Woolrich, M.W., Nobre, A.C., & Turner, M. (2014). Magnetoencephalography. Pract Neurol, 0, 1-8.
Schoenfeld, M.A., Hopf, J-M., Merkel, C., Heinze, H-J., & Hillyard, S.A. (2014). Object-based attention involves the sequential activation of feature-specific cortical modules. Nature Neuroscience, 17(4).
Vrba, J. (2002). Magnetoencephalography: the art of finding a needle in a haystack. Physica C, 368, 1-9.

The data figures are from papers cited above. All other figures are from Wiki Commons.

Friday, 24 April 2015

Research Briefing: organising the contents of working memory

Figure 1. Nicholas Myers
Research Briefing, by Nicholas Myers

Everyone has been in this situation: you are stuck in an endless meeting, and a colleague drones on about a topic of marginal relevance. You begin to zone out and focus on the art hanging in your boss’s office, when suddenly you hear your name mentioned. On high alert, you shift back to the meeting and scramble to retrieve your colleague’s last sentences. Miraculously, you are able to retrieve a few key words – they must have entered your memory a moment ago, but would have been quickly forgotten if hearing your name had not cued them as potentially vital bits of information.

This phenomenon, while elusive in everyday situations, has been studied experimentally for a number of years now: cues indicating the relevance of a particular item in working memory have a striking benefit to our ability to recall it, even if the cue is presented after the item has already entered memory. See our previous Research Briefing on how retrospective cueing can restore information to the focus of attention in working memory.

In a new article, published in the Journal of Cognitive Neuroscience, we describe a recent experiment that set out to add to our expanding knowledge of how the brain orchestrates these retrospective shifts of attention. We were particularly interested in the potential role of neural synchronization of 10 Hz (or alpha-band) oscillations, because they are important in similar prospective shifts of attention.

Figure 2. Experimental Task Design. [from Myers et al, 2014]
We wanted to examine the similarity of alpha-band responses (and other neural signatures of the engagement of attention) both to retrospective and prospective attention shifts. We needed to come up with a new task that allowed for this comparison. On each trial in our task, experiment volunteers first memorized two visual stimuli. Two seconds later, a second set of two stimuli appeared, so that a total of four stimuli was kept in mind. After a further delay, participants recalled one of the four items.  

In between the presentation of the first and the second set of two items, we sometimes presented a cue: this cue indicated which of the four items would likely be tested at the end of the trial. Crucially, this cue could have either a prospective or a retrospective function, depending on whether it pointed to a location where an item had already been presented (a retrospective cue, or retrocue), or to a location where a stimulus was yet to appear (a prospective cue, or precue). This allowed us to examine neural responses to attention-guiding cues that were identical with respect to everything but their forwards- or backwards-looking nature. See Figure 2 for a task schematic.

Figure 3. Results: retro-cueing and pre-cueing
trigger different attention-related ERPs.
[from Myers et al, 2014]
We found marked differences in event-related potential (ERP) profiles between the precue and retrocue conditions. We found evidence that precues primarily generate an anticipatory shift of attention toward the location of an upcoming item: potentials just before the expected appearance of the second set of stimuli reflected the location where volunteers were attending. These included the so-called early directing attention negativity (or 'EDAN') and the late directing attention-related positivity (or 'LDAP'; see Figure 3, middle panel; and see here for a review of attention-related ERPs). Retrocues elicited a different pattern of ERPs that was compatible with an early selection mechanism, but not with stimulus anticipation (i.e., no LDAP, see Figure 3, upper panel). The latter seems plausible, since the cued information was already in memory, and upcoming stimuli were therefore not deserving of attention. In contrast to the distinct ERP patterns, alpha band (8-14 Hz) lateralization was indistinguishable between cue types (reflecting, in both conditions, the location of the cued item; see Figure 4).

Figure 4. Results: retro-cueing and pre-cueing trigger similar patterns
of de-synchronisation in low frequency activity (alpha band at ~10Hz).
[from Myers et al, 2014]
What did we learn from this study? Taken together with the ERP results, it seems that alpha-band lateralization can have two distinct roles: after a precue it likely enables anticipatory attention. After a retrocue, however, the alpha-band response may reflect the controlled retrieval of a recently memorized piece of information that has turned out to be more useful than expected, without influencing the brain’s response to upcoming stimuli.

It seems that our senses are capable of storing a limited amount of information on the off chance that it may suddenly become relevant. When this turns out to be the case, top-down control allows us to pick out the relevant information from among all the items quietly rumbling around in sensory brain regions.

Many interesting questions remain that we were not able to address in this study. For example, how do cortical areas responsible for top-down control activate in response to a retrocue, and how do they shuttle cued information into a state that can guide behaviour? 



Key Reference: 

Myers, Walther, Wallis, Stokes & Nobre (2014) Temporal Dynamics of Attention during Encoding versus Maintenance of Working Memory: Complementary Views from Event-related Potentials and Alpha-band Oscillations. Journal of Cognitive Neuroscience (Open Access)

Thursday, 12 June 2014

Research Briefing: Oscillatory Brain State and Variability in Working Memory

Hot off the press: Oscillatory Brain State and Variability in Working Memory

In a new paper, Nick Myers and colleagues show how spontaneous fluctuations in
alpha-band synchronization over visual cortex predict the trial-by-trial accuracy of items stored in visual working memory. The pre-stimulus desynchronization of alpha oscillations correlated with the accuracy of memory recall. A model-based analysis indicated that this effect arises from a modulation in the precision of memorized items, but not the likelihood of remembering them (the recall rate). The phase of posterior alpha oscillations preceding the memorized item also predicted memory accuracy. The study highlights the influence of spontaneous changes in cortical excitability on higher visual cognition, and how these state changes contribute to large amounts of variability in what is normally thought of as a stable aspect of behavior.
 
From Figure 2 in Myers et al. (2014)



Reference: 

Myers, N. E., M. G. Stokes, et al. (2014). "Oscillatory brain state predicts variability in working memory." J Neurosci 34(23): 7735-7743 http://www.jneurosci.org/content/34/23/7735.short

Tuesday, 23 April 2013

In the news: Decoding dreams with fMRI

Recently Horikawa and colleagues from ATR Computational Neuroscience Laboratories, in Kyoto (Japan), caused a media sensation with the publication of a study in Science that provides the first proof-of-principle that non-invasive brain scanning (fMRI) can be used to decode dreams. Rumblings were already heard in various media circles after Yuki Kamitani presented their initial findings at the annual meeting of the Society for Neuroscience in New Orleans last year [see Mo Costandi's report]. But now that the peer-reviewed paper is officially published, the press releases have gone out and the journal embargo has been lifted, there has been a media frenzy [e.g., here, here and here]. The idea of reading people's dreams was always bound to attract a lot of media attention.

OK, so this study is cool. OK, very cool - what could be cooler than reading people's dreams while they sleep!? But is this just a clever parlour trick, using expensive brain imaging equipment? What does it tell us about the brain, and how it works?

First, to get beyond the hype, we need to understand exactly what they have, and have not, achieved in this study. Research participants were put into the narrow bore of an fMRI scanner for a series of mid-afternoon naps (up to 10 sessions in total). With the aid of simultaneous EEG recordings, the researchers were able to detect when their volunteers had slipped off into the earliest stages of sleep (stage 1 or 2). At this point, they were woken and questioned about any dream that they could remember, before being allowed to go back to sleep again. That is, until the EEG next registered evidence of early-stage sleep, when they were again awoken, questioned, and allowed back to sleep. So on and so forth, until they had recorded at least 200 distinct awakenings.

After all the sleep data were collected, the experimenters then analysed the verbal dream reports using a semantic network analysis (WordNet) to help organise the contents of the dreams their participants had experienced during the brain scans. The results of this analysis could then be used to systematically label dream content associated with the sleep-related brain activity they had recorded earlier.

Having identified the kind of things their participants had been dreaming about in the scanner, the researchers then searched for actual visual images that best matched the reported content of dreams. Scouring the internet, the researchers built up a vast database of images that more or less corresponded to the contents of the reported dreams. In a second phase of the experiment, the same participants were scanned again, but this time they were fully awake and asked to view the collection of images that were chosen to match their previous dream content. These scans provided the research team with individualised measures of brain activity associated with specific visual scenes. Once these patterns had been mapped, the experimenters returned to the sleep data, using the normal waking perception data as a reference map.

If it looks like a duck...

In the simplest possible terms, if the pattern of activity measured during one dream looks more like activity associated with viewing a person than activity associated with seeing an empty street scene, then, forced to guess, you should say that the dream probably contains a person. This is the essence of their decoding algorithm. They use sophisticated ways to characterise patterns in fMRI activity (a support vector machine), but essentially the idea is simply to match up, as best they can, the brain patterns observed during sleep with those measured during wakeful viewing of corresponding images. Their published result is shown on the right for different areas of the brain's visual system. Lower visual cortex (LVC) includes primary visual cortex (V1), and areas V2 and V3; whereas higher visual cortex (HVC) includes lateral occipital complex (LOC), fusiform face area (FFA) and parahippocampal place area (PPA).
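As a rough illustration of this logic (not the authors' actual pipeline, and with entirely simulated "voxel" patterns), one can train a linear support vector machine on patterns recorded while awake participants view images of two categories, and then ask it to label patterns recorded during sleep:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)
n_voxels = 200

# Each category is given its own underlying voxel pattern (purely simulated)
person_pattern = rng.normal(size=n_voxels)
street_pattern = rng.normal(size=n_voxels)

def simulate(pattern, n, noise):
    return pattern + noise * rng.normal(size=(n, n_voxels))

# "Awake" training data: scans collected while viewing matched images
X_awake = np.vstack([simulate(person_pattern, 80, 2.0), simulate(street_pattern, 80, 2.0)])
y_awake = np.array([1] * 80 + [0] * 80)         # 1 = person, 0 = street

# "Sleep" data: noisier patterns assumed to resemble the same categories
X_sleep = np.vstack([simulate(person_pattern, 20, 3.0), simulate(street_pattern, 20, 3.0)])
y_sleep = np.array([1] * 20 + [0] * 20)

clf = LinearSVC(C=1.0, max_iter=10000).fit(X_awake, y_awake)
print("accuracy on simulated dream patterns:", clf.score(X_sleep, y_sleep))
```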

Below is a more creative reconstruction of this result. The researchers have put together a movie based on one set of sleep data taken before waking. Each frame represents the visual image from their database that best matches the current pattern of brain activity. Note, the image gets clearer towards the end of the movie because the brain activity there is closer to the time point at which the participants were woken, and its content was therefore more likely to be described at waking. If the content at other times did not make it into the verbal report, then the dream activity would be difficult to classify, because the corresponding waking data would not have been entered into the image database. This highlights how this approach only really works for content that has been characterised using the waking visual perception data.


OK, so these scientists have decoded dreams. The accuracy is hardly perfect, but still, the results are significantly above chance, and that's no mean feat. In fact, it has never been done before. But some might still say, so what? Have we learned anything very new about the brain? Or is this just a lot of neurohype?

Well, beyond the tour de force technical achievement of actually collecting this kind of multi-session simultaneous fMRI/EEG sleep data, these results also provide valuable insights into how dreams are represented in the brain. As in many neural decoding studies, the true purpose of the classifier is not really to make perfectly accurate predictions, but rather to work out how the brain represents information by studying how patterns of brain activity differ between conditions [see previous post]. For example, are there different patterns of visual activity during different types of dreams? Technically, this could be tested by just looking for any difference in activity patterns associated with different dream content. In machine-learning language, this could be done using a cross-validated classification algorithm. If a classifier trained to discriminate activity patterns associated with known dream states can then make accurate predictions of new dreams, then it is safe to assume that there are reliable differences in activity patterns between the two conditions. However, this only tells you that activity in a specific brain area is different between conditions. In this study, they go one step further.
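Before turning to that further step, here is a minimal sketch of the cross-validation logic just described, on simulated data (the number of trials, voxels, and the size of the condition difference are all invented): if the classifier predicts held-out trials above chance, the two conditions must evoke reliably different patterns.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
n_trials, n_voxels = 120, 200

# Simulated patterns for two dream-content conditions differing only subtly
condition = rng.integers(0, 2, size=n_trials)
difference = 0.3 * rng.normal(size=n_voxels)     # the pattern that separates them
X = rng.normal(size=(n_trials, n_voxels))
X[condition == 1] += difference

# 5-fold cross-validation: train on 4/5 of the trials, test on the held-out 1/5.
# Accuracy reliably above 50% implies the conditions evoke different patterns.
scores = cross_val_score(LinearSVC(max_iter=10000), X, condition, cv=5)
print("fold accuracies:", scores.round(2), "mean:", scores.mean().round(2))
```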

By training the dream decoder using only patterns of activity associated with the visual perception of actual images, they can also test whether there is a systematic relationship between the way dreams are represented and how actual everyday perception is represented in the brain. This cross-generalisation approach helps isolate the shared features between the two phenomenological states. In my own research, we have used this approach to show that visual imagery during normal waking selectively activates patterns in high-level visual areas (lateral occipital complex: LOC) that are very similar to the patterns associated with directly viewing the same stimulus (Stokes et al., 2009, J Neurosci). The same approach can be used to test for other coding principles, including high-order properties such as position-invariance (Stokes et al., 2011, NeuroImage), or the pictorial nature of dreams, as studied here. As in our previous findings during waking imagery, Horikawa et al. show that the visual content of dreams shares similar coding principles to direct perception in higher visual brain areas. Further research, using a broader base of comparisons, will provide deeper insights into the representational structure of these inherently subjective and private experiences.

Many barriers remain for an all-purpose dream decoder

When the media first picked up this story, the main question I was asked went something like: are scientists going to be able to build dream decoders? In principle, yes, this result shows that a well-trained algorithm, given good brain data, is able to decode some of the content of dreams. But as always, there are plenty of caveats and qualifiers.

Firstly, the idea of downloading people's dreams while they sleep is still a very long way off. This study shows that, in principle, it is possible to use patterns of brain activity to infer the contents of people's dreams, but only at a relatively coarse resolution. For example, it might be possible to distinguish between patterns of activity associated with a dream containing people or an empty street, but it is another thing entirely to decode which person, or which street, not to mention all the other nuances that make dreams so interesting.

To boost the 'dream resolution' of any viable decoding machine, the engineer would need to scan participants for much MUCH longer, using many more visual exemplars to build up an enormous database of brain scans to use as a reference for interpreting more subtle dream patterns. In this study, the researchers took advantage of prior knowledge of specific dream content to limit their database to a manageable size. By verbally assessing the content of dreams first, they were able to focus on just a relatively small subset of all the possible dream content one could imagine. If you wanted to build an all-purpose dream decoder, you would need an effectively infinite database, unless you could discover a clever way to generalise from a finite set of exemplars to reconstruct infinitely novel content. This is an exciting area of active research (e.g., see here).

Another major barrier to a commercially available model is that you would also need to characterise this data for each individual person. Everyone's brain is different, unique at birth and further shaped by individual experiences. There is no reason to believe that we could build a reliable machine to read dreams without taking this kind of individual variability into account. Each dream machine would have to be tuned to each person's brain.


Finally, it is also worth noting that the method used in this experiment requires some pretty expensive and unwieldy machinery. Even if all the challenges set out above were solved, it is unlikely that dream readers for the home will be hitting the shelves any time soon. Other cheaper, and more portable, methods for measuring brain activity, such as EEG, can only really be used to identify different sleep stages, not what goes on inside them. Electrodes placed directly into the brain could be more effective, but at the cost of invasive brain surgery.


For the moment, it is probably better just to keep a dream journal.

Reference:


Horikawa, Tamaki, Miyawaki & Kamitani (2013) Neural Decoding of Visual Imagery During Sleep, Science [here]

Sunday, 24 June 2012

Journal Club: Brains Resonating to the Dream Machine


By George Wallis

On: VanRullen and Macdonald (2012). Perceptual Echoes at 10Hz in the Human Brain

One day in 1958 the artist Brion Gysin was sleeping on a bus in the south of France. The bus passed a row of trees, through which the sun was shining. As the flickering light illuminated Gysin, he awoke and with his eyes closed, began to hallucinate, seeing:

“an overwhelming flood of intensely bright patterns in supernatural colours… Was that a vision?”
By the turn of the decade Gysin was living with William S Burroughs in the flophouse in Paris that became known as the Beat Hotel. Gysin told Burroughs of his experience, and they decided to build a device to recreate the flickering stimulation. The ‘Dream Machine’ is a cylinder of cardboard, cut at regular intervals with windows, which can be spun on a 78rpm record player, with a light bulb inside to throw off a flickering light. The light flickers around ten times per second (10Hz). Some, like the poet Ginsberg (“it sets up optical fields as religious and mandalic as the hallucinogenic drugs”), claim to have experienced vivid hallucinations when seated eyes closed before a spinning Dream Machine (although most devotees admitted that the effect was much stronger in combination with psychedelic drugs).

Gysin and Burroughs had rediscovered a phenomenon that had been known to scientists for some time. The great neurophysiologist Purkinje documented the hallucinatory effect of flickering light by waving an open-fingered hand in front of a gaslight. Another neuro-luminary, Hermann von Helmholtz, investigated the same phenomenon in Physiological Optics, calling the resulting hallucinations ‘shadow patterns’. In the 1930s Adrian and Matthews, investigating the rhythmic EEG signal recently discovered by Hans Berger, shone a car headlamp through a bicycle wheel and found that they could ‘entrain’ the EEG recording of their subject to the stimulation, in ‘a coordinated beat’. And from there investigation of the magical 10Hz flicker continued, on and off, until the present day (for a very readable review, see the paper by ter Meulen, Tavy and Jacobs referenced at the bottom of this post – from which the above quotations from Gysin and Ginsberg are taken; see also a related post by Mind Hacks).

This week’s journal club paper is not about flicker-induced hallucinations. However, it does use EEG to address the related idea that there is something rather special to the visual system about the 10Hz rhythm. The paper, by Rufin VanRullen and James Macdonald, and published this month in Current Biology, used a very particular type of flickering stimulation to probe the ‘impulse response’ of the brain. They found – perhaps to their surprise – that the brain seems to ‘echo back’ their stimulation at about 10 echoes per second.

Macdonald and VanRullen’s participants were ‘plugged in’ during the experiment – electroencephalography (EEG) was used to measure the tiny, constantly changing voltages on their scalps that reflect the workings of the millions of neurons in the brain beneath. The stimulus sequence presented (with appropriate controls to ensure the participants paid attention) was a flickering patch on a screen. The flicker was of a very particular kind: a flat-spectrum sequence, a type of signal used by engineers to probe the ‘impulse response’ of a system. The impulse response is the response of a system to a very short, sharp stimulation. Imagine clicking your fingers in an empty cathedral – that short, sharp click is transformed into a long, echoing sound that slowly dies away. This is the impulse response of the cathedral: VanRullen and MacDonald were trying to measure the impulse response of the brain’s visual system. Because of its property of very low autocorrelation (the value of the signal at one point in time says nothing about what the value of the signal will be at any other time), the kind of signal the authors flashed at their participants can be used to mathematically extract the impulse response of a system (for more details, see the paper by Lalor et al., referenced at the bottom of this post).



To extract the impulse response, you do a ‘cross-correlation’ of the input signal (the flickering patch on the screen) with the output of the system – which, in this case, was the EEG signal from over the visual cortex of the participants (the occipital lobe). Cross-correlation involves lining up the input signal with the output at many different points in time and seeing how similar the signals are. So, you start with the input lined up exactly with the output, and ask how similar the input and output signals look. Then you move the input signal so it’s lined up with the output signal 1ms later – how similar now? And so on… all the way up to around 1s ‘misalignment’, in this paper.  Here, for two example subjects (S1 and S2), is the result:


The grey curves are the cross-correlation functions, stretched out over time. Up until about 0.2 seconds you see the classic ‘visual evoked potential’ response, but after that time a striking 10Hz ‘echo’ emerges. The authors perform various controls, to show, for example, that these ‘echoes’ are not induced only by the brightest or darkest values in their stimulus sequence. They argue that because of the special nature of the stimuli they used, this effect must represent the brain actually ‘echoing back’ the input signal at a later time. In their discussion, they propose that this could be a mechanism for remembering stimuli over short periods of time: replaying them 10 times per second.
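To make the cross-correlation procedure concrete, here is a toy simulation (the sampling rate, noise level, and the damped 10 Hz "echo" built into the fake visual system are all invented): a flat-spectrum stimulus is passed through a system whose impulse response rings at 10 Hz, and cross-correlating the stimulus with the noisy output recovers that ringing.

```python
import numpy as np

rng = np.random.default_rng(5)
fs = 160                                    # samples per second (arbitrary choice)
stimulus = rng.normal(size=20 * fs)         # flat-spectrum luminance sequence, 20 s

# A toy "visual system" whose impulse response is a damped 10 Hz oscillation
lags = np.arange(0, 1.0, 1 / fs)            # look for echoes up to 1 s
true_ir = np.exp(-lags / 0.3) * np.cos(2 * np.pi * 10 * lags)
eeg = np.convolve(stimulus, true_ir)[: stimulus.size] + 2.0 * rng.normal(size=stimulus.size)

# Cross-correlation: slide the stimulus against the "EEG" and measure their
# similarity at each lag. Because the stimulus has ~zero autocorrelation,
# this recovers the impulse response, 10 Hz echoes and all.
recovered = np.array([
    np.dot(stimulus[: stimulus.size - k], eeg[k:]) / (stimulus.size - k)
    for k in range(lags.size)
])

spectrum = np.abs(np.fft.rfft(recovered))
peak = np.fft.rfftfreq(recovered.size, 1 / fs)[np.argmax(spectrum)]
print(f"dominant frequency of the recovered response: {peak:.1f} Hz")
```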


This is a bold hypothesis. Are these 10Hz reverberations really ‘echoes’ of the visual input, used for visual short term memory? We weren’t sure. We already know that the EEG resonates by far the most easily to flickering stimuli at 10Hz (see the paper by Herrmann, referenced below), so despite the sophisticated stimulus used here, it is easy to suspect that the result of this experiment depends more on this ‘ringing’ quality of the EEG than on mnemonic echoes of the stimuli themselves. We felt that in order to really nail this question you would need to show, for example, that our sensitivity to a specific stimulus we have just seen fluctuates at a 10Hz rhythm in the seconds after we encounter it. However, this is the sort of thing that could be achieved with behavioural experiments.
Perhaps a new theory of short term memories will emerge.  

In the meantime, why not build yourself a dream machine and see if you can have your own visionary insights with the help of some 10Hz flickering light?  You’ll need the diagram below (blow up; cut out; fold into a cylinder), an old 78rpm record player, and a light-bulb.


References:

Current Biology, 2012: Perceptual Echoes at 10Hz in the Human Brain, Rufin VanRullen and James S.P. Macdonald

European Neurology, 2009: From Stroboscope to Dream Machine: A History of Flicker-Induced Hallucination, B.C. ter Meulen, D. Tavy and B.C. Jacobs

NeuroImage, 2006: The VESPA: a method for the rapid estimation of a visual evoked potential.  Edmund C. Lalor, Barak A. Pearlmutter, Richard B. Reilly, Gary McDarby and John J. Foxe



Monday, 18 June 2012

In the news: Mind Reading

Mind reading tends to capture the headlines. And these days we don't need charlatan mentalists to perform parlour tricks before a faithful audience - we now have true scientific mind reading. Modern brain imaging tools allow us to read the patterns of brain activity that constitute mind... well, sort of. I thought I would write this post in response to a recent Nature News Feature on research into methods for reading the minds of patients without any other means of communication. In this post, I consider what modern brain imaging brings to the art of mind reading.

Mind reading as a tool for neuroscience research



First, it should be noted that almost any application of brain imaging in cognitive neuroscience can be thought of as a form of mind reading. Standard analytic approaches test whether we can predict brain activity from changes in cognitive state (e.g., in statistical parametric mapping). It is straightforward to turn this equation around to predict mental state from brain activity. With this simple transformation, the huge majority of brain imaging studies are doing mind reading. Moreover, a class of analytic methods known as multivariate (or multivoxel) pattern analysis (or classification) has come even closer to mind reading for research purposes. Essentially, these methods rely on a two-stage procedure. The first step is to learn which patterns of brain activity correspond to which cognitive states. Next, these learned relationships are used to predict the cognitive state associated with new brain activity. This train/test procedure is strictly "mind reading", but essentially as a by-product.
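As a toy illustration of "turning the equation around" (with entirely simulated data and made-up dimensions), the same simulated experiment can be analysed in both directions: an encoding model predicts voxel activity from the cognitive state, and a decoder predicts the state from the activity, trained and tested on separate trials as described above.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(6)
n_trials = 200

# Simulated experiment: a binary cognitive state (e.g. attending left vs right)
# and 50 voxels, only 10 of which actually respond to that state
state = rng.integers(0, 2, size=n_trials)
weights = np.zeros(50)
weights[:10] = 1.0
activity = np.outer(state, weights) + rng.normal(scale=1.5, size=(n_trials, 50))

# Encoding direction (the standard mapping approach): predict activity from state
encoder = LinearRegression().fit(state.reshape(-1, 1), activity)
print("estimated response of the first 3 voxels:", encoder.coef_[:3].ravel().round(2))

# Decoding direction ("mind reading"): predict state from activity,
# learning on the first 150 trials and testing on the held-out 50
train, test = slice(0, 150), slice(150, None)
decoder = LogisticRegression(max_iter=1000).fit(activity[train], state[train])
print("held-out decoding accuracy:", decoder.score(activity[test], state[test]).round(2))
```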

In fact, the main advantage of this form of mind reading in research neuroscience is that it provides a powerful method for exploring how complex patterns in brain data vary with the experimental condition. Multivariate analysis can also be performed the other way around (by predicting brain activity from behaviour, see here), and similarly, there is no reason why train-test procedures can't be used for univariate analyses. In this type of research, the purpose is not actually to read the mind of cash-poor undergraduates who tend to volunteer for these experiments, but rather to understand the relationship between mind and brain.

Statistical methods for prediction provide a formal framework for this endeavour, and although they are a form of mind reading, this is unlikely to capture the popular imagination once the finer details are explained. Experiments may sometimes get dressed up like a mentalist's parlour trick (e.g., "using fMRI, scientists could read the contents of consciousness"), but such hype invariably leaves those who actually read the scientific paper a bit disappointed by the more banal reality (e.g., "statistical analysis could predict significantly above chance whether participants were seeing a left or right tilted grating"... hardly the Jedi mind trick, but very cool from a neuroscientific perspective), or contributes to paranoid conspiracy theories in those who didn't read the paper, but have an active imagination.

Mind reading as a tool for clinical neuroscience


So, in neuroscientific research, mind reading is most typically used as a convenient tool for studying mind-brain relationships. However, the ability to infer mental states from brain activity has some very important practical applications. For example, in neural prosthesis, internal thoughts are decoded by "mind reading" algorithms to control external devices (see previous post here). Mind reading may also provide a vital line of communication to patients who are otherwise completely unable to control any voluntary movement.

Imagine you are in an accident. You suffer serious brain damage that leaves you with eye blinking as your only voluntary movement for communicating with the outside world. That's bad, very bad in fact - but in time you might perfect this new form of communication, and eventually you might even write a good novel, with sufficient blinking and heroic patience. But now imagine that your brain damage is just a little bit worse, and now you can't even blink your eyes. You are completely locked in, unable to show the world any sign of your conscious existence. To anyone outside, you appear completely without a mind. But inside, your mind is active. Maybe not as sharp and clear as it used to be, but still alive with thoughts, feelings, emotions, hopes and fears. Now mind reading, at any level, becomes more than just a parlour trick.
"It is difficult to imagine a worse experience than to be a functioning mind trapped in a body over which you have absolutely no control" Prof Chris Frith, UCL [source here]
As a graduate student in Cambridge, I volunteered as a control participant in a study conducted by Adrian Owen to read mental states with fMRI for just this kind of clinical application (since published in Science). While I lay in the scanner, I was instructed to either imagine playing tennis or to spatially navigate around a familiar environment. The order was up to me, but it was up to Adrian and his group to use my brain response to predict which of these two tasks I was doing at any given time. I think I was quite bad at spatially navigating, but whatever I did inside my brain was good enough for the team to decode my mental state with remarkable accuracy.

Once validated in healthy volunteers (who, conveniently enough, can reveal which task they were doing inside their head, thus the accuracy of the predictions can be confirmed), Adrian and his team then applied this neuroscientific knowledge to track the mental state of a patient who appeared to be in a persistent vegetative state. When they asked her to imagine playing tennis, her brain response looked just like mine (and other control participants), and when asked to spatially navigate, her brain looked just like other brains (if not mine) engaged in spatial navigation.

In this kind of study, nothing very exciting is learned about the brain, but something else extremely important has happened: someone has been able to communicate for the first time since being diagnosed as completely non-conscious. Adrian and his team have further provided proof-of-principle that this form of mind reading can be applied in other patients to test their level of conscious awareness (see here). By following the instructions, some patients were able to demonstrate for the first time a level of awareness that was previously completely undetected. In one further example, they even show that this brain signal can be used to answer some basic yes/no questions.

This research has generated an enormous amount of scientific, clinical and public interest [see his website for examples]. As quoted in a recent Nature News Feature, Adrian has since been "awarded a 7-year Can$10-million Canada Excellence Research Chair and another $10 million from the University of Western Ontario" and "is pressing forward with the help of three new faculty members and a troop of postdocs and graduate students". Their first goal is to develop cheaper and more effective means of using non-invasive methods like fMRI and EEG to restore communication. However, one could also imagine a future for invasive recording methods. Bob Knight's team in Berkeley have been using electrical recordings made directly from the brain surface to decode speech signals (see here for a great summary in the Guardian by Ian Sample). Presumably, this kind of method could be considered for patients identified as partially conscious.

See also an interesting interview with Adrian by Mo Costandi in the Guardian

References:
Monti, et al. (2010). Willful modulation of brain activity in disorders of consciousness. New England Journal of Medicine
Owen, et al (2006). Detecting awareness in the vegetative state. Science
Pasley,  et al (2012). Reconstructing Speech from Human Auditory Cortex. PLoS Biology

Monday, 7 May 2012

Research Briefing: How memory influences attention

Background


In the late 19th Century, the great polymath Hermann von Helmholtz eloquently described how our past experiences shape how we see the world. Given the optical limitations of the eye, he concluded that the rich experience of vision must be informed by a lot more than meets the eye. In particular, he argued that we use our past experiences to infer the perceptual representation from the imperfect clues that pass from the outside world to the brain. 


Consider the degraded black and white image below. It is almost impossible to interpret, until you learn that it is a Dalmatian. Now it is almost impossible not to see the dog in dappled light.

More than one hundred years after Helmholtz, we are now starting to understand the brain mechanisms that mediate this interaction between memory and perception. One important direction follows directly from Helmholtz's pioneering work. Often couching the problem in more contemporary language, such as Bayesian inference, vision scientists are beginning to understand how our perceptual experience is determined by the interaction between sensory input and our perceptual knowledge established through past experience in the world.
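As a toy illustration of the Bayesian idea (all numbers made up), consider how prior knowledge changes the interpretation of an ambiguous image like the Dalmatian above: when the sensory evidence barely favours either interpretation, the posterior belief is driven almost entirely by the prior.

```python
# Toy Bayesian inference with made-up numbers: P(dog | image) from Bayes' rule.
likelihood_given_dog = 0.30      # P(this blotchy image | a dog is present)
likelihood_given_no_dog = 0.25   # P(this blotchy image | no dog): nearly as likely

for prior_dog in (0.05, 0.60):   # before vs after being told it is a Dalmatian
    evidence = (likelihood_given_dog * prior_dog
                + likelihood_given_no_dog * (1 - prior_dog))
    posterior = likelihood_given_dog * prior_dog / evidence
    print(f"prior P(dog) = {prior_dog:.2f} -> posterior P(dog | image) = {posterior:.2f}")
```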

Prof Nobre (cognitive neuroscientist, University of Oxford) has approached this problem from a slightly different angle. Rather than ask how memory shapes the interpretation of sensory input, she took one step back to ask how past experience prepares the visual system to process memory-predicted visual input. With this move, Nobre's research draws on a rich history of cognitive neuroscientific research in attention and long-term memory. 

Although both attention and memory have been thoroughly studied in isolation, very little is actually known about how these two core cognitive functions interact in everyday life. In 2006, Nobre and colleagues published the results of a brain imaging experiment designed to identify the brain areas involved in memory-guided attention (Summerfield et al., 2006, Neuron). Participants in this experiment first studied a large number of photographs depicting natural everyday scenes. The instruction was to find a small target object embedded in each scene, very much like the classic Where's Wally game.


After performing the search task a number of times, participants learned the location of the target in each scene. When Nobre and her team tested their participants again on a separate day, they found that people were able to use the familiar scenes to direct attention to the previously learned target location in the scene.


Next, the research team repeated this experiment, but this time changes in brain activity were measured in each participant while they used their memories to direct the focus of their attention. With functional magnetic resonance imaging (fMRI), the team found an increase in neural activity in brain areas associated with memory (especially the hippocampus) as well as a network of brain areas associated with attention (especially parietal and prefrontal cortex). 

This first exploration of memory-guided attention (1) confirmed that participants can use long-term memory to guide attention, and (2) further suggested that the brain areas that mediate long-term memory could interact with attention-related areas to support this coalition. However, due to methodological limitations at the time, there was no way to separate activity associated with memory-guided preparatory attention from the consequences of past experience on perception (e.g., Helmholtzian inference). This was the aim of our follow-up study.

The Current Study: Design and Results 


In collaboration with Nobre and colleagues, we combined multiple brain imaging methods to show that past experience can change the activation state of visual cortex in preparation for memory-predicted input (Stokes, Atherton, Patai & Nobre, 2012, PNAS). Using electroencephalography (EEG), we demonstrated that memories can reduce inhibitory neural oscillations in visual cortex at memory-specific spatial locations.

With fMRI, we further showed that this change in electrical activity is also associated with an increase in activity in the brain areas that represent the memory-predicted spatial location. Together, these results provide key convergent evidence that past experience alone can shape activity in visual cortex to optimise processing of memory-predicted information.


Finally, we were also able to provide the most compelling evidence to date that memory-guided attention is mediated via the interaction between processing in the hippocampus, prefrontal and parietal cortex. However, further research is needed to verify this speculation. In particular, we cannot yet confirm whether activation of the attention network is necessary for memory-guided preparation of visual cortex, or whether a direct pathway between the hippocampus and visual cortex is sufficient for the changes in preparatory activity observed with fMRI and EEG. This is now the focus of on-going research.