Monday, 3 August 2015

Journal Club: Decoding spatial activity patterns with high temporal resolution

by Michael Wolff

on: Cichy, Ramirez and Pantazis (2015) Can visual information encoded in cortical columns be decoded from magnetoencephalography data in humans? NeuroImage

Knowing what information the brain is holding at any given time is an intriguing prospect. It would enable researchers to explore how and where information is processed and formed in the brain, as well as how it guides behaviour.

A big step towards this possibility was made in 2005, when Kamitani and Tong decoded simple visual grating stimuli from human brain activity measured with functional magnetic resonance imaging (fMRI). The defining new feature of this study was that instead of looking for differences in overall activity levels between conditions (in this case visual stimuli), they tested for differences in the activity patterns across voxels. This method is now more generally known as multivariate pattern analysis (MVPA). A classifier (usually linear) is trained on a subset of the data to discriminate between conditions or stimuli, and then tested on the left-out data. This is repeated many times, and the percentage of correctly labelled test data is reported. Crucially, this process is carried out separately for each participant, since subtle individual differences in activity patterns and cortical folding would be lost when averaging, defeating the purpose of the analysis. MVPA has since revolutionised fMRI research and, together with the growth in computing power, has become a widely used technique.
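As a rough illustration of the mechanics (not the code used in any of the studies discussed here), a minimal cross-validated decoding analysis for a single participant might look like the sketch below, using Python with scikit-learn, simulated data, and illustrative variable names:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Simulated data standing in for one participant: a pattern matrix X
# (trials x voxels/sensors) and a label vector y (stimulus per trial).
rng = np.random.default_rng(0)
n_trials, n_features = 200, 500
X = rng.standard_normal((n_trials, n_features))
y = rng.integers(0, 2, n_trials)  # two grating orientations

# A linear classifier is trained on a subset of trials and tested on the
# left-out trials; repeating this over folds gives the reported accuracy.
clf = make_pipeline(StandardScaler(), LinearSVC())
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"Decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```

With the purely random data above the accuracy hovers around chance; in a real dataset, above-chance accuracy is taken as evidence that the patterns carry stimulus information.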

The differential brain patterns observed by Kamitani and Tong are thought to arise from the orientation columns in primary visual cortex (V1), discovered by Hubel and Wiesel more than 50 years ago. Hubel and Wiesel showed that these columns contain neurons that are excited differentially by visual stimuli of varying orientations. Since the columns are very small (<1 mm), it is surprising that their activity patterns can apparently be picked up by conventional fMRI, which has a spatial resolution of about 2-3 mm. More surprising still, even magnetoencephalography (MEG) and electroencephalography (EEG), which are generally considered to have a spatial resolution of several centimetres, seem to be able to decode visual information! How is this possible?

Critics have proposed alternative origins for the decodable patterns that operate at a much coarser spatial scale (e.g. global form properties or the overrepresentation of specific stimuli), which would confound the interpretation that decoding reflects fine-grained, columnar-level activity.

In response to these criticisms, a recent study by Cichy, Ramirez, and Pantazis (2015) investigated to what extent such confounds could account for the decodable patterns by systematically changing the properties of the presented stimuli. They used MEG rather than fMRI as the physiological measure. This enabled them to explore the time-course of decoding, which can be used to infer at which stage of visual processing the decodable patterns arise.

In the first experiment they showed that neither the cardinal bias (the overrepresentation of horizontal and vertical gratings) nor the phase of the gratings (and thus local luminance) is necessary to reliably decode the stimuli.

Figure 1. From Cichy et al., in press

As can be seen from the decoding time-course, decoding becomes significant approximately 50 ms after stimulus onset and ramps up very quickly, peaking at about 100 ms. This time-course alone, which was very similar in the other experiments testing for different possible confounds, suggests that the decodable patterns arise early in the visual processing pathway, probably in V1.
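Time-resolved decoding of this kind simply repeats the cross-validated classification independently at every time point of the epoch. A minimal sketch, again on simulated data with assumed array shapes:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Simulated MEG epochs: X is (trials x sensors x time points), y labels each trial.
rng = np.random.default_rng(1)
n_trials, n_sensors, n_times = 120, 306, 60
X = rng.standard_normal((n_trials, n_sensors, n_times))
y = rng.integers(0, 2, n_trials)

# Train and test a separate linear classifier at every time point; plotting the
# resulting accuracies against time gives a decoding time-course like Figure 1.
clf = make_pipeline(StandardScaler(), LinearSVC())
accuracy = np.array([cross_val_score(clf, X[:, :, t], y, cv=5).mean()
                     for t in range(n_times)])
```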

The other confounds that were tested were the radial bias (the neural overrepresentation of orientations pointing towards fixation), the edge effect (gratings could be represented as ellipses elongated along the orientation of the grating), and global form (gratings being perceived as coherent tilted objects). None of these biases could fully explain the decodable patterns, casting doubt on the notion that decoding is driven purely by coarse-scale activity. Again, how is this possible, when the spatial resolution of MEG should be far too coarse to pick up such small neural differences?

The authors then tested more directly whether neural activity from the orientation columns could be decoded with MEG. They projected neurophysiologically realistic activity patterns onto the modelled V1 surface of one subject (A). The distance between activity nodes was comparable to the actual size of the orientation columns. The corresponding MEG scalp recordings were obtained by forward modelling (B) and their differences decoded (C and D). The activity patterns could be reliably discriminated across a wide range of signal-to-noise ratios (SNRs) and, most crucially, at the same SNR as in the first experiment.
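To make the logic of this simulation concrete, here is an illustrative sketch (not the authors' actual pipeline) in which two fine-grained source patterns are pushed through an assumed, random leadfield matrix standing in for the real forward model, sensor noise is added at a chosen SNR, and a linear classifier tries to tell the resulting topographies apart:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_sources, n_sensors, n_trials = 2000, 306, 100

# Random stand-ins: a leadfield L (sensors x sources) and two dense source
# patterns; in the actual study the forward model comes from the subject's
# anatomy and the patterns are placed on the reconstructed V1 surface.
L = rng.standard_normal((n_sensors, n_sources))
pattern_a = rng.standard_normal(n_sources)
pattern_b = rng.standard_normal(n_sources)

def simulate_trials(pattern, snr):
    """Forward-project a source pattern and add sensor noise at the given SNR."""
    signal = L @ pattern
    noise_sd = np.linalg.norm(signal) / np.sqrt(len(signal)) / snr
    return signal + rng.standard_normal((n_trials, n_sensors)) * noise_sd

for snr in (0.1, 0.5, 1.0):
    X = np.vstack([simulate_trials(pattern_a, snr), simulate_trials(pattern_b, snr)])
    y = np.repeat([0, 1], n_trials)
    acc = cross_val_score(LinearSVC(), X, y, cv=5).mean()
    print(f"SNR {snr}: decoding accuracy {acc:.2f}")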

Figure 2. From Cichy et al., in press

This simulation nicely demonstrates that discriminating V1 activity with MEG is theoretically feasible, and suggests that the well-known "inverse problem" inherent to MEG and EEG source localisation does not imply that small, sub-millimetre-scale activation differences leave no trace in the sensor-level topographies. While it remains impossible to say where a neural activation pattern originates, the MEG activation pattern is still spatially rich.

Even with EEG it is possible to decode the orientation of gratings (Wolff, Ding, Myers, & Stokes, in press), and this can be observed more than 1.5 seconds after stimulus presentation. We believe that there is a bright future ahead for EEG and MEG decoding research: not only is EEG considerably cheaper than fMRI, but the time-resolved decoding offered by both methods could nicely complement the more spatially resolved decoding of fMRI.



References

Cichy, R. M., Ramirez, F. M., & Pantazis, D. (in press). Can visual information encoded in cortical columns be decoded from magnetoencephalography data in humans? NeuroImage.

Hubel, D. H., & Wiesel, T. N. (1959). Receptive fields of single neurones in the cat's striate cortex. The Journal of Physiology, 148(3), 574-591.

Kamitani, Y., & Tong, F. (2005). Decoding the visual and subjective contents of the human brain. Nature Neuroscience, 8(5), 679-685.

Wolff, M. J., Ding, J., Myers, N. E., & Stokes, M. G. (in press). Revealing hidden states in visual working memory using EEG. Frontiers in Systems Neuroscience.
