
Monday, 3 August 2015

Journal Club: Decoding spatial activity patterns with high temporal resolution

by Michael Wolff

on: Cichy, Ramirez and Pantazis (2015) Can visual information encoded in cortical columns be decoded from magnetoencephalography data in humans? NeuroImage

Knowing what information the brain is holding at any given time is an intriguing prospect. It would enable researchers to explore how and where information is processed and formed in the brain, as well as how it guides behaviour.

A big step towards this possibility was made in 2005 when Kamitani and Tong decoded simple visual grating stimuli in the human brain using functional magnetic resonance imaging (fMRI). The defining new feature of this study was that instead of looking for differences in overall activity levels between conditions (or in this case visual stimuli), they tested the differences in activity patterns across voxels between stimuli. This method is now more generally known as multivariate pattern analysis (MVPA). A classifier (usually linear) is trained on a subset of data to discriminate between conditions/stimuli, and then tested on the left-out data. This is repeated many times, and the percentages of correctly labelled test data are reported. Crucially, this process is carried out separately for each participant, as subtle individual differences in activity patterns and cortical folding would be lost when averaged, defeating the purpose of the analysis. MVPA has since revolutionised fMRI research and, in combination with the increased power of computers, has become a widely used technique.
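To make the train/test logic concrete, here is a minimal sketch in Python with scikit-learn. Everything in it is synthetic and illustrative: random 'voxel' patterns stand in for one participant's fMRI data, and none of the numbers come from any actual study.

```python
# Minimal sketch of MVPA decoding on synthetic "voxel" patterns.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 100, 50

# Two stimulus conditions (e.g. two grating orientations), coded 0/1.
y = np.repeat([0, 1], n_trials // 2)

# Synthetic patterns: condition 1 carries a weak, distributed signal.
X = rng.normal(size=(n_trials, n_voxels))
X[y == 1] += 0.3 * rng.normal(size=n_voxels)  # fixed pattern offset

# Train a linear classifier on subsets of trials, test on the left-out
# trials, and report the proportion of correctly labelled test data.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LinearSVC(), X, y, cv=cv)
print(f"mean decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```

The crucial feature is that the classifier never sees the trials it is tested on, so above-chance accuracy reflects reliable pattern differences rather than overfitting; in a real analysis this is run separately for each participant.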

The differential brain patterns observed by Kamitani and Tong are thought to arise from the orientation columns in the primary visual cortex (V1), discovered by Hubel and Wiesel more than 50 years ago. They showed that columns contain neurons that are excited differentially by visual stimuli of varying orientations. Since these columns are very small (<1 mm), it is surprising that their activity patterns can apparently be picked up by conventional fMRI, with a spatial resolution of about 2-3 mm. More surprising still is that even magnetoencephalography (MEG) and electroencephalography (EEG), which are generally considered to have a spatial resolution of several centimetres, seem to be able to decode visual information! How is this possible?

Critics have raised alternative possible origins of the decodable patterns, which could result in more coarse-level activity patterns (e.g. by global form properties or overrepresentation of specific stimuli), and thus confound the interpretation of decodable patterns in the brain.

In response to these criticisms, a recent study by Cichy, Ramirez, and Pantazis (2015) investigated to what extent specific confounds could affect decodable patterns by systematically changing the properties of the presented stimuli. They used MEG as the physiological measure instead of fMRI. This enabled them to explore the time-course of decoding, which can be used to infer at which visual processing stage decodable patterns arise.

In the first experiment they showed that neither the cardinal bias (overrepresentation of horizontal or vertical gratings) nor the phase of the gratings (and thus local luminance) is necessary to reliably decode the stimuli.

Figure 1. From Cichy et al. (2015)
As can be seen from the decoding time-course, decodability becomes significant approximately 50 ms after stimulus presentation and ramps up extremely quickly, peaking at about 100 ms. This time-course alone, which was very similar in the other experiments testing for different possible confounds, suggests that the decodable patterns arise early in the visual processing pathway, probably in V1.
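A decoding time-course like the one in Figure 1 is produced by simply repeating the decoding analysis independently at every time point of the epoch. A bare-bones sketch of that logic on synthetic sensor data (the 'onset' at sample 20 is an arbitrary stand-in, not the study's actual timing):

```python
# Time-resolved decoding: fit and test a classifier at each time point.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_sensors, n_times = 80, 30, 60
y = np.repeat([0, 1], n_trials // 2)

# Synthetic MEG-like epochs; a condition difference appears from sample 20.
X = rng.normal(size=(n_trials, n_sensors, n_times))
pattern = rng.normal(size=n_sensors)
X[y == 1, :, 20:] += 0.4 * pattern[:, None]

# One accuracy value per time point: chance before "onset", above after.
accuracy = np.array([
    cross_val_score(LogisticRegression(max_iter=1000),
                    X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
print(accuracy.round(2))
```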

The other confounds that were tested involved the radial bias (neural overrepresentation of lines parallel to fixation), the edge effect (gratings could be represented as ellipses elongated in the orientation of the gratings), and global form (where gratings are perceived as coherent tilted objects). None of these biases could fully explain the decodable patterns, casting doubt on the notion of coarse-level driven decoding. Again, how is this possible, when the spatial resolution of MEG should be far too coarse to pick up such small neural differences?

The authors tested the possibility of decoding neural activity from the orientation columns with MEG more directly. They projected neurophysiologically realistic activity patterns onto the modelled surface of V1 of one subject (A). The distance between each activity node was comparable to the actual size of the orientation columns. The corresponding MEG scalp recordings were obtained by forward modelling (B) and their differences decoded (C and D). The activity patterns could be reliably discriminated across a wide range of signal-to-noise ratios (SNRs) and, most crucially, at the same SNR as in the first experiment.

Figure 2. From Cichy et al. (2015)

This procedure nicely demonstrates the theoretical feasibility of discriminating neural activity in V1 with MEG, and shows that the well-known “inverse problem” inherent to MEG and EEG source localisation does not necessarily mean that activation differences on the sub-millimetre scale leave no trace in the scalp topography. While it remains impossible to say exactly where a neural activation pattern originates, the MEG activation pattern is still spatially rich.
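The spirit of this simulation can be captured in a toy version: generate two fine-grained, interleaved source patterns, project them to the sensors through a forward model, add noise, and decode. Note that the random matrix below merely stands in for the authors' anatomically realistic leadfield, so this sketch only illustrates the logic, not their actual model:

```python
# Toy forward-modelling argument: two interleaved "columnar" source
# patterns are projected to MEG sensors and then decoded.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_sources, n_sensors, n_trials = 500, 102, 200

# Random stand-in for a realistic leadfield (sources -> sensors).
leadfield = rng.normal(size=(n_sensors, n_sources))

# Alternating sources active in counter-phase, mimicking fine columns.
pattern_a = (np.arange(n_sources) % 2 == 0).astype(float)
pattern_b = 1.0 - pattern_a

y = np.repeat([0, 1], n_trials // 2)
sources = np.where(y[:, None] == 0, pattern_a, pattern_b)

snr = 0.5  # illustrative sensor-level signal-to-noise ratio
signal = sources @ leadfield.T
X = signal + rng.normal(size=signal.shape) * signal.std() / snr

scores = cross_val_score(LinearSVC(), X, y, cv=5)
print(f"sensor-level decoding accuracy: {scores.mean():.2f}")
```

Even though the sensors blur the sources heavily, the two source patterns project to systematically different scalp topographies, which is all a linear classifier needs.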

Even with EEG it is possible to decode the orientations of gratings (Wolff, Ding, Myers, & Stokes, in press); and this can be observed more than 1.5 seconds after stimulus presentation. We believe that there is a bright future ahead for EEG and MEG decoding research: not only is EEG considerably cheaper than fMRI, but the time-resolved decoding offered by both methods could nicely complement the more spatially resolved decoding of fMRI.



References

Cichy, R. M., Ramirez, F. M., & Pantazis, D. (2015). Can visual information encoded in cortical columns be decoded from magnetoencephalography data in humans? NeuroImage.

Hubel, D. H., & Wiesel, T. N. (1959). Receptive fields of single neurones in the cat's striate cortex. The Journal of Physiology, 148(3), 574-591.

Kamitani, Y., & Tong, F. (2005). Decoding the visual and subjective contents of the human brain. Nature Neuroscience, 8(5), 679-685.

Wolff, M. J., Ding, J., Myers, N. E., & Stokes, M. G. (in press). Revealing hidden states in visual working memory using EEG. Frontiers in Systems Neuroscience.

Wednesday, 29 April 2015

Peering directly into the human brain

Wiki Commons
With the rise of non-invasive brain imaging such as functional magnetic resonance imaging (fMRI), researchers have been granted unprecedented access to the inner workings of the brain. It is now relatively straightforward to put your experimental subjects in an fMRI machine and measure activity 'blobs' in the brain. This approach has undoubtedly revolutionised cognitive neuroscience, and looms very large in people's idea of contemporary brain science. But fMRI has its limitations. As every student in the business should know, fMRI has poor temporal resolution. fMRI is like a very long-exposure photograph: the activity snapshot actually reflects an average over many seconds. Yet the mind operates at the millisecond scale. This is obviously a problem: neural dynamics are simply blurred with fMRI. However, probably more important is a theoretical limit.

Wiki Commons
Electricity is the language of the brain, but fMRI only measures changes in blood flow that are coupled to these electrical signals. This coupling is complex, so fMRI can only provide a relatively indirect measure of neural activity. Electroencephalography (EEG) is a classic method for measuring actual electrical activity. It has been around for more than 100 years, but again, as every student should know: EEG has poor spatial resolution. It is difficult to know exactly where the activity is coming from. Magnetoencephalography (MEG) is a close cousin of EEG. Developed more recently, MEG is better at localising the source of brain activity. But the fundamental laws of physics mean that any measure of electromagnetic activity from outside the head will always be spatially ambiguous (the inverse problem). The best solution is to record directly from the surface of the brain. Here we discuss the unique opportunities that arise in the clinic to measure electrical activity directly from the human brain using electrocorticography (ECoG).

Epilepsy can be a seriously debilitating neurological condition. Although the symptoms can often be managed with medication, some patients continue to have major seizures despite a cocktail of anti-epileptic drugs. So-called intractable epilepsy affects every aspect of life, and can even be life-threatening. Sometimes the only option is neurosurgery: careful removal of the specific brain area responsible for seizures can dramatically improve quality of life.

Neurosurgery
Psychology students should be familiar with the case of Henry Molaison (aka HM). Probably the most famous neuropsychology patient in history, HM suffered intractable epilepsy until the neurosurgeon William Scoville removed two large areas of tissue in the medial temporal lobe, including the left and right hippocampus. This pioneering surgery successfully treated his epilepsy, but this is not why the case became so famous in neuropsychology. Unfortunately, the treatment also left HM profoundly amnesic. It turns out that removing both sides of the medial temporal lobe effectively removes the brain circuitry for forming new memories. This lesson in functional neuroanatomy is what made the case of HM so important, but there was also an important lesson for neurosurgery – be careful which parts of the brain you remove!

The best way to plan a neurosurgical resection of epileptic tissue is to identify exactly where the seizure is coming from, and the best way to map out the affected region is to record activity directly from the surface of the brain. This typically involves neurosurgical implantation of recording electrodes directly in the brain to be absolutely sure of the exact location of the seizure focus. Activity can then be monitored over a number of days, or even weeks, for seizure-related abnormalities. This invasive procedure allows neurosurgeons to monitor activity in specific areas that could be the source of epileptic seizures, but it also provides a unique opportunity for neuroscientific research.

From Pasley et al., 2012 PLoS Biol. Listen to audio here
During the clinical observation period, patients are typically stuck on the hospital ward with electrodes implanted in their brain, literally waiting for a seizure to happen so that the epileptic brain activity can be ‘caught on camera’. This observation period provides a unique opportunity to also explore healthy brain function. If patients are interested, they can perform some simple computer-based experimental tasks to determine how different parts of the brain perform different functions. Previous studies from some of the great pioneers in neuroscience mapped out the motor cortex by stimulating different brain areas during neurosurgery. Current experiments continue in this tradition to explore less well charted brain areas involved in high-level thought. For example, in a recent study from Berkeley, researchers used novel brain decoding algorithms to convert brain activity associated with internal speech into actual words. This research helps us understand the fundamental neural code for the internal dialogue that underlies much of conscious thought, but it could also help develop novel tools for providing communication to those otherwise unable to generate natural speech.


From Dastjerdi et al 2013 Nature Communications (watch video below)

At Stanford, researchers were recently able to identify a brain area that codes for numbers and quantity estimation (read the study here). Critically, they were even able to show that this area is involved in everyday numerical cognition, rather than just under their specific experimental conditions. See video below.



The great generosity of these patients vitally contributes to the broader understanding of brain function. They have dedicated their valuable time in otherwise adverse circumstances to help neuroscientists explore the very frontiers of the brain. These patients are true pioneers.





Key References

Dastjerdi, M., Ozker, M., Foster, B. L., Rangarajan, V., & Parvizi, J. (2013). Numerical processing in the human parietal cortex during experimental and natural conditions. Nat Commun, 4, 2528.


Pasley, B. N., David, S. V., Mesgarani, N., Flinker, A., Shamma, S. A., Crone, N. E., Knight, R. T., & Chang, E. F. (2012). Reconstructing speech from human auditory cortex. PLoS Biol, 10, e1001251.


Video showing a number-processing brain area in everyday use:



Tuesday, 23 April 2013

In the news: Decoding dreams with fMRI

Recently Horikawa and colleagues from ATR Computational Neuroscience Laboratories in Kyoto (Japan) caused a media sensation with the publication of a study in Science that provides first-time proof-of-principle that non-invasive brain scanning (fMRI) can be used to decode dreams. Rumblings were already heard in various media circles after Yuki Kamitani presented the initial findings at the annual meeting of the Society for Neuroscience in New Orleans last year [see Mo Costandi's report]. But now that the peer-reviewed paper has been officially published, the press releases have gone out and the journal embargo has been lifted, there has been a media frenzy [e.g., here, here and here]. The idea of reading people's dreams was always bound to attract a lot of media attention.

OK, so this study is cool. OK, very cool - what could be cooler than reading people's dreams while they sleep!? But is this just a clever parlour trick, using expensive brain imaging equipment? What does it tell us about the brain, and how it works?

First, to get beyond the hype, we need to understand exactly what they have, and have not, achieved in this study. Research participants were put into the narrow bore of an fMRI scanner for a series of mid-afternoon naps (up to 10 sessions in total). With the aid of simultaneous EEG recordings, the researchers were able to detect when their volunteers had slipped off into the earliest stages of sleep (stage 1 or 2). At this point, they were woken and questioned about any dream that they could remember, before being allowed to go back to sleep again. That is, until the EEG next registered evidence of early-stage sleep, and then again they were awoken, questioned, and allowed back to sleep. So on and so forth, until at least 200 distinct awakenings had been recorded.

After all the sleep data were collected, the experimenters then analysed the verbal dream reports using a semantic network analysis (WordNet) to help organise the contents of the dreams their participants had experienced during the brain scans. The results of this analysis could then be used to systematically label dream content associated with the sleep-related brain activity they had recorded earlier.
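The labelling step can be sketched with NLTK's WordNet interface: take each reported noun, look up its synset, and walk up the hypernym hierarchy to a coarser category. The word list and the choice of depth below are purely illustrative, not the authors' actual pipeline (and the WordNet corpus must be downloaded first):

```python
# Sketch of collapsing dream-report words into coarser semantic labels
# via WordNet hypernyms (requires: nltk.download("wordnet")).
from nltk.corpus import wordnet as wn

dream_words = ["man", "woman", "street", "car", "chair"]  # illustrative

for word in dream_words:
    synset = wn.synsets(word, pos=wn.NOUN)[0]  # first noun sense
    chain = synset.hypernym_paths()[0]         # path from root to word
    # Take a mid-level ancestor as the category label (depth is arbitrary).
    label = chain[min(4, len(chain) - 1)].name().split(".")[0]
    print(f"{word:8s} -> {label}")
```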

Having identified the kind of things their participants had been dreaming about in the scanner, the researchers then searched for actual visual images that best matched the reported content of dreams. Scouring the internet, the researchers built up a vast database of images that more or less corresponded to the contents of the reported dreams. In a second phase of the experiment, the same participants were scanned again, but this time they were fully awake and asked to view the collection of images that were chosen to match their previous dream content. These scans provided the research team with individualised measures of brain activity associated with specific visual scenes. Once these patterns had been mapped, the experimenters returned to the sleep data, using the normal waking perception data as a reference map.

If it looks like a duck...

In the simplest possible terms, if the pattern of activity measured during one dream looks more like the activity associated with viewing a person than the activity associated with seeing an empty street scene, then you should say that the dream probably contains a person, if you were forced to guess. This is the essence of their decoding algorithm. They use sophisticated ways to characterise patterns in fMRI activity (a support vector machine), but essentially the idea is simply to match up, as best they can, the brain patterns observed during sleep with those measured during wakeful viewing of corresponding images. Their published result is shown on the right for different areas of the brain's visual system. Lower visual cortex (LVC) includes primary visual cortex (V1), and areas V2 and V3; whereas higher visual cortex (HVC) includes lateral occipital complex (LOC), fusiform face area (FFA) and parahippocampal place area (PPA).
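Stripped of the support vector machine, the matching step can be caricatured as a correlation between a sleep-time pattern and each category's average waking pattern, with the dream assigned to whichever template wins. A toy version with synthetic patterns:

```python
# "If it looks like a duck": assign a sleep pattern to the waking
# template it correlates with most strongly (all data synthetic).
import numpy as np

rng = np.random.default_rng(3)
n_voxels = 100

# Average waking patterns for two content categories.
templates = {
    "person": rng.normal(size=n_voxels),
    "street": rng.normal(size=n_voxels),
}

# A noisy sleep-time pattern that secretly contains the "person" signal.
sleep_pattern = templates["person"] + 1.5 * rng.normal(size=n_voxels)

corrs = {name: np.corrcoef(sleep_pattern, t)[0, 1]
         for name, t in templates.items()}
print(corrs, "->", max(corrs, key=corrs.get))
```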

Below is a more creative reconstruction of this result. The researchers have put together a movie based on one set of sleep data taken just before waking. Each frame represents the visual image from their database that best matches the current pattern of brain activity. Note that the image gets clearer towards the end of the movie because the brain activity is nearer to the time point at which the participants were woken, and therefore more likely to be reported at waking. If the content at other times did not make it into the verbal report, then the dream activity would be difficult to classify, because the corresponding waking data would not have been entered into the image database. This highlights how this approach only really works for content that has been characterised using the waking visual perception data.


OK, so these scientists have decoded dreams. The accuracy is hardly perfect, but still, the results are significantly above chance, and that's no mean feat. In fact, it has never been done before. But some might still say, so what? Have we learned anything very new about the brain? Or is this just a lot of neurohype?

Well, beyond the tour de force technical achievement of actually collecting this kind of multi-session simultaneous fMRI/EEG sleep data, these results also provide valuable insights into how dreams are represented in the brain. As in many neural decoding studies, the true purpose of the classifier is not really to make perfectly accurate predictions, but rather to work out how the brain represents information by studying how patterns of brain activity differ between conditions [see previous post]. For example, are there different patterns of visual activity during different types of dreams? Technically, this could be tested by just looking for any difference in activity patterns associated with different dream content. In machine-learning language, this could be done using a cross-validated classification algorithm. If a classifier trained to discriminate activity patterns associated with known dream states can then make accurate predictions about new dreams, then it is safe to assume that there are reliable differences in activity patterns between the two conditions. However, this only tells you that activity in a specific brain area is different between conditions. In this study, they go one step further.

By training the dream decoder using only patterns of activity associated with the visual perception of actual images, they can also test whether there is a systematic relationship between the way dreams are represented and the way actual everyday perception is represented in the brain. This cross-generalisation approach helps isolate the shared features between the two phenomenological states. In my own research, we have used this approach to show that visual imagery during normal waking selectively activates patterns in high-level visual areas (lateral occipital complex: LOC) that are very similar to the patterns associated with directly viewing the same stimulus (Stokes et al., 2009, J Neurosci). The same approach can be used to test for other coding principles, including high-order properties such as position-invariance (Stokes et al., 2011, NeuroImage), or the pictorial nature of dreams, as studied here. As in our previous findings during waking imagery, Horikawa et al show that the visual content of dreams shares similar coding principles with direct perception in higher visual brain areas. Further research, using a broader base of comparisons, will provide deeper insights into the representational structure of these inherently subjective and private experiences.
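In code, the difference between ordinary cross-validation and cross-generalisation is just which trials the classifier is trained and tested on. A schematic sketch, in which a shared synthetic signal component stands in for a code that is common to perception and dreaming:

```python
# Cross-generalisation: train on "perception" trials, test on "dream"
# trials; above-chance transfer implies a shared pattern code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n_trials, n_voxels = 100, 80
shared = rng.normal(size=n_voxels)  # code common to both states

def make_data(noise):
    y = np.repeat([0, 1], n_trials // 2)
    X = noise * rng.normal(size=(n_trials, n_voxels))
    X[y == 1] += shared  # category signal rides on state-specific noise
    return X, y

X_wake, y_wake = make_data(noise=1.0)    # perception data (training)
X_sleep, y_sleep = make_data(noise=2.0)  # dream data (testing, noisier)

clf = LogisticRegression(max_iter=1000).fit(X_wake, y_wake)
print(f"wake -> sleep transfer accuracy: {clf.score(X_sleep, y_sleep):.2f}")
```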

Many barriers remain for an all-purpose dream decoder

When the media first picked up this story, the main question I was asked went something like: are scientists going to be able to build dream decoders? In principle, yes, this result shows that a well-trained algorithm, given good brain data, is able to decode some of the content of dreams. But as always, there are plenty of caveats and qualifiers.

Firstly, the idea of downloading people's dreams while they sleep is still a very long way off. This study shows that, in principle, it is possible to use patterns of brain activity to infer the contents of people's dreams, but only at a relatively coarse resolution. For example, it might be possible to distinguish between patterns of activity associated with a dream containing a person or an empty street, but it is another thing entirely to decode which person, or which street, not to mention all the other nuances that make dreams so interesting.

To boost the 'dream resolution' of any viable decoding machine, the engineer would need to scan participants for much MUCH longer, using many more visual exemplars to build up an enormous database of brain scans to use as a reference for interpreting more subtle dream patterns. In this study, the researchers took advantage of prior knowledge of specific dream content to limit their database to a manageable size. By verbally assessing the content of dreams first, they were able to focus on just a relatively small subset of all the possible dream content one could imagine. If you wanted to build an all-purpose dream decoder, you would need an effectively infinite database, unless you could discover a clever way to generalise from a finite set of exemplars to reconstruct infinitely novel content. This is an exciting area of active research (e.g., see here).

Another major barrier to a commercially available model is that you would also need to characterise this data for each individual person. Everyone's brain is different, unique at birth and further shaped by individual experiences. There is no reason to believe that we could build a reliable machine to read dreams without taking this kind of individual variability into account. Each dream machine would have to be tuned to each person's brain.


Finally, it is also worth noting that the method used in this experiment requires some pretty expensive and unwieldy machinery. Even if all the challenges set out above were solved, it is unlikely that dream readers for the home will be hitting the shelves any time soon. Other cheaper and more portable methods for measuring brain activity, such as EEG, can only really be used to identify different sleep stages, not what goes on inside them. Electrodes placed directly into the brain could be more effective, but at the cost of invasive brain surgery.


For the moment, it is probably better just to keep a dream journal.

Reference:


Horikawa, T., Tamaki, M., Miyawaki, Y., & Kamitani, Y. (2013). Neural decoding of visual imagery during sleep. Science [here]

Monday, 18 June 2012

In the news: Mind Reading

Mind reading tends to capture the headlines. And these days we don't need charlatan mentalists to perform parlour tricks before a faithful audience - we now have true scientific mind reading. Modern brain imaging tools allow us to read the patterns of brain activity that constitute mind... well, sort of. I decided to write this post in response to a recent Nature News Feature on research into methods for reading the minds of patients without any other means of communication. In this post, I consider what modern brain imaging brings to the art of mind reading.

Mind reading as a tool for neuroscience research



First, it should be noted that almost any application of brain imaging in cognitive neuroscience can be thought of as a form of mind reading. Standard analytic approaches test whether we can predict brain activity from changes in cognitive state (e.g., in statistical parametric mapping). It is straightforward to turn this equation around to predict mental state from brain activity. With this simple transformation, the huge majority of brain imaging studies are doing mind reading. Moreover, a class of analytic methods known as multivariate (or multivoxel) pattern analysis (or classification) comes even closer to mind reading for research purposes. Essentially, these methods rely on a two-stage procedure. The first step is to learn which patterns of brain activity correspond to which cognitive states. Next, these learned relationships are used to predict the cognitive state associated with new brain activity. This train/test procedure is, strictly speaking, "mind reading", but essentially as a by-product.

In fact, the main advantage of this form of mind reading in research neuroscience is that it provides a powerful method for exploring how complex patterns in brain data vary with the experimental condition. Multivariate analysis can also be performed the other way around (by predicting brain activity from behaviour, see here), and similarly, there is no reason why train-test procedures can't be used for univariate analyses. In this type of research, the purpose is not actually to read the mind of cash-poor undergraduates who tend to volunteer for these experiments, but rather to understand the relationship between mind and brain.

Statistical methods for prediction provide a formal framework for this endeavour, and although they are a form of mind reading, it is unlikely to capture the popular imagination once the finer details are explained. Experiments may sometimes get dressed up like a mentalist's parlour trick (e.g., "using fMRI, scientists could read the contents of consciousness"), but such hype invariably leaves those who actually read the scientific paper a bit disappointed by the more banal reality (e.g., "statistical analysis could predict significantly above chance whether participants were seeing a left or right tilted grating"... hardly the Jedi mind trick, but very cool from a neuroscientific perspective), or contributes to paranoid conspiracy theories in those who didn't read the paper but have an active imagination.

Mind reading as a tool for clinical neuroscience


So, in neuroscientific research, mind reading is most typically used as a convenient tool for studying mind-brain relationships. However, the ability to infer mental states from brain activity has some very important practical applications. For example, in neural prosthesis, internal thoughts are decoded by "mind reading" algorithms to control external devices (see previous post here). Mind reading may also provide a vital line of communication to patients who are otherwise completely unable to control any voluntary movement.

Imagine you are in an accident. You suffer serious brain damage that leaves you with eye blinking as your only voluntary movement for communicating with the outside world. That's bad, very bad in fact - but in time you might perfect this new form of communication, and eventually you might even write a good novel, with sufficient blinking and heroic patience. But now imagine that your brain damage is just a little bit worse, and now you can't even blink your eyes. You are completely locked in, unable to show the world any sign of your conscious existence. To anyone outside, you appear completely without a mind. But inside, your mind is active. Maybe not as sharp and clear as it used to be, but still alive with thoughts, feelings, emotions, hopes and fears. Now mind reading, at any level, becomes more than just a parlour trick.
"It is difficult to imagine a worse experience than to be a functioning mind trapped in a body over which you have absolutely no control" Prof Chris Frith, UCL [source here]
As a graduate student in Cambridge, I volunteered as a control participant in a study conducted by Adrian Owen to read mental states with fMRI for just this kind of clinical application (since published in Science). While I lay in the scanner, I was instructed to either imagine playing tennis or to spatially navigate around a familiar environment. The order was up to me, but it was up to Adrian and his group to use my brain response to predict which of these two tasks I was doing at any given time. I think I was quite bad at spatially navigating, but whatever I did inside my brain was good enough for the team to decode my mental state with remarkable accuracy.

Once validated in healthy volunteers (who, conveniently enough, can reveal which task they were doing inside their head, thus the accuracy of the predictions can be confirmed), Adrian and his team then applied this neuroscientific knowledge to track the mental state of a patient who appeared to be in a persistent vegetative state. When they asked her to imagine playing tennis, her brain response looked just like mine (and other control participants), and when asked to spatially navigate, her brain looked just like other brains (if not mine) engaged in spatial navigation.

In this kind of study, nothing very exciting is learned about the brain, but something else extremely important has happened: someone has been able to communicate for the first time since being diagnosed as completely non-conscious. Adrian and his team have further provided proof-of-principle that this form of mind reading can be applied to other patients to test their level of conscious awareness (see here). By following the instructions, some patients were able to demonstrate for the first time a level of awareness that was previously completely undetected. In one further example, they even show that this brain signal can be used to answer some basic yes/no questions.

This research has generated an enormous amount of scientific, clinical and public interest [see his website for examples]. As quoted in a recent Nature News Feature, Adrian has since been "awarded a 7-year Can$10-million Canada Excellence Research Chair and another $10 million from the University of Western Ontario" and "is pressing forward with the help of three new faculty members and a troop of postdocs and graduate students". Their first goal is to develop cheaper and more effective means of using non-invasive methods like fMRI and EEG to restore communication. However, one could also imagine a future for invasive recording methods. Bob Knight's team in Berkeley has been using electrical recordings made directly from the brain surface to decode speech signals (see here for a great summary in the Guardian by Ian Sample). Presumably, this kind of method could be considered for patients identified as partially conscious.

See also an interesting interview with Adrian by Mo Costandi in the Guardian

References:
Monti, M. M., et al. (2010). Willful modulation of brain activity in disorders of consciousness. New England Journal of Medicine
Owen, A. M., et al. (2006). Detecting awareness in the vegetative state. Science
Pasley, B. N., et al. (2012). Reconstructing speech from human auditory cortex. PLoS Biology

Thursday, 24 May 2012

In the news: More neural prosthetics

Last week we heard about the retinal implant, this week is all about the neural prosthetic arm (video). As part of a clinical trial conducted by the BrainGate team, patients suffering long-term tetraplegia (paralysis of all four limbs and the torso) were implanted with tiny 4x4 mm 96-channel microelectrode arrays. Signals from the primary motor cortex were then recorded and analysed to decode action commands that could then be used to drive a robotic arm. According to one of the patients:
"At the very beginning I had to concentrate and focus on the muscles I would use to perform certain functions. BrainGate felt natural and comfortable, so I quickly got accustomed to the trial."
Plugging directly into the motor cortex to control a robotic arm could open up a whole host of possibilities, if the even larger host of methodological obstacles can be overcome. Neuroscientists have become increasingly good at decoding brain signals, especially those controlling action, and are continually fine-tuning these skills (see here in the same issue of Nature for another great example of the basic science that ultimately underpins these kinds of clinical applications). The biggest problem, however, is likely to be the bioengineering challenge of developing implants that can read brain activity without damaging neurons over time. The build-up of scar tissue around the electrodes will inevitably reduce the quality of the signal. As noted by the authors:
"The use of neural interface systems to restore functional movement will become practical only if chronically implanted sensors function for many years" 
They go on to say that one of their experimental participants had been implanted with their electrode array some 5 years earlier. Although they concede that the quality of the signal had degraded over that time, it was still sufficiently rich to decode purposeful action. They suggest that:
"the goal of creating long-term intracortical interfaces is feasible"
These results are certainly encouraging; however, such high-profile trials should not overshadow other excellent research into non-invasive methods for brain-computer interfaces. Avoiding neurosurgical procedures has obvious appeal, and would also allow for more flexibility in updating hardware as new developments arise.
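At its core, the decoding step in this kind of work is a regression from population firing rates to movement parameters. The real BrainGate pipeline involves Kalman filtering and careful online calibration, so the sketch below is only a minimal linear stand-in on synthetic data:

```python
# Minimal sketch of decoding hand velocity from population firing rates.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(5)
n_samples, n_units = 2000, 96                # e.g. a 96-channel array

velocity = rng.normal(size=(n_samples, 2))   # true (x, y) hand velocity
tuning = rng.normal(size=(2, n_units))       # per-unit directional tuning

# Firing rates: linear (cosine-tuning-like) mix of velocity, plus noise.
rates = velocity @ tuning + rng.normal(size=(n_samples, n_units))

# Calibrate the decoder on the first half, then drive the "arm" with it.
decoder = Ridge().fit(rates[:1000], velocity[:1000])
predicted = decoder.predict(rates[1000:])
r = np.corrcoef(predicted[:, 0], velocity[1000:, 0])[0, 1]
print(f"decoded vs. true x-velocity correlation: {r:.2f}")
```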



References:

Hochberg, Bacher, Jarosiewicz, Masse, Simeral, Vogel, Haddadin, Liu, Cash, van der Smagt & Donoghue (2012). Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature, 485(7398):372-5.

Ethier, Oby, Bauman & Miller (2012) Restoration of grasp following paralysis through brain-controlled stimulation of muscles. Nature, 485(7398):368-71.