Wednesday 30 May 2012

A simple plan for open access?

I was just chatting with Chris Chambers, and we came up with this simple 6-step plan to solve the Open Access problem:

1. Submit your paper to your Journal of Choice

2. With editor approval, go to review

3. With reviewer approval, make suggested changes, tweak figures, add caveats, improve the science, etc.

4. Repeat steps 2-3, or 1-3 as necessary

5. Finally, with editor approval, receive acceptance email (and notification of publication cost, copyright restrictions, etc)

6. Now, here's the sting: take your accepted peer-reviewed paper and publish it yourself, online, along with all the reviewer comments, your replies to reviewers (plus further reviewer comments, further replies, etc.) and, most importantly, the final decision email - i.e., your proof of endorsement from said Journal of Choice

Are you brave enough to follow these 6 simple steps to DIY Open Access? It is fully peer-reviewed, and endorsed by your Journal of Choice, with its good reputation and respectable impact factor. I am not, and so instead I just signed this petition to the White House to: Require free access over the Internet to scientific journal articles arising from taxpayer-funded research. If you haven't done so already, get to it! Non-US signatories are welcome...

Also see: http://deevybee.blogspot.co.uk/2012/01/time-for-academics-to-withdraw-free.html


Monday 28 May 2012

A Tale of Two Evils: Bad statistical inference and just bad inference

Evil 1:  Flawed statistical inference

There has been a recent lively debate about the hazards of functional magnetic resonance imaging (fMRI), and which claims to believe (or not) in the scientific and/or popular literature [here, and here]. The focus has been on flawed statistical methods for assessing fMRI data, and in particular on the failure to correct for multiple comparisons [see also here at the Brain Box]. There was quite good consensus within this debate that the field is well attuned to the problem, and has taken sound and serious steps to preserve the validity of statistical inferences in the face of mass data collection. Agreed, there are certainly papers out there that have failed to use appropriate corrections, and the resulting statistical inferences are therefore flawed. But hopefully these can be identified, and reconsidered by the field. A freer and more dynamic system of publication could really help in this kind of situation [e.g., see here]. The same problems, and solutions, apply to fields beyond brain imaging [e.g., see here].

But I feel it is worth pointing out that the consequence of such failures is a matter of degree, not kind. Although statistical significance is often presented as a categorical value (significant vs non-significant), the threshold is of course arbitrary, as undergraduates are often horrified to learn (why P<.05? yes, why indeed??). When we fail to correct for multiple comparisons, the expected probabilities change, and therefore the reported statistical significance is misrepresented. Yes, this is bad; this is Evil 1. But perhaps there is a greater, more insidious evil to beware.
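To put the "expected probabilities change" point in concrete terms, here is a minimal back-of-the-envelope sketch (assuming independent tests, which neighbouring voxels certainly are not):

```python
# Minimal sketch: the chance of at least one false positive across m
# independent tests at alpha = .05 (independence is a simplifying assumption).
alpha = 0.05
for m in (1, 10, 100, 1000):
    fwer = 1 - (1 - alpha) ** m
    print(f"{m:5d} tests: P(at least one false positive) = {fwer:.3f}")
# 1 test: 0.050; 10 tests: 0.401; 100 tests: 0.994; 1000 tests: ~1.000
```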

Evil 2: Flawed inference, period.

Whatever our statistical tests say, or do not say, ultimately it is the scientist, journalist, politician, skeptic, whoever, who interprets the result. One of the most serious and common problems is flawed causal inference: "because brain area X lights up when I think about/do/say/hear/dream/hallucinate Y, area X must cause Y". Again, this is a very well known error; undergraduates typically have it drilled into them, and most should be able to recite the mantra: "fMRI is correlational, not causal". Yet time and again we see this flawed logic hanging around, causing trouble.

There are of course other conceptual errors at play in the literature (e.g., that there must be a direct mapping between function and structure; that each cognitive concept we can imagine must have its own dedicated bit of brain, etc.), but I would argue that fMRI is actually doing more to banish than to reinforce ideas we largely inherited from the 19th Century. The mass of brain imaging data, corrected or otherwise, will only further challenge these old ideas, as it becomes increasingly obvious that function is mediated via a distributed network of interrelated brain areas (ironically, ultra-conservative statistical approaches may actually obscure the network approach to brain function). However, brain imaging, even in principle, cannot disentangle correlation from causality. Other methods can, but as Vaughan Bell poetically notes:
Perhaps the most important problem is not that brain scans can be misleading, but that they are beautiful. Like all other neuroscientists, I find them beguiling. They have us enchanted and we are far from breaking their spell. [from here]
In contrast, the handful of methods (natural lesions, TMS, tDCS, animal ablation studies) that allow us to test the causal role of brain function do not readily generate beautiful pictures, and perhaps therefore suffer a prejudice that keeps them under-represented in peer-reviewed journals and/or the popular press. It would be interesting to assess the role of beauty in publication bias...

Update - For even more related discussion, see:
http://thermaltoy.wordpress.com/2012/05/28/devils-advocate-uncorrected-stats-and-the-trouble-with-fmri/
http://www.danielbor.com/dilemma-weak-neuroimaging/
http://neuroskeptic.blogspot.co.uk/2012/04/fixing-science-systems-and-politics.html

Sunday 27 May 2012

The Science in the Middle


When it comes to controversies, science can find itself stuck between GMO doomsayers on the left and climate change deniers on the right. To the former, science may be the evil arm of big business interests; to the latter, a bunch of left-wing saboteurs intent on halting progress and civilisation. Both sides of the argument can be hijacked for political point-scoring.

Today, a united front of self-proclaimed "Geeks in the Park" staged a counter-protest against the anti-GMO group "Take the Flour Back", which hopes to halt an experimental trial in Harpenden testing a genetically modified wheat crop. It is a pretty fiery debate. Following it live on Twitter today, I saw quite a few inflammatory things said on both sides.


From a tweeting Green Party politician on the anti-GMO side:
The mouth frother is still here. Being debated with. I must say, very brave of him to mix with us. Credit for that.
And from a tweeting Labour Party politician on the anti-anti-GMO side:
Have lots of anti-histamines & am tempted to offer them to any sneezing anti-GM protestors. But animal tested
But jokes aside, this is obviously an important issue. Research into how we (i.e., a very large number of people, who show every sign of becoming an ever-larger number!) are going to feed ourselves through the 21st century is probably one of the most pressing issues facing science today. There will be many routes that need to be explored, including genetic modification (following in the tradition of the great 19th century Augustinian friar Gregor Mendel) as well as other agricultural developments (which will also have potential risks and unforeseen side effects - everything does!). Moreover, impending climate change makes this research even more urgent. But here we find science attacked from the other side of the political spectrum. The list of strongly worded claims and counter-claims is pretty long, but for a taste of the controversy see here for one scientist's perspective (and ensuing comments) and here for the latest views from climate-change skeptics.

Truth before beauty? Making sense of mass data

Modern brain imaging methods can produce some remarkably beautiful images, and sometimes we are won over by them. In his opinion piece in today's Observer, Vaughan Bell highlights some of the multifarious problems that may arise in brain imaging, and in particular, functional magnetic resonance imaging (fMRI). During a typical fMRI experiment, we record many thousands of estimates of neural activity across the whole brain every second. At the end, we have an awful lot of data, and potentially an embarrassment of riches. Firstly, where do we begin looking for interesting effects? And when we find something that could be interesting, how do we know that it is 'real', and not just the kind of lucky find that is bound to accompany an exhaustive search?

Bell focuses on this latter problem, highlighting in particular the problem of multiple comparisons. Essentially, the more we look, the more likely we are to find something by chance (i.e., some segment of random noise that doesn't look random - e.g., when eventually a thousand monkeys string together a few words from Hamlet). This is an extremely well known problem in neuroscience, and indeed in any other science fortunate enough to have at its disposal methods for collecting so much data. Various statistical methods have been introduced, and debated, to deal with this problem. Some of these have been criticised for not doing what they say on the tin (i.e., overestimating the true statistical significance, e.g., see here), but there is also an issue of appropriateness. Most neuroimagers know the slightly annoying feeling you get when you apply the strictest correction to your data set and find an empty brain. Surely there must be some brain area active in my task? Or have I discovered a new form of cognition that does not depend on the physical properties of the brain?! So we lower the threshold a bit, and suddenly some sensible results emerge.
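To see the scale of the problem, here is a toy simulation (a sketch only, not any published pipeline): run a one-sample t-test at every 'voxel' of pure noise, with and without a simple Bonferroni correction.

```python
# Toy simulation: thresholding pure-noise 'voxels' with and without
# correction for multiple comparisons. No real effect exists anywhere.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_voxels, n_subjects = 50_000, 20
data = rng.standard_normal((n_subjects, n_voxels))      # null data
t, p = stats.ttest_1samp(data, popmean=0, axis=0)

alpha = 0.05
print("uncorrected 'activations':", np.sum(p < alpha))            # roughly 2,500 false positives
print("Bonferroni 'activations': ", np.sum(p < alpha / n_voxels)) # usually 0
```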

This is where we need to be extremely careful. In some sense, the eye can perform some pretty valid statistical operations. We can immediately see whether there is any structure in the image (e.g., symmetry, etc.), and we can also tell whether there seems to be a lot of 'noise' (e.g., other random-looking blobs). But now we are strongly influenced by our hopes and expectations. We ran the experiment to test some hypothesis, and our eye is bound to be more sympathetic to seeing something interesting in noise (especially as we have spent a lot of hard-earned grant money to run the experiment, and are under a lot of pressure to show something for it!). While expectations can be useful (i.e., the expert eye), they can also perpetuate bad science - once falsehoods slip into the collective consciousness of the neuroscientific community, they can be hard to dispel. Finally, structure is a truly deceptive beast. We are often completely captivated by its beauty, even when the structure comes from something quite banal (e.g., a smoothing kernel, a respiratory artifact, etc.).
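On that last point, even pure noise becomes captivating once it has passed through a smoothing kernel. A quick, purely synthetic illustration:

```python
# Purely synthetic illustration: spatial smoothing turns featureless noise
# into spatially coherent 'blobs' that the eye happily reads as structure.
import numpy as np
from scipy.ndimage import gaussian_filter, label

rng = np.random.default_rng(1)
noise = rng.standard_normal((64, 64))        # one random 'slice', no real signal
smoothed = gaussian_filter(noise, sigma=3)   # a typical smoothing kernel

clusters, n_clusters = label(smoothed > 2 * smoothed.std())
print("apparent clusters in pure noise:", n_clusters)
```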

So, we need to be conservative. But how conservative? To be completely sure we don't say anything wrong, we should probably just stay at home and run no experiments - zero chance of false positives. But if we want to find something out about the brain, we need to take some risks. However, we don't need to be complete cowboys about it either. Plenty of pioneers have already laid the groundwork for us to explore data whilst controlling for many of the problems of multiple comparisons, so we can start to make some sense of the beautiful and rich brain imaging data now clogging up hard drives all around the world.

These issues are not in any way unique to brain imaging. Exactly the same issues arise in any science lucky enough to suffer this embarrassment of riches (genetics, meteorology, epidemiology, to name just a few). And I would always defend mass data collection as inherently good. Although it raises problems, how can we really complain about having too much data? Many neuroimagers today even feel that fMRI is too limited - if only we could measure with high temporal resolution as well! Progress in neuroscience (or indeed any empirical science) is absolutely dependent on our ability to collect the best data we can, but we also need clever analysis tools to make some sense of it all.



Update - For even more related discussion, see:
http://mindhacks.com/2012/05/28/a-bridge-over-troubled-waters-for-fmri/
http://thermaltoy.wordpress.com/2012/05/28/devils-advocate-uncorrected-stats-and-the-trouble-with-fmri/
http://www.danielbor.com/dilemma-weak-neuroimaging/


Thursday 24 May 2012

In the news: More neural prosthetics

Last week we heard about the retinal implant; this week is all about the neural prosthetic arm (video). As part of a clinical trial conducted by the BrainGate team, patients suffering long-term tetraplegia (paralysis including all limbs and torso) were implanted with tiny 4x4mm 96-channel microelectrode arrays. Signals from the primary motor cortex were then recorded and analysed to decode action commands that could be used to drive a robotic arm. According to one of the patients:
"At the very beginning I had to concentrate and focus on the muscles I would use to perform certain functions. BrainGate felt natural and comfortable, so I quickly got accustomed to the trial."
Plugging directly into the motor cortex to control a robotic arm could open up a whole host of possibilities, if the even larger host of methodological obstacles can be overcome. Neuroscientists have become increasingly good at decoding brain signals, especially those controlling action, and are continually fine-tuning these skills (see here in the same issue of Nature for another great example of the basic science that ultimately underpins these kinds of clinical applications). The biggest problem, however, is likely to be the bioengineering challenge of developing implants that can read brain activity without damaging neurons over time. The build-up of scar tissue around the electrodes will inevitably reduce the quality of the signal. As noted by the authors:
"The use of neural interface systems to restore functional movement will become practical only if chronically implanted sensors function for many years" 
They go on to say that one of their experimental participants had been implanted with their electrode array some 5 years earlier. Although they concede that the quality of the signal had degraded over that time, it was still sufficiently rich to decode purposeful action. They suggest that:
"the goal of creating long-term intracortical interfaces is feasible"
These results are certainly encouraging; however, such high-profile trials should not overshadow other excellent research into non-invasive methods for brain-computer interfaces. Avoiding neurosurgical procedures has obvious appeal, and would also allow for more flexibility in updating hardware as new developments arise.
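For the curious, the decoding step can be sketched in a few lines. This is emphatically not the BrainGate team's pipeline (the published work used more sophisticated filter-based decoders); it is just a minimal, simulated illustration of mapping binned firing rates from a 96-channel array onto intended movement velocity.

```python
# Minimal, simulated sketch of neural decoding: binned firing rates from a
# 96-channel array are regressed onto intended 2D movement velocity.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_bins, n_channels = 2000, 96
velocity = rng.standard_normal((n_bins, 2))               # intended (x, y) velocity
tuning = rng.standard_normal((2, n_channels))             # each unit's directional tuning
rates = velocity @ tuning + rng.standard_normal((n_bins, n_channels))  # noisy firing rates

decoder = Ridge(alpha=1.0).fit(rates[:1500], velocity[:1500])
r2 = decoder.score(rates[1500:], velocity[1500:])
print(f"decoding R^2 on held-out bins: {r2:.2f}")
```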



References:

Hochberg, Bacher, Jarosiewicz, Masse, Simeral, Vogel, Haddadin, Liu, Cash, van der Smagt & Donoghue (2012) Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature, 485(7398):372-5.

Ethier, Oby, Bauman & Miller (2012) Restoration of grasp following paralysis through brain-controlled stimulation of muscles. Nature, 485(7398):368-71.

Saturday 12 May 2012

Journal Club: Twists and turns through memory space



You enter an unfamiliar building for a job interview. The receptionist tells you to make a left turn at the end of the corridor to get to your interviewer’s office. Easy instructions, but your brain has to remember them nonetheless. For the past decade, theoretical neuroscientists have proposed that, to do this job, neurons in the parietal cortex act as a kind of memory container: once you have learned that you need to make a left, dedicated ‘left-turn’ neurons are persistently active until you have reached the end of the hall, have turned, and can forget about it again. In addition to having lots of supporting evidence and enjoying intuitive appeal, the memory-container model has the advantage that, once the appropriate neurons are activated, they can potentially hold on to the ‘left-turn’ memory indefinitely (for instance, allowing you to get a drink of water before heading to the office).

However, a recent paper in the journal Nature has added to a growing list of evidence contradicting this model. In the paper, Princeton researchers Christopher Harvey, Philip Coen, and David Tank describe how 'left-turn' neurons in the parietal cortex of mice fire in a stereotyped cascade as the animals navigate along a virtual-reality corridor. The sequence begins with a small number of 'left-turn' neurons activating the next group and then falling silent again (see image), while the new group in turn activates yet another subset, and so forth until the end of the cascade is reached at the end of the corridor. In contrast to the memory-container model, this kind of dynamic activation sequence could be more similar to your car's sat nav, constantly keeping you up to date on when you will have to turn left. Like a sat nav, dynamic memories could become more prominent when you are navigating through a complicated environment and have to make a left turn at the right time or in the right place (say, for instance, that there are lots of possible left turns, and you must remember to turn behind the drinking fountain). Previous researchers may have failed to pick up on such dynamics because their memory tasks did not involve this aspect (in a typical experiment, a participant will receive instructions to make an eye movement to a certain location, remember the location for a few seconds, and then execute the movement).

After instructions to make a left- or right-hand turn at the end of a virtual reality corridor, left- or right-turn neurons activate in a specific sequence. Single neurons fall completely silent following a brief activation burst, so that the average activity during the memory delay is low. Nevertheless, the sparse but specific activation sequence is sufficient to predict whether the animal will make a left or right turn at the end of the corridor.
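As a toy illustration of that last point (a synthetic sketch, not the authors' analysis): a choice encoded purely in the order of brief, sparse bursts, with low average firing during the delay, is still easy to read out from the population pattern.

```python
# Synthetic sketch: decode left vs right from a sparse activation sequence.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_neurons, n_bins = 200, 50, 50
choice = rng.integers(0, 2, n_trials)                 # 0 = left, 1 = right

X = 0.5 * rng.standard_normal((n_trials, n_neurons, n_bins))
times = np.arange(n_bins)
for tr in range(n_trials):
    # left trials: neuron i bursts briefly at time i; right trials: reverse order
    order = times if choice[tr] == 0 else times[::-1]
    X[tr, order, times] += 2.0

clf = LogisticRegression(max_iter=1000)
clf.fit(X[:150].reshape(150, -1), choice[:150])
print("held-out decoding accuracy:", clf.score(X[150:].reshape(50, -1), choice[150:]))
```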
The notion of dynamic memories is particularly interesting to our research because it relates to the idea that memories can be an anticipation of acting in a certain way (turn left) at a specified place (the end of the corridor) and at a specified time (in about 10 seconds) - something we have been exploring in recent papers as well (see the research briefing from May 7th).

The new empirical evidence for dynamic memories now raises the theoretical challenge of showing how the brain is capable of quickly creating new sequences. After all, we are able to remember which way to go within seconds of entering a completely new environment. Another open question, which was not addressed in the article, is whether or not we can use dynamic memories to remember continuous quantities: the receptionist may tell you that the office is 40 feet away. Do you now have an activation sequence remembering '40 feet' in the parietal cortex? Is this sequence more similar to the '30 feet' sequence than to the '20 feet' sequence? Further, when we see a sign in the corridor indicating that the location of the interview has been moved, can we integrate this new information into the ongoing memory sequence? Does it then branch off into a new sequence? These many open questions should push memory research in exciting new directions.



Reference:
Harvey CD, Coen P & Tank DW (2012) Choice-specific sequences in parietal cortex during a virtual-navigation decision task. Nature, 484(7392):62-8.

Tuesday 8 May 2012

In The News: The "Bionic Eye"

It is a big news story in the UK at the moment. Surgeons at the John Radcliffe Hospital in Oxford have implanted the UK's first subretinal prosthetic device (see link). This is an exciting development toward restoring useful vision to people suffering retinal degeneration. According to the NHS website, the clinical trial involves a number of patients with retinitis pigmentosa, a progressive eye disease affecting the photoreceptors (rods and cones). So far, the results are promising. Quoting from the NHS press release:

"When his electronic retina was switched on for the first time, three weeks after the operation, James was able to distinguish light against a black background in both eyes. He is now reported to be able to recognise a plate on a table and other basic shapes, and his vision continues to improve"

Although these effects might seem modest to the sighted, they could provide a major improvement in the quality of life of those involved in the trial. To a fully blind patient, even partial vision could dramatically increase their independence. According to the manufacturer, after implantation of the chip the patient's visual ability should meet the following criteria:
  • Orientation in space 
  • Visual field of 8°-12° 
  • Capacity to see without visual aids (except glasses): at least the ability to count fingers, at best the ability to recognize faces 
  • Ability to recognize the letters of the alphabet with additional visual aids 
  • Ability to see in ambient brightness from 10 lux to 100,000 lux
To achieve all of these would certainly make a real difference to a fully blind patient. So, how does the device work? In general, there are two types of retinal implant under development: epiretinal and subretinal. Both essentially work by converting light energy into electrical energy to stimulate intact retinal cells, a job normally performed by the photoreceptors (rods and cones) that have been damaged by disease. The epiretinal variety uses an external video camera that transmits a processed signal to the implant, which in turn activates the retinal cells corresponding to a pixelated representation of the image. The subretinal implant, used in this trial, is fitted behind the retina, and its microphotodiodes directly convert light into electrical impulses to stimulate retinal cells. The principal advantage of the subretinal device is that everything is internal to the implant (except for a small power source fitted under the skin). To quote Professor MacLaren:

 

"What makes this unique is that all functions of the retina are integrated into the chip. It has 1,500 light sensing diodes and small electrodes that stimulate the overlying nerves to create a pixellated image. Apart from a hearing aid-like device behind the ear, you would not know a patient had one implanted."



Moreover, directly stimulating retinal cells rather than ganglion cells results in a more direct and natural correspondence between the implant and the underlying biology. Essentially, this reflects a general trade-off principle in neuroprosthetics. The simplest and most effective devices (e.g., cochlear implants) utilize the existing organisation of primary receptor surfaces, and/or their close neighbours, thereby minimizing the engineering challenge of interfacing with more complex neural coding schemes. But this only works if those structures remain intact. To bypass the entire sensory organ and project directly to the cortex is an entirely different game.

Monday 7 May 2012

Research Meeting: Memory and Attention


Last month saw a meeting of academics from across the UK, and abroad, exploring a common theme: the interaction between attention and memory. 

These are core concepts in cognitive neuroscience, with a rich tradition of research stretching from the seminal behavioural and neuropsychological studies of the first half of the last century to contemporary cognitive neuroscience, with all the bells and whistles of brain imaging and stimulation. Yet, for all these developments, relatively little is known about how these core functions interact. This was the motivation for Duncan Astle (MRC Cognition and Brain Sciences Unit, Cambridge) to propose to the British Academy a two-day meeting of leading academics interested in the interaction between attention and memory. Not to mention that they are just a fun group of people, and therefore a good excuse for a get-together in London...

Speakers were invited to present their latest research concerning links between attention and memory. Although the scope of "memory" is broad (e.g., iconic memory, working memory, long-term memory), most delegates took the opportunity to focus on short-term and/or working memory. Check out the website for more information on individual presentations, including audio and video downloads!

Research Briefing: How memory influences attention

Background


In the late 19th Century, the great polymath Hermann von Helmholtz eloquently described how our past experiences shape how we see the world. Given the optical limitations of the eye, he concluded that the rich experience of vision must be informed by a lot more than meets the eye. In particular, he argued that we use our past experiences to infer the perceptual representation from the imperfect clues that pass from the outside world to the brain. 


Consider the degraded black and white image below. It is almost impossible to interpret, until you learn that it is a Dalmatian. Now it is almost impossible not to see the dog in dappled light.

More than one hundred years after Helmholtz, we are now starting to understand the brain mechanisms that mediate this interaction between memory and perception. One important direction follows directly from Helmholtz's pioneering work. Often couching the idea in more contemporary language, such as Bayesian inference, vision scientists are beginning to understand how our perceptual experience is determined by the interaction between sensory input and the perceptual knowledge established through past experience of the world. 

Prof Nobre (cognitive neuroscientist, University of Oxford) has approached this problem from a slightly different angle. Rather than ask how memory shapes the interpretation of sensory input, she took one step back to ask how past experience prepares the visual system to process memory-predicted visual input. With this move, Nobre's research draws on a rich history of cognitive neuroscientific research in attention and long-term memory. 

Although both attention and memory have been thoroughly studied in isolation, very little is actually known about how these two core cognitive functions interact in everyday life. In 2006, Nobre and colleagues published the results of a brain imaging experiment designed to identify the brain areas involved in memory-guided attention (Summerfield et al., 2006, Neuron). Participants in this experiment first studied a large number of photographs depicting natural everyday scenes. The instruction was to find a small target object embedded in each scene, very much like the classic Where's Wally game.


After performing the search task a number of times, participants learned the location of the target in each scene. When Nobre and her team tested their participants again on a separate day, they found that people were able to use the familiar scenes to direct attention to the previously learned target location. 


Next, the research team repeated this experiment, but this time changes in brain activity were measured in each participant while they used their memories to direct the focus of their attention. With functional magnetic resonance imaging (fMRI), the team found an increase in neural activity in brain areas associated with memory (especially the hippocampus) as well as a network of brain areas associated with attention (especially parietal and prefrontal cortex). 

This first exploration of memory-guided attention (1) confirmed that participants can use long-term memory to guide attention, and (2) further suggested that the brain areas that mediate long-term memory could interact with attention-related areas to support this coalition. However, due to methodological limitations at the time, there was no way to separate activity associated with memory-guided preparatory attention from the consequences of past experience on perception (e.g., Helmholtzian inference). This was the aim of our follow-up study.

The Current Study: Design and Results 


In collaboration with Nobre and colleagues, we combined multiple brain imaging methods to show that past experience can change the activation state of visual cortex in preparation for memory-predicted input (Stokes, Atherton, Patai & Nobre, 2012, PNAS). Using electroencephalography (EEG), we demonstrated that memories can reduce inhibitory neural oscillations in visual cortex at memory-specific spatial locations.
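As a rough sketch of the kind of measure involved (assuming the inhibitory rhythm in question is the 8-12 Hz alpha band, with simulated data standing in for a real occipital channel; this is not the paper's exact analysis pipeline), alpha power can be estimated like this:

```python
# Sketch: estimate alpha-band (8-12 Hz) power at a simulated occipital channel.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250                                            # sampling rate (Hz), assumed
t = np.arange(0, 2, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)   # fake trace

b, a = butter(4, [8, 12], btype="bandpass", fs=fs)  # alpha band-pass filter
alpha = filtfilt(b, a, eeg)
power = np.abs(hilbert(alpha)) ** 2                 # instantaneous alpha power

# Comparing mean power for cued vs uncued locations (or pre- vs post-cue epochs)
# indexes the anticipatory release from inhibition described above.
print("mean alpha power:", round(float(power.mean()), 3))
```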

With fMRI, we further showed that this change in electrical activity is accompanied by an increase in activity in the brain areas that represent the memory-predicted spatial location. Together, these results provide key convergent evidence that past experience alone can shape activity in visual cortex to optimise processing of memory-predicted information. 


Finally, we were also able to provide the most compelling evidence to date that memory-guided attention is mediated via interactions between the hippocampus, prefrontal cortex and parietal cortex. However, further research is needed to verify this speculation. In particular, we cannot yet confirm whether activation of the attention network is necessary for memory-guided preparation of visual cortex, or whether a direct pathway between the hippocampus and visual cortex is sufficient for the changes in preparatory activity observed with fMRI and EEG. This is now the focus of on-going research.