Showing posts with label TMS.

Thursday, 17 January 2013

Research Briefing: Targeting "silent" brain areas with TMS


A major challenge in neuroscience is how to study brain processes that are securely encased within the skull. Over the last twenty years, there has been enormous progress in non-invasive brain imaging methods. In particular, functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) allow researchers to measure brain activity from outside the head.

Although brain imaging methods allow us to peer inside the head and watch the brain in action, we also need to be able to perturb brain function to understand more fully what observed brain activity is actually doing. We will never understand the brain by just watching it - we also need to be able to poke around to see what happens when certain processes are disrupted. In formal terms, we can only verify causality by disrupting brain activity and observing the consequences.

The most effective method for non-invasive brain disruption is transcranial magnetic stimulation (TMS). TMS disrupts brain activity by delivering a focal magnetic pulse at the scalp surface. The magnetic field passes through the scalp and skull, stimulating the underlying brain cells and thereby disrupting brain function.

TMS is the only method currently available in human neuroscience to disrupt specific brain areas and measure the consequences for brain function. TMS has been in use in labs across the world for more than 25 years, and sophisticated methods have been developed for targeting specific brain areas (see neuronavigation, pictured right). Nevertheless, it remains relatively unclear exactly how best to set stimulation intensity.

Setting the right stimulation level is essential for safe and effective use of TMS. Over-stimulation can cause adverse effects, such as seizure. From an experimental point of view, over-stimulation also reduces the focality of disruption, therefore complicating the interpretation of any effects. On the other hand, under-stimulation could compromise treatment in clinical settings, and lead to false negative results in research. Poor control over the stimulation intensity also compromises experimental comparisons between treatment conditions.

In a series of methodological studies performed with Chris Chambers and others, we previously explored the effect of skull thickness on brain stimulation. It is well known that the flux density of a magnetic field declines as a function of distance. As a direct consequence, if people have thicker skulls, they will require a higher intensity field at the scalp surface to activate underlying brain areas. To quantify this dependency, we varied TMS distance over motor cortex.


When TMS is applied to primary motor cortex, stimulation triggers a twitch in the muscle associated with the stimulated portion of the motor map (pictured left). This is an extremely reliable and repeatable effect, and therefore provides a very useful tool for assessing the effect of TMS. We simply varied the distance between the stimulation coil and the target brain region to characterise the relationship between distance and TMS effect (pictured right). From these initial studies, we suggested that TMS protocols could be usefully calibrated at motor cortex, and corrected for distance to derive a distance-independent estimate of cortical excitability. Distance-corrected levels could then be used to determine the appropriate stimulation intensity for 'silent' brain areas, such as non-motor brain areas for which there is no simple index of effective stimulation.
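The distance correction described above can be sketched as a simple linear adjustment. To be clear, the function name and the slope value below are illustrative placeholders, not the exact parameters reported in our papers:

```python
def distance_adjusted_intensity(motor_threshold, motor_distance_mm,
                                target_distance_mm, gradient_pct_per_mm=2.8):
    """Estimate stimulation intensity for a 'silent' target area.

    motor_threshold: % of maximum stimulator output at motor cortex
    motor_distance_mm: scalp-to-cortex distance at motor cortex
    target_distance_mm: scalp-to-cortex distance at the target area
    gradient_pct_per_mm: assumed increase in required intensity per mm
                         of extra depth (hypothetical value)
    """
    extra_distance = target_distance_mm - motor_distance_mm
    return motor_threshold + gradient_pct_per_mm * extra_distance

# A deeper target needs a proportionally higher intensity, e.g.
# 50% threshold at 15 mm, target at 18 mm -> roughly 58% output
print(distance_adjusted_intensity(50, 15, 18))
```

The point of the sketch is simply that once the distance gradient is known, a motor threshold measured at one site can be scaled to any other site from its scalp-to-cortex distance alone.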

However, distance-adjusted TMS still relies on the assumption that individual differences in response to TMS are due to variations in a general factor of cortical excitability. In this new study we tested this key assumption. We compared people's sensitivity to stimulation of motor cortex with stimulation of their visual cortex (indexed by a visual percept known as a phosphene). We found a systematic relationship between individual differences in sensitivity across stimulation sites, consistent with the idea that a common factor of cortical excitability might account for individual differences in the response to TMS.

In conclusion, this research suggests that TMS intensity can be calibrated to distance adjusted motor threshold, and applied to other brain areas. For further information, please see our paper here, or contact me directly.


References

Stokes, Barker, Dervinis, Verbruggen, Maizey, Adams & Chambers (2013) Biophysical Determinants of Transcranial Magnetic Stimulation: Effects of Excitability and Depth of Targeted Area. Journal of Neurophysiology, 109: 437-444 [pdf]

Stokes, Chambers, Gould, English, McNaught, McDonald & Mattingley (2007) Distance-adjusted motor threshold for transcranial magnetic stimulation. Clinical Neurophysiology, 118(7): 1617-1625 [pdf]

Stokes, Chambers, Gould, Henderson, Janko, Allen & Mattingley (2005) A simple metric for scaling motor threshold based on scalp-cortex distance: application to studies using transcranial magnetic stimulation. Journal of Neurophysiology, 94(6): 4520-4527 [pdf]

Monday, 28 May 2012

A Tale of Two Evils: Bad statistical inference and just bad inference

Evil 1:  Flawed statistical inference

There has been a recent lively debate on the hazards of functional magnetic resonance imaging (fMRI), and which claims to believe (or not) in the scientific and/or popular literature [here, and here]. The focus has been on flawed statistical methods for assessing fMRI data, and in particular the failure to correct for multiple comparisons [see also here at the Brain Box]. There was quite good consensus within this debate that the field is pretty well attuned to the problem, and has taken sound and serious steps to preserve the validity of statistical inferences in the face of mass data collection. Agreed, there are certainly papers out there that have failed to use appropriate corrections, and therefore the resulting statistical inferences are certainly flawed. But hopefully these can be identified, and reconsidered by the field. A freer and more dynamic system of publication could really help in this kind of situation [e.g., see here]. The same problems, and solutions, apply to fields outside brain imaging [e.g., see here].

But I feel that it may be worth pointing out that the consequence of such failures is a matter of degree, not kind. Although statistical significance is often presented as a categorical value (significant vs. non-significant), the threshold is of course arbitrary, as undergraduates are often horrified to learn (why P<.05? yes, why indeed??). When we fail to correct for multiple comparisons, the expected probabilities change, and therefore the reported statistical significance is misrepresented. Yes, this is bad, this is Evil 1. But perhaps there is a greater, more insidious evil to beware.
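To make the scale of the multiple-comparisons problem concrete, here is a small self-contained simulation (illustrative numbers only, not real imaging data): run thousands of independent tests on pure noise, and count how many cross the p < .05 threshold with and without a Bonferroni correction.

```python
import random
from statistics import NormalDist

random.seed(0)
n_tests = 10_000          # e.g. voxels; the null hypothesis is true everywhere
alpha = 0.05

# Simulate test statistics on pure noise
z = [random.gauss(0, 1) for _ in range(n_tests)]

# Uncorrected: two-tailed threshold for p < .05 (roughly 1.96)
z_uncorrected = NormalDist().inv_cdf(1 - alpha / 2)
false_positives = sum(abs(v) > z_uncorrected for v in z)

# Bonferroni: divide alpha by the number of tests before thresholding
z_bonferroni = NormalDist().inv_cdf(1 - alpha / (2 * n_tests))
surviving = sum(abs(v) > z_bonferroni for v in z)

print(false_positives)    # roughly alpha * n_tests 'significant' voxels
print(surviving)          # typically none survive correction
```

Uncorrected, around 5% of the pure-noise tests come out 'significant', which is exactly the matter-of-degree point above: the uncorrected threshold does not make the results meaningless, it just means the stated probabilities no longer describe the actual error rate.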

Evil 2: Flawed inference, period.

Whatever our statistical tests say, or do not say, ultimately it is the scientist, journalist, politician, skeptic, whoever, who interprets the result. One of the most serious and common problems is flawed causal inference: "because brain area X lights up when I think about/do/say/hear/dream/hallucinate Y, area X must cause Y". Again, this is a very well known error; undergraduates typically have it drilled into them, and most should be able to recite it like a mantra: "fMRI is correlational, not causal". Yet time and again we see this flawed logic hanging around, causing trouble.

There are of course other conceptual errors at play in the literature (e.g., that there must be a direct mapping between function and structure; that each cognitive concept we can imagine must have its own dedicated bit of brain, etc), but I would argue that fMRI is actually doing more to banish than to reinforce ideas that we largely inherited from the 19th century. The mass of brain imaging data, corrected or otherwise, will only further challenge these old ideas, as it becomes increasingly obvious that function is mediated via a distributed network of interrelated brain areas (ironically, ultra-conservative statistical approaches may actually obscure the network approach to brain function). However, brain imaging, even in principle, cannot disentangle correlation from causality. Other methods can, but as Vaughan Bell poetically notes:
Perhaps the most important problem is not that brain scans can be misleading, but that they are beautiful. Like all other neuroscientists, I find them beguiling. They have us enchanted and we are far from breaking their spell. [from here]
In contrast, the handful of methods (natural lesions, TMS, tDCS, animal ablation studies) that allow us to test the causal role of brain function do not readily generate beautiful pictures, and perhaps therefore suffer a prejudice that keeps them under-represented in peer-reviewed journals and/or the popular press. It would be interesting to assess the role of beauty in publication bias...

Update - For even more related discussion, see:
http://thermaltoy.wordpress.com/2012/05/28/devils-advocate-uncorrected-stats-and-the-trouble-with-fmri/
http://www.danielbor.com/dilemma-weak-neuroimaging/
http://neuroskeptic.blogspot.co.uk/2012/04/fixing-science-systems-and-politics.html

Saturday, 28 April 2012

Research Grant to Explore Fluid Intelligence


Thank you to the British Academy for awarding John Duncan and myself research funds to test a key hypothesis in the cognitive neuroscience of human performance: is prefrontal cortex necessary for fluid intelligence?

We will use non-invasive brain stimulation (transcranial magnetic stimulation: TMS) to temporarily ‘deactivate’ the prefrontal cortex, and then measure the consequences for performance on standard tests of fluid intelligence. It is a relatively simple experimental design, but if done correctly, the results should provide important and novel insights into the brain mechanisms underlying one of the most important human faculties: flexible reasoning and problem solving.

My co-investigator, John Duncan, gained his reputation in the cognitive neuroscience of intelligence with his seminal brain imaging study published in Science. This research demonstrated that when people perform tasks that tax fluid intelligence, neural activity increases in the prefrontal cortex relative to control tasks that require less fluid intelligence.


This result suggests that the prefrontal cortex is involved in fluid intelligence - but of course, as every undergraduate in psychology/cognitive neuroscience should be able to tell you, brain imaging alone cannot tell us whether the activated brain area is in fact necessary for performing the task.


So, to verify the causal role of the prefrontal cortex, Duncan and colleagues next examined stroke patients (published in PNAS). The logic here is simple: does damage to the prefrontal cortex reduce fluid intelligence? But the methodology is not so simple. Of particular importance: how can you tell whether a patient has a low IQ because of the brain damage, or whether they were always a low-IQ individual?

Duncan's team tackled this problem by estimating pre-damage fluid intelligence from scores on other tests that measure so-called crystallised intelligence (e.g., vocabulary and general knowledge). Critically, crystallised intelligence reflects the lifelong achievements that depend on fluid intelligence during acquisition, and therefore can be used to approximate pre-damage fluid intelligence. If the prefrontal cortex is especially important for fluid intelligence, then damage should result in a disparity between fluid and crystallised intelligence. Indeed, this is what they found.
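The logic of that discrepancy measure can be sketched in a few lines. To be clear, the function, slope and intercept below are hypothetical placeholders for illustration, not the regression parameters from the PNAS study:

```python
def fluid_deficit(observed_fluid, crystallised, slope=0.8, intercept=20.0):
    """Illustrative discrepancy score (hypothetical parameters).

    Predict pre-damage fluid IQ from a crystallised score via a simple
    linear model, then subtract the observed fluid score. A large
    positive value suggests fluid intelligence has fallen below what
    the patient's crystallised knowledge would predict.
    """
    predicted_fluid = intercept + slope * crystallised
    return predicted_fluid - observed_fluid

# e.g. a crystallised score of 110 predicts fluid IQ of about 108;
# an observed fluid score of 90 then implies a deficit of about 18 points
print(fluid_deficit(90, 110))
```

The design choice here is what makes the patient logic work: crystallised scores act as a proxy baseline, so the deficit is measured against each patient's own expected level rather than against a population average.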


As developed in his popular science book, "How Intelligence Happens", Duncan suggests that the prefrontal cortex is essential for flexible structured cognitive processing, a key ingredient to fluid intelligence. If this theory is correct, then temporary deactivation of the prefrontal cortex should impair fluid intelligence. If not, then we need to rethink this working hypothesis. 

What will these results tell us? Are we just heading back to 19th-century phrenology, associating discrete brain areas with complex, high-order human traits that are more like sociocultural inventions than principled neurocognitive constructs? Do we then plan to localise creativity here, insight there, and perhaps a little bit of moral judgment over here?


Of course, I don't think this is modern day phrenology. Rather, I would argue that this research could provide key insights into the fundamental cognitive neuroscience of this important brain area. From a theoretical perspective, we can attempt to decompose the underlying processes for fluid intelligence, and relate these to the neurophysiological principles of prefrontal function. Intelligence is not mystical or intractable. It is a specific cognitive process that we can measure, and must have a neurological basis that is an important target for cognitive neuroscience.

However, we must also be careful about how these results could be interpreted. Intelligence is a particularly sensitive area. The very concept of fluid intelligence often takes on more than it should - a reflection of the fundamental worth or even moral character of the individual.

Obviously there is some danger in reducing one of the most important cognitive mechanisms to a single number (e.g., intelligence quotient: IQ), which we can then compare between individuals and between groups. It is a dangerous business that can be exploited for any number of nefarious agendas. For example, we can try to confirm our own racial or sexist prejudices, conjuring up a biological, and therefore 'scientific', excuse for beliefs that are motivated by simple bigotry (recall the recent Watson controversy?). Conversely, the same logic could be used to pursue an equality agenda. This could also be a dangerous path to follow - what if we are not all equal in ability? I see no a priori reason that there should not be group differences in any measure, including IQ. It is simply an empirical issue, and therefore a risky business to stake our sense of equality on equal ability.

IQ is certainly a loaded concept. Recently, I was speaking with a mathematician and historian about an advert they saw for a brain imaging study comparing IQ between academics from the sciences and humanities. The historian was intrigued, and eager to participate, whereas the mathematician was much more reluctant. I guess the risk of a lower-than-hoped-for score is quite disconcerting when your very livelihood depends on an almost mythical concept of pure intelligence, or better still - genius.

An anecdote comes to mind of a researcher who was to be the first subject in an fMRI study of IQ conducted by his colleagues. Being scanned can sometimes make people nervous the first time, but this was a seasoned neuroscientist, no stranger to the confined and uncomfortable space of an MRI. Rather, what made this no ordinary scanning experience was the fact that his respected colleagues were watching from the control room, monitoring his responses to the IQ task. Enough to make any academic uncomfortable!

This kind of awkwardness raises an important practical issue for us. Like many cognitive neuroscientists, I often rely on friends and colleagues to participate in my experiments, especially students, academic visitors, and post-doctoral researchers. Obviously, one could easily imagine some tension arising in a lab that has tested everyone's IQ. This could be particularly worrying for the more senior amongst us, as fluid intelligence is negatively correlated with age. We would not want to upset the natural order of the academic hierarchy!


Anyway, I will keep you posted on how we get on with the project.