Saturday, 15 December 2012

Actors volunteer to be hypnotised on TV

We are all pretty familiar with the basic formula of stage hypnosis. Supposedly normal people are hypnotised to do silly and embarrassing things before a wide-eyed audience. It is pretty hard to see exactly how the stage hypnotist is able to make normal folks cluck like a chicken (wiki ideas: peer pressure, social compliance, participant selection, ordinary suggestibility, and some amount of physical manipulation, stagecraft, and trickery). But whatever is going on in people's minds, it seems unlikely that these showmen have somehow discovered how to master full mind control with a snap of the fingers.

Yet still, the idea that hypnosis could be used to control all types of behaviour seems to capture the imagination. Recent TV programmes have dived straight into the sensationalist deep end to pose the question: can people be hypnotised to commit a cold-blooded assassination? On Channel 4's Experiments series, Derren Brown claims to show that a normal everyday kind of guy can be plucked off the street and hypnotised to shoot a celebrity at a public gathering. Similarly, hypnotist Tom Silver was employed by the Discovery Channel programme Brainwashed to test the same idea: can Joe Citizen be hypnotised to commit cold-blooded murder?

Both shows set up the question by invoking the case surrounding Bobby Kennedy's assassination. In what seems an absurdly flimsy defence, Sirhan Sirhan claimed to have been hypnotised by secret agents to carry out the killing. The counter-evidence for premeditation was overwhelming, and the appeal was unsuccessful - this is hardly a strong starting point for establishing precedent. Rather, Sirhan Sirhan's defence seems to fit somewhere between desperate appeal and paranoid delusion. The Discovery Channel additionally invoked the case of Patty Hearst. This is a fascinating story in its own right. The heiress to the fortune of media mogul William Randolph Hearst (immortalised as Charles Foster Kane by Orson Welles in the classic Citizen Kane) was kidnapped by a self-styled left-wing revolutionary group involved in bank robberies, two murders, and other acts of violence. After a failed ransom bid, she became an active member of this vanguard army until she was eventually captured by police and put on trial for armed robbery. Her defence, heavily influenced by her extremely influential parents, argued that Patty Hearst had been brainwashed into joining the revolutionary group. It seems likely that her parents were unable to accept the more shocking possibility that their daughter would willingly turn on their way of life to adopt an outlaw revolutionary life. Here, the term brainwashed sounds more like an expression of parental disbelief than a systematic process of coercive mind control.

Despite the weak starting premise for mind control, both shows nevertheless set out to demonstrate that hypnosis can be used to programme an ordinary person to carry out a (mock) assassination. On Brainwashed, I was called in to join a panel of experts to assess a series of 'experiments', starting from relatively benign tests of hypnotic suggestion and culminating in the mock assassination. Our role as the scientific experts was relatively limited, but we were able to observe the overall process reasonably closely. We saw no obvious jiggery-pokery during production, although post-production clearly used the usual kinds of selective editing tricks that can mould impressions without explicit falsehoods.

Most viewers are pretty wise to the fact that the final cut includes only footage that the director wants you to see. Very many hours of footage never make it to screen, leaving plenty of wriggle room to create a 'coherent narrative'. But what did seem to surprise many viewers was the fact that the star 'assassin' of the show turned out to be a part-time actor, not a regular member of the public as the overall narrative implied. A similar minor scandal erupted when it was suggested that one of Derren Brown's hypnosis subjects was in the acting profession.

But is it really so surprising that a volunteer who signs up to be on TV turns out to be an actor? Presumably most people who volunteer for these kinds of things are either actors, or at least aspiring actors. And presumably the directors know this too. Even if they don't explicitly advertise for actors, they are very likely to get actors answering the call for participation. And conveniently, actors will no doubt act the part for the cameras - so what more could a director want? It is not impossible that actors can also be hypnotised (maybe good acting is a form of hypnosis anyway), but it is important to keep in mind the relevant context: a TV studio, with lights, cameras, etc; and actors (or similar) who want to be on TV. This scenario is not the stuff of controlled scientific research - needless to say, such shows should be viewed with a healthy scepticism. It may not be necessary quite yet to abandon your pre-existing sense that you are more or less in control of your own actions and behaviour.

Thursday, 13 December 2012

Science, LIVE: A made-for-TV experiment


[also see my related Guardian post]

A few months ago, Channel 4 attracted considerable attention for their sensationally titled “Drugs Live: the Ecstasy Trial”. The subject matter was clearly designed to court controversy, with Prof Nutt at the centre of the storm. The show "hooked almost 2 million" viewers on the first night, and triggered a lively debate around highly charged questions, such as: Do we need to focus more on the medical/chemical nature of particular drugs, and less on their moral/legal status? Is it right to film volunteers taking a Class A drug, even for a medical experiment? Is Channel 4 glorifying illegal drugs, or contributing to rational discussion?

Much was written and said on either side of this debate (e.g. this conversation between Nutt and Manning). However, as an empirical scientist, I would like to draw attention to another, more general, issue that was perhaps neglected in the mêlée of moral and ethical arguments. I would like to ask why TV science is conducting experiments in the first place.

The made-for-TV experiment

Drugs Live was structured around an ethically approved double-blinded experiment to test the effects of MDMA. Data were collected from 25 participants under the influence of MDMA and a placebo control (sugar pill). Tests included questionnaires and computer-based tasks to measure changes in mood and cognition, as well as the ever-TV-friendly fMRI to measure changes in brain activity. This sounds like a reasonable set of experiments, but according to Prof Nutt, the Medical Research Council (MRC) declined to fund the research because it did not "fit in with the MRC's portfolio of addiction". Instead, Channel 4 agreed to pick up the tab, presumably with more commercial motives than the MRC's commitment to "improve human health through world-class medical research" (from its mission statement). Perhaps we should celebrate this innovative collaboration between academia and the private sector. In these times of austerity, perhaps TV-funded research is the future big-society answer to maintain Britain’s place as a leading powerhouse of innovation, science and technology.

And indeed, if the research is well-conducted, the results could provide valuable insights into the effects of MDMA, of genuine scientific interest with important political/social relevance. After many weeks and months of painstaking data analysis, the results could be submitted to a reputable scientific journal for rigorous peer-review. If the submitted findings are accepted by the academy as sufficiently trustworthy, then the scientific report would be published for consideration by a wider scientific audience. Journal press-releases might then alert the popular news outlets, who may then report these novel findings to their more general readership. 

This is how scientific findings are normally disseminated to the wider audience. Slowly, but surely, complex data yield their secrets to careful systematic analysis. It is not gripping TV, but this systematic process is the foundation of modern scientific research. Made-for-TV science, on the other hand, can bypass the process completely and stream its own results directly into living rooms across the country.


Science to a production schedule: lights, camera, action!

Maybe science needs a bit more of a can-do attitude. Like Jon Snow, who promises on Drugs Live that “tonight we will get to the bottom of it”. Not in another month, six months, or a couple of years - but this very evening! And true to his word, by the end of the first episode Snow can already announce that we have all witnessed “two scientific breakthroughs”. He was not very specific, but we may assume that he was referring to the two brain scans that were rotated on a large plasma screen.


The first scan, from actor Keith Allen, appeared to show a relative decrease in communication between two areas previously associated with the so-called default mode network. I should leave aside any academic debate about the true nature of this brain network, as tempting as it is to question Prof Nutt’s proclamation that this area is no less than “you, your personality, sense of self”. My purpose here is just to make the point that data from one brain in one volunteer, hastily analysed and not peer-reviewed, does not constitute a “scientific breakthrough”. Perhaps an interesting hint. A potential clue, maybe. A promising lead, why not? But certainly not a “scientific breakthrough”.


The second scan was even more ambiguous. The rotating image appeared to show a number of brain regions that we are told were more active when the volunteer closed her eyes after taking MDMA. We are told that this reflects the heightened perceptual experience caused by the drug. But to be blunt, these data look like a mess, like random variation in the MRI signal. This is not really all that surprising, considering it is only one scan from one person, analysed under the unrealistic time pressure of the TV production schedule. I would be amazed, or even suspicious, if the result were any clearer.


Of course science needs to be simplified for a TV audience. Matt Wall, a neuroscientist involved in the Drugs Live programme, says of his experience:
"TV needs everything to be black-and-white, and unambiguous... They don’t care that you haven’t run the necessary control experiments, or that the study was only correlational and therefore can’t be used to imply direct causation – they want a neat, clear story above all else"
Oversimplifying the deeper complexities "can very often lead to distortions, or ‘Lies-to-children’". This is a perennial issue for TV science, whether it follows the classic science reporting formula or made-for-TV experiments. To be able to articulate complex theoretical concepts and technical details to a general audience without misrepresentation is a great but rare skill.

Astonishing Science

Jon Snow promised his live audience “astonishing science”. Quite right, TV should bring astonishing science to the wider audience. But this mission is seriously compromised by the production demands associated with made-for-TV experiments.

Advising the Discovery Channel on a recent made-for-TV experiment, I was told by a production assistant: “you don’t have to always be so cynical, you know.” And I absolutely agree with the sentiment. Science TV should convey the excitement of science, not just the limitations. Just like this production assistant, I am also frustrated by too many caveats. The reason I came to science was to discover something about the world, not just to point out flaws in putative findings. But like any empirical scientist, I have learned many times over not to get too excited over half-baked results. Only solid reliable results are really exciting.

But TV science does not have to be boring. TV has many tricks up its sleeve, such as dramatic music, frenetically paced scene cuts, angled shots in darkened laboratories, and expensive props like MRI. All these can be used to convey the excitement of science, without resorting to made-for-TV experiments.

Reality-science TV

The enormous success of reality TV tells us that viewers like to experience the activities on screen through people they can relate to. Extrapolating to science TV, I guess viewers like to feel the science experience through personalities that they can relate to. The personal touch can make it seem more real.

Recently, I was asked to help conduct another made-for-TV brain imaging experiment with the show's presenter as the experimental subject. To fit the production schedule, we had to analyse complex brain imaging data within a matter of hours. We did manage to produce some very rudimentary results within this scientifically improbable time frame - we had to! Production costs are clocked by the hour.

Of course the actual result was of limited scientific value, and we were naturally so circumspect about what we said on camera that it is hard to see how these data could have been of any great interest to the audience. It was essentially just a brain on a screen: demonstration science - what it looks like to do science.
There is of course no harm in such demonstrations - eye-catching demonstrations are bread-and-butter tools for conveying the excitement of science. We don't need to pretend that they are also conducting novel scientific research. It is important enough to help convey the process of science without pretending to add to the content of scientific knowledge. There are plenty of good and proper TV production devices for engaging public interest in science. And we expect a little bit of TV gimmickry; it is show biz after all. But why not be content with reporting on science, rather than making science as well?


Investigative Journalism

The Drugs Live formula is a hybrid of the traditional science programme and the exposé. Borrowing from the rich history of investigative journalism, TV is not just reporting news, but making news as well. The production company can herald exclusive access to a breaking news story:
“Now, in a UK television first, two live programmes will follow volunteers as they take MDMA, the pure form of ecstasy, as part of a ground-breaking scientific study” [from the Channel 4 series synopsis]
But investigative journalism is also a tricky business. The precise outcome of any investigation is impossible to predict, and therefore hard to plan for. Of the many possible leads, only a minority will reveal something worth reporting. Producers are presumably familiar with the frustration of stories that lead nowhere, and presumably they are reasonably careful about committing to a production schedule until the results of the investigation are relatively clear. Jumping to premature conclusions can lead to serious false claims, as dramatically highlighted recently by the Newsnight debacle that cost the BBC's director general his job. In a recent post-mortem of this botched investigation, David Leigh writes: “to be faithful to the evidence" is essential for successful investigative journalism. And although journalism may not be "rocket science" [in Leigh's words], investigative journalism demands a genuine commitment to follow the evidence, wherever it leads.


Conflict of interest

In the shadow of recent revelations of fraud and serious malpractice across a range of scientific disciplines, from psychology to anaesthesiology, many scientists have been asking how to improve the scientific process (e.g., see here and here). Televising the process is unlikely to be the answer. 

Although industry funding can be a valuable source of revenue, we must always seriously consider potential conflicts of interest. Increasingly, concerns have been raised regarding dubious practices in clinical research funded by pharmaceutical companies. Ben Goldacre’s book, Bad Pharma, is a must-read on this serious public health issue. Conflicts of interest can distort many stages of the experimental process to increase the likelihood of finding a particular result. Clinical research is becoming increasingly alert to these problems, and serious steps are being taken to avoid the malevolent influence of funding agencies with vested interests.

But even without a commercial interest in a particular result, the pressure to "find something" noteworthy can also motivate bad scientific practice. In academic circles, the pressure to publish is typically considered a major driving factor in scientific malpractice and fraud. The bottom line for Channel 4, of course, is to maximise audience numbers to boost the value of their commercial time. This does not automatically rule out the potential scientific merit of TV-funded experiments, but it is certainly worth bearing in mind, especially when thinking about how the particular demands of TV could compromise scientific method. As discussed above, these include short-cuts and rushed analyses to fit a tight production schedule, as well as the pressure to find "something in the data" by the end of the shoot, however unreliable it might turn out to be later. The experimental approach in Drugs Live was also apparently compromised by recruiting celebrities (and other TV-friendly personalities) as experimental subjects, tapping into the proven success of the reality-TV format but skewing the sample of experimental subjects. It is also hard to imagine that the omnipresent TV cameras did not influence the results of the experiment.

It will be interesting to follow up on this research to see how the results are received within the academic community. There is no reason not to expect some interesting and important findings, but I wonder if the more detailed and scientifically meaningful results will be heralded with as much fanfare as the actual pill-popping on camera. I fear Channel 4 might be more interested in the controversy surrounding MDMA than the science motivating the research.


Friday, 30 November 2012

Bold predictions for good science


Undergraduates are taught proper scientific method. First, the experimenter makes a prediction, then s/he collects data to test that prediction. Standard statistical methods assume this hypothesis-driven approach; most statistical inferences are invalid unless this rigid model is followed.

But very often it is not. Very often experimenters change their hypotheses (and/or analysis methods) after data collection. Indeed, students conducting their first proper research project are often surprised by this 'real-world' truth: "oh, so that is how we really do it!". They learn to treat research malpractice like a cheeky misdemeanour.

After recent interest in science malpractice, fuelled by revelations of outright fraud, commentators are starting to treat the problem more seriously, especially in psychology and neuroscience. This month, Perspectives on Psychological Science devoted an entire issue to the problem of peer-reviewed results that fail to replicate, because they were born of bad scientific practice. 

Arguably, scientific journals share much of the responsibility for allowing bad research practices to flourish. Although there may be little journals can do to stop outright fraud, they can certainly do a lot to improve research culture more generally. Recently, the journal Cortex announced that it will try to do just that. Chris Chambers, associate editor, has outlined a new submission format that will strictly demand that researchers conform to the classic experimental model: predictions before data. With the proposed Registered Reports format, authors will be required to set out their predictions (and design/analysis details) before they collect the data, thus cutting off the myriad opportunities to capitalise on random vagaries in observed data. And although researchers could still lie and make up data, cleaning up the grey area of more routine bad behaviour could have important knock-on effects. As I have argued elsewhere, bad scientific practice is presumably a fertile breeding ground for more serious acts of fraud.

This is a bold new initiative, and if successful, could precipitate a major change in the way science is done. For further details, and some interesting discussion, check out this panel discussion on Fixing the Fraud at SpotOn London 2012 and this article in the Guardian.


Wednesday, 3 October 2012

Distance Code

Accurate brain stimulation requires precise neuroanatomical information. To activate a specific brain region with transcranial magnetic stimulation (TMS), it is important to know where on the scalp to place the induction coil. Commercial neuronavigation systems have been developed for this purpose. However, it is also important to know the depth of the targeted area, because the effect of TMS critically depends on the distance between the stimulating coil and targeted brain area.

We have developed a simple TMS Distance Toolbox for calculating the distance between a stimulation site on the scalp surface and the underlying cortical surface. The toolbox can be downloaded here, and requires Matlab and SPM8. I will post further information soon, including user instructions.
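To give a flavour of what such a calculation involves, here is a minimal sketch (my own toy illustration in Python, not code from the toolbox itself, which is written in Matlab): given a scalp coordinate and an array of cortical surface vertices, the coil-to-cortex distance is simply the minimum Euclidean distance from the scalp point to any vertex.

```python
import numpy as np

def scalp_to_cortex_distance(scalp_point, cortical_vertices):
    """Minimum Euclidean distance (same units as the inputs, e.g. mm)
    between one scalp coordinate and a set of cortical surface vertices."""
    diffs = cortical_vertices - np.asarray(scalp_point)
    return np.sqrt((diffs ** 2).sum(axis=1)).min()

# Toy example: a scalp point 15 mm above a small patch of cortex
vertices = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
print(scalp_to_cortex_distance([0.0, 0.0, 15.0], vertices))  # 15.0
```

In practice the cortical vertices would come from a segmented structural MRI (e.g., via SPM8), with the scalp site and surface expressed in the same coordinate space as the neuronavigation system.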

Saturday, 15 September 2012

Must we really accept a 1-in-20 false positive rate in science?

There has been some very interesting and extremely important discussion recently addressing a fundamental problem in science: can we believe what we read?

After a spate of high-profile cases of scientific misdemeanours and outright fraud (see Alok Jha's piece in the Guardian), people are rightly looking for solutions to restore credibility to the scientific process [e.g., see Chris Chambers and Petroc Sumner's Guardian response here].

These include more transparency (especially pre-registering experiments), encouraging replication, promoting the dissemination of null effects, shifting career rewards from new findings (neophilia) to genuine discoveries, abolishing the cult of impact factors, etc. All these are important ideas, and many are more or less feasible to implement, especially with the right top-down influence. However, it seems to me that one of the most basic problems is staring us right in the face, and would require absolutely no structural change to correct. The fix is as simple as re-drawing a line in the sand.

Critical p-value: line in the sand

Probability estimates are inherently continuous, yet we typically divide our observations into two classes: significant (i.e., true, real, bona fide, etc) and non-significant (i.e., the rest). This reduces the mental burden of assessing experimental results - all we need to know is whether an effect is real, i.e., passes a statistical threshold. And so there are conventions, the most widely used being p<.05. If the probability of obtaining our result by chance alone falls below 5%, we may assert that our conclusion is justified. Ideally, this threshold ensures that our inference is correct with at least 95% certainty. But turn this around, and it also means that at worst, the assertion could be wrong (i.e., a false positive) one time in twenty (about the same odds as being awarded a research grant in the current climate). That already seems pretty high odds for accepting false positive claims in science. But worse, this is only the ideal theoretical case. There are many dubious scientific practices that dramatically inflate the false discovery rate, such as cherry picking and peeking during data collection (see here).
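The one-in-twenty rate is easy to see in simulation. The sketch below (my own illustration; the sample size, seed, and number of runs are arbitrary choices) runs thousands of experiments in which the null hypothesis is true by construction, and counts how often a standard t-test crosses the p<.05 line anyway.

```python
import random
import statistics

def null_t_statistic(n=30, rng=random):
    """|t| statistic for one sample drawn from a true null (mean zero)."""
    sample = [rng.gauss(0.0, 1.0) for _ in range(n)]
    se = statistics.stdev(sample) / n ** 0.5
    return abs(statistics.fmean(sample) / se)

random.seed(1)
# Two-tailed critical t for df = 29 at alpha = .05 is about 2.045,
# so roughly one null experiment in twenty should cross it by chance.
false_positives = sum(null_t_statistic() > 2.045 for _ in range(5000))
print(false_positives / 5000)  # close to 0.05
```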

These kinds of fishy goings-on are evident in statistical anomalies, such as the preponderance of just-significant effects reported in the literature (see here for blog review of empirical paper). Although it is difficult to estimate the true false positive rate out there, it can only be higher than the ideal one in twenty rate assumed by our statistical convention. So, even before worrying about outright fraud, it is actually quite likely that many of the results we read about in the peer-reviewed literature are in fact false positives.
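Peeking in particular is easy to demonstrate. In this toy simulation (again my own illustration; the batch size, stopping rule, and approximate critical value are arbitrary), the 'experimenter' tests after every batch of ten subjects and stops as soon as the result looks significant. Even though the null is true throughout, the false positive rate climbs far above the nominal 5%.

```python
import random
import statistics

def peeking_experiment(max_n=100, peek_every=10, z_crit=1.96, rng=random):
    """Run a true-null experiment, but test after every batch of subjects
    and stop as soon as the test looks 'significant' - the peeking practice.
    Uses the large-sample critical value 1.96 as a rough cutoff."""
    sample = []
    for _ in range(max_n // peek_every):
        sample += [rng.gauss(0.0, 1.0) for _ in range(peek_every)]
        mean = statistics.fmean(sample)
        se = statistics.stdev(sample) / len(sample) ** 0.5
        if abs(mean / se) > z_crit:
            return True  # stop early and report a 'discovery'
    return False

random.seed(2)
rate = sum(peeking_experiment() for _ in range(2000)) / 2000
print(rate)  # substantially above the nominal 0.05
```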

Boosting the buffer zone

The obvious solution is to tighten up the accepted statistical threshold. Take physics, for example. Those folk only accept a new particle into their textbooks if the evidence reaches a statistical threshold of 5 sigma (i.e., p<0.0000003). Although the search for the Higgs boson involved plenty of peeking along the way, at 5 sigma the resultant inflation of the false discovery rate is hardly likely to matter. We can still believe the effect. A strict threshold level provides a more comfortable buffer between false positive and true effect. Although there are good and proper ways to correct for peeking, multiple comparisons, etc., all these assume full disclosure. It would clearly be safer just to adopt a conservative threshold. Perhaps not one quite as heroic as 5 sigma (after all, we aren't trying to find the God particle), but surely we can do better than a one-in-twenty false discovery rate as the minimal and ideal threshold.
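For reference, the translation between 'sigma' thresholds and p-values is just a normal tail probability, which needs nothing beyond standard maths:

```python
from math import erfc, sqrt

def sigma_to_p(sigma, two_tailed=False):
    """Tail probability of a standard normal beyond `sigma` standard
    deviations (one-tailed by default, as in particle physics)."""
    p = 0.5 * erfc(sigma / sqrt(2.0))
    return 2 * p if two_tailed else p

print(f"{sigma_to_p(5):.1e}")            # 2.9e-07: the '5 sigma' threshold
print(f"{sigma_to_p(1.96, two_tailed=True):.3f}")  # 0.050: the p<.05 convention
```

So the physicists' buffer is roughly five orders of magnitude more conservative than the convention most of the life sciences work to.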

Too conservative?

Of course, tightening the statistical threshold would necessarily increase the number of failures to detect a true effect, so-called type II errors. However, it is probably fair to say that most fields in science are suffering more from false positives (type I errors) than type II errors. False positives are more influential than false negatives, and harder to dispel. In fact, we are probably more likely to consider a null effect as a real effect cloaked in noise, especially if there is already a false positive lurking about somewhere in the published literature. It is notoriously difficult to convince your peers that your non-significant test indicates a true null effect. Increasingly, Bayesian methods are being developed to test for sameness between distributions, but this is another story.

The main point is that we can easily afford to be more conservative when bestowing statistical significance on putative effects, without stifling scientific progress. Sure, it would be harder to demonstrate evidence for really small effects, but not impossible if they are important enough to pursue. After all, the effect that betrayed the Higgs particle was very small indeed, but that didn't stop them from finding it. Valuable research could focus on validating trends of interest (i.e., strongly predicted results), rather than chasing down the next new positive effect and leaving behind a catalogue of potentially suspect "significant effects" in its wake. Science cannot progress as a house of cards.

Too expensive?

Probably not. Currently, we are almost certainly wasting research money chasing down the dead ends opened up by false positives. A smaller, but more reliable, corpus of results would almost certainly increase the productivity of many scientific fields. At present, the pressure to publish has precipitated a flood of peer-reviewed scientific papers reporting any number of significant effects, many of which will almost certainly not stand the test of time. It would seem a far more sensible use of resources to focus on producing fewer, but more reliable, scientific manuscripts. Interim findings and observations could be made readily available via any number of suggested internet-based initiatives. These more numerous 'leads' could provide a valuable source of possible research directions, without yet falling into the venerable category of immutable (i.e., citable) scientific fact. Like conference proceedings, they could adopt a more provisional status until they are robustly validated.

Raise the bar for outright fraud

Complete falsification is hard to detect in the absence of actual whistleblowers. In Simonsohn's words: "outright fraud is somewhat impossible to estimate, because if you're really good at it you wouldn't be detectable" (from Alok Jha). Even publishing the raw data is no guarantee of catching out the fraudster, as there are clever ways to generate plausible-looking data sets that would pass veracity testing.

However, fraudsters presumably start their life of crime in the grey area of routine misdemeanour. A bit of peeking here, some cherry picking there, before they are actually making up data points. Moreover, they know that even if their massaged results fail to replicate, benefit of the doubt should reasonably allow them to claim to be unwitting victims of an innocent false positive. After all, at p<0.05 there is already a 1-in-20 chance of a false positive, even if you do everything by the letter!

Like rogue traders, scientific fraudsters presumably start with a small, spur-of-the-moment act that they reasonably believe they can get away with. If we increase the threshold that needs to be crossed, fewer unscrupulous researchers will be tempted down the dark and ruinous path of scientific fraud. And if they did, it would be much harder for them to claim innocence after their 5 sigma results fail to replicate.

Why impose any statistical threshold at all?

Finally, it is worth noting some arguments that the statistical threshold should be abolished altogether. Maybe we should be more interested in the continua of effect sizes and confidence intervals, rather than discrete hypothesis testing [e.g., see here]. I have a lot of sympathy for this argument. A more quantitative approach to inferential statistics would more accurately reflect the continuous nature of evidence and certainty, and would also more readily suit meta-analyses. However, it is also useful to have a standard against which we can hold up putative facts for the ultimate test: true or false.

Wednesday, 15 August 2012

In the news: clever coding gets the most out of retinal prosthetics

This is something of an update to a previous post, but I thought it interesting enough for its own blog entry. Just out in PNAS, Nirenberg and Pandarinath describe how they mimic the retina’s neural code to improve the effective resolution of an optogenetic prosthetic device (for a good review, see Nature News).

As we have described previously, retinal degeneration affects the photoreceptors (i.e., rod and cone cells), but often spares the ganglion cells that would otherwise carry the visual information to the optic nerve (see retina diagram below). By stimulating these intact output cells, visual information can bypass the damaged retinal circuitry to reach the brain. Although the results from recent clinical trials are promising, restored vision is still fairly modest at best. To put it in perspective, Nirenberg and Pandarinath write:
[current devices enable] "discrimination of objects or letters if they span ∼7 ° of visual angle; this corresponds to about 20/1,400 vision; for comparison, 20/200 is the acuity-based legal definition of blindness in the United States"
Obviously, this poor resolution must be improved upon. Typically, the problem is framed as a limit in the resolution of the stimulating hardware, but Nirenberg and Pandarinath show that software matters too. In fact, they demonstrate that software matters a great deal.

This research focuses on a specific implementation of retinal prosthesis based on optogenetics (for more on the approach, check out this Guardian article and this early empirical demonstration). Basically, intact retinal ganglion cells are injected with a genetically engineered virus that produces a light-sensitive protein. These modified cells will now respond to light coming into the eye, just as the rods and cones do in the healthy retina. This approach, although still being developed in mouse models, promises a more powerful and less invasive alternative to the electrode arrays previously trialled in humans. But it is not the hardware that is the focus of this research. Rather, Nirenberg and Pandarinath show how the efficacy of these prosthetic devices critically depends on the type of signal used to activate the ganglion cells. As schematised below, they developed a special type of encoder to convert natural images into a format that more closely matches the neural code expected by the brain.

The steps from visual input to retinal output proceed as follows: Images enter a device that contains the encoder and a stimulator [a modified mini-digital light projector (mini-DLP)]. The encoder converts the images into streams of electrical pulses, analogous to the streams of action potentials that would be produced by the normal retina in response to the same images. The electrical pulses are then converted into light pulses (via the mini-DLP) to drive the ChR2, which is expressed in the ganglion cells.
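The encoder itself was fitted to recorded ganglion-cell responses, but the general idea is a linear-nonlinear-Poisson cascade, a standard model in retinal physiology. A minimal sketch of one such encoder cell follows; the filter weights, the sigmoid and the rates below are invented placeholders for illustration, not the fitted models from the paper:

```python
import math
import random

def encode(image, weights, dt=0.005, duration=0.5):
    """Toy linear-nonlinear-Poisson encoder for one ganglion cell.

    image   : list of pixel intensities (the cell's receptive-field input)
    weights : spatial filter standing in for the cell's receptive field
    Returns a list of pulse times (seconds) -- the 'stream of electrical
    pulses' that would drive the light stimulator.
    """
    # 1. Linear stage: project the image onto the cell's receptive field
    drive = sum(w * p for w, p in zip(weights, image))
    # 2. Nonlinear stage: map drive to a non-negative firing rate (Hz)
    rate = 50.0 / (1.0 + math.exp(-drive))          # made-up sigmoid
    # 3. Poisson stage: draw pulse times at that rate
    return [t * dt for t in range(int(duration / dt))
            if random.random() < rate * dt]

random.seed(0)
image = [0.2, 0.9, 0.4, 0.7]        # toy 4-pixel input
weights = [1.0, -0.5, 0.8, 0.3]     # toy centre-surround-style filter
print(encode(image, weights)[:5])   # first few pulse times (seconds)
```

In the real device, each ganglion cell would run its own copy of this conversion with its own fitted filter, and the resulting pulse streams would drive the mini-DLP.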
This neural code is illustrated in the image below: 



The key result of this research paper is a dramatic increase in the amount of information that is transduced to the retinal output cells. They used a neural decoding procedure to quantify the information content in the activity patterns elicited during visual stimulation of a healthy retina, compared to optogenetic activation of ganglion cells in the degenerated retina via encoded or unencoded stimulation. Sure enough, the encoded signals were able to reinstate activity patterns that contained much more information than the raw signals. In a more dramatic, and illustrative, demonstration of this improvement, they used an image reconstruction method to show how the original image (baby's face in panel A) is first encoded by the device (reconstructed in panel B) to activate a pattern of ganglion cells (image-reconstructed in panel C). Clearly, the details are well-preserved, especially in comparison to the image-reconstruction of a non-encoded transduction (in panel D). In a final demonstration, they also found that the experimental mice could track a moving stimulus using the coded signal, but not the raw unprocessed input.

According to James Weiland, ophthalmologist at the University of Southern California (quoted by Geoff Brumfiel in Nature News), there has been considerable debate about whether it is more important to try to mimic the neural code, or to just allow the system to adapt to an unprocessed signal. Nirenberg and Pandarinath argue that clever pre-processing will be particularly important for retinal prosthetics, as there appears to be less plasticity in the visual system than in, say, the auditory system. Therefore, it is essential that researchers crack the neural code of the retina rather than hope the visual system will learn to adapt to an artificial input. The team are optimistic:
"the combined effect of using the code and high-resolution stimulation is able to bring prosthetic capabilities into the realm of normal image representation"
But only time, and clinical trials, will tell.


References:

Bi A, et al. (2006) Ectopic expression of a microbial-type rhodopsin restores visual responses in mice with photoreceptor degeneration. Neuron 50(1):23–33.

Nirenberg and Pandarinath (2012). Retinal prosthetic strategy with the capacity to restore normal vision. PNAS

Monday, 13 August 2012

Research Briefing: Lacking Control over the Trade-off between Quality and Quantity in Visual Short-Term Memory

This paper, just out in PLoS One, describes research led by Alexandra Murray during her doctoral studies with Kia Nobre and myself. The series of behavioural experiments began with a relatively simple question: how do people prepare for encoding into visual short-term memory (VSTM)?

VSTM is capacity limited. To some extent, increasing the number of items in memory reduces the quality of each representation. However, this trade-off does not seem to continue ad infinitum. If there are too many items to encode, people tend to remember only a subset of possible items, but with reasonable precision, rather than a more vague recollection of all the items. 
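This pattern can be caricatured with a toy resource model; the capacity of four items and the fixed resource below are illustrative assumptions, not estimates from any data:

```python
def precision_per_item(n_items, capacity=4, total_resource=1.0):
    """Toy model of the VSTM quality/quantity trade-off.

    Below capacity, a fixed resource is split across all items, so
    precision per item falls as load rises. Beyond capacity, only
    'capacity' items are encoded at all (the rest get nothing), so
    the encoded subset keeps a reasonable precision.
    """
    n_encoded = min(n_items, capacity)
    return total_resource / n_encoded, n_encoded

for load in (1, 2, 4, 8):
    precision, encoded = precision_per_item(load)
    print(f"load={load}: {encoded} items encoded at precision {precision:.2f}")
```

Note that in this sketch, precision stops degrading once the load exceeds capacity: extra items are dropped rather than diluting the representation further, which is the subset-encoding behaviour described above.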

Previously, we and others had shown that directing participants to encode only a subset of items from a larger set of possible memory items increases the likelihood that the cued items would be recalled after a memory delay. Using electroencephalogram (EEG), we further showed that the brain mechanisms associated with preparation for selective VSTM encoding were similar to those previously associated with selective attention. 

To follow up on this previous research, Murray asked whether people can strategically fine-tune the trade-off between the number and quality of items in VSTM. Given foreknowledge of the likely demands (i.e., many or few memory items, difficult or easy memory test), can people engage an encoding strategy that favours quality over quantity, or vice versa?

From the outset, we were pretty confident that people would be able to fine-tune their encoding strategy according to such foreknowledge. Extensive previous evidence, including our own mentioned above, had revealed a variety of control mechanisms that optimise VSTM encoding according to expected task demands. Our first goal was simply to develop a nice behavioural task that would allow us to explore in future brain imaging experiments the neural principles underlying preparation for encoding strategy, relative to other forms of preparatory control. But this particular line of enquiry never got that far! Instead, we encountered a stubborn failure of our manipulations to influence encoding strategy. We started with quite an optimistic design in the first experiment, but progressively increased the power of our experiments to detect any influence of foreknowledge over expected memory demands - and still nothing at all! The figure on the right summarises the final experiment in the series. The red squares in the data plot (i.e., panel b) highlight the two conditions that should differ if our hypothesis was correct.  

By this stage it was clear that we would have to rethink our plans for subsequent brain imaging experiments. But in the interim, we had also potentially uncovered an important limit to VSTM encoding flexibility that we had not expected. The data just kept on telling us: people seem to encode as many task-relevant items as possible, irrespective of how many items they expect, or how difficult the expected memory test at the end of the trial will be. In other words, this null effect had revealed an important boundary condition for encoding flexibility in VSTM. Rather than condemn these data to the file drawer, shelved as a dead-end line of enquiry, we decided that we should definitely try to publish this important, and somewhat surprising, null effect. We decided PLoS One would be the perfect home for this kind of robust null effect. The experimental designs were sensible, with a logical progression of manipulations; the experiments were well conducted, and the data were otherwise clean. There was just no evidence that our key manipulations influenced short-term memory performance.

As we were preparing our manuscript for submission, a highly relevant paper by Zhang and Luck came out in Psychological Science (see here). Like us, they found no evidence that people can strategically alter the trade-off between remembering many items poorly and/or few items well. If it is possible to be scooped on a null effect, then I guess we were scooped! But in a way, the precedent only increased our confidence that our null effect was real and interesting, and definitely worth publishing. Further, PLoS One is also a great place for replication studies, and so surely a replication of a null effect makes it doubly ideal!


For further details, see:

Murray, Nobre & Stokes (2012) Lacking control over the trade-off between quality and quantity in VSTM. PLoS One

Murray, Nobre & Stokes (2011). Markers of preparatory attention predict visual short-term memory. Neuropsychologia, 49:1458-1465.

Zhang W, Luck SJ (2011) The number and quality of representations in working memory. Psychol Sci. 22: 1434–1441



Tuesday, 31 July 2012

Research Meeting: Visual Search and Selective Attention

Just returned from a really great meeting at a scenic lakeside location (the “Ammersee”) near Munich, Germany. The third Visual Search and Selective Attention symposium was hosted and organised by Hermann Müller and Thomas Geyer (Munich), and supported by the Munich Center for Neurosciences (MCN) and the German Science Foundation (DFG). The stated aim of the meeting was:
"to foster an interdisciplinary dialogue in order to identify important shared issues in visual search and selective attention and discuss ways of how these can be resolved using convergent methodologies: Psychophysics, mental chronometry, eyetracking, ERPs, source reconstruction, fMRI, investigation of (neuropsychological) impairments, TMS and computational modeling."
The meeting was held over three days, and organised by four general themes:

- Pre-attentive and post-selective processing in visual search (Keynote: Hermann Müller)
- The role of (working) memory guidance in visual search (Keynote: Chris Olivers, Martin Eimer)
- Brain mechanisms of visual search (Keynote: Glyn Humphreys)
- Modelling visual search (Keynote: Jeremy Wolfe).

Poster sessions gave grad students (including George Wallis and Nick Myers) a great chance to chat about their research with the invited speakers as well as other students tackling similar issues. 

Of course, a major highlight was the Bavarian beer. Soeren Kyllingsbaek was still to give his talk, presumably explaining the small beer in hand!

More photos of the meeting can be found here.

***New***

All presentations can be downloaded from here

Sunday, 24 June 2012

Journal Club: Brains Resonating to the Dream Machine


By George Wallis

On: VanRullen and Macdonald (2012). Perceptual Echoes at 10Hz in the Human Brain

One day in 1958 the artist Brion Gysin was sleeping on a bus in the south of France. The bus passed a row of trees, through which the sun was shining. As the flickering light illuminated Gysin, he awoke and with his eyes closed, began to hallucinate, seeing:

“an overwhelming flood of intensely bright patterns in supernatural colours… Was that a vision?”
By the turn of the decade Gysin was living with William S Burroughs in the flophouse in Paris that became known as the Beat Hotel. Gysin told Burroughs of his experience, and they decided to build a device to recreate the flickering stimulation. The ‘Dream Machine’ is a cylinder of cardboard, cut at regular intervals with windows, which can be spun on a 78rpm record player, with a light bulb inside to throw off a flickering light. The light flickers around ten times per second (10Hz). Some, like the poet Ginsberg (“it sets up optical fields as religious and mandalic as the hallucinogenic drugs”), claim to have experienced vivid hallucinations when seated, eyes closed, before a spinning Dream Machine (although most devotees admitted that the effect was much stronger in combination with psychedelic drugs).

Gysin and Burroughs had rediscovered a phenomenon that had been known to scientists for some time. The great neurophysiologist Purkinje documented the hallucinatory effect of flickering light by waving an open-fingered hand in front of a gaslight. Another neuro-luminary, Hermann von Helmholtz, investigated the same phenomenon in Physiological Optics, calling the resulting hallucinations ‘shadow patterns’. In the 1930s Adrian and Matthews, investigating the rhythmic EEG signal recently discovered by Hans Berger, shone a car headlamp through a bicycle wheel and found that they could ‘entrain’ the EEG recording of their subject to the stimulation, in ‘a coordinated beat’. And from there investigation of the magical 10Hz flicker continued, on and off, until the present day (for a very readable review, see the paper by ter Meulen, Tavy and Jacobs referenced at the bottom of this post – from which the above quotations from Gysin and Ginsberg are taken; see also a related post by Mind Hacks).

This week’s journal club paper is not about flicker-induced hallucinations. However, it does use EEG to address the related idea that there is something rather special to the visual system about the 10Hz rhythm. The paper, by Rufin VanRullen and James Macdonald, and published this month in Current Biology, used a very particular type of flickering stimulation to probe the ‘impulse response’ of the brain. They found – perhaps to their surprise – that the brain seems to ‘echo back’ their stimulation at about 10 echoes per second.

Macdonald and VanRullen’s participants were ‘plugged in’ during the experiment – electroencephalography (EEG) was used to measure the tiny, constantly changing voltages on their scalps that reflect the workings of the millions of neurons in the brain beneath. The stimulus sequence presented (with appropriate controls to ensure the participants paid attention) was a flickering patch on a screen. The flicker was of a very particular kind. It was a flat spectrum sequence, a type of signal used by engineers to probe the ‘impulse response’ of a system. The impulse response is the response of a system to a very short, sharp stimulation. Imagine clicking your fingers in an empty cathedral – that short, sharp click is transformed into a long, echoing sound that slowly dies away. This is the impulse response of the cathedral: VanRullen and Macdonald were trying to measure the impulse response of the brain’s visual system. Because of its property of very low autocorrelation (the value of the signal at one point in time says nothing about what the value of the signal will be at any other time), the kind of signal the authors flashed at their participants can be used to mathematically extract the impulse response of a system (for more details, see the paper by Lalor et al., referenced at the bottom of this post).



To extract the impulse response, you do a ‘cross-correlation’ of the input signal (the flickering patch on the screen) with the output of the system – which, in this case, was the EEG signal from over the visual cortex of the participants (the occipital lobe). Cross-correlation involves lining up the input signal with the output at many different points in time and seeing how similar the signals are. So, you start with the input lined up exactly with the output, and ask how similar the input and output signals look. Then you move the input signal so it’s lined up with the output signal 1ms later – how similar now? And so on… all the way up to around 1s ‘misalignment’, in this paper.  Here, for two example subjects (S1 and S2), is the result:


The grey curves are the cross-correlation functions, stretched out over time. Up until about 0.2 seconds you see the classic ‘visual evoked potential’ response, but after that time a striking 10Hz ‘echo’ emerges. The authors perform various controls, to show, for example, that these ‘echoes’ are not induced only by the brightest or darkest values in their stimulus sequence. They argue that because of the special nature of the stimuli they used, this effect must represent the brain actually ‘echoing back’ the input signal at a later time. In their discussion, they propose that this could be a mechanism for remembering stimuli over short periods of time: replaying them 10 times per second.
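The cross-correlation procedure is simple enough to sketch in code. In this toy version, a fabricated ‘system’ echoes its white-noise input at two known lags, and the cross-correlation recovers them; in the real analysis the output signal was, of course, the measured EEG rather than anything so conveniently constructed:

```python
import random

def cross_correlate(x, y, max_lag):
    """Cross-correlation of input x with output y at lags 0..max_lag."""
    n = len(x)
    out = []
    for lag in range(max_lag + 1):
        # line up x with y shifted 'lag' samples later, average the product
        vals = [x[t] * y[t + lag] for t in range(n - lag)]
        out.append(sum(vals) / len(vals))
    return out

random.seed(1)
# Flat-spectrum-like input: white noise has (near) zero autocorrelation
x = [random.gauss(0, 1) for _ in range(20000)]

# Fabricated system: the output 'echoes' the input at lags 3 and 7
y = [x[t] + 0.5 * x[t - 3] + 0.25 * x[t - 7] if t >= 7 else 0.0
     for t in range(len(x))]

xc = cross_correlate(x, y, 10)
peaks = sorted(range(len(xc)), key=lambda k: -xc[k])[:3]
print(sorted(peaks))   # → [0, 3, 7]: the direct response plus both echoes
```

Because the input is (nearly) uncorrelated with itself across time, every other lag averages out to roughly zero, which is exactly why the flat spectrum sequence lets the impulse response be read straight off the cross-correlation function.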


This is a bold hypothesis. Are these 10Hz reverberations really ‘echoes’ of the visual input, used for visual short-term memory? We weren’t sure. We already know that the EEG resonates by far the most easily to flickering stimuli at 10Hz (see the paper by Herrmann, referenced below), so despite the sophisticated stimulus used here, it is easy to suspect that the result of this experiment depends more on this ‘ringing’ quality of the EEG than on mnemonic echoes of the stimuli themselves. We felt that in order to really nail this question you would need to show, for example, that sensitivity to a specific stimulus we have just seen fluctuates with a 10Hz rhythm in the seconds after we encounter it. However, this is the sort of thing that could be achieved with behavioural experiments.
Perhaps a new theory of short term memories will emerge.  

In the meantime, why not build yourself a dream machine and see if you can have your own visionary insights with the help of some 10Hz flickering light?  You’ll need the diagram below (blow up; cut out; fold into a cylinder), an old 78rpm record player, and a light-bulb.


References:

VanRullen R, Macdonald JSP (2012). Perceptual Echoes at 10Hz in the Human Brain. Current Biology.

ter Meulen BC, Tavy D, Jacobs BC (2009). From Stroboscope to Dream Machine: A History of Flicker-Induced Hallucination. European Neurology.

Lalor EC, Pearlmutter BA, Reilly RB, McDarby G, Foxe JJ (2006). The VESPA: a method for the rapid estimation of a visual evoked potential. NeuroImage.



Monday, 18 June 2012

In the news: Mind Reading

Mind reading tends to capture the headlines. And these days we don't need charlatan mentalists to perform parlour tricks before a faithful audience - we now have true scientific mind reading. Modern brain imaging tools allow us to read the patterns of brain activity that constitute mind... well, sort of. I thought I would write this post in response to a recent Nature News Feature on research into methods for reading the minds of patients without any other means of communication. In this post, I consider what modern brain imaging brings to the art of mind reading.

Mind reading as a tool for neuroscience research



First, it should be noted that almost any application of brain imaging in cognitive neuroscience can be thought of as a form of mind reading. Standard analytic approaches test whether we can predict brain activity from changes in cognitive state (e.g., in statistical parametric mapping). It is straightforward to turn this equation round to predict mental state from brain activity. With this simple transformation, the huge majority of brain imaging studies are doing mind reading. Moreover, a class of analytic methods known as multivariate (or multivoxel) pattern analysis (or classification) has come even closer to mind reading for research purposes. Essentially, these methods rely on a two-stage procedure. The first step is to learn which patterns of brain activity correspond to which cognitive states. Next, these learned relationships are used to predict the cognitive state associated with new brain activity. This train/test procedure is strictly "mind reading", but essentially as a by-product.
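The two-stage procedure can be sketched with synthetic "voxel patterns" and a deliberately simple nearest-centroid classifier; the states, patterns and noise level below are all made up for illustration, standing in for the fancier classifiers actually used in the literature:

```python
import random

def centroid(patterns):
    """Mean pattern across a set of training trials."""
    n = len(patterns[0])
    return [sum(p[i] for p in patterns) / len(patterns) for i in range(n)]

def classify(pattern, centroids):
    """Predict the cognitive state whose training centroid is nearest."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda state: dist(pattern, centroids[state]))

random.seed(0)
# Synthetic voxel patterns: each cognitive state has a characteristic
# mean activity pattern, plus trial-to-trial noise
states = {"left_tilt": [1.0, 0.2, 0.5], "right_tilt": [0.2, 1.0, 0.5]}
def trial(state):
    return [m + random.gauss(0, 0.3) for m in states[state]]

# Stage 1 (train): learn which patterns go with which cognitive state
train = {s: [trial(s) for _ in range(50)] for s in states}
centroids = {s: centroid(train[s]) for s in states}

# Stage 2 (test): predict the state from new, unseen patterns
correct = sum(classify(trial(s), centroids) == s
              for s in states for _ in range(50))
print(f"decoding accuracy: {correct}/100")
```

The decoding accuracy here is well above the 50% chance level, which is exactly the kind of "significantly above chance" result the papers report: statistically solid, but a long way from the Jedi mind trick.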

In fact, the main advantage of this form of mind reading in research neuroscience is that it provides a powerful method for exploring how complex patterns in brain data vary with the experimental condition. Multivariate analysis can also be performed the other way around (by predicting brain activity from behaviour, see here), and similarly, there is no reason why train-test procedures can't be used for univariate analyses. In this type of research, the purpose is not actually to read the mind of cash-poor undergraduates who tend to volunteer for these experiments, but rather to understand the relationship between mind and brain.

Statistical methods for prediction provide a formal framework for this endeavour, and although they are a form of mind reading, they are unlikely to capture the popular imagination once the finer details are explained. Experiments may sometimes get dressed up like a mentalist's parlour trick (e.g., "using fMRI, scientists could read the contents of consciousness"), but such hype invariably leaves those who actually read the scientific paper a bit disappointed by the more banal reality (e.g., "statistical analysis could predict significantly above chance whether participants were seeing a left or right tilted grating"... hardly the Jedi mind trick, but very cool from a neuroscientific perspective), or contributes to paranoid conspiracy theories in those who didn't read the paper, but have an active imagination.

Mind reading as a tool for clinical neuroscience


So, in neuroscientific research, mind reading is most typically used as a convenient tool for studying mind-brain relationships. However, the ability to infer mental states from brain activity has some very important practical applications. For example, in neural prosthetics, internal thoughts are decoded by "mind reading" algorithms to control external devices (see previous post here). Mind reading may also provide a vital line of communication to patients who are otherwise completely unable to control any voluntary movement.

Imagine you are in an accident. You suffer serious brain damage that leaves you with eye blinking as your only voluntary movement for communicating with the outside world. That's bad, very bad in fact - but in time you might perfect this new form of communication, and eventually you might even write a good novel, with sufficient blinking and heroic patience. But now imagine that your brain damage is just a little bit worse, and now you can't even blink your eyes. You are completely locked in, unable to show the world any sign of your conscious existence. To anyone outside, you appear completely without a mind. But inside, your mind is active. Maybe not as sharp and clear as it used to be, but still alive with thoughts, feelings, emotions, hopes and fears. Now mind reading, at any level, becomes more than just a parlour trick.
"It is difficult to imagine a worse experience than to be a functioning mind trapped in a body over which you have absolutely no control" Prof Chris Frith, UCL [source here]
As a graduate student in Cambridge, I volunteered as a control participant in a study conducted by Adrian Owen to read mental states with fMRI for just this kind of clinical application (since published in Science). While I lay in the scanner, I was instructed to either imagine playing tennis or to spatially navigate around a familiar environment. The order was up to me, but it was up to Adrian and his group to use my brain response to predict which of these two tasks I was doing at any given time. I think I was quite bad at spatially navigating, but whatever I did inside my brain was good enough for the team to decode my mental state with remarkable accuracy.

Once validated in healthy volunteers (who, conveniently enough, can reveal which task they were doing inside their head, thus the accuracy of the predictions can be confirmed), Adrian and his team then applied this neuroscientific knowledge to track the mental state of a patient who appeared to be in a persistent vegetative state. When they asked her to imagine playing tennis, her brain response looked just like mine (and other control participants), and when asked to spatially navigate, her brain looked just like other brains (if not mine) engaged in spatial navigation.

In this kind of study, nothing very exciting is learned about the brain, but something else extremely important has happened: someone has been able to communicate for the first time since being diagnosed as completely non-conscious. Adrian and his team have further provided proof-of-principle that this form of mind reading can be applied in other patients to test their level of conscious awareness (see here). By following the instructions, some patients were able to demonstrate for the first time a level of awareness that was previously completely undetected. In one further example, they even showed that this brain signal can be used to answer some basic yes/no questions.

This research has generated an enormous amount of scientific, clinical and public interest [see his website for examples]. As quoted in a recent Nature News Feature, Adrian has since been "awarded a 7-year Can$10-million Canada Excellence Research Chair and another $10 million from the University of Western Ontario" and "is pressing forward with the help of three new faculty members and a troop of postdocs and graduate students". Their first goal is to develop cheaper and more effective means of using non-invasive methods like fMRI and EEG to restore communication. However, one could also imagine a future for invasive recording methods. Bob Knight's team in Berkeley have been using electrical recordings made directly from the brain surface to decode speech signals (see here for a great summary in the Guardian by Ian Sample). Presumably, this kind of method could be considered for patients identified as partially conscious.

See also an interesting interview with Adrian by Mo Constandi in the Guardian

References:
Monti et al. (2010). Willful modulation of brain activity in disorders of consciousness. New England Journal of Medicine
Owen et al. (2006). Detecting awareness in the vegetative state. Science
Pasley et al. (2012). Reconstructing Speech from Human Auditory Cortex. PLoS Biology

Tuesday, 5 June 2012

Book Review: Sum

 Sum: Forty Tales from the Afterlives by David Eagleman

This inaugural book review for the Brain Box does not feature the latest neuroscience book to hit the shelves, nor is it even the latest work by author, David Eagleman. What marks this book out in particular is a recent chamber opera adaptation by composer Max Richter and directed by the choreographer Wayne McGregor, which I saw performed last night at the Royal Opera House Linbury Studio Theatre. So, this is a slightly unconventional start, part book and opera review!
"In the afterlife..."
As the title suggests, the book comprises a collection of short stories, more like a series of vignettes, each imagining a different possible afterlife. For example, the opening tale, Sum, invites you to imagine an afterlife in which you relive all your previous experiences, but reordered and grouped according to common themes/qualities. You spend six days clipping your nails, six weeks waiting for a green light, one year reading books, two weeks lying, three weeks realising you are wrong, two weeks counting money, etc. And buried in this inventory of life experience is fourteen minutes of pure joy, as well as the pain and heartache, all tallied and accounted for.
"...fourteen minutes of pure joy..."
The opera also begins with this title piece, with an intense overlay of instrument and voice, interweaving fragments of a categorised life with fourteen minutes of pure joy at the heart of the storm. It is a powerful opening, the emotion intensified by the use of space to trap and magnify the experience of sound.
"The spoken word becomes like a thought flying across space"
The whole performance is contained within a cube, or waiting room, surrounded by large projected walls carrying a constant flow of images. The musical ensemble plays from a central pit and the vocal performers roam about the audience. According to the director, all these elements should "coalesce to have a visceral, personal and profound impact on each individual in the room". Indeed, within the confined space, it is impossible to remain detached. The director certainly succeeds in creating a "living, breathing installation where the audience become intrinsic players".

Max Richter likens Sum to a series of literary variations, a study of the same subject from different angles and perspectives. Although, strictly, the stories are mutually exclusive, the narrative flows from one vignette to the next as Eagleman sketches out the human condition. Like the lone quark in the tale Conservation, a singular common theme is used to sketch out the hopes, dreams, loves and disappointments of the human, a curious creature who, despite its sophisticated sensory apparatus, simply wants to clump together with other conspecifics, to be stroked and to look at one another (from the tale Narcissus).

The opera captures the powerful emotion and beauty of Sum, but not so much the humour. It would probably be a mistake to attempt an operatic translation of hilarious tales like the Death Switch, in which life is preserved through an absurd extension of the out-of-office-reply. Also conspicuously absent is the humorous tale Graveyard of the Gods. The opera is almost certainly better for these absences, enabling a more coherent, and deeper exploration of a common theme. But for the full experience, the book is essential reading.

Read and listen to more from Max Richter here
And read more from Wayne McGregor here
And hear an interview with David Eagleman on this Guardian podcast

Wednesday, 30 May 2012

A simple plan for open access?

Just chatting with Chris Chambers, and we came up with this simple 6-step plan to solve the Open Access problem:

1. Submit your paper to your Journal of Choice

2. With editor approval, go to review

3. With reviewer approval, make suggested changes, tweak figures, add caveats, improve the science, etc.

4. Repeat steps 2-3, or 1-3 as necessary

5. Finally, with editor approval, receive acceptance email (and notification of publication cost, copyright restrictions, etc)

6. Now, here's the sting: take your accepted peer-reviewed paper and publish it yourself, on-line, along with all the reviewer comments, reply to reviewers (more reviewer comments, replies to comments, etc.) and most importantly, the final decision email - i.e., your proof of endorsement from said Journal of Choice

Are you brave enough to follow these 6 simple steps to DIY Open Access? It is fully peer-reviewed, and endorsed by your Journal of Choice, with its good reputation and respectable impact factor. I am not, and so instead I just signed this petition to the White House to: Require free access over the Internet to scientific journal articles arising from taxpayer-funded research. If you haven't done so already, get to it! Non-US signatories are welcome...

Also see: http://deevybee.blogspot.co.uk/2012/01/time-for-academics-to-withdraw-free.html


Monday, 28 May 2012

A Tale of Two Evils: Bad statistical inference and just bad inference

Evil 1:  Flawed statistical inference

There has been a recent lively debate on the hazards of functional magnetic resonance imaging (fMRI), and which claims to believe (or not) in the scientific and/or popular literature [here, and here]. The focus has been on flawed statistical methods for assessing fMRI data, and in particular the failure to correct for multiple comparisons [see also here at the Brain Box]. There was quite good consensus within this debate that the field is pretty well attuned to the problem, and has taken sound and serious steps to preserve the validity of statistical inferences in the face of mass data collection. Agreed, there are certainly papers out there that have failed to use appropriate corrections, and therefore the resulting statistical inferences are certainly flawed. But hopefully these can be identified, and reconsidered by the field. A freer and more dynamic system of publication could really help in this kind of situation [e.g., see here]. The same problems, and solutions, apply to non-brain-imaging fields [e.g., see here].

But I feel it may be worth pointing out that the consequence of such failures is a matter of degree, not kind. Although statistical significance is often presented as a categorical value (sig vs. ns), the threshold is of course arbitrary, as undergraduates are often horrified to learn (why P<.05? yes, why indeed??). When we fail to correct for multiple comparisons, the expected probabilities change, and therefore the reported statistical significance is misrepresented. Yes, this is bad; this is Evil 1. But perhaps there is a greater, more insidious evil to beware.
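The scale of Evil 1 is easy to simulate. In the toy example below, every "test" is run on pure noise, so any significant result is by construction a false positive; without correction almost every simulated experiment yields at least one, while a simple Bonferroni correction restores roughly the nominal 5% familywise rate (the numbers of tests, samples and experiments are all invented for illustration):

```python
import random

def simulate(n_tests=100, n_samples=20, n_experiments=200):
    """Count experiments with >= 1 false positive, with and without
    Bonferroni correction. All data are pure noise, so every
    'significant' test is a false positive."""
    z_crit = 1.96    # two-tailed p < .05, uncorrected
    z_bonf = 3.48    # approx two-tailed p < .05/100 (Bonferroni)
    random.seed(0)
    uncorrected = corrected = 0
    for _ in range(n_experiments):
        zs = []
        for _ in range(n_tests):
            # z statistic for the mean of n_samples pure-noise values
            xs = [random.gauss(0, 1) for _ in range(n_samples)]
            zs.append(abs(sum(xs) / len(xs)) * len(xs) ** 0.5)
        if max(zs) > z_crit:
            uncorrected += 1
        if max(zs) > z_bonf:
            corrected += 1
    return uncorrected, corrected

unc, cor = simulate()
print(f"experiments with a false positive: "
      f"uncorrected {unc}/200, Bonferroni-corrected {cor}/200")
```

With 100 uncorrected tests per experiment, the chance of at least one spurious hit is about 1 − 0.95^100 ≈ 99%, which is why the uncorrected count comes out near the ceiling.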

Evil 2: Flawed inference, period.

Whatever our statistical tests say, or do not say, ultimately it is the scientist, journalist, politician, skeptic, whoever, who interprets the result. One of the most serious and common problems is flawed causal inference: "because brain area X lights up when I think about/do/say/hear/dream/hallucinate Y, area X must cause Y". Again, this is a very well known error; undergraduates typically have it drilled into them, and most should be able to recite the mantra: "fMRI is correlational, not causal". Yet time and again we see this flawed logic hanging around, causing trouble.

There are of course other conceptual errors at play in the literature (e.g., there must be a direct mapping between function and structure; each cognitive concept that we can imagine must have its own dedicated bit of brain, etc), but I would argue perhaps that fMRI is actually doing more to banish than reinforce ideas that we largely inherited from the 19th Century. The mass of brain imaging data, corrected or otherwise, will only further challenge these old ideas, as it becomes increasingly obvious that function is mediated via a distributed network of interrelated brain areas (ironically, ultra-conservative statistical approaches may actually obscure the network approach to brain function). However, brain imaging, even in principle, cannot disentangle correlation from causality. Other methods can, but as Vaughan Bell poetically notes:
Perhaps the most important problem is not that brain scans can be misleading, but that they are beautiful. Like all other neuroscientists, I find them beguiling. They have us enchanted and we are far from breaking their spell. [from here]
In contrast, the handful of methods (natural lesions, TMS, tDCS, animal ablation studies) that allow us to test the causal role of brain function do not readily generate beautiful pictures, and perhaps, therefore, suffer a prejudice that keeps them under-represented in peer-reviewed journals and/or the popular press. It would be interesting to assess the role of beauty in publication bias...

Update - For even more related discussion, see:
http://thermaltoy.wordpress.com/2012/05/28/devils-advocate-uncorrected-stats-and-the-trouble-with-fmri/
http://www.danielbor.com/dilemma-weak-neuroimaging/
http://neuroskeptic.blogspot.co.uk/2012/04/fixing-science-systems-and-politics.html