Saturday, 15 December 2012

Actors volunteer to be hypnotised on TV

We are all pretty familiar with the basic formula of stage hypnosis. Supposedly normal people are hypnotised to do silly and embarrassing things before a wide-eyed audience. It is pretty hard to see exactly how the stage hypnotist is able to make normal folks cluck like a chicken (Wikipedia offers a few ideas: peer pressure, social compliance, participant selection, ordinary suggestibility, and some amount of physical manipulation, stagecraft, and trickery). But whatever is going on in people's minds, it seems unlikely that these showmen have somehow discovered how to master full mind control with a snap of the fingers.

Yet still, the idea that hypnosis could be used to control all types of behaviour seems to capture the imagination. Recent TV programmes have dived straight into the sensationalist deep end to pose the question: can people be hypnotised to commit a cold-blooded assassination? On Channel 4's Experiment series, Derren Brown claims to show that a normal everyday kind of guy can be plucked off the street and hypnotised to shoot a celebrity at a public gathering. Similarly, hypnotist Tom Silver was employed by the Discovery Channel programme Brainwashed to test the same idea: can Joe Citizen be hypnotised to commit cold-blooded murder?

Both shows set up the question by invoking the case surrounding Bobby Kennedy's assassination. In what seems an absurdly flimsy defence, Sirhan Sirhan claimed to have been hypnotised by secret agents to carry out the killing. The evidence for premeditation was overwhelming, and the appeal was unsuccessful - this is hardly a strong starting point for establishing precedent. Rather, Sirhan Sirhan's defence seems to fit somewhere between desperate appeal and paranoid delusion. The Discovery Channel additionally invoked the case of Patty Hearst. This is a fascinating story in its own right. Hearst, heiress to the fortune of media mogul William Randolph Hearst (immortalised as Charles Foster Kane by Orson Welles in the classic Citizen Kane), was kidnapped by a self-styled left-wing revolutionary group involved in bank robberies, two murders, and other acts of violence. After a failed ransom bid, she became an active member of this vanguard army until she was eventually captured by police and put on trial for armed robbery. Her defence, shaped by her extremely influential parents, argued that she had been brainwashed into joining the revolutionary group. It seems likely that her parents were unable to accept the more shocking possibility that their daughter would willingly turn on their way of life to take up the life of an outlaw revolutionary. Here, the term brainwashed sounds more like an expression of parental disbelief than a description of systematic, coercive mind control.

Despite the weak starting premise for mind control, both shows nevertheless set out to demonstrate that hypnosis can be used to programme an ordinary person to carry out a (mock) assassination. On Brainwashed, I was called in to join a panel of experts to assess a series of 'experiments', starting from relatively benign tests of hypnotic suggestion and culminating in the mock assassination. Our role as the scientific experts was relatively limited, but we were able to observe the overall process reasonably closely. We saw no obvious jiggery-pokery during production, although post-production clearly used the usual kinds of selective editing tricks that can mould impressions without explicit falsehoods.

Most viewers are pretty wise to the fact that the final cut includes only footage that the director wants you to see. Very many hours of footage never make it to screen, leaving plenty of wriggle room to create a 'coherent narrative'. But what did seem to surprise many viewers was the fact that the star 'assassin' of the show turned out to be a part-time actor, not a regular member of the public as the overall narrative implied. A similar minor scandal erupted when it was suggested that one of Derren Brown's hypnosis subjects was in the acting profession.

But is it really so surprising that a volunteer who signs up to be on TV turns out to be an actor? Presumably most people who volunteer for these kinds of things are actors, or at least aspiring actors. And presumably the directors know this too. Even if they don't explicitly advertise for actors, they are very likely to get actors answering the call for participation. And conveniently, actors will no doubt act the part for the cameras - so what more could a director want? It is not impossible that actors can also be hypnotised (maybe good acting is a form of hypnosis anyway), but it is important to keep in mind the relevant context: a TV studio, with lights, cameras, etc.; and actors (or similar) who want to be on TV. This scenario is not the stuff of controlled scientific research - needless to say, such shows should be viewed with a healthy scepticism. It may not be necessary quite yet to abandon your pre-existing sense that you are more or less in control of your own actions and behaviour.

Thursday, 13 December 2012

Science, LIVE: A made-for-TV experiment


[also see my related Guardian post]

A few months ago, Channel 4 attracted considerable attention for its sensationally titled "Drugs Live: The Ecstasy Trial". The subject matter was clearly designed to court controversy, with Prof Nutt at the centre of the storm. The show "hooked almost 2 million" viewers on the first night, and triggered a lively debate around highly charged questions, such as: Do we need to focus more on the medical/chemical nature of particular drugs, and less on their moral/legal status? Is it right to film volunteers taking a Class A drug, even for a medical experiment? Is Channel 4 glorifying illegal drugs, or contributing to rational discussion?

Much was written and said on either side of this debate (e.g. this conversation between Nutt and Manning). However, as an empirical scientist, I would like to draw attention to another, more general issue that was perhaps neglected in the mêlée of moral and ethical arguments. I would like to know why TV science is conducting experiments in the first place.

The made-for-TV experiment

Drugs Live was structured around an ethically approved double-blind experiment to test the effects of MDMA. Data were collected from 25 participants under the influence of MDMA and a placebo control (sugar pill). Tests included questionnaire and computer-based tasks to measure changes in mood and cognition, as well as the ever-TV-friendly fMRI to measure changes in brain activity. This sounds like a reasonable set of experiments, but according to Prof Nutt, the Medical Research Council (MRC) declined to fund the research because it did not "fit in with the MRC's portfolio of addiction". Instead, Channel 4 agreed to pick up the tab, presumably for motives rather more financial than the MRC's commitment to "improve human health through world-class medical research" (from its mission statement). Perhaps we should celebrate this innovative collaboration between academia and the private sector. In these times of austerity, perhaps TV-funded research is the future big-society answer to maintaining Britain's place as a leading powerhouse of innovation, science and technology.

And indeed, if the research is well-conducted, the results could provide valuable insights into the effects of MDMA, of genuine scientific interest and with important political/social relevance. After many weeks and months of painstaking data analysis, the results could be submitted to a reputable scientific journal for rigorous peer review. If the submitted findings are accepted by the academy as sufficiently trustworthy, then the scientific report would be published for consideration by a wider scientific audience. Journal press releases might then alert the popular news outlets, who may then report these novel findings to their more general readership.

This is how scientific findings are normally disseminated to the wider audience. Slowly, but surely, complex data yield their secrets to careful systematic analysis. It is not gripping TV, but this systematic process is the foundation of modern scientific research. Made-for-TV science, on the other hand, can bypass the process completely and stream its own results directly into living rooms across the country.


Science to a production schedule: lights, camera, action!

Maybe science needs a bit more of a can-do attitude. Like Jon Snow, who promises on Drugs Live that "tonight we will get to the bottom of it". Not in another month, six months, or a couple of years - but this very evening! And true to his word, by the end of the first episode Snow can already announce that we have all witnessed "two scientific breakthroughs". He was not very specific, but we may assume that he was referring to the two brain scans that were rotated on a large plasma screen.


The first scan, from actor Keith Allen, appeared to show a relative decrease in communication between two areas previously associated with the so-called default mode network. I should leave aside any academic debate about the true nature of this brain network, as tempting as it is to question Prof Nutt’s proclamation that this area is no less than “you, your personality, sense of self”. My purpose here is just to make the point that data from one brain in one volunteer, hastily analysed and not peer-reviewed, do not constitute a “scientific breakthrough”. Perhaps an interesting hint. A potential clue, maybe. A promising lead, why not? But certainly not a “scientific breakthrough”.


The second scan was even more ambiguous. The rotating image appeared to show a number of brain regions that we are told were more active when the volunteer closed her eyes after taking MDMA. We are told that this reflects the heightened perceptual experience caused by the drug. But to be blunt, these data look like a mess, like random variation in the MRI signal. This is not really all that surprising, considering it is only one scan from one person, analysed under the unrealistic time pressure of the TV production schedule. I would be amazed, or even suspicious, if the result were any clearer.


Of course science needs to be simplified for a TV audience. Matt Wall, a neuroscientist involved in the Drugs Live programme, says of his experience:
"TV needs everything to be black-and-white, and unambiguous... They don’t care that you haven’t run the necessary control experiments, or that the study was only correlational and therefore can’t be used to imply direct causation – they want a neat, clear story above all else"
Oversimplifying the deeper complexities "can very often lead to distortions, or ‘Lies-to-children’". This is a perennial issue for TV science, whether following the classic science reporting formula or made-for-TV experiments. To articulate complex theoretical concepts and technical details to a general audience without misrepresentation is a great but rare skill.

Astonishing Science

Jon Snow promised his live audience “astonishing science”. Quite right, TV should bring astonishing science to the wider audience. But this mission is seriously compromised by the production demands associated with made-for-TV experiments.

Advising the Discovery Channel on a recent made-for-TV experiment, I was told by a production assistant: “you don’t have to always be so cynical, you know.” And I absolutely agree with the sentiment. Science TV should convey the excitement of science, not just the limitations. Just like this production assistant, I am also frustrated by too many caveats. The reason I came to science was to discover something about the world, not just to point out flaws in putative findings. But like any empirical scientist, I have learned many times over not to get too excited over half-baked results. Only solid reliable results are really exciting.

But TV science does not have to be boring. TV has many tricks up its sleeve, such as dramatic music, frenetically paced scene cuts, angled shots in darkened laboratories, and expensive props like MRI scanners. All of these can be used to convey the excitement of science, without resorting to made-for-TV experiments.

Reality-science TV

The enormous success of reality TV tells us that viewers like to experience the activities on screen through people they can relate to. Extrapolating to science TV, I guess viewers also like to feel the scientific experience through relatable personalities. The personal touch can make it seem more real.

Recently, I was asked to help conduct another made-for-TV brain imaging experiment, with the show's presenter as the experimental subject. To fit the production schedule, we had to analyse complex brain imaging data within a matter of hours. We did manage to produce some very rudimentary results within this scientifically improbable time frame - we had to! Production costs are clocked by the hour.

Of course the actual result was of limited scientific value, and we were naturally so circumspect about what we said on camera that it is hard to see how these data could have been of any great interest to the audience. It was essentially just a brain on a screen: demonstration science - what it looks like to do science.
There is of course no harm in such demonstrations - eye-catching demonstrations are bread-and-butter tools for conveying the excitement of science. We just don't need to pretend that they also constitute novel scientific research. It is important enough to help convey the process of science without pretending to add to the content of scientific knowledge. There are plenty of good and proper TV production devices for engaging public interest in science. And we expect a little bit of TV gimmickry - it is show biz, after all. But why not be content with reporting on science, rather than making science as well?


Investigative Journalism

The Drugs Live formula is a hybrid of the traditional science programme and the exposé. Borrowing from the rich history of investigative journalism, TV is not just reporting news, but making news as well. The production company can herald exclusive access to a breaking news story:
“Now, in a UK television first, two live programmes will follow volunteers as they take MDMA, the pure form of ecstasy, as part of a ground-breaking scientific study” [from the Channel 4 series synopsis]
But investigative journalism is also a tricky business. The precise outcome of any investigation is impossible to predict, and therefore hard to plan for. Out of the many possible leads, only a minority will reveal something worth reporting. Producers are presumably familiar with the frustration of stories that lead nowhere, and are presumably careful about committing to a production schedule until the results of an investigation are relatively clear. Jumping to premature conclusions can lead to serious false claims, as dramatically highlighted recently by the Newsnight debacle that cost the BBC Director-General his job. In a recent post-mortem of this botched investigation, David Leigh writes that “to be faithful to the evidence" is essential for successful investigative journalism. And although journalism may not be "rocket science" [in Leigh's words], investigative journalism also demands a genuine commitment to follow the evidence, wherever it leads.


Conflict of interest

In the shadow of recent revelations of fraud and serious malpractice across a range of scientific disciplines, from psychology to anaesthesiology, many scientists have been asking how to improve the scientific process (e.g., see here and here). Televising the process is unlikely to be the answer. 

Although industry funding can be a valuable source of revenue, we must always seriously consider potential conflicts of interest. Increasingly, concerns have been raised regarding dubious practices in clinical research funded by pharmaceutical companies. Ben Goldacre’s book, Bad Pharma, is a must-read on this serious public health issue. Conflicts of interest can distort many stages of the experimental process to increase the likelihood of finding a particular result. Clinical research is becoming increasingly alert to these problems, and serious steps are being taken to avoid the malevolent influence of funding agencies with vested interests.

But even without a commercial interest in a particular result, the pressure to "find something" noteworthy can also motivate bad scientific practice. In academic circles, the pressure to publish is typically considered a major driving factor in scientific malpractice and fraud. The bottom line for Channel 4, of course, is to maximise audience numbers to boost the value of their commercial time. This does not automatically rule out potential scientific merit of TV-funded experiments, but it is certainly worth bearing in mind, especially when thinking about how the particular demands of TV could compromise scientific method. As discussed above, these include short-cuts and rushed analyses to fit a tight production schedule, as well as the pressure to find "something in the data" by the end of the shoot, however unreliable it might turn out to be later. The experimental approach in Drugs Live was also apparently compromised by recruiting celebrities (and other TV-friendly personalities) as experimental subjects, tapping into the proven success of the reality-TV format but skewing the sample of experimental subjects. It is also hard to imagine that the omnipresent TV cameras did not influence the results of the experiment.

It will be interesting to follow up on this research to see how the results are received within the academic community. There is no reason not to expect some interesting and important findings, but I wonder if the more detailed and scientifically meaningful results will be heralded with as much fanfare as the actual pill-popping on camera. I fear Channel 4 might be more interested in the controversy surrounding MDMA than the science motivating the research.


Friday, 30 November 2012

Bold predictions for good science


Undergraduates are taught proper scientific method. First, the experimenter makes a prediction, then s/he collects data to test that prediction. Standard statistical methods assume this hypothesis-driven approach: most statistical inferences are invalid unless this rigid model is followed.

But very often it is not. Very often experimenters change their hypotheses (and/or analysis methods) after data collection. Indeed, students conducting their first proper research project are often surprised by this 'real-world' truth: "oh, that is how we really do it!". They learn to treat research malpractice like a cheeky misdemeanour.

After recent interest in science malpractice, fuelled by revelations of outright fraud, commentators are starting to treat the problem more seriously, especially in psychology and neuroscience. This month, Perspectives on Psychological Science devoted an entire issue to the problem of peer-reviewed results that fail to replicate, because they were born of bad scientific practice. 

Arguably, scientific journals share much of the responsibility for allowing bad research practices to flourish. Although there may be little journals can do to stop outright fraud, they can certainly do a lot to improve research culture more generally. Recently, the journal Cortex has announced that it will try to do just that. Chris Chambers, an associate editor at the journal, has outlined a new submission format that will strictly demand that researchers conform to the classic experimental model: predictions before data. With the proposed Registered Reports format, authors will be required to set out their predictions (and design/analysis details) before they collect the data, thus cutting off the myriad opportunities to capitalise on random vagaries in observed data. And although researchers could still lie and make up data, cleaning up the grey area of more routine bad behaviour could have important knock-on effects. As I have argued elsewhere, bad scientific practice is presumably a fertile breeding ground for more serious acts of fraud.

This is a bold new initiative, and if successful, could precipitate a major change in the way science is done. For further details, and some interesting discussion, check out this panel discussion on Fixing the Fraud at SpotOn London 2012 and this article in the Guardian.


Wednesday, 3 October 2012

Distance Code

Accurate brain stimulation requires precise neuroanatomical information. To activate a specific brain region with transcranial magnetic stimulation (TMS), it is important to know where on the scalp to place the induction coil. Commercial neuronavigation systems have been developed for this purpose. However, it is also important to know the depth of the targeted area, because the effect of TMS critically depends on the distance between the stimulating coil and targeted brain area.

We have developed a simple TMS Distance Toolbox for calculating the distance between a stimulation site on the scalp surface and the underlying cortical surface. The toolbox can be downloaded here, and requires Matlab and SPM8. I will post further information soon, including user instructions.
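
The core calculation is straightforward: find the shortest straight-line distance from the marked scalp position to the nearest point on the cortical surface mesh. The sketch below illustrates the idea in Python with placeholder coordinates (the toolbox itself is implemented in Matlab/SPM8, so treat this purely as an illustration, not the toolbox code):

    import numpy as np
    from scipy.spatial import cKDTree

    def scalp_to_cortex_distance(scalp_xyz, cortex_vertices):
        """Shortest Euclidean distance (in mm) from a scalp coordinate to a
        cortical surface, approximated by the nearest mesh vertex."""
        tree = cKDTree(cortex_vertices)                  # index the cortical mesh vertices
        distance, _ = tree.query(np.asarray(scalp_xyz, dtype=float))
        return distance

    # Example with made-up coordinates (mm), assuming scalp and cortex are in the same space
    rng = np.random.default_rng(0)
    cortex_vertices = rng.normal(scale=40.0, size=(10000, 3))   # placeholder vertex cloud
    print(scalp_to_cortex_distance([45.0, -20.0, 95.0], cortex_vertices))

In practice, the cortical vertices would come from a segmented structural MRI, and the scalp coordinate from the neuronavigation system.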

Saturday, 15 September 2012

Must we really accept a 1-in-20 false positive rate in science?

There has been some very interesting and extremely important discussion recently addressing a fundamental problem in science: can we believe what we read?

After a spate of high-profile cases of scientific misdemeanours and outright fraud (see Alok Jha's piece in the Guardian), people are rightly looking for solutions to restore credibility to the scientific process [e.g., see Chris Chambers and Petroc Sumner's Guardian response here].

These include more transparency (especially pre-registering experiments), encouraging replication, promoting the dissemination of null effects, shifting career rewards from new findings (neophilia) to genuine discoveries, abolishing the cult of impact factors, etc. All these are important ideas, and many are more or less feasible to implement, especially with the right top-down influence. However, it seems to me that one of the most basic problems is staring us right in the face, and would require absolutely no structural change to correct. The fix is as simple as re-drawing a line in the sand.

Critical p-value: line in the sand

Probability estimates are inherently continuous, yet we typically divide our observations into two classes: significant (i.e., true, real, bona fide, etc.) and non-significant (i.e., the rest). This reduces the mental burden of assessing experimental results - all we need to know is whether an effect is real, i.e., whether it passes a statistical threshold. And so there are conventions, the most widely used being p<.05. If our statistical test returns a probability below the 5% chance level, we may assert that our conclusion is justified. Ideally, this threshold ensures that when there is no real effect, we will falsely declare one no more than one time in twenty. But turn this around, and it also means that, at worst, the assertion could be wrong (i.e., a false positive) one time in twenty (about the same odds as being awarded a research grant in the current climate). That already seems a pretty high rate for accepting false positive claims in science. But worse, this is only the ideal theoretical case. There are many dubious scientific practices that dramatically inflate the false positive rate, such as cherry-picking and peeking during data collection (see here).

These kinds of fishy goings-on are evident in statistical anomalies, such as the preponderance of just-significant effects reported in the literature (see here for a blog review of the empirical paper). Although it is difficult to estimate the true false positive rate out there, it can only be higher than the ideal one-in-twenty rate assumed by our statistical convention. So, even before worrying about outright fraud, it is actually quite likely that many of the results we read about in the peer-reviewed literature are in fact false positives.
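
To make the peeking problem concrete, here is a minimal simulation (my own sketch, not taken from any of the papers linked above): test pure noise with a one-sample t-test, peek every few subjects, and stop at the first p < .05.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def peeking_false_positive_rate(n_sims=5000, n_min=10, n_max=50, step=5, alpha=0.05):
        """Proportion of pure-noise 'experiments' declared significant when we
        peek after every `step` subjects and stop at the first p < alpha."""
        hits = 0
        for _ in range(n_sims):
            data = rng.normal(0, 1, n_max)               # the null is true: population mean = 0
            for n in range(n_min, n_max + 1, step):      # peek as the data accumulate
                if stats.ttest_1samp(data[:n], 0).pvalue < alpha:
                    hits += 1                            # declare 'significance' and stop collecting
                    break
        return hits / n_sims

    print(peeking_false_positive_rate())   # comes out well above the nominal 0.05

With just a handful of peeks, the long-run false positive rate ends up well above the advertised one-in-twenty.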

Boosting the buffer zone

The obvious solution is to tighten up the accepted statistical threshold. Take physics, for example. Those folk only accept a new particle into their textbooks if the evidence reaches a statistical threshold of 5 sigma (i.e., p<0.0000003). Although the search for the Higgs boson involved plenty of peeking along the way, at 5 sigma the resultant inflation of the false positive rate is hardly likely to matter. We can still believe the effect. A strict threshold provides a more comfortable buffer between false positive and true effect. Although there are good and proper ways to correct for peeking, multiple comparisons, etc., all these assume full disclosure. It would clearly be safer just to adopt a conservative threshold. Perhaps not one quite as heroic as 5 sigma (after all, we aren't trying to find the God particle), but surely we can do better than a one-in-twenty false positive rate as the minimal, ideal threshold.
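
For reference, the sigma convention maps onto one-tailed p-values via the standard normal survival function (a quick calculation of my own, matching the figure quoted above):

    from scipy.stats import norm

    # One-tailed p-value corresponding to a z-threshold of k standard deviations
    for k in (2, 3, 5):
        print(f"{k} sigma -> p < {norm.sf(k):.1g}")
    # 2 sigma -> p < 0.02, 3 sigma -> p < 0.001, 5 sigma -> p < 3e-07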

Too conservative?

Of course, tightening the statistical threshold would necessarily increase the number of failures to detect a true effect, so-called type II errors. However, it is probably fair to say that most fields in science are suffering more from false positives (type I errors) than from false negatives (type II errors). False positives are more influential than false negatives, and harder to dispel. In fact, we are probably more likely to mistake a null effect for a real effect cloaked in noise, especially if there is already a false positive lurking about somewhere in the published literature. It is notoriously difficult to convince your peers that your non-significant test indicates a true null effect. Increasingly, Bayesian methods are being developed to test for sameness between distributions, but this is another story.

The main point is that we can easily afford to be more conservative when bestowing statistical significance on putative effects, without stifling scientific progress. Sure, it would be harder to demonstrate evidence for really small effects, but not impossible if they are important enough to pursue. After all, the effect that betrayed the Higgs particle was very small indeed, but that didn't stop them from finding it. Valuable research could focus on validating trends of interest (i.e., strongly predicted results), rather than chasing down the next new positive effect and leaving behind a catalogue of potentially suspect "significant effects" in its wake. Science cannot progress as a house of cards.

Too expensive?

Probably not. Currently, we are almost certainly wasting research money chasing down the dead ends opened up by false positives. A smaller but more reliable corpus of results would almost certainly increase the productivity of many scientific fields. At present, the pressure to publish has precipitated a flood of peer-reviewed scientific papers reporting any number of significant effects, many of which will almost certainly not stand the test of time. It would seem a far more sensible use of resources to focus on producing fewer, but more reliable, scientific manuscripts. Interim findings and observations could be made readily available via any number of suggested internet-based initiatives. These more numerous 'leads' could provide a valuable source of possible research directions, without yet falling into the venerable category of immutable (i.e., citable) scientific fact. Like conference proceedings, they could adopt a more provisional status until they are robustly validated.

Raise the bar for outright fraud

Complete falsification is hard to detect in the absence of actual whistleblowers. In Simonsohn's words: "outright fraud is somewhat impossible to estimate, because if you're really good at it you wouldn't be detectable" (from Alok Jha). Even publishing the raw data is no guarantee of catching out the fraudster, as there are clever ways to generate plausible-looking data sets that would pass veracity testing.

However, fraudsters presumably start their life of crime in the grey area of routine misdemeanour. A bit of peeking here, some cherry-picking there, before they are actually making up data points. Moreover, they know that even if their massaged results fail to replicate, the benefit of the doubt should reasonably allow them to claim to be unwitting victims of an innocent false positive. After all, at p<0.05 there is already a 1-in-20 chance of a false positive, even if you do everything by the book!

Like rogue traders, scientific fraudsters presumably start with a small, spur-of-the-moment act that they reasonably believe they can get away with. If we increase the threshold that needs to be crossed, fewer unscrupulous researchers will be tempted down the dark and ruinous path of scientific fraud. And if they did, it would be much harder for them to claim innocence after their 5 sigma results fail to replicate.

Why impose any statistical threshold at all?

Finally, it is worth noting some arguments that the statistical threshold should be abolished altogether. Maybe we should be more interested in the continua of effect sizes and confidence intervals, rather than discrete hypothesis testing [e.g., see here]. I have a lot of sympathy for this argument. A more quantitative approach to inferential statistics would more accurately reflect the continuous nature of evidence and certainty, and would also more readily suit meta-analyses. However, it is also useful to have a standard against which we can hold up putative facts for the ultimate test: true or false.

Wednesday, 15 August 2012

In the news: clever coding gets the most out of retinal prosthetics

This is something of an update to a previous post, but I thought it interesting enough for its own blog entry. Just out in PNAS, Nirenberg and Pandarinath describe how they mimic the retina’s neural code to improve the effective resolution of an optogenetic prosthetic device (for a good review, see Nature News).

As we have described previously, retinal degeneration affects the photoreceptors (i.e., rod and cone cells), but often spares the ganglion cells that carry visual information along the optic nerve to the brain (see retina diagram below). By stimulating these intact output cells, a prosthetic device can route visual information past the damaged retinal circuitry to the brain. Although the results from recent clinical trials are promising, restored vision is still fairly modest at best. To put it in perspective, Nirenberg and Pandarinath write:
[current devices enable] "discrimination of objects or letters if they span ∼7 ° of visual angle; this corresponds to about 20/1,400 vision; for comparison, 20/200 is the acuity-based legal definition of blindness in the United States"
Obviously, this poor resolution must be improved upon. Typically, the problem is framed as a limit in the resolution of the stimulating hardware, but Nirenberg and Pandarinath show that software matters too. In fact, they demonstrate that software matters a great deal.

This research focuses on a specific implementation of retinal prosthesis based on optogenetics (for more on the approach, check out this Guardian article and this early empirical demonstration). Basically, intact retinal ganglion cells are injected with a genetically engineered virus that produces a light-sensitive protein. These modified cells will now respond to light coming into the eye, just as the rods and cones do in the healthy retina. This approach, although still being developed in mouse models, promises a more powerful and less invasive alternative to the electrode arrays previously trialled in humans. But it is not the hardware that is the focus of this research. Rather, Nirenberg and Pandarinath show how the efficacy of these prosthetic devices critically depends on the type of signal used to activate the ganglion cells. As schematised below, they developed a special type of encoder to convert natural images into a format that more closely matches the neural code expected by the brain.

The steps from visual input to retinal output proceed as follows: Images enter a device that contains the encoder and a stimulator [a modified mini-digital light projector (mini-DLP)]. The encoder converts the images into streams of electrical pulses, analogous to the streams of action potentials that would be produced by the normal retina in response to the same images. The electrical pulses are then converted into light pulses (via the mini-DLP) to drive the ChR2 expressed in the ganglion cells.
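
To give a flavour of what such an encoder does, models of retinal ganglion cell responses are often built as a linear filter followed by a static nonlinearity and Poisson spike generation (a linear-nonlinear-Poisson, or LNP, model). The toy sketch below is my own illustration of that general idea, not the authors' implementation, and every parameter in it is made up:

    import numpy as np

    rng = np.random.default_rng(1)

    def lnp_encode(frames, receptive_field, dt=0.01, gain=20.0):
        """Toy linear-nonlinear-Poisson encoder for a single model ganglion cell.

        frames          : (T, H, W) array of image frames (the visual input)
        receptive_field : (H, W) spatial weighting (the linear stage)
        Returns a length-T array of spike counts, one per time bin of width dt seconds.
        """
        drive = np.tensordot(frames, receptive_field, axes=([1, 2], [0, 1]))  # linear filtering
        rate = gain * np.log1p(np.exp(drive))                                 # softplus nonlinearity -> firing rate (Hz)
        return rng.poisson(rate * dt)                                         # Poisson spike generation

    # Example with random frames and a random receptive field (placeholders only)
    frames = rng.random((100, 16, 16))
    receptive_field = rng.normal(0, 0.05, (16, 16))
    spikes = lnp_encode(frames, receptive_field)
    print(spikes.sum(), "spikes across", len(frames), "frames")

In the prosthetic, pulse trains of this kind are what get converted into light pulses to drive the ChR2-expressing ganglion cells.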
This neural code is illustrated in the image below: 



The key result of this research paper is a dramatic increase in the amount of information that is transduced to the retinal output cells. They used a neural decoding procedure to quantify the information content in the activity patterns elicited during visual stimulation of a healthy retina, compared to optogenetic activation of ganglion cells in the degenerated retina via encoded or unencoded stimulation. Sure enough, the encoded signals were able to reinstate activity patterns that contained much more information than the raw signals. In a more dramatic, and illustrative, demonstration of this improvement, they used an image reconstruction method to show how the original image (baby's face in panel A) is first encoded by the device (reconstructed in panel B) to activate a pattern of ganglion cells (image-reconstructed in panel C). Clearly, the details are well-preserved, especially in comparison to the image-reconstruction of a non-encoded transduction (in panel D). In a final demonstration, they also found that the experimental mice could track a moving stimulus using the coded signal, but not the raw unprocessed input.

According to James Weiland, an ophthalmologist at the University of Southern California (quoted by Geoff Brumfiel in Nature News), there has been considerable debate over whether it is more important to try to mimic the neural code, or simply to allow the system to adapt to an unprocessed signal. Nirenberg and Pandarinath argue that clever pre-processing will be particularly important for retinal prosthetics, as there appears to be less plasticity in the visual system than in, say, the auditory system. Therefore, it is essential that researchers crack the neural code of the retina rather than hope the visual system will learn to adapt to an artificial input. The team are optimistic:
"the combined effect of using the code and high-resolution stimulation is able to bring prosthetic capabilities into the realm of normal image representation"
But only time, and clinical trials, will tell.


References:

Bi A, et al. (2006) Ectopic expression of a microbial-type rhodopsin restores visual responses in mice with photoreceptor degeneration. Neuron 50(1):23–33.

Nirenberg S, Pandarinath C (2012) Retinal prosthetic strategy with the capacity to restore normal vision. PNAS.

Monday, 13 August 2012

Research Briefing: Lacking Control over the Trade-off between Quality and Quantity in Visual Short-Term Memory

This paper, just out in PLoS One, describes research led by Alexandra Murray during her doctoral studies with Kia Nobre and myself. The series of behavioural experiments began with a relatively simple question: how do people prepare for encoding into visual short-term memory (VSTM)?

VSTM is capacity-limited. To some extent, increasing the number of items in memory reduces the quality of each representation. However, this trade-off does not seem to continue ad infinitum. If there are too many items to encode, people tend to remember only a subset of the possible items with reasonable precision, rather than retaining a vaguer recollection of all of them.
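
In this literature, 'quantity' and 'quality' are often estimated by modelling recall errors as a mixture of target-centred responses (whose spread indexes precision) and random guesses (whose proportion indexes how many items made it into memory), in the spirit of Zhang & Luck. The toy sketch below is my own illustration of that modelling idea, not the analysis pipeline from our paper:

    import numpy as np
    from scipy import optimize
    from scipy.stats import vonmises

    rng = np.random.default_rng(2)

    def simulate_errors(n_trials, p_mem, kappa):
        """Recall errors (radians): remembered items give von Mises noise around
        the target (0); forgotten items give uniform random guesses."""
        remembered = rng.random(n_trials) < p_mem
        errors = rng.uniform(-np.pi, np.pi, n_trials)
        errors[remembered] = rng.vonmises(0.0, kappa, size=remembered.sum())
        return errors

    def fit_mixture(errors):
        """Maximum-likelihood estimates of (p_mem, kappa) for the mixture model."""
        def neg_loglik(params):
            p_mem, kappa = params
            likelihood = p_mem * vonmises.pdf(errors, kappa) + (1 - p_mem) / (2 * np.pi)
            return -np.sum(np.log(likelihood))
        fit = optimize.minimize(neg_loglik, x0=[0.5, 5.0],
                                bounds=[(0.01, 0.99), (0.1, 100.0)])
        return fit.x

    errors = simulate_errors(500, p_mem=0.6, kappa=8.0)  # e.g. 60% of probed items retained
    print(fit_mixture(errors))                           # should recover roughly [0.6, 8.0]

Framed in these terms, the question below is whether foreknowledge lets people deliberately trade the precision parameter (quality) against the proportion remembered (quantity).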

Previously, we and others had shown that directing participants to encode only a subset of items from a larger set of possible memory items increases the likelihood that the cued items will be recalled after a memory delay. Using electroencephalography (EEG), we further showed that the brain mechanisms associated with preparation for selective VSTM encoding were similar to those previously associated with selective attention.

To follow up on this previous research, Murray asked whether people can strategically fine-tune the trade-off between the number and quality of items in VSTM. Given foreknowledge of the likely demands (i.e., many or few memory items, a difficult or easy memory test), can people engage an encoding strategy that favours quality over quantity, or vice versa?

From the outset, we were pretty confident that people would be able to fine-tune their encoding strategy according to such foreknowledge. Extensive previous evidence, including our own mentioned above, had revealed a variety of control mechanisms that optimise VSTM encoding according to expected task demands. Our first goal was simply to develop a nice behavioural task that would allow us to explore, in future brain imaging experiments, the neural principles underlying preparation for encoding strategy, relative to other forms of preparatory control. But this particular line of enquiry never got that far! Instead, we encountered a stubborn failure of our manipulations to influence encoding strategy. We started with quite an optimistic design in the first experiment, but progressively increased the power of our experiments to detect any influence of foreknowledge of expected memory demands - and still nothing at all! The figure on the right summarises the final experiment in the series. The red squares in the data plot (i.e., panel b) highlight the two conditions that should differ if our hypothesis were correct.

By this stage it was clear that we would have to rethink our plans for subsequent brain imaging experiments. But in the interim, we had also potentially uncovered an important limit to VSTM encoding flexibility that we had not expected. The data just kept on telling us: people seem to encode as many task-relevant items as possible, irrespective of how many items they expect, or how difficult the memory test at the end of the trial is likely to be. In other words, this null effect had revealed an important boundary condition for encoding flexibility in VSTM. Rather than condemn these data to the file drawer, shelved as a dead-end line of enquiry, we decided that we should definitely try to publish this important, and somewhat surprising, null effect. We decided PLoS One would be the perfect home for this kind of robust null effect. The experimental designs were sensible, with a logical progression of manipulations; the experiments were well conducted; and the data were otherwise clean. There was just no evidence that our key manipulations influenced short-term memory performance.

As we were preparing our manuscript for submission, a highly relevant paper by Zhang and Luck came out in Psychological Science (see here). Like us, they found no evidence that people can strategically alter the trade-off between remembering many items poorly and few items well. If it is possible to be scooped on a null effect, then I guess we were scooped! But in a way, the precedent only increased our confidence that our null effect was real and interesting, and definitely worth publishing. Further, PLoS One is also a great place for replication studies, so surely a replication of a null effect makes it doubly ideal!


For further details, see:

Murray, Nobre & Stokes (2012) Lacking control over the trade-off between quality and quantity in VSTM. PLoS One

Murray, Nobre & Stokes (2011). Markers of preparatory attention predict visual short-term memory. Neuropsychologia, 49:1458-1465.

Zhang W, Luck SJ (2011). The number and quality of representations in working memory. Psychol Sci, 22:1434–1441.