Monday 13 August 2012

Research Briefing: Lacking Control over the Trade-off between Quality and Quantity in Visual Short-Term Memory

This paper, just out in PLoS One, describes research led by Alexandra Murray during her doctoral studies with Kia Nobre and myself. The series of behavioural experiments began with a relatively simple question: how do people prepare for encoding into visual short-term memory (VSTM)?

VSTM is capacity-limited. To some extent, increasing the number of items in memory reduces the quality of each representation. However, this trade-off does not seem to continue ad infinitum. If there are too many items to encode, people tend to remember only a subset of the items, but with reasonable precision, rather than retaining a vaguer recollection of all of them. 
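To make this trade-off concrete, here is a minimal simulation sketch (my own illustration, not part of the paper). It assumes a fixed item limit K and a fixed recall precision for stored items, in the spirit of Zhang & Luck-style slot models, so that once set size exceeds K the extra items contribute random guesses rather than increasingly blurry memories. The particular values of K and SIGMA are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)

    K = 3            # assumed fixed item limit (illustrative)
    SIGMA = 20.0     # recall precision for stored items, in degrees (illustrative)
    N_TRIALS = 10000

    def simulate_errors(set_size, k=K, sigma=SIGMA, n_trials=N_TRIALS):
        """Simulate recall error (degrees) for one probed item per trial.

        The probed item is 'stored' with probability min(1, k / set_size);
        stored items are recalled with Gaussian error (sd = sigma), while
        unstored items produce a uniform random guess on the circle.
        """
        stored = rng.random(n_trials) < min(1.0, k / set_size)
        error = np.where(
            stored,
            rng.normal(0.0, sigma, n_trials),        # noisy but precise recall
            rng.uniform(-180.0, 180.0, n_trials),    # pure guessing
        )
        # wrap errors to the circular range [-180, 180)
        return (error + 180.0) % 360.0 - 180.0

    for n in (1, 2, 4, 8):
        err = simulate_errors(n)
        print(f"set size {n}: mean |error| = {np.abs(err).mean():5.1f} deg, "
              f"proportion stored ~ {min(1.0, K / n):.2f}")

In this toy model, average error grows with set size only because more responses are guesses; the precision of the items that do make it into memory stays constant.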

Previously, we and others had shown that directing participants to encode only a subset of items from a larger set of possible memory items increased the likelihood that the cued items would be recalled after a memory delay. Using electroencephalography (EEG), we further showed that the brain mechanisms associated with preparation for selective VSTM encoding were similar to those previously associated with selective attention. 

To follow up on this previous research, Murray asked whether people can strategically fine-tune the trade-off between the number and quality of items in VSTM. Given foreknowledge of the likely demands (i.e., many or few memory items, difficult or easy memory test), can people engage an encoding strategy that favours quality over quantity, or vice versa?  

From the outset, we were pretty confident that people would be able to fine-tune their encoding strategy according to such foreknowledge. Extensive previous evidence, including our own mentioned above, had revealed a variety of control mechanisms that optimise VSTM encoding according to expected task demands. Our first goal was simply to develop a nice behavioural task that would allow us to explore, in future brain imaging experiments, the neural principles underlying preparation of encoding strategy relative to other forms of preparatory control. But this particular line of enquiry never got that far! Instead, we encountered a stubborn failure of our manipulations to influence encoding strategy. We started with quite an optimistic design in the first experiment, but progressively increased the power of our experiments to detect any influence of foreknowledge of expected memory demands, and still nothing at all! The figure on the right summarises the final experiment in the series. The red squares in the data plot (i.e., panel b) highlight the two conditions that should differ if our hypothesis were correct.  

By this stage it was clear that we would have to rethink our plans for subsequent brain imaging experiments. But in the interim, we had also potentially uncovered an important limit to VSTM encoding flexibility that we had not expected. The data just kept on telling us: people seem to encode as many task-relevant items as possible, irrespective of how many items they expect, or how difficult they expect the memory test at the end of the trial to be. In other words, this null effect had revealed an important boundary condition for encoding flexibility in VSTM. Rather than condemn these data to the file drawer, shelved as a dead-end line of enquiry, we decided that we should definitely try to publish this important, and somewhat surprising, null effect. We decided PLoS One would be the perfect home for this kind of robust null result. The experimental designs were sensible, with a logical progression of manipulations, the experiments were well conducted, and the data were otherwise clean. There was just no evidence that our key manipulations influenced short-term memory performance. 

As we were preparing our manuscript for submission, a highly relevant paper by Zhang and Luck came out in Psychological Science (see here). Like us, they found no evidence that people can strategically alter the trade-off between remembering many items poorly and remembering a few items well. If it is possible to be scooped on a null effect, then I guess we were scooped! But in a way, the precedent only increased our confidence that our null effect was real and interesting, and definitely worth publishing. Further, PLoS One is also a great place for replication studies, so surely a replication of a null effect makes it doubly ideal! 


For further details, see:

Murray, Nobre & Stokes (2012). Lacking control over the trade-off between quality and quantity in visual short-term memory. PLoS ONE.

Murray, Nobre & Stokes (2011). Markers of preparatory attention predict visual short-term memory. Neuropsychologia, 49, 1458-1465.

Zhang & Luck (2011). The number and quality of representations in working memory. Psychological Science, 22, 1434-1441.


