Sunday, 24 February 2013

Research Briefing: Attention restores forgotten items to visual short-term memory

Our paper, just out in Psychological Science, describes the final series of experiments conducted by Alexandra Murray during her PhD with Kia Nobre and myself at the Department of Experimental Psychology, Oxford University. Building on previous research by Kia and others in the Brain and Cognition Lab, these studies were designed to test how selective attention modulates information being held in mind, in a format known as visual short-term memory (VSTM).

VSTM is typically thought of as a temporary buffer for storing a select subset of the information extracted during perceptual processing. This buffer is usually assumed to be insulated from the constant flux of sensory input streaming into the brain, allowing the most important information to be held in mind beyond the duration of sensory stimulation. In this way, VSTM enables us to use visual information to achieve longer-term goals, helping to free us from direct stimulus-response contingencies.

Previous studies have shown that attention is important for keeping visual information in mind. For example, Ed Awh and colleagues have suggested that selective attention is crucial for rehearsing spatial information in VSTM, much as inner speech helps us keep a telephone number in mind. The results described in this paper further suggest that attention is not simply a mechanism for maintenance, but is also important for converting information into a retrievable format.

In long-term memory research, retrieval mechanisms are often considered as important to memory performance as the storage format. It is all well and good if the information is stored, but to what end if it cannot be retrieved? We think that retrieval is also important in VSTM - valuable information could be stored in short-term traces that are not directly available for memory retrieval. In this study, we show that attention can be directed to such memory traces to convert them into a format that is easier to use (i.e., retrieve). In this respect, attention can be used to restore information to VSTM for accurate recall.

We combined behavioural and psychophysical approaches to show that attention, directed to memory items about one second after they had been presented, increases the discrete probability of recall, rather than producing a more graded, perceptual-like improvement in the precision of recall judgements (for relevant methods, see also here). This combination of approaches was necessary to infer a discrete state transition between retrievable and non-retrievable formats.
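For readers unfamiliar with this kind of analysis, the distinction can be illustrated with the standard mixture-model approach used in this literature (e.g., in the spirit of Bays & Husain and related work). The Python sketch below is illustrative only - the function names, starting values, and bounds are my own, and the exact model fitted in the paper may differ - but it shows how the probability of recall and the precision of recall are estimated as separate parameters:

import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

def neg_log_likelihood(params, errors):
    # Mixture of memory-based reports (von Mises centred on the true value)
    # and random guesses (uniform on the circle); errors are in radians.
    p_mem, kappa = params
    likelihood = p_mem * vonmises.pdf(errors, kappa) + (1 - p_mem) / (2 * np.pi)
    return -np.sum(np.log(likelihood))

def fit_recall_model(errors):
    # Returns [p_mem, kappa]: the discrete probability of recall and the
    # precision (concentration) of the recalled representation.
    fit = minimize(neg_log_likelihood, x0=[0.8, 5.0], args=(errors,),
                   bounds=[(0.01, 1.0), (0.05, 100.0)])
    return fit.x

In this scheme, a cueing benefit that shows up in p_mem while kappa stays flat is the signature of attention restoring items to a retrievable state, rather than sharpening the quality of what is recalled.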

Next step? Tom Hartley asked on Twitter: what happened to the unattended items in memory? We did not address this question in this study, and the current literature presents a mixed picture, with some studies suggesting that attention during maintenance impairs memory for unattended items (see), whereas others find no such suppression effect (see). It is possible that differences in strategy could account for some of the confusion.

To test the effect on unattended items in behavioural studies, researchers typically probe memory for unattended items every so often. This presents a dilemma for the participant - sometimes uncued items will be relevant for task performance, so individuals need to decide on an optimal strategy (i.e., how much attention to allocate to uncued items, just in case...). A cleaner approach is to use brain imaging to measure the neural consequences for unattended items. The principal advantage is that you don't need to confuse your participants with a mixed message: attend to the cued item, even though we might ask you about one of the other ones!

References:

Awh & Jonides (2001). Overlapping mechanisms of attention and spatial working memory. Trends in Cognitive Sciences (pdf).

Bays & Husain (2008). Dynamic shifts of limited working memory resources in human vision. Science (pdf).

Landman, Spekreijse, & Lamme (2003). Large capacity storage of integrated objects before change blindness. Vision Research (link).

Matsukura, Luck, & Vecera (2007). Attention effects during visual short-term memory maintenance: Protection or prioritization? Perception & Psychophysics (link).

Murray, Nobre, Clark, Cravo, & Stokes (2013). Attention restores discrete items to visual short-term memory. Psychological Science (pdf).




Saturday, 23 February 2013

Biased Debugging


We all make mistakes - Russ Poldrack's recent blog post is an excellent example of how even the most experienced scientists are liable to miss a pernicious bug in complex code. It could be the mental equivalent of missing a single double negative in a 10,000-word essay, a split infinitive that Microsoft Word fails to detect, or even a bald-faced typo underlined in red that goes unnoticed by the over-familiar eyes of the author.

In the case reported by Russ last week, although there was an error in the analysis, the result fit the experimental hypothesis and slipped through undetected. It was only when someone else independently analysed the same data, but failed to reproduce the exact result, that alarm bells sounded. Luckily, the error was detected before anything was committed to print, but the warning is clear. Obviously, we need to be more careful and to cross-check our results more thoroughly.

Here, I argue that we also need to think a bit more carefully about bias in the debugging process. It was almost certainly no coincidence that Russ's undetected error yielded a result consistent with the experimental hypothesis. The debugging process is inherently biased, and will tend to leave in place false-positive findings that conform to our prior hopes and expectations.

Data analysis is noisy


Writing complex, customised analysis routines is crucial in leading-edge scientific research, but it is also error-prone. Perfect coding is as unrealistic as perfect prose - errors are simply part of the creative process. When composing a manuscript, we may have multiple co-authors to help proofread numerous versions of the paper, yet even then a few persistent grammatical errors, split infinitives, and double negatives slip through the net. Analysis scripts, however, are rarely scrutinised so closely, line by line, variable by variable.

If we are lucky, coding errors just cause our analyses to crash, or throw up a clearly outrageous result. Either way, we know that we have made a mistake, and roughly where we erred - we can then switch directly into debugging mode. But what if the erroneous result looks sensible? What if, just by chance, the spurious result supports your experimental hypothesis? What are the chances that you will continue to search for errors in your code when the results make perfect sense?

Your analysis script might contain hundreds of lines of code, and even if you do go through each one, we are all notoriously bad at detecting errors in familiar code. Just think of the last time you asked someone else to read a draft because you had become blind to typos in text you had read a million times before. By that stage, you know exactly what the text should say, and that is the only thing you can read any more. Unless you recruit fresh eyes from a willing proofreader, or your attention is directed to specific candidate errors, you will be pretty bad at spotting even blatant mistakes right in front of you.

Debugging is non-random


OK, analysis is noisy - so what? Data are noisy too; isn't it all just part of the messy business of empirical science? Perhaps, but the real problem is that this noise is not random. On the contrary, debugging is systematically biased to favour results that conform to our prior hopes and expectations - that is, our theoretical hypotheses.

If an error yields a plausible result by chance, it is far less likely to be detected and corrected than if the error throws up a crazy result. Worse, if the result is not even crazy, but just non-significant or otherwise 'uninteresting', then the dejected researcher will presumably spend longer looking for potential mistakes that could 'explain' the 'failed analysis'. In contrast, if the result looks just fine, why rock the boat? This is like a drunkard's walk that veers systematically toward the wine bottles to the left, and away from the police to the right.
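To make the asymmetry concrete, here is a toy simulation in Python. Every rate in it is invented purely for illustration; the point is only that checking code when the result is 'uninteresting', but not when it flatters the hypothesis, lets the flattering bugs survive:

import numpy as np
rng = np.random.default_rng(seed=1)

def surviving_false_positives(n=100_000, bug_rate=0.2, p_sig_if_bug=0.5,
                              p_sig_if_clean=0.05, p_fix_if_checked=0.9,
                              biased=True):
    bugs = rng.random(n) < bug_rate
    significant = np.where(bugs, rng.random(n) < p_sig_if_bug,
                           rng.random(n) < p_sig_if_clean)
    # Biased debugging: only 'uninteresting' results trigger a code review.
    checked = ~significant if biased else np.ones(n, dtype=bool)
    fixed = bugs & checked & (rng.random(n) < p_fix_if_checked)
    # A surviving bug that produced a 'significant' result is a false positive.
    return np.mean(bugs & significant & ~fixed)

print("biased debugging:  ", surviving_false_positives(biased=True))
print("uniform debugging: ", surviving_false_positives(biased=False))

In this caricature, the biased strategy leaves every flattering bug in place, whereas applying the same scrutiny to every analysis removes most of them.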

More degrees of freedom for generating false positives


With recent interest in the myriad bad practices that boost false-positive rates far beyond the assumed statistical probabilities (e.g., see Alok Jha's piece in the Guardian), I suggest that biased debugging could also contribute to the proliferation of false positives in the literature, especially in neuroimaging. Biased debugging is perhaps more insidious, because the pull towards false positives is not as obvious as it is with cherry-picking, data peeking, etc. Moreover, it is less obvious how to avoid the bias in debugging practices. As Russ notes in his post, code sharing is a good start, but it is not sufficient - errors can remain undetected even in shared code, especially if it is not widely used. The best possible safeguard is independent reanalysis - reproducing identical results using independently written analysis scripts. In this respect, it is more important to share the data than the analysis scripts, which should not simply be re-run with blind faith!
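As a trivial illustration of what that check might look like (the file names and tolerance below are placeholders, not anything from Russ's post): two people analyse the same shared data with independently written pipelines, and only the final numbers are compared.

import numpy as np
# Key statistics produced by two independently written analysis scripts.
result_a = np.loadtxt("reanalysis_a.csv", delimiter=",")
result_b = np.loadtxt("reanalysis_b.csv", delimiter=",")
if np.allclose(result_a, result_b, atol=1e-6):
    print("Independent reanalysis reproduces the result.")
else:
    print("Discrepancy detected - time to debug both pipelines.")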


See also: http://www.russpoldrack.org/2013/02/anatomy-of-coding-error.html