
Monday, 1 June 2015

Research Briefing: reward-guided working memory

Research Briefing, by George Wallis

In almost any situation, there are hundreds of things (or ‘stimuli’) that could attract our attention - just count the number of objects you can see from where you are now.  In order to get on with life and avoid total mental chaos, we have to be extremely selective about what we process – most stimuli are essentially ignored.  It is a long-established finding that the number of things we can hold in mind (or hold in ‘working memory’) is really very small – about 2-8 depending on the experiment we run.  Clearly we possess powerful mechanisms that let us filter in only certain stimuli.  The ways in which we select what gets into working memory are a much-studied topic in psychology, and psychologists often run experiments in which they present ‘cues’ (e.g. arrows) that tell people which items in the experiment to ‘select’.  This is the experimental equivalent of pointing something out with your finger.
Harry Styles of One Direction
giving an attentional cue to the crowd

However, most of the time, this isn’t how we select what gets into working memory: in the real world, people aren’t on hand to continuously tell us what to pay attention to.  One real-world factor psychologists think may be important in determining whether an item gets into memory is its ‘reward value’.  For example, a twenty-pound note is more likely to grab our attention than a piece of scrap paper, even if they are about the same size and appearance.  Our paper recently published in Visual Cognition (Wallis, Stokes, Arnold, & Nobre, 2015) describes the results of two experiments in which we looked at how reward value affects the likelihood that a stimulus will get into working memory.

Anderson and colleagues performed experiments showing that items displayed in colours that the experimenters had previously associated with a high monetary reward ‘grab’ attention as people look around a visual scene for a particular target, slowing them down slightly (e.g. Anderson, Laurent, & Yantis, 2011).  We adapted their experiment to look specifically at memory, presenting four nonsense-shapes briefly, and then testing how well people remembered which shapes had been presented a few seconds later.  Before running the memory task we associated some of the shapes with high reward and some with low reward. 

An experimental trial from our shapes experiment

When we asked people to remember four shapes, some of which were worth more than others, they didn’t remember the high value items any better than the low value items.  This was a surprise!  On the basis of the paper by Anderson, we expected the high value items would be better remembered – after all, they ought to grab attention.  However, we did find a curious effect.  If all of the shapes in a display were high-value (a ‘high value trial’), then any one of them was remembered better than if all the shapes were low value (a ‘low value trial’).  More curious yet, if half the shapes were high value, and half were low value, any shape was remembered about equally well – but they were remembered a little less well than when all the items were high value, and a little better than when all the items were low value.

We reasoned that this could have been because people simply made more effort when the shapes in the memory array were higher in value, on average, and so they did a bit better.  However, a more specific (and interesting) explanation was also possible. We know that a chemical in the brain, dopamine, is involved in processing reward – in studies on monkeys, where the dopamine neurons are measured directly, experimenters see ‘pulses’ of dopamine release when rewarded items are presented.  We also know that the prefrontal cortex (PFC) – the part of the brain thought to be most important in controlling working memory – is ‘soaked’ in dopamine: dopamine is released throughout the PFC.  Some have suggested that the dopamine pulses ‘open the gate’ to working memory, and the more dopamine released at a given time, the wider the gate is opened (Braver & Cohen, 2000).
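For the more technically minded reader, here is a toy sketch of that gating idea (our own illustration, not a model taken from Braver & Cohen): the average value of the display sets the size of a hypothetical dopamine pulse, which in turn sets the probability that any given item makes it through the working-memory gate. The numbers and the value-to-probability mapping are arbitrary assumptions.

    import random

    def encode_items(n_items, mean_item_value):
        """Toy gating sketch: a bigger reward-related dopamine pulse opens the
        working-memory gate wider, so each item is more likely to be encoded.
        The mapping from value to probability is an arbitrary assumption."""
        dopamine_pulse = 0.3 + 0.4 * mean_item_value   # pulse size scales with average value
        p_encode = min(1.0, dopamine_pulse)            # wider gate = higher encoding probability
        return sum(random.random() < p_encode for _ in range(n_items))

    random.seed(1)
    for label, value in [("all high value", 1.0), ("mixed value", 0.5), ("all low value", 0.0)]:
        encoded = [encode_items(4, value) for _ in range(10000)]
        print(f"{label}: items encoded on average = {sum(encoded) / len(encoded):.2f}")

On this kind of account, what matters in our first experiment is the overall value of the display, which is why individual high-value items gain no advantage over their low-value neighbours when they are all shown together.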

We couldn’t record dopamine firing in our volunteers, so we couldn’t test this possibility directly.  However, it made us wonder – what would happen if instead of showing our memory items all at the same time, we presented them very quickly one after the other, in a row?  If subjects simply made more effort on those trials where they had encountered a higher value item, then we would still expect all of the items we showed to benefit from this.  However, we know that dopamine pulses are only a fraction of a second long (about a third of a second).  If we presented each item for about this length of time, one after the other, then a ‘reward pulse’ might be able to ‘pick out’ the high reward item, and not the other less valuable items.   So, we ran the experiment, with a few adaptations: rather than shapes, we used coloured lines, presented one after the other, and asked people to remember the orientation of the lines.  Certain colours were given high or low reward values.
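To see why the sequential design might separate these possibilities, here is a rough timing sketch (again our own illustration; the durations are made-up approximations, not the exact experimental parameters): if each item is on screen for roughly the length of a dopamine pulse, a pulse triggered by the high-value item overlaps almost entirely with that item and barely at all with its neighbours.

    # Toy timing sketch: sequential presentation versus a brief 'reward pulse'.
    # Durations are illustrative assumptions, not the study's actual parameters.
    item_duration = 0.30    # seconds each item is on screen
    pulse_duration = 0.33   # approximate length of a dopamine pulse
    items = ["low", "low", "HIGH", "low"]

    # suppose the pulse is triggered at the onset of the high-value item
    pulse_onset = items.index("HIGH") * item_duration

    for i, item in enumerate(items):
        onset, offset = i * item_duration, (i + 1) * item_duration
        overlap = max(0.0, min(offset, pulse_onset + pulse_duration) - max(onset, pulse_onset))
        print(f"item {i + 1} ({item}): {overlap / item_duration:.0%} of its presentation falls inside the pulse")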

An experimental trial from our 
second, sequential experiment

We found that indeed, only the high-value item in this experiment was more likely to be encoded, and not its near-neighbours.  This doesn’t prove that dopamine pulses are responsible – we didn’t measure our volunteers’ dopamine neurons – but it does suggest that the reward effect is quite tightly localized in time: a ‘pulse’ tied to an item, not a more general ‘making an effort’ effect.  This was an intriguing finding and it opens up several questions.  Firstly, is this effect really down to dopamine, like we speculate?  To find out, we’d need to see what dopamine neurons are doing at the same time as running the task.  Interestingly, there is some evidence that the diameter of people’s pupils responds rapidly to dopamine release, so maybe measuring pupil diameter would be a way of getting some more evidence without having to get inside the brain. 

Secondly – what’s the point of this rather weird-seeming ‘pulse’ mechanism?  And why would it be useful – like in our first experiment – for unrewarded items to get ‘caught by the pulse’?  Our speculative answer to this question is that our experiment was unnatural – we asked our volunteers to keep staring at the centre of the screen and flashed up the shapes all together, just for a moment.  They had no chance to move their eyes (indeed we deliberately tried to prevent that!).  However, in more natural settings, our eyes constantly flit around the scene.  This is pretty hard to notice in yourself but watch a friend’s eyes for a while (without freaking them out too much) – they jump from place to place continually, ‘fixating’ first this object, then that.  In fact they move about 3 or 4 times per second, jumping from looking at one item to another.  If our putative working-memory-updating pulses were ‘tied’ to these fixations this might provide a mechanism by which the more rewarded items in the scene were more likely to enter memory.

References:

Anderson, B. A., Laurent, P. A., & Yantis, S. (2011). Value-driven attentional capture. Proceedings of the National Academy of Sciences, 108(25), 10367–10371. doi:10.1073/pnas.1104047108
Braver, T. S., & Cohen, J. D. (2000). On the control of control: The role of dopamine in regulating prefrontal function and working memory. Control of Cognitive Processes: Attention and Performance XVIII, 713–737.

Wallis, G., Stokes, M. G., Arnold, C., & Nobre, A. C. (2015). Reward boosts working memory encoding over a brief temporal window. Visual Cognition, 23(1-2), 291–312. doi:10.1080/13506285.2015.1013168

Friday, 24 April 2015

Research Briefing: organising the contents of working memory

Figure 1. Nicholas Myers
Research Briefing, by Nicholas Myers

Everyone has been in this situation: you are stuck in an endless meeting, and a colleague drones on about a topic of marginal relevance. You begin to zone out and focus on the art hanging in your boss’s office, when suddenly you hear your name mentioned. On high alert, you shift back to the meeting and scramble to retrieve your colleague’s last sentences. Miraculously, you are able to retrieve a few key words – they must have entered your memory a moment ago, but would have been quickly forgotten if hearing your name had not cued them as potentially vital bits of information.

This phenomenon, while elusive in everyday situations, has been studied experimentally for a number of years now: cues indicating the relevance of a particular item in working memory have a striking benefit to our ability to recall it, even if the cue is presented after the item has already entered memory. See our previous Research Briefing on how retrospective cueing can restore information to the focus of attention in working memory.

In a new article, published in the Journal of Cognitive Neuroscience, we describe a recent experiment that set out to add to our expanding knowledge of how the brain orchestrates these retrospective shifts of attention. We were particularly interested in the potential role of neural synchronization in the 10 Hz (alpha) band, because these oscillations are important in similar prospective shifts of attention.

Figure 2. Experimental Task Design. [from Myers et al, 2014]
We wanted to examine the similarity of alpha-band responses (and other neural signatures of the engagement of attention) to both retrospective and prospective attention shifts. We needed to come up with a new task that allowed for this comparison. On each trial, volunteers first memorized two visual stimuli. Two seconds later, a second set of two stimuli appeared, so that a total of four stimuli was kept in mind. After a further delay, participants recalled one of the four items.

In between the presentation of the first and the second set of two items, we sometimes presented a cue: this cue indicated which of the four items would likely be tested at the end of the trial. Crucially, this cue could have either a prospective or a retrospective function, depending on whether it pointed to a location where an item had already been presented (a retrospective cue, or retrocue), or to a location where a stimulus was yet to appear (a prospective cue, or precue). This allowed us to examine neural responses to attention-guiding cues that were identical in every respect except their forwards- or backwards-looking nature. See Figure 2 for a task schematic.
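For readers who like to see a design written out explicitly, here is a minimal sketch of the trial structure as described above (the timings and labels are illustrative assumptions, not the exact parameters of the study):

    # Illustrative timeline for the precue/retrocue task described above.
    # Timings and labels are assumptions for the sketch, not the study's exact parameters.
    def make_trial(cue_type):
        """cue_type: 'retrocue' points to an item already shown in the first array,
        'precue' points to the location of an item in the upcoming second array,
        'neutral' carries no information about the tested item."""
        return [
            (0.0, "first array", "memorize 2 items"),
            (2.0, "cue", cue_type),
            (4.0, "second array", "memorize 2 more items"),
            (6.0, "delay", "hold all 4 items in mind"),
            (8.0, "probe", "recall one of the 4 items"),
        ]

    for onset, event, description in make_trial("retrocue"):
        print(f"{onset:>4.1f} s  {event:12s} {description}")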

Figure 3. Results: retro-cueing and pre-cueing
trigger different attention-related ERPs.
[from Myers et al, 2014]
We found marked differences in event-related potential (ERP) profiles between the precue and retrocue conditions. We found evidence that precues primarily generate an anticipatory shift of attention toward the location of an upcoming item: potentials just before the expected appearance of the second set of stimuli reflected the location where volunteers were attending. These included the so-called early directing attention negativity (or 'EDAN') and the late directing attention-related positivity (or 'LDAP'; see Figure 3, middle panel; and see here for a review of attention-related ERPs). Retrocues elicited a different pattern of ERPs that was compatible with an early selection mechanism, but not with stimulus anticipation (i.e., no LDAP, see Figure 3, upper panel). The latter seems plausible, since the cued information was already in memory, and upcoming stimuli were therefore not deserving of attention. In contrast to the distinct ERP patterns, alpha band (8-14 Hz) lateralization was indistinguishable between cue types (reflecting, in both conditions, the location of the cued item; see Figure 4).
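For readers curious what ‘alpha-band lateralization’ means in practice, the sketch below computes the kind of index that is typically used, on synthetic data with plain NumPy/SciPy. The sensor pair, filter settings and normalisation are generic assumptions, not the exact pipeline from our paper.

    import numpy as np
    from scipy.signal import welch

    # Synthetic EEG from a posterior sensor pair, contra- vs. ipsilateral to the cued
    # location; in a real analysis these signals would come from the recording.
    fs = 250                                   # sampling rate in Hz (an assumption)
    t = np.arange(0, 1.0, 1 / fs)
    rng = np.random.default_rng(0)
    contra = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)  # suppressed alpha
    ipsi   = 1.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)  # stronger alpha

    def alpha_power(x):
        """Mean power spectral density in the 8-14 Hz band."""
        freqs, psd = welch(x, fs=fs, nperseg=fs)
        band = (freqs >= 8) & (freqs <= 14)
        return psd[band].mean()

    # A common lateralization index: (contra - ipsi) / (contra + ipsi); negative values
    # indicate relatively lower alpha power contralateral to the cued item.
    c, i = alpha_power(contra), alpha_power(ipsi)
    print(f"alpha lateralization index: {(c - i) / (c + i):.2f}")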

Figure 4. Results: retro-cueing and pre-cueing trigger similar patterns
of de-synchronisation in low frequency activity (alpha band at ~10Hz).
[from Myers et al, 2014]
What did we learn from this study? Taken together with the ERP results, it seems that alpha-band lateralization can have two distinct roles: after a precue it likely enables anticipatory attention. After a retrocue, however, the alpha-band response may reflect the controlled retrieval of a recently memorized piece of information that has turned out to be more useful than expected, without influencing the brain’s response to upcoming stimuli.

It seems that our senses are capable of storing a limited amount of information on the off chance that it may suddenly become relevant. When this turns out to be the case, top-down control allows us to pick out the relevant information from among all the items quietly rumbling around in sensory brain regions.

Many interesting questions remain that we were not able to address in this study. For example, how do cortical areas responsible for top-down control activate in response to a retrocue, and how do they shuttle cued information into a state that can guide behaviour? 



Key Reference: 

Myers, Walther, Wallis, Stokes & Nobre (2014) Temporal Dynamics of Attention during Encoding versus Maintenance of Working Memory: Complementary Views from Event-related Potentials and Alpha-band Oscillations. Journal of Cognitive Neuroscience (Open Access)

Sunday, 24 February 2013

Research Briefing: Attention restores forgotten items to visual short-term memory

Our paper, just out in Psychological Science, describes the final series of experiments conducted by Alexandra Murray during her PhD with Kia Nobre and myself at the Department of Experimental Psychology, Oxford University. Building on previous research by Kia and others in the Brain and Cognition Lab, these studies were designed to test how selective attention modulates information being held in mind, in a format known as visual short-term memory (VSTM).

Typically, VSTM is thought of as a temporary buffer for storing a select subset of information extracted during perceptual processing. This buffer is generally assumed to be insulated from the constant flux of sensory input streaming continuously into the brain, allowing the most important information to be held in mind beyond the duration of sensory stimulation. This way, VSTM enables us to use visual information to achieve longer-term goals, helping to free us from direct stimulus-response contingencies.

Previous studies have shown that attention is important for keeping visual information in mind. For example, Ed Awh and colleagues have suggested that selective attention is crucial for rehearsing spatial information in VSTM, just like inner speech helps us keep a telephone number in mind. Our results described in this paper further suggest that attention is not simply a mechanism for maintenance, but is also important for converting information into a retrievable format.

In long-term memory research, retrieval mechanisms are often considered as important to memory performance as the storage format. It is all well and good if the information is stored, but to what end if it cannot be retrieved? We think that retrieval is also important in VSTM - valuable information could be stored in short-term traces that are not directly available for memory retrieval. In this study, we show that attention can be directed to such memory traces to convert them into a format that is easier to use (i.e., retrieve). In this respect, attention can be used to restore information to VSTM for accurate recall.

We combined behavioural and psychophysical approaches to show that attention, directed to memory items about one second after they had been presented, increases the probability that an item can be recalled at all, rather than improving the precision of the recall judgement (for relevant methods, see also here). This combination of approaches was necessary to infer a discrete state transition between retrievable and non-retrievable formats.
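A standard way to make this distinction is to model the distribution of recall errors as a mixture of on-target responses and random guesses (for example a von Mises component plus a uniform component), and ask which parameter changes with cueing. The sketch below fits such a mixture to synthetic data; it illustrates the general approach rather than the exact analysis in our paper, and all the parameter values are made up.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import vonmises

    rng = np.random.default_rng(0)

    # Synthetic recall errors (radians): 70% of responses are centred on the target
    # with some precision, 30% are random guesses. These proportions are arbitrary.
    n = 2000
    on_target = rng.vonmises(0.0, 8.0, int(0.7 * n))
    guesses = rng.uniform(-np.pi, np.pi, n - on_target.size)
    errors = np.concatenate([on_target, guesses])

    def neg_log_likelihood(params):
        p_mem, kappa = params
        likelihood = p_mem * vonmises.pdf(errors, kappa) + (1 - p_mem) / (2 * np.pi)
        return -np.log(likelihood).sum()

    fit = minimize(neg_log_likelihood, x0=[0.5, 4.0],
                   bounds=[(0.01, 0.99), (0.1, 100.0)])
    p_mem, kappa = fit.x
    print(f"estimated probability of recall: {p_mem:.2f}, precision (kappa): {kappa:.1f}")

In this framework, a cue that restores an item to a retrievable state should raise the estimated probability of recall while leaving the precision parameter largely unchanged, which is the pattern we describe above.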

Next step? Tom Hartley asked on twitter: what happened to the unattended items in memory? We did not address this question in this study, and the current literature presents a mixed picture, with some studies suggesting that attention during maintenance impairs memory for unattended items (see), whereas others find no such suppression effect (see). It is possible that differences in strategy could account for some of the confusion.

To test the effect on unattended items in behavioural studies, researchers typically probe memory for unattended items every so often. This presents the participant with a contradiction - sometimes uncued items will be relevant for task performance, so individuals need to decide on an optimal strategy (i.e., how much attention to allocate to uncued items, just in case...). A cleaner approach is to use brain imaging to measure the neural consequence for unattended items. The principal advantage is that you don't need to confuse your participants with a mixed message: attend to the cued item, even though we might ask you about one of the other ones!

References:

Awh & Jonides (2001) Overlapping mechanisms of attention and spatial working memory. TICS (pdf)

Bays & Husain (2008) Dynamic shifts of limited working memory resources in human vision. Science (pdf)

Landman, Spekreijse, & Lamme (2003). Large capacity storage of integrated objects before change blindness. Vision Research (link).

Matsukura, Luck, & Vecera (2007). Attention effects during visual short-term memory maintenance: Protection or prioritization? Perception & Psychophysics (link).

Murray, Nobre, Clark, Cravo & Stokes (2013) Attention Restores Discrete Items to Visual Short-Term Memory. Psychological Science (pdf)




Tuesday, 31 July 2012

Research Meeting: Visual Search and Selective Attention

Just returned from a really great meeting at the scenic lakeside (“Ammersee”) location near Munich, Germany. The third Visual Search and Selective Attention symposium was hosted and organised by Hermann Müller and Thomas Geyer (Munich), and supported by the Munich Center for Neurosciences (MCN) and the German Science Foundation (DFG). The stated aim of the meeting was:
"to foster an interdisciplinary dialogue in order to identify important shared issues in visual search and selective attention and discuss ways of how these can be resolved using convergent methodologies: Psychophysics, mental chronometry, eyetracking, ERPs, source reconstruction, fMRI, investigation of (neuropsychological) impairments, TMS and computational modeling."
The meeting was held over three days, and organised around four general themes:

- Pre-attentive and post-selective processing in visual search (Keynote: Hermann Müller)
- The role of (working) memory guidance in visual search (Keynote: Chris Olivers, Martin Eimer)
- Brain mechanisms of visual search (Keynote: Glyn Humphreys)
- Modelling visual search (Keynote: Jeremy Wolfe).

Poster sessions gave grad students (including George Wallis and Nick Myers) a great chance to chat about their research with the invited speakers as well as other students tackling similar issues. 

Of course, a major highlight was the Bavarian beer. Soeren Kyllingsbaek was still to give his talk, presumably explaining the small beer in hand!

More photos of the meeting can be found here.

***New***

All presentations can be downloaded from here

Monday, 7 May 2012

Research Meeting: Memory and Attention


Last month saw a meeting of academics from across the UK, and abroad, exploring a common theme: the interaction between attention and memory. 

These are core concepts in cognitive neuroscience, with a rich tradition of research stretching from the seminal behavioural and neuropsychological studies of the first half of the last century to more contemporary cognitive neuroscience, with all the bells and whistles of brain imaging and stimulation. Yet, for all these developments, relatively little is known about how these core functions interact. This was the motivation for Duncan Astle (MRC Cognition and Brain Sciences Unit, Cambridge) to propose to the British Academy a two-day meeting between leading academics interested in the interaction between attention and memory. Not to mention, they are just a fun group of people, and therefore a good excuse for a get-together in London...

Speakers were invited to present their latest research concerning links between attention and memory. Although the scope for “memory” is broad (i.e., iconic memory, working memory, long-term memory), most delegates took the opportunity to focus on short-term and/or working memory. Check out the website for more information on individual presentations, including audio and video downloads!

Research Briefing: How memory influences attention

Background


In the late 19th Century, the great polymath Hermann von Helmholtz eloquently described how our past experiences shape how we see the world. Given the optical limitations of the eye, he concluded that the rich experience of vision must be informed by a lot more than meets the eye. In particular, he argued that we use our past experiences to infer the perceptual representation from the imperfect clues that pass from the outside world to the brain. 


Consider the degraded black and white image below. It is almost impossible to interpret, until you learn that it is a Dalmatian. Now it is almost impossible not to see the dog in dappled light.

More than one hundred years after Helmholtz, we are now starting to understand the brain mechanisms that mediate this interaction between memory and perception. One important direction follows directly from Helmholtz's pioneering work. Often couched in more contemporary language, such as Bayesian inference, vision scientists are beginning to understand how our perceptual experience is determined by the interaction between sensory input and the perceptual knowledge established through past experience in the world.
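As a very simple illustration of the Bayesian idea, the sketch below combines a noisy sensory measurement with a prior expectation built from past experience; the percept ends up as a precision-weighted compromise between the two. All of the numbers are made up for illustration.

    import numpy as np

    # Minimal Gaussian cue-combination sketch: the percept is a precision-weighted
    # average of noisy sensory evidence and a prior learned from past experience.
    # All values are illustrative assumptions.
    prior_mean, prior_sd = 0.0, 1.0        # expectation built from past experience
    sensory_obs, sensory_sd = 2.0, 2.0     # noisy measurement arriving from the eye

    prior_precision = 1 / prior_sd ** 2
    sensory_precision = 1 / sensory_sd ** 2

    posterior_mean = (prior_precision * prior_mean + sensory_precision * sensory_obs) \
                     / (prior_precision + sensory_precision)
    posterior_sd = np.sqrt(1 / (prior_precision + sensory_precision))

    print(f"percept (posterior mean): {posterior_mean:.2f} +/- {posterior_sd:.2f}")
    # The noisier the sensory input, the more the percept is pulled toward the prior.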

Prof Nobre (cognitive neuroscientist, University of Oxford) has approached this problem from a slightly different angle. Rather than ask how memory shapes the interpretation of sensory input, she took one step back to ask how past experience prepares the visual system to process memory-predicted visual input. With this move, Nobre's research draws on a rich history of cognitive neuroscientific research in attention and long-term memory. 

Although both attention and memory have been thoroughly studied in isolation, very little is actually known about how these two core cognitive functions interact in everyday life. In 2006, Nobre and colleagues published the results of a brain imaging experiment designed to identify the brain areas involved in memory-guided attention (Summerfield et al., 2006, Neuron). Participants in this experiment first studied a large number of photographs depicting natural everyday scenes. The instruction was to find a small target object embedded in each scene, very much like the classic Where's Wally game.


After performing the search task a number of times, participants learned the location of the target in each scene. When Nobre and her team tested their participants again on a separate day, they found that people were able to use the familiar scenes to direct attention to the previously learned target location in the scene.


Next, the research team repeated this experiment, but this time changes in brain activity were measured in each participant while they used their memories to direct the focus of their attention. With functional magnetic resonance imaging (fMRI), the team found an increase in neural activity in brain areas associated with memory (especially the hippocampus) as well as a network of brain areas associated with attention (especially parietal and prefrontal cortex). 

This first exploration of memory-guided attention (1) confirmed that participants can use long-term memory to guide attention, and (2) further suggested that the brain areas that mediate long-term memory could interact with attention-related areas to support this process. However, due to methodological limitations at the time, there was no way to separate activity associated with memory-guided preparatory attention from the consequences of past experience on perception (e.g., Helmholtzian inference). This was the aim of our follow-up study.

The Current Study: Design and Results 


In collaboration with Nobre and colleagues, we combined multiple brain imaging methods to show that past experience can change the activation state of visual cortex in preparation for memory-predicted input (Stokes, Atherton, Patai & Nobre, 2012, PNAS). Using electroencephalography (EEG), we demonstrated that memories can reduce inhibitory neural oscillations in visual cortex at memory-specific spatial locations.

With fMRI, we further showed that this change in electrical activity is accompanied by an increase in activity in the brain areas that represent the memory-predicted spatial location. Together, these results provide key convergent evidence that past experience alone can shape activity in visual cortex to optimise processing of memory-predicted information.


Finally, we were also able to provide the most compelling evidence to date that memory-guided attention is mediated by interactions between the hippocampus, prefrontal cortex and parietal cortex. However, further research is needed to verify this speculation. In particular, we cannot yet confirm whether activation of the attention network is necessary for memory-guided preparation of visual cortex, or whether a direct pathway between the hippocampus and visual cortex is sufficient for the changes in preparatory activity observed with fMRI and EEG. This is now the focus of on-going research.