Wednesday, 29 April 2015

Peering directly into the human brain

With the rise of non-invasive brain imaging such as functional magnetic resonance imaging (fMRI), researchers have been granted unprecedented access to the inner workings of the brain. It is now relatively straightforward to put your experimental subjects in an fMRI machine and measure activity 'blobs' in the brain. This approach has undoubtedly revolutionised cognitive neuroscience, and looms very large in people's idea of contemporary brain science. But fMRI has its limitations. As every student in the business should know, fMRI has poor temporal resolution. fMRI is like a very long-exposure photograph: the activity snapshot actually reflects an average over many seconds. Yet the mind operates at the millisecond scale, so neural dynamics are simply blurred with fMRI. However, probably more important is a deeper, theoretical limit.
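The blur is easy to demonstrate with a toy simulation: treat the measured signal as a running average of the underlying neural activity. (This is a deliberate cartoon – real BOLD imaging involves convolution with a haemodynamic response function, and the window length below is an arbitrary illustration.)

```python
def bold_like_blur(neural, window):
    """Crude stand-in for fMRI's temporal blur: each output sample is
    the mean of the neural activity over the preceding `window` samples
    (a boxcar average, not a real haemodynamic response function)."""
    out = []
    for i in range(len(neural)):
        lo = max(0, i - window + 1)
        out.append(sum(neural[lo:i + 1]) / (i + 1 - lo))
    return out

# two brief neural events, 50 samples apart...
neural = [0.0] * 1000
neural[400] = neural[450] = 1.0
# ...are smeared into one indistinguishable plateau by the long window
bold = bold_like_blur(neural, 200)
```

After the averaging, the two events merge into a single flat bump: the millisecond-scale structure is gone, which is exactly the temporal-resolution problem described above.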

ECoG (Wiki Commons)
Electricity is the language of the brain, but fMRI only measures changes in blood flow that are coupled to these electrical signals. This coupling is complex, so fMRI can only provide a relatively indirect measure of neural activity. Electroencephalography (EEG) is a classic method for measuring actual electrical activity. It has been around for more than 100 years but, again, as every student should know: EEG has poor spatial resolution. It is difficult to know exactly where the activity is coming from. Magnetoencephalography (MEG) is a close cousin of EEG. Developed more recently, MEG is better at localising the source of brain activity. But the fundamental laws of physics mean that any measure of electromagnetic activity from outside the head will always be spatially ambiguous (the inverse problem). The best solution is to record directly from the surface of the brain. Here we discuss the unique opportunities that arise in the clinic to measure electrical activity directly from the human brain using electrocorticography (ECoG).

Epilepsy can be a seriously debilitating neurological condition. Although the symptoms can often be managed with medication, some patients continue to have major seizures despite a cocktail of anti-epileptic drugs. So-called intractable epilepsy affects every aspect of life, and can even be life-threatening. Sometimes the only option is neurosurgery: careful removal of the specific brain area responsible for seizures can dramatically improve quality of life.

Neurosurgery
Psychology students should be familiar with the case of Henry Molaison (aka HM). Probably the most famous neuropsychology patient in history, HM suffered intractable epilepsy until the neurosurgeon William Scoville removed two large areas of tissue in the medial temporal lobe, including the left and right hippocampus. This pioneering surgery successfully treated his epilepsy, but this is not why the case became so famous in neuropsychology. Unfortunately, the treatment also left HM profoundly amnesic. It turns out that removing both sides of the medial temporal lobe effectively removes the brain circuitry for forming new memories. This lesson in functional neuroanatomy is what made the case of HM so important, but there was also an important lesson for neurosurgery – be careful which parts of the brain you remove!

The best way to plan a neurosurgical resection of epileptic tissue is to identify exactly where the seizures are coming from, and the best way to map out the affected region is to record activity directly from the surface of the brain. This typically involves neurosurgical implantation of recording electrodes directly in the brain to be absolutely sure of the exact location of the seizure focus. Activity can then be monitored over a number of days, or even weeks, for seizure-related abnormalities. This invasive procedure allows neurosurgeons to monitor activity in specific areas that could be the source of epileptic seizures, but it also provides a unique opportunity for neuroscientific research.

From Pasley et al., 2012 PLoS Biol. Listen to audio here
During the clinical observation period, patients are typically stuck on the hospital ward with electrodes implanted in their brain, literally waiting for a seizure to happen so that the epileptic brain activity can be ‘caught on camera’. This observation period provides a unique opportunity to also explore healthy brain function. If patients are interested, they can perform some simple experiments using computer-based tasks to determine how different parts of the brain perform different functions. Previous studies from some of the great pioneers in neuroscience mapped out the motor cortex by stimulating different brain areas during neurosurgery. Current experiments continue in this tradition to explore less well-charted brain areas involved in high-level thought. For example, in a recent study from Berkeley, researchers used novel brain decoding algorithms to convert brain activity associated with internal speech into actual words. This research helps us understand the fundamental neural code for the internal dialogue that underlies much of conscious thought, but it could also help develop novel tools for providing communication to those otherwise unable to generate natural speech.


From Dastjerdi et al 2013 Nature Communications (watch video below)

At Stanford, researchers were recently able to identify a brain area that codes for numbers and quantity estimation (read the study here). Critically, they were even able to show that this area is engaged during everyday numerical cognition, rather than only under their specific experimental conditions. See video below.



The great generosity of these patients vitally contributes to the broader understanding of brain function. They have dedicated their valuable time in otherwise adverse circumstances to help neuroscientists explore the very frontiers of the brain. These patients are true pioneers.





Key References

Dastjerdi, M., Ozker, M., Foster, B. L., Rangarajan, V., & Parvizi, J. (2013). Numerical processing in the human parietal cortex during experimental and natural conditions. Nat Commun, 4, 2528.


Pasley, B. N., David, S. V., Mesgarani, N., Flinker, A., Shamma, S. A., Crone, N. E., Knight, R. T., & Chang, E. F. (2012). Reconstructing speech from human auditory cortex. PLoS Biol, 10, e1001251.


Video showing a number-processing brain area engaged in everyday use:



Friday, 24 April 2015

Research Briefing: organising the contents of working memory

Figure 1. Nicholas Myers
Research Briefing, by Nicholas Myers

Everyone has been in this situation: you are stuck in an endless meeting, and a colleague drones on about a topic of marginal relevance. You begin to zone out and focus on the art hanging in your boss’s office, when suddenly you hear your name mentioned. On high alert, you shift back to the meeting and scramble to retrieve your colleague’s last sentences. Miraculously, you are able to retrieve a few key words – they must have entered your memory a moment ago, but would have been quickly forgotten if hearing your name had not cued them as potentially vital bits of information.

This phenomenon, while elusive in everyday situations, has been studied experimentally for a number of years now: cues indicating the relevance of a particular item in working memory have a striking benefit to our ability to recall it, even if the cue is presented after the item has already entered memory. See our previous Research Briefing on how retrospective cueing can restore information to the focus of attention in working memory.

In a new article, published in the Journal of Cognitive Neuroscience, we describe a recent experiment that set out to add to our expanding knowledge of how the brain orchestrates these retrospective shifts of attention. We were particularly interested in the potential role of neural synchronization of 10 Hz (or alpha-band) oscillations, because they are important in similar prospective shifts of attention.

Figure 2. Experimental Task Design. [from Myers et al, 2014]
We wanted to examine the similarity of alpha-band responses (and other neural signatures of the engagement of attention) to both retrospective and prospective attention shifts. We needed to come up with a new task that allowed for this comparison. On each trial in our task, experiment volunteers first memorized two visual stimuli. Two seconds later, a second set of two stimuli appeared, so that a total of four stimuli were kept in mind. After a further delay, participants recalled one of the four items.

In between the presentation of the first and the second set of two items, we sometimes presented a cue: this cue indicated which of the four items would likely be tested at the end of the trial. Crucially, this cue could have either a prospective or a retrospective function, depending on whether it pointed to a location where an item had already been presented (a retrospective cue, or retrocue), or to a location where a stimulus was yet to appear (a prospective cue, or precue). This allowed us to examine neural responses to attention-guiding cues that were identical with respect to everything but their forwards- or backwards-looking nature. See Figure 2 for a task schematic.
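For the programmatically minded, the cue logic can be sketched in a few lines of Python. (The labels, locations, and always-valid cue below are simplifications for illustration – in the real task the cue was only probabilistically valid.)

```python
import random

def make_trial(cue_type, rng=None):
    """Build one trial of a simplified pre/retro-cue task.  Four
    locations hold four stimuli: two appear before the cue, two after.
    A retrocue points back at an item already in memory; a precue points
    forward to an item yet to appear.  For simplicity the cued location
    is always the probed one here."""
    rng = rng or random.Random()
    locations = [0, 1, 2, 3]
    rng.shuffle(locations)
    first_pair, second_pair = locations[:2], locations[2:]
    if cue_type == "retrocue":
        cued = rng.choice(first_pair)    # backwards-looking: item in memory
    elif cue_type == "precue":
        cued = rng.choice(second_pair)   # forwards-looking: item to come
    else:
        cued = None                      # neutral trial: no information
    probed = cued if cued is not None else rng.choice(locations)
    return {"first_pair": first_pair, "second_pair": second_pair,
            "cue_type": cue_type, "cued_location": cued,
            "probed_location": probed}
```

The key design point is visible in the code: retrocues and precues are built identically except for which pair of locations they index, so any difference in the neural response must reflect their backwards- versus forwards-looking function.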

Figure 3. Results: retro-cueing and pre-cueing
trigger different attention-related ERPs.
[from Myers et al, 2014]
We found marked differences in event-related potential (ERP) profiles between the precue and retrocue conditions. We found evidence that precues primarily generate an anticipatory shift of attention toward the location of an upcoming item: potentials just before the expected appearance of the second set of stimuli reflected the location where volunteers were attending. These included the so-called early directing attention negativity (or 'EDAN') and the late directing attention-related positivity (or 'LDAP'; see Figure 3, middle panel; and see here for a review of attention-related ERPs). Retrocues elicited a different pattern of ERPs that was compatible with an early selection mechanism, but not with stimulus anticipation (i.e., no LDAP, see Figure 3, upper panel). The latter seems plausible, since the cued information was already in memory, and upcoming stimuli were therefore not deserving of attention. In contrast to the distinct ERP patterns, alpha band (8-14 Hz) lateralization was indistinguishable between cue types (reflecting, in both conditions, the location of the cued item; see Figure 4).
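The lateralization measure itself is conceptually simple. One common formulation – a sketch of the general idea, not necessarily the exact normalisation used in the paper – is the normalised alpha-power difference between the hemisphere contralateral and ipsilateral to the cued location:

```python
def lateralisation_index(contra_power, ipsi_power):
    """Normalised alpha-power difference between the hemispheres
    contralateral and ipsilateral to the cued location.  Negative values
    indicate relative desynchronisation contralateral to the cue -- the
    signature seen here for both precues and retrocues."""
    return (contra_power - ipsi_power) / (contra_power + ipsi_power)

# contralateral alpha suppressed relative to ipsilateral:
li = lateralisation_index(0.8, 1.2)   # ~ -0.2
```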

Figure 4. Results: retro-cueing and pre-cueing trigger similar patterns
of de-synchronisation in low frequency activity (alpha band at ~10Hz).
[from Myers et al, 2014]
What did we learn from this study? Taken together with the ERP results, it seems that alpha-band lateralization can have two distinct roles: after a precue it likely enables anticipatory attention. After a retrocue, however, the alpha-band response may reflect the controlled retrieval of a recently memorized piece of information that has turned out to be more useful than expected, without influencing the brain’s response to upcoming stimuli.

It seems that our senses are capable of storing a limited amount of information on the off chance that it may suddenly become relevant. When this turns out to be the case, top-down control allows us to pick out the relevant information from among all the items quietly rumbling around in sensory brain regions.

Many interesting questions remain that we were not able to address in this study. For example, how do cortical areas responsible for top-down control activate in response to a retrocue, and how do they shuttle cued information into a state that can guide behaviour? 



Key Reference: 

Myers, Walther, Wallis, Stokes & Nobre (2014) Temporal Dynamics of Attention during Encoding versus Maintenance of Working Memory: Complementary Views from Event-related Potentials and Alpha-band Oscillations. Journal of Cognitive Neuroscience (Open Access)

Friday, 10 April 2015

Research Briefing: Preferential encoding of behaviourally relevant predictions revealed by EEG

Figure 1. Accurate predictions help us prepare the best action
Statistical regularities in the environment allow us to generate predictions to guide perception and action. For example, consider the challenge facing a goalkeeper during a penalty shoot-out. There is simply not enough time to act responsively: by the time the ball is hurtling along its path to some deep corner of the net, it is probably already too late to plan and execute the appropriate action to save the goal. Instead, the goalkeeper must actively predict the likely trajectory of the ball before it has even left the boot of the other player, using any subtle clues betrayed by the kicker – any reliable signal that helps prepare a dive in the correct direction.

Background

Predictions are useful in many contexts, not just professional sport. In everyday life, your brain is constantly generating predictions that help you to interpret the world around you and plan appropriate behaviour. Hermann von Helmholtz described the importance of predictions derived from past experience for interpreting perceptual information (see previous post). More recently, theorists have argued that the brain is essentially a predictive machine - for example, the Free Energy Principle proposes that perception and action are best conceptualised as a dynamic interplay between the predictions we make about our environment and how well these predictions explain future events.
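The core intuition behind the 'predictive machine' view can be caricatured in a few lines: hold a prediction, compare it with what actually arrives, and correct by a fraction of the error. (A toy sketch only – the Free Energy Principle involves far richer machinery than this.)

```python
def update_prediction(pred, observation, learning_rate=0.1):
    """One step of a toy predictive loop: compare the current
    prediction against incoming data, then nudge the prediction by a
    fraction of the prediction error."""
    error = observation - pred
    return pred + learning_rate * error, error

pred = 0.0
for _ in range(100):
    pred, err = update_prediction(pred, 1.0)  # the world keeps delivering 1.0
# the prediction converges toward the true value, and the error shrinks
```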

Research Question

In any given context, some predictions might be useful for behaviour, but others less so. Here, we asked whether and how the brain learns relevant and/or irrelevant predictive relationships using electroencephalography (EEG).

Methods Summary

Participants in our experiment performed a simple target detection task (see Figure 2). On each experimental trial, they were presented with a visual stimulus drawn from a set of ten possible fractal images. At the start of the experiment, one image was assigned as the target. Participants were simply instructed to press a button as quickly as possible each time they detected the target image. Critically, unbeknownst to the participants, we also assigned specific roles to some of the other 'non-target' images. Firstly, we randomly assigned one of the stimuli to act as a task-relevant predictive cue. We rigged the presentation probabilities such that the target stimulus was more likely to follow the predictive cue than any other stimulus. We reasoned that participants should be able to implicitly learn this task-relevant predictive relationship to help prepare their response to the target stimulus (faster response times).
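The rigged presentation probabilities can be sketched as a toy sequence generator. (The 50% cue-to-target probability and the image indices below are illustrative assumptions, not the exact values used in the experiment.)

```python
import random

def next_stimulus(prev, rng, n_stim=10, target=0, cue=1, p_cued=0.5):
    """Draw the index of the next fractal image.  After the predictive
    cue, the target appears with probability p_cued; otherwise every
    image is equally likely."""
    if prev == cue and rng.random() < p_cued:
        return target
    return rng.randrange(n_stim)

rng = random.Random(0)
seq = [rng.randrange(10)]
for _ in range(20000):
    seq.append(next_stimulus(seq[-1], rng))

# targets follow the cue far more often than they follow anything else
after_cue = [b for a, b in zip(seq, seq[1:]) if a == 1]
rate = after_cue.count(0) / len(after_cue)
```

Running this shows the statistical structure participants were exposed to: the target follows the cue at well above the 1-in-10 baseline rate, a regularity that can be picked up implicitly without anyone being told about it.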

Figure 2. Behavioural task to compare neural processing associated with task-relevant and irrelevant statistical relationships. Presentation probabilities were randomly assigned for each participant at the start of the experiment [from Stokes et al., 2014]
Critically, we also included a task-irrelevant predictive relationship to test whether learning is specific to task-relevant relationships, or whether participants implicitly encode all the regularities they experience. Because this manipulation was by definition task-irrelevant, there was no behavioural index of learning. However, we could look to the EEG data to compare the neural response to task-relevant versus irrelevant predictive relationships.
Figure 3. Reaction times became faster for cued targets 
as participants presumably learned the predictive 
nature of the task-relevant cue [from Stokes et al., 2014]

Results Summary

Analysis of the reaction time data confirmed that participants learned the task-relevant statistical regularity that we introduced into the experiment. Reaction times were faster for cued targets relative to uncued targets (see Figure 3). By definition, there is no behavioural measure for task-irrelevant learning in this task, so we must turn to the EEG data (see Figure 4). Panel A shows the EEG response to cued targets relative to uncued targets as a function of learning (block number) in frontal, central and posterior scalp electrodes. The colour scale shows the difference in voltage between cued and uncued targets, i.e., the effect of the predictive cue on target processing. Towards the end of the experiment (blocks 7 & 8), a positive difference emerges at around 300ms after the presentation of the target. We also estimated the effect of learning by calculating the linear relationship between block number and the EEG response. In panel B, we can see the scalp distribution of this learning effect. In comparison to the robust learning effect of task-relevant predictions, we find no evidence for an effect of block (i.e., learning) on task-irrelevant predictions (Panels C & D). Finally, in Panel E we also plot the time-course of the learning effect for relevant (in blue) and irrelevant (in red) predictions in frontal, central and posterior electrodes, revealing a significant effect of learning for relevant predictions (black significance bar, relative to baseline), but not for irrelevant predictions (relevant > irrelevant, grey significance bar).
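The block-wise learning effect boils down to an ordinary least-squares slope of the cued-minus-uncued EEG difference against block number. Here is a minimal sketch of that idea (the actual analysis was run per time point and channel, with cluster-based correction):

```python
def learning_slope(block_numbers, erp_diffs):
    """Ordinary least-squares slope of the cued-minus-uncued EEG
    difference against block number: a positive slope means the cueing
    effect grows as the predictive relationship is learned."""
    n = len(block_numbers)
    mx = sum(block_numbers) / n
    my = sum(erp_diffs) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(block_numbers, erp_diffs))
    var = sum((x - mx) ** 2 for x in block_numbers)
    return cov / var

# a cueing effect that grows across eight blocks gives a positive slope
# (the microvolt values here are made up for illustration)
slope = learning_slope(range(1, 9), [0.0, 0.1, 0.1, 0.3, 0.3, 0.5, 0.6, 0.8])
```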


Figure 4. The EEG learning effect for cued vs. uncued targets and cued vs. uncued control non-targets. There was a robust effect of learning for predicted target stimuli (Panel A & B) relative to the task-irrelevant stimulus pairs (Panels C & D). In Panel E, we directly contrast the effect of learning task-relevant and irrelevant predictions in frontal, central and posterior electrodes, revealing a significant difference from around 250ms post-stimulus in central and posterior channels. Horizontal bars indicate significant regression slopes in the target learning condition compared to chance (in black; central: p = 0.053, cluster-corrected, dashed line, posterior: p = 0.0130, cluster-corrected, solid line), and directly compared to the control non-target condition (in grey; central: p = 0.026; posterior: p = 0.045, cluster corrected) [from Stokes et al., 2014]
Finally, we performed the same analysis, but time-locked to the cue stimulus (Figure 5). All the conventions were the same, except that now we are looking at the response to the predictive stimulus (rather than the predicted stimulus). Again, we observed a robust learning effect for the task-relevant predictive cue (Panels A & B), but not for the task-irrelevant cue (Panels C & D; Panel E shows the direct comparison).

Figure 5. Event-related potentials to predictive stimuli: target cue and control non-target cues. All the conventions are the same as Figure 4. Note that there is a significant learning effect of the task-relevant predictive cue (Panel E, in blue), but not the task-irrelevant cue (in red) [from Stokes et al., 2014]

Summary

This experiment shows that learning predictive relationships critically depends on task relevance. In our experiment, participants were not explicitly informed about any of the statistical relationships between stimuli, but simply learned them through experience. Task-relevant predictions clearly benefited behaviour in the task. As participants learned the implicit statistics of the task, they responded more quickly to cued relative to uncued targets. The learning effect was also clearly evident in EEG activity, consistent with differential processing of predictive, and predicted, task-relevant stimuli. In contrast, there was no evidence for a corresponding neural effect for task-irrelevant predictions, providing strong evidence that the brain prioritises which relationships to learn. Of course, our null effect does not mean that task-irrelevant predictions are never learned or represented, but rather highlights the importance of task relevance in modulating the learning process.

As a side note, this experiment also provides a nice example of how we can use EEG to probe cognitive variables without requiring a behavioural response. In many situations, we are interested in how the brain processes non-target information. This presents an obvious challenge for a behavioural experiment: how can we measure processing without making the stimulus task-relevant? Here, we use EEG to measure the response to task-irrelevant input, thereby providing insights at both the neural and cognitive level (see relevant post here).



Key reference: 

Stokes, Myers, Turnbull & Nobre (2014). Preferential encoding of behaviourally relevant predictions revealed by EEG. Frontiers in Human Neuroscience, 8:687 [open access]







Thursday, 9 April 2015

New arrival, keeping us all busy

It has been a while since I have posted anything new, but in the meantime this little guy has arrived in our lives:


Thursday, 12 June 2014

Research Briefing: Oscillatory Brain State and Variability in Working Memory

Hot off the press: Oscillatory Brain State and Variability in Working Memory

In a new paper, Nick Myers and colleagues show how spontaneous fluctuations in alpha-band synchronization over visual cortex predict the trial-by-trial accuracy of items stored in visual working memory. Pre-stimulus desynchronization of alpha oscillations correlated with the accuracy of memory recall. A model-based analysis indicated that this effect arises from a modulation in the precision of memorized items, but not the likelihood of remembering them (the recall rate). The phase of posterior alpha oscillations preceding the memorized item also predicted memory accuracy. The study highlights the influence of spontaneous changes in cortical excitability on higher visual cognition, and how these state changes contribute to the large variability in what is normally thought of as a stable aspect of behavior.
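The model-based analysis follows the standard mixture-model logic for recall errors, in the spirit of Zhang and Luck's classic formulation: responses are a mix of noisy memory (a von Mises distribution whose concentration indexes precision) and random guesses (whose proportion indexes recall rate). A generic sketch of that model class, not the paper's exact fit:

```python
import math

def _bessel_i0(kappa, terms=30):
    # series expansion of the modified Bessel function of order zero
    return sum((kappa / 2) ** (2 * m) / math.factorial(m) ** 2
               for m in range(terms))

def mixture_pdf(error, kappa, guess_rate):
    """Probability density of a recall error (in radians): with
    probability (1 - guess_rate) the response is von Mises-distributed
    around the true value with concentration kappa (memory precision);
    with probability guess_rate it is a uniform random guess."""
    von_mises = math.exp(kappa * math.cos(error)) / (2 * math.pi * _bessel_i0(kappa))
    uniform = 1 / (2 * math.pi)
    return (1 - guess_rate) * von_mises + guess_rate * uniform
```

On this account, the pre-stimulus alpha effect shows up as a change in kappa (sharper memories on desynchronised trials) rather than in guess_rate (no change in how often the item is remembered at all).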
 
From Figure 2 in Myers et al. (2014)



Reference: 

Myers, N. E., M. G. Stokes, et al. (2014). "Oscillatory brain state predicts variability in working memory." J Neurosci 34(23): 7735-7743 http://www.jneurosci.org/content/34/23/7735.short

Tuesday, 13 August 2013

In the News: Death wave

Near-death Experience
(Wiki Commons)
Can neuroscience shed light on one of life's biggest mysteries - death? In a paper just published in PNAS, researchers describe a surge of brain activity just moments before death. This raises the fascinating possibility that they have identified the neural basis for near death experiences.

First, to put this research into context, death-related brain activity was examined in rats, not humans. For obvious reasons, it is easier to study the death process in animals rather than humans. In this study, nine rats were implanted with electrodes in various brain regions, anaesthetised, then 'euthanized' (i.e., killed). The exact moment of death was identified as the last regular heartbeat (clinical death). The electroencephalogram (EEG) was recorded during the normal waking phase, under anaesthesia, and after cardiac arrest (i.e., after death) from right and left frontal (RF/LF), parietal (RP/LP) and occipital (RO/LO) cortex (see Figure below). The data shown in panel A range from about 1 hr before death to 30 mins afterwards. At this coarse scale you can see some patterns in the waking data that generally reflect high-frequency brain activity (gamma band, >40 Hz). During anaesthesia, activity becomes synchronised at lower frequency bands (especially the delta band: 0.1–5 Hz), but everything seems to flatline after cardiac arrest. However, if we now zoom in on the moments just after death (Panels B and C), we can see that the death process actually involves a sequence of structured stages, including a surge of high-frequency brain activity that is normally associated with wakefulness.


Adapted from Fig 1 of Borjigin et al. (2013)

In the figure above, Panel B shows brain activity zoomed in on the 30 min after death, and Panel C provides an even closer view, with activity from each brain area overlaid in a different colour. The authors distinguish four distinct cardiac arrest stages (CAS). CAS1 reflects the time between the last regular heartbeat and the loss of the oxygenated blood pulse (mean duration ~4 seconds). The next stage, CAS2 (~6 seconds duration), ended with a burst of delta waves (a so-called 'delta blip' of ~1.7 seconds), and CAS3 (~20 seconds duration) continued until there was no more evidence of meaningful brain activity (i.e., CAS4, >30 mins duration). These stages reflect an organized series of brain states. First, activity during CAS1 transitions from the anaesthetised state with an increase in high-frequency activity (~130 Hz) across all brain areas. Next, activity settles into a period of low-frequency brain waves during CAS2. Perhaps most surprisingly, during CAS3 recordings were dominated by mid-range gamma activity (brain waves at ~35-50 Hz). In further analyses, the authors also demonstrate that this post-mortem brain activity is highly coordinated across brain areas and different frequency bands. These are the hallmarks of high-level cognitive activity. In sum, these data suggest that shortly after death, the brain enters a brief state of heightened activity that is normally associated with wakeful consciousness.
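The frequency bands at the heart of this analysis are just different slices of the EEG power spectrum. As a toy illustration, here is a naive discrete-Fourier-transform band-power estimate (the paper itself used far more sophisticated multichannel spectral methods):

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Total power in the [f_lo, f_hi] Hz band via a naive discrete
    Fourier transform -- a toy stand-in for proper spectral analysis."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
            im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
            power += (re * re + im * im) / n
    return power

fs = 500                                 # samples per second
t = [i / fs for i in range(fs)]          # one second of data
gamma_burst = [math.sin(2 * math.pi * 40 * x) for x in t]  # 40 Hz "CAS3-like" rhythm
g = band_power(gamma_burst, fs, 35, 50)  # mid-gamma band
d = band_power(gamma_burst, fs, 0.1, 5)  # delta band
```

For a pure 40 Hz signal, virtually all the power lands in the gamma band and essentially none in delta, which is how a CAS3-style gamma surge stands out against the slow rhythms of anaesthesia.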

Heightened awareness just after death  

Adapted from Fig 2 of Borjigin et al. (2013)
The authors even suggest that the level of activity observed during CAS3 may not only resemble the waking state, but might even reflect a heightened state of conscious awareness similar to the “highly lucid and realer-than-real mental experiences reported by near-death survivors”. This is based on the observation that there is more evidence for consciousness-related activity during this final phase of death than during normal wakeful consciousness. This claim, however, depends critically on their quantification of 'consciousness'. To date, there is no simple index of 'consciousness' that can be reliably measured to infer the true state of awareness. And even if we could derive such a consciousness metric in humans (see here), generalising it to animals could only ever be speculative. Indeed, research in animals can only ever hint at human experience, including near-death experiences.

Nevertheless, as the authors note, this research certainly demonstrates that activity in the brain is consistent with active cognitive processing. The results demonstrate that a neural explanation for these experiences is at least plausible. They have identified the right kind of brain activity for a neural explanation of near-death experiences, yet it remains to be verified whether these signatures do actually relate directly to the subjective experience.

Future directions: The obvious next step is to test whether similar patterns of brain activity are observed in humans after clinical death. Next, it will be important to show that such activity is strongly coupled to near-death experience. For example, does the presence or absence of such activity predict whether or not the person reports a near-death experience? This second step is obviously fraught with technical and ethical challenges (think: Flatliners), but would provide good evidence to link the neural phenomena to the phenomenal experience.

Key Reference:

Borjigin, Lee, Liu, Pal, Huff, Klarr, Sloboda, Hernandez, Wang & Mashour (2013) Surge of neurophysiological coherence and connectivity in the dying brain. PNAS

Related references:

Tononi G (2012) Integrated information theory of consciousness: An updated account. Arch Ital Biol 150(2-3):56–90.

Auyong DB, et al. (2010) Processed electroencephalogram during donation after cardiac death. Anesth Analg 110(5):1428–1432

Related blogs and news articles:

BBC News
Headquarters Hosted by the Guardian
National Geographic
The Independent

Thursday, 27 June 2013

Book Review: Hallucinations, by Oliver Sacks

George Wallis
This is a guest post by George Wallis, one of my PhD students.  We recently attended a seminar in which Oliver Sacks discussed his recent book ‘Hallucinations’.  In this post George discusses the ways in which hallucinations provide neuroscientists with clues about the hidden workings of the brain. This article is also cross-posted at Brain Metrics, a Scitable Blog hosted by Nature Education. 

Oliver Sacks is a neurologist and a writer, and close to a household name.  For many readers, he will be a familiar figure.  Since 1970 he has been writing humane accounts of the ways in which different forms of neurological illness or damage affect the lives of his patients – or occasionally Sacks himself.  Amongst his book-length works are The Man Who Mistook His Wife For a Hat, and Awakenings, an account of the almost miraculous effect of the drug l-DOPA on sleeping sickness patients at the Beth Abraham hospital, which has been adapted into a feature film starring Robin Williams.  Mark and I were lucky to be invited to a small discussion session with Dr Sacks at Warwick University, where he is a visiting professor.  The topic of discussion was his most recent book, Hallucinations.


Hallucinations is known for its detailed account of Sacks’ own hallucinatory experiences during his remarkably excessive drug-taking phase in the 1960s.  Before tight drug laws, and with access to the most potent compounds to be found in a doctor’s medicine cabinet, Sacks experimented with a wide range of compounds – often in huge doses.  He describes the mind-altering experiences he had with classic psychedelics, the disturbingly real-seeming hallucinations experienced whilst on Artane, frightening episodes of psychotic delirium following withdrawal from some lesser known toxic agents, and the time-eating stupor of opiates.  Most fascinating for Sacks fans is his description of the amphetamine-fuelled epiphany that crystallized his desire to write about the neurology and the experiences of his patients.

Beyond the spectacle of these autobiographical chapters, Sacks’ book is a catalogue of the many varieties of hallucination.  For students of neuroscience, this makes for engrossing reading.  Hallucinations can tell us a lot about the brain.

What are hallucinations?  Sacks defines them as ‘percepts arising in the absence of any external reality – seeing things or hearing things that are not there’.  A few hundred years ago hallucinations might have been ascribed to the influence of Gods or ghosts.  Nowadays, neuroscientists and psychologists see hallucinations as the result of abnormal activity in the brain.  Crucially, neuroscientists consider all of the things we experience to result from models the brain builds.  When you look at something in the outside world, your brain doesn’t magically ‘reach out and touch’ the object so you can perceive it (though some philosophers might disagree with neuroscientists on this point!).  Instead, the brain builds a model of what is probably out there in the world, doing its best to match the model to the sensory input we receive at our sense organs (for example, in the retina of the eye).  The things you perceive reflect the model the brain builds – a model built out of the buzzing activity of billions of neurons in your brain.  It’s basically intelligent guesswork, but mostly our brains do pretty well, and we have the impression of a stable world.  Importantly, we tend to agree with other people about what’s out there - which gives an indication that our brains are getting things right!  However, if the activity of the brain is in some way altered by a neurological disturbance of one form or another (illness, drugs, damage from a stroke or injury), the model can diverge from its normal faithful representation of the outside world, and we can have hallucinatory perceptions.

Depending on the type of neural disturbance, these hallucinations can take many different forms.  These are all interesting to neuroscientists, as they all have the potential to tell us something about the workings of the brain.

For example, there is Charles Bonnet Syndrome, which Sacks describes in his opening chapter.  The brain’s intelligent guesswork about the outside world is normally informed by a stream of activity from the sense organs. What happens if you cut off that stream of incoming information?  In some cases, the brain keeps on ‘making up a story’ – except now, it has no information to go on, so the percepts that are produced bear no relation to reality.  For example, diseases of the eye can deprive someone of the visual input their brain has been used to receiving.  If part of the retina is damaged, this can leave a blind patch called a ‘scotoma’, and people with a scotoma can sometimes have vivid hallucinations in just their blind patch. 

Charles Bonnet type hallucinations can also occur if someone goes completely blind.  These hallucinations can be highly ornate – for example, little ‘lilliputian’ people are sometimes seen, often in very colourful and elaborate clothing.  Some people describe these hallucinations as being like a movie.  For most people, however, Charles Bonnet syndrome involves simpler hallucinations – shapes, colours and patterns.  The patterns in the scotoma can ‘scintillate’, giving the impression of constant movement.

Scintillating scotoma patterns


Damage to the retina doesn’t mean that the visual parts of the brain are damaged too – and brain damage isn’t necessary for hallucination.  Charles Bonnet syndrome reflects the normal activity of a brain forced to guess in the absence of information – and people with Charles Bonnet syndrome are often well aware that their hallucinations aren’t real, even if they seem very solid and detailed.  Interestingly, some people with disrupted sensory input experience hallucinations and some do not – it isn’t clear why.

Does this mean that you could hallucinate too if you were deprived of sensory input?  Yes – though as with Charles Bonnet syndrome, it seems to vary from person to person.  There have been various experiments with sensory deprivation.  A recent example was published in the Journal of Neuro-Ophthalmology in 2004, by Lotfi Merabet, Alvaro Pascual-Leone, and their collaborators (Merabet et al., 2004).  They simply blindfolded thirteen healthy volunteers for four days – otherwise, their volunteers were able to walk inside and outside, talk to others, and listen to the TV.  Ten out of thirteen people reported hallucinations.  Just as in Charles Bonnet syndrome, these were sometimes simple (flashing lights, geometric patterns) and sometimes complex (landscapes, people, buildings, sunsets – often seeming extremely vivid; more vivid than normal visual perceptions).

Hallucinations resulting from sensory deprivation are evidence for the neuroscientists’ view of perception – that the brain generates a model and fits it to the world.  Sometimes the brain tissue responsible for generating that model is disturbed in a way that alters the things people perceive.  For example, in epilepsy, the normally controlled activity of the brain briefly goes haywire.  Out-of-control neuronal firing emerges and can spread across the brain surface.  Another form of disturbed brain activity is experienced by many people in the form of migraine.  Migraines are sometimes accompanied by a visual hallucination superimposed on the real visual scene – often termed a ‘migraine aura’.

A migraine sufferer’s recreation of a ‘migraine aura’

In migraine or epilepsy, people sometimes perceive geometric patterns – for example chequer-boards, zig-zag lines, or concentric rings.  These geometric hallucinations are so consistent across people that they were catalogued in the 1920s by the psychologist Heinrich Klüver.  He divided them into four types: tunnels and funnels, spirals, lattices, and cobwebs.


Klüver’s four categories of hallucination pattern.
Bressloff et al., 2002; used with permission.



These patterned hallucinations are interesting because they seem to reflect the structure of the parts of the brain responsible for early visual processing – parts of the brain that are quite organised in their layout.  In the 1970s, the mathematicians Jack Cowan and G. Bard Ermentrout built models of aberrant activity patterns, given what they knew about the structure of the visual cortex.  These models have been extended by the Oxford mathematician Paul Bressloff (Bressloff, Cowan, Golubitsky, Thomas, & Wiener, 2002).  By modelling unusual activity in the visual cortex, and then also taking account of the way the neurons in our visual cortex map onto visual space, these researchers were able to predict the kind of hallucinatory patterns catalogued by Klüver.

A mathematical simulation of a hallucination pattern

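To get a feel for how this works, here is a toy sketch in Python (not the Bressloff/Cowan model itself, whose cortical dynamics are far richer).  It relies on one well-established fact the researchers also use: the mapping from the visual field to primary visual cortex is approximately log-polar.  Pulling a simple plane wave of ‘cortical activity’ back into visual-field coordinates already yields Klüver-like tunnels, fans and spirals.  The grid size and wave parameters below are arbitrary illustrative choices.

```python
import numpy as np

def hallucination_pattern(size=256, ku=8.0, kv=0.0):
    """Render a cortical plane wave in visual-field coordinates.

    ku stripes along log-eccentricity give concentric rings (tunnels),
    kv stripes along polar angle give radial fans; both nonzero -> spirals.
    """
    # Visual-field coordinates, centred on the point of fixation
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    r = np.hypot(x, y) + 1e-6        # eccentricity (offset avoids log(0))
    theta = np.arctan2(y, x)         # polar angle
    # Approximate retino-cortical map: (r, theta) -> (log r, theta)
    u, v = np.log(r), theta
    # A sinusoidal plane wave of activity on the cortical sheet
    return np.cos(ku * u + kv * v)

tunnel = hallucination_pattern(ku=8.0, kv=0.0)  # concentric rings
fan = hallucination_pattern(ku=0.0, kv=8.0)     # radial spokes
spiral = hallucination_pattern(ku=6.0, kv=6.0)  # spiral arms
```

Displaying any of these arrays as an image (for example with `matplotlib.pyplot.imshow`) reveals the corresponding Klüver pattern.  The point of the sketch is just that even the simplest periodic cortical activity, once the retino-cortical map is accounted for, produces the tunnel, fan and spiral forms people actually report.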

Whilst migraines and epilepsy are certainly not pleasant, the actual hallucinations experienced are rarely frightening.  The same is true for Charles Bonnet Syndrome.  People experiencing these hallucinations are usually able to tell them apart from reality, though sometimes only once they have become used to them and know what to expect!  Of course, this isn’t true of all hallucinations.  Sacks also discusses the more terrifying types of hallucinations, for example, those of psychosis, or of the ‘night terror’ associated with sleep paralysis – in which people awake unable to move, with the feeling that they are trapped beneath a horrible intruder who is trying to suffocate them (the ‘night mare’ or ‘night hag’).

Nicolai Abildgaard’s ‘Nightmare’



What do these more frightening hallucinations - in particular, the hallucinations associated with psychosis (in which people often also experience delusions) - say about the brain?  This is a fascinating but difficult area, as yet poorly understood.  Here it becomes more difficult to draw the line between perceptions and beliefs, and emotional and motivational factors seem to be more involved.  Researchers are currently trying to understand how hallucinations in diseases like schizophrenia are related to the other symptoms of the disorder, and how they may be similar or different to the kind of hallucinations produced by sensory deprivation or epileptic activity patterns in the brain.

Finally, an interesting speculation that may haunt you as you read Sacks’ book is that hallucinatory experiences – which, as Sacks points out, are much more common than one might think – could be responsible for the religious, mystical, and paranormal parts of our culture.  For example, Sacks points out that Joan of Arc’s visions are classic manifestations of epileptic activity in the temporal lobes.  He speculates that these seizure-related visions were the reason an uneducated farmer’s daughter became a religious leader who rallied thousands of followers.

Sacks’ book is an engrossing survey of hallucinatory experiences of all types.  In their variety (far more extensive than described in this blog post), hallucinations provide many insights into the way our ordinary perception works.  Reading Sacks’ book is also good preparation for the possibility – not too slim, as Sacks points out – that you will one day have a hallucinatory experience of one form or another (if you haven’t already!).

References

Bressloff, P. C., Cowan, J. D., Golubitsky, M., Thomas, P. J., & Wiener, M. C. (2002). What geometric visual hallucinations tell us about the visual cortex. Neural Computation, 14(3), 473–491.

Merabet, L. B., Maguire, D., Warde, A., Alterescu, K., Stickgold, R., & Pascual-Leone, A. (2004). Visual Hallucinations During Prolonged Blindfolding in Sighted Subjects. Journal of Neuro-Ophthalmology, 24(2), 109.

All images Creative Commons except the Klüver patterns, from Bressloff et al., used with permission.