Wednesday 15 August 2012

In the news: clever coding gets the most out of retinal prosthetics

This is something of an update to a previous post, but I thought it interesting enough for its own blog entry. Just out in PNAS, Nirenberg and Pandarinath describe how they mimic the retina’s neural code to improve the effective resolution of an optogenetic prosthetic device (for a good overview, see Nature News).

As we have described previously, retinal degeneration affects the photoreceptors (i.e., the rod and cone cells) but often spares the ganglion cells that normally carry visual information out of the eye via the optic nerve (see the retina diagram below). By stimulating these intact output cells directly, visual information can bypass the damaged retinal circuitry and reach the brain. Although the results from recent clinical trials are promising, restored vision remains modest at best. To put it in perspective, Nirenberg and Pandarinath write:
[current devices enable] "discrimination of objects or letters if they span ∼7° of visual angle; this corresponds to about 20/1,400 vision; for comparison, 20/200 is the acuity-based legal definition of blindness in the United States"
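As a quick sanity check on those numbers (my own back-of-the-envelope arithmetic, not the paper's): Snellen acuity scales with the angular size of the smallest letter that can be read, and a standard 20/20 letter subtends about 5 arcminutes. An object that must span ∼7° (420 arcminutes) to be recognised therefore corresponds to an acuity of roughly

    \frac{20}{20 \times 420/5} = \frac{20}{1680}

which is in the same ballpark as the quoted 20/1,400 (the exact figure depends on which features of a letter count as the critical detail).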
Obviously, this poor resolution must be improved upon. Typically, the problem is framed as a limit on the resolution of the stimulating hardware, but Nirenberg and Pandarinath show that the software matters too; in fact, it matters a great deal.

This research focuses on a specific implementation of retinal prosthesis based on optogenetics (for more on the approach, check out this Guardian article; for an early empirical demonstration, see Bi et al., 2006, in the references below). Basically, intact retinal ganglion cells are injected with a genetically engineered virus that makes them produce a light-sensitive protein. These modified cells will then respond to light coming into the eye, just as the rods and cones do in the healthy retina. This approach, although still being developed in mouse models, promises a more powerful and less invasive alternative to the electrode arrays previously trialled in humans. But it is not the hardware that is the focus of this research. Rather, Nirenberg and Pandarinath show how the efficacy of these prosthetic devices critically depends on the type of signal used to activate the ganglion cells. As schematised below, they developed an encoder that converts natural images into a format that more closely matches the neural code expected by the brain.

The steps from visual input to retinal output proceed as follows: images enter a device that contains the encoder and a stimulator (a modified mini digital light projector, or mini-DLP). The encoder converts the images into streams of electrical pulses, analogous to the streams of action potentials that would be produced by the normal retina in response to the same images. The electrical pulses are then converted into light pulses (via the mini-DLP) to drive channelrhodopsin-2 (ChR2), the light-sensitive protein expressed in the ganglion cells.
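The encoder at the heart of this scheme is a model of the retina's input-output transformation, fitted to recordings from healthy ganglion cells. To give a flavour of what such a model looks like, here is a minimal sketch of a linear-nonlinear-Poisson (LNP) cascade of the general kind used in this literature; the filter shapes, gain, and every other parameter below are illustrative placeholders, not the fitted values from the paper:

    import numpy as np

    rng = np.random.default_rng(0)

    def lnp_encode(frames, spatial_filter, temporal_filter, dt=0.005, gain=30.0):
        """Toy linear-nonlinear-Poisson encoder for a single ganglion cell.

        frames:          (T, H, W) movie of input images
        spatial_filter:  (H, W) receptive-field weights (placeholder, not fitted)
        temporal_filter: (K,) temporal kernel (placeholder, not fitted)
        Returns spike counts per time bin (length T).
        """
        # 1. Linear stage: project each frame onto the receptive field...
        drive = np.tensordot(frames, spatial_filter, axes=([1, 2], [0, 1]))
        # ...then filter the resulting signal in time.
        drive = np.convolve(drive, temporal_filter, mode="full")[: len(frames)]
        # 2. Nonlinear stage: map drive to a non-negative firing rate (Hz).
        rate = gain * np.log1p(np.exp(drive))  # softplus nonlinearity
        # 3. Spike generation: draw spike counts from a Poisson process.
        return rng.poisson(rate * dt)

    # Illustrative filters: centre-surround spatial field, biphasic temporal kernel.
    H = W = 16
    y, x = np.mgrid[:H, :W] - (H - 1) / 2
    r2 = x ** 2 + y ** 2
    spatial = np.exp(-r2 / 8) - 0.5 * np.exp(-r2 / 32)  # difference of Gaussians
    t = np.arange(20)
    temporal = np.exp(-t / 3) - 0.6 * np.exp(-t / 6)    # biphasic kernel

    movie = rng.standard_normal((200, H, W))            # stand-in for camera input
    spikes = lnp_encode(movie, spatial, temporal)
    print(f"{spikes.sum()} spikes over {len(movie) * 0.005:.1f} s")

In the device itself, each model cell's pulse train is then delivered as a sequence of light flashes aimed at the corresponding ChR2-expressing ganglion cell, so the retina's output becomes the model's prediction of what the healthy retina would have said about the same scene.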
This neural code is illustrated in the image below: 

The key result of the paper is a dramatic increase in the amount of visual information carried by the retinal output cells. The authors used a neural decoding procedure to quantify the information content of the activity patterns elicited by visual stimulation of a healthy retina, and compared it against optogenetic activation of ganglion cells in the degenerated retina using either encoded or unencoded stimulation. Sure enough, the encoded signals reinstated activity patterns that carried much more information than the raw signals did. In a more dramatic, and illustrative, demonstration of this improvement, they used an image-reconstruction method to show how the original image (a baby's face in panel A) is first encoded by the device (reconstructed in panel B) to activate a pattern of ganglion cells (reconstructed in panel C). Clearly, the details are well preserved, especially in comparison to the reconstruction obtained without the encoder (panel D). In a final demonstration, they also found that the experimental mice could track a moving stimulus driven by the encoded signal, but not by the raw unprocessed input.
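The reconstruction logic is worth unpacking: if you can train a decoder to map population spike trains back to images, then the fidelity of the decoded image is a direct readout of how much stimulus information the spike trains preserve. The paper's decoding methods are more sophisticated; as a crude stand-in, here is a ridge-regression decoder trained on simulated data, where the cells, receptive fields, and images are all made up for illustration:

    import numpy as np

    rng = np.random.default_rng(1)

    # Made-up population: n_cells ganglion cells view n_pix-pixel images
    # through random linear receptive fields with Poisson spiking.
    n_train, n_test, n_pix, n_cells = 2000, 5, 64, 120
    rf = rng.standard_normal((n_cells, n_pix)) / np.sqrt(n_pix)

    def respond(images):
        rates = np.log1p(np.exp(images @ rf.T))  # rectified drive -> firing rate
        return rng.poisson(rates)                # spike counts per cell

    train_imgs = rng.standard_normal((n_train, n_pix))
    train_resp = respond(train_imgs)

    # Ridge regression from (mean-centred) spike counts back to pixel values.
    lam = 1.0
    R = train_resp - train_resp.mean(axis=0)
    S = train_imgs - train_imgs.mean(axis=0)
    W = np.linalg.solve(R.T @ R + lam * np.eye(n_cells), R.T @ S)

    test_imgs = rng.standard_normal((n_test, n_pix))
    recon = (respond(test_imgs) - train_resp.mean(axis=0)) @ W + train_imgs.mean(axis=0)
    print(f"mean squared reconstruction error: {np.mean((recon - test_imgs) ** 2):.3f}")

Run a decoder like this on spike trains evoked by encoded versus raw stimulation and compare the two reconstructions to the original image: that, in essence, is the comparison behind panels C and D.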

According to James Weiland, an ophthalmologist at the University of Southern California (quoted by Geoff Brumfiel in Nature News), there has been considerable debate over whether it is more important to try to mimic the neural code, or to simply allow the system to adapt to an unprocessed signal. Nirenberg and Pandarinath argue that clever pre-processing will be particularly important for retinal prosthetics, as there appears to be less plasticity in the visual system than in, say, the auditory system. It is therefore essential that researchers crack the neural code of the retina rather than hope the visual system will learn to adapt to an artificial input. The team are optimistic:
"the combined effect of using the code and high-resolution stimulation is able to bring prosthetic capabilities into the realm of normal image representation"
But only time, and clinical trials, will tell.


References:

Bi A, et al. (2006) Ectopic expression of a microbial-type rhodopsin restores visual responses in mice with photoreceptor degeneration. Neuron 50(1):23–33.

Nirenberg S, Pandarinath C (2012) Retinal prosthetic strategy with the capacity to restore normal vision. PNAS.
