The first part of our series highlighting UWIN fellow research starts with rising third-year Psychology PhD student Ezgi Yucel. Ezgi is co-advised by Ione Fine and Ariel Rokem, and received a UWIN fellowship before arriving at the University of Washington. Her research combines visual perception, computational modeling, and improving the patient experience.

Retinal degenerative diseases, such as retinitis pigmentosa, can cause gradual vision loss due to the death of the rod and cone photoreceptor cells in the eye. One treatment for this vision loss is a surgically placed electrical retinal implant, similar in principle to a cochlear implant. The implant electrically stimulates the remaining retinal cells to induce percepts, the visual sensations a patient experiences.

Visual loss typical of earlier stages of retinitis pigmentosa (A) and macular degeneration (B). Both diseases lead to total blindness at later stages. Source: Action for Blind People (UK).

In 2013, the Argus II became the first FDA-approved retinal implant in the United States. The Argus II includes a microelectrode array placed on the retina at the back of the eye, a camera and transmitter mounted on eyeglasses, and a video processing unit. The camera captures visual information that the processing unit translates into a series of instructions transmitted to the microelectrode array. The array then stimulates the remaining cells in the retina. While the resulting percepts do not reproduce the exact visual information recorded by the camera, they are a step toward restoring vision.

Optimizing a perceptual experience

Ezgi’s work revolves around refining the percepts that patients see. She does this by developing a clearer understanding of patients’ perceptual experiences and investigating how various factors affect them. Initially, the purpose of her work was to validate a model called pulse2percept, which was created to simulate the perceptual experience of retinal prosthesis patients. The model takes the series of pulses sent to the microelectrode array and predicts a possible percept seen by the patient. The original model was based on behavioral data from three patients with the older Argus I device and one patient with an Argus II device. While the model fits this small population, it still needs to be validated against a larger one.
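The idea behind such a simulation can be illustrated with a toy version of the simplest mapping from stimulation to percept, sometimes called the scoreboard assumption, in which each stimulated electrode produces an independent Gaussian blob of brightness. The following is a minimal pure-Python sketch of that idea, not pulse2percept’s actual model or API; the grid size, the spatial decay constant `rho`, and the function name are invented for illustration.

```python
import math

def simulate_percept(electrodes, grid_size=9, rho=1.5):
    """Toy 'scoreboard'-style percept: each stimulated electrode adds
    a Gaussian blob of brightness to a coarse grid. `electrodes` maps
    (x, y) grid positions to stimulation amplitudes."""
    percept = [[0.0] * grid_size for _ in range(grid_size)]
    for (ex, ey), amp in electrodes.items():
        for y in range(grid_size):
            for x in range(grid_size):
                d2 = (x - ex) ** 2 + (y - ey) ** 2
                percept[y][x] += amp * math.exp(-d2 / (2 * rho ** 2))
    return percept

# Stimulate two electrodes; brightness peaks near each location.
p = simulate_percept({(2, 4): 1.0, (6, 4): 0.8})
```

In a realistic model, blobs from nearby electrodes interact and depend on the pulse train’s timing and amplitude, which is exactly the kind of behavior that must be validated against patient drawings.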

Simulation by pulse2percept predicting the perceptual experience of a retinal prosthesis patient. Source:[]

Ezgi began this project by conducting behavioral experiments with blind patients using a drawing task. In this task, researchers stimulated specific electrodes, and the patients drew recreations of their percepts. Initial tests did not produce reproducible percepts across patients, revealing variability in how the Argus II functioned in each patient. The researchers even had trouble eliciting two distinct percepts in multi-electrode stimulation trials, hinting at large patient-to-patient variability. She continued meeting with patients and had more success on a later data collection trip: she collected consistent percepts across trials, even obtaining distinct shapes during multi-electrode stimulation. The gap between the first data collection trip and the second prompted an evolution in Ezgi’s research.

Considering the gap in perceptual experiences, Ezgi expanded her project to focus on possible causes of the perceptual differences. The first cause she is investigating is how physiological changes, such as retinal scarring, affect spatial vision. While there may be many possible reasons for the perceptual differences, understanding how surgical insertion methods and existing retinal cell damage affect percepts may inform surgeons on how best to align the electrode array for optimal percepts. This research uses computational models designed to relate the location of scarring and cell abnormalities to the resulting percepts.

Investigation into fMRI data

Alongside her work to optimize percepts across the entire population of patients, Ezgi is also investigating the influence of systemic low-frequency oscillations on the blood-oxygenation-level-dependent (BOLD) signal. The BOLD signal arises when additional blood flow highlights areas of neuronal activity. Blood flow naturally exhibits measurable oscillations throughout the body, which may bias the BOLD signal. Because the retinotopic organization of visual input in the human brain is well known, she is using visual field mapping to estimate the effects of these oscillations on measurements of neuronal activity.

Time delay maps due to blood flow may mimic resting state organization, taken from Tong and Frederick, 2014 (Tracking cerebral blood flow in BOLD fMRI using recursively generated regressors)

The measurements for a visual field map are taken with functional magnetic resonance imaging (fMRI), which tracks how blood flow through the brain changes as neurons rest or fire. While the BOLD signal allows researchers to measure brain activity during various tasks, its natural noise remains a concern. Ezgi hopes that by characterizing this noise, researchers can clean BOLD measurements and obtain more precise data.
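The general approach to cleaning a known low-frequency confound out of a measured time course can be sketched as a simple nuisance regression: fit the confound’s time course to the signal and subtract the best-fitting scaled copy. The following is an illustrative pure-Python sketch of that generic idea, not Ezgi’s actual analysis pipeline; the sampling interval, drift frequency, amplitudes, and function name are all assumptions made for the example.

```python
import math

def regress_out(signal, nuisance):
    """Remove the best-fitting scaled copy of a single nuisance
    regressor from a signal via ordinary least squares."""
    beta = (sum(s * n for s, n in zip(signal, nuisance))
            / sum(n * n for n in nuisance))
    return [s - beta * n for s, n in zip(signal, nuisance)]

# Simulated BOLD time course: a boxcar task response plus a slow
# 0.05 Hz oscillation, sampled every 0.5 s for 100 s.
t = [i * 0.5 for i in range(200)]
task = [1.0 if 20 <= ti < 40 else 0.0 for ti in t]
drift = [0.5 * math.sin(2 * math.pi * 0.05 * ti) for ti in t]
bold = [a + b for a, b in zip(task, drift)]

cleaned = regress_out(bold, drift)
```

In real data the confound’s time course is not known exactly and must itself be estimated, for example from systemic blood-flow signals, which is part of what makes this problem interesting.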

For the next few years, Ezgi will continue her work on the perceptual experience of retinal prosthesis patients, now focusing on scarring, and will expand into the investigation of fMRI noise. She looks forward to developing better simulations and hopes to improve the consistency of the percepts created by devices such as the Argus II.

Recently, Ezgi gave a short talk about this research at the 2019 Neural Computation and Engineering Connection (NCEC). Portions of her research were also covered in PC Magazine’s interview with her post-doctoral colleague Dr. Michael Beyeler.