Author: ss

Registration open for 2020 Neural Computation and Engineering Connection (NCEC)

Poster for the 2020 University of Washington Neural Computation and Engineering Connection (NCEC)

Registration is open for the 2020 Neural Computation and Engineering Connection (NCEC)! It will be held on the afternoon of Thursday, January 30, 2020 and all day Friday, January 31, 2020.  NCEC brings together the University of Washington neuroengineering and computational neuroscience communities in an exciting and stimulating event!  This event is sponsored by the UW Institute for Neuroengineering (UWIN), the UW Computational Neuroscience Center, and the Center for Neurotechnology.

The keynote speakers for NCEC 2020 are: Terry Sejnowski (Salk Institute), Eva Dyer (Georgia Tech), Michel Maharbiz (UC Berkeley), and Karunesh Ganguly (UCSF).

Local speakers include: Ione Fine (UW Psychology), Nathan Kutz (UW Applied Math), Hesam Jahanian (UW Radiology), and David Gire (UW Psychology). UWIN and Computational Neuroscience graduate and postdoctoral fellows will also be giving talks.

Registration is free but required.
Please register at:

Registration closes on Thursday, January 16, 2020.

Thursday’s events will be at the Husky Union Building (HUB) in rooms 332 and 334, starting with a poster session during lunch, followed by student talks, a keynote lecture by Karunesh Ganguly (UCSF), and a panel discussion on ethics in neuroscience.

Friday’s events will be all-day at the Bill & Melinda Gates Center (CSE2) Zillow Commons and will include keynote talks by Terry Sejnowski (Salk Institute), Eva Dyer (Georgia Tech) and Michel Maharbiz (UC Berkeley), talks by UW faculty Ione Fine (Psychology), Nathan Kutz (Applied Math), Hesam Jahanian (Radiology), and David Gire (Psychology), and talks by senior UWIN postdoctoral fellows. NCEC will end with a late afternoon reception on Friday in HUB 332.

More information about the schedule can be found here.

January 2020 UWIN Seminar: Short talks by Fred Rieke and Arka Majumdar

January 2020 UWIN Seminar speakers Fred Rieke and Arka Majumdar.

The UWIN seminars start in January 2020 with a pair of short talks by Fred Rieke and Arka Majumdar. The seminar is on Wednesday, January 8, 2020 at 3:30pm in Husky Union Building (HUB) 337. Refreshments will be served prior to the talks.

“Towards generalizable models for sensory responses”

Fred Rieke, Professor, Department of Physiology and Biophysics, University of Washington

“Extreme miniaturization of optics using metasurface and computational imaging”

Arka Majumdar, Assistant Professor, Department of Electrical and Computer Engineering, University of Washington


“Towards generalizable models for sensory responses” (Fred Rieke)

Receptive field models attempt to concisely summarize neuronal responses to sensory inputs. These models instantiate our understanding of the mechanistic basis of circuit function, and by doing so, help identify gaps in that understanding. Current models often fail to generalize to predict responses to stimuli other than those to which they were directly fit. Such failures are particularly striking for stimuli with the large dynamic range and strong temporal and spatial correlations characteristic of natural visual images. I will summarize recent work aimed at generating models with better ability to generalize across stimuli, and discuss what those models can reveal about key circuit functions.

“Extreme miniaturization of optics using metasurface and computational imaging” (Arka Majumdar)

Modern image sensors consist of systems of cascaded and bulky spherical optics for imaging with minimal aberrations. While these systems provide high quality images, the improved functionality comes at the cost of increased size and weight. One route to reduce a system’s complexity is via computational imaging, in which much of the aberration correction and functionality of the optical hardware is shifted to post-processing in the software realm. Alternatively, a designer could miniaturize the optics by replacing them with diffractive optical elements, which mimic the functionality of refractive systems in a more compact form factor. Metasurfaces are an extreme example of such diffractive elements, in which quasiperiodic arrays of resonant subwavelength optical antennas impart spatially-varying changes on a wavefront. While separately both computational imaging and metasurfaces are promising avenues toward simplifying optical systems, a synergistic combination of these fields can further enhance system performance and facilitate advanced capabilities. In this talk, I will present a method to combine these two techniques to perform full-color imaging across the whole visible spectrum. I will also discuss the use of computational techniques to design new metasurfaces, and using metasurfaces to perform computation on wavefronts, with applications in optical information processing and sensing.

December 2019 UWIN Seminar: Short talks by Jeff Riffell and Josh Smith

Jeff Riffell and Josh Smith, December 2019 UWIN Seminar Speakers

The UWIN seminars continue with a pair of short talks by Jeff Riffell and Josh Smith. The seminar is on Wednesday, December 11, 2019 at 3:30pm in Husky Union Building (HUB) 337. Refreshments will be served prior to the talks.

“The sensory biology and neurobiology of the mosquito – the world’s deadliest animal”

Jeff Riffell, Professor, Department of Biology, University of Washington

“Acoustic Levitation”

Josh Smith, Professor, Department of Computer Science and Engineering, University of Washington


“The sensory biology and neurobiology of the mosquito – the world’s deadliest animal” (Jeff Riffell)

In this talk, I will focus on our recent work with the Aedes aegypti mosquito, an important disease vector. Little is known about the neural circuits that drive mosquito host-finding behavior, but I will describe our efforts toward characterizing the neural bases of behavior at different life-history stages (adult sugar-seeking, host-locating) and toward developing new tools for interrogating those circuits. Further, I will argue that such an understanding can lead to new interventions and tools for mosquito control.

“Acoustic Levitation” (Josh Smith)

I will present some early work in my group on using ultrasound to levitate small objects. After introducing the technique, I will discuss potential applications in neuroscience, neurotechnology, robotics, and other areas.

Launch of Weill Neurohub, a collaborative research network for neuroscience

Sanford and Joan Weill, who donated funds for the new Weill Neurohub. Photo: UC San Francisco.

The Weill Neurohub is a recently launched collaborative research network with the purpose of spurring neuroscience research into brain diseases and disorders. The collaboration is funded by a $106 million initiative from the Weill Family Foundation. Former Wall Street financier Sanford Weill and his wife, Joan Weill, operate the Weill Family Foundation, which has a continued history of supporting neuroscience, with total gifts exceeding $300 million.

The new Neurohub will combine the experience of three universities, as well as the resources of seventeen national laboratories overseen by the Department of Energy. The intent of this extensive collaboration is to forge and nurture collaborations between neuroscientists and researchers working in other disciplines to speed development of therapies for neurological diseases and disorders.

Neurohub will be headed by a leadership committee including representatives from all three schools, their neuroscience departments, and the Weill Family Foundation. The committee includes UWIN co-director and UW professor of biology Tom Daniel. The committee is looking forward to establishing, as Tom Daniel sees it, a “nationally unique enterprise — drawing on diverse approaches to accomplish goals no single institution could reach alone, as well as seeding and accelerating research and discovery.”

The Neurohub plans on providing funding for faculty, postdoctoral fellows, and graduate students at the UW, Berkeley and UCSF working on cross-disciplinary projects, including funding for “high-risk/high-reward” proposals that are particularly innovative and less likely to find support through conventional funding sources. They will also be funding novel cross institutional projects in the “pillars” of neuro-discovery: imaging; engineering; genomics and molecular therapeutics; and computation and data analytics.

This project has been written about in many different media outlets, including:

November 2019 UWIN Seminar: Talk by Keith Hengen

The November 2019 UWIN seminar features a talk by Keith Hengen. The seminar is on Wednesday, November 6, 2019 at 3:30pm in Husky Union Building (HUB) 214. Refreshments will be served prior to the talk.

“Long term computational stability in the brain (and 500 single units in a freely behaving mouse)”

Keith Hengen, Assistant Professor of Biology, Washington University in St. Louis


Neuronal computation is extremely robust across a lifetime. Our data suggest that the brain actively tunes itself to criticality, a computational regime that maximizes information processing. To address these types of ideas, we developed methods to record from 500-1000 single units throughout the brain of a freely behaving mouse. Further, we can follow these neurons for months without pause, allowing the consideration of complex, natural behaviors on ethologically relevant timescales. 

UWIN Fellow Research Focus: Raymond Sanchez

The second focus of the UWIN fellow research series is Raymond Sanchez, a graduate student in the UW Neuroscience Program and Department of Biology. Ray is advised by Horacio de la Iglesia, and received a UWIN fellowship in 2017. His research focuses on sleep disturbances related to seizures, and on methods of detecting and preventing seizures during sleep.

Epilepsy is among the most common neurological disorders in the world, and is typically accompanied by sleep and circadian rhythm disturbances. These include waking during the night, difficulty maintaining a consistent sleep cycle, and tiredness throughout the waking day. Such disturbances tend to increase the likelihood and severity of seizures and of associated cognitive and developmental deficits. Additionally, frequent nocturnal seizures put patients, particularly children, at high risk for sudden unexpected death.

Investigating Sleep Disturbances

Dravet syndrome is a specific epilepsy syndrome that begins in infancy or early childhood and is caused by mutations in the SCN1A gene, which encodes a voltage-gated sodium channel crucial to nervous system function. Because the causal mutation is known, Dravet syndrome can be induced in animal models and studied in specific regions of the brain. This allows the SCN1A modification to be restricted to sleep-regulating regions of the brain, clarifying the relationship between sleep and epilepsy.

Using existing genetic techniques in mice, a gene can be tagged, modified, or removed; in this case, the physiological changes associated with Dravet syndrome are induced in sleep-regulating regions of the brain, allowing these mice to serve as Dravet syndrome models in research. With these mouse models available, Ray can directly study the repercussions of Dravet syndrome in long-term sleep studies that reveal how environmental changes modify sleep architecture. Sleep architecture matters: beyond the total amount of sleep and the amount of each sleep stage, the order in which the stages occur is also important.

One of the most consistent observations from caretakers of individuals with Dravet syndrome is that travel is very hard to recover from, specifically jet lag and the transition to a new time zone. Ray investigated this phenomenon by observing sleep regularity in two sets of mice, one carrying the Dravet syndrome mutation and one without, while the light schedule was shifted to simulate a time-zone change.

actogram from Raymond Sanchez's paper on mice sleep habits

An actogram is a way of visually depicting circadian activity cycles. The actogram above is double plotted, meaning that each day is plotted beside the following day, allowing easier day-to-day comparison. The plot shows the time and duration of activity across the day (x-axis) over a 30-day observational period (y-axis). Gray shading marks the ‘night’ period with lights off, while light areas mark the ‘day’ hours with lights on. An ECoG/EMG recording was taken from each mouse, with black sections showing times the mouse was asleep and white gaps showing active times. The experiment was run for both a wild-type control mouse (left) and a Dravet syndrome model mouse (right), observing sleep patterns as a simulated “jet lag” time shift occurred. Visually, the common sleep disturbances of Dravet syndrome are present: inconsistent sleep during the night and more resting through the day. These multi-day sleep studies are useful for comparing the severity of disruption and the time needed to re-adjust the circadian cycle between the wild-type and Dravet syndrome mice.
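Double plotting itself is simple to implement. As a minimal sketch (the hourly binning, array layout, and toy data below are illustrative assumptions, not the lab's actual pipeline), each row of the double-plotted matrix is one day's activity followed by the next day's:

```python
import numpy as np

def double_plot(activity):
    """Build a double-plotted actogram matrix.

    activity: (n_days, n_bins) array of activity counts per time bin.
    Returns an (n_days - 1, 2 * n_bins) array in which row i shows
    day i followed by day i + 1, easing day-to-day comparison.
    """
    activity = np.asarray(activity)
    return np.hstack([activity[:-1], activity[1:]])

# Toy example: 30 days of 24 hourly bins, with nocturnal activity
# concentrated in hours 12-23 (lights off).
rng = np.random.default_rng(0)
act = np.zeros((30, 24))
act[:, 12:] = rng.poisson(5, size=(30, 12))
doubled = double_plot(act)
print(doubled.shape)  # (29, 48)
```

Rendering `doubled` as an image (days on the y-axis, two concatenated days on the x-axis) yields the familiar double-plotted actogram, where a shifting light schedule appears as a diagonal drift in the activity band.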

Sleep Stage Detection

An important distinction in sleep architecture is which sleep stage is occurring. There are five sleep stages: REM sleep, stages 1 and 2 (light sleep), and stages 3 and 4 (deep sleep). Each stage serves a different purpose during a night’s rest, with the deep sleep stages allowing the body to recover: they support heart health, immune health, and brain health, even aiding the transfer of short-term memories into long-term storage.

Patients with Dravet syndrome often find it hard to get enough total sleep or enough of the deeper sleep stages, and this lack of sleep often serves as a trigger for future seizures. The mouse studies have been useful for finding correlations between sleep and seizure activity, but because the studies span multiple days, analyzing the data can be challenging.

To aid in this data analysis, Ray is currently developing a machine learning model for sleep stage classification. This will allow him to process long-term sleep recordings with human-level accuracy, but automated and much faster. The model is a first step toward predicting when a seizure might occur: detecting the sleep stage lets a device identify whether a stage has been skipped or ended early, and lets the program search the recordings for stage-specific signatures that precede seizures.
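Sleep stagers of this kind typically work from spectral features of the ECoG together with muscle tone from the EMG. As a toy illustration only (the sampling rate, band limits, thresholds, and rule-based logic here are hypothetical stand-ins for a trained model, not Ray's classifier):

```python
import numpy as np

FS = 256  # assumed sampling rate in Hz

def band_power(epoch, lo, hi, fs=FS):
    """Mean spectral power of one epoch in the [lo, hi) Hz band."""
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    power = np.abs(np.fft.rfft(epoch)) ** 2
    mask = (freqs >= lo) & (freqs < hi)
    return power[mask].mean()

def classify_epoch(ecog, emg_rms):
    """Toy rule-based stager: high muscle tone suggests wake;
    delta-dominated ECoG suggests NREM; otherwise call it REM."""
    delta = band_power(ecog, 0.5, 4.0)   # slow-wave band
    theta = band_power(ecog, 5.0, 9.0)   # REM-associated band
    if emg_rms > 1.0:                    # hypothetical threshold
        return "wake"
    return "NREM" if delta > theta else "REM"
```

A real model would replace the hand-set rules with a classifier trained on human-scored epochs, but the band-power features are representative of what such models consume.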

Once the data analysis is automated, Ray will look at how to detect, prevent, and interrupt nocturnal seizures. Accurately predicting a nocturnal seizure requires identifying and automatically detecting pre-markers of seizures. While seizure prediction models are still in the future for Ray, he imagines a noninvasive device that can measure a child’s sleep, detect markers of an impending seizure, and then wake the child or warn caregivers before the seizure occurs. There is still work to be done to develop such a device, but Ray looks forward to his work informing therapeutic devices in the future.

Ray presented this work at the Computational Neuroscience Conference in 2019, and his manuscript, “Circadian Regulation of Sleep in a Pre-Clinical Model of Dravet Syndrome: Dynamics of Sleep Stage and Siesta Re-entrainment”, has been accepted by the journal Sleep.

Amy Orsborn, UWIN faculty, awarded L’Oréal grant

UWIN PI, Amy Orsborn, an assistant professor in Electrical and Computer Engineering at the University of Washington, was awarded a “Changing the Face of STEM” grant by L’Oréal USA. The grant supports former L’Oréal “For Women in Science” fellows to inspire future generations of girls and women in STEM.

Dr. Orsborn will be using her first CTFS grant to support her mentorship organization that promotes Women In Neural Engineering (WINE). WINE looks to provide vital peer‐to‐peer mentorship and networking for women in neural engineering. The group’s initial efforts center on women at the faculty level, as this career stage represents a key bottleneck towards inclusive STEM leadership. The CTFS grant will help WINE provide mentorship and outreach across the training pipeline.

More information on the award, and the other ten awardees can be found in the L’Oréal press release.

October 2019 UWIN Seminar: Short talks by Anitha Pasupathy and Sam Burden

October 2019 UWIN Seminar speakers Anitha Pasupathy and Sam Burden

The first UWIN seminar for the 2019-2020 school year features a pair of short talks by Anitha Pasupathy and Sam Burden. The seminar is on Wednesday, October 9, 2019 at 3:30pm in Husky Union Building (HUB) 337. Refreshments will be served prior to the talks.

“Mid-level cortical representation for object recognition”

Anitha Pasupathy, Professor, Department of Biological Structure, University of Washington

“Sensorimotor games: Human/machine collaborative learning and control”

Sam Burden, Assistant Professor, Department of Electrical & Computer Engineering, University of Washington


“Mid-level cortical representation for object recognition” (Anitha Pasupathy)

Recognizing a myriad of visual objects rapidly is a hallmark of the primate visual system. Traditional theories of object recognition have focused on how critical form features, e.g. the orientation of edges, may be extracted in early visual cortex and utilized to recognize objects. An alternative view argues that much of early and mid-level visual processing focuses on encoding surface characteristics, e.g. texture. Neurophysiological evidence from primate area V4 supports a third alternative—the joint, but independent, encoding of form and texture—that would be advantageous for segmenting objects from the background in natural scenes and for object recognition that’s independent of surface texture. Future studies that leverage deep convolutional network models can advance our insights into how such a joint representation of form and surface properties might emerge in visual cortex.

“Sensorimotor games: Human/machine collaborative learning and control” (Sam Burden)

While interacting with a machine, humans will naturally formulate beliefs about the machine’s behavior, and these beliefs will affect the interaction. Since humans and machines have imperfect information about each other and their environment, a natural model for their interaction is a game. Such games have been investigated from the perspective of economic game theory, and some results on discrete decision-making have been translated to the neuromechanical setting, but there is little work on continuous sensorimotor games that arise when humans interact in a dynamic closed loop with machines. We study these games both theoretically and experimentally, deriving predictive models for steady-state (i.e. equilibrium) and transient (i.e. learning) behaviors of humans interacting with other agents (humans and machines).

Applications open for UWIN undergrad and post-baccalaureate fellowships

Applications are now open for UWIN’s final round of WRF Innovation Undergraduate and Post-baccalaureate Fellowships in Neuroengineering. 

Fellowships awarded in this cycle can be used for research during the Winter, Spring, and/or Summer 2020 quarters.  Applications are due by Monday, November 4, 2019.

These fellowships provide up to $6000 to support undergraduate and post-baccalaureate researchers committed to working in UWIN faculty labs.  More information about applying for these fellowships can be found in the links below:

UWIN Fellow Research Focus: Ezgi Yucel

The first part of our series highlighting UWIN fellow research starts with Ezgi Yucel, a third-year PhD student in the UW Department of Psychology. Ezgi is co-advised by Ione Fine and Ariel Rokem, and received a UWIN fellowship before arriving at the University of Washington. Her research combines visual perception, computational modeling, and improving the patient experience.

Retinal degenerative diseases, such as retinitis pigmentosa, cause gradual vision loss due to the death of the cone and rod photoreceptor cells in the eye. One treatment for this vision loss is a surgically placed electrical retinal implant, similar in principle to a cochlear implant. The implant electrically stimulates the remaining retinal cells to induce percepts, the perceptual experiences a patient sees.

Visual loss typical of earlier stages of retinitis pigmentosa (A) and macular degeneration (B). Both diseases lead to total blindness at later stages. Source: Action for blind people (UK).

In 2013, the Argus II became the first FDA-approved retinal implant in the United States. The Argus II includes a microelectrode array placed on the retina at the back of the eye, a camera and transmitter mounted on eyeglasses, and a video processing unit. The camera captures visual information, which the processing unit translates into a series of instructions transmitted to the microelectrode array; the array then stimulates the remaining cells in the retina. While the resulting percepts do not reproduce the exact visual information recorded by the camera, they are a step towards restoring vision.

Optimizing a perceptual experience

Ezgi’s work revolves around refining the percepts that patients see.  She does this by developing a clearer understanding of patients’ perceptual experiences and investigating how various factors affect them. Initially, the purpose of her work was to validate a model called pulse2percept, created to simulate the perceptual experience of retinal prosthesis patients. The model takes the series of pulses sent to the microelectrode array and predicts the percept a patient might see. The original model was fit to behavioral data from three patients with the older Argus I device and one patient with an Argus II device; while it applies to this small population, it needed to be validated against a larger one.

Simulation by pulse2percept predicting the perceptual experience of a retinal prosthesis patient. Source:[]
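To give a feel for what such a simulation does, here is a deliberately minimal sketch (this is not the pulse2percept API; the Gaussian-spread assumption, coordinates, and parameters are all hypothetical): each stimulated electrode contributes a blob of brightness proportional to its pulse amplitude, and the blobs sum into a predicted percept image.

```python
import numpy as np

def simulate_percept(electrode_xy, amps, grid=64, sigma=0.5):
    """Toy percept predictor: each stimulated electrode at (x, y)
    adds a Gaussian blob of brightness scaled by its amplitude.
    All units and spread parameters here are illustrative only."""
    xs = np.linspace(-2, 2, grid)
    X, Y = np.meshgrid(xs, xs)
    percept = np.zeros((grid, grid))
    for (ex, ey), a in zip(electrode_xy, amps):
        percept += a * np.exp(-((X - ex) ** 2 + (Y - ey) ** 2)
                              / (2 * sigma ** 2))
    return percept

# Two electrodes, the left one driven twice as strongly as the right.
img = simulate_percept([(-1.0, 0.0), (1.0, 0.0)], [1.0, 0.5])
```

pulse2percept's actual models are far more detailed, accounting for retinal tissue properties and temporal pulse dynamics, which is one reason real reported percepts are often elongated streaks rather than clean spots.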

Ezgi began this project by conducting behavioral experiments with blind patients using a drawing task.  In this task, researchers stimulated specific electrodes, and the patients drew recreations of their percepts. Initial tests did not yield reproducible percepts across patients, showing variability in how the Argus II functioned in each patient; there was even trouble eliciting two distinct percepts in multi-electrode stimulation trials. She continued meeting with patients and had more success on a later data collection trip, collecting percepts that were consistent across trials and even distinct shapes under multi-electrode stimulation. This gap between the first data collection trip and the second prompted an evolution in Ezgi’s research.

Considering this gap in perceptual experiences, Ezgi expanded her project to focus on possible causes of the differences. The first cause she is investigating is how physiological changes, such as retinal scarring, affect spatial vision. While there may be several reasons for the perceptual differences, investigating how insertion surgery methods and existing retinal cell damage affect percepts may inform surgeons on how best to align the electrode array for optimal percepts. This research uses computational models designed to relate the location of scarring and cell abnormalities to the resulting percepts.

Investigation into fMRI data

Alongside her work to optimize percepts across the entire population of patients, Ezgi is also investigating the influence of systemic low-frequency oscillations on the Blood Oxygenation Level Dependent (BOLD) signal. BOLD signals arise where additional blood flow marks areas of neuronal activity. Blood flow naturally has measurable oscillations throughout the body, which may bias the BOLD signal. Because the mapping from visual input to visual cortex is well characterized in humans, she is using visual field mapping to estimate the effect of these oscillations on measurements of neuronal activity.

Time delay maps due to blood flow may mimic resting state organization, taken from Tong and Frederick, 2014 (Tracking cerebral blood flow in BOLD fMRI using recursively generated regressors)

The measurements for a visual field map are taken with fMRI (functional Magnetic Resonance Imaging), which tracks how blood flow through the brain changes as neurons rest or fire. While the BOLD signal lets researchers measure brain activity during various tasks, its intrinsic noise remains a concern. Ezgi hopes that by characterizing the noise in the BOLD signal, researchers can clean BOLD measurements and obtain more precise data.
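The Tong and Frederick approach cited above searches over temporal lags of a low-frequency reference signal and regresses the best-lagged copy out of each voxel's time series. A simplified, hedged sketch of the idea (single regressor, integer lags, synthetic data; not the authors' actual implementation):

```python
import numpy as np

def remove_lfo(bold, lfo, max_lag=5):
    """Regress a low-frequency oscillation (LFO) reference out of a
    BOLD time series, trying several integer lags and keeping the
    lag whose residual has the least variance (simplified sketch)."""
    best_resid, best_lag = None, 0
    for lag in range(max_lag + 1):
        reg = np.roll(lfo, lag)                      # lagged regressor
        X = np.column_stack([reg, np.ones_like(reg)])
        beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
        resid = bold - X @ beta
        if best_resid is None or resid.var() < best_resid.var():
            best_resid, best_lag = resid, lag
    return best_resid, best_lag

# Synthetic check: a BOLD trace that is a lagged, scaled LFO plus a
# faster "neural" component should have its LFO removed at lag 3.
t = np.arange(500)
lfo = np.sin(0.1 * t)
bold = 2.0 * np.roll(lfo, 3) + 0.5 * np.sin(0.7 * t)
resid, lag = remove_lfo(bold, lfo)
```

In real data the reference LFO is measured (e.g. from peripheral recordings or the fMRI data itself), lags vary by voxel, and the regression is run per voxel, but the core operation is the lagged least-squares fit shown here.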

For the next few years, Ezgi will continue her work on the perceptual experience of retinal prosthesis patients, now focusing on scarring, and expand into the investigation of fMRI noise. She is looking forward to developing better simulations and hopes to improve the consistency of percepts created by devices such as the ARGUS II.

Recently Ezgi presented this research in a short talk at the 2019 Neural Computation and Engineering Connection (NCEC). Portions of her research were also covered in PC Magazine’s interview with her postdoctoral colleague Dr. Michael Beyeler.
