The November 2017 UWIN seminar features a fascinating pair of short talks by UWIN faculty members Sawyer Fuller and Wyeth Bair:
- “Fly-inspired visual flight control of insect-sized robots using wind sensing”
Sawyer Fuller, Assistant Professor, Department of Mechanical Engineering, University of Washington
- “Comparing shape representation in mid-level visual cortex to that in a deep convolutional neural network”
Wyeth Bair, Associate Professor, Department of Biological Structure, University of Washington
The seminar is on Wednesday, November 8th, 2017 at 3:30pm in Husky Union Building (HUB) 337. Refreshments will be served prior to the talks.
“Fly-inspired visual flight control of insect-sized robots using wind sensing” (Sawyer Fuller):
In the Autonomous Insect Robotics Laboratory at the University of Washington, one of the projects we are pursuing is giving robots the size of a honeybee the ability to fly autonomously. Larger drones can already do this, but they rely on sensors that are not available in insect-sized packages, such as GPS receivers and laser rangefinders. That leaves us with vision, the same modality used by flies. But visual processing is typically computationally intensive. I will describe research I performed on flies revealing that they overcome this limitation by combining slow feedback from vision with fast feedback from wind sensing, and discuss the ramifications for our robots.
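The idea of combining a slow but absolute signal with a fast but drifty one is a classic sensor-fusion pattern. As a purely illustrative sketch (not the lab's actual controller), a complementary filter can high-pass a fast wind-derived velocity signal and low-pass an infrequently updated vision-derived one; all signals, rates, and gains below are invented for the example:

```python
import numpy as np

def fuse(vision, wind, alpha=0.9):
    """Complementary filter: integrate fast changes from the wind
    signal (high-pass) and correct slowly toward the vision signal
    (low-pass). A constant bias in the wind sensor cancels in the
    difference term; vision supplies the absolute reference."""
    est = np.zeros_like(wind)
    est[0] = vision[0]
    for t in range(1, len(wind)):
        est[t] = alpha * (est[t - 1] + wind[t] - wind[t - 1]) \
                 + (1.0 - alpha) * vision[t]
    return est

# Demo with made-up numbers: true velocity steps from 0 to 1;
# the wind sensor is fast but carries a constant bias; vision is
# accurate but only refreshes every 10 samples (zero-order hold).
t = np.arange(200)
true_v = np.where(t < 50, 0.0, 1.0)
wind = true_v + 0.5                 # fast, biased
vision = true_v[(t // 10) * 10]     # slow, held between updates
est = fuse(vision, wind)
```

The fused estimate reacts to the step immediately through the wind difference term, while the slow vision updates remove the wind sensor's bias over time.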
“Comparing shape representation in mid-level visual cortex to that in a deep convolutional neural network” (Wyeth Bair):
Convolutional neural networks (CNNs) are currently the best-performing general-purpose image-recognition algorithms. Their design is hierarchical, not unlike the neural architecture of the ventral visual pathway in the primate brain, which underlies form perception and object recognition. We examined whether units within an implementation of “AlexNet” (Krizhevsky et al., 2012), after extensive supervised training, become selective for the boundary curvature of simple objects in an object-centered coordinate system, similar to the selectivity found for neurons in mid-level cortical area V4 (Pasupathy and Connor, 2001). I will show how the units in AlexNet compare to those in V4 in terms of shape tuning and translation invariance, and will discuss the benefits and limitations of comparing complex artificial neural networks to the brain.
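One simple way such comparisons are often made is to correlate a model unit's responses with a neuron's responses across the same stimulus set. The sketch below is illustrative only, with synthetic von Mises-style tuning curves standing in for real V4 or AlexNet responses; the stimuli and similarity metric are assumptions, not the study's actual analysis:

```python
import numpy as np

def tuning_similarity(unit_resp, neuron_resp):
    """Pearson correlation between two response vectors measured
    over the same stimulus set."""
    return np.corrcoef(unit_resp, neuron_resp)[0, 1]

# Synthetic stimuli: a boundary feature parameterized by angular
# position around an object's contour (invented for this example).
angles = np.linspace(0.0, 2.0 * np.pi, 36, endpoint=False)

def von_mises_tuning(preferred, concentration=2.0):
    """Bell-shaped tuning curve peaking at the preferred angle."""
    return np.exp(concentration * np.cos(angles - preferred))

unit = von_mises_tuning(preferred=1.0)          # model unit
neuron_near = von_mises_tuning(preferred=1.1)   # similar preference
neuron_far = von_mises_tuning(preferred=3.0)    # different preference
```

Under this toy setup, the model unit scores a higher similarity with the neuron whose preferred boundary angle is close to its own than with one tuned far away.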