Projects

Neural Network Visual Binding

Full Network

Visual binding in animal vision is the process by which visual signals are separated and grouped according to the object in the visual field that generated them. How biology accomplishes this is not well understood, though many complicated models have been proposed. With this work we demonstrate one method by which this process can be accomplished in a neural network, using biologically inspired algorithms to compute wide-field motion, color, and orientation percepts. This research is ongoing, and we hope to expand this simple network to recognize and separate more complex visual stimuli.
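
For illustration, here is a minimal sketch (not the published model) of binding by feature agreement: each pixel gets a feature vector drawn from motion, color, and orientation channels, and pixels whose features agree are grouped into the same putative object. The function name bind_by_features and the simple k-means grouping step are our own simplifications for the example, not the network described above.

    import numpy as np

    def bind_by_features(motion, color, orientation, n_objects=2, n_iter=20):
        """Toy stand-in for neural binding: cluster per-pixel feature
        vectors so that pixels whose motion, color, and orientation
        signals agree receive the same object label."""
        h, w = motion.shape
        feats = np.stack([motion.ravel(), color.ravel(),
                          orientation.ravel()], axis=1)
        # Normalize each feature channel to a comparable scale.
        feats = (feats - feats.mean(0)) / (feats.std(0) + 1e-9)
        # Plain k-means over the feature vectors.
        rng = np.random.default_rng(0)
        centers = feats[rng.choice(len(feats), n_objects, replace=False)]
        for _ in range(n_iter):
            dists = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            labels = dists.argmin(1)
            for k in range(n_objects):
                if (labels == k).any():
                    centers[k] = feats[labels == k].mean(0)
        return labels.reshape(h, w)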

This work has so far led to two publications (the first and the second, back-to-back in the same journal).

Chronic monitoring of human sleep

We continue to pursue a project using the Zeo Sleep Manager (shown below) to run a long-term study of human sleep. In contrast to traditional, highly uncomfortable one- or two-night polysomnography studies designed to diagnose a specific disorder (sleep apnea, for example), our hope is that by monitoring sleep noninvasively over longer terms (weeks to months) we can obtain new measures of human health.
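
As a rough illustration of the kind of long-term measure we are after, the sketch below summarizes one night of hypnogram data. The epoch encoding (one value per 30-second epoch: 0 = wake, 1 = REM, 2 = light, 3 = deep) is an assumption made for the example, loosely modeled on the Zeo's exported hypnogram rather than its actual file format.

    import numpy as np

    def night_summary(epochs):
        """Summarize one night of 30-second sleep-stage epochs
        (0 = wake, 1 = REM, 2 = light, 3 = deep)."""
        epochs = np.asarray(epochs)
        mins = 0.5                            # each epoch is 30 s
        time_in_bed = len(epochs) * mins
        total_sleep = (epochs != 0).sum() * mins
        return {
            "time_in_bed_min": time_in_bed,
            "total_sleep_min": total_sleep,
            "sleep_efficiency": total_sleep / time_in_bed,
            "rem_min": (epochs == 1).sum() * mins,
            "deep_min": (epochs == 3).sum() * mins,
        }

Tracking such summaries night after night, rather than for a single night in a clinic, is what could turn this into a longitudinal measure of health.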

Biomimetic Visual Navigation

Platform

Visual navigation is something that almost all animals do exceedingly well, and something that robots are exceedingly poor at replicating. Using principles of biologically inspired engineering, we are applying what has been learned about how organisms use visual information to perceive and navigate the world around them to develop systems that will enable robots to do the same more successfully.

Our current test platform is the highly modified radio-controlled car shown at top left, but we are hoping to test some of our high-speed algorithms on a flying platform in the near future. Stay tuned!

Process
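
One classic biologically inspired strategy, sketched below as a simplified example rather than our exact controller, is bee-style corridor centering: steer away from whichever side of the visual field shows the larger optic-flow magnitude, since faster image motion signals a nearer obstacle. The function steering_from_flow and its gain parameter are illustrative assumptions.

    import numpy as np

    def steering_from_flow(left_flow, right_flow, gain=1.0):
        """Bee-inspired centering: turn away from the side whose
        optic-flow magnitude is larger (i.e., the nearer wall)."""
        l = np.abs(np.asarray(left_flow)).mean()
        r = np.abs(np.asarray(right_flow)).mean()
        # Positive output steers right, negative steers left.
        return gain * (l - r) / (l + r + 1e-9)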

Rat "cognition"

How does "thinking" work? Perhaps by studying how rats "think", we can understand how it works in humans. We are focusing on an area of the mammalian brain called the hippocampus, which has homologs in insects as well. By modeling hippocampal function in a robotic framework, we hope to approach a practical understanding of machine intelligence.
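
As one small example of the hippocampal machinery we model, the sketch below implements textbook Gaussian place fields: each simulated "place cell" fires maximally when the animal occupies that cell's preferred location. The function and its parameters (sigma, max_rate) are illustrative assumptions, not details of our robotic implementation.

    import numpy as np

    def place_cell_rates(pos, centers, sigma=0.1, max_rate=20.0):
        """Gaussian place fields: the firing rate of each cell falls
        off with the distance between the animal's position and that
        cell's preferred location."""
        pos = np.asarray(pos)
        centers = np.asarray(centers)
        d2 = ((centers - pos) ** 2).sum(axis=1)
        return max_rate * np.exp(-d2 / (2 * sigma ** 2))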

Neuromorphic VLSI design ("Vision chips")

Brains have a lot to teach us about how to design engineering systems. The Higgins Lab has a long history of designing biologically inspired VLSI electronic systems. During Prof. Higgins' postdoctoral work at Caltech, he developed analog VLSI vision chips that could compute the optical flow of an image focused by a lens directly onto the chip.
You can find a list of relevant articles below.
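
For readers unfamiliar with optical flow, the sketch below shows the standard gradient-based estimate v = -I_t / I_x in one dimension. This is only a software illustration of the quantity such chips compute; the chips themselves used analog circuitry, and this is not a description of their particular algorithm.

    import numpy as np

    def flow_1d(frame_prev, frame_next, dt=1.0, eps=1e-6):
        """Gradient-based 1-D image velocity, v = -I_t / I_x,
        evaluated only where the spatial gradient is reliable."""
        frame_prev = np.asarray(frame_prev, dtype=float)
        frame_next = np.asarray(frame_next, dtype=float)
        I_x = np.gradient(frame_next)          # spatial derivative
        I_t = (frame_next - frame_prev) / dt   # temporal derivative
        return np.where(np.abs(I_x) > eps, -I_t / I_x, 0.0)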

Rule-based learning

Prof. Higgins' Ph.D. research concerned the information-theoretic extraction of conjunctive rules from databases, applied both to discrete classification and to function approximation. The function approximator was used to learn a nonlinear control system. You can find more information below. This kind of research could lead to computer systems that both learn and can explain what they have learned!
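
As a minimal sketch of the idea (not the thesis algorithm itself), the code below scores a candidate conjunctive rule by its information gain: how much knowing whether a record satisfies the rule reduces uncertainty about its class label. The rule predicate and record format are assumptions for the example.

    import math
    from collections import Counter

    def entropy(labels):
        """Shannon entropy of a list of class labels, in bits."""
        n = len(labels)
        return -sum((c / n) * math.log2(c / n)
                    for c in Counter(labels).values())

    def information_gain(records, labels, rule):
        """How much a conjunctive rule (a predicate over a record)
        reduces uncertainty about the class label."""
        n = len(labels)
        covered = [y for x, y in zip(records, labels) if rule(x)]
        rest = [y for x, y in zip(records, labels) if not rule(x)]
        conditional = sum((len(part) / n) * entropy(part)
                          for part in (covered, rest) if part)
        return entropy(labels) - conditional

    # Example: score the conjunctive rule "red AND size > 3" on toy data.
    records = [{"color": "red", "size": 5}, {"color": "red", "size": 2},
               {"color": "blue", "size": 5}, {"color": "blue", "size": 1}]
    labels = ["yes", "no", "no", "no"]
    gain = information_gain(records, labels,
                            lambda x: x["color"] == "red" and x["size"] > 3)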

Parallel computing

In the ancient days, while working for IBM, Prof. Higgins worked on a novel hardware routing system for a message-passing parallel computer. You can find information on an associated patent below.
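
By way of illustration only (this is not the patented mechanism), the sketch below shows dimension-ordered "e-cube" routing, a classic deadlock-free scheme for message-passing machines with hypercube topology: at each hop, flip the lowest-order address bit in which the current node differs from the destination.

    def next_hop(current, dest):
        """Dimension-ordered (e-cube) routing on a hypercube whose
        nodes are numbered so that neighbors differ by one bit."""
        diff = current ^ dest
        if diff == 0:
            return current        # message has arrived
        lowest = diff & -diff     # isolate lowest-order differing bit
        return current ^ lowest

    # Example: route from node 0b000 to node 0b101 on a 3-cube.
    node = 0b000
    while node != 0b101:
        node = next_hop(node, 0b101)   # visits 0b001, then 0b101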

Biomedical Projects

From time to time, the Higgins Lab develops something that has immediate application to medicine. Some of those projects are described below.

Check out the NROS 415 laboratory!

Check out the new undergraduate course NROS 415: for the first time ever at the UofA, undergraduates are getting to record signals from living brains.

Honeybee Speed Estimation

The goal of this research is to describe mathematically how the brain processes and uses sensory information to generate appropriate behavioral responses. These mathematical models can then be used as a basis for understanding higher-level behaviors or for designing more intelligent robotic systems.

The human brain contains tens of billions of neurons (the functional cells of the brain), making it a dauntingly large and complex structure to study. Because of this complexity, we study the honeybee, an organism with a much smaller brain (around one million neurons) that still exhibits a variety of complex social, visual, and navigational behaviors. Of particular interest to us is the "waggle dance", in which a foraging honeybee communicates the location of a distant food source to other honeybees in the hive.

Specifically, our research looks at how honeybees estimate the distance they have traveled based solely on a visual estimate of their flight speed. To accomplish this goal we combine information from multiple levels of analysis, from biophysics to neuroanatomy, to create a mathematical model of early visual processing. We then refine the model by studying the responses of tethered honeybees in a virtual flight arena, and the refined model can finally be programmed into a robotic system.
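
A minimal sketch of the odometry idea, under the simplifying assumption that the distance to surrounding surfaces is roughly constant: integrating perceived image speed over time yields a quantity proportional to the distance flown. The function visual_odometer is illustrative, not our full model of early visual processing.

    def visual_odometer(image_speeds, dt=0.01):
        """Honeybee-style visual odometry: accumulate perceived image
        motion (e.g., degrees/s sampled every dt seconds); the running
        total grows in proportion to distance traveled when the
        surroundings stay at a roughly constant distance."""
        return sum(abs(v) * dt for v in image_speeds)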



Dipteran Elementary Motion Detection

In a collaborative project with the Strausfeld laboratory, we have developed a novel computational neuronal model of elementary motion detection, based on anatomical, physiological, and behavioral observations of flies, that serves as a working functional hypothesis for how the underlying neuronal machinery may be organized. We are currently augmenting the existing elementary motion detector (EMD) model with two important stages, working toward a more realistic model of the insect visual system. In the optics stage, light is collected by each facet of the simulated compound eye and focused onto photoreceptors, which transduce and further process it. A mathematical model of the photoreceptor stage is being used to simulate contrast adaptation under steady-state and dynamic conditions.
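
For context, here is a sketch of the classic Hassenstein-Reichardt correlator at the heart of such EMD models: each photoreceptor signal is delayed by a first-order low-pass filter and correlated with its undelayed neighbor, and the opponent difference of the two mirror-symmetric half-detectors signs the direction of motion. The time constant tau and the discrete-time form are illustrative choices, not the parameters of our augmented model.

    import numpy as np

    def reichardt_emd(left, right, dt=1e-3, tau=0.05):
        """Hassenstein-Reichardt elementary motion detector for two
        neighboring photoreceptor signals sampled every dt seconds."""
        left = np.asarray(left, dtype=float)
        right = np.asarray(right, dtype=float)
        alpha = dt / (tau + dt)              # first-order low-pass gain
        dl = np.zeros_like(left)             # delayed left signal
        dr = np.zeros_like(right)            # delayed right signal
        for t in range(1, len(left)):
            dl[t] = dl[t - 1] + alpha * (left[t] - dl[t - 1])
            dr[t] = dr[t - 1] + alpha * (right[t] - dr[t - 1])
        # Opponent subtraction: positive for one direction of motion,
        # negative for the other.
        return dl * right - dr * left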

The Mothbot

The field of neuroscience is moving toward understanding how sensory systems compute under closed-loop control. It is important to step away from open-loop experiments, i.e., experiments in which an animal cannot influence its own sensory inputs, because in the real world sensory neurons are passengers on a moving body whose sensory inputs are intimately related to its behavior. The challenge in performing these experiments under natural conditions is that conventional electrophysiology equipment is too bulky to be placed on a freely behaving animal. To solve this problem, we have designed a robotic electrophysiology instrument whose velocity is determined by bioelectrical signals from an animal, in our case hawkmoths and flies (model organisms for visual motion detection, olfaction, and insect flight). This robotic instrument allows us to perform electrophysiological experiments while a moth is onboard and controlling the robot, which, in engineering terms, closes the loop. With this instrument we will characterize visual motion detection neurons and investigate the use of these neurons as biosensors for robots.
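
As a sketch of what "closing the loop" means in software, the example below maps the recent firing rate of a recorded neuron to a turn-rate command for the robot. The window length, gain, and baseline firing rate are assumptions made for illustration, not the instrument's actual control law.

    def steering_command(spike_times, t_now, window=0.2,
                         gain=0.5, baseline=40.0):
        """Closed-loop sketch: firing above the baseline rate steers
        the robot one way, firing below it steers the other."""
        recent = [t for t in spike_times
                  if t_now - window <= t <= t_now]
        rate = len(recent) / window        # spikes/s in the window
        return gain * (rate - baseline) / baseline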



Related articles