1 April 2006
Sight System mimes housefly
The real-time machine vision dilemma pits resolution against throughput
By William Harman, Steven Barrett, Cameron Wright, and Michael Wilcox
Real-time digital imaging for machine vision breaks down when it must run on control systems that employ low-power processors.
Development of a real-time analog machine vision system is the focus of research taking place at the University of Wyoming. The new vision system uses the biological vision system of the common housefly as its model.
The goal is the implementation of new vision system sensors for motion detection and object tracking.
Development of a single sensor, representing a single facet of the fly's eye, is complete. The next step incorporates this sensor into an array capable of detecting objects and tracking motion in 2-D space.
This system preprocesses incoming image data, so minimal processing is needed to determine the location of a target object. The nature of the sensors in the array produces hyperacuity, which eliminates the resolution issues typical of digital vision systems.
Part of the system relies upon the biological traits of the fly eye. The developers had to create an analog-based sensor that mimics the characteristics of interest in the biological vision system. This array of sensors works toward solving real-world machine vision issues.
Circumventing the issues
Traditional machine vision systems typically employ charge-coupled device (CCD) or complementary metal oxide semiconductor (CMOS) cameras.
While these devices can provide an excellent front end for image acquisition, the large amount of data they acquire must be transmitted, generally serially, to a computer for processing.
The sheer volume of data traveling this way makes real-time analysis of an image difficult, based solely on the throughput limits inherent in these systems.
One way to circumvent these issues is to limit the amount of data sent to the processing/analysis portion of the system by reducing the number of photoreceptive components in the CCD or CMOS device. This reduction solves the throughput problem but introduces a loss of resolution.
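A back-of-the-envelope calculation shows why throughput dominates. The figures below (a 640 x 480, 8-bit, 30 frame/s camera) are assumed values for illustration, not specifications from the research:

```python
# Data rate for a hypothetical 640x480, 8-bit, 30 frame/s camera.
# All values are assumed, illustrative numbers.
width, height = 640, 480
bits_per_pixel = 8
frames_per_second = 30

bits_per_frame = width * height * bits_per_pixel
bits_per_second = bits_per_frame * frames_per_second

print(f"{bits_per_second / 1e6:.1f} Mbit/s")  # 73.7 Mbit/s
```

At roughly 74 Mbit/s of raw pixel data, a low-power embedded processor has little time left for analysis; halving the resolution in each dimension quarters the data rate, but at the cost the article describes next.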
That loss of resolution can severely compromise motion detection or object tracking in applications that rely on these vision systems. The need for a vision system that circumvents both the resolution and the throughput issues in the image capture/processing/analysis cycle, on low-powered embedded controllers, led to the study and development of a machine vision system based upon the physical structure and functional characteristics of the eye of the common housefly, Musca domestica.
Photoreceptive cells overlap
Initial research into the structure and function of the fly eye revealed that the ommatidium of each facet consists of eight photoreceptive elements, R1-R8.
The photoreceptors R1-R6 lie in a near-hexagonal pattern and function as independent phototransducers that convert light photons into an ionic current.
These photoreceptors connect to the monopolar cells L1 and L2 in a configuration where L1 connects proximally to the R1, R2, and R6 neurons and distally to the R3, R4, and R5 neurons.
The L2 monopolar cell connects in the opposite configuration. The entire grouping of these cells forms a cartridge-like structure, and this cartridge serves as the project model.
The remaining photoreceptors, R7 and R8, lie outside this hexagonal pattern and connect directly to the medulla, so they are not relevant to this pursuit.
The team determined the responses of the individual photoreceptors and the underlying monopolar cells by sweeping a laser line at constant velocity across the field of view of an eye facet. As the laser passes across a photoreceptive cell, the light's angle of incidence increases until the beam is perpendicular to the cell, at which point the cell receives the maximum concentration of photons from the laser and, in theory, gives its greatest response to the stimulus. Monitoring a photoreceptive cell during this test showed the response is Gaussian in nature.
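The Gaussian response measured in the sweep test can be sketched as a simple function of the angle of incidence. The acceptance-angle spread (sigma) below is an assumed value for illustration, not a figure from the measurements:

```python
import math

def photoreceptor_response(angle_deg, axis_deg, sigma_deg=2.0):
    """Gaussian model of a single photoreceptor's response to a light
    source at a given angle of incidence. The spread sigma_deg is an
    assumed value, not taken from the original measurements."""
    d = angle_deg - axis_deg
    return math.exp(-d * d / (2.0 * sigma_deg ** 2))

# Response peaks when the beam is perpendicular to the cell
# (angle of incidence equals the cell's optical axis) and falls
# off symmetrically on either side.
for angle in (-4, -2, 0, 2, 4):
    print(angle, round(photoreceptor_response(angle, 0.0), 3))
```

The symmetric falloff on either side of the optical axis is what makes the overlapping-field trick in the next section work.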
A notable feature of vision systems such as that of Musca domestica is a characteristic called hyperacuity.
Hyperacuity allows a vision system to achieve resolution finer than the number or spacing of its photoreceptive elements would suggest. In Musca domestica, the fields of vision of the photoreceptive cells overlap, which ensures there are no gaps within the vision field of a particular cartridge. The collective responses of the photoreceptive neurons conduct to the monopolar cells.
The response of the monopolar cells contains position information for the object as it passes across the lens of the facet under test. Because this information is first realized in the retina (ommatidium) of the fly eye, using only the photoreceptive cells and the monopolar cells and without any signal processing in the brain, the entire cartridge acts as a real-time analog preprocessing stage for the fly's vision system.
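A toy model shows how overlapping Gaussian fields yield position resolution far finer than the receptor spacing. The geometry below (receptor axes one unit apart, unit field width) is assumed for illustration and is not the authors' circuit:

```python
import math

SIGMA = 1.0                        # assumed Gaussian field width
LEFT, MID, RIGHT = -1.0, 0.0, 1.0  # assumed receptor axes, one unit apart

def response(x, center):
    """Gaussian response of one receptor to a point stimulus at x."""
    return math.exp(-((x - center) ** 2) / (2 * SIGMA ** 2))

def locate(x_true):
    """Recover the stimulus position from the overlapping fields.
    For Gaussian responses, the log-ratio of the two outer receptors
    is exactly linear in position, so resolution is limited by noise
    rather than by the receptor spacing -- a simple stand-in for the
    hyperacuity that the fly's overlapping photoreceptors achieve."""
    r_left = response(x_true, LEFT)
    r_right = response(x_true, RIGHT)
    return (SIGMA ** 2 / (RIGHT - LEFT)) * math.log(r_right / r_left)

print(round(locate(0.137), 6))  # 0.137, well below the one-unit spacing
```

No lookup tables or iteration are needed; the position falls straight out of the ratio of two analog signals, which is the essence of the cartridge acting as an analog preprocessing stage.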
Match biological signals
The synthesis of a vision system that would mimic the vision system of Musca domestica began with the creation of an imaging device that could collect imaging data from its surroundings much like any other vision system.
The primary requirements were that the signals produced by the synthesized device closely match the biological signals obtained in the earlier testing and that the device be capable of achieving hyperacuity.
This initial biomimetic vision system only mimics the overall response of the L2 monopolar cell based upon the responses of the simulated R3, R4, and R5 photoreceptor cells.
Attention first went to choosing a photoreceptive component with an electrical response to varying light similar to that of the photoreceptive neurons in the fly eye. Next, the research team determined fiber optic cable was the best means of channeling light photons directly onto the photoreceptive elements.
The team constructed an enclosure that holds the fiber optic elements and houses a central lens simulating the corneal facet lens of the fly eye. To focus light photons onto the fiber optic cable, which transmits those photons to the photoreceptive element, a ball lens was mounted onto the incident end of each cable. To achieve hyperacuity, the cables with their installed ball lenses were placed in an arrangement similar to that of the R3, R4, and R5 photoreceptive neurons in the fly eye. Placed together in this manner, the vision fields of the ball lenses overlap much as they do in the fly.
The overall response from the three photoreceptors in the sensor cartridge then passes to an analog multiplier that simulates the monopolar cell L2. The multiplier combines the three signals, where M represents the middle photoreceptive element (R4 neuron), L represents the left photoreceptive element (R3 neuron), and R represents the right photoreceptive element (R5 neuron). The result of this calculation represents the synthesized response of the L2 monopolar cell found in the fly eye.
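The article does not reproduce the multiplier's exact expression. As one plausible stand-in, a normalized difference of the outer responses produces an output that sweeps smoothly through zero as a target crosses the middle receptor; this is an assumed combination for illustration, not the authors' circuit:

```python
def l2_response(L, M, R):
    """Assumed stand-in for the unspecified analog combination of the
    three photoreceptor signals: the difference of the outer responses
    (R3 vs. R5), normalized by the total. Positive when the target is
    left of the R4 axis, zero when centered, negative to the right.
    This is a hypothetical formula, not the published one."""
    return (L - R) / (L + M + R)

# Target left of center: left element strong, right element weak.
print(round(l2_response(0.9, 0.6, 0.2), 3))  # 0.412
# Target centered on the middle (R4) element: output is zero.
print(l2_response(0.5, 1.0, 0.5))            # 0.0
```

Whatever the actual expression, the point stands that a single analog stage converts three raw photoreceptor signals into a position-bearing output with no digital processing.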
Test data from the newly created sensors and tracking algorithm show the vision system of Musca domestica can be synthesized and that it is a viable solution to the machine vision problem.
Further testing on the sensors is underway to determine whether adding another lens to each photoreceptor, focusing the light from the fiber optic onto the photo element, can refine the response.
Once this testing is over, research will commence on adding photoreceptors to emulate the R1, R2, and R6 photoreceptive neurons, along with circuitry to emulate the L1 and L4 monopolar cells. The conclusion of this research will see the new vision system sensors implemented as a front-end image-gathering device for a motion detection/object tracking machine vision system.
The design and fabrication of an analog-based real-time machine vision system has made great strides with the research completed thus far. By successfully mimicking a proven biological vision system such as that of Musca domestica, this new sensor will be capable of detecting motion and, eventually, of serving as a vision system for object tracking with minimal hardware and greatly reduced processing time compared to CCD and CMOS vision systems.
About the Authors
William Harman (firstname.lastname@example.org) has degrees in electrical engineering and works at the Naval Air Warfare Center – Weapons Division. Steven Barrett and Cameron Wright are professors in the Department of Electrical and Computer Engineering at the University of Wyoming. Michael Wilcox is part of the Department of Biology at the United States Air Force Academy. A grant from the National Institutes of Health, Centers of Biomedical Research Excellence Program of the National Center for Research Resources, helped produce this piece.
The style and the family
A complementary metal oxide semiconductor (CMOS) is a major class of integrated circuits.
CMOS chips include microprocessors, microcontrollers, static RAM, and other digital logic circuits.
The central characteristic of the technology is that it uses significant power only when its transistors are switching between on and off states.
Consequently, CMOS devices use little power and do not produce as much heat as other forms of logic. CMOS also allows a high density of logic functions on a chip.
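The switching-only power draw follows the standard dynamic-power relation P = a * C * V^2 * f. The numbers below are assumed, illustrative values for a small 2006-era device, not measurements:

```python
# Dynamic (switching) power of a CMOS circuit: P = a * C * V^2 * f.
# All numbers are assumed values chosen only to illustrate the formula.
activity = 0.1        # fraction of the switched capacitance toggling per cycle
capacitance = 1e-9    # total switched capacitance, farads
vdd = 3.3             # supply voltage, volts
freq = 100e6          # clock frequency, hertz

power_watts = activity * capacitance * vdd ** 2 * freq
print(f"{power_watts:.4f} W")  # 0.1089 W
```

Because power scales with the square of the supply voltage and linearly with frequency, an idle CMOS circuit (few transitions, so activity near zero) dissipates almost nothing, which is the advantage the sidebar describes.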
CMOS refers to both a particular style of digital circuitry design and the family of processes used to implement that circuitry on integrated circuits (chips). CMOS logic on a CMOS process dissipates less energy and is denser than other implementations of the same functionality.
As this advantage has grown and become more important, CMOS processes and variants have come to dominate, so that as of 2006, the vast majority of integrated circuit manufacturing by dollar volume is on CMOS processes.
CMOS logic uses complementary pairs of metal-oxide-semiconductor field-effect transistors to implement logic gates and other digital circuits found in computers, telecommunications, and signal processing equipment.
Typical commercial CMOS products are integrated circuits composed of millions (or hundreds of millions) of transistors of both types on a rectangular piece of silicon between 0.1 and 4 square centimeters in area.
A matter of capacitor
A charge-coupled device (CCD) is a sensor for recording images. It is an integrated circuit containing an array of linked, or coupled, capacitors.
Under the control of an external circuit, each capacitor can transfer its electric charge to one or another of its neighbors.
CCDs containing grids of pixels serve as light-sensing devices in digital cameras, optical scanners, and video cameras. They commonly respond to 70% of the incident light, making them more efficient than photographic film, which captures only about 2% of the incident light.
An image is projected via a lens onto the capacitor array, causing each capacitor to accumulate an electric charge proportional to the light intensity at that location. A one-dimensional array, used in line-scan cameras, captures a single slice of the image, while a two-dimensional array, used in video and still cameras, captures the whole image or a rectangular portion of it.
Once the array has been exposed to the image, a control circuit causes each capacitor to transfer its contents to its neighbor. The last capacitor in the array dumps its charge into an amplifier that converts the charge into a voltage.
By repeating this process, the control circuit converts the entire contents of the array to a varying voltage, which it samples, digitizes, and stores in memory.
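The serial "bucket brigade" readout described above can be sketched in a few lines. The function name and the toy charge values are assumptions for illustration:

```python
def ccd_readout(charges):
    """Toy model of CCD readout: on each shift, every capacitor passes
    its charge to its neighbor and the last capacitor dumps its charge
    into the output amplifier, producing one sample per shift."""
    row = list(charges)
    samples = []
    for _ in range(len(row)):
        samples.append(row.pop())  # last capacitor dumps to the amplifier
        row.insert(0, 0)           # remaining charges shift one place over
    return samples

print(ccd_readout([10, 20, 30]))  # [30, 20, 10]
```

This one-sample-per-shift serialization is exactly why CCD throughput grows with pixel count, the bottleneck the main article sets out to avoid.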
Stored images can then be transferred to a printer, storage device, or video display.
Eyeballs of steel
Machine vision is the application of computer vision to industry and manufacturing.
While computer vision focuses on machine-based image processing, machine vision most often requires digital input/output devices and computer networks to control other manufacturing equipment such as robotic arms.
Machine vision is a subfield of engineering that encompasses computer science, optics, mechanical engineering, and industrial automation.