Success through the lens
Automotive company tells vision success story, gives advice to vision systems newcomers
By Evan Wollak and Brian King
Industrial vision systems are versatile; so versatile, in fact, BorgWarner Transmission Systems in Bellwood, Ill., uses vision systems to detect visual defects, define parts for a robotic pickup, confirm unique identification markings, and, most importantly, stop assembly lines when defects are present.
If there is not an easy way to mechanically or electrically detect the defect or process variation, consider a vision system. There are many products targeted toward specific applications (low-cost simple cameras, high-resolution cameras, or cameras with serious processing power). Using the right camera for the job will help bring the solution to the plant floor quickly and under budget.
The best candidates for vision inspection are often parts presented in a random orientation. The cost of engineering a mechanical system to orient parts and present them to a mechanical poka-yoke (mistake-proofing) device is high; however, the time needed to develop and implement such a system is usually the larger constraint. You can mount a camera system with an off-the-shelf lens and light to a machine and wire it into an existing electrical system within a day or so, and for simple applications you can complete the camera programming in an hour. This quick deployment makes vision ideal for responding to customer complaints or to product launches with tight timelines.
Vision systems are also useful to reduce changeover times. Machines that have mechanical inspections and run multiple part types often need part-specific tools. A camera system can change the inspection parameters automatically and drastically reduce setup times.
Any poka-yoke device that performs coordinate transformations requires regular rabbit testing or calibration to remove variation from the process. Whenever a vision measurement is compared to limits defined in millimeters or inches, check the camera’s definition of a millimeter or inch. The procedure differs for each system, but it ensures the system performs as intended. Just like a micrometer or a coordinate measuring machine, a camera that reports a standard measurement of length needs calibration.
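As a rough illustration of what "checking the camera's definition of a millimeter" amounts to, the sketch below derives a millimeters-per-pixel scale from a reference feature of known length and applies it to a measurement. The function names and numbers are hypothetical, not from any particular vision system's software:

```python
# Hypothetical sketch of a camera scale check: derive mm-per-pixel from a
# certified reference feature, then apply it to a measurement. Names and
# values are illustrative only.

def calibrate_scale(reference_mm, reference_px):
    """Millimeters per pixel, from a feature of known physical length."""
    if reference_px <= 0:
        raise ValueError("reference feature not detected")
    return reference_mm / reference_px

def measure_mm(length_px, mm_per_px):
    """Convert a pixel measurement to millimeters at the current scale."""
    return length_px * mm_per_px

# A 25.00 mm gauge feature imaged across 500 pixels gives 0.05 mm/pixel,
# so a feature spanning 320 pixels measures 16.0 mm.
scale = calibrate_scale(25.0, 500)
width = measure_mm(320, scale)
```

If the gauge feature no longer measures its certified length, the scale has drifted and the camera needs recalibration, just as a micrometer would.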
Within a normal production setup, use “rabbit parts” to test the system. Rabbit parts are known defects presented to the vision system. If the system performs as intended, the rabbit parts are marked as defects and separated from normal production. If the rabbit parts are not flagged as defects, the production system is shut down and evaluated. All parts produced on the suspect line since the last good rabbit test are quarantined. The interval for rabbit testing is determined by how sensitive the vision system is to variation, the shipping schedule (you can’t quarantine parts that have already shipped), and past lessons learned. The rabbit testing process is audited periodically to confirm it is functioning properly, and the rabbit samples are treated like a gauge with a regular calibration schedule.
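The rabbit-test decision rule above can be sketched in a few lines. This is an illustrative simplification, with hypothetical names and serial numbers, of the bookkeeping a real line-control system would perform:

```python
# Illustrative sketch of the rabbit-test decision: if the known-bad
# "rabbit" part is NOT flagged as a defect, stop the line and quarantine
# every part made since the last good test. All names are hypothetical.

def evaluate_rabbit_test(rabbit_flagged_as_defect, parts_since_last_good_test):
    """Return (line_ok, parts_to_quarantine)."""
    if rabbit_flagged_as_defect:
        return True, []   # the system caught the defect; keep running
    return False, list(parts_since_last_good_test)

# A failed rabbit test: the line stops and all three serials are held.
ok, hold = evaluate_rabbit_test(False, ["SN100", "SN101", "SN102"])
```

The quarantine list is why the testing interval matters: the longer the interval, the more parts are at risk when a rabbit test fails.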
BorgWarner uses one camera to define the position of a part before a robot loads it into a die. The parts travel down a conveyor in a random orientation. The camera determines the X-Y position of the part and sends the robot to pick it up. The camera also identifies the theta (angle) offset so the robot can orient the part properly in the die. This system runs five different parts without any setup changes; the camera identifies which part is presented, allowing any of the five parts to run at any time.
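The guided-pick idea can be sketched numerically. The simple scale-and-offset transform below, and all of its names and values, are illustrative assumptions; a real cell would use a calibrated camera-to-robot transform, not this shortcut:

```python
# Hedged sketch of turning a camera's part fix (X, Y in pixels, theta in
# degrees) into a robot pick-and-place command. The scale-and-offset
# transform and all names here are hypothetical.

def camera_to_robot(x_px, y_px, mm_per_px, origin_mm=(0.0, 0.0)):
    """Map image pixel coordinates to robot-frame millimeters."""
    return (origin_mm[0] + x_px * mm_per_px,
            origin_mm[1] + y_px * mm_per_px)

def reorient_angle(part_theta_deg, die_theta_deg=0.0):
    """Rotation the robot applies so the part seats correctly in the die."""
    return (die_theta_deg - part_theta_deg) % 360.0

# A part found at pixel (100, 200), rotated 30 degrees, with a 0.1 mm/pixel
# scale and a 50 mm frame offset:
x_mm, y_mm = camera_to_robot(100, 200, mm_per_px=0.1, origin_mm=(50.0, 50.0))
turn = reorient_angle(30.0)   # rotate 330 degrees (i.e., -30)
```

The theta correction is what lets randomly oriented parts on the conveyor arrive correctly oriented in the die.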
The company also uses a camera to check every assembly coming off a particular assembly line before it is loaded into the packaging. This camera system checks for any one of seven defects and alerts the main assembly line to segregate the suspect part. This system has allowed us to easily offer our customers different varieties of the same product family and maintain a 100% quality rating. The machine tells the camera what permutation of that standard product it has assembled, and the camera checks to make sure the part was assembled correctly and with the right components.
It is tough to find a good mechanical system to detect defects in parts such as a caged ball-bearing assembly, where each rolling element has the potential to be inserted incorrectly or with the wrong part. The best solution for BorgWarner included two high-speed, high-resolution vision systems working together to inspect each rolling element and check its orientation.
The key to a good vision system is providing the camera with an image that clearly shows the objects of interest with good contrast. The better the image the camera receives, the better the decision. The basics of capturing a good image are the same as those used to take a good picture at a wedding: understanding how light reflects off the part, how the lens and focal length work together, and how much light is let into the camera. The camera’s program should also be easy to follow and help the user diagnose problems, especially for people who are not vision experts. Finally, resist the pressure to use a given camera system to detect objects beyond its original scope. These projects are often the most troublesome. Just because an object is in the camera’s field of view does not mean there is good contrast between its good and bad states.
Find a partner you can trust as you begin. Start simple and build toward more complex systems. Understand that the machine is a set of systems that work together; the camera is only one part of the solution. The programmable logic controller, human-machine interface, and other systems must work together for the machine to function as intended. Consider a system that automatically calls for calibration after every 1,000 pieces and a rabbit sample every 500 pieces. The integration required for the machine to inform the operator and disposition parts correctly takes more than a good camera program; such a system can only be deployed by a team familiar with all the machine systems.
In considering traceability, know that regulated industries require keystrokes to be logged and operators identified through a secure log-in. In other cases, the sensor must have the intrinsic ability to report whether it has been moved or adjusted. These tend to be higher-level systems.
With throughput rate, the inspection decision must arrive before the next part does. Common smart cameras today may acquire 60 images per second, and GigE cameras may acquire images nearly three times that fast, but the processor may need 30 to 300 milliseconds to calculate a result, depending on the complexity of the inspection. True throughput is therefore the camera’s acquisition time plus the program’s processing time.
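That throughput rule is simple arithmetic, sketched below with illustrative numbers (a 60 fps camera is roughly 16.7 ms per frame; the line rates are hypothetical):

```python
# Back-of-the-envelope throughput check: the inspection result must be
# ready before the next part arrives. All numbers are illustrative.

def keeps_pace(parts_per_minute, acquisition_ms, processing_ms):
    """True if acquisition plus processing fits in the per-part window."""
    window_ms = 60_000.0 / parts_per_minute
    return (acquisition_ms + processing_ms) <= window_ms

# A 60 fps smart camera (~16.7 ms/frame) running a 100 ms inspection
# keeps up with 300 parts/min (a 200 ms window) but not 600 parts/min.
print(keeps_pace(300, 16.7, 100))   # True
print(keeps_pace(600, 16.7, 100))   # False
```

A fast camera paired with a slow inspection program still misses the window, which is why the program's processing time belongs in the throughput budget.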
Resolution (pixel density per millimeter) and bit depth (the number of brightness gradations between 0% and 100% intensity) are typical performance metrics. Some applications require color, IR, or X-ray spectral response rather than basic monochrome capability, and some higher-end or specialty cameras are only available for PC-based systems.
Inspection tasks vary in how computation-intensive they are. Common tasks include presence/absence checking, gradient or brightness mapping, length or angle measurement, part location in X, Y, and angle, and template comparison.
In general, the more variable the part (appearance, position, composition, etc.) the longer it may take to inspect due to pre-processing tasks needed to normalize the appearance and orientation of the successive parts.
Certain vision processors are equipped with protocols to communicate efficiently with various machine controllers and computers.
Size or environmental concerns occasionally drive the selection decision as well. Typically, an aggressive environment drives the decision toward hardened cameras that can survive the rigors of the application. Sometimes space constraints require small or oddly shaped cameras.
Vision software vs. hardware
Before leaning on vision software, users benefit from engineering the physical station to simplify the sensing task. The physical station includes the method of conveyance or nesting of the inspected part, the lighting that illuminates the part for the camera, the lens and lens filtering that focus the image, and the camera’s orientation and rigidity. Make every effort to present the parts in a consistent condition to minimize unwanted variability.
After all these physical and optical efforts are exhausted, the software should:
Interact with the machine or extended systems
Inform the operator
Inspect the parts
Flag ambiguous or unexpected operating conditions
Run self-checks on its own performance through calibration and periodic challenge testing using rabbit parts
Machine vision future
Three elements drive the future of machine vision, especially in a lean manufacturing scenario: poka-yokes, cost containment, and continuous improvement.
Higher computational horsepower and more experimentation in the user community will make machine vision solutions increasingly suitable for poka-yoke and cost reduction applications. Cost containment, at least in the present economic environment, will limit the spread of the technology to the proposals with the most impressive return on investment.
ABOUT THE AUTHORS
Evan Wollak (firstname.lastname@example.org) is senior manufacturing engineer at BorgWarner Transmission Systems in Bellwood, Ill. Brian King (email@example.com) is a technical manager at Industrial Eye, a machine vision integrator in Plainfield, Ill.
By Brian Dean, Cameron Wright, and Steven Barrett
Fly-inspired vision sensors have been shown to have many interesting qualities, such as hyperacuity (the ability to resolve movement beyond the theoretical limit), extreme sensitivity to motion, and, in software simulation, image edge extraction, motion detection, and orientation and location of a line. Many of these qualities are beyond the ability of traditional computer vision sensors such as charge-coupled device (CCD) arrays. To obtain these characteristics, a prototype fly-inspired sensor has been built and tested in a laboratory environment and shows promise.
Researchers at the Wyoming Information, Signal Processing, and Robotics (WISPR) Laboratories are working on a novel sensor that is based on the visual system of the common house fly, Musca domestica. This research effort has examined many interesting characteristics of the fly’s vision system. The most significant of these characteristics is the overlapping response of the fly’s photoreceptors. This overlapping response allows the fly’s visual system to have a movement resolution beyond the theoretical limit. This phenomenon is called hyperacuity. The overlapping response is also partially responsible for the fly’s extreme sensitivity to motion. The goal of the research at WISPR labs is to gain insight into the fly’s visual system and apply this insight in the design of a sensor that also possesses the beneficial characteristics of this system.
This sensor has potential for a wide variety of military, commercial, and medical applications. We believe the sensor will serve as an augmentation for traditional CCD digital imaging in a hybrid configuration or operate in a standalone role. In a military application, the sensors would excel at helping an autonomous robot navigate around structures and obstacles, and would be a welcome addition to unmanned aerial vehicles for detecting and autonomously avoiding power lines and antennas. Commercially, the sensor would be well suited to aligning parts during automated assembly, inspecting label placement on bottles, and even inspecting railroad ties for damage and misalignment. There is also considerable interest in developing autonomous vehicles in the automotive industry, and this sensor would be very effective at keeping a vehicle on the track of a visible line. In the medical arena, we are already testing the sensor as an assist device for wheelchair navigation and control. Although the sensor was initially developed for visible wavelengths, its operation could be extended to other bands such as the infrared.
Prototypes have already been built that demonstrate the hyperacuity seen in the fly. These prototypes have only been tested in a laboratory environment where lighting conditions are tightly controlled. This paper discusses research with the goal of allowing these prototypes to be tested in environments where lighting cannot be controlled.
It was found that the design that gave the best results mimicked the fly’s ambient light adaptation system. To achieve light adaptation, the fly uses two processes simultaneously. First, the fly has the ability to adjust how its photoreceptors respond to light, and second, the fly conditions the signals that come from these photoreceptors. The latter of these two is the process that can be mimicked in electronic hardware. This process is characterized by a log transform-subtraction-multiplication strategy. In this strategy, the signals received from the photoreceptors are first log transformed, and then an average is taken over the different facets of the fly’s eye. This average is subtracted from the log transformed input, effectively removing the mean value of illumination due to the background lighting.
Once this mean has been removed from all facets, a multiplicative step occurs to increase or decrease the sensitivity (i.e., gain) of individual photoreceptors. This multiplicative step is most important in low-light conditions but has a minimal effect at higher intensities of background illumination. Therefore, the subtractive step dominates the light adaptation process.
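The log transform-subtraction-multiplication strategy can be sketched numerically. The intensity values, the use of a simple mean as the "average over facets," and the gain setting below are all illustrative assumptions:

```python
import math

# Numerical sketch of the log transform-subtraction-multiplication
# strategy. Facet intensities, the simple mean, and the gain value are
# illustrative; the fly's actual averaging and gain control are more
# elaborate.

def adapt_facets(facet_intensities, gain=1.0):
    logs = [math.log(i) for i in facet_intensities]   # log transform
    mean = sum(logs) / len(logs)                      # average over facets
    return [gain * (v - mean) for v in logs]          # subtract, then scale

# The same contrast pattern under dim and bright backgrounds yields the
# same adapted output, because the background mean is removed.
dim    = adapt_facets([1.0, 2.0, 4.0])
bright = adapt_facets([100.0, 200.0, 400.0])
```

In logarithmic space, a uniform change in background illumination shows up as a constant added to every facet, so subtracting the mean cancels it while preserving the relative contrast between facets.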
This process is mimicked in the system designed to let the fly-inspired sensor prototypes adapt to ambient light conditions. The logarithmic step is skipped to ease implementation and reduce the amount of hardware; this yields a system that works well indoors, but for a system suitable both indoors and outdoors the step will probably need to be included. The average is estimated with an ambient light detection circuit, and an instrumentation amplifier subtracts this estimated average from every photoreceptor in the prototype. To avoid negative output voltages, an additional step sets the desired dynamic range of the output by adding a constant voltage to the output of the instrumentation amplifier. The system requires calibration of the ambient light detection circuit and each photoreceptor, and even with perfect calibration, perfect light adaptation is not achievable due to component mismatches in the circuitry.
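Numerically, the simplified hardware version reduces to a subtraction and a constant offset. The voltages and offset below are illustrative, not measured values from the prototype:

```python
# Numerical sketch of the simplified hardware adaptation: subtract the
# ambient-light estimate from each photoreceptor signal, then add a
# constant offset so the output never goes negative. All voltages are
# illustrative; the log step is omitted, as in the prototype.

def light_adapt(photoreceptor_volts, ambient_estimate_v, offset_v=2.5):
    """Instrumentation-amplifier style subtraction plus a DC offset."""
    return [(v - ambient_estimate_v) + offset_v for v in photoreceptor_volts]

# A bright 3.0 V background with small per-facet contrast variations is
# re-centered around the 2.5 V offset, preserving the contrast signal.
out = light_adapt([3.1, 2.9, 3.0, 3.2], ambient_estimate_v=3.0)
```

The offset plays the role of the constant voltage added at the instrumentation amplifier's output: it sets the dynamic range so small negative contrast signals survive rather than clipping at zero.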
Though this system worked well, it has only been tested to date in a laboratory environment; additional considerations and tests need to be completed before the prototypes are ready to be tested in different real-world lighting conditions. Some of these considerations include filtering noise from incandescent and fluorescent lighting, time delays to slow light adaptation, and true averaging instead of estimates through an ambient light detector.
The experiments were performed using two OP906 photodiodes, three OP177 operational amplifiers, an AD620 instrumentation amplifier, and a LM317 voltage regulator. One of the photodiodes was used as the ambient light detector, and the other was used as the photoreceptor of the sensor. The next experiment involved calibrating the ambient light detector and the photoreceptor photodiode to the same value in the same level of light. Though the light adaptation circuit performed well in our experiments, we will need to make additional adjustments to the circuit for real-world environments, such as filtering AC noise coming from indoor lights and adding a delay to the light adaptation circuit. Changes in background illumination contain information, and if the light adaptation circuit responds too quickly, this information may be lost.
It is also possible for background lighting to become too intense for this circuit: very intense background light will force many of the operational amplifiers to saturate. The logarithmic transformation, as seen in the fly, may allow a larger light range without saturation.
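A toy comparison makes the saturation argument concrete. The gain, log scale, and 5 V rail below are hypothetical, chosen only to show the shape of the problem:

```python
import math

# Illustrative comparison: a linear stage clips at its supply rail over a
# wide intensity range, while a log stage compresses the same range into
# a few volts. The gain, scale, and 5 V rail are hypothetical.

RAIL_V = 5.0

def linear_stage(intensity, gain=1e-3):
    return min(intensity * gain, RAIL_V)     # saturates at the rail

def log_stage(intensity, scale=0.5):
    return scale * math.log10(intensity)     # compresses the range

# Over a 10^7 intensity range the linear stage pins at the 5 V rail,
# while the log stage output stays well within it.
print(linear_stage(1e7), log_stage(1e7))   # 5.0 3.5
```

This is why including the fly's logarithmic step is the likely path to a circuit that survives both indoor and outdoor lighting.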
Every sophisticated visual system must be able to handle a variety of lighting conditions. The fly handles light intensity over a range that spans 10⁷ and does this using a log transform-subtraction-multiplication strategy. Our ambient light adaptation circuit closely follows this biological strategy and produced very good results. However, more research is needed to obtain a robust circuit that can be placed on the fly eye sensor prototypes being designed at WISPR labs.
ABOUT THE AUTHORS
Steve Barrett, Ph.D., P.E., is associate professor of electrical and computer engineering at the University of Wyoming in Laramie. E-mail him at firstname.lastname@example.org. Brian Dean, Ph.D., is a student at the University of Wyoming. E-mail him at email@example.com. Cameron Wright, Ph.D., P.E., is associate professor of electrical and computer engineering, also at the University of Wyoming. E-mail him at firstname.lastname@example.org.