Automotive vision systems demand more for less
The obvious answer would be to use high quality optics to produce the image quality required, but their cost is generally prohibitive for most automotive applications
By Justin Roe and Stanton Earley
The statistics are startling. Auto accidents are one of the leading causes of premature death in developed countries.
As automotive companies and regulatory bodies grapple with the task of reducing auto accidents, demand for automotive safety systems is rapidly increasing.
From analog video cameras that provide a picture for the driver to see, to driver area systems that monitor the inside of the vehicle, to object avoidance systems, camera technology is at the forefront of automotive safety systems. The market for cameras in vehicles has grown from a few thousand units a few years ago to about 7 million units in 2008, and this growth will continue at approximately 120% per annum for at least the next five years. However, to reduce accidents, our cars will not only have to “see” but also analyze and interpret what they see.
Beyond the backup camera
Automotive cameras made their debut with the use of simple analog video cameras providing the car or truck operator with backup and side views.
Since then, more advanced versions have come out, including the infrared camera for enhanced night vision.
The requirements were simple—produce an enhanced image of an area from a blind spot for the driver to see and use. There was no analysis or interpretation of the picture’s content.
The latest evolution of automotive safety systems requires a much more complex use of camera images. Driver area monitoring systems use machine vision to monitor the inside of the automobile. Examples of camera sensor innovation in this area include camera systems that modify the deployment of airbags depending upon the height or weight of the passengers. Other examples are camera systems that monitor the driver for signs of fatigue by recording and interpreting the eye profile of the driver, or the position of the driver’s head.
Topping the list in complexity of today’s automotive safety systems are object avoidance systems that monitor the driving lanes in front of the car. In their simplest form, these systems are early warning systems. For example, they monitor lanes and alert the driver if the car is wandering out of its lane or is too close to the car in front. More sophisticated systems that are under development monitor the trajectory of objects for possible collision and take over steering or braking to avoid an accident.
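In its simplest form, the collision-warning decision described above reduces to a time-to-collision estimate: divide the measured gap by the closing speed and warn when the result drops below a threshold. The sketch below illustrates that idea; the function names and the 2-second threshold are illustrative assumptions, not details from any production system.

```python
def time_to_collision(distance_m, closing_speed_mps):
    """Seconds until impact if the closing speed stays constant.
    Returns None when the gap is not closing (speed <= 0)."""
    if closing_speed_mps <= 0:
        return None
    return distance_m / closing_speed_mps

def should_warn(distance_m, closing_speed_mps, threshold_s=2.0):
    """Alert the driver when the estimated time to collision falls
    below a threshold (2 s here is an illustrative value)."""
    ttc = time_to_collision(distance_m, closing_speed_mps)
    return ttc is not None and ttc < threshold_s

print(should_warn(30.0, 20.0))  # 1.5 s to impact -> True
print(should_warn(30.0, 5.0))   # 6.0 s to impact -> False
```

Real systems estimate distance and closing speed from the camera image itself (for example, from the change in apparent size of the lead vehicle), which is where accurate, well-focused imagery becomes critical.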
Volvo is a pioneer in this field. One of its current models features a system that will automatically apply the brakes if the car is likely to hit the vehicle in front.
These applications require camera systems with machine vision and high computing power to carry out the accurate analysis and interpretation of what is happening.
With a system tasked with making such critical decisions, the image needs to be in sharp focus across the entire frame; stuck pixels, dead pixels, and variations in color can be highly detrimental to the system’s effectiveness.
In addition, different camera modules will be required to focus images differently. For example, object avoidance systems may need images that put a higher priority on the focus at the peripheral border of the image rather than the center of the image.
High performance camera
The reliability and accuracy of machine-vision based camera systems are heavily dependent upon the quality of the image from the camera module. There are several approaches to achieving high-quality images.
The obvious answer would be to use high-quality optics to produce the required image quality, but their cost is generally prohibitive for most automotive applications. A much less obvious approach is to use lower-cost components yet create a camera system that is better than the sum of its parts. Lower-cost components have greater variability in tolerances, which has historically precluded them from mission-critical applications.
For example, two lens housings can look identical and have the same external dimensions yet have different optical properties. An assembly process that aligns the lens assemblies with respect to their physical dimensions will therefore produce two cameras with very different optical properties. This is far from satisfactory for an automotive safety system that takes control of the braking or steering.
The greater tolerance variability inherent in lower-cost components requires an assembly process that automatically compensates for part-to-part variation to produce a completed camera system that achieves the required image quality.
An advanced assembly process that aligns each lens and sensor pair using the optical qualities of the lens-sensor system as reference points to guide the assembly process solves this challenge.
One method is to use lasers to characterize the optical properties of the lens, and then align the sensor to it. However, this method suffers from two problems. First, it does not take into account any sensor variation. Second, it does not power up the sensor, so there is no way to test for dead pixels, stuck pixels, color correction, and a variety of other characteristics or trimming analyses. These tests are critical to final yield and unit cost, because they can remove sub-standard parts before further value is added to them.
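The dead- and stuck-pixel screening that a powered-up sensor makes possible can be sketched as follows. The two-frame approach (a capped dark frame plus a uniformly lit frame) and the thresholds are illustrative assumptions, not the article's actual test procedure.

```python
def find_defective_pixels(dark_frame, bright_frame, low=8, high=247):
    """Flag pixel defects from two test captures.

    dark_frame:   2-D list of 8-bit values taken with the lens capped;
                  a healthy pixel should read near zero.
    bright_frame: 2-D list taken against a uniformly lit target;
                  a healthy pixel should read near full scale.
    """
    defects = []
    for r, (dark_row, bright_row) in enumerate(zip(dark_frame, bright_frame)):
        for c, (d, b) in enumerate(zip(dark_row, bright_row)):
            if d > high:        # reads bright in the dark: stuck high
                defects.append((r, c, "stuck"))
            elif b < low:       # stays dark under illumination: dead
                defects.append((r, c, "dead"))
    return defects

# A module exceeding its defect budget can be rejected here, before
# further value is added to it.
frame_dark = [[0, 0, 255], [1, 0, 0]]
frame_lit = [[250, 0, 255], [249, 251, 248]]
print(find_defective_pixels(frame_dark, frame_lit))
# -> [(0, 1, 'dead'), (0, 2, 'stuck')]
```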
In some cases, the active tests can even compensate for problems during assembly, further increasing performance and yield. Passive alignment of the lens using lasers, by contrast, is inherently slower because it relies on many mechanical moves, resulting in long cycle times.
A manufacturing system that aligns each lens to its sensor individually in 5 degrees of freedom, based upon its optical characteristics rather than its mechanical dimensions, is ideal for assembling high-performance camera modules from low-cost components.
An active alignment system does not align to mechanical dimensions; it aligns to the image produced by the sensor itself. The sensor looks through the lens at a target, which enables the tests and algorithms needed to optimize the camera’s critical characteristics.
The active alignment system powers up and communicates with the sensor using a standard sensor communications protocol. If the image of the target acquired by the sensor is out of focus, the system proceeds to correct it.
The system then adjusts the lens in 5 degrees of freedom, using focus scores computed across multiple regions of the target image to optimize the exact positioning of the lens in all dimensions, translational and rotational, to achieve crisp focus.
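A region-based focus metric of the kind described can be sketched as follows. The gradient-sum sharpness measure and the square region grid are illustrative assumptions, not the production algorithm; in practice the aligner would maximize these scores while stepping the lens through its 5 degrees of freedom.

```python
def focus_score(region):
    """Sharpness of one image region: sum of squared horizontal and
    vertical brightness differences. Sharp edges score high; a
    uniformly blurred region scores near zero."""
    score = 0
    for r in range(len(region) - 1):
        for c in range(len(region[0]) - 1):
            dx = region[r][c + 1] - region[r][c]
            dy = region[r + 1][c] - region[r][c]
            score += dx * dx + dy * dy
    return score

def frame_scores(frame, grid=3):
    """Split the frame into a grid x grid set of regions and score
    each one, so the aligner can judge focus at the center and at
    the periphery independently."""
    h, w = len(frame), len(frame[0])
    rh, rw = h // grid, w // grid
    return [[focus_score([row[j * rw:(j + 1) * rw]
                          for row in frame[i * rh:(i + 1) * rh]])
             for j in range(grid)]
            for i in range(grid)]
```

Comparing the per-region scores as the lens tilts reveals which corner is soft: a focal plane tilted relative to the sensor raises the score in one region while lowering it in the opposite one.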
Once the active alignment system achieves optimal alignment, the lens is withdrawn, adhesive is dispensed, and after realignment a high-intensity UV beam cures the adhesive. The lens-sensor module then goes through final tests before being laser-marked with a 2-D barcode, logged, and released onto the outgoing conveyor.
The key advantage of active alignment is that the alignment algorithms maximize the very characteristics imperative for reliable operation of a specific automotive safety system.
Each safety technology requires different characteristics for effective operation. For instance, if accuracy in the center of the field of view matters more than accuracy at the periphery, the assembly process optimizes for that. In other cases, consistency over a wide-angle field of view is more important. In yet other applications, ensuring that a straight line in the real world appears as close as possible to a straight line on the sensor may be the overriding factor.
Whatever the critical characteristic (or set of characteristics), an alignment process that actually powers up the sensor during assembly is the only way to optimize for the specific criteria and achieve high performance while using lower-cost components.
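One way to express such application-specific criteria is as a weighted combination of per-region focus scores, with the weights encoding which parts of the image matter most. The sketch below is a hypothetical illustration; the weight values are invented examples, not figures from any real alignment recipe.

```python
def alignment_objective(region_scores, weights):
    """Combine a 2-D grid of per-region focus scores into one number
    for the aligner to maximize. Both arguments are same-shaped
    2-D lists."""
    return sum(w * s
               for score_row, weight_row in zip(region_scores, weights)
               for s, w in zip(score_row, weight_row))

# Hypothetical weighting for a system that prioritizes the image center:
center_weights = [[0.5, 0.5, 0.5],
                  [0.5, 2.0, 0.5],
                  [0.5, 0.5, 0.5]]

# A wide-angle lane-monitoring system might instead weight the periphery:
edge_weights = [[2.0, 2.0, 2.0],
                [2.0, 0.5, 2.0],
                [2.0, 2.0, 2.0]]

scores = [[10, 10, 10],
          [10, 40, 10],
          [10, 10, 10]]
print(alignment_objective(scores, center_weights))  # 120.0
```

Swapping the weight grid is all it takes to retarget the same alignment machinery from one safety application to another, which is the flexibility the article argues for.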
As safety systems continue to evolve, camera technology will have to provide inexpensive ways to produce camera modules that meet specific requirements and are highly accurate. Lens alignment technology will be the key to producing highly focused images with lower-cost components.
Flexible automation systems that can not only produce better camera modules but also maximize the required characteristics will be crucial to the continued advancement of automotive safety systems.
ABOUT THE AUTHORS
Justin Roe (firstname.lastname@example.org) is a Chartered Engineer (U.K.) and is general manager and chief operating officer of Automation Engineering Inc., a builder of custom automation solutions integrating machine-vision guided automation, precision motion control, automated alignment, and automated laser-processing systems. Stanton Earley (email@example.com) is business development manager at Unovis Solutions, a systems and automation integration company in New York. He designs and deploys high-volume turnkey manufacturing solutions in the automotive, telecom, and medical devices industries.
Pixel: In digital imaging, a picture element is the smallest piece of information in an image. Pixels are laid out in a regular two-dimensional grid and are represented as dots, squares, or rectangles. Each pixel is a sample of an original image, where more samples typically provide a more accurate representation of the original. The more pixels used to represent an image, the closer the result can resemble the original. The measures dots per inch (dpi) and pixels per inch (ppi) are sometimes used interchangeably.
DOF: In mechanics, degrees of freedom are the set of independent displacements and/or rotations that specify completely the displaced or deformed position and orientation of the body or system.