1 April 2002
Innovations in Motion Control: Part 1—Learned Effort
People and animals can move quickly and accurately without having the rigid or precise (i.e., expensive) structural characteristics of typical numerically controlled motion systems (e.g., robots). In this, the first of a three-part series explaining how modern computer control can emulate these properties, we'll consider the ability to learn an open loop effort.
When we observe Tiger Woods or Michael Jordan, it's possible to surmise what can be done with a learned effort. In both cases, the athletes have no opportunity to use feedback control because the ball is open loop once it's on its way. Hence, these gentlemen have learned the open loop effort needed to make motions by observing the results of previous attempts. What's the analog for a computer-controlled servo system?
Most recurring industrial tasks are good candidates for learning the open loop control effort (consider, for example, the typical tasks of electronic placement machines, machine tools, and industrial robots). Typically, a learned effort causes a 10:1 reduction in dynamic tracking error relative to a feedback system. Essentially, 90% of the motor control currents are repeatable for a given learned motion. The method doesn't require a good linear dynamics model of the system to be controlled.1
How It's Done
Consider position control of a two-mass system (Figure 1). Although the system contains a static-friction nonlinearity and is described by a fourth-order nonlinear differential equation, it's always stable, because a spring-damper (PD) control system is, in fact, equivalent to the diagram of Figure 2. In Figure 2, we see that PD control of a mass, where the mass's position is fed back, is equivalent to a passive system with another spring (P) and damper (D). The usual conventions apply in the block diagrams: FM is the force on the motor armature from the magnetic field; XR is the desired motion; MA and MEOA are masses; and Kstructure is the stiffness of the connection between the armature and the end of arm (EOA).
There are some problems, however. First, friction causes an offset in the final position. Second, although making the PD system's gain very high provides both stable and accurate motion of the motor mass, the second mass, EOA, would oscillate indefinitely afterward because its structural damping is taken as zero. Thus, in real systems we effectively detune the motor control to provide damping for the structural parts. We also move more slowly so the residual vibrations' amplitude will be low. We'd normally use a reference motion that's "smooth" and doesn't require the machine to exceed the motor's capabilities. And traditionally, we make the structural spring stiff by adding material, and hence mass.
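These problems are easy to reproduce numerically. Below is a minimal sketch of the two-mass system of Figure 1 under PD control of the motor position. All masses, stiffnesses, gains, and the crude Coulomb friction model are illustrative assumptions, not values from the article:

```python
M_A, M_EOA = 1.0, 0.5      # motor armature and end-of-arm masses, kg (assumed)
K_STRUCT = 2000.0          # structural armature-to-EOA stiffness, N/m (assumed)
KP, KD = 400.0, 40.0       # PD gains on the motor position XA (assumed)
F_FRIC = 2.0               # crude Coulomb friction force on the motor, N
DT = 1e-4                  # integration step, s

def simulate(x_ref, t_end):
    """Semi-implicit Euler integration; returns final XA and XEOA."""
    xa = va = xe = ve = 0.0
    for _ in range(int(t_end / DT)):
        spring = K_STRUCT * (xa - xe)              # structural spring force
        f_pd = KP * (x_ref - xa) - KD * va         # PD effort on the motor mass
        f_fric = -F_FRIC if va > 0 else (F_FRIC if va < 0 else 0.0)
        va += (f_pd + f_fric - spring) / M_A * DT
        xa += va * DT
        ve += spring / M_EOA * DT                  # EOA has zero structural damping
        xe += ve * DT
    return xa, xe

xa, xe = simulate(x_ref=0.1, t_end=2.0)
# The EOA mode's only damping path is back through the motor's D term,
# so it rings for a while; and this crude friction model chatters near
# the target rather than sticking at the true static-friction offset.
```

The comments reflect the article's point: with structural damping at zero, the EOA mode can shed energy only through the motor's damper, which is why real systems are detuned and moved slowly.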
Suppose we learned (and stored in RAM) the currents required to make the motion from XEOA = A to XEOA = B. Remember that A and B are arbitrary positions, but periodically the system must make this motion, perhaps once every assembly cycle of a circuit board. Remember also that in almost all current applications, the controller really controls XA, the motor armature position. If we did this, and assumed the friction for this motion was largely repeatable, then we'd expect the offset caused by friction to disappear to the extent the friction was the same for each motion. This is in fact what happens; however, we haven't solved the settling time problem.
Let's attempt that by passing the EOA position, XEOA, to the feedback controller (Figure 3). This is a good idea, as it allows us to actively damp the vibrations using what amounts to a state-feedback control system.2 But we'd have a hard time building a fast system for such feedback because in real life it requires a good mathematical model of the system. Remember, this system is also nonlinear. In addition, there's the need to estimate the EOA mass's position and velocity. Part III of this series will deal with that.
However, we could certainly make this system stable, and if the controller could learn the feed-forward effort to make a particular move from A to B (as discussed earlier), we could reduce the dynamic error to close to zero. This, in turn, means the system is "fast," even if the feedback system isn't. Further, the learned control effort would minimize the EOA oscillations, even if the feedback system's settling time is significant. That is, the feed forward tends to introduce a compensating oscillatory input that drives the dynamic error toward zero.
Feed-Forward Learning Method
Consider how learning could take place. We'll use a stable feedback control system to make the motion from A to B (Figures 1, 3). Because this is an all-digital control system, we can store the required motion currents in RAM and use them when the same motion is next made (Figure 4). Using feed forward reduces this subsequent motion's tracking error. Now we record the new combined currents used to make that motion and remember them. After a number of cycles, the feed forward should eventually account for nearly all of the control effort; the feedback system's remaining error signal should become a random number representing the system's nonrepeatability—or so you'd think.
This is both true and false. It's false because the very simple memorization of the last-used effort is an unstable process; it won't converge to the needed feed-forward effort, even if the process is entirely repeatable. It's true because if a properly modified method is used, convergence occurs and leads to greatly reduced tracking error.3 Convergence is possible without a good system model but requires a stable feedback loop. As a rule, dynamic error reductions for robotic applications are 10:1 or better. Applications in mechanisms that don't involve highly repeatable forces (e.g., a milling process) don't do as well. After all, F = Ma is highly repeatable, while cutting forces aren't.
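A toy version of this learning loop can be sketched as follows. The plant, gains, and update law are illustrative assumptions, not the authors' algorithm (reference 1 gives the real treatment); in particular, the "properly modified method" is represented here by a small learning gain applied to the error one sample after each stored command, rather than a raw copy of the last cycle's total current:

```python
A, B = 0.95, 0.1      # toy plant: y[n+1] = A*y[n] + B*u[n] (assumed)
KP = 2.0              # stabilizing proportional feedback gain (assumed)
L_GAIN = 4.0          # learning gain, kept small enough to converge
N = 100               # samples memorized per motion

def reference(n):
    """Assumed trajectory: ramp from 0 to 1 over the first half-cycle."""
    return min(1.0, 2.0 * n / N)

def run_cycle(u_ff):
    """Run one motion under feedback plus the stored feed-forward table."""
    y = 0.0
    errs = []
    for n in range(N):
        e = reference(n) - y
        u = KP * e + u_ff[n]      # feedback + learned feed forward
        y = A * y + B * u         # plant responds one step later
        errs.append(e)
    return errs

u_ff = [0.0] * N
first_rms = final_rms = 0.0
for cycle in range(30):
    errs = run_cycle(u_ff)
    final_rms = (sum(e * e for e in errs) / N) ** 0.5
    if cycle == 0:
        first_rms = final_rms
    # The "modified" update: correct each stored sample using the error
    # observed one step later (the step that sample influenced), scaled by
    # a learning gain. An unmodified copy of the last total current used,
    # with no gain or shift, need not converge, as the text notes.
    for n in range(N - 1):
        u_ff[n] += L_GAIN * errs[n + 1]

# first_rms is the unlearned, feedback-only error; final_rms is far smaller,
# leaving the feedback loop only the non-repeatable residue.
```

The one-sample shift matters because the plant has a step of delay: each stored command can only influence the error from the following sample onward.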
You might ask if the memory requirements for learned feed forward are significant. Take the case of 100 distinct motions, defined by trajectory and payload: 2-second motions on average; four axes of control; 200-Hz sample rate for current requirements memorization; and 16-bit resolution of motor current commands. The RAM requirement: 0.320 megabytes.
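As a check on that figure, the arithmetic (all numbers from the text):

```python
motions = 100           # distinct motions (trajectory plus payload)
seconds = 2             # average motion duration, s
axes = 4                # axes of control
sample_hz = 200         # learning sample rate, Hz
sample_bytes = 2        # 16-bit motor current commands

total_bytes = motions * seconds * axes * sample_hz * sample_bytes
print(total_bytes / 1e6, "MB")   # 0.32 MB, matching the text
```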
Experience has shown that for mechanical systems with primary natural frequencies below 100 Hz, there's little need to exceed a 200-Hz learning sample rate for the memorized open loop effort. It's likely to be useful to interpolate the remembered numbers, because a real feedback control system, at least on the motor loop, is probably sampled between 10 and 30 kHz. This high rate aids feedback system stability at high gains, improving performance. Of course, feed forward shouldn't affect stability. And mechanical systems on the scale of human beings (i.e., characteristic dimensions of 1 meter) are very difficult to build with primary natural frequencies above 100 Hz, even with massive moving parts.
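One simple way to bridge the two rates is to linearly interpolate the stored table at each fast-loop sample time. A sketch, with made-up table values and an assumed 20-kHz motor loop:

```python
LEARN_HZ = 200.0      # rate at which the effort table was memorized
LOOP_HZ = 20_000.0    # assumed motor-loop sample rate (within 10-30 kHz)

def interp_ff(table, t):
    """Linearly interpolate the stored 200-Hz table at time t (seconds)."""
    pos = t * LEARN_HZ            # fractional index into the table
    i = int(pos)
    if i >= len(table) - 1:       # hold the last value past the end
        return table[-1]
    frac = pos - i
    return table[i] * (1.0 - frac) + table[i + 1] * frac

table = [0.0, 1.0, 0.5, 0.0]             # made-up stored currents
u = interp_ff(table, 1.5 / LEARN_HZ)     # halfway between samples 1 and 2
# u is 0.75, the midpoint of 1.0 and 0.5

# the first few values the 20-kHz loop would read from the table:
fast = [interp_ff(table, k / LOOP_HZ) for k in range(3)]
```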
Learned feed forward is a great way to reduce a control system's dynamic tracking error in those cases where motions, including payloads, recur. Each motion must be memorized separately, but there's no significant increase in controller cost for most applications, since memory is cheap. The reduced tracking error can be spent on increased system performance, on reduced mechanical weight and precision, or on both. Keep in mind that if the springs shown in Figures 1 and 2 were infinitely stiff and of exactly known length, we could get speed and precision of motion simply by using a sufficiently high gain and sample rate in the PD control system. That is the conventional approach to high-performance control. MC
1Sadegh, N., R. Horowitz, W. W. Kao, and M. Tomizuka, "A Unified Approach to the Design of Adaptive and Repetitive Controllers for Robotic Manipulators," Journal of Dynamic Systems, Measurement, and Control, ASME Transactions, Vol. 112, pp. 618–629, December 1990.
2Book, Wayne, and Mark Majette, "Controller Design for Flexible Distributed Parameter Mechanical Arms via Combined State Space and Frequency Domain Techniques," Journal of Dynamic Systems, Measurement, and Control, pp. 245–249, December 1983.
|Parts II and III will provide modern motion control system techniques related to vibration control and relative position estimation. Both allow further reductions in machine weight and precision.|
|The authors have been fortunate to be associated with a very talented pool of Georgia Tech graduate students and colleagues who, over the years, have shown us how to build motion machines much more intelligently. Many companies and government agencies have played a part, but we most recently recognize the support of the National Center for Manufacturing Science, Visteon, and CAMotion, Inc., a Georgia Tech spin-off company. Dr. Nader Sadegh has caused us to recognize the importance of learned feed-forward effort.|
Steve L. Dickerson, Sc.D., is chairman of CAMotion, Inc. Contact him at 813 Ferst Drive, Atlanta, GA 30332-0405; tel: (404) 894-3255; fax: (404) 894-9342; www.camotion.com. Wayne J. Book, Ph.D., is HUSCO/Ramerez Professor of Fluid Power and Motion Control. Contact him at the Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0405; tel: (404) 894-3247; fax: (404) 894-9342; www.gatech.edu.