May/June 2013
Web Exclusive

Multivariable predictive control

Looking beyond the glass ceiling

By Allan G. Kern

Fast Forward

  • A better working knowledge of MPC will help industry achieve higher performance.
  • MPC is explained in terms most control engineers and others will understand.
  • Model-based predictive control is both a strength and a potential vulnerability of MPC.
 

The glass ceiling of multivariable predictive control (MPC) refers to the low performance level of many MPC applications in industry today, and to industry's tendency to focus on MPC's higher theoretical potential rather than on overcoming these more immediate, persistent limitations. Explaining multivariable predictive control has always been an interesting part of the business. Today, many people from the control room to the boardroom readily relate to several now-iconic images, including the classic constraint corner, the variance reduction graph, and a typical model matrix (Figure 1). These images convey the benefits of MPC and say something of its mathematical sophistication, but they do not necessarily convey a practical understanding of how MPC actually works. That understanding remains important, because the low glass ceiling of MPC performance has persisted and emerged as a key limitation going forward.

Figure 1: Classic multivariable control images include the constraint corner diagram, the variance reduction graph, and a typical model matrix.


With the benefit of modern familiarity with MPC, how it works can now be summarized fairly easily in terms many people already know. Control engineers, process engineers, and many others are familiar with the basics of cascade control, override control, feedforward control, and loop tuning. MPC can be explained usefully, and quite accurately, based mainly on these concepts.

MPC is basically a complete set of override controllers for a process, meaning it comprises an override controller for every interaction in the process. Each row of the model matrix can be thought of as a row of controlled variable (CV) override controllers, all cascaded, via an override selector, to the row's manipulated variable (MV). And each column of the matrix basically represents feedforward of each MV that affects that column's CV (Figure 2). In concept, this is basic process control, although prior to MPC most unit controls included only a few critical overrides and feedforwards, not all of them.

Figure 2: Multivariable predictive control (MPC) can be thought of as sets of override and feedforward controls cascaded to each MV.
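
For readers who think in code, the row-and-column reading above can be sketched in a few lines of Python. The tag names, proposal values, and feedforward gain below are hypothetical, and real MPC packages implement this very differently internally; the sketch only shows the select-and-feedforward structure of one model-matrix row.

```python
# A minimal sketch (hypothetical tags and values) of one model-matrix row:
# each CV override proposes an MV value, a selector picks the most
# constraining one, and feedforward from an interacting MV is added.

# Each CV in this MV's row "asks for" the MV value that would honor its limit.
cv_proposals = {
    "column_temp_high": 62.0,  # MV value that keeps CV1 at its high limit
    "pressure_high":    58.5,  # MV value that keeps CV2 at its high limit
    "flow_target":      65.0,  # MV value that holds CV3 at its target
}

# Low-select override: the most constraining (lowest) proposal wins the cascade.
mv_from_overrides = min(cv_proposals.values())

# Column reading: feedforward correction for another MV known to move these CVs.
ff_gain = -0.4           # assumed feedforward gain (hypothetical)
other_mv_change = 1.5    # measured change in the interacting MV
mv_setpoint = mv_from_overrides + ff_gain * other_mv_change

print(f"selected override: {mv_from_overrides}, final MV target: {mv_setpoint:.2f}")
```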


Part of the beauty of MPC is that the override selectors are simultaneously a mix of high, low, and target overrides. And to take advantage of the additional options (control degrees of freedom) this interconnectedness creates, MPC includes an optimization algorithm that finds the economic optimum value for each variable, whether it be its high limit, low limit, or some target in between. Of course, inside MPC you will not find compact DCS-like override and optimization function blocks, but it can be thought of that way.
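
How the optimizer picks each variable's high limit, low limit, or an in-between target can be sketched as a small linear program over steady-state gains. The gain matrix, costs, and limits below are hypothetical illustration values, and commercial MPC optimizers are far more elaborate; the sketch only shows the steady-state idea.

```python
# A minimal sketch, assuming a 2-MV / 2-CV steady-state gain matrix, of the
# LP-style economic optimization layered on top of the controller.
# All gains, costs, and limits are hypothetical.
import numpy as np
from scipy.optimize import linprog

G = np.array([[1.2, -0.5],    # steady-state gains: rows = CVs, cols = MVs
              [0.3,  0.8]])
cv_now = np.array([74.0, 3.1])                     # current CV values
cv_lo = np.array([70.0, 2.5])
cv_hi = np.array([80.0, 4.0])
mv_cost = np.array([-1.0, 0.4])                    # negative = profitable to raise MV1

# Predicted CVs must stay within limits: cv_lo <= cv_now + G @ dmv <= cv_hi
A_ub = np.vstack([G, -G])
b_ub = np.concatenate([cv_hi - cv_now, cv_now - cv_lo])

res = linprog(mv_cost, A_ub=A_ub, b_ub=b_ub,
              bounds=[(-5.0, 5.0), (-5.0, 5.0)])   # MV move limits
print("optimal MV moves:", np.round(res.x, 2))     # drives CVs to limits/targets
```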

Another essential aspect of MPC, at least for control engineers who need to concern themselves with how well MPC actually controls, is its use of model-based control. The elegance, rigor, and sophistication of model-based control should not be overlooked. But the important aspect to understand, from a practical standpoint, is the effective tuning of model-based control. In effect, model-based controller tuning is ideal tuning, i.e., it assumes the model exactly reflects the process. This, of course, is the theoretical strength of MPC - a complete set of overrides and feedforwards, all based on actual models - but it may also be its Achilles heel. Ideal tuning is aggressive tuning. In practice, most controllers need to be detuned to account for variable process gains and interactions. With MPC, movement can be detuned, but predictions cannot - model-based predictive control fundamentally relies on accurate models.
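
A single-loop analogy shows what "ideal tuning is aggressive tuning" means in practice. The sketch below uses lambda (IMC) tuning of a first-order loop as a stand-in for MPC's model-based tuning; all process values are hypothetical.

```python
# A minimal sketch of why ideal (exact-model) tuning is aggressive tuning,
# using lambda (IMC) tuning of a first-order process. Values are hypothetical.
Kp, tau = 2.0, 10.0          # identified process gain and time constant
lam = tau / 2.0              # aggressive closed-loop time constant
Kc_ideal = tau / (Kp * lam)  # IMC PI gain: assumes the model is exact

# If the real gain drifts to 2x the model (valve, feedstock, fouling),
# the effective loop gain doubles; halving Kc restores the intended behavior.
Kc_detuned = Kc_ideal / 2.0
print(f"ideal Kc = {Kc_ideal:.2f}, detuned Kc = {Kc_detuned:.2f}")
```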

Traditional MPC, with its many overrides, feedforwards, and predictions, produces a lot of control action. This rigor is the theoretical strength of MPC, and it works beautifully in simulations where the process response exactly matches the model-based predictions. In simulations, they are one and the same.

However, as most control engineers learn to appreciate, too much control is often worse than too little. When loops interact, or controller gains no longer reflect actual process gains, aggressive control action can cause process instability rather than prevent it. Even tuning a single-loop controller is almost always a trade-off between ideal tuning and the practical necessity of detuning, and it commonly entails halving or even quartering the ideal (model-based) controller gain to account for the variable nature of the process gain and interactions with other loops.
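
The trade-off can be seen in a toy loop. In the sketch below (hypothetical values), an integral-only controller whose gain looks acceptable against the model diverges once the real process gain drifts 50 percent higher, while a halved controller gain settles cleanly.

```python
# A minimal sketch (hypothetical values): integral-only control of a
# gain-plus-one-step-delay process. Kc = 1.5 is stable against the model
# (gain 1.0) but diverges when the true gain has drifted to 1.5; halving
# the controller gain restores stable control.
def run(Kc, Kp_true=1.5, sp=1.0, steps=12):
    u, y, hist = 0.0, 0.0, []
    for _ in range(steps):
        e = sp - y           # control error
        u += Kc * e          # integral action
        y = Kp_true * u      # process responds one step later
        hist.append(round(y, 2))
    return hist

print("aggressive Kc=1.5:", run(1.5))    # loop gain 2.25 -> growing oscillation
print("halved Kc=0.75:   ", run(0.75))   # loop gain 1.125 -> settles at setpoint
```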

Similarly, control engineers who implement feedforward on a single-loop basis soon learn that feedforward is a true two-edged sword. It has tremendous potential to reject disturbances bumplessly, but if the feedforward model is imperfect it can as quickly do more harm than good. Feedforward is best used selectively, on individual loops, where needed to achieve disturbance rejection on critical variables, and then only when the loop lends itself to a reliable feedforward model.
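
The two-edged nature shows up even in the static feedforward arithmetic. In the sketch below (hypothetical gains), a perfect feedforward model cancels the disturbance completely, while a 2x overestimate produces a deviation as large as the raw disturbance, in the opposite direction.

```python
# A minimal sketch of feedforward's two-edged nature: steady-state effect of
# a measured disturbance with a correct vs. a 2x-overestimated feedforward
# gain. All gains are hypothetical.
Kd = 0.8                   # true disturbance-to-CV gain
Kp = 2.0                   # MV-to-CV gain
d = 1.0                    # measured disturbance step

for Kd_model in (0.8, 1.6):            # perfect model, then 2x overestimate
    ff_move = -(Kd_model / Kp) * d     # classic static feedforward law
    cv_error = Kd * d + Kp * ff_move   # residual CV deviation
    print(f"Kd_model={Kd_model}: residual CV error = {cv_error:+.2f}")
```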

Another important process control fundamental concerns the relative speed of controllers. In cascade control, for example, the lower loop must always be at least three or four times faster than the upper loop. The same applies to loops that interact. Where two loops interact (or "fight"), the more important loop is usually tuned tightly, while the less important loop is tuned more slowly. This allows the important loop to work quickly and limits the impact of the less important loop, at the cost of slower control and greater transient error on the less important loop. If both loops are tuned independently and have similar natural speed, then, depending on the strength of the interaction, neither will control well, and larger process instability will likely result.
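
A toy two-loop example illustrates the point. With the hypothetical gains below, two interacting integral-only loops tuned equally fast drive each other unstable, while slowing the less important loop lets both settle.

```python
# A minimal sketch of two interacting loops under integral-only control,
# with hypothetical gains: tuned equally fast they destabilize each other;
# slowing the less important loop stabilizes both.
import numpy as np

G = np.array([[1.0, 0.6],    # steady-state gains: each MV also moves the other CV
              [0.6, 1.0]])

def run(Kc1, Kc2, steps=15):
    K = np.diag([Kc1, Kc2])
    u, y = np.zeros(2), np.zeros(2)
    sp = np.array([1.0, 1.0])
    for _ in range(steps):
        u = u + K @ (sp - y)   # integral-only controllers
        y = G @ u              # process responds one step later
    return np.round(y, 2)

print("both fast (Kc=1.5, 1.5):", run(1.5, 1.5))   # diverging oscillation
print("fast/slow (Kc=1.5, 0.3):", run(1.5, 0.3))   # both settle near setpoint
```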

All this might not be relevant to MPC if accurate and durable models were the norm, as they are in simulations. But in real processes, significant model inaccuracy is largely unavoidable. Process gains change in real time with valve position. Gains change hourly and daily with feedstock, feed rate, and product type. And gains change over time due to fouling and catalyst deactivation. In the course of loop tuning (or MPC troubleshooting), one frequently notices many sources of variable process gain.
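
The valve effect alone can move the gain severalfold. The sketch below computes the local gain of an equal-percentage valve (with a hypothetical but typical rangeability) at a few positions.

```python
# A minimal sketch of one common source of variable process gain: an
# equal-percentage valve, whose local gain grows with opening.
import math

R = 50.0                                   # valve rangeability (hypothetical)

def flow_fraction(x):                      # x = valve position, 0..1
    return R ** (x - 1.0)                  # equal-percentage characteristic

for x in (0.3, 0.5, 0.8):
    gain = math.log(R) * flow_fraction(x)  # local gain dF/dx
    print(f"valve at {x:.0%}: local gain = {gain:.2f}")
# Gain varies roughly sevenfold between 30% and 80% open.
```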

Overall, traditional MPC amounts to adding dozens of controllers to a process (one per model), all tuned aggressively, all with feedforward, all interacting, and most with changing process gains. One might expect the result to be too much overly aggressive control action, leading to process instability and, in turn, to "degraded" MPC performance in the form of MV clamping and detuned movement. This describes what industry has experienced fairly well. When new MPCs are commissioned, you can often watch this unfold.

Industry has mainly responded by focusing on better model identifiers and less clumsy ways to detune MV movement, but these do not directly address the root limitation. A strategy I have often recommended is to use smaller matrices that focus on the most important subset of MVs and CVs, mainly those that remain in service even after degradation sets in. Eliminating the unused variables can make the remaining ones more stable, bring better focus to engineering support and operation, and more closely reflect actual operating priorities.
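
In data terms, the smaller-matrix strategy is simply a submatrix selection. The sketch below (hypothetical tags and gains) keeps only the core MVs and CVs, those that stay in service after degradation sets in, from a full identified matrix.

```python
# A minimal sketch of the smaller-matrix strategy: select the core rows and
# columns of the identified model matrix. Tags and gains are hypothetical.
import numpy as np

mvs = ["reflux", "reboil", "feed", "press"]
cvs = ["top_purity", "btm_purity", "dp", "level"]
G = np.array([[1.2, -0.5, 0.3, 0.0],     # stand-in full gain matrix
              [0.2,  0.9, 0.1, 0.4],
              [0.0,  0.3, 1.1, 0.2],
              [0.5,  0.0, 0.6, 1.0]])

core_mvs = ["reflux", "reboil"]          # MVs operators actually leave in service
core_cvs = ["top_purity", "btm_purity"]

rows = [cvs.index(c) for c in core_cvs]
cols = [mvs.index(m) for m in core_mvs]
G_core = G[np.ix_(rows, cols)]           # the smaller, more supportable matrix
print(core_cvs, "x", core_mvs)
print(G_core)
```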

About the Author

Allan Kern (Allan.Kern@APCperformance.com) has 34 years of process control experience and has authored numerous papers on multivariable control and practical strategies for process control success. He is a professional control systems and chemical engineer, a senior member of ISA, and a graduate of the University of Wyoming.