Cover Story
- APC progress has stalled because its high cost of ownership limits its applicability. Most APC resources now go toward support and maintenance of existing applications, not new ones.
- Most APC benefits come from a minority of variables, while costs are compounded by the number of variables, which suggests applying the Pareto principle (80/20 rule) to APC controller design.
- Experience shows that detailed models and embedded optimizers are not always necessary for the essential role of multivariable control, which unlocks many new possibilities for APC.
APC paradigm now more affordable, agile, scalable, and reliable
By Allan Kern, PE
In this article, as in industry, advanced process control (APC) refers primarily to multivariable control. Multivariable control means adjusting multiple single-loop controllers in unison to meet the constraint-control and optimization objectives of an additional set of related process variables.
Multivariable control is a central aspect of nearly every industrial process operation. Historically, operators adjusted single-loop controller set points and outputs (i.e., “the available handles”) to control a superset of constraint and optimization variables (i.e., “controlled variables”). They did this based on experience, knowledge of the process, ongoing operating conditions, and input from the greater operating team, which includes supervision, process engineers, and production planning. APC endeavors to automate this task, in order to capture incremental gains in capacity, efficiency, quality, etc. Figure 1 depicts the essential difference between manual and automated multivariable control.
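The automated side of figure 1 can be caricatured in a few lines of code. The sketch below is purely illustrative, not any vendor's algorithm: one constraint-control decision per cycle, backing a handle off when a constrained variable exceeds its high limit and creeping it forward when comfortably below. All names and parameters are hypothetical.

```python
def adjust_handle(sp, pv, high_limit, deadband, step):
    """One automated constraint-control decision for a single handle.

    sp         -- current setpoint of the underlying single-loop controller
    pv         -- current value of the constrained (controlled) variable
    high_limit -- constraint limit supplied by the operating team
    deadband   -- comfort zone below the limit where no move is made
    step       -- conservative move size per control cycle
    """
    if pv > high_limit:
        return sp - step      # constraint violated: retreat
    if pv < high_limit - deadband:
        return sp + step      # room available: push toward the limit
    return sp                 # inside the deadband: hold
```

A real application runs one such decision per handle per cycle, with logic to arbitrate when several constraints claim the same handle.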
The most common automated multivariable control technology in use today is model-predictive control (MPC). Prominent characteristics of MPC include the use of detailed process models, embedded optimizers, and a generally large-matrix approach to application design, i.e., dozens of variables and often hundreds of models. This combination was expected to be transformative for process control, but it has met with unexpected consequences in cost, maintenance, and reliability. Industry has so far stood by MPC, so that more agile, affordable, and “owner-friendly” alternatives have been slow to emerge and evolve.
Within operating facilities, process optimization is carried out by many participants, such as production planning, process engineering, and operations. Together, these groups arrive at current constraint limits and optimization targets, and propagate them to the control systems via computer links, operating orders, word of mouth, etc. Most constraint limits and targets rarely change, while a handful change with operating conditions, such as feedstocks, equipment out of service, and time of year. On top of these activities, there may be similar sitewide and enterprisewide optimization layers (figure 2).
In this picture, the role of the embedded MPC optimizer comes into question. It may have made sense in 1985, when few other real-time optimization programs existed in industry, but today the entire optimization hierarchy is nearly as automated as it needs to be or can be. This makes the embedded MPC steady-state optimizer largely redundant, while it continues to add cost and complexity to the MPC application.
MPC also incorporates “path” optimization, whose objective is to minimize transient cost and error as it moves the process from current conditions to optimal conditions. However, taking a simple straight-line path, while observing process speed limits along the way, may be a more effective strategy in most cases. As with driving a car, observing speed limits and arriving safely is usually more important than arriving quickly. Industry endorses this concept whenever it uses approaches such as move suppression, extended closed-loop response times, soft limits, and reduced optimization speeds. Why not just post a safe speed?
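A "posted safe speed" amounts to simple rate limiting: advance toward the target in a straight line, never faster than a stated maximum per control cycle. A minimal sketch, with hypothetical names:

```python
def rate_limited_move(current, target, max_step):
    """Advance from current toward target along a straight-line path,
    never exceeding the posted speed limit (max_step) per control cycle."""
    delta = target - current
    if abs(delta) <= max_step:
        return target                           # within one step: arrive
    return current + (max_step if delta > 0 else -max_step)
```

Called once per cycle, this produces the straight-line, speed-limited trajectory described above, with no path optimizer involved.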
The essential role of APC at the control system layer is control, i.e., to push constraint limits and pursue optimization targets in the live process environment, where the related process values—not the limits and targets themselves—are subject to change in real time. Control needs to execute at high frequency, but optimization normally does not. This paradigm has the potential to simplify APC technology by eliminating embedded optimizers that are potentially redundant or unnecessary in most applications.
Model-based control requires reliable process models. In the original APC paradigm, this need was met by a plant test and subsequent model identification, with the resulting models expected to remain valid indefinitely. However, experience has shown that many models change frequently, even dynamically, for a wide variety of reasons. Over the years, the conventional wisdom on model life has shrunk to five years, and then to two. Today industry is pursuing real-time model updates, but even this is unlikely to square the circle, for the same reasons that derailed autotuning: model change poses a fundamental conundrum for autotuning and model-based control alike.
To move forward, APC needs to embrace the idea that process models are basically a moving target. This has always been a fact of life in the single-loop tuning world, where the principles of preserving process stability and respecting a degree of the unknown have always taken precedence over minimizing transient error. In retrospect, there is no reason these principles should not apply to multivariable control, too. Indeed, MPC experience shows that these principles remain universal and indispensable.
The same insight can be gleaned from examining how operators historically carry out manual multivariable control, which they do without relying on detailed models or optimizers. By virtue of their experience and training, operators know important constraints, optimal targets, and appropriate handles; they make moves that safeguard process stability and respect the historical degree of uncertainty; and they monitor actual process response—not yesterday’s or last year’s response—before making further adjustments accordingly (figure 3).
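That operator practice can also be sketched in code. In the hedged example below, the move implied by a rough, observed gain estimate is clamped to a conservative maximum, so that uncertainty in the "model" can never produce an aggressive move; the names and the gain heuristic are illustrative, not a published algorithm.

```python
def operator_style_move(pv, target, observed_gain, max_move):
    """Size a move from a rough, observed process gain, then clamp it to a
    conservative limit that respects uncertainty and process stability.
    The operator then watches the actual response before moving again."""
    desired = (target - pv) / observed_gain   # move implied by the rough gain
    return max(-max_move, min(max_move, desired))
```

The clamp, not the gain estimate, does the heavy lifting: it encodes the operator's habit of preserving stability and respecting the unknown over minimizing transient error.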
The effectiveness of manual multivariable control has always depended on the amount of time and initiative the operator has available, and on each operator's individual level of expertise. Timeliness and consistency, by contrast, are the hallmarks of automation.
In the original APC paradigm, where models were assumed to be reliable, having a larger matrix (more variables) and a denser matrix (more models) was considered the best practice, because in principle it resulted in a more complete solution. But in today’s world, where models are understood to be variable, more models can translate into more problems, for both control and optimization. Industry has experienced this in the high maintenance and degraded performance of many MPC applications.
The extended operating team, especially operators and process engineers, normally knows a priori how to manage process constraints and pursue optimization targets effectively, by virtue of its knowledge and experience. This suggests that existing (established and proven) operating practices provide the best basis for matrix design. That basis will also normally yield a much smaller and less dense matrix than the traditional plant-test paradigm, whose strategy is to cast a wide net.
A smaller matrix can be expected to reduce cost and maintenance proportionately, especially if the remaining variables and models are the essential ones, already proven in use by virtue of actual operation. In the traditional paradigm, the APC project goal is usually “optimization,” based on a large-matrix strategy, but in the small-matrix paradigm, the central goal is “automation,” based on existing, proven, manual multivariable control operating practices. This may sound less lofty, but it could be a more effective focus for APC going forward.
Lessons from feedforward
The primary limitation in figure 3, from a process control standpoint, is the lack of model-predictive feedforward control action, which has always been a cornerstone of the MPC paradigm and a key piece of the expected transformation of process control into a more exact science. However, feedforward is the single-loop equivalent of model-predictive control, and its long history tells a different story.
The potential power of feedforward (to reject disturbances proactively) has always been well known. Feedforward function blocks have been available since industry's first distributed control systems (and in programmable logic controllers, analog, and pneumatic systems before that). Yet historically, feedforward has found very limited use, even at the much more manageable and selective single-loop level, because of the complexity, risk, and maintenance a feedforward model adds to any loop. Feedforward has a high bar and is generally warranted only where its benefits are substantial and a reliable model is possible. Using the installed base of control systems throughout industry as a guideline, perhaps one in 10 loops warrants feedforward; the rest perform satisfactorily, if not more reliably, on feedback control alone.
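The distinction is easy to see in a single loop. In the sketch below (textbook PI control plus a feedforward term; all names hypothetical), setting kff to zero recovers pure feedback control, while any nonzero kff buys disturbance rejection at the price of a disturbance model that must be kept accurate:

```python
def pi_with_feedforward(error, integral, kp, ki, dt, disturbance=0.0, kff=0.0):
    """Textbook PI feedback output plus an optional feedforward term.

    The feedback part needs only the measured error; the feedforward part
    needs a measured disturbance and a gain (kff) that models its effect --
    the model that must then be maintained for the life of the loop.
    Returns (controller output, updated integral state)."""
    integral += error * dt
    output = kp * error + ki * integral + kff * disturbance
    return output, integral
```

With kff=0 the loop still closes reliably on feedback alone, which is exactly how roughly nine in ten installed loops operate.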
This calls into question the MPC paradigm of “wholesale” feedforward—literally hundreds of mass-produced feedforward models—and suggests it may be as much a source of MPC’s persistent maintenance and performance problems as a solution to them. The top priority of APC—as with single-loop control—is to reliably close the loops, not necessarily to use feedforward in doing so. Classic selective feedforward strategy is implicit in figure 3.
These perspectives point toward an APC paradigm that is more affordable, agile, scalable, and reliable, based on durable qualitative (not detailed) models, sans embedded optimizers, and with more intuitive and succinct matrix designs. Figure 4 compares the traditional and proposed paradigms.
In operating facilities, multivariable control applications come in all sizes—from a handful of variables to several dozen—so a smaller-footprint solution can bring progress on two fronts. It can provide more appropriate tools for the many applications that have remained below the radar of industry’s large-matrix paradigm, and it can provide an alternative reengineering strategy for industry’s many high-maintenance legacy applications.
The proposed paradigm derives from long experience and lessons learned under the traditional APC paradigm. To the extent the new paradigm has yet to fully emerge, industry may benefit from adopting it as a working vision: a way to pursue these insights and lessons, encourage outside-the-paradigm thinking, move APC beyond its original paradigm, and bring about new, more viable, and more sustainable APC solutions for industry.
We want to hear from you! Please send us your comments and questions about this topic to InTechmagazine@isa.org.