01 August 2004
No magic wand
Keep your eyes open to all factors, not just a single tool.
By Matt Bothe
Since the introduction of practical pneumatic control early in the 20th century—followed by digital control—conventional proportional-integral-derivative (PID) control has had considerable benefits over the manual alternatives, thereby leading to significant increases in manufacturing productivity.
Key benefits include reduced direct manpower requirements, increased equipment utilization, and reduced variation. However, throughout the final quarter of the 20th century, as international as well as domestic competition increased, more and more manufacturers desired to further increase equipment utilization through decreased variation. The problem was that limitations in computer technology and software availability slowed progress of control methodologies beyond conventional PID until the late 1980s and 1990s, when interests in advanced process control increased.
Considering the economics of supply and demand, expanded demand from manufacturers led to greater opportunities for suppliers to satisfy these demands through lucrative contracts. As a consequence—armed with more powerful computers, a variety of software packages, and implementation methods claiming benefits, often with unsubstantiated evidence of quality performance—unsuspecting users thought the product was the solution. They didn't consider the multitude of factors that had consequential influence over the ultimate success of advanced process control.
Implementing advanced process control can be complex, yet highly rewarding. Cost, however, need not be a determining factor. True advanced control projects can happen at a reasonable cost, using existing hardware and control infrastructure as well as software tools that reside on existing operating platforms. The complexities of such an effort depend largely on the process application and the influences that most significantly affect unit operations. For this reason, one can gain a great deal by understanding the process as well as those factors that directly affect performance, and not necessarily a vendor's software program.
An increasing amount of research and experience in advanced control techniques has demonstrated that few, if any, physical processes (continuous or batch, new or old, hybrid or legacy) exist that one cannot improve through advanced process control. Therefore, the question should not be "should we apply advanced control?" but rather, "where and how?" Despite the advantages and simplicity of the PID algorithm, conventional PID control is not without limitations. Among the shortfalls:
- It is inherently reactive, acting on variations in the controlled variable only after a disturbance propagates through the controlled process.
- It assumes process linearity as evidenced by the frequent need to retune as operating conditions change.
- It does not account for other measurable parameters interacting with the controlled variable.
- Conventional PID implementations do not compensate directly for measurable disturbances, but depend on changes to the controlled variables before indirectly compensating for the disturbance.
- Conventional PID algorithms do not inherently compensate for dead time and are not suited for processes with excessively long lag times.
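The reactive nature noted in the first shortfall can be seen in a minimal textbook PID sketch (all process values here are hypothetical, for illustration only): the controller produces no corrective action until a disturbance has already moved the measurement away from setpoint.

```python
# Minimal textbook PID sketch (hypothetical values) illustrating that the
# controller only reacts after a disturbance shows up as error.

class PID:
    """Discrete positional PID: u = Kp*e + Ki*sum(e)*dt + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=1.0)
# While the measurement sits on setpoint, the controller does nothing --
# an unmeasured disturbance must first move the process before PID responds.
assert pid.update(setpoint=50.0, measurement=50.0) == 0.0
# Only after the disturbance propagates (measurement drops) does output move.
assert pid.update(setpoint=50.0, measurement=48.0) > 0.0
```

Note that nothing in the loop anticipates the disturbance; a measurable upstream change produces no output until it reaches the controlled variable.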
Process variation is the key consequence of PID shortfalls, as well as a key "selling point" for advanced process control. Excessive variation leads to operating margins that one can exploit by applying one or more of the variety of advanced control methods—and create subsequent increases in operating efficiency. Because not all process operations are the same (even among similar product lines), no single control software program or method is the total solution. A user should consider software, for example, as a tool for process improvement, not the sole remedy.
In cooperation with conventional control, advanced control methods
- supplement (not replace) and enhance conventional PID control,
- reduce operational margins through decreased variability (which enables the targeted process to operate closer to constraints),
- provide predictive and supervisory control,
- offer practical interface to real-time optimization,
- provide decoupling for interacting variables,
- provide dead-time compensation, and
- have a unique operational philosophy.
At a typical, inherently continuous, highly leveraged production line such as a commodity chemicals plant, even a 1% efficiency improvement should yield significant recurring returns—with a payback period of less than one year for a typical advanced process control project. One percent represents a conservative estimate; imagine the savings for a typical 2–3% improvement.
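The payback arithmetic behind that claim is straightforward. A back-of-the-envelope sketch (all dollar figures here are hypothetical, not from the article):

```python
# Back-of-the-envelope payback sketch (all figures hypothetical).
def payback_years(annual_operating_cost, efficiency_gain, project_cost):
    """Years to recover project cost from recurring efficiency savings."""
    annual_savings = annual_operating_cost * efficiency_gain
    return project_cost / annual_savings

# A plant spending $50M/yr on feed and energy, a conservative 1% improvement,
# and a $400k advanced-control project:
years = payback_years(50_000_000, 0.01, 400_000)
# 1% of $50M is $500k/yr in recurring savings -> payback in under one year.
assert years < 1.0
```

At a 2–3% improvement the same project cost is recovered in a few months, which is why even conservative estimates tend to justify these projects.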
Executing the plan
The processes involved in advanced process control projects go beyond those of platform selection and software configuration. They entail specific steps from process selection through post-project evaluation and analysis. The following steps are a minimum to ensure success:
1. Process selection. Following a preliminary review of all process units within a production facility, the process unit selected should have the following:
- the greatest potential for improvement in terms of absolute savings
- adequate instrumentation coverage for accurate and viable model identification
- positive product market conditions
- operator/maintenance personnel receptiveness
Although one should make the decision to apply continuous, batch, or supervisory control during process selection, the specific methods (i.e., linear versus nonlinear, predictive versus fuzzy) are more difficult to predict until collected data is processed for model identification. When it comes time to select a tool, remember, "a tool should be selected to fit the problem; do not try to adjust the problem to fit a specific tool."
2. Performance target definition. Performance targets provide project objectives and goals set during conception and metrics for analyzing project successes. The three key determinants of operational performance are efficiency, throughput, and quality.
Influencing the performance goals are the identification and screening of key controlled variables (CVs) that link, directly or indirectly, to one or more of the key determinants listed above. These in turn are influenced by selected parameters capable of being directly manipulated to drive the targeted process to optimum performance.
Without measurable disturbance variables (DVs), whether measured directly or determined inferentially, controller adaptation would not be viable.
3. Data collection and processing. Advanced control models cannot be reliably identified without real-time, time-stamped process data. Two classes of directly measured data are collected, dynamic and steady state, together ensuring coverage of transient behaviors and supporting measurement reconciliation; collecting and processing them are the most time-consuming steps of a classic advanced process control implementation. Data collection often consists of plant testing via coordinated, practical moves of manipulated variables (MVs), while data processing consists of screening, filtering, prioritizing, and grouping data to enable optimum model performance.
In some cases, one may not be able to directly collect data, but it is still required for model development. For these cases, inferential computation may be necessary. Other non-time-dependent data includes information that quantifies operator acceptance and maintenance receptiveness. In addition, with the magnitude and significance of data collected, security policies and authentication procedures should be standard throughout the data collection and processing time.
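The screening and filtering steps above can be sketched simply. The following uses synthetic data and hypothetical instrument limits: out-of-range samples (e.g., sensor dropouts) are discarded, then the survivors are smoothed with a moving average before model identification.

```python
# Sketch of the screening/filtering step (synthetic data, hypothetical limits).

def screen(samples, low, high):
    """Discard samples outside the plausible instrument range."""
    return [s for s in samples if low <= s <= high]

def moving_average(samples, window):
    """Boxcar filter to suppress measurement noise before identification."""
    return [sum(samples[i:i + window]) / window
            for i in range(len(samples) - window + 1)]

raw = [101.2, 100.8, -999.0, 101.0, 100.9, 101.1]   # -999.0 = sensor dropout
clean = screen(raw, low=0.0, high=500.0)
smoothed = moving_average(clean, window=3)
```

Real implementations add prioritizing and grouping on top of this, but the principle is the same: remove implausible values first, then reduce noise, so the identification step sees data it can trust.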
4. Analysis and characterization. The analysis parts of project execution include determining relationships among collected variables. These relationships involve associating variables with one or more of the performance determinants (efficiency, throughput, and quality), prioritizing them according to impact, and characterizing them as interacting or noninteracting parameters.
5. Model identification. Model identification is perhaps the most important, yet least time-consuming, step of the execution phase of an advanced process control project. The identification approach decision should occur during project conception, yet not be tied to any particular application. One should not select an advanced process control tool until wrapping up all prior project phases.
Although a user decides on batch, continuous, or supervisory control before or during process selection, a neural network or multivariable predictive control approach may not be possible until analyzing the data.
Following a thorough understanding of process performance factors, one may just need simple inexpensive control system enhancements such as Smith Predictors (for dead-time compensation), decouplers, or simple feedforward algorithms.
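Of those inexpensive enhancements, the Smith Predictor is a good example of how little machinery dead-time compensation actually requires. The sketch below assumes a first-order-plus-dead-time process model with gain K, time constant tau, and a dead time of a few samples; all parameter values are illustrative, not tuned for any real plant.

```python
from collections import deque

# Sketch of a Smith Predictor for dead-time compensation, assuming a
# first-order-plus-dead-time model: gain k, time constant tau, and a dead
# time of `delay` samples. All parameter values are illustrative.

class SmithPredictor:
    def __init__(self, k, tau, dt, delay):
        self.k, self.tau, self.dt = k, tau, dt
        self.y_model = 0.0                    # model output without dead time
        self.delayed = deque([0.0] * delay)   # dead-time buffer

    def feedback(self, u, y_measured):
        """Return the signal the PID should see instead of y_measured."""
        # First-order model update: dy/dt = (k*u - y) / tau
        self.y_model += (self.k * u - self.y_model) * self.dt / self.tau
        self.delayed.append(self.y_model)
        y_model_delayed = self.delayed.popleft()
        # Undelayed model output plus the model-plant mismatch: the PID can
        # then be tuned as if the dead time were absent.
        return self.y_model + (y_measured - y_model_delayed)

# With a perfect model, the fed-back signal tracks the undelayed response.
sp = SmithPredictor(k=1.0, tau=5.0, dt=1.0, delay=3)
plant_y, plant_delay = 0.0, deque([0.0] * 3)   # "real" plant, same dynamics
for _ in range(10):
    plant_y += (1.0 * 1.0 - plant_y) * 1.0 / 5.0
    plant_delay.append(plant_y)
    fb = sp.feedback(u=1.0, y_measured=plant_delay.popleft())
```

Because the compensated signal removes the dead time from the loop, conventional PID tuning rules apply again—which is why this enhancement is so often sufficient.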
Other factors that influence the "tools of choice" include standards and regulatory compliance requirements. For example, ISA standard S88 may directly affect the way batch code is organized, or 21 CFR Part 11 may force certain remediation efforts to protect data collected and stored if applying classical methods. These influences apply particularly to U.S. Food and Drug Administration-regulated biotech, pharmaceutical, and food processing industries. Another example involves the Environmental Protection Agency, which often influences management of furnace and boiler controls, particularly if using production wastes as fuels.
6. Model testing and simulation. Following model identification, off-line testing is essential for establishing proper relationships (both dynamic and static).
A user can collect, screen, process, and apply additional real-time data to the model in multiple attempts to prove model integrity. An "open-loop" approach can prevent potentially harmful process upsets caused by unforeseen process anomalies and nonlinearity. As a precursor to model adaptation, simulation also provides invaluable opportunities to arbitrarily upset the "virtual plant" without affecting real production.
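An open-loop validation pass can be sketched as follows: replay held-out plant data through the identified model and score the fit, without ever closing the loop. The model form (a first-order step response) and the data here are illustrative stand-ins, not from the article.

```python
import math

# Open-loop validation sketch: replay held-out test data through the
# identified model and score the fit. Model form and data are illustrative.

def predict_first_order(u, k, tau, dt):
    """Simulate a first-order model y' = (k*u - y)/tau from rest."""
    y, out = 0.0, []
    for ui in u:
        y += (k * ui - y) * dt / tau
        out.append(y)
    return out

def rms_error(predicted, observed):
    """Root-mean-square mismatch between model and plant."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed))
                     / len(observed))

u = [1.0] * 20                                              # step-test input
observed = predict_first_order(u, k=2.0, tau=4.0, dt=1.0)   # stand-in plant data
predicted = predict_first_order(u, k=2.1, tau=3.5, dt=1.0)  # identified model
fit = rms_error(predicted, observed)
# Accept the model for closed-loop trials only if the open-loop error falls
# within a pre-agreed tolerance.
```

The acceptance tolerance itself is a judgment call set during performance target definition; the point is that the decision is made on recorded data, with no risk to production.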
7. Closing the loop. Arguably the true "money-making" phase, loop closure (by feeding back controlled variables into the model and comparing them to their respective targets) enables consistent and continuous model adaptability in response to changing operating environments. The influx of disturbances is why we involve ourselves with process control. Without process disturbances, the open-loop model may be all that one needs to apply and adjust (within predefined constraints) for each operating condition.
Batch control, due to its inherent discrete nature, generally demonstrates qualities of open-loop control. Therefore, closing the loop for batch processes provides the greatest benefit for tasks involving production scheduling and related optimization activities.
Recurring effects and continuous adaptation are the primary contributors to the benefits of loop closure for advanced control techniques. After all, elemental PID control, the most commonly applied algorithm in continuous control, has historically shaped many facets of manufacturing, arguably more so than any other control entity. Advanced controllers should apply a similar approach to self-adjustment. The loop closure implementation phase includes all tasks required to install the models into a viable control system and to establish all necessary links.
8. Model adaptation. Typically a product of closing the loop, model adaptation is a special characteristic of advanced control where the model updates its fundamental identifying components in response to changing environments. Unlike conventional PID, which applies an integral (or accumulative) term to readjust the model in response to sustained changes in operating conditions (without necessarily changing its response characteristics), advanced model adaptation intelligently compensates for the differences between "theory" and "reality," and/or between two or more unique operating conditions.
Although highly dependent on the way advanced control algorithms are authored, neural networks, for example, are synonymous with "artificial intelligence," considering their inherent ability to identify patterns through "learning" and recalling the same recognizable patterns as they occur. Therefore, one can successfully apply neural networks to highly nonlinear—and even highly discrete batch—processes. Model adaptation may also apply iterative loops to converge on optimum performance, such as fuzzy or iterative convergence control philosophies.
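In its simplest form, model adaptation is an online correction of the model's identifying parameters from the gap between prediction and measurement. The one-parameter sketch below (a gradient step on a model gain, with illustrative values) shows the mechanism; real adaptive controllers update many parameters at once, but the principle is the same.

```python
# Sketch of model adaptation: a one-parameter model gain updated online from
# the gap between prediction and measurement. All values are illustrative.

def adapt_gain(gain, u, y_measured, rate=0.1):
    """Nudge the model gain toward the value implied by the new sample."""
    y_predicted = gain * u
    error = y_measured - y_predicted
    return gain + rate * error * u   # gradient step on squared error

gain = 1.0                            # initial model belief: y = 1.0 * u
for u, y in [(1.0, 2.0)] * 50:        # plant actually behaves as y = 2.0 * u
    gain = adapt_gain(gain, u, y)
# After repeated corrections, the model gain converges toward the plant's
# true gain -- the "theory versus reality" gap closes automatically.
```

Unlike PID's integral term, which only re-biases the output, this kind of update changes the model itself, so the controller's response characteristics track the plant as it drifts.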
9. Performance assessment. At this point, the advanced control model is in production and most likely resulting in a return on investment for the owner. However, to meet the ultimate objectives, one should evaluate the performance of the controller.
10. Benefits analysis. "The proof is in the pudding." After a period of continuous operation, the recurring benefits should be quite evident. A marked improvement in efficiency, throughput, and/or quality should be easy to identify from a review of historical trends (either visually or statistically). If a user has not achieved targeted goals, and there is evidence that operational improvements are still feasible, one should perform additional data collection to further fine-tune the model. Otherwise, the owner should consider revisiting the process selection tasks. Overall, the total costs and benefits should be compared to determine the rate of return throughout the payback period and the ultimate recurring rate of return over the lifetime of the controller (factoring in routine maintenance and downtime costs).
Tools of the trade
The tools applied to simplify the implementation of advanced process control tasks should not be the "solution," but rather methods to better organize data, coordinate tasks, compute model parameters, and manage the model execution after installation. These tools should not supersede the need to understand the process, identify opportunities to improve process performance, and exercise sound engineering judgment. Among the many tools on the market for continuous, batch, or supervisory control, or a combination of the three, all can fit in as either open or canned applications:
Open applications. These are highly customizable, and although the basic operating environments may be licensed (e.g., spreadsheets), the applications that reside within these environments may not be.
Canned applications. Although subject to licensing agreements, these can be cost-effective at the price of inflexibility. A user should not use canned applications for highly complex and specific processes such as many specialty chemical and pharmaceutical processes, but they can be a great value for common and well-known processes such as oil refining and power generation.
Two common approaches to advanced control include multivariable predictive (for linear processes) and neural networks (for nonlinear processes). Both consist of controlled, manipulated, and disturbance parameters. Other characteristics include variable feedback for model adaptation, feedforward for proactive manipulation, dead-time compensation, and constraint control. Neural networks provide the added capacity to "learn," or self-adapt to changing operating conditions.
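A stripped-down flavor of the predictive approach can be shown in a few lines. The sketch below assumes a simple linear one-step model y[k+1] = a*y[k] + b*u[k] (real multivariable predictive controllers optimize many steps and variables at once); it solves for the move that puts the prediction on the setpoint, then clips it to the MV constraints. All parameter values are hypothetical.

```python
# Minimal one-step-ahead predictive sketch for an assumed linear model
# y[k+1] = a*y[k] + b*u[k]. Real MPC optimizes over a multi-step horizon
# and many variables; parameters here are purely illustrative.

def predictive_move(y, setpoint, a, b, u_min, u_max):
    u = (setpoint - a * y) / b        # invert the model for the ideal move
    return max(u_min, min(u_max, u))  # constraint control: respect MV limits

a, b = 0.8, 0.5                       # identified model coefficients
y, setpoint = 10.0, 12.0
u = predictive_move(y, setpoint, a, b, u_min=0.0, u_max=5.0)
y_next = a * y + b * u                # model-predicted response to the move
```

Here the ideal move violates the MV limit, so the controller takes the largest permitted step—driving the process as close to its constraint as the model allows, which is precisely the margin-reduction benefit described above.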
Depending on the needs of the user, an important consideration for selecting software applications or customizing existing ones involves code compliance. Various standards and regulatory agencies provide guidelines and restrictions that may influence the development of control code.
Despite the ability for advanced controllers to adapt, excessive error can be a destabilizing factor for closed-loop control. Therefore, before identifying the fundamental control model, the strategies for selecting and collecting data should be a compromise between response (speed) and resolution (minimization of error).
One needs a thorough understanding of the process, coupled with engineering judgment, when determining the number of variables involved and the speed at which the controller responds to compensate for disturbances. The objective in controller design is to minimize error (ERR), where

ERR = [ (dA_l / (MV_m − MV_l))² + (dA_u / (MV_u − MV_m))² ]^(1/2)

—the root-sum-of-squares deviation between the actual curve and theory. The greater the nonlinearity, the greater the potential for error and the higher the resolution (reduction in MV_u − MV_l) required to compensate, leading to a greater reduction in the terms dA_l² and dA_u².
For linear models, such as multivariable predictive, error is a function of instrument tolerance, collection variation, and instrument coverage over the targeted process (the more measurements made over a broader range of possibilities, the greater the likelihood for an accurate model). For this case, the balance is between complexity and the amount of marginal benefit lost by not including all possible measurements. Well-prepared algorithms for model adaptation often possess a sufficient degree of tolerance to model error, thereby enabling the development of practical low-cost controllers.
Case in point
In the late 1990s through early 2001, a bulk chemical company in south Texas engaged in an aggressive advanced process control campaign focusing on reductions in process variability to enhance efficiency and throughput and to stabilize quality. The company installed three multivariable predictive controllers throughout the chemical facility, among which only one continues to operate. After a thorough review of all three controllers, keeping the differences in process characteristics in mind, key factors were as follows:
| Successful controller | Unsuccessful controllers |
| --- | --- |
| 1. Linearity over a broad set of operating conditions | Relatively small changes in operating conditions produced marked differences in product characteristics |
| 2. Interacting components well understood | Process not well understood |
| 3. Process well instrumented | Instrumentation coverage questionable |
| 4. Considerable operator involvement | Little operator involvement |
| 5. Opportunity for improvement questionable | Opportunity for improvement evident |
Following a thorough review of the two processes involving the unsuccessful installations, the user determined that the processes were much more nonlinear than expected and little operator involvement was encouraged. Therefore, as a follow-up to the facility's desire to improve process performance, and because the potential for process improvement existed, the user applied a nonlinear modified fuzzy technique.
The selected control methodology consists of the following characteristics:
- Parameters selected to serve as CVs, MVs, and DVs are assigned specific and fuzzy roles.
- Selected ratios are continually computed (using run-time averaging) to provide fast feedforward responses to known and measurable disturbances.
- Tuning consists of pre-bias, or lead, factors (typically for mild and abrupt changes in either direction), post-bias factors for recovery, dwell timers to compensate for dead time (or high-lag periods), and conventional gain/integral tuning to eliminate error.
- A dedicated "faceplate" is provided for simplified operator access to specific controller parameters.
- Provisions exist for an operator to enable and disable a controller.

Following a six-month period of operation (for both controllers), efficiencies in excess of 1.5% have been achieved, thereby contributing to record production periods (two months), highest overall yields (one month), and stable overall quality. Although efficiency and quality were high, market conditions played a major role in throughput.
Whether the successes are due to the controller, to a better understanding of the influencing factors of production, or both, the processes the user followed were not subject to debate. The tools helped achieve a heightened level of consistency, but the thorough understanding of the process and significant operator involvement undeniably led to increased operator acceptance and more time to fine-tune the controllers to better match the dynamics of the processes.
Behind the byline
Matt Bothe is a licensed professional engineer in four states. He has B.S. degrees in chemical and electrical engineering from North Carolina State and an MBA from Texas A&M. He is a member of ISPE and ISA.