September/October 2012
Special Section: Statistical Process Monitoring

Statistical process monitoring turns process noise into valuable information

Statistical Process Control can be used to predict future conditions, enabling operators to intervene before a product no longer meets specifications

Fast Forward

  • Statistical Process Control (SPC) has been very effective in discrete manufacturing, yet the continuous-process version of SPC, statistical process monitoring (SPM), seems to have been applied less often.
  • Data obtained from field devices and combined with information from the data historian can be used to implement SPM, alerting operators to impending problems.
  • Implementing a statistical process monitoring system is a learning experience that can yield improved operations to reduce process upsets and operate the process closer to the limits.
 
By Thomas Wallace and John Miller

Anyone who works in process control has probably wished there were a way to look into the future, to predict process upsets before they occur. What we are talking about is statistical process control (SPC). It involves keeping track of small changes in process conditions to predict future conditions, enabling operators to intervene before a product no longer meets specifications. In discrete manufacturing, the characteristics of parts coming from a machining station (exact dimensions, and perhaps surface finish and other attributes) can be monitored and made available to the operator. Gradual changes alert the operator that a cutting tool is getting dull, for example, so the tool can be changed before any bad parts are produced.

Yet the continuous-process version of statistical process control, statistical process monitoring (SPM), seems to have been applied less often. One reason for this could be that the only variables usually displayed to process operators have been PV and setpoint. It takes a sharp mind and many years of experience to learn to tell when a process is about to go out of control before it actually does. And in many process industries, the operators with the necessary knowledge and experience are rapidly reaching retirement age.

Nevertheless, SPM can be applied successfully to the process industries, and multiple articles on SPM are available.

Abnormal situations cause changes in many variables, some of which are detected in intelligent field devices but never reach the main process control system. In this article, we will discuss the information that can be obtained from those field devices, such as the pressure transmitters used in differential flow measurement, and ways in which that data can be combined with information from the data historian to alert operators to impending problems.

What is actually detected?

Each abnormal situation (plugged impulse lines, loss of agitation, entrained air, process leakage, cavitation, and column flooding) generates a specific signature, identifiable by a close analysis of the process noise's standard deviation, its coefficient of variation, or both. These statistics, along with the mean, vary considerably from process to process, and SPM cannot identify the specific cause of an abnormal situation without the participation of the user, but they provide the data needed to make predictions.
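
As a rough illustration (not any vendor's firmware), the statistics behind such a signature can be computed from a window of high-rate samples. The function below is a minimal Python sketch:

```python
import statistics

def noise_signature(samples):
    """Summarize a window of high-rate transmitter samples as the
    (mean, standard deviation, coefficient of variation) triple
    used here as a simple process-noise signature.

    `samples` must contain at least two values."""
    mean = statistics.fmean(samples)
    stdev = statistics.stdev(samples)
    cv = stdev / abs(mean) if mean else float("inf")
    return mean, stdev, cv
```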

One possible reason for not using SPM is that there has not been a way to provide the operator with all the needed information, largely because the data needed to create it has not been available. The signal that a process transmitter sends to a process control system tends to vary smoothly, as shown by the bottom curve in Figure 1; this is because updating the input to a process loop more often than once or twice a second can lead to process hunting and valve cycling. But the heavy filtering required to obtain that smooth variation masks the fact that the process itself tends to be noisy. The top curve in Figure 1 shows the actual process variable as seen by the transmitter, which has a sample rate of about 22 Hz. The standard deviation of the process noise, shown in the middle curve in Figure 1, represents the process signature.
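
A minimal simulation, assuming the roughly 22 Hz sample rate mentioned above and an illustrative first-order filter, shows how heavy damping masks noise that the raw samples still carry:

```python
import random
import statistics

FS = 22            # transmitter sample rate, Hz (per the article)
ALPHA = 0.02       # heavy first-order filter coefficient (illustrative)

raw, filtered = [], []
pv = 50.0
for _ in range(FS * 60):                  # one minute of samples
    sample = 50.0 + random.gauss(0, 0.4)  # noisy value seen by the sensor
    raw.append(sample)
    pv += ALPHA * (sample - pv)           # damped value sent to the DCS
    filtered.append(pv)

print("std dev of raw samples: ", statistics.stdev(raw))       # noise visible
print("std dev of filtered PV: ", statistics.stdev(filtered))  # noise masked
```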

Many of the calculations required to create the variables used in SPM are best done in the transmitter; many of today's smart transmitters can calculate the individual SPM variables as well as the appropriate adaptive limits and alert values. That information forms the first part of the SPM.
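
The details of those adaptive calculations are proprietary to each device, but a hypothetical sketch of the idea (slowly learn a baseline for a noise statistic, then alert when the live value drifts outside a band around it) might look like this:

```python
class AdaptiveSpmLimits:
    """Hypothetical sketch of adaptive alert limits: learn a baseline
    for a noise statistic (e.g., standard deviation), then alert when
    the live value drifts outside a band around that baseline."""

    def __init__(self, band=0.5, learn_rate=0.01):
        self.baseline = None      # learned "normal" value
        self.band = band          # allowed fractional drift (here +/-50%)
        self.learn_rate = learn_rate

    def update(self, live_value):
        """Feed one new statistic; return True if an alert is due."""
        if self.baseline is None:
            self.baseline = live_value
            return False
        low = self.baseline * (1 - self.band)
        high = self.baseline * (1 + self.band)
        alert = not (low <= live_value <= high)
        if not alert:             # adapt only while behavior looks normal
            self.baseline += self.learn_rate * (live_value - self.baseline)
        return alert
```

Adapting the baseline only while the value looks normal keeps the limits from slowly chasing a developing fault; whether a real device works this way varies by vendor.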

The second part of SPM is done by the host system. The host provides optimized displays to the operator and process engineer that replace traditional monitoring point displays. It uses the SPM data generated by the smart field device to create a process fingerprint. It also provides a data historian with time-synchronized alarms, handles alarm management, and correlates multiple process loops for multi-loop SPM and process optimization.
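
As one simple illustration of the multi-loop correlation step (an assumption on our part, not any particular system's API), a Pearson correlation between two loops' time-synchronized SPM series can flag loops whose disturbances move together:

```python
import statistics

def loop_correlation(series_a, series_b):
    """Pearson correlation between two loops' time-synchronized SPM
    series (e.g., per-minute noise standard deviations). Values near
    +1 or -1 suggest the loops' disturbances are related."""
    mean_a = statistics.fmean(series_a)
    mean_b = statistics.fmean(series_b)
    cov = sum((a - mean_a) * (b - mean_b)
              for a, b in zip(series_a, series_b))
    denom = (sum((a - mean_a) ** 2 for a in series_a)
             * sum((b - mean_b) ** 2 for b in series_b)) ** 0.5
    return cov / denom if denom else 0.0
```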

Implementation

All this sounds fine, but implementing it presents design and configuration challenges for user personnel.

The control system designer must learn enough about SPM to design and implement the appropriate monitoring strategies. He or she must determine protocol and diagnostics version requirements, then design and document the monitoring strategy (by protocol and version), the configuration module templates, and the operator faceplates. Next comes designing the alarm management strategy and the data historian configuration. This entails a fair amount of effort and time, although software is now available that can automate many of the necessary calculations.

In designing for the configuration engineer who sets up the control system, it is necessary to eliminate up-front design time and minimize configuration, to pre-engineer the process history view chart (with alarms), to make sure the system works with HART and FOUNDATION fieldbus devices, and to provide meaningful user help functions.

Meeting these goals involves designing ways for the configuration engineer to link variables to control blocks, configure alarm management logic, determine and set alarm limits, configure process history views, and configure the operator faceplates.

For the system to be used and accepted by the operator, it must provide consistent but customizable operator faceplates (Figure 2), with buttons to launch such displays as Detail Picture, PHV Trend, or Field Device View. It must also provide alarm management that automatically enables and disables SPM alarms (Figure 3).

And for the process engineer, the system must provide the ability to fingerprint the process and capture fast-sample-based, time-synchronized SPM data and alarms, as shown in Figure 4.

It is also important to consider nuisance alarms, which operators and engineers greatly dislike. A plant shutdown or even a product grade change will cause sudden changes in variables that would cause the SPM system to generate alarms, so a necessary precaution in setting up the system is to include a way to logically toggle off all the SPM alarms during the change. When the setup or grade change is complete, another simple logic toggle can re-enable the alarms.
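
That toggling logic can be as simple as a single suppression flag consulted before any SPM alarm is annunciated. The class below is a minimal sketch of the precaution, not any particular system's alarm manager:

```python
class SpmAlarmManager:
    """Minimal sketch of the suppression precaution described above:
    one toggle disables all SPM alarms during a shutdown or grade
    change, and a second toggle re-enables them afterward."""

    def __init__(self, spm_alarm_tags):
        self.spm_alarm_tags = set(spm_alarm_tags)
        self.suppressed = False

    def begin_transition(self):
        """Operator (or grade-change logic) starts a transition."""
        self.suppressed = True

    def end_transition(self):
        """Transition complete; SPM alarms are meaningful again."""
        self.suppressed = False

    def should_annunciate(self, tag, in_alarm):
        """Suppress only SPM alarms; pass other alarms through."""
        if tag in self.spm_alarm_tags and self.suppressed:
            return False
        return in_alarm
```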

Fortunately, some of today's more modern control systems and field devices have capabilities that make many of these tasks considerably easier. For the control system, these may include preconfigured control module templates, process history view charts and operator faceplates, as well as help facilities to ease the learning process.

There are field devices that eliminate or automate many or all of the SPM design tasks, leaving just a few configuration tasks. These can include configuring a few linked variables to control blocks; setting alarm limits and linking the process parameters or operator actions that enable or disable pre-engineered alarm management logic; and configuring the operator faceplates, which may be as simple as specifying three parameters to be displayed.

How to get started

Implementing a statistical process monitoring system is a learning experience and requires some initial experimentation. Since an SPM system works by using process information to generate signatures, the first step in implementing the automated system is to consult the operators and tap their insight, intuition, and accumulated knowledge as part of the implementation process.

The next step is to make an educated guess as to the problems that are likely to arise in the plant and the points at which these problems can be detected: pumps that are likely to cavitate, agitators that tend to stop, columns that are subject to flooding, lines that tend to plug, and so on.

The next step can be done broadly or in detail (painting with a broad or narrow brush), as desired. It begins with establishing trend lines of what normal operation looks like: establish a baseline while the process is running at steady state. It is best at this point to disable any SPM-based alarms until a good understanding of normal operation has been developed.

Once that step is done, it is time to set the alarm limits and run with alarms enabled. In some systems, the field devices have the ability to automatically and adaptively calculate those limits.
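
Where the devices do not calculate the limits automatically, initial limits can be derived from the statistics logged during the baseline step. The helper below is one illustrative approach; the three-sigma spread is an assumed starting point, not a rule:

```python
import statistics

def limits_from_baseline(baseline_values, spread=3.0):
    """Derive initial SPM alarm limits from statistics logged during
    known-normal, steady-state operation: center the limits on the
    baseline mean and open them by `spread` standard deviations."""
    center = statistics.fmean(baseline_values)
    width = spread * statistics.stdev(baseline_values)
    return center - width, center + width
```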

When abnormal conditions occur, capture all the related data, analyze it, and adjust accordingly.

With all this information in hand, it is time to develop a reference book for use by the operators. This should begin by documenting what "normal" looks like (that is, the process signature). More than one "normal" signature may be required, based on plant operating parameters.

Similarly, capture the signatures of abnormal conditions and the alerts they generate. Capture the process signature of upstream, downstream, or any related parameters that could have a cause/effect relationship. Then examine the records to identify and document the earliest reproducible signature of each abnormal condition. This may involve data from the monitored point or from upstream or related points.
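
Once a library of documented signatures exists, matching a live signature against it can be as simple as a nearest-neighbor comparison. The toy function below sketches that "reference book" lookup, under our assumption that each signature is a (mean, standard deviation, CV) triple; the example values are hypothetical:

```python
def nearest_signature(live, library):
    """Return the documented condition whose stored signature is
    closest (Euclidean distance) to the live one.

    `live` is a (mean, stdev, cv) tuple; `library` maps condition
    names to signatures, e.g. {"cavitation": (49.8, 2.1, 0.042)}."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    name, _ = min(library.items(), key=lambda kv: distance(live, kv[1]))
    return name
```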

Next capture the corrective actions needed to re-establish control and the process signature of each of those corrective actions. Then document the most effective corrective actions for each abnormal condition, and update operator guidelines to include the solution as a best practice.

Getting started: engineering improvements

It is unwise to think everything in such a scenario will go exactly as expected from the start, which is why the initial steps are experimental. It is wise to expect the unexpected in the process, and to experiment, because at this point, it is impossible to know even what is unknown. Some process points will yield new insights, and others will not; only experience will reveal which is which.

If the control system provides templates, it is a good idea to start with them as is, without modification. This gives an opportunity to learn what the operators and others with a stake in the outcome find useful and what they do not. It also will provide guidance on where and how to use them in the process.

Modifying templates should be done slowly and deliberately. Modification and reuse are good, but they are not required. It is best to determine what is needed, modify accordingly, and document what was done and the result. That should be followed by trial runs to verify the modifications and make sure they deliver the hoped-for results.

Once that is done, the resulting arrangement can be adopted as a plant standard: Implement continuous improvement strategically as your goals change or you identify additional opportunities. Stop when you are comfortable with the results, but do not forget to re-evaluate periodically to allow for future improvements.

Business results expected

As experience is gained in implementation in one part of the plant, other parts should go along more easily, with less time needed for learning, design, and configuration. With experience, engineering will gain additional process engineering insight and learn to diagnose and eliminate root causes of process problems. Operators will gain additional insight into the processes they control, learn to anticipate and prevent abnormal situations, be able to reduce process upsets, and be able to operate the process closer to the limits.

ABOUT THE AUTHORS

Tom Wallace has more than 30 years of experience in the process industry with various divisions of Emerson Process Management. His experience spans instrumentation, control systems, and asset management systems in product management, strategic planning, R&D program management, global marketing management, and industry marketing roles. He is currently a senior marketing manager with the Rosemount Division of Emerson. John Miller works for the Rosemount Division of Emerson Process Management, developing and promoting advanced diagnostics technology and researching future pressure products. He has a B.S. in Mechanical Engineering and an M.S. in Electrical Engineering, both from the University of Minnesota. He is a named inventor on 23 patents and has other patent applications pending.