01 August 2003
Weapon for mass production
Distributed microprocessing enables control of hybrid operations, without the traditional complexity, cost, and limits.
By Samuel Herb
Communicate any information over any network using any device! That's the vision of pervasive computing, and it is a vision a new breed of process controllers is bringing ever closer to the plant floor. The benefits include greater flexibility, greater scalability, and lower costs of purchasing and operating control technology.
With previous technology, manufacturers wanting to automate simple, discrete applications had few control options other than programmable logic controllers (PLCs). Those automating large-scale process applications had to use distributed control systems (DCSs), and those running mixed operations relied on a combined set of disparate products.
In today's highly competitive global markets, however, manufacturers increasingly need the flexibility to cross traditional control boundaries to offer greater customization and tighter enterprise integration, and to run more complex processes. The need for automation in highly regulated industries is even more pronounced.
Fortunately, automation technology has been advancing steadily and has converged to a point where manufacturers can adapt automation options to their processes and budgets with greater flexibility and lower cost than ever before. Regardless of whether their operations are large or small—and whether they are process, discrete, or some combination—effective, economical control solutions are well within reach.
MAKING DISCRETE OPERATIONS FLOW
Discrete manufacturing operations, such as automotive, aerospace, and machining, typically require relatively simple control functions. Turning a system on and off, managing conveyors, or positioning parts for machining are typical of functions that low-cost PLCs accomplish best.
PLCs have excellent logic handling capabilities, can tolerate rough (although not corrosive) environments, are flexible, and are generally very compact. They do, however, have certain drawbacks, which become more prevalent as applications grow larger, more complex, and more continuous.
PLCs are not, for example, well suited for unit control applications that go beyond simple single-loop control. The more control variables or controllers involved, the more difficult it is to implement a PLC-based solution. Nor can PLCs as easily support fast process loop response times, a shortcoming that can be disastrous for proportional-integral-derivative (PID) control.
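To make that timing constraint concrete, the sketch below shows the discrete-time PID calculation a controller must complete on every sample period; the gains, sample period, and signal names are illustrative assumptions, not values from any particular product. If the controller cannot guarantee the sample period, the integral and derivative terms degrade and the loop can become unstable.

```python
# Minimal discrete-time PID sketch (illustrative only; gains, sample period,
# and signal names are hypothetical, not taken from any PLC or DCS product).
from dataclasses import dataclass

@dataclass
class PID:
    kp: float               # proportional gain
    ki: float               # integral gain
    kd: float               # derivative gain
    dt: float               # sample period in seconds
    _integral: float = 0.0
    _prev_error: float = 0.0

    def update(self, setpoint: float, measurement: float) -> float:
        """Return the controller output for one sample period."""
        error = setpoint - measurement
        self._integral += error * self.dt
        derivative = (error - self._prev_error) / self.dt
        self._prev_error = error
        return self.kp * error + self.ki * self._integral + self.kd * derivative

# The faster the process, the shorter dt must be; a controller that cannot
# run update() reliably every dt seconds cannot hold this loop stable.
loop = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
valve_output = loop.update(setpoint=75.0, measurement=68.4)
```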
You can implement PLCs with human-machine interfaces (HMIs) to give users more opportunity to interact with their applications, but using an HMI from any vendor other than the original PLC manufacturer typically requires separate configuration.
And, because PLCs are sold at the device level, you often need an independent integrator to configure any complex application; the added cost can cancel out the low-cost advantage of the PLCs themselves.
PCs can help control PLCs, but configuration can be a chore.
A LITTLE HELP FROM THE PC
Using PCs to manage PLCs helps somewhat in extending the applicability of the PLC. The low cost of the PC makes it possible—at least in theory—to program control applications from easily available off-the-shelf components, bringing multiple PLCs under PC control. Using a PC as an HMI to configure PLCs over a network is a low-cost approach that enables easy configuration of a robust control solution at a local process. It does, however, have some drawbacks. For one, you must configure each PLC separately and exercise considerable discipline to avoid the pitfalls of operating independent systems, such as duplicating tags.
There is also no peer-to-peer awareness. Using this architecture for complex control strategies that are confined to individual PLCs requires configuring each PC to communicate with each PLC to find specific variables and configuring it again for views, again for history, again for trends, and so on. Likewise, you must configure and reconcile multiple databases.
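To see how quickly that bookkeeping grows, consider the sketch below, which hunts for tag names duplicated across independently configured PLCs; the controller and tag names are purely hypothetical. A single unified database, discussed later, makes this kind of manual cross-checking unnecessary.

```python
# Hypothetical sketch of the tag bookkeeping an integrator must do by hand
# when each PLC is configured separately; all names are illustrative only.
plc_configs = {
    "PLC_01": {"TIC101": "reactor temperature", "FIC205": "feed flow"},
    "PLC_02": {"FIC205": "coolant flow", "LIC310": "tank level"},  # FIC205 reused
}

def find_duplicate_tags(configs: dict[str, dict[str, str]]) -> dict[str, list[str]]:
    """Return tags that appear in more than one PLC, and where they appear."""
    seen: dict[str, list[str]] = {}
    for plc, tags in configs.items():
        for tag in tags:
            seen.setdefault(tag, []).append(plc)
    return {tag: plcs for tag, plcs in seen.items() if len(plcs) > 1}

print(find_duplicate_tags(plc_configs))  # {'FIC205': ['PLC_01', 'PLC_02']}
```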
Because the PC software defines the control strategies, they are very flexible, but the extent to which you can leverage that flexibility depends on the availability of systems integrators with sufficient industry knowledge. If you have to bring in outside services, the expense can easily offset any cost advantages to the implementation.
Traditional DCS architectures handle large, complex processes.
CONTROLLING COMPLEX PROCESSES
The more continuous and complex discrete operations grow, the more they resemble process applications such as chemical, oil and gas, and food and beverage production, where sequences of multiple variables need regulation. Such processes were among the first to undergo automation. A central mainframe or minicomputer managed the initial systems.
Central computers enabled secure, compute-intensive operation for multiple loops, but the process control logic, user interface, and database were all part of the same system. When any part of that system went down, production stopped until a fix took place.
The distributed control system reduced the risk inherent in central computers by distributing the user interface, control logic, and database functionality among different circuit boards within the computer, so that a single failure no longer brought down the entire system.
Then, as industrial networks became robust enough to avoid ground loops and other hazards, it became possible to distribute processing physically as well. This reduced the risk and high costs associated with mainframe or minicomputer implementations, without losing the benefits of central control.
With such an architecture, the user could configure the entire system from a workstation. Now, however, distributed databases were part of the mix as well. Each controller could access the database of the other controllers, providing the peer-to-peer synchronization needed for complex strategies.
A copy of the database traditionally would reside on the workstation. Depending on the application and vendor, you might access it in real time, download it to controllers, and upload it for backup. Views, history, and trends could be configured as needed.
A significant benefit of the DCS architecture is that it allows the use of a single unified environment, often called an integrated development environment, to create the application.
This is different from PLC-based systems, which require users to use one application development package to configure the control strategy running in the PLC and another to configure the supervisory interface.
With a DCS, supervisory applications such as the database, alarms, trends, displays, and system management emanate automatically from the control strategy.
The distributed nature of the system also enables common creation of control strategies from a function block library, which simplifies creating, testing, and subsequently validating the control applications.
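As a rough illustration of the idea, the sketch below wires two small, reusable blocks into a strategy; the block names and wiring are hypothetical rather than drawn from any vendor's library. Because each block is a tested library element, validation can focus on the wiring rather than the math inside the blocks.

```python
# Spirit-of-a-function-block-library sketch: small, reusable blocks wired
# together into a control strategy. Block names and wiring are hypothetical.
class Scale:
    """Convert a raw input (e.g., 0-4095 counts) to engineering units."""
    def __init__(self, raw_lo, raw_hi, eng_lo, eng_hi):
        self.raw_lo, self.raw_hi = raw_lo, raw_hi
        self.eng_lo, self.eng_hi = eng_lo, eng_hi

    def __call__(self, raw):
        frac = (raw - self.raw_lo) / (self.raw_hi - self.raw_lo)
        return self.eng_lo + frac * (self.eng_hi - self.eng_lo)

class HighAlarm:
    """Flag when the value exceeds a limit, passing the value through."""
    def __init__(self, limit):
        self.limit = limit
        self.active = False

    def __call__(self, value):
        self.active = value > self.limit
        return value

# Wire the blocks: analog input -> scaling -> high alarm.
ai_scale = Scale(raw_lo=0, raw_hi=4095, eng_lo=0.0, eng_hi=150.0)  # deg C
temp_alarm = HighAlarm(limit=120.0)

temperature = temp_alarm(ai_scale(3900))
print(round(temperature, 1), temp_alarm.active)  # 142.9 True
```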
It is also much easier to implement redundancy at both the control and supervisory levels, making DCSs more appropriate for operations requiring high availability.
But despite such benefits for sequential, multivariable operations, DCSs were still not as well suited for rough factory-floor environments or high-speed switching as PLCs.
And, although less expensive than the central computers, the cost of implementation made DCSs cost effective only for applications involving numerous control loops.
The first generation of hybrid control systems integrated DCS and PLCs and kept many systems integrators employed.
CONTROLLING HYBRID OPERATIONS
Although PLCs could easily do rapid discrete actions, and DCSs originally were well suited primarily for process control loops, neither was particularly good at batch control. So when a complex process needed both discrete and analog processing, a hybrid kludge would come about.
Usually such a system was custom tailored for each application, which increased complexity significantly and added costs for configuring, programming, documentation, training, and troubleshooting.
Matching different systems across multiple protocols required complex linking strategies, which increased vulnerability, slowed data movement, and jeopardized the overall quality of the process information.
These first-generation hybrid control architectures involve nearly every conceivable combination of components, including single-loop controllers. It is also possible to implement local operator interfaces with DCS controllers or PLCs, but when this is done, the DCS and the PLC are likely to be configured separately, from different configuration stations, using different methods.
In addition to being harder and more expensive to configure and change, these first-generation hybrid control systems do not enable easy peer-to-peer communication and synchronization, which makes integration with advanced control or enterprise systems very difficult.
A new generation of control systems is now emerging that enables control of hybrid operations without the complexity, cost, and limitations of the traditional approach.
New architectures leverage advances in microprocessors to bring cost-effective intelligent control to discrete operations, to smaller scale process operations, and to hybrid operations integrating discrete and process functionality.
Rather than simply networking traditional DCSs and PLCs together like the original hybrid approaches, this new breed links microprocessor-equipped controllers together to form a mini-DCS. In this way, a very small operation can enjoy economical multiloop control. Or, a large organization can enjoy greater flexibility in deploying control strategies throughout its operations.
The traditional DCS got its name because it distributed the functionality of a central computer; the distributed processor–based systems take this same functionality and distribute it physically into the plant.
This distributes risk, intelligence, and processing power deeper, without losing the benefits of central control and configuration. In this way, you can implement DCS functionality economically and technically from a few I/O points up to thousands of I/O points.
Users can configure the controllers and diagnostics as a single system, like a DCS, but can also use open technology to connect controllers to other HMIs, much as they would a PLC.
Even if each controller were running a different control strategy, it would be possible to enforce a consistent HMI throughout the plants to simplify maintenance and keep operator navigation consistent. This also makes it much easier to implement screen and alarm hierarchies at operator stations wherever process control functions require them.
Implementing a distributed processor system delivers cost benefits at just about every level, yet it still places alarm detection and preliminary alarm management within each controller, where they belong.
There are, for example, distributed processor–based controllers designed to control one to four loops. One of these would cost about the same as a comparable PLC, but would bring the added ability to scale intelligently to become part of a larger system.
This type of controller is a self-contained micro-DCS that can handle up to four loops of continuous control with embedded I/O. It includes a wide range of function block libraries, with control structures ranging from cascade to ratio control. It is expandable, integrates seamlessly via peer-to-peer communications, offers multiple I/O options, and supports batching as well.
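To show what two of those structures look like in principle, the sketch below wires a bare-bones PI block into a cascade arrangement and then computes a ratio setpoint; the gains, units, and signal names are illustrative assumptions, not values from any product's library.

```python
# Illustrative cascade and ratio structures built from a bare-bones PI block.
# Gains, units, and signal names are hypothetical.
class PI:
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

# Cascade: the outer (temperature) loop's output becomes the setpoint of the
# inner (flow) loop, which drives the valve.
outer = PI(kp=1.5, ki=0.2, dt=1.0)   # reactor temperature loop
inner = PI(kp=0.8, ki=0.5, dt=0.2)   # heating-medium flow loop

flow_setpoint = outer.update(setpoint=180.0, measurement=172.0)        # deg C
valve_output = inner.update(setpoint=flow_setpoint, measurement=14.0)  # kg/min

# Ratio: the "wild" (uncontrolled) flow sets the setpoint of the controlled
# flow through a fixed ratio, a common blending structure.
ratio = 0.25
wild_flow = 40.0                      # measured but not controlled
controlled_setpoint = ratio * wild_flow
```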
Distributed processor-based solutions provide additional control functionality. These solutions include an advanced HMI, through which users can implement continuous control, sequential control, batch control, set point control, trending and logging, touchscreen control, and recipe management, as well as an I/O network that integrates with field devices over digital networks such as Profibus and Modbus.
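As an illustration of what talking to a field device over such a network involves, the sketch below hand-builds a Modbus/TCP read-holding-registers request; the host address, unit ID, and register range are assumptions chosen for illustration, and a production system would normally use a maintained Modbus library rather than raw frames.

```python
# Illustrative raw Modbus/TCP "read holding registers" (function code 3)
# request. Host, unit ID, and register addresses are hypothetical.
import socket
import struct

def read_holding_registers(host: str, start: int, count: int,
                           unit_id: int = 1, port: int = 502) -> list[int]:
    pdu = struct.pack(">BHH", 0x03, start, count)             # function, address, quantity
    mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit_id)  # MBAP header
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall(mbap + pdu)
        reply = sock.recv(1024)
    # Normal reply layout: 7-byte MBAP, function code, byte count, register data.
    byte_count = reply[8]
    data = reply[9:9 + byte_count]
    return [int.from_bytes(data[i:i + 2], "big") for i in range(0, byte_count, 2)]

# Hypothetical usage: read four registers from a flow transmitter gateway.
# registers = read_holding_registers("192.168.1.50", start=0, count=4)
```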
You cannot achieve this breadth of functionality economically with a conventional DCS, particularly as the number of control loops drops below 100; it simply is not worth the money.
For example, the cost of implementing a typical DCS-based industrial boiler control installation involving an operator workstation, an engineering station, 50 points of I/O, and an historian would be about $25,000 for the first boiler and approximately $19,000 for subsequent boilers in the same facility.
Using distributed processors, however, the first boiler could be done for about $19,000, with each subsequent boiler added at $9,000.
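Over a four-boiler facility, that works out to $25,000 + (3 × $19,000) = $82,000 for the conventional DCS versus $19,000 + (3 × $9,000) = $46,000 for the distributed processor approach, a savings of more than 40 percent.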
Distributed processor-based automation architectures enable control of processes from workstations, PCs, or built-in displays at local controllers.
LOWERING TOTAL COST
These comparisons rely on purchase and implementation price only. There are also significant benefits that lower the total cost of owning a system. With a distributed processor–based system, standards are much easier to implement and enforce.
A graphical interface with IEC 1131 tools such as function block diagrams, sequential function charts, and structured text makes control strategies much simpler and less expensive to configure than ladder logic. The single database also lets users manage the control strategy and the tag data without the errors that creep in when multiple databases must be reconciled.
The common HMI also means that configuration can happen on any PC, even on a laptop in an airplane, and suppliers typically provide configuration tools that work with standard spreadsheets or other off-the-shelf applications. It also means that any PC can serve as a process simulator (with appropriate software, of course).
The overall integration of the components of distributed processor systems lends itself to much easier database management, which means faster access to required information by the various components, as well as by the operators and even by the business systems.
Other benefits, which are not always as obvious to users—until they get into the heat of installation—are the uniformity of quality assurance that comes from operating a system as a unit and the uniform documentation that is becoming increasingly important.
This is especially important when the system needs to go through validation testing for regulatory agencies. More and more industries are requiring increasing degrees of validation.
Uniformity also means that maintenance is more consistent, not just because of the system-wide diagnostics, but because of ease of troubleshooting and the ability to review the control strategy configuration and the designer's intention.
And because the same training applies to more of the components, less training is required overall. The uniformity of system-wide diagnostics also tends to reduce ambiguity, so locating problems is much faster and more effective.
With such scalable, affordable functionality, DCS capability can be delivered to applications that previously used primarily PLCs or that were not automated at all, including the following examples:
- Skid-mounted original equipment manufacturer (OEM) projects, such as gas delivery systems
- Boat monitoring and shipboard control systems
- Municipal water treatment systems
- Boiler systems
- Waste burning
- Offshore platforms
- Ethanol plants
- Metals processing
Where some control points will never need to integrate or to support alarming, trending, and other intelligence, PLCs may continue to be the control strategy of choice. But if there is even a chance that the application will change or grow, starting out with a micro- or mini-DCS will cost no more and will offer far greater potential for process improvement.
The trend is toward pervasive computing. Control intelligence is being distributed ever closer to the process, the interface is moving closer to the user, and there is at least some interest in deploying communications across the Internet.
Whether closer, more mobile, and more public communications media are better or add more value to the manufacturing process remains to be seen. This will, of course, depend on users' evaluation of their control requirements.
They must as always weigh the trade-offs to pick the control solution that fits their process at a price they can afford. What is certain now, however, is that distributed microprocessor–based computing has brought us to a new plateau of scalability, flexibility, and economy.
Each manufacturer can now base its control strategies far more on business needs than on the limitations of available technology.
Behind the byline
Samuel Herb has a BSEE from Drexel University, is a member of the Industrial Computing Society, and is a senior life member of ISA. He is also a registered professional engineer. Herb is a process control specialist in the Foxboro Automation Platform Marketing group of Invensys, and he works on the Architecture by ArchestrA (A2) technology that he writes about here. Contact him at firstname.lastname@example.org.