01 January 2004
The future is now
Comparing costs, installation, maintenance, and upgrades.
By John Telford
Process monitoring is an emerging development, driven by the need for greater security and safety rather than by the control of the actual manufacturing process. One of the unique features of our efforts is the redirection of existing instrumentation techniques to monitor parameters we did not previously observe. This refinement uses automated measurements to expand the quality envelope.
By monitoring the quality of the electrical power fed to a process, we can prevent damage to costly equipment. By monitoring atmospheric quality, we can prevent product damage that would not show up in inspections until farther down the assembly line. By monitoring atmospheric pressures in shielded areas, we can prevent injuries to employees caused by hazardous component leakage. In many high-technology environments, this need to protect workers is becoming even more critical because modern systems are so complex.
Here is a set of guidelines useful for designing instrumentation systems as separate entities from process control systems.
A mainstream area
In the recent past, process control and instrumentation referred to the operation of a particular process. Controlling oven temperature and product heating times are examples of traditional process control and instrumentation applications. Some of these systems are safety significant, because a failure could damage the plant or injure personnel. Redundancy and fail-safe measures emerged to reduce the possibility of such failures. Process monitoring systems, on the other hand, are not themselves safety significant, but they considerably widen the safety envelope.
As control systems become more sophisticated, it is possible to monitor parameters beyond those necessary to control a specific process. This ancillary data can serve in subsequent process analysis and add a measure of safety responsiveness within the facility. A run of bad product may correlate with data gathered from a separate power quality monitor. Power quality is not usually part of process control, but it is beginning to be an element of diagnostic system monitoring. Gathering, analyzing, and correlating data from previously unobserved sources is now becoming a mainstream area, and analytical methods are emerging to deal with it.
Metals manufacturing study
The case in this study involves a light metals manufacturing area where the metals are toxic and very reactive. Glove boxes enclose the work for the entirety of the process to provide a shielded environment. The glove-box enclosure provides a wealth of opportunity from the standpoint of systemic monitoring.
From the outset, a modern, comprehensive system has to monitor all parameters of the work areas. In days gone by, process control involved only the actual devices within the glove boxes. Now the system has to monitor temperatures, humidity, pressures, oxygen content, lighting levels, sound levels, glove integrity measures, and any toxic gas leakage from the glove boxes. Yes, alarm systems are in place to detect leakage of toxic gases. But keep in mind that the system we are considering operates above the level of normal process controls.
Also included in the input stream is power quality data: phase-to-phase voltages, phase-to-ground voltages, neutral-to-ground voltages, and current flow. This system simply gathers data and archives it. Reports from the data, and correlation with product quality, are available. Most significantly, when management aims for a very high degree of safety, adding intelligence to any process gives a measurable improvement in the safety factors.
User friendly wireless
Process monitoring needs to include data from all sources. The power of modern computer systems is such that there is no longer an excuse to exclude seemingly irrelevant data. A comprehensive analysis program can quickly run through large amounts of available data and look for relationships and correlations. Process monitoring includes both a hardware and a software component, and a system design should consider both. As to cost, a process monitoring system is definitely an expense to the business. No direct gains can be attributed to its presence; it yields only a measure of improved safety and a heightened awareness of the manufacturing process. In the case of a government-sponsored facility dealing with highly toxic materials, a process monitoring system has become a necessity despite the cost.
The design process should include three areas: initial considerations, normal operation, and future maintenance.
Initially, a survey must take place to discover data-gathering possibilities. You need to conduct interviews with personnel related to the processes. It's also important to consider data organization and access control. A series of spreadsheets is sometimes very useful for entering and displaying data and for creating subsequent analyses and reports.
Consider the reliability and accuracy of all sensors during the design phase. Include manual data entry points. Portable data entry devices may serve to gather certain process data. Personal digital assistants (PDAs) are good enough now to run spreadsheet programs, and they can synchronize to a network server. Companies must set up communications so that the sensors transmit data to a common point of collection.
Wireless linkages are user friendly these days. Wireless data transmission eliminates the necessity of hard-wiring the sensors and the PDAs. Data collection hardware must go where it can be most effective. Currently, data loggers have taken the place of computer-based monitors; they are much cheaper than computers and much more reliable.
Design considerations must also include servers to support the data storage, personnel to maintain the servers and the network, and physical facilities to house the servers, including access, environmental control, and redundancy systems to ensure reliability.
Probe various corners
Another area to consider, normal operation, involves three more subdivisions: analysis (including alarming), data acquisition, and archiving. You need the automatic, or alarming, portion to eliminate the need for a full-time system operator and to provide reliable alarm functions. The company must decide what constitutes a parameter of concern and at what point an alarm should be raised. Further decisions concern the nature of the alarm: visual, aural, paging, e-mail, system shutdown, and the like. With good process monitoring, complex parametric alarms are possible. Instead of alarming when a single parameter goes high or low, an alarm can now be raised when several parameters combine to produce a dangerous condition. This is a key advantage not available in older systems, and it adds a considerable measure of safety.
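As a minimal sketch of such a parametric alarm (in Python, with sensor names and thresholds invented for illustration, not taken from any real facility), the logic might look like this: no single reading trips the alarm, but a combination does.

```python
# Hypothetical parametric alarm: alarm only when several readings,
# each within its own normal band, combine into a dangerous condition.
# Sensor names and thresholds below are illustrative assumptions.

def parametric_alarm(readings):
    """Return True when the combination of parameters is dangerous."""
    pressure_rising = readings["gb_pressure_pa"] > 120  # still below the blowout limit
    oxygen_elevated = readings["oxygen_ppm"] > 50       # below the single-sensor alarm
    moisture_high = readings["moisture_ppm"] > 30
    # Rising pressure while the atmosphere degrades suggests a developing leak.
    return pressure_rising and (oxygen_elevated or moisture_high)

print(parametric_alarm({"gb_pressure_pa": 130, "oxygen_ppm": 60, "moisture_ppm": 10}))  # True
print(parametric_alarm({"gb_pressure_pa": 130, "oxygen_ppm": 40, "moisture_ppm": 10}))  # False
```

An older hard-wired system could only compare each sensor against its own setpoint; software makes the combined condition cheap to express and to change.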
Data analysis is the second area of consideration for the normal operation of the system. Most operators seem to prefer manual operation at this point. The operator wants to probe various areas of the system and produce reports relative to the needs of the company. Again, you need to determine the amount of automation. A system with the most flexibility will have the most usefulness. Ongoing managerial support for the system comes from this aspect. If the manager can derive useful information from the system, he or she will continue to support the existence of the system. Continued software maintenance is necessary at this point, because management will demand more capabilities after learning of the usefulness of the system. Flexibility in the design of the process monitoring system then is critical to the future growth of the system.
Data archiving is important and requires much planning. When parameters are in the normal range, you might not need to store them. For instance, if a data stream consists of 1s, 2s, and 3s, where 3 is an alarm level but a 1 or a 2 is of no real significance, it makes no sense to store millions of 1s and 2s. You would merely store the time when a 3 came in. In trending data, however, you might need to store all data as it arrives. Even when the data ranges up and down within accepted boundaries, it may prove important at some future time to study its cyclic nature.
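The two archiving policies can be sketched in a few lines of Python. This is an illustrative example, not a real archiver: samples are (timestamp, value) pairs, and a real system would write to a server rather than a list.

```python
def archive(sample, store, mode="alarm-only", alarm_level=3):
    """Decide whether to keep a (timestamp, value) sample.

    In "alarm-only" mode, only alarm-level readings are kept; in
    "trend" mode, everything is kept so cyclic behavior can be
    studied later. Names and levels are assumptions for illustration.
    """
    timestamp, value = sample
    if mode == "trend" or value >= alarm_level:
        store.append(sample)

store = []
for sample in [(0, 1), (1, 2), (2, 3), (3, 1)]:
    archive(sample, store)
print(store)  # [(2, 3)] -- millions of 1s and 2s never hit disk
```

The choice of mode per data stream is exactly the planning decision described above: alarm-only for status streams, trend for anything whose cycles might matter later.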
Responsible waste handling
You might also archive system health measures. A system health measure is a derived number. To have a quick measure of the overall operation of a facility, some managers like to see a simple roll-up number that indicates the presence or absence of impediments. In an electric power plant, the output wattmeter is a good measure of the product, but not of the health of the plant. Another number might show problems arising in the facility. If all preventive maintenance were up to date and raw material supplies were present in sufficient quantities, you might assign a low number as the system health index. As problems appeared throughout the plant (a low inventory of spare parts, an over-temperature on a generator, an abnormal event), this health number would rise.
A manager looking at the number would instantly realize that he or she had better look into the problems in more detail. It is much like the warning light on a car's control panel, except that it rolls up many complex numbers into a single index. Obviously, the data must register at the sensor, automatically or manually, and flow into storage.
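In its simplest form, such a roll-up is just a weighted sum of problem indicators. The sketch below is hypothetical; the condition names and severity weights are invented, and a real index would likely be normalized and tuned to the facility.

```python
def health_index(indicators):
    """Roll many plant indicators into one number.

    `indicators` maps a condition name to a severity weight: 0 means
    all is well, and the index rises as problems accumulate.
    Weights here are illustrative assumptions.
    """
    return sum(indicators.values())

plant = {
    "preventive_maintenance_overdue": 0,  # up to date
    "spare_parts_low": 2,
    "generator_over_temperature": 5,
}
print(health_index(plant))  # 7 -- high enough to warrant a closer look
```

The manager watches the single number; the detail behind it (which indicator contributed what) is one query away when the number rises.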
The ongoing and future maintenance of the process monitoring system has to be a part of the system from the beginning. The projected life of the components and their replacement and/or upgrade are part of this plan. The accuracy and regular calibration of the sensors are a must. Data entry devices, personal digital assistants, keyboards, and the like need periodic replacement. A plan must be in place to assure the integrity of the system. Migration to improved software and hardware is difficult to plan but worth consideration. Useful lifetimes of hardware and software are usually less than five years. Good accounting should include all these costs.
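A replacement schedule of this kind is easy to derive from installation dates and service lives. The following sketch uses invented sensor names and lifetimes purely to illustrate the bookkeeping; it assumes a one-year life for the sensors shown.

```python
from datetime import date, timedelta

# Illustrative maintenance planner: flag sensors whose service life
# has elapsed. Sensor names, install dates, and lifetimes are
# assumptions for the example.
SENSORS = {
    "oxygen_gb1": {"installed": date(2003, 1, 15), "life_days": 365},
    "moisture_gb1": {"installed": date(2003, 9, 1), "life_days": 365},
}

def due_for_replacement(sensors, today):
    """Return the names of sensors whose service life has elapsed."""
    return [name for name, info in sensors.items()
            if today - info["installed"] >= timedelta(days=info["life_days"])]

print(due_for_replacement(SENSORS, date(2004, 2, 1)))  # ['oxygen_gb1']
```

Running a check like this on a schedule, and feeding its output to the designated maintenance people, keeps calibration and replacement from depending on anyone's memory.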
Security of the data is another area. The data is not product definition, so it does not rate the same protection. Still, it may be very important to the company ten years down the line to show, first, that the company was monitoring conditions that pertain to the health of its workers, and second, that during certain periods the environment was clear of harmful agents. One dismissed environmental lawsuit would more than pay for the entire system. Again, considering which data to save, and how, is important. Data stored on 8-inch floppy disks ten years ago is no longer readable. Will compact disks be readable ten years from now?
In these days of responsibility for our waste products, a process monitoring system is becoming an excellent way to verify exactly which operations were active at what times and exactly which by-products emanated from the process. It is a powerful tool for management, operating people, and process engineers.
Swap out every year
You should build future maintenance of the process monitoring system into the design from the get-go. Few designers are able to see past the first year of operation. Migration to more sophisticated technologies can be foreseen and should be part of the design phase. For example, right now we are on the brink of wireless technology for sensors and digital assistants. While the technology is not fully available just yet, it will be soon. Systems designed now should include provisions for introducing wireless techniques.
You should arrange data storage formats so that the data can easily migrate to newer systems as they develop. Obsolescence of components and computers, and the life cycles of sensors, should be written into the design if the system is to function after the initial start-up. Some sensors have to be swapped out every year. A maintenance schedule should be set up from the beginning, with people designated to care for the ongoing operation. MP
Behind the byline
John Telford is a technical staff member at the Los Alamos National Laboratory. Write him at firstname.lastname@example.org or at Mail Stop E539, Los Alamos, NM 87545.
The process data system for a light metals facility gathers data from the manufacturing line. Because all work is done in glove boxes (GBs), data gathering is somewhat simplified. Sensors monitor the atmosphere inside a GB for the presence of moisture and oxygen; the presence of either would adversely affect the quality of the product. Excess pressure in the GB causes the gloves to blow out and is cause for an alarm, while a normal but rising pressure is cause for investigation before a blowout occurs. Operators enter the chemicals used in processes, thus keeping track of usage, waste, and possible product. Data enters spreadsheets using PDAs. Other data, such as equipment rack temperatures, room temperatures, electrical voltages, electrical leakage, and arcs, is processed through automatic sensors.
The network is a virtual private network. This means the network is a closed system. Only known sensors and input devices would input to the network server. Only bona fide workers and managers would have access to the data stored on the server. Designated employees would do archiving and analysis. Alarming is automatic and is aural, visual, and an electronic page to management.
Data archival is binary, so future analysis programs will be able to access the data and manipulate it in whatever format becomes available. Currently, workers manipulate the data using spreadsheets. Long-term storage will be on tape and compact disk recordings.
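What keeps a binary archive readable for future programs is a fixed, documented record layout. The sketch below is an assumption, not the facility's actual format: each record is an 8-byte timestamp, a 2-byte sensor id, and an 8-byte reading, little-endian.

```python
import struct

# Hypothetical fixed-layout binary record: unsigned 8-byte timestamp,
# unsigned 2-byte sensor id, 8-byte float reading, little-endian.
# Documenting this layout is what keeps the archive readable later.
RECORD = struct.Struct("<QHd")

def pack_record(timestamp, sensor_id, value):
    return RECORD.pack(timestamp, sensor_id, value)

def unpack_record(blob):
    return RECORD.unpack(blob)

blob = pack_record(1072915200, 7, 21.5)
print(unpack_record(blob))  # (1072915200, 7, 21.5)
```

Any future tool that knows the 18-byte layout can recover the stream, regardless of what analysis software is in fashion by then.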
The users and their managers will administer the system. A process control and instrumentation team will provide long-term care. A maintenance program built into the software will remind the users when a sensor replacement is necessary. The implementers chose self-calibrating sensors that send a message to the user when a sensor becomes defective.
Migration to wireless data transmission is part of the long-range plan. The change should be transparent to the users with the exception of the welcome loss of tether wires on their personal digital assistants. The ability to add further sensors and further data to the database is also in place.