Evaluating sensors for system migration
Sensors are vital for accurate control, yet they are often overlooked in system migrations and upgrades
- Industrial temperature sensors are usually calibrated in an ice, water, oil, or sand bath, in a furnace, or in a combination of these methods.
- To calibrate pressure sensors in the field, automated pressure sensor calibration equipment incorporating digital technology is used.
- Two conventional methods for evaluating the response time of sensors are the plunge test (for temperature sensors) and the ramp test (for pressure sensors).
By H. M. Hashemian
The goal of every system migration is a relatively easy, low-risk, and cost-effective transition from a legacy system to a new system that immediately and dramatically improves plant performance. Many equipment makers promise easy-as-pie "plug in" migration modules that require no replacement of existing field wiring, termination assemblies, system enclosures, or power supplies and cut migration downtime from weeks and months to "a day or less."
The reality is a bit different. Upgrading a system to advanced enterprise control computers and software without evaluating the performance of the sensors that supply these systems with data is an exercise in futility. To properly sense and communicate a process parameter, sensors must be accurate. Accuracy describes how closely a sensor measures the true value of a process parameter. To display data with the frequency required by the plant or industry regulations, sensors must also be reasonably fast in revealing a sudden change in the value of a process parameter. Accuracy and response time are, for the most part, independent of each other.
Because sensor accuracy, responsiveness, and reliability are essential to plant systems, system migration must begin with a thorough sensor evaluation.
Temperature sensor accuracy
One of the most common types of industrial temperature sensor, the resistance temperature detector (RTD), does not typically maintain accuracies better than 0.05 to 0.12°C at 300°C, nor is it typically required to provide accuracies better than 0.1°C at 400°C. Installing RTDs in a process also tends to introduce additional errors. The other common type of temperature sensor, the thermocouple, usually cannot provide accuracies better than 0.5°C at temperatures up to 400°C. The higher the temperature, the lower the accuracy a thermocouple can typically achieve.
The accuracy of a temperature sensor is established through calibration: comparing the sensor's output to universal calibration tables or, in high-accuracy applications, calibrating the sensor individually. RTDs, in contrast to thermocouples, can be removed and recalibrated after installation. Industrial temperature sensors are usually calibrated in an ice, water, oil, or sand bath, in a furnace, or in a combination of these methods. The type of calibration bath selected depends on the temperature range, accuracy requirements, and application of the sensor. The calibration process normally involves measuring the temperature of the calibration bath with a standard thermometer traceable to a national standard.
For individually calibrated RTDs, the accuracy provided by the calibration process depends on the accuracy of the equipment used to calibrate the RTD; inherent errors such as hysteresis, repeatability, and self-heating; and interpolation and fitting errors.
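As a sketch of the fitting step, the quadratic Callendar-Van Dusen form used for RTDs above 0°C can be fitted to a handful of bath calibration points by least squares. The temperatures and resistances below are illustrative Pt100 values, not data from a real sensor:

```python
import numpy as np

# Hypothetical ice/oil-bath calibration points for one RTD:
# bath temperature (deg C, from a traceable standard thermometer)
# and the RTD's measured resistance (ohms).
t_cal = np.array([0.0, 100.0, 200.0, 300.0])
r_cal = np.array([100.00, 138.51, 175.86, 212.05])

# Fit the Callendar-Van Dusen form R(t) = R0*(1 + A*t + B*t^2)
# (valid above 0 deg C) as an ordinary quadratic in t.
c2, c1, c0 = np.polyfit(t_cal, r_cal, 2)
R0, A, B = c0, c1 / c0, c2 / c0

# Fitting (residual) error at each calibration point, in ohms:
# one component of the individually calibrated RTD's uncertainty.
residuals = r_cal - R0 * (1 + A * t_cal + B * t_cal**2)
print(R0, A, B)
print(residuals)
```

The recovered coefficients are close to the standard Pt100 values (A ≈ 3.91e-3, B ≈ -5.8e-7), and the residuals give a direct handle on the fitting error mentioned above.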
Although RTDs may be removed and recalibrated after installation (they actually undergo a kind of curing while they are in the process), thermocouples should not be. When a thermocouple is installed in a process, the gradient between the process temperature and the outside temperature creates an inhomogeneity over time at the point where the thermocouple protrudes into the process. This inhomogeneity cannot be repaired; a thermocouple that has lost its calibration should therefore be replaced.
Industrial thermocouples are not usually individually calibrated. Instead, their output is compared against standard reference tables. Sometimes, the manufacturers of thermocouple wire and thermocouple sensors will calibrate representative samples of the wire, applying that calibration to the rest of the wire or to the thermocouple sensors made with the wire. When thermocouples are calibrated, one of two methods is generally used: the comparison method (in which the EMF of the thermocouple is compared to a reference sensor) or the fixed-point method (the EMF of the thermocouple is measured at several established reference conditions, such as metal freezing points).
In evaluating the accuracy of a temperature sensor, it is important to consider not only the calibration of the sensor itself, but also the effect of installation and process operating conditions on that accuracy. For example, so-called stem losses result when heat is conducted from the sensing tip of the RTD through its length (its "stem"), lowering the temperature of the sensing tip below the true process temperature and producing measurement errors.
Pressure sensor accuracy
The accuracy of precision pressure sensors is usually in the range of 0.25% of span, that is, the portion of the sensor's full range over which it is configured to indicate pressure (e.g., a span of 500 to 1500 psi for a pressure transmitter that has a range of 0 to 2500 psi). For less rigorous applications, accuracy can be up to about 1.25% of span.
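The percent-of-span arithmetic is straightforward; this sketch uses the example transmitter above:

```python
# Accuracy expressed as percent of span, using the article's example:
# a 0-2500 psi transmitter configured for a 500-1500 psi span.
lower, upper = 500.0, 1500.0        # psi, calibrated span endpoints
span = upper - lower                # 1000 psi

accuracy_pct = 0.25                 # percent of span (precision class)
accuracy_psi = span * accuracy_pct / 100.0
print(accuracy_psi)                 # 2.5 psi anywhere in the span
```

Note that the same 0.25% class on the full 2500 psi range would mean 6.25 psi, which is why accuracy statements must always name the span they refer to.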
As with temperature sensors, the static performance or accuracy of a pressure sensor depends on how well the sensor is calibrated and on how long it can maintain its calibration. The initial calibration of an industrial pressure sensor (e.g., absolute and differential pressure sensors), known as its bench calibration, is performed by applying a constant pressure source such as a deadweight tester. After a pressure sensor is installed, its accuracy can be evaluated by combining its initial calibration accuracy with the impact of environmental effects on that calibration, the impact of static pressure, and the instrument's drift rate. To calibrate pressure sensors in the field, automated pressure sensor calibration equipment incorporating digital technology is used.

Automated pressure sensor calibration systems work by using a programmable pressure source to produce known pressure signals, which are applied to the sensor to be calibrated. The sensor's output is recorded, producing the As-Found data (the sensor's output readings prior to recalibration). The sensor is exercised with increasing and decreasing input signals to account for any hysteresis effect. Next, the system compares the As-Found data against the calibration acceptance criteria for the pressure sensor and automatically determines whether the sensor must be recalibrated. If so, the system provides the necessary input signals to the sensor and holds each input value constant until adjustments are made to the sensor's span and to its zero (the lowest pressure at which it is to be calibrated). After this calibration, the system produces a report that includes the As-Found and As-Left (post-calibration) data and stores this data for trending and for incipient failure detection.
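A minimal sketch of the As-Found check such a system automates, with hypothetical applied pressures, readings, and acceptance criterion:

```python
# All numbers are hypothetical; a real system uses a programmable
# pressure source and the plant's own acceptance criteria.

def as_found_errors(applied, indicated, span):
    """Percent-of-span error at each applied pressure point."""
    return [100.0 * (r - p) / span for p, r in zip(applied, indicated)]

# Exercise the sensor with increasing then decreasing pressures (psi)
# to capture any hysteresis effect.
applied = [500, 750, 1000, 1250, 1500, 1250, 1000, 750, 500]
indicated = [501.1, 751.4, 1002.8, 1252.0, 1501.9,
             1252.4, 1002.1, 751.6, 501.2]

errors = as_found_errors(applied, indicated, span=1000.0)

# Compare against a 0.25%-of-span acceptance criterion.
needs_recal = any(abs(e) > 0.25 for e in errors)
print(needs_recal)
```

Here the 1002.8 psi reading at the 1000 psi point is a 0.28%-of-span error, so the sketch flags the sensor for recalibration.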
Evaluating sensor response time
Although accuracy can be restored by recalibration, response time is an intrinsic property that cannot usually be changed once a sensor is manufactured. Two conventional methods for evaluating the response time of sensors are the plunge test (for temperature sensors) and the ramp test (for pressure sensors). In the plunge test, a step change in temperature is imposed on the sensor in a laboratory by quickly drawing the sensor from one medium at a specific temperature and then immersing it into another medium (typically water flowing at 1 meter per second) at a different temperature.
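The plunge-test readout is commonly taken as the time for the sensor output to complete 63.2% of the step, which equals one time constant for a first-order sensor. A sketch on a simulated (not measured) transient:

```python
import numpy as np

# Simulated plunge test: sensor moved from an 80 deg C medium into a
# 25 deg C water bath. The time constant below is illustrative.
tau_true = 4.0                       # s, simulated sensor time constant
t = np.arange(0.0, 30.0, 0.01)
T0, T1 = 80.0, 25.0                  # deg C, before and after the plunge
temp = T1 + (T0 - T1) * np.exp(-t / tau_true)

# Response time = time at which 63.2% of the step is complete.
target = T0 + 0.632 * (T1 - T0)
response_time = t[np.argmax(temp <= target)]
print(response_time)                 # close to tau_true
```

On real plunge data the same threshold crossing is read from the recorded transient rather than a clean exponential, but the principle is identical.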
In the ramp test, a hydraulic signal generator produces a ramp pressure signal, which is fed to the pressure sensor to be tested and to an ultrafast reference sensor. The output of the two sensors is then recorded. Measuring the asymptotic delay between the output of the tested sensor and the reference sensor yields the former's response time.
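The ramp-test principle can be sketched with a simulated first-order sensor: once the transient dies out, the sensor lags the ultrafast reference by its time constant, and dividing the steady-state output difference by the ramp rate recovers that delay. All numbers are illustrative:

```python
import numpy as np

tau = 0.20            # s, "true" response time of the simulated sensor
rate = 100.0          # psi/s, ramp rate of the hydraulic signal generator
dt = 0.001
t = np.arange(0.0, 5.0, dt)

ramp = rate * t                                      # ultrafast reference
sensor = rate * (t - tau + tau * np.exp(-t / tau))   # first-order response

# Asymptotic delay: steady-state lag between the two outputs,
# converted from pressure units back to seconds via the ramp rate.
delay = (ramp[-1] - sensor[-1]) / rate
print(delay)          # approaches tau
```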
The calibration and response time of process sensors, and in particular temperature sensors, depend to a large extent on process conditions, including static pressure, process temperature, ambient temperature, and fluid flow rate. This simple fact ensures that offline, manufacturer, or laboratory evaluation of a sensor (such as via the plunge and ramp tests) will be inadequate for gaining a true measure of the sensor's performance.
Several techniques, often referred to as in-situ or on-line testing, have been developed to verify the calibration and response time of sensors while they are installed in an operating process. For temperature sensors, the loop current step response (LCSR) test verifies the dynamic response of the most common temperature sensors, RTDs and thermocouples, in place.
The LCSR method provides the actual "in-service" response time of RTDs based on the principle that the sensor's output in response to a step change in temperature induced inside the sensor can be converted to the equivalent response for a step change in temperature outside the sensor. To perform the LCSR test, each RTD is connected to the LCSR equipment through a Wheatstone bridge, and a step change in electrical current (usually 30 to 60 mA) is applied to the sensor for about a minute. This heats the RTD sensing element a few degrees above the process temperature, which gradually increases the element's resistance and produces an exponential transient at the output of the Wheatstone bridge. This transient, known as the LCSR signal, can be analyzed to yield the in-service response time of the RTD.
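A simplified view of the analysis step: for a first-order sensor, the LCSR transient is approximately a single exponential, so its time constant falls out of a log-linear fit. A real LCSR analysis uses multiple modes and the conversion to the equivalent external step response; the transient below is simulated:

```python
import numpy as np

tau = 6.0                                  # s, simulated in-service time constant
t = np.arange(0.0, 30.0, 0.1)
lcsr = 1.0 - np.exp(-t / tau)              # normalized heating transient

# ln(1 - y) = -t/tau, so the slope of a straight-line fit gives -1/tau.
slope, _ = np.polyfit(t[1:], np.log(1.0 - lcsr[1:]), 1)
tau_est = -1.0 / slope
print(tau_est)                             # recovers tau
```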
To accurately measure temperature while providing reasonable dynamic response, an RTD or thermocouple must extend to the end of its thermowell. The LCSR method is the only sensor evaluation method that can easily detect inadequate insertion of sensors in thermowells. The LCSR method can also measure the response time of thermocouples, although the process for thermocouples is somewhat more complex.
Unlike RTDs and thermocouples, the response time of pressure, level, and flow sensors does not typically change after installation in a plant, because these sensors are electromechanical devices that respond at the same rate regardless of ambient or process temperature. Thus, traditional laboratory testing techniques like the ramp test can adequately evaluate the pressure sensors themselves. The difficulty lies in their so-called sensing lines, the tubing that connects each sensor to the actual process. Sensing lines add a few milliseconds of sonic delay to the response time of pressure, level, and flow sensors. Although this sonic delay is trivial, hydraulic delays, caused perhaps by blockages or voids in the sensing lines, can add tens of milliseconds to the response time of a pressure sensing system.
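The sonic delay itself is just the travel time of a pressure wave along the line; the line length below is hypothetical:

```python
# Sonic delay of a water-filled sensing line: the time for a pressure
# wave to travel the line length at the speed of sound in the fluid.
line_length = 5.0        # m, hypothetical sensing-line length
c_water = 1480.0         # m/s, approximate speed of sound in water

sonic_delay_ms = 1000.0 * line_length / c_water
print(sonic_delay_ms)    # a few milliseconds
```

Hydraulic delays from blockages or voids do not follow this simple formula, which is why they must be measured in service rather than calculated.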
Fortunately, an evaluation method, called the noise analysis technique, has been developed to measure the response time of a pressure, level, or flow sensor while it is in service.
Noise analysis technique
The noise analysis technique makes it possible to measure the response time of pressure sensors and their sensing lines in a single test. Like the LCSR method, the noise analysis technique does not interfere with plant operation, uses the existing output of sensors to determine their response time, and can be performed remotely on sensors while they are installed in an operating plant.
The noise analysis technique is based on monitoring the normal AC output of pressure sensors with a fast data acquisition system (e.g., a sampling rate of 1 kHz). The sensor's AC output, referred to as "noise," is produced by random fluctuations in the process arising from turbulence, random heat transfer and flux, vibration, and similar natural phenomena. Extraneous noise at frequencies above the sensor's dynamic response can be removed from the signal using low-pass filtering. Once the AC signal or noise is separated from the DC signal using signal conditioning equipment, the AC signal is amplified, passed through anti-aliasing filtering, digitized, and stored for subsequent analysis. This analysis yields the dynamic response time of the pressure sensor and its sensing lines.
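One simplified way to see how a response time can come out of noise data: model the sensor as a first-order system driven by white process noise, in which case the lag-one autocorrelation of the sampled AC signal equals exp(-dt/tau). Real noise analysis works with power spectral densities and higher-order models; everything below is simulated:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000.0                      # Hz, fast data-acquisition sampling rate
dt = 1.0 / fs
tau = 0.05                       # s, "true" sensor-plus-line time constant

# First-order response to white process noise (an AR(1) recursion).
a = np.exp(-dt / tau)
x = rng.standard_normal(200_000)
y = np.empty_like(x)
y[0] = x[0]
for n in range(1, len(x)):
    y[n] = a * y[n - 1] + (1.0 - a) * x[n]

# For an AR(1) process, the lag-1 autocorrelation equals exp(-dt/tau).
y = y - y.mean()
rho1 = np.dot(y[:-1], y[1:]) / np.dot(y, y)
tau_est = -dt / np.log(rho1)
print(tau_est)                   # close to 0.05 s
```

The estimate improves with record length, which is one reason noise measurements collect long stretches of data from the operating plant.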
A range of equipment is available to collect and analyze noise data for pressure sensors. Commercial spectral analysis equipment can collect noise data and perform real-time analysis, but this equipment cannot usually handle the variety of data analysis algorithms required to obtain accurate response-time results. This is why a PC-based data acquisition system consisting of isolation units, amplifiers, and filters for signal conditioning and anti-aliasing is often the optimal choice for noise data acquisition and analysis.
When to replace sensors
The simple answer is to replace sensors at the manufacturer's stated product life, say, 20 years. However, this can be very expensive and counterproductive. The alternative is to continue using sensors past their predicted life and rely on a system for tracking sensor performance to determine if and when a sensor needs replacing. Experience has shown that sensors with a good track record are very likely to continue performing well far beyond the manufacturer's life span estimate. The consensus among plant maintenance personnel and sensor experts is that a sensor should be used for as long as its calibration remains stable and its response time has not degraded. Anecdotally, sensors that are working properly should be left alone, and aged sensors with a good track record are as good as, if not better than, new sensors of the same design from the same manufacturer. Evaluating sensor accuracy and response time using in-service methods like LCSR and noise analysis is therefore critical to establishing sensor replacement schedules that are objective rather than anecdotal.
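Objective scheduling can be sketched as trending a sensor's As-Found error across successive outages and projecting when it will cross the acceptance limit; the calibration history below is hypothetical:

```python
import numpy as np

# Hypothetical As-Found calibration errors recorded at each outage.
years = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
as_found_err = np.array([0.02, 0.06, 0.09, 0.14, 0.18])   # % of span

limit = 0.25                       # % of span, acceptance criterion

# Linear drift trend, projected forward to the acceptance limit.
slope, intercept = np.polyfit(years, as_found_err, 1)
years_to_limit = (limit - intercept) / slope
print(years_to_limit)              # projected year the limit is crossed
```

A sensor trending this slowly would stay in service well past a fixed 20-year replacement date, with the trend (rather than the calendar) triggering its retirement.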
ABOUT THE AUTHOR
H. M. Hashemian has a Bachelor of Science degree in Physics, a Master of Science degree in Nuclear Engineering, a Doctor of Engineering degree in Electrical Engineering, and a Ph.D. in Nuclear Engineering. He has worked for AMS since 1977 when the company was founded. Hashemian specializes in process instrumentation, equipment condition monitoring, on-line diagnostics of anomalies in industrial equipment and processes, automated testing, and technical training. He has written two books and is the author of six book chapters, nine U.S. patents, and over 200 published papers.