Putting the Squeeze on Power Plants
Empirical modeling techniques improve online efficiency monitoring
By J.W. Hines and A. Usynin
Utilities producing energy are in a bind. They know they need to find new ways to increase production, and they agree power facilities must focus on efficiency to remain economically competitive energy sources for the future.
Today’s nuclear power plants have steam power cycles with an early 1960s design and little flexibility in operational improvements at the system level. Manufacturers have redesigned and replaced turbines, steam generators, and other large components; but the system operations have changed little, if at all.
By using empirical, model-based optimization, controllers can increase steam power cycle efficiency.
Most utilities’ efficiency improvement programs have achieved gains through systems that monitor the thermodynamic performance of the vapor power cycle, tracking heat and identifying losses. Many use models tuned with operational data, which require engineering time to implement. However, these systems do not provide direct guidance on how to optimize power plant efficiency; they merely estimate where heat is going.
To manage plant thermodynamic performance efficiently, users need systems that give guidance on how to optimize the plant in its current configuration. How can they optimally align and operate bypass flow rates, feedwater heaters, and reheaters? Most current facilities have hard-wired controllable variables. Reheat lines do not have control valves. Feedwater heater-level settings control bypass flow rate, but other variables with potential for improving thermodynamic efficiency are not controllable at all. Redesigning current plant systems may not have a strong return on investment, but optimal design of future plants could provide an economic payoff. We need to establish next-generation designs able to optimize thermodynamic efficiency across different operating and environmental conditions, along with an economic justification for such designs.
Empirical thermodynamic performance modeling has focused on nuclear power plant applications. The work started 10 years ago with the Tennessee Valley Authority, modeling Sequoyah Nuclear Power Plant Unit 1. The project team estimated heat rates within 0.1% of the calculated values. When the team applied the trained model to Unit 2, heat rates were within 0.5%, validating the idea that empirical models can accurately predict nuclear power plant heat rates. The team used a sensitivity analysis to determine how to change the most important variables to improve thermodynamic efficiency, but it did not implement the changes. The technology has a fast payback: a 0.1% efficiency improvement equates to $263,000 per year at 2.5 cents per kWh.
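As a back-of-the-envelope check, the quoted savings figure is consistent with a roughly 1,200 MWe unit running year-round; the plant rating here is an illustrative assumption, not a figure from the study:

```python
# Hypothetical plant rating and full-year operation assumed for illustration.
plant_output_kw = 1_200_000      # assumed net electrical output, kW
efficiency_gain = 0.001          # the 0.1% improvement cited above
price_per_kwh = 0.025            # 2.5 cents per kWh
hours_per_year = 8_760

extra_energy_kwh = plant_output_kw * efficiency_gain * hours_per_year
annual_savings = extra_energy_kwh * price_per_kwh
print(f"${annual_savings:,.0f} per year")  # prints $262,800 per year
```

which rounds to the $263,000 per year figure cited in the study.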
Researchers then focused on applying regularization techniques to empirical models to increase predictive stability and robustness. This work produced repeatable, reliable results and led to diagnostic systems and model enhancements several industries now use. With these improved techniques, the team showed online, empirical, model-based thermodynamic optimization is possible, practical, and profitable.
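One common regularization approach is ridge regression, which penalizes large coefficients so the fitted model stays stable when inputs are noisy or correlated. The following is a minimal sketch of that idea on synthetic data; the variables and values are illustrative, not plant data, and the article does not specify which regularization method the team used:

```python
import numpy as np

# Synthetic stand-ins for plant measurements (flows, temperatures, pressures)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
true_w = np.array([1.0, -0.5, 0.2, 0.0, 0.0])
y = X @ true_w + 0.1 * rng.normal(size=200)   # noisy "heat rate" response

lam = 1.0  # regularization strength: larger values shrink coefficients more
# Closed-form ridge solution: w = (X'X + lam*I)^(-1) X'y
w = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)
```

The penalty term `lam * np.eye(5)` keeps the normal equations well conditioned, which is what makes the resulting predictions robust and repeatable.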
The project team wanted to develop an optimization and control methodology for online thermodynamic optimization of steam power cycles. Through an initial pilot study, the team determined changing environmental and operational conditions cause thermodynamic efficiencies to change, and operational changes can improve thermodynamic performance under different plant conditions. In the winter, when cooling water temperatures are lower than in summer, increasing feedwater heating reduces the irreversibility caused by heat transfer across a large temperature difference. Another possible controllable variable is the optimum amount of reheat at different power levels.
Optimizing nuclear power plant operations at different power levels is a necessary feature of the optimization technique since some countries, such as France, already operate a large proportion of their units in load-following configurations. U.S. utilities are also investigating the economic benefits of reduced power operations in certain situations. A plant could operate at a lower power to delay a refueling outage until labor or energy costs become less expensive, a practice called end-of-cycle coast-down.
Because it isn’t practical to perform extensive experiments on an operating power plant to collect the necessary information about plant parameter relationships with thermal efficiency, the team used system modeling and simulation as an efficient and accurate alternative. The team developed a dynamic pressurized water reactor (PWR) secondary-system vapor power cycle model in the Simulink simulation environment. The model consists of a steam generator; a feedwater pump and level control system; high-pressure and low-pressure turbines with a pressure control system; a bypass flow with an open feedwater heater; and a condenser.
Through the dynamic simulation of PWR secondary side, the team could study the effects of several important factors on thermal efficiency:
Reheat steam flow rate
Bypass extraction flow from high pressure and low pressure turbines to the feedwater heater
Cooling water temperatures and flow rate
The optimization system uses empirical models to capture a plant’s specific dynamics without first-principle models and correction factors. The team collected steady-state data from the Simulink model across a range of environmental and operating conditions so it could develop a sufficiently robust empirical model of the power production process.
The trained model then goes online to optimize thermodynamic performance. Given the uncontrollable state variables and the initial controllable variable values, response surface optimization techniques find the controllable variable settings that maximize thermodynamic efficiency.
The team used a robust variant of response surface methodology (RSM) to locate the optimal operating point. RSM is a set of statistical techniques for empirical model building and model exploitation. The task of RSM is to assess the effect of predictor variables or factors on some measurable quantity called a response. To set up the optimization problem, the experimenter must first select the predictor variables. The team used expert judgment and correlation analysis to select the model variables. The experimenter then validates the model and performs optimization.
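The core RSM step can be sketched in a few lines: fit a low-order polynomial model of the response to observed factor settings, then solve for the model's stationary point. The data below are synthetic, with a known optimum at a factor setting of 2.0; this is only an illustration of the method, not the team's actual model:

```python
import numpy as np

# Synthetic efficiency response with a true optimum at x = 2.0,
# plus measurement noise
x = np.linspace(0.0, 4.0, 21)
rng = np.random.default_rng(1)
efficiency = 0.34 - 0.002 * (x - 2.0) ** 2 + 0.0002 * rng.normal(size=x.size)

# Fit a quadratic response surface: eff ~ a*x^2 + b*x + c
a, b, c = np.polyfit(x, efficiency, 2)

# Stationary point of the fitted surface, x* = -b / (2a)
x_opt = -b / (2.0 * a)
```

In a multivariate setting the quadratic fit generalizes to a second-order polynomial in several factors, and the stationary point comes from setting its gradient to zero.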
To develop the thermodynamic optimization model, the team collected data giving a measure of the thermodynamic performance and the values of the state variables that specify the operating condition of the plant. In addition, they measured the controllable variables used to change the state of the plant.
After constructing the empirical optimization model, the team initialized the SIMULINK model (plant) to an operating state and input the values of the controllable and state variables to the empirical model to produce an estimate of the thermodynamic performance. They then used a robust, kernel-model-based response surface optimization technique to determine the controllable parameter values that maximize the thermodynamic performance or the output of the empirical model.
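The kernel-model idea can be sketched as Nadaraya-Watson kernel regression: smooth the noisy efficiency observations with a Gaussian kernel and search the smoothed surface for the best controllable setting. The article does not specify the team's exact kernel model, so this is an assumed, simplified variant on synthetic data with a known optimum near 1.5:

```python
import numpy as np

# Synthetic noisy efficiency observations over one controllable setting
rng = np.random.default_rng(2)
settings = rng.uniform(0.0, 3.0, 150)
eff = 0.33 - 0.004 * (settings - 1.5) ** 2 + 0.0005 * rng.normal(size=150)

def kernel_predict(x, xs, ys, h=0.2):
    """Gaussian-kernel weighted average of ys at query point x."""
    w = np.exp(-0.5 * ((x - xs) / h) ** 2)
    return np.sum(w * ys) / np.sum(w)

# Grid-search the smoothed response surface for its maximum
grid = np.linspace(0.0, 3.0, 121)
smoothed = np.array([kernel_predict(g, settings, eff) for g in grid])
best_setting = grid[np.argmax(smoothed)]
```

Because the kernel averages many nearby observations, the smoothed surface is far less sensitive to measurement noise than any single data point, which is what makes the optimization step robust.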
Simulation results show these techniques can save power plants hundreds of thousands of dollars a year. The multivariate optimization procedure can optimize several controllable variables at once to produce the highest thermodynamic efficiency at a specific operating condition. The team could develop similar efficiency surfaces to encompass additional controllable variables and optimize efficiency. The optimal operating condition differs depending on environmental factors such as cooling water temperature, operating factors such as power level, or plant characteristics such as the condenser’s state of fouling.
The multidimensional optimization is an iterative procedure, adaptable to new operating conditions. Its adaptive nature provides a way to identify optimal operating practices that haven’t been used in the past. However, you need to collect data at each step into an unknown operating condition before selecting the next step. Because the decision about the next step rests on observations corrupted by noise, you shouldn’t make operational changes until you’ve accounted for the noise effects. The team addressed this statistical complexity through hypothesis testing.
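A simple form of that test is to accept a candidate step only when the observed efficiency gain is statistically distinguishable from noise, e.g., via Welch's t statistic on efficiency samples taken before and after a trial change. The sample values and the threshold below are illustrative assumptions, not the team's actual procedure:

```python
import math
import statistics

# Illustrative efficiency samples before and after a candidate operational step
before = [0.3301, 0.3299, 0.3302, 0.3298, 0.3300, 0.3301]
after  = [0.3310, 0.3308, 0.3311, 0.3309, 0.3312, 0.3310]

def welch_t(a, b):
    """Welch's t statistic for the difference in means of two samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(b) - statistics.mean(a)) / math.sqrt(va / len(a) + vb / len(b))

t = welch_t(before, after)
# Rough threshold; a real system would use a proper critical value
# for the chosen significance level and degrees of freedom
take_step = t > 2.0
```

If `take_step` is false, the apparent gain is within the noise band and the controllable variables should be left where they are until more data accumulate.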
There are several current roadblocks to the actual implementation of these techniques. Before you implement, take note of these requirements:
1. Controllable variables, such as valves, must have instrumentation to measure their positions, and these values must be accessible to the optimization system.
2. To control reheat flow rate, there must be a valve in the reheat system to control reheat flow. Many nuclear plant designs do not contain this valve, but some have retrofitted the original design.
3. The relative error of measurements should be less than the expected value of the overall efficiency improvement. In the case of large deviations in the measurements, the empirical model will produce unacceptably variable predictions of the optimal conditions.
As operating conditions change due to environmental, operating, or component changes, the originally established optimal combination of factors becomes non-optimal, and the gradient vector defined in the multidimensional operational space becomes nonzero. In this case, the empirical modeling method can estimate the nonzero gradient and then determine the direction that will lead operation to the new optimal combination of factors. The design and implementation of these optimization techniques for power plants offers an economic benefit on the order of hundreds of thousands of dollars a year.
About the Authors
J.W. Hines is Associate Professor at the University of Tennessee Nuclear Engineering Department in Knoxville, Tenn. A. Usynin is a Graduate Research Assistant at the University of Tennessee Nuclear Engineering Department.
Optimizing in Real Time
By Ellen Fussell Policastro
Kingsport, Tenn.-based Eastman Chemical Co. is using the real-time optimization portion of a software suite to calculate the most cost-effective way to operate its system. “We have lots of choices about how we produce power because of the complexity of our system,” said Lemuel Mixon, a technical associate at Eastman. “We can make or buy electricity. Most companies do this, but they have simple systems. Ours had gotten so complicated that it took too long to calculate the best way to operate.”
Before installing the new system, operators followed guidelines on how to run the equipment based on calculations done by hand years ago, Mixon said. “Instead of redoing the calculations, we depended on our past knowledge. But when coal became more expensive, we were still using those guidelines.”
The optimizer helped calculate plant operation costs and recommended purchasing power instead of making it to reduce overall energy costs. “We had gotten into the mindset of make all you can,” Mixon said. “That’s not always the right decision, especially when the cost of coal and natural gas start going up. The beauty of this system is if those processes change back and it becomes more cost effective for us to be generating electricity instead of buying it, we’ll know immediately.”
The Eastman system is complex, and not many applications in the world need that level of solution, he said. “We have tie lines to the electric company, so we can purchase electricity or make it ourselves”—the biggest driver behind savings, Mixon said.
Different pieces of equipment have different efficiencies at different levels. “You can run them at different settings to get the overall output you want.” The optimization package calculates where each piece of equipment should be run in order to get the minimum cost at the needed power, “and gives us the information we need to do our jobs more effectively,” Mixon said. “Before, we did it by hand, but we didn’t do it often.” Now the plant has more detailed and timely information.
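The make-vs-buy dispatch Mixon describes can be sketched as a small cost-minimization problem: meet a fixed power demand from generators with different cost curves plus purchased power, choosing the cheapest split. The cost curves, prices, and brute-force search below are illustrative assumptions, not Eastman's actual optimizer:

```python
import itertools

DEMAND_MW = 100.0
BUY_PRICE = 55.0          # assumed $/MWh for purchased power

def gen_cost(mw, a, b):
    """Simple quadratic fuel-cost curve, $/h: a*mw + b*mw^2."""
    return a * mw + b * mw * mw

# Brute-force search over generator setpoints in 5 MW steps (0-60 MW each);
# any shortfall is purchased from the electric company
best = None
for g1, g2 in itertools.product(range(0, 61, 5), repeat=2):
    buy = DEMAND_MW - g1 - g2
    if buy < 0:
        continue
    cost = gen_cost(g1, 30.0, 0.20) + gen_cost(g2, 35.0, 0.15) + buy * BUY_PRICE
    if best is None or cost < best[0]:
        best = (cost, g1, g2, buy)
```

With these assumed curves, the cheapest split runs both generators at 50 MW and buys nothing, because their marginal costs at that point are still below the purchase price; if the purchase price dropped or fuel costs rose, the same search would start recommending purchased power instead, which is the "know immediately" behavior Mixon describes.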
In the Eastman process, four powerhouses make steam and electricity, which flow into headers and switchgear for distribution. Chemical plants use the steam and electricity from those facilities. “Our goal is to keep the power being produced at the same level it’s needed by the operation,” Mixon said. “We do it by maintaining header pressure, by controlling flow coming out of boilers. Electricity to us is a byproduct. We may be reducing steam from high pressure to low pressure, running it through turbines and producing electricity. We have one powerhouse that’s a central control. They monitor the optimizer screen. They’ll instruct the other powerhouses to move their boilers to where they need to be. The goal is to get the minimum cost of maintaining the demands of the chemical plants. We generally use the power company to balance out what we don’t produce.”
The biggest lesson for Mixon is, “your instrumentation is critical to the success of a project like this. If you don’t have good installation of instruments, then your readings and calculations will be off. So it’s important to have the right measurements installed the right way.”
ISA-SP67, Nuclear Power Plant Standards www.isa.org/community/sp67
ISA Data Processing and Management Community www.isa.org/dataprocessing
“Approaches for Migration of Legacy DCS Systems to Maximize Return on Existing Assets” by Ken Keiser and Todd R. Stauffer www.isa.org/techpapers/TP05ISA288
“Intelligent Field Devices and Asset Management Software: Discover the Benefits of Utilizing the Combination” by Brian L. LaBelle www.isa.org/techpapers/TP05AD024