01 October 2003
Preparation is key to avert disasters
Controllers, backup prevent lost production.
By Ellen Fussell
The Great Lakes Brewing Co., a microbrewery based in Cleveland, Ohio, invested in a remote-access paging modem combined with a technical phone-support program in the event of a problem. When lightning struck at the brewery, and key plant systems failed, the support team was able to identify and resolve production problems quickly. Great Lakes' access to support saved an estimated $2.4 million in potentially lost product.
While technical support and quick response are a few ways to help mitigate disaster after it strikes, it is also important to make sure your systems are rugged enough to endure extreme conditions.
Failures in electronic devices tend to come from voltage spikes or high temperatures from blocked vents on a cabinet, mostly due to abnormal operating circumstances rather than normal wear and tear. One way manufacturers can ease their minds is by having a reliable control system.
Designed to withstand more rugged environmental conditions—physical shock and erratic temperature fluctuations—electronic controllers can endure extreme applications such as launch pads for rockets, which have extensive vibration and noise levels in excess of 162 decibels.
During an intense summer storm, lightning hit a building at Shaw Air Force Base near Sumter, S.C. Shaw had installed a new controller-based airfield lighting system, which hundreds of planes taking off and landing each day rely on. When lightning struck, the laptop that accessed the system did not make it through the storm, but the controllers did, and the airfield lighting system never skipped a beat.
In a situation like that, what typically allows controllers to live through it is their ruggedness, but more important is the design of the power feeding the controllers and the control system, said Ralph Williams, regional commercial manager at Rockwell Automation in Cleveland, Ohio.
That is because an isolation transformer (a transformer that breaks the direct connection between your system and the incoming line) isolates the power, which reduces a spike on the line. "So if your incoming line has a near lightning strike—a high-voltage spike on your incoming line—the transformer reduces that spike on the other side of it, and that will help," Williams said. "If you have controllers in an outdoor environment, you should have lightning arrest capabilities, especially on your power side," he said.
Manufacturers can prepare in many ways, starting with the overall design of their system; it's not just the controllers, he said.
WHAT TO LOOK FOR
People should look for a reliable controller, "one that's designed to industrial standards, that has good temperature ratings, has been thoroughly tested to a variety of specifications like shock and vibration, high-voltage testing, and a variety of tests that we put our equipment through in order to make sure they'll live in an industrial environment," Williams said.
But even the most rugged controllers would not live through some things, including floods and fires. However, there are some exceptions. During a stint in the field in St. Louis during the 1993 floods along the Mississippi down to the Gulf of Mexico, Williams said users learned how durable some controllers could be. "Even though their equipment was totally underwater, when the water subsided, they cleaned it off, powered it back up, and it ran," he said. "But that's not recommended, especially with floodwaters because they can bring all kinds of deposits containing metal. So you have to be careful."
After a fire destroyed a Nelson Forest Products plywood mill in Miramichi, New Brunswick, employees walked through the carnage and found at the control cabinets a mass of melted hardware, burned cables, and debris. But amidst the rubble, their processors were still intact, coming back to life with program and memory preserved when employees powered them up.
"In this case the controllers were inside a cabinet, so that's what allowed them to live through that," Williams said. Although controllers can live through high temperatures, in a fire, what happens is "the power goes down so the controller doesn't operate," he said. "So it was in storage mode, and in storage mode it handles a much higher temperature."
When a company needs to get back on its feet after a software crash, it can restore previously used programs and manage backup and recovery services with an automation control center that centralizes, manages, and maintains information.
Hurricane Georges hit the Chevron petroleum refinery in Pascagoula, Miss., on 27 September 1998, pushing over the refinery's dikes with 12-foot surging waves and bringing almost 17 inches of rain and 125-mile-per-hour winds. The seventeen-hour storm left the refinery flooded with 5 feet of salt water, with flood levels in some buildings ranging from 3 to 64 inches. Of the refinery's estimated 600 structures, four major office facilities were rendered permanently unusable, and all grade-level mechanical and electrical equipment, including motors, was submerged. Within a week after the hurricane, Chevron employees returned to work.
A control center helped manufacturers develop schedules to automatically perform routine operations—programmable logic controller (PLC) uploads and file backups—ensuring the company's software and data did not get lost.
With such a backup system, users have a record of the most current program versions running and the accepted configuration or program running. Managers know when changes are made to systems designated for tracking. They know who made changes and why. Operators know which program is running in which controller to safeguard production requirements. Companies have records of all changes made to devices, applications, and tracked project files. They can also restore previously used programs and correct invalid program changes in case of unauthorized changes.
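In practice, that kind of tracking amounts to keeping, for each controller, an audit trail of program versions along with who changed them and why. The Python sketch below shows one minimal way such a record might be kept; it is only an illustration, not any vendor's actual product or API, and the controller names, file paths, and upload step are hypothetical.

```python
import hashlib
import json
import time
from pathlib import Path

ARCHIVE = Path("backup_archive")   # hypothetical folder where versioned copies are kept

def record_version(controller: str, program_file: Path, author: str, reason: str):
    """Store a copy of a controller program and log who changed it and why."""
    data = program_file.read_bytes()
    digest = hashlib.sha256(data).hexdigest()[:12]    # fingerprint of this version
    dest = ARCHIVE / controller / f"{int(time.time())}_{digest}{program_file.suffix}"
    dest.parent.mkdir(parents=True, exist_ok=True)
    dest.write_bytes(data)                            # versioned copy of the program
    log_entry = {
        "controller": controller,
        "file": dest.name,
        "sha256": digest,
        "author": author,
        "reason": reason,
        "timestamp": time.strftime("%Y-%m-%d %H:%M:%S"),
    }
    with open(ARCHIVE / "changelog.jsonl", "a") as log:
        log.write(json.dumps(log_entry) + "\n")       # append-only audit trail
    return dest

# Example (hypothetical): archive the program uploaded from a line-3 PLC
# after a shift change modified it.
# record_version("PLC_line3", Path("plc_line3.acd"), "j.smith", "retuned fill timer")
```

A record like this is what lets managers answer, after the fact, which version was running, who changed it, and why, and it gives operators a known-good copy to restore if an unauthorized change slips in.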
HOW TO RECOVER
There are some things for users to look for that will help with their recovery effort. "Make sure your programs are backed up off site," Williams said. "You see many people not doing that. It's just like a bank. They back up their records off site, so they know whose money is in their vault. It's the same with resources on your plant floor. If you lose those, that could be one of the reasons a plant isn't able to come back," he said. "The cost may be too high. Programs in the controller and in their computers really need to be backed up and stored off site."
Another way is to make sure you have a list of all your assets on site. "Know where you can get replacements for those," Williams said. If manufacturers have a special motor, they should "make sure they know where a source for a replacement would be. Even the control equipment—like programmable controllers—they need to know where the replacement is for that," he said. In most cases the distributors would have the equipment on their shelves.
Also, many manufacturers offer software that will automatically back up programs. "So if you have 50 PLCs on your plant floor, your software will automatically back those up on a programmed time schedule," he said. "If someone made a change on the shift before you, this will back up that program and store it wherever you want, on site or off site over a network. Most manufacturers have that."
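As a rough illustration of that kind of scheduled backup, the Python sketch below loops over a list of controllers once a day and writes each uploaded program to both a local folder and a network share. This is a minimal sketch under stated assumptions: the `upload_program` function is a stand-in for whatever upload mechanism a given vendor's software actually provides, and the controller names and paths are hypothetical.

```python
import shutil
import time
from pathlib import Path

PLCS = ["PLC_line1", "PLC_line2", "PLC_line3"]        # hypothetical controller names
LOCAL_DIR = Path("C:/backups/plc")                    # on-site copy
OFFSITE_DIR = Path("//offsite-server/plc_backups")    # hypothetical network share off site
BACKUP_INTERVAL = 24 * 60 * 60                        # once per day, in seconds

def upload_program(plc_name: str) -> bytes:
    """Stand-in for the vendor-specific upload of a controller's program.

    Replace this with the real upload call for your PLCs; here it just
    returns placeholder bytes so the sketch can run end to end.
    """
    return f"program image for {plc_name}".encode()

def run_backups():
    stamp = time.strftime("%Y%m%d_%H%M%S")
    for plc in PLCS:
        data = upload_program(plc)                    # pull the current program from the PLC
        local_copy = LOCAL_DIR / f"{plc}_{stamp}.bin"
        local_copy.parent.mkdir(parents=True, exist_ok=True)
        local_copy.write_bytes(data)                  # keep an on-site copy
        OFFSITE_DIR.mkdir(parents=True, exist_ok=True)
        shutil.copy2(local_copy, OFFSITE_DIR / local_copy.name)  # and an off-site copy

if __name__ == "__main__":
    while True:                                       # simple daily schedule
        run_backups()
        time.sleep(BACKUP_INTERVAL)
```

The point of the off-site copy is the one Williams makes above: if the plant itself is lost, the programs needed to restart production still exist somewhere else.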
Williams said one of the biggest things plant managers can do is "play the what-if game. Their whole production depends on everything working in the plant, not just their control systems," he said. Managers should ask themselves, "What if a tornado wipes out my entire plant? What if a fire or flood destroys part of our plant? And one of the biggest things is to find out what all your assets are in the plant, and how you can replace those assets if they are destroyed or disabled," Williams said. "That includes not just hardware but software. Running software constitutes a large portion of assets or resources. And those are always much more difficult to replace than the hardware."
Power grid problems, solutions
This summer's blackout in the Northeast and central U.S. and Canada is yet another example of how disaster can strike at any time—unannounced.
In a report from the Electric Power Research Institute (EPRI), researchers presented a "Framework for the Future" that examines the challenges facing the electricity sector in the U.S. The report outlines future economic and technical directions facing the industry and sets out five overarching goals.
A solar-powered, integrated window system could reduce dependency on the energy grid whose failure blacked out parts of the U.S. and Canada this past summer. The dynamic shading window system (DSWS) uses a new solar-energy technology to convert the sun's light and diverted heat into storable energy that can also efficiently heat, cool, and artificially light an office building.
The system consists of clear plastic panels that fit between two panes of glass. On each panel are dozens of small, pyramid-shaped units, or modules, made from semi-translucent focusing plastic lenses that track the motion of the sun. Sensors, embedded in the walls or the roof, ensure the units are always facing the sun to capture all incoming rays while deflecting harsh, unwanted rays from a building's interior.
Systems like this are becoming a reality because of the advent of thinner, smarter materials. These new materials allow researchers to shrink existing technologies into compact systems that are more effective and visually unobtrusive. Researchers have pared down the typical 4-foot-by-4-foot silicon solar panels to 1 square centimeter.
Engineering company ABB is pushing its battery energy storage system (BESS) as a solution for a blackout. In August, it unveiled the online Alaskan system that uses ABB power technology. The company said the U.S. state's $30 million BESS, the world's biggest, will cut power blackouts by over 60%. The energy storage system includes a massive nickel-cadmium battery, power conversion modules, metering, protection and control devices, and service equipment. The goal is to keep user interruptions at a minimum by providing power during system disturbances. The BESS will produce several minutes of power so the utility can bring generation back online.
Why did blackouts hit worldwide?
By Cris Whetton
North America, Britain, France, China, and even Finland have all suffered major blackouts in the space of two weeks. In France, the problems were due to abnormally severe weather. China's problems had nothing to do with the weather, but the waste dump that caught fire on 15 August and brought down high-voltage power lines had all the signs of being an illegal fuel-selling operation. The south London blackout of 28 August lasted only forty minutes. It began with a false alarm, to which the operators responded by isolating the apparently failed transformer. One backup circuit was out for maintenance; the other failed shortly afterward because technicians had incorrectly fitted automatic protection equipment during an earlier upgrade. No cascade of failures followed this incident, which, like the Helsinki incident, remained localized, though to a rather large area.
Human error almost certainly caused the 23 August power failure that blacked out the Helsinki region of Finland, though investigators are still looking into the incident. Somehow, a grounding link closed or never reopened, shorting the 110-kilovolt line between the Suvilahti and Kruunuhaka generating stations as the latter came back online after maintenance. This led to a cascade shutdown of the other generating stations feeding the city, though the cascade did not spread to the rest of the network outside the Helsinki region.
In the 14 August power failure affecting the U.S. and Canada—as with London—politicians immediately knew the cause of the problem: it was someone else. And it may be years before they identify the cause.
The French experience shows the need to upgrade systems to take account of hotter, stormier summers. This summer, generating stations and process plants have had cooling problems. Now is the time to install extra capacity.
We often report refinery shutdowns that occur because of a single failure in the power supply system. The Chinese incident underscores the vulnerability of overhead power lines to fires. It also shows the dangers of running multiple circuits down the same corridor. Physically separate redundant systems would avoid common-mode failures—in this case, damage by a widespread fire.
The London and Helsinki incidents show that backup systems can and do fail. There is no simple answer to this problem; all we can do is add third and fourth levels of redundancy, as many aircraft hydraulic systems do. Testing does not solve the problem, because testing can itself induce failures. Nor does periodically interchanging the main and backup systems; this merely ensures they both wear out at the same time.
Uran Plant: An Indian experience
By Samir K. Laskar
Uran Plant is the first landfall point of the Bombay offshore field of Oil & Natural Gas Corporation, India. The plant handles nearly 16 million metric tons per annum (MMTPA) of oil, 12 million cubic meters per day (MCMD) of natural gas, and 1,400 cubic meters per day (CMD) of condensate—nearly half of India's domestic production of oil and oil-equivalent gas.
In response to two major catastrophic accidents in the past decade—at the nearby Indian Petrochemicals plant in Nagothane and at the Vizag refinery—Uran plant management has taken a fresh look at the safety aspects of the plant and, with the backing of national and international authorities, implemented expert committees' recommendations.
Some of the safety measures included enhanced hydrocarbon/infrared, hydrogen sulfide, and smoke detection and fire suppression systems placed strategically throughout the plant.
Behind the byline
Samir K. Laskar is Supt Instrumentation Engineer at Oil & Natural Gas Corporation in Uran, New Mumbai, India.