01 March 2005
In with the new
Integrating technology to boost plant operations.
By Lance Abbott, Eric J. Heavin, and Daniel Shanyfelt
Technology offers industrial applications near-limitless opportunities to increase efficiency, improve regulatory compliance, expand product lines, and improve quality control.
Every process, from the production line to human resources to the most mundane administrative task, is a candidate for improvement through the use of the latest technological solutions. While the possibilities are endless, implementing these process improvements can be a daunting and costly task. Facilities have become risk averse after a history of poorly planned projects that promised amazing results but often delivered products that were expensive, difficult to maintain, impossible to upgrade, and far short of their promised value. Careful planning, however, can lay a foundation upon which "process engineering" groups can implement solutions cost-effectively without having to become an information technology (IT) group.
The following is an overview of how to create such a broad technology plan, along with examples of where the plan benefited development at the Savannah River Site Defense Programs (DP) facility and where holes in the plan led to extra work and cost.
Under the hood
To create an effective technology plan, engineers must first analyze their system as a whole and learn how to model that system using some of the fundamentals of modern software design. The initial analysis must be comprehensive. That means looking outside of the areas that most systems engineering groups are familiar with and determining how the process system relates to all of the support and administrative systems used to operate the facility. Starting from a key component in the system and looking at all of its supporting and relational entities is crucial to determining data flow for that system and the involvement of outside systems.
All of the different components supply data to the "facility report," whether through records of facility work performed or through the direct output of the process. This first big-picture look at all of the related components of the system allows the model to be broken down by system for more detailed analysis by those groups or individuals most intimately familiar with their operation. Breaking down individual model components allows one to determine items such as the source of the data, the programs and software used, and the actions or work performed.
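As a concrete illustration of the kind of breakdown this produces, the minimal sketch below (in Python, with hypothetical names; the article prescribes no particular notation) records, for each model component, its data source, the software that touches it, the work performed, and the downstream components it feeds.

    from dataclasses import dataclass, field

    @dataclass
    class ModelComponent:
        """One component of the facility model, as identified during analysis."""
        name: str
        data_source: str                              # where the data originates
        software: list = field(default_factory=list)  # programs that read or write it
        actions: list = field(default_factory=list)   # work performed on or with it
        feeds: list = field(default_factory=list)     # downstream components it supplies

    # A fragment of the big-picture model: a control room log feeding the facility report.
    shift_log = ModelComponent(
        name="Shift log",
        data_source="Control room operators",
        software=["Log entry form"],
        actions=["Record equipment status at turnover"],
        feeds=["Facility report"],
    )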
Performing a thorough system analysis is not only an essential step in software modeling, but it also helps identify targets for improvement by providing a baseline against which improvements can be judged for cost and potential return.
Once you have analyzed the system components, you can approach software engineering solutions.
For the software engineering portion, deep programming and software design expertise is not required. However, a solid comprehension of object-oriented programming (OOP), data modeling, and n-tier solution design is essential to the success of even the smallest hardware or software project.
According to Evangelos Petroutsos and Asli Bilgin, authors of the book Mastering Visual Basic .NET Database Programming, "any well-designed distributed system should acknowledge four important points, which we refer to as the 'abilities' of an application: Interoperability, Scalability, Reusability, and Extensibility." These fundamentals, the authors said, are the tools needed to turn a system analysis into the "technology plan," under which you can develop individual projects of any scope with confidence that they will integrate with and support the next project in the pipeline as well as future projects. This kind of interoperability drastically improves return on investment as each tool's life span increases and its maintenance costs fall.
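As a rough illustration of what these "abilities" look like in code, the sketch below (Python, with hypothetical names; the article's projects were not necessarily structured this way) hides the data tier behind a small interface so the presentation tier never depends on where the data lives. Because the display code is written against the abstraction, a later project can reuse the same repository or swap in a different storage back end, which is the kind of reusability and extensibility the authors have in mind.

    from abc import ABC, abstractmethod

    class StatusRepository(ABC):
        """Data tier contract: the presentation tier depends only on this interface."""
        @abstractmethod
        def current_status(self, equipment_id: str) -> str: ...

    class DatabaseStatusRepository(StatusRepository):
        """One interchangeable implementation (sqlite3-style connection shown);
        a file- or service-backed version could replace it."""
        def __init__(self, connection):
            self._conn = connection

        def current_status(self, equipment_id: str) -> str:
            row = self._conn.execute(
                "SELECT status FROM equipment WHERE id = ?", (equipment_id,)
            ).fetchone()
            return row[0] if row else "unknown"

    def render_board_line(repo: StatusRepository, equipment_id: str) -> str:
        """Presentation tier: formats a display line without knowing the data's source."""
        return f"{equipment_id}: {repo.current_status(equipment_id)}"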
[Figure: An overall analysis of the DP operations facility, displaying the key components that contribute to generating a daily facility operations report.]
Clear as mud
Even after a thorough system analysis, it still may not be obvious which areas to target for improvement first. Obvious choices include the following:
Related functionality spread across multiple systems
Important data not collected or difficult to retrieve
Confusing user interfaces
Most of these were apparent before the system analysis, so what has this work really done to clarify the situation? First, a better picture is starting to form of how to approach system modification tasks and how much effort is involved in solving these problems. Second, a number of more subtle yet valuable opportunities are now available for consideration; targets that are less obvious often present the greatest opportunities for cost-effective improvement.
[Figure: A comparison of the FSB initial design and the revised FSB model.]
There were two main projects that drove the initial DP system analysis and architecture configuration: the Facility Status Board (FSB) and the System Status Log (SSL). Both projects needed to enhance facility communication, eliminate data duplication, and reduce "operator error" by implementing better administrative controls.
The FSB project grew out of inefficient communication between on-coming and off-going shifts. Several different buildings and systems contribute to the shift turnover meetings, and each area had to carry data from its separate control rooms, offices, and work areas to one central meeting room on printouts. The system analysis revealed several different input streams with duplicate data, isolated networks, and data validity issues. The solution was to design or obtain a program that could run and update from multiple locations throughout the facility, over a single accessible network, to present current facility status information. Each section of the facility then divided its data into different electronic "boards" where equipment and area information could be updated for display to the rest of the plant.
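One way to picture this arrangement is the minimal sketch below, in Python with a hypothetical board_entries table (the actual FSB implementation is not reproduced here): each location runs a small client that polls the shared database over the common network and redraws its board.

    import sqlite3
    import time

    def poll_board(db_path: str, board: str, interval_s: float = 5.0):
        """Sketch of an FSB-style client: poll a shared database and redraw
        one electronic board, so every location sees the same data."""
        conn = sqlite3.connect(db_path)
        while True:
            rows = conn.execute(
                "SELECT equipment, status, updated_at FROM board_entries "
                "WHERE board = ? ORDER BY equipment",
                (board,),
            ).fetchall()
            print(f"--- {board} ---")
            for equipment, status, updated_at in rows:
                print(f"{equipment:<20} {status:<12} {updated_at}")
            time.sleep(interval_s)  # refresh on a fixed interval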
The SSL project came about to meet two critical facility needs. The first was to give operations a tool for making large system status changes (lockouts, alarm inhibits, caution tags) on groups of points instead of one point at a time. The second was to provide an electronic log to improve accountability and make system configuration information easily accessible. The system analysis revealed that operators had to log status changes performed on the DCS in separate log sheets or databases isolated from the DCS, and that they lost valuable time performing mass status changes point by point. The answer was to design or obtain a program that added all the administrative controls and tracking information of a system status change to a tool the operator could use directly on the DCS.
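A minimal sketch of what such a tool might do internally, assuming hypothetical points and status_log tables rather than the actual SSL design: the mass change and its administrative log entries are written together, in a single transaction.

    import sqlite3
    from datetime import datetime, timezone

    def apply_group_change(conn, point_names, new_status, operator, reason):
        """Sketch of an SSL-style helper: apply one status change (for example,
        an alarm inhibit) to a whole group of DCS points and log every change,
        so the record is created in the same step as the change itself."""
        timestamp = datetime.now(timezone.utc).isoformat()
        with conn:  # one transaction: the changes and their log entries succeed together
            for name in point_names:
                conn.execute(
                    "UPDATE points SET status = ? WHERE name = ?",
                    (new_status, name),
                )
                conn.execute(
                    "INSERT INTO status_log (point, status, operator, reason, at) "
                    "VALUES (?, ?, ?, ?, ?)",
                    (name, new_status, operator, reason, timestamp),
                )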
After laying the groundwork for the software solution and creating a proposal for development, the team should consider how to allocate resources appropriately among the different project tasks. A knowledgeable modeler in contact with the different area "experts" should perform the data modeling. The initial design should be as all-encompassing as possible.
A good interviewing practice for the data modeler is to ask each process area expert about their system requirements, but also to ask questions concerning expansion, frequency, and any special cases to leave design room for growth. Questions like "Do you think you would utilize this addition?" or "Would it make sense if this was performed this way?" prompt thought about what the system is really meant to do, and they involve and challenge the area expert to take a different look at the current setup.
Interface design of an application should, at a minimum, take into consideration the comments of as many developers, designers, and users as possible. Interfaces should be as standardized as possible to minimize user and operator retraining during revisions or modifications and to support the goal of developing applications in modular form.
One area that is often ignored or given little attention is the most critical part of creating an architecture: the data model. A well-designed data model will outlast every software package in a plant, developed or acquired. A poorly designed data model, on the other hand, will increase installation cost, complicate maintenance, and shorten the life span of those same packages. Changes to a data model currently in use can and will require rework of every referencing program. Investing time and effort to understand all of the systems in the plant, and how to represent them in a way that is complete, useful, and flexible, will give greater returns than any other phase of development.
Many resources, however, make the field accessible to the amateur IT engineer. An important point is that the data models for the FSB and SSL projects are distinct only because of a particular data isolation issue (an isolated network due to classification) encountered as a result of the type of data handled in the DP facility. Unless there is a compelling reason not to, all facility data should use a single model.
For the FSB program, the initial data model consisted of status boards created by the program and bound directly to individual tables within the database. The design of each table came from what was desired for display on that particular status board. One of the first problems encountered with this model was how to display data on multiple boards without duplicating the data in the database.
Basing the FSB application design so directly on the structure of the database made changes to either one difficult and time-consuming, requiring extensive testing.
The newer FSB database model allowed for as much expansion or reduction as possible. We accomplished this by making the database completely relational, with the ability to have multiple areas with dependent systems and pieces of equipment.
The newer model design came from the actual facility data architecture rather than from what each group wanted to display on an individual status board. This structure made the data readily available for access by any type of program. The design also allowed for a much more versatile and distributable database, which could apply not only to the DP facility but to any other process facility.
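A minimal sketch of this kind of structure, with hypothetical table names rather than the actual FSB schema: areas own systems, systems own equipment, and a board is merely a named selection of equipment rows, so the same data can appear on any number of boards without duplication, resolving the problem noted above.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE area      (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
        CREATE TABLE system    (id INTEGER PRIMARY KEY, name TEXT NOT NULL,
                                area_id INTEGER REFERENCES area(id));
        CREATE TABLE equipment (id INTEGER PRIMARY KEY, name TEXT NOT NULL,
                                status TEXT, system_id INTEGER REFERENCES system(id));

        -- A board is just a named selection of equipment: the same equipment row
        -- can appear on any number of boards without being stored twice.
        CREATE TABLE board           (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
        CREATE TABLE board_equipment (board_id     INTEGER REFERENCES board(id),
                                      equipment_id INTEGER REFERENCES equipment(id),
                                      PRIMARY KEY (board_id, equipment_id));
    """)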
The SSL data model was to replicate, in relational form, the "flat file" DCS Continuous Database (CDB) in order to track, log, and perform the required system status changes. This database had to integrate seamlessly with the DCS.
The initial data model for the SSL program was very good at holding the most current system information from the DCS but had several issues with keeping historical data. These mainly stemmed from updates to DCS information (point names, types, etc.) that was tracked within the different status logs. Because the database design called for a force-update table relationship with the DCS, all historically tracked points would update to the most current DCS information, even if that was not the value at the time the data was entered. To keep true historical data, the data model needed modification to allow for DCS point revisions.
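One common way to allow such revisions, sketched here with hypothetical names rather than the actual SSL schema, is to keep every edit of a point's descriptive data as a new revision row and have each log entry reference the revision that was current when the entry was made.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        -- Each edit to a DCS point's descriptive data adds a revision row;
        -- the old row is kept instead of being overwritten.
        CREATE TABLE point_revision (
            id         INTEGER PRIMARY KEY,
            point_name TEXT NOT NULL,
            point_type TEXT NOT NULL,
            valid_from TEXT NOT NULL   -- when this revision became current
        );

        -- A log entry references the revision current at entry time, so a later
        -- rename or retyping of the point cannot rewrite history.
        CREATE TABLE status_log (
            id          INTEGER PRIMARY KEY,
            revision_id INTEGER NOT NULL REFERENCES point_revision(id),
            status      TEXT NOT NULL,
            logged_at   TEXT NOT NULL
        );
    """)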
As one can gather from the previous examples, we evaluated several data model designs during the development process. After multiple revisions, it became apparent that while there were no entirely "perfect" data models, some models clearly met the requirements better than others. All models are compromises among the principles of data integrity, normalization, performance, and functionality. The key to building a solid data model is understanding these compromises and making the choices that best suit the needs of the facility.
A good user interface can make or break a potentially brilliant application. Creating applications with a standardized and intuitive design increases the odds of user acceptance of the software. Adhering to a set of human factors standards helps ensure a consistent "look and feel" among existing and new utilities. Experience in the DP facility has demonstrated that end users identify more with how they interact with a product than with the functionality it supplies. The consistent look and feel of new or modified applications therefore greatly increases user comfort. Time and effort spent standardizing interface logic is easily recaptured through savings in training and support. Creating standard objects and controls to use for the same functions within separate applications enhances the ability to reuse controls, forms, and program layouts.
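One way to make such standardization concrete, sketched below with hypothetical names (the DP applications were not necessarily built this way), is a shared base form that every application window inherits, so the common layout, fonts, and controls live in one place.

    import tkinter as tk

    class StandardForm(tk.Frame):
        """Sketch of a shared base form: every application window inherits the
        same title styling, body area, and button row for a consistent look."""
        TITLE_FONT = ("Arial", 14, "bold")

        def __init__(self, master, title: str):
            super().__init__(master, padx=8, pady=8)
            tk.Label(self, text=title, font=self.TITLE_FONT).pack(anchor="w")
            self.body = tk.Frame(self)   # subclasses place their widgets here
            self.body.pack(fill="both", expand=True)
            tk.Button(self, text="Close", command=master.destroy).pack(anchor="e")

    # Usage: any utility gets the standard frame for free.
    root = tk.Tk()
    StandardForm(root, "System Status Log").pack(fill="both", expand=True)
    root.mainloop()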
No one knows more about a particular end user's job than the end user. Each new product initially decreases the user's overall familiarity with how to perform that task. For successful application acceptance and implementation, the user must recognize at introduction how the product benefits them.
Using rapid application development (RAD) techniques to familiarize the end user with the product and to collect continuous feedback during the early stages of development is critical to integrating new applications in a continuously operating process environment. Taking the time to give users more input into the application contributes to a feeling of ownership on the part of the end user. When end users feel invested in the product, they are far more likely to view it as a way to make their job easier than as another ill-conceived imposition by a process support group.
While important, user input must still be balanced with good development practices. Development of the interface must be a team effort between developers and users. The user's initial ideas about how an interface should operate may prove inefficient or unusable in practice. Working with users to guide them toward a final product that satisfies their needs and is based on solid design principles and human factors engineering (HFE) standards is a great example of a balanced development process.
Thorough planning and detailed design are essential to the success of any technology plan. Performing a detailed system analysis not only paints a clearer picture of the effort required to implement new technological solutions in one's process, but also exposes some of the less obvious system weaknesses.
For the DP facility, the successful implementation of the FSB program saved thousands of man-hours. The facility also went from having readily available shift-turnover information for two major process buildings to having it for five. The SSL program opened the door to developing custom applications for better plant operation, such as tracking and logging operating caution tags on devices, performing system maintenance, and obtaining detailed DCS shift reports.
Behind the byline
Lance Abbott is a principal engineer at Westinghouse Savannah River Co. in Aiken, S.C., and Eric J. Heavin and Daniel Shanyfelt are engineers at Bechtel Savannah River Inc. in Aiken, S.C.