- By David Lee
- Operations & Management
Elements of operator performance are complicated, so look to standards and best practices to guide improvements.
Operator Performance (OP) is a large topic and can be a very difficult subject to navigate. It is also a subject that is forever evolving. Keeping track of the latest best practices and standards can be time-consuming and difficult. This article looks at the various elements of OP and some of the latest initiatives and focus areas relevant to automation professionals.
When I look at OP, I take a holistic view and break it out into four subtopics. I refer to these as pillars, as each, in my opinion, is an important foundation upon which a successful approach to OP is based. The four pillars can simply be stated as:
- Having the right number of people to perform the required tasks
- Ensuring those people are competent to perform those tasks
- Ensuring those people have appropriate tools to perform the tasks
- Having a conducive environment in which to perform the tasks
1. Number of People
The first pillar requires task analysis and workload calculations. Having the correct number of people continues to be perhaps the most challenging aspect of operator performance, as there are no standards on how to effectively calculate workload—although there are some recognized proprietary methodologies used to do so. Getting this wrong can have a significant impact on, for example, the number of operator consoles, the size of the control room, and even the control center.
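Although there is no standard workload formula, the arithmetic underlying most proprietary methods is similar: sum the expected task time per hour and compare it against the operator time available. A minimal sketch of that idea follows; the task names, frequencies, and durations are hypothetical, and the 70% target utilization is an illustrative assumption, not a published benchmark.

```python
import math

# Hypothetical task list: (task, occurrences per hour, minutes per occurrence)
TASKS = [
    ("alarm response", 6, 2.0),
    ("routine surveillance round", 1, 10.0),
    ("permit and communication handling", 2, 4.0),
    ("logbook and shift records", 1, 5.0),
]

def utilization(tasks, minutes_available=60.0):
    """Fraction of an operator-hour consumed by expected tasks."""
    demand = sum(freq * dur for _, freq, dur in tasks)
    return demand / minutes_available

def operators_needed(tasks, target_utilization=0.7):
    """Console positions needed to keep each operator below a target load."""
    return math.ceil(utilization(tasks) / target_utilization)

print(f"utilization per console: {utilization(TASKS):.0%}")
print("operators needed:", operators_needed(TASKS))
```

Real methodologies weight tasks for cognitive demand and account for abnormal-situation peaks, not just steady-state averages, which is exactly why simple averages like this one tend to understate staffing needs.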
Another part of workload calculations is perhaps better understood: the management of fatigue through hours-of-service and overtime limits. Fatigue can have an impact on shift schedules and the number of shifts, and that, in turn, could impact the size and structure of the operations team. Excellent guidance on fatigue risk management is provided in ANSI/API RP-755.
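Hours-of-service rules lend themselves to simple automated checks against a shift roster. The sketch below illustrates the mechanics only; the limit values are placeholders, and actual limits should be taken from API RP 755 and site policy, not from this example.

```python
from datetime import date, timedelta

# Placeholder limits -- take real values from API RP 755 and site policy.
MAX_CONSECUTIVE_DAYS = 7
MAX_HOURS_PER_SHIFT = 12

def fatigue_flags(shifts):
    """shifts: list of (date, hours worked). Returns rule-violation messages."""
    flags = []
    streak, prev = 0, None
    for day, hours in sorted(shifts):
        if hours > MAX_HOURS_PER_SHIFT:
            flags.append(f"{day}: {hours} h shift exceeds {MAX_HOURS_PER_SHIFT} h limit")
        # Count consecutive working days.
        streak = streak + 1 if prev and day - prev == timedelta(days=1) else 1
        if streak > MAX_CONSECUTIVE_DAYS:
            flags.append(f"{day}: {streak} consecutive days exceeds "
                         f"{MAX_CONSECUTIVE_DAYS}-day limit")
        prev = day
    return flags

# Eight straight 12-hour days trips the consecutive-days rule once.
roster = [(date(2024, 1, 1) + timedelta(days=i), 12) for i in range(8)]
for flag in fatigue_flags(roster):
    print(flag)
```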
2. Operator Competency
The pillar relating to competency has received considerable attention over the last few years. Competency, most often managed through training and development programs, starts all the way back at new-hire selection, when key competencies are defined for each position and then used as an input to the hiring process.
Once operators are hired, these competencies—along with technical skill and knowledge requirements—should be used as the basis of formal, individualized training and development plans. Qualification-based certification and periodic requalification are used to maintain competence over time.
The use of a competency-based qualification process also facilitates job progression not solely based on seniority. Using seniority alone is a poor practice that often leads to people working in roles for which they are not suited.
Technology to increase competency and support training, especially of console operators, has developed at pace. There has been a significant increase in the use of simulation and digital twins, especially as the cost of owning and maintaining those technologies has decreased significantly. Cloud-based hosting and software-as-a-service licensing structures have helped reduce this total cost of ownership.
3. Appropriate Tools
Perhaps the most obvious aid to improved operator performance is the toolset that the operator has. Increased use of more traditional advanced process control (APC) techniques can obviously help keep the process stable, freeing the operator from having to monitor and control complex loops. In some industries, batch control based on the ISA-88 standard is prevalent, but increasingly, procedural automation, including state-based control, is becoming common. A standardized approach to implementation of these systems will soon benefit from the new ISA-106 standard. ISA-106, Procedure Automation for Continuous Process Operations, supports automatically detecting and reacting to process state changes, which is a way to remove workload from the operator while ensuring predictable response to abnormal situations.
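The core idea of state-based control—each process state carries its own control actions and applicable alarm set, and transitions fire automatically on detected state changes—can be sketched as a small state machine. The state names, actions, and transitions below are purely illustrative, not taken from ISA-106.

```python
# Toy state-based control sketch: each process state maps to the control
# action taken on entry and the alarm set that applies in that state.
# All names here are hypothetical.
STATES = {
    "startup":  {"active_alarms": {"low flow"}, "on_enter": "ramp feed pump"},
    "running":  {"active_alarms": {"low flow", "high temp"}, "on_enter": "enable APC"},
    "shutdown": {"active_alarms": set(), "on_enter": "close feed valve"},
}

TRANSITIONS = {
    ("startup", "flow established"): "running",
    ("running", "trip"): "shutdown",
}

class UnitStateMachine:
    def __init__(self, state="startup"):
        self.state = state
        self.actions = [STATES[state]["on_enter"]]

    def on_event(self, event):
        """React automatically to a detected process state change."""
        nxt = TRANSITIONS.get((self.state, event))
        if nxt:
            self.state = nxt
            self.actions.append(STATES[nxt]["on_enter"])
        return self.state

sm = UnitStateMachine()
sm.on_event("flow established")   # startup -> running
sm.on_event("trip")               # running -> shutdown
print(sm.state, sm.actions)
```

Because the response to each event is predefined, the operator supervises the sequence rather than executing it, which is precisely the workload reduction the standard is after.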
As for the operator interface, its alarm management system is still in many cases an ineffective tool when it comes to providing an operator with prioritized, meaningful, and actionable cues to head off abnormal situations. The generally accepted best practices of ANSI/ISA-18.2, Management of Alarm Systems for the Process Industries, provide a sound approach to developing an alarm management strategy. However, many people still stop at reducing normal alarm rates to meet the recommended KPIs and pay little attention to the problem of alarm floods potentially overwhelming the operator. The use of state-based dynamic alarming to reduce the magnitude of alarm floods, as well as stale alarms, is not a new concept, but it is still not common.
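Alarm rate KPIs and flood detection both reduce to counting alarms in sliding ten-minute windows; a flood is commonly defined as more than ten alarms in ten minutes per operator. The sketch below shows that counting logic; the threshold and the synthetic alarm burst are illustrative.

```python
from datetime import datetime, timedelta

def alarms_per_10min(timestamps, window=timedelta(minutes=10)):
    """Sliding count: for each alarm, how many alarms landed in the
    trailing 10-minute window ending at that alarm."""
    ts = sorted(timestamps)
    counts, start = [], 0
    for i, t in enumerate(ts):
        while t - ts[start] >= window:
            start += 1
        counts.append(i - start + 1)
    return counts

def flood_windows(timestamps, threshold=10):
    """Windows exceeding the commonly cited flood threshold
    (more than 10 alarms in 10 minutes)."""
    return [c for c in alarms_per_10min(timestamps) if c > threshold]

# Synthetic burst: 15 alarms arriving 8 seconds apart.
t0 = datetime(2024, 1, 1, 8, 0)
burst = [t0 + timedelta(seconds=8 * i) for i in range(15)]
print("peak 10-min count:", max(alarms_per_10min(burst)))
print("flood windows:", len(flood_windows(burst)))
```

State-based dynamic alarming attacks the same problem from the other end: rather than detecting the flood after the fact, it suppresses the alarms that are expected consequences of the current process state so the flood never reaches the operator.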
Along with the alarm management system, the human machine interface (HMI) is meant to provide the operator with situational awareness. The ANSI/ISA-101.01 standard, along with its technical reports, provides guidance for the implementation of a best practice HMI. The adoption of this standard has been very slow, however, and many companies live with poor design until they need to upgrade their process control system.
Arguably, the biggest impact of ANSI/ISA-101 on the operator is the adoption of its suggested four-level display hierarchy. By providing an effective Level 1 display, typically on a large screen, the operator gets a continuous view of critical operating parameters such that changes—especially those changing towards abnormal—can be easily identified and acted upon.
The use of properly designed Level 2 displays allows the operator to act and see the response to the action in one place, allowing the number of monitors to be reduced to meet good ergonomic practice.
As we look at the design of these levels, and indeed the traditionally P&ID-based Level 3 displays, we must get away from earlier lazy practices. We also must start thinking about how we present process information, moving away from simple data toward contextual information and, ultimately, simple decision-support representations. Radar plots and trends are not the only answer!
An often-overlooked tool is the application of communication technology. This could be as simple as a plant radio or telephone, but increasingly collaborative environments can provide real benefits. There also is a trend toward the use of mobile devices as operator interfaces. Use of these tools and their applications in certain industries where there are no real control rooms, or where the operator is by design mobile, can be game-changing.
4. Operating Environment
Finally, there is the operating environment, which, in many cases, is the control room. Over the last couple of decades, control rooms have been moved out of process areas to safe locations or they’ve been built to be blast-resistant and otherwise address most safety concerns. However, addressing operator performance in control rooms has not been a consistent consideration. Guidance, in the form of ISO-11064, and even the ISA-RP60 series, has been out there for some time, but still many control rooms are dark, cramped, noisy, and distracting environments. Consoles are often designed without attention to good ergonomic practices, and chairs are an afterthought. More could and should be done to improve this critical pillar.
In conclusion, operator performance is a complicated subject; overlooking any of its many interacting components can lead to ineffective solutions. If you are going to spend your hard-earned capital on improvements, plan carefully and look to the standards and best practices to guide you.
Key Points to Remember
- Automation team members have important roles supporting operator performance
- Operator performance requirements affect the design of the control system and control room
- Automation professionals should be aware of industry best practices related to operator performance
- Operations and other stakeholder requirements must be considered in control system design
We want to hear from you! Please send us your comments and questions about this topic to InTechmagazine@isa.org.