1 May 2007

Focus on OPC

Popular, yes, but a closer look shows security issues

Fast Forward

  • Because of its vendor-neutral position, OPC allows for interconnectivity on the control floor and in the network.
  • OPC's connection with Microsoft's DCOM opens control systems up to worms and viruses.
  • Securely deploying OPC applications remains a challenge for engineers and technicians.
By Eric Byres, Joel Carter, Matt Franz,  Bill Henning,  John Karsch, and Dale Peterson

With its widespread adoption for interfacing systems on the plant floor and the business network, OLE for Process Control (OPC) has become a classic example of the benefits and risks of adopting IT technologies in the control world.

OPC is an industrial standard built primarily on Microsoft's Distributed Component Object Model (DCOM), which in turn relies on the Remote Procedure Call (RPC) service. Because of its vendor-neutral position in the industrial controls market, manufacturers use OPC technology to interconnect Human Machine Interface (HMI) workstations, data historians, and other servers on the control network with enterprise databases, ERP systems, and other business-oriented software. Furthermore, since most vendors support OPC, the perception is that it is one of the few universal protocols in the industrial controls world, adding to its widespread appeal.

The OPC Foundation is developing a new version of OPC, called OPC Unified Architecture (OPC-UA), based on protocols other than DCOM. This aligns with Microsoft's goal of retiring DCOM in favor of the more secure .NET and service-oriented architectures. Once most OPC applications migrate from the DCOM-based architecture to the .NET-based architecture, industry will have the opportunity for far better OPC security.

Since DCOM-based OPC is what is on the plant floor today and will continue to see use for years to come, we focused our investigation on how to secure this type of OPC. Our research showed two main areas of security concern. The first is that viruses and worms from the IT world are increasingly targeting the underlying RPC/DCOM protocols used by OPC, as noted in this attack trends discussion:

"Over the past few months, the two attack vectors that we saw in volume were against the Windows DCOM (Distributed Component Object Model) interface of the RPC (remote procedure call) service and against the Windows LSASS (Local Security Authority Subsystem Service). These seem to be the current favorites for virus and worm writers, and we expect this trend to continue."

Because control systems and servers use OPC connectivity, DCOM-based protocol attacks have the potential to disrupt control system operations. An example of such a worm is the Blaster worm of 2003.

The second issue is that securely deploying OPC applications has proven to be a challenge for most engineers and technicians. While OPC is an open protocol with freely available specifications, engineers must wade through a large amount of detailed information to answer even basic security questions. There is little direct guidance on securing OPC, and our research indicates much of what is available may actually be ineffective or misguided.

These issues are not new for the OPC Foundation.

"End users need to speak out about security. That is the only way suppliers will ensure a safe system," said Thomas Burke, president and executive director of the OPC Foundation.

"We need to get together to strategize and collaborate to raise the awareness, and be in the position to recommend the strategy for addressing the opportunity to truly have complete secure reliable interoperability from devices through the enterprise," he said.

In the course of our research, we found few treatments of OPC useful to anyone who was not an experienced software developer. A review of the OPC specifications, focusing on security details, could show users the potential risks of OPC deployments, but not how to deal with them. So rather than conducting a formal security analysis of OPC or DCOM, we focused on creating a set of observations and practices that will help end users secure their OPC systems. This article discusses what OPC is and how it is really used. Future articles will look at observations and practices for better OPC security.

OPC defined

Before you can secure OPC, you need to understand what it is. OPC is a software interface technology used to facilitate the transfer of data between industrial control systems, HMIs, supervisory systems, and enterprise systems such as historical databases. OPC provides a common interface for communicating with diverse industrial control products, regardless of the software or hardware used in the process. Before OPC, application developers had to write a specific communication driver for each control system they interfaced with.

Now with OPC, suppliers no longer need to develop separate drivers for each network or processor. Instead, they create a single optimized OPC client and/or server for their product. This OPC client then communicates with OPC servers designed and sold by the manufacturers of the other networks and controllers. Once an OPC server exists for a piece of equipment or an application, it becomes easier to integrate its data with other OPC-compliant software.

OPC is based on a client/server architecture in which a computer runs software that makes it a client, a server, or in some cases both. The OPC server is a software application that gathers information from devices (such as PLC, DCS, or SCADA controllers) using those devices' native protocols (such as MODBUS or PROFIBUS). The server then provides access to this data via COM objects and method calls, allowing multiple OPC clients to indirectly read and write to the field device via the OPC server.

An OPC client is an application that accesses data held by OPC servers. An HMI package may contain an OPC client that allows it to access data provided by an OPC server application resident on another machine. The HMI package could also act as an OPC server, allowing other OPC clients to access the data it has aggregated either directly from field controllers or from other OPC servers.

To illustrate this client-server architecture, imagine a simple system with three basic components designed for controlling the water level in a tank:

  • A MODBUS-capable PLC performing the actual control
  • An OPC platform containing an OPC server and a MODBUS protocol driver
  • An HMI for operator access to the control system

The HMI will need to be able to write the set point in the controller, read the current water level, and monitor the controlled output (the pump) and alarms. If the HMI needs to read a value from the PLC, it sends a request via an OPC Application Programming Interface (API) call, and the server translates this into a MODBUS message for transmission to the PLC. When the desired information returns from the PLC to the OPC server, the server translates it back to OPC for transmission to the HMI.
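The protocol translation the OPC server's driver performs can be illustrated with a short Python sketch that builds the MODBUS/TCP request the server would send to the PLC. This is a generic "Read Holding Registers" frame as defined by the MODBUS/TCP specification, not the OPC API itself, and the register address and unit ID are made-up values for the water-tank example.

```python
import struct

def modbus_read_request(transaction_id, unit_id, start_addr, quantity):
    """Build a MODBUS/TCP 'Read Holding Registers' (function 0x03) request.

    Frame layout (big-endian): transaction id, protocol id (0 for MODBUS),
    count of remaining bytes, unit id, function code, start address, quantity.
    """
    pdu = struct.pack(">BHH", 0x03, start_addr, quantity)       # function + payload
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

# e.g. read one register (the water level) at address 100 from PLC unit 1
frame = modbus_read_request(1, 1, 100, 1)
```

The OPC client never sees this frame; it only makes the API call, and the server's MODBUS driver produces the bytes on the wire.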

The relationship

One of the most important things to understand about OPC is it is an API and not an "on the wire" protocol. It is at a higher level of abstraction than communications protocols such as Ethernet, TCP/IP, or even the MODBUS Application Protocol. For most developers using the OPC API, the underlying network transport or data encoding used by the API to exchange data is irrelevant.

The core architectures and protocols underlying OPC are Component Object Model (COM), DCOM, and RPC. COM is a successor to Dynamic Link Libraries (DLLs) and is a software architecture developed by Microsoft to build component-based applications. It allows programmers to encapsulate reusable pieces of code in such a way that other applications can use them without having to worry about implementation details. In this way, you can replace COM objects with newer versions without having to rewrite the applications using them.

DCOM is a network-aware version of COM. It tries to hide the difference between invoking local and remote interfaces from software developers. To do this, all parameters must be passed by value, and the result must be returned by value as well. The process of converting the parameters into data for transfer over the wire is called marshalling. Once marshalled, the data stream is serialized, transmitted, and restored to its original data ordering on the other end of the connection.
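A minimal Python sketch of the marshalling idea, using the standard struct module. Real DCOM marshalling is performed by MIDL-generated proxy/stub code with its own wire encoding, so the field layout and function names here are purely illustrative.

```python
import struct

def marshal_write_request(item_id, value):
    """Pass-by-value marshalling: flatten the call's parameters into a
    byte stream suitable for transmission (big-endian: uint32 + float64)."""
    return struct.pack(">Id", item_id, value)

def unmarshal_write_request(stream):
    """Receiving side: restore the original parameter values from the wire."""
    item_id, value = struct.unpack(">Id", stream)
    return item_id, value

wire = marshal_write_request(7, 42.5)       # 12 bytes on the wire
params = unmarshal_write_request(wire)      # (7, 42.5) restored intact
```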

DCOM uses the mechanism of RPCs to transparently send and receive information between COM components on the same network. RPC allows system developers to control remote execution of programs without the need to develop specific procedures for the server. The client program sends a message to the server with the appropriate arguments, and the server returns a message containing the results of the executed program.
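The request/response pattern can be sketched in Python with a toy dispatch table standing in for the server. Actual DCOM RPC uses Microsoft's binary RPC protocol over the network, not JSON messages inside one process, and the procedure names here are invented for illustration.

```python
import json

# Server side: a dispatch table of remotely callable procedures.
PROCEDURES = {
    "read_level": lambda: 73.2,                    # pretend sensor read
    "write_setpoint": lambda sp: f"setpoint={sp}", # pretend actuator write
}

def rpc_server_handle(message):
    """Decode a request, run the named procedure with the supplied
    arguments, and return an encoded result message."""
    request = json.loads(message)
    result = PROCEDURES[request["proc"]](*request["args"])
    return json.dumps({"result": result}).encode()

def rpc_client_call(proc, *args):
    """Client side: send a message naming the procedure and its arguments,
    then decode the server's reply."""
    message = json.dumps({"proc": proc, "args": list(args)}).encode()
    reply = rpc_server_handle(message)  # stands in for a network round trip
    return json.loads(reply)["result"]
```

The client never implements the procedure; it only names it and supplies arguments, which is the essence of RPC.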

For efficiency, the information available from an OPC server is organized into groups of related items. Servers can contain multiple groups of items, and a group can either be:

  • A public group, available for access by any client
  • A local group, only accessible by the client that created it

In the water tank example, where a MODBUS/TCP OPC server connects to a MODBUS-capable PLC, we might configure a "WaterLevel" group on an HMI with five members:

1. "SP" (setpoint)
2. "CO" (control output)
3. "PV" (process variable)
4. "LoAlarm" (Low Water Alarm)
5. "HiAlarm" (High Water Alarm)

The HMI could register the "WaterLevel" group with the SP, CO, PV, and alarm members; then read the current values for all five items either at timed intervals or by exception. The HMI could also have write access to the SP variable.
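A simple Python sketch of the group concept, using the article's "WaterLevel" example. A real OPC Data Access group is a COM object with update rates, deadbands, and callbacks, which this toy class does not attempt to model.

```python
class OPCGroup:
    """A named collection of related items, as in the 'WaterLevel' example.

    public=True  -> available for access by any client
    public=False -> local group, visible only to the creating client
    """
    def __init__(self, name, items, public=False):
        self.name = name
        self.public = public
        self.values = {item: None for item in items}

    def read_all(self):
        """Timed-interval style read: return every item's current value."""
        return dict(self.values)

    def write(self, item, value):
        """Write a single member; unknown items are rejected."""
        if item not in self.values:
            raise KeyError(f"{item} is not a member of group {self.name}")
        self.values[item] = value

group = OPCGroup("WaterLevel", ["SP", "CO", "PV", "LoAlarm", "HiAlarm"])
group.write("SP", 55.0)   # the HMI writes the setpoint
```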

One significant advantage of OPC is that software does not have to deal directly with the control device's internal architecture: It can work with named items and groups of items instead of raw register numbers and data types.



Inside OPC

OPC Servers are not monolithic applications, but consist of a number of related software components. Some of these components are part of the Windows operating system, while the OPC Foundation developed and released others.

Still other components are server applications developed by OPC vendors. Finally, end-users using programming languages such as Visual Basic may develop custom OPC applications.

The OPC Foundation provides a set of DLLs that defines the client and server OPC interfaces. These components marshal and unmarshal interface pointers and method parameters. The "proxy" is the client-side marshalling code, while the "stub" is the server-side marshalling code that interacts with the OPC server code written by the server developer. The proxy and the stub are generated from the Interface Definition Language (IDL) files within the OPC standard.

In the past, vendors distributed their own versions of these files, but this led to application incompatibility and version management issues. To solve this problem, the OPC Foundation chose to distribute a single approved version of these DLLs. Today, all vendors must include these components with their OPC servers. If a security bug were found in one of these DLLs, it would affect all OPC implementations, and the OPC Foundation would have to issue new versions of the proxy/stub libraries to patch the vulnerability.

The OPC Server Browser is a DCOM component used by client software to retrieve information about OPC server applications that may be active on a given host. This component exposes interfaces that allow clients to query the Component Category Manager to find out what OPC servers are available. The OPC Server Browser allows remote clients to determine which OPC servers are available without having to directly browse the host's registry, as was done in early OPC servers. The OPC Server Browser listens on an arbitrary TCP port above 1024. It is also known as the "OPC Discovery Service Executable."

Given OPC's reliance on DCOM, it should come as no surprise that OPC applications rely heavily on a number of Microsoft components for configuration and operation. Like most Windows applications, OPC and DCOM make extensive use of the Windows registry; installing an OPC server typically adds entries to it.

Based on in-lab observations of a number of OPC systems, it turns out OPC Servers and Clients require a surprisingly small set of Windows system services for operation. These include:

  • OpcEnum: OPC requires this service to be running so remote clients can determine which OPC Servers are running on a host.
  • Remote Procedure Call: Required by OpcEnum.
  • Server Process: OPC servers typically start as a service but a GUI client can configure and control the process.

OPC usage

A study team and ISA conducted a survey of OPC end users to determine how companies deploy OPC in their process and manufacturing environments. 

The first question in the survey asked "How does your company typically use OPC in its operations?" Respondents reported using OPC for data transfer to historians, data aggregation in HMIs, and supervisory control in a majority of end-user facilities. What was surprising was that 30% of end users reported employing OPC for data sharing with third parties such as business partners and suppliers. Since most third parties are likely not on site at users' production facilities, this means manufacturers are using OPC for data transfer beyond the plant floor.

The next question asked what OPC functionality their company used. The results indicate Data Access, Historical Data Access, and Alarms and Events are the primary OPC specifications that actually see use on the plant floor. The new OPC-UA specification is rarely used, which is not surprising since it was only released this year.

The next question asked respondents to indicate what impact the loss of OPC would have on their operations and what percentage of deployed OPC systems would see this impact. Over one quarter of the sites reported that losing OPC would result in a loss of production. Also interesting is that more systems would experience a loss of operator view than not.

While some users said they deliberately structured their systems to minimize safety and operational effects on loss of OPC-based information, others said the opposite:

"We control the motor drives by OPC with the DCS. If we lose the OPC we stop the production!"

Clearly OPC is not just being used for data management purposes on the plant floor, but rather is a critical component of many production systems. This highlights the need for better OPC security.

The final question aimed to determine which networks OPC travels over. In other words, is most OPC traffic restricted to the lowest layers of the control system, or does it travel over upper layers such as the enterprise network or even the Internet? Two-thirds of the sites used OPC for transfers in layers 1, 2, and 3 of the network (the layers refer to the ISA-99 General Reference Model, not the OSI model). This aligns with the response to Question 1 of the survey, which indicated data transfer to historians, data aggregation in HMIs, and supervisory control was a primary use in the majority of facilities.

Also correlating with Question 1 was the fact that 20% of companies reported deploying OPC over the site business network, enterprise network, or corporate Intranet, and 10% used OPC over the Internet. Clearly, the common belief that OPC traffic stays on the control network is incorrect.

We looked at how companies actually deploy their OPC systems and found three basic architectures. The first and most common, local OPC deployment on the control/supervisory network, is used for connecting control and interlock traffic between different vendors' control systems. A vendor interface brings data up from the PLCs/DCS on the control layer into an HMI or OPC concentrator (via protocols such as Common Industrial Protocol or MODBUS/TCP). It then stores this data in the OPC server for exchange with other vendors' OPC clients and servers. All traffic stays on the HMI layer, and no OPC traffic crosses the firewall boundaries.

The second architecture uses OPC on the control/supervisory network and historian layer to transfer historical traffic between different vendors' control systems. Again, a vendor interface brings data up from the PLC via a control protocol into an HMI or OPC concentrator and stores it in the OPC server to make it available for the data historian. This historian can sit in a Demilitarized Zone (DMZ) for shared control/enterprise data servers or up on the business network, depending on the site. Typically, the OPC traffic will cross at least one firewall or router with an Access Control List.

The third architecture uses OPC to aggregate data between remote plant sites. Historical traffic between different field stations or remote sites transfers via OPC over the corporate wide area network (or the Internet) to a central data historian. Again, this historian can sit in a DMZ or up on the business network, depending on the site. Typically, the traffic will cross at least two firewall interfaces.

The challenges of securing OPC deployments are clear. The inherent architectural complexity of OPC, the default security posture of OPC servers, and the lack of unambiguous security guidance all contribute to the difficulty. As well, OPC's reliance on the Microsoft platform is both a curse and a blessing: While Windows has flaws, there is a wealth of practices for hardening Windows servers that can apply to OPC clients and servers.

About the authors

Eric Byres is chief executive of Byres Security Inc., eric@ByresSecurity.com; Joel Carter is a researcher at BCIT, jcarter@bcit.ca; Matt Franz was formerly at Digital Bond, mdfranz@threatmind.net; Bill Henning is a consultant; John Karsch is a researcher at BCIT, jkarsch@bcit.ca; and Dale Peterson is director of the network security practice at Digital Bond, peterson@digitalbond.com.