- By Eric C. Cosman
- Connectivity & Cybersecurity
- For several years there has been a strong focus on the development of standards by several organizations. While certainly essential, they are not sufficient.
- There has also been considerable attention given to threat identification and vulnerability disclosure and mitigation. Again, this is necessary but not sufficient.
- Practices will ultimately be determined by asset owners. Absent regulation, they will be motivated by evidence that the potential consequences outweigh the effort required to address them.
From first steps to a sustained response.
Improving the state of cybersecurity in critical infrastructure has been a well-understood imperative for many years. Even without evidence of deliberate attacks, this should be considered an important aspect of improving system resilience. Although challenges have existed since the earliest days of using commercial-off-the-shelf (COTS) technology in industrial automation systems, it was not until the early 2000s that significant attention was given to potential compromises to system security. Since then, there has been significant progress in this area, but there is still much to be done.
Standards, guidance, and direction are available from several sources, but surveys and anecdotal reports have shown that many still struggle with how to turn this information into effective programs. Suppliers have a clear imperative to improve their products, but asset owners often struggle with how to get started.
Awareness is there
Considerable effort has gone into raising awareness of the potential risk that comes with increased connectivity and the use of popular operating systems and networking technology in the monitoring and control of critical infrastructure. This conversation was initially confined largely to the engineering and operations community, but it expanded quickly, first to the community of cybersecurity experts and ultimately to the popular press. Significant attacks and events continue to get broad coverage.
Those responsible for automation systems in critical infrastructure and other industrial sectors are now generally aware of the potential risk. Unfortunately, it is common to place more emphasis on the threat and vulnerability components of risk than on possible consequences. While general statements can be made about such consequences, only the asset owners can identify these for their specific situations.
Increased awareness has led to the development of a variety of standards and associated practices for industrial cybersecurity. Some of these are specific to individual industries or sectors, while others are more broadly focused. Unfortunately, as with any other technical subject, specialized expertise is often required to understand these standards. They are seldom written to be used by those without such expertise. Compounding the challenge, the requirements for establishing and maintaining a secure configuration can be quite complex, addressing a combination of technical capabilities, processes, and procedures.
Fully understanding these requirements and applying them to a specific configuration is often beyond the capability of the asset owners, requiring them to purchase professional services. Purchasing such services or creating projects to improve security requires that the necessary resources be justified in business terms.
Practical guidance required
Standards and practices are necessary, but not sufficient. Essentially, they should be considered reference material rather than step-by-step instructions. They capture effective and proven engineering practices, often illustrated with clear use cases. They are certainly not prescriptive, as they must be written to allow the broadest possible application, and they are developed on a timeline far too long to address immediate needs.
Practical guidance is somewhat different and must be based on the requirements and performance levels defined in standards. To be applied, it must be reasonable from the perspective of those applying it. This is a somewhat subjective measure, but in general it means that it should be possible to apply the guidance without excessive complexity or effort. Guidance is often described in the form of case studies. Whenever possible, a selective or incremental approach is preferred, with all elements of the guidance categorized as either essential or optional.
It may sound obvious, but guidance must also be actionable. While standards may state what must be achieved without much regard to the methods used, guidance should address not only the outcome and applicable metrics but also suggest suitable methods. Assuming that metrics are available, there must also be a means of measuring and reporting progress. Of course, several sources of useful guidance are already available. None is suitable for all situations, so some selection and evaluation are required.
Perhaps the most well-known example is the NIST Cybersecurity Framework (CSF). It has a step-by-step approach to addressing cybersecurity with an implicit assumption that it is being done in response to an anticipated or actual attack or event. Another common request is for a “checklist” that contains simple steps that should be taken to improve security. Several of these have been developed, but perhaps the most popular is the “Top 20.” Such lists do not constitute a comprehensive response, but they do cover several simple and valuable measures that almost anyone can take. There are also several sector-specific sources of guidance on effective cybersecurity. Examples include NERC CIP for the energy sector and Responsible Care® for the chemical industry.
Why not more progress?
Given that several standards have been developed and practices are available, it would be reasonable to ask why there has not been more progress in securing critical systems. There are likely as many answers to this question as there are circumstances, but there are some common themes.
Just as with any proposed project or initiative, cybersecurity programs must be supported with a solid business case. Those building such a case encounter several common questions. Perhaps one of the first of these is whether a cybersecurity program should be the responsibility of the information technology (IT) function or assigned to operations or engineering. The “either/or” premise of this question is flawed, because the reality is that experience, resources, and expertise will be required from these and other organizations.
Even if the organizational alignment and accountability is clear, there will still be the fundamental question of why cybersecurity improvements must be made, or why now. This is best answered with a risk assessment, described in more detail below.
How to start?
Gaining general support for a cybersecurity program or response is only the first major hurdle. Once resources are available, those charged with designing and implementing such a program often find it difficult to find the best approach. A common question is “How or where do we start?” This requires a simple and direct response that spells out a step-by-step approach with suitable metrics at each stage.
There is often a temptation to structure the response as a project, but this is only effective until the basic processes are in place. From that point on, cybersecurity must be conducted as an ongoing program. Although the response must be long term, its costs can be managed, just as they are for programs such as quality improvement or safety.
Before you can secure a system or set of assets, it is first necessary to have complete information about those assets. All must be not only identified, but also described in terms of a defined set of relevant attributes. These include the obvious ones like name, location, and network address, as well as others that may not be so easily available.
Asset owners seldom have the time or resources required to maintain a comprehensive system and component inventory. Depending on size and complexity, the first response may be to retain a services company to collect the necessary information. In all but the smallest and simplest installations, collecting this information manually is impractical or even impossible, particularly if the inventory must also detect and record changes over time. This is one of the first opportunities addressed by new and existing suppliers.
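As a concrete illustration of what such an inventory records, the sketch below defines a minimal asset entry. The attribute names, values, and criticality scale are hypothetical, chosen only to show the mix of easily obtained attributes (name, location, address) and harder-to-obtain ones (vendor, firmware version) mentioned above; they are not drawn from any standard.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Asset:
    """One entry in an automation-asset inventory (illustrative attributes only)."""
    name: str                      # obvious attributes
    location: str
    ip_address: Optional[str]      # not every device is networked
    vendor: str = "unknown"        # attributes often harder to obtain
    firmware_version: str = "unknown"
    criticality: int = 3           # 1 (most critical) .. 5 (least); site-defined scale

inventory = [
    Asset("PLC-101", "Unit 1 control room", "10.0.4.17",
          vendor="AcmeControls", firmware_version="2.3.1", criticality=1),
    Asset("HMI-3", "Packaging line", None, criticality=4),
]

# A simple gap report: assets whose firmware version is still unknown.
# Surfacing gaps like this is a primary purpose of the inventory effort.
unknown_fw = [a.name for a in inventory if a.firmware_version == "unknown"]
```

Even a structure this simple makes the point that the inventory is a living data set: every attribute left at "unknown" represents work still to be done.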
The next step is a risk assessment that identifies possible consequences, as well as threats and vulnerabilities. Even after accepting that threats and vulnerabilities exist, many will be tempted to say, “it won’t affect me.” While it may be true that a particular facility may be an unlikely target, this does not mean that it cannot become “collateral damage” in an untargeted attack. There are many different approaches and methodologies for conducting risk assessments. In some cases, principles and techniques have been adapted from those used in other disciplines, such as functional safety.
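Methodologies vary widely, but most reduce to combining the likelihood of an event with its consequences. The sketch below shows one screening-level scheme; the scales, formula, and thresholds are illustrative assumptions, not taken from any standard, and a real assessment would follow a documented methodology chosen by the asset owner.

```python
# Screening-level risk scoring: risk = likelihood x consequence.
# All scales and band thresholds here are illustrative assumptions.

LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
CONSEQUENCE = {"minor": 1, "serious": 3, "severe": 5}

def risk_score(likelihood: str, consequence: str) -> int:
    """Multiply ordinal likelihood and consequence ratings."""
    return LIKELIHOOD[likelihood] * CONSEQUENCE[consequence]

def risk_band(score: int) -> str:
    """Map a numeric score into a reporting band."""
    if score >= 9:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

# Even an "unlikely target" can become collateral damage in an
# untargeted attack, so likelihood is rarely rated "rare" for
# connected assets.
score = risk_score("possible", "severe")   # 2 * 5 = 10
band = risk_band(score)                    # "high"
```

The value of even a crude scheme like this is that it forces the asset owner to state consequences explicitly, which is precisely the component of risk they are best placed to judge.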
Some of the earlier attempts to assess cybersecurity risk were developed by service providers. In many cases, these were initially viewed as proprietary or a source of competitive advantage. This eventually began to change with the release of textbooks, guides, and standards. Even with a clear methodology, the process for risk assessment may be difficult to replicate on a large scale.
It is at this stage that many programs stumble and fail to achieve the support and momentum to make them sustainable over the long term. There are likely many reasons for this, but a common one is that the focus does not shift from the decision makers to those who must execute the program. This cannot be limited to IT staff but must include those who are ultimately accountable for operations availability, reliability, safety, and performance. Those responsible for program execution must know where to go to get guidance and have their questions answered. They must trust these sources as being knowledgeable and appreciative of the constraints and realities that are inherent to their environment.
Answers, rationale, and explanations must also be delivered in terms that operations personnel can understand. Security experts must be able to check their complex jargon at the door, shifting to language that makes sense for the environment. Along with using appropriate language, it is also important to avoid rationale that appears to be based on a theoretical view of ideal security. Balances will have to be struck, and compromises made.
Finally, the detailed requirements spelled out in standards and practices should be seen as a source of reference and subject to some level of interpretation. They are used as references in developing more detailed policies and procedures that are tailored for the specific situation.
It is at this point that several of the questions posed earlier tend to reemerge. One of the first reactions may be to challenge the reality of the risk. Further analysis and explanation may be necessary to convince decision makers.
In all but the simplest configuration there will also be an inevitable question about how to prioritize imperatives and sequence the response. The results of the risk assessment are a major component of this analysis, but other factors may also have to be considered, such as the age and supportability of installed components and devices and the criticality of the application.
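As a minimal sketch of such a prioritization, the function below combines a risk-assessment score with the other factors mentioned: supportability of installed components and criticality of the application. The weights, scales, and asset names are hypothetical; in practice the asset owner would define them to suit the situation.

```python
# Illustrative prioritization: weight risk-assessment results together
# with component supportability and application criticality.
# All weights and scales are hypothetical assumptions.

def priority(risk: int, unsupported: bool, criticality: int) -> float:
    """Higher value = address sooner.

    risk: numeric score from the risk assessment.
    unsupported: True if installed components are no longer supportable.
    criticality: 1 (most critical) .. 5 (least), inverted so that
    more critical applications add more to the priority.
    """
    score = float(risk)
    if unsupported:               # aging, unsupportable components raise urgency
        score += 3
    score += 6 - criticality      # invert the criticality scale
    return score

# Rank two hypothetical assets into a response queue.
queue = sorted(
    [("PLC-101", priority(10, True, 1)),    # 10 + 3 + 5 = 18
     ("HMI-3", priority(4, False, 4))],     # 4 + 0 + 2 = 6
    key=lambda item: item[1],
    reverse=True,
)
```

However the weights are chosen, writing them down makes the sequencing decision explicit and repeatable rather than ad hoc.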
Focus on fundamentals
While there is no single approach or method when establishing a cybersecurity program that guarantees success, there are some fundamentals that are both common and essential.
The first of these is the use of a clear life cycle of the system under consideration, stretching from conception and specification through operation and support. The purpose is to provide a framework for the identification and definition of the required processes and those who execute them.
There are many models that could be used to describe such a life cycle, but standards such as ISA/IEC 62443 have adopted a “system of systems” approach that is adapted from the ISO/IEC/IEEE 24748 standard.
This model shows the life cycle of automation systems as consisting of several steps or phases, beginning with product conception and requirement specification and ending with eventual decommissioning and replacement. At each step there must be clear accountability and responsibility and also well-defined conditions for proceeding.
For each life-cycle phase, it is essential to define the scope and specific responsibilities required for the situation. Although detailed role definitions will vary from one situation to another, standards can also be used to identify the principal roles required in a generic sense. The ISA/IEC 62443 standards have identified the principal roles shown in figure 2.
This diagram shows the primary responsibility of each of these roles with respect to each of the components of the system and its environment. It is also very important to understand and appreciate that the cybersecurity response is part of a much larger program of asset protection.
For installed systems, the asset owner has principal accountability for the automation system. This role is also responsible for the operation of this system. The integration and maintenance service providers are responsible for execution of the processes in their respective phases of the life cycle. The product supplier is accountable and responsible for the execution of the processes used to specify, conceive, and develop the automation systems and associated products.
The standards describe each of the life cycle stages and roles in more detail, as well as how they provide the foundation of an effective program. Examples and derivative case studies will also be used to give more practical guidance.
Regardless of the life-cycle phase or the specific roles involved, it is important to consider all possible approaches to providing the necessary resources. Depending on the situation, roles may be assigned to internal staff, contracted contingent staff, or delegated using purchased services.
Resource availability is of course always an important factor to consider. All resources are limited, whether they are financial or human.
Taking all the above into account, it should be apparent that the exact approach used must be tailored to the environment. The approach should draw from practices proven to work for general purpose information security while adapting them as required for an environment where protection of information may not be the primary concern. There are several elements to consider in formulating this response.
In all but the simplest of cases, the approach must be both scalable and repeatable. The number of automation and other operations systems in a company may be very large and widely dispersed geographically. In such cases, there may also be a desire to be able to compare performance and needs across a fleet or facility.
The challenge of establishing an effective cybersecurity response may be complicated, but it is not intractable. Just as with any continuous improvement program (e.g., safety, quality, preventative maintenance), the first step is to define the scope in terms of inventory and level of improvement required. Standards define what is required, and guidance provides examples of how to proceed. Security experts are available to provide program definition as a service. Asset owners must first focus on defining potential consequences as the basis for their business case.
We want to hear from you! Please send us your comments and questions about this topic to InTechmagazine@isa.org.