•  ISA Fall Leaders Meeting 2016

    PMCD Awards Ardis Bartle Member of the Year Award

    Ardis Bartle has time and again proven to be a valuable member of ISA, and PMCD is proud to be associated with her. At the ISA Fall Leaders Meeting, held in Newport Beach, CA, Ardis was recognized as Division Leader of the Year. At the same venue, Sohail Iftikhar, PMCD Director, presented Ardis with the PMCD Member of the Year award as well. On behalf of the entire membership of ISA, and of PMCD specifically, we wish Ardis the best of luck and thank her for her service.

     

                        Ardis Bartle Receiving Award



     PMCD Receives 2016 Outstanding Division – Honorable Mention Award

    PMCD was recognized at the Fall Leaders Meeting in Newport Beach, CA, where Dennis Coad, A&T Department VP, presented Sohail Iftikhar, PMCD Director, with the 2016 Outstanding Division – Honorable Mention award.

     

    Sohail Receiving Award

  • PMCD Scholarship Recipients for Year 2016-2017: 


     

    1. Jere Haney Honor Scholarship to Aaron Christopher Pineda

     

    • University of Colorado Boulder     
    • Major in Mechanical Engineering, minor in Electrical Engineering     
    • Works in automation area    

     

     

     2. Hugh Wilson Honor Scholarship to Bonnie Sim

    • University of New Brunswick, Canada    
    • M.S. in Mechanical Engineering    
    • Research on control and sensing in wall jets related to fluid dynamics    
     
    3. Zhengyu Zhao  
    • Northern Alberta Institute of Technology 
    • B.S. in Instrumentation Technology     
    • Gold medalist in ISA Student Section competition in October 2015      
     
    4. Zubia Najam  
    • Northern Alberta Institute of Technology    
    • B.S. in Instrumentation Technology, pursuing a second credential in Electrical Technology
    • Active in Edmonton Section      
     
      5. Shama Tajani
    • University of Texas at Austin   
    • Major in Islamic Studies, pursuing medical school   
    • Active in volunteering activities in school and in summer camps     
    • Actively working on starting ISA student section at UT Austin    
     

     

     

      

  • PCS 2015 ABSTRACTS


    No. Title Full Name Company Email 
    1. DCS Migration: Lessons Learned

    Our refinery DCS migration program began in 2005. Our existing first-generation systems (installed 1982–92) were at or over capacity, with no room for expansion. Eight steps were planned – one per operating console – ranked according to capacity, equipment age and maintenance history. PAR performs front-end loading for each study, and uses main automation and specialty graphics/configuration contractors to help implement each step. Hot cutover at junction boxes is used to minimize the impact on unit operation. After two steps were completed, a $2.5 billion refinery expansion was announced, causing a 30-month delay in the original schedule. All HMIs were replaced with the DCS vendor’s latest operating stations, allowing console operators to control legacy and latest-generation DCS equipment transparently. Over 400 graphics were migrated to the new HMI system. The central control building was expanded, and a refinery-wide FTE air-blown fiber backbone was installed. We have completed five steps, on schedule and within budget, and are on track to complete our DCS migration two years before our DCS vendor’s announced end of support.

     

    See Hydrocarbon Processing, January 2015, “DCS Migration: Lessons Learned”, pp. 75-80.



    Randy Conley TOTAL Petrochemicals & Refining USA Inc randy.conley@total.com
    2. How to update your plant’s calibration program

    Technology is growing at an unprecedented pace, and process control technology is no exception. Many calibration programs, however, still look the same as they did 20 years ago, having seemingly fallen by the wayside. In the worst case, plants have no structured calibration program in place and low-quality calibrations are performed. Information is recorded with pen and paper, and documents are scattered throughout filing cabinets. Printed maps guide technicians down disorganized routes to locate instruments. When a technician arrives at an instrument, they may or may not have the tools they need to communicate with it, and even then may not know what tolerance to use to determine whether it passes or fails. Tolerances should be determined by engineering, but it is often left to the shop technicians to decide whether the numbers they are seeing are good or whether an adjustment needs to be made. These conditions make it challenging for technicians to perform calibrations efficiently. The disorganization can waste a great deal of time and cost an organization a significant amount of money. Worst of all, these conditions can also create an unsafe environment. All plants want a workable preventive maintenance program, but in reality the technicians are too busy fighting fires to get ahead of them. An effective program that reduces work requires a calibration champion, support from management, and buy-in from the technicians. Support from management comes first; buy-in from the technicians comes with time and practice of the process.

     

    It seems like an easy problem to identify and an easy task to take on, right? Why, then, aren’t all plants using an automated calibration system? It is 2015, right? The reason is that it is often overwhelming to work out where to begin and what the proper steps are for implementing a process change. If you don’t know what you are doing, it can also be very time- and resource-consuming, making the cost of failure high.

    Roy Tomalino Beamex roy.tomalino@beamex.com
    3. Demystifying Government-Validated Solutions: Navy Case Study Shows How Critical Infrastructure and Facilities Can Benefit

    The federal government and Department of Defense (DoD) facilities require resilient networks that assure availability of critical assets to support US armed forces at home and abroad. Current mandates provide significant incentives for these agencies to build more efficient and resilient systems that consume less energy and are protected from disasters, accidents and attacks.

     

     

    With recent reports of Havex, Black Energy and other malware variants targeting industrial control systems and SCADA operations, facility managers are intensely concerned about providing industrial controls systems security for buildings and utility systems. Many of these include an array of legacy components that cannot be equipped individually with modern or advanced security software. This presentation will focus on a case study of the Navy’s Enterprise Industrial Controls System (EICS) deployment for military-grade protection of both physical and cyber aspects, as well as analysis, modeling and prediction capabilities for building systems.

     

     

    Using a base-wide wired and wireless network that scales and extends across 15-plus bases, the system provides an advanced cyber-secure framework for an optimized industrial controls system (ICS) that seamlessly blends direct digital controls (DDC) and SCADA networks into a single, cohesive installation with command and control management. The platform also provides video surveillance for comprehensive critical-infrastructure protection. All components adhere to the DoD instruction on information assurance (IA) implementation and FISMA (Federal Information Security Management Act) requirements. The EICS solution is a foundational system through which the Navy complies with congressional mandates to securely reduce energy consumption. It has been independently tested for vulnerability mitigation, and further allows energy managers operational flexibility that is compliant with DoD-grade IA.

     

     

    The presentation will outline how similar approaches and architectures can be directly applied for industrial critical-infrastructure applications. It will show that comprehensive and fully validated security systems can improve performance and extend security beyond the firewall without negatively impacting operations, schedules, workflows or budget.

     

     

    About the Presenter

     

     

    Frank Ignazzitto, Vice President of Marketing for Ultra Electronics, 3eTI, brings more than 30 years of technology and management experience to industrial and government decision makers tasked with managing highly complex networks and security systems. With a background that spans military service, international business management, and start-up business execution, his career has focused most recently on business leadership in the defense and energy industries. Mr. Ignazzitto has dedicated the last 15 years to driving new technology adoption with the Department of Defense, the intelligence community, Homeland Security and many other federal agencies. His diverse technology experience includes human-machine interfaces, electro-optical nanotechnology and advanced fuel cell systems, in addition to 14 years in the oil and gas sector. After earning his BS in Engineering at the United States Military Academy, West Point, Mr. Ignazzitto served as an officer in the Air Defense branch of the US Army.

    Alice Ducq   alice.ducq@ultra-3eti.com
    4. New Technology for Toxic and Flammable Gas Detection

    Until now, industry norms for toxic gas detection have been limited to single-point detectors placed strategically throughout facilities. Senscient has commercialized technology that advances toxic and flammable gas detection by using tuned lasers to monitor fugitive hydrocarbon gases and toxic gases such as H2S, NH3, CO2, HF, and HCl across an open path. This “open path” detection complements single-point detection to significantly improve the early warning of a fugitive gas release, resulting in improved safety and decreased risk. The technology is ideally suited to creating a detection barrier around the perimeter of a plant, process unit, or storage area.

     

     

    This open-path gas leak detector has been developed using Enhanced Laser Diode Spectroscopy (ELDS).  ELDS is an advanced, highly robust form of Tuneable Diode Laser Absorption Spectroscopy (TDLAS) that was specifically developed for use in the most demanding safety applications.  ELDS gas detectors are capable of detecting even small fugitive leaks with absolute dependability and zero false alarms. This technology is the quickest, most sensitive and most reliable means of detecting gas leaks or releases that has ever been available.

     

     

    ELDS open-path gas detectors utilize laser diodes that operate at wavelengths specially selected for the target gas(es), to generate a beam that is sent from a transmitter along a path to a receiver.  The receiver performs real-time signal analysis utilizing Fourier Transform methods in order to distinguish between the monitored target gas(es) and other molecules present in the path.

     

     

    Any target gas present in the beam causes a change in the received signal (called a harmonic fingerprint) which is specific to the target gas and proportional to the quantity of gas in the beam path.  The receiver then provides multiple analogue outputs of the gas burden detected in the path.
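The harmonic analysis described above can be sketched numerically. The sketch below is illustrative only: the modulation frequency, sampling rate, and the idea that the second-harmonic amplitude scales with the gas burden are generic assumptions about wavelength-modulation detection, standing in for Senscient’s proprietary ELDS processing.

```python
import numpy as np

def harmonic_amplitude(signal, fs, f_mod, harmonic=2):
    """Estimate the amplitude of the n-th harmonic of the modulation
    frequency in `signal` (sampled at `fs` Hz) via an FFT, using a
    Hann window with amplitude correction for an on-bin tone."""
    n = len(signal)
    window = np.hanning(n)
    spectrum = np.fft.rfft(signal * window)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    idx = np.argmin(np.abs(freqs - harmonic * f_mod))
    return 2.0 * np.abs(spectrum[idx]) / window.sum()

# Toy received signal: the laser modulation tone plus a second-harmonic
# component whose amplitude stands in for the gas burden in the beam.
fs, f_mod = 50_000, 1_000
t = np.arange(0, 0.1, 1.0 / fs)
gas_burden = 0.05
rx = np.sin(2 * np.pi * f_mod * t) + gas_burden * np.sin(2 * np.pi * 2 * f_mod * t)
print(harmonic_amplitude(rx, fs, f_mod))  # recovers ~0.05
```

With no target gas in the beam, the 2f component (and hence the estimate) drops to near zero, which is what makes the harmonic fingerprint usable as a quantitative gas-burden output.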

     

     

    The bottom line is that open-path laser technology provides an additional layer of protection and vastly improved early warning for an industry that is seeking better ways to manage the high levels of risk associated with highly toxic and flammable gases.

    Jason Schexnayder Senscient jschexnayder@senscient.com
    5. Modifying Protection Method from Non-incendive to Intrinsically Safe Installation

    Process control equipment and instruments at classified locations in the oil and gas industries are required to be certified based on their electrical hazardous locations.  Complying with the certification requirements will ensure the safety of the workers and processing facilities by eliminating the chances of equipment becoming ignition sources.

     

    Protection technique type ‘n’ limited energy (Ex nL) was withdrawn and replaced with the intrinsically safe (IS) type ‘Ex ic’ in NFPA 70 (National Electrical Code, 2011 edition) and International Electrotechnical Commission (IEC) standards. Under these circumstances, no Ex nL-certified equipment is expected to be available in the future. Since FOUNDATION™ Fieldbus (FF) installations in Saudi Aramco are based on Ex nL, this requires full compliance with, and understanding of, the newly added requirements in IEC 60079-11 and IEC 60079-14. Although the basic intent of both techniques is quite similar, they still differ, especially in marking, installation, specifications and control-drawing requirements. This paper will discuss the essential differences between the two protection methods and how to truly achieve full compliance with the recent IEC standard additions.

    Hamad Balhareth Saudi Aramco hamad.balhareth.3@aramco.com
    6. "It Takes a Village": Maintaining APC Effectiveness

    In early 2012 our refinery began a program to renew our APC (Advanced Process Control) applications, which had fallen into disuse. In the three years since, we have successfully implemented several new APC applications and rehabilitated existing ones.

     

    This paper explains how we have achieved high APC utilization rates and stakeholder acceptance by involving the stakeholders in designing, implementing, stewarding, and improving the APC applications. This participation was fostered by developing and implementing tools, ongoing stakeholder training, and reporting for key members of the Operations, Technical, Planning & Economics and Maintenance departments. This approach is APC and DCS vendor-independent and can be used by any manufacturing facility that uses APC.


    Randy Conley TOTAL Petrochemicals & Refining USA Inc randy.conley@total.com
    7. A holistic view on preventing one of the largest process industry risks: tank overfills

    This presentation will discuss the risks associated with tank overfills and the potential benefits of better overfill prevention. Lessons learned from previous accidents will be presented and, more importantly, a holistic approach customized for overfill prevention – based on the new book ‘Engineer’s Guide to Overfill Prevention’ and the IEC 61511 safety lifecycle – will be presented, covering:

     

    • Regulatory requirements
    • Industry standards
    • How do you prevent an overfill from occurring?
    • Best practices and RAGAGEP
    • Management systems
    • Tank overfill risk gap assessment
    • Risk assessment
    • Overfill prevention systems
    • Selecting equipment
    • Commissioning
    • Site Acceptance Test
    • Proof-testing

     

     

    Additionally, the possibility of adding an independent mitigation layer through the use of measurement in the secondary containment will be briefly discussed.

    Carl-Johan Roos Emerson carl-johan.roos@emerson.com
    8. Proof-Testing Level Gauges remotely from the Operator Room: A realistic dream or not?

    Device verification, or simply proof-testing as some people prefer to call it, is nothing new. The purpose of the test procedure is of course to ensure that the equipment functions correctly within the intended operating range. This appears to be a very basic and reasonable requirement that must have existed ever since the first piece of equipment was delivered to the process industry. Why, then, does this procedure still cause so many problems today?

     Practical industry experience shows that proof-testing of level measurement devices is especially problematic, in particular for point-level sensors used for the HiHi alarm in overfill prevention systems. The proof-testing procedures currently proposed by many device manufacturers are often extremely time-consuming and can cause operational disturbances or, even worse, critical safety incidents. The underlying reason is often that the procedures include requirements that force personnel to climb the tanks and/or alter the actual level. One of the more dangerous industry practices, which fortunately is becoming less common, is to move the level to the configured alarm point, typically the HiHi alarm. This practice is very dangerous and is discouraged by, for example, API 2350, the new standard for overfill prevention for bulk liquid storage tanks.

     Given this background, this session will first provide an introduction to proof-testing, covering the basics: why it is done, its purpose, what devices need testing, test frequency, implications of IEC 61508/11, diagnostics vs. proof-testing, coverage factors, etc. The session will then apply this generic information specifically to level measurement devices. The key focus will be level devices used for overfill prevention, by far the most common use case for level gauges in need of proof-testing. Level sensors used for underfill (LoLo alarm) will also be covered, but not to the same extent as overfill prevention devices. Topics covered during this session will include:

     • the history of proof-testing level devices and how it still affects the industry today
     • how proof-testing level devices differs from proof-testing other devices, e.g. pressure transmitters
     • popular present-day proof-testing methods for level devices, and their associated advantages and disadvantages
     • what different proof-test procedures actually test, and who is responsible for what with respect to the entire safety function and the process itself
     • why the industry is moving from point to continuous measurements in safety-critical applications as well, and how this applies to level sensors in HiHi and LoLo applications
     • requirements for level proof-testing according to the new overfill prevention standard, API 2350
     • a generic model for proof-testing radar level gauges and a comparison with point-level devices
     • recent technology advancements and how they will affect tomorrow’s proof-testing procedures, making it possible to perform IEC 61508/11-compliant, high-quality proof tests directly from the control room
     • the possibility of performing partial proof-testing for level measurement devices as well

     Wherever possible, the session will discuss these subjects from both a theoretical view, backed by standards such as IEC 61508/11 and API 2350, and a more practical, hands-on view based on practices actually used in real-world applications today.
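The coverage factors and test intervals mentioned above interact through the simplified single-channel PFDavg formula commonly used in IEC 61508-style calculations. The sketch below uses that generic formula with made-up example values (failure rate, coverage, mission time) – none of the numbers come from the presentation itself.

```python
def pfd_avg(lambda_du, test_interval_h, coverage, lifetime_h):
    """Simplified average probability of failure on demand for a single
    (1oo1) sensor: dangerous undetected failures that the proof test
    reveals accumulate over the test interval; the remainder can only
    be found over the full mission time."""
    covered = coverage * lambda_du * test_interval_h / 2
    uncovered = (1 - coverage) * lambda_du * lifetime_h / 2
    return covered + uncovered

# Example: lambda_DU = 5e-7 per hour, annual proof test with 70 %
# coverage, 15-year mission time.
print(pfd_avg(5e-7, 8760, 0.70, 15 * 8760))   # ~1.1e-2
# A perfect test (coverage = 1.0) would give lambda_DU * TI / 2 ~ 2.2e-3.
```

The example shows why coverage matters: the same device, tested annually, lands an order of magnitude worse in PFDavg when only 70 % of its dangerous undetected failures are exercised by the procedure.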

    Carl-Johan Roos Emerson carl-johan.roos@emerson.com
    9. Performance-Based Gas Detection System Design using Computational Fluid Dynamics (CFD) modeling of Gas Dispersion

    The process industries are gradually adopting performance-based design techniques for fire and gas detection systems. This is increasingly important after recent major fire and explosion events caused major loss of life and asset damage. At the same time, however, industry continues to lay out gas detection using uniform-spacing methods that date back more than 25 years. Even when applying performance-based design techniques, the ‘gas volume detection’ principle is valid for detecting and mitigating a large explosion event; however, this same method of designing and assessing detector coverage can result in a large number of detectors and a costly design.

     

    This paper challenges the need for volume detection by using sophisticated modeling of gas release scenarios with computational fluid dynamics (CFD) for hazard modeling and risk analysis. These techniques align with the ISA TR84.00.07 guidelines for combustible gas detection on a scenario-by-scenario basis, breaking the paradigm that CFD can only be used sparingly on complicated gas dispersion problems. The paper will demonstrate the practical application of CFD modeling and probabilistic risk analysis to improve the performance of gas detection systems while reducing unnecessary conservatism in the number of detectors. It will present a methodology for determining scenario-based gas detector coverage and include a worked application example for an offshore gas processing platform.
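The scenario-based coverage idea can be reduced to a toy calculation: for each release scenario, check whether any detector would see the gas cloud, and weight the result by scenario frequency. The coordinates, frequencies, and the circular-cloud test below are invented for illustration; a real study would substitute CFD-computed cloud footprints per ISA TR84.00.07.

```python
import math

def scenario_coverage(scenarios, detectors, cloud_radius):
    """Fraction of release scenarios (weighted by frequency) seen by at
    least one detector. Each scenario is (x, y, frequency); a scenario
    counts as detected if any detector lies inside its gas cloud."""
    total = sum(f for _, _, f in scenarios)
    detected = sum(
        f for x, y, f in scenarios
        if any(math.hypot(x - dx, y - dy) <= cloud_radius
               for dx, dy in detectors)
    )
    return detected / total

scenarios = [(0, 0, 0.5), (10, 0, 0.3), (25, 25, 0.2)]  # leak point + frequency
detectors = [(2, 1), (12, 2)]
print(scenario_coverage(scenarios, detectors, cloud_radius=5.0))  # 0.8
```

The detector set above misses only the low-frequency remote scenario, which is exactly the kind of trade-off a scenario-weighted coverage metric makes visible, as opposed to uniform-spacing rules.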

    Kevin  Mitchell Kenexis kevin.mitchell@kenexis.com
    10. Towards Plant Instrumentation Safety Instrumented Function (SIF) Asset Mgmt Excellence

    The reliability of plant instrumentation, in particular Safety Instrumented Functions (SIF), at PP(T)SB (PETRONAS Penapisan (Terengganu) Sdn Bhd), a refinery situated on the east coast of West Malaysia, is under threat. The challenge is the ability of the existing SIF instrumentation to meet the Safety Integrity Levels (SIL) calculated and reported in the SIF study conducted from 2008 to 2010, taking into account the aging plant, an inherently non-robust plant design, and a lean maintenance organization amid unfavourable oil prices. One focus area is an innovative implementation strategy for the SIF report recommendations. As a brownfield plant, PP(T)SB lacks spare cables to install the additional initiators needed to meet the required SIL 2 and SIL 3. The allocated CAPEX also makes it challenging to implement the 2oo3 initiator/transmitter configuration needed to meet SIL 3. To that end, PP(T)SB has adopted an unorthodox 1oo2 voting scheme with a one-hour Automatic Sensor Override (ASO) for its transmitters, instead of the conventional 2oo3 voting, for about 80 safeguarding tags, in compliance with the IEC standard. The project was completed in 2011 for PC Aromatics and in 2012 for the refinery. As part of continuous improvement, the 1oo2-with-ASO logic was further modified in 2012 to ensure all transmitter failure modes are captured and sent to the DCS for further operator action. In 2014, we identified 27 tags with the 1oo2-with-ASO configuration that have control or indicating transmitters and can be further upgraded to 2oo3 voting. The cost of conversion is minimal, since the indicating or control transmitter is used as the third safeguarding transmitter to form the 2oo3 vote.
    For the shutdown valves, the strategy is to have procedures and a philosophy in place to ensure good maintenance, beginning with a complete shutdown-valve database that serves as a central repository for critical information on all shutdown valve assets – datasheets, photos, bills of materials, construction drawings, etc. With this systematic shutdown-valve philosophy implemented, potential dangerous failures of the shutdown valves are kept to a minimum.
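The voting schemes the paper contrasts can be sketched as boolean logic. This is a generic illustration of 2oo3 and 1oo2-with-override voting, not PP(T)SB’s actual safety logic; in particular, the fail-safe trip when both channels are overridden is an assumption of the sketch.

```python
def vote_2oo3(trips):
    """Trip when at least two of the three transmitters demand it."""
    return sum(trips) >= 2

def vote_1oo2_aso(trips, overridden):
    """1oo2 voting with Automatic Sensor Override: an overridden (e.g.
    failed) transmitter is removed from the vote, degrading to 1oo1;
    any remaining trip demand trips the function."""
    active = [t for t, o in zip(trips, overridden) if not o]
    if not active:      # both channels overridden: assume fail-safe trip
        return True
    return any(active)

print(vote_2oo3([True, True, False]))               # True: 2 of 3 demand trip
print(vote_1oo2_aso([False, True], [True, False]))  # True: surviving channel trips
```

The 2oo3 scheme tolerates one spurious channel without tripping; the 1oo2-with-ASO scheme trades some of that spurious-trip immunity for a design that needs one fewer transmitter and cable run.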

    SHARUL A-RASHID PETRONAS PENAPISAN (T) SDN BHD sharul@petronas.com.my
    11. Safety Life Cycle Management: Evolution of HIPS at Saudi Aramco

    Safety Life Cycle Management as defined by IEC 61511 Functional Safety - Safety Instrumented Systems in the Process Industry Sector starts at the initial hazard assessment and steps through design, installation, operation and finally decommissioning with continual management, auditing, planning, and verification across all steps.

     

    High Integrity Protection Systems (HIPS), also referred to as High Integrity Pressure Protection Systems (HIPPS), are a specific type of Safety Instrumented System (SIS) designed to prevent over-pressurization of process equipment.

     

    Saudi Aramco has used HIPS increasingly for over 20 years, largely in wellhead applications. Over this period, Safety Life Cycle Management has evolved with an increased understanding of HIPS and its lifecycle requirements. What began as a single individual following an informal process has developed into a formal process with procedures, specifications, a tracking system, and dedicated staff. The paper/presentation will discuss the Safety Life Cycle Management of HIPS, how it has evolved from past to present, and predictions for the future.

    Chan Miller Saudi Aramco Chan.Miller@aramco.com
    12. Optimizing your process through lignin management

    Removing lignin is a very necessary but costly part of the pulping process. Therefore, having state-of-the-art measurement and control capabilities is extremely important in maximizing business performance and final pulp quality. It has been a constant challenge to accurately and reliably measure lignin from the blow line through the bleach plant. The industry has relied on laboratory measurements and/or slower multi-point kappa analyzers to measure fiber kappa. An oftentimes equally important but neglected component is the dissolved (or filtrate) lignin moving through the process, which also can vary widely and consume a significant portion of bleaching chemicals. Until recently there have not been direct measurements of dissolved lignin and total bleach load. Operators have relied on surrogate values such as conductivity or chemical residual to account for black lignin carryover. Inaccurate measurement of dissolved lignin can lead to overuse of bleaching chemicals and increased costs to the operation. With these critical measurements being problematic, new innovative technologies have been developed to enhance the ability to optimize the pulping process and deliver significant sustainable gains in business performance for the industry. This paper will discuss the recent improvements in measuring fiber and filtrate lignin on-line and their impact on fiber line and bleach plant controls.

    Brad Carlberg Brad Carlberg brad.carlberg@bsc-engineering.com
    13. Performing an IACS Cyber Risk Assessment per ISA 62443

    Assessing cybersecurity risk is generally considered one of the first and most fundamental steps in any solid IACS cybersecurity management program. ISA 99.02.01 (now ISA 62443-2-1), published in 2009, includes requirements that organizations perform both high-level and detailed cybersecurity risk assessments on all identified IACSs. These requirements were reinforced in 2014 by the NIST Cybersecurity Framework, which also specifies cybersecurity risk assessments and directly references the ISA 62443 requirements. While both of these documents require risk assessments, neither provides information on how to perform such an assessment. Guidance on how to perform IACS cybersecurity risk assessments is now available in the recently developed ISA 62443-3-2, "Security Risk Assessment and System Design".

     

    This presentation will provide an overview of the 62443-3-2 standard and demonstrate the IACS cybersecurity risk assessment process through an example.  
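As a rough illustration of the detailed risk-assessment step, the sketch below scores per-zone threat scenarios on a likelihood × consequence matrix and flags those above a tolerable threshold. The matrix values, threshold, and scenario names are hypothetical and are not taken from ISA/IEC 62443-3-2 itself, which leaves the scoring scales to the organization.

```python
# Hypothetical 5x5 likelihood x consequence matrix (1 = lowest).
RISK = [[likelihood * consequence for consequence in range(1, 6)]
        for likelihood in range(1, 6)]

def assess(zone_scenarios, tolerable=8):
    """Return (scenario, risk) pairs whose unmitigated risk exceeds the
    tolerable threshold, i.e. those requiring countermeasures to reach
    the zone's target security level."""
    return [(name, RISK[l - 1][c - 1])
            for name, l, c in zone_scenarios
            if RISK[l - 1][c - 1] > tolerable]

# Example zone: (scenario, likelihood 1-5, consequence 1-5)
zone = [("HMI malware", 4, 3),
        ("USB in engineering station", 2, 3),
        ("remote access abuse", 3, 4)]
print(assess(zone))  # [('HMI malware', 12), ('remote access abuse', 12)]
```

The point of the exercise is the gap analysis: scenarios above the threshold drive countermeasure selection for the zone, while those below it document accepted risk.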

    John Cusimano aeSolutions john.cusimano@aesolns.com
    14. The Importance of & Complexities Associated with Selecting the Right Thermowell Material

    The material of construction is one of the first considerations in the proper selection of thermowells. In choosing the right material of construction, special attention needs to be given to the chemical compatibility with the process media (corrosion environment), pressure and temperature limits, and compatibility with the process piping or vessel material. Unfortunately, these factors are not all accounted for during engineering or design phases. Instead, thermowells with Stainless Steel (SS) material are generally selected because of their suitability in a wide range of process conditions. For instance, in natural gas and oil pipeline systems, the thermowell material chosen for installation is mainly affected by the corrosion conditions which the thermowell is exposed to. It follows that Stainless Steel, having good resistance to corrosion and chemical attack, is the preferred option.

     

    On the other hand, there are numerous instances in which defaulting to Stainless Steel is less than ideal. In welded thermowells, where pipelines are usually Carbon Steel (CS), a Stainless Steel thermowell might initially seem suitable for installation in a Carbon Steel pipeline (weldolet fitting). However, per “API STANDARD 1104 – Welding of Pipelines and Related Facilities”, such an installation requires additional welding procedures and welder performance qualifications, incurring considerably more cost than a decision to use a Carbon Steel thermowell. Similarly, in threaded installations, Stainless Steel thermowells inserted in Stainless Steel threadolet fittings can experience thread galling, where the interface metal high points fuse or lock together as the protective oxide coating wears away during tightening.

     

    This paper will provide a detailed analysis of the different factors that must be considered when selecting a suitable thermowell material, and highlight why having Stainless Steel (SS) as a “one thermowell material type fits all purposes” is not the best approach. 

    Avwerosuoghene Omughelli Gulf Interstate Engineering kerryomughelli@yahoo.com
    15. Safe Integration: Secure Industrial to Business Network architectures for the Networked Enterprise

    With new industrial network standards like ISA99 and NERC CIP, companies are being forced to re-examine network connectivity to industrial infrastructure. This presentation will cover industrial networking best practices, secure architectures, and segregation techniques that can be used by any business to prevent a minor business-network breach from becoming an industrial catastrophe.

    Jeff Melrose Yokogawa jeff.melrose@us.yokogawa.com
    16. Ethernet backbone improves design, implementation, and lifecycle management of safety systems

    Bringing Ethernet to the control system as a replacement for the control system rack enables the connectivity and integration required in the age of the Industrial Internet.

     

    Ethernet communications has become pervasive in today’s technological environment. Data from Distributed Control Systems (DCS), Safety Instrumented Systems (SIS), and Programmable Logic Controllers (PLC) are routinely gathered together, transmitted between islands of automation, and served to upper level Manufacturing Execution Systems (MES). Now this flexible, high-speed, low-cost communications layer is being introduced at the lowest level of control system architecture — the rack.

     

    Rack-based systems of various shapes, sizes, and descriptions have been the norm in the DCS and SIS environment.  We can now examine the benefits of the Ethernet-based system with regards to system design phase, operations and maintenance, and overall lifecycle impacts.

     

    This paper (based on a GE Intelligent Platforms whitepaper) explores how Ethernet as a control system backplane provides a number of design-phase benefits. The flexibility to add and remove I/O as required for the combination of Safety Instrumented Functions (SIFs) in an SIS reduces costs, improves distributability, and simplifies the overall control system architecture.

    Max Erwin GE Intelligent Platforms jack.faett@ge.com
    17. Applying ISA/IEC 62443-3-2 for Assessing Cybersecurity Risks of Drilling Assets (including Existing and New Drilling Rigs)

    There is increasing focus on cybersecurity concerns in the oil and gas industry, including for drilling assets and Drilling Control Systems (DCS). DCS software can be complex, and may control highly automated systems or simple individual pieces of equipment. DCSs use sophisticated process steps whose execution timing must be precise to milliseconds, and sometimes microseconds, when operating drilling equipment. This poses significant challenges for any cybersecurity protection applied to these systems, as it must not affect these real-time, highly critical timing aspects.

     

    Cyber attacks on DCS have the potential to cause significant damage to the environment, injuries, and loss of life. However, the drilling process cannot simply be terminated when a cyber attack is detected; in offshore or fragile environments, the results could be disastrous. There may even be situations where responding to a cyber attack produces a worse outcome than the attack itself. It is therefore critical to identify, assess, and manage risks in an efficient and accountable manner, developing a balanced cybersecurity strategy that enables efficient and controlled response actions. This starts with understanding the cybersecurity risks to the Drilling Assets.

     

    According to U.S. and international standards, a cybersecurity strategy should include: (i) protection strategies, (ii) monitoring, audit, and detection capabilities, (iii) incident response and disaster recovery, and (iv) risk management and assessment. The NIST Cybersecurity Framework refers to these as: (a) identify, (b) protect, (c) detect, (d) respond, and (e) recover. Managing risk is considered the core of a cybersecurity strategy, as described in the NIST Cybersecurity Framework for the U.S. and in the international standard ISO/IEC 21827 (Information technology – Security techniques – Systems Security Engineering – Capability Maturity Model). These two framework standards should be considered the foundation of a cybersecurity strategy for protecting Drilling Assets.

     

    Of the existing cybersecurity standards relevant for assessing risks of drilling assets, the International Association of Drilling Contractors (IADC) has identified “ISA/IEC 62443-3-2: Security Assurance Levels for Zones and Conduits” as the most suitable. ISA/IEC 62443-3-2 is tailored for automation and control systems, although not drilling specific, and provides a prescriptive approach to: (1) identifying the critical systems, subsystems, and components in a control and automation system; (2) defining target security levels for those critical systems; and (3) assessing the risks with respect to the target security levels to identify gaps and allocate appropriate countermeasures. The standard provides a step-by-step approach that should be adapted to the specific needs of each company.
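
    The three-step approach above lends itself to a simple gap analysis over zones: compare each zone's target security level (SL-T) with the level achieved by current countermeasures (SL-A) and flag the shortfalls. The sketch below is a minimal illustration only, with hypothetical zone names and levels; an actual ISA/IEC 62443-3-2 assessment is considerably more detailed.

```python
# Minimal sketch of a zone-level security gap analysis in the spirit of
# ISA/IEC 62443-3-2. Zone names and security levels are hypothetical.
from dataclasses import dataclass

@dataclass
class Zone:
    name: str         # e.g. a segment of the drilling control network
    sl_target: int    # SL-T: target security level (1-4) set for the zone
    sl_achieved: int  # SL-A: level achieved by the current countermeasures

def find_gaps(zones):
    """Return the zones whose achieved level falls short of the target."""
    return [z for z in zones if z.sl_achieved < z.sl_target]

zones = [
    Zone("Drilling control network", sl_target=3, sl_achieved=2),
    Zone("Driller's workstation",    sl_target=2, sl_achieved=2),
    Zone("Historian DMZ",            sl_target=2, sl_achieved=1),
]

for z in find_gaps(zones):
    print(f"{z.name}: SL-A {z.sl_achieved} < SL-T {z.sl_target}, countermeasures needed")
```

    Step (3) of the standard then allocates countermeasures to close each reported gap.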

     

    The presentation will demonstrate how to use ISA/IEC 62443-3-2 to assess cybersecurity risk of new and existing drilling rigs. The presentation will also show how to adapt ISA/IEC 62443-3-2 to various types of drilling assets, such as land rigs, jackups and drillships, and discuss the time and resource needs for carrying out a risk assessment using ISA/IEC 62443-3-2 for the various types of drilling assets.

    Siv Hilde Houmb Secure-NOK AS sivhoumb@securenok.com
    18. Change your Board Operator to a Process Manager with State-Based Control

    Digital control systems can deliver a great deal of information to operations very quickly. At current staffing levels for day-to-day operations, abnormal situations can cause information overload beyond what can reasonably be responded to in the appropriate amount of time. State-based control maximizes the digital control system’s ability to detect and convey abnormal situations through dynamic alarm, or state-based, management. Because what is normal and abnormal changes with the state of the facility, not all possible alarms are relevant to operations in every state. Properly implemented state-based control gives operators relevant alarms, in the proper priority, for the actions required in the situation at hand.

     

    State-based control goes beyond dynamic alarm management, which informs operations of the actions required to prevent undesirable consequences. State-based control manages the outputs of the process according to the operating state of the process. States include startup, shutdown, and emergency situations, as well as grade and product changes. States can also manage degraded operation, optimizing performance around equipment failures and process upsets to minimize their impact and return the process to optimal performance as quickly as possible.
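
    The alarm-filtering side of this idea can be sketched in a few lines: the set of alarms presented to the operator depends on the current operating state. The state names, alarm tags, and mappings below are hypothetical.

```python
# Sketch of state-based alarm filtering: only the alarms relevant to the
# current operating state reach the operator. All names are hypothetical.
ALARMS_BY_STATE = {
    "startup":  {"high_reactor_temp"},                  # low feed flow is normal at startup
    "running":  {"low_feed_flow", "high_reactor_temp"},
    "shutdown": set(),                                  # both conditions expected here
}

def active_alarms(state, raised):
    """Return only the raised alarm tags that matter in the given state."""
    return sorted(raised & ALARMS_BY_STATE.get(state, set()))

# The same raised conditions yield different operator alarms per state:
raised = {"low_feed_flow", "high_reactor_temp"}
print(active_alarms("startup", raised))
print(active_alarms("running", raised))
```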

     

    Implementation of this strategy changes the classic board operator into a process manager who approves steps in an automated procedure. This greatly reduces the training required for competence and allows every shift to operate exactly the same way. The operator becomes a more efficient resource, performing the tasks that require human intervention in prioritized order. The result is bottom-line improvements in safety, quality, and production. Knowledge capture is effectively embedded in the control system so that the best operating knowledge is always on the board.



    Katherine Persac Prosys marketing@prosys.com
    19. Cyber Security ICS - Monitoring, Policies, and Procedures

    Today’s Industrial Control Systems (ICS) are more advanced and complicated than ever before. No longer are these systems truly isolated. With the advancement of automation technologies and the introduction of Ethernet and the Transmission Control Protocol (TCP), Programmable Logic Controllers (PLC) and Distributed Control Systems (DCS), to name a few, are becoming more vulnerable to cyber attacks.

     

    The data derived from an ICS is proving itself to be invaluable to board members in decision-making to advance their company’s profits and gain a foothold in an increasingly competitive marketplace. The proliferation of data acquisition systems like Historian Servers accessing the information highway via Wide Area Network (WAN) connections increases the risks to the control network. Cyber security attacks are becoming more prevalent in the industry of automation as a whole.

     

    This presentation will describe the tools and methodologies required to mitigate cyber attacks and lay the foundation for an in-depth approach to defense techniques enhancing an industrial organization’s overall security posture.

    Daniel Crandell Enterprise Products drcrandell@eprod.com
    20. Introduction to Time-in-State

    Time-in-State™ is designed to monitor each sub-unit of a continuous process and provide guidance at the operational level, in real time, to keep the process in its Optimum Operating Envelope (OOE), or ideal state. The OOE is defined in workshops in collaboration with the process team and becomes an internal benchmark for each sub-unit in the value chain.
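
    As a rough sketch of the underlying idea, consider limits per monitored variable that define the OOE, and a score that is the fraction of time the sub-unit spends inside them. The variables, limits, and samples below are hypothetical, not taken from the actual methodology.

```python
# Illustrative Time-in-State style calculation: the fraction of samples in
# which a sub-unit stays inside its Optimum Operating Envelope (OOE).
OOE = {"temperature": (180.0, 195.0), "pressure": (4.0, 5.5)}  # (low, high) limits

def in_state(sample):
    """True if every monitored variable lies within its OOE limits."""
    return all(lo <= sample[var] <= hi for var, (lo, hi) in OOE.items())

def time_in_state(samples):
    """Fraction of samples inside the envelope."""
    return sum(in_state(s) for s in samples) / len(samples)

samples = [
    {"temperature": 188.0, "pressure": 4.8},  # in state
    {"temperature": 201.0, "pressure": 4.9},  # temperature outside envelope
    {"temperature": 190.0, "pressure": 5.2},  # in state
    {"temperature": 185.0, "pressure": 3.2},  # pressure outside envelope
]
print(f"Time-in-State: {time_in_state(samples):.0%}")
```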

     

    Using the Time-in-State™ methodology enables and facilitates the following benefits:

    • Increased asset utilization (lower energy and cost; higher throughput, stability, and reliability)

    • Production process management

    • Makes process conditions visible, improving operational decision-making

    • Excellent alignment and training of personnel

    • Reduction in operational risk exposure

    • Provides a platform for continuous improvement

    Gerhard Greeff Bytes Universal Systems gerhard.greeff@bytes.co.za
    21. Case Study: Applying Time-in-State

    This case study will open with a short video containing feedback and comments from clients. The presentation will provide an overview of the Time-in-State™ project, the implementation process, continuous improvement, and the value the methodology delivers to the operation.

    Gerhard Greeff Bytes Universal Systems gerhard.greeff@bytes.co.za
    22. Time-in-State Panel Discussion

    Discussion of the practical application and use of the Time-in-State™ methodology to address energy efficiency, process management, and equipment monitoring, and of positioning Time-in-State™ in the context of OEE (Mining and Metals, Pulp & Paper, Oil & Gas, Chemicals, and Food and Beverage).

    Gerhard Greeff Bytes Universal Systems gerhard.greeff@bytes.co.za
    23. KPI Lifecycle for Process Control

    Continuous improvement of production-level process control can be enabled by efficient and effective use of Key Performance Indicators (KPIs). We introduce the KPI lifecycle, consisting of KPI definition, set composition, implementation, and assessment. This talk introduces new work contributing to the KPI lifecycle: 1) structured, verified, and standard methods for each element of the KPI lifecycle, and 2) the relationships and dependencies between KPIs and their importance to optimized process control.

    John Horst National Institute of Standards and Technology john.horst@nist.gov
    24. Methods and Tools for KPI Assessment

    An important element of the KPI lifecycle is for target process stakeholders to make periodic assessments of the current effectiveness of the KPIs applied to the target process. These assessments are particularly called for when any non-trivial change is made to the target process. We will present a KPI Assessment Method which integrates leading research for accurate human elicitation and decision making, while keeping the conduct of the method accessible to any factory worker. The method provides a prioritized list of detailed actions for stakeholders to improve process control efficiently.


    John Horst National Institute of Standards and Technology john.horst@nist.gov
    25. Field Tests of KPI Effectiveness Methods

    The KPI Set Composition and KPI Assessment Methods have been implemented in several real plants for particular processes. The target process at two KPI events was the Environment, Health & Safety (EH&S) process. The live KPI events already conducted are helping to improve factory operations; however, the events also verify the effectiveness of the methods themselves. This builds confidence that the methods will apply successfully to any target process in any industry. The details of the challenges, lessons learned, and improvement results will be presented.

    John Horst National Institute of Standards and Technology john.horst@nist.gov
    26. A Software Platform to Enable Smart Manufacturing

    This presentation will introduce concepts related to the Industrial Internet of Things (IIoT) and how it can be harnessed by manufacturers to create business value.  The presentation will also describe in practical terms the attributes of a software platform that enables new levels of communication, visibility, accuracy, and flexibility in manufacturing.

    Brad Williams ThingWorx brad.williams@thingworx.com
    27. A KPI Standard to Improve Process Performance

    KPIs are becoming ubiquitous in manufacturing production, with some manufacturers claiming that "KPIs are everything." However, it is known that KPIs are not being defined, selected, and used optimally. Precisely defining a set of commonly used KPIs, and defining the basic principles of KPI selection, implementation, and assessment, would help in the pursuit of optimal performance. A new standard (ISO 22400) exists for this purpose. Some parts of ISO 22400 are complete, with other parts under development. The content and usefulness of all parts of ISO 22400 will be described, with an eye to helping both production managers and workers improve their production system performance.
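
    As a worked example of one KPI commonly defined in this space, Overall Equipment Effectiveness (OEE) is the product of availability, effectiveness (performance), and quality rate. The figures below are illustrative only, not drawn from the standard.

```python
# Worked OEE example: availability x effectiveness x quality rate.
# All input figures are illustrative.
def oee(actual_production_time, planned_busy_time,
        produced_quantity, theoretical_quantity, good_quantity):
    availability = actual_production_time / planned_busy_time
    effectiveness = produced_quantity / theoretical_quantity
    quality = good_quantity / produced_quantity
    return availability * effectiveness * quality

# An 8-hour shift with 1 hour of downtime, 420 parts produced against a
# theoretical 500, of which 399 were good:
value = oee(7.0, 8.0, 420, 500, 399)
print(f"OEE = {value:.1%}")  # 0.875 * 0.84 * 0.95
```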

    John Horst National Institute of Standards and Technology john.horst@nist.gov
    28. KPIML - Defining and Exchanging Key Performance Indicators

    Having KPIs is fine, but if you can’t reliably get them from the generating system to where they are needed, they quickly lose their value. KPIML is an XML format for exchanging KPI definitions and KPI values in a vendor- and technology-independent format. Building on the widely used MESA B2MML, KPIML offers vendors and end users a method for exchanging KPIs and for storing KPIs in appropriate data warehouses. The MESA KPIML definitions are open-source, royalty-free definitions. This session, presented by one of the authors, will explain the features and uses of KPIML.
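
    To make the exchange idea concrete, the sketch below serializes a single KPI value as vendor-neutral XML. The element names are illustrative only and do not reproduce the actual KPIML schema.

```python
# Sketch of exchanging a KPI value as XML. Element names are hypothetical,
# NOT the actual KPIML schema.
import xml.etree.ElementTree as ET

def kpi_value_xml(kpi_id, value, unit, timestamp):
    """Serialize one KPI reading as a small, vendor-neutral XML document."""
    root = ET.Element("KPIValue")
    ET.SubElement(root, "ID").text = kpi_id
    ET.SubElement(root, "Value").text = str(value)
    ET.SubElement(root, "UnitOfMeasure").text = unit
    ET.SubElement(root, "Timestamp").text = timestamp
    return ET.tostring(root, encoding="unicode")

doc = kpi_value_xml("OEE", 0.698, "ratio", "2015-10-05T08:00:00Z")
print(doc)
```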

    Dennis Brandl BR&L Consulting, Inc. dnbrandl@brlconsulting.com
    29. MESA Manufacturing Operations Management Capability Maturity Model

    MESA is working with industry experts to develop a Manufacturing Operations Management Capability and Maturity Model (MOM-CMM) that will allow companies to objectively determine their capabilities in all areas of manufacturing operations: production, inventory, quality, and maintenance. The MOM-CMM will define which tasks and management activities are needed to advance from lower to higher levels of maturity and will document the advantages expected at the higher levels. The MOM-CMM is based on the well-known SEI CMMI (Software Engineering Institute Capability Maturity Model Integration), which has become the benchmark for process improvement in the software industry. The MOM-CMM is also based on the widely accepted ISA 95 standards.

    Dennis Brandl BR&L Consulting, Inc. dnbrandl@brlconsulting.com
    30. Implementation of new ‘Characterized’ I/O modules for an Operating Facility Upgrade

    This paper reviews the engineering development and implementation of new I/O technology (Characterized Modules, CHARM), for a major expansion at an existing facility.

    Once our client made the decision to implement the new CHARM technology for our project, our team had to go into high gear to make sure we developed the knowledge and work processes necessary to adapt to the new hardware. It required close coordination with our MAC, workshops at the manufacturer for managers, engineers, and designers, in-house meetings to pass on knowledge learned at the workshops, development of a design guide for use on the project, development of ICSS architecture drawings to reflect the change, and many discussions on how best to implement the new I/O. At this facility many of the process units are contained in modules (built elsewhere and sealifted to the site) with attached Local Equipment Rooms (LER) for the PCS, SIS, and other control equipment. Initially the focus was on using the CHARM I/O in the LER marshalling cabinets while keeping open the possibility of installing the new I/O in field junction boxes ‘if a good opportunity’ presented itself. However, as the project progressed through FEED, the decision was made to make the use of the new I/O in field junction boxes the default rather than the exception. This required a quick revision of drawings and documents (and mindset) to adapt to the new focus.

    This paper will review the operation of the new CHARM technology and its advantages and challenges. The paper will also discuss how the transition from FEED to early detailed design was made to take advantage of the new Characterized Modules I/O. Additionally, the paper will review the shift from the initial idea of installing the new I/O in place of conventional I/O in marshalling panels in LER buildings to installing the new I/O in the field, making full use of the new I/O to eliminate the LER marshalling, cross-wiring, I/O modules in a system cabinet, and the associated home-run cabling that traditionally runs from field junction boxes to the LERs.

    Implementation of new technology, no matter how beneficial, is a challenge at every step: from control systems engineering, to interfacing with designers and other disciplines, to developing new documentation, to interfacing with third-party vendors, and finally to training for construction and operations. Planning, training, and communication of the new technology are key to its successful implementation.

    Manuel Hernandez Fluor Corporation manuel.hernandez@fluor.com
    31. Asset Performance Management 2.0 - Goal Setting

    Asset Performance Management is no longer viewed as an initiative of the maintenance department. Like quality, it is a strategically important continuous improvement process that involves multiple stakeholders across the organization. This new outlook on Asset Performance Management (APM 2.0) requires a collaborative approach between multiple departments to achieve several goals, some of which seemingly conflict with each other.

    Ananth Seshan Fifth Generation Technologies aseshan@5gautomatika.com
    32. Leveraging Mobility in Asset Performance Management 2.0 - Case Studies

    Mobility has become a necessity in today’s industrial practice. In Asset Performance Management 2.0, mobility plays an important role in giving organizations the flexibility to empower their employees to engage in proactive asset performance management workflows from anywhere. Practical case studies of how mobility is helping organizations maximize asset performance and reduce non-value-added costs will be discussed.

    Ananth Seshan Fifth Generation Technologies aseshan@5gautomatika.com
    33. Leveraging Real Time Asset Intelligence for Asset Performance Management 2.0

    With the advent of the Industrial Internet of Things and the associated capability to perform advanced analytics on Big Data, there is an emerging ability to gain insight into the potential future behavior of equipment and its impact on the business of the organization. Armed with such “asset intelligence,” organizations can conceivably take proactive actions to gain competitive advantage.

     

    How can organizations embark on this journey? What level of maturity is seen in reality for such technology? Where does the “rubber meet the road”? What success stories have we witnessed so far? What barriers are there to implement asset intelligence?


    Ananth Seshan Fifth Generation Technologies aseshan@5gautomatika.com
    34. The Case for Change: Bridging the Gap from R&D to Manufacturing

    Driven by constant pressure to “innovate,” companies are forced to create or evolve their products frequently to meet customer or consumer preferences and needs. This constant need to change a product “definition” is hampered by the existing means of communicating those changes from the “designer” to the manufacturing plant floor. This process is anything but fast! There is a huge “gap” in understanding between R&D and manufacturing that must be bridged.

    Erik Nistad Mondelez International erik.nistad@mdlz.com
    35. Automating Recipe Transformation from R&D to the Plant Floor

    Continuously improving time-to-market and time-to-full production has become a requirement in modern manufacturing companies. This means the information discovered in R&D and pilot plant operations must be sent quickly and correctly to the manufacturing facilities, and that the facilities can quickly turn that information into manufacturing instructions.

     

    This session describes the processes and methods for an Enterprise Recipe Management (ERM) system to support NPDI (New Product Development and Introduction) processes, including information on business processes, approaches to solutions, methods for capturing site best practices, and tools used to implement ERM. Additionally, the session will cover the ability to algorithmically convert and transform the information used in NPDI processes, such as general recipes, into the executable recipes used by commercial Batch Execution Systems (BES). The transformation methods can be complex, but by creating and following corporate standards the transformation can be made reliable and repeatable, significantly reducing time to first production and improving cross-site consistency.
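
    The scaling step of such a transformation can be sketched very simply: a general recipe expressed as mass fractions becomes an executable charge list for a given batch size. This is a toy illustration; real ERM-to-BES transformations involve equipment capability, procedure logic, and much more.

```python
# Toy sketch of one recipe-transformation step: scale a general recipe
# (mass fractions) to executable charge amounts for a site batch size.
# Ingredient names and fractions are hypothetical.
GENERAL_RECIPE = {"water": 0.60, "sugar": 0.25, "flavor": 0.15}  # mass fractions

def to_site_recipe(general, batch_kg):
    """Scale fractional ingredient amounts to kilograms for one batch."""
    assert abs(sum(general.values()) - 1.0) < 1e-9, "fractions must sum to 1"
    return {ingredient: frac * batch_kg for ingredient, frac in general.items()}

print(to_site_recipe(GENERAL_RECIPE, batch_kg=500))
```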

    Erik Nistad Mondelez International erik.nistad@mdlz.com
    36. Use Case for Enterprise Recipe Management Solutions

    Automating the flow of product recipe data has significant potential to accelerate speed to market, reduce product variation, and enable flexible supply chain operations. This session will present a sample of case studies across different industries and the benefits for each. Industries discussed will include Chemical, Consumer Products, and Food and Beverage.

    Erik Nistad Mondelez International erik.nistad@mdlz.com
    37. The Future of Manufacturing unConference Session


    What’s next for manufacturing? Will our systems continue to make sense with just new technologies added, or will we be working in totally new ways? Will people still be part of the equation? Is there a role for MES/MOM? No one knows for sure. Governments and public-private partnerships around the world are working on visions for manufacturing’s future. This interactive unconference session will start with a few compelling concepts and research findings about the future, with participants bringing their hot topics to the table. So come and join the conversation to get your innovation imagination running, or to ask (or answer) pragmatic questions about how some of this big vision might actually come to fruition. Bring your experience, your skepticism, and your out-of-the-box thinking!

    Mike Yost MESA International mikeyost@mesa.org
    38. Enterprise Manufacturing Solutions: Finally the Year of MES unConference Session

    Businesses will get smarter and faster as modern Information Technologies reshape manufacturing globally.  That means your production operations are now strategic - and need to be agile, responsive and reliable to match the speed of your business.  We think that makes this the year MES has finally arrived.  Do you agree? Where does MES or MOM stand at your company? What have you seen that’s making it more of a strategic decision for your business? This interactive unconference session will start with a few compelling research findings about MES market traction and maturity, and with participants bringing their hot topics to the table.  So come and join the conversation to get yourself up to speed on where MES/MOM adoption is, or to ask (or answer) pragmatic questions about how to ensure your company rides the wave effectively. Bring your experience, your skepticism, and your stories about MES/MOM and its progress!

    Mike Yost MESA International mikeyost@mesa.org
    39. Big Data - Systems - Analytics - Readiness

    A sneak peek at the new section in the new edition of the Metrics Framework and Guidebook.

    John Jackiw Alta Via jjackiw@altavia.com
    40. Sustainability - What does that mean looking forward?

    Sustainability in itself has many meanings. How will you and your manufacturing operation use these concepts in connecting operations to the business and to your customers? In this session we will cover several topics, from business sustainability to environmental sustainability, and how they have affected, and will affect, the manufacturing floor and the MES/MOM information layers.

    John Jackiw Alta Via jjackiw@altavia.com
    41. Metrics and Industry 4.0

    The opening of the DMDII (Digital Manufacturing and Design Innovation Institute) is ushering in a new age of manufacturing. We are looking at new ways to integrate metrics, and the volume of metrics is increasing. The MOM/MES layer has been working toward this for 25 years, and it looks like the world is ready to accept it.

    John Jackiw Alta Via jjackiw@altavia.com
    42. Metrics that Matter Survey Results

    A look at the Metrics that Matter Survey Results.

    John Jackiw Alta Via jjackiw@altavia.com
    43. MESA Metrics Maturity Model - Can you be honest enough with yourself?

    MESA is releasing (it may be released before the conference) a Metrics Maturity Model that will serve as the reader’s guide to a high-level assessment of your metrics and how you need to use them. However, are you prepared to be brutally honest with yourself?

    John Jackiw Alta Via jjackiw@altavia.com
    44. Metrics that Matter unConference Session


    John Jackiw Alta Via jjackiw@altavia.com
    45. Integrating Multigenerational Automation Systems

    Submission requested by Alan Bryant, session leader for the ChemPID track.

    Presentation will be based on the article:

    https://www.isa.org/standards-and-publications/isa-publications/intech-magazine/2014/may-jun/features/system-integration-integrating-multigenerational-sutomation-systems/

    Description:  Creating automation systems that include elements from different eras and vendors can result in problems, but that approach may be your only choice.

    Chad Harper MAVERICK Technologies chad.harper@mavtechglobal.com
    46. Developments of wireless in safety

    ISA84 WG8 has been working on guidelines for installing wireless in safety applications. The primary focus of TR8 is to help design and implement wireless solutions so that they can be considered non-SIS IPLs (independent protection layers) and appropriately credited in risk analysis. TR8 covers network design, including IPL criteria, availability, commissioning, and operations/maintenance of the wireless system. The paper describes some of the work performed so far on the technical report and the path forward.

    Murtaza Gandhi BakerRisk mgandhi@bakerrisk.com
    47. What is Smart Manufacturing?

    This presentation will discuss recent technology developments and their impact on the manufacturing industry, define related terminology, and describe the business value driven by these technological innovations.

    Brad Williams ThingWorx brad.williams@thingworx.com
    48. The Internet of Things (IoT) and Manufacturing

    This presentation will discuss the impact of the Internet of Things on manufacturers both from the perspective of smart, connected products and internal operations.  Success stories will be presented.

    Brad Williams ThingWorx brad.williams@thingworx.com
    49. NIST's Cybersecurity Framework Over a Year In

    Recognizing that the national and economic security of the United States depends on the reliable functioning of critical infrastructure, the President issued Executive Order 13636, Improving Critical Infrastructure Cybersecurity, in February 2013. It directed NIST to work with stakeholders to develop a voluntary framework – based on existing standards, guidelines, and practices - for reducing cyber risks to critical infrastructure.

     

    NIST released the first version of the Framework for Improving Critical Infrastructure Cybersecurity on February 12, 2014. The Framework, created through collaboration between industry and government, consists of standards, guidelines, and practices to promote the protection of critical infrastructure. The prioritized, flexible, repeatable, and cost-effective approach of the Framework helps owners and operators of critical infrastructure to manage cybersecurity-related risk.

     

    This presentation will discuss the components of the Cybersecurity Framework; how it may be applied; and what resources are available to aid organizations.  In addition, the presentation will cover the Roadmap that details areas that are under study and the future schedule of revision. Finally, the presentation will discuss the feedback and adoption by industry.

    Suzanne Lightman NIST lights@nist.gov
    50. Toxic gas detectors for Shelter In Place (SIP)

    Many facilities with significant acute toxic hazards designate some of their buildings as toxic Shelter-in-Place locations (SIPs).  Toxic SIPs depend on reliable, timely toxic gas detection to allow the ventilation system configuration to be aligned to sheltering mode and to alert personnel that toxic emergency procedures are to be implemented.  Testing the effectiveness of SIPs has revealed that many SIPs are not very effective at preventing toxic gas infiltration, and it may be necessary to implement a fallback plan to effectively mitigate toxic risks.  One necessary aspect of an effective evacuation plan is to monitor toxic gas concentrations within the SIP.  Many SIPs have no indoor toxic gas monitoring, which means that occupants may unnecessarily evacuate the SIP into a highly lethal toxic cloud even though it is safe within the SIP.  The other extreme is also possible – that occupants perish within the SIP, even though highly reliable escape packs were available for them to use for safe evacuation.  This paper discusses the role of toxic gas monitors in support of an effective toxic SIP design.
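
    The shelter-versus-evacuate decision that indoor monitoring enables can be sketched as follows. All thresholds and readings are hypothetical, purely to illustrate why an indoor reading changes the decision.

```python
# Hypothetical sketch of the decision an indoor toxic gas monitor supports:
# compare indoor and outdoor concentrations before evacuating a SIP.
# The threshold and readings are illustrative only.
DANGER_PPM = 100.0  # illustrative indoor concentration considered dangerous

def sip_guidance(indoor_ppm, outdoor_ppm):
    """Very simplified shelter/evacuate guidance from two gas readings."""
    if indoor_ppm < DANGER_PPM <= outdoor_ppm:
        return "shelter"                     # safe inside, dangerous outside
    if indoor_ppm >= DANGER_PPM and outdoor_ppm < indoor_ppm:
        return "evacuate with escape packs"  # SIP is being infiltrated
    return "continue monitoring"

print(sip_guidance(indoor_ppm=5.0, outdoor_ppm=400.0))
print(sip_guidance(indoor_ppm=150.0, outdoor_ppm=60.0))
```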

    Murtaza Gandhi BakerRisk mgandhi@bakerrisk.com
    51. Hardening Industrial Control Systems (ICS) to Avoid a Cyber Attack

    Which of your systems are most vulnerable to a cyber attack?  With the growing number and increasing severity of ICS cyber attacks, industrial cyber security is top of mind for operators.  The stakes are high for process control industries. Documented incidents in Iran and Germany illustrate the severity of industrial attacks and have provided an impetus for implementing greater security at the process control layer.

     

    Much like the shell of an egg, current cyber security practices focus mostly on the perimeter, or physical and IT layers, of control systems.  Once through the shell, an intruder can compromise the entire control system and wreak havoc within the plant.  Guarding the perimeter is not enough.

     

    A centralized vendor-agnostic configuration management program that handles the heterogeneity of industrial control systems is critical when designing layered defenses for ICS security. Any other approach creates cyber security silos or inadequate defenses that leave a plant vulnerable to attack. Industry best practices dictate use of automation software that captures the inventory of all control assets, establishes configuration baselines and policies, manages changes, and facilitates backup and recovery.  Organizations that adopt these configuration management best practices establish a layer of security that detects unauthorized changes, making plant operations safer, more compliant, and more profitable.
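
    One building block of such a program, baseline change detection, can be sketched in a few lines: record a fingerprint of each asset's configuration and flag any asset whose current configuration no longer matches. Asset names and configuration strings below are hypothetical.

```python
# Sketch of baseline-based change detection for control assets: hash each
# configuration and report assets that drift from the recorded baseline.
# Asset names and configuration strings are hypothetical.
import hashlib

def fingerprint(config: str) -> str:
    """Stable fingerprint of a configuration snapshot."""
    return hashlib.sha256(config.encode()).hexdigest()

baseline = {
    "PLC-101": fingerprint("setpoint=150;mode=auto"),
    "HMI-01":  fingerprint("screens=12;ack_required=true"),
}

current = {
    "PLC-101": "setpoint=150;mode=auto",         # unchanged
    "HMI-01":  "screens=12;ack_required=false",  # unauthorized change
}

def detect_changes(baseline, current):
    """Assets whose current configuration differs from the baseline."""
    return [asset for asset, cfg in current.items()
            if fingerprint(cfg) != baseline.get(asset)]

print(detect_changes(baseline, current))
```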

     

    In this session, we will discuss how to create and maintain a comprehensive asset inventory and how to conduct a threat assessment to implement a layered defense architecture for a process control network.  We’ll also explore the most common vulnerabilities of perimeter-based cyber security and examine defenses that will help your organization detect improper changes.

    Hector Perez PAS, Inc. hperez@pas.com
    52. Alarm Management with High Performance HMI: The Compounding Benefits of a Unified Solution

    Traditionally, Alarm Management and High Performance HMI have been treated as two separate topics. In reality, Alarm Management is a component of High Performance HMI. A well-developed alarm management program enables operators to react effectively to abnormal situations. When High Performance HMIs are coupled with alarm management, operators are empowered to proactively monitor and interact with the process to prevent escalating situations.

     

    In this session we will discuss the methodologies for implementing effective alarm management with High Performance HMI and the added benefits that can be gained when they are implemented as a unified solution.

    Hector Perez PAS, Inc. hperez@pas.com
    53. How to Beat your Startup Date

    Startup and commissioning are always a challenge, especially in an industry heavily impacted by an aging workforce.  We will review techniques for beating your startup date and compressing your overall project schedule.  We will look at some of the pitfalls to avoid as well as some proven methods that lead to a successful execution.  Some of the techniques that we will discuss are presented in the May/June 2015 issue of InTech magazine, linked below.

     

    https://www.isa.org/intech/20150604/


     

    Tim Green MAVERICK Technologies tim.green@mavtechglobal.com
    54. Novel Design of Ubiquitous Data-Centric Automation and Control Architecture

    The development of process automation systems, including programmable logic controllers and distributed control systems, has evolved to ensure effective, cohesive control of all disparate equipment within processing facilities for improved safety, higher production rates, more efficient use of materials and energy, and consistent product quality. Their fundamental architecture has advanced from a large centralized system, with all control hardware and input/output (I/O) racks mounted in large cabinets, to highly distributed systems based on traditional hierarchical network layers comprising field control, plant control, and plant information networks.  Subsequently, this network-centric architecture has transformed into an application-centric architecture based on a client-server communication model to improve systems integration. However, from the perspective of users in the process industries, all automation solutions suffer from three major disadvantages: extensive premature capital investment to sustain the system life cycle, inherent architectural constraints that prevent full utilization of the controllers’ resources, and inefficient distribution of real-time process data for achieving maximum yield of processing facilities.

     

    The premature capital investment is due to frequent obsolescence of proprietary controllers and their associated I/O racks. The controller’s monolithic architecture includes a hardened, built-in bond between the main control module and the associated I/O racks. The required composite scan time resolution of the controller for each process application depends on the size of logic memory utilized for the control application and the required number of I/O racks. Therefore, the controller’s resource utilization is dictated by the process application rather than by the available capacity. The application-centric architecture is not effective for integrating highly interacting heterogeneous process control applications across multiple network layers to exploit processing facilities for maximum return on value.

     

    This paper presents a paradigm shift from the traditional application-centric architecture of process automation systems to an optimum ubiquitous data-centric automation and control architecture based on distributed autonomous process interface systems, virtually clustered process control servers, and real-time publish/subscribe data distribution service (DDS) middleware.

     

    Each autonomous process interface system is a standalone DDS-enabled I/O system with the hardware required to connect various field process measurement and control devices. The real-time DDS middleware is the core enabler for achieving a seamless, cross-vendor interoperable communication environment to effectively and efficiently exchange real-time process data among all heterogeneous process control applications, including sequential and regulatory control, advanced regulatory control, multivariable control, unit-based process control, and plant-wide advanced process control.

     

    Results of detailed performance analysis to evaluate the average communication latency and aggregate messaging capacity among process control applications and distributed autonomous process interface systems are promising. The overall performance results confirm the viability of the new proposal as the basis for designing an optimal and cost-effective evergreen collaborative automation platform to handle all process control applications and provide flexibility and economy of scale for maximum systems’ utilization. A conceptual reference model is developed to standardize the internal functions of the data-centric automation and control architecture and to ensure seamless portability and interoperability among all system components, and heterogeneous scalability and compatibility across multiple vendors.
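
    As a rough illustration of the data-centric publish/subscribe pattern at the core of the proposed architecture, the following Python sketch implements a toy topic-based data bus. The class name, topic string, and sample values are all hypothetical; a real deployment would use middleware conforming to the OMG DDS specification rather than anything resembling this sketch.

```python
# Toy sketch (not a real DDS implementation) of topic-based
# publish/subscribe data distribution: writers publish samples to named
# topics, and readers subscribe by topic, decoupling data producers
# from consumers. "DataBus" and the topic name are hypothetical.
from collections import defaultdict
from typing import Any, Callable

class DataBus:
    """Minimal data-centric bus: every subscriber to a topic receives
    every sample published to that topic."""
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, sample: Any) -> None:
        for callback in self._subscribers[topic]:
            callback(sample)

bus = DataBus()
received: list[float] = []
# A regulatory-control application subscribes to a process-variable topic...
bus.subscribe("PT-101/pressure", received.append)
# ...and an autonomous process interface system publishes new samples to it.
bus.publish("PT-101/pressure", 12.7)
```

    The point of the pattern is that the publisher needs no knowledge of how many applications consume the data, which is what allows heterogeneous control applications to share real-time process data without point-to-point integration.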

    Ghalib Alhashim Saudi Aramco ghalib.hashim@aramco.com
    55. An Exciting Cyber Security Program Approach

    US nuclear power plants (NPPs) are required to provide high assurance that digital computer and communication systems and networks are protected against cyber attacks [ref: Code of Federal Regulations Title 10 Section 73.54, “Protection of Digital Computer and Communication Systems and Networks”]. Included within the scope of 10 CFR 73.54 are plant digital I&C systems. Since these systems monitor and control NPPs within their design bases (protecting public health and safety), digital I&C systems must be analyzed and considered from a programmatic and systematic engineering perspective with respect to existing plant/system design, processes, and procedures when implementing a cyber security program. This approach should also reduce the work effort, cost, and time to secure NPPs against cyber attacks.

     

    The Nuclear Industry is focused on complying with 10 CFR 73.54 by Licensee commitment dates using industry-developed guidelines endorsed by the Nuclear Regulatory Commission (NRC). However, these efforts are generally more costly and require much more time than anticipated. For example, critical digital asset (CDA) security control assessments are performed as a standalone task rather than as part of an integrated program that addresses the controls and all program requirements.  This makes the work effort very resource intensive and creates a complicated program to maintain and audit. These circumstances are due partly to fluctuating and numerous NRC requirements and partly to industry’s approach to meeting them.

     

    To address cyber compliance cost and schedule issues, the authors applied the systematic engineering and project management methodology proven on their many large turnkey digital controls installations, covering the design, installation, testing, startup, and operation of varied DCS systems. These multimillion-dollar projects required a top-down, big-picture focus as well as detail-level efforts extending down to the computer bit level.  This methodology was therefore used as the basis for the program, since the cyber controls focus is plant process digital controls.

     

    The resulting cyber security program utilizes a consistent, systematic lifecycle approach to baseline, evaluate, integrate, upgrade, implement, test and maintain CDA cyber security controls; providing a high assurance that digital controls systems, business networks, communication systems, and all CDAs associated with plant safety, security, and emergency preparedness functions are adequately protected against cyber attacks. This lifecycle, risk-informed approach utilizes existing plant processes, procedures, and guidelines and translates existing industry cyber control guidelines into a systematic effort that is implemented programmatically rather than with a checklist mentality.

     

    Key program benefits include reduced work effort, reduced cost, and improved auditability. More importantly, the program assures that public health and safety are not subjected to unreasonable risks from cyber attacks.  All necessary elements and explanations are provided to successfully walk a plant through program implementation to comply with NRC requirements, utilizing any ongoing plant efforts. Finally, although the program’s current emphasis is the nuclear power industry, it is constructed such that it can be applied to hazardous process plants with minor alteration.

     

    This paper summarizes the development, application, implementation, and benefits of this Cyber Security program methodology.

     


    Meredith Allen  METCALFE PLLC Mmallen@metcalfepllc.com
    56. Effects of Wireless Sensor Network Communications on Simulated Industrial Processes


    Rick Candell NIST rick.candell@nist.gov
    57. Alarm Management -“We Rationalized Our Alarms, Now What?” 5 Tips for Better Success

    Many organizations have gone through the effort of rationalizing their alarm setup and alarm responses in order to reduce alarm counts and redesign for better alarming. However, as time passes and the process changes, the performance of the alarm system is not always managed and tracked.  This presentation will address what should happen after the rationalization process.  Another facet of alarm management is managing security and change: who has access to change, add, or delete alarms?  And finally, how do you gain buy-in from the stakeholders in the alarm management lifecycle?
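
    Tracking alarm-system performance after rationalization usually starts with simple rate metrics. The Python sketch below computes one such metric, the average alarm rate per 10-minute interval; the function name and sample timestamps are illustrative, and the benchmark values against which such a rate should be judged come from ISA-18.2, not from this sketch.

```python
# Illustrative post-rationalization KPI: average annunciated-alarm rate
# per 10-minute interval over a monitoring window. Sample data are made
# up; consult ISA-18.2 for actual alarm-performance benchmarks.
def alarms_per_10min(alarm_times_s: list[float], window_s: float) -> float:
    """alarm_times_s: alarm annunciation times (seconds) observed within
    a monitoring window of window_s seconds."""
    return len(alarm_times_s) / (window_s / 600.0)

# Nine alarms over one hour of operation -> 1.5 alarms per 10 minutes.
rate = alarms_per_10min([10, 200, 450, 900, 1300, 1800, 2400, 3000, 3500], 3600)
```

    Trending this value per operator console, and alerting when it drifts after a process change, is one concrete way to keep managing the alarm system after the rationalization project ends.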

    Michael Lyssy aeSolutions mike.lyssy@aesolns.com
    58. How to Make Money with Your Operator Graphics

    The operator interface to your control systems is an important part of the operator's ability to control plant processes. A well-designed graphic that encourages accurate and timely operator response is crucial to the operation and can make money for the plant. During normal run time, graphics can keep operators from shutting down the unit when issues are indicated and addressed before they alarm or trip the plant. Another opportunity for a high ROI is on shutdown or during start-up: significant cost savings can be achieved through the use of permissive and shutdown graphics, which show the operator specifically what is keeping the plant from starting up or what exactly caused the plant to shut down. This allows the operator to make the necessary adjustments to keep the process moving without having to call a technician or engineer out in the wee hours of the morning. These graphics can save the plant many hours of downtime and lost production revenue.

    Steve Ferrer ProSys, Inc., steve.ferrer@prosys.com
    59. ICS Cyber Security Process Lifecycle

    While conforming with regulations, standards, and best practices, this paper/presentation proposes a slightly altered process for the lifecycle of design, acquisition, acceptance, and operation of a typical control system. Slight changes to the lifecycle will improve an industrial control system's ability to prevent or withstand a cyber security problem.  These steps will put the facility in a better position to manage cyber security threats, performance, reliability, audits, and response.

     

    Author Bio:  Jim McGlone is the Chief Marketing Officer at Kenexis. Prior to joining Kenexis, Jim spent fourteen years in business development for Rockwell Automation and five years as the vice president of Tridium, a Honeywell technology business.

     

    Mr. McGlone holds an MBA in International Business, a BS in Computer Technology, and a BS in Nuclear Technologies. Jim also served the US Navy on two submarines as a nuclear reactor operator and electronic technician.

    James (Jim) McGlone Kenexis james.mcglone@kenexis.com
    60.  Reliability in Measurements: Common Misconceptions in Calibration Management

     

    Instruments are required to perform within appropriate specifications (e.g., within accuracy and precision tolerance levels). Because uncertainty in the accuracy and precision of an instrument tends to grow with time since the last calibration, periodic calibration is needed to keep the uncertainty under control.

     

    Calibration constitutes the most important time-based maintenance policy for instruments; therefore, the definition of a cost-effective calibration strategy is a key factor in achieving reliable measurements to monitor, control, and protect critical equipment and systems (equipment-systems under control).

     

    Calibration strategies are frequently based on simplistic approaches that do not account for the importance of the instrument/measurement under consideration. These traditional strategies lack the granularity to identify groups of instruments that require greater calibration effort to keep their reliability above expected targets.

     

    This paper describes, through guided examples, how common misconceptions in calibration management have influenced the definition of non-optimal calibration policies and strategies; introduces reliability concepts to develop a risk based approach; and compares the traditional way of defining calibration policies and strategies to the proposed risk based approach.
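
    To make the risk-based idea concrete, the following Python sketch derives a calibration interval from a reliability target, assuming (purely for illustration; this is not the paper's method) that an instrument's probability of remaining in tolerance decays exponentially with time since calibration. The parameter names and numbers are hypothetical.

```python
import math

# Hedged sketch of a risk-based calibration interval. Assumption (not
# from the paper): in-tolerance reliability decays as exp(-t / MTBOT),
# where MTBOT is the mean time between out-of-tolerance events. The
# interval is then the longest t that keeps reliability above target.
def calibration_interval(mtbot_months: float, reliability_target: float) -> float:
    """Solve exp(-t / mtbot_months) == reliability_target for t (months)."""
    return -mtbot_months * math.log(reliability_target)

# A critical instrument (95% in-tolerance target) gets a much shorter
# interval than a non-critical one (80% target) with identical drift.
critical = calibration_interval(60.0, 0.95)      # ~3.1 months
noncritical = calibration_interval(60.0, 0.80)   # ~13.4 months
```

    The granularity the abstract calls for falls out naturally: instruments grouped by criticality receive different reliability targets and therefore different intervals, instead of one fixed period for the whole plant.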

    Henry Johnston Genesis Oil and Gas Henry.Johnston@genesisoilandgas.com
    61. Incidents that Define Safe Automation

     

    The process safety management regulation was issued in 1992 to address the prevention or minimization of the consequences of catastrophic releases of toxic, reactive, flammable, or explosive chemicals. In the decade leading up to its promulgation, the process industry suffered significant loss events that caused worldwide attention to become focused on reducing the risk of process safety events.

     

    Since 1992, additional loss events have occurred that brought renewed effort in defining the requirements for safe automation on a global scale. Numerous industry standards and practices have been published to address different aspects of instrumentation and controls from basic electrical safety through performance-based standards for alarm management and safety instrumented systems.

     

    To emphasize the importance of safe automation practices, case studies of previous incidents are presented, including a brief description of the incident and the key automation lessons to be learned. There are typically many contributors to these incidents and some incidents have become synonymous with certain safety issues, e.g., Texas City 2005 related to siting of temporary and permanent structures. This paper does not make any attempt to replicate these previous lessons learned, but instead focuses on the contribution of inadequate design, installation, testing, maintenance, and operation of the process control and safety systems.

    Eloise Roche SIS-Tech Solutions eroche@sis-tech.com
    62. Independence and Separation of Automated Systems

    Layers of Protection Analysis (LOPA) exposes the role that automation plays in initiating events and in responding to abnormal operation. Automation that is specifically designed to achieve or maintain a safe state of a process in response to a hazardous event is now referred to as safety controls, alarms, and interlocks (SCAI) per ISA 84.91.01 [ISA 2012]. Guidelines for Safe Automation of Chemical Processes [CCPS expected 2016] addresses the use of process control systems and SCAI to ensure safe operation of process equipment.

     

    A key requirement is that the safety systems are sufficiently independent of the process control system such that no human error or component failure causes failure of both the process control and safety functionality. Sufficient independence can be achieved using various physical and functional means. Unfortunately with modern automation, sufficient independence may be difficult for anyone other than automation specialists to assess and understand.

     

    To assist practitioners with the independence assessment, this paper provides a general discussion of the issue of independence as it applies to safe automation.  Then, the paper uses 6 generic architectures to support an overview of modern automation and an explanation of how different architectures support or lose independence.

    Angela Summers SIS-Tech Solutions asummers@sis-tech.com
    63. High/Continuous Demand Hazardous Scenarios in LOPA

    Layer of protection analysis (LOPA) has become one of the most important risk analysis techniques in the process industry for determining the integrity requirements for protection layers, especially the safety integrity level (SIL) for safety instrumented functions (SIFs).  Once a SIL has been allocated to a SIF safeguard in LOPA, the SIF will be designed, installed, and operated according to ANSI/ISA 84.00.01/IEC 61511.  These standards require that a SIL verification be performed to assess whether the integrity of the designed SIF meets the target integrity requirements determined in the LOPA.  A key question is what the appropriate target integrity measure for your SIF is, based on the scenario's demand rate, and how that might affect the LOPA methodology.

     

    One basic assumption in LOPA is that the safety integrity of the protection layers (including SIFs) is given by the well-known average probability of failure on demand (PFDavg), which is the safety integrity measure for low demand systems per ANSI/ISA 84.00.01/IEC 61511.  However, what if the hazard scenario involved has a high/continuous demand rate (nominally defined in the standards as more than once a year)?  ANSI/ISA 84.00.01/IEC 61511 explicitly defines the safety integrity measure for high/continuous demand SIFs as the frequency of dangerous failures per hour (PFH), instead of PFDavg.  We may also have a mixture of safeguards operating in different modes, e.g., both low demand and high/continuous modes, in the same LOPA scenario.  Does LOPA still work?  Is your SIL determination correct?  Are your verification calculations going to be correct?

     

    In this paper, we present a method to handle high/continuous demand hazard scenarios in LOPA without changing the general LOPA framework. Calculations for high/continuous mode safety functions are illustrated, with discussion of diagnostic and test interval effects. Cases encountered in actual projects are used as examples to showcase the proposed method.
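
    The distinction between the two integrity measures can be sketched with the simplest textbook approximations for a single (1oo1) element; this is an illustration only, and the failure rate and test interval below are example values. Real SIL verification per IEC 61511 must also account for architecture, diagnostic coverage, common cause, and repair times.

```python
# Simplest 1oo1 approximations of the two integrity measures:
#   low demand:              PFDavg ~= lambda_DU * TI / 2
#   high/continuous demand:  PFH    ~= lambda_DU
# where lambda_DU is the dangerous undetected failure rate (per hour)
# and TI is the proof-test interval (hours). Values are illustrative.
def pfd_avg(lambda_du_per_hr: float, test_interval_hr: float) -> float:
    return lambda_du_per_hr * test_interval_hr / 2.0

def pfh(lambda_du_per_hr: float) -> float:
    return lambda_du_per_hr

lam = 2e-6      # example dangerous undetected failure rate, per hour
ti = 8760.0     # annual proof test
low_demand_measure = pfd_avg(lam, ti)   # 8.76e-3, within the SIL 2 PFDavg band
high_demand_measure = pfh(lam)          # 2e-6 per hour
```

    The same hardware thus maps to different integrity claims depending on demand mode, which is exactly why mixing low demand and high/continuous demand safeguards in one LOPA scenario needs care.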

    William Mostia SIS-TECH Solutions bmostia@sis-tech.com
    64. Automation Infrastructure Upgrades at an Oil Storage Terminal

    This paper discusses the approach taken to upgrade the automation infrastructure at an existing oil storage terminal comprising pipeline reception as well as truck and tank car loading bays. The project aimed at replacing obsolete components and subsystems with newer technology equipment, while improving site safety and security. A description of the various control subsystems involved and their interaction with the process equipment is provided. The project included the relocation of the main control room as well as upgrades of the power distribution and control systems. Compliance review with various standards and regulations is also discussed.

     

     

     The HMI software and PLC hardware were both obsolete. The article describes their modernization and implementation based on a Programmable Automation Controller (PAC) combined with a standard library of faceplates and standard routines for pumps, valves and process instruments.

     

    In addition, the paper discusses the implementation of a redundant overfill protection system as per API-2350, as well as a Safety Instrumented System (SIS) as per ISA-84/IEC 61511, integrated with the basic process control system (BPCS PLC) and HMI.

     

     

    The paper discusses the various measures the team took to reduce downtime during project execution, taking into account dynamic product demand and equipment availability. This included planning and communication on multiple fronts throughout the project. The conclusion summarizes what made the project a success.

    Richard Caouette Letico Inc., rcaouette@letico.com
    65. FMEDA Predictions and OREDA Estimations for Mechanical Failure Rates: Explaining the Differences

    This paper first describes the distinction between failure rate prediction and estimation methods in general.  It then gives details about the procedures used to obtain generic failure rates for certain mechanical equipment using FMEDA predictions and OREDA estimations.  The results of the two methods when applied to a number of specific equipment items are compared and, when differences in the results exist between the two methods, plausible explanations for differences are provided.   Equipment examples include a representative of topside equipment items such as ball valves, butterfly valves, pressure sensors and level sensors, and representative subsea equipment items such as ball valves, butterfly valves and temperature sensors.  The relative merits of each method are discussed.
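
    The distinction the paper draws can be shown with a toy numerical contrast: an OREDA-style estimation derives a failure rate from observed field data (failures over cumulative operating hours), while an FMEDA-style prediction sums component-level failure rates from a parts database. All numbers below are made up for illustration and do not come from either database.

```python
# Toy contrast between the two approaches to a generic failure rate.
# OREDA-style *estimation*: point estimate from observed field data.
def estimated_rate(failures: int, cumulative_hours: float) -> float:
    return failures / cumulative_hours

# FMEDA-style *prediction*: sum of component failure rates from a
# parts database (values here are invented for illustration).
def predicted_rate(component_rates_per_hr: list[float]) -> float:
    return sum(component_rates_per_hr)

oreda_style = estimated_rate(4, 2.0e6)            # 4 failures in 2e6 unit-hours
fmeda_style = predicted_rate([5e-7, 8e-7, 6e-7])  # three component contributions
```

    Even in this toy form the structural difference is visible: the estimate inherits the operating context and data quality of the field population, while the prediction inherits the assumptions of the failure-mode analysis; which is why the two can legitimately disagree for the same equipment item.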

    Loren Stewart exida lstewart@exida.com
    66. Safety and alarming applications based on ISA100 Wireless system

    Industrial wireless instrumentation is being applied to a wide variety of applications today. End users expect large scalability and fast updates with deterministic latency, along with a highly reliable, secure wireless network. These characteristics are mandatory for reliable wireless communication, including for control and safety applications. Our latest wireless solution offering, “Dependable Plant Wide Wireless,” based on the ISA100.11a standard (IEC 62734), delivers a fully redundant, highly scalable, high-speed deterministic network capable of supporting the most demanding plants and mission-critical applications.  In this presentation, we will introduce safety and alarming wireless systems based on ISA100 Wireless: a wireless emergency shutdown valve system for floating roof tanks, and a wireless alarming system for tsunami, fire prevention, and gas leakage detection.

    Toshi Hasegawa Yokogawa Electric Corporation Toshi.Hasegawa@jp.yokogawa.com
    67. Manufacturing Enterprise Solutions for Process Industries


    Aasim Waheed INTECH Process Automation Inc aasim.waheed@intechww.com
    68. Protecting the Data Chain for More Accurate MES Applications

    MES applications rely upon accurate process data (temperatures, flows, pressures, etc.) as inputs to KPIs that are used for making business decisions.  At most facilities, a data historian is used to collect process data from a combination of DCS, SCADA, and PLC platforms, which in turn obtain process data from field instrumentation.  Many operating facilities have reported difficulties in ensuring that the data provided to the MES layer is accurate – the problem is that a change at any level of the “data chain” can impact the quality of the data in the MES solutions.  As an example, if an engineer were to change the engineering units of a transmitter setting in the DCS and not make a similar change to the configuration of the historian, an MES solution would report inaccurate data.

     

    In this presentation, PAS will discuss strategies and technologies to ensure integrity of the data-chain so that MES applications remain accurate.  These strategies will include proper change management, monitoring for change, and auto-discovering defects that affect data quality and accuracy.
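
    One simple instance of such monitoring, sketched below in Python, is comparing the engineering units configured for each tag in the control system against the historian's configuration and reporting mismatches like the transmitter example above. The tag names, records, and function are hypothetical and stand in for whatever configuration export a real toolchain provides.

```python
# Hedged sketch of one "data chain" integrity check: flag tags whose
# engineering units differ between the DCS configuration and the
# historian configuration. Tags absent from the historian are ignored
# in this sketch; a real tool would report those separately.
def unit_mismatches(dcs_config: dict[str, str],
                    historian_config: dict[str, str]) -> list[str]:
    return sorted(
        tag for tag, units in dcs_config.items()
        if historian_config.get(tag) not in (None, units)
    )

dcs = {"FT-100": "kg/h", "TT-200": "degC", "PT-300": "barg"}
historian = {"FT-100": "kg/h", "TT-200": "degF", "PT-300": "barg"}
# TT-200 is the unit mismatch: degC in the DCS, degF in the historian.
flagged = unit_mismatches(dcs, historian)
```

    Run periodically against fresh configuration exports, a check of this shape turns the silent drift described above into an auditable defect list.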

    Mark Carrigan PAS, Inc. mcarrigan@pas.com
    69. Best Practices to Improve the Safety and Productivity of Plant Operations

     

     

    We have many separate functions in our organizations, each with its own risk management system. It is very difficult to compare the priorities each system generates.

     

    Data is gathered to understand the health of equipment and systems and is typically held in closed "silos" inside our organizations, inaccessible in a useful manner to the majority. Standalone data automation systems (or even spreadsheets) are used to manage the masses of data, often jealously guarded by the subject matter experts who alone can access and interpret the data. Management processes are in place generating performance indicators, but the data they provide is often difficult to utilize effectively.

     

    Data is the root of the problem.

     

    As a priority, we need to improve the way we gather, analyze, and disseminate data. We need to find better ways to deal with management processes (most of which are effectively risk management systems). We need to understand the combined effect of performance deviations and non-conformances on the safe and effective operation of our facilities. We need to identify how business priorities and work-related activities combine with plant conditions to impact the fundamental process safety barriers that protect our people, our assets, our facilities, and their surroundings against major accidents.

     

    Plant operators need to integrate operations management and risk management. By connecting the two, operators can better understand plant status in terms of risk, trends, and peak exposure and make proactive interventions to prevent major accidents.

     

    This paper and presentation will share best practices for plant managers operating in the oil and gas, refining, chemical, and petrochemical industries to:

     

    •    Employ routine and efficient management of operational risk

    •    Understand the contributing factors to safety risk and their impact on process safety barriers in real-time

    •    Produce new leading indicators of operational risk that reflect the operational reality of the plant

    •    Close the gaps between maintenance, planning, and operational reality to achieve better plan attainment, wrench time and operational decisions

    •    See all risk across the operation, including time and space dimensions

     

     

    Presenter: Mike Neill, President – Petrotechnics North America

     

    Presenter Bio: Mike Neill is the President of Petrotechnics USA. With more than 35 years of experience, Mike has helped to improve safety and performance management for companies operating in hazardous industries around the world. Prior to joining Petrotechnics, Mike held roles in Operations, Drilling and Petroleum Engineering for BP Upstream, in Scotland, Norway, the South of England, and Egypt.

     

    Mike holds a BSc in Mechanical Engineering, an MSc in Petroleum Engineering from Imperial College of Science and Technology at the University of London, and an MBA in Strategic Management from the Peter F. Drucker Graduate Management Center, Claremont Graduate School in California. He is an active member of the Center for Chemical Process Safety (CCPS), the American Institute of Chemical Engineers (AIChE), the American Society of Safety Engineers (ASSE), the Gas Processors Association (GPA), and the Ocean Energy Safety Institute (OESI), and he sits on the steering committee for the Mary Kay O’Connor Process Safety Center.

    Mike Neill Petrotechnics courtney.brewer@petrotechnics.com
    70. Upgrade of gas detection technology eliminates false alarms and improves safety performance

    This paper will describe lessons learned from the Terra Nova FPSO, which operates in the Grand Banks, off the East Coast of Canada.  The owners of the Terra Nova field are: Suncor Energy (38%), ExxonMobil (19%), Statoil (15%), Husky Energy (13%), Murphy Oil (10%), Mosbacher Operating Ltd. (4%), and Chevron Canada (1%).  The Terra Nova FPSO is a remote facility with limited egress.  Therefore, any hazardous gas release in the facility requires complete production shutdown, blow down of available inventory and isolation of electrical equipment that is not Zone 1 rated.  In 2010, the advent of sour gas in the production fluid stream and gas detector failures were the catalysts for an extensive overhaul and upgrade of the gas detection system on the Terra Nova FPSO.  A multi-disciplinary team, comprised of personnel from safety, risk analysis, operations, instrumentation and controls engineering was assembled to assess and upgrade the overall gas detection system on the FPSO.

     

     

    A detailed analysis of the facility, based on computational fluid dynamics (CFD) modeling, was performed.  In aggregate, more than 1,400 gas leak scenarios were simulated and used in the evaluation, detector selection process, optimization and overall design of the upgrade to the gas detection system.  Laser-based technology was selected to replace infra-red based gas detection technology after extensive testing in both onshore and offshore environments.  By combining a quantitative gas dispersion study with the implementation of new technology, Terra Nova was able to achieve:

     

    1.         An elimination of false alarms

     

    2.         An increase in gas leak detection coverage

     

    3.         Earlier warning for preventative and remedial action

     

    4.         A reduction in maintenance requirements

     

    5.         A reduction in the exposure of operations personnel to hazardous locations and gases  

     

    6.         An improvement in the reliability and robustness of the FPSO’s overall gas detection system.

     

     

    Prior to the upgrade, false alarms from gas detectors were resulting in prolonged outages, damage to process equipment and production deferments of approximately 50,000-100,000 barrels per year.  The upgrade has solved these problems and resulted in a significant improvement in safety, reliability and process uptime.  Data from the Terra Nova data historian, maintenance management system and lost production tracking register was analyzed to quantify this performance.  This paper describes the methodology that was applied to the upgrade and presents an overview of the results.  Implementation of this retrofit and upgrade approach is expected to benefit numerous industrial facilities where the threat of a toxic or flammable gas leak exists.

    Rajat Barua Senscient rbarua@senscient.com
    71. Simulating SCAI

    The use of simulation in the verification of the safety instrumented system (SIS) application program is clearly discussed in IEC 61511. Simulators, when used in this way, are among the development and production tools that are themselves subject to assessment and verification.

     

         Simulators, however, have a much larger role to play in the process safety lifecycle and can be of benefit to the safety controls, alarms, and interlocks (SCAI) lifecycle as well.  Simulation can enhance the effectiveness and sustainability of critical safeguards through one or more of the following:

     

    •    Assisting in the specification of protection layer response times

     

     

    •    Facilitating the identification of dangerous combinations of output states

     

     

    •    Verifying operating and maintenance procedures

     

     

    •    Supporting initial and ongoing personnel training

     

     

         This paper will discuss the various ways in which process and application program simulation can be used to support more robust implementation of SCAI.


    Eloise Roche SIS-Tech Solutions eroche@sis-tech.com
    72. Effects of Wireless Packet Loss in Industrial Process Control Systems

     

     

    Timely and reliable sensing and actuation control are essential in networked control.  This depends not only on the precision and quality of the sensors and actuators used but also on how well the communication links between the field instruments and the controller have been designed.  Traditionally, these links are enabled by wires/cables, multiplexers and fieldbus protocols, which provide instant, reliable communications.  However, wired solutions do not scale well or support network reconfiguration.  Wireless networking, on the other hand, offers simple deployment, reconfigurability, scalability, and reduced operational expenditure, and is easier to upgrade than wired solutions.  Moreover, wireless networks support flexible communication bandwidth allocation depending on the needs of the control system.  Despite these advantages over their wired peers, the adoption rate of wireless networking has been slow in industrial process control due to the stochastic and less than 100% reliable nature of wireless communications and the lack of a model to evaluate the effects of such communication imperfections on overall control performance.

     

    In this work, we study how control performance is affected by wireless link quality, which in turn is adversely affected by severe propagation loss in harsh industrial environments, co-channel interference, and unintended interference from other devices.  These phenomena may result in significant and possibly bursty packet loss, which prevents the controller from tracking the critical performance metrics of the process under control and responding appropriately.

     

    We select the Tennessee Eastman Challenge Model (TE) for our study.  This is a widely-used chemical process model that includes a variety of remote sensors and actuators linked to the controller.  We adopt a decentralized process control system, first proposed by N. Ricker, that employs 41 sensors and 12 actuators to manage the production process in the TE plant.  We consider the scenario where wireless links are used to periodically transmit essential sensor measurement data, such as pressure, temperature and chemical composition, to the controller, as well as control commands to manipulate the actuators according to predetermined setpoints.

     

    We consider two models for packet loss in the wireless links, namely, an independent and identically distributed (IID) packet loss model and the two-state Gilbert-Elliot (GE) channel model.  While the former is a random loss model, the latter can model bursty losses.  With each channel model, the performance of the simulated decentralized controller using wireless links is compared with the one using wired links providing instant and 100% reliable communications.  The sensitivity of the controller to the burstiness of packet loss is also characterized in different process stages.
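    As an illustrative sketch (not the authors' simulation code; the function names and parameter values below are assumptions), the two loss models can be generated and compared in a few lines:

```python
import random

def iid_loss_trace(n, p_loss, seed=0):
    """n packet outcomes (True = lost), each drawn independently."""
    rng = random.Random(seed)
    return [rng.random() < p_loss for _ in range(n)]

def gilbert_elliott_trace(n, p_g2b, p_b2g, loss_good=0.0, loss_bad=1.0, seed=0):
    """Two-state Gilbert-Elliott channel: a 'good' and a 'bad' state with
    different loss probabilities; bursts come from dwelling in 'bad'."""
    rng = random.Random(seed)
    state_bad = False
    trace = []
    for _ in range(n):
        # State transition first, then a loss draw for the resulting state.
        if state_bad and rng.random() < p_b2g:
            state_bad = False
        elif not state_bad and rng.random() < p_g2b:
            state_bad = True
        trace.append(rng.random() < (loss_bad if state_bad else loss_good))
    return trace

def mean_burst_length(trace):
    """Average length of consecutive-loss runs (0.0 if nothing was lost)."""
    bursts, run = [], 0
    for lost in trace:
        if lost:
            run += 1
        elif run:
            bursts.append(run)
            run = 0
    if run:
        bursts.append(run)
    return sum(bursts) / len(bursts) if bursts else 0.0
```

    With, say, p_g2b = 0.01 and p_b2g = 0.5, both models can be tuned to roughly the same 2% average loss rate, yet the GE trace clusters its losses into bursts averaging 1/p_b2g = 2 packets, which is exactly the case that stresses a controller during stage transitions.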

     

    The performance results indicate that wireless links with redundant bandwidth reservation can meet the requirements of the TE process model under normal operational conditions.  When disturbances are introduced in the TE plant model, wireless packet loss during transitions between process stages needs further protection in severely impaired links.  Techniques such as retransmission scheduling, multipath routing and enhanced physical layer design are discussed, and the latest industrial wireless protocols are compared.

    Yongkang Liu NIST yongkang.liu@nist.gov
    73. Technology Migration Unifies Plant Communication Infrastructure

    Technological advancements are changing the accessibility of plants’ existing infrastructure

     

    A plant’s infrastructure historically was planned in a pyramid-type structure that formed multiple access levels. These levels started at the top of the pyramid, the corporate level, and worked down toward the manufacturing or local control environment. The manufacturing level includes devices and instruments, as well as batch, continuous and discrete control. But as more Ethernet-based control and Plant Asset Management systems are deployed, combined with the release of enhanced technologies, it is becoming apparent that the Ethernet direct control and supervision infrastructure is merging with the field device infrastructure for further improved operations.

     

    This technology migration trend creates the potential to combine existing assets with newer technologies, including standardized wireless communications and Ethernet-based instrument protocols such as HART-IP. The most notable advancement in these communication enhancements is that each device, instruments included, can be addressed via an IP address from multiple host systems simultaneously.

     

    Users can leverage existing intelligent instruments in more ways because the communication enhancements are simple and flexible, reduce the dependence on multiple instruments by taking advantage of built-in multivariable capabilities, and allow calibration to be centralized. This reduces the complexity of integrating control systems and Plant Asset Management systems.

     

    The intent of this paper is to outline recent technological advancements, current and potential integrations with existing infrastructure, and new capabilities for users.


    David Burrell Phoenix Contact dburrell@phoenixcon.com
    74. How to influence “Easy Riders” and companies with no safety culture to invest properly in Safety

     

     

    After studying Corporate Social Responsibility (CSR) for 16 years and working in safety to human life (Safety) for almost 20, it is easy to see that both involve high costs for companies and therefore reduce short-term profits. Both, however, are important for the sustainability and survival of the company itself, as well as for its long-term results.

     

    Government usually handles these broad, long-term issues; in the case of Safety, through regulation and auditing. Civil society also plays an enormous role through institutions like ISA, composed of professionals from several different sectors, including companies with a long-term safety culture. 

     

    Thus, for all companies, Safety depends on a rational application of resources, maximizing results and stimulating new actions. ALARP, ISA84/IEC61511, and the SIL concept all deal with rationalizing the choice of safety functions.

     

    Especially for companies known by the business term “easy riders,” interested only in profit, and for those with no consistent safety culture, it becomes highly necessary to discuss and manage the returns on investments in Safety. Keep in mind that even if some companies invest purely out of virtue, most still need solid arguments based on profitability to convince shareholders of the importance of investing. KPIs of various tiers, as well as availability indicators, can help, but they must be treated as benchmarks.

     

    It is very important to recognize that returns on Safety carry a good amount of uncertainty, with some requiring high individual investments or the concurrent investment of several companies. It should also be remembered that some of these investments decrease availability.

     

     

    Finally, some issues generate returns in cascade: investment in one area positively affects another, and a seemingly immediate return affects or reinforces others.

     

    This article presents cases collected over the years, pointing to a possible path for increasing safety compliance in the process industry.


    Marcelo Mollicone Sym Safety marcelo.mollicone@symsafety.com.br
    75. Adapting NIST Cybersecurity Framework for Conformance Assessment

     

     

    Current industry risk assessment methodologies used in Industrial Control System (ICS) environments are sufficiently extensive that there is confusion among stakeholders as to the exact purpose and expected outcome of any given assessment.  A common, standardized ICS cybersecurity assessment methodology will dramatically reduce the level of stakeholder confusion and provide a rationalized “scorecard” to measure Enterprise-wide ICS cybersecurity exposure.  The presented methodology provides a common language to describe ICS cybersecurity conformance and exposure; represents the cybersecurity exposure of both individual industrial control systems and the overall Enterprise-wide ICS cybersecurity posture, establishing a common understanding of what “good” looks like; and defines an assessment methodology and reporting/scoring standard that communicates cybersecurity implementation expectations and provides clear conformance goals, enabling a higher degree of ICS organizational capability rationalization throughout the enterprise.  The NIST Cybersecurity Framework and its alignment with current industry standards and best practices provides the opportunity to develop enhanced internal standards and controls that support common risk management philosophies and implementations.


    Kenny Mesker Chevron kmesker@chevron.com
    76. ICS/SCADA Security - Building it IN

    This presentation will cover today’s threats and the latest known attacks on control systems. It will also cover designing cybersecurity into control systems and discuss the many technologies that are available to create a robust and manageable system. The presentation will describe defense in depth (DiD) and will also cover zones and conduits from ISA99/IEC62443.


     

    Marco Ayala aeSolutions marco.ayala@aesolns.com
    77. Making a Business Case for IACS Cybersecurity

    ISA-62443-2-1 addresses the establishment of an Industrial Automation and Control Systems Security Program.  The target audience for this standard is plant personnel who are responsible for establishing a security program for industrial automation and control systems (IACS) used for process control.  Unlike other security standards that cover only technical considerations for cybersecurity, the standard focuses on the critical elements of a security plan relating to policies, procedures, practices and personnel.  As such, it is a valuable resource to management for establishing, implementing and maintaining a comprehensive security plan for process control.

     

    One of the many references to ISA-62443-2-1 in the NIST Framework is under Function Identify (ID) and Category Business Environment (ID.BE).  As noted in the standard, establishing a business rationale is the first step in developing a security plan and is “essential for an organization to maintain management buy-in to an appropriate level of investment for the IACS cyber security program.”

     

    The presentation “Making a Business Case for IACS Cybersecurity” will emphasize the necessity of a business case that justifies the commitment of resources needed to manage cyber risks in the process environment.

    Don Dickinson Phoenix Contact ddickinson@phoenixcon.com
    78. Gas Void Fraction Eliminator: How Much Money Are You Paying for Air?

     

     

    The purpose of this presentation will be to describe the patented technology of Flow Regime Management systems™ (FRMs) used to restructure flow regimes and separate phases at full flow.  By utilizing a PLC and instrumentation, we have been able to develop a system that performs the separation and metering of the two phases automatically and with a high degree of accuracy.

     

    One of the challenges was to find an inexpensive and effective way of determining the amount of gas in the liquid.  By using a capacitive probe in a nontraditional method, we are able to calculate the gas-to-liquid ratio.  Through the use of the PLC and instrumentation, the Gas Void Fraction Eliminator™ (GVFE) is capable of detecting and extracting the entrained air and slug flow, thus providing the measurement instrument with a single phase that will ensure an accurate measurement of the liquid.  The GVFE can also be equipped with an outlet gas/air flow meter to determine how much air was removed from the process.  The GVFE does not require a decrease in incoming flow rate, so there is no down or residence time associated with the separation.  The technology uses no internal moving components, making our GVFE virtually maintenance free.
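    The abstract does not disclose how the capacitive reading maps to gas content; purely as an illustration (the linear mixing rule, the calibration values, and the function name below are all assumptions, not the patented method), a first-order estimate interpolates between the all-liquid and all-gas capacitances:

```python
def gas_void_fraction(c_measured, c_all_liquid, c_all_gas):
    """First-order void fraction estimate from a capacitive probe, using a
    linear mixing rule between the all-liquid and all-gas capacitance
    readings. A real probe requires a calibrated, flow-regime-aware model."""
    return (c_all_liquid - c_measured) / (c_all_liquid - c_all_gas)

# Hypothetical calibration points: 100 pF when all liquid, 50 pF when all gas.
alpha = gas_void_fraction(90.0, 100.0, 50.0)  # 0.2 -> 20% gas by volume
```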

    Lonnie Barker Siemens Industry Inc. lonnie.barker@siemens.com
    79. An Owner and Operator's Journey towards Safety and Performance with All Digital Wireless Control (ADWC)


     

     

     

    Reliability concerns persist in the controls world, even as digital spread-spectrum technologies are considered reliable and fault tolerant in key communication systems for civilian cellular, avionics and military applications. Wireless communication chips, drivers, and software have evolved significantly in the last ten years, as more than a billion devices have been manufactured, used and tested. Process control networks to date, however, continue to rely on legacy wired continuous analog signals. An application focus on wireless technologies is growing with the ISA100 initiative, PROFIsafe development, and wireless encapsulation of HART and other wired protocols. At this point, it is arguable that current standards provide sufficient flexibility for communication technology selection and for the development and use of time-tested consumer and military technologies in control systems.

     

     

     

    This paper makes a case for ADWC - All Digital Wireless Control systems for owners and operators based on technical drivers that result in reliability, robustness, and in turn safer systems.

     

     

     

    We look at wireless technologies from the owner/operator’s point of view. First, we identify potential wireless technology candidates, such as DSSS, WCDMA, and FHMA, for short-range wireless control systems specific to owner/operator needs. We perform a technical analysis of the suitability of these candidates for application-specific performance. We then review the status of current technologies versus needs and wants on the factory floor. We then identify current psychological barriers, the lack of technical standards agility, and overall inertia towards wireless control systems. We then compare the fault tolerance, reliability, diagnostic and self-healing capabilities of such networks with those of traditional wired 4-20mA signal transmission as well as coaxial multi-hop wired networking; a key focus being the use of diagnostic information and data for safer plants and continuous performance improvement. We then suggest some practical combinations of wired and wireless networks that capitalize on the strengths of both.

     

     

     

    In this paper we demonstrate a strong case for owners and operators to make a planned move towards wireless networks to achieve communication reliability, predictability and network robustness, and in turn, improvement in safety and plant performance.

    Shahid Bashir ControlNex shahid.bashir@controlnex.com
    80. Useful Life of Safety Instrumented Systems

     

    The terms “Useful Life” and “Mission Time” are often misunderstood when used in the context of Safety Instrumented Systems (SIS).  Even industry experts sometimes strongly and starkly disagree on the subject.  Yet a proper understanding and treatment of these concepts is essential for the successful realization of the safety lifecycle.

     

    In this paper we will examine the definition of useful life per the IEC 61508 standard.  We then examine how that definition has been interpreted and applied in the process industries, including reported useful life for various common process industry SIS components.  Several common questions regarding available useful life data are discussed.  The FMEDA assumptions underlying the useful life numbers of IEC 61508 certified products are examined in the context of typical refinery SIS experience.

     

    We examine the practical impacts of the available information and discuss what options are available for an end user to comply with the IEC 61508 standard.  The interaction between useful life and SIS mission time is explored.  A strategy for proactive Useful Life Management (ULM) is outlined to address gaps in the current handling of useful life requirements by typical refinery end-users.
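    The interaction is easy to see in the standard simplified PFD formula (a sketch with hypothetical numbers, not a calculation from this paper): the familiar lambda·TI/2 result is derived under a constant failure rate, so once a device operates beyond its useful life the computed PFDavg, and any SIL claim built on it, loses its basis.

```python
def pfd_avg(lambda_du_per_hr, proof_test_interval_hr):
    """Simplified average probability of failure on demand for a single
    (1oo1) element: PFDavg ~= lambda_DU * TI / 2. Valid only while the
    constant-failure-rate assumption holds, i.e. within useful life."""
    return lambda_du_per_hr * proof_test_interval_hr / 2.0

# Hypothetical numbers: lambda_DU = 5e-7 per hour, annual proof test.
pfd = pfd_avg(5e-7, 8760)  # ~2.2e-3, within the SIL 2 band for this element
```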

     

    It is clear that in order to successfully implement and maintain the requirements of IEC 61511 (and by proxy IEC 61508), end users must abandon the reactive “run-to-failure” mindset that has historically dominated SIS maintenance.  Effective and proactive Useful Life Management is essential for ensuring that SIS design calculations are meaningful and that installed SIS performance is acceptable.

     


    Stephen Thomas Chevron stephen.thomas@chevron.com
    81. Field Calibration & Testing of Industrial Vibration Protection Systems

     

     

    From proximity probes to accelerometers, portable analyzers to “data pens”, the world of industrial vibration measurement is a complex array of technologies, instrumentation and solutions. Refineries, power generation plants, pipelines and petrochemical facilities all rely on these technologies to monitor and protect critical equipment such as turbines, pumps, motors and compressors. The vibration monitoring system is a critical component within the condition monitoring program, preventing downtime, promoting safety and diagnosing machine fault conditions.

     

     

     

    Perhaps due to the overwhelming amount of vibration technology, difficult-to-understand specifications, and a lack of field testing technology, the vibration protection aspect of the condition monitoring program has often been overlooked when it comes to calibration. But for plants seeking to improve safety, reduce downtime and prevent catastrophic failures, field verification of vibration monitoring instrumentation can be one of the most important tasks performed by the instrumentation and controls team.

     

     

     

    This paper and presentation simplify and explain the intricacies behind various vibration sensing technologies, taking a closer look at the specifications and practical operation. The paper is a guideline for field testing proximity probes (eddy current probes), accelerometers, 4-20 mA velocity sensors and impact detectors routed to online control systems with alarm functionality. It reviews ISO and API standards regarding vibration calibration and how they impact industrial applications. Finally, how can this instrumentation fail, and what are the warning signs? The paper discusses cabling errors, sensitivity drift, mounting considerations, troubleshooting and typical pitfalls.


    Michael Scott The Modal Shop mscott@modalshop.com
    82. How Much Is Enough?

    Cybersecurity risk differs from the risk of earthquakes and equipment failures. The latter are random processes, which can be modeled statistically, while cyber attacks are systematic failures. Targeted attacks cause multiple simultaneous failures. Calculating cyber security risk is therefore a qualitative, not a quantitative process, and as such is very difficult to communicate to senior decision-makers responsible for funding decisions.

     

     Standard cyber security risk management advice, including widely-used ISA/IEC and NIST advice, talks about "threat modelling," as well as abstract "likelihood" and "vulnerability" scores to model defensive capabilities. The result is an abstract "score" which means nothing to senior decision-makers, and means little more than that to cybersecurity practitioners themselves. This article argues that attack training is vital to the design of cyber defenses, and that attack modelling is a much clearer mechanism for evaluating security investments and returns on investment than is an abstract, qualitative risk score.

     

     This article argues that to be effective, at least one member of cyber security risk evaluation teams must be trained as an attack specialist. This individual describes how to defeat each of, and all of, the site's defensive capabilities. After all, no security is perfect.

     

     With an attack specialist on the team, risk assessments are more accurate and can be communicated more effectively. With an attack specialist, the team is able to describe attacks and consequences, and express their degree of confidence in the ability of the site's defensive capabilities to repel the attack. Representative attacks can then be "stacked" in a grid, with threats on the X axis and increasing attack impact and sophistication on the Y axis.

     

     With this attack model, it is possible to draw a "confidence" line through the grid representing the design basis threat. Attacks below the line are repelled with the indicated level of confidence, and attacks above the line are not. This communication mechanism allows even unsophisticated decision-makers to understand the answers to "what if" questions. For example: Q: What if we require the security system to repel attackers with a higher degree of confidence? A: The line drops; fewer attacks are repelled. Q: What if we give you the money to implement security measure X? A: The line moves up the chart, encompassing additional attack scenarios, or not.

     

     In addition, a properly-executed attack model shows decision-makers that there are always attacks possible "above the line" - these are risks the organization must either accept or transfer. Better yet, attack models can be tested by penetration testing teams equipped with the resources attributed to a specific threat. Attack modelling carries out many of the same activities as are currently carried out by cyber risk assessment teams, but communicates the results in a way that is more useful both to the team and to senior management.

    Andrew Ginter Waterfall Security Solutions andrew.ginter@waterfall-security.com
    83. Do your progress meetings hinder progress?

    SCRUM and Process Control

     

    Have you been asked to report your progress in percent complete, while what you really want to say is “Leave me alone, I’ll be ready for startup”?  There is an axiom in software development that says 80% of the code takes 20% of the time, and 20% of the code takes 80% of the time.

     

    There is a better way.  How do you think Google or Apple keeps track of the massive programming efforts that they undertake?  They sure as heck don’t sit everybody down in a big room and ask for progress updates. And they don’t go off programming for months without talking to their customers.

     

    They use agile software development techniques, and the technique that is specific to this problem is SCRUM.  Remember, the goal of agile development is the rapid development of usable code, and in process automation, this is the rapid development of control code that you can simulate.

     

     


    Scott Hayes Maverick Technologies
     scott.hayes@mavtechglobal.com
    84. Panel Discussion - Reliability

    The 3 Ps of Production: Production, Production, more Production! Increase production at all costs. This is the battle cry of many organizations, for the simple fact that more production equals more profits. Is this "reality," or does it just foster a false sense of accomplishment through an active/reactive management philosophy? If an organization really wants more profits and/or improved safety/environmental performance, it must first shift from a production-centric organization to a Reliability Centric® Organization (RCO), where additional production, reduced costs, increased profits, and improved safety/environmental performance are ALL realized. But to get there, the traditional 3 Ps of Production must be replaced with the 3 Ps of Reliability: Planning, Patience, and Perseverance.

     

    A reliability-centric organization makes reliability the focus of its Maintenance, Operations, Capital Investment (Projects) and Reliability departments.  It must have strong and independent reliability leadership that not only looks at traditional reliability improvements, but also influences/guides the organization’s Operations, Maintenance, Capital, and Turnarounds to improve its performance from a reliability standpoint.  This shift in focus, or more apropos, guiding principles, results in a philosophy of “Engineer it Right, Repair it Right, and Keep it Running” – which incidentally, are the three principles of the Reliability Centric® Organization.  However, it must be cautioned that this type of organization will have a limited impact on improving reliability/availability without the support, buy-in and long-term commitment from Operations/Maintenance leadership as well as the overall leadership of the organization.


    Alan Bryant Oxy alan_bryant@oxy.com
    85. Artificial Intelligence in Process Control

     

    Control systems can be improved using artificial intelligence. Fuzzy logic automatically solves problems that would normally require human intelligence, and it has many applications in control systems where the domain knowledge is imprecise. Fuzzy logic is well suited where imprecision is inherent due to uncertainty in the modeling of the control and measurement process. This presentation showed that fuzzy logic can be applied to process control to improve accuracy and compensate for the nonlinear uncertainty that exists in the process. A fuzzy logic implementation can use a low-cost, software-based method instead of expensive hardware to compensate for error. The proposed method not only decreases error but also increases robustness in control applications.
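    As a concrete sketch of the approach, a minimal fuzzy controller can be written with triangular membership functions and weighted-average defuzzification. The error ranges and rule outputs below are illustrative only and not tuned for any real process:

```python
# Minimal fuzzy-logic controller sketch: triangular membership
# functions on the control error, a three-rule base, and a
# membership-weighted average (centroid-like) defuzzification.

def tri(x, a, b, c):
    """Triangular membership: 0 at a and c, 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_control(error):
    # Fuzzify: degree of membership in each error set (span -10..10)
    mu = {
        "negative": tri(error, -10.0, -5.0, 0.0),
        "zero":     tri(error,  -5.0,  0.0, 5.0),
        "positive": tri(error,   0.0,  5.0, 10.0),
    }
    # Rule base: each error set maps to a crisp output change
    out = {"negative": -1.0, "zero": 0.0, "positive": 1.0}
    # Defuzzify: membership-weighted average of the rule outputs
    num = sum(mu[k] * out[k] for k in mu)
    den = sum(mu.values()) or 1.0
    return num / den

print(fuzzy_control(0.0))   # zero error -> no output change
print(fuzzy_control(2.5))   # small positive error -> partial action
```

    Because the membership functions overlap, the output varies smoothly between rules, which is what lets the controller tolerate imprecise process knowledge.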


     

     


    Kash Behdinan Plc's Plus Int kbehdinan@bkppi.com
    86. Automating Smart Instrument configuration

    •One of the key solutions to configuring a large number of instruments in a project is to leverage a central configuration and management system, through the control system, to build and upload the configuration to each individual device.

     

     

    •A central instrumentation configuration system eliminates the need to manually enter the settings line by line through the dialogs of handhelds.

     

    •The device configuration is downloaded by the control system automatically through the fieldbus network.

     

    •The central instrumentation configuration system can electronically assign a maintenance mark to each device.

     

    •Operators and maintenance personnel can set and adjust the operation/service status of the devices online.

     

    •It stores all configuration data, diagnostic results, and operation and maintenance records.
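    The workflow in the bullets above can be sketched as follows. The `DeviceLink` class and its methods are hypothetical stand-ins for a real fieldbus or device-description interface, and the template values are invented:

```python
# Central instrument configuration sketch: one template is built
# once and pushed to every device over the bus, replacing
# line-by-line handheld entry.

TEMPLATE = {  # hypothetical pressure-transmitter settings
    "units": "kPa",
    "range_low": 0.0,
    "range_high": 600.0,
    "damping_s": 1.0,
}

class DeviceLink:
    """Stand-in for a fieldbus connection to one smart instrument."""
    def __init__(self, tag):
        self.tag = tag
        self.params = {}
        self.status = "out of service"   # electronic maintenance mark

    def write(self, params):
        self.params.update(params)       # would go over the fieldbus

    def set_status(self, status):
        self.status = status             # online status change

def commission(tags, template):
    """Bulk-download one template to every tagged device."""
    devices = []
    for tag in tags:
        dev = DeviceLink(tag)
        dev.write(template)              # no handheld dialogs needed
        dev.set_status("in service")
        devices.append(dev)
    return devices

fleet = commission(["PT-101", "PT-102", "PT-103"], TEMPLATE)
print([(d.tag, d.status) for d in fleet])
```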

     


    Kash Behdinan Plc's Plus Int kbehdinan@bkppi.com
    87. Tutorial: Safety Instrumented System Nuts and Bolts

    Safety Instrumented Systems, which provide substantial risk reduction for hazardous chemical processes, have their own set of terms, concepts, and three-letter acronyms. Some examples are SIL, PFDavg, RRF, STR, SFF, CCF, Dangerous Failure Rate, Diagnostic Coverage, Demand Mode, and Architectural Constraints, to name a few. This presentation unveils the details underpinning these concepts and their use in the design of Safety Instrumented Systems.
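    As a worked example of two of those acronyms: for a single-channel (1oo1) safety function in low-demand mode, a common approximation is PFDavg ≈ λDU·TI/2, with RRF = 1/PFDavg. The failure rate and proof-test interval below are assumed values for illustration:

```python
# PFDavg / RRF worked example for a 1oo1 low-demand safety function.

lam_du = 2e-6        # dangerous undetected failure rate, per hour (assumed)
ti_hours = 8760      # proof-test interval: one year

pfd_avg = lam_du * ti_hours / 2   # simplified 1oo1 approximation
rrf = 1 / pfd_avg                 # risk reduction factor

def sil_band(pfd):
    """Low-demand SIL band from PFDavg, per the IEC 61511 tables."""
    if 1e-5 <= pfd < 1e-4: return "SIL 4"
    if 1e-4 <= pfd < 1e-3: return "SIL 3"
    if 1e-3 <= pfd < 1e-2: return "SIL 2"
    if 1e-2 <= pfd < 1e-1: return "SIL 1"
    return "below SIL 1"

print(f"PFDavg = {pfd_avg:.2e}, RRF = {rrf:.0f}, {sil_band(pfd_avg)}")
```

    Note how the test interval enters the result directly: halving TI roughly halves PFDavg, which is why proof-test frequency is as much a design lever as hardware selection.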



    Joe Veasey aeSolutions joe.veasey@aesolns.com
    88. Workshop: Safety Instrumented Burner Management Systems - Codes and Standards Update

    Invoking the concept of a Safety Instrumented – Burner Management System in all three of the NFPA 85, 86 and 87 series of codes / standards is a significant milestone for industry.  In 2002 when the ISA S84 committee first began developing the Technical Report, TR84.00.05 Guidance on the Identification of Safety Instrumented Functions (SIF) in Burner Management Systems (BMS), none of these codes / standards recognized the concept of a Safety Instrumented System.  This issue directly contributed to schedule delays in the development process, which ultimately resulted in pushing the final publication of the Technical Report out to December of 2009.

     

      

     

    However, recently all three of the NFPA codes / standards that govern fired device Burner Management Systems have been updated to invoke the concept of Safety Instrumented Systems.  These include the following:

     

     

    ·       NFPA 85 - Boiler and Combustion Systems Hazard Code 2015 Edition

     

    ·       NFPA 86 - Standards for Ovens and Furnaces 2015 Edition

     

    ·       NFPA 87 - Recommended Practice for Fluid Heaters 2015 Edition

     

     

    In addition,  API 556 has been updated from the previous 1999 edition to invoke the concept of Safety Instrumented Systems.

     

     

    ·       API 556 - Recommended Practice for Instrumentation, Control, and Protective Systems for Gas Fired Heaters 2011 Edition

     

     

    This presentation will highlight changes to the above codes / standards as they apply to the concepts of Safety Instrumented Systems.  This will include a discussion on equivalency clauses and / or linking paragraphs to ISA S84.00.01 - 2004 (IEC 61511 Mod) possibly allowing deviation from prescriptive requirements.  Also of significant note is the modification of logic solver requirements with inclusion of a direct reference mandating the use of Safety PLCs with minimum SIL capabilities in certain instances.

    Charlie Fialkowski Siemens  charles.fialkowski@siemens.com
    89. Workshop: Safety Lifecycle Compliance Journey: If I'd Have Known Then What I Know Now…

    This workshop will discuss the challenges of efficiently and cost-effectively implementing IEC 61511 in a global operating company.  Discussion will include lessons learned on initial grandfathering / baseline documentation, capital projects / gap closure, functional testing, organizational readiness, etc.

    Dave Rieder Chevron dave.rieder@chevron.com
    90. Safety Application Migration Meeting Current Functional Safety Compliance Best Practices

    This paper describes how legacy SIS should be migrated to new platforms in a manner compliant with the currently globally recognised best practices defined by standards IEC 61508 and IEC 61511.  These standards describe the functional safety management techniques required to avoid and reduce the introduction of systematic errors into SIS realisation and are pre-eminent in contractual requirements for SIS delivery.

     

    A choice of application translation methodologies comprising Manual, Tools Assisted and Automatic are described with qualitative comparisons and reference to the standards.  A recommendation is concluded that a hybrid Tools Assisted approach provides the best balance of compliance and efficiency.


    Dan Mulholland Trinity Dan.Mulholland@trinitysystems.com
    91. Workshop: Advanced Topics in Safety Instrumented Systems

    This presentation will summarize a variety of advanced safety instrumented system topics, such as: the impact on performance of imperfect manual testing, bypassing, and human errors (e.g., design & maintenance); justification of prior use; why certification is not a magic silver bullet; who should be responsible for what; and other topics of interest to the audience.

    Paul Gruhn Rockwell paulg@icshou.com
    92. Panel Session - IEC61511 Updates: What's new in the next release and what does it mean to you?

    This panel session provides attendees with the unique opportunity to directly ask questions of IEC 61511 committee members.  The panel will discuss soon-to-be-issued updates to IEC 61511 and their potential impacts on the process industry.

    Kevin Klein Chevron vin.klein@chevron.com
    93. Fire and Gas Design for the Process Industry; Are there Real Dangers behind the Smoke and Mirrors of F&G Detection Mapping?

    After the Piper Alpha disaster in the UKCS in 1988, the industry was given an abrupt awakening to the potential for disaster offshore. A breakthrough from this event was the increase in awareness of safety. Subsequently, the industry witnessed a great deal of time, money and effort invested in the development of appropriate technologies and safety systems to help prevent and mitigate the potential hazards naturally present on site. One such mitigation system which witnessed a dramatic increase in prevalence was Fire and Gas Detection technology. 

     

    This evaluation aims to analyse the dangers of inappropriate design of these systems for the hazards in question, and in particular, failing to take account of the hardware/ environmental drawbacks to ensure a safe and reliable system.

     

    As the process industry moves towards the reduction of the potential for ‘fail to danger’ in safety related systems (with an increase in the prevalence of IEC 61508 and IEC 61511), it is of great concern that designs of fire and gas detection technologies (whether one feels this can be classed as a SIS or not) applied today still provide this potential, and even worse, these drawbacks may never be accounted for in design. In particular, the guidance within ISA TR84.00.07 shall be reviewed with respect to fire and gas detection design in the process industry.  

     

    The main conclusion to be drawn from the paper is that far too much emphasis is placed on trusting the word of manufacturers of the detection devices. For those not previously involved in designing F&G, it is a widespread opinion that designing the system as per manufacturer manuals and data sheets is acceptable. It must always be anticipated that the manufacturers are selling a product and that the capabilities of the device may not in fact be appropriate for the hazard in question.
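    One way to make this concrete is a simple grid-based coverage calculation: the fraction of a monitored area within range of at least one detector falls sharply when the manufacturer's claimed range is derated for environmental effects. The geometry, claimed range, and derating factor below are all invented for illustration:

```python
# Grid-based F&G detector coverage sketch: fraction of a 2D area
# within range of at least one detector, at the datasheet range and
# again with that range derated for environmental effects.

import math

AREA = (20.0, 20.0)                      # monitored area, metres
DETECTORS = [(5.0, 5.0), (15.0, 15.0)]   # hypothetical layout
CLAIMED_RANGE = 8.0                      # metres, per datasheet (assumed)
DERATE = 0.6                             # environmental derating (assumed)

def coverage(detectors, radius, area, step=0.5):
    """Fraction of grid points within `radius` of any detector."""
    covered = total = 0
    y = 0.0
    while y <= area[1]:
        x = 0.0
        while x <= area[0]:
            total += 1
            if any(math.hypot(x - dx, y - dy) <= radius
                   for dx, dy in detectors):
                covered += 1
            x += step
        y += step
    return covered / total

print(f"claimed: {coverage(DETECTORS, CLAIMED_RANGE, AREA):.0%}")
print(f"derated: {coverage(DETECTORS, CLAIMED_RANGE * DERATE, AREA):.0%}")
```

    The gap between the two numbers is exactly the danger the paper warns about: a layout that looks adequate on the datasheet can leave large blind spots once real-world performance is accounted for.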

    James McNay Micropak JMcNay@micropack.co.uk
    94. Workshop: Cost Justification for Safety Instrumented System Compliance

     

    Does this sound like a familiar and somewhat painful theme to you?   New management has taken over your business unit and is now reviewing the Profit & Loss statements.  As part of this review process, questions begin to arise regarding why you are spending all of this money on Safety Instrumented Systems (SIS) grandfathering, functional testing, gap closure, etc.?  Can’t this expenditure be delayed or deferred in some fashion?  Is this a compliance issue mandated by a code or standard that carries the weight of law or is it simply a conformance related expenditure where some overzealous engineer(s) want to implement best practices?  Compliance implies someone could go to jail if we fail to implement the code / standards requirements and conformance implies these are optional design practices to be implemented only if the business can afford them, right?   

     

    So if you’ve answered yes to the above, first I’d like to say, “I feel your pain and have been there, done that.”  But more importantly, this white paper will provide you a roadmap and means for financially justifying compliance with IEC61511: Functional Safety – Safety Instrumented Systems for the Process Industry Sector through the use of fundamental accounting principles that will resonate with your upper management team.  By completion of a simple cost benefit analysis, one can demonstrate the value to the business of a performance based approach to risk management through implementation of the safety lifecycle as mandated in IEC61511. Simplistically, the safety lifecycle embodies a three step methodology to overall risk management, which can be summarized as follows:

    1. Safety lifecycle documentation

    2. Leading process safety indicators

    3. Safe unit operations through corrective actions

     

    This paper will discuss how to complete a financial justification for the initial and ongoing costs associated with a safety lifecycle based approach to risk reduction.  By assessing costs as well as benefits in financial terms, one can demonstrate to management that implementation of the safety lifecycle is truly a sound investment for the organization.  Thus, this white paper will provide you with a means to more effectively communicate the need for implementation of the safety lifecycle in terms management can easily understand - dollars and cents.
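    A minimal version of such a cost-benefit calculation might look like the following; every figure is an assumed illustration, not data from the paper:

```python
# Cost-benefit sketch for safety-lifecycle spending: compare the
# expected annual loss avoided by added risk reduction against the
# initial and ongoing program cost.

event_freq = 0.02        # unmitigated events per year (assumed)
event_cost = 25_000_000  # loss per event, USD (assumed)
rrf = 100                # risk reduction factor of the SIS (assumed)

capex = 1_200_000        # initial lifecycle implementation (assumed)
opex = 150_000           # annual testing / documentation (assumed)

loss_before = event_freq * event_cost          # expected loss, no SIS
loss_after = (event_freq / rrf) * event_cost   # expected loss with SIS
annual_benefit = loss_before - loss_after - opex

payback_years = capex / annual_benefit
print(f"Expected annual benefit: ${annual_benefit:,.0f}")
print(f"Simple payback: {payback_years:.1f} years")
```

    Framing the answer as a payback period is the point: it turns a compliance argument into an investment argument that a P&L review can weigh directly.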

    Mike Scott aeSolutions Mike.Scott@aesolns.com
    95. Panel Session: Testing of Safety Instrumented Systems: The Good, The Bad and The Ugly


    Nick Sands Dupont Nicholas.P.Sands@dupont.com
    96. Looking for Trouble on OT Networks: Tools and Techniques to Identify Threats to ICS Communications

    In today’s OT networks, Industrial Control Systems such as SCADA use information to drive the physics of process control.  Maintaining mechanical integrity of the connected process requires thorough understanding of the communications between these components in order to maintain safe and efficient operations.  In this cyber-physical world, it is often difficult to spot communications errors, cyber security threats, and poor network health.   The symptoms, however, are obvious:  slow HMI updates, unexplained shutdowns, and in the worst cases, dangerous failures of ICS components.  A robust and healthy OT network is key to preventing these failures.  This talk focuses on the tools and techniques used by cyber security professionals, including Network Security Monitoring (NSM), Intrusion Detection Systems (IDS), and manual analysis with Wireshark, that investigators use to find and isolate problems on OT networks before they cause harmful impacts or, worse, are found by our adversaries.
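    In the spirit of the manual-analysis techniques mentioned, the sketch below parses the MBAP header of a raw Modbus/TCP frame using only the standard library and flags unexpected function codes. The read-only whitelist is a hypothetical site policy; real monitoring would of course use Wireshark or NSM tooling rather than hand-rolled parsing:

```python
# Manual Modbus/TCP frame inspection sketch.

import struct

EXPECTED_FUNCTIONS = {3, 4}   # read holding/input registers only (assumed)

def inspect_modbus(frame: bytes):
    # MBAP header: transaction id, protocol id, length (big-endian
    # 16-bit each), then unit id and function code (one byte each).
    tid, proto, length, unit, func = struct.unpack(">HHHBB", frame[:8])
    alert = None
    if proto != 0:
        alert = "protocol id is not Modbus"
    elif func not in EXPECTED_FUNCTIONS:
        alert = f"unexpected function code {func}"
    return {"tid": tid, "unit": unit, "func": func, "alert": alert}

# A 'write single register' request (function 6), which the
# read-only whitelist above would flag:
frame = struct.pack(">HHHBB", 1, 0, 6, 17, 6) + struct.pack(">HH", 100, 123)
print(inspect_modbus(frame))
```

    On a plant network where the HMI should only ever read, a write function code appearing on the wire is exactly the kind of anomaly this sort of check surfaces.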


    Bryan Singer Kenexis bryan.singer@kenexis.com
    97. Cybersecurity for Manufacturing Systems

    Manufacturing systems need to be protected from vulnerabilities that may arise as a result of their increased connectivity, use of wireless networks and sensors, and use of widespread information technology. Manufacturers are hesitant to adopt common cybersecurity technologies, such as encryption and device authentication, due to concern for potential negative performance impacts in their systems. This is exacerbated by a threat environment that has changed dramatically with the appearance of advanced persistent attacks specifically targeting industrial systems, such as Stuxnet. NIST is developing a cybersecurity risk management framework with supporting guidelines, methods, metrics, and tools to enable manufacturers, technology providers, and solution providers to assess and assure cybersecurity for manufacturing systems.  A testbed is being developed to measure the performance of manufacturing systems when instrumented with cybersecurity protections in accordance with practices prescribed by national and international standards and guidelines.  Examples of such standards and guidelines include the ISA/IEC-62443 suite of standards and NIST SP800-82.  The testbed will cover multiple types of industrial control systems used in manufacturing. Each system is intended to cover one or more aspects of industrial control.  The testbed will include a measurement system that will allow capture of network traffic and security events through a syslog capture server and Wireshark.  Some research areas of interest for the testbed include:  perimeter network security; host-based security; user and device authentication; packet integrity and authentication; encryption; zone-based security; field bus (non-routable) protocol security; and robust/ fault tolerant control.  
Research outcomes from the testbed will highlight specific cases where cybersecurity technologies impact control performance as well as provide guidance to industry on best practices for implementing cybersecurity standards and guidelines without negatively impacting manufacturing system performance.

    Keith Stouffer NIST keith.stouffer@nist.gov
    98. Industrial Internet of Things

    Intelligent devices offer a significant opportunity to provide advanced functionality.  Digital communication technology is evolving to unlock more of the potential benefits of the technology available in intelligent devices.  Advanced functions are being distributed over large networks of intelligent devices.

     

    Technology associated with the Internet of Things (IoT) is making its way into industrial applications to support advanced functionality and enhanced access.  However, the needs of industry will modify the IoT technology to become an Industrial Internet of Things (I2oT).

     

    This paper will discuss some of the current communication technology trends and the industrial needs that will affect application of these trends in industrial environments.


     

    Herman Storey   herman.storey@live.com
    99. Adoption of Wireless for Safety

    Industrial wireless instrumentation is being widely deployed for monitoring, control, and alarms, including safety alarms.  This session will focus on technical considerations and user acceptance when wireless is used for safety alarms.

    Jay Werb ISA100 Wireless Compliance Institute jay@jwerb.com
    100. High Performance Graphics No Pain No Gain

    This presentation covers the conversion process of moving from conventional graphics to High Performance Graphics (HPG) at the Axiall Lake Charles chemical facility. It includes background information (human error statistics and human factors engineering), HPG concepts, graphic examples, and lessons learned.

    Robert Brooks Axiall Corporation rob.brooks@axiall.com
    101. Real-time Data & Process Simulator Integration

    Operating companies today are often faced with operational problems that they believe can only be solved via a change in the as-built equipment. These types of changes usually require, at minimum, an MOC and a turn-around, oftentimes, before a solution can be implemented. However, operating companies usually have many software based tools and operational data at their disposal that can be used to solve problems, temporarily and sometimes permanently, without modifying any as-built equipment in the field. This white paper discusses how operating companies can leverage their existing Process Information Management Systems (PIMS) and their existing Process Simulation software to solve operational problems without modifying as-built equipment in the field. Leveraging these software tools can often lead to higher operating revenues and reduced maintenance costs. This solution is discussed by first introducing PIMS and Process Simulators, how they can be powerful allies when paired correctly, a few short case studies of these integrative solutions in action, and the expertise required in a company to execute these types of solutions. For the case studies mentioned, this paper focuses on two projects; the first project was aimed at generating higher revenues for the customer’s power plants by increasing efficiency and gaining more uptime while the second project was aimed at reducing maintenance costs and operating costs in a heat exchanger bank for a customer at a refinery.
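    As a sketch of the kind of calculation that pairs historian (PIMS) data with a process model, the overall heat-transfer coefficient U = Q/(A·LMTD) can be trended from measured duty and temperatures to flag exchanger fouling ahead of a turnaround, in the spirit of the refinery case study. All process values below are invented for illustration:

```python
# Exchanger-fouling sketch: back-calculate U from historian data.

import math

def lmtd(t_hot_in, t_hot_out, t_cold_in, t_cold_out):
    """Log-mean temperature difference, counter-current arrangement."""
    dt1 = t_hot_in - t_cold_out
    dt2 = t_hot_out - t_cold_in
    if abs(dt1 - dt2) < 1e-9:
        return dt1
    return (dt1 - dt2) / math.log(dt1 / dt2)

def u_from_data(duty_kw, area_m2, temps):
    """Overall heat-transfer coefficient U = Q / (A * LMTD), W/m2K."""
    return duty_kw * 1000 / (area_m2 * lmtd(*temps))

AREA = 250.0   # exchanger area, m2 (assumed)
clean = u_from_data(4200, AREA, (180, 120, 40, 95))   # at commissioning
now = u_from_data(3100, AREA, (180, 135, 40, 80))     # from the historian

print(f"U clean: {clean:.0f} W/m2K, U now: {now:.0f} W/m2K")
print(f"fouling has cut U by {(1 - now / clean):.0%}")
```

    A falling U trend computed continuously from PIMS tags gives operations a cleaning trigger without touching any as-built equipment, which is the core argument of the paper.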

    Justin Conroy Radix U.S., LLC justin.conroy@radixeng.net
    102. Upgrade Considerations for Control Systems

     

    Today, many Technical Managers face the problem of building strong business cases for the migration of the current control system that has been running since the early 1980s, or earlier, to a more modern system. They are aware that a new automation system would bring efficiency and standardization to their facilities but it is hard to justify the cost of installing a new system when the old one is still functioning. How do you explain that you need to buy new hardware, risk possible unplanned downtime to migrate, and train the staff on how to maintain and operate a new system? This white paper discusses how managers can build a case for migrating a facility’s control systems to improve efficiency, reduce downtime, and improve visibility and reliability.

     

    As time passes, technology advances and changes. Hardware providers stop manufacturing certain products when the internal supply chain can no longer procure necessary components, such as microchips for older PLCs or microcontrollers. They also know that government regulations such as RoHS (Restriction of Hazardous Substances) will soon take effect, after which manufacturers will no longer be allowed to make lead-containing products that customers depend on for operations. Many vendors now offer plans to continue providing hardware and support at costs that rise steeply over time, framed as a way to give customers more time to plan migration strategies. In effect, they are intentionally discontinuing support and raising the price of older components to push customers to upgrade before essential components become unavailable.

     

    As a facility’s control system reaches the end of its lifecycle, it experiences increasing failure rates, making it less reliable and potentially affecting key performance indicators (KPIs). Procuring replacement parts becomes a challenge when legacy components cost more than newer hardware, or are simply unavailable. Maintenance starts buying newer products that require ‘portals’ to communicate with the older, proprietary parts of the system, increasing the points of failure and prolonging the upgrade. The control system becomes a mix of technologies from different vendors and generations, making standardization impossible. Worse still, the different systems require an extensive spare-parts inventory.

     

    Managers can target portions of their control systems for upgrades based on the price and availability of spare parts, obsolescence of components, partially supported or unsupported proprietary hardware and software, and business KPIs. This gives managers the opportunity to standardize on modern, non-proprietary interfaces such as Ethernet, OPC, and Windows, which can connect equipment from different vendors without expensive interface adapters. In addition, networks such as Ethernet can integrate with IT systems and move large amounts of information quickly, which was impossible with older proprietary networks. Thanks to technological developments such as increased CPU memory, system and platform consolidation is also possible, reducing the number of spare parts and the amount of maintenance needed.

    Cheri Haarmeyer Radix Engineering & Software cheri.haarmeyer@radixeng.net
    103. Safety Alarm Management Challenges and Best Practices

    The ISA 18.2 Alarm Management and ISA84 Functional Safety standards present well-established work processes that can have significant impact on overall plant safety. The intersection of these good engineering practices is the management of safety alarms, which support operator response and process recovery from process safety excursions. Many incidents have been attributed to poor operator recognition of the impending loss of containment. Unfortunately, the requirements and guidance for safety alarms are currently distributed across multiple standards, technical reports, and industry publications. No single source includes comprehensive requirements or explanatory guidance on the key differences in the design and management of safety alarms versus other alarms.

     

    This paper will give an overview of guidance from ISA standards and technical reports, CCPS texts, and other recognized sources to put the overall safety alarm requirements into perspective.


    Mike Carter   mcarter@sis-tech.com
    104. The Past, Present, and Future of Alarm Management


    Nicholas Sands DuPont Nicholas.P.Sands@dupont.com
    105. Building the Automation Professional of Tomorrow Nicholas Sands DuPont Nicholas.P.Sands@dupont.com
    106. Alarm Life-Cycle Support to Get Alarm Floods Under Control

    Alarm floods remain one of the biggest challenges in alarm management. To get alarm floods under control, alarm-related design knowledge from early lifecycle phases such as HAZOP and LOPA studies needs to be easily accessible in later phases, when additional information becomes available, so that decisions about advanced alarming methods such as alarm suppression can be made with confidence. Having good management of change and lifecycle support in place enables us to keep the alarm system consistent with the changing reality in the plant and allows continuous improvement.



    Martin Hollender ABB AG martin.hollender@de.abb.com
    107. Best Practices in SIS HMI

    NOTE:  This is not a new abstract, already accepted.

     

    Safe operating limits and the status of the shutdown system are arguably among the most important information for an operator to be aware of and to understand thoroughly. If the operator does not understand where these limits are, then what starts as a minor deviation outside normal operations may result in an unexpected outage. An SIS can be a complex system, with varied voting schemes and varied levels of instrumentation available to the control system. In modern systems that rarely shut down, and even more rarely trip, the details of the SIS design may not be commonly understood by the operator.

     

    However, a thoughtful HMI design can render these complex systems more intuitive to understand and can support the operator during upset conditions, as well as through any actuation of the SIS system.  This presentation will discuss the range of common SIS instrumentation and methods for effective information presentation.

    Bridget Fitzpatrick Wood Group Mustang bridget.fitzpatrick@woodgroup.com
    108. Report from the Trenches – Current Alarm Management Practices

    This talk will review current alarm management practices (best and worst) and key learnings from them. Topics will include alarm philosophy, documentation, operator interaction with the alarm system, and human machine interface practices.

    John Bogdan J Bogdan Consulting LLC  John.Bogdan@JBogdanConsulting.com
    109. Human Machine Interface (HMI) Design: The Good, The Bad, and The Ugly (and what makes them so)

    Poor HMI designs have been identified as factors contributing to abnormal situations, billions of dollars of lost production, accidents, and fatalities. Many HMIs actually impede rather than assist operators. Many of the poor designs are holdovers from the limitations of early control systems and the lack of knowledge of system designers. With the advent of newer and more powerful systems, however, these limitations no longer apply, and decades of research have identified better implementation methods. Unfortunately, change is difficult, and people continue to follow poor design practices; in fact, some new designs are actually worse than the old ones. Just as a computer is not a typewriter, new HMI designs should not mimic those of old. The problem is that many designers simply don’t know any better. This presentation will review why certain HMI designs are poor (with many examples) and show how they can be improved.

     

     


    Paul Gruhn Rockwell Automation paulg@icshou.com
    110. Process Industry Accidents: Lessons Learned the Hard Way and How to Avoid Them (Keynote)

    Utilizing a collection of videos, photographs, and stories, this presentation will cover lessons learned from previous accidents and how best to avoid future ones. Key topics covered in this presentation include:

    • Everyone needs training (yet they often don’t get it or accept it)

    • People must follow procedures (yet they often don’t, for a variety of reasons)

    • Even trained people make mistakes (and sometimes they do really stupid things)

    • Some people don't know what they don't know (and ignorance is definitely not bliss)

    • We’re not as immune or indestructible as we may think

    • We can't foresee every possible hazardous scenario

    • Reuse of software is not always successful

    • Near misses are often not followed up

    • The past is often ignored (and history definitely repeats itself)

    • The various personnel functional safety certification/certificate programs available (e.g., CFSE, TUV, and ISA) and the differences between them

    Paul Gruhn Rockwell Automation paulg@icshou.com
    111. Reality Check: Pitfalls in Alarm Management at a Greenfield Site

    Most alarm management efforts have occurred at brownfield facilities, due both to their larger number and to their potential history of poor alarm management. Executives at a new company building a multi-unit facility realized the value of alarm management from incidents at past plants and were determined to “get it right”. This paper describes issues and problems in creating and implementing the alarm management program, ranging from the initial alarm philosophy through rationalization. The impact of distributed authority and responsibility for aspects of alarm management will be highlighted. In particular, the interaction between hazard analyses and alarm selection will be detailed. Problems created by interpretation of alarm guidelines and by the pressures of cost overruns are delineated.


     


    David Strobhar Beville Engineering dstrobhar@beville.com
    112. Is Process Control Training REALLY No Longer Necessary?

    Several examples will be given to support the author's perception that process control training, both by industry and by academia, of newer control system engineers is on the decline.  Reasons for this decline are proposed, and a qualitative assessment of the effect on industry in the future is made.  This informal talk closes with examples of extremely beneficial advanced regulatory control techniques that are practically unknown in industry.

    Harold Wade   hlwade@aol.com
    113. VIRTUALIZATION – A POWERFUL TOOL FOR PROCESS CONTROL

    Virtualization technology has been successfully applied at Honeywell Performance Materials & Technology (PMT) production facilities around the world. Initially, virtual systems were used only for off-line development and testing. When an operator training system was virtualized in 2008, it became apparent that this technology could be applied to a production environment, and in 2009 the first production implementation of virtualization at Honeywell PMT was commissioned: four Profit Suite APC applications running in six virtual machines on a single physical server using VMware ESXi hypervisor software. Today, in 2015, virtual systems have become our standard architecture not only for APC but also for many other on-line process control applications, such as DCS servers, historians, control loop diagnostics and monitoring, domain controllers, and terminal servers. This tutorial provides an introduction to virtualization technology and demonstrates its power in both the off-line and on-line process control environment.

    John McIlwain Honeywell PMT john.mcilwain@honeywell.com
    114. Industrial Advances in Wireless Control

    A variety of technical challenges must be addressed when using wireless field devices in closed-loop control. It is possible to use the slow, non-periodic measurement updates of a wireless transmitter in closed loop by restructuring the traditional PID algorithm to provide PIDPlus capability. Through the application of PIDPlus, it is possible to control faster processes, such as liquid and gas flow, using wireless transmitters and wireless valves. Examples are used to demonstrate how this capability has been incorporated into commercial control systems, and information will be provided on the control performance achieved in field applications using wireless transmitters and/or wireless throttling valves.
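    The core idea behind the PIDPlus restructuring can be sketched as follows: instead of integrating the error on every controller scan, the reset contribution is recomputed only when a new wireless measurement arrives, using a positive-feedback filter driven by the elapsed time since the last update. The sketch below is illustrative, under stated assumptions; the class and parameter names are the author's, not the commercial implementation.

    ```python
    import math

    class PIDPlusPI:
        """Minimal PI sketch of the PIDPlus concept for non-periodic
        (wireless) measurement updates. The reset (integral) action is a
        filter that converges toward the last controller output with a
        time constant equal to the reset time, advanced only by the
        elapsed time between measurement updates."""

        def __init__(self, kp, reset_time):
            self.kp = kp                # proportional gain
            self.t_reset = reset_time   # reset time, seconds
            self.filter_out = 0.0       # positive-feedback filter state
            self.last_output = 0.0      # previous controller output

        def update(self, setpoint, measurement, elapsed):
            """Call only when a new measurement arrives; `elapsed` is the
            time in seconds since the previous update."""
            error = setpoint - measurement
            # Advance the reset filter by the actual elapsed time.
            self.filter_out += (self.last_output - self.filter_out) * (
                1.0 - math.exp(-elapsed / self.t_reset))
            self.last_output = self.kp * error + self.filter_out
            return self.last_output
    ```

    Because the filter uses the actual elapsed time, the controller does not wind up between the irregular wireless updates, which is the property that makes the restructured algorithm suitable for slow, event-driven transmitters.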


     

    Terry Blevins Emerson Process Management terry.blevins@emerson.com
    115. Addressing Cycling Problems in Pulp & Paper Processes


    This presentation demonstrates the harmful effects of cycling in a production process. Oscillation can come from valve issues, tuning issues, process design, and operational procedures. With emphasis on detecting oscillation and determining its cause, several real-world examples are shown. Learn how to identify cycling issues, gauge their severity, and determine whether multiple oscillations are related.
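    One common way to detect the kind of cycling described above is to look for a significant peak in a loop variable's autocorrelation. The sketch below is a minimal, illustrative approach, not the presenter's method; the threshold value is an assumption.

    ```python
    def autocorrelation(x):
        """Normalized autocorrelation of a signal for lags up to len(x)//2."""
        n = len(x)
        mean = sum(x) / n
        var = sum((v - mean) ** 2 for v in x)
        return [sum((x[i] - mean) * (x[i + lag] - mean)
                    for i in range(n - lag)) / var
                for lag in range(n // 2)]

    def oscillation_period(x, threshold=0.5):
        """Return the lag (in samples) of the first autocorrelation peak
        above `threshold`, or None if the signal does not look periodic.
        A persistent oscillation produces a strong peak at its period."""
        ac = autocorrelation(x)
        for lag in range(2, len(ac) - 1):
            if ac[lag] > threshold and ac[lag] >= ac[lag - 1] \
                    and ac[lag] >= ac[lag + 1]:
                return lag
        return None
    ```

    Comparing the detected periods of several loops is also a quick first check on whether multiple oscillations are related: loops cycling at the same period are candidates for a common root cause.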


    Steven Obermann Metso Minerals Industries Inc, stobermann@comcast.net
    116. Tutorial: Diagnosing the Root Cause of Oscillations 


    Steven Obermann Metso Minerals Industries Inc, stobermann@comcast.net
    117. Wireless MPC application for DWC control

    Model Predictive Control (MPC) operation is based on a process model, which can conveniently be used in event-based MPC operation. The process model can estimate control parameters when a measurement fails or when lab sampling is used as a control parameter. Wireless control is another case in which event-based MPC operation, accommodating both slow sampling and randomness, is required. Wireless MPC operation has been validated on a Divided Wall Column (DWC) process, which can provide potentially huge savings in energy and capital cost compared to a conventional column design. In the installation at the University of Texas, WirelessHART transmitters take many of the process measurements used in control. This paper details event-driven wireless MPC operation and design and discusses the application challenges illustrated by the test results.

    Terry Blevins Emerson Process Management terry.blevins@emerson.com
    118. Big Data Improves Plant Safety

    Big Data is a groundbreaking technology for many industries, and every industry faces the challenge of how to implement it for maximum benefit. The presentation will address the basic components of a Big Data pipeline for the process industry: hardware and software infrastructure, data streaming, data preprocessing, and data learning techniques. The core of data learning is Data Analytics (DA), which has proven effective in process fault detection and quality prediction for both batch and continuous processes. The real prospect is that Big Data based on DA will be among the leading directions for improving process effectiveness. The presentation will address these major challenges for professionals working on Big Data for the process industry.


     

    Mark Nixon Emerson Process Management mark.nixon@emerson.com
    119. Applying ISA101 HMI Concepts to Existing HMI Applications

    This presentation focuses on upgrading existing HMI applications to use the concepts discussed in ISA101 and offers an implementation strategy for controls engineers with existing systems. Development of the upgraded HMI graphics may be taken in stages to allow for initial improvements to the system. Attendees will discuss how to present and champion HMI design standards, including Human Factors Engineering (HFE) principles, to system users, system owners, and plant managers.

    Michael Lennon Applied Control Engineering, Inc. lennonm@ace-net.com
    120. ISA101: From Philosophy to Operation

    The ISA101 standard provides an HMI lifecycle that includes developing system standards, design, implementation, and operation. A small DCS conversion project provided an opportunity to follow this lifecycle and apply a set of good practices for a successful HMI.

    Nicholas Sands Dupont Nicholas.P.Sands@dupont.com
    121. Using Procedural Automation to Standardize and Improve Operations in Continuous Processes

    Reducing the variation in how procedures are executed by using automation has been shown to improve operational efficiency and safety in many continuous processes, but challenges remained due to inconsistent design and application of automation. In 2010, driven by the needs of several large chemical and oil & gas companies, a new committee, ISA106, was formed and is working toward a standard for automating procedural operations in continuous process applications. This presentation provides an overview of the released ISA106 technical report, gives examples, and discusses where automated procedural steps have benefited continuous process applications in areas such as refining, chemical processing, and offshore oil production.

    Marcus Tennant Yokogawa marcus.tennant@us.yokogawa.com
    122. Measuring and Eliminating Stale Alarms

    In this session we’ll consider various aspects of a specific class of nuisance alarms: stale alarms. We’ll explore what current alarm management standards have to say about their definition, frequency quantification, and benchmark recommendations. Session participants are encouraged to contribute as we consider the severity of issues created by stale alarms, the typical reasons they happen, and various methods to eliminate them.
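    As a concrete illustration of quantifying stale alarms, the sketch below flags alarms that have remained active beyond a time threshold in a simple activation/return event log. The 24-hour cutoff and the log format are the author's assumptions for illustration, not a quotation from any standard.

    ```python
    from datetime import datetime, timedelta

    def stale_alarms(events, now, threshold=timedelta(hours=24)):
        """Return the tags of alarms still active longer than `threshold`.

        `events` is a list of (timestamp, tag, state) tuples, where state
        is 'ACT' (alarm activated) or 'RTN' (returned to normal). The
        24-hour default is an illustrative rule of thumb for 'stale'."""
        active = {}
        for ts, tag, state in sorted(events):
            if state == 'ACT':
                active[tag] = ts          # record latest activation time
            elif state == 'RTN':
                active.pop(tag, None)     # alarm cleared
        return sorted(tag for tag, ts in active.items()
                      if now - ts > threshold)
    ```

    Running such a query periodically against the alarm historian gives a simple stale-alarm count that can be trended over time as one input to the benchmarking discussion.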

    Kim Van Camp Emerson Process Management kim.vancamp@emerson.com
    123. Human Machine Interfaces In Harsh Environments: Why, Where and How

    Environmental conditions have a direct impact on the reliability and availability of industrial control systems. Not all of the components of an ICS operate in friendly circumstances. Extreme temperatures, water, humidity, dust, sunlight, vibration, electromagnetic emissions and susceptibility, and power transients will impact industrial operations.

    Most computers (industrial PCs, panel PCs, Linux terminals), human machine interfaces (HMIs), and operator interface terminals (OITs) are designed for normal environmental conditions. A subset of these devices is designed for deployment in harsh environments, including many chemical and petroleum applications.

    Why would or should a controls engineer or plant designer specify HMIs and computers that are often more expensive than their less rugged peers? Why not opt for the less expensive building blocks and hope those units perform better than expected? Why not take the position that typical components may not last as long as those designed for harsh environments, but the need to replace them may not come around that often?

    This session will explore how HMIs destined for punishing environmental conditions undergo a different design and engineering process than average components. It will look at what constitutes harsh industrial conditions and provide insight into when and where environmentally rugged HMIs make sense in particular applications and locations within an ICS.

    Jeff Hayes Beijer Electronics jeff.hayes@beijerinc.com
    124. ISA106: What it is and What it isn't

    Discusses development of the ISA106 standard, including what it is, what it isn't, and its current status. Also includes a look ahead at future activities for the committee.

    Bill Wray Covestro LLC bill.wray@covestro.com
    125. ISA101: What It Is and What It Isn’t – Status Update and Future Activities

    ISA-101.01, Human Machine Interface (HMI) for Process Automation Systems, was recently approved as an ISA/ANSI standard after several years of development and refinement. This presentation will give a brief outline of what IS in the standard and what ISN'T, along with some ideas on how it should be used. Finally, the presentation will give an update on next steps for the standards committee, including the setting up of three new working groups with the aim of developing technical reports to enhance the standard.

    Maurice Wilkins Yokogawa maurice.wilkins@us.yokogawa.com