Electronic Logic Gains Human Touch
Cognitive software boosts PLC capabilities
By Tom Keeley and Helena Keeley
Because human language is always open to interpretation, you can’t audit human decisions the way you can audit machine logic. Humans aren’t machines and don’t perform with the same level of consistency, even with today’s tools. Yet when a manufacturing process includes humans as supervisors and operators who make decisions as part of their responsibilities, the process can suffer.
The rate of technical advancements in programmable logic controllers (PLCs) has slowed over the past several years. PLC suppliers have consistently followed the trends of smaller, cheaper, and faster. They have pursued alternative architectures and repackaging alternatives to redefine solutions to the same problems they have been addressing for the past 30 years. Some in the industry have suggested holonic manufacturing and agent-based systems to address discontinuities in the manufacturing process. [Holonic manufacturing is a way of organizing a manufacturing system to meet the challenges of coordinating increasingly diverse demand with faster product development, using key elements (or holons) such as machines, work centers, plants, parts, products, persons, departments, or divisions that have autonomous and cooperative properties.]
In few cases have the programmable controllers taken on more of the management functions usually left to human operators and operation supervisors. This provides an opportunity to integrate judgmental reasoning directly into the programmable controller, where access to information is closer to the real-time environment. By placing judgmental reasoning at the programmable controller level, management can make quicker decisions.
You can directly integrate subjective decision-making models into conventional programmable controllers by using a new programming paradigm, without changing the controllers themselves. The programmable controllers can then take on qualitative judgments such as safety, risk, survival, profitability, and asset management in a way operations management can audit. Because the manufacturing process requires explainable and auditable designs, this approach provides a dynamic graphical and auditable language. If you add cognitive capabilities to programmable controllers, users will likely find new applications in totally new markets for PLCs.
Conventional PLC solutions
PLCs have evolved from relay ladder logic replacing physical relays to more complex discrete applications, including support for analog input/output (I/O). In the 1980s and 1990s, industry developed more custom interfaces, created network architectures, and developed new programming languages. The conventional application of PLCs in factory environments has dealt with discrete manufacturing processes. These applications have tended to focus on the dynamics of isolated control signals. Recently, the integration of motion control instructions into sequential logic systems has allowed development of more complex systems.
Manufacturing architectures have evolved from massive centralized control, to hierarchical tree models, to distributed levels of control, and now to flat distributed architectures over TCP/IP. In conventional alarm systems, the operator is solely responsible for diagnosis. The alarm patterns are situation dependent and offer no clues as to how to interpret the available data. The operator has to perform a complex inference process, combining measured data and alarms with their knowledge of process functions and properties. This bottom-up approach to diagnosis excludes explicit consideration of system information the designer knows, such as the purpose of subsystems. Conventional alarm systems are situation dependent, and their design requires specification of dynamic, multi-variable, non-linear, inter-related information sets.
To some extent, adding PC functionality and specialty cards supporting fuzzy logic and neural nets has provided platforms for addressing more complex system issues. With all of these developments, however, the focus has not significantly changed. The overall objective of PLC vendors has been to create control elements that can work together in a coordinated fashion. The responsibility of the PLC has remained unchanged, except that it is now also responsible for the concentration and distribution of messages in addition to its discrete logic control functionality. Little focus has been applied to automating the human-reasoning component of automation processes.
Humans in the loop
Humans play the part of adaptable machines where it appears impractical to automate the system. They’re like cost-effective robots performing repetitive tasks, which makes them the ultimate tools in flexible manufacturing today. In their other role as operators or supervisors, they make judgmental decisions about how the factory operates. They use their training and experience to monitor equipment operation and adjust parameters to tune the process. They control material flow and schedule maintenance, and they’re responsible for the overall performance of the systems under their control.
As operators or supervisors, humans’ subjective decisions could be the weak link in the manufacturing process. In fact, estimates attribute up to 90% of all workplace accidents to human error. Operators oversee complex systems with thousands of inputs and outputs, generating massive amounts of data. Numerous tools attach to these systems to display system status and warn human operators of problems that may need attention, but it is still left to humans to make the appropriate judgments to control the systems. Because some of the more critical issues occur infrequently, human inattention or error could cause significant problems for the enterprise.
While you can control processes and duplicate machines to build consistent products, you can’t duplicate perfect human models, at least not yet, to best manage or supervise production lines. Similarly, since you can build machines according to well-understood plans, you can improve the designs over time so they perform better. With humans, even with the best training techniques, industry starts the process over with every operator.
PLCs control devices in high-risk environments, so the logic used to control those devices must be explainable and auditable. The same rule should apply to humanlike judgmental reasoning in these environments: should a problem arise, you should be able to review the design and diagnose any action. Human language cannot describe why a judgmental action was taken at a given instant with the level of detail industry expects from a machine.
Electronic logic to the rescue
Knowledge technologies act as a collection of services for embedding humanlike reasoning in devices and software applications. They are an option for manufacturing systems that require judgmental decisions or subjective actions as part of the process. In humans, the left brain is associated with numbers, language, and fixed rules; the right brain is responsible for images, interpretation of information, and subjective judgments. Using electronic knowledge technology is like adding a right-brain component to a PLC’s left-brain component. It incorporates a graphical methodology for defining humanlike reasoning that maps closely to the process the right brain uses to evaluate information.
Manufacturing systems need to process conventional hard-coded logic to address the deterministic situations in any control system. These same systems can benefit from judgmental reasoning when they have to adapt to changing demands from either component failures or changing business drivers. Knowledge technology can work well when you can automate judgmental decisions.
You can create a reasoning model off-line, just like the primary PLC program, by using a graphical language to define the reasoning policies. The size of a graphical element defines the importance of its information. Wires or lines indicate relationships between information items when one decision or action impacts another. A human expert must still define the rules. During development of the reasoning model, the designer tests the logic by applying simulated inputs and observing outputs or graphing the interaction between various items. Because the language is dynamic, the designer gets immediate feedback on the changing importance of information and the dynamic relationships, and can watch the system balance alternatives and produce the relative (analog) output control signals that define the relative decisions.
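The article does not specify how such a model is represented internally; as a rough illustration only, the graphical elements can be pictured as weighted information items wired to candidate actions. The item names, weights, and readings below are hypothetical, not part of any vendor's product:

```python
# A minimal sketch of a judgmental reasoning model, assuming a simple
# weighted-evidence scheme. All names and numbers are illustrative.

class InfoItem:
    """One piece of information; 'weight' plays the role of graphical size."""
    def __init__(self, name, weight):
        self.name = name
        self.weight = weight
        self.value = 0.0  # normalized reading in [0, 1]

class Action:
    """A candidate action; 'wires' are (item, influence) pairs."""
    def __init__(self, name, wires):
        self.name = name
        self.wires = wires

    def support(self):
        # Relative (analog) output: weighted sum of connected items.
        return sum(item.weight * item.value * influence
                   for item, influence in self.wires)

# Simulated inputs let the designer test the policy off-line.
temp = InfoItem("bearing_temperature", weight=0.8)
backlog = InfoItem("order_backlog", weight=0.5)

slow_down = Action("slow_line", [(temp, +1.0), (backlog, -0.4)])
speed_up = Action("speed_line", [(temp, -1.0), (backlog, +1.0)])

temp.value, backlog.value = 0.9, 0.3   # hot bearing, light backlog
best = max([slow_down, speed_up], key=lambda a: a.support())
print(best.name)  # → slow_line
```

Feeding different simulated readings into the items and re-ranking the actions is the off-line test loop the paragraph describes.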
Humans address complex problems by considering different actions related to different inter-related issues, while simultaneously considering how each action might influence the others. They balance potential actions for as long as seems appropriate, until they determine the best collective course of action. When there is no time penalty, they may stop thinking about the solution once they feel they’ve found the best one. When there is no apparent value in making a decision, they may continue to think or worry about the situation, or simply ignore the problem. Knowledge-enhanced technology accomplishes this balancing during a cognitive cycle; at the end of the cycle, it is ready to make a decision. With such tools, manufacturers could explain and audit these decisions by reviewing the logic in the graphical language without reverting to complex hard-coded mathematical solutions. The language is explicit; once an audit indicates a required change, you can correct and extend the reasoning model as needed.
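One plausible way to model this balancing, purely as a sketch, is repeated mutual adjustment: each action's support shifts the others' until the values settle, at which point the cycle ends and a decision is made. The action names and influence numbers here are invented for illustration:

```python
# A sketch of a "cognitive cycle", assuming balancing can be modeled as
# iterated mutual influence between candidate actions until values settle.

def cognitive_cycle(actions, influence, steps=100, tol=1e-4):
    """actions: name -> base support.
    influence: (a, b) -> how much action a's support shifts action b."""
    values = dict(actions)
    for _ in range(steps):
        new = {}
        for b, base in actions.items():
            shift = sum(values[a] * w
                        for (a, bb), w in influence.items() if bb == b)
            new[b] = base + shift
        settled = all(abs(new[k] - values[k]) < tol for k in values)
        values = new
        if settled:
            break  # balanced: the cycle is ready to make a decision
    return max(values, key=values.get), values

decision, balanced = cognitive_cycle(
    {"vent": 0.6, "shutdown": 0.5},
    {("vent", "shutdown"): -0.3,   # venting reduces the case for shutdown
     ("shutdown", "vent"): -0.2},  # shutting down reduces the case for venting
)
print(decision)  # → vent
```

With no time penalty the loop runs to convergence; a real-time variant would simply cap `steps` at whatever the scan cycle allows.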
Today’s PLCs support techniques for parallel processing. Sequential function charts are one way to synchronize processes. Using this approach, you could accomplish the task with the knowledge-enhanced engine in one side of the chart while accomplishing the I/O in another.
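As a loose analogy for those simultaneous SFC branches, assuming nothing about any particular PLC runtime, one thread below plays the I/O branch and another plays the reasoning engine, synchronized through a queue. The threshold and values are illustrative:

```python
# A rough analogy to simultaneous branches in a sequential function chart:
# one branch services I/O while the other runs the reasoning engine.
import queue
import threading

readings = queue.Queue()
decisions = []

def io_branch():
    # Stand-in for the scan loop posting fresh input values.
    for value in [0.2, 0.7, 0.9]:
        readings.put(value)
    readings.put(None)  # sentinel: end of scan data

def reasoning_branch():
    # Stand-in for the knowledge-enhanced engine judging each reading.
    while True:
        value = readings.get()
        if value is None:
            break
        decisions.append("alarm" if value > 0.8 else "normal")

t_io = threading.Thread(target=io_branch)
t_reason = threading.Thread(target=reasoning_branch)
t_io.start(); t_reason.start()
t_io.join(); t_reason.join()
print(decisions)  # → ['normal', 'normal', 'alarm']
```

The blocking queue stands in for the synchronization point where the two SFC branches rejoin.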
One of the functional requirements of a human-supervised manufacturing system is to be able to address goals. In a knowledge-enhanced electronic system, these strategic goals are contributing factors to actions. These strategic values adjust the rules so when the system balances itself, the strategic goals participate in the decision-making process. Humans develop their own values and have their own personal objectives, which they might or might not reveal. Some of these objectives may be counter-productive to the goals of the enterprise: wealth, junk food, the desire to play rather than work.
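Continuing the same sketch, strategic goals can be pictured as biases that adjust each action's support before the system balances itself. The goal names, priorities, and alignment values are hypothetical:

```python
# A sketch of strategic goals as contributing factors, assuming goals
# simply bias the support for each action. All names are illustrative.

def apply_goals(supports, goals, alignment):
    """supports: action -> raw support; goals: goal -> priority;
    alignment: (goal, action) -> how well the action serves the goal."""
    return {a: s + sum(goals[g] * alignment.get((g, a), 0.0)
                       for g in goals)
            for a, s in supports.items()}

raw = {"run_fast": 0.5, "run_safe": 0.5}            # tied on the evidence
goals = {"profitability": 0.3, "safety": 0.6}       # enterprise priorities
alignment = {("profitability", "run_fast"): 1.0,
             ("safety", "run_safe"): 1.0}

adjusted = apply_goals(raw, goals, alignment)
print(max(adjusted, key=adjusted.get))  # → run_safe
```

Unlike a human operator's private objectives, these biases are explicit in the model, so an audit can see exactly how each goal tipped a decision.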
Manufacturing systems constructed of devices with human-defined rules make it possible to create systems with well-understood and documented rules. It is possible to build systems that consider their own safety or survivability as part of the policies. In this way, PLCs can coordinate operation of work cells according to policies the manufacturing engineering function develops.
About the Authors
Tom Keeley is president of Compsim LLC in Brookfield, Wisc. (www.compsim.com). Helena Keeley is Compsim’s chief executive.