Testing, comparing industrial Ethernets
One of NIST's long-term goals for this project is to develop standardized methods to measure industrial Ethernet performance metrics.
By James Gilsinn and Freemon Johnson
Ethernet is now working in a wider variety of industrial devices and applications.
Industrial applications and systems require deterministic operations that traditional Ethernet and the Transmission Control Protocol/Internet Protocol (TCP/IP) suite did not originally support.
A standardized way to describe and test industrial devices is necessary to help users characterize the performance of their software and hardware applications.
The Manufacturing Engineering Laboratory (MEL) of the National Institute of Standards & Technology (NIST) has been working to develop a set of standardized network performance metrics, tests, and tools since 2002.
NIST is working on developing an open-source test tool, called Industrial Ethernet Network Performance (IENetP), to aid vendors in characterizing the performance of their devices. The IENetP test tool will be capable of conducting a full series of performance tests and reporting the results to the user.
The current version of the software is capable of analyzing network traffic and producing statistics and graphs showing the network performance of a device.
Once upon a time a problem
While Ethernet and the TCP/IP suite are inherently non-deterministic protocols, it is possible to use them for real-time industrial networks.
The development of high-speed Ethernet interfaces, switched network infrastructures, and specialized TCP/IP network stacks has allowed a multitude of industrial Ethernet protocols to operate in the millisecond range.
The large variety of different protocols and vendors has caused end users to ask many questions, including:
Which industrial network performs better for my application?
Which vendor’s products will satisfy my given requirements?
How will a particular device perform compared to another?
How does one performance metric compare to another?
How well will a particular product work in my control system?
Defining performance characteristics of industrial Ethernet applications and devices is analogous to comparing the performance of automobiles. How would one rate the performance of an automobile? Choosing the type of vehicle is one of the first steps when choosing an automobile, since the performance metrics are quite different for each type.
Does one’s application call for a sports car, an economical commuter car, a large pickup truck, or a minivan? Once we decide on the type of vehicle, it is necessary to compare vendors and choose which one has the best performance characteristics. For a sports car, horsepower, 0-to-60 mph time, and cornering ability each describe a different aspect of performance.
The weighting that one places on each of those metrics depends on their application. The same idea applies to the industrial control system workspace.
Having a standardized way to measure the performance metrics also aids end users. Using an automobile example again, one standardized metric and method designed by the U.S. Department of Transportation is the fuel economy test. It uses a standardized series of tests and produces two commonly known metrics: city and highway fuel economy.
These metrics allow buyers to compare vehicles from multiple vendors and determine how well each meets the requirements of their particular application. One of NIST’s long-term goals for this project is to develop standardized methods to measure industrial Ethernet performance metrics.
NIST occupies unique position
The mission statement of NIST is “to promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life.”
MEL’s mission is specifically to “promote innovation and the competitiveness of U.S. manufacturing through measurement science, measurement services, and critical technical contributions to standards. Developed collaboratively with our external partners in industry, academia, and other government agencies, MEL measurement and standards solutions allow our customers to overcome barriers to product and process innovation, to share manufacturing information seamlessly and accurately, and to take full advantage of the latest technologies essential to their competitiveness and future success.”
These broad mission statements give NIST and MEL the authority to investigate standards that help U.S. industries with respect to their long-term objectives. NIST and MEL promote standardization for the U.S. industry, which can easily benefit the entire industrial community.
NIST is in a unique position to provide a standardized approach for industrial Ethernet network performance metrics, tests, and tools. NIST can take a wide view of industrial Ethernet performance and focus on what will provide the most meaningful metrics for both the vendors and the end users. NIST has no direct affiliation with any particular group or technology, which is why NIST is in a unique position to lead this effort.
Industrial Ethernet at NIST
NIST began looking at industrial Ethernet network performance in 2002. Industrial Ethernet was an emerging technology, and there was no common way to describe the performance of different devices. Due to the prevalence of Common Industrial Protocol-based networks in the U.S. auto manufacturing industry and existing relationships with those organizations, NIST chose to investigate EtherNet/IP (Ethernet Industrial Protocol) for its initial efforts.
This led NIST to join the EtherNet/IP Implementers Workshop series, part of ODVA (Open DeviceNet Vendors Association), to learn more about the network and promote the idea of network performance metrics and tests. The workshop series provides an open forum for vendors to discuss topics related to implementing EtherNet/IP and promote interoperability between the vendors.
Large portions of the workshop’s efforts aim at developing a set of interoperability recommendations and testing those recommendations at “PlugFests.” The PlugFests happen twice per year and allow vendors to bring their products and engineers to one location to see how well their devices interoperate with other vendors’ products in a collaborative environment.
In late 2005, NIST and the U.S. Council for Automotive Research (USCAR) signed a memorandum of agreement and formed the U.S. Alliance for Technology and Engineering for Automotive Manufacturing. The main focus of this effort is to improve the manufacturing processes used by the members of USCAR in order to reduce their costs and the costs of their first- and second-tier suppliers.
In 2006, NIST and ODVA formed a collaborative research agreement to develop a software test tool ODVA could use as the basis for a commercial laboratory capable of conducting fee-for-service performance testing on EtherNet/IP devices. NIST handed over the test tool to ODVA at the end of 2007, and ODVA started their performance testing service in 2008. The ODVA performance testing service provides a way for vendors to certify the performance metrics for their devices; however, it does not provide a way for vendors to obtain performance characteristics during their development lifecycle.
NIST is continuing its research into industrial Ethernet performance by developing the IENetP test tool. The test tool is freely available to anyone, allowing vendors to conduct performance testing on their devices at any stage of development and under various conditions.
Performance testing method
There are two types of communication methods in most industrial Ethernet devices:
Publish/subscribe or peer-to-peer
Command/response or master/slave
For publish/subscribe or peer-to-peer communications, two or more devices communicate with each other in some way that the devices themselves negotiate. This may be at an understood rate or at some predetermined condition.
For example, device A wants to get a digital input value from device B at a rate of 20 times a second. Device A sends a message to device B requesting the particular value and specifies the particular rate. Device B can accept this request or deny it based on its configuration. If device B accepts the request, then it starts sending messages to device A, and possibly other devices, 20 times a second. Other than the initial request, device A does not dictate when device B will send its messages. The true rate at which the messages transmit depends solely on device B’s internal hardware and software architecture.
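The negotiation above can be sketched in a few lines of Python; the class and method names here are hypothetical illustrations, not part of any real protocol stack.

```python
class Publisher:
    """Sketch of a publish/subscribe device (device B); hypothetical API."""

    def __init__(self, max_rate_hz=50):
        self.max_rate_hz = max_rate_hz  # fastest rate this device supports
        self.subscribers = {}           # subscriber -> accepted rate (Hz)

    def request_subscription(self, subscriber, rate_hz):
        # Device B accepts or denies the request based on its own configuration.
        if rate_hz <= self.max_rate_hz:
            self.subscribers[subscriber] = rate_hz
            return True   # accepted: B now pushes values at rate_hz on its own
        return False      # denied: A must retry, perhaps at a slower rate


# Device A asks device B for a digital input value 20 times a second.
device_b = Publisher(max_rate_hz=50)
accepted = device_b.request_subscription("device_a", rate_hz=20)
# After acceptance, the true transmit rate depends only on device B.
```

Once the request is accepted, device A plays no further role in the timing, which is why the variability of the stream reflects device B's internals alone.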
For command/response or master/slave communications, two devices communicate with each other based on how the commander or master device dictates. Responder or slave devices can be relatively inexpensive and unintelligent, since their sole purpose is to process commands and respond back. Following the prior example, if device A wants to get a digital input value from device B at a rate of 20 times a second, device A sends a message to device B 20 times a second for that particular value. Device B responds back with the value as quickly as it can. The true rate at which the messages are sent depends on both device A’s and device B’s internal hardware and software architectures and the network connecting the two devices.
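By contrast, a command/response exchange can be sketched as a polling loop in which the achieved cycle time depends on the slave's responsiveness as well as the master's requested rate; all names here are hypothetical illustrations.

```python
def poll_slave(read_input, rate_hz, cycles):
    """Sketch of a master (device A) polling a slave (device B).

    read_input stands in for one command/response round trip and returns
    (value, response_delay_s). If the slave responds slower than the
    polling interval, the effective cycle time stretches beyond it.
    """
    interval = 1.0 / rate_hz
    results = []
    for _ in range(cycles):
        value, response_delay = read_input()
        cycle_time = max(interval, response_delay)  # both devices matter
        results.append((value, cycle_time))
    return results


# A slave that answers in 10 ms can sustain a 20 Hz (50 ms) poll rate.
fast_slave = lambda: (1, 0.010)
achieved = [t for _, t in poll_slave(fast_slave, rate_hz=20, cycles=3)]
```

A slave that needs 80 ms per response would stretch every cycle to 80 ms, illustrating how the measured rate depends on both ends of the link.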
Based on these two types of communication methods, two main performance metrics emerge: cyclic frequency variability/jitter and latency.
When communicating at an understood rate, the ability of the devices to maintain the desired message rate is extremely important. Control loops based on this type of communication count on the message streams maintaining the desired rate.
Control systems theory states the communications used in a control loop should operate at least twice as fast as the overall loop; however, this is not always the case in practice.
For tightly coupled control loops that operate at or near the same rates, variability or jitter in the packet interval may affect the system’s performance in unintended ways.
When responding to a particular command or pre-determined condition, the ability for a device to process the command or condition quickly is most important. An unexpected delay or latency in the response message coming from the device may seriously affect the system’s performance behavior.
Real-time EtherNet/IP typically uses a form of publish/subscribe communications with two parallel streams of traffic, each flowing in the opposite direction. For EtherNet/IP, the desired packet rate is the Requested Packet Interval (RPI). When a device is requested to produce network traffic at a particular RPI, it is required to send back an Accepted Packet Interval (API) to the requester. This API value represents the agreed-upon rate at which each device expects to receive network packets for that particular traffic stream. Most devices use the same API for incoming and outgoing real-time network streams, even though the EtherNet/IP specification does not require it.
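The RPI-to-API negotiation might be sketched as follows. The acceptance policy shown is an assumption for illustration only; each real EtherNet/IP device implements its own rules, and the actual exchange happens in the protocol's connection messages.

```python
def negotiate_api(requested_rpi_ms, supported_rpis_ms):
    """Return an Accepted Packet Interval (API) for a requested RPI.

    Hypothetical policy: accept the requested RPI if the device supports
    it; otherwise answer with the closest supported interval that is no
    faster than the device can produce.
    """
    if requested_rpi_ms in supported_rpis_ms:
        return requested_rpi_ms
    slower = [r for r in sorted(supported_rpis_ms) if r >= requested_rpi_ms]
    return slower[0] if slower else max(supported_rpis_ms)


# A device supporting 10, 20, and 50 ms intervals is asked for 15 ms;
# under this policy it answers with a 20 ms API.
api = negotiate_api(15, [10, 20, 50])
```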
The performance test system uses network capture files to verify the device under test (DUT) maintains its agreed API. The measured packet interval (MPI) is the interval at which the test system actually receives packets from the DUT.
Connect, measure metric
The basic methodology for the IENetP test system is fairly simple, regardless of the metric being measured. The process and test tool engine does not have to change to suit a particular metric, background traffic, or analysis method in the test. The following is a procedural listing of the basic methodology used by the performance test system:
1. Begin recording network traffic.
2. Establish a connection with the device under test (DUT).
3. Begin transmitting background network traffic, based on the particular test conditions.
4. Wait for a given amount of time.
5. Stop transmitting background network traffic.
6. Close the connection with the DUT.
7. Stop recording network traffic.
8. Analyze the network traffic capture, and report the results.
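The steps above can be sketched as a single orchestration routine; every helper called on `tester` is hypothetical, since the current IENetP tool performs only the final analysis step.

```python
def run_performance_test(tester, dut_address, duration_s):
    """Sketch of the eight-step test procedure (hypothetical helpers)."""
    tester.start_capture()               # 1. begin recording network traffic
    tester.connect(dut_address)          # 2. establish a connection with the DUT
    tester.start_background_traffic()    # 3. begin background traffic
    tester.wait(duration_s)              # 4. wait for the test duration
    tester.stop_background_traffic()     # 5. stop background traffic
    tester.disconnect(dut_address)       # 6. close the connection
    capture = tester.stop_capture()      # 7. stop recording network traffic
    return tester.analyze(capture)       # 8. analyze the capture, report results


class StepLogger:
    """Stub tester that just logs each step, so the sketch is runnable."""

    def __init__(self):
        self.steps = []

    def __getattr__(self, name):
        def step(*args):
            self.steps.append(name)
            return self.steps if name == "analyze" else None
        return step


report = run_performance_test(StepLogger(), "192.0.2.10", duration_s=60)
```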
The current version of the IENetP test tool is not capable of communicating directly with the DUT, capturing traffic, or issuing background traffic. The current test tool is primarily a data analysis tool and performs only step 8 of this methodology.
Future versions of the software will incorporate a greater portion of this methodology. Until the IENetP test tool is capable of communicating with the DUT directly, NIST plans to produce a recommended testing procedure that requires specific background traffic types and amounts. The user is responsible for transmitting the background traffic on the network with the current version of the tool.
Flexible test of system
The performance test system is, by design, extremely flexible, thus allowing the user to determine the performance metrics for their desired application. The test system can be as simple as attaching a crossover Ethernet cable between the tester and the DUT, or it could be as complex as a large set of infrastructure devices between the tester and the DUT.
When testing the performance for one particular device, it is important to isolate the device from a network to remove any latency introduced by other infrastructure devices. That is why it is best to attach the test system directly to the DUT to keep the latency to the absolute minimum.
When using a wireless DUT, it may be necessary to use a wireless access point or other network hardware to connect to the DUT unless the tester has a wireless interface. When trying to analyze the performance of a system, one may split the test system into two time-synchronized devices, although there is no requirement to split the functions for test systems with enough network ports. Network taps are preferable here since they introduce none of the collisions or latency that a conventional network hub might.
The IENetP test tool currently supports only data analysis. The data analysis method used in the most recent version of the test tool is a distribution analysis of the cyclic frequency variability/jitter of the MPI, calculating the following values: minimum, maximum, mean, standard deviation, skewness, and kurtosis. The objective is to measure how well the DUT adheres to its configured RPI/API value while operating in a variety of network conditions.
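As a rough sketch of this distribution analysis, the six statistics can be computed directly from a stream's arrival timestamps. The function below is illustrative only and is not the IENetP tool's actual code; the skewness and kurtosis formulas are the standard population moments.

```python
import statistics

def jitter_statistics(timestamps_s):
    """Distribution analysis of the measured packet interval (MPI).

    timestamps_s: arrival times (seconds) of one real-time traffic
    stream, as extracted from a network capture.
    """
    intervals = [b - a for a, b in zip(timestamps_s, timestamps_s[1:])]
    mean = statistics.fmean(intervals)
    stdev = statistics.pstdev(intervals)
    n = len(intervals)
    # Skewness and kurtosis describe the asymmetry and "tailedness"
    # of the interval distribution around the mean.
    skew = sum((x - mean) ** 3 for x in intervals) / (n * stdev ** 3) if stdev else 0.0
    kurt = sum((x - mean) ** 4 for x in intervals) / (n * stdev ** 4) if stdev else 0.0
    return {"min": min(intervals), "max": max(intervals), "mean": mean,
            "stdev": stdev, "skewness": skew, "kurtosis": kurt}


# A nominally 20 ms stream with about 1 ms of jitter on some packets:
stats = jitter_statistics([0.000, 0.020, 0.041, 0.060, 0.079, 0.100])
```

Comparing `stats["mean"]` against the configured RPI/API, and watching the spread and tail statistics, shows how closely the DUT holds its agreed rate.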
The objective of the IENetP test tool is to provide meaningful data to the vendor without requiring in-depth knowledge of statistical analysis. The user interface will allow the vendor to specify a myriad of options to process and display data in various ways while the tool handles the underlying statistical analysis. The IENetP test tool will give users the flexibility to generate customized reports that are relevant and meaningful for their device.
Over the horizon
The basic methodology and capabilities of the IENetP test tool have not changed, in principle, since they were first developed in 2005 during work on the ODVA testing laboratory. Performance metrics, mathematical analysis methods, and networks beyond EtherNet/IP have not yet been investigated. NIST is planning to release additional versions of the IENetP test tool to add these types of functionality.
NIST released the first version of the IENetP test tool in March 2009. The software is functional but still missing many capabilities. Version 2.x of the software will focus on adding mathematical analysis methods and performance metrics. While NIST is planning to improve the mathematical analysis methods, the test tool will hide the complexity of the calculations by presenting the user with easy-to-understand, comparable data. The next major performance metric to investigate is latency, which will allow the test tool to analyze a larger number of industrial networks and communication protocols. Version 3.x of the software will focus on industrial Ethernet protocols other than EtherNet/IP, such as Modbus/TCP, ProfiNet, Foundation fieldbus HSE, ISA-100.11a, IEEE 802.11/WiFi, and ZigBee.
Later versions of the IENetP test tool will move from an analysis tool to an active testing tool. This will require the test tool to communicate directly with the DUT and capture network traffic without additional intervention from the user or any extra hardware or software assistance.
ABOUT THE AUTHORS
James Gilsinn (email@example.com) is an electronics engineer at NIST/MEL. He is an ISA Senior Member. Freemon Johnson (firstname.lastname@example.org) is a computer engineer at NIST in Gaithersburg, Md.
InTech Market Study
Ethernet continues global growth trend
Users feel Ethernet works; future looks strong
By Gregory Hale
Ethernet continues to grow and help manufacturers gain a stronger foothold on a global basis, according to an InTech magazine survey.
The manufacturing realm has become more global, and 75% of survey respondents feel Ethernet has enabled their company to compete on a more worldwide basis. That compares with 68% last year.
InTech conducted a web Zoomerang survey among readers seeking a snapshot of what automation professionals in the industry are thinking about Ethernet. While this survey is not statistically rigorous, it does give an anecdotal view of what is on readers’ minds.
When asked if their manufacturing facility uses Ethernet, 92% said yes, while 8% said no.
Of the various protocols used, 46% said EtherNet/IP, while 33% said Modbus TCP/IP. The rest came in significantly below with Profinet at 7% and FF-HSE coming in at 5%. EtherCAT scored 2%, and CC-Link IE and SERCOS III each had 1%. “Other” came in at 5%.
On a global basis, everyone has a perception of various industrial Ethernet types. EtherNet/IP came in as the most popular with 86% saying they had a positive perception, 4% said it had a negative perception, and 10% were not aware of the offering. Modbus TCP/IP came in a close second with 84% having a positive perception, 11% negative, and 5% not aware. Profinet was next at 54% positive, 20% negative, and 26% not aware. FF-HSE was at 39% positive, 14% negative, and 47% not aware. EtherCAT came in at 25% positive, 8% negative, and 67% not aware. SERCOS III was at 16% positive, 12% negative, and 72% not aware; and CC-Link IE 10% positive, 7% negative, and 83% not aware.
A cyber eye
Competing across the globe opens up the company for cyber predators. With that in mind, 78% of respondents said they do have a cyber security plan in place, while 22% said they do not. That compares to last year where 74% said they had a plan.
In keeping with that thought, 66% said they remain satisfied with the level of security for their Ethernet system, while 34% said it could use more work.
When asked if their company uses an analog DCS, fieldbus, or fast Ethernet, the quick answer was 53% said they used a combination of these, while 31% said Ethernet, 11% said DCS, 5% said Fieldbus, and 2% said other. That compares to last year when 44% of respondents said they used a combination of the three. Also last year, Ethernet came in at 33%, DCS came in at 15%, and fieldbus at 6%.
Is anyone ever satisfied with their plant offerings? Well, 79% said they were happy with the present plant floor network and or fieldbus, while 21% said no they were not. That compares almost exactly to last year when 80% said they were satisfied.
Plans call for 34% of respondents to use Ethernet at the control level, while 31% will use it at the enterprise, 25% at the I/O level, and 10% at the device/sensor.
When asked what applications do you use or plan to use Ethernet, 25% said SCADA, 21% said continuous processing, 16% said maintenance, 11% said batch processing, 10% said discrete processing and machine control each, 5% said motion and/or robotics, and 1% said they do not or will not use Ethernet.
Obviously, Ethernet is becoming more vital in everyday plant operations as 85% of respondents said it is important to their plant operation that Ethernet operates at the plant level; 15% said it was not.
It looks like things will get busy internally at manufacturers. When asked who would add the Ethernet function at their plant if they decided to install the technology, 45% of respondents said in-house expertise would handle the work, 30% said a system integrator would handle the chores, and 13% said a vendor would take care of it; 12% said there was no need to worry because they were not planning any work.
The time frame for adding the Ethernet function to their plant looks to fall into next year, with 46% giving the thumbs up; however, 23% added the second half of this year would work out just fine.
Having said all of that, is there potential for future Ethernet growth? The resounding answer to that question is yes, with 77% giving a positive nod.