
Ethernet and Determinism

by: Michael Roche, Principal Network Applications Engineer, Schneider Electric


There have been vigorous debates about Ethernet and determinism when Ethernet is
compared with other fieldbuses that are widely accepted in industry to be deterministic.
Some view Industrial Ethernet systems as non-deterministic when compared with those
proprietary Industrial fieldbus systems. We will show that Industrial Ethernet is, in fact,
deterministic under some common conditions.
First, though, we must consider the definition of determinism. A deterministic system is
regarded as predictable, producing a calculable, consistent response time between two
end devices. The application response time (ART) requirement in a deterministic system
varies with each application. For example, a wastewater system may be a deterministic
system producing a consistent and predictable application response time of 500 ms,
whereas a CNC motion system may require an application response time of 1 ms. By
the same logic, any annual holiday is deterministic, as it falls on a specified calendar
day.
These examples illustrate that determinism is a function of each particular application,
with varying customer and process requirements. The essence of determinism is the
predictability and consistency of each individual operation, as suits the application
requirements at hand.
Many systems boast of determinism, but examined closely, Ethernet can stack up
against the best of them. Consider a Distributed I/O (DIO) system, where slave I/O is
controlled by a master PLC. Loss of communications in a DIO system results in losing
control of your process. Any delays due to network modifications that change the
expected DIO ART performance can cause problems and require unanticipated logic
and timing changes, just because you added some devices or another cable segment.
While each fieldbus has its baggage, Ethernet offers the best deal in terms of growing
your process and factory, with less pain and fewer concessions. Other fieldbuses make
you pay a steep price through node and device count limitations.
Determinism in a DIO system can be considered in one of two ways. The first is system
determinism, which includes the PLC logic solve and component servicing, the
network interface transport, and the response from an I/O device. System determinism
can also be thought of as Application Response Time (ART): the amount of time it takes
for an input on an I/O device to turn on, for the change of state to be detected by the
PLC, and for the PLC logic to solve that change of state and write the logical output to
the I/O device, as shown in Figure 1. ART includes several basic components:

- CPU scan time and backplane module/communications processing
- Packaging and transport from the PLC network interface
- Transmission over the network or fieldbus
- Receipt and processing by the I/O device
- Response time for the I/O device to present a new confirmed value to the
  originating PLC, and for the PLC logic to process the change

The overall ART can also vary slightly based upon CPU scan drift, due perhaps to online
programming activity, burst processing requests from devices such as HMIs or
SCADA, process timing (exactly when in the CPU scan cycle the request was
received), and jitter on the network from device additions/deletions or periodic burst
communications activity. In most cases, it will take more than one CPU scan to resolve
the I/O input change and respond with an output message request, regardless of the
deterministic fieldbus used. We can conclude, therefore, that even the most deterministic
systems experience timing drift from one transaction to another based upon very slight
deviations. With a deterministic fieldbus, this drift is typically within the range of 1 or 2
milliseconds. As we will see with conditioned Ethernet as a fieldbus, even a one millisecond
drift is an unusual deviation, due to the speed, precision, and evolved features of Ethernet.
This article will examine the network transmission component only, and compare the
functionality of Ethernet with other fieldbus systems widely regarded as deterministic.
After all, response time drift due to non-periodic PLC CPU scan and the other noted
factors is inherent in any automation system, regardless of the fieldbus used.
Network Determinism can be defined as the calculable and consistent transport of
automation application messages from the interface on one end device to the interface
of another end device. Network determinism refers specifically to the transport between
these device interfaces.
Fieldbus Comparison
Most Distributed I/O (DIO) fieldbus systems regarded as deterministic have been logical
ring/physical bus token passing networks like Profibus, Modbus Plus, and others. Using
Modbus Plus DIO as an example, the time to transmit a message request or response
can be calculated, and there is assurance that the PLC request or I/O device response

transmission time will be consistent. Most deterministic networks use token passing as a
way of guaranteeing that each device on the network has an opportunity to transmit and
receive at a predictable interval. Token passing networks can limit the amount of time
any single device can hold onto the token to ensure that access by all devices is granted.
The amount of time it takes the token to be passed from an originating point, through all
devices and back to the origin is the token rotation time.
The token rotation time will grow as the number of devices increases or the amount of
data increases, because the token is passed sequentially for distribution to all devices.
In other deterministic networks such as Profibus, the transmission time will increase due
to reduced transmission speed as the network distance lengthens beyond certain
network length boundaries. Though in these cases, the overall transmission time may
increase with the amount of data, devices or distance, the fieldbus itself is considered
deterministic due to the calculable and consistent network transport message delivery
time between any two given nodes.
Because Ethernet was originally a bus contention network by design, it was dismissed
as arbitrary, non-deterministic, and unsuitable for many Industrial applications.
CSMA/CD Ethernet could produce varying message transmission times, due to the
MAC layer collision backoff algorithm's retransmission timers, and excessive collisions
could cause a message to be discarded at the MAC layer entirely, leaving Ethernet to
rely on higher layer protocols such as TCP, or the application itself, to retransmit the
message. This was a formidable obstacle for Ethernet in competing as a deterministic
fieldbus against the established benchmark deterministic fieldbuses.
As Ethernet has evolved, however, obstacles such as network access and bus
contention have been removed. Since the advent of Ethernet switching, introduced by
Kalpana in the early 1990s, and the IEEE 802.3x full duplex standard, collisions and bus
contention have been solved. Operating in full duplex, any device on the Ethernet
network can transmit and receive simultaneously at any time without risk of collisions.
In full duplex operation, Ethernet CSMA/CD collision detection is not required and is
thus disabled.
Comparing full duplex Ethernet against the variables of traditional token passing
deterministic fieldbuses, such as the amount of data, number of devices, and distance,
Ethernet offers several advantages. With regard to the number of devices, Ethernet has
no practical limit on the number of end devices, thanks to IP subnetting. For example, a
Class A network, with an 8 bit subnet mask leaving 24 host bits, offers over 16.7 million
node addresses, which we can agree is an impracticable subnet size, though it remains
a mathematical possibility.
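That figure follows from simple arithmetic; a minimal Python sketch (the function name is ours, for illustration):

```python
def usable_hosts(host_bits: int) -> int:
    """Usable host addresses for a subnet with the given number of host
    bits, subtracting 2 for the network and broadcast addresses."""
    return 2 ** host_bits - 2

# A Class A network (/8 mask) leaves 24 host bits:
print(usable_hosts(24))  # 16777214 -- over 16.7 million nodes
```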
Also consider that with point-to-point message distribution, each node is able to
communicate directly with any other node; therefore, the transport of a message query
or response is not appreciably affected by the number of devices on the network,
because there is no sequential message distribution in switched Ethernet as there is in
a token passing topology. A logical ring fieldbus using token passing must send the
token sequentially to each device, and therefore the transport time increases as the
number of devices increases.
In the brief comparison above, Ethernet compares very favorably with the established
fieldbuses; however, there are also threats to Ethernet transport determinism. Next we
will examine those threats, and the methods that can be employed to neutralize them in
a properly architected Industrial Ethernet network.
In a proprietary fieldbus/token passing system, traffic is generally confined to specific
message types and a sequential, circular flow between devices. In an Ethernet system,
however, some of the messages that permit the flexibility of free form point-to-point
communication require broadcasting to locate the resources necessary to build a
complete message request. The Address Resolution Protocol (ARP) is a helper protocol
used to bind the Ethernet hardware MAC address to the logical software stack IP
address. Because the two protocols were developed independently and at different
times, ARP has become a separate but core protocol for the proper function of IP over
Ethernet as the combination has gained acceptance. ARP requests, by design, are
broadcast to all devices on the IP subnet or VLAN broadcast domain; however, they
can be disruptive when they are excessive. Broadcast messages such as ARP are
flooded throughout the IP subnet and are received, then processed, by every device.
Processing broadcast requests is an essential function of IP over Ethernet, even if the
ARP request is for another end device. Many other common protocols such as NetBIOS
or IPX also advertise services, which can sometimes generate reciprocal broadcasts
from all other NetBIOS hosts on the subnet, as in the case of Microsoft Windows
NetBIOS domain or workgroup master browser elections, or a host configured for
another network attempting to locate its primary resources. Misconfigured host PCs can
be disruptive to some devices by generating 10 or more broadcasts per millisecond in a
burst attempt to log in and register with unavailable network domain controllers, server
shares, and other resources.
If ARP or other broadcasts are excessive, they can be disruptive by congesting the
buffers of all end devices on the subnet, which delays or may prevent the routine
processing of important unicast/multicast automation application messages and
legitimate UDP client requests, such as BootP/DHCP requests to servers. Ethernet
switches have evolved to combat excessive broadcast traffic with a feature called
Broadcast Rate Limiting, which clamps broadcast traffic above a configured level.
Choosing managed Industrial Ethernet switches that support Broadcast Rate Limiting
allows the switches to protect the end devices from excessive broadcasts and ensures
that any disruption from broadcast storms is minimal and unlikely to affect the Industrial
application. A conservative configuration guideline is to permit 2 general broadcasts per
second on the switch port for each device on the subnet, plus 2 per second for each
target device. This allows 1 broadcast for each of 2 application services, such as DHCP,
at the IP standard broadcast interval of 1 broadcast per second.
For example, if a device is communicating with 5 other devices on a subnet with 60
hosts, the Broadcast Rate Limit would be 130 broadcasts per second.
(Subnet devices x 2) + (number of targets x 2) = Rate Limit
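The guideline can be expressed as a one-line calculation; this Python sketch (function name ours, for illustration) reproduces the worked example:

```python
def broadcast_rate_limit(subnet_devices: int, targets: int) -> int:
    """Conservative Broadcast Rate Limit guideline: 2 broadcasts per
    second for each device on the subnet, plus 2 per second for each
    target device the host communicates with."""
    return subnet_devices * 2 + targets * 2

# 60 hosts on the subnet, communicating with 5 target devices:
print(broadcast_rate_limit(60, 5))  # 130 broadcasts per second
```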
Consider that if power is restored after an outage, all devices would boot nearly
simultaneously, and would be not only probing for duplicate address checks, but seeking
to obtain address and configuration information and trying to locate their assigned peers.
Broadcasts by clients are typically bound to 1 second intervals, though some devices
may alternate frame types within that one second boundary. All clients will also seek
the MAC addresses of their peers to collect the information needed to initiate a TCP
connection. The configured Broadcast Rate Limit value must be sufficiently generous
to allow all devices to receive advertisements from all connecting clients on the subnet.
While the example number of 130 broadcasts per second would be an extraordinary
number of broadcasts, it would still not be disruptive to Ethernet. For example, 130
DHCP broadcasts of 512 bytes each would equal just over 0.5% utilization on a
100 Mb/s Ethernet link.
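That utilization figure is easy to verify; a small Python sketch with an illustrative function name:

```python
def broadcast_utilization(broadcasts_per_s: int, frame_bytes: int,
                          link_bps: int) -> float:
    """Fraction of link bandwidth consumed by a steady broadcast stream."""
    return broadcasts_per_s * frame_bytes * 8 / link_bps

# 130 broadcasts/s of 512 bytes each on a 100 Mb/s link:
u = broadcast_utilization(130, 512, 100_000_000)
print(f"{u:.2%}")  # 0.53%
```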
Multicasting is also a useful real time message distribution method, but it can be
disruptive if flooded to all end devices on the subnet. In an unmanaged network,
multicasts are treated the same as broadcasts: they are flooded to all host devices,
even those end devices not intended or required to receive them.
Multicast filtering is another configurable feature available in managed Industrial
Ethernet switches that support GMRP or IGMP multicast filtering. Multicast filtering by
the Ethernet switch allows the switch packet engine to distribute multicasts only to
those devices registered to receive them. This prevents flooding, and the consequent
disruption of end devices forced to buffer unnecessary messages.
By operating in full duplex mode and mitigating disruption from excessive broadcasts
and multicasts, Ethernet is positioned for deterministic performance. Ethernet typically
transmits at 100 Mb/s on a switch with modern end devices. With packet sizes for
automation protocols typically under 500 bytes, the transmission time to send a 500
byte packet is 0.00004 seconds, or 40 microseconds. There are some other factors,
such as the Nominal Velocity of Propagation (NVP), which determines the time for a bit
to travel down a given length of media. NVP is measured as a percentage of the speed
of light. Most Cat 5e cable has an NVP of 0.65-0.70, propagating a bit at up to 70% of
the speed of light. For all practical purposes this component, even on a 100 meter cable
segment, is about 477 nanoseconds: so small that it can be ignored.
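Both figures follow from simple arithmetic; the Python sketch below (function names ours) reproduces them, assuming a 100 Mb/s link and an NVP of 0.70:

```python
SPEED_OF_LIGHT = 299_792_458  # meters per second

def transmission_time_s(frame_bytes: int, link_bps: int) -> float:
    """Time to serialize a frame onto the wire."""
    return frame_bytes * 8 / link_bps

def propagation_delay_s(length_m: float, nvp: float) -> float:
    """Time for a bit to travel the cable, given its Nominal Velocity
    of Propagation as a fraction of the speed of light."""
    return length_m / (nvp * SPEED_OF_LIGHT)

print(round(transmission_time_s(500, 100_000_000) * 1e6))  # 40 (microseconds)
print(round(propagation_delay_s(100, 0.70) * 1e9))         # 477 (nanoseconds)
```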
As noted earlier, there are some variables in traditional fieldbuses which deliver
deterministic performance but do affect overall transmission time. For example, on
Modbus Plus, the number of devices will affect token rotation time. On Profibus,
the overall length of the network will affect the transmission speed. In either case, once
the network is established and stabilized, the transmission time will be consistent. For
an Ethernet network a similar variable is the number of Ethernet switches in the path
between the two end devices. Most Ethernet switches operate in Store and Forward
mode, which means that every packet must be completely buffered on the incoming or
ingress port, error checked using the Frame Check Sequence (FCS, a 32 bit CRC), and
forwarded to the outgoing or egress port. This Forwarding Delay takes a small amount
of time to complete. Most Ethernet switches have a maximum forwarding delay of under
50 microseconds. The maximum forward delay is generally tied to forwarding delay
imposed by a maximum Ethernet frame of 1,518 bytes. As most Industrial automation
messages are less than 1/3 of the maximum Ethernet frame size, the forwarding delay in
most Industrial Ethernet switches is considerably less than the maximum forwarding
delay specification of the switch.
Testing performed in our lab has indicated that as more switches are added in the path
between two end devices, the total forwarding delay increases in a linear fashion. The

forwarding delay maximum value of 50 microseconds is a worst case value for most
switches. Using this value, these tests have shown that a packet would have to traverse
20 switches to equal a forwarding delay of 1 millisecond. Having 20 switches in the path
between two end devices is not common in a bus or star architecture, although it is
certainly possible in a Redundant Ethernet Ring architecture. Because the Redundant
Ethernet Ring is a physical ring/logical bus, a device can be just a couple of physical
switches away, but packets may have to traverse the backbone of the ring depending on
where the bus is terminated. Generally, the physical ring terminates to a bus at the Ring
Redundancy Manager switch.
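The linear relationship observed in these tests reduces to a one-line estimate; a Python sketch assuming the worst-case 50 microsecond figure per switch:

```python
def path_forwarding_delay_us(num_switches: int,
                             per_switch_us: float = 50.0) -> float:
    """Total store-and-forward delay across a switch path, assuming
    linear accumulation of a worst-case per-switch forwarding delay."""
    return num_switches * per_switch_us

# 20 switches at 50 us each reach the 1 millisecond mark:
print(path_forwarding_delay_us(20))  # 1000.0 (microseconds)
```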
Even crossing many switches, and using a very generous forwarding delay factor of 50
microseconds, you can see that the overall transmission time is quite small in Ethernet.
There are also Quality of Service/Class of Service traffic shaping technologies that help
guarantee priority delivery of message packets, should congestion on the Ethernet
backbone become a concern. Adding 3 User Priority bits to the IEEE 802.1p/Q Tag in
the MAC frame can establish a priority level of 0-7 for each packet. Priority 7 packets will
always be forwarded first and should congestion develop to the point where switches
may be forced to drop packets due to buffer overload, only the lower priority packets will
be dropped.
How packet priority is handled depends on the implementation in the switch. Some
switches use strict queuing, which means that if there is even a single high priority
packet, it will be plucked from the buffer and forwarded first. If there are many high
priority packets, all of them will be forwarded before a lower priority packet is forwarded.
Other implementations use a Weighted or Random Early Detection queuing mechanism,
where a majority of higher priority packets are forwarded, then a few of the lower priority
packets. This is similar to queuing mechanisms used by IP routers: a router will look at
the IP Type of Service (TOS) or DiffServ field and interleave the frame with others,
depending on the queuing mechanism configured, so that even low priority packets are
eventually forwarded. At the MAC layer, a similar feature is accomplished with Industrial
Ethernet switches that support IEEE 802.1Q/p.
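The strict queuing behavior described above can be sketched with a priority heap; the class and frame labels below are hypothetical, for illustration only:

```python
import heapq

class StrictPriorityQueue:
    """Minimal sketch of strict priority queuing: the highest 802.1p
    priority (7) is always forwarded before any lower priority frame.
    FIFO order within a priority level is kept via a sequence number."""
    def __init__(self):
        self._heap = []
        self._seq = 0

    def enqueue(self, priority: int, frame: str) -> None:
        # Negate priority so that 7 sorts first in Python's min-heap.
        heapq.heappush(self._heap, (-priority, self._seq, frame))
        self._seq += 1

    def dequeue(self) -> str:
        return heapq.heappop(self._heap)[2]

q = StrictPriorityQueue()
q.enqueue(0, "bulk transfer")
q.enqueue(7, "automation message")
q.enqueue(3, "hmi update")
print(q.dequeue())  # automation message
```

A Weighted or Random Early Detection scheme would instead dequeue a mix, mostly high priority with a few low priority frames, so no class is starved.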
While there are many methods for controlling broadcasts, multicasts and congestion,
there is little chance on most Industrial Ethernet networks that this will actually become a
problem because of the smaller automation application packet size. Smaller packets
take less time to transmit and interleave more easily. In our testing, we transmitted a
sample 71 byte Modbus TCP request packet, including the Ethernet MAC overhead
(interpacket gap, preamble, FCS), through a series of Ethernet switches operating in full
duplex. The results are shown in Figure 2. The same test was repeated with a 325 byte
Modbus TCP response, shown in Figure 3. As you can see in Figures 2 and 3, as
the number of switches in the path is increased, the transmission time increases
correspondingly. Note however, that the actual transmission time, even through several
switches, is quite a small value indeed.

This demonstrates that like the proprietary, deterministic fieldbus, once the switch path
is established, transmission time reaches a consistent steady state, variable by perhaps
a few microseconds.
Test Setup
The testing was conducted using a variety of managed and unmanaged Industrial
Ethernet switches. The packet generator used was the Spirent Smartbits 200 as shown
in Figure 4.

Modbus Request and Modbus Response packets were passed through an incrementing
number of switches from the egress interface and received on the ingress port of the
SmartBits 200 to measure the round trip time. Each packet had a nominal 96 bit time
Inter Packet Gap (IPG) to simulate a stream of traffic. The elapsed time was measured
using the system clock reference on the SmartBits 200. Because the purpose-built
SmartBits uses specialized ASICs to generate traffic, the stream of traffic was constant
and not subject to the operating system fluctuations found in software packet generators.
Queuing Effect with Prioritization
Even with broadcast rate limiting, multicast filtering and prioritization traffic shaping tools
configured and in place, it is possible that a maximum size, low priority, Ethernet frame
has started buffering at the switch ingress just ahead of a higher priority automation
application message, as shown in Figure 5. The maximum size Ethernet frame will
continue to be buffered, and the automation application message packet will be forced
to queue. This is perhaps a rare instance, but nonetheless possible. In this scenario, the
maximum queuing delay imposed on the automation application packet would be 121
microseconds on a 100 Mb/s link: not enough to disrupt an automation application, and
well within a reasonable tolerance for determinism.
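That 121 microsecond figure is simply the serialization time of a maximum 1,518 byte frame at 100 Mb/s; a short Python sketch (function name ours):

```python
def max_queuing_delay_us(max_frame_bytes: int = 1518,
                         link_bps: int = 100_000_000) -> float:
    """Worst-case queuing delay when a maximum-size frame has just
    started serializing ahead of a high priority packet: the high
    priority frame must wait for the full frame to finish transmitting."""
    return max_frame_bytes * 8 / link_bps * 1e6  # microseconds

print(round(max_queuing_delay_us()))  # 121 (microseconds)
```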

Designing for Determinism


The design of your network can play a role in maintaining a deterministic Ethernet
network as well. As mentioned, when operating in full duplex, the only real threat to
determinism is disruption by unnecessary protocols or excessive broadcasts/multicasts.
If you are deploying a Distributed I/O (DIO) network over Ethernet and want truly
deterministic performance, consider isolating the DIO devices on a dedicated PLC
communications adapter and dedicated switch as shown in Figure 6.

In Figure 6, the PLC is equipped with 2 Ethernet interface adapters. One adapter
services the Distributed I/O for determinism, while the second adapter services all other
factory communications including peer PLC comms, SCADA, HMI, MES and other client
devices. Having DIO devices on a dedicated Ethernet switch eliminates the propagation
of unnecessary or disruptive protocols.
Note that with fiber optic interfaces on modern Industrial Ethernet switches, there is no
practical distance limit of the kind other deterministic fieldbuses are subject to. Using
multimode fiber media, the DIO network could span 2 kilometers between each switch.
Using single mode fiber media, distances of greater than 20 kilometers between each
switch are possible, sufficient for virtually any industrial application. In comparison with Profibus,
this is accomplished with no degradation of speed. The optional router shown in Figure 6
also allows direct access to the DIO devices from the Factory LAN in a controlled
method. The router will not propagate broadcasts from the factory LAN and can be

configured for Access Control such that only authorized personnel may pass through the
router to the DIO network.
In conclusion, two approaches to Ethernet determinism have been outlined. The first is
to employ full duplex operation end-to-end, and use configuration options in Industrial
Ethernet switches to mitigate excessive broadcasts, unnecessary multicasts and
prioritize automation application traffic. This allows determinism within 1 millisecond on
nearly all automation applications.
The second option is to use a dedicated DIO LAN for deterministic I/O control over
Ethernet. With minimal additional cost, this option allows Ethernet to operate in a closed,
controlled environment for deterministic operation.
Because of these design and configuration options, Ethernet offers remarkable flexibility
compared to other, traditionally deterministic fieldbuses, without their restrictions and
performance penalties. As costs decrease and product selection increases, Ethernet is
evolving into the fieldbus of choice. With a little planning and using a few Industrial
Ethernet switch configuration features, Ethernet is ready to migrate into areas currently
serviced by proprietary fieldbus.
