
Architecture guide

Guidelines for deploying TRILL in an HP data center

Table of contents
Introduction
Solution overview
TRILL fundamentals
  Routing bridge (RBridge or RB)
  TRILL trunk ports
  TRILL access ports
  TRILL access alone ports
  TRILL hybrid ports
  Designated routing bridge (DRB)
  Appointed VLAN forwarder (AVF)
  Distribution trees
  TRILL timers
HP IRF TRILL enhancement
TRILL deployment architectures
  Distributed spines
  Consolidated spines
  Server ToR connectivity
  IRF LACP MAD Links
  Layer 3 routing with distributed spine interconnects
  Consolidated layer 3 routing using multitenant device contexts (MDC)
TRILL deployment best practices
Appendix: Sample TRILL configurations
  Spine 1
  Leaf 1
Additional links

Introduction
Transparent Interconnection of Lots of Links (TRILL) is an evolutionary step in Ethernet technology designed to address some of the shortcomings of Ethernet, specifically spanning tree and loop prevention. TRILL provides a mechanism that allows every node to compute a tree rooted at itself, enabling optimal (shortest path) distribution of traffic as well as multipathing for failure recovery.

This architecture guide provides design guidelines and best practices for deploying simple, scalable, and stable TRILL
Ethernet Fabrics in data centers on HP Networking switches.

Solution overview
TRILL combines the simplicity and flexibility of Layer 2 switching with the stability, scalability, and rapid convergence
capability of Layer 3 routing.
All these advantages make TRILL well suited to large, flat Layer 2 data center networks with increasing East/West server traffic patterns, as shown in figure 1.
Figure 1. TRILL fabric supporting East/West server traffic
(Servers running hypervisors and VMs connect to the TRILL fabric through Link Aggregation Groups (LAGs).)

Other benefits of TRILL include:


• Vendor neutral, non-proprietary technology
• No Spanning Tree Protocol (STP), loop-free, multipathing Ethernet fabric
• Distributed, scale-out Ethernet fabric in which all Top of Rack (ToR) switch server ports have equal latency
• Stable underlay network for overlay SDN networks in the data center


TRILL fundamentals
The following TRILL fundamentals shown in figure 2 are critical for TRILL deployment.
Figure 2. TRILL fundamentals


Routing bridge (RBridge or RB)
A routing bridge is a device that runs TRILL. It combines the benefits of both bridges and routers.

TRILL trunk ports
Used to handle TRILL frames, these ports are located inside a TRILL fabric.

TRILL access ports
Used to handle non-TRILL frames and hello packets, these ports are located at the edge of a TRILL fabric.

TRILL access alone ports
Used to handle non-TRILL frames only, these ports are located at the edge of a TRILL fabric.

TRILL hybrid ports
A combination of both TRILL trunk and TRILL access ports, used to handle TRILL frames, non-TRILL frames, and hello packets. A hybrid port is used to connect two TRILL RBs across a non-TRILL enabled switch and is neither required nor recommended in a standard data center deployment.
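
The port roles above map directly to interface-level commands. The following is a minimal sketch based on the sample configurations later in this guide; the interface names and VLAN numbers are examples only.

# TRILL trunk port: handles TRILL frames inside the fabric
interface Ten-GigabitEthernet1/0/1
port link-mode bridge
undo stp enable
trill enable
trill link-type trunk
#
# TRILL access port (the default link type): handles non-TRILL frames and hello packets at the fabric edge
interface Ten-GigabitEthernet1/0/10
port link-type trunk
undo port trunk permit vlan 1
port trunk permit vlan 150 to 151
undo stp enable
trill enable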

Designated routing bridge (DRB)
Similar to the designated IS (DIS) in IS-IS, a DRB exists on each broadcast network. It helps simplify the network topology and appoints the AVFs for VLANs on each RB on the broadcast network.

Appointed VLAN forwarder (AVF)
To avoid loops with multihomed L2 switches, TRILL requires that all traffic for a VLAN on a broadcast network enters and leaves the TRILL network through the same DRB port. The DRB is also the AVF of the VLAN.


TRILL prevents network loops with multihomed L2 switches using DRB/AVF as shown in figure 3.

Figure 3. TRILL loop prevention with multihomed L2 switches

(The figure shows L2 switches carrying VLANs 100-200, single-homed and multihomed to RBs at the edge of the TRILL fabric; in each case one RB acts as the DRB/AVF for those VLANs.)

Distribution trees
Distribution trees, as shown in figure 4, are used to forward multicast, broadcast, and unknown unicast frames across a TRILL network. RBs with higher priorities are selected as the root bridges (tree roots) of the TRILL distribution trees; multiple tree roots are advisable so that traffic can be load shared across trees on a per-VLAN basis. The number of distribution trees is decided by the highest-priority RB and is then pushed down to all other RBs.

Figure 4. TRILL network with 2 distribution trees

(The figure shows two spine RBs acting as the tree roots of distribution trees 1 and 2, with traffic for VLANs 150 and 151 load shared across the two trees down to the servers.)
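
As a configuration sketch, assuming two spine RBs, the tree-root priority determines which RBs are elected as tree roots, and the trees calculate value on the highest-priority RB sets how many trees the fabric computes; the values below are taken from the sample spine configurations in this guide.

# Spine 1: highest priority, becomes a tree root and sets the number of distribution trees
trill
tree-root priority 35000
trees calculate 2
#
# Spine 2: second tree root with a lower priority
trill
tree-root priority 34000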

TRILL timers
HP recommends that TRILL timers remain at their default values in production deployments. In lab and test environments, timers can be shortened to speed up failover at the expense of higher CPU utilization; refer to the TRILL configuration guide for the appropriate product for guidance on tuning timers.


HP IRF TRILL enhancement
HP enhances TRILL deployments by using Intelligent Resilient Framework (IRF) to provide N:1 device virtualization. Using IRF in HP TRILL networks does not affect interoperability with other vendors' TRILL nodes.
IRF is not mandatory for a TRILL deployment, but it provides the following complementary benefits: a reduction in the routing protocol (IS-IS) table size, the ability to build larger domains, and faster failure recovery. Figure 5 shows the reduction in RBridges.

Figure 5. HP IRF TRILL enhancement example

(Without IRF, the example fabric contains 10 routing bridges in total; with four 2-switch IRF clusters, the same fabric contains 6 routing bridges in total.)
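
The IRF portion of such a deployment is shown in the Leaf 1 configuration in the appendix; the minimal sketch below repeats it for reference, with the member numbers, priority, and IRF port interfaces as examples only.

irf domain 1
# IRF domain IDs should differ for each logical RB
irf member 1 priority 32
#
irf-port 1/1
port group interface Ten-GigabitEthernet1/0/45
port group interface Ten-GigabitEthernet1/0/46
#
irf-port 2/2
port group interface Ten-GigabitEthernet2/0/45
port group interface Ten-GigabitEthernet2/0/46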


TRILL deployment architectures
TRILL is typically deployed in the data center using a CLOS network design with spine and leaf switches. Figure 6 shows this design without IRF, which is valid and supported by HP.
Leaf switches connect to spine switches, and servers connect to leaf switches. The TRILL fabric is simple to expand by adding spine or leaf switches to increase capacity or performance.
Figure 6. TRILL CLOS network design

(L3 routers connect to the spine switches; leaf switches connect to the spines over 10/40G interconnects; the servers host application VMs as well as VMs that provide services such as L3 routing, firewall, and load balancer capabilities.)

There are two variations of the typical TRILL deployment architecture: distributed spines and consolidated spines.

Distributed spines
The design in figure 7 is typically used with fixed port spine switches and multiple 10G interconnects. Server PODs with leaf
switches may be replicated horizontally as the network grows.

Figure 7. Distributed spine network design

(Two standalone 5900 spine switches connect to the rest of the network through L3 routers and to six 2 x 5900 IRF leaf pairs over 10G interconnects; 10G and 1G servers use active/standby teaming, and future server PODs can be added alongside the existing ones.)


Consolidated spines
The design in figure 8 is typically used with modular chassis-based switches, such as the HP FlexFabric 11900 or 12900, and 10G or 40G interconnects for higher performance and an even simpler network. Fixed port 5900 switches may also be used if desired. Server PODs with leaf switches may be replicated horizontally as the network grows.
Figure 8. Consolidated spine network design

(A 2 x 11900 IRF spine pair connects to the rest of the network through L3 routers and to six 2 x 5900 IRF leaf pairs over 10G or 40G interconnects; 10G and 1G servers use active/standby teaming, and future server PODs can be added alongside the existing ones.)

Note:
Always refer to the product quick specs for hardware performance details such as MAC table sizes, forwarding rates, and bandwidth oversubscription calculations, and to determine the appropriate products to use.

Table 1 highlights the main differences between both designs.

Table 1. Main differences between distributed and consolidated spine design

Spine and leaf interconnects
• Distributed spines: multiple 10G links with IS-IS ECMP for efficient network utilization
• Consolidated spines: a single logical 10G/40G LAG link between leaf and spine switches for efficient network utilization

L3 router considerations
• Distributed spines: a LAG between the L3 routers and the TRILL spines is not possible. VRRP provides server/VM default gateway redundancy. The L3 routers need to load share traffic by splitting VRRP master states across VLANs; for example, the VRRP master for VLANs 100–199 is on the left router while the VRRP master for VLANs 200–299 is on the right router, and each router acts as the VRRP backup for the other.
• Consolidated spines: a LAG between the L3 routers and the TRILL spines is possible. VRRP and L3 router functionality are the same as in the distributed spine design.

Tree root
• Distributed spines: multiple tree roots are required for efficient network utilization
• Consolidated spines: only one tree root is required on the central spine switch because the network topology is simpler


Server ToR connectivity
The TRILL fabric is flexible enough to accommodate either one or both of the typical server connectivity options shown in figure 9, depending on server NIC teaming and LACP capability (a sample leaf-switch configuration sketch for option 2 follows figure 9):
1. Active/Standby server NIC teaming to different IRF clusters or non-IRF switches
2. Active/Active LAG NIC teaming to the same IRF cluster

Figure 9. Multihomed server connectivity options
(Servers connect to 2 x 5900 IRF leaf pairs at the edge of the TRILL fabric, using either 10G active/standby teaming across different leaf pairs or 10G active/active teaming to the same IRF pair.)
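
For option 2, the sketch below shows what a server-facing LAG on a 2 x 5900 IRF leaf pair might look like; it reuses commands that appear elsewhere in this guide, but the aggregation group number, member ports, and VLANs are illustrative assumptions rather than part of the reference configuration.

# Server-facing LAG spanning both IRF members for active/active LACP teaming on the host
interface Bridge-Aggregation10
undo stp enable
link-aggregation mode dynamic
port link-type trunk
undo port trunk permit vlan 1
port trunk permit vlan 150 to 151
trill enable
#
interface Ten-GigabitEthernet1/0/12
port link-aggregation group 10
#
interface Ten-GigabitEthernet2/0/12
port link-aggregation group 10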

Customers who do not require high availability for server uplinks may also choose to connect their servers into TRILL leaf
switches without multihoming enabled.
Customers should refer to their hypervisor/virtualization vendor for host configuration/deployment details.

IRF LACP MAD Links
To offset the risk of an IRF virtual device partition, it is recommended to deploy Link Aggregation Control Protocol Multi-Active Detection (LACP MAD), using a different IRF domain ID for each leaf IRF cluster, between TRILL leaf switches in distributed spine deployments, as shown in figure 9, to detect multi-active collisions. These LACP links serve a dual purpose: because they are part of the TRILL fabric, they also provide East/West network connectivity for servers.
In consolidated spine deployments, MAD should instead be enabled on the LACP links between the leaf and spine switches to provide the same functionality. This removes the cabling and the LACP MAD requirement between leaf switches.
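
A minimal sketch of this recommendation, combining the IRF domain ID and LACP MAD settings that appear separately in the Leaf 1 configuration and in Table 2 later in this guide (the domain ID and aggregation group number are examples):

irf domain 1
# use a different IRF domain ID on each leaf IRF cluster so MAD can detect a partition
#
interface Bridge-Aggregation1
undo stp enable
link-aggregation mode dynamic
mad enable
trill enable
trill link-type trunk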

Layer 3 routing with distributed spine interconnects
In a distributed spine deployment, the spine switches should be interconnected so that, if a spine or leaf link towards the L3 VRRP master fails, traffic has an alternative path that stays within the spine layer, as shown in figure 10.


Figure 10. Spine interconnects to accommodate link failure to VRRP master

(One L3 router acts as the VRRP master and the other as the VRRP backup; a leaf-to-spine link failure towards the master is shown, with the spine interconnect providing the alternative path.)
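
On the L3 routers themselves, VRRP provides the default gateway for servers and VMs, with master states split per VLAN as described in Table 1. The following is a hypothetical sketch for one VLAN on an HP Comware-based router; the VLAN ID, addresses, and priority are assumptions and not part of this guide's reference configurations.

# Left router: VRRP master for VLAN 150 (the right router runs the same VRID with a lower priority)
interface Vlan-interface150
ip address 10.1.150.2 255.255.255.0
vrrp vrid 150 virtual-ip 10.1.150.1
vrrp vrid 150 priority 120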

Consolidated layer 3 routing using multitenant device contexts (MDC)
It is possible to simplify the network even further by using MDCs instead of physical devices to provide 1:N virtualization for consolidated Layer 3 routing, as shown in figure 11. This removes the requirement for VRRP on the routers, because IRF N:1 virtualization already provides chassis redundancy. Note that external cabling is required for connectivity between MDCs.
Figure 11. Combining MDCs and IRF for TRILL

(A single IRF spine device is partitioned into MDC 2, which provides the TRILL spine switches, and MDC 3, which provides the Layer 3 routers connecting to the rest of the network; the two MDCs are linked by an external 40G LAG. IRF leaf devices connect servers using 10G active/standby and active/active teaming.)
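
A heavily hedged sketch of how MDCs might be created on the IRF spine chassis is shown below; the MDC names, IDs, and interface allocations are assumptions, and the exact MDC commands and the resources that can be assigned vary by platform and software release, so follow the MDC configuration guide for the specific chassis.

# Create one device context for the TRILL spine role and one for the Layer 3 routers (names, IDs, and ports are examples)
mdc SPINE id 2
allocate interface Ten-GigabitEthernet1/0/1
allocate interface Ten-GigabitEthernet1/0/2
mdc start
#
mdc L3ROUTER id 3
allocate interface Ten-GigabitEthernet1/0/25
allocate interface Ten-GigabitEthernet1/0/26
mdc start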


TRILL deployment best practices


This section highlights best practices when deploying a TRILL network.
Table 2. TRILL deployment best practices

Maximum TRILL RBs
• The number of RBs in a TRILL deployment should not exceed 64.
• When IRF is used, each N:1 virtualized device counts as only one RB.
• Sample configuration: not applicable.

Disable Spanning Tree Protocol
• STP should be disabled on TRILL ports.
• Sample configuration:
interface Ten-G1/0/1
port link-mode bridge
description ToTRILLRB
undo stp enable
trill enable
trill link-type trunk
#
interface Ten-G1/0/4
port link-mode bridge
description ToServer
port link-type trunk
undo port trunk permit vlan 1
port trunk permit vlan 150 to 151
undo stp enable
trill enable

Dedicated VLAN for TRILL trunk ports
• Simplify TRILL deployment and troubleshooting by dedicating VLAN 1 to TRILL trunk ports.
• The configuration below uses the default VLAN 1.
• Sample configuration:
interface Ten-G1/0/1
port link-mode bridge
description ToTRILLRB
undo stp enable
trill enable
trill link-type trunk

Minimize VLANs on TRILL access ports to servers
• Remove the VLAN used by TRILL trunk ports from TRILL access ports.
• Only the required VLANs should be permitted on TRILL access ports with 802.1Q trunking to servers; this prevents servers from receiving unnecessary broadcasts and consuming server resources.
• Sample configuration:
interface Ten-G1/0/4
port link-mode bridge
description ToServer
port link-type trunk
undo port trunk permit vlan 1
port trunk permit vlan 150 to 151
undo stp enable
trill enable

Supporting 4K VLANs on TRILL access ports to servers
• If required, it is also possible to support 4K VLANs to servers by preventing TRILL hello frames from being received or sent, using the following configuration.
• Sample configuration:
interface Ten-G1/0/4
port link-mode bridge
description ToServer
port link-type trunk
undo port trunk permit vlan 1
port trunk permit vlan 2 to 4094
undo stp enable
trill enable
trill link-type access alone

TRILL graceful restart
• Speeds up TRILL failover during an active MPU or IRF master failure.
• Sample configuration:
trill
graceful-restart

Loops on spine switches with TRILL access ports
• Loops on spine switches with TRILL access ports should be avoided because they prevent the non-DRB/AVF switch from forwarding or receiving traffic.
• Loops can be avoided by deploying L3 links between the routers connected to the TRILL spine switches.
• Sample configuration: not applicable.

Deterministic distribution trees
• Create deterministic and optimal distribution trees on spine switches.
• If distributed spines are used, configure both spine switches with tree root priorities.
• If consolidated spines are used, only one spine switch needs to be configured.
• The number of distribution trees to be used is determined by the spine with the highest priority.
• Sample configuration:
<Spine1>
trill
tree-root priority 35000
trees calculate 2
nickname 0001
<Spine2>
trill
tree-root priority 34000
nickname 0002

Customized TRILL nicknames
• Implement customized TRILL nicknames to simplify TRILL troubleshooting when using display commands.
• Sample configuration:
<Spine1>
trill
tree-root priority 35000
trees calculate 2
nickname 0001
<Spine2>
trill
tree-root priority 34000
nickname 0002

LACP MAD on IRF switches
• Enable LACP MAD on IRF switches to offset the risk of IRF virtual device partition.
• Sample configuration:
interface Bridge-Aggregation1
undo stp enable
link-aggregation mode dynamic
mad enable
trill enable
trill link-type trunk


Appendix: Sample TRILL configurations


This section provides sample TRILL configurations for both spine and leaf switches based on figure 12.
Figure 12. Sample TRILL topology

(Spine 1 and Leaf 1 are highlighted within a TRILL fabric built with IRF.)

Spine 1
sysname Spine1
#
trill
tree-root priority 35000
trees calculate 2
graceful-restart
nickname 0001
#
lldp global enable
#
vlan 1
#
vlan 1000 to 4094
#
stp global enable
#
interface M-GigabitEthernet0/0/0
ip address 192.168.0.1 255.255.255.0
#
interface Ten-GigabitEthernet1/0/1
description ToLeaf1
undo stp enable
port link-mode bridge
trill enable
trill link-type trunk
#
interface Ten-GigabitEthernet1/0/2
description ToLeaf2
undo stp enable
port link-mode bridge
trill enable
trill link-type trunk
#
interface Ten-GigabitEthernet1/0/3
description ToLeaf3
undo stp enable
port link-mode bridge
trill enable
trill link-type trunk
#
interface Ten-GigabitEthernet1/0/24
description ToL3Routers
port link-type trunk
undo port trunk permit vlan 1
port trunk permit vlan 1000 to 1100
trill enable
#
ip route-static 192.168.1.0 24 192.168.0.254

Leaf 1
sysname Leaf1
#
irf domain 1
# domain IDs should differ for each RB
irf mac-address persistent timer
irf auto-update enable
undo irf link-delay
irf member 1 priority 32
#
trill
graceful-restart
nickname 0011
#
lldp global enable
#
vlan 1
#
vlan 150
name VM
#
vlan 151
name VMkernel
#
vlan 1000 to 4094
#
irf-port 1/1
port group interface Ten-GigabitEthernet1/0/45
port group interface Ten-GigabitEthernet1/0/46
#
irf-port 2/2
port group interface Ten-GigabitEthernet2/0/45
port group interface Ten-GigabitEthernet2/0/46
#
stp global enable
#
interface M-GigabitEthernet0/0/0
ip address 192.168.0.11 255.255.255.0
#
interface Ten-GigabitEthernet1/0/1
description ToSpine1
undo stp enable
port link-mode bridge
trill enable
trill link-type trunk
#
interface Ten-GigabitEthernet2/0/1
description ToSpine2
undo stp enable
port link-mode bridge
trill enable
trill link-type trunk
#
interface Bridge-Aggregation1
description ToLeaf2
undo stp enable
link-aggregation mode dynamic
mad enable
trill enable
trill link-type trunk
#
interface Ten-GigabitEthernet1/0/3
undo stp enable
port link-mode bridge
trill enable
trill link-type trunk
port link-aggregation group 1
#
interface Ten-GigabitEthernet2/0/3
undo stp enable
port link-mode bridge
trill enable
trill link-type trunk
port link-aggregation group 1
#
interface Ten-GigabitEthernet1/0/10
description ToServer1
port link-type trunk
undo port trunk permit vlan 1
port trunk permit vlan 150 to 151
trill enable
#
interface Ten-GigabitEthernet1/0/11
description ToServer2
port link-type trunk
undo port trunk permit vlan 1
port trunk permit vlan 1000 to 4094
trill enable
trill link-type access alone
#
ip route-static 192.168.1.0 24 192.168.0.254
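
After both switches are configured, TRILL adjacencies and forwarding state can be checked with display commands. The commands below are an assumed starting point; the exact command set varies by product and software release, so consult the TRILL command reference for the platform.

# Check TRILL neighbor adjacencies and unicast forwarding entries (command availability may vary by release)
display trill neighbor
display trill unicast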


Additional links
For more information, refer to the TRILL configuration guide for the specific product:
HP 12900 configuration guide
HP 11900 configuration guide
HP 5900 configuration guide

Learn more at
hp.com/networking

Sign up for updates
hp.com/go/getupdated

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for
HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as
constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

4AA6-0794ENW, August 2015
