Architecture & Solutions Group
US Public Sector Advanced Services
Mark Stinnette, CCIE Data Center #39151
Date 16 October 2013
Version 1.4.2
This presentation provides end-to-end configurations mapped directly to commonly deployed data center architecture topologies. In this cookbook-style quick start guide, configurations are broken down step by step into a complete, clean end-to-end configuration based on Cisco best practices and strong recommendations. Each QSG contains stage-setting content, technology component definitions, recommended best practices, and, most importantly, different data center topology scenarios mapped directly to complete end-to-end configurations. This QSG is geared toward network engineers, network operators, and data center architects, allowing them to quickly and effectively deploy these technologies in their data center infrastructure based on proven, commonly deployed designs.
This Quick Start Guide (QSG) is a cookbook-style guide to deploying data center technologies, with end-to-end configurations for several commonly deployed architectures.
Geographically dispersed data centers provide added application resiliency and workload allocation flexibility. To this end, the
network must provide Layer 2, Layer 3 and storage connectivity between data centers. Connectivity must be provided without
compromising the autonomy of data centers or the stability of the overall network. OTV provides an operationally optimized
solution for the extension of Layer 2 connectivity across any transport. OTV is therefore critical to the effective deployment of
distributed data centers to support application availability and flexible workload mobility.

OTV is a "MAC address in IP" technique for supporting Layer 2 VPNs to extend LANs over any transport. The transport can
be Layer 2 based, Layer 3 based, IP switched, label switched, or anything else as long as it can carry IP packets. By using
the principles of MAC routing, OTV provides an overlay that enables Layer 2 connectivity between separate Layer 2 domains
while keeping these domains independent and preserving the fault-isolation, resiliency, and load-balancing benefits of an IP-
based interconnection.
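As a preview of how compact the configuration is, below is a minimal sketch of a single OTV edge device over a multicast transport, using the example VLAN, interface, and group values that this guide builds up step by step later on:

feature otv

otv site-vlan 99
otv site-identifier 0000.0000.0001

interface Overlay 1
otv join-interface ethernet 1/9
otv control-group 239.1.1.1
otv data-group 232.1.1.0/24
otv extend-vlan 10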






Benefits Overview

Overlay Transport Virtualization (OTV) provides the following benefits:
Scalability
Extends Layer 2 LANs over any network that supports IP (transport agnostic)
Designed to scale across multiple data centers
Simplicity
Supports transparent deployment over an existing network without redesign
Requires minimal configuration commands
Resiliency
Preserves existing Layer 3 failure boundaries
Includes built-in loop prevention
Preserves failure boundaries and site independence (failover isolation between data centers)
Efficiency
Optimizes available bandwidth by using equal-cost multipathing and optimal multicast replication
Multipoint connectivity
Fast failover
Virtual machine mobility
Benefits Overview

Additional benefits of using OTV for Layer 2 extension:

No need for Ethernet over Multiprotocol Label Switching (EoMPLS) or Virtual Private LAN Service (VPLS) deployment for Layer 2 extensions

Use of any network transport that supports IP

Provision of Layer 2 and Layer 3 connectivity using the same dark fiber connections

Native Spanning Tree Protocol (STP) isolation: no need to explicitly configure Bridge Protocol Data Unit (BPDU) filtering

Native unknown unicast flooding isolation: unknown unicast is not sent to the overlay

Address Resolution Protocol (ARP) optimization with the OTV ARP cache

Simplified provisioning of First Hop Redundancy Protocol (FHRP) isolation

Simplified addition of sites
Commonly Deployed Designs :: Aggregation Layer

OTV on a Stick (most commonly deployed)
No network redesign or re-cabling
Join interface connects back through the VDC that has the SVIs on it
Separate OTV VDC or appliance switch

Inline OTV
Dedicated uplink for DCI
Join interface has a dedicated link out to the DCI transport (Core or WAN edge)
Separate OTV VDC or appliance switch

[Diagram legend :: L3 & join interfaces; L2 internal interfaces]
Terminology & Components

OTV delivers Layer 2 extensions over any type of transport infrastructure (multicast or unicast transports supported).

OTV Edge Device :: performs the OTV functions
Internal Interfaces :: regular Layer 2 interfaces; carry the VLANs extended over OTV (M-Series line cards supported; F1 & F2E line cards supported in 6.2(2))
Join Interfaces :: point-to-point Layer 3 interfaces; M-Series line cards only
OTV Overlay Interface :: logical interface where the OTV encapsulation takes place
vPC Domain :: vPC or vPC+ supported for dual-homing the edge devices over a peer-link
Authoritative Edge Device (AED) :: with multi-homing, the extended VLANs are split between the AEDs (one forwards the even VLANs, the other the odd VLANs)
SVI Separation :: the SVIs for the extended VLANs remain on the aggregation VDC
OTV encapsulates packets into an IP header and sets the Don't Fragment (DF) bit for all OTV control and data packets crossing the transport network. The encapsulation adds 42 bytes to the original IP maximum transmission unit (MTU) size, so it is a best practice to configure the join interface and all Layer 3 interfaces that face the IP core between the OTV edge devices with the maximum possible MTU size supported by the transport.
Filtering FHRP in both data centers on the OTV VDC is required to allow the same default gateway to exist in different locations, thus optimizing the outbound traffic flows (server-to-client direction).
Terminology & Components
Site ID & Site VLAN are Deployed on Both OTV Edge Devices

Site Identifier ::
Use same Site ID within a single data center
Use unique Site ID between different data centers

Site VLAN ::
Use same Site VLAN between different data centers (not mandatory)
Site VLAN is active on internal interfaces, but do not extend the Site VLAN
The Site VLAN should be a dedicated VLAN
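Once the overlay is up (configured later in this guide), the site settings can be sanity-checked from each OTV VDC; a minimal verification sketch:

show otv site [site VLAN state and OTV neighbor edge devices discovered in the local site]
show otv [overlay status, including the configured site identifier]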
[Diagram :: West DC uses Site ID 1 / Site VLAN 99; East DC uses Site ID 2 / Site VLAN 99. Each site runs an HSRP Active/Standby pair, with HSRP filtered on both OTV edge devices.]
Layer 3 Interface (Towards Routed Core)

interface ethernet x/y
mtu 9216
ip address x.x.x.x/30
ip router ospf 1 area 0
ip ospf network point-to-point
ip pim sparse-mode
OTV Join Interfaces

interface ethernet x/y
mtu 9216
ip address x.x.x.x/30
ip router ospf 1 area 0
ip ospf network point-to-point
ip igmp version 3
Terminology & Components :: Layer 2 & Layer 3 Features
Layer 3 Interface (Towards OTV Join)

interface ethernet x/z
mtu 9216
ip address x.x.x.x/30
ip router ospf 1 area 0
ip ospf network point-to-point
ip pim sparse-mode
ip igmp version 3
Aggregation Switch :: Enable PIM

feature pim

ip pim rp-address x.x.x.x group-list 224.0.0.0/4
ip pim ssm range 232.0.0.0/8
Aggregation Internal Interfaces

interface port-channel x
switchport
switchport mode trunk
switchport trunk allowed vlan x, y
vpc x

interface ethernet x/y
channel-group x force mode active
Note :: must enable Site VLAN [x] on the trunk towards the Aggregation Switch [make vlan active]
OTV Internal Interfaces

interface port-channel x
switchport
switchport mode trunk
switchport trunk allowed vlan x, y

interface ethernet x/y - z
channel-group x force mode active
Feature Overview
Edge Device :: The OTV edge device performs the OTV functions; multiple OTV edge devices can exist at each site. OTV requires the Transport Services (TRS) license. If you create the OTV edge device in a non-default VDC, it also requires the Advanced Services license.
Internal Interfaces :: Internal interfaces are the site-facing interfaces of the edge device, carrying the VLANs extended through OTV. They are regular Layer 2 interfaces (switchport mode trunk), typically port channels in a vPC. No OTV configuration is required on these interfaces.
Join Interfaces :: Join interfaces are one of the uplinks of the edge device; they are Layer 3 point-to-point routed interfaces (physical interface, port channel, or sub-interface) used to physically join the overlay network. No OTV-specific configuration is required.
Overlay Interface :: A virtual interface where most of the OTV configuration happens; a logical multi-access, multicast-capable interface that encapsulates Layer 2 frames in IP unicast or multicast.
Authoritative Edge Device (AED) :: The AED is responsible for MAC address advertisement for its VLANs, forwarding its VLANs' traffic inside and outside the site. The extended VLANs are split across the AEDs (even & odd) in OTV multi-homing.
Site VLAN :: The OTV site VLAN is used to discover OTV neighbor edge devices in the same local site.
Site Identifier :: Edge devices in the same site must use a common Site ID that is unique to that site. The Site ID is included in the control plane; an overlay will not come up until a Site ID is configured, and it should be configured on all local OTV edge devices.
MTU :: Join interfaces and neighboring core interfaces need an MTU of at least 1542 (hard requirement). Best practice is to use the maximum possible MTU size supported by the transport.
FHRP Isolation :: Filtering FHRP messages across the OTV overlay makes it possible to provide the same active default gateway in each data center site. Note: in future releases OTV will offer a simple command to enable these filtering capabilities.
SVI Separation :: OTV currently enforces SVI separation for the VLANs being extended across the OTV link, meaning OTV is usually in its own VDC for OTV functions while the SVIs live in another (aggregation) VDC.
Additional Features, Terminology, & Components
Feature Overview
OTV Requirements :: Nexus 7000 Series or ASR routers. LAN Advanced Services (VDC) license and Transport Services (OTV/LISP) license. An M-Series line card is required in the OTV VDC for OTV functions.
Multicast Transport :: Multicast transport (OTV control plane) is ideal for connecting a higher number of sites. OTV neighbor relationships are built over a multicast-enabled core/transport infrastructure. All OTV edge devices are configured to join a specific ASM (Any Source Multicast) group, where they simultaneously play the role of receiver and source. Edge devices join the multicast group; adjacencies are maintained over that group, and a single update reaches all neighbors.
Unicast Transport :: Supported since NX-OS release 5.2. Unicast-only transport (OTV control plane) is ideal for connecting a small number of sites. Requires an adjacency server. Each OTV device needs to create multiple copies of each control plane packet and unicast them to each remote OTV device that is part of the same logical overlay.
Adjacency Server :: Used in OTV unicast mode; usually enabled on an OTV edge device; can have a primary and a secondary; all other OTV edge (client) devices are configured with the address of the adjacency server. Since the goal is to communicate with all the remote OTV devices, each OTV node needs a list of neighbors to replicate control packets to. Rather than statically configuring the full neighbor list in each OTV node, the adjacency server provides this information dynamically.
OTV Extend VLAN :: Enables OTV advertisements for those VLANs. OTV will not forward Layer 2 packets for VLANs outside the extended VLAN range for the overlay interface. Assign a VLAN to only one overlay interface.
OTV Authentication :: OTV supports authentication of Hello messages along with authentication of PDUs.
Dual-Homed OTV Edge Devices :: Leverage vPC or vPC+ for dual-homed OTV edge devices. The concept of the AED role, along with the site VLAN, allows multi-homing of OTV edge devices.
Additional Features, Terminology, & Components
Feature Overview
Selective Unicast Flooding :: In 6.2(2); some applications rely on unknown unicast frames, so selective unicast flooding can be enabled on a per-MAC-address, per-VLAN basis to accommodate silent or unidirectional hosts. The OTV default behavior is no unknown unicast forwarding.
Command used: otv flood mac [xxxx.yyyy.zzzz] vlan [#]
Dedicated Data Broadcast Forwarding :: In 6.2(2); a dedicated broadcast group is a configurable option, useful for QoS purposes. A dedicated multicast group can be configured for all broadcast transmission in an OTV overlay that uses multicast transport on the underlying OTV network. By default, broadcast and control traffic share the same multicast group address. The broadcast group needs to be configured on all OTV edge devices connected to the OTV overlay network.
Source Interface with Loopback :: In a 6.2(2)+ maintenance release; logical interfaces as join interfaces; the loopback guarantees the interface is up/up. An OTV edge device can be configured to use a loopback interface as the join interface for an OTV overlay to increase availability. This feature requires the OTV edge device to participate in the core PIM multicast domain to support multiple paths. Prior to this feature, only single-homed Ethernet and port channel interface options were available.
OTV VLAN Translation :: In 6.2(2); VLAN translation allows OTV to map a local VLAN (in DC 1) to a remote VLAN (in DC 2). In previous NX-OS releases, the extended VLANs had to be identical in each site (i.e., VLAN X to VLAN X). With the VLAN mapping feature, VLANs can be translated, so they can be different in each site (e.g., VLAN X in one site to VLAN Y in another, via a transit VLAN), providing more flexible deployment options. Both multicast- and unicast-enabled IP core networks are supported. VLAN mappings have a one-to-one relationship.
Additional Features, Terminology, & Components
Supported Line Card Topologies :: NX-OS 6.1 and Prior Releases
OTV VDC must use only M-Series ports for both Internal and Join Interfaces
[M1-48, M1-32, M1-08, M2-Series]
OTV VDC Types (M-only)
Aggregation VDC Types (M-only, M1-F1 or F2/F2E)
Aggregation VDC
Supported Line Card Topologies :: NX-OS 6.2 and Later Releases
OTV VDC Join Interfaces must use only M-Series ports
[M1-48, M1-32, M1-08, M2-Series]
OTV VDC Internal Interfaces can use M-Series, F1 and F2E ports (F1 and F2E must be in Layer 2 proxy mode)
OTV VDC Types (M-only, M1-F1, M1-F2E)
Aggregation VDC Types (M-only, M1-F1, M1-F2E, F2, F2E, F2/F2E)
Aggregation VDC
Physical View Connectivity Map
Layer 3 routed point-to-point interfaces; OSPF is used as the routing protocol.
Layer 2 interfaces. The Aggregation VDC connects through vPC to the OTV VDC.
Quick Start Guide Assumptions
OTV Characteristics
2-wide 7k Aggregation VDC
Multi-homed OTV VDC
Multicast enabled transport
Extend VLAN 10
OTV Site VLAN 99
[Admin / Default VDC]

no vdc combined-hostname

vdc AGG-1
vdc AGG-1 limit-resource module-type m1 f1 m1xl m2xl
cpu-share 5
allocate interface Ethernet [.]

vdc OTV-1
vdc OTV-1 limit-resource module-type m1 m1xl m2xl
cpu-share 5
allocate interface Ethernet [.]

[Admin / Default VDC]

no vdc combined-hostname

vdc AGG-2
vdc AGG-2 limit-resource module-type m1 f1 m1xl m2xl
cpu-share 5
allocate interface Ethernet [.]

vdc OTV-2
vdc OTV-2 limit-resource module-type m1 m1xl m2xl
cpu-share 5
allocate interface Ethernet [.]


Create Aggregation & OTV VDCs

Allocate the Interfaces to appropriate VDC role accordingly
Verify the Nexus 7000 has the proper licenses to support OTV and VDC.

OTV requires the Transport Services license
VDC requires the Advanced Services license

install license bootflash:///lan_advanced_services_pkg.lic
install license bootflash:///lan_transport_services_pkg.lic

show license usage

Step 1 :: install | validate licenses
Step 2 :: create aggregation VDC
Step 3 :: create OTV VDC
Perform Configuration Steps at
Both DC Sites (East & West)
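A quick sanity check after creating the VDCs; these exec commands confirm that the VDCs exist, that the interfaces were allocated to the intended VDC, and that the licenses are in use:

show vdc
show vdc membership
show license usage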
feature lacp
feature vpc

vlan 10-20, 99

spanning-tree pathcost method long
spanning-tree port type edge bpduguard default
spanning-tree port type edge bpdufilter default
no spanning-tree loopguard default

spanning-tree vlan 10-20, 99 priority 0
spanning-tree pseudo-information
vlan 10-20 root priority 4096
vlan 1-10, 99 designated priority 8192
vlan 11-20 designated priority 16384

vpc domain 1
role priority 1
system-priority 4096
peer-keepalive destination [.] source [.] vrf management
peer-switch
peer-gateway
auto-recovery
auto-recovery reload-delay
delay restore 30
ip arp synchronize

interface port-channel 2
switchport
switchport mode trunk
switchport trunk allowed vlan 10-20, 99
spanning-tree port type network
vpc peer-link

interface e3/1 , e4/1
channel-group 2 force mode active

feature lacp
feature vpc

vlan 10-20, 99

spanning-tree pathcost method long
spanning-tree port type edge bpduguard default
spanning-tree port type edge bpdufilter default
no spanning-tree loopguard default

spanning-tree vlan 10-20, 99 priority 0
spanning-tree pseudo-information
vlan 10-20, 99 root priority 4096
vlan 1-10, 99 designated priority 16384
vlan 11-20 designated priority 8192

vpc domain 1
role priority 2
system-priority 4096
peer-keepalive destination [.] source [.] vrf management
peer-switch
peer-gateway
auto-recovery
auto-recovery reload-delay
delay restore 30
ip arp synchronize

interface port-channel 2
switchport
switchport mode trunk
switchport trunk allowed vlan 10-20, 99
spanning-tree port type network
vpc peer-link

interface e3/1 , e4/1
channel-group 2 force mode active

See QSG :: vPC for more details
Configure Aggregation VDC :: Layer 2 vPC (Option)
Perform Configuration Steps at
Both DC Sites (East & West)
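After both aggregation switches are configured, a short vPC verification sketch; these commands confirm the peer-link, peer-keepalive, and member port states as well as consistency between the peers:

show vpc
show vpc consistency-parameters global
show port-channel summary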
feature lacp
feature vpc
install feature-set fabricpath
feature-set fabricpath

vlan 10-20, 99
mode fabricpath

fabricpath switch-id 10

fabricpath domain default
root-priority 255

spanning-tree pseudo-information
vlan 10-20, 99 root priority 0

vpc domain 1
role priority 1
system-priority 4096
peer-keepalive destination [.] source [.] vrf management
peer-gateway
auto-recovery
auto-recovery reload-delay
delay restore 30
ip arp synchronize
fabricpath switch-id 1000

interface port-channel 2
switchport mode fabricpath
vpc peer-link

interface e3/1 , e4/1
channel-group 2 force mode active

feature lacp
feature vpc
install feature-set fabricpath
feature-set fabricpath

vlan 10-20, 99
mode fabricpath

fabricpath switch-id 11

fabricpath domain default
root-priority 254

spanning-tree pseudo-information
vlan 10-20, 99 root priority 0

vpc domain 1
role priority 2
system-priority 4096
peer-keepalive destination [.] source [.] vrf management
peer-gateway
auto-recovery
auto-recovery reload-delay
delay restore 30
ip arp synchronize
fabricpath switch-id 1000

interface port-channel 2
switchport mode fabricpath
vpc peer-link

interface e3/1 , e4/1
channel-group 2 force mode active

See QSG :: FabricPath for more details
Configure Aggregation VDC :: Layer 2 FabricPath vPC+ (Option)
Note :: install feature-set fabricpath must be performed from the Default / Admin VDC only
Perform Configuration Steps at
Both DC Sites (East & West)
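For the FabricPath vPC+ option, a short verification sketch; these commands confirm the switch-id assignments (including the shared vPC+ emulated switch-id 1000 above) and the vPC+ peer-link state:

show fabricpath switch-id
show vpc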
feature ospf
feature interface-vlan
feature hsrp

vlan 10-20, 99

interface loopback0
ip address [.]/32

router ospf 1
router-id [.]
log-adjacency-changes detail
auto-cost reference-bandwidth 100Gbps

interface e1/1
ip address [.]/30
ip router ospf 1 area 0.0.0.0
ip ospf network point-to-point

interface e1/10
ip address [.]/30
ip router ospf 1 area 0.0.0.0
ip ospf network point-to-point

interface vlan 10
ip address 10.10.10.2/24
no ip redirects
ip router ospf 1 area 0.0.0.0
ip ospf passive-interface
hsrp 1
preempt
priority 110
ip 10.10.10.1


feature ospf
feature interface-vlan
feature hsrp

vlan 10-20, 99

interface loopback0
ip address [.]/32

router ospf 1
router-id [.]
log-adjacency-changes detail
auto-cost reference-bandwidth 100Gbps

interface e1/1
ip address [.]/30
ip router ospf 1 area 0.0.0.0
ip ospf network point-to-point

interface e1/10
ip address [.]/30
ip router ospf 1 area 0.0.0.0
ip ospf network point-to-point

interface vlan 10
ip address 10.10.10.3/24
no ip redirects
ip router ospf 1 area 0.0.0.0
ip ospf passive-interface
hsrp 1
preempt
ip 10.10.10.1

Configure Aggregation VDC :: Layer 3 Infrastructure

Allocate the following accordingly ::
IP addressing
OSPF areas
SVIs & HSRP Groups
Perform Configuration Steps at
Both DC Sites (East & West)
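With the Layer 3 infrastructure in place, a quick verification sketch; these commands confirm the OSPF adjacencies on the point-to-point links and the HSRP active/standby roles on the SVIs:

show ip ospf neighbors
show hsrp brief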
feature ospf
feature lacp
feature vpc

interface e 1/2
ip address [.] / 30
ip router ospf 1 area 0.0.0.0
ip ospf network point-to-point

interface port-channel 10
switchport
switchport mode trunk
switchport trunk allowed vlan 10, 99
vpc 10

interface port-channel 20
switchport
switchport mode trunk
switchport trunk allowed vlan 10, 99
vpc 20

interface e5/1
channel-group 10 force mode active

interface e6/1
channel-group 20 force mode active

feature ospf
feature lacp
feature vpc

interface e 1/2
ip address [.] / 30
ip router ospf 1 area 0.0.0.0
ip ospf network point-to-point

interface port-channel 10
switchport
switchport mode trunk
switchport trunk allowed vlan 10, 99
vpc 10

interface port-channel 20
switchport
switchport mode trunk
switchport trunk allowed vlan 10, 99
vpc 20

interface e5/1
channel-group 10 force mode active

interface e6/1
channel-group 20 force mode active

Configure OTV :: Layer 2 & Layer 3 Infrastructure @ Aggregation
Step 1 :: configure L3 link towards OTV Join Interface
Step 2 :: configure L2 vPC towards OTV Internal Interface
Perform Configuration Steps at
Both DC Sites (East & West)
The OTV internal interfaces carry the VLANs to be extended and the OTV site VLAN (used within the data center to
provide multi-homing). They behave as regular Layer 2 switch port trunk interfaces; in fact, they send, receive, and
process the Spanning Tree Protocol BPDUs as they would on a regular LAN bridge device.
Configure OTV :: Layer 2 & Layer 3 Infrastructure @ OTV VDC
Step 1 :: configure OTV Join Interfaces
Step 2 :: configure OTV Internal Interfaces
Step 3 :: create vlan to extend
Perform Configuration Steps at
Both DC Sites (East & West)
feature ospf
feature lacp

vlan 10

spanning-tree vlan 10 priority 32768

interface loopback0
ip address [.]/32

router ospf 1
router-id [.]
log-adjacency-changes detail
auto-cost reference-bandwidth 100Gbps

interface e 1/9
ip address [.] / 30
ip router ospf 1 area 0.0.0.0
ip ospf network point-to-point

interface port-channel 10
switchport
switchport mode trunk
switchport trunk allowed vlan 10

interface e2/1, e2/2
channel-group 10 force mode active
feature ospf
feature lacp

vlan 10

spanning-tree vlan 10 priority 32768

interface loopback0
ip address [.]/32

router ospf 1
router-id [.]
log-adjacency-changes detail
auto-cost reference-bandwidth 100Gbps

interface e 1/9
ip address [.] / 30
ip router ospf 1 area 0.0.0.0
ip ospf network point-to-point

interface port-channel 10
switchport
switchport mode trunk
switchport trunk allowed vlan 10

interface e2/1, e2/2
channel-group 10 force mode active

Configure OTV :: Enable Jumbo MTU
Step 1 :: increase MTU on Join Interfaces
Step 2 :: increase MTU on all Layer 3 Interfaces
Perform Configuration Steps at
Both DC Sites (East & West)
feature ospf
feature lacp
feature vpc

vlan 10

interface e 1/9
mtu 9216
ip address [.] / 30
ip router ospf 1 area 0.0.0.0
ip ospf network point-to-point
feature ospf
feature lacp
feature vpc

vlan 10

interface e 1/9
mtu 9216
ip address [.] / 30
ip router ospf 1 area 0.0.0.0
ip ospf network point-to-point


feature ospf
feature lacp
feature vpc

vlan 10-20, 99

interface e 1/2
mtu 9216
ip address [.] / 30
ip router ospf 1 area 0.0.0.0
ip ospf network point-to-point

interface e1/10
mtu 9216
ip address [.]/30
ip router ospf 1 area 0.0.0.0
ip ospf network point-to-point

interface e1/1
mtu 9216
ip address [.]/30
ip router ospf 1 area 0.0.0.0
ip ospf network point-to-point



feature ospf
feature lacp
feature vpc

vlan 10-20, 99

interface e 1/2
mtu 9216
ip address [.] / 30
ip router ospf 1 area 0.0.0.0
ip ospf network point-to-point

interface e1/10
mtu 9216
ip address [.]/30
ip router ospf 1 area 0.0.0.0
ip ospf network point-to-point

interface e1/1
mtu 9216
ip address [.]/30
ip router ospf 1 area 0.0.0.0
ip ospf network point-to-point


Configure OTV :: Enable Required Multicast
Step 1 :: enable PIM
Step 2 :: configure PIM sparse mode [AGG VDC]
(on all intra & inter data center Layer 3 links)
Step 3 :: configure IGMP v3 [AGG & OTV VDC]
(join interfaces only)
Step 4 :: configure Rendezvous Point (RP)
Step 5 :: configure Source-Specific Multicast (SSM)
Perform Configuration Steps at
Both DC Sites (East & West)
interface e 1/9
mtu 9216
ip address [.] / 30
ip router ospf 1 area 0.0.0.0
ip ospf network point-to-point
ip igmp version 3
interface e 1/9
mtu 9216
ip address [.] / 30
ip router ospf 1 area 0.0.0.0
ip ospf network point-to-point
ip igmp version 3



feature ospf
feature lacp
feature vpc
feature pim

ip pim rp-address [x.x.x.x] group-list 224.0.0.0/4
ip pim ssm range 232.0.0.0/8

interface e1/1
mtu 9216
ip address [.]/30
ip router ospf 1 area 0.0.0.0
ip ospf network point-to-point
ip pim sparse-mode

interface e 1/2
mtu 9216
ip address [.] / 30
ip router ospf 1 area 0.0.0.0
ip ospf network point-to-point
ip pim sparse-mode
ip igmp version 3

interface e1/10
mtu 9216
ip address [.]/30
ip router ospf 1 area 0.0.0.0
ip ospf network point-to-point
ip pim sparse-mode
feature ospf
feature lacp
feature vpc
feature pim

ip pim rp-address [x.x.x.x] group-list 224.0.0.0/4
ip pim ssm range 232.0.0.0/8

interface e1/1
mtu 9216
ip address [.]/30
ip router ospf 1 area 0.0.0.0
ip ospf network point-to-point
ip pim sparse-mode

interface e 1/2
mtu 9216
ip address [.] / 30
ip router ospf 1 area 0.0.0.0
ip ospf network point-to-point
ip pim sparse-mode
ip igmp version 3

interface e1/10
mtu 9216
ip address [.]/30
ip router ospf 1 area 0.0.0.0
ip ospf network point-to-point
ip pim sparse-mode
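Once PIM is enabled, a quick multicast sanity check from the aggregation VDC; these commands confirm the PIM neighbors on the Layer 3 links, the RP mapping, and the multicast state being built:

show ip pim neighbor
show ip pim rp
show ip mroute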



Finish OTV Configuration :: Overlay, Site-ID, Site-VLAN
feature otv

vlan 10

otv site-vlan 99
otv site-identifier 0000.0000.0001

interface Overlay 1
otv join-interface ethernet 1/9
otv control-group 239.1.1.1
otv data-group 232.1.1.0/24
otv extend-vlan 10

interface port-channel 10
switchport
switchport mode trunk
switchport trunk allowed vlan 10, 99

interface e2/1, e2/2
channel-group 10 force mode active



feature otv

vlan 10

otv site-vlan 99
otv site-identifier 0000.0000.0002

interface Overlay 1
otv join-interface ethernet 1/9
otv control-group 239.1.1.1
otv data-group 232.1.1.0/24
otv extend-vlan 10

interface port-channel 10
switchport
switchport mode trunk
switchport trunk allowed vlan 10, 99

interface e2/1, e2/2
channel-group 10 force mode active



[West DC :: Site ID 1, Site VLAN 99 | East DC :: Site ID 2, Site VLAN 99]
Step 1 :: enable OTV feature
Step 2 :: configure site-vlan
Step 3 :: enable site-vlan on L2 trunks (make vlan active)
Step 4 :: configure site-identifier
Step 5 :: configure OTV Overlay Interface
OTV Characteristics
2-wide 7k Aggregation VDC
Multi-homed OTV VDC
Multicast enabled transport
Extend VLAN 10
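With the overlay configured at both sites, a verification sketch from each OTV VDC; these commands confirm that the overlay is up, the remote edge devices are adjacent, and the extended VLAN has an AED elected:

show otv
show otv adjacency
show otv vlan
show otv site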
OTV Configuration
The Layer 2 links are known as internal interfaces and are used by the OTV edge device to learn the
MAC addresses of the site and forward Layer 2 traffic across the sites for the extended VLANs.
The Layer 3 link is known as the join interface, which OTV uses to perform IP-based virtualization to send and receive overlay traffic between sites. The IP address of this interface is used to advertise reachability of the MAC addresses present in the site. There is one join interface per OTV overlay; however, if multiple Layer 3 interfaces are present on the OTV edge device, the unicast extended traffic can get routed over any of these links.
OTV encapsulates packets into an IP header and sets the Don't Fragment (DF) bit for all OTV control and data packets crossing the transport network. The encapsulation adds 42 bytes to the original IP maximum transmission unit (MTU) size. So it is a best practice to configure the join interface and all
Layer 3 interfaces that face the IP core between the OTV edge devices with the max possible MTU size
supported by the transport.
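As a worked example of the arithmetic: a host sending standard 1500-byte IP packets produces 1542-byte OTV packets on the transport (1500 + 42 bytes of encapsulation), which is exactly the 1542-byte minimum MTU requirement noted earlier for the join and core interfaces. With mtu 9216 configured as in this guide, original packets up to 9174 bytes can be carried.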
OTV uses the site VLAN to allow multiple OTV edge devices within the site to talk to each other and determine the AED for the OTV-extended VLANs. It is a best practice to use a dedicated VLAN as the site VLAN. The site VLAN should not be extended and should be carried down to the aggregation layer across the vPC peer link.
NOTES
The OTV edge device is also configured with the overlay interface, which is
associated with the join interface to provide connectivity to the physical
transport network. The overlay interface is used by OTV to send and receive
Layer 2 frames encapsulated in IP packets. From the perspective of MAC-
based forwarding on the site, the overlay interface is simply another bridged
interface. However, no Spanning Tree Protocol packets or unknown unicast
packets are forwarded over the overlay interface.
Note: The overlay interface does not come up until you configure a multicast
group address and the site-VLAN has at least an active port on the device.
A VLAN is not advertised on the overlay network; therefore, forwarding
cannot occur over the overlay network unless the VLANs are explicitly
extended. Once the VLAN is extended, the OTV edge device will begin
advertising locally learned MAC addresses on the overlay network.
A key advantage of using multicast is that it allows optimal multicast traffic replication to multiple sites and avoids the head-end replication that leads to suboptimal bandwidth utilization.
When sites are multihomed with OTV EDs, separation is achieved by electing
an authoritative edge device (AED) for each VLAN in the same site (site-id),
which is the only device that can forward the traffic for the extended VLAN
inside and outside the data center. The extended VLANs are split in odd and
even and automatically assigned to the site's edge devices.
The multicast control group identifies the overlay; two different overlays must have two different multicast control groups. The control group is used for neighbor discovery and to exchange MAC address reachability. The data group, however, is an SSM (Source-Specific Multicast) group range, which is used to carry multicast data traffic generated by the sites.
In the aggregation layer, Protocol Independent Multicast (PIM) is configured
on all intra- and inter-data-center Layer 3 links to allow multicast states to be
built in the core network.
Since PIM sparse mode requires a rendezvous point (RP) to build a multicast tree, one of the aggregation switches in each data center is used as an RP. A local RP allows both local sources and receivers to join the local RP, rather than having to go to a different data center to reach an RP in order to build a shared tree. For more information about the MSDP and Anycast features of multicast, visit: http://www.cisco.com/en/US/docs/ios/solutions_docs/ip_multicast/White_papers/anycast.html
OTV Configuration :: Unicast-Only Mode
feature otv

vlan 10, 99

otv site-vlan 99
otv site-identifier 0000.0000.0002

interface Overlay 1
otv join-interface ethernet 1/9
otv use-adjacency-server [x] [y] unicast-only
otv extend-vlan 10

interface e 1/9
mtu 9216
ip address [z] / 30
ip router ospf 1 area 0.0.0.0
ip ospf network point-to-point
feature otv

vlan 10, 99

otv site-vlan 99
otv site-identifier 0000.0000.0001

interface Overlay 1
otv join-interface ethernet 1/9
otv use-adjacency-server [x] [y] unicast-only
otv extend-vlan 10

interface e 1/9
mtu 9216
ip address [w] / 30
ip router ospf 1 area 0.0.0.0
ip ospf network point-to-point
feature otv

vlan 10, 99

otv site-vlan 99
otv site-identifier 0000.0000.0001

interface Overlay 1
otv join-interface ethernet 1/9
otv adjacency-server unicast-only
otv extend-vlan 10


interface e 1/9
mtu 9216
ip address [x] / 30
ip router ospf 1 area 0.0.0.0
ip ospf network point-to-point
feature otv

vlan 10, 99

otv site-vlan 99
otv site-identifier 0000.0000.0002

interface Overlay 1
otv join-interface ethernet 1/9
otv adjacency-server unicast-only
otv use-adjacency-server [x] unicast-only
otv extend-vlan 10

interface e 1/9
mtu 9216
ip address [y] / 30
ip router ospf 1 area 0.0.0.0
ip ospf network point-to-point



Site ID 1
Site VLAN 99
Site ID 2
Site VLAN 99
Step 1 :: enable OTV
Step 2 :: configure site-vlan, site-id, Overlay Interface
Step 3 :: define role of adjacency server [primary]
Step 4 :: define role of adjacency server [secondary]
Step 5 :: define all other edge devices as clients
Site ID 1
Site VLAN 99
Site ID 2
Site VLAN 99
Primary Adjacency Server Secondary Adjacency Server
Assume :: enable site-vlan on L2 trunks (make vlan active)

interface port-channel 10
switchport
switchport mode trunk
switchport trunk allowed vlan 10, 99
Primary Adjacency Server :: Join Interface [x]
Secondary Adjacency Server :: Join Interface [y]
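In unicast-only mode, the neighbor list can be verified the same way as in multicast mode; a minimal sketch, run on any edge device once the adjacency servers are reachable:

show otv adjacency
show otv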

OTV Configuration :: Unicast-Only Mode
Two pieces of configuration are required to deploy OTV across a unicast-only transport infrastructure: the first defines the role of Adjacency Server, while the other is required in each OTV edge device not acting as an Adjacency Server (i.e., acting as a client). All client OTV edge devices are configured with the address of the Adjacency Server; all other adjacency addresses are discovered dynamically. Thus, when a new site is added, only the OTV edge devices for the new site need to be configured with the Adjacency Server addresses. No other sites need additional configuration.

The recommendation is usually to deploy a redundant pair of Adjacency Servers in separate DC sites. The configuration on the Primary Adjacency Server is very simple and limited to enabling the AS functionality (otv adjacency-server command). The same command is also required on the Secondary Adjacency Server device, which additionally needs to point to the Primary AS (leveraging the otv use-adjacency-server command). Finally, the generic OTV edge devices must be configured to use both the Primary and Secondary Adjacency Servers. The sequence of adjacency server addresses in the configuration determines the primary or secondary adjacency server role. This order is relevant, since an OTV edge device will always use the OTV neighbor list (oNL) provided by the Primary Adjacency Server, unless it detects that the primary is no longer available (control plane Hellos are continuously exchanged as keepalives between each OTV device and the Adjacency Servers).
NOTES
ip access-list ALL_IPs
10 permit ip any any
mac access-list ALL_MACs
10 permit any any
ip access-list HSRP_IP
10 permit udp any 224.0.0.2/32 eq 1985
20 permit udp any 224.0.0.102/32 eq 1985
mac access-list HSRP_VMAC
10 permit 0000.0c07.ac00 0000.0000.00ff any
20 permit 0000.0c9f.f000 0000.0000.0fff any
arp access-list HSRP_VMAC_ARP
10 deny ip any mac 0000.0c07.ac00 ffff.ffff.ff00
20 deny ip any mac 0000.0c9f.f000 ffff.ffff.f000
30 permit ip any mac any

feature dhcp
ip arp inspection filter HSRP_VMAC_ARP vlan 10

vlan access-map HSRP_Localization 10
match mac address HSRP_VMAC
match ip address HSRP_IP
action drop
vlan access-map HSRP_Localization 20
match mac address ALL_MACs
match ip address ALL_IPs
action forward

vlan filter HSRP_Localization vlan-list 10

mac-list OTV_HSRP_VMAC_deny seq 10 deny 0000.0c07.ac00 ffff.ffff.ff00
mac-list OTV_HSRP_VMAC_deny seq 11 deny 0000.0c9f.f000 ffff.ffff.f000
mac-list OTV_HSRP_VMAC_deny seq 20 permit 0000.0000.0000 0000.0000.0000

route-map OTV_HSRP_filter permit 10
match mac-list OTV_HSRP_VMAC_deny
feature otv

vlan 10, 99

otv site-vlan 99
otv site-identifier 0000.0000.0001

interface Overlay 1
otv join-interface ethernet 1/9
otv control-group 239.1.1.1
otv data-group 232.1.1.0/24
otv extend-vlan 10

interface port-channel 10
description ** OTV Internal Interface **
switchport
switchport mode trunk
switchport trunk allowed vlan 10, 99
mac packet-classify

otv-isis default
vpn Overlay1
redistribute filter route-map OTV_HSRP_filter
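After the filters are applied, a verification sketch; the expectation is that the HSRP vMAC (0000.0c07.acXX / 0000.0c9f.fXXX) still appears in the local MAC table but is no longer advertised across the overlay (check the remote edge devices):

show mac address-table vlan 10
show otv route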
Step 1 :: create OTV HSRP access-lists (VACL)
Step 2 :: create OTV HSRP localization filters
Filter out HSRP v1 and v2
Filter out Gratuitous ARP
Step 3 :: create route-map to prevent advertisements of HSRP VMACs
OTV Configuration :: HSRP Filtering
The filtering of FHRP messages across the overlay is a
critical functionality to be enabled, because it allows
applying the same FHRP configuration in different sites.
The end result is that the same default gateway is
available (i.e. characterized by the same virtual IP and
virtual MAC addresses) in each data center. This
capability optimizes the outbound traffic flows (server to
client direction); but this does not solve the mechanism
to control and improve the ingress traffic (client to server
direction) as traffic will continue to go via the original DC;
solutions to solve this challenge include DNS Based,
Route Injection, or LISP.
The VLAN ACL is required to identify the traffic that needs to be filtered. This configuration applies to the HSRP v1 and v2 protocols. After applying the configuration on the OTV VDC to the set of VLANs that are trunked from the Agg VDC to the OTV VDC, all HSRP messages will be dropped once received by the OTV VDC. It is also required to apply a specific filter to ensure suppression of the Gratuitous ARP (GARP) messages that may be received across the OTV Overlay from the remote sites.

Even though HSRP traffic is filtered via the VACL, the vMAC used to source the HSRP packets is still learned by the OTV VDC. Therefore, OTV advertises this MAC address information to the other sites via an IS-IS update. While this in itself is not causing harm, it would cause the remote OTV edge devices to see constant MAC moves for the vMAC (from the internal interface to the overlay interface and vice versa).

IP ACLs to drop HSRP Hellos and forward other traffic
MAC ACLs to drop non-IP HSRP traffic and forward other traffic
Create the VACL and apply the VACL to each extended VLAN
Feature dhcp is required for ARP inspection; create the ARP access list to deny traffic from the virtual MAC and apply the ARP ACL to each extended VLAN
Create the mac-list to deny advertising of the virtual MAC, create the route map, and apply the route map to each overlay
feature otv

vlan 10, 99

key chain OTV-Key
key 1
key-string 0 Cisc0!

otv site-vlan 99
otv site-identifier 0000.0000.0001

interface Overlay 1
otv join-interface ethernet 1/9
otv control-group 239.1.1.1
otv data-group 232.1.1.0/24
otv extend-vlan 10
otv isis authentication-type md5
otv isis authentication key-chain OTV-Key

otv-isis default
vpn Overlay1
authentication-check
authentication-type md5
authentication key-chain OTV-Key


Step 1 :: configure OTV key chain
Step 2 :: apply md5 authentication to OTV Hellos
Step 3 :: apply md5 authentication to OTV PDUs
OTV Configuration :: Authentication
OTV supports authentication of Hello messages along with authentication of Protocol Data Units (PDUs).
feature otv

vlan 10, 99

key chain OTV-Key
key 1
key-string 0 Cisc0!

otv site-vlan 99
otv site-identifier 0000.0000.0001

interface Overlay 1
otv join-interface ethernet 1/9
otv control-group 239.1.1.1
otv data-group 232.1.1.0/24
otv extend-vlan 10
otv isis authentication-type md5
otv isis authentication key-chain OTV-Key

otv-isis default
vpn Overlay1
authentication-check
authentication-type md5
authentication key-chain OTV-Key
Perform Configuration Steps at
Both DC Sites (East & West)
feature otv

vlan 10, 99, 100

otv site-vlan 99
otv site-identifier 0000.0000.0001

interface Overlay 1
otv join-interface ethernet 1/9
otv control-group 239.1.1.1
otv data-group 232.1.1.0/24
otv extend-vlan 10
otv vlan mapping 10 to 100
Step 1 :: configure vlan mapping
OTV VLAN Translation :: Translation Through Transit VLAN
When a different VLAN is used at multiple sites
A mapped VLAN cannot be extended on another site
VLAN mappings have a one-to-one relationship
VLAN mappings can be added or removed without
impacting all mappings on the overlay interface
feature otv

vlan 20, 99, 100

otv site-vlan 99
otv site-identifier 0000.0000.0002

interface Overlay 1
otv join-interface ethernet 1/9
otv control-group 239.1.1.1
otv data-group 232.1.1.0/24
otv extend-vlan 20
otv vlan mapping 20 to 100
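A short verification sketch for the mapping; this command displays the local-to-remote VLAN translation relationships on the overlay:

show otv vlan-mapping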
[Diagram :: West DC extends VLAN 10 and East DC extends VLAN 20; both are mapped to transit VLAN 100 on the overlay]
feature otv

vlan 10, 99

interface loopback 10
ip address [.]/32

otv site-vlan 99
otv site-identifier 0000.0000.0001

interface Overlay 1
otv join-interface ethernet 1/9
otv control-group 239.1.1.1
otv data-group 232.1.1.0/24
otv broadcast-group 224.2.2.0
otv extend-vlan 10

Step 1 :: configure broadcast group
OTV Configuration :: Dedicated Broadcast Group
The otv broadcast-group configuration line goes under the overlay interface
Optional command
Useful for QoS purposes
The broadcast group needs to be configured for all OTV
Edge Devices connected to the OTV Overlay network
feature otv

vlan 10, 99

interface loopback 10
ip address [.]/32

otv site-vlan 99
otv site-identifier 0000.0000.0001

interface Overlay 1
otv join-interface ethernet 1/9
otv control-group 239.1.1.1
otv data-group 232.1.1.0/24
otv broadcast-group 224.2.2.0
otv extend-vlan 10

Perform Configuration Steps at
Both DC Sites (East & West)
feature otv

vlan 10, 99

otv site-vlan 99
otv site-identifier 0000.0000.0001

otv flood mac 1111.2222.0101 vlan 10

interface Overlay 1
otv join-interface ethernet 1/9
otv control-group 239.1.1.1
otv data-group 232.1.1.0/24
otv extend-vlan 10

Step 1 :: configure static OTV flood
[enabled per mac address]

OTV Configuration :: Selective Unicast Flooding
Normally, unknown unicast Layer 2 frames are not flooded between OTV sites, and MAC addresses are not learned across the overlay interface. Any unknown unicast
messages that reach the OTV edge device are blocked from crossing the logical overlay, allowing OTV to prevent Layer 2 faults from spreading to remote sites.

The end points connected to the network are assumed to not be silent or unidirectional. However, some data center applications require unknown unicast traffic to be flooded over the overlay to all the data centers, where end points may be silent. Beginning with Cisco NX-OS Release 6.2(2), you can configure selective unicast flooding to flood traffic destined for a specified MAC address to all other edge devices in the OTV overlay network.
feature otv

vlan 10, 99

otv site-vlan 99
otv site-identifier 0000.0000.0002

interface Overlay 1
otv join-interface ethernet 1/9
otv control-group 239.1.1.1
otv data-group 232.1.1.0/24
otv extend-vlan 10
OTV encapsulation is done on M-series modules

Note: The control-plane protocol used by OTV is IS-IS. However, IS-IS does not need to be explicitly configured. It runs in
the background once OTV is enabled.

In a multi-tenancy environment, the same OTV VDC can be configured with multiple overlays to provide a segmented
Layer 2 extension for different tenants or applications.

When multiple data center sites are interconnected, OTV operations can benefit from the presence of multicast in the core. Multicast is not mandatory, however, since unicast mode can be used as well; it mainly pays off as the number of sites grows.

The same OTV VDCs can be used by multiple VDCs deployed at the aggregation tier, as well as by other Layer 2 switches
connected to the OTV VDCs. This is done by configuring multiple OTV overlays. It's important to note that the extended
VLANs within these multiple overlays should not overlap.

A separate Layer 3 link between the two aggregation VDCs should be configured, as per best practices, to carry any Layer 3 traffic between them.

The overlay interface will not come up until you configure a multicast group address and the site-VLAN has at least an
active port on the OTV edge device.

Support for loopback interfaces as OTV Join interfaces is planned for 6.2(2) and later code releases.
Strong Recommendations and Key Notes
FHRP Filtering Note: It is important to stress how this outbound path (server to client) optimization functionality should be
deployed in conjunction with an equivalent one optimizing inbound traffic (client to server) flows to avoid asymmetric traffic
behavior (this would be highly undesirable especially in deployments leveraging stateful services across data centers).
White Paper discussing inbound traffic optimization solutions ::
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/DCI/4.0/EMC/EMC.pdf

It is important to note that OTV support requires the use of the new Transport Services (TRS) license. Depending on the specifics of the OTV deployment, the Advanced Services license may be required as well to provide Virtual Device Context (VDC) support.

Before configuring OTV you should review and implement Cisco recommended STP best practices at each site. OTV is
independent from STP but it greatly benefits from a stable and robust Layer 2 topology.

If the data centers are OTV multi-homed, it is a recommended best practice to bring the overlay up in a single-homed configuration first, by enabling OTV on a single edge device at each site. After the OTV connection has been tested single-homed, enable the functionality on the other edge devices of each site.

OTV currently enforces switch-virtual-interface (SVI) separation for the VLANs being extended across the OTV link,
meaning that OTV is usually in its own VDC. With the VDC license on the Cisco Nexus 7000 you have the flexibility to
have SVIs in other VDCs and have a dedicated VDC for OTV functions.

Configure the join interface and all Layer 3 interfaces that face the IP core between the OTV edge devices with the highest
maximum transmission unit (MTU) size supported by the IP core. OTV sets the Don't Fragment (DF) bit in the IP header
for all OTV control and data packets so the core cannot fragment these packets.

Strong Recommendations and Key Notes
With NX-OS 6.1 and earlier: Only one join interface can be specified per overlay; two methods are available
Configure a single join-interface, which is shared across multiple overlays
Configure a different join interface for each overlay, which increases the OTV reliability

For higher resiliency, you can use a port channel, but it is not mandatory. There are no requirements for 1 Gigabit-
Ethernet versus 10 Gigabit-Ethernet or dedicated versus shared mode.

The transport network must support PIM sparse mode (ASM) or PIM-Bidir multicast traffic.

OTV is compatible with a transport network configured only for IPv4. IPv6 is not supported.

Do not enable PIM on the join-interface.

Do not configure OTV on an F-series module.

Ensure the site identifier is configured and is the same for all edge devices on a site. OTV brings down all overlays when a
mismatched site identifier is detected from a neighbor edge device and generates a system message.

Mixing the Nexus 7000 and the ASR 1000 devices for OTV is not supported at this time when the devices will be placed
within the same site. However, using Cisco Nexus 7000s in one site and Cisco ASR 1000s at another site for OTV is fully
supported. For this scenario, please keep the separate scalability numbers in mind for the two different devices, because
you will have to account for the lowest common denominator.

Starting in NX-OS 5.2, the otv site-identifier command was introduced as a way to harden multi-homing for OTV. It is a configurable
option that must be the same for devices within the same data center and different between any devices that are in
different data centers. It specifies which site a particular OTV device is in so that two OTV devices in different sites cannot
join each other as a multihomed site. This command is now mandatory.
Strong Recommendations and Key Notes
Using Virtual Port Channels (vPCs) and OTV together provides an extra layer of resiliency and is thus recommended as a
best practice.

OTV & FabricPath: Because OTV encapsulation is done on M-series modules, OTV cannot read FabricPath packets.
Because of this restriction, terminating FabricPath and reverting to Classical Ethernet where the OTV VDC resides is
necessary. In addition, when running FabricPath in your network, we highly recommend that you use the spanning-tree
domain <id> command on all devices that are participating in these VLANs. This command speeds up convergence times
greatly.

Strong Recommendations and Key Notes
OTV adds 42 bytes of overhead to the packet IP MTU size
(Outer IP header + OTV shim) + (original L2 header without the 802.1Q header)
The 802.1Q header is removed and its VLAN field is copied over to the OTV shim header
The OTV shim header contains the VLAN and overlay ID number and is preceded by an external IP header
Consider jumbo MTU sizing along the path between the source and destination endpoints to account for the extra 42 bytes
OTV Encapsulation :: MAC in IP
20B (outer IP header) + 8B (OTV shim) + 14B* (original L2 header) = 42 bytes of total overhead

Encapsulated frame layout ::
DMAC (6B) | SMAC (6B) | EtherType (2B) | IP Header (20B) | OTV Shim (8B, carries the VLAN) | Original L2 Frame (14B* L2 header with the 802.1Q header removed, plus payload) | CRC (4B)

* The 4 bytes of the 802.1Q header have already been removed
How OTV Works :: Inter-Site Packet Flow (OTV Data Plane)
MAC Table (West DC)
VLAN MAC IF
10 MAC 1 Eth 1
10 MAC 2 IP B
10 MAC 3 Eth 3
Assumption :: New MACs were learned in the OTV-extended VLANs on the internal interfaces; an OTV update message was sent, replicated across the transport, and delivered to all remote OTV edge devices; the MACs learned through OTV are then imported into the MAC address tables of the OTV edge devices.
Step 1 :: The Layer 2 frame is received at the aggregation layer or OTV edge device. A traditional Layer 2 lookup is performed; the entry for Host B's MAC in the MAC table does not point to a local Ethernet interface (as it would for intra-site communication), but to the IP address of the remote OTV edge device that advertised that MAC's reachability information.
Step 2 :: The OTV edge device encapsulates the original Layer 2 frame; the source IP of the outer header is the local join interface and the destination IP is the remote edge device's join interface.
Step 3 :: The OTV-encapsulated frame (a regular unicast IP packet) is carried across the transport infrastructure and delivered to the remote OTV edge device.
Step 4 :: The remote OTV edge device decapsulates the frame, exposing the original Layer 2 packet.
Step 5 :: The OTV edge device performs another Layer 2 lookup on the original Ethernet frame and discovers that it is reachable through a physical interface, which means it is a MAC address local to the site.
Step 6 :: The frame is then delivered to the MAC destination of Host B
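To see this forwarding state on an edge device, a short sketch; MACs learned through OTV show the remote edge device (reached via the overlay) as the next hop rather than a local Ethernet interface:

show otv route
show mac address-table vlan 10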
MAC Table (East DC)
VLAN MAC IF
10 MAC 1 IP A
10 MAC 2 Eth 2
10 MAC 3 IP A

[Diagram :: MAC 1 (West DC, edge device IP A) sends to MAC 2 (East DC, edge device IP B); Layer 2 lookup and encapsulation at the West edge device, decapsulation and another Layer 2 lookup at the East edge device]

External (public)

OTV Best Practices Guide
http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/guide_c07-728315.pdf

OTV Technology Introduction and Deployment Considerations
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/DCI/whitepaper/DCI_1.html

Using OTV to Extend Layer 2 between Two Data Centers
http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/white_paper_c11-644634.html

Nexus 7000 NX-OS OTV Configuration Guides
http://www.cisco.com/en/US/docs/switches/datacenter/sw/nx-os/OTV/config_guide/b_Cisco_Nexus_7000_Series_NX-
OS_OTV_Configuration_Guide.html

Cisco Nexus 7000 NX-OS Verified Scalability Guide (OTV Limits)
http://www.cisco.com/en/US/docs/switches/datacenter/sw/verified_scalability/b_Cisco_Nexus_7000_Series_NX-
OS_Verified_Scalability_Guide.html#reference_18192F87114B45D9A40A41A0DEF3F74D

Cisco Live 365 (sign up & search session catalog for OTV)
https://ciscolive365.com/
BRKDCT-3103 :: Advanced OTV: Configure, Verify and Troubleshoot OTV in Your Network; Andy Gossett (CSE)
Additional Resources & Further Reading
Great External
Resources

Quick Start Guide :: Virtual Port Channel (vPC)
https://communities.cisco.com/docs/DOC-35728

Quick Start Guide :: FabricPath
https://communities.cisco.com/docs/DOC-35725l

Additional Resources & Further Reading

