
Implementing QoS with Nexus and NX-OS

BRKRST-2930

Follow us on Twitter for real time updates of the event:

@ciscoliveeurope, #CLEUR

Housekeeping

- We value your feedback: don't forget to complete your online session evaluations after each session, as well as the Overall Conference Evaluation, which will be available online from Thursday
- Visit the World of Solutions and Meet the Engineer
- Visit the Cisco Store to purchase your recommended readings
- Please switch off your mobile phones
- After the event, don't forget to visit Cisco Live Virtual: www.ciscolivevirtual.com


2012 Cisco and/or its affiliates. All rights reserved.

Cisco Public

Session Goal
This session will provide a technical description of the NX-OS QoS capabilities and the hardware implementation of QoS functions on the Nexus 7000, 5500/5000, 3000 and Nexus 2000. It will also include a design- and configuration-level discussion of best practices for using the Cisco Nexus family of switches to implement QoS for Medianet, in addition to the new QoS capabilities leveraged in the Data Centre to support FCoE, NAS, iSCSI and vMotion. This session is designed for network engineers involved in network switching design. A basic understanding of QoS and of the operation of the Nexus 2000/5000/5500/7000 series switches is assumed.



Implementing QoS with Nexus and NX-OS


Agenda
- Nexus and QoS: New QoS Requirements; New QoS Capabilities
- Understanding Nexus QoS Capabilities and Configuration: Nexus 7000; Nexus 5500; Nexus 2000; Nexus 3000
- Applications of QoS with Nexus: Converting a Voice/Video IOS (Catalyst 6500) QoS Configuration to an NX-OS (Nexus 7000) Configuration; Configuring Storage QoS Policies on Nexus 5500 and 7000 (FCoE & iSCSI)


Evolution of QoS Design


Switching Evolution and Specialization

Quality of Service is not just about protecting voice and video anymore.
- Campus specialization: desktop-based Unified Communications; blended wired and wireless access
- Data Center specialization: compute and storage virtualization; cloud computing (vMotion)
- Consolidation of more protocols onto the fabric: storage (FCoE, iSCSI, NFS); inter-process and compute communication (RCoE, vMotion, ...)

NX-OS QoS Design Requirements


Where are we starting from?
- VoIP and Video are now mainstream technologies
- There is an ongoing evolution to the full spectrum of Unified Communications
- High-definition executive communication applications require a stringent Service-Level Agreement (SLA):
  - Reliable service: a High Availability infrastructure
  - Application service management: QoS

NX-OS QoS Design Requirements


QoS for Voice and Video is implicit in current Networks
Application Class        | Per-Hop Behavior | Admission Control | Queuing & Dropping         | Application Examples
VoIP Telephony           | EF               | Required          | Priority Queue (PQ)        | Cisco IP Phones (G.711, G.729)
Broadcast Video          | CS5              | Required          | (Optional) PQ              | Cisco IP Video Surveillance / Cisco Enterprise TV
Realtime Interactive     | CS4              | Required          | (Optional) PQ              | Cisco TelePresence
Multimedia Conferencing  | AF4              | Required          | BW Queue + DSCP WRED       | Cisco Unified Personal Communicator, WebEx
Multimedia Streaming     | AF3              | Recommended       | BW Queue + DSCP WRED       | Cisco Digital Media System (VoDs)
Network Control          | CS6              |                   | BW Queue                   | EIGRP, OSPF, BGP, HSRP, IKE
Call-Signaling           | CS3              |                   | BW Queue                   | SCCP, SIP, H.323
Ops / Admin / Mgmt (OAM) | CS2              |                   | BW Queue                   | SNMP, SSH, Syslog
Transactional Data       | AF2              |                   | BW Queue + DSCP WRED       | ERP Apps, CRM Apps, Database Apps
Bulk Data                | AF1              |                   | BW Queue + DSCP WRED       | E-mail, FTP, Backup Apps, Content Distribution
Best Effort              | DF               |                   | Default Queue + RED        | Default Class
Scavenger                | CS1              |                   | Min BW Queue (Deferential) | YouTube, iTunes, BitTorrent, Xbox Live

NX-OS QoS Design Requirements


QoS for Voice and Video is implicit in current Networks
QoS strategies have expanded over time from a 4-class model to an 8-class model to a 12-class model:

- 4-Class Model: Realtime; Signaling / Control; Critical Data; Best Effort
- 8-Class Model: Voice; Interactive Video; Streaming Video; Call Signaling; Network Control; Critical Data; Best Effort; Scavenger
- 12-Class Model: Voice; Realtime Interactive; Broadcast Video; Multimedia Conferencing; Multimedia Streaming; Call Signaling; Network Control; Network Management; Transactional Data; Bulk Data; Best Effort; Scavenger

http://www.cisco.com/en/US/docs/solutions/Enterprise/WAN_and_MAN/QoS_SRND_40/QoSIntro_40.html#wp61135

NX-OS QoS Design Requirements


Attributes of Voice and Video
[Figure: Voice packets are small, constant-size audio samples (around 200 bytes) sent at a steady 20 msec interval; video is transmitted as video frames, each a burst of packets of varying size (roughly 200 to 1400 bytes), sent every 33 msec.]

NX-OS QoS Design Requirements


Trust Boundaries: what have we trusted?

The trust boundary is set at the access-edge switches, which distinguish:
- Conditionally trusted endpoints (example: IP Phone + PC)
- Secure endpoints (example: a software-protected PC with centrally-administered QoS markings)
- Unsecure endpoints

NX-OS QoS Design Requirements


What else do we need to consider?

The Data Center adds a number of new traffic types and requirements (no drop, IPC, storage, vMotion, ...) along with new protocols and mechanisms (802.1Qbb, 802.1Qaz, ECN, ...).

Spectrum of design evolution:
- Ultra Low Latency: queueing is designed out of the network whenever possible; nanoseconds matter
- HPC/GRID: low latency; bursty traffic (workload migration); IPC; iWARP & RCoE
- Virtualized Data Center: vMotion, iSCSI, FCoE, NAS, CIFS; multi-tenant applications; Voice & Video
- MSDC: ECN & Data Center TCP; Hadoop and incast loads on the server ports

NX-OS QoS Requirements


What do we trust and where do we classify and mark?

Converged and virtualized Data Centre architectures provide a new set of trust boundaries:
- The virtual switch extends the trust boundary into the memory space of the hypervisor
- Adapters provide for local classification, marking and queuing

[Figure: trust boundary in a virtualized vPC access layer - N7K: CoS/DSCP marking, queuing and classification; N5K: CoS/DSCP marking, queuing and classification; N2K: CoS marking; CNA/A-FEX: classification and marking; N1KV: classification, marking and queuing; CoS/DSCP-based queuing in the extended fabric.]

NX-OS QoS Requirements


CoS or DSCP?
We have non-IP-based traffic to consider again:
- FCoE: Fibre Channel over Ethernet
- RCoE: RDMA over Ethernet

DSCP is still marked, but CoS will be required and used in Nexus Data Center designs.

PCP/CoS | Network priority | Acronym | Traffic characteristics
1       | 0 (lowest)       | BK      | Background
0       | 1                | BE      | Best Effort
2       | 2                | EE      | Excellent Effort
3       | 3                | CA      | Critical Applications
4       | 4                | VI      | Video, < 100 ms latency
5       | 5                | VO      | Voice, < 10 ms latency
6       | 6                | IC      | Internetwork Control

(IEEE 802.1Q-2005)

NX-OS QoS Requirements


Where do we put the new traffic types?
In this example of a Virtualized Multi-Tenant Data Center there is a potential overlap/conflict with Voice/Video queueing assignments, e.g.:
- CoS 3: FCoE and Call Control
- CoS 5: NFS and Voice bearer traffic

Traffic Type   | Network Class         | CoS
Infrastructure | Control               | 6
Infrastructure | vMotion               | 4
Tenant         | Gold, Transactional   | 5
Tenant         | Silver, Transactional | 2
Tenant         | Bronze, Transactional | 1
Storage        | FCOE                  | 3
Storage        | NFS datastore         | 5
Non Classified | Data                  | 1

Class, Property, BW Allocation: Platinum, 10%; Silver, 20%; Gold, no drop, 30%; Bronze, 15%; Best Effort, 10%; No Drop, 15%; Silver; Best Effort
Implementing QoS with Nexus and NX-OS


Agenda
- Nexus and QoS: New QoS Requirements; New QoS Capabilities
- Understanding Nexus QoS Capabilities: Nexus 7000; Nexus 5500; Nexus 2000; Nexus 3000; Nexus 1000v
- Applications of QoS with Nexus: Voice and Video; Storage & FCoE; Future QoS Design Considerations (Data Center TCP, ECN, optimized TCP)

Data Center Bridging Control Protocol


DCBX Overview - 802.1Qaz
- Negotiates Ethernet capabilities (PFC, ETS, CoS values) between DCB-capable peer devices
- Simplifies management: allows for configuration and distribution of parameters from one node to another
- Responsible for logical link up/down signaling of Ethernet and Fibre Channel
- DCBX is LLDP with new TLV fields; the original pre-standard CIN (Cisco, Intel, Nuova) DCBX utilized additional TLVs
- DCBX negotiation failures result in:
  - per-priority-pause not enabled on CoS values
  - the vfc not coming up when DCBX is being used in an FCoE environment

dc11-5020-3# sh lldp dcbx interface eth 1/40
Local DCBXP Control information:
  Operation version: 00  Max version: 00  Seq no: 7  Ack no: 0
Type/Subtype  Version  En/Will/Adv  Config
006/000       000      Y/N/Y        00
<snip>

https://www.cisco.com/en/US/netsol/ns783/index.html

Priority Flow Control

FCoE Flow Control Mechanism 802.1Qbb


- Enables lossless Ethernet using PAUSE based on a CoS as defined in 802.1p
- When the link is congested, the CoS assigned to no-drop will be PAUSED
- Other traffic assigned to other CoS values will continue to transmit and rely on upper-layer protocols for retransmission
- Not only for FCoE traffic

[Figure: a Fibre Channel link using R_RDY/B2B credits for flow control alongside an Ethernet link carrying eight virtual lanes, where a per-priority PAUSE stops one transmit queue while the other transmit queues keep sending into their receive buffers.]
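On the Nexus 5000/5500, PFC is driven by marking a class as no-drop in a network-qos policy. A minimal sketch, assuming a user-defined class already mapped to qos-group 2 by a qos policy (the class and policy names here are illustrative, not from the slide):

```
class-map type network-qos nq-nodrop
  match qos-group 2
policy-map type network-qos pm-nodrop
  class type network-qos nq-nodrop
    pause no-drop
system qos
  service-policy type network-qos pm-nodrop
```

With this in place, the switch PAUSEs only the CoS value bound to qos-group 2 under congestion; all other classes keep transmitting.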

Enhanced Transmission Selection (ETS)


Bandwidth Management 802.1Qaz
- Prevents a single traffic class from hogging all the bandwidth and starving other classes
- When a given load doesn't fully utilize its allocated bandwidth, that bandwidth is available to other classes
- Helps accommodate classes of a bursty nature

[Figure: offered traffic vs. realized traffic utilization on a 10 GE link over intervals t1-t3, with HPC traffic (3G/s), storage traffic (3G/s) and LAN traffic (4-6G/s); when the LAN class bursts beyond its share, each class is held to its allocation, and bandwidth left unused by one class is available to the others.]
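The sharing behavior described above can be sketched in a few lines of Python. This is a simplified single-link model for illustration only (the guarantees and offered loads are made-up numbers), not the switch's actual DWRR scheduler:

```python
def ets_allocate(capacity, guarantees, offered):
    """Toy ETS model: give each class up to its guaranteed share of the link,
    then redistribute unused bandwidth to still-hungry classes in proportion
    to their guarantees."""
    alloc = {c: min(offered[c], capacity * guarantees[c]) for c in offered}
    leftover = capacity - sum(alloc.values())
    while leftover > 1e-9:
        hungry = {c: g for c, g in guarantees.items() if offered[c] - alloc[c] > 1e-9}
        if not hungry:
            break
        weight = sum(hungry.values())
        granted = {c: min(leftover * g / weight, offered[c] - alloc[c])
                   for c, g in hungry.items()}
        for c, share in granted.items():
            alloc[c] += share
        leftover -= sum(granted.values())
    return alloc

# A LAN class that only offers 4G frees bandwidth for an HPC class
# to exceed its 2G guarantee on a 10G link.
print(ets_allocate(10, {"hpc": 0.2, "storage": 0.3, "lan": 0.5},
                   {"hpc": 3, "storage": 3, "lan": 4}))
```

The key ETS property is visible in the output: no class is capped at its guarantee unless another class actually wants the bandwidth.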

Data Center TCP


Explicit Congestion Notification (ECN)
ECN is an extension to TCP that provides end-to-end congestion notification without dropping packets. Both the network infrastructure and the end hosts have to be capable of supporting ECN for it to function properly. ECN uses the two least-significant bits of the Diffserv field in the IP header to encode four different values:

- 00: Non ECN-Capable Transport
- 10: ECN-Capable Transport (0)
- 01: ECN-Capable Transport (1)
- 11: Congestion Encountered

During periods of congestion a router marks the ECN bits in the IP header as Congestion Encountered (11), indicating congestion to the receiving host, which should notify the source host to reduce its transmission rate.

ECN configuration: enabling ECN is very similar to the previous WRED example, so only the policy-map configuration with the ecn option is displayed for simplicity.

N3K-1(config)# policy-map type network-qos traffic-priorities
N3K-1(config-pmap-nq)# class type network-qos class-gold
N3K-1(config-pmap-nq-c)# congestion-control random-detect ecn

Notes:
- When configuring ECN, ensure there are not any queuing policy-maps applied to the interfaces; only configure the queuing policy under the system policy.
- WRED and ECN are always applied to the system policy.
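The four codepoints above occupy the two low-order bits of the former TOS byte, next to the 6-bit DSCP. As a quick illustration (plain Python, not NX-OS):

```python
# ECN codepoints: the two least-significant bits of the IP TOS/Diffserv byte.
ECN_CODEPOINTS = {
    0b00: "Not-ECT (non ECN-capable transport)",
    0b10: "ECT(0) (ECN-capable transport)",
    0b01: "ECT(1) (ECN-capable transport)",
    0b11: "CE (congestion encountered)",
}

def decode_tos(tos):
    """Split a TOS byte into its DSCP (high 6 bits) and ECN (low 2 bits) parts."""
    return tos >> 2, ECN_CODEPOINTS[tos & 0b11]

dscp, ecn = decode_tos(0xB9)  # 0xB9 = DSCP 46 (EF) with ECT(1) set
print(dscp, ecn)
```

A congested ECN-capable router would rewrite only the low two bits to 11 (CE), leaving the DSCP portion intact.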

Implementing QoS with Nexus and NX-OS


Agenda
- Nexus and QoS: Nexus and NX-OS New QoS Capabilities and Requirements
- Understanding Nexus QoS Capabilities: Nexus 7000; Nexus 5500; Nexus 2000; Nexus 3000; Nexus 1000v
- Applications of QoS with Nexus: Voice and Video; Storage & FCoE; Hadoop and Web 2.0; Future QoS Design Considerations (Data Center TCP, ECN, optimized TCP)

Nexus 7000 I/O Module Families


M and F Series Line Cards
- M family: L2/L3/L4 with large forwarding tables and a rich feature set (N7K-M148GT-11/N7K-M148GT-11L, N7K-M148GS-11/N7K-M148GS-11L, N7K-M108X2-12L, N7K-M132XP-12/N7K-M132XP-12L)
- F family: high performance, low latency, low power and a streamlined feature set (N7K-F132XP-15, N7K-F248XP-25 - now shipping)

Nexus 7000 M1 I/O Module


QoS Capabilities
Modular QoS CLI model: a 3-step model to configure and apply policies:
1. Define match criteria (class-map)
2. Associate actions with the match criteria (policy-map)
3. Attach the set of actions to an interface (service-policy)

There are two types of class-maps/policy-maps (C3PL provides the option of type):
- type qos: configures marking rules (the default type)
- type queuing: configures port-based QoS rules

[Figure: packet path through the ingress linecard (PHY, R2D2, EARL) and egress linecard (EARL, R2D2, PHY) - ingress queuing policies are enforced at the ingress port ASIC, ingress QoS policies at the ingress forwarding engine, egress QoS policies at the forwarding engine's egress pipe, and egress queuing policies at the egress port ASIC.]
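As a minimal sketch of the three steps, a marking policy might look like the following (the class, policy and interface names are illustrative only, not from the slides):

```
class-map type qos match-any cm-voice      ! step 1: define match criteria
  match dscp 46
policy-map type qos pm-mark                ! step 2: associate actions
  class cm-voice
    set cos 5
interface ethernet 1/1                     ! step 3: attach to an interface
  service-policy type qos input pm-mark
```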

Nexus 7000 M1 I/O Module


QoS Ingress Capabilities
Ingress QoS functions in packet-processing order, applied first at the ingress port ASIC, then at the ingress forwarding engine (ingress pipe):

Input Queuing & Scheduling (ingress port ASIC):
- CoS-to-queue mapping
- Bandwidth allocation (DWRR)
- Buffer allocation
- Congestion avoidance (WRED* and tail drop)
- Set CoS

Ingress Mutation:
- CoS mutation; IP Precedence mutation; IP DSCP mutation

Ingress Classification, class-map matching criteria:
- ACL-based (SMAC/DMAC, IP SA/DA, Protocol, L4 ports, L4 protocol fields)
- CoS; IP Precedence; DSCP

Marking:
- IP Precedence; IP DSCP; QoS Group; Discard Class

Ingress Policing:
- 1-rate 2-color and 2-rate 3-color aggregate policing
- Shared policers
- Color-aware policing
- Policing actions: transmit; drop; change CoS/IP Prec/DSCP; markdown; set QoS Group or Discard Class

* WRED on ingress GE ports only

Nexus 7000 M1 I/O Module


QoS Egress Capabilities
Egress QoS functions in packet-processing order, applied first at the ingress forwarding engine (egress pipe), then at the egress port ASIC:

Egress Classification, class-map matching criteria:
- ACL-based (L2 SA/DA, IP SA/DA, Protocol, L4 port range, L4 protocol-specific field)
- CoS; IP Precedence; DSCP; Protocols (non-IP); QoS Group; Discard Class

Marking:
- CoS; IP Precedence; IP DSCP

Egress Policing:
- 1-rate 2-color and 2-rate 3-color aggregate policing
- Shared policers
- Color-aware aggregate policing
- Policing actions: transmit; drop; change CoS/IP Prec/DSCP; markdown

Egress Mutation:
- CoS mutation; IP Precedence mutation; IP DSCP mutation

Output Queuing & Scheduling (egress port ASIC):
- CoS-to-queue mapping
- Bandwidth allocation
- Buffer allocation
- Congestion avoidance (WRED & tail drop)
- Priority queuing
- SRR (no PQ)

How to Configure Queuing on Nexus 7000


Key concept: queuing service policies. Queuing service policies leverage port ASIC capabilities to map traffic to queues and schedule packet delivery:
- Define queuing classes: class-maps that define the CoS-to-queue mapping (which CoS values go in which queues?)
- Define queuing policies: policy-maps that define how each class is treated (how does the queue belonging to each class behave?)
- Apply queuing service policies: service policies that apply the queuing policies (which policy is attached to which interface, in which direction?)

Queuing Classes
class-map type queuing: configures CoS-to-queue mappings. Queuing class-map names are static, based on port type and queue:

tstevens-7010(config)# class-map type queuing match-any ?
  1p3q4t-out-pq1        1p7q4t-out-q-default  1p7q4t-out-q6      8q2t-in-q1
  1p3q4t-out-q-default  1p7q4t-out-q2         1p7q4t-out-q7      8q2t-in-q2
  1p3q4t-out-q2         1p7q4t-out-q3         2q4t-in-q-default  8q2t-in-q3
  1p3q4t-out-q3         1p7q4t-out-q4         2q4t-in-q1         8q2t-in-q4
  1p7q4t-out-pq1        1p7q4t-out-q5         8q2t-in-q-default  8q2t-in-q5
                                                                 8q2t-in-q6
                                                                 8q2t-in-q7
tstevens-7010(config)# class-map type queuing match-any 1p3q4t-out-pq1
tstevens-7010(config-cmap-que)# match cos 7
tstevens-7010(config-cmap-que)#

The name encodes the port type: 2q4t-in = 1G ingress, 8q2t-in = 10G ingress, 1p3q4t-out = 1G egress, 1p7q4t-out = 10G egress.

Queuing class-maps are configurable only in the default VDC:
- Changes apply to ALL ports of the specified type in ALL VDCs
- Changes are traffic-disruptive for ports of the specified type

Queuing Policies
policy-map type queuing: defines per-queue behavior such as queue size, WRED and shaping:

tstevens-7010(config)# policy-map type queuing pri-q
tstevens-7010(config-pmap-que)# class type queuing 1p3q4t-out-pq1
tstevens-7010(config-pmap-c-que)# ?
  bandwidth  exit  no  priority  queue-limit  random-detect  set  shape
tstevens-7010(config-pmap-c-que)#

Note that some sanity checks are only performed when you attempt to tie the policy to an interface (e.g., WRED on ingress 10G ports).

Queue Attributes
- priority: defines the queue as the priority queue
- bandwidth: defines WRR weights for each queue
- shape: defines SRR weights for each queue (note: enabling shaping disables PQ support for that port)
- queue-limit: defines queue size and tail-drop thresholds
- random-detect: sets WRED thresholds for each queue (note: WRED and tail-drop parameters are mutually exclusive on a per-queue basis)

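Putting the attributes together, an egress queuing policy for a 1G port might look like this sketch (the bandwidth figures are illustrative only; it uses just the priority and bandwidth attributes listed above):

```
policy-map type queuing my-out-q
  class type queuing 1p3q4t-out-pq1
    priority                          ! strict-priority queue
  class type queuing 1p3q4t-out-q2
    bandwidth percent 40              ! WRR weight
  class type queuing 1p3q4t-out-q3
    bandwidth percent 30
  class type queuing 1p3q4t-out-q-default
    bandwidth percent 30
```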

Queuing Service Policies


service-policy type queuing: attaches a queuing policy-map to an interface.
- Queuing policies are always tied to a physical port
- No more than one input and one output queuing policy per port

tstevens-7010(config)# int e1/1
tstevens-7010(config-if)# service-policy type queuing input my-in-q
tstevens-7010(config-if)# service-policy type queuing output my-out-q
tstevens-7010(config-if)#

QoS Golden Rules


Assuming DEFAULTS:
- For bridged traffic, CoS is preserved and DSCP is unmodified
- For routed traffic, DSCP is preserved, and the three most significant DSCP bits (as defined by RFC 2474) are copied to CoS
  - For example, DSCP 40 (b101000) becomes CoS 5 (b101)
- Changes to the default queuing policies, or the application of QoS marking policies, can modify this behavior
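The default DSCP-to-CoS derivation for routed traffic is just the top three DSCP bits; in Python:

```python
def default_cos(dscp):
    """Default derivation for routed traffic: CoS is taken from the
    three most-significant bits of the 6-bit DSCP value."""
    return (dscp & 0x3F) >> 3

print(default_cos(40))  # DSCP 40 (CS5, b101000) -> CoS 5
print(default_cos(46))  # DSCP 46 (EF, b101110) -> CoS 5
```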

Implementing QoS with Nexus and NX-OS


Agenda
- Nexus and QoS: Nexus and NX-OS New QoS Capabilities and Requirements
- Understanding Nexus QoS Capabilities: Nexus 7000; Nexus 5500; Nexus 2000; Nexus 3000
- Applications of QoS with Nexus: Voice and Video; Storage & FCoE; Hadoop and Web 2.0

Nexus 5000/5500 QoS


QoS Capabilities and Configuration
Nexus 5000 supports a new set of QoS capabilities designed to provide per-system-class traffic control:
- Lossless Ethernet: Priority Flow Control (IEEE 802.1Qbb)
- Traffic protection: bandwidth management (IEEE 802.1Qaz)
- Configuration signaling to end points: DCBX (part of IEEE 802.1Qaz)

These new capabilities are added to and managed by the common Cisco MQC (Modular QoS CLI), which defines a three-step configuration model:
1. Define matching criteria via a class-map
2. Associate an action with each defined class via a policy-map
3. Apply the policy to the entire system or to an interface via a service-policy

Nexus 5000/7000 leverage the MQC qos-group capabilities to identify and define traffic in policy configuration.

Nexus 5000/5500 QoS


Packet Forwarding: Ingress Queuing
- In typical Data Center access designs, multiple ingress access ports transmit to a few uplink ports
- Nexus 5000 and 5500 utilize an ingress queuing architecture: traffic is queued in ingress interface buffers until the egress port is free to transmit, providing cumulative scaling of buffers for congested ports
- Ingress queuing provides an additive effect: the total queue size available is equal to [number of ingress ports x queue depth per port]
- Statistically, ingress queuing provides the same advantages as shared-buffer memory architectures

Nexus 5000/5500 QoS


Virtual Output Queues
- Nexus 5000 and 5500 use an 8-queue QoS model for unicast traffic
- Traffic is queued in the ingress buffer until the egress port is free to transmit the packet
- To prevent head-of-line blocking (HOLB), Nexus 5000 and 5500 use a Virtual Output Queue (VoQ) model
- Each ingress port has a unique set of 8 virtual output queues for every egress port (1024 ingress VoQs = 128 destinations x 8 classes on every ingress port)
- If queue 0 is congested for one egress port (e.g., Eth1/20), traffic in queue 0 for all the other egress ports (e.g., Eth1/8) can still be sent to the unified crossbar fabric
- The ingress buffer is common and shared; VoQs are pointer lists, not physical buffers
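The HOLB-avoidance property can be sketched with a toy model: the ingress port keeps a separate queue per (egress port, class), so a congested egress queue only stalls its own VoQ. The names and structure are illustrative only, not the actual ASIC implementation:

```python
from collections import defaultdict, deque

class VoqIngressPort:
    """Toy VoQ model: one virtual output queue per (egress port, class)."""
    def __init__(self):
        self.voqs = defaultdict(deque)  # (egress_port, qos_class) -> packets

    def enqueue(self, egress_port, qos_class, packet):
        self.voqs[(egress_port, qos_class)].append(packet)

    def dequeue_for(self, egress_port, qos_class):
        """Called when the fabric grants access toward one egress queue;
        other VoQs are untouched, so congestion on Eth1/20 queue 0 never
        delays traffic sitting in the VoQ for Eth1/8 queue 0."""
        q = self.voqs[(egress_port, qos_class)]
        return q.popleft() if q else None

port = VoqIngressPort()
port.enqueue("Eth1/20", 0, "pkt-a")   # Eth1/20 queue 0 congested: no grant yet
port.enqueue("Eth1/8", 0, "pkt-b")    # Eth1/8 queue 0 is free
print(port.dequeue_for("Eth1/8", 0))  # pkt-b forwarded despite Eth1/20 backlog
```

With a single FIFO per ingress port, "pkt-b" would have been stuck behind "pkt-a"; per-destination queues remove that dependency.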

Nexus 5000/5500 QoS


QoS Policy Types
There are three QoS policy types used to define system behavior (qos, queuing, network-qos), and three policy attachment points: the ingress interface, the system as a whole (defines global behavior), and the egress interface.

Policy Type | Function                                                                   | Attach Points
qos         | Define traffic classification rules                                        | system qos; ingress interface
queuing     | Strict Priority queue; Deficit Weighted Round Robin                        | system qos; ingress interface; egress interface
network-qos | System class characteristics (drop or no-drop, MTU), buffer size, marking  | system qos

Nexus 5500 QoS


QoS Defaults
- QoS is enabled by default (it is not possible to turn it off)
- Three default classes of service are defined when the system boots up: two for control traffic (CoS 6 & 7) and the default Ethernet class (class-default, all others)
- The Cisco Nexus 5500 switch supports five user-defined classes plus the one default drop system class
- FCoE queues are not pre-allocated; when configuring FCoE, the predefined service policies must be added to existing QoS configurations:

# Predefined FCoE service policies
service-policy type qos input fcoe-default-in-policy
service-policy type queuing input fcoe-default-in-policy
service-policy type queuing output fcoe-default-out-policy
service-policy type network-qos fcoe-default-nq-policy

Nexus 5500 Series


Layer 3 QoS Configuration
- Internal QoS information determined by the ingress Carmel (UPC) ASIC is not passed to the Lithium L3 ASIC
- You therefore need to mark all routed traffic with a dot1p CoS value, which is used to queue traffic to and from the Lithium L3 ASIC and to restore the qos-group for egress forwarding
- It is mandatory to set the CoS for the frame in the network-qos policy, with a one-to-one mapping between a qos-group and a CoS value
- Classification can be applied to physical interfaces (L2 or L3, including L3 port-channels), not to SVIs
- On initial ingress, packet QoS is matched and the packet is associated with a qos-group for queuing and policy enforcement; if traffic is congested on ingress to the L3 ASIC, it is queued on the ingress UPC ASIC
- The routed packet is queued on egress from Lithium based on dot1p

class-map type network-qos nqcm-grp2
  match qos-group 2
class-map type network-qos nqcm-grp4
  match qos-group 4
policy-map type network-qos nqpm-grps
  class type network-qos nqcm-grp2
    set cos 4
  class type network-qos nqcm-grp4
    set cos 2

Nexus 5500 Series


Layer 3 QoS Configuration
- Apply type qos and network-qos policies for classification on the L3 interfaces and on the L2 interfaces (or simply system-wide)
- Apply the type queuing policy at the system level in the egress direction (output)
- Trident has CoS queues associated with every interface: 8 unicast CoS queues and 4 multicast CoS queues
- The individual dot1p priorities are mapped one-to-one to the unicast CoS queues, which has the result of dedicating a queue to every traffic class
- With only 4 multicast queues available, the user needs to explicitly map dot1p priorities to the multicast queues with wrr-queue cos-map <queue ID> <CoS map>:

Nexus-5500(config)# wrr-queue cos-map 0 1 2 3
Nexus-5500(config)# sh wrr-queue cos-map
MCAST Queue ID    Cos Map
0                 0 1 2 3
1
2                 4 5
3                 6 7

Nexus 5000/5500 QoS


Mapping the switch architecture to show queuing:

dc11-5020-4# sh queuing int eth 1/39
Interface Ethernet1/39 TX Queuing
  qos-group  sched-type  oper-bandwidth
      0       WRR            50
      1       WRR            50

Interface Ethernet1/39 RX Queuing
  qos-group 0
  q-size: 243200, HW MTU: 1600 (1500 configured)
  drop-type: drop, xon: 0, xoff: 1520
  Statistics:
    Pkts received over the port            : 85257
    Ucast pkts sent to the cross-bar       : 930
    Mcast pkts sent to the cross-bar       : 84327
    Ucast pkts received from the cross-bar : 249
    Pkts sent to the port                  : 133878
    Pkts discarded on ingress              : 0
  Per-priority-pause status : Rx (Inactive), Tx (Inactive)
<snip: other classes repeated>
Total Multicast crossbar statistics:
  Mcast pkts received from the cross-bar   : 283558

"Pkts discarded on ingress" counts packets arriving on this port but dropped from the ingress queue due to congestion on the egress port.

Configuring QoS on the Nexus 5500


Create New System Class
Step 1: define the qos class-maps. This creates two system classes for traffic with different source address ranges:

N5k(config)# ip access-list acl-1
N5k(config-acl)# permit ip 100.1.1.0/24 any
N5k(config-acl)# exit
N5k(config)# ip access-list acl-2
N5k(config-acl)# permit ip 200.1.1.0/24 any
N5k(config)# class-map type qos class-1
N5k(config-cmap-qos)# match access-group name acl-1
N5k(config-cmap-qos)# class-map type qos class-2
N5k(config-cmap-qos)# match access-group name acl-2

Supported matching criteria:

N5k(config)# class-map type qos class-1
N5k(config-cmap-qos)# match ?
  access-group  Access group
  cos           IEEE 802.1Q class of service
  dscp          DSCP in IP(v4) and IPv6 packets
  ip            IP
  precedence    Precedence in IP(v4) and IPv6 packets
  protocol      Protocol

Step 2: define the qos policy-map. The qos-group range for a user-configured system class is 2-5:

N5k(config)# policy-map type qos policy-qos
N5k(config-pmap-qos)# class type qos class-1
N5k(config-pmap-c-qos)# set qos-group 2
N5k(config-pmap-c-qos)# class type qos class-2
N5k(config-pmap-c-qos)# set qos-group 3

Step 3: apply the qos policy-map under system qos or an interface. A policy under system qos is applied to all interfaces; if the same type of policy is applied under both system qos and an interface, the interface policy is preferred:

N5k(config)# system qos
N5k(config-sys-qos)# service-policy type qos input policy-qos
N5k(config)# interface e1/1-10
N5k(config-if-range)# service-policy type qos input policy-qos
Configuring QoS on the Nexus 5500


Create a New System Class (continued)

Step 4: define the network-qos class-maps. match qos-group is the only matching option for a network-qos class-map; the qos-group value is set by the qos policy-map in the previous step:

N5k(config)# class-map type network-qos class-1
N5k(config-cmap-nq)# match qos-group 2
N5k(config-cmap-nq)# class-map type network-qos class-2
N5k(config-cmap-nq)# match qos-group 3

Step 5: define the network-qos policy-map. A policy-map of type network-qos is used to configure a no-drop class, MTU, ingress buffer size and 802.1p marking; no action tied to a class indicates default network-qos parameters:

N5k(config)# policy-map type network-qos policy-nq
N5k(config-pmap-nq)# class type network-qos class-1
N5k(config-pmap-nq-c)# class type network-qos class-2

Step 6: apply the network-qos policy-map under the system qos context:

N5k(config-pmap-nq-c)# system qos
N5k(config-sys-qos)# service-policy type network-qos policy-nq
N5k(config-sys-qos)#

Default network-qos parameters:

Parameter           | Default Value
Class Type          | Drop class
MTU                 | 1538
Ingress Buffer Size | 20.4 KB
Marking             | No marking

Configuring QoS on the Nexus 5500


Strict Priority and Bandwidth Sharing
Create the new system class using the qos and network-qos policy-maps (previous two slides). Then define and apply a policy-map type queuing to configure strict priority and bandwidth sharing. Verify the queuing and bandwidth allocation with the command show queuing interface.
! Define queuing class-maps
N5k(config)# class-map type queuing class-1
N5k(config-cmap-que)# match qos-group 2
N5k(config-cmap-que)# class-map type queuing class-2
N5k(config-cmap-que)# match qos-group 3
N5k(config-cmap-que)# exit
! Define the queuing policy-map
N5k(config)# policy-map type queuing policy-BW
N5k(config-pmap-que)# class type queuing class-1
N5k(config-pmap-c-que)# priority
N5k(config-pmap-c-que)# class type queuing class-2
N5k(config-pmap-c-que)# bandwidth percent 40
N5k(config-pmap-c-que)# class type queuing class-fcoe
N5k(config-pmap-c-que)# bandwidth percent 40
N5k(config-pmap-c-que)# class type queuing class-default
N5k(config-pmap-c-que)# bandwidth percent 20
! Apply the queuing policy under system qos or an egress interface
N5k(config-pmap-c-que)# system qos
N5k(config-sys-qos)# service-policy type queuing output policy-BW
N5k(config-sys-qos)#

Configuring QoS on the Nexus 5500


Set Jumbo MTU
The Nexus 5000 supports a different MTU for each system class. MTU is defined in the network-qos policy-map; there is no interface-level MTU support on the Nexus 5000. The following example configures jumbo MTU for all interfaces:
N5k(config)# policy-map type network-qos policy-MTU
N5k(config-pmap-uf)# class type network-qos class-default
N5k(config-pmap-uf-c)# mtu 9216
N5k(config-pmap-uf-c)# system qos
N5k(config-sys-qos)# service-policy type network-qos policy-MTU
N5k(config-sys-qos)#
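Because MTU is set per system class, jumbo frames can also be enabled for a single user-defined class instead of class-default. A sketch, assuming the class-1 / qos-group 2 system class created on the earlier slides already exists:

```
N5k(config)# policy-map type network-qos policy-MTU
N5k(config-pmap-nq)# class type network-qos class-1
N5k(config-pmap-nq-c)# mtu 9216
N5k(config-pmap-nq-c)# system qos
N5k(config-sys-qos)# service-policy type network-qos policy-MTU
```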


Configuring QoS on the Nexus 5500


Adjust N5k Ingress Buffer Size
Step 1 Define qos class-map
N5k(config)# ip access-list acl-1
N5k(config-acl)# permit ip 100.1.1.0/24 any
N5k(config-acl)# exit
N5k(config)# ip access-list acl-2
N5k(config-acl)# permit ip 200.1.1.0/24 any
N5k(config)# class-map type qos class-1
N5k(config-cmap-qos)# match access-group name acl-1
N5k(config-cmap-qos)# class-map type qos class-2
N5k(config-cmap-qos)# match access-group name acl-2
N5k(config-cmap-qos)#

Step 4 Define network-qos Class-Map


N5k(config)# class-map type network-qos class-1
N5k(config-cmap-nq)# match qos-group 2
N5k(config-cmap-nq)# class-map type network-qos class-2
N5k(config-cmap-nq)# match qos-group 3

Step 5 Set ingress buffer size for class-1 in network-qos policy-map


N5k(config)# policy-map type network-qos policy-nq
N5k(config-pmap-nq)# class type network-qos class-1
N5k(config-pmap-nq-c)# queue-limit 81920 bytes
N5k(config-pmap-nq-c)# class type network-qos class-2

Step 2 Define qos policy-map


N5k(config)# policy-map type qos policy-qos
N5k(config-pmap-qos)# class type qos class-1
N5k(config-pmap-c-qos)# set qos-group 2
N5k(config-pmap-c-qos)# class type qos class-2
N5k(config-pmap-c-qos)# set qos-group 3

Step 6 Apply network-qos policy-map under system qos context


N5k(config-pmap-nq-c)# system qos
N5k(config-sys-qos)# service-policy type network-qos policy-nq
N5k(config-sys-qos)#

Step 3 Apply qos policy-map under system qos


N5k(config)# system qos
N5k(config-sys-qos)# service-policy type qos input policy-qos

Step 7 Configure bandwidth allocation using queuing policy-map
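Step 7 is not shown on this slide. A sketch consistent with the queuing syntax from the strict-priority slide earlier (class names carried over; the percentages are illustrative and must sum to 100 across all classes, including class-fcoe where present):

```
N5k(config)# class-map type queuing class-1
N5k(config-cmap-que)# match qos-group 2
N5k(config-cmap-que)# class-map type queuing class-2
N5k(config-cmap-que)# match qos-group 3
N5k(config-cmap-que)# exit
N5k(config)# policy-map type queuing policy-BW
N5k(config-pmap-que)# class type queuing class-1
N5k(config-pmap-c-que)# bandwidth percent 40
N5k(config-pmap-c-que)# class type queuing class-2
N5k(config-pmap-c-que)# bandwidth percent 30
N5k(config-pmap-c-que)# class type queuing class-default
N5k(config-pmap-c-que)# bandwidth percent 30
N5k(config-pmap-c-que)# system qos
N5k(config-sys-qos)# service-policy type queuing output policy-BW
```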



Configuring QoS on the Nexus 5500


Configure no-drop system class
Step 1 Define qos class-map
N5k(config)# class-map type qos class-nodrop
N5k(config-cmap-qos)# match cos 4
N5k(config-cmap-qos)#

Step 4 Define network-qos Class-Map


N5k(config)# class-map type network-qos class-1
N5k(config-cmap-nq)# match qos-group 2

Step 2 Define qos policy-map


N5k(config)# policy-map type qos policy-qos
N5k(config-pmap-qos)# class type qos class-nodrop
N5k(config-pmap-c-qos)# set qos-group 2

Step 5 Configure class-nodrop as no-drop class in network-qos policy-map


N5k(config)# policy-map type network-qos policy-nq
N5k(config-pmap-nq)# class type network-qos class-nodrop
N5k(config-pmap-nq-c)# pause no-drop

Step 3 Apply qos policy-map under system qos


N5k(config)# system qos
N5k(config-sys-qos)# service-policy type qos input policy-qos

Step 6 Apply network-qos policy-map under system qos context


N5k(config-pmap-nq-c)# system qos
N5k(config-sys-qos)# service-policy type network-qos policy-nq
N5k(config-sys-qos)#

Step 7 Configure bandwidth allocation using queuing policy-map



Configuring QoS on the Nexus 5500


Configure CoS Marking
Step 1 Define qos class-map
N5k(config)# ip access-list acl-1
N5k(config-acl)# permit ip 100.1.1.0/24 any
N5k(config-acl)# exit
N5k(config)# class-map type qos class-1
N5k(config-cmap-qos)# match access-group name acl-1
N5k(config-cmap-qos)#

Step 4 Define network-qos Class-Map


N5k(config)# class-map type network-qos class-1
N5k(config-cmap-nq)# match qos-group 2

Step 5 Enable CoS marking for class-1 in network-qos policy-map


N5k(config)# policy-map type network-qos policy-nq
N5k(config-pmap-nq)# class type network-qos class-1
N5k(config-pmap-nq-c)# set cos 4

Step 2 Define qos policy-map


N5k(config)# policy-map type qos policy-qos
N5k(config-pmap-qos)# class type qos class-1
N5k(config-pmap-c-qos)# set qos-group 2

Step 3 Apply qos policy-map under system qos


N5k(config)# system qos
N5k(config-sys-qos)# service-policy type qos input policy-qos

Step 6 Apply network-qos policy-map under system qos context


N5k(config-pmap-nq-c)# system qos
N5k(config-sys-qos)# service-policy type network-qos policy-nq
N5k(config-sys-qos)#

Step 7 Configure bandwidth allocation for the new system class using a queuing policy-map

Configuring QoS on the Nexus 5500


Check System Classes
N5k# show queuing interface ethernet 1/1
Interface Ethernet1/1 TX Queuing
  qos-group  sched-type  oper-bandwidth
      0        WRR            20
      1        WRR            40
      2        priority        0     <- strict-priority configuration
      3        WRR            40

Interface Ethernet1/1 RX Queuing
qos-group 0: class-default
  q-size: 163840, MTU: 1538
  drop-type: drop, xon: 0, xoff: 1024
  Statistics:                                      <- packet and drop counters for each class
    Pkts received over the port            : 9802
    Ucast pkts sent to the cross-bar       : 0
    Mcast pkts sent to the cross-bar       : 9802
    Ucast pkts received from the cross-bar : 0
    Pkts sent to the port                  : 18558
    Pkts discarded on ingress              : 0
  Per-priority-pause status : Rx (Inactive), Tx (Inactive)

qos-group 1: class-fcoe
  q-size: 76800, MTU: 2240
  drop-type: no-drop, xon: 128, xoff: 240
  Statistics:
    (all counters 0)
  Per-priority-pause status : Rx (Inactive), Tx (Inactive)   <- current PFC status

qos-group 2: class-1 (user-configured system class)
  q-size: 20480, MTU: 1538
  drop-type: drop, xon: 0, xoff: 128
  Statistics:
    (all counters 0)
  Per-priority-pause status : Rx (Inactive), Tx (Inactive)

qos-group 3: class-2 (user-configured system class)
  q-size: 20480, MTU: 1538
  drop-type: drop, xon: 0, xoff: 128
  Statistics:
    (all counters 0)
  Per-priority-pause status : Rx (Inactive), Tx (Inactive)

Total Multicast crossbar statistics:
  Mcast pkts received from the cross-bar   : 18558
N5k#

Implementing QoS with Nexus and NX-OS

Agenda
- Nexus and QoS: new NX-OS QoS capabilities and requirements
- Understanding Nexus QoS capabilities: Nexus 7000, 5500, 2000, 3000, 1000v
- Applications of QoS with Nexus: voice and video; storage and FCoE; Hadoop and Web 2.0; future QoS design considerations (Data Center TCP, ECN, optimized TCP)

Nexus 2000 QoS


Tuning the Port Buffers
Each Fabric Extender (FEX) has local port buffers. You can control the queue limit for a specified Fabric Extender in the egress direction (from the network to the host). A lower queue limit on the Fabric Extender prevents one blocked receiver from affecting traffic sent to other, non-congested receivers ("head-of-line blocking"); a higher queue limit provides better burst absorption but less head-of-line blocking protection.
[Diagram: Nexus 5000 (Gen 2 UPCs, Unified Crossbar Fabric) connected to the Nexus 2000 FEX ASIC]

N5k/N2k QoS Processing Flow


1. Incoming traffic is classified based on CoS.
2. Queuing and scheduling at the egress of the NIF ports.
3. Traffic classification, buffer allocation, MTU check and CoS marking at N5k ingress.
4. Queuing and scheduling at N5k egress.
5. CoS-based classification at the ingress of the NIF ports.
6. Queuing and scheduling at the egress of the HIF ports; egress tail drop for each HIF port.

[Diagram: Nexus 5000 (Unified Switch Fabric, Unified Port Controllers) connected to a FEX (2148, 2248, 2232), with steps 1-6 marked along the path]

Nexus 2000 QoS


Tuning the Port Buffers
# Disabling the per-port tail drop threshold
dc11-5020-3(config)# system qos
dc11-5020-3(config-sys-qos)# no fex queue-limit
dc11-5020-3(config-sys-qos)#

# Tuning the queue limit per FEX HIF port
dc11-5020-3(config)# fex 100
dc11-5020-3(config-fex)# hardware N2248T queue-limit 356000
dc11-5020-3(config-fex)# hardware N2248T queue-limit ?
  <CR>
  <2560-652800>  Queue limit in bytes

[Diagram: 10G NFS source into the Nexus 5000 (Gen 2 UPCs, Unified Crossbar Fabric), 40G fabric links to the Nexus 2000 FEX ASIC, 1G sink]

Nexus 2248TP-E
32 MB Shared Buffer
The speed mismatch between 10G NAS and a 1G server requires QoS tuning. The Nexus 2248TP-E uses a 32 MB shared buffer to handle larger traffic bursts.

[Diagram: 10G-attached source (NAS array) sending NAS/iSCSI traffic toward a 1G-attached server]

Hadoop, NAS and AVID are examples of bursty applications.
N5548-L3(config-fex)# hardware N2248TPE queue-limit 4000000 rx
N5548-L3(config-fex)# hardware N2248TPE queue-limit 4000000 tx
N5548-L3(config)# interface e110/1/1
N5548-L3(config-if)# hardware N2348TP queue-limit 4096000 tx
1G-attached server hosting VMs #2-#4. Tune the 2248TP-E to support an extremely large burst (Hadoop, AVID, etc.).


Nexus 2248TP-E Counters


N5596-L3-2(config-if)# sh queuing interface e110/1/1
Ethernet110/1/1 queuing information:
  Input buffer allocation:
    Qos-group: 0
    frh: 2
    drop-type: drop
    cos: 0 1 2 3 4 5 6
    xon       xoff      buffer-size      <- ingress queue limit (configurable)
    ---------+---------+-----------
    0         0         65536
  Queueing:                              <- egress queues: CoS-to-queue mapping, bandwidth allocation, MTU
    queue  qos-group  cos            priority  bandwidth  mtu
    -----+----------+--------------+---------+----------+-----
    2      0          0 1 2 3 4 5 6  WRR       100        9728
    Queue limit: 2097152 bytes           <- egress queue limit (configurable)
  Queue Statistics:                      <- per-port, per-queue counters
    ---+----------------+-----------+------------+----------+------------+------
    Que|Received /      |Tail Drop  |No Buffer   |MAC Error |Multicast   |Queue
    No |Transmitted     |           |            |          |Tail Drop   |Depth
    ---+----------------+-----------+------------+----------+------------+------
    2rx|         5863073|          0|           0|         0|            |     0
    2tx|    426378558047|   28490502|           0|         0|           0|     0
    ---+----------------+-----------+------------+----------+------------+------
    <snip>

Tail Drop counts indicate drops due to oversubscription.


Nexus 3000 QoS


Overview
QoS is enabled by default on the Nexus 3000 (NX-OS default):
- All ports are trusted by default (CoS/DSCP/ToS values are preserved).
- The default interface queuing policy uses QoS-Group 0 (best effort, drop class), WRR (tail drop), 100% throughput (bandwidth percent).
- Unicast and multicast traffic defaults to a 50% WRR bandwidth ratio of the egress interface traffic data rate (system-wide configuration).
- The default interface MTU is 1500 bytes (system-wide configuration).
- Control-plane traffic destined to the CPU is prioritized by default to improve network stability.

QoS Policy Types (CLI):

  Type         Description                                                   Applied To
  QoS          Packet classification based on Layer 2/3/4 (ingress)          Interface or System
  Network-QoS  Packet marking (CoS), congestion control WRED/ECN (egress)    System
  Queuing      Scheduling/queuing - bandwidth % / priority queue (egress)    Interface or System


Nexus 3000 QoS


Shared Memory Architecture
Buffer/Queuing Block
- A pool of 9 MB of buffer space is divided between egress per-port reserved buffer (20%) and dynamically shared buffer (80%) - roughly 1.8 MB reserved and 7.2 MB shared.
- Each of the 64 egress ports has eight unicast queues (UC 0-7) and four multicast queues (MC 0-3).
- Multi-level scheduling: per-port and per-group deficit round robin.

Nexus 3000 QoS Configuration


WRR Example
The next six slides contain configuration and verification examples for creating an ingress classification policy and an egress queuing policy to prioritize egress traffic if congestion occurs on the egress interface. The ingress classification policy trusts the IP DSCP values assigned by hosts and maps them into QoS-Groups. The egress queuing policy assigns a predefined bandwidth percentage to each traffic class.
Example Traffic Class Definitions:

  Traffic Class           QoS-Group   Throughput Percentage
  Gold                    1           40
  Silver                  2           30
  Bronze                  3           20
  Best Effort (Default)   0           10

[Diagram: ingress traffic scheduled to the egress interface in 40/30/20/10% shares]
Traffic is prioritized based on the configured bandwidth percentages; excess traffic is dropped based on those ratios.

Traffic Classification Configuration


Ingress traffic is classified based on IP DSCP values and associated with different QoS-Groups. In this example the hosts are trusted and set the IP DSCP values; if the hosts were not trusted, a classification policy could be configured to set/rewrite the DSCP values.

N3K-1(config)# class-map type qos match-all qos-group-1
N3K-1(config-cmap-qos)# description Gold
N3K-1(config-cmap-qos)# match dscp 46
N3K-1(config-cmap-qos)# class-map type qos match-all qos-group-2
N3K-1(config-cmap-qos)# description Silver
N3K-1(config-cmap-qos)# match dscp 36
N3K-1(config-cmap-qos)# class-map type qos match-all qos-group-3
N3K-1(config-cmap-qos)# description Bronze
N3K-1(config-cmap-qos)# match dscp 26
N3K-1(config)# policy-map type qos traffic-classification
N3K-1(config-pmap-qos)# class qos-group-1
N3K-1(config-pmap-c-qos)# set qos-group 1
N3K-1(config-pmap-c-qos)# class qos-group-2
N3K-1(config-pmap-c-qos)# set qos-group 2
N3K-1(config-pmap-c-qos)# class qos-group-3
N3K-1(config-pmap-c-qos)# set qos-group 3
N3K-1(config)# interface ethernet 1/30
N3K-1(config-if)# service-policy type qos input traffic-classification

Define the Class-Maps and match the DSCP values

Define the Policy-Map and set the QoS-Groups

Apply the Policy-Map to the interface or system


Queuing (WRR) Configuration


Egress traffic is matched on QoS-Group and guaranteed a percentage of bandwidth when traffic exceeds the egress Ethernet interface throughput. It is important to note that the class-default has to be modified to prevent the bandwidth percentage from being greater than 100%. In the example below the class-default has been reduced to 10% from 100%.
N3K-1(config)# class-map type queuing qos-group-1
N3K-1(config-cmap-que)# description Gold
N3K-1(config-cmap-que)# match qos-group 1
N3K-1(config-cmap-que)# class-map type queuing qos-group-2
N3K-1(config-cmap-que)# description Silver
N3K-1(config-cmap-que)# match qos-group 2
N3K-1(config-cmap-que)# class-map type queuing qos-group-3
N3K-1(config-cmap-que)# description Bronze
N3K-1(config-cmap-que)# match qos-group 3
N3K-1(config)# policy-map type queuing traffic-priorities
N3K-1(config-pmap-que)# class type queuing qos-group-1
N3K-1(config-pmap-c-que)# bandwidth percent 40
N3K-1(config-pmap-c-que)# class type queuing qos-group-2
N3K-1(config-pmap-c-que)# bandwidth percent 30
N3K-1(config-pmap-c-que)# class type queuing qos-group-3
N3K-1(config-pmap-c-que)# bandwidth percent 20
N3K-1(config-pmap-c-que)# class type queuing class-default
N3K-1(config-pmap-c-que)# bandwidth percent 10
N3K-1(config)# interface ethernet 1/10
N3K-1(config-if)# service-policy type queuing output traffic-priorities

Define the Class-Maps and match the QoS-Group values

Define the Policy-Map and set the bandwidth percentages

Apply the Policy-Map to the interface or system


Queuing - Network-QoS Configuration


The network-qos policy instantiates the QoS-Groups when applied to the system policy. This enables the QoS-Groups and interface statistics collection per QoS-Group.

N3K-1(config)# class-map type network-qos qos-group-1
N3K-1(config-cmap-nq)# match qos-group 1
N3K-1(config-cmap-nq)# class-map type network-qos qos-group-2
N3K-1(config-cmap-nq)# match qos-group 2
N3K-1(config-cmap-nq)# class-map type network-qos qos-group-3
N3K-1(config-cmap-nq)# match qos-group 3

Define the Class-Maps and match the QoS-Group values

N3K-1(config)# policy-map type network-qos qos-groups
N3K-1(config-pmap-nq)# class type network-qos qos-group-1
N3K-1(config-pmap-nq)# class type network-qos qos-group-2
N3K-1(config-pmap-nq)# class type network-qos qos-group-3

Define the Policy-Map and match the Class-Maps previously defined. Then apply the Policy-Map to the system:

N3K-1(config)# system qos
N3K-1(config-sys-qos)# service-policy type network-qos qos-groups

Notes: QoS-Group 0 is already included in the default QoS policy.


Queuing Priority Queue


This configuration example puts all packets with an IP DSCP value of 46 (EF) into a priority queue that is scheduled before any other queues (i.e. QoS-Group 0). The ingress interface classifies packets matching DSCP 46 and puts them in QoS-Group 1. The egress queuing policy matches QoS-Group 1 and configures that QoS-Group as a priority queue. All other traffic is placed in QoS-Group 0 (best effort/drop queue).
N3K-1(config)# class-map type qos match-all dscp-priority
N3K-1(config-cmap-qos)# match dscp 46
N3K-1(config)# policy-map type qos dscp-priority
N3K-1(config-pmap-qos)# class dscp-priority
N3K-1(config-pmap-c-qos)# set qos-group 1
N3K-1(config)# interface ethernet 1/30
N3K-1(config-if)# service-policy type qos input dscp-priority

N3K-1(config)# class-map type queuing dscp-priority
N3K-1(config-cmap-que)# match qos-group 1
N3K-1(config)# policy-map type queuing dscp-priority
N3K-1(config-pmap-que)# class type queuing dscp-priority
N3K-1(config-pmap-c-que)# priority
N3K-1(config)# interface ethernet 1/10
N3K-1(config-if)# service-policy type queuing output dscp-priority

N3K-1(config)# class-map type network-qos qos-group-1
N3K-1(config-cmap-nq)# match qos-group 1
N3K-1(config)# policy-map type network-qos qos-groups
N3K-1(config-pmap-nq)# class type network-qos qos-group-1
N3K-1(config)# system qos
N3K-1(config-sys-qos)# service-policy type network-qos qos-groups

Ingress Classification Configuration: The qos service-policy can be applied per interface or per system

Egress Queue Configuration: The queuing service-policy can be applied per interface or per system

Network-QoS Configuration: The network-qos service-policy is applied per system



Queuing WRED
The default WRR Queue behavior is to tail drop packets when congestion is experienced. A network-qos policy can be configured to enable WRED, which drops packets prior to experiencing congestion (based on min/max/probability ratios). This is beneficial for applications that use TCP, since the source can reduce its transmission rate when the TCP stream experiences lost packets.
N3K-1(config)# class-map type qos match-all class-gold
N3K-1(config-cmap-qos)# match dscp 8
N3K-1(config)# policy-map type qos traffic-classification
N3K-1(config-pmap-qos)# class class-gold
N3K-1(config-pmap-c-qos)# set qos-group 1
N3K-1(config)# interface ethernet 1/20
N3K-1(config-if)# service-policy type qos input traffic-classification

Traffic Classification: Match packets with a IP DSCP 8 and transmit them in QoS-Group 1

N3K-1(config)# class-map type network-qos class-gold
N3K-1(config-cmap-nq)# description Gold
N3K-1(config-cmap-nq)# match qos-group 1
N3K-1(config)# policy-map type network-qos traffic-priorities
N3K-1(config-pmap-nq)# class type network-qos class-gold
N3K-1(config-pmap-nq-c)# congestion-control random-detect
N3K-1(config)# system qos
N3K-1(config-sys-qos)# service-policy type network-qos traffic-priorities

Network-QoS: Match packets in QoS-Group 1 and enable WRED for the QoS-Group

Notes: Bandwidth percentages were not configured in this example to keep it simple.
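Per the note above, bandwidth percentages were omitted for simplicity. Combining this WRED example with a queuing policy (same syntax as the earlier WRR slides; class-map and percentages illustrative) would look like:

```
N3K-1(config)# class-map type queuing class-gold
N3K-1(config-cmap-que)# match qos-group 1
N3K-1(config)# policy-map type queuing gold-bandwidth
N3K-1(config-pmap-que)# class type queuing class-gold
N3K-1(config-pmap-c-que)# bandwidth percent 50
N3K-1(config-pmap-c-que)# class type queuing class-default
N3K-1(config-pmap-c-que)# bandwidth percent 50
N3K-1(config)# interface ethernet 1/10
N3K-1(config-if)# service-policy type queuing output gold-bandwidth
```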


Converting Catalyst 6500 to Nexus 7000


What's Different?
The biggest change is the introduction of queuing policies to apply port-based QoS configuration. The Catalyst 6500 uses platform-specific syntax for port QoS:
mls, rcv-queue, wrr-queue, etc. commands

Nexus 7000 uses modular QoS CLI (MQC) to apply both queuing and traditional QoS (marking/policing) policies
- Class-maps to match traffic
- Policy-maps to define the actions to take on each class
- Service-policies to tie policy-maps to interfaces/VLANs in a particular direction
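The three building blocks fit together as in this minimal skeleton (policy name illustrative; on the Nexus 7000 the queuing class-map names are system-defined per port type, as the following slides show):

```
class-map type queuing match-any 1p7q4t-out-q2
  match cos 2,4
policy-map type queuing example-qing-out
  class type queuing 1p7q4t-out-q2
    bandwidth remaining percent 30
interface ethernet 1/1
  service-policy type queuing output example-qing-out
```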


Typical Catalyst 6500 Egress Port QoS Configuration


mls qos                                     ! enable QoS - not needed; QoS is always enabled on Nexus 7000
!
interface range gig3/1-48
 mls qos trust dscp                         ! define trust behavior - not needed; DSCP is preserved (trusted) by default on Nexus 7000
 wrr-queue bandwidth 100 150 200            ! for 1p3q8t
 wrr-queue bandwidth 100 150 200 0 0 0 0    ! for 1p7q8t
 wrr-queue cos-map 1 1 1
 wrr-queue cos-map 1 2 0
 wrr-queue cos-map 2 8 4
 wrr-queue cos-map 2 2 2
 wrr-queue cos-map 3 4 3
 wrr-queue cos-map 3 8 6 7
 priority-queue cos-map 1 5

Notes: wrr-queue bandwidth defines the DWRR weights - the Nexus 7000 uses bandwidth statements in queuing policy-maps. wrr-queue cos-map defines the CoS-to-queue and CoS-to-threshold mapping (example: "2 8 4" maps CoS 4 to queue 2, threshold 8) - the Nexus 7000 uses match statements and queue-limit commands.


Equivalent Nexus 7000 Egress Queuing Policy


class-map type queuing match-any 1p7q4t-out-pq1
  match cos 5
class-map type queuing match-any 1p7q4t-out-q2
  match cos 3,6-7
class-map type queuing match-any 1p7q4t-out-q3
  match cos 2,4
class-map type queuing match-any 1p7q4t-out-q-default
  match cos 0-1
!
policy-map type queuing 10G-qing-out
  class type queuing 1p7q4t-out-pq1
    priority level 1
    queue-limit percent 15
  class type queuing 1p7q4t-out-q2
    queue-limit percent 25
    queue-limit cos 6 percent 100
    queue-limit cos 7 percent 100
    queue-limit cos 3 percent 70
    bandwidth remaining percent 22
  class type queuing 1p7q4t-out-q3
    queue-limit percent 25
    queue-limit cos 4 percent 100
    queue-limit cos 2 percent 50
    bandwidth remaining percent 33
  class type queuing 1p7q4t-out-q-default
    queue-limit percent 35
    queue-limit cos 1 percent 50
    queue-limit cos 0 percent 100
    bandwidth remaining percent 45
!
int e1/1
  service-policy type queuing output 10G-qing-out

- Define the CoS-to-queue mapping in queuing class-maps (configurable for each port type in each direction).
- Define the behavior for each queue in the queuing policy-map: define the priority queue, size the queue, define the CoS-to-threshold mapping, and define the DWRR weight for the queue (bandwidth remaining is required when using a PQ).
- Tie the policy-map as a service-policy on the appropriate interface type in the appropriate direction.

ESE QoS SRND for Catalyst 6500


! SRND queuing configuration for the Catalyst 6500 1p7q8t port type
interface range TenGigabitEthernet4/1 - 4
 wrr-queue queue-limit 5 25 10 10 10 5 5    ! allocates buffer space to the non-PQs
 wrr-queue bandwidth 5 25 20 20 20 5 5      ! sets the DWRR weights for the non-PQs
 ! enables WRED on the non-PQs
 wrr-queue random-detect 1
 wrr-queue random-detect 2
 wrr-queue random-detect 3
 wrr-queue random-detect 4
 wrr-queue random-detect 5
 wrr-queue random-detect 6
 wrr-queue random-detect 7
 ! sets the WRED min and max thresholds for the non-PQs
 wrr-queue random-detect min-threshold 1 80 100 100 100 100 100 100 100
 wrr-queue random-detect max-threshold 1 100 100 100 100 100 100 100 100
 wrr-queue random-detect min-threshold 2 80 100 100 100 100 100 100 100
 wrr-queue random-detect max-threshold 2 100 100 100 100 100 100 100 100
 wrr-queue random-detect min-threshold 3 80 100 100 100 100 100 100 100
 wrr-queue random-detect max-threshold 3 100 100 100 100 100 100 100 100
 wrr-queue random-detect min-threshold 4 80 100 100 100 100 100 100 100
 wrr-queue random-detect max-threshold 4 100 100 100 100 100 100 100 100
 wrr-queue random-detect min-threshold 5 80 100 100 100 100 100 100 100
 wrr-queue random-detect max-threshold 5 100 100 100 100 100 100 100 100
 wrr-queue random-detect min-threshold 6 80 100 100 100 100 100 100 100
 wrr-queue random-detect max-threshold 6 100 100 100 100 100 100 100 100
 wrr-queue random-detect min-threshold 7 80 100 100 100 100 100 100 100
 wrr-queue random-detect max-threshold 7 100 100 100 100 100 100 100 100
 ! CoS-to-queue assignments
 wrr-queue cos-map 1 1 1     ! Q1: scavenger/bulk (WRED threshold 1)
 wrr-queue cos-map 2 1 0     ! Q2: best effort
 wrr-queue cos-map 3 1 4     ! Q3: video
 wrr-queue cos-map 4 1 2     ! Q4: NMS/transactional data
 wrr-queue cos-map 5 1 3     ! Q5: call signaling and critical data
 wrr-queue cos-map 6 1 6     ! Q6: routing protocols
 wrr-queue cos-map 7 1 7     ! Q7: STP
 priority-queue cos-map 1 5  ! PQ: VoIP

Mapping QoS SRND to Nexus 7000 (1)


class-map type queuing match-any 1p7q4t-out-pq1
  match cos 5     ! PQ: VoIP
class-map type queuing match-any 1p7q4t-out-q2
  match cos 7     ! Q2: STP
class-map type queuing match-any 1p7q4t-out-q3
  match cos 6     ! Q3: routing protocols
class-map type queuing match-any 1p7q4t-out-q4
  match cos 4     ! Q4: video
class-map type queuing match-any 1p7q4t-out-q5
  match cos 3     ! Q5: call signaling and critical data
class-map type queuing match-any 1p7q4t-out-q6
  match cos 2     ! Q6: NMS/transactional data
class-map type queuing match-any 1p7q4t-out-q7
  match cos 0     ! Q7: best effort
class-map type queuing match-any 1p7q4t-out-q-default
  match cos 1     ! Q-default: scavenger/bulk

Mapping QoS SRND to Nexus 7000 (2)


policy-map type queuing 10G-SRND-out
  class type queuing 1p7q4t-out-pq1
    priority level 1          ! defines the PQ
    queue-limit percent 10    ! sizes the PQ
  class type queuing 1p7q4t-out-q2
    queue-limit percent 10
    bandwidth remaining percent 5     ! sets the DWRR weight for the queue
    random-detect cos-based           ! enables CoS-based WRED for the queue
    random-detect cos 7 minimum-threshold percent 80 maximum-threshold percent 100   ! sets the WRED min and max thresholds
  class type queuing 1p7q4t-out-q3
    queue-limit percent 10
    bandwidth remaining percent 5
    random-detect cos-based
    random-detect cos 6 minimum-threshold percent 80 maximum-threshold percent 100
  class type queuing 1p7q4t-out-q4
    queue-limit percent 15
    bandwidth remaining percent 20
    random-detect cos-based
    random-detect cos 4 minimum-threshold percent 80 maximum-threshold percent 100
  class type queuing 1p7q4t-out-q5
    queue-limit percent 10
    bandwidth remaining percent 20
    random-detect cos-based
    random-detect cos 3 minimum-threshold percent 80 maximum-threshold percent 100

Note: I actually question enabling WRED on network control queues as described in the SRND - your choice.

Mapping QoS SRND to Nexus 7000 (3)


  class type queuing 1p7q4t-out-q6
    queue-limit percent 10
    bandwidth remaining percent 20
    random-detect cos-based
    random-detect cos 2 minimum-threshold percent 80 maximum-threshold percent 100
  class type queuing 1p7q4t-out-q7
    queue-limit percent 30
    bandwidth remaining percent 25
    random-detect cos-based
    random-detect cos 0 minimum-threshold percent 80 maximum-threshold percent 100
  class type queuing 1p7q4t-out-q-default
    queue-limit percent 5
    bandwidth remaining percent 5
    random-detect cos-based
    random-detect cos 1 minimum-threshold percent 80 maximum-threshold percent 100
!
int e1/1
  service-policy type queuing output 10G-SRND-out   ! tie the policy-map to the interface as an output queuing service-policy

Note: I chose slightly different queue-limit sizes vs. the SRND; when all 8 queues are enabled, the sum of the queue-limit percentages must equal 100.

Summary
MQC configuration for both queuing and marking/policing policies
Departure from platform-specific Catalyst 6500 configuration model

Initially, the queuing policy configuration model generates some confusion


But it's modular and self-documenting

99% of needed QoS features exist in NX-OS


DSCP-to-queue perhaps biggest gap

A few key default changes:


QoS always enabled Default port behavior is trust

Port QoS config conversion from Catalyst 6500 IS possible ;)
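As a hedged sketch of such a conversion (the interface names, weights and policy name are illustrative, and the queuing class names depend on the module's queue structure):

```text
! Catalyst 6500 port QoS (hypothetical example)
interface GigabitEthernet1/1
 mls qos trust dscp
 wrr-queue bandwidth 5 25 70

! Rough NX-OS equivalent: trust is the default, so only the
! queue weights remain, expressed as an MQC queuing policy
policy-map type queuing converted-out
  class type queuing 1p7q4t-out-q2
    bandwidth remaining percent 5
  class type queuing 1p7q4t-out-q3
    bandwidth remaining percent 25
  class type queuing 1p7q4t-out-q-default
    bandwidth remaining percent 70
interface ethernet 1/1
  service-policy type queuing output converted-out
```

The point of the sketch: the per-port trust statement disappears entirely, and the per-port queue tuning becomes a reusable policy-map that can be attached to many interfaces.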


Implementing QoS with Nexus and NX-OS


Agenda
Nexus and QoS
Nexus and NX-OS New QoS Capabilities and Requirements

Understanding Nexus QoS Capabilities


Nexus 7000
Nexus 5500
Nexus 2000
Nexus 3000
Nexus 1000v


Applications of QoS with Nexus


Voice and Video
Storage & FCoE
Hadoop and Web 2.0
Future QoS Design Considerations (Data Center TCP, ECN, optimized TCP)

Priority Flow Control Nexus 5000/5500


Operations: Configuration at the Switch Level
On the Nexus 5000, once "feature fcoe" is configured, two classes are created by default:

policy-map type qos default-in-policy
  class type qos class-fcoe
    set qos-group 1
  class type qos class-default
    set qos-group 0

class-fcoe is configured to be no-drop with an MTU of 2158


policy-map type network-qos default-nq-policy
  class type network-qos class-fcoe
    pause no-drop
    mtu 2158


Enabling the FCoE feature on the Nexus 5548/5596 does not create the no-drop policies automatically as on the Nexus 5010/5020. You must add the policies under system QoS:

system qos
  service-policy type qos input fcoe-default-in-policy
  service-policy type queuing input fcoe-default-in-policy
  service-policy type queuing output fcoe-default-out-policy
  service-policy type network-qos fcoe-default-nq-policy
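To confirm what ended up attached at the system level, the system policies can be displayed (a sketch; output is abridged and release-dependent):

```text
5548# show policy-map system type network-qos
! Lists the network-qos policy in effect, including the
! no-drop (pause) and MTU settings per class
```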

Nexus 5000/5500 QoS


Priority Flow Control and No-Drop Queues
Tuning of the lossless queues to support a variety of use cases
Extended switch-to-switch no-drop traffic lanes: support for 3 km with Nexus 5000 and 5500
Increased number of no-drop service lanes (4) for RDMA and other multi-queue HPC and compute applications
Support for 3 km no-drop switch-to-switch links enables inter-building DCB/FCoE links

Configs for a 3000 m no-drop class:

                           N5020          N5548
  Buffer size              143680 bytes   152000 bytes
  Pause threshold (XOFF)   58860 bytes    103360 bytes
  Resume threshold (XON)   38400 bytes    83520 bytes

5548-FCoE(config)# policy-map type network-qos 3km-FCoE
5548-FCoE(config-pmap-nq)# class type network-qos 3km-FCoE
5548-FCoE(config-pmap-nq-c)# pause no-drop buffer-size 152000 pause-threshold 103360 resume-threshold 83520
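As a sanity check on those 5548 numbers, a rough back-of-the-envelope sketch (assuming ~5 µs/km propagation delay in fibre and a 10 Gb/s link; the arithmetic is illustrative, not from the slide):

```latex
t_{\mathrm{RTT}} \approx 2 \times 3\,\mathrm{km} \times 5\,\mu\mathrm{s/km} = 30\,\mu\mathrm{s}
\qquad
D_{\text{in flight}} \approx 10\,\mathrm{Gb/s} \times 30\,\mu\mathrm{s} = 300\,\mathrm{kb} \approx 37.5\,\mathrm{kB}
```

The headroom the 5548 leaves above XOFF is 152000 - 103360 = 48640 bytes, comfortably more than the ~37.5 kB that can still arrive after the pause is signalled, plus one maximum-size frame already being serialised by the far end.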

Priority Flow Control Nexus 7K & MDS

Operations: Configuration at the Switch Level


N7K-50(config)# system qos
N7K-50(config-sys-qos)# service-policy type network-qos default-nq-7e-policy

No-drop PFC with an MTU of 2112 set for Fibre Channel
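PFC operation can then be checked per port; a sketch of the verification step (the command exists in NX-OS, but the exact output format varies by platform and release):

```text
N7K-50# show interface priority-flow-control
! Shows per-port PFC admin/oper state and received/transmitted
! per-priority pause (PPP) frame counts
```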


show policy-map system
  Type network-qos policy-maps
  =====================================
  policy-map type network-qos default-nq-7e-policy
    class type network-qos c-nq-7e-drop
      match cos 0-2,4-7
      congestion-control tail-drop
      mtu 1500
    class type network-qos c-nq-7e-ndrop-fcoe
      match cos 3
      match protocol fcoe
      pause
      mtu 2112

show class-map type network-qos c-nq-7e-ndrop-fcoe
  Type network-qos class-maps
  =============================================
  class-map type network-qos match-any c-nq-7e-ndrop-fcoe
    Description: 7E No-Drop FCoE CoS map
    match cos 3
    match protocol fcoe

Policy template choices:

  Template               Drop CoS          (Priority)   No-Drop CoS   (Priority)
  default-nq-8e-policy   0,1,2,3,4,5,6,7   5,6,7        -             -
  default-nq-7e-policy   0,1,2,4,5,6,7     5,6,7        3             -
  default-nq-6e-policy   0,1,2,5,6,7       5,6,7        3,4           4
  default-nq-4e-policy   0,5,6,7           5,6,7        1,2,3,4       4

Enhanced Transmission Selection - N5K Bandwidth Management


When FCoE is configured, each class is given 50% of the available bandwidth by default. This can be changed through the QoS settings when certain traffic has higher demands (e.g. HPC traffic, more Ethernet NICs).

N5k-1# show queuing interface ethernet 1/18
Ethernet1/18 queuing information:
  TX Queuing
    qos-group  sched-type  oper-bandwidth
        0         WRR            50
        1         WRR            50

(Diagram: traditional server with 1 Gig FC HBAs and 1 Gig Ethernet NICs)

Best practice: tune the FCoE queue to provide capacity equivalent to the HBA that would have been used (1G, 2G, ...)
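An illustrative sketch of that best practice (the policy name is invented; class-fcoe is assumed to exist from feature fcoe): to stand in for a 4G HBA on a 10GE CNA, guarantee the FCoE queue roughly 40% of the link. Note that ETS guarantees a minimum during congestion; it does not cap the class.

```text
policy-map type queuing fcoe-4g-out
  class type queuing class-fcoe
    bandwidth percent 40      ! ~4 Gb/s guaranteed on a 10GE link
  class type queuing class-default
    bandwidth percent 60
system qos
  service-policy type queuing output fcoe-4g-out
```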

Enhanced Transmission Selection N5K


Changing ETS Bandwidth Configurations
Create classification rules first by defining and applying a policy-map type qos
Then define and apply a policy-map type queuing to configure strict priority and bandwidth sharing
pod3-5010-2(config)# class-map type queuing class-voice
pod3-5010-2(config-cmap-que)# match qos-group 2
pod3-5010-2(config-cmap-que)# class-map type queuing class-high
pod3-5010-2(config-cmap-que)# match qos-group 3
pod3-5010-2(config-cmap-que)# class-map type queuing class-low
pod3-5010-2(config-cmap-que)# match qos-group 4
pod3-5010-2(config-cmap-que)# exit
pod3-5010-2(config)# policy-map type queuing policy-BW
pod3-5010-2(config-pmap-que)# class type queuing class-voice
pod3-5010-2(config-pmap-c-que)# priority
pod3-5010-2(config-pmap-c-que)# class type queuing class-high
pod3-5010-2(config-pmap-c-que)# bandwidth percent 50
pod3-5010-2(config-pmap-c-que)# class type queuing class-low
pod3-5010-2(config-pmap-c-que)# bandwidth percent 20
pod3-5010-2(config-pmap-c-que)# class type queuing class-fcoe
pod3-5010-2(config-pmap-c-que)# bandwidth percent 30      ! FCoE traffic given 30% of the 10GE link
pod3-5010-2(config-pmap-c-que)# class type queuing class-default
pod3-5010-2(config-pmap-c-que)# bandwidth percent 0
pod3-5010-2(config-pmap-c-que)# system qos
pod3-5010-2(config-sys-qos)# service-policy type queuing output policy-BW

ETS Nexus 7000


Bandwidth Management
n7k-50-fcoe-2# show queuing interface ethernet 4/17
Egress Queuing for Ethernet4/17 [System]
Template: 4Q7E
  Group  Bandwidth%  PrioLevel  Shape%
    0        80         -         -
    1        20         -         -

  Que#  Group  Bandwidth%  PrioLevel  Shape%  CoSMap
    0     0        -         High       -      5-7
    1     1       100         -         -      3
    2     0        50         -         -      2,4
    3     0        50         -         -      0-1

Ingress Queuing for Ethernet4/17 [System]
Trust: Trusted
  Group  Qlimit%
    0       70
    1       30

  Que#  Group  Qlimit%  IVL  CoSMap
    0     0       45      0    0-1
    1     0       10      5    5-7
    2     1      100      3    3
    3     0       45      2    2,4


DC Design Details
iSCSI Storage Considerations
iSCSI and DCB: where does PFC make sense in the non-FCoE design?
Extending buffering from the switch to the connected device, end to end
Need to consider network oversubscription carefully!
Fibre Channel and FCoE leverage very low levels of oversubscription; no-drop for FC works due to capacity planning
Where does ETS make sense? Anywhere you want to guarantee capacity

(Diagram: the NAS/iSCSI array flow-controls toward the switch, and the server hosting the VMs flow-controls toward the switch)



DC Design Details
iSCSI

iSCSI Storage Considerations: TCP or PFC


1. Steady-state traffic is within end-to-end network capacity
2. A source bursts traffic
3. No-drop traffic is queued
4. Buffers begin to fill and PFC flow control is initiated
5. All sources are eventually flow controlled

TCP is not invoked immediately, since frames are queued rather than dropped. Is this the optimal behaviour for your oversubscription?

(Diagram: 1G-attached servers, 10G inter-switch links, 4G-attached storage)


Nexus 5500 and iSCSI - DCB


PFC (802.1Qbb) & ETS 802.1Qaz
The iSCSI TLV will be supported in the 5.2 release (CY12)
3rd-party adapters are not validated until that release


Functions in the same manner as the FCoE TLV
Communicates to the compatible adapter using DCBX (LLDP)
Steps to configure:
1. Configure class maps to identify iSCSI traffic
2. Configure policy maps to define marking, queuing and system behaviour
3. Apply the policy maps
class-map type qos class-iscsi
  match protocol iscsi
  match cos 4
class-map type queuing class-iscsi
  match qos-group 4
policy-map type qos iscsi-in-policy
  class type qos class-fcoe
    set qos-group 1
  class type qos class-iscsi
    set qos-group 4

Identify iSCSI traffic




Nexus 5500 and iSCSI - DCB


PFC (802.1Qbb) & ETS 802.1Qaz
policy-map type queuing iscsi-in-policy
  class type queuing class-iscsi
    bandwidth percent 10
  class type queuing class-fcoe
    bandwidth percent 10
  class type queuing class-default
    bandwidth percent 80
policy-map type queuing iscsi-out-policy
  class type queuing class-iscsi
    bandwidth percent 10
  class type queuing class-fcoe
    bandwidth percent 10
  class type queuing class-default
    bandwidth percent 80
class-map type network-qos class-iscsi
  match qos-group 4
policy-map type network-qos iscsi-nq-policy
  class type network-qos class-iscsi
    set cos 4
    pause no-drop
    mtu 9216
  class type network-qos class-fcoe
system qos
  service-policy type qos input iscsi-in-policy
  service-policy type queuing input iscsi-in-policy
  service-policy type queuing output iscsi-out-policy
  service-policy type network-qos iscsi-nq-policy

Slide callouts:
- Define the policies to be signaled to the CNA
- Define the switch queue bandwidth policies
- Define the iSCSI MTU and, in a single-hop topology, the no-drop behaviour


Conclusion
You should now have a good understanding of QoS implementation using the Nexus Data Center switches.
Any questions?


Recommended Reading

Please complete your Session Survey


We value your feedback
Don't forget to complete your online session evaluations after each session
Complete 4 session evaluations & the Overall Conference Evaluation (available from Thursday) to receive your Cisco Live T-shirt
Surveys can be found on the Attendee Website at www.ciscolivelondon.com/onsite, which can also be accessed through the screens at the Communication Stations

Or use the Cisco Live Mobile App to complete the surveys from your phone; download the app at www.ciscolivelondon.com/connect/mobile/app.html
1. Scan the QR code (go to http://tinyurl.com/qrmelist for QR code reader software, or type in the access URL above)
2. Download the app or access the mobile site
3. Log in to complete and submit the evaluations

http://m.cisco.com/mat/cleu12/


Thank you.

