
Providing Scalable Broadband Subscriber Solutions

with Virtualization and Orchestration

Nikolai Pitaev
Engineer, Technical Marketing
BRKSPG-2381
“What’s in it for me?”
This session will help you to understand the Cisco virtual Broadband Network Gateway (vBNG) solution.

In this session:
• Introduction and overview
• Building blocks of the virtual BNG solution:
  1. CSR 1000V and XRv as vBNG, incl. roadmap and scale
  2. Elastic Services Controller for VNF lifecycle management
  3. Hardware and hypervisor details
  4. Smart Licensing for vBNG
• Live demo during the session, if time permits

Out of scope:
• Other Service Provider and Enterprise use cases
• Detailed description of physical BNG deployments
• Troubleshooting and debugging deep dive
With our vBNG solution you can build flexible, scalable and cost-effective Broadband Aggregation.
Agenda
1. Introducing IOS XE SP solutions
2. Building blocks of the vBNG solution
3. TCO calculation example
4. Live demo
5. Conclusion
The virtual BNG solution includes 4 different products:

1. vBNG VNF: CSR 1000V (now), XRv 9000 (July 2017)
2. Orchestration: Cisco ESC software for VNF lifecycle management
3. Smart License: automatic provisioning, cost savings with license sharing
4. Hardware and host OS: UCS, KVM/VMware/…, performance
Real-life deployment example of the ASR 1000 as physical BNG

Functions:
• ASR 1006 as BNG (aka BRAS)
• ATM and Ethernet access
• Hierarchical QoS
• High Availability / ISSU
• Total of approx. 500 systems in production

Services:
• Local termination and L2TP
• Voice services
• QoS parameterization from RADIUS
• 3 ISG services
• Lawful Intercept (LI)
• 1-second accounting accuracy

Scalability:
• 29,000 dual-stack PPP sessions
• 64K configured QinQ subinterfaces
• 16,000 policy maps
• 400 concurrent LI taps

[Topology: residential subscribers attach via an ATM BNG and an Ethernet BNG.]
Same customer is using the CSR 1000V as virtual BNG

Implemented in a different country.
Almost the same router config and the same management interface as on the ASR 1000.

[Topology: home gateways (HGWs) attach over L2 VLANs and IPv6 tunnels across the WAN network to vBNG/vBRAS and vLNS CSR VMs running on x86 servers in the data center; L2TPv2-over-IPv4 tunnels hand retail-ISP sessions off to physical LNS end-points; DHCPv4 and RADIUS (including the retailer's RADIUS) provide addressing and AAA; an orchestrator, network control and real-time OSS manage the setup.]
CSR 1000V and XRv as vBNG virtual network functions (VNF)

IOS XE is a Swiss Army Knife:
• 3,000+ features
• 8 major Service Provider use cases
• 8 major Enterprise use cases
• physical with the ASR 1000, virtual with the CSR 1000V
CSR 1000V is a virtualized ASR 1001

[Architecture: the control plane (IOS, Chassis Manager, Forwarding Manager, FFP client/driver) and the forwarding plane (FFP code in a Linux container) run as one VM whose vCPU, vMemory, vDisk and vNIC are mapped by the hypervisor (VMware / Citrix / KVM / Microsoft) onto the physical CPU, memory, disk and NIC.]

Infrastructure-agnostic software
• Familiar IOS XE software
• No dependency on a specific server or vSwitch

Throughput elasticity
• Licensable throughput from 10 Mbps to 10 Gbps
• Footprint options from 1 to 8 virtual CPUs

Multiple licensing models
• Term (1 or 3 year), perpetual, hourly (AWS) usage

Programmability
• NETCONF/YANG, RESTCONF and SSH/Telnet for automated provisioning and management (a NETCONF sketch follows below)
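As a hedged illustration of the NETCONF path (assuming netconf-yang is enabled on the device and the Cisco-IOS-XE-native YANG model is available; the filter is illustrative):

CSR(config)# netconf-yang   ! enable the NETCONF/YANG interface

<!-- NETCONF get-config RPC retrieving the running config via the native model -->
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <get-config>
    <source><running/></source>
    <filter>
      <native xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-native"/>
    </filter>
  </get-config>
</rpc>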

Miercom also tested the CSR 1000V as vBNG

Using just one or two vCPUs per VM, it delivers up to the physical limit of 20 Gbps on an x86 server with two 10 GE ports, and up to 5 Gbps on AWS.
Unlike classic routers, a CSR 1000V setup has to be tuned for optimal performance on several levels. Major I/O technologies such as SR-IOV, fd.io VPP and OVS-DPDK were tested as vSwitch.
Horizontal scaling: the performance of 3 x 2-vCPU VMs exceeds that of 1 x 8-vCPU VM.

vBNG test:
• one VM with 2 vCPUs on RHEL 7.2 with SR-IOV
• CPU: Intel Xeon E5-2699 v3 @ 2.30 GHz
• 8,000 dual-stack sessions, 500 Kbps per session

http://miercom.com/cisco
Impact of CPU speed on VNF performance

Two servers compared (CSR 1000v, IMIX, SR-IOV, IOS XE 16.3); SR-IOV is used to eliminate I/O overhead, and plain IP forwarding is tested, not vBNG:

Server               1 x 2-vCPU VM   3 x 2-vCPU VMs
3.2 GHz, 16 cores    7.367 Gbps      20 Gbps
2.6 GHz, 24 cores    6.001 Gbps      18.101 Gbps

For 1 VM, the performance increase is proportional to the CPU clock difference:
7.367 / 6.001 ≈ 3.2 / 2.6 ≈ 1.23

For 3 VMs it is not proportional:
• the bottleneck switched from CPU to I/O (2 x 10 GE)
• note the horizontal scale achieved with 3 VMs
CSR 1000V vBNG key numbers to remember

• 8,000 PPP / IP sessions
• 2.5 Gbps throughput for PPP sessions per CSR *
• 5 Gbps throughput for IP sessions per CSR *

* single instance, IMIX, without I/O / performance optimization
For Your Reference

CSR 1000v vBNG profile details

Profile vPTA / vLAC:
• Session type: PPPoEoVLAN
• Features*: input/output ACL, ingress QoS (policing) / egress QoS (shaping), VRF awareness, IPv4/IPv6 dual stack, AAA, ANCP

Profile vLNS:
• Session type: PPPoVLANoL2TP
• Features*: IPv4/IPv6, HQoS, input/output ACL, dual-stack service and TC accounting, CoA service push

Profile vISG:
• Session type: IPoEoVLAN
• Features*: DHCP, unclassified MAC, HQoS, input/output ACL, ISG TC, L4R, PBHK, timeout, uRPF/security

All profiles: 2 vCPU, 8 GB memory
Sessions: 8,000 (vPTA/vLAC) / 8,000 L2TP (vLNS) / 8,000 (vISG)
Max throughput without I/O optimization: 2.5 Gbps / 2.5 Gbps / 5 Gbps

* Refer to the embedded profiles for details
Bottlenecks exist on different levels

Differentiate between system performance and VM performance.

[Diagram: an x86 host running VM1 … VMn, each with an application and an I/O driver in the guest, connected via vNICs through Qemu/vHost and a vSwitch in host user space, down to the pNIC driver in the host kernel (KVM) and the physical NICs.]

At the system level, several bottlenecks may affect throughput:
• Intra-VM bottleneck: the application and I/O driver inside the guest
• vSwitch bottleneck: virtual switching in host user space
• Hypervisor bottleneck: hypervisor performance, number of concurrent VMs, performance tuning
• pNIC bottleneck: physical NIC capacity

In throughput testing, there will ALWAYS be at least one bottleneck!
Understand WHICH bottleneck is ‘active’ and WHEN bottlenecks switch.
vBNG runs at 8.5 Gbps with VM-FEX technology

The customer provided the vBNG config, RADIUS profile and traffic definition.
Tested internally in a Cisco lab on KVM with VM-FEX: VM-FEX bypasses the vSwitch bottlenecks (un-constrained NIC-to-VM path through the PF driver instead of virtio-net / vHost / tap / Open vSwitch) and thus emulated an unconstrained system.

Throughput summary:
• CSR 1000v demonstrated the requested 20 Gbps with 3 VMs
• above 8.5 Gbps per VM on average

Test setup:
• UCS type: UCS C240 M4S (2 processors, 36 cores)
• UCS manager: UCS 6248 Fabric Interconnect with UCS Manager 2.2(3f)
• NIC type: Cisco UCS VIC 1225
• I/O type: 2 x 10 GE, VM-FEX
• UCS server OS: Red Hat Enterprise Linux Server release 7.1 (Maipo) / KVM
• Hypervisor: KVM
• CSR DUT label: BLD_MCP_DEV_LATEST_20150611_123025
• SP profile: 4,000 IPoE sessions, throughput license ax_200G; per-session features: input ACL (1 ACE/ACL), 1 QoS output shaper with a single queue, input policing, accounting (60 min interval)
• Acceptable traffic loss: Partial Drop Rate (PDR) 0.01%, RFC 2544 with the SP traffic profile
• SP traffic profile: 1430B = 75%, 578B = 16.6%, 80B = 8.3% (avg packet size = 1175B)
Drop rate definition has a significant impact on throughput

Typical definitions for drop rates:
• Non-Drop Rate (NDR) = 0 packet loss
• Partial Drop Rate (PDR) = 0.01% or 0.05% acceptable loss

A small relaxation of the PDR definition can lead to significantly higher throughput:
• if your use case accepts a PDR of 0.05%, you get approx. 40% higher throughput compared to NDR

[Chart: normalized throughput (NDR = 100%) as a function of acceptable traffic loss per VM, from 0.00% to 0.75% (KVM, XE 3.13).]
Optimize your vBNG system on 4 different levels: BIOS, host, vSwitch and VNF.

BIOS (script example: CSCux48746):
• Hyperthreading OFF
• Speedstep OFF
• Turbo mode OFF

Host:
• Enable huge pages
• Isolate host CPUs
• Host transparent huge pages OFF
• Disable kernel watchdog

Virtual switching:
• VPP worker / OVS-DPDK PMD threads
• Design proper placement on the socket

vBNG VNF (see the libvirt sketch below):
• Implement vCPU and emulator pinning
• Tune VM NUMA allocation
• Use VM huge page memory backing
• Disable VM memballoon
• Increase vHost queue sizes / VPP
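A hedged libvirt domain-XML fragment tying several of the VNF-level knobs together (the VM name, core numbers and sizes are illustrative and follow the 2-vCPU vBNG footprint):

<domain type='kvm'>
  <name>vbng-csr1</name>
  <memory unit='GiB'>8</memory>
  <memoryBacking>
    <hugepages/>                          <!-- huge page memory backing -->
  </memoryBacking>
  <vcpu placement='static'>2</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>        <!-- pin vCPU0 to host core 2 -->
    <vcpupin vcpu='1' cpuset='3'/>        <!-- pin vCPU1 to host core 3 -->
    <emulatorpin cpuset='1'/>             <!-- keep Qemu emulator threads off data-plane cores -->
  </cputune>
  <numatune>
    <memory mode='strict' nodeset='0'/>   <!-- allocate memory on the local NUMA node -->
  </numatune>
  <devices>
    <memballoon model='none'/>            <!-- disable memory ballooning -->
  </devices>
</domain>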
Design your CPU mapping for better performance

• x86 server with 2 NUMA sockets, 8 cores each = 16 cores total
• VPP or OVS-DPDK as vSwitch; the NICs are mapped to worker / PMD threads
• 6 CSR 1000V VMs with 2 vCPUs each

[Layout: on socket 0, Linux and the emulator occupy CPU00, VPP worker1 (serving physical interface 1) runs on CPU01, and CSR1-CSR3 pin their vCPU0/vCPU1 pairs to CPU02-CPU07; socket 1 mirrors this with VPP worker2 (serving physical interface 2) and CSR4-CSR6 on CPU10-CPU17.]
Same example, different design

Do you see any room for improvement in the following design?

[Layout: VPP worker1, worker2 and the VPP main thread all sit on socket 0 together with CSR1-CSR3 (CSR3 straddles the socket boundary), while CSR4-CSR6 run on socket 1 and physical interface 2 remains attached to socket 1.]

To improve:
1. physical NIC - VPP worker mismatch across sockets
2. CSR3 pays the socket-crossing “tax”
3. the emulator pin for VMs 4-6 sits on a different socket
For Your Reference

CSR 1000v IOS XE threads-to-vCPU associations

IOS XE processing threads in the guest OS are statically mapped to vCPU threads;
the vCPU threads in turn are allocated to physical cores by the hypervisor scheduler.

CSR footprint   Control Plane   Data Plane PPE    Data Plane HQF   Data Plane Rx/Tx
1 vCPU          vCPU 0 (all planes share vCPU 0)
2 vCPU          vCPU 0          vCPU 1 (PPE, HQF and Rx/Tx share vCPU 1)
4 vCPU          vCPU 0          vCPU 1 & 2        vCPU 3 (HQF and Rx/Tx share vCPU 3)
8 vCPU          vCPU 0          vCPU 1-5          vCPU 6           vCPU 7

NOTE: vCPU allocations are subject to change without further notice
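To verify at runtime how these planes are loaded, the standard IOS XE counters can be consulted on the CSR, for example:

CSR# show platform hardware qfp active datapath utilization summary
     ! data plane (QFP/PPE) load per direction over 5 sec / 1 min / 5 min / 60 min
CSR# show platform software status control-processor brief
     ! per-vCPU load and memory inside the VM, including the control plane on vCPU 0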

For Your Reference

Flexible core assignment to control and data planes*

Default (data plane heavy):           Control plane heavy:
vCPUs              1      2  4  8     vCPUs              1      2  4  8
Control + Service  shared 1  1  1     Control + Service  shared 1  2  2
Data                      1  3  7     Data                      1  2  6

Service plane medium:                 Service plane heavy:
vCPUs              1      2  4  8     vCPUs              1      2  4  8
Control + Service  shared 1  2  2     Control + Service  shared 1  2  4
Data                      1  2  6     Data                      1  2  4

* Available in IOS XE release 3.16.02 and later
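A hedged sketch of switching the template from the CLI (the keyword follows the CSR 1000V `platform resource` templates; a reload is assumed to be required for the change to take effect):

CSR(config)# platform resource service-plane-heavy
CSR(config)# end
CSR# write memory
CSR# reload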
CSR 1000V vBNG roadmap and positioning

Now: PPP-over-VXLAN termination
Next: performance improvements, PPP over L2TPv3-in-IPv6 tunnels
On the radar: box-to-box high availability

vBNG positioning within Cisco:
• The CSR 1000V is currently THE platform for the vBNG solution within Cisco.
• XRv-based software has the vBNG use case targeted for August 2017.
• The positioning is similar to the physical BNG: XRv does not support LNS, but offers higher scale than the CSR 1000V by using additional cores.
XRv 9000 scale and performance per VM

32,000 subscribers per VM, geo-redundancy
200 calls per second per VM, 100 CoA per second per VM

Performance:
• UCS C240, 28 cores, Xeon E5-2697 v3 @ 2.6 GHz, 128 GB RAM, 10 x 10 Gig
• 32,000 IPoE subscribers with H-QoS and ACLs: 82 Gbps IMIX throughput at NDR
• expectation for 32,000 PPP sessions: 5-10% less throughput, i.e. about 75 Gbps

Roadmap:
• Dec '16: IPoE demo       • Feb '17: IPoE PoC
• Apr '17: PPPoE demo      • Jun '17: PPPoE PoC
• Jul '17: IPoE EFT        • Aug '17: IPoE FCS (6.3.1)
• Sep '17: PPPoE EFT       • Nov '17: PPPoE FCS (6.3.2)

Disclaimer: numbers and dates are targets and subject to change until FCS.
For Your Reference

XRv system architecture

XRv provides separation of the control plane, data plane and admin plane using Linux containers (LXC).
It shares the same OS, software architecture and infrastructure as high-end routing systems such as the NCS 6000, ASR 9000 and CRS.

[Architecture: one control VM hosts three LXCs: the admin plane, the IOS XR control plane (RP CP, LC CP, dataplane agent DPA and dataplane control DPC) and the IOS XR virtual forwarder (VPP + DPDK with vmxnet3, e1000 and virtio drivers). Everything runs on a WRL7 Linux kernel (3.14) with management, control and 10G Ethernet interfaces, on KVM or ESXi (future: Hyper-V, AWS, bare metal, Xen, ...).]
Cisco vBNG VNF summary

                 CSR 1000V                     XRv 9000
Availability     now                           IPoE: July 2017, PPP: October 2017
vBNG scale       2-vCPU VM:                    28-vCPU VM:
                 8,000 sessions, 5 Gbps IMIX   32,000 sessions, 80 Gbps IMIX
Use cases        vPTA, vLAC, vLNS, vLTS        vPTA; LNS on the radar
Elastic Services Controller for VNF Lifecycle Management

ESC is used for VNF lifecycle management.

Key functions of the Elastic Services Controller (ESC) software:
1. start the CSR 1000V VM
2. apply the day-0 config
3. monitor the VNF for health and overload / underload
4. dynamically instantiate or remove instances as required (see the datamodel sketch below)

Works with OpenStack and VMware infrastructure.
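A minimal sketch of how this elastic behavior is expressed in the scaling section of an ESC deployment datamodel (dep.xml), assuming the standard ESC XML schema; the group size limits are illustrative:

<scaling>
  <min_active>1</min_active>   <!-- always keep one vBNG running -->
  <max_active>5</max_active>   <!-- let ESC grow the group to at most five VMs -->
  <elastic>true</elastic>      <!-- allow automatic scale-up / scale-down -->
</scaling>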

ESC enables a scalable vBNG solution

1,000 active VMs supported by ESC
x 8,000 subscribers per CSR 1000V
= 8,000,000 subscribers
Detailed ESC VNF lifecycle management (For Your Reference)

The Elastic Services Controller combines five building blocks: VNF provisioning, VNF configuration, VNF monitoring, an analytic engine and a rule engine.

Lifecycle flow: provision the VM -> bootstrap and configure the service -> VM alive -> service alive -> functional service. Monitored events (service overloaded / underloaded, service dead, VM dead) trigger predefined actions or custom scripts.

Simple rules map one event to one action (e.g. Service Alive => advertise); complex rules map one event to multiple actions (e.g. Service Alive => advertise, notify).
ESC uses KPI thresholds for VM monitoring

• VM_ALIVE (rising/falling; ICMP ping reachability; 3 successful pings)
  ESC action: Service Booted. Customized action: verify CSR connectivity, add to RADIUS.
• VM_OVERLOADED (rising; session count > 7,000)
  ESC action: service scale-up (add a VM). Customized action: adjust RADIUS load balancing.
• VM_OVERLOADED_FULL (rising; session count > 8,000)
  ESC action: none. Customized action: adjust RADIUS load balancing to exclude this CSR.
• VM_OVERLOADED_LIGHT (falling; session count < 2,000)
  ESC action: none. Customized action: adjust RADIUS load balancing.
• VM_OVERLOADED_EMPTY (falling; session count < 1)
  ESC action: service scale-down (remove the VM). Customized action: remove the CSR from RADIUS load balancing.
KPI XML definition:

<kpi>
  <event_name>VM_OVERLOADED</event_name>
  <metric_value>7000</metric_value>
  <metric_cond>GT</metric_cond>
  <metric_collector>
    <type>SUBSCRIBER_SESSION</type>
    <nicid>0</nicid>
    <poll_frequency>15</poll_frequency>
    <polling_unit>seconds</polling_unit>
  </metric_collector>
</kpi>

Specification of the actions in the same file:

<rule>
  <event_name>VM_OVERLOADED</event_name>
  <action>ALWAYS log</action>
  <action>TRUE servicescaleup.sh</action>
  <action>TRUE sp_script_service_scale_up</action>
</rule>
…
<configuration>
  <dst>iosxe_config.txt</dst>
  <file>file://cisco/csr_SP_config.sh</file>
</configuration>
Smart Licensing is an integrated part of the vBNG solution

The CSR 1000V first boots in evaluation mode with throughput limited to 2.5 Mbps.
Smart Licensing is used to automatically download and install the needed license.

Two options:
1. Connect over the Internet to the Cisco Smart Licensing server.
2. Install a Smart Software Manager Satellite (SSMS) in your network.
Smart Licensing options

Direct deployment: routers, firewalls, software and Unified Communications products send usage data through the corporate firewall (directly, via a proxy, or via Smart Call Home / a Transport Gateway) to the Cisco Smart Software Manager, which is linked to Cisco Commerce Workspace.

Mediated deployment: the same products report only to an on-site Smart Software Manager Satellite (SSMS) behind the firewall / air gap; the SSMS synchronizes with Cisco through an offline monthly inventory update.
License sharing across different vBNGs saves money

Smart licenses can be shared among different CSR 1000V instances!

Example: 16 x CSR 1000V running as vBNG, each with 8,000 broadband sessions.
Option 1: 16 x L-CSR-BB-8K-S= would cost $24,000 * 16 = $384,000.
Option 2: 1 x L-CSR-BB-128K-S= shared among 16 vBNGs would cost $128,000.
Result: a saving of $256,000 on the BB license, which is 66.67%!

PID                Description                                        Term        GPL price
L-CSR-BB-8K-S=     8K BB session smart license, 4G add-on memory      Perpetual   $24,000
L-CSR-BB-128K-S=   128K BB session smart license, 4G add-on memory    Perpetual   $128,000
Bringing subscribers over the backbone to vBNG in central locations

Use VXLAN or L2TPv3 tunnels to bring subscribers to the vBNG:
• subscribers are located in many PoPs
• vBNGs sit in a few central data center locations

Two existing solutions to bring subscribers to vBNGs:
• PPP / IP sessions over VXLAN directly to the vBNG
• PPP / IP sessions over L2TPv3-in-IPv6 to VPP and then to the vBNG

An additional solution terminating L2TPv3 tunnels directly on the CSR 1000V is currently under engineering scoping.
PPP Sessions over VXLAN directly to vBNG

A VXLAN tunnel is established between the aggregation switch and the vBNG:

PPP client -> access node -> switch --( PPP inside VXLAN tunnel )--> vBNG

The vBNG extracts the PPP packets out of the VXLAN tunnel and terminates PPP.
Simple BDI interface on the vBNG:

interface BDI10
 no ip address
 vlan-id dot1q 2000
 pppoe enable group global
!
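The slide shows only the BDI; for completeness, a hedged sketch of how the VXLAN segment itself could be stitched to that bridge domain on the CSR 1000V (VNI, loopback and multicast group are illustrative):

interface nve1
 no ip address
 source-interface Loopback0
 member vni 6010 mcast-group 239.1.1.1   ! flood BUM traffic via multicast
!
bridge-domain 10
 member vni 6010                         ! VXLAN side of the bridge domain
!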
PPP over L2TPv3-in-IPv6 details with VPP

An L2TPv3 tunnel is established between the aggregation switch and fd.io VPP:

PPP client -> access node -> switch --( PPP inside L2TPv3 tunnel )--> fd.io VPP -> vBNG

Extremely efficient packet processing, leverages DPDK.
Full packet-processing software stack: Layer 2, IPv4, IPv6, PBR, …

Scalar packet processing (standard x86 packet processing) executes the full instruction sequence per packet and keeps stalling on i-cache misses and memory accesses within the thread run time.
Vector Packet Processing (VPP, a Cisco innovation, open source, used in VTF) runs each instruction over a whole vector of packets (pkt 0, 1, 2, 3, 4), so memory accesses are amortized across the vector within the thread run time.
Termination of L2TPv3 tunnels directly on the CSR 1000V

An L2TPv3 tunnel is established between the aggregation switch and the CSR 1000v:

PPP client -> access node -> switch --( PPP inside L2TPv3 tunnel )--> vBNG

Currently under engineering scoping!
Simplifies the design if no termination instance in front of the CSR 1000V is needed.
Load Balancing across multiple vBNGs

Question: how to load balance PPP sessions across multiple vBNGs?
Solution: PADO delay, aka the PPPoE Smart Server Selection feature. All vBNGs receive the client's PADI, but each answers with its PADO after a configured delay, so the client picks the vBNG that responds first.

Example 1:
bba-group pppoe global-server-selection
 pado delay 512 ! <- this value can be changed by ESC
...

Example 2:
bba-group pppoe selected-server-selection
 pppoe server remote-id delay 512 string contains TEST
 pppoe server circuit-id delay 256 string "mac 1111.2222.3333"
...

A similar concept exists for IP sessions, by delaying the offer timers of the DHCP server.
L2TP Load Balancing

Question: ONE L2TP tunnel carries 16,000 sessions, but a vLNS supports 8,000.
Solution: LTS = L2TP Tunnel Switch. The LAC sends 16,000 sessions to the vLTS, which switches 8,000 sessions each to vLNS1 (192.168.101.1) and vLNS2 (192.168.102.2).

vpdn-group tunnelout
 request-dialin
  protocol l2tp
 multihop hostname LAC
 initiate-to ip 192.168.101.1 limit 8000
 initiate-to ip 192.168.102.2 limit 8000
 local name LTS
 l2tp tunnel password 0 cisco

LTS# show l2tp tunnel
L2TP Tunnel Information Total tunnels 3 sessions 32000
LocTunID  RemTunID  Remote Name  State  Remote Address   Count  VPDN Group
10203     1196      LAC          est    192.168.100.1    16000  tunnelin
46604     7419      vLNS1        est    192.168.101.1    8000   tunnelout
56623     53253     vLNS2        est    192.168.102.2    8000   tunnelout
LTS#
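The show output also references a "tunnelin" group; a hedged sketch of what the inbound side of the LTS could look like (the virtual-template number is illustrative):

vpdn multihop
!
vpdn-group tunnelin
 accept-dialin
  protocol l2tp
  virtual-template 1
 terminate-from hostname LAC
 l2tp tunnel password 0 cisco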

TCO Calculation Example

Example: calculation for 250,000 subscribers in 6 rack units.

Throughput calculation:
• max throughput for PPP per VM without optimization is 2.5 Gbps
• 2.5 Gbps / 500 Kbps per customer = 5,000 customers per VM
• 250,000 customers / 5,000 customers per VM = 50 VMs

VM, CPU and UCS calculation:
• 2 vCPUs per VM
• 36 cores per UCS C240 = 2 vCPUs for the host + 34 vCPUs for vBNG VMs
• 17 VMs per UCS -> 42.5 Gbps per UCS; the UCS limit was seen at 50 Gbps

Result:
• 50 VMs / 17 VMs per UCS = 3 UCS systems are needed for 250,000 subscribers
• 6 rack units total = 3 x 2 RU per UCS C240
Live Demo

vBNG @ dCloud demo
270+ labs for customers, partners and Cisco employees.
From scripted demos to fully customizable labs with administrative access!
vBNG orchestrated by ESC in OpenStack on dCloud

Everything runs inside one “All In One” virtual machine on the dCloud infrastructure:
• manually started “static” VMs: CSR1kv as PPP client, ESC VM
• ESC-orchestrated “dynamic” VMs: CSR1kv as vBNG-1, CSR1kv as vBNG-2, ... CSR1kv as vBNG-X
• tools: FreeRADIUS, VNC server, SSH/Telnet/SCP, Wireshark
• all on OpenStack, hosted on an Ubuntu “host”
Summary of the key steps in the dCloud vBNG lab
1. Manually start the PPP client VM
2. Start the ESC VM
3. Define orchestration rules (VM_Overloaded, VM_Underloaded, VM_Alive)
4. Test the Scale_Up and Scale_Down cases based on the defined rules
5. Optional step: deploy the Smart Licensing Satellite VM and configure Call Home functionality for automatic license download
Summary of the key steps in our demo today

The demo follows the same five steps as the dCloud lab above, with the optional Smart Licensing Satellite VM configured for automatic throughput license download via Call Home.

Scale_Up condition: if ESC sees 11 PPP sessions or more on vBNG-1, it will start a new vBNG and apply the day-0 configuration.
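Expressed as an ESC KPI, the demo's Scale_Up condition could look like this hedged sketch (GE = greater-or-equal, matching "11 sessions or more"; the polling values are illustrative):

<kpi>
  <event_name>VM_OVERLOADED</event_name>
  <metric_value>11</metric_value>
  <metric_cond>GE</metric_cond>            <!-- fire at 11 sessions or more -->
  <metric_collector>
    <type>SUBSCRIBER_SESSION</type>
    <nicid>0</nicid>
    <poll_frequency>15</poll_frequency>
    <polling_unit>seconds</polling_unit>
  </metric_collector>
</kpi>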

Summary: Bringing it all together

CSR 1000V benefits:
• The CSR 1000V supports all key virtualization technologies, including multi-vendor hypervisors, different image formats, I/O models and VM flavors.
• The CSR 1000V runs on a variety of virtualized infrastructures and can be orchestrated by many NFV software stacks, including Cisco ACI/NSO/ESC, OpenStack and 3rd-party tools.
• The CSR 1000V VNF provides a variety of interfaces and open APIs: REST APIs, NETCONF, XML, OpenStack, etc.
Key benefits of our end-to-end vBNG solution

• Elasticity: add multiple vBNGs within MINUTES, not weeks as in the physical model
• No need for the SP to change the hardware design or physically move links to a different port
• Simplified and centralized hardware replacement and logistics

SP department        Major challenges per department                               Solved with vBNG
Product Management   Time to market is too slow                                    yes
Engineering & Design Complex design, disaster radius, feature gaps                 yes
Capacity Planning    Slow reaction to demand changes, CAPEX & OPEX reduction       yes
Operations           SW and HW quality and stability, reduce number of upgrades    partially
IT Department        High costs and slow implementation of IT systems              yes
Summary of the whole presentation

vBNG is one of the 8 major CSR 1000V Service Provider solutions.
vBNG is targeted for July (IP) and November (PPP) 2017 on the XRv 9000.

Your vBNG solution includes:
1. Cisco vBNG VNF: CSR 1000V (Swiss Army Knife) and/or XRv 9000
2. NSO / ESC orchestration software
3. UCS server hardware and setup optimization
4. Smart Licensing, monitoring and operation guidance
Call to action

1. Test vBNG (2-3 weeks): on your laptop or in dCloud, in your lab, with ESC and SSMS.
2. Design your solution (1 week): create the E2E design, nail down scale numbers, choose hardware & hypervisor.
3. Calculate ROI and TCO (1 week): define time to market, CAPEX & OPEX savings, consider license reuse.
Complete Your Online Session Evaluation

Give us your feedback and receive a Cisco Live 2017 cap by completing the overall event evaluation and 5 session evaluations.
All evaluations can be completed via the Cisco Live Mobile App.
Caps can be collected on Friday 10 March at Registration.

Learn online with Cisco Live! Visit us online after the conference for full access to session videos and presentations: www.CiscoLiveAPAC.com
With our vBNG solution you can build flexible, scalable and cost-effective Broadband Aggregation.
Cisco Spark
Ask questions, get answers, continue the experience

Use Cisco Spark to communicate with the speaker and fellow participants after the session.
Download the Cisco Spark app from iTunes or Google Play, then:
1. Go to the Cisco Live Melbourne 2017 mobile app
2. Find this session
3. Click the Spark button under Speakers in the session description
4. Enter the room, room name = BRKSPG-2381
5. Join the conversation!

The Spark room will be open for 2 weeks after Cisco Live.
Thank you
