This post details my approach to setting up a proof-of-concept lab to prove the operation of Nexus 9K switches running VXLAN with BGP EVPN. It will also show how I deployed and verified the configurations and operation using Ansible.
You will follow the process from the start and learn such things as “What is a VTEP?” and “What is a VNI?”
VXLAN Lab using Cisco Nexus 9000v
For this lab I will be using VMware ESXi, 3 x Nexus 9000v switches and a local installation of Ansible running on Ubuntu.
The topology I will be building is below
The Nexus 9000v switch image is purely for educational purposes and is not intended to be used in production.
The switch software can be downloaded from www.cisco.com with a valid service contract. For my purposes I downloaded the .ova file.
Nexus 9000v – loader error
After the initial setup of the Cisco Nexus 9000v, you must configure the booting image in your system before you reload it.
Otherwise, the Cisco Nexus 9000v drops to the loader> prompt after reload/shut down.
NEX-9K-SPINE1# sh ver
Cisco Nexus Operating System (NX-OS) Software
Nexus 9000v is a demo version of the Nexus Operating System Software
  BIOS: version
  NXOS: version 7.0(3)I7(2)
  BIOS compile time:
  NXOS image file is: bootflash:///nxos.7.0.3.I7.2.bin
  NXOS compile time: 11/22/2017 13:00:00 [11/22/2017 21:55:29]

Hardware
  cisco Nexus9000 9000v Chassis

NEX-9K-SPINE1# conf t
Enter configuration commands, one per line. End with CNTL/Z.
NEX-9K-SPINE1(config)# boot nxos bootflash:///nxos.7.0.3.I7.2.bin
Performing image verification and compatibility check, please wait....
NEX-9K-SPINE1(config)# end
NEX-9K-SPINE1# copy run start
[########################################] 100%
Copy complete, now saving to disk (please wait)...
Copy complete.
NEX-9K-SPINE1# reload
If you get an error when trying this configuration you might be hitting this bug https://bst.cloudapps.cisco.com/bugsearch/bug/CSCvm37015
The workaround is to disable signature verification
NEXUS-9K-Core(config)# boot nxos bootflash:///nxos.7.0.3.I7.5a.bin
Performing image verification and compatibility check, please wait....
Image Signature verification & bootvar config Failed
Enter the command no feature signature-verification:
NEXUS-9K-Core(config)# no feature signature-verification
WARNING: This will disable digital image signature verification for all NxOS software attempted to be installed using any install method.
Are you sure you want to continue? (y/n) : [n] y
WARNING: Image Signature Verification has been Disabled!
NEXUS-9K-Core(config)# boot nxos bootflash:///nxos.7.0.3.I7.5a.bin
Performing image verification and compatibility check, please wait....
2018 Oct 24 00:08:10 NEXUS-9K-Core %$ VDC-1 %$ %USER-2-SYSTEM_MSG: WARNING!! Image Verification has been Disabled! - bootvar
WARNING: Image Signature Verification has been Disabled!
Save the config and reload.
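To confirm the boot variable took effect, you can check it with show boot before or after the reload – the NXOS variable in the output should point at the image you configured:

NEXUS-9K-Core# show boot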
I will not be showing you how to deploy an ova file; I am assuming that if you are looking to deploy VXLAN, you should be able to install an ova file!
So I have 3 switches installed and up and running
The first challenge is to get connectivity between the three switches
The interfaces within ESXi are mapped directly to E1/1 – E1/12 on the 9000v.
Network adapter 1 always maps to the Management interface, and this needs to connect to your local network. Then configure the management interface on your Nexus 9000v with an IP address in the same network.
My local network is called VM Network and uses 192.168.1.0/24, so I will configure the management interface as below:
interface mgmt0
  vrf member management
  ip address 192.168.1.179/24
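A quick way to confirm management connectivity is a ping from the management VRF. Here 192.168.1.1 is just an assumed gateway on my local network; substitute a host on yours:

NEX-9K-SPINE1# ping 192.168.1.1 vrf management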
Then Network adapter 2 maps to my first data port, E1/1.
The connection between the Spine and Leaf 1 is via its own virtual switch.
I will now address E1/1 with the first point-to-point link:
interface Ethernet1/1
  description P2P link to Leaf1
  no switchport
  ip address 10.0.0.1/30
  ip ospf network point-to-point
  no shutdown
I need to do the same for Leaf 2 with a separate virtual switch. Now that we have connectivity, let's get some IPs on the interfaces and verify connectivity.
All connections in a Leaf & Spine topology are L3 point-to-point links.
We will be running OSPF on these links to provide the connectivity for the underlay network.
The switches will also require loopback interfaces; it is best practice to configure one loopback for the Router ID and a second for the VTEP.
Final Underlay Config for all 3 switches
So the final config has two interfaces, e1/1 and e1/2, on the Spine switch connected to e1/1 on each leaf switch. Each point-to-point link is addressed from a /30 range, and there are two loopbacks configured on each switch. Finally, OSPF has been configured; note that on NX-OS each interface is enrolled into the process with ip router ospf UNDERLAY area 0.0.0.0 under the interface rather than with network statements. All interfaces are part of Area 0, so we have full IP reachability on the underlay network.
NEX-9K-SPINE-1
feature ospf

router ospf UNDERLAY
  router-id 1.1.1.1

interface loopback0
  description Routing ID
  ip address 1.1.1.1/32
  ip router ospf UNDERLAY area 0.0.0.0

interface loopback1
  description VTEP ID
  ip address 100.100.100.1/32
  ip router ospf UNDERLAY area 0.0.0.0

interface Ethernet1/1
  description To-Leaf-1
  no switchport
  ip address 10.0.0.1/30
  ip ospf network point-to-point
  ip router ospf UNDERLAY area 0.0.0.0
  no shutdown

interface Ethernet1/2
  description To-Leaf-2
  no switchport
  ip address 10.0.0.5/30
  ip ospf network point-to-point
  ip router ospf UNDERLAY area 0.0.0.0
  no shutdown
NEX-9K-LEAF-1
feature ospf

router ospf UNDERLAY
  router-id 1.1.1.2

interface loopback0
  ip address 1.1.1.2/32
  ip router ospf UNDERLAY area 0.0.0.0

interface loopback1
  ip address 100.100.100.2/32
  ip router ospf UNDERLAY area 0.0.0.0

interface Ethernet1/1
  no switchport
  ip address 10.0.0.2/30
  ip ospf network point-to-point
  ip router ospf UNDERLAY area 0.0.0.0
  no shutdown
NEX-9K-LEAF-2
feature ospf

router ospf UNDERLAY
  router-id 1.1.1.3

interface loopback0
  ip address 1.1.1.3/32
  ip router ospf UNDERLAY area 0.0.0.0

interface loopback1
  ip address 100.100.100.3/32
  ip router ospf UNDERLAY area 0.0.0.0

interface Ethernet1/1
  no switchport
  ip address 10.0.0.6/30
  ip ospf network point-to-point
  ip router ospf UNDERLAY area 0.0.0.0
  no shutdown
Note: The interfaces have been configured as OSPF network type point-to-point. This eliminates DR/BDR elections and means only type-1 (router) LSAs are generated for these links. This keeps OSPF running as lean as possible and speeds up convergence.
IS-IS – When running VXLAN with ACI, the underlay network is configured using IS-IS, but this is all hands-off as the controller does the configuration. If you are running VXLAN on NX-OS, it is recommended to use OSPF, mainly because it is more widely understood than IS-IS. Convergence times are about the same until you start to scale the network and the routing tables grow, at which point IS-IS becomes the slightly better choice.
For this lab, and for any production scenario you are looking at, I would recommend OSPF as the underlay protocol.
Before we go any further, let’s just verify we have L3 connectivity between all of our switches
NEX-9K-SPINE-1# ping 1.1.1.2
PING 1.1.1.2 (1.1.1.2): 56 data bytes
64 bytes from 1.1.1.2: icmp_seq=0 ttl=254 time=1.389 ms
64 bytes from 1.1.1.2: icmp_seq=1 ttl=254 time=0.917 ms

NEX-9K-SPINE-1# ping 1.1.1.3
PING 1.1.1.3 (1.1.1.3): 56 data bytes
64 bytes from 1.1.1.3: icmp_seq=0 ttl=254 time=1.389 ms
64 bytes from 1.1.1.3: icmp_seq=1 ttl=254 time=0.917 ms
From the Spine switch I can ping the loopbacks on both leaf switches, so we are good to proceed to the next step.
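Ping proves reachability, but it is also worth confirming the OSPF adjacencies directly; both leaf router IDs should show in a FULL state from the Spine:

NEX-9K-SPINE-1# show ip ospf neighbors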
Throughout this post there are a lot of acronyms, so let's just do a quick recap on what they mean.
VXLAN Terminology
- VNI – VXLAN Network Identifier – also known as the VXLAN Segment ID – this is the VXLAN number
- VTEP – VXLAN Tunnel Endpoint – the device that encapsulates and decapsulates VXLAN traffic
- NVE – Network Virtualization Edge – the logical interface where the VXLAN encapsulation happens
- EVPN – Ethernet VPN – the BGP address family used as the control plane for VXLAN
Leaf Node Configuration – L2 VNI
We now need to configure the Leaf switches systematically, working through the following:
- L2 VNI
- L3 VNI
- VPC
First define your Layer 2 VLAN and assign it to a VXLAN Network Identifier
MP-BGP EVPN
MP-BGP EVPN is a control protocol for VXLAN based on IETF RFC 7432. Prior to EVPN, VXLAN overlay networks operated using the flood-and-learn model. In this model, end-host information learning and VTEP discovery are both data-plane based, with no control protocol to distribute end-host reachability information among VTEPs.

MP-BGP EVPN changes this model. It introduces control-plane learning for end hosts behind remote VTEPs. It provides control-plane and data-plane separation and a unified control plane for both Layer 2 and Layer 3 forwarding in a VXLAN overlay network.
So let's configure BGP on each switch.
Before progressing, you need to enable some more features:
feature nv overlay
feature bgp
feature pim
feature interface-vlan
feature vn-segment-vlan-based
nv overlay evpn
BGP requires the LAN_ENTERPRISE_SERVICES_PKG licence. If you do not have this, it does not matter for a lab environment, as the feature is enabled on an honor-based system. If you are running this in production, I recommend you purchase the required license.
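As a quick sanity check that all the features took effect, you can filter the running config:

NEX-9K-SPINE-1# show running-config | include feature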
NEX-9K-SPINE-1
router bgp 65000
  router-id 1.1.1.1
  address-family ipv4 unicast
  address-family l2vpn evpn
    retain route-target all
  template peer VTEP-PEERS
    remote-as 65000
    update-source loopback0
    address-family ipv4 unicast
      send-community
      send-community extended
      route-reflector-client
    address-family l2vpn evpn
      send-community
      send-community extended
      route-reflector-client
  neighbor 1.1.1.2
    inherit peer VTEP-PEERS
  neighbor 1.1.1.3
    inherit peer VTEP-PEERS
NEX-9K-LEAF-1
router bgp 65000
  router-id 1.1.1.2
  address-family ipv4 unicast
  address-family l2vpn evpn
  neighbor 1.1.1.1
    remote-as 65000
    update-source loopback0
    address-family ipv4 unicast
      send-community extended
    address-family l2vpn evpn
      send-community extended
NEX-9K-LEAF-2
router bgp 65000
  router-id 1.1.1.3
  address-family ipv4 unicast
  address-family l2vpn evpn
  neighbor 1.1.1.1
    remote-as 65000
    update-source loopback0
    address-family ipv4 unicast
      send-community extended
    address-family l2vpn evpn
      send-community extended
Let’s step through what each line of config is doing here
We are going to be running iBGP with a single Autonomous System (65000), so each device is configured with router bgp 65000. Each leaf peers with the spine, and the spine peers with each leaf, using their loopback0 interfaces. The spine is also configured as a route reflector (route-reflector-client under the peer template) so the leaves do not need a full mesh of iBGP sessions between them.
Then you configure the l2vpn evpn address family and, under that, enable sending of extended communities, since EVPN routes carry their route targets in extended communities.
We should now have some BGP neighbors, let’s check that from the Spine switch
NEX-9K-SPINE-1# sh bgp l2vpn evpn summary
BGP summary information for VRF default, address family L2VPN EVPN
BGP router identifier 1.1.1.1, local AS number 6000
BGP table version is 6, L2VPN EVPN config peers 2, capable peers 0
0 network entries and 0 paths using 0 bytes of memory
BGP attribute entries [0/0], BGP AS path entries [0/0]
BGP community entries [0/0], BGP clusterlist entries [0/0]

Neighbor        V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
1.1.1.2         4  6000      47      82        0    0    0 00:07:16 Idle
1.1.1.3         4  6000       0       0        0    0    0 01:26:52 Idle
NEX-9K-SPINE-1#
In this capture the peers are still coming up; once both neighbors show as Established rather than Idle, we have full OSPF and BGP peerings set up.
The next step is to configure multicast, which the underlay uses to replicate BUM (broadcast, unknown unicast and multicast) traffic between VTEPs.
NEX-9K-SPINE-1
ip pim rp-address 1.1.1.1 group-list 224.0.0.0/4   ! sets the Rendezvous Point to 1.1.1.1 (Loopback0)
ip pim ssm range 232.0.0.0/8                       ! sets the source-specific multicast range
Then enable ip pim sparse mode on all the required interfaces
interface Ethernet1/1   ! link to Leaf-1
  ip pim sparse-mode
interface Ethernet1/2   ! link to Leaf-2
  ip pim sparse-mode
interface loopback0
  ip pim sparse-mode
NEX-9K-Leaf-1
ip pim rp-address 1.1.1.1 group-list 224.0.0.0/4   ! sets the Rendezvous Point to the Spine's Loopback0
ip pim ssm range 232.0.0.0/8                       ! sets the source-specific multicast range
Then enable ip pim sparse mode on all required interfaces
interface Ethernet1/1   ! link to Spine-1
  ip pim sparse-mode
interface loopback0
  ip pim sparse-mode
interface loopback1     ! the NVE source interface also needs PIM
  ip pim sparse-mode
NEX-9K-Leaf-2 – same as Leaf-1
ip pim rp-address 1.1.1.1 group-list 224.0.0.0/4   ! sets the Rendezvous Point to the Spine's Loopback0
ip pim ssm range 232.0.0.0/8                       ! sets the source-specific multicast range
Then enable ip pim sparse mode on all required interfaces
interface Ethernet1/1   ! link to Spine-1
  ip pim sparse-mode
interface loopback0
  ip pim sparse-mode
interface loopback1     ! the NVE source interface also needs PIM
  ip pim sparse-mode
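With PIM enabled everywhere, each leaf should see the Spine as a PIM neighbor and learn the RP mapping. Two quick checks:

NEX-9K-LEAF-1# show ip pim neighbor
NEX-9K-LEAF-1# show ip pim rp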
VLAN and Layer 3 configuration
The next step in the configuration is to set up a VLAN locally on the switch and map it to a VXLAN VNI. We also need to set up a special VLAN which will be used specifically as a Layer 3 VNI to route inter-VNI traffic (more on that a bit later).
Configuration on all Leaf Switches
vlan 900                 ! used specifically as the Layer 3 VNI
  name L3-VNI-VLAN-900
  vn-segment 10000900    ! maps the VLAN to a VXLAN VNI
The next VLAN is going to be our test host VLAN
vlan 50
  name VLAN-50-Desktops
  vn-segment 10000050
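You can confirm the VLAN-to-VNI mappings on each leaf; the output lists each VLAN alongside its VN-Segment:

NEX-9K-LEAF-1# show vxlan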
Next we need to define the Layer 3 VRF for Inter-VNI traffic
vrf context EVPN-L3-VNI-VLAN-900
  vni 10000900
  rd auto
  address-family ipv4 unicast
    route-target both auto
    route-target both auto evpn
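A quick check that the VRF has been created and is tied to the right VNI:

NEX-9K-LEAF-1# show vrf EVPN-L3-VNI-VLAN-900 detail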
Next, create the SVI for the Layer 3 VNI VLAN. This interface needs no IP address; ip forward simply allows it to route traffic carried over the L3 VNI:
interface Vlan900
  no shutdown
  vrf member EVPN-L3-VNI-VLAN-900
  ip forward
Next, create the SVI for VLAN 50 and assign the anycast gateway address:
interface Vlan50
  no shutdown
  vrf member EVPN-L3-VNI-VLAN-900
  ip address 10.0.0.1/24
  fabric forwarding mode anycast-gateway
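Note: for the anycast gateway to function, the fabric forwarding feature must be enabled and a fabric-wide virtual MAC defined globally. The MAC below is just an example value; the important thing is that it is identical on every leaf:

feature fabric forwarding
fabric forwarding anycast-gateway-mac 0000.2222.3333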
The next step is to configure the NVE interface. The NVE (Network Virtualization Edge) interface is the logical interface where VXLAN packets are encapsulated and decapsulated.
interface nve1
  no shutdown
  source-interface loopback1
  host-reachability protocol bgp
  member vni 10000900 associate-vrf
  member vni 10000050
    suppress-arp
    mcast-group 239.1.1.50
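Once the NVE interface is up on both leaves, you can verify that the VNIs are operational and that each leaf has discovered the other as a VTEP peer:

NEX-9K-LEAF-1# show nve vni
NEX-9K-LEAF-1# show nve peers
NEX-9K-LEAF-1# show interface nve1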
to be continued…
We will now perform the same configuration but this time using Cisco Data Center Network Manager (DCNM).
For a quick tutorial on how to install DCNM, check out this post
Cisco Data Center Manager Installation Tutorial
If you are looking to upgrade the software on your Nexus 9000 switches, check out this post
Nexus 9000 Software Upgrade Procedure