If you need the GNOME GUI, you can install Ubuntu 20.04 Desktop. After the system boots up, it is recommended to disable 'apparmor'; otherwise you may get permission-denied errors when you assign SR-IOV devices to the CSR1kV. A reboot is needed for the change to take effect.
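One way to disable it (a sketch, using the stock apparmor.service shipped with Ubuntu 20.04):

sudo systemctl stop apparmor
sudo systemctl disable apparmor
sudo reboot
# After the reboot, confirm that no profiles are loaded:
sudo aa-status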
ubuntu@ubuntu-kvm:~$ sudo nmcli connection modify netplan-eno1 ipv4.method manual ipv4.addresses 10.75.59.50/24 ipv4.gateway 10.75.59.1 ipv4.dns 64.104.123.245
# Change the connection's name to the device's name.
ubuntu@ubuntu-kvm:~$ sudo nmcli connection modify netplan-eno1 connection.id eno1
ubuntu@ubuntu-kvm:~$ sudo nmcli connection up eno1
ubuntu@ubuntu-kvm:~$ sudo nmcli connection
NAME        UUID                                  TYPE      DEVICE
eno1        10838d80-caeb-349e-ba73-08ed16d4d666  ethernet  eno1
enp216s0f0  6556c191-c253-3a5e-b440-c5b071ec29a4  ethernet  enp216s0f0
enp216s0f1  8080672c-5784-375f-8eb9-a6ef57cbd4f7  ethernet  enp216s0f1
ens1f0      58fb5b1f-c10a-3e7a-9ab9-a8c449840ce6  ethernet  ens1f0
ens1f1      2a06a9d9-b761-3bdf-aa0b-3d44fff2158f  ethernet  ens1f1
virbr0      ca7d1e11-a82f-429c-9d91-fc985776232c  bridge    virbr0
# Set the MTU and disable IPv4 on the NIC to enable SR-IOV.
sudo nmcli connection modify ens1f0 ethernet.mtu 9216
sudo nmcli connection modify ens1f0 ipv4.method disabled
sudo nmcli connection up ens1f0
sudo ip link show ens1f0
2: ens1f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9216 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 40:a6:b7:0f:a7:74 brd ff:ff:ff:ff:ff:ff

# Note: The MTU value 9216 is derived from Cisco NFVIS:
CSP5228-1# show pnic-detail mtu
Name    MTU
=============================
eth0-1  9216
eth0-2  9216
eth1-1  9216
eth1-2  9216
SR-IOV Configuration
(1) Check that the NIC supports SR-IOV
Check the NIC hardware information.
sudo lshw -c network -businfo
Bus info          Device        Class      Description
========================================================
sudo lspci -vv -s 5e:00.0 | grep -A 5 -i SR-IOV
        Capabilities: [160 v1] Single Root I/O Virtualization (SR-IOV)
                IOVCap: Migration-, Interrupt Message Number: 000
                IOVCtl: Enable+ Migration- Interrupt- MSE+ ARIHierarchy+
                IOVSta: Migration-
                Initial VFs: 64, Total VFs: 64, Number of VFs: 2, Function Dependency Link: 00
                VF offset: 16, stride: 1, Device ID: 154c
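You can also read the PF's sysfs attributes for a quick check (assuming the PF at 5e:00.0 is ens1f0, as in this setup):

# Maximum number of VFs the PF supports; matches "Total VFs: 64" above
cat /sys/class/net/ens1f0/device/sriov_totalvfs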
(2) Change the GRUB parameters
sudo vi /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet hugepagesz=1G hugepages=64 default_hugepagesz=1G intel_iommu=on iommu=pt isolcpus=1-8,45-52"
Note: The total hugepage allocation must be less than the physical memory; otherwise the system will fail to boot.
default_hugepagesz=1G hugepagesz=1G hugepages=64 allocates 64 static 1 GB huge pages at boot. The CSR 1000v uses these static huge pages for best performance.
isolcpus=1-8,45-52 isolates the CPU cores reserved for the CSR 1000v, preventing other processes from running on those cores, which reduces latency for the CSR 1000v. You can refer to [Check the capability of the platform][1] for the CPU information.
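For the new kernel parameters to take effect, regenerate the GRUB configuration and reboot (standard Ubuntu tooling):

sudo update-grub
sudo reboot
# After the reboot, verify the command line and the hugepage pool:
cat /proc/cmdline
grep HugePages /proc/meminfo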
(3) Create the VFs

# nmcli can configure SR-IOV as follows:
sudo nmcli connection modify ens1f0 sriov.total-vfs 2
sudo nmcli connection modify ens1f1 sriov.total-vfs 2
# We can also set the MAC addresses and set trust on:
sudo nmcli connection modify ens1f0 sriov.vfs '0 mac=b6:4f:02:37:5a:d8 trust=true, 1 mac=26:04:1d:1f:3d:a9 trust=true'
sudo nmcli connection modify ens1f1 sriov.vfs '0 mac=76:6c:4e:16:7f:e2 trust=true, 1 mac=6a:f2:bd:97:71:65 trust=true'
sudo nmcli connection up ens1f0
sudo nmcli connection up ens1f1
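You can confirm how many VFs are active on a PF through sysfs (a quick check, assuming the PF is ens1f0):

# Should print 2 once the commands above take effect
cat /sys/class/net/ens1f0/device/sriov_numvfs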
After the reboot, check dmesg.
sudo dmesg | grep -i vf
[    6.599304] i40e 0000:5e:00.0: Allocating 2 VFs.
[    6.680111] i40e 0000:5e:00.1: Allocating 2 VFs.
[    6.730167] iavf: Intel(R) Ethernet Adaptive Virtual Function Network Driver - version 3.2.3-k
<< snip >>
[   17.931493] i40e 0000:5e:00.0: Setting MAC b6:4f:02:37:5a:d8 on VF 0
[   18.111781] i40e 0000:5e:00.0: VF 0 is now trusted
[   18.112559] i40e 0000:5e:00.0: Setting MAC 26:04:1d:1f:3d:a9 on VF 1
[   18.291464] i40e 0000:5e:00.0: VF 1 is now trusted
[   18.292231] i40e 0000:5e:00.1: Setting MAC 76:6c:4e:16:7f:e2 on VF 0
[   18.475259] i40e 0000:5e:00.1: VF 0 is now trusted
[   18.475929] i40e 0000:5e:00.1: Setting MAC 6a:f2:bd:97:71:65 on VF 1
[   18.659465] i40e 0000:5e:00.1: VF 1 is now trusted
[   18.728124] iavf 0000:5e:02.0 ens1f0v0: NIC Link is Up Speed is 25 Gbps Full Duplex
[   18.752275] iavf 0000:5e:02.1 ens1f0v1: NIC Link is Up Speed is 25 Gbps Full Duplex
[   18.776329] iavf 0000:5e:0a.0 ens1f1v0: NIC Link is Up Speed is 25 Gbps Full Duplex
[   18.800569] iavf 0000:5e:0a.1 ens1f1v1: NIC Link is Up Speed is 25 Gbps Full Duplex
(4) Check the VFs
You can check the VFs with the lspci or ip link commands.
ubuntu@ubuntu-kvm:~$ sudo lspci | grep -i Virtual
ubuntu@ubuntu-kvm:~$ sudo ip link show | grep -B2 vf
The good news is that the VF names map cleanly to the physical NICs. For example, ens1f0v1 is one of the VFs of the physical NIC ens1f0.
ubuntu@ubuntu-kvm:~$ ip link show | grep -E ens1f[0,1]v[0,1] -A 1
9: ens1f0v1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 26:04:1d:1f:3d:a9 brd ff:ff:ff:ff:ff:ff
10: ens1f1v1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 6a:f2:bd:97:71:65 brd ff:ff:ff:ff:ff:ff
15: ens1f0v0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether b6:4f:02:37:5a:d8 brd ff:ff:ff:ff:ff:ff
16: ens1f1v0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 76:6c:4e:16:7f:e2 brd ff:ff:ff:ff:ff:ff
We can change the MTU of the VFs to 9216 (you may not need 9216; 1504 is enough), as shown below.
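A minimal sketch, assuming the VF interface names shown above:

# Repeat for ens1f0v1, ens1f1v0, and ens1f1v1 as needed
sudo ip link set dev ens1f0v0 mtu 9216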
Using this method, KVM maintains a pool of network devices that can be inserted into VMs; the size of the pool is determined by how many VFs were created on the physical function when it was initialized.
<network>
  <name>ens1f0_sriov_pool</name>  <!-- This is the name of the file you created -->
  <forward mode='hostdev' managed='yes'>
    <pf dev='ens1f0'/>  <!-- Use the netdev name of your SR-IOV device's PF here -->
  </forward>
</network>
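Then define and start the pool network with virsh (assuming the XML above was saved as ens1f0_sriov_pool.xml):

sudo virsh net-define ens1f0_sriov_pool.xml
sudo virsh net-start ens1f0_sriov_pool
sudo virsh net-autostart ens1f0_sriov_pool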
From virt-manager, create a CSR1KV virtual machine step by step, and choose the virtual network interface from the SR-IOV pool.
Note: The first interface of csr1kv-1 is configured in macvtap bridge mode, so you do not need to create a Linux bridge. However, csr1kv-1 cannot communicate with the Linux host through this interface, although it can reach the outside world through eno1. This is a known limitation of macvtap.
After selecting the virtual network interface, click Begin Installation; you can then shut down the virtual machine. The actions above create the csr1kv-1.xml file under the directory /etc/libvirt/qemu/.
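You can inspect the generated definition at any time (the same content that virsh edit csr1kv-1 opens):

sudo virsh dumpxml csr1kv-1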
KVM Performance Tuning
KVM performance tuning is related to NUMA, memory hugepages, and vCPU pinning. The main reference is the Red Hat Enterprise Linux 7 Performance Tuning Guide.
We will use virsh edit csr1kv-1 to do the performance tuning.
(1) Check the capability of the platform
ubuntu@ubuntu-kvm:~$ virsh nodeinfo
CPU model:           x86_64
CPU(s):              88
CPU frequency:       1000 MHz
CPU socket(s):       1
Core(s) per socket:  22
Thread(s) per core:  2
NUMA cell(s):        2
Memory size:         394929928 KiB
Please note that if hyper-threading is enabled in the BIOS, the "emulatorpin" parameter should be set. The cpuset values come from "virsh capabilities", for example siblings='1,45': when core 1 is pinned, its sibling core 45 should be set in emulatorpin.
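For example (a sketch, assuming the VM name csr1kv-1 and the sibling pair 1/45 from virsh capabilities), the pinning can be applied with virsh instead of editing the XML by hand:

# Pin vCPU 0 to isolated physical core 1 and persist it in the domain XML
virsh vcpupin csr1kv-1 0 1 --config
# Pin the QEMU emulator threads to core 45, the hyper-thread sibling of core 1
virsh emulatorpin csr1kv-1 45 --config
# Verify the resulting pinning
virsh vcpupin csr1kv-1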
CSR1000v-1#show sdwan running-config
system
 system-ip             1.1.10.1
 site-id               101
 sp-organization-name  CiscoBJ
 organization-name     CiscoBJ
 vbond 10.75.58.51 port 12346
!
hostname CSR1000v-1
username admin privilege 15 secret 9 $9$4/QL2V2K4/6H1k$XUmRNf.T7t3KDOj/FmoNexpEypCxr482dExXHDnohSI
ip name-server 64.104.123.245
ip route 0.0.0.0 0.0.0.0 10.75.59.1
interface GigabitEthernet1
 no shutdown
 arp timeout 1200
 ip address 10.75.59.35 255.255.255.0
 no ip redirects
 ip mtu 1500
 mtu 1500
 negotiation auto
exit
interface Tunnel1
 no shutdown
 ip unnumbered GigabitEthernet1
 no ip redirects
 ipv6 unnumbered GigabitEthernet1
 no ipv6 redirects
 tunnel source GigabitEthernet1
 tunnel mode sdwan
exit
clock timezone CST 8 0
ntp server x.x.x.x version 4
Useful commands to verify the SD-WAN control plane:

show sdwan control local-properties
show sdwan control connections
show sdwan control connection-history
show sdwan running-config
show sdwan bfd sessions
show sdwan omp peers
show sdwan omp routes
(3) CSR 1000v Smart License Registration
Before Smart License registration, you need:
CSR 1000v’s control connections are up;
Configure ip http client source-interface GigabitEthernet2
If the version is 16.12.x or below, you need to allow all services on the tunnel interface:
sdwan
 interface GigabitEthernet2
  tunnel-interface
   allow-service all
In 17.2.x and above, there is an allow-service https option.
CSR 1000v can access the URL https://tools.cisco.com/its/service/oddce/services/DDCEService. The command license smart register idtoken xxxxxx performs the registration; you can find the idtoken in your Smart Account's smart license inventory. Use show license status to check the status.
CSR1000v-1#show platform hardware throughput level
The current throughput level is 200000000 kb/s
Performance and limitations
(1) The performance of SR-IOV
A performance test was done after the SR-IOV setup. The CSR1KV was configured with 8 vCPUs and 8 GB of memory, and the packet drop rate was 0%.
| Packet Size | SDWAN Performance (Mbps) | CEF Performance (Mbps) |
| ----------- | ------------------------ | ---------------------- |
| 128 Bytes   | 800                      | 2843.75                |
| 256 Bytes   | 1431.26                  | 6500.00                |
| 512 Bytes   | 2581.26                  | 10578.13               |
| 1024 Bytes  | 3731.26                  | 15500.00               |
| 1400 Bytes  | 4306.26                  | 18171.88               |
Note: These test results do not represent official performance data. Different servers and network cards may produce different results; the data above is for demonstration only.
(2) The limitations of SR-IOV
The main limitation of SR-IOV is the number of VLANs on each VF: with ixgbevf, a VF is limited to a maximum of 63 VLANs. So the number of active sub-interfaces on a CSR1KV interface that uses an SR-IOV VF is limited to 63. There are some notes in the Cisco CSR 1000v and Cisco ISRv Software Configuration Guide:
SR-IOV (ixgbevf) Maximum VLANs: The maximum number of VLANs supported on PF is 64. Together, all VFs can have a total of 64 VLANs. (Intel limitation.)
SR-IOV (i40evf) Maximum VLANs: The maximum number of VLANs supported on PF is 512. Together, all VFs can have a total of 512 VLANs. (Intel limitation.) Per-VF resources are managed by the PF (host) device driver.