
Utilizing DPDK Virtual Devices for Exceptional Path Performance in OVS-DPDK Scenarios
Explore the potential of DPDK virtual devices such as TAP, KNI, and Virtio-user for exceptional-path performance in OVS-DPDK scenarios. Learn about optimizing throughput, implementing multi-queue, and configuring MTU settings, and deepen your understanding of virtual devices based on a NIC in the kernel for improved packet-processing efficiency.
Presentation Transcript
December 10-11, 2019 | Westford, MA
Utilizing DPDK Virtual Devices in OVS
Yinan Wang / Lei Yao, Intel
DPDK Virtual Devices Introduction
- Virtual devices for the exceptional path: TAP, KNI, Virtio-user
- Virtual devices based on a NIC in the kernel: PCAP, AF_PACKET, AF_XDP
Virtual Devices for Exceptional Path: TAP / KNI / Virtio-user
Potential usage in the OVS-DPDK scenario: container networking. The OVS-DPDK PMD driver in user space forwards packets between the NIC and a software kernel interface through a TAP, KNI, or Virtio-user device, while VMs attach via virtio-net over vhost-user.
[Diagram: OVS-DPDK in user space connecting the NIC, a VM (virtio-net / vhost-user), a container, and a software kernel interface via TAP / KNI / Virtio-user]
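As a concrete sketch of the Virtio-user exceptional path, the port can be added to an OVS-DPDK bridge as a DPDK vdev backed by the kernel's vhost-net. The bridge name `br0`, port name `virtiouser0`, and the `/dev/vhost-net` backing path are illustrative assumptions, not taken from the slides.

```shell
# Assumes OVS-DPDK is already running with a netdev bridge "br0"
# and the vhost-net kernel module is loaded (names are examples only).
ovs-vsctl add-port br0 virtiouser0 -- set Interface virtiouser0 type=dpdk \
    options:dpdk-devargs=virtio_user0,path=/dev/vhost-net
```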
Virtual Devices for Exceptional Path: OVS-DPDK Iperf Performance
Single-host iperf performance in OVS-DPDK: iperf -s and iperf -c run across two exceptional-path ports (TAP / KNI / Virtio-user), with throughput (Gbps) measured at 1, 2, and 4 queues.
[Chart: throughput (Gbps) of Virtio-user, TAP, and KNI at 1, 2, and 4 queues]
Multi-queue support by software kernel interface:
- TAP: yes
- KNI: no
- Virtio-user: yes
Virtual Devices for Exceptional Path: BKMs on supporting the three VDEVs in OVS-DPDK
TAP: OVS-DPDK normally calculates the Tx queue number from the total number of ports, but a TAP device requires the Rx queue number to equal the Tx queue number.
KNI: when OVS-DPDK is launched with KNI, the KNI MTU is 2034 by default, since KNI_MTU = (mbuf_size) - RTE_ETHER_HDR_LEN. Reconfigure the MTU of the KNI interface on the kernel side for better usage.
Virtio-user:
- 1G hugepages: work as-is.
- 2M hugepages: ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-extra="--single-file-segments"
- 4K pages: ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-extra="--no-huge"; system requirements: VT-d on, NIC bound to vfio-pci.
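The default KNI MTU of 2034 follows directly from the formula above with DPDK's default mbuf data room of 2048 bytes and a 14-byte Ethernet header. A minimal sketch of the arithmetic and the kernel-side MTU fix (the KNI interface name `vEth0` is a hypothetical example):

```shell
# KNI_MTU = mbuf_size - RTE_ETHER_HDR_LEN
MBUF_SIZE=2048          # DPDK default mbuf data room
RTE_ETHER_HDR_LEN=14    # Ethernet header length
KNI_MTU=$((MBUF_SIZE - RTE_ETHER_HDR_LEN))
echo "$KNI_MTU"         # 2034

# Reconfigure the KNI interface MTU from the kernel side, e.g. back
# to a standard 1500 bytes (requires root; interface name is an example):
# ip link set dev vEth0 mtu 1500
```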
Virtual Devices Based on NIC in Kernel: PCAP / AF_PACKET / AF_XDP
Potential usage of these VDEVs in the OVS-DPDK scenario: receive packets from a kernel-driven NIC into user space without a NIC PMD.
[Diagram: OVS-DPDK in user space reaching the kernel NIC through a PCAP / AF_PACKET / AF_XDP DPDK PMD]
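For illustration, one of these kernel-NIC vdevs can be attached to OVS-DPDK the same way as any DPDK port, via `dpdk-devargs`. The bridge name `br0`, port name `afpacket0`, and kernel interface `eth0` below are hypothetical examples; the slides do not give a specific command for AF_PACKET.

```shell
# Assumes OVS-DPDK is running with a netdev bridge "br0" and that
# "eth0" is a NIC still bound to its kernel driver (names are examples).
ovs-vsctl add-port br0 afpacket0 -- set Interface afpacket0 type=dpdk \
    options:dpdk-devargs=net_af_packet0,iface=eth0
```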
Virtual Devices Based on NIC in Kernel: Phy Loopback Performance
Physical-loopback performance of DPDK testpmd over PCAP, AF_PACKET, and AF_XDP with the NIC in the kernel, measured as throughput (Mpps).
[Chart: throughput (Mpps) of PCAP, AF_PACKET, and AF_XDP in phy loopback]
Virtual Devices Based on NIC in Kernel: AF_XDP Performance Gains in DPDK
[Chart: AF_XDP phy loopback throughput (Mpps) across DPDK v19.05, v19.08, and v19.11]
Two tips for using AF_XDP in OVS-DPDK:
a) AF_XDP has dependencies on the kernel version; v5.4 is recommended, as it includes several bug fixes.
b) ovs-vsctl add-port br1 afxdp -- set Interface afxdp type=dpdk options:dpdk-devargs=net_af_xdp0,iface=[interface name],start_queue=[x],queue_count=[x]
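Filling in the placeholders from tip (b) with example values gives a concrete sketch of a multi-queue AF_XDP port. The interface name `ens785f0` and the queue numbers are hypothetical; only the command shape comes from the slide.

```shell
# Hypothetical four-queue AF_XDP port on bridge "br1";
# "ens785f0" and the queue values are example placeholders.
ovs-vsctl add-port br1 afxdp -- set Interface afxdp type=dpdk \
    options:dpdk-devargs=net_af_xdp0,iface=ens785f0,start_queue=0,queue_count=4
```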