Edge Overlay Model for Cloud Networks


Explore the implementation of a non-tunneling edge overlay model using OpenFlow technology for cloud datacenter networks. Learn about distributed tunnels, network virtualization, VLAN limitations, and L2-in-L3 tunneling to address challenges in multi-tenant environments. Discover tunneling protocols like VXLAN and NVGRE, along with the associated problems and solutions.

  • Cloud Networks
  • Edge Overlay
  • OpenFlow
  • Network Virtualization
  • Tunneling Protocols


Presentation Transcript


  1. Non-Tunneling Edge-Overlay Model using OpenFlow for Cloud Datacenter Networks
     NetCloud 2013
     Ryota Kawashima and Hiroshi Matsuo, Nagoya Institute of Technology, Japan

  2. Outline
     • Background
     • Edge-Overlay (Distributed Tunnels)
     • Proposed method
     • Evaluation
     • Conclusion

  3. Background: Network Virtualization
     • Multi-tenant datacenter networks: each tenant uses virtual networks
     • Each virtual network shares the physical network resources
     [Figure: VMs of three tenants on separate virtual networks (10.0.0.0/8, 172.16.2.0/16, 192.168.0.0/24) sharing one physical network (10.0.0.0/8)]

  4. Background: VLAN Limitations
     • Each virtual network has its own VLAN ID
     • A VLAN tag carrying the VLAN ID (1-4094) is inserted into Ethernet frames (see the tag-layout sketch below)
     [Figure: tagged frame layout: Ethernet header | VLAN tag | payload | FCS]
     • Problems with VLAN:
       - The maximum number of VLANs is 4094
       - Physical switches must learn VMs' MAC addresses
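     To make the 4094 limit concrete, here is a minimal sketch (mine, not from
     the talk) of building an 802.1Q tag in Python; the VID is a 12-bit field
     and values 0 and 4095 are reserved, leaving 4094 usable IDs:

         import struct

         def vlan_tag(vid: int, pcp: int = 0, dei: int = 0) -> bytes:
             """Build a 4-byte 802.1Q tag: TPID 0x8100, then PCP|DEI|VID."""
             assert 1 <= vid <= 4094, "VIDs 0 and 4095 are reserved"
             tci = (pcp << 13) | (dei << 12) | vid
             return struct.pack("!HH", 0x8100, tci)

         print(2 ** 12 - 2)  # 4094 usable VIDs: the scalability ceiling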

  5. Background: Edge-Overlay (L2-in-L3 Tunneling)
     • Virtual switches on the physical servers tunnel VM frames across the physical network
     [Figure: VM-to-VM traffic tunneled between virtual switches on two physical servers]
     • VLAN problems can be addressed:
       - Over 16 million virtual networks can be supported
       - VMs' MAC addresses are hidden from physical switches
     • Existing network devices can be used
     • Virtual switches provide many high-level functions

  6. Tunneling Protocols
     • VXLAN (24-bit ID):
       Ethernet (physical) | IP | UDP | VXLAN | Ethernet (virtual) | payload | FCS
     • NVGRE (24-bit ID):
       Ethernet (physical) | IP | NVGRE | Ethernet (virtual) | payload | FCS
     • STT (64-bit ID; TCP-like header enables NIC offloading / TSO):
       Ethernet (physical) | IP | TCP-like | Ethernet (virtual) | payload | FCS
     (A VXLAN encapsulation sketch follows below.)
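     As a concrete illustration of the VXLAN layout above, the following sketch
     packs the standard 8-byte VXLAN header (RFC 7348) around an inner frame;
     the helper name is mine, and the UDP/IP transport is left out:

         import struct

         def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
             """Prefix an inner Ethernet frame with a VXLAN header."""
             assert 0 <= vni < 2 ** 24, "the VNI is a 24-bit field"
             flags = 0x08 << 24                  # I flag: VNI is valid
             header = struct.pack("!II", flags, vni << 8)
             # The result is then carried in UDP (dst port 4789) over IP
             # on the physical network.
             return header + inner_frame

     The 24-bit VNI is what lifts the 4094-network VLAN ceiling to roughly
     16 million virtual networks.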

  7. Problems with Tunneling (1 / 2)
     • IP fragmentation at the physical server: encapsulation adds headers, so a full-sized VM frame pushes the outer packet past the physical MTU and it is fragmented (a worked example follows below)
     [Figure: a VM's packet is encapsulated at the physical server and the resulting oversized packet is split into fragments]
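     A worked example (my numbers, assuming VXLAN over a standard 1500-byte
     MTU; the talk does not give these figures) of why the outer packet gets
     fragmented:

         PHYSICAL_MTU = 1500                       # typical Ethernet MTU
         MAX_IP_PAYLOAD = PHYSICAL_MTU - 20        # minus the outer IPv4 header

         inner_frame = 1514                        # full-sized VM frame (1500 + 14)
         outer_ip_payload = 8 + 8 + inner_frame    # UDP + VXLAN + inner frame

         # 1530 > 1480, so the outer IP packet is split into two fragments
         # unless the physical MTU is raised (e.g., jumbo frames).
         print(outer_ip_payload > MAX_IP_PAYLOAD)  # True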

  8. Problems with Tunneling (2 / 2)
     • Compatibility with the existing environment:
       - IP multicasting must be supported (VXLAN)
       - Load balancing (ECMP) is not supported (NVGRE)
       - Firewalls, IDSes, and load balancers may discard the frames (STT)
       - TSO cannot be used (VXLAN, NVGRE)
     • Practical problem:
       - Supported protocols differ between products (vendor lock-in)

  9. Proposed Method
     • Yet another edge-overlay method:
       - Tunneling protocols are not used
       - No IP fragmentation at the physical server layer
     • OpenFlow-enabled virtual switches:
       - No VLAN limitations
       - Compatibility with the existing environment

  10. Method 1: MAC Address Translation
      • MAC addresses within the frame are replaced at the virtual switch (sketch below):
        - Source address: VM1's address => SV1's address
        - Destination address: VM2's address => SV2's address
      [Figure: the frame leaves VM1 as "VM1 => VM2", crosses the physical network as "SV1 => SV2", and is delivered as "SV1 => VM2"]
      • VMs' MAC addresses are hidden from the physical switches
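      A minimal sketch (illustrative only, not the authors' implementation) of
      what the translation does to a raw Ethernet frame; in practice the
      virtual switch expresses this as OpenFlow set-field actions rather than
      byte manipulation:

          def translate_macs(frame: bytes, new_dst: bytes, new_src: bytes) -> bytes:
              """Rewrite the dst/src MACs (first 12 bytes) of an Ethernet frame."""
              assert len(new_dst) == len(new_src) == 6
              return new_dst + new_src + frame[12:]

          # Egress at SV1: a "VM1 => VM2" frame becomes "SV1 => SV2" on the
          # wire, so physical switches only ever learn the servers' addresses.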

  11. Method 2: Host-based VLAN
      • Traditional approach: the VID is globally unique
      • Proposal: the VID is unique only within a server (allocator sketch below)
      [Figure: under the proposal, different servers reuse the same VIDs (e.g., VID=10, VID=20) for different tenants' virtual networks]
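      A sketch (hypothetical helper, assuming the host-based idea above) of a
      per-server VID allocator; because IDs only need to be unique on one host,
      every server has the full 1-4094 range to itself:

          class HostVidAllocator:
              """Allocate VLAN IDs that are unique only within one server."""
              def __init__(self):
                  self.by_network = {}            # virtual network -> local VID
                  self.free = list(range(1, 4095))

              def vid_for(self, network_id: str) -> int:
                  if network_id not in self.by_network:
                      self.by_network[network_id] = self.free.pop(0)
                  return self.by_network[network_id]

          # Two servers can hand out the same VID to different tenants:
          sv1, sv2 = HostVidAllocator(), HostVidAllocator()
          print(sv1.vid_for("tenant1-net"), sv2.vid_for("tenant2-net"))  # 1 1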

  12. Feature Comparison

                                    Proposal   VXLAN       NVGRE       STT             VLAN
      Physical network              L2         L2 / L3     L2 / L3     L2 / L3         L2
      MAC address hiding            Yes        Yes         Yes         Yes             No
      No. of virtual networks       Unlimited  16 million  16 million  18 quintillion  4094
      IP multicasting required      No         Yes         No          No              No
      Load balancing (ECMP)         Yes        Yes         No          Yes             Yes
      FW / IDS / LB transparency    Yes        Yes         Yes         No              Yes
      IP fragmentation (physical)   No         Occurs      Occurs      Occurs          No
      TSO support                   Yes        No          No          Yes             Yes

  13. Performance Evaluation
      • VM-to-VM communication: VM1 (sender) runs an iperf client, VM2 (receiver) runs an iperf server
      • Two physical servers connected by a GbE switching hub
      • A virtual switch on each server, carrying traffic over a GRE / VXLAN tunnel or via the proposed method under an OpenFlow controller
      [Figure: evaluation topology with the two physical servers, virtual switches, OpenFlow controller, and GbE switching hub]

  14. Evaluation Result (UDP)
      [Chart: UDP throughput, annotated with the fragmentation points: fragmentation by VXLAN encapsulation, fragmentation by GRE encapsulation, and fragmentation at the VM; the annotations note frame counts of 3 and 5]
      • The performance of the proposed method was equal to the "Optimal" case
      • IP fragmentation affected the number of frames and the performance

  15. Conclusion
      • Yet another edge-overlay method:
        - No tunneling protocols
        - No IP fragmentation at the physical server layer
        - Higher throughput than tunneling protocols (L2 network)
      • Future work:
        - Further evaluation is necessary (10 / 40 GbE environments)
        - MPLS support
