mdtmFTP Evaluation @ ESNET Testbed: Evaluation and Methodology (MDTM Research Team)


This presentation evaluates mdtmFTP in WAN and LXC-based VM environments and compares it with other data transfer tools. It covers the components and infrastructure of the ESNET Testbed, data transfer tests between nersc-tbn-1 and nersc-tbn-2, and the testbed's support for Proxmox-based Linux container technology, which runs Linux container VMs on nersc-tbn-1 and nersc-tbn-2. Performance is reported as time-to-completion for mdtmFTP, FDT, BBCP, and GridFTP.

  • ESNET Testbed
  • Data Transfer
  • Evaluation
  • Methodology
  • WAN Environment




Presentation Transcript


  1. mdtmFTP Evaluation @ ESNET Testbed
     MDTM Research Team

  2. Goals
     • Test and evaluate mdtmFTP in a WAN environment.
     • Test and evaluate mdtmFTP in an LXC-based VM environment.
     • Compare mdtmFTP with other data transfer tools.

  3. ESNET Testbed - 1
     100G component of the ESnet SDN Testbed, spanning NERSC and StarLight over a dedicated 100G network (NERSC site router to the star-cr5 core router and the StarLight 100G switch, with an Alcatel-Lucent SR7750 100G router at each end). A Corsa SDN switch connecting to the SDN Testbed is coming Summer 2015; exoGENI racks attach at both sites.
     • nersc-tbn-1 (NERSC): 2x Intel Haswell Xeon E5-2643, 6 cores each; SuperMicro X10DRi motherboard (PCIe Gen3); NICs: 2x40G Mellanox, 1x40G Chelsio, 2x10G Myricom; an array of 24 HDDs supporting high-performance I/O (write).
     • nersc-tbn-2 (NERSC): 2x Intel Haswell Xeon E5-2643, 6 cores each; SuperMicro X10DRi motherboard (PCIe Gen3); NICs: 4x40G Mellanox, 1x40G Chelsio, 2x10G Myricom; an array of 12 SSDs supporting high-performance I/O (read).
     • star-tbn-1 (StarLight): NICs: 4x10G Myricom, 1x10G Mellanox. star-tbn-2: NICs: 4x10G Myricom. star-tbn-3: NICs: 4x10G Myricom, 1x10G Mellanox. These StarLight hosts have no data disks.
     • VLANs: 4012 (all hosts); 4020 (loop from NERSC to Chicago and back, all NERSC hosts).
     • Data transfers run between the DTNs nersc-tbn-2 and nersc-tbn-1 over a 95 ms RTT loop.
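
The 95 ms RTT loop is what makes this a demanding WAN test: a quick bandwidth-delay-product calculation (a standard formula, not from the slides) shows how much data TCP must keep in flight to fill one 40G NIC:

    # Bandwidth-delay product (BDP) for the 95 ms RTT loop: the volume of
    # data TCP must keep in flight to saturate one 40G NIC end to end.
    link_gbps = 40       # per-NIC line rate (40G Mellanox)
    rtt_s = 0.095        # 95 ms RTT between nersc-tbn-1 and nersc-tbn-2

    bdp_bytes = link_gbps * 1e9 / 8 * rtt_s
    print(f"BDP: {bdp_bytes / 2**20:.0f} MiB")  # ~453 MiB of in-flight data

This is why the tool configurations on slide 7 all rely on tuned system-level TCP parameters: default socket buffers are far too small for a path like this.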

  4. ESNET Testbed - 2
     • The ESNET Testbed supports Proxmox-based Linux container (LXC) technology: http://proxmox.com/
     • nersc-tbn-1 and nersc-tbn-2 each run a Linux container VM.
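
For reference, a minimal sketch of how such a container might be provisioned on a Proxmox node with the pct CLI; the VM ID, OS template, resources, and network bridge are illustrative assumptions, since the slides do not give the actual configuration:

    import subprocess

    # Hypothetical example: create and start an LXC container on a Proxmox
    # node. The VM ID (101), template name, cores/memory, and bridge are
    # placeholders, not the real testbed settings.
    subprocess.run([
        "pct", "create", "101",
        "local:vztmpl/centos-7-default_amd64.tar.xz",  # assumed OS template
        "--hostname", "dtn-lxc-1",
        "--cores", "6",
        "--memory", "65536",
        "--net0", "name=eth0,bridge=vmbr0,ip=dhcp",
    ], check=True)
    subprocess.run(["pct", "start", "101"], check=True)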

  5. Evaluation Methodology - 1
     • Transfer data from nersc-tbn-2 to nersc-tbn-1.
     • Performance metric: time-to-completion (see the measurement sketch below).
     • Data transfer tools:
       • mdtmFTP (developed by FNAL): http://mdtm.fnal.gov
       • FDT (developed by Caltech): http://monalisa.cern.ch/FDT/
       • BBCP (developed by SLAC): https://www.slac.stanford.edu/~abh/bbcp/
       • GridFTP (developed by the University of Chicago): http://toolkit.globus.org/toolkit/docs/latest-stable/gridftp/
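
Time-to-completion is simply wall-clock time around the client invocation; a minimal measurement-harness sketch (the command line is a placeholder, not the exact invocation used in these tests):

    import subprocess
    import time

    def time_to_completion(cmd):
        """Run one transfer command and return its wall-clock duration in seconds."""
        start = time.monotonic()
        subprocess.run(cmd, check=True)   # raises if the transfer fails
        return time.monotonic() - start

    # Placeholder command line: substitute the actual mdtmFTP, FDT, BBCP,
    # or GridFTP client invocation for the scenario under test.
    elapsed = time_to_completion(["transfer-client", "SRC_URL", "DST_URL"])
    print(f"time-to-completion: {elapsed:.2f} s")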

  6. Evaluation Methodology - 2
     • Transfer modes: client-server data transfer; 3rd-party data transfer.
     • Data transfer scenarios (a staging sketch follows this list):
       • Large file transfer: a single 100 GB file from nersc-tbn-2 to nersc-tbn-1.
       • Folder transfer 1: a folder of 30 x 10 GB files from nersc-tbn-2 to nersc-tbn-1.
       • Folder transfer 2: a Linux-3.18.21 source folder (many small files) from nersc-tbn-2 to nersc-tbn-1.
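
The three data sets can be staged on the source host ahead of time; a sketch of one way to generate them (file sizes per the slide, paths are assumptions):

    import os

    def make_file(path, size_bytes, chunk=1 << 20):
        """Write size_bytes of random data to path in 1 MiB chunks."""
        with open(path, "wb") as f:
            remaining = size_bytes
            while remaining > 0:
                n = min(chunk, remaining)
                f.write(os.urandom(n))
                remaining -= n

    GB = 10**9
    # Scenario 1: one 100 GB file.
    make_file("/data/src/large.dat", 100 * GB)
    # Scenario 2: a folder of 30 x 10 GB files.
    os.makedirs("/data/src/folder30x10", exist_ok=True)
    for i in range(30):
        make_file(f"/data/src/folder30x10/file{i:02d}.dat", 10 * GB)
    # Scenario 3 uses an unpacked Linux kernel source tree (many small files).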

  7. Evaluation Methodology - 3
     Data transfer tool configuration (a concrete GridFTP invocation follows):

     Tool      # of parallel streams   Pipelining   Concurrency     TCP parameters
     FDT       4                       N/A          N/A             system configuration
     GridFTP   4                       -pp          -cc 8           system configuration
     BBCP      4                       N/A          N/A             system configuration
     mdtmFTP   4                       N/A          2 I/O threads   system configuration

     Note: with more than 4 parallel streams, data transfer performance changes only negligibly.
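
As a concrete illustration of the flags in the GridFTP row (endpoints and paths are placeholders; the other tools expose equivalent knobs through their own options):

    import subprocess

    # GridFTP client invocation matching the table row: 4 parallel streams
    # (-p 4), pipelining (-pp), concurrency of 8 (-cc 8), recursive for a
    # folder (-r). Endpoints and paths are placeholders.
    subprocess.run([
        "globus-url-copy", "-p", "4", "-pp", "-cc", "8", "-r",
        "gsiftp://nersc-tbn-2/data/src/folder30x10/",
        "gsiftp://nersc-tbn-1/data/dst/folder30x10/",
    ], check=True)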

  8. Results: client/server data transfer
     Time-to-completion in seconds; smaller is better.

     Scenario                  mdtmFTP   FDT             GridFTP   BBCP
     Large file (1 x 100G)     74.18     79.89           91.18     poor performance
     Folder (30 x 10G)         192.19    217             320.17    poor performance
     Folder (Linux 3.12.21)    10.51     does not work   1006.02   poor performance

     Note: BBCP's performance was very poor across all scenarios, so its results are not listed.

  9. Results: client/server data transfer, relative performance improvement (base: GridFTP)
     Relative performance improvement (base: GridFTP) = GridFTP's time-to-completion / the other tool's time-to-completion. Larger is better; GridFTP itself is 1.0 by definition.
     [Three bar charts, one per scenario: large file data transfer (1 x 100G), folder data transfer (30 x 10G), and folder data transfer (Linux 3.12.21), each plotting the relative improvement of mdtmFTP, FDT (first two scenarios only), and GridFTP.]
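
Applying this formula to the time-to-completion numbers reported on slide 8 gives the values the charts plot:

    # Relative performance improvement over GridFTP, computed from the
    # client/server time-to-completion results on the previous slide.
    gridftp = {"1 x 100G": 91.18, "30 x 10G": 320.17, "Linux tree": 1006.02}
    others = {
        "mdtmFTP": {"1 x 100G": 74.18, "30 x 10G": 192.19, "Linux tree": 10.51},
        "FDT":     {"1 x 100G": 79.89, "30 x 10G": 217.0},  # FDT fails on the tree
    }
    for tool, results in others.items():
        for scenario, ttc in results.items():
            print(f"{tool:8s} {scenario:11s} {gridftp[scenario] / ttc:6.2f}x")
    # mdtmFTP: ~1.23x (1x100G), ~1.67x (30x10G), ~95.7x (Linux tree)
    # FDT:     ~1.14x (1x100G), ~1.48x (30x10G)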

  10. Results: 3rd-party data transfer
      Time-to-completion in seconds; smaller is better.

      Scenario                  mdtmFTP   GridFTP         FDT   BBCP
      Large file (1 x 100G)     34.976    106.84          N/A   N/A
      Folder (30 x 10G)         95.61     does not work   N/A   N/A
      Folder (Linux 3.12.21)    9.68      does not work   N/A   N/A

      Note: BBCP and FDT support 3rd-party data transfer, but neither could run 3rd-party transfers on the ESNET testbed due to a testbed limitation.
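
In 3rd-party mode the client sits outside the data path and the two servers exchange data directly; with GridFTP this is expressed by giving server URLs on both sides (paths are placeholders):

    import subprocess

    # Third-party (server-to-server) GridFTP transfer: the client only
    # orchestrates the control channels; data flows directly between the
    # two DTNs. Paths are placeholders.
    subprocess.run([
        "globus-url-copy", "-p", "4",
        "gsiftp://nersc-tbn-2/data/src/large.dat",
        "gsiftp://nersc-tbn-1/data/dst/large.dat",
    ], check=True)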

  11. Results: 3rd-party data transfer, relative performance improvement (base: GridFTP)
      Relative performance improvement (base: GridFTP) = GridFTP's time-to-completion / the other tool's time-to-completion. Larger is better.
      [Bar chart for the large file data transfer (1 x 100G): mdtmFTP's relative improvement versus GridFTP's baseline of 1.0.]

  12. mdtmFTP status
      • Much more stable now, but still has bugs.
      • More time is needed to improve the code and package the software; only 1.0 FTE is available.
      • mdtmFTP has so far been evaluated and tested only under controlled environments.
      • Working with ORNL to use mdtmFTP for KSTAR science data transfer.
      • Working with FNAL CMS to test and deploy mdtmFTP.
      • Working with ESNET to test and evaluate mdtmFTP on Linux container-based VMs.

  13. mdtmFTP roadmap
      • FNAL officially started mdtmFTP development in January 2015, with 1.0 FTE.
      • We reuse some basic Globus modules for rapid prototyping: the GridFTP protocol module, the Globus XIO module, and the Globus security module.
      • We have opened a new software engineer position for mdtmFTP development.
      • We plan to replace mdtmFTP's Globus modules.

  14. Summary
      • Tested and evaluated mdtmFTP in a WAN environment.
      • Tested and evaluated mdtmFTP in an LXC-based VM environment.
      • Compared mdtmFTP with other data transfer tools.
      • Test results confirm that we are moving in the right direction.
      • Covered mdtmFTP status and roadmap.
