Implementing EPA Configuration Status for Efficient Virtual Computing

Explore the current status of EPA configuration with CPU pinning, hugepages, thread policy, and NUMA nodes. Understand the implemented settings, what users might expect, and considerations for independent EPA settings versus opinionated flavors. Find a simple proposal for honoring the information model in OS CPU pinning.

  • EPA Configuration
  • Virtual Computing
  • CPU Pinning
  • Thread Policy
  • NUMA Topology


Presentation Transcript


  1. EPA Configuration: Status. Francisco Rodríguez (Indra)

  2. Current status. According to the information model (and the onboarding guide, to some extent):
     • CPU pinning in virtual-compute-desc.virtual-cpu.pinning.policy: DEDICATED | SHARED | ANY
     • Thread policy in virtual-compute-desc.virtual-cpu.pinning.thread-policy: AVOID | SEPARATE | ISOLATE | PREFER
     • Hugepages in virtual-compute-desc.virtual-memory.mempage-size: LARGE | SMALL | SIZE_2M | SIZE_1GB | PREFER_LARGE
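Taken together, these leaves live under virtual-compute-desc in the VNFD. A minimal descriptor fragment exercising all three settings might look like the following (the id, vCPU count, memory size, and the particular enum choices are illustrative, not mandated by the slides):

```yaml
virtual-compute-desc:
  - id: epa-compute              # illustrative id
    virtual-cpu:
      num-virtual-cpu: 2
      pinning:
        policy: DEDICATED        # DEDICATED | SHARED | ANY
        thread-policy: ISOLATE   # AVOID | SEPARATE | ISOLATE | PREFER
    virtual-memory:
      size: 4.0
      mempage-size: LARGE        # LARGE | SMALL | SIZE_2M | SIZE_1GB | PREFER_LARGE
```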

  3. Current status. NUMA topology: node-cnt defines the number of NUMA nodes to expose to the VM, while mem-policy defines whether the memory must be allocated strictly from the local NUMA node (STRICT) or not necessarily from that node (PREFERRED). The rest of the settings request a specific mapping between the NUMA nodes and the VM resources, and can be explored in detail in the OSM Information Model documentation.

     virtual-compute-desc:
       virtual-memory:
         numa-node-policy:
           node-cnt: 2
           node:
             - id: 0
               memory-mb: 2048
               num-cores: 1   # or paired-threads or threads
               vcpu:
                 - 0
                 - 1

  4. Current status. What is really implemented: if there is a numa section in the descriptor:
     • CPU policy set to dedicated (pinning)
     • Hugepages size was set to Large (not anymore)
     • Memory policy set to strict
     • Thread policy set to:
       – require if paired-threads is defined (the number is ignored)
       – isolate if cores is defined (the number is ignored)
       – prefer if threads is defined (the number is ignored)
     • Only one NUMA node is allowed
     • For VIO, affinity to NUMA node 0 is always set
     Settings for pinning, thread policy and the contents of numa-node-policy are ignored. The hugepages setting is honored after https://osm.etsi.org/gerrit/#/c/osm/RO/+/12057/
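The behavior described above can be sketched as a small pure-Python function. This is a simplification for illustration only: the function name and dict shapes are hypothetical, the extra-spec key names are the ones the slides mention, and the real logic lives in vimconn_openstack.py.

```python
def numa_to_flavor_extra_specs(numa: dict) -> dict:
    """Sketch of what OSM RO derives today when a numa section is present.

    `numa` is a simplified numa-node-policy node, e.g. {"paired-threads": 2}.
    Illustrative only; see vimconn_openstack.py for the actual code.
    """
    extra_specs = {
        "hw:cpu_policy": "dedicated",  # pinning is always enabled
        "hw:mem_policy": "strict",     # mem-policy is forced to strict
    }
    # Thread policy depends only on which key is present; counts are ignored.
    if "paired-threads" in numa:
        extra_specs["hw:cpu_thread_policy"] = "require"
    elif "cores" in numa:
        extra_specs["hw:cpu_thread_policy"] = "isolate"
    elif "threads" in numa:
        extra_specs["hw:cpu_thread_policy"] = "prefer"
    return extra_specs
```

Note how the descriptor's pinning and thread-policy leaves never appear as inputs: today's behavior is driven entirely by the presence and shape of the numa section.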

  5. Current status. It is easy to see what is implemented: https://osm.etsi.org/gitlab/osm/ro/-/blob/master/RO-VIM-openstack/osm_rovim_openstack/vimconn_openstack.py#L1245

  6. A fundamental question: independent EPA settings, or opinionated flavors?
     • Implied in the information model and onboarding guide
     • What is implemented today
     • What users probably expect
     • Risky?

  7. Simple proposal for honoring the IM in OpenStack:
     • CPU pinning: map to hw:cpu_policy
     • Thread policy: map to hw:cpu_thread_policy (the mapping is not exactly 1-to-1)
     • NUMA topology: map to hw:numa_nodes, hw:numa_cpus.N, hw:numa_mem.N, hw:mem_policy
       https://docs.openstack.org/nova/pike/admin/flavors.html
     • Avoid the hardcoded values (numa.nodeAffinity to node 0 and latency_sensitivity_level to high when the VIM type is VIO); find a better way to do it.
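A minimal sketch of the proposed mapping follows. The function name and input shapes are hypothetical; the extra-spec keys are the ones listed above, and because the thread-policy mapping is not exactly 1-to-1, the mapping table here (including leaving AVOID unmapped) is just one possible choice under that caveat:

```python
def im_to_extra_specs(virtual_cpu, numa_policy=None):
    """Hypothetical mapping from IM leaves to Nova flavor extra specs."""
    specs = {}
    pinning = virtual_cpu.get("pinning", {})
    # IM pinning.policy -> hw:cpu_policy (DEDICATED -> dedicated, SHARED -> shared)
    policy = pinning.get("policy")
    if policy in ("DEDICATED", "SHARED"):
        specs["hw:cpu_policy"] = policy.lower()
    # IM thread-policy -> hw:cpu_thread_policy; one possible (lossy) mapping.
    thread_map = {"ISOLATE": "isolate", "PREFER": "prefer", "SEPARATE": "require"}
    tp = pinning.get("thread-policy")
    if tp in thread_map:
        specs["hw:cpu_thread_policy"] = thread_map[tp]
    # numa-node-policy -> hw:numa_nodes plus one hw:numa_cpus.N / hw:numa_mem.N per node
    if numa_policy:
        nodes = numa_policy.get("node", [])
        specs["hw:numa_nodes"] = str(numa_policy.get("node-cnt", len(nodes)))
        for node in nodes:
            i = node["id"]
            specs[f"hw:numa_cpus.{i}"] = ",".join(str(v) for v in node.get("vcpu", []))
            specs[f"hw:numa_mem.{i}"] = str(node["memory-mb"])
    return specs
```

Unlike today's behavior, this honors each IM leaf independently instead of inferring everything from the presence of a numa section.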

  8. Needs beyond the current Information Model. Requirements for vmware_extra_config:
     • Cores per socket set as the number of vCPUs (one-socket configuration): numa.vcpu.followcorespersocket: true
     • Force the use of hyper-threading: numa.vcpu.preferHT: true
     • Manual CPU affinity, manually assigning vCPUs to a node (no possibility of CPU pinning; an infrastructure restriction): sched.vcpu0.affinity: 8; sched.vcpu1.affinity: 9
     • Exclusive affinity (vCPUs should not be assigned to the same physical CPUs): sched.affinity.cpu.exclusive: true
     • Default VLAN per SR-IOV interface: pciPassthru10.defaultVlan: "4095"
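Collected as a VM extra-config fragment, the settings above would look roughly like this (key names and values are those listed on the slide; the vCPU indices and the VLAN value are the slide's examples, and quoting conventions may differ per vSphere version):

```
numa.vcpu.followcorespersocket = "true"
numa.vcpu.preferHT = "true"
sched.vcpu0.affinity = "8"
sched.vcpu1.affinity = "9"
sched.affinity.cpu.exclusive = "true"
pciPassthru10.defaultVlan = "4095"
```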

  9. Production example.

  10. Thank you!
