
HPC Resources for PRIMAGE Simulations at PLGrid Infrastructure
Discover the high-performance computing resources available for PRIMAGE simulations at the PLGrid Infrastructure in Poland, featuring the Prometheus cluster and support for MPI, Singularity, and more. Learn about the procedures and requirements for accessing these resources, including user registration, computing grant negotiation, job submission using SSH and Slurm, and collaboration guidelines. Contact Marek Kasztelnik for more information.

Presentation Transcript
T2.3 (M1-M48) HPC Resources for PRIMAGE Simulations
Marek Kasztelnik, Tomasz Gubała, Marian Bubak | Academic Computer Centre Cyfronet AGH, Kraków, Poland
12/12/2018 | http://dice.cyfronet.pl/
H2020 EU PROJECT | Topic SC1-DTH-07-2018 | GA: 826494
PLGrid Infrastructure
- More than 7 000 users; all five Polish academic HPC centres integrated
- Synergy between domain-specific researchers and IT experts
- Scientific software: 500+ applications, tools and libraries (http://apps.plgrid.pl)
- Computing resources: 5+ PFLOPS, 130 000+ cores
- Storage: 60+ PB (archives, backups, distributed access, fast scratch filesystem)
- Tools for collaboration: project tracking (JIRA), version control (Git), teleconferencing (Adobe Connect)
- Computational Cloud (PaaS based on OpenStack)
Primary HPC Resources for PRIMAGE
- Prometheus cluster (fair-shared with other users), available through PL-Grid (Polish Grid Infrastructure)
  - 2.4 PFLOPS computing power, 10 PB of disk, 282 TB of memory, InfiniBand FDR 56 Gb/s
  - 2160 regular nodes (24 Xeon 2.5 GHz cores, 128 GB RAM)
  - 72 GPGPU nodes (24 Xeon 2.5 GHz cores, 128 GB RAM, 2 Nvidia K40 XL cards)
  - 3 BigMem nodes (12 Xeon 3.4 GHz cores, 768 or 1536 GB RAM)
- Many software modules and licences available (e.g. Ansys, Matlab)
  - Module/package browser available at https://apps.plgrid.pl/
  - A dedicated licence server can be set up if PRIMAGE acquires additional, dedicated licences
  - PLGrid-provided licences are available on a fair-share basis
- Support for MPI, Singularity and RSM (Ansys); see the module-loading sketch after this list
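As a rough illustration of how the module system and container support mentioned above are typically used on a PLGrid cluster, the sketch below loads an MPI implementation and runs a containerized solver under Singularity. The module paths, container image and executable names are assumptions for illustration only; the actual module names should be checked in the browser at https://apps.plgrid.pl/.

    # Minimal sketch of using environment modules on the cluster
    # (module paths and the container image below are assumptions, not confirmed PLGrid names)
    module avail                          # list the software modules installed on the cluster
    module load plgrid/tools/openmpi      # load an MPI implementation (example module path)
    module load plgrid/tools/singularity  # load the Singularity container runtime

    # run a hypothetical containerized PRIMAGE solver on 24 MPI ranks
    mpiexec -n 24 singularity exec primage_solver.sif ./run_model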
HPC Resources: Procedure and Requirements
- PRIMAGE users who want to use HPC need to register in the PLGrid Infrastructure (https://portal.plgrid.pl/)
  - User manual: https://docs.cyfronet.pl/display/PLGDoc/User+Manual
  - Exact instructions on how to affiliate oneself within PLGrid will be provided by Cyfronet
- The PRIMAGE project needs to negotiate a computing grant (number of CPU hours, storage, etc.)
  - Cyfronet will do it, but first we need to learn:
    - your approximate CPU/disk consumption requirements in the first PRIMAGE year
    - any special requirements (GPGPUs, BigMem jobs, or jobs expected to run longer than 72 h)
  - Negotiated annually; can be renewed and/or extended if we run out of resources
  - All published PRIMAGE research supported by such a computational grant should be co-authored by Cyfronet
- Basic mode of use: SSH and Slurm (queuing system) for submitting jobs (see the job-script sketch below)
  - Manual: https://kdm.cyfronet.pl/portal/Prometheus:Podstawy (in Polish only, apologies)
  - Advanced modes, UIs and APIs: covered in the following presentation (T2.5)
- All on-cluster data are accessible only to members of the PRIMAGE project
  - A dedicated PLGrid group (plggprimage) was set up, and we control who belongs to it
  - Group storage separation is enforced at the OS/POSIX file access rights level
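To make the basic SSH + Slurm workflow concrete, here is a minimal batch-script sketch for submitting a job on Prometheus. The partition, grant/account identifier, module path and executable are illustrative assumptions; the authoritative details are in the Prometheus manual linked above.

    #!/bin/bash
    # Minimal Slurm job script (partition, account and executable names are assumptions)
    #SBATCH --job-name=primage-test
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=24          # one rank per core on a regular Prometheus node
    #SBATCH --time=01:00:00               # jobs longer than 72h need special arrangements
    #SBATCH --partition=plgrid            # example partition name
    #SBATCH --account=plgprimage          # hypothetical grant/account identifier

    module load plgrid/tools/openmpi      # example module path
    mpiexec ./my_simulation               # placeholder executable

After logging in over SSH, such a script would be submitted with "sbatch job.sh" and monitored with "squeue -u $USER", both standard Slurm commands.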
http://dice.cyfronet.pl/
Marek Kasztelnik | m.kasztelnik@cyfronet.pl