
GRID Computing and Data Storage Solutions
Explore the topics of off-line and GRID computing along with data storage strategies discussed at the PADME General Meeting. Key points include data rates, storage systems, CPU allocation, and disaster recovery plans for the experiments. Learn about the setup for automatic data transfers, GRID computing power allocation, and the availability of tape libraries for data access optimization.
Off-line & GRID Computing
Emanuele Leonardi
PADME General Meeting - LNF, 17-18 January 2017
[Slide 2: PADME on-line computing model, data-flow diagram.] The BTF beam trigger distribution system delivers the trigger signal to the L0 DAQ (one process per DAQ board, with possible ADC zero suppression) reading out the Target (32 ch FADC), Spectrometer Veto (192 ch FADC), High Energy Positron Veto (32 ch FADC), ECAL (616 ch FADC), SAC (49 ch FADC), and TPix (65536 ch, readout TBD). The L1 DAQ builds events and applies the trigger filters: charged/visible (2 or more tracks), neutral/invisible (1 or more ECAL clusters), plus cosmics and SW/random triggers. RAW data go to a temporary disk buffer and then to the Central Data Recording facility at CNAF (and possibly LNF).
DAQ data rates

| Detector    | DAQ out (MB/s) | RAW data (GB/d) | RAW data (TB/365d) | RAW event (KB/event) |
|-------------|----------------|-----------------|--------------------|----------------------|
| Target      | 1.26           | 109             | 40                 | 24.6                 |
| Calorimeter | 2.70           | 234             | 85                 | 52.8                 |
| SAC         | 1.08           | 93              | 34                 | 21.1                 |
| E veto      | 0.60           | 51              | 19                 | 11.6                 |
| P veto      | 1.31           | 113             | 41                 | 25.6                 |
| HEP veto    | 1.15           | 100             | 36                 | 22.5                 |
| TimePix     | 1.50           | 130             | 47                 | 29.3                 |
| Total       | 9.60           | 830             | 303                | 187.5                |

Output estimates assume that zero suppression is active, include trigger ADC samples (25%) and a 1% autopass stream (10%), and do not include the ROOT data-compression factor (50% at the testbeam).
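The columns of the table are internally consistent: the daily and yearly volumes follow from the MB/s column, and the KB/event column implies the event rate. A quick arithmetic check using the numbers quoted above:

```python
# Sanity-check of the DAQ rate table: daily/yearly volumes follow from the
# MB/s column, and the total event size (187.5 KB/event) implies the rate.
rates_mb_s = {
    "Target": 1.26, "Calorimeter": 2.70, "SAC": 1.08, "E veto": 0.60,
    "P veto": 1.31, "HEP veto": 1.15, "TimePix": 1.50,
}
total_mb_s = sum(rates_mb_s.values())       # 9.60 MB/s
gb_per_day = total_mb_s * 86400 / 1000      # ~830 GB/day
tb_per_year = gb_per_day * 365 / 1000       # ~303 TB/365d
event_rate_hz = total_mb_s * 1000 / 187.5   # 187.5 KB/event -> ~51 events/s
print(f"{total_mb_s:.2f} MB/s, {gb_per_day:.0f} GB/d, "
      f"{tb_per_year:.0f} TB/365d, {event_rate_hz:.1f} events/s")
```

The implied ~51 events/s matches the per-detector rows as well (e.g. Target: 1.26 MB/s at 24.6 KB/event gives the same rate).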
STORAGE - CNAF

The CNAF storage system accepts PADME data:
- 100 TB of tape storage allocated for 2016
- 10 TB buffer area available for automatic data transfer
- Data transfer:
  - Automatic: GRID interface via the PADME VO
  - Manual: SCP to a dedicated user account (leonardi)
- A GRID host must be set up at LNF to enable automatic transfers
- All testbeam data (DAQ+RAW) have already been copied to CNAF (3.8 TB)
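The 10 TB buffer size can be put in context with the ~830 GB/day RAW rate from the table above: it gives roughly 12 days of headroom if the automatic transfers to tape stall. A quick check:

```python
# Headroom of the 10 TB CNAF transfer buffer if the automatic GRID
# transfer to tape stops, at the ~830 GB/day total RAW rate quoted above.
buffer_tb = 10.0
raw_gb_per_day = 830.0
days_of_headroom = buffer_tb * 1000 / raw_gb_per_day
print(f"Buffer covers ~{days_of_headroom:.1f} days of RAW data")
```

About 12 days, i.e. enough to ride out transfer or GRID-host outages of up to roughly a week with margin.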
STORAGE - LNF

The KLOE tape library is available at LNF:
- Same characteristics and costs as CNAF
- Willing to grant access to other experiments (e.g. us)
- A second copy of the data would be useful for:
  - Disaster recovery
  - Tape-degradation recovery
  - Data-access optimization
- Plan: clarify the details in 2017 and request financing for 2018
CPU

- 2000 HepSPEC of GRID computing power assigned for 2017, for MC production and analysis
- CPU power can be allocated at the LNF Tier1; details to be discussed with Elisabetta Vilucchi ASAP
- CPU in the Tier1 can also be accessed outside the GRID: a good candidate for data reconstruction
- A plan must be defined before 2018
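A rough capacity-planning sketch of the 2000 HepSPEC allocation against the DAQ event rate implied by the rate table. The per-event reconstruction cost below is a hypothetical placeholder, not a measured PADME number; the real cost would have to be benchmarked on actual reconstruction jobs:

```python
# Hypothetical sizing of the 2000 HepSPEC allocation for reconstruction.
# ASSUMPTION: cost_hepspec_s_per_event is a placeholder value, not a
# measured PADME number; it must be benchmarked on real jobs.
allocation_hepspec = 2000.0
cost_hepspec_s_per_event = 10.0   # placeholder assumption
reco_rate_hz = allocation_hepspec / cost_hepspec_s_per_event
daq_rate_hz = 51.2                # implied by the DAQ rate table above
print(f"Reconstruction throughput: {reco_rate_hz:.0f} events/s "
      f"({reco_rate_hz / daq_rate_hz:.1f}x the DAQ rate)")
```

Under this placeholder cost the allocation would keep up with data-taking with margin; the same two-line estimate can be redone once the real per-event cost is known.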
CVMFS

- CNAF set up a CVMFS (CERN VM File System) area for PADME: /cvmfs/padme.infn.it
- It is used to distribute the PADME software environment on the GRID
- It is reachable only at sites that accept the PADME VO and requires an explicit configuration at each site
- Currently visible only at CNAF and at ROMA1, for testing
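Since the repository is visible only at configured sites, a job or setup script should check for it before trying to use the software area. A minimal sketch (the repository path is the one quoted above; the helper name is ours, and a simple directory check is used rather than any official CVMFS client tooling):

```python
import os

# Minimal guard for GRID jobs: only set up the PADME software
# environment if the CVMFS repository is visible at this site.
def cvmfs_available(repo="/cvmfs/padme.infn.it"):
    """Return True if the CVMFS repository directory is visible."""
    return os.path.isdir(repo)

if cvmfs_available():
    print("PADME CVMFS repo visible: software can be set up from it")
else:
    print("PADME CVMFS repo not mounted at this site")
```

On a node without the site configuration the check simply fails closed, so the job can fall back to a local installation or abort cleanly.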