Paganoni, M. (2010) Distributed computing in the LHC era. Il Nuovo Cimento C, 33(6), pp. 33-37. ISSN 1826-9885
Text: ncc9762.pdf (Published Version, 240 kB)
Abstract
A large, worldwide distributed scientific community is intensively running physics analyses on the first data collected at the LHC. To prepare for this unprecedented computing challenge, the four LHC experiments have developed distributed computing models capable of serving, processing and archiving the large number of events produced by data taking, amounting to about 15 petabytes per year. The experiments' workflows for event reconstruction from raw data, production of simulated events and physics analysis on skimmed data generate hundreds of thousands of jobs per day, running on a complex distributed computing fabric. All this is possible thanks to reliable Grid services, which have been developed, deployed at the required scale and thoroughly tested by the WLCG Collaboration during the last ten years. To provide a concrete example, this paper concentrates on the CMS computing model and the CMS experience with the first LHC data.
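For a back-of-envelope sense of the scale quoted in the abstract, the short Python sketch below converts the stated 15 PB/year into a daily data rate and a naive per-job share. The figure of 200,000 jobs per day is an assumed midpoint of the "hundreds of thousands of jobs per day" quoted above, not a number taken from the paper.

```python
# Back-of-envelope arithmetic for the data volumes quoted in the abstract.
# DATA_PER_YEAR (15 PB) comes from the text; JOBS_PER_DAY is an assumption.

PETABYTE = 10**15            # bytes, SI definition
DATA_PER_YEAR = 15 * PETABYTE
DAYS_PER_YEAR = 365
JOBS_PER_DAY = 200_000       # hypothetical midpoint, not from the paper

data_per_day = DATA_PER_YEAR / DAYS_PER_YEAR   # bytes per day
data_per_job = data_per_day / JOBS_PER_DAY     # naive average per job

print(f"Raw data per day: {data_per_day / 10**12:.1f} TB")
print(f"Naive average per job: {data_per_job / 10**9:.2f} GB")
```

Note that this naive average ignores the fact that reconstruction, simulation and analysis workflows each pass over the data, so the aggregate volume actually moved across the Grid is a multiple of the raw-data figure.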
Item Type: | Article
---|---
Uncontrolled Keywords: | Computers in experimental physics; computer interfaces
Subjects: | 500 Natural sciences and mathematics > 530 Physics
Depositing User: | Marina Spanti
Date Deposited: | 02 Apr 2020 15:29
Last Modified: | 02 Apr 2020 15:29
URI: | http://eprints.bice.rm.cnr.it/id/eprint/17084