Computing Challenges at the Large Hadron Collider (LHC)
Presenter: Maria Girone - CERN
October 16, 2018
Abstract
CERN was established in 1954 with the mission of advancing science and exploring fundamental physics questions, primarily through elementary particle research. The Large Hadron Collider (LHC) at CERN is the world's most powerful particle accelerator, colliding bunches of protons 40 million times every second. This extremely high rate of collisions makes it possible to identify rare phenomena and to make new discoveries, such as the Higgs boson in 2012.

The high-energy physics (HEP) community has long been a driver in processing enormous scientific datasets and in managing the largest-scale high-throughput computing centres. Today, the Worldwide LHC Computing Grid is a collaboration of more than 170 computing centres in 42 countries, spread across five continents.

In this keynote talk, I will discuss the ICT challenges of the LHC project, with attention given to the demands of capturing, storing, and processing the large volumes of data generated by the LHC experiments. We are working to tackle many of these challenges together with ICT industry leaders through a collaboration known as ‘CERN openlab’.
These demands will become even more pressing when we launch the next-generation “High-Luminosity” LHC in 2026. At that point, the total computing capacity required by the experiments is expected to be 50 to 100 times greater than today, with storage needs on the order of exabytes. I will also discuss the approaches we are considering to handle these enormous volumes of data, including deploying resources through commercial clouds and exploring new techniques such as alternative computing architectures, advanced data analytics, and deep learning.