A virtual talk by Tim Bell
The Large Hadron Collider is the world’s largest machine - a physics experiment whose work is growing and could challenge the next generation of computers expected to run it.
TNMOC houses big systems that, like the LHC, were built to help put humans in charge of the big-data age. Decades later, the European Organization for Nuclear Research (CERN), which operates the LHC, is collaborating with three other huge physics experiments to once again get ahead - tackling the challenges it foresees in a new generation of high-performance computers that will support its work and that of other data-intensive projects.
The LHC’s physics experiments generate 30 PB of data annually. That yield is expected to hit 1 EB by 2030 as the result of a large-scale upgrade that starts next year.
How do you make the expected generation of exascale HPC systems - machines performing a billion billion calculations per second - work at the scale and reliability demanded by scientists unlocking the secrets of the universe?
Find out with CERN’s compute infrastructure manager Tim Bell, who joins TNMOC on August 18 to discuss the LHC’s mega physics experiments, offer a sneak peek at the infrastructure CERN’s scientists depend on, and explore the LHC’s high-performance future.
Tim will discuss:
The scale of the LHC - a 27 km facility running at the extremes of physics - and the expectations placed on the IT infrastructure that Tim’s team delivers for 15,000 scientists.
How the LHC experiments leverage resources at over 150 laboratories and universities in a worldwide computing grid.
LHC’s forthcoming High-Luminosity upgrade - why it matters, what it means and how it’ll seriously challenge LHC’s existing compute and storage infrastructure.
Ways CERN expects to collaborate with other big science projects and open source software communities to ensure that exascale computing and exabyte-scale data management deliver the accurate results these projects - and our understanding of the universe - depend on.
Spiral Jetty photo credit: Greg Rakozy @unsplash