VIP access for Very Important Particles

From 12 to 14 March, 100 data acquisition specialists from ALICE, ATLAS, CMS and LHCb took part in a workshop* to share their experiences and exchange ideas. This gave the Bulletin the opportunity to take another look at some of the principles of data acquisition.

 

Participants in the data acquisition workshop held at Château de Bossey in March. Image: Andrei Kazarov.

The more debris, the better: at least when it comes to particle collisions. The LHC does marvellously well in this respect, but the four main experiments have another aim: detection. Every second, millions of protons interact with millions of others, creating a chaotic torrent of secondary particles. The picture is extremely complex, much to the delight of the physicists, who must then decide whether or not to record each event…

Event or non-event?
That is the question. Gathering data is good, but gathering the best data is better. Fortunately, thanks to the “trigger system”, it is possible to isolate the data we’re looking for. All new data are first held briefly – for just a few microseconds – which is just long enough for the trigger system to evaluate the quality of the event. If the event is judged to be of poor quality, the data are deleted; otherwise, it moves on to a second, finer sorting phase. If it then makes it through the mesh of a final net, it is permanently recorded.
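To make the sorting idea concrete, here is a minimal, purely illustrative sketch (in Python) of a chain of increasingly selective trigger levels. The event fields, thresholds and pass conditions are invented for the example and do not correspond to any experiment's actual selection criteria; the point is simply that an event is dropped as soon as any level rejects it, so only a small fraction ever reaches permanent storage.

    # Illustrative only: a toy multi-stage "trigger" that keeps an event
    # only if it survives every successive, increasingly selective filter.
    # All fields and thresholds below are invented for this sketch.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Event:
        energy_gev: float   # total deposited energy (toy value)
        n_tracks: int       # number of reconstructed tracks (toy value)
        has_muon: bool      # whether a muon candidate was flagged (toy value)

    # Each trigger level is just a predicate: True means "keep the event".
    def level1(e: Event) -> bool:   # fast, coarse first selection
        return e.energy_gev > 20.0

    def level2(e: Event) -> bool:   # finer software-style selection
        return e.n_tracks >= 3

    def level3(e: Event) -> bool:   # final filter before permanent storage
        return e.has_muon or e.energy_gev > 100.0

    TRIGGER_CHAIN: List[Callable[[Event], bool]] = [level1, level2, level3]

    def process(event: Event) -> None:
        # The event is discarded as soon as any level rejects it.
        if all(level(event) for level in TRIGGER_CHAIN):
            print(f"recorded: {event}")

    if __name__ == "__main__":
        process(Event(energy_gev=150.0, n_tracks=5, has_muon=False))  # kept
        process(Event(energy_gev=15.0, n_tracks=8, has_muon=True))    # rejected at level 1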

Reconstructing the puzzle
Once the data are sorted, the reconstruction phase can start. But how can we determine which elements come from the same event? To identify the data, researchers use the LHC’s clock, whose 25-nanosecond resolution allows them to give a time “barcode” to each element in the data. All that remains is to group together the data carrying the same barcode. The computers then take care of reconstructing the corresponding event. Finally, this information is sent to the CERN Computer Centre, where it is stored on tape.
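As a simple illustration of that grouping step, here is a minimal Python sketch assuming each piece of data arrives as a (barcode, detector, payload) fragment; the fragment layout and values are invented for the example.

    # Illustrative only: grouping detector data fragments by a shared time
    # "barcode" (here, an integer identifier) so that fragments from the
    # same collision can be reassembled into one event.

    from collections import defaultdict
    from typing import Dict, List, Tuple

    # Each fragment: (barcode, detector_name, payload) -- invented structure
    Fragment = Tuple[int, str, bytes]

    def build_events(fragments: List[Fragment]) -> Dict[int, List[Fragment]]:
        events: Dict[int, List[Fragment]] = defaultdict(list)
        for fragment in fragments:
            barcode = fragment[0]
            events[barcode].append(fragment)  # same barcode -> same event
        return events

    if __name__ == "__main__":
        data = [
            (1042, "tracker", b"..."),
            (1042, "calorimeter", b"..."),
            (1043, "tracker", b"..."),
        ]
        for barcode, parts in build_events(data).items():
            print(barcode, [name for _, name, _ in parts])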

Ever more powerful detectors
During LS1, some improvements will be made to the four experiments’ data acquisition systems, but the major changes will mostly take place in 2018 (LS2) and 2022 (LS3). During LS2, ALICE will switch to continuous readout of its most important detectors, increasing its recording rate by a factor of 100 or, to put it another way, allowing the experiment to analyse 100 times as many events as it can at present. In parallel, the rate at which it can write data to storage will be increased by a factor of 20, to reach 80 GB/s. The LHCb experiment will increase its readout rate by a factor of 40 and will thus eventually rely entirely on software trigger algorithms. During LS2 and LS3, ATLAS and CMS will need to prepare for an increase in instantaneous luminosity by a factor of 10 to 20 and a corresponding increase in the number of simultaneous interactions.
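As a rough sanity check on those figures (an illustration of the arithmetic only, not an official number), the quoted factor of 20 together with the 80 GB/s target implies a present-day rate to storage of around 4 GB/s:

    # Back-of-envelope check of the factors quoted above (illustrative only).
    target_rate_gb_s = 80.0   # ALICE's planned rate to storage after LS2
    increase_factor = 20      # quoted factor of improvement
    implied_current_rate = target_rate_gb_s / increase_factor
    print(f"Implied current rate to storage: {implied_current_rate:.0f} GB/s")  # ~4 GB/s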

To keep up with these increases, ATLAS and CMS plan, among other things, to upgrade their respective tracking detectors so that track information can be incorporated into their upgraded trigger systems. This would enhance their ability to select the best data. They are also proposing to increase the readout rate by a factor of 5 and to increase the rate at which collisions of interest are recorded to mass storage. These changes will in turn require significant upgrades to the DAQ systems to handle the movement of data to and from the second level of selection and subsequently to mass storage.


*The workshop was organised at the Château de Bossey Ecumenical Institute on the initiative of those responsible for data acquisition in the four experiments (David Francis, Beat Jost, Frans Meijers and Pierre Vande Vyvre).

by Anaïs Schaeffer