DAQ

The DAQ operated efficiently for the remainder of the 2012 pp run, during which the LHC reached a peak luminosity of 7.5 × 10^33 cm^-2 s^-1 (with 50 ns bunch spacing). At the start of a fill, typical conditions were: an L1 trigger rate close to 90 kHz, a raw event size of ~700 kB, and a stream-A recording rate of ~1.5 kHz with an event size of ~500 kB after compression. The stream-A output of the High Level Trigger (HLT) contains the physics triggers and is made up of the ‘core’ triggers and the ‘parked’ triggers, at roughly equal rates. Downtime due to the central DAQ was below 1%.
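
These figures translate directly into the sustained throughput the central DAQ has to provide; a back-of-the-envelope estimate, using only the numbers quoted above, is sketched below:

    # Rough throughput estimate from the quoted start-of-fill figures
    l1_rate           = 90e3    # Hz, ~90 kHz L1 trigger rate
    raw_event_size    = 700e3   # bytes, ~700 kB raw event
    storage_rate      = 1.5e3   # Hz, ~1.5 kHz stream-A recording
    stored_event_size = 500e3   # bytes, ~500 kB after compression

    event_building = l1_rate * raw_event_size           # ~63 GB/s through the event builder
    storage        = storage_rate * stored_event_size   # ~0.75 GB/s written to stream-A

    print("event building: %.0f GB/s, storage: %.2f GB/s"
          % (event_building / 1e9, storage / 1e9))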

During the year, various improvements and enhancements were implemented. One example is the introduction of the ‘action-matrix’ in run control. This matrix defines a small set of run modes, each linking a consistent combination of sub-detector read-out configurations and L1 and HLT settings to an LHC machine mode. The mechanism eases operation, as it automatically proposes the appropriate run mode to the DAQ operator according to the actual LHC conditions.
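
As an illustration of the concept (with purely hypothetical mode names and configuration keys, not the actual run-control implementation), the action-matrix amounts to a simple lookup from the LHC mode to a consistent bundle of settings:

    # Hypothetical sketch of an action-matrix: each LHC machine mode maps to a run mode,
    # i.e. a consistent set of sub-detector read-out, L1 and HLT configuration keys.
    ACTION_MATRIX = {
        "SETUP":        {"run_mode": "standby",     "readout": "reduced", "l1_key": "l1_cosmics", "hlt_key": "hlt_cosmics"},
        "RAMP":         {"run_mode": "circulating", "readout": "full",    "l1_key": "l1_standby", "hlt_key": "hlt_standby"},
        "STABLE BEAMS": {"run_mode": "physics",     "readout": "full",    "l1_key": "l1_physics", "hlt_key": "hlt_physics"},
    }

    def propose_run_mode(lhc_mode):
        """Propose a run mode to the DAQ operator for the current LHC mode."""
        return ACTION_MATRIX.get(lhc_mode, ACTION_MATRIX["SETUP"])

    print(propose_run_mode("STABLE BEAMS"))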

Online Cloud

The HLT farm comprises a sizeable amount of computing resources (in total ~13,000 CPU cores providing ~200 kHEPSpec06). Some of these can be used as ‘opportunistic’ resources for offline processing when the HLT farm is not needed for data-taking or online system development. An architecture has been defined in which the HLT computing nodes provide a cloud infrastructure, with dedicated head nodes providing proxies from the online network to the Tier-0 services. OpenStack has been chosen as the cloud platform. The OpenStack infrastructure has been installed on the cluster, virtual-machine images containing the offline production software have been produced by the Computing project, and testing of offline workflows has started.
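
The control logic behind the opportunistic use is essentially a gate on the DAQ state: offline virtual machines are started only when the farm is not needed and removed before data-taking resumes. The sketch below illustrates the idea using the nova command-line client; the image and flavour names and the state query are placeholders, not the actual CMS setup:

    import subprocess

    # Hypothetical gate for opportunistic use of HLT nodes: start offline-production VMs
    # only when the DAQ does not need the farm, and remove them before data-taking resumes.
    def daq_needs_farm():
        # Placeholder: in reality this would query the run-control / DAQ state.
        return False

    def start_offline_vm(name):
        subprocess.check_call([
            "nova", "boot", name,
            "--image", "cms-offline-production",   # VM image with the offline software (placeholder name)
            "--flavor", "hlt-whole-node",          # flavour matching one HLT node (placeholder name)
        ])

    def stop_offline_vm(name):
        subprocess.check_call(["nova", "delete", name])

    if daq_needs_farm():
        stop_offline_vm("offline-worker-001")
    else:
        start_offline_vm("offline-worker-001")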

DAQ upgrade for post-LS1 (DAQ2)

The DAQ2 system for post-LS1 will address: (i) the replacement of computing and networking equipment that has reached end of life; (ii) the capability to read out the majority of the legacy sub-detector back-end electronics (FEDs), as well as the new microTCA-based back-end electronics with the AMC13 module (currently a 5 Gbps link to the central DAQ); (iii) extensibility towards increased sub-detector read-out bandwidth and an enlarged HLT farm; and (iv) improvements based on operational experience.

Progress has been made on the definition of the architecture, the definition and implementation of the DAQ link to the AMC13 card, the evaluation of 10/40 Gbps Ethernet and Infiniband network technologies, measurements with event-builder demonstrator systems, and the definition of a file-based HLT and storage system.
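
In a file-based HLT, the event builder and the filter processes are decoupled through the file system: complete raw-event files are written to a local (ram)disk area and picked up independently by the filter processes. A minimal sketch of such a hand-off, with hypothetical paths and naming conventions rather than the actual DAQ2 design, is given below:

    import os, glob

    RAW_DIR  = "/ramdisk/raw"    # where the builder writes complete event files (hypothetical path)
    WORK_DIR = "/ramdisk/open"   # files are moved here while a filter process works on them

    def writer_side(event_id, data):
        """Builder side: write to a temporary name, then rename atomically when complete."""
        tmp = os.path.join(RAW_DIR, "run1_evt%08d.raw.tmp" % event_id)
        with open(tmp, "wb") as f:
            f.write(data)
        os.rename(tmp, tmp[:-4])          # atomic rename: readers never see partial files

    def reader_side(process_one_file):
        """Filter side: claim the oldest complete file by moving it, then process it."""
        for path in sorted(glob.glob(os.path.join(RAW_DIR, "*.raw"))):
            claimed = os.path.join(WORK_DIR, os.path.basename(path))
            try:
                os.rename(path, claimed)  # only one filter process wins the rename
            except OSError:
                continue                  # another process claimed it first
            process_one_file(claimed)
            os.remove(claimed)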

A new custom card, called FEROL (Front End ReadOut Link), has been developed to provide the DAQ link to the AMC13 card and to interface the custom electronics to commercial 10 Gbps Ethernet networking equipment. This PCI card will be housed in the current FRL modules, replacing the Myrinet NIC. A few pre-production cards have been produced (see Image 2), firmware has been developed, and a test setup has been established to assess functionality and performance. A stripped-down version of TCP/IP has been implemented in the FPGA on the FEROL, providing reliable data transmission at a throughput close to the wire speed (10 Gbps) into a PC equipped with a commercial NIC and running the standard TCP/IP stack.
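
On the receiving end, nothing beyond the standard socket API is needed: the PC simply accepts the TCP connection on its commodity NIC and drains the stream. An illustrative receiver is sketched below; the port number and read size are arbitrary choices, not part of the FEROL specification:

    import socket, time

    PORT = 10000                          # arbitrary example port, not the actual FEROL destination

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", PORT))
    srv.listen(1)

    conn, addr = srv.accept()             # the sender (e.g. a FEROL) initiates the connection
    received, t0 = 0, time.time()
    while True:
        chunk = conn.recv(1 << 20)        # drain the stream in 1 MB reads
        if not chunk:
            break
        received += len(chunk)
    elapsed = time.time() - t0
    print("%.1f Gbps" % (8 * received / elapsed / 1e9))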

Image 2: Assembly of the pre-production FEROL card (on top) housed in the existing FRL compact-PCI card. The two lower connectors on the left side of the FRL are for the SLINK-64 LVDS cables to the legacy FEDs. The FEROL card can support four optical links; the two lower SFP cages support up to 5 Gbps and serve the AMC13 DAQ link, whereas the two upper SFP+ cages support 10 Gbps and can be used for 10 Gb Ethernet to commercial networking equipment and, potentially, for a future version of the AMC13 with a 10 Gbps DAQ link.


by F. Meijers