Performance of the ATLAS Trigger System

The ATLAS Trigger System reduces the event rate from the bunch-crossing rate of 40 MHz to an average recording rate of 200 Hz by selecting primarily high-pT events. The ATLAS Trigger is composed of three levels: the first level (L1) is implemented in custom-built electronics, while the two-stage High Level Trigger (HLT) is implemented in software executed on large computing farms. The L1 consists of calorimeter, muon and forward triggers that identify electron, photon, jet and muon candidates, as well as event features such as missing transverse energy. These inputs are used by the L1 Central Trigger to generate an L1 Accept (L1A) decision. The L1A and timing information are sent to all sub-detectors, and summary information is sent to the subsequent levels of the Trigger System. In this paper the performance of the ATLAS Trigger System in 2010 and 2011 is presented.


Introduction
The Large Hadron Collider (LHC) is designed to collide protons at a center-of-mass energy of 14 TeV with an instantaneous luminosity of 10^34 cm^-2 s^-1. At this design luminosity the ATLAS detector [1] will be exposed to an average of 25 pp interactions every 25 ns. While the total interaction cross section is of the order of 100 mb, the cross section for discovery physics is many orders of magnitude smaller. The Trigger System has to reduce the input bunch-crossing frequency of 40 MHz to an average of 200 Hz, the limit of the storage rate. The increasingly demanding LHC conditions create a challenging environment for the Trigger System: from the end of 2010 to the time of this conference, the instantaneous luminosity increased from 10^32 to 10^33 cm^-2 s^-1, the number of bunches increased from 348 to 1380, and the bunch spacing was reduced from 150 ns to 50 ns.
The following sections contain a brief overview of the ATLAS Trigger System, followed by a discussion of recent improvements of the L1 sub-systems and their performance. The final section is devoted to the HLT system.

Overview of the ATLAS Trigger System
A schematic overview of the ATLAS Trigger System is shown in figure 1. It consists of three levels. The Level-1 (L1) trigger is a hardware trigger implemented in custom-built electronics. It uses coarse-granularity information from the muon chambers, calorimeters and forward detectors. The L1A decision is made by the Central Trigger Processor (CTP) [2] with an overall latency of less than 2.5 µs, reducing the event rate to about 75 kHz. If an event is accepted, the detector data are passed from the front-end electronics to the Read-Out System (ROS) to be later accessed by the higher trigger levels.
The High Level Trigger (HLT) is a software-based trigger running on large computer clusters. It is subdivided into the Level-2 (L2) trigger and the Event Filter (EF), and is used to refine the L1 decision and select interesting events in order to reduce the event rate further. The L2 is seeded by L1 Regions of Interest (RoIs) and has access to the full detector granularity. Within each RoI, L2 executes fast reconstruction algorithms that use detector information not available at L1. It has a nominal average processing time of 40 ms and reduces the output rate to around 3 kHz. If the event is accepted, the data fragments are requested by the Event Builder from all ROSs and assembled into a full event data structure. These data are then passed to one of the EF nodes, where offline algorithms are run in a custom online framework. This trigger level further reduces the event rate to a maximum of 200 Hz within an average processing time of roughly 4 seconds.
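To make the rate-reduction chain concrete, the following minimal Python sketch (purely illustrative, not part of the ATLAS software) computes the rejection factor delivered by each trigger level from the rates quoted above.

# Illustrative sketch: the rate-reduction chain quoted in the text,
# expressed as rejection factors per trigger level.
levels = [
    ("bunch crossing", 40e6),   # Hz, LHC bunch-crossing rate
    ("L1",             75e3),   # Hz, maximum L1 accept rate
    ("L2",             3e3),    # Hz, approximate L2 output rate
    ("EF",             200.0),  # Hz, average recording rate
]

for (name_in, rate_in), (name_out, rate_out) in zip(levels, levels[1:]):
    print(f"{name_in:>14} -> {name_out:<3}: "
          f"{rate_in:>10.0f} Hz -> {rate_out:>8.0f} Hz "
          f"(rejection ~ {rate_in / rate_out:.0f})")

The overall rejection from bunch-crossing rate to storage is a factor of about 200 000, of which each level contributes roughly two to three orders of magnitude.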

The ATLAS Level-1 Trigger System
The L1 trigger decision is formed by the CTP based on information from the calorimeter trigger towers, dedicated muon trigger detectors, and additional trigger inputs from the forward detectors. The L1 calorimeter trigger [3] is a mixed system, receiving data from the electromagnetic and hadronic calorimeters. It consists of three main sub-systems: the PreProcessor, the Cluster Processor and the Jet/Energy-sum Processor. The L1 muon trigger receives low-granularity input from dedicated trigger detectors. Two detector types are used: Resistive Plate Chambers (RPC) in the barrel region (|η| < 1.05) and Thin Gap Chambers (TGC) in the end-caps (1.05 < |η| < 2.4). Further trigger inputs are provided by the forward detectors, among them the Beam Pickup Timing devices (BPTX) and LUCID (LUminosity measurement using a Cerenkov Integrating Detector). The BPTX is used to monitor the phase between the bunches and the LHC clock which drives the ATLAS electronics. Furthermore, the BPTX measures the bunch pattern, needed for restricting physics triggers to pp bunch crossings only. Up to 160 trigger inputs are used by the CTP to form up to 256 distinct L1 triggers. The CTP subsequently produces the L1A which, together with the LHC timing signals, is distributed to the detector front-end and readout systems via the Timing, Trigger and Control (TTC) system.

Performance of the Central Trigger Processor
The Central Trigger system has been fully operational since 2007 and ran smoothly in 2010 and 2011 with typical L1 output rates of around 50 kHz. Several improvements to the monitoring of the CTP have been implemented during this data-taking period, some of which are discussed below.
To protect front-end buffers from overflowing, the CTP introduces dead-time by vetoing a number of triggers after an L1A ("simple dead-time"), or by restricting the number of triggers in a given period ("complex dead-time"). The simple dead-time is currently set to 5 BC after an L1A, the complex dead-time to 8 triggers in 80 µs. The dead-time needs to be minimized to maximize the data-taking efficiency. L1A timing monitoring was implemented at the full L1 rate to check the dead-time mechanism and to correlate trigger patterns with detector front-end problems. Moreover, a new per-bunch dead-time monitoring has been developed. Figure 2 shows the contributions of simple and read-out dead-time to the total dead-time fraction for different bunches in a train. The first bunch in a train sees no simple dead-time, the second bunch sees the simple dead-time of the first bunch, and subsequent bunches in a train see the dead-time of the previous two bunches. This per-bunch dead-time fraction can be used to derive corrections for the luminosity measurement.
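The dead-time logic can be illustrated with a toy model. The Python sketch below is purely illustrative and not the CTP firmware: it applies a fixed veto of 5 BC after each accept (simple dead-time) and a sliding-window limit of 8 triggers per 80 µs (complex dead-time) to a hypothetical stream of trigger candidates.

# Toy model of the two dead-time mechanisms described above.
from collections import deque

BC_NS = 25.0                 # bunch-crossing period in ns

class DeadTimeModel:
    def __init__(self, simple_bc=5, complex_triggers=8, complex_window_us=80.0):
        self.simple_ns = simple_bc * BC_NS
        self.max_triggers = complex_triggers
        self.window_ns = complex_window_us * 1000.0
        self.accepts = deque()          # timestamps (ns) of recent accepts

    def try_accept(self, t_ns):
        """Return True if an L1A at time t_ns survives both dead-time vetoes."""
        # simple dead-time: veto triggers too close to the previous accept
        if self.accepts and (t_ns - self.accepts[-1]) <= self.simple_ns:
            return False
        # complex dead-time: drop accepts that fell out of the sliding window,
        # then veto if the window is already full
        while self.accepts and (t_ns - self.accepts[0]) > self.window_ns:
            self.accepts.popleft()
        if len(self.accepts) >= self.max_triggers:
            return False
        self.accepts.append(t_ns)
        return True

# example: a burst of candidate triggers arriving every 2 bunch crossings
model = DeadTimeModel()
decisions = [model.try_accept(i * 2 * BC_NS) for i in range(20)]
print(decisions)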
The CTP receives and controls the timing signals coming from the LHC and uses these signals to ensure that all the ATLAS sub-detectors are synchronized to the LHC. In ATLAS, the phase between the bunches and the LHC clock is monitored using the BPTX. The phase has to be kept very stable, as it directly affects the performance of the sub-detectors. Due to its transmission through optical fibers several kilometers long, the distribution of the LHC clock signal to the ATLAS detector is sensitive to environmental effects, and regular adjustments of the clock phase upstream of the CTP are needed. Figure 3 shows the time difference between the LHC bunch arrival time and the LHC RF clock during 2010-2011 stable data taking. As can be seen in the figure, a fine delay of 2 ns was applied to the clock on 25 June using the RF2TTC board [5] in order to shift the beam phase closer to zero. Since July 2010 the beam phase has been kept constant within ±500 ps. During the winter shutdown in 2011 a new module (CORDE) was installed to adjust the timing signals with an even higher precision (±10 ps).

Performance of the L1 Muon Trigger System
The muon trigger chambers are arranged in three planes, each plane consisting of 2 or 4 layers. An L1 muon candidate is constructed from hits in one of the planes combined with a search for additional hits within a road in the adjacent trigger planes. The width of these roads defines the muon pT thresholds at L1. There are six programmable thresholds available at L1, three for the low-pT range (0, 6 and 10 GeV) and three for the high-pT range (15, 20 GeV). The muon triggers with "high-pT" thresholds require that a coincidence is found in three RPC chambers or in a set of TGC chambers. The barrel muon triggers with "low-pT" thresholds require a coincidence in only two RPC chambers. The endcap muon triggers require coincidences in a possibly smaller number of TGC chambers, depending on the geometry and magnetic field configuration of the specific region. The efficiencies of the L1 muon triggers as a function of the muon pT reconstructed offline are shown in figure 4 for the barrel (left plot) and endcap (right plot) for six of the L1 muon trigger items. A high efficiency is observed for both systems within the detector acceptance. In 2011, studies demonstrated that a minimum pT threshold of 4 GeV is needed to control the rate of the lowest di-muon triggers. This trigger requires a coincidence in 3 layers in the barrel, whereas in the endcap it uses roads based on a 4 GeV threshold. An important reduction of the rate is observed (∼ 60%), while keeping the efficiency as high as before. Due to overlapping trigger chambers and muons curling in the magnetic field, a single muon may be counted as two trigger candidates. The Muon to CTP Interface (MUCTPI) takes data directly from the muon processors and resolves overlaps, thereby reducing the di-muon trigger rate and improving the di-muon purity, before passing the muon multiplicities of each event on to the CTP. A new MUCTPI firmware was deployed for improved overlap handling with pT matching, alleviating a particularly high fake rate of the low-threshold di-muon triggers.
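The overlap-removal idea can be pictured as follows. The Python fragment below is a purely illustrative model, not the MUCTPI firmware: candidates falling in sectors known to overlap are deduplicated by keeping the higher-pT one, so that a single muon is not counted twice in the multiplicity. The sector names and pT values are invented.

# Illustrative overlap resolution: keep the highest-pT candidate per overlap.
def resolve_overlaps(candidates, overlap_pairs):
    """candidates: list of dicts with 'id', 'sector' and 'pt' (GeV).
    overlap_pairs: set of frozensets of sector names known to overlap."""
    removed = set()
    for a in candidates:
        for b in candidates:
            if a is b or a["id"] in removed or b["id"] in removed:
                continue
            if frozenset((a["sector"], b["sector"])) in overlap_pairs:
                loser = a if a["pt"] < b["pt"] else b
                removed.add(loser["id"])
    return [c for c in candidates if c["id"] not in removed]

# hypothetical example: two candidates from the same muon in overlapping sectors
cands = [
    {"id": 1, "sector": "barrel_A", "pt": 11.0},
    {"id": 2, "sector": "barrel_B", "pt": 10.5},
    {"id": 3, "sector": "endcap_C", "pt": 6.0},
]
overlaps = {frozenset(("barrel_A", "barrel_B"))}
print([c["id"] for c in resolve_overlaps(cands, overlaps)])   # -> [1, 3]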

Performance of the L1 Calorimeter Trigger System
In 2011, the final tuning of the timing and calibration of the L1 Calorimeter (L1Calo) trigger was performed using high-luminosity proton-proton collisions. The L1Calo trigger decision is based on dedicated analogue trigger signals provided by the LAr and Tile calorimeters. The analogue signals of the ∼ 250,000 calorimeter cells are summed into 7168 trigger towers and then sent over twisted-pair cables (30−70 meters) to the L1Calo analogue receivers. Arrival-time differences of the trigger signals have two sources: time-of-flight differences of particles depositing energy at different locations in the calorimeter, and length differences of the trigger cables. Nanosecond precision is required when adjusting for these arrival-time differences, because small timing discrepancies lead to an underestimation of the reconstructed energy. To fine-adjust the timing, a custom PHOS4 chip with a precision of 1 ns is used. After this, the signals are passed through a FIFO, which provides a coarse timing adjustment in steps of 25 ns. This procedure is done in the PreProcessor system, where the analogue signals are digitized. Figure 5 shows the timing achieved at the start of the 2011 data-taking period: for almost all trigger towers the timing was close to the ideal performance. Note that 5 ns precision is needed for an L1Calo energy measurement at the 2% level. The plot on the right shows an even better situation for a later run, after application of correction factors to the hardware timing delays.
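The two-stage adjustment amounts to splitting a required per-tower delay into an integer number of 25 ns steps plus a residual fine delay. The short Python sketch below illustrates this arithmetic with hypothetical delay values; it is not the actual tower calibration.

# Minimal sketch of the coarse (25 ns FIFO steps) plus fine (~1 ns) delay split.
BC_NS = 25

def split_delay(required_ns):
    """Split a required signal delay into coarse (25 ns steps) and fine (ns)."""
    coarse_steps = int(required_ns // BC_NS)              # number of FIFO steps
    fine_ns = round(required_ns - coarse_steps * BC_NS)   # fine setting, 0-24 ns
    return coarse_steps, fine_ns

for delay in (3.2, 41.7, 96.0):   # hypothetical per-tower delays in ns
    coarse, fine = split_delay(delay)
    print(f"required {delay:5.1f} ns -> {coarse} x 25 ns + {fine} ns fine")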
The number of FADC counts seen in a trigger tower needs to be calibrated in order to be translated into transverse energy in GeV. This energy calibration is a crucial aspect of the operation of the L1Calo trigger. The calibration is derived from the analysis of dedicated calibration-pulse runs taken in between stable data taking. Several energy steps are taken in each calibration run, and the gains are then determined by comparing the energy seen in the calorimeters and in L1Calo. An excellent agreement between the L1Calo and calorimeter measured energies is observed in energy correlation plots (see figure 6).
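The gain extraction can be pictured as a straight-line fit of the calorimeter energy against the L1Calo FADC counts over the calibration energy steps. The Python sketch below illustrates the idea with invented numbers; it is not the actual L1Calo calibration code.

# Hedged sketch of the gain determination: slope of E_T vs FADC counts.
def fit_gain(fadc_counts, calo_et_gev):
    """Least-squares slope of E_T vs counts for a line through the origin."""
    num = sum(c * e for c, e in zip(fadc_counts, calo_et_gev))
    den = sum(c * c for c in fadc_counts)
    return num / den

counts = [40, 80, 160, 320]            # hypothetical pulser steps (FADC counts)
et_gev = [10.1, 19.8, 40.3, 79.6]      # corresponding calorimeter E_T (GeV)
gain = fit_gain(counts, et_gev)
print(f"gain = {gain:.3f} GeV/count")
print("calibrated:", [round(c * gain, 1) for c in counts])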

Performance of the High Level Trigger
The HLT reconstruction is based on energy measurements from the calorimeters and tracking information from the inner detector and muon spectrometer. The performance of the various trigger algorithms has been studied using pp collisions and is documented in ref. [6]. Figure 7 (left plot) shows the measured efficiency turn-on curves for the three trigger levels of one electron trigger, as a function of the electron transverse energy as determined by the offline reconstruction. At L1, this electron trigger requires an EM cluster above 14 GeV, which is used to seed an L2 electron trigger sequence. The L2 requires ET > 19 GeV together with additional selections on the calorimeter shower shape and the matching of the cluster to a track in the Inner Detector. The EF subsequently applies tighter selection cuts and requires ET > 20 GeV. As seen in the figure, the 20 GeV electron trigger exceeds 98% efficiency at an offline reconstructed ET of 22 GeV. Figure 7 (right plot) shows the efficiency of the muon trigger versus the muon pT, measured in Z → µµ events for data and simulation. The results illustrate a good agreement (∼ 2%) between data and MC.
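A turn-on curve of this kind is commonly parametrized by an error function rising from zero to a plateau around the trigger threshold. The Python sketch below illustrates such a parametrization for a 20 GeV electron trigger; the resolution and plateau values are assumptions for illustration, not fitted ATLAS results.

# Illustrative turn-on curve: efficiency vs offline E_T.
import math

def turn_on(et_gev, threshold=20.0, resolution=0.8, plateau=0.99):
    """Efficiency as a function of offline E_T (GeV), error-function model."""
    return 0.5 * plateau * (1.0 + math.erf((et_gev - threshold) /
                                           (math.sqrt(2.0) * resolution)))

for et in (18, 20, 22, 30):
    print(f"E_T = {et:2d} GeV -> efficiency {turn_on(et):.3f}")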

Trigger operations
During 2010-2011 data taking, the ATLAS Trigger System operated reliably while both luminosity and pile-up increased. Figure 8 (left plot) shows the observed rates of several triggers as a function of luminosity, illustrating how well the trigger rates extrapolate with luminosity. As the luminosity increases, the trigger rate can be controlled by using tighter trigger items for certain object selections, while always keeping the main triggers as stable as possible. To make maximal use of the available trigger bandwidth, the trigger prescales can be adjusted dynamically during a run, without the need to stop and restart data taking (see the incremental steps in rate in figure 8, right plot).
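The effect of prescaling can be illustrated with a simple calculation: for a given rate budget, the smallest integer prescale is chosen such that the recorded rate stays within the budget as the raw rate grows with luminosity. The Python sketch below uses an invented trigger cross-section and is not the ATLAS prescale tool.

# Minimal prescale sketch: keep a supporting trigger within its rate budget.
import math

def prescale_for(rate_hz, budget_hz):
    """Smallest integer prescale keeping rate_hz / prescale <= budget_hz."""
    return max(1, math.ceil(rate_hz / budget_hz))

luminosities = [1e32, 5e32, 1e33]        # cm^-2 s^-1 (illustrative)
rate_per_lumi = 2.0e-31                  # Hz per (cm^-2 s^-1), hypothetical trigger
budget = 50.0                            # Hz allotted to this trigger

for lumi in luminosities:
    raw = rate_per_lumi * lumi
    ps = prescale_for(raw, budget)
    print(f"L = {lumi:.0e}: raw {raw:6.0f} Hz, prescale {ps:2d}, "
          f"recorded {raw / ps:5.1f} Hz")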

Conclusion
The ATLAS trigger system has been successfully deployed and commissioned using pp collision data during 2010 and 2011. Important improvements have been made to the monitoring of the Level-1 performance. The trigger system has worked reliably and with excellent performance during these years of data taking. Thanks to its flexibility it was able to cope with the increasing luminosity and pile-up, keeping the data-taking efficiency very high.