An upgraded ATLAS Central Trigger for post-2014 LHC luminosities

the physics goals of ATLAS while keeping the total Level-1 rates at a maximum of 100 kHz. To provide this added functionality, the Central Trigger Processor will be upgraded during the planned LHC shutdown that begins in 2013.


Introduction
The trigger system of the ATLAS detector [1] is structured in a three-level architecture that stepwise reduces the event rate from an interaction rate of roughly 1 GHz to a few hundred Hz recorded for permanent storage. A schematic view of the ATLAS trigger system is shown in figure 1. The first level (L1) is hardware-based and reduces the rate to below 75 kHz within a maximum latency of 2.5 µs, using detector information at reduced granularity compared to what is available offline to calculate quantities related to physics-object energy and multiplicity. The actual L1 trigger decision, the level-1 accept (L1A), is taken by the Central Trigger Processor (CTP), which combines information from the calorimeters, the muon trigger detectors and a number of detectors in the forward region.

The calorimeter trigger at L1 distinguishes signatures of jets, electrons/photons and taus/single hadrons with the help of trigger towers from the electromagnetic and hadronic calorimeters, and calculates total and missing transverse energies. For the muon trigger there are two types of detectors: in the barrel region (|η| < 1.05) resistive plate chambers are used, while in the end-cap region (1.05 < |η| < 2.4) thin gap chambers provide the trigger information. In the forward region, additional detectors give input to the CTP: the beam pick-up detectors, the Minimum Bias Trigger Scintillators and the Zero Degree Calorimeter. Moreover, for luminosity measurements there are the ALFA detector (absolute luminosity for ATLAS) and LUCID (a luminosity measurement using Cherenkov integrating detectors). The beam condition monitoring also provides inputs to the CTP. More information on these detectors can be found in [1].

The L1A prompts the read-out of the entire detector. Information on so-called regions of interest, i.e. the detector regions where the objects that passed the trigger thresholds were located, is passed on to the second trigger level (L2) and the data acquisition (DAQ). At L2, information based on the full granularity of the detector is used to refine the trigger decision with software algorithms running on computer farms, reducing the rate to a few kHz. Finally, the event filter (EF) software uses the full event information and more sophisticated algorithms to further reduce the rate to roughly 400 Hz. L2 and the EF are commonly referred to as the High Level Trigger (HLT) [2].
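As a quick numerical illustration of the three-level reduction described above, the per-level rejection factors can be computed from the quoted rates (the dictionary keys are ours, and the L2 output rate is assumed to be 3 kHz as a representative value for "a few kHz"):

```python
# Approximate rates at each stage of the ATLAS trigger chain (from the text).
rates_hz = {
    "interactions": 1e9,    # ~1 GHz interaction rate
    "L1":           75e3,   # hardware level, below 75 kHz
    "L2":           3e3,    # software level, "a few kHz" (assumed 3 kHz)
    "EF":           400.0,  # event filter, roughly 400 Hz to storage
}

stages = list(rates_hz)
for prev, cur in zip(stages, stages[1:]):
    factor = rates_hz[prev] / rates_hz[cur]
    print(f"{prev} -> {cur}: rejection factor ~{factor:,.0f}")

print(f"overall: ~{rates_hz['interactions'] / rates_hz['EF']:,.0f}")
```

The overall rejection from collisions to storage is about 2.5 million, the bulk of which is carried by the hardware-based first level.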

Hardware Overview
The Central Trigger Processor (CTP) [4] is implemented in custom-built VME electronics. The CTPMI board, the interface to the LHC machine [5], receives the timing signals from the LHC: the bunch crossing clock (BC) and the orbit signal, which is issued with the beam revolution frequency. The CTP is responsible for the synchronisation and distribution of this timing information throughout the entire detector system. The trigger signals from the sub-detectors arrive at three input boards (CTPIN), each of which accepts 4 × 31 inputs that are synchronised, aligned and monitored with scalers. Of these, 160 input signals are selected by a switch matrix and transmitted via the Pattern In Time (PIT) bus to the CTPCORE board, which forms the L1A, and to the CTPMON board for per-bunch monitoring.

In the CTPCORE board, up to 256 trigger items are formed according to the L1 trigger menu as flexible logical combinations of the input trigger conditions, using look-up tables. Moreover, the so-called bunch-group masking is included in the formation of each item: there are eight distinct bunch groups, each with its own specific purpose. The physics bunch group, for example, consists of those bunch crossing IDs (BCIDs) where the two beams meet in ATLAS. These bunch groups can be used in a logical AND with the other trigger conditions. In addition, each item has a priority and a pre-scale factor that is applied before the L1A is formed as the logical OR of all trigger items.

Following protective dead-time rules, or on request by sub-detectors, the CTP can veto triggers. The priority of a trigger item determines how much dead-time this specific item sees. There are two types of preventive dead-time in the CTP: the simple (fixed) dead-time of a programmable number of bunch crossings following each L1A, and the complex dead-time, which is implemented as a leaky-bucket algorithm limiting the average L1 rate to a programmable value. The dead-time and the L1A rate are monitored per bunch in the CTPCORE module.
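The item-formation and dead-time logic described above can be made concrete with a minimal Python sketch. This is not the firmware, and the item names, thresholds and parameters below are invented for illustration: each trigger item is a logical combination of the input conditions (the role played by the look-up tables), ANDed with a bunch-group mask and prescaled; the L1A is the OR of all items, and the complex dead-time is modelled as a leaky bucket.

```python
def form_l1a(conditions, menu, bunch_groups, bcid, prescale_counters):
    """Form the L1A as the logical OR of all trigger items.

    `menu` maps an item name to (logic, bunch_group, prescale), where
    `logic` plays the role of the look-up table combining the inputs.
    """
    l1a = False
    for item, (logic, group, prescale) in menu.items():
        # item fires if its logic is satisfied AND the bunch group allows it
        if logic(conditions) and bcid in bunch_groups[group]:
            prescale_counters[item] += 1          # count raw firings
            if prescale_counters[item] % prescale == 0:
                l1a = True                        # item survives its prescale
    return l1a


class LeakyBucket:
    """Complex preventive dead-time: a leaky-bucket algorithm limiting the
    average accept rate to one per `interval_bc` bunch crossings, while
    allowing bursts of up to `size` accepts (parameters are illustrative)."""

    def __init__(self, size, interval_bc):
        self.size = size              # bucket capacity (burst tolerance)
        self.interval = interval_bc   # bunch crossings per leaked accept
        self.level = 0.0

    def step(self):
        # one bunch crossing elapses: the bucket leaks at the programmed rate
        self.level = max(0.0, self.level - 1.0 / self.interval)

    def try_accept(self):
        if self.level + 1.0 <= self.size:
            self.level += 1.0
            return True
        return False                  # bucket full: the trigger is vetoed


# Hypothetical two-item menu: an electron item and a heavily prescaled jet item.
menu = {
    "L1_EM10": (lambda c: c["em_et"] >= 10, "physics", 1),
    "L1_J5":   (lambda c: c["jet_et"] >= 5,  "physics", 100),
}
bunch_groups = {"physics": {1, 2, 3}}   # BCIDs where the beams collide
counters = {item: 0 for item in menu}
print(form_l1a({"em_et": 12, "jet_et": 0}, menu, bunch_groups,
               bcid=2, prescale_counters=counters))   # True: L1_EM10 fires
```

Note that the bunch-group mask is applied here as part of the item, as in the present CTPCORE; as discussed below, the upgraded module will instead apply the masking after the item formation.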
Four output modules (CTPOUT) receive the L1A and timing information via the common backplane (COM bus) and fan them out to the Trigger, Timing and Control (TTC) partitions of the sub-detectors. The sub-detectors can send calibration requests to the CTPCAL module via the calibration bus.

Current Operation
The CTP operation during the first years of data taking has been very smooth: the availability of the system was close to 100%. Nevertheless, improvements are constantly being made in order to obtain a more detailed picture of the system during runtime, to detect and identify problems as quickly as possible, and to prevent them in the future.
One of the few issues seen in 2011 was that the orbit signal could occasionally be missed or shifted in time, leading to incorrect timing information for the entire experiment. A number of additional monitoring features and cross-checks have been implemented to catch such problems quickly: the number of orbit signals and their length in terms of bunch crossings are constantly checked for irregularities, and the physics bunch pattern as derived from the rates in the CTPMON module is cross-checked against what is expected from the configuration database, in order to detect a global shift of the orbit signal.
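The bunch-pattern cross-check can be sketched as follows (function and variable names are ours, and the toy orbit is shortened; a real LHC orbit has 3564 bunch crossings): the measured per-bunch rates are correlated against the filled-bunch pattern from the configuration database for every possible cyclic shift, and a best-fit shift different from zero flags a shifted orbit signal.

```python
def detect_orbit_shift(measured_rates, expected_filled):
    """Return the cyclic shift (in bunch crossings) that best aligns the
    measured per-bunch rates with the expected filled-bunch pattern."""
    n = len(measured_rates)
    best_shift, best_score = 0, float("-inf")
    for shift in range(n):
        # total rate observed in the expected filled bunches, shifted by `shift`
        score = sum(measured_rates[(bcid + shift) % n] for bcid in expected_filled)
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift


# Toy orbit of 100 bunch crossings with three filled bunches; the measured
# rates show up two bunch crossings late, as after a shifted orbit signal.
filled = {10, 20, 30}
rates = [0.0] * 100
for bcid in filled:
    rates[(bcid + 2) % 100] = 1000.0

print(detect_orbit_shift(rates, filled))  # 2 -> orbit signal is shifted
```

In practice such a check only needs to raise an alarm when the best-fit shift is non-zero; the exact alignment procedure used online may differ from this sketch.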
Upgrade Plans

While the electrical PIT bus will be kept, it will be operated at double data rate, i.e. 80 instead of 40 MHz; the upgrade requires a redesign of the CTPCORE module, the COM backplane and the CTPOUT modules. The new CTPCORE++ module (see figure 2) will be equipped with optical inputs in addition to the electrical ones, in order to provide the connection to new or upgraded sub-systems such as the topological processor. It will be capable of forming 512 trigger items and will have 256 per-bunch counters for trigger-item monitoring. The number of bunch groups will be doubled to 16, and the masking will be applied after the item formation instead of being part of the item.

The upgrade plans also include the partitioning of the L1A generation, where all partitions share a common trigger menu and timing. There will be four partitions: one primary (physics) partition, which is the only one providing information for L2 and DAQ read-out, and three secondary partitions that have their own selection of trigger items and dead-time handling and will be used for detector commissioning, calibration, etc. The existing COM backplane only allows for the implementation of two partitions and will therefore be replaced to accommodate four. The new backplane could then also allow for an additional CTPOUT module, which would make it possible to increase the number of TTC partitions. The CTPOUT modules themselves will have to be adapted to these changes, and the new CTPOUT++ modules can in addition be used for per-bunch busy monitoring. The aim is to install the upgraded CTP in time to be ready for detector commissioning from Q1 2014 onwards.