Future Plans of the ATLAS Collaboration for the HL-LHC

These proceedings report the current plans to upgrade the ATLAS detector at CERN for the High Luminosity LHC (HL-LHC). The HL-LHC is expected to start operations in the middle of 2026, aiming to reach an ultimate peak instantaneous luminosity of 7.5 × 10³⁴ cm⁻² s⁻¹, corresponding to approximately 200 inelastic proton–proton collisions per bunch crossing, and to deliver over a period of twelve years more than ten times the integrated luminosity of the Large Hadron Collider (LHC) Runs 1–3 combined (up to 4000 fb⁻¹). This poses a major challenge to all sub-systems of the detector, which will need extensive upgrades to allow the experiment to pursue a rich and interesting physics programme in the future.


Introduction
A broad range of observations is well described by the Standard Model (SM) of particle physics, yet several contemporary phenomena are not simply explained within this framework, such as dark matter, the matter–antimatter asymmetry, and neutrino masses. It is therefore commonly assumed at present that the Standard Model, although mathematically self-consistent, should be embedded in a larger theory, e.g., Supersymmetry (SUSY).
Utmost priority is given to high-precision studies of the production and decay properties of the SM Higgs boson, as well as to searches for new physics beyond the Standard Model (BSM). This is complemented by heavy-flavour, heavy-ion and diffractive physics, which together shape the large and rich LHC experimental programme. To provide the required statistics, the LHC and parts of the accelerator complex at CERN will undergo a major upgrade, called the high-luminosity LHC (HL-LHC) upgrade. The new HL-LHC baseline parameters significantly exceed those of the nominal LHC design, which was a main factor in the construction of the existing experiments.
To mitigate the risks and minimise possible issues arising from the increasingly harsh operational conditions at the HL-LHC, as well as to profit from the technological evolution of the past 10–20 years, the ATLAS detector [1] upgrade is planned according to the following considerations:
- The higher luminosity at the HL-LHC implies higher pileup, i.e., up to 200 proton–proton collisions per bunch crossing.
- The high pileup results in high hit occupancies and particle fluxes in all sub-systems, especially in the detectors closest to the beam pipe. The existing inner detector will have to be dismantled and replaced entirely with a newly built one.
- The higher pileup will typically lead to higher trigger and data rates (including background and fakes) from all sub-systems.
- The individual event size will increase, and thus a larger amount of data will need to be transmitted, processed and stored.
- The detectors are expected to receive total radiation doses of up to a dozen MGy over the lifetime of the HL-LHC.
- All sub-systems require new readout electronics capable of streaming full-granularity data off-detector at the 40 MHz collision rate to ensure the best physics performance at the highest luminosity.
- Higher-granularity data will be made available for more sophisticated algorithms at trigger level.
- Faster and improved event-selection algorithms are needed to increase the selectivity for signal and reject background.
- Tracking information will be used as early as possible in the trigger selection to improve selectivity and to reject background. In particular, hardware-based track triggering will be considered.
- Commodity hardware accelerators will be considered to offload suitable computing-intensive processing tasks from the main CPU.
- Detector performance for physics shall be kept at least as good as in Runs 1–2, at instantaneous luminosities up to 5 times higher than in the initial LHC design.
- The Advanced Telecommunications Computing Architecture (ATCA) standard will replace the VME standard as the basis for modular electronics.
- Many sub-systems will be based on complex Printed Circuit Boards (PCBs) hosting high-performance Field Programmable Gate Arrays (FPGAs), in which complex processing algorithms will be implemented.
- A homogeneous interface between front-end/trigger electronics and commodity multi-gigabit networks will be used to transport data.
- There is a trend in the trigger sub-system to move from trigger-specific to offline-type algorithms.
- The trend in the data acquisition sub-system is from custom to commodity hardware, and from hardware-based to software-based solutions.
- The S/G-Links will be replaced with the low power GigaBit Transceiver (lpGBT) and Versatile Link+ protocols for fast front-end/back-end transmission of large data volumes.
- Solutions will be explored and implemented that reduce system complexity through higher-density PCBs with high-performance FPGAs and fewer optical links at higher speeds.
- The design will focus on improving physics performance through better signal handling and processing, higher dynamic range, lower noise, better coverage, lower material budget, high-speed optical links, lower power consumption, high radiation hardness, accessibility, redundancy, simplicity, and maintainability.
The ATLAS upgrade plans for the HL-LHC have been documented in detail in the Phase 2 Upgrade Letter of Intent [2] and the Phase 2 Upgrade Scoping Document [3]. Dedicated technical design reports (TDRs) were prepared, and two of them, namely the Strip TDR [4] and the Muon TDR [5], were approved by the end of 2017. The remaining four TDRs are expected to be endorsed by April 2018. An interest was expressed in a new High-Granularity Timing Detector (HGTD), whose design is ongoing. According to the current plans, the proposed upgrade programmes will be refined and implemented over a period of 9 years (2018–2026). The completion of the ATLAS Phase 2 TDRs is a major milestone in the overall HL-LHC upgrade schedule [6] (Fig. 1).
The main challenges and changes to the ATLAS detector arise from the need for a new trigger and data acquisition (TDAQ) architecture and the replacement of the readout electronics of most of the detector subsystems.

From LHC to HL-LHC
The 25th anniversary of the LHC physics experimental programme was celebrated with a symposium on 15 December 2017 at CERN. This event marked the tremendous success achieved by the whole community: most of the initial design goals were reached and surpassed, and performance records have been set every year since the start of machine operation in 2010.
The LHC legacy so far is marked by the following milestones:
- In March 2010, it was successfully commissioned for proton–proton collisions at a 7 TeV centre-of-mass energy.
The ambitious physics goals, coupled with various practical and technical constraints, require high-statistics data samples to be collected on a much shorter timescale than the present machine configuration would allow. Based on a careful assessment of the beam parameters and hardware capabilities, a new baseline setup for the LHC has been proposed. In this design, the primary means of significantly enlarging the data sets is to reach the highest luminosity by maximising the number of collision events. The HL-LHC is therefore defined by the following targets:
- a peak luminosity of ∼7.5 × 10³⁴ cm⁻² s⁻¹;
- an integrated luminosity of 300 fb⁻¹ per year, resulting in 3000–4000 fb⁻¹ in about a dozen years (i.e., by 2037).
This challenging goal will require a factor of 5 increase in peak luminosity compared to the current running conditions. The LHC will be upgraded in stages, starting in 2019–2020 (Phase 1) and aiming to commission the ultimate HL-LHC machine configuration in the period 2023–2025 (Phase 2). Upon completion of detailed design studies, the HL-LHC upgrade project moved to the construction phase on 1 November 2015. This date marks the beginning of the next stage of progress of the most powerful accelerator at the high-energy frontier. Some of the novel technological advances are 11–12 T superconducting magnets, compact superconducting cavities for beam rotation, improved beam collimation, and 300 m long high-power superconducting links with small energy dissipation.
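The integrated-luminosity target above can be cross-checked with straightforward arithmetic, using only the numbers quoted in the text (300 fb⁻¹ per year over roughly a dozen years); a minimal sketch:

```python
# Quick cross-check of the HL-LHC integrated-luminosity target using only
# the figures quoted in the text; purely illustrative arithmetic.
per_year_fb = 300.0            # fb^-1 delivered per year (HL-LHC target)
years = 12                     # approximate length of the HL-LHC programme
total_fb = per_year_fb * years
print(f"~{total_fb:.0f} fb^-1 over {years} years")  # within the quoted 3000-4000 fb^-1
```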

ATLAS Future Plans for HL-LHC
The HL-LHC is particularly characterised by its dense beam bunches packed with 2.2 × 10¹¹ protons per bunch (the nominal LHC value being 1.15 × 10¹¹), low emittances and finely tuned beam optics to increase the probability of proton–proton interactions.
The number of collisions per bunch crossing, called the pileup, μ, at the ultimate HL-LHC mode of operation is expected to reach 200. This is an immense upward step with respect to the LHC design and the currently attained μ-values of up to about 75. As some of the present ATLAS detector sub-systems are not capable of performing optimally at μ ≈ 75, the peak luminosity is levelled down to μ ≈ 58 by separating the beams at the start of every LHC fill. For the remainder of the fill μ decreases, eventually reaching values as low as 10 before the beams are dumped. This scheme resulted in an average pileup of μ ≈ 38.3 in 2017. The sensitivity to ever-increasing pileup is of particular concern to the experiment because certain resources scale exponentially rather than linearly with μ, making extrapolations and precise predictions harder to obtain than under the lower-pileup conditions of the data on which current observations are based. Without an adequate upgrade of core hardware and software systems, ATLAS would not be able to sustain the harsher operational conditions of the HL-LHC.
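The quoted μ ≈ 200 follows from the peak luminosity via ⟨μ⟩ = L·σ_inel / (n_b·f_rev). A minimal sketch, in which the inelastic cross-section and the number of colliding bunch pairs are assumed typical values rather than official machine parameters:

```python
# Illustrative estimate of the mean pileup <mu> from instantaneous luminosity,
# <mu> = L * sigma_inel / (n_b * f_rev). sigma_inel and n_b are assumed
# typical values, not official HL-LHC machine parameters.
L = 7.5e34            # cm^-2 s^-1, ultimate HL-LHC peak luminosity (from the text)
sigma_inel = 80e-27   # cm^2, ~80 mb inelastic pp cross-section (assumption)
n_b = 2748            # colliding bunch pairs (assumption)
f_rev = 11245.0       # Hz, LHC revolution frequency
mu = L * sigma_inel / (n_b * f_rev)
print(f"<mu> ~ {mu:.0f}")  # consistent with the quoted ~200
```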
The ATLAS detector will undergo a major upgrade in two phases, hereafter referred to as Phase 1 and Phase 2, in synchronisation with the HL-LHC schedule. The primary motivation and scope of innovations in the various sub-systems are determined by the physics priorities and by the availability of state-of-the-art technologies. An additional concern for the upgrades is material aging, including technology obsolescence, given that the ATLAS detector was proposed more than 25 years ago and many of its components were manufactured and installed more than 10 years ago. The radiation doses and damage incurred by the sensors and electronics during the Run 1 (2010–2012), Run 2 (2015–2018) and Run 3 (2021–2023) operation must be considered as well.
The ATLAS collaboration is responsible for all aspects of the various detector parts and of data handling. This constitutes an enormous human endeavour and a unique technological achievement. The detector weighs 7000 tonnes and measures 46 m in length and 25 m in diameter. More than 3000 physicists from 182 institutions in 38 countries are involved as of today. The ATLAS detector can be partitioned according to its main functional units, namely the inner tracker, the calorimeters, the muon spectrometer, and the trigger and data acquisition system. The following sections present a brief overview and a few highlights of the upgrade technical design considerations for each sub-system.

Inner Tracker
The existing tracking detector will be decommissioned and replaced by an entirely new Inner Tracker (ITk). It is one of the most critical systems of the upgraded detector and will play a crucial role in data taking at very high pileup. The ITk is an all-silicon system that consists of a pixel detector at small radius and a large-area strip detector surrounding it. It will be installed as a monolithic unit into the centre of the ATLAS detector.
Good hermeticity, provision of a sufficient number of hits per track, and minimal degradation of tracking performance due to material within the detector volume are key requirements in the engineering design of the ITk. The implementation of these specifications will ensure the ability to identify charged particles with high efficiency and purity, and to measure their properties with high precision, in the presence of the average pileup of up to 200 expected at the HL-LHC. The upgraded detector should at least preserve, and if possible enhance, the physics performance of the current Run 2 detector under the Phase 2 operational conditions.
The combined Pixel and Strip ITk detectors provide a total of 13 hits for tracks with |η| < 2.6, with the exception of the barrel/endcap transition of the ITk Strip detector, where the hit count is 11. The ITk Pixel endcap system is designed to supply at least 9 hits from the end of the Strip detector coverage up to |η| ≈ 4. The full ITk will be read out at a 1 MHz L0 rate in the single-L0 trigger scheme (Sect. 7).

ITk Pixel Detector
The experience gained over more than 20 years of construction and operation of the present ATLAS Pixel detector serves as a basis for the upgraded design, which will choose from a selection of sensor technologies, including high-resistivity planar and 3D silicon detectors. The new ITk Pixel detector is a hybrid assembly of a high-resistivity silicon sensor bump-bonded to a CMOS binary readout chip that can be read out at very high speed. Novel technologies will have to be analysed in detail, and solutions are needed to adapt to the HL-LHC conditions.
Careful simulations have been performed to optimise and solidify the layout and critical physical parameters of the Pixel design before the mechanical construction can start. The Pixel detector consists of 4 barrel layers and 4 layers of so-called endcap rings (Fig. 2). The two innermost barrel layers and the innermost endcap ring layer are replaceable; the others are designed to operate for the entire lifetime of the HL-LHC.
The barrel inclined layout, with tilted sensors in the high-η region, allows for several hits per layer (tracklets) and less material to be crossed at a given low incidence angle. This leads to improved d₀ and z₀ resolution and higher tracking efficiency, owing to the many tracklets as close as possible to the interaction point. The endcap layout consists of staggered rings (instead of flat disks), called optimised rings; this specific pattern provides a constant number of hits versus η. The larger-η region is entirely covered by the Pixel volume, with an increased number of rings at very high η.
The front-end chip is built on a 150 µm thick silicon wafer, with a 20 µm diameter Sn–Ag bump bond per pixel channel. The total number of channels is more than 400 million, five times that of the current Pixel detector.
The 3D pixel sensors consist of a 150 µm thick active silicon layer and a 100 µm thick passive support layer. They occupy the innermost pixel layer (layer 0) and have a relatively large total thickness. Planar pixel sensors with 100 µm (150 µm) of active silicon are used in layer 1 (layers 2–4). Monolithic CMOS pixel sensor modules are being pursued for the outermost barrel layer (layer 4).
The three outer Pixel layers, barrel and endcaps, will support full readout at 4 MHz. For the two inner Pixel layers, full readout at 4 MHz would require a prohibitive number of data cables; this part of the detector will therefore only support full readout at 1 MHz, and will be read out at L1 (Sect. 7).
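The cabling argument above is a bandwidth-scaling one: required off-detector bandwidth grows linearly with trigger rate, so quadrupling the rate quadruples the number of links. A minimal sketch, in which `layer_bandwidth_gbps` and the per-event fragment size are illustrative placeholders, not ATLAS specifications:

```python
# Back-of-envelope readout-bandwidth scaling for one pixel layer, to illustrate
# why 4 MHz full readout of the innermost layers is prohibitive. The fragment
# size is a made-up placeholder, not an ATLAS specification.
def layer_bandwidth_gbps(trigger_rate_hz, bytes_per_event):
    """Aggregate off-detector bandwidth in Gb/s for one layer."""
    return trigger_rate_hz * bytes_per_event * 8 / 1e9

frag = 50_000  # bytes per event for one layer (illustrative only)
print(f"1 MHz readout: {layer_bandwidth_gbps(1e6, frag):.0f} Gb/s")
print(f"4 MHz readout: {layer_bandwidth_gbps(4e6, frag):.0f} Gb/s")  # 4x the links
```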

ITk Strip Detector
The silicon-strip detector, with a silicon area of ∼165 m², is the largest component that has to be manufactured and integrated anew in the detector. Its design therefore has to be finalised, and production started, as early as possible. Based on the experience gained during the construction of the current ATLAS SCT, simplicity in the design was pursued for improved producibility (i.e., aiming at mass production and low cost). The design is equally focused on material reduction for better physics performance.
The strip barrel consists of 4 cylinders around the beam line, each 2.8 m long. The strip endcaps have six disks on each side of the barrel, extending the length to z = ±3 m. The whole strip system covers the region |η| < 2.5 (Fig. 2). The strips on the inner two cylinders are 24.1 mm long (short strips) and those on the outer two cylinders are 48.2 mm long (long strips). The shorter strips are required at smaller radii to cope with the higher hit occupancies, whereas the long strips are suitable in the lower-occupancy region at larger radii.
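The trade-off between strip length and occupancy can be sketched as follows: channel occupancy scales roughly with strip area times the local particle flux, so halving the strip length at small radius compensates for the higher flux there. The pitch and flux values below are illustrative assumptions, not ITk parameters:

```python
# Sketch of why strip length is halved at small radius: channel occupancy
# scales roughly with strip area times the local particle flux. The pitch
# and relative flux values are illustrative assumptions only.
def occupancy(strip_len_mm, pitch_mm, flux):
    """Relative channel occupancy ~ strip area x flux (arbitrary units)."""
    return strip_len_mm * pitch_mm * flux

inner = occupancy(24.1, 0.0755, flux=4.0)  # short strips, higher flux at small r
outer = occupancy(48.2, 0.0755, flux=1.0)  # long strips, lower flux at large r
print(f"inner/outer occupancy ratio ~ {inner / outer:.1f}")
```

With these assumed numbers the short strips still end up with about twice the relative occupancy of the long strips, illustrating why occupancy, not strip length alone, drives the layout choice.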
A silicon-strip module is made of one sensor and one or two low-mass PCBs, called hybrids, hosting the readout ASICs: the ATLAS Binary Chip (ABCStar) and the Hybrid Controller Chip (HCCStar). The kapton flex hybrids are glued directly to the silicon sensors with electronics-grade epoxy. The silicon modules are glued directly onto cooled rigid carbon-fibre planks designed to minimise radiation length. Kapton service tapes are co-cured directly onto the carbon-fibre skins. This design implements a high integration of services with only a limited number of connections, realised on cards at one end of the stave or petal structures.
The number of modules is 17,888, containing ∼60 million channels, a factor of 10 more than in the current SCT detector.
The readout ASICs will be produced in a 130 nm technology, which has the benefits of reduced power, low noise, improved radiation tolerance and higher circuit density, at a reasonable price. Access to this technology has been secured via CERN, and its continued availability is also supported by high demand from commercial applications.
To support full detector readout at a 1 MHz L0 rate, the on-detector electronics has adopted the so-called "star architecture", with point-to-point connections between each ABCStar and the HCCStar on the hybrid, thus removing a bottleneck in data transfer to the HCC and making more efficient use of the available bandwidth. The Strip detector will support L0 readout at 4 MHz for the 10% of modules belonging to the region of interest (RoI) identified at L0, followed by full readout at L1 (Sect. 7).

Calorimetry
It is not planned to change the sampling elements of the LAr and Tile calorimeters (i.e., the lead absorbers and the scintillating tiles, respectively). However, the expected increase in particle fluxes at the HL-LHC demands the replacement of the current electronics after Run 3, because many on-detector components will not withstand the planned HL-LHC radiation dose.
The new trigger system (Sect. 7) requires access to the digital information from each calorimeter cell at a 40 MHz rate and with a maximum latency of about 1.7 µs. Changing the readout to continuously digitise the calorimeter-cell data and send them off-detector over high-speed optical links has the additional advantage of minimising single event effects (SEE) in the digital pipelines. The numerical precision, the improved accuracy of the energy calibration and the reduced level of electronics noise, targeting values similar to the offline information, will improve the trigger selectivity significantly. The calorimeter information will consist of the reconstructed and calibrated energy, the time (i.e., bunch crossing identification, BCID) and a quality factor for each cell.

LAr Calorimeter
The readout system records energies in the range of 0.5 MeV to 3 TeV for the ∼182,000 channels. The signals are shaped, digitised and sent to both the trigger and the data acquisition (DAQ) systems. The maximum readout rate at the L1 trigger is currently 110 kHz. The readout architecture consists of front-end, FE (on-detector), and back-end, BE (off-detector), electronics components. An extensive FE system (58 crates) is needed to meet the performance requirements, e.g., to achieve the needed resolution and event rates. The FE performs, on the one hand, per-channel analog processing, 12-bit ADC digitisation and optical data transmission at 1.6 Gb/s to the BE electronics, and on the other hand, analog signal summation and transmission to the L1 calorimeter trigger (L1Calo). The BE performs digital filtering, calculates the deposited energy and provides the readout interface to the DAQ.
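The quoted energy range gives a feel for why a single 12-bit ADC cannot cover the full dynamic range and multi-gain shaping is needed. A minimal arithmetic sketch (the conclusion about multi-gain readout is the authors'; the bit-counting is illustrative):

```python
import math

# The quoted LAr range (0.5 MeV least count to 3 TeV full scale) spans far
# more than one 12-bit ADC can cover; illustrative bit-counting only.
e_min, e_max = 0.5e-3, 3.0e3     # GeV
bits_needed = math.log2(e_max / e_min)
print(f"dynamic range {e_max / e_min:.1e} -> ~{bits_needed:.1f} bits")
print(f"one 12-bit ADC covers only 2^12 = {2**12} counts")
```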
The Phase 1 upgrade focuses on improving the calorimeter functionality related to L1 triggering. The amount of trigger information will increase by a factor of 10: an existing trigger tower will contain 10 supercells, each belonging to one of the 4 layers in depth of the LAr calorimeter (over the full η-range) and 4 times finer in η in the two middle layers (for |η| < 2.4). The FE analog summing electronics will be replaced with LAr Trigger Digitizer Boards (LTDB), and the digitised signals will be processed remotely by the LAr Digital Processing System (LDPS) in the BE, which also serves as the interface to the new L1 calorimeter trigger system (Sect. 7).
For the HL-LHC in Phase 2, the readout electronics architecture needs to evolve towards a free-running scheme in which all data are sent off-detector. The Phase 1 FE boards (FEB) will be replaced by FEB2 boards, while the LAr Signal Processor System (LASP) hardware implementation will be an evolution of the LDPS. The full data stream of detector signals digitised at 40 MHz will be available in the LASP modules. This will allow the determination of calibrated cell energies and of signal times with respect to the bunch crossing time, including an active correction for out-of-time pileup. The new FE electronics have to be radiation resistant (up to 30 kGy). At high pileup and over time, the main characteristics of the electronics, such as dynamic range, linearity, noise and analog pulse shaping time, should not degrade.
In Phase 2, a main requirement for the LAr upgrade imposed by the TDAQ system (Sect. 7) is support for a 1 MHz (up to 4 MHz) accept rate, in terms of data processing, output bandwidth and buffering capability. In addition, the LAr system shall implement data processing, e.g., noise rejection and zero suppression, to adapt to the bandwidth demands of the trigger processors.
The existing control and low-voltage (LV) power distribution scheme will need to be changed in Phase 2 because of the limited radiation tolerance of the current FE components and for compatibility with the new TDAQ design. The new FE electronics will mostly use lower voltages than the present ones (1–4 V instead of up to 11 V). The LV power supply system will also be relocated to a more accessible position, reachable during short access periods in data-taking.

Tile Calorimeter
The plan for the Tile calorimeter is to replace the FE electronics and the high-voltage (HV) regulation devices. In addition, 768 PMTs covering the cells that are most exposed to radiation will be replaced with the latest model PMTs.
The Phase 2 TDAQ architecture (Sect. 7) foresees a fully digital calorimeter trigger with higher granularity and precision, compared to the current system with on-detector analogue summing. The Tile digital trigger input will be ready only for Run 4, unlike the LAr one, which will be ready for Run 3. In Phase 1, the Tile on-detector electronics will remain unchanged, as will the trigger signal digitisation performed by the L1Calo Pre-Processor Modules (PPMs). The PPMs will be equipped with new TREX processors (Rear Transition modules) driving signals to the new L1 trigger system. Data will be digitised at 40 MHz and sent off-detector, whereby more of the on-detector electronics and complexity is moved out of the detector, thanks to the availability of high-quality, high-speed optical links and receivers (e.g., based on the so-called GBT protocol). It is also essential for the Phase 2 system to be consistent with the new trigger and latency requirements, in view of the large increase in rate and pileup compared to the Phase 1 upgrade.
Access to the electronics for maintenance will be improved by splitting the current super-drawer body into four parts (called mini-drawers). This partitioning into smaller independent readout units will limit the number of inactive cells due to failures affecting a full super-drawer: in the worst case, the dead region would be one-eighth of the area affected by a failure in the current system. The new Tile on-detector electronics will be housed inside these mini-drawers. In addition, the new HV system will be moved outside the detector radiation environment.

Muon Spectrometer
The muon spectrometer consists of Cathode Strip Chambers (CSC), barrel resistive plate chambers (RPC), endcap thin-gap chambers (TGC) and monitored drift tubes (MDT). The RPCs and TGCs are currently used for triggering. The muon detector components qualify for longer running and higher rates (and radiation doses) than originally anticipated, thanks to the conservative original safety factors, and therefore will not be replaced. In Phase 1, the New Small Wheel (NSW), consisting of Micromegas (MM) and small-strip TGCs (sTGC), will replace the CSC and MDT chambers of the innermost endcap small wheels (the ones closest to the beam line and exposed to the highest rates).
In Phase 2, a large fraction of the on- and off-detector readout and trigger electronics for the RPC, TGC and MDT systems will be replaced to be compatible with the higher trigger rates and longer latencies specified by the new L0 trigger (Sect. 7). The power system for the RPC, TGC and MDT chambers and electronics will need to be replaced due to component obsolescence, aging and radiation damage. A high-η trigger is under consideration to extend the angular acceptance for muon identification.
All data from each beam crossing of the TGC and RPC detectors will be transmitted to the USA15 counting room, where the full information will be available for trigger processing. In the NSW region (1.3 < |η| < 2.7), the MM and sTGC chambers will take on the dual role of trigger and precision chambers. The BIS78 (barrel inner small, sectors 7 and 8) MDT chambers will be replaced by integrated stations of new RPC and small-diameter MDT (sMDT) chambers to enhance the trigger coverage in this region.
In addition to the upgrades of the detectors and of the trigger and readout electronics, the LV and HV power system of the muon spectrometer (both controls and power supplies) will also be replaced, in order to ensure safe and reliable operation throughout the full HL-LHC operation period. The strategy is to develop new, strictly backwards-compatible devices. With this approach the replacement can be spread over several years, between Phases 2 and 3, as the present components reach the end of their expected lifetime. It leaves, however, no room for the design changes that would be possible if an entirely new power system were developed.

RPC System
A fundamental issue with the RPCs is the following: to ensure their continued operation at the HL-LHC, these chambers will have to be operated at reduced performance (i.e., efficiency) in order to respect the original design limits on currents and integrated charge. This can be achieved by reducing the gas gain through lowering the operating voltage. It leads, however, to hit inefficiencies of up to 35% in the areas of highest background, an unacceptable level if no compensating measures were taken. To maintain high trigger efficiency, acceptance and robustness, new RPC chambers with increased rate capability will be installed on the inner (BI) MDT chambers of the barrel. Some of the chambers in the accessible high-rate regions will be refurbished as well.

TGC System
The EIL4 TGC chambers cover the region around the NSW, at 1.05 < |η| < 1.3, in the large sectors (the small sectors will be covered by the BIS78 RPCs). The current EIL4 TGC doublet chambers will be replaced by TGC triplets with finer readout granularity, reducing the rate of random coincidences in this region to a negligible level. Requiring a coincidence of the Big Wheel TGC chambers with chambers in front of the cryostats greatly reduces the rate of fake triggers, and this is one of the motivations for the NSW project.

MDT System
The MDT chambers will be integrated into the L0 trigger (Sect. 7) in order to sharpen the momentum threshold. Some of the MDT chambers in the inner barrel layer will be replaced with new sMDT. The electronics chain of the MDT precision chambers will be completely re-designed.
The MDT information will be used to sharpen the turn-on curves of the high-p_T triggers, significantly reducing the number of low-p_T muons passing the selection. All MDT data will be sent to the counting room, and the MDT-measured precision coordinates will be used in the L0 trigger to improve the quality of the trigger candidates ("seeds") provided by the RPCs in the barrel and by the TGCs and NSW in the endcaps. This requires the development of new MDT readout electronics to make the MDT coordinates available to the trigger logic within the L0 trigger latency. Alternatively, or in addition, the MDT hits can be used to confirm loose trigger candidates from the RPC or TGC–NSW systems, maximising the acceptance by including regions with limited trigger-detector coverage and adding redundancy to the system.

Trigger and Data Acquisition
The design of the upgraded TDAQ architecture supports a baseline scenario with a single-level hardware trigger (L0) with a maximum rate of 1 MHz and a latency of 10 µs. The evolved two-level architecture specifies a Level-0 (L0) trigger rate of up to 2–4 MHz with 10 µs latency, followed by a Level-1 (L1) trigger rate of 600–800 kHz with a latency of up to 35 µs. The L0-only option resembles the current L1 hardware trigger, with the first (i.e., lowest) trigger level labelled L0 instead of L1. The main change and challenge in the dual L0–L1 option is the implementation, for the first time, of hardware-based track reconstruction in the L1 trigger system, combining tracks with calorimeter- and muon-based L1 trigger objects. Each ATLAS sub-system will be capable of evolving to the dual-level hardware trigger architecture as a mitigation strategy, in case pileup conditions at the HL-LHC drive the readout capabilities of the detectors to the limits of the available bandwidth, or in case the rates of hadronic trigger signatures at the needed thresholds exceed the current allocations. The L0 trigger system is composed of the L0 calorimeter trigger (L0Calo), the L0 muon trigger (L0Muon) and the central trigger processor (CTP) sub-systems. In the Phase 1 upgrade, the current cluster and jet processors will be replaced by the new calorimeter feature extraction trigger processors (FEX): eFEX for e/γ and τ, jFEX for single jets, and gFEX for larger-radius (or multi-jet) triggers and the calculation of global quantities such as E_T^miss and H_T. In Phase 2, the FEX systems will be complemented by a new forward FEX (fFEX) to reconstruct forward electrons (jets) in the region 2.5 (3.2) < |η| < 4.9, thus matching the pseudorapidity coverage of the new tracker system. The new L0Muon sub-system will use the upgraded RPC and TGC sector logic and the NSW trigger processors for the reconstruction of muon candidates.
In addition, the MDT information will be used in new dedicated processors to improve the robustness and efficiency of the muon trigger, as well as its p_T resolution and selectivity. The Global Trigger will replace and extend the Run 2 and Phase 1 topological processor (L1Topo) by accessing full-granularity calorimeter information to refine the trigger objects computed by L0Calo, perform offline-like algorithms and calculate event-level quantities before applying topological selections. The final trigger decision is made by the CTP, which can apply flexible prescales and vetoes to the trigger items. The CTP also drives the trigger, timing and control (TTC) network to start the readout process of the detectors.
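The quoted latencies translate directly into the depth of the on-detector pipelines that must hold data while the trigger decision is formed: at the 40 MHz bunch-crossing rate, a 10 µs (35 µs) latency corresponds to buffering 400 (1400) crossings. Straightforward arithmetic from the numbers in the text:

```python
# On-detector buffering implied by the trigger latencies: data must be held
# in pipelines until the trigger decision arrives. Uses only the bunch
# crossing rate and the latencies quoted in the text.
bx_rate = 40e6          # Hz, LHC bunch-crossing rate
l0_latency = 10e-6      # s, L0 latency (baseline and evolved schemes)
l1_latency = 35e-6      # s, L1 latency (evolved two-level scheme)
print(f"L0 pipeline depth: {round(bx_rate * l0_latency)} bunch crossings")
print(f"L1 pipeline depth: {round(bx_rate * l1_latency)} bunch crossings")
```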
The result of the L0 trigger decision is transmitted to all detectors and trigger processors, upon which the resulting detector and trigger data, respectively, are transmitted to the DAQ system through the so-called readout and dataflow sub-systems. Both sub-systems are based on commodity PC servers and standard networking infrastructure. The readout sub-system includes the front-end link exchange (FELIX) component which implements the interfaces to the detector-specific electronics via custom, point-to-point serial links, e.g., the CERN Versatile Link+ and lpGBT protocol. FELIX also acts as an interface to the rest of the DAQ system via custom input/output (PCIe with Xilinx FPGA) cards that convert the detector front-end protocol data into standard packets for use in a commodity multi-gigabit network.
The upgraded event filter (EF) system provides high-level, software-based trigger functionality and consists of a CPU-based processing farm complemented by hardware-based tracking co-processors for the trigger (HTT). The EF system refines the trigger objects in order to reduce the maximum output event rate. The HTT includes both regional (rHTT) and global (gHTT) track reconstruction capabilities. The EF trigger decision enables the transfer of the data of selected events from the DAQ system to permanent storage.
The HTT baseline design is based on associative memory (AM) ASICs for pattern recognition and FPGAs for track reconstruction and fitting. This choice is motivated by the extensive experience with this technology within ATLAS (i.e., the ongoing integration of the current fast tracker, FTK, into the HLT trigger) and the capability to evolve the HTT system for use in the hardware-based L1 trigger.
The baseline decision may be reconsidered in case of evolution or a major technological breakthrough of other technologies, such as commodity CPU-based servers, systems based on accelerators (e.g., GPGPUs) or future architectures based on devices integrating machine learning capabilities.
The most cost-effective computing platform for the EF software is the commodity PC server, with a trend towards systems hosting multi-core CPUs with increasing core counts. Another suitable option is heterogeneous hardware architectures incorporating GPGPUs or FPGAs. The upgraded EF software should therefore allow both parallel algorithm execution at the event level (i.e., over multiple events) and exploitation of internal parallelism (intra-event, i.e., concurrent processing of so-called regions of interest, or intra-algorithm). The optimal degree of parallelism will be found by balancing the potential throughput benefits against the effort needed to modify and maintain the code. It is envisaged to establish interfaces that will allow the EF to operate as a heterogeneous system, possibly containing different classes of hardware, i.e., GPGPUs and commodity servers.

Summary
The current status and plans of the ATLAS experiment for the future HL-LHC proton-proton physics data taking were presented. This upgrade programme will be refined and evolved in the coming years.