Energy Reconstruction Performance in the ATLAS Tile Calorimeter Operating at High Event Rate Conditions Using LHC Collision Data

The discovery of the particles that shape our universe pushes the scientific community to build increasingly sophisticated equipment. Particle accelerators are among these complex machines, putting known particle beams on a collision course at speeds close to that of light. The Large Hadron Collider (LHC) is the world's largest and most powerful beam collider, operating at a collision energy of 13 TeV with a 25 ns bunch-crossing interval. ATLAS is the largest LHC experiment, comprising several subsystems whose data are combined to reconstruct each collision. When collisions occur, subproducts are produced and absorbed by the calorimeter system, which measures their energy. Typically, a high-energy calorimeter is highly segmented, comprising thousands of dedicated readout channels. The present work evaluates the performance of two cell energy reconstruction algorithms that operate in the ATLAS Tile Calorimeter (TileCal): the baseline algorithm OF2 (Optimal Filter) and COF (Constrained Optimal Filter), which was recently proposed to deal with the signal superposition (pile-up) that is increasingly present in LHC operation. In order to evaluate the energy estimation efficiency, real data acquired during nominal LHC operation at high-luminosity conditions were used. The statistics of the energy estimation are employed to compare the performance achieved by each method. The results show that the COF method outperforms the OF2 method, pointing out the benefits of using this alternative estimation method.


I. INTRODUCTION
Since the most remote times, humanity has searched for the origin and composition of the universe. After the discovery of the electron, a large number of subatomic particles have been identified and their properties exhaustively explored. High-energy particle colliders are machines used to understand the fundamental composition of the universe [1].
The largest particle collider currently in operation is the Large Hadron Collider (LHC), built at CERN, comprising a 27-kilometer ring approximately 100 meters underground. In the LHC, proton beams are accelerated to nearly the speed of light and put on a frontal collision course every 25 ns, reaching a maximum collision energy of 13 TeV. The higher the collision rate and the luminosity (number of particle interactions per cm² per second) [1,2], the greater the likelihood of producing rare particles. Therefore, the LHC has been gradually increasing its luminosity, reaching unprecedented conditions and exploring a large and ambitious physics program. ATLAS [3] is the largest LHC experiment and plays a fundamental role in particle detection research, as it covers a broad physics program, including the Higgs boson discovery and characterization [4] and searches for physics beyond the Standard Model [1]. The ATLAS experiment is composed of several subdetectors: a particle tracking system, which measures the momentum of charged particles; the calorimeter, a highly segmented energy measurement system; and a muon chamber, which detects muons and measures their momentum [5]. The ATLAS calorimeter system comprises two main sections: the electromagnetic section, responsible for measuring the energy of electromagnetic particles (such as electrons and photons), and the hadronic section, responsible for measuring the energy of hadronic particles (such as protons and neutrons). The Tile Calorimeter (TileCal) is the main ATLAS hadronic calorimeter [5,6] and provides accurate energy deposition measurements [7].
In TileCal, the fast electronic pulse is conditioned by a shaper circuit in such a way that the processed pulse amplitude is proportional to the deposited energy [8]. The resulting pulse width is approximately 150 ns. Analog-to-digital conversion (ADC) is performed at the particle collision rate (every 25 ns), and the time samples are used to estimate the pulse parameters, such as amplitude, phase and pedestal (signal baseline). Proper energy estimation is very important for efficient particle identification and characterization [9].
In modern calorimeters, the energy measurement is typically obtained by computing the amplitude of the detector response signal. In the electromagnetic calorimeter of the CMS experiment, the baseline method operates on a unipolar pulse and computes the time at which the pulse reaches its maximum value [10]. In the ALICE experiment, the energy estimation is performed by computing only the amplitude of the received pulse [11]. In the ATLAS electromagnetic calorimeter, both the amplitude and the temporal position of the digitized signal samples are computed [12].
Energy estimation in TileCal is performed by an optimal filtering method (OF2), which is designed to minimize the electronic noise subject to a set of constraints [13,14]. Its approach is similar to the method described in [12]. Typically, the electronic noise follows a Gaussian distribution [15] and, in this case, OF2 operates close to its optimal condition. However, since the readout window comprises 150 ns, and due to the high-luminosity and high event rate operation [3], more than one signal may appear within a given TileCal readout window, causing the signal pile-up effect. Such signal superposition introduces a nonlinear component to the noise, which can no longer be modeled by a Gaussian distribution. Therefore, the OF2 performance is degraded as it operates in suboptimal conditions [16].
©Copyright 2020 CERN for the benefit of the ATLAS Collaboration. CC-BY-4.0 license.
The pile-up effect is a well-known problem in energy estimation. The paper in [17] discusses the pile-up amplitude distribution and its effects on the timing electronic circuit responsible for event counting. In [18], a method for pile-up corrections based on the measurement of the experiment dead time was discussed. For high-energy physics experiments, an energy estimation method is proposed in [19]; it tries to reduce the pile-up contribution by subtracting a delayed contribution from the previously received sample. In [20], the proposed method uses a combination of pulse shape discrimination and partial dynamic integration to detect and remove peak pile-up (pile-up where two pulses start so close together that they cannot be separated) and to correct for tail pile-up (pile-up where the second pulse starts in the tail of the first pulse).
An alternative method, called the Constrained Optimal Filter (COF) [21,22], was introduced to deal with signal pile-up conditions. In the COF approach, the superimposed signals are not treated as noise; instead, a linear signal deconvolution is computed in order to recover the signals present within the readout window. As a result, only the signal of interest (and its energy) from a particular collision is used for reconstruction.
Recently, a method based on sparse representation was proposed to recover a set of superimposed signals considering unipolar and bipolar pulses [23]. Despite being a more sophisticated and robust technique, it has not been validated yet for the TileCal signals.
The energy estimation accuracy at the channel level is crucial for efficient reconstruction of each collision. The particle reconstruction procedure strongly relies on the energy information provided by the calorimeter systems. Therefore, in this work, a performance comparison between the OF2 and COF algorithms using real LHC collision data is presented. The data were acquired during high-luminosity operation, where signal pile-up is likely to affect the TileCal channels. The parameters of the estimation error distribution are used for both qualitative and quantitative analyses.
The text is organized as follows: Section II describes the TileCal and both energy estimation methods (OF2 and COF). Section III presents both the data used and the evaluation method employed to compare the two algorithms, along with the obtained results for a single partition of the TileCal. Conclusions are derived in Section IV.

II. THE TILECAL
The TileCal provides precise measurements of hadrons, jets and taus, contributes to the reconstruction of the missing transverse energy, and provides input signals to the ATLAS online particle selection system (trigger) [3]. It consists of three cylinders: one long barrel (LB), split into two readout partitions, LBA and LBC (LBs), and two extended barrels (EBs), EBA and EBC, covering the most central region |η| < 1.7¹ of ATLAS, as shown in Figure 1. In order to sample the particle energies, TileCal uses iron plates as the absorber material and plastic scintillating tiles as the active material. In this way, the particles produced in the collisions travel through the calorimeter, and the light produced in the scintillating tiles is proportional to the energy deposited by the particles in the absorber material. The light is then transmitted by wavelength-shifting fibers and feeds photomultiplier tubes (PMTs), which finally generate the analog pulses. Figure 2 shows the TileCal structure. Each TileCal central and extended barrel module is divided into 46 and 32 readout channels, respectively, resulting in almost 10,000 pulses to be processed.
The shaping circuit provides a well-defined pulse, which is approximately invariant channel by channel. The important pulse parameters (see Figure 3) are the amplitude, which is proportional to the energy; the phase deviation, associated with the delay between the collision time and the effective reading performed by the electronics; and the pedestal, which is the baseline added to the analog signal in order to avoid negative analog-to-digital conversion [13]. These three parameters, as well as the digitized time samples (represented by dots), are shown in Figure 3.

¹ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the z-axis along the beam pipe. The x-axis points from the IP to the centre of the LHC ring, and the y-axis points upwards. Cylindrical coordinates (r, φ) are used in the transverse plane, φ being the azimuthal angle around the z-axis. The pseudorapidity is defined in terms of the polar angle θ as η ≡ − ln tan(θ/2).

A. TileCal Energy Reconstruction Methods
The diagram of the energy estimation process is shown in Figure 4. The TileCal data acquisition system operates with seven samples of the received signal y(n) in order to estimate the signal amplitude A. The two methods to be compared in this work are described next.
1) Optimal Filter: OF2 is the current energy estimation method in TileCal [13,14] and is based on a weighted sum of the incoming time samples. To compute its coefficients, the OF2 method considers the received signal model y(n) as

y(n) = A h(n) + A τ ḣ(n) + ped + w(n),   (1)

where A is the pulse amplitude, τ is the phase deviation, ped is the baseline added to the signal and w(n) is the electronic noise, usually modeled by a Gaussian distribution. The signal h(n) is the normalized reference TileCal pulse shape (ḣ(n) is its time derivative), which can be measured through a dedicated calibration data taking. The amplitude to be translated into the estimated energy value is obtained as

Â = Σ_{n=1}^{N} c(n) y(n),

where N represents the number of time samples collected in a single readout window and c(n) are the OF2 coefficients, calculated through an optimization process using the Lagrange multipliers method, in order to minimize the noise contribution Σ_n Σ_j c(n) C(n,j) c(j) in the amplitude estimation. For this purpose, the following constraints may be imposed:

Σ_{n=1}^{N} c(n) h(n) = 1   and   Σ_{n=1}^{N} c(n) ḣ(n) = 0.

The parameters λ and κ are the Lagrange multipliers and C(n,j) corresponds to the noise covariance matrix elements. Since OF2 uses the noise covariance information to minimize the signal uncertainties, its precision is considerably degraded in signal pile-up conditions, as signal pile-up is not properly modelled by Eq. 1. In fact, in pile-up conditions, the use of the correct noise covariance matrix partially absorbs the noise statistics, improving the energy estimation efficiency [24]. Figure 5 illustrates the signal pile-up effect. The pulse in black corresponds to the signal of interest to be estimated, while the pulse in red comes from a time-shifted collision acquired within the same readout window, distorting the output pulse (in magenta).
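As an illustration, the constrained minimization described above can be sketched numerically: the OF2 weights follow from solving the linear (KKT) system built from the noise covariance and the two constraints. This is only a sketch under stated assumptions: the pulse shape and covariance below are illustrative placeholders (not the calibrated TileCal values), and the pedestal is assumed to have been subtracted beforehand.

```python
import numpy as np

def of2_weights(h, hdot, C):
    """Solve for OF2 weights c minimizing c^T C c subject to
    c.h = 1 (unit gain on the pulse) and c.hdot = 0 (phase insensitivity),
    via the Lagrange-multiplier (KKT) linear system."""
    N = len(h)
    K = np.zeros((N + 2, N + 2))
    K[:N, :N] = 2.0 * C                 # gradient of the quadratic objective
    K[:N, N], K[:N, N + 1] = h, hdot    # multiplier columns (lambda, kappa)
    K[N, :N], K[N + 1, :N] = h, hdot    # the two constraint rows
    rhs = np.zeros(N + 2)
    rhs[N] = 1.0                        # enforce c.h = 1
    return np.linalg.solve(K, rhs)[:N]

# Illustrative 7-sample pulse shape and its derivative (not the real TileCal shape)
h = np.array([0.0, 0.1, 0.6, 1.0, 0.6, 0.3, 0.1])
hdot = np.gradient(h)
C = np.eye(7)                           # white electronic noise assumption
c = of2_weights(h, hdot, C)

# Amplitude estimate for a noiseless, pedestal-subtracted pulse of amplitude 5:
# the unit-gain constraint guarantees A_hat recovers 5.0 exactly
y = 5.0 * h
A_hat = c @ y
```

In pile-up conditions the weighted sum still fires on the superimposed pulse, which is exactly the degradation the text describes: the out-of-time contribution is not in the model of Eq. 1.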
2) Constrained Optimal Filter: COF computes a linear transformation that recovers the amplitude of the superimposed signals for a given readout window, so that the central pulse becomes assigned to the collision of interest and can be decoupled and reconstructed.
In this context, the COF method assumes that the energy deposition from each collision is the input of a linear time-invariant system [25]. The received TileCal signal results from the convolution of the deposited amplitude sequence a(n) with the system impulse response h(n):

y(n) = Σ_{i=0}^{N} a(i) h(n − i) + w(n),

where i is the convolution auxiliary variable (0 ≤ i ≤ N) [25]. The parameter N depends on the electronic data acquisition system and on the reference pulse. In TileCal, the readout window comprises N = 7 samples. Here, the noise w(n) can be considered Gaussian, since the pile-up contribution is already included in the signal model and does not contribute to the noise term. Therefore, to obtain an estimation of the deposited energy, it is necessary to apply a deconvolution process to estimate the a(n) coefficients. In matrix form, the estimator can be written as

â = (H_p^T C^{-1} H_p)^{-1} H_p^T C^{-1} y,

where H_p is the N × p matrix whose columns are time-shifted versions of the reference pulse h(n), p (1 ≤ p ≤ N) is the number of collisions within a readout window and C is the noise covariance matrix. The algorithm performs two steps: first, considering p = N, the method achieves the best approximation for the deconvolution process, estimating N signals within the readout window. It is worth mentioning that if the C matrix can be neglected (white noise, C ≈ I), the energy estimation process for N signals becomes â = H_p^{-1} y.
Once the N amplitudes are estimated, the COF method compares all a(n) values with a predefined threshold (usually associated with the electronic noise variance) and selects the amplitudes above it. Then, H_p is rebuilt with only the selected components and the deconvolution is computed again, improving the estimation by considering only the components that likely carry information of interest rather than noise alone. Before computing the final amplitude estimate, the method subtracts the pedestal value, which is usually obtained through a calibration run and stored in a database.
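A minimal numerical sketch of the two-step procedure, under the white-noise simplification (C ≈ I): a full deconvolution over all N sample positions, followed by a refit restricted to the amplitudes above threshold. The pulse shape and threshold are illustrative assumptions, and the pedestal is assumed already subtracted.

```python
import numpy as np

def pulse_matrix(h):
    """N x N matrix whose column j is the reference pulse, peaked at the
    central sample, time-shifted so that its peak sits at sample j."""
    N = len(h)
    c0 = N // 2
    H = np.zeros((N, N))
    for j in range(N):
        for n in range(N):
            k = n - j + c0
            if 0 <= k < N:
                H[n, j] = h[k]
    return H

def cof_estimate(y, h, threshold):
    """Two-step COF deconvolution sketch (white noise, C ~ I)."""
    H = pulse_matrix(h)
    a_full = np.linalg.solve(H, y)               # step 1: p = N, a = H^-1 y
    sel = np.flatnonzero(a_full > threshold)     # keep amplitudes above threshold
    Hp = H[:, sel]                               # step 2: refit selected columns
    a_sel = np.linalg.solve(Hp.T @ Hp, Hp.T @ y) # least-squares refit
    a = np.zeros(len(y))
    a[sel] = a_sel
    return a

# Illustrative pulse (not the real TileCal shape): an in-time deposit of
# amplitude 10 at the central sample plus a pile-up deposit of 4 two samples later
h = np.array([0.0, 0.05, 0.5, 1.0, 0.5, 0.2, 0.05])
true_a = np.zeros(7)
true_a[3], true_a[5] = 10.0, 4.0
y = pulse_matrix(h) @ true_a
a_hat = cof_estimate(y, h, threshold=1.0)   # recovers the deposits at samples 3 and 5
```

In the noiseless case the first step already recovers the amplitudes exactly; with noise present, the second step matters because dropping the below-threshold columns keeps the refit from fitting noise fluctuations as spurious deposits.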

III. RESULTS
This section describes the data set used and the efficiency measurements applied for evaluating the performance of each energy reconstruction method.

A. Data set
Real data acquired during the 2018 LHC nominal operation were used. For these data, the average number of interactions per bunch crossing ⟨µ⟩ was approximately 90, increasing the probability of observing signal pile-up in the TileCal readout channels. These unprecedented conditions are expected for the next LHC data-taking period and future operations. Furthermore, the data used in this work correspond to the so-called ATLAS ZeroBias stream, where no particle pre-selection (trigger) is configured and the information comprises mainly noise (electronic noise plus pile-up noise).
The analysis was performed considering only the LBA partition of the TileCal (see Figure 1). However, it should be stressed that the other TileCal partitions presented similar behaviour.

B. Performance evaluation
The efficiency quantities used to evaluate the performance of the OF2 and COF methods were based on the statistics of the energy estimation distribution. Firstly, the energy estimation distributions were produced considering all readout channels belonging to the TileCal LBA partition (see Figure 6), which comprises approximately 2,800 readout channels of the LBA electronics. Each LBA channel has 15,756 entries, producing a dataset with a total of 4.5 × 10⁷ samples.
As the data comprise both electronic noise and pile-up noise, the sharper the energy estimation distribution, the better the resolution. As can be seen, the OF2 distribution shows a larger dispersion, producing a larger estimation error. Table I shows the mean and RMS values for both methods. The distribution peaks are close to zero, as expected from unbiased parameter estimation methods in the zero-bias condition. The differences in the distributions are due to the approaches used by the two algorithms. The OF2 method describes the noise as a zero-mean uncorrelated Gaussian multivariate function; thus, its efficiency is degraded as the signal pile-up effect introduces non-Gaussian components to the noise, which are not handled by the minimization procedure used in OF2. Since COF treats the pile-up signals as additional signals to be estimated, the estimated noise comprises only the usual electronic noise.

In order to provide a detailed view of the energy estimation performance, the percentage difference between the OF2 and COF RMS values, taken from the energy distribution, is computed individually for each channel:

σ_z = 100 × (σ_OF2 − σ_COF) / σ_OF2,

where σ_OF2 is the RMS related to the OF2 algorithm and σ_COF is the RMS associated with the COF algorithm. A positive value of σ_z indicates that the COF method presents a better accuracy (smaller estimation error) with respect to the OF2 method; negative values of σ_z indicate the opposite situation. Figure 7 shows the σ_z values for each readout channel in the TileCal LBA partition. The x-axis corresponds to the module number, the y-axis is the channel index and the z-axis is the σ_z value associated with a given module and channel pair. The hot points on the heatmap (σ_z > 0) show channels for which the COF algorithm produces a smaller RMS than the OF2 method. As can be noted, the great majority of the heatmap presents positive values of σ_z.
The heatmap also indicates that COF presents an average energy estimation improvement of around 25% with respect to OF2. It is worth mentioning that there are some non-instrumented or problematic readout channels; in these cases, the heatmap shows white points to avoid misinterpretation.

Fig. 7. Heatmap of the LBA partition (ATLAS Preliminary, Tile Calorimeter, √s = 13 TeV, ⟨µ⟩ ≈ 90, TileCal LBA). The white points correspond to non-instrumented or disabled channels (extracted from [26]).
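The per-channel figure of merit σ_z reduces to a one-line computation. A sketch with hypothetical RMS values is shown below; the normalization by the OF2 RMS is an assumption consistent with the quoted ~25% average improvement, and the numbers are made up for illustration, not measured TileCal values.

```python
import numpy as np

def sigma_z(rms_of2, rms_cof):
    """Percentage RMS improvement of COF over OF2, per channel.
    Positive values mean COF has the smaller estimation error."""
    rms_of2 = np.asarray(rms_of2, dtype=float)
    rms_cof = np.asarray(rms_cof, dtype=float)
    return 100.0 * (rms_of2 - rms_cof) / rms_of2

# Hypothetical per-channel RMS values, not measured TileCal numbers
z = sigma_z([4.0, 2.0], [3.0, 2.2])
# z[0] is +25% (COF better); z[1] is -10% (OF2 better)
```

Masking the non-instrumented or disabled channels before averaging (the white points in Fig. 7) keeps them from biasing the quoted mean improvement.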

IV. CONCLUSIONS
In the search for new physics, modern high event rate experiments, such as the ATLAS experiment at the LHC, face an unprecedented increase in the number of interactions per collision, pushing calorimeter design to the limit in order to deal with the immense amount of data produced and the resulting pile-up effect on signal reconstruction. For the ATLAS Tile Calorimeter, this work presented a performance comparison between the OF2 and COF methods using a high pile-up experimental dataset. It was shown that the current baseline method (OF2) cannot handle the high pile-up scenario well, while the recently proposed COF method was designed to deal with such conditions by incorporating a signal pile-up model in its design. The COF algorithm considerably improved the energy estimation efficiency with respect to the current OF2 approach, becoming a promising alternative for the next high-luminosity LHC operation period.