Performance of ALICE detector and electronics under first beam conditions

ALICE (A Large Ion Collider Experiment) is a general-purpose heavy-ion detector at the CERN LHC addressing the physics of strongly interacting matter and the quark-gluon plasma in nucleus-nucleus collisions. ALICE has been recording physics data since the first proton-proton collisions at the LHC, both as a reference for the heavy-ion programme and to address physics topics for which it is complementary to the other LHC detectors. This paper provides a concise description of the ALICE experiment, with the emphasis entirely on instrumentation and electronics, as it has been operating from the beginning of the LHC 7 TeV p-p collision programme (March 2010) to date. The performance, stability and reliability of selected detectors and central systems are discussed, and the typical proton-proton data taking experience is described. Selected topics related to timing and synchronization of systems, luminosity measurements and SEU effects are finally presented.


Introduction
ALICE (A Large Ion Collider Experiment) [1][2][3] is a general-purpose heavy-ion detector running at the CERN LHC [4] that focuses on QCD, the strong-interaction sector of the Standard Model. It is designed to address the physics of strongly interacting matter and the quark-gluon plasma in nucleus-nucleus collisions. The experimental programme also includes recording proton-proton collision data, to collect reference measurements for the heavy-ion programme and to address topics for which ALICE is complementary to the other LHC experiments.
The ALICE detectors were designed to cope with multiplicities of up to 8000 charged particles per rapidity unit [5]. More recent studies place the expected multiplicity density between 1200 and 2500 charged tracks per unit of pseudo-rapidity [6]. The rate of heavy-ion (Pb-Pb) collisions at the LHC nominal luminosity of 10^27 cm^-2 s^-1 will be about 8000 per second. Very high granularity detectors are required to handle the extreme track multiplicity, while the low interaction rate enables the use of high-granularity but relatively slow tracking detectors, such as a time projection chamber and silicon drift detectors. High momentum resolution and excellent particle identification (PID) capabilities are also crucial requirements for the experiment. As a consequence of the high-granularity tracking, the expected event size is about 90 MB for Pb-Pb collisions. This demands a DAQ system with challenging data bandwidth requirements, from the detector front-ends to the event building computing farms and on to permanent data storage.
The ALICE apparatus is shown in figure 1. It consists of a central barrel part and a forward muon spectrometer. The central barrel detectors are installed inside a large solenoid magnet and cover polar angles from 45° to 135°. The Inner Tracking System (ITS) comprises six layers of silicon pixel (SPD), drift (SDD) and strip (SSD) detectors. They provide vertex reconstruction, tracking and particle identification by dE/dx. The large cylindrical Time Projection Chamber (TPC) is the core tracking detector. The Time Of Flight (TOF) detector identifies hadrons at intermediate transverse momenta, and an array of Ring Imaging Cherenkov detectors (HMPID) extends the hadron identification to high p_T. Electron identification is provided by the Transition Radiation Detector (TRD). Two electromagnetic calorimeters (PHOS and EMCal) are located inside the solenoid. All detectors except HMPID, PHOS and EMCal have full azimuthal coverage. The forward muon arm consists of several passive absorbers, a large dipole magnet (3 Tm field integral), ten planes of muon tracking stations and four planes of triggering chambers. Several smaller detectors (PMD, FMD, ZDC, V0, T0) for global event characterization and triggering are located at small angles. An array of scintillators on top of the solenoid magnet (ACORDE) is used to trigger on cosmic rays.

Custom integrated circuits
Several custom ASICs were developed to fulfill the requirements of the ALICE detectors. Table 1 lists the ASICs at the core of the front-end electronics of a selection of ALICE detectors; this partial summary accounts for almost 300000 chips. For the Inner Tracking System, hybrids based on Al-polyimide flex circuits are employed to interconnect the sensor modules and the readout ASICs. The radiation levels expected in the Inner Tracking System for the 10-year running scenario range from 2200 Gy and 8·10^11 cm^-2 (1 MeV n_eq) down to 26 Gy and 4.1·10^11 cm^-2 (1 MeV n_eq). These levels required the use of radiation hardening by design techniques for the ASICs used in the silicon trackers: enclosed layout transistors were systematically used and triple modular redundancy was employed in the digital blocks.
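As an illustration of the triple modular redundancy mentioned above, the protection can be sketched as a bitwise majority vote over three redundant copies of a register (a generic illustration, not the actual ASIC logic):

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority vote over three redundant register copies.

    A bit upset in any single copy is out-voted by the other two,
    which is how triple modular redundancy masks single event upsets.
    """
    return (a & b) | (b & c) | (a & c)

# One copy suffers a single-bit flip; the voted output is still correct.
golden = 0b1011_0110
corrupted = golden ^ 0b0000_0100   # SEU flips one bit in copy b
assert tmr_vote(golden, corrupted, golden) == golden
```

The voter masks any single-copy error but not simultaneous upsets of the same bit in two copies, which is why it is effective against the rare, uncorrelated flips that SEUs produce.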
The detectors farther from the interaction point than the ITS are subjected to lower levels of radiation, <2 Gy and 2.4·10^11 cm^-2 (1 MeV n_eq). The use of standard CMOS technologies for their on-detector integrated circuits was therefore allowed. Moreover, commercial FPGAs are employed in some of these systems, with a main SRAM-based device and a secondary PROM-based one that ensures the in-system configuration of the primary one.

Table 2 details the operating status of the detectors of the Inner Tracking System. The Silicon Pixel Detector is characterized by a binary readout, with each hit producing a single data bit. The Silicon Drift and Silicon Strip Detectors have front-end circuits capable of providing digitized amplitude information. The fraction of channels active in the readout is above 90% for the drift and strip detectors, while it is below 80% for the pixel detector. The fraction of well performing channels is above 98% in the active modules for all three detectors. The active fraction of the pixel detector is limited by the performance of its evaporative cooling system: reduced cooling capability is observed on some cooling lines, where the flow of the C4F10 cooling fluid is insufficient to keep the modules that they serve cold. Ongoing detailed investigations point to a combination of causes. Evidence was found of clogging of the micro-porous filters of the cooling lines; this probably adds to the difficulty of controlling the thermodynamic state of the cooling fluid in the long pipes from the cooling plant to the detector. An electrical issue on the interconnections of the hybrid affects only one module out of the total of 120.

Status of detectors and central systems
Biasing and front-end electronics issues require 16 of the 260 modules of the Silicon Drift Detector to be kept off. A further 4 modules are malfunctioning due to broken electrical connections. The Silicon Strip Detector is operated with 67 modules (6 half-ladders) off because of electrical issues in the circuits controlling and reading out the half-ladders. Another 74 individual modules are kept out of the readout because of configuration issues or excessive electronic noise.

Table 3 summarizes the operating status of a selection of detectors of the central barrel and of the muon tracker. The fraction of active channels in the TPC is 99.7% of the total of 557568. Of all channels, 768 are off because of front-end circuit damage induced by high-voltage trips, 362 are off because of broken ASICs and 576 are kept off because of excessive noise due to their position in the detector. The TRD is partially installed to date, with 7 modules mounted out of the 18 foreseen in the design; three more modules are planned to be installed during the break in LHC running over December 2010 and January 2011. On the installed modules, 91.6% of channels are active: high-voltage issues account for 6.2% of the missing fraction and the remaining 2.2% is related to front-end electronics issues. 95% of the TOF detector channels are read out. The missing fraction is connected with TDC boards that need to be replaced and with issues in the cooling of the crates; access to the instrumentation for the repairs is not possible during 2010 and 2011. HMPID operates with 85% of its channels active in the readout; the lack of coverage is due to broken Cherenkov radiators, and only 0.2% of the HMPID front-end channels are noisy or dead. The Muon Tracker is operated with an active channel fraction of 96% of the total of 1.08·10^6; issues with low-voltage connections causing biasing instabilities are responsible for the missing part.
The ALICE Central Trigger, DAQ and High Level Trigger (HLT) are interconnected as shown in figure 2. The trigger decision and messages are propagated to the detector front-end readout (FERO) electronics via the TTC system [7] and unified trigger interface boards (LTUs), identical for all detectors. Data flow from the FEROs to the front-end computers of the DAQ system over a custom-developed data link (DDL), which provides ALICE with the required large bandwidth between the detectors and the DAQ. Recorded data are first stored on a transient storage system in the ALICE control rooms and then transferred to the CERN Computing Center for permanent storage on the Grid. In preparation for the runs with heavy ions, tests of the DAQ data throughput are performed periodically. A throughput of 4.5 GB/s from the DAQ computers to the transient storage is regularly achieved; throughput from the DAQ to permanent storage reaches 2.5 GB/s, safely above the required 1.25 GB/s. A redundant path relays data from the FEROs to the HLT computing farm, so that the HLT is fully independent of the Central Trigger and the DAQ. The HLT decision is fed back into the DAQ over DDL links. The HLT front-end and infrastructure nodes are completely installed (∼1000 CPUs); of the foreseen 153 computing nodes, 53 are installed to date. This configuration provides ALICE with full processing capability for p-p events. The heavy-ion processing power is still partial, but sufficient for the runs at lower luminosity foreseen during the first heavy-ion LHC period.

Data taking with p-p collisions
ALICE has been recording p-p collisions since the very first ones produced by the LHC [8]. The main goal of the 7 TeV p-p run of 2010 is to efficiently collect a set of ∼1·10^9 minimum bias events. For this reference sample it is desirable to have on average one interaction during the full drift time of the Time Projection Chamber. This can be achieved with an interaction rate of the order of 10 kHz, considering that the drift time in the TPC is 94 µs. Under this condition the fraction of TPC events containing more than one interaction is of the order of 40%. In order to obtain p-p interactions at a rate close to the desired 10 kHz, the ALICE Collaboration requested that the LHC be configured with displaced beams at point 2. The requested configuration (β* = 3.5 m, Δx = 266 µm, 3.8σ) has been in place since July 1st, resulting in a luminosity L ≃ 2.5·10^29 cm^-2 s^-1 with a filling scheme of 14 colliding bunches. The reduction of luminosity in ALICE is clearly visible in the delivered integrated luminosity plots of the LHC programme coordination.
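The ∼40% figure follows from Poisson statistics: with interaction rate R and drift time T, the mean number of interactions per drift window is µ = R·T ≈ 0.94, and among events with at least one interaction the fraction containing more than one is (1 − e^−µ − µe^−µ)/(1 − e^−µ). A quick numerical check:

```python
import math

def pileup_fraction(rate_hz: float, window_s: float) -> float:
    """Fraction of events with at least one interaction that contain
    more than one, assuming Poisson-distributed interactions."""
    mu = rate_hz * window_s            # mean interactions per drift window
    p0 = math.exp(-mu)                 # P(0 interactions)
    p1 = mu * p0                       # P(exactly 1 interaction)
    return (1.0 - p0 - p1) / (1.0 - p0)

# 10 kHz interaction rate, 94 us TPC drift time -> mu ~ 0.94
frac = pileup_fraction(10e3, 94e-6)
print(f"pile-up fraction: {frac:.0%}")   # close to the ~40% quoted above
```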
The p-p trigger configuration includes three main components: an interaction (minimum bias) trigger requiring at least one particle in the V0 or SPD detectors; a muon trigger selecting events with at least one high-p_T muon in the forward muon arm; and a high multiplicity trigger, also based on the SPD trigger capability, selecting events with a large number of tracks. DAQ bandwidth is allocated by time sharing: every two minutes, minimum bias triggers are enabled for 59 s, rare triggers for 60 s and calibration triggers for 1 s. The High Level Trigger selection is running and events are tagged, but no events are rejected, since the DAQ capacity is sufficient to record all the triggered events. Figure 3 shows the accumulation of recorded events since the beginning of data taking with 7 TeV p-p collisions. The average ratio between data recording time and stable beam time has been 79%, and the typical duration of a run is 1 h. A typical 3.5+3.5 TeV p-p run recorded in August showed a rate to tape of 530 Hz (420 Hz minimum bias, 60 Hz high-p_T muon events and 34 Hz high multiplicity events, plus calibration triggers). The average event size was ∼980 kB and data were stored at 530 MB/s.
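As a back-of-envelope consistency check, the quoted trigger rate and average event size reproduce the quoted storage rate to within a few percent:

```python
# Consistency check of the quoted data rates (numbers from the text).
rate_to_tape_hz = 530      # total trigger rate to tape
avg_event_kb = 980         # average event size, kB

throughput_mb_s = rate_to_tape_hz * avg_event_kb / 1000.0
print(f"~{throughput_mb_s:.0f} MB/s")   # ~520 MB/s, matching the ~530 MB/s quoted
```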

Synchronization of electronics with collisions and measurements of luminosity
The fine tuning of the phase of the clock signals of several detectors was completed shortly after synchronous and stable collisions were delivered by the LHC. Dedicated measurements and calibration runs were performed for this purpose. Two examples (SPD, TOF) are detailed in the following.
The SPD is made of 120 electronically independent readout modules. By combining precise measurements of the fiber lengths (with Optical Time Domain Reflectometry) and of the clock transmitter phases, the clock signals at the inputs of the modules have been time aligned to within ±1 ns of relative skew. All these clocks are derived from a single source (LTU) and can be shifted in time using the fine delay tuning circuit of the TTCrx chip. Finally, dedicated runs with colliding beams allowed the optimal time relationship between the SPD clocks and the collisions to be found and set.
The clock signals of the TOF detector are distributed over fibers to 72 crates inside the magnet. TOF uses a dedicated low-jitter clock in addition to the one received via the TTC. The skew due to all propagation delays was minimized, achieving a relative alignment of all front-ends to within ±1 ns; the fine delay tuning functionality of the TTCrx chips was used in this case as well. After calibration with cosmic rays, TOF achieved a time resolution of 88 ps. With such time accuracy, TOF was the first system to detect the variations of the LHC clock phase with respect to the timing of collisions. This phenomenon has since been investigated in depth and traced to the change with temperature of the refractive index of the several-km-long fibers distributing the LHC clock signals to the experiments. Seasonal drifts and day-night variations (∼70 ps peak to peak) due to ambient temperature cycles are tracked by TOF using a method based on statistical analysis of collision data.
A system based on the beam pickup (BPTX) devices provides bunch-by-bunch monitoring of each beam in ALICE. Two BPTX sensors are installed on each LHC beam line, ∼145 m from the interaction point. Their electrical pulses are detected by two BPIM electronic boards, developed by the LHCb collaboration and tailored to the ALICE requirements with minor modifications. This system is used for regular monitoring of the timing of the bunches with respect to the LHC clock. The average temperature derivative of the timing of collisions with respect to the clock was ∼0.45 ns·K^-1 over a period of two months (March 3 to May 9), and the same temperature dependence profile was measured for both beams. The BPTX/BPIM system in ALICE is also used for checking the filling scheme, detecting ghost bunches and tuning the average z position of the interaction point (cogging). The BPTX pulses can also be used in the first level trigger decision, signaling the passage of filled bunches on one or both beams.
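The measured drift is consistent with the expected temperature sensitivity of long optical fibers: since the propagation delay is t = nL/c, its temperature derivative is roughly (L/c)·dn/dT, neglecting thermal expansion. An order-of-magnitude sketch, with the fiber length and the silica dn/dT assumed rather than taken from the text:

```python
C = 299_792_458.0       # speed of light in vacuum, m/s

def fiber_delay_drift_ns_per_k(length_m: float, dn_dt: float) -> float:
    """Temperature derivative of the propagation delay t = n*L/c,
    neglecting thermal expansion of the fiber itself."""
    return length_m / C * dn_dt * 1e9

# ~12 km of fiber with dn/dT ~ 1e-5 /K (typical for silica; assumed values)
print(f"{fiber_delay_drift_ns_per_k(12_000, 1e-5):.2f} ns/K")
# -> ~0.4 ns/K, the same order as the measured ~0.45 ns/K
```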
The monitoring of the luminosity has so far been done by measuring the average rate of the signal produced by the V0 detector and used in the minimum bias interaction trigger. This method allows the measurement of the average luminosity delivered to ALICE. Interaction rates have been between 6 kHz and 10 kHz since the displacement of the beams. A different approach, based on data recorded by the Central Trigger, is being deployed. Interaction flags are generated on a bunch-by-bunch basis by the CTP, based on the first level trigger inputs. For each orbit, a complete Interaction Record of the bunches with asserted interaction flags is produced and recorded. Bunch-by-bunch specific luminosities are obtained from these data.
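Pile-up-corrected per-bunch rates can be derived from such interaction records. Since the interaction flag only records whether at least one interaction occurred in a crossing, the mean number of interactions per crossing follows from inverting the Poisson zero-probability; a minimal sketch (the counts used below are hypothetical):

```python
import math

def mu_per_bunch(flagged_orbits: int, total_orbits: int) -> float:
    """Mean interactions per crossing for one bunch, correcting for pile-up.

    The interaction flag is binary (>=1 interaction per crossing), so for
    Poisson pile-up P(flag) = 1 - exp(-mu), hence mu = -ln(1 - P(flag)).
    """
    p = flagged_orbits / total_orbits
    return -math.log(1.0 - p)

# Hypothetical numbers: a bunch flagged in 60 out of every 1000 orbits
mu = mu_per_bunch(60, 1000)
# the specific luminosity then follows as mu * f_orbit / sigma_inel
```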
The size of the luminous region is evaluated by an algorithm applied to the data after they are on disk, in order to achieve the finest spatial resolution. During each run, a computationally intensive track fitting algorithm is run cyclically (every 4 min) on the ITS and TPC data. The events of the sample are grouped by their number of tracks. For each event in each group, the coordinates of the primary vertex are reconstructed. The distributions of the transverse (x and y) vertex coordinates obtained for each group of events are fitted. The σ_x and σ_y of the fits are monotonically decreasing functions of the number of tracks, as shown in figure 4. The asymptotic values of the σ_x and σ_y curves are the measurements of the transverse dimensions of the luminous volume. With this method the finite resolution of the vertex reconstruction is unfolded and the real dimensions of the region of collisions are obtained. Its longitudinal (z) dimension, being much larger than the finite resolution of the vertexing, is obtained directly from the distribution of the z coordinates of the reconstructed vertices.

Table 4. Single Event Upset events measured in the pedestal memories of the TPC ALTRO chips. The leftmost column is an estimate of the integrated luminosity during the measurements.
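The unfolding of the luminous region size relies on the measured vertex spread being the quadrature sum of the true luminous size and a resolution term that shrinks with track multiplicity. A minimal sketch of such a fit model (the 1/√N scaling of the resolution term and the numerical values are illustrative assumptions, not taken from the text):

```python
import math

def width_model(n_tracks: int, sigma_lum: float, a: float) -> float:
    """Measured transverse vertex spread vs. track multiplicity:
    quadrature sum of the true luminous size and a vertexing
    resolution term assumed to scale like 1/sqrt(N)."""
    return math.sqrt(sigma_lum**2 + (a / math.sqrt(n_tracks))**2)

# As N grows the resolution term vanishes and the measured width
# approaches the true luminous size (the asymptote of the curve).
sigma_lum, a = 0.004, 0.02          # cm; illustrative values only
widths = [width_model(n, sigma_lum, a) for n in (5, 20, 100, 500)]
# widths decrease monotonically toward sigma_lum
```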

First direct measurements of Single Event Upsets in the Time Projection Chamber
Direct investigations of Single Event Upsets have recently been started by the group operating the TPC detector. The measurements were made by writing, prior to LHC physics fills, fixed bit patterns to the pedestal memories of all ALTRO chips of the TPC. At the end of the fills, the memories were read back and scanned for changed cell values. Table 4 shows the first preliminary outcomes of these studies; the first column gives estimates of the integrated luminosity delivered by the LHC during the measurements. In two independent sessions, a few tens of bit flips were observed in the 5.6·10^9 bits read back. A cross-check measurement during a period with no beam in the machine showed no bit flips in the ALTRO memories. SEU events also appear to be more frequent in the ALTROs located in regions subjected to higher particle fluence.
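The scan described above amounts to an XOR comparison of the written pattern against the read-back memory contents; a minimal sketch (pattern, sizes and flip positions are hypothetical):

```python
def count_bit_flips(written: bytes, read_back: bytes) -> int:
    """Count bits that differ between the pattern written before the
    fill and the memory contents read back afterwards."""
    assert len(written) == len(read_back)
    return sum(bin(w ^ r).count("1") for w, r in zip(written, read_back))

# Hypothetical read-back with two single-bit upsets
pattern = bytes([0xAA] * 1024)        # fixed test pattern
readback = bytearray(pattern)
readback[17] ^= 0x04                  # SEU: one bit flipped
readback[900] ^= 0x40                 # SEU: another bit flipped
assert count_bit_flips(pattern, bytes(readback)) == 2
```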