The ATLAS Beam Conditions Monitor

Beam conditions and the potential detector damage resulting from their anomalies have pushed the LHC experiments to build their own beam monitoring devices. The ATLAS Beam Conditions Monitor (BCM) consists of two stations (forward and backward) of detectors, each with four modules. The sensors are required to tolerate doses up to 500 kGy and in excess of 10¹⁵ charged particles per cm² over the lifetime of the experiment. Each module includes two diamond sensors read out in parallel. The stations are located symmetrically around the interaction point, positioning the diamond sensors at z = ±184 cm and r = 55 mm (a pseudo-rapidity of about 4.2). Equipped with fast electronics (2 ns rise time), these stations measure time-of-flight and pulse height to distinguish events resulting from lost beam particles from those normally occurring in proton-proton interactions. The BCM also provides a measurement of bunch-by-bunch luminosities in ATLAS by counting in-time and out-of-time collisions. Eleven detector modules have been fully assembled and tested. Tests performed range from characterisation of diamond sensors to full module tests with electron sources and in proton testbeams. Testbeam results from the CERN SPS show a module median-signal-to-noise ratio of 11:1 for minimum ionising particles incident at a 45-degree angle. The best eight modules were installed on the ATLAS pixel support frame that was inserted into ATLAS in the summer of 2007. This paper describes the full BCM detector system along with simulation studies being used to develop the logic in the back-end FPGA coincidence hardware.


Introduction
One of the worst-case scenarios in Large Hadron Collider (LHC) operation arises when several proton bunches hit the collimators designed to protect the detectors. While the accumulated radiation doses from such unlikely accidents correspond to those acquired during several days of normal operation, and as such make no major contribution to the integrated dose, the enormous instantaneous rate could cause detector damage. The ATLAS Beam Conditions Monitor (BCM) is designed to detect such incidents and trigger a beam abort before serious damage occurs. Beam-gas interactions are a further concern, especially in the early days of LHC running.
Common to both of these backgrounds is that they initiate charged-particle showers originating well upstream or downstream of the ATLAS interaction point. Given two detector stations placed symmetrically about the interaction point at ±z, showering particles hit the BCM stations with a time difference Δt = 2z/c. At the LHC design luminosity, collisions add coincident signals (Δt = 0) in these detectors at every bunch crossing (25 ns). To optimally distinguish these two classes of events the BCM stations should be located ~3.8 m apart at z = ±1.9 m, resulting in a Δt of 12.5 ns (figure 1). The BCM also provides luminosity measurements [1] complementary to those coming from LUCID [2], the main ATLAS luminosity monitor. Adding the BCM information to the ATLAS trigger will allow corrections for bunch-to-bunch luminosity variation. Finally, during the commissioning of the LHC collider, when tracking detectors are switched off, the BCM is likely to be the first detector to report proton collisions in ATLAS.
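The time-of-flight argument above can be sketched numerically. This is a toy illustration only; the single-shower model and function names are ours, not part of the BCM system:

```python
# Toy model of the BCM time-of-flight discrimination. Stations sit at
# z = +/-1.838 m; collision debris flies outward from z = 0, while a
# beam-background shower sweeps through both stations in one direction.

C = 0.299792458    # speed of light in m/ns
Z_STATION = 1.838  # |z| of each BCM station in metres

def arrival_times_collision():
    """Debris from the interaction point reaches both stations together."""
    return (Z_STATION / C, Z_STATION / C)

def arrival_times_background(z_origin_m):
    """A shower originating upstream (z_origin_m < -Z_STATION), moving in +z."""
    return ((-Z_STATION - z_origin_m) / C, (Z_STATION - z_origin_m) / C)

def delta_t(times):
    t_a, t_b = times
    return abs(t_b - t_a)

dt_collision = delta_t(arrival_times_collision())         # 0 ns
dt_background = delta_t(arrival_times_background(-50.0))  # 2z/c, ~12.3 ns
```

With the stations at |z| = 1.838 m this reproduces the ~12.3 ns separation between the background and collision signatures discussed in the text.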

Beam conditions at the LHC
The BCM is suspended from the ATLAS Beam Pipe Support Structure (BPSS) that also supports the pixel detector. This places the BCM sensors at a radius of r ~ 55 mm, about 20 mm outside the beam pipe, at |z| = 183.8 cm upstream and downstream of the interaction point, corresponding to a pseudo-rapidity of η ~ 4.2. The resulting z gives an almost ideal Δt of 12.3 ns. An estimate [3] predicts about one particle per cm² of sensor from a single 7 TeV proton hitting the TAS collimator, the collimator nearest to the ATLAS interaction point. This is to be compared with ~0.5 particles/cm² resulting from minimum-bias proton interactions in each bunch crossing (every 25 ns) at the LHC design luminosity of 10³⁴ cm⁻² s⁻¹ [1]. To distinguish these two situations optimally, the BCM should be sensitive to single minimum ionising particles (MIPs). Given MIP sensitivity, one is then also able to use BCM information for proton-proton collision luminosity assessment. With proton interactions inducing signals every 25 ns, fast processing of the MIP signals is paramount. A fast rise time (~1 ns), narrow pulse width (~3 ns) and baseline restoration within 10 ns are necessary to prevent pile-up. The radiation field at this location will expose the BCM sensors to 10¹⁵ particles, mostly pions, per cm² and an ionisation dose of ~500 kGy in 10 years of LHC operation. An additional constraint stems from the fact that the BCM is integrated into the BPSS and covered with layers of pixel services. This renders it almost inaccessible, with any intervention requiring the disassembly of a substantial part of the pixel services, an action unlikely to be approved. Thus a simple and robust design was favoured.

Detector modules
The BCM detector modules include two novel parts. The first is a set of diamond sensors that sit in the very intense radiation region less than 6 cm radially from the LHC beams. The passage of charged particles, either from proton-proton collisions or the secondary products of lost protons, ionises the diamond, generating MIP signals. The second, at a larger radius but still only 5 cm from the diamond sensors themselves, is a two-stage RF amplifier that boosts the signal from the diamond and transmits it, in analogue form, 15 m off the detector to be digitised. In this section we discuss the two main components of the detector modules: the diamond sensors and the signal pre-amplifiers.

Diamond sensor material
Chemical Vapour Deposited (CVD) diamond possesses some remarkable properties which make it an attractive material for use in the BCM system. Increasingly, solid-state particle detectors are required to have fast signals, operate at high rate and, very often, operate reliably in high-radiation environments for several years. While silicon, the de-facto standard of solid-state detectors, is very well established in particle detector applications, diamond detectors are competitive in environments that place a premium on radiation hardness and fast signal formation, such as the ATLAS BCM. Typical designs for diamond particle sensors are based on a bulk of free-standing CVD material, usually a few hundred micrometres thick, with electrodes on opposite sides of the diamond bulk, as shown in figure 2. Prior to deposition of contacts, the diamond surfaces are polished, smoothing the surface on the growth side and removing significant amounts of low-grade material from the substrate side. Metal contacts that form suitable carbides are evaporated or sputtered onto both diamond surfaces and annealed. A covering layer of, for example, aluminium is applied to allow wire-bond connections to the readout electronics. The dimensions of the electrodes, deposited with lithography, range from tens of micrometres to centimetres. For sensor operation, a bias voltage is applied between the electrodes to generate a drift field. A traversing charged particle ionises the atoms in the crystal lattice and leaves a trail of primary ionisation charge of 36 electron-hole pairs per micrometre [4], [5], denoted Q_gen, along its path. The drift of electrons and holes in the applied electric field induces a current pulse on the electrodes.
The induced current, I, can be calculated from the Shockley-Ramo theorem [6], [7] for a uniform constant field between the two electrodes as

I = Q_gen v / d,    (3.1)

where Q_gen denotes the total generated ionisation charge, v the drift velocity, and d the gap between the electrodes, which is equal to the thickness of the sensor. The readout electronics then measures either the current amplitude or, in the case of charge-sensitive amplifiers, the integrated current or total measured charge, Q_meas. The ionisation charge is, however, reduced by charge trapping during the drift. A common figure of merit for the characterisation of CVD diamond sensors is the mean distance electrons and holes drift apart before being trapped, called the charge collection distance (CCD),

CCD = d Q_meas / Q_gen,    (3.2)

which can be related to the electron and hole mobility-lifetime products as CCD = (μ_e τ_e + μ_h τ_h) E, under the assumption that the sensor thickness is larger than the CCD and the electric field, E, is uniform. As diamond sensors are usually operated at high field strength, the charge collection distance is usually quoted where the CCD saturates, at 1 V/μm. For applications such as the BCM, an initial charge collection distance beyond 200 μm is required in order for diamond sensors to produce reliable single-MIP signals. Figure 3 shows a recent 13 cm diameter CVD wafer ready for tests with contact pads spaced at 1 cm intervals.
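Eq. (3.2) can be cross-checked numerically. A minimal sketch, with helper names of our own choosing:

```python
# Charge collection distance (CCD) from eq. (3.2): CCD = d * Q_meas / Q_gen,
# with 36 electron-hole pairs generated per micrometre of track in diamond.

PAIRS_PER_UM = 36  # primary ionisation in diamond [4], [5]

def generated_charge(thickness_um):
    """Q_gen in electrons for a perpendicular MIP track."""
    return PAIRS_PER_UM * thickness_um

def ccd_um(thickness_um, q_meas_electrons):
    return thickness_um * q_meas_electrons / generated_charge(thickness_um)

# A typical 500 um pCVD sensor with a mean measured charge of 9800 electrons:
# Q_gen = 18000 e, so CCD = 500 * 9800 / 18000, close to the ~275 um
# collection distances quoted in the next paragraph.
ccd = ccd_um(500, 9800)
```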
In polycrystalline CVD (pCVD) diamond sensors, charge collection distances of 275 μm have been achieved. In these diamonds, typically 500 μm thick, the charge signal distribution shows a mean charge of 9800 electrons, with 99% of the distribution above approximately 3000 e⁻ [8], [9]. The best samples reach a charge collection distance above 300 μm (figure 4).
Polycrystalline CVD diamond sensors are ideally suited for use in the BCM system as they are the only sensor material known to fulfil our requirements in terms of signal speed and radiation hardness. The sensor of choice is the pCVD diamond material developed by RD42 [10] and produced by Element Six Ltd.¹ The timing properties of the ionisation current signal are excellent due to the high carrier velocity (> 10⁷ cm/s) at our operating field of 2 V/μm and the short trapping times, even before irradiation. Another clear benefit is the very low leakage current (less than 1 nA), allowing operation at room temperature without cooling. Radiation hardness has been proven up to fluences of 2.2 × 10¹⁵ p/cm² with a signal degradation of only 15% [11].
The sensor dimensions are 1 cm × 1 cm with metal electrodes covering 8 mm × 8 mm. The sensors are around 500 μm thick, which, with a bias of 1000 V, results in an electric field of 2 V/μm. At 1000 V, typical sensors have a leakage current of less than 100 pA and a CCD of around 250 μm. The two sensors are assembled in a back-to-back or 'double-decker' configuration.

Readout amplifiers
The signal is fed through a 5 cm long 50 Ω transmission line on the printed circuit board (figure 6) to the front-end amplifier. In this way the radiation field at the amplifier location is decreased by about 30%. The front-end [12], designed by FOTEC,² is a two-stage RF current amplifier utilising the 500 MHz Agilent MGA-62563 GaAs MMIC low-noise amplifier in the first stage and the Mini-Circuits Gali-52 InGaP HBT broadband microwave amplifier in the second stage. Each stage provides an amplification of 20 dB, with the first stage exhibiting an excellent noise figure of 0.9 dB.
The sensors and front-end electronics are mounted in a module box (figure 6) designed to shield RF at the BCM operating frequencies. Each of the amplification stages is isolated in a separate shielded compartment. The amplified signal is fed into a high-quality 50 Ω coaxial cable. In prototype tests the signals were digitised with a high-bandwidth (> 1 GHz) digital oscilloscope. In ATLAS, digitisation is done with a radiation-tolerant ASIC placed outside the calorimeters, 15 m from the BCM modules.
To verify the radiation hardness of the amplifiers, several of them were irradiated with protons, neutrons and photons, and subsequently tested. Amplification degradations at the level of 0.5 dB were observed with the second-stage Gali amplifier. A crucial test was performed by exchanging the first-stage Agilent amplifier of a BCM module with one irradiated to a mixed fluence of 5 × 10¹⁴ protons/cm² and 5 × 10¹⁴ neutrons/cm². Comparing both assemblies with ⁹⁰Sr source signals from a standard float-zone silicon diode, an amplification loss due to radiation of 20% was observed, with no change in the noise (figure 7).

Off-detector readout electronics
The back-end of the BCM readout is responsible for digitising and acquiring the signals from the modules while introducing minimal noise, storing them in a ring buffer, performing some basic analysis and generating outputs for the various parts of the ATLAS DAQ system that allow the BCM information to be read out for further offline analysis. A Field Programmable Gate Array (FPGA) was chosen to perform these functions because of its high-speed parallel data-processing capabilities. We will describe each part of the readout system in turn.

The NINO digitisers
The signal from the sensors and front-end amplifiers travels 15 m through a high-quality coaxial cable to the digitisers, which are placed in a radiation-shielded environment behind the ATLAS calorimeters. There the signals are digitised by a radiation-tolerant eight-input-channel NINO chip, an ASIC originally designed for the ALICE experiment at CERN [13].
MIP signals from the diamond sensors all have a similar shape, with amplitudes that follow a Landau distribution. When multiple particles traverse the sensors simultaneously we see a sum of individual MIP signals, still of similar shape. Studies showed that the optimal signal-to-noise ratio with our front-end amplifiers is achieved with the addition of a low-pass filter that provides a bandwidth limit of 200-300 MHz. Signals to the NINO board are thus fed into a fourth-order 200 MHz filter with a 50 Ω impedance. The NINO then converts the analogue signal of varying amplitude into a digital signal that starts a fixed time after the original analogue signal but has a duration correlated with the input amplitude. The resulting digital signal encodes the charge seen at the front-end in terms of a Time-over-Threshold (see figure 8). Due to the relatively small dynamic range of the NINO inputs, the signals from the BCM front-end amplifiers are first split by a voltage divider in a ratio of 12:1 and then fed into different NINO channels. The NINO thresholds are set such that the larger signal is used for truly minimum ionising signals (up to about 10 MIPs) while the smaller signal comes into play if a BCM sensor sees a signal of more than 10 MIPs, which could happen in catastrophic beam-loss situations. Each of the NINO outputs is connected to circuitry that drives a laser diode over 70 m of single-mode 1.3 μm optical fibre that brings the signals to a receiver board in the ATLAS counting room.
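The two-range scheme above can be sketched in a few lines. The amplitude-to-width mapping and threshold values here are purely illustrative stand-ins, not the NINO's actual transfer function:

```python
# Sketch of the two-range Time-over-Threshold digitisation: each front-end
# signal is split 12:1 into a high- and a low-gain NINO channel so that both
# single-MIP signals and large (>10 MIP) beam-loss signals stay in range.

SPLIT_RATIO = 12.0

def split(amplitude_mv):
    """Voltage divider: (large, small) copies of the input signal."""
    return amplitude_mv, amplitude_mv / SPLIT_RATIO

def time_over_threshold(amplitude_mv, threshold_mv, k_ns_per_mv=0.05):
    """Toy monotonic amplitude -> pulse-width conversion (illustrative)."""
    if amplitude_mv <= threshold_mv:
        return 0.0                      # no output pulse
    return k_ns_per_mv * (amplitude_mv - threshold_mv)

def digitise(amplitude_mv, thr_hi=30.0, thr_lo=30.0):
    """Return (high-gain ToT, low-gain ToT) in ns for one input pulse."""
    hi, lo = split(amplitude_mv)
    return time_over_threshold(hi, thr_hi), time_over_threshold(lo, thr_lo)
```

A single-MIP-scale pulse fires only the high-gain channel, while a very large pulse also registers on the attenuated channel, mirroring the division of labour described in the text.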

FPGA based signal decoders and coincidence detection logic
The sixteen optical signals (eight high-amplitude and eight low-amplitude) are fed into two receiver boards that translate the optical signals into electrical (PECL) differential signals, which are connected to an FPGA board. The optical input signals and PECL differential signals are available for oscilloscope inspection on the front panel of a double-width 6U VME module. The optical receiver board also fans out the same signals, at 50 Ω, through a LEMO-00 connector on the front panel, to be used for monitoring purposes.
The PECL signals are then fed into the main part of the BCM readout: two Xilinx ML410 development boards [14], each mounted in a 19", 1U housing (also by Xilinx). These were chosen since the small BCM readout system did not warrant the design and manufacture of a custom board. The ML410 board contains a Xilinx Virtex-4 FX60 FPGA that features eight Rocket-IO serial multi-gigabit transceivers, two PowerPC cores and 56k logic cells. This model was chosen for the excellent sampling capabilities of the Rocket-IO channels (up to 6.5 Gbps) [15]. The incoming data are sampled synchronously with the LHC bunch clock at a rate of 2.56 Gbps (a time slice of 390 ps) by multiplying the LHC bunch clock by a factor of 64 in two separate phase-locked loops. The Rocket-IO channels require transitions in the incoming data stream, so a fixed pattern is generated and XOR-ed with the BCM/NINO signals. Internally, the complementary XOR operation is performed, restoring the original waveform. The data are then stored in a DDR2 RAM that acts as a ring buffer capable of storing BCM hit information from all eight modules (at both thresholds) for up to 900 LHC bunch orbits. In parallel, an edge-detection algorithm determines the arrival times of pulses and performs a time-to-digital conversion. At the same time, pulse widths are encoded to digitise the Time-over-Threshold information from the NINO.
The basic hit-or-miss information from every detector is provided to the ATLAS Central Trigger Processor (CTP) [16] and thus can be used for the ATLAS Level 1 Accept (L1A) decision. To be used in this way, these signals must be provided within 1.5 µs of the actual beam crossing in ATLAS. This is the most time-critical path of the BCM readout, so processing is performed as fast as possible. The algorithm is structured as a pipelined binary search tree, taking advantage of the FPGA's internal structure of four-input look-up tables [17]. The pipeline latency is 5 LHC bunch-clock cycles, or 125 ns, which easily achieves the required latency even when the FPGA input and output overheads and cable delays are included. The digitisation and acquisition parts have been implemented and verified on a Xilinx ML405 evaluation board. Pulses with a fixed frequency from an HP pulse generator were used as input signals, and the pulse-width measurements on a Tektronix TDS5104B scope were compared with the values obtained by the FPGA algorithm. Figure 9 shows the distribution of the FPGA-digitised times for an input pulse width of 4.5 ns, demonstrating the excellent performance of the Rocket-IO acquisition.
Additional analysis to be performed by the FPGA includes the calculation of in-time and out-of-time coincidences of signals between detectors in the two BCM stations. Continuously accumulating histograms will provide status information about the beams and interaction point in ATLAS. These histograms will be read out by BCM monitoring software on a timescale of minutes.
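The coincidence bookkeeping can be sketched as follows. The window width and category names are illustrative choices of ours, not the firmware's actual parameters:

```python
# Toy in-time / out-of-time coincidence classification for hit pairs from
# the two BCM stations, as accumulated into status histograms.

DT_BACKGROUND_NS = 12.3   # expected |t_A - t_B| for lost-beam showers
WINDOW_NS = 3.0           # illustrative coincidence window

def classify(t_station_a, t_station_b):
    dt = abs(t_station_a - t_station_b)
    if dt < WINDOW_NS:
        return "in-time"                 # collision-like
    if abs(dt - DT_BACKGROUND_NS) < WINDOW_NS:
        return "out-of-time"             # background-like
    return "other"

def accumulate(hit_pairs):
    """Histogram a list of (t_A, t_B) pairs by coincidence class."""
    counts = {"in-time": 0, "out-of-time": 0, "other": 0}
    for ta, tb in hit_pairs:
        counts[classify(ta, tb)] += 1
    return counts
```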
The FPGA also has to act as a Read-Out Driver (ROD). After an L1A it provides data in the ATLAS Raw Event Format over a Read-Out Link adhering to the S-Link specification [18], as well as interfacing to the ROD Crate DAQ (RCD) framework and the Local Trigger Processor for integration into the ATLAS Trigger and Data Acquisition system [19]. For this we use the standard ATLAS S-Link interface, HOLA [20]. An Ethernet connection to the RCD controller is foreseen. The FPGA is also connected via Ethernet to a PC for slow readout and integration into the ATLAS Detector Control System via its PVSS-JCOP interface. This gives us the possibility of adjusting on-board analysis and acquisition parameters. Figure 10 shows a schematic of the BCM readout and its connection to the rest of ATLAS.

Testing and qualification of prototype detector modules
Prototype BCM detector modules were subjected to a number of tests to ensure they had suitable MIP-detection performance. Prototype assemblies were tested with electrons from a ⁹⁰Sr source, with 125 and 200 MeV/c protons at the Massachusetts General Hospital radiation therapy facility in Boston, and with high-energy pion beams at KEK and the CERN SPS. Results from these tests are summarised briefly here. For more details see refs. [21], [22].
The most important conclusions of these studies were that:
• Inclining the sensors at a 45° angle with respect to the trajectory of the particle to be detected resulted in a √2 increase in signal and had no effect on noise;
• The use of double-decker sensors on the same amplifier input doubled the signal while increasing the noise by ~30%, improving the signal-to-noise ratio (SNR) by ~50%;
• The timing differences between independent modules exhibited a FWHM of 1.5 ns;
• Limiting the readout bandwidth to 200 MHz improved the SNR by 20% while only degrading the time correlations by 10%;
• Off-line processing of fully digitised analogue waveforms confirmed that the optimum SNR is achieved with a low-pass filter having a pole at 200-400 MHz.

Figure 10. Overview schematic of the ATLAS-BCM readout system.
Figure 11. Typical minimum-ionising particle signal superimposed on baseline fluctuations, as recorded by a LeCroy oscilloscope in a ⁹⁰Sr source test. The noise is estimated from the data in the first 20 ns time interval.
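The quoted gains combine multiplicatively. A quick arithmetic check, normalised to a single sensor at normal incidence (illustrative numbers only):

```python
import math

# Normalise signal and noise to a single sensor at normal incidence.
signal, noise = 1.0, 1.0

# Double-decker: signal doubles, noise grows by ~30%.
snr_double = (2.0 * signal) / (1.3 * noise)   # ~1.54, i.e. the quoted ~50% gain

# Inclining at 45 degrees: path length (and signal) grows by sqrt(2),
# noise unchanged.
snr_inclined = snr_double * math.sqrt(2.0)    # a further ~41% on top
```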

Bench tests
Extensive qualification tests were performed on the final production modules using a ⁹⁰Sr source as a MIP-signal equivalent. The BCM signal was recorded with a LeCroy oscilloscope (4 GHz sampling), triggered by a scintillator behind the diamond sensor. This configuration triggers on electrons above 2 MeV from the ⁹⁰Sr source. These in turn deposit about 10% more charge in the diamond sensors than true MIPs. Using a 200 MHz bandwidth limit on the scope gives single-event signals such as the one shown in figure 11. The signal is taken as the maximum reading within 2 ns of the trigger, and the noise is estimated from the baseline fluctuations in a 20 ns interval well before the trigger. The noise was found to be independent of the electric field across the sensors up to 3 V/μm. Good reproducibility of signals has been observed, with signal amplitudes stable to better than 4% during a 24-hour test. SNR values of ~8 have been routinely obtained at a 2 V/μm bias with the ⁹⁰Sr electrons incident perpendicular to the diamond sensors.
A peculiar feature has been observed, with the diamond leakage current in the BCM modules rising by a factor of more than 100, to several hundred nA, on a time scale of days. In addition, this leakage current shows erratic behaviour on a time scale of minutes, rising and falling by factors of ten. This yet-to-be-understood phenomenon has been observed before in the BaBar experiment at lower electric fields of 1 V/μm [11]. As at BaBar, we observe that the excess current vanishes if the diamond is placed in a strong magnetic field. Applying a 2 T field, as will be present in the ATLAS Inner Detector, in a realistic geometry with the BCM module inclined at 45° reduced the current to well below 10 nA for a period of nearly three days (figure 12). In any event, the BCM readout noise is observed to be independent of the leakage current up to 500 nA (figure 13).

JINST 3 P02004
Figure 14. Comparison of the amplitude distributions obtained at the MGH testbeam from a module with a double diamond sensor (left) and a single diamond sensor (right). In the left plot a peak at half the signal is clearly visible, corresponding to instances where the beam particle passed through only one of the sensors.

Beam test results
Measurements presented here were obtained at various stages of development of the prototype sensors and the final readout modules. They were tested with low-momentum protons (125 MeV/c and 200 MeV/c) at the Massachusetts General Hospital (MGH) in Boston and with high-energy pions at KEK and at the CERN SPS, and are compared to bench tests with electrons from a ⁹⁰Sr source.
The low-momentum protons available at the MGH deposit signals in the diamond equivalent to 2.3 MIPs. The performance of a single diamond sensor was compared to that of a module equipped with two diamond sensors (see figure 14). The double-sensor module shows twice the signal, while the noise increase is only 30%. These tests further confirmed that inclining the detectors at an angle of 45° with respect to the beam increased the signal by a factor of √2 without having any effect on the noise.

In a pion beam at KEK the detector response to single MIPs was studied. Typical signal and noise distributions gave an SNR of about 7.5. Here, the SNR distribution was obtained by dividing the signal amplitudes by the RMS of the baseline fluctuations in time intervals where no pion beam was present. We also observed that including a 200 MHz low-pass filter improved the SNR by about 20% with respect to measurements made with the originally intended 500 MHz amplifier bandwidth limit (see figure 15). This was confirmed by applying first-order filters offline to the data taken at full bandwidth (see figure 16). The typical timing resolution was estimated from the time-difference distribution for simultaneous events from two different detectors (see figure 17). The width of this distribution was about 1 ns, more than sufficient for our timing needs. We observed less than a 10% change in the width of the timing distribution when the 200 MHz bandwidth limit was added.

The testbeam signal-amplitude measurements compare favourably to those made on the same modules using a ⁹⁰Sr source. A source setup was developed and used for the reception tests of the final detectors. A typical distribution of signals and noise obtained at a 200 MHz limited bandwidth is shown in figure 18. A further test-beam campaign was carried out in the summer of 2006 at the CERN PS (T11 and T9) and SPS (H6 and H8) pion beams.
The aim was to thoroughly evaluate all modules produced and to select the eight best for installation. Four BCM modules were put in the beam simultaneously (figure 19). Signals from two of them were amplified in an ORTEC FTA810 300 MHz amplifier and read out with a CAEN V1729 12-bit ADC with 2 GHz sampling; for these, complete analogue and timing information was recorded. Signals from the other two modules were fed into prototype NINO boards [13], which in turn were recorded by a CAEN ADC. The NINO threshold settings were varied run-by-run to study efficiency and noise occupancy under realistic conditions. An eight-plane (four horizontal and four vertical) silicon telescope, provided by the University of Bonn, produced precision tracking of the beam pions on an event-by-event basis. The coincidence signal from two plastic scintillators was used to trigger the readout. Events from the BCM and the silicon telescope were recorded synchronously by their respective DAQ systems and the data re-assembled off-line. The BCM was read out with production services through to the NINO digitisation. The high voltage was supplied by an ISEG EHQ-8210 modified to provide 1 nA current monitoring. Low voltages (3 and 11 V) for the front-end amplifiers were sourced from a modified version of the custom ATLAS-SCT power supplies that will be used to power the BCM. These voltages were merged into a single multi-core power cable. The analogue signal was read out by the NINO through a 1.5 m long stretch of Gore 41 0.19" diameter coaxial cable and a 12 m length of Andrew Heliax FSJ1RN-50B ¼" diameter coaxial cable, the final powering and readout configuration foreseen for ATLAS.
The testbeam pions had momenta of 3.5 GeV/c (T11) and 12 GeV/c (T9). An analysis of NINO threshold scans produced efficiency and noise-occupancy estimates. Tracks with hits in all reference telescope planes and with a good fit quality were selected. Tracks that crossed the central 3×5 mm² region of the diamonds were used to compute the efficiency, while those missing the diamond by more than 2 mm provided a sample for noise-occupancy estimates. The corresponding NINO signal was sought in a 60 ns time window around the arrival time of the beam particle provided by the trigger scintillators. An example of the hit distribution from the reference telescope and the corresponding NINO signals can be seen in figure 20. The resulting efficiencies and noise occupancies as a function of NINO threshold are shown in figure 21. The efficiency saturates at thresholds below 30 mV, approaching values above 95% for thresholds as low as 20 mV. Fifty percent efficiency is reached for thresholds between 70 and 90 mV, depending on the BCM module under study. As the full threshold range of the NINO spans 300 mV, an additional amplifier with a gain of ~3 has been added to the final ATLAS system. The noise occupancy exceeds the 10⁻³ level for thresholds of 50 mV, rising to 1% at 20 mV. At the very lowest thresholds, we believe we are observing the intrinsic NINO noise. Figure 22 shows the spatial distribution of tracks that generated a BCM pulse of 30 mV, or about 1/3 of a MIP.
In 2007, we performed further testbeam studies with three spare BCM modules. These tests included production versions of all elements of the back-end readout, including the NINO discriminators, LVDS-to-optical converters and the optical receivers at the front-end input to the FPGA coincidence-detection logic boards. While we have not fully analysed these testbeam data, we have already extracted a measure of the overall system SNR, including both the analogue performance of the front-end modules and the digital performance of the NINO discriminators. Following [23], the noise in a self-triggering digital readout system can be extracted from the 'beam-off' count rate as a function of threshold V through a fit of the form

R(V) = R₀ exp(−V² / 2σ²),

where σ characterises the Gaussian baseline noise. From figure 23 we extract a noise value of 31 mV. One can then extract the median signal from a study of the efficiency (the count rate for events known to have beam particles from an external tracking telescope) versus threshold for the same module. As figure 24 shows, the median efficiency for this module is reached at a threshold of 335 mV. We thus conclude that this module, typical of those installed in ATLAS, has a median-signal-to-noise ratio of 11:1.
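The beam-off noise extraction can be sketched numerically. Assuming the Gaussian rate form R(V) = R₀ exp(−V²/2σ²), ln R is linear in V², so an ordinary least-squares line yields σ; the synthetic scan data and helper names below are ours:

```python
# Extract the noise sigma from a 'beam-off' count-rate threshold scan by a
# log-linear least-squares fit: ln R = ln R0 - V^2 / (2 sigma^2).
import math

def fit_noise_sigma(thresholds_mv, rates_hz):
    xs = [v * v for v in thresholds_mv]           # abscissa: V^2
    ys = [math.log(r) for r in rates_hz]          # ordinate: ln R
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.sqrt(-1.0 / (2.0 * slope))        # slope = -1/(2 sigma^2)

# Synthetic beam-off scan generated with sigma = 31 mV:
sigma_true, r0 = 31.0, 1e6
vs = [40.0, 60.0, 80.0, 100.0, 120.0]
rs = [r0 * math.exp(-v * v / (2 * sigma_true ** 2)) for v in vs]
sigma_fit = fit_noise_sigma(vs, rs)

# With the median signal at 335 mV this reproduces the quoted ~11:1 ratio.
snr = 335.0 / sigma_fit
```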

Quality assurance with production modules
In late fall 2006, qualification tests of the final modules were performed to select the eight most reliable for installation. Before assembly, all modules were cleaned with Vigon EFM solution in order to remove remnants of solder flux and organic pollutants. Afterwards, the modules were subjected to a thermo-mechanical test. Before and after this test, all modules were characterised in our ⁹⁰Sr setup to measure their SNR. Figure 25 shows a typical signal and noise spectrum. For one of the final modules a test of accelerated ageing was performed: its temperature was raised to 140 °C for 14 hours. This simulates more than 10 years of operation at 20 °C, assuming the activation energy of 0.8 eV characteristic of the epoxy and solder used to assemble the module. No change in signal-to-noise was observed. All modules were baked at 80 °C for 12 hours to expose infant mortality in the readout chips; the modules will experience a similar temperature when the LHC beam pipe is baked out. We then performed a series of thermal cycles to generate the stresses caused by mismatches in the coefficients of thermal expansion between components in the BCM modules. Each module experienced ten temperature cycles with the humidity set to zero and the temperature ranging from -25 °C to 40 °C. Both ends of this range are more extreme than expected in normal ATLAS operation, except during beam-pipe bake-out. The comparison of bench measurements with ⁹⁰Sr before and after the thermo-mechanical treatments shows no change in SNR. More importantly, no modules failed during these acceptance tests.
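The accelerated-ageing equivalence follows from the Arrhenius relation; a quick numerical check (function names are our own):

```python
# Arrhenius acceleration factor: AF = exp(Ea/k * (1/T_use - 1/T_stress)),
# with Ea = 0.8 eV, stress at 140 C for 14 h, use condition 20 C.
import math

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(ea_ev, t_use_c, t_stress_c):
    t_use = t_use_c + 273.15       # convert to kelvin
    t_stress = t_stress_c + 273.15
    return math.exp(ea_ev / K_B_EV * (1.0 / t_use - 1.0 / t_stress))

af = acceleration_factor(0.8, 20.0, 140.0)
equivalent_years = 14.0 * af / (24 * 365)  # ~15 years, i.e. "more than 10"
```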
During the acceptance tests, all modules were tested with both positive and negative electric fields. The diamond sensors exhibit slight differences in leakage current and signal size depending on the polarity, which is understood to be a vestige of the direction in which the CVD sensor material was grown. When building the BCM modules we attempted to pair diamonds such that their preferred polarities agreed. As a result, a number of the final modules prefer a positive electric-field configuration while others prefer a negative-field configuration. Acceptance test results for the relevant bias-voltage polarity of the eight best modules selected for installation in ATLAS are summarised in table 1.

Mechanical support, alignment and detector integration
The BCM modules are mounted in brackets supported from a cruciform on the pixel Beam Pipe Support Structure (BPSS). One station of the final BCM assembly is shown on the pixel BPSS in figure 26. In January 2007, the eight modules listed in table 1 were mounted on the ATLAS pixel support frame. The position of each module in the BPSS frame was measured using the mechanical survey equipment in place to ensure the parallelism of the BPSS bars and the overall straightness of the pixel detector support structure. When combined with high-resolution photographs of the BCM module boxes (figure 5), which include images of the diamond sensor locations as well as the edges of the G10 BCM module boxes, this survey allows us to predict the positions of the BCM sensors with a precision of 1 mm. This spatial information will be used to relate observed rate differences between the BCM stations to the position of the LHC beam, providing O(1 mm) precision with a very rapid turnaround, perhaps even before it has been deemed safe to switch on other ATLAS detector systems.
JINST 3 P02004

Noise measurements of BCM modules were repeated after installation in the BPSS, and again after partial installation of the pixel readout system, in order to check for noise interference between the two systems. In these tests two BCM modules were measured: one positioned directly below the pixel system being read out at the time, and a second BCM module the furthest away from the active pixel modules. Two measurements of the BCM module noise were performed. For the first, a random trigger was used and only one pixel readout unit was active. For the other, all available pixel readout modules were active and the trigger was a 40 MHz clock from the pixel timing module, which simulated the LHC bunch clock for the pixel readout system. The BCM module noise was computed from baseline fluctuations in a 20 ns window a fixed time before the trigger, just as had been done in the module qualification measurements described above. The measured noise values were all compatible with those from the acceptance tests (see table 1). In particular, no difference in noise was observed in any of the pairs of tests (random trigger and partial pixel readout vs. synchronised trigger and full pixel readout), or between BCM modules close to (within 10 cm of) the active pixel readout and those some 4 m away, on the other side of the pixel support frame.
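The noise extraction described above, the RMS of baseline fluctuations in a 20 ns window a fixed time before the trigger, can be sketched as follows. The sampling interval and window offsets here are assumptions for illustration, not the actual BCM digitiser parameters:

```python
import numpy as np

SAMPLE_NS = 1.0  # assumed sampling interval (not the actual BCM value)

def baseline_noise_rms(waveform_mv, trigger_idx, window_ns=20.0, offset_ns=40.0):
    """RMS of baseline fluctuations in a window of length window_ns,
    ending offset_ns before the trigger sample."""
    stop = trigger_idx - int(offset_ns / SAMPLE_NS)
    start = stop - int(window_ns / SAMPLE_NS)
    return float(np.std(waveform_mv[start:stop]))

# Synthetic waveform: flat baseline with 2 mV Gaussian noise.
rng = np.random.default_rng(0)
wave = rng.normal(0.0, 2.0, size=256)
noise = baseline_noise_rms(wave, trigger_idx=200)
print(f"baseline noise: {noise:.2f} mV")
```

Taking the window well before the trigger ensures the estimate is unbiased by any signal pulse, which is what allows the same procedure to be reused unchanged between bench tests and the in-situ pixel interference checks.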

Beam conditions monitor simulation studies
We have developed a full GEANT [24] model of the BCM detector modules and included it in the full ATLAS detector simulation. This has allowed us to expand on the simulations used for the original design [3] and to begin detailed studies of different algorithms that could be implemented in our readout system. Here we report on the characteristic BCM responses from LHC proton-proton collisions as well as those resulting from protons that have been lost from the machine. We include a study of module occupancies for single proton collisions, typical of a luminosity of 5×10³² cm⁻² s⁻¹, and for the full design luminosity of 10³⁴ cm⁻² s⁻¹, where over twenty simultaneous proton collisions are expected.
Our BCM model includes all the material in the module boxes (see section 3) as well as the connectors and cables that service the module. A picture of the GEANT volumes simulated is shown in figure 27. This is embedded in a full description of the ATLAS pixel geometry, which in turn is embedded in a full model of the ATLAS inner tracker. Thus, our simulations include the effect of secondary particles produced anywhere in the ATLAS tracker volume that arrive within 40 ns of the bunch crossing associated with the proton collision under study. As one can see from figure 28, the bulk of the particles arrive at a BCM sensor about 6 ns after the collision time. Only a small fraction of the particles seen arrive more than 9 ns after the collision, indicating that the production of secondaries elsewhere in the ATLAS experiment should not be a significant background. By the same token, it is clear that the BCM readout and coincidence logic need only consider signals within a few nanoseconds of the nominal arrival time in order to capture >99% of the hits from collisions.
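The ~6 ns nominal arrival time follows directly from the station geometry: a relativistic particle from the interaction point needs z/c to reach a sensor at |z| = 184 cm. A one-line check:

```python
C_CM_PER_NS = 29.9792458   # speed of light in cm/ns
Z_BCM_CM = 184.0           # |z| of the BCM stations, from the detector layout

def arrival_time_ns(z_station_cm=Z_BCM_CM):
    """Straight-line time of flight from the interaction point to a
    BCM station for a particle travelling at essentially c."""
    return z_station_cm / C_CM_PER_NS

t = arrival_time_ns()
print(f"nominal arrival time: {t:.1f} ns")  # ~6.1 ns
```

This matches the peak seen in figure 28 and sets the reference time against which the coincidence gate is defined.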
We have begun rudimentary simulations of the BCM detector system's response to LHC lost protons. Figure 29 shows the arrival time of charged particles at the BCM stations for five 7-TeV protons incident on the inner radius of the ATLAS Forward Calorimeter system. While this is not a likely point of impact for lost protons, it is clear that such lost protons produce tens of secondaries that traverse the BCM sensors. We see the striking characteristic that about half the BCM modules are hit 6 ns before the nominal collision time. This is from secondaries that are travelling with the incident protons (all particles travel essentially at the speed of light) but at radii large enough to hit the BCM modules on their way into the interaction region. The remainder of the BCM modules are hit about 6 ns after the nominal collision time, as the secondaries generated by the lost proton leave the interaction region. We see the same general characteristic time spread for the arrival of particles (>95% within a few ns of the nominal particle crossing time). A more likely source of lost protons, and one that will be difficult to detect with other safety systems in place in the LHC, comes during the injection of pilot bunches into the LHC. Here the currents are so low that the standard beam loss monitors around the LHC are of limited use. We have investigated a number of potential loss scenarios that include losses due to the failure of critical components during injection. These can result in 450 GeV protons (the LHC injection energy) hitting the TAS collimators, designed to protect the low-beta quadrupoles and the experiment, or even, in the case of multiple component failures, finding their way directly to the vacuum chamber inside the ATLAS experiment. Figure 29 shows the BCM hit rates (top) and coincidence rates for both beam losses on the TAS collimators (solid) and directly on the beam-pipe (dashed).
While the coincidence rates are not as large as during LHC collisions at full luminosity and full machine energy, the BCM should be sensitive to these losses during the early stages of injection and thus provide fast feedback. Figure 30 shows the number of BCM modules hit for a single 14 TeV proton-proton collision [25], corresponding to a proton-proton collision luminosity of 5×10³² cm⁻² s⁻¹. It is clear that this provides an efficient detector of collisions on a crossing-by-crossing basis. Moreover, if we assume that we are dominated by collisions, we can use the single-module count rates to determine the collision-point location, up/down (ATLAS-y) or inside/outside (ATLAS-x) the LHC ring, by comparing the rates from the various stations. We are using our simulations to quantify how many collisions are necessary and with what precision we can measure the beam(s) x, y and z positions. Figure 31 shows the number of hits in all eight BCM modules at the LHC design luminosity (10³⁴ cm⁻² s⁻¹). Here, we see an average of one hit per BCM station. We are in the process of including a more realistic model of the single-module detection efficiencies, and from there plan to compute the efficiency for the forward-backward coincidences among the BCM stations that would be characteristic of proton-proton collisions. At this stage our baseline choice will require coincident signals from two BCM modules in each of the forward and backward directions to robustly identify proton-proton collisions at the full LHC design luminosity.
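The timing signatures above suggest a simple classification rule: collisions produce in-time hits (about +6 ns) on both sides, while lost protons produce early hits (about -6 ns) on the upstream side. A minimal sketch of such logic, as it might be expressed before translation into the FPGA; the 3 ns gate width and the hit format are our assumptions, not the implemented design:

```python
C_CM_PER_NS = 29.9792458
T_NOMINAL_NS = 184.0 / C_CM_PER_NS   # in-time arrival at |z| = 184 cm, ~6.1 ns
GATE_NS = 3.0                        # assumed coincidence gate half-width

def count_modules(hits, t_expected):
    """Number of distinct modules with a hit inside the timing gate.
    hits: list of (module_id, arrival_time_ns) tuples."""
    return len({m for m, t in hits if abs(t - t_expected) < GATE_NS})

def classify(hits_fwd, hits_bwd, min_modules=2):
    """Collision-like: >= min_modules in-time hits on BOTH sides.
    Background-like: early hits on either side, i.e. particles arriving
    ~T_NOMINAL_NS before the nominal crossing time."""
    if (count_modules(hits_fwd, +T_NOMINAL_NS) >= min_modules and
            count_modules(hits_bwd, +T_NOMINAL_NS) >= min_modules):
        return "collision-like"
    if (count_modules(hits_fwd, -T_NOMINAL_NS) or
            count_modules(hits_bwd, -T_NOMINAL_NS)):
        return "background-like"
    return "ambiguous"

# A pp collision: both stations hit ~6.1 ns after the crossing.
r1 = classify([(0, 6.2), (1, 6.0)], [(4, 6.1), (5, 6.3)])
# A lost proton from the forward side: forward modules hit ~6.1 ns early.
r2 = classify([(0, -6.1), (1, -6.2)], [(4, 6.1), (5, 6.0)])
print(r1, r2)
```

In hardware the same decision reduces to counting discriminator pulses inside two fixed gates per station, which maps naturally onto FPGA logic.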

We continue to refine our simulation of possible beam-loss scenarios and collisions, and use these to guide the development of the FPGA algorithms that we will use to implement our coincidence strategies when we see the first beams.

Summary
Beam tests of BCM production modules have shown that adequate performance in terms of SNR and timing can be achieved with pCVD diamond sensors and fast RF current amplifiers. The modules have undergone final thermo-mechanical tests, and the best eight were installed on the pixel BPSS in early 2007, which in turn was installed in the ATLAS cavern in June 2007. Testbeam studies of spare modules show a median-signal to noise of 11:1 for particles incident at 45 degrees, a performance we expect to be representative of the modules installed in ATLAS. In addition to refining our simulations of the expected response of the BCM system, we are in the process of implementing the FPGA logic that will be used to identify signals from minimum-ionising particles and apply the necessary coincidence logic to distinguish collisions from beam losses. The BCM system will be ready for first proton collisions at the LHC, where we will build experience with the actual beam conditions and provide a stable and reliable signal of proton loss rates to ATLAS.