The ATLAS beam conditions monitor

The ATLAS beam conditions monitor is being developed as a stand-alone device that separates LHC collisions from background events induced either by beam-gas interactions or by beam accidents, for example scraping at the collimators upstream of the spectrometer. This separation is achieved by timing coincidences between two stations placed symmetrically around the interaction point. The 25 ns repetition of collisions poses very stringent requirements on the timing resolution: the optimum separation between collision and background events is just 12.5 ns, implying a distance of 3.8 m between the two stations. Pulses 3 ns wide are required, with 1 ns rise time and baseline restoration within 10 ns. Combined with the radiation field of 10^15 cm^-2 expected over 10 years of LHC operation, only diamond detectors are considered suitable for this task. pCVD diamond pad detectors of 1 cm^2 area and around 500 μm thickness were assembled with a two-stage RF current amplifier and tested in a proton beam at MGH, Boston, and in an SPS pion beam at CERN. To increase the signal-to-noise (S/N) ratio, two back-to-back diamonds were read out by a single amplifier and the detectors were inclined at 45 degrees. Limiting the readout bandwidth to 200 MHz provided further improvement; an S/N ratio of nearly 10:1 was achieved with MIPs. Amplifiers of both stages were irradiated with protons and neutrons to 10^15 cm^-2. Evaluating the irradiated electronics with silicon pad detectors, a 20% degradation in S/N ratio was observed. Ten detector modules are being assembled and tested at CERN for final installation into the ATLAS pixel support structure at the beginning of 2006.


Introduction
One of the worst-case scenarios in Large Hadron Collider (LHC) operation arises when several proton bunches hit the collimators designed to protect the detectors. While the accumulated radiation doses from such unlikely accidents correspond to those acquired during several days of normal operation, and as such make no major contribution to the integrated dose, the enormous instantaneous rate might cause detector damage. The ATLAS Beam Conditions Monitor (BCM) is designed to detect such incidents and trigger a beam abort before damage occurs. Beam-gas interactions are a further concern, especially in the early days of LHC running.
Common to both of these backgrounds is that they initiate charged particle showers originating well up- or downstream of the ATLAS interaction point. Given two detector stations placed symmetrically about the interaction point at ±z, showering particles hit the BCM stations with a time difference Δt = 2z/c. At the LHC design luminosity, collisions add coincident signals (Δt = 0) in these detectors at every bunch crossing (25 ns). To optimally distinguish these two classes of events the BCM stations should be located ~3.8 m apart at z = ±1.9 m, resulting in a Δt of 12.5 ns (Fig. 1).
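The timing geometry above can be checked with a short calculation; this sketch uses only numbers quoted in the text (the 25 ns bunch spacing and the installed |z| = 183.8 cm):

```python
# Back-of-envelope check of the BCM timing geometry (a sketch).
C = 0.299792458        # speed of light [m/ns]
BUNCH_SPACING = 25.0   # LHC bunch spacing [ns]

# Background showers sweep through both stations: Delta-t = 2z/c.
# Collision products arrive simultaneously:       Delta-t = 0.
dt_optimal = BUNCH_SPACING / 2   # 12.5 ns: furthest from both 0 and 25 ns
z_optimal = C * dt_optimal / 2   # station half-distance giving that Delta-t
dt_installed = 2 * 1.838 / C     # Delta-t at the actual |z| = 183.8 cm

print(f"optimal placement: z = +-{z_optimal:.2f} m")            # ~1.9 m
print(f"Delta-t at installed position: {dt_installed:.1f} ns")  # 12.3 ns
```

The installed position, constrained by the pixel support structure, gives 12.3 ns rather than the ideal 12.5 ns.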
The BCM also provides complementary luminosity measurements [1] to those coming from LUCID [2], the main ATLAS luminosity monitor. Adding the BCM information to the ATLAS trigger will allow corrections for bunch-to-bunch luminosity variation. Finally, during the commissioning of the LHC collider, when tracking detectors are switched off, the BCM is likely to be the first detector to report proton collisions in ATLAS.

Beam Conditions at the LHC
The BCM is suspended from the ATLAS Beam Pipe Support Structure (BPSS) that also supports the pixel detector. This places the BCM sensors at a radius of r ~ 55 mm, about 20 mm outside the beam pipe, at |z| = 183.8 cm upstream and downstream of the interaction point, corresponding to a pseudo-rapidity of η ~ 4.2. The resulting Δz gives an almost ideal Δt of 12.3 ns.
An estimate [3] predicts about one particle per cm^2 of sensor from a single 7 TeV proton hitting the TAS collimator, the collimator nearest to the ATLAS interaction point. This is to be compared with ~0.5 particles/cm^2 resulting from minimum bias proton interactions in each bunch crossing (every 25 ns) at the LHC design luminosity of 10^34 cm^-2 s^-1 [1]. To optimally distinguish these two situations the BCM should be sensitive to single minimum ionising particles (MIPs). Given MIP sensitivity, one is then also able to use BCM information for proton-proton collision luminosity assessment. With proton interactions inducing signals every 25 ns, fast processing of the MIP signals is paramount. A fast rise time (~1 ns), narrow pulse width (~3 ns) and baseline restoration within 10 ns are necessary to prevent pile-up. The radiation field at this location will expose the BCM sensors to 10^15 particles, mostly pions, per cm^2 and an ionisation dose of ~500 kGy in 10 years of LHC operation. An additional constraint stems from the fact that the BCM is integrated into the BPSS and covered with layers of pixel services. This renders it almost inaccessible, with any intervention requiring disassembly of a substantial part of the pixel services, an action unlikely to be approved. Thus a simple and robust design was favoured.

Detector Modules
The BCM detector modules include two novel parts. The first is a set of diamond sensors that sit in the very intense radiation region less than 6 cm radially from the LHC beams. The passage of charged particles, either from proton-proton collisions or from the secondary products of lost protons, ionises the diamond, generating MIP signals. The second, at a larger radius but still only 5 cm from the diamond sensors themselves, is a two-stage RF amplifier that boosts the signal from the diamond and transmits it, in analogue form, 15 m off the detector to be digitised. In this section we discuss the two main components of the detector modules: the diamond sensors and the signal pre-amplifiers.

Diamond Sensor Material
Chemical Vapour Deposited (CVD) diamond possesses some remarkable properties which make it an attractive material for use in the BCM system. Increasingly, solid-state particle detectors are required to have fast signals, operate at high rate and, very often, operate in high radiation environments reliably for several years. While silicon, the de-facto standard of solid-state detectors, is very well established in particle detector applications, diamond detectors are competitive in environments that place a premium on radiation hardness and fast signal formation, such as the ATLAS BCM. Typical designs for diamond particle sensors are based on a bulk of free-standing CVD material, usually a few hundred micrometers thick, with electrodes on opposite sides of the diamond bulk as shown in Fig. 2. Prior to deposition of contacts the diamond surfaces are polished, smoothing the surface on the growth side and removing significant amounts of low-grade material from the substrate side. Metal contacts that form suitable carbides are evaporated or sputtered onto both diamond surfaces and annealed. A covering layer of, for example, aluminium is applied to allow wire-bond connections to the readout electronics. The dimensions of the electrodes, deposited with lithography, range from tens of micrometers to centimeters. For sensor operation, a bias voltage is applied between the electrodes to generate a drift field. A traversing charged particle will ionise the atoms in the crystal lattice and leave a trail of primary ionisation charge of 36 electron-hole pairs per micrometer [4,5], denoted Qgen, along its path. The drift of electrons and holes in the applied electric field induces a current pulse on the electrodes.
The induced current, I, can be calculated from the Shockley-Ramo theorem [6,7] for a uniform constant field between the two electrodes as

I = Qgen v / d,   [1]

where Qgen denotes the total generated ionisation charge, v the drift velocity, and d the gap between the electrodes, which is equal to the thickness of the sensor. The readout electronics then measures either the current amplitude or, in the case of charge sensitive amplifiers, the integrated current or total measured charge, Qmeas. The ionisation charge, however, is reduced by charge trapping during the drift. A common figure of merit for the characterisation of CVD diamond sensors is the mean distance electrons and holes drift apart before being trapped, called the charge collection distance (CCD),

CCD = d Qmeas / Qgen,   [2]

which can be related to the mobilities μ and lifetimes τ of the electrons and holes as CCD = (μe τe + μh τh) E, under the assumptions that the sensor thickness is larger than the CCD and that the electric field, E, is uniform. As diamond sensors are usually operated at high field strength, the charge collection distance is usually quoted at 1 V/μm, where the CCD saturates. For applications such as the BCM, an initial charge collection distance beyond 200 μm is required in order for diamond sensors to produce reliable single MIP signals. Figure 3 shows a recent 13 cm diameter CVD wafer ready for tests, with contact pads spaced at 1 cm intervals.
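As an illustration, Eqs. [1] and [2] can be evaluated with the numbers quoted in the text (a 500 μm thick sensor, 36 electron-hole pairs per μm, carrier velocity of order 10^7 cm/s); the peak-current estimate here ignores trapping and is a rough sketch only:

```python
E_CHARGE = 1.602176634e-19  # elementary charge [C]

d = 500.0            # electrode gap = sensor thickness [um]
q_gen = 36.0 * d     # charge generated by a perpendicular MIP [electrons]
v = 100.0            # drift velocity, ~1e7 cm/s = 100 um/ns

# Eq. [1]: current induced by Qgen drifting at velocity v across gap d
i_ramo = q_gen * v / d                 # [electrons/ns], trapping ignored
i_amps = i_ramo * E_CHARGE * 1e9       # converted to amperes

# Eq. [2]: CCD = d * Qmeas / Qgen, e.g. for a measured charge of 9900 e
ccd = d * 9900.0 / q_gen

print(f"Ramo current (no trapping): {i_amps * 1e6:.2f} uA")  # ~0.58 uA
print(f"CCD for Qmeas = 9900 e:     {ccd:.0f} um")           # 275 um
```

The sub-microampere, nanosecond-scale current pulse is what the fast RF amplification chain described below is designed to handle.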
In polycrystalline CVD (pCVD) diamond sensors, charge collection distances of 275 μm have been achieved. In these diamonds, typically 500 μm thick, the charge signal distribution shows a mean charge of 9800 electrons with 99% of the distribution above approximately 3000 electrons [8,9]. The best samples reach a charge collection distance above 300 μm (Fig. 4).
Polycrystalline CVD diamond sensors are ideally suited for use in the BCM system as they are the only sensor material known to fulfil our requirements in terms of signal speed and radiation hardness.
The sensor of choice is the pCVD diamond material developed by RD42 [10] and produced by Element Six Ltd. [11]. The timing properties of the ionisation current signal are excellent due to the high carrier velocity (> 10^7 cm/s) at our operating field of 2 V/μm and short trapping times, even before irradiation. Another clear benefit is the very low leakage current (less than 1 nA), allowing operation at room temperature without cooling. Radiation hardness is proven up to fluences of 2.2x10^15 p/cm^2 with a signal degradation of only 15% [12].

Readout Amplifiers
The signal is fed through a 5 cm long 50 Ω transmission line on the printed circuit board (Fig. 6) to the front-end amplifier. In this way the radiation field at the amplifier location is decreased by about 30%. The front-end [13], designed by FOTEC [14], is a two-stage RF current amplifier utilising the 500 MHz Agilent MGA-62563 GaAs MMIC low noise amplifier in the first stage and the Mini Circuits Gali-52 InGaP HBT broadband microwave amplifier in the second stage. Each stage provides an amplification of 20 dB, with the first stage exhibiting an excellent noise figure of 0.9 dB.
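The low-noise first stage matters because, by the Friis cascade formula, the second stage's noise contribution is suppressed by the first stage's gain. A sketch with the values above; the Gali-52 noise figure is not quoted in the text, so the 4 dB used here is an illustrative assumption:

```python
import math

def db_to_lin(db: float) -> float:
    return 10 ** (db / 10)

# 20 dB per stage and a 0.9 dB first-stage noise figure are from the
# text; the 4 dB second-stage noise figure is an assumed value.
g1 = db_to_lin(20.0)
f1 = db_to_lin(0.9)
f2 = db_to_lin(4.0)

# Friis formula for cascaded amplifiers: F = F1 + (F2 - 1) / G1
f_total = f1 + (f2 - 1.0) / g1
nf_total_db = 10 * math.log10(f_total)

print(f"cascade noise figure: {nf_total_db:.2f} dB")  # ~0.95 dB
```

With 20 dB of first-stage gain, even a mediocre second stage adds only a few hundredths of a dB to the overall noise figure.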
Sensors and FE electronics are mounted in a module box (Fig. 6) designed to shield RF at the BCM operating frequencies. Each of the amplification stages is isolated in a separate shielded compartment. The amplified signal is fed into a high-quality 50 Ω coaxial cable. In prototype tests the signals were digitised with a high bandwidth (> 1 GHz) digital oscilloscope. In ATLAS, digitisation is done with a radiation tolerant ASIC placed outside the calorimeters 15 m from the BCM modules.
To verify the radiation hardness of the amplifiers, several of them were irradiated with protons, neutrons and photons, and subsequently tested. Degradation of amplification at the level of 0.5 dB was observed with the second-stage Gali amplifier. A crucial test was performed by exchanging the first-stage Agilent amplifier of a BCM module with one irradiated to a mixed fluence of 5x10^14 protons/cm^2 and 5x10^14 neutrons/cm^2. Comparing both assemblies with 90Sr source signals from a standard float-zone silicon diode, an amplification loss due to radiation of 20% was observed, with no change in the noise (Fig. 7).

Off-Detector Readout Electronics
The back-end of the BCM readout is responsible for digitising and acquiring the signals from the modules while introducing minimal noise, storing them in a ring buffer, performing some basic analysis and generating outputs for the various parts of the ATLAS DAQ system that allow the BCM information to be read-out for further offline analysis. A Field Programmable Gate Array (FPGA) was chosen to perform these functions because of its high-speed parallel data processing capabilities. We will describe each part of the readout system in turn.

The NINO digitisers
The signal from the sensors and front-end amplifiers travels 15 m through a high-quality coaxial cable to the digitisers, which are placed in a radiation shielded environment behind the ATLAS calorimeters. There the signals are digitised by a radiation tolerant eight-channel NINO chip, an ASIC originally designed for the ALICE experiment at CERN [15].
MIP signals from the diamond sensors all have a similar shape, with amplitudes that follow a Landau distribution. When multiple particles traverse the sensors simultaneously we see a sum of individual MIP signals, still keeping a similar shape. Studies showed that the optimal signal-to-noise ratio with our front-end amplifiers is achieved with the addition of a low-pass filter that limits the bandwidth to 200-300 MHz. Signals to the NINO board are thus fed into a fourth-order 200 MHz filter with a 50 Ω impedance. The NINO then converts an analogue signal of varying amplitude into a digital signal that starts a fixed time after the original analogue signal and has a duration correlated to the input amplitude. The resulting digital signal encodes the charge seen at the front-end in terms of a Time-over-Threshold (see Fig. 8). Due to the relatively small dynamic range of the NINO inputs, the signals from the BCM front-end amplifiers are first split by a voltage divider in a ratio of 12:1 and then fed into different NINO channels. The NINO thresholds are set such that the larger signal is used for truly minimum ionising signals (up to about 10 MIPs) while the smaller signal comes into play if a BCM sensor sees a signal of more than 10 MIPs, which could happen in catastrophic beam loss situations.
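The dual-range scheme can be sketched as follows; the thresholds in MIP units are illustrative assumptions, as the text only fixes the crossover near 10 MIPs:

```python
DIVIDER = 12.0  # front-end signal split ratio (from the text)

def nino_channels(amplitude_mips: float,
                  high_gain_thr: float = 0.5,
                  low_gain_thr: float = 0.8):
    """Return (high_gain_fires, low_gain_fires) for a given pulse.

    The high-gain channel sees the full signal and is tuned for single
    MIPs; the low-gain channel sees the signal attenuated 12:1 and only
    fires for large, beam-loss-like pulses (around 10 MIPs and above).
    """
    high = amplitude_mips >= high_gain_thr
    low = (amplitude_mips / DIVIDER) >= low_gain_thr
    return high, low

print(nino_channels(1.0))   # single MIP       -> (True, False)
print(nino_channels(15.0))  # beam-loss pulse  -> (True, True)
```

Splitting one analogue signal across two discriminator channels trades channel count for dynamic range without touching the front-end gain.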
Each of the NINO outputs is connected to the circuitry that drives a laser diode over 70 m of single mode 1.3 μm optical fibre that brings the signals to a receiver board in the ATLAS counting room.

FPGA Based signal decoders and Coincidence Detection Logic
The sixteen optical signals (eight high amplitude and eight low amplitude) are fed into two receiver boards that translate the optical signals into electrical (PECL) differential signals connected to an FPGA board. The optical input signals and PECL differential signals are available for oscilloscope inspection on the front panel of a double-width 6U VME module. The optical receiver board also fans out the same signals, at 50 Ω, through a LEMO-00 connector on the front panel, to be used for monitoring purposes.
The PECL signals are then fed into the main part of the BCM readout: two Xilinx ML410 development boards [16], each mounted in a 19", 1U housing (also by Xilinx). These were chosen since the small BCM readout system did not warrant the design and manufacture of a custom board.
The ML410 board contains a Xilinx Virtex-4 FX60 FPGA that features eight Rocket-IO serial multi-gigabit transceivers, two PowerPC cores and 56k logic cells. This model was chosen for the excellent sampling capabilities of the Rocket-IO channels (up to 6.5 Gbps) [17]. The incoming data are sampled synchronously with the LHC bunch clock at a rate of 2.56 Gbps (a time slice of 390 ps), obtained by multiplying the LHC bunch clock by a factor of 64 in two separate phase-locked loops. The Rocket-IO channels require transitions in the incoming data stream, so a fixed pattern is generated and XOR-ed with the BCM/NINO signals. Internally, the complementary XOR operation is performed, restoring the original waveform. The data are then stored in a DDR2 RAM that acts as a ring buffer capable of storing BCM hit information from all eight modules (at both thresholds) for up to 900 LHC bunch orbits. In parallel, an edge detection algorithm determines the arrival times of pulses and performs a time-to-digital conversion. At the same time, pulse widths are encoded to digitise the Time-over-Threshold information from the NINO.
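The sampling and decoding chain just described can be sketched in software; the fixed XOR pattern and the example pulse are illustrative, not the pattern actually used in the firmware:

```python
BIN_PS = 390                          # one sample bin at 2.56 Gbps [ps]
PATTERN = [i % 2 for i in range(64)]  # illustrative fixed pattern

def encode(bits):
    """XOR with a fixed pattern to guarantee transitions on the link."""
    return [b ^ p for b, p in zip(bits, PATTERN)]

def decode(bits):
    """The complementary XOR in the FPGA restores the waveform."""
    return [b ^ p for b, p in zip(bits, PATTERN)]

def edge_and_tot(samples):
    """Rising-edge time and Time-over-Threshold for a single pulse."""
    rise = samples.index(1)
    return rise * BIN_PS, sum(samples) * BIN_PS

# A pulse 12 bins (4.68 ns) wide, starting 10 bins (3.9 ns) into the
# 64-bin (25 ns) time slice:
pulse = [0] * 10 + [1] * 12 + [0] * 42
assert decode(encode(pulse)) == pulse   # the XOR round trip is lossless
print(edge_and_tot(pulse))  # -> (3900, 4680)
```

Because XOR is its own inverse, the transition-rich link encoding costs nothing in information: edge times and pulse widths are recovered exactly, quantised to the 390 ps bins.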
The basic hit-or-miss information from every detector is provided to the ATLAS Central Trigger Processor (CTP) [18] and thus can be used for the ATLAS Level 1 Accept (L1A) decision. To be used in this way, these signals must be provided within 1.5 µs of the actual beam crossing in ATLAS. This is the most time-critical path of the BCM read-out, so processing is performed as fast as possible. The algorithm is structured as a pipelined binary search tree, taking advantage of the FPGA's internal structure of four-input Look-Up Tables [19]. The pipeline latency is 5 LHC bunch clock cycles, or 125 ns, which easily achieves the required latency even when the FPGA input and output overheads and cable delays are included.
The digitisation and acquisition parts have been implemented and verified on a Xilinx ML405 evaluation board. Pulses with a fixed frequency from an HP pulse generator were used as input signals, and the pulse width measurements on a Tektronix TDS5104B scope were compared with the values obtained by the FPGA algorithm. Figure 9 shows the distribution of the FPGA digitised times, for an input pulse width of 4.5 ns, demonstrating the excellent performance of the Rocket-IO acquisition.
Additional analysis to be performed by the FPGA includes the calculation of in-time and out-of-time coincidences of signals between detectors in the two BCM stations. Continuously accumulating histograms will provide status information about the beams and interaction point in ATLAS. These histograms will be read out by BCM monitoring software on a timescale of minutes. The FPGA also has to act as a Read-Out Driver (ROD). It provides data in the ATLAS Raw Event Format after an L1A over a Read-Out Link adhering to the S-Link specification [20], as well as interfacing to the ROD Crate DAQ (RCD) framework and the Local Trigger Processor for integration in the ATLAS Trigger and Data Acquisition system [21]. For this we use the standard ATLAS S-Link interface, HOLA [22]. An Ethernet connection to the RCD controller is foreseen. The FPGA is also connected via Ethernet to a PC for slow read-out and integration into the ATLAS Detector Control System via its PVSS-JCOP interface. This gives us the possibility of adjusting on-board analysis and acquisition parameters. Figure 10 shows a schematic of the BCM readout and its connection to the rest of ATLAS.
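The in-time/out-of-time classification between the two stations can be sketched as follows; the 2 ns window width is an illustrative assumption, not a value from the text:

```python
DT_BACKGROUND = 12.3  # background time-of-flight difference [ns]

def classify_pair(t_up: float, t_down: float, window: float = 2.0) -> str:
    """Classify a hit pair from opposite BCM stations by time difference."""
    dt = abs(t_up - t_down)
    if dt < window:
        return "in-time"       # collision-like: Delta-t ~ 0
    if abs(dt - DT_BACKGROUND) < window:
        return "out-of-time"   # background-like: Delta-t ~ 12.3 ns
    return "unclassified"

print(classify_pair(100.0, 100.4))  # collision products  -> in-time
print(classify_pair(100.0, 112.3))  # sweeping shower     -> out-of-time
```

In the FPGA this comparison is done per bunch crossing on the time-stamped edges from the Rocket-IO acquisition, with the results accumulated into the status histograms mentioned above.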

Testing and Qualification of Prototype Detector Modules
Prototype BCM detector modules were subjected to a number of tests to ensure they had suitable MIP detection performance. Prototype assemblies were tested with electrons from a 90Sr source, with 125 and 200 MeV/c protons at the Massachusetts General Hospital radiation therapy facility in Boston, and with high energy pion beams at KEK and the CERN SPS. Results from these tests are summarised briefly here. For more details see Refs. [23,24].
The most important conclusions of these studies were that:
• Inclining the sensors at 45° with respect to the trajectory of the particle to be detected resulted in a √2 increase in signal and had no effect on noise;
• The use of double-decker sensors on the same amplifier input doubled the signal while increasing the noise by ~30%, improving the signal-to-noise ratio (SNR) by ~50%;
• The timing differences between independent modules exhibited a FWHM of 1.5 ns;
• Limiting the readout bandwidth to 200 MHz improved the SNR by 20% while only degrading the time correlations by 10%;
• Off-line processing of fully digitised analogue waveforms confirmed that the optimum SNR is achieved with a low-pass filter having a pole at 200-400 MHz.
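The first two conclusions follow from simple arithmetic (a sketch; the ~30% noise increase for the double-decker configuration is taken from the text):

```python
import math

# 45-degree inclination: the particle's path through the diamond (and
# hence the ionisation signal) grows by 1/cos(45 deg) = sqrt(2), while
# the electronic noise is unchanged.
signal_gain = 1.0 / math.cos(math.radians(45.0))

# Double-decker sensors: signal x2 at the cost of ~30% more noise, so
# the SNR improves by a factor of 2/1.3, i.e. roughly 50%.
snr_gain = 2.0 / 1.3

print(f"45-degree signal gain:  {signal_gain:.3f}")  # 1.414
print(f"double-decker SNR gain: {snr_gain:.2f}")     # ~1.54
```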

Bench Tests
With the final production modules, extensive qualification tests were performed using a 90Sr source as a MIP signal equivalent. The BCM signal was recorded with a LeCroy oscilloscope (4 GHz sampling), triggered by a scintillator behind the diamond sensor. This configuration results in a trigger on electrons above 2 MeV from the 90Sr source. These in turn deposit about 10% more charge in the diamond sensors than true MIPs. Using a 200 MHz bandwidth limit on the scope gives single event signals such as the one shown in Fig. 11. The signal is taken as the maximum reading within 2 ns of the trigger, and the noise is estimated from the baseline fluctuations in a 20 ns interval well before the trigger. The noise was found to be independent of the electric field across the sensors up to 3 V/μm. Good reproducibility of signals has been observed, with the signal amplitude stable to better than 4% during a 24 hour test. SNR values of ~8 have been routinely obtained at 2 V/μm bias with the 90Sr electrons incident perpendicular to the diamond sensors.
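The signal and noise definitions above can be written as a small estimator; the 0.25 ns sampling step and the toy waveform are illustrative, not real scope data:

```python
import statistics

def estimate_snr(waveform, trigger_idx, dt_ns=0.25):
    """Signal = maximum sample within 2 ns after the trigger;
    noise = RMS of the baseline in a 20 ns window before it."""
    n_sig = int(2.0 / dt_ns)    # samples in the 2 ns signal window
    n_base = int(20.0 / dt_ns)  # samples in the 20 ns baseline window
    signal = max(waveform[trigger_idx:trigger_idx + n_sig])
    noise = statistics.pstdev(waveform[:n_base])
    return signal / noise

# Toy trace: alternating +-1 mV baseline (RMS = 1), then an 8 mV pulse.
trace = [1, -1] * 50 + [0, 2, 8, 3, 0]
print(f"SNR ~ {estimate_snr(trace, trigger_idx=100):.1f}")  # -> 8.0
```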
A peculiar feature has been observed, with the diamond leakage current in the BCM modules rising by a factor of more than 100 to several hundred nA on a time scale of days. In addition, this leakage current shows erratic behaviour on a time scale of minutes, rising and falling by factors of ten. This phenomenon, yet to be understood, has been observed before in the BaBar experiment at lower electric fields of 1 V/μm [12]. As at BaBar, we observe that the excess current vanishes if the diamond is placed in a strong magnetic field. Applying a 2 T field, as will be present in the ATLAS Inner Detector, in a realistic geometry with the BCM module inclined at 45°, reduced the current to well below 10 nA for a period of nearly three days (Fig. 12). In any event, the BCM readout noise is observed to be independent of the leakage current up to 500 nA (Fig. 13).

In a pion beam at KEK the detector response to single MIPs was studied. Typical signal and noise distributions gave an SNR of about 7.5. Here, the SNR distribution was obtained by dividing the signal amplitudes by the RMS of baseline fluctuations in time intervals where no pion beam was present. We also observed that including a 200 MHz low-pass filter improved the SNR by about 20% with respect to measurements made with the originally intended 500 MHz amplifier bandwidth limit (see Fig. 15). This was confirmed by applying first-order filters offline to the data taken at full bandwidth (see Fig. 16). The typical timing resolution was estimated from the time difference distribution for simultaneous events from two different detectors (see Fig. 17). The width of this distribution was about 1 ns, more than sufficient for our timing needs. We observed less than a 10% change in the width of the timing distribution when the 200 MHz bandwidth limit was added.

Measurements
The testbeam signal amplitude measurements compare favourably with those made on the same modules using a 90Sr source. A source setup was developed and used for the reception tests of the final detectors. A typical distribution of signals and noise obtained at a 200 MHz limited bandwidth is shown in Fig. 18.
A further test-beam campaign was carried out in the summer of 2006 in the CERN PS (T11 and T9) and SPS (H6 and H8) pion beams. The aim was to thoroughly evaluate all modules produced and select the eight best for installation. Four BCM modules were put in the beam simultaneously (Fig. 19). Signals from two were amplified in an ORTEC FTA810 300 MHz amplifier and read out with a CAEN V1729 12-bit ADC with 2 GHz sampling. For these, complete analogue and timing information was recorded. Signals from the other two modules were fed into prototype NINO boards [15], which in turn were recorded by a CAEN ADC. The NINO threshold settings were varied run-by-run to study efficiency and noise occupancy under realistic conditions. An eight-plane (four horizontal and four vertical) silicon telescope, provided by the University of Bonn, produced precision tracking of the beam pions on an event-by-event basis. The coincidence signal from two plastic scintillators was used to trigger the readout. Events from the BCM and silicon telescope were recorded synchronously by their respective DAQ systems and the data re-assembled off-line. The BCM was read out with production services through to the NINO digitisation. The high voltage was supplied by an ISEG EHQ-8210 modified to provide 1 nA current monitoring. Low voltages (3 and 11 V) for the front-end amplifiers were sourced from a modified version of the custom ATLAS-SCT power supplies that will be used to power the BCM. These voltages were merged into a single multi-core power cable. The analogue signal was read out by the NINO through a 1.5 m long stretch of GORE 41 0.19" diameter coaxial cable and a 12 m length of Heliax FSJ1RN-50B 1/4" diameter coaxial cable from ANDREW, the final powering and readout configuration foreseen for ATLAS.
The testbeam pions had momenta of 3.5 (T11) and 12 GeV/c (T9). An analysis of NINO threshold scans produced efficiency and noise occupancy estimates. Tracks with hits in all reference telescope planes and having a good fit quality were selected. Tracks that crossed the central 3x5 mm^2 region of the diamonds were used to compute the efficiency, while those missing the diamond by more than 2 mm provided a sample for noise occupancy estimates. The corresponding NINO signal was sought in a 60 ns time window around the arrival time of the beam particle provided by the trigger scintillators. An example of the hit distribution from the reference telescope and the corresponding NINO signals can be seen in Fig. 20. The resulting efficiencies and noise occupancies as a function of NINO threshold are shown in Fig. 21. The efficiency saturates at thresholds below 30 mV, approaching values above 95% for thresholds as low as 20 mV. Fifty percent efficiency is reached for thresholds between 70 and 90 mV, depending on the BCM module under study. As the full threshold range of the NINO spans 300 mV, an additional amplifier with a gain of ~3 has been added to the final ATLAS system. The noise occupancy exceeds the 10^-3 level for thresholds of 50 mV, rising to 1% at 20 mV. At the very lowest thresholds, we believe we are observing the intrinsic NINO noise. Figure 22 shows the spatial distribution of tracks that generated a BCM pulse of 30 mV, or about 1/3 of a MIP.
In 2007, we performed further testbeam studies with three spare BCM modules. These tests included production versions of all elements of the back-end readout, including NINO discriminators, LVDS-to-optical converters and optical receivers at the front-end input to the FPGA coincidence detection logic boards. While we have not fully analysed these testbeam data, we have already extracted a measure of the overall system SNR, including both the analogue performance of the front-end modules and the digital performance of the NINO discriminators. Following [25], the noise in a self-triggering digital readout system can be extracted from the 'beam-off' count rate through a fit of the form

ln(Noise Rate) = A - (Threshold - offset)^2 / (2 σ^2).   [3]

From Fig. 23 we extract a noise value of 31 mV. One can then extract the median signal from a study of the efficiency (the count rate for events that are known to have beam particles from an external tracking telescope) versus threshold for the same module. As Fig. 24 shows, the median efficiency for this module is reached at a threshold of 335 mV. Thus, we conclude that this module, typical of those installed in ATLAS, has a median-signal-to-noise ratio of 11:1.
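This extraction can be sketched on synthetic data: for Gaussian baseline noise the beam-off rate falls quadratically with threshold on a log scale, ln(rate) = A - (thr - offset)^2/(2σ^2), so a linear fit of ln(rate) against (thr - offset)^2 recovers σ. All parameter values here are illustrative, chosen so that the generated data have σ = 31 mV, the value quoted above:

```python
import math

A, OFFSET, SIGMA = 10.0, 5.0, 31.0       # illustrative parameters [mV]
thresholds = [40.0, 60.0, 80.0, 100.0, 120.0]

# Synthetic 'beam-off' data: ln(rate) = A - (thr - offset)^2/(2 sigma^2)
xs = [(t - OFFSET) ** 2 for t in thresholds]
ys = [A - x / (2.0 * SIGMA ** 2) for x in xs]

# Least-squares slope of ln(rate) vs (thr - offset)^2 ...
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))

# ... from which sigma = sqrt(-1 / (2 * slope))
sigma_fit = math.sqrt(-1.0 / (2.0 * slope))
print(f"extracted noise: {sigma_fit:.1f} mV")  # -> 31.0
```

In practice the offset is a fit parameter as well, and the rates carry Poisson errors; this sketch only shows the shape of the extraction.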

Quality Assurance with Production Modules
In late fall 2006, qualification tests of the final modules were performed to select the eight most reliable for installation. Before assembly, all modules were cleaned with Vigon EFM solution in order to remove remnants of solder flux and organic pollutants. Afterwards, the modules were subjected to a thermo-mechanical test. Before and after this test, all modules were characterised in our 90Sr setup to measure their SNR. Figure 25 shows a typical signal and noise spectrum.
For one of the final modules a test of accelerated aging was performed. Its temperature was increased to 140 °C for 14 hours. This simulates more than 10 years of operation at 20 °C, assuming the activation energy of 0.8 eV characteristic of the epoxy and solder used to assemble the module. No change in signal-to-noise was observed. All modules were baked at 80 °C for 12 hours to expose infant mortality in the readout chips. The modules will experience a similar temperature when the LHC beam-pipe is baked out. We then performed a series of thermal cycles to generate the stresses due to mismatches in the thermal expansion coefficients of components in the BCM modules. Each module experienced ten temperature cycles with humidity set to zero and the temperature ranging from -25 °C to 40 °C. Both ends of this range are more extreme than expected in normal ATLAS operation, except for beam-pipe bake-out. A comparison of results from bench measurements with 90Sr before and after the thermo-mechanical treatments shows no change in SNR. More importantly, no modules failed during these acceptance tests.
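The aging equivalence can be checked with the standard Arrhenius acceleration factor AF = exp[(Ea/k)(1/T_use - 1/T_stress)], using only the values quoted above:

```python
import math

K_B = 8.617333262e-5   # Boltzmann constant [eV/K]
EA = 0.8               # activation energy from the text [eV]
t_use = 20.0 + 273.15      # normal operation [K]
t_stress = 140.0 + 273.15  # accelerated-aging test [K]

# Arrhenius acceleration factor between the two temperatures
af = math.exp(EA / K_B * (1.0 / t_use - 1.0 / t_stress))
equivalent_years = 14.0 * af / (24.0 * 365.0)

print(f"acceleration factor: ~{af:.0f}")
print(f"14 h at 140 C ~ {equivalent_years:.0f} years at 20 C")
```

The acceleration factor is close to 10^4, so 14 hours at 140 °C corresponds to roughly 16 years at 20 °C, consistent with the "more than 10 years" claim.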
During the acceptance tests, all modules were tested with both positive and negative electric fields. The diamond sensors exhibit slight differences in leakage current and signal size depending on the polarity, which is understood to be a vestige of the direction in which the CVD sensor material was grown.
When building BCM modules we attempted to pair diamonds such that their preferred polarities agreed. As a result, a number of the final modules prefer a positive electric field configuration while others prefer a negative field configuration. Acceptance test results for the relevant bias voltage polarity of the eight best modules selected for installation in ATLAS are summarised in Table 1.

Mechanical Support, Alignment and Detector Integration
The BCM modules are mounted in brackets supported from a cruciform on the pixel Beam Pipe Support Structure (BPSS). One station of the final BCM assembly is shown on the pixel BPSS in Fig. 26.
In January 2007, the eight modules shown in Table 1 were mounted on the ATLAS pixel support frame. The position of each of the modules in the BPSS frame was measured using the mechanical survey equipment in place to ensure the parallelism of the BPSS bars and the overall straightness of the pixel detector support structure. When combined with high resolution photographs of the BCM module boxes (Fig. 5), which include images of the diamond sensor locations as well as the edges of the G10 BCM module boxes, this survey allows us to predict the positions of the BCM sensors with a precision of 1 mm. This spatial information will be used to relate observed rate differences between the different BCM stations to the position of the LHC beam, providing O(1 mm) precision with a very rapid turnaround, perhaps even before it has been deemed safe to switch on other ATLAS detector systems.
Noise measurements of the BCM modules were repeated after installation in the BPSS, and again after partial installation of the pixel readout system, in order to check for noise interference between the two systems. In these tests two BCM modules were measured: one positioned directly below the pixel system being read out at the time, and a second BCM module furthest away from the active pixel modules. Two measurements of the BCM module noise were performed. For the first, a random trigger was used and only one pixel readout unit was active. For the other, all available pixel readout modules were active and the trigger was a 40 MHz clock from the pixel timing module, which simulated the LHC bunch clock for the pixel readout system. The BCM module noise was computed from baseline fluctuations in a 20 ns window a fixed time before the trigger, just as had been done in the module qualification measurements described above. The noise values measured were all compatible with those from the acceptance tests (see Table 1). In particular, no difference in noise was observed in any of the pairs of tests (random trigger and partial pixel readout vs. synchronised trigger and full pixel readout), or between BCM modules close to (within 10 cm of) the active pixel readout and those some 4 m away on the other side of the pixel support frame.

Beam Conditions Monitor Simulation Studies
We have developed a full GEANT [26] model of the BCM detector modules and included it in the full ATLAS detector simulation. This has allowed us to expand on the simulations used for the original design [3] and begin detailed studies of different algorithms that could be implemented in our readout system. Here we report on the characteristic BCM responses from LHC proton-proton collisions as well as those resulting from protons that have been lost from the machine. We include a study of module occupancies for single proton collisions, typical of a luminosity of 5x10^32 cm^-2 s^-1, and for the full design luminosity of 10^34 cm^-2 s^-1, where over twenty simultaneous proton collisions are expected.
Our BCM model includes all the material in the module boxes (see section 3) as well as the connectors and cables that service the module. A picture of the GEANT volumes simulated is shown in Fig. 27. This is embedded in a full description of the ATLAS pixel geometry. Figure 29 shows the BCM hit rates (top) and coincidence rates for beam losses both on the TAS collimators (solid) and directly on the beam-pipe (dashed). While the coincidence rates are not as large as during LHC collisions at full luminosity and full machine energy, the BCM should be sensitive to these losses during the early stages of injection and thus provide fast feedback. Figure 30 shows the number of BCM modules hit for a single 14 TeV proton-proton collision [27], corresponding to a proton-proton collision luminosity of 5x10^32 cm^-2 s^-1. It is clear that this provides efficient detection of collisions on a crossing-by-crossing basis. We continue to refine our simulation of possible beam loss scenarios and collisions, and use these to guide the development of the FPGA algorithms that will implement our coincidence strategies when we see the first beams.

Summary
Beam tests of BCM production modules have shown that adequate performance in terms of SNR and timing can be achieved with pCVD diamond sensors and fast RF current amplifiers. We expect this performance to be representative of the modules installed in ATLAS. In addition to refining our simulations of the expected response of the BCM system, we are in the process of implementing the FPGA logic that will be used to identify signals from minimum-ionising particles and apply the necessary coincidence logic to distinguish collisions from beam losses. The BCM system will be ready for first proton collisions at the LHC, where we will build experience with the actual beam conditions and provide a stable and reliable signal of proton loss rates to ATLAS.

Fig. 1: Sketch of the ATLAS detector with two BCM stations at z = ±1.9 m, showing a background event on the TAS collimator and a genuine interaction event.