Design and Performance of the Detector Control System of the ATLAS Resistive-Plate-Chamber Muon Spectrometer

Muon detection plays a key role at the Large Hadron Collider. Resistive Plate Chambers (RPC) provide the barrel region of the ATLAS detector with an independent muon trigger as well as a two-coordinate measurement. The chambers, arranged in three concentric layers, are operated in a strong toroidal magnetic field and cover a surface area of about 4000 m². The RPC Detector Control System (DCS) is required to monitor and safely operate tens of thousands of channels, distributed over several subsystems, including low- and high-voltage power supplies, trigger electronics, current and threshold monitoring, environmental sensors, and the gas and electronics infrastructure. The DCS is also required to provide a level of abstraction for ease of operation, as well as specific tools allowing expert actions and detailed analysis of archived data. The hardware architecture and the software solutions adopted are described in detail, along with a few results from the commissioning and first running phases. The material presented here can be used in future test facilities and projects.


Introduction
Resistive Plate Chambers (RPC) [1] provide the barrel region of the ATLAS detector [2] with an independent muon trigger as well as a two-coordinate measurement. With a surface area of about 4000 m², an active detector gas volume of about 18 m³, and several tens of thousands of power and control items, all to be operated in a strong magnetic field and in a radiation area, the Detector Control System (DCS) of the ATLAS RPC is among the state-of-the-art detector control systems in High Energy Physics. The system has to monitor the detector conditions; control all related subsystems, including the supply of low and high voltages (LV, HV), the trigger electronics, the detector infrastructure and the environmental conditions; and store all the relevant information in the central ATLAS database for later queries or analyses. In addition, detector problems and faults need to be spotted promptly in order to guarantee efficient data taking and safe operation.
In this document, after a brief detector description, the architecture of the RPC DCS is presented, starting from the hardware requirements, the choices made and their implementation. Later sections discuss the computing infrastructure, the integration with the rest of ATLAS and the software solutions adopted. Some results on the performance during the first LHC running are also shown, along with an outlook on future developments. Further results and detector studies using the DCS can be found elsewhere [3,4].

The ATLAS Resistive Plate Chambers
A cross-sectional view of an RPC layer is shown in Fig. 1. An ATLAS RPC is made of two such layers, each with two 2 mm thick bakelite laminate plates. The plates are kept 2 mm apart by insulating spacers, enclosing a gas volume filled with a mixture of C2H2F4 (94.7%), C4H10 (5%) and SF6 (0.3%). The external surface of the plates is coated with a thin layer of graphite paint to allow a uniform distribution of the high voltage along the plates. The smoothness of the inner surfaces is enhanced by means of a thin layer of linseed oil. The HV working point is chosen to be 9.6 kV at a temperature of 22 °C and a pressure of 970 mbar. In these conditions the RPCs work in saturated avalanche mode, inducing, for a minimum ionizing particle, a prompt charge of about 1 pC on the pick-up strips and delivering in the gas an average total charge of 30 pC [5]. The discharge electrons drift in the gas and the signal, induced on pick-up copper strips, is read out via capacitive coupling and detected by the front-end electronics. Read-out strips have a typical width of ∼30 mm and are grouped in two (η and φ) read-out panels with strips orthogonal to each other. Custom read-out electronics amplifies, discriminates and converts the detector signals to the ECL standard. These signals are passed to the on-detector trigger electronics (PAD) [6] which, by requiring appropriate coincidences in the η and φ detector layers, provides ATLAS with a Level-1 trigger decision as well as with the detector data.

The Power and Control System
The detector elements described above require several power and control lines: the HV supply for the gas gain, the LV supply for the front-end electronics, the trigger PAD system and the infrastructure (environmental sensors and system monitoring), all to be operated in the detector area with radiation and a high magnetic field. The hardware chosen by the RPC collaboration for the power system is based on the commercial CAEN EASY (Embedded Assembly SYstem) solution [7], which consists of components made of radiation- and magnetic-field-tolerant electronics (up to 2 kG) and is based on a master-slave architecture. Branch controllers, hosted in a CAEN mainframe, act as master boards, monitoring and controlling the electronics in up to 6 remote EASY crates. Fig. 2 shows such a setup. The mainframes (SY-1527) and the branch controllers are located in the ATLAS counting rooms. The radiation- and magnetic-field-tolerant electronics, including power supplies, HV, LV, ADC and DAC boards, are located in the experimental cavern. Table 1 lists the CAEN components connected to the ATLAS RPC DCS and required to operate the detector. The layout of the HV and LV supply has been organized to minimize detector inefficiencies in the case of a single failure of one service. The large number of installed ADC channels is used to monitor the current of each individual RPC gap (∼3600). The remaining channels are used to monitor with high granularity the current draw of the front-end electronics as well as the RPC gas and environmental sensors (temperature, atmospheric pressure, relative humidity and gas flow). The possibility of tuning the threshold and monitoring the current of each RPC gap has proven to be very powerful for tracing problems and fine-tuning the detector. It is worth noting that the ratio between the number of RPC read-out channels (370,000) and the number of threshold or gap-current measurement channels (∼3600) is about 100.
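The master-slave topology described above can be sketched in code. The following is an illustrative model only, assuming hypothetical class names (it is not CAEN's API); the one hard constraint it encodes, taken from the text, is that a branch controller serves at most 6 remote EASY crates.

```python
# Illustrative sketch of the CAEN EASY master-slave addressing scheme:
# a counting-room mainframe hosts branch controllers, each of which
# drives up to 6 remote EASY crates in the experimental cavern.
from dataclasses import dataclass, field

MAX_CRATES_PER_BRANCH = 6  # limit quoted in the text

@dataclass
class EasyCrate:
    crate_id: int
    boards: list = field(default_factory=list)  # HV, LV, ADC, DAC boards

@dataclass
class BranchController:
    slot: int
    crates: list = field(default_factory=list)

    def add_crate(self, crate: EasyCrate) -> None:
        if len(self.crates) >= MAX_CRATES_PER_BRANCH:
            raise ValueError("a branch controller serves at most 6 EASY crates")
        self.crates.append(crate)

@dataclass
class Mainframe:  # e.g. a CAEN SY-1527 in the counting room
    name: str
    branches: list = field(default_factory=list)

    def n_crates(self) -> int:
        return sum(len(b.crates) for b in self.branches)
```

With 4 or 5 branch controllers per mainframe (as in the final configuration described later), this gives the ∼20 EASY crates per mainframe quoted in the text.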

DCS Architecture and Software Framework
In accordance with ATLAS and CERN official guidelines, all RPC DCS applications have been developed using the commercial Supervisory Control And Data Acquisition (SCADA) software PVSS II [8] (now version 3.8), integrated with the CERN Joint Control Project (JCOP) framework components [9]. This environment provides the scalability required to monitor and control large and complex systems with several thousands of channels, allowing a structured and hierarchical organization and a very flexible user interface. It supports the most common standards to connect to hardware devices and to external databases, namely ORACLE, which are in use at CERN.
For the RPC, given the large size of the system to control, the load of the control and monitoring tasks was distributed over several systems to allow reliable performance in terms of stability and speed. The present configuration, in use since the very first LHC collisions, is sketched in Fig. 3. While the remote hardware (HV and LV supplies, etc.) placed in the experimental cavern was allocated from the beginning, the infrastructure in terms of CAEN mainframes and controlling computers was adapted and upgraded during the commissioning phase. The number of mainframes was increased from two to four in order to guarantee the best performance and stability. In this configuration each mainframe is in charge of 4 or 5 branch controllers, for a total of about 5000 channels, ∼20 EASY crates and four of the 16 detector sectors. The computers have also been upgraded to dual quad-core Intel processors with 8 GB of memory and run Windows Server 2003. Although PVSS is available on both Linux and Windows platforms, the need for OPC-based communication with the CAEN mainframes (as with most common off-the-shelf solutions) biased the choice toward Windows. The described hardware configuration with the latest firmware versions (CAEN 3.0 with Event Mode enabled) has shown satisfactory stability over the last year of cosmic and luminosity running.

System Hierarchy, User Interfaces and Finite State Machine
The ATLAS DCS has been designed as a distributed system made of several computers organized in a hierarchical structure. A Finite State Machine (FSM) provides the translation from the virtually infinite number of conditions the detector and its thousands of analog and digital devices might be in, to a limited set of known states. At the lowest level of the hierarchy (Fig. 3) are the Local Control Stations, i.e. systems which have a direct connection to hardware devices (LV and HV channels, services, infrastructure monitoring, etc.). At a higher level, each subdetector has a Subdetector Control Station which owns the subdetector top node and provides access for subdetector user interfaces. At the highest level is the Global Control Station, which connects the whole experiment and summarizes the ATLAS state. In ATLAS the various phases of data-taking preparation and running are described by states (SHUTDOWN, TRANSITION, STANDBY, READY, ...), their transition commands (GOTO SHUTDOWN, GOTO STANDBY, GOTO READY, ...) and alarm severity conditions (OK, WARNING, ERROR, FATAL). The commands, sent from the central DCS or the Subdetector Control Station, are propagated through the FSM tree down to the hardware devices. The hierarchical tree structure allows only vertical data flow: commands move downwards, while alarms and state changes propagate upwards. Node masking is available to exclude or allocate subcomponents of the tree for specific studies or maintenance. An access control mechanism enforces system safety by allowing only authorized users to perform specific actions. Fig. 4 shows the FSM top panel of the RPC DCS, with a synoptic view of the 16 detector sectors (HV and LV) as well as a summary of the main infrastructure blocks (Power, Trigger Electronics, DAQ, Gas, Data Quality, etc.). The RPC FSM top node is connected to the ATLAS central DCS and is used to communicate and exchange states, commands and severities.
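The vertical data flow just described (commands downwards, states upwards, with node masking) can be illustrated with a minimal sketch. The state and command names are taken from the text; the tree API itself is a hypothetical simplification, not the JCOP FSM framework.

```python
# Minimal sketch of vertical command/state flow in an FSM tree:
# commands propagate down to leaf (hardware) nodes, summarized
# states propagate back up; masked nodes are excluded.
STATES = ["SHUTDOWN", "TRANSITION", "STANDBY", "READY"]  # least to most ready

class FsmNode:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.state = "SHUTDOWN"
        self.masked = False

    def command(self, cmd):
        """Commands move downwards through the tree."""
        target = cmd.replace("GOTO ", "")
        if not self.children:            # leaf: acts on a hardware device
            self.state = target
        for child in self.children:
            if not child.masked:
                child.command(cmd)
        self.state = self.summary()

    def summary(self):
        """States propagate upwards: a parent takes the least-ready
        state among its unmasked children."""
        if not self.children:
            return self.state
        active = [c.state for c in self.children if not c.masked]
        return min(active, key=STATES.index) if active else self.state
```

A Subdetector Control Station top node would sit at the root of such a tree; masking a faulty leaf allows the rest of the sector to reach READY.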
Information from external systems (LHC beam conditions, central gas system, etc.) is distributed and used to verify the proper detector conditions or to trigger automatic safety actions in case of failures. Several detector-specific panels and expert tools, enriched with histograms, tables and trends, are provided to study the detector performance and to help during maintenance periods. Navigation through the FSM tree discloses many monitoring panels and expert tools.

System and Software Peculiarities
Several special features have been developed within the RPC DCS. Due to space limitations, only a few are described here.

Device Representation and FSM Design
In the RPC DCS, all hardware devices (i.e. power, ADC or DAC channels, electronic boards, etc.) have been extended with a set of user-defined registers describing attributes such as calibration constants, set-point values and ranges, geometry, channel type, quality and mask information. This information, along with the actual values returned by the hardware, is used as input to a dedicated custom internal function, named RpcFsm, which combines the channel type identifier with a bit-pattern in which each bit represents a single conditional check on the input registers. Since this function changes slowly and is updated only on change, it completely describes the state of the original hardware device and is the building block for any more complex device, such as a detector chamber. Fig. 5 illustrates this scheme. The device logic, which is developed outside the JCOP framework, is run only when either the RpcFsm functions change or an FSM action on the detector is performed. A dedicated PVSS manager handles this with minimum overhead and returns control to the standard FSM framework when done.
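The bit-pattern idea can be sketched as follows. The bit assignments, tolerance parameter and state mapping here are illustrative assumptions, not the actual RpcFsm encoding; the point is that each bit records one conditional check on the device registers, and the coarse FSM state is derived from the resulting word.

```python
# Hedged sketch of a bit-pattern status word: each bit encodes one
# conditional check on the device registers (assignments illustrative).
BIT_ON     = 0  # channel is switched on
BIT_AT_SET = 1  # monitored voltage within tolerance of the set point
BIT_TRIP   = 2  # over-current trip flag raised
BIT_MASKED = 3  # channel excluded from operation

def rpc_status_word(vmon, vset, v_tol, on, tripped, masked):
    word = 0
    if on:                       word |= 1 << BIT_ON
    if abs(vmon - vset) < v_tol: word |= 1 << BIT_AT_SET
    if tripped:                  word |= 1 << BIT_TRIP
    if masked:                   word |= 1 << BIT_MASKED
    return word

def state_from_word(word):
    """Map the bit-pattern to a coarse FSM state (illustrative logic)."""
    if word & (1 << BIT_MASKED):
        return "MASKED"
    if word & (1 << BIT_TRIP):
        return "ERROR"
    if (word & (1 << BIT_ON)) and (word & (1 << BIT_AT_SET)):
        return "READY"
    return "TRANSITION"
```

Because the word changes only when a check flips, downstream FSM evaluation can be triggered on change rather than on every hardware reading, which is the overhead-saving property the text describes.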

Environmental Sensors and HV Working Point Correction
The RPC detector performance and aging are strongly related to the environmental parameters, namely the temperature (T), the atmospheric pressure (P) and the relative humidity. The gas gain, the noise rate and the dark current of the chamber depend on these parameters. An HV correction to the applied voltage (V_appl) has been put in place according to the formula V_eff = V_appl · (T/T_0) · (P_0/P), where T and P are measured and T_0, P_0 and V_eff are the reference values (see Sec. 2). An extended network of about 400 sensors has been installed on the detector in order to monitor temperature differences, which can easily amount to a few degrees between, for instance, the top and the bottom of the experimental cavern. During running conditions, temperature and pressure changes contribute voltage corrections of up to a couple of hundred volts. In the DCS software, a dedicated manager running on the Local Control Stations, when enabled, updates the working point every few minutes. This is done by setting appropriate PVSS Command Conversion factors on the HV set points, which themselves remain constant.
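The correction can be made concrete by inverting the formula above: to hold V_eff at its reference value, the applied voltage must be V_appl = V_eff · (T_0/T) · (P/P_0). The reference values below are those given in Sec. 2; temperatures must be in Kelvin for the ratio to be meaningful.

```python
# Worked example of the working-point correction:
#   V_eff = V_appl * (T/T0) * (P0/P)
# inverted to give the applied voltage that keeps V_eff constant:
#   V_appl = V_eff * (T0/T) * (P/P0)
T0 = 273.15 + 22.0   # reference temperature [K]  (22 degC, Sec. 2)
P0 = 970.0           # reference pressure [mbar]  (Sec. 2)
V_EFF = 9600.0       # reference working point [V] (9.6 kV, Sec. 2)

def applied_hv(t_kelvin, p_mbar):
    """Applied HV [V] needed at measured T, P to hold V_eff at its reference."""
    return V_EFF * (T0 / t_kelvin) * (p_mbar / P0)
```

A 3 K warming at constant pressure lowers the required applied voltage by roughly 100 V, while a 20 mbar pressure rise raises it by a comparable amount, consistent with the up-to-a-couple-of-hundred-volts corrections quoted above.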

Average and Peak Measurements of Gap Currents
A special feature of the ADC board, requested from CAEN when designing the DCS, was for the ADC channels to be capable of both averaged and peak-sensitive readout. While the former allows monitoring the average current, corresponding to the ionization rate and the chamber dark current, the peak readout, tunable via threshold and gate parameters, can be used to spot HV noise and to study events with large multiplicities, such as cosmic shower events or beam background effects. On November 21st 2009, the DCS precisely measured the LHC splash events intentionally generated for detector and timing studies by colliding one proton beam bunch with a closed collimator upstream of ATLAS [4].
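The two readout modes can be sketched on a stream of current samples. This is an illustrative software model, not the CAEN board's firmware: the parameter names `threshold` and `gate` mirror the tunable parameters mentioned in the text, but their semantics here (a simple sample window) are an assumption.

```python
# Illustrative sketch of the two ADC readout modes described above.
def averaged_readout(samples):
    """Average current over the acquisition window
    (dominated by ionization rate plus chamber dark current)."""
    return sum(samples) / len(samples)

def peak_readout(samples, threshold, gate):
    """Peak-sensitive mode: largest excursion above `threshold`
    within a window of `gate` samples, or None if nothing exceeds it."""
    window = samples[:gate]
    peaks = [s for s in window if s > threshold]
    return max(peaks) if peaks else None
```

A short spike (e.g. a splash event or HV noise burst) barely moves the average but is caught by the peak readout, which is why the two modes are complementary.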

Online Data Quality through the RPC DCS
By collecting all relevant information from the detector, its HV and LV settings, the current draw, the trigger rates and errors, the DCS is able to automatically deliver Data Quality flags. This information, calculated online and delivered per trigger tower, is stored in the central ORACLE database and provides a good estimate of the detector conditions during data taking [3].
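How such a per-tower flag could be derived is sketched below. The inputs match the quantities listed in the text (HV and LV status, current draw, trigger errors), but the flag names, cut value and combination logic are illustrative assumptions, not the actual RPC DQ algorithm.

```python
# Hedged sketch of deriving an online Data Quality flag per trigger
# tower from DCS inputs; cut values and logic are illustrative only.
def tower_dq_flag(hv_ok, lv_ok, gap_current_uA, trigger_errors):
    if not (hv_ok and lv_ok):
        return "BAD"       # tower cannot have been taking good data
    if trigger_errors > 0 or gap_current_uA > 20.0:  # illustrative cut
        return "WARNING"   # usable but suspicious conditions
    return "GOOD"
```

Computing the flag online, from conditions already flowing through the DCS, gives a per-tower quality estimate without waiting for offline event reconstruction.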

Conclusions
The ATLAS RPC DCS is a complete and reliable solution for all auxiliary systems involved in detector operation. In the design, an effort was made to control and monitor the detector performance in great detail. The system, fully operational well before the first collisions, has proven to be extremely flexible and powerful. The very large number of detector elements monitored through the DCS will provide statistical information about the RPC behavior and represents an uncommon tool for a deeper understanding of RPC detector physics. The ATLAS RPC DCS provides a template system for present and future experimental facilities.