The central trigger control system of the CMS experiment at CERN

The Large Hadron Collider will deliver up to 32 million physics collisions per second. This rate is far too high to be processed by present-day computer farms, let alone stored on disk by the experiments for offline analysis. A fast selection of interesting events must therefore be made. In the CMS experiment this is implemented in two stages: the Level-1 Trigger uses custom-made fast electronics, while the High-Level Trigger is implemented in computer farms. The Level-1 Global Trigger electronics has to receive signals from the subdetector systems that enter the trigger (mostly from the muon detectors and calorimeters), synchronize them, determine whether a pre-set trigger condition is fulfilled, check whether the various subsystems are ready to accept triggers based on information from the Trigger Throttling System and on calculations of possible deadtimes, and finally distribute the trigger decision ("Level-1 Accept") together with timing signals to the subdetectors over the so-called "Trigger, Timing and Control" distribution tree of the experiment. These functions are fulfilled by several specialized, custom-made VME modules, most of which are housed in one crate. The overall control is exerted by the central "Trigger Control System", which is described in this paper. It consists of one main module and several ancillary boards for input and output functions.


The Global Trigger
The Global Trigger is the final stage of the CMS Level-1 Trigger [4]. For each LHC bunch crossing it has to decide whether to reject an event or to retain it for further evaluation by the High-Level Trigger [5]. For normal physics data taking the decision is usually based on trigger objects (e.g., regions of high energy deposition in the calorimeters, or tracks in the muon chambers), which contain information about energy or momentum, location and quality. However, special trigger signals delivered by the subsystems, so-called "technical triggers" and "external conditions" (e.g., signals from beam pickup electrodes), can also be used. The trigger objects are received from the Global Calorimeter Trigger and the Global Muon Trigger (figure 1). The input data coming from these subsystems are first synchronized to each other and to the LHC orbit and then sent via the crate backplane to the Global Trigger Logic module, where up to 128 trigger algorithm calculations are performed in parallel.
Each of the 128 possible algorithms during a given data taking period represents a complete physics trigger condition and is monitored by a rate counter. In the last step a final OR is applied to include or exclude individual algorithms and to generate a Level-1 trigger request ("Level-1 Accept", L1A), which instructs the subsystems to send the recorded data to the Data Acquisition System for further examination by the High-Level Trigger software. All algorithms can be prescaled to limit the overall Level-1 trigger rate. In fact, eight final ORs are provided in parallel so that detector subsystems can be operated independently for tests and calibration. This, together with the possibility to redefine algorithms at any time between runs, makes the CMS trigger system extremely flexible, which will be of vital importance for adapting to changing conditions during the long expected lifetime of the experiment.

In case of a Level-1 Accept the Global Trigger is read out like any other subsystem. The readout requests (L1A) are sent by the Trigger Control System over the Global Trigger crate backplane to all Global Trigger boards, including those of the Global Muon Trigger [7,8] housed in the same crate, where the arrival time of the L1A signal is translated into the corresponding Ring Buffer address. On each board a Readout Processor circuit extracts data from the Ring Buffers, adds format and synchronization words and sends the event record to a readout module, the Global Trigger Front-End board (GTFE). There the incoming data are checked, combined into two Global Trigger event records (a "Data Acquisition" (DAQ) record and an "Event Manager" (EVM) record) and sent via two S-Link64 [6] interfaces to the Data Acquisition system. The data in the EVM record are used by the data acquisition system to correctly assemble an event, while the data in the DAQ record serve to monitor the functioning of the trigger. Monitoring memories to spy on the event records for online checks are implemented.
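The prescale-and-final-OR step described above can be sketched as follows. This is an illustrative simplification, not the firmware logic; the class and parameter names are invented for this example.

```python
class PrescaledFinalOR:
    """Sketch of a final OR over prescaled algorithm bits (illustrative only)."""

    def __init__(self, prescales, mask):
        # prescales[i] = N means algorithm i contributes only every N-th time it fires;
        # mask[i] = False excludes algorithm i from this final OR.
        self.prescales = prescales
        self.mask = mask
        self.counters = [0] * len(prescales)

    def evaluate(self, algo_bits):
        """Return True (an L1A request) if any enabled, prescaled algorithm fired."""
        l1a = False
        for i, fired in enumerate(algo_bits):
            if not fired or not self.mask[i]:
                continue
            self.counters[i] += 1
            if self.counters[i] >= self.prescales[i]:
                self.counters[i] = 0
                l1a = True
        return l1a
```

In the real system eight such final ORs run in parallel, one per DAQ partition; the sketch shows a single one.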
A more detailed description of the Global Trigger can be found in [3] and [4].

Role of the central Trigger Control System
The central Trigger Control System (cTCS) has two main tasks [9]:
• it controls the Level-1 Accept rate and limits it if necessary;
• it provides control signals for all the readout and trigger electronics.
To achieve the first task, the cTCS can receive status information from the readout electronics as well as from the Data Acquisition System (DAQ) and trigger electronics via the so-called Trigger Throttling System (TTS). The front-end readout electronics uses a "synchronous TTS" (sTTS), and the Trigger Control System also has an "asynchronous TTS" (aTTS) input intended to receive throttling signals from the DAQ side. Currently, only the sTTS path is used; if the DAQ is not ready to accept data, instead of issuing an aTTS signal it simply exerts backpressure on any one of the front-end readout systems, which is then propagated to the Global Trigger via the sTTS. For some of the subsystems (in particular, the silicon strip tracker) this is not sufficient, due to the limited size of the subdetector buffers and the non-negligible signal propagation time between different parts of the experiment. Therefore, local "state machines" ("emulators") implemented in dedicated VME modules (installed next to the cTCS crate) are used to emulate the occupancy of the front-end buffers of individual subsystems. When a subsystem is not ready to accept triggers, they are temporarily inhibited. In addition, a "throttling" circuit guarantees that triggers are distributed according to well-defined rules specifying the maximum admissible instantaneous and average rates of L1A signals (in other words, the rates of L1A signals measured over several different time windows; see section 3.3 below), in order to minimize the danger of buffer overflows in subsystems. The central trigger control also monitors and records the deadtime of the various subsystems (see section 3.9 below).
The second main task of the cTCS, in addition to distributing the L1A signals to all subsystems, is to deliver special control signals for reset commands, calibration purposes, and tests ("Bunch Crossing Zero" (BC0) marking the beginning of each proton orbit in the LHC, resynchronization and reset commands, and other so-called "BGo" or control commands). These signals are distributed via the "Trigger, Timing and Control" (TTC) distribution tree of the CMS experiment [10][11][12]. Thus, the functions of the cTCS are to
• limit the trigger rate, according to programmed Trigger Rules and based on status signals from the subsystems and from the subsystem emulators;
• generate and distribute control signals;
• generate calibration and test trigger sequences;
• monitor the dead time of all subsystems.

Trigger Control logic
An overview of the Trigger Control logic is given in figure 2; it has been briefly described in [13]. Starting from the connections in the Global Trigger crate itself (center left) and proceeding in a clockwise direction, the central Trigger Control board is interfaced with the following main modules and systems:
• The TCS board receives timing information from the LHC interface via the TTC system's TTCmi ("TTC machine interface") and the timing module (TIM) housed in the Global Trigger crate.
• Trigger decisions are received from the FDL (Final Decision Logic) module in the Global Trigger crate.
• The TCS board can receive warnings about high buffer occupancies from the DAQ via the aTTS (asynchronous Trigger Throttling System; not implemented in CMS at the moment).
• Emulators calculating the state of front-end derandomizing buffers send a warning to the cTCS if the buffers are close to full.
• cTCS sends control signals ("BGo" commands) to TTCci modules of detector partitions: Start Run, Stop Run; Resynchronize, Event Counter Reset, Bunch Crossing Zero (BC0), Orbit Counter Reset, etc. BC0 could also be received directly from the LHC clock interface (see figure 2) but to ensure proper synchronization all TTCci modules in CMS are set up to get it from cTCS. The clock, however, is always derived directly from the LHC clock interface in order to ensure a high-quality signal.
The hardware implementation of this system is shown in figure 3. To the left of the center one can see the Trigger Control System module (the module with the red LED display at the top) with all its cable connections.

DAQ partition controllers (DAQ-PTC)
The Trigger Control logic reflects the optional segmentation of the CMS readout system into 8 DAQ-partitions (figure 4).
The Global Trigger Processor therefore generates up to 8 Final-OR signals in parallel, and the TCS chip contains 8 DAQ-Partition Controllers (DAQ-PTC) running independently of each other. The Run Control Software can start and stop the DAQ-PTCs without any restrictions, except that only one DAQ partition is allowed to trigger at a given bunch crossing. This makes it possible to use parts of the detector (which can be freely assigned to any of the DAQ partitions) independently, which is very useful for testing different subsystems at the same time.
In practice, this system is not frequently used at present because the CMS data acquisition system is not divided into partitions, and stand-alone tests of individual subsystems are mostly performed using special "Local Trigger Controller" (LTC) boards. However, the implementation has been foreseen in the trigger software, and all functions with regard to configuration and control of the eight DAQ-Partition Controllers are in place. Thus, only an upgrade of the data acquisition system and integration with the run control system's command chain are needed to make full use of this feature. One significant advantage over using separate LTC boards would be that all latencies would be exactly the same as during global data taking.

The front-end and trigger electronics is divided into up to 32 detector partitions according to the 32 "trees" of the TTC system (also briefly called "partitions", in contrast to the "DAQ-partitions" defined above). All crates connected by TTC fibers to the same TTCci board (the VME interface board for the TTC system) belong to the same detector partition. Each detector partition is connected to only one DAQ-partition at a time. The corresponding DAQ-Partition Controller (DAQ-PTC) receives the Status Signals of the connected partitions and provides the L1A signal and BGo commands (for calibration etc.). For example, the RPC-positive-endcap and positive-CSC partitions (two muon detector subsystems made of resistive plate and cathode strip chambers, partly covering the same region) could be connected to DAQ-PTC2 for running alignment procedures, and all four Tracker and all six ECAL partitions could be connected to DAQ-PTC3 for calibration measurements. For a normal physics run, DAQ-PTC0 controls all partitions and uses the Final-OR 0 signal as the common physics trigger.
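The partition bookkeeping described above can be sketched as follows; this is an illustrative model only, and the class and method names are invented rather than taken from the TCS software.

```python
class PartitionMap:
    """Sketch: up to 32 detector partitions, each owned by one of 8 DAQ partitions."""
    N_DAQ, N_DET = 8, 32

    def __init__(self):
        # Default assignment: DAQ-PTC0 controls everything (normal physics running).
        self.owner = [0] * self.N_DET

    def assign(self, det_partition, daq_partition):
        """Connect a detector partition to a DAQ partition (one owner at a time)."""
        if not (0 <= daq_partition < self.N_DAQ):
            raise ValueError("invalid DAQ partition")
        self.owner[det_partition] = daq_partition

    def partitions_of(self, daq_partition):
        """Detector partitions currently served by the given DAQ-PTC."""
        return [d for d, o in enumerate(self.owner) if o == daq_partition]
```

The constraint that a detector partition belongs to exactly one DAQ partition at a time is enforced here simply by storing a single owner per detector partition.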
Each Partition Controller (PTC) serves one "DAQ partition" and provides all functions to run a group of detector partitions independently. It consists of:
• a Trigger Merger that combines all trigger sources (the Final-OR from the trigger logic as well as Random, Test and Calibration Triggers produced by the Trigger Control System) into a L1A signal;
• a PTC State Machine that runs the control procedures according to the states of the connected partitions;
• a counter for the Bunch Crossing number;
• a Bunch Crossing table that defines at which bunch crossing numbers in the orbit BGo commands and calibration triggers will be sent; it also defines periods in the orbit where triggers are suppressed (such as during the gaps within an LHC orbit where no particle collisions are expected);
• a Calibration Logic system to run calibration cycles;
• a Periodic Signal Generator for Bunch Crossing Zero (BC0) and Start of Gap commands as well as calibration cycles, periodic test triggers and Private Orbit signals (Private Orbits are periods corresponding to individual LHC orbits which are reserved for subsystem tests and during which normal trigger signals are disabled);
• a Random Trigger Generator;
• a common counter for the Orbit number, which will be reset only by PTC0 when starting a new data taking run.

2011 JINST 6 P03004
Figure 3. The 9U VME crate housing the various modules of the CMS Global Trigger system.

Time Slice distribution
The Trigger Controller distributes the beam time between the active DAQ-partitions in round-robin mode, activating them consecutively for programmable periods of time. The length of each active period may vary between 10 and 2550 orbits. During inactive periods each DAQ-PTC inhibits L1As and calibration cycles but still sends control commands to its detector partitions and also receives and monitors status signals in the usual way.
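The round-robin scheme can be sketched as a simple function of the orbit number; the function name and the slice representation are invented for this illustration.

```python
def active_daq_partition(orbit, slices):
    """Sketch of round-robin time-slice distribution among active DAQ partitions.

    slices: list of (daq_partition, n_orbits) pairs for the active partitions,
            with n_orbits in the programmable range 10..2550.
    Returns the DAQ partition allowed to trigger during the given orbit.
    """
    total = sum(n for _, n in slices)
    pos = orbit % total          # position within the repeating schedule
    for daq, n in slices:
        if pos < n:
            return daq
        pos -= n
```

For example, with DAQ-PTC0 active for 10 orbits and DAQ-PTC3 for 20, the 30-orbit pattern repeats indefinitely.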

Trigger Throttle logic
A common TTS (Trigger Throttling System) circuit prevents excessive instantaneous trigger rates for all partitions to avoid problems in the readout system. On the one hand, the Trigger Control System reacts to status information from the various subsystems of the CMS experiment. On the other hand, and independent of the momentary state of the systems, instantaneous trigger rates are limited according to a set of pre-set throttle rules allowing a defined number of trigger signals per time period.

Status signals
The synchronous Trigger Throttling System (sTTS) interacts with the detector partitions, while the asynchronous aTTS receives status signals from the DAQ system. All eight DAQ partitions are served by this common system. The Trigger Control System receives the status signals either via conversion modules (see figure 7 below) from the detector and DAQ partitions or, in cases where this is not feasible because of time constraints, from electronic emulators. Status signals from the many parts of the CMS detector are grouped into one signal per detector partition by special electronic modules ("Fast Merging Modules", FMM [14]), each of which takes the most severe condition found at its inputs and passes it on to its output. The receiving unit first waits until a new signal state has been steady for 75 ns, in order to suppress spurious pulses. Then it decodes the signal's four bits into seven states as shown in table 1.
Triggers are distributed at nominal rate only when all systems signal "READY". When at least one of the systems signals "WARNING OVERFLOW", triggers can be either sent at a reduced rate (see below), or blocked altogether. If at least one of the systems sends any other status signal ("BUSY", "OUT OF SYNC", "ERROR") triggers will be stopped until the system is again in state "READY" (or possibly "WARNING OVERFLOW").
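The merging and reaction logic can be sketched as follows. Note that the severity ordering used here is an assumption for illustration (only five of the seven states of table 1 are named in the text), and the actual 4-bit encoding of table 1 is not reproduced.

```python
# Assumed severity ordering, least to most severe (illustrative only).
SEVERITY = ["READY", "WARNING_OVERFLOW", "BUSY", "OUT_OF_SYNC", "ERROR"]

def merge_states(states):
    """FMM-like merging: forward the most severe condition found at the inputs."""
    return max(states, key=SEVERITY.index)

def trigger_mode(merged, reduce_on_warning=True):
    """TCS reaction to the merged state: nominal rate, reduced rate, or blocked."""
    if merged == "READY":
        return "NOMINAL"
    if merged == "WARNING_OVERFLOW":
        # Configurable: either throttle to the "low rate" rules or block entirely.
        return "REDUCED" if reduce_on_warning else "BLOCKED"
    return "BLOCKED"  # BUSY, OUT_OF_SYNC, ERROR all stop triggers
```

The 75 ns debouncing of the incoming signal is omitted here; only the state-merging and rate-decision steps are shown.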

Throttle rules
Apart from the state signals described in section 3.3.1, the trigger rate is also limited by the following two sets of rules, one for "normal trigger rate", the other one for "low (reduced) trigger rate". Each set consists of four rules. The first rule defines the minimum time between two consecutive triggers. The other rules allow a specified number of triggers within a programmable period. The default rules for normal data taking will introduce less than 1% dead time. The rules for "low rate" (applied when a "WARNING OVERFLOW" status signal is received from a subsystem, and it has been decided not to block triggers for this condition) allow fewer triggers within the same periods.
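A minimal sketch of such a rule check is given below, with hypothetical parameters: rule 0 enforces a minimum spacing between consecutive L1As, and each further rule allows at most k triggers within a window of n bunch crossings.

```python
from collections import deque

class TriggerRules:
    """Sketch of trigger-rule throttling (parameters hypothetical)."""

    def __init__(self, min_spacing, window_rules):
        # window_rules: list of (max_triggers, window_in_bx) pairs.
        self.min_spacing = min_spacing
        self.window_rules = window_rules
        self.history = deque()  # bunch-crossing numbers of accepted L1As

    def allow(self, bx):
        """Return True if an L1A at bunch crossing bx satisfies all rules."""
        if self.history and bx - self.history[-1] < self.min_spacing:
            return False
        for k, n in self.window_rules:
            recent = sum(1 for t in self.history if bx - t < n)
            if recent >= k:
                return False
        self.history.append(bx)
        return True
```

A "normal rate" and a "low rate" rule set would simply be two instances of this class; the hardware evaluates the rules in a single bunch crossing, which this software sketch does not attempt to model.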

State Machine
The State Machine of the Trigger Control System is shown in figure 5. There exist eight copies, one for each DAQ-Partition. The eight copies are almost identical, except for a few monitoring counters (see section 3.9 below) provided only for Partition "zero", which is the one used for normal data taking with the whole CMS detector.
The Trigger Control System handles all the status signals received from the sTTS (synchronous Trigger Throttling System, signals from detector partitions) and from the aTTS (asynchronous Trigger Throttling System, from DAQ and high-level trigger) with a state machine programmed either to stop L1A signals or to deliver Reset and other BGo command signals if needed.
In the following, the functioning of the State Machine shown in figure 5 is described from top to bottom. When starting a data taking run, the Partition Trigger Control State Machine (PTC-SM) first checks for possible errors before sending a first resynchronization BGo command via the TTC system to the readout and trigger electronics of the detector partitions participating in this run. Then the main DAQ-PTC0 optionally broadcasts a command to reset the common orbit counter before sending the START command to all connected detector partitions. The next BGo command clears all event number counters in the system, so that event records from the detectors can later be combined correctly by software. The PTC-SM now enters the data taking "triangle", moving between the BUSY, READY and WARNING states according to the status information and the throttle rules. In case of any hardware or synchronization error the PTC enters the corresponding state and waits for software intervention.

A "hardware reset" command initiates the following procedure: first the PTC-SM sends the BGo command "HARD RESET" to all connected detector partitions and then waits for a programmable time of up to 200 ms to allow the subsystems to reload their programmable logic chips. Then the BGo command "RESYNC" and a consecutive "CLEAR EVENT NUMBER" initialize the electronics to take data again, and the PTC-SM re-enters the data taking "triangle". A partition that is temporarily disconnected or busy inhibits L1A signals, but as soon as it becomes ready again data taking is resumed automatically.
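The start-run and hardware-reset procedures described above can be written down as ordered sequences of broadcast commands. This is a simplification of figure 5, not the firmware state machine; the command identifiers follow the names used in the text.

```python
def start_run_sequence(reset_orbit_counter=False):
    """Sketch of the BGo broadcasts issued when a data taking run starts."""
    seq = ["RESYNC"]                        # resynchronize readout and trigger electronics
    if reset_orbit_counter:
        seq.append("ORBIT_COUNTER_RESET")   # optional, broadcast by DAQ-PTC0 only
    seq += ["START", "CLEAR_EVENT_NUMBER"]  # then enter the BUSY/READY/WARNING triangle
    return seq

def hard_reset_sequence():
    """Sketch of the hardware-reset procedure: HARD RESET, then a programmable
    wait (up to ~200 ms) so subsystems can reload their programmable logic,
    then re-initialization before re-entering the data taking triangle."""
    return ["HARD_RESET", "WAIT", "RESYNC", "CLEAR_EVENT_NUMBER"]
```

The "WAIT" placeholder stands for the programmable delay; in hardware this is a timer, not a command.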

Trigger signal distribution to subdetectors
The TCS sends the L1A and the four BGo bits with a strobe to each connected TTCci module (each TTCci serves one subdetector partition), and for test purposes also the 40-MHz clock and a BC0 (Bunch Crossing Zero) signal. The L1A is sent via channel "A" of the TTC system, while BGo commands are sent via channel "B" (hence the name "BGo"); using the separate A-channel for the time-critical L1A signals avoids introducing into the trigger path the extra latency needed for signal decoding, whereas the BGo signals are not as time-critical. These bits go via the backplane to the L1AOUT printed-circuit boards, where they are converted into differential signals and transmitted to the TTCci modules.

Calibration circuit
According to a programmed bunch-crossing table, one or several calibration cycles can be performed during one orbit. Calibration cycles can be carried out either during every orbit or inserted periodically every n-th orbit. First the calibration controller sends a "WARNING TEST ENABLE" and then a "TEST ENABLE" command, which are used to start the calibration procedure in the subdetectors (e.g., inject a test pulse into the front-end electronics or start a laser). The subsequent L1A then triggers the readout of the calibration data. The time period between the "TEST ENABLE" and the L1A is defined in the bunch-crossing table.
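One such cycle can be sketched as a list of (bunch crossing, command) pairs driven by the bunch-crossing table; the timings in the example are hypothetical.

```python
def calibration_cycle(bx_warning, bx_test_enable, l1a_delay):
    """Sketch of one calibration cycle within an orbit.

    bx_warning:     bunch crossing of the WARNING TEST ENABLE broadcast
    bx_test_enable: bunch crossing of the TEST ENABLE broadcast
    l1a_delay:      programmed delay (in bunch crossings) between TEST ENABLE
                    and the L1A that reads out the calibration data
    """
    return [
        (bx_warning, "WARNING_TEST_ENABLE"),
        (bx_test_enable, "TEST_ENABLE"),
        (bx_test_enable + l1a_delay, "L1A"),
    ]
```

In the real system the delay is chosen per subdetector so that, e.g., an injected test pulse has propagated through the front-end electronics before the readout is triggered.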

Pseudo-random trigger
A programmable pseudo-random trigger simulates the Poisson distribution of real events during data taking. The frequency can be chosen between 0.005 Hz (using optional prescaling) and 19 MHz, far beyond the maximum data taking speed. Thus, the TCS makes it possible to test the CMS trigger and readout system at maximum speed, but also to insert pseudo-random triggers at a very low frequency during data taking. Adding a certain amount of pseudo-random triggers during normal data taking is useful for background studies. In addition, this makes it possible to randomize triggers in case of very regular trigger patterns (as produced by beam-pickup electrodes during special tests) and thus to avoid the creation of resonances which could be harmful for the detector electronics.
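The principle can be sketched as follows: for a target frequency f and the 40 MHz bunch-crossing clock, fire on each crossing with probability f/40 MHz, which yields Poisson-like trigger spacing. The hardware uses a pseudo-random generator in logic; Python's `random` module merely stands in for it here.

```python
import random

def pseudo_random_triggers(frequency_hz, n_bx, bx_clock_hz=40e6, seed=1):
    """Sketch of a pseudo-random trigger source.

    Returns the bunch-crossing numbers (out of n_bx simulated crossings)
    at which a pseudo-random trigger fires, for the requested frequency.
    """
    rng = random.Random(seed)          # seeded for reproducibility
    p = frequency_hz / bx_clock_hz     # per-crossing firing probability
    return [bx for bx in range(n_bx) if rng.random() < p]
```

Very low frequencies such as the quoted 0.005 Hz correspond to per-crossing probabilities around 1e-10, which the hardware reaches with an additional prescaler rather than with finer probability resolution.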

Luminosity segments
To break down the data of a run (the data taken between the "start run" and the "stop run" signals) into manageable subsets, the data are subdivided into so-called "luminosity segments" based on the orbit number. At present, the CMS default length for a luminosity segment is 2^18 orbits, which is about 23 seconds. For such a period the luminosity of the LHC is expected to be nearly constant.
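The segment boundary arithmetic is simple enough to state directly (one LHC orbit comprises 3564 bunch crossings of 25 ns, i.e., about 89 microseconds):

```python
ORBITS_PER_SEGMENT = 2**18  # CMS default luminosity segment length

def lumi_segment(orbit_number):
    """Luminosity segment index for a given orbit number within a run."""
    return orbit_number // ORBITS_PER_SEGMENT

# Duration check: 2**18 orbits * 3564 bx * 25 ns per bx ~ 23.3 s per segment.
SEGMENT_SECONDS = ORBITS_PER_SEGMENT * 3564 * 25e-9
```

This is the whole scheme; each counter that is "refreshed every luminosity segment" (section on monitoring counters below) is simply reset whenever this index increments.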

Monitoring counters
A number of monitoring counters have been foreseen to obtain information on rates and deadtimes. These counters are separate for each of the eight DAQ-partitions (as mentioned in section 3.4 above, some of them have been implemented only for the main DAQ-partition 0, which is used for normal data taking). Apart from counters for the different trigger types (the "physics" or standard trigger, calibration triggers, pseudo-random triggers), there are counters for "lost triggers" (generated triggers that were suppressed due to throttle rules, subsystems in "BUSY" state, etc.), deadtime counters (overall deadtime, and deadtimes during and outside the periods when the LHC delivers protons), and orbit and event number counters.
The total trigger counter and the various deadtime counters accumulate over the whole data taking run, while the counters for physics, calibration, pseudo-random and test triggers are refreshed every luminosity segment period.

Status simulation and monitoring
A second chip on the TCS board contains electronics to simulate the states of all subdetectors. This option is used for tests to allow sending trigger and control signals to subdetectors while ignoring their status. It is also used to operate subdetectors which currently have no readout system of their own and are therefore always in state "ready" but have no way of asserting this status themselves (e.g., the "Regional Calorimeter Trigger").
For each detector partition counters monitor how often subsystems enter the warning state or show errors either per luminosity segment period or during the whole data taking run.

Hardware implementation
The central Trigger Control System consists of the TCS module itself (figure 6), four "Conversion boards" (CONV6U, figure 7) and two L1AOUT boards (figure 8). The TCS module and the two L1AOUT modules are housed in the 9U Global Trigger Crate (figure 3), the Conversion boards are located in a 6U crate in the same rack (the Global Trigger rack). The TCS module receives the Final-OR decisions from the FDL ("Final Decision Logic") board. The four Conversion boards receive fast control signals with status information from all subsystems, combine bits of four subsystems and send them via a Channel Link [15] to the TCS board. Emulators that model the timing behavior of the derandomizer buffers of the silicon-strip tracking detector are supplied by the subdetector groups as 6U VME boards and housed in a crate very close to the 9U Global Trigger Crate.
On the TCS board all input bits are recorded in the TCSM (Trigger Control System Monitoring) chip and forwarded to the TCS logic chip; both are implemented using Xilinx FPGAs (Field Programmable Gate Arrays; the TCS chip is an XC2V3000 with 3×10^6 gates and 14336 slices, while the TCSM chip is an XC2V1500 with 1.5×10^6 gates and 7680 slices).
This chip contains the circuits for the Trigger Throttling System and the calibration control and sends the trigger requests to all connected TTCci boards to broadcast them to the CMS readout electronics. In parallel, TCS sends a record via an S-Link64 interface to the Event Manager of the Data Acquisition system.

Control and monitoring software
For controlling and monitoring the Trigger Control System and the rest of the Global Trigger, a software package has been developed within the CMS Trigger Supervisor (TS) framework [16]. This framework, used by all Level-1 Trigger subsystems, is written in the C++ programming language and based on the XDAQ "middleware" for distributed computing.

XDAQ executives and applications
XDAQ "executives" are normal processes on a Linux machine. These can be accessed either from a Web browser or from a different process via SOAP (a protocol on top of HTTP). One executive encapsulates several XDAQ "applications". Predefined applications provide functionalities such as accessing a database or relaying monitoring data in a standard way. Furthermore it is possible to add custom XDAQ applications which implement specific services.

Trigger Supervisor Cells
A Trigger Supervisor Cell is a particular type of XDAQ application designed for the Level-1 Trigger system to provide, upon customization by subsystem experts, the following services.
Commands Procedures to be invoked by the Cell itself or remotely (via SOAP) by a different process. A generic web interface to Commands is available.
Operations Finite State Machine (FSM) representations. Transitions between states can be launched locally or remotely. The Configuration Operation is defined to specify the interaction of Level-1 Trigger subsystems with the CMS run control system. A generic web interface for Operations is available.
Monitoring Infrastructure for defining items to be monitored and implementing routines to update the data. A generic web interface to monitoring data is available.
Panels Custom Graphical User Interfaces accessible as web applications. These can be implemented to provide advanced interfaces for experts and CMS operation teams.
The Trigger Supervisor system is deployed on several Linux machines in the CMS "private network" and consists of TS Cells and other XDAQ applications which have access to the CMS database and to the Level-1 Trigger hardware systems and communicate with each other via SOAP. The main process is the Global Trigger Cell (big box), controlled either by the CMS run control system (top left) or by Level-1 Trigger operators or GT experts (bottom left). Auxiliary XDAQ services are depicted in blue color. One such service (Sensor, an XDAQ monitoring data source) runs inside the GT Cell executive, whereas others (TStore, to access the database; WS Eventing, to "route" monitoring data; Live Access Service, to collect and provide access to monitoring data) function as distinct executives outside the Cell (right). Other external processes, data stores and interfaces shown on the right are: the CMS database (containing configurations, conditions and monitoring data); the Log Collector, storing application log messages; several consumers of Global Trigger information (Web Based Monitoring, Luminosity System, Level-1 Page, DAQ Monitoring), accessing the database or the Live Access Service; the CAEN-VME Linux service for hardware access over a PCI bridge connected to the CAEN controller in the GT VME crate; and the Global Trigger itself with its connections to the TTC, trigger and data acquisition systems.

Configuration and control services
Prior to every unit of data taking CMS configures all its systems. The Global Trigger configuration process is handled by the GT Cell configuration service. Several database record identifiers are forwarded by the run control system to define the setup. The following steps are performed to configure the Trigger Control System.
• Initially a cold-reset procedure is automatically executed if the TCS chips are found to be in an uninitialized state (for instance after a power cut). In this case firmware is flashed onto the chips from PROM memories.
• Setup parameters are downloaded from the database (or retrieved from a local cache) and applied to the TCS. In particular, the firmware version is checked; the partitioning of the data taking into several DAQ partitions is defined; the eight DAQ-PTCs are set up; automatic or periodic resynchronizing or resetting of the detector by software is configured.
• The FMM status signals of CMS partitions not participating in the data taking are flagged to be ignored.
When a run starts, the applied setup is recorded in the database. On "Start" the selected trigger sources are enabled; on "Pause" they are temporarily disabled; on "Resume" they are re-enabled; on "Stop" they are disabled. These transitions can be executed for the eight DAQ partitions independently of each other. A stopped DAQ-PTC can be reconfigured without disturbing other running DAQ-PTCs. (This feature is currently not integrated with the CMS run control system.)

Monitoring services
The hardware status and trigger conditions of the Global Trigger are monitored continuously. Persistent storage of monitoring data in the database is adopted for the most important quantities. Monitoring of the TCS comprises the following items:
• every 30 seconds: TCS status flags, Finite State Machine states of the DAQ-PTCs, etc.;
• every second: status of the 32 connected detector partitions (this information is used to automatically resynchronize the detector if so configured), of the eight DAQ partitions and of the Global Trigger;
• every 5 seconds: number of sent triggers, event number, LHC orbit number;
• every luminosity segment: number of incoming and lost candidate Level-1 Accepts, number of generated Level-1 Accepts for each type (physics, random, calibration), dead time counters.
These periodic monitoring data as well as general information about the configuration state are forwarded to the XDAQ monitoring infrastructure for live access by several consumers, such as the Web Based Monitoring, an essential resource for physics analysis, or the Luminosity System, which provides important input for the interpretation of physics data by calculating the recorded luminosity from the delivered luminosity, correcting for the deadtime measured by the TCS.

Graphical User Interfaces
Specialized Graphical User Interfaces have been designed to provide monitoring and control tools for shifters and Global Trigger experts and to allow maintenance of the hardware setups in the database. Each panel is briefly described in the following with regard to its functionality for the Trigger Control System.
Partitioning panel Monitor the status of the GT and of the partitions and their assignment to the DAQ partitions; enable or disable the ignoring of backpressure from partitions via FMM signals; assign time slots to each DAQ partition.
Configuration panel Toggle trigger sources on or off; set the random trigger frequency; send resynchronization or reset commands to a DAQ partition; switch the sending of signals to partitions on or off; apply a predefined setup (identified by a GT Key) to the whole Global Trigger; apply a predefined setup (identified by a GT Partition Key) to the functions related to a single DAQ partition.
Trigger Monitor panel Monitor trigger number, event number, luminosity segment number; monitor all trigger and deadtime counters; access data for most recent luminosity segments. See figure 10.
Run Settings panel Define Run Settings (including selection of TCS trigger sources) and store them as a GT Run Settings Key; display settings applied for passed runs.
Configuration Editor Browse, create, compare and validate database setups for any Level-1 system (including GT and TCS); compare database settings to applied hardware settings; edit hardware settings directly. This is a versatile application originally developed for the Global Trigger and later used by all Level-1 subsystems.
Hardware Monitor Access the three most recent samples of all hardware monitoring items.

Performance and results
The Global Trigger and the Trigger Control System were already used for a number of years before the LHC startup, during the integration of the CMS detector and electronics and for numerous detector tests with cosmic particles. They have been operating continuously in production mode since the first collisions of LHC beams. Over this period a number of additional requests have been made by the various subsystems interacting with the Global Trigger, and these have been taken into account in updates of the firmware of the Trigger Control System and of other modules of the Global Trigger. On the software side, improvements based on running experience were implemented in order to maximize the usability of the central trigger control system. The system has proven to be highly reliable and at the same time flexible. All design objectives have been reached. In particular, it has been shown that the system runs stably at a rate of 100 kHz, which is the design value of the CMS Level-1 trigger.