CMS Bulletin



Since the last report, much visible progress has been made as the LS1 programme approaches its halfway point. From early October, technical and safety shift crews have been present around the clock, allowing detectors to stay switched on overnight while ensuring that safety systems are operational and that instructions for non-expert shift crews are clear.

LS1 progress

Throughout the summer, whilst the solenoid vacuum tank and YB0 surfaces were accessible, an extensive installation programme took place to prepare for colder Tracker operation and the PLT installation in 2014, the Phase 1 Pixel Tracker installation in 2016–’17, and the completion of the HCAL Phase 1 upgrade in LS2. This included: pipework for N2 or dry air to flush the Tracker bulkhead region; many sensors to monitor temperature and dew point in the Tracker and its service channels; heating wires outside the Tracker cooling bundles; supports for the new vacuum-jacketed, concentric CO2 Pixel cooling lines; the PLT cooling line, now necessary following the decision to change to silicon sensors; and additional power lines to the HCAL RBXs to accommodate the Phase 1 upgrade.

The logistic configuration then changed to allow priority access to the +z endcap disks and the –z barrel wheels (with the +z barrel wheels thus closed over the vac tank). Repair and maintenance thus took place on the DT and RPC chambers in YB–1 and YB–2, and on those accessible from the –z face of YB0. This work included a major gas-leak repair campaign for the barrel RPCs. In addition, the HPD photodetectors of HO were exchanged for SiPMs. The –z wheels are now closed over the vac tank and RPC repairs are being completed on YB–2. As DT maintenance on the barrel wheels is completed, the sector collector crates of the DTs are being replaced by optical drivers and moved from the balconies in the UXC into the USC. Open access to the –z nose has allowed the ME1/1 patch panels to be replaced and HE source calibration to be prepared.

Concurrently, the YE4 (+z) disk has been assembled on YE+3, moved to the +z end headwall and detached onto newly installed mountings on the blockhouse. The new ME4/2 chambers have recently been mounted and cabled on YE+3; their commissioning is expected to finish near the beginning of the CMS Week (at the time of submission of this report). Meanwhile, installation of the new RE4/2 super-modules, covering the commissioned ME4/2s, has started. The ME1/1 chambers in the YE+1 nose have meanwhile been re-installed after a total refit in SX5 of the on-chamber and on-disk electronics and power-supply systems. Commissioning will start as soon as cabling and cooling connections are finished. In parallel the maintenance and re-commissioning of CSCs and RPCs in other layers of the positive endcap is ongoing with frequent changes of disk configuration allowing access to different layers.

Other ongoing work is largely independent of the configuration. For instance, the vacuum group is working on both “blockhouse” platforms. At –z, a new gas-injection station is under construction, allowing faster venting and pumping of the beam pipe in the future, while at +z the corresponding system and the vacuum pumps are undergoing major refurbishment. The C6F14 cooling plants in UXC are currently being insulated in preparation for Tracker operation at –20 °C.

Areas of concern

Throughout the entire reporting period, intensive work continued inside the vacuum tank at both ends to improve the sealing of the Tracker bulkhead and patch-panel regions, deliver dry gas, and monitor the humidity and temperature of the environment. A first test in November showed that the cooling plant can deliver cold C6F14, and that the bulkhead-environment dew point is well below the lowest likely operating temperature. However, after reverting to room temperature, water was found to be leaking in many places from the insulated pipe bundles between the cooling distribution plant and the Tracker. Inspection revealed damage in many places to the aluminium vapour barrier of the adhesive foam tape wrapping the insulated pipe bundles. This allows cavern air, with a dew point several tens of degrees higher, to reach the –20 °C fluorocarbon lines. Ice formation is inevitable and cumulative, also leading to melt-water leaking out of the bundles when cooling is stopped. The potential damage from ice (particularly) and also from melt-water is not tolerable. In response, a very careful inspection and repair campaign has to be planned for January and February, before the master cooling test with the exposed vac tank.

On 15 November, one of the two galvanic feed-throughs for the heating cables on the outside of ES (–z) was found severely damaged by overheating or arcing. A Technical Incident Panel was held on 25 November and quickly arrived at preliminary conclusions. It rapidly became clear that a repair in situ would be neither feasible nor sufficient. Investigations in preparation for the TIP revealed that the filter capacitors built into these feed-throughs are not rated for the voltages expected to be used once the ES runs at colder temperatures; moreover, the connector pins are not gold-plated. Despite several complicating factors, the most probable explanation of the observed damage is a recent application of a higher voltage than previously applied. A clear recommendation was to remove both ES– and ES+ for examination and, at a minimum, repair and exchange of the affected feed-throughs. Space has been hastily prepared in the radiation workshop of SX5. ES+ was removed from the EE on 29 November and transported to the surface on 4 December. A first inspection showed no sign of damage on the inside of the feed-through and nominal heating-wire resistance. More detailed tests and checks are ongoing, but it currently appears that exchanging the feed-through will solve the problem. The removal to the surface of ES– is foreseen for Monday 9 December.

The construction contractor for the new 43.4 mm inner-diameter vacuum chamber (beam pipe) for the Phase 1 Pixel Tracker has continued to experience problems in completing the final weld, probably related to the new challenge of creating a beryllium chamber with conical end-pieces. Another attempt in late November produced a mechanically good weld which was unfortunately porous. It was agreed to make one last attempt (with the existing –z conical section and its now very short cylindrical interface) at cutting out the weld and welding in an insert. Meanwhile, the manufacturer also agreed to start precautionary pre-machining of a new end-cone with a long cylindrical interface. Fortunately, the latest weld seems mechanically good and has passed initial leak testing. We eagerly await the final results. There is still two months’ contingency between the currently estimated “ready for installation” date and the planned installation start.

Planning for the LS1 endgame

The top-priority objective of LS1 is preparing the Tracker to operate at –20 °C. To address the concerns about vapour-barrier damage (above), the configuration of the detector with the vac tank exposed for the “master test” of the cooling will be maintained longer than originally foreseen. This configuration must now be used in parallel for work on YB+1 and YB+2 and on the positive face of YB0, along with preparatory work for ME1/1 and RE4 (re-)installation at the –z end.

Completing the re-installation and commissioning of ME1/1 at both ends, and the construction of the second YE4 disk (–z) complete with its pushback mechanism, are also mandatory LS1 tasks before closure of the detector can start. After the master cold test is over and a feasible Tracker operating temperature has been determined, the mirror image of today’s logistic configuration has to be established, but kept for as short a time as reasonably possible before beginning the sequence of closing the detector. This sequence begins with the installation of the beam pipe in early June 2014 and concludes with the full-field magnet test in November 2014. This magnet test will be the ultimate check of the success of sealing the Tracker for colder operation. The handling of this endgame will be difficult and might require some compromises in what can safely be achieved. Technical Coordination plans another LS1 workshop at the end of February to review the status and work out the strategy and schedule options in detail. The very constructive daily interaction with the sub-system field coordinators (and with Run Coordination, as commissioning activity builds up) will continue to be vital in translating this into continually optimised and time-efficient day-to-day planning.

Phase 2 upgrade studies

Priority must be given to the successful completion of the LS1 project and the oversight of the remaining approved Phase 1 detector projects (including EYETS 2016–’17 and LS2 planning). However, Technical Coordination intends to pursue, during 2014, a limited set of objectives in support of the work to develop a Technical Proposal for Phase 2. These will focus on: the continuing development of a schedule for LS3 not exceeding 30 months; the identification of tasks (mostly infrastructure modifications and end-of-lifetime replacements in preparation for HL-LHC) which can be carried out pre-emptively in LS2; integration and tooling studies for the revision of the YE1 nose (EE/HE/ME1); and (in close consultation with the BRIL project) radiation simulations of particle fluences, beam-induced backgrounds, dose rates and activation levels. These simulations are essential for converging on an effective Phase 2 design. Associated with these technical objectives, there are serious challenges ahead in developing a viable resource model for common activities over the next decade.

Magnet

Maintenance work and consolidation activities on the magnet cryogenics and its power distribution are progressing according to schedule.

The manufacturing of the two new helium compressor frame units has started. The frame units support the valves, all the sensors and the compressors with their motors. This activity is subcontracted. The final installation and the commissioning at CERN are scheduled for March–April 2014.

The overhauls of the existing cryogenics equipment (compressors, motors) are in progress. The reassembly of the components will start in early 2014.

The helium drier, to be installed on the high-pressure helium piping, has been ordered and will be delivered in the first quarter of 2014.

The power distribution for the helium compressors in SH5 on the 3.3 kV network is progressing. The 3.3 kV switches between each compressor and its hot-spare compressor are being installed, together with the power cables for the new compressors. The 3.3 kV electrical switchboards in SE5 will be consolidated. This consolidation will take place during the cooling and ventilation maintenance scheduled for the beginning of May 2014, to avoid any impact on detector operations outside this period. The modifications include the connection of the compressor switch cells to the LHC machine power network, the change of the switch control coils to 48 V, and the reconfiguration and commissioning of the three PLCs for the automatic transfer from the LHC machine power network to the LHC loop power network.

As part of the general consolidation of the uninterruptible power supplies (UPS) at Point 5, the UPS for the cryogenics and the magnet control system in USC5 and SHL5 will be replaced by March 2014. The existing safe room for cryogenics in SHL5 will be moved, and the cryogenics control racks and the UPS will be installed in SH5.

Infrastructure

One of the most important tasks of LS1 was achieved this autumn, when all the electronics racks in the USC55 counting rooms were switched from the standard powering network to the CMS low-voltage UPS. This long-sought move will prevent troublesome power cuts to the CMS electronics in case of short power glitches on the main powering network, a protection already provided to the detector front-end electronics in UXC55. At the same time, a study to update the dedicated UPS units for some crucial detector sub-systems, such as the Magnet Control System (MCS), the Detector Safety System (DSS) and the IT network star-points, has been launched. A new architecture with fully redundant UPS units, able to ensure the power supply in case of a long network outage (up to a maximum of five hours, in the case of the magnet), has recently been presented by the EN-EL group and is currently under evaluation.

The dry-gas plant recently commissioned in SH5 has passed a first test to establish the time needed to switch from dry air to dry nitrogen. The test showed that in about five minutes the plant can deliver more than 100 m3/h of dry nitrogen at 8 bar, which is fully satisfactory. A more complete test to check the maximum delivery capacity at full power (i.e. with the two compressors working in parallel) will be done in the coming days.

The CMS water-cooling systems are back in operation, with the last distribution loops being reopened as the thermal screens and muon chambers are put back in place with new connectors.

On the perfluorocarbon C6F14 circuits, the second stage of work to allow safe operation of the Tracker at low temperature is ongoing. In September, a pilot installation of an insulated cabinet to house the Silicon Strip 1 cooling plant gave encouraging results in terms of improved accessibility and environmental control. The same modifications are being applied to the other plants, starting 11 November. At the same time, the refurbishment of the electro-pneumatic cabinet, improving the reliability of the system and eliminating the leaks on the instrument-air circuit, is starting and will be completed by the beginning of December. This activity, together with the last modifications on the primary circuit, will conclude with a full performance test validating all the LS1 work on these systems.

In preparation for the Phase 1 upgrade, the full-scale prototype of the CO2 Pixel cooling plant is being commissioned in building 186. All components for the manifold, the accumulator and most of the plant have been validated, and the purchases of parts to be installed at P5 will start in the coming weeks. For P5, the transfer lines to distribute CO2 on YB0 have been purchased and are now being built, for installation in January 2014. The two additional tenders for the remaining paths up to the cooling plants in USC are in preparation and will be sent out by the end of the year.

Image 1: The TIF CO2 cooling plant: a full-scale prototype for the Pixel Phase 1 upgrade.



Pixel Tracker

Maintenance of the Pixel Tracker has been ongoing since it was extracted from inside CMS and safely stored at low temperature in the Pixel laboratory at Point 5 (see previous Bulletin).

All four half cylinders of the forward Pixel detector (FPIX) have been repaired and the failures have been understood. In October, a team of technicians from Fermilab replaced a total of three panels that were not repairable in place. The replacement of panels is a delicate operation that involves removing the half disks that hold the panels from the half cylinders, removing the damaged panels from the half disks, installing the new panels on the half disks, and finally putting the half disks back into the half cylinders and hooking up the cooling connections. The work was completed successfully. The same team also prepared for the installation of the Phase 1 Pixel pilot-blade system, installing the mechanics of a third half disk in the half cylinders; these half disks will host new Phase 1 Pixel modules that will be installed early next year. Along with the newly designed front-end and supply-tube electronics, these new modules will form the Pixel pilot blade, used to test the Phase 1 Pixel readout during 2015 and 2016 running.

The failures of the barrel Pixel detector (BPIX) modules, which correspond to ~2.3% of all BPIX channels, are due to broken wire bonds or faulty services, such as broken analogue-to-optical transducers and disconnected sense wires. It was decided to start the repair and replacement work with the modules and services located on the outermost layer, which is the easiest to access. Eight new modules were brought to the Pixel laboratory and replacement work is ongoing (seven of the eight had been successfully replaced as of the end of November). With these replacements, the fraction of BPIX channels with failures will decrease to ~1.1%. After the module replacement on the outermost layer, a risk assessment will be performed to decide whether or not to replace modules on the inner layers.

In parallel, a calibration effort has started on the whole detector. One half of the BPIX and one quarter of the FPIX have been calibrated at 0 °C. It is planned to calibrate again at the lowest temperature achievable in the Pixel laboratory.

We have almost completed the preventive maintenance, replacing the VME power supplies located in the USC and in the test stands. The VME PCs in the USC are also being replaced with more modern PCs with new VME controller cards.

Image 2: One of the pilot half disks, with cooling tubes attached but not yet connected to the manifolds. Paper mock-up modules demonstrate the final position of the real Phase 1 Pixel modules.

Strip Tracker

After an intense summer, which ended with the commissioning of the dry-gas membrane system, a major refurbishment programme for the C6F14 cooling plants and a preliminary humidity sealing of the bulkheads and the service channels, the time had come to check the status of the detector. Much care was taken to re-balance all cooling loops and to cross-check the procedures guaranteeing a smooth and safe restart, avoiding overpressure on the detector. This was successful, and we saw no leak-rate increase at either 4 °C or –10 °C. The Strip Tracker’s recommissioning at 4 °C started carefully, by checking control-ring integrity, and continued with calibration runs to assess the detector status and the situation of historical failures and components masked during Run 1.

We joined the Global Run in November (GRiN) on its first day. The ‘Tracker Going Cold’ campaign reached an important checkpoint: the temperature was progressively lowered in steps to 0 °C, –5 °C and –10 °C while the Tracker HV was on and the DAQ was running. The Pixel loops – in by-pass – were even operated at –25 °C. Numerous new environment sensors allowed a good understanding of the dynamics of humidity and temperature during the process. During the transitions, Laser Alignment System (LAS) data were taken, suggesting no movements of more than a few microns; this is being cross-checked with the Millepede II alignment software. Warming up showed stable humidity in the Strip detector and its service channels, but some RH increase during transients in the 6 and 12 o’clock channel positions, where the densely packed, non-insulated Pixel pipes are located. This effect is under investigation. The detector ran flawlessly, with cosmic-ray data acquired at each temperature step. Data quality, monitored online by the Tracker DQM group and analysed further offline, was sound at 4 °C and encouraging at the lower temperatures. Later on, CMS performed the foreseen latency shift of +12 bunch-crossings in preparation for the L1 Trigger upgrade, and the Tracker participated without problems; the dead-time increase of 0.1% was within expectations.

The annual maintenance of the cooling plants, including new insulation of the plants, started right after the GRiN. In parallel, the Strip DAQ was moved to new VME PCs with new interface cards. DCS started preparations for the migration to WinCC 3.11, superseding the older PVSS.

On the offline front, performance for Run 2 and beyond is being addressed by adding gain and bad-component calibrations to the Prompt Calibration Loop at Tier-0 and by mastering detector features such as the Lorentz Angle (LA) and noise correlations. In parallel with the Tracker upgrade simulations, our DQM team improved the validation suite, encompassing new geometries for Phase 1 and Phase 2. Preparing for the future goes together with extracting the best Run 1 detector performance, and a big effort of the Alignment group is to correct and control systematic biases and their time dependence. Updated conditions were delivered for the 2011 Legacy reprocessing of data and MC production, including alignment of the local-y coordinate in the endcaps and updated LA and backplane corrections. The new conditions improve the Z mass resolution by an impressive 10% and largely reduce the φ-dependent curvature bias.

Tracker had a very active year and welcomes 2014 with new challenges and intense preparation for Run 2: the best time to come on board!

ECAL

One half of the ECAL barrel (EB–) and both endcaps (EE) participated in the CMS Global Run in November (GRiN). This was used as an opportunity to exercise the ECAL DAQ and trigger systems following software development work during LS1, and to verify the operation of ECAL with an increased latency of +12 BX. It was also used to check the status of EB– following the reconnection of the low-voltage cables that were disconnected at the start of LS1 to allow the replacement of the HCAL photodetectors. The cables in EB+ will be reconnected and re-commissioned in 2014. The GRiN was also used to verify the successful repair, in September 2013, of a region of 75 channels in the positive endcap (EE+), which had not been fully operational since the LHC’s startup in late 2009. A campaign to refurbish all EB/EE low-voltage AC/DC converters was carried out in parallel with these activities.

Monitoring data (laser and LED light) has been taken regularly to measure the recovery of crystal transparency during LS1. The blue solid-state laser (DP2-447), used to derive the ECAL response corrections, has been re-commissioned at Point 5 following planned refurbishment by the manufacturer, and the laser DAQ software is being upgraded. Humidity sensors have been installed in the ECAL laser room to study the effect of humidity variations on laser performance. A technical test of the application of response corrections in the ECAL barrel at Level-1 and HLT was successfully carried out during the GRiN. These corrections were applied weekly in EE during 2012 and will be extended to EB in 2015.

A serious issue with a connector on the exterior of the minus end of the Preshower (ES–) was discovered after the GRiN. The connector had been damaged beyond repair by excessive heat. The likely reason for the damage has been understood, and a Technical Incident Panel helped to define the path forwards, which involves removing both Preshower disks and replacing the damaged connector and three others of the same type. ES+ was removed from CMS in early December, and ES– is due to be removed about a week later. Repairs will take place on the surface, either at P5 or on the Meyrin site, before reinstallation in late spring 2014, prior to the installation of the beam pipe. A detailed TWiki page has been prepared and is regularly updated with the latest information and background material.

An ECAL DPG workshop, “Towards ultimate energy calibration and resolution with Run 1 data”, was held on 21 and 22 November. The workshop demonstrated a significant amount of progress in understanding the ECAL performance and in refining our calibration techniques.

The workshop highlighted the importance of improved simulation of the detector noise and of better modelling of out-of-time pile-up. Retraining the energy corrections using the more realistic Monte Carlo yields a 6% improvement in resolution for the best-photon category in EB, compared to the 22 January re-reconstruction of the 2012 data. In addition, the extra smearing (per electron) needed to match the Monte Carlo is substantially reduced when the improved MC and calibrations are used, especially in EE (it was 3–4%, depending on the EE electron category, in the Moriond 2013 dataset; it is now <2%).

A significant improvement in our understanding of the material upstream of ECAL has been obtained by using several data-driven techniques performed jointly by the ECAL DPG and Tracker POG. A new Tracker material scenario, derived from these results, is being prepared and is expected to reduce the residual discrepancy in resolution between data and MC even further.

Regarding Run 1 data, the reference paper on the electron/photon energy scale and resolution using 2010–’11 data was completed and published in the Journal of Instrumentation in September.

Work is continuing on several fronts in the area of ECAL upgrades, in preparation for the Technical Proposal in 2014 as well as for the planned upgrades in LS3. A series of longevity tests of existing detector components are being carried out. These involve accelerated ageing tests of the on-detector front-end electronics of EB and EE, as well as measurements of the performance of irradiated barrel APDs. Studies of the feasibility of optical bleaching and thermal recovery of irradiated EE crystals are ongoing, and are expected to conclude soon.

A significant amount of work is ongoing to tune the ECAL reconstruction algorithms and evaluate the performance of the current detector at high event pile-up and including detector ageing into the simulation. Finally, following a kick-off meeting held during the October CMS Upgrade week, a set of four working groups has been formed to define the scope and required functionality of the ECAL barrel front-end electronics upgrade (needed after LS3 to provide additional Level-1 Trigger bandwidth and latency).

DT

The DT group is undertaking substantial work both for detector maintenance and for detector upgrade.

Maintenance interventions on the chambers and minicrates require close collaboration between DT, RPC and HO, and are difficult because they depend on the removal of thermal shields and cables on the front and rear of the chambers in order to gain access. The tasks are particularly critical on the central wheel, due to the presence of fixed services. Several interventions on the chambers require extraction of the DT+RPC package: a delicate operation, due to the very limited space for handling the big chambers, and the most dangerous part of the DT maintenance campaign. The interventions started in July 2013 and will go on until spring 2014. So far, 13 of the 16 chambers with HV problems have been repaired, with a global yield of 217 recovered channels. Most of the observed problems were due to the displacement of impurities inside the gas volume. For the minicrates and front-end electronics, repairs were carried out on 22 chambers in YB–1 and 14 in YB–2. Maintenance of the YB–1 and YB–2 wheels has finished.

Upgrade activities continue to evolve in good accordance with the schedule, both for the theta Trigger Board (TTRB) replacement and for the Sector Collector (SC) relocation from UXC to USC:

  • The TTRB work aims at reconstituting the stock of spare boards for the long-term operation of the chamber minicrates. The intervention has been done on all minicrates of the MB1 chambers in wheel YB–2, and on some YB+2 MB1 minicrates.
  • With the SC work, data and trigger primitives from each of the 250 DT chambers will be available in USC on optical fibers. This work is the cornerstone of any long-term upgrade plan for the DT system. All optical-fiber trunk cables have been received, verified and installed between USC and UXC. All of the Wiener crates are installed. Production of all CUOF mezzanines, OFCU-RO boards and OFCU-TRG boards has finished: the success rate for passing the QC tests and burn-in is very high. The CUOF mezzanines have been assembled on the CUOF motherboards.
  • SC relocation has been performed for wheel YB–1, and the new electronics cabled up. System commissioning is ongoing with test-pulser and cosmic data: first measurements of the new trigger latency show an increase of +3 BX, as expected by design.

Regarding Phase 2 upgrade strategies, design studies have started for rebuilding the minicrates, in view of their installation during LS3. The studies are also revisiting the entire architecture of DT back-end RO and trigger electronics, aiming at simplification.

CSC

Great progress has been made on the CSC improvement projects during LS1: the construction of the new ME4/2 muon station and the refurbishment of the electronics in the high-rate inner ME1/1 muon station. The CSCs participated successfully in the Global Run in November (GRiN) cosmic-ray test, but with just stations +2 and +3, due to the large amount of work going on. The test suite now used for commissioning chambers is more comprehensive than the previous tests, and should lead to smoother running in the future.

The chamber factory in building 904 at Prévessin has just finished assembling all the new ME4/2 chambers (67 to be installed plus five spares), and is now finishing up the long-term HV training and testing of the last chambers. At Point 5, installation of the new chambers on the positive endcap went well, and they are now all working well. Gas leak rates are very low. Services are in good shape, except for the HV system, which will be installed during the coming month. We will then be waiting for a window of opportunity to install the ME4/2 chambers on the negative endcap.

Meanwhile, the on-chamber DCFEB electronics production is nearly complete, and 55 of the 72 ME1/1 chambers have been refurbished. Of these, 36 chambers for the positive endcap have been reinstalled. These are being cabled up, and a new LV delivery system (with components called OPFCs, Maratons, Junction Boxes and Patch Panels) is being commissioned. While a two-chamber “demonstrator” has been successfully commissioned, the rest of the commissioning of the positive endcap is waiting for the services, especially LV, to be completed, and will be done sector by sector over the next month while we wait for additional readout electronics to arrive from the vendor. In particular, the off-chamber ODMB electronics board has suffered some delays, but some 10 more pre-production boards are expected this year, and full production is expected in the spring.

RPC

In the second part of 2013, the two main activities of the RPC project have been the repair and maintenance of the present system and the construction and installation of the RE4 system. Since the opening of the barrel, repair activities on the gas, high-voltage and electronics systems have been carried out in parallel, in agreement with the CMS schedule.

In YB0, the maintenance of the RPC detector was in the shadow of other interventions; nevertheless, the scaffolding turned out to be a good solution for our gas-leak searches. Here we found eight leaking channels, accounting for about 100 l/h in total.

Ten RPC/DT modules were partially extracted (by 90 cm) in YB0, YB–1 and YB–2 to allow for the replacement of FE and LV distribution boards. Interventions were also conducted on two chambers on the positive endcap to solve LV and threshold-control problems. So far we have been able to recover 0.67% of the total number of RPC electronics channels (1.5% of the channels were dead at the end of the 2013 run).

In YB–1, where a total leak rate of about 260 l/h was measured, we found 13 leaking chambers. The problem was found to be the failure of one or two of the T-connectors in the gas-distribution lines. In seven chambers the broken T-connectors were accessible without extracting the chamber and were repaired, reducing the total leak rate to 135 l/h.
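As a purely illustrative back-of-envelope cross-check of the leak-rate figures above (it assumes only the numbers quoted in the text; the per-chamber average further assumes the recovered rate was spread evenly over the seven repaired chambers, which is not a measured quantity):

```python
# Back-of-envelope check of the YB-1 gas-leak figures quoted above.
# From the text: ~260 l/h total before repairs, ~135 l/h after
# repairing 7 of the 13 leaking chambers.
total_before = 260.0  # l/h, total YB-1 leak rate before repairs
total_after = 135.0   # l/h, after repairing the 7 accessible chambers
repaired = 7          # chambers repaired without extraction

recovered = total_before - total_after      # leak rate eliminated (l/h)
avg_per_chamber = recovered / repaired      # assumed-even split (l/h)
fraction_fixed = recovered / total_before   # share of the total leak rate

print(f"{recovered:.0f} l/h recovered "
      f"(~{avg_per_chamber:.0f} l/h per repaired chamber, "
      f"{fraction_fixed:.0%} of the total)")
```

On these numbers, roughly half the measured YB–1 leak rate was eliminated without extracting any chamber.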

The HV problems (disconnected chambers or chambers in single-gap operation mode) were mostly due to a broken tripolar connector on the chamber. So far we have been able to fix most of the problems by replacing the connectors or by disconnecting only one gap out of the four or six. This will increase the efficiency of about 1.9% of the RPC η partitions (rolls) and recover 0.5% of the total electronics channels.

Since June 2013, 72 chambers have been successfully built, tested (in Belgium, in India and at CERN) and assembled at CERN into 36 Super Modules (SMs) for the RE4 positive endcap.

On 11 and 12 November, a trial installation was performed, and three Super Modules were successfully mounted and dismounted by a Pakistani/Italian/Mexican/Belgian/CERN team, confirming that the installation of six SMs per day is possible with a six-person team. Important protocols on SM orientation, storage-space allocation at Point 5, SM lifting, workforce, logistics and teamwork were established during this exercise. The most important result was the reassessment of the RE4 positioning in z: the RE4 plane was approved to be moved 20 mm closer to the interaction point to avoid a conflict with the YE4 plane. The installation of the RE4 positive endcap is scheduled for December 2013, and the tools, procedures and workforce needs are in place.

Integration of the services on the negative endcap has already started: 36 double gas patch panels were installed on the YE–3 periphery last week. Cooling mini-manifolds, gas piping between the gas-distribution rack and the patch panels, RPC trigger fibres, and high-voltage, low-voltage and signal cables should be installed by the end of January 2014. The gas and cooling connections of the Super Modules installed on the positive end (RE+4) should be completed by Christmas.

We still have 36 SMs to go for the RE4 negative endcap. The production of the 216 gaps needed to build the 72 chambers is ongoing in South Korea, as is chamber production and testing in India, Belgium and at CERN. Several gap shipments are foreseen for November, December and January. The first batch of SMs is expected to be completed at the end of February by a Belgian/Bulgarian/Georgian/Mexican team. The installation of the negative side is foreseen for April 2014.

During and after the installation of the RE4 chambers, the most relevant issue for the DPG will be the development of the code needed for the commissioning of these chambers. Most of the necessary code is already written and was extensively used and tested during the 2011 and 2012 runs; however, the inclusion of the new chambers in all our routines is an important task that is now getting under way and needs to be completed as soon as possible.

RE4 inclusion in the software is moving forward in parallel with studies of the previous data (the 2012 run). Studies of detector ageing and of the RPC background have been of special interest.


A new muon misalignment scenario for the 2011 (7 TeV) Monte Carlo re-processing was released. The scenario is based on running the standard track-based reference-target algorithm (exactly as for data) on a single-muon simulated sample (with a transverse-momentum spectrum matching the data). It used statistics similar to those used for alignment with 2011 data, starting from an initial muon geometry misaligned according to the uncertainties of the hardware measurements, and using the latest Tracker misalignment geometry. Validation of the scenario (with muons from Z decays and high-pT simulated muons) shows that it describes the data well. The study of systematic uncertainties (now dominant, owing to the huge amount of data collected by CMS and used for muon alignment) is being finalised. Realistic alignment position errors are being derived from the estimated uncertainties and are expected to improve the muon reconstruction performance.

Concerning the Hardware Alignment System, the upgrade of the Barrel Alignment is in progress. Data taking for the calibration of all the MABs and the MiniMABs is now finished. MiniMABs are installed on YB0, YB–1 and YB–2, and patch cables for the BoardPC repositioning are installed on YB0, YB–1, YB–2 and YB+2.


Trigger Studies Group (TSG)

The Trigger Studies Group has just concluded its third 2013 workshop, where all POGs presented the improvements to the physics object reconstruction, and all PAGs have shown their plans for Trigger development aimed at the 2015 High Level Trigger (HLT) menu.

The Strategy for Trigger Evolution And Monitoring (STEAM) group is responsible for Trigger menu development, path timing, Trigger performance studies coordination, HLT offline DQM as well as HLT release, menu and conditions validation – this last task in collaboration with PdmV (Physics Data and Monte Carlo Validation group).

In the last months the group has delivered several HLT rate estimates and comparisons, using the available data and Monte Carlo samples. The studies were presented at the Trigger workshops in September and December, and STEAM has contacted POGs and PAGs to understand the origin of the discrepancies observed between 8 TeV data and Monte Carlo simulations. The most recent results show what the rates of the existing triggers would be at 13 TeV.

Together with PPD, the group has coordinated a new Monte Carlo production campaign, addressing the requests from all POGs and PAGs, in view of the detailed Trigger studies foreseen for next year. The production is starting using CMSSW 6.2.x and taking into account the post-LS1 configuration of the detector.

The "half-rate" toy menu, designed with the OpenHLT workflow to halve the individual rate of each 8 TeV trigger, has been shown, when taken as a whole, to indeed yield half of the total rate of the original menu.
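The result is not trivial: because trigger paths overlap (one event can fire several paths), halving every individual prescaled rate does not automatically halve the total menu rate, which is why it had to be checked. A toy sketch of the effect, with hypothetical trigger names, thresholds and random events (not the OpenHLT workflow itself):

```python
import random

random.seed(42)

# Toy events: each has a jet pT and a muon pT (hypothetical quantities, GeV).
events = [{"jet_pt": random.expovariate(1 / 40), "mu_pt": random.expovariate(1 / 20)}
          for _ in range(100_000)]

# Hypothetical trigger paths: (name, predicate, prescale).
def menu(prescale_factor):
    return [
        ("HLT_Jet100", lambda e: e["jet_pt"] > 100, 1 * prescale_factor),
        ("HLT_Jet60",  lambda e: e["jet_pt"] > 60, 10 * prescale_factor),
        ("HLT_Mu40",   lambda e: e["mu_pt"] > 40,  1 * prescale_factor),
    ]

def total_accepts(paths, events):
    """Count events kept by the menu: a path fires every `prescale`-th
    time its condition is met, and an event is kept if ANY path fires."""
    counters = {name: 0 for name, _, _ in paths}
    accepted = 0
    for e in events:
        fired = False
        for name, pred, prescale in paths:
            if pred(e):
                counters[name] += 1
                if counters[name] % prescale == 0:
                    fired = True
        accepted += fired
    return accepted

full = total_accepts(menu(1), events)
half = total_accepts(menu(2), events)   # every individual prescale doubled
print(full, half, half / full)
```

Because of the overlaps, the ratio comes out close to, but not exactly, one half.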

This shows what the HLT menu could look like in 2015 in the absence of substantial improvements in object reconstruction and selection criteria, and POGs and PAGs are using it as a starting point for the development of the HLT paths for Run 2.

On the validation side, the group has followed the HLT validation of the CMSSW 7.0.x release cycle, using both an ad-hoc event-by-event validation tool and the DQM monitoring from the POGs and PAGs.

STEAM is currently starting a dedicated campaign aimed at developing, improving and rationalising the DQM modules, where all paths and sensitive quantities should be monitored throughout release cycles and HLT menu changes. In view of this activity the group has also produced various data skims of good events, as selected by the PAGs, to be used in future validation cycles.

During LS1, the Software, Tools, Online Releases and Menus group (STORM) has followed the activities of the DAQ, Computing and Offline sectors of CMS, in order to make sure that HLT menus and software can cope with the modifications in the underlying systems. The "half-rate" menu has been integrated in the current CMSSW releases, and will be included together with the 8E33 HLT menu in the upcoming Monte Carlo production.

Besides helping the development and integration of the paths themselves, STORM is also actively working together with the Offline experts for the optimisation of the algorithms, in order to use the computing power of the online farm as efficiently as possible, and in that way increase the physics capabilities of the HLT menu.

Another big chunk of the STORM-related activity deals with the improvements foreseen for the ConfDB database back-end, the web-based browser and the GUI. The back-end has been redesigned for faster access, and the tools that deal with the CMSSW releases are being adapted for the migration to Git.

Since the end of Run 1, the Field Operations Group (FOG) has been preparing for data taking in Run 2, with particular focus on the transition to the file-based HLT and on analysis of the CPU capacity of the HLT farm. For the file-based HLT, our main task has been the creation of a test stand that can be used to emulate the new online environment. To determine the CPU capacity we have been running the 2012 8E33 HLT menu over very high pile-up data on three of the newest HLT farm machines.

We have also been participating in the monitoring upgrade task force and provided on-call HLT experts during the Global Run in November. FOG is also preparing documentation and tools for training the new HLT experts required for data taking in Run 2.

Level-1 (L1) Trigger

In July, integration testing for the upgrade to the Level-1 Trigger commenced with tests of the components of the calorimeter trigger upgrade at the CMS integration centre in Building 904 on the Prévessin site. The optical mezzanine cards necessary to duplicate ECAL trigger data for parallel commissioning of the new Trigger were tested, alongside prototypes of the processing electronics planned for use in the upgrade. The successful testing of the mezzanine cards has allowed orders for the full production of these items to be placed. In late September, testing of a slice of the calorimeter trigger began with a demonstration of the time-multiplexed trigger concept. During the test, the 10 Gb/s optical links required for the upgrade were run without errors for long periods, and the latency of representative algorithms and links was measured, verifying the estimates made in the Technical Design Report. Testing will continue throughout 2014, building up a complete slice of the trigger, including the calorimeter inputs, the global trigger and later the muon trigger systems.
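The idea behind time multiplexing is that data from successive bunch crossings are routed round-robin to different processing nodes, so each node receives the complete detector information for one event (rather than a geographic slice of many events) and can run a global algorithm on it. The real system does this in FPGAs over optical links; the routing idea itself can be sketched in a few lines (function and variable names are illustrative):

```python
from collections import defaultdict

def time_multiplex(bunch_crossings, n_nodes):
    """Route each complete bunch crossing to one node, round-robin.

    Each node then holds whole events, at the cost of needing roughly
    n_nodes times the per-node processing-latency budget.
    """
    nodes = defaultdict(list)
    for bx, event_data in enumerate(bunch_crossings):
        nodes[bx % n_nodes].append((bx, event_data))
    return nodes

# Toy example: 10 bunch crossings distributed over 4 processing nodes.
nodes = time_multiplex([f"event-{i}" for i in range(10)], 4)
print({node: [bx for bx, _ in data] for node, data in sorted(nodes.items())})
```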

Studies of Trigger algorithms have continued, with a workshop in mid-September. Particular emphasis has been on clustering and isolation for eγ and tau triggers, pile-up estimation and subtraction in hadronic triggers and improvements in the momentum estimation and isolation of muons. Studies indicate significant improvements are expected with the new trigger systems and initial versions of the algorithms are planned to be available in CMSSW in early 2014. Development and optimisation will continue throughout the lifetime of the Trigger as has been the case with the existing Trigger.


The focus of Run Coordination during LS1 is to monitor closely the advance of maintenance and upgrade activities, to smooth interactions between subsystems and to ensure that all are ready in time to resume operations in 2015 with a fully calibrated and understood detector.

After electricity and cooling were restored to all equipment, at about the time of the last CMS Week, recommissioning activities resumed for all subsystems. On 7 October, 24/7 DCS shifts began, allowing subsystems to remain on to facilitate operations. This culminated in the Global Run in November (GriN), which took place as scheduled during the week of 4 November.

The GriN was the first centrally managed operation since the beginning of LS1, and involved all subdetectors except the Pixel Tracker, which is presently in a lab upstairs. The nights were dedicated to long stable runs with as many subdetectors as possible.

Among the many achievements of that week, three items may be highlighted. First, the Strip Tracker was operated at lower temperature for the first time, going from +4 °C to 0 °C, –5 °C and –10 °C. Global runs were also performed during each transient, to monitor movements with the laser alignment system. Second, halfway through the week, the trigger latency was increased by 0.3 μs relative to Run 1, in order to gain the time needed by the L1 trigger upgrade. These new settings will be the baseline for Run 2. Third, a few runs were performed including the relocated DT sector collectors in YB–1, validating the work done over the last months and increasing confidence for the intervention on the other wheels in the coming months.

The GriN ended on a high note, with a few runs in which (some fraction of) all CMS subsystems participated. While the purpose was not to perform an alignment campaign (CMS still being partially open), the exercise was a valuable test for the offline teams, validating the recent changes in the reconstruction, calibration and validation infrastructure.

Looking forward, a new layout for the control room at P5 has been developed, with input from CMS Safety, from the Technical Implementation Group, and from each subdetector operation team. The design addresses all major problems, and satisfies all known constraints. Its most attractive feature is that it only requires moving furniture (and associated network and power). Improved DCS station location and a more appropriate visitor path are among the objectives. The schedule is being worked on to minimise the impact on the commissioning operations.


Computing operations have been at a lower level as the Run 1 samples are being completed and smaller samples for upgrade studies and preparations ramp up. Much of the computing activity is focused on preparations for Run 2 and on improvements in data access and in the flexibility of resource use.

Operations Office

Data processing was slow in the second half of 2013, with only the legacy re-reconstruction pass of the 2011 data being processed at the sites.


Figure 1: MC production and processing were more in demand, with a peak of over 750 million GEN-SIM events in a single month.


Figure 2: The transfer system worked reliably and efficiently, transferring on average close to 520 TB per week, with peaks close to 1.2 PB.


Figure 3: The volume of data moved between CMS sites in the last six months


Tape utilisation was a focus for the operations teams, with frequent deletion campaigns ranging from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets that could be cleaned up. This allowed CMS to stay below 90% of the tape pledges, but continuous clean-up will be needed in 2014 and beyond to maximise the usefulness of the occupied tape space. The new Tier-1 site in Russia is coming online and is now used routinely for MC production. Additional Tier-3 sites have requested to be added to the CMS computing systems as well. In 2014, CMS computing will concentrate on the integration of new systems, such as the full deployment of the CMS data federation (AAA), and on the finalisation of system implementations, such as the global GlideinWMS Condor pool. A major focus will be the efficiency and latency of the systems, for example the latency of workflows in the production system.
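The bookkeeping behind such a deletion campaign amounts to picking removal candidates, in order of how safely they can go (INVALID before merely deprecated samples), until occupancy drops below the target fraction of the pledge. A toy sketch with entirely hypothetical dataset records and numbers (the real accounting lives in the CMS data-management services):

```python
# Hypothetical dataset records: (name, size in TB, status).
datasets = [
    ("7TeV_GENSIM_old", 800, "DEPRECATED"),
    ("8TeV_data_legacy", 1200, "VALID"),
    ("7TeV_GENSIM_bad", 300, "INVALID"),
    ("13TeV_upgrade_MC", 400, "VALID"),
]

tape_pledge_tb = 2800
target_fraction = 0.90   # stay below 90% of the pledge

def deletion_candidates(datasets, pledge, target):
    """Pick datasets to delete, INVALID first then DEPRECATED,
    until projected usage is below target * pledge."""
    used = sum(size for _, size, _ in datasets)
    to_free = used - target * pledge
    freed, candidates = 0, []
    for status in ("INVALID", "DEPRECATED"):
        for name, size, s in datasets:
            if s == status and freed < to_free:
                candidates.append(name)
                freed += size
    return candidates

print(deletion_candidates(datasets, tape_pledge_tb, target_fraction))
```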

Computing Development

The Computing Development project has concentrated on completing the CRAB3 prototype for analysis users; the final effort concerns the data-publication step. A production release is expected in the spring of 2014. Additionally, there has been effort on planning the transition from version 2 to version 3 of the dataset bookkeeping system.

Computing Evolution and Upgrade

The CMS online HLT farm completed the first validated production workflows for reconstruction, and the farm was certified for offline computing activity. It has demonstrated the use of 6000 running cores reading data from EOS at the Tier-0. This resource will be critical for reprocessing the 2015 data.

Computing Integration

Computing Integration has concentrated on testing the new CRAB3 analysis-submission tool. When the next version of CMSSW, which supports event-level parallelism, is released, integration will enter an intensive period of commissioning as the experiment moves to multi-core production resources.

Physics Support

CMS held its first Data Analysis School (CMSDAS) in India on 7–11 November 2013. About 25 facilitators (teachers) came to the Saha Institute, Kolkata, India, to prepare 40 students (from India, Malaysia and Taiwan) for 2015 physics analysis. This was made possible by generous travel support from the LPC (Fermilab), CERN, DESY and Taiwan. January 2014 will see two more CMS Data Analysis Schools: at the LPC, Fermilab (7–11 January) and at CERN (13–17 January). The latter will be the first CMSDAS hosted at CERN. Over 100 students are expected to attend the two schools.

A new automated tool, the Data Format Viewer, was developed to show information about CMSSW data formats. It replaces the TWikis used so far for this purpose.

One year after the RemoteGlideIn scheduler was introduced in CRAB2, almost all users have switched to it, and Physics Support was able to turn off the last CRAB2 server last August. Support for submission via gLite is also ramping down and will be terminated in 2014 while we focus on commissioning the new CRAB3-based analysis tool. It is interesting to note that currently almost all analysis-job failures are due either to stage-out problems or to jobs hitting time or memory limits.


Figure 4: The evolution of the percentage of analysis submission by interface

Computing Resource Management Office

The Mid-2013 Resource Utilization Report and the Extra Resource Request Report were submitted to the Computing Resource Scrutiny Group and reviewed in the fall. The computing requests for 2015 were endorsed by the scrutiny group.




The second half of 2013 has been devoted to the validation of the 6_2_X release cycle while developing the 7_0_X one. In order to cope with the needs of the physics groups, the deadline for the latter has been extended until the beginning of next year.

The Core Software team has completed the migration of the CMSSW software repository from CVS to Git, hosted on GitHub. To date, almost 400 developers have forked the official repository, and we have over 80 individual contributors per month. For a quick overview of what's happening in CMSSW you can look at the "Repository Pulse" pages on GitHub. It is highly recommended, although not required, to move usercode areas to GitHub; we provide support for centrally supported "cms-analysis" repositories.

Work progresses on allowing CMSSW jobs to utilise more than one CPU core, i.e. the multi-threaded framework. A major milestone was reached in September when the CMSSW framework became fully thread-safe and was able to run simple jobs utilising many CPU cores. The ongoing challenge is to update the simulation, reconstruction and analysis code so it can make efficient use of the new multi-core facilities offered by the multi-threaded framework.
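The key property that makes this work is that collisions are independent: if per-event processing is a pure function of the event data, many events can be handled concurrently without locking. A toy illustration in plain Python (not the CMSSW framework; names are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def reconstruct(event):
    # Stand-in for per-event reconstruction: a pure function of the
    # event, sharing no mutable state with other calls (the property
    # CMSSW modules must satisfy to be thread-safe).
    return {"id": event["id"], "n_hits": len(event["hits"])}

# 100 toy events with varying numbers of hits.
events = [{"id": i, "hits": list(range(i % 5))} for i in range(100)]

# Process events on a pool of 4 workers; order of results is preserved.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(reconstruct, events))

print(sum(r["n_hits"] for r in results))
```

The hard part in practice, as the paragraph notes, is not the dispatch loop but auditing every module so that it really is free of shared mutable state.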

The Generator group is facing a twofold challenge: support for the long-term legacy Monte Carlo production at 7 and 8 TeV, based on the 5_3_X release, and the new 13 TeV production, based on the 6_2_X release. An important step for the legacy production is the possibility of using different Pythia8 versions in 5_3_X, which makes the latest version of the program available without removing the version used so far in the production cycle. Similarly, we are working to backport Tauola++ to 5_3_X without breaking backward compatibility with the older Fortran version of Tauola. The main change for the 13 TeV production is the move of the baseline parton-shower code from Pythia6 to Pythia8. This change requires a new validation effort, but it has also become an opportunity to review all our validation workflows. Attention is being given to new NLO codes, as well as to a renewed tuning effort. In order to address all these challenges, three subgroups were created last July: integration/validation, matrix-element generators, and tuning.

In Simulation, there have been recent advances on several fronts. For Fast Simulation, work continues on the integration of the Full Simulation digitisation and pile-up machinery into the FastSim framework. Excellent MC–data agreement is found for the calorimeters; the capability to simulate out-of-time pile-up with FastSim is now available, and the capability to mix reconstructed tracks from the minimum-bias pile-up events has been introduced for this purpose. Finally, several new flexible structures have been introduced into the FastSim code to help with simulations for a variety of upgrade geometries in the Tracker and calorimeters. To help with group management, we are looking to add two Level-3 positions, responsible for Tracker/Muon infrastructure and for calorimetry.

The Full Simulation team has been working to integrate the interfaces required by the new version of Geant4, version 10, which will be capable of multi-threaded processing. We expect to begin the adoption of Geant4 10 early in the new year. Large gains in processing time have been made recently with the optimisation of the Russian Roulette configuration used to parametrise the impact of low-energy photon and neutron hits, with smaller gains from an optimised treatment of the mathematics libraries and other code changes. We see an almost 30–40% improvement in the time required to simulate interactions with Geant4 using these methods. Going forward, we expect to deploy a new workflow for pile-up simulation called "pre-mixing" in the early part of the new year. This will substantially reduce the number of files needed for Monte Carlo production by creating single events with large amounts of pile-up and then using these for production.
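Russian Roulette is a standard Monte Carlo variance-reduction trick: each low-energy particle is killed with probability 1 − 1/w, and the survivors carry weight w, so the expectation value of any deposited quantity is preserved while far fewer particles are tracked. A minimal sketch of the principle with toy numbers (not the actual Geant4 configuration):

```python
import random

random.seed(1)

def russian_roulette(particles, weight):
    """Keep each particle with probability 1/weight; a survivor carries
    the weight, so weighted sums have the same expectation value."""
    survivors = []
    for energy in particles:
        if random.random() < 1.0 / weight:
            survivors.append((energy, weight))
    return survivors

# 100k low-energy photons of unit energy (arbitrary units).
photons = [1.0] * 100_000
kept = russian_roulette(photons, weight=10.0)

plain_total = sum(photons)
rr_total = sum(energy * w for energy, w in kept)
print(len(kept), plain_total, rr_total)
```

Only about a tenth of the particles survive to be tracked, yet the weighted energy total agrees with the full sum to within statistics, which is where the processing-time gain comes from.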

The L1 Trigger offline software group is preparing the code necessary to emulate the L1 Trigger for the 2015 run, using an interim trigger, and in 2016 and beyond, using the full upgrade trigger. The design effort has focused on unifying the code across subsystems, by developing common solutions to shared problems such as an evolving hardware configuration.

Since our summary in the previous report in June, several developments have taken place in the CMS reconstruction software. The legacy production release 5_3_X remained essentially unchanged until September (5_3_12_patch1), when new algorithms used in b-tagging and in tau reconstruction were added to aid future or ongoing analyses. Similar work was added on top of the 6_2_X cycle. Among the most interesting features coming with the new 7_0_X release is the reorganisation of the electron/photon (eγ) reconstruction, which will deliver eγ reconstruction fully integrated into the particle-flow algorithmic flow and provide a more complete global event description. Many major technical infrastructure changes were also made in the software to comply with the multi-threading environment in which we plan to be able to run next year. Work to converge on a tracking configuration robust in a high-pile-up environment with 25 ns bunch spacing is ongoing as well.

The Physics Analysis Tools software is very stable and routinely used in physics analyses by the collaboration. Besides detailed web documentation, users are also regularly trained to use PAT in five-day tutorials. A multivariate electron-identification analysis package has been added, and the software has been adapted to the new defaults in the tau and b-tagging reconstruction. In the development releases, final polishing of the so-called "unscheduled" processing mode is taking place, as well as the updates required for a multi-threaded CMSSW framework version; the latter task affects almost all software modules.

The basis for upgrade software development has moved to the CMSSW_6_2_X release cycle, which will be the basis for the 2014 simulations in support of the Phase 2 Technical Proposal. Currently CMSSW_6_2_0_SLHC4 is available, supporting the Phase 1 Pixel and HCAL upgrades as well as the BE5D Phase 2 outer tracker, and including critical developments in the Phase 1 HCAL local reconstruction necessary for good performance at high pile-up. Other recent developments include the integration of tracking using the BE5D tracker; the next step is to optimise the tracking configuration for the very high pile-up expected in Phase 2. Other geometry components targeted for the Technical Proposal release are in progress, addressing either fast-simulation or full-simulation use cases.



Despite the LHC shutdown, the past six months have been extremely intense for our coordination area. All the PPD teams are engaged on three major fronts: the exploitation of the 2011 and 2012 data, the preparation for post-LS1 data taking, and the support of studies for the upgrade of the detector.
Alignment and Calibration and Database (AlCaDB)

Work in the AlCaDB project has followed the planning and moved into a consolidation phase. On the AlCa side, efforts have mainly concentrated on providing and validating the new calibration and alignment conditions needed for the re-processing campaigns of the 2011 and 2012 data and for the simulation of multiple upgrade scenarios. Work on improvements to the Global Tag Collector tool used to manage these conditions is ongoing. On the DB side, the major redesign of the core conditions software, as discussed in various meetings in the last months of 2012, is being finalised on schedule. We plan to change over to the new DB structure in early 2014, well before the validation for the planned Monte Carlo processing campaigns.

Data Quality Monitoring (DQM)/Data Certification

With the certification of the 2012 data complete, attention has turned to the increased performance and automation of the DQM system for 2015 data taking. For data certification, we have used the Global Run in November to test the functionality of an automated run registration feature and the updated data certification procedure.

For the online DQM, work is ongoing on the data-handling system that interfaces with the new DAQ framework, on a new integration of the publishing protocols from DQM, and on a more automated approach to the monitoring of the online DQM framework and cluster.

For the offline, significant strides have been made in preparing for a completely multi-threaded DQM environment for data processing. Successful tests will be used as a model for the migration of all software packages in the coming months. In addition to multi-threading, the functionality of the DQM GUI has been enhanced to include more comparative tools, which are needed to effectively validate simulation and data release cycles.

Physics Data Monte-Carlo Validation (PdmV)

The preparation for the legacy reprocessing of the 2011 7 TeV data, and of the corresponding Monte Carlo (MC), has comprised multiple steps of validation of the alignment and calibration conditions. The reprocessing with the 5_3 CMSSW release was submitted in September and is now nearly complete. After extensive validation of the High-Level Trigger menu ported to 5_3_X, a final round of validation of the alignment and calibration conditions will take place before the end of the year.

The construction of a suite of datasets and monitored quantities for the physics validation of MC against data is in progress, as a joint venture of the DQM and PdmV teams. A set of samples spanning key final states and pile-up conditions has been identified and made available. The goal is to establish a well-understood set of benchmarks and to deploy automated, DQM-based comparisons. This eases the physics validation of MC against data and will be essential when facing the new data-taking conditions after LS1.

The Event Interpretation defines, in the context of the CMS event reconstruction, a disambiguation procedure that ascribes Particle Flow blocks to physics objects and provides a collection of reconstructed particles on which these manipulations have already been performed. The PdmV and DQM teams are developing extensions to the existing validation workflows that will also permit the monitoring of the event-interpreted reconstructed objects and their comparison with standard Particle Flow objects.

MC production has seen the introduction of new campaigns to provide samples in preparation for the 2015 data taking at 13 TeV. The McM platform for MC-production management is now in general use.


Since the last CMS Bulletin, the CMS Physics Analysis Groups have completed more than 70 new analyses, many of which are based on the complete Run 1 dataset. In parallel, the Snowmass white paper on the projected discovery potential of CMS at the HL-LHC has been completed, while the ECFA HL-LHC future physics studies have been summarised in a report and nine published benchmark analyses.

Run 1 summary studies on b-tag and jet identification, quark-gluon discrimination and boosted topologies have been documented in BTV-13-001 and JME-13-002/005/006, respectively. The new tracking alignment and performance papers are being prepared for submission as well.

The Higgs analysis group produced several new results, including the search for ttH with the Higgs boson decaying to ZZ, WW or ττ+bb (HIG-13-019/020), where an excess of ~2.5σ is observed in the like-sign di-muon channel, and new searches for high-mass Higgs bosons (HIG-13-022). Searches for invisible Higgs decays have also been performed, using both the associated-production (HIG-13-018/028) and the VBF (HIG-13-013) channels. A combined limit on the invisible branching fraction of BR(H→invisible) < 52% has been set at 95% confidence level. The final Run 1 VH(bb) search, which sees a 2.1σ excess consistent with the SM Higgs boson, has been submitted to a journal. The final Run 1 H(WW), H(ZZ) and H(ττ) analyses have been approved and are in the final stages of preparation for journal submission.

The SUS PAG continued harvesting the 8 TeV dataset in the search for “natural” SUSY. New results include searches for sbottom and stop production via gluino decays using the “razor” variable (SUS-13-004) and specific topologies (1, 2 or 3 leptons plus b jets; SUS-13-007/013/008, respectively), as well as searches for gluinos decaying to top pairs and neutralinos (SUS-13-016). The lower limits on the gluino mass have been pushed up to 1350 GeV at 95% CL. A new series of searches with Higgs bosons in the final state has started to appear, such as searches for stop pair production or electroweak partner pair production with Higgs bosons in the decay chain (SUS-13-014/017/021). Chargino and neutralino masses are probed up to about 200 GeV in the latter search.

The EXO PAG conducted further searches for new physics. Among them are an analysis looking for RPV gluino decays into three jets (EXO-12-049), giving a lower gluino-mass limit of 650 GeV at 95% CL, and a new interpretation of the mono-lepton analysis (EXO-12-060) in terms of dark matter (EXO-13-004).

The B2G group produced several new results with the full 8 TeV dataset. The B2G-13-001 analysis shows a combined search for tt resonances; when interpreted in specific models, for example the RS KK gluon, the lower mass limit is pushed up to M > 2.7 TeV at 95% CL. The B2G-12-008 analysis searched for excited top quarks (and bottom squarks in an RPV scenario), excluding masses between 300 and 703 GeV (250 and 326 GeV).
In precision physics, the TOP group continued to confront NLO QCD calculations with measurements of the ttbb production cross-section at 8 TeV (TOP-13-010) and global event variables (TOP-12-042). The CMS measurements of the top-quark mass have been combined to give the latest CMS value of 173.49 ± 0.36 (stat.) ± 0.91 (syst.) GeV (TOP-13-002). One of the highlights of EPS 2013 was the first observation of the tW single-top-quark production mechanism with a significance of 6σ (TOP-12-040).

Derivations of the strong coupling constant up to a Q scale of 1.4 TeV have been performed through measurements of the three-jet-mass double-differential distributions (SMP-12-027) and of the inclusive jet cross-section (SMP-12-028). The latter analysis also allows the uncertainty on the gluon PDF to be reduced with respect to the DIS-only derivation. The di-photon production differential cross-section has been measured and found to be in agreement with NNLO calculations (SMP-13-001).


Figure 5: A grand summary of our Run 1 cross-section measurements, spanning almost six orders of magnitude.

One of the most prominent results of this year has been the first observation of the B0s→μμ decay by both the CMS (BPH-13-004) and LHCb collaborations, after a 25-year-long relay race among different facilities to establish this rare decay. The branching fraction measured by CMS is (3.0 ± 1.0) × 10⁻⁹. The combination with LHCb (BPH-13-007) gives (2.9 ± 0.7) × 10⁻⁹, in good agreement with the SM prediction. Other results from the BPH group include the first-ever measurement at a hadron collider of the χb2/χb1 cross-section ratio (BPH-13-005), helping to shed light on the quarkonium production mechanism.
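The simplest way to see how two compatible measurements combine is an inverse-variance-weighted average: each input is weighted by 1/σ², and the combined uncertainty shrinks accordingly. The sketch below is only the textbook formula with illustrative inputs (the published CMS+LHCb combination uses the full likelihoods, and the second input here is not LHCb's actual result):

```python
import math

def combine(measurements):
    """Inverse-variance-weighted average of independent measurements,
    each given as (value, uncertainty)."""
    weights = [1.0 / sigma**2 for _, sigma in measurements]
    mean = sum(w * v for (v, _), w in zip(measurements, weights)) / sum(weights)
    err = 1.0 / math.sqrt(sum(weights))
    return mean, err

# Illustrative inputs in units of 1e-9: the CMS central value, plus a
# hypothetical second measurement with a smaller uncertainty.
mean, err = combine([(3.0, 1.0), (2.9, 0.7)])
print(f"({mean:.1f} +/- {err:.1f}) x 1e-9")
```

The more precise input dominates the average, and the combined uncertainty is smaller than either individual one.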

The forward physics PAG (FSQ) has produced the first observation of pure electroweak production of a Z boson in association with forward/backward jets at 8 TeV (FSQ-12-035) presented at the SUSY 2013 conference.
The HIN group continues to analyse data from the PbPb runs at 2.76 TeV, the pPb run at 5.02 TeV and the high-statistics pp run at 2.76 TeV. Very interesting results were presented at the Hard Probes 2013 conference, where the measurement of the nuclear modification factor in pPb (HIN-12-007) made a big impact. Other recent results from the HIN group include the comparison of photon–jet correlations in PbPb, pPb and pp events (HIN-13-006), and studies of inclusive jet and Υ(nS) production in pPb and PbPb collisions (HIN-13-001/003).


The three post-LS1 Phase 1 Upgrade projects (the L1 Trigger, the Pixel Tracker and HCAL) are all making excellent progress and are transitioning from the prototype to the execution phase. Meanwhile, plans are developing for Phase 2, a major Upgrade programme targeting the third long shutdown, LS3. News on Phase 1 is included under the respective projects; we provide only a brief summary here.

Phase 1
The plan for the L1 Trigger relies on the installation, during the present shutdown, of optical splitting for the Trigger input signals. This will allow the new Trigger system to be brought online and fully commissioned during beam operation in 2015, while CMS relies on the existing legacy Trigger for physics. Once fully commissioned, the experiment can switch over to the new Trigger by 2016; it will provide greatly improved performance at high event pile-up. System tests of the splitter system and of the new architecture of the calorimeter trigger were very successful, and the work in LS1 is on track. Prototype boards for the new trigger are either under study or in production, with a full system test planned for the first half of next year. There has been good progress on developing new algorithms, but more work is needed on algorithms and software to be ready for physics in 2016. Help wanted and welcome! The success of this project will have a major impact on our physics in Run 2.

Prototype components and modules for the Pixel detector have been tested and validated. The final version of the readout chip is being submitted to the foundry, the production sensors for BPIX have been ordered, and pre-production FPIX sensors are under test. A full CO2 cooling system has been commissioned at the Tracker Integration Facility, and installation of the pipework and cooling systems at P5 is planned for 2014.

The R&D programme to develop silicon photo-multipliers for HCAL has been very successful, with sensors from two vendors meeting the specifications for photo-electron efficiency, dynamic range, radiation tolerance and neutron insensitivity. The new micro-TCA back-end electronics is in production for HF, which will be the first detector to receive the upgrade. The prototype of the new readout chip (QIE10), the first version of the QIE with an integrated TDC, performs very well for both charge measurement and timing.

Phase 2
The overall scope of the Phase 2 upgrade was further developed following the June Upgrade Week held at DESY. The performance longevity of the existing (Phase 1) detectors is a driving concern in defining the scope and has been extensively studied. It is clear that the entire tracking system and the endcap calorimeters must be replaced. In addition, the very high pile-up anticipated beyond LS3 will require a further upgrade of the Trigger, with the incorporation of tracking into all physics objects and a major increase in the Trigger bandwidth. A description of the full upgrade scope, along with an initial cost estimate, was documented and presented to the CERN Resource Review Board (document ref. CERN-RRB-2013-124). The emphasis is now on simulation studies to motivate the upgrades, on R&D to develop the technologies, and on the conceptual design of the detector upgrades. The goal is to submit a Technical Proposal covering the whole Phase 2 programme by September 2014, and to prepare TDRs for the individual detectors by 2016/’17.

An ECFA workshop on the HL-LHC was held in October. This was the first time that all the experiments, the accelerator community and the theory community presented and discussed plans for the high-luminosity phase. Working groups (on physics areas, tracking, calorimeters, the machine-experiment interface, etc.) met several times ahead of the workshop and presented summaries at the meeting. The final report is available online. The workshop was a great success, and discussions are underway to continue and extend this approach.

The studies presented at the ECFA workshop used DELPHES to demonstrate the physics potential of the HL-LHC. To fully develop the CMS plan for Phase 2, Monte Carlo simulation of the detector is needed. This is a critical juncture in establishing Phase 2, and the development of the simulation and reconstruction code for the new detector is key. The goal is to produce a CMSSW release early in 2014 and to embark on a Monte Carlo and analysis campaign in the spring. This is a great place and time to forge the future of CMS, and to gain experience for the beam conditions we will be facing. Join the effort!


The organisation of the Open Days at the end of September was the single biggest effort of the CMS Communications Group this year. We would like to thank all the volunteers for their hard work in showing our Point 5 facilities and explaining the science and technology to the general public. Over the two days, more than 5,000 people visited the CMS detector underground and enjoyed the surface activities, which included an exhibition on CMS, a workshop on superconductivity, and an activity for our younger visitors involving wooden Kapla blocks. The Communications Group took advantage of the preparations to produce new CMS posters that can be reused at other venues. Event-display images have been produced not just for this occasion but also for other exhibits, education purposes, publications, etc. During the Open Days, Gilles Jobin, winner of the 2012 Collide@CERN prize, performed his Quantum show at Point 5, with the light installation of German artist Julius von Bismarck.

Image 3: CERN Open Days at CMS welcomed more than 5,000 visitors. Congratulations to the CMS volunteers for making this possible.

During the CERN Open Days the CMS shop sold almost 3,000 items; for the first time customs duties and VAT were due, but sales went smoothly despite the late introduction of this change.

The Art@CMS programme started this year. It brings international artists to CMS with the goal of creating CMS- and HEP-inspired artwork, which is then exhibited at other venues, exporting our scientific work to places not usually accessible to us. Art@CMS has been featured in places such as the City of London School (Unseen Dimensions exhibition), the Deutsches Museum in Bonn (Faszination Ursprung exhibition) and RWTH Aachen (Art of Science, Beauty in Creation exhibition). Further artist exhibitions are also taking place on Thursday this week, next April, and during the Miami CMS Week in 2014.

The internal CMS Posters Contest was launched earlier this year. We received a small number of superb entries; congratulations go to all the participating institutes. We will keep records of all the posters for possible future use. The two winning posters are shown in the "pas perdus" during the CMS Week. They are also available online: The Story of CMS (by the University of Napoli) and The CMS Detector and Sub-detectors (by the University of Ghent). In addition, the contest enabled us to discover an outstanding idea for a game from our collaborators at the University of Bologna. The concept of the game is explained in the poster below, and all sub-detectors are requested to contribute in order to complete the project.

Image 4: Poster explaining the CMS game concept.

We hope the results will motivate more CMS institutes to participate in future contests.


CERN Safety rules and Radiation Protection at CMS

The CERN Safety rules are defined by the Occupational Health & Safety and Environmental Protection Unit (HSE Unit)1, CERN’s institutional authority and central Safety body reporting to the Director-General. In particular, the Radiation Protection group (DGS-RP) ensures that personnel on the CERN sites and the public are protected from the potentially harmful effects of ionising radiation linked to CERN activities. The RP group fulfils its mandate in collaboration with the CERN departments that own or operate sources of ionising radiation and have the responsibility for the Radiation Safety of these sources.

The specific responsibilities concerning "Radiation Safety" and "Radiation Protection" are delegated as follows:

  1. Radiation Safety is the responsibility of every CERN Department owning radiation sources or using radiation sources placed at its disposal. These Departments are in charge of implementing the requirements laid down in CERN’s Safety rules and documents, or specified by DGS-RP, in order to ensure the safe operation of their existing and future installations (accelerators, beams, experiments). The Departments are also in charge of training their personnel in matters of Radiation Protection according to the rules specified by DGS-RP.
  2. Radiation Protection is the responsibility of the DGS-RP. 
    Its duties include operational radiation protection, which comprises the assessment of radiological risks, the classification of workplaces into radiation zones, the implementation of control measures, the monitoring of radiation levels in the different radiation areas and of the impact of radiation on the environment, the monitoring of the implementation of regulations and of specific rulings, the approval of ALARA plans, and the control and characterisation of radioactive material and waste.

At CMS, Safety Officers are appointed in order to help and support the collaborators in the application of the Safety rules:

The CMS Radiation Safety Officer (RSO)2, appointed by the PH Department Head in consultation with the GLIMOS and the Technical Coordinator, is familiar with the hazards of ionising radiation and with the regulations and techniques of radiation protection. Among other duties, he/she ensures3:

  • that supervisors of technical work are familiar with and apply the Radiation Safety Manual (Code F);
  • that the installations comply with the regulations in force;
  • that the technical means of radiation safety and protection are in place and operational;
  • that the organisational conditions are adequate for the safe operation of the installation.

In addition, the CMS GLIMOS and the DGS-RP group have nominated 20 Radiation Protection Experts (RPEs4), who hold certificates as Swiss Radiation Protection Experts and who demonstrated technical competence and excellent conduct during the practical exercises.

They carry out the following tasks in the non-designated or supervised5 radiation areas of CMS:

  • risk assessment of workplaces with respect to ionising radiation;
  • radiological controls of material leaving the CMS experimental areas;
  • monitoring the compliance of workers in the CMS experimental areas with CERN’s radiation protection rules.

These members of the personnel support the CMS collaboration in the application of Safety Code F6 and, in particular, in making sure that the ALARA7 process is applied in the CMS supervised radiation areas (i.e. the UXC55 cavern and the SX5 RP workshop). This process is part of the « JOLi » principles: Justification, Optimisation and Limitation.

For example, from the time a pregnancy is established through to the birth of the child, the equivalent dose at the surface of the abdomen of occupationally exposed women must not exceed 1 mSv, and that to the fœtus, from external radiation or from the incorporation of radionuclides, must also not exceed 1 mSv. Women who are breast-feeding are not allowed to perform any work involving radioactive substances that entails a risk of internal or external contamination.

In Safety there is an obligation of results, which implies that, whatever the means made available, only the result counts.

In this respect, every member of the personnel has the right and the obligation to stop any dangerous work. In particular, as a CMS collaborator, if you witness a breach in the application of the Safety rules and procedures, you should react and remind your colleague, first of all to protect him/her, but also to protect the CMS collaboration. If the behaviour is repeated, it is extremely important to involve the supervisors, including Technical Coordination through the Safety Officers (GLIMOS, RSO, TSO, etc.), the Technical Coordination On-call (16 5000) or the shifters in the SCX5 control room (Technical Shifter DCS or SLIMOS).

For more information, please consult the following CERN Bulletin articles:

and keep in mind that there is « no true excellence without excellence in Safety ».

CMS Safety office: -


[1] Source « HSE Unit Website » 
[2] The CMS RSO is Stéphane BALLY
[3] Source CERN « A9 Code »
[4] Source EDMS 941627 « Organisation of Operational RP for CERN’s Experiments »
[5] Areas in which the effective dose received can exceed 1 mSv in any consecutive 12-month period, without exceeding 6 mSv.
[6] Source EDMS 335729 v.2 « Radiation Protection »
[7] As Low As Reasonably Achievable (below the appropriate dose limits, economic and social factors being taken into account)

