CMS Bulletin

TECHNICAL COORDINATION

 

In the reporting period, CMS has collected about 10 fb–1 of pp collision data. Considering the prolonged duration of the current running period, with no opportunity for thorough maintenance since 2009, the reliability of the infrastructure and common systems has been satisfactory. Once again, CMS was fortunate that some failures coincided with LHC downtime, avoiding major data losses.

After repeated difficulties over the summer with reconnection of the cold box following an unexpected stop, the procedure was revised. For reliability, and to reduce the risk of a fast discharge, the reconnection is now done at a reduced field of 2 T, resulting in a mechanical on-off-on cycle. It is also time consuming: two hours each for the ramp-down and ramp-up, plus several hours for refilling the He dewar before restoring the field to 3.8 T. In response, a mode of operation has been successfully tested that doubles the refill rate of the dewar. Reconnection at a somewhat higher field is also under study, with the aim of having the magnet operational again within about five hours of a cold-box stop.

As the cold box was disconnected twice since the last CMS Week due to insufficiently planned or misguided interventions, very strict rules have now been imposed on such work, including a formal ban on any intervention on the electrical system of the magnet during beam operation.

Other infrastructure incidents have mostly affected the cooling system and involved faulty sensors. The C6F14 cooling was stopped three times: the first time due to a faulty level indicator in the main reservoir falsely indicating a major leak; the second due to a failing power LED in the control system, found to be in series with the cooling PLC; and the third due to a low liquid-level alarm caused by an incorrect threshold setting. A major overhaul and review of the system and its operational procedures is planned during LS1 to further improve its fault tolerance and redundancy.

YETS

LHC proton-proton beam operation will end on 17 December at 06:00. The following 24 hours will be used for calibration at full field. On the morning of 18 December the magnet will be ramped down, followed by some hours of calibration at 0 T. In parallel, the RP sweep in the UXC will take place. On 19 December the detector will be brought into a safe state for the year-end closure of CERN, including a stoppage of all flammable gases. At 15:00 the shift operation will be terminated. From then on, two daily safety tours will take place. In addition, the installation of CASTOR will be prepared during 19–21 December.

On 3 January, well before the official re-opening of CERN on 7 January, the cool-down of the magnet will start, to ensure readiness for a cosmic run on 12–14 January, prior to the first beam in the LHC. Also on 3 January, CASTOR will be installed at the negative end with the beam pipe under vacuum. As the ZDC crane is unfortunately still not mounted on the TANs, the ZDC will be reinstalled inside the TAN by conventional means, provided the radiation level permits. The installation is foreseen for 7 January, to allow for the longest possible cooling time.

Shift operation is planned to start on 5 January. For the safety tours between 20 December and 3 January as well as the shifts between 3 January and 7 January, volunteers are still highly welcome – please contact Technical Coordination.

LS1

The third TC workshop in preparation for LS1 took place on 26–27 November at Chateau Bossey, with an overflow session on 5 December at CERN. Thanks to intensive work on the planning of sub-projects and on the overall schedule, a robust master schedule, with branching points, milestones and a small (three-week) contingency, will be available shortly. A major rework of the schedule in late spring/early summer 2013, with an early exposure of YB0, now gives more time to install the humidity sealing and the cooling-system insulation in the vactank area, allowing the Tracker to be operated cold, one of the most important objectives of LS1.

The list of main objectives for LS1 was reviewed and left unchanged:

·       Prepare CMS to be able to run the Tracker cooling fluid at –25 °C.

·       Install a 45 mm o/d beam pipe and make provisions for the installation of a new pixel detector before LS2.

·       Complete the DAQ upgrade and make provisions for a Trigger upgrade before LS2.

·       Eliminate all CMS-specific risks to detector integrity, data quality and data-taking efficiency, requiring a substantial overhaul and consolidation of the magnet, the electrical system, the cooling systems, the gas systems and the dry-gas distribution.

·       Complete all upgrade and consolidation work on the muon systems that requires a long shutdown, such as YE4, ME1/1, DTSC.

·       Complete the first stage of the consolidation and upgrade of the HCAL photon transducers, the HO and HF, and the exchange of the CCMs.

·       Complete the installation of the 4th endcap muon layer (ME4/2, RE4).

·       Correct all known faults affecting the physics performance of CMS. (Each subsystem has a prioritised list of interventions.)

The preparation of SX5 as a centre for detector maintenance and repair is well advanced. The areas for the muon detectors will be operational by the end of the year. The HVAC system for the pixel lab is being installed. The lab will be ready to receive the pixel detector when it is extracted. The various storage areas including those for activated parts are under preparation and will be ready at the beginning of LS1.

Technical oversight of LS1 projects involved a large number of reviews for a great variety of projects. They showed that the projects are in a well-advanced state and are expected to be ready as required by the master schedule.

A few more reviews are still outstanding:

·       An EDR for RPC4, scheduled for early 2013, when we will have some experience with the gap production currently starting at Kodel.

·       An MPR for ME4/2 and ME1/1, which are closely interlinked. It will be held early in 2013, when hopefully all questions on the new on-board electronics and the increased LV power consumption can be answered and the new ODMB with copper control circuitry will be available.

·       A Pixel-Beam Pipe Integration Review to revisit all aspects of the installation of the old and new pixel detectors around the new beam pipe, particularly the tests and measurements to be done in LS1. The review will be held in spring 2013.

In general the preparations for LS1 are proceeding well, consistent with a robust and credible master plan. If resources are made available as planned and there are no major surprises in the next 2 months, CMS will start LS1 well prepared.

MAGNET

 

Following the unexpected magnet stops last August due to sequences of unfortunate events on the services and cryogenics [see CMS internal report], a few more events and initiatives again disrupted the magnet operation. All the magnet parameters stayed at their nominal values during this period without any fault or alarm on the magnet control and safety systems.

The magnet was stopped for the September technical stop to allow interventions in the experimental cavern on the detector services.

On 1 October, to prepare the transfer of the liquid nitrogen tank to its new location, several control cables had to be removed. One cable was mistakenly cut, causing a digital input card to switch off and resulting in a cold-box (CB) stop. This tank is used for the pre-cooling of the magnet from room temperature down to 80 K, and for this reason it is controlled through the cryogenics control system. Since the connection of the CB was only allowed for a field below 2 T, to avoid the risk of triggering a fast dump, the magnet had to be ramped down to 2 T. It then took several hours to refill the dewar with liquid helium before ramping back up to 3.8 T.

On 19 October, a problem occurred on the primary water cooling system. The magnet was preventively ramped down to 3 T for a couple of hours to avoid a slow dump as the busbar cooling circuit temperature was increasing. The CB was only partly affected, and it didn’t stop. With only the third turbine still working, the missing refrigeration power was delivered by the dewar, and its level decreased only to 4 m3, well above the critical threshold.

Later in October, the symptoms of CB blockage by impurities reappeared. Without intervention, the refrigeration power would have been reduced before the end of November. The refrigeration power delivered by the CB was therefore brought to its maximum by increasing the helium flow rate and the turbine speeds. Despite the gain in refrigeration power, the trend did not change and the blockage rate even increased. The projection indicated a risk of running short of refrigeration during the last week of LHC physics in December. A regeneration of the CB (circulation of warm gas to remove the impurities) was therefore necessary during the fourth machine-development (MD4) period at the end of November.

Another human error on the cryogenics took place before MD4, on the evening of 22 November, during the connection of an impurity analyser to the cryogenic control system, causing the entire cryoplant to stop. It was again necessary to ramp the magnet down to 2 T to reconnect the CB. This last event changed the values of the parameters characterising the CB blockage, and over the couple of days before MD4 it was eventually no longer possible to make any estimate for the coming weeks.

The magnet was ramped down at the start of MD4 for the CB regeneration and ramped back up to 3.8 T on 28 November, before the end of the MD period.

INFRASTRUCTURE

The CMS Infrastructures teams are preparing for the LS1 activities. A long list of maintenance, consolidation and upgrade projects for CMS Infrastructures is on the table and is being discussed between Technical Coordination and sub-detector representatives. Apart from the activities concerning the cooling infrastructure (see below), two main projects have started: the refurbishment of the SX5 building, from storage area to RP storage and Muon stations laboratory; and the procurement of a new dry-gas (nitrogen and dry air) plant for inner-detector flushing. We briefly present here the work done on the first item, leaving the second for the next CMS Bulletin issue.

The SX5 building is entering its third era: from main assembly building for CMS from 2000 to 2007, to storage building from 2008 to 2012, to RP storage and Muon laboratory during LS1 and beyond. A wall of concrete blocks has been erected to delimit the RP zone, while the rest of the surface has been split between the ME1/1 and the CSC/DT laboratories. Access doors and the floor surface have been prepared, and the installation of basic infrastructure such as power sockets, lighting, extraction units and washing facilities is under way. A dedicated water-cooling and gas-distribution network will also be put in place. The area will be fully equipped before LS1 starts and will receive the first components from CMS at the very beginning of the shutdown.

Regarding the cooling infrastructure, a few cooling incidents disturbed data acquisition during the summer, most of them due to failures of acquisition sensors. A thorough investigation by the EN/CV teams followed, and systems have been modified or prepared for modifications to be implemented during LS1.

Components for the consolidation activities foreseen on the Yoke and Rack circuits have been purchased, and the projects involving major reworks of EN/CV circuits have been agreed (in particular the study for the decoupling of the magnet Busbar circuit from any other system).

For the C6F14 cooling circuits, a review held on 4 October within the Tracker community has endorsed the design for modification proposed by EN/CV/DC. Most components have been procured and the 3D models to integrate the mechanical modifications are ready.

In addition to consolidation and maintenance activities, the construction of a brand-new system has started: the CO2 cooling plant for the upgrade of the pixel detector. Much work is ongoing to integrate the necessary ancillary services for the installation of the final system at P5.

Image 1: Layout of the SX5 building at P5 (courtesy G. Duthion).

TRACKER

Pixel Tracker

With the 2012 proton-proton run almost complete, the pixel detector continues to operate well in an environment with large pile-up and high L1 rate. During this period the pixel detector has shown excellent stability, with the number of active channels in both the BPIX and FPIX the same as in the first month of 2012 running, corresponding to 96.3% of the detector being active. This total includes the recovery of six FPIX channels that had been temporarily disabled due to an unexpected dependence on the magnetic field. A dedicated study identified a small crack in an optical-cable connector; the subsequent repair restored 120 ROCs in the FPIX.

During 2012 there has been close collaboration between online operations and offline studies, resulting in the first dedicated HV bias scans used for the pixel Lorentz-angle measurement. These scans help to better understand this important parameter, which changes with temperature, irradiation and bias voltage. This is in addition to all the other scans taken throughout 2012, namely the timing and HV depletion-depth scans, the latter performed at monthly intervals during the entire year. These measurements are essential to study changes in the effective space-charge distribution in irradiated pixel silicon sensors. They are important for the further operation of the present pixel detector and for the design of the planned upgraded detector.

With peak instantaneous luminosities exceeding 7E33 cm–2s–1, reducing the impact of single-event upsets (SEUs) on individual hardware components has been a key activity in 2012. Through improvements in the readout software and firmware, the pixel detector is now able to reconfigure automatically whenever an SEU causes it to go into a permanent BUSY state. In the rare cases when the reconfiguration fails to fix the problem, the affected channels are disabled on the fly by operating in a RunningDegraded state. In addition, the maximum time spent in BUSY has been reduced from 2 seconds to 0.25 seconds, decreasing the total dead-time from the pixel detector to below 1%. On the online monitoring side, the pixel DQM can now show the fraction of ROCs that become silent due to SEUs as a function of time in a run.
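
A minimal sketch of this kind of automatic recovery logic is shown below; the function names, states and polling interval are illustrative assumptions, not the actual pixel online software.

```python
# Illustrative sketch of the automatic SEU recovery described above.
# Names, states and timeouts are assumptions, not the actual pixel DAQ code.
import time

BUSY_TIMEOUT = 0.25  # s: maximum time allowed in BUSY before acting (was 2 s)

def handle_busy(channel, reconfigure, disable):
    """Try to clear a persistent BUSY state caused by an SEU."""
    t0 = time.time()
    while channel.is_busy():
        if time.time() - t0 < BUSY_TIMEOUT:
            time.sleep(0.01)          # give the channel a chance to recover by itself
            continue
        if reconfigure(channel):      # automatic reconfiguration after the SEU
            return "recovered"
        disable(channel)              # rare case: mask the channel on the fly
        return "RunningDegraded"
    return "ok"
```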

The work on the pixel lab at P5 continues to make significant progress, with the installation of the floor, electrical supplies, and pixel chillers complete. The current work on cooling and the dehumidifier is progressing well, and the entire project is on schedule and should be ready in time for the upcoming LS1.

Strip Tracker

During 2012 the number of active channels has been quite stable and is now 97.5%. A few modules, which were already misbehaving before, were eventually masked from the DAQ, resulting in a loss of 0.3%.

Over a delivered luminosity of 20.7 fb–1 (up to 17 November), the strip tracker caused data losses of 213 pb–1 (downtime), 70 pb–1 (bad quality) and an additional but unavoidable 156 pb–1 at the beginning of fills (raising the high voltage). The overall losses correspond to 2.1% of the total delivered luminosity, and are equivalent to 13% of the CMS losses. A lot of effort was made throughout the year to improve the strip tracker's stability and maintain its high efficiency.
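
As a quick back-of-the-envelope check of the figures quoted above (not an official accounting):

```python
# Rough consistency check of the strip-tracker loss figures quoted above.
delivered = 20.7e3            # pb-1 delivered up to 17 November
losses = 213 + 70 + 156       # pb-1: downtime + bad quality + HV ramp at start of fills
print(f"total loss: {losses} pb-1 = {100 * losses / delivered:.1f}% of delivered")
# -> total loss: 439 pb-1 = 2.1% of delivered
```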

A semaphore implemented in DCS now continuously checks the beam conditions, allowing the high voltages to be raised automatically when “Stable Beams” is declared. This simplified procedure has been in place for three months and results in an extra 0.5 pb–1 being recorded per physics fill.

The strips suffered from a few hardware failures: one VME crate PSU, a VME link card and a disk controller on one DCS PC. The tank level sensor stopped functioning for the SS1 cooling plant, causing detector downtime; the same happened recently for SS2, but luckily without consequences. In both cases the sensors were recovered with a reset.

A continuous effort was made to fight DAQ instability, identifying sources of downtime and looking for fixes or quick recovery procedures. Soft-error recovery was successfully commissioned for the most frequently affected component (TIB-2.8.1, aka FED101), yielding a significant saving in downtime. Experts are now looking to extend this mechanism to other, less frequent problems. Several improvements were made to the FED firmware to make the FEDs robust against non-standard situations, such as data occasionally not being sent by the detector front-end or extra data being received without an associated L1 accept. Several smaller problems in the current implementation of the resync mechanism have also been found and fixed. The situation is continuously improving, but the rarity of the different failure types makes debugging difficult and time-consuming.

Calibration runs were taken to monitor detector stability, and bias scans were performed to provide depletion-voltage measurements, crucial for understanding the evolution of the detector with radiation.

Three years of running are drawing to a close, and the strips, despite the dramatic increase in instantaneous luminosity, have performed in an excellent and stable way, thanks also to the daily work of the operations team. DQM shows that the noise occupancy continues to be at a negligible level and that the hit efficiency is stable at a very high value.

The strip tracker is now looking forward to the 25 ns pilot run and the proton-lead collisions. The preparation for LS1 is now well established, thanks also to several comprehensive reviews that have been held. A full report was presented at the 3rd Technical Coordination workshop.

ELECTROMAGNETIC CALORIMETER (ECAL)

 

ECAL has been running stably with an up-time efficiency of 99.4% during Run 2012D, with about half of the inefficiency due to a single downtime episode. More than 99% of the collected data are certified as good by ECAL for offline analysis. The monitoring system and calibration chain have also been working smoothly, with excellent stability of the new laser source after its final tuning during the technical stop in September. Some drifts in the response after monitoring corrections, and some degradation in the resolution throughout Runs 2012C and 2012D, have been observed and will be corrected in the next reprocessing. Calibration constants for the full 2012 dataset, derived with well-established procedures, are going to be delivered by the end of the pp run. In parallel, studies of the performance evolution have been carried out to predict the longevity of ECAL towards the HL-LHC. Radiation-damage effects are studied with P5 data, particularly in the endcaps, and in dedicated test beams.

As in all human activities, some players are stepping down at the end of the year, and new players are going to replace them. Albeit thankful to the former for what they have done, and to the latter for what they promise to do, we do not list their names. Individuals are doomed to die, and must accept their fate. The ECAL of the brave and the visionary stands!

HADRON CALORIMETER (HCAL)

 

During the last three months of LHC operation in 2012 (October–December) the HCAL performed well. Out of a total of 6.5 fb–1 recorded by CMS, 170 pb–1 had to be declared ‘bad’ during the certification process due to HCAL-related problems.

Monitoring of the HCAL readout using LED data has detected a continuous loss of gain in the HF photomultipliers. The gain loss is found to be related to the current drawn by the PMTs. The LED data are used to correct the calibration of the channels, and the L1 look-up tables are routinely updated whenever the maximum deviation in any of the channels reaches 2%.
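
A minimal sketch of such an LED-based gain follow-up is given below; the function and variable names are illustrative assumptions, not the actual HCAL calibration code.

```python
# Illustrative sketch of the LED-based gain follow-up described above.
# Names and data structures are assumptions, not the actual HCAL calibration code.
def gain_corrections(led_response, led_reference, lut_threshold=0.02):
    """Return per-channel gain corrections and whether the L1 LUTs need updating."""
    corrections = {}
    max_deviation = 0.0
    for channel, response in led_response.items():
        ratio = response / led_reference[channel]   # relative gain w.r.t. the reference run
        corrections[channel] = 1.0 / ratio          # scale factor applied to the calibration
        max_deviation = max(max_deviation, abs(1.0 - ratio))
    update_lut = max_deviation >= lut_threshold     # update the L1 look-up tables at 2%
    return corrections, update_lut
```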

Laser data are used to monitor radiation damage in the HF quartz fibers and HE scintillators. The 2012 data (20 fb–1 delivered) show radiation-related loss of transparency in the quartz fibers, leading to an 8% signal loss at high η (η = 5) in the HF. The scintillators in the front sampling layers of the HE towers also show radiation damage: for an integrated luminosity of 20 fb–1, we observe a signal reduction of up to 30% in the highest-η region (η = 3) and at the level of 10% in the middle-η region (η = 2). Analysis of the laser runs, supplemented with analysis of collision data, provides corrections to the HCAL energy calibration constants.

The HCAL group continues preparations for the work planned during LS1. We will focus on three major tasks: the installation of new thin-window multi-anode PMTs and new readout cables on the HF; the replacement of the HO readout, from the present HPDs to SiPMs; and the replacement of the boards responsible for timing in the HB/HE and HO Clock and Control Modules (CCMs).

In October, the HCAL group evaluated the first prototypes of the QIE version 10 ASIC in the H2 test beam in the North Area. The QIE10 is fabricated in a rad-hard 0.35-micron SiGe technology and provides both ADC and TDC information. The dynamic range of the QIE10 is an order of magnitude larger than that of the current QIE8 chip. Signals from a quartz-fiber Cherenkov calorimeter with PMT readout were digitised with the QIE10 for several pion and electron energies. The data match the expected performance specified in the Upgrades TDR and show that the ASIC development is on schedule to provide final readout chips for the HF front-end upgrade scheduled for 2015.

In the same test-beam period, the EE+HE was read out with HE longitudinal segmentation using SiPMs from HPK, Ketek and FBK. The performance of the combined EE+HE matches the expectation from the Upgrades TDR. These tests complete the validation of the optical readout implementation planned for the HB/HE front-end upgrade in 2018.

MUON DETECTORS: DT

 

It has been three years without access to the chambers and their front-end electronics, and the DT collaboration is preparing for substantial maintenance and upgrade work during LS1 in 2013/14.

Even though, thanks to the constant care provided by the on-site operation team, the fraction of good channels is still very high at 98.8% and the downtime caused to CMS by DT failures is to date below 1%, the original robustness of the system has deteriorated and many small local interventions are needed to restore it.

The excellent stability and linearity with luminosity of the DT trigger rates has been exploited, and DT rates are currently published as an additional online luminometer in WBM. The calibration is calculated from the correlation with the pixel luminosity for 2011 data, while a preliminary independent calibration for 2012 data, based on the analysis of Van der Meer scan data (a project of CERN summer student Jessica Turner), still has to be refined into periods to take into account changes to the DT trigger system (Vdrift, changes in the overlap with the CSC).

Thorough measurements of the noise from the single-hit rate outside the drift-time box, as a function of the LHC luminosity, show that the noise rate and distribution are amazingly consistent with expectations from the simulations in the Muon TDR, which guided the detector design and construction. The MB4 chambers at the top of CMS show the largest rates. Extrapolations to the luminosities of 1–2 × 1034 cm–2s–1 expected in 2015–2018 are well within the operating range of the DT chambers.

The upgrade activities planned for LS1 are progressing according to schedule, both for the Theta Trigger Board (TTRB) replacement and for the Sector Collector (SC) relocation from UXC to USC. The TTRB work aims at reconstituting the stock of spare boards for the long-term operation of the chamber Minicrates. With the SC work, data and trigger primitives from each of the 250 DT chambers will be available in USC on optical fibers. Since the project was reviewed in an ECR by a panel of experts from the CMS Electronics and Technical Coordination teams in June, measurements of the signal transmission from the chambers up to the USC electronics have been going on with prototypes of the new electronics, in order to optimise the design parameters of the optical transmission links. This work is the cornerstone of any long-term upgrade plan for the DT system.

MUON DETECTORS: CSC

 

The CSC muon system has run well and very stably during the 2012 run. Problems with the delivery of low voltage to 10–15% of the ME1/1 chambers were mitigated at the trigger level by using modes that rely on coincidences between stations 2, 3 and 4. Attention now focuses on the ambitious upgrade programme for LS1. Simulation and reconstruction code has been prepared for the post-LS1 era, for which the CSC system will have a full set of 72 ME4/2 chambers installed, and the 3:1 ganging of strips in the inner section of ME1/1 (pseudorapidity 2.1–2.4) will be replaced by flash digitisation of each strip.

Several improvements were made to the CSC system during the course of the year. Zero-suppression of the anode readout reduced the CSC data volume by 15%. The response to single-event upsets (SEUs) that cause downstream FED readout problems was improved in two ways. First, the FED monitoring software now detects FEDs that are stuck in a warning state and resets them within about 4 seconds, instead of the several minutes previously needed for the DAQ shifter’s manual intervention. Second, the FED monitoring software now separates SEU errors from other (hardware) types of errors, and a “zero-tolerance” policy removes all SEUs by hard resets; previously, up to eight chambers were allowed to be in error before such a reset was issued.
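
The sketch below illustrates this kind of monitoring policy; the class and method names are assumptions made for illustration, not the actual CSC FED monitoring software.

```python
# Illustrative sketch of the stuck-FED reset and zero-tolerance SEU policy described above.
# Class and method names are assumptions, not the actual CSC monitoring software.
STUCK_RESET_DELAY = 4.0  # s: automatic reset instead of minutes of manual intervention

def monitor_feds(feds, now):
    """One monitoring pass over all CSC FEDs."""
    for fed in feds:
        if fed.in_warning_state() and now - fed.warning_since() > STUCK_RESET_DELAY:
            fed.reset()                   # stuck FED: reset automatically
        elif fed.has_error():
            if fed.error_is_seu():
                fed.hard_reset()          # zero tolerance: clear every SEU immediately
            else:
                fed.flag_for_expert()     # hardware errors go to the on-call expert
```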

The chamber factory at B904 now runs smoothly, turning out at least the anticipated one chamber per week. Temporary problems with wire breakage and the wire tension measuring machine were swiftly overcome. The factory has produced 27 of the necessary 67 chambers, and 17 are fully instrumented, tested and ready for installation at P5.

A full set of pre-production DCFEBs, the on-chamber boards used for strip digitisation in the ME1/1 upgrade, has been installed on a test chamber at B904 and put through its paces; only minor issues remain to be addressed. An upgraded on-chamber ALCT mezzanine board will be installed on ME1/1 as well as ME4/2 chambers (50 out of 160 have been produced). Among the off-chamber electronics, the OTMB mezzanine card has passed many tests, while the ODMB is undergoing rework, as an ASIC used for optical control of the DCFEBs will not be ready in time. A new low-voltage system that supplies more current to the ME1/1 electronics has been planned out and is now being prototyped. Reviews of the upgrades are coming soon: a DCFEB PRR on 6 December, and an ME1/1 ESR and ME4/2 MPR likely to happen in February.

A workshop held on 15–16 November on CSC planning for LS1 made it clear to the group that a large task is ahead in 2013, when electronics must be ready for ME1/1 and chambers for ME4/2, after which a great deal of careful installation work and testing must be done. The CSC group is being asked to supply additional effort, and some reorganisation around LS1 task forces will take place during the December CMS Week.

MUON DETECTORS: RPC

The RPC system is operating with a very high uptime, an average chamber efficiency of about 95% and an average cluster size of around 1.8. The fraction of active channels is 97.7%. Eight chambers are disconnected and forty are working in single-gap mode due to high-voltage problems. The total luminosity lost due to the RPCs in 2012 is 88.46 pb–1.

One of the main goals of 2012 was to improve the stability of the endcap trigger, which is strongly correlated with the performance of the detector due to the 3-out-of-3 trigger logic.

At the beginning of 2011 the instability of the detector efficiency was about 10%. Detailed studies found that this was mainly due to the strong correlation between the performance of the detector and the atmospheric pressure (P). Figure 1 shows the linear correlation between the average cluster size of the endcap chambers and P. This effect is expected for gaseous detectors and can be reduced by correcting the applied high-voltage working point (HVapp) according to the following equation: HVapp = HVeff (1 – α + α·P/P0), where α is a parameter estimated from the data (α < 1.0), P0 is a reference pressure and HVeff is the effective HV working point (WP).
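
A minimal sketch of this correction is shown below, assuming an illustrative reference pressure P0 of 965 mbar; the values and function names are examples, not the parameters used online.

```python
# Minimal sketch of the pressure correction of the applied HV working point,
# following HVapp = HVeff * (1 - alpha + alpha * P / P0) as given above.
# The reference pressure and the example values are illustrative assumptions.
P0 = 965.0  # mbar, assumed reference pressure

def applied_working_point(hv_eff, pressure, alpha=0.8):
    """Return the applied HV for a given effective working point and atmospheric pressure."""
    return hv_eff * (1.0 - alpha + alpha * pressure / P0)

# Example: a 9600 V effective working point on a day with P = 975 mbar
print(round(applied_working_point(9600.0, 975.0), 1))  # ~9679.6 V
```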

Figure 1: Cluster size versus atmospheric pressure before (red) and after (blue) the automatic HV working-point correction (endcaps: above; barrel: below).

Several improvements have been introduced since 2011 to stabilise the detector performance: a “slow” (once per fill) automatic WP correction with α = 1 (July 2011); a “fast” automatic WP correction, applied whenever the WP changes by 3 V (July 2012); and an α value of 0.8, estimated with the first 2012 data (November 2012). Thanks to the first two steps, the instability of the chamber efficiency went down from 10% to 4–5%; finally, with α equal to 0.8, the amplitude of the oscillations due to atmospheric-pressure variations has been reduced by a factor of 10 for the cluster size and by a factor of 4 for the efficiency (1–2%).

A detailed study has shown that the contribution of RPC hits to the standard muon reconstruction is in the range of 1% to 7%. The contribution reaches 5–7% in some specific η regions: the overlap between wheel 0 and wheels ±1, and the barrel–endcap overlap.

A new muon object, based on the matching between a Tracker track and RPC hits, has been released in CMSSW 6.1.X.

The RE4 upgrade project is progressing well. The first batch of gaps has been delivered to CERN, Ghent and BARC (India) and is being tested right now. A few imperfections on the graphite surface of the gaps have been detected, and some improvements in the gap production have already been introduced.

A first chamber has been produced at the laboratory in B904 at CERN and will be tested with cosmic rays in December. Ghent will begin to assemble chambers in December, and BARC in January. The power system has been ordered and will be delivered to CERN in May 2013.

The organisation chart for 2013 and 2014 is under preparation and will be presented at the next CMS Week. The Run Coordination team will be re-organised and put side-by-side with the Commissioning Coordination team for this special period, keeping in mind the regular maintenance tasks as well as installation and commissioning of RE4.

MUON DETECTORS: ALIGNMENT

 

A new muon alignment has been produced for the 2012 A+B data reconstruction. It uses the latest Tracker alignment and single-muon data samples to align both the DTs and the CSCs. Physics validation has been performed and shows a modest improvement in the stand-alone muon momentum resolution in the barrel, where the alignment is essentially unchanged from the previous version. The reference-target track-based algorithm using only collision muons has been employed for the first time to align the CSCs, and a substantial improvement in resolution is observed in the endcap and overlap regions for stand-alone muons. This new alignment is undergoing the approval process and is expected to be deployed as part of a new global tag at the beginning of December.

The pT dependence of the φ-bias in curvature observed in Monte Carlo was traced to a relative vertical misalignment between the Tracker and barrel muon systems. Moving the barrel as a whole to match the Tracker cures this pT dependence, leaving only the φ dependence induced by the Tracker. Two temporary misalignment scenarios will be used to reprocess the Z′ samples, which were the most affected. One simply takes the existing scenario, which has large uncertainties, and corrects for the vertical displacement. A second scenario smears chamber positions according to the statistical precision of the 2011 track-based alignment. The former scenario is too conservative, while the latter is too optimistic as it does not take into account systematic uncertainties. The optimal solution is to estimate the systematic uncertainties in the current alignment procedure, which includes a systematic study of internal chamber alignment and angular residuals, and to use this information to provide a new set of APEs (alignment position errors) and a more realistic MC scenario. This work is in progress.

TRIGGER

Level-1 Trigger

Data-taking continues at cruising speed, with high availability of all components of the Level-1 trigger. We have operated the trigger up to a luminosity of 7.6E33 cm–2s–1, where we approached 100 kHz using the 7E33 prescale column. Recently, the pause without triggers in case of an automatic "RESYNC" signal (the "settle" and "recover" time) was reduced in order to minimise the overall dead-time. This may become very important when the LHC comes back with higher energy and luminosity after LS1. We are also preparing for data-taking in the proton-lead run in early 2013. The CASTOR detector will make its comeback into CMS, and triggering capabilities are being prepared for it. Steps to be taken include improved cooperation with the TOTEM trigger system and use of the LHC clock during the injection and ramp phases of the LHC.

Studies are being finalised that will have a bearing on the Trigger Technical Design Report (TDR), which is to be ready early next year. For muons, the question of how and in which system muon isolation may be best achieved is important: this might be the Global Trigger, the Global Muon Trigger, or the Calorimeter Trigger. Another important question is how improved calorimeter trigger information will be made available right after LS1. By early next year the hardware platforms will have to be chosen so that in 2014 systems can be installed first in the CMS Electronics Integration Facility in B904 and then at P5, to be ready for parallel operation in 2015. The project recently passed an important milestone with the completion of Part 1 of a Conceptual Design Review of the L1 upgrade.

Trigger Studies Group

Since the last CMS Week, the Trigger Studies Group (TSG) has successfully developed and deployed trigger menus targeting luminosities of 7E33 and 8E33 (the latter was never used in real running). Dedicated menus were also prepared for data-taking during the high-pile-up and β* = 90 m fills. Other highlights include the successful development and integration of triggers for heavy-ion physics in preparation for the 2013 proton-lead run.

The HLT continues to perform well and numerous improvements have been made to control rates at the highest luminosities and pile-up. The average “core” and “parking” rates are ~350 Hz and ~600 Hz respectively, and the cross-section is almost constant up to the highest pile-up. The CPU performance of the HLT is monitored on a regular basis; the CPU time per event scales linearly with luminosity/pile-up and fits within the total budget of the extended HLT farm.

The TSG organised a two-day workshop in November to focus on plans and actions for the 2014/2015 run.  New members who wish to contribute to this effort are very welcome. The TSG also welcomes Simone Gennai who will take over from Tulika Bose as the new Deputy Trigger Coordinator.

DAQ

The DAQ operated efficiently for the remainder of the 2012 pp run, during which the LHC reached a peak luminosity of 7.5E33 cm–2s–1 (at 50 ns bunch spacing). At the start of a fill, typical conditions are: an L1 trigger rate close to 90 kHz, a raw event size of ~700 kB, and ~1.5 kHz of stream-A recording with a size of ~500 kB after compression. The stream-A High Level Trigger (HLT) output includes the physics triggers and consists of the ‘core’ triggers and the ‘parked’ triggers, at about equal rates. Downtime due to central DAQ was below 1%.

During the year, various improvements and enhancements have been implemented. One example is the introduction of the ‘action matrix’ in run control. This matrix defines a small set of run modes, each linking a consistent set of sub-detector read-out configurations and L1 and HLT settings to an LHC mode. The mechanism facilitates operation, as it automatically proposes to the DAQ operator the run mode appropriate to the actual LHC conditions.
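
A minimal sketch of such a mapping is given below; the mode names and settings are invented for illustration and do not correspond to the actual CMS run-control configuration.

```python
# Illustrative sketch of an 'action matrix' mapping LHC modes to consistent run modes.
# Mode names and settings are assumptions, not the actual CMS run-control configuration.
ACTION_MATRIX = {
    # LHC mode    -> (sub-detector read-out config, L1 menu, HLT menu)
    "INJECTION":     ("standby",     "l1_cosmics",    "hlt_safety"),
    "RAMP":          ("standby",     "l1_cosmics",    "hlt_safety"),
    "ADJUST":        ("reduced_hv",  "l1_collisions", "hlt_collisions"),
    "STABLE BEAMS":  ("physics",     "l1_collisions", "hlt_collisions"),
}

def propose_run_mode(lhc_mode):
    """Propose a consistent run mode to the DAQ operator for the given LHC mode."""
    return ACTION_MATRIX.get(lhc_mode, ("standby", "l1_cosmics", "hlt_safety"))
```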

Online Cloud

The HLT farm has a sizeable amount of computing resources (in total ~13,000 CPU cores providing ~200 kHEPSpec06). Some of them could be used as ‘opportunistic’ resources for offline processing when the HLT farm is not needed for data-taking or online system development. An architecture has been defined in which the HLT computing nodes provide a cloud infrastructure. Dedicated head nodes provide proxies from the dedicated online network to Tier-0 services. OpenStack has been chosen as the cloud platform. The OpenStack infrastructure has been installed on the cluster, virtual-machine images containing the offline production software have been produced by the Computing project, and testing of offline workflows has started.

DAQ upgrade for post-LS1 (DAQ2)

The DAQ2 system for post-LS1 will address: (i) the replacement of computing and networking equipment that has reached end-of-life; (ii) the capability to read out the majority of the legacy back-end sub-detector FEDs, as well as the new micro-TCA-based back-end electronics with the AMC13 module (currently a 5 Gbps link to central DAQ); (iii) extensibility for increasing sub-detector read-out bandwidth and for HLT farm extension; and (iv) improvements based on operational experience.

Progress has been made with the definition of the architecture, definition and implementation of the DAQ link to connect to the AMC13 card, evaluation of 10/40 Gbps Ethernet and Infiniband network technologies, measurements with event builder demonstrator systems and definition of a file-based HLT and storage system.

A new custom card, called FEROL (Front End ReadOut Link), has been developed to provide the DAQ link to the AMC13 card and to interface the custom electronics to commercial 10 Gbps Ethernet networking equipment. This PCI card will be housed in the current FRL modules, replacing the Myrinet NIC. A few pre-production cards have been produced (see Image 2), firmware has been developed, and a test setup has been established to assess functionality and performance. A stripped-down version of TCP/IP has been implemented in the FPGA on the FEROL, providing reliable data transmission with a throughput close to the wire speed (10 Gbps) into a PC with a commercial NIC running the standard TCP/IP stack.

Image 2: Ensemble of the pre-production FEROL card (on top) housed in the existing FRL compact-PCI card. The two lower connectors on the left side of the FRL are for the SLINK-64 LVDS cables to the legacy FEDs. The FEROL card can support four optical links: the two lower SFP cages support up to 5 Gbps for the AMC13 DAQ link, whereas the two upper SFP+ cages support 10 Gbps and can be used for 10 Gb Ethernet to commercial networking equipment and potentially for a future version of the AMC13 with a 10 Gbps DAQ link.

RUN COORDINATION

 

With the analysis of the first 5 fb–1 culminating in the announcement of the observation of a new particle with a mass of around 126 GeV/c2, the CERN directorate decided to extend the LHC run until February 2013. This adds three months to the original schedule. Since then the LHC has continued to perform extremely well, and the total luminosity delivered so far this year is 22 fb–1. CMS also continues to perform excellently, recording data with an efficiency higher than 95% for fills with the magnetic field at its nominal value.

The highest instantaneous luminosity achieved by the LHC to date is 7.6x1033 cm–2s–1, which translates into 35 interactions per crossing. On the CMS side there has been a lot of work to handle these extreme conditions, such as a new DAQ computer farm and trigger menus to cope with the pile-up, automation of recovery procedures to minimise the lost luminosity, and better training for the shift crews. We did suffer from a couple of infrastructure issues that resulted in the magnetic field being lower than nominal for short periods of time.

Some time was spent performing Van der Meer scans to get a precise measurement of the luminosity. A couple of special runs were taken with large β* (90 m and 1000 m) for diffractive physics, during which CMS and TOTEM were able to successfully exchange triggers to read out the same collisions in both experiments. A last Van der Meer scan with more Gaussian beams was performed in November to compare the results with the previous one and try to reduce the systematic error on the absolute luminosity determination.

At the end of this year the LHC will explore operation with a bunch spacing of 25 ns. The machine will first spend some days scrubbing, then study the beam-beam effects in “machine development” mode, and finally deliver collisions in a 25 ns pilot run. The number of bunches and the pile-up for the physics fills will depend on the outcome of the scrubbing and machine-development periods. All this will be very useful for understanding possible issues for operating at 25 ns after LS1.

The proton-proton data-collection period is scheduled to end on 17 December 2012. On 7 January 2013, only three weeks later, the machine will begin preparations for the proton-lead run that will last until mid-February. CMS will start powering on again on 3 January to allow sub-detectors to be ready to take cosmic rays by 9 January, just before the beams are expected to be reinjected into the LHC. Christmas will be short this year for the operations crew.

After the proton-lead run we will hold a Run Coordination workshop in Torino (13–15 February). Goals for the workshop include a review of the operational developments, experience, and system performance during 2012. We will plan the CMS commissioning activities during LS1, and examine the challenges and identify the limitations for operation in the different possible running scenarios (e.g. 50 ns vs. 25 ns) after LS1.

We are also preparing to take cosmic-ray data in different magnet (ON/OFF) and trigger (all, bottom-only) configurations, to get a snapshot of CMS before the Long Shutdown. For the duration of the shutdown we will only be able to take cosmic-ray data.

COMPUTING

 

Introduction

Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently. 

Operations Office

Figure 2: Number of events per month in 2012

Since the June CMS Week, the Computing Operations teams have successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign, with over three billion events available in AOD format. Recorded data were successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in the data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integration, with emphasis on increasing flexibility and reducing production tails. Keeping the sites operational and the central services up and running required constant attention and care. The transition to GlideIn WMS submission is progressing, and the deployment of new technologies such as CVMFS for software distribution and the global CMS xrootd federation will help to increase the productivity of the computing infrastructure in the future. This work will not stop during LS1: after the final processing of the 2012 data, we will concentrate on the upgrade and 13 TeV MC campaigns and on all the planned improvements and upgrades to the computing infrastructure.

Figure 3: Re-reco – number of events per month in 2012

Figure 4: Monte Carlo – number of events per month in 2012

Physics Support

The team's effort followed a two-pronged approach: on one side, education, documentation and training activities; on the other, real-time help and troubleshooting in using the distributed computing system. The focus has been on stabilising CrabServer operations and on introducing a new scheduler (remoteGlideIn) aimed at simplifying and improving Crab2 for both users and support.

The first CMS Data Analysis School (CMSDAS) in Asia was held at National Taiwan University, Taipei, in September. The goal of having 100% Asian students was reached. The next CMSDAS schools are being held at the LHC Physics Center (LPC) at Fermilab (8–12 January 2013) and at the Terascale at DESY (14–18 January 2013). A Physics Analysis Tools (PAT) tutorial was held at CERN during 3–7 December 2012. A survey on "CMS Analysis Practices" will be announced shortly for feedback.

Computing Integration

The present gLite middleware components are reaching their end of support and will be developed further by the European Middleware Initiative (EMI) project. A WLCG effort is therefore ongoing to establish the new middleware flavours at the Grid sites. The CMS integration team is participating by running tests using the CMS-specific submission tools and applications at selected sites that already provide the new middleware. Special emphasis is put on validation against the various storage technologies used at the sites. Problems found during testing have either been fixed or addressed with workaround procedures.

Computing Evolution and Upgrade

The resources of the CMS online farm (about 3000 computers) will be available during LS1 for offline work. An OpenStack cloud layer has been deployed on a part of the cluster as a minimal overlay, so as to leave the primary role of the computers untouched while allowing opportunistic usage of the cluster. Jobs will be submitted to the system by the standard computing tools via a GlideIn WMS. Data present on the Tier-0 and the CAF will be accessible thanks to an xrootd cache server. The system will be fully operational during the first months of 2013.

Computing Resource Management Office

The Mid-2012 Resource Utilization Report and the Extra-2013 Resource Request Report were submitted to the Computing Resource Scrutiny Group. Feedback was provided at the RRB meeting on 30 October: it was generally positive, in particular since none of the approved resource requests has a deficit beyond 10% compared to the Extra CMS request. The official 2013 pledges provided by participating countries/sites show a deficit of 14% in Tier-1 CPU compared to our needs, which will be partially compensated by Computing's use of the CMS HLT farm during LS1. One very positive note is an excess of 11% in pledged Tier-2 disk resources compared to the original CMS request.

The Resource Planning Spreadsheet is being updated in view of the feedback received, in particular in terms of the data-management plans and the 2013 re-processing plans. ESP approvals for 2012 are being finalised and a review of the 2013 crediting policy is ongoing. It has been agreed that the CMS Computing Shifts during LS1 will continue with the Computing Run Coordination (CRC), though without the requirement to be located at CERN, and with the Computing Shift Personnel (CSP) procedures, but with reduced manpower (–1/3). Consequently, a 2013 CSP planning process has been launched in collaboration with all participating institutes.

OFFLINE

 

Introduction

A first round of improvements targeting the post-LS1 period has been the primary activity of the Offline group since the last CMS Week. These improvements have gone into the 6_1 release cycle, which we plan to finalise during CMS Week. They span all areas, from simulation to reconstruction algorithms, in the context of the adoption of a Global Event Description as the basic paradigm for the measurement of all physics objects.

The Offline and Computing Week at the beginning of November was an opportunity to define the work plan for the first part of 2013. In particular, the plan for the deployment of the core components of the new multi-threaded CMSSW framework was presented and discussed with Trigger and PPD, as it will have far-reaching effects in those areas. Our next milestone is a release to be used for the mass production of post-LS1 simulation samples in the Fall of 2013.

Fast and Full Simulation

Recent efforts in the Simulation group have been directed at improving the agreement of Fast and Full Simulations with collider data. The Fast Simulation has been modified to enable the use of the Full Simulation code that emulates the digitisation step. This will allow the use of more realistic noise models, calibration constants, and greater ease of maintenance, all with essentially no loss in CPU performance. On the Full Simulation side, work with Geant4 continues, with the latest work incorporating the mid-2012 9.5 patch01 release into the CMS development environment. Other modifications include the addition of a new bremsstrahlung model that has been shown to improve the simulation of electromagnetic showers. In both Full and Fast Simulation, extensive additions and modifications have been made to incorporate various upgrade geometries and detector configurations. Full Simulation models of the HCAL and pixel upgrade geometries were used for the TDR studies this summer, and simulations of the L1 Trigger upgrades are ongoing. Further work in all of these areas will take place over the next few months.

Reconstruction

Several improvements and new developments have been made in the reconstruction for CMSSW 6.1.0 in the last few months. The content of the AOD has been reviewed, and the removal of several little-used collections has allowed the size of the AOD to be reduced by approximately 30%. The most time-consuming parts of the reconstruction have been reviewed to reduce the CPU time. This effort has also identified further areas of the code that could be improved during the next development phase, after CMSSW 6.1.0. Muon reconstruction has seen an improvement in the selection of the best refit for high-pT (TeV) muons. Track reconstruction has been improved in several areas, such as the identification of duplicate tracks and the reconstruction of muons in the Tracker. The latter uses a dedicated outside-in seeding algorithm with information from the muon sub-detectors. In addition, the retuning of the iterative tracking has reduced the fake rate. In the next year, the track reconstruction will be substantially rewritten to improve CPU and memory performance and to allow parallel execution.

Analysis Tools

The analysis tools packages are stable across the latest release series. Critical updates are backported to the otherwise frozen 2012 analysis releases, CMSSW_5_2_X/CMSSW_5_3_X. A major restructuring of PAT (unscheduled mode) is completed in its core part and will become available in CMSSW_6_1_0. Several high-level PAT tools are being adopted for the unscheduled mode: tau, MET and PF2PAT. This work and the global event description development continue in CMSSW_6_1_X and beyond. The existing statistics tools are stable and their documentation is being expanded. The PAT tutorial is routinely offered to CMS collaborators. A new statistics-tools tutorial has been developed and successfully tested at CMSDAS. New tools are being considered, with cmssh, an integrated CMS development environment, being the most recent addition. It will be demonstrated at CMS Week.

Core Software

The framework group is completing the re-design of the core of CMSSW to allow cmsRun to utilise many CPUs in the same job. Work will begin in earnest at the beginning of the year to implement the design. The plan is to have the re-design ready by fall 2013.

PHYSICS PERFORMANCE AND DATASET (PPD)

 

Introduction

Since July, activities have focused on very diverse subjects: operations for the 2012 data-taking, Monte Carlo production and data re-processing plans for the 2013 conferences (winter and summer), preparation for the Upgrades TDRs and readiness after LS1.

The regular operations activities have included: changing to the 53X release at the Tier-0, regular calibration updates, and data certification to guarantee certified data for analysis with the shortest possible delay from data-taking. The samples simulated at 8 TeV have been re-reconstructed using 53X. A lot of effort has been put into their prioritisation, to ensure that the samples needed for HCP and future conferences are produced on time.

Given the large amount of data collected in 2012 and the available computing resources, careful planning is needed. The PPD and Physics groups have worked on a master schedule for the Monte Carlo production, the validation of new conditions and the data reprocessing.

The CMSSW release 53X will be used for the 2013 conferences. No general reprocessing of the collision data will be done for the winter conferences; only a few specific primary datasets for the Higgs analyses will be reprocessed with improved ECAL calibrations for Runs 2012C and 2012D. For summer 2013, a complete reprocessing of the data will be performed, including all the latest conditions and calibrations.

The preparation of the various TDRs for the Upgrade projects required the production of specific Monte Carlo samples. Given the increasing attention on physics studies for the HL-LHC, the PdmV effort in the past few months has been on integrating the existing physics validation tools and procedures.

The Global Event Description team held a three-day workshop at FNAL in August; its conclusions will be summarised in a detailed plan, prepared in collaboration with the POGs and Reconstruction coordinators and presented at the December CMS Week. This plan will describe the steps for the reconstruction improvements from now until September 2013.

Finally, an overall schedule of the PPD activities for LS1 will be set up during the PPD Workshop on 28–30 November.

Alignment and Calibration and Database (AlCaDB)

Most of the AlCaDB project's time has been devoted to supporting the data-taking and data-analysis activities of CMS. In parallel, work has been ongoing on new tools to ease that work and make it more robust. On the AlCa side, a new web tool to manage the Global Tags was introduced in the summer, and a new and significantly improved version of the DropBox tool for uploading conditions to the DB was developed. Both tools are now in use and will ease the work of the AlCaDB team and of users for the upcoming data-taking as well as for future MC productions and data/MC reprocessings.

Data Quality Monitoring (DQM)/Data Certification

DQM has been working smoothly in both the online and offline environments. Data certification is carried out regularly and the certification efficiency is steadily increasing, thanks to the effort of all parties involved, from Run Coordination to shifters and certification experts. Yet times are changing, and we need to shift our attention to the LS1 development period. We will have to face many interesting challenges, from multicore, multithreaded event processing in CMSSW to an improved, but radically changed, DAQ processing. We need to profit from this exciting period to rethink the main DQM framework and make it an even better and more flexible system. One of our main goals is to establish a strong DQM development team and to build a solid and widespread DQM culture in the collaboration: your contribution and participation in this effort is vital for our success!

Physics Data Monte-Carlo Validation (PdmV)

A validation book-keeping system for CMS (valDB) has been commissioned and fully deployed to centralise the validation reports from more than 80 collaborators scrutinising the effect of changes to software, conditions and trigger selection. The effort to bring the upgrade activity into the mainstream of validation is well under way. New prompt workflows have been deployed to validate the changes of calibrations needed in the HLT and in the prompt reconstruction to compensate for radiation-induced aging effects. We are planning for LS1, at which time collaborators will shift their activity towards development. Several monitoring services and scheduling tools for Monte Carlo production have been put in place to support the complex needs of the analysis community, merging experience from Computing, PPD and Offline. The re-writing of the production management system (formerly known as PREP) is well under way and will be ready for deployment during the winter. We are looking forward to working on other aspects of data and Monte Carlo validation during the long shutdown, in preparation for the 2013 summer conferences and for the ever-improving quality of CMS data.

PHYSICS

 

The period since the last CMS Bulletin has been historic for CMS Physics. The pinnacle of our physics programme was the observation of a new particle – a strong candidate for a Higgs boson – which has captured worldwide interest and made a profound impact on the field of particle physics.

At the time of the discovery announcement on 4 July 2012, prominent signals were observed in the high-resolution H→γγ and H→ZZ(4l) modes. A corroborating excess was observed in the H→W+W– mode as well. The fermionic channel analyses (H→bb, H→ττ), however, yielded less than the Standard Model (SM) expectation. Collectively, the five channels established the signal with a significance of five standard deviations. With the exception of the diphoton channel, these analyses have all been updated in recent months and several new channels have been added. With improved analyses and more than twice the integrated luminosity at 8 TeV, the fermionic channels are now more consistent with the SM expectation, as can be seen in the figure. The larger dataset has also allowed for more precise studies of the properties of the new boson. The mass has been measured to be 125.8 ± 0.5 GeV, and the data do not favour the 0– pseudo-scalar hypothesis, with a CLs value of 2.4%.

Figure 5: All five channels are consistent with the Standard Model expectation.
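
For context, a significance quoted in standard deviations, such as the five standard deviations above, corresponds to a one-sided Gaussian tail probability. A minimal sketch of that conversion (using SciPy; illustrative only, not the CMS statistical machinery) is:

from scipy.stats import norm

def p_to_sigma(p_value):
    # Significance Z such that the upper tail of a unit Gaussian beyond Z has probability p_value.
    return norm.isf(p_value)

def sigma_to_p(z):
    # One-sided tail probability corresponding to a significance of z standard deviations.
    return norm.sf(z)

print(sigma_to_p(5.0))     # ~2.9e-07: the p-value behind a "five standard deviations" observation
print(p_to_sigma(2.9e-7))  # ~5.0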

With the discovery of a strong candidate for a fundamental scalar, the question of what is stabilising its mass at ~125 GeV has become a principal goal of our field. Though constrained by previous null results from the LHC, Supersymmetry remains a strong possibility for such a “natural” solution to this problem. Third-generation squarks play a special role in natural SUSY, so searches for stops and sbottoms have received increased attention in the SUS and newly formed B2G PAGs. For stops/sbottoms produced via gluino cascades, searches have excluded gluino masses up to ~1.1 TeV for almost any stop/sbottom mass. For direct production of stops/sbottoms, 95% confidence level (CL) limits reach ~600 GeV. The SUS PAG also has several analyses targeting electroweak production of charginos/neutralinos, where masses up to ~650 GeV are now excluded. With these increasingly stringent limits on R-parity-conserving SUSY models, R-parity-violating (RPV) scenarios must also be considered. At HCP, CMS presented 95% CL limits on the mass of RPV stops of 850–1000 GeV. Meanwhile, the EXO PAG continues its broad search programme for physics beyond the SM.

In addition to these searches, CMS has performed a number of important SM analyses. The TOP PAG has entered a new phase of precision measurements: the top-pair and single-top (t-channel) production cross-sections have been measured at both 7 and 8 TeV, and the top-quark mass has been measured with ~1 GeV precision to be 173.4 ± 0.4 ± 0.9 GeV, the highest precision ever achieved by a single experiment. The SMP PAG continues its programme of testing perturbative QCD and making precision measurements of electroweak interactions. The BPH PAG has demonstrated that CMS is competitive with dedicated experiments in the study of B physics and quarkonia. FSQ has carried out the first search for exclusive γγ production at the LHC. In heavy-ion physics, CMS made a splash at the QM2012 conference, where eight new analyses of 2.76-TeV PbPb collision data were presented. More recently, the observation of a “ridge” in proton-lead collisions, based on a single pilot pPb run earlier this year and published in October, has captured the attention of the heavy-ion community.

Finally, the POGs, including the new luminosity (LUM) POG, continue to study and recommend the best reconstruction and selection criteria for each of the physics objects used in analysis. There are also ongoing preparations for the 2015 run, including the development of the “global event description” (GED).

There has also been a lot of recent activity on the “future physics” of CMS, overseen by Physics Coordination. The HIG and SUS PAGs have carried out physics studies for the upgraded CMS detector in support of the pixel and HCAL TDRs submitted to the LHCC in September. They have now been joined by EXO in producing similar performance studies to be included in the L1 Trigger Upgrade TDR. These PAGs all now have sub-groups dedicated to future physics. They have produced an Energy Frontier contribution to the European Strategy Preparatory Group for Particle Physics that projects the potential performance of the CMS experiment in key measurements (e.g. Higgs couplings) under HL-LHC and HE-LHC scenarios. These studies are part of a long-term programme that will include contributions to other European bodies, the Snowmass workshop in the US, the technical proposals envisaged for the Phase 2 upgrades and, ultimately, a new Physics TDR.

To conclude, the CMS Physics output has never been as rich as in the past six months. Nearly 85 new analyses of 7/8-TeV pp collision data were presented at the ICHEP 2012 conference in July. Last month, CMS presented 25 new results on 8-TeV data at the HCP symposium in Kyoto. These preliminary results are being finalised for publication or have already been published, including the paper documenting the new boson discovery. CMS has also attained the milestone of 200 papers published with collision data from the LHC.

UPGRADES

 

Good progress is being made on the projects that will be installed during LS1. CSC chamber production for ME4/2 is progressing at a rate of four chambers per month, with 25 built so far, and the new electronics for ME1/1 is undergoing pre-production integration testing. For the RPC chambers, gap production is underway, with first deliveries to the chamber assembly sites at CERN and Ghent; the third site, at Mumbai, will begin production next month. For the PMT replacement in the forward hadron calorimeters (HF), all 1728 PMTs have been characterised and are ready to be installed, and testing of the electronics boards is going well. Preparations to replace the HPDs in the outer hadron calorimeter (HO) with SiPMs are also on track: all components are at CERN and burn-in of the new front-end electronics is proceeding.

There are three major upgrade projects targeting the period from LS1 through LS2: a new pixel detector, upgraded photo-detectors and electronics for HCAL, and development of a new L1 Trigger. The new pixel detector will provide more robust tracking with four barrel layers and three disk stations at each end, and higher data bandwidth with a new readout chip and DAQ. The HCAL upgrade replaces the present HPDs with SiPM photo-sensors and a new DAQ. The improved signal-to-noise ratio of SiPMs with an optimised DAQ will allow depth segmentation and improved timing to mitigate event backgrounds and pile-up. The Trigger upgrade will provide improved granularity and isolation in the calorimeter triggers and more robust tracking for muons. With these three upgrades CMS will continue to perform well at the significantly higher pile-up anticipated beyond LS1.

Technical Design Reports were presented to the LHCC in September for the pixel and HCAL upgrades. The reports describe the motivation for the upgrades, the performance of the new detectors, detailed technical descriptions, and a summary of the project organisation, cost and schedule. A strong emphasis was placed on the performance studies, carried out under Physics Coordination, where the PAGs applied realistic analyses to simulated data. This involved a concerted effort, including Offline, PPD and Computing Coordination, to produce and validate the samples. The TDRs were well received and the committee concluded, “The LHCC endorses the HCAL and pixel Upgrades without reservations.” The cost information was then presented to the RRB in October, and the projects are now proceeding to formalise the MoUs and prepare for Engineering Design Reviews. Both projects will ramp up rapidly in 2013.

A TDR for the L1 Trigger Upgrade is in preparation, again with extensive simulation studies involving the Physics, Offline, PPD and Computing coordination areas. We expect to submit this TDR to the LHCC in early 2013, with the project likewise ramping up rapidly during the year.

Studies are on-going to understand the performance longevity of the detector systems in terms of radiation damage and ageing, as input for developing the scope of the Phase 2 upgrade planned for LS3. At the same time the Tracker project and the working groups for the Trigger Performance and Forward Detector are developing requirements and options for Phase 2. The goal is to document the performance longevity of the Phase 1 detector, and the Phase 2 requirements and options by summer 2013 to provide a basis for a Technical Proposal in 2014.

EDUCATION AND OUTREACH

 

An estimated audience of a billion people! An incredible statement that summarises the extent to which the discovery of the Higgs-like boson announced on 4 July reached the world. From regional newspapers to worldwide journals and television/radio programmes, news spread fast and wide: this was probably the biggest scientific news item in history. The CMS Communication Group played a 5-sigma-significant role in producing and disseminating information, images, videos etc. to accompany the announcement. The CMS Statement on our search for the Standard Model Higgs boson was translated into 24 languages by our very own CMS physicists, and downloaded more than 100,000 times, with parts of the text appearing verbatim in nearly 10,000 news articles. Event displays –– static and animated –– showing candidate SM Higgs decays featured on the front covers of newspapers and magazines and appeared on hundreds of television shows. CMS physicists around the world, at CERN and Melbourne in particular, made use of these materials when being interviewed by the world’s press. Social media was also used to good effect, with Twitter, Facebook, Google+ and YouTube being updated regularly before/during/after the CERN seminar and throughout the week of ICHEP. An estimated 5 million people saw the Twitter message “we have observed a new boson….” on CERN’s account, a figure helped enormously through re-tweets by the public and celebrities (including MC Hammer and Will.I.Am).

Maintaining the attention of the public following 4 July is obviously not an easy task. But regular CMS News items on our website, accompanied by announcements on our social media, have helped produce a steady increase in the number of people following CMS activities, sharing the information and, crucially, talking about it through these media. Following a few trials at CERN and one at ICHEP, Google+ “Hangouts” are now a weekly activity, with participation from CMS physicists as well as from scientists and theorists of other LHC (and non-LHC) experiments. The audiences for these live Q&A sessions are growing rapidly – helped by advertising from some popular science-focused Facebook groups – and the resulting YouTube videos are being watched by thousands.

The number of visitors to CMS at P5, Cessy, has increased by more than a factor of three in 2012 compared with 2011, reaching more than 10,000; this number is expected to double in 2013, not counting the planned Open Days in September. To enhance the visitor experience, many new exhibits have been introduced, including a 1:1 photograph of CMS (similar to that in B40) and a 4 m × 4 m animated aerial view of the LHC ring showing proton acceleration and collisions. A detailed interactive 1:20 model of CMS has now arrived at CERN and will be displayed at P5 from early next year.

A number of highly-detailed images of the CMS detector have been produced by Tai Sakuma using the Google “Sketchup” package. One of these images has been used in a CMS article in “Science” magazine that summarises our Higgs search (due to be published on 21 December) whilst a series of others showing a zoom into the detector has been printed and displayed at P5. Tom McCauley has also used Sketchup to render CMS event displays, with good results. CMS data are also being read into the Unity game engine, as used by the Camelia package that aims to display events from all LHC experiments in the near future.

Another Google product has been exploited to showcase the worldwide collaboration involved in the construction of parts of the CMS detector. These Google Earth movies are available on the CMS website, along with many other multimedia materials, including some presentations for the public. The popular animated “slice” through CMS, showing how different particles interact, has also been updated, with a dedicated PowerPoint version that works on both OS X and Windows.

On the Education side, activities based on analysing real CMS data are going from strength to strength. A new web page has been produced that details one of the main uses –– the so-called “Masterclasses” –– and a tutorial on this subject was given to the CMS Collaboration during a recent WGM. In addition to the large samples of dilepton and lepton+MET events, spanning a wide mass range, that can already be viewed and analysed, a set of “golden” Higgs candidates will soon be available.
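
The core of such exercises is a simple invariant-mass calculation. As a minimal illustration only (this is not the Masterclass software itself, and the kinematic values below are invented for the example), the mass of a lepton pair can be computed from each lepton's transverse momentum, pseudorapidity and azimuthal angle:

import math

def four_vector(pt, eta, phi, mass):
    # Build (E, px, py, pz), in GeV, from pt, eta, phi (radians) and mass.
    px = pt * math.cos(phi)
    py = pt * math.sin(phi)
    pz = pt * math.sinh(eta)
    e = math.sqrt(px * px + py * py + pz * pz + mass * mass)
    return e, px, py, pz

def invariant_mass(lep1, lep2):
    # Invariant mass of a two-lepton system; each lepton is given as (pt, eta, phi, mass).
    e1, px1, py1, pz1 = four_vector(*lep1)
    e2, px2, py2, pz2 = four_vector(*lep2)
    m2 = (e1 + e2) ** 2 - (px1 + px2) ** 2 - (py1 + py2) ** 2 - (pz1 + pz2) ** 2
    return math.sqrt(max(m2, 0.0))

# Invented example: a back-to-back muon pair with pt = 40 GeV lands near the Z-boson peak.
mu_plus = (40.0, 0.5, 0.0, 0.105)
mu_minus = (40.0, -0.5, math.pi, 0.105)
print(round(invariant_mass(mu_plus, mu_minus), 1))  # ~90.2 GeV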

As Christmas is fast approaching, CMS has the perfect gifts for friends and family: a 2013 calendar and some specially designed Swiss chocolates. Both are available from the CMS Secretariat.

Finally, the CMS Communications Group is undergoing some personnel changes. Achille Petrilli has taken over from Lucas Taylor as head of the group. Ellie Rusack (DocDB expert and producer of the Google Earth videos) has moved to pastures new, and Vidmantas Zemleris (web-site technical guru) has joined the CMS computing group. And after more than a decade being responsible for CMS Education and Outreach, Dave Barney is stepping down. We wish them all the best in their future endeavours.
