CMS Bulletin

CMS MANAGEMENT MEETINGS

The Agendas and Minutes of the Management Board meetings are accessible to CMS members at:
http://indico.cern.ch/categoryDisplay.py?categId=223

The Agendas and Minutes of the Collaboration Board meetings are accessible to CMS members at:
http://indico.cern.ch/categoryDisplay.py?categId=174

TECHNICAL COORDINATION

In this report we will review the main achievements of the Technical Stop and the progress of several centrally-managed projects to support CMS operation and maintenance and prepare the way for upgrades.

Overview of the extended Technical Stop 

The principal objectives of the extended Technical Stop affecting the detector itself were the installation of the TOTEM T1 telescopes on both ends, the readjustment of the alignment link disk in YE-2, the replacement of the light-guide sleeves for all PMTs of both HFs, and some repairs on TOTEM T2 and CASTOR. The most significant tasks were, however, concentrated on the supporting infrastructure. A detailed line-by-line leak search was performed in the C6F14 cooling system of the Tracker, followed by the installation of variable-frequency drives on the pump motors of the SS1 and SS2 tracker cooling plants to reduce pressure transients during start-up. In the electrical system, larger harmonic filters were installed in S4 to reduce CMS sensitivity to a variety of power transients. A public address system was installed and commissioned in UXC and USC, significantly improving and simplifying communication between the surface and the underground areas, especially in the presence of the magnetic field. On the surface, the cooling power for the filter farm was increased from 600 kW to 1 MW to allow for a large increase in processor capacity over the coming years, so that the HLT bandwidth can be maintained or even enhanced against a background of increasing luminosity and pile-up. 

Just before the end of beam operation in 2010, the filters in the cold-box of the magnet cryo-system were regenerated for the first time at 3.8 T. In 2010, the majority of magnet on-off cycles were made necessary by the need to conduct this regeneration. Following the successful test at 3.8 T, the culmination of a progressive programme of work by the magnet team in the last quarter of 2010, it is expected that such cycles will now be avoided.

In the shadow of the activities highlighted above, the full annual maintenance of all electrical supply and water-cooling installations was completed.

Detail of critical path activity

The Technical Stop started with the dump of the last heavy-ion beam in the evening of 6th December 2010. By 10th December, the beam pipe was at atmospheric pressure and filled with neon, allowing the lowering of HF to start. The HF at the –z end was lowered onto two risers on 15th December. Two days later the HF+ was lowered to the floor, allowing the installation of cables and pipes for T1 to start. This infrastructure was already in place on the –z end and the remaining time before the end-of-year closure of the laboratory was used to install T1(–z) on top of HF– for commissioning. This represented one of the key milestones of the Technical Stop. The holiday period was used to complete the installation of all infrastructure and services for T1(+z) and to test and commission T1(–z). On 6th January, shortly after the end of the Christmas holidays, the alignment link disk was re-adjusted. This is a delicate operation, as it requires access to YE2 through YE3 with the beam pipe in place. The link disk has a sophisticated kinematic suspension. Unfortunately, one of its supports, which is inaccessible with the endcap closed, showed unexpectedly large friction, making the adjustment difficult. At the second attempt, on 7th January, the adjustment was finally successful. Four out of six –z laser lines are now fully working, which, given the built-in redundancy, is sufficient to profit fully from the alignment system.

After several insertion tests with each half of the telescope, T1 was finally inserted into the –z endcap on 10th January. In parallel, the HF+ was brought back onto two risers, allowing T1(+z) to be lifted into place on top of it. Following testing and commissioning, T1(+z) was successfully installed in the +z endcap on 14th January. The HFs were the busiest place in UXC since, in parallel to the T1 installation, all light-guide sleeves of both HFs were replaced. Reaching all phototubes with the collar shielding still on the platform required complicated scaffolding set up by our UXC crew. The HF work on the –z and +z ends was finished on 18th and 26th January respectively.

On 27th January, the detector was back in beam configuration and pump-down of the beam pipe started, reaching an acceptable operating vacuum just 13 days later, a clear demonstration that no damage had been caused. On 11th February, the magnet was brought back up to 3.8 T. Though the ramp-up was in principle problem-free, with all components moving within safe limits, the final configuration of the large ensemble of heavy objects in the forward region is different from any configuration seen before, with a larger-than-usual gap between the rotating shield and the collar shield. To ensure a fully closed shielding around CASTOR, the collar will be shimmed at the next Technical Stop, when the magnet is off. This observation of yet more variety in the detailed mechanical behaviour further supports the necessity of redesigning the entire region between HF and the Rotating Shielding.

After a week of tests and cosmic data-taking, CMS was declared ready for beam on 18th February, exactly as foreseen by the planning and a tribute to the teams from the CMS collaboration, four CERN departments and various specialist contractors who took part in safely completing an intensive work programme, expertly scheduled by our experimental area management team.

Facilities for supporting operation, maintenance and upgrade

In parallel to the work on CMS and its infrastructure, the transformation of the SX5 surface assembly building into a service centre for CMS detector maintenance and repair continued. Progress has been made in the conversion of the SHL alcove into a clean room and laboratory for Pixel, BRM and fibre-optic monitoring systems. Floor and walls are being shielded, insulated and painted according to the requirements for a Class 100,000 clean-room inside an RP Zone. Although the upper floor laboratory has progressed satisfactorily, completion of the clean-/cold-room is delayed until March 2012 due to redirected resources in CERN and in CMS. Studies for a contingency plan in case of possible pixel tracker removal during 2011/12 are underway, relying on the relatively low activation levels to be expected at that stage.

Building 904 made further progress in its transformation into the future CMS Muon System assembly laboratory. The necessary renovation work on the roof, floor, fire detection, access systems and network has been finished. The gas system is under construction. The CSC production line sent from FNAL has arrived and is being re-commissioned. Temperature- and humidity-controlled laboratory units will start to be delivered during this CMS Week. Outside the laboratory, a 500 m² metallo-textile long-term storage building has been set up. Along with the temporary storage tent at Point 5, this will be used to store tooling and assembly equipment so that the ISR can be emptied and the interior of SX5 freed up.

Preparations for upgrades

Technical Coordination is involved in many aspects of the preparation of upgrade activities. The postponement of the first long LHC shutdown (LS1) until 2013 has many actual and potential implications. Indeed the revised overall plan for the next 10 years will not be clarified until the middle of this year and even then, many evolutions are possible, depending on the performance of the accelerator and experiments and the physics yield. The general planning philosophy being followed by Technical Coordination is similar to that during the construction phase, namely, be ready for all likely schedule evolutions including the official one, maximising the chances of exploiting the opportunities, or meeting the challenges, which may arise.

The likely delay of one or two years to the second long shutdown has implications for the Tracking upgrade project and has initiated a study of whether the four-layer pixel tracker could be installed in an extended year-end stop. This is only possible if the matching smaller-diameter beam pipe is pre-installed in LS1. High priority is therefore being given to completing the favoured design, which has an outer diameter of 45 mm to allow for a pixel detector with a 12-facet inner layer. The aim is to be able to order the pipe in September 2011. This gives enough time for intensive tests after delivery to CERN and for detailed study of the installation procedures of the pipe and the pixel tracker. Meanwhile the Pixel Luminosity Telescope is making good progress towards being ready-to-install in the year-end stop of 2011-’12 should the opportunity arise. Facilities for beam tests and final assembly at CERN are being prepared and a Manufacturing Progress Review will be held soon.

Meanwhile the delay to LS1 has mostly positive effects upon the Endcap Muon upgrade project, both in allowing more time for the assembly areas to be prepared and providing some contingency for the delivery of the YE4 shielding wall. Material for the disk sector cases is now being delivered to the manufacturer and the design of the complex assembly tooling is nearing completion. The engineering and electronics design review process for YE4, RPC4, CSC4 and the ME1/1 upgrade will be launched imminently.

MAGNET

The magnet operation was very satisfactory until the technical stop at the end of 2010. The field was ramped down on 5th December 2010, following the successful regeneration test of the turbine filters at full field on 3rd December 2010. This will limit the number of magnet cycles in the future, as it is no longer necessary to ramp down the magnet for this type of intervention. This is made possible by the use of the spare liquid helium volume to cool the magnet while turbines 1 and 2 are stopped, leaving only the third turbine in operation. This obviously requires the full availability of the operators to supervise the operation, as it is not automated.
The cryogenics was stopped on 6th December 2010 and the magnet was left without cooling until 18th January 2011, when the cryoplant operation resumed. The magnet temperature reached 93 K.

The maintenance of the vacuum pumping was done immediately after the magnet stop, when the magnet was still at very low temperature. Only the vacuum pumping of the magnet cryostat was left in operation during the entire technical stop.

Full maintenance was performed on the cryogenics, in particular the change of the filters, the circulation of warm (70 °C) and dry nitrogen in all the circuitries of the cold box to remove traces of trapped humidity, and the change of a noisy helium flow regulating valve on the cold box.

A minor upgrade of the magnet control system was made, concerning the end of the field ramp-down at very low current, to set the configuration of the dump resistor contactor below 100 A. The magnet safety system was checked. All the actions of the magnet safety system and control system were tested with a current in the magnet below 5 kA.

At the restart of the magnet, the regulation of the power converter was re-tested, following the maintenance on the converter and the FGC upgrade. The polarity of all the capacitors was reversed during the technical stop.

On 9th February 2011, the magnet was ramped up to 2 T, and the following day the final ramp to 3.8 T was done.

At present, the magnet and its subsystems are perfectly stable. Since the restart of the cryoplant there has been no noticeable increase in the pressure drop across the filters of the cold box and no temperature increase on the first heat exchanger. Only the valve that was changed during the technical stop has proved noisier than the previous one; it should be changed during a technical stop as soon as the new part (of a different kind) is available. In the meantime, it is hoped that fine-tuning of the cryoplant parameters (pressure and flow) has limited the noise in the service cavern. An intervention is also foreseen during the March 2011 technical stop on a leaking oil pump gasket of the compressor unit.

INFRASTRUCTURE

During the last winter technical stop, a number of corrective maintenance activities and infrastructure consolidation work-packages were completed. On the surface, the site cooling facility underwent its annual maintenance, which includes the cleaning of the two evaporative cooling towers, the maintenance of the chiller units and the safety checks on the software controls. In parallel, CMS teams, reinforced by PH-DT group personnel, have worked to shield the cooling gauges for TOTEM and CASTOR against the magnetic stray field in the CMS forward region, to add labels to almost all the valves underground and to clean all the filters in UXC55, USC55 and SCX5. Following the insertion of the TOTEM T1 detector, its cooling circuit has been branched off and commissioned. The demineraliser cartridges have also been replaced, as they were shown to be almost saturated. New instrumentation has been installed in the SCX5 PC farm cooling and ventilation network, in order to monitor the performance of the HVAC system and measure the actual power extracted by the cooling water. The two pumps of this circuit have been replaced with more powerful ones, allowing for the extension of the PC farm during the 2011 run.

The hardware of the low-voltage harmonic filters of the powering infrastructure has been reinforced with the addition of new modules for a total of an additional 180 A. This should reduce the overheating of the low-voltage cabinets in USC55. A general site safety test, supervised by EN-EL personnel, has been completed as usual after the winter shutdown.

The winter shutdown was very busy for the CMS Infrastructure team, as all the major interventions had to be squeezed into a very short time while the detector was off. Completing all the maintenance operations on the CMS infrastructure has been a great achievement and the detector is now ready again for a long data-taking run.
 

TRACKER

The strip tracker took data very efficiently during 2010 with system availabilities of above 97% in the pp running and close to 100% during the heavy-ion running. The number of active channels in the readout is largely stable around 98%.

The maintenance and development work during the extended technical stop focussed on improving the operating conditions of the main silicon strip cooling plants SS1 and SS2, which have been items of concern (see the last Bulletin). In order to stabilise and smooth the operation of SS1 and SS2, larger bypass valves and variable-frequency drives (VFDs) have been introduced. Possible noise induced by operation of the VFDs on other parts of CMS has been evaluated and no increased noise has been reported so far. The leak rate of every single line on SS2 was measured with the precision test rig. Besides the known leaky lines, ten other SS2 lines were measured to leak between 120 g/day and 1200 g/day under the given test conditions, establishing a good reference for future measurements. Remote-filling lines are being installed to allow refilling of the tanks without access to the experimental cavern. After the Tracker was restarted, a new leaky circuit was found and closed (SS2 line 30 on TID minus). This is under investigation and brings the total to 5 closed lines out of 180. Of the 48 modules on line 30, 42 can be operated for the time being without direct cooling. In addition, the operating pressure for the SS2 circuits was reduced by 0.5 bar to increase the safety margin for operation, whilst maintaining the same operating temperature at the front-end.

The strip tracker DAQ started very smoothly after the technical stop, taking only one shift (8 hours) from the first power switch-on to the first successful global running. In addition, the online software was upgraded to the latest release of the XDAQ software framework (version 10), and a full re-installation of all PCs involved in data-taking, to confirm the possibility of fast recovery, has been successfully completed. Also, a significant reduction of the time needed to stop and start runs (from 30 s to 10-15 s) was achieved. During a week of continuous global running, a total of 1.2 million tracks from cosmic muons was recorded, following a request from the Tracker Alignment group. Work is ongoing to enable a fast switch from collision to cosmic data-taking in inter-fill periods. Tracker interlocks have been fully checked out.

Monitoring of the various operational parameters of the system has received ever-increasing attention. Efforts have started to improve the calibration of the detector control units (DCUs) providing in-situ measurements of temperatures, currents and voltages on the detector modules. An important step to consolidate standard tools was the commissioning of the Tracker Web Based monitoring (TkWBM) which is now available from the central CMS monitoring page.

The dry gas system was largely overhauled: a new switch-over panel from nitrogen to dry air has been installed to allow for easy maintenance by the DT group and to fix some outstanding monitoring issues. New types of humidity sensors (fibre and ceramic) are under investigation, in preparation for cold operation of the Tracker after the first long shutdown.

The first data-taking with pp collisions has been very successful. The first, low luminosity runs have been efficiently used to confirm the ideal sampling point and to measure in-situ the depletion voltage (with bias voltage scans) of each individual sensor. Radiation-induced current is still below our measurement precision. Radiation damage projections indicate that the present running conditions of the silicon strips, with coolant at 4 °C, will not result in significant impact on the strip Tracker before the first long shutdown.

ELECTROMAGNETIC CALORIMETER (ECAL)

All components of ECAL – EB, EE and ES – operated well throughout 2010 with few problems, and negligible evolution of dead channels. About 2% of the ES silicon sensors were unplugged in the second part of the year due to unacceptable increases in leakage currents attributed to radiation damage of the surfaces.

The LHC winter technical stop allowed many improvements to the ECAL infrastructure at Point 5. For example, the High Voltage distribution systems for the EE and ES were both improved, with further modifications planned for the ES later in the year. Monitoring and alarming of power supplies was also improved, increasing the level of safety. Some cables in the USC and UXC were re-worked, recovering the operation of some environmental monitoring sensors and improving robustness overall.

A thorough Readiness Review Workshop was organised at the end of January 2011 to review 2010 data quality and online and offline operations, and to prepare for the higher luminosities in 2011. All presentations, as well as detailed minutes and outcomes, can be found at: https://indico.cern.ch/conferenceDisplay.py?confId=120490.

The stability of the complete system allowed ECAL shifts to become obsolete, with central shifters providing the necessary day-to-day monitoring, supported by a team of ECAL experts on call. This will be the modus operandi in 2011.

Small transparency changes in the crystals observed in 2010 are consistent with expectations due to radiation damage. The laser and LED monitoring systems follow this evolution excellently and a fast feedback from these systems will be incorporated into the single channel response calibration during 2011 data-taking. Data taken with a special HLT π0 trigger stream, used to calculate inter-calibration constants, provide a cross check of the transparency corrections.

There have been a few changes to firmware of the Data and Trigger Concentrator Cards (DCC and TCC) to protect against minor instabilities in data collection, to improve reliability and performance and to adapt to the expected increase in luminosity and occupancy in the 2011 run. For example, masking of individual problematic crystals, rather than whole trigger towers, has increased the number of working trigger channels to around 98.7% in the endcaps (up from 98% in late 2010). In order to mitigate the impact at the L1 trigger of Anomalous Calorimeter Signals (ACS) due to highly ionising particles in the Avalanche Photo-Diodes (APDs), the trigger front-end readout was modified to tag energy deposits with ACS-like topologies at L1 and was tested in late 2010. This is being further optimised with first 2011 collision data, before being incorporated in the standard running.

Prompt feedback from early 2011 collisions has confirmed the ECAL health status and performance.

 

HADRON CALORIMETER (HCAL)

All the HCAL calorimeters are ready for data-taking in 2011 and participated fully in the cosmic running and initial beam operations in the last few weeks. Several improvements were made during the winter technical stop, including replacement of the light-guide sleeves in HF, improvements to the low voltage power connections, and separation of HF from HB and HE in the DAQ partitions.

During the 2010 running a form of anomalous noise in the HF was identified as being caused by scintillation when charged particles pass through a portion of the air light-guide sleeve. This portion was constructed from a non-conductive mirror-like material called “HEM”. To suppress these anomalous signals, during the recent winter technical stop all sleeves in the detector were replaced with sleeves made of Tyvek. The detector has been recommissioned with all channels fully operational. Recalibration of the detector will be required due to the differing reflectivity of the new sleeves compared with the HEM sleeves. The necessary techniques for recalibration are in place, using a combination of φ-symmetry and di-electron events from Z boson decay. A timing scan was performed for HF and CASTOR with the first LHC collisions, and only minor adjustments were indicated.

The HO Ring 1 HPDs were operated at 6.5 kV during 2010 since several HPDs suffered discharge problems at nominal voltage. They operated stably throughout the year at this reduced voltage, so it was decided to try operation at 7.5 kV to improve performance for the 2011 physics run, and to carefully monitor the discharge rates. So far, none of the HPDs have started to discharge. If discharge does start, the affected HPD will be reduced to 6.5 kV.

Lumi, the luminosity measurement for CMS, uses data supplied by HF. HF and HB+HE were separated into different DAQ partitions to reduce the number of occasions on which electronics problems require an interruption to the delivery of Lumi data.

A conductive grease was applied to connections at the low voltage power supplies to reduce wear and corrosion. This is a partial mitigation for a long-term problem with increased resistance in connections on the CAEN power supplies used by several CMS detectors.

The cause of an operational problem, which occurred roughly every week or two in 2010 and in which synchronisation was lost for an RBX and its data were garbled, has not been identified. This is a particularly difficult problem to troubleshoot, but it is believed to be most likely due to a susceptibility to clock glitches. Specific monitoring has been added to identify the behaviour, and improvements to the recovery sequence are being developed.
Preparations continue for the replacement of the HO photodetectors with silicon photomultipliers and for the replacement of the HF PMTs during the next long shutdown. The first batch of SiPMs for HO has been delivered and the full set of electronics boards has been fabricated. The Quality Assurance program has begun, as well as detailed planning for the installation during the shutdown. For the HF PMT replacement, the order for the tubes has been placed and engineering work is underway on the cable plant and readout box changes required to support multianode PMTs. Eight of the new phototubes were installed in the detector in November last year. One of these tubes was configured during the extended technical stop with split-anode readout for evaluation purposes. The results of these in-situ tests will help refine the final design for the multianode modification of HF.

The calibration of HB and HE was improved using φ-symmetry with non-zero-suppressed data and η-dependent corrections using isolated-track data from the full 2010 data set. More detail is provided in the DPG section of this Bulletin. With these corrections applied, anomalies were seen in specific locations in HB in splash events taken when the LHC re-established circulating beam. These anomalies appear to be due to misplaced filters on the HB layer-0 fibres for HB-minus iphi 6 and 32. The scintillator in layer 0 (L0) is thicker than in the other layers, so a filter was installed to reduce the L0 light before combining all layers together in the HPD readout. The post-calibration splash data suggest that the filter is missing in iphi 32, and placed incorrectly in iphi 6, resulting in a non-linear energy response below about 15 GeV in these two sectors. A correction for this problem is now included in the reconstruction software.

MUON DETECTORS: DT

The DT system behaved highly satisfactorily throughout the LHC 2010 data-taking period, with more than 99% of the system operational and very few periods of downtime. This includes operation with heavy-ion collisions, in which the rate of muons was low and no impact was observed on the buffer occupancies. An unexpected out-of-time high occupancy was observed in the outermost chambers (MB4) and its origin is under investigation.

During the winter technical shutdown many interventions took place, with the main goal of optimising the system. One of the main improvements concerns the slow control through the DTTF boards: the problem that was preventing us from monitoring the OptoRX modules properly has been fixed satisfactorily. Other main changes include the installation of a new VME PCI controller, to minimise the downtime in case of a crate power cycle, and the reduction from 10 to the design value of 5 FEDs, which became possible thanks to the good agreement of the event size with our expectations during LHC operation. This increases the spares count. Firmware was upgraded on all systems. Worth highlighting are the upgrades to the Barrel and Wedge Sorter modules, which will improve the ghost-busting mechanisms, both by cancelling duplicate tracks at wheel and sector crossings and through a timing-based ghost-cancellation scheme that will reduce DT pre-firing below the present 1%.

The complete system of six drift-velocity chambers (VDCs), including HV, electronics, trigger and DAQ, was successfully installed. It replaces the single prototype VDC that monitored the DT exhaust gas during the last two years. The VDCs, small drift chambers the size of a shoebox, measure the drift velocity every 10 minutes with a small statistical uncertainty of 0.1%. A possible deviation from the nominal value could be caused by a contamination of the gas mixture or by changes in pressure or temperature, and would be signalled by the VDC system. Such an unwanted deviation would imply a wrong measurement of positions and momenta in the DT system. See Figure 1.


Figure 1: VDC drift velocities

Drift velocities measured precisely over a period of one day with 6 VDCs analysing gas from the DT system. The values are stable; only during a maintenance operation, which caused a contamination of the chamber gas with air, is a large deviation from the standard values seen.

In summary, the DT system comes out of the technical stop in very good shape and eager to start taking collision data. A small concern remains for the Anderson Power LV connectors: it appears that the lubrication campaign has not reduced the overheating rate, and further actions are under consideration. Offline, the performance was studied in detail using the complete 2010 data sample. The local trigger efficiency was evaluated using minimum-bias events and decays from W and Z bosons, and was found to be ~1% better than the TDR expectations. Measurements of “local” reconstruction efficiency (track reconstruction within a chamber) were obtained using the tag-and-probe technique: efficiencies are typically ~90% or higher. The spatial hit resolution was calculated from the width of the distribution of the residuals between the rechits and the reconstructed segments. Resolutions range from 200 to 350 microns in the φ-view and from 250 to 450 microns in the θ-view. The MC is well tuned to reproduce these resolution values.

The overall resolution in the local arrival time measurement was evaluated to be about 2.4 ns. Systematic biases in the different chambers and φ-regions are within 0.2 ns.

Electronics noise is negligible during pp collisions. The rate of out-of-time background was measured by counting the number of hits in a time window where no signal from particles originating in the pp collisions was expected; it was found to be 0.004 Hz/layer/cm² for inner chambers and 0.005 Hz/layer/cm² for outer chambers.

Concurrently, the DT offline software has undergone substantial refinement. The DT calibration workflow has improved: the procedure is now able to fully exploit the DQM GUI to visualise the results of the calibration and its validation, and the workflow infrastructure was reworked so that it is now completely automated. The ALCARECO stream assigned to calibration now allows the DT team to react quickly if a change of conditions manifests itself and to update the calibration constants with lower latency than before.

Regarding the local reconstruction, a tuning of the drift velocity in the first layer of the external wheels was introduced, to take into account known small deviations due to the magnetic field. This slightly improves the spatial resolution in that region.

 

MUON DETECTORS: CSC

The earliest collision data in 2011 already show that the CSC detector performance is very similar to that seen in 2010. That is discussed in the DPG write-up elsewhere in this Bulletin. This report focuses on a few operational developments, the ME1/1 electronics replacement project, and the preparations at CERN for building the fourth station of CSC chambers ME4/2.

During the 2010 LHC run, the CSC detector ran smoothly for the most part and yielded muon triggers and data of excellent quality. Moreover, no major operational problems were found that needed to be fixed during the Extended Technical Stop. Several improvements to software and configuration were however made.
One such improvement is the automation of recovery from chamber high-voltage trips. The algorithm, defined by chamber experts, uses the so-called "Expert System" to analyse the trip signals sent from DCS and, based on the frequency and the timing of the signals, respond appropriately. This will make the central DCS shifters’ lives easier because they won't have to deal with the few channels (out of 9000) which occasionally trip in the CSC system. The algorithm has been implemented, fully tested at 904, and is being carefully evaluated in "listener-only" mode at Point 5 prior to activation.

The group also tried to minimise the impact of a potential downtime if one of the CSC computers were to die during running. Together with the CMS DAQ group, an exercise was performed on all CSC online computers to emulate this situation. On each computer, the operating system was completely wiped out, as if it were a new computer freshly installed in the rack. Then, the operating system was installed automatically from the network, including the CSC-specific libraries needed to run. This revealed a few problems on both the CMS DAQ side and on the CSC side, which would have taken hours to sort out if they had occurred during actual running. All such problems have been fixed.
Attention has recently focused on a CSC low-voltage supply issue. Six of the 473 CSC chambers are not supplying data because a 7.5 V “digital” power supply line reads back a value close to zero on the chambers themselves. There are two 7.5 V lines, one digital and one analogue, in the supply cables. We find the digital line is not working while the analogue line is working; since they are in the same cable, a cable disconnection is unlikely. There were four chambers in such a state at the end of 2010 but, since the start of 2011, two more chambers have acquired this condition. The connectors and cables are inaccessible, so troubleshooting is proceeding using remote methods.

With respect to planning for the future, new ME1/1 cathode electronics is being developed to provide individual readout for every strip in ME1/1 and to ensure deadtimeless readout at higher luminosities. The original electronics for the readout of the ME1/1 chambers required a three-fold ganging of the cathode strips in the inner ME1/1a section of the chambers, which has been found to compromise the effectiveness of these chambers in triggering and pattern recognition.

Since December, the ME1/1 electronics work has progressed on multiple fronts. A prototype of the new digital front-end board for the CSCs, the DCFEB, was sent for fabrication and assembly. The prototype of the Trigger Mother Board mezzanine was completed and has undergone extensive bench testing at Texas A&M. A new ASIC, the EMU_CC, was submitted to the foundry in February. This ASIC will allow the clock, trigger and control signals to be transmitted optically to the new boards. Meanwhile, there was extensive work on other items needed for the project, including firmware, test beds and long-range procurements. The plans and status of this upgrade were presented at the ATLAS-CMS Common Electronics Workshop for the LHC Upgrades (ACES2011) at CERN in March.

Meanwhile, the ME4/2 CSC upgrade project has moved into its setting-up and preparatory phase. The production of 71 chambers for the completion of the outer ME4 station ring located on the YE3 endcap disks will take place at CERN, in Prevessin building 904. The goal is to deliver fully tested CSCs to Point 5, ready for installation in UXC during the long LHC shutdown of 2013-2014.

The 1000 m² CSC factory area allocated to this task is being organised to meet all the necessary requirements in terms of infrastructure (power, gas), factory equipment (storage, furniture, clean rooms) and chamber construction tooling (production machines, chamber tables and carts). A team of experts from CERN, PNPI, IHEP and the US, with the help of a few students and the CERN and CMS support services, is actively involved at CERN in the preparations. In parallel, the procurement of all chamber mechanical parts and electronics components is underway in the US.

The 904 factory will be equipped with two clean rooms, which will be allocated to: 1) the wire winding of the anode panels and wire soldering, and 2) the panel assembly and chamber sealing. The completion of the clean-room construction is expected by the end of April.

An important step of the production workflow will be the chamber leak and HV testing and the final assembly and testing of the on-chamber electronics. This represents the last step before the final chamber QC/QA inspection and packing. We are currently defining all hardware and software requirements for the optimisation of each individual sub-step.

Chamber production machines (tension, winding, automatic soldering, glue dispensers, tooling etc.) were shipped from Fermilab late last year and are now being re-assembled and re-commissioned in 904. A complete set of chamber parts (including 21 FR4/Cu honeycomb panels) has also been delivered to CERN and will be used to construct three prototype chambers. These will allow the factory personnel to be trained and the readiness of the construction facility for chamber mass production to be assessed.

Assuming no show-stoppers, we should be able to start the prototype production early this summer. The subsequent start of the chamber mass production will depend very much on the lead times for delivery of chamber parts to 904. In this respect, the chamber panels represent the item with the longest lead time. Indeed, after panel fabrication the cathode strips have to be milled and all panels must be cleaned. This process will take place at Fermilab, where the necessary tooling and adequate expertise are available. The expected chamber mass production rate is 3 to 4 CSCs per month.
 

MUON DETECTORS: RPC

During data-taking in 2010 the RPC system behaviour was very satisfactory, both in detector and in trigger performance. Most of the data analyses are now complete and many results and plots have been approved for publication in the muon detector paper. A very detailed analysis of the detector efficiency has been performed using 60 million muon events taken with the dedicated RPC monitor stream.

The results have shown that 96.3% of the system was working properly, with an average efficiency of 95.4% at 9.35 kV in the Barrel region and 94.9% at 9.55 kV in the Endcap. The cluster size ranges from 1.6 to 2.2, showing a clear and well-known correlation with the strip pitch. The average noise in the Barrel is less than 0.4 Hz/cm² and about 98% of the full system has an average noise of less than 1 Hz/cm². A linear dependence of the noise on the luminosity has been observed preliminarily and is now under study.

Detailed chamber efficiency maps have shown a few percent of chambers with a non-uniform efficiency distribution. This could be a clear sign of chambers that are not working in the plateau region. A special calibration was planned, during the first day of 2011 data-taking, to perform an efficiency scan as a function of the high voltage in order to determine precisely the effective working point of all the chambers.

During the winter shutdown many problems were fixed; in particular, the power system has been completely recovered and re-calibrated (more than 1000 channels). A few cable swaps, found through the detailed 2010 data analysis, have been fixed. About 98.5% of the electronic channels are working properly. The present status of problematic chambers is: 6 chambers out of 912 are “disconnected” and 28 are working in “single-gap mode”. Most of the problems are due to broken high-voltage connectors or electronics failures that cannot be solved until the long shutdown foreseen for 2013-2014.

At the same time, many other aspects of the RPC project have been improved in the last few months in view of the 2011 run. A new set of efficiency tables has been included in the Monte Carlo simulation in order to model the performance of the system better. The 6 dead and the 28 single-gap chambers will also be described in the MC.

A new trigger algorithm for the barrel region has been studied in depth and is now implemented. Muon candidates are generated if at least three out of six chambers fire (previously four out of six), following a selected set of chamber combinations chosen to keep the trigger rate constant. Another important improvement has been the development of a new procedure to configure the electronic front-end boards: all the thresholds and widths are now loaded automatically from the database, providing a very easy way to fine-tune the thresholds and speeding up the configuration. This was the last step towards having the full RPC detector configured automatically, which should further reduce the dead time from the less than 2% achieved in 2010. A rough illustration of the new majority condition is sketched below.
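As an illustration only (the real barrel RPC trigger operates on strip patterns in firmware, and the actual allowed chamber combinations are not listed in this report), the following minimal sketch shows how a 3-out-of-6 majority over a restricted set of layer combinations differs from the previous 4-out-of-6 requirement. All layer names and combinations are hypothetical.

    from itertools import combinations

    # Hypothetical names for the six barrel RPC layers crossed by a track.
    LAYERS = ["RB1in", "RB1out", "RB2in", "RB2out", "RB3", "RB4"]

    # Hypothetical allowed combinations: in reality these would be selected
    # from rate studies so that the looser 3/6 condition keeps the rate constant.
    ALLOWED = {frozenset(c) for c in combinations(LAYERS, 3)
               if "RB1in" in c or "RB1out" in c}

    def fires(hit_layers, min_layers=3):
        """Return True if the fired layers satisfy the majority condition."""
        fired = frozenset(hit_layers)
        return len(fired) >= min_layers and any(combo <= fired for combo in ALLOWED)

    # A muon firing only three layers now produces a candidate;
    # the old 4-out-of-6 requirement would have rejected it.
    print(fires(["RB1in", "RB2in", "RB3"]))                # True  (new 3/6)
    print(fires(["RB1in", "RB2in", "RB3"], min_layers=4))  # False (old 4/6)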

 

MUON DETECTORS: ALIGNMENT

Alignment efforts in the first few months of 2011 have shifted away from providing alignment constants (now a well established procedure) and focussed on some critical remaining issues. The single most important task left was to understand the systematic differences observed between the track-based (TB) and hardware-based (HW) barrel alignments: a systematic difference in r-φ and in z, which grew as a function of z, and which amounted to ~4-5 mm differences going from one end of the barrel to the other. This difference is now understood to be caused by the tracker alignment. The systematic differences disappear when the track-based barrel alignment is performed using the new “twist-free” tracker alignment. This removes the largest remaining source of systematic uncertainty.

Since the barrel alignment is based on hardware, it does not suffer from the tracker twist. However, untwisting the tracker causes endcap disks (which are aligned using tracker tracks) to rotate around the beam line by about 1 mrad in opposite directions, resulting in a poor relative alignment between barrel and endcap. A new endcap alignment has been produced which makes CSCs consistent with the new tracker alignment and which improves the relative barrel-endcap alignment.

There has also been progress in the barrel alignment using stand-alone muon tracks. An initial procedure has been tested on Monte Carlo and is now being tested on data. It aligns the entire barrel in sequence: all DTs inside each sector of each wheel first, neighbouring sectors within each wheel next, neighbouring wheels last. A new effort has begun to provide realistic alignment position errors (APEs) for all chambers. These APEs should help improve the dynamic truncation fitter, a new reconstruction for highly energetic muons which can undergo significant radiative losses.

TRIGGER

Level-1 Trigger Hardware and Software

After the winter shutdown, minor hardware problems in several subsystems appeared and were corrected. A reassessment of the overall latency has been made. In the TTC system, shorter cables between TTCci and TTCex have been installed, which saved one bunch crossing but may have required an adjustment of the RPC timing. In order to tackle Pixel out-of-syncs without influencing other subsystems, a special hardware/firmware re-sync protocol has been introduced in the Global Trigger. The link between the Global Calorimeter Trigger and the Global Trigger, with the new optical Global Trigger Interface and optical receiver daughterboards, has been successfully tested in the Electronics Integration Centre in building 904. New firmware in the GCT now allows a setting to remove the HF towers from the energy sums. The HF sleeves have been replaced, which should reduce the rates of anomalous signals and may allow the HF towers to be included again once this is validated. For ECAL, improvements in tower masking are underway. Using 2010 data, a new configuration of the e/γ trigger was studied which rejects anomalous energy deposits in the ECAL APDs. Fine-grain, H/E and isolation criteria were also introduced in the e/γ trigger to reduce the trigger rate, and η-dependent corrections were introduced in the L1 jet trigger. Several improvements were introduced in the muon triggers to improve the efficiency and to reject fake muon tracks. For the RPC, 3/6 patterns are being validated. For the DT and CSC Track Finders, changes in the ghost-cancellation schemes have been introduced which significantly reduce the fake di-muon rates.

In the L1 trigger online software, a new central monitoring and alarm infrastructure common to all trigger sub-systems is now in place. New L1 trigger menus for luminosities up to 1 × 10³³ cm⁻²s⁻¹ were prepared and validated. These include cross-triggers, which become important as the basic single-object triggers are pre-scaled for the lowest thresholds. In the L1 trigger offline software, the redesign of the L1 trigger emulator DQM based on quality tests was concluded. An effort is ongoing to implement automatic DQM quality tests on trigger rates, synchronisation and η-φ occupancies.

Level-1 Trigger Commissioning and Operations

After the successful increase in LHC luminosity last year, culminating in 348 colliding bunches at the end of the 2010 proton run, the accelerator is starting the 2011 run with a few colliding bunches and at low luminosities, but plans to increase these numbers rapidly and surpass last year's values within a matter of weeks. It is therefore vital to ensure that the CMS Level-1 Trigger, too, can keep pace with this development.

To ensure smooth operation at high luminosities, almost all Level-1 trigger systems have used the winter shutdown for upgrades of their hardware, firmware and software. The focus is now to validate with the very first data that all these changes really show the expected results and do not introduce any unwanted side effects. Due to the large number of changes it is not always straightforward to disentangle the reasons for observed changes in the behavior of the detector and the electronics. However, most shutdown improvements have been validated by now and there remain only a few open questions of secondary importance.

To check the correct timing of all components we have again used minimum-bias triggers supplied by the beam scintillator telescope (BSC) and zero-bias triggers from the beam pickup system (BPTX). With increasing luminosity these triggers are now turned off or heavily pre-scaled and replaced by the “physics” triggers from the muon systems and the calorimeters. However, very soon also many of these triggers will have to be pre-scaled or replaced by triggers at higher thresholds. In order to prepare for these changes in advance the planned modifications are discussed between the Level-1 Trigger and the High Level Trigger communities on a regular basis so that the required menu changes and applied pre-scale factors on both sides can be matched.

In 2011 the LHC plans to arrive quickly at a stable mode of operation, unlike in 2010, when the number of bunches and the luminosity were increased every few days or weeks. The trigger systems will therefore not have to switch menus as frequently as last year. To get the most out of the available luminosity, pre-scale factors will be adjusted during runs so as to compensate for the decrease in luminosity over an LHC fill. At first this will be done manually, as in 2010, but the plan is to switch to automatic operation controlled by the online luminosity measurement. Each trigger configuration table will therefore contain multiple "pre-scale set columns", both for L1 and for the HLT.
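A minimal sketch of how such an automatic choice could work is given below; the column indices, luminosity boundaries and interface are purely illustrative and are not taken from the actual run-control implementation.

    # Illustrative only: select a pre-scale set column from the online luminosity.
    # Thresholds (in units of 10^32 cm^-2 s^-1) and column indices are hypothetical;
    # in CMS the columns live in the trigger configuration table.
    PRESCALE_COLUMNS = [
        (2.0, 0),   # highest luminosity: tightest pre-scales
        (1.0, 1),
        (0.5, 2),
        (0.0, 3),   # lowest luminosity: loosest pre-scales
    ]

    def choose_prescale_column(inst_lumi_e32):
        """Return the pre-scale column appropriate for the measured luminosity."""
        for threshold, column in PRESCALE_COLUMNS:
            if inst_lumi_e32 >= threshold:
                return column
        return PRESCALE_COLUMNS[-1][1]

    # As the luminosity decays during a fill, looser columns are selected so that
    # the L1 and HLT output rates stay roughly constant.
    for lumi in (2.3, 1.4, 0.7, 0.2):
        print(lumi, "->", choose_prescale_column(lumi))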

The achieved increase in the level of automation will allow the shifter to concentrate on monitoring the correct functioning of all systems and on checking the quality of the recorded data. Offline shifters will complement this work and allow the Trigger Field Managers to validate the data taken during their turn.

Trigger Studies Group

Since the end of the 2010 data-taking, the Trigger Studies Group (TSG) has been preparing for a prolonged 2011-’12 run with luminosities exceeding 5 × 10³³ cm⁻²s⁻¹ and pile-up exceeding 10 interactions per crossing. To that end, a new series of “Trigger Reviews” took place at the end of November and the beginning of February. MC samples were provided to the physics groups in order to study the impact of high pile-up on trigger rates and signal efficiencies. The emerging conclusion is that the multiple-interaction environment of the 2011 LHC running comes with significant challenges that need to be addressed right away: higher rates (especially for triggers invoking multiple jets, missing energy and energy sums), reduced rejection power for isolated leptons, and significantly increased CPU time for the HLT algorithms. An initial 5 × 10³² cm⁻²s⁻¹ menu with the corresponding datasets has been prepared and validated for the early LHC running, in close collaboration with the physics and detector groups. A follow-up Trigger Review has been scheduled for April for further refinement of the trigger strategy, using the data from the first 2011 collisions and with an eye towards 10³³ cm⁻²s⁻¹ and beyond.

With the proliferation of multiple-object triggers, the task of Trigger Performance has become more challenging. Steps toward providing a higher level of automation of the validation and monitoring code have been taken. This aims at easing the burden on both shifters and validation experts, with further improvements expected over the next few months. Development on the monitoring of trigger and primary dataset rates continued with a focus on dynamic pre-scaling, and overall system reliability and redundancy. Validation and support of the HLT and trigger menus in several CMSSW releases continues to occur on a weekly basis.

Operationally, we saw the successful restart of data-taking in February. A dedicated trigger menu was deployed for dealing with off-beam operations, cosmic runs and circulating or ramping beams. Finally, the CMS trigger recorded its first 2011 collisions on Sunday, 13th March.

It should be noted that a very small team of people carries out the TSG work. In particular, the team of on-call experts dealing with the integration of trigger menus needs to be strengthened. We would like to take this opportunity to thank the retiring on-call experts Dr. Edgar Carrera and Dr. Maurizio Pierini for all their hard work, and welcome three new members in the on-call team: Dr. Stephanie Beauceron, Alex Mott, and Cory Fantasia. The TSG welcomes new members and is searching for new participants in Trigger Performance and Trigger Menu Integration. This is an exciting and challenging job, which offers opportunities to have a big impact in the collaboration!

DAQ

The DAQ system (see Figure 2) consists of:
– the full detector read-out of a total of 633 FEDs (front-end drivers). The FRL (front-end readout link) provides the common interface between the sub-detector specific FEDs and the central DAQ;
– 8 DAQ slices with a 100 GB/s event building capacity – corresponding to a nominal 2 kB per FRL at a Level-1 trigger rate of 100 kHz;
– an event filter to run the HLT (High Level Trigger) comprising 720 PCs, each with two quad-core 2.6 GHz CPUs;
– a 16-node storage manager system allowing a writing rate that exceeds 1 GB/s, with concurrent transfers to Tier 0 at the same rate, and a total storage capacity of 250 TB. It also forwards events to the online DQM (Data Quality Monitoring).
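As a rough consistency check of these figures (the number of FRLs is not quoted above and is inferred here):

    2 kB/FRL × 100 kHz = 200 MB/s per FRL,   100 GB/s ÷ 200 MB/s ≈ 500 FRLs,

i.e. a nominal total event size of about 500 × 2 kB ≈ 1 MB at a 100 kHz Level-1 rate.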


Figure 2: The CMS DAQ system. The two-stage event builder assembles event fragments from typically eight front-ends located underground (USC) into one super-fragment, which is then fed into one of the eight independent readout slices on the surface (SCX) where the complete event is built.

Developments for the 2011 physics run

A number of releases and updates of the online software, including framework and services, run control and central DAQ applications, have been made. These addressed bug fixes, performance improvements and functionality enhancements.

The XDAQ framework has been consolidated for the SLC4/32-bit platform with release 10. This release is targeted at the sub-detector DAQ nodes, and all those nodes were migrated to the latest update at the beginning of 2011. Furthermore, release 11 has been developed, which supports, in addition to the SLC4/32-bit platform, the SLC5 platform (64-bit OS, 64-bit applications with the alternate gcc434 compiler). The SLC5 platform is targeted at the central DAQ nodes, in particular the HLT, Storage Manager and online DQM nodes running CMSSW.

All central DAQ nodes have been migrated to the SLC5/64-bit kernel and 64-bit applications. The HLT, Storage Manager and online DQM are currently running CMSSW 4.x and the executables are built with the same compiler as offline. The move from 32 to 64 bits gives a ~20% performance improvement for the HLT. The sub-detector DAQ nodes – mainly used for control and configuration of the VME crates – have stayed on SLC4/32-bit for the time being.

The HLT farm has been extended with additional PCs to increase the HLT power by about 50%. The cooling of the online data centre in SCX5 has been upgraded from 600 kW to 1 MW during the technical stop at the end of 2010. This increased cooling capacity allows for the current and all future extensions of the DAQ-HLT system.


Figure 3: The EVB-HLT installation.

The DAQ system deploys a two-tier event builder (see Figure 2): an initial pre-assembly by the FED builders in the first stage, and a final assembly and event selection by eight independent DAQ slices in the second stage. Each FED builder assembles data from (typically) eight FEDs into one super-fragment using Myrinet switches. The super-fragment is delivered to one of the eight independent DAQ slices on the surface, where it is buffered in readout units (RUs) running on commodity PCs. Each readout unit (RU) is connected via a 540-port switch to Builder Units (BUs) using the TCP/IP protocol over Gigabit Ethernet. The event building in each slice is controlled by one event manager (EVM), which receives the L1A trigger information. The BUs store the complete events until the filter units (FUs), running the HLT algorithms, either reject or accept the events.

Data of accepted events are compressed and sent to the storage manager, which writes them to disk. The switching fabric of the second-stage event builder comprises eight Force-10 E1200 switches (one for each DAQ slice), with a grand total of 4320 1 Gbps ports. The PC nodes used during 2009-2010 for the combined BU-FU function are 720 Dell PE 1950 units with dual quad-core 2.66 GHz CPUs (Intel E5430 “Harpertown”) and 16 GB of memory. All the nodes have two data links to the switching fabric, as there needs to be sufficient data bandwidth to sustain an event rate per node of 139 Hz at a L1 trigger rate of 100 kHz. At the beginning of 2011, an extension of the HLT farm was installed (see Figure 3) by adding 72 Dell PE C6100 units. These compact units house four system boards, which are independent computing nodes, in a 2U chassis with a shared power supply. A configuration with dual six-core 2.66 GHz CPUs (Intel X5650 “Westmere-EP”) and 24 GB of memory has been chosen. These CPU cores support hyper-threading, which might give an additional performance improvement. This has brought the total HLT system to 1008 nodes, 9216 cores and 18 TB of memory. The estimated available CPU time for HLT processing has been increased from ~50 ms/event to ~82 ms/event. It is expected that the HLT extension will be fully commissioned and integrated during the April technical stop.
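A back-of-envelope check of the per-node figures (assuming the nominal ~1 MB event size implied by the read-out parameters listed earlier):

    100 kHz ÷ 720 nodes ≈ 139 Hz per node,   139 Hz × 1 MB ≈ 1.1 Gb/s per node,

which exceeds a single Gigabit Ethernet link and is why each node is connected to the switching fabric with two data links.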

Based on experience with the 2011 data, a decision will be made if more HLT processing power is required for the 2012 LHC operation. In that case, a further extension of the HLT is possible by installing additional HLT nodes and re-cabling the event building network to reduce the number of data links per node from 2 to 1 and distribute the ports to the larger number of HLT nodes.

COMMISSIONING AND DETECTOR PERFORMANCE GROUPS (DPG)

As the technical interventions were finishing and services were restored at P5, we restarted central operations in 2011. We started operations with a reduced shift crew, comprising only a Shift Leader and a DCS shifter, on 24th January. We had a first mid-week global run on 2nd and 3rd February, followed by cosmic data-taking between 10th and 20th February. Due to delays with the cooling for the strip tracker, useful cosmic data-taking with the strip tracker was reduced to about four days. On 20th February, the LHC started beam commissioning and cosmic data-taking with the full CMS detector was stopped. The machine availability has been much higher during the 2011 beam commissioning than during the comparable period in 2010. This has given us few opportunities to turn on the tracker for further cosmic data-taking. Many changes and upgrades were performed during the winter shutdown, among them an upgrade to running the central DAQ on 64-bit software. All of these upgrades have now been tested successfully.

The LHC delivered the first stable beams on 13th March, a day ahead of the schedule presented in January. The LHC has commissioned the new beam optics for 2011 with β* = 1.5 m; at the end of 2010 we operated with β* = 3.5 m. So, with all other machine parameters the same, this gives us a luminosity increase of about a factor of 2.3. With nominal bunch charges, about 1.15 × 10¹¹ protons, and an emittance of 2.2 mm we expect a pile-up of about 10 interactions per bunch crossing. Figure 4 shows an event with 13 reconstructed vertices. This high pile-up constitutes a significant challenge for the trigger; a lot of effort has been spent to prepare for the conditions expected in the 2011 run.
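For identical bunch parameters the luminosity scales inversely with β*, so the expected gain is simply

    L(2011) / L(2010) ≈ β*(2010) / β*(2011) = 3.5 m / 1.5 m ≈ 2.3,

consistent with the factor quoted above.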


Figure 4: Event with 13 reconstructed vertices.

At first, the LHC delivered collisions with two bunches colliding in CMS. These first fills were used to carry out delay scans, e.g. of the pixels and strips, to validate the trigger timing, and to perform HV scans. We are now done with these special runs; some final analysis remains to be done and new constants have to be deployed in the online system. On Friday, 18th March, the LHC started the intensity ramp-up with 32 bunches (30 colliding in CMS). The goal for the machine is to have three fills and 20 hours of stable beams at each intensity step. At the time of writing, the LHC has reached 64 bunches and is planning the next fill with 136 bunches. The last step is to go to 200 bunches, which will give a luminosity in excess of 2 × 10³² cm⁻²s⁻¹, i.e. at the same level as we had last year. All subdetectors are back working at the same level as last year. There were changes in the timing for the RPCs and Pixels that are not fully understood, but they are now timed in with respect to the LHC beam. There will be a Jamboree between 13th and 15th April to assess the quality of the data we are taking, to make sure that we are ready for the large data sample we are expecting in 2011.

For the 2011 run we made some changes in the way the central shifts at P5 are organised. Based on the experience of the 2010 run, we wanted to ensure a more experienced shift crew, which means fewer shifters and more shifts per person. For the 2011 operation we have instituted minimum quotas for the central shifts and have limited the number of people allowed to take them in order to match the quota. So far this has been successful: we have filled all shifts for most central shift roles for 2011.

Tracker

The Tracker joined CRAFT11 on 14th February and collected approximately 1.2M cosmic-ray tracks. Such exercises offer an excellent opportunity to monitor the detector and check its performance before the restart. In addition, cosmic tracks are essential for the alignment of the Tracker: thanks to their non-trivial topology, they make it possible to mitigate weak modes. Furthermore, they allowed the tracker geometry to be monitored and, where necessary, corrected before the restart of the LHC, most notably the longitudinal shift of the barrel pixel half-shells, which was seen to occur several times in 2010. Finally, because cosmic tracks cover a wider range of angles with respect to the sensor planes than collision tracks, they contribute significantly to the correction of the surface deformations of the silicon sensors.

Unfortunately the number of collected tracks falls well short of the 3M tracks needed to carry out the foreseen programme. While the Tracker itself performed well and the data are overall of good quality, a timing mismatch between the various detectors has been seen, which adversely affects the resolution of the hits in the pixel detector.

The alignment revealed that the longitudinal separation of the barrel pixel half-shells changed by approximately 60 microns and the barycentre of the barrel pixel moved along z by the same amount. Shifts of tens of microns in the pixel endcaps have also been observed and corrected in the alignment.

Further validation and analyses of the alignment geometry produced in November 2010 for the end-of-year reprocessing have also been completed. While this geometry offered significant improvements in the endcaps (with special benefit for B-physics), it led to an η-dependent bias of the Z mass peak position. The analysis revealed that this bias was caused by a twist of the Tracker geometry. The alignment procedure with only cosmic and minimum-bias tracks is insensitive to such a twist (the twist is a “weak mode”), so the information from the Z mass must be used to fully constrain it. A new, almost twist-free geometry was thus derived, which nearly eliminates the observed η-dependent mass bias while retaining all the previous improvements. This highlights the importance of the collaboration between the DPGs, POGs and PAGs on such issues.

Since 13th March, the Tracker has been taking fresh collision data and performing well. The conditions obtained from the February commissioning are now used, including the alignment obtained from CRAFT11 data. The first days of data-taking were dedicated to commissioning runs needed to optimise the performance for 2011. Among other things, it was immediately noted that the timing of the pixel detector had to be corrected by about half a bunch crossing, probably following changes to the trigger system during the winter break. After that correction, the initial data show nearly perfect agreement with the templates from 2010, both for the pixel and strip subdetectors. Using these data, a first assessment of the alignment quality could be made in less than one week, confirming both the robustness of the tools and the responsiveness of the team in charge. Special-run data are being analysed and will be used to reassess the timing of the detector and the impact of radiation on the silicon sensors.

ECAL

Since the December CMS Week, the focus of the ECAL DPG has been on the consolidation of results from 2010 data and preparations for high luminosity data taking in 2011.

The 2010 data have been used to further refine the energy and timing calibration of ECAL, and to provide precise spatial alignment corrections for the crystal barrel and endcap calorimeters and the preshower detector. Energy inter-calibration procedures using minimum-bias events and photons from π0 and η decays have now reached a precision of up to 0.5% in the crystal barrel and 2-3% in the endcaps. Procedures to extract the absolute electron and photon energy scale from Z→ee and Z→μμγ events are now mature, and satisfactory agreement between data and Monte Carlo simulation has been achieved for the ECAL energy scale and energy resolution.

Corrections to the energy scale of the barrel and endcap detectors due to crystal irradiation will be automatically applied to reconstructed data in 2011, using measurements from the ECAL light monitoring system. Detailed analysis of 2010 data has improved the accuracy of these corrections, which are derived from a theoretical model of crystal transparency loss due to irradiation, followed by recovery in beam-off periods. The parameters of this model will be further constrained by high luminosity data recorded during 2011.
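The correction is multiplicative and follows the measured crystal response. As a purely illustrative toy (not the actual CMS model or its parameters), one can picture an exponential transparency loss during collisions and an exponential recovery in beam-off periods, with the energy scale corrected by the inverse of the relative response:

import math

# Toy model of crystal transparency loss and recovery -- illustrative only.
# The real CMS correction is derived channel by channel from laser monitoring
# data; the time constants, equilibrium level and power law below are assumptions.
TAU_DAMAGE, TAU_RECOVERY, ALPHA = 10.0, 30.0, 1.0   # hours, hours, response-to-scale exponent

def relative_response(history):
    """history: list of (duration_in_hours, beam_on) periods; returns S(t)/S(0)."""
    s = 1.0
    for duration, beam_on in history:
        if beam_on:                                   # decay towards an equilibrium (here 0.9)
            s_eq = 0.9
            s = s_eq + (s - s_eq) * math.exp(-duration / TAU_DAMAGE)
        else:                                         # partial recovery towards full transparency
            s = 1.0 + (s - 1.0) * math.exp(-duration / TAU_RECOVERY)
    return s

s = relative_response([(12, True), (6, False), (12, True)])
correction = (1.0 / s) ** ALPHA                       # multiplicative correction to the energy
print(f"relative response {s:.3f} -> energy-scale correction {correction:.3f}")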

Anomalous signals in the ECAL barrel (a.k.a. “spikes”) remain an important focus of the ECAL DPG. Spike rejection at Level-1, using the “strip fine grain veto bit” calculated during ECAL trigger primitive generation, has been commissioned and will be deployed online in 2011.

A significant achievement over the past few months has been the incorporation of online spike-killing into the Level-1 emulator. This will allow spike rejection rates to be predicted for a range of trigger-primitive thresholds and LHC luminosity/pile-up scenarios. The simulation of anomalous signals in the CMS Monte Carlo has been further improved, and we expect that samples generated with spikes included will be of increasing interest to POGs and PAGs during 2011.

Regarding offline spike rejection, following close consultation with Egamma, JetMET and Particle Flow groups, a common spike "cleaning" algorithm has been developed to remove anomalous ECAL energy deposits from Egamma objects and jets in the reconstruction of 2011 data.

We have also optimised the zero suppression settings and amplitude reconstruction weights in preparation for high intensity LHC running. These settings were tested during the winter shutdown, and are ready to be deployed online. In addition the silicon preshower detector will run in low-gain mode during 2011, to maximise energy resolution and π0/γ separation capabilities.

The excellent performance of the ECAL online, reconstruction, data quality monitoring, and prompt feedback groups has allowed us to re-establish high-quality data-taking following the resumption of LHC collisions in 2011, and we look forward to continuing this trend as the luminosity increases throughout the year.

HCAL

Missing transverse energy (MET) is an important signature of new physics, and a good understanding of the detector effects producing fake MET is essential. In many analyses there are two primary sources of anomalous signals in the forward calorimeter (HF) leading to large fake MET. One source was first observed in test-beam analyses and is due to charged particles producing Cherenkov light in the PMT window; this signal arrives earlier than the physics signal produced in the HF absorber. A second source of anomalous signals, observed in the 2010 data, is due to scintillation light produced in the light guide, which arrives later than the physics signal. The physics signal is narrow, so we can reduce the integration window used to reconstruct the energy without compromising the energy measurement, while at the same time reducing the contribution from anomalous signals. Using a narrower time window also reduces the effect of out-of-time pile-up.
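Schematically, the channel energy is reconstructed from a sum over 25 ns time samples, and restricting the sum to the samples around the expected signal peak removes most of the early (PMT-window) and late (light-guide) contributions. The snippet below is a simplified illustration with invented sample values, not the actual HF reconstruction code:

# Simplified illustration of reconstructing an HF energy from a reduced time window.
# The pulse shapes and the window choice below are invented for illustration.

# Ten 25 ns time samples (pedestal-subtracted) for one channel:
physics    = [0, 0, 0, 5, 80, 15, 2, 0, 0, 0]   # narrow in-time signal, peaking in sample 4
pmt_hit    = [0, 0, 60, 10, 0, 0, 0, 0, 0, 0]   # Cherenkov light in the PMT window (early)
lightguide = [0, 0, 0, 0, 0, 10, 30, 20, 5, 0]  # scintillation in the light guide (late)

samples = [p + q + r for p, q, r in zip(physics, pmt_hit, lightguide)]

def energy(samples, first, last):
    """Sum the samples from 'first' to 'last' inclusive."""
    return sum(samples[first:last + 1])

print("full window  :", energy(samples, 0, 9))   # picks up both anomalous contributions
print("narrow window:", energy(samples, 4, 5))   # mostly the in-time physics signal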

We also modified the light guide by replacing the material that was the source of scintillation light. These modifications are expected to greatly reduce the sensitivity to anomalous signals in HF. Filters to further reduce anomalous signals have been updated for the 2011 operating conditions, which feature shorter bunch spacing and higher pile-up.

One of the important tasks of the HCAL DPG is the calibration of HCAL. The main purpose of the calibration is to establish a well-defined energy point so that the response of HCAL can be monitored as a function of time. Adjustments to the response can then be made in order to maintain the calibration point to an accuracy of 2-3%. The HCAL calibration starts from the response determined using test-beam data on a limited number of HCAL modules and is then extended to all of HCAL using Co-60 wire sources. This initial calibration, referred to as pre-calibration, does not include the effects of dead material in front of HCAL or of the magnetic field. These effects can only be accounted for using collision data. Special calibration triggers to collect non-zero-suppressed data, photon-triggered events, and events rich in isolated tracks were used in 2010.

The HCAL calibration is done in two steps. First, a relative scale adjustment is applied in φ, which does not change the overall energy scale. The φ-symmetry calibration was done using the 2010 non-zero-suppressed data and photon-triggered data; these two data samples were combined to reduce the uncertainty on the response corrections. Once the relative φ-symmetry calibration is applied, an absolute scale correction in η is established for a fixed energy point. The η-dependent correction uses isolated charged particles and requires a well-measured track momentum from the tracker.
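In simplified form, the φ-symmetry step exploits the fact that, averaged over many minimum-bias events, the energy flow should be uniform in φ within each iη ring, so per-channel correction factors can be derived by normalising each channel to its ring average; the absolute η-dependent scale is then fixed in the second step using isolated tracks. The sketch below illustrates the idea on a toy response map and is not the actual HCAL calibration code:

from collections import defaultdict

# Toy phi-symmetry calibration: derive per-channel multiplicative corrections that
# equalise the average response within each ieta ring (illustrative only).

# mean_energy[(ieta, iphi)] = average energy summed over many minimum-bias events,
# here an invented map for a single ring with a small periodic non-uniformity.
mean_energy = {(1, iphi): 10.0 * (1.0 + 0.05 * ((iphi % 3) - 1)) for iphi in range(1, 73)}

ring_energies = defaultdict(list)
for (ieta, iphi), e in mean_energy.items():
    ring_energies[ieta].append(e)

corrections = {}
for (ieta, iphi), e in mean_energy.items():
    ring_mean = sum(ring_energies[ieta]) / len(ring_energies[ieta])
    corrections[(ieta, iphi)] = ring_mean / e   # relative correction, leaves the ring scale unchanged

print(min(corrections.values()), max(corrections.values()))  # spread of corrections around 1.0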

A sufficient sample of events with isolated tracks was collected in 2010 for the η-dependent response correction to be determined. The full HCAL calibration will be applied to the 2011 data, and we will continue to collect isolated-track data until we have enough to calibrate individual channels. Methods to extend the calibration to the forward region without Tracker coverage, which includes parts of HE, HF, CASTOR and the ZDC, are under development. For HF we can use Z→ee events in which one electron is in the central region and the other is in HF; photon+jet events can be used as an important cross-check. For CASTOR we may use the same techniques as for HF. The ZDC is being calibrated with neutrons.

Early in 2011, the beam was intentionally steered into collimators upstream of CMS, producing a spray of muons that is useful for checking the response of the detectors. These so-called “splash” data were used to cross-check the φ-symmetry calibration, and it was found that two regions at fixed φ have a problem with the first scintillator layer (layer 0). Layer 0 uses a thicker scintillator, which also has a higher response than the other layers in HCAL.

Originally it was planned that layer 0 would be read out separately, but it was later decided to combine it with the other layers in HCAL. In order to make the light output of this layer similar to the others, a neutral-density filter was used. It appears that there is a problem with layer 0 in two φ slices, leading to a higher response: these two φ slices have a different non-linear energy response from the other channels in HB. This can be compensated for in the software reconstruction, and a special correction function has been developed.

This year a major emphasis will be placed on establishing well-defined procedures to monitor HCAL and to update conditions so as to ensure a stable response over time. We have established HCAL offline shifts to monitor HCAL-specific conditions; the shifts will be carried out at remote operations centres in Russia and the USA.

DT

We designed an HLT path that isolates high-purity J/ψ candidates to study the behaviour of the DT system with muons at low pT. This makes it possible to select events in which one leg of the J/ψ is completely unbiased by the signal from the DTs.

Our calibration procedure is now able to fully exploit the DQM GUI to inspect the calibration results, and the workflow infrastructure was reworked so that it is now completely automated. We also improved the ALCARECO stream assigned to the DT calibration. This will allow us to react quickly if the detector conditions change.

We introduced a tuning of the drift velocity in the local reconstruction for the first layer of the external wheels, to take into account the known small degradation of the magnetic field there. This slightly improves the spatial resolution in that region. We also used the winter shutdown to summarise our 2010 performance.

More information can be found in the Muon DT section, elsewhere in this Bulletin.

CSC

During the end-of-year shutdown the CSC DPG was largely focussed on work required for the Muon Performance paper, which is currently being written jointly by all three Muon subdetector communities. Amongst the results are space and time resolutions of the CSC measurements, and efficiencies for CSC trigger primitives and local reconstruction of rechits and muon track segments, all based on 2010 collisions data. The performance is good and as expected from the detector design.

Learning from our 2010 experience we have improved the organisation of CSC DQM plots in order to simplify life for CSC shifters, who are now CSC DQMers, while expert oversight of CSC Operations is handled by the CSC DOC.  Incremental development continues online and offline to improve the precision of CSC timing measurements (and trigger timing). This includes improved timing values for rechits and segments, which are expected to benefit a number of physics analyses.

The CSC system contains nine (out of 473) chambers that currently provide no rechits or trigger primitives, due to various hardware problems. It has been realised that these should be suppressed in the CMS simulation so that the L1 muon trigger can be simulated realistically. This was not done last year in the belief that rechit information could be suppressed a posteriori when necessary; unfortunately, that does not provide a way to simulate the L1 trigger. The bad chambers are now included in the conditions data for the latest CMSSW 4_2_x release, so that they provide no digis (and hence no rechits or trigger primitives) in the simulation.

The Endcap Muon alignment group rapidly provided a new CSC alignment to match the new “twist-free” Tracker alignment. The old Tracker alignment effectively forced a ±1 mrad relative twist between the two CSC endcaps, because the CSC alignment is track-based whereas the barrel DT alignment is hardware-based and hence was unaffected by the Tracker twist. This resolves a long-standing apparent disagreement between the muon track-based and hardware-based alignments. The new alignment will also be part of the latest CMSSW 4_2_x release.

The CSCs were rapidly brought back into operation early in 2011 and recommissioned with cosmic rays. Everything from the hardware level to local reconstruction was soon validated against 2010 Cosmics data. This has now been confirmed with the first 2011 LHC collisions data, obtained in mid-March. Within two days we had already seen two Z→μμ candidate events, from which we conclude that the Electroweak Sector is still operational in 2011. We now await higher luminosities and settling into stable detector operation and data collection. Although the CSCs are expected to be relatively immune to the effects of high pile-up we are watching carefully the behaviour of the entire system – from trigger to local reconstruction – as luminosities increase, and we are also closely monitoring backgrounds from beam halo, beam splash, and neutrons.

RPC

The main activity of the RPC DPG during the last three months has been to refine the analysis of detector performance on the full 2010 data sample.

The average Barrel RPC hit efficiency was stable and above 95% throughout 2010 at an applied voltage of 9.35 kV. For the endcaps, data have been taken at different HV values, and an average hit efficiency of around 94% has been reached at 9.55 kV. The endcap working-point voltage is still not well defined, so a set of calibration runs was planned at the beginning of the 2011 data-taking to define the best working voltage chamber by chamber. This activity has just started and the analysis is ongoing at the time of writing.

The efficiency table based on the 2010 analysis has been included in the CMSSW 4_2_x release for Monte Carlo simulation, so we expect better modelling of the system performance. Dead chambers are also described, being simulated with zero efficiency in the MC.

Results on cluster size, spatial resolution and hit efficiency have been approved and will be part of the common paper on muon subdetector performance that is currently in preparation. Interesting analyses, in parallel with the other muon subdetectors, are in progress to monitor the noise rate as a function of the instantaneous LHC luminosity. The external and internal layers are affected by machine background, and the rate is under study for different regions of the detector.

An increased effort has been devoted to improving the synergy between the Muon DPGs and the Muon POG. Work is in progress to study the use of RPC hits in the muon reconstruction and in cosmic discrimination using the RPC timing.

In 2011 a new trigger algorithm (Pattern Comparator) will be used in the Barrel region. A muon candidate is generated if at least three out of six crossed layers are fired (previously at least four fired layers were required). Based on MC simulation, the new algorithm should increase the RPC trigger efficiency in the barrel by a few percent. To keep the trigger rate under control, only selected combinations of the three fired layers are allowed.
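Conceptually, the Pattern Comparator requires a coincidence of fired layers consistent with a predefined muon pattern. The sketch below illustrates the change of the majority requirement from four-of-six to three-of-six; the list of allowed three-layer combinations is purely invented, since the real trigger uses predefined spatial patterns stored in the hardware.

# Illustrative majority logic for the RPC Pattern Comparator (PAC).
ALLOWED_3_OF_6 = {frozenset(c) for c in [(0, 1, 2), (1, 2, 3), (2, 3, 4), (3, 4, 5)]}  # invented list

def pac_candidate(fired_layers, min_layers=3):
    """Return True if the set of fired layers satisfies the (toy) PAC requirement."""
    fired = frozenset(fired_layers)
    if len(fired) >= 4:                      # old 4-of-6 requirement already satisfied
        return True
    if len(fired) == 3 and min_layers == 3:
        return fired in ALLOWED_3_OF_6       # only selected combinations, to control the rate
    return False

print(pac_candidate({0, 1, 2}))      # True with the new 3-of-6 logic
print(pac_candidate({0, 2, 5}))      # False: combination not in the allowed list
print(pac_candidate({0, 1, 3, 5}))   # True under both old and new logic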

Since the beginning of 2011, some modifications of the RPC trigger software have been applied. The XDAQ applications controlling the trigger hardware were converted into Trigger Supervisor applications, with the goal of achieving greater homogeneity and simplicity of the online software. A new version of the front-end board (FEB) configuration has been implemented in the online software, and an automatic procedure to load the electronic thresholds from the database has been included (all other parts of the system were already configured from the database). All basic functionalities are now ready and working well. The updated software allows much easier fine-tuning of the FEB thresholds, which will allow the RPC detector performance to be improved.

During the early 2011 cosmic runs, a time shift of half a BX was observed. The source of that shift is outside the RPC system but is still not understood. A new set of synchronisation parameters has been uploaded to the Link Board system to correct the observed time shift. The first preliminary results from collisions show that the timing after this correction is as good as it was in 2010, i.e. the level of pre- and post-firing is virtually zero.

COMPUTING

Introduction

It has been a very active quarter in Computing, with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that progressed throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs submitted by users each day, which was already at the computing-model expectation at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences.

Heavy Ion

The Tier 0 infrastructure was able to repack and promptly reconstruct the heavy-ion collision data. Two copies of the data were made at CERN using a large CASTOR disk pool, and the core physics sample was replicated to tape at a Tier 1. The decision of CMS not to zero-suppress the Tracker led to raw event sizes greater than 10 MB/event, which placed a very heavy load on CASTOR. During repacking, where nearly equal-sized events are read and written, and in reconstruction, where large raw events are read to produce smaller reconstructed events, the IO load on CASTOR was large: CMS routinely requested 5 GB/s of reads and 1 GB/s of writes, including to tape. The load on CASTOR from reconstruction was increased by the excellent performance of the CMS reconstruction code. CMS was able to reconstruct the complete heavy-ion datasets promptly, whereas our original estimates were that reconstruction would need to stretch into the technical stop and holiday break.

The performance of the accelerator during the first heavy-ion run was an interesting indication of the running scenarios we will see in 2011 in proton-proton operation. The machine was able to set up and collide within a few hours of dumping the previous beam. The reconstruction stretched into the inter-fill periods and good resource utilisation was achieved.

Facilities and operations

The Facilities and Infrastructure Operations team has made important progress. Operational procedures covering the test, deployment and maintenance of the new CMS Workflow Management tool, WMAgent, have been established in close collaboration with the CMS Computing Integration and Data Operations teams. Responsibility for the ongoing deployment of a new GlideIn WMS fabric at CERN continues, and the team has participated actively in the testing of improved data management tools, in particular the PhEDEx Data Transfer Service and the Frontier/Launchpad/Squid monitoring. Responsibility for migrating many central CMS Services from real to Virtual Machines (VMs) at CERN, in collaboration with the CMS Offline project and CERN/IT, has also continued. Advantage was taken of the LHC winter break to improve various important monitoring aspects of the project, in particular: testing and deploying Lemon alarming/recovery procedures for services running on VoBoxes at CERN; further improving the computing shift procedures, critical-service recovery procedures and computing shift monitoring; and reinforcing the CMS site status and site downtime monitoring, in close collaboration with the CERN Dashboard team.

The new Site Availability Monitoring machinery based on Nagios has been tested and is ready to deploy. Work continued on migrating the CMS analysis data currently stored on disk at CERN, together with the related data access patterns, from a storage technology based on CASTOR to the new “EOS” storage solution proposed by CERN. Tests are ongoing, in close collaboration with the CERN IT Department, with the goal of having the full CMS analysis data migrated to EOS by the end of 2011.

Data Operations

The Data Operations team worked very hard during the Christmas break to provide data and MC for the winter conferences. While producing samples for new analyses, Data Operations has been actively involved in cleaning up samples that are no longer needed. The regular clean-out of older derived data samples will be a regular feature of 2011 and 2012; the oldest reconstruction passes for data and MC and the full 8 TeV MC were on the list for clean-up.

The Tier 0 successfully supported HI data-taking in November and December, and ran zero suppression for the HI data in February and March. It should be noted that two attempts were necessary because of a software problem. Once the data were reprocessed, a skim was created from the new datasets.

In addition to the heavy-ion activities, the Tier 1s processed all 2010 data twice (4th and 22nd December). The Tier 1s were also engaged in processing all Fall ’10 MC and adding pile-up events.

At the Tier 1 level, a lot of work was invested in using the old MC production infrastructure to account for 100% of the re-reconstructed data and of the re-digitised/re-reconstructed MC.

User Support

User Support conducted a tutorial and an analysis school in January 2011. One was the RooStats tutorial (21st January 2011); RooStats provides tools for high-level statistics questions in ROOT and is built on RooFit, which provides the basic building blocks. RooStats has been distributed in the ROOT release since version 5.22 and is being updated continuously.

The other event was the very first CMS Data Analysis School (CMSDAS), held at the LHC Physics Center (LPC) at Fermilab. The school was designed to help CMS physicists from across the collaboration to learn, or to learn more, about CMS analysis and thereby to contribute in significant ways to the discovery and elucidation of new physics. The innovative classes allowed the students, in some cases with no prior experience, to get hands-on experience with real data, reproducing physics measurements that CMS had published just days before, making them more precise, and searching for new processes that the collaboration has not yet explored. The students are expected to continue the work on the measurements they started and to see it through to publication after the school. The school gives new members the opportunity to meet many of the experts in person and is a very good way of bringing a new generation of people into the experiment. Of the 100 participants, some came from as far away as Brazil, Korea and Europe. The school was a huge success and attracted the interest of CMS management; there is now a plan to hold the school annually at CMS institutions on different continents. The same school was held at the LPC a year ago under the name Extended-JTerm, but the name has been changed to emphasise its primary focus: the analysis of real data and the opportunity to search for new physics. The material presented at the school is part of the CMS WorkBook.

The next periodic PAT tutorial will be held at CERN (4th-8th April 2011), following the current CMS Week. There is a plan to include other very useful physics analysis tools, such as tag-and-probe, Lumitools, Edmtools and FWLite, as part of the tutorials and to hold these on a regular basis, which is not the case now.

We wish to thank all users involved in tutorials, demonstrating a true collaborative spirit. An up-to-date list of tutorials held by the User Support can be found at https://twiki.cern.ch/twiki/bin/view/CMS/Tutorials.

On the documentation side, the User Support team is working on updating the data format documentation, and all RECO and AOD format tables will be updated for CMSSW_4_2_0. To check the completeness of the tables, the event content is read from the data files with the framework tools and compared to the documentation pages.

Integration of Distributed Facilities and Services

The Distributed Computing Integration group has concentrated in the last few months on the integration, deployment, validation and testing of the new CMS workload management system, WMAgent. It is planned that WMAgent will replace the current production system (ProdAgent) within the next few months, and it will later be used as the basis for the next generation of the user analysis job management system (CRAB). This work has been done in close collaboration with the development and operations teams. The new system uses a common framework that is easier to maintain and is expected to be more reliable and to reduce the operational load. WMAgent integration test instances have been installed at CERN and FNAL, and the system has been thoroughly exercised by means of large-scale tests and long-running continuous workflows at the Tier 1 sites. The testing helped identify various aspects of the system that can be improved in terms of reliability, scalability and ease of use. Changes are being implemented, and WMAgent is expected to be used for the reprocessing of the 2010 data at the beginning of April. The Distributed Computing Integration group will increase its personnel with a new Cat-A person from the beginning of April.

Analysis Operations

The volume of analysis activity in CMS has now stably reached the Computing TDR expectations, with about 400 different users every week and more than 100,000 grid jobs submitted daily, and has surpassed the Data Operations volume in number of jobs (i.e. operational units of service). Analysis activity briefly decreased over the December holidays, before returning stably to the same level as in Fall 2010.



Figure 5: Total number of jobs per week in CMS Computing infrastructure from 1st January 2010 to 1st March 2011. Analysis has peaked at almost 2M jobs/week.

Figure 6: Number of CMS analysis jobs per week from 1st January 2010 to 1st March 2011. The activity level in 2011 is only modestly reduced with respect to the Fall 2010 peak.

Analysis Operations’ focus in the last few months has been on keeping the current system running and dealing with users’ problems. Some people have left the project and, although new effort is arriving in Spring 2011, the scope has so far had to be cut to the bare minimum.

Problem solving for users is still a substantial drain on effort, as the volume of mail on the user support forum remains constant in the range of 400-600 messages per month. To help in this respect we have deployed a new log-collecting service and provided several fixes for the CRAB client and server to improve error diagnostics and reporting and to cure some common causes of failure.


Figure 7: Mail volume (#messages) handled by Analysis Operations on the CrabFeedback forum in 2010 (left) and 2011 (right)

Analysis Operations now manages about 2.5 PB of disk space across all Tier 2s, and a massive clean-up campaign is about to start to free space for 2011 data. The effort in this area is stable and the procedures are under control.

OFFLINE

Introduction

Since the last CMS Bulletin report, in December 2010, the LHC has mostly been in winter shutdown and thus not delivering luminosity to the CMS experiment. Nevertheless, activities on the Offline side have been frenetic, and the system taking data since 14th March is significantly different from the one that ended the 2010 run with heavy-ion (HI) collisions.

Activities after the machine shutdown followed two distinct paths: on the one hand we wanted to close the book on the 2010 data, with a complete data and Monte Carlo reprocessing using the latest available calibrations and algorithms; on the other, we wanted to improve our system in view of the 2011 LHC run.

The CMSSW_3_9_X cycle had already been deployed successfully both Online and Offline for the HI run. After a tight agenda of calibrations and new algorithm development, it was also used for the complete 2010 end-of-year reprocessing, which was launched before Christmas and essentially completed by the end of the vacation period. At the end of February, a release from the same cycle was used to reprocess the HI data to form the Tracker zero-suppressed dataset, which will be the basis of future analyses. When CMSSW_3_9 entered production in early November, the next cycle, 3_10, started. Its goals were somewhat limited with respect to previous cycles, since the release was to be used to start the generation and simulation of the 2011 LHC collisions at 8 TeV. The new version of Geant4 (v9.4) was used to benefit from the most recent developments, and a huge production of simulated events was launched during the Christmas break.

The Offline project had two major goals to accomplish before the restart of data-taking in 2011. In order to squeeze the last drop of CPU performance out of our code, and in particular to help trigger selection, the switch to 64-bit compilation was requested. In addition, the version of ROOT underlying our code needed to be upgraded, to benefit from improvements in IO performance and from new features for managing schema changes in data formats. The two tasks required hectic activity up until the beginning of March, with frequent special releases that were used by the physics groups to compare and validate results. The transition to 64-bit compilation was realised and validated in the 3_11 cycle by the middle of February, and was deployed on the Tier 0 before the start of the 2011 Cosmics run; the Online deployment happened at roughly the same time. Performance gains vary with the use case; for reconstruction they were measured to be of the order of 20%. Transitioning to the new version of ROOT required more time, since we wanted to be sure that the old custodial raw data remained readable without problems. The Offline project moved to the 4_1 release cycle (new ROOT, 64-bit) during the first week of March, followed the next week by the Online deployment.

Following the decision to run the LHC at 3.5 TeV per beam in 2011, the Monte Carlo production that had been started in December using the 3_10 release was abandoned and restarted at the correct, lower energy. This new production uses updated parameters for the beam spot and for the number and structure of pile-up events. The production started in March and is currently ongoing. At the same time, a re-reconstruction of the 2010 Monte Carlo with 2011 pile-up parameters was requested, in order to be ready for high-luminosity conditions; it is also currently ongoing.

The current development cycle is CMSSW_4_2, which is not planned for deployment on the Tier 0 before the end of April. It contains many changes in event reconstruction and, again, new calibrations; most notably, it includes a “twist-free” alignment of the CMS Tracker. In view of the 2011 LHC run, all the certification activities have restarted at full speed, and the Physics Validation team is gearing up to provide validated data within one week of the data being taken.

All the components delivered by the CMS Offline project have undergone important developments in the last six months:
– The plan to consolidate event display activities into a single project delivering a single application, called FireWorks, is now complete. FireWorks now contains all the features that had been identified in the review that took place at the start of 2010, features such as Geant4 geometry navigation, iSpy-like modes, integration with the full CMSSW framework and reconstruction on demand while visualising events.
– Full simulation of the detector has improved in many aspects, with new parameterised calorimetric libraries and extended coverage up to high η to include the CASTOR detector in standard Monte Carlo samples.
– Reconstruction delivered new features (e.g. improved Particle Flow code), while following day-by-day operations with a negligible number of jobs failing prompt reconstruction in 2010. A lot of work went into streamlining our data formats and Data Tiers: 2011 will be a resource-constrained year, and disk space at Tier 1 and Tier 2 is a concern; a CMS-wide review of our data formats helped to bring down data sizes even in the presence of pile-up, with gains that in some cases reach 20%.
– Fast Simulation is continuing its efforts to match the data coming from the detector, and the fast simulation of the CMS detector is being used effectively in physics analyses to be presented at the winter conferences.
– Alignment/Calibration is continuously updating the CMS calibrations; this happens not only as new data arrive, but also as completely new algorithms are deployed. Examples are the production of a “twist-free” Tracker alignment, and the possibility to include the sagitta (bowing) of the Tracker modules. Moreover, the Prompt Calibration Loop at Tier 0 is now fully operational, and more and more calibration workflows are moving to it.
– The Generators project has adopted and integrated the Rivet and Professor tools in order to facilitate generator tuning and validation. Work has been done on the development of a sample generation request interface (PREP) that will ease the process of requesting hundreds of Monte Carlo datasets and will allow easy recovery of their production states.
– The new job submission tool, WMAgent, is now in the final stage of validation and will become the single CMS component used to submit Grid jobs. It is being tested at the Tier 1s for reprocessing workflows; it will soon also be used for Monte Carlo production at the Tier 2s and, later in the year, for analysis tasks, serving as the basis for the new CRAB3 tool.
– The DQM project has been focussing on the automation of the certification workflows, with automatic answers from the subsystems soon after Tier 0 reconstruction; an improved Run Registry (v3) is also in preparation and should be ready in the next few months.

An Offline Workshop took place at CERN on 2nd and 3rd February, focussed on operations for this year’s run. Given the scarce manpower situation, efforts will be made to automate as much as possible all the validation, certification and calibration workflows, freeing manpower to address real development issues.

Another event aimed at the discussion of specific issues has taken place since the last Bulletin: from 24th to 26th January, a special workshop was held in Bari to discuss “Storage and Data Access Evolution” for the runs from 2012 onwards. CMS, like the other LHC experiments, is going to deploy a new Computing Model, with new patterns of access to data; this clearly has consequences for the software components, starting with new Framework features that will need to be developed, tested and deployed. Five working groups have been formed to follow the tests in the coming months and will report to the Offline and Computing projects.

In the following we give more details of progress made in the various Offline sub-projects.

Generators

The activities of the Generator Tools group during the initial part of the year have been devoted to two main targets: preparing the mass production for 2011 and completing a number of required developments.

During the Christmas period, effort was put into producing the input samples needed for a mass production at 8 TeV, which at that time seemed the most likely energy for the 2011 run. The change in LHC plans has forced us to move to a different programme, i.e. to reprocess the existing samples at 7 TeV while planning a new production that will exploit improvements in the simulation and further extend the statistics. In this context, effort has been put into better coordinating the different requests for the main Standard Model samples in order to avoid duplication.

In parallel, efforts have been made to finalise the deployment of the Production and REProcessing (PREP) management tool, which is now in an advanced commissioning phase, and to make progress in the integration of the required new software components. The most relevant among them are the tools used for generator tuning (Rivet/Professor) and the state-of-the-art version of the LHAPDF library of parton distribution functions.

Full Simulation

There has been a large amount of activity in the Full Simulation group over the past three months. At the end of 2010, several improvements were added to the simulation code base. A major addition was the new version of Geant4, with an improved physics description of anti-protons and hyperons, and better shower models for low- and medium-energy showers. Substantial work was also completed for the forward calorimeter systems. A transition from a shower-model description of HF to one using a combination of parametrised showers (GFlash) and Geant4 allowed the description of energy deposits in the HF photomultiplier windows and in the fibre bundles at the back of the calorimeter. This should allow the modelling of the large energy deposits seen when particles interact in these regions of the detector. The change in modelling strategy also allows the full integration of the CASTOR detector, which is now included in the default simulation configuration. The shower-library simulation of CASTOR was validated against the full Geant simulation last year, and we look forward to a detailed comparison with collision data. In addition, the simulation has been updated to include the full components of the TOTEM T1 detector, which was installed during the winter shutdown.

A major focus of the Full Simulation moving forward will be the study and validation of the pile-up simulation. With the LHC expected to deliver instantaneous luminosities corresponding to anywhere between 10 and 16 interactions per crossing, CMS will enter a new regime in terms of detector occupancy. (This can be compared to an average of slightly fewer than three interactions per crossing in 2010, a qualitatively lower value in many respects.) A considerable effort aimed at understanding pile-up issues, ranging from low-level detector-specific studies to the impact on physics analyses, was launched during the February Physics Week, with the Simulation and PVT groups providing a focus for the work. Studies have included the effects of out-of-time pile-up in the simulation, as well as work on isolation variables and pile-up subtraction in jets. Much work remains to be done; we look forward to having a substantial dataset of collisions for a true evaluation of the impact of pile-up on CMS.

Reconstruction

At the end of 2010, the 3_8 and 3_9 releases were used for the reprocessing of the proton-proton data collected in 2010, with a view to presenting results at the winter conferences. The 3_8 release had been deployed for prompt reconstruction in September 2010, and the reprocessing with 3_8 was done to provide a consistent dataset for physics analysis. The 3_9 reprocessing included state-of-the-art calibrations and improvements in almost every aspect of the reconstruction software. Improvements were made to the reconstruction of muons with high transverse momentum, to track-corrected missing transverse energy, and to electron identification. Special attention was given to low-level reconstruction in the electromagnetic calorimeter, with improved energy calibration accounting for the transparency loss of the crystals, partial recovery of dead channels, and better noise identification. Monte Carlo samples were reprocessed consistently, and the results presented at the winter conferences are derived from the 3_9 datasets.

The 3_9 releases were also used for data-taking during the heavy-ion collision operation of the LHC. The fact that operation and processing issues were promptly solved is an indicator of the excellence of the Offline and operations teams. However, this experience clearly revealed a lack of redundancy in our checks, which we look forward to consolidating, together with the heavy-ion physics community, before the next lead-lead LHC campaign.

Over the 2010/2011 winter break, three major release cycles were produced. The 3_10 cycle included minor software developments and was aimed at the production of Monte Carlo samples at a centre-of-mass energy of 8 TeV; this production, and the release, were later deprecated following the change in LHC planning.

The 3_11 release cycle includes part of the developments foreseen for the resumption of proton-proton operation. It has been deployed for the warm-up of the CMS detector and the cosmic data-taking campaign.

The 3_11 release cycle was mirrored into the 4_1 series in order to adapt to changes in ROOT, which provides the core of the file handling. Careful validation was performed, and the 4_1 series has been put into production for the 2011 proton-proton operation. The 4_2 release cycle primarily includes developments in the reconstruction of electrons and photons with a particle-flow approach. Changes in the structure of the data formats have been made in order to reduce the RECO and AOD event sizes, given the increased level of pile-up expected in 2011: a reduction of approximately 15% was achieved through the elimination of redundant information and improvements to the implementation of targeted data-format classes. Initial development to cope with additional pile-up events from the LHC started during the 4_2 cycle and will continue in future cycles, making use of the experience to be gained during 2011 data-taking.

As a continuous effort throughout the release cycles, emphasis is placed on release-validation automation, release feature documentation, and simplification of operational maintenance, all of which require attention and help from collaborators outside the Offline group. We look forward to strengthening the RECO team with new members in order to carry out smoothly all the tasks related to reconstruction.

Alignment and Calibration

At the end of the 2010 data-taking period a large number of conditions updates were prepared to best describe the status of the CMS detector through the whole period of running. These updates have been deployed in the reprocessing of the complete dataset in view of the winter conferences. The refined knowledge of the detector conditions that resulted from this effort is also beneficial for the online and prompt reconstruction of the data that CMS is going to collect in 2011.

A major step for the Alignment and Calibration group has been the full commissioning of the Prompt Calibration Loop, by means of which the values of time-dependent conditions are updated in time for the first full reconstruction of the data. Prompt workflows for the beam-line calibration at luminosity-section granularity, and for the determination of channel-status conditions for ECAL and the strip tracker, were already prepared at the end of last year and are now in production. The ECAL laser corrections for the crystal transparency loss due to irradiation are now ready to be included in the Prompt Calibration Loop as well: they will be tested during the first weeks of data-taking on a dedicated stream and, after careful validation, will be deployed for use in the prompt reconstruction. The Tier 0 prompt processing is currently run with a delay of 48 hours, to allow time for the results of the prompt calibration workflows to be uploaded to the conditions database. This deployment represents an important milestone for alignment and calibration operations in CMS.

Cosmics data collected during the commissioning phase after the Christmas shutdown represent a key asset for the alignment of the silicon tracker. The alignment team used this dataset to assess promptly the small movements of the pixel detector and to prepare the alignment geometry for this year’s data-taking.

Finally, the conditions data used in the simulation have been updated in view of the massive Monte Carlo production, which will be used for the analyses being prepared for the summer conferences.

Database

In the first quarter of 2011, the Database project worked on further consolidating the various services it provides. Among the major milestones were the upgrade of all Oracle DBs to version 10.2.0.5 in January and the addition of more disks to the online cluster in February, making good use of the time without beams. Both actions are intended to ensure the smooth running of the DB services during the upcoming 2011 data-taking and to cope with the expected growth of the databases. The IT DBA team has provided a first version of a tool to monitor DB-related information and is sending e-mail notifications for expiring passwords well ahead of the expiration date. Work is ongoing to extend this service to other areas, such as account locking and unusual growth of the accounts. The internal review of DB usage in the CMS Computing projects continued during the quarter: most projects have presented details of their applications and their view of how to handle the expected large data volumes during data-taking. The quality of the presentations was high and they were followed by fruitful discussions.

Fast Simulation

Since the last Bulletin, the Fast Simulation group has continued its effort to provide a more realistic simulation, while maintaining the high standards of speed and performance implicit in its mandate. All this must be seen in the context of preparing for the challenges of the 2011 analyses that lie ahead of us. The group introduced complete flexibility in the distribution of the number of pile-up events to be overlapped (in time) with the main generated event: the simple Poissonian model assumed so far (which remains available as an option) can be inadequate to describe correctly datasets that span very different luminosity conditions. As a first application of this new feature, we provide the same kind of distribution as agreed for the upcoming Spring ’11 Full Simulation production (flat up to 10 events, then exponentially decreasing); this will also ease the comparison of results obtained with the Fast and Full simulations.
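As an illustration of the flexibility described above, a multiplicity distribution that is flat up to 10 in-time pile-up events and exponentially falling beyond can be sampled as in the sketch below; the decay constant, the cut-off and the normalisation are invented for the example, and the actual Fast Simulation configuration defines its own probability values.

import math, random

# Toy sampler for a pile-up multiplicity distribution: flat up to FLAT_UP_TO events,
# exponentially decreasing beyond (illustrative only).
FLAT_UP_TO, TAIL_SLOPE, N_MAX = 10, 0.5, 25

weights = [1.0 if n <= FLAT_UP_TO else math.exp(-TAIL_SLOPE * (n - FLAT_UP_TO))
           for n in range(N_MAX + 1)]
total = sum(weights)
probs = [w / total for w in weights]        # normalised probabilities for n = 0 .. N_MAX

def draw_n_pileup():
    """Draw a pile-up multiplicity according to the tabulated probabilities."""
    r, acc = random.random(), 0.0
    for n, p in enumerate(probs):
        acc += p
        if r < acc:
            return n
    return N_MAX

print([draw_n_pileup() for _ in range(10)])  # ten example multiplicities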

In coordination with the Higgs PAG and the EGamma POG, we are starting a test production of H→γγ samples and are testing the possibility of filtering multi-jet QCD events at RECO level, thus exploiting the speed of the Fast Simulation to perform the complete simulation-reconstruction chain before deciding whether to keep or reject an event according to loose cuts on photon identification. Filtering at the RECO level, which can only be done efficiently with a Fast Simulation, is expected to provide a significant increase in realism with respect to samples that are “EM-enriched” only at the generator level. We are witnessing a steady increase in the use of the Fast Simulation by the physics groups. This is natural considering that more and more new-physics searches are becoming competitive with previous experiments, and parameter scans (e.g. in SUSY searches) demand large numbers of events to be produced quickly. This implies a more urgent demand for person-power for development and maintenance tasks; readers are invited to look at the Twiki https://twiki.cern.ch/twiki/bin/viewauth/CMS/FastSimNeeds for the list of open tasks, which also includes a preliminary assessment of the number of service points awarded to each task. Another implication of the growing interest in using the Fast Simulation is an increased need for careful validation. As often remarked in the past, it is of paramount importance that the people in charge of release validation on behalf of the PAGs and POGs always provide their results for both the Full and Fast simulations.

Analysis Tools

The new structure for retrieving jet energy corrections from the database is now fully established. The L1Offset (L1FastJet) corrections, which have been derived by the JetMET POG and which will gain increasing importance with the expected pile-up scenarios, provided a good and successful test case for the new structure; the new corrections were accessible for end-user analyses very shortly after they had been derived.
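In practice the corrections are applied as successive multiplicative steps to the raw jet momentum; in the L1FastJet scheme the pile-up contribution is first removed using the median event energy density ρ and the jet area. The stand-alone sketch below only illustrates this factorised chain: all numerical values and the dummy correction function are invented, and in CMSSW the correction factors are read from the conditions database as functions of pT, η, ρ and jet area.

# Simplified illustration of the factorised jet energy correction chain (illustrative only).

def l1_fastjet(pt_raw, rho, area):
    """Pile-up offset subtraction using the event energy density rho and the jet area."""
    return max(pt_raw - rho * area, 0.0)

def l2_l3(pt, eta):
    """Relative (eta) and absolute (pt) response correction -- here a dummy function."""
    return pt * (1.08 - 0.01 * abs(eta))

pt_raw, eta, rho, area = 45.0, 1.2, 6.5, 0.5   # GeV, -, GeV per unit area, jet area
pt_corr = l2_l3(l1_fastjet(pt_raw, rho, area), eta)
print(f"raw pt {pt_raw} GeV -> corrected pt {pt_corr:.1f} GeV")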

The re-organisation of particle flow as an integral part of the RECO objects themselves (especially particle-flow isolation for leptons) implied a transition of the particle-flow reconstruction packages from the PhysicsTools to the CommonTools area. It was accompanied by a change in the data structure of RECO taus. Final clean-up work is ongoing.

The CMSSW_3_8_7 Analysis Release was used for the analyses of 2010 data, and nearly all analyses of the Top PAG shown at the Moriond conferences made use of the PAT or other Analysis Tools in one way or another. The Analysis Tools closely accompanied a large majority of analyses and thus contributed to the great success of the 2010 data analysis campaign. A new Analysis Release, CMSSW_3_9_9, has been announced, and more releases of this kind are coming in the new release series for 2011. Users are highly encouraged to make use of them: they provide the best CMS reconstruction and analysis software available at the time, which can be applied out of the box with reasonable configurations, without any need to apply recipes or play tricks.

Now that the Analysis Tools and the PAT are well established, the focus will shift to documentation and statistics tools. Work on the latter has already started in the area of multivariate analysis techniques and will soon start on the important subjects of combining results, calculating significances and determining limits using RooStats and other tools. This work is being done in close collaboration with the responsible person in the CMS Statistics Committee. Documentation will also gain emphasis through an adapted structure of tutorials and of WorkBook and SWGuide contributions. These new structures are planned to be in place after the next PAT tutorial at CERN at the beginning of April and will be announced clearly by that time. This work is being done in close collaboration with the CMS User Support convenors.

Data Quality Management

With cosmics, and finally with stable-beam collision data since 13th March, we are happy to see that the DQM tools are being used widely to assist the commissioning of the CMS detector in preparation for the 2011 data-taking period. The group has been very busy maintaining daily operations, as well as improving the DQM tools to prepare for sustainable future operation.

Data Quality Monitoring (DQM) group activities span both the online and offline areas. We manage the 24/7 online and offline central DQM shift operation, as well as following up on the quality of the data all the way to providing JSON files for physics analysis through the process called “data certification”. The DQM group is also responsible for developing, maintaining and validating many tools and programs, such as the Run Registry, the DQM GUI and the DQM releases, in order to carry out DQM operations.

2011 started with intense offline DQM shifts to validate the 2010 Dec22 ReReco datasets in time for the winter conferences; double shifts were conducted simultaneously in each of the three locations. The remaining 2011 online and offline DQM shifts have been scheduled, together with the other central shifts, following the new 2011 central shift sign-up procedure that began in early February.

Validating a new DQM release that incorporates changes from many subsystem tags is a non-trivial, time-consuming task. The testing and tagging procedure for each subsystem has been improved to simplify and improve the release-validation process.

We have completed a detailed new design and plan for the Run Registry upgrade (RR3) and began the development work in January 2011. The Run Registry keeps track of data quality information and also plays a key role in producing the JSON files. RR3, with a much cleaner design, should provide a better and more robust service, and we look forward to its completion in several months’ time.

A run can last hours, while the duration of a Lumi Section (LS) is ~23 seconds. The LS is the smallest data segment for which we can provide data-quality information, and we already provide LS-based DCS information using automated processes. This is another area of intensive development and involves working closely with the subsystem DQM experts.

With the 2011 operation in full swing, we look forward to delivering high-quality Data Quality Monitoring, working closely with shifters and colleagues from many different areas.

Data and Workflow management

For the last few months, much of the DMWM effort has been focused on rolling out the WMAgent-based data processing system and commissioning it for Tier 1 operations. There has been continuous testing and refinement, with the Computing project's Data Operations and Integration teams running and re-running test workflows to ensure that the new system works and meets the goals set for reproducibility, reliability and a much higher level of automation than the previous ProdAgent system. As the WMAgent system has matured for data processing, it has also started to form the basis for CRAB3, the next generation of analysis workflow management. The CRAB team is now shifting effort to developing and testing the prototype, and wider testing will take place over the coming months.

The Data Aggregation Service (DAS) has been rolled out into production, is undergoing user tests, and will soon supersede the current Data Discovery interface. It allows users to query across many more CMS data services, rather than being confined to DBS, making it possible to refine data searches using information from several sources, such as DQM, the Run Registry and others, as they bring up data services.

Behind the scenes, the new DBS3 prototypes are up and running, with development tests running against them, migration tests and cross-checks of the old DBS2 data, and ever more performance tests. PhEDEx continues to move CMS data at levels of over 200 TB per week, with an improved schema and a redesigned web interface in the pipeline.

The HTTP Group, a joint effort between Computing and DMWM, has started a regular monthly release cycle focused on ensuring that we can deploy reliable, secure, central web services. There has been a lot of good work from Lassi Tuura and his team in refining and cleaning up the deployment of these critical services for CMS.

PHYSICS

Since the last CMS Week, all physics groups have been extremely active on analyses based on the full 2010 dataset, with most aiming for a preliminary measurement in time for the winter conferences. Nearly 50 analyses were approved in a “marathon” of approval meetings during the first two weeks of March, bringing the total number of approved analyses to 90. The diversity of topics is very broad, including precision QCD, Top and electroweak measurements, the first observation of single Top production at the LHC, the first limits on Higgs production at the LHC including the di-tau final state, and comprehensive searches for new physics in a wide range of topologies (so far all, unfortunately, with null results). Most of the results are based on the full 2010 pp data sample, which corresponds to 36 pb-1 at √s = 7 TeV. This report can only give a few of the highlights of a very rich physics programme, organised below by physics group.

Most of these analyses profit from the performance of the particle-flow reconstruction. The integration of the particle-flow reconstruction and of the derived physics objects within the POGs (and relevant DPGs) is underway. The goal is to converge, for each object, towards a combination of the best of the existing standard POG reconstruction algorithms and the current particle-flow reconstruction. This will provide the best objects for physics analyses, together with a fully consistent event description. The 2011 data reconstruction will profit from this integration.

(For related information on the Trigger, refer to the relevant section elsewhere in this Bulletin.)

Muons

The Muon POG has studied the performance of the muon HLT on 2010 data, including isolation and pile-up effects, and has prepared the trigger paths for the 5E32 menu; the contributions from the PAGs are greatly appreciated. In addition, comprehensive studies of the performance of muon reconstruction and identification in 2010 have been carried out. Work on the draft of the paper (MUO-10-004) summarising these results is nearing completion, with approval targeted for April. This work also led to several improvements to the reconstruction algorithms; examples were shown during Physics Week in February.

The tools and people put in place for the analysis of the 2010 data will be used to re-assess the performance with the first 2011 data. High-pT muons move to the top of the priority list. The resolution and momentum scale have been re-evaluated on recently produced tracker-pointing and superpointing skims of 2010 Cosmics data, and measurements of reconstruction and trigger efficiencies on cosmic muons are in progress. The first attempts to complement the sample of cosmic muons collected in dedicated runs with cosmic muons collected during collision runs using a dedicated RPC trigger (TT25) look promising.

Tau

The Tau POG has finished commissioning the tau identification and reconstruction algorithms using 2010 data. A tau reconstruction efficiency of 50% was measured for a jet fake rate of 1%. This allowed CMS, already in 2010, to perform Higgs and SUSY analyses in tau decay channels that compete with similar studies by the Tevatron experiments. Based on the commissioning results, the algorithms were further improved for the 2011 analyses.

A new trigger strategy, using particle-flow objects and cross triggers as well as tighter isolation requirements, should allow CMS to keep the 2011 trigger efficiency high for the most important physics analyses with taus while keeping the rate sustainable for the CMS trigger system.

JetMET

The JME POG has prepared several PASs and papers based on the 2010 data. These include jet substructure and algorithms (JME-10-013), providing the PAGs with the ability to tag jets from boosted heavy objects such as the Top quark and the W boson, and the jet resolution (JME-10-014), measured in situ and with systematic uncertainties assigned for use by the PAGs. In addition, two critical JINST papers on the performance of MET and jets are now in the CMS approval process. Other notable developments include:
– complete JEC with first pile-up corrections for jets for 2010 data and the start of 2011 run,
– a MET significance algorithm and algorithms to reduce the effect of ECAL holes on MET,
– jet corrections, ID, and pile-up rate reduction for triggers in a new JetMET trigger group,
– rapid feedback to PVT on DQM and validation issues coordinated tightly with DPG.

B-tagging

The BTV POG has completed the characterisation of the b-tagging performance with the 2010 data. We measured the performance of impact-parameter, secondary-vertex and jet-probability algorithms using di-jet events with an associated muon and ttbar events in the datasets collected by CMS. The b-tagging efficiency is measured to be 60% for a mistag rate of around 6%, and we determined the data/MC scale factors for both the efficiencies and the mistag rates. Many analyses and results in the Top, SUSY and BPH PAGs were based on the identification of b-jets, and the work of the BTV group is an integral part of the many successful measurements CMS was able to produce with the early data. The focus of the group has now shifted to studying the 2011 data and optimising the b-tagging algorithms to maintain the excellent performance in events with increased pile-up.

Forward

Jets at large rapidities (3 < |η| < 5) have been measured for the first time (FWD-10-003). In this region, the parton densities are probed at small momentum fractions. The measured inclusive cross-sections agree rather well with the theoretical predictions. The energy flow in the forward region is a very sensitive probe of the structure of the underlying event and of multi-parton interactions. The first measurements made it clear that models describing the underlying event in the central region do not necessarily describe the forward region well. Diffractive dissociation at the highest energies is an important contribution to pile-up. The description of diffractive dissociation is tested in detail with a measurement of the energy distribution in the forward calorimeters and with charged-particle spectra in the central detector. Non-trivial correlations have been observed.

BPH

The B-Physics group has performed several measurements of bottom (or beauty) quark production. Inclusive cross-section measurements are based on the identification of semi-leptonic b decays into muons (BPH-10-007) and on b-jet tagging with secondary vertices (BPH-10-009). In addition, differential production cross-sections of several B mesons (B+, B0 and Bs – BPH-10-004, 005, 013) have been measured in exclusive decay channels. The dynamics of B-hadron production was further investigated by measuring the BBbar angular correlations (BPH-10-010); a novel technique based on secondary-vertex identification was adopted, accessing for the first time at the LHC the region of collinear B-hadron emission. Measurements of quarkonium production are also an important part of the group’s physics programme: differential cross-sections of the Υ(1S, 2S, 3S) mesons have been released (BPH-10-003), while the measurement of the J/ψ spin alignment is expected soon. Searches for rare decays and exotic states have been initiated, leading to the observation of the exotic meson X(3872) in its decay to J/ψ and two pions.

QCD

Many QCD results have been approved recently. In the low pT sector, the charged particle transverse momentum spectra were measured for pp collisions at √s = 0.9 and 7 TeV (QCD-10-008). The consistency of the 0.9 and 7 TeV spectra is demonstrated with an empirical xT scaling that collapses the differential cross-sections from a wide range of collision energies onto a common curve. A measurement of the underlying activity in scattering processes was performed at √s = 7 TeV (QCD-10-010). The production of charged particles with pseudorapidity |η| < 2 and pT > 0.5 GeV/c is studied in the azimuthal region transverse to that of the leading set of charged particles forming a track-jet. A significant growth of the average multiplicity and scalar pT sum of those particles is observed with increasing pT of the leading track-jet, followed by saturation above a few GeV/c.
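For reference, the xT scaling invoked in this comparison is the standard empirical form (quoted here for convenience; the exponent n is fitted to the data):

\[
x_T = \frac{2 p_T}{\sqrt{s}}, \qquad
E\,\frac{d^3\sigma}{dp^3} = \frac{F(x_T)}{(\sqrt{s})^{\,n}} ,
\]

so that spectra measured at different collision energies, once multiplied by (√s)^n, collapse onto the single function F(xT).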

In the high-pT jet sector, a measurement of the inclusive jet cross-section (QCD-10-011) was performed using the full 2010 dataset, extending the kinematic regime of previously published results; it is found to be in agreement with the perturbative QCD predictions at NLO. It also serves as a jet commissioning measurement, demonstrating the superb jet reconstruction and jet understanding in CMS. Similarly, a measurement of the di-jet cross-section as a function of the di-jet invariant mass (QCD-10-025) is also found to be in agreement with the perturbative QCD predictions at NLO. The experimental systematic uncertainties on these cross-section measurements are roughly comparable to the theory uncertainties, and a future reduction of the jet-energy-scale uncertainty will constrain the PDF models. A measurement of the ratio of the inclusive 3-jet to 2-jet cross-sections as a function of the total jet transverse momentum, in the range 0.2 TeV < HT < 2.5 TeV, was also performed. Comparisons between the data and the predictions of different QCD-based Monte Carlo models for multi-jet production extend the validity of the models to LHC energies.

While several measurements with prompt photons are in progress, the measurement of the prompt isolated photon cross section performed last year was recently accepted by Physical Review Letters.

Top

The Top quark analyses based on the 2010 data cover measurements of the Top-pair production cross-section in both the di-lepton (TOP-10-005) and the lepton+jets decay channels – without b-tagging (TOP-10-002) and with b-tagging (TOP-10-003) – as well as the first determination ever of the Top mass at a collider other than the Tevatron (TOP-10-006). Different approaches and cross-checks on the cross-section measurements were made, including one approach that simultaneously fits many backgrounds and systematics across multiple bins of jet and b-tag multiplicity.

Top-pair production has also been used as a probe for physics beyond the Standard Model in two ways: by looking for resonances in the production mechanism (TOP-10-007) and by looking for anomalies in the Top charge asymmetry (TOP-10-010), both possible signals of new physics coupling to the Top quark sector. All of the results, for both total and differential distributions, show impressive agreement with the expectations of the Standard Model. With the statistics foreseen for 2011, the limits on the presence of new physics will soon be competitive with, if not superior to, the Tevatron results.

Impressively, single Top electroweak production has been confirmed by CMS with the 2010 data alone. The total single Top production cross-section in the t-channel has been measured for the first time (TOP-10-008), allowing a direct determination of |Vtb|. The quality of the detector, of the simulation and of the analyses will make it possible to study single Top production in great detail in 2011.

Electroweak

Many precise and novel measurements were made in the electroweak sector. Notable is the first measurement of the spin polarisation of the W boson at a proton collider: W bosons of both charges recoiling against hard jets are found to be predominantly left-handed, as expected from the dominant gluon-quark W+jet production process at the LHC (EWK-10-014).

The precision cross-section measurements benefit from the uncertainty on the integrated luminosity, now reduced to 4%. The cross-section measurements of the inclusive production of W and Z bosons, previously published using 3 pb-1 of data, have been updated (EWK-10-005). The experimental uncertainties, dominated by systematic uncertainties linked to lepton identification and selection, are now at the level of 1%, much smaller than the theory uncertainties. The ratio of W-to-Z production cross-sections, for which the luminosity uncertainty cancels, constitutes a stringent test of the theory predictions with the most recent parton densities.
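Schematically (in generic notation, not that of the note), the cancellation occurs because each measured cross-section carries the same integrated luminosity L:

\[
R = \frac{\sigma_W \, \mathcal{B}(W\to\ell\nu)}{\sigma_Z \, \mathcal{B}(Z\to\ell\ell)}
  = \frac{N_W / (A_W\,\epsilon_W\,L)}{N_Z / (A_Z\,\epsilon_Z\,L)}
  = \frac{N_W\,A_Z\,\epsilon_Z}{N_Z\,A_W\,\epsilon_W} ,
\]

where N are the background-subtracted event yields, A the acceptances and ε the efficiencies; L drops out of the ratio, as do the correlated parts of the lepton-related uncertainties.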

The inclusive production of W and Z bosons has also been studied in the tau-lepton channel. A significant W signal is obtained in the hadronic tau decay channels, over a multi-jet background that is controlled from the data (EWK-11-002). The study of the Z decay into tau pairs in four different final states allows a precise estimation of the tau-lepton identification efficiency in the hadronic decay modes (EWK-10-013).

At the LHC, the production of W+ bosons is on average 40% larger than that of W– bosons, an asymmetry that depends on the boson rapidity. The resulting lepton charge asymmetry has been measured as a function of the lepton pseudorapidity (EWK-10-006). The precision is such that different sets of parton densities can already be distinguished.
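The quantity measured is the standard lepton charge asymmetry (definition quoted here for convenience, in generic notation):

\[
A(\eta) =
\frac{\dfrac{d\sigma}{d\eta}(W^+\to\ell^+\nu) - \dfrac{d\sigma}{d\eta}(W^-\to\ell^-\bar\nu)}
     {\dfrac{d\sigma}{d\eta}(W^+\to\ell^+\nu) + \dfrac{d\sigma}{d\eta}(W^-\to\ell^-\bar\nu)} ,
\]

where η is the pseudorapidity of the charged lepton; since the u/d quark content of the proton drives the W+/W– imbalance, A(η) is directly sensitive to the parton densities.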

The differential production cross-sections of the Z boson as a function of boson rapidity and transverse momentum have been measured in the electron and muon channels (EWK-10-010). The electron identification capabilities of the HF are used to extend the boson rapidity range up to |y|=3, beyond the Z rapidity plateau. The agreement with predictions is good for all boson rapidities and at large transverse momenta up to 600 GeV. In the low transverse momentum region, below 30 GeV, the data confirm the quality of the most recent tunes of the Pythia soft QCD parameters at the LHC.

The production cross-section of Drell-Yan muon pairs has been measured in bins of the di-muon invariant mass (EWK-10-007). The agreement with predictions is good over the full mass range. The uncorrected forward-backward asymmetry of Drell-Yan pairs was also obtained in bins of mass, and good agreement with the simulation is found in both the electron and muon channels (EWK-10-011). The full kinematics of Drell-Yan muon pairs has been exploited in a promising angular analysis that demonstrates the feasibility of a precision measurement of the weak mixing angle at the LHC.

The associated production of jets with W and Z bosons has been studied for up to four inclusive jets, with jets selected above a transverse-energy threshold of 30 GeV (EWK-10-012). The data clearly favour theory predictions based on the full calculation of matrix elements at NLO. A first measurement of the fraction of b-jets in the Z+jet sample has been performed with a 25% precision, based on a sample of several tens of Z events with one b-tagged jet (EWK-10-015). Several events with two b-tagged jets have also been identified.

Finally, the first di-boson signals at the LHC have been studied. The rate of WW production, which constitutes an irreducible background for the Higgs boson search in that mode, was measured from a sample of approximately thirteen di-lepton plus MET events and found to be consistent with expectations (EWK-10-009). The production cross-sections of W+γ and Z+γ, for photon transverse momenta larger than 10 GeV, were also measured, with precisions of the order of 10% (EWK-10-008). A first indication of the radiation amplitude zero in the Wγ mode comes from the charge-signed rapidity-difference distribution. The WW, Wγ and Zγ measurements were used to set the first limits on anomalous triple gauge couplings at the LHC.


Figure 8: Compilation of CMS electroweak and Top quark measurements compared to theory.

SUSY

A comprehensive strategy for SUSY searches has been put in place and exercised on the 2010 dataset. For searches in the all-hadronic final state, three complementary techniques using kinematics and a detailed understanding of the detector and backgrounds have been pursued. These include the “alpha-T” approach (SUS-10-003), recently accepted for publication by Physics Letters B and also recently extended to include b-tagging (SUS-10-011), a new approach using the dimensionless razor (R) variable (SUS-10-009), and the traditional jets-and-MET approach (SUS-10-005). The reach of these searches extends far beyond the Tevatron, and for the first time the limits are expressed in terms of simplified models to help model builders use the results.
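For convenience, the αT variable used in the all-hadronic search is, for a di-jet system (see SUS-10-003 for the full definition and its generalisation to higher jet multiplicities):

\[
\alpha_T = \frac{E_T^{j_2}}{M_T}, \qquad
M_T = \sqrt{\Bigl(\textstyle\sum_{i=1,2} E_T^{j_i}\Bigr)^2
          - \Bigl(\textstyle\sum_{i=1,2} p_x^{j_i}\Bigr)^2
          - \Bigl(\textstyle\sum_{i=1,2} p_y^{j_i}\Bigr)^2 } ,
\]

where j2 is the lower-ET jet. Well-measured, back-to-back QCD di-jet events give αT ≤ 0.5, while values above 0.5 require genuine missing transverse energy, which makes the variable robust against jet mismeasurement.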

The leptonic searches for SUSY include topologies with one lepton (SUS-10-006), same-sign di-leptons (SUS-10-004), opposite-sign di-leptons (SUS-10-007), and multi-leptons (SUS-10-008). Hadronically decaying tau leptons are included in many of these channels. SUSY searches with photons include di-photon (SUS-10-002) and photon+lepton (SUS-11-002) searches, with sensitivity surpassing previous experiments.

A wide range of searches has been performed, and limits are set on SUSY in a variety of models. Emphasis has been on data-driven methods and multiple techniques for robust analyses. CMS is ready to make a discovery in the 2011/12 data!

Exotica

The Exotica group has been extremely productive in mining the 2010 data for a variety of new-physics models, delivering about 20 analyses on the complete 2010 dataset. Of these, the searches for stopped gluinos (EXO-10-003) and for first- and second-generation leptoquarks in the leptons+jets final state (EXO-10-005,7) have already been published, while the searches for W′ (EXO-10-014,15), Z′ (EXO-10-012,13), b′ (EXO-10-018) and microscopic black holes (EXO-10-017) have been submitted for publication.

Additionally, Exotica has produced preliminary results from a number of other analyses. These include the searches for first- and second-generation leptoquarks in the lepton+MET+jets final state (EXO-10-006,8), excited leptons (EXO-10-016), RS gravitons (EXO-10-019), LED in γγ (EXO-10-026), in μμ (EXO-10-020) and in monojets (EXO-11-003), low-mass ttbar resonances (EXO-10-023), a boosted Z as a search for excited quarks (EXO-10-025), lepton-jets as a search for low-mass dark matter (EXO-11-013), and an RPV search in multi-jet resonances (EXO-11-001). Many of these results will be submitted for publication shortly as well.

With these searches, Exotica has set the most stringent limits over much of BSM model-space. The group has now begun preparing for a discovery in whichever of these analyses Nature will (hopefully) provide a signal in the 2011 data.

Higgs

The search for the Higgs boson was launched in earnest with the 2010 data, with the completion of four analyses. These include the Higgs search in the WW decay channel (HIG-10-003), the first paper on the Higgs boson published at the LHC. The search for the MSSM H→ττ (HIG-10-002) exploits the excellent tau identification capability of CMS, significantly extending the reach beyond that of the Tevatron (see below). The searches for a doubly-charged Higgs (HIG-11-001) and for a singly-charged Higgs in t→Hb (HIG-11-002) already have sensitivity similar to previous measurements performed at the Tevatron. Preparations are now underway to conduct an exhaustive search across many Higgs decay channels using the >1 fb-1 of delivered luminosity expected in 2011–12.


Figure 9: Excluded region in the tanβ vs mA plane for the MSSM H→ττ search.

COMMUNICATIONS GROUP

The CMS Communications Group, established at the start of 2010, has been busy in all three areas of its responsibility: (1) Communications Infrastructure, (2) Information Systems, and (3) Outreach and Education.

Communications Infrastructure

There are now 55 CMS Centres worldwide, well used by physicists working on remote CMS shifts, Computing operations, data-quality monitoring, data analysis and outreach. The CMS Centre@CERN in Meyrin is the centre of the CMS offline and computing operations, hosting dedicated analysis efforts such as during the CMS Heavy Ion lead-lead running. With a majority of CMS sub-detectors now operating in a “shifterless” mode, many monitoring operations are routinely performed from there, rather than from the main Control Room at P5.

The CMS Communications Group, CERN IT and the EVO team are providing excellent videoconferencing support for the rapidly increasing number of CMS meetings. In parallel, CERN IT and the LHC experiments are evaluating Vidyo, a possible commercial alternative, with a view to reaching a conclusion on its viability by the end of spring 2011.

Two new CMS meeting rooms are being set up in buildings 28 and 42, next to building 40, and the P5 meeting room is being improved. All 15 CMS meeting rooms at CERN are scheduled for significant refurbishment during 2011–2012, as ageing equipment is starting to fail.

The Communications Group also provides the CMS-TV (web) channels that broadcast the LHC status, the new CMS Page 1 from the WBM group, DAQ Page 1, live event displays from P5 and outreach material on public screens at CERN and in CMS institutes worldwide. Since the channels are fully web-based, anyone in the world can see what is happening at CMS at any time.


Figure 10: CMS Page 1

Information Systems

The rapid transition from the CMS construction phase to the physics analysis phase has left the CMS information systems in need of a significant revamp, including the many web sites and document management systems. The CMS information landscape is intrinsically complex, as shown below.


Figure 11: The CMS information landscape

Although many of the individual systems are well-structured, the 500 “official” CMS web sites that have grown organically over the years have no overall coherence. The navigation within and between systems is very poor and the search systems are inadequate.

In line with the central CERN web services, the Communications Group is migrating the core CMS public and internal web sites to Drupal, an open-source Content Management System. This will establish a coherent and comprehensible navigation system by managing the site structure, links and content in a consistent manner. Search functionality is being set up that will enable users to search across all the systems shown in the figure above at once (the web, CDS, CADI, CINCO, Indico, and so on). The new CMS site should be in production in June 2011.

CMS already has 171k documents (files) in Indico, 140k in the TWiki, 55k in DocDB and 8k in CDS, with many more in CADI, CINCO, EDMS and iCMS. These are all managed systems, which guarantee long-term preservation and provide “browse” and “search” capabilities. However, more than 100,000 other important documents from the R&D, construction and commissioning phases of CMS currently reside on AFS, DFS, local file systems and the aforementioned ~500 CMS web sites. To ensure these files are not lost permanently and can be found when needed, the Communications Group is leading a major effort to locate all such documents and import them into a managed system, usually DocDB.

All CMS users are encouraged to upload any documents or files of general interest using the simple web interface: https://cms-docdb.cern.ch/cgi-bin/DocDB/DocumentDatabase. Tools have been established to automate the bulk harvesting and uploading of existing documents for large collections – just contact us if you wish to use them. Using this bulk-upload tool, 40,000 files have been selected and uploaded over the last few weeks, from a total sample of about 66k files processed.

Outreach and Education

Press interest in the LHC remains high. Highlights picked up by the media recently include the First Heavy Ion Results, Microscopic Black Holes, First Results on the Higgs Search, and the First SUSY Results. In each case the Communications Group prepared material for the press, including a written “CMS Statement” (see figure below) with simplified explanations, supplementary images and further links.


Figure 12: Example of a CMS Statement.

Following an initiative from IPPOG, the International Particle Physics Outreach Group, we will soon start to prepare, together with the Physics Groups, so-called “Discovery Packages”. These are sets of resources suitable for all audiences (general public, media, schools, other physicists) related to possible CMS discoveries or major measurements. Each package will address a number of questions (e.g. What is the Higgs? How was it discovered? What are the implications?) in a variety of ways: with textual explanations, images, movie clips, etc. This preparatory work will be done in close collaboration with the other LHC experiments, CERN and others involved in outreach, including IPPOG of course.

The CMS Times continues to be published every two weeks and now has a regular following of about 1500 people, both inside and outside the collaboration. Stories focus on the operations of the LHC and CMS, the various physics results, and events of human interest such as conferences, visits and educational activities. We are making more use of Web 2.0 tools (Twitter, Facebook, etc.) to reach the public. The Google “Street View” team filmed underground at P5 during the winter break and will make the footage publicly available as part of a CERN site-wide initiative of UNESCO; naturally, privacy and security concerns are being addressed.

There are many other ongoing outreach activities. One of the most visible in the pipeline is the full-size, high-resolution photo display of the CMS detector to be installed in building 40, as shown below.


Figure 13: Proposed photo display at building 40.

CMS is active in a number of educational programmes, including the IPPOG Masterclasses (in Europe, the USA and elsewhere) as well as QuarkNet and I2U2 in the USA. Following an agreement by the Collaboration Board in 2010, CMS is providing small samples of real event data (J/ψ, Υ and Z decays) to students. Members of the CMS Communications Group are also involved in the production of realistic web-based tools for interactive event display and simple statistical analyses, in order to profit from this availability of data. These have already proven to be extremely popular with students and teachers alike, as more countries and curricula adopt “enquiry-based learning” techniques in their classrooms.
