CMS Bulletin

CMS MANAGEMENT MEETINGS

The Agendas and Minutes of the Management Board meetings are accessible to CMS members at:
http://indico.cern.ch/categoryDisplay.py?categId=223

The Agendas and Minutes of the Collaboration Board meetings are accessible to CMS members at:
http://indico.cern.ch/categoryDisplay.py?categId=174

TECHNICAL COORDINATION

Operational Experience

Since the closure of the detector in February, the technical operation of CMS has been quite smooth and reliable. Some minor interventions in UXC were required to cure failures of power supplies, fans, readout boards and rack cooling connections, but all these failures were repaired in scheduled technical stops or parasitically during access dedicated to fixing LHC technical problems. The only occasion when CMS had to request an access between fills was to search for the source of an alarm from the leak-detection cables mounted in the DT racks. After a few minutes of diagnostic search, a leaking air-purge was found. Replacement was complete within 2 hours. This incident demonstrated once more the value of these leak detection cables; the system will be further extended (during the end of year technical stop) to cover more racks in UXC and the floor beneath the detector.

The magnet has also been operating reliably and reacted correctly to the 14s power cut on 29 May (see below). In order to minimize the mechanical ageing of the coil due to the stresses of field-cycling, the magnet will be kept at full field as much as possible. It will be switched off only for essential maintenance of its cryogenic, power or cooling systems or for work in UXC which is incompatible with the stray field. All such work will be grouped together as much as possible. There is now considerable accumulated experience of working in the UXC with the magnetic field on. We have recognised the need to expand our kit of non-magnetic tools, but otherwise most of the maintenance and repair, such as topping up the fluorocarbon cooling plants, can be done with the magnet at 3.8T.

Accesses to UXC are now all being organized through a Web interface to the Equipment Management Data-base (EMD). Each intervention has to be declared, approved and monitored as a work package. The system also monitors the material flow in and out of UXC through the radioprotection buffer zone. Individual sub-projects are responsible for requesting work packages, which are then reviewed and authorized by TC. The system is reliable and easy to use and its use is now mandatory. We would like to remind the collaboration that though granting an access is a well understood procedure, with a minimized administrative overhead, it still requires quite substantial supervision, both from CMS and LHC. We therefore ask subsystem teams to stay aware of the access status and to think ahead. This will help avoid last-minute requests, in particular those arriving after the cavern has already been closed again after a scheduled access, which, except in emergency cases, are likely to be refused!

In preparation for operating CMS with a central shift crew only, the alarm- and action-matrices of all subdetectors and the central safety and control systems have been reviewed during the year. Recently, after several iterations and an extended test phase under parallel central and local supervision, a final review was held to authorise detector components one by one for unattended operation under supervision by the central CMS technical shift. For EB, EE, ES, HB, HE, HF, DT, CSC and RPC the permission for central operation was given straight away. Pixel and Tracker soon followed, after implementing a lock-off mechanism, requested by TC, which prevents central DCS from switching them back on after they have been switched off (by an alarm or an expert), unless expert permission is given. ZDC and CASTOR are the only components which have not yet provided sufficient information to be operated centrally without an expert being present in the control room.

Despite a sound and maturing monitoring system, performing a twice-daily safety tour, including the accessible underground installations, is still highly desirable. The tour has been developed over the last couple of months and has proven to be justified, as several minor problems have been uncovered. However, it is often difficult for the central shift crew to find the time for the tour. It has therefore been decided to train our technical staff at Point 5 to perform these tours. The safety tour will be one duty of a CMS technical piquet service that will be available to help with technical incidents and searches of UXC in case the “patrol state” is lost. Besides helping our shift crew this measure also helps to maintain a close connection between the experiment and our technical staff, outside technical stops and shutdowns.

On Friday 28 May at 23:30, the CMS safety system and incident recovery procedures underwent a realistic test, when a circuit-breaker flash-over caused an LHC-wide power cut of around 14 s. Cooling and the cryogenic system consequently went down as well. It is encouraging to note that all safety systems worked as expected and the experiment was completely safe throughout the incident. The magnet went into slow discharge using Helium from the 6000 l local dewar, as foreseen in such a case. The inertion of the tracker volume kept running normally; if needed it has several backup systems. The diesel generator at Point 5 kicked in correctly and provided a minimum of electrical power to monitor the situation and run safety systems such as lifts. The cold box was re-engaged after around 3 hours, so that Helium liquefaction re-started. By that time it was clear that the LHC would be down for at least 24 hours, and that 9 hours would be needed to re-liquefy a sufficient Helium reserve for full-field operation. As a consequence it was decided to re-establish services but to postpone the restart of the experiment until the next morning. By the 4½ hour mark, all the cooling and HVAC was running again. It took about 6 hours to restart the experiment on Saturday 29 May and by 17:00 CMS was running smoothly again. It is worth remarking that this was achieved without entering UXC. With no prospect of physics beam before the Technical Stop of 31 May, the magnet re-start was postponed until the 3rd and last day of this stop, when a ramp to full field was achieved in 4½ hours.

Analysis of the incident nevertheless showed some room for improvement. Rack control was lost due to the inadvertent switch-off of a DCS PC; the cause was traced to PC racks in which the computers are on UPS but the fans are not. The programmed fast discharge at the end of the magnet slow ramp-down may not be the fastest path to recovery when the cold box is already re-engaged and replenishment of the 6000 l reserve Dewar is the priority.

Technical stops

The revised machine operating plan for 2010-11 currently foresees 4-day technical stops every 6 weeks, with a 1-day midweek cryo-regeneration stop in between. In line with this, Technical Coordination and EAM must keep the planning for these opportunities under constant development and review. In addition, contingency planning for a 1-week or 1-month stop provoked by an LHC fault is also prudent.

Latest LHC machine thinking envisages a 9-week extended technical stop starting around 6 December 2010. A detailed draft plan for this period already exists, with possible activities including: fixing the displaced –z end alignment ring, installing TOTEM T1, installing the UXC public address system, consolidating the Nitrogen dewars prior to SX5 OSC work, installing the PM54 PAD and shaft plug, trial installation of the ZDC crane, upgrade of the CASTOR magnetic shielding, and extension of the leak detection system. In this scenario, TOTEM activity will most likely be the critical path. T1 installation procedures and planning will be reviewed in an EDR before the end of June and a decision on the risks/benefits of proceeding should be taken by September.

Facilities and services supporting operation and maintenance

Now that the operation phase of CMS is well under way some effort can be redirected to infrastructure and other common projects at point 5 and elsewhere. The goal is to identify and prioritise necessary consolidation work, improvements required for higher luminosity, and changes or facilities that will make the operation of CMS easier and more efficient.

One project already launched is the refurbishment of parts of the SX5 assembly hall to fit the needs of subsystems for a local operations and maintenance support centre (OSC), a project whose initial phase (mostly Civil Engineering) is being led by our Point 5 EAM team. The immediate priority is to convert the SHL building into a clean-room facility for the Pixel and BRM communities. This building was used to host the CMS cold box when the magnet was still on the surface. The work implies structural changes to the building and the installation of new services to support clean-room operation. Studies are well advanced and the first Civil Engineering jobs should take place before the summer. Almost as urgent is the creation of a large access-controlled zone within SX5 where activated materials can be stored or worked on; this zone will include a small workshop. Next on the high-priority list is addressing the lack of office space at point 5 by constructing a new building. The SL53 building existed on paper from an early stage in the design of the point 5 layout, but was never built. The plan is now to share the future available space between offices, a visitor centre, and recreational facilities such as coffee or kitchen areas. The building permit is currently being reviewed by the French authorities.

Meanwhile, in Building 904, work has already started to consolidate the Electronic and Electrical Integration Centre, one of the main goals being to allow sub-detector groups and the level-1 trigger to test and validate revised versions of the readout firmware and software, while the systems at point 5 are in production. Racks have been installed for the DAQ slice (capable of reading 16 S-links at 100kHz) and work is underway to install the associated DSS, power and cooling infrastructure, plus a DCS system.

For the coming shut-downs, CMS has to be prepared to shield parts of the detector which may become radioactive. Activation is a progressive process, becoming more and more severe as the average luminosity increases, but decreasing (initially rapidly) with the "cooling time" since high-luminosity p-p operation. A working group has been set up to design shielding for the EE/ES region, the tracker bulkhead, the pump station at 13.7 m, the flange at 10.6 m and the bellows around 3.5 m and 16.3 m (all distances from the IP, on either end). In addition, beam pipe protection will be designed for all foreseen opening scenarios, with the possibility of adding shielding later. PSL, Wisconsin is leading the design. The aim is to have an initial shielding system ready for the 2012 shutdown. In part, the design will be modular, so that material can be added only according to need, with the installation procedure being exercised using the light-weight versions needed initially.
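As a simplified single-isotope illustration of this behaviour (not a CMS-specific calculation; the real dose rate is a sum over many isotopes with different half-lives, which is why the decrease is rapid at first and then slows), the induced activity builds up towards saturation during irradiation and then decays with the cooling time:

```latex
A(t_{\mathrm{irr}}, t_{\mathrm{cool}})
  = A_{\mathrm{sat}} \left(1 - e^{-\lambda t_{\mathrm{irr}}}\right) e^{-\lambda t_{\mathrm{cool}}},
  \qquad \lambda = \frac{\ln 2}{T_{1/2}} .
```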

A second working group is evaluating how to improve the forward region around HF. The present procedures for opening and closing HF, TOTEM and CASTOR, for engaging and disengaging the beam pipe support at 13.5 m, and for the beampipe load transfer prior to endcap opening, are not compatible with the ALARA principle once the area becomes significantly activated. In the same region, despite several attempts to stabilize the HF tower against movements in the magnetic field, it is observed that the initial ramp-up of the magnet after re-assembling the HF tower leads to unpredictable and potentially risky movements of the entire forward region. Fortunately, practical experience shows that subsequent ramps are predictable and reproducible. To better comply with ALARA and to avoid these initial movements, a working group for the revision of the forward region has been started, led by the CERN-based infrastructure team. It is expected that the necessary changes will be realized progressively, with completion during the 2015 or 2016 shutdown.

Finally, CMS is cooperating closely with the EAM-led effort to help tackle the challenges of relocating radiation-intolerant LHC equipment from the UJ56 to other underground areas at point 5, including the CMS service cavern. This ambitious project, called R2E (Radiation to Electronics), aims at providing solutions for relocating critical electronic and electrical components which are predicted to fail (anytime after 2013) once the luminosity and the associated radiation level increase sufficiently. As experience from CNGS shows, the disruption to beam operation induced by single-event upsets in machine electronics can be devastating. The goal is to make the best of the 2012 shutdown and possibly relocate all susceptible equipment to protected areas, some of which could be in the USC55 (mostly in S4). CMS has to be sure that there is no detrimental effect on experiment operation (e.g. from poor separation of grounding) and that sufficient space remains in USC for all foreseeable CMS activities (e.g. parallel operation of upgraded readout).

Upgrade Preparations

The list of technical projects related to the upgrade is already impressive. Some are relatively clear-cut, such as the increased SCX cooling capacity needed to accommodate the luminosity-driven HLT farm expansion. Some of the organisational structures needed require thought. An example is the evolution of the Engineering Integration task, for which a consultative review was held last week.

The proposed pattern of LHC running, with two long shutdowns in 2012 and 2015, but otherwise only annual technical stops, constrains the ability to exploit the rapid-opening design of CMS to execute changes in a progressive manner. Thus the first and most crucial project is to produce a realistic schedule of activities. The shutdown of 2015 (LINAC4 + collimation) is already packed with proposed tasks, notably the replacement of HCAL photo-transducers and front-ends and the insertion of the 4-layer, low-mass, pixel tracker. A TC task-force on the central beampipe diameter has recently concluded that there is insufficient time to specify a smaller-diameter beampipe for installation in 2012. This is basically because more understanding is needed about how to position the pipe axis on the actual beamline. Unfortunately, the beampipe upgrade will therefore also have to be executed in 2015, which then becomes a packed shutdown with logistics designed to maximise the time CMS is fully open. Any superimposed maintenance activities are also most likely to reinforce this fully open layout. The consequences for the shutdown of 2012 are profound. It is becoming clear that this is the only opportunity to complete the bulk of the forward muon upgrade. In preparation for this, the CERN-based engineering team recently completed the design of the YE4 shielding disk, in close collaboration with an engineer from the chosen manufacturers in Pakistan. Procurement must start soon. Similarly, a task force is studying the integration of the forward muon upgrade, in preparation for an EDR in a few months' time.

The timely execution of this upgrade depends primarily on releasing resources for CSC material procurement and manufacture, but it is also highly dependent on completion of the production facilities in building 904. This project, again being led by our EAM team for the Civil Engineering phase, is ready to be launched once central CERN funding is released. It will start with consolidation of the floor and roof, which will then open the way for production equipment to be installed, while gas, air conditioning and cooling installations are completed. By early 2011, CSC and RPC production should be possible, while the massive programme of consolidating the shell of the building continues outside and in parallel. Once this work is finished in June 2011, it will be possible to achieve the temperature and humidity control needed for reliable RPC testing.

MAGNET

Operation of the magnet has gone quite smoothly during the first half of this year. The magnet has been at 4.5K for the full period since January.

There was an unplanned short stop due to the CERN-wide power outage on May 28th, which caused a slow dump of the magnet. Since this occurred just before a planned technical stop of the LHC, during which access to the experimental cavern was authorized, it was decided to leave the magnet OFF until 2nd June, when the magnet was ramped up again to 3.8 T.

The magnet system experienced a fault, also resulting in a slow dump, on April 14th. This was triggered by a thermostat on a filter choke in the 20 kA DC power converter. The threshold of this thermostat is 65°C. However, no variation in the water-cooling flow rate or temperature was observed. Vibration may have been the root cause of the fault. All the thermostats have been checked, together with the cables, connectors and the readout card. The tightening of the inductance fixations has also been checked. More temperature sensors have been added to help the diagnosis should such an event occur again. Following a series of tests at low current, under the supervision of the power converter experts from CERN/TE-EPC, it was agreed to put the converter back to full power. A thermal camera was used to confirm that the temperatures of all components were within the nominal range. Some vibration of a polycarbonate plastic protection was observed and corrected. The magnet was ramped back up to 3.8 T on April 15th.

The magnet current is very stable, with no sign of degradation: 18164 A ± 0.02 A at the nominal field of 3.8 T.

A minor problem occurred with the cryogenics, due to a clogged filter on the water-cooling circuit used to cool both the diffusion pump of the cold box and the turbines. The cooling flow nevertheless remained within the operating range. The filter was cleaned during the technical stop at the end of April, when the magnet was OFF.

INFRASTRUCTURE

During the May 31st to June 2nd LHC Technical Stop, a major step was made towards upgrading the endcap cooling circuit. The chilled-water regulation valve on the primary side of the heat-exchanger was changed. This now allows reduction of the set-value of the water temperature cooling the RPCs and CSCs of the CMS endcaps. At the same time, the bypass re-circulating valve on the secondary circuit of the heat-exchanger was also changed to allow better regulation of this set-value.

A project has been launched with the objective of improving the distribution of the chilled water to the different users. This was triggered by evidence that the Tracker compressors in USC55 receive insufficient flow. The chilled water is shared with the HVAC system and experts are now looking at how to better balance the flow between these two main users.

The cooling loop filters located in UXC55 have been inspected and cleaned. Samples were sent to CERN Radioprotection Service to check for activation and to the Material Analysis Lab to measure the dissolved metal content.

Concerning the powering infrastructure, the main activity has been the debugging of the many spurious alarms on the powering control system. Hundreds of fake alarms were active, due essentially to incorrect addressing. The alarms had previously been masked to allow operation, but they were still a source of concern for the operators. They have all been cleared and the status displayed on the console now properly reflects the actual condition of the power system.

A new UPS will be installed to secure the experiment control room against power cuts. Two new switchboards have been ordered and will be installed soon.

In collaboration with the CCC, a first release of the GTPM project (Gestion Technique des Pannes Majeures) has been presented to CMS Technical Coordination. This is a new interface for real-time monitoring of the status of CMS infrastructures, with both the P5 shift crew and the CCC sharing the same screen-shots. A lot of effort has been made to make the system easily readable, considering that in some cases a fault of a single node might affect more than a hundred sub-nodes. The Technical Shifters are now getting acquainted with a beta-version of the current implementation.

TRACKER

The Tracker has continued to operate with excellent performance during this first period with 7 TeV collisions. Strips operations have been very smooth. The up-time during collisions was 98.5% up to the end of May, with a large fraction of the down-time coming during the planned fine-timing scan with early 7 TeV collisions. Pixels operations are also going very well, apart from problems related to beam-gas background events, where the particles produced generate very large clusters in the barrel modules. When CMS triggers on these events, the affected FEDs overflow and then time out. Effort was mobilised very quickly to understand and mitigate this problem, with modifications made to the pixel FED firmware in order to provide automatic recovery.

With operations becoming more and more routine at P5, Pixels have begun the transition to centrally attended operation, which means that the P5 shifters will no longer be required to be on duty. The strip-Tracker is also planning to make this transition at the end of June. Operation and monitoring of the Tracker subsystems will then be done by a combination of the central shift crew and on-call Tracker experts. Data-quality and detector performance will continue to receive proper attention from the central-DQM and Tracker offline shift crews, supported by the Tracker DQM experts.

The Tracker infrastructure continues to perform very stably. Only seven problems required intervention in the strips power system during the first five months of operation in 2010, equivalent to a failure rate of ~1% of power supply units per year. In contrast, there were three times more interventions during the last five months of 2009.

The cooling systems all continue to run stably, with no significant, unplanned down-times so far in 2010. There have been a few small increments in the leak rate of the SS2 (silicon-strip-2) system in 2010, all coinciding with interventions on the system. These increases are under investigation and detailed leak rate measurements are being made in order to assess the contributions from specific parts of the system.

The combined strips and pixels DCS system was recently modified to provide a ‘locked-off’ state. This was put in place to satisfy the requirement for centrally-attended operations, specifically to prevent the detector from being switched on whilst the LHC is in an unstable mode of operation. Unlocking the locked-off state requires permission from the strips and pixels on-call experts. The locked-off state is also expected to help avoid extra, unnecessary thermal cycles of the detectors. The default operating condition of the Tracker is now ‘Standby’ (LV on, HV off), though prudence is still maintained with regard to machine development and LHC conditions considered to be unstable. In particular, the Tracker DCS will automatically switch off (and lock) the detector if the Accelerator Mode goes to ‘Machine Development’, or the Beam Mode goes to ‘Inject and Dump’ or ‘Circulate and Dump’.
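A minimal sketch of this lock-off logic (the class, method and mode names below are illustrative assumptions, not the actual CMS DCS implementation):

```python
# Illustrative sketch of the 'locked-off' behaviour described above.
UNSAFE_ACCELERATOR_MODES = {"MACHINE DEVELOPMENT"}
UNSAFE_BEAM_MODES = {"INJECT AND DUMP", "CIRCULATE AND DUMP"}

class TrackerDCS:
    def __init__(self):
        self.state = "STANDBY"   # LV on, HV off (default operating condition)
        self.locked = False

    def on_lhc_update(self, accelerator_mode, beam_mode):
        """Switch off and lock whenever the LHC enters an unsafe mode."""
        if accelerator_mode in UNSAFE_ACCELERATOR_MODES or beam_mode in UNSAFE_BEAM_MODES:
            self.state = "OFF"
            self.locked = True   # central DCS cannot switch the detector back on

    def switch_on(self, expert_permission=False):
        """Central DCS request; refused while locked-off unless an expert unlocks."""
        if self.locked and not expert_permission:
            return False
        self.locked = False
        self.state = "STANDBY"
        return True

dcs = TrackerDCS()
dcs.on_lhc_update("MACHINE DEVELOPMENT", "STABLE BEAMS")
print(dcs.switch_on())                         # False: still locked off
print(dcs.switch_on(expert_permission=True))   # True: expert unlock
```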

HADRON CALORIMETER (HCAL)

Operations and Maintenance

All HCAL sub-detectors participated throughout the recent data taking with 7 TeV collisions. A timing scan of HF was performed to optimize the timing across the detectors and to set the overall time position of the ~10-ns wide signals within the 25-ns integration time slice. This position was chosen to ensure that the trigger primitives in physics events are generated synchronously at the desired bunch crossing, while also providing discrimination between the calorimeter signals and anomalous signals due to interactions within the photomultiplier tubes. This timing discrimination is now used in the standard filter algorithms for anomalous signals.

For HB and HE, once the statistics needed to assess the timing of a sufficient number of channels had been accumulated, it was verified that the time settings determined with cosmics, splash events and initial collision data were appropriate for the 7 TeV collision data taking. A further fine-tuning of the HB and HE time settings will be performed once sufficient data at high transverse momentum are available. This is expected to be after the data set for the ICHEP conference has been collected. Since the great majority of HB, HE and HF trigger primitives were arriving at the correct bunch crossing, the main jet triggers have been enabled at Level-1.

While the detector channel status had been stable for many months, two issues have arisen in the last few weeks. The data from one fiber (three channels) in HB is lost; only empty data frames are being sent. This problem is currently under investigation. And one HPD (18 channels) in HB now draws excessive bias current and must be operated at a reduced bias voltage. This problem too is under investigation. It is not known whether the failure is within the HPD itself or in the bias circuit. In either case, repair will not be possible until CMS is opened during a shutdown.

In order to help preserve the lifetime of the HPD, the bias voltage was lowered from 80 V to 30 V. The pulse produced with a lowered bias voltage will tend to be broader. HCAL energy is reconstructed using four 25 ns time samples, and we are investigating the impact of reducing the bias voltage on the energy determination. We do observe a broadening of the pulse; however, it appears that most of the pulse remains within the four time slices. The peak of the pulse occurs in a later time slice, and the impact on the trigger efficiency is being evaluated. Additional studies are also needed to correct the standard reconstruction.
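A minimal numerical sketch of the effect under study, using invented pulse shapes (the Gaussian widths, timings and slice choices below are assumptions for illustration, not measured HPD pulses):

```python
# Fraction of a pulse contained in four consecutive 25 ns time slices,
# comparing a nominal pulse with a broader, later one (qualitatively what
# is expected at reduced HPD bias voltage). Shapes are hypothetical.
import numpy as np

def contained_fraction(pulse, first_slice=3, n_slices=4, slice_ns=25.0, t_max=250.0):
    """Fraction of the pulse integral falling inside the chosen slices."""
    t = np.linspace(0.0, t_max, 2501)
    y = pulse(t)
    lo, hi = first_slice * slice_ns, (first_slice + n_slices) * slice_ns
    return y[(t >= lo) & (t < hi)].sum() / y.sum()

nominal   = lambda t: np.exp(-0.5 * ((t - 100.0) / 10.0) ** 2)
broadened = lambda t: np.exp(-0.5 * ((t - 115.0) / 18.0) ** 2)

print(f"nominal pulse contained:   {contained_fraction(nominal):.3f}")
print(f"broadened pulse contained: {contained_fraction(broadened):.3f}")
```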

HCAL suffered a series of failures in the HV power supply system in April. A review is ongoing, and initial improvements have been made, including additional fusing and improvements to the filtering circuit in the primary power supplies.

The high voltage distribution to 8 channels of HF failed. The board was repaired during a two-day maintenance period.

It has been agreed by CMS that the HPDs in the YB-2 and YB+2 rings of HO will not be operated for the 2010-2011 running period, during which very-high-energy jets will be rare. The HPDs in these rings will be conserved as spares for future operation of HB, HE and HO (inner rings). HO rings 0 and 1 will continue to be operated as normal. The plan is to replace the HPDs in rings 1 and 2 with SiPM-based front-end electronics in 2012.

Several channels in CASTOR are affected by the magnetic field, which is higher than expected in this region. The PMT gains have been equalized for all the non-affected PMTs, and the number of affected channels has been reduced by increasing the HV appropriately, after studying the LED signal and halo-muon data with the field on and off. An analysis is ongoing to study the energy flow at high eta, including CASTOR and HF.

The ZDC is operating routinely. Anomalous PMT hits (similar to those in HF) are seen in 0.1% of collisions, but these can be readily identified and filtered; this will not compromise performance in heavy-ion running. The Monte Carlo simulation is being improved to better match the energy distribution seen in the data. Studies are underway to develop a ZDC-based trigger for diffractive physics.

Development for Improvements and Upgrades

The HCAL group continues development work on several improvements and upgrades targeted at the shutdowns expected in 2012 and 2015-16. The HPDs in HO rings 1 and 2 will be replaced with SiPM-based front-end electronics in 2012. A plan is being prepared for the replacement of the HF PMTs in the same shutdown with units having thinner windows, which will reduce the rate of anomalous signals. Design work continues towards replacing the HB and HE HPDs with SiPM-based readout by 2016 and upgrading the readout to provide depth segmentation and improved signal/noise. There has been a significant focus in the last two months on radiation damage and longevity studies for these new components, both SiPMs and PMTs, including neutron and activation exposures. These studies will conclude in the next two months.

MUON DETECTORS: DT


The DT system operation since the 2010 LHC start-up has been remarkably smooth. All parts of the system have behaved very satisfactorily in the last two months of operation with LHC pp collisions. Disconnected HV channels remain at the level of 0.1%, and the loss in detector acceptance due to failures in the readout and trigger electronics is about 0.4%.

The DT DCS-LHC handshake mechanism, which was strengthened after the short 2009 LHC run, operates without major problems.

A problem arose with the opto-receivers of the trigger links connecting the detector to USC: the receivers would lose lock for specific frequencies of the LHC clock, in particular during the LHC ramp. To relock the TX and RX, a “re-synch” command had to be issued. The source of the problem has been isolated and cured in the Opto-RX boards, and the system is now stable. The Theta trigger chain has also been commissioned and put into operation.

Several interventions on the system have been made, profiting from the LHC technical stops, though none of the problems addressed would have caused serious degradation of the system performance.


The reliability of the Anderson Power connectors used in the CAEN power supplies of the DT low-voltage system remains a point of concern. After a major campaign to replace the 670 connectors during the past shutdown, there have already been 4 instances of overheating during the last 5 months. Even though this rate is lower than last year's (46 instances), the effectiveness of the solution applied in the last shutdown is not completely satisfactory. Studies aimed at an optimal solution are still ongoing.


The control and monitoring software continues to evolve, with the aim of providing a fast and concise summary of the status of the system. Improvements are happening on a daily basis, not only to detect any problems efficiently, but also to provide longer-term summaries. Consequently, a lot of effort is being put into the Trigger Supervisor, the DQM and the Web-Based Monitoring tools.


The stability of the DT system continues to be a source of pride and, as a result, operation will very soon require only an on-call shifter.


In a workshop held from April 19 to 21, the evolution of detector calibration and performance with increasing integrated luminosity was discussed, and the performance of the calibration procedures was reviewed on the basis of the small statistics accumulated so far. The group is ready and eagerly awaiting more data.

MUON DETECTORS: ALIGNMENT

The main developments in muon alignment since March 2010 have been the production, approval and deployment of alignment constants for the ICHEP data reprocessing.

In the barrel, a new geometry, combining information from both the hardware and track-based alignment systems, has been developed for the first time. The hardware alignment provides an initial DT geometry, which is then anchored as a rigid solid, using the link alignment system, to a reference frame common to the tracker. The “GlobalPositionRecords” for both the Tracker and Muon systems are being used for the first time, and the initial tracker-muon relative positioning, based on the link alignment, yields good results within the photogrammetry uncertainties of the Tracker and alignment-ring positions. For the first time, the optical and track-based alignments show good agreement with each other, the optical alignment being refined by the track-based one. The resulting geometry is the most complete to date, aligning all 250 DTs, and is closer than ever to the final design alignment strategy.

In the endcaps, new optical-alignment constants with (x,y) CSC positions were approved. Unfortunately, a technical mistake made when combining these measurements with link-alignment measurements of the ME12 and ME11 chambers and CRAFT09 measurements for the other degrees of freedom caused the final alignment database to be incorrect, and it was not fixed in time for the ICHEP reprocessing. However, a successful beam-halo alignment based on overlapping tracks and photogrammetry information has been validated and deployed for the CSCs.

The alignment efforts will now concentrate on the combination of hardware and track-based results in the endcaps, the automation of the entire data chain leading to alignment databases and tags, further refinements of alignment results and on extending the validation techniques.

What are the alignment constants? How can they be visualized?
The answer to the first question is simple: the CMS reconstruction needs to know where to place, in space, the hits recorded from the active tracking elements. For many reasons such as mounting tolerances, thermal effects, gravitational forces, magnetic forces, etc…, these tracking detectors are not exactly at their design position. In the case of the muon alignment, the alignment constants are a set of “corrections” with respect to this ideal alignment. They consist of six numbers (called degrees of freedom) for each layer of each muon chamber: three translations and three rotations, with respect to the nominal, ideal alignment. That’s a lot of numbers! Which brings us to the second question: what do they look like? Well, one could make a list, and then they would look something like this:
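As a purely illustrative example (the chamber label and the numbers below are invented, not actual constants from the CMS database), one such entry might look like this in a Python-like listing:

```python
# Hypothetical alignment-constants record for a single DT chamber:
# three translations and three rotations with respect to the nominal,
# ideal geometry. Values are invented for illustration only.
alignment_corrections = {
    "DT wheel +1, station MB2, sector 4": {
        "dx": 0.012,  "dy": -0.034,  "dz": 0.105,        # translations [cm]
        "phix": 0.0004, "phiy": -0.0011, "phiz": 0.0002,  # rotations [rad]
    },
    # ... one such record for each of the 250 DT chambers
    # (and, in full detail, for each of their internal layers)
}
```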


But a nicer way to look at them is to show them organized by chamber and by degree of freedom, in such a way that the visualization becomes a little more intuitive. The images below are nothing more than a graphical representation of the alignment constants in the current CMS reconstruction database for all muon chambers (ignoring internal layers):

TRIGGER

Level-1 Trigger Hardware and Software

The overall status of the L1 trigger has been excellent and the running efficiency has been high during physics fills. The timing is good to about 1%. The fine-tuning of the time synchronization of muon triggers is ongoing and will be completed after more than 10 nb-1 of data have been recorded. The CSC trigger primitive and RPC trigger timing have been refined. A new configuration for the CSC Track Finder featured modified beam halo cuts and improved ghost cancellation logic. More direct control was provided for the DT opto-receivers. New RPC Cosmic Trigger (RBC/TTU) trigger algorithms were enabled for collision runs. There is further work planned during the next technical stop to investigate a few of the links from the ECAL to the Regional Calorimeter Trigger (RCT). New firmware and a new configuration to handle trigger rate spikes in the ECAL barrel are also being tested. A board newly developed by the tracker group (ReTRI) has been installed and activated to block resonant trigger frequencies that could possibly damage wire-bonds on CMS detector ICs. The algorithm used was developed by CDF. A batch of new central Trigger Control System modules has been produced whose firmware can be programmed through VME. The parameter settings of the beam position monitoring (BPTX) trigger electronics have been optimized.

Before the start of the LHC run in 2010, various other improvements to the online software and firmware were made, in particular: the luminosity section was shortened to 2^18 orbits (~23 sec); a mechanism to automatically mask subsystems from the L1 configuration using the FED vector was put in place; automated resynchronizations in response to out-of-sync signals from detectors were enabled; new GCT firmware implemented an improved tau algorithm; the configuration of LHC clock changes and ramping was automated; the monitoring of the LHC clock and orbit was improved; and the trigger shifter environment was enhanced.
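As a quick cross-check of that figure, assuming the nominal LHC orbit period of roughly 88.9 μs (a value not quoted in this bulletin):

```latex
T_{\mathrm{LS}} \;=\; 2^{18}\ \text{orbits} \times 88.9\ \mu\mathrm{s/orbit} \;\approx\; 23.3\ \mathrm{s}
```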

Level-1 Trigger Commissioning and Operations

At the re-start of LHC operations, the Beam Pickup and Beam Scintillator triggers were re-synchronized. This allowed successful delivery of the first 7 TeV collisions with precise trigger timing at the Media Event. The next LHC fills were used to perform clock phase scans for the HF, ECAL and CSC systems, while relying on the Beam Scintillation Counters as the main minimum-bias trigger. Results from these scans contributed to improvements in the synchronization of Trigger Primitives from these subdetectors. Subsequently, all Level-1 object triggers reached the condition of less than 1% out-of-time triggers and could be switched from monitoring mode to actively triggering the CMS data acquisition. The final component to complete the fine-timing process is the barrel muon trigger, which awaits increased luminosity. Needless to say, more statistics will also allow setting the timing of all individual trigger channels to maximum precision.

An important part of Level-1 Trigger commissioning with collisions is the study of efficiencies. Despite the lack of a proper signal sample, especially at higher pT or ET, reconstructed signal-like objects recorded with minimum-bias triggers are used to identify various sources of inefficiency. This helps to tune various hardware parameters, e.g. Trigger Primitive thresholds or the timing of inter-channel communication, and to apply and test the first corrections to Level-1 threshold quantities.

Despite the rapidly evolving LHC conditions, the smoothness of the operation of the Trigger has improved over time. The scope of the Trigger Shifter activity has moved from taking care of configuration to monitoring the Trigger performance. The automation of trigger configuration tasks, one of the most recent being automatic clock-source selection, has brought a reduction in configuration errors and contributed to an overall high level of data-taking efficiency. The main responsibility for maintaining correct configurations and keeping continuity resides in the hands of experienced Trigger Field Managers, who rotate on a weekly basis. The newly introduced Offline Trigger Shifts ensure proper monitoring based on Offline DQM. The process of data certification is being improved with the help of extensive use of the Run Registry utility by all Trigger shifters.

Trigger Studies Group

It has been a very busy and productive period for the Trigger Studies Group (TSG). Since the end of March the instantaneous luminosity delivered by the LHC has increased by two orders of magnitude, up to about 2 × 10^29 cm^-2s^-1. Several trigger menus were prepared, validated and deployed in the Filter Farm. The trigger performance has been monitored constantly and CMS has taken data with good-quality triggers.

Trigger menus have been developed for every major luminosity milestone: 1 × 10^27, 1 × 10^28 and 1 × 10^29 cm^-2s^-1. The CMS trigger has gradually moved from the startup period, with several commissioning triggers and a very open physics trigger, to a more constrained online selection with simplified minimum-bias triggers and all physics HLT paths still running unprescaled. The triggered events are grouped into eight primary datasets to facilitate their timely reconstruction, with the addition of a dedicated Event Display stream for monitoring.

The trigger performance has been monitored at several levels:
  • On the DQM side, with trigger object-based selection provided by the Physics Groups
  • During data certification, to determine the quality of triggers in the data taken
  • With collision data skims that are used extensively by the trigger experts to study the L1 and HLT behavior
  • At the weekly Monday Trigger Performance meetings with regular reports from Level-1 and Physics Object experts
  • By studying the CPU-performance of the HLT paths and providing skims of "slow events" to HLT developers.

The TSG has also been providing support for three software releases in parallel with daily work on:
  • maintenance of multiple trigger menus
  • release validation to identify areas where improvements can be made and determine which software release should be used for data taking
  • MC production, which in the recent months has been using one of the latest online trigger menus
Operationally, a small dedicated team of HLT on-call experts has been ensuring the smooth and stable running of the CMS trigger. They not only drive the online deployment of the trigger menus, but also help collect and analyze error events and core files, producing more robust HLT software. This is an area where additional manpower is very welcome and can have a big impact on the quality of data collected by CMS.

Preparing for the next months, the TSG is developing new menus for every factor of two increase in luminosity. Rate predictions are extrapolated from recent runs with careful examination of contributions from collisions, noise and cosmics (see Figure 1). The effect of pileup on the CPU performance and efficiency of the HLT is being actively studied. Finally, work is being invested in a software tool that predicts individual trigger rates to guide the online shifter in detecting anomalies in real time.
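A minimal sketch of the idea behind such a rate-prediction tool (the function names and numbers below are hypothetical; the actual TSG tool is not described in this bulletin beyond the sentence above):

```python
# A path's rate is modelled as a luminosity-proportional collision term plus a
# roughly constant noise/cosmics term, divided by its prescale; the measured
# rate is then flagged if it deviates too much from the prediction.

def predicted_rate(lumi, slope_hz_per_lumi, noise_rate_hz, prescale=1):
    """Expected rate (Hz) of one trigger path at instantaneous luminosity `lumi`."""
    return (slope_hz_per_lumi * lumi + noise_rate_hz) / prescale

def is_anomalous(measured_hz, expected_hz, tolerance=0.5):
    """Flag the path if the measured rate deviates by more than 50%."""
    if expected_hz == 0:
        return measured_hz > 0
    return abs(measured_hz - expected_hz) / expected_hz > tolerance

# Hypothetical example: a path extrapolated from a reference run at
# 1e29 cm^-2 s^-1 to a current luminosity of 2e29 cm^-2 s^-1.
slope = 40.0 / 1e29                       # Hz of collision rate per unit luminosity
expected = predicted_rate(2e29, slope, noise_rate_hz=2.0)
print(expected, is_anomalous(measured_hz=150.0, expected_hz=expected))
```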

Fig. 1: Comparison of predicted and actual trigger rates from a recent run. Rate predictions are extrapolated from the latest data with careful examination of contributions from collisions, noise and cosmics.



DAQ

The DAQ system has been deployed for physics data taking as well as supporting global test and commissioning activities. In addition to 24/7 operations, activities addressing performance and functional improvements are ongoing.

The DAQ system consists of the full detector readout, 8 DAQ slices with a 1 Tbit/s event building capacity, an event filter to run the HLT comprising 720 8-core PCs, and a 16-node storage manager system allowing up to 2 GByte/s writing rate and a total capacity of 250 TBytes.

Operation

The LHC delivered its highest luminosity in fills with 6-8 colliding bunches, reaching peak luminosities of 1-2 × 10^29 cm^-2s^-1. In those conditions the DAQ was typically operating with a ~15 kHz trigger rate, a raw event size of ~500 kByte, and ~150 Hz recording of stream-A with an event size of ~50 kB. The CPU load on the HLT was ~10%.
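For orientation, these figures imply an event-building throughput far below the 1 Tbit/s capacity quoted above (a rough estimate that ignores protocol overheads):

```latex
15\ \mathrm{kHz} \times 500\ \mathrm{kB} \;\approx\; 7.5\ \mathrm{GB/s} \;\approx\; 60\ \mathrm{Gbit/s} \;\ll\; 1\ \mathrm{Tbit/s}
```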

Tests for Heavy-Ion operation

Tests have been carried out to examine the situation for data-taking in the future Heavy Ion (HI) run. The high occupancy expected in HI running was simulated via non-zero-suppressed (NZS) data. The Tracker was not on at the time so it simulated “virgin raw” operation. Data compression was not used and the events were shipped to Tier0 with an average size of ~19 MBytes. It is expected that standard (lossless) ROOT compression will reduce this size to ~12 MBytes. A NZS Tracker FED event size of 50 kBytes yields an FRL data record size of 100 kBytes, at the level of the event builder input, for those FRLs that merge two Tracker FEDs. This is a factor 50 above nominal p-p conditions of 2 kBytes per FRL, and would correspond to a 2 kHz level-1 rate.
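At fixed per-FRL bandwidth, a factor-50 larger record scales the sustainable level-1 rate down by the same factor; assuming the nominal 100 kHz level-1 design rate (not quoted in this section):

```latex
100\ \mathrm{kHz} \times 2\ \mathrm{kB} \;=\; 2\ \mathrm{kHz} \times 100\ \mathrm{kB} \;=\; 200\ \mathrm{MB/s\ per\ FRL}
```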

To test the event-building limits for these huge events, recording of events was switched off and events were built without back-pressure at an input rate of 1.6 kHz. Although it is slightly lower than expected, this 1.6 kHz level-1 rate is around 10 times larger than the anticipated HI rate. When recording events with the storage manager, the data throughput saturated at ~2.6 GBytes/s, as expected with 16 storage manager nodes. This is reduced to about ~1.6 GBytes/s if files are simultaneously transferred to Tier0, due to disk access contention.

There is the possibility that the total level-1 rate in HI running could be as high as 300 Hz, rather than the 80 Hz initially stated. Above 150 Hz, some amount of data reduction before recording will be required, either by event rejection at the level-1 trigger or by data reduction in the HLT. From the test it is clear that, at the highest rates, the simultaneous transfer to Tier0 would soon become the bottleneck. It is likely therefore that we will temporarily keep the bulk of the data at P5 and transfer it to Tier0 later, in non-collision periods.

Selected Developments

To support the DAQ shifter and to analyse the operation of the global DAQ retrospectively, a new tool, called the DaqDoctor, has been developed. It correlates the monitoring information of the various components of the central DAQ in order to draw conclusions on the overall DAQ status. The correlation helps to identify the origin of a given problem and, if it corresponds to a known pattern, the tool presents instructions to the DAQ shifter. Furthermore, the shifters are alerted with sounds in case of problems. Cases handled are, for example, diagnostics when triggers have stopped, error states asserted by the sub-detectors on the TTS, PCs not responding, etc. The historical records of the DaqDoctor are also of interest to subsystem experts for diagnosing their systems post-mortem.
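A minimal sketch of this kind of rule-based correlation (the rules, field names and advice strings below are invented for illustration; they are not the actual DaqDoctor code):

```python
# Monitoring values from several DAQ components are combined into a snapshot,
# matched against known patterns, and turned into an instruction for the shifter.
RULES = [
    {
        "name": "Sub-detector back-pressure",
        "matches": lambda s: s["trigger_rate_hz"] == 0 and "BUSY" in s["tts_states"].values(),
        "advice": "Triggers stopped: a sub-detector asserts BUSY on the TTS. "
                  "Identify it from the TTS display and issue a resync.",
    },
    {
        "name": "Unresponsive PC",
        "matches": lambda s: len(s["dead_pcs"]) > 0,
        "advice": "One or more DAQ PCs are not responding; call the DAQ on-call expert.",
    },
]

def diagnose(snapshot):
    """Return (rule name, advice) for the first matching known pattern, else None."""
    for rule in RULES:
        if rule["matches"](snapshot):
            return rule["name"], rule["advice"]
    return None

snapshot = {"trigger_rate_hz": 0,
            "tts_states": {"ECAL": "READY", "CSC": "BUSY"},
            "dead_pcs": []}
print(diagnose(snapshot))
```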

A searchable browser is available in the private network at the URL:
http://cmsdaqweb/cgi-bin/daqpro/DoctorsNotes.cgi

A subset of error conditions ordered by subsystem can be found under:
http://cmsdaqweb/cgi-bin/daqpro/subsystemErrors.cgi


COMMISSIONING AND DETECTOR PERFORMANCE GROUPS (DPG)

The period since the last CMS week has witnessed the start of the LHC as a 'physics' machine. The excitement of the first collisions at 7 TeV on March 30 will be remembered for a long time!

The preparation for the event was meticulous. The LHC was pushed to deliver non-colliding stable beams prior to the real collisions, which allowed CMS teams to use the few beam gas interactions within the length of the pixel detector to verify that the beams would indeed collide once both beams were circulating and the separation bump was collapsed. In passing, this exercise allowed us to catch some last minute features of the system, which could have affected our performance on the day!

The first collisions were detected practically simultaneously around the ring. Within tens of minutes not only event displays, but also some physics distributions were delivered to the audience of the press conference.

A plan of work had been carefully defined on how to use the first collisions following the first hour of excitement. It mainly consisted of using the early luminosity to perform systematic latency scans of the readout/trigger of our subdetectors. This implied a little sacrifice of luminosity, but the detector trigger and readout timing had to be tuned as early as possible to make sure that future performance would be optimal.

Also, the luminosity delivered in the early runs would soon become negligible, given that the peak luminosity was at least four orders of magnitude below what is forecast for 2010.

The delay scans were carried out and, within a few weeks, the final timing corrections (integrating the knowledge accumulated from cosmics, beam splashes and delay scans) were deployed by the pixel, silicon strip tracker, ECAL, HCAL and CSC systems.

The subsequent period has seen, as foreseen, an LHC that has been changing all the time in terms of bunch intensities and configurations. The luminosity is typically delivered at night or during weekends, with normal working hours dedicated most of the time to commissioning the accelerator and striving to increase the luminosity.

Fig.1: Minimum bias trigger rates in the first physics fill at 7 TeV on March 30.


The record luminosities reached to date are beyond 2 × 10^29 Hz/cm2.

More than 90% of the luminosity delivered has been collected. The time in between physics fills has been used to continue the investigations of problems that have led to the loss of data.

One major problem has been induced by events with particles travelling along the barrel pixel detector and hitting long rows of pixels: the load on the front-end buffers has been such that it could induce loss of synchronization, given the time it takes to 'flush' the buffers. The Pixel team has attacked the problem painstakingly, with several versions of firmware deployed that have progressively increased the robustness of the system, while mitigating actions have been deployed on the trigger side to reduce the impact of such (relatively rare) events.

Fig.2: Details of CMS up-time since March 30.

Also to be noted is the discovery of features in our network configuration and in certain DB queries which could affect our performance, especially when configuring and starting runs. This type of issue has probably caused most of the dead-time (which has nevertheless been below 10%). As a consequence, we expect to improve performance in future runs, and the goal is now to reach an average of more than 95% recorded luminosity.

As detailed below, major progress has been made in understanding other beam-related effects, such as isolated crystal signals in ECAL and/or background events in general.

The situation with respect to the fraction of detector usable for physics is shown in the table below. It should be noted that the impact on the corresponding physics object efficiency both for trigger and reconstruction is typically much smaller!

Table 1: Active fraction of detector channels usable for physics.



Detector Performance Groups

L1 Trigger

At the LHC startup, CMS collected data using the BPTX zero-bias and BSC minimum-bias triggers. Using these data, the L1 Trigger DPG concentrated on verifying the trigger synchronization of the remaining triggers: e/γ, jet, muon, energy sums, and technical triggers. This allowed the full menu of triggers to be enabled, with confidence that the tracking detectors would be read out in time with collisions.

Since then, work has focused on measuring and understanding the performance of the basic object triggers. These studies include performance criteria such as resolution and efficiency with respect to offline measured objects, as well as detailed technical studies at the trigger hardware level.

Plots showing the trigger synchronization, as well as efficiency turn-on curves for e/γ and endcap muons have been approved for showing outside CMS, and similar plots for jets, energy sums and barrel muons are under preparation. Studies starting now include energy corrections for e/γ and jet triggers, and validation of e/γ isolation criteria.

In support of the performance studies, a suite of prompt analysis tools has been under continuous development and improvement during the past 6 months.

Tracker

The Tracker has been operating successfully and with high performance. In the pixel sub-detector 98.3% of the channels are operational, and 98.1% in the strip sub-detector. Offline operations during data-taking are now reaching a routine pace with the automation of most of the calibration tasks. Data-quality monitoring and certification are performed by the offline shifters and shift leaders with occasional help from experts. The pixel sub-detector is seen to be a well-calibrated and well-understood detector. Good agreement between data and Monte Carlo is obtained for most distributions.

The strip sub-detector operates, as foreseen, in deconvolution mode, and the nominal S/N is observed. Since this readout mode has an impact on the local reconstruction due to the charge collection time, a measurement of the corrections needed has been made. This removes the biases observed when using the Lorentz angle measured with cosmic tracks, and allows cosmic and collision data to be combined in order to produce an improved aligned geometry. Such a new alignment has been delivered for the reprocessing in view of the ICHEP conference.

The initial geometry used for prompt reconstruction of 2010 collisions was based on the 2.2M cosmic events collected in February 2010. This geometry was obtained with a combined method running the local method (HIP) on top of the global method (Millepede). While it allowed determination of the position of the barrel modules in the measurement coordinates with a few μm resolution, it nevertheless had a limited precision in the forward detectors. By combining these cosmic tracks with the collision tracks, a substantial improvement in the forward regions is observed. This also confirms the stability of the tracker since the winter shutdown within the current accuracy of the alignment.

Efforts have been spent in understanding the beam-related background and its impact on the pixel detector. It has been confirmed that it most probably consists of secondaries from beam-gas collisions. Methods have been developed to efficiently reject contamination from these events in collisions. Further work will be needed to better assess the impact of the pile-up of beam-gas and collision events as the beam current increases. The Tracker DPG is also strengthening its contacts with the POG and PAG groups through involvement in the various task forces. This is of particular importance to make the best use of the detector and to properly evaluate systematics, for example from material-budget and alignment uncertainties.

ECAL

The ECAL detector performance is now being tuned on 7 TeV collision data. Data taking is generally smooth and the ECAL trigger is fully enabled. The focus is now on optimizing the data-taking efficiency and reliability. Part of this effort is a dedicated investigation into the small number of channels that are currently masked in the readout or in the trigger, with the aim of recovering some of them for data taking.

The general performance of ECAL, demonstrating excellent understanding of the single channel response, the noise, the zero-suppression and selective readout scheme as well as the trigger, has been presented at the CALOR2010 conference in Beijing.

The calibration of the ECAL is making rapid progress, exploiting a data set of more than 10 million reconstructed π0 decays. The AlCa procedure for the π0 calibration is in full operation. The inter-calibration of individual channels is reaching a precision of 1% in the region around η = 0. The energy scale is studied using π0 decays, η decays, as well as the phi-invariance of minimum bias events. The energy scale in the barrel is uniform within 1% and agrees well with MC expectations at the same level of precision. Calibration of the endcaps is progressing as well, but will require more statistics and careful investigation of systematic effects.

The first J/ψ and Z decays to electrons have been reconstructed and will be used to further study the detector performance. The preshower detector has undergone an in-situ calibration and an in-situ alignment, which significantly improve the sub-detector and CMS performance. A wide range of physics analyses, now being prepared for the summer conferences, demonstrates the superb quality of the ECAL detector on a daily basis.

Handling of the ECAL anomalous signals is continuously being improved and tuned to the needs of the physics analysis. In a joint effort with the respective Physics Object Groups and Physics Analysis Groups, procedures and tools are being put in place to ensure that there is no impact on the physics output of the CMS detector.

HCAL

The HCAL detector data taking proceeded smoothly during the last month with an effective fraction of active/readout channels of 99.2%.

Attention has been focused on the study of noise events, which might have an impact on physics studies. A CMS task force (ASCTF) was established to provide recommendations on how CMS should treat the anomalous signals from ECAL and HCAL. Within the HCAL DPG, a working group was formed to provide optimal recommendations for cleaning up the anomalous HCAL signals, which are due to charged particles producing Cerenkov light in the windows of the HF PMTs. Such signals were first observed in test-beam data and are seen to occur in collision data.

Filters have since been developed, using the HF pulse time and shape, to effectively identify and remove such anomalous signals in HF. The set of filter recommendations developed by the HCAL DPG was adopted by the ASCTF.

The most powerful filters utilize the pulse shape of the signals in HF. This was only possible after the HCAL channel phases were adjusted using collision data at 7 TeV. The pulse-shape filter requires access to digi information, and taking advantage of this filter requires the data to be processed using CMSSW37x.
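A minimal sketch of such a pulse-based selection (the thresholds, slice numbering and variable names below are invented for illustration; the actual CMS filters are more sophisticated):

```python
# Anomalous HF signals from PMT-window hits are prompt and concentrated in a
# single time slice, whereas genuine shower signals are spread over
# neighbouring slices and arrive at the expected time.

def is_anomalous_hf_hit(time_slices, peak_index, expected_peak=4,
                        max_peak_fraction=0.95, time_window=1):
    """Return True if the hit looks like a PMT-window (anomalous) signal."""
    total = sum(time_slices)
    if total <= 0:
        return False
    peak_fraction = time_slices[peak_index] / total
    too_narrow = peak_fraction > max_peak_fraction               # pulse-shape criterion
    out_of_time = abs(peak_index - expected_peak) > time_window  # timing criterion
    return too_narrow or out_of_time

# Hypothetical 10-slice charge readouts (arbitrary units).
shower_like = [0, 0, 1, 3, 20, 9, 3, 1, 0, 0]
window_hit  = [0, 0, 25, 1, 0, 0, 0, 0, 0, 0]
print(is_anomalous_hf_hit(shower_like, peak_index=4))  # False
print(is_anomalous_hf_hit(window_hit, peak_index=2))   # True
```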

Events with high MET have been scanned, and we see some events which appear to have residual noise in HF. Such events can occur when multiple PMTs are hit or if some noise overlaps with energy deposits from a jet. A tighter timing requirement may help in identifying this residual noise.

Special triggers developed for the HCAL calibration are active and collecting calibration data. These include triggers collecting non-zero-suppressed events in order to determine the phi symmetry, and triggers selecting isolated charged particles to determine response corrections. Response corrections will be determined using isolated charged particles with momenta of 40-60 GeV. The collection of this sample will require extended running, with an integrated luminosity of several inverse picobarns. The calibration procedures are currently being exercised using lower-momentum tracks.

The online DQM is being refined in response to the operational experience gained. A key input to Run Certification is access to the voltage settings from the Detector Control System (DCS). This information resides in an online database, and the software to propagate it to the offline database is being tested. The DCS information will be used as part of the automated Run Certification in order to ensure that HCAL is operating at the target voltage settings.

Two abstracts have been submitted to ICHEP. One describes the isolated charged particle response and the second describes the commissioning and performance of the HCAL.

Triggers to collect calibration data have been implemented in the online trigger menu and are working as expected. However, it will take time to accumulate sufficient statistics of isolated charged particles before any response corrections can be determined, and we do not expect to apply a response correction for the ICHEP dataset. The calibration techniques are being tested with lower-momentum tracks and are performing well.

DT

The 7 TeV LHC operations have offered the first sizable sample of collision tracks for the barrel muon system.

The DT DPG concentrated its work on preliminary tests of the synchronization, calibration and local reconstruction of the Drift Tubes. Proper synchronization of the DT local trigger is essential to perform unambiguous BX assignment and to optimize the overall barrel muon trigger performance.

Methods to set the local trigger fine synchronization with a precision of ~1 ns were developed and tested during commissioning with cosmic rays. As soon as the first 7 TeV data were available, the coarse synchronization of the system was tested to ensure that all chambers were delivering triggers with the same latency (i.e. that the BX distribution of collision triggers for every chamber peaked at the same bunch crossing). Computation of the fine phase corrections is currently ongoing, based on a comparison with the timing information coming from local reconstruction.

The calibration of the DT system will need an integrated luminosity of about an inverse picobarn before it can be fully exploited. The calibration procedure involves determining the time pedestal of the signal coming from the chamber, which represents the latency with respect to the L1 trigger. The DTs are currently running with an educated guess of the time pedestal, whose validity was verified using the first data. The distributions of the residuals between the hits in the chambers and the reconstructed segments have been studied, and no statistically significant biases have been found.
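
As a rough illustration of what the time-pedestal determination involves, the sketch below estimates the pedestal from the rising edge of the drift-time ("time box") distribution of a chamber; the edge definition is a simplified stand-in for the fit actually used, and all numbers are placeholders.

import numpy as np

def time_pedestal(tdc_times_ns, bin_width=2.0, plateau_fraction=0.5):
    # Histogram the raw TDC times and take the pedestal as the time at which
    # the distribution first rises above a chosen fraction of its plateau.
    times = np.asarray(tdc_times_ns, dtype=float)
    bins = np.arange(times.min(), times.max() + bin_width, bin_width)
    counts, edges = np.histogram(times, bins=bins)
    filled = counts[counts > 0]
    if filled.size == 0:
        return float(times.min())
    plateau = np.percentile(filled, 90)
    above = np.nonzero(counts > plateau_fraction * plateau)[0]
    return float(edges[above[0]]) if above.size else float(times.min())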

The local reconstruction code was tested extensively during the CRAFT data taking and has been refined in recent months with collision muons. A cut on the longitudinal plane, present in the collision reconstruction code to reject cosmic muons, led to a large number of segments with azimuthal information only; this problem has been corrected. The distribution of the number of hits used in the reconstructed segments has also been checked and found to be reasonable when quality selection criteria were applied.

RPC

The main activities in the RPC DPG have focused on the analysis of the data collected and on improving the tools for detector monitoring.

In order to protect CMS operations against an increase of the fake trigger rate due to noisy channels, RPC operations started in a conservative way, masking all channels exceeding a noise rate of 3 kHz. With this conservative requirement, the total fraction of masked and dead channels in the system was around 3.5%.

A detailed analysis of the noise is in progress, with the aim of reducing the rate by increasing the threshold, or of unmasking channels that do not degrade the fake trigger rate. After careful analysis, the fraction of dead and masked channels was successfully reduced to 1.2%. Work is in progress to understand the impact of this unmasking on the trigger and reconstruction performance.
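
The masking criterion itself is simple; a minimal sketch, assuming the single-channel rate is estimated from random triggers (gate length and data layout are illustrative), is:

def channels_to_mask(hit_counts, n_triggers, gate_ns=100.0, max_rate_hz=3000.0):
    # hit_counts: dict channel -> hits observed in n_triggers random triggers,
    # each opening a gate of gate_ns nanoseconds.  A channel is masked when its
    # estimated noise rate exceeds max_rate_hz (3 kHz for the initial operation).
    live_time_s = n_triggers * gate_ns * 1e-9
    return [ch for ch, n in hit_counts.items() if n / live_time_s > max_rate_hz]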

The largest effort went into the organization of software release validation and data certification. Tools for monitoring the noise and for certifying the run quality through the DQM have been improved. The software validation modules have been integrated into the central validation suite and are now ready to be run for each new release.

A common skim for RPC analysis has been defined that selects good muons from the Minimum Bias sample. The skim uses the JSON files to select runs that are good from the point of view of the muon system, and is analyzed by several people to study RPC efficiencies, make overall comparisons between data and Monte Carlo for standard RPC Reco variables, improve the synchronization, and study the trigger performance.

Although the statistics are not yet sufficient to evaluate the efficiency roll by roll, overall efficiencies for the wheels and disks have been produced and are in reasonable agreement with the results from cosmic data (around 93% for the barrel and 90% for the endcaps). The working point of the endcaps is under study to bring the performance of the system up to the level of the barrel.

The operation of the RPC PAC trigger hardware and online software has gone smoothly, with minor updates of the firmware and online software made recently. During the last months the main focus was on the synchronization of the RPC hits. The synchronization parameters were updated, first based on the analysis of beam-splash data and subsequently on collision data. After these corrections, the fraction of hits not assigned to the proper BX is < 0.5%, and the fraction of muon candidates generated in the incorrect BX (mostly BX+1) is about 0.3%. First studies are ongoing to look at the muon identification efficiency of the Pattern Comparator algorithm and to better understand the impact of the chamber efficiency (~90-93%) on the trigger results.

CSC

The CSCs have settled into standard data collection operation over the past months, although only recently have the collision rates been high enough to start collecting muon-triggered events. The general performance of the CSC system is stable, and the fraction of dead channels is slightly above 1%.
CSC has been approved for 'unattended operation' and thus in principle the CSC system can be fully monitored by the central P5 operations crew.

There is still a dearth of collision muons for detailed chamber efficiency studies, although statistics are slowly accumulating. Progress continues in adjusting the timing of the trigger and readout using both collision and halo muons.

Fig.3: Effect of CSC trigger timing change.

The first iteration of a timing scan with collision muons has been successfully performed with the CSC trigger. The result is that approximately 99% of triggers from the CSCs are correctly timed in with CMS (see Fig. 3). A set of corrections accounting for time differences in the electronic pathways at the peripheral-crate level has been derived and stored in the conditions database.

Another significant step forward has been the completion of the first track-based alignment of the CSC system, based on beam-halo muons and photogrammetry. The original photogrammetry measurements have been shown to be consistent with the measurements based on beam-halo muons, so the photogrammetry can be used to fill in the gaps where chambers do not overlap and the halo muons do not provide alignment values. This alignment has been approved and released for use in the ICHEP physics analyses.

Comparisons between minimum bias collision data and simulated data continue and in general show very good agreement for quantities related to the local reconstruction of muon tracks in the CSCs. The readout conditions need to be accounted for: since the CSC system was designed to trigger on muons, a trigger primitive is required before readout. This is a concern only when dealing with minimum bias data, in which muons come predominantly from π/K decays in flight and hadron punch-through, which cannot always be expected to trigger the system. The fully reconstructed muon tracks in the minimum bias collision data are also well described by the simulation.

A number of 'performance plots' were approved by CMS for public presentation, and additional plots will be added as more data are accumulated. Where necessary, the reconstruction algorithms are being improved, particularly in the handling of data from the ME1/1A chambers, which cover the highest pseudorapidity region of the CSC acceptance, from |η| = 2.1 to 2.4, and so are highly populated but have ganged strips in order to reduce the number of instrumented readout channels.

A very active topic is the determination of the CSC L1 trigger efficiency, covering both trigger primitive generation and the CSC 'Track Finder' reconstruction. This work is intended to supplement tag-and-probe efficiency studies. There is a gradual but gratifying increase in statistics for the onia resonance channels decaying to muons, most of which are detected in the endcap CSC regions. The few candidate Z→μμ and W→μν events which have been observed are whetting the appetite for future physics as LHC operation progresses.

COMPUTING

Introduction

Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way.

The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed across the different tiers and, in particular, on the distribution of RECO data to the T1s, which then serve data on request to the T2s, along a topology known as a “fat tree”. During this period the model was further extended by commissioning an almost full “mesh”, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load shared almost evenly between ALCA (Alignment and Calibration tasks - highest priority), commissioning and physics analyses. Latencies, in particular at the T0 and CAF, were well within design goals, allowing prompt reconstruction to be performed and calibration constants to be produced in a timely fashion.

There has been a continuous export of data from CERN, with high peaks during the first LHC “squeeze” at the end of April, tripling the initial transfer rate without any difficulties. Aggregated transfer rates of processed data from CERN to all T1s and T2s were well within the range of a few GB/s, and the system showed flexibility in dealing with occasional backlogs. The observed quality of service at the T1s for prompt skimming (selecting samples of data for particular analyses) and for reprocessing is satisfactory.

The vibrant activity at the T2s is an excellent indication of the expectations of the physics community, and a sense of the scale of the transfers is given in the Data Operations and Facilities Operations sections. The very high proportion of successful jobs can be directly linked to the readiness of the T2s: this key factor in a distributed environment has been constant during the past 12 months, an achievement only possible with the commitment and high-quality work of the staff at the remote sites, the CMS computing shifters and the Facility Operators.

Four new operators joined Core Computing during the last three months, a result of a high turnover and of the task re-engineering which has taken place in Facility Operations and Data Operations in order to support mission-critical tasks with limited resources.

In conclusion, the whole system – hardware and software - is stable and reliable. The data volume so far amounts to more than 100 TB of raw data, which is still modest compared to what is expected for the whole period of CMS operations with higher luminosities (around 3 orders of magnitude expected in the coming 18 months). But the past few weeks have given good indications about the capacity of the CMS computing system to deliver more, and to support the needs of the thousands of CMS physicists.

Facilities and operations

The bi-weekly ‘European/Asian CMS T2 support Meeting’, in addition to the already existing support meeting on the OSG side, provides central support to sites whenever needed. The average CMS Site Readiness for T2 sites is improving. Additionally, dedicated datasets were produced, and the JobRobot and SAM tests were moved to use the latest CMSSW and CRAB releases. Complementing the Site Readiness, the FacOps team is establishing a group to follow up Tier-1 production performance and resource utilization.

In this first period of 2010 data taking, the CAF job/user monitoring has been improved. The CAF is heavily used in bursts, with jobs always able to start almost immediately in the low-latency queues. Resource usage is broadly distributed among the different groups, with approximately 130 active users.

The pool of CMS Computing Shift Persons has been further extended to 60 persons in three time zones, distributed over ~10 remote centres around the world. Procedures for 24/7 coverage of the Critical Services are being deployed. The Computing Run Coordinator (CRC) now acts as liaison for the WLCG daily Operations calls, and feedback to CMS is reported at Monday’s Computing Operations meeting. All CRCs have been given TEAM/ALARM roles to open GGUS tickets, and the Savannah-GGUS bridge is now fully operational.


Fig.1: Site readiness for CMS Tier 2 sites

An HTTP group is being created to implement a single oversight body and service-operation best practices, specifically including security measures, for all CMS offline services delivered over HTTP/S and centrally supported and hosted at CERN by CMS. The project will act as a bridge between the Offline project (under DMWM) and the Computing project (under Facility Operations).

Finally, we would like to report that the CERN Facilities Operations manpower is now complete and that the new team has been fully operational since April 2010.

Data Operations

T0 Operations.  
Data collection from collisions at 7 TeV started at the end of March in the acquisition era Commissioning10. At that time MinimumBias was the only primary dataset used by physics; several skims from commissioning/PVT (for example GOODCOLL) and from physics (SD, CS) were run on MinimumBias as well. We recently moved to a physics setup of 8 PDs (Primary Datasets): Mu, MuMonitor, EG, EGMonitor, JetMETTau, JetMETTauMonitor, MinimumBias and ZeroBias, plus the default commissioning PDs (Commissioning, Cosmics, etc.), with Run2010A as the acquisition era. By the end of May, we had recorded almost 580 million events in RAW format, yielding a total data volume in Commissioning10 of over 430 TB, including all re-reconstructions.


Fig. 2: Simulated MC events per month.

Fig. 3: Simulated MC, size in GB per month.

Tier-1 Operations

Various re-RECO passes on real data and MC have been performed, as well as a complete re-digitization/re-reconstruction pass on the Summer09 MC sample. 500 workflows generated roughly 1500 output datasets; 500 million input events were processed in over 500 thousand jobs; the total input data size amounted to 400 TB, while the output data size reached ~400 TB for the RAW format and 220 TB for RECO.

MC production
350 million events were simulated during the past 3 months, for a total volume of over 430 TB. The completed samples are promptly announced on hn-cms-datasets@cern.ch as soon as they have been archived on tape at a T1 site.

RelVal
We produced almost 240 million events, consuming over 35 TB of tape space (at the Tier-0 and Tier-1s), in 3189 individual datasets for 20 releases.

Transfers  
In the last 90 days, we transferred over 0.8 PB from the T0 to the T1 sites, and over 3 PB from all T1 sites to all T2 sites (transfers of datasets for analysis).


Fig. 4: Cumulative transfer volume over last 90 days from T0 to T1 sites.
Fig. 5: Cumulative transfer volume over last 90 days from T1 to T2 sites.

User support

The User Support, together with the Physics Analysis Toolkit (PAT) team, has set up extensive study material for a remote e-learning course on "Using PAT in your analysis". The course is "virtual", i.e. there are no lectures and all material and exercises are available on the Web. A tutoring service will be available for registered participants during the tutorial week of 21-25 June.

This is the follow-up to the successful and well-received series of PAT courses with lectures and exercises, which will be continued later this year. The opportunity has been taken to consolidate the existing material and to add a thorough set of preparatory exercises for those with little or no knowledge of CMSSW. This approach was found to be very useful during the EJTerm course at Fermilab in January.

A team of motivated and committed experts and tutors has been formed around the PAT course, ensuring that the new, updated material reflects the commitment of those providing it. Maintaining CMS software and computing documentation requires continuous work. The CMS WorkBook has benefitted greatly from the PAT course: not only has the PAT part been restructured and updated, but the general material participants need to go through before entering into the details has also been optimized. Feedback is welcome as usual.

The CMSSW reference manual has also been updated. Major improvements have been made to make access to the class documentation quick and easy. The main entry page now presents the CMSSW directory structure based on the assignment of packages to the different projects, and quick links have been added to the SWGuide for general documentation and to the CVS browser for the source code.

Distributed Facilities and Services (DFS)

In 2010 Computing Integration has been split into two areas, CERN Facilities and Services (CFS) Integration and Distributed Facilities and Services (DFS) Integration. Stephen Gowdy and David Mason have been appointed coordinators of the former and Claudio Grandi and Jose Hernandez are the coordinators of the latter.

The DFS Integration activity will act as liaison between the various Computing Operations groups (Analysis, Facilities and Data Operations) and Offline DMWM developers for matters related to the distributed computing. It will coordinate the collection of requirements for the DMWM tools, bug reporting and release validation.

DFS Integration will also act as liaison between Computing Operations and the Offline CMSSW developers, collecting requirements and, more generally, observations related to the use of CMSSW by Computing Operations. This includes, for example, memory usage, I/O access patterns, and issues related to CMSSW deployment at the computing sites.

DFS Integration will report to CMS Computing & Offline on the activities of WLCG and related projects, on issues of potential interest to and impact on the CMS software, e.g. middleware features, security constraints, or changes in the infrastructure that may require changes in procedures. DFS Integration will collect requirements from CMS Computing & Offline and report them to WLCG and related projects, follow the implementation of the required functionalities, and report back to CMS. It will also advise Computing and Offline on the definition of procedures and policies in accordance with the CMS Computing Model, and on possible modifications to the model itself arising from the needs of Computing, Offline or other bodies.

Analysis operations

The Level 2 task "Analysis Operations" in Computing is focused on the operational aspects of enabling physics data analysis at the Tier-2 and Tier-3 centres worldwide. Its activities are subdivided into three subtasks: data movement, access and validation; CRAB-server operations and related analysis support; and metrics and evaluation of the global system for analysis.

Fig.6: Analysis jobs/day in last 3 months at CMS T2s.
Fig.7: Data transferred in the last 3 months from T1s to T2s; about half is data placement by Analysis Operations, which by now has placed about 1500 TB of data at more than 50 T2s.

The last months have seen the transition to a new version of CRAB and of the CRAB server. Both are now essentially complete in terms of the functionality needed to analyse the data in the current LHC run. The volume of operations has increased substantially, both on the users’ side and in central data placement, with resource usage now close to, or exceeding, the Computing TDR expectations.

Fig.8: Mail volume (#messages) handled by Analysis Operations on the CrabFeedback forum in 2010.

 

OFFLINE

Introduction

Since the last CMS Bulletin, the CMS Collaboration has successfully achieved many milestones and the Offline Group has played a central role in the fulfillment of all of them. While many moments have been of historical importance for the whole Collaboration (the first 7 TeV collisions, the huge media coverage on March 30th, the constant increase in luminosity), we consider the level of efficient and sustained operations the group was able to reach over this period as our main achievement.

First collisions were delivered by the LHC on March 30th. This was an important day not only for the physicists finally seeing their detector recording events of unprecedented energy, but also for the close attention the whole world gave us, with tens of journalists physically present in the CMS Centre. The successful accomplishment of that day, when we were able to display “live” events, starting just 1 minute after the first collisions actually happened, was the result of a huge effort by the Offline team in many areas:
  • the CMSSW_3_5_0 release was ready just a few days before the event, with CMSSW_3_5_1 being deployed in the very last hours to fix critical problems,
  • suitable global tags were created,
  • Express Processing was exercised the night before the collisions to provide fast feedback on the beam positions when non-colliding beams were present in the machine,
  • and, finally, the new visualization system for P5/CMS Centre was tested.

This last component had only just been commissioned, and was working in a substantially different way compared to the 2009 Run.

A central processing server located in the CERN Computing Centre is used to reconstruct a selected stream of events coming directly from P5. The selection can be performed either upstream, in which case the stream contains only certain HLT paths, or downstream, in which case one can select or discard beam splashes, machine-induced events, etc. The workflow used is taken directly from the Express Reconstruction, via a DAS call. Computers located at P5 and the CMS Centre have been set up to serve quasi-online event displays (i.e. with less than one minute delay) running both Fireworks and iSpy. The system was set up to be ready for the first beam splashes, and subsequently for collisions. Since then, it has been fully operational, has proven stable, and has met all requirements. As the events are processed before the Express Stream sees them, this system offers a “first line” of processing where errors can be spotted very quickly.

Since the Media Event on March 30th, all Offline components have been fully operational. DQM shifts have been run with 24 hour coverage in periods with beam, with the usual sharing of efforts between CERN, DESY and FNAL, and the PVT has certified data and reprocessing on a weekly basis.

New features have been added to reconstruction, such as much improved cleaning of noise and a more precise treatment of ECAL spikes, where attempts are being made to simulate the underlying mechanism. Simulation has improved in many other areas, for example the tracker gains and noise model are now vastly improved; also inputs from the Heavy Ions group have been taken into consideration.

The Analysis Tools group has stabilized the PAT for the whole 2010 Run for the benefit of those doing analysis; the PAT has also been demonstrated to be a viable solution for fast turnaround of analyses, and was used to rapidly prepare and yield results on 7 TeV data for March 30th. The Database Group is performing a major review of the code, whilst at the same time helping the DPG/POG/PAG groups to migrate their payloads quickly and effectively to the "dropbox mechanism". This mechanism offers an elegant solution for the uploading of Express, Prompt and Offline payloads, guaranteeing data reproducibility, with intervals of validity checked and confirmed automatically before being pushed to the database.

The Fast Simulation group is continuing its efforts to match the data coming from the detector, with greater and greater success. The AlCaReco team is working on implementing the prompt calibration loop: prompt reconstruction will be delayed to make sure data are processed with optimal constants, which will be delivered within 48 hours. Whilst the new system is ready to be used, it was decided to delay its deployment until after ICHEP; the time between now and the end of July will be used to integrate the calibration loop to run at the Tier-0, which will allow for a complete automation of the workflows. The concept of validation for fast processing has also been introduced: an AlCaReco workflow will run on the data and produce DQM results, which will be automatically harvested and stored in a validation DB. A subsequent Tier-0 workflow will use them, leaving to the DQM experts the final word about the quality of the validation payloads. In the case of positive feedback, Prompt Reconstruction will run directly with these payloads, and results that previously required a re-reconstruction with up-to-date constants will be available within a 48-hour time frame.

These months have also been hectic from the release point of view. Release CMSSW_3_5_X was deployed in production just a few days before the start of the Run, eventually reaching version 3_5_8. At the same time CMSSW_3_6_0 was being prepared and was finalized on April 15th, with CMSSW_3_6_1 being deployed at the Tier-0 as the current production release in the second half of May. The CMSSW_3_7_0 development cycle is now closed, with the first release made on schedule on May 27th, and at the same time the CMSSW_3_8_X cycle was opened. The agreement with Physics has been not to push any new features into the 3_6_X cycle and to go straight for 3_7_X validation; 3_6_X will remain the production release at the Tier-0 up to ICHEP, to minimize the risk of jeopardizing data taking.

Two full days were spent by a large fraction of the Computing and Offline management team at FNAL, at the start of May, for a Data Management and Workload Management Workshop. Discussions focused on the definition of a timeline and a series of milestones towards the deployment of the next generation of tools. CRAB2 development has been mostly frozen (apart from a few features considered vital for analysis), and all effort is now being put into producing a CRAB3 prototype by the end of the year. The same holds for ProdAgent, which will be replaced on a short time scale (end of summer) by the new WMAgent-based tool.

As already mentioned, it was decided in the workshop to move the Prompt Calibration Loop to the Tier-0 infrastructure, with the beam-spot calibration being used as the first test case. Other important discussions covered DAS, the RunRegistry and the possibility of producing run-dependent Monte Carlo samples; in these cases, too, milestones have been set. The general feeling was that the workshop had been very useful for planning the future DMWM work programme.

Finally it has been decided that, following the success of last year’s Amsterdam experience, an extended Offline management Meeting will take place in the week starting on July 12th at CERN.

Generators

In the last three months the main objectives of the generator group have been the management of the simulation reprocessing and the tuning activities.

About 500M Spring09 simulated events have been reprocessed with the latest available release, and more than 100M new events produced. In parallel, several "data-like" samples, using the most realistic beam-spot conditions, have been prepared. These samples have been used over the last two months for comparisons with the early data, mostly at 7 TeV. The very good quality of the detector simulation has allowed several differences to be identified that are due to the physics models for minimum bias and the underlying event used in the generators, mostly PYTHIA6.

New experimental tunes have been provided, starting an intense activity which is still ongoing. Work has started to improve this process through the adoption of a more robust tool to document the analyses and to scan the parameter space, finding the optimal combinations to describe the data.

In parallel, the routine activity of updating the libraries to the state-of-the-art versions of the generators has continued, and an effort has been put into building a new validation procedure to be used both in testing code development and for the physics quality assessment of new versions.

Full Simulation

Work has focused on improving Monte-Carlo/Data agreement and on the preparation of tools for future simulation use.  As the 7 TeV data roll in, detailed comparisons between the detector simulation and the actual detector performance can be made at the level of individual detector elements. This has led to improved models of the noise levels in both ECAL and HCAL, and more realistic modeling of hit efficiencies in the muon systems.

The Tracker Group has implemented a much more realistic simulation of charge deposition and saturation using the individual channel gains. This will dramatically improve the simulation of dE/dx loss in the tracker.

Progress has also been made in simulating some of the unanticipated "features" found in the real data. The ECAL simulation now includes the possibility to simulate highly ionizing particles crossing the APDs, which are thought to generate the "spikes" seen in data. The simulation of HF has been updated to include the energy deposition generated by particles striking the phototube faces. This effort is part of a larger project to optimise the simulation of the forward detectors using shower libraries and/or parametrised showers, in order to dramatically improve the simulation performance in this region.

Work continues on the tuning of GFlash to collision data in the hope that it can be deployed at some future time with the effect of reducing the CPU time required to simulate events, and as an improved input for Fast Simulation.

Reconstruction

The reconstruction team continues to adapt the reconstruction algorithms to the detector conditions observed in the 2010 data taking. Examples include further tuning of the track reconstruction, incorporating knowledge gained from studying collision data and mostly concentrating on the constraint on the origin of the tracks (collision, secondaries, beam-gas, etc.). Improvements have also been made to the treatment of calorimeter data through the application of noise-cleaning algorithms. These have been integrated primarily into the CMSSW_3_6_X cycle and partly back-ported to recent CMSSW_3_5_X releases for more rapid integration into the release used for re-reconstruction productions.

CMSSW_3_6_X has recently been put into production on the Tier-0 for reconstruction processing. Improvements include CSC local reconstruction tuning, the addition of Jet+Track algorithms, a centralised track-to-calorimeter extrapolation algorithm, new tau algorithms, and updated electron identification code.

The CMSSW_3_7_X release cycle includes further improvements to particle flow, tcMET, beam halo ID algorithms, and new Hcal rechit flags. From a technical point of view, we continue to streamline the maintenance of the production reconstruction configurations and have a testing and validation procedure in place to help ensure smooth production operations. We continue to collaborate with the performance group on the technical performance of the reconstruction algorithms.

We continue to strive to consolidate the RECO team with new members in order to perform smoothly all the tasks related to reconstruction.

Alignment and Calibration

A major focus has been to ensure a stable setup for the startup of high-energy collisions and the LHC media event. An important step for consolidation of the handling of database constants is the introduction of the offline dropbox, which also allows more thorough consistency checks of the intervals of validity for the constants payloads. The policy of conditions database tags for reprocessing has been further adjusted: while constants covering more recent validity ranges can continuously be appended, frozen copies can be generated at any time for re-processing, maintaining the full traceability of the global tag.

The alignment & calibration software framework has been carefully improved in many details for the 36X and 37X release series. The selection strategies for the AlCaReco skim producers have been adjusted to follow the development of the trigger menu with increasing luminosity, and the migration to the scheme of eight primary datasets. The prompt calibration concept has seen significant further development, which will hopefully lead to its use in production soon after the ICHEP conference.

Database

The CMS online database has been improved in preparation for data taking. Most of the online projects for the DCS have been reviewed. Old historical data have been archived and the data rate has been optimized and reduced to a reasonable value, a compromise between physics needs and space availability. An on-call service has been organized.

Concerning the offline database, a tutorial has been given for the alignment and calibration contacts. The topics covered included a general overview of databases in CMS, the use of the offline DB in alignment and calibration, how to design a new offline DB record, how to inspect, modify and duplicate tags with the command-line tools, and how to insert data in the offline DB via PopCon and the online and offline drop-boxes. The training was very successful, with about twenty people following the course.

The introduction of the offline dropbox has made the insertion of new data simpler and more robust, and it has allowed automation of new data insertion from calibration jobs.

Data browsing is also being improved thanks to the new offline DB Browser, which supports browsing of the data in all accounts and displays the content of each tag.

An offline database and calibration on-call service has been organized to guarantee expert availability during data taking.

Fast Simulation

The Fast Simulation has also finally entered the exciting era of real data taking. Years of careful design of fast algorithms and emulators, as well as tuning on full simulation and test-beam data, have resulted in a tool that can successfully be used for comparisons with real data. It has the potential to be very helpful in obtaining first physics results based on the Minimum Bias events collected in 2009 and at the beginning of the 2010 LHC data taking.

In order to facilitate the comparison between the Fast Simulation and the real data collected by CMS, a filter emulating the effect of the technical triggers, which are not directly simulated in the Fast Simulation, was developed and provided to the users. Comparisons of various distributions (such as track multiplicity, missing ET, etc.) between the Fast Simulation, the real data and the Full Simulation were shown during internal meetings and at the offline workshop in April, and were found to be in remarkable agreement in most cases. This is an outstanding and encouraging result, considering that the Fast Simulation was never specifically tuned for the low-pT physics of Minimum Bias events.

The very good agreement between the Fast and Full Simulation in track multiplicities has led, in particular, to the first example of the Fast Simulation being used to obtain physics results. Recently the Fast Simulation has been used extensively for the rapid production of more than 100 million Minimum Bias events, and of several representative physics samples at 7 TeV, using several specially prepared PYTHIA tunes, in order to study the best choice of parameters for the coming MC production. This is an impressive example of the usefulness of the Fast Simulation for physics studies in CMS. The Fast Simulation also has strong potential for use in other areas, for instance quick systematic studies, scans of complex multi-parameter models (like SUSY), and large-scale background estimates.

In view of the presentation of CMS results at ICHEP and other physics conferences, the Fast Simulation team is now pursuing a campaign to encourage, and in fact to request, that whenever a comparison of the distributions of physical observables in data with the Full Simulation is provided, the corresponding FastSim comparison is also approved and published in the Physics Notes and in the catalogue of approved CMS results. This would require little extra effort from the physics teams involved in the analyses, and these figures will serve as benchmark performance plots demonstrating the status and the range of applicability of the Fast Simulation in CMS.

In the meantime, the Fast Simulation is still an evolving tool, which is being improved functionally to cope with the growing requirements and demands of the LHC physics searches. Some of the latest additions include: the Global Run (GRun) HLT menu, added to those already supported (1E31 and 8E29); an improved electromagnetic shower simulation in the ECAL endcaps behind the PreShower; and an emulation of the muon-hit association inefficiency in reconstructed tracks resulting from delta-ray emission in the DT and CSC chambers.

Analysis Tools

The Analysis Tools group has been very active in support of the many physics analyses for ICHEP, as well as providing guidance for the analysis of the first data taken at 7 TeV. Specific activities include: another successful PAT tutorial (including information about accessing data correctly) and the subsequent successful deployment of web-based tutorial activities; a "data processing tutorial" intended to instruct new users in the details of data analysis and to serve as a liaison to the Physics Validation Team; the development of POG selection software that is distributed to users for proper object identification; and the maintenance of the Physics Analysis Toolkit, which provides the configurations needed to run correctly over the various Monte Carlo and data samples. Furthermore, we have been active in evaluating the analysis interface to the computing model, keeping regular contact with Analysis Operations and the Primary Dataset Working Group.

Data and Workflow management

The DMWM Project would first of all like to thank Rick Egeland for all his hard work on PhEDEx over the last few years. We would also like to welcome Nicolo Magini aboard as the new L3 Manager for PhEDEx. A constructive handover workshop was held at Bristol, where the development plans for the data transfer system were discussed in order to ensure a smooth changeover.

Development work has been moving on apace on the new Workload Management System, with the Tier 1 processing system nearing rollout for integration and operations.  As soon as it enters the testing phase, the development focus will shift to analysis processing and simulation.

A very successful analysis-centric design and coding sprint was held in Perugia in the spring and will be followed by further sprints during August, September and the remainder of the year to push out new services. Anyone who can churn out Python code is welcome to attend any of these code sprints, although places are limited and preference is given to DMWM developers.

Data Quality Monitoring

The Data Quality Monitoring system (DQM) provides event-data based histograms as well as detailed data certification results from automated data quality checks.

The full system comprises a comprehensive view of all sub-detectors and the trigger, in real-time during online data taking, during the prompt and re-reconstruction at Tier-0 and Tier-1 centers, as well as histograms for Release and Monte Carlo validation. The results are published through a web-based application (DQM GUI) allowing CMS users worldwide to inspect the data of all runs recorded, including reference histograms and by-run trends and history plots of basic histogram quantities.

Central DQM shift persons inspect the data of each run. Two DQM shifts are organized: for online data, 3 shifts of 8 hours each; for offline data, 4 shifts of 6 hours each. The shifts, performed at P5 (online) and at CERN (CMS Centre), FNAL and DESY (offline), produce data certification results for each run, per detector subsystem and physics object reconstruction. Since the beginning of 2010, about 100 different people have taken DQM shifts.

The Run Registry (RR) is a web application with a database back-end that serves the bookkeeping of the shift results, the tracking of the shift and sign-off workflows, the browsing of the results by CMS users, as well as the generation of the final good-run lists. It is capable of handling certification results by run and by dataset. Good-run lists are published weekly after sign-off, in a file format that can be given directly as input to analysis (CRAB) jobs.
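
These good-run lists are JSON files mapping each run number to the certified luminosity-section ranges, e.g. {"132440": [[1, 100], [103, 200]]}. A minimal sketch of how such a mask can be applied outside of CRAB (the function names are illustrative) is:

import json

def load_good_lumis(json_path):
    # Returns {run_number: [[first_ls, last_ls], ...]} with integer run keys.
    with open(json_path) as f:
        return {int(run): ranges for run, ranges in json.load(f).items()}

def is_certified(run, lumi_section, good_lumis):
    return any(lo <= lumi_section <= hi for lo, hi in good_lumis.get(run, []))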

Since the beginning of the year the DQM system has been upgraded for improved performance and user-friendliness, and to incorporate by-luminosity-section (LS) tracking of the results. The RR retrieves by-LS conditions data (high voltage as well as beam status) from the database; this information is taken into account in the creation of the good-run lists. By-LS histogram handling was also introduced in offline DQM, in view of the goal of moving to a higher level of certification automation, with finer resolution in time (by LS) and geometry (by sub-detector subcomponent).

Since the beginning of data taking, the DQM configuration, trigger selection and code have been continuously tuned to optimize the sensitivity to problems and to improve performance. In the area of online DQM the infrastructure was improved for better event-selection capabilities and maintainability. Substantial improvements were introduced in the DQM run-control code, as well as in the event-server and event-processor code. As one of the prominent upgrades of online DQM, track-reconstruction-based beam-spot monitoring was introduced, providing the coordinates of the collision vertex position and its extent in real time to both the CMS and LHC control rooms.

Further improvements towards automation and scalability are underway. Much of the RR code has been re-factored in view of integrating it with the web-based monitoring (WBM) system, thus making it accessible worldwide to CMS users. As new additions, the next version of the RR will contain results from algorithm-based “automatic” certification by LS, as well as improved versioning capabilities. An offline instance of the WBM is planned, so that certification results are accessible to CMS users at all times, even when P5 is down, and for scaling reasons. Furthermore, extensions are underway for the quality monitoring of Monte Carlo production data; in this area the focus is on the monitoring of basic generator quantities as well as the correct reconstruction of the physics objects.

In conclusion, with the beginning of the 2010 collisions data taking, the full end-to-end DQM chain was put into production and has proven to be a reliable and efficient means for problem detection (online) and the assessment of the data certification for physics analysis. Improvements are underway to further optimize performance while reducing maintenance and operations efforts to a level that is sustainable over long periods of time.

Physics Validation Team (PVT)

The Physics Validation Team (PVT) started its operations in Summer 2009 with the goal of providing the collaboration with fully validated datasets for physics analyses. It serves as a central forum for discussion and logging of the status of the validation; all practical information relevant to the use of the recorded data is collected there to help users with their analyses.

The regular PVT meetings serve two main purposes, namely the formal sign-off of Alignment and Calibration (AlCa) constants and of the run-by-run certification results. Specific details relevant to the determination of the constants are discussed in the AlCa meeting. The effect of these changes, even when small (e.g. noise modelling, alignment, dead channels), can propagate with a large effect to the high-level reconstructed objects; this is discussed and presented to the PVT, with the participation of the POGs, before new constants are deployed for reprocessing. The data quality certification and bookkeeping rely on the DQM system. Every week the list of runs certified as good for analysis is updated and published in special (JSON) files that are used by the analysis teams. The new version of CRAB already supports the selection of good runs at the level of the CRAB job configuration.

The first task was the validation of the large-scale Monte Carlo production in Summer 2009, and there was a very good response, in time and quality, from the validators. At the end of 2009, the proton collisions at 900 GeV were the first road test in an operational environment. In preparation for the Winter Conferences, the PVT coordinated the validation of seven re-processings of the full datasets as well as the production and validation of the corresponding Monte Carlo samples.

Now the focus is on the preparation for the ICHEP conference. The goal is to provide the analysis teams with the largest possible amount of collected data processed with the best available release. The schedule is tight but, with the first pre-approvals approaching, the situation is well under control. The analyses producing results for ICHEP will use one of the two chosen releases, either the 36X series or 37X; these releases are in the validation phase and feedback is expected soon. The set of calibrations and alignments approved for the ICHEP conference has already been applied to the latest round of reprocessing.

It should not be overlooked that the PVT is also committed to providing the validation of the Monte Carlo datasets needed for analyses before the full production is launched. This was already the case for the large “Summer 2009” production and is now ongoing for the reprocessing (re-digitization/re-reconstruction) of those samples with the latest software release, for use in the ICHEP analyses.

Visualization / event-display

This year began with a major push towards the consolidation of all visualization-related efforts (i.e. Fireworks, iSpy, and Iguana). Fireworks was chosen as the baseline event display, with the intention of extending its functionality to include the existing features of iSpy and Iguana, and of supporting operation from within the full software framework in a cmsRun job.

A new call for user inputs was made to help guide the development priorities (see Requirements document), and a developers' workshop, comprising personnel from all three visualization efforts, was held in February in San Diego to determine the work-plan for the year.

While keeping the existing programs functional, to avoid disruptions to data taking, commissioning and physics analysis, the new team engaged in transplanting additional iSpy features into Fireworks [see figure]. At the same time, the core object-management system was generalized to allow for better code reuse and for various optimizations in memory and CPU consumption. An alpha version of the consolidated event display was released in early May, and the official release will follow in mid-June.

After that, the emphasis will be put on integration with the full framework and on the development of a detailed geometry browser.




PHYSICS

The Physics Groups are actively engaged on analyses of the first data from the LHC at 7 TeV, targeting many results for the ICHEP conference taking place in Paris this summer.

The first large batch of physics approvals is scheduled for this CMS Week, to be followed by four more weeks of approvals and analysis updates leading to the start of the conference in July.

Several high-priority analysis areas were organized into task forces to ensure sufficient coverage from the relevant detector, object, and analysis groups in the preparation of these analyses. Some results on charged-particle correlations and multiplicities in 7 TeV minimum bias collisions have already been approved. Only one small detail remains before ICHEP: further integrated luminosity delivered by the LHC! Beyond the Standard Model measurements that can be done with these data, the focus will shift, in the period after ICHEP, to the search for new physics at the TeV scale and for the Higgs boson.

Particle Flow

The PFT group is focusing on the commissioning of the lepton identification and reconstruction after having confirmed the robustness of the reconstruction and identification of the charged hadrons, photons, and neutral hadrons at 7 TeV.

While waiting for a larger sample of leptons from the decays of W, Z and J/ψ, new Monte Carlo studies have been conducted. They show that the reconstruction and identification of electrons and muons in the particle-flow algorithm, as well as the particle-based lepton isolation, are now understood.

In EWK analyses, the particle-flow muons perform in the same way as standard muons, and the containment of the particle-flow electrons has been brought to the level of the EGM-reconstructed electrons. The particle-based electron and muon isolation now outperforms the traditional detector-based isolation approaches, and random-cone studies show that the particle-based isolation efficiency can be extracted from a MinBias data sample. These new developments are available in CMSSW_3_7_0, and will be documented in PFT-10-002 and PFT-10-003.
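
The particle-based isolation mentioned above amounts to summing particle-flow candidates in a cone around the lepton; a minimal sketch (cone size and candidate representation are illustrative, not the PFT working point) is:

import math

def delta_r(eta1, phi1, eta2, phi2):
    # Angular distance, with the azimuthal difference wrapped into [-pi, pi].
    dphi = math.remainder(phi1 - phi2, 2 * math.pi)
    return math.hypot(eta1 - eta2, dphi)

def relative_isolation(lepton, pf_candidates, cone=0.3):
    # Sum the pT of all particle-flow candidates (charged hadrons, photons,
    # neutral hadrons) in the cone, excluding the lepton itself, over lepton pT.
    iso_sum = sum(c['pt'] for c in pf_candidates
                  if c is not lepton
                  and delta_r(c['eta'], c['phi'], lepton['eta'], lepton['phi']) < cone)
    return iso_sum / lepton['pt']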

Muon

Studies of the muon reconstruction performance expanded in scope and intensity with the advent of collisions at 7 TeV this year, as the number of muons recorded exceeded the 2009 count already on the first day of data taking and increased by several orders of magnitude over the following weeks.

Almost all of the measured distributions of muons are reproduced very well by the Monte Carlo simulation, and good progress is being made in understanding the few variables for which data and MC do not yet match perfectly. Even though about 3/4 of the muons recorded so far are expected to arise from pion and kaon decays, the very loose trigger selections used during this period made it possible to study the efficiencies of the muon HLT triggers by analyzing good-quality muons reconstructed offline; these efficiencies generally agree with MC predictions. First estimates of various reconstruction, identification, and trigger efficiencies obtained by applying the “tag-and-probe” method to muons from J/ψ decays have become available, as well as fake rates evaluated on high-purity samples of pions (from K0S decays), kaons (from φ decays), and protons (from Λ decays). Overall, the prospects for ICHEP look good.
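
The “tag-and-probe” technique referred to here pairs a well-identified tag muon with a loosely selected probe in the J/ψ mass window and measures the fraction of probes passing the selection under study. The sketch below uses a simple counting version (in practice the J/ψ peak is fitted to subtract background), with placeholder names and an illustrative mass window.

def tag_and_probe_efficiency(pairs, passes_selection, mass_window=(2.9, 3.3)):
    # pairs: iterable of (tag, probe, invariant_mass) tuples, mass in GeV.
    # passes_selection: function applied to the probe (e.g. an ID or trigger match).
    n_probes = n_pass = 0
    for tag, probe, mass in pairs:
        if mass_window[0] < mass < mass_window[1]:
            n_probes += 1
            if passes_selection(probe):
                n_pass += 1
    return n_pass / n_probes if n_probes else float('nan')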

Finally, as a reminder of the intense studies of cosmic muons conducted in 2008 and 2009, a paper describing the measurement of the charge ratio of atmospheric muons (arXiv:1005.5332) has gone through all rounds of CMS refereeing and was submitted to Phys. Lett. B.

Electron/Photon

With the 7 TeV data, the EGM group is continuing the activities started earlier in 2010 with the data at 900 and 2360 GeV, aimed at commissioning the electron and photon reconstruction and selection algorithms.

Four PASs, the result of a very good collaboration between the EGM group and the ECAL DPG, are foreseen for the ICHEP conference. Two of them are managed by the ECAL DPG and will cover the ECAL performance and calibration with the first 7 TeV data. The other two will cover the electron and photon commissioning and the first performance measurements on data. Given that the integrated luminosity that can be analyzed before ICHEP is expected to be of the order of 100 nb-1, we consider it very unlikely that analyses using electrons from Z decays for the measurement of the selection efficiency or energy resolution will be possible.

The main objective of the current electron and photon analyses is to commission all the variables used in the selection and to obtain first measurements of the efficiency, purity and fake rates for given reference selections supported by the EGM group. The primary source of prompt electron signal will be electrons from W decays, which we expect to select with high purity in events containing a reconstructed electron or ECAL cluster, missing transverse energy, and very little additional hadronic activity. We have also observed a J/ψ → ee peak that will serve to commission the reconstruction of electrons at low pT; a dedicated low-pT double-electron trigger is used to collect these events.

All analyses are rapidly evolving as the data arrive, and we are now moving from analyses of all events, based on minimum bias triggers, to analyses using high-pT photon and electron triggers. It is now becoming important to understand the turn-on curves of these triggers as well as their efficiencies.

The EGM group collaborates strongly with the PFT and Physics Analysis Groups on electron commissioning and on physics analyses. Additionally, the EGM group is working with these groups to define and put in place a common group skimming of electron and photon samples, expected to satisfy the needs of most of the physics groups, which could later evolve into centrally produced skims.

Tracking

With the LHC collisions at 7 TeV this year, the Tracking POG started by re-commissioning the track reconstruction, as was done for the 900 and 2360 GeV collision data, and then quickly moved towards making quantitative measurements of the tracking performance.

Many tracking results from last December’s data are being published (TRK-10-001) and now, with ICHEP on the horizon, four separate PASs are planned to document the tracking efficiency, the tracker material, the momentum scale, and the primary vertexing performance.

As CMS has accumulated more integrated luminosity, it has been possible to reconstruct more known resonances to validate the overall performance of the tracking, e.g. the K0S, Λ0, and φ peaks. With the added data, decays such as Ω− → Λ0 K−, and charm decays including D+ → K−π+π+ and D* → D0π with D0 → Kπ, have been reconstructed. The two plots included here show the D0 → Kπ peak and the D*−D0 mass difference.

These are much more than just PR plots. From the ratio of the numbers of reconstructed D0 → Kπ and D0 → K3π decays, the tracking efficiency can be extracted. With the data already recorded, samples of reconstructed resonances are being used to study the momentum scale, and large samples of reconstructed conversions and nuclear interactions are used to study the tracker material. The ability to reconstruct multiple interactions is also under study, which will be crucial as the LHC delivers higher luminosity. The results of these studies should be ready for ICHEP, but they are only the first step in the programme of measuring the tracking performance with data.
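
Schematically, the K3π final state has two more tracks than the Kπ one, so the ratio of yields carries two extra powers of the average single-track efficiency ε (the acceptance and non-tracking selection terms A are in practice taken from simulation, and B denotes the known branching fractions):

\[
\frac{N_{K3\pi}}{N_{K\pi}} = \frac{\mathcal{B}(D^0 \to K3\pi)}{\mathcal{B}(D^0 \to K\pi)} \, \frac{A_{K3\pi}}{A_{K\pi}} \, \varepsilon^{2}
\qquad \Longrightarrow \qquad
\varepsilon = \sqrt{ \frac{N_{K3\pi}}{N_{K\pi}} \, \frac{\mathcal{B}(D^0 \to K\pi)}{\mathcal{B}(D^0 \to K3\pi)} \, \frac{A_{K\pi}}{A_{K3\pi}} }
\]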

Jet/MET


The JetMET group produced new approved results from the 7 TeV collision runs, confirming the good understanding of jets and MET. Three different CMS reconstruction methods (calorimeter-only, track-corrected, and particle flow) were investigated using about 0.3 nb-1 of minimum bias events recorded in 2010.

Fig.1: D0 → Kπ peak and the D*-D0 mass difference.

A sample of dijet events was selected and used to monitor jet quantities, extending the studies to larger transverse momenta compared with the previous 900 and 2360 GeV results. Fig. 2 shows the good data-MC agreement for the dijet invariant mass; the same level of agreement is seen for the other kinematic and jet-quality variables. MET was cleaned of the effects of instrumental anomalies and beam-induced backgrounds, significantly reducing the tails. Inclusive and dijet events were used to monitor the MET variables, which agree well with simulation. As an example, Fig. 3 shows the MET spread as a function of the scalar sum of the transverse energy for data and MC.

Jets and MET can then be used in the wide CMS physics programme expected for ICHEP. The JetMET group is now working on extracting more detailed information from the data, such as the jet response and the MET corrections.


Fig.2: data-MC comparison of the dijet invariant mass.
Fig.3: MET spread versus scalar sum of the transverse energy for data and MC.

QCD

With the addition of 7 TeV data, the QCD group quickly published a paper (arXiv:1005.3299) extending the charged-hadron distribution analysis that was the subject of the first CMS publication. The preliminary analysis was completed within a few hours of collecting data, with detailed checks and systematic studies carried out in due course.

The QCD group has also submitted for publication a paper (arXiv:1005.3294) on Bose-Einstein correlations, which measures spin-statistics correlations between same-charge pions in the CMS minimum-bias datasets at 0.9 and 2.36 TeV, concluding that the size of the correlated particle-emission region increases with particle multiplicity.

Another particle-correlation study is approved and heading towards publication, this one measuring two-particle correlations in η in the 0.9, 2.36 and 7 TeV data and modelling the result with a simple parameterisation that assumes particles are the decay products of “clusters” emitted independently in the initial interaction. The effective cluster size and decay width can then be extracted from the data using this parameterisation, which shows a dependence of the cluster size on the beam energy, while the cluster decay width is roughly constant. Additional analyses on multiplicity distributions, identified-particle spectra, and charged-hadron spectra at high PT are well advanced and are expected to be completed for ICHEP.

One important job of the QCD group is to tune the CMS Monte Carlo generators to better model the underlying event and minimum-bias data. This is especially important as we head into the era of pile-up, where every hard-scatter event will have many minimum-bias events to keep it company. This effort is an LHC-wide one, with a recent joint workshop involving ATLAS, CMS, ALICE and LHCb:
http://indico.cern.ch/conferenceOtherViews.py?view=standard&confId=87647

New tunes for CMS are coming and have already been shown to better model the underlying event (the part of the event that is not associated with the hard scatter). The data analysis that inspired the new tunes has been completed at 900 GeV and is well advanced for the 7 TeV data.

The QCD group, and the associated Jet Task Force also spanning the DPGs and POGs, has many jet analyses heading for ICHEP. One important analysis is the inclusive jet cross section using all four jet types: calorimeter-only jets, particle-flow jets, JPT (jet-plus-tracks) jets and track jets. This analysis is well advanced and provides a good cross-check of the different jet types against NLO predictions. In addition, several dijet analyses are working towards ICHEP, two of them shared with the Exotica group and looking for new physics in dijet mass resonances or cross-section ratios. Other dijet analyses include azimuthal decorrelations, a good observable for Monte Carlo testing and tuning, and angular distributions, which are sensitive to contact interactions. Jet substructure and overall event-shape variables are also being studied and compared to Monte Carlo.

The QCD group is also studying photon production. Prompt photons, coming directly from the hard interaction, are a good probe of perturbative QCD. In addition, they are a background to the Higgs decay into two photons and to other searches using photons. The inclusive-photon and inclusive photon-plus-jet(s) spectra are two analyses targeted for ICHEP. In total, the QCD group has 13 analyses specifically targeted for ICHEP and 5 that are already public (3 of which are published or submitted to journals).

B-Physics

The task force on measuring the inclusive b cross section is making very good progress. The analysis based on the muon relative PT has first estimates of central values from 2 nb-1 of data, which agree well with Pythia. The relative-PT analysis that also requires b-tagged jets (i.e. with secondary vertices) has processed 10 nb-1 and has first estimates of b-fractions, but not yet central values. The inclusive b-tagged jet cross section has first estimates of central values from 1 nb-1. Measurements of b-jet correlations in a final state with J/ψ → μμ plus an additional muon need at least 500 nb-1 for two Δφ bins. Alternative methods without leptons (inclusive vertex tagger) look promising and can also work with small integrated luminosity.

A second task force in the B-physics area is focused on measuring quarkonium production, J/ψ and upsilon, in their decays to muons. There are two principal motivations for the study of the J/ψ and upsilon at CMS: (1) the elucidation of the hadroproduction mechanism of the J/ψ and upsilon in proton-proton collisions, which is not presently understood, and (2) the J/ψ and the upsilon constitute standard candles to calibrate the detector response to low-PT muons at CMS. The J/ψ cross section is expected to be about 100 nb. A plot of the invariant mass of opposite-sign muon pairs for ~15 nb-1 of data is shown below. The fit (black line) is to a Crystal Ball function. The clearly visible low-side tail is due to final-state radiation. The blue dashed line is the opposite-sign combinatorial background. The upsilon cross section is expected to be about ten times smaller than that of the J/ψ, and so the upsilon is harder to find. The radial excitations of the upsilon, known as the Y(2S) and Y(3S), are close in mass, with the latter two overlapping when convolved with the CMS mass resolution. A collision with a candidate upsilon decay to muons is shown in the display below.
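For reference, the Crystal Ball function used in such fits combines a Gaussian core with a power-law tail on the low-mass side, modelling the radiative losses; its standard form is

f(m;\alpha,n,\bar m,\sigma) = N \cdot
\begin{cases}
\exp\!\left(-\dfrac{(m-\bar m)^2}{2\sigma^2}\right), & \dfrac{m-\bar m}{\sigma} > -\alpha,\\[6pt]
A\left(B-\dfrac{m-\bar m}{\sigma}\right)^{-n}, & \dfrac{m-\bar m}{\sigma} \le -\alpha,
\end{cases}
\qquad
A=\left(\frac{n}{|\alpha|}\right)^{\!n} e^{-|\alpha|^{2}/2},
\quad
B=\frac{n}{|\alpha|}-|\alpha|.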

Fig.4: Invariant mass of opposite sign muon pairs (~15 nb-1 of data).
Fig. 5: Example of a candidate upsilon decay to muons.

Electroweak


The electroweak Physics Analysis Group focuses on the study of the production and decay of electroweak vector bosons with the CMS data.

The results aimed at the ICHEP conference include the first measurements of the W and Z production cross sections in proton-proton collisions at a 7 TeV center-of-mass energy. These measurements rely on the understanding of the selection efficiency for high transverse momentum isolated electrons and muons.

Although it is possible to determine efficiencies from the data, the ICHEP results, due to the limited statistics, will have to be based on efficiency estimates from Monte Carlo simulation. The Vector Boson Task Force has been created to carry out these cross-section measurements.
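Schematically (an illustrative sketch rather than the exact CMS prescription), each cross-section measurement takes the form

\sigma(pp\to W+X)\cdot\mathcal{B}(W\to\ell\nu) \;=\; \frac{N_{\mathrm{obs}}-N_{\mathrm{bkg}}}{A\cdot\varepsilon\cdot\int\!\mathcal{L}\,dt},

where A is the geometric and kinematic acceptance, ε the selection efficiency (taken from simulation for ICHEP), and ∫L dt the integrated luminosity.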

A high-purity sample of W bosons can be obtained by requiring large missing transverse energy in addition to an isolated lepton. Preliminary lepton charge asymmetry distributions will be presented at ICHEP; eventually lepton charge asymmetry in W events will provide strong constraints on the parton densities for valence quarks and sea antiquarks in the proton.
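The asymmetry is conventionally defined (written here in terms of the lepton pseudorapidity η; the exact definition and binning used by CMS may differ) as

A(\eta) \;=\; \frac{d\sigma/d\eta\,(W^{+}\to\ell^{+}\nu) \;-\; d\sigma/d\eta\,(W^{-}\to\ell^{-}\bar\nu)}{d\sigma/d\eta\,(W^{+}\to\ell^{+}\nu) \;+\; d\sigma/d\eta\,(W^{-}\to\ell^{-}\bar\nu)},

which is sensitive to the valence and sea quark densities because W+ and W- bosons are produced preferentially from u and d quarks, respectively.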

Vector bosons are also produced in association with hadronic jets, and first jet-multiplicity distributions will be presented. The sample of boosted W bosons recoiling against jets can be used to measure the polarization asymmetry. Finally, depending on the statistics available by ICHEP, the first events with Z bosons reconstructed in their decay into a tau-lepton pair will be presented.

Top

The Top group has recently launched two task forces en route to the first top-quark results for the summer conferences: one for the dilepton ttbar channel and another for ttbar decaying to e + jets or mu + jets. The first rough cross-section measurements in these channels can be performed starting with a minimum of 2 pb-1 of good-quality data, although probably slightly more is needed for the lepton + jets channels.

The baseline goal of these task forces is to perform a publishable cross-section measurement of ttbar production at 7 TeV using selections that are robust and simple and do not use b-tagging. These results will be accompanied by figures showing distributions that support the signal hypothesis for the selected sample, such as the jet and b-tag multiplicities and the (coarsely) reconstructed top mass. The lepton + jets task force is planning, in parallel, an analysis that also makes use of b-tagging in the selection itself, in order to increase the purity.

Currently, the two task forces are synchronizing their lepton and jet reconstruction and identification within the group as well as with the EWK task forces wherever possible and reasonable, and are looking into the control samples (small jet multiplicities) to study data-driven methods of background estimation.

SUSY

The SUSY group is developing a broad set of searches based on simple, topological signatures. In early 2010, the sensitivities of many of these searches were evaluated, and it was found that, with as little as 50–100 pb-1, significant reach could be obtained beyond current Tevatron searches. In some channels, there is potential to uncover new physics with 10 pb-1 or even less. See CMS Note 2010/008 for further details on these projections and on similar ones for the Exotica and Higgs searches.

All of the SUSY searches are being commissioned as rapidly as possible. The initial steps in the plan involve working closely with the Physics Object Groups, especially JetMET, to understand the objects that will be used as the basis for SUSY searches.

Members of the group are contributing strongly to the JetMET results that will be presented at the ICHEP conference. In addition, a SUSY PAS is being prepared for ICHEP, focusing on the QCD backgrounds. Due to the large QCD cross section and the substantial theoretical uncertainties, it is critical that procedures for suppressing and measuring this background with data-driven methods are established quickly. These studies are expected to launch CMS towards a very intensive and exciting period in the fall. New people are very welcome in the SUSY group, especially those with direct detector experience.

Exotica

The current focus of the Exotica group is to produce first “pilot” results for ICHEP and to position ourselves for a full-blown attack on new physics later this year, with statistics of the order of 100 pb-1. Six analyses have been identified that could extend sensitivity beyond the Tevatron limits with an integrated luminosity of 1-10 pb-1.

The first two of these analyses are searches for new physics (e.g. excited quarks, diquarks, and other strongly produced particles) in inclusive dijet events. One analysis searches for resonances that would show up as a “bump” in the dijet mass spectrum, while the other compares the ratio of central to forward dijet events, exploiting the fact that new physics is produced more isotropically. The next two analyses on the path to ICHEP involve the search for new long-lived heavy stable charged particles (HSCPs, e.g. gluinos bound into R-hadrons, or stop squarks). The first analysis searches for these new particles by looking for the anomalously large ionization energy loss (dE/dx) that a slowly moving particle would deposit in the tracker. The other analysis looks for the slowest of these HSCPs, which may stop due to nuclear interactions before exiting the CMS detector and eventually decay in the CMS calorimeter. To identify these out-of-time decays, a trigger is used that is kept live only between the LHC bunch crossings (so that there are no collision backgrounds). The final two ICHEP analyses are searches for first- and second-generation leptoquarks (hypothesized particles carrying both lepton and baryon number) in the eejj and µµjj final states.
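As an illustration of the dE/dx technique (a schematic relation; the constants are calibrated on data and the exact CMS estimator may differ), in the relevant momentum range the ionization of a massive, slow particle follows approximately

\frac{dE}{dx} \;\approx\; K\,\frac{m^{2}}{p^{2}} + C
\qquad\Longrightarrow\qquad
m \;\approx\; \frac{p}{\sqrt{K}}\sqrt{\frac{dE}{dx} - C},

so combining the measured momentum with the measured dE/dx yields a mass estimate that separates HSCP candidates from ordinary particles.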

Post-ICHEP, the plan is to publish pilot analyses with 10-50 pb-1 of data and, in parallel, pursue another dozen or so analyses expected to converge into a publication later this year or in early 2011. These include various searches for a fourth generation of matter, excited leptons, extra spatial dimensions, and additional gauge bosons.

Finally, the Exotica group is running a "Hotline" that identifies a handful of the most interesting events every day and reports them to a team of dedicated “scanners”, allowing prompt feedback on very rare detector problems “hidden” in the standard DQM output, as well as on any hints of unexpected signatures that could come from new physics. Despite its short existence, the Hotline has already identified a number of subtle detector and reconstruction problems, which were reported to the corresponding DPGs/POGs. Some of these resulted in new cleaning cuts and modifications to reconstruction algorithms.

COMMUNICATIONS GROUP

The recently established CMS Communications Group, led by Lucas Taylor, has been busy in all three of its main areas of responsibility: Communications Infrastructure, Information Systems, and Outreach and Education.

Communications Infrastructure

The damage caused by the flooding of the CMS Centre@CERN on 21st December has been completely repaired and all systems are back in operation. Major repairs were made to the roofs and ceilings, and one third of the floor had to be completely replaced. Throughout these works, the CMS Centre was kept operating and even hosted a major press event for the first 7 TeV collisions, as described below.

Incremental work behind the scenes is steadily improving the quality of the CMS communications infrastructure, particularly Webcasting, video conferencing, and meeting rooms at CERN.

CERN/IT is also deploying a pilot service of a new videoconference tool called Vidyo, to assess whether it might provide an enhanced service at a lower cost, compared to the EVO tool currently in widespread use. Vidyo is already regularly used for remote Computing Operations shifts at CMS Centres.

Information Systems

These are in need of significant revamping and refocusing as we move from the construction phase to the physics-analysis phase of the experiment.

There are 245 official CMS Web sites at CERN alone and many more offsite. Sites overlap in content and are in need of updating, consolidation or retirement. Work has started to improve the CMS top-level pages and to create new pages for physics results. Plans are being prepared to bring the CMS Web sites as a whole into a more useful and maintainable state, possibly with the help of a CERN-wide Web Content Management System. The goal is to make it much easier for CMS collaborators to find information and to keep it up-to-date and correct.

CMS has produced an estimated 100,000 documents so far, only about half of which are stored in official systems (iCMS, CDS, EDMS, Indico, etc.). Therefore the “lightweight” Fermilab “DocDB” Document Management System is being deployed, and all sub-systems will be asked to harvest the existing documents from their communities and enter them into this easy-to-use system. A longer-term goal is to have a coherent user interface to all CERN document systems, with powerful search capabilities.

Outreach and Education – the 7 TeV Media Event

Following the successful LHC startup in September 2008 and the subsequent helium leak incident, the press has kept the LHC very much in the public view.

Just before 7am on 30th March 2010, TV crews and reporters started to arrive in numbers at the CMS Centre @ CERN, Meyrin, hoping to witness first collisions at 7 TeV. More than 50 media organizations visited the CMS Centre during the day, from a total of about 100 at CERN.


In addition, 42 (of the total of 52) CMS Centres Worldwide held media events for their national and local press, researchers and VIPs – the Dubna event alone hosted more than 100 people. Locations ranged from Los Angeles to Tehran, and from São Paulo to Seoul, and included 12 capital cities.

The CMS Communications group coordinated live feeds of the day’s events to all CMS Centres Worldwide, and to CERN’s building 40 for ATLAS and CMS collaborators. In all CMS Centres, physicist interviewees explained what was happening and what we hope to discover in the coming LHC run at 7 TeV.

Event display images of the first 7 TeV collisions were broadcast by CMS-TV simultaneously to all CMS Centres just minutes after they happened. CERN immediately held a press conference at which CMS showed event display images, an animation of a real collision, and even reconstructed particle mass spectra. CMS issued a press statement translated into 20 different languages by CMS Collaborators.


More than 2,200 news items appeared on 30th March, with more than 800 TV broadcasts using CERN footage. Many focused on CMS due to the press-friendly setup of the CMS Centre @ CERN and to our unique network of CMS Centres Worldwide. Thanks to the efforts of CMS collaborators, the worldwide events resulted in additional CMS media coverage by about 75 TV channels and 100 radio stations, and directly led to an additional 300 written articles.

Good use of Web 2.0 communications tools meant that 700,000 distinct people watched the CERN Webcast (181,000 for the CMS Webcast), CERN's public homepage recorded 205,000 unique visitors from 185 countries (24,000 for CMS), CERN had 120,000 Twitter followers (4,000 for CMS), and the CMS-TV live event display channel was followed by 17,000 people, receiving 1.6 million Web hits.

One journalist, comparing us to NASA’s space programme, rather generously wrote that the 7 TeV media event demonstrated our new-found leadership in public relations. In fact, the 7 TeV event clearly marked the arrival of the LHC as the world’s new leading particle accelerator.
