COMMISSIONING AND DETECTOR PERFORMANCE GROUPS (DPG)

As the technical interventions were completed and services restored at P5, we resumed central operations for 2011. We started with a reduced shift crew, shift leader and DCS shifter, on 24th January. A first mid-week global run took place on 2nd and 3rd February, followed by cosmic data-taking between 10th and 20th February. Due to delays with the strip-tracker cooling, useful cosmic data-taking with the strip tracker was reduced to about four days. On 20th February, the LHC started beam commissioning and cosmic data-taking with the full CMS detector was stopped. Machine availability has been much higher during the 2011 beam commissioning than during the comparable period in 2010, which has left us few opportunities to turn on the tracker for further cosmic data-taking. Many changes and upgrades were performed during the winter shutdown, among them the migration of the central DAQ to 64-bit operation. All of these upgrades have now been tested successfully.

The LHC delivered the first stable beams on 13th March, a day ahead of the schedule presented in January. The LHC has commissioned the new beam optics for 2011 with β* = 1.5 m; at the end of 2010 we operated with β* = 3.5 m. With all other machine parameters the same, this gives us a luminosity increase of about a factor of 2.3. With nominal bunch charges, about 1.15 x 10^11 protons, and an emittance of 2.2 μm, we expect a pile-up of about 10 interactions per bunch crossing. Figure 4 shows an event with 13 reconstructed vertices. This high pile-up constitutes a significant challenge for the trigger; a lot of effort has been spent to prepare for the conditions expected in the 2011 run.
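For reference, assuming the instantaneous luminosity simply scales inversely with β* when all other machine parameters are unchanged, the quoted factor follows directly:

\[
\frac{\mathcal{L}(\beta^{*}=1.5\ \mathrm{m})}{\mathcal{L}(\beta^{*}=3.5\ \mathrm{m})} \simeq \frac{3.5\ \mathrm{m}}{1.5\ \mathrm{m}} \approx 2.3 .
\]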


Figure 4: Event with 13 reconstructed vertices.

At first, the LHC delivered collisions with two bunches colliding in CMS. These first fills were used to carry out delay scans, e.g. of pixels and strips, to validate the trigger timing, and to perform HV scans. We are now done with these special runs; some final analysis remains and new constants have to be deployed in the online system. On Friday, 18th March, the LHC started the intensity ramp-up with 32 bunches (30 colliding in CMS). The goal for the machine is to have three fills and 20 hours of stable beams at each intensity step. At the time of writing this bulletin the LHC has reached 64 bunches and is planning the next fill with 136 bunches. The last step is to go to 200 bunches, which will give a luminosity in excess of 2 x 10^32 cm^-2 s^-1, i.e. at the same level as last year. All subdetectors are back working at the same level as last year. There were changes in the timing for the RPCs and Pixels that are not fully understood, but they are now timed in with respect to the LHC beam. A Jamboree will be held between 13th and 15th April to assess the quality of the data we are taking and to make sure that we are ready for the large data sample expected in 2011.

For the 2011 run we made some changes in the way the central shifts at P5 are organised. Based on the experience of the 2010 run we wanted to ensure a more experienced shift crew, which means fewer shifters and more shifts per person. For the 2011 operation we have instituted minimum quotas for the central shifts and have limited the number of people allowed to take them in order to match the quotas. So far this has been successful, in the sense that we have filled all shifts for most central shift roles for 2011.

Tracker

The Tracker joined CRAFT11 on 14th February and collected approximately 1.2M cosmic-ray tracks. Such exercises offer an excellent opportunity to monitor the detector and check its performance before the restart. In addition, cosmic tracks are essential for the alignment of the Tracker: thanks to their non-trivial topology they help to constrain weak modes. They also made it possible to monitor and, where necessary, correct the tracker geometry before the restart of the LHC, most notably the longitudinal shift of the barrel pixel half-shells, which was observed to occur several times in 2010. Finally, because cosmic tracks cover a wider range of angles with respect to the sensor planes than collision tracks, they contribute significantly to the correction of surface deformations of the silicon sensors.

Unfortunately the number of collected tracks falls well short of the 3M tracks needed to carry out the foreseen programme. While the Tracker itself performed well and the data are overall of good quality, a timing mismatch between the various detectors was observed, which adversely affects the resolution of the hits in the pixel detector.

The alignment revealed that the longitudinal separation of the barrel pixel half-shells changed by approximately 60 microns and the barycentre of the barrel pixel moved along z by the same amount. Shifts of tens of microns in the pixel endcaps have also been observed and corrected in the alignment.

Further validation and analyses of the alignment geometry produced in November 2010 for the end-of-year reprocessing have also been completed. While this geometry offered significant improvements in the endcaps (with particular benefit for B-physics), it led to an η-dependent bias of the Z mass peak position. The analysis revealed that this bias was caused by a twist of the Tracker geometry. The alignment procedure with only cosmic and minimum-bias tracks is insensitive to such a twist (the twist is a “weak mode”), and the information coming from the Z mass must be used to constrain it fully. A new geometry was thus derived, which is almost twist-free and nearly eliminates the observed η-dependent mass bias, while retaining all the previous improvements. This highlights the importance of the collaboration between the DPG, POG and PAG on such issues.
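Schematically, a twist can be parameterised as a coherent rotation of the modules around the beam axis that grows linearly along z (the notation below is illustrative, not that of the alignment group):

\[
\Delta\phi(z) = \tau \, z ,
\]

where τ is the twist gradient. Such a deformation leaves the track-fit residuals essentially unchanged, which is why it is a weak mode of the alignment, but it biases the measured curvature of positively and negatively charged tracks in opposite directions as a function of η, producing the η-dependent shift of the reconstructed Z→μμ mass that is used to constrain τ.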

Since 13th March, the Tracker has been taking collision data with high performance. The conditions obtained from the February commissioning are now in use, including the alignment obtained from CRAFT11 data. The first days of data-taking were dedicated to the commissioning runs needed to optimise performance in 2011. Among other things, it was immediately noted that the timing of the pixel detector had to be corrected by about half a bunch crossing, probably following changes to the trigger system during the winter break. After that correction, the initial data show nearly perfect agreement with the templates from 2010, for both the pixel and strip subdetectors. Using these data, a first assessment of the quality of the alignment could be done in less than a week, confirming both the robustness of the tools and the responsiveness of the team in charge. Data from the special runs are being analysed and will be used to re-validate the timing of the detector and to assess the impact of radiation on the silicon sensors.

ECAL

Since the December CMS Week, the focus of the ECAL DPG has been on the consolidation of results from 2010 data and preparations for high luminosity data taking in 2011.

The 2010 data have been used to further refine the energy and timing calibration of ECAL, and to provide precise spatial alignment corrections for the crystal barrel and endcap calorimeters and the preshower detector. Energy inter-calibration procedures using minimum-bias events and photons from π0 and η decays have now reached a precision of up to 0.5% in the crystal barrel and 2-3% in the endcaps. Procedures to extract the absolute electron and photon energy scale from Z→ee and Z→μμγ events are now in a mature state, and satisfactory agreement between data and Monte Carlo simulations has been achieved for the ECAL energy scale and energy resolution.

Corrections to the energy scale of the barrel and endcap detectors due to crystal irradiation will be automatically applied to reconstructed data in 2011, using measurements from the ECAL light monitoring system. Detailed analysis of 2010 data has improved the accuracy of these corrections, which are derived from a theoretical model of crystal transparency loss due to irradiation, followed by recovery in beam-off periods. The parameters of this model will be further constrained by high luminosity data recorded during 2011.
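For reference, such transparency corrections are commonly parameterised as a power law of the relative laser response measured by the monitoring system (a schematic form; the exponent is crystal-dependent and its magnitude is quoted here only as an indication):

\[
E_{\mathrm{corr}}(t) = E_{\mathrm{meas}}(t)\left(\frac{S(t_0)}{S(t)}\right)^{\alpha},
\]

where S(t) is the laser response of the crystal at time t, S(t_0) the response at a reference time, and α an empirically determined exponent of order one, different for barrel and endcap crystals.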

Anomalous signals in the ECAL barrel (a.k.a. “spikes”) remain an important focus of the ECAL DPG. Spike rejection at Level-1, using the “strip fine grain veto bit” calculated during ECAL trigger primitive generation, has been commissioned and will be deployed online in 2011.

A significant achievement over the past few months has been the incorporation of online spike-killing into the Level-1 emulator. This will allow spike rejection rates to be predicted for a range of trigger-primitive thresholds and LHC luminosity/pile-up scenarios. The simulation of anomalous signals in the CMS Monte Carlo has been further improved, and we expect that samples generated with spikes included will be of increasing interest to POGs and PAGs during 2011.

Regarding offline spike rejection, following close consultation with Egamma, JetMET and Particle Flow groups, a common spike "cleaning" algorithm has been developed to remove anomalous ECAL energy deposits from Egamma objects and jets in the reconstruction of 2011 data.

We have also optimised the zero suppression settings and amplitude reconstruction weights in preparation for high intensity LHC running. These settings were tested during the winter shutdown, and are ready to be deployed online. In addition the silicon preshower detector will run in low-gain mode during 2011, to maximise energy resolution and π0/γ separation capabilities.

The excellent performance of the ECAL online, reconstruction, data-quality-monitoring, and prompt-feedback groups has allowed us to re-establish high-quality data-taking following the resumption of LHC collisions in 2011, and we look forward to continuing this trend as the luminosity increases throughout the year.

HCAL

Missing transverse energy (MET) is an important signature of new physics, and a good understanding of the detector effects that produce fake MET is essential. Two primary sources of anomalous signals in the forward calorimeter (HF) lead to large fake MET in many analyses. One source, first observed in test-beam analyses, is due to charged particles producing Cherenkov light in the PMT window; this signal arrives earlier than the physics signal produced in the HF absorber. A second source, observed in the 2010 data, is due to scintillation light produced in the light guide, which arrives later than the physics signal. The physics signal is narrow, so we can reduce the integration window used to reconstruct the energy without compromising the energy measurement, while at the same time reducing the contribution from anomalous signals. Using a narrower time window also reduces the effect of out-of-time pile-up.
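As an illustration of the narrowed integration window, here is a minimal sketch in Python, assuming a digitised pulse of ten 25 ns time samples; the window boundaries, pedestal treatment and pulse shapes are purely illustrative and are not the actual HF reconstruction parameters:

def reconstruct_hf_energy(samples, window=(4, 6), pedestal=0.0, gain=1.0):
    """Sum the digitised charge over a restricted set of time samples.

    samples  : list of ADC-equivalent charges, one per 25 ns time slice
    window   : (first, last) slice indices, last exclusive; a narrow window
               around the in-time physics signal rejects the early Cherenkov
               light from the PMT window and the late scintillation light
               from the light guide
    """
    first, last = window
    charge = sum(samples[first:last]) - pedestal * (last - first)
    return gain * charge

# Illustrative comparison: a wide sum picks up a hypothetical early anomalous
# pulse, while the narrow window keeps mostly the physics signal.
pulse = [0, 0, 1, 15, 40, 25, 8, 3, 1, 0]   # hypothetical in-time physics pulse
early = [0, 30, 45, 5, 0, 0, 0, 0, 0, 0]    # hypothetical early PMT-window signal
event = [p + s for p, s in zip(pulse, early)]
print(reconstruct_hf_energy(event, window=(0, 10)))  # wide window: includes the spike
print(reconstruct_hf_energy(event, window=(4, 6)))   # narrow window: mostly physics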

We also modified the light-guide, replacing the material that was the source of the scintillation light. These modifications are expected to greatly reduce the sensitivity to anomalous signals in HF. The filters that further reduce anomalous signals have been updated for the 2011 operating conditions, with their shorter bunch spacing and higher pile-up.

One of the important tasks of the HCAL DPG is the calibration of HCAL. The main purpose of the calibration is to establish a well-defined energy point so that the response of HCAL can be monitored as a function of time; adjustments can then be made to maintain the calibration point to an accuracy of 2-3%. The HCAL calibration starts from the response determined with test-beam data on a limited number of HCAL modules, which is then extended to all of HCAL using Co-60 wire sources. This initial calibration, referred to as the pre-calibration, does not include the effects of dead material in front of HCAL or of the magnetic field. These effects can only be accounted for by using collision data. Special calibration triggers to collect non-zero-suppressed data, photon-triggered events, and events rich in isolated tracks were used in 2010.

The HCAL calibration is done in two steps. First a relative scale adjustment is applied in φ, which does not change the overall energy scale. The φ-symmetry calibration was done using the 2010 non-zero-suppressed and photon-triggered data; the two samples were combined to reduce the uncertainty on the response corrections. Once the relative φ-symmetry calibration is applied, an absolute scale correction in η is established at a fixed energy point. The η-dependent correction uses isolated charged particles and requires a well-measured track momentum from the tracker.
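As an illustration of the first step, here is a minimal sketch of a φ-symmetry correction in Python (the input format and normalisation are hypothetical; the actual procedure combines the non-zero-suppressed and photon-triggered samples and propagates their uncertainties):

from collections import defaultdict

def phi_symmetry_corrections(energy_sums):
    """Derive relative response corrections from the azimuthal symmetry of
    minimum-bias energy deposits.

    energy_sums maps (ieta, iphi) -> accumulated transverse energy in that tower.
    Returns (ieta, iphi) -> multiplicative correction, normalised so that the
    average response in each ieta ring is unchanged (the absolute scale is
    fixed later in eta using isolated tracks).
    """
    rings = defaultdict(dict)
    for (ieta, iphi), e in energy_sums.items():
        rings[ieta][iphi] = e

    corrections = {}
    for ieta, ring in rings.items():
        ring_mean = sum(ring.values()) / len(ring)
        for iphi, e in ring.items():
            corrections[(ieta, iphi)] = ring_mean / e if e > 0 else 1.0
    return corrections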

A sufficient sample of events with isolated tracks was collected in 2010 so that the η-dependent response correction could be determined. The full HCAL calibration will be applied to the 2011 data, and we will continue to collect isolated-track data until we have enough to calibrate individual channels. Methods to extend the calibration to the forward region without Tracker coverage, which includes parts of HE, HF, CASTOR and the ZDC, are under development. For HF we can use Z→ee events in which one electron is in the central region and the other is in HF; photon+jet events can be used as an important cross-check. For CASTOR we may use the same techniques as for HF. The ZDC is being calibrated with neutrons.

Early in 2011, the beam was intentionally steered into collimators upstream of CMS resulting in a spray of muons that are useful to check the response of the detectors. The so-called "splash" data were used to cross-check the φ-symmetry calibration and it was found that two regions of fixed φ have a problem with the first scintillator layer (layer 0). Layer 0 uses a thicker scintillator which also has a higher response than the other layers in HCAL.

Originally it was planned that layer 0 would be read out separately, but it was later decided to combine it with the other layers in HCAL. In order to make the light output of this layer similar to the others, a neutral density filter was used. For two φ slices this layer appears to be problematic, leading to a higher response and a non-linear energy response that differs from the other channels in HB. This can be compensated for in the software reconstruction, and a special correction function was developed.

This year a major emphasis will be placed on establishing well-defined procedures to monitor HCAL and to update conditions so as to ensure a stable response over time. We have established HCAL offline shifts to monitor HCAL-specific conditions; the shifts will be carried out at remote operations centres in Russia and the USA.

DT

We designed an HLT path that isolates high-purity J/ψ candidates to study the behaviour of the DT system with low-pT muons. It selects events in which one leg of the J/ψ is completely unbiased by the signal from the DTs.

Our calibration procedure is now able to fully exploit the DQM GUI to inspect the results of the calibration, and the workflow infrastructure was reworked so that it is now completely automated. We also improved the ALCARECO stream dedicated to the DT calibration. This will allow us to react quickly if the conditions in our detector change.

We introduced a tuning of the drift velocity in the local reconstruction for the first layer of the external wheels, to take into account small known effects of the magnetic field. This slightly improves the spatial resolution in that region. We also used the winter shutdown to summarise our 2010 performance.

More information can be found in the Muon DT section, elsewhere in this Bulletin.

CSC

During the end-of-year shutdown the CSC DPG was largely focussed on work required for the Muon Performance paper, which is currently being written jointly by all three Muon subdetector communities. Amongst the results are space and time resolutions of the CSC measurements, and efficiencies for CSC trigger primitives and local reconstruction of rechits and muon track segments, all based on 2010 collisions data. The performance is good and as expected from the detector design.

Learning from our 2010 experience we have improved the organisation of CSC DQM plots in order to simplify life for CSC shifters, who are now CSC DQMers, while expert oversight of CSC Operations is handled by the CSC DOC.  Incremental development continues online and offline to improve the precision of CSC timing measurements (and trigger timing). This includes improved timing values for rechits and segments, which are expected to benefit a number of physics analyses.

The CSC system contains nine (out of 473) chambers which currently provide no rechits or trigger primitives, due to various hardware problems. It has been realised that these should be suppressed in the CMS simulation so that the L1 muon trigger can be realistically simulated. This was not done last year in the belief that rechit information could be suppressed a posteriori when necessary; unfortunately that does not provide a way to simulate the L1 trigger. The bad chambers are now included in the conditions data for the latest CMSSW 4_2_x release, so that they provide no digis (and hence no rechits or trigger primitives) in the simulation.

The Endcap Muon alignment group rapidly provided a new CSC alignment to match the new “twist-free” Tracker alignment. The old Tracker alignment had effectively forced a ±1 mrad relative twist between the two CSC endcaps, to compensate for the fact that the CSC alignment is track-based whereas the barrel DT alignment is hardware-based and hence was unaffected by the Tracker twist. This resolves the long-standing puzzle that the muon track-based and hardware-based alignments apparently disagreed. The new alignment will also be part of the latest CMSSW 4_2_x release.

The CSCs were rapidly brought back into operation early in 2011 and recommissioned with cosmic rays. Everything from the hardware level to the local reconstruction was soon validated against the 2010 cosmic data. This has now been confirmed with the first 2011 LHC collision data, obtained in mid-March. Within two days we had already seen two Z→μμ candidate events, from which we conclude that the Electroweak Sector is still operational in 2011. We now await higher luminosities and a settling into stable detector operation and data collection. Although the CSCs are expected to be relatively immune to the effects of high pile-up, we are carefully watching the behaviour of the entire system, from trigger to local reconstruction, as the luminosity increases, and we are also closely monitoring backgrounds from beam halo, beam splash, and neutrons.

RPC

The main activity of the RPC DPG during the last three months has been to refine the analysis of detector performance on the full 2010 data sample.

The average barrel RPC hit efficiency was stable and above 95% throughout 2010 at an applied voltage of 9.35 kV. For the endcaps, data have been taken at different HV settings and an average hit efficiency of around 94% has been reached at 9.55 kV. The endcap working-point voltage is still not well defined, so a set of calibration runs was planned for the beginning of 2011 data-taking in order to define the best working voltage chamber by chamber. This activity has just started and the analysis is ongoing at the time of writing this report.

The efficiency table based on the 2010 analysis has been included in CMSSW release 4_2_x for the Monte Carlo simulation, so we expect a better modelling of the system performance. Dead chambers are also described, and are simulated with zero efficiency in the MC.

Results on cluster size, spatial resolution and hit efficiency have been approved and will be part of the common paper on muon subdetector performance that is currently in preparation. Interesting analyses, carried out in parallel with the other muon subdetectors, are in progress to monitor the noise rate as a function of the instantaneous LHC luminosity. The external and internal layers are affected by machine background, and the rate is under study for different regions of the detector.

An increased effort has been devoted to improving the synergy between the Muon DPGs and the Muon POG. Work is in progress to study the use of RPC hits in muon reconstruction and in cosmic-ray discrimination, making use of the RPC timing.

In 2011 a new trigger algorithm (Pattern Comparator) will be used in the barrel region. A muon candidate is generated if at least three out of the six crossed layers have fired (previously at least four fired layers were required). Based on MC simulation, the new algorithm should increase the RPC trigger efficiency in the barrel by a few percent. To keep the trigger rate under control, only selected combinations of the three fired layers are allowed.
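A very simplified sketch of this majority logic in Python (the layer numbering and the choice of allowed combinations are purely illustrative):

def pac_candidate(fired_layers, allowed_combinations):
    """Return True if a muon candidate is generated.

    fired_layers         : set of barrel RPC layer indices (0-5) with hits
                           compatible with the candidate track
    allowed_combinations : set of frozensets of layer indices; only these
                           combinations of at least three layers may fire
                           the trigger, keeping the rate under control
    """
    if len(fired_layers) < 3:   # 2011 condition: at least 3 of 6 layers (was 4)
        return False
    return any(comb <= fired_layers for comb in allowed_combinations)

# Hypothetical usage: allow only combinations containing the two innermost layers.
allowed = {frozenset(c) for c in [(0, 1, 2), (0, 1, 3), (0, 1, 4), (0, 1, 5)]}
print(pac_candidate({0, 1, 4}, allowed))   # True
print(pac_candidate({2, 3, 5}, allowed))   # False: not an allowed combination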

Since the beginning of 2011 some modifications of the RPC trigger software have been applied. The XDAQ applications controlling the trigger hardware were converted into Trigger Supervisor applications, with the goal of achieving greater homogeneity and simplicity of the online software. A new version of the front-end board (FEB) configuration has been implemented in the online software, and an automatic procedure to load the electronics thresholds from the database has been added (all other parts of the system were already configured from the database). All basic functionalities are now ready and working well. The updated software allows much easier fine-tuning of the FEB thresholds, which will help to improve the RPC detector performance.

During the early 2011 cosmic runs, a time shift of half a BX was observed. The source of that shift is outside the RPC system but is still not understood. A new set of synchronisation parameters has been uploaded to the Link Board system to correct the observed time shift. First preliminary results from collisions show that, after this correction, the timing is as good as it was in 2010, i.e. the level of pre- and post-firing is virtually zero.


by L. Malgeri, M. Chamizo and A. Ryd