
Posters

Added:
2018-07-20
14:45
Improvements to the LHCb software performance testing infrastructure using message queues and big data technologies
Reference: Poster-2018-648
Created: 2018
Creator(s): Szymanski, Maciej Pawel

Software is an essential component of High Energy Physics experiments. Because it is upgraded on relatively short timescales, software provides flexibility, but is at the same time susceptible to issues introduced during the development process, which makes systematic testing necessary. We present recent improvements to LHCbPR, the framework implemented at LHCb to measure the physics and computational performance of complete applications. Such infrastructure is essential for keeping track of the optimisation activities related to the upgrade of the computing systems, which is crucial to meet the requirements of the LHCb detector upgrade for the next stage of LHC data taking. The latest developments in LHCbPR include the use of a messaging system to trigger the tests as soon as the corresponding software version is built within the LHCb Nightly Builds infrastructure. We will also report on our investigation of big data technologies in LHCbPR. We have found that tools such as Apache Spark and the Hadoop Distributed File System may significantly improve the functionality of the framework, providing interactive exploration of the test results with efficient data filtering and flexible development of reports.
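The messaging pattern described above can be sketched with an in-memory queue standing in for the broker: the build system publishes a "build finished" message, and a test runner consumes it and launches the tests immediately. All names here (run_lhcbpr_tests, the message fields) are illustrative, not the actual LHCbPR API.

```python
import queue
import threading

build_events = queue.Queue()

def run_lhcbpr_tests(slot, build_id):
    """Placeholder for launching the performance tests for one build."""
    return f"tested {slot}/{build_id}"

results = []

def consumer():
    # Blocks until a message arrives; a None sentinel stops the worker.
    while True:
        msg = build_events.get()
        if msg is None:
            break
        results.append(run_lhcbpr_tests(msg["slot"], msg["build_id"]))

worker = threading.Thread(target=consumer)
worker.start()

# The nightly build infrastructure would publish one message per finished build:
build_events.put({"slot": "lhcb-head", "build_id": 1234})
build_events.put(None)
worker.join()
print(results)  # ['tested lhcb-head/1234']
```

In production this role is played by a real broker, so the publisher and the test runner need not run on the same host or at the same time.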

© CERN Geneva

2018-07-20
14:41
Machine Learning based Global Particle Identification Algorithms at the LHCb Experiment
Reference: Poster-2018-647
Created: 2018
Creator(s): Hushchyn, Mikhail

One of the most important aspects of data processing at the LHC experiments is the particle identification (PID) algorithm. In LHCb, several sub-detector systems provide PID information: the Ring Imaging Cherenkov detectors, the hadronic and electromagnetic calorimeters, and the muon chambers. Charged PID based on the sub-detector responses is treated as a machine learning problem solved in different modes: one-vs-rest, one-vs-one and multi-class classification, which affect how the models are trained and how they predict. To improve charged particle identification for pions, kaons, protons, muons and electrons, several neural networks and gradient boosting models have been tested. In most cases these approaches provide a larger area under the receiver operating characteristic curve than the existing implementations. To reduce the systematic uncertainty arising from the use of PID efficiencies in certain physics measurements, it is also beneficial to achieve a flat dependence of the efficiencies on spectator variables such as particle momentum. For this purpose, "flat" algorithms based on boosted decision trees that guarantee this flatness property have also been developed. This talk presents approaches based on state-of-the-art machine learning techniques and their performance evaluated on Run 2 data and simulation samples, together with a discussion of the results.
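The one-vs-rest mode mentioned above can be illustrated in a few lines: one binary classifier per particle type scores "this class vs everything else", and the predicted label is the class with the highest score. The scores below are made up; real inputs would be the sub-detector responses.

```python
CLASSES = ["pion", "kaon", "proton", "muon", "electron"]

def one_vs_rest_predict(scores):
    """scores: dict mapping class name -> binary classifier output in [0, 1].
    Returns the class whose 'vs rest' classifier is most confident."""
    return max(scores, key=scores.get)

# Toy per-class outputs for one charged track:
track_scores = {"pion": 0.12, "kaon": 0.81, "proton": 0.31,
                "muon": 0.05, "electron": 0.02}
print(one_vs_rest_predict(track_scores))  # kaon
```

One-vs-one would instead train a classifier per pair of classes and vote, while multi-class training produces all five scores from a single model; the decision rule at the end is the same argmax.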

2018-07-20
14:32
Addressing Scalability with Message Queues: Architecture and Use Cases for DIRAC Interware
Reference: Poster-2018-646
Created: 2018
Creator(s): Krzemien, Wojciech Jan

The Message Queue (MQ) architecture is an asynchronous communication scheme that provides an attractive solution for certain scenarios in the distributed computing model. Introducing an intermediate component (the queue) between the interacting processes decouples the end-points, making the system more flexible and providing high scalability and redundancy. Message queue brokers such as RabbitMQ, ActiveMQ or Kafka are proven technologies in wide use today. DIRAC is a general-purpose interware software for distributed computing systems, which offers a common interface to a number of heterogeneous providers and guarantees transparent and reliable usage of the resources. The DIRAC platform has been adopted by several scientific projects, including High Energy Physics communities such as LHCb, the Linear Collider and Belle II. A generic Message Queue interface has been incorporated into the DIRAC framework to help address the scalability challenges of LHC Run 3, starting in 2021. It allows the MQ scheme to be used for message exchange among the DIRAC components, or to communicate with third-party services. In this contribution we describe the integration of MQ systems with DIRAC and present several use cases, focusing on the incorporation of MQ into the pilot logging system. Message Queues are also foreseen as a backbone of the DIRAC component logging and monitoring systems. The results of the first performance tests will be presented.
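The shape of a generic MQ interface can be sketched as follows: components program against an abstract connector, and the concrete broker (RabbitMQ, ActiveMQ, Kafka, ...) is selected by configuration. This is a minimal sketch, not the DIRAC API; an in-memory queue stands in for the broker.

```python
import queue

class MQConnector:
    """Abstract interface; concrete connectors would wrap a real broker client."""
    def put(self, message): raise NotImplementedError
    def get(self): raise NotImplementedError

class InMemoryConnector(MQConnector):
    def __init__(self):
        self._q = queue.Queue()
    def put(self, message):
        self._q.put(message)
    def get(self):
        return self._q.get()

def make_connector(backend="memory"):
    # A real factory would return e.g. a RabbitMQ- or Kafka-backed connector.
    if backend == "memory":
        return InMemoryConnector()
    raise ValueError(f"unknown backend {backend!r}")

# A pilot job publishes a log record; a logging service consumes it later.
mq = make_connector()
mq.put({"component": "Pilot", "level": "INFO", "msg": "job started"})
print(mq.get()["msg"])  # job started
```

The point of the indirection is exactly the decoupling described above: the pilot does not need the logging service to be reachable, or even running, at publish time.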

2018-07-20
14:30
New approaches for track reconstruction in LHCb’s Vertex Locator
Reference: Poster-2018-645
Created: 2018
Creator(s): Hasse, Christoph

Starting with Upgrade 1 in 2021, LHCb will move to a purely software-based trigger system, and the new trigger strategy is to process events at the full rate of 30 MHz. Given that the growth of CPU performance has slowed in recent years, the predicted performance of the software trigger currently falls short of the necessary 30 MHz throughput. To cope with this shortfall, LHCb's real-time reconstruction will have to be sped up significantly. We aim to help close this gap by speeding up the track reconstruction of the Vertex Locator, which currently takes up roughly a third of the time spent in the first phase of the High Level Trigger. To obtain the needed speedup, profiling and technical optimisations are explored as well as new algorithmic approaches. For instance, a clustering-based algorithm can reduce the event rate prior to the track reconstruction by separating hits into two sets - hits from particles originating at the proton-proton interaction point, and those from secondary particles - allowing the reconstruction to treat them separately. We present an overview of our latest efforts to solve this problem, which is crucial to the success of the LHCb upgrade.
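The hit-separation idea can be sketched with a toy model (the hit representation, direction estimates and threshold are invented for illustration, not the actual algorithm): given a crude direction per hit, compute the distance of closest approach to the beamline and split hits into prompt and secondary candidates.

```python
import math

def impact_parameter(x, y, dx, dy):
    """2D distance of closest approach to the origin of the line
    through (x, y) with direction (dx, dy)."""
    return abs(x * dy - y * dx) / math.hypot(dx, dy)

def split_hits(hits, threshold=0.5):
    """Partition hits into (prompt, secondary) by impact parameter."""
    prompt, secondary = [], []
    for h in hits:
        ip = impact_parameter(h["x"], h["y"], h["dx"], h["dy"])
        (prompt if ip < threshold else secondary).append(h)
    return prompt, secondary

hits = [
    {"x": 2.0, "y": 2.0, "dx": 1.0, "dy": 1.0},  # line points back at the origin
    {"x": 2.0, "y": 5.0, "dx": 1.0, "dy": 1.0},  # displaced line
]
prompt, secondary = split_hits(hits)
print(len(prompt), len(secondary))  # 1 1
```

Once split, each subset can be reconstructed with a cheaper, more specialised pass than a single algorithm handling both populations at once.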

2018-07-20
14:25
LHCb’s Puppet 3.5 to Puppet 4.9 migration
Reference: Poster-2018-644
Created: 2018
Creator(s): Mohamed, Hristo Umaru

Until September 2017, LHCb Online was running Puppet 3.5 in a non-redundant master/server architecture. As a result, we had problems with outages, both planned and unplanned, as well as with scalability (how do you run 3000 nodes at the same time? How do you even run 100 without bringing down the Puppet Master?). On top of that, Puppet 5.0 had been released, so we were now running two versions behind. As Puppet 4.9 was the de facto standard, something had to be done right away, so a self-inflicted three-week nonstop hackathon took place. This talk covers the pitfalls, mistakes and architecture decisions we made when migrating our entire Puppet codebase, nearly from scratch, to a more modular one, addressing existing exceptions and anticipating future ones - all while our entire infrastructure kept running in physics production, and without causing a single outage. We will cover the mistakes we had made in our Puppet 3 installation and how we eventually fixed them, lowering catalogue compile time and reducing our overall codebase by around 50%. We will also cover how we set up a quickly scalable Puppet core (masters, CAs, Foreman, etc.) infrastructure.

2018-07-20
14:23
Perspectives for the migration of the LHCb geometry to the DD4hep toolkit
Reference: Poster-2018-643
Created: 2018
Creator(s): Couturier, Ben

The LHCb experiment uses a custom-made C++ detector and geometry description toolkit, integrated with the Gaudi framework, designed in the early 2000s when the LHCb software was first implemented. With the LHCb upgrade scheduled for 2021, the experiment needs to review this choice to adapt to the evolution of software and computing (the need to support multi-threading, the importance of vectorisation, and so on). The Detector Description Toolkit for High Energy Physics (DD4hep) is a good candidate to replace LHCb's own geometry description framework: it can be integrated with Gaudi, and its features theoretically match what LHCb needs, in terms of geometry and detector description but also concerning the possibility to add detector alignment parameters and the integration with simulation tools. In this paper we report on detailed studies comparing the feature set offered by the DD4hep toolkit with what LHCb needs. We show not only how the main description could be migrated, but also how the LHCb real-time alignment tools could be integrated with this toolkit, in order to identify the main obstacles to the experiment's migration to DD4hep.
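The core idea behind such detector description toolkits can be illustrated with a toy sketch (not DD4hep's actual API): a tree of detector elements, each with a nominal placement plus an alignment delta applied on top, which is exactly the hook a real-time alignment needs.

```python
class DetElement:
    """Toy detector element: nominal placement plus an alignment correction.
    A real toolkit stores full 3D transforms; a single z coordinate stands
    in for that here."""
    def __init__(self, name, nominal_z=0.0):
        self.name = name
        self.nominal_z = nominal_z
        self.delta_z = 0.0          # correction supplied by the alignment
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def aligned_z(self):
        return self.nominal_z + self.delta_z

# Hypothetical element names, for illustration only:
velo = DetElement("VP")
module = velo.add(DetElement("Module00", nominal_z=100.0))
module.delta_z = 0.02               # update from a real-time alignment run
print(f"{module.aligned_z():.2f}")  # 100.02
```

Separating the static nominal geometry from the time-dependent deltas is what lets alignment constants be updated without rebuilding the geometry itself.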

2018-07-13
09:06
The LHCb tracks reconstruction in Run 2: strategy and performance
Reference: Poster-2018-641
Created: 2018
Creator(s): Dufour, Laurent; Kopecna, Renata; Pearce, Alex; Van Veghel, Maarten

In order to accomplish its wide programme of physics measurements, the LHCb collaboration has over the past years developed a complex set of algorithms for reconstructing the trajectories of charged particles, taking into account the heterogeneous structure of the LHCb tracking system. Several data-driven approaches have been conceived to provide a precise evaluation of the tracking efficiency, a crucial ingredient of many physics analyses. These are mostly based on clean samples of muons, but the recent hints of lepton universality violation have required the development of robust data-driven techniques specifically dedicated to electrons, in order to reduce the systematic uncertainties. In addition, special data streams with prompt access have been put in place to collect the calibration samples for both muons and electrons. As the end of the Run 2 data-taking period approaches, we provide an overview of the whole reconstruction strategy and of its performance, which has a direct impact on the quality of the current LHCb results and provides the basis for the upgrade era.
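A common shape for such data-driven efficiency measurements is tag-and-probe (a hedged sketch of the general idea; the concrete LHCb methods differ in detail): reconstruct a clean resonance with one fully reconstructed "tag" leg, then count how often the "probe" leg is also found by the tracking algorithm under study.

```python
def tracking_efficiency(probes):
    """probes: list of booleans, True if the probe leg was matched to a
    reconstructed track. Returns the fraction of matched probes."""
    if not probes:
        return 0.0
    return sum(probes) / len(probes)

# Toy sample: 97 of 100 probe legs matched to a reconstructed track.
probes = [True] * 97 + [False] * 3
print(f"{tracking_efficiency(probes):.2f}")  # 0.97
```

In practice the probe sample is binned in kinematic variables, and the per-bin efficiencies are what feed into the physics analyses as corrections.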

2018-07-13
09:01
Central Exclusive Production at LHCb
Reference: Poster-2018-640
Created: 2018
Creator(s): Gandini, Paolo

The installation of scintillating pad detectors (Herschel), bracketing the LHCb detector along the beamline, has significantly enhanced LHCb's sensitivity to central exclusive production. Additionally, dedicated triggers during the early measurement period of Run 2 have produced an extended CEP dataset. A summary of results from Run 1 as well as early results from Run 2 will be shown.

2018-07-06
14:23
LHCb full-detector real-time alignment and calibration: latest developments and perspectives
Reference: Poster-2018-639
Created: 2018
Creator(s): Maddrell-Mander, Samuel William; Vom Bruch, Dorothea

A key ingredient of the data-taking strategy used by the LHCb experiment at CERN in Run-II is the novel real-time detector alignment and calibration. Data collected at the start of each fill are processed within minutes and used to update the alignment, while the calibration constants are evaluated hourly. This is one of the key elements that allow the reconstruction quality of the software trigger in Run-II to be as good as the offline quality of Run-I. The most recent developments of the real-time alignment and calibration paradigm enable fully automated updates of the RICH detectors' mirror alignment and a novel calibration of the calorimeter systems. Both evolutions improve the stability of the particle identification performance, resulting in higher-purity selections. The latter also improves the energy measurement of neutral particles, resulting in a 15% better mass resolution for radiative b-hadron decays. A large variety of improvements has been explored for the last year of Run-II data taking and is under development for the LHCb detector upgrade foreseen in 2021. These range from the optimisation of the data sample selection and strategy to the study of a more accurate magnetic field description. Technical and operational aspects as well as performance achievements are presented, focusing on the new developments for both the current and the upgraded detector.
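A minimal sketch of an update policy like the one described above (the tolerance and the constants are invented for illustration): a new alignment is computed from the first data of each fill, but the constants in use are only replaced when the change is significant, keeping conditions stable otherwise.

```python
def maybe_update(current, proposed, tolerance=1e-3):
    """Compare the newly computed constants against the ones in use and
    return (constants_to_use, whether_an_update_happened)."""
    delta = max(abs(c - p) for c, p in zip(current, proposed))
    if delta > tolerance:
        return proposed, True
    return current, False

current = [0.0, 0.0, 0.0]
proposed = [0.0005, 0.0002, 0.0041]   # one constant moved noticeably
current, updated = maybe_update(current, proposed)
print(updated)  # True
```

Gating the update this way means the trigger only sees a change in conditions when the detector has genuinely moved, not on every statistical fluctuation of the alignment fit.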

2018-07-06
14:18
A true real-time success story: the case of collecting beauty-ful data at the LHCb experiment
Reference: Poster-2018-638
Created: 2018
Creator(s): Alessio, Federico

The LHCb experiment at CERN is currently completing its first big data-taking campaign at the LHC, started in 2009. It has been collecting data at more than 2.5 times its nominal design luminosity and with a global efficiency of ~92%. Even more striking, the efficiency between online and offline recorded luminosity, obtained by comparing the data quality output, is close to 99%, which highlights how well the detector, its data acquisition system and its control system have performed despite much harsher and more variable conditions than initially foreseen. In this paper, the excellent performance of the LHCb experiment is described by transversally tying together the timing and data acquisition system, the software trigger, the real-time calibration, and the shifters' interaction with the control system. Particular attention is given to the real-time aspects, which allow an online reconstruction at the same performance level as the offline one through real-time calibration and alignment of the full detector. In addition, the various solutions chosen to operate the experiment safely and synchronously with the various phases of LHC operations are shown. Indeed, the quasi-autonomous control system of the LHCb experiment is the key to explaining how such a large detector can be operated successfully around the clock by a pool of ~300 non-expert shifters. Finally, a critical review of the experiment is presented in order to justify the reasons for a major upgrade of the detector.
