
Posters

Latest additions:
2018-08-22
18:01
CDS Videos - The new platform for CERN videos
Reference: Poster-2018-654
Created: 2018. - 1 p.
Creator(s): Marian, Ludmila; Gabancho, Esteban; Gonzalez Lopez, Jose Benito; Tarocco, Nicola; Costa, Flavio [...]

CERN Document Server (CDS, cds.cern.ch) is the CERN Institutional Repository, based on the Invenio open source digital repository framework. It is a heterogeneous repository containing more than 2 million records, including research publications, audiovisual material, images, and the CERN archives. Its mission is to store and preserve all the content produced at CERN, as well as to make it easily available to any interested outlet. CDS aims to be CERN's document hub. To achieve this, we are transforming CDS into an aggregator over specialized repositories, each having its own software stack, with features enabled based on the repository's content. The aim is to enable each content producer community to have its own identity, both visually and functionally, as well as increased control over the data model and the submission, curation, management, and dissemination of the data. This separation is made possible by using the Invenio 3 framework. The first specialized repository created is CDS Videos (videos.cern.ch). It was launched in December 2017 and is the first step in the long-term project to migrate the entire CDS to the Invenio 3 framework. CDS Videos provides integrated submission, long-term archival, and dissemination of CERN video material. It offers a complete solution for the CERN video team, as well as for any department or user at CERN, to upload video productions. The CDS Videos system will ingest the video material, interact with the transcoding server to generate web and broadcaster subformats, mint DOI persistent identifiers, generate embeddable code to be reused by any other website, and store the master files for long-term archival. The talk will detail the software architecture of CDS Videos as well as the infrastructure needed to run such a large-scale web application. It will present the technical solutions adopted, including the Python-based software stack (using, among others, Flask, IIIF, ElasticSearch, Celery, RabbitMQ) and the new AngularJS-based user interface designed exclusively for CDS Videos. It will also present our solution for a lossless migration of data: more than 5'000 videos from 1954 to 2017, summing up to 30 TB of files, have been migrated from DFS to EOS in order to populate the CDS Videos platform. All this could be of high interest to other institutes wanting to reuse the CDS Videos open source code to create their own video platform. Last but not least, the talk will detail how the user community at CERN and beyond can take advantage of the CDS Videos platform for creating and disseminating video content.
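The abstract mentions an asynchronous ingestion pipeline built on Celery and RabbitMQ that hands video masters to a transcoding step. A minimal sketch of how such a task could look is shown below; the task name, ffmpeg invocation and paths are hypothetical and not the actual CDS Videos code.

```python
# Hypothetical sketch of an asynchronous transcoding task in the spirit of the
# CDS Videos pipeline (Celery workers fed through RabbitMQ). Names, paths and
# the ffmpeg command are illustrative, not the CDS Videos implementation.
import subprocess
from celery import Celery

app = Celery("cds_videos_sketch", broker="amqp://localhost//")

@app.task(bind=True, max_retries=3)
def transcode_video(self, master_path, output_path, height=720):
    """Produce a web subformat from a master file using ffmpeg."""
    cmd = [
        "ffmpeg", "-y", "-i", master_path,
        "-vf", f"scale=-2:{height}",       # keep aspect ratio, fix output height
        "-c:v", "libx264", "-c:a", "aac",
        output_path,
    ]
    try:
        subprocess.run(cmd, check=True)
    except subprocess.CalledProcessError as exc:
        # Retry transient failures; give up after max_retries.
        raise self.retry(exc=exc, countdown=60)
    return output_path

# Submitting side (fire-and-forget through the broker):
# transcode_video.delay("/eos/project/videos/master.mov", "/eos/project/videos/web_720p.mp4")
```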

© CERN Geneva

Access to files

Detailed record - Similar records
2018-08-09
14:07
Honeypot Resurrection - Redesign of CERN's Security Honeypots
Reference: Poster-2018-653
Keywords:  computer security, honeypot, SOC
Created: 2018. - 1 p.
Creator(s): Buschendorf, Fabiola

A honeypot is a fake system residing in a company's or organization's network, attracting attackers by emulating old and vulnerable software. If a honeypot is accessed, all actions are logged and any submitted files are stored on the host machine. The current honeypot at CERN is deprecated and does not provide useful notifications. The task of this summer student project is to identify well-maintained and up-to-date open source honeypots, test and configure them, and finally deploy them to convincingly resemble a CERN host in order to collect information about potentially malicious activity inside the GPN.
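To illustrate the basic idea of "log everything that touches the fake system", here is a minimal sketch of a TCP listener that records connection attempts. It is purely illustrative; the honeypots evaluated in the project emulate full services rather than a bare socket.

```python
# Minimal illustrative honeypot sketch: listen on a port that should never see
# legitimate traffic and log every connection attempt and its first bytes.
# Real honeypots emulate complete vulnerable services; this only shows logging.
import logging
import socket

logging.basicConfig(filename="honeypot.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

def listen(port=2323):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            with conn:
                conn.settimeout(5)
                try:
                    data = conn.recv(4096)
                except socket.timeout:
                    data = b""
                logging.info("connection from %s:%d, first bytes: %r",
                             addr[0], addr[1], data[:100])

if __name__ == "__main__":
    listen()
```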

© CERN Geneva

Access to files

Detailed record - Similar records
2018-08-03
16:36
Software packaging and distribution for LHCb using Nix
Reference: Poster-2018-652
Created: 2018. - 1 p.
Creator(s): Burr, Chris

Software is an essential and rapidly evolving component of modern high energy physics research. The ability to be agile and take advantage of new and updated packages from the wider data science community allows physicists to efficiently utilise the data available to them. However, these packages often introduce complex dependency chains and evolve rapidly, introducing specific, and sometimes conflicting, version requirements which can make managing environments challenging. Additionally, there is a need to replicate old environments when generating simulated data and to utilise pre-existing datasets. Nix is a "purely functional package manager" which allows software to be built and distributed with fully specified dependencies, making packages independent of those available on the host. Builds are reproducible, and multiple versions/configurations of each package can coexist, with the build configuration of each perfectly preserved. Here we will give an overview of Nix, followed by the work that has been done to use Nix in LHCb and the advantages and challenges that this brings.

© CERN Geneva

Access to files

Detailed record - Similar records
2018-07-27
13:59
KairosDB and Chronix as long-term storage for Prometheus - For those who don't want to deal with HBase.
Reference: Poster-2018-651
Created: 2018. - 1 p.
Creator(s): Mohamed, Hristo Umaru

Prometheus is a leading open source monitoring and alerting tool. Prometheus's local storage is limited in its scalability and durability, but it integrates very well with other solutions that provide robust long-term storage. This talk will cover two solutions which interface excellently with Prometheus and do not require us to deal with HBase: KairosDB and Chronix. The intended audience is anyone looking to evaluate a long-term storage solution for their Prometheus data. The talk will cover the LHCb Online (CERN) experience of choosing a monitoring solution for our data processing cluster, addressing two technologies on the market, Chronix and KairosDB, which do not require us to maintain an HBase cluster.
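In such a setup Prometheus would normally feed the backend through its remote-write protocol; as a purely illustrative sketch of how KairosDB ingests time series, the snippet below pushes a sample directly into KairosDB's REST API (POST /api/v1/datapoints). The host, metric and tag names are made up.

```python
# Illustrative sketch: writing a data point into KairosDB via its REST API.
# In the deployment discussed above, Prometheus would feed KairosDB through a
# remote-write adapter instead; metric, tags and host below are invented.
import time
import requests

KAIROSDB_URL = "http://kairosdb.example.cern.ch:8080"  # hypothetical host

def push_sample(metric, value, tags):
    payload = [{
        "name": metric,
        "datapoints": [[int(time.time() * 1000), value]],  # [timestamp in ms, value]
        "tags": tags,
    }]
    resp = requests.post(f"{KAIROSDB_URL}/api/v1/datapoints", json=payload)
    resp.raise_for_status()

push_sample("node_cpu_load", 0.42, {"host": "hlt-node-001", "cluster": "online"})
```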

© CERN Geneva

Access to files

Detailed record - Similar records
2018-07-26
15:22
Strategy and Automation of the Quality Assurance Testing of MaPMTs for the LHCb RICH Upgrade
Reference: Poster-2018-650
Created: 2018. - 1 p.
Creator(s): Gizdov, Konstantin

The LHCb RICH system will undergo major modifications for the LHCb Upgrade during Long Shutdown 2 of the LHC, and the current photon detectors will be replaced by Multi-Anode PMTs. The operating conditions of the upgraded experiment place significant requirements on the MaPMTs in terms of their performance, durability and reliability. Presented is an overview of the testing facilities designed and used to vet 3100 units of the Hamamatsu 1-inch R13742 and 450 units of the Hamamatsu 2-inch R13743 during the short two-year testing period. Furthermore, the hardware architecture, the different read-out, power and control components, as well as the novel extensible software framework used to steer the procedure, are discussed. Finally, the operation of four automated stations, deployed in two separate labs, is reported, with each station capable of fully characterising 16 MaPMTs per day.

© CERN Geneva

Access to files

Detailed record - Similar records
2018-07-26
13:27
First results from production testing of 64-channel MaPMT R13742 (1 in) and R13743 (2 in) for the LHCb RICH Upgrade
Reference: Poster-2018-649
Created: 2018. - 1 p.
Creator(s): Gizdov, Konstantin

During the 2019/20 LHCb Upgrade of the Ring Imaging Cherenkov (RICH) system, the current Hybrid Photon Detectors (HPDs), with embedded 1 MHz readout electronics, will be replaced with Multi-anode Photomultiplier Tubes (MaPMTs) with new external 40 MHz readout electronics. Two sizes of Hamamatsu 64-channel MaPMT have been selected as the photon detectors: the 1-inch R13742 and the 2-inch R13743, custom modifications of the models R11625 and R12699. Including spares, 3100 R13742 and 450 R13743 units have been purchased. The campaign to characterise all units, to ensure compliance with minimum specifications and to allow for the selection of units with similar operational parameters, is ongoing. The key characteristics comprise the average gain, the spread of the gain (uniformity), the peak-to-valley ratio, the dark count rate, as well as the dependency of the gain on the high voltage (k-factor). So far 474 and 45 units have been tested, respectively. The test results will be presented. Additional measurements and studies, made with subsets of MaPMTs, complete the picture: the quantum efficiency, the loss of photon detection efficiency in magnetic fields, and minimal mu-metal shield configurations to effectively shield them up to 3 mT.
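As a rough illustration of the gain-vs-high-voltage characterisation mentioned above, PMT gain is commonly modelled as a power law of the supply voltage, G proportional to V^k, so an exponent can be extracted from a straight-line fit in log-log space. The sketch below assumes that model and uses invented measurement values; it is not an LHCb result and the abstract's exact "k-factor" definition may differ.

```python
# Sketch: extracting a power-law exponent from gain measurements at several HV
# settings, assuming G = A * V**k. The numbers are invented for illustration.
import numpy as np

hv = np.array([900.0, 950.0, 1000.0, 1050.0, 1100.0])   # supply voltage in volts
gain = np.array([0.6e6, 0.8e6, 1.0e6, 1.3e6, 1.6e6])     # measured gain (toy values)

# Straight-line fit of log(G) = log(A) + k * log(V)
k, log_a = np.polyfit(np.log(hv), np.log(gain), 1)
print(f"exponent k ~ {k:.1f}, fitted gain at 1000 V ~ {np.exp(log_a) * 1000.0**k:.2e}")
```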

© CERN Geneva

Access to files

Detailed record - Similar records
2018-07-20
14:45
Improvements to the LHCb software performance testing infrastructure using message queues and big data technologies
Reference: Poster-2018-648
Created: 2018. - 1 p.
Creator(s): Szymanski, Maciej Pawel

Software is an essential component of the experiments in High Energy Physics. Because it is upgraded on relatively short timescales, software provides flexibility, but at the same time it is susceptible to issues introduced during the development process, which calls for systematic testing. We present recent improvements to LHCbPR, the framework implemented at LHCb to measure the physics and computational performance of complete applications. Such infrastructure is essential for keeping track of the optimisation activities related to the upgrade of the computing systems, which is crucial to meet the requirements of the LHCb detector upgrade for the next stage of data taking at the LHC. The latest developments in LHCbPR include the use of a messaging system to trigger the tests right after the corresponding software version is built within the LHCb Nightly Builds infrastructure. We will also report on the investigation of using big data technologies in LHCbPR. We have found that tools such as Apache Spark and the Hadoop Distributed File System may significantly improve the functionality of the framework, providing interactive exploration of the test results with efficient data filtering and flexible development of reports.
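A minimal sketch of the kind of interactive exploration described above, using PySpark over test results stored on HDFS. The path, schema and column names are invented for the example and do not reflect the actual LHCbPR data layout.

```python
# Illustrative PySpark sketch: filter and aggregate performance-test results
# stored on HDFS. The path and column names are hypothetical, not LHCbPR's.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("lhcbpr-explore").getOrCreate()

results = spark.read.json("hdfs:///lhcbpr/test_results/*.json")  # hypothetical path

# Average throughput per nightly slot and build for one application.
summary = (results
           .filter(F.col("application") == "Brunel")
           .groupBy("slot", "build_id")
           .agg(F.avg("events_per_second").alias("avg_throughput"))
           .orderBy(F.desc("build_id")))

summary.show(20)
```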

© CERN Geneva

Access to files

Detailed record - Similar records
2018-07-20
14:41
Machine Learning based Global Particle Identification Algorithms at the LHCb Experiment
Reference: Poster-2018-647
Created: 2018. - 1 p.
Creator(s): Hushchyn, Mikhail

One of the most important aspects of data processing at the LHC experiments is the particle identification (PID) algorithm. In LHCb, several different sub-detector systems provide PID information: the Ring Imaging Cherenkov detectors, the hadronic and electromagnetic calorimeters, and the muon chambers. Charged-particle PID based on the sub-detector responses is treated as a machine learning problem, solved in different modes: one-vs-rest, one-vs-one and multi-class classification, which affect model training and prediction. To improve charged particle identification for pions, kaons, protons, muons and electrons, several neural networks and gradient boosting models have been tested. In most cases, these approaches provide a larger area under the receiver operating characteristic curve than the existing implementations. To reduce the systematic uncertainty arising from the use of PID efficiencies in certain physics measurements, it is also beneficial to achieve a flat dependence of the efficiencies on spectator variables such as particle momentum. For this purpose, "flat" algorithms based on boosted decision trees, which guarantee this flatness property for the efficiencies, have also been developed. This talk presents approaches based on state-of-the-art machine learning techniques and their performance, evaluated on Run 2 data and simulation samples. A discussion of the performance is also included.
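A minimal sketch of the one-vs-rest mode mentioned above, using scikit-learn's gradient boosting on made-up features. The real LHCb models are trained on dedicated sub-detector response variables and far larger samples; everything below is a toy.

```python
# Toy sketch of one-vs-rest gradient boosting for a 5-class PID-like problem.
# Features and labels are random stand-ins, not LHCb sub-detector responses.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 6))        # stand-in for RICH / CALO / MUON response variables
y = rng.integers(0, 5, size=n)     # 5 toy classes: pi, K, p, mu, e

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = OneVsRestClassifier(GradientBoostingClassifier(n_estimators=100))
clf.fit(X_train, y_train)

proba = clf.predict_proba(X_test)
print("one-vs-rest ROC AUC:", roc_auc_score(y_test, proba, multi_class="ovr"))
```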

© CERN Geneva

Access to files

Detailed record - Similar records
2018-07-20
14:32
Addressing Scalability with Message Queues: Architecture and Use Cases for DIRAC Interware
Reference: Poster-2018-646
Created: 2018. - 1 p.
Creator(s): Krzemien, Wojciech Jan

The Message Queue architecture is an asynchronous communication scheme that provides an attractive solution for certain scenarios in the distributed computing model. The introduction of an intermediate component (the queue) between the interacting processes decouples the end-points, making the system more flexible and providing high scalability and redundancy. Message queue brokers such as RabbitMQ, ActiveMQ or Kafka are proven technologies, widely used nowadays. DIRAC is a general-purpose interware software for distributed computing systems, which offers a common interface to a number of heterogeneous providers and guarantees transparent and reliable usage of the resources. The DIRAC platform has been adopted by several scientific projects, including High Energy Physics communities such as LHCb, the Linear Collider and Belle II. A generic Message Queue interface has been incorporated into the DIRAC framework to help solve the scalability challenges that must be addressed during LHC Run 3, starting in 2021. It allows the MQ scheme to be used for message exchange among DIRAC components, or to communicate with third-party services. In this contribution we will describe the integration of MQ systems with DIRAC, and several use cases will be shown. The focus will be on the incorporation of MQ into the pilot logging system. Message Queues are also foreseen as a backbone of the DIRAC component logging and monitoring systems. The results of the first performance tests will be presented.
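The sketch below illustrates the decoupling idea behind the MQ scheme with a RabbitMQ producer/consumer pair using the pika client. It is a generic example, not DIRAC's actual MQ interface; the queue name and message fields are invented.

```python
# Generic RabbitMQ example of the queue-in-the-middle pattern: a producer
# publishes pilot-log-like messages and a consumer drains them independently.
# Not DIRAC's MQ interface; names and fields are invented for illustration.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="pilot.logging", durable=True)

# Producer side: fire-and-forget, the broker buffers the message.
message = {"pilot_id": "abc123", "status": "Running", "site": "LCG.CERN.cern"}
channel.basic_publish(
    exchange="",
    routing_key="pilot.logging",
    body=json.dumps(message),
    properties=pika.BasicProperties(delivery_mode=2),  # persistent message
)

# Consumer side (could run in a different process, at a different time).
def handle(ch, method, properties, body):
    print("received:", json.loads(body))
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="pilot.logging", on_message_callback=handle)
channel.start_consuming()  # blocks, dispatching messages to handle()
```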

© CERN Geneva

Access to files

Detailed record - Similar records
2018-07-20
14:30
New approaches for track reconstruction in LHCb’s Vertex Locator
Reference: Poster-2018-645
Created: 2018. - 1 p.
Creator(s): Hasse, Christoph

Starting with Upgrade 1 in 2021, LHCb will move to a purely software-based trigger system. The new trigger strategy is therefore to process events at the full rate of 30 MHz. Given that the increase in CPU performance has slowed down in recent years, the predicted performance of the software trigger currently falls short of the necessary 30 MHz throughput. To cope with this shortfall, LHCb's real-time reconstruction will have to be sped up significantly. We aim to help close this gap by speeding up the track reconstruction of the Vertex Locator, which currently takes up roughly a third of the time spent in the first phase of the High Level Trigger. In order to obtain the needed speedup, profiling and technical optimisations are explored, as well as new algorithmic approaches. For instance, a clustering-based algorithm can reduce the event rate prior to the track reconstruction by separating hits into two sets - hits from particles originating from the proton-proton interaction point, and those from secondary particles - allowing the reconstruction to treat them separately. We present an overview of our latest efforts in solving this problem, which is crucial to the success of the LHCb upgrade.
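Purely as a toy illustration of splitting a hit collection into two populations with an unsupervised clustering step, the sketch below clusters a single invented per-hit quantity with k-means. The feature, its two-component structure and the thresholds are made up; this is not the VELO geometry or the algorithm discussed in the talk.

```python
# Toy illustration: separate a hit collection into two populations by running
# k-means on one invented "displacement-like" quantity per hit. Not the VELO
# clustering algorithm; values and structure are fabricated for the example.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
prompt = rng.normal(loc=0.0, scale=0.05, size=800)    # toy hits compatible with the beamline
secondary = rng.normal(loc=0.8, scale=0.3, size=200)  # toy hits from displaced particles
feature = np.concatenate([prompt, secondary]).reshape(-1, 1)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feature)
for k in range(2):
    sel = feature[labels == k]
    print(f"cluster {k}: {len(sel)} hits, mean feature {sel.mean():.2f}")
```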

© CERN Geneva

Access to files

Detailed record - Similar records