Track Finding with Deep Neural Networks

Authors

DOI:

https://doi.org/10.7494/csci.2019.20.4.3376

Keywords:

Deep Neural Networks, Machine Learning, tracking, HEP

Abstract

High Energy Physics experiments require fast and efficient methods to
reconstruct the tracks of charged particles. Commonly used algorithms are
sequential, and the CPU time they require grows rapidly with the number of
tracks. Neural networks can speed up the process thanks to their ability to
model complex non-linear data dependencies and to find all tracks in parallel.
In this paper we describe the application of a Deep Neural Network to the
reconstruction of straight tracks in a toy two-dimensional model. It is
planned to apply this method to the experimental data taken by the MUonE
experiment at CERN.
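To make the toy problem concrete, the following is a minimal sketch of the kind of setup the abstract describes: straight tracks in two dimensions, measured as smeared hits on a few sensor layers, with a small dense network regressing the track parameters for a whole batch of tracks in parallel. The specifics (five layers, the smearing width, the network size, plain NumPy gradient descent instead of Keras) are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy geometry: tracks are lines y = a*x + b sampled at fixed
# detector-layer positions x, with Gaussian hit smearing.
layers_x = np.linspace(0.0, 1.0, 5)           # x positions of 5 sensor layers

def make_tracks(n):
    a = rng.uniform(-1.0, 1.0, size=(n, 1))   # slopes
    b = rng.uniform(-0.5, 0.5, size=(n, 1))   # intercepts
    hits = a * layers_x + b                   # (n, 5) ideal hit y-coordinates
    hits += rng.normal(0.0, 0.01, hits.shape) # measurement smearing
    return hits, np.hstack([a, b])

# One hidden tanh layer mapping the 5 hit coordinates of a track to (a, b);
# trained by plain batch gradient descent on the mean squared error.
W1 = rng.normal(0, 0.5, (5, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 2)); b2 = np.zeros(2)

lr = 0.05
for step in range(3000):
    X, Y = make_tracks(256)
    h = np.tanh(X @ W1 + b1)                  # forward pass, all tracks at once
    pred = h @ W2 + b2
    err = pred - Y
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)            # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

X, Y = make_tracks(1000)
pred = np.tanh(X @ W1 + b1) @ W2 + b2
print("mean |error| on (a, b):", np.abs(pred - Y).mean(0))
```

The point of the sketch is the parallelism the abstract refers to: one forward pass of the network evaluates every track in the batch simultaneously, whereas a sequential algorithm would process the tracks one by one.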


References

Keras LSTM tutorial. URL https://adventuresinmachinelearning.com/keras-lstm-tutorial/.

Abadi M., Barham P., Chen J., Chen Z., Davis A., Dean J., Devin M., Ghemawat S., Irving G., Isard M., et al.: TensorFlow: A system for large-scale machine learning. In: 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 265–283, 2016.

Abbiendi G., et al.: Measuring the leading hadronic contribution to the muon g − 2 via μe scattering. In: European Physical Journal C, vol. 77, p. 139, 2017.

Bishop C.M.: Mixture density networks. Technical report, Aston University, 1994.

Brun R., Rademakers F.: ROOT: An object oriented data analysis framework. In: Nucl. Instrum. Meth., vol. A389, pp. 81–86, 1997. URL http://dx.doi.org/10.1016/S0168-9002(97)00048-X.

Calafiura P.: HEP advanced tracking algorithms with cross-cutting applications (Project HEP.TrkX). URL https://heptrkx.github.io/.

Chollet F., et al.: Keras: Deep learning library for Theano and TensorFlow, 2015. URL https://keras.io/.

Farrell S., Anderson D., Calafiura P., Cerati G., Gray L., Kowalkowski J., Mudigonda M., Spentzouris P., Spiropoulou M., Tsaris A., et al.: The HEP.TrkX Project: deep neural networks for HL-LHC online and offline tracking. In: EPJ Web of Conferences, vol. 150, p. 00003. EDP Sciences, 2017.

Farrell S., Calafiura P., Mudigonda M., Anderson D., Vlimant J.R., Zheng S., Bendavid J., Spiropulu M., Cerati G., Gray L., et al.: Novel deep learning methods for track reconstruction. In: arXiv preprint arXiv:1810.06111, 2018.

Frühwirth R.: Application of Kalman filtering to track and vertex fitting. In: Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, vol. 262(2-3), pp. 444–450, 1987.

Graves A., Wayne G., Danihelka I.: Neural Turing machines. In: arXiv preprint arXiv:1410.5401, 2014.

Hochreiter S., Schmidhuber J.: Long short-term memory. In: Neural Computation, vol. 9(8), pp. 1735–1780, 1997.

James F., Roos M.: MINUIT: a system for function minimization and analysis of the parameter errors and corrections. In: Comput. Phys. Commun., vol. 10(CERN-DD-75-20), pp. 343–367, 1975.

LeCun Y., Bengio Y., Hinton G.: Deep learning. In: Nature, vol. 521(7553), pp. 436–444, 2015.

Neal R.M.: Bayesian learning for neural networks, vol. 118. Springer Science & Business Media, 2012.

Pearl J.: Markov and Bayesian Networks, chap. 3. In: Probabilistic Reasoning in Intelligent Systems, 1988.

Srivastava N., Hinton G., Krizhevsky A., Sutskever I., Salakhutdinov R.: Dropout: a simple way to prevent neural networks from overfitting. In: The Journal of Machine Learning Research, vol. 15(1), pp. 1929–1958, 2014.

Vinyals O., Toshev A., Bengio S., Erhan D.: Show and tell: A neural image caption generator. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3156–3164, 2015.

Published

2019-12-04

Section

Articles
