EXPLAINABLE DEEP NEURAL NETWORK BASED ANALYSIS ON INTRUSION DETECTION SYSTEMS

Authors

  • Sagar Pande Lovely Professional University
  • Aditya Khamparia Babasaheb Bhimrao Ambedkar University, Lucknow, India

DOI:

https://doi.org/10.7494/csci.2023.24.1.4551

Abstract

Research on intrusion detection systems (IDSs) has increased in recent years, particularly research that applies machine learning concepts; these concepts have proven effective for IDSs, and deep neural network-based models in particular have improved detection rates. At the same time, such models are becoming highly complex, and users are unable to trace the reasons for the decisions they make; this indicates the need to identify the explanations behind those decisions and thereby ensure the interpretability of the framed model. In this respect, this article proposes a model that is able to explain its predictions. The proposed framework combines a conventional deep neural network-based intrusion detection system with interpretability of the model's predictions. It employs Shapley Additive Explanations (SHAP), combining local and global explainability to enhance the interpretation of intrusion detection systems. The proposed model was implemented on the popular NSL-KDD dataset, and the performance of the framework was evaluated using accuracy, precision, recall, and F1-score. The framework achieved an accuracy of about 99.99%. It was able to identify the top 4 features using local explainability and the top 20 features using global explainability.
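To make the SHAP idea in the abstract concrete, the following is a minimal, self-contained sketch of the exact Shapley-value computation that SHAP approximates: each feature's contribution is a weighted average of its marginal effect over all coalitions of the other features, with absent features replaced by a baseline. The toy linear scoring function, feature values, and baseline below are illustrative assumptions, not the paper's deep neural network or the NSL-KDD features.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for one instance x.

    Features outside a coalition S are replaced by their baseline value;
    phi[i] averages predict(S + {i}) - predict(S) over all coalitions S.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Classic Shapley coalition weight |S|! (n-|S|-1)! / n!
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (predict(with_i) - predict(without_i))
    return phi

# Toy "model": a linear anomaly score over 3 hypothetical features.
w = [2.0, -1.0, 0.5]
predict = lambda v: sum(wj * vj for wj, vj in zip(w, v))
x, b = [1.0, 2.0, 4.0], [0.0, 0.0, 0.0]
phi = shapley_values(predict, x, b)
# Local explanation: per-feature contributions that sum to predict(x) - predict(b);
# ranking instances by |phi[i]| yields the "top features" a SHAP summary reports.
```

Averaging the absolute values of such per-instance attributions over a dataset is one common way local explanations are aggregated into a global feature ranking; the brute-force enumeration here is exponential in the number of features, which is why SHAP uses model-specific approximations in practice.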

Published

2023-03-06

How to Cite

Pande, S., & Khamparia, A. (2023). EXPLAINABLE DEEP NEURAL NETWORK BASED ANALYSIS ON INTRUSION DETECTION SYSTEMS. Computer Science, 24(1). https://doi.org/10.7494/csci.2023.24.1.4551

Issue

Section

Articles