EXPLAINABLE DEEP NEURAL NETWORK BASED ANALYSIS ON INTRUSION DETECTION SYSTEMS
DOI: https://doi.org/10.7494/csci.2023.24.1.4551

Abstract
Research on Intrusion Detection Systems (IDSs) has grown in recent years, particularly work that applies machine learning concepts; such approaches have proven effective for IDSs, and deep neural network-based models in particular have improved detection rates. At the same time, these models have become highly complex, and users cannot trace the reasoning behind the decisions they make, which makes it necessary to expose the explanations behind those decisions and thereby ensure the interpretability of the model. To that end, this article proposes a model that is able to explain its predictions. The proposed framework combines a conventional deep neural network-based intrusion detection system with interpretability of the model's predictions. It uses Shapley Additive Explanations (SHAP), combining local explainability with global explainability to improve the interpretation of intrusion detection decisions. The proposed model was implemented on the widely used NSL-KDD dataset, and the performance of the framework was evaluated using accuracy, precision, recall, and F1-score; the framework achieves an accuracy of about 99.99%. It identifies the top 4 features using local explainability and the top 20 features using global explainability.
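The abstract does not include implementation details, but a minimal sketch of the SHAP workflow it describes might look like the following. It assumes a small Keras feed-forward classifier standing in for the paper's DNN, synthetic data in place of preprocessed NSL-KDD features, and shap's model-agnostic KernelExplainer (the paper's exact explainer variant is not stated here); all feature names, layer sizes, and hyperparameters are illustrative.

```python
import numpy as np
import shap
import tensorflow as tf

# Hypothetical stand-in for preprocessed NSL-KDD data: one column per
# numeric/encoded feature, labels 0 (normal) / 1 (attack).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 41)).astype("float32")
y_train = rng.integers(0, 2, size=1000)
X_test = rng.normal(size=(200, 41)).astype("float32")
feature_names = [f"feature_{i}" for i in range(41)]  # placeholder names

# Simple feed-forward DNN classifier (illustrative architecture only).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(41,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train, epochs=5, batch_size=64, verbose=0)

# Model-agnostic explainer over a small background sample; flattening the
# prediction output keeps shap_values a single (samples x features) matrix.
background = X_train[:100]
explainer = shap.KernelExplainer(
    lambda x: model.predict(x, verbose=0).flatten(), background)

# Local explainability: SHAP values for one test connection, from which the
# top contributing features for that single prediction can be read off.
local_vals = explainer.shap_values(X_test[:1], nsamples=200)

# Global explainability: aggregate SHAP values over many test samples and
# plot mean-|SHAP| feature importance (e.g. the top 20 features).
global_vals = explainer.shap_values(X_test[:50], nsamples=200)
shap.summary_plot(global_vals, X_test[:50],
                  feature_names=feature_names, max_display=20)
```

KernelExplainer is used here because it works with any prediction function; a gradient-based explainer such as shap.DeepExplainer would be a natural alternative for a DNN, depending on the installed TensorFlow version.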
License
Copyright (c) 2023 Computer Science
This work is licensed under a Creative Commons Attribution 4.0 International License.