Enhancing Explainability of Artificial Intelligence for Threat Detection in SDN-based Multicast Systems
Authors: Prasad, Preety; Mohammad, Tahir; Isoaho, Jouni
Editors: Shakshuki, Elhadi; Yasar, Ansar
Conference: International Conference on Ambient Systems, Networks and Technologies
Publisher: Elsevier BV
Year: 2025
Journal: Procedia Computer Science
Proceedings: The 16th International Conference on Ambient Systems, Networks and Technologies (ANT) / The 8th International Conference on Emerging Data and Industry 4.0 (EDI40)
Volume: 257
Pages: 569-574
ISSN: 1877-0509
DOI: https://doi.org/10.1016/j.procs.2025.03.073
URL: https://research.utu.fi/converis/portal/detail/Publication/492312399
Abstract: The increasing adoption of Artificial Intelligence (AI) based Software-Defined Networking (SDN) in multicast systems has improved network management and traffic efficiency. However, for network administrators, understanding AI outcomes and how conclusions are reached in threat detection and mitigation is essential for strengthening the overall security framework. Additionally, the centralized control plane in SDN introduces new security challenges, which can complicate the detection and mitigation of various network threats. To address this, this paper presents a novel framework that integrates Explainable AI (XAI) with SDN to detect and mitigate threats in real time. The proposed framework leverages a hybrid machine learning model, using Convolutional Neural Networks (CNN) to analyze network traffic features and Long Short-Term Memory (LSTM) networks to identify patterns and anomalies. To enhance the transparency and explainability of the threat detection process, the framework incorporates both LIME (Local Interpretable Model-Agnostic Explanations) and SHAP (SHapley Additive exPlanations). LIME provides local explanations by generating perturbed data instances and training a surrogate model to identify the most influential features in a specific prediction. This allows network administrators to understand how different network features contribute to the classification decision. SHAP, on the other hand, quantifies the contribution of each feature to the overall model decision by computing Shapley values, offering a global perspective of feature importance. This approach offers a more effective and transparent solution for SDN systems in a multicast environment, improving threat detection and security.
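The LIME mechanism described in the abstract — perturbing an instance, querying the black-box classifier, and fitting a proximity-weighted linear surrogate — can be illustrated with a minimal sketch. This is not the authors' implementation: the "threat classifier", its threshold, and the flow-feature names (packet rate, byte rate, flow duration) are illustrative assumptions, and the surrogate is fit with a closed-form weighted least-squares solve rather than the LIME library.

```python
import numpy as np

# Hypothetical black-box "threat classifier" standing in for the paper's
# CNN-LSTM model: flags a flow when a weighted sum of (illustrative)
# features crosses a threshold. Columns: packet_rate, byte_rate, flow_duration.
def black_box_predict(X):
    score = 0.8 * X[:, 0] + 0.5 * X[:, 1] - 0.3 * X[:, 2]
    return (score > 1.0).astype(float)

def lime_style_explanation(instance, predict_fn, n_samples=2000,
                           kernel_width=0.75, seed=0):
    """LIME-style local surrogate: perturb the instance, weight samples by
    proximity to it, and fit a weighted linear model to the black-box
    outputs. The returned coefficients are the local feature influences."""
    rng = np.random.default_rng(seed)
    # Perturbed data instances around the flow being explained.
    X = instance + rng.normal(scale=0.5, size=(n_samples, instance.size))
    y = predict_fn(X)
    # Exponential kernel: perturbations closer to the instance weigh more.
    d2 = ((X - instance) ** 2).sum(axis=1)
    w = np.exp(-d2 / kernel_width**2)
    # Weighted least squares with an intercept column (normal equations).
    A = np.hstack([np.ones((n_samples, 1)), X])
    W = np.diag(w)
    coef, *_ = np.linalg.lstsq(A.T @ W @ A, A.T @ W @ y, rcond=None)
    return coef[1:]  # drop the intercept; per-feature local influence

# A flow the (hypothetical) model classifies as a threat.
instance = np.array([1.2, 0.9, 0.4])
influence = lime_style_explanation(instance, black_box_predict)
```

Near the decision boundary, the surrogate's coefficients recover the sign of each feature's effect: positive for packet rate and byte rate, negative for flow duration, which is exactly the per-prediction view the abstract says LIME gives the administrator. SHAP differs in that it attributes the prediction exactly via Shapley values and can be aggregated into the global feature ranking the abstract mentions.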