A4 Refereed article in a conference publication

Enhancing Explainability of Artificial Intelligence for Threat Detection in SDN-based Multicast Systems




Authors Prasad, Preety; Mohammad, Tahir; Isoaho, Jouni

Editors Shakshuki, Elhadi; Yasar, Ansar

Conference name International Conference on Ambient Systems, Networks and Technologies

Publisher Elsevier BV

Publication year 2025

Journal Procedia Computer Science

Book title The 16th International Conference on Ambient Systems, Networks and Technologies Networks (ANT) / The 8th International Conference on Emerging Data and Industry 4.0 (EDI40)

Journal name in source Procedia Computer Science

Volume 257

First page 569

Last page 574

eISSN 1877-0509

DOI https://doi.org/10.1016/j.procs.2025.03.073

Web address https://doi.org/10.1016/j.procs.2025.03.073

Self-archived copy’s web address https://research.utu.fi/converis/portal/detail/Publication/492312399


Abstract
The increasing adoption of Artificial Intelligence (AI) based Software-Defined Networking (SDN) in multicast systems has improved network management and traffic efficiency. However, for network administrators, understanding AI outcomes, and how the system reaches its conclusions in threat detection and mitigation, is essential for strengthening the overall security framework. Additionally, the centralized control plane in SDN introduces new security challenges that can complicate the detection and mitigation of network threats. To address this, this paper presents a novel framework that integrates Explainable AI (XAI) with SDN to detect and mitigate threats in real time. The proposed framework leverages a hybrid machine learning model, using Convolutional Neural Networks (CNN) to analyze network traffic features and Long Short-Term Memory (LSTM) networks to identify patterns and anomalies. To enhance the transparency and explainability of the threat detection process, the framework incorporates both LIME (Local Interpretable Model-Agnostic Explanations) and SHAP (SHapley Additive exPlanations). LIME provides local explanations by generating perturbed data instances and training a surrogate model to identify the features most influential in a specific prediction, allowing network administrators to understand how individual network features contribute to a classification decision. SHAP, on the other hand, quantifies the contribution of each feature to the overall model decision by computing Shapley values, offering a global perspective on feature importance. Together, this approach offers a more effective and transparent solution for SDN systems in a multicast environment, improving threat detection and security.
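As a concrete illustration of how the explanation layer described in the abstract could be wired up, the following Python sketch applies LIME and SHAP to a toy CNN-LSTM traffic classifier. It is illustrative only and not the authors' implementation: the feature names, model architecture, and random stand-in data are hypothetical, and only standard, documented lime, shap, and TensorFlow/Keras calls are used.

# Illustrative sketch only (not the paper's code): explaining a small hybrid
# CNN-LSTM traffic classifier with LIME (local) and SHAP (global).
# Feature names, model shape, and training data are hypothetical placeholders.
import numpy as np
import tensorflow as tf
import shap
from lime.lime_tabular import LimeTabularExplainer

FEATURES = ["pkt_rate", "byte_rate", "flow_duration", "src_ip_entropy"]  # hypothetical
N = len(FEATURES)

# Hybrid model: Conv1D picks up local patterns across traffic features,
# an LSTM models sequential structure, and a softmax head separates
# benign traffic from threats.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(N, 1)),
    tf.keras.layers.Conv1D(16, kernel_size=2, activation="relu"),
    tf.keras.layers.LSTM(8),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Random stand-in data; a real deployment would use labeled SDN flow statistics.
X_train = np.random.rand(200, N).astype("float32")
y_train = np.random.randint(0, 2, size=200)
model.fit(X_train[..., np.newaxis], y_train, epochs=1, verbose=0)

def predict_proba(x):
    # Adapter: both explainers pass 2-D arrays; the model expects a channel axis.
    return model.predict(np.asarray(x, dtype="float32")[..., np.newaxis], verbose=0)

# LIME: perturb one instance, fit a local surrogate model, and report the
# features that most influenced this single prediction.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=FEATURES, class_names=["benign", "threat"],
    mode="classification")
explanation = lime_explainer.explain_instance(
    X_train[0], predict_proba, num_features=4)
print(explanation.as_list())  # per-feature weights for this one flow

# SHAP: Shapley values of the threat-class probability, averaged over many
# flows, give a global view of feature importance.
background = shap.sample(X_train, 25)  # background set for the kernel explainer
shap_explainer = shap.KernelExplainer(lambda x: predict_proba(x)[:, 1], background)
shap_values = shap_explainer.shap_values(X_train[:20], nsamples=100)
print(dict(zip(FEATURES, np.abs(shap_values).mean(axis=0))))  # global ranking

In a deployed framework the same pattern would sit beside the SDN controller: the LIME output answers why one particular flow was flagged, while the averaged absolute SHAP values tell the administrator which traffic features drive the model's decisions overall.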

Downloadable publication

This is an electronic reprint of the original article.
This reprint may differ from the original in pagination and typographic detail. Please cite the original version.





Last updated on 2025-09-06 at 12:48