A4 Peer-reviewed article in conference proceedings

Enhancing Explainability of Artificial Intelligence for Threat Detection in SDN-based Multicast Systems




Authors: Prasad, Preety; Mohammad, Tahir; Isoaho, Jouni

Editors: Shakshuki, Elhadi; Yasar, Ansar

Established name of the conference: International Conference on Ambient Systems, Networks and Technologies

Publisher: Elsevier BV

Publication year: 2025

Journal: Procedia Computer Science

Name of the edited volume: The 16th International Conference on Ambient Systems, Networks and Technologies (ANT) / The 8th International Conference on Emerging Data and Industry 4.0 (EDI40)

Journal name in the database: Procedia Computer Science

Volume: 257

First page: 569

Last page: 574

eISSN: 1877-0509

DOI: https://doi.org/10.1016/j.procs.2025.03.073

Web address: https://doi.org/10.1016/j.procs.2025.03.073

Self-archived copy's web address: https://research.utu.fi/converis/portal/detail/Publication/492312399


Abstract
The increasing adoption of Artificial Intelligence (AI)-based Software-Defined Networking (SDN) in multicast systems has improved network management and traffic efficiency. However, network administrators must be able to understand AI outcomes, and how conclusions are reached in threat detection and mitigation, in order to strengthen their overall security framework. Additionally, centralized control planes in SDN introduce new security challenges, which can complicate the detection and mitigation of various network threats. To address this, this paper presents a novel framework that integrates Explainable AI (XAI) with SDN to detect and mitigate threats in real time. The proposed framework leverages a hybrid machine learning model, using Convolutional Neural Networks (CNN) to analyze network traffic features and Long Short-Term Memory (LSTM) networks to identify patterns and anomalies. To enhance the transparency and explainability of the threat detection process, the framework incorporates both LIME (Local Interpretable Model-Agnostic Explanations) and SHAP (SHapley Additive exPlanations). LIME provides local explanations by generating perturbed data instances and training a surrogate model to identify the most influential features in a specific prediction, allowing network administrators to understand how different network features contribute to a classification decision. SHAP, on the other hand, quantifies the contribution of each feature to the overall model decision by computing Shapley values, offering a global perspective on feature importance. This approach offers a more effective and transparent solution for SDN systems in a multicast environment, improving threat detection and security.
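The pipeline the abstract describes (a hybrid CNN-LSTM classifier explained locally with LIME and globally with SHAP) can be illustrated with a minimal sketch. The code below is not the authors' implementation: the feature names, the synthetic flow data, the layer sizes, and the choice of TensorFlow/Keras with the lime and shap packages are all illustrative assumptions.

import numpy as np
import tensorflow as tf
from lime.lime_tabular import LimeTabularExplainer
import shap

rng = np.random.default_rng(0)
# Hypothetical per-flow SDN traffic statistics; the paper's actual features are not listed here.
feature_names = ["pkt_rate", "byte_count", "flow_duration", "src_entropy",
                 "dst_entropy", "avg_pkt_size", "syn_ratio", "icmp_ratio"]
n_features = len(feature_names)

# Synthetic stand-in data (0 = benign, 1 = threat) with a toy labeling rule.
X = rng.normal(size=(512, n_features)).astype("float32")
y = (X[:, 0] + 0.5 * X[:, 6] > 0.5).astype("int32")

# Hybrid model: Conv1D scans the feature vector for local interactions,
# an LSTM consumes the resulting sequence, and softmax yields class probabilities.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_features, 1)),
    tf.keras.layers.Conv1D(16, 3, padding="same", activation="relu"),
    tf.keras.layers.LSTM(16),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X[..., None], y, epochs=3, batch_size=32, verbose=0)

def predict_proba(rows):
    # Wrap the sequence model so the tabular explainers can call it on 2-D input.
    return model.predict(np.asarray(rows, dtype="float32")[..., None], verbose=0)

# LIME: perturb one flow, fit a local surrogate model, and list the features
# that most influenced this single prediction.
lime_explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["benign", "threat"],
    mode="classification")
explanation = lime_explainer.explain_instance(
    X[0], predict_proba, num_features=4, num_samples=500)
print(explanation.as_list())

# SHAP: the model-agnostic KernelExplainer estimates Shapley values for the
# threat-class probability against a small background sample; averaging their
# magnitudes over several flows approximates a global feature ranking.
shap_explainer = shap.KernelExplainer(lambda rows: predict_proba(rows)[:, 1], X[:50])
shap_values = shap_explainer.shap_values(X[:5], nsamples=100)  # shape (5, n_features)
print(dict(zip(feature_names, np.round(np.abs(shap_values).mean(axis=0), 3))))

The split mirrors the abstract: LIME's surrogate explains one prediction at a time, while aggregated Shapley value magnitudes give the global perspective on which features drive the model's threat decisions overall.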

Downloadable publication

This is an electronic reprint of the original article.
This reprint may differ from the original in pagination and typographic detail. Please cite the original version.




