Adversarial Attacks and Defenses in Fault Detection and Diagnosis: A Comprehensive Benchmark on the Tennessee Eastman Process

IEEE Open Journal of the Industrial Electronics Society
Date of publication: 15 May 2024
Authors: Ilya Makarov, Alexander Kovalenko, Vitaliy Pozdnyakov, M. D. Drobyshevsky, Kirill Lukyanov

Abstract

Integrating machine learning into Automated Control Systems (ACS) enhances decision-making in industrial process management. However, the vulnerability of neural networks to adversarial attacks remains a barrier to the widespread industrial adoption of these technologies. This study explores the threats posed by deploying deep learning models for Fault Detection and Diagnosis (FDD) in ACS, using the Tennessee Eastman Process dataset. We evaluate three neural networks with different architectures, subject them to six types of adversarial attacks, and examine five defense methods. Our results show that the models are highly vulnerable to adversarial samples and that the defense strategies vary in effectiveness. We also propose a new defense strategy that combines adversarial training and data quantization. This research contributes several insights into securing machine learning within ACS, supporting robust FDD in industrial processes.
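The intuition behind the quantization component of the proposed defense can be illustrated with a minimal NumPy sketch. The step size, sensor values, and gradient sign below are illustrative assumptions, not values from the paper: the point is only that an FGSM-style perturbation smaller than half the quantization step is rounded away before it reaches the model.

```python
import numpy as np

def quantize(x, step=0.1):
    """Snap each feature to the nearest multiple of `step`.

    Any perturbation smaller than step/2 is removed by the rounding,
    which is what makes quantization useful as an input defense.
    """
    return np.round(x / step) * step

# Hypothetical clean sensor reading and an FGSM-style perturbation:
# epsilon * sign(gradient of the loss), with epsilon < step/2.
x_clean = np.array([0.30, -0.10, 0.50])
grad_sign = np.array([1.0, -1.0, 1.0])
epsilon = 0.03
x_adv = x_clean + epsilon * grad_sign

# After quantization the adversarial example collapses back onto the
# clean input, so the downstream FDD model sees identical features.
assert np.allclose(quantize(x_adv), quantize(x_clean))
```

Larger perturbations survive the rounding, which is why the paper pairs quantization with adversarial training rather than relying on it alone.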
