Data Poisoning in Machine Learning

Risks, Detection, and Countermeasures in Cybersecurity

Authors

  • Olivia Martinez, Associate Professor, Department of Computer Science, University of California, Berkeley, California, USA

Keywords:

Data Poisoning, Machine Learning, Cybersecurity, Threat Detection, Intrusion Prevention, Software Development Lifecycle

Abstract

As machine learning (ML) systems are increasingly adopted in cybersecurity applications, the integrity and reliability of these models become critical. One significant threat to machine learning systems is data poisoning, wherein malicious actors intentionally manipulate training data to degrade model performance or mislead predictions. This paper explores the risks associated with data poisoning in machine learning models used in cybersecurity, emphasizing the potential impact on threat detection, intrusion prevention, and overall system robustness. Furthermore, it outlines various detection mechanisms for identifying poisoned data, including anomaly detection and robust training techniques. The paper also proposes a set of countermeasures aimed at safeguarding the integrity of AI-driven security systems, such as data sanitization, regular model audits, and the incorporation of adversarial training. By addressing these challenges, this research aims to enhance the resilience of machine learning systems against data poisoning attacks, thereby improving the security posture of organizations that rely on these technologies.
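To illustrate the anomaly-detection mechanism mentioned above, the following is a minimal sketch (not taken from the paper) of one simple approach: flagging training samples whose features deviate strongly from the per-feature mean. The function name, threshold, and toy data are assumptions for demonstration only; real poisoning defenses typically use more robust statistics or dedicated outlier models.

```python
# Illustrative sketch (assumed example, not the paper's method): flag
# potentially poisoned training points via a per-feature z-score check.
from statistics import mean, stdev

def flag_anomalies(samples, threshold=3.0):
    """Return indices of samples where any feature lies more than
    `threshold` standard deviations from that feature's mean."""
    n_features = len(samples[0])
    flagged = set()
    for j in range(n_features):
        column = [s[j] for s in samples]
        mu, sigma = mean(column), stdev(column)
        if sigma == 0:
            continue  # constant feature: no deviation to measure
        for i, value in enumerate(column):
            if abs(value - mu) / sigma > threshold:
                flagged.add(i)
    return sorted(flagged)

# Mostly benign points near (1.0, 1.0), plus one extreme outlier
data = [(1.0, 1.0), (1.1, 0.9), (0.9, 1.1), (1.0, 1.2), (9.0, 9.0)]
print(flag_anomalies(data, threshold=1.5))  # → [4]
```

Samples flagged this way would then be removed or down-weighted during data sanitization, before model training.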

Published

18-10-2024

How to Cite

[1] “Data Poisoning in Machine Learning: Risks, Detection, and Countermeasures in Cybersecurity”, J. of Art. Int. Research, vol. 4, no. 2, pp. 109–116, Oct. 2024, Accessed: Mar. 17, 2026. [Online]. Available: https://thesciencebrigade.org/JAIR/article/view/414