Data Poisoning

Data poisoning is an adversarial attack in which malicious actors deliberately insert false or misleading examples into a machine learning model's training set. Depending on the attacker's goal, this manipulation can indiscriminately degrade overall model accuracy or induce targeted failures, such as misclassifying only inputs that carry a specific trigger. As an emerging threat at the intersection of cybersecurity and artificial intelligence, data poisoning can significantly impact applications ranging from autonomous systems to fraud detection. Organizations should implement robust data validation and anomaly detection strategies to safeguard against this risk and preserve the integrity and reliability of their machine learning models.
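
As a minimal sketch of both sides of this, assuming scikit-learn is available: the script below flips the labels of a fraction of a synthetic training set (a simple label-flipping poisoning attack) and then applies a per-class IsolationForest as one illustrative anomaly-detection defense. The dataset, the 20% poisoning rate, and the detector settings are all hypothetical choices for demonstration, not a prescribed pipeline.

```python
# Illustrative sketch: label-flipping data poisoning and a simple
# anomaly-detection defense. Parameters are arbitrary demo values.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary classification data standing in for a real training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Attack: flip the labels of 20% of the training points. Real attacks may
# also perturb features or inject entirely new poisoned examples.
n_poison = int(0.20 * len(y_train))
poison_idx = rng.choice(len(y_train), size=n_poison, replace=False)
y_poisoned = y_train.copy()
y_poisoned[poison_idx] = 1 - y_poisoned[poison_idx]

def test_accuracy(labels):
    """Train on the given labels and score on the clean test set."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return model.score(X_test, y_test)

print(f"clean training accuracy:    {test_accuracy(y_train):.3f}")
print(f"poisoned training accuracy: {test_accuracy(y_poisoned):.3f}")

# Defense sketch: flag points whose features look anomalous within their
# assigned class, then retrain without them. Flipped points carry the
# wrong label, so they tend to sit far from their labeled class's
# feature distribution. This catches only crude poisoning; it is a
# starting point, not a complete defense.
keep = np.ones(len(y_train), dtype=bool)
for label in np.unique(y_poisoned):
    mask = y_poisoned == label
    detector = IsolationForest(contamination=0.2, random_state=0)
    inliers = detector.fit_predict(X_train[mask]) == 1
    keep[np.where(mask)[0][~inliers]] = False

filtered = LogisticRegression(max_iter=1000).fit(X_train[keep], y_poisoned[keep])
print(f"filtered training accuracy: {filtered.score(X_test, y_test):.3f}")
```

The per-class split matters in this sketch: a detector fit on all points at once would treat mislabeled examples as ordinary data, whereas within a single labeled class the flipped points stand out as feature-space outliers.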