Universal Adversarial Perturbations
Universal Adversarial Perturbations (UAPs) are carefully crafted modifications to input data, such as images, that can deceive machine learning models, particularly deep neural networks used in computer vision. Unlike standard adversarial examples, which are computed separately for each input, a UAP is a single, input-agnostic perturbation: adding the same perturbation to most inputs causes the model to misclassify them, while the change remains nearly imperceptible to humans. UAPs highlight vulnerabilities in artificial intelligence systems, raising security and robustness concerns in applications such as autonomous vehicles, facial recognition, and image classification. Understanding UAPs is crucial for building more resilient AI systems and for improving adversarial training methods.
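The defining property, one fixed perturbation that fools the model across many different inputs, can be illustrated with a minimal sketch. This is a toy linear classifier, not the DeepFool-based algorithm from the original UAP paper; the model, inputs, and epsilon bound here are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "model": predicts class 1 if w @ x > 0, else class 0.
w = rng.normal(size=16)

def predict(x):
    return int(x @ w > 0)

# Collect inputs the model classifies as class 1 with a small margin.
inputs = []
while len(inputs) < 5:
    x = rng.normal(size=16)
    if 0 < x @ w < 1.0:
        inputs.append(x)

# A single "universal" perturbation, bounded in the L-infinity norm
# (every entry has magnitude at most eps), aimed against the weight
# vector. The SAME v is added to every input -- it is input-agnostic.
eps = 0.5
v = -eps * np.sign(w)

flipped = sum(predict(x) != predict(x + v) for x in inputs)
print(f"{flipped} of {len(inputs)} inputs misclassified by one perturbation")
```

Because the perturbation is aligned against the weight vector, it shifts every low-margin input across the decision boundary at once, which is the behavior UAPs exhibit (empirically, and at much smaller perceptual magnitudes) on deep image classifiers.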