Efficacy of Adversarial Attacks on Traffic Sign Recognition Models

Presenter

Session Number

CMPS(ai) 17

Advisor(s)

Pooya Khorammi and Danielle Sullivan, MIT Lincoln Laboratory

Discipline

Computer Science

Start Date

17-4-2025 11:25 AM

End Date

17-4-2025 11:40 AM

Abstract

Adversarial learning is a critical area of research that examines the vulnerabilities of machine learning models to carefully crafted attacks. In safety-critical applications such as autonomous driving, adversarial attacks on traffic sign recognition systems pose significant risks, potentially leading to severe consequences such as crashes.

This study explores various adversarial attack strategies, including white-box and black-box methods, to assess their impact on the traffic sign classifiers used in autonomous vehicles. Techniques such as the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), DeepFool, and Square Attack, among others, are analyzed for their effectiveness in misleading recognition models. Additionally, we evaluate state-of-the-art defense mechanisms, including adversarial training, to enhance model resilience. By benchmarking attack efficiency and mitigation strategies, we aim to contribute to the development of safe, reliable, and secure autonomous driving systems.
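To illustrate the simplest of the listed attacks, the sketch below shows a generic FGSM perturbation in PyTorch. It assumes a differentiable classifier `model`, input images normalized to [0, 1], and an L-infinity budget `epsilon`; these names and settings are illustrative assumptions, not the study's actual implementation or hyperparameters.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=8 / 255):
    """Single-step, white-box FGSM: perturb each input along the sign of the
    loss gradient to push the classifier toward a misprediction."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the valid pixel range.
    adv_images = images + epsilon * images.grad.sign()
    return adv_images.clamp(0.0, 1.0).detach()
```

Iterative attacks such as PGD repeat this gradient step several times with projection back onto the epsilon-ball, while black-box methods such as Square Attack query the model without access to its gradients.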
