Improving Transferability of Physical Adversarial Attacks on Object Detectors Through Multi-Model Optimization
Publication Name: Applied Sciences (Switzerland)
Publication Date: 2024-12-01
Volume: 14
Issue: 23
Page Range: Unknown
Description:
Physical adversarial attacks struggle to transfer across different object detection models, especially under real-world conditions. This is primarily because variations in model architectures, training data, and detection strategies make adversarial examples highly model-specific. This study introduces a multi-model adversarial training approach that improves the transferability of adversarial textures across diverse detection models, including one-stage, two-stage, and transformer-based architectures. Using the Truck Adversarial Camouflage Optimization (TACO) framework and a novel combination of YOLOv8n, YOLOv5m, and YOLOv3 models for optimization, our approach achieves an AP@0.5 detection score of 0.0972—over 50% lower than textures trained against any single model alone. This result underscores the value of multi-model training for crafting adversarial textures that remain effective across object detectors.
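The core idea of the multi-model approach—optimizing one texture against an ensemble of detectors so it does not overfit to any single model—can be sketched as below. This is an illustrative toy, not the paper's implementation: the `detector_score` functions are hypothetical stand-ins for the real YOLO detection losses, and the texture is a small vector rather than an image.

```python
import math

def detector_score(texture, weights):
    """Toy 'detection confidence' for one surrogate model:
    a linear response squashed into (0, 1). Stand-in for a
    real detector's objectness/class confidence."""
    z = sum(t * w for t, w in zip(texture, weights))
    return 1.0 / (1.0 + math.exp(-z))

def multi_model_loss(texture, model_weights):
    """Average detection confidence across all surrogate models.
    Minimizing the ensemble average (rather than one model's score)
    is what encourages cross-model transferability."""
    return sum(detector_score(texture, w) for w in model_weights) / len(model_weights)

def optimize_texture(model_weights, dim=4, steps=200, lr=0.5, eps=1e-4):
    """Minimize the ensemble loss by gradient descent, with the
    gradient estimated via finite differences (real pipelines
    would backpropagate through the detectors instead)."""
    texture = [0.0] * dim
    for _ in range(steps):
        grad = []
        for i in range(dim):
            bumped = texture[:]
            bumped[i] += eps
            g = (multi_model_loss(bumped, model_weights)
                 - multi_model_loss(texture, model_weights)) / eps
            grad.append(g)
        texture = [t - lr * g for t, g in zip(texture, grad)]
    return texture
```

A texture optimized this way lowers the *average* confidence of all surrogate detectors at once, mirroring the paper's use of YOLOv8n, YOLOv5m, and YOLOv3 jointly rather than any single model.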
Open Access: Yes
DOI: 10.3390/app142311423