Adonisz Dimitriu

Author ID: 57221594953

Publications: 3

TACO: Adversarial Camouflage Optimization on Trucks to Fool Object Detectors

Publication Name: Big Data and Cognitive Computing

Publication Date: 2025-03-01

Volume: 9

Issue: 3

Page Range: Unknown

Description:

Adversarial attacks threaten the reliability of machine learning models in critical applications like autonomous vehicles and defense systems. As object detectors become more robust with models like YOLOv8, developing effective adversarial methodologies is increasingly challenging. We present Truck Adversarial Camouflage Optimization (TACO), a novel framework that generates adversarial camouflage patterns on 3D vehicle models to deceive state-of-the-art object detectors. Using Unreal Engine 5, TACO integrates differentiable rendering with a Photorealistic Rendering Network to optimize adversarial textures targeted at YOLOv8. To ensure the generated textures are both effective in deceiving detectors and visually plausible, we introduce the Convolutional Smooth Loss, a generalized smoothness loss. Experimental evaluations demonstrate that TACO significantly degrades YOLOv8’s detection performance, achieving an AP@0.5 of 0.0099 on unseen test data. Furthermore, these adversarial patterns exhibit strong transferability to other object detection models such as Faster R-CNN and earlier YOLO versions.

Open Access: Yes

DOI: 10.3390/bdcc9030072
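
Illustrative sketch (not taken from the paper): the PyTorch fragment below shows one way a convolutional smoothness penalty and a detection-suppression objective could be combined when optimizing a texture through a differentiable renderer, in the spirit of the description above. The function names, the k x k averaging kernel, the loss weighting, and the `render_fn`/`detector` interfaces are assumptions made for illustration; the paper's Convolutional Smooth Loss and the full TACO pipeline may be formulated differently.

```python
# Hypothetical sketch of an adversarial texture update with a convolutional
# smoothness penalty.  The specific kernel, weighting, and interfaces are
# illustrative assumptions, not the published formulation.
import torch
import torch.nn.functional as F

def conv_smooth_loss(texture: torch.Tensor, kernel_size: int = 5) -> torch.Tensor:
    """Penalize deviation of each texel from its local neighborhood mean.

    texture: (1, C, H, W) adversarial texture being optimized.
    """
    k = kernel_size
    weight = torch.full((texture.shape[1], 1, k, k), 1.0 / (k * k),
                        device=texture.device)
    local_mean = F.conv2d(texture, weight, padding=k // 2,
                          groups=texture.shape[1])   # depthwise local average
    return ((texture - local_mean) ** 2).mean()

def adversarial_step(texture, render_fn, detector, optimizer, smooth_weight=0.1):
    """One optimization step: render the textured truck, suppress detections,
    and keep the texture locally smooth.  `render_fn` stands in for a
    differentiable renderer plus photorealistic rendering network; `detector`
    is assumed to return per-candidate confidence scores."""
    optimizer.zero_grad()
    image = render_fn(texture)            # differentiable render of the 3D model
    confidences = detector(image)         # e.g. YOLO objectness/class confidences
    detection_loss = confidences.max()    # push the strongest detection down
    loss = detection_loss + smooth_weight * conv_smooth_loss(texture)
    loss.backward()
    optimizer.step()
    texture.data.clamp_(0.0, 1.0)         # keep the texture a valid image
    return loss.item()
```

A plain total-variation penalty would be the small-kernel special case of this kind of smoothness term; larger kernels trade fine detail for more visually coherent patterns.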

Improving Transferability of Physical Adversarial Attacks on Object Detectors Through Multi-Model Optimization

Publication Name: Applied Sciences (Switzerland)

Publication Date: 2024-12-01

Volume: 14

Issue: 23

Page Range: Unknown

Description:

Physical adversarial attacks face significant challenges in achieving transferability across different object detection models, especially in real-world conditions. This is primarily due to variations in model architectures, training data, and detection strategies, which can make adversarial examples highly model-specific. This study introduces a multi-model adversarial training approach to improve the transferability of adversarial textures across diverse detection models, including one-stage, two-stage, and transformer-based architectures. Using the Truck Adversarial Camouflage Optimization (TACO) framework and a novel combination of YOLOv8n, YOLOv5m, and YOLOv3 models for optimization, our approach achieves an AP@0.5 detection score of 0.0972, more than 50% lower than textures trained against any single model alone. This result highlights the importance of multi-model training in improving attack effectiveness and transferability across object detectors.

Open Access: Yes

DOI: 10.3390/app142311423
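
Illustrative sketch (not taken from the paper): the fragment below shows the ensemble idea described above, where a shared texture is updated against several detectors at once so the attack does not overfit to a single architecture. The detector wrappers, equal weighting, and rendering interface are assumptions for illustration and do not reproduce the paper's exact training setup.

```python
# Hypothetical sketch of multi-model adversarial texture optimization:
# one texture, several detectors, a weighted sum of per-model losses.
import torch

def ensemble_detection_loss(image, detectors, weights=None):
    """Combine the strongest detection confidence from each detector.

    detectors: list of callables returning per-candidate confidence scores
               (e.g. wrappers around YOLOv8n, YOLOv5m, and YOLOv3).
    """
    weights = weights or [1.0 / len(detectors)] * len(detectors)
    loss = image.new_zeros(())
    for w, det in zip(weights, detectors):
        confidences = det(image)
        loss = loss + w * confidences.max()   # suppress each model's top detection
    return loss

def multi_model_step(texture, render_fn, detectors, optimizer):
    """One update of the shared texture against the whole detector ensemble."""
    optimizer.zero_grad()
    image = render_fn(texture)                # differentiable render of the textured truck
    loss = ensemble_detection_loss(image, detectors)
    loss.backward()
    optimizer.step()
    texture.data.clamp_(0.0, 1.0)             # keep the texture a valid image
    return loss.item()
```

Averaging the per-model losses is only one possible aggregation; taking the worst-case model per step is another common choice when the goal is transferability.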

A Reinforcement Learning Approach to Military Simulations in Command: Modern Operations

Publication Name: IEEE Access

Publication Date: 2024-01-01

Volume: 12

Issue: Unknown

Page Range: 77501-77513

Description:

This paper presents a Reinforcement Learning (RL) framework for Command: Modern Operations (CMO), an advanced Real-Time Strategy (RTS) game that simulates military operations. CMO challenges players to navigate tactical, operational, and strategic decision-making, involving the management of multiple units, effective resource allocation, and concurrent action assignment. The primary objective of this research is automating and enhancing military decision-making by utilizing the capabilities of RL. To achieve this goal, a parameterized Proximal Policy Optimization (PPO) agent with a custom architecture has been developed, specifically designed to address the unique challenges presented by CMO. By adapting and extending methodologies from landmark systems in the domain, such as AlphaStar and OpenAI Five, the agent showcases the potential of RL in military simulations. Our model can handle a wide range of scenarios presented in CMO, marking a significant step towards the integration of Artificial Intelligence (AI) with military studies and practices. This research establishes the groundwork for future explorations in applying AI to defense and strategic analysis.

Open Access: Yes

DOI: 10.1109/ACCESS.2024.3406148
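
Illustrative sketch (not taken from the paper): the fragment below shows a parameterized-action policy head of the general kind the abstract alludes to, where a discrete action type is sampled first and its parameters are produced conditioned on that choice, trained with the standard clipped PPO surrogate. Layer sizes, the observation encoding, and the action parameterization are assumptions; the agent described in the paper is considerably more elaborate.

```python
# Hypothetical sketch of a parameterized-action PPO policy head.
# Dimensions and architecture are illustrative assumptions only.
import torch
import torch.nn as nn

class ParameterizedPolicy(nn.Module):
    def __init__(self, obs_dim: int, n_action_types: int, param_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                                     nn.Linear(256, 256), nn.ReLU())
        self.action_type_head = nn.Linear(256, n_action_types)        # which order to issue
        self.param_head = nn.Linear(256 + n_action_types, param_dim)  # its parameters
        self.value_head = nn.Linear(256, 1)                           # PPO critic

    def forward(self, obs: torch.Tensor):
        h = self.encoder(obs)
        type_logits = self.action_type_head(h)
        type_dist = torch.distributions.Categorical(logits=type_logits)
        action_type = type_dist.sample()
        one_hot = nn.functional.one_hot(action_type, type_logits.shape[-1]).float()
        params = torch.tanh(self.param_head(torch.cat([h, one_hot], dim=-1)))
        return action_type, params, type_dist.log_prob(action_type), self.value_head(h)

def ppo_clip_loss(log_prob, old_log_prob, advantage, clip_eps: float = 0.2):
    """Standard clipped PPO surrogate for the sampled action type."""
    ratio = torch.exp(log_prob - old_log_prob)
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantage
    return -torch.min(unclipped, clipped).mean()
```

Conditioning the parameter head on the sampled action type is one common way to handle hybrid discrete-continuous action spaces; per-unit action heads and masking of invalid orders would be natural extensions for a full RTS agent.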