ScAR: Scaling Adversarial Robustness for LiDAR Object Detection
Xiaohu Lu, Hayder Radha
The adversarial robustness of a model is its ability to resist adversarial attacks in the form of small perturbations to input data. Universal adversarial attack methods such as the Fast Gradient Sign Method (FGSM) [1] and Projected Gradient Descent (PGD) [2] are popular for LiDAR object detection, but they are often deficient compared to task-specific adversarial attacks. Additionally, these universa...
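To make the notion of a small gradient-sign perturbation concrete, here is a minimal, hypothetical FGSM sketch on a toy linear loss (NumPy only; the actual LiDAR setting would differentiate a detector network's loss with respect to the input point cloud, which is not reproduced here):

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """FGSM step: shift each input coordinate by epsilon in the
    direction of the sign of the loss gradient (L-infinity bounded)."""
    return x + epsilon * np.sign(grad)

# Toy example: loss L(x) = w . x, whose input gradient is simply w.
w = np.array([0.5, -1.0, 2.0])   # hypothetical model weights
x = np.array([1.0, 1.0, 1.0])    # hypothetical input point
x_adv = fgsm_perturb(x, grad=w, epsilon=0.1)

# Every coordinate moves by exactly +/- epsilon, so the L-infinity
# norm of the perturbation equals epsilon.
print(np.max(np.abs(x_adv - x)))
```

PGD iterates this same step with a projection back onto the epsilon-ball, which is why the two attacks are usually discussed together.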