Computational Asymmetries in Robust Classification

06/25/2023
by Samuele Marro, et al.

In the context of adversarial robustness, we make three strongly related contributions. First, we prove that while attacking ReLU classifiers is NP-hard, ensuring their robustness at training time is Σ₂^P-hard (even on a single example). This asymmetry provides a rationale for the fact that robust classification approaches are frequently fooled in the literature. Second, we show that inference-time robustness certificates are not affected by this asymmetry, by introducing a proof-of-concept approach named Counter-Attack (CA). Indeed, CA displays a reversed asymmetry: running the defense is NP-hard, while attacking it is Σ₂^P-hard. Finally, motivated by our previous result, we argue that adversarial attacks can be used in the context of robustness certification, and provide an empirical evaluation of their effectiveness. As a byproduct of this process, we also release UG100, a benchmark dataset for adversarial attacks.
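The idea of using an attack as an inference-time robustness check can be illustrated with a minimal sketch. This is not the authors' Counter-Attack implementation: the tiny fixed-weight ReLU network, the radius `eps`, and the random-search attack below are all illustrative assumptions. The logic it demonstrates is the certificate direction of the abstract: if a (sufficiently strong) attack finds no misclassifying point within an L∞ ball of radius ε around the input, the input is treated as empirically robust; if it does, the input is flagged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer ReLU classifier with hand-picked weights (illustrative only).
W1 = np.array([[1.0, -1.0], [0.5, 1.0]])
b1 = np.array([0.0, -0.2])
W2 = np.array([[1.0, -1.0], [-1.0, 1.0]])

def predict(x):
    h = np.maximum(W1 @ x + b1, 0.0)  # ReLU hidden layer
    return int(np.argmax(W2 @ h))     # predicted class label

def attack(x, eps, trials=2000):
    """Naive random-search attack over the L-inf ball of radius eps.

    Returns an adversarial point if one is found, else None. A real
    certificate would use a complete attack (e.g. MIP-based), which is
    where the NP-hardness in the abstract enters.
    """
    y = predict(x)
    for _ in range(trials):
        delta = rng.uniform(-eps, eps, size=x.shape)
        if predict(x + delta) != y:
            return x + delta  # attack succeeded: not robust at this eps
    return None               # no adversarial point found

x = np.array([1.0, 0.3])
adv = attack(x, eps=0.02)
print("empirically robust" if adv is None else "adversarial example found")
```

For this particular input and radius, the class margin exceeds what any perturbation in the ball can overcome, so the search reports the input as empirically robust; with an incomplete attack this is only evidence, not a proof, which is the gap a complete attack closes.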
