Keyed Non-Parametric Hypothesis Tests

05/25/2020
by Yao Cheng, et al.

The recent popularity of machine learning calls for a deeper understanding of AI security. Among the numerous AI threats published so far, poisoning attacks currently attract considerable attention. In a poisoning attack, the opponent partially tampers with the dataset used for learning in order to mislead the classifier during the testing phase. This paper proposes a new protection strategy against poisoning attacks. The technique relies on a new primitive called keyed non-parametric hypothesis tests, which evaluate, under adversarial conditions, the training input's conformance with a previously learned distribution D. To do so, we use a secret key κ unknown to the opponent. Keyed non-parametric hypothesis tests differ from classical tests in that the secrecy of κ prevents the opponent from misleading the keyed test into concluding that a (significantly) tampered dataset belongs to D.
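To make the primitive concrete, below is a minimal sketch of one possible way to key a classical non-parametric test (the two-sample Kolmogorov–Smirnov test). The projection-based construction, the function name `keyed_ks_test`, and its parameters are illustrative assumptions, not the paper's exact scheme: the secret key κ seeds random projection directions, so an opponent who shapes poisoned data along directions they can predict cannot know which one-dimensional views the test will actually examine.

```python
import numpy as np
from scipy.stats import ks_2samp

def keyed_ks_test(reference, batch, key, n_projections=16):
    """Keyed two-sample test (illustrative sketch, not the paper's scheme):
    apply a classical KS test along secret, key-derived random directions.
    `reference` is an (n, d) array of samples known to follow D, `batch`
    an (m, d) array of incoming training data, and `key` an integer
    secret (kappa) unknown to the opponent.  Returns a Bonferroni-
    corrected p-value for the hypothesis that `batch` follows D."""
    rng = np.random.default_rng(key)        # kappa seeds the secret randomness
    d = reference.shape[1]
    p_min = 1.0
    for _ in range(n_projections):
        w = rng.standard_normal(d)          # secret projection direction
        w /= np.linalg.norm(w)
        res = ks_2samp(reference @ w, batch @ w)
        p_min = min(p_min, res.pvalue)
    # Reject conformance with D when the corrected p-value is small
    return min(1.0, p_min * n_projections)

# Usage sketch: flag a training batch at the 5% level
# if keyed_ks_test(reference, batch, key=secret_kappa) < 0.05:
#     ...treat the batch as (significantly) tampered...
```

Note the design choice this sketch illustrates: the test statistic itself remains a standard non-parametric one; only the randomness selecting what the test looks at is keyed, which is what denies the opponent the ability to precompute a tampering that evades the test.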
