When are Non-Parametric Methods Robust?

Robi Bhattacharjee, Kamalika Chaudhuri

A growing body of research has shown that many classifiers are susceptible to adversarial examples – small strategic modifications to test inputs that lead to misclassification. In this work, we study general non-parametric methods, with a view towards understanding when they are robust to these modifications. We establish general conditions under which non-parametric methods are r-consistent – in the sense that they converge to optimally robust and accurate classifiers in the large sample limit. Concretely, our results show that when data is well-separated, nearest neighbors and kernel classifiers are r-consistent, while histograms are not. For general data distributions, we prove that preprocessing by Adversarial Pruning (Yang et al., 2019) – which makes data well-separated – followed by nearest neighbors or kernel classifiers also leads to r-consistency.
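To make the pruning-then-classify pipeline concrete, here is a minimal sketch in Python. It uses a simplified greedy variant of adversarial pruning (removing points from differently-labeled pairs closer than 2r until the surviving classes are 2r-separated) followed by a 1-nearest-neighbor classifier; the function names and the greedy rule are illustrative assumptions, not the exact algorithm of Yang et al. (2019), which selects a maximum-cardinality well-separated subset.

```python
import numpy as np

def adversarial_prune(X, y, r):
    """Greedy sketch of adversarial pruning: repeatedly drop a point from
    any differently-labeled pair at distance < 2r, so that the surviving
    data is well-separated. (Illustrative only; the actual method computes
    an optimal such subset rather than pruning greedily.)"""
    keep = list(range(len(X)))
    changed = True
    while changed:
        changed = False
        for i in keep:
            for j in keep:
                if y[i] != y[j] and np.linalg.norm(X[i] - X[j]) < 2 * r:
                    keep.remove(j)  # drop one point of the conflicting pair
                    changed = True
                    break
            if changed:
                break
    return X[keep], y[keep]

def one_nn_predict(X, y, query):
    """1-nearest-neighbor prediction."""
    dists = np.linalg.norm(X - query, axis=1)
    return y[np.argmin(dists)]
```

After pruning, differently-labeled points are at least 2r apart, so a query within distance r of a retained training point is strictly closer to it than to any point of another class: this is the well-separated regime in which the paper shows nearest neighbors to be r-consistent.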