Label-Only Membership Inference Attacks
Christopher A. Choquette-Choo, Florian Tramer, Nicholas Carlini, Nicolas Papernot
Membership inference is one of the simplest privacy threats faced by machine learning models that are trained on private, sensitive data. In this attack, an adversary infers whether a particular point was used to train the model by observing the model's predictions. Whereas current attack methods all require access to the model's predicted confidence scores, we introduce a label-only attack that instead evaluates the robustness of the model's predicted (hard) labels under perturbations of the input to infer membership. Our label-only attack is not only as effective as attacks requiring access to confidence scores; it also demonstrates that a class of defenses against membership inference, which we call "confidence masking" because they obfuscate the confidence scores to thwart attacks, is insufficient to prevent the leakage of private information. Our experiments show that training with differential privacy or strong L2 regularization are the only current defenses that meaningfully decrease the leakage of private information, even for points that are outliers of the training distribution.
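To make the core idea concrete, the sketch below scores a point by how often the model's hard label survives random Gaussian perturbations of the input: training members tend to be classified more robustly, so thresholding this score yields a membership decision. This is a minimal illustration, not the paper's exact procedure; the black-box `predict_label` function, the noise schedule, and the sample count are all assumptions chosen for clarity, and the threshold would in practice be calibrated (e.g., on shadow models).

```python
import numpy as np

def label_only_membership_score(predict_label, x, y_true,
                                noise_scales=np.linspace(0.0, 2.0, 21),
                                n_samples=50):
    """Score how robust the model's hard label on x is to input noise.

    predict_label(batch) is an assumed black-box returning only
    predicted class labels (no confidence scores). For each noise
    magnitude, we measure how often the label y_true survives
    Gaussian perturbations; higher scores suggest membership.
    """
    survival = 0.0
    for scale in noise_scales:
        noise = np.random.normal(0.0, scale, size=(n_samples,) + x.shape)
        preds = predict_label(x[None, ...] + noise)
        survival += np.mean(preds == y_true)
    # Average label-survival rate across noise magnitudes; comparing it
    # against a calibrated threshold gives the membership prediction.
    return survival / len(noise_scales)

# Hypothetical usage: flag x as a training member if its robustness
# score exceeds a threshold tau calibrated on held-out data.
# is_member = label_only_membership_score(model_labels, x, y) > tau
```

A natural refinement of the same idea, rather than averaging survival under random noise, is to estimate the point's distance to the model's decision boundary directly and use that distance as the membership score.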


