NIPS 2017 Accepted Paper List

  • Introduction

    Complete list of papers accepted at NIPS 2017, one of the top-ranking conferences for the AI and Robotics communities. Total accepted paper count: 4

    NIPS 2017 ACCEPTED PAPER LIST

  • Elfarouk Harb, Ho Shan Lam

    In this paper, we study the problem of fair clustering under the $k$-center objective. In fair clustering, the input is $N$ points, each belonging to at least one of $l$ protected groups, e.g., male, female, Asian, Hispanic. The objective is to cluster the $N$ points into $k$ clusters so as to minimize a classical clustering objective function, under the additional constraint that each cluster must be fair according to some notion of fairness. This ensures that no group is either "over-represented" or "under-represented" in any cluster. Our work builds on the work of Chierichetti et al. (NIPS 2017), Bera et al. (NeurIPS 2019), Ahmadian et al. (KDD 2019), and Bercea et al. (APPROX 2019). We obtain a randomized $3$-approximation algorithm for the $k$-center objective, beating the previous state of the art (a $4$-approximation). We test our algorithm on real datasets and show that it is effective at finding good clusters without over-representation or under-representation, surpassing the current state of the art in runtime speed and clustering cost while achieving similar fairness violations.
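
    To make the objective and the fairness constraint concrete, here is a minimal Python sketch. It is an illustration, not the paper's randomized $3$-approximation: it runs Gonzalez's classic greedy $2$-approximation for the plain (unconstrained) $k$-center objective and then measures a simple "balance" notion of fairness violation. The data, the function names (`greedy_k_center`, `max_balance_violation`), and the target-proportion fairness notion are all assumptions made for this sketch.

    ```python
    # Minimal illustration of the k-center objective plus a "balance" fairness
    # check. NOTE: this is Gonzalez's classic greedy 2-approximation for plain
    # (unfair) k-center, not the paper's randomized 3-approximation; all names
    # and the fairness notion below are assumptions for illustration.
    import numpy as np

    def greedy_k_center(points, k, seed=0):
        """Greedily pick k centers; return (center indices, point assignment)."""
        rng = np.random.default_rng(seed)
        centers = [int(rng.integers(len(points)))]
        # dist[i] = distance from point i to its nearest chosen center so far
        dist = np.linalg.norm(points - points[centers[0]], axis=1)
        for _ in range(k - 1):
            nxt = int(np.argmax(dist))  # farthest point becomes the next center
            centers.append(nxt)
            dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
        assign = np.argmin(
            np.stack([np.linalg.norm(points - points[c], axis=1) for c in centers]),
            axis=0,
        )
        return centers, assign

    def max_balance_violation(assign, group, k, target):
        """Worst absolute gap between a cluster's group proportion and the
        dataset-wide proportion (one common notion of cluster fairness)."""
        worst = 0.0
        for c in range(k):
            members = group[assign == c]
            if len(members) == 0:
                continue
            for g, t in target.items():
                worst = max(worst, abs(float(np.mean(members == g)) - t))
        return worst

    # Usage: 200 random 2-D points, two protected groups, k = 4 clusters.
    pts = np.random.default_rng(1).normal(size=(200, 2))
    grp = np.random.default_rng(2).integers(0, 2, size=200)
    target = {g: float(np.mean(grp == g)) for g in (0, 1)}
    centers, assign = greedy_k_center(pts, k=4)
    radius = max(np.linalg.norm(pts[i] - pts[centers[assign[i]]]) for i in range(len(pts)))
    print(f"k-center radius: {radius:.3f}, "
          f"worst balance gap: {max_balance_violation(assign, grp, 4, target):.3f}")
    ```

    A fair clustering algorithm would constrain each cluster's group proportions (the quantity the second function measures) while keeping the radius small; the paper's contribution is achieving this with a $3$-approximation guarantee on the radius.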

  • Cihang Xie, Zhishuai Zhang, Yuyin Zhou, Song Bai, Jianyu Wang, Zhou Ren, Alan L. Yuille

    Though CNNs have achieved state-of-the-art performance on various vision tasks, they are vulnerable to adversarial examples, which are crafted by adding human-imperceptible perturbations to clean images. However, most existing adversarial attacks achieve only relatively low success rates under the challenging black-box setting, where the attacker has no knowledge of the model structure and parameters. To this end, we propose to improve the transferability of adversarial examples by creating diverse input patterns. Instead of using only the original images to generate adversarial examples, our method applies random transformations to the input images at each iteration. Extensive experiments on ImageNet show that the proposed attack method can generate adversarial examples that transfer much better to different networks than existing baselines. Evaluated against the top defense solutions and official baselines from the NIPS 2017 adversarial competition, the enhanced attack reaches an average success rate of 73.0%, outperforming the top-1 attack submission in the NIPS competition by a large margin of 6.6%. We hope that our proposed attack strategy can serve as a strong benchmark baseline for evaluating the robustness of networks to adversaries and the effectiveness of different defense methods in the future. Code is available at https://github.com/cihangxie/DI-2-FGSM.
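
    For intuition, below is a hedged PyTorch sketch of the "diverse inputs" idea described above: iterative FGSM in which each gradient step sees a randomly resized and padded copy of the image. The hyperparameters (transformation probability `p`, resize range, `eps`, step count) and the function names `input_diversity` and `di_fgsm` are illustrative assumptions, not the paper's settings; the repository linked in the abstract holds the authors' reference implementation.

    ```python
    # Hedged sketch of the "diverse inputs" idea: iterative FGSM in which, with
    # some probability, the image is randomly resized and zero-padded before
    # each gradient step. Hyperparameters (p, resize range, eps, steps) are
    # illustrative assumptions; assumes 224x224 inputs in [0, 1].
    import torch
    import torch.nn.functional as F

    def input_diversity(x, low=200, high=224, p=0.5):
        """With probability p, resize to a random side in [low, high) and
        zero-pad back to high x high at a random offset."""
        if torch.rand(1).item() > p:
            return x
        size = int(torch.randint(low, high, (1,)).item())
        resized = F.interpolate(x, size=(size, size), mode="nearest")
        pad = high - size
        left = int(torch.randint(0, pad + 1, (1,)).item())
        top = int(torch.randint(0, pad + 1, (1,)).item())
        return F.pad(resized, (left, pad - left, top, pad - top))  # L, R, T, B

    def di_fgsm(model, x, y, eps=16 / 255, steps=10):
        """Untargeted L-infinity iterative FGSM with diverse inputs."""
        alpha = eps / steps
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(input_diversity(x_adv)), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()  # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)      # project into eps-ball
            x_adv = x_adv.clamp(0, 1)                     # keep a valid image
        return x_adv.detach()
    ```

    The key design point is that the random transformation changes at every iteration, so the perturbation cannot overfit to one fixed input view of the source model, which is what improves black-box transferability.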