Intriguing Properties of Adversarial ML Problem-Space Attacks

News

  • May 2020: We'll be presenting the work at IEEE S&P on May 20th ~9am PDT, "see" you there!
  • Mar 2020: Paper accepted at IEEE Symp. Security & Privacy (Oakland) 2020

Access

We are hosting the attack code in a private Bitbucket repository. To request access, please complete the following form. For ethical reasons, we will only share the code with verified academic researchers.

We have already granted access to researchers from the following institutions (alphabetical order):
  1. Ariel University
  2. Beijing University of Posts and Telecommunications, China
  3. Columbia University, USA
  4. Fudan University, China
  5. Georgia Tech, USA
  6. Guangzhou University, China
  7. The Hong Kong Polytechnic University, Hong Kong
  8. The Hong Kong University of Science and Technology Library, Hong Kong
  9. Indian Institute of Information Technology and Management, Kerala, India
  10. Karlsruhe Institute of Technology, Germany
  11. King's College London, UK
  12. Korea University, Republic of Korea
  13. Nanjing University, China
  14. National University of Defense Technology, China
  15. Northeastern University, USA
  16. Orange Labs
  17. PSG College of Technology, India
  18. Queen's University, Canada
  19. Tsinghua University, China
  20. University of Adelaide, Australia
  21. University of British Columbia, Canada
  22. University of the Fraser Valley, Canada
  23. University of Illinois at Urbana-Champaign, USA
  24. University of Luxembourg, Luxembourg
  25. University of Michigan, USA
  26. University of Oregon, USA
  27. University of Rennes 1, INRIA, France
  28. University of Virginia, USA
  29. University of Wisconsin-Madison, USA
  30. Washington University in St. Louis, USA
  31. Wuhan University, China
  32. Xidian University, China
  33. Zhejiang Gongshang University, China
  34. Zhejiang University, China

Papers

Universal Adversarial Perturbations for Malware
Raphael Labaca-Castro, Luis Muñoz-González, Feargus Pendlebury, Gabi Dreo Rodosek, Fabio Pierazzi, Lorenzo Cavallaro
CoRR · arXiv Computing Research Repository, 2021
@article{labacacastro2021uaps,
author = {Raphael Labaca-Castro and Luis Muñoz-González and Feargus Pendlebury and Gabi Dreo Rodosek and Fabio Pierazzi and Lorenzo Cavallaro},
title = {Universal Adversarial Perturbations for Malware},
journal = {CoRR},
volume = {abs/2102.06747},
year = {2021},
url = {http://arxiv.org/abs/2102.06747},
eprint = {2102.06747},
archivePrefix = {arXiv}
}
Intriguing Properties of Adversarial ML Attacks in the Problem Space
Fabio Pierazzi*, Feargus Pendlebury*, Jacopo Cortellazzi, Lorenzo Cavallaro
IEEE S&P · 41st IEEE Symposium on Security and Privacy, 2020
@inproceedings{pierazzi2020problemspace,
author = {Fabio Pierazzi and Feargus Pendlebury and Jacopo Cortellazzi and Lorenzo Cavallaro},
booktitle = {2020 IEEE Symposium on Security and Privacy (SP)},
title = {Intriguing Properties of Adversarial ML Attacks in the Problem Space},
year = {2020},
volume = {},
issn = {2375-1207},
pages = {1308-1325},
doi = {10.1109/SP40000.2020.00073},
url = {https://doi.ieeecomputersociety.org/10.1109/SP40000.2020.00073},
publisher = {IEEE Computer Society},
}

Videos

Feargus Pendlebury presents the work at IEEE Security & Privacy (Oakland) 2020.
Teaser trailer for our presentation at IEEE Security & Privacy (Oakland) 2020.

People

  • Fabio Pierazzi, Lecturer (Assistant Professor), King's College London.
  • Feargus Pendlebury, Ph.D. Student, King's College London; Royal Holloway, University of London; and The Alan Turing Institute.
  • Jacopo Cortellazzi, Ph.D. Student, King's College London.
  • Lorenzo Cavallaro, Full Professor of Computer Science, Chair in Cybersecurity (Systems Security), King's College London.