Intriguing Properties of Adversarial ML Problem-Space Attacks
News
- May 2020: We'll be presenting the work at IEEE S&P on May 20th ~9am PDT, "see" you there!
- Mar 2020: Paper accepted at IEEE Symp. Security & Privacy (Oakland) 2020
Access
We are hosting the attack code in a private Bitbucket repository. To request access, please complete the following form. For ethical reasons, we will only share the code with verified academic researchers. We have already granted access to researchers from the following institutions (alphabetical order):
- Ariel University, Israel
- Beijing University of Posts and Telecommunications, China
- Columbia University, USA
- Fudan University, China
- Georgia Tech, USA
- Guangzhou University, China
- The Hong Kong Polytechnic University, Hong Kong
- The Hong Kong University of Science and Technology, Hong Kong
- Indian Institute of Information Technology and Management, Kerala, India
- Karlsruhe Institute of Technology, Germany
- King's College London, UK
- Korea University, Republic of Korea
- Nanjing University, China
- National University of Defense Technology, China
- Northeastern University, USA
- Orange Labs, France
- PSG College of Technology, India
- Queen's University, Canada
- Tsinghua University, China
- University of Adelaide, Australia
- University of British Columbia, Canada
- University of the Fraser Valley, Canada
- University of Illinois at Urbana-Champaign, USA
- University of Luxembourg, Luxembourg
- University of Michigan, USA
- University of Oregon, USA
- University of Rennes 1, INRIA, France
- University of Virginia, USA
- University of Wisconsin-Madison, USA
- Washington University in St. Louis, USA
- Wuhan University, China
- Xidian University, China
- Zhejiang Gongshang University, China
- Zhejiang University, China
Papers
Universal Adversarial Perturbations for Malware
arXiv preprint · CoRR abs/2102.06747, 2021
@article{labacacastro2021uaps,
  author = {Raphael Labaca-Castro and Luis Muñoz-González and Feargus Pendlebury and Gabi Dreo Rodosek and Fabio Pierazzi and Lorenzo Cavallaro},
  title = {Universal Adversarial Perturbations for Malware},
  journal = {CoRR},
  volume = {abs/2102.06747},
  year = {2021},
  url = {http://arxiv.org/abs/2102.06747},
  eprint = {2102.06747},
  archivePrefix = {arXiv}
}
Intriguing Properties of Adversarial ML Attacks in the Problem Space
IEEE S&P · 41st IEEE Symposium on Security and Privacy, 2020
@inproceedings{pierazzi2020problemspace,
  author = {Fabio Pierazzi and Feargus Pendlebury and Jacopo Cortellazzi and Lorenzo Cavallaro},
  booktitle = {2020 IEEE Symposium on Security and Privacy (SP)},
  title = {Intriguing Properties of Adversarial ML Attacks in the Problem Space},
  year = {2020},
  issn = {2375-1207},
  pages = {1308-1325},
  doi = {10.1109/SP40000.2020.00073},
  url = {https://doi.ieeecomputersociety.org/10.1109/SP40000.2020.00073},
  publisher = {IEEE Computer Society}
}
People
- Fabio Pierazzi, Lecturer (Assistant Professor), King's College London.
- Feargus Pendlebury, Ph.D. Student, King's College London & Royal Holloway, University of London & The Alan Turing Institute
- Jacopo Cortellazzi, Ph.D. Student, King's College London
- Lorenzo Cavallaro, Full Professor of Computer Science, Chair in Cybersecurity (Systems Security), King's College London