PyTorchFI: A Runtime Perturbation Tool for DNNs
Published in DSML, 2020
@inproceedings{mahmoud2020pytorchfi,
  author    = {Mahmoud, Abdulrahman and Aggarwal, Neeraj and Nobbe, Alex and Vicarte, Jose and Adve, Sarita and Fletcher, Christopher and Frosio, Iuri and Hari, Siva},
  title     = {PyTorchFI: A Runtime Perturbation Tool for DNNs},
  booktitle = {2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)},
  year      = {2020},
  month     = {06},
  pages     = {25--31},
  doi       = {10.1109/DSN-W50199.2020.00014}
}
PyTorchFI is a runtime perturbation tool for deep neural networks (DNNs), implemented for the popular PyTorch deep learning platform. PyTorchFI enables users to perform perturbations on the weights or neurons of DNNs at runtime. It is designed with the programmer in mind, providing a simple and easy-to-use API that requires as few as three lines of code. It also provides an extensible interface that lets researchers choose from various perturbation models (or design their own), enabling the study of how hardware errors (or general perturbations) propagate to the DNN output at the software layer. Additionally, PyTorchFI is extremely versatile: we demonstrate how it can be applied to five different use cases for dependability and reliability research, including resiliency analysis of classification networks, resiliency analysis of object detection networks, analysis of models robust to adversarial attacks, training of resilient models, and DNN interpretability. This paper discusses the technical underpinnings and design decisions of PyTorchFI that make it an easy-to-use, extensible, fast, and versatile research tool. PyTorchFI is open-sourced and available for download via pip or GitHub at: https://github.com/pytorchfi
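As an illustration of the short workflow described above, the sketch below wraps a pretrained classifier and declares a single neuron perturbation at inference time. The module path and call names (`fault_injection`, `declare_neuron_fi`) follow the pip-distributed package, but the exact argument names have changed across PyTorchFI versions, so treat the specific signatures here as assumptions rather than the definitive API; the layer index and perturbation value are arbitrary examples.

```python
# Minimal sketch of a runtime neuron perturbation with PyTorchFI.
# Keyword arguments below are assumptions; check the installed version's docs.
import torch
import torchvision.models as models
from pytorchfi.core import fault_injection

model = models.resnet18(pretrained=True).eval()
batch_size = 1

# Wrap the model so PyTorchFI can profile its layers and hook perturbations.
pfi_model = fault_injection(
    model, batch_size, input_shape=[3, 224, 224], use_cuda=False
)

# Declare a single-neuron perturbation: batch element 0, first convolutional
# layer, channel 4, spatial location (2, 4), overwritten with the value 1e4.
corrupt_model = pfi_model.declare_neuron_fi(
    batch=[0], layer_num=[0], dim1=[4], dim2=[2], dim3=[4], value=[10000.0]
)

# Run the perturbed model exactly like the original one and compare outputs.
x = torch.rand(batch_size, 3, 224, 224)
with torch.no_grad():
    clean_pred = torch.argmax(model(x), dim=1)
    corrupt_pred = torch.argmax(corrupt_model(x), dim=1)
print(clean_pred.item(), corrupt_pred.item())
```

The same pattern extends to the other use cases in the abstract: swapping `declare_neuron_fi` for a weight-level perturbation call, or supplying a custom perturbation model, changes what is injected without changing how the perturbed model is invoked.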