Publications
Published in IEEE S&P, 2022 (Full Paper)
Our experiments demonstrate the existence of a pointer-chasing DMP on recent Apple processors, including the A14 and M1. We then reverse engineer the details of this DMP to determine the opportunities it creates for attackers and the restrictions it places on them. Finally, we demonstrate several basic attack primitives capable of leaking pointer values using the DMP.
Opening Pandora’s Box: A Systematic Study of New Ways Microarchitecture Can Leak Private Data
Jose Rodrigo Sanchez Vicarte, Pradyumna Shome, Nandeeka Nayak, Caroline Trippel, Adam Morrison, David Kohlbrenner, and Christopher W. Fletcher
Published in ISCA, 2021 (Full Paper), pp. 347-360, doi: 10.1109/ISCA52012.2021.00035
Our study uncovers seven classes of microarchitectural optimization with novel security implications, proposes a conceptual framework through which to study them, and demonstrates several proofs-of-concept to show their efficacy. The optimizations we study range from those that leak as much private data as Spectre/Meltdown (but without exploiting speculative execution) to those that otherwise undermine security-critical programs in a variety of ways.
Published in USENIX Security, 2021 (Full Paper)
We present a novel attack called Double Cross, which aims to manipulate data labeling and model training in active learning settings.
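For context, the sketch below shows a pool-based active learning loop (uncertainty sampling over an unlabeled pool), the labeling pipeline that this kind of attack targets. The toy logistic model and the names predict_proba and select_for_labeling are illustrative assumptions, not code from the paper.

```python
# Minimal sketch of pool-based active learning with uncertainty sampling,
# the labeling pipeline that an attack like Double Cross targets.
# All names and the toy model are illustrative, not code from the paper.
import numpy as np

def predict_proba(weights, X):
    """Toy two-class logistic model: returns class probabilities."""
    logits = X @ weights
    p1 = 1.0 / (1.0 + np.exp(-logits))
    return np.stack([1.0 - p1, p1], axis=1)

def select_for_labeling(weights, unlabeled_X, budget):
    """Pick the `budget` most uncertain samples (highest predictive entropy).
    An adversary who can craft inputs that land in this set controls which
    samples reach human labelers and, later, the training set."""
    probs = predict_proba(weights, unlabeled_X)
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(entropy)[-budget:]

rng = np.random.default_rng(0)
weights = rng.normal(size=5)
pool = rng.normal(size=(1000, 5))
queried = select_for_labeling(weights, pool, budget=10)
print("indices sent to labelers:", queried)
```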
Published in DSML, 2020 (Full Paper)
PyTorchFI is a runtime perturbation tool for deep neural networks (DNNs), implemented for the popular PyTorch deep learning platform. PyTorchFI enables users to perform perturbations on weights or neurons of DNNs at runtime.
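To illustrate the kind of fault PyTorchFI automates, the sketch below flips a single convolution weight of a toy model at runtime using plain PyTorch. It is a minimal sketch, not PyTorchFI's actual API, and the layer and weight indices are arbitrary.

```python
# Sketch of a runtime weight perturbation in plain PyTorch. This illustrates
# the kind of fault PyTorchFI automates; it does not use PyTorchFI's actual
# API, and the layer and indices below are arbitrary.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.Flatten(), nn.Linear(8 * 32 * 32, 10))

x = torch.randn(1, 3, 32, 32)
baseline = model(x)

# Flip one weight of the first conv layer to a large value at "runtime",
# i.e., after the model has been built and (in practice) trained.
with torch.no_grad():
    model[0].weight[0, 0, 0, 0] = 10.0

perturbed = model(x)
print("max output change:", (perturbed - baseline).abs().max().item())
```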
Published in ASPLOS, 2020 (Full Paper)
Our attack influences training outcome—e.g., degrades model accuracy or biases the model towards an adversary-specified label—purely by scheduling asynchronous training threads in a malicious fashion. Since thread scheduling is outside the protections of modern trusted execution environments (TEEs), e.g., Intel SGX, our attack bypasses these protections even when the training set can be verified as correct.
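The sketch below is a minimal Hogwild-style asynchronous SGD loop on a toy regression task, meant only to show the attack surface: worker threads apply gradient updates to shared parameters whenever the OS schedules them, and that schedule sits outside what a TEE protects. The toy problem, hyperparameters, and names are assumptions, not the setup from the paper.

```python
# Minimal sketch of asynchronous (Hogwild-style) SGD: worker threads update
# shared weights with no coordination, so the OS scheduler decides which
# gradients are applied when, and how stale they are. A TEE protects the
# computation inside each step, not this schedule. Toy example only.
import threading
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0])
X = rng.normal(size=(4000, 2))
y = X @ true_w + 0.01 * rng.normal(size=4000)

w = np.zeros(2)          # shared parameters, updated without locks
lr = 0.01

def worker(start, stop):
    global w
    for i in range(start, stop):
        w_snapshot = w.copy()             # may already be stale
        grad = (X[i] @ w_snapshot - y[i]) * X[i]
        w = w - lr * grad                 # applied whenever this thread runs

threads = [threading.Thread(target=worker, args=(k * 1000, (k + 1) * 1000))
           for k in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("learned weights:", w)
```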
Published in ISCA, 2018 (Full Paper)
We show that the low resolution and fixed-point nature of ultra-low-power implementations result in low-quality noising that prevents privacy guarantees from being provided. We present two techniques, resampling and thresholding, to overcome this limitation.
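The sketch below illustrates only the problem stated above, not the resampling or thresholding fixes: quantizing noise (here, Laplace noise) to a coarse fixed-point grid leaves few distinct noise values and saturates the tails, so the noise deviates from what the privacy analysis assumes. The bit widths and scale are arbitrary assumptions.

```python
# Sketch of the problem only: Laplace noise quantized to a coarse fixed-point
# grid no longer matches the distribution the privacy analysis assumes. The
# bit widths and scale are arbitrary; this does not implement the paper's
# resampling or thresholding techniques.
import numpy as np

rng = np.random.default_rng(0)
b = 1.0                                  # Laplace scale parameter
noise = rng.laplace(scale=b, size=100_000)

def to_fixed_point(x, frac_bits, total_bits):
    """Round to a signed fixed-point grid with `frac_bits` fractional bits,
    saturating values that exceed the representable range."""
    step = 2.0 ** -frac_bits
    q = np.round(x / step) * step
    limit = 2.0 ** (total_bits - frac_bits - 1)
    return np.clip(q, -limit, limit - step)

coarse = to_fixed_point(noise, frac_bits=1, total_bits=4)
print("ideal variance      :", 2 * b * b)
print("float sample var    :", noise.var())
print("fixed-point var     :", coarse.var())
print("distinct noise vals :", np.unique(coarse).size)
```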