A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.
About me
This is a page not in the main menu.
Published:
This post will show up by default. To disable scheduling of future posts, edit config.yml and set future: false.
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Short description of portfolio item number 1
Short description of portfolio item number 2
Published in ISCA, 2018 (Full Paper | bibtex | Plain Text)
We show that the low resolution and fixed-point nature of ultra-low-power implementations prevent privacy guarantees from being provided, due to low-quality noising. We present two techniques, resampling and thresholding, to overcome this limitation.
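The limitation described above can be illustrated with a toy sketch. This is a hypothetical illustration, not the paper's implementation: the `laplace_sample` and `quantize` helpers and the scale/step values are assumptions standing in for a device that can only represent noise on a coarse fixed-point grid.

```python
# Toy illustration (not the paper's actual implementation) of why coarse
# fixed-point quantization can destroy Laplace noise used for local
# differential privacy. Assumes a hypothetical device whose representable
# noise values lie on a grid of step `step`.
import math
import random

def laplace_sample(scale, rng):
    """Draw one sample from Laplace(0, scale) via the inverse CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def quantize(x, q):
    """Round to the nearest representable fixed-point value (grid step q)."""
    return round(x / q) * q

rng = random.Random(0)
scale, step = 0.05, 1.0  # noise scale far below the quantization step
samples = [quantize(laplace_sample(scale, rng), step) for _ in range(10_000)]
zero_fraction = sum(s == 0.0 for s in samples) / len(samples)
# When scale << step, nearly every sample quantizes to exactly zero, so the
# "noise" the device actually adds carries essentially no privacy.
```

Running the sketch shows `zero_fraction` close to 1.0, which is the low-quality-noising failure mode the abstract refers to; resampling and thresholding are the paper's countermeasures for it.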
Published in ASPLOS, 2020 (Full Paper | bibtex | Plain Text)
Our attack influences training outcome—e.g., degrades model accuracy or biases the model towards an adversary-specified label—purely by scheduling asynchronous training threads in a malicious fashion. Since thread scheduling is outside the protections of modern trusted execution environments (TEEs), e.g., Intel SGX, our attack bypasses these protections even when the training set can be verified as correct.
Published in DSML, 2020 (Full Paper | bibtex | Plain Text)
PyTorchFI is a runtime perturbation tool for deep neural networks (DNNs), implemented for the popular PyTorch deep learning platform. PyTorchFI enables users to perform perturbations on weights or neurons of DNNs at runtime.
Published in USENIX Security, 2021 (Full Paper | bibtex | Plain Text)
We present a novel attack called Double Cross, which aims to manipulate data labeling and model training in active learning settings.
Published in ISCA, 2021 (Full Paper | bibtex | Plain Text)
Our study uncovers seven classes of microarchitectural optimization with novel security implications, proposes a conceptual framework through which to study them, and demonstrates several proofs-of-concept to show their efficacy. The optimizations we study range from those that leak as much privacy as Spectre/Meltdown (but without exploiting speculative execution) to those that otherwise undermine security-critical programs in a variety of ways.
Published in IEEE S&P, 2022 (Full Paper | bibtex | Plain Text)
Our experiments demonstrate the existence of a pointer-chasing DMP on recent Apple processors, including the A14 and M1. We then reverse engineer the details of this DMP to determine the opportunities it creates for, and the restrictions it places on, attackers using it. Finally, we demonstrate several basic attack primitives capable of leaking pointer values using the DMP.
Published:
Presented an early version of “Guaranteeing Local Differential Privacy on Ultra-Low-Power Systems,” as part of the Lawrence Technological University Alumni Career Series.
Published:
Presentation for “Game of Threads: Enabling Asynchronous Poisoning Attacks,” at ASPLOS’20
Published:
Presentation for “Double-Cross Attacks: Subverting Active Learning Systems,” at Usenix’21
Published:
Presentation for “Opening Pandora’s Box: A Systematic Study of New Ways Microarchitecture Can Leak Private Data,” at ISCA’21
Published:
Presentation for “Augury: Using data memory-dependent prefetchers to leak data at rest,” at IEEE S&P’22
Published:
Poster presentation for “Augury: Using data memory-dependent prefetchers to leak data at rest,” at the CSAW’22 Applied Research Competition. Our work was the runner-up.
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.