Join us at noon on Thursday, April 21 in the Coda building or via Zoom as Varun Chandrasekaran, a doctoral candidate at the University of Wisconsin-Madison, presents a lecture to SCP's faculty and students. More information on the talk is below. The Zoom link will be posted when it becomes available.
Information leakage in ML deployments: How, when, and why?
Machine learning (ML) is widely used today, in applications ranging from medicine to autonomous driving. Across all these applications, various forms of sensitive information are shared with the ML model, such as private medical records or a user's location. In this talk, I will explain what forms of private information can be learned by interacting with an ML model. In particular, I will discuss when ML model parameters in cloud deployments are not confidential, and how this can be remediated. Next, I will discuss how model parameters leak private user information, how this can be prevented, and when such prevention mechanisms fail. Finally, I will discuss how users can delete their sensitive information from the parameters of deep models on demand.
Varun Chandrasekaran is a doctoral candidate at the University of Wisconsin-Madison, where he works with Suman Banerjee and Somesh Jha. His research interests lie at the intersection of security & privacy, systems, and machine learning. His work aims to understand the security & privacy vulnerabilities of real-world ML deployments and to design practical interventions, grounded in theoretical insight into why privacy violations arise in ML models.