Privacy Attacks and Defenses for Machine Learning Models
Faculty: Aaron Roth and Michael Kearns
Challenge: Huge amounts of sensitive data are used to train ML models (in industry) or to produce aggregate statistical tables (in government, e.g. the Census). But aggregation alone is not sufficient to protect privacy.
We expose these problems by demonstrating attacks that reconstruct private data from supposedly anonymized aggregates. We have attacked real U.S. Census data and real image classification models.
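To see why aggregation alone fails, consider a toy linear reconstruction attack in the spirit of Dinur and Nissim: if enough slightly noisy subset-sum statistics about a secret binary attribute are released, least squares recovers almost every individual's bit. This is an illustrative sketch with made-up parameters, not the group's actual attack on Census data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                                        # number of individuals
secret = rng.integers(0, 2, size=n)           # hidden binary attribute per person

# The "aggregate release": many random subset-sum statistics,
# each perturbed with only a small amount of noise.
m = 4 * n
queries = rng.integers(0, 2, size=(m, n))     # each row selects a random subset
answers = queries @ secret + rng.normal(0.0, 0.5, size=m)

# The attacker solves least squares and rounds to recover the bits.
estimate, *_ = np.linalg.lstsq(queries, answers, rcond=None)
recovered = (estimate > 0.5).astype(int)
accuracy = (recovered == secret).mean()
```

With these parameters the attack recovers nearly all of the secret bits, even though no individual's value was ever published directly; the noise per statistic is simply too small relative to the number of statistics released.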
Solutions include applying rigorous privacy technologies such as differential privacy, which provides provable limits on what any attacker can learn about an individual from released statistics.
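As a minimal sketch of how differential privacy defeats such attacks, the classic Laplace mechanism answers a counting query by adding noise calibrated to the query's sensitivity and a privacy parameter epsilon (the function name and parameters here are illustrative, not from the source):

```python
import numpy as np

def laplace_count(true_count, epsilon, rng):
    """Differentially private count via the Laplace mechanism.

    A counting query has sensitivity 1: adding or removing one
    person's record changes the count by at most 1, so noise drawn
    from Laplace(scale = 1/epsilon) yields epsilon-differential privacy.
    """
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

rng = np.random.default_rng(0)
noisy = laplace_count(true_count=100, epsilon=0.5, rng=rng)
```

Smaller epsilon means more noise and stronger privacy; the key point is that the noise scale is set by a worst-case analysis, so the reconstruction-style attacks above provably cannot succeed within the stated privacy budget.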