Interactive Demo
Drag the epsilon slider to adjust the privacy budget. Lower epsilon = more noise = stronger privacy but less utility. Watch how Laplace noise transforms the original data points in real time.
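What the demo animates can be sketched in a few lines. This is a minimal illustration, not the demo's actual source: the function name and the example points are hypothetical, but the scale rule (noise scale = sensitivity / epsilon) is the standard Laplace-mechanism calibration, so lowering epsilon directly inflates the noise you see.

```python
import numpy as np

def noisy_points(points, epsilon, sensitivity=1.0, seed=0):
    """Perturb each coordinate with Laplace noise, as the demo visualizes.

    The noise scale is b = sensitivity / epsilon, so a lower epsilon
    yields a larger b and a heavier perturbation of the original points.
    """
    rng = np.random.default_rng(seed)
    b = sensitivity / epsilon
    return points + rng.laplace(loc=0.0, scale=b, size=np.shape(points))

# Hypothetical 2-D data points, like those shown in the demo scatter plot.
points = np.array([[1.0, 2.0], [3.0, 4.0]])
low_eps = noisy_points(points, epsilon=0.1)    # strong privacy, heavy noise
high_eps = noisy_points(points, epsilon=10.0)  # weak privacy, light noise
```

With a fixed seed, the two calls draw the same base noise scaled by different amounts, so the low-epsilon points land visibly farther from the originals.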
About this project
This research investigates the application of differential privacy to machine learning model training, addressing the fundamental tension between model utility and individual privacy. Differential privacy provides a mathematically rigorous framework guaranteeing that the inclusion or exclusion of any single data record does not significantly affect the model's outputs, thereby protecting individual privacy even against adversaries with arbitrary auxiliary information.
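Formally, the guarantee described above is the standard definition of epsilon-differential privacy: for any two neighboring datasets D and D' differing in a single record, a randomized mechanism M satisfies, for every set of outputs S,

Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D') ∈ S].

Because this bound holds for all neighboring dataset pairs and all output sets, no adversary, regardless of auxiliary information, can reliably infer whether any one individual's record was included.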
The project implements the Laplace mechanism, which adds noise drawn from a Laplace distribution, with scale calibrated to the query's sensitivity divided by epsilon, to query responses and model gradients. The privacy budget epsilon controls the privacy-utility tradeoff: smaller epsilon values provide stronger privacy guarantees but introduce more noise, potentially degrading model accuracy. Through extensive experiments, this work demonstrates that well-tuned differential privacy can effectively defend against membership inference and other black-box attacks while preserving model performance for downstream tasks.
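As a concrete instance of the mechanism described above, consider a counting query, whose sensitivity is 1 because adding or removing one record changes the count by at most 1. The sketch below is illustrative (the function name and example values are assumptions, not the project's API), but the calibration matches the textbook Laplace mechanism.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value plus Laplace noise with scale sensitivity/epsilon."""
    if rng is None:
        rng = np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: privately release the answer to "how many records match a predicate?"
rng = np.random.default_rng(42)
true_count = 1000
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5, rng=rng)
```

The same calibration applies when noising gradients during training, except that sensitivity is then enforced by clipping each per-example gradient to a fixed norm before adding noise.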