Inference Attack
88 papers with code • 0 benchmarks • 2 datasets
Benchmarks
These leaderboards are used to track progress in Inference Attack
Libraries
Use these libraries to find Inference Attack models and implementations

Most implemented papers
Membership Inference Attacks From First Principles
A membership inference attack allows an adversary to query a trained machine learning model to predict whether or not a particular example was contained in the model's training dataset.
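The querying idea above can be sketched with a minimal confidence-threshold attack: models tend to assign higher confidence to examples they were trained on, so the adversary guesses "member" when the model's confidence in the true label exceeds a threshold. This is an illustrative sketch, not the method of any one paper listed here; the function name and toy scores are invented for the example.

```python
import numpy as np

def confidence_threshold_mia(model_probs, true_labels, threshold=0.9):
    """Guess 'member' when the model's confidence in the true label
    exceeds a threshold -- training examples tend to score higher."""
    conf = model_probs[np.arange(len(true_labels)), true_labels]
    return conf > threshold

# Toy prediction vectors; high true-label confidence suggests membership.
probs = np.array([[0.97, 0.03],
                  [0.55, 0.45],
                  [0.02, 0.98]])
labels = np.array([0, 0, 1])
print(confidence_threshold_mia(probs, labels))  # [ True False  True]
```

Stronger attacks replace the fixed threshold with per-example calibration, but the query-and-threshold structure is the same.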
Safety and Performance, Why not Both? Bi-Objective Optimized Model Compression toward AI Software Deployment
By simulating the attack mechanism as the safety test, SafeCompress can automatically compress a big model to a small one following the dynamic sparse training paradigm.
Dissecting Distribution Inference
A distribution inference attack aims to infer statistical properties of data used to train machine learning models.
Understanding Membership Inferences on Well-Generalized Learning Models
Membership Inference Attack (MIA) determines the presence of a record in a machine learning model's training data by querying the model.
Machine Learning with Membership Privacy using Adversarial Regularization
In this paper, we focus on membership inference attacks against black-box models, where the adversary can only observe the output of the model, but not its parameters.
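In the black-box setting the adversary cannot inspect weights, so a common workaround (popularized by shadow-model attacks) is to train substitute models on data the adversary controls and learn what member vs. non-member outputs look like. The sketch below, with invented names and toy numbers, reduces that idea to its simplest form: picking the confidence threshold that best separates the shadow model's members from non-members.

```python
import numpy as np

def learn_attack_threshold(shadow_conf, shadow_is_member):
    """Pick the confidence threshold that best separates members from
    non-members on shadow data (a stand-in for training a full attack
    classifier on shadow-model outputs)."""
    best_t, best_acc = 0.5, 0.0
    for t in np.unique(shadow_conf):
        acc = np.mean((shadow_conf >= t) == shadow_is_member)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Confidences the shadow model assigned, with known membership labels.
shadow_conf = np.array([0.95, 0.91, 0.99, 0.60, 0.55, 0.70])
shadow_is_member = np.array([True, True, True, False, False, False])
t = learn_attack_threshold(shadow_conf, shadow_is_member)
print(t)  # 0.91
```

The learned threshold is then applied to the target model's outputs, which is all a black-box adversary can observe.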
Privacy Risks of Securing Machine Learning Models against Adversarial Examples
To perform the membership inference attacks, we leverage the existing inference methods that exploit model predictions.
Reconstruction and Membership Inference Attacks against Generative Models
We present two information leakage attacks that outperform previous work on membership inference against generative models.
GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models
In addition, we propose the first generic attack model that can be instantiated in a large range of settings and is applicable to various kinds of deep generative models.
RIGA: Covert and Robust White-Box Watermarking of Deep Neural Networks
White-box watermarking algorithms have the advantage that they do not impact the accuracy of the watermarked model.
An Empirical Study on the Intrinsic Privacy of SGD
Introducing noise in the training of machine learning systems is a powerful way to protect individual privacy via differential privacy guarantees, but comes at a cost to utility.
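One concrete form of the noise mechanism described above is the DP-SGD recipe: clip each example's gradient to bound its influence, then add Gaussian noise scaled to the clipping norm before the update. The sketch below is a minimal illustration of that recipe with invented toy gradients, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_gradient_step(w, per_example_grads, lr=0.1, clip=1.0, sigma=0.5):
    """One DP-SGD-style step: clip each per-example gradient to norm
    <= clip, average, then add Gaussian noise scaled to the clip norm."""
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, sigma * clip / len(per_example_grads),
                       size=w.shape)
    return w - lr * (mean_grad + noise)

w = np.zeros(3)
grads = [np.array([3.0, 0.0, 0.0]), np.array([0.0, 2.0, 0.0])]
w = noisy_gradient_step(w, grads)
```

Clipping caps what any single record can contribute, and the added noise masks the remainder; the cost to utility is the noisier, biased update direction.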