A Novel Data Poisoning Attack in Federated Learning based on Inverted Loss Function

Prajjwal Gupta, Krishna Yadav, Brij B. Gupta, Mamoun Alazab, Thippa Reddy Gadekallu

Research output: Contribution to journal › Article › peer-review

Abstract

Data poisoning is one of the common attacks that degrade model performance in edge machine learning. The mechanism used in most existing data poisoning attacks diverts the gradients only to a minimal extent, which prevents the model from reaching a minimum. In our approach, we introduce a new data poisoning attack that inverts the loss function of a benign model. The inverted loss function is then used to create malicious gradients at every SGD iteration that point almost opposite to the direction of the minimum. These gradients are used to generate poisoned labels, which are then injected into the dataset. We tested our attack on three datasets, namely MNIST, Fashion-MNIST, and CIFAR-10, alongside several pre-existing data poisoning attacks, and measured the performance of the global model in terms of accuracy drop in a federated learning setting. The observed results suggest that our attack can be 1.6 times stronger than a targeted attack and 3.2 times stronger than a random poisoning attack in certain cases.
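
The following is a minimal PyTorch sketch of the core idea as we read it from the abstract: start from the benign model, negate (invert) its loss so that SGD steps climb away from the minimum, and take the resulting predictions as poisoned labels. Everything here (the SmallNet model, the poison_labels helper, negation as the form of inversion, and the random stand-in data) is an illustrative assumption, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch, not the paper's code: SGD on a *negated* loss drives
# the attacker copy of the model away from the benign minimum; its
# predictions then serve as poisoned labels to inject into the dataset.

class SmallNet(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.fc = nn.Linear(28 * 28, n_classes)

    def forward(self, x):
        return self.fc(x.flatten(1))

def poison_labels(model, x, y, steps=5, lr=0.1):
    """Run SGD on the inverted (negated) loss and return malicious labels."""
    attacker = SmallNet()
    attacker.load_state_dict(model.state_dict())  # start from the benign model
    opt = torch.optim.SGD(attacker.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Inverted loss: its gradients point roughly opposite to the minimum.
        loss = -F.cross_entropy(attacker(x), y)
        loss.backward()
        opt.step()
    with torch.no_grad():
        # Labels induced by the inverted objective become the poisoned labels.
        return attacker(x).argmax(dim=1)

if __name__ == "__main__":
    torch.manual_seed(0)
    benign = SmallNet()
    x = torch.rand(32, 1, 28, 28)       # stand-in for MNIST images
    y = torch.randint(0, 10, (32,))     # benign labels
    y_poisoned = poison_labels(benign, x, y)
    flipped = (y_poisoned != y).float().mean().item()
    print(f"fraction of labels flipped: {flipped:.2f}")

In a federated setting, a malicious client would train on (x, y_poisoned) instead of (x, y), so its local updates drag the aggregated global model away from the minimum.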

Original language: English
Article number: 103270
Pages (from-to): 1-8
Number of pages: 8
Journal: Computers & Security
Volume: 130
Publication status: E-pub ahead of print - 24 Apr 2023
