Data poisoning is one of the common attacks that degrade the performance of a model in edge machine learning. The mechanism used in most existing data poisoning attacks diverts the gradients only to a minimal extent, which prevents models from reaching their minima. In our approach, we propose a new data poisoning attack that inverts the loss function of a benign model. The inverted loss function is then used to create malicious gradients at every SGD iteration that point almost opposite to the direction of the minimum. These gradients are then used to generate poisoned labels, which are injected into the dataset. We have tested our attack on three datasets, namely MNIST, Fashion-MNIST, and CIFAR-10, alongside several preexisting data poisoning attacks. We measure the performance of the global model in terms of accuracy drop in a federated machine learning setting. The observed results suggest that our attack can be 1.6 times stronger than a targeted attack and 3.2 times stronger than a random poisoning attack in certain cases.
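To make the loss-inversion step concrete, the following is a minimal sketch rather than the paper's implementation: it assumes a PyTorch classifier and reads "inverting the loss" as selecting, for each sample, the label that maximizes the benign model's cross-entropy, so that subsequent SGD updates on the poisoned labels produce gradients pointing nearly opposite to the benign descent direction. The helper name `poisoned_labels` is hypothetical.

```python
import torch
import torch.nn.functional as F

def poisoned_labels(model: torch.nn.Module, x_batch: torch.Tensor,
                    num_classes: int) -> torch.Tensor:
    """Return, for each input, the label that maximizes the benign loss."""
    model.eval()
    with torch.no_grad():
        logits = model(x_batch)  # benign model's predictions on clean inputs
    # Inverting the loss turns minimization into maximization: for each
    # candidate class c, compute the cross-entropy the benign model would
    # incur if c were the true label, then keep the class with the highest
    # loss. SGD on these labels yields malicious gradients that point
    # almost opposite to the benign descent direction (an assumption of
    # this sketch, not the paper's exact procedure).
    losses = torch.stack([
        F.cross_entropy(
            logits,
            torch.full((x_batch.size(0),), c,
                       dtype=torch.long, device=logits.device),
            reduction="none")
        for c in range(num_classes)
    ], dim=1)                     # shape: (batch, num_classes)
    return losses.argmax(dim=1)   # highest-loss class per sample
```

For softmax cross-entropy this selection reduces to the benign model's least-likely class per sample (equivalently, `logits.argmin(dim=1)`); the explicit per-class form above is kept only to mirror the loss-inversion framing.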