TY - JOUR

T1 - Learning from obstructions

T2 - An effective deep learning approach for minimum vertex cover

AU - Abu-Khzam, Faisal N.

AU - Abd El-Wahab, Mohamed M.

AU - Haidous, Moussa

AU - Yosri, Noureldin

PY - 2022

Y1 - 2022

N2 - Computational intractability has for decades motivated the development of a plethora of methodologies that mainly aim at a quality-time trade-off. Machine Learning has recently emerged as one of the possible tools for obtaining approximate solutions to NP-hard optimization problems. Recently, Dai et al. introduced a method for computing such approximate solutions for instances of the Vertex Cover problem. In this paper we examine the effectiveness of selecting a proper training strategy by using special problem instances, called obstructions, that we believe carry some intrinsic properties of the problem. Capitalizing on the recent work of Dai et al. on Vertex Cover, and using the same case study as well as 19 other problem instances, we show the utility of using obstructions for training neural networks. Experiments show that training with obstructions results in a surprisingly large reduction in the number of iterations needed for convergence, and thus a substantial reduction in the time needed to train the model.

AB - Computational intractability has for decades motivated the development of a plethora of methodologies that mainly aim at a quality-time trade-off. Machine Learning has recently emerged as one of the possible tools for obtaining approximate solutions to NP-hard optimization problems. Recently, Dai et al. introduced a method for computing such approximate solutions for instances of the Vertex Cover problem. In this paper we examine the effectiveness of selecting a proper training strategy by using special problem instances, called obstructions, that we believe carry some intrinsic properties of the problem. Capitalizing on the recent work of Dai et al. on Vertex Cover, and using the same case study as well as 19 other problem instances, we show the utility of using obstructions for training neural networks. Experiments show that training with obstructions results in a surprisingly large reduction in the number of iterations needed for convergence, and thus a substantial reduction in the time needed to train the model.

KW - Deep learning

KW - Graph minor theorem

KW - Vertex cover

UR - http://www.scopus.com/inward/record.url?scp=85136206924&partnerID=8YFLogxK

U2 - 10.1007/s10472-022-09813-2

DO - 10.1007/s10472-022-09813-2

M3 - Article

AN - SCOPUS:85136206924

SP - 1

EP - 12

JO - Annals of Mathematics and Artificial Intelligence

JF - Annals of Mathematics and Artificial Intelligence

SN - 1012-2443

ER -