On scalable parallel recursive backtracking

Faisal Abu Khzam, K. Daudjee, A. E. Mouawad, N. Nishimura

Research output: Contribution to journal › Article › Research › peer-review


Abstract

Supercomputers are equipped with an increasingly large number of cores to use computational power as a way of solving problems that are otherwise intractable. Unfortunately, getting serial algorithms to run in parallel to take advantage of these computational resources remains a challenge for several application domains. Many parallel algorithms can scale to only hundreds of cores. The limiting factors of such algorithms are usually communication overhead and poor load balancing. Solving NP-hard graph problems to optimality using exact algorithms is an example of an area in which there has so far been limited success in obtaining large scale parallelism. Many of these algorithms use recursive backtracking as their core solution paradigm. In this paper, we propose a lightweight, easy-to-use, scalable approach for transforming almost any recursive backtracking algorithm into a parallel one. Our approach incurs minimal communication overhead and guarantees a load-balancing strategy that is implicit, i.e., does not require any problem-specific knowledge. The key idea behind our approach is the use of efficient traversal operations on an indexed search tree that is oblivious to the problem being solved. We test our approach with parallel implementations of algorithms for the well-known Vertex Cover and Dominating Set problems. On sufficiently hard instances, experimental results show nearly linear speedups for thousands of cores, reducing running times from days to just a few minutes.
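
The abstract's key idea, splitting a problem-oblivious search tree among workers so that load balancing requires no problem-specific knowledge, can be illustrated with a small sketch. The Python code below is an illustrative assumption, not the paper's implementation: it statically enumerates the first few branching decisions of a simple Vertex Cover backtracking search as integer-indexed subtrees and farms those subtrees out to local worker processes, whereas the paper's indexed search tree supports efficient traversal operations and scales to thousands of cores. All names here (parallel_min_vertex_cover, solve_subtree, vc_branch) are hypothetical.

from itertools import product
from multiprocessing import Pool

def pick_edge(edges, removed):
    # Return an edge whose endpoints are both still uncovered, or None.
    for (u, v) in edges:
        if u not in removed and v not in removed:
            return (u, v)
    return None

def vc_branch(edges, removed, size, best):
    # Plain recursive backtracking for minimum vertex cover.
    if size >= best:                     # prune: cannot beat the best cover so far
        return best
    e = pick_edge(edges, removed)
    if e is None:                        # every edge is covered
        return size
    u, v = e
    best = vc_branch(edges, removed | {u}, size + 1, best)  # branch: take u
    best = vc_branch(edges, removed | {v}, size + 1, best)  # branch: take v
    return best

def solve_subtree(args):
    # Replay the branching decisions encoded by `choices` (the subtree's index),
    # then search the remaining subtree sequentially.  The index depends only on
    # the shape of the search tree, not on the problem instance.
    edges, choices = args
    vertices = {x for edge in edges for x in edge}
    removed, size = set(), 0
    for c in choices:
        e = pick_edge(edges, removed)
        if e is None:                    # the subtree ends above this depth
            return size
        removed = removed | {e[c]}       # c == 0 takes one endpoint, c == 1 the other
        size += 1
    return vc_branch(edges, removed, size, len(vertices))

def parallel_min_vertex_cover(edges, depth=3, workers=4):
    # Cut the search tree at `depth` and farm the 2**depth subtrees out to
    # worker processes; the minimum over all subtrees is the answer.
    tasks = [(edges, choices) for choices in product((0, 1), repeat=depth)]
    with Pool(workers) as pool:
        return min(pool.map(solve_subtree, tasks))

if __name__ == "__main__":
    # A 5-cycle has a minimum vertex cover of size 3.
    cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
    print(parallel_min_vertex_cover(cycle))   # expected output: 3
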
Original language: English
Pages (from-to): 65-75
Number of pages: 11
Journal: Journal of Parallel and Distributed Computing
ISSN: 0743-7315
Publisher: Academic Press
Volume: 84
Issue number: 8 August 2015
DOI: 10.1016/j.jpdc.2015.07.006
Publication status: Published - 2015
Externally published: Yes

Cite this

Abu Khzam, F., Daudjee, K., Mouawad, A. E., & Nishimura, N. (2015). On scalable parallel recursive backtracking. Journal of Parallel and Distributed Computing, 84(8 August 2015), 65-75. https://doi.org/10.1016/j.jpdc.2015.07.006