
Exploiting GPUs for multi-agent path planning on grid maps

Caggianese, Giuseppe; Erra, Ugo
2012-01-01

Abstract

Multi-agent path planning on grid maps is a challenging problem with numerous real-life applications, ranging from robotics to real-time strategy games and non-player characters in video games. A* is a cost-optimal forward search algorithm for path planning, but it scales up poorly in practice since both the search space and the branching factor grow exponentially in the number of agents. In this work, we propose an A* implementation for Graphics Processing Units (GPUs) that uses a grid map as the search space. The approach uses a search space decomposition to break the forward A* search down into parallel, independent forward sub-searches. The solution offers no guarantees with respect to completeness and solution quality, but it exploits the computational capability of GPUs to accelerate path planning for many thousands of agents. The paper describes this implementation using the Compute Unified Device Architecture (CUDA) programming environment and demonstrates its performance advantages on the GPU compared to a GPU implementation of Real-Time Adaptive A*.
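The paper itself is not reproduced in this record, but the general idea of running many independent forward searches in parallel on a GPU can be illustrated with a minimal sketch. The kernel below assigns one CUDA thread per agent, and each thread runs a small, bounded A* search over a shared grid map using a Manhattan-distance heuristic. The grid dimensions, the fixed open-list capacity, and all names are illustrative assumptions; this is not the authors' decomposition scheme, and, much like the abstract notes for the real system, the bounded open list means completeness is not guaranteed.

```cuda
// Sketch only: many independent per-agent A* searches on a shared grid map.
// Grid size, open-list capacity, and agent count are illustrative assumptions.
#include <cstdio>
#include <climits>
#include <cuda_runtime.h>

#define W 16          // grid width  (assumption)
#define H 16          // grid height (assumption)
#define MAX_OPEN 256  // per-thread open-list capacity (assumption)

struct Node { int idx; int g; int f; };

__device__ int heuristic(int idx, int goal) {
    int dx = idx % W - goal % W;
    int dy = idx / W - goal / W;
    if (dx < 0) dx = -dx;
    if (dy < 0) dy = -dy;
    return dx + dy;                              // Manhattan distance on a 4-connected grid
}

__global__ void perAgentAStar(const unsigned char* grid,   // 0 = free, 1 = blocked
                              const int* starts, const int* goals,
                              int* pathCost, int numAgents) {
    int a = blockIdx.x * blockDim.x + threadIdx.x;
    if (a >= numAgents) return;

    Node open[MAX_OPEN];                         // bounded open list in local memory
    int  gBest[W * H];                           // best g-value found per cell
    for (int i = 0; i < W * H; ++i) gBest[i] = INT_MAX;

    int start = starts[a], goal = goals[a];
    int openCount = 0;
    open[openCount++] = { start, 0, heuristic(start, goal) };
    gBest[start] = 0;
    pathCost[a] = -1;                            // -1 = no path found

    while (openCount > 0) {
        // Linear scan for the node with the smallest f (simple, not fast).
        int best = 0;
        for (int i = 1; i < openCount; ++i)
            if (open[i].f < open[best].f) best = i;
        Node cur = open[best];
        open[best] = open[--openCount];          // remove by swapping with the last entry

        if (cur.idx == goal) { pathCost[a] = cur.g; return; }
        if (cur.g > gBest[cur.idx]) continue;    // stale entry, already improved

        int x = cur.idx % W, y = cur.idx / W;
        const int dx[4] = { 1, -1, 0, 0 }, dy[4] = { 0, 0, 1, -1 };
        for (int d = 0; d < 4; ++d) {
            int nx = x + dx[d], ny = y + dy[d];
            if (nx < 0 || nx >= W || ny < 0 || ny >= H) continue;
            int nidx = ny * W + nx;
            if (grid[nidx]) continue;            // blocked cell
            int ng = cur.g + 1;
            // Capacity check: if the bounded open list is full, the node is dropped,
            // which is why this sketch (like the paper's approach) is not complete.
            if (ng < gBest[nidx] && openCount < MAX_OPEN) {
                gBest[nidx] = ng;
                open[openCount++] = { nidx, ng, ng + heuristic(nidx, goal) };
            }
        }
    }
}

int main() {
    const int numAgents = 1024;                  // illustrative agent count
    unsigned char hGrid[W * H] = { 0 };          // all cells free
    int hStarts[numAgents], hGoals[numAgents], hCost[numAgents];
    for (int i = 0; i < numAgents; ++i) {
        hStarts[i] = 0;                          // top-left corner
        hGoals[i]  = W * H - 1;                  // bottom-right corner
    }

    unsigned char* dGrid; int *dStarts, *dGoals, *dCost;
    cudaMalloc(&dGrid, sizeof(hGrid));
    cudaMalloc(&dStarts, sizeof(hStarts));
    cudaMalloc(&dGoals, sizeof(hGoals));
    cudaMalloc(&dCost, sizeof(hCost));
    cudaMemcpy(dGrid, hGrid, sizeof(hGrid), cudaMemcpyHostToDevice);
    cudaMemcpy(dStarts, hStarts, sizeof(hStarts), cudaMemcpyHostToDevice);
    cudaMemcpy(dGoals, hGoals, sizeof(hGoals), cudaMemcpyHostToDevice);

    perAgentAStar<<<(numAgents + 127) / 128, 128>>>(dGrid, dStarts, dGoals, dCost, numAgents);
    cudaMemcpy(hCost, dCost, sizeof(hCost), cudaMemcpyDeviceToHost);
    printf("agent 0 path cost: %d\n", hCost[0]);  // expect 30 on an empty 16x16 grid

    cudaFree(dGrid); cudaFree(dStarts); cudaFree(dGoals); cudaFree(dCost);
    return 0;
}
```

The per-thread linear-scan open list keeps the sketch short; a real GPU planner would need a more careful memory layout, but the structure above conveys why thousands of agents can be planned for concurrently when each search is independent.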
Year: 2012
ISBN: 978-1-4673-2359-8
Files in this record:
hpcs_2012_caggianese_erra.pdf (open access)

Description: Final pre-print version.
Type: Pre-print document
Licence: DRM not defined
Size: 3.17 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11563/84891
Citations
  • Scopus 3