Abstract

Navigating autonomous vehicles efficiently across unstructured and off-road terrains remains a formidable challenge, often requiring intricate mapping or multi-step pipelines. These conventional approaches struggle to adapt to dynamic environments. This paper presents TerrainSense, an end-to-end framework that overcomes these limitations. Using a transformer, TerrainSense detects lane semantics and topology from camera images, enabling mapless path planning without reliance on highly detailed maps. TerrainSense was rigorously assessed on six diverse datasets, evaluating its performance in detection, segmentation, and path prediction using various metrics. Notably, it outperforms other state-of-the-art methods by 9.32\% in path prediction accuracy while achieving 18.28\% faster inference.

Bilal Hassan, Arjun Sharma, Nadya Abdel Madjid, Majid Khonji, Jorge Dias. “TerrainSense: Vision-Driven Mapless Navigation for Unstructured Off-Road Environments.” ICRA 2024, Yokohama, Japan.