Boosting Omnidirectional Stereo Matching with a Pre-trained Depth Foundation Model
Description
Omnidirectional depth perception is essential for mobile robotics applications that require scene understanding across a full 360° field of view. Camera-based setups offer a cost-effective option by using stereo depth estimation to generate dense, high-resolution depth maps without relying on expensive active sensing. However, existing omnidirectional stereo matching approaches achieve only limited depth accuracy across diverse environments, depth ranges, and lighting conditions, due to the scarcity of real-world data. We present DFI-OmniStereo, a novel omnidirectional stereo matching method that leverages a large-scale pre-trained foundation model for relative monocular depth estimation within an iterative optimization-based stereo matching architecture. We introduce a dedicated two-stage training strategy to utilize the relative monocular depth features for our omnidirectional stereo matching before scale-invariant fine-tuning. DFI-OmniStereo achieves state-of-the-art results on the real-world Helvipad dataset, reducing disparity MAE by approximately 16% compared to the previous best omnidirectional stereo method.
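The abstract mentions scale-invariant fine-tuning without spelling out the objective. As a minimal, non-authoritative sketch, the snippet below implements the classic scale-invariant log loss of Eigen et al. (2014), a common choice for this kind of fine-tuning; the function name and the use of this particular formulation are illustrative assumptions, not details taken from the paper.

```python
import torch

def scale_invariant_log_loss(pred: torch.Tensor,
                             target: torch.Tensor,
                             lam: float = 0.5,
                             eps: float = 1e-6) -> torch.Tensor:
    """Scale-invariant log loss (Eigen et al., 2014) -- illustrative
    sketch only; not necessarily the loss used by DFI-OmniStereo.

    pred, target: positive depth maps of shape (B, H, W).
    lam: weight of the scale-compensation term; lam = 1 makes the
         loss fully invariant to a global scaling of the prediction.
    """
    d = torch.log(pred + eps) - torch.log(target + eps)
    # The first term penalizes per-pixel log-depth error; the second
    # removes the part of the error explained by a global scale shift.
    return (d ** 2).mean() - lam * (d.mean() ** 2)
```

In a hypothetical fine-tuning step this would be called as `loss = scale_invariant_log_loss(predicted_depth, gt_depth)` over valid (positive-depth) pixels, so the network is penalized for structural errors rather than for a global scale offset inherited from the relative-depth foundation model.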
Keywords
Depth Prediction; Omnidirectional Depth Perception; Stereo Matching; Deep Learning; Foundation Model
DFG Subject Areas
4.43-04 Artificial Intelligence and Machine Learning Methods
4.43-05 Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Associated Third-Party Funded Projects
EC/H2020 | 866008 | RED
HMWK | III L6-519/03/05.001-(0016) | emergenCity - TP Roth
HMWK | 500/10.001-(00012) | TAM - TP Roth
Linked Resources
- Is described by: arXiv:2503.23502
Collections
- Segmentation [9]
The following license terms are associated with this resource: