Authors: Louw, Jakobus M; Verster, Jacobus J; Dickens, John S
Date: 2025-12-09
Issued: 2025-11
ISSN: 2261-236X
DOI: https://doi.org/10.1051/matecconf/202541704002
URI: http://hdl.handle.net/10204/14510

Abstract: This paper evaluates the performance of Depth Anything V2, a deep learning-based monocular depth estimation model, as a low-cost alternative to LiDAR for robotic depth sensing. LiDAR, while widely used, is expensive, prompting the search for affordable solutions. Six datasets were recorded in indoor environments to assess the performance of the pretrained metric depth model. Qualitative analysis showed that overall relative depth is well estimated, but fine details and close-range depths in feature-sparse areas are poorly represented. Quantitative analysis revealed variability in performance across datasets, with mean errors ranging from 0.32 m to 0.66 m. Additionally, performance varies with distance. For objects within 2 m, 89.1% of errors are within ±0.5 m. This decreases to 77.0% for objects within 4 m and further drops to 70.8% for objects within 6 m. Depth Anything V2 demonstrates higher pixel resolution than LiDAR but with significantly reduced metric depth accuracy. While not suitable for high-precision applications like indoor navigation and obstacle avoidance, the model can still provide useful depth information in scenarios where fine-grained accuracy is less critical.

Access: Fulltext
Language: en
Keywords: Depth Anything V2; LiDAR; Robotics
Title: Assessing Depth Anything V2 monocular depth estimation as a LiDAR alternative in robotics
Type: Article
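The kind of evaluation the abstract describes — running the pretrained Depth Anything V2 metric depth model on an indoor frame and binning errors against LiDAR by range — could look roughly like the sketch below. This is not the paper's code: the Hugging Face checkpoint name, the file paths, and the assumption that a per-pixel LiDAR depth map is already registered to the camera frame are all illustrative; the ±0.5 m threshold and the 2 m / 4 m / 6 m range bins mirror the figures quoted in the abstract.

```python
import numpy as np
import torch
from PIL import Image
from transformers import pipeline

# Hypothetical inputs; the paper's six indoor datasets are not reproduced here.
image = Image.open("frame_000.png")
lidar = np.load("lidar_depth_000.npy")  # per-pixel LiDAR depth in metres, 0 = no return

# Checkpoint name is an assumption: one of the metric (indoor) Depth Anything V2
# releases on the Hugging Face hub, not necessarily the variant used in the paper.
pipe = pipeline(
    "depth-estimation",
    model="depth-anything/Depth-Anything-V2-Metric-Indoor-Small-hf",
)
pred = pipe(image)["predicted_depth"]  # (1, h, w) tensor of metric depths

# Resample the prediction onto the LiDAR grid before comparing.
pred = torch.nn.functional.interpolate(
    pred.unsqueeze(1), size=lidar.shape, mode="bilinear", align_corners=False
)[0, 0].cpu().numpy()

valid = lidar > 0  # compare only where LiDAR returned a measurement
err = pred[valid] - lidar[valid]
print(f"mean absolute error: {np.abs(err).mean():.2f} m")

# Fraction of errors within ±0.5 m for objects within 2 m, 4 m, and 6 m,
# mirroring the range-binned statistics reported in the abstract.
for max_range in (2.0, 4.0, 6.0):
    in_range = lidar[valid] <= max_range
    frac = (np.abs(err[in_range]) <= 0.5).mean()
    print(f"within {max_range:.0f} m: {100 * frac:.1f}% of errors inside ±0.5 m")
```

Masking to pixels with a valid LiDAR return is the key step in such a comparison: monocular models predict depth densely, but only the sparse LiDAR-covered pixels provide ground truth, so aggregate statistics are meaningful only over that subset.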