Learning to Identify Out-of-Distribution Objects for 3D LiDAR Anomaly Segmentation
Abstract
A novel 3D LiDAR anomaly segmentation method operates directly in feature space to distinguish known from unknown objects, while new mixed real-synthetic datasets with diverse, complex environments address the limitations of existing benchmarks.
Understanding the surrounding environment is fundamental to autonomous driving and robotic perception. Distinguishing known classes from previously unseen objects, the task of anomaly segmentation, is crucial in real-world environments. However, research in the 3D domain remains limited, with most existing approaches applying post-processing techniques borrowed from 2D vision. To address this gap, we propose a new, efficient approach that operates directly in the feature space, modeling the feature distribution of inlier classes to constrain anomalous samples. Moreover, the only publicly available 3D LiDAR anomaly segmentation dataset contains simple scenarios with few anomaly instances and exhibits a severe domain gap due to its sensor resolution. To overcome these limitations, we introduce a set of mixed real-synthetic datasets for 3D LiDAR anomaly segmentation, built upon established semantic segmentation benchmarks, featuring multiple out-of-distribution objects and diverse, complex environments. Extensive experiments demonstrate that our approach achieves state-of-the-art results on the existing real-world dataset and competitive results on the newly introduced mixed datasets, validating the effectiveness of our method and the utility of the proposed datasets. Code and datasets are available at https://simom0.github.io/lido-page/.
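The page does not reproduce the method's formulation, so the following is only a minimal sketch of feature-space anomaly scoring in general, assuming per-class Gaussians fitted over inlier point features and a Mahalanobis-distance score; every name, shape, and the Gaussian assumption here is ours, not the paper's.

```python
import torch

def fit_class_gaussians(feats, labels, num_classes, eps=1e-3):
    """Fit a mean and regularized covariance per inlier class.

    feats: (N, D) per-point features, labels: (N,) inlier class ids.
    """
    means, precisions = [], []
    for c in range(num_classes):
        f = feats[labels == c]                               # (N_c, D)
        cov = torch.cov(f.T) + eps * torch.eye(f.shape[1])   # (D, D), regularized
        means.append(f.mean(dim=0))
        precisions.append(torch.linalg.inv(cov))
    return torch.stack(means), torch.stack(precisions)

def anomaly_score(feats, means, precisions):
    """Minimum Mahalanobis distance to any inlier class; high = anomalous."""
    diffs = feats[:, None, :] - means[None, :, :]            # (N, C, D)
    d2 = torch.einsum('ncd,cde,nce->nc', diffs, precisions, diffs)
    return d2.min(dim=1).values                              # (N,)
```

Points whose features fall far from every inlier class distribution receive a high score and can be thresholded as anomalous.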
Community
In this paper we propose LIDO, a novel approach for 3D LiDAR anomaly segmentation that works directly in the feature space to distinguish inlier known classes from anomalous objects. We rely on a combination of training losses and inference scores to produce both semantic and anomaly segmentation while achieving real-time performance.
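The comment does not spell out which losses are combined, so the snippet below is only a hypothetical instance of pairing a semantic segmentation loss with a feature-space constraint: cross-entropy plus a term that pulls inlier features toward learnable class prototypes. The prototype tensor, the weighting, and all shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def combined_loss(logits, feats, labels, prototypes, lam=0.1):
    """Cross-entropy for semantics plus a compactness term that tightens
    the inlier feature distribution around per-class prototypes.

    logits: (N, C), feats: (N, D), labels: (N,),
    prototypes: (C, D) learnable tensor (assumed).
    """
    ce = F.cross_entropy(logits, labels)
    compact = (feats - prototypes[labels]).pow(2).sum(dim=1).mean()
    return ce + lam * compact
```

A tighter inlier feature distribution makes distance-based inference scores, such as the sketch above, more discriminative for unseen objects.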
We also introduce three new mixed real-synthetic OoD datasets for 3D LiDAR anomaly segmentation, based on established autonomous driving benchmarks. We design a pipeline that inserts synthetic 3D objects into real LiDAR scans, geometrically aligns them with the scene, and computes realistic intensity values for the inserted points.
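As a rough illustration of such an insertion step (not the authors' pipeline; occlusion handling and sensor-specific effects are omitted, and all parameter names are assumptions):

```python
import numpy as np

def insert_object(scan_xyz, scan_intensity, obj_xyz, position, yaw,
                  base_reflect=0.5):
    """Place synthetic object points into a real scan at a given pose,
    with a simple range-based intensity model (illustrative only)."""
    # Rotate the object around the vertical axis, then translate to the pose.
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    pts = obj_xyz @ R.T + position
    # Crude intensity model: reflectance attenuated with squared range,
    # rescaled to the scan's intensity range. A real pipeline would also
    # drop inserted points hidden behind existing returns on the same ray.
    rng = np.linalg.norm(pts, axis=1)
    inten = base_reflect / np.maximum(rng, 1.0) ** 2
    inten = inten / inten.max() * scan_intensity.max()
    return np.vstack([scan_xyz, pts]), np.concatenate([scan_intensity, inten])
```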
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Neural Distribution Prior for LiDAR Out-of-Distribution Detection (2026)
- ProOOD: Prototype-Guided Out-of-Distribution 3D Occupancy Prediction (2026)
- ALOOD: Exploiting Language Representations for LiDAR-Based Out-of-Distribution Object Detection (2025)
- TerraSeg: Self-Supervised Ground Segmentation for Any LiDAR (2026)
- SegVGGT: Joint 3D Reconstruction and Instance Segmentation from Multi-View Images (2026)
- Feasibility of Indoor Frame-Wise Lidar Semantic Segmentation via Distillation from Visual Foundation Model (2026)
- Gau-Occ: Geometry-Completed Gaussians for Multi-Modal 3D Occupancy Prediction (2026)