arxiv:2604.23604

Learning to Identify Out-of-Distribution Objects for 3D LiDAR Anomaly Segmentation

Published on Apr 26
Submitted by Simone Mosco on Apr 28

AI-generated summary

A novel 3D LiDAR anomaly segmentation method operates directly in feature space to distinguish known from unknown objects, addressing limitations of existing datasets through mixed real-synthetic datasets with complex environments.

Abstract

Understanding the surrounding environment is fundamental in autonomous driving and robotic perception. Distinguishing known classes from previously unseen objects is crucial in real-world environments, a problem addressed by anomaly segmentation. However, research in the 3D domain remains limited, with most existing approaches applying post-processing techniques borrowed from 2D vision. To address this gap, we propose a new, efficient approach that operates directly in feature space, modeling the feature distribution of inlier classes to constrain anomalous samples. Moreover, the only publicly available 3D LiDAR anomaly segmentation dataset contains simple scenarios with few anomaly instances and exhibits a severe domain gap due to its sensor resolution. To bridge this gap, we introduce a set of mixed real-synthetic datasets for 3D LiDAR anomaly segmentation, built upon established semantic segmentation benchmarks, with multiple out-of-distribution objects and diverse, complex environments. Extensive experiments show that our approach achieves state-of-the-art results on the existing real-world dataset and competitive results on the newly introduced mixed datasets, validating both the effectiveness of our method and the utility of the proposed datasets. Code and datasets are available at https://simom0.github.io/lido-page/.
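The abstract describes modeling the feature distribution of inlier classes and scoring points against it. The paper's exact formulation is not reproduced here; as a minimal illustrative sketch of the general idea, one common realization is to fit per-class Gaussians (class means with a shared covariance) over point features and score each point by its minimum Mahalanobis distance to any inlier class. All function and variable names below are hypothetical:

```python
import numpy as np

def fit_class_gaussians(features, labels):
    """Fit per-class means and a shared covariance over inlier features.

    features: (N, D) array of point features; labels: (N,) inlier class ids.
    Returns per-class means and the inverse of the shared covariance.
    """
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    centered = np.concatenate(
        [features[labels == c] - means[c] for c in classes], axis=0
    )
    # Small diagonal term keeps the covariance invertible.
    cov = np.cov(centered, rowvar=False) + 1e-6 * np.eye(features.shape[1])
    return means, np.linalg.inv(cov)

def anomaly_score(feature, means, cov_inv):
    """Minimum squared Mahalanobis distance to any inlier class.

    High values indicate features far from every known class, i.e. anomalies.
    """
    return min(
        float((feature - mu) @ cov_inv @ (feature - mu)) for mu in means.values()
    )
```

A threshold on this score then separates inlier points from anomalous ones; because fitting happens once and scoring is a few matrix products per point, this style of feature-space test is cheap enough for real-time use.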

Community

Comment from the paper's author and submitter:

In this paper we propose LIDO, a novel approach for 3D LiDAR anomaly segmentation that works directly in feature space to distinguish inlier known classes from anomalous objects. We rely on a combination of training losses and inference scores to produce both semantic and anomaly segmentation while achieving real-time performance.
We also introduce three new mixed real-synthetic OoD datasets for 3D LiDAR anomaly segmentation, based on established autonomous driving benchmarks. We design a pipeline that inserts synthetic 3D objects into real LiDAR scans with geometric alignment and realistic intensity computation.
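The comment above describes a pipeline that inserts and geometrically aligns synthetic objects in real scans, but does not detail it. As a rough sketch of just the alignment step (function and parameter names are assumptions, and the realistic intensity computation mentioned above is omitted), one could estimate the local ground height near a target position and drop the object's points onto it:

```python
import numpy as np

def insert_object(scan_xyz, obj_xyz, target_xy, ground_search_radius=2.0):
    """Insert a synthetic object's point cloud into a real LiDAR scan.

    scan_xyz: (N, 3) real scan points; obj_xyz: (M, 3) synthetic object points;
    target_xy: (2,) desired ground-plane position for the object's footprint.
    Returns the merged (N + M, 3) point cloud.
    """
    # Estimate local ground height from the lowest real points near the target.
    dist_xy = np.linalg.norm(scan_xyz[:, :2] - target_xy, axis=1)
    nearby = scan_xyz[dist_xy < ground_search_radius]
    ground_z = nearby[:, 2].min() if len(nearby) else scan_xyz[:, 2].min()

    # Center the object's footprint and rest its base on z = 0.
    obj = obj_xyz - obj_xyz.mean(axis=0)
    obj[:, 2] -= obj[:, 2].min()

    # Translate to the target position at the estimated ground height.
    obj[:, 0] += target_xy[0]
    obj[:, 1] += target_xy[1]
    obj[:, 2] += ground_z
    return np.concatenate([scan_xyz, obj], axis=0)
```

A full pipeline would also need to handle occlusion and sensor-pattern resampling so the inserted points follow the real scan's ring structure, plus a per-point intensity model; this sketch covers only the ground alignment.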



Get this paper in your agent:

hf papers read 2604.23604
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash
