Weight Space Representation Learning on Diverse NeRF Architectures (ICLR 2026)
This repository contains the datasets for the paper Weight Space Representation Learning on Diverse NeRF Architectures, accepted at ICLR 2026. The paper proposes a framework that can process NeRFs with diverse architectures (MLPs, tri-planes, and hash tables) by training a graph metanetwork to obtain an architecture-agnostic latent space.
NeRF weights
Main dataset structure:
.
└── nerf
    └── shapenet
        ├── hash
        │   └── class_id
        │       └── nerf_id
        │           ├── train
        │           │   └── *.png             # object views used to train the NeRF
        │           ├── grid.pth              # nerfacc-like occupancy grid parameters
        │           ├── nerf_weights.pth      # nerfacc-like NeRF parameters
        │           └── transforms_train.json # camera poses
        ├── mlp
        │   └── class_id
        │       └── nerf_id
        │           ├── train
        │           │   └── *.png
        │           ├── grid.pth
        │           ├── nerf_weights.pth
        │           └── transforms_train.json
        ├── triplane
        │   └── class_id
        │       └── nerf_id
        │           ├── train
        │           │   └── *.png
        │           ├── grid.pth
        │           ├── nerf_weights.pth
        │           └── transforms_train.json
        ├── test.txt  # test split
        ├── train.txt # training split
        └── val.txt   # validation split
Unseen architectures (nerf/shapenet/hash_unseen, nerf/shapenet/mlp_unseen, and nerf/shapenet/triplane_unseen) and Objaverse NeRFs (nerf/objaverse) have analogous directory structures.
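As a minimal sketch of navigating the layout above, the helper below maps one architecture / class_id / nerf_id entry to its expected files. The function name is ours, not part of the dataset; only the path names come from the tree. Actually deserializing grid.pth and nerf_weights.pth would typically go through torch.load, which is not shown here.

```python
from pathlib import Path

def nerf_files(root, arch, class_id, nerf_id):
    """Collect the expected files for one NeRF under nerf/shapenet.

    arch is "hash", "mlp", or "triplane" (or an *_unseen variant);
    class_id and nerf_id name the subdirectories shown in the tree above.
    """
    base = Path(root) / "nerf" / "shapenet" / arch / class_id / nerf_id
    return {
        "views": sorted((base / "train").glob("*.png")),  # object views used to train the NeRF
        "grid": base / "grid.pth",                        # occupancy grid parameters
        "weights": base / "nerf_weights.pth",             # NeRF parameters
        "poses": base / "transforms_train.json",          # camera poses
    }
```

The same helper works for the unseen-architecture folders by passing, e.g., arch="hash_unseen".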
NeRF graphs
Main dataset structure:
.
└── graph
    └── shapenet
        ├── hash
        │   ├── test
        │   │   └── *.pt # torch_geometric-like graph data
        │   ├── train
        │   │   └── *.pt
        │   └── val
        │       └── *.pt
        ├── mlp
        │   ├── test
        │   │   └── *.pt
        │   ├── train
        │   │   └── *.pt
        │   └── val
        │       └── *.pt
        └── triplane
            ├── test
            │   └── *.pt
            ├── train
            │   └── *.pt
            └── val
                └── *.pt
Unseen architectures (graph/shapenet/hash_unseen, graph/shapenet/mlp_unseen, and graph/shapenet/triplane_unseen) and Objaverse NeRFs (graph/objaverse) have analogous directory structures.
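A possible way to enumerate the graphs for one architecture and split (the function name is ours; each *.pt file would be loaded with torch.load, which is not shown here to keep the sketch dependency-free):

```python
from pathlib import Path

def graph_files(root, arch, split):
    """Return the sorted *.pt graph files for one architecture and split.

    split is "train", "val", or "test"; each file holds a
    torch_geometric-like graph, loadable with torch.load.
    """
    split_dir = Path(root) / "graph" / "shapenet" / arch / split
    return sorted(split_dir.glob("*.pt"))
```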
NeRF embeddings
Main dataset structure:
.
└── emb
    └── model
        └── shapenet
            ├── hash
            │   ├── test
            │   │   └── *.h5
            │   ├── train
            │   │   └── *.h5
            │   └── val
            │       └── *.h5
            ├── mlp
            │   ├── test
            │   │   └── *.h5
            │   ├── train
            │   │   └── *.h5
            │   └── val
            │       └── *.h5
            └── triplane
                ├── test
                │   └── *.h5
                ├── train
                │   └── *.h5
                └── val
                    └── *.h5
where model is one of: l_con, l_rec, l_rec_con.
Unseen architectures (emb/model/shapenet/hash_unseen, emb/model/shapenet/mlp_unseen, and emb/model/shapenet/triplane_unseen) and Objaverse NeRFs (emb/model/objaverse) have analogous directory structures.
Language data
The language directory contains embeddings (i.e. those found in emb/l_rec_con/shapenet) paired with textual annotations from the ShapeNeRF-Text dataset. This directory structure allows running the official LLaNA code without any additional preprocessing.
Cite us
If you find our work useful, please cite us:
@inproceedings{ballerini2026weight,
title = {Weight Space Representation Learning on Diverse {NeRF} Architectures},
author = {Ballerini, Francesco and Zama Ramirez, Pierluigi and Di Stefano, Luigi and Salti, Samuele},
booktitle = {The Fourteenth International Conference on Learning Representations},
year = {2026}
}