Download
- ⚖️ Model weights
- ⚙️ Model configuration
- 📂 Dataset splits
Abstract
MAESTRO is a tailored adaptation of the Masked Autoencoder (MAE) that effectively orchestrates the use of multimodal, multitemporal, and multispectral Earth Observation (EO) data. Evaluated on four EO datasets, MAESTRO sets a new state-of-the-art on tasks that strongly rely on multitemporal dynamics, while remaining competitive on tasks dominated by a single monotemporal modality.
MAESTRO's contributions are as follows:
- Extensive benchmarking of multimodal and multitemporal SSL: we evaluate the impact of various fusion strategies for multimodal and multitemporal self-supervised learning.
- Patch-group-wise normalization: a novel scheme that normalizes reconstruction targets patch-wise within groups of highly correlated spectral bands (sketched below).
- MAESTRO: Novel adaptation of the MAE that combines optimized fusion strategies with patch-group-wise normalization.
📃 Paper: https://arxiv.org/abs/2508.10894
💻 Code repository: https://github.com/IGNF/MAESTRO
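To make patch-group-wise normalization concrete, below is a minimal PyTorch sketch. It is illustrative only: the function name, arguments, and band grouping are assumptions, not MAESTRO's actual API. Targets are split into square patches, and each patch is standardized with statistics shared across all bands of its group.

```python
import torch

def patch_groupwise_normalize(targets, patch_size, band_groups, eps=1e-6):
    """Normalize reconstruction targets patch-wise within groups of
    highly correlated spectral bands (illustrative sketch).

    targets:     (B, C, H, W) raw pixel targets
    patch_size:  side length p of the square patches
    band_groups: list of channel-index lists, e.g. [[1, 2, 3], [0]]
                 for an RGB group plus a lone NIR band (hypothetical)
    """
    B, C, H, W = targets.shape
    p = patch_size
    # (B, C, H, W) -> (B, C, H/p, W/p, p*p): one vector per patch and band
    x = targets.reshape(B, C, H // p, p, W // p, p)
    x = x.permute(0, 1, 2, 4, 3, 5).reshape(B, C, H // p, W // p, p * p)
    out = torch.empty_like(x)
    for group in band_groups:
        g = x[:, group]  # (B, |group|, H/p, W/p, p*p)
        # patch statistics shared across every band and pixel of the group
        mean = g.mean(dim=(1, 4), keepdim=True)
        var = g.var(dim=(1, 4), keepdim=True)
        out[:, group] = (g - mean) / (var + eps).sqrt()
    return out  # normalized targets, one (p*p)-vector per patch and band
```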
Pre-training
This model is pre-trained on S2-NAIP urban, an urban subset of S2-NAIP derived by intersecting the S2-NAIP footprints with the urban set defined in Zooming-in zooming-out.
The resulting subset contains 167,397 tiles of size 640 m × 640 m, covering a total area of 68,565 km² across the continental United States.
We retain three distinct modalities:
- Aerial NAIP imagery: RGB + NIR (1.25 m)
- Sentinel-1 time series (mixed ascending and descending orbits)
- Sentinel-2 time series
During pre-training, we generate surrogate modalities for aerial and SPOT imagery via resampling of NAIP imagery.
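For illustration, such a surrogate-generation step could look like the sketch below. The band order (NIR, R, G, B, as in the fine-tuning channel list), the bilinear resampling, and the 384 px / 128 px target sizes (taken from the pre-training command further down) are assumptions about how this step might be wired, not the repository's exact implementation.

```python
import torch
import torch.nn.functional as F

def make_surrogates(naip: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """Derive surrogate 'aerial' and 'spot' inputs from a NAIP crop (sketch).

    naip: (B, 4, 96, 96), bands NIR, R, G, B
          (96 px = 120 m crop at 1.25 m resolution).
    """
    # Surrogate aerial imagery: all four bands, resampled to 384 px per crop
    aerial = F.interpolate(naip, size=(384, 384),
                           mode="bilinear", align_corners=False)
    # Surrogate SPOT imagery: RGB bands only, resampled to 128 px per crop
    spot = F.interpolate(naip[:, 1:4], size=(128, 128),
                         mode="bilinear", align_corners=False)
    return aerial, spot
```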
Below is the reconstruction loss during pre-training on the combined training, validation, and test splits, using patch-group-wise normalization and modality-weighted averaging proportional to token counts.
Fine-tuning
For optimal fine-tuning results with this model:
- Ensure that patch sizes and channels match between pre-training and fine-tuning for each modality:
  - Modality "aerial":
    - Patch size: 16
    - Channels: NIR, RED, GREEN, BLUE
  - Modality "spot":
    - Patch size: 16
    - Channels: RED, GREEN, BLUE
  - Modality "s1":
    - Patch size: 2
    - Channels: VV, VH
  - Modality "s2":
    - Patch size: 2
    - Channels: B02, B03, B04, B05, B06, B07, B08, B8A, B11, B12
- Use fixed cross-dataset grids for positional encodings, proportional to the ground sampling distance: `grid_pos_enc ≈ 1.6 × crop_meters` (see the sketch below)
- Retain separate Sentinel-1 modalities by orbit (if available in the fine-tuning dataset), but use a shared embedding layer initialized from the pre-trained Sentinel-1 layer
Note that modality names must match between pre-training and fine-tuning.
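As a quick sanity check, the ratio above reproduces the `grid_pos_enc` values used in the commands below (the helper is hypothetical, not part of the codebase):

```python
def grid_pos_enc(crop_meters: float) -> int:
    # positional-encoding grid size: roughly 1.6 px per meter of crop extent
    return round(1.6 * crop_meters)

grid_pos_enc(120)    # 192 -> S2-NAIP urban pre-training
grid_pos_enc(60)     # 96  -> TreeSatAI-TS
grid_pos_enc(160)    # 256 -> PASTIS-HD
grid_pos_enc(102.4)  # 164 -> FLAIR#2 / FLAIR-HUB use 160 (ratio 1.5625)
```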
Below are cross-dataset evaluation results obtained with these guidelines on TreeSatAI-TS, PASTIS-HD, FLAIR#2, and FLAIR-HUB (weighted F1 on TreeSatAI-TS and average IoU on the other three, matching the monitored validation metrics in the commands below).
| Model | Pre-training dataset | TreeSatAI-TS | PASTIS-HD | FLAIR#2 | FLAIR-HUB |
|---|---|---|---|---|---|
| MAESTRO (ours) | S2-NAIP urban | 78.8 | 67.4 | 62.6 | 64.6 |
| DINO-v2 | LVD-142M | 76.7 | 64.4 | 64.2 | 66.0 |
| DINO-v2 sat. | Maxar Vivid2 | 76.3 | 64.0 | 63.5 | 66.0 |
| DOFA | DOFA MM | 76.0 | 62.9 | 62.3 | 65.1 |
| CROMA | SSL4EO | 70.5 | 65.0 | 39.0 | 44.3 |
| Prithvi-EO-2.0 | HLS | 75.6 | 66.2 | 41.8 | 44.9 |
| SatMAE | fMoW RGB+S | 76.9 | 66.6 | 42.5 | 45.0 |
🚀 Getting started
Prerequisites:
- Clone MAESTRO's code repository
- Fetch the dataset splits and move them into each dataset directory
- Fetch the model weights and move them into `/path/to/experiments/MAESTRO_S2-NAIP-urban_base/checkpoints/`
- Fetch the model configuration and move it into `/path/to/experiments/MAESTRO_S2-NAIP-urban_base/.hydra/`

The module is set up with Poetry.
```bash
# 1. Change directory
cd MAESTRO
# 2. Install dependencies with Poetry
poetry install
```
Pre-training on S2-NAIP urban is performed using:
```bash
# batch size 16 on 8 nodes with 4 GPUs per node
poetry run python main.py \
model.model=mae model.model_size=medium \
model.fusion_mode=group model.inter_depth=3 \
opt_pretrain.epochs=15 opt_probe.epochs=0 opt_finetune.epochs=0 \
opt_pretrain.base_lr=1e-5 \
opt_pretrain.batch_size=16 trainer.num_nodes=8 \
datasets.name_dataset=s2_naip \
datasets.s2_naip.filter_inputs=[aerial,spot,s2,s1] \
datasets.s2_naip.crop_meters=120 datasets.s2_naip.grid_pos_enc=192 datasets.s2_naip.repeats=5 \
datasets.s2_naip.aerial.image_size=384 datasets.s2_naip.aerial.patch_size.mae=16 \
datasets.s2_naip.spot.image_size=128 datasets.s2_naip.spot.patch_size.mae=16 \
datasets.s2_naip.s2.image_size=12 datasets.s2_naip.s2.patch_size.mae=2 \
datasets.s2_naip.s1.image_size=12 datasets.s2_naip.s1.patch_size.mae=2 \
datasets.root_dir=/path/to/dataset/dir datasets.s2_naip.rel_dir=s2-naip-urban \
run.exp_dir=/path/to/experiments/dir run.exp_name=MAESTRO_S2-NAIP-urban_base
```
Fine-tuning on TreeSatAI-TS:
```bash
# batch size 24 on 1 node with 4 GPUs per node
# re-use embeddings' weights with "name_embed" argument
# re-use encoder's weights with "name_group" argument
# load pre-trained model "MAESTRO_S2-NAIP-urban_base"
poetry run python main.py \
model.model=mae model.model_size=medium \
model.fusion_mode=group model.inter_depth=3 \
opt_pretrain.epochs=0 opt_probe.epochs=10 opt_finetune.epochs=50 \
opt_probe.batch_size=24 opt_finetune.batch_size=24 trainer.num_nodes=1 \
opt_finetune.monitor=treesat_mlc_thresh/weighted_f1_val \
datasets.name_dataset=treesatai_ts \
datasets.treesatai_ts.filter_inputs=[aerial,s2,s1_asc,s1_des] \
datasets.treesatai_ts.crop_meters=60 datasets.treesatai_ts.grid_pos_enc=96 \
datasets.treesatai_ts.aerial.image_size=240 datasets.treesatai_ts.aerial.patch_size.mae=16 \
datasets.treesatai_ts.s2.image_size=6 datasets.treesatai_ts.s2.patch_size.mae=2 \
datasets.treesatai_ts.s1_asc.image_size=6 datasets.treesatai_ts.s1_asc.patch_size.mae=2 \
datasets.treesatai_ts.s1_des.image_size=6 datasets.treesatai_ts.s1_des.patch_size.mae=2 \
datasets.treesatai_ts.s1_asc.name_embed=s1 datasets.treesatai_ts.s1_des.name_embed=s1 \
datasets.treesatai_ts.s1_asc.name_group=s1 datasets.treesatai_ts.s1_des.name_group=s1 \
datasets.root_dir=/path/to/dataset/dir datasets.treesatai_ts.rel_dir=TreeSatAI-TS \
run.exp_dir=/path/to/experiments/dir run.exp_name=MAESTRO_S2-NAIP-urban-x-TSAI-TS_base \
run.load_name=MAESTRO_S2-NAIP-urban_base
```
Fine-tuning on PASTIS-HD:
```bash
# batch size 12 on 1 node with 4 GPUs per node
# re-use embeddings' weights with "name_embed" argument
# re-use encoder's weights with "name_group" argument
# load pre-trained model "MAESTRO_S2-NAIP-urban_base"
poetry run python main.py \
model.model=mae model.model_size=medium \
model.fusion_mode=group model.inter_depth=3 \
opt_pretrain.epochs=0 opt_probe.epochs=10 opt_finetune.epochs=50 \
opt_probe.batch_size=12 opt_finetune.batch_size=12 trainer.num_nodes=1 \
opt_finetune.monitor=pastis_seg/average_iou_val \
datasets.name_dataset=pastis_hd \
datasets.pastis_hd.filter_inputs=[spot,s2,s1_asc,s1_des] \
datasets.pastis_hd.crop_meters=160 datasets.pastis_hd.grid_pos_enc=256 datasets.pastis_hd.repeats=8 \
datasets.pastis_hd.spot.image_size=160 datasets.pastis_hd.spot.patch_size.mae=16 \
datasets.pastis_hd.s2.image_size=16 datasets.pastis_hd.s2.patch_size.mae=2 \
datasets.pastis_hd.s1_asc.image_size=16 datasets.pastis_hd.s1_asc.patch_size.mae=2 \
datasets.pastis_hd.s1_des.image_size=16 datasets.pastis_hd.s1_des.patch_size.mae=2 \
datasets.pastis_hd.s1_asc.name_embed=s1 datasets.pastis_hd.s1_des.name_embed=s1 \
datasets.pastis_hd.s1_asc.name_group=s1 datasets.pastis_hd.s1_des.name_group=s1 \
datasets.root_dir=/path/to/dataset/dir datasets.pastis_hd.rel_dir=PASTIS-HD \
run.exp_dir=/path/to/experiments/dir run.exp_name=MAESTRO_S2-NAIP-urban-x-PASTIS-HD_base \
run.load_name=MAESTRO_S2-NAIP-urban_base
```
Fine-tuning on FLAIR#2:
```bash
# batch size 6 on 2 nodes with 4 GPUs per node
# load pre-trained model "MAESTRO_S2-NAIP-urban_base"
poetry run python main.py \
model.model=mae model.model_size=medium \
model.fusion_mode=group model.inter_depth=3 \
opt_pretrain.epochs=0 opt_probe.epochs=15 opt_finetune.epochs=100 \
opt_probe.batch_size=6 opt_finetune.batch_size=6 trainer.num_nodes=2 \
opt_finetune.monitor=cosia/average_iou_val \
datasets.name_dataset=flair \
datasets.flair.version=flair2 \
datasets.flair.filter_inputs=[aerial,s2] \
datasets.flair.crop_meters=102.4 datasets.flair.grid_pos_enc=160 \
datasets.flair.aerial.image_size=512 datasets.flair.aerial.patch_size.mae=16 \
datasets.flair.s2.image_size=10 datasets.flair.s2.patch_size.mae=2 \
datasets.root_dir=/path/to/dataset/dir datasets.flair.csv_dir=/path/to/dataset/dir/FLAIR-HUB datasets.flair.rel_dir=FLAIR-HUB \
run.exp_dir=/path/to/experiments/dir run.exp_name=MAESTRO_S2-NAIP-urban-x-FLAIR2_base \
run.load_name=MAESTRO_S2-NAIP-urban_base
```
Fine-tuning on FLAIR-HUB:
```bash
# batch size 6 on 4 nodes with 4 GPUs per node
# re-use embeddings' weights with "name_embed" argument
# re-use encoder's weights with "name_group" argument
# load pre-trained model "MAESTRO_S2-NAIP-urban_base"
poetry run python main.py \
model.model=mae model.model_size=medium \
model.fusion_mode=group model.inter_depth=3 \
opt_pretrain.epochs=0 opt_probe.epochs=15 opt_finetune.epochs=100 \
opt_probe.batch_size=6 opt_finetune.batch_size=6 trainer.num_nodes=4 \
opt_finetune.monitor=cosia/average_iou_val \
datasets.name_dataset=flair \
datasets.flair.filter_inputs=[aerial,s2,s1_asc,s1_des] \
datasets.flair.crop_meters=102.4 datasets.flair.grid_pos_enc=160 \
datasets.flair.aerial.image_size=512 datasets.flair.aerial.patch_size.mae=16 \
datasets.flair.s2.image_size=10 datasets.flair.s2.patch_size.mae=2 \
datasets.flair.s1_asc.image_size=10 datasets.flair.s1_asc.patch_size.mae=2 \
datasets.flair.s1_des.image_size=10 datasets.flair.s1_des.patch_size.mae=2 \
datasets.flair.s1_asc.name_embed=s1 datasets.flair.s1_des.name_embed=s1 \
datasets.flair.s1_asc.name_group=s1 datasets.flair.s1_des.name_group=s1 \
datasets.root_dir=/path/to/dataset/dir datasets.flair.csv_dir=/path/to/dataset/dir/FLAIR-HUB datasets.flair.rel_dir=FLAIR-HUB \
run.exp_dir=/path/to/experiments/dir run.exp_name=MAESTRO_S2-NAIP-urban-x-FLAIR-HUB_base \
run.load_name=MAESTRO_S2-NAIP-urban_base
```
Reference
If you use this model, please cite:
```bibtex
@article{labatie2025maestro,
title={MAESTRO: Masked AutoEncoders for Multimodal, Multitemporal, and Multispectral Earth Observation Data},
author={Labatie, Antoine and Vaccaro, Michael and Lardiere, Nina and Garioud, Anatol and Gonthier, Nicolas},
journal={arXiv preprint arXiv:2508.10894},
year={2025}
}
```
Acknowledgement
The experiments in the paper were conducted using HPC/AI resources from GENCI-IDRIS (allocations A0181013803, A0161013803, AD010114597R1, and AD011014690R1).