Seminiferous Tubule Segmentation Model

U-Net with a ResNet34 encoder for 3-class semantic segmentation of seminiferous tubules in H&E-stained histology images.

Classes

| ID | Class | Color in overlays | Description |
|----|-------|-------------------|-------------|
| 0 | Background | (none) | Interstitial tissue |
| 1 | Tubule Wall | 🟢 Green | Seminiferous epithelium |
| 2 | Lumen | 🔵 Blue | Hollow center of tubule |
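The same ID-to-color mapping can be applied to a predicted mask to produce overlays like those in `visualizations/`. A minimal sketch, assuming the mask is a NumPy array of class IDs (the exact colors and the function name are illustrative, not taken from the repo):

```python
import numpy as np

# Class ID -> RGB overlay color (assumed values; background left black)
CLASS_COLORS = {
    0: (0, 0, 0),      # Background: not colored in overlays
    1: (0, 255, 0),    # Tubule Wall: green
    2: (0, 0, 255),    # Lumen: blue
}

def colorize_mask(mask: np.ndarray) -> np.ndarray:
    """Map an (H, W) class-ID mask to an (H, W, 3) RGB image."""
    rgb = np.zeros((*mask.shape, 3), dtype=np.uint8)
    for cls_id, color in CLASS_COLORS.items():
        rgb[mask == cls_id] = color
    return rgb

mask = np.array([[0, 1], [2, 1]])
overlay = colorize_mask(mask)
```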

Results

Evaluated on 70 validation images:

| Metric | Mean ± Std |
|--------|------------|
| Mean IoU | 0.7498 ± 0.0188 |
| IoU Background | 0.9634 ± 0.0192 |
| IoU Tubule Wall | 0.4255 ± 0.0313 |
| IoU Lumen | 0.8603 ± 0.0300 |
| Dice Background | 0.9813 ± 0.0105 |
| Dice Tubule Wall | 0.5963 ± 0.0316 |
| Dice Lumen | 0.9246 ± 0.0174 |
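For reference, per-class IoU and Dice of the kind reported above can be computed from integer label masks as below. This is a sketch; `evaluate.py` may aggregate differently (e.g., accumulating intersections and unions over the whole validation set rather than per image):

```python
import numpy as np

def per_class_iou_dice(pred: np.ndarray, target: np.ndarray, num_classes: int = 3):
    """Per-class IoU and Dice between two (H, W) class-ID masks."""
    ious, dices = [], []
    for c in range(num_classes):
        p = pred == c
        t = target == c
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        ious.append(inter / union if union else float("nan"))
        denom = p.sum() + t.sum()
        dices.append(2 * inter / denom if denom else float("nan"))
    return ious, dices

pred = np.array([[0, 1], [2, 2]])
target = np.array([[0, 1], [1, 2]])
ious, dices = per_class_iou_dice(pred, target)
```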

Model Details

| Property | Value |
|----------|-------|
| Architecture | U-Net (segmentation_models_pytorch) |
| Encoder | ResNet34 (ImageNet pretrained) |
| Input | RGB, 256×256, ImageNet normalization |
| Output | 3-class segmentation mask |
| Parameters | ~24.4M |
| Best epoch | 42 |
| Checkpoint IoU | 0.8677 |

Training Configuration

  • Dataset: LuGot16/tubules-segmentation (281 train / 70 val)
  • Loss: 0.5×DiceLoss + 0.5×CrossEntropyLoss (inverse-frequency class weights)
  • Optimizer: AdamW (lr=1e-4, weight_decay=1e-4)
  • Scheduler: CosineAnnealingLR
  • Augmentations: RandomResizedCrop, D4 symmetry (flips + rot90), ElasticTransform, ColorJitter
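The card does not spell out how the inverse-frequency class weights are derived. A common recipe (an assumption, not necessarily what `train.py` does) counts pixels per class over the training masks and normalizes the reciprocal frequencies:

```python
import numpy as np

def inverse_frequency_weights(masks, num_classes: int = 3) -> np.ndarray:
    """Class weights proportional to 1 / pixel frequency, normalized to mean 1."""
    counts = np.zeros(num_classes)
    for m in masks:
        counts += np.bincount(m.ravel(), minlength=num_classes)
    freqs = counts / counts.sum()
    weights = 1.0 / np.maximum(freqs, 1e-8)  # guard against empty classes
    return weights / weights.mean()

# Toy example: background dominates, lumen is rarest
masks = [np.array([[0, 0, 0, 1], [0, 0, 2, 1]])]
w = inverse_frequency_weights(masks)
```

The normalized weights would then be passed to `CrossEntropyLoss(weight=...)`; rarer classes (here, the tubule wall in real data) receive proportionally larger weights.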

Quick Start

```python
import torch
import segmentation_models_pytorch as smp
from huggingface_hub import hf_hub_download

# Download the checkpoint and restore the weights
ckpt_path = hf_hub_download("LuGot16/tubule-segmentation-unet", "best_model.pt")
ckpt = torch.load(ckpt_path, map_location="cpu", weights_only=False)
model = smp.Unet(encoder_name="resnet34", encoder_weights=None, in_channels=3, classes=3)
model.load_state_dict(ckpt["model_state_dict"])
model.eval()
```
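The snippet above only restores the weights. Per the Model Details table, inputs must be 256×256 RGB with ImageNet normalization; a minimal preprocessing sketch in NumPy (the constants are the standard ImageNet statistics; `inference.py` may implement this step differently):

```python
import numpy as np

# Standard ImageNet statistics, matching the pretrained ResNet34 encoder
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image_uint8: np.ndarray) -> np.ndarray:
    """(H, W, 3) uint8 RGB -> (3, H, W) float32, scaled to [0, 1] and normalized."""
    x = image_uint8.astype(np.float32) / 255.0
    x = (x - IMAGENET_MEAN) / IMAGENET_STD
    return x.transpose(2, 0, 1)  # channels-first layout expected by the model

img = np.full((256, 256, 3), 128, dtype=np.uint8)
x = preprocess(img)
```

The resulting array can be batched with `torch.from_numpy(x).unsqueeze(0)`, and the predicted class mask recovered from the logits with `argmax` over the channel dimension.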

Inference with Area Calculation

```python
from inference import load_model, predict

model, config = load_model("LuGot16/tubule-segmentation-unet")
mask, areas = predict(model, "image.png", config)

print(f"Tubule wall area: {areas['tubule_wall_area_um2']:.1f} μm²")
print(f"Lumen area: {areas['lumen_area_um2']:.1f} μm²")
print(f"Lumen/Tubule ratio: {areas['lumen_to_tubule_ratio']:.3f}")
print(f"Tubule coverage: {areas['tubule_coverage_pct']:.1f}%")
```

Default scale: 0.32 μm/pixel (calibrated from a 25 μm scale bar). Adjust with the `scale_um_per_px` parameter.
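The area keys printed above follow directly from pixel counts and the pixel scale. The sketch below is a plausible reconstruction, not the literal code in `inference.py`; in particular, treating wall + lumen as the whole tubule for the ratio is an assumption:

```python
import numpy as np

def mask_areas(mask: np.ndarray, scale_um_per_px: float = 0.32) -> dict:
    """Convert pixel counts in a class-ID mask to areas in square micrometers."""
    px_area = scale_um_per_px ** 2       # area of one pixel in um^2
    wall_px = int((mask == 1).sum())
    lumen_px = int((mask == 2).sum())
    wall = wall_px * px_area
    lumen = lumen_px * px_area
    tubule = wall + lumen                # assumption: tubule = wall + lumen
    return {
        "tubule_wall_area_um2": wall,
        "lumen_area_um2": lumen,
        "lumen_to_tubule_ratio": lumen / tubule if tubule else 0.0,
        "tubule_coverage_pct": 100.0 * (wall_px + lumen_px) / mask.size,
    }

mask = np.array([[0, 1], [1, 2]])
areas = mask_areas(mask, scale_um_per_px=1.0)
```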

Files

  • best_model.pt — Trained model checkpoint
  • inference.py — Inference script with area calculation (μm²)
  • extract_masks.py — Tool to extract GT masks from red-contour-annotated images
  • train.py — Training script (reproducible)
  • evaluate.py — Evaluation script
  • visualizations/ — GT vs prediction comparisons on val set

Data Pipeline

  1. Original images have manually drawn red contours marking tubule boundaries
  2. extract_masks.py uses OpenCV contour hierarchy detection (RETR_CCOMP) to extract 3-class masks
  3. Red annotations are removed via inpainting for clean training inputs
  4. The model is trained on clean images and predicts on unannotated images