This model uses the LTG-BERT architecture. It was trained on a combination of the BabyLM Dataset, the TinyStories Dataset, and generated data, in accordance with the rules of the Strict track and its 100M-word budget.

The model was trained with a sequence length of 128 tokens.
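
A minimal usage sketch is shown below. It assumes the checkpoint can be loaded through the standard `transformers` auto classes and, as is typical for LTG-BERT checkpoints on the Hub, that the repository ships custom modeling code, so `trust_remote_code=True` is passed; the example sentence is purely illustrative.

```python
# Minimal sketch: masked-token prediction with this checkpoint.
# Assumption: the repo provides custom LTG-BERT code (trust_remote_code=True).
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "nikitastheo/BERTtime-Stories-100m-nucleus-1"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForMaskedLM.from_pretrained(model_id, trust_remote_code=True)

# Truncate to the 128-token sequence length used during training.
text = f"The cat sat on the {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)

with torch.no_grad():
    logits = model(**inputs).logits

# Decode the highest-scoring token at the mask position.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```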

Hyperparameters used and evaluation scores will follow in a subsequent update.
