This is a decensored version of facebook/MobileLLM-R1.5-950M, made using Heretic v1.2.0.

Abliteration parameters

| Parameter | Value |
| --- | --- |
| direction_index | 14.92 |
| attn.o_proj.max_weight | 0.88 |
| attn.o_proj.max_weight_position | 13.42 |
| attn.o_proj.min_weight | 0.46 |
| attn.o_proj.min_weight_distance | 6.05 |

Performance

| Metric | This model | Original model (facebook/MobileLLM-R1.5-950M) |
| --- | --- | --- |
| KL divergence | 0.3553 | 0 (by definition) |
| Refusals | 4/100 | 17/100 |


Model Details

MobileLLM-R1.5 is an updated version of MobileLLM-R1, with the primary improvement being the integration of on-policy knowledge distillation [1,2]. We found this method to be particularly effective when applied as a final stage in the post-training pipeline for small models, yielding a significant performance boost. Leveraging the same reasoning SFT datasets used for MobileLLM-R1, including OpenMathReasoning, OpenScienceReasoning-2, and OpenCodeReasoning-2, MobileLLM-R1.5 performs additional on-policy KD, finetuned from the final MobileLLM-R1 model. This release includes three models: MobileLLM-R1.5-140M, MobileLLM-R1.5-360M, and MobileLLM-R1.5-950M.

Note: These models are not general-purpose chat models. They are Supervised Fine-Tuned (SFT) models, specifically trained to address mathematical, programming (Python, C++), and scientific problems.

Highlights

  • Applying on-policy KD as a final post-training stage yields substantial accuracy gains (10 to 35 points) on challenging reasoning benchmarks. For example, applying it to MobileLLM-R1-950M increases its AIME score from 15.5 to 39.9 in MobileLLM-R1.5-950M. The improvement is even more pronounced at the 360M scale, where MATH accuracy rises from 28.4 to 63.4 and GSM8K from 24.5 to 52.8, more than doubling performance.
  • MobileLLM-R1.5-950M also outperforms DeepSeek-R1-Distill-Qwen-1.5B across all evaluated math and coding benchmarks, despite having significantly fewer parameters (0.95B vs. 1.5B); for example, 39.9 vs. 29.1 on AIME’24.
  • Notably, MobileLLM-R1.5-950M is trained on only ~2T high-quality pre-training tokens (fewer than 5T total tokens) yet dramatically outperforms Qwen3-0.6B, which was trained on 36T tokens.


[1] On-Policy Distillation of Language Models: Learning from Self-Generated Mistakes
[2] https://thinkingmachines.ai/blog/on-policy-distillation/


Model Architecture:

| Model | # Layers | # Attention Heads | # KV Heads | Dim | Hidden Dim | Params |
| --- | --- | --- | --- | --- | --- | --- |
| MobileLLM-R1.5-140M | 15 | 9 | 3 | 576 | 2048 | 140M |
| MobileLLM-R1.5-360M | 15 | 16 | 4 | 1024 | 4096 | 359M |
| MobileLLM-R1.5-950M | 22 | 24 | 6 | 1536 | 6144 | 949M |

| Model | Input modalities | Output modalities | Context Length | Vocabulary Size | Shared Embeddings |
| --- | --- | --- | --- | --- | --- |
| MobileLLM-R1.5-140M | Text | Text | 32k | 128k | Yes |
| MobileLLM-R1.5-360M | Text | Text | 32k | 128k | Yes |
| MobileLLM-R1.5-950M | Text | Text | 32k | 128k | Yes |
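
As a quick sanity check, the 949M parameter count can be roughly reproduced from the listed dimensions. The sketch below assumes a Llama-style decoder block (grouped-query attention without biases, SwiGLU feed-forward) and a vocabulary of 128,256 tokens tied between input and output embeddings; both are assumptions consistent with, but not stated by, the table above.

dim, hidden, layers = 1536, 6144, 22           # MobileLLM-R1.5-950M row above
heads, kv_heads = 24, 6
head_dim = dim // heads                        # 64

# q/o projections are dim x dim; k/v are dim x (kv_heads * head_dim)
attn = 2 * dim * dim + 2 * dim * kv_heads * head_dim
mlp = 3 * dim * hidden                         # gate, up, and down projections
embed = 128_256 * dim                          # tied embeddings counted once (assumed vocab size)

total = layers * (attn + mlp) + embed          # norm weights omitted (negligible)
print(f"~{total / 1e6:.0f}M parameters")       # ~950M, matching the table's 949M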

How to use

To load the pretrained model for further finetuning or evaluation:

from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/MobileLLM-R1.5-950M")
model = AutoModelForCausalLM.from_pretrained("facebook/MobileLLM-R1.5-950M")

Inference examples

Transformers

from transformers import pipeline
import torch

model_id = "facebook/MobileLLM-R1.5-950M"

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype="auto",
    device_map="auto",
)

# Choose ONE of the three scenarios below; each assignment overwrites `messages`.

# Math problem / default scenario
messages = [
    {
        "role": "system",
        "content": "Please reason step by step, and put your final answer within \\boxed{}."
    },
    {"role": "user", "content": "Compute: $1-2+3-4+5- \\dots +99-100$."},
]

# C++ coding scenario
messages = [
    {
        "role": "system",
        "content": (
            "\nYou are a helpful and harmless assistant. You should think step-by-step before responding to the instruction below.\n\n"
            "Please use c++ programming language only.\n"
            "You must use ```cpp for just the final solution code block with the following format:\n"
            "```cpp\n// Your code here\n```\n"
        )
    },
    {"role": "user", "content": "Write a C++ program that prints 'Hello, World!'."},
]

# Python coding scenario
messages = [
    {
        "role": "system",
        "content": (
            "\nYou are a helpful and harmless assistant. You should think step-by-step before responding to the instruction below.\n\n"
            "Please use python programming language only.\n"
            "You must use ```python for just the final solution code block with the following format:\n"
            "```python\n# Your code here\n```\n"
        )
    },
    {"role": "user", "content": "Write a Python function that returns the square of a number."},
]

outputs = pipe(
    messages,
    max_new_tokens=8192,
)
print(outputs[0]["generated_text"][-1])

You can also run inference with vLLM. You only need to register the model architecture Llama4ForCausalLM with the vLLM ModelRegistry.

from vllm.model_executor.models.llama4 import Llama4ForCausalLM
from vllm.model_executor.models.registry import ModelRegistry
ModelRegistry.register_model("Llama4ForCausalLM", Llama4ForCausalLM)
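
After registering the architecture, generation follows vLLM's standard LLM API. A minimal sketch, assuming the registration above has already run in the same process and that the checkpoint loads directly by its Hugging Face id (sampling values are illustrative):

from vllm import LLM, SamplingParams

llm = LLM(model="facebook/MobileLLM-R1.5-950M")
sampling = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=8192)
outputs = llm.generate(["Compute: $1-2+3-4+5- \\dots +99-100$."], sampling)
print(outputs[0].outputs[0].text)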

On-policy KD Overview

Prior knowledge distillation for LLMs falls mainly into two categories, token-level distillation and sequence-level distillation:

  • Token-level distillation, also referred to as logit distillation, uses the teacher’s predicted probability distribution (logits) at each token to supervise the student, encouraging it to match the teacher’s confidence over the vocabulary.

  • Sequence-level distillation, on the other hand, uses the teacher to generate full sequences or trajectories, and the student learns to mimic these sequences. The advantage of sequence-level distillation is that the student can capture the teacher’s generation patterns, including long-range dependencies and coherent structures. However, a limitation of this approach is that it is biased toward the teacher’s outputs. If the student encounters scenarios that diverge from those outputs, it may struggle to recover or generalize, since all of its supervision comes from the teacher’s trajectories.

On-policy distillation addresses this limitation of traditional sequence-level distillation by letting the student generate its own sequences, while the teacher provides token-level logits to guide or judge whether the student’s outputs are correct. This process can be viewed as a form of token-level reward modeling, analogous to reinforcement learning. Because the student is trained on its own generated context rather than the teacher’s, the mismatch between training and inference distributions is minimized, allowing the student to better handle errors and adapt to situations not covered by the teacher. In essence, on-policy KD bridges knowledge distillation and RL-like self-improvement, giving the student more autonomy while still leveraging teacher guidance. For a more detailed explanation of on-policy distillation, the post in [2] is a helpful reference.

On-policy KD for MobileLLM-R1.5

A key requirement for on-policy knowledge distillation (KD) is that the student model must be capable of generating sequences aligned with the target downstream tasks. Therefore, in our setup, we start from the MobileLLM-R1 final model, which has already been optimized for the mathematics and coding tasks of interest. We use the same reasoning SFT datasets (OpenMathReasoning, OpenScienceReasoning-2, and OpenCodeReasoning-2) to assess whether on-policy KD can further improve performance.

Strictly speaking, on-policy KD entails generating each new training sample with the student model at its current iteration. However, this is prohibitively time-intensive: sample generation is autoregressive, whereas training operates in parallel across tokens, so generating samples during training dramatically increases the overall training time. Since the model parameters and generation distribution change only minimally between steps, we adopt a "semi on-policy KD" strategy: we use the student model (MobileLLM-R1-140M/360M/950M) to regenerate the reasoning SFT data from its seed prompts, then train with a forward KL-divergence loss for 4 epochs, using nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1 as the teacher model.
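
As a concrete illustration of the per-token guidance described above, the sketch below computes a forward-KL objective over student-generated positions, with random tensors standing in for real model logits; reading "forward KL" as KL(teacher ∥ student) is an assumption about the convention intended here.

import torch
import torch.nn.functional as F

def forward_kl_loss(student_logits, teacher_logits, mask):
    """Per-token KL(teacher || student), averaged over valid (unmasked) positions.
    Both logit tensors are scored on the student's own generations (on-policy)."""
    t_logp = F.log_softmax(teacher_logits, dim=-1)
    s_logp = F.log_softmax(student_logits, dim=-1)
    kl = (t_logp.exp() * (t_logp - s_logp)).sum(dim=-1)  # sum over vocabulary
    return (kl * mask).sum() / mask.sum()

# toy shapes: batch=2, sequence=5, vocab=11 (illustrative only)
torch.manual_seed(0)
student = torch.randn(2, 5, 11, requires_grad=True)
teacher = torch.randn(2, 5, 11)
mask = torch.ones(2, 5)
loss = forward_kl_loss(student, teacher, mask)
loss.backward()  # gradients flow only into the student logits
print(loss.item())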

Training

Training stages and hyperparameter details

In MobileLLM-R1.5, we resume training from the final MobileLLM-R1 model, which was trained in three stages (pre-training, mid-training, and post-training), and perform an additional on-policy knowledge distillation (KD) step during the post-training phase. We use the Adam optimizer with zero weight decay. The learning rate is set to 8e-5 with a warmup ratio of 0.1 and follows a cosine decay schedule from its maximum value to zero. Full training hyperparameters are provided in the table below.
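
A minimal sketch of the schedule just described (peak LR 8e-5, warmup ratio 0.1, cosine decay to zero); the linear shape of the warmup is an assumption:

import math

def lr_at(step, total_steps, peak_lr=8e-5, warmup_ratio=0.1):
    """Warmup to peak_lr over the first 10% of steps, then cosine decay to zero."""
    warmup_steps = max(1, int(total_steps * warmup_ratio))
    if step < warmup_steps:
        return peak_lr * step / warmup_steps  # linear warmup (assumed shape)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * peak_lr * (1.0 + math.cos(math.pi * progress))

print([f"{lr_at(s, 1000):.2e}" for s in (0, 50, 100, 550, 1000)])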

| Model | Stage | Phase | Tokens / Samples | BS | Sequence Length | Steps | LR | #GPUs | Training Time |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MobileLLM-R1 | Pre-training | Phase 1 | 2T tokens | 16 | 2k | 500k | 4.00E-03 | 16 x 8 | 4-5 days |
| MobileLLM-R1 | Pre-training | Phase 2 | 2T tokens | 16 | 2k | 500k | 4.00E-03 | 16 x 8 | 4-5 days |
| MobileLLM-R1 | Mid-training | Phase 1 | 100B tokens | 4 | 4k | 50k | 3.60E-04 | 16 x 8 | 1-2 days |
| MobileLLM-R1 | Mid-training | Phase 2 | 100B tokens | 4 | 4k | 50k | 3.60E-04 | 16 x 8 | 1-2 days |
| MobileLLM-R1 | Post-training | General SFT | 866K samples | 4 | 4k | 2 epochs | 5.00E-06 | 16 x 8 | ~2h |
| MobileLLM-R1 | Post-training | Reasoning SFT | 6.2M samples | 8 | 32k | 4 epochs | 8.00E-05 | 16 x 8 | ~2.5 days |
| MobileLLM-R1.5 | Post-training | On-policy KD | 6.2M samples | 8 | 32k | 4 epochs | 8.00E-05 | 16 x 8 | ~6.5 days |

Training data

We use the prompts from the data sources listed in the table below and employ the MobileLLM-R1 model to regenerate the SFT dataset based on these seed prompts, using a temperature of 0.6, a top-p of 0.95, and a maximum sequence length of 32,768.

| Dataset | Rows |
| --- | --- |
| OpenMathReasoning | 3.2M samples |
| OpenScienceReasoning-2 | 803K samples |
| OpenCodeReasoning-2 | 2.16M samples |
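
A minimal sketch of the regeneration setup described above, using the stated sampling parameters; the exact checkpoint id, prompt formatting, and dataset plumbing are assumptions:

from transformers import AutoModelForCausalLM, AutoTokenizer

# the student used for regeneration (repo id assumed)
tokenizer = AutoTokenizer.from_pretrained("facebook/MobileLLM-R1-950M")
model = AutoModelForCausalLM.from_pretrained("facebook/MobileLLM-R1-950M", device_map="auto")

# one seed prompt standing in for the datasets listed above
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Compute: $1-2+3-4+5- \\dots +99-100$."}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    inputs,
    do_sample=True,
    temperature=0.6,       # as stated above
    top_p=0.95,            # as stated above
    max_new_tokens=32768,  # the stated 32,768 maximum sequence length (assumed to map here)
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))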

Evaluation

MobileLLM-R1.5 post-trained model

| Model | Size | MATH500 (0-shot, pass@1) | GSM8K (0-shot, pass@1) | AIME'24 (0-shot, pass@1, n=64) | AIME'25 (0-shot, pass@1, n=64) | LiveCodeBench-v6 (0-shot, pass@1, n=16) |
| --- | --- | --- | --- | --- | --- | --- |
| **<150M** | | | | | | |
| SmolLM2-135M-Instruct | 135M | 3.0 | 2.4 | -- | -- | 0.0 |
| MobileLLM-R1-140M | 140M | 6.2 | 4.1 | -- | -- | 1.7 |
| MobileLLM-R1.5-140M | 140M | 16.0 | 8.3 | -- | -- | 1.5 |
| **150M - 400M** | | | | | | |
| Gemma-3-270m-it | 268M | 6.8 | 8.4 | -- | -- | 0.0 |
| SmolLM2-360M-Instruct | 362M | 3.4 | 8.1 | -- | -- | 0.7 |
| MobileLLM-R1-360M | 359M | 28.4 | 24.5 | -- | -- | 5.1 |
| MobileLLM-R1.5-360M | 359M | 63.4 | 52.8 | 4.1 | 10.6 | 8.6 |
| **400M - 1B** | | | | | | |
| Qwen2.5-0.5B-Instruct | 494M | 31.2 | 48.1 | 0.1 | 0.3 | 3.6 |
| Qwen3-0.6B | 596M | 73.0 | 79.2 | 11.3 | 17.0 | 14.9 |
| MobileLLM-R1-950M | 949M | 74.0 | 67.5 | 15.5 | 16.3 | 19.9 |
| MobileLLM-R1.5-950M | 949M | 86.6 | 82.6 | 39.9 | 31.1 | 29.1 |
| **> 1B** | | | | | | |
| Gemma-3-1B-it | 1.0B | 45.4 | 62.9 | 0.9 | 0.0 | 2.0 |
| LLaMA3.2-1B-Instruct | 1.24B | 24.8 | 38.8 | 1.1 | 0.2 | 4.1 |
| OLMo-2-0425-1B-Instruct | 1.48B | 19.2 | 69.7 | 0.6 | 0.1 | 0.0 |
| OpenReasoning-Nemotron-1.5B | 1.54B | 83.4 | 76.7 | 49.7 | 40.4 | 28.3 |
| DeepSeek-R1-Distill-Qwen-1.5B | 1.54B | 83.2 | 77.3 | 29.1 | 23.4 | 19.9 |
| Qwen2.5-1.5B-Instruct | 1.54B | 54.0 | 70.0 | 2.5 | 0.9 | 7.9 |
| SmolLM2-1.7B-Instruct | 1.71B | 19.2 | 41.8 | 0.3 | 0.1 | 4.4 |
| Qwen3-1.7B | 1.72B | 89.4 | 90.3 | 47.0 | 37.0 | 29.8 |

For AIME, we evaluate models across 64 runs and report the average accuracy. For LiveCodeBench, results are reported as the average accuracy across 16 runs. Models with fewer than 400M parameters do not produce reliable AIME scores and are therefore denoted '--'.
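
For clarity, the averaging works as in the sketch below (names are illustrative): accuracy is computed per run over all problems, then averaged across the n independent runs.

def averaged_pass_at_1(correct):
    """correct[i][j] is 1 if run i solved problem j; returns mean per-run accuracy."""
    n_runs = len(correct)
    n_problems = len(correct[0])
    return sum(sum(run) / n_problems for run in correct) / n_runs

# toy example: 2 runs over 3 problems
print(averaged_pass_at_1([[1, 0, 1], [0, 1, 0]]))  # 0.5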

Citation

If you find our model useful for your research, please consider citing:

@article{zhao2025mobilellm-r1,
  title={MobileLLM-R1: Exploring the Limits of Sub-Billion Language Model Reasoners with Open Training Recipes},
  author={Zhao, Changsheng and Chang, Ernie and Liu, Zechun and Chang, Chia-Jung and Wen, Wei and Lai, Chen and Cao, Sheng and Tian, Yuandong and Krishnamoorthi, Raghuraman and Shi, Yangyang and Chandra, Vikas},
  journal={arXiv preprint arXiv:2509.24945},
  year={2025}
}

Contact

Changsheng Zhao, Meta Inc (cszhao at meta dot com)

Ernie Chang, Meta Inc (erniecyc at meta dot com)

Zechun Liu, Meta Inc (zechunliu at meta dot com)

License

MobileLLM-R1.5 is currently released under the FAIR NC (noncommercial) license.
