This model is a gradient block merge (0.8, 0.2) of two Mistral-based models: a logic-focused merge and a prose-focused merge, described below.

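For context, a gradient block merge blends the two models block by block, with the blend weight shifting gradually across the transformer layers rather than staying fixed. The sketch below is a minimal illustration of that idea, assuming (0.8, 0.2) are the endpoint weights for the first model across the blocks; the layer count, tensors, and linear ramp are toy placeholders, not the actual merge recipe.

```python
# Minimal sketch of a gradient block merge. Assumes (0.8, 0.2) are the
# endpoint weights for the first model across the blocks; everything here
# is illustrative, not the recipe used for this model.
import torch

def gradient_block_merge(model_a_blocks, model_b_blocks, start=0.8, end=0.2):
    """Blend per-block weights, ramping model A's share linearly from
    `start` at the first block to `end` at the last block."""
    n = len(model_a_blocks)
    merged = []
    for i, (a, b) in enumerate(zip(model_a_blocks, model_b_blocks)):
        t = i / (n - 1) if n > 1 else 0.0      # position along the gradient
        w = start + (end - start) * t          # model A's weight at this block
        merged.append(w * a + (1.0 - w) * b)   # linear blend of the two blocks
    return merged

# Toy example: eight "blocks", each reduced to a single weight matrix.
blocks_a = [torch.randn(4, 4) for _ in range(8)]
blocks_b = [torch.randn(4, 4) for _ in range(8)]
blended = gradient_block_merge(blocks_a, blocks_b)
```
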
The logic model is a SLERP merge of https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B and https://huggingface.co/openchat/openchat_3.5

The prose model is a SLERP merge of https://huggingface.co/NousResearch/Nous-Capybara-7B-V1.9 and https://huggingface.co/HuggingFaceH4/zephyr-7b-beta

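SLERP (spherical linear interpolation) interpolates between two weight tensors along an arc rather than a straight line, which tends to preserve the scale of the weights better than plain averaging. The sketch below is a minimal, toolkit-free illustration of SLERP on a single tensor; the interpolation factor `t=0.5` is an assumption, not the setting used for these merges.

```python
# Minimal SLERP sketch for a single weight tensor. Illustrative only;
# the interpolation factor and shapes are not taken from the actual merges.
import torch

def slerp(a: torch.Tensor, b: torch.Tensor, t: float = 0.5, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two tensors, falling back
    to a plain linear blend when their directions are nearly parallel."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_dir = a_flat / (a_flat.norm() + eps)
    b_dir = b_flat / (b_flat.norm() + eps)
    dot = torch.clamp(a_dir @ b_dir, -1.0, 1.0)
    omega = torch.arccos(dot)                  # angle between the two directions
    if omega.abs() < 1e-4:                     # nearly parallel: lerp is stable
        return (1 - t) * a + t * b
    sin_omega = torch.sin(omega)
    wa = torch.sin((1 - t) * omega) / sin_omega
    wb = torch.sin(t * omega) / sin_omega
    return (wa * a_flat + wb * b_flat).reshape(a.shape).to(a.dtype)

# Toy example: blend two weight matrices halfway along the arc.
w1, w2 = torch.randn(16, 16), torch.randn(16, 16)
merged = slerp(w1, w2, t=0.5)
```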