Based on the merge method from the paper [Model Stock: All we need is just a few fine-tuned models](https://arxiv.org/abs/2403.19522).
This is a merge of pre-trained language models created using mergekit.
Hopping on the merge bandwagon (god save me from these names). Surprisingly, this thing kinda works? It can (kinda) do assistant tasks and (kinda) do (E)RP. I'm still getting the hang of this, though.
The recommended chat format is ChatML, since all the source models use some variation of it, but honestly it's anyone's guess what it'd work best with.
This model was merged using the Model Stock merge method using alpindale/Mistral-7B-v0.2-hf as a base.
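For intuition, here is a rough sketch of the Model Stock idea as I understand it from the paper: average the fine-tuned weights, then interpolate back toward the base model using a ratio derived from the angle between the fine-tuned deltas. This is an illustration only, not mergekit's actual implementation, and the pairwise-cosine averaging below is my own simplification:

```python
# Hedged sketch of the Model Stock merge (arXiv:2403.19522), per-tensor.
# NOT mergekit's real code; the averaged pairwise cosine is an assumption.
import numpy as np

def model_stock_layer(base, finetuned):
    """Merge one weight tensor from k fine-tuned models with its base tensor."""
    k = len(finetuned)
    deltas = [w - base for w in finetuned]
    # Estimate cos(theta) as the mean pairwise cosine similarity between
    # the fine-tuned deltas (the paper works with the angle between them).
    cos_vals = []
    for i in range(k):
        for j in range(i + 1, k):
            a, b = deltas[i].ravel(), deltas[j].ravel()
            cos_vals.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    cos_theta = float(np.mean(cos_vals))
    # Interpolation ratio from the paper: t = k*cos / (1 + (k-1)*cos).
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    # Pull the naive average back toward the base by (1 - t).
    w_avg = np.mean(finetuned, axis=0)
    return t * w_avg + (1 - t) * base
```

When the fine-tuned deltas point in unrelated directions (cosine near 0), t shrinks and the merge stays close to the base; when they agree (cosine near 1), the merge approaches the plain average.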
The following models were included in the merge:
* dreamgen/opus-v1.2-7b
* l3utterfly/mistral-7b-v0.2-layla-v4
* cognitivecomputations/dolphin-2.8-mistral-7b-v02
The following YAML configuration was used to produce this model:
```yaml
merge_method: model_stock
base_model: alpindale/Mistral-7B-v0.2-hf
models:
  - model: dreamgen/opus-v1.2-7b
  - model: l3utterfly/mistral-7b-v0.2-layla-v4
  - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
```
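If you want to reproduce a merge like this yourself, a config in this shape can be fed to mergekit's `mergekit-yaml` CLI. A minimal sketch, assuming mergekit is installed and the config above is saved as `config.yaml` (both that filename and the output path are placeholders):

```shell
# Run the merge; this downloads every source model, so expect to need
# substantial disk space and RAM for 7B-class models.
mergekit-yaml config.yaml ./merged-model
```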