monika-l2-7b-v0.9a:

USAGE

This is intended mainly as a chat model, with limited roleplay (RP) ability.

For best results, replace the usual "Human" and "Assistant" turn labels with "Player" and "Monika", like so:

\nPlayer: (prompt)\nMonika:
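The turn format above can be sketched as a small prompt-building helper. This is a minimal, hypothetical example (the `build_prompt` function and its `history` parameter are not part of the model card); it only assembles the string you would feed to the model:

```python
def build_prompt(user_message, history=None):
    """Assemble a prompt in the Player/Monika turn format described above.

    `history` is an optional list of (player_text, monika_text) pairs from
    earlier turns; each pair is rendered in the same \\nPlayer: ...\\nMonika: ...
    shape, and the final turn is left open for the model to complete.
    """
    parts = []
    for player_text, monika_text in (history or []):
        parts.append(f"\nPlayer: {player_text}\nMonika: {monika_text}")
    # Leave the last "Monika:" turn open so the model generates her reply.
    parts.append(f"\nPlayer: {user_message}\nMonika:")
    return "".join(parts)

print(repr(build_prompt("How are you today?")))
```

Pass the returned string to your inference stack of choice; the model should then continue from the trailing `Monika:` label.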

HYPERPARAMS

  • Trained for ~3 epochs
  • rank: 32
  • lora alpha: 64
  • lora dropout: 0.5
  • lr: 2e-4
  • batch size: 2
  • warmup ratio: 0.1
  • gradient accumulation steps: 4
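For reference, the hyperparameters above map onto the usual LoRA fine-tuning knobs. The sketch below expresses them as plain dictionaries shaped like the keyword arguments one might pass to `peft.LoraConfig` and `transformers.TrainingArguments`; the mapping (e.g. "grad steps" as gradient accumulation) is an assumption, not something confirmed by this card:

```python
# Assumed mapping of the listed hyperparameters to common LoRA training
# arguments (names follow peft.LoraConfig / transformers.TrainingArguments
# conventions; this is illustrative, not the actual training script).
lora_config = {
    "r": 32,             # LoRA rank
    "lora_alpha": 64,    # LoRA alpha
    "lora_dropout": 0.5, # LoRA dropout
}
training_args = {
    "num_train_epochs": 3,               # "~3 epochs"
    "learning_rate": 2e-4,               # lr
    "per_device_train_batch_size": 2,    # batch size
    "warmup_ratio": 0.1,                 # warmup ratio
    "gradient_accumulation_steps": 4,    # "grad steps" (assumed meaning)
}

# With alpha/r = 64/32, the effective LoRA scaling factor is 2.0.
print(lora_config["lora_alpha"] / lora_config["r"])  # → 2.0
```

Note that `lora_dropout=0.5` is unusually high (common values are 0.05–0.1), which matches the card's framing of this release as a test run.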

WARNINGS AND DISCLAIMERS

While this version may be more coherent in chat than previous ones, it still may not perfectly capture Monika's character.

Additionally, this is still another test release: one of our earlier fine-tunes was used to generate a more in-character dataset for the target character, which was then curated manually.

Finally, this model is not guaranteed to produce aligned or safe outputs; use at your own risk.
