These GGUF quants were made from https://huggingface.co/zai-org/GLM-ASR-Nano-2512 and designed for use in KoboldCpp 1.104 and above.
Contains 3 GGUF quants of GLM-ASR-Nano-2512, as well as the associated mmproj file.
To use:
- Download the main model (GLM-ASR-Nano-1.6B-2512-Q4_K.gguf) and the mmproj (mmproj-GLM-ASR-Nano-2512-Q8_0.gguf)
- Launch KoboldCpp and go to the Loaded Files tab
- Select the main model as "Text Model" and the mmproj as "mmproj"
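If you prefer launching from the command line instead of the GUI, the same two files can be passed directly. This is a sketch assuming both files sit in the current directory; adjust the paths to wherever you downloaded them:

```shell
# Launch KoboldCpp with the quantized main model and its mmproj.
# File names match the downloads listed above.
python koboldcpp.py \
  --model GLM-ASR-Nano-1.6B-2512-Q4_K.gguf \
  --mmproj mmproj-GLM-ASR-Nano-2512-Q8_0.gguf
```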
Base model: zai-org/GLM-ASR-Nano-2512