| Model | Task | Params | Downloads | Likes |
|---|---|---|---|---|
| TheBloke/Wizard-Vicuna-30B-Uncensored-GPTQ | Text Generation | 33B | 139k | 603 |
| huggingface/falcon-40b-gptq | Text Generation | 7B | 28 | 14 |
| TheBloke/Falcon-180B-GPTQ | Text Generation | 179B | 34 | 9 |
| TheBloke/sqlcoder-7B-GPTQ | Text Generation | 7B | 12 | 7 |
| Qwen/Qwen1.5-0.5B-Chat-GPTQ-Int4 | Text Generation | 0.5B | 1.57k | 14 |
| Qwen/Qwen2-VL-72B-Instruct-GPTQ-Int4 | Image-Text-to-Text | 74B | 2.7k | 30 |
| Qwen/Qwen2.5-14B-Instruct-GPTQ-Int4 | Text Generation | 15B | 89.9k | 25 |
| Qwen/Qwen2.5-Coder-7B-Instruct-GPTQ-Int4 | Text Generation | 8B | 660k | 13 |
| ModelCloud/Falcon3-10B-Instruct-gptqmodel-4bit-vortex-mlx-v1 | - | 2B | 2 | 4 |
| ModelCloud/DeepSeek-R1-Distill-Qwen-7B-gptqmodel-4bit-vortex-v1 | Text Generation | 8B | 8 | 6 |
| ModelCloud/DeepSeek-R1-Distill-Qwen-7B-gptqmodel-4bit-vortex-v2 | Text Generation | 8B | 423 | 8 |
| ModelCloud/QwQ-32B-gptqmodel-4bit-vortex-v1 | Text Generation | 33B | 10 | 12 |
| ISTA-DASLab/gemma-3-27b-it-GPTQ-4b-128g | Image-Text-to-Text | 5B | 13.8k | 44 |
| JunHowie/Qwen3-32B-GPTQ-Int8 | Text Generation | 33B | 699 | 4 |
| AngelSlim/Qwen3-14B_int4_gptq | - | 15B | 6 | 1 |
| QuantTrio/Qwen3-Coder-30B-A3B-Instruct-GPTQ-Int8 | Text Generation | 31B | 5.19k | 8 |
| JunHowie/Qwen3-4B-Instruct-2507-GPTQ-Int4 | Text Generation | 4B | 1.04k | 2 |
| ModelCloud/Marin-32B-Base-GPTQMODEL-AWQ-W4A16 | Text Generation | 33B | 8 | 2 |
| ModelCloud/Granite-4.0-H-1B-GPTQMODEL-W4A16 | Text Generation | 1B | 6 | 1 |
| ModelCloud/Granite-4.0-H-350M-GPTQMODEL-W4A16 | Text Generation | 0.3B | 24 | 1 |
| ModelCloud/Brumby-14B-Base-GPTQMODEL-W4A16 | Text Generation | 15B | 5 | 1 |
| ModelCloud/Brumby-14B-Base-GPTQMODEL-W4A16-v2 | Text Generation | 15B | 5 | 1 |
| ModelCloud/bloom-560m-gptqmodel-4bit | - | 0.6B | 3 | 1 |
| avtc/GLM-4.5-Air-GPTQMODEL-W8A16 | Text Generation | 116B | 9 | 2 |
| SEOKDONG/gpt-oss-safeguard-20b-kor-enterprise-gptq-4bit | Text Generation | 21B | 95 | 1 |
| btbtyler09/Devstral-Small-2-24B-Instruct-INT4-INT8-Mixed-GPTQ | Image-Text-to-Text | 7B | 1.23k | 3 |
| ModelCloud/Qwen3-Coder-30B-A3B-Instruct-GPTQMODEL-W4A16-A | Text Generation | 31B | 25 | 1 |
| ModelCloud/Qwen3-Coder-30B-A3B-Instruct-GPTQMODEL-W4A16-B | Text Generation | 31B | 11 | 1 |
| Mohaaxa/qwen2.5-1.5b-gptq-4bit-v2 | Text Generation | 2B | 27 | 1 |
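Most of the checkpoints listed above are 4-bit (Int4 / W4A16) quantizations. As a rough illustration of how such checkpoints keep weights compact, here is a minimal sketch of nibble packing, eight 4-bit values per 32-bit word; this is a common layout for GPTQ-style `qweight` tensors, but the exact layout, group-wise scales, and zero points vary by library, so treat the details here as an assumption rather than any specific model's format.

```python
import numpy as np

def pack_int4(weights: np.ndarray) -> np.ndarray:
    """Pack unsigned 4-bit values (0..15) into int32 words, eight per word,
    lowest nibble first. Sketch of a GPTQ-style layout, not a specific spec."""
    assert weights.min() >= 0 and weights.max() <= 15
    w = weights.astype(np.uint32).reshape(-1, 8)
    packed = np.zeros(w.shape[0], dtype=np.uint32)
    for i in range(8):
        packed |= w[:, i] << (4 * i)          # place value i in nibble i
    return packed.astype(np.int32)

def unpack_int4(packed: np.ndarray) -> np.ndarray:
    """Inverse of pack_int4: recover the eight 4-bit values from each word."""
    p = packed.astype(np.uint32)
    nibbles = [(p >> (4 * i)) & 0xF for i in range(8)]
    return np.stack(nibbles, axis=1).reshape(-1).astype(np.uint8)

# Round trip: 16 quantized values fit in just 2 int32 words (8x smaller than fp32).
vals = np.arange(16, dtype=np.uint8)
assert pack_int4(vals).shape == (2,)
assert np.array_equal(unpack_int4(pack_int4(vals)), vals)
```

At inference time, kernels unpack these nibbles and apply per-group scales (and optionally zero points) to reconstruct approximate weights, which is where the memory savings of the Int4 models in the table come from.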