Original model: https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct
Quantization documentation: https://docs.openvino.ai/nightly/notebooks/qwen2-vl-with-output.html
Quantization config:
```python
import nncf

compression_configuration = {
    "mode": nncf.CompressWeightsMode.INT8_ASYM
}
```
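A minimal sketch of how a configuration like this is typically applied with NNCF: `nncf.compress_weights` takes an OpenVINO model and a compression mode and returns a model with INT8 asymmetric weight quantization. The IR file paths below are hypothetical placeholders, not paths shipped with this model.

```python
import openvino as ov
import nncf

core = ov.Core()
# Hypothetical path to the original (e.g. FP16) OpenVINO IR
model = core.read_model("model.xml")

# Compress weights to INT8 asymmetric, matching the config above
compressed_model = nncf.compress_weights(
    model,
    mode=nncf.CompressWeightsMode.INT8_ASYM,
)

# Save the compressed IR next to the original
ov.save_model(compressed_model, "model_int8.xml")
```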