# Reference

`optimum.intel.neural_compressor` is deprecated and will be removed in the next major release.

## INCQuantizer[[optimum.intel.INCQuantizer]]

#### optimum.intel.INCQuantizer[[optimum.intel.INCQuantizer]]

[Source](https://github.com/huggingface/optimum-intel/blob/main/optimum/intel/neural_compressor/quantization.py#L78)

Handle the Neural Compressor quantization process.

#### get_calibration_dataset[[optimum.intel.INCQuantizer.get_calibration_dataset]]

[Source](https://github.com/huggingface/optimum-intel/blob/main/optimum/intel/neural_compressor/quantization.py#L247)

Create the calibration `datasets.Dataset` to use for the post-training static quantization calibration step.

**Parameters:**

dataset_name (`str`) : The dataset repository name on the Hugging Face Hub or path to a local directory containing data files in generic formats and optionally a dataset script, if it requires some code to read the data files.

num_samples (`int`, defaults to 100) : The maximum number of samples composing the calibration dataset.

dataset_config_name (`str`, *optional*) : The name of the dataset configuration.

dataset_split (`str`, defaults to `"train"`) : Which split of the dataset to use to perform the calibration step.

preprocess_function (`Callable`, *optional*) : Processing function to apply to each example after loading the dataset.

preprocess_batch (`bool`, defaults to `True`) : Whether the `preprocess_function` should be batched.

use_auth_token (Optional[Union[bool, str]], defaults to `None`) : Deprecated. Please use `token` instead.

token (Optional[Union[bool, str]], defaults to `None`) : The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated when running `huggingface-cli login` (stored in `~/.huggingface`).

**Returns:**

The calibration `datasets.Dataset` to use for the post-training static quantization calibration step.

#### quantize[[optimum.intel.INCQuantizer.quantize]]

[Source](https://github.com/huggingface/optimum-intel/blob/main/optimum/intel/neural_compressor/quantization.py#L120)

Quantize a model given the optimization specifications defined in `quantization_config`.

**Parameters:**

quantization_config (`PostTrainingQuantConfig`) : The configuration containing the parameters related to quantization.

save_directory (`Union[str, Path]`) : The directory where the quantized model should be saved.

calibration_dataset (`datasets.Dataset`, defaults to `None`) : The dataset to use for the calibration step, needed for post-training static quantization.

batch_size (`int`, defaults to 8) : The number of calibration samples to load per batch.

data_collator (`DataCollator`, defaults to `None`) : The function to use to form a batch from a list of elements of the calibration dataset.

remove_unused_columns (`bool`, defaults to `True`) : Whether or not to remove the columns unused by the model forward method.

## INCTrainer[[optimum.intel.INCTrainer]]

#### optimum.intel.INCTrainer[[optimum.intel.INCTrainer]]

[Source](https://github.com/huggingface/optimum-intel/blob/main/optimum/intel/neural_compressor/trainer.py#L109)

INCTrainer enables Intel Neural Compressor quantization-aware training, pruning and distillation.

#### compute_distillation_loss[[optimum.intel.INCTrainer.compute_distillation_loss]]

[Source](https://github.com/huggingface/optimum-intel/blob/main/optimum/intel/neural_compressor/trainer.py#L843)

How the distillation loss is computed given the student and teacher outputs.
#### compute_loss[[optimum.intel.INCTrainer.compute_loss]]

[Source](https://github.com/huggingface/optimum-intel/blob/main/optimum/intel/neural_compressor/trainer.py#L767)

How the loss is computed by the Trainer. By default, all models return the loss in the first element.
#### save_model[[optimum.intel.INCTrainer.save_model]]

[Source](https://github.com/huggingface/optimum-intel/blob/main/optimum/intel/neural_compressor/trainer.py#L676)

Will save the model, so you can reload it using `from_pretrained()`.
Will only save from the main process.

## INCModel[[optimum.intel.INCModel]]

#### optimum.intel.INCModel[[optimum.intel.INCModel]]

[Source](https://github.com/huggingface/optimum-intel/blob/main/optimum/intel/neural_compressor/modeling_base.py#L71)

## INCModelForSequenceClassification[[optimum.intel.INCModelForSequenceClassification]]

#### optimum.intel.INCModelForSequenceClassification[[optimum.intel.INCModelForSequenceClassification]]

[Source](https://github.com/huggingface/optimum-intel/blob/main/optimum/intel/neural_compressor/modeling_base.py#L396)

## INCModelForQuestionAnswering[[optimum.intel.INCModelForQuestionAnswering]]

#### optimum.intel.INCModelForQuestionAnswering[[optimum.intel.INCModelForQuestionAnswering]]

[Source](https://github.com/huggingface/optimum-intel/blob/main/optimum/intel/neural_compressor/modeling_base.py#L391)

## INCModelForTokenClassification[[optimum.intel.INCModelForTokenClassification]]

#### optimum.intel.INCModelForTokenClassification[[optimum.intel.INCModelForTokenClassification]]

[Source](https://github.com/huggingface/optimum-intel/blob/main/optimum/intel/neural_compressor/modeling_base.py#L401)

## INCModelForMultipleChoice[[optimum.intel.INCModelForMultipleChoice]]

#### optimum.intel.INCModelForMultipleChoice[[optimum.intel.INCModelForMultipleChoice]]

[Source](https://github.com/huggingface/optimum-intel/blob/main/optimum/intel/neural_compressor/modeling_base.py#L406)

## INCModelForMaskedLM[[optimum.intel.INCModelForMaskedLM]]

#### optimum.intel.INCModelForMaskedLM[[optimum.intel.INCModelForMaskedLM]]

[Source](https://github.com/huggingface/optimum-intel/blob/main/optimum/intel/neural_compressor/modeling_base.py#L416)

## INCModelForCausalLM[[optimum.intel.INCModelForCausalLM]]

#### optimum.intel.INCModelForCausalLM[[optimum.intel.INCModelForCausalLM]]

[Source](https://github.com/huggingface/optimum-intel/blob/main/optimum/intel/neural_compressor/modeling_base.py#L426)

## INCModelForSeq2SeqLM[[optimum.intel.INCModelForSeq2SeqLM]]

#### optimum.intel.INCModelForSeq2SeqLM[[optimum.intel.INCModelForSeq2SeqLM]]

[Source](https://github.com/huggingface/optimum-intel/blob/main/optimum/intel/neural_compressor/modeling_base.py#L411)