AutoencoderKLHunyuanImage

The 2D variational autoencoder (VAE) model with KL loss used in HunyuanImage 2.1.

The model can be loaded with the following code snippet.

import torch
from diffusers import AutoencoderKLHunyuanImage

vae = AutoencoderKLHunyuanImage.from_pretrained("hunyuanvideo-community/HunyuanImage-2.1-Diffusers", subfolder="vae", torch_dtype=torch.bfloat16)
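
Once loaded, the model can be used like other AutoencoderKL variants. The following sketch continues from the snippet above and assumes the usual encode()/decode() API (encode() returning an object with a latent_dist); the input shape and device are illustrative.

vae = vae.to("cuda")

# Illustrative input: one 1024x1024 RGB image scaled to [-1, 1]
image = torch.randn(1, 3, 1024, 1024, dtype=torch.bfloat16, device="cuda")

with torch.no_grad():
    # encode() is assumed to expose a latent_dist, as in other AutoencoderKL variants
    latents = vae.encode(image).latent_dist.sample()
    # decode() is assumed to return a DecoderOutput; .sample holds the reconstruction
    reconstruction = vae.decode(latents).sample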

AutoencoderKLHunyuanImage

class diffusers.AutoencoderKLHunyuanImage

( in_channels: int out_channels: int latent_channels: int block_out_channels: typing.Tuple[int, ...] layers_per_block: int spatial_compression_ratio: int sample_size: int scaling_factor: float = None downsample_match_channel: bool = True upsample_match_channel: bool = True )

A VAE model for 2D images with spatial tiling support.

This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented for all models (such as downloading or saving).

wrapper

( *args **kwargs )

enable_tiling

( tile_sample_min_size: typing.Optional[int] = None tile_overlap_factor: typing.Optional[float] = None )

Parameters

  • tile_sample_min_size (int, optional) — The minimum size required for a sample to be separated into tiles across the spatial dimension.
  • tile_overlap_factor (float, optional) — The overlap factor between adjacent tiles across the spatial dimensions.

Enable spatially tiled VAE encoding and decoding. When this option is enabled, the VAE splits the input tensor into tiles and computes encoding and decoding in several steps. This is useful for saving a large amount of memory and for processing larger images.
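
For example, tiling can be turned on before running a large image through the model; the values below are illustrative, and both arguments can be omitted to fall back to the defaults.

# Split samples larger than 1024 px into overlapping tiles during encode/decode (values illustrative)
vae.enable_tiling(tile_sample_min_size=1024, tile_overlap_factor=0.25)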

forward

( sample: Tensor sample_posterior: bool = False return_dict: bool = True generator: typing.Optional[torch._C.Generator] = None )

Parameters

  • sample (torch.Tensor) — Input sample.
  • sample_posterior (bool, optional, defaults to False) — Whether to sample from the posterior.
  • return_dict (bool, optional, defaults to True) — Whether or not to return a DecoderOutput instead of a plain tuple.
  • generator (torch.Generator, optional) — Generator used to sample from the posterior.
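
A direct forward pass runs encode and decode in one call. The sketch below is illustrative, reuses the image tensor from the earlier sketch, and assumes the output is a DecoderOutput when return_dict=True; the generator only matters when sample_posterior=True.

generator = torch.Generator(device="cuda").manual_seed(0)
with torch.no_grad():
    output = vae(sample=image, sample_posterior=True, generator=generator)
reconstruction = output.sample  # a plain tuple is returned instead when return_dict=False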

tiled_decode

( z: Tensor return_dict: bool = True ) → DecoderOutput or tuple

Parameters

  • z (torch.Tensor) — Latent tensor of shape (B, C, H, W).
  • return_dict (bool, optional, defaults to True) — Whether or not to return a DecoderOutput instead of a plain tuple.

Returns

DecoderOutput or tuple

If return_dict is True, a DecoderOutput is returned, otherwise a plain tuple is returned.

Decode latent using spatial tiling strategy.
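
An illustrative call, reusing the latents from the earlier sketch and assuming tiling has been enabled as shown above:

with torch.no_grad():
    decoded = vae.tiled_decode(latents, return_dict=True).sample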

tiled_encode

( x: Tensor ) → torch.Tensor

Parameters

  • x (torch.Tensor) — Input tensor of shape (B, C, H, W).

Returns

torch.Tensor

The latent representation of the encoded images.

Encode input using spatial tiling strategy.
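
An illustrative call on a large input image, reusing the tensor from the earlier sketch:

with torch.no_grad():
    latents = vae.tiled_encode(image)  # returns the latent tensor directly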

DecoderOutput

class diffusers.models.autoencoders.vae.DecoderOutput

( sample: Tensor commit_loss: typing.Optional[torch.FloatTensor] = None )

Parameters

  • sample (torch.Tensor of shape (batch_size, num_channels, height, width)) — The decoded output sample from the last layer of the model.

Output of decoding method.
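
As a usage note, the decoded image is available on the sample attribute when return_dict=True; with return_dict=False the same value comes back as the first element of a plain tuple. A minimal sketch, assuming decode() accepts return_dict like the other methods documented here:

output = vae.decode(latents)                                # DecoderOutput when return_dict=True (default)
reconstruction = output.sample
reconstruction = vae.decode(latents, return_dict=False)[0]  # same tensor via a plain tuple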
