mmagic.models.editors.animatediff.animatediff_utils

Module Contents

Functions

shave_segments(path[, n_shave_prefix_segments])

Removes segments.

renew_resnet_paths(old_list[, n_shave_prefix_segments])

Updates paths inside resnets to the new naming scheme (local renaming)

renew_vae_resnet_paths(old_list[, n_shave_prefix_segments])

Updates paths inside resnets to the new naming scheme (local renaming)

renew_attention_paths(old_list[, n_shave_prefix_segments])

Updates paths inside attentions to the new naming scheme (local renaming)

renew_vae_attention_paths(old_list[, n_shave_prefix_segments])

Updates paths inside attentions to the new naming scheme (local renaming)

assign_to_checkpoint(paths, checkpoint, old_checkpoint)

Performs the final conversion step: takes locally converted weights and applies a global renaming to them.

conv_attn_to_linear(checkpoint)

create_unet_diffusers_config(original_config, image_size)

Creates a config for the diffusers based on the config of the LDM model.

create_vae_diffusers_config(original_config, image_size)

Creates a config for the diffusers based on the config of the LDM model.

convert_ldm_unet_checkpoint(checkpoint, config[, path, extract_ema, controlnet])

Takes a state dict and a config, and returns a converted checkpoint.

convert_ldm_vae_checkpoint(checkpoint, config)

convert_ldm_clip_checkpoint(checkpoint)

convert_paint_by_example_checkpoint(checkpoint)

convert_open_clip_checkpoint(checkpoint)

stable_unclip_image_encoder(original_config)

Returns the image processor and clip image encoder for the img2img unclip pipeline.

stable_unclip_image_noising_components(original_config)

Returns the noising components for the img2img and txt2img unclip pipelines.

save_videos_grid(videos, path[, rescale, n_rows, fps])

Attributes

textenc_conversion_lst

textenc_conversion_map

textenc_transformer_conversion_lst

protected

textenc_pattern

mmagic.models.editors.animatediff.animatediff_utils.shave_segments(path, n_shave_prefix_segments=1)[source]

Removes segments.

Positive values shave the first segments, negative shave the last segments.
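The behavior on a dot-separated parameter path can be sketched as follows. This is a minimal reimplementation for illustration, not necessarily the module's exact code:

```python
def shave_segments(path, n_shave_prefix_segments=1):
    """Drop segments from a dot-separated parameter path."""
    if n_shave_prefix_segments >= 0:
        # Positive: remove the first n segments.
        return ".".join(path.split(".")[n_shave_prefix_segments:])
    # Negative: remove the last |n| segments.
    return ".".join(path.split(".")[:n_shave_prefix_segments])

print(shave_segments("model.diffusion_model.input_blocks.0.0.weight", 2))
# → input_blocks.0.0.weight
```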

mmagic.models.editors.animatediff.animatediff_utils.renew_resnet_paths(old_list, n_shave_prefix_segments=0)[source]

Updates paths inside resnets to the new naming scheme (local renaming)
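The "local renaming" can be sketched as a string-replacement table applied to each parameter name, returning old/new pairs for a later global pass. The rename subset below is illustrative and may not match the module's full table:

```python
def renew_resnet_paths(old_list, n_shave_prefix_segments=0):
    """Map LDM resnet parameter names to diffusers-style names (sketch)."""
    # Illustrative subset of the rename table.
    renames = [
        ("in_layers.0", "norm1"),
        ("in_layers.2", "conv1"),
        ("out_layers.0", "norm2"),
        ("out_layers.3", "conv2"),
        ("emb_layers.1", "time_emb_proj"),
        ("skip_connection", "conv_shortcut"),
    ]
    mapping = []
    for old_item in old_list:
        new_item = old_item
        for src, dst in renames:
            new_item = new_item.replace(src, dst)
        # Optionally drop leading path segments, as shave_segments does.
        if n_shave_prefix_segments > 0:
            new_item = ".".join(new_item.split(".")[n_shave_prefix_segments:])
        mapping.append({"old": old_item, "new": new_item})
    return mapping

print(renew_resnet_paths(["in_layers.0.weight"]))
# → [{'old': 'in_layers.0.weight', 'new': 'norm1.weight'}]
```

The VAE and attention variants follow the same pattern with their own rename tables.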

mmagic.models.editors.animatediff.animatediff_utils.renew_vae_resnet_paths(old_list, n_shave_prefix_segments=0)[source]

Updates paths inside resnets to the new naming scheme (local renaming)

mmagic.models.editors.animatediff.animatediff_utils.renew_attention_paths(old_list, n_shave_prefix_segments=0)[source]

Updates paths inside attentions to the new naming scheme (local renaming)

mmagic.models.editors.animatediff.animatediff_utils.renew_vae_attention_paths(old_list, n_shave_prefix_segments=0)[source]

Updates paths inside attentions to the new naming scheme (local renaming)

mmagic.models.editors.animatediff.animatediff_utils.assign_to_checkpoint(paths, checkpoint, old_checkpoint, attention_paths_to_split=None, additional_replacements=None, config=None)[source]

This performs the final conversion step: it takes locally converted weights and applies a global renaming to them. It also splits attention layers and takes into account any additional replacements that may arise.

Assigns the weights to the new checkpoint.
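The core of this step can be sketched as follows: walk the old/new path pairs produced by the `renew_*` helpers, apply any extra global renames, and copy the weights across. This sketch omits the attention-splitting and config handling of the real function:

```python
def assign_to_checkpoint(paths, checkpoint, old_checkpoint,
                         additional_replacements=None):
    """Copy weights from old_checkpoint into checkpoint under renamed keys."""
    for path in paths:
        new_path = path["new"]
        # Global renames applied on top of the local ones, e.g. mapping
        # "middle_block.0" to "mid_block.resnets.0" (illustrative).
        for rep in (additional_replacements or []):
            new_path = new_path.replace(rep["old"], rep["new"])
        checkpoint[new_path] = old_checkpoint[path["old"]]

old_ckpt = {"in_layers.0.weight": [1.0, 2.0]}
new_ckpt = {}
assign_to_checkpoint([{"old": "in_layers.0.weight", "new": "norm1.weight"}],
                     new_ckpt, old_ckpt)
print(new_ckpt)  # → {'norm1.weight': [1.0, 2.0]}
```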

mmagic.models.editors.animatediff.animatediff_utils.conv_attn_to_linear(checkpoint)[source]
mmagic.models.editors.animatediff.animatediff_utils.create_unet_diffusers_config(original_config, image_size: int, controlnet=False)[source]

Creates a config for the diffusers based on the config of the LDM model.
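The shape of this conversion can be sketched with plain dicts standing in for the OmegaConf LDM config. The field names and the subset of derived keys below are assumptions for illustration; the real function produces a complete diffusers UNet config:

```python
def create_unet_diffusers_config(original_config, image_size):
    """Derive a diffusers-style UNet config from an LDM config (sketch)."""
    unet = original_config["model"]["params"]["unet_config"]["params"]
    return {
        # Latent resolution: pixel size divided by the VAE downscale factor (8).
        "sample_size": image_size // 8,
        "in_channels": unet["in_channels"],
        # Per-stage widths: base width times each multiplier.
        "block_out_channels": [unet["model_channels"] * m
                               for m in unet["channel_mult"]],
        "layers_per_block": unet["num_res_blocks"],
    }

cfg = create_unet_diffusers_config(
    {"model": {"params": {"unet_config": {"params": {
        "in_channels": 4, "model_channels": 320,
        "channel_mult": [1, 2, 4, 4], "num_res_blocks": 2}}}}},
    image_size=512)
print(cfg["block_out_channels"])  # → [320, 640, 1280, 1280]
```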

mmagic.models.editors.animatediff.animatediff_utils.create_vae_diffusers_config(original_config, image_size: int)[source]

Creates a config for the diffusers based on the config of the LDM model.

mmagic.models.editors.animatediff.animatediff_utils.convert_ldm_unet_checkpoint(checkpoint, config, path=None, extract_ema=False, controlnet=False)[source]

Takes a state dict and a config, and returns a converted checkpoint.
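In LDM checkpoints the UNet weights live under the `model.diffusion_model.` prefix, so the first step of the conversion is to isolate them. A sketch, with `extract_unet_state_dict` as a hypothetical helper name:

```python
def extract_unet_state_dict(checkpoint):
    """Isolate the UNet weights from a full LDM state dict (sketch)."""
    prefix = "model.diffusion_model."
    return {k[len(prefix):]: v for k, v in checkpoint.items()
            if k.startswith(prefix)}

ckpt = {"model.diffusion_model.time_embed.0.weight": 1,
        "first_stage_model.decoder.conv_in.weight": 2}
print(extract_unet_state_dict(ckpt))
# → {'time_embed.0.weight': 1}
```

The renaming helpers above are then applied block by block to the extracted keys.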

mmagic.models.editors.animatediff.animatediff_utils.convert_ldm_vae_checkpoint(checkpoint, config)[source]
mmagic.models.editors.animatediff.animatediff_utils.convert_ldm_clip_checkpoint(checkpoint)[source]
mmagic.models.editors.animatediff.animatediff_utils.textenc_conversion_lst = [('cond_stage_model.model.positional_embedding',...[source]
mmagic.models.editors.animatediff.animatediff_utils.textenc_conversion_map[source]
mmagic.models.editors.animatediff.animatediff_utils.textenc_transformer_conversion_lst = [('resblocks.', 'text_model.encoder.layers.'), ('ln_1', 'layer_norm1'), ('ln_2', 'layer_norm2'),...[source]
mmagic.models.editors.animatediff.animatediff_utils.protected[source]
mmagic.models.editors.animatediff.animatediff_utils.textenc_pattern[source]
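The `protected` map and `textenc_pattern` attributes work together to rename text-encoder keys in a single regex pass. A sketch using a subset of the conversion table shown above (the exact construction in the module may differ):

```python
import re

# Subset of textenc_transformer_conversion_lst, as listed in this module.
conversion_lst = [
    ("resblocks.", "text_model.encoder.layers."),
    ("ln_1", "layer_norm1"),
    ("ln_2", "layer_norm2"),
]
# Escape each "old" name and compile them into one alternation pattern,
# so every rename can be applied in a single regex substitution.
protected = {re.escape(old): new for old, new in conversion_lst}
textenc_pattern = re.compile("|".join(protected.keys()))

key = "resblocks.0.ln_1.weight"
new_key = textenc_pattern.sub(lambda m: protected[re.escape(m.group(0))], key)
print(new_key)  # → text_model.encoder.layers.0.layer_norm1.weight
```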
mmagic.models.editors.animatediff.animatediff_utils.convert_paint_by_example_checkpoint(checkpoint)[source]
mmagic.models.editors.animatediff.animatediff_utils.convert_open_clip_checkpoint(checkpoint)[source]
mmagic.models.editors.animatediff.animatediff_utils.stable_unclip_image_encoder(original_config)[source]

Returns the image processor and clip image encoder for the img2img unclip pipeline.

We currently know of two types of stable unclip models which separately use the clip and the openclip image encoders.

mmagic.models.editors.animatediff.animatediff_utils.stable_unclip_image_noising_components(original_config, clip_stats_path: Optional[str] = None, device: Optional[str] = None)[source]

Returns the noising components for the img2img and txt2img unclip pipelines.

Converts the stability noise augmentor into:

1. a StableUnCLIPImageNormalizer for holding the CLIP stats
2. a DDPMScheduler for holding the noise schedule

If the noise augmentor config specifies a clip stats path, the clip_stats_path must be provided.

mmagic.models.editors.animatediff.animatediff_utils.save_videos_grid(videos: torch.Tensor, path: str, rescale=False, n_rows=6, fps=8)[source]