mmagic.models.editors.animatediff.motion_module

Module Contents

Classes

TemporalTransformer3DModelOutput

Output of TemporalTransformer3DModel.

VanillaTemporalModule

Module which uses a transformer to handle 3D motion.

TemporalTransformer3DModel

Module which implements a 3D Transformer.

TemporalTransformerBlock

Module which is a component of the Temporal 3D Transformer.

PositionalEncoding

An implementation of positional encoding.

VersatileAttention

An implementation of versatile attention.

FeedForward

A feed-forward layer.

GELU

GELU activation function.

Functions

zero_module(module)

Zero out the parameters of a module and return it.

get_motion_module(in_channels, motion_module_type, ...)

Get motion module.

Attributes

xformers

mmagic.models.editors.animatediff.motion_module.zero_module(module)[source]

Zero out the parameters of a module and return it.
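
A minimal sketch of what this helper typically looks like (the zeroed module is commonly used to initialize the motion module's output projection so the residual branch starts as a no-op):

    import torch.nn as nn

    def zero_module(module: nn.Module) -> nn.Module:
        """Zero out every parameter of ``module`` in place and return it."""
        for p in module.parameters():
            nn.init.zeros_(p)
        return module

    # A zero-initialized projection contributes nothing at the start of
    # training, leaving the pretrained image backbone unchanged.
    proj = zero_module(nn.Linear(320, 320))
    assert all((p == 0).all() for p in proj.parameters())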

class mmagic.models.editors.animatediff.motion_module.TemporalTransformer3DModelOutput[source]

Bases: diffusers.utils.BaseOutput

Output of TemporalTransformer3DModel.

sample: torch.FloatTensor[source]
mmagic.models.editors.animatediff.motion_module.xformers[source]
mmagic.models.editors.animatediff.motion_module.get_motion_module(in_channels, motion_module_type: str, motion_module_kwargs: dict)[source]

Get motion module.
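
A usage sketch, assuming the 'Vanilla' module type dispatches to VanillaTemporalModule as in the reference AnimateDiff code; the kwargs shown map onto that constructor's documented arguments:

    from mmagic.models.editors.animatediff.motion_module import get_motion_module

    motion_module = get_motion_module(
        in_channels=320,
        motion_module_type='Vanilla',  # assumed to select VanillaTemporalModule
        motion_module_kwargs=dict(
            num_attention_heads=8,
            temporal_position_encoding=True,
            temporal_position_encoding_max_len=24,
        ),
    )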

class mmagic.models.editors.animatediff.motion_module.VanillaTemporalModule(in_channels, num_attention_heads=8, num_transformer_block=2, attention_block_types=('Temporal_Self', 'Temporal_Self'), cross_frame_attention_mode=None, temporal_position_encoding=False, temporal_position_encoding_max_len=24, temporal_attention_dim_div=1, zero_initialize=True)[source]

Bases: torch.nn.Module

Module which uses a transformer to handle 3D motion.

forward(input_tensor, temb, encoder_hidden_states, attention_mask=None, anchor_frame_idx=None)[source]

Forward pass with the input sample.
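
A minimal forward sketch, assuming 5D video latents of shape (batch, channels, frames, height, width) as in AnimateDiff, and that temb and encoder_hidden_states may be None when only temporal self-attention is configured:

    import torch

    from mmagic.models.editors.animatediff.motion_module import VanillaTemporalModule

    module = VanillaTemporalModule(in_channels=320)
    video_latents = torch.randn(1, 320, 16, 32, 32)  # (b, c, f, h, w)
    out = module(video_latents, temb=None, encoder_hidden_states=None)
    assert out.shape == video_latents.shape  # shape-preserving residual module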

class mmagic.models.editors.animatediff.motion_module.TemporalTransformer3DModel(in_channels, num_attention_heads, attention_head_dim, num_layers, attention_block_types=('Temporal_Self', 'Temporal_Self'), dropout=0.0, norm_num_groups=32, cross_attention_dim=768, activation_fn='geglu', attention_bias=False, upcast_attention=False, cross_frame_attention_mode=None, temporal_position_encoding=False, temporal_position_encoding_max_len=24)[source]

Bases: torch.nn.Module

Module which implements a 3D Transformer.

forward(hidden_states, encoder_hidden_states=None, attention_mask=None)[source]

Forward pass with hidden_states, encoder_hidden_states, and attention_mask.
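
The 3D transformer follows the usual AnimateDiff pattern of folding frames into the batch dimension for the 2D normalization and projection layers, then restoring the frame axis afterwards. A reshape sketch (einops assumed available):

    import torch
    from einops import rearrange

    b, c, f, h, w = 2, 320, 16, 32, 32
    hidden_states = torch.randn(b, c, f, h, w)

    # Fold frames into the batch so each frame is an ordinary 2D feature map ...
    as_2d = rearrange(hidden_states, 'b c f h w -> (b f) c h w')
    # ... and restore the video layout after the transformer blocks.
    restored = rearrange(as_2d, '(b f) c h w -> b c f h w', f=f)
    assert torch.equal(hidden_states, restored)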

class mmagic.models.editors.animatediff.motion_module.TemporalTransformerBlock(dim, num_attention_heads, attention_head_dim, attention_block_types=('Temporal_Self', 'Temporal_Self'), dropout=0.0, norm_num_groups=32, cross_attention_dim=768, activation_fn='geglu', attention_bias=False, upcast_attention=False, cross_frame_attention_mode=None, temporal_position_encoding=False, temporal_position_encoding_max_len=24)[source]

Bases: torch.nn.Module

Module which is a component of the Temporal 3D Transformer.

forward(hidden_states, encoder_hidden_states=None, attention_mask=None, video_length=None)[source]

Forward pass with hidden_states, encoder_hidden_states, and attention_mask.
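
A hypothetical skeleton of the block layout, assuming the pre-norm residual structure of the reference implementation: each configured attention block is applied with a residual connection, followed by a feed-forward layer with its own residual connection. Names below are illustrative only:

    import torch.nn as nn

    class TemporalBlockSketch(nn.Module):
        """Illustrative skeleton, not the actual implementation."""

        def __init__(self, dim, attention_blocks, feed_forward):
            super().__init__()
            self.norms = nn.ModuleList(nn.LayerNorm(dim) for _ in attention_blocks)
            self.attention_blocks = nn.ModuleList(attention_blocks)
            self.ff_norm = nn.LayerNorm(dim)
            self.ff = feed_forward

        def forward(self, hidden_states, video_length=None):
            for norm, attn in zip(self.norms, self.attention_blocks):
                hidden_states = attn(norm(hidden_states),
                                     video_length=video_length) + hidden_states
            return self.ff(self.ff_norm(hidden_states)) + hidden_states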

class mmagic.models.editors.animatediff.motion_module.PositionalEncoding(d_model, dropout=0.0, max_len=24)[source]

Bases: torch.nn.Module

An implementation of positional encoding.

forward(x)[source]

Forward function.
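
The class name points at the standard sinusoidal encoding; a minimal sketch of that formulation (dropout placement and buffer layout in the actual source may differ; d_model is assumed even):

    import math

    import torch
    import torch.nn as nn

    class SinusoidalPositionalEncoding(nn.Module):
        """Sketch: PE[pos, 2i] = sin(pos / 10000^(2i/d)),
        PE[pos, 2i+1] = cos(pos / 10000^(2i/d))."""

        def __init__(self, d_model: int, dropout: float = 0.0, max_len: int = 24):
            super().__init__()
            self.dropout = nn.Dropout(dropout)
            position = torch.arange(max_len).unsqueeze(1)
            div_term = torch.exp(
                torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
            pe = torch.zeros(1, max_len, d_model)
            pe[0, :, 0::2] = torch.sin(position * div_term)
            pe[0, :, 1::2] = torch.cos(position * div_term)
            self.register_buffer('pe', pe)

        def forward(self, x):
            # x: (batch, seq_len, d_model); add the encoding, then dropout.
            x = x + self.pe[:, :x.size(1)]
            return self.dropout(x)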

class mmagic.models.editors.animatediff.motion_module.VersatileAttention(attention_mode=None, cross_frame_attention_mode=None, temporal_position_encoding=False, temporal_position_encoding_max_len=24, *args, **kwargs)[source]

Bases: mmagic.models.editors.animatediff.attention_3d.CrossAttention

An implementation of versatile attention.

extra_repr()[source]

Return module information.

reshape_heads_to_batch_dim(tensor)[source]

Reshape the heads dimension into the batch dimension.

reshape_batch_dim_to_heads(tensor)[source]

Reshape the batch dimension back into the heads dimension.
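
Both reshapes follow the standard multi-head attention convention of moving the heads axis in and out of the batch axis; a sketch showing they are exact inverses:

    import torch

    batch, seq_len, heads, head_dim = 2, 16, 8, 40
    x = torch.randn(batch, seq_len, heads * head_dim)

    # heads -> batch dim: (b, n, h*d) -> (b*h, n, d)
    y = x.reshape(batch, seq_len, heads, head_dim).permute(0, 2, 1, 3)
    y = y.reshape(batch * heads, seq_len, head_dim)

    # batch dim -> heads: the inverse mapping, (b*h, n, d) -> (b, n, h*d)
    z = y.reshape(batch, heads, seq_len, head_dim).permute(0, 2, 1, 3)
    z = z.reshape(batch, seq_len, heads * head_dim)
    assert torch.equal(x, z)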

_memory_efficient_attention_xformers(query, key, value, attention_mask)[source]

Use xformers memory-efficient attention to reduce memory usage.

forward(hidden_states, encoder_hidden_states=None, attention_mask=None, video_length=None)[source]

Forward pass with hidden_states, encoder_hidden_states, and attention_mask.
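
In temporal mode, tokens are regrouped before attention so that the sequence axis runs over frames at each spatial location, rather than over spatial positions within a frame; a reshape sketch matching that pattern (einops assumed):

    import torch
    from einops import rearrange

    bf, hw, c, f = 32, 1024, 320, 16  # (batch*frames, height*width, channels)
    hidden_states = torch.randn(bf, hw, c)

    # Regroup so attention mixes information across the f frames:
    temporal = rearrange(hidden_states, '(b f) d c -> (b d) f c', f=f)
    assert temporal.shape == (bf // f * hw, f, c)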

class mmagic.models.editors.animatediff.motion_module.FeedForward(dim: int, dim_out: Optional[int] = None, mult: int = 4, dropout: float = 0.0, activation_fn: str = 'geglu')[source]

Bases: torch.nn.Module

A feed-forward layer.

Parameters
  • dim (int) – The number of channels in the input.

  • dim_out (int, optional) – The number of channels in the output. If not given, defaults to dim.

  • mult (int, optional, defaults to 4) – The multiplier to use for the hidden dimension.

  • dropout (float, optional, defaults to 0.0) – The dropout probability to use.

  • activation_fn (str, optional, defaults to "geglu") – Activation function to be used in feed-forward.

forward(hidden_states)[source]
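
A usage sketch: dim_out defaults to dim and the hidden width is dim * mult, so with the defaults the layer preserves the token shape:

    import torch

    from mmagic.models.editors.animatediff.motion_module import FeedForward

    ff = FeedForward(dim=320, mult=4, dropout=0.0, activation_fn='geglu')
    tokens = torch.randn(2, 16, 320)  # (batch, seq_len, dim)
    out = ff(tokens)
    assert out.shape == tokens.shape
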
class mmagic.models.editors.animatediff.motion_module.GELU(dim_in: int, dim_out: int)[source]

Bases: torch.nn.Module

GELU activation function.

gelu(gate)[source]
forward(hidden_states)[source]
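
A hypothetical mirror of this layer, assuming it is the usual projection-plus-GELU used inside FeedForward; names are illustrative only:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GELUSketch(nn.Module):
        """Illustrative only: project from dim_in to dim_out, then apply GELU."""

        def __init__(self, dim_in: int, dim_out: int):
            super().__init__()
            self.proj = nn.Linear(dim_in, dim_out)

        def gelu(self, gate: torch.Tensor) -> torch.Tensor:
            return F.gelu(gate)

        def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
            return self.gelu(self.proj(hidden_states))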