mmagic.models.editors.animatediff.motion_module

Module Contents

Classes

TemporalTransformer3DModelOutput

Output of TemporalTransformer3DModel.

VanillaTemporalModule

Module which uses a transformer to handle 3D motion.

TemporalTransformer3DModel

Module which implements a 3D Transformer.

TemporalTransformerBlock

Module which is a component of the temporal 3D Transformer.

PositionalEncoding

An implementation of positional encoding.

VersatileAttention

An implementation of versatile attention.

FeedForward

A feed-forward layer.

GELU

GELU activation function.

Functions

zero_module(module)

Zero out the parameters of a module and return it.

get_motion_module(in_channels, motion_module_type, ...)

Get motion module.

Attributes

xformers

mmagic.models.editors.animatediff.motion_module.zero_module(module)[source]

Zero out the parameters of a module and return it.
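
A minimal usage sketch (the layer and width below are illustrative). Zero-initializing the final projection of a newly added branch is the usual reason to call this helper, so that the branch starts out as a no-op residual:

    import torch.nn as nn

    from mmagic.models.editors.animatediff.motion_module import zero_module

    # Zero every parameter of the projection in place and get the module back.
    proj = zero_module(nn.Linear(320, 320))
    for p in proj.parameters():
        assert p.detach().abs().sum() == 0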

class mmagic.models.editors.animatediff.motion_module.TemporalTransformer3DModelOutput[source]

Bases: diffusers.utils.BaseOutput

Output of TemporalTransformer3DModel.

sample: torch.FloatTensor[source]
mmagic.models.editors.animatediff.motion_module.xformers[source]
mmagic.models.editors.animatediff.motion_module.get_motion_module(in_channels, motion_module_type: str, motion_module_kwargs: dict)[source]

Get motion module.
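
A usage sketch, assuming 'Vanilla' is the supported motion_module_type and that motion_module_kwargs is forwarded to the VanillaTemporalModule constructor, as in the reference AnimateDiff implementation:

    from mmagic.models.editors.animatediff.motion_module import get_motion_module

    # Assumption: 'Vanilla' selects VanillaTemporalModule; the kwargs below
    # mirror that class's constructor signature documented on this page.
    motion_module = get_motion_module(
        in_channels=320,
        motion_module_type='Vanilla',
        motion_module_kwargs=dict(
            num_attention_heads=8,
            num_transformer_block=1,
            temporal_position_encoding=True,
        ),
    )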

class mmagic.models.editors.animatediff.motion_module.VanillaTemporalModule(in_channels, num_attention_heads=8, num_transformer_block=2, attention_block_types=('Temporal_Self', 'Temporal_Self'), cross_frame_attention_mode=None, temporal_position_encoding=False, temporal_position_encoding_max_len=24, temporal_attention_dim_div=1, zero_initialize=True)[source]

Bases: torch.nn.Module

Module which uses a transformer to handle 3D motion.

forward(input_tensor, temb, encoder_hidden_states, attention_mask=None, anchor_frame_idx=None)[source]

Forward with the input sample.
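
A minimal forward sketch, assuming the 5D video-latent layout (batch, channels, frames, height, width) used by the reference AnimateDiff implementation; temb and encoder_hidden_states may be None when all blocks are temporal self-attention:

    import torch

    from mmagic.models.editors.animatediff.motion_module import VanillaTemporalModule

    module = VanillaTemporalModule(in_channels=320)
    # Assumed layout: (batch, channels, frames, height, width).
    video_latents = torch.randn(1, 320, 16, 16, 16)
    out = module(video_latents, temb=None, encoder_hidden_states=None)
    assert out.shape == video_latents.shape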

class mmagic.models.editors.animatediff.motion_module.TemporalTransformer3DModel(in_channels, num_attention_heads, attention_head_dim, num_layers, attention_block_types=('Temporal_Self', 'Temporal_Self'), dropout=0.0, norm_num_groups=32, cross_attention_dim=768, activation_fn='geglu', attention_bias=False, upcast_attention=False, cross_frame_attention_mode=None, temporal_position_encoding=False, temporal_position_encoding_max_len=24)[source]

Bases: torch.nn.Module

Module which implements a 3D Transformer.

forward(hidden_states, encoder_hidden_states=None, attention_mask=None)[source]

Forward with hidden_states, encoder_hidden_states and attention_mask.
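
A sketch one level down, under the same assumed 5D layout; num_attention_heads * attention_head_dim is chosen to match in_channels, which matches how the reference implementation configures it:

    import torch

    from mmagic.models.editors.animatediff.motion_module import TemporalTransformer3DModel

    model = TemporalTransformer3DModel(
        in_channels=320,
        num_attention_heads=8,
        attention_head_dim=40,  # 8 * 40 == in_channels
        num_layers=1,
    )
    latents = torch.randn(1, 320, 16, 16, 16)  # (batch, channels, frames, h, w)
    out = model(latents)
    assert out.shape == latents.shape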

class mmagic.models.editors.animatediff.motion_module.TemporalTransformerBlock(dim, num_attention_heads, attention_head_dim, attention_block_types=('Temporal_Self', 'Temporal_Self'), dropout=0.0, norm_num_groups=32, cross_attention_dim=768, activation_fn='geglu', attention_bias=False, upcast_attention=False, cross_frame_attention_mode=None, temporal_position_encoding=False, temporal_position_encoding_max_len=24)[source]

Bases: torch.nn.Module

Module which is a component of the temporal 3D Transformer.

forward(hidden_states, encoder_hidden_states=None, attention_mask=None, video_length=None)[source]

Forward with hidden_states, encoder_hidden_states and attention_mask.
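
A sketch assuming the token layout the reference implementation uses when calling the block: frames folded into the batch dimension, (batch * frames, spatial_tokens, dim), with video_length carrying the frame count:

    import torch

    from mmagic.models.editors.animatediff.motion_module import TemporalTransformerBlock

    block = TemporalTransformerBlock(
        dim=320, num_attention_heads=8, attention_head_dim=40)
    # Assumed layout: (batch * frames, spatial_tokens, dim).
    tokens = torch.randn(2 * 16, 64, 320)
    out = block(tokens, video_length=16)
    assert out.shape == tokens.shape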

class mmagic.models.editors.animatediff.motion_module.PositionalEncoding(d_model, dropout=0.0, max_len=24)[source]

Bases: torch.nn.Module

An implementation of positional encoding.

forward(x)[source]

Forward function.
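
Presumably the standard sinusoidal encoding over the frame axis. A sketch assuming a (batch, sequence, d_model) input, where sequence must not exceed max_len:

    import torch

    from mmagic.models.editors.animatediff.motion_module import PositionalEncoding

    pos_enc = PositionalEncoding(d_model=320, dropout=0.0, max_len=24)
    frames = torch.randn(4, 16, 320)  # (batch, frames, d_model), assumed layout
    out = pos_enc(frames)             # positional encodings added, then dropout
    assert out.shape == frames.shape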

class mmagic.models.editors.animatediff.motion_module.VersatileAttention(attention_mode=None, cross_frame_attention_mode=None, temporal_position_encoding=False, temporal_position_encoding_max_len=24, *args, **kwargs)[source]

Bases: mmagic.models.editors.animatediff.attention_3d.CrossAttention

An implementation of versatile attention.

extra_repr()[source]

Return module information.

reshape_heads_to_batch_dim(tensor)[source]

Reshape the heads dimension into the batch dimension.

reshape_batch_dim_to_heads(tensor)[source]

Reshape the batch dimension back into the heads dimension.
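
These two helpers are presumably inverses in the usual diffusers style, moving the attention-head dimension into and out of the batch dimension. A pure-tensor sketch of the assumed transformation:

    import torch

    # Assumed semantics:
    # (batch, seq, heads * head_dim) <-> (batch * heads, seq, head_dim)
    batch, seq, heads, head_dim = 2, 16, 8, 40
    x = torch.randn(batch, seq, heads * head_dim)

    # reshape_heads_to_batch_dim
    to_batch = (x.reshape(batch, seq, heads, head_dim)
                 .permute(0, 2, 1, 3)
                 .reshape(batch * heads, seq, head_dim))

    # reshape_batch_dim_to_heads (the inverse)
    back = (to_batch.reshape(batch, heads, seq, head_dim)
                    .permute(0, 2, 1, 3)
                    .reshape(batch, seq, heads * head_dim))
    assert torch.equal(x, back)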

_memory_efficient_attention_xformers(query, key, value, attention_mask)[source]

Use xformers memory-efficient attention to reduce memory usage.

forward(hidden_states, encoder_hidden_states=None, attention_mask=None, video_length=None)[source]

Forward with hidden_states, encoder_hidden_states and attention_mask.
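
A usage sketch, assuming the base CrossAttention accepts diffusers-style keyword arguments (query_dim, cross_attention_dim, heads, dim_head) and that 'Temporal' mode expects (batch * frames, spatial_tokens, dim) inputs plus video_length, as in the reference AnimateDiff implementation:

    import torch

    from mmagic.models.editors.animatediff.motion_module import VersatileAttention

    attn = VersatileAttention(
        attention_mode='Temporal',       # assumed: temporal self-attention mode
        temporal_position_encoding=True,
        query_dim=320,                   # assumed CrossAttention kwargs
        cross_attention_dim=None,        # None -> self-attention
        heads=8,
        dim_head=40,
    )
    tokens = torch.randn(2 * 16, 64, 320)  # (batch * frames, tokens, dim)
    out = attn(tokens, video_length=16)
    assert out.shape == tokens.shape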

class mmagic.models.editors.animatediff.motion_module.FeedForward(dim: int, dim_out: Optional[int] = None, mult: int = 4, dropout: float = 0.0, activation_fn: str = 'geglu')[source]

Bases: torch.nn.Module

A feed-forward layer.

Parameters
  • dim (int) – The number of channels in the input.

  • dim_out (int, optional) – The number of channels in the output. If not given, defaults to dim.

  • mult (int, optional, defaults to 4) – The multiplier to use for the hidden dimension.

  • dropout (float, optional, defaults to 0.0) – The dropout probability to use.

  • activation_fn (str, optional, defaults to "geglu") – Activation function to be used in feed-forward.

forward(hidden_states)[source]
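
A minimal usage sketch; the widths are illustrative:

    import torch

    from mmagic.models.editors.animatediff.motion_module import FeedForward

    ff = FeedForward(dim=320, mult=4, dropout=0.0, activation_fn='geglu')
    hidden_states = torch.randn(2, 16, 320)
    out = ff(hidden_states)  # hidden width is dim * mult internally
    assert out.shape == hidden_states.shape
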
class mmagic.models.editors.animatediff.motion_module.GELU(dim_in: int, dim_out: int)[source]

Bases: torch.nn.Module

GELU activation function.

gelu(gate)[source]
forward(hidden_states)[source]
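
Assuming the diffusers-style GELU block (a linear projection from dim_in to dim_out followed by the GELU nonlinearity):

    import torch

    from mmagic.models.editors.animatediff.motion_module import GELU

    act = GELU(dim_in=320, dim_out=1280)
    out = act(torch.randn(2, 16, 320))  # projected, then GELU applied
    assert out.shape == (2, 16, 1280)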