mmagic.models.editors.animatediff.motion_module¶
Module Contents¶
Classes¶

- TemporalTransformer3DModelOutput – Output of TemporalTransformer3DModel.
- VanillaTemporalModule – Module which uses a transformer to handle 3D motion.
- TemporalTransformer3DModel – Module which implements a 3D Transformer.
- TemporalTransformerBlock – Module which is a component of the Temporal 3D Transformer.
- PositionalEncoding – An implementation of positional encoding.
- VersatileAttention – An implementation of VersatileAttention.
- FeedForward – A feed-forward layer.
- GELU activation function.

Functions¶

- zero_module – Zero out the parameters of a module and return it.
- get_motion_module – Get motion module.
- mmagic.models.editors.animatediff.motion_module.zero_module(module)[source]¶
Zero out the parameters of a module and return it.
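Zero-initialization is a common pattern for residual adapters: AnimateDiff-style motion modules zero their output projection so that, before fine-tuning, the added branch leaves the base model's features unchanged. A minimal sketch of such a helper in PyTorch (not necessarily mmagic's exact implementation):

```python
import torch
from torch import nn


def zero_module(module: nn.Module) -> nn.Module:
    """Zero out the parameters of a module and return it."""
    for p in module.parameters():
        nn.init.zeros_(p)
    return module


# Example: zero-initialize a projection layer so the branch it ends
# contributes nothing at the start of training.
proj = zero_module(nn.Linear(8, 8))
```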
- class mmagic.models.editors.animatediff.motion_module.TemporalTransformer3DModelOutput[source]¶
Bases:
diffusers.utils.BaseOutput
Output of TemporalTransformer3DModel.
- mmagic.models.editors.animatediff.motion_module.get_motion_module(in_channels, motion_module_type: str, motion_module_kwargs: dict)[source]¶
Get motion module.
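A factory like this typically dispatches on the `motion_module_type` string. The sketch below illustrates that pattern only; the stub `VanillaTemporalModule` is a hypothetical stand-in for mmagic's class, included just to keep the example self-contained:

```python
import torch
from torch import nn


class VanillaTemporalModule(nn.Module):
    """Hypothetical stand-in for mmagic's VanillaTemporalModule."""

    def __init__(self, in_channels: int, **kwargs):
        super().__init__()
        self.in_channels = in_channels


def get_motion_module(in_channels, motion_module_type: str,
                      motion_module_kwargs: dict):
    # Dispatch on the requested module type; unknown types are rejected.
    if motion_module_type == 'Vanilla':
        return VanillaTemporalModule(in_channels=in_channels,
                                     **motion_module_kwargs)
    raise ValueError(
        f'Unsupported motion_module_type: {motion_module_type!r}')


module = get_motion_module(320, 'Vanilla', {})
```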
- class mmagic.models.editors.animatediff.motion_module.VanillaTemporalModule(in_channels, num_attention_heads=8, num_transformer_block=2, attention_block_types=('Temporal_Self', 'Temporal_Self'), cross_frame_attention_mode=None, temporal_position_encoding=False, temporal_position_encoding_max_len=24, temporal_attention_dim_div=1, zero_initialize=True)[source]¶
Bases:
torch.nn.Module
Module which uses a transformer to handle 3D motion.
- class mmagic.models.editors.animatediff.motion_module.TemporalTransformer3DModel(in_channels, num_attention_heads, attention_head_dim, num_layers, attention_block_types=('Temporal_Self', 'Temporal_Self'), dropout=0.0, norm_num_groups=32, cross_attention_dim=768, activation_fn='geglu', attention_bias=False, upcast_attention=False, cross_frame_attention_mode=None, temporal_position_encoding=False, temporal_position_encoding_max_len=24)[source]¶
Bases:
torch.nn.Module
Module which implements a 3D Transformer.
- class mmagic.models.editors.animatediff.motion_module.TemporalTransformerBlock(dim, num_attention_heads, attention_head_dim, attention_block_types=('Temporal_Self', 'Temporal_Self'), dropout=0.0, norm_num_groups=32, cross_attention_dim=768, activation_fn='geglu', attention_bias=False, upcast_attention=False, cross_frame_attention_mode=None, temporal_position_encoding=False, temporal_position_encoding_max_len=24)[source]¶
Bases:
torch.nn.Module
Module which is a component of the Temporal 3D Transformer.
- class mmagic.models.editors.animatediff.motion_module.PositionalEncoding(d_model, dropout=0.0, max_len=24)[source]¶
Bases:
torch.nn.Module
An implementation of positional encoding.
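A minimal sketch of standard sinusoidal positional encoding matching the documented signature (`d_model`, `dropout`, `max_len`); the exact buffer layout in mmagic's implementation may differ:

```python
import math

import torch
from torch import nn


class PositionalEncoding(nn.Module):
    """Sinusoidal positional encoding for (batch, seq_len, d_model) input."""

    def __init__(self, d_model: int, dropout: float = 0.0, max_len: int = 24):
        super().__init__()
        self.dropout = nn.Dropout(p=dropout)
        position = torch.arange(max_len).unsqueeze(1)
        div_term = torch.exp(
            torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pe = torch.zeros(1, max_len, d_model)
        pe[0, :, 0::2] = torch.sin(position * div_term)  # even dims
        pe[0, :, 1::2] = torch.cos(position * div_term)  # odd dims
        self.register_buffer('pe', pe)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Add the encoding for the first seq_len positions.
        x = x + self.pe[:, :x.size(1)]
        return self.dropout(x)


out = PositionalEncoding(d_model=64)(torch.zeros(2, 16, 64))
```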
- class mmagic.models.editors.animatediff.motion_module.VersatileAttention(attention_mode=None, cross_frame_attention_mode=None, temporal_position_encoding=False, temporal_position_encoding_max_len=24, *args, **kwargs)[source]¶
Bases:
mmagic.models.editors.animatediff.attention_3d.CrossAttention
An implementation of VersatileAttention.
- class mmagic.models.editors.animatediff.motion_module.FeedForward(dim: int, dim_out: Optional[int] = None, mult: int = 4, dropout: float = 0.0, activation_fn: str = 'geglu')[source]¶
Bases:
torch.nn.Module
A feed-forward layer.
- Parameters
dim (int) – The number of channels in the input.
dim_out (int, optional) – The number of channels in the output. If not given, defaults to dim.
mult (int, optional, defaults to 4) – The multiplier to use for the hidden dimension.
dropout (float, optional, defaults to 0.0) – The dropout probability to use.
activation_fn (str, optional, defaults to “geglu”) – Activation function to be used in feed-forward.
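The parameters above can be sketched as a small GEGLU feed-forward block. This is a simplified illustration assuming only the “geglu” activation path, not mmagic's full implementation:

```python
import torch
from torch import nn


class GEGLU(nn.Module):
    """Gated GELU: project to 2x width, gate one half with GELU of the other."""

    def __init__(self, dim_in: int, dim_out: int):
        super().__init__()
        self.proj = nn.Linear(dim_in, dim_out * 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x, gate = self.proj(x).chunk(2, dim=-1)
        return x * nn.functional.gelu(gate)


class FeedForward(nn.Module):
    """Feed-forward layer: GEGLU -> dropout -> linear back to dim_out."""

    def __init__(self, dim: int, dim_out=None, mult: int = 4,
                 dropout: float = 0.0):
        super().__init__()
        inner_dim = dim * mult                      # hidden width = dim * mult
        dim_out = dim_out if dim_out is not None else dim
        self.net = nn.Sequential(
            GEGLU(dim, inner_dim),
            nn.Dropout(dropout),
            nn.Linear(inner_dim, dim_out),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


y = FeedForward(dim=32)(torch.randn(2, 7, 32))
```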