mmagic.models.archs.attention_injection

Module Contents

Classes

AttentionInjection

Wrapper for a Stable Diffusion UNet.

Functions

torch_dfs(model)

Attributes

AttentionStatus

mmagic.models.archs.attention_injection.AttentionStatus[source]
mmagic.models.archs.attention_injection.torch_dfs(model: torch.nn.Module)[source]
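The docs give no body for torch_dfs, but its name and signature suggest a depth-first collection of a module and all of its submodules into a flat list. A minimal sketch of such a helper (illustrative; the toy nn.Sequential network is an assumption, not from mmagic):

```python
import torch.nn as nn


def torch_dfs(model: nn.Module):
    # Depth-first traversal: the module itself, then every submodule,
    # recursively, flattened into a single list.
    result = [model]
    for child in model.children():
        result += torch_dfs(child)
    return result


# Toy network to demonstrate the traversal order.
net = nn.Sequential(nn.Linear(4, 4), nn.ReLU())
print(len(torch_dfs(net)))  # Sequential + Linear + ReLU = 3
```

A flat list like this is convenient for locating every attention block inside a large UNet so each one can be patched in place.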
class mmagic.models.archs.attention_injection.AttentionInjection(module: torch.nn.Module, injection_weight=5)[source]

Bases: torch.nn.Module

Wrapper for a Stable Diffusion UNet.

Parameters

module (nn.Module) – The module to be wrapped.

injection_weight (int) – Strength of the injected reference attention. Defaults to 5.

forward(x: torch.Tensor, t, encoder_hidden_states=None, down_block_additional_residuals=None, mid_block_additional_residual=None, ref_x=None) torch.Tensor[source]

Forward pass with attention features injected from the reference input.

Parameters

x (Tensor) – The input tensor.

t – The diffusion timestep.

encoder_hidden_states – Conditioning embeddings passed to cross-attention. Defaults to None.

down_block_additional_residuals – Additional residuals added to the down blocks (e.g. from a ControlNet). Defaults to None.

mid_block_additional_residual – Additional residual added to the mid block. Defaults to None.

ref_x – Reference input whose attention features are injected. Defaults to None.

Returns

The output tensor.

Return type

Tensor
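The page does not show how the injection works internally. A common pattern for reference-based attention injection (reference-only control) is to let the self-attention queries of the main branch also attend over hidden states from the reference branch, typically by concatenating them into the keys and values. A toy sketch of that idea, assuming standard PyTorch attention (the class RefInjectedSelfAttention and all shapes below are illustrative, not mmagic's implementation):

```python
import torch
import torch.nn as nn


class RefInjectedSelfAttention(nn.Module):
    """Toy self-attention that can also attend over reference features.

    Illustrative only: the real AttentionInjection wraps the attention
    blocks of an existing UNet rather than defining a new layer.
    """

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor, ref_x: torch.Tensor = None):
        # With a reference, keys/values come from [x; ref_x], so queries
        # from x can "read" the reference features as well.
        kv = x if ref_x is None else torch.cat([x, ref_x], dim=1)
        out, _ = self.attn(x, kv, kv)
        return out


layer = RefInjectedSelfAttention(dim=32)
x = torch.randn(2, 8, 32)    # (batch, tokens, dim) main branch
ref = torch.randn(2, 8, 32)  # features from the reference branch
print(layer(x, ref).shape)   # torch.Size([2, 8, 32])
```

Note the output keeps the shape of x: injecting the reference only widens the key/value sequence the queries attend over, so the wrapped module remains a drop-in replacement.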
