mmagic.models.archs.attention_injection

Module Contents

Classes

AttentionInjection

Wrapper for the Stable Diffusion UNet.

Functions

torch_dfs(model)

Attributes

AttentionStatus

mmagic.models.archs.attention_injection.AttentionStatus [source]
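
AttentionStatus is listed here without a description. In reference-only attention-injection schemes, a flag of this kind is typically a small enum that switches the patched attention layers between a write pass (record hidden states from the reference input) and a read pass (mix the recorded states into the current input's self-attention). A minimal sketch of that pattern, with hypothetical member names that are not confirmed by this page:

    import enum

    class AttentionStatusSketch(enum.Enum):
        # Hypothetical members; the actual values are not shown in this reference.
        READ = 0   # main pass: reuse stored reference hidden states
        WRITE = 1  # reference pass: store hidden states for later reuse
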
mmagic.models.archs.attention_injection.torch_dfs(model: torch.nn.Module) [source]
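
No body is shown for torch_dfs, but a depth-first traversal over nn.Module children conventionally returns the module itself followed by every descendant, which is how a wrapper like AttentionInjection can locate the attention blocks it needs to patch. A minimal sketch under that assumption (the name torch_dfs_sketch is hypothetical):

    import torch

    def torch_dfs_sketch(model: torch.nn.Module) -> list:
        """Collect ``model`` and all of its submodules in depth-first order."""
        result = [model]
        for child in model.children():
            result += torch_dfs_sketch(child)
        return result

A caller can then filter the returned list, for example keeping only the transformer blocks whose self-attention should be modified.
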
class mmagic.models.archs.attention_injection.AttentionInjection(module: torch.nn.Module, injection_weight=5) [source]

Bases: torch.nn.Module

Wrapper for the Stable Diffusion UNet.

Parameters

module (nn.Module) – The module to be wrapped.

forward(x: torch.Tensor, t, encoder_hidden_states=None, down_block_additional_residuals=None, mid_block_additional_residual=None, ref_x=None) → torch.Tensor [source]

Forward and add LoRA mapping.

Parameters

x (Tensor) – The input tensor.

Returns

The output tensor.

Return type

Tensor
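
A hedged usage sketch of the wrapper, assuming a diffusers UNet2DConditionModel and Stable Diffusion v1.5-style latent and text-embedding shapes; the checkpoint id and tensor shapes are illustrative assumptions, not prescribed by this API:

    import torch
    from diffusers import UNet2DConditionModel
    from mmagic.models.archs.attention_injection import AttentionInjection

    # Assumption: any UNet2DConditionModel works; SD v1.5 is used for concreteness.
    unet = UNet2DConditionModel.from_pretrained(
        'runwayml/stable-diffusion-v1-5', subfolder='unet')
    wrapped = AttentionInjection(unet)

    latent = torch.randn(1, 4, 64, 64)       # noisy latent to denoise
    ref_latent = torch.randn(1, 4, 64, 64)   # reference latent whose attention features are injected
    timestep = torch.tensor([10])
    text_emb = torch.randn(1, 77, 768)       # placeholder for CLIP text embeddings

    with torch.no_grad():
        out = wrapped(
            latent,
            timestep,
            encoder_hidden_states=text_emb,
            ref_x=ref_latent)
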
