mmagic.models.editors.guided_diffusion.classifier

Module Contents

Classes

CheckpointFunction

Base class to create custom autograd.Function

Upsample

An upsampling layer with an optional convolution.

TimestepBlock

Any module where forward() takes timestep embeddings as a second argument.

AttentionBlock

An attention block that allows spatial positions to attend to each other.

TimestepEmbedSequential

A sequential module that passes timestep embeddings to the children that support it as an extra input.

GroupNorm32

Applies Group Normalization over a mini-batch of inputs as described in the paper Group Normalization.

Downsample

A downsampling layer with an optional convolution.

ResBlock

A residual block that can optionally change the number of channels.

AttentionPool2d

Adapted from CLIP.

EncoderUNetModel

The half UNet model with attention and timestep embedding.

Functions

checkpoint(func, inputs, params, flag)

Evaluate a function without caching intermediate activations, allowing for reduced memory at the expense of extra compute in the backward pass.

timestep_embedding(timesteps, dim[, max_period])

Create sinusoidal timestep embeddings.

zero_module(module)

Zero out the parameters of a module and return it.

normalization(channels)

Make a standard normalization layer.

mmagic.models.editors.guided_diffusion.classifier.checkpoint(func, inputs, params, flag)[source]

Evaluate a function without caching intermediate activations, allowing for reduced memory at the expense of extra compute in the backward pass.

Parameters
  • func – the function to evaluate.

  • inputs – the argument sequence to pass to func.

  • params – a sequence of parameters func depends on but does not explicitly take as arguments.

  • flag – if False, disable gradient checkpointing.
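
A minimal usage sketch based on the documented signature (the sub-network and shapes below are illustrative assumptions, not part of the API):

>>> import torch
>>> import torch.nn as nn
>>> from mmagic.models.editors.guided_diffusion.classifier import checkpoint
>>> # a hypothetical sub-network whose activations we do not want to cache
>>> block = nn.Sequential(nn.Linear(64, 64), nn.GELU(), nn.Linear(64, 64))
>>> x = torch.randn(8, 64, requires_grad=True)
>>> # flag=True: activations inside `block` are recomputed during backward
>>> out = checkpoint(block, (x,), tuple(block.parameters()), True)
>>> out.sum().backward()  # gradients reach x and block's parameters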

class mmagic.models.editors.guided_diffusion.classifier.CheckpointFunction(*args, **kwargs)[source]

Bases: torch.autograd.Function

Base class to create custom autograd.Function

To create a custom autograd.Function, subclass this class and implement the forward() and backward() static methods. Then, to use your custom op in the forward pass, call the class method apply. Do not call forward() directly.

To ensure correctness and best performance, make sure you are calling the correct methods on ctx and validating your backward function using torch.autograd.gradcheck().

See the PyTorch notes on extending autograd for more details on how to use this class.

Examples:

>>> class Exp(Function):
...     @staticmethod
...     def forward(ctx, i):
...         result = i.exp()
...         ctx.save_for_backward(result)
...         return result
...
...     @staticmethod
...     def backward(ctx, grad_output):
...         result, = ctx.saved_tensors
...         return grad_output * result
>>>
>>> # Use it by calling the apply method:
>>> output = Exp.apply(input)
static forward(ctx, run_function, length, *args)[source]

Performs the operation.

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).

The context can be used to store arbitrary data that can then be retrieved during the backward pass. Tensors should not be stored directly on ctx (though this is not currently enforced for backward compatibility). Instead, tensors should be saved either with ctx.save_for_backward() if they are intended to be used in backward (equivalently, vjp) or ctx.save_for_forward() if they are intended to be used in jvp.

static backward(ctx, *output_grads)[source]

Defines a formula for differentiating the operation with backward mode automatic differentiation (alias to the vjp function).

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by as many outputs as the forward() returned (None will be passed in for non-tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring grads, you can just pass None as a gradient for that input.

The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs its gradient computed w.r.t. the output.

mmagic.models.editors.guided_diffusion.classifier.timestep_embedding(timesteps, dim, max_period=10000)[source]

Create sinusoidal timestep embeddings.

Parameters
  • timesteps – a 1-D Tensor of N indices, one per batch element. These may be fractional.

  • dim – the dimension of the output.

  • max_period – controls the minimum frequency of the embeddings.

Returns

an [N x dim] Tensor of positional embeddings.
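
A sketch of the standard sinusoidal construction (half cosine channels, half sine channels; the module's exact implementation may differ in minor details):

>>> import math, torch
>>> def sinusoidal_embedding(timesteps, dim, max_period=10000):
...     # frequencies decay geometrically from 1 down to 1/max_period
...     half = dim // 2
...     freqs = torch.exp(
...         -math.log(max_period)
...         * torch.arange(half, dtype=torch.float32) / half
...     ).to(timesteps.device)
...     args = timesteps[:, None].float() * freqs[None]
...     emb = torch.cat([torch.cos(args), torch.sin(args)], dim=-1)
...     if dim % 2:  # pad one zero channel when dim is odd
...         emb = torch.cat([emb, torch.zeros_like(emb[:, :1])], dim=-1)
...     return emb
>>> sinusoidal_embedding(torch.arange(4), 128).shape
torch.Size([4, 128])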

mmagic.models.editors.guided_diffusion.classifier.zero_module(module)[source]

Zero out the parameters of a module and return it.
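
The usual implementation is a short in-place loop; a sketch:

>>> def zero_module_sketch(module):
...     # zero every parameter in place and hand the module back, e.g. to
...     # initialize the final conv of a residual branch to the identity
...     for p in module.parameters():
...         p.detach().zero_()
...     return module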

class mmagic.models.editors.guided_diffusion.classifier.Upsample(channels, use_conv, dims=2, out_channels=None)[source]

Bases: torch.nn.Module

An upsampling layer with an optional convolution.

Parameters
  • channels – channels in the inputs and outputs.

  • use_conv – a bool determining if a convolution is applied.

  • dims – determines if the signal is 1D, 2D, or 3D. If 3D, then upsampling occurs in the inner-two dimensions.

forward(x)[source]

Forward function.

Parameters

x (torch.Tensor) – The tensor to upsample.

Returns

The upsample results.

Return type

torch.Tensor
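
A usage sketch (shapes are illustrative; upsampling doubles the spatial size):

>>> import torch
>>> from mmagic.models.editors.guided_diffusion.classifier import Upsample
>>> up = Upsample(channels=64, use_conv=True, dims=2)
>>> up(torch.randn(1, 64, 16, 16)).shape
torch.Size([1, 64, 32, 32])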

class mmagic.models.editors.guided_diffusion.classifier.TimestepBlock[source]

Bases: torch.nn.Module

Any module where forward() takes timestep embeddings as a second argument.

abstract forward(x, emb)[source]

Apply the module to x given emb timestep embeddings.

class mmagic.models.editors.guided_diffusion.classifier.AttentionBlock(channels, num_heads=1, num_head_channels=-1, use_checkpoint=False, use_new_attention_order=False)[source]

Bases: torch.nn.Module

An attention block that allows spatial positions to attend to each other.

Originally ported from https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/models/unet.py#L66, but adapted to the N-d case.

forward(x)[source]

Forward function. This function supports gradient checkpointing to save memory.

Parameters

x (torch.Tensor) – The input tensor for attention.

Returns

The attention results.

Return type

torch.Tensor

_forward(x)[source]

Forward function of attention block.

Parameters

x (torch.Tensor) – The input tensor for attention.

Returns

The attention results.

Return type

torch.Tensor
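
A usage sketch; the block is shape-preserving, so it can be dropped into a feature map of any spatial size (shapes below are illustrative):

>>> import torch
>>> from mmagic.models.editors.guided_diffusion.classifier import AttentionBlock
>>> attn = AttentionBlock(channels=64, num_heads=4)
>>> attn(torch.randn(1, 64, 16, 16)).shape
torch.Size([1, 64, 16, 16])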

class mmagic.models.editors.guided_diffusion.classifier.TimestepEmbedSequential(*args: torch.nn.modules.module.Module)[source]

Bases: torch.nn.Sequential, TimestepBlock

A sequential module that passes timestep embeddings to the children that support it as an extra input.

forward(x, emb)[source]

Forward function. This function supports sequential forward with an embedding input.

Parameters
  • x (torch.Tensor) – Input tensor to forward.

  • emb (torch.Tensor) – Input timestep embedding.

Returns

The forward results.

Return type

torch.Tensor
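
The dispatch rule is essentially a type check per child; a sketch (not the exact source):

>>> from mmagic.models.editors.guided_diffusion.classifier import TimestepBlock
>>> def forward_sketch(layers, x, emb):
...     # timestep-aware children receive the embedding; plain ones do not
...     for layer in layers:
...         x = layer(x, emb) if isinstance(layer, TimestepBlock) else layer(x)
...     return x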

class mmagic.models.editors.guided_diffusion.classifier.GroupNorm32(num_groups: int, num_channels: int, eps: float = 1e-05, affine: bool = True, device=None, dtype=None)[source]

Bases: torch.nn.GroupNorm

Applies Group Normalization over a mini-batch of inputs as described in the paper Group Normalization

\[y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]

The input channels are separated into num_groups groups, each containing num_channels / num_groups channels. num_channels must be divisible by num_groups. The mean and standard-deviation are calculated separately over each group. \(\gamma\) and \(\beta\) are learnable per-channel affine transform parameter vectors of size num_channels if affine is True. The standard-deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False).

This layer uses statistics computed from input data in both training and evaluation modes.

Parameters
  • num_groups (int) – number of groups to separate the channels into

  • num_channels (int) – number of channels expected in input

  • eps – a value added to the denominator for numerical stability. Default: 1e-5

  • affine – a boolean value that when set to True, this module has learnable per-channel affine parameters initialized to ones (for weights) and zeros (for biases). Default: True.

Shape:
  • Input: \((N, C, *)\) where \(C=\text{num\_channels}\)

  • Output: \((N, C, *)\) (same shape as input)

Examples:

>>> input = torch.randn(20, 6, 10, 10)
>>> # Separate 6 channels into 3 groups
>>> m = nn.GroupNorm(3, 6)
>>> # Separate 6 channels into 6 groups (equivalent to InstanceNorm)
>>> m = nn.GroupNorm(6, 6)
>>> # Put all 6 channels into a single group (equivalent to LayerNorm)
>>> m = nn.GroupNorm(1, 6)
>>> # Activating the module
>>> output = m(input)
forward(x)[source]

Forward group normalization.

Parameters

x (torch.Tensor) – The input tensor.

Returns

Tensor after group norm.

Return type

torch.Tensor
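
The point of this subclass in guided diffusion is numerical stability under mixed precision: statistics are computed in float32 and the result is cast back to the input dtype. A sketch of that behavior, assuming it matches the guided-diffusion reference code:

>>> import torch
>>> class GroupNorm32Sketch(torch.nn.GroupNorm):
...     def forward(self, x):
...         # normalize in float32, then return to the input dtype (e.g. fp16)
...         return super().forward(x.float()).type(x.dtype)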

mmagic.models.editors.guided_diffusion.classifier.normalization(channels)[source]

Make a standard normalization layer.

Parameters

channels – number of input channels.

Returns

an nn.Module for normalization.
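
In the guided-diffusion reference code this returns a GroupNorm32 with 32 groups; a sketch under that assumption:

>>> from mmagic.models.editors.guided_diffusion.classifier import GroupNorm32
>>> def normalization_sketch(channels):
...     # 32 groups is the guided-diffusion convention
...     return GroupNorm32(32, channels)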

class mmagic.models.editors.guided_diffusion.classifier.Downsample(channels, use_conv, dims=2, out_channels=None)[source]

Bases: torch.nn.Module

A downsampling layer with an optional convolution.

Parameters
  • channels – channels in the inputs and outputs.

  • use_conv – a bool determining if a convolution is applied.

  • dims – determines if the signal is 1D, 2D, or 3D. If 3D, then downsampling occurs in the inner-two dimensions.

forward(x)[source]

Forward function for downsample.

Parameters

x (torch.Tensor) – The input tensor.

Returns

Results after downsample.

Return type

torch.Tensor
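
A usage sketch mirroring Upsample (the spatial size halves; shapes are illustrative):

>>> import torch
>>> from mmagic.models.editors.guided_diffusion.classifier import Downsample
>>> down = Downsample(channels=64, use_conv=True, dims=2)
>>> down(torch.randn(1, 64, 32, 32)).shape
torch.Size([1, 64, 16, 16])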

class mmagic.models.editors.guided_diffusion.classifier.ResBlock(channels, emb_channels, dropout, out_channels=None, use_conv=False, use_scale_shift_norm=False, dims=2, use_checkpoint=False, up=False, down=False)[source]

Bases: TimestepBlock

A residual block that can optionally change the number of channels.

Parameters
  • channels – the number of input channels.

  • emb_channels – the number of timestep embedding channels.

  • dropout – the rate of dropout.

  • out_channels – if specified, the number of out channels.

  • use_conv – if True and out_channels is specified, use a spatial convolution instead of a smaller 1x1 convolution to change the channels in the skip connection.

  • dims – determines if the signal is 1D, 2D, or 3D.

  • use_checkpoint – if True, use gradient checkpointing on this module.

  • up – if True, use this block for upsampling.

  • down – if True, use this block for downsampling.

forward(x, emb)[source]

Apply the block to a Tensor, conditioned on a timestep embedding.

Parameters
  • x – an [N x C x …] Tensor of features.

  • emb – an [N x emb_channels] Tensor of timestep embeddings.

Returns

an [N x C x …] Tensor of outputs.

_forward(x, emb)[source]

Forward function.

Parameters
  • x (torch.Tensor) – Input feature tensor to forward.

  • emb (torch.Tensor) – The timesteps embedding to forward.

Returns

The forward results.

Return type

torch.Tensor
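
When use_scale_shift_norm=True, the embedding conditions the block FiLM-style: it is projected to 2*C channels and split into a per-channel scale and shift applied after normalization. A self-contained sketch of that conditioning step (the helper name is hypothetical):

>>> import torch
>>> def scale_shift(h, emb_out):
...     # hypothetical helper: emb_out carries 2*C channels -> scale, shift
...     scale, shift = emb_out.chunk(2, dim=1)
...     return h * (1 + scale[:, :, None, None]) + shift[:, :, None, None]
>>> h = torch.randn(4, 128, 16, 16)   # a normalized feature map
>>> emb_out = torch.randn(4, 256)     # embedding projected to 2*128 channels
>>> scale_shift(h, emb_out).shape
torch.Size([4, 128, 16, 16])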

class mmagic.models.editors.guided_diffusion.classifier.AttentionPool2d(spacial_dim: int, embed_dim: int, num_heads_channels: int, output_dim: int = None)[source]

Bases: torch.nn.Module

Adapted from CLIP:

https://github.com/openai/CLIP/blob/main/clip/model.py.

forward(x)[source]

Forward function.

Parameters

x (torch.Tensor) – Input feature tensor to forward.

Returns

The forward results.

Return type

torch.Tensor
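
A usage sketch, assuming the input feature-map side length matches spacial_dim and the pooled output is an [N x output_dim] tensor:

>>> import torch
>>> from mmagic.models.editors.guided_diffusion.classifier import AttentionPool2d
>>> pool = AttentionPool2d(spacial_dim=8, embed_dim=512,
...                        num_heads_channels=64, output_dim=1000)
>>> pool(torch.randn(2, 512, 8, 8)).shape
torch.Size([2, 1000])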

class mmagic.models.editors.guided_diffusion.classifier.EncoderUNetModel(image_size, in_channels, model_channels, out_channels, num_res_blocks, attention_resolutions, dropout=0, channel_mult=(1, 2, 4, 8), conv_resample=True, dims=2, use_checkpoint=False, use_fp16=False, num_heads=1, num_head_channels=-1, num_heads_upsample=-1, use_scale_shift_norm=False, resblock_updown=False, use_new_attention_order=False, pool='adaptive')[source]

Bases: torch.nn.Module

The half UNet model with attention and timestep embedding.

For usage, see UNet.

convert_to_fp16()[source]

Convert the torso of the model to float16.

convert_to_fp32()[source]

Convert the torso of the model to float32.

forward(x, timesteps)[source]

Apply the model to an input batch.

Parameters
  • x – an [N x C x …] Tensor of inputs.

  • timesteps – a 1-D batch of timesteps.

Returns

an [N x K] Tensor of outputs.
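
A construction-and-forward sketch; the hyperparameters below are illustrative assumptions only (real values come from the mmagic configs):

>>> import torch
>>> from mmagic.models.editors.guided_diffusion.classifier import EncoderUNetModel
>>> model = EncoderUNetModel(
...     image_size=64, in_channels=3, model_channels=128,
...     out_channels=1000,                # e.g. number of classifier classes
...     num_res_blocks=2, attention_resolutions=(8, 16, 32))
>>> x = torch.randn(2, 3, 64, 64)
>>> t = torch.randint(0, 1000, (2,))     # one diffusion timestep per sample
>>> model(x, t).shape
torch.Size([2, 1000])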
