
mmagic.models.editors.cain.cain_net

Module Contents

Classes

CAINNet

CAIN network structure.

ConvNormWithReflectionPad

Apply reflection padding, followed by a convolution, which can be followed by an optional normalization.

ChannelAttentionLayer

Channel Attention (CA) Layer.

ResidualChannelAttention

Residual Channel Attention Module.

ResidualGroup

Residual Group, consisting of a stack of residual channel attention blocks, followed by a convolution.

Functions

get_padding_functions(x[, padding])

Generate padding function for CAIN.

class mmagic.models.editors.cain.cain_net.CAINNet(in_channels=3, kernel_size=3, num_block_groups=5, num_block_layers=12, depth=3, reduction=16, norm=None, padding=7, act=nn.LeakyReLU(0.2, True), init_cfg=None)[source]

Bases: mmengine.model.BaseModule

CAIN network structure.

Paper: Channel Attention Is All You Need for Video Frame Interpolation. Ref repo: https://github.com/myungsub/CAIN

Parameters
  • in_channels (int) – Channel number of inputs. Default: 3.

  • kernel_size (int) – Kernel size of CAINNet. Default: 3.

  • num_block_groups (int) – Number of block groups. Default: 5.

  • num_block_layers (int) – Number of blocks in a group. Default: 12.

  • depth (int) – Downscale depth; the downscaling factor is 2**depth. Default: 3.

  • reduction (int) – Channel reduction of CA. Default: 16.

  • norm (str | None) – Normalization layer. If it is None, no normalization is performed. Default: None.

  • padding (int) – Padding of CAINNet. Default: 7.

  • act (function) – Activation function. Default: nn.LeakyReLU(0.2, True).

  • init_cfg (dict, optional) – Initialization config dict. Default: None.

forward(imgs, padding_flag=False)[source]

Forward function.

Parameters
  • imgs (Tensor) – Input tensor with shape (n, 2, c, h, w).

  • padding_flag (bool) – Padding or not. Default: False.

Returns

Forward results.

Return type

Tensor
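
A minimal usage sketch with a randomly initialized model. The input layout (n, 2, c, h, w) follows the forward documentation above; the output being the interpolated middle frame of shape (n, c, h, w) is an assumption, as the docstring only states "Forward results".

>>> import torch
>>> from mmagic.models.editors.cain.cain_net import CAINNet
>>> model = CAINNet()
>>> # two consecutive frames stacked along dim 1: (n, 2, c, h, w)
>>> frames = torch.rand(1, 2, 3, 64, 64)
>>> out = model(frames)  # assumed: interpolated frame of shape (n, c, h, w)

With padding_flag=False, the 64x64 spatial size is chosen here so that it is already divisible by 2**depth (8 with the default depth=3); pass padding_flag=True if the input size may require padding.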

mmagic.models.editors.cain.cain_net.get_padding_functions(x, padding=7)[source]

Generate padding function for CAIN.

This function produces two functions that pad and depad a tensor. Applying the padding function and then the depadding function recovers the original tensor.

The generated padding function pads the spatial dimensions of the given tensor up to a multiple of the 'padding'-th power of 2, i.e., pow(2, 'padding').

tensor --padding_function--> padded tensor
padded tensor --depadding_function--> original tensor

Parameters
  • x (Tensor) – Input tensor.

  • padding (int) – Padding size. Default: 7.

Returns
  • padding_function (Function) – Padding function.

  • depadding_function (Function) – Depadding function.
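
A brief sketch of how the returned pair might be used. That the two functions are returned as a tuple and are directly callable on a tensor is assumed from the description above.

>>> import torch
>>> from mmagic.models.editors.cain.cain_net import get_padding_functions
>>> x = torch.rand(1, 3, 100, 180)
>>> pad_fn, depad_fn = get_padding_functions(x, padding=7)
>>> padded = pad_fn(x)           # spatial size padded up to a multiple of 2**7
>>> restored = depad_fn(padded)  # removes the padding again
>>> assert restored.shape == x.shape  # per the round-trip guarantee above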

class mmagic.models.editors.cain.cain_net.ConvNormWithReflectionPad(in_channels, out_channels, kernel_size, norm=None)[source]

Bases: mmengine.model.BaseModule

Apply reflection padding, followed by a convolution, which can be followed by an optional normalization.

Parameters
  • in_channels (int) – Channel number of input features.

  • out_channels (int) – Channel number of output features.

  • kernel_size (int) – Kernel size of convolution layer.

  • norm (str | None) – Normalization layer. If it is None, no normalization is performed. Default: None.

forward(x)[source]

Forward function for ConvNormWithReflectionPad.

Parameters

x (Tensor) – Input tensor with shape (n, c, h, w).

Returns

Output tensor with shape (n, c, h, w).

Return type

Tensor
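
A small sketch using illustrative channel and spatial sizes. Per the constructor and forward documentation above, the reflection padding keeps the spatial size unchanged while the convolution maps in_channels to out_channels.

>>> import torch
>>> from mmagic.models.editors.cain.cain_net import ConvNormWithReflectionPad
>>> layer = ConvNormWithReflectionPad(in_channels=3, out_channels=16, kernel_size=3)
>>> x = torch.rand(1, 3, 32, 32)
>>> y = layer(x)  # 32x32 spatial size preserved, 16 output channels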

class mmagic.models.editors.cain.cain_net.ChannelAttentionLayer(mid_channels, reduction=16)[source]

Bases: mmengine.model.BaseModule

Channel Attention (CA) Layer.

Parameters
  • mid_channels (int) – Channel number of the intermediate features.

  • reduction (int) – Channel reduction of CA. Default: 16.

forward(x)[source]

Forward function for ChannelAttentionLayer.

Parameters

x (Tensor) – Input tensor with shape (n, c, h, w).

Returns

Output tensor with shape (n, c, h, w).

Return type

Tensor
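
A short sketch showing that the layer preserves the input shape while reweighting channels; the channel sizes are illustrative, and mid_channels must match the channel dimension of the input.

>>> import torch
>>> from mmagic.models.editors.cain.cain_net import ChannelAttentionLayer
>>> ca = ChannelAttentionLayer(mid_channels=64, reduction=16)
>>> x = torch.rand(1, 64, 32, 32)
>>> ca(x).shape
torch.Size([1, 64, 32, 32])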

class mmagic.models.editors.cain.cain_net.ResidualChannelAttention(mid_channels, kernel_size=3, reduction=16, norm=None, act=nn.LeakyReLU(0.2, True))[source]

Bases: mmengine.model.BaseModule

Residual Channel Attention Module.

Parameters
  • mid_channels (int) – Channel number of the intermediate features.

  • kernel_size (int) – Kernel size of convolution layers. Default: 3.

  • reduction (int) – Channel reduction. Default: 16.

  • norm (None | function) – Normalization layer. If it is None, no normalization is performed. Default: None.

  • act (function) – Activation function. Default: nn.LeakyReLU(0.2, True).

forward(x)[source]

Forward function for ResidualChannelAttention.

Parameters

x (Tensor) – Input tensor with shape (n, c, h, w).

Returns

Output tensor with shape (n, c, h, w).

Return type

Tensor
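
A minimal sketch using the defaults from the signature above (kernel_size=3, reduction=16, no normalization); shapes are illustrative.

>>> import torch
>>> from mmagic.models.editors.cain.cain_net import ResidualChannelAttention
>>> block = ResidualChannelAttention(mid_channels=64)
>>> x = torch.rand(1, 64, 32, 32)
>>> block(x).shape
torch.Size([1, 64, 32, 32])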

class mmagic.models.editors.cain.cain_net.ResidualGroup(block_layer, num_block_layers, mid_channels, kernel_size, reduction, act=nn.LeakyReLU(0.2, True), norm=None)[source]

Bases: mmengine.model.BaseModule

Residual Group, consisting of a stack of residual channel attention, followed by a convolution.

Parameters
  • block_layer (nn.Module) – nn.Module class for basic block.

  • num_block_layers (int) – Number of blocks in the group.

  • mid_channels (int) – Channel number of the intermediate features.

  • kernel_size (int) – Kernel size of ResidualGroup.

  • reduction (int) – Channel reduction of CA. Default: 16.

  • act (function) – Activation function. Default: nn.LeakyReLU(0.2, True).

  • norm (str | None) – Normalization layer. If it is None, no normalization is performed. Default: None.

forward(x)[source]

Forward function for ResidualGroup.

Parameters

x (Tensor) – Input tensor with shape (n, c, h, w).

Returns

Output tensor with shape (n, c, h, w).

Return type

Tensor
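
A sketch of constructing a group. Passing ResidualChannelAttention as block_layer is an assumption about which basic block is used (the documentation above only says "nn.Module class for basic block"); the channel and spatial sizes are illustrative.

>>> import torch
>>> from mmagic.models.editors.cain.cain_net import (ResidualChannelAttention,
...                                                  ResidualGroup)
>>> group = ResidualGroup(
...     block_layer=ResidualChannelAttention,  # assumed choice of basic block
...     num_block_layers=12,
...     mid_channels=64,
...     kernel_size=3,
...     reduction=16)
>>> x = torch.rand(1, 64, 32, 32)
>>> group(x).shape
torch.Size([1, 64, 32, 32])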
