mmagic.models.editors.restormer.restormer_net

Module Contents

Classes

BiasFree_LayerNorm

Layer normalization without bias.

WithBias_LayerNorm

Layer normalization with a learnable bias.

LayerNorm

Layer normalization module.

FeedForward

Gated-Dconv Feed-Forward Network (GDFN)

Attention

Multi-DConv Head Transposed Self-Attention (MDTA)

TransformerBlock

Transformer Block.

OverlapPatchEmbed

Overlapped image patch embedding with 3x3 Conv.

Downsample

Downsample modules.

Upsample

Upsample modules.

Restormer

A PyTorch implementation of “Restormer: Efficient Transformer for High-Resolution Image Restoration”.

Functions

to_3d(x)

Reshape input tensor.

to_4d(x, h, w)

Reshape input tensor.

mmagic.models.editors.restormer.restormer_net.to_3d(x)[source]

Reshape input tensor.

mmagic.models.editors.restormer.restormer_net.to_4d(x, h, w)[source]

Reshape input tensor.
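A minimal sketch of how these reshape helpers are typically used, assuming they follow the reference Restormer layout of flattening a (B, C, H, W) feature map into a (B, H*W, C) token sequence and back; the exact output shapes are an assumption, not stated in the docstrings:

    import torch

    from mmagic.models.editors.restormer.restormer_net import to_3d, to_4d

    x = torch.rand(1, 48, 16, 16)     # (B, C, H, W) feature map
    tokens = to_3d(x)                 # assumed to flatten to (B, H*W, C) = (1, 256, 48)
    restored = to_4d(tokens, 16, 16)  # assumed to restore the original (1, 48, 16, 16) layout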

class mmagic.models.editors.restormer.restormer_net.BiasFree_LayerNorm(normalized_shape)[source]

Bases: mmengine.model.BaseModule

Layer normalization without bias.

Parameters

normalized_shape (tuple) – The shape of inputs.

forward(x)[source]

Forward function.

Parameters

x (Tensor) – Input tensor with shape (B, C, H, W).

Returns

Forward results.

Return type

Tensor

class mmagic.models.editors.restormer.restormer_net.WithBias_LayerNorm(normalized_shape)[source]

Bases: mmengine.model.BaseModule

Layer normalization with a learnable bias.

Parameters

normalized_shape (tuple) – The shape of inputs.

forward(x)[source]

Forward function.

Parameters

x (Tensor) – Input tensor with shape (B, C, H, W).

Returns

Forward results.

Return type

Tensor

class mmagic.models.editors.restormer.restormer_net.LayerNorm(dim, LayerNorm_type)[source]

Bases: mmengine.model.BaseModule

Layer normalization module.

Note: This is different from applying PyTorch’s built-in layer normalization to 2D feature maps.

The layer norm here lets you select the layer normalization type (‘WithBias’ or ‘BiasFree’).

Parameters
  • dim (int) – Channel number of inputs.

  • LayerNorm_type (str) – Layer Normalization type.

forward(x)[source]

Forward function.

Parameters

x (Tensor) – Input tensor with shape (B, C, H, W).

Returns

Forward results.

Return type

Tensor
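A minimal usage sketch of the LayerNorm wrapper, assuming it accepts the documented (B, C, H, W) input and preserves the input shape; ‘WithBias’ and ‘BiasFree’ are the two documented normalization types:

    import torch

    from mmagic.models.editors.restormer.restormer_net import LayerNorm

    norm = LayerNorm(dim=48, LayerNorm_type='WithBias')  # or 'BiasFree'
    x = torch.rand(2, 48, 64, 64)                        # (B, C, H, W)
    y = norm(x)                                          # expected to keep the input shape: (2, 48, 64, 64)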

class mmagic.models.editors.restormer.restormer_net.FeedForward(dim, ffn_expansion_factor, bias)[source]

Bases: mmengine.model.BaseModule

Gated-Dconv Feed-Forward Network (GDFN)

The original version of GDFN in “Restormer: Efficient Transformer for High-Resolution Image Restoration”.

Parameters
  • dim (int) – Channel number of inputs.

  • ffn_expansion_factor (float) – Channel expansion factor. Default: 2.66.

  • bias (bool) – The bias of convolution.

forward(x)[source]

Forward function.

Parameters

x (Tensor) – Input tensor with shape (B, C, H, W).

Returns

Forward results.

Return type

Tensor
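A minimal usage sketch of the GDFN feed-forward block, assuming it keeps the (B, C, H, W) shape of its input (the expansion factor only affects the hidden width inside the block):

    import torch

    from mmagic.models.editors.restormer.restormer_net import FeedForward

    ffn = FeedForward(dim=48, ffn_expansion_factor=2.66, bias=False)
    x = torch.rand(2, 48, 64, 64)  # (B, C, H, W)
    y = ffn(x)                     # expected to keep the input shape: (2, 48, 64, 64)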

class mmagic.models.editors.restormer.restormer_net.Attention(dim, num_heads, bias)[source]

Bases: mmengine.model.BaseModule

Multi-DConv Head Transposed Self-Attention (MDTA)

The original version of MDTA in “Restormer: Efficient Transformer for High-Resolution Image Restoration”.

Parameters
  • dim (int) – Channel number of inputs.

  • num_heads (int) – Number of attention heads.

  • bias (bool) – The bias of convolution.

forward(x)[source]

Forward function.

Parameters

x (Tensor) – Input tensor with shape (B, C, H, W).

Returns

Forward results.

Return type

Tensor
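A minimal usage sketch of the MDTA block, assuming (as in the Restormer paper) that attention is computed across channels, the (B, C, H, W) shape is preserved, and dim is divisible by num_heads:

    import torch

    from mmagic.models.editors.restormer.restormer_net import Attention

    attn = Attention(dim=48, num_heads=2, bias=False)  # dim assumed divisible by num_heads
    x = torch.rand(2, 48, 64, 64)                      # (B, C, H, W)
    y = attn(x)                                        # expected output shape: (2, 48, 64, 64)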

class mmagic.models.editors.restormer.restormer_net.TransformerBlock(dim, num_heads, ffn_expansion_factor, bias, LayerNorm_type)[source]

Bases: mmengine.model.BaseModule

Transformer Block.

The original version of Transformer Block in “Restormer: Efficient Transformer for High-Resolution Image Restoration”.

Parameters
  • dim (int) – Channel number of inputs.

  • num_heads (int) – Number of attention heads.

  • ffn_expansion_factor (float) – Channel expansion factor. Default: 2.66.

  • bias (bool) – The bias of convolution.

  • LayerNorm_type (str) – Layer Normalization type.

forward(x)[source]

Forward function.

Parameters

x (Tensor) – Input tensor with shape (B, C, H, W).

Returns

Forward results.

Return type

Tensor
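A minimal usage sketch of a single Transformer block (MDTA followed by GDFN with the selected layer normalization), assuming shape-preserving behaviour on (B, C, H, W) inputs:

    import torch

    from mmagic.models.editors.restormer.restormer_net import TransformerBlock

    block = TransformerBlock(
        dim=48,
        num_heads=2,
        ffn_expansion_factor=2.66,
        bias=False,
        LayerNorm_type='WithBias')
    x = torch.rand(2, 48, 64, 64)  # (B, C, H, W)
    y = block(x)                   # expected output shape: (2, 48, 64, 64)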

class mmagic.models.editors.restormer.restormer_net.OverlapPatchEmbed(in_c=3, embed_dim=48, bias=False)[source]

Bases: mmengine.model.BaseModule

Overlapped image patch embedding with 3x3 Conv.

Parameters
  • in_c (int, optional) – Channel number of inputs. Default: 3

  • embed_dim (int, optional) – Embedding dimension. Default: 48

  • bias (bool, optional) – The bias of convolution. Default: False

forward(x)[source]

Forward function.

Parameters

x (Tensor) – Input tensor with shape (B, C, H, W).

Returns

Forward results.

Return type

Tensor
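A minimal usage sketch of the patch embedding, assuming the 3x3 convolution is padded so that spatial resolution is preserved while the channel count becomes embed_dim:

    import torch

    from mmagic.models.editors.restormer.restormer_net import OverlapPatchEmbed

    embed = OverlapPatchEmbed(in_c=3, embed_dim=48, bias=False)
    img = torch.rand(2, 3, 128, 128)  # (B, 3, H, W)
    feat = embed(img)                 # expected shape: (2, 48, 128, 128)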

class mmagic.models.editors.restormer.restormer_net.Downsample(n_feat)[source]

Bases: mmengine.model.BaseModule

Downsample modules.

Parameters

n_feat (int) – Channel number of features.

forward(x)[source]

Forward function.

Parameters

x (Tensor) – Input tensor with shape (B, C, H, W).

Returns

Forward results.

Return type

Tensor
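A minimal usage sketch of the downsampling module. In the reference Restormer design, downsampling halves the spatial resolution and doubles the channel count via pixel-unshuffle; that output shape is an assumption here, so the example only prints it:

    import torch

    from mmagic.models.editors.restormer.restormer_net import Downsample

    down = Downsample(n_feat=48)
    x = torch.rand(2, 48, 64, 64)  # (B, C, H, W)
    y = down(x)
    print(y.shape)                 # reference design: torch.Size([2, 96, 32, 32])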

class mmagic.models.editors.restormer.restormer_net.Upsample(n_feat)[source]

Bases: mmengine.model.BaseModule

Upsample modules.

Parameters

n_feat (int) – Channel number of features.

forward(x)[source]

Forward function.

Parameters

x (Tensor) – Input tensor with shape (B, C, H, W).

Returns

Forward results.

Return type

Tensor
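The matching upsampling sketch; in the reference design this doubles the spatial resolution and halves the channel count via pixel-shuffle (again an assumption about the output shape):

    import torch

    from mmagic.models.editors.restormer.restormer_net import Upsample

    up = Upsample(n_feat=96)
    x = torch.rand(2, 96, 32, 32)  # (B, C, H, W)
    y = up(x)
    print(y.shape)                 # reference design: torch.Size([2, 48, 64, 64])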

class mmagic.models.editors.restormer.restormer_net.Restormer(inp_channels=3, out_channels=3, dim=48, num_blocks=[4, 6, 6, 8], num_refinement_blocks=4, heads=[1, 2, 4, 8], ffn_expansion_factor=2.66, bias=False, LayerNorm_type='WithBias', dual_pixel_task=False, dual_keys=['imgL', 'imgR'])[source]

Bases: mmengine.model.BaseModule

A PyTorch implementation of “Restormer: Efficient Transformer for High-Resolution Image Restoration”. Reference repo: https://github.com/swz30/Restormer.

Parameters
  • inp_channels (int) – Number of input image channels. Default: 3.

  • out_channels (int) – Number of output image channels. Default: 3.

  • dim (int) – Number of feature channels. Default: 48.

  • num_blocks (List(int)) – Depth of each Transformer layer. Default: [4, 6, 6, 8].

  • num_refinement_blocks (int) – Number of refinement blocks. Default: 4.

  • heads (List(int)) – Number of attention heads in different layers. Default: [1, 2, 4, 8].

  • ffn_expansion_factor (float) – Ratio of feed forward network expansion. Default: 2.66.

  • bias (bool) – The bias of convolution. Default: False

  • LayerNorm_type (str, optional) – Layer normalization type. Options: ‘WithBias’, ‘BiasFree’. Default: ‘WithBias’.

  • dual_pixel_task (bool) – Set True only for the dual-pixel defocus deblurring task; in that case also set inp_channels=6. Default: False.

  • dual_keys (List) – Keys of dual images in inputs. Default: [‘imgL’, ‘imgR’].

forward(inp_img)[source]

Forward function.

Parameters

inp_img (Tensor) – Input tensor with shape (B, C, H, W).

Returns

Forward results.

Return type

Tensor
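A minimal end-to-end sketch with the default configuration, assuming the documented (B, C, H, W) input and an output of the same resolution; since the reference design downsamples three times, H and W are chosen here as multiples of 8 (an assumption, not stated in the docstring):

    import torch

    from mmagic.models.editors.restormer.restormer_net import Restormer

    model = Restormer(
        inp_channels=3,
        out_channels=3,
        dim=48,
        num_blocks=[4, 6, 6, 8],
        num_refinement_blocks=4,
        heads=[1, 2, 4, 8],
        ffn_expansion_factor=2.66,
        bias=False,
        LayerNorm_type='WithBias')

    img = torch.rand(1, 3, 128, 128)  # (B, C, H, W), H and W divisible by 8
    out = model(img)                  # expected restored image: (1, 3, 128, 128)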
