mmagic.models.editors.restormer.restormer_net

Module Contents

Classes

- BiasFree_LayerNorm – Layer normalization without bias.
- WithBias_LayerNorm – Layer normalization with bias. The bias can be learned.
- LayerNorm – Layer normalization module.
- FeedForward – Gated-Dconv Feed-Forward Network (GDFN).
- Attention – Multi-DConv Head Transposed Self-Attention (MDTA).
- TransformerBlock – Transformer Block.
- OverlapPatchEmbed – Overlapped image patch embedding with 3x3 Conv.
- Downsample – Downsample modules.
- Upsample – Upsample modules.
- Restormer – A PyTorch impl of “Restormer: Efficient Transformer for High-Resolution Image Restoration”.

Functions

- Reshape input tensor.
- Reshape input tensor.
- class mmagic.models.editors.restormer.restormer_net.BiasFree_LayerNorm(normalized_shape)
Bases: mmengine.model.BaseModule
Layer normalization without bias.
- Parameters
normalized_shape (tuple) – The shape of inputs.
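The bias-free variant can be illustrated with a short stdlib-only sketch. Following the public Restormer reference implementation (an assumption; this page does not spell out the formula), the input is divided by its standard deviation over the channel axis and scaled by a learnable weight, with no mean subtraction and no additive bias:

```python
import math

def biasfree_layernorm(x, weight, eps=1e-5):
    """Bias-free layer norm over the last (channel) axis (sketch):
    divide by the standard deviation only -- no mean subtraction,
    no additive bias -- then scale by the learnable weight."""
    mu = sum(x) / len(x)
    var = sum((v - mu) ** 2 for v in x) / len(x)  # biased variance
    return [v / math.sqrt(var + eps) * w for v, w in zip(x, weight)]

feats = [1.0, 2.0, 3.0, 4.0]
out = biasfree_layernorm(feats, [1.0] * 4)
print(out)
```

Removing the bias (and mean shift) keeps the normalization content-independent, which the Restormer paper argues suits restoration tasks.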
- class mmagic.models.editors.restormer.restormer_net.WithBias_LayerNorm(normalized_shape)
Bases: mmengine.model.BaseModule
Layer normalization with bias. The bias can be learned.
- Parameters
normalized_shape (tuple) – The shape of inputs.
- class mmagic.models.editors.restormer.restormer_net.LayerNorm(dim, LayerNorm_type)
Bases: mmengine.model.BaseModule
Layer normalization module.
- Note: This differs from the layer normalization built into PyTorch: the variant used here (with or without bias) is selectable via LayerNorm_type.
- Parameters
dim (int) – Channel number of inputs.
LayerNorm_type (str) – Layer Normalization type.
- class mmagic.models.editors.restormer.restormer_net.FeedForward(dim, ffn_expansion_factor, bias)
Bases: mmengine.model.BaseModule
Gated-Dconv Feed-Forward Network (GDFN).
The original version of GDFN in “Restormer: Efficient Transformer for High-Resolution Image Restoration”.
- Parameters
dim (int) – Channel number of inputs.
ffn_expansion_factor (float) – Channel expansion factor. Default: 2.66.
bias (bool) – The bias of convolution.
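The core of the GDFN is a gating mechanism: the input is projected into two parallel branches, one branch is passed through GELU, and the result gates the other element-wise. The sketch below shows only that gating step (the real module also wraps it in 1x1 and 3x3 depthwise convolutions, which are omitted here):

```python
import math

def gelu(v):
    # Exact GELU via the Gaussian CDF
    return 0.5 * v * (1.0 + math.erf(v / math.sqrt(2.0)))

def gdfn_gate(branch1, branch2):
    """GDFN gating (sketch): GELU of one branch multiplies the other
    element-wise, so features can suppress or pass each other."""
    return [gelu(a) * b for a, b in zip(branch1, branch2)]

out = gdfn_gate([1.0, -1.0], [2.0, 2.0])
print(out)
```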
- class mmagic.models.editors.restormer.restormer_net.Attention(dim, num_heads, bias)
Bases: mmengine.model.BaseModule
Multi-DConv Head Transposed Self-Attention (MDTA).
The original version of MDTA in “Restormer: Efficient Transformer for High-Resolution Image Restoration”.
- Parameters
dim (int) – Channel number of inputs.
num_heads (int) – Number of attention heads.
bias (bool) – The bias of convolution.
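What makes MDTA "transposed" is that attention is computed across channels rather than across pixels: with q, k, v of shape C x N (N = H*W), the attention map is only C x C, so memory does not grow quadratically with spatial resolution. A minimal single-head sketch (omitting the depthwise convolutions and learned temperature of the real module):

```python
import math

def softmax(row):
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

def mdta(q, k, v, temperature=1.0):
    """Transposed self-attention (sketch): q, k, v are C x N matrices.
    The C x C attention map mixes channels; cost is independent of
    the square of the spatial size."""
    C, N = len(q), len(q[0])
    attn = [softmax([temperature * sum(q[i][n] * k[j][n] for n in range(N))
                     for j in range(C)])
            for i in range(C)]
    return [[sum(attn[i][j] * v[j][n] for j in range(C)) for n in range(N)]
            for i in range(C)]

q = [[1.0, 0.0], [0.0, 1.0]]
out = mdta(q, q, [[1.0, 2.0], [3.0, 4.0]])
print(out)
```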
- class mmagic.models.editors.restormer.restormer_net.TransformerBlock(dim, num_heads, ffn_expansion_factor, bias, LayerNorm_type)
Bases: mmengine.model.BaseModule
Transformer Block.
The original version of Transformer Block in “Restormer: Efficient Transformer for High-Resolution Image Restoration”.
- Parameters
dim (int) – Channel number of inputs.
num_heads (int) – Number of attention heads.
ffn_expansion_factor (float) – Channel expansion factor. Default: 2.66.
bias (bool) – The bias of convolution.
LayerNorm_type (str) – Layer Normalization type.
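The block follows the usual pre-norm residual pattern: attention and the feed-forward network each see a normalized input and are added back to the residual stream. A toy sketch with stub callables (the real sub-modules are the LayerNorm, Attention, and FeedForward classes on this page):

```python
def transformer_block(x, norm1, attn, norm2, ffn):
    """Pre-norm residual structure (sketch):
    x = x + attn(norm1(x)); x = x + ffn(norm2(x))."""
    x = [a + b for a, b in zip(x, attn(norm1(x)))]
    x = [a + b for a, b in zip(x, ffn(norm2(x)))]
    return x

identity = lambda x: x
double = lambda x: [2 * v for v in x]
out = transformer_block([1.0, 2.0], identity, double, identity, double)
print(out)
```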
- class mmagic.models.editors.restormer.restormer_net.OverlapPatchEmbed(in_c=3, embed_dim=48, bias=False)
Bases: mmengine.model.BaseModule
Overlapped image patch embedding with 3x3 Conv.
- Parameters
in_c (int, optional) – Channel number of inputs. Default: 3
embed_dim (int, optional) – embedding dimension. Default: 48
bias (bool, optional) – The bias of convolution. Default: False
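Because the embedding is a plain 3x3 convolution (with stride 1 and padding 1, assuming the reference implementation), the spatial size is preserved and only the channel count changes from in_c to embed_dim; each pixel becomes a token with overlapping receptive fields. The standard output-size formula confirms this:

```python
def conv_out_size(size, kernel=3, stride=1, padding=1):
    """Standard convolution output-size formula:
    floor((size + 2*padding - kernel) / stride) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

# 3x3 conv, stride 1, padding 1 preserves H and W,
# so the embedding keeps H x W while lifting channels 3 -> 48.
h, w = 128, 96
print(conv_out_size(h), conv_out_size(w))
```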
- class mmagic.models.editors.restormer.restormer_net.Downsample(n_feat)
Bases: mmengine.model.BaseModule
Downsample modules.
- Parameters
n_feat (int) – Channel number of features.
- class mmagic.models.editors.restormer.restormer_net.Upsample(n_feat)
Bases: mmengine.model.BaseModule
Upsample modules.
- Parameters
n_feat (int) – Channel number of features.
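In the public Restormer reference implementation (an assumption; this page only names the modules), Downsample is a 3x3 conv that halves the channels followed by PixelUnshuffle(2), and Upsample is a 3x3 conv that doubles the channels followed by PixelShuffle(2). The shape bookkeeping below shows the net effect: downsampling doubles channels while halving each spatial side, and upsampling inverts it exactly:

```python
def downsample_shape(c, h, w):
    """Reference Downsample (sketch): conv C -> C/2, then
    PixelUnshuffle(2) folds each 2x2 block into channels (x4).
    Net: (2C, H/2, W/2)."""
    c = c // 2
    return c * 4, h // 2, w // 2

def upsample_shape(c, h, w):
    """Reference Upsample (sketch): conv C -> 2C, then
    PixelShuffle(2) spreads channels back into space (/4).
    Net: (C/2, 2H, 2W)."""
    c = c * 2
    return c // 4, h * 2, w * 2

print(downsample_shape(48, 128, 128))
print(upsample_shape(96, 64, 64))
```

The two are inverses, which is what lets the U-shaped encoder-decoder concatenate skip features at matching scales.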
- class mmagic.models.editors.restormer.restormer_net.Restormer(inp_channels=3, out_channels=3, dim=48, num_blocks=[4, 6, 6, 8], num_refinement_blocks=4, heads=[1, 2, 4, 8], ffn_expansion_factor=2.66, bias=False, LayerNorm_type='WithBias', dual_pixel_task=False, dual_keys=['imgL', 'imgR'])
Bases: mmengine.model.BaseModule
A PyTorch implementation of “Restormer: Efficient Transformer for High-Resolution Image Restoration”. Ref repo: https://github.com/swz30/Restormer.
- Parameters
inp_channels (int) – Number of input image channels. Default: 3.
out_channels (int) – Number of output image channels. Default: 3.
dim (int) – Number of feature dimensions. Default: 48.
num_blocks (List[int]) – Depth of each Transformer layer. Default: [4, 6, 6, 8].
num_refinement_blocks (int) – Number of refinement blocks. Default: 4.
heads (List[int]) – Number of attention heads in different layers. Default: [1, 2, 4, 8].
ffn_expansion_factor (float) – Ratio of feed-forward network expansion. Default: 2.66.
bias (bool) – The bias of convolution. Default: False.
LayerNorm_type (str, optional) – Layer normalization type, one of ‘WithBias’ or ‘BiasFree’. Default: ‘WithBias’.
dual_pixel_task (bool) – True for dual-pixel defocus deblurring only; also set inp_channels=6. Default: False.
dual_keys (List[str]) – Keys of dual images in inputs. Default: [‘imgL’, ‘imgR’].
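A plausible mmagic-style config fragment instantiating Restormer with the documented defaults; key names follow the parameter list above (whether this exact dict matches a shipped mmagic config is an assumption, treat it as a sketch):

```python
# Hypothetical config fragment; keys mirror the Restormer parameters above.
model = dict(
    type='Restormer',
    inp_channels=3,
    out_channels=3,
    dim=48,
    num_blocks=[4, 6, 6, 8],
    num_refinement_blocks=4,
    heads=[1, 2, 4, 8],
    ffn_expansion_factor=2.66,
    bias=False,
    LayerNorm_type='WithBias',
    dual_pixel_task=False,   # set True (and inp_channels=6) for dual-pixel deblurring
)
```

For the dual-pixel defocus-deblurring variant, set dual_pixel_task=True, inp_channels=6, and supply both images under the dual_keys names.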