mmagic.models.editors.pggan.pggan_modules¶
Module Contents¶
Classes¶
| EqualizedLR | Equalized Learning Rate. |
| PixelNorm | Pixel Normalization. |
| EqualizedLRConvModule | Equalized LR ConvModule. |
| EqualizedLRConvUpModule | Equalized LR (Upsample + Conv) Module. |
| EqualizedLRConvDownModule | Equalized LR (Conv + Downsample) Module. |
| EqualizedLRLinearModule | Equalized LR LinearModule. |
| PGGANNoiseTo2DFeat | Module mapping a noise vector to a 2D feature map for the PGGAN generator. |
| PGGANDecisionHead | Decision head for PGGAN. |
| MiniBatchStddevLayer | Minibatch standard deviation. |
Functions¶
| equalized_lr | Equalized Learning Rate. |
| pixel_norm | Pixel Normalization. |
- class mmagic.models.editors.pggan.pggan_modules.EqualizedLR(name='weight', gain=2 ** 0.5, mode='fan_in', lr_mul=1.0)[source]¶
Equalized Learning Rate.
This trick is proposed in: Progressive Growing of GANs for Improved Quality, Stability, and Variation
The general idea is to rescale the weight dynamically during training instead of at initialization, so that the variance of the responses in each layer is guaranteed to have the desired statistical properties.
Note that this function is always combined with a convolution module which is initialized with \(\mathcal{N}(0, 1)\).
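Concretely (a sketch of the idea rather than a statement of the exact implementation), the weight used in each forward pass is recomputed on the fly, roughly as \(\hat{w} = w \cdot \mathrm{gain} / \sqrt{\mathrm{fan}} \cdot \mathrm{lr\_mul}\), where fan is derived from the weight shape according to mode.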
- Parameters
name (str, optional) – The name of weights. Defaults to 'weight'.
mode (str, optional) – The mode of computing fan, which is the same as kaiming_init in pytorch. You can choose one from ['fan_in', 'fan_out']. Defaults to 'fan_in'.
- compute_weight(module)[source]¶
Compute weight with equalized learning rate.
- Parameters
module (nn.Module) – A module that is wrapped with equalized lr.
- Returns
Updated weight.
- Return type
torch.Tensor
- static apply(module, name, gain=2 ** 0.5, mode='fan_in', lr_mul=1.0)[source]¶
Apply function.
This function registers an equalized learning rate hook on an nn.Module.
- Parameters
module (nn.Module) – Module to be wrapped.
name (str, optional) – The name of weights. Defaults to 'weight'.
mode (str, optional) – The mode of computing fan, which is the same as kaiming_init in pytorch. You can choose one from ['fan_in', 'fan_out']. Defaults to 'fan_in'.
- Returns
Module that is registered with equalized lr hook.
- Return type
nn.Module
- mmagic.models.editors.pggan.pggan_modules.equalized_lr(module, name='weight', gain=2 ** 0.5, mode='fan_in', lr_mul=1.0)[source]¶
Equalized Learning Rate.
This trick is proposed in: Progressive Growing of GANs for Improved Quality, Stability, and Variation
The general idea is to rescale the weight dynamically during training instead of at initialization, so that the variance of the responses in each layer is guaranteed to have the desired statistical properties.
Note that this function is always combined with a convolution module which is initialized with \(\mathcal{N}(0, 1)\).
- Parameters
module (nn.Module) – Module to be wrapped.
name (str, optional) – The name of weights. Defaults to 'weight'.
mode (str, optional) – The mode of computing fan, which is the same as kaiming_init in pytorch. You can choose one from ['fan_in', 'fan_out']. Defaults to 'fan_in'.
- Returns
Module that is registered with equalized lr hook.
- Return type
nn.Module
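A minimal usage sketch, assuming a plain nn.Conv2d initialized with \(\mathcal{N}(0, 1)\) as noted above (the tensor shapes are illustrative only):

import torch
import torch.nn as nn
from mmagic.models.editors.pggan.pggan_modules import equalized_lr

# Wrap a convolution so its weight is rescaled dynamically at every forward
# pass instead of being scaled once at initialization.
conv = nn.Conv2d(16, 32, kernel_size=3, padding=1)
nn.init.normal_(conv.weight, mean=0.0, std=1.0)
conv = equalized_lr(conv, mode='fan_in')

x = torch.randn(1, 16, 8, 8)
out = conv(x)  # the registered hook rescales the weight before the convolution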
- mmagic.models.editors.pggan.pggan_modules.pixel_norm(x, eps=1e-06)[source]¶
Pixel Normalization.
This normalization is proposed in: Progressive Growing of GANs for Improved Quality, Stability, and Variation
- Parameters
x (torch.Tensor) – Tensor to be normalized.
eps (float, optional) – Epsilon to avoid division by zero. Defaults to 1e-6.
- Returns
Normalized tensor.
- Return type
torch.Tensor
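A minimal sketch of what this normalization computes, assuming the channel-wise formulation from the PGGAN paper (the helper name pixel_norm_sketch is for illustration only):

import torch

def pixel_norm_sketch(x: torch.Tensor, eps: float = 1e-06) -> torch.Tensor:
    """Normalize each pixel's feature vector across the channel dimension."""
    # b_{x,y} = a_{x,y} / sqrt(mean_c(a_{x,y,c}^2) + eps)
    return x * torch.rsqrt(torch.mean(x ** 2, dim=1, keepdim=True) + eps)

feat = torch.randn(2, 512, 4, 4)
normed = pixel_norm_sketch(feat)  # same shape as the input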
- class mmagic.models.editors.pggan.pggan_modules.PixelNorm(in_channels=None, eps=1e-06)[source]¶
Bases:
mmengine.model.BaseModule
Pixel Normalization.
This module is proposed in: Progressive Growing of GANs for Improved Quality, Stability, and Variation
- Parameters
eps (float, optional) – Epsilon value. Defaults to 1e-6.
- class mmagic.models.editors.pggan.pggan_modules.EqualizedLRConvModule(*args, equalized_lr_cfg=dict(mode='fan_in'), **kwargs)[source]¶
Bases:
mmcv.cnn.bricks.ConvModule
Equalized LR ConvModule.
In this module, we inherit the default mmcv.cnn.ConvModule and adopt equalized lr in convolution. The equalized learning rate is proposed in: Progressive Growing of GANs for Improved Quality, Stability, and Variation
Note that the initialization of self.conv will be overwritten as \(\mathcal{N}(0, 1)\).
- Parameters
equalized_lr_cfg (dict | None, optional) – Config for EqualizedLR. If None, equalized learning rate is ignored. Defaults to dict(mode='fan_in').
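A hedged construction example; since this module inherits mmcv.cnn.ConvModule, the positional arguments below follow ConvModule, and the concrete channel numbers and act_cfg are illustrative assumptions:

from mmagic.models.editors.pggan.pggan_modules import EqualizedLRConvModule

# A 3x3 conv with equalized lr; arguments other than equalized_lr_cfg are
# forwarded to mmcv.cnn.ConvModule.
conv = EqualizedLRConvModule(
    in_channels=64,
    out_channels=128,
    kernel_size=3,
    padding=1,
    act_cfg=dict(type='LeakyReLU', negative_slope=0.2),
    equalized_lr_cfg=dict(mode='fan_in'))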
- class mmagic.models.editors.pggan.pggan_modules.EqualizedLRConvUpModule(*args, upsample=dict(type='nearest', scale_factor=2), **kwargs)[source]¶
Bases:
EqualizedLRConvModule
Equalized LR (Upsample + Conv) Module.
In this module, we inherit EqualizedLRConvModule and adopt upsampling before convolution. As for upsampling, in addition to the sampling layer in MMCV, we also offer the "fused_nn" type. "fused_nn" denotes fusing upsampling and convolution. The fusion is modified from the official Tensorflow implementation in: https://github.com/tkarras/progressive_growing_of_gans/blob/master/networks.py#L86
- Parameters
upsample (dict | None, optional) – Config for the upsampling operation. If None, you should set it as the official PGGAN in Tensorflow, as dict(type='fused_nn'). Defaults to dict(type='nearest', scale_factor=2).
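A sketch of the two upsampling styles described above; the exact keys accepted by the "fused_nn" config are an assumption, so check the module before relying on them:

from mmagic.models.editors.pggan.pggan_modules import EqualizedLRConvUpModule

# Default behavior: nearest-neighbour upsampling followed by the convolution.
up_conv = EqualizedLRConvUpModule(
    in_channels=128, out_channels=64, kernel_size=3, padding=1,
    upsample=dict(type='nearest', scale_factor=2))

# Assumed config for the fused upsample + conv path mentioned above.
fused_up_conv = EqualizedLRConvUpModule(
    in_channels=128, out_channels=64, kernel_size=3, padding=1,
    upsample=dict(type='fused_nn'))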
- class mmagic.models.editors.pggan.pggan_modules.EqualizedLRConvDownModule(*args, downsample=dict(type='fused_pool'), **kwargs)[source]¶
Bases:
EqualizedLRConvModule
Equalized LR (Conv + Downsample) Module.
In this module, we inherit EqualizedLRConvModule and adopt downsampling after convolution. As for downsampling, we provide two modes of "avgpool" and "fused_pool". "avgpool" denotes the commonly used average pooling operation, while "fused_pool" represents fusing downsampling and convolution. The fusion is modified from the official Tensorflow implementation in: https://github.com/tkarras/progressive_growing_of_gans/blob/master/networks.py#L109
- Parameters
downsample (dict | None, optional) – Config for the downsampling operation. If None, downsampling is ignored. Currently, we support the types of ["avgpool", "fused_pool"]. Defaults to dict(type='fused_pool').
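Analogously, a short sketch for the downsampling module using the two documented modes (extra keys, e.g. a pooling kernel size for "avgpool", may be required and are not shown):

from mmagic.models.editors.pggan.pggan_modules import EqualizedLRConvDownModule

# Fused conv + downsample (the documented default).
down_conv = EqualizedLRConvDownModule(
    in_channels=64, out_channels=128, kernel_size=3, padding=1,
    downsample=dict(type='fused_pool'))

# Plain average pooling after the convolution.
avg_down_conv = EqualizedLRConvDownModule(
    in_channels=64, out_channels=128, kernel_size=3, padding=1,
    downsample=dict(type='avgpool'))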
- class mmagic.models.editors.pggan.pggan_modules.EqualizedLRLinearModule(*args, equalized_lr_cfg=dict(mode='fan_in'), **kwargs)[source]¶
Bases:
torch.nn.Linear
Equalized LR LinearModule.
In this module, we adopt equalized lr in nn.Linear. The equalized learning rate is proposed in: Progressive Growing of GANs for Improved Quality, Stability, and Variation
Note that the initialization of self.weight will be overwritten as \(\mathcal{N}(0, 1)\).
- Parameters
equalized_lr_cfg (dict | None, optional) – Config for EqualizedLR. If None, equalized learning rate is ignored. Defaults to dict(mode='fan_in').
- class mmagic.models.editors.pggan.pggan_modules.PGGANNoiseTo2DFeat(noise_size, out_channels, act_cfg=dict(type='LeakyReLU', negative_slope=0.2), norm_cfg=dict(type='PixelNorm'), normalize_latent=True, order=('linear', 'act', 'norm'))[source]¶
Bases:
mmengine.model.BaseModule
Base module for all modules in openmmlab.
BaseModule is a wrapper of torch.nn.Module with additional functionality for parameter initialization. Compared with torch.nn.Module, BaseModule mainly adds three attributes:
- init_cfg: the config to control the initialization.
- init_weights: the function for parameter initialization and for recording initialization information.
- _params_init_info: used to track parameter initialization information. This attribute only exists while init_weights is being executed.
Note
PretrainedInit has a higher priority than any other initializer. The loaded pretrained weights will overwrite the previously initialized weights.
- Parameters
init_cfg (dict or List[dict], optional) – Initialization config dict.
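A hypothetical usage sketch based only on the constructor signature above; the 4x4 output resolution is an assumption taken from the standard PGGAN generator stem, not stated on this page:

import torch
from mmagic.models.editors.pggan.pggan_modules import PGGANNoiseTo2DFeat

noise2feat = PGGANNoiseTo2DFeat(noise_size=512, out_channels=512)
noise = torch.randn(8, 512)   # (batch, noise_size)
feat = noise2feat(noise)      # expected: an (8, 512, 4, 4) 2D feature map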
- class mmagic.models.editors.pggan.pggan_modules.PGGANDecisionHead(in_channels, mid_channels, out_channels, bias=True, equalized_lr_cfg=dict(gain=1), act_cfg=dict(type='LeakyReLU', negative_slope=0.2), out_act=None)[source]¶
Bases:
mmengine.model.BaseModule
Base module for all modules in openmmlab.
BaseModule is a wrapper of torch.nn.Module with additional functionality for parameter initialization. Compared with torch.nn.Module, BaseModule mainly adds three attributes:
- init_cfg: the config to control the initialization.
- init_weights: the function for parameter initialization and for recording initialization information.
- _params_init_info: used to track parameter initialization information. This attribute only exists while init_weights is being executed.
Note
PretrainedInit has a higher priority than any other initializer. The loaded pretrained weights will overwrite the previously initialized weights.
- Parameters
init_cfg (dict or List[dict], optional) – Initialization config dict.
- class mmagic.models.editors.pggan.pggan_modules.MiniBatchStddevLayer(group_size=4, eps=1e-08, gather_all_batch=False)[source]¶
Bases:
mmengine.model.BaseModule
Minibatch standard deviation.
- Parameters
group_size (int, optional) – The size of groups in batch dimension. Defaults to 4.
eps (float, optional) – Epsilon value to avoid computation error. Defaults to 1e-8.
gather_all_batch (bool, optional) – Whether to gather the batch from all GPUs. Defaults to False.
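For reference, a minimal single-GPU sketch of the minibatch standard deviation trick from the PGGAN paper; the actual layer may differ in details (for example when gather_all_batch=True), and the helper name is illustrative only:

import torch

def minibatch_stddev_sketch(x: torch.Tensor, group_size: int = 4, eps: float = 1e-08) -> torch.Tensor:
    """Append a per-group stddev statistic as an extra feature map."""
    n, c, h, w = x.shape
    group_size = min(group_size, n)
    # Split the batch into groups and compute the stddev over each group.
    y = x.reshape(group_size, -1, c, h, w)
    y = torch.sqrt(y.var(dim=0, unbiased=False) + eps)
    # Average the stddev over channels and spatial positions.
    y = y.mean(dim=(1, 2, 3), keepdim=True)
    # Broadcast back to an (N, 1, H, W) map and concatenate as a new channel.
    y = y.repeat(group_size, 1, h, w)
    return torch.cat([x, y], dim=1)

feat = torch.randn(8, 512, 4, 4)
out = minibatch_stddev_sketch(feat)  # shape: (8, 513, 4, 4)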