mmagic.models.editors.pggan.pggan_modules

Module Contents

Classes

EqualizedLR

Equalized Learning Rate.

PixelNorm

Pixel Normalization.

EqualizedLRConvModule

Equalized LR ConvModule.

EqualizedLRConvUpModule

Equalized LR (Upsample + Conv) Module.

EqualizedLRConvDownModule

Equalized LR (Conv + Downsample) Module.

EqualizedLRLinearModule

Equalized LR LinearModule.

PGGANNoiseTo2DFeat

Module that converts a noise vector into a 4x4 2D feature map.

PGGANDecisionHead

Decision head producing the final score in the PGGAN discriminator.

MiniBatchStddevLayer

Minibatch standard deviation.

Functions

equalized_lr(module[, name, gain, mode, lr_mul])

Equalized Learning Rate.

pixel_norm(x[, eps])

Pixel Normalization.

class mmagic.models.editors.pggan.pggan_modules.EqualizedLR(name='weight', gain=2 ** 0.5, mode='fan_in', lr_mul=1.0)[source]

Equalized Learning Rate.

This trick is proposed in: Progressive Growing of GANs for Improved Quality, Stability, and Variation

The general idea is to dynamically rescale the weight in training instead of in initializing so that the variance of the responses in each layer is guaranteed with some statistical properties.

Note that this function is always combined with a convolution module which is initialized with \(\mathcal{N}(0, 1)\).

Parameters
  • name (str, optional) – The name of weights. Defaults to ‘weight’.

  • gain (float, optional) – The constant gain used to scale the weight. Defaults to 2 ** 0.5.

  • mode (str, optional) – The mode of computing fan which is the same as kaiming_init in pytorch. You can choose one from [‘fan_in’, ‘fan_out’]. Defaults to ‘fan_in’.

  • lr_mul (float, optional) – Multiplier applied to the rescaled weight, acting as an equalized learning-rate multiplier. Defaults to 1.0.

compute_weight(module)[source]

Compute weight with equalized learning rate.

Parameters

module (nn.Module) – A module that is wrapped with equalized lr.

Returns

Updated weight.

Return type

torch.Tensor
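
Example (a minimal sketch of the rescaling, assuming mode=’fan_in’; illustrative only, not the exact library code):

>>> import torch
>>> def compute_equalized_weight(weight_orig, gain=2 ** 0.5, lr_mul=1.0):
...     # fan_in of a (out, in, *kernel) weight: elements feeding one output
...     fan_in = weight_orig[0].numel()
...     # rescale so the effective weight has std ~ gain / sqrt(fan_in)
...     return weight_orig * gain * (1.0 / fan_in) ** 0.5 * lr_mul
>>> w = torch.randn(64, 16, 3, 3)        # stored weight, drawn from N(0, 1)
>>> w_hat = compute_equalized_weight(w)  # std ~ sqrt(2 / (16 * 3 * 3))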

__call__(module, inputs)[source]

Standard interface for forward pre hooks.

static apply(module, name, gain=2 ** 0.5, mode='fan_in', lr_mul=1.0)[source]

Apply function.

This function is to register an equalized learning rate hook in an nn.Module.

Parameters
  • module (nn.Module) – Module to be wrapped.

  • name (str, optional) – The name of weights. Defaults to ‘weight’.

  • gain (float, optional) – The constant gain used to scale the weight. Defaults to 2 ** 0.5.

  • mode (str, optional) – The mode of computing fan which is the same as kaiming_init in pytorch. You can choose one from [‘fan_in’, ‘fan_out’]. Defaults to ‘fan_in’.

  • lr_mul (float, optional) – Learning-rate multiplier applied to the rescaled weight. Defaults to 1.0.

Returns

Module that is registered with equalized lr hook.

Return type

nn.Module

mmagic.models.editors.pggan.pggan_modules.equalized_lr(module, name='weight', gain=2 ** 0.5, mode='fan_in', lr_mul=1.0)[source]

Equalized Learning Rate.

This trick is proposed in: Progressive Growing of GANs for Improved Quality, Stability, and Variation

The general idea is to dynamically rescale the weight in training instead of in initializing so that the variance of the responses in each layer is guaranteed with some statistical properties.

Note that this function is always combined with a convolution module which is initialized with \(\mathcal{N}(0, 1)\).

Parameters
  • module (nn.Module) – Module to be wrapped.

  • name (str, optional) – The name of weights. Defaults to ‘weight’.

  • gain (float, optional) – The constant gain used to scale the weight. Defaults to 2 ** 0.5.

  • mode (str, optional) – The mode of computing fan which is the same as kaiming_init in pytorch. You can choose one from [‘fan_in’, ‘fan_out’]. Defaults to ‘fan_in’.

  • lr_mul (float, optional) – Learning-rate multiplier applied to the rescaled weight. Defaults to 1.0.

Returns

Module that is registered with equalized lr hook.

Return type

nn.Module
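
Example (a minimal usage sketch based on the signature above; the argument values shown are the defaults):

>>> import torch
>>> import torch.nn as nn
>>> from mmagic.models.editors.pggan.pggan_modules import equalized_lr
>>> # wrap a plain conv with the equalized-lr forward pre-hook
>>> conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)
>>> conv = equalized_lr(conv, name='weight', gain=2 ** 0.5, mode='fan_in')
>>> out = conv(torch.randn(1, 3, 8, 8))  # weight is rescaled on each forward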

mmagic.models.editors.pggan.pggan_modules.pixel_norm(x, eps=1e-06)[source]

Pixel Normalization.

This normalization is proposed in: Progressive Growing of GANs for Improved Quality, Stability, and Variation

Parameters
  • x (torch.Tensor) – Tensor to be normalized.

  • eps (float, optional) – Epsilon to avoid division by zero. Defaults to 1e-6.

Returns

Normalized tensor.

Return type

torch.Tensor
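
The normalization rescales each pixel’s feature vector to roughly unit length over the channel dimension, as in the PGGAN paper: \(b_{x,y} = a_{x,y} / \sqrt{\frac{1}{C}\sum_{c=0}^{C-1}\left(a_{x,y}^{c}\right)^2 + \epsilon}\). A usage sketch together with an equivalent hand-written computation (illustrative, not the library internals):

>>> import torch
>>> from mmagic.models.editors.pggan.pggan_modules import pixel_norm
>>> x = torch.randn(2, 16, 4, 4)
>>> y = pixel_norm(x, eps=1e-06)
>>> # the same normalization written out by hand
>>> y_ref = x / torch.sqrt(torch.mean(x ** 2, dim=1, keepdim=True) + 1e-06)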

class mmagic.models.editors.pggan.pggan_modules.PixelNorm(in_channels=None, eps=1e-06)[source]

Bases: mmengine.model.BaseModule

Pixel Normalization.

This module is proposed in: Progressive Growing of GANs for Improved Quality, Stability, and Variation

Parameters

eps (float, optional) – Epsilon value. Defaults to 1e-6.

_abbr_ = 'pn'[source]
forward(x)[source]

Forward function.

Parameters

x (torch.Tensor) – Tensor to be normalized.

Returns

Normalized tensor.

Return type

torch.Tensor
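
Example (module-form usage sketch; the module applies the same normalization as pixel_norm in its forward pass):

>>> import torch
>>> from mmagic.models.editors.pggan.pggan_modules import PixelNorm
>>> pn = PixelNorm()
>>> out = pn(torch.randn(2, 16, 4, 4))  # same shape as the input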

class mmagic.models.editors.pggan.pggan_modules.EqualizedLRConvModule(*args, equalized_lr_cfg=dict(mode='fan_in'), **kwargs)[source]

Bases: mmcv.cnn.bricks.ConvModule

Equalized LR ConvModule.

In this module, we inherit default mmcv.cnn.ConvModule and adopt equalized lr in convolution. The equalized learning rate is proposed in: Progressive Growing of GANs for Improved Quality, Stability, and Variation

Note that, the initialization of self.conv will be overwritten as \(\mathcal{N}(0, 1)\).

Parameters

equalized_lr_cfg (dict | None, optional) – Config for EqualizedLR. If None, equalized learning rate is ignored. Defaults to dict(mode=’fan_in’).

_init_conv_weights()[source]

Initialize conv weights as described in PGGAN.
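
Example (a minimal usage sketch; apart from equalized_lr_cfg, the arguments follow mmcv.cnn.ConvModule):

>>> import torch
>>> from mmagic.models.editors.pggan.pggan_modules import EqualizedLRConvModule
>>> # 3x3 conv with equalized learning rate
>>> conv = EqualizedLRConvModule(
...     in_channels=16,
...     out_channels=32,
...     kernel_size=3,
...     padding=1,
...     equalized_lr_cfg=dict(mode='fan_in'))
>>> out = conv(torch.randn(2, 16, 8, 8))  # -> (2, 32, 8, 8)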

class mmagic.models.editors.pggan.pggan_modules.EqualizedLRConvUpModule(*args, upsample=dict(type='nearest', scale_factor=2), **kwargs)[source]

Bases: EqualizedLRConvModule

Equalized LR (Upsample + Conv) Module.

In this module, we inherit EqualizedLRConvModule and adopt upsampling before convolution. As for upsampling, in addition to the sampling layer in MMCV, we also offer the “fused_nn” type. “fused_nn” denotes fusing upsampling and convolution. The fusion is modified from the official Tensorflow implementation in: https://github.com/tkarras/progressive_growing_of_gans/blob/master/networks.py#L86

Parameters

upsample (dict | None, optional) – Config for the upsampling operation. If None, upsampling is ignored. To behave as the official PGGAN in Tensorflow, you should set it as ``dict(type='fused_nn')``. Defaults to ``dict(type='nearest', scale_factor=2)``.

forward(x, **kwargs)[source]

Forward function.

Parameters

x (Tensor) – Input tensor with shape (n, c, h, w).

Returns

Forward results.

Return type

Tensor

static fused_nn_hook(module, inputs)[source]

Standard interface for forward pre hooks.
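
Example (a minimal usage sketch with the default nearest-neighbor upsampling):

>>> import torch
>>> from mmagic.models.editors.pggan.pggan_modules import EqualizedLRConvUpModule
>>> # upsample by 2x, then apply a 3x3 equalized-lr conv
>>> up_conv = EqualizedLRConvUpModule(
...     in_channels=16,
...     out_channels=16,
...     kernel_size=3,
...     padding=1,
...     upsample=dict(type='nearest', scale_factor=2))
>>> out = up_conv(torch.randn(2, 16, 4, 4))  # -> (2, 16, 8, 8)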

class mmagic.models.editors.pggan.pggan_modules.EqualizedLRConvDownModule(*args, downsample=dict(type='fused_pool'), **kwargs)[source]

Bases: EqualizedLRConvModule

Equalized LR (Conv + Downsample) Module.

In this module, we inherit EqualizedLRConvModule and adopt downsampling after convolution. As for downsampling, we provide two modes of “avgpool” and “fused_pool”. “avgpool” denotes the commonly used average pooling operation, while “fused_pool” represents fusing downsampling and convolution. The fusion is modified from the official Tensorflow implementation in: https://github.com/tkarras/progressive_growing_of_gans/blob/master/networks.py#L109

Parameters

downsample (dict | None, optional) – Config for downsampling operation. If None, downsampling is ignored. Currently, we support the types of [“avgpool”, “fused_pool”]. Defaults to dict(type=’fused_pool’).

forward(x, **kwargs)[source]

Forward function.

Parameters

x (Tensor) – Input tensor with shape (n, c, h, w).

Returns

Forward results.

Return type

torch.Tensor

static fused_avgpool_hook(module, inputs)[source]

Standard interface for forward pre hooks.
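
Example (a minimal usage sketch; “avgpool” is assumed to halve the spatial size with a stride-2 average pooling):

>>> import torch
>>> from mmagic.models.editors.pggan.pggan_modules import EqualizedLRConvDownModule
>>> # 3x3 equalized-lr conv followed by average pooling
>>> down_conv = EqualizedLRConvDownModule(
...     in_channels=16,
...     out_channels=16,
...     kernel_size=3,
...     padding=1,
...     downsample=dict(type='avgpool'))
>>> out = down_conv(torch.randn(2, 16, 8, 8))  # -> (2, 16, 4, 4)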

class mmagic.models.editors.pggan.pggan_modules.EqualizedLRLinearModule(*args, equalized_lr_cfg=dict(mode='fan_in'), **kwargs)[source]

Bases: torch.nn.Linear

Equalized LR LinearModule.

In this module, we adopt equalized lr in nn.Linear. The equalized learning rate is proposed in: Progressive Growing of GANs for Improved Quality, Stability, and Variation

Note that, the initialization of self.weight will be overwritten as \(\mathcal{N}(0, 1)\).

Parameters

equalized_lr_cfg (dict | None, optional) – Config for EqualizedLR. If None, equalized learning rate is ignored. Defaults to dict(mode=’fan_in’).

_init_linear_weights()[source]

Initialize linear weights as described in PGGAN.
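
Example (a minimal usage sketch; positional arguments follow torch.nn.Linear):

>>> import torch
>>> from mmagic.models.editors.pggan.pggan_modules import EqualizedLRLinearModule
>>> # a fully connected layer with equalized learning rate
>>> fc = EqualizedLRLinearModule(512, 256, equalized_lr_cfg=dict(mode='fan_in'))
>>> out = fc(torch.randn(4, 512))  # -> (4, 256)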

class mmagic.models.editors.pggan.pggan_modules.PGGANNoiseTo2DFeat(noise_size, out_channels, act_cfg=dict(type='LeakyReLU', negative_slope=0.2), norm_cfg=dict(type='PixelNorm'), normalize_latent=True, order=('linear', 'act', 'norm'))[source]

Bases: mmengine.model.BaseModule

Base module for all modules in OpenMMLab. BaseModule is a wrapper of torch.nn.Module with additional functionality for parameter initialization. Compared with torch.nn.Module, BaseModule mainly adds three attributes.

  • init_cfg: the config to control the initialization.

  • init_weights: The function of parameter initialization and recording initialization information.

  • _params_init_info: Used to track the parameter initialization information. This attribute exists only during the execution of init_weights.

Note

PretrainedInit has a higher priority than any other initializer. The loaded pretrained weights will overwrite the previously initialized weights.

Parameters

init_cfg (dict or List[dict], optional) – Initialization config dict.

forward(x)[source]

Forward function.

Parameters

x (Tensor) – Input noise tensor with shape (n, c).

Returns

Forward results with shape (n, c, 4, 4).

Return type

Tensor
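
Example (a minimal usage sketch based on the documented signature and forward shapes):

>>> import torch
>>> from mmagic.models.editors.pggan.pggan_modules import PGGANNoiseTo2DFeat
>>> # map a flat latent code to the 4x4 starting feature map of the generator
>>> noise_to_feat = PGGANNoiseTo2DFeat(noise_size=512, out_channels=512)
>>> feat = noise_to_feat(torch.randn(4, 512))  # -> (4, 512, 4, 4)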

class mmagic.models.editors.pggan.pggan_modules.PGGANDecisionHead(in_channels, mid_channels, out_channels, bias=True, equalized_lr_cfg=dict(gain=1), act_cfg=dict(type='LeakyReLU', negative_slope=0.2), out_act=None)[source]

Bases: mmengine.model.BaseModule

Base module for all modules in OpenMMLab. BaseModule is a wrapper of torch.nn.Module with additional functionality for parameter initialization. Compared with torch.nn.Module, BaseModule mainly adds three attributes.

  • init_cfg: the config to control the initialization.

  • init_weights: The function of parameter initialization and recording initialization information.

  • _params_init_info: Used to track the parameter initialization information. This attribute exists only during the execution of init_weights.

Note

PretrainedInit has a higher priority than any other initializer. The loaded pretrained weights will overwrite the previously initialized weights.

Parameters

init_cfg (dict or List[dict], optional) – Initialization config dict.

forward(x)[source]

Forward function.

Parameters

x (Tensor) – Input tensor with shape (n, c, h, w).

Returns

Forward results.

Return type

Tensor
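
Example (a hypothetical usage sketch; a 4x4 input feature map is assumed here, as at the final stage of the PGGAN discriminator):

>>> import torch
>>> from mmagic.models.editors.pggan.pggan_modules import PGGANDecisionHead
>>> # reduce the last discriminator feature map to a per-sample score
>>> head = PGGANDecisionHead(in_channels=512, mid_channels=512, out_channels=1)
>>> score = head(torch.randn(4, 512, 4, 4))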

class mmagic.models.editors.pggan.pggan_modules.MiniBatchStddevLayer(group_size=4, eps=1e-08, gather_all_batch=False)[source]

Bases: mmengine.model.BaseModule

Minibatch standard deviation.

Parameters
  • group_size (int, optional) – The size of groups in batch dimension. Defaults to 4.

  • eps (float, optional) – Epsilon value to avoid computation error. Defaults to 1e-8.

  • gather_all_batch (bool, optional) – Whether to gather the batch from all GPUs. Defaults to False.

forward(x)[source]

Forward function.

Parameters

x (Tensor) – Input tensor with shape (n, c, h, w).

Returns

Forward results.

Return type

Tensor
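
The layer appends one extra feature map holding the average standard deviation computed within each group of samples, letting the discriminator sense the variety of a minibatch. A minimal sketch of the computation (illustrative, not the exact library code; it assumes the batch size is divisible by group_size):

>>> import torch
>>> def minibatch_stddev(x, group_size=4, eps=1e-08):
...     n, c, h, w = x.shape
...     y = x.reshape(group_size, -1, c, h, w)      # split batch into groups
...     y = y - y.mean(dim=0, keepdim=True)         # center within each group
...     y = torch.sqrt(y.pow(2).mean(dim=0) + eps)  # per-feature stddev
...     y = y.mean(dim=(1, 2, 3), keepdim=True)     # one scalar per group
...     y = y.repeat(group_size, 1, h, w)           # back to (n, 1, h, w)
...     return torch.cat([x, y], dim=1)             # (n, c + 1, h, w)
>>> out = minibatch_stddev(torch.randn(8, 16, 4, 4))  # -> (8, 17, 4, 4)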
