
mmagic.models.editors.biggan.biggan_generator

Module Contents

Classes

BigGANGenerator

BigGAN Generator. The implementation refers to https://github.com/ajbrock/BigGAN-PyTorch/blob/master/BigGAN.py.

class mmagic.models.editors.biggan.biggan_generator.BigGANGenerator(output_scale, noise_size=120, num_classes=0, out_channels=3, base_channels=96, input_scale=4, with_shared_embedding=True, shared_dim=128, sn_eps=1e-06, sn_style='ajbrock', split_noise=True, act_cfg=dict(type='ReLU'), upsample_cfg=dict(type='nearest', scale_factor=2), with_spectral_norm=True, auto_sync_bn=True, blocks_cfg=dict(type='BigGANGenResBlock'), arch_cfg=None, out_norm_cfg=dict(type='BN'), rgb2bgr=False, init_cfg=dict(type='ortho'))[source]

Bases: mmengine.model.BaseModule

BigGAN Generator. The implementation refers to https://github.com/ajbrock/BigGAN-PyTorch/blob/master/BigGAN.py.

In BigGAN, we use a SAGAN-based architecture composed of a self-attention block and a number of convolutional residual blocks with spectral normalization.

More details can be found in: Large Scale GAN Training for High Fidelity Natural Image Synthesis (ICLR2019).

The design of the model structure closely corresponds to the output resolution. For the original BigGAN generator, you can set output_scale as you need and use the default values of arch_cfg and blocks_cfg. If you want to customize the model, you can set the arguments in this way (a configuration sketch follows below):

arch_cfg: Config for the architecture of this generator. You can refer to the _default_arch_cfgs in the _get_default_arch_cfg function to see the format of arch_cfg. Basically, you need to provide information for each block, such as the numbers of input and output channels, whether to perform upsampling, etc.

blocks_cfg: Config for the convolution block. You can replace the block type with your registered customized block and adjust block params here. However, note that some params are shared among these blocks, such as act_cfg, with_spectral_norm, sn_eps, etc.
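
For illustration, a customized configuration might look like the sketch below. The arch_cfg keys shown (in_channels, out_channels, upsample, resolution, attention) are an assumption based on the format of _default_arch_cfgs; check _get_default_arch_cfg in your mmagic version before relying on them.

# Hypothetical customization sketch. The arch_cfg keys are assumed to mirror
# the _default_arch_cfgs format; verify them against _get_default_arch_cfg.
base_channels = 96

custom_arch_cfg = dict(
    # per-block input/output channel numbers, from the first block to the last
    in_channels=[base_channels * c for c in [16, 16, 8, 4, 2]],
    out_channels=[base_channels * c for c in [16, 8, 4, 2, 1]],
    # whether each block performs upsampling
    upsample=[True, True, True, True, True],
    # feature-map resolution after each block
    resolution=[8, 16, 32, 64, 128],
    # whether a self-attention block is inserted after the block
    attention=[False, False, False, True, False])

# The block type can be swapped for your own registered block; act_cfg,
# with_spectral_norm, sn_eps, etc. are shared across all blocks.
custom_blocks_cfg = dict(type='BigGANGenResBlock')

generator_cfg = dict(
    type='BigGANGenerator',
    output_scale=128,
    arch_cfg=custom_arch_cfg,
    blocks_cfg=custom_blocks_cfg)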

Parameters
  • output_scale (int) – Output scale for the generated image.

  • noise_size (int, optional) – Size of the input noise vector. Defaults to 120.

  • num_classes (int, optional) – The number of conditional classes. If set to 0, this model will be degraded to an unconditional model. Defaults to 0.

  • out_channels (int, optional) – Number of channels in output images. Defaults to 3.

  • base_channels (int, optional) – The basic channel number of the generator. The other layers contain channels derived from this number. Defaults to 96.

  • input_scale (int, optional) – The scale of the input 2D feature map. Defaults to 4.

  • with_shared_embedding (bool, optional) – Whether to use shared embedding. Defaults to True.

  • shared_dim (int, optional) – The output channels of shared embedding. Defaults to 128.

  • sn_eps (float, optional) – Epsilon value for spectral normalization. Defaults to 1e-6.

  • sn_style (str, optional) – The style of spectral normalization. If set to ajbrock, the implementation by ajbrock (https://github.com/ajbrock/BigGAN-PyTorch/blob/master/layers.py) will be adopted. If set to torch, the implementation by PyTorch will be adopted. Defaults to ajbrock.

  • split_noise (bool, optional) – Whether to split input noise vector. Defaults to True.

  • act_cfg (dict, optional) – Config for the activation layer. Defaults to dict(type=’ReLU’).

  • upsample_cfg (dict, optional) – Config for the upsampling operation. Defaults to dict(type=’nearest’, scale_factor=2).

  • with_spectral_norm (bool, optional) – Whether to use spectral normalization. Defaults to True.

  • auto_sync_bn (bool, optional) – Whether to use synchronized batch normalization. Defaults to True.

  • blocks_cfg (dict, optional) – Config for the convolution block. Defaults to dict(type=’BigGANGenResBlock’).

  • arch_cfg (dict, optional) – Config for the architecture of this generator. Defaults to None.

  • out_norm_cfg (dict, optional) – Config for the norm of output layer. Defaults to dict(type=’BN’).

  • rgb2bgr (bool, optional) – Whether to reorder the output channels to bgr. We provide several pre-trained BigGAN weights whose output channel order is rgb; you can set this argument to True to use those weights. Defaults to False.

  • init_cfg (dict, optional) – Initialization config dict. If type is Pretrained, the pretrained model will be loaded. Otherwise, type will be parsed as the name of an initialization method. Supported values are ‘ortho’, ‘N02’, and ‘xavier’. Defaults to dict(type=’ortho’).
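
As a minimal usage sketch (argument values other than the documented defaults, such as num_classes=1000, are illustrative rather than prescribed), a conditional 128x128 generator can be instantiated directly:

from mmagic.models.editors.biggan.biggan_generator import BigGANGenerator

# Conditional 128x128 generator; num_classes=1000 is an illustrative choice.
generator = BigGANGenerator(
    output_scale=128,
    noise_size=120,
    num_classes=1000,
    base_channels=96,
    with_shared_embedding=True,
    shared_dim=128)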

_get_default_arch_cfg(output_scale, base_channels)[source]
forward(noise, label=None, num_batches=0, return_noise=False, truncation=-1.0, use_outside_embedding=False)[source]

Forward function.

Parameters
  • noise (torch.Tensor | callable | None) – You can directly give a batch of noise through a torch.Tensor or offer a callable function to sample a batch of noise data. Otherwise, the None indicates to use the default noise sampler.

  • label (torch.Tensor | callable | None) – You can directly give a batch of label through a torch.Tensor or offer a callable function to sample a batch of label data. Otherwise, the None indicates to use the default label sampler. Defaults to None.

  • num_batches (int, optional) – The number of samples to generate (batch size). Defaults to 0.

  • return_noise (bool, optional) – If True, noise_batch and label will be returned in a dict with fake_img. Defaults to False.

  • truncation (float, optional) – Truncation factor. If a value not less than 0 is given, the truncation trick will be adopted; otherwise, it will not. Defaults to -1.0.

  • use_outside_embedding (bool, optional) – Whether to use an outside embedding instead of shared_embedding. Set to True if the embedding has already been performed outside this function. Defaults to False.

Returns

If return_noise is False, only the output image will be returned. Otherwise, a dict containing fake_img, noise_batch and label will be returned.

Return type

torch.Tensor | dict
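
A minimal forward sketch, assuming mmagic is installed and a generator is constructed as in the class example above; the dict keys follow the documented fake_img, noise_batch and label:

import torch

from mmagic.models.editors.biggan.biggan_generator import BigGANGenerator

# Constructor arguments here are illustrative (see the class docs above).
generator = BigGANGenerator(output_scale=128, num_classes=1000)

# Let the default samplers draw noise and labels for a batch of 4 images,
# with the truncation trick enabled (truncation >= 0).
fake_imgs = generator(noise=None, num_batches=4, truncation=0.5)

# Pass explicit noise and label batches and get the sampled inputs back
# alongside the images.
noise = torch.randn(4, 120)
label = torch.randint(0, 1000, (4,))
out = generator(noise=noise, label=label, return_noise=True)
fake_imgs = out['fake_img']
noise_batch, labels = out['noise_batch'], out['label']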

init_weights()[source]

Init weights for models.
