mmagic.models.editors.biggan.biggan_snmodule

Module Contents

Classes

SpectralNorm

Spectral normalization base class.

SNConv2d

2D Conv layer with spectral norm.

SNLinear

Linear layer with spectral norm.

SNEmbedding

Embedding layer with spectral norm.

Functions

proj(x, y)

Calculate the projection of x onto y.

gram_schmidt(x, ys)

Orthogonalize x w.r.t. a list of vectors ys.

power_iteration(weight, u_list[, update, eps])

Power iteration method for calculating spectral norm.

mmagic.models.editors.biggan.biggan_snmodule.proj(x, y)[source]

Calculate the projection of x onto y.

Parameters
  • x (torch.Tensor) – Projection vector x.

  • y (torch.Tensor) – Direction vector y.

Returns

Projection of x onto y.

Return type

torch.Tensor
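
For reference, the result is the standard vector projection, (x·y / y·y)·y. A minimal sketch of the computation for 1-D tensors (illustrative only; the module's actual implementation may operate on row vectors):

```python
import torch

def proj_sketch(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Projection of x onto y: <x, y> / <y, y> * y.
    return y * torch.dot(x, y) / torch.dot(y, y)

x, y = torch.randn(8), torch.randn(8)
p = proj_sketch(x, y)
# The residual x - p is orthogonal to y (up to numerical error).
print(torch.dot(x - p, y))  # ~0
```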

mmagic.models.editors.biggan.biggan_snmodule.gram_schmidt(x, ys)[source]

Orthogonalize x w.r.t. a list of vectors ys.

Parameters
  • x (torch.Tensor) – Vector to be orthogonalized before being added to the set of orthogonal vectors.

  • ys (list[torch.Tensor]) – A set of orthogonal vectors.

Returns

Result of Gram–Schmidt orthogonalization.

Return type

torch.Tensor
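
The idea, sketched below with illustrative names: subtract from x its projection onto every vector in ys, leaving a result orthogonal to all of them (ys is assumed to already be orthogonal):

```python
import torch

def gram_schmidt_sketch(x: torch.Tensor, ys: list) -> torch.Tensor:
    # Remove from x its component along each (already orthogonal) vector in ys.
    for y in ys:
        x = x - y * torch.dot(x, y) / torch.dot(y, y)
    return x

y1 = torch.randn(8)
y2 = gram_schmidt_sketch(torch.randn(8), [y1])     # y2 is orthogonal to y1
x = gram_schmidt_sketch(torch.randn(8), [y1, y2])  # x is orthogonal to both
print(torch.dot(x, y1), torch.dot(x, y2))          # both ~0
```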

mmagic.models.editors.biggan.biggan_snmodule.power_iteration(weight, u_list, update=True, eps=1e-12)[source]

Power iteration method for calculating spectral norm.

Parameters
  • weight (torch.Tensor) – Module weight.

  • u_list (list[torch.Tensor]) – List of left singular vectors; the length of the list equals the number of singular values being estimated.

  • update (bool, optional) – Whether to update the left singular vectors in place. Defaults to True.

  • eps (float, optional) – Vector normalization epsilon. Defaults to 1e-12.

Returns

A tuple of three lists, containing the singular values, left singular vectors and right singular vectors, respectively.

Return type

tuple[list[torch.Tensor]]
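
A hedged sketch of one power-iteration step per tracked singular value, assuming weight is a 2-D matrix of shape (num_outputs, N) and each entry of u_list is a (1, num_outputs) row vector. The real function presumably also orthogonalizes successive estimates with gram_schmidt so that several singular values can be tracked; that step is omitted here for brevity:

```python
import torch
import torch.nn.functional as F

def power_iteration_sketch(weight, u_list, update=True, eps=1e-12):
    svs, us, vs = [], [], []
    for i, u in enumerate(u_list):
        with torch.no_grad():
            # One power-iteration step: estimate right and left singular vectors.
            v = F.normalize(torch.matmul(u, weight), eps=eps)
            u_new = F.normalize(torch.matmul(v, weight.t()), eps=eps)
            if update:
                u_list[i][:] = u_new  # update the stored left singular vector in place
        # sigma ~= v W^T u^T for the current estimates.
        svs.append(torch.matmul(torch.matmul(v, weight.t()), u_new.t()).squeeze())
        us.append(u_new)
        vs.append(v)
    return svs, us, vs

W = torch.randn(64, 128)
u0 = torch.randn(1, 64)
for _ in range(10):
    svs, us, vs = power_iteration_sketch(W, [u0])
print(svs[0], torch.linalg.matrix_norm(W, ord=2))  # estimate vs. exact spectral norm
```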

class mmagic.models.editors.biggan.biggan_snmodule.SpectralNorm(num_svs, num_iters, num_outputs, transpose=False, eps=1e-12)[source]

Bases: object

Spectral normalization base class.

Parameters
  • num_svs (int) – Number of singular values.

  • num_iters (int) – Number of power iterations per step.

  • num_outputs (int) – Number of output channels.

  • transpose (bool, optional) – If set to True, the weight matrix will be transposed before power iteration. Defaults to False.

  • eps (float, optional) – Vector normalization epsilon for avoiding division by zero. Defaults to 1e-12.

property u[source]

Get left singular vectors.

property sv[source]

Get singular values.

sn_weight()[source]

Compute the spectrally-normalized weight.
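
Conceptually, sn_weight() divides the raw weight by its largest singular value, estimated with power_iteration above, so the resulting matrix has a spectral norm of roughly 1. A minimal illustration of the rescaling, using an exact norm instead of power iteration:

```python
import torch

W = torch.randn(64, 128)
sigma = torch.linalg.matrix_norm(W, ord=2)    # largest singular value (exact, for illustration)
W_sn = W / sigma                              # spectrally-normalized weight
print(torch.linalg.matrix_norm(W_sn, ord=2))  # ~1.0
```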

class mmagic.models.editors.biggan.biggan_snmodule.SNConv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, num_svs=1, num_iters=1, eps=1e-12)[source]

Bases: torch.nn.Conv2d, SpectralNorm

2D Conv layer with spectral norm.

Parameters
  • in_channels (int) – Number of channels in the input feature map.

  • out_channels (int) – Number of channels produced by the convolution.

  • kernel_size (int) – Size of the convolving kernel.

  • stride (int, optional) – Stride of the convolution. Defaults to 1.

  • padding (int, optional) – Zero-padding added to both sides of the input. Defaults to 0.

  • dilation (int, optional) – Spacing between kernel elements. Defaults to 1.

  • groups (int, optional) – Number of blocked connections from input channels to output channels. Defaults to 1.

  • bias (bool, optional) – Whether to use bias parameter. Defaults to True.

  • num_svs (int) – Number of singular values.

  • num_iters (int) – Number of power iterations per step.

  • eps (float, optional) – Vector normalization epsilon for avoiding division by zero. Defaults to 1e-12.

forward(x)[source]

Forward function.
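
A usage sketch based on the constructor signature above (argument values are illustrative):

```python
import torch
from mmagic.models.editors.biggan.biggan_snmodule import SNConv2d

conv = SNConv2d(in_channels=3, out_channels=64, kernel_size=3, padding=1,
                num_svs=1, num_iters=1, eps=1e-12)
x = torch.randn(2, 3, 32, 32)
out = conv(x)     # convolution with the spectrally-normalized weight
print(out.shape)  # torch.Size([2, 64, 32, 32])
```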

class mmagic.models.editors.biggan.biggan_snmodule.SNLinear(in_features, out_features, bias=True, num_svs=1, num_iters=1, eps=1e-12)[source]

Bases: torch.nn.Linear, SpectralNorm

Linear layer with spectral norm.

Parameters
  • in_features (int) – Number of channels in the input feature.

  • out_features (int) – Number of channels in the output feature.

  • bias (bool, optional) – Whether to use bias parameter. Defaults to True.

  • num_svs (int) – Number of singular values.

  • num_iters (int) – Number of power iterations per step.

  • eps (float, optional) – Vector normalization epsilon for avoiding division by zero. Defaults to 1e-12.

forward(x)[source]

Forward function.
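
A usage sketch with illustrative sizes:

```python
import torch
from mmagic.models.editors.biggan.biggan_snmodule import SNLinear

fc = SNLinear(in_features=128, out_features=10, num_svs=1, num_iters=1)
x = torch.randn(4, 128)
out = fc(x)       # linear projection with the spectrally-normalized weight
print(out.shape)  # torch.Size([4, 10])
```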

class mmagic.models.editors.biggan.biggan_snmodule.SNEmbedding(num_embeddings, embedding_dim, padding_idx=None, max_norm=None, norm_type=2, scale_grad_by_freq=False, sparse=False, _weight=None, num_svs=1, num_iters=1, eps=1e-12)[source]

Bases: torch.nn.Embedding, SpectralNorm

Embedding layer with spectral norm.

Parameters
  • num_embeddings (int) – Size of the dictionary of embeddings.

  • embedding_dim (int) – The size of each embedding vector.

  • padding_idx (int, optional) – If specified, the entries at padding_idx do not contribute to the gradient; therefore, the embedding vector at padding_idx is not updated during training, i.e. it remains as a fixed “pad”. For a newly constructed Embedding, the embedding vector at padding_idx will default to all zeros, but can be updated to another value to be used as the padding vector. Defaults to None.

  • max_norm (float, optional) – If given, each embedding vector with norm larger than max_norm is renormalized to have norm max_norm. Defaults to None.

  • norm_type (int, optional) – The p of the p-norm to compute for the max_norm option. Defaults to 2.

  • scale_grad_by_freq (bool, optional) – If given, this will scale gradients by the inverse of the frequency of the words in the mini-batch. Defaults to False.

  • sparse (bool, optional) – If True, gradient w.r.t. weight matrix will be a sparse tensor. See Notes for more details regarding sparse gradients. Defaults to False.

  • _weight (torch.Tensor, optional) – Initial weight. Defaults to None.

  • num_svs (int) – Number of singular values.

  • num_iters (int) – Number of power iterations per step.

  • eps (float, optional) – Vector normalization epsilon for avoiding division by zero. Defaults to 1e-12.

forward(x)[source]

Forward function.
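
A usage sketch with illustrative sizes; in BigGAN this layer typically embeds class labels for conditioning:

```python
import torch
from mmagic.models.editors.biggan.biggan_snmodule import SNEmbedding

emb = SNEmbedding(num_embeddings=1000, embedding_dim=128, num_svs=1, num_iters=1)
labels = torch.randint(0, 1000, (4,))
out = emb(labels)  # lookup in the spectrally-normalized embedding matrix
print(out.shape)   # torch.Size([4, 128])
```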
