mmagic.models.utils.model_utils

Module Contents

Functions

• default_init_weights(module[, scale]) – Initialize network weights.

• make_layer(block, num_blocks, **kwarg) – Make layers by stacking the same blocks.

• get_module_device(module) – Get the device of a module.

• set_requires_grad(nets[, requires_grad]) – Set requires_grad for all the networks.

• generation_init_weights(module[, init_type, init_gain]) – Default initialization of network weights for image generation.

• get_valid_noise_size(→ Optional[int]) – Get the value of noise_size from the input and the generator, and check their consistency.

• get_valid_num_batches(→ int) – Try to get the valid batch size from inputs.

• build_module(→ Any) – Build module from config or return the module itself.

• xformers_is_enable(→ bool) – Check whether xformers is installed.

• set_xformers(→ torch.nn.Module) – Set xformers' efficient Attention for attention modules.

• set_tomesd(model[, ratio, max_downsample, sx, sy, ...]) – Patch a Stable Diffusion model with ToMe.

• remove_tomesd(model) – Remove the ToMe patch from a Stable Diffusion module if it was patched.

mmagic.models.utils.model_utils.default_init_weights(module, scale=1)[source]

Initialize network weights.

Parameters
  • module (nn.Module) – Module to be initialized.

  • scale (float) – Scale initialized weights, especially for residual blocks. Default: 1.
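
Example (a minimal usage sketch; the conv layer below is illustrative):

    import torch.nn as nn
    from mmagic.models.utils.model_utils import default_init_weights

    conv = nn.Conv2d(64, 64, 3, padding=1)
    # Scale down the initial weights, a common choice for residual branches.
    default_init_weights(conv, scale=0.1)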

mmagic.models.utils.model_utils.make_layer(block, num_blocks, **kwarg)[source]

Make layers by stacking the same blocks.

Parameters
  • block (nn.Module) – nn.Module class for the basic block.

  • num_blocks (int) – Number of blocks.

Returns

Stacked blocks in nn.Sequential.

Return type

nn.Sequential
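
Example (a minimal sketch; ToyBlock is a hypothetical block class, not part of mmagic, and keyword arguments are forwarded to it via **kwarg):

    import torch.nn as nn
    from mmagic.models.utils.model_utils import make_layer

    class ToyBlock(nn.Module):
        # Illustrative residual-style block.
        def __init__(self, channels=64):
            super().__init__()
            self.conv = nn.Conv2d(channels, channels, 3, padding=1)

        def forward(self, x):
            return x + self.conv(x)

    # Stack four identical blocks into one nn.Sequential.
    trunk = make_layer(ToyBlock, num_blocks=4, channels=64)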

mmagic.models.utils.model_utils.get_module_device(module)[source]

Get the device of a module.

Parameters

module (nn.Module) – A module contains the parameters.

Returns

The device of the module.

Return type

torch.device
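
Example (a minimal sketch):

    import torch
    import torch.nn as nn
    from mmagic.models.utils.model_utils import get_module_device

    net = nn.Linear(8, 8)
    device = get_module_device(net)  # torch.device('cpu') before any .to() call
    # Handy for allocating new tensors next to an existing model:
    x = torch.randn(1, 8, device=device)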

mmagic.models.utils.model_utils.set_requires_grad(nets, requires_grad=False)[source]

Set requires_grad for all the networks.

Parameters
  • nets (nn.Module | list[nn.Module]) – A list of networks or a single network.

  • requires_grad (bool) – Whether the networks require gradients or not. Default: False.
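
Example (a typical GAN-style sketch; the two linear layers stand in for real networks):

    import torch.nn as nn
    from mmagic.models.utils.model_utils import set_requires_grad

    generator = nn.Linear(8, 8)
    discriminator = nn.Linear(8, 1)
    # Freeze the discriminator while the generator is updated; a list of
    # networks is accepted as well as a single network.
    set_requires_grad(discriminator, False)
    set_requires_grad([generator, discriminator], True)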

mmagic.models.utils.model_utils.generation_init_weights(module, init_type='normal', init_gain=0.02)[source]

Default initialization of network weights for image generation.

By default, we use normal init, but xavier and kaiming might work better for some applications.

Parameters
  • module (nn.Module) – Module to be initialized.

  • init_type (str) – The name of an initialization method: normal | xavier | kaiming | orthogonal. Default: ‘normal’.

  • init_gain (float) – Scaling factor for normal, xavier and orthogonal. Default: 0.02.
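
Example (a minimal sketch using one of the documented init types):

    import torch.nn as nn
    from mmagic.models.utils.model_utils import generation_init_weights

    net = nn.Sequential(nn.Conv2d(3, 64, 3), nn.ReLU(), nn.Conv2d(64, 3, 3))
    # Try xavier init instead of the default normal init.
    generation_init_weights(net, init_type='xavier', init_gain=0.02)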

mmagic.models.utils.model_utils.get_valid_noise_size(noise_size: Optional[int], generator: Union[Dict, torch.nn.Module]) Optional[int][source]

Get the value of noise_size from the input and the generator, and check that these values are consistent. If no conflict is found, return the value.

Parameters
  • noise_size (Optional[int]) – noise_size passed to BaseGAN_refactor’s initialize function.

  • generator (ModelType) – The config or the model of generator.

Returns

The noise size fed to the generator.

Return type

int | None
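
Example (a minimal sketch; the config dict is hypothetical, and it assumes a dict config exposes its noise size under the noise_size key):

    from mmagic.models.utils.model_utils import get_valid_noise_size

    # Illustrative config; the 'type' value is a hypothetical name.
    gen_cfg = dict(type='SomeGenerator', noise_size=512)
    noise_size = get_valid_noise_size(512, gen_cfg)   # consistent values -> 512
    noise_size = get_valid_noise_size(None, gen_cfg)  # taken from the config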

mmagic.models.utils.model_utils.get_valid_num_batches(batch_inputs: Optional[mmagic.utils.typing.ForwardInputs] = None, data_samples: List[mmagic.structures.DataSample] = None) int[source]

Try to get the valid batch size from inputs.

  • If some values in batch_inputs are Tensors and ‘num_batches’ is in batch_inputs, we check that the value of ‘num_batches’ matches the length of the first dimension of every tensor. If they disagree, an AssertionError is raised; otherwise the value is returned.

  • If no value in batch_inputs is a Tensor, ‘num_batches’ must be contained in batch_inputs, and this value is returned.

  • If some values are Tensors and ‘num_batches’ is not contained in batch_inputs, we check that all tensors have the same length on the first dimension. If the lengths differ, an AssertionError is raised; otherwise the common length is returned as the batch size.

  • If batch_inputs is a Tensor, the length of its first dimension is returned directly as the batch size.

Parameters

batch_inputs (ForwardInputs) – Inputs passed to forward().

Returns

The batch size of samples to generate.

Return type

int
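
Example (a minimal sketch, assuming batch_inputs may be a bare tensor or a dict as described above; the ‘noise’ key is illustrative):

    import torch
    from mmagic.models.utils.model_utils import get_valid_num_batches

    # A bare tensor: its first dimension is the batch size.
    assert get_valid_num_batches(torch.randn(4, 3, 64, 64)) == 4

    # A dict of inputs: 'num_batches' and all tensor lengths must agree.
    inputs = dict(num_batches=4, noise=torch.randn(4, 128))
    assert get_valid_num_batches(inputs) == 4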

mmagic.models.utils.model_utils.build_module(module: Union[dict, torch.nn.Module], builder: mmengine.registry.Registry, *args, **kwargs) Any[source]

Build module from config or return the module itself.

Parameters
  • module (Union[dict, nn.Module]) – The module to build.

  • builder (Registry) – The registry to build module.

  • *args – Arguments passed to build function.

  • **kwargs – Arguments passed to build function.

Returns

The built module.

Return type

Any
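
Example (a minimal sketch, assuming mmagic's MODELS registry; the commented-out config uses a hypothetical registered name):

    import torch.nn as nn
    from mmagic.registry import MODELS
    from mmagic.models.utils.model_utils import build_module

    # An already-built module is returned unchanged...
    net = nn.Identity()
    assert build_module(net, MODELS) is net

    # ...while a config dict is built through the given registry:
    # model = build_module(dict(type='SomeModel'), MODELS)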

mmagic.models.utils.model_utils.xformers_is_enable(verbose: bool = False) bool[source]

Check whether xformers is installed.

Parameters

verbose (bool) – Whether to print the log. Default: False.

Returns

Whether xformers is installed.

Return type

bool
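
Example (a minimal sketch for guarding optional xformers code paths):

    from mmagic.models.utils.model_utils import xformers_is_enable

    # verbose=True prints a log about the check.
    if xformers_is_enable(verbose=True):
        print('xformers is available')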

mmagic.models.utils.model_utils.set_xformers(module: torch.nn.Module, prefix: str = '') torch.nn.Module[source]

Set xformers’ efficient Attention for attention modules.

Parameters
  • module (nn.Module) – The module to set xformers for.

  • prefix (str) – The prefix of the module name.

Returns

The module with xformers’ efficient Attention.

Return type

nn.Module
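
Example (a minimal sketch; the helper function is illustrative, and which attention layers get patched depends on the module types mmagic recognizes):

    from mmagic.models.utils.model_utils import set_xformers, xformers_is_enable

    def enable_efficient_attention(module):
        # Only patch when xformers is actually installed.
        if xformers_is_enable():
            module = set_xformers(module)
        return module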

mmagic.models.utils.model_utils.set_tomesd(model: torch.nn.Module, ratio: float = 0.5, max_downsample: int = 1, sx: int = 2, sy: int = 2, use_rand: bool = True, merge_attn: bool = True, merge_crossattn: bool = False, merge_mlp: bool = False)[source]

Patches a stable diffusion model with ToMe. Apply this to the highest level stable diffusion object.

Refer to: https://github.com/dbolya/tomesd/blob/main/tomesd/patch.py#L173

Parameters
  • model (torch.nn.Module) – A top level Stable Diffusion module to patch in place.

  • ratio (float) – The ratio of tokens to merge. E.g., 0.4 would reduce the total number of tokens by 40%. The maximum value for this is 1 - (1/(sx * sy)); by default the maximum ratio is 0.75 (usually <= 0.5 is recommended). Higher values give more speed-up, but with more visual quality loss.

  • max_downsample (int) – Apply ToMe to layers with at most this amount of downsampling. E.g., 1 only applies to layers with no downsampling, while 8 applies to all layers. Should be chosen from [1, 2, 4, or 8]. 1 and 2 are recommended.

  • sx (int) – The stride along the x axis for computing dst sets. A higher stride means more tokens can be merged; the default (sx, sy) of (2, 2) works well in most cases. sx and sy do not need to divide the image size.

  • sy (int) – The stride along the y axis for computing dst sets; see sx.

  • use_rand (bool) – Whether or not to allow random perturbations when computing dst sets. By default: True, but if you’re having weird artifacts you can try turning this off.

  • merge_attn (bool) – Whether or not to merge tokens for attention (recommended).

  • merge_crossattn (bool) – Whether or not to merge tokens for cross attention (not recommended).

  • merge_mlp (bool) – Whether or not to merge tokens for the MLP layers (particularly not recommended).

Returns

Model patched by ToMe.

Return type

model (torch.nn.Module)
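
Example (a minimal sketch; sd_model stands for an already-built top-level Stable Diffusion module, and the helper function is illustrative):

    import torch.nn as nn
    from mmagic.models.utils.model_utils import set_tomesd

    def patch_with_tome(sd_model: nn.Module) -> nn.Module:
        # ratio=0.5 merges half of the tokens; <= 0.5 is usually recommended.
        return set_tomesd(sd_model, ratio=0.5, max_downsample=1,
                          sx=2, sy=2, merge_attn=True)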

mmagic.models.utils.model_utils.remove_tomesd(model: torch.nn.Module)[source]

Removes a patch from a ToMe Diffusion module if it was already patched.

Refer to: https://github.com/dbolya/tomesd/blob/main/tomesd/patch.py#L251
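
Example (a minimal sketch continuing the one above; the helper function is illustrative):

    import torch.nn as nn
    from mmagic.models.utils.model_utils import remove_tomesd

    def unpatch_tome(sd_model: nn.Module) -> nn.Module:
        # Restores the original blocks of a model previously patched
        # via set_tomesd().
        remove_tomesd(sd_model)
        return sd_model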
