
mmagic.engine

Package Contents

Classes

ReduceLRSchedulerHook

A hook to update learning rate.

BasicVisualizationHook

Basic hook that invokes visualizers during validation and test.

VisualizationHook

MMagic Visualization Hook. Used to visualize output samples during training, validation and testing.

ExponentialMovingAverageHook

Exponential Moving Average Hook.

IterTimerHook

IterTimerHook inherits from mmengine.hooks.IterTimerHook and overwrites self._after_iter().

PGGANFetchDataHook

PGGAN Fetch Data Hook.

PickleDataHook

Pickle Useful Data Hook.

MultiOptimWrapperConstructor

OptimizerConstructor for GAN models. This class constructs optimizers for the submodules of the model separately.

PGGANOptimWrapperConstructor

OptimizerConstructor for PGGAN models. Set optimizers for each stage of PGGAN.

SinGANOptimWrapperConstructor

OptimizerConstructor for SinGAN models. Set optimizers for each submodule of SinGAN.

MultiTestLoop

Test loop for MMagic models which supports evaluating multiple datasets at the same time.

MultiValLoop

Validation loop for MMagic models which supports evaluating multiple datasets at the same time.

LogProcessor

LogProcessor inherits from mmengine.runner.LogProcessor and overwrites self.get_log_after_iter().

LinearLrInterval

Linear learning rate scheduler for image generation.

ReduceLR

Reduces the learning rate of each parameter group when a monitored metric has stopped improving.

class mmagic.engine.ReduceLRSchedulerHook(val_metric: str = None, by_epoch=True, interval=1)[source]

Bases: mmengine.hooks.ParamSchedulerHook

A hook to update learning rate.

Parameters
  • val_metric (str) – The metric of validation. If val_metric is not None, we check val_metric to decide whether to reduce the learning rate. Default: None.

  • by_epoch (bool) – Whether to update by epoch. Default: True.

  • interval (int) – The interval of iterations to update. Default: 1.
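A minimal config sketch of registering this hook (the metric name ‘PSNR’ and the numeric values are illustrative assumptions, not fixed by this API):

>>> custom_hooks = [
>>>     dict(
>>>         type='ReduceLRSchedulerHook',
>>>         val_metric='PSNR',  # must match a metric reported at validation
>>>         by_epoch=False,     # update by iteration
>>>         interval=100)]      # check every 100 iterations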

_calculate_average_value()[source]
after_train_epoch(runner: mmengine.runner.Runner)[source]

Call step function for each scheduler after each train epoch.

Parameters

runner (Runner) – The runner of the training process.

after_train_iter(runner: mmengine.runner.Runner, batch_idx: int, data_batch: DATA_BATCH = None, outputs: Optional[dict] = None) → None[source]

Call step function for each scheduler after each iteration.

Parameters
  • runner (Runner) – The runner of the training process.

  • batch_idx (int) – The index of the current batch in the train loop.

  • data_batch (Sequence[dict], optional) – Data from dataloader. In order to keep this interface consistent with other hooks, we keep data_batch here. Defaults to None.

  • outputs (dict, optional) – Outputs from model. In order to keep this interface consistent with other hooks, we keep outputs here. Defaults to None.

after_val_epoch(runner, metrics: Optional[Dict[str, float]] = None)[source]

Call step function for each scheduler after each validation epoch.

Parameters
  • runner (Runner) – The runner of the training process.

  • metrics (dict, optional) – The metrics of validation. Default: None.

class mmagic.engine.BasicVisualizationHook(interval: dict = {}, on_train=False, on_val=True, on_test=True)[source]

Bases: mmengine.hooks.Hook

Basic hook that invoke visualizers during validation and test.

Parameters
  • interval (int | dict) – Visualization interval. Default: {}.

  • on_train (bool) – Whether to call the hook during training. Defaults to False.

  • on_val (bool) – Whether to call the hook during validation. Defaults to True.

  • on_test (bool) – Whether to call the hook during test. Defaults to True.
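A minimal config sketch. The interval is given here in its dict form, assumed to map runner modes to per-mode intervals; the key names below are an assumption, not confirmed by this page:

>>> custom_hooks = [
>>>     dict(
>>>         type='BasicVisualizationHook',
>>>         interval=dict(val=50, test=1),  # assumed per-mode intervals
>>>         on_val=True,
>>>         on_test=True)]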

priority = NORMAL
_after_iter(runner, batch_idx: int, data_batch: Optional[Sequence[dict]], outputs: Optional[Sequence[mmengine.structures.BaseDataElement]], mode=None) → None[source]

Show or write the predicted results.

Parameters
  • runner (Runner) – The runner of the training process.

  • batch_idx (int) – The index of the current batch in the test loop.

  • data_batch (Sequence[dict], optional) – Data from dataloader. Defaults to None.

  • outputs (Sequence[BaseDataElement], optional) – Outputs from model. Defaults to None.

class mmagic.engine.VisualizationHook(interval: int = 1000, vis_kwargs_list: Tuple[List[dict], dict] = None, fixed_input: bool = True, n_samples: Optional[int] = 64, n_row: Optional[int] = None, message_hub_vis_kwargs: Optional[Tuple[str, dict, List[str], List[Dict]]] = None, save_at_test: bool = True, max_save_at_test: int = 100, test_vis_keys: Optional[Union[str, List[str]]] = None, show: bool = False, wait_time: float = 0)[source]

Bases: mmengine.hooks.Hook

MMagic Visualization Hook. Used to visualize output samples during training, validation and testing. In this hook, we use a list called sample_kwargs_list to control how to generate samples and how to visualize them. Each element in sample_kwargs_list, called sample_kwargs, may contain the following keywords:

  • Required keywords:
    • ‘type’: Value must be a string. Denotes what kind of sampler is used to generate images. Refer to get_sampler().

  • Optional keywords (if not passed, the default values are used):
    • ‘n_row’: Value must be an int. The number of images in one row.

    • ‘num_samples’: Value must be an int. The number of samples to visualize.

    • ‘vis_mode’: Value must be a string. How to visualize the generated samples (e.g. image, gif).

    • ‘fixed_input’: Value must be a bool. Whether to use the fixed input during the loop.

    • ‘draw_gt’: Value must be a bool. Whether to save the real images.

    • ‘target_keys’: Value must be a string or a list of strings. The keys of the target images to visualize.

    • ‘name’: Value must be a string. If not passed, sample_kwargs[‘type’] is used as the default.

For convenience, we also define a group of aliases for the samplers’ types of the models supported in MMagic. Refer to self.SAMPLER_TYPE_MAPPING.

Example

>>> # for GAN models
>>> custom_hooks = [
>>>     dict(
>>>         type='VisualizationHook',
>>>         interval=1000,
>>>         fixed_input=True,
>>>         vis_kwargs_list=dict(type='GAN', name='fake_img'))]
>>> # for Translation models
>>> custom_hooks = [
>>>     dict(
>>>         type='VisualizationHook',
>>>         interval=10,
>>>         fixed_input=False,
>>>         vis_kwargs_list=[dict(type='Translation',
>>>                                  name='translation_train',
>>>                                  n_samples=6, draw_gt=True,
>>>                                  n_row=3),
>>>                             dict(type='TranslationVal',
>>>                                  name='translation_val',
>>>                                  n_samples=16, draw_gt=True,
>>>                                  n_row=4)])]

NOTE: The precedence of settings is: user-defined vis_kwargs > vis_kwargs_mapping > hook init args.

Parameters
  • interval (int) – Visualization interval. Default: 1000.

  • vis_kwargs_list (Tuple[List[dict], dict]) – The list of sampling behaviors used to generate images.

  • fixed_input (bool) – Whether to use a fixed input to generate samples during the loop by default. Defaults to True.

  • n_samples (Optional[int]) – The default number of samples to visualize. Defaults to 64.

  • n_row (Optional[int]) – The default number of images in each row of the visualization results. Defaults to None.

  • message_hub_vis_kwargs (Optional[Tuple[str, dict, List[str], List[Dict]]]) – Key arguments to visualize images in the message hub. Defaults to None.

  • save_at_test (bool) – Whether to save images during test. Defaults to True.

  • max_save_at_test (int) – Maximum number of samples saved at test time. If None is passed, all samples will be saved. Defaults to 100.

  • show (bool) – Whether to display the drawn image. Defaults to False.

  • wait_time (float) – The interval of show (s). Defaults to 0.

priority = NORMAL
VIS_KWARGS_MAPPING
after_val_iter(runner: mmengine.runner.Runner, batch_idx: int, data_batch: dict, outputs) → None[source]

VisualizationHook does not support visualization during validation.

Parameters
  • runner (Runner) – The runner of the training process.

  • batch_idx (int) – The index of the current batch in the test loop.

  • data_batch (Sequence[dict], optional) – Data from dataloader. Defaults to None.

  • outputs – Outputs of the generation model.

after_test_iter(runner: mmengine.runner.Runner, batch_idx: int, data_batch: dict, outputs)[source]

Visualize samples after test iteration.

Parameters
  • runner (Runner) – The runner of the training process.

  • batch_idx (int) – The index of the current batch in the test loop.

  • data_batch (dict, optional) – Data from dataloader. Defaults to None.

  • outputs – Outputs of the generation model. Defaults to None.

after_train_iter(runner: mmengine.runner.Runner, batch_idx: int, data_batch: dict = None, outputs: Optional[dict] = None) → None[source]

Visualize samples after train iteration.

Parameters
  • runner (Runner) – The runner of the training process.

  • batch_idx (int) – The index of the current batch in the train loop.

  • data_batch (dict) – Data from dataloader. Defaults to None.

  • outputs (dict, optional) – Outputs from model. Defaults to None.

vis_sample(runner: mmengine.runner.Runner, batch_idx: int, data_batch: dict, outputs: Optional[dict] = None) → None[source]

Visualize samples.

Parameters
  • runner (Runner) – The runner contains model to visualize.

  • batch_idx (int) – The index of the current batch in loop.

  • data_batch (dict) – Data from dataloader. Defaults to None.

  • outputs (dict, optional) – Outputs from model. Defaults to None.

vis_from_message_hub(batch_idx: int)[source]

Visualize samples from message hub.

Parameters
  • batch_idx (int) – The index of the current batch in the test loop.

  • color_order (str) – The color order of generated images.

  • target_mean (Sequence[Union[float, int]]) – The original mean of the image tensor before preprocessing. Image will be re-shifted to target_mean before visualizing.

  • target_std (Sequence[Union[float, int]]) – The original std of the image tensor before preprocessing. Image will be re-scaled to target_std before visualizing.

class mmagic.engine.ExponentialMovingAverageHook(module_keys, interp_mode='lerp', interp_cfg=None, interval=-1, start_iter=0)[source]

Bases: mmengine.hooks.Hook

Exponential Moving Average Hook.

Exponential moving average is a trick that is widely used in the current GAN literature, e.g., PGGAN, StyleGAN, and BigGAN. The general idea is to maintain a model with the same architecture whose parameters are updated as a moving average of the trained weights of the original model. In general, the model with moving averaged weights achieves better performance.

Parameters
  • module_keys (str | tuple[str]) – The name of the EMA model. Note that we require these keys to end with ‘_ema’ so that the original model can easily be found by discarding the last four characters.

  • interp_mode (str, optional) – Mode of the interpolation method. Defaults to ‘lerp’.

  • interp_cfg (dict | None, optional) – Set arguments of the interpolation function. Defaults to None.

  • interval (int, optional) – Evaluation interval (by iterations). Default: -1.

  • start_iter (int, optional) – Start iteration for ema. If the start iteration is not reached, the weights of ema model will maintain the same as the original one. Otherwise, its parameters are updated as a moving average of the trained weights in the original model. Default: 0.
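A minimal config sketch; per the module_keys requirement above, each key must end with ‘_ema’ (the module name ‘generator_ema’ assumes the model defines such a submodule):

>>> custom_hooks = [
>>>     dict(
>>>         type='ExponentialMovingAverageHook',
>>>         module_keys=('generator_ema', ),  # must end with '_ema'
>>>         interval=1,         # update the EMA weights every iteration
>>>         start_iter=1000)]   # before this iteration, keep weights equal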

static lerp(a, b, momentum=0.001, momentum_nontrainable=1.0, trainable=True)[source]

Does a linear interpolation of two parameters/buffers.

Parameters
  • a (torch.Tensor) – Interpolation start point; refers to the original state.

  • b (torch.Tensor) – Interpolation end point; refers to the EMA state.

  • momentum (float, optional) – The weight for the interpolation formula. Defaults to 0.001.

  • momentum_nontrainable (float, optional) – The weight for the interpolation formula used for non-trainable parameters. Defaults to 1.0.

  • trainable (bool, optional) – Whether the input parameters are trainable. If set to False, momentum_nontrainable will be used. Defaults to True.

Returns

Interpolation result.

Return type

torch.Tensor
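For reference, a hedged sketch of this interpolation, assuming the conventional formula lerp(a, b, m) = a + (b - a) * m; the actual MMagic implementation may arrange the terms differently:

>>> import torch
>>> def lerp_sketch(a, b, momentum=0.001,
>>>                 momentum_nontrainable=1.0, trainable=True):
>>>     # choose the weight depending on whether the parameter is trainable
>>>     m = momentum if trainable else momentum_nontrainable
>>>     # move from the start point `a` toward the end point `b` by `m`
>>>     return a + (b - a) * m
>>> out = lerp_sketch(torch.zeros(2), torch.ones(2))  # tensor([0.0010, 0.0010])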

every_n_iters(runner: mmengine.runner.Runner, n: int)[source]

This function checks whether an action should be performed at the current iteration, given an interval of n iterations.

Parameters
  • runner (Runner) – The runner used to drive the whole pipeline.

  • n (int) – The interval, in iterations.

Returns

Whether the action should be performed at the current iteration.

Return type

bool

after_train_iter(runner: mmengine.runner.Runner, batch_idx: int, data_batch: DATA_BATCH = None, outputs: Optional[dict] = None) → None[source]

This is the function to perform after each training iteration.

Parameters
  • runner (Runner) – The runner to drive the pipeline.

  • batch_idx (int) – The index of the current batch.

  • data_batch (DATA_BATCH, optional) – Data batch. Defaults to None.

  • outputs (Optional[dict], optional) – Outputs. Defaults to None.

before_run(runner: mmengine.runner.Runner)[source]

This is the function to perform before each run.

Parameters

runner (Runner) – runner used to drive the whole pipeline

Raises

RuntimeError – error message

class mmagic.engine.IterTimerHook[source]

Bases: mmengine.hooks.IterTimerHook

IterTimerHook inherits from mmengine.hooks.IterTimerHook and overwrites self._after_iter().

This hook should be used along with mmagic.engine.runner.MultiValLoop and mmagic.engine.runner.MultiTestLoop.
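A minimal sketch of enabling this timer (the ‘timer’ key follows MMEngine’s default_hooks convention):

>>> default_hooks = dict(timer=dict(type='IterTimerHook'))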

_after_iter(runner, batch_idx: int, data_batch: DATA_BATCH = None, outputs: Optional[Union[dict, Sequence[mmengine.structures.BaseDataElement]]] = None, mode: str = 'train') → None[source]

Calculates the time for an iteration and updates the “time” HistoryBuffer of runner.message_hub. If mode is ‘train’, we take runner.max_iters as the total number of iterations and calculate the remaining time. If mode is ‘val’ or ‘test’, we use runner.val_loop.total_length or runner.test_loop.total_length as the total number of iterations. To see how total_length is calculated, please refer to mmagic.engine.runner.MultiValLoop.run() and mmagic.engine.runner.MultiTestLoop.run().

Parameters
  • runner (Runner) – The runner of the training, validation and testing process.

  • batch_idx (int) – The index of the current batch in the loop.

  • data_batch (Sequence[dict], optional) – Data from dataloader. Defaults to None.

  • outputs (dict or sequence, optional) – Outputs from model. Defaults to None.

  • mode (str) – Current mode of runner. Defaults to ‘train’.

class mmagic.engine.PGGANFetchDataHook[source]

Bases: mmengine.hooks.Hook

PGGAN Fetch Data Hook.

Parameters

interval (int, optional) – The interval of calling this hook. If set to -1, the hook will not be called. Defaults to 1.
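A minimal registration sketch (the constructor in the signature above takes no arguments):

>>> custom_hooks = [dict(type='PGGANFetchDataHook')]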

before_train_iter(runner, batch_idx: int, data_batch: DATA_BATCH = None) → None[source]

All subclasses should override this method, if they need any operations before each training iteration.

Parameters
  • runner (Runner) – The runner of the training process.

  • batch_idx (int) – The index of the current batch in the train loop.

  • data_batch (dict or tuple or list, optional) – Data from dataloader.

update_dataloader(dataloader: torch.utils.data.dataloader.DataLoader, curr_scale: int) → Optional[torch.utils.data.dataloader.DataLoader][source]

Update the data loader.

Parameters
  • dataloader (DataLoader) – The dataloader to be updated.

  • curr_scale (int) – The current scale of the generated image.

Returns

The updated dataloader. If the dataloader does not need to be updated, None is returned.

Return type

Optional[DataLoader]

class mmagic.engine.PickleDataHook(output_dir, data_name_list, interval=-1, before_run=False, after_run=False, filename_tmpl='iter_{}.pkl')[source]

Bases: mmengine.hooks.Hook

Pickle Useful Data Hook.

This hook will be used in SinGAN training for saving some important data that will be used in testing or inference.

Parameters
  • output_dir (str) – The output path for saving pickled data.

  • data_name_list (list[str]) – The list containing the names of the results in the outputs dict.

  • interval (int) – The interval of calling this hook. If set to -1, the PickleDataHook will not be called during training. Default: -1.

  • before_run (bool, optional) – Whether to save before running. Defaults to False.

  • after_run (bool, optional) – Whether to save after running. Defaults to False.

  • filename_tmpl (str, optional) – Format string used to save images. The output file name will be formatted with this argument. Defaults to ‘iter_{}.pkl’.
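A minimal config sketch for SinGAN-style training; the entries in data_name_list are assumed result keys, not part of this API:

>>> custom_hooks = [
>>>     dict(
>>>         type='PickleDataHook',
>>>         output_dir='./pickle',
>>>         data_name_list=['fixed_noises', 'noise_weights'],
>>>         interval=-1,       # do not save during training
>>>         after_run=True)]   # save once after the run finishes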

after_run(runner)[source]

The behavior after the whole run.

Parameters

runner (object) – The runner.

before_run(runner)[source]

The behavior before the whole run.

Parameters

runner (object) – The runner.

after_train_iter(runner, batch_idx: int, data_batch: DATA_BATCH = None, outputs: Optional[dict] = None)[source]

The behavior after each train iteration.

Parameters
  • runner (Runner) – The runner of the training process.

  • batch_idx (int) – The index of the current batch in the train loop.

  • data_batch (Sequence[dict], optional) – Data from dataloader. Defaults to None.

  • outputs (dict, optional) – Outputs from model. Defaults to None.

_pickle_data(runner: mmengine.runner.Runner)[source]

Save target data to pickle file.

Parameters

runner (Runner) – The runner of the training process.

_get_numpy_data(data: Tuple[List[torch.Tensor], torch.Tensor, int]) → Tuple[List[numpy.ndarray], numpy.ndarray, int][source]

Convert a tensor or a list of tensors to a numpy array or a list of numpy arrays.

Parameters

data (Tuple[List[Tensor], Tensor, int]) – Data to be converted.

Returns

Converted data.

Return type

Tuple[List[np.ndarray], np.ndarray, int]

class mmagic.engine.MultiOptimWrapperConstructor(optim_wrapper_cfg: dict, paramwise_cfg=None)[source]

OptimizerConstructor for GAN models. This class constructs optimizers for the submodules of the model separately, and returns a mmengine.optim.OptimWrapperDict or a mmengine.optim.OptimWrapper.

Example 1: Build multiple optimizers (e.g., for GANs):
>>> # build GAN model
>>> model = dict(
>>>     type='GANModel',
>>>     num_classes=10,
>>>     generator=dict(type='Generator'),
>>>     discriminator=dict(type='Discriminator'))
>>> gan_model = MODELS.build(model)
>>> # build constructor
>>> optim_wrapper = dict(
>>>     generator=dict(
>>>         type='OptimWrapper',
>>>         accumulative_counts=1,
>>>         optimizer=dict(type='Adam', lr=0.0002,
>>>                        betas=(0.5, 0.999))),
>>>     discriminator=dict(
>>>         type='OptimWrapper',
>>>         accumulative_counts=1,
>>>         optimizer=dict(type='Adam', lr=0.0002,
>>>                            betas=(0.5, 0.999))))
>>> optim_dict_builder = MultiOptimWrapperConstructor(optim_wrapper)
>>> # build optim wrapper dict
>>> optim_wrapper_dict = optim_dict_builder(gan_model)
Example 2: Build multiple optimizers for specific submodules:
>>> # build model
>>> class GAN(nn.Module):
>>>     def __init__(self) -> None:
>>>         super().__init__()
>>>         self.generator = nn.Conv2d(3, 3, 1)
>>>         self.discriminator = nn.Conv2d(3, 3, 1)
>>> class TextEncoder(nn.Module):
>>>     def __init__(self):
>>>         super().__init__()
>>>         self.embedding = nn.Embedding(100, 100)
>>> class ToyModel(nn.Module):
>>>     def __init__(self) -> None:
>>>         super().__init__()
>>>         self.m1 = GAN()
>>>         self.m2 = nn.Conv2d(3, 3, 1)
>>>         self.m3 = nn.Linear(2, 2)
>>>         self.text_encoder = TextEncoder()
>>> model = ToyModel()
>>> # build constructor
>>> optim_wrapper = {
>>>     '.*embedding': {
>>>         'type': 'OptimWrapper',
>>>         'optimizer': {
>>>             'type': 'Adam',
>>>             'lr': 1e-4,
>>>             'betas': (0.9, 0.99)
>>>         }
>>>     },
>>>     'm1.generator': {
>>>         'type': 'OptimWrapper',
>>>         'optimizer': {
>>>             'type': 'Adam',
>>>             'lr': 1e-5,
>>>             'betas': (0.9, 0.99)
>>>         }
>>>     },
>>>     'm2': {
>>>         'type': 'OptimWrapper',
>>>         'optimizer': {
>>>             'type': 'Adam',
>>>             'lr': 1e-5,
>>>         }
>>>     }
>>> }
>>> optim_dict_builder = MultiOptimWrapperConstructor(optim_wrapper)
>>> # build optim wrapper dict
>>> optim_wrapper_dict = optim_dict_builder(model)
Example 3: Build a single optimizer for multiple modules (e.g., DreamBooth):
>>> # build StableDiffusion model
>>> model = dict(
>>>     type='StableDiffusion',
>>>     unet=dict(type='unet'),
>>>     vae=dict(type='vae'),
>>>     text_encoder=dict(type='text_encoder'))
>>> diffusion_model = MODELS.build(model)
>>> # build constructor
>>> optim_wrapper = dict(
>>>     modules=['unet', 'text_encoder'],
>>>     optimizer=dict(type='Adam', lr=0.0002),
>>>     accumulative_counts=1)
>>> optim_dict_builder = MultiOptimWrapperConstructor(optim_wrapper)
>>> # build optim wrapper dict
>>> optim_wrapper_dict = optim_dict_builder(diffusion_model)
Parameters
  • optim_wrapper_cfg (dict) – Config of the optimizer wrapper.

  • paramwise_cfg (dict) – Config of parameter-wise settings. Default: None.

__call__(module: torch.nn.Module) → Union[mmengine.optim.OptimWrapperDict, mmengine.optim.OptimWrapper][source]

Build optimizers and return an OptimWrapperDict or an OptimWrapper.

class mmagic.engine.PGGANOptimWrapperConstructor(optim_wrapper_cfg: dict, paramwise_cfg: Optional[dict] = None)[source]

OptimizerConstructor for PGGAN models. Set optimizers for each stage of PGGAN. All submodules must be contained in a torch.nn.ModuleList named ‘blocks’, and we access each submodule by MODEL.blocks[SCALE], where MODEL is the generator or discriminator and SCALE is the index of the resolution scale.

For more details about the resolution scales and naming rules, please refer to PGGANGenerator and PGGANDiscriminator.

Example

>>> # build PGGAN model
>>> model = dict(
>>>     type='ProgressiveGrowingGAN',
>>>     data_preprocessor=dict(type='GANDataPreprocessor'),
>>>     noise_size=512,
>>>     generator=dict(type='PGGANGenerator', out_scale=1024,
>>>                    noise_size=512),
>>>     discriminator=dict(type='PGGANDiscriminator', in_scale=1024),
>>>     nkimgs_per_scale={
>>>         '4': 600,
>>>         '8': 1200,
>>>         '16': 1200,
>>>         '32': 1200,
>>>         '64': 1200,
>>>         '128': 1200,
>>>         '256': 1200,
>>>         '512': 1200,
>>>         '1024': 12000,
>>>     },
>>>     transition_kimgs=600,
>>>     ema_config=dict(interval=1))
>>> pggan = MODELS.build(model)
>>> # build constructor
>>> optim_wrapper = dict(
>>>     generator=dict(optimizer=dict(type='Adam', lr=0.001,
>>>                                   betas=(0., 0.99))),
>>>     discriminator=dict(
>>>         optimizer=dict(type='Adam', lr=0.001, betas=(0., 0.99))),
>>>     lr_schedule=dict(
>>>         generator={
>>>             '128': 0.0015,
>>>             '256': 0.002,
>>>             '512': 0.003,
>>>             '1024': 0.003
>>>         },
>>>         discriminator={
>>>             '128': 0.0015,
>>>             '256': 0.002,
>>>             '512': 0.003,
>>>             '1024': 0.003
>>>         }))
>>> optim_wrapper_dict_builder = PGGANOptimWrapperConstructor(
>>>     optim_wrapper)
>>> # build optim wrapper dict
>>> optim_wrapper_dict = optim_wrapper_dict_builder(pggan)
Parameters
  • optim_wrapper_cfg (dict) – Config of the optimizer wrapper.

  • paramwise_cfg (Optional[dict]) – Parameter-wise options.

__call__(module: torch.nn.Module) → mmengine.optim.OptimWrapperDict[source]

Build optimizers and return an OptimWrapperDict.

class mmagic.engine.SinGANOptimWrapperConstructor(optim_wrapper_cfg: dict, paramwise_cfg: Optional[dict] = None)[source]

OptimizerConstructor for SinGAN models. Set optimizers for each submodule of SinGAN. All submodules must be contained in a torch.nn.ModuleList named ‘blocks’, and we access each submodule by MODEL.blocks[SCALE], where MODEL is the generator or discriminator and SCALE is the index of the resolution scale.

For more details about the resolution scales and naming rules, please refer to SinGANMultiScaleGenerator and SinGANMultiScaleDiscriminator.

Example

>>> # build SinGAN model
>>> model = dict(
>>>     type='SinGAN',
>>>     data_preprocessor=dict(
>>>         type='GANDataPreprocessor',
>>>         non_image_keys=['input_sample']),
>>>     generator=dict(
>>>         type='SinGANMultiScaleGenerator',
>>>         in_channels=3,
>>>         out_channels=3,
>>>         num_scales=2),
>>>     discriminator=dict(
>>>         type='SinGANMultiScaleDiscriminator',
>>>         in_channels=3,
>>>         num_scales=3))
>>> singan = MODELS.build(model)
>>> # build constructor
>>> optim_wrapper = dict(
>>>     generator=dict(optimizer=dict(type='Adam', lr=0.0005,
>>>                                   betas=(0.5, 0.999))),
>>>     discriminator=dict(
>>>         optimizer=dict(type='Adam', lr=0.0005,
>>>                        betas=(0.5, 0.999))))
>>> optim_wrapper_dict_builder = SinGANOptimWrapperConstructor(
>>>     optim_wrapper)
>>> # build optim wrapper dict
>>> optim_wrapper_dict = optim_wrapper_dict_builder(singan)
Parameters
  • optim_wrapper_cfg (dict) – Config of the optimizer wrapper.

  • paramwise_cfg (Optional[dict]) – Parameter-wise options.

__call__(module: torch.nn.Module) → mmengine.optim.OptimWrapperDict[source]

Build optimizers and return an OptimWrapperDict.

class mmagic.engine.MultiTestLoop(runner, dataloader, evaluator, fp16=False)[source]

Bases: mmengine.runner.base_loop.BaseLoop

Test loop for MMagic models which supports evaluating multiple datasets at the same time. This class supports evaluating:

  1. Metrics on a single dataset (e.g. PSNR and SSIM on the DIV2K dataset)

  2. Different metrics on different datasets (e.g. PSNR on DIV2K, and SSIM and PSNR on Set5)

Use cases:

Case 1: metrics on a single dataset

>>> # add the following lines in your config
>>> # 1. use `MultiTestLoop` instead of `TestLoop` in MMEngine
>>> test_cfg = dict(type='MultiTestLoop')
>>> # 2. specify MultiEvaluator instead of Evaluator in MMEngine
>>> test_evaluator = dict(
>>>     type='MultiEvaluator',
>>>     metrics=[
>>>         dict(type='PSNR', crop_border=2, prefix='Set5'),
>>>         dict(type='SSIM', crop_border=2, prefix='Set5'),
>>>     ])
>>> # 3. define dataloader
>>> test_dataloader = dict(...)

Case 2: different metrics on different datasets

>>> # add the following lines in your config
>>> # 1. use `MultiTestLoop` instead of `TestLoop` in MMEngine
>>> test_cfg = dict(type='MultiTestLoop')
>>> # 2. specify a list of MultiEvaluator
>>> # do not forget to add prefix for each metric group
>>> div2k_evaluator = dict(
>>>     type='MultiEvaluator',
>>>     metrics=dict(type='SSIM', crop_border=2, prefix='DIV2K'))
>>> set5_evaluator = dict(
>>>     type='MultiEvaluator',
>>>     metrics=[
>>>         dict(type='PSNR', crop_border=2, prefix='Set5'),
>>>         dict(type='SSIM', crop_border=2, prefix='Set5'),
>>>     ])
>>> # define evaluator config
>>> test_evaluator = [div2k_evaluator, set5_evaluator]
>>> # 3. specify a dataloader for each metric group
>>> div2k_dataloader = dict(...)
>>> set5_dataloader = dict(...)
>>> # define dataloader config
>>> test_dataloader = [div2k_dataloader, set5_dataloader]
Parameters
  • runner (Runner) – A reference of runner.

  • dataloader (Dataloader or dict or list) – A dataloader object, a dict to build a dataloader, a list of dataloader objects, or a list of config dicts.

  • evaluator (Evaluator or dict or list) – An evaluator object, a dict to build an evaluator, a list of evaluator objects, or a list of config dicts.

property total_length: int
_build_dataloaders(dataloader: DATALOADER_TYPE) → List[torch.utils.data.DataLoader][source]

Build dataloaders.

Parameters

dataloader (Dataloader or dict or list) – A dataloader object, a dict to build a dataloader, a list of dataloader objects, or a list of config dicts.

Returns

List of dataloaders for computing metrics.

Return type

List[Dataloader]

_build_evaluators(evaluator: EVALUATOR_TYPE) → List[mmengine.evaluator.Evaluator][source]

Build evaluators.

Parameters

evaluator (Evaluator or dict or list) – An evaluator object, a dict to build an evaluator, a list of evaluator objects, or a list of config dicts.

Returns

List of evaluators for computing metrics.

Return type

List[Evaluator]

run()[source]

Launch test. The evaluation process consists of four steps.

  1. Prepare pre-calculated items for all metrics by calling self.evaluator.prepare_metrics().

  2. Get a list of metrics-sampler pairs. Each pair contains a list of metrics with the same sampler mode and a shared sampler.

  3. Generate images for each metrics group. Loop over elements in each sampler and feed them to the model as input by calling self.run_iter().

  4. Evaluate all metrics by calling self.evaluator.evaluate().

run_iter(idx, data_batch: dict, metrics: Sequence[mmengine.evaluator.BaseMetric])[source]

Iterate one mini-batch and feed the output to corresponding metrics.

Parameters
  • idx (int) – Current idx for the input data.

  • data_batch (dict) – Batch of data from dataloader.

  • metrics (Sequence[BaseMetric]) – Specific metrics to evaluate.

class mmagic.engine.MultiValLoop(runner, dataloader: DATALOADER_TYPE, evaluator: EVALUATOR_TYPE, fp16: bool = False)[source]

Bases: mmengine.runner.base_loop.BaseLoop

Validation loop for MMagic models which supports evaluating multiple datasets at the same time. This class supports evaluating:

  1. Metrics on a single dataset (e.g. PSNR and SSIM on the DIV2K dataset)

  2. Different metrics on different datasets (e.g. PSNR on DIV2K, and SSIM and PSNR on Set5)

Use cases:

Case 1: metrics on a single dataset

>>> # add the following lines in your config
>>> # 1. use `MultiValLoop` instead of `ValLoop` in MMEngine
>>> val_cfg = dict(type='MultiValLoop')
>>> # 2. specify MultiEvaluator instead of Evaluator in MMEngine
>>> val_evaluator = dict(
>>>     type='MultiEvaluator',
>>>     metrics=[
>>>         dict(type='PSNR', crop_border=2, prefix='Set5'),
>>>         dict(type='SSIM', crop_border=2, prefix='Set5'),
>>>     ])
>>> # 3. define dataloader
>>> val_dataloader = dict(...)

Case 2: different metrics on different datasets

>>> # add the following lines in your config
>>> # 1. use `MultiValLoop` instead of `ValLoop` in MMEngine
>>> val_cfg = dict(type='MultiValLoop')
>>> # 2. specify a list of MultiEvaluator
>>> # do not forget to add prefix for each metric group
>>> div2k_evaluator = dict(
>>>     type='MultiEvaluator',
>>>     metrics=dict(type='SSIM', crop_border=2, prefix='DIV2K'))
>>> set5_evaluator = dict(
>>>     type='MultiEvaluator',
>>>     metrics=[
>>>         dict(type='PSNR', crop_border=2, prefix='Set5'),
>>>         dict(type='SSIM', crop_border=2, prefix='Set5'),
>>>     ])
>>> # define evaluator config
>>> val_evaluator = [div2k_evaluator, set5_evaluator]
>>> # 3. specify a dataloader for each metric group
>>> div2k_dataloader = dict(...)
>>> set5_dataloader = dict(...)
>>> # define dataloader config
>>> val_dataloader = [div2k_dataloader, set5_dataloader]
Parameters
  • runner (Runner) – A reference of runner.

  • dataloader (Dataloader or dict or list) – A dataloader object, a dict to build a dataloader, a list of dataloader objects, or a list of config dicts.

  • evaluator (Evaluator or dict or list) – An evaluator object, a dict to build an evaluator, a list of evaluator objects, or a list of config dicts.

property total_length: int
_build_dataloaders(dataloader: DATALOADER_TYPE) → List[torch.utils.data.DataLoader][source]

Build dataloaders.

Parameters

dataloader (Dataloader or dict or list) – A dataloader object, a dict to build a dataloader, a list of dataloader objects, or a list of config dicts.

Returns

List of dataloaders for computing metrics.

Return type

List[Dataloader]

_build_evaluators(evaluator: EVALUATOR_TYPE) → List[mmengine.evaluator.Evaluator][source]

Build evaluators.

Parameters

evaluator (Evaluator or dict or list) – An evaluator object, a dict to build an evaluator, a list of evaluator objects, or a list of config dicts.

Returns

List of evaluators for computing metrics.

Return type

List[Evaluator]

run()[source]

Launch validation. The evaluation process consists of four steps.

  1. Prepare pre-calculated items for all metrics by calling self.evaluator.prepare_metrics().

  2. Get a list of metrics-sampler pairs. Each pair contains a list of metrics with the same sampler mode and a shared sampler.

  3. Generate images for each metrics group. Loop over elements in each sampler and feed them to the model as input by calling self.run_iter().

  4. Evaluate all metrics by calling self.evaluator.evaluate().

run_iter(idx, data_batch: dict, metrics: Sequence[mmengine.evaluator.BaseMetric])[source]

Iterate one mini-batch and feed the output to corresponding metrics.

Parameters
  • idx (int) – Current idx for the input data.

  • data_batch (dict) – Batch of data from dataloader.

  • metrics (Sequence[BaseMetric]) – Specific metrics to evaluate.

class mmagic.engine.LogProcessor(window_size=10, by_epoch=True, custom_cfg: Optional[List[dict]] = None, num_digits: int = 4, log_with_hierarchy: bool = False, mean_pattern='.*(loss|time|data_time|grad_norm).*')[source]

Bases: mmengine.runner.LogProcessor

LogProcessor inherits from mmengine.runner.LogProcessor and overwrites self.get_log_after_iter().

This log processor should be used along with mmagic.engine.runner.MultiValLoop and mmagic.engine.runner.MultiTestLoop.
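A minimal config sketch, assuming MMEngine’s standard log_processor config entry; the values are illustrative:

>>> log_processor = dict(
>>>     type='LogProcessor',
>>>     window_size=100,   # smoothing window for averaged log values
>>>     by_epoch=False)    # iteration-based training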

_get_dataloader_size(runner, mode) → int[source]

Get dataloader size of current loop. In MultiValLoop and MultiTestLoop, we use total_length instead of len(dataloader) to denote the total number of iterations.

Parameters
  • runner (Runner) – The runner of the training/validation/testing process.

  • mode (str) – Current mode of runner.

Returns

The dataloader size of current loop.

Return type

int

class mmagic.engine.LinearLrInterval(*args, interval=1, **kwargs)[source]

Bases: mmengine.optim.LinearLR

Linear learning rate scheduler for image generation.

In the beginning, the learning rate is ‘start_factor’ defined in mmengine. We give a target learning rate ‘end_factor’ and a start point ‘begin’. If self.by_epoch is True, ‘begin’ is counted in epochs; otherwise, it is counted in iterations. Before ‘begin’, the learning rate is fixed at ‘start_factor’; after ‘begin’, it is linearly updated towards ‘end_factor’.

Parameters

interval (int) – The interval to update the learning rate. Default: 1.
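A minimal config sketch; start_factor, end_factor, begin and end are inherited from mmengine.optim.LinearLR, and all values below are illustrative:

>>> param_scheduler = dict(
>>>     type='LinearLrInterval',
>>>     interval=400,        # update the lr every 400 iterations
>>>     by_epoch=False,
>>>     start_factor=1.0,    # fixed lr factor before `begin`
>>>     end_factor=0.0,      # target lr factor at `end`
>>>     begin=50000,
>>>     end=100000)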

_get_value()[source]

Compute value using chainable form of the scheduler.

class mmagic.engine.ReduceLR(optimizer, mode: str = 'min', factor: float = 0.1, patience: int = 10, threshold: float = 0.0001, threshold_mode: str = 'rel', cooldown: int = 0, min_lr: float = 0.0, eps: float = 1e-08, **kwargs)[source]

Bases: mmengine.optim._ParamScheduler

Reduces the learning rate of each parameter group when the monitored value has stopped improving, as described by the mode, factor and patience arguments below.

Notice that such decay can happen simultaneously with other changes to the learning rate from outside this scheduler.

Note

The learning rate of each parameter group will be updated at regular intervals.

Parameters
  • optimizer (Optimizer or OptimWrapper) – Wrapped optimizer.

  • mode (str, optional) – One of min, max. In min mode, lr will be reduced when the quantity monitored has stopped decreasing; in max mode it will be reduced when the quantity monitored has stopped increasing. Default: ‘min’.

  • factor (float, optional) – Factor by which the learning rate will be reduced. new_lr = lr * factor. Default: 0.1.

  • patience (int, optional) – Number of epochs with no improvement after which learning rate will be reduced. For example, if patience = 2, then we will ignore the first 2 epochs with no improvement, and will only decrease the LR after the 3rd epoch if the loss still hasn’t improved then. Default: 10.

  • threshold (float, optional) – Threshold for measuring the new optimum, to only focus on significant changes. Default: 1e-4.

  • threshold_mode (str, optional) – One of rel, abs. In rel mode, dynamic_threshold = best * ( 1 + threshold ) in ‘max’ mode or best * ( 1 - threshold ) in min mode. In abs mode, dynamic_threshold = best + threshold in max mode or best - threshold in min mode. Default: ‘rel’.

  • cooldown (int, optional) – Number of epochs to wait before resuming normal operation after lr has been reduced. Default: 0.

  • min_lr (float, optional) – Minimum LR value to keep. If LR after decay is lower than min_lr, it will be clipped to this value. Default: 0.

  • eps (float, optional) – Minimal decay applied to lr. If the difference between new and old lr is smaller than eps, the update is ignored. Default: 1e-8.

  • begin (int) – Step at which to start updating the learning rate. Defaults to 0.

  • end (int) – Step at which to stop updating the learning rate.

  • last_step (int) – The index of last step. Used for resume without state dict. Defaults to -1.

  • by_epoch (bool) – Whether the scheduled learning rate is updated by epochs. Defaults to True.
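One plausible wiring, pairing ReduceLR with the ReduceLRSchedulerHook documented above; the metric name ‘PSNR’ and the hook registration path are assumptions, not fixed by this page:

>>> param_scheduler = dict(
>>>     type='ReduceLR',
>>>     by_epoch=True,
>>>     mode='max',      # PSNR: larger is better
>>>     factor=0.5,      # new_lr = lr * 0.5
>>>     patience=5)      # wait 5 epochs without improvement
>>> custom_hooks = [
>>>     dict(type='ReduceLRSchedulerHook', val_metric='PSNR', by_epoch=True)]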

property in_cooldown
_get_value()[source]

Compute value using chainable form of the scheduler.

_init_is_better(mode)[source]
_reset()[source]
is_better(a, best)[source]