
mmagic.engine.runner

Package Contents

Classes

LogProcessor

LogProcessor inherits from mmengine.runner.LogProcessor and overwrites self.get_log_after_iter().

MultiTestLoop

Test loop for MMagic models that supports evaluating multiple datasets at the same time.

MultiValLoop

Validation loop for MMagic models that supports evaluating multiple datasets at the same time.

class mmagic.engine.runner.LogProcessor(window_size=10, by_epoch=True, custom_cfg: Optional[List[dict]] = None, num_digits: int = 4, log_with_hierarchy: bool = False, mean_pattern='.*(loss|time|data_time|grad_norm).*')[source]

Bases: mmengine.runner.LogProcessor

LogProcessor inherits from mmengine.runner.LogProcessor and overwrites self.get_log_after_iter().

This log processor should be used along with mmagic.engine.runner.MultiValLoop and mmagic.engine.runner.MultiTestLoop.
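
For example, a config that uses this log processor together with the multi-dataset loops could look like the sketch below (a minimal, illustrative snippet; key names follow standard MMEngine config conventions and the values are placeholders):

>>> # illustrative config sketch, not copied from any MMagic config
>>> log_processor = dict(type='LogProcessor', window_size=100, by_epoch=False)
>>> val_cfg = dict(type='MultiValLoop')
>>> test_cfg = dict(type='MultiTestLoop')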

_get_dataloader_size(runner, mode) → int[source]

Get the dataloader size of the current loop. In MultiValLoop and MultiTestLoop, we use total_length instead of len(dataloader) to denote the total number of iterations.

Parameters
  • runner (Runner) – The runner of the training/validation/testing process.

  • mode (str) – Current mode of runner.

Returns

The dataloader size of the current loop.

Return type

int
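
A hedged sketch of the lookup described above (how the current loop is fetched from the runner is an assumption; only total_length is documented here):

>>> # illustrative sketch only; attribute-access details are assumptions
>>> def _get_dataloader_size(self, runner, mode) -> int:
>>>     loop = getattr(runner, f'{mode}_loop')
>>>     if hasattr(loop, 'total_length'):
>>>         # MultiValLoop / MultiTestLoop: sum of all dataloader lengths
>>>         return loop.total_length
>>>     return len(loop.dataloader)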

class mmagic.engine.runner.MultiTestLoop(runner, dataloader, evaluator, fp16=False)[source]

Bases: mmengine.runner.base_loop.BaseLoop

Test loop for MMagic models that supports evaluating multiple datasets at the same time. This class supports evaluating:

  1. Metrics (metric) on a single dataset (e.g. PSNR and SSIM on the DIV2K dataset)

  2. Different metrics on different datasets (e.g. PSNR on DIV2K, and PSNR and SSIM on Set5)

Use cases:

Case 1: metrics on a single dataset

>>> # add the following lines in your config
>>> # 1. use `MultiTestLoop` instead of `TestLoop` in MMEngine
>>> test_cfg = dict(type='MultiTestLoop')
>>> # 2. specify MultiEvaluator instead of Evaluator in MMEngine
>>> test_evaluator = dict(
>>>     type='MultiEvaluator',
>>>     metrics=[
>>>         dict(type='PSNR', crop_border=2, prefix='Set5'),
>>>         dict(type='SSIM', crop_border=2, prefix='Set5'),
>>>     ])
>>> # 3. define dataloader
>>> test_dataloader = dict(...)

Case 2: different metrics on different datasets

>>> # add the following lines in your config
>>> # 1. use `MultiTestLoop` instead of `TestLoop` in MMEngine
>>> test_cfg = dict(type='MultiTestLoop')
>>> # 2. specify a list of MultiEvaluator configs
>>> # do not forget to add a prefix for each metric group
>>> div2k_evaluator = dict(
>>>     type='MultiEvaluator',
>>>     metrics=dict(type='SSIM', crop_border=2, prefix='DIV2K'))
>>> set5_evaluator = dict(
>>>     type='MultiEvaluator',
>>>     metrics=[
>>>         dict(type='PSNR', crop_border=2, prefix='Set5'),
>>>         dict(type='SSIM', crop_border=2, prefix='Set5'),
>>>     ])
>>> # define evaluator config
>>> test_evaluator = [div2k_evaluator, set5_evaluator]
>>> # 3. specify a dataloader for each metric group
>>> div2k_dataloader = dict(...)
>>> set5_dataloader = dict(...)
>>> # define dataloader config
>>> test_dataloader = [div2k_dataloader, set5_dataloader]

Parameters
  • runner (Runner) – A reference to the runner.

  • dataloader (Dataloader or dict or list) – A dataloader object, a dict to build a dataloader, a list of dataloader objects, or a list of config dicts.

  • evaluator (Evaluator or dict or list) – An evaluator object, a dict to build the evaluator, a list of evaluator objects, or a list of config dicts.

property total_length: int

_build_dataloaders(dataloader: DATALOADER_TYPE) → List[torch.utils.data.DataLoader][source]

Build dataloaders.

Parameters

dataloader (Dataloader or dict or list) – A dataloader object, a dict to build a dataloader, a list of dataloader objects, or a list of config dicts.

Returns

List of dataloaders for computing metrics.

Return type

List[Dataloader]
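
A hedged sketch of the normalization this method implies (assuming config dicts are materialized with Runner.build_dataloader; not the verbatim implementation):

>>> # illustrative sketch, not the verbatim implementation
>>> from mmengine.runner import Runner
>>> def _build_dataloaders(self, dataloader):
>>>     # wrap a single dataloader/config into a list, then build any dicts
>>>     if not isinstance(dataloader, list):
>>>         dataloader = [dataloader]
>>>     return [
>>>         Runner.build_dataloader(item) if isinstance(item, dict) else item
>>>         for item in dataloader
>>>     ]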

_build_evaluators(evaluator: EVALUATOR_TYPE) → List[mmengine.evaluator.Evaluator][source]

Build evaluators.

Parameters

evaluator (Evaluator or dict or list) – An evaluator object, a dict to build the evaluator, a list of evaluator objects, or a list of config dicts.

Returns

List of evaluators for computing metrics.

Return type

List[Evaluator]

run()[source]

Launch testing. The evaluation process consists of four steps (a sketch follows the list below).

  1. Prepare pre-calculated items for all metrics by calling self.evaluator.prepare_metrics().

  2. Get a list of metric-sampler pairs. Each pair contains a list of metrics with the same sampler mode and a shared sampler.

  3. Generate images for each metrics group. Loop over the elements of each sampler and feed them to the model as input by calling self.run_iter().

  4. Evaluate all metrics by calling self.evaluator.evaluate().
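
Put together, the four steps compose roughly as in the sketch below (a hedged outline; apart from prepare_metrics(), run_iter() and evaluate(), names such as prepare_samplers() are assumptions):

>>> # hedged outline; `prepare_samplers` is an assumed helper name
>>> def run(self):
>>>     self.runner.model.eval()
>>>     # 1. prepare pre-calculated items for all metrics
>>>     self.evaluator.prepare_metrics()
>>>     # 2. group metrics sharing a sampler mode with their shared sampler
>>>     metrics_sampler_pairs = self.evaluator.prepare_samplers()
>>>     for metrics, sampler in metrics_sampler_pairs:
>>>         # 3. feed every batch of the shared sampler to the model
>>>         for idx, data_batch in enumerate(sampler):
>>>             self.run_iter(idx, data_batch, metrics)
>>>     # 4. compute and report all metric results
>>>     return self.evaluator.evaluate()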

run_iter(idx, data_batch: dict, metrics: Sequence[mmengine.evaluator.BaseMetric])[source]

Iterate over one mini-batch and feed the output to the corresponding metrics (a sketch follows the parameter list below).

Parameters
  • idx (int) – Current index of the input data.

  • data_batch (dict) – Batch of data from dataloader.

  • metrics (Sequence[BaseMetric]) – Specific metrics to evaluate.
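
A hedged sketch of what one iteration involves (model.test_step() and BaseMetric.process() are standard MMEngine APIs; hook calls and other bookkeeping are omitted):

>>> # illustrative sketch; hook calls and bookkeeping omitted
>>> import torch
>>> def run_iter(self, idx, data_batch, metrics):
>>>     with torch.no_grad():
>>>         outputs = self.runner.model.test_step(data_batch)
>>>     for metric in metrics:
>>>         # hand the outputs only to the metrics of this group
>>>         metric.process(data_batch, outputs)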

class mmagic.engine.runner.MultiValLoop(runner, dataloader: DATALOADER_TYPE, evaluator: EVALUATOR_TYPE, fp16: bool = False)[source]

Bases: mmengine.runner.base_loop.BaseLoop

Validation loop for MMagic models that supports evaluating multiple datasets at the same time. This class supports evaluating:

  1. Metrics (metric) on a single dataset (e.g. PSNR and SSIM on the DIV2K dataset)

  2. Different metrics on different datasets (e.g. PSNR on DIV2K, and PSNR and SSIM on Set5)

Use cases:

Case 1: metrics on a single dataset

>>> # add the following lines in your config
>>> # 1. use `MultiValLoop` instead of `ValLoop` in MMEngine
>>> val_cfg = dict(type='MultiValLoop')
>>> # 2. specify MultiEvaluator instead of Evaluator in MMEngine
>>> val_evaluator = dict(
>>>     type='MultiEvaluator',
>>>     metrics=[
>>>         dict(type='PSNR', crop_border=2, prefix='Set5'),
>>>         dict(type='SSIM', crop_border=2, prefix='Set5'),
>>>     ])
>>> # 3. define dataloader
>>> val_dataloader = dict(...)

Case 2: different metrics on different datasets

>>> # add the following lines in your config
>>> # 1. use `MultiValLoop` instead of `ValLoop` in MMEngine
>>> val_cfg = dict(type='MultiValLoop')
>>> # 2. specify a list of MultiEvaluator configs
>>> # do not forget to add a prefix for each metric group
>>> div2k_evaluator = dict(
>>>     type='MultiEvaluator',
>>>     metrics=dict(type='SSIM', crop_border=2, prefix='DIV2K'))
>>> set5_evaluator = dict(
>>>     type='MultiEvaluator',
>>>     metrics=[
>>>         dict(type='PSNR', crop_border=2, prefix='Set5'),
>>>         dict(type='SSIM', crop_border=2, prefix='Set5'),
>>>     ])
>>> # define evaluator config
>>> val_evaluator = [div2k_evaluator, set5_evaluator]
>>> # 3. specify a dataloader for each metric group
>>> div2k_dataloader = dict(...)
>>> set5_dataloader = dict(...)
>>> # define dataloader config
>>> val_dataloader = [div2k_dataloader, set5_dataloader]

Parameters
  • runner (Runner) – A reference to the runner.

  • dataloader (Dataloader or dict or list) – A dataloader object, a dict to build a dataloader, a list of dataloader objects, or a list of config dicts.

  • evaluator (Evaluator or dict or list) – An evaluator object, a dict to build the evaluator, a list of evaluator objects, or a list of config dicts.

property total_length: int

_build_dataloaders(dataloader: DATALOADER_TYPE) → List[torch.utils.data.DataLoader][source]

Build dataloaders.

Parameters

dataloader (Dataloader or dict or list) – A dataloader object, a dict to build a dataloader, a list of dataloader objects, or a list of config dicts.

Returns

List of dataloaders for computing metrics.

Return type

List[Dataloader]

_build_evaluators(evaluator: EVALUATOR_TYPE) → List[mmengine.evaluator.Evaluator][source]

Build evaluators.

Parameters

evaluator (Evaluator or dict or list) – An evaluator object, a dict to build the evaluator, a list of evaluator objects, or a list of config dicts.

Returns

List of evaluators for computing metrics.

Return type

List[Evaluator]

run()[source]

Launch validation. The evaluation process consists of four steps.

  1. Prepare pre-calculated items for all metrics by calling self.evaluator.prepare_metrics().

  2. Get a list of metric-sampler pairs. Each pair contains a list of metrics with the same sampler mode and a shared sampler.

  3. Generate images for each metrics group. Loop over the elements of each sampler and feed them to the model as input by calling self.run_iter().

  4. Evaluate all metrics by calling self.evaluator.evaluate().

run_iter(idx, data_batch: dict, metrics: Sequence[mmengine.evaluator.BaseMetric])[source]

Iterate over one mini-batch and feed the output to the corresponding metrics.

Parameters
  • idx (int) – Current index of the input data.

  • data_batch (dict) – Batch of data from dataloader.

  • metrics (Sequence[BaseMetric]) – Specific metrics to evaluate.
