mmagic.evaluation

Package Contents

Classes

Evaluator

Evaluator for generative models.

MAE

Mean Absolute Error metric for image.

MSE

Mean Squared Error metric for image.

NIQE

Calculate NIQE (Natural Image Quality Evaluator) metric.

PSNR

Peak Signal-to-Noise Ratio.

SAD

Sum of Absolute Differences metric for image matting.

SNR

Signal-to-Noise Ratio.

SSIM

Calculate SSIM (structural similarity).

ConnectivityError

Connectivity error for evaluating alpha matte prediction.

Equivariance

Metric for generative metrics.

FrechetInceptionDistance

FID metric.

GradientError

Gradient error for evaluating alpha matte prediction.

InceptionScore

IS (Inception Score) metric.

MattingMSE

Mean Squared Error metric for image matting.

MultiScaleStructureSimilarity

MS-SSIM (Multi-Scale Structure Similarity) metric.

PerceptualPathLength

Perceptual path length.

PrecisionAndRecall

Improved Precision and recall metric.

SlicedWassersteinDistance

SWD (Sliced Wasserstein distance) metric.

TransFID

FID metric.

TransIS

IS (Inception Score) metric.

Functions

gauss_gradient(img, sigma)

Gaussian gradient.

class mmagic.evaluation.Evaluator(metrics: Union[dict, mmengine.evaluator.BaseMetric, Sequence])[source]

Bases: mmengine.evaluator.Evaluator

Evaluator for generative models. Unlike high-level vision tasks, metrics for generative models have various input types. For example, Inception Score (IS, InceptionScore) only needs to take fake images as input. However, Frechet Inception Distance (FID, FrechetInceptionDistance) needs to take both real images and fake images as input, and the numbers of real and fake images can be set arbitrarily. For Perceptual Path Length (PPL, PerceptualPathLength), the generator needs to sample images along a latent path.

In order to be compatible with different metrics, we designed two critical functions, prepare_metrics() and prepare_samplers() to support those requirements.

  • prepare_metrics() sets the images’ color order and passes the dataloader to all metrics, so that each metric can run its pre-processing and prepare the corresponding features.

  • prepare_samplers() passes the dataloader and model to the metrics and gets the corresponding sampler for each kind of metric. Metrics with the same sample mode can share a sampler.

The whole evaluation process can be found in mmagic.engine.runner.MultiValLoop.run() and mmagic.engine.runner.MultiTestLoop.run().

Parameters

metrics (dict or BaseMetric or Sequence) – The config of metrics.
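As an illustration, an Evaluator is typically built from metric configs; the sketch below is hypothetical and assumes the metric type names resolve against MMagic’s registry (adjust fake_nums and the other arguments to your setup):

    from mmagic.evaluation import Evaluator

    # Build an evaluator from metric configs. FID consumes both real and
    # fake images, while IS only consumes fake images; the evaluator
    # prepares a suitable shared sampler for each via prepare_samplers().
    evaluator = Evaluator(metrics=[
        dict(type='FrechetInceptionDistance', fake_nums=50000, prefix='FID'),
        dict(type='InceptionScore', fake_nums=50000, prefix='IS'),
    ])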

prepare_metrics(module: mmengine.model.BaseModel, dataloader: torch.utils.data.dataloader.DataLoader)[source]

Prepare for metrics before evaluation starts. Some metrics use a pretrained model to extract features, and the input channel order may vary among those models. Therefore, we first parse the output color order from the data preprocessor and set the color order for each metric. Then we pass the dataloader to each metric to prepare pre-calculated items (e.g. inception features of the real images). If a metric has no pre-calculated items, metric.prepare() will be ignored. Once this function has been called, self.is_ready is set to True; if self.is_ready is already True, the function returns immediately to avoid duplicate computation.

Parameters
  • module (BaseModel) – Model to evaluate.

  • dataloader (DataLoader) – The dataloader for real images.

static _cal_metric_hash(metric: mmagic.evaluation.metrics.base_gen_metric.GenMetric)[source]

Calculate a unique hash value based on the SAMPLER_MODE and sample_model.

prepare_samplers(module: mmengine.model.BaseModel, dataloader: torch.utils.data.dataloader.DataLoader) List[Tuple[List[mmengine.evaluator.BaseMetric], Iterator]][source]

Prepare the samplers for metrics whose sampling modes differ. For generative models, different metrics need images generated from different inputs. For example, FID, KID and IS need images generated from random noise, while PPL needs paired images along a specific noise interpolation path. Therefore, we first group metrics by their sampler mode (refer to GenMetric.SAMPLER_MODE) and build a shared sampler for each group. Note that the length of a shared sampler is determined by the metric requiring the most images in its group.

Parameters
  • module (BaseModel) – Model to evaluate. Some metrics (e.g. PPL) require module in their sampler.

  • dataloader (DataLoader) – The dataloader for real image.

Returns

A list of (metrics, shared sampler) pairs.

Return type

List[Tuple[List[BaseMetric], Iterator]]

process(data_samples: Sequence[mmagic.structures.DataSample], data_batch: Optional[Any], metrics: Sequence[mmengine.evaluator.BaseMetric]) None[source]

Pass data_batch from dataloader and predictions (generated results) to corresponding metrics.

Parameters
  • data_samples (Sequence[DataSample]) – A batch of generated results from model.

  • data_batch (Optional[Any]) – A batch of data from the metric-specific sampler or the dataloader.

  • metrics (Optional[Sequence[BaseMetric]]) – Metrics to evaluate.

evaluate() dict[source]

Invoke the evaluate method of each metric and collect the metrics dictionary. Different from Evaluator.evaluate, this function does not take size as input; each element in self.metrics calls its own evaluate method to calculate the metric.

Returns

Evaluation results of all metrics. The keys are the names of the metrics, and the values are the corresponding results.

Return type

dict
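Putting the methods above together, a full evaluation run follows roughly this flow (a simplified sketch of what mmagic.engine.runner.MultiValLoop.run() does; model and dataloader are assumed to come from the runner):

    # 1. Parse color orders and pre-compute real statistics where needed.
    evaluator.prepare_metrics(module=model, dataloader=dataloader)

    # 2. Group metrics by sample mode and build one sampler per group.
    metrics_sampler_list = evaluator.prepare_samplers(
        module=model, dataloader=dataloader)

    # 3. Feed each group's samples only to the metrics sharing its sampler.
    for metrics, sampler in metrics_sampler_list:
        for data in sampler:
            data_samples = model.test_step(data)
            evaluator.process(data_samples, data, metrics)

    # 4. Collect the metric dictionary, e.g. {'FID/fid': ..., 'IS/is': ...}.
    results = evaluator.evaluate()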

mmagic.evaluation.gauss_gradient(img, sigma)[source]

Gaussian gradient.

From https://www.mathworks.com/matlabcentral/mlc-downloads/downloads/submissions/8060/versions/2/previews/gaussgradient/gaussgradient.m/index.html

Parameters
  • img (np.ndarray) – Input image.

  • sigma (float) – Standard deviation of the gaussian kernel.

Returns

Gaussian gradient of input img.

Return type

np.ndarray
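For intuition, an equivalent computation can be sketched with SciPy (a hedged approximation: the referenced MATLAB code builds explicit derivative-of-Gaussian kernels, so border handling may differ; a 2-D single-channel input is assumed):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def gauss_gradient_sketch(img: np.ndarray, sigma: float) -> np.ndarray:
        """Gradient magnitude of a Gaussian-smoothed 2-D image."""
        img = img.astype(np.float64)
        # order=1 along an axis convolves with the Gaussian derivative
        # in that direction.
        gx = gaussian_filter(img, sigma, order=(0, 1))
        gy = gaussian_filter(img, sigma, order=(1, 0))
        return np.sqrt(gx ** 2 + gy ** 2)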

class mmagic.evaluation.MAE(gt_key: str = 'gt_img', pred_key: str = 'pred_img', mask_key: Optional[str] = None, scaling=1, device='cpu', collect_device: str = 'cpu', prefix: Optional[str] = None)[source]

Bases: mmagic.evaluation.metrics.base_sample_wise_metric.BaseSampleWiseMetric

Mean Absolute Error metric for image.

mean(abs(a-b))

Parameters
  • gt_key (str) – Key of ground-truth. Default: ‘gt_img’

  • pred_key (str) – Key of prediction. Default: ‘pred_img’

  • mask_key (str, optional) – Key of mask. If mask_key is None, all regions are evaluated. Default: None

  • collect_device (str) – Device name used for collecting results from different ranks during distributed training. Must be ‘cpu’ or ‘gpu’. Defaults to ‘cpu’.

  • prefix (str, optional) – The prefix that will be added in the metric names to disambiguate homonymous metrics of different evaluators. If prefix is not provided in the argument, self.default_prefix will be used instead. Default: None

Metrics:
  • MAE (float): Mean of Absolute Error

metric = MAE
process_image(gt, pred, mask)[source]

Process an image.

Parameters
  • gt (Tensor | np.ndarray) – GT image.

  • pred (Tensor | np.ndarray) – Pred image.

  • mask (Tensor | np.ndarray) – Mask of evaluation.

Returns

MAE result.

Return type

np.ndarray
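For reference, the per-image computation reduces to something like the following hedged sketch (the actual metric also applies the scaling argument and handles channel layouts):

    import numpy as np

    def mae_sketch(gt: np.ndarray, pred: np.ndarray,
                   mask: np.ndarray = None) -> float:
        """mean(abs(gt - pred)), optionally restricted to mask > 0."""
        diff = np.abs(gt.astype(np.float64) - pred.astype(np.float64))
        if mask is None:
            return float(diff.mean())
        weight = (mask > 0).astype(np.float64)
        return float((diff * weight).sum() / weight.sum())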

class mmagic.evaluation.MSE(gt_key: str = 'gt_img', pred_key: str = 'pred_img', mask_key: Optional[str] = None, scaling=1, device='cpu', collect_device: str = 'cpu', prefix: Optional[str] = None)[source]

Bases: mmagic.evaluation.metrics.base_sample_wise_metric.BaseSampleWiseMetric

Mean Squared Error metric for image.

mean((a-b)^2)

Parameters
  • gt_key (str) – Key of ground-truth. Default: ‘gt_img’

  • pred_key (str) – Key of prediction. Default: ‘pred_img’

  • mask_key (str, optional) – Key of mask. If mask_key is None, all regions are evaluated. Default: None

  • collect_device (str) – Device name used for collecting results from different ranks during distributed training. Must be ‘cpu’ or ‘gpu’. Defaults to ‘cpu’.

  • prefix (str, optional) – The prefix that will be added in the metric names to disambiguate homonymous metrics of different evaluators. If prefix is not provided in the argument, self.default_prefix will be used instead. Default: None

Metrics:
  • MSE (float): Mean of Squared Error

metric = MSE
process_image(gt, pred, mask)[source]

Process an image.

Parameters
  • gt (Tensor | np.ndarray) – GT image.

  • pred (Tensor | np.ndarray) – Pred image.

  • mask (Tensor | np.ndarray) – Mask of evaluation.

Returns

MSE result.

Return type

np.ndarray

class mmagic.evaluation.NIQE(key: str = 'pred_img', is_predicted: bool = True, collect_device: str = 'cpu', prefix: Optional[str] = None, crop_border=0, input_order='HWC', convert_to='gray')[source]

Bases: mmagic.evaluation.metrics.base_sample_wise_metric.BaseSampleWiseMetric

Calculate NIQE (Natural Image Quality Evaluator) metric.

Ref: Making a “Completely Blind” Image Quality Analyzer. This implementation could produce almost the same results as the official MATLAB codes: http://live.ece.utexas.edu/research/quality/niqe_release.zip

We use the official params estimated from the pristine dataset. We use the recommended block size (96, 96) without overlaps.

Parameters
  • key (str) – Key of image. Default: ‘pred_img’

  • is_predicted (bool) – If the image is predicted, it will be picked from predictions; otherwise, it will be picked from data_batch. Default: True

  • collect_device (str) – Device name used for collecting results from different ranks during distributed training. Must be ‘cpu’ or ‘gpu’. Defaults to ‘cpu’.

  • prefix (str, optional) – The prefix that will be added in the metric names to disambiguate homonymous metrics of different evaluators. If prefix is not provided in the argument, self.default_prefix will be used instead. Default: None

  • crop_border (int) – Cropped pixels at each edge of an image. These pixels are not involved in the NIQE calculation. Default: 0.

  • input_order (str) – Whether the input order is ‘HWC’ or ‘CHW’. Default: ‘HWC’.

  • convert_to (str) – Whether to convert the images to other color models. If None, the images are not altered. When computing for ‘Y’, the images are assumed to be in BGR order. Options are ‘Y’, ‘gray’ and None. Default: ‘gray’.

Metrics:
  • NIQE (float): Natural Image Quality Evaluator

metric = NIQE
process_image(gt, pred, mask) None[source]

Process an image.

Parameters
  • gt (np.ndarray) – GT image.

  • pred (np.ndarray) – Pred image.

  • mask (np.ndarray) – Mask of evaluation.

Returns

NIQE result.

Return type

np.ndarray

class mmagic.evaluation.PSNR(gt_key: str = 'gt_img', pred_key: str = 'pred_img', collect_device: str = 'cpu', prefix: Optional[str] = None, crop_border=0, input_order='CHW', convert_to=None)[source]

Bases: mmagic.evaluation.metrics.base_sample_wise_metric.BaseSampleWiseMetric

Peak Signal-to-Noise Ratio.

Ref: https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio

Parameters
  • gt_key (str) – Key of ground-truth. Default: ‘gt_img’

  • pred_key (str) – Key of prediction. Default: ‘pred_img’

  • collect_device (str) – Device name used for collecting results from different ranks during distributed training. Must be ‘cpu’ or ‘gpu’. Defaults to ‘cpu’.

  • prefix (str, optional) – The prefix that will be added in the metric names to disambiguate homonymous metrics of different evaluators. If prefix is not provided in the argument, self.default_prefix will be used instead. Default: None

  • crop_border (int) – Cropped pixels at each edge of an image. These pixels are not involved in the PSNR calculation. Default: 0.

  • input_order (str) – Whether the input order is ‘HWC’ or ‘CHW’. Default: ‘CHW’.

  • convert_to (str) – Whether to convert the images to other color models. If None, the images are not altered. When computing for ‘Y’, the images are assumed to be in BGR order. Options are ‘Y’ and None. Default: None.

Metrics:
  • PSNR (float): Peak Signal-to-Noise Ratio

metric = PSNR
process_image(gt, pred, mask)[source]

Process an image.

Parameters
  • gt (Tensor | np.ndarray) – GT image.

  • pred (Tensor | np.ndarray) – Pred image.

  • mask (Tensor | np.ndarray) – Mask of evaluation.

Returns

PSNR result.

Return type

np.ndarray
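The underlying formula for 8-bit images is PSNR = 10 · log10(MAX² / MSE) with MAX = 255; a hedged sketch of the core computation (border cropping and color conversion omitted):

    import numpy as np

    def psnr_sketch(gt: np.ndarray, pred: np.ndarray) -> float:
        """10 * log10(255**2 / MSE) for images in the 0-255 range."""
        mse = np.mean((gt.astype(np.float64) - pred.astype(np.float64)) ** 2)
        if mse == 0:
            return float('inf')  # identical images
        return 10.0 * np.log10(255.0 ** 2 / mse)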

class mmagic.evaluation.SAD(norm_const=1000, **kwargs)[source]

Bases: mmagic.evaluation.metrics.base_sample_wise_metric.BaseSampleWiseMetric

Sum of Absolute Differences metric for image matting.

This metric computes the per-pixel absolute difference and sums it across all pixels, i.e. sum(abs(a-b)) / norm_const

Note

The current implementation assumes that image / alpha / trimap arrays are in numpy format with pixel values ranging from 0 to 255.

Note

pred_alpha should be masked by the trimap before being passed to this metric.

Default prefix: ‘’

Parameters

norm_const (int) – Divide the result to reduce its magnitude. Defaults to 1000.

Metrics:
  • SAD (float): Sum of Absolute Differences

default_prefix = ''
metric = SAD
prepare(module: torch.nn.Module, dataloader: torch.utils.data.dataloader.DataLoader)[source]
process(data_batch: Sequence[dict], data_samples: Sequence[dict]) None[source]

Process one batch of data and predictions.

Parameters
  • data_batch (Sequence[dict]) – A batch of data from the dataloader.

  • data_samples (Sequence[dict]) – A batch of outputs from the model.

compute_metrics(results: List)[source]

Compute the metrics from processed results.

Parameters

results (list) – The processed results of each batch.

Returns

The computed metrics. The keys are the names of the metrics, and the values are corresponding results.

Return type

Dict
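Given the notes above, the core computation is roughly the following (a hedged sketch; it assumes 0-255 alpha mattes that are rescaled to 0-1 before summing, mirroring the normalization described in the notes):

    import numpy as np

    def sad_sketch(gt_alpha: np.ndarray, pred_alpha: np.ndarray,
                   norm_const: int = 1000) -> float:
        """sum(abs(a - b)) / norm_const on alpha mattes scaled to 0-1."""
        gt = gt_alpha.astype(np.float64) / 255.0
        pred = pred_alpha.astype(np.float64) / 255.0
        return float(np.abs(gt - pred).sum() / norm_const)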

class mmagic.evaluation.SNR(gt_key: str = 'gt_img', pred_key: str = 'pred_img', collect_device: str = 'cpu', prefix: Optional[str] = None, crop_border=0, input_order='CHW', convert_to=None)[source]

Bases: mmagic.evaluation.metrics.base_sample_wise_metric.BaseSampleWiseMetric

Signal-to-Noise Ratio.

Ref: https://en.wikipedia.org/wiki/Signal-to-noise_ratio

Parameters
  • gt_key (str) – Key of ground-truth. Default: ‘gt_img’

  • pred_key (str) – Key of prediction. Default: ‘pred_img’

  • collect_device (str) – Device name used for collecting results from different ranks during distributed training. Must be ‘cpu’ or ‘gpu’. Defaults to ‘cpu’.

  • prefix (str, optional) – The prefix that will be added in the metric names to disambiguate homonymous metrics of different evaluators. If prefix is not provided in the argument, self.default_prefix will be used instead. Default: None

  • crop_border (int) – Cropped pixels at each edge of an image. These pixels are not involved in the SNR calculation. Default: 0.

  • input_order (str) – Whether the input order is ‘HWC’ or ‘CHW’. Default: ‘CHW’.

  • convert_to (str) – Whether to convert the images to other color models. If None, the images are not altered. When computing for ‘Y’, the images are assumed to be in BGR order. Options are ‘Y’ and None. Default: None.

Metrics:
  • SNR (float): Signal-to-Noise Ratio

metric = SNR
process_image(gt, pred, mask)[source]

Process an image.

Parameters
  • gt (Tensor | np.ndarray) – GT image.

  • pred (Tensor | np.ndarray) – Pred image.

  • mask (Tensor | np.ndarray) – Mask of evaluation.

Returns

SNR result.

Return type

np.ndarray

class mmagic.evaluation.SSIM(gt_key: str = 'gt_img', pred_key: str = 'pred_img', collect_device: str = 'cpu', prefix: Optional[str] = None, crop_border=0, input_order='CHW', convert_to=None)[source]

Bases: mmagic.evaluation.metrics.base_sample_wise_metric.BaseSampleWiseMetric

Calculate SSIM (structural similarity).

Ref: Image quality assessment: From error visibility to structural similarity

The results are the same as that of the official released MATLAB code in https://ece.uwaterloo.ca/~z70wang/research/ssim/.

For three-channel images, SSIM is calculated for each channel and then averaged.
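For reference, per window the index follows Wang et al.’s definition, where C1 = (k1 L)² and C2 = (k2 L)² with k1 = 0.01, k2 = 0.03 and L the dynamic range of the pixel values:

    \mathrm{SSIM}(x, y) =
      \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}
           {(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}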

Parameters
  • gt_key (str) – Key of ground-truth. Default: ‘gt_img’

  • pred_key (str) – Key of prediction. Default: ‘pred_img’

  • collect_device (str) – Device name used for collecting results from different ranks during distributed training. Must be ‘cpu’ or ‘gpu’. Defaults to ‘cpu’.

  • prefix (str, optional) – The prefix that will be added in the metric names to disambiguate homonymous metrics of different evaluators. If prefix is not provided in the argument, self.default_prefix will be used instead. Default: None

  • crop_border (int) – Cropped pixels at each edge of an image. These pixels are not involved in the SSIM calculation. Default: 0.

  • input_order (str) – Whether the input order is ‘HWC’ or ‘CHW’. Default: ‘CHW’.

  • convert_to (str) – Whether to convert the images to other color models. If None, the images are not altered. When computing for ‘Y’, the images are assumed to be in BGR order. Options are ‘Y’ and None. Default: None.

Metrics:
  • SSIM (float): Structural similarity

metric = SSIM
process_image(gt, pred, mask)[source]

Process an image.

Parameters
  • gt (Tensor | np.ndarray) – GT image.

  • pred (Tensor | np.ndarray) – Pred image.

  • mask (Tensor | np.ndarray) – Mask of evaluation.

Returns

SSIM result.

Return type

np.ndarray

class mmagic.evaluation.ConnectivityError(step=0.1, norm_constant=1000, **kwargs)[source]

Bases: mmagic.evaluation.metrics.base_sample_wise_metric.BaseSampleWiseMetric

Connectivity error for evaluating alpha matte prediction.

Note

The current implementation assumes that image / alpha / trimap arrays are in numpy format with pixel values ranging from 0 to 255.

Note

pred_alpha should be masked by the trimap before being passed to this metric.

Parameters
  • step (float) – Step of the threshold when computing the intersection between alpha and pred_alpha. Defaults to 0.1.

  • norm_const (int) – Divide the result to reduce its magnitude. Defaults to 1000.

Default prefix: ‘’

Metrics:
  • ConnectivityError (float): Connectivity Error

metric = ConnectivityError
prepare(module: torch.nn.Module, dataloader: torch.utils.data.dataloader.DataLoader)[source]
process(data_batch: Sequence[dict], data_samples: Sequence[dict]) None[source]

Process one batch of data samples and predictions. The processed results should be stored in self.results, which will be used to compute the metrics when all batches have been processed.

Parameters
  • data_batch (Sequence[dict]) – A batch of data from the dataloader.

  • data_samples (Sequence[dict]) – A batch of outputs from the model.

compute_metrics(results: List)[source]

Compute the metrics from processed results.

Parameters

results (list) – The processed results of each batch.

Returns

The computed metrics. The keys are the names of the metrics, and the values are corresponding results.

Return type

Dict

class mmagic.evaluation.Equivariance(fake_nums: int, real_nums: int = 0, fake_key: Optional[str] = None, real_key: Optional[str] = 'gt_img', need_cond_input: bool = False, sample_mode: str = 'ema', sample_kwargs: dict = dict(), collect_device: str = 'cpu', prefix: Optional[str] = None, eq_cfg=dict())[source]

Bases: mmagic.evaluation.metrics.base_gen_metric.GenerativeMetric

Metric for generative metrics. Except for the preparation phase (prepare()), generative metrics do not need extra real images.

Parameters
  • fake_nums (int) – Number of generated images needed for the metric.

  • real_nums (int) – Number of real images needed for the metric. If -1 is passed, all images from the dataset are used. Defaults to 0.

  • fake_key (Optional[str]) – Key used to get fake images from the output dict. Defaults to None.

  • real_key (Optional[str]) – Key used to get real images from the input dict. Defaults to ‘gt_img’.

  • need_cond_input (bool) – If True, the sampler will return conditional inputs randomly sampled from the original dataset. This requires the dataset to implement get_data_info, and the field gt_label must be contained in its return value. Note that setting need_cond_input to True may influence the evaluation results, since the conditional inputs are then sampled from the dataset distribution rather than the uniform distribution. Defaults to False.

  • sample_mode (str) – Sampling mode for the generative model. Supports ‘orig’ and ‘ema’. Defaults to ‘ema’.

  • collect_device (str) – Device name used for collecting results from different ranks during distributed training. Must be ‘cpu’ or ‘gpu’. Defaults to ‘cpu’.

  • prefix (str, optional) – The prefix that will be added in the metric names to disambiguate homonymous metrics of different evaluators. If prefix is not provided in the argument, self.default_prefix will be used instead. Defaults to None.

  • sample_kwargs (dict) – Sampling arguments for model test.

name = Equivariance
process(data_batch: dict, data_samples: Sequence[dict]) None[source]

Process one batch of data samples and predictions. The processed results should be stored in self.fake_results, which will be used to compute the metrics when all batches have been processed.

Parameters
  • data_batch (dict) – A batch of data from the dataloader.

  • data_samples (Sequence[dict]) – A batch of outputs from the model.

get_metric_sampler(model: torch.nn.Module, dataloader: torch.utils.data.dataloader.DataLoader, metrics: List[mmagic.evaluation.metrics.base_gen_metric.GenerativeMetric])[source]

Get the sampler for generative metrics. Returns a dummy iterator that yields, on each iteration, a dict containing the batch size and sample mode used to generate images.

Parameters
  • model (nn.Module) – Model to evaluate.

  • dataloader (DataLoader) – Dataloader for real images. Used to determine the batch size when generating fake images.

  • metrics (List['GenerativeMetric']) – Metrics with the same sampler mode.

Returns

Sampler for generative metrics.

Return type

dummy_iterator

compute_metrics(results) dict[source]

Compute the metrics from processed results.

Parameters

results (list) – The processed results of each batch.

Returns

The computed metrics. The keys are the names of the metrics, and the values are corresponding results.

Return type

dict

_collect_target_results(target: str) Optional[list][source]

Collect function for the Eq metric. This function supports collecting results typed as Dict[List[Tensor]].

Parameters

target (str) – Target results to collect.

Returns

The collected results.

Return type

Optional[list]

class mmagic.evaluation.FrechetInceptionDistance(fake_nums: int, real_nums: int = - 1, inception_style='StyleGAN', inception_path: Optional[str] = None, inception_pkl: Optional[str] = None, fake_key: Optional[str] = None, real_key: Optional[str] = 'gt_img', need_cond_input: bool = False, sample_model: str = 'orig', collect_device: str = 'cpu', prefix: Optional[str] = None, sample_kwargs: dict = dict())[source]

Bases: mmagic.evaluation.metrics.base_gen_metric.GenerativeMetric

FID metric. In this metric, we calculate the distance between real distributions and fake distributions. The distributions are modeled by the real samples and fake samples, respectively. Inception_v3 is adopted as the feature extractor, which is widely used in StyleGAN and BigGAN.

Parameters
  • fake_nums (int) – Number of generated images needed for the metric.

  • real_nums (int) – Number of real images needed for the metric. If -1 is passed, all real images in the dataset will be used. Defaults to -1.

  • inception_style (str) – The target inception style to load. If the given style cannot be loaded successfully, a valid one will be loaded instead. Defaults to ‘StyleGAN’.

  • inception_path (str, optional) – Path to the pretrained Inception network. Defaults to None.

  • inception_pkl (str, optional) – Path to the reference inception pickle file. If None, the statistics of the real distribution will be calculated at runtime. Defaults to None.

  • fake_key (Optional[str]) – Key used to get fake images from the output dict. Defaults to None.

  • real_key (Optional[str]) – Key used to get real images from the input dict. Defaults to ‘gt_img’.

  • need_cond_input (bool) – If True, the sampler will return conditional inputs randomly sampled from the original dataset. This requires the dataset to implement get_data_info, and the field gt_label must be contained in its return value. Note that setting need_cond_input to True may influence the evaluation results, since the conditional inputs are then sampled from the dataset distribution rather than the uniform distribution. Defaults to False.

  • sample_model (str) – Sampling mode for the generative model. Supports ‘orig’ and ‘ema’. Defaults to ‘orig’.

  • collect_device (str, optional) – Device name used for collecting results from different ranks during distributed training. Must be ‘cpu’ or ‘gpu’. Defaults to ‘cpu’.

  • prefix (str, optional) – The prefix that will be added in the metric names to disambiguate homonymous metrics of different evaluators. If prefix is not provided in the argument, self.default_prefix will be used instead. Defaults to None.

name = FID
prepare(module: torch.nn.Module, dataloader: torch.utils.data.dataloader.DataLoader) None[source]

Prepare inception features for the real images.

Parameters
  • module (nn.Module) – The model to evaluate.

  • dataloader (DataLoader) – The dataloader for real images.

_load_inception(inception_style: str, inception_path: Optional[str]) Tuple[torch.nn.Module, str][source]

Load the inception network and return the successfully loaded style.

Parameters
  • inception_style (str) – Target style of the Inception network to load.

  • inception_path (Optional[str]) – The path to the inception.

Returns

The actually loaded inception network and the corresponding style.

Return type

Tuple[nn.Module, str]

forward_inception(image: torch.Tensor) torch.Tensor[source]

Feed image to inception network and get the output feature.

Parameters

image (Tensor) – Image tensor used to extract inception features.

Returns

Image feature extracted from inception.

Return type

Tensor

process(data_batch: dict, data_samples: Sequence[dict]) None[source]

Process one batch of data samples and predictions. The processed results should be stored in self.fake_results, which will be used to compute the metrics when all batches have been processed.

Parameters
  • data_batch (dict) – A batch of data from the dataloader.

  • data_samples (Sequence[dict]) – A batch of outputs from the model.

static _calc_fid(sample_mean: numpy.ndarray, sample_cov: numpy.ndarray, real_mean: numpy.ndarray, real_cov: numpy.ndarray, eps: float = 1e-06) Tuple[float][source]

Refer to the implementation from:

https://github.com/rosinality/stylegan2-pytorch/blob/master/fid.py#L34
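In NumPy terms, the Fréchet distance between the two Gaussians can be sketched as follows (hedged; the eps jitter mirrors the common guard against singular covariance products):

    import numpy as np
    from scipy import linalg

    def frechet_distance_sketch(mu1, cov1, mu2, cov2, eps=1e-6):
        """||mu1 - mu2||^2 + Tr(cov1 + cov2 - 2 * sqrtm(cov1 @ cov2))."""
        cov_sqrt, _ = linalg.sqrtm(cov1 @ cov2, disp=False)
        if not np.isfinite(cov_sqrt).all():
            # Retry with a small diagonal offset if the product is singular.
            offset = np.eye(cov1.shape[0]) * eps
            cov_sqrt = linalg.sqrtm((cov1 + offset) @ (cov2 + offset))
        if np.iscomplexobj(cov_sqrt):
            cov_sqrt = cov_sqrt.real  # drop numerical imaginary residue
        diff = mu1 - mu2
        return float(diff @ diff + np.trace(cov1 + cov2 - 2 * cov_sqrt))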

compute_metrics(fake_results: list) dict[source]

Compute the result of FID metric.

Parameters

fake_results (list) – List of image feature of fake images.

Returns

A dict of the computed FID metric and its mean and covariance.

Return type

dict

class mmagic.evaluation.GradientError(sigma=1.4, norm_constant=1000, **kwargs)[source]

Bases: mmagic.evaluation.metrics.base_sample_wise_metric.BaseSampleWiseMetric

Gradient error for evaluating alpha matte prediction.

Note

The current implementation assumes that image / alpha / trimap arrays are in numpy format with pixel values ranging from 0 to 255.

Note

pred_alpha should be masked by the trimap before being passed to this metric.

Parameters
  • sigma (float) – Standard deviation of the Gaussian kernel. Defaults to 1.4.

  • norm_const (int) – Divide the result to reduce its magnitude. Defaults to 1000.

Default prefix: ‘’

Metrics:
  • GradientError (float): Gradient Error

metric = GradientError
prepare(module: torch.nn.Module, dataloader: torch.utils.data.dataloader.DataLoader)[source]
process(data_batch: Sequence[dict], data_samples: Sequence[dict]) None[source]

Process one batch of data samples and predictions. The processed results should be stored in self.results, which will be used to compute the metrics when all batches have been processed.

Parameters
  • data_batch (Sequence[dict]) – A batch of data from the dataloader.

  • data_samples (Sequence[dict]) – A batch of outputs from the model.

compute_metrics(results: List)[source]

Compute the metrics from processed results.

Parameters

results (list) – The processed results of each batch.

Returns

The computed metrics. The keys are the names of the metrics, and the values are corresponding results.

Return type

Dict

class mmagic.evaluation.InceptionScore(fake_nums: int = 50000, resize: bool = True, splits: int = 10, inception_style: str = 'StyleGAN', inception_path: Optional[str] = None, resize_method='bicubic', use_pillow_resize: bool = True, fake_key: Optional[str] = None, need_cond_input: bool = False, sample_model='orig', collect_device: str = 'cpu', prefix: str = None)[source]

Bases: mmagic.evaluation.metrics.base_gen_metric.GenerativeMetric

IS (Inception Score) metric. The images are split into groups, and the inception score is calculated on each group of images; the mean and standard deviation of the scores are reported. Calculating the inception score on a group of images involves first using the inception v3 model to compute the conditional probability for each image (p(y|x)). The marginal probability is then calculated as the average of the conditional probabilities for the images in the group (p(y)). The KL divergence is then calculated for each image as the conditional probability multiplied by the log of the conditional probability minus the log of the marginal probability. The KL divergence is summed over all images and averaged over all classes, and the exponent of the result gives the final score.

Ref: https://github.com/sbarratt/inception-score-pytorch/blob/master/inception_score.py # noqa

We highly recommend that users download the Inception V3 script module from the following address and set inception_path to the local path. If not given, the Inception V3 from the PyTorch model zoo is used, which may bring significant differences in the final results.

Tero’s Inception V3: https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/inception-2015-12-05.pt # noqa
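Restated in code, the description above amounts to the following hedged sketch, where probs is assumed to be the softmax output of the inception network over all generated images (shape: num_images x num_classes):

    import numpy as np

    def inception_score_sketch(probs: np.ndarray, splits: int = 10):
        """Mean/std of exp(E_x[KL(p(y|x) || p(y))]) over image groups."""
        scores = []
        for chunk in np.array_split(probs, splits):
            p_y = chunk.mean(axis=0, keepdims=True)       # marginal p(y)
            kl = chunk * (np.log(chunk + 1e-12) - np.log(p_y + 1e-12))
            scores.append(np.exp(kl.sum(axis=1).mean()))  # per-group IS
        return float(np.mean(scores)), float(np.std(scores))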

Parameters
  • fake_nums (int) – Number of generated images needed for the metric.

  • resize (bool, optional) – Whether to resize images to 299x299. Defaults to True.

  • splits (int, optional) – The number of groups. Defaults to 10.

  • inception_style (str) – The target inception style to load. If the given style cannot be loaded successfully, a valid one will be loaded instead. Defaults to ‘StyleGAN’.

  • inception_path (str, optional) – Path to the pretrained Inception network. Defaults to None.

  • resize_method (str) – Resize method. If resize is False, this will be ignored. Defaults to ‘bicubic’.

  • use_pillow_resize (bool) – Whether to use bicubic interpolation with Pillow’s backend. If set to True, the evaluation process may be slightly slower but achieves a more accurate IS result. Defaults to True.

  • fake_key (Optional[str]) – Key used to get fake images from the output dict. Defaults to None.

  • need_cond_input (bool) – If True, the sampler will return conditional inputs randomly sampled from the original dataset. This requires the dataset to implement get_data_info, and the field gt_label must be contained in its return value. Note that setting need_cond_input to True may influence the evaluation results, since the conditional inputs are then sampled from the dataset distribution rather than the uniform distribution. Defaults to False.

  • sample_model (str) – Sampling mode for the generative model. Supports ‘orig’ and ‘ema’. Defaults to ‘orig’.

  • collect_device (str, optional) – Device name used for collecting results from different ranks during distributed training. Must be ‘cpu’ or ‘gpu’. Defaults to ‘cpu’.

  • prefix (str, optional) – The prefix that will be added in the metric names to disambiguate homonymous metrics of different evaluators. If prefix is not provided in the argument, self.default_prefix will be used instead. Defaults to None.

name = IS
pil_resize_method_mapping
prepare(module: torch.nn.Module, dataloader: torch.utils.data.dataloader.DataLoader) None[source]

Prepare the pre-calculated items of the metric. Defaults to doing nothing.

Parameters
  • module (nn.Module) – Model to evaluate.

  • dataloader (DataLoader) – Dataloader for the real images.

_load_inception(inception_style: str, inception_path: Optional[str]) Tuple[torch.nn.Module, str][source]

Load a pretrained inception network.

Parameters
  • inception_style (str) – Target style of the Inception network to load.

  • inception_path (Optional[str]) – The path to the inception network.

Returns

The actually loaded inception network and the corresponding style.

Return type

Tuple[nn.Module, str]

_preprocess(image: torch.Tensor) torch.Tensor[source]

Preprocess the image before passing it to the Inception network. Preprocessing includes channel conversion and resizing.

Parameters

image (Tensor) – Image tensor before preprocess.

Returns

Image tensor after resizing and channel conversion (if needed).

Return type

Tensor

process(data_batch: dict, data_samples: Sequence[dict]) None[source]

Process one batch of data samples and predictions. The processed results should be stored in self.fake_results, which will be used to compute the metrics when all batches have been processed.

Parameters
  • data_batch (dict) – A batch of data from the dataloader.

  • data_samples (Sequence[dict]) – A batch of outputs from the model.

compute_metrics(fake_results: list) dict[source]

Compute the results of Inception Score metric.

Parameters

fake_results (list) – List of image feature of fake images.

Returns

A dict of the computed IS metric and its standard error

Return type

dict

class mmagic.evaluation.MattingMSE(norm_const=1000, **kwargs)[source]

Bases: mmagic.evaluation.metrics.base_sample_wise_metric.BaseSampleWiseMetric

Mean Squared Error metric for image matting.

This metric computes the per-pixel squared error averaged across all pixels, i.e. mean((a-b)^2) / norm_const

Note

The current implementation assumes that image / alpha / trimap arrays are in numpy format with pixel values ranging from 0 to 255.

Note

pred_alpha should be masked by the trimap before being passed to this metric.

Default prefix: ‘’

Parameters

norm_const (int) – Divide the result to reduce its magnitude. Defaults to 1000.

Metrics:
  • MattingMSE (float): Mean of Squared Error

default_prefix = ''
metric = MattingMSE
prepare(module: torch.nn.Module, dataloader: torch.utils.data.dataloader.DataLoader)[source]
process(data_batch: Sequence[dict], data_samples: Sequence[dict]) None[source]

Process one batch of data and predictions.

Parameters
  • data_batch (Sequence[dict]) – A batch of data from the dataloader.

  • data_samples (Sequence[dict]) – A batch of outputs from the model.

compute_metrics(results: List)[source]

Compute the metrics from processed results.

Parameters

results (list) – The processed results of each batch.

Returns

The computed metrics. The keys are the names of the metrics, and the values are corresponding results.

Return type

Dict

class mmagic.evaluation.MultiScaleStructureSimilarity(fake_nums: int, fake_key: Optional[str] = None, need_cond_input: bool = False, sample_model: str = 'ema', collect_device: str = 'cpu', prefix: Optional[str] = None)[source]

Bases: mmagic.evaluation.metrics.base_gen_metric.GenerativeMetric

MS-SSIM (Multi-Scale Structure Similarity) metric.

Ref: https://github.com/tkarras/progressive_growing_of_gans/blob/master/metrics/ms_ssim.py # noqa

Parameters
  • fake_nums (int) – Number of generated images needed for the metric.

  • fake_key (Optional[str]) – Key used to get fake images from the output dict. Defaults to None.

  • need_cond_input (bool) – If True, the sampler will return conditional inputs randomly sampled from the original dataset. This requires the dataset to implement get_data_info, and the field gt_label must be contained in its return value. Note that setting need_cond_input to True may influence the evaluation results, since the conditional inputs are then sampled from the dataset distribution rather than the uniform distribution. Defaults to False.

  • sample_model (str) – Sampling mode for the generative model. Supports ‘orig’ and ‘ema’. Defaults to ‘ema’.

  • collect_device (str, optional) – Device name used for collecting results from different ranks during distributed training. Must be ‘cpu’ or ‘gpu’. Defaults to ‘cpu’.

  • prefix (str, optional) – The prefix that will be added in the metric names to disambiguate homonymous metrics of different evaluators. If prefix is not provided in the argument, self.default_prefix will be used instead. Defaults to None.

name = MS-SSIM
process(data_batch: dict, data_samples: Sequence[dict]) None[source]

Feed data to the metric.

Parameters
  • data_batch (dict) – Real images from the dataloader. Not used in this metric.

  • data_samples (Sequence[dict]) – Generated images.

_collect_target_results(target: str) Optional[list][source]

Collect results for the MS-SSIM metric. The size of self.fake_results in MS-SSIM depends on self.num_pairs rather than self.fake_nums.

Parameters

target (str) – Target results to collect.

Returns

The collected results.

Return type

Optional[list]

compute_metrics(results_fake: List)[source]

Compute the result of MS-SSIM.

Returns

Calculated MS-SSIM result.

Return type

dict

class mmagic.evaluation.PerceptualPathLength(fake_nums: int, real_nums: int = 0, fake_key: Optional[str] = None, real_key: Optional[str] = 'gt_img', need_cond_input: bool = False, sample_model: str = 'ema', collect_device: str = 'cpu', prefix: Optional[str] = None, crop=True, epsilon=0.0001, space='W', sampling='end', latent_dim=512)[source]

Bases: mmagic.evaluation.metrics.base_gen_metric.GenerativeMetric

Perceptual path length.

Measure the difference between consecutive images (their VGG16 embeddings) when interpolating between two random inputs. Drastic changes mean that multiple features have changed together and that they might be entangled.

Ref: https://github.com/rosinality/stylegan2-pytorch/blob/master/ppl.py # noqa
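In code terms, one PPL sample amounts to the following hedged sketch, where generator and perceptual_distance are hypothetical stand-ins for the model under test and a VGG16-based distance (linear interpolation is shown for simplicity; the Z-space variant uses spherical interpolation):

    import torch

    def ppl_sample_sketch(generator, perceptual_distance,
                          latent_dim=512, epsilon=1e-4):
        """Perceptual distance between two nearby points on a latent path."""
        z0 = torch.randn(1, latent_dim)
        z1 = torch.randn(1, latent_dim)
        t = torch.rand(())  # 'end' sampling would instead fix t to 0 or 1
        img_a = generator(torch.lerp(z0, z1, t))
        img_b = generator(torch.lerp(z0, z1, t + epsilon))
        # Dividing by epsilon**2 turns the squared perceptual difference
        # into an approximate squared path derivative.
        return perceptual_distance(img_a, img_b) / (epsilon ** 2)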

Parameters
  • num_images (int) – The number of evaluated generated samples.

  • image_shape (tuple, optional) – Image shape in order “CHW”. Defaults to None.

  • crop (bool, optional) – Whether crop images. Defaults to True.

  • epsilon (float, optional) – Epsilon parameter for path sampling. Defaults to 1e-4.

  • space (str, optional) – Latent space. Defaults to ‘W’.

  • sampling (str, optional) – Sampling mode, whether sampling in full path or endpoints. Defaults to ‘end’.

  • latent_dim (int, optional) – Latent dimension of input noise. Defaults to 512.

  • need_cond_input (bool) – If True, the sampler will return conditional inputs randomly sampled from the original dataset. This requires the dataset to implement get_data_info, and the field gt_label must be contained in its return value. Note that setting need_cond_input to True may influence the evaluation results, since the conditional inputs are then sampled from the dataset distribution rather than the uniform distribution. Defaults to False.

SAMPLER_MODE = path
process(data_batch: dict, data_samples: Sequence[dict]) None[source]

Process one batch of data samples and predictions. The processed results should be stored in self.fake_results, which will be used to compute the metrics when all batches have been processed.

Parameters
  • data_batch (dict) – A batch of data from the dataloader.

  • data_samples (Sequence[dict]) – A batch of outputs from the model.

_compute_distance(images)[source]

Compute perceptual distances between consecutive images.

Parameters

images (Tensor) – Input tensor.

compute_metrics(fake_results: list) dict[source]

Summarize the results.

Returns

Summarized results.

Return type

dict | list

get_metric_sampler(model: torch.nn.Module, dataloader: torch.utils.data.dataloader.DataLoader, metrics: list)[source]

Get the sampler for generative metrics. Returns a dummy iterator that yields, on each iteration, a dict containing the batch size and sample mode used to generate images.

Parameters
  • model (nn.Module) – Model to evaluate.

  • dataloader (DataLoader) – Dataloader for real images. Used to determine the batch size when generating fake images.

  • metrics (list) – Metrics with the same sampler mode.

Returns

Sampler for generative metrics.

Return type

dummy_iterator

class mmagic.evaluation.PrecisionAndRecall(fake_nums, real_nums=- 1, k=3, fake_key: Optional[str] = None, real_key: Optional[str] = 'gt_img', need_cond_input: bool = False, sample_model: str = 'ema', collect_device: str = 'cpu', prefix: Optional[str] = None, vgg16_script='work_dirs/cache/vgg16.pt', vgg16_pkl=None, row_batch_size=10000, col_batch_size=10000, auto_save=True)[source]

Bases: mmagic.evaluation.metrics.base_gen_metric.GenerativeMetric

Improved Precision and recall metric.

In this metric, we draw real and generated samples respectively and embed them into a high-dimensional feature space using a pre-trained classifier network. We use these features to estimate the corresponding manifolds. We obtain each estimate by calculating pairwise Euclidean distances between all feature vectors in the set and, for each feature vector, constructing a hypersphere with radius equal to the distance to its kth nearest neighbor. Together, these hyperspheres define a volume in the feature space that serves as an estimate of the true manifold. Precision is quantified by querying, for each generated image, whether the image is within the estimated manifold of real images. Symmetrically, recall is calculated by querying, for each real image, whether the image is within the estimated manifold of generated images.

Ref: https://github.com/NVlabs/stylegan2-ada-pytorch/blob/main/metrics/precision_recall.py # noqa

We highly recommend that users download the vgg16 script module from the following address and set vgg16_script to the local path. If not given, the vgg16 from the PyTorch model zoo is used, which may bring significant differences in the final results.

Tero’s vgg16: https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/vgg16.pt
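The manifold test described above can be sketched as follows (hedged; real_feats and fake_feats are assumed to be VGG16 feature matrices of shape num_samples x feat_dim, and the real implementation batches the pairwise distances):

    import torch

    def manifold_precision_sketch(real_feats: torch.Tensor,
                                  fake_feats: torch.Tensor,
                                  k: int = 3) -> float:
        """Fraction of fake samples inside the kNN manifold of real ones."""
        # Radius of each real hypersphere: distance to its kth nearest
        # real neighbour (k + 1 because the nearest point is itself).
        d_real = torch.cdist(real_feats, real_feats)
        radii = d_real.kthvalue(k + 1, dim=1).values
        # A fake sample counts if it falls inside any real hypersphere.
        d_fake = torch.cdist(fake_feats, real_feats)
        inside = (d_fake <= radii.unsqueeze(0)).any(dim=1)
        # Recall is the symmetric computation with real/fake swapped.
        return inside.float().mean().item()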

Parameters
  • num_images (int) – The number of evaluated generated samples.

  • image_shape (tuple) – Image shape in order “CHW”. Defaults to None.

  • num_real_need (int | None, optional) – The number of real images. Defaults to None.

  • full_dataset (bool, optional) – Whether to use full dataset for evaluation. Defaults to False.

  • k (int, optional) – Kth nearest parameter. Defaults to 3.

  • bgr2rgb (bool, optional) – Whether to change the order of image channel. Defaults to True.

  • vgg16_script (str, optional) – Path for the Tero’s vgg16 module. Defaults to ‘work_dirs/cache/vgg16.pt’.

  • row_batch_size (int, optional) – The batch size of row data. Defaults to 10000.

  • col_batch_size (int, optional) – The batch size of col data. Defaults to 10000.

  • auto_save (bool, optional) – Whether to save the vgg feature automatically.

  • need_cond_input (bool) – If True, the sampler will return conditional inputs randomly sampled from the original dataset. This requires the dataset to implement get_data_info, and the field gt_label must be contained in its return value. Note that setting need_cond_input to True may influence the evaluation results, since the conditional inputs are then sampled from the dataset distribution rather than the uniform distribution. Defaults to False.

name = PR
_load_vgg(vgg16_script: Optional[str]) Tuple[torch.nn.Module, bool][source]

Load VGG network from the given path.

Parameters

vgg16_script (Optional[str]) – The path of the script model of the VGG network. If None, the pytorch version will be loaded.

Returns

The actually loaded VGG network and the corresponding style.

Return type

Tuple[nn.Module, bool]

extract_features(images: torch.Tensor) torch.Tensor[source]

Extract image features.

Parameters

images (torch.Tensor) – Images tensor.

Returns

Vgg16 features of input images.

Return type

torch.Tensor

compute_metrics(results_fake) dict[source]

Compute the metrics from processed results.

Returns

Summarized results.

Return type

dict

process(data_batch: dict, data_samples: Sequence[dict]) None[source]

Process one batch of data samples and predictions. The processed results should be stored in self.fake_results, which will be used to compute the metrics when all batches have been processed.

Parameters
  • data_batch (dict) – A batch of data from the dataloader.

  • data_samples (Sequence[dict]) – A batch of outputs from the model.

prepare(module: torch.nn.Module, dataloader: torch.utils.data.dataloader.DataLoader) None[source]

Prepare for the pre-calculating items of the metric. Defaults to do nothing.

Parameters
  • module (nn.Module) – Model to evaluate.

  • dataloader (DataLoader) – Dataloader for the real images.

class mmagic.evaluation.SlicedWassersteinDistance(fake_nums: int, image_shape: tuple, fake_key: Optional[str] = None, real_key: Optional[str] = 'gt_img', sample_model: str = 'ema', collect_device: str = 'cpu', prefix: Optional[str] = None)[source]

Bases: mmagic.evaluation.metrics.base_gen_metric.GenMetric

SWD (Sliced Wasserstein distance) metric. We calculate the SWD of two sets of images in the following way. In every ‘feed’, we obtain the Laplacian pyramid of every image and extract patches from the Laplacian pyramids as descriptors. In ‘summary’, we normalize these descriptors along the channel dimension and reshape them so that they can represent the distribution of real/fake images. Then we calculate the sliced Wasserstein distance of the real and fake descriptors as the SWD of the real and fake images.

Ref: https://github.com/tkarras/progressive_growing_of_gans/blob/master/metrics/sliced_wasserstein.py # noqa
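The final distance over two descriptor sets can be sketched as follows (hedged; desc_a and desc_b are assumed to be equally sized matrices of flattened, normalized patch descriptors):

    import torch

    def sliced_wasserstein_sketch(desc_a: torch.Tensor,
                                  desc_b: torch.Tensor,
                                  n_projections: int = 4) -> float:
        """Average 1-D Wasserstein distance over random projections."""
        dists = []
        for _ in range(n_projections):
            proj = torch.randn(desc_a.shape[1])
            proj = proj / proj.norm()
            # In 1-D, the Wasserstein distance reduces to the mean absolute
            # difference between the sorted projected samples.
            a = (desc_a @ proj).sort().values
            b = (desc_b @ proj).sort().values
            dists.append((a - b).abs().mean())
        return torch.stack(dists).mean().item()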

Parameters
  • fake_nums (int) – Number of generated images needed for the metric.

  • image_shape (tuple) – Image shape in order “CHW”.

  • fake_key (Optional[str]) – Key used to get fake images from the output dict. Defaults to None.

  • real_key (Optional[str]) – Key used to get real images from the input dict. Defaults to ‘gt_img’.

  • sample_model (str) – Sampling mode for the generative model. Supports ‘orig’ and ‘ema’. Defaults to ‘ema’.

  • collect_device (str) – Device name used for collecting results from different ranks during distributed training. Must be ‘cpu’ or ‘gpu’. Defaults to ‘cpu’.

  • prefix (str, optional) – The prefix that will be added in the metric names to disambiguate homonymous metrics of different evaluators. If prefix is not provided in the argument, self.default_prefix will be used instead. Defaults to None.

name = SWD
process(data_batch: dict, data_samples: Sequence[dict]) None[source]

Process one batch of data samples and predictions. The processed results should be stored in self.fake_results and self.real_results, which will be used to compute the metrics when all batches have been processed.

Parameters
  • data_batch (dict) – A batch of data from the dataloader.

  • data_samples (Sequence[dict]) – A batch of outputs from the model.

_collect_target_results(target: str) Optional[list][source]

Collect function for the SWD metric. This function supports collecting results typed as List[List[Tensor]].

Parameters

target (str) – Target results to collect.

Returns

The collected results.

Return type

Optional[list]

compute_metrics(results_fake, results_real) dict[source]

Compute the result of SWD metric.

Parameters
  • results_fake (list) – List of image features of fake images.

  • results_real (list) – List of image features of real images.

Returns

A dict of the computed SWD metric.

Return type

dict

class mmagic.evaluation.TransFID(fake_nums: int, real_nums: int = - 1, inception_style='StyleGAN', inception_path: Optional[str] = None, inception_pkl: Optional[str] = None, fake_key: Optional[str] = None, real_key: Optional[str] = 'img', sample_model: str = 'ema', collect_device: str = 'cpu', prefix: Optional[str] = None)[source]

Bases: FrechetInceptionDistance

FID metric. In this metric, we calculate the distance between real distributions and fake distributions. The distributions are modeled by the real samples and fake samples, respectively. Inception_v3 is adopted as the feature extractor, which is widely used in StyleGAN and BigGAN.

Parameters
  • fake_nums (int) – Number of generated images needed for the metric.

  • real_nums (int) – Number of real images needed for the metric. If -1 is passed, all real images in the dataset will be used. Defaults to -1.

  • inception_style (str) – The target inception style to load. If the given style cannot be loaded successfully, a valid one will be loaded instead. Defaults to ‘StyleGAN’.

  • inception_path (str, optional) – Path to the pretrained Inception network. Defaults to None.

  • inception_pkl (str, optional) – Path to the reference inception pickle file. If None, the statistics of the real distribution will be calculated at runtime. Defaults to None.

  • fake_key (Optional[str]) – Key used to get fake images from the output dict. Defaults to None.

  • real_key (Optional[str]) – Key used to get real images from the input dict. Defaults to ‘img’.

  • sample_model (str) – Sampling mode for the generative model. Supports ‘orig’ and ‘ema’. Defaults to ‘ema’.

  • collect_device (str, optional) – Device name used for collecting results from different ranks during distributed training. Must be ‘cpu’ or ‘gpu’. Defaults to ‘cpu’.

  • prefix (str, optional) – The prefix that will be added in the metric names to disambiguate homonymous metrics of different evaluators. If prefix is not provided in the argument, self.default_prefix will be used instead. Defaults to None.

get_metric_sampler(model: torch.nn.Module, dataloader: torch.utils.data.dataloader.DataLoader, metrics: List[mmagic.evaluation.metrics.base_gen_metric.GenerativeMetric]) torch.utils.data.dataloader.DataLoader[source]

Get sampler for normal metrics. Directly returns the dataloader.

Parameters
  • model (nn.Module) – Model to evaluate.

  • dataloader (DataLoader) – Dataloader for real images.

  • metrics (List['GenMetric']) – Metrics with the same sample mode.

Returns

Default sampler for normal metrics.

Return type

DataLoader

class mmagic.evaluation.TransIS(fake_nums: int = 50000, resize: bool = True, splits: int = 10, inception_style: str = 'StyleGAN', inception_path: Optional[str] = None, resize_method='bicubic', use_pillow_resize: bool = True, fake_key: Optional[str] = None, sample_model='ema', collect_device: str = 'cpu', prefix: str = None)[source]

Bases: InceptionScore

IS (Inception Score) metric. The images are split into groups, and the inception score is calculated on each group of images; the mean and standard deviation of the scores are reported. Calculating the inception score on a group of images involves first using the inception v3 model to compute the conditional probability for each image (p(y|x)). The marginal probability is then calculated as the average of the conditional probabilities for the images in the group (p(y)). The KL divergence is then calculated for each image as the conditional probability multiplied by the log of the conditional probability minus the log of the marginal probability. The KL divergence is summed over all images and averaged over all classes, and the exponent of the result gives the final score.

Ref: https://github.com/sbarratt/inception-score-pytorch/blob/master/inception_score.py # noqa

We highly recommend that users download the Inception V3 script module from the following address and set inception_path to the local path. If not given, the Inception V3 from the PyTorch model zoo is used, which may bring significant differences in the final results.

Tero’s Inception V3: https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/inception-2015-12-05.pt # noqa

Parameters
  • fake_nums (int) – Number of generated images needed for the metric.

  • resize (bool, optional) – Whether to resize images to 299x299. Defaults to True.

  • splits (int, optional) – The number of groups. Defaults to 10.

  • inception_style (str) – The target inception style to load. If the given style cannot be loaded successfully, a valid one will be loaded instead. Defaults to ‘StyleGAN’.

  • inception_path (str, optional) – Path to the pretrained Inception network. Defaults to None.

  • resize_method (str) – Resize method. If resize is False, this will be ignored. Defaults to ‘bicubic’.

  • use_pillow_resize (bool) – Whether to use bicubic interpolation with Pillow’s backend. If set to True, the evaluation process may be slightly slower but achieves a more accurate IS result. Defaults to True.

  • fake_key (Optional[str]) – Key used to get fake images from the output dict. Defaults to None.

  • sample_model (str) – Sampling mode for the generative model. Supports ‘orig’ and ‘ema’. Defaults to ‘ema’.

  • collect_device (str, optional) – Device name used for collecting results from different ranks during distributed training. Must be ‘cpu’ or ‘gpu’. Defaults to ‘cpu’.

  • prefix (str, optional) – The prefix that will be added in the metric names to disambiguate homonymous metrics of different evaluators. If prefix is not provided in the argument, self.default_prefix will be used instead. Defaults to None.

get_metric_sampler(model: torch.nn.Module, dataloader: torch.utils.data.dataloader.DataLoader, metrics: List[mmagic.evaluation.metrics.base_gen_metric.GenerativeMetric]) torch.utils.data.dataloader.DataLoader[source]

Get sampler for normal metrics. Directly returns the dataloader.

Parameters
  • model (nn.Module) – Model to evaluate.

  • dataloader (DataLoader) – Dataloader for real images.

  • metrics (List['GenMetric']) – Metrics with the same sample mode.

Returns

Default sampler for normal metrics.

Return type

DataLoader
