
mmagic.evaluation.metrics.inception_score

Module Contents

Classes

InceptionScore

IS (Inception Score) metric. The images are split into groups, and the inception score is calculated on each group.

TransIS

IS (Inception Score) metric. The images are split into groups, and the inception score is calculated on each group.

class mmagic.evaluation.metrics.inception_score.InceptionScore(fake_nums: int = 50000, resize: bool = True, splits: int = 10, inception_style: str = 'StyleGAN', inception_path: Optional[str] = None, resize_method='bicubic', use_pillow_resize: bool = True, fake_key: Optional[str] = None, need_cond_input: bool = False, sample_model='orig', collect_device: str = 'cpu', prefix: str = None)[source]

Bases: mmagic.evaluation.metrics.base_gen_metric.GenerativeMetric

IS (Inception Score) metric. The images are split into groups, the inception score is calculated on each group of images, and the mean and standard deviation of the scores are reported. Calculating the inception score on a group of images involves first using the Inception V3 model to compute the conditional probability for each image (p(y|x)). The marginal probability is then calculated as the average of the conditional probabilities of the images in the group (p(y)). The KL divergence for each image is then computed as the conditional probability multiplied by the log of the conditional probability minus the log of the marginal probability. The KL divergence is summed over all classes and averaged over all images in the group, and the exponent of the result gives the group's score.

Ref: https://github.com/sbarratt/inception-score-pytorch/blob/master/inception_score.py # noqa
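
To make the formula above concrete, here is a minimal, self-contained sketch (not mmagic's implementation) that computes IS from a tensor of softmax probabilities:

    import torch

    def inception_score(probs: torch.Tensor, splits: int = 10):
        """Toy IS computation from softmax outputs of shape (N, num_classes)."""
        scores = []
        for group in probs.chunk(splits, dim=0):
            p_y = group.mean(dim=0, keepdim=True)        # marginal p(y) for the group
            kl = group * (group.log() - p_y.log())       # per-image, per-class KL terms
            kl = kl.sum(dim=1).mean()                    # sum over classes, average over images
            scores.append(kl.exp())                      # group score = exp(mean KL)
        scores = torch.stack(scores)
        return scores.mean().item(), scores.std().item()  # mean and std over groups

    # Example with dummy predictions for 5000 images over 1000 classes.
    probs = torch.softmax(torch.randn(5000, 1000), dim=1)
    mean_is, std_is = inception_score(probs)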

Note that we highly recommend that users download the Inception V3 script module from the address below and set inception_path to the local path. If it is not given, we will use the Inception V3 from the PyTorch model zoo. However, this may bring significant differences in the final results.

Tero’s Inception V3: https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/inception-2015-12-05.pt # noqa

Parameters
  • fake_nums (int) – Number of generated images needed for the metric.

  • resize (bool, optional) – Whether to resize images to 299x299. Defaults to True.

  • splits (int, optional) – The number of groups. Defaults to 10.

  • inception_style (str) – The Inception style to load. If the given style cannot be loaded successfully, a valid one will be loaded instead. Defaults to 'StyleGAN'.

  • inception_path (str, optional) – Path to the pretrained Inception network. Defaults to None.

  • resize_method (str) – Resize method. If resize is False, this will be ignored. Defaults to 'bicubic'.

  • use_pillow_resize (bool) – Whether to use bicubic interpolation with Pillow's backend. If set to True, the evaluation process may be slightly slower but yields a more accurate IS result. Defaults to True.

  • fake_key (Optional[str]) – Key used to get fake images from the output dict. Defaults to None.

  • need_cond_input (bool) – If True, the sampler will return conditional inputs randomly sampled from the original dataset. This requires the dataset to implement get_data_info, and the field gt_label must be contained in the return value of get_data_info. Note that, for unconditional models, setting need_cond_input to True may influence the evaluation results, since the conditional inputs are then sampled from the dataset distribution instead of the uniform distribution. Defaults to False.

  • sample_model (str) – Sampling mode for the generative model. Supports 'orig' and 'ema'. Defaults to 'orig'.

  • collect_device (str, optional) – Device name used for collecting results from different ranks during distributed training. Must be 'cpu' or 'gpu'. Defaults to 'cpu'.

  • prefix (str, optional) – The prefix added to the metric names to disambiguate homonymous metrics of different evaluators. If prefix is not provided in the argument, self.default_prefix will be used instead. Defaults to None.
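
In practice this metric is usually configured through an OpenMMLab-style config rather than instantiated by hand. A rough sketch follows; the field values (in particular the local inception_path) are illustrative, and the registry key may be the class name or a short alias:

    # Illustrative evaluation config; adjust paths and values to your setup.
    metrics = [
        dict(
            type='InceptionScore',
            fake_nums=50000,
            splits=10,
            inception_style='StyleGAN',
            # Local copy of Tero's Inception V3 (see URL above); set to None
            # to fall back to the torchvision Inception V3.
            inception_path='work_dirs/cache/inception-2015-12-05.pt',  # hypothetical path
            sample_model='orig',
            prefix='IS-50k',
        )
    ]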

name = 'IS'[source]
pil_resize_method_mapping[source]
prepare(module: torch.nn.Module, dataloader: torch.utils.data.dataloader.DataLoader) → None[source]

Prepare the pre-calculated items of the metric. Defaults to doing nothing.

Parameters
  • module (nn.Module) – Model to evaluate.

  • dataloader (DataLoader) – Dataloader for the real images.

_load_inception(inception_style: str, inception_path: Optional[str]) → Tuple[torch.nn.Module, str][source]

Load the pretrained Inception network.

Parameters
  • inception_style (str) – Target style of the Inception network to load.

  • inception_path (Optional[str]) – The path to the Inception network.

Returns

The actually loaded Inception network and the corresponding style.

Return type

Tuple[nn.Module, str]

_preprocess(image: torch.Tensor) → torch.Tensor[source]

Preprocess an image before passing it to the Inception network. Preprocessing consists of channel conversion and resizing.

Parameters

image (Tensor) – Image tensor before preprocessing.

Returns

Image tensor after resizing and channel conversion (if needed).

Return type

Tensor
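
A rough idea of what this preprocessing amounts to, assuming an (N, C, H, W) float tensor and the default bicubic resize; the channel ordering below is an assumption, and the real method additionally honours resize_method and use_pillow_resize:

    import torch
    import torch.nn.functional as F

    def preprocess_sketch(image: torch.Tensor) -> torch.Tensor:
        # Assumed BGR input converted to RGB; the actual channel convention
        # depends on how mmagic packs the data samples.
        image = image[:, [2, 1, 0], ...]
        # Bicubic resize to the 299x299 input size expected by Inception V3.
        return F.interpolate(image.float(), size=(299, 299),
                             mode='bicubic', align_corners=False)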

process(data_batch: dict, data_samples: Sequence[dict]) → None[source]

Process one batch of data samples and predictions. The processed results should be stored in self.fake_results, which will be used to compute the metrics when all batches have been processed.

Parameters
  • data_batch (dict) – A batch of data from the dataloader.

  • data_samples (Sequence[dict]) – A batch of outputs from the model.

compute_metrics(fake_results: list) → dict[source]

Compute the results of the Inception Score metric.

Parameters

fake_results (list) – List of image features of the fake images.

Returns

A dict of the computed IS metric and its standard error.

Return type

dict

class mmagic.evaluation.metrics.inception_score.TransIS(fake_nums: int = 50000, resize: bool = True, splits: int = 10, inception_style: str = 'StyleGAN', inception_path: Optional[str] = None, resize_method='bicubic', use_pillow_resize: bool = True, fake_key: Optional[str] = None, sample_model='ema', collect_device: str = 'cpu', prefix: str = None)[source]

Bases: InceptionScore

IS (Inception Score) metric. The images are split into groups, the inception score is calculated on each group of images, and the mean and standard deviation of the scores are reported. Calculating the inception score on a group of images involves first using the Inception V3 model to compute the conditional probability for each image (p(y|x)). The marginal probability is then calculated as the average of the conditional probabilities of the images in the group (p(y)). The KL divergence for each image is then computed as the conditional probability multiplied by the log of the conditional probability minus the log of the marginal probability. The KL divergence is summed over all classes and averaged over all images in the group, and the exponent of the result gives the group's score.

Ref: https://github.com/sbarratt/inception-score-pytorch/blob/master/inception_score.py # noqa

Note that we highly recommend that users download the Inception V3 script module from the address below and set inception_path to the local path. If it is not given, we will use the Inception V3 from the PyTorch model zoo. However, this may bring significant differences in the final results.

Tero’s Inception V3: https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/inception-2015-12-05.pt # noqa

Parameters
  • fake_nums (int) – Number of generated images needed for the metric.

  • resize (bool, optional) – Whether to resize images to 299x299. Defaults to True.

  • splits (int, optional) – The number of groups. Defaults to 10.

  • inception_style (str) – The Inception style to load. If the given style cannot be loaded successfully, a valid one will be loaded instead. Defaults to 'StyleGAN'.

  • inception_path (str, optional) – Path to the pretrained Inception network. Defaults to None.

  • resize_method (str) – Resize method. If resize is False, this will be ignored. Defaults to 'bicubic'.

  • use_pillow_resize (bool) – Whether to use bicubic interpolation with Pillow's backend. If set to True, the evaluation process may be slightly slower but yields a more accurate IS result. Defaults to True.

  • fake_key (Optional[str]) – Key used to get fake images from the output dict. Defaults to None.

  • sample_model (str) – Sampling mode for the generative model. Supports 'orig' and 'ema'. Defaults to 'ema'.

  • collect_device (str, optional) – Device name used for collecting results from different ranks during distributed training. Must be 'cpu' or 'gpu'. Defaults to 'cpu'.

  • prefix (str, optional) – The prefix added to the metric names to disambiguate homonymous metrics of different evaluators. If prefix is not provided in the argument, self.default_prefix will be used instead. Defaults to None.
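
As with InceptionScore, TransIS is typically selected via a config entry; the main practical difference is that it defaults to sampling from the EMA branch of the model. A hedged sketch, with illustrative field values:

    # Illustrative config; 'ema' matches TransIS's default sample_model.
    metrics = [
        dict(
            type='TransIS',
            fake_nums=50000,
            inception_style='StyleGAN',
            sample_model='ema',
            prefix='IS-Full-50k',
        )
    ]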

get_metric_sampler(model: torch.nn.Module, dataloader: torch.utils.data.dataloader.DataLoader, metrics: List[mmagic.evaluation.metrics.base_gen_metric.GenerativeMetric]) → torch.utils.data.dataloader.DataLoader[source]

Get sampler for normal metrics. Directly returns the dataloader.

Parameters
  • model (nn.Module) – Model to evaluate.

  • dataloader (DataLoader) – Dataloader for real images.

  • metrics (List['GenMetric']) – Metrics with the same sample mode.

Returns

Default sampler for normal metrics.

Return type

DataLoader
