mmagic.evaluation.metrics.ppl
Module Contents¶
Classes¶
PerceptualPathLength: Perceptual path length.
Functions¶
slerp(a, b, percent): Spherical linear interpolation between two unnormalized vectors.
- mmagic.evaluation.metrics.ppl.slerp(a, b, percent)[source]¶
Spherical linear interpolation between two unnormalized vectors.
- Parameters
a (Tensor) – Tensor with shape [N, C].
b (Tensor) – Tensor with shape [N, C].
percent (float|Tensor) – A float, or a tensor whose shape is broadcastable to the shape of the input tensors.
- Returns
Spherical linear interpolation result with shape [N, C].
- Return type
Tensor
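The interpolation above can be sketched in NumPy. This is an illustration of the spherical-interpolation math (normalize both inputs, rotate along the great circle by `percent` of the angle between them), not the library's own torch implementation; note that the result is unit-norm:

```python
import numpy as np

def slerp(a, b, percent):
    """Spherical linear interpolation between two unnormalized vectors.

    a, b: arrays of shape [N, C]; percent: float or broadcastable array.
    Returns a unit-norm array of shape [N, C].
    """
    # Normalize both endpoints onto the unit sphere.
    a_n = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b_n = b / np.linalg.norm(b, axis=-1, keepdims=True)
    # Angle between the endpoints (clip to guard against rounding error).
    d = np.clip((a_n * b_n).sum(-1, keepdims=True), -1.0, 1.0)
    p = percent * np.arccos(d)
    # Orthonormal direction from a_n toward b_n.
    c = b_n - d * a_n
    c_n = c / np.linalg.norm(c, axis=-1, keepdims=True)
    return a_n * np.cos(p) + c_n * np.sin(p)
```

For example, interpolating halfway between `[2, 0]` and `[0, 3]` yields the unit vector at 45 degrees, independent of the input magnitudes.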
- class mmagic.evaluation.metrics.ppl.PerceptualPathLength(fake_nums: int, real_nums: int = 0, fake_key: Optional[str] = None, real_key: Optional[str] = 'gt_img', need_cond_input: bool = False, sample_model: str = 'ema', collect_device: str = 'cpu', prefix: Optional[str] = None, crop=True, epsilon=0.0001, space='W', sampling='end', latent_dim=512)[source]¶
Bases:
mmagic.evaluation.metrics.base_gen_metric.GenerativeMetric
Perceptual path length.
Measure the difference between consecutive images (their VGG16 embeddings) when interpolating between two random inputs. Drastic changes mean that multiple features have changed together and that they might be entangled.
Ref: https://github.com/rosinality/stylegan2-pytorch/blob/master/ppl.py
- Parameters
num_images (int) – The number of evaluated generated samples.
image_shape (tuple, optional) – Image shape in order “CHW”. Defaults to None.
crop (bool, optional) – Whether crop images. Defaults to True.
epsilon (float, optional) – Epsilon parameter for path sampling. Defaults to 1e-4.
space (str, optional) – Latent space. Defaults to ‘W’.
sampling (str, optional) – Sampling mode, whether sampling in full path or endpoints. Defaults to ‘end’.
latent_dim (int, optional) – Latent dimension of input noise. Defaults to 512.
need_cond_input (bool) – If True, the sampler will return conditional inputs randomly sampled from the original dataset. This requires the dataset to implement get_data_info, and the field gt_label must be contained in its return value. Note that for unconditional models, setting need_cond_input to True may influence the evaluation results, since the conditional inputs are then sampled from the dataset distribution rather than from a uniform distribution. Defaults to False.
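A hypothetical configuration fragment using the constructor parameters listed above (the field names follow the signature shown; the exact way a runner consumes this dict may differ across mmagic versions):

```python
# Hypothetical PPL metric config; keys mirror the constructor signature above.
ppl_metric = dict(
    type='PerceptualPathLength',
    fake_nums=50000,      # number of generated samples to evaluate
    sample_model='ema',   # evaluate the EMA copy of the generator
    space='W',            # measure path length in the W latent space
    sampling='end',       # perturb only at path endpoints
    epsilon=1e-4,         # step size for path sampling
    latent_dim=512)
```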
- process(data_batch: dict, data_samples: Sequence[dict]) → None[source]¶
Process one batch of data samples and predictions. The processed results should be stored in self.fake_results, which will be used to compute the metrics when all batches have been processed.
- Parameters
data_batch (dict) – A batch of data from the dataloader.
data_samples (Sequence[dict]) – A batch of outputs from the model.
- _compute_distance(images)[source]¶
Feed data to the metric.
- Parameters
images (Tensor) – Input tensor.
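The core of the PPL distance can be sketched as follows. This is an illustration of the scaling used in the stylegan2-pytorch reference script (squared embedding distance between the two images of a pair, divided by epsilon squared so it approximates a path-length derivative); the names `compute_distance`, `feat_a`, and `feat_b` are placeholders, not the method's actual internals, which also involve VGG16 feature extraction and optional cropping:

```python
import numpy as np

def compute_distance(feat_a, feat_b, epsilon=1e-4):
    """Squared distance between perceptual embeddings of consecutive images,
    rescaled by 1 / epsilon**2 (finite-difference approximation of the
    path-length derivative)."""
    diff = feat_a - feat_b
    return (diff ** 2).sum(axis=-1) / epsilon ** 2
```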
- compute_metrics(fake_results: list) → dict[source]¶
Summarize the results.
- Returns
Summarized results.
- Return type
dict | list
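The summarization step can be sketched as below. The stylegan2-pytorch reference script discards outlier distances outside a percentile band before averaging; the function name and the exact band here are illustrative assumptions, not the method's verified internals:

```python
import numpy as np

def summarize_ppl(distances, lo=1.0, hi=99.0):
    """Average path-length distances after clipping outliers to the
    [lo, hi] percentile band (illustrative sketch)."""
    d = np.asarray(distances, dtype=np.float64)
    lo_v, hi_v = np.percentile(d, [lo, hi])
    mask = (d >= lo_v) & (d <= hi_v)
    return float(d[mask].mean())
```

Filtering before the mean keeps a handful of degenerate interpolation pairs from dominating the reported score.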
- get_metric_sampler(model: torch.nn.Module, dataloader: torch.utils.data.dataloader.DataLoader, metrics: list)[source]¶
Get sampler for generative metrics. Returns a dummy iterator, each iteration of which yields a dict containing the batch size and sample mode used to generate images.
- Parameters
model (nn.Module) – Model to evaluate.
dataloader (DataLoader) – Dataloader for real images. Used to determine the batch size when generating fake images.
metrics (list) – Metrics with the same sampler mode.
- Returns
Sampler for generative metrics.
- Return type
dummy_iterator