mmagic.evaluation.metrics.base_sample_wise_metric¶
Evaluation metrics based on each sample.
Module Contents¶
Classes¶
BaseSampleWiseMetric | Base sample wise metric of edit.
- class mmagic.evaluation.metrics.base_sample_wise_metric.BaseSampleWiseMetric(gt_key: str = 'gt_img', pred_key: str = 'pred_img', mask_key: Optional[str] = None, scaling=1, device='cpu', collect_device: str = 'cpu', prefix: Optional[str] = None)[source]¶
Bases:
mmengine.evaluator.BaseMetric
Base sample wise metric of edit.
Subclasses must provide a process function.
- Parameters
gt_key (str) – Key of ground-truth. Default: ‘gt_img’
pred_key (str) – Key of prediction. Default: ‘pred_img’
mask_key (str, optional) – Key of mask. If mask_key is None, the metric is computed over all regions. Default: None
collect_device (str) – Device name used for collecting results from different ranks during distributed training. Must be ‘cpu’ or ‘gpu’. Defaults to ‘cpu’.
device (str) – Device used to place torch tensors to compute metrics. Defaults to ‘cpu’.
prefix (str, optional) – The prefix that will be added in the metric names to disambiguate homonymous metrics of different evaluators. If prefix is not provided in the argument, self.default_prefix will be used instead. Default: None
scaling (float, optional) – Scaling factor for final metric. E.g. scaling=100 means the final metric will be amplified by 100 for output. Default: 1
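To make the constructor's responsibilities concrete, here is a minimal stdlib-only stand-in for the class (no mmengine or torch required). The parameter names mirror the documented API; the class name `ToySampleWiseMetric`, the attribute storage, and the `collect_device` check are illustrative assumptions, not the actual mmagic implementation.

```python
from typing import List, Optional

class ToySampleWiseMetric:
    """Simplified stand-in for BaseSampleWiseMetric (illustration only)."""

    def __init__(self, gt_key: str = 'gt_img', pred_key: str = 'pred_img',
                 mask_key: Optional[str] = None, scaling: float = 1,
                 device: str = 'cpu', collect_device: str = 'cpu',
                 prefix: Optional[str] = None) -> None:
        # collect_device must be 'cpu' or 'gpu' per the documented contract.
        if collect_device not in ('cpu', 'gpu'):
            raise ValueError("collect_device must be 'cpu' or 'gpu'")
        self.gt_key = gt_key          # where ground truth lives in a sample
        self.pred_key = pred_key      # where the prediction lives
        self.mask_key = mask_key      # None -> evaluate all regions
        self.scaling = scaling        # amplifies the final metric value
        self.device = device          # where tensors would be placed
        self.collect_device = collect_device
        self.prefix = prefix          # disambiguates homonymous metrics
        self.results: List[float] = []  # filled by a subclass's `process`
```

A subclass would add a `process` method that appends one value per sample to `self.results`.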
- compute_metrics(results: List)[source]¶
Compute the metrics from processed results.
- Parameters
results (List) – The processed results of each batch.
- Returns
The computed metrics. The keys are the names of the metrics, and the values are corresponding results.
- Return type
Dict
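For a sample-wise metric, compute_metrics typically reduces the per-sample values collected by process to a single average and applies the scaling factor. A standalone sketch of that behavior follows; the free-function form, the metric_name parameter, and the plain-mean reduction are assumptions for illustration, not mmagic's exact code.

```python
from typing import Dict, List

def compute_metrics(results: List[float], metric_name: str = 'MAE',
                    scaling: float = 1) -> Dict[str, float]:
    # Average the per-sample values gathered by `process`, then scale.
    average = sum(results) / len(results)
    return {metric_name: scaling * average}
```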
- process(data_batch: Sequence[dict], data_samples: Sequence[dict]) → None[source]¶
Process one batch of data and predictions.
- Parameters
data_batch (Sequence[dict]) – A batch of data from the dataloader.
data_samples (Sequence[dict]) – A batch of outputs from the model.
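A typical process implementation pulls the ground truth and prediction out of each sample under gt_key/pred_key, computes one per-sample value, and appends it to the running results list. The sketch below uses mean absolute error on plain Python lists as a stand-in; the free-function form and MAE choice are illustrative assumptions.

```python
from typing import List, Sequence

def process(data_samples: Sequence[dict], results: List[float],
            gt_key: str = 'gt_img', pred_key: str = 'pred_img') -> None:
    # Append one per-sample value (here: mean absolute error) to `results`.
    for sample in data_samples:
        gt, pred = sample[gt_key], sample[pred_key]
        mae = sum(abs(g - p) for g, p in zip(gt, pred)) / len(gt)
        results.append(mae)
```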
- evaluate() → dict[source]¶
Evaluate the model performance of the whole dataset after processing all batches.
- Parameters
size (int) – Length of the entire validation dataset. When batch size > 1, the dataloader may pad some data samples to make sure all ranks have the same length of dataset slice. The collect_results function will drop the padded data based on this size.
- Returns
Evaluation metrics dict on the val dataset. The keys are the names of the metrics, and the values are corresponding results.
- Return type
dict
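The padding-drop behavior described above can be illustrated in isolation: keep only the first size per-sample results before reducing, so duplicates padded onto the last batch do not skew the average. The explicit size argument and the plain-mean reduction here are assumptions for illustration.

```python
from typing import Dict, List

def evaluate(results: List[float], size: int,
             metric_name: str = 'MAE') -> Dict[str, float]:
    # Distributed dataloaders may pad the last batch so every rank sees
    # the same number of samples; keep only the first `size` results.
    kept = results[:size]
    return {metric_name: sum(kept) / len(kept)}
```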
- get_metric_sampler(model: torch.nn.Module, dataloader: torch.utils.data.dataloader.DataLoader, metrics) → torch.utils.data.dataloader.DataLoader[source]¶
Get sampler for normal metrics. Directly returns the dataloader.
- Parameters
model (nn.Module) – Model to evaluate.
dataloader (DataLoader) – Dataloader for real images.
metrics (List['GenMetric']) – Metrics with the same sample mode.
- Returns
Default sampler for normal metrics.
- Return type
DataLoader
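Since the docstring says this method directly returns the dataloader, its behavior reduces to the identity over the dataloader argument; a sketch of that contract (the untyped signature here is a simplification):

```python
def get_metric_sampler(model, dataloader, metrics):
    # Sample-wise metrics iterate the dataset as-is, so the original
    # dataloader already serves as the sampler.
    return dataloader
```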